Dataset columns (type and value/length range per column):

| column | dtype | min | max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
1,836,899
Finding the Right Balance: Clean Architecture and Entity Framework in Practice
In the world of software development, we often find ourselves grappling with tough decisions about...
0
2024-04-28T21:18:21
https://dev.to/shahabfar/finding-the-right-balance-clean-architecture-and-entity-framework-in-practice-2bip
cleanarchitecture, entityframework
In the world of software development, we often find ourselves grappling with tough decisions about architecture. One common dilemma is how to integrate Entity Framework, a powerful data access tool, within the principles of Clean Architecture. Clean Architecture is all about creating systems that are easy to test, maintain, and understand. It encourages us to keep our core application logic free from any ties to specific technologies or frameworks. But when we bring in something like Entity Framework for database work, things can get a bit tricky. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p5u257sx5m1jgx775vy5.png) Some folks advocate for abstracting our interactions with Entity Framework. This means creating interfaces in our application code and implementing them with Entity Framework in the infrastructure layer. It’s like creating a middleman that shields our core code from direct contact with Entity Framework. This approach can be beneficial in big projects or when we anticipate changing our database technology down the line. Others argue that Entity Framework already provides plenty of tools for working with databases. They say that adding more layers of abstraction just adds complexity without real benefits, especially for smaller projects or teams that are already comfortable with Entity Framework. So, what’s the right approach? Well, it depends on your specific situation. If you’re working on a big project with a diverse team or foresee changes in your database setup, abstraction might be the way to go. But if you’re in a smaller team and Entity Framework is already your go-to tool, keeping things simple might be the better choice. Ultimately, it’s about finding the right balance between following architectural principles and being practical. Whether you choose to abstract your database interactions or stick with Entity Framework’s native features, the goal remains the same: building software that’s easy to maintain, test, and adapt to changing needs.
shahabfar
1,837,260
Revive Gummies: Unlocking the Natural Power of CBD for Wellness
In recent years, the popularity of CBD products has surged, with consumers seeking natural...
0
2024-04-29T08:26:19
https://dev.to/nikkisam/revive-gummies-unlocking-the-natural-power-of-cbd-for-wellness-4mga
In recent years, the popularity of CBD products has surged, with consumers seeking natural alternatives to traditional medications for various health concerns. Among these products, CBD-infused gummies have gained significant attention for their convenience, effectiveness, and delicious flavors. One such standout in the market is Revive Gummies, offering a unique blend of quality CBD and targeted ingredients to promote overall wellness. Revive Gummies is a carefully crafted CBD product designed to provide users with a convenient and enjoyable way to incorporate CBD into their daily routine. Made with premium CBD extract derived from organically grown hemp plants, these gummies offer a potent dose of cannabidiol along with other beneficial compounds found in the hemp plant. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ljga8taxh668eiimo5h3.jpg) **The Power of CBD** CBD, or cannabidiol, is a non-intoxicating compound found in hemp and cannabis plants. Unlike THC, another well-known cannabinoid, CBD does not produce psychoactive effects. Instead, it interacts with the body's endocannabinoid system (ECS), which plays a crucial role in regulating various physiological functions, including mood, sleep, appetite, and immune response. Research suggests that CBD may offer a wide range of potential health benefits, including: **Pain Relief:** CBD has been studied for its analgesic properties, with some evidence suggesting it may help alleviate chronic pain conditions such as arthritis and neuropathy. **Stress and Anxiety Reduction:** Many users report feeling calmer and more relaxed after consuming CBD, making it a popular choice for managing stress and anxiety. **Improved Sleep Quality:** CBD may promote better sleep by addressing underlying factors such as anxiety, pain, and insomnia. **Anti-inflammatory Effects:** CBD exhibits anti-inflammatory properties, which could benefit individuals with inflammatory conditions like arthritis or autoimmune disorders. **The Ingredients** What sets Revive Gummies apart is their carefully selected blend of ingredients, designed to enhance the effects of CBD and promote overall wellness. In addition to premium CBD extract, these gummies may contain: **Vitamins and Minerals:** Certain formulations may include vitamins and minerals such as vitamin D, vitamin B12, and zinc, which play essential roles in supporting overall health and immunity. **Herbal Extracts:** Some Revive Gummies incorporate herbal extracts like turmeric or ginger, known for their anti-inflammatory and antioxidant properties. **Adaptogens: **Adaptogenic herbs like ashwagandha or Rhodiola may be included to help the body better cope with stress and promote balance. **Natural Flavors and Sweeteners:** Revive Gummies come in a variety of delicious flavors, often enhanced with natural fruit extracts and sweetened with organic cane sugar or alternative sweeteners like stevia. **Conclusion** Revive Gummies offer a convenient and enjoyable way to experience the potential benefits of CBD for overall wellness. With their premium ingredients, delicious flavors, and commitment to quality, they are a standout choice for individuals seeking a natural approach to health and vitality. However, it's crucial to remember that CBD affects everyone differently, so it's essential to start with a low dose and listen to your body's response. 
Always consult with a healthcare professional before incorporating CBD products into your routine, especially if you have any underlying health conditions or are taking medications. **Frequently Asked Questions ** **Are Revive Gummies Legal?** Yes, Revive Gummies are made from hemp-derived CBD, which is legal in the United States under the 2018 Farm Bill as long as it contains less than 0.3% THC. **How Should I Take Revive Gummies?** Simply chew and swallow the recommended dosage of Revive Gummies according to the instructions on the packaging. Start with a low dose and gradually increase as needed based on your individual response. **Are There Any Side Effects?** While CBD is generally well-tolerated, some users may experience mild side effects such as drowsiness, dry mouth, or changes in appetite. It's essential to consult with a healthcare professional before incorporating CBD products into your routine, especially if you're taking medications or have underlying health conditions. **How Long Does It Take for Revive Gummies to Work?** The onset and duration of effects can vary depending on factors such as dosage, individual metabolism, and the method of consumption. Generally, you may start to feel the effects of Revive Gummies within 30 minutes to an hour, with the effects lasting several hours. **Can I Take Revive Gummies with Other Medications?** It's essential to consult with a healthcare professional before combining CBD products with other medications, as CBD may interact with certain drugs, particularly those that are metabolized by the liver.
nikkisam
1,837,307
Maximizing Efficiency with SAP PP: A Comprehensive Guide for Manufacturers
In the dynamic landscape of manufacturing, efficiency is the cornerstone of success. Every production...
0
2024-04-29T09:48:28
https://dev.to/mylearnnest/maximizing-efficiency-with-sap-pp-a-comprehensive-guide-for-manufacturers-3n90
In the dynamic landscape of manufacturing, efficiency is the cornerstone of success. Every production process relies on streamlined operations to meet demand, control costs, and stay competitive. [SAP Production Planning (SAP PP)](https://www.sapmasters.in/sap-pp-training-in-bangalore/) stands as a robust solution designed to optimize manufacturing processes, enhance productivity, and drive business growth. In this comprehensive guide, we delve into the intricacies of SAP PP, exploring its functionalities, benefits, and implementation strategies for manufacturers aiming to maximize efficiency. **Understanding SAP PP:** SAP PP is a module within the [SAP ERP (Enterprise Resource Planning)](https://www.sapmasters.in/sap-pp-training-in-bangalore/) suite, dedicated to managing and optimizing production processes. It provides comprehensive tools for planning, execution, and monitoring of manufacturing activities, enabling organizations to align production with demand while maintaining cost-effectiveness and quality standards. **Key Features and Functionalities:** **Master Data Management:** SAP PP centralizes master data related to materials, resources, and production processes, ensuring consistency and accuracy across the [manufacturing landscape](https://www.sapmasters.in/sap-pp-training-in-bangalore/). By maintaining a single source of truth for essential data, organizations can eliminate redundancies and errors, thereby enhancing operational efficiency. **Production Planning:** The module facilitates demand-driven production planning, allowing organizations to create detailed production schedules based on sales forecasts, customer orders, and inventory levels. Through advanced algorithms and forecasting techniques, SAP PP helps manufacturers optimize production volumes, [minimize stockouts](https://www.sapmasters.in/sap-pp-training-in-bangalore/), and reduce lead times. **Capacity Planning:** Effective capacity planning is crucial for balancing production resources with demand. SAP PP enables organizations to analyze resource capacities, identify bottlenecks, and optimize production schedules to maximize throughput while minimizing idle time and [overutilization](https://www.sapmasters.in/sap-pp-training-in-bangalore/). **Shop Floor Control:** Real-time visibility into shop floor activities is essential for monitoring production progress and addressing issues promptly. SAP PP offers comprehensive shop floor control functionalities, including order tracking, work center management, and production reporting, empowering organizations to optimize resource utilization and maintain production schedules. **Material Requirements Planning (MRP):** SAP PP integrates MRP functionalities to ensure timely availability of materials for production. By analyzing demand, inventory levels, and lead times, organizations can generate accurate procurement plans, optimize inventory levels, and minimize stockouts, thereby improving overall supply chain efficiency. **Quality Management:** Quality control is integral to manufacturing operations, ensuring that products meet predefined quality standards. SAP PP [incorporates robust quality management functionalities](https://www.sapmasters.in/sap-pp-training-in-bangalore/), enabling organizations to define quality criteria, conduct inspections, and track quality-related issues throughout the production process, thereby enhancing product quality and customer satisfaction. 
**Benefits of SAP PP:** **Improved Efficiency:** By streamlining production processes, optimizing resource utilization, and minimizing idle time, SAP PP helps organizations improve overall operational efficiency and reduce production costs. **Enhanced Visibility:** [Real-time monitoring](https://www.sapmasters.in/sap-pp-training-in-bangalore/) and reporting capabilities provided by SAP PP offer organizations unparalleled visibility into production activities, enabling proactive decision-making and timely intervention to address issues. **Better Resource Utilization:** SAP PP enables organizations to optimize resource capacities, balance workloads, and allocate resources effectively, thereby maximizing throughput and minimizing production bottlenecks. **Increased Customer Satisfaction:** By aligning production with demand, minimizing lead times, and ensuring product quality, SAP PP helps organizations meet customer expectations and enhance overall satisfaction levels. **Strategic Decision Support:** SAP PP provides comprehensive analytics and reporting tools, enabling [organizations](https://www.sapmasters.in/sap-pp-training-in-bangalore/) to gain actionable insights into production performance, identify trends, and make informed strategic decisions to drive business growth. Implementation Strategies: **Define Objectives:** Clearly define the objectives and goals of implementing SAP PP, aligning them with the organization's strategic priorities and operational requirements. **Engage Stakeholders:** Involve key stakeholders, including production managers, IT personnel, and end-users, in the implementation process to ensure buy-in and alignment with organizational goals. **Customization and Configuration:** Customize and configure SAP PP to align with specific business processes and requirements, ensuring seamless integration with existing systems and workflows. **Training and Change Management:** Provide comprehensive training and support to end-users to facilitate a smooth transition to SAP PP, emphasizing the benefits and capabilities of the system. **Continuous Improvement:** Continuously monitor and evaluate the performance of SAP PP, soliciting feedback from [end-users and stakeholders](https://www.sapmasters.in/sap-pp-training-in-bangalore/), and implementing enhancements and optimizations as needed to maximize value. In conclusion, SAP PP serves as a powerful tool for manufacturers seeking to optimize production processes, enhance efficiency, and drive business growth. By leveraging its robust functionalities and implementation best practices, organizations can streamline operations, [improve resource utilization](https://www.sapmasters.in/sap-pp-training-in-bangalore/), and stay competitive in today's dynamic marketplace.
mylearnnest
1,837,709
Vite vs Next.js: Which one is right for you?
Vite and Next.js are both among the most popular modern development tools right now. They are both great depending...
0
2024-04-29T16:21:34
https://dev.to/jenesh/vite-vs-nextjs-which-one-is-right-for-you-53nn
nextjs, vite, webdev, javascript
[Vite](https://vitejs.dev/) and [Next.js](https://nextjs.org/) are both among the most popular modern development tools right now. They are both great depending on your use case, so we’ll discuss four areas: architecture, main features, developer experience, and production readiness. After learning about these, we’ll have a better idea of which one is best for your project. ## **1. Architecture** - **Vite**: Vite is a build tool that aims to provide a faster and leaner development experience. Using ES modules, you can skip bundling during development, allowing for faster module serving. It is powered by Rollup for production builds but uses native ES modules in development for rapid server start and hot module replacement (HMR). - **Next.js**: Next.js is a batteries-included React framework that provides a range of features for building server-rendered applications, static websites, and more. You can do both client-side and server-side rendering, with features such as server-side rendering (SSR) and static site generation (SSG). Note: Next.js uses Webpack as its module bundler. ## **2. Main Features** - **Vite**: - Fast development startup and updates through native ES module loading. - Plugins available for features like legacy browser support, asset import, and more. - Simple configuration with sensible defaults. - Support for frameworks like Vue, React, Svelte, etc. - Framework agnostic - Minimal and customizable, smoother learning curve - **Next.js**: - React framework - Advanced routing and nested layouts based on the file system - Built-in optimizations for Images, Scripts, Core Web Vitals, and more - Automatic code splitting for faster page loads. - Built-in CSS and Sass support, with support for CSS Modules. - API routes to build API endpoints within Next.js projects. - Image optimization via the built-in Image component. - Incremental Static Regeneration (ISR) and on-demand revalidation. ## **3. Development Experience** - **Vite**: Designed for fast development builds, focusing on optimizing the developer experience with instant server start and hot module replacement. - **Next.js**: Focuses on productivity and scalability, providing a lot of built-in functionality (like hybrid static & server rendering) that can be crucial for complex applications. Full-stack solution. ## **4. Production Readiness** - **Vite**: While primarily focused on improving the development experience, Vite produces optimized production builds using Rollup. - **Next.js**: Highly optimized for production from the start, with features tailored for performance in real-world scenarios, including extensive support for SEO and server-side capabilities. Note: Deployment to [Vercel](https://vercel.com/) is free and comes with additional free tooling such as website analytics and more. ## **5. Use Cases** - **Vite**: Best for projects where you want a fast development server and are comfortable handling some aspects of your architecture separately (like SSR if using React). - **Next.js**: Ideal for projects that benefit from server-side rendering, static site generation, or need built-in API handling capabilities, especially when SEO and performance are critical. All-in-one solution. ## **6. 
Community and Ecosystem** - **Vite**: Rapidly growing community, with increasing plugin ecosystem and integration examples. - **Next.js**: Established community, extensive documentation, and a wide range of plugins and integrations from both Vercel and third parties. ## Verdict: **Vite:** If you are looking for a fast and efficient build tool and development server, then Vite is a great option. It is also a good choice if you want to have more control over the development process. **Next.js**: If you are looking for a React framework that offers a number of features for building server-rendered and statically generated web applications, then Next.js is a good choice. It is also a good choice if you want to use a framework with a large ecosystem and a large community. Next also is better for larger applications and is better for scalability. ## Summary In summary, choose Vite if you prioritize a quick setup and a fast development experience with fewer out-of-the-box features. Opt for Next.js 14 if you need a more feature-complete framework that supports complex scenarios like SSR, SSG, and API routes, all ready from the get-go. --- ## 🚨 Do you need a Portfolio Website? 🚨 With **[MyDevPage](https://mydevpa.ge/)** you can build your own portfolio website in 1 minute. You need to focus on building great projects and enhancing your skills rather than wasting your time in building and designing a portfolio website from scratch. MyDevPage handles everything: - Update easily on the go from your phone - Beautiful Portfolio Website - Upload your resume easily - Useful Website Analytics - Simple Customization - Add Custom Domain - Control over SEO - Contact Form Try it out today (it's Free!) 👉 **[MyDevPage](https://mydevpa.ge/)** 👈 ![MyDevPa.ge](//images.ctfassets.net/vlpuprt2y2ne/6oOJjFtSMHjgeoNhDW1IOm/4a4cb3c8f946e7e8056c8ee122c406a4/image.png)
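As a concrete illustration of the comparison above: Vite's "simple configuration with sensible defaults" point can be seen in how little a freshly scaffolded project (e.g. via `npm create vite@latest`) actually needs to configure. The sketch below is not from the original post, and the React plugin is an assumed framework choice.

```js
// vite.config.js: a minimal configuration. Vite infers most settings,
// so only the framework plugin needs to be declared explicitly.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
});
```

By contrast, a new Next.js project scaffolded with `npx create-next-app@latest` ships with routing, bundling, and build configuration already wired together, which reflects the "batteries included" framing used above.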
jenesh
1,838,172
Find Out How To Buy Or Sell A Home With Jd Puri Realtors
Understanding the Real Estate Market The real estate market is ever-evolving, influenced...
0
2024-04-30T05:54:50
https://dev.to/jaideeppuri/find-out-how-to-buy-or-sell-a-home-with-jd-puri-realtors-5a2l
jdpurirealtors
## Understanding the Real Estate Market The real estate market is ever-evolving, influenced by various factors such as economic conditions, interest rates, and consumer behavior. Understanding these dynamics is crucial whether you're buying or selling a home with [JD Puri Realtors](https://www.quora.com/profile/Jd-Puri-Realtor). In recent years, the market has seen a trend towards urbanization, with more people preferring city living for its convenience and amenities. Additionally, factors like job growth and infrastructure development play significant roles in shaping local housing markets. ## Steps to Buying a Home with JD Puri Realtors **Preparing Your Finances** Before diving into the home-buying process, it's essential to assess your financial situation. This includes determining your budget, getting pre-approved for a mortgage, and saving for a down payment. JD Puri Realtors can assist you in navigating these financial aspects, ensuring you're well-prepared to make a purchase. **Finding the Right Property** With JD Puri Realtors, you gain access to a wide range of properties, including exclusive listings that may not be available elsewhere. Their team will work closely with you to understand your needs and preferences, helping you find the perfect home that aligns with your lifestyle and budget. **Making an Offer** Once you've found the ideal property, JD Puri Realtors will guide you through the offer-making process. They'll leverage their negotiation skills to ensure you get the best possible deal while also advising you on any contingencies or clauses to include in the offer. ## Benefits of Using JD Puri Realtors **Expert Guidance** JD Puri Realtors boasts a team of seasoned professionals with in-depth knowledge of the local real estate market. Whether you're buying or selling a home, their expertise can streamline the process and provide you with peace of mind. **Access to Exclusive Listings** One of the key advantages of working with JD Puri Realtors is access to exclusive listings that may not be publicly available. These listings often represent unique opportunities, giving you an edge in finding the perfect property or attracting potential buyers. **Negotiation Skills** Negotiating the terms of a real estate transaction requires finesse and tact, skills which JD Puri Realtors excels in. Their agents will negotiate on your behalf, ensuring you achieve your desired outcome while maintaining positive relationships with all parties involved. **FAQ's: ## **How does JD Puri Realtors ensure confidentiality? JD Puri Realtors prioritizes client confidentiality and takes measures to safeguard sensitive information throughout the buying or selling process. Their team adheres to strict privacy protocols to ensure your data remains secure. ## What sets JD Puri Realtors apart? JD Puri Realtors stands out for their unparalleled commitment to client satisfaction, extensive market knowledge, and innovative approach to real estate. They prioritize building lasting relationships with their clients and strive to exceed expectations at every turn. ## Conclusion In summary, buying or selling a home with JD Puri Realtors offers numerous advantages, including expert guidance, access to exclusive listings, and top-notch negotiation skills. Their track record of success and satisfied clients speaks volumes about their commitment to excellence in real estate services.
jaideeppuri
1,838,260
Navigating Your Journey with LaGuardia Car Service
Best LaGuardia car service: when you choose a means of transport between cities for business or leisure, the journey should be...
0
2024-04-30T07:40:43
https://dev.to/aakriti0/navigating-your-journey-with-laguardia-car-service-jhj
Best LaGuardia car service: when you choose a means of transport between cities for business or leisure, the journey should be effortless and enjoyable. The [best LaGuardia car service](https://laguardiaairportcarservice.com/) is a gateway into New York and sits at the center of long-distance travel to and from places such as Boston. In this piece we look at the world of car services and consider the highest-rated options for convenient travel from LaGuardia to Boston and the surrounding areas. LaGuardia Airport Car Service: a dependable [LaGuardia Airport car service](https://laguardiaairportcarservice.com/) matters to anyone flying into or out of the airport, and the value of an effective mode of transportation for those travelers cannot be overstated. Whether it is a business appointment or a long-awaited vacation, having a reliable car service waiting for you at the airport can ease the chaos of air travel. Car Service Boston to NYC: the route between Boston and New York City is frequently traveled by both business travelers and tourists. Although a [car service Boston to NYC](https://laguardiaairportcarservice.com/) is not the only mode of transportation available, it offers a level of comfort and flexibility unmatched by the alternatives. Whether you are traveling for vacation or for work, want to be as efficient as possible, or simply want to take in the scenery along the way, a good car service can make the trip hassle-free. Car Service LaGuardia to Boston: traveling from LaGuardia Airport to Boston is a trip that requires careful planning, because it involves more than just keeping a schedule. [Car services from LaGuardia to Boston](https://laguardiaairportcarservice.com/) are common, and many of them offer similar service; choosing a reliable one makes a real difference in the effectiveness and efficiency of your travel. Whether you are a regular traveler or booking for the very first time, a dependable car service will keep these two busy cities within easy reach.
aakriti0
1,838,309
Deploying Forem on Render.com PromptZone.com
The journey of deploying an open-source software platform like forem can be complex and daunting, but...
0
2024-04-30T08:31:03
https://dev.to/adifsgaid/deploying-forem-on-rendercom-promptzonecom-3721
forem, deploy, webdev
The journey of deploying an open-source software platform like [forem](https://github.com/forem/forem) can be complex and daunting, but with the right tools and services, it can also be remarkably rewarding. This article details my experience deploying [Forem](https://www.forem.com/), the software behind [dev.to](https://dev.to/), on [Render.com](https://render.com/) to run [Promptzone.com](https://promptzone.com/). #### **Understanding Forem** Forem is an open-source platform designed to empower community builders to create their own spaces for discussion and connection. It's built using a robust tech stack including Ruby on Rails, JavaScript, and PostgreSQL, making it a solid choice for developers looking to customize or scale their community platforms. #### **Choosing Render.com for Deployment** Render.com emerged as an excellent choice for deploying Forem due to its simplicity and powerful features; think of it as Heroku back when it was still cheap and good. Render offers straightforward configuration, automatic scaling, and a suite of services that support web applications, background workers, cron jobs, and databases, all crucial for a Forem deployment. #### **Step-by-Step Deployment Process** 1. **Setting Up the Forem Repository**: The first step involved forking the Forem repository on GitHub. This allowed me to manage customizations and updates with ease. 2. **Configuring Render.com Services**: - **Web Service**: I configured a web service on Render for the Forem web application. This involved setting environment variables such as the database URL and the Redis instance. I then pushed a Docker image to the Docker Hub registry and deployed it as a web service. - **Background Workers**: Forem uses Sidekiq for background jobs. I set up a worker service on Render to handle these jobs, ensuring that email notifications, feed updates, and other background tasks run smoothly; here I deployed the same Docker image. 3. **Database and Redis Setup**: I used Render’s managed PostgreSQL and Redis services, which provided hassle-free maintenance and scalability for Forem's data needs. 4. **Environment and Secrets Management**: Render’s environment groups feature was invaluable for managing the application’s environment variables and secrets securely. Make sure to add all the necessary ENV secrets, otherwise it won't work (I will go into detail on everything you need in another post). #### **Going Live with PromptZone.com** After configuring and testing the Forem instance on Render.com, the final step was to go live. I also used Fastly.com as a CDN, as Forem suggests. #### **Conclusion** This is not the deployment method suggested by the Forem team; nevertheless, I found it easier and smoother than using the [selfhost deployment](https://github.com/forem/selfhost). Still, it's good to know different approaches and use the one that suits you most. If you don't have time to manage a server, a PaaS like Render does the job, and it's cheaper than Heroku at the time of this post.
adifsgaid
1,838,327
Lockable Store
Lockable Store is a trusted storage service in Dubai, offering self-storage solutions with a team of...
0
2024-04-30T08:50:08
https://dev.to/lockablestore/lockable-store-m4p
[Lockable Store](https://www.lockablestore.com/) is a trusted storage service in Dubai, offering self-storage solutions with a team of reliable professionals. The company provides secure and affordable self-storage in Dubai with exceptional customer service. The company's offerings are available for both short and long-term rentals, with all rooms secured with personal padlocks.
lockablestore
1,838,334
Implement React v18 from Scratch Using WASM and Rust - [10] Implement Update for Single Node.
Based on big-react,I am going to implement React v18 core features from scratch using WASM and...
27,011
2024-04-30T08:57:27
https://dev.to/paradeto/implement-react-v18-from-scratch-using-wasm-and-rust-10-implement-update-for-single-node-jfj
react, webassembly, rust
> Based on [big-react](https://github.com/BetaSu/big-react),I am going to implement React v18 core features from scratch using WASM and Rust. > > Code Repository:https://github.com/ParadeTo/big-react-wasm > > The tag related to this article:[v10](https://github.com/ParadeTo/big-react-wasm/tree/v10) The previous article mentioned that we haven't fully implemented the update process yet. So, in this article, we will implement it. Let's continue using the previous example: ```js function App() { const [name, setName] = useState(() => 'ayou') setTimeout(() => { setName('ayouayou') }, 1000) return ( <div> <Comp>{name}</Comp> </div> ) } ``` When we call `setName('ayouayou')`, it triggers the update process. The `setName` method is returned in the `mount_state` during the initial render. This method attaches a `Hook` node to the `memoized_state` of the current `FiberNode`. If there are multiple hooks, they form a linked list. The `Hook` node has an `update_queue`, which is clearly an update queue. It also has a `memoized_state` property that records the current state of the `Hook`. ```rust fn mount_state(initial_state: &JsValue) -> Result<Vec<JsValue>, JsValue> { // Add hook to current FiberNode memoized_state let hook = mount_work_in_progress_hook(); let memoized_state: JsValue; if initial_state.is_function() { memoized_state = initial_state .dyn_ref::<Function>() .unwrap() .call0(&JsValue::null())?; } else { memoized_state = initial_state.clone(); } hook.as_ref().unwrap().clone().borrow_mut().memoized_state = Some(MemoizedState::JsValue(memoized_state.clone())); let queue = create_update_queue(); hook.as_ref().unwrap().clone().borrow_mut().update_queue = Some(queue.clone()); ... } ``` `mount_state` ultimately returns `initial_state` and a function: ```rust let q_rc = Rc::new(queue.clone()); let q_rc_cloned = q_rc.clone(); let fiber = unsafe { CURRENTLY_RENDERING_FIBER.clone().unwrap() }; let closure = Closure::wrap(Box::new(move |action: &JsValue| unsafe { dispatch_set_state( fiber.clone(), (*q_rc_cloned).clone(), action, ) }) as Box<dyn Fn(&JsValue)>); let function = closure.as_ref().unchecked_ref::<Function>().clone(); closure.forget(); queue.clone().borrow_mut().dispatch = Some(function.clone()); Ok(vec![memoized_state, function.into()]) ``` It's a bit strange here with `q_rc_cloned` in the closure. `queue` is already of type `Rc`, so why is there an additional layer of `Rc` on the outside? This is because if we change `(*q_rc_cloned).clone()` to `queue.clone()`, it will result in the following error: ```rust error[E0382]: borrow of moved value: `queue` --> packages/react-reconciler/src/fiber_hooks.rs:251:5 | 233 | let queue = create_update_queue(); | ----- move occurs because `queue` has type `Rc<RefCell<UpdateQueue>>`, which does not implement the `Copy` trait ... 240 | let closure = Closure::wrap(Box::new(move |action: &JsValue| unsafe { | ----------------------- value moved into closure here ... 243 | queue.clone(), | ----- variable moved due to use in closure ... 251 | queue.clone().borrow_mut().dispatch = Some(function.clone()); | ^^^^^ value borrowed here after move ``` The reason is that the ownership of the value of `queue` has already been moved into the closure, so it can no longer be used outside. Can we remove the move? 
Let's try, and we find that it results in this error: ```rust error[E0597]: `queue` does not live long enough --> packages/react-reconciler/src/fiber_hooks.rs:243:13 | 240 | let closure = Closure::wrap(Box::new(|action: &JsValue| unsafe { | - ------------------ value captured here | _________________________________| | | 241 | | dispatch_set_state( 242 | | fiber.clone(), 243 | | queue.clone(), | | ^^^^^ borrowed value does not live long enough ... | 246 | | ) 247 | | }) as Box<dyn Fn(&JsValue)>); | |______- cast requires that `queue` is borrowed for `'static` ... 254 | } | - `queue` dropped here while still borrowed ``` The reason is that if we don't move it in, `queue` will be deallocated after `mount_state` is executed, but it is still borrowed inside the closure, which is obviously not allowed. It is often said that the steep learning curve of Rust lies in the fact that you are constantly fighting with the compiler. However, this is the philosophy of Rust: to discover most issues during compilation, which leads to a much higher efficiency in fixing them compared to discovering and fixing them after deployment. Moreover, the Rust compiler is quite intelligent and provides clear problem descriptions. Let's get back to the error of using move and `queue`. Analyzing the situation, since `queue` has been moved, we can't use `queue` afterwards. So, if we move some other value, wouldn't that work? That's why we have `queue_rc`, and the memory models of the two are compared as shown below: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p6869chvhidtyglziw62.png) Another point worth mentioning is that we attach this closure function to the `dispatch` property of the `queue` of each `Hook` node: ```rust queue.clone().borrow_mut().dispatch = Some(function.clone()); ``` This is done to return the same function during `update_state`: ```rust fn update_state(initial_state: &JsValue) -> Result<Vec<JsValue>, JsValue> { ... Ok(vec![ hook.clone().unwrap().clone() .borrow() .memoized_state .clone() .unwrap() .js_value() .unwrap().clone(), queue.clone().unwrap().borrow().dispatch.clone().into(), ]) } ``` However, I feel that having `dispatch` as an attribute of `Hook` is more appropriate. At least for now, it doesn't seem to have any direct association with `queue`. Returning to the code, when `dispatch` is called, it eventually invokes `dispatch_set_state`: ```rust fn dispatch_set_state( fiber: Rc<RefCell<FiberNode>>, update_queue: Rc<RefCell<UpdateQueue>>, action: &JsValue, ) { let update = create_update(action.clone()); enqueue_update(update_queue.clone(), update); unsafe { WORK_LOOP .as_ref() .unwrap() .clone() .borrow() .schedule_update_on_fiber(fiber.clone()); } } ``` Its purpose is to update the `update_queue` of the `Hook` node with the provided `action` and initiate a new round of update process. At this point, the state of the `App` node looks as shown in the following diagram: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tj15uyqd1cgasvctgg99.png) Next, the process is similar to the initial rendering. First, let's look at the "begin work" phase. During the update process, the "begin work" phase primarily handles the child nodes of the `FiberNode`. It generates new child `FiberNode` by comparing the existing child `FiberNode` in the Fiber Tree with the newly generated `ReactElement` (referred to as `children` in the code). 
This is commonly known as the diffing process: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cr4fcpgc86kq1nfa14n2.png) The way children are generated differs based on the type of `FiberNode`: - `HostRoot`: Values are taken from `memoized_state`. - `HostComponent`: Values are taken from `pending_props`. - `FunctionComponent`: Obtained by executing the `Function` pointed to by the `type`. - `HostText`: This process is not applicable and can be ignored. There are two scenarios for generating these new child `FiberNode`: * When the `key` and `type` of the diffing `ReactElement` and `FiberNode` are the same. The `FiberNode` is reused, and the `pending_props` of the `FiberNode` are updated with the `props` from the `ReactElement`: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7cm1ax0b8tf9okj2yn44.png) * In other cases, a new `FiberNode` is created, and the parent node is marked with the `ChildDeletion` flag. The old `FiberNode` is added to the `deletions` list: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ahi4davisarzv7dzkyff.png) I won't provide the code here, but you can refer to the `child_fiber` file in [this commit](https://github.com/ParadeTo/big-react-wasm/pull/9/files). Since generating children for `FunctionComponent` is a bit more complex, let's go back and look at the changes made in the `render_with_hooks` method. The main changes are: ```rust pub fn render_with_hooks(work_in_progress: Rc<RefCell<FiberNode>>) -> Result<JsValue, JsValue> { ... if current.is_some() { // log!("还未实现update时renderWithHooks"); update_hooks_to_dispatcher(true); } else { update_hooks_to_dispatcher(false); } ... } fn update_hooks_to_dispatcher(is_update: bool) { let object = Object::new(); let closure = Closure::wrap(Box::new(if is_update { update_state } else { mount_state }) as Box<dyn Fn(&JsValue) -> Result<Vec<JsValue>, JsValue>>); let function = closure.as_ref().unchecked_ref::<Function>().clone(); closure.forget(); Reflect::set(&object, &"use_state".into(), &function).expect("TODO: panic set use_state"); updateDispatcher(&object.into()); } ``` During the update, the `use_state` in the `dispatcher` is replaced with the `update_state` method. The `update_state` method primarily calculates the new `memoized_state` based on the `update_queue` and `memoized_state` of the `Hooks` and returns it. It also returns the `dispatch` function. ```rust fn update_state(initial_state: &JsValue) -> Result<Vec<JsValue>, JsValue> { let hook = update_work_in_progress_hook(); let hook_cloned = hook.clone().unwrap().clone(); let queue = hook_cloned.borrow().update_queue.clone(); let base_state = hook_cloned.borrow().memoized_state.clone(); unsafe { hook_cloned.borrow_mut().memoized_state = process_update_queue( base_state, queue.clone(), CURRENTLY_RENDERING_FIBER.clone().unwrap(), ); } Ok(vec![ hook.clone().unwrap().clone() .borrow() .memoized_state .clone() .unwrap() .js_value() .unwrap().clone(), queue.clone().unwrap().borrow().dispatch.clone().into(), ]) } ``` That's all for the "begin work" phase. Next, let's take a look at the "complete work" phase, which is relatively simpler. In this phase, nodes are marked with the `Update` flag, and the logic for handling `HostText` and `HostComponent` is modified. 
```rust WorkTag::HostText => { if current.is_some() && work_in_progress_cloned.borrow().state_node.is_some() { // reuse FiberNode let old_text = derive_from_js_value(&current.clone().unwrap().clone().borrow().memoized_props, "content"); let new_test = derive_from_js_value(&new_props, "content"); if !Object::is(&old_text, &new_test) { CompleteWork::mark_update(work_in_progress.clone()); } } else { let text_instance = self.host_config.create_text_instance( Reflect::get(&new_props, &JsValue::from_str("content")) .unwrap() .as_string() .unwrap(), ); work_in_progress.clone().borrow_mut().state_node = Some(Rc::new(StateNode::Element(text_instance.clone()))); } self.bubble_properties(work_in_progress.clone()); None }, WorkTag::HostComponent => { if current.is_some() && work_in_progress_cloned.borrow().state_node.is_some() { // reuse FiberNode log!("TODO: update properties") } else { let instance = self.host_config.create_instance( work_in_progress .clone() .borrow() ._type .as_ref() .as_string() .unwrap(), ); self.append_all_children(instance.clone(), work_in_progress.clone()); work_in_progress.clone().borrow_mut().state_node = Some(Rc::new(StateNode::Element(instance.clone()))); } self.bubble_properties(work_in_progress.clone()); None } ``` Finally, we have the "commit" phase, which mainly involves adding handling for `Update` and `ChildDeletion` in the `commit_mutation_effects_on_fiber` function. ```rust fn commit_mutation_effects_on_fiber(&self, finished_work: Rc<RefCell<FiberNode>>) { ... if flags.contains(Flags::ChildDeletion) { let deletions = finished_work.clone().borrow().deletions.clone(); if deletions.is_some() { let deletions = deletions.unwrap(); for child_to_delete in deletions { self.commit_deletion(child_to_delete); } } finished_work.clone().borrow_mut().flags -= Flags::ChildDeletion; } if flags.contains(Flags::Update) { self.commit_update(finished_work.clone()); finished_work.clone().borrow_mut().flags -= Flags::Update; } ... ``` In the `Update` part, only `HostText` is currently handled, which is relatively simple, so we won't go into detail. Let's directly look at the code. Here, I'll focus on explaining `ChildDeletion`. In the "begin work" phase, we mentioned that child nodes marked for deletion are added to the `deletions` list of their parent node. So here, we iterate over this list and call `commit_deletion`. This function traverses the subtree rooted at `child_to_delete` in a pre-order manner (prioritizing the traversal of the root node). It executes the relevant side effects on these nodes, such as invoking the `componentWillUnmount` method or the `destroy` method returned by `useEffect`. From this, we can observe that the side effects of the parent component are executed first. For example, consider the following example: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l6fuzf7oyj4o57e1gxdq.png) The traversal order is `div -> p -> i -> span`. Additionally, the first node encountered during traversal is recorded, which in this case is `div`. The deletion operation is then performed on this node. Alright, the single-node update process is now complete. In summary: - In the "begin work" phase, mark child nodes for deletion or insertion. - In the "complete work" phase, mark nodes for update. - In the commit phase, perform a depth-first traversal of the Fiber Tree, processing the marked nodes. For nodes marked as `ChildDeletion`, a pre-order traversal is performed on the subtree rooted at that node. 
For more details, please refer to this [update](https://github.com/ParadeTo/big-react-wasm/pull/9). *Please kindly give me a star!*
paradeto
1,838,340
Dominate China's Business Landscape: Harness Success with Ready Mailing Team's China Business Email List
In the fast-paced world of global commerce, China stands as a beacon of opportunity, offering...
0
2024-04-30T09:03:59
https://dev.to/chinabusiness30/dominate-chinas-business-landscape-harness-success-with-ready-mailing-teams-china-business-email-list-2od
news
In the fast-paced world of global commerce, China stands as a beacon of opportunity, offering boundless potential for businesses willing to seize it. At Ready Mailing Team, we understand that success in China hinges on access to the right contacts. That's why we're proud to introduce our revolutionary solution: the Ready Mailing Team China Business Email List. Imagine having instant access to a comprehensive database meticulously curated to encompass the breadth and depth of China's thriving business ecosystem. From the bustling streets of Shanghai to the innovation hubs of Shenzhen, our **[China Business Email List](https://www.readymailingteam.com/)** opens doors to a wealth of opportunities, empowering you to forge meaningful connections and drive sustainable growth. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8t1kpr21rpnqn6zn9iy9.png) Here's how our China Business Email List can propel your business forward: Precision Targeting: Say goodbye to generic outreach strategies. With our meticulously segmented lists, you can laser-focus your efforts on reaching your ideal audience based on industry, company size, geographical location, and more, ensuring that your message resonates with the right individuals every time. Unrivaled Accuracy: We understand that reliable data is the foundation of success. That's why each contact in our China Business Email List undergoes rigorous verification processes, ensuring that you're engaging with decision-makers who have the power to fuel your success in the Chinese market. Actionable Insights: Stay ahead of the curve with real-time market intelligence and trend analysis. Whether you're monitoring competitor activity, identifying emerging opportunities, or refining your market strategy, our database provides the insights you need to make informed decisions and stay ahead of the competition. Personalized Engagement: Forge meaningful connections through personalized communications tailored to the unique preferences and needs of your Chinese contacts. From personalized emails to targeted promotions, our China Business Email List empowers you to cultivate relationships that drive tangible results. Scalable Solutions: Whether you're a startup with ambitious growth plans or an established enterprise seeking to expand your footprint, our flexible subscription options and scalable solutions ensure that you have the resources required to thrive at every stage of your journey. Ready to dominate China's business landscape and unlock unprecedented opportunities for your business? The key lies in the Ready Mailing Team China Business Email List. Don't miss out on the chance to transform your aspirations into achievements. Contact us today to learn more about how our China Business Email List can fuel your success and propel your business to new heights in one of the world's most dynamic markets. Together, let's embark on a journey of growth, innovation, and unparalleled success in China.
chinabusiness30
1,838,390
Integrate Screen Share in JavaScript Video Chat App
📌 Introduction Integrating screen sharing into your JavaScript video chat app expands...
0
2024-05-06T08:56:59
https://www.videosdk.live/blog/integrate-screen-share-in-javascript-video-chat-app
javascript
--- title: Integrate Screen Share in JavaScript Video Chat App published: true date: 2024-04-30 09:10:33 UTC tags: javascript, canonical_url: https://www.videosdk.live/blog/integrate-screen-share-in-javascript-video-chat-app --- ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aucf7smmp5j0eg585auf.jpg) ## 📌 Introduction Integrating screen sharing into your [JavaScript video chat app](https://www.videosdk.live/blog/video-calling-javascript) expands its capabilities, allowing users to share their screens during calls. This feature enhances collaboration by enabling participants to show presentations, documents, or other content directly from their screens. With seamless integration, users can easily switch between video chat and screen-sharing modes, making discussions more interactive and productive. **Benefits of Integrating Screen Share in a JavaScript Video Chat App:** 1. **Enhanced Collaboration**: Screen sharing facilitates real-time collaboration by allowing users to share their screens, enabling them to demonstrate concepts, share documents, or provide visual instructions during video calls. 2. **Improved Communication**: Visual aids provided by screen sharing enhance communication clarity, ensuring that all participants are on the same page, reducing misunderstandings, and improving overall comprehension. 3. **Interactive Learning**: In educational settings, screen sharing facilitates interactive learning experiences, enabling instructors to demonstrate concepts, showcase multimedia content, or lead virtual workshops effectively. In the below-provided guide, you'll be able to implement screen-sharing functionality smoothly into your app using JavaScript with VideoSDK. ## 🚀 Getting Started with VideoSDK To take advantage of Screen Share functionality, we must use the capabilities that the VideoSDK offers. Before diving into the implementation steps, let's ensure you complete the necessary prerequisites. ### Create a VideoSDK Account Go to your [VideoSDK dashboard](https://www.videosdk.live/signup?utm_source=organicseo&utm_medium=devto&utm_campaign=screen_share_javascript_app) and sign up if you don't have an account. This account gives you access to the required Video SDK token, which acts as an authentication key that allows your application to interact with VideoSDK functionality. ### Generate your Auth Token Visit your VideoSDK dashboard and navigate to the "API Key" section to generate your auth token. This token is crucial in authorizing your application to use VideoSDK features. For a more visual understanding of the account creation and token generation process, consider referring to the [provided tutorial](https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/authentication-and-token). ### Prerequisites Before proceeding, ensure that your development environment meets the following requirements: - VideoSDK Developer Account (if you do not have one, follow [VideoSDK Dashboard](https://www.videosdk.live/signup?utm_source=organicseo&utm_medium=devto&utm_campaign=screen_share_javascript_app)) - Have Node and NPM installed on your device. ## 🛠️ Install VideoSDK Import VideoSDK using the `<script>` tag or install it using the following npm command. Make sure you are in your app directory before you run this command. 
```js <html> <head> <!--.....--> </head> <body> <!--.....--> <script src="https://sdk.videosdk.live/js-sdk/0.0.85/videosdk.js"></script> </body> </html> ``` - **npm** ```js npm install @videosdk.live/js-sdk ``` - **Yarn** ```js yarn add @videosdk.live/js-sdk ``` ### Structure of the project Your project structure should look like this. ```js root ├── index.html ├── config.js ├── index.js ``` You will be working on the following files: - **index.html**: Responsible for creating a basic UI. - **config.js**: Responsible for storing the token. - **index.js**: Responsible for rendering the meeting view and the join meeting functionality. ## 🎥 Essential Steps to Implement Video Call Functionality Once you've successfully installed VideoSDK in your project, you'll have access to a range of functionalities for building your video call application. Screen Share is one such feature that leverages VideoSDK's capabilities, letting a participant broadcast their screen to everyone else in the meeting. ### Step 1: Design the user interface (UI) Create an HTML file containing the screens, `join-screen` and `grid-screen`. ```js <!DOCTYPE html> <html> <head> </head> <body> <div id="join-screen"> <!-- Create new Meeting Button --> <button id="createMeetingBtn">New Meeting</button> OR <!-- Join existing Meeting --> <input type="text" id="meetingIdTxt" placeholder="Enter Meeting id" /> <button id="joinBtn">Join Meeting</button> </div> <!-- for Managing meeting status --> <div id="textDiv"></div> <div id="grid-screen" style="display: none"> <!-- To Display MeetingId --> <h3 id="meetingIdHeading"></h3> <!-- Controllers --> <button id="leaveBtn">Leave</button> <button id="toggleMicBtn">Toggle Mic</button> <button id="toggleWebCamBtn">Toggle WebCam</button> <!-- Screen Share controls (referenced later in index.js) --> <button id="enableScreenShareBtn">Enable Screen Share</button> <button id="disableScreenShareBtn">Disable Screen Share</button> <!-- render Video --> <div class="row" id="videoContainer"></div> </div> <!-- render Screen Share Video --> <div class="row" id="screenShareVideoContainer"></div> <!-- Add VideoSDK script --> <script src="https://sdk.videosdk.live/js-sdk/0.0.85/videosdk.js"></script> <script src="config.js"></script> <script src="index.js"></script> </body> </html> ``` ### Step 2: Implement Join Screen Configure the token in the `config.js` file, which you can obtain from the [VideoSDK Dashboard](https://app.videosdk.live/). ```js // Auth token will be used to generate a meeting and connect to it TOKEN = "Your_Token_Here"; ``` Next, retrieve all the elements from the DOM and declare the following variables in the `index.js` file. Then, add an event listener to the join and create meeting buttons. 
```js // Getting Elements from DOM const joinButton = document.getElementById("joinBtn"); const enableScreenShareButton = document.getElementById("enableScreenShareBtn"); const disableScreenShareButton = document.getElementById( "disableScreenShareBtn" ); const leaveButton = document.getElementById("leaveBtn"); const toggleMicButton = document.getElementById("toggleMicBtn"); const toggleWebCamButton = document.getElementById("toggleWebCamBtn"); const createButton = document.getElementById("createMeetingBtn"); const videoContainer = document.getElementById("videoContainer"); const screenShareVideoContainer = document.getElementById( "screenShareVideoContainer" ); const textDiv = document.getElementById("textDiv"); // Declare Variables let meeting = null; let meetingId = ""; let isMicOn = false; let isWebCamOn = false; function initializeMeeting() {} function createLocalParticipant() {} function createVideoElement() {} function createAudioElement() {} function setTrack() {} // Join Meeting Button Event Listener joinButton.addEventListener("click", async () => { document.getElementById("join-screen").style.display = "none"; textDiv.textContent = "Joining the meeting..."; roomId = document.getElementById("meetingIdTxt").value; meetingId = roomId; initializeMeeting(); }); // Create Meeting Button Event Listener createButton.addEventListener("click", async () => { document.getElementById("join-screen").style.display = "none"; textDiv.textContent = "Please wait, we are joining the meeting"; // API call to create meeting const url = `https://api.videosdk.live/v2/rooms`; const options = { method: "POST", headers: { Authorization: TOKEN, "Content-Type": "application/json" }, }; const { roomId } = await fetch(url, options) .then((response) => response.json()) .catch((error) => alert("error", error)); meetingId = roomId; initializeMeeting(); }); ``` ### Step 3: Initialize Meeting[​](https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-3--initialize-meeting) Following that, initialize the meeting using the `initMeeting()` function and proceed to join the meeting. ```js // Initialize meeting function initializeMeeting() { window.VideoSDK.config(TOKEN); meeting = window.VideoSDK.initMeeting({ meetingId: meetingId, // required name: "Thomas Edison", // required micEnabled: true, // optional, default: true webcamEnabled: true, // optional, default: true }); meeting.join(); // Creating local participant createLocalParticipant(); // Setting local participant stream meeting.localParticipant.on("stream-enabled", (stream) => { setTrack(stream, null, meeting.localParticipant, true); }); // meeting joined event meeting.on("meeting-joined", () => { textDiv.style.display = "none"; document.getElementById("grid-screen").style.display = "block"; document.getElementById( "meetingIdHeading" ).textContent = `Meeting Id: ${meetingId}`; }); // meeting left event meeting.on("meeting-left", () => { videoContainer.innerHTML = ""; }); // Remote participants Event // participant joined meeting.on("participant-joined", (participant) => { // ... }); // participant left meeting.on("participant-left", (participant) => { // ... }); } ``` ### Step 4: Create the Media Elements[​](https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-4--create-the-media-elements) In this step, Create a function to generate audio and video elements for displaying both local and remote participants. 
Set the corresponding media track based on whether it's a video or audio stream. ```js // creating video element function createVideoElement(pId, name) { let videoFrame = document.createElement("div"); videoFrame.setAttribute("id", `f-${pId}`); videoFrame.style.width = "300px"; //create video let videoElement = document.createElement("video"); videoElement.classList.add("video-frame"); videoElement.setAttribute("id", `v-${pId}`); videoElement.setAttribute("playsinline", true); videoElement.setAttribute("width", "300"); videoFrame.appendChild(videoElement); let displayName = document.createElement("div"); displayName.innerHTML = `Name : ${name}`; videoFrame.appendChild(displayName); return videoFrame; } // creating audio element function createAudioElement(pId) { let audioElement = document.createElement("audio"); audioElement.setAttribute("autoPlay", "false"); audioElement.setAttribute("playsInline", "true"); audioElement.setAttribute("controls", "false"); audioElement.setAttribute("id", `a-${pId}`); audioElement.style.display = "none"; return audioElement; } // creating local participant function createLocalParticipant() { let localParticipant = createVideoElement( meeting.localParticipant.id, meeting.localParticipant.displayName ); videoContainer.appendChild(localParticipant); } // setting media track function setTrack(stream, audioElement, participant, isLocal) { if (stream.kind == "video") { isWebCamOn = true; const mediaStream = new MediaStream(); mediaStream.addTrack(stream.track); let videoElm = document.getElementById(`v-${participant.id}`); videoElm.srcObject = mediaStream; videoElm .play() .catch((error) => console.error("videoElem.current.play() failed", error) ); } if (stream.kind == "audio") { if (isLocal) { isMicOn = true; } else { const mediaStream = new MediaStream(); mediaStream.addTrack(stream.track); audioElement.srcObject = mediaStream; audioElement .play() .catch((error) => console.error("audioElem.play() failed", error)); } } } ``` ### Step 5: Handle participant events[​](https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-5--handle-participant-events) Thereafter, implement the events related to the participants and the stream. The following are the events to be executed in this step: 1. `participant-joined`: When a remote participant joins, this event will trigger. In the event callback, create video and audio elements previously defined for rendering their video and audio streams. 2. `participant-left`: When a remote participant leaves, this event will trigger. In the event callback, remove the corresponding video and audio elements. 3. `stream-enabled`: This event manages the media track of a specific participant by associating it with the appropriate video or audio element. ```js // Initialize meeting function initializeMeeting() { // ... 
// participant joined meeting.on("participant-joined", (participant) => { let videoElement = createVideoElement( participant.id, participant.displayName ); let audioElement = createAudioElement(participant.id); // stream-enabled participant.on("stream-enabled", (stream) => { setTrack(stream, audioElement, participant, false); }); videoContainer.appendChild(videoElement); videoContainer.appendChild(audioElement); }); // participants left meeting.on("participant-left", (participant) => { let vElement = document.getElementById(`f-${participant.id}`); vElement.remove(vElement); let aElement = document.getElementById(`a-${participant.id}`); aElement.remove(aElement); }); } ``` ### Step 6: Implement Controls[​](https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/quick-start#step-6--implement-controls) Next, implement the meeting controls such as toggleMic, toggleWebcam, and leave the meeting. ```js // leave Meeting Button Event Listener leaveButton.addEventListener("click", async () => { meeting?.leave(); document.getElementById("grid-screen").style.display = "none"; document.getElementById("join-screen").style.display = "block"; }); // Toggle Mic Button Event Listener toggleMicButton.addEventListener("click", async () => { if (isMicOn) { // Disable Mic in Meeting meeting?.muteMic(); } else { // Enable Mic in Meeting meeting?.unmuteMic(); } isMicOn = !isMicOn; }); // Toggle Web Cam Button Event Listener toggleWebCamButton.addEventListener("click", async () => { if (isWebCamOn) { // Disable Webcam in Meeting meeting?.disableWebcam(); let vElement = document.getElementById(`f-${meeting.localParticipant.id}`); vElement.style.display = "none"; } else { // Enable Webcam in Meeting meeting?.enableWebcam(); let vElement = document.getElementById(`f-${meeting.localParticipant.id}`); vElement.style.display = "inline"; } isWebCamOn = !isWebCamOn; }); ``` **You can check out the complete [here](https://github.com/videosdk-live/quickstart/tree/main/js-rtc).** ## Integrate Screen Share Feature Screen sharing in a meeting is the process of sharing your computer screen with other participants in the meeting. It allows everyone in the meeting to see exactly what you are seeing on your screen, which can be helpful for presentations, demonstrations, or collaborations. ### Enable Screen Share By using the `enableScreenShare()` function of the `meeting` object, the local participant can share their desktop screen to other participants. - You can also pass a customised screenshare track in `enableScreenShare()` by using [Custom Screen Share Track](https://docs.videosdk.live/javascript/guide/video-and-audio-calling-api-sdk/render-media/optimize-video-track#custom-screen-share-track). - The Screen Share stream of a participant can be accessed from the `streams` property of the `Participant` object. ### Disable Screen Share By using `disableScreenShare()` function of the `meeting` object, the local participant can stop sharing their desktop screen to other participants. > **NOTE:** Screen Sharing is only supported in the **Desktop browsers** and **not in mobile/tab browser**. ```js let meeting; // Initialize Meeting meeting = VideoSDK.initMeeting({ // ... 
});

enableScreenShareButton.addEventListener("click", () => {
  // Enabling ScreenShare
  meeting?.enableScreenShare();
});

disableScreenShareButton.addEventListener("click", () => {
  // Disabling ScreenShare
  meeting?.disableScreenShare();
});
```

### Events associated with Screen Share

#### enableScreenShare

- Every Participant will receive a callback on the [`stream-enabled`](https://docs.videosdk.live/javascript/api/sdk-reference/participant-class/events#stream-enabled) event of the [`participant`](https://docs.videosdk.live/javascript/api/sdk-reference/participant-class/introduction) object with the `Stream` object.
- Every Participant will receive a callback on the [`presenter-changed`](https://docs.videosdk.live/javascript/api/sdk-reference/meeting-class/events#presenter-changed) event of the meeting object with the `presenterId`.

#### disableScreenShare

- Every Participant will receive a callback on the [`stream-disabled`](https://docs.videosdk.live/react/api/sdk-reference/use-participant/events#onstreamdisabled) event of the [`participant`](https://docs.videosdk.live/javascript/api/sdk-reference/participant-class/introduction) object with the `Stream` object.
- Every Participant will receive a callback on the [`presenter-changed`](https://docs.videosdk.live/javascript/api/sdk-reference/meeting-class/events#presenter-changed) event of the meeting object with a `null` value, indicating there is no current presenter.

```js
let meeting;

// Initialize Meeting
meeting = VideoSDK.initMeeting({
  // ...
});

// participant joined
meeting.on("participant-joined", (participant) => {
  // ...
  // ...

  participant.on("stream-enabled", (stream) => {
    setTrack(stream, audioElement, participant, false);
    if (stream.kind == "share") {
      // participant turned on screen share
      // Render screenshare logic here
    }
  });

  // ...
  // ...

  participant.on("stream-disabled", (stream) => {
    if (stream.kind === "share") {
      // participant turned off screenshare
      // remove screenshare logic here
    }
  });
});

meeting.on("presenter-changed", (presenterId) => {
  if (presenterId) {
    // someone started presenting
  } else {
    // someone stopped presenting
  }
});
```

### Screen Share with Audio

To enable screen sharing with audio, select the **Share tab audio** option when sharing a Chrome tab, as shown below.

![How to Integrate Screen Share in JavaScript Video Chat App?](https://docs.videosdk.live/assets/images/screenshare-with-audio-ca8ee299f68c32ba08cd811e3fb7cd2f.png)

After clicking the `Share` button, you will receive the selected tab's audio stream in the participant's `screenShareAudioStream`.

> **NOTE:**
> Screen Share with Audio is only supported while sharing a **Chrome Tab** in a **Chromium-based browser** like Google Chrome, Brave, etc.

### Rendering Screen Share and Screen Share Audio

To display the screenshare video stream, you will receive it in the participant's stream-enabled callback with the stream kind set as "share".
```js participant.on("stream-enabled", (stream) => { if (stream.kind == "share") { const videoElem = createShareVideoElement(participant.id, stream); //add videoElem to your container screenShareVideoContainer.appendChild(videoElem); } if (stream.kind == "shareAudio") { } }); // creating video element function createShareVideoElement(pId, stream) { if (pId == meeting.localParticipant.id) return; let videoElement = document.createElement("video"); videoElement.setAttribute("autoPlay", false); videoElement.setAttribute("controls", "false"); videoElement.setAttribute("id", `v-share-${pId}`); const mediaStream = new MediaStream(); mediaStream.addTrack(stream.track); videoElement.srcObject = mediaStream; videoElement .play() .catch((error) => console.error("audioElem.play() failed", error)); return videoElement; } // creating audio element function createShareAudioElement(pId, stream) {} ``` Now to render the screenshare audio stream, you will receive it in the participant's stream-enabled callback with the stream kind set as "shareAudio". ```js participant.on("stream-enabled", (stream) => { if (stream.kind == "share") { } if (stream.kind == "shareAudio") { const audioElem = createShareAudioElement(participant.id, stream); //add audioElem to your container screenShareVideoContainer.appendChild(audioElem); } }); // creating video element function createShareVideoElement(pId, stream) {} // creating audio element function createShareAudioElement(pId, stream) { if (pId == meeting.localParticipant.id) return; let audioElement = document.createElement("audio"); audioElement.setAttribute("autoPlay", false); audioElement.setAttribute("playsInline", "false"); audioElement.setAttribute("controls", "false"); audioElement.setAttribute("id", `a-share-${pId}`); audioElement.style.display = "none"; const mediaStream = new MediaStream(); mediaStream.addTrack(stream.track); audioElement.srcObject = mediaStream; audioElement .play() .catch((error) => console.error("audioElem.play() failed", error)); return audioElement; } ``` ## 🔚 Conclusion Integrating screen sharing into your JavaScript video chat app is a powerful way to enhance communication and collaboration. By enabling users to share their screens during calls, you empower them to demonstrate ideas, share presentations, and collaborate on documents more effectively. With a seamless integration process provided by [VideoSDK](https://www.videosdk.live/), implementing screen sharing becomes straightforward, ensuring a smooth user experience. Whether for remote work, online education, or virtual meetings, this feature adds significant value to your app, making it more versatile and engaging for users. Unlock the full potential of VideoSDK and create seamless video experiences today! **[Sign up now](https://www.videosdk.live/signup?utm_source=organicseo&utm_medium=devto&utm_campaign=screen_share_javascript_app)** and get 10,000 free minutes to elevate your video app to the next level.
nakrani
1,838,893
5 AI Chatbot Builders to Add to Your Website and Boost User Engagement
5 AI Chatbot Builders to Add to Your Website and Boost User Engagement Introduction: In...
0
2024-04-30T17:56:47
https://dev.to/sh20raj/5-ai-chatbot-builders-to-add-to-your-website-and-boost-user-engagement-1j59
ai, listing
# 5 AI Chatbot Builders to Add to Your Website and Boost User Engagement Introduction: In today's digital age, businesses are constantly looking for ways to improve user engagement and enhance the customer experience. One effective way to achieve this is by adding an AI chatbot to your website. Chatbots can help answer user questions, provide personalized recommendations, and even make sales. In this article, we'll take a look at five popular AI chatbot builders that you can use to add a chatbot to your website. 1. Tars: Tars is a chatbot builder that allows you to create custom chatbots for your website. It offers a drag-and-drop interface, pre-built templates, and integrations with popular tools like Mailchimp, Google Analytics, and Zapier. With Tars, you can create chatbots that can answer user questions, collect leads, and even make sales. Tars also offers analytics and reporting features to help you track the performance of your chatbot. 2. ManyChat: ManyChat is a Facebook Messenger chatbot builder that also offers a website chatbot feature. You can create chatbots that can answer user questions, collect leads, and even make sales. ManyChat offers a visual editor, pre-built templates, and integrations with tools like Mailchimp, Google Sheets, and Shopify. ManyChat also offers a free plan with limited features, making it a great option for small businesses on a budget. 3. Chatfuel: Chatfuel is another Facebook Messenger chatbot builder that offers a website chatbot feature. It offers a visual editor, pre-built templates, and integrations with tools like Mailchimp, Google Sheets, and Shopify. Chatfuel also offers a free plan with limited features, making it a great option for small businesses on a budget. With Chatfuel, you can create chatbots that can answer user questions, provide personalized recommendations, and even make sales. 4. Botsify: Botsify is a chatbot builder that allows you to create chatbots for your website, Facebook Messenger, and other messaging platforms. It offers a drag-and-drop interface, pre-built templates, and integrations with tools like Google Analytics, Mailchimp, and Zapier. With Botsify, you can create chatbots that can answer user questions, collect leads, and even make sales. Botsify also offers a free plan with limited features, making it a great option for small businesses on a budget. 5. Dialogflow: Dialogflow is a Google-owned chatbot builder that allows you to create chatbots for your website, mobile app, and messaging platforms. It offers natural language processing, integrations with Google services, and support for multiple languages. With Dialogflow, you can create chatbots that can answer user questions, provide personalized recommendations, and even make sales. Dialogflow also offers a free plan with limited features, making it a great option for small businesses on a budget. Conclusion: Adding an AI chatbot to your website can help improve user engagement and enhance the customer experience. With the five chatbot builders we've explored in this article, you can create custom chatbots that can answer user questions, collect leads, and even make sales. Whether you're a small business on a budget or a larger enterprise, there's a chatbot builder out there that can meet your needs. So why not give it a try and see how an AI chatbot can benefit your business?
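Whichever builder you choose, the final step is usually pasting a small embed snippet into your site's HTML. As a rough illustration only, Dialogflow's web widget (Dialogflow Messenger) is typically embedded like the sketch below; copy the exact, current snippet from your own Dialogflow console, since the agent ID and chat title here are placeholders.

```html
<!-- Illustrative sketch of a Dialogflow Messenger embed; use the snippet generated by your own console. -->
<script src="https://www.gstatic.com/dialogflow-console/fast/messenger/bootstrap.js?v=1"></script>
<df-messenger
  intent="WELCOME"
  chat-title="SupportBot"
  agent-id="YOUR-AGENT-ID"
  language-code="en"
></df-messenger>
```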
sh20raj
1,856,568
Golang Logging Configuration with Zap: Practical Implementation Tips
Hi everybody, continuing from the previous post, which was a basic setup with Golang. In this part,...
0
2024-05-17T14:03:14
https://dev.to/ronnymedina/golang-logging-configuration-with-zap-practical-implementation-tips-6g7
beginners, programming, tutorial, go
Hi everybody, continuing from the previous [post](https://edulixir.com/2024/05/13/basic-setup-for-projects-with-golang/), which was a basic setup with Golang. In this part, we're setting up our logging in the app. THE [ORIGINAL POST](https://edulixir.com/2024/05/17/golang-logging-configuration-with-zap-practical-implementation-tips/) HAS ADDITIONAL INFORMATION. ### Installing the package zap [https://github.com/uber-go/zap](https://github.com/uber-go/zap) In our project, we need to install the zap package. To do this, put the following command in our terminal. ```bash go get -u go.uber.org/zap ``` If you're using Docker like me, you need to enter the container. So, if you're running the app in the background, execute this command. ```bash docker exec -it demogolang bash ``` > To run the app in the background, use the command: docker-compose up -d. > The name 'demogolang' is declared inside the docker-compose.yml file. ### Setting up the logging #### Dockerfile In our **Dockerfile** add 2 environment variables and created a folder to save the logs. - **APP_LOG_LEVEL**: The app log level could be **debug** or **error**. We use the **debug** level when we're in development and **error** when the app is in production mode. So, if you're in **error** mode when you print logs **Info** or **Debug**, these are not shown in the terminal. If you want to use more levels, you can edit the function to get the current level. - **APP_LOG_FOLDER**: This is the path where you place the log file. You can set whatever path you want. ```Dockerfile ENV APP_LOG_LEVEL debug ENV APP_LOG_FOLDER /tmp/logs/ RUN mkdir -p ${APP_LOG_FOLDER} ``` #### Reading environment variables Inside our `app` folder, I created a folder called config. Inside config, I put the file called `envs.go`. ```go package config import "os" var APP_ENV = os.Getenv("APP_ENV") var IS_DEVELOP_MODE = APP_ENV == "develop" var APP_LOG_LEVEL = os.Getenv("APP_LOG_LEVEL") var APP_LOG_FOLDER = os.Getenv("APP_LOG_FOLDER") ``` #### Logger file ```go package utils import ( "example/demogo/config" "go.uber.org/zap" "go.uber.org/zap/zapcore" ) var Logger *zap.SugaredLogger func getLevelLogger(level string) zapcore.Level { if level == "debug" { return zap.DebugLevel } return zap.ErrorLevel } func init() { var err error level := zap.NewAtomicLevelAt(getLevelLogger(config.APP_LOG_LEVEL)) encoder := zap.NewProductionEncoderConfig() zapConfig := zap.NewProductionConfig() zapConfig.EncoderConfig = encoder zapConfig.Level = level zapConfig.Development = config.IS_DEVELOP_MODE zapConfig.Encoding = "json" zapConfig.InitialFields = map[string]interface{}{"idtx": "999"} zapConfig.OutputPaths = []string{"stdout", config.APP_LOG_FOLDER + "app_log.log"} zapConfig.ErrorOutputPaths = []string{"stderr"} logger, err := zapConfig.Build() if err != nil { panic(err) } Logger = logger.Sugar() } ``` I created a global variable called `Logger`; this will be set when the package is initialized. The function `getLevelLogger` is used to get the current level from the environment variable. > The init function is automatically called before the main function. - **encoder**: Is the log format, how you want to apply the format. - **InitialFields**: The values ​​that you want to always be in the log. - **OutputPaths**: The output of the file. - **ErrorOutputPaths**: To print error log in the terminal. Finally our `main.go`. 
```go
package main

import (
	"example/demogo/utils"
	"time"
)

func main() {
	defer utils.Logger.Sync()

	for {
		time.Sleep(5 * time.Second)
		utils.Logger.Infoln("hello world")
	}
}
```

> It's important to call `defer utils.Logger.Sync()` in the entry point of our app. It ensures that any buffered log messages are flushed to their final destination before the program exits.

```bash
docker-compose up -d --build
docker logs -f demogolang
```
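As a final usage example, here is a small sketch that reuses the `Logger` configured above to show the level filtering and the structured (key/value) logging API. The field names and values are only placeholders; `Debugw`, `Infow`, and `Errorw` are part of zap's `SugaredLogger`.

```go
package main

import "example/demogo/utils"

func main() {
	defer utils.Logger.Sync()

	// With APP_LOG_LEVEL=debug all three lines are written (to stdout and the log file).
	// With APP_LOG_LEVEL=error only the Errorw line produces output.
	utils.Logger.Debugw("connecting to database", "host", "localhost", "port", 5432)
	utils.Logger.Infow("user created", "id", 123, "email", "demo@example.com")
	utils.Logger.Errorw("payment failed", "orderId", "abc-001", "reason", "card declined")
}
```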
ronnymedina
1,857,098
TFT LCD Display Applications: From Wearables to Home Appliances
screenshot-1715645179664.png TFT LCD Display Applications: Making Life More Convenient TFT LCD...
0
2024-05-18T01:24:02
https://dev.to/jer_win_bad86b29df116649d/tft-lcd-display-applications-from-wearables-to-home-appliances-310m
lcd
TFT LCD Display Applications: Making Life More Convenient

TFT LCD displays may not be something you think about often, but chances are you interact with them every day. They are becoming more and more prevalent in products ranging from wearables to home appliances. In this article we'll look at the benefits of TFT LCD displays, their main applications, and how to get the most out of them.

Benefits of TFT LCD displays: TFT LCD displays carry several advantages over older display technologies. They offer higher resolution, sharper color accuracy, and higher contrast ratios. An IPS TFT LCD display also has a wider viewing angle than a conventional panel, so the content stays visible even when you are not looking at the screen straight on.

Innovation in TFT LCD displays: One of the most significant advances is in color rendering. With the development of high-quality IPS (In-Plane Switching) TFT LCDs, colors look natural and show little distortion when viewed from the side, producing vivid, accurate images in everyday use.

Safety and durability of TFT LCD displays: TFT LCD displays are made from materials that make them robust, durable, and safe to use. They typically offer improved brightness and contrast, scratch-resistant coatings, and a lightweight construction that makes them easy to build into different products.

Quality and manufacturer support: One way to get the most out of a TFT LCD display is to buy a quality product from a reputable manufacturer. Whether the display is going into a wearable, a home appliance, or industrial equipment, choose an established supplier with a track record of good after-sale support, so that any issues can be resolved quickly.

Applications of TFT LCD displays: TFT LCD displays are used in many kinds of products to improve functionality and the user experience. In wearables, they display and help monitor health metrics such as heart rate and step count. In home appliances, they let users interact with the machine, set a cycle, monitor progress, and receive alerts. In industrial and medical devices, they provide critical feedback for users, for example when monitoring patients or overseeing pharmaceutical manufacturing.

Conclusion: TFT LCD displays are a good example of how technology has become integrated into our lives to make everyday tasks more manageable. They have a wide range of uses and applications, making them an indispensable part of modern life. As the technology advances, we can expect to see new and innovative uses of TFT LCD displays in even more devices. https://www.kingwaytft.com/ips-tft-lcd-display
jer_win_bad86b29df116649d
1,857,313
Understanding OK Win Platform
OK Win is a popular online gaming platform that offers users the chance to win big prizes through...
0
2024-05-18T06:35:16
https://dev.to/ryjghnjfygh/understanding-ok-win-platform-hbb
OK Win is a popular online gaming platform that offers users the chance to win big prizes through various games and contests. Whether you are a seasoned gamer or a newcomer, OK Win provides an exciting and rewarding experience. In this article, we will explore the key features of OK Win, tips for maximizing your chances of winning, and how to get started on this platform. Understanding OK Win Platform OK Win is designed to offer a seamless gaming experience. The platform features a variety of games, including slots, card games, and sports betting. Each game is designed to be engaging and easy to play, making it accessible to players of all skill levels. The platform's user-friendly interface ensures that you can navigate through different sections effortlessly, allowing you to focus on enjoying your gaming experience. How to Get Started with OK Win Getting started with [OK Win](https://okwingames.in/) is straightforward. First, visit the OK Win website and create an account by providing your basic information. Once your account is set up, you can browse through the available games and choose the ones you want to play. The platform offers a range of deposit options, making it easy to fund your account. With your account ready, you can start playing and enjoying the thrill of winning. Tips for Winning at OK Win To increase your chances of winning at OK Win, it is essential to follow some basic tips. First, familiarize yourself with the rules and strategies of the games you are playing. Practice regularly to improve your skills and understand the nuances of each game. Additionally, take advantage of any bonuses or promotions offered by the platform, as these can boost your winning potential. Lastly, set a budget and stick to it to ensure responsible gaming. The Benefits of Playing on OK Win OK Win offers numerous benefits to its users. One of the primary advantages is the wide variety of games available, catering to different interests and skill levels. The platform also provides attractive bonuses and promotions, enhancing your gaming experience. Furthermore, OK Win ensures a secure and fair gaming environment, giving you peace of mind while you play. The platform's customer support team is always ready to assist with any issues or questions you may have. OK Win Community and Support OK Win has a vibrant community of players who share tips, strategies, and experiences. Engaging with this community can enhance your gaming experience and provide valuable insights. The platform also offers robust customer support, ensuring that any issues are resolved promptly. Whether you need help with your account, have questions about a game, or encounter technical problems, the support team is available to assist you. Questions and Answers How do I create an account on OK Win? To create an account, visit the OK Win website, click on the sign-up button, and provide the required information. Follow the instructions to complete the registration process. What types of games are available on OK Win? OK Win offers a variety of games, including slots, card games, and sports betting. Each game is designed to be engaging and easy to play. Are there any bonuses or promotions on OK Win? Yes, OK Win offers various bonuses and promotions to enhance your gaming experience. These can include welcome bonuses, deposit bonuses, and special promotions for specific games.
ryjghnjfygh
1,857,346
Extracting Words from Scanned Books: A Step-by-Step Tutorial with Python and OpenCV
Our video tutorial will show you how to extract individual words from scanned book pages, giving you...
0
2024-05-18T07:15:37
https://dev.to/feitgemel/extracting-words-from-scanned-books-a-step-by-step-tutorial-with-python-and-opencv-4p95
python, opencv
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gged6k44rp7o4xi8kcz9.png) Our video tutorial will show you how to extract individual words from scanned book pages, giving you the code you need to extract the required text from any book. We'll walk you through the entire process, from converting the image to grayscale and applying thresholding, to using OpenCV functions to detect the lines of text and sort them by their position on the page. You'll be able to easily extract text from scanned documents and perform word segmentation. check out our video here : https://youtu.be/c61w6H8pdzs&list=UULFTiWJJhaH6BviSWKLJUM9sg Enjoy, Eran #ImageSegmentation #PythonOpenCV #ContourDetection #ComputerVision #AdvancedOpenCV #extracttext #extractwords
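For readers who prefer text over video, here is a minimal sketch of the kind of pipeline described above: grayscale, thresholding, dilating so the letters of a word merge into one blob, then finding and sorting the bounding boxes. It is an illustrative example rather than the exact code from the video, and the file name, kernel size, and row-bucket height are placeholders you will likely need to tune per book.

```python
import cv2

# Load a scanned page (path is a placeholder) and convert it to grayscale.
image = cv2.imread("page.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Invert + Otsu threshold so the text becomes white on a black background.
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Dilate horizontally so the letters of each word join into a single blob.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 3))
dilated = cv2.dilate(thresh, kernel, iterations=1)

# Each remaining external contour is (roughly) one word.
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours]

# Sort top-to-bottom (50 px row buckets), then left-to-right, and crop each word.
boxes.sort(key=lambda b: (b[1] // 50, b[0]))
for i, (x, y, w, h) in enumerate(boxes):
    cv2.imwrite(f"word_{i}.png", image[y:y + h, x:x + w])
```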
feitgemel
1,857,443
Elevate Your WordPress Site: Building Custom Plugins
Introduction: In the expansive realm of WordPress, customization is king. While themes offer a great...
0
2024-05-24T12:30:00
https://dev.to/nitin-rachabathuni/elevate-your-wordpress-site-building-custom-plugins-ihp
Introduction: In the expansive realm of WordPress, customization is king. While themes offer a great starting point, true flexibility often requires diving into the world of plugins. These plugins can extend functionality, streamline processes, and tailor your site to your exact specifications. But what if you can't find the perfect plugin for your needs? Fear not! With a little coding magic, you can create your own custom plugins and unlock endless possibilities for your WordPress site. Why Build Custom Plugins? Off-the-shelf plugins can be fantastic, but they might not always align perfectly with your vision. Building custom plugins offers several advantages: Tailored Functionality: You can precisely match the features you need without unnecessary bloat. Performance Optimization: Custom plugins can be optimized for performance, ensuring your site runs smoothly. Security: By knowing exactly what's in your codebase, you can ensure better security practices. Learning Experience: Building plugins is a fantastic way to deepen your understanding of WordPress development. Getting Started: Setting Up Your Environment Before diving into code, ensure you have a development environment set up. This typically involves a local server environment like XAMPP or WAMP for testing. Once you're set up, it's time to create your first custom plugin. Creating Your First Plugin: Hello World! Let's start with the quintessential "Hello World" example to get familiar with the structure of a WordPress plugin. Create a new directory in the wp-content/plugins directory of your WordPress installation. Name it something unique, like custom-plugin. Inside your custom-plugin directory, create a new PHP file, say custom-plugin.php, and add the following code: ``` <?php /* Plugin Name: Custom Plugin Description: This is a custom plugin for WordPress. */ function custom_plugin_hello_world() { echo '<p>Hello, World! This is my first custom plugin.</p>'; } // Hook into WordPress action to execute our function add_action('wp_footer', 'custom_plugin_hello_world'); ?> ``` Save the file, then head to your WordPress admin dashboard. Go to the Plugins section, and you should see your newly created plugin listed. Activate it, and voila! You've just created and activated your first custom plugin. Expanding Functionality: Adding Settings Let's take it a step further by adding a simple settings page to our plugin. This will allow users to customize the plugin's behavior from within the WordPress admin panel. ``` <?php /* Plugin Name: Custom Plugin with Settings Description: This is a custom plugin for WordPress with settings. */ // Add a menu item under Settings function custom_plugin_settings_menu() { add_options_page('Custom Plugin Settings', 'Custom Plugin', 'manage_options', 'custom-plugin-settings', 'custom_plugin_settings_page'); } add_action('admin_menu', 'custom_plugin_settings_menu'); // Display the settings page function custom_plugin_settings_page() { ?> <div class="wrap"> <h2>Custom Plugin Settings</h2> <p>Customize the behavior of the custom plugin.</p> <!-- Add your settings fields here --> </div> <?php } ?> ``` With this code, you've created a settings page under the WordPress Settings menu. Now, you can add your custom settings fields within the custom_plugin_settings_page() function. Conclusion: Building custom plugins for WordPress empowers you to shape your site exactly the way you envision it. Whether it's a simple tweak or a complex feature, the ability to create custom plugins gives you unparalleled control. 
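Before you close the editor, here is one way the `<!-- Add your settings fields here -->` placeholder from the settings page example could be filled in using the WordPress Settings API. This is an illustrative sketch; the option, section, and field names are placeholders, not part of the tutorial above.

```php
<?php
// Register an option, a section, and one field for the settings page created earlier.
function custom_plugin_register_settings() {
    register_setting( 'custom_plugin_options', 'custom_plugin_message' );

    add_settings_section(
        'custom_plugin_main',       // Section ID.
        'Main Settings',            // Section title.
        '__return_false',           // No intro text for the section.
        'custom-plugin-settings'    // Page slug used in add_options_page().
    );

    add_settings_field(
        'custom_plugin_message',
        'Footer Message',
        'custom_plugin_message_field_html',
        'custom-plugin-settings',
        'custom_plugin_main'
    );
}
add_action( 'admin_init', 'custom_plugin_register_settings' );

// Render the text input for the field.
function custom_plugin_message_field_html() {
    $value = get_option( 'custom_plugin_message', 'Hello, World!' );
    echo '<input type="text" name="custom_plugin_message" value="' . esc_attr( $value ) . '" class="regular-text" />';
}
```

Inside `custom_plugin_settings_page()`, wrap the fields in a standard options form: a `<form method="post" action="options.php">` that calls `settings_fields( 'custom_plugin_options' )`, `do_settings_sections( 'custom-plugin-settings' )`, and `submit_button()`.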
So, roll up your sleeves, dive into the code, and unlock the full potential of your WordPress site with custom plugins! --- Thank you for reading my article! For more updates and useful information, feel free to connect with me on LinkedIn and follow me on Twitter. I look forward to engaging with more like-minded professionals and sharing valuable insights.
nitin-rachabathuni
1,857,519
Type Hinting and Annotations in Python: Improving Code Readability and Understanding
Python is a popular and powerful programming language known for its dynamic typing, where variable...
0
2024-05-18T10:46:16
https://dev.to/myexamcloud/type-hinting-and-annotations-in-python-improving-code-readability-and-understanding-49jj
python, programming, software, coding
Python is a popular and powerful programming language known for its dynamic typing, where variable types are inferred at runtime. While this allows for flexibility, it can make code more difficult to understand in a collaborative setting. To address this issue, Python introduced type hinting and annotations. This allows developers to annotate the expected types for function arguments and return values. For example:

```python
def sum(a: int, b: int) -> int:
    return a + b
```

In this function, the parameters and return type are denoted as integers. This makes it easier for other people (and yourself) to understand the function and its intended use. It's important to note that type annotations are not enforced during runtime. This means that the function will still run even if incorrect types are passed in. However, this can lead to unexpected behavior or errors. For instance, if you try to add a string and an integer in the sum function, Python will throw a TypeError since it doesn't know how to add these two types together. To better understand this concept, let's look at some sample test cases for the sum function:

```python
print(sum(2, 2))               # 4
# print(sum('python', 1))      # TypeError: can only concatenate str (not "int") to str
# print(sum(3, 'python'))      # TypeError: unsupported operand type(s) for +: 'int' and 'str'
print(sum('python', 'world'))  # pythonworld
```

In the first case, adding two integers gives the expected sum. In the two commented-out cases, where a string and an integer are mixed, we get TypeErrors. In the last case, two strings are happily concatenated even though the annotations say int; the hints are documentation, not runtime enforcement.

To use type hinting, we use the colon (:) after the parameter name, followed by the type of data expected. For example, consider a function that calculates the area of a rectangle. We can specify the input as length (int) and width (int) and the return type as float as follows:

```python
def calculate_area(length: int, width: int) -> float:
    area = length * width
    return area
```

Here, the int and float types are built-in data types in Python. Using type hinting in this way makes the function's purpose clear and helps with debugging and code maintenance. Without type hinting, the function would be written as follows:

```python
def calculate_area(length, width):
    area = length * width
    return area
```

Without specifying the parameter types, it is not immediately clear what data types the function expects, making it less readable. Apart from built-in types, we can also specify custom-defined types. For example, if we have a custom-defined class called Person, we can use it as the type of a function parameter or return value as follows:

```python
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age


def get_person_info(person: Person) -> str:
    return f"{person.name} is {person.age} years old."
```

Here, we specify the type of the person parameter as Person and the return type as str (string). The typing library, introduced in Python 3.5, offers even more advanced features for type hinting. Let's look at some examples to understand its use better. To specify that a parameter can accept multiple data types, we use the Union type annotation. For instance, in a function that calculates the sum of values in an iterable, we can specify the parameter to accept either a numpy array or a list:

```python
from typing import Union

import numpy as np


def sum(iterable: Union[np.ndarray, list]) -> float:
    total = 0
    # function body to calculate the sum of values in iterable
    return total
```

We can also specify that the members of a list parameter should all be of a particular type. For example, in the same sum function as above, let's say we want the list members to be of type float. We can do so using the following type hint:

```python
def sum(iterable: Union[np.ndarray, list[float]]) -> float:
    total = 0
    # function body to calculate the sum of values in iterable
    return total
```

The typing library also includes features such as Optional, which allows a parameter to accept either a certain type or None. It also supports various iterables, Generics, and custom-defined types, providing developers with more precise and clear ways to express complex data structures and relationships.

In conclusion, type hinting in Python greatly improves code readability and maintainability. Its basic usage helps to make function purpose clear, while the features in the typing library offer more advanced options for expressing data types. As projects grow and become more complex, using type hinting becomes even more beneficial.

***MyExamCloud Study Plans***

[Java Certifications Practice Tests](https://www.myexamcloud.com/onlineexam/javacertification.courses) - MyExamCloud Study Plans
[Python Certifications Practice Tests](https://www.myexamcloud.com/onlineexam/python-certification-practice-tests.courses) - MyExamCloud Study Plans
[AWS Certification Practice Tests](https://www.myexamcloud.com/onlineexam/aws-certification-practice-tests.courses) - MyExamCloud Study Plans
[Google Cloud Certification Practice Tests](https://www.myexamcloud.com/onlineexam/google-cloud-certifications.courses) - MyExamCloud Study Plans
[MyExamCloud Aptitude Practice Tests Study Plan](https://www.myexamcloud.com/onlineexam/aptitude-practice-tests.course)
[MyExamCloud AI Exam Generator](https://www.myexamcloud.com/onlineexam/testgenerator.ai)
myexamcloud
1,857,549
How Can Sports Betting Software Help You Win More Bets?
In the fast-paced world of sports betting, success often hinges on having the right tools at your...
0
2024-05-18T11:39:23
https://dev.to/mathewc/how-can-sports-betting-software-help-you-win-more-bets-5e2f
webdev, softwaredevelopment, programming
In the fast-paced world of sports betting, success often hinges on having the right tools at your disposal. With the advent of advanced technologies, sports betting software has become a crucial asset for both casual bettors and professionals. This software not only simplifies the betting process but also provides strategic insights that can enhance your chances of winning. In this blog, we will explore how sports betting software can help you win more bets, highlighting its key features and benefits. **What is Sports Betting Software?** Sports betting software is a comprehensive digital tool designed to assist bettors in making informed decisions. A leading **[sports betting software development company](https://innosoft-group.com/sports-betting-software-development/)** specializes in creating these solutions, ensuring they are equipped with features that cater to the dynamic needs of the betting community. These software applications offer a range of functionalities, from real-time odds analysis to automated betting strategies, all aimed at optimizing your betting experience and outcomes. **How Sports Betting Software Helps You Win More Bets:** **Here's how this software can give you an edge over the bookies:** **Data-Driven Decisions:** One of the primary benefits of sports betting software is its ability to provide data-driven insights. By analyzing vast amounts of data, the software can identify patterns and trends that might not be immediately apparent. This enables bettors to make more informed decisions based on statistical evidence rather than gut feeling. **Real-Time Updates:** Sports betting software provides real-time updates on odds, scores, and other crucial information. This allows bettors to react quickly to changes in the betting environment, such as injuries or weather conditions, which can significantly impact the outcome of a bet. **Automated Betting:** Many sports betting software applications offer automated betting features. This means you can set specific criteria for your bets, and the software will place bets on your behalf when those criteria are met. This reduces the risk of human error and ensures that you never miss a profitable betting opportunity. **Betting Strategies and Analytics:** Advanced sports betting software comes with built-in betting strategies and analytical tools. These tools can help you devise effective betting strategies by analyzing past performances, current form, and other relevant factors. The software can also simulate different betting scenarios, allowing you to refine your strategies before placing actual bets. **Latest Top 5 Must-Have Features of Sports Betting Software:** **Live Betting:** Live betting functionality allows you to place bets in real-time as the game progresses. This feature is crucial for capitalizing on dynamic odds and making the most out of rapidly changing game situations. **Comprehensive Statistics and Analysis:** Access to detailed statistics and analytical tools is essential for making informed betting decisions. Look for software that offers in-depth analysis of teams, players, and past performances. **User-Friendly Interface:** An intuitive and user-friendly interface ensures that you can navigate the software easily and place bets without any hassle. A good interface will also include customizable dashboards and alerts. **Mobile Compatibility:** In today's mobile-first world, having software that works seamlessly on smartphones and tablets is a must. 
This allows you to place bets and monitor outcomes on the go. **Security Features:** Ensuring the safety of your data and transactions is paramount. Look for sports betting software with robust security measures, including encryption, secure payment gateways, and reliable customer support. **Expertise of Innosoft Group in Sports Betting App Development:** Innosoft Group emerges as a frontrunner in the realm of sports betting app development. Their expertise lies in crafting innovative solutions that encompass the aforementioned features, leveraging state-of-the-art technologies. With a dedicated team of professionals skilled in creating user-centric and secure betting platforms, Innosoft Group ensures the delivery of high-quality sports betting software that meets industry standards and exceeds user expectations. **Conclusion:** Investing in sports betting software can significantly enhance your betting experience and increase your chances of winning. By providing data-driven insights, real-time updates, automated betting, and strategic analysis, this software equips you with the tools you need to make smarter bets. Partnering with a reputable and experienced **[casino game developer](https://innosoft-group.com/online-casino-game-development-company/)** ensures that you have access to the latest features and technologies, helping you stay ahead in the competitive world of sports betting. With the right software and a solid strategy, you can turn your passion for betting into a profitable endeavor.
mathewc
1,857,615
Advancing Nursing Practice: Mastering Strategies for NURS FPX
Embarking on the journey of nursing education is a transformative experience, where nurses hone their...
0
2024-05-18T13:51:12
https://dev.to/ryanhiggs/advancing-nursing-practice-mastering-strategies-for-nurs-fpx-4aag
tutorial, discuss, design, interview
Embarking on the journey of nursing education is a transformative experience, where nurses hone their skills and knowledge to elevate their practice to new heights. In this guide, "Advancing Nursing Practice: Mastering Strategies for NURS FPX", we delve into essential strategies and insights to empower nurses as they navigate their educational journey with a [capella university flexpath assessment](https://nursfpx.com/). From embracing evidence-based practice to fostering leadership skills, this guide provides actionable strategies to empower nurses on their path to mastery. **Understanding the Framework: Insights into NURS FPX** 1. Embracing Evidence-Based Practice: Explore the significance of evidence-based practice in [nursing writing services](https://bsnwritingservices.com/), equipping students with the tools to deliver high-quality, patient-centered care. Dive into the process of critically appraising research evidence and integrating findings into clinical practice, ensuring nurses make informed decisions based on the best available evidence. 2. Fostering Leadership Skills: Cultivate essential leadership skills within the NURS FPX program, preparing nurses to take on roles as advocates, change agents, and leaders in healthcare settings. Develop communication, collaboration, and decision-making skills necessary for effective leadership in nursing practice, empowering nurses to drive positive change and [Interdisciplinary Plan Proposal](https://nursfpx.com/interdisciplinary-plan-proposal/). **Strategies for Mastery: Navigating Your Educational Journey** 1. Engaging in Experiential Learning: Capitalize on experiential learning opportunities within the NURS FPX program, including clinical practicums, simulations, and hands-on exercises. Gain practical experience and apply theoretical knowledge to real-world clinical scenarios, refining your clinical skills and judgment under the guidance of [Stakeholder Presentation](https://nursfpx.com/stakeholder-presentation/). 2. Embracing Interdisciplinary Collaboration: Embrace interdisciplinary collaboration within the healthcare team, fostering partnerships with colleagues from diverse backgrounds and specialties. Recognize the value of teamwork and collaboration in improving patient outcomes and enhancing the quality of care, promoting a culture of mutual respect and shared accountability. **Strategies for Success: Insights from NURS FPX** 1. Prioritizing Professional Development: Prioritize ongoing professional development and lifelong learning to stay abreast of advancements in nursing practice and healthcare delivery. Engage in continuing education, certification programs, and professional conferences to expand your knowledge, skills, and competencies in [Enhancing Quality and Safety](https://nursfpx.com/enhancing-quality-and-safety/). 2. Cultivating Resilience and Well-Being: Cultivate resilience and well-being to navigate the challenges and demands of nursing practice with grace and strength. Practice self-care, stress management, and mindfulness techniques to maintain physical, emotional, and psychological well-being, ensuring you can deliver compassionate and effective care to patients. **Conclusion** In conclusion, "Advancing Nursing Practice: Mastering Strategies for NURS FPX" offers valuable guidance and insights for nurses embarking on their educational journey with Capella's FlexPath program. 
By embracing evidence-based practice, fostering leadership skills, engaging in experiential learning, embracing interdisciplinary collaboration, prioritizing professional development, and cultivating resilience and well-being, nurses can elevate their practice to new heights of excellence and impact. With dedication, determination, and a commitment to lifelong learning, nurses can emerge from the NURS FPX program as competent, confident, and compassionate healthcare professionals poised to make a meaningful difference in the lives of their patients and communities.
ryanhiggs
1,857,628
Works, don't touch it? Or never ending optimization game?
I've built a photo sharing app https://www.wisaw.com. I use it on my iPhone all the time, and it...
0
2024-05-18T15:09:24
https://dev.to/dmitryame/works-dont-touch-it-or-never-ending-optimization-game-1aoo
javascript, performance, optimization
---
title: Works, don't touch it? Or never ending optimization game?
published: true
description:
tags: JavaScript, Performance, Optimization
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ac6oms24akf8ojerfvm8.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-05-18 14:05 +0000
---

I've built a photo sharing app https://www.wisaw.com. I use it on my iPhone all the time, and it works great for me! Recently a friend complained that it's really slow on his iPad. So, I started digging...

Turns out, the app retrieves the photos by calling the GraphQL backend, 1 day at a time, until it fills the screen. On smaller devices it works like a charm, but on tablets -- lots of network calls, and a painful user experience. My goal was to figure out how to make the backend return 10 days (or more) worth of photos in one call, without making any changes to the frontend -- it's still a pain to re-deploy mobile apps.

Here is my first brute force approach:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ac6oms24akf8ojerfvm8.png)

This actually worked. Right away, I saw the number of calls to AWS Lambda, where I host my GraphQL backend, drop drastically. Which means the application does not have to open and maintain as many network connections on slower bandwidth networks. The UX is much smoother. So, should I stop here?

Well, there was still something that bothered me about the code. To my taste, the number of repetitive lines of code is insane -- typical Copy/Paste design pattern. Generally speaking I'm OK seeing it in the code when it has 2 occurrences, but 10!!! is just way too many.

Here is the revised version:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l9dzy2ni9huonwe8kwuc.png)

Much better. JS `Promise.all` is awesome. The final version of the code is much shorter. The number of async calls can be adjusted dynamically by playing with the `arraySize` const. Each function call returns an array of exactly the same object types, so I can chain `flat(1)` to the end. I can probably even consider passing the `arraySize`, depending on the device screen size, into the lambda call at some point, but that's a different story for some other time (and that would require a new mobile app release).

So, the complexity didn't really go away, arguably, the code became even more complex. But, I thought it was pretty cool to be able to optimize the UX dynamics without touching the frontend. The spikiness of the network calls moved from mobile devices running on slow networks, to calls between the middleware and the database, which are much faster and more reliable.

Should I have touched the code to begin with? It was working OK after all. At the end of the day, the answer is -- yes! Absolutely! I've learned how to `Promise.all` dynamically on a variable size array of functions. It took me like an hour of hacking on a Friday night before heading out to a weekend. And, I've had a lot of fun, and, my friend, who started all this, is happy running my app on his tablet -- that's all that matters.
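Since the code in this post lives in the two screenshots, here is a rough sketch of the pattern the revised version describes: build a variable-size array of async calls, fan them out with `Promise.all`, and flatten the result with `flat(1)`. The names below are illustrative, not the actual wisaw code.

```js
// Illustrative sketch only -- not the actual wisaw backend code.
// fetchPhotosForDay(daysAgo) is assumed to return a Promise resolving to an array of photos.
const arraySize = 10; // how many days worth of photos to fetch in one call

async function fetchRecentPhotos(fetchPhotosForDay) {
  // Build an array of async calls, one per day.
  const requests = Array.from({ length: arraySize }, (_, daysAgo) =>
    fetchPhotosForDay(daysAgo)
  );

  // Run them concurrently; each call resolves to an array of photo objects.
  const photosByDay = await Promise.all(requests);

  // Flatten [[...day0], [...day1], ...] into a single array.
  return photosByDay.flat(1);
}
```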
dmitryame
1,858,057
frvr ai game century fight 2060 done by:nassim haddad
Hello again dear developers i'm going to share with you today my game that was created using ai the...
0
2024-05-19T04:14:26
https://dev.to/nassim_haddad_4fdd304a846/frvr-ai-game-century-fight-2060-done-bynassim-haddad-1amg
gamedev, ai
Hello again dear developers i'm going to share with you today my game that was created using ai the link will be provided to test it out (https://beta.frvr.ai/@njnjnj/play/M-sixgc6MS?utm_medium=share&utm_source=link) if you guys like my game and published content you can also suppourt my work through this link by copying it to your browser (coindrop.to/nassim-98) thanks again and happy coding to you guys stay safe.
nassim_haddad_4fdd304a846
1,858,289
Terratest vs Terraform/OpenTofu Test: In-Depth Comparison
The release of Terraform and OpenTofu 1.6 brought with it the general availability of the Terraform...
0
2024-05-20T12:56:00
https://www.env0.com/blog/terratest-vs-terraform-opentofu-test-in-depth-comparison
terraform, devops, testing, infrastructureascode
The release of [Terraform](https://www.env0.com/blog/what-is-terraform-cli) and [OpenTofu](https://www.env0.com/blog/opentofu-the-open-source-terraform-alternative) 1.6 brought with it the general availability of the Terraform Test Framework. This framework allows you to write unit and integration tests for your [Terraform modules](https://www.env0.com/blog/terraform-modules) and configurations. Why should you test your IaC? And how does the Terraform Test Framework compare to Terratest? That's what we're going to explore in this article. **Introduction - Comparing Terraform Testing Frameworks** --------------------------------------------------------- A key part of any software development process is testing. It's how you ensure that the code you wrote actually does what it is supposed to do. We write automated tests to ensure that we get the correct output for a given input, that our code handles errors gracefully, and that it works as expected when integrated with other parts of the system. Infrastructure as Code is still code and should be treated as such, but testing infrastructure code is a little different than traditional software testing. Terraform code isn't composed of functions or methods for you to write unit tests against.  Terraform code doesn't produce an API that needs to be validated against a specification, and it doesn't integrate with other application components in a traditional sense. Instead, Terraform code describes the desired state of your infrastructure, and relies on the core binary and provider plugins to make that state a reality. But there's still testing to do! In my mind there's three categories of testing for Terraform. First, there's basic validity. Is your code syntactically correct and does it parse logically? Have you specified any arguments that don't actually exist? Forgotten to close a curly brace? Or provided an argument value that's of the wrong data type? Next, there's testing individual components of your code, aka unit testing. This includes objects like input variables, resource blocks, and outputs. Unit testing is especially critical when you've added conditional logic, functions, and validation into your code. If you get bad input is it handled correctly? If you make a resource creation conditional, can you verify that it works correctly? The final category is integration testing, which verifies the deployed infrastructure works as expected. If you're deploying a network with complex routing and firewall rules, can you verify that it's configured correctly to pass good traffic and block the bad stuff? When you update a module in your larger configuration, does everything continue to function properly? For basic validity, the built-in `terraform validate` command along with third-party tools like [tflint](https://www.env0.com/blog/tflint-with-custom-flows) can help. For unit and integration testing, there was no native solution before the Terraform Testing Framework, so third-party testing frameworks were developed. The two most popular testing solutions are Terratest by Gruntworks (also the creators of [Terragrunt](https://www.env0.com/blog/terragrunt)) and kitchen-terraform based on Chef's InSpec. Both follow a similar pattern, and for this article, we will look at Terratest specifically. **What is Terratest** --------------------- Terratest is a testing framework for Terraform developed by the good folks at Gruntworks. 
It uses the Golang testing library along with Go modules developed by Gruntworks for unit and integration testing with Terraform code. For example, you might be performing variable validation with the following code:

```hcl
variable "vm_size" {
  description = "Size of VM to use. Must be D or E series and between 4 and 12 CPUs."
  type        = string

  validation {
    condition     = can(regex("^Standard_[DE]([4-9]|1[0-2])[a-z]*_v[0-9]+$", var.vm_size))
    error_message = "Variable vm_size must be D or E series with 4-12 CPUs. Value provided was ${var.vm_size}"
  }
}
```

As part of your unit test, you want to make sure that valid values are accepted and invalid values are rejected. Since regular expressions are notoriously tricky, you might have a unit test that tries several correct and incorrect values. This verifies that your initial expression is correct, and provides a way to test the expression later if the requirements change.

Using Terratest, you would have the testing framework attempt to generate a valid plan with the range of values you'd like to test. The code might look something like this:

```go
package tests

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

var tests = []struct {
	vmSize string
	valid  bool
}{
	{"Standard_E4s_v5", true},
	{"Standard_G4s_v5", false},
}

func TestVmSizeValidation(t *testing.T) {
	for _, tt := range tests {
		t.Run(tt.vmSize, func(t *testing.T) {
			terraformOptions := &terraform.Options{
				TerraformDir: "../",
				Vars: map[string]interface{}{
					"vm_size": tt.vmSize,
				},
			}
			_, err := terraform.InitAndPlanE(t, terraformOptions)
			if tt.valid {
				assert.NoError(t, err)
			} else {
				assert.Error(t, err)
			}
		})
	}
}
```

Each value in `tests` is checked to see if it produces a valid plan, with the expected result expressed by the `valid` boolean value. Here is the truncated output from a test run:

```
> go test

TestVmSizeValidation/Standard_E4s_v5 2024-03-19T13:48:39-04:00 retry.go:91: terraform [init -upgrade=false]
TestVmSizeValidation/Standard_E4s_v5 2024-03-19T13:48:39-04:00 logger.go:66: Running command terraform with args [init -upgrade=false]
...
PASS
ok      validation/tests        1.365s
```

Terratest can also perform integration tests to validate that infrastructure is deployed and functioning correctly. For example, consider a Terraform configuration that creates a static website in Azure using an Azure Storage account. Terratest could stand up an instance of the website and verify that it produces a "200" response code when queried. When the test is over, Terratest can destroy the infrastructure.
An example snippet might look like this:

```go
func TestTerraformHttpExample(t *testing.T) {
	t.Parallel()

	// A unique ID to use for the website name
	uniqueID := rand.Intn(100) + 100

	// Create website name value
	websiteName := fmt.Sprintf("testingsite%d", uniqueID)

	// Construct the terraform options
	terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
		// The path to where our Terraform code is located
		TerraformDir: "../",

		// Variables to pass to our Terraform code using -var options
		Vars: map[string]interface{}{
			"website_name": websiteName,
		},
	})

	// At the end of the test, run `terraform destroy` to clean up any resources that were created
	defer terraform.Destroy(t, terraformOptions)

	// This will run `terraform init` and `terraform apply` and fail the test if there are any errors
	terraform.InitAndApply(t, terraformOptions)

	// Run `terraform output` to get the value of an output variable
	instanceURL := terraform.Output(t, terraformOptions, "homepage_url")

	// Setup a TLS configuration to submit with the helper, a blank struct is acceptable
	tlsConfig := tls.Config{}

	// Verify that we get back a 200 OK with the expected instanceText
	http_helper.HttpGet(t, instanceURL, &tlsConfig)
}
```

Terratest generates a random integer to append to the website name. Then it creates the infrastructure using Terraform and grabs the output `homepage_url`. Next, Terratest sends an HTTP GET request against the website, and the test passes if the HTTP request is successful. Finally, Terratest destroys the temporary infrastructure when the test is complete. Here is the truncated output of a test run:

```
> go test -v
TestTerraformHttpExample 2024-03-19T13:46:06-04:00 retry.go:91: terraform [init -upgrade=false]
TestTerraformHttpExample 2024-03-19T13:46:06-04:00 logger.go:66: Running command terraform with args [init -upgrade=false]
...
TestTerraformHttpExample 2024-03-19T13:47:25-04:00 logger.go:66: Destroy complete! Resources: 4 destroyed.
TestTerraformHttpExample 2024-03-19T13:47:25-04:00 logger.go:66: PASS
ok      testing-framework/tests  79.843s
```

The test passed and we know that the configuration successfully deploys a website.

**What is Terraform/OpenTofu Test**
-----------------------------------

The Terraform Testing Framework solves the same challenges as Terratest, but through a native command in the Terraform/OpenTofu binary and using tests that are written in HCL. As opposed to having to learn another language, like Go or Ruby, you can write your Terraform tests in the same declarative language you've already grown comfortable with.

The tests are written in a declarative manner, since Terraform already understands how to process configurations and manage infrastructure. You can focus on writing useful tests and not the mechanics of performing IaC testing.

For unit and integration testing, the Terraform Testing Framework is a great solution. Tests are placed in files ending in either **tftest.hcl** or **tftest.json** (yes, you can write your tests in JSON too, or have another language produce them programmatically). The core construct for testing is the newly introduced `run` block.
Each step of a testing sequence is defined using run blocks with the following general syntax:

```hcl
run "<run name>" {
  command = <plan or apply>

  variables {
    <input variable name> = <value for run>
  }

  assert {
    condition     = <condition to test, evaluating to true or false>
    error_message = <message if condition is false>
  }
}
```

Within the run block you can set input variable values and provider configurations to use for that particular run. You can also set [input variable](https://www.env0.com/blog/terraform-variables) and provider configurations at the beginning of the file, which will be used by all runs unless overridden inside the run block.

Each run can either use the [`plan`](https://www.env0.com/blog/terraform-plan) or [`apply`](https://www.env0.com/blog/terraform-apply-guide-command-options-and-examples) command to execute the run. Planning runs are great for unit testing when all values will be known before infrastructure deployment. Meanwhile, apply runs are better suited for integration testing when you want to deploy the actual infrastructure and test it.

### **Unit Testing**

Going back to our unit testing example with Terratest, how could we accomplish the same test with the Testing Framework? Since we don't need to deploy actual infrastructure to check our input variable validation, we can use a `plan` command type. We also want to go through a list of variable values and test each one. One of the run blocks would look something like this:

```hcl
run "test_e_series" {
  command = plan

  variables {
    vm_size = "Standard_E4s_v5"
  }
}
```

Since our goal is to produce a valid plan, we don't need to include an assertion. The valid plan is proof that our value was accepted by the input variable validation block. If we want to test an invalid value, the code would look a little different:

```hcl
run "test_g_series" {
  command = plan

  variables {
    vm_size = "Standard_G4s_v5"
  }

  expect_failures = [
    var.vm_size
  ]
}
```

We are expecting that the validation block will reject the value we've passed, so we can add the `expect_failures` argument with a list of validations we expect to fail. Additional run blocks can be added for each value we want to test for our `vm_size` input variable. At the moment the run block doesn't support the [`for_each`](https://www.env0.com/blog/terraform-for-each-examples-tips-and-best-practices) meta-argument, but that support is coming soon.

To run the tests, the command `terraform test` or `tofu test` is executed from the configuration directory. By default, the test command will look for a directory named `tests` and execute tests defined by **.tftest.hcl** or **.tftest.json** files inside. You can also specify a different folder or specific test files to run. The output of the `terraform test` command is shown below:

```
> terraform test
tests\variable_tests.tftest.hcl... in progress
run "test_e_series"... pass
run "test_g_series"... pass
tests\variable_tests.tftest.hcl... tearing down
tests\variable_tests.tftest.hcl... pass

Success! 2 passed, 0 failed.
```

Each test defined in the file **variable_tests.tftest.hcl** was executed in the order the `run` blocks appear in the file. That's one interesting quirk of the Terraform Testing Framework. While Terraform in general doesn't care about the order in which blocks appear in a configuration, `run` blocks are executed in order. Additionally, when there are multiple testing files, they are executed in lexicographical order.

That covers unit testing, but what about integration testing?
### **Integration Testing**

We can draw on the integration test from Terratest and see how we can reproduce it with the Terraform Testing Framework. What we need to do is generate a unique website name and deploy infrastructure defined in our Terraform module. The following code block can handle that for us:

```hcl
run "execute" {
  command = apply

  variables {
    website_name = "Test${substr(uuid(),0,5)}"
  }
}
```

The `command` argument set to `apply` will tell Terraform to actually deploy the infrastructure found in the root module, and we're using the `variables` block to set the value for the `website_name` input variable.

Now how do we go about testing the website? We can use the `http` data source to do so, but we don't have an http data source in our configuration, nor do we need one for normal operations. The solution is to reference a module that does have the `http` data source and feed it the `homepage_url` [output](https://www.env0.com/blog/terraform-output-variables-in-depth-guide-with-examples) from the previous run block:

```hcl
run "check_site" {
  command = apply

  variables {
    website_url = run.execute.homepage_url
  }

  module {
    source = "./tests/loader"
  }

  assert {
    condition     = data.http.main.status_code == 200
    error_message = "Website ${run.execute.homepage_url} returned the status code ${data.http.main.status_code}. Expected 200."
  }
}
```

The `module` block inside a run block uses the `source` argument to refer to either a module stored locally or on the Terraform registry. The run block will deploy the configuration defined inside the module rather than the root module. Our loader module has the following configuration:

```hcl
variable "website_url" {
  type = string
}

data "http" "main" {
  url = var.website_url
}
```

It takes the `website_url` input variable and uses the `http` data source to query the URL. The objects defined in the module can be referred to directly in the `run` block, so we have our `assert` block with the following condition:

```hcl
condition = data.http.main.status_code == 200
```

As long as the status code is "200", the assertion passes. Let's see how our test does!

```
> terraform test
tests\integration.tftest.hcl... in progress
run "execute"... pass
run "check_site"... pass
tests\integration.tftest.hcl... tearing down
```

When there are multiple `run` blocks in a test file, Terraform maintains the state of each run until the last test is completed. Then it tears down the tests in reverse order. Each subsequent run block has access to the outputs of the previous blocks using the syntax `run.<run_name>.<output_name>`.

Terratest vs Terraform Test
---------------------------

From the examples we just ran through, you can see that the Terraform Test Framework can address many of the use cases for Terratest. That being said, Terratest has been around longer and is more fully featured and extensible. If you've already set up a robust process using Terratest in your organization, I don't think it makes sense to try and move to the Terraform Testing Framework. Terratest is still an actively maintained project with a dedicated community.

However, if you are just getting started with testing your Terraform code, here are some points to consider.

### Language Support

Terratest uses Golang and the Terraform Testing Framework uses HCL. If you're a heavy user of Terraform, then you've probably become quite proficient in HCL. Learning a new language AND developing Terraform tests is a pretty tall order. You may find it much simpler to use the Testing Framework if you aren't already familiar with Go. I know I did!
### Flexibility

In Terratest, you are responsible for writing the Go code to orchestrate the tests, giving you the freedom to integrate other Go modules and create custom workflows. This freedom and flexibility might be a huge boon, or just another thing to manage and make sure everyone on your team is using consistently. The Terraform Testing Framework is more prescriptive in its testing workflow, which reduces flexibility, but also creates consistency.

### Community Support

The Terraform Testing Framework is relatively new and still evolving rapidly. Currently there is no large repository of tests and examples to draw upon. Terratest, on the other hand, has been around for many years and has a robust library of tests, as well as many examples and blog posts on how to implement it.

The following table compares the features of Terratest versus Terraform/OpenTofu Test:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zhpf410pgki2fq47ot1l.png)

Bear in mind that this table is current as of publication, and both the Terraform and OpenTofu projects are actively updating and enhancing their implementations of the testing framework. Don't be surprised if JUnit support or `for_each` looping is supported in the next versions.

**CI Testing and env0**
-----------------------

While you can run your testing manually from your terminal, testing is an integral part of the CI/CD process. To that end, [env0](https://www.env0.com/) has introduced a new CI testing feature that brings the OpenTofu test feature into the env0 private module registry.

When you add a module to the env0 private registry, it will now include an option to enable testing when updates are pushed for the module. You can choose to run the tests on every pull request or only when a PR is merged to the tracked branch.

![](https://assets-global.website-files.com/63eb9bf7fa9e2724829607c1/6640e029c3499e50ba205dae_UEoQtcghJGdkohQhw6phaUyk3gEQmwW-Vazrw_EejxvqsOr-W5Qdn3SwyLwBFhVYKMhpq3RWW4vjOcDPRKFHUpMYZ40guEc0iF9r2H6QEaHfls2VDEWpHvl2HuNeNpdm5jVAEBWGOu6HFfHCmpSOUyk.png)

The results of the test are captured in the private registry and displayed as part of each module's information. You can open the module and drill down into the logs for each test to see where things are failing.

![](https://assets-global.website-files.com/63eb9bf7fa9e2724829607c1/6640e02903a77bdd4a612933_iE9rm0_1froH5kW2wTobn7Jd9g-2T8tNosynY6Q1g87zW2DFH_1IP7FC4f0m7GY6uh0hG1KscVWnjCohyHaxY6_xFeGXuQJ7b3SOQu5gppOQngqqs06bN9A0sLHfkiZ_qm825ZZbzjLYlUeZ-hU-vjQ.png)

The CI testing feature is currently in Beta, and it is available to everyone, including the free tier. You can find out more by reading the introductory [blog post](https://www.env0.com/blog/celebrating-opentofu-ga-with-our-new-ci-testing-feature) and reading through [the docs](https://docs.env0.com/docs/modules-continuous-integration-testing).
env0team
1,858,342
How I went from failing every interview to a job at Amazon
Originally posted here I studied computer science at Stanford. You probably would’ve expected me to...
0
2024-05-19T14:28:59
https://dev.to/alex420/how-i-went-from-failing-every-interview-to-a-job-at-amazon-3h5b
interview, recruiting, jobs, programming
Originally posted [here](https://sites.google.com/view/jobs2k24/home) I studied computer science at Stanford. You probably would’ve expected me to be swimming in job offers. In my sophomore year and first-half of junior year, I got a lot of interviews, and I failed every single one of them. And not in a, “this person is qualified, but other people are better qualified” type of way. It was more like a, “Why did we ever bother interviewing this person?” For example, I would stumble on simple questions like generating all permutations of a set by writing a recursive program that wouldn’t even terminate (even though it was something you would learn how to code by the second intro to CS class). About halfway through the interviews, the interviewer would seemingly give up in frustration. A few days later, I would receive the dreaded, “Thank you for your interest” email. “Maybe this just wasn’t meant for me,” I would tell myself. I was a straight A student, but that would only get me to the interview stage, where I would inevitably fail in a spectacular fashion. All of my friends seemed to be killing it, getting internships at the standard Google/Facebook/hot silicon valley startup. Maybe people were “handing out” jobs, but they certainly weren’t handing it out to me. By the time I reached the end of my junior year, there wasn’t any more time for me to complain about how the technical interview was unfair or how I was behind because I never did competitive programming. I started working through CTCI (Cracking the Coding Interview) and EPI (Elements of Programming Interviews). In the beginning, I would always read a question, then inevitably flip to the answer because I didn’t know how to solve it. Eventually, I could do the easy problems, then the medium problems, and so on. [Apply Now](https://sites.google.com/view/jobs2k24/home ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h8ugx6xm9lkwuzd7ta4s.jpg))
alex420
1,858,347
Go Lang Cheatsheet
Table of Contents Getting Started Basic Syntax Data Types Control...
27,438
2024-05-19T14:47:38
https://dev.to/syedmuhammadaliraza/go-lang-cheatsheet-31d8
go, webdev, programming, development
### Table of Contents 1. **Getting Started** 2. **Basic Syntax** 3. **Data Types** 4. **Control Structures** 5. **Functions** 6. **Structs and Interfaces** 7. **Concurrency** ### 1. Installation To install Go, download it from the [official website](https://golang.org/dl/) and follow the installation instructions for your operating system. Create a new Go file with the `.go` extension, for example, `main.go`. ### 2. Basic Syntax #### Hello, World! ```go package main import "fmt" func main() { fmt.Println("Hello, World!") } ``` #### Start - The `main` package is required for executable commands. - The `main` function is the entry point of a Go program. - `fmt` is a standard library package for formatted I/O. ### 3. Data Types #### Variable Declaration ```go var name string = "Go" var age int = 10 ``` #### Short Variable Declaration ```go name := "Go" age := 10 ``` #### Constants ```go const Pi = 3.14 ``` #### Basic Types - **Integers**: `int`, `int8`, `int16`, `int32`, `int64` - **Unsigned Integers**: `uint`, `uint8`, `uint16`, `uint32`, `uint64` - **Floats**: `float32`, `float64` - **Complex**: `complex64`, `complex128` - **Others**: `byte` (alias for `uint8`), `rune` (alias for `int32`), `bool`, `string` ### 4. Control Structures #### If-Else ```go if age > 18 { fmt.Println("Adult") } else { fmt.Println("Minor") } ``` #### For Loop ```go for i := 0; i < 5; i++ { fmt.Println(i) } ``` #### Switch ```go switch day { case "Monday": fmt.Println("Start of the week") case "Friday": fmt.Println("End of the week") default: fmt.Println("Midweek") } ``` ### 5. Functions #### Basic Function ```go func add(a int, b int) int { return a + b } ``` #### Multiple Return Values ```go func swap(x, y string) (string, string) { return y, x } ``` #### Named Return Values ```go func split(sum int) (x, y int) { x = sum * 4 / 9 y = sum - x return } ``` ### 6. Structs and Interfaces #### Structs ```go type Person struct { Name string Age int } p := Person{Name: "Alice", Age: 30} ``` #### Methods ```go func (p Person) greet() string { return "Hello, " + p.Name } ``` #### Interfaces ```go type Animal interface { Speak() string } type Dog struct{} func (d Dog) Speak() string { return "Woof" } ``` ### 7. Concurrency #### Goroutines ```go go func() { fmt.Println("In a goroutine") }() ``` #### Channels ```go messages := make(chan string) go func() { messages <- "ping" }() msg := <-messages fmt.Println(msg) ``` #### Select ```go select { case msg1 := <-chan1: fmt.Println("Received", msg1) case msg2 := <-chan2: fmt.Println("Received", msg2) default: fmt.Println("No message received") } ``` ### Conclusion This cheat sheet provides a quick overview of Go's basic syntax and features. Go's simplicity and efficiency make it an excellent choice for various applications, from web servers to distributed systems. For more detailed information, refer to the [official Go documentation](https://golang.org/doc/) and Do Contact me on [LinkedIn Syed Muhammad Ali Raza](https://www.linkedin.com/in/syed-muhammad-ali-raza/)
syedmuhammadaliraza
1,858,922
Building an Elearning Course Creator with Streamlit and Lyzr Automata
In the rapidly evolving landscape of education and professional development, the demand for tailored,...
0
2024-05-20T06:54:57
https://dev.to/harshitlyzr/building-an-elearning-course-creator-with-streamlit-and-lyzr-automata-55im
In the rapidly evolving landscape of education and professional development, the demand for tailored, high-quality Elearning courses has surged. Educational institutions, businesses, and individual educators are striving to create engaging and effective online courses that cater to specific audiences. However, the process of designing a comprehensive Elearning course — from defining learning objectives to structuring content and assessments — can be time-consuming, resource-intensive, and requires a high level of expertise. **Problem** Many educators and course creators face several challenges in developing Elearning courses: **Time-Consuming Process:** Crafting a detailed course outline, including modules, lessons, and assessments, requires significant time and effort. **Expertise Requirement:** Creating an effective Elearning course demands expertise in instructional design, subject matter, and understanding of the target audience. **Resource Constraints:** Smaller institutions or individual educators often lack the resources to hire professional instructional designers. **Consistency and Quality:** Ensuring consistent quality and pedagogical soundness across all modules of a course can be challenging, especially for those new to Elearning design. **Objective** To address these challenges, we aim to develop an automated solution that leverages artificial intelligence to streamline the Elearning course creation process. The solution should: **Simplify Course Creation:** Allow users to quickly generate comprehensive course outlines by specifying the topic and target audience. **Maintain High Quality:** Ensure that the generated courses are pedagogically sound and tailored to meet the needs of the specified audience. **Reduce Time and Effort:** Significantly cut down the time and effort required to create a detailed course structure. **Accessibility:** Make Elearning course creation accessible to educators and institutions with limited resources. **Proposed Solution** We propose the development of an Elearning Course Creator application that uses Streamlit for the user interface and Lyzr Automata, powered by OpenAI’s GPT-4, for the backend AI processing. This application will: **User Input:** Collect user inputs for the course topic and target audience. **AI-Driven Content Generation:** Utilize GPT-4 to generate course titles, learning objectives, module structures, lessons, assessment strategies, and anticipated impact based on the provided inputs. **Interactive and User-Friendly:** Provide a simple, interactive interface for users to input their requirements and receive comprehensive course plans. **Secure API Access:** Ensure secure handling of API keys and data throughout the process. By implementing this solution, we aim to empower educators and course creators to produce high-quality Elearning content efficiently, thereby enhancing the overall educational experience for learners. **Setting Up the Environment** **Imports:** Imports necessary libraries: streamlit, libraries from lyzr_automata ``` pip install lyzr_automata streamlit ``` ``` import streamlit as st from lyzr_automata.ai_models.openai import OpenAIModel from lyzr_automata import Agent,Task from lyzr_automata.pipelines.linear_sync_pipeline import LinearSyncPipeline from PIL import Image ``` **Sidebar Configuration** We create a sidebar for user inputs, including an API key input for accessing the OpenAI GPT-4 model. This ensures that the API key remains secure. 
``` api = st.sidebar.text_input("Enter our OPENAI API KEY Here", type="password") if api: openai_model = OpenAIModel( api_key=api, parameters={ "model": "gpt-4-turbo-preview", "temperature": 0.2, "max_tokens": 1500, }, ) else: st.sidebar.error("Please Enter Your OPENAI API KEY") ``` **Course Creation Function** The course_creator function is responsible for generating the course. It defines the task and pipeline for the course creation process using Lyzr Automata. ``` def course_creator(topics, audiences): course_agent = Agent( prompt_persona="You Are Expert Elearning Course Creator", role="Course creator", ) course_task = Task( name="Course Creation Task", output_type=OutputType.TEXT, input_type=InputType.TEXT, model=openai_model, agent=course_agent, log_output=True, instructions=f"""You are an Expert Elearning Course creator.Your task is to create course based on used given input topic and specific Audience. Input Requirements: Specific Topic: User Enters topic for which they want to create course Specific Audience: user specifies for which audience they are making this course Output Requirements: Course Title: Course title is SEO friendly and Eye Catchy for specific audience Learning Objectives: Specify Learning objectives from generated course.Make objectives point wise and give bullet points for sub sections. Course Structure: Specify Course Structure With Module and in each module specify lessons Assessment Strategies: Specify Assessment strategies Anticipated Impact: Specify Anticipated Impact by doing this course Below is User Input: Topic : {topics} Audience: {audiences} """, ) output = LinearSyncPipeline( name="Generate Course", completion_message="Course Generated!", tasks=[course_task], ).run() return output[0]['task_output'] ``` **User Input and Course Generation** We use Streamlit’s input methods to capture the course topic and audience from the user. When the “Generate” button is clicked, the app calls the course_creator function and displays the generated course. ``` topic = st.text_input("Specify Topic", placeholder="Digital Marketing Fundamentals") audience = st.text_input("Specify Audience", placeholder="Small business owners and entrepreneurs") if api and st.button("Generate"): solution = course_creator(topic, audience) st.markdown(solution) ``` try it now: [https://lyzr-course-creator.streamlit.app/](https://lyzr-course-creator.streamlit.app/) For more information explore the website: [Lyzr](https://www.lyzr.ai/)
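One practical note: the post doesn't name the script file, so assuming you save the snippets above into a single file such as `app.py` (a placeholder name, not one given in the article), you can install the dependencies and launch the app locally with Streamlit's CLI, then paste your OpenAI key into the sidebar:

```
# install dependencies (same command as earlier), then launch the app
pip install lyzr_automata streamlit
streamlit run app.py
```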
harshitlyzr
1,859,475
Introduction to Database Connectivity with Go (SQL and NoSQL)
Introduction to Database Connectivity with Go (SQL and NoSQL) Whether you are building a...
0
2024-05-20T15:10:03
https://dev.to/romulogatto/introduction-to-database-connectivity-with-go-sql-and-nosql-2jp9
# Introduction to Database Connectivity with Go (SQL and NoSQL) Whether you are building a simple web application or a complex enterprise system, the ability to connect to databases is essential. In this guide, we will explore how to connect and interact with both SQL and NoSQL databases using GoLang. ## Connecting to SQL Databases GoLang provides excellent support for connecting to various SQL databases such as MySQL, PostgreSQL, SQLite, and more. To get started, make sure you have the respective database installed on your machine. ### Installing the Required Packages To connect to an SQL database in GoLang, you need to install the appropriate database driver package. Let's take MySQL as an example: ```shell $ go get github.com/go-sql-driver/mysql ``` Similarly, you can replace `mysql` in the command above with `postgres` or `sqlite` depending on your chosen database. ### Establishing a Connection Once you have installed the required packages, it's time to establish a connection with your SQL database: ```go import ( "database/sql" _ "github.com/go-sql-driver/mysql" ) func main() { db, err := sql.Open("mysql", "username:password@tcp(localhost:3306)/database_name") if err != nil { panic(err.Error()) } defer db.Close() // Start interacting with the database here } ``` Here `"username:password@tcp(localhost:3306)/database_name"` is the connection string format for MySQL. Make sure you replace it with your actual values. ### Executing Queries Now that we have established a connection successfully let's query our SQL database: ```go rows, err := db.Query("SELECT * FROM users") if err != nil { panic(err.Error()) } defer rows.Close() for rows.Next() { // Process each row of data // ... } ``` This code snippet demonstrates how to execute a simple SELECT query and iterate through the returned rows. ## Connecting to NoSQL Databases GoLang also offers excellent support for connecting to various NoSQL databases like MongoDB, Redis, Couchbase, and more. Let's focus on connecting with MongoDB in this guide. ### Installing the Required Packages To connect with MongoDB in GoLang, we need to install the official MongoDB driver: ```shell $ go get go.mongodb.org/mongo-driver/mongo ``` ### Establishing a Connection Now it's time to establish a connection with your MongoDB database: ```go import ( "context" "go.mongodb.org/mongo-driver/mongo" "go.mongodb.org/mongo-driver/mongo/options" ) func main() { clientOptions := options.Client().ApplyURI("mongodb://localhost:27017") client, err := mongo.Connect(context.TODO(), clientOptions) if err != nil { panic(err.Error()) } defer client.Disconnect(context.TODO()) // Start interacting with the database here } ``` Make sure you replace `"mongodb://localhost:27017"` in `options.Client().ApplyURI(...)` with your actual database information. ### Executing Queries After establishing a connection, let's learn how to run queries against our MongoDB database: ```go collection := client.Database("mydb").Collection("users") ctx := context.TODO() // Insert documents into the collection _, err = collection.InsertOne(ctx, bson.D{ {Key: "name", Value: "John"}, {Key: "age", Value: 25}, }) if err != nil { panic(err.Error()) } // Query documents from the collection curFindDocuments, _ := collection.Find(ctx, bson.M{"age": bson.M{"$gt": 20}}) defer curFindDocuments.Close(ctx) for curFindDocuments.Next(ctx) { // Process each document // ... } ``` In this code snippet, we insert a document into the collection and then query for documents with an age greater than 20. 
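The loops above leave `// Process each row of data` and `// Process each document` as placeholders. As a rough sketch for the MongoDB side, continuing directly from the snippet above (so `ctx` and `curFindDocuments` are already defined, and the `fmt` and `go.mongodb.org/mongo-driver/bson` packages are imported), each document could be decoded into a struct like this; the `User` type and its field tags are just an assumption based on the fields inserted earlier:

```go
// Illustrative only: decode each document returned by the Find query above.
type User struct {
	Name string `bson:"name"`
	Age  int    `bson:"age"`
}

for curFindDocuments.Next(ctx) {
	var u User
	if err := curFindDocuments.Decode(&u); err != nil {
		panic(err.Error())
	}
	fmt.Printf("%s is %d years old\n", u.Name, u.Age)
}

// Check whether the cursor stopped because of an error rather than exhaustion.
if err := curFindDocuments.Err(); err != nil {
	panic(err.Error())
}
```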
## Wrapping Up Congratulations! You have learned the basics of connecting to both SQL and NoSQL databases using GoLang. With this knowledge, you can now start building powerful applications that leverage database connectivity. Keep exploring GoLang's rich ecosystem for additional database drivers and advanced features to enhance your development experience. Happy coding!
romulogatto
1,860,018
Enhance Your Efficiency with These 13 Cursor Extensions
Here are 13 extensions that have proven to be quite beneficial 1. Prettier: An indispensable tool...
0
2024-05-21T06:07:45
https://dev.to/tuner9696/enhance-your-efficiency-with-these-13-cursor-extensions-4p54
Here are 13 extensions that have proven to be quite beneficial **1. Prettier:** An indispensable tool for JavaScript developers, Prettier automatically formats your code, ensuring a consistent style across your project. **2. Auto Close Tag:** This extension automatically adds a closing tag when you type an opening HTML tag, saving you keystrokes and reducing potential errors. **3. Auto Rename Tag:** If you modify an opening HTML tag, this extension will automatically adjust the corresponding closing tag, maintaining the integrity of your markup. **4. Code Spell Checker:** This extension automatically identifies spelling errors in your code, helping to improve the readability and professionalism of your work. **5. Material Icon Theme: **This extension assigns icons to your folders and files based on their roles, enhancing the visual organization of your project. **6. Polacode:** This extension allows you to easily capture screenshots of your code, useful for documentation or sharing snippets with your team. **7. CSS Peak:** This extension makes it easier to reference the CSS applied to a tag, improving your efficiency when working with stylesheets. **8. Thunder Client:** This is a GUI-operable WebAPI testing tool, simplifying the process of testing and debugging your APIs. **9. Quokka.js:** This is a verification tool for JavaScript and TypeScript, providing real-time feedback and helping to catch errors early in the development process. **10. ChatGPT - Genie AI:** This is a Cursor extension for ChatGPT Pair Colorizer, offering AI-powered assistance within your Cursor environment. **11. Live Server:** This extension allows you to run a local development server and view your changes in real-time in the browser, facilitating a smooth front-end development process. **12. SQLTools:** This is a powerful database management tool. It allows you to connect, query, and manage your SQL databases directly from Cursor. **13. Live Preview:** This extension provides a live preview of your web project directly in Cursor. It’s especially useful for front-end development as it updates in real-time as you modify your code.
tuner9696
1,860,179
AWS RDS Tutorial
Please have a look at this first:...
0
2024-05-22T07:28:49
https://dev.to/duyhung___/aws-rds-tutorial-3aha
Please have a look at this first: https://dev.to/hungrushb/amazon-rds-create-database-deep-dive-2m8j # I. Preparation ### Create VPC 1 . Create a simple VPC with name **labRDS**. Keep everything as default and create. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aeu3pjj1k3z8r6rjxoev.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aeu3pjj1k3z8r6rjxoev.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/569a91uc7rgopvim5aul.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bbzc6wdkwhyot6o3w8zo.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qtmsu605dzhexo5qofl6.png) 2 . Update **Enable auto-assign public IPv4 address** for 2 public subnets. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/shzuzvpq04d1qlm4wro1.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fyay35lp5vwcwca6m1nc.png) Repeat for public subnet 2. ### Create EC2 security group 3 . Head to **EC2 console**, choose **Security Groups**, and **Create security group**. Choose your VPC. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zo2lt2ouf35fo0nxj99o.png) Add inbound rules: - HTTP (80): Select HTTP from the list or enter port 80. - HTTPS (443): Select HTTPS from the list or enter port 443. - Custom TCP Rule (5000): Select Custom TCP Rule and enter port 5000. - SSH (22): Select SSH from the list or enter port 22. All source is **Anywhere IPv4** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ufzcidv4trsx3d0d5axi.png) Scroll down and create. ### Create a Security Group for a DB Instance 4 . Create another SG with name **labRDS-DB-SG**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lj7rentc31ickgj0ncsy.png) - Choose MYSQL/Aurora and port 3306. - For source, choose the EC2 SG we've created from last step. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n83778lzixzhsu367ba.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8k9zw0b0l1ylg27q24t9.png) Scroll down and create. ### Creating a DB Subnet Group 5 . Go to **RDS console**, create a new **subnet group**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dvpxxqc1ujlh0a4v2we3.png) 6 . Enter name **labRDS-subnet-group** and choose VPC correctly. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/klsmc5e3chrt0dquwlqy.png) Select 2 AZs that had 2 private subnet we created. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bm3sp9pjl4bxoe76x12u.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0uuwhsk5rqommhoebq5i.png) Scroll down and create. An **Amazon Relational Database Service (RDS) Subnet Group** is an **Amazon Virtual Private Cloud (VPC)** resource used to define a particular set of network addresses that are accessible by an RDS instance. The subnet group defines the IP ranges for each Availability Zone, which allows for increased availability and reliability of the database instance. It also ensures that only authorized databases can access the associated subnets and prevents any unauthorized access from outside sources. Additionally, by using a Subnet Group, the user has full control over which resources have access to their database instances. # II. Create EC2 instance 7 . 
Go to **EC2 console** and launch a new instance. Enter name **labRDS-server**. From the **Amazon Machine Image (AMI)**, choose an HVM version of **Amazon Linux 2023**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i12e3ermm56xywhhz3zm.png) Under the **Instance type** section, choose the **t2.micro** instance type, which is pre-selected by default. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zdm7yf3aezcjzenz7tyt.png) 8 . Create a new key pair **labRDS**, download the key and choose it from the options. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w4mpfbuo7banzzkp10iy.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/05arrht1wy23kgnkb6xe.png) 9 . Edit **Networking**, choose **VPC**, **Subnet**, enable **public IP** and **security group** exactly like in picture. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70rkoloxd5ds3uc8bvyi.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mepilswv5zegec96tyl7.png) Check again and launch. 10 . Access EC2 instance with the downloaded keypair. You can use **MobaXTerm** or **Putty**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cqkes0jvnlqbksvruros.png) # III. Creating a RDS DB Instance ### Install Git and NodeJS 11 . First, update your system packages to make sure you’re using the latest version. Find Git Packages. Install Git. Finally, check the Git version was successfully installed. ``` sudo dnf update -y sudo dnf search git sudo dnf install git -y git --version ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rcpiwfvk8t9r6nuxdy2q.png) 12 . Install Node.js with the script below. ``` #!/bin/bash # Color for formatting GREEN='\033[0;32m' NC='\033[0m' # Colorless # Check if NVM is installed if ! command -v nvm &> /dev/null; then # Step 1: Install nvm curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash source ~/.nvm/nvm.sh fi # Verify nvm installation nvm --version # Install the LTS version of Node.js nvm install --lts # Use the installed LTS version nvm use --lts # Verify Node.js and npm installation node -v npm -v # Step 4: Create package.json file (if it doesn't exist yet) if [ ! -f package.json ]; then npm init -y echo -e **${GREEN}Created file package.json.${NC}** fi #Step 5: Install necessary npm packages echo -e **Installing required npm packages...** npm install express dotenv express-handlebars body-parser mysql #Step 6: Install nodemon as a development dependency echo -e **Installing nodemon as a development dependency...** npm install --save-dev nodemon npm install -g nodemon # Step 7: Add npm start script to package.json if ! grep -q '**start**:' package.json; then npm set-script start **index.js** # Replace **your-app.js** with your entry point file echo -e **${GREEN}Added npm start script to package.json.${NC}** fi echo -e **${GREEN}Installation completed. You can now start building and running your Node.js application using 'npm start'.${NC}** ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f0dh7a58pdt8wj6bf07c.png) ### Create RDS DB Instance 13 . Navigate to **RDS console** to create a new database. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p6usj8ch2kmy0eo8uem9.png) 14 . Choose **Standard**. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/etq77a2ft9l39ywvgvie.png) Choose **MySQL**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fbsgjtyei0kdmdkzn2ww.png) For **Templates**, choose **Dev/Test**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kh55o8085rotsne17nep.png) Choose **Single DB instance**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/59f86olubr70dyd4cjgj.png) - Enter **database-labRDS** for **DB indentifier**. - Open **Credential Settings**. If you want to specify a password, uncheck the Auto generate a password box if it’s already selected. - Change the **Master username** value if you want. - Enter the same password in both **Master password** and **Confirm password**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b7mvde9qsxnw6p8us92g.png) For **configuration**, choose **Burstable classes** and **db.t3.micro**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z7tlgvww3pt6mul8047p.png) For **Storage**, choose **General purpose (gp2)**, change **Allocated storage** to **20 GiB**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qxds0h3mls2qqn6dcchx.png) For **Connectivity**, choose **Connect to an EC2 compute resource**, then choose your server instance. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nsm4rw6nx1w3eh57xmlg.png) In **Additional VPC security group**, choose your DB security group. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90bnpk1etsgnc9jeqj51.png) Keep the rest default like below. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/til1ze7dwc5qz7axefuz.png) Scroll down and create. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pqr4bdywtdxlcma6ue2o.png) View your connection detail. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avvd6te32v3topltkpbl.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v44rbnq8h9jj3jslgwnh.png) 15 . Inspect your new database ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7b9z0397432x0xzuryls.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kk23sw6bslfbzstb6kfk.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lwqanzw8dj0gaurgqylu.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/umxh5d1b4fvkbao06j1j.png) ### Viewing Logs and Events on AWS RDS 16 . Click on the Log & events tab. Here, you can view various logs such as: - Error log: Records errors that occur on the instance. - General log: Records general activities on the instance. - Slow query log: Records slow queries. - Event log: Displays important events related to the instance. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30c24rzicqgruwoqmvz1.png) Choose one log and view it. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jj706iblrh2vt3szwuvn.png) ### Viewing Maintenance Information Here, you will see information about the maintenance schedule, including the times when the DB instance will be automatically backed up and maintenance tasks will be performed. You can also view the history of previous maintenance events. You can also view automatic backups and manual backups. 
You can also configure and manage backup settings. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zq50mfqik2nbubwiqe48.png) # IV. Deploy the application 17 . Clone this repo. ``` git clone https://github.com/AWS-First-Cloud-Journey/AWS-FCJ-Management ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjsmqrtgc6t26e8w9ei3.png) 18 . Install MySQL. First, go save your database endpoint. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ojl6h62nmn3aj0bqizqc.png) Note: To execute this script, you need to have sudo permissions and make sure you have provided the correct database information (RDS Endpoint, database name, username and password) before run script. ``` #!/bin/bash # Set variables for MySQL RPM and database information MYSQL_RPM_URL="https://dev.mysql.com/get/mysql80-community-release-el9-1.noarch.rpm" DB_HOST="replace this with your database endpoint" DB_NAME="first_cloud_users" DB_USER="admin" DB_PASS="12341234" # Check if MySQL Community repository RPM already exists if [ ! -f mysql80-community-release-el9-1.noarch.rpm ]; then sudo wget $MYSQL_RPM_URL fi # Install MySQL Community repository sudo dnf install -y mysql80-community-release-el9-1.noarch.rpm # You need the public key of mysql to install the software. sudo rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2023 # Install MySQL server sudo dnf install -y mysql-community-server # Start MySQL server sudo systemctl start mysqld # Enable MySQL to start on boot sudo systemctl enable mysqld # Check MySQL version mysql -V # Create or update the .env file with database information echo "DB_HOST=$DB_HOST" >> .env echo "DB_NAME=$DB_NAME" >> .env echo "DB_USER=$DB_USER" >> .env echo "DB_PASS=$DB_PASS" >> .env # Connect to MySQL mysql -h $DB_HOST -P 3306 -u $DB_USER -p ``` Then enter your password to login. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t4z10tpszghw82fs0djd.png) ```mysql CREATE DATABASE IF NOT EXISTS first_cloud_users; USE first_cloud_users; CREATE TABLE `user` ( `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY, `first_name` VARCHAR(45) NOT NULL, `last_name` VARCHAR(45) NOT NULL, `email` VARCHAR(100) NOT NULL UNIQUE, `phone` VARCHAR(15) NOT NULL, `comments` TEXT NOT NULL, `status` ENUM('active', 'inactive') NOT NULL DEFAULT 'active' ) ENGINE = InnoDB; INSERT INTO `user` (`first_name`, `last_name`, `email`, `phone`, `comments`, `status`) VALUES ('Amanda', 'Nunes', 'anunes@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Alexander', 'Volkanovski', 'avolkanovski@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Khabib', 'Nurmagomedov', 'knurmagomedov@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Kamaru', 'Usman', 'kusman@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Israel', 'Adesanya', 'iadesanya@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Henry', 'Cejudo', 'hcejudo@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Valentina', 'Shevchenko', 'vshevchenko@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Tyron', 'Woodley', 'twoodley@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Rose', 'Namajunas', 'rnamajunas@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Tony', 'Ferguson', 'tferguson@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Jorge', 'Masvidal', 'jmasvidal@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Nate', 'Diaz', 'ndiaz@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Conor', 'McGregor', 'cmcGregor@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Cris', 'Cyborg', 'ccyborg@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Tecia', 'Torres', 'ttorres@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Ronda', 'Rousey', 'rrousey@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Holly', 'Holm', 'hholm@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'), ('Joanna', 'Jedrzejczyk', 'jjedrzejczyk@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'); ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wsmbe36y4mv0yqr2pj0i.png) ``` SHOW DATABASES; USE first_cloud_users; SHOW TABLES; DESCRIBE user; ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ru00xqsb77v95sazc2p.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m8vmmnmid1qv7sihziq5.png) 19 . Go to application directory. ``` cd AWS-FCJ-Management/ ``` Once you are in the application directory, run the following command to start the application: ``` npm start ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnz9cim37t5zktkslv51.png) Get your instance's public IPv4, access it with http through port 5000 and you should see the app is running: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2gt82af83kwlm46l51dw.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y3m35q3243nfzj6ohay8.png) Try to add new user and check. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sukzwh0d5xz2vx699doz.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eidmzzet2tqg1uytds1y.png) # VI. Clean up - Terminate EC2 instance - Delete DB Instance, also release the Elastic IP addresses if there is any left. 
- Delete DB Snapshots - Delete VPC ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ll47mp5ij7kohiyhgye.png) Congratulation! 20 . You can go and check log to see the difference: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wrpbeddg93j35mcid372.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjfsf32o7chxvtw0mfj7.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fs0ac0ygoxfiymqf6j3h.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ept01pf181b0ljk20pic.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ns4awwtzppyt8iqwn2m.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mlrlzba3brh1gx9tffjt.png) # V. Create snapshot and restore Create a snapshot. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f94ntrpi2q39mq0rwf7l.png) Wait for it to be **Available**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/awknalg8kuinvylz7x6c.png) Choose **Restore**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ob0br5wpsyjflup0iz18.png) Enter **labRDS-restore**. Also remember to choose **Burstable classes** and **db.t3.micro**. Then **Restore DB instance**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zip7jzyzb56wu00ip6pd.png) You then can return to step **18** to set up a new database connection to **labRDS-restore**. Re-run the app and you will see it can fetch data normally again.
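As a closing reference, the application reads its connection settings from the `.env` file created in step 18. The snippet below is only an illustrative sketch (it is not the actual code from the AWS-FCJ-Management repository, and the file name `db.js` is just a placeholder) showing how a Node.js app typically combines the `dotenv` and `mysql` packages installed in step 12 with those `DB_*` values:

```
// db.js - illustrative sketch, not the repository's actual code
require('dotenv').config();
const mysql = require('mysql');

// Connection settings come from the .env file written in step 18
const connection = mysql.createConnection({
  host: process.env.DB_HOST,     // RDS endpoint
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
});

connection.connect((err) => {
  if (err) {
    console.error('Could not connect to RDS:', err.message);
    return;
  }
  console.log('Connected to MySQL on RDS');
});

module.exports = connection;
```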
duyhung___
1,860,292
Experience The Future: Smart Home Technology And AI Agency
CPGChat refers to an AI-powered chatbot solution tailored for the Consumer Packaged Goods (CPG)...
0
2024-05-21T10:40:32
https://dev.to/saumya27/experience-the-future-smart-home-technology-and-ai-agency-1j8p
ai, chatgpt, aws, devops
**CPGChat** refers to an AI-powered chatbot solution tailored for the Consumer Packaged Goods (CPG) industry. These chatbots are designed to enhance customer engagement, streamline operations, and provide valuable insights to businesses in the CPG sector. Here’s an overview of what CPGChat entails, including its features, benefits, and potential use cases: **Features of CPGChat** - **Customer Engagement** - 24/7 Support: Providing round-the-clock customer service, answering queries, and assisting with product information. - Personalized Recommendations: Offering personalized product suggestions based on user preferences and purchase history. - Promotions and Offers: Informing customers about ongoing promotions, discounts, and new product launches. **Sales and Order Management** - Order Tracking: Helping customers track their orders in real-time. - Purchase Assistance: Assisting customers in making purchases directly through the chat interface. - Inventory Updates: Providing information on product availability and stock levels. **Feedback and Surveys** - Customer Feedback: Collecting customer feedback and reviews to improve products and services. - Surveys and Polls: Conducting surveys to gather insights on customer preferences and market trends. **Data and Insights** - Analytics: Offering insights into customer behavior, preferences, and interactions. - Reporting: Generating reports on sales performance, customer satisfaction, and other key metrics. **Integration with Systems** - CRM Integration: Integrating with Customer Relationship Management (CRM) systems to enhance customer data management. - ERP Integration: Connecting with Enterprise Resource Planning (ERP) systems for streamlined order processing and inventory management. - E-commerce Platforms: Integrating with e-commerce platforms like Shopify, Magento, and WooCommerce for seamless sales operations. **Multichannel Support** - Omnichannel Presence: Supporting multiple channels such as websites, mobile apps, social media platforms, and messaging apps. - Consistent Experience: Ensuring a consistent and cohesive customer experience across all touchpoints. **Benefits of CPGChat** - Enhanced Customer Experience: Providing quick and personalized support improves customer satisfaction and loyalty. - Increased Sales: Personalized recommendations and efficient order management can drive sales and revenue growth. - Operational Efficiency: Automating routine tasks such as order tracking and customer queries reduces operational costs and frees up human resources for more complex tasks. - Valuable Insights: Collecting and analyzing customer data helps businesses understand market trends and make informed decisions. - Scalability: Chatbots can handle a large volume of interactions simultaneously, making them scalable solutions for growing businesses. **Use Cases for CPGChat** **Product Information and Recommendations** A customer visits a brand’s website and interacts with the chatbot to learn about different products, get recommendations based on their needs, and find the best deals. **Order Processing and Tracking** A customer places an order through the chatbot and receives real-time updates on the order status, delivery schedule, and any potential delays. **Customer Feedback Collection** After a purchase, the chatbot prompts the customer to provide feedback on their experience, helping the brand gather valuable insights for improvement. 
**Marketing and Promotions** The chatbot sends notifications about new product launches, seasonal promotions, and personalized discounts to engage customers and drive sales. **Customer Support** A customer has a query about product usage or needs assistance with a return. The chatbot provides immediate support, resolving the issue efficiently. **Implementing CPGChat** To implement a CPGChat solution, follow these steps: **Define Objectives** Clearly outline the goals you want to achieve with the chatbot, such as improving customer service, increasing sales, or gathering customer insights. **Choose the Right Platform** Select a chatbot platform that offers the necessary features and integrations to meet your business needs. Popular platforms include Dialogflow, Microsoft Bot Framework, and AWS Lex. **Integrate with Existing Systems** Ensure the chatbot integrates seamlessly with your CRM, ERP, e-commerce, and other essential systems. **Design the Conversational Flow** Create intuitive and user-friendly conversational flows that guide customers through various interactions effectively. **Train the Chatbot** Train the chatbot using relevant data and scenarios to handle a wide range of customer queries and tasks. **Launch and Monitor** Launch the chatbot on your chosen channels and monitor its performance. Collect feedback and make necessary adjustments to improve its functionality and user experience. **Continuous Improvement** Regularly update and enhance the chatbot’s capabilities based on customer feedback and evolving business needs. By implementing [CPGChat](https://cloudastra.co/blogs/experience-the-future-cpgchat), businesses in the Consumer Packaged Goods industry can enhance their customer engagement, streamline operations, and gain valuable insights, ultimately driving growth and improving customer satisfaction.
saumya27
1,860,349
Website Design and Development Compa
A strong online presence is crucial for businesses to thrive in today's digital age. A well-designed...
0
2024-05-21T11:50:02
https://dev.to/akanksha_mall_3394babcd79/website-design-and-development-compa-4o2i
A strong online presence is crucial for businesses to thrive in today's digital age. A well-designed website is key to success. If you are looking for a reliable and professional website design and development company in Delhi, Rishikesh, Meerut, or Dehradun, you've come to the right place. In this article, we will explore the importance of website design and development, and how our company can help you create a stunning and functional website that will drive your business forward. The Importance of Website Design and Development A website is often the first point of contact between a business and its potential customers. It is a virtual storefront that represents your brand and provides information about your products or services. A well-designed website can leave a lasting impression on visitors and build trust and credibility. On the other hand, a poorly designed website can drive potential customers away and harm your brand image. Website development involves the technical aspects of building a website, such as coding and programming. It ensures that your website functions smoothly and provides a seamless user experience. A well-developed website is responsive, mobile-friendly, and optimized for search engines, which helps improve its visibility and reach a wider audience. Our Website Design and Development Services As a leading website design and development company in Delhi, Rishikesh, Meerut, and Dehradun, we offer a comprehensive range of services to meet your specific needs. Our team of experienced designers and developers work closely with you to understand your business goals and create a website that reflects your brand identity and engages your target audience. Custom Website Design We believe that every business is unique, and your website should be too. Our custom website design services ensure that your website stands out from the competition and captures the essence of your brand. We create visually appealing designs that are tailored to your business requirements and user preferences. Responsive Web Development With the increasing use of mobile devices, it is essential for your website to be responsive and mobile-friendly. Our web development team specializes in creating responsive websites that adapt to different screen sizes and provide an optimal user experience across all devices. A responsive website not only improves user engagement but also boosts your search engine rankings. E-commerce Development If you are looking to sell products or services online, our e-commerce development services can help you set up a secure and user-friendly online store. We use the latest technologies and platforms to create e-commerce websites that are easy to manage, navigate, and convert visitors into customers. From product catalogs to payment gateways, we handle every aspect of your online store. Content Management Systems (CMS) Managing and updating your website should be a hassle-free process. Our team specializes in developing websites on popular content management systems like WordPress, Joomla, and Drupal. These CMS platforms provide you with the flexibility to easily add, edit, and delete content on your website without any technical knowledge. We also offer training and support to ensure that you can make the most out of your CMS. Search Engine Optimization (SEO) Having a visually appealing and functional website is not enough if it doesn't rank well in search engine results. Our SEO experts optimize your website to improve its visibility and organic traffic. 
We conduct thorough keyword research, optimize your website's structure and content, and implement on-page and off-page SEO techniques to boost your search engine rankings. Our goal is to help your website rank higher and attract more qualified leads. Frequently Asked Questions (FAQ) Q1: How long does it take to design and develop a website? A1: The time required to design and develop a website depends on various factors such as the complexity of the design, the number of pages, and the functionality required. On average, it takes around 4-6 weeks to complete a website, but the timeline can vary based on your specific requirements. Q2: Can I update my website content myself? A2: Yes, absolutely! We design websites on user-friendly content management systems like WordPress, Joomla, and Drupal, which allow you to easily update and manage your website content without any technical knowledge. We also provide training and support to ensure that you can make the most out of your CMS. Q3: Will my website be mobile-friendly? A3: Yes, all the websites we design and develop are fully responsive and mobile-friendly. We understand the importance of mobile optimization in today's digital landscape and ensure that your website looks great and functions seamlessly on all devices, including smartphones and tablets. Q4: Can you help me with SEO for my website? A4: Absolutely! We offer comprehensive search engine optimization (SEO) services to improve your website's visibility and organic rankings. Our SEO experts conduct thorough keyword research, optimize your website's structure and content, and implement proven SEO techniquesto boost your search engine rankings. We understand the importance of SEO in driving organic traffic to your website and helping you reach your target audience. Q5: How much does it cost to design and develop a website? A5: The cost of designing and developing a website depends on various factors such as the complexity of the design, the number of pages, and the functionality required. We offer customized solutions tailored to your specific needs and budget. Get in touch with us for a detailed quote based on your requirements. Conclusion A well-designed and developed website is essential for businesses to establish a strong online presence and attract potential customers. As a leading website design and development company in Delhi, Rishikesh, Meerut, and Dehradun, we offer comprehensive services to create stunning and functional websites that reflect your brand identity and engage your target audience. From custom website design to responsive web development and e-commerce solutions, we have the expertise to bring your vision to life. Contact us today to discuss your website design and development needs and take your online presence to new heights. Remember, your website is the face of your business in the digital world. Invest in professional website design and development to make a lasting impression and drive your business forward. Contact us today to get started on your website design and development journey! 
Links: [website design and development services in Delhi](url) [website design and development services in Rishikesh](url) website design and development services in Meerut website design and development services in Dehradun e-commerce development Read More: [How to Sell on Etsy: A Comprehensive Guide](url) [Development of the Product](url) [Top 10 Medical Website Templates](url) [What is Website Traffic a Deep Analysis?](url) [20 Best Industrial Website Template 2024](url) [Digital Marketing Agency Design](url) [commrz vs shopify](url)
akanksha_mall_3394babcd79
70,825
Observables, Side-effects, and Subscriptions
Exploring examples in idiomatic uses of Observables
0
2018-12-25T23:16:17
https://blog.eyas.sh/2018/12/observables-side-effects-and-subscriptions/
reactivex, observable, reactiveprogramming, functional
---
title: Observables, Side-effects, and Subscriptions
published: true
description: Exploring examples in idiomatic uses of Observables
tags: reactivex, observable, reactive programming, fp
canonical_url: https://blog.eyas.sh/2018/12/observables-side-effects-and-subscriptions/
---

My previous articles on [using `AsyncPipe`](https://blog.eyas.sh/2018/12/use-asyncpipe-when-possible/) and [data refresh patterns in Angular](https://blog.eyas.sh/2018/12/data-and-page-content-refresh-patterns-in-angular/) hint at some common anti-patterns dealing with Observables. If there’s any common thread in my advice, it is: **delay unpacking an Observable into its scalar types** when performing logic you can rewrite as side-effect-free, leaving code with side-effects for subscription callbacks and other downstream logic.

My two earlier articles focused on cases where users can benefit from handling more of the object's lifecycle in its Observable form. In other words, cases where the Observable was being subscribed to and unpacked *too soon*. Instead, I suggested transforming the Observable using operators like `map`, `switchMap`, `filter`, etc. and taking advantage of the power offered by this form. In the case of Angular, it provides [`AsyncPipe`](https://angular.io/api/common/AsyncPipe), which takes care of the step with side-effects (actually rendering the page) in template code.

There are some exceptions to this line of thinking, namely that [`do` and `tap`](http://reactivex.io/documentation/operators/do.html) are reactive operators that exist exclusively *for* functions with side effects. I'll leave a discussion of right vs. less right reasons to use `do`/`tap` for a later article. But I'll mention logging, error reporting, and caching of otherwise pure functions as one valid use of side-effects.

Let's explore a few cases:

# 1. Displaying data represented by Observables

Say I have two Observables wrapping some object in a storage format (e.g. JSON), and I'd like to display it.

## Unpacking an observable *too soon*

```ts
let customerName: string;
let customerBalance: number;

nameObservable.subscribe(name => {
  customerName = name;
  if (customerName && customerBalance) {
    processAndDraw();
  }
});

balanceObservable.subscribe(balance => {
  customerBalance = balance;
  if (customerName && customerBalance) {
    processAndDraw();
  }
});

function processAndDraw() {
  alert(`${customerName}: $${customerBalance.toFixed(2)} USD`);
}
```

If a caller unpacks an observable too soon, it means they're dealing with scalars, passing things around by global state. Developers might have trouble handling changes, such as adding a third data source to show.

## Unpacking an Observable *too late*

```ts
combineLatest(nameObservable, balanceObservable).pipe(
  map(([name, balance]) => {
    alert(`${name}: $${balance.toFixed(2)} USD`);
  })
).subscribe();
```

On the one hand, this is much shorter and more expressive! This effectively maps `Observable<[string, number]>` into an `Observable<void>` which *happens to perform side effects when subscribed to*. The subscriber, however, has no idea what action will take place from just looking at a type or signature.

Even with the code snippet above used as-is, it is very easy to forget about that last `.subscribe()` call, which--given that Observables are lazy by default and only perform useful actions when subscribed to--renders this whole snippet a no-op.
One final reason side-effects are bad in operators: these side-effects can be performed an arbitrary number of times per event, based on how many distinct subscribers are listening to an Observable.

## A better trade-off

```ts
combineLatest(nameObservable, balanceObservable).pipe(
  map(([name, balance]) => `${name}: $${balance.toFixed(2)} USD`)
).subscribe(text => alert(text));
```

# Other use cases described in the [full piece](https://blog.eyas.sh/2018/12/observables-side-effects-and-subscriptions/)

2. Avoiding Unnecessary Indirection through `Subject`s
3. Subscribing when `switchMap` or `flatMap` would do (a short sketch follows below)

# Summary

An Observable going through a series of transformation operators from source to final result is:

1. Cancelable through-and-through; cancelling a subscription to a resultant Observable will cancel any underlying subscriptions opened to that end.
2. Composable in its own right; and
3. A ubiquitous immutable API that gives callers flexibility in manipulating return values.

I propose side-effects as a great first-order heuristic for what can reasonably be kept within a composed Observable. When needed, operators like `do` and `tap` will sometimes make sense.

[Read the full original article at blog.eyas.sh](https://blog.eyas.sh/2018/12/observables-side-effects-and-subscriptions/).
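Case 3 ("Subscribing when `switchMap` or `flatMap` would do") is only named in this excerpt, so here is a minimal, self-contained sketch of the idea. The names (`selectedUserId$`, `fetchUser`, `render`) are hypothetical stand-ins, not code from the original article:

```ts
import { Observable, Subject, of } from 'rxjs';
import { delay, switchMap } from 'rxjs/operators';

// Hypothetical stand-ins for an id stream and an async lookup.
const selectedUserId$ = new Subject<number>();
const fetchUser = (id: number): Observable<string> => of(`user-${id}`).pipe(delay(100));
const render = (user: string) => console.log(user);

// Anti-pattern: nesting a subscribe inside a subscribe. The inner
// subscription is never cancelled when a newer id arrives.
selectedUserId$.subscribe(id => {
  fetchUser(id).subscribe(user => render(user));
});

// Preferred: keep the chain inside the Observable and subscribe once.
// switchMap cancels the in-flight fetchUser(...) when a new id is emitted.
selectedUserId$.pipe(
  switchMap(id => fetchUser(id))
).subscribe(user => render(user));

selectedUserId$.next(1);
selectedUserId$.next(2); // the piped version renders only "user-2"
```

Keeping the chain in Observable form preserves the cancelable, composable properties listed in the summary above.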
eyassh
1,860,359
Understanding AWS Control Tower: Part 2 - Implementation and Deployment
Introduction Welcome back to part two of our series on understanding AWS Control Tower. In part one,...
0
2024-05-21T11:57:39
https://dev.to/aminabrhm4/understanding-aws-control-tower-part-2-implementation-and-deployment-hdk
cloudmanagement, controltower, cloudgovernance, multiaccount
**Introduction** Welcome back to part two of our series on understanding AWS Control Tower. In part one, we discussed how AWS Control Tower simplifies the management of multi-account AWS environments, addressing challenges like inconsistent security policies. We also explored its core features, including landing zones and guardrails. In this segment, we'll consider important factors for implementing AWS Control Tower and provide a hands-on tutorial for deploying a landing zone. **Factors to Consider Before Implementation** Before AWS Control Tower implementation, it's important to consider several key factors to ensure a smooth deployment. Here's a high-level overview for organizations to think about: **Assess Organizational Readiness** It's essential to assess your organization's readiness for the transition. Evaluate factors such as your team's familiarity with cloud technologies and existing IT infrastructure. Determine if your organization has the necessary resources, skills, and commitment to support the implementation process effectively. **Identify Stakeholders and Their Roles** Successful implementation requires active involvement and collaboration from various stakeholders within your organization. Identify key stakeholders and business leaders. **Reviewing Existing AWS Architecture and Policies** Examine the current state of your AWS environment, including account structure, resource configuration, security measures, and governance practices. Identify areas for improvement and determine how AWS Control Tower can address any gaps or challenges in your existing setup. **Design Account Structure** Plan your account structure and organizational units within AWS Control Tower. Decide on the hierarchical structure of organizational units (OUs) based on business units, departments, projects, or applications. Define the placement of resources, such as production, development, testing, and sandbox environments, to ensure proper isolation and resource management. These considerations are very important before deploying AWS Control Tower. In the next section, we'll provide a brief, beginner-friendly tutorial on how to deploy a landing zone. **Deploy Landing Zone** **Log in: Use the Management account.** Error Handling: If you encounter an "AWS environment is not ready" error, launch a Free tier eligible EC2 instance, wait 10-15 minutes, and retry the setup. Terminate the instance once setup and proceed. **Review Pricing and Select Regions** Home Region: Choose a region for deploying key resources like IAM Identity Center and S3 buckets. This selection is crucial and cannot be changed post-setup. ![Pricing and Home Region](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7l4l39vscdl1igah4ln.png) Additional AWS Regions: Select any additional regions for governance. ![Additional AWS Regions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g9cxsq4v6xjlrhkredde.png) Region Deny Setting: Optionally restrict usage to specific regions by enabling this setting. ![Region Deny](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnzmt1wfx6h9utjxyedo.png) **Configure Organizational Units (OUs)** Foundation OU: Default name is "Security." This contains shared accounts like the log archive and security audit accounts. ![Foundational OU](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qwi1lsntmpsrb7u98z5n.png) Additional OU: Default name is "Sandbox." You can change these names later if needed. 
![Additional OU](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0lenhbykd9pil8ivsvw4.png) **Configure Shared Accounts** Management Account: Confirm you are using the planned account. Log Archive Account: This stores immutable logs. Create a new account with a unique email address. Audit Account: Restricted for security and compliance teams. Create a new account with a unique email address. ![Log Archive](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9blh5avfpi3brnianbiw.png) ![Audit](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/girtj41j2pk3n4jmgqmm.png) **Additional Configurations** AWS Account Access Configuration: IAM Identity Center is recommended for scalable access management. AWS CloudTrail Configuration: Enable the creation of an organizational trail by AWS Control Tower. Log Configuration for Amazon S3: Set retention policies for logging data. KMS Encryption: Optionally manage cryptographic keys. ![IAM Identity Center](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ty5g4akbnv50l4qq2e3v.png) ![CloudTrail](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/49arbfq5wweyz1lktxl8.png) ![S3 Logs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30xx2c8vla5y3rh9g1sq.png) ![KMS Key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y0xpmddbaezpijpi25ls.png) **Review and Set Up Landing Zone** Review Settings: Check all configurations before finalizing. Service Permissions: Understand and acknowledge the roles and permissions required by AWS Control Tower. Set Up Landing Zone: Start the setup and monitor progress on the AWS Control Tower Dashboard. A green banner will indicate successful setup completion. ![Service Permissions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cnwg54e6kwx55y3acrzm.png) **Conclusion** Implementing AWS Control Tower can significantly streamline the management of multi-account AWS environments, providing a centralized and automated way to enforce best practices and governance. By carefully considering factors such as organizational readiness, stakeholder roles, existing AWS architecture, and account structure design, you can ensure a smoother deployment process. The hands-on tutorial we provided for setting up a landing zone offers a practical guide to getting started with AWS Control Tower. Following these steps will help you establish a robust foundation for your AWS environments, enhancing security, compliance, and operational efficiency.
aminabrhm4
1,860,650
GPT-4 passes most of the 297 written Polish Board Certification Examinations
GPT-4 passes most of the 297 written Polish Board Certification Examinations
0
2024-05-21T15:39:45
https://aimodels.fyi/papers/arxiv/gpt-4-passes-most-297-written-polish
machinelearning, ai, beginners, datascience
*This is a Plain English Papers summary of a research paper called [GPT-4 passes most of the 297 written Polish Board Certification Examinations](https://aimodels.fyi/papers/arxiv/gpt-4-passes-most-297-written-polish). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).* ## Overview - Recent advancements in Large Language Models (LLMs) have significantly improved their capabilities, enabling their use in various applications. - However, the risks of generating false information through LLMs limit their applications in sensitive areas like healthcare, underscoring the need for rigorous validation. - This study extensively evaluated the performance of three Generative Pretrained Transformer (GPT) models on the Polish Board Certification Exam (PES) dataset, a large dataset of 297 medical exams. ## Plain English Explanation Large Language Models (LLMs) are a type of artificial intelligence that can generate human-like text. They have become increasingly capable in recent years, [allowing them to be used in a wide range of applications](https://aimodels.fyi/papers/arxiv/large-language-models-perform-par-experts-identifying). However, there is a concern that these models could be used to create false information, which could be particularly problematic in sensitive areas like healthcare. To address this issue, the researchers in this study tested the performance of three different GPT models, a type of LLM, on a large dataset of Polish medical exams. The dataset, called the Polish Board Certification Exam (PES), consists of 297 exams covering a variety of medical specialties. The researchers wanted to see how well these AI models could perform on these challenging medical tests. ## Technical Explanation The researchers developed a software program to download and process the PES exam dataset. They then used the OpenAI API to test the performance of three different GPT models: GPT-3.5, GPT-4, and the most recent GPT-4-0125. The results showed that the GPT-3.5 model was unable to pass any of the exams. In contrast, the GPT-4 models demonstrated a much stronger performance, with the latest GPT-4-0125 model successfully passing 222 (75%) of the 297 exams. However, the performance of the GPT models varied significantly across different medical specialties. While they excelled in some exam areas, they completely failed in others. ## Critical Analysis The research highlights the impressive progress made in LLM models, such as [GPT-4, which can outperform experts in certain tasks](https://aimodels.fyi/papers/arxiv/large-language-models-perform-par-experts-identifying). This advancement could potentially lead to the development of AI-based medical assistants that could enhance the efficiency and accuracy of healthcare services in Poland. At the same time, the significant variation in performance across different medical specialties suggests that these models may not be reliable or accurate enough to be used in high-stakes healthcare settings without further validation and safeguards. The researchers note that [the risks of generating false information through LLMs](https://aimodels.fyi/papers/arxiv/automated-multi-language-to-english-machine-translation) still need to be addressed before these models can be widely deployed in sensitive domains like medicine. 
Additionally, the study is limited to the Polish medical exam dataset, and it's unclear how well the GPT models would perform on medical exams in other languages or contexts. [Expanding the analysis to a more diverse set of medical datasets](https://aimodels.fyi/papers/arxiv/mining-experimental-data-from-materials-science-literature) could provide a more comprehensive understanding of the capabilities and limitations of these models. ## Conclusion This study demonstrates the significant progress made in LLM models, such as GPT-4, which can now pass the majority of Polish medical board exams. This advancement holds great promise for the increased application of AI in the field of medicine in Poland, potentially leading to the development of AI-based medical assistants that could enhance the efficiency and accuracy of healthcare services. However, the study also highlights the need for continued validation and safeguards to ensure the reliability and accuracy of these models, particularly in sensitive domains like healthcare. Further research is needed to understand the full capabilities and limitations of LLMs across different medical specialties and languages. **If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.**
mikeyoung44
1,861,224
Accessible DeFi Experience: MIMI Leading the Low-Barrier Financial Revolution
In the rapid development of blockchain technology and cryptocurrencies, decentralized finance (DeFi)...
0
2024-05-22T05:47:58
https://dev.to/mimi_official/accessible-defi-experience-mimi-leading-the-low-barrier-financial-revolution-3mon
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u5f1w38u6t40extvw0ig.png)In the rapid development of blockchain technology and cryptocurrencies, decentralized finance (DeFi) is emerging as an innovative financial model that is gradually transforming the global financial ecosystem. However, despite DeFi’s promise of numerous opportunities and benefits, its high barriers and complex technical requirements deter many potential users. This phenomenon has led to a deep reflection on the current state of the DeFi market: Why has a field designed to achieve financial inclusivity evolved into an exclusive domain requiring extensive knowledge and high barriers? Analysis of the Current DeFi Market The original intention of the DeFi market was to create an intermediary-free, borderless financial system using blockchain technology, allowing everyone to participate. However, the reality is far from this vision. Technical Complexity:The complexity of blockchain technology is one of the main reasons for the high barriers in DeFi. Users need to understand fundamental concepts such as smart contracts, decentralized applications (DApps), and cryptocurrency wallets, which pose significant challenges for individuals without a technical background. Smart contracts are at the core of DeFi operations, responsible for automatically executing transactions and agreements. However, writing and understanding these contracts requires specialized programming knowledge. While some platforms offer visual smart contract editors, users still need a certain level of technical proficiency to effectively utilize these tools. Tedious Operations:The process of participating in DeFi is complex and cumbersome. Users need to switch between multiple platforms and perform cross-chain operations. This process is not only time-consuming but also prone to operational errors. Due to the lack of interoperability between different blockchains, assets need to be transferred through intermediary bridges, making cross-chain operations extremely complicated. These bridges often require users to perform multiple steps, such as locking assets on one blockchain and then releasing corresponding assets on another. This tedious process deters many users. User Interface Design: The user interface design of most DeFi platforms is not user-friendly, further increasing the learning and operational costs for users. Complex interfaces and terminology make novice users feel confused and uneasy. Users often need to repeatedly consult help documents or seek assistance from the community, which not only increases learning costs but also affects the user experience. For ordinary users accustomed to simple and friendly interfaces, the user experience of DeFi platforms is a significant obstacle. Risk and Security: The high-risk nature of the DeFi market is another reason for the high barriers. Vulnerabilities in smart contracts, hacking attacks, and market volatility make many users cautious about DeFi. Once deployed on the blockchain, smart contracts cannot be changed, meaning any code vulnerabilities or logical errors can be maliciously exploited, leading to financial losses. Additionally, DeFi platforms often become targets for hackers, with several major DeFi attacks occurring in recent years, severely impacting user trust. MIMI’s Innovative Solutions In this context, the emergence of the MIMI project offers new ideas and solutions to address these issues. 1. 
Multi-Chain Aggregated Liquidity Protocol MIMI achieves low barriers and high returns in DeFi through its multi-chain aggregated liquidity protocol. This protocol allows users to seamlessly operate assets across different blockchains, greatly simplifying the operational process. Users no longer need to switch between multiple platforms, making the process simple and efficient. MIMI’s cross-chain protocol not only reduces costs but also significantly improves transaction speed and convenience, providing users with an unprecedented convenient asset management experience. 2. Transparent Profit Distribution Mechanism A transparent profit distribution mechanism is another highlight of MIMI. Through blockchain technology, MIMI achieves full transparency in profit distribution, allowing users to view their investment status and profits in real-time. This not only enhances user trust in the platform but also ensures the openness and fairness of every transaction. Moreover, MIMI employs advanced technologies such as fully homomorphic encryption to ensure the safety of user funds, further reducing the risks of participating in DeFi. 3. Simplified Operation Process To reduce the learning and operational costs for users, MIMI has simplified the operation process by providing a user-friendly interface and automated smart contract handling. Users can easily manage and optimize their assets without complex steps or professional technical knowledge, thus enjoying a better user experience. MIMI’s graphical interface and intuitive operation process make it easy for users to get started, lowering the technical barriers to participating in DeFi. Real-World Impact of MIMI on the DeFi Market Firstly, MIMI lowers the entry barriers to DeFi, allowing more people to easily participate in DeFi. This will greatly promote the popularization and application of DeFi, attracting more users to this field. Through its multi-chain aggregated liquidity protocol and simplified operation process, MIMI enables users to participate in DeFi with minimal learning and operational costs, enjoying the convenience and benefits of decentralized finance. On the other hand, MIMI’s AI-powered risk control system, based on big data analysis and machine learning technology, monitors and analyzes large amounts of on-chain and off-chain data in real-time to identify potential risks and provide warnings. MIMI’s AI risk control system collects data from multiple sources, including transaction records, user behavior, market trends, etc. By analyzing this data, the system can identify abnormal trading behaviors, market volatility trends, and potential security threats. This reduces the burden of manual operations for users and enhances the user experience, allowing users to trade and invest with confidence without worrying about complex risk control processes. Finally, MIMI’s technological innovations and ecosystem integration bring new development ideas to the DeFi market. Through deep cooperation with various Web3 content platforms, Web3 games, and financial derivatives, MIMI has built a comprehensive crypto symbiotic ecosystem, providing users with diversified financial services. Users can not only manage and invest their assets on the MIMI platform but also participate in various Web3 content and games, enjoying more value-added services and fun. 
This ecosystem integration not only enhances the user experience but also offers users more choices and opportunities, allowing them to more fully participate in and enjoy the benefits of decentralized finance. As the MIMI platform continues to optimize and expand, we believe that MIMI will become a leading decentralized finance platform globally, attracting more users to participate and benefit from the DeFi ecosystem. Through continuous technological innovation and service optimization, MIMI will continue to lead the development trend of the DeFi market, providing users with a better financial service experience. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ak5zyfgigc789ao9ixa8.png)
mimi_official
1,861,271
JavaScript Function , First Class function / First Class Citizen in JavaScript
What is a Function in JavaScript? In JavaScript, a function is a block of code designed to perform a...
0
2024-05-22T06:32:09
https://dev.to/pervez/javascript-function-first-class-function-first-class-citizen-in-javascript-55i8
**What is a Function in JavaScript?**

In JavaScript, a function is a block of code designed to perform a particular task. A function is executed when "something" invokes it (calls it).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orhrdean2ak72owq4qew.png)

**What is an Anonymous Function?**

An anonymous function is a function that does not have a name. Writing an anonymous function as a standalone statement, such as `function() {}`, results in a syntax error because function statements require a name. Anonymous functions are used where functions are treated as values. You can use anonymous functions in function expressions, where the function is assigned to a variable. In this case, the anonymous function acts as a value assigned to the variable.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0h8gxm6mib36p3j86l3.png)

**What is the Difference Between Function Statement and Function Expression?**

The major difference is hoisting. Function statements are hoisted, meaning they can be called before they are defined in the code. In a function expression, the function is assigned to a variable as a value, so only the variable is hoisted. During the memory creation phase a `var` variable is initially set to `undefined`, so invoking the function before the assignment runs fails with an error like "sayHello is not a function" (with `let` or `const`, the variable is not initialized at all, so you get a ReferenceError instead).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8xfrc6bpxkrz5zzkq7oz.png)

**What is a Function Declaration?**

A function declaration is a standard way to declare a function using the `function` keyword, followed by the function name and a parameter list. Function declarations are hoisted in their scope.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tqpo8kakk59vmdn7rwah.png)

**What is a First-Class Function in JavaScript?**

First-class functions are functions that can be treated as values. They can be passed as arguments to other functions, returned from other functions, and assigned to variables as values. This set of abilities is what makes functions "first-class" in JavaScript. First-class functions are also known as first-class citizens.

**Key Points of First-Class Functions:**

**1. Passed as arguments**: Here the `hello` function is passed as an argument to the `test` function.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mltuvkuvt5suvb5eyejz.png)

**2. Returned from other functions**: Here the `x` function is returned from the `hello` function, and `hello()` is assigned to the `result` variable, which means the `x` function ends up assigned to `result` as a value.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/711ehxnxtyisvsvx7pn8.png)

**3. Assigned to a variable as a value**:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0dlupxqmctldc97r630.png)

**4. Also known as first-class citizens**

**What is a Named Function Expression?**

A named function expression is a function expression that has its own name. This name can be used for internal references within the function body.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hi7vj5xandqek8bj6i3q.png)

Note that you cannot call `x()` from the global scope here. During the memory creation phase, the named function expression is allocated in memory only as the value of the `let` variable it is assigned to, so calling `x()` directly throws a ReferenceError: "x is not defined". The name `x` is not created as a global function; it exists only as a local binding inside the function itself. You can call `x()` from within the function body, and referencing `x` there gives you the whole function.

**Function Parameters and Function Arguments**

**Parameters:** Identifiers or labels that receive values inside the function. They are local variables of the function and cannot be accessed outside of it.

**Arguments:** The actual values passed into the function.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/039x0op722polfjk9qto.png)
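The code for these examples appears only as screenshots above, so here is a small text version of the three abilities. The names `hello`, `test`, `x`, `result`, and `sayHello` follow the post's wording, but the bodies are a reconstruction rather than the original screenshots (the second snippet uses `hello2` only to avoid clashing with the first):

```js
// 1. Passed as an argument: the hello function is handed to test as a value.
function hello() {
  return 'Hello!';
}
function test(fn) {
  console.log(fn()); // "Hello!"
}
test(hello);

// 2. Returned from another function: x is returned from hello2,
//    so result ends up holding the x function as a value.
function hello2() {
  return function x() {
    return 'from x';
  };
}
const result = hello2();
console.log(result()); // "from x"

// 3. Assigned to a variable as a value (a function expression).
const sayHello = function () {
  return 'Hi there';
};
console.log(sayHello()); // "Hi there"
```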
pervez
1,861,351
Create a Heart Shaped Animation with CSS3
Introduction Creating a personalized love confession page can be a fun and heartfelt way...
0
2024-05-22T08:02:45
https://dev.to/markyu/create-a-heart-shaped-animation-with-html5-css3-and-javascript-1pjh
html, css, codepen, animation
### Introduction Creating a personalized love confession page can be a fun and heartfelt way to express your feelings. By using HTML5, CSS3 animations, and a touch of JavaScript, you can create a beautiful page with an animated heart effect. In this tutorial, we will walk you through a simple example to get you started. ### Step-by-Step Guide #### 1. HTML Structure First, we need to set up our HTML structure. This includes a `div` for the heart shape and a `div` for the text. ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>520 Confession</title> <style> @keyframes heartBeat { 0% { transform: scale(1); } 25% { transform: scale(1.1); } 50% { transform: scale(1); } 75% { transform: scale(0.9); } 100% { transform: scale(1); } } body { display: flex; justify-content: center; align-items: center; height: 100vh; background-color: #f0e0d8; margin: 0; overflow: hidden; } .heart { position: relative; width: 100px; height: 90px; animation: heartBeat 1s infinite; } .heart:before, .heart:after { content: ""; position: absolute; width: 50px; height: 80px; background-color: red; border-radius: 50px 50px 0 0; transform: rotate(-45deg); top: 0; left: 50px; } .heart:after { left: 0; transform: rotate(45deg); } .text { position: absolute; top: 50%; transform: translateY(-50%); color: #fff; font-size: 24px; font-family: Arial, sans-serif; } </style> </head> <body> <div class="heart"></div> <div class="text">520 I Love You</div> </body> </html> ``` #### 2. CSS Styling In this example, we use CSS3 animations to create a heartbeat effect. The `@keyframes` rule defines the animation named `heartBeat`, which scales the heart shape at different intervals. ```css @keyframes heartBeat { 0% { transform: scale(1); } 25% { transform: scale(1.1); } 50% { transform: scale(1); } 75% { transform: scale(0.9); } 100% { transform: scale(1); } } body { display: flex; justify-content: center; align-items: center; height: 100vh; background-color: #f0e0d8; margin: 0; overflow: hidden; } .heart { position: relative; width: 100px; height: 90px; animation: heartBeat 1s infinite; } .heart:before, .heart:after { content: ""; position: absolute; width: 50px; height: 80px; background-color: red; border-radius: 50px 50px 0 0; transform: rotate(-45deg); top: 0; left: 50px; } .heart:after { left: 0; transform: rotate(45deg); } .text { position: absolute; top: 50%; transform: translateY(-50%); color: #fff; font-size: 24px; font-family: Arial, sans-serif; } ``` #### 3. Detailed Explanation To help you understand each part of the code, let's break it down: **HTML Structure**: - The `<!DOCTYPE html>` declaration defines the document type as HTML5. - The `<html lang="en">` tag sets the language of the document to English. - Inside the `<head>` tag, we define the character encoding with `<meta charset="UTF-8">` and the title of the page with `<title>520 Confession</title>`. **CSS Styling**: - **`@keyframes heartBeat`**: Defines the keyframe animation sequence for the heart. The heart scales up and down to create a beating effect. - `0%` and `100%`: Scale at normal size. - `25%`: Scale up to 1.1 times the original size. - `75%`: Scale down to 0.9 times the original size. - **`body`**: Styles the body of the document to center the content both horizontally and vertically. - `display: flex; justify-content: center; align-items: center;`: Uses Flexbox to center the items. - `height: 100vh;`: Sets the height to 100% of the viewport height. - `background-color: #f0e0d8;`: Sets the background color. 
- `margin: 0;`: Removes the default margin. - `overflow: hidden;`: Hides any overflow content. - **`.heart`**: Styles the heart container. - `position: relative;`: Positions the element relative to its normal position. - `width: 100px; height: 90px;`: Sets the size of the heart. - `animation: heartBeat 1s infinite;`: Applies the heartbeat animation with a 1-second duration, running infinitely. - **`.heart:before, .heart:after`**: Styles the pseudo-elements to create the heart shape. - `content: "";`: Adds content for the pseudo-elements. - `position: absolute;`: Positions the elements absolutely within the `.heart` container. - `width: 50px; height: 80px; background-color: red;`: Sets the size and color of the pseudo-elements. - `border-radius: 50px 50px 0 0;`: Rounds the top corners to form the top of the heart. - `transform: rotate(-45deg); top: 0; left: 50px;`: Rotates and positions the left half of the heart. - `.heart:after`: Rotates and positions the right half of the heart. - **`.text`**: Styles the text element. - `position: absolute; top: 50%; transform: translateY(-50%);`: Centers the text vertically within the heart. - `color: #fff;`: Sets the text color to white. - `font-size: 24px; font-family: Arial, sans-serif;`: Sets the font size and family. ### Conclusion This example demonstrates how to create a simple, yet visually appealing love confession page with an animated heart effect using HTML5 and CSS3. You can further customize the styles and animations to suit your needs and make your confession page even more special. Feel free to experiment with different colors, sizes, and animation timings to create a unique and personalized experience. --- I hope this helps you create your own beautiful and animated love confession page!
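Since the introduction promises "a touch of JavaScript", one optional extension — offered only as a suggestion, not as part of the tutorial's required code — is to let visitors pause and resume the heartbeat with a click. It assumes the `.heart` element from the markup above and can be added just before the closing `</body>` tag:

```html
<script>
  // Toggle the heartbeat animation when the heart is clicked.
  const heart = document.querySelector('.heart');
  heart.addEventListener('click', () => {
    const paused = heart.style.animationPlayState === 'paused';
    heart.style.animationPlayState = paused ? 'running' : 'paused';
  });
</script>
```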
markyu
1,861,356
Artificial Intelligence vs Mobile App Development
Recently, artificial intelligence has been a popular term in the business and technology sectors....
0
2024-05-22T08:15:54
https://dev.to/owais121/5-ways-artificial-intelligence-will-disrupt-mobile-app-development-in-2024-3j2o
mobileappdevelopment, webdev, programming, ai
Recently, artificial intelligence has been a popular term in the business and technology sectors. This ground-breaking revolution is causing waves across a variety of industries. In the [field of app development](https://khired.com/solutions/mobile-application-development/), it's worthwhile to investigate the possibilities that big AI models and machine learning algorithms present today. It's no secret that creating apps requires a lot of work and time, so creating technologies to make the process go more quickly is nothing new. According to a MongoDB post, artificial intelligence (AI) has been used since IBM created a program in 1997 that defeated the global chess champion at the time. However, the influence on app creation is expected to be even greater in this era of huge data and quick advances in AI technology. ## Artificial Intelligence and Mobile App Development Artificial intelligence (AI) is transforming mobile app development by introducing advanced capabilities that enhance functionality, efficiency, and user experience. AI-driven tools can automate coding, significantly reducing development time and allowing developers to focus on creative and strategic aspects of app creation. Moreover, predictive analytics enable apps to deliver hyper-personalized experiences by analyzing user data and preferences, leading to higher user engagement and satisfaction. Also, [AI enhances app security](https://dev.to/logrocket/creating-an-ai-photo-generator-and-editing-app-with-react-c7o) by identifying and mitigating potential threats in real-time, ensuring robust protection of user data. Additionally, AI-powered voice technology improves accessibility, making apps more user-friendly for people with disabilities. As AI gains popularity, its integration into mobile app development will drive innovation and set new standards for the industry. So, let’s look at the five ways artificial intelligence will disrupt app development in 2024. ### Automated Coding The possibility of [AI-based programs](https://dev.to/tangoindiamango/notetaking-with-ai-2fj5) generating their codes is no longer science fiction. Business giants are already making strides in this field. For instance, Meta has developed Transcoder, a tool that uses deep learning to convert code from one programming language to another. Consider a development team that is engaged in a project that necessitates the conversion of a substantial codebase from Python to JavaScript. Rather than dedicating weeks to the laborious rewriting of the code, they could utilise Transcoder to complete the conversion in a fraction of the time. This and other comparable tools enable developers to allocate their time to more strategic and creative aspects of app development, such as the enhancement of user experience or the development of innovative features. This technology is expected to continue to evolve by 2024, potentially resulting in a fully automated coding procedure. This would enable programmers to concentrate on high-level problem-solving and innovation, thereby saving them countless hours of monotonous work. For instance, a startup could utilise sophisticated AI tools to automatically generate and optimise code for a new application, thereby expediting the development process and minimising expenses. This change would not only increase productivity but also enable smaller companies to compete with larger enterprises by utilising AI-driven efficiency. 
### Delivering Hyper-personalized User Experiences The user experience, which is the foundation of any successful software, is already being transformed by predictive analytics powered by AI. AI can accurately predict user preferences and behaviours by analyzing user data collected from a variety of interaction locations. Utilizing this information, the app's capabilities are later tailored to enhance the user's experience and satisfaction. Businesses are presently exploring strategies to engage with clients more personally and are progressing towards hyper-personalization, as per a Forbes article. In 2024, we can anticipate advancements in this domain. Developers may generate real-time responsive content that reacts to user input or offers voice or facial recognition services. ### Application Testing [App testing](https://dev.to/morrismoses149/mobile-app-testing-a-comprehensive-guide-mbg) is a crucial step in the development process that makes sure an application is functional before it is released. Artificial intelligence (AI) can automate this procedure and produce thorough results much faster than with conventional testing techniques. Its ability to systematically search for and identify faults makes it feasible to address these discrepancies before they have an adverse effect on the user experience. For instance, a team developing a complex e-commerce app can use AI to automate regression testing. This ensures that new updates don’t break existing features. Thereby maintaining a smooth shopping experience for users. According to a DevOps thought post, automated testing powered by AI might not be optional in 2024; testers will have to keep up with developers who employ AI-powered technologies. ### Enhanced Security The necessity of comprehensive security structures in app development is of the utmost importance, as cyber threats become more sophisticated. This is the point at which artificial intelligence becomes indispensable. For example, a financial services application that manages sensitive user data can employ AI-driven security systems to identify suspicious activities and detect patterns before they cause any harm. These artificial intelligence models can be trained on thousands of use cases, which allows them to promptly identify and mitigate potential threats, such as fraudulent transactions or peculiar login attempts, that may otherwise go unnoticed by conventional security measures.. Developers can substantially improve the security of user data and guarantee a safer user experience by incorporating AI-powered security. ### Improving Accessibility with Voice Technology In 2024, voice technology will advance even more as smart home devices like Google Home and Amazon Echo become commonplace. Suppose an e-commerce app could integrate voice navigation, allowing users to search for products, place orders, and check delivery status hands-free, which is especially beneficial for users with vision or mobility impairments. Voice navigation will become a regular feature for app developers, increasing accessibility for users with vision or mobility impairments. Voice translation with AI capabilities is also planned. Language barriers would vanish as a result, revolutionizing user interaction with mobile apps and promoting business globalization. 
### Final Thoughts The advancements in artificial intelligence are poised to significantly disrupt [mobile app development in 2024](https://khired.com/solutions/mobile-application-development/), offering transformative benefits across various stages of the development process. Automated coding tools like Meta's Transcoder will accelerate development timelines, allowing developers to focus on creative problem-solving and innovation. AI-driven predictive analytics will enable hyper-personalized user experiences, catering to individual preferences with unprecedented accuracy. Moreover, the automation of app testing will enhance efficiency and reliability, ensuring that applications are thoroughly vetted before release. Enhanced security measures powered by AI will provide robust protection against increasingly sophisticated cyber threats, safeguarding user data and fostering trust. Additionally, the integration of advanced voice technology will improve accessibility and break down language barriers, making apps more inclusive and generating global business.
owais121
1,861,453
Web Worker in React: A step-by-step guide
Understanding web worker in React is entirely dependent on the idea of JavaScript’s code execution....
0
2024-05-22T09:43:16
https://dev.to/ifourtechnolab/web-worker-in-react-a-step-by-step-guide-156a
webworker, react, reactwebworker, webdev
Understanding [web worker in React](https://www.ifourtechnolab.com/blog/how-to-use-web-workers-in-reactjs-a-step-by-step-guide) is entirely dependent on the idea of JavaScript’s code execution. So, before you begin, you should understand how JavaScript code is executed. JavaScript is single-threaded in nature: when one piece of code completes its execution, the next one starts, and so on. This eventually leads to long delays and procedures becoming stalled. If one of several tasks has to be performed as soon as possible because subsequent processes rely on it, the app's performance will suffer as a result. This is where web workers come into play to alleviate such difficulties. Web workers offer multi-threaded capability in JavaScript development. Furthermore, they remove the need for extended delays in process execution, making the app's performance quicker and smoother.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j5f0h5lg7bnnp1xpr64b.png)

C#, Java, C++, and Emerald are some examples of languages that offer parallel programming features and contribute to smoother functionality. Let’s learn how to use a web worker in React.

## What is a Web Worker?

A Web Worker is a browser feature that allows JavaScript code to execute in a separate background thread, freeing up the main thread to handle user interactions and refresh the UI. Web Workers are commonly used to perform computationally intensive tasks such as image processing or complex calculations, without blocking the user interface.

In React, Web Workers can be created using the Worker API, which is a standard API provided by modern browsers. The Worker API allows you to create a new Web Worker instance from a JavaScript file, which can then be used to execute code in a separate thread. To use a Web Worker in React, you typically create a new Worker instance in your component's componentDidMount lifecycle method, passing in the path to the JavaScript file that contains the worker code. You can then send messages to the worker using the postMessage method and receive messages back from the worker using the onmessage event handler.

## How does the Web Worker API interact between the main thread and the worker thread?

Communication between the worker thread and the main thread occurs via the Web Worker API in the following way.

Step – 1: A Worker instance is created, passing the path of a worker script.

Step – 2: The main thread uses postMessage() to deliver a message for connection initiation, while the worker thread waits for that message using onmessage().

Step – 3: When the worker thread delivers a message using postMessage, the main thread receives it using onmessage.

Step – 4: In the event of an error, there is an onerror callback. It will capture errors from the worker thread.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gydbghznqh7gfkuk5id9.png)

## How to use Web Worker in React?

Let’s understand web workers practically with an example. Here, we will create a new Web Worker instance in our component's componentDidMount, passing the path to the worker script (Worker.js). We listen for messages from the worker there, and send a message to the worker from the handleCalculate method using postMessage. First, we will create a loop executor for num: we will pass the number in the input field, and on the button click we will iterate the loop.
On each iteration, we will display the calculated "Loaded" message on the page. Web Workers are an attempt to bring multi-threaded behavior to JavaScript.

### App.js

```
import React from 'react'
import { Component } from 'react'

class App extends Component {
  constructor() {
    super()
    this.state = {
      num: '',
      result: '',
      loadingMessage: ''
    }
    this.handleCount = this.handleCount.bind(this)
    this.handleCalculate = this.handleCalculate.bind(this)
  }

  handleCount(e) {
    this.setState({ num: e.target.value })
  }

  handleCalculate() {
    const { num } = this.state
    let result = 0;
    for (let i = 1; i <= num; i++) {
      this.setState({ loadingMessage: `Loaded ${i} / ${num}` })
      for (let j = 0; j < i; j++) {
        result++
      }
    }
    this.setState({ result })
  }

  render() {
    return (
```

We've used a number for the input box and a loadingMessage to display previously loaded numbers. When we enter a small range, it will show normally without interrupting the page's interface. The result is as follows:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ibh2nuzne2ekaspg6hh.png)

But when we input a lengthy value for the iterating loop, the page UI is blocked. After a while, the page will stop responding. Here, we placed 500000000 into the text field; after a while, it will display the unresponsive page option. The result is shown below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mw8hkn6yqb99eczuorid.png)

Now, we are going to do the same task using the web worker in React.

### App.js

```
import React from 'react'
import { Component } from 'react'
import workerScript from './Worker'

class App extends Component {
  constructor() {
    super()
    this.state = {
      num: '',
      result: '',
      loadingMessage: ''
    }
    this.handleCount = this.handleCount.bind(this)
    this.handleCalculate = this.handleCalculate.bind(this)
  }

  componentDidMount() {
    this.worker = new Worker(workerScript)
    this.worker.addEventListener('message', e => {
      const type = e.data.type;
      if (type === 'loading') {
        const { i, num } = e.data;
        this.setState({ loadingMessage: `Loaded ${i} / ${num}` })
      } else {
        const { result } = e.data;
        this.setState({ result })
      }
    })
  }

  handleCount(e) {
    this.setState({ num: e.target.value })
  }

  handleCalculate() {
    const { num } = this.state
    this.worker.postMessage(num)
  }

  render() {
    return (
```

In this app component, we have imported the worker script and, in the handleCalculate method, we pass the number to the worker defined in the Worker.js file. Whatever logic we wrote on the button click event before the worker, we must now implement in the worker file.

### Worker.js

```
const loopworker = () => {
  onmessage = (e) => {
    const num = e.data;
    let result = 0;
    for (let i = 1; i <= num; i++) {
      const data = { type: 'loading', i, num }
      postMessage(JSON.parse(JSON.stringify(data)))
      for (let j = 0; j < i; j++) {
        result++;
      }
    }
    const data = { type: 'result', result }
    postMessage(JSON.parse(JSON.stringify(data)))
  }
}

let code = loopworker.toString()
code = code.substring(code.indexOf("{") + 1, code.lastIndexOf("}"))

const blob = new Blob([code], { type: 'application/javascript' })
const workerScript = URL.createObjectURL(blob)

module.exports = workerScript;
```

Let’s understand some of the crucial keywords that we used above.

### self:

The worker code above calls onmessage and postMessage directly; these are properties of `self`, the worker's global scope (a DedicatedWorkerGlobalScope in this example), which also exposes the worker's other event handlers.
Nonetheless, it adheres to the same dynamic binding rules as the rest of the JavaScript environment. You don't have to write it everywhere, but use `self` when you wish to refer explicitly to the worker's global context.

### e.data:

Any values passed from the app component can be accessed through the e.data object.

### postMessage:

postMessage is used to communicate with workers.

This brings us to the end of this blog. We hope you found this React.js tutorial useful. Please [check our latest update](https://www.ifourtechnolab.com/blog) for more articles like this.

## React Web Worker: Conclusion

Web Worker in React is an important topic for React developers since it allows them to run multi-threaded programs. Understanding web workers in React.js is solely based on the JavaScript code execution concept. Hence, before you start, you should grasp how JavaScript code works. JavaScript is a single-threaded language. Web workers enable multi-threaded JavaScript programming. Additionally, they eliminate the need for lengthy pauses in process execution, resulting in faster and smoother app performance. This blog has covered the fundamentals of web workers, how their API communicates between the main thread and the worker thread, and a practical demonstration of how to use a web worker with React.
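The example above uses a class component. As a complementary sketch only (not code from the original tutorial), the same pattern can be written with hooks, reusing the workerScript export from Worker.js; the component name Counter is hypothetical:

```
import React, { useEffect, useRef, useState } from 'react'
import workerScript from './Worker'

function Counter() {
  const workerRef = useRef(null)
  const [num, setNum] = useState('')
  const [loadingMessage, setLoadingMessage] = useState('')
  const [result, setResult] = useState('')

  useEffect(() => {
    // Create the worker once on mount and terminate it on unmount.
    workerRef.current = new Worker(workerScript)
    workerRef.current.addEventListener('message', e => {
      if (e.data.type === 'loading') {
        setLoadingMessage(`Loaded ${e.data.i} / ${e.data.num}`)
      } else {
        setResult(e.data.result)
      }
    })
    return () => workerRef.current.terminate()
  }, [])

  return (
    <div>
      <input value={num} onChange={e => setNum(e.target.value)} />
      <button onClick={() => workerRef.current.postMessage(num)}>Calculate</button>
      <p>{loadingMessage}</p>
      <p>{result}</p>
    </div>
  )
}

export default Counter
```

Either way, the heavy loop runs off the main thread, so the UI stays responsive while the worker counts.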
ifourtechnolab
1,861,572
My Cleaners: Your Trusted Choice for the Best Dry Cleaners in Varanasi
In the vibrant streets of Varanasi, where tradition dances with modernity, stands a beacon of garment...
0
2024-05-22T11:35:02
https://dev.to/sunny_kumar_977fafbe9bc42/my-cleaners-your-trusted-choice-for-the-best-dry-cleaners-in-varanasi-32k8
In the vibrant streets of Varanasi, where tradition dances with modernity, stands a beacon of garment care excellence – My Cleaners. Renowned as the city's premier dry cleaning establishment, My Cleaners has earned the trust and loyalty of Varanasi's residents through its unwavering commitment to quality, reliability, and exceptional service. At My Cleaners, we understand the importance of maintaining the pristine condition of your garments. Whether it's your favorite silk saree, a tailored suit, or delicate wedding attire, our expert team is dedicated to ensuring that your clothes receive the utmost care and attention they deserve. What sets My Cleaners apart is our relentless pursuit of excellence. We utilize state-of-the-art equipment and eco-friendly cleaning solutions to deliver impeccable results while minimising our environmental footprint. Each garment undergoes a thorough inspection and personalised treatment to address specific stains and fabric requirements, ensuring a thorough and gentle cleaning process. Our team of skilled professionals possesses years of experience in the art of garment care. From intricate hand embroidery to delicate lacework, we have the expertise to handle even the most delicate fabrics with precision and care. With My Cleaners, you can trust that your garments are in safe hands. Customer satisfaction is at the heart of everything we do at My Cleaners. We understand that your time is precious, which is why we offer convenient pickup and delivery services to ensure a hassle-free experience. Whether you're a busy professional or a bride-to-be preparing for your special day, we strive to make the dry cleaning process as seamless and stress-free as possible. But don't just take our word for it – our loyal customers speak for themselves. With glowing testimonials and rave reviews, My Cleaners has earned its reputation as the best dry cleaners in Varanasi. Time and time again, customers return to us for our consistent quality, attention to detail, and dedication to customer satisfaction. In addition to our dry cleaning services, My Cleaners also offers a range of additional services to meet all your garment care needs. From expert alterations and repairs to shoe cleaning and leather care, we are your one-stop destination for all things garment care-related. As we look to the future, My Cleaners remains committed to raising the bar for excellence in the dry cleaning industry. We continually invest in staff training, technology upgrades, and sustainability initiatives to ensure that we remain at the forefront of our field. In conclusion, if you're in search of the [best dry cleaners in Varanasi](https://g.co/kgs/91ZujX2), look no further than My Cleaners. With our unwavering commitment to quality, reliability, and customer satisfaction, we are proud to be your trusted choice for all your garment care needs.
sunny_kumar_977fafbe9bc42
1,862,332
Top 10 Trends in BDD Testing 2024
Behavior-driven development (BDD) has revolutionized software testing by fostering clear...
0
2024-05-23T04:13:26
https://dev.to/ngocninh123/top-10-trends-in-bdd-testing-2024-2l30
testing, trends
Behavior-driven development (BDD) has revolutionized software testing by fostering clear communication and collaboration between development teams and stakeholders. The emergence of new BDD testing tools and platforms promises to amplify these benefits by streamlining collaboration between business stakeholders and development teams even further. Even though new challenges will appear as BDD testing technology evolves, it's always best to stay ahead of these trends. Let's explore the top 10 trends and future directions for BDD testing and examine how it can continue to deliver value in the years to come. ## Top 10 Trends in BDD Testing The landscape of software development is constantly shifting, and BDD testing needs to adapt to stay relevant. Here, we'll explore some exciting trends that are shaping the future of BDD, helping it deliver even greater value to your projects. ### Increased Automation and AI Integration The future of BDD testing is poised for a significant shift towards test automation, with BDD tools evolving to handle increasingly complex test scenarios autonomously. This automation will be driven by advancements in Artificial Intelligence (AI) and Machine Learning (ML), allowing these technologies to not only generate test cases from user stories and requirements but also execute and maintain them automatically. It is worth mentioning that [AI can improve the quality of BDD scenarios](https://www.capgemini.com/nl-nl/wp-content/uploads/sites/7/2017/07/test_automation_brochure_2014-04-09_v04.pdf) by suggesting refinements and identifying areas for improvement, similar to how AI can enhance the quality of traditional automated tests. ### Enhanced Collaboration Tools A wave of innovative collaboration tools and platforms is poised to reshape the future of BDD testing. These advancements promise to bridge the gap between business stakeholders and development teams even more effectively. Imagine interactive dashboards where stakeholders can visualize BDD scenarios, provide real-time feedback, and actively participate in requirements gathering. This will streamline communication and foster transparency throughout the process. ### Shift-Left Testing The concept of "shifting left"—prioritizing testing activities in the earlier stages of development—is poised to become even more ingrained within BDD practices. This means BDD scenarios will no longer be defined late in the development cycle; instead, they will be written and validated much earlier. This proactive approach brings a significant advantage: quicker feedback from stakeholders on the envisioned functionalities, allowing for course corrections and defect identification early on. As a result, the development process will benefit from fewer bugs and a smoother overall experience. ### BDD for Microservices and Distributed Systems As the software development landscape embraces microservices and distributed systems, BDD testing will undergo a necessary evolution to navigate these complex architectures effectively. An article has explored how an [interface-oriented BDD](https://www.infoq.com/articles/microservices-bdd-interface-oriented/) approach can effectively test microservices and distributed systems. By focusing on the overall system behavior from a user perspective, BDD helps ensure seamless interactions between independent services. ### CI/CD Integration BDD will be more tightly integrated with Continuous Integration and Continuous Deployment (CI/CD) pipelines.
This powerful combination will usher in an era of continuous testing and deployment, where BDD scenarios are seamlessly integrated into the CI/CD workflow. This integration will act as a constant quality safeguard throughout the development cycle by automatically validating BDD scenarios with every code change. Not only does it ensure high-quality standards for the final product, but it also allows for early detection and resolution of issues, ultimately leading to faster and more reliable deployments. ### Cloud-Native BDD Solutions As cloud computing continues its meteoric rise, BDD tools and frameworks are set to evolve alongside it, offering an exciting wave of cloud-native solutions. These cloud-based test platforms, such as [CucumberStudio](https://support.smartbear.com/testcomplete/docs/working-with/integration/cucumberstudio/index.html), will leverage the power of the cloud to provide users with scalable, on-demand testing environments. This shift will significantly improve the way teams manage and execute BDD scenarios. Gone will be the days of cumbersome on-premise setups. Cloud-based BDD will offer a flexible, efficient way to conduct testing, empowering teams to adapt to changing project needs with ease. ### Enhanced Reporting and Analytics Building on BDD testing's collaborative strengths, the future of the methodology lies in the development of more advanced reporting and analytics capabilities within the tools themselves. This exciting advancement will provide teams with a wealth of data and deeper insights. Imagine being able to analyze test coverage with pinpoint accuracy, assess the performance of individual scenarios, and identify potential areas for improvement – all through robust analytics. This newfound data-driven approach will empower teams to make informed decisions throughout the development lifecycle, ultimately leading to a more optimized and efficient testing process. ### BDD testing for Non-Functional Requirements Beyond its current focus on functional requirements, BDD is poised for an exciting expansion. The methodology will encompass non-functional requirements as well, casting a wider net to ensure a more holistic approach to software quality. This means BDD will not only verify core functionalities but also delve into critical aspects like performance, security, and user experience. This comprehensive testing approach will lead to the creation of robust and user-friendly software that meets all stakeholders' needs. For example, BDD security testing has already been a thing for years. [A published article](https://devqa.io/testing/) has pointed out the importance of incorporating security considerations into BDD scenarios. By defining scenarios that explore potential attack vectors and expected system behavior under such conditions, testers can proactively identify and address security vulnerabilities. ### Cross-Platform and Mobile BDD Testing As the demand for cross-platform and mobile applications explodes, BDD testing tools are poised to evolve alongside them. This means we can expect significant improvements in how these tools handle the complexities of these environments, making them the solutions to mobile challenges. This enhanced support will not only streamline the testing process for mobile and cross-platform applications but also ensure comprehensive testing across diverse user experiences. 
### Community and Open Source Contributions Fueled by a [growing BDD community](https://www.behaviourdriven.org/), the future of software testing promises a vibrant ecosystem brimming with open-source contributions and collaborative projects. This collaborative spirit will act as a catalyst for innovation, driving the development of next-generation tools, frameworks, and best practices. As a result, BDD will continue to evolve and adapt to meet the ever-changing needs of the software development landscape. Here's an infographic summarizing the BDD testing trends discussed above: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/14c3gpmnhjp58ks19ndf.jpg) Read more about BDD testing trends [here](https://www.hdwebsoft.com/blog/top-10-trends-in-bdd-testing-2024.html). To sum up... By embracing these trends, BDD testing can evolve alongside the SDLC, ensuring its continued role as a cornerstone of high-quality software development. Staying informed and adapting your BDD practices will allow you to leverage these advancements for a more streamlined, collaborative, and efficient testing experience. The future of BDD is bright, and its impact on the SDLC promises to be transformative. 😉 You might also like: [How to choose a suitable BDD testing tool](https://www.hdwebsoft.com/blog/how-to-choose-a-suitable-bdd-testing-tool.html) [What is the difference between TDD and BDD?](https://www.hdwebsoft.com/blog/tdd-vs-bdd-what-is-the-difference.html)
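As a small, concrete illustration of the scenario style referred to throughout this post, applied here to the security and non-functional angle, a Gherkin scenario might look like the following (the feature, steps, and threshold are hypothetical and not taken from any cited source):

```gherkin
Feature: Account lockout after repeated failed logins
  # Hypothetical security-focused BDD scenario of the kind described above
  Scenario: Lock the account after too many invalid passwords
    Given a registered user with the email "alice@example.com"
    When 5 consecutive invalid passwords are submitted for that user
    Then the account is temporarily locked
    And a security alert is sent to the account owner
```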
ngocninh123
1,862,379
Embracing the Style Web: Exploring the World of SP5DER Clothing
Embracing the Style Web: Exploring the World of SP5DER Clothing At sp5der clothing, we're not just...
0
2024-05-23T04:47:36
https://dev.to/larrypage/embracing-the-style-web-exploring-the-world-of-sp5der-clothing-h1a
Embracing the Style Web: Exploring the World of SP5DER Clothing At sp5der clothing, we're not just about clothing; we're about crafting an experience, a lifestyle that transcends the ordinary. From the threads we choose to the intricate designs we create, every aspect of sp5der clothing is a testament to our commitment to quality, style, and innovation. Join us as we delve into the captivating world of sp5der. Clothing and discover what sets us apart from the rest. In the realm of fashion, sp5der Clothing stands as a beacon of innovation and sophistication. From its inception, the brand has continually pushed the boundaries of contemporary fashion, offering a diverse range of apparel that seamlessly blends style with comfort. With a keen eye for detail and a commitment to quality, sp5der Clothing has emerged as a frontrunner in the world of modern fashion. Unveiling SP5DER: A Brief Overview sp5der clothing isn't just a brand; it's a movement, a revolution in the world of fashion. Founded on the principles of creativity and individuality, sp5der was born out of a desire to break free from the constraints of conventional clothing and redefine what it means to be stylish. Our journey began with a vision to merge cutting-edge design with unparalleled comfort, and today, we stand as pioneers in the industry, leading the way with our innovative approach to fashion. sp5der is not just a brand; it's a lifestyle. Founded on the principles of creativity, individuality, and self-expression, Clothing caters to individuals who seek to make a statement with their wardrobe. Each piece is meticulously crafted to reflect the brand's commitment to innovation and excellence, ensuring that every wearer feels confident and empowered. Materials and Quality Standards At the heart of sp5der clothing lies a relentless dedication to quality. We meticulously source the finest materials, ensuring that every piece of clothing that bears the sp5der name meets our rigorous standards of excellence. From luxurious fabrics to durable stitching, every detail is carefully crafted to deliver an unparalleled experience to our customers. When you choose sp5der, you're not just buying clothing; you're investing in quality that lasts a lifetime. The Dynamic Range of Colour One of the defining features of sp5der clothing is its vibrant color collection. From bold hues to subtle tones, each garment is available in a spectrum of colors to suit every preference and personality. Whether you're drawn to classic neutrals or crave something more eye-catching, sp5der Clothing has you covered. Color is more than just a visual element; it's a form of self-expression. That's why at sp5der, we offer a dynamic range of colors to suit every taste and personality. Whether you're drawn to vibrant hues that make a statement or subtle shades that exude elegance, our extensive palette ensures that you'll find the perfect color to complement your style. With sp5der, your wardrobe is limited only by your imagination. How SP5DER Blends Style with Functionality Style should never come at the expense of comfort or practicality. At spider clothing, fashion should enhance your life, not hinder it. That's why each piece in our collection is carefully designed to marry style with functionality. From breathable fabrics that keep you cool and comfortable to thoughtful details that add both flair and utility, clothing is designed to integrate into your everyday life seamlessly. 
Clothing understands that everybody is unique, which is why they offer a diverse range of sizes to ensure the perfect fit for every customer. With an emphasis on inclusivity and diversity, the brand strives to provide options that cater to all body types, ensuring that everyone can feel confident and comfortable in their clothing. From casual outings to formal occasions, sp5der clothing has apparel for every event on your calendar. Whether you're dressing up for a night on the town or keeping it casual for a weekend brunch, you'll find that sp5der offers versatile options that seamlessly transition from day to night. Eco-Friendly Affordable Despite its reputation for quality and style, sp5der clothing remains committed to offering budget-friendly options for its customers. Whether you're shopping for everyday essentials or statement pieces, you'll find that sp5der Clothing offers affordable options without compromising on quality or style. We believe that fashion should be accessible to all without compromising our planet's well-being. That's why sustainability lies at the core of everything we do at sp5der. From using eco-friendly materials to implementing ethical manufacturing practices, we're committed to reducing our environmental footprint while still offering affordable, high-quality clothing to our customers. With sp5der, you can look good and feel good knowing that you're making a positive impact on the world. Caring and Washing Tips: To ensure that your sp5der clothing garments maintain their quality and longevity, it's essential to follow proper care instructions. For best results, always check the garment's care label for specific washing and drying instructions. In general, most Sp5der Clothing pieces can be machine washed in cold water and tumble-dried on low heat. Avoid using harsh detergents or bleach, as these can damage the fabric and cause colors to fade. Additionally, consider air drying your garments whenever possible to help preserve their shape and color. Customer Reviews The reviews are in, and customers can't stop raving about sp5der Clothing. From the impeccable quality to the stylish designs, customers praise sp5der Clothing for its commitment to excellence and customer satisfaction. Many reviewers also highlight the brand's inclusivity and diverse sizing option Fashioning Identity: SP5DER Influence Clothing is more than just fabric; it's a reflection of who we are. At Sp5der, we understand the power of fashion to shape identity and inspire confidence. That's why each piece in our collection is designed to empower you to express yourself boldly and authentically. Whether you're making a statement with a bold print or keeping it classic with timeless essentials, Sp5der clothing is your canvas for self-expression. Fashioning Confidence Confidence is the ultimate accessory, and at sp5der, we're here to help you cultivate it. Our thoughtfully designed clothing is not just about looking good; it's about feeling confident in your skin. From flattering cuts to attention-grabbing details, every aspect of sp5der clothing is designed to boost your confidence and make you feel unstoppable. With sp5der, confidence is always in style. Where Fashion Meets Convenience Fashion and convenience come together in sp5der 's clothing. Our designs are stylish and easy to wear. Whether you're heading to work or out with friends, our clothes are versatile. You don't have to sacrifice style for comfort. With sp5der , you can have both. 
Our pieces transition seamlessly from day to night, so you can focus on enjoying your day without worrying about your outfit. Fashion should make your life easier, not more complicated. That's why we design clothes that are as practical as they are fashionable. With sp5der, you'll always look and feel your best. We understand that life is busy, and your wardrobe should keep up. That's why at sp5der, we prioritize convenience without compromising style. Whether you're dressing for a boardroom meeting or a weekend adventure, our versatile pieces effortlessly transition from day to night, so you can focus on living your life to the fullest. With sp5der, fashion meets convenience in perfect harmony. Custom Couture: Personalized Fashion with SP5DER Your style is as unique as you are, and at sp5der clothing, we celebrate individuality. That's why we offer custom couture services that allow you to create a one-of-a-kind piece that reflects your style and taste. From choosing fabrics to selecting details, our experienced team works closely with you to bring your vision to life. With sp5der, your style is limited only by your imagination. Whether you're a longtime fan or a first-time shopper, sp5der Clothing is sure to exceed your expectations. Where to Buy the Brilliant Clothes sp5der clothing is available for purchase online through the brand's official website, where you'll find the latest collections and exclusive deals. Clothing can also be found at select retailers and boutiques worldwide, making it easier than ever to elevate your wardrobe with sp5der style. You can purchase our brilliant clothes directly from our official website, where you'll find our full range of stylish and high-quality garments. Simply browse through our online store, select the items you love, and proceed to checkout. We offer secure payment options and worldwide shipping, so you can enjoy our exceptional clothing no matter where you are. Additionally, keep an eye out for exclusive promotions and discounts available only to our online customers. Experience the convenience of shopping from the comfort of your home and elevate your wardrobe with sp5der clothing today. Conclusion In a world where fashion trends come and go, sp5der clothing stands the test of time. With its commitment to quality, style, and inclusivity, sp5der continues to redefine modern fashion and inspire individuals to embrace their unique sense of style. Whether you're dressing for a casual day out or a special occasion, Clothing has the perfect piece to help you look and feel your best. Experience the difference of sp5der clothing and elevate your wardrobe today!
larrypage
1,862,496
Asynchronous Programming in JavaScript
JavaScript, being a single-threaded language, can only process one task at a time. This...
0
2024-05-23T18:55:31
https://dev.to/w4ll-3/programacion-asincronica-en-javascript-46c5
javascript, node, react, asynchronous
JavaScript, being a single-threaded language, can only process one task at a time. This can lead to long waiting times for complex tasks, since the script cannot execute any other task until the current one finishes. To address this problem, JavaScript offers asynchronous programming, allowing the script to keep executing other tasks while it waits for an asynchronous task to complete. In this blog, we will explore the basics of asynchronous programming in JavaScript and how it can be implemented using callback functions, promises, and async/await. ## Callback Functions A callback function is a function that is passed as an argument to another function and is executed after the main function has finished. Callbacks are used in asynchronous programming to perform an action once a task has completed. **For example, consider the following code:** ``` function slowTask(callback) { setTimeout(() => { console.log("Slow task completed."); callback(); }, 1000); } function runProgram() { console.log("Program started."); slowTask(() => { console.log("Callback function executed."); }); console.log("Program ended."); } runProgram(); ``` In this example, the **slowTask** function takes a callback as an argument. The **slowTask** function uses **setTimeout** to delay the execution of a task by one second. The **runProgram** function calls **slowTask** and passes a callback function as an argument. In addition, the **runProgram** function logs "Program started." and "Program ended.". When the **slowTask** function completes, it logs "Slow task completed." and executes the callback function, which logs "Callback function executed." ``` Program started. Program ended. Slow task completed. Callback function executed. ``` ## Promises Promises are a more modern approach to asynchronous programming in JavaScript. A promise represents the result of an **asynchronous** operation and can be in one of three states: pending, fulfilled, or rejected. A promise can be created using the **Promise** constructor, and its outcome can be handled using the **then** and **catch** methods. **For example:** ``` const slowTask = new Promise((resolve, reject) => { setTimeout(() => { resolve("Slow task completed."); }, 1000); }); function runProgram() { console.log("Program started."); slowTask .then((result) => { console.log(result); }) .catch((error) => { console.error(error); }); console.log("Program ended."); } runProgram(); ``` In this example, **slowTask** is a promise that resolves after one second with the message "Slow task completed.". The **runProgram** function uses the **then** method on **slowTask** to log the result when the promise is fulfilled. **The output will be:** ``` Program started. Program ended. Slow task completed. ``` ## Async/Await **Async/await** is the most recent and most readable way to handle **asynchronous** operations in JavaScript. It allows developers to write asynchronous code that resembles synchronous code, making it easier to understand and maintain. The **async** keyword is used to declare an asynchronous function, and the **await** keyword is used to wait for a promise to resolve.
Below is an **example demonstrating the use of async/await in JavaScript:** ``` async function fetchData() { const response = await fetch("https://api.example.com/data"); const data = await response.json(); console.log(data); } fetchData(); ``` In this example, the **fetchData** function is declared as asynchronous using the **async** keyword. The function uses **fetch** to retrieve data from an **API** and **await** to wait for the fetch operation to complete. Then, the response is transformed into a JSON object using **response.json().** The **await** keyword is used to wait for the JSON transformation to complete, and the final result is logged to the console. It is important to keep in mind that the code inside an asynchronous function runs asynchronously, but the code outside the function continues to run synchronously. In addition, the **await** keyword can only be used inside an asynchronous function. In conclusion, asynchronous programming in JavaScript allows the script to keep executing other tasks while waiting for an asynchronous task to complete. Callback functions, promises, and **async/await** are three ways to achieve asynchronous programming in JavaScript. Callback functions are the simplest and most basic way to handle asynchronous operations. Promises offer a more modern and flexible approach. Async/await provides the most readable way to handle asynchronous operations and is the recommended method for modern JavaScript programming. Understanding asynchronous programming in JavaScript is crucial for building efficient and responsive applications. It is an essential skill for any JavaScript developer.
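To complement the `.catch` example in the promises section, here is a minimal sketch of error handling with async/await using try/catch; it reuses the same placeholder API URL from the example above:

```javascript
async function fetchData() {
  try {
    const response = await fetch("https://api.example.com/data");
    const data = await response.json();
    console.log(data);
  } catch (error) {
    // Any rejected promise awaited inside the try block lands here
    console.error("Request failed:", error);
  }
}

fetchData();
```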
w4ll-3
1,863,520
Unlock TikTok Shop Success: Proven Strategies to Get Your Products Approved and Selling
Embarking on the journey to get your items approved and uploaded on TikTok Shop can be a rewarding...
0
2024-05-24T03:53:13
https://dev.to/epakconsultant/unlock-tiktok-shop-success-proven-strategies-to-get-your-products-approved-and-selling-17dd
tiktok
Embarking on the journey to get your items approved and uploaded on TikTok Shop can be a rewarding yet challenging process. Understanding the intricacies of TikTok's approval guidelines, optimizing product listings, creating compelling videos, and leveraging various features are essential steps in achieving success on this platform. In this article, we delve into the strategies and best practices that can help you navigate the TikTok Shop approval process effectively, ensuring that your products receive the green light for upload and reach a wider audience. Let's explore the key elements that contribute to a successful TikTok Shop presence. [Crypto Conquest: Mastering Market Mechanics for Profitable Trading](https://www.amazon.com/dp/B0CW1HL6SM) ## Understanding TikTok Shop Approval Process So, you want to be a TikTok Shop star! But hold your horses, cowboy. Before you start dreaming of TikTok fame, let's break down the mysterious TikTok Shop approval process. It's like trying to get into that exclusive club where all the cool kids hang out. But instead of a bouncer judging your outfit, it's TikTok scrutinizing your products. ## Overview of TikTok Shop Approval Guidelines Picture this: TikTok is like a strict teacher with a red pen, ready to mark up your homework. To avoid getting a big fat "Rejected" stamp, you need to know the rules. Think of it as playing by TikTok's game, but with a bit more fashion and less Monopoly. ## Common Reasons for Rejection Uh-oh, your products got the boot! Don't worry; it happens to the best of us. Maybe your product images were blurrier than a Bigfoot sighting, or your descriptions were dryer than last week's toast. Let's dissect the common reasons why TikTok might send you packing. ## Optimizing Product Listings for Approval It's time to put on your marketing hat and get your products ready for their TikTok close-up. Think of it as prepping your star for their big red carpet moment. With a sprinkle of magic and a dash of pizzazz, your products will be ready for their TikTok debut in no time. ## Creating High-Quality Product Images Blurry, pixelated images are so last season. It's all about crisp, clear visuals that make your products pop like confetti at a party. Your product images need to be so sharp, they could cut through the competition like a hot knife through butter. ## Writing Compelling Product Descriptions Don't bore TikTok with a snooze-fest of a product description. Inject some personality, sprinkle some sass, and make those words dance off the screen. Your descriptions should hook TikTok users faster than a cat video on the internet. ## Creating Compelling Product Videos for TikTok Lights, camera, action! It's time to bring your products to life with the magic of video. Think of it as your own mini Spielberg moment, but with more filters and fewer dinosaurs. Get ready to showcase your products in a way that'll make TikTok users stop scrolling and start shopping. ## Best Practices for Product Video Creation Want to create videos that make TikTok users go "wow"? It's all about nailing the basics: good lighting, smooth shots, and a sprinkle of creativity. Your product videos should be as captivating as a good mystery novel, keeping TikTok users on the edge of their seats. ## Optimizing Videos for TikTok's Algorithms TikTok's algorithms are like the cool kids at school—you want to impress them to get ahead. 
By understanding what makes TikTok tick (pun intended), you can optimize your videos to reach more users and increase your chances of approval. ## Utilizing TikTok Features to Enhance Approval Time to take your TikTok game to the next level and stand out from the crowd. It's like adding extra sprinkles on your ice cream sundae—totally unnecessary but oh-so-delicious. By leveraging TikTok's features, you can boost your chances of approval and turn heads in the TikTok universe. ## Engaging with TikTok's Community TikTok isn't just a platform; it's a community of trendsetters, creators, and fun-loving individuals. By engaging with TikTok users, you're not just selling products; you're building relationships. It's like being the life of the party without even leaving your couch. ## Utilizing TikTok Ads for Product Promotion Sometimes you need a little boost to get your products noticed. That's where TikTok ads come in like a knight in shining armor. With targeted ads, you can reach the right audience and give your products the spotlight they deserve. It's like hiring a personal PR agent for your products. ## Leveraging Influencers and Partnerships for Approval Success Getting your items approved and uploaded on TikTok Shop can be a game-changer for your business. One way to increase your chances of success is by leveraging influencers and partnerships. These strategies can help you navigate the approval process more smoothly and effectively. ## Identifying and Collaborating with Influencers Influencers on TikTok can have a significant impact on the success of your items on TikTok Shop. Identifying the right influencers who align with your brand and products is crucial. Look for influencers who have an engaged following that matches your target audience. Collaborating with influencers can help create authentic and engaging content that showcases your items in a relatable way. This can increase the chances of approval and drive more traffic to your TikTok Shop. ## Building Strategic Partnerships for Approval Support Building strategic partnerships with other businesses or organizations can also boost your chances of approval on TikTok Shop. Partnering with complementary brands can not only help expand your reach but also lend credibility to your items. Collaborating with partners who have experience navigating the approval process can provide valuable insights and support. Together, you can work towards getting all items approved and uploaded successfully on TikTok Shop. Closing Thoughts By implementing the insights and strategies outlined in this article, you can enhance your chances of getting all your items approved and uploaded on TikTok Shop. Remember to stay informed about TikTok's guidelines, continuously optimize your product listings and videos, and leverage the platform's features to engage with your audience effectively. Whether you're a budding entrepreneur or an established brand, navigating the TikTok Shop approval process can be a rewarding experience with the right approach. Stay creative, stay persistent, and watch your products shine on TikTok Shop!
epakconsultant
1,864,173
New shell scripting language, a new tablet, and in-product messaging
Amber, a programming language that compiles to BASH The slippery slope of in-product...
0
2024-05-24T16:15:26
https://dev.to/rudolfolah/new-shell-scripting-language-a-new-tablet-and-in-product-messaging-5g5h
bash, ipad, product, marketing
* [Amber, a programming language that compiles to BASH](https://amber-lang.com/) * [The slippery slope of in-product messaging ](https://matthewstrom.com/writing/wayfinding/) * [Daylight Computer, an alternative to Kindle and iPad?](https://daylightcomputer.com/product) ## Shell scripting times a million [Amber is a new programming language that allows you to write code converted into BASH shell scripts.](https://amber-lang.com/) The language's syntax is a mix of JavaScript, Python, and something else (Kotlin? Rust?). The idea is to write in a language with a modern syntax, type-safety, and safety features. One of those safety features forces you to write code to handle failures whenever you run a shell command. The syntax also makes it more apparent that command failures are being handled: ``` let files = ["config.json", "file.txt", "audio.mp3"] loop index, file in files { $mv {file} {index}{file}$ failed { echo "Failed to rename {file}" } } ``` If you're only occasionally writing shell scripts, Amber may not be a priority for you. In such cases, linting tools like [ShellCheck](https://www.shellcheck.net/) could be more beneficial. However, if you find yourself frequently writing shell scripts, to the point where you're considering Python or Ruby for better re-usability, then Amber is definitely worth your attention. ## In-app tour guides are marketing [In *"The Slippery Slope of In-Product Messaging,"* designer Matthew Ström explains that in-product messaging, those product tours you see in web apps, could be a symptom of a "bad" design.](https://matthewstrom.com/writing/wayfinding/) It's an interesting take, though I can't entirely agree. In-product messaging helps guide users/customers through new product features, regardless of whether the new feature's user experience is well-designed. ![screenshot of app cues](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtyt9nzd2g91833pv453.png) In-product messaging is another marketing touchpoint that informs the customer about a feature that may not be part of their typical workflow. Not to mention, once a customer/user has seen them, they won't reappear. Think about it this way: from a B2B SaaS perspective, the customer gets an email about a new feature release, logs into the app, and sees a small notification icon or a banner on the page about the new feature release. They go to some other page and get invited to go through a tour of the feature. This kind of repetition is needed even for well-designed user experiences. ## A tablet you might use every single day [The Daylight Computer is a mix between the Kindle and the iPad.](https://daylightcomputer.com/product) The screen resembles the Kindle screen, and it has the form factor of an iPad, including a stylus. It is extremely interesting that the screen's refresh rate is much higher than the typical ebook reader. It's high enough to be a tablet. The Daylight Computer, DC-1, runs Android, so you can run any Android tablet app on it with a kinder and gentler screen for the eyes. Screen glare is not a problem with the DC-1. ![screenshot of daylight computer and ipad comparison](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/id2m84khzgvgxj0blx6i.png) I've bought a few tablets before and used them for reading, watching movies (and sometimes video games and coding). I use them once in a while. I also have a Kindle and use it every day. It will be interesting to see whether the DC-1 lives up to the hype.
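Returning to the Amber example in the first item above: here is a rough plain-BASH sketch of the same rename loop with explicit failure handling. It is only an illustration of the kind of script Amber produces for you, not the compiler's actual output:

```bash
#!/usr/bin/env bash
files=("config.json" "file.txt" "audio.mp3")
index=0
for file in "${files[@]}"; do
  # Amber forces you to handle this failure; in plain bash it is easy to forget
  if ! mv "$file" "${index}${file}"; then
    echo "Failed to rename $file"
  fi
  index=$((index + 1))
done
```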
rudolfolah
1,866,002
ChatGPT and how it is a great learning tool
Hello everybody👋! In this article I would like to show you my process in learning a new technology...
0
2024-05-27T01:00:13
https://dev.to/slydragonn/chatgpt-and-how-it-is-a-great-learning-tool-1pk2
chatgpt, learning, webdev, beginners
**Hello everybody👋!** In this article I would like to show you my process in learning a new technology with ChatGPT, which without a doubt nowadays is one of the best tools for learning, specifically I am learning to create desktop applications with electron.js since I want to focus more on a web stack, so let's start. First of all, I am currently using the free version of ChatGPT, but it is still incredibly powerful and complete and now even more so thanks to the limited inclusion of the GPT-4o model. The new power of GPT-4o allows me to learn a wide variety of topics with just one input, which we can appreciate when I ask the following question: ### > What is Electron.js? --- *ChatGPT-4o response:* *Electron.js is an open-source framework developed by GitHub that allows developers to build cross-platform desktop applications using web technologies like HTML, CSS, and JavaScript. Here are some key points about Electron.js:* 1. *Core Components: Electron combines the Chromium rendering engine and the Node.js runtime. This means you can use web technologies to create the user interface and Node.js to handle backend operations.* 2. *Cross-Platform: Electron applications can run on Windows, macOS, and Linux with minimal changes to the codebase. This makes it easier to develop software that works across different operating systems.* 3. *Desktop Integration: Electron provides access to native desktop features such as file system access, system notifications, and more through its API. This allows developers to create desktop applications with capabilities similar to those built using native frameworks.* 4. *Popular Applications: Several well-known applications are built with Electron, including Visual Studio Code, Slack, and Discord. These applications demonstrate the capability of Electron to handle complex and high-performance tasks.* 5. *Development Workflow: Developing an Electron application involves creating a main process and one or more renderer processes. The main process handles the application's lifecycle and can create windows, while the renderer processes manage the user interface.* 6. *Advantages:* - *Unified Codebase: Write once and run anywhere.* - *Web Technology: Leverage existing web development skills.* - *Node.js Integration: Utilize a vast ecosystem of Node.js packages.* 7. *Disadvantages:* - *Performance: Electron applications can be more resource-intensive compared to native applications.* - ***Size**: Applications tend to be larger in size due to bundling Chromium and Node.js.* *Electron is a powerful tool for developers looking to create desktop applications quickly and efficiently using familiar web technologies.* --- With just one question, it already provides enough information to have a well-defined basis in this particular case about Eletron.js. But now let's be a little more specific: ### > Give me an Electron.js tutorial --- *ChatGPT-4o Response:* *Sure! Let's create a basic Electron.js application step-by-step. This tutorial will guide you through setting up a simple desktop application.* ### ***Prerequisites*** 1. ***Node.js**: Ensure you have Node.js installed. You can download it from [nodejs.org](https://nodejs.org/).* 2. ***npm**: Node Package Manager, which comes with Node.js.* ### ***Step 1: Initialize Your Project*** 1. ***Create a new directory** for your project and navigate into it:* ```bash bashCopy code mkdir my-electron-app cd my-electron-app ``` 2. 
***Initialize a new Node.js project**:* ```bash bashCopy code npm init -y ``` *This will create a **`package.json`** file with default settings.* ### ***Step 2: Install Electron*** *Install Electron as a development dependency:* ```bash bashCopy code npm install electron --save-dev ``` ### ***Step 3: Create Main Application Files*** 1. ***Create the main entry point file**. Typically, this file is named **`main.js`**:* ```js jsCopy code // main.js const { app, BrowserWindow } = require('electron'); function createWindow() { const win = new BrowserWindow({ width: 800, height: 600, webPreferences: { nodeIntegration: true, contextIsolation: false } }); win.loadFile('index.html'); } app.whenReady().then(() => { createWindow(); app.on('activate', () => { if (BrowserWindow.getAllWindows().length === 0) { createWindow(); } }); }); app.on('window-all-closed', () => { if (process.platform !== 'darwin') { app.quit(); } }); ``` 2. ***Create an HTML file** named **`index.html`**:* ```html htmlCopy code <!-- index.html --> <!DOCTYPE html> <html> <head> <title>My Electron App</title> </head> <body> <h1>Hello, Electron!</h1> <p>Welcome to your first Electron app.</p> </body> </html> ``` ### ***Step 4: Update `package.json`*** *In your **`package.json`**, add a **`start`** script to launch your Electron application:* ```json jsonCopy code { "name": "my-electron-app", "version": "1.0.0", "main": "main.js", "scripts": { "start": "electron ." }, "devDependencies": { "electron": "^VERSION" } } ``` *Replace **`^VERSION`** with the installed Electron version.* ### ***Step 5: Run Your Application*** *Run your application using the following command:* ```bash bashCopy code npm start ``` *You should see a new window open with the message "Hello, Electron!" displayed.* ### ***Additional Steps*** ### ***Adding Styles*** *You can add a CSS file to style your HTML content. Create a file named **`style.css`** and link it in **`index.html`**:* ```html htmlCopy code <!-- index.html --> <!DOCTYPE html> <html> <head> <title>My Electron App</title> <link rel="stylesheet" type="text/css" href="style.css"> </head> <body> <h1>Hello, Electron!</h1> <p>Welcome to your first Electron app.</p> </body> </html> ``` ```css cssCopy code /* style.css */ body { font-family: Arial, sans-serif; text-align: center; margin-top: 50px; } ``` ### ***Adding Functionality*** *You can add JavaScript to handle user interactions. Create a file named **`renderer.js`** and include it in **`index.html`**:* ```html htmlCopy code <!-- index.html --> <!DOCTYPE html> <html> <head> <title>My Electron App</title> <link rel="stylesheet" type="text/css" href="style.css"> </head> <body> <h1>Hello, Electron!</h1> <p>Welcome to your first Electron app.</p> <button id="clickMe">Click Me</button> <script src="renderer.js"></script> </body> </html> ``` ```js jsCopy code // renderer.js document.getElementById('clickMe').addEventListener('click', () => { alert('Button clicked!'); }); ``` ### ***Conclusion*** *You now have a basic Electron.js application up and running. From here, you can explore more features of Electron, such as integrating with native modules, adding menus, handling file operations, and much more. For detailed documentation and advanced features, refer to the official Electron documentation.* --- It's a great teacher! And because he keeps the context of the conversation, we can now simply ask him in a more direct way what we want and he answers without any problems, for example: ### > What databases can I use? 
--- *ChatGPT-4o Response:* *With Electron.js, you can use a variety of databases, depending on your application's requirements and architecture. Here are some popular choices:* ### ***Embedded Databases*** 1. ***SQLite**:* - ***Description**: A lightweight, serverless, self-contained SQL database engine.* - ***Usage**: Good for small to medium-sized applications where you want a simple, file-based database.* - ***Library**: [sqlite3](https://www.npmjs.com/package/sqlite3)* - ***Example**:* ```bash npm install sqlite3 ``` ```js const sqlite3 = require('sqlite3').verbose(); const db = new sqlite3.Database('mydatabase.db'); db.serialize(() => { db.run("CREATE TABLE IF NOT EXISTS user (id INT, name TEXT)"); const stmt = db.prepare("INSERT INTO user VALUES (?, ?)"); stmt.run(1, "John Doe"); stmt.finalize(); db.each("SELECT id, name FROM user", (err, row) => { console.log(`User ID: ${row.id}, Name: ${row.name}`); }); }); db.close(); ``` *…Other DB Options* ### ***Choosing the Right Database*** *The choice of database depends on various factors like the complexity of your data, scalability requirements, and whether you prefer SQL or NoSQL solutions. For simple, local storage needs, embedded databases like SQLite or NeDB are great. For larger, scalable applications, client-server databases like MySQL, PostgreSQL, or MongoDB are ideal. For real-time, cloud-based applications, consider using Firebase Firestore or Supabase.* --- I trimmed ChatGPT's answer a little so this article doesn't get too long, but it gave me a variety of databases I can use in my Electron application, and for each one it added a description, the name of the library, and even an ideal use case with its respective example, followed by a conclusion. Undoubtedly, ChatGPT and other tools powered by artificial intelligence are a great ally in our day-to-day work. If you know how to take advantage of them, they can become an extension of ourselves, much as our smartphones are today, providing significant support in our work and studies like a second brain. Thanks for making it this far, see you next time 👋.
slydragonn
1,866,128
Rails: Switches Serializer according to STI Type
Introduction Hello, here is Tamura. Suppose you have model data of multiple types using...
27,526
2024-05-28T14:00:00
https://dev.to/weseek-inc/rails-switches-serializer-according-to-sti-type-4pc7
rails, ruby, programming
## Introduction Hello, this is Tamura. Suppose you have model data of multiple types using STI and want to mix and return them in a single API endpoint. In this case, you would like to switch the serializer according to the STI type. In this article, I will show you how to do this. ## Issue Suppose the following class hierarchy is realized with STI ![class hierarchy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1atef26vj3kexvvww3m.png) SimplePosts have a `title` and `body` field, and QAPosts have a `question` and `answer` field. They all have a common `posted_at` field. Suppose you define an API endpoint called `/api/v1/posts` and want to return JSON as follows: ![return JSON](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6i6d5im346krq9eo0qzc.png) Depending on whether the post's type is `SimplePost` or `QAPost`, the JSON keys will differ. If you are using ActiveModel::Serializers to output JSON, you will need to switch the serializer depending on the type of the post. ## Solution The code is shown below. **app/serializers/post_serializer.rb** ![post_serializer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g14q3vc88pbozdvdlz0o.png) **app/serializers/simple_post_serializer.rb** ![simple_post_serializer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tuciktzassr8hrblna11.png) **app/serializers/qa_post_serializer.rb** ![qa_post_serializer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/no6vef0wvzvpxa2tn9ag.png) **app/controllers/api/v1/posts_controller.rb** ![posts_controller](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2lrlol2bqtumqrsu4156.png) Override the `attributes` method in PostSerializer. Since each record is passed in as `object`, its class is determined with a case statement, and `SimplePostSerializer.new(object).attributes(nil, reload)` or `QAPostSerializer.new(object).attributes(nil, reload)` generates the type-specific attributes, which are merged into `@attributes`. **Reference**: https://github.com/rails-api/active_model_serializers/blob/0fbe0fad0dec9368e9335b6280a46ca13442727e/lib/active_model/serializer.rb#L334-L343 The reason for overriding the `json_key` method is to display `posts` as the top-level key of the JSON object. If it is not overridden, the type processed first by the serializer (for example `qa_posts`) is used as the key, as shown below. ![set as the key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4yw8nofhixm5gf7ec8l.png) Note that if `jsonapi_include_toplevel_object` is `false`, there is no need to override the `json_key` method. **Reference**: https://github.com/rails-api/active_model_serializers/blob/0fbe0fad0dec9368e9335b6280a46ca13442727e/lib/active_model/serializer.rb#L384-L392 --- ## About Us💡 In addition, I want to introduce a little more about GROWI, open-source software developed by us, **WESEEK, Inc**. **GROWI** is a feature-rich wiki service that supports efficient information storage within the company. It also boasts high security, and various authentication methods are available to simplify authentication management, including **LDAP/OAuth/SAML**. **GROWI** originated in Japan and GROWI OSS is **FREE** for anyone to [download](https://docs.growi.org/en/admin-guide/?utm_source=dev+community&utm_medium=referral&utm_campaign=Rails_Switches_Serializer_according_to_STI_Type) and use **in English**. For more information, go to [GROWI.org](https://growi.org/en/?utm_source=dev+community&utm_medium=referral&utm_campaign=Rails_Switches_Serializer_according_to_STI_Type) to learn more about us.
You can also follow our [Facebook](https://www.facebook.com/people/GROWIcloud/100089272547238/) to see updates about our service. ![GROWI.org](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1rhpa0qcdukeall2vfsj.png)
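Returning to the serializer code above, which is embedded as screenshots: the following is a rough sketch of what the overridden `attributes` and `json_key` methods described in the post might look like. It is reconstructed from the prose for an ActiveModel::Serializers 0.9-style API, not the author's actual code, so details may differ:

```ruby
# Hypothetical reconstruction based on the article's description (AMS 0.9.x style)
class PostSerializer < ActiveModel::Serializer
  attributes :posted_at

  # Merge in the type-specific attributes depending on the STI class of `object`
  def attributes(requested_attrs = nil, reload = false)
    @attributes = super
    case object
    when SimplePost
      @attributes.merge!(SimplePostSerializer.new(object).attributes(nil, reload))
    when QAPost
      @attributes.merge!(QAPostSerializer.new(object).attributes(nil, reload))
    end
    @attributes
  end

  # Keep the top-level JSON key as `posts` instead of the first serialized type
  def json_key
    'posts'
  end
end

class SimplePostSerializer < ActiveModel::Serializer
  attributes :title, :body
end

class QAPostSerializer < ActiveModel::Serializer
  attributes :question, :answer
end
```

In the controller, the mixed collection could then be rendered with something like `render json: posts, each_serializer: PostSerializer`; again, this is an assumption based on typical ActiveModel::Serializers usage rather than the screenshot contents.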
weseek-inc
1,867,434
Create more of WordPress Gutenberg
Hey Is this Repo all has to be offered from Wordpress.org Community? 📌 Ref Link:...
0
2024-05-28T09:21:45
https://dev.to/akehsanz/create-more-of-wordpress-gutenberg-56fp
javascript, wordpress, wordpressblockthemes, wordpressgutenberg
Hey, is this repo all that the WordPress.org community has to offer? 📌 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qfxgycdbzwtoxm60gyev.png) Ref Link: https://wordpress.org/gutenberg/
akehsanz
1,868,352
Dummy ticket for visa
Does one need a visa? You are unable to purchase a ticket because you do not have specific travel...
0
2024-05-29T03:03:32
https://dev.to/onlineodt/dummy-ticket-for-visa-5fl5
dummyticket, dummyplaneticket
Do you need a visa, but cannot purchase a flight ticket yet because you do not have specific travel dates? In that case you can apply for the visa using a dummy ticket. Since it provides all the information required for the visa filing, it can be submitted with the application. If you buy a real ticket and your visa application is then denied, you incur a significant loss; using a dummy ticket instead reduces that loss. For approximately INR 200, or about $3, you can obtain a dummy ticket for a visa in less than ten minutes. Get a dummy visa ticket right now at https://www.onlinedummyticket.com.
onlineodt
1,868,620
Ubuntu Hotspot Issues
I recently left my company, set my own computer up again, and upgraded to Ubuntu 24.04 LTS, which is basically stable now. While using it I ran into a need to enable a hotspot so my phone could get online. Although this is the latest LTS version, ...
0
2024-05-29T07:10:16
https://dev.to/shouhua_57/ubunture-dian-wen-ti-lpf
hotspot, ubuntu, proxy
I recently left my company and set my own computer up again, upgrading to Ubuntu 24.04 LTS, which is basically stable now. While using it I ran into a need: enabling a hotspot so my phone could get online. Although this is the latest LTS release, other versions should work in the same way. Below is a summary of the problems I ran into. 1. Things to note when enabling the hotspot The hotspot cannot be enabled while you are on Wi-Fi, so to share internet through a hotspot you need a wired connection. I had a similar experience on a Mac before, so this was not a problem for me. 2. The laptop's network is fine and the hotspot starts normally; the phone can connect, but it shows "No internet connection" I searched online for quite a while and found that this is caused by the firewall. ~~Temporarily turning the firewall off and turning it back on afterwards works~~. **As a reader pointed out, ufw does not need to be turned off; adding the appropriate ufw rules is enough. The key is to allow traffic in on the wireless interface, while the physical NIC routes the data, because traffic from the wireless interface flows out through the physical NIC.** ```bash sudo ufw allow in on wlp4s0 sudo ufw status numbered sudo ufw delete ``` 3. Once the phone is online, we want all of its traffic to go through SOCKS5 to the server port, for example local port 1080. This requires redsocks and iptables: the former acts as the local SOCKS5 client that receives the phone's traffic, and the latter redirects all of the phone's traffic into redsocks, which forwards it to the SOCKS5 server. ```bash # 1. Install and configure redsocks sudo apt install redsocks # Edit the redsocks config, mainly the client and server ports sudo vi /etc/redsocks.conf # the client listens on all interfaces # local_ip = 0.0.0.0 # server port setting # port = 1080 # 2. iptables setup sudo iptables -t nat -N REDSOCKS # create a new chain # hand all traffic from the wireless interface to the new chain sudo iptables -t nat -A PREROUTING -i wlp4s0 -p tcp -j REDSOCKS # in the nat table, redirect all traffic in the new chain to the redsocks client port sudo iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-port 12345 # cleanup commands, use as needed; some iptables knowledge helps sudo iptables -t nat -L PREROUTING -v --line-numbers sudo iptables -t nat -D PREROUTING 2 # the number comes from the line numbers printed above sudo iptables -t nat -D REDSOCKS 1 # the number comes from the line numbers printed above # OR sudo iptables -t nat -F REDSOCKS # flush the custom chain # the custom chain can only be deleted after all of its rules have been removed sudo iptables -t nat -X REDSOCKS ``` # Automation script I am not pasting it here for now; it is just the steps above put together. You can also use nmcli to create and delete the hotspot automatically: ```bash nmcli dev wifi hotspot con-name MyHotspot ssid ubuntu password 88888888 nmcli connection show | grep 'MyHotspot' nmcli connection delete MyHotspot ```
shouhua_57
1,868,851
Unlocking the Potential of SAP Production Planning (SAP PP) for Efficient Manufacturing
In the fast-paced world of manufacturing, efficiency and precision are crucial for staying...
0
2024-05-29T10:27:33
https://dev.to/mylearnnest/unlocking-the-potential-of-sap-production-planning-sap-pp-for-efficient-manufacturing-1aki
In the fast-paced world of manufacturing, efficiency and precision are crucial for staying competitive. [SAP Production Planning (SAP PP)](https://www.sapmasters.in/sap-pp-training-in-bangalore/) is a vital component of the SAP ERP system designed to streamline production processes and ensure optimal use of resources. This comprehensive guide delves into the key features, benefits, and best practices of SAP PP, helping businesses unlock their full potential in production planning. **What is SAP Production Planning (SAP PP):** SAP Production Planning (SAP PP) is a critical module within the SAP ERP system that focuses on the planning and control of manufacturing activities. It integrates seamlessly with other SAP modules like Material Management (MM), Sales and Distribution (SD), and Quality Management (QM), providing a unified platform for managing the entire production lifecycle. **Key Features of SAP PP:** **Material Requirement Planning (MRP):**SAP PP's [MRP function](https://www.sapmasters.in/sap-pp-training-in-bangalore/) ensures that materials are available for production and products are available for delivery. It calculates the required quantities and schedules for materials based on demand, current inventory levels, and production schedules. **Bill of Materials (BOM):**BOMs are comprehensive lists of materials, components, and assemblies required to produce a product. SAP PP allows for the creation and management of BOMs, ensuring accuracy in material requirements and cost estimation. **Work Center Management:**Work centers are locations where production operations are performed. [SAP PP](https://www.sapmasters.in/sap-pp-training-in-bangalore/) manages these work centers, detailing their capacities, scheduling operations, and monitoring performance. **Routing:**Routing defines the specific operations required to manufacture a product and the sequence in which they should be performed. SAP PP helps in creating and maintaining routings, ensuring efficient workflow management. **Production Orders:**Production orders are instructions for manufacturing a specific [quantity of a product](https://www.sapmasters.in/sap-pp-training-in-bangalore/). SAP PP facilitates the creation, scheduling, and monitoring of production orders, ensuring timely and efficient production. **Capacity Planning:**Capacity planning involves balancing the load on resources to meet production demands without overburdening them. SAP PP's capacity planning tools help in analyzing capacity loads and adjusting production schedules accordingly. **Shop Floor Control:**This feature monitors the status of production orders and operations in real-time, providing valuable insights into production performance and identifying bottlenecks. **Benefits of SAP PP:** **Enhanced Efficiency:**SAP PP automates and optimizes production planning processes,[ reducing manual intervention](https://www.sapmasters.in/sap-pp-training-in-bangalore/) and errors. This leads to streamlined operations and higher productivity. **Improved Resource Utilization:**By accurately planning material requirements and production schedules, SAP PP ensures optimal use of resources, minimizing waste and reducing costs. **Better Visibility and Control:**With real-time data on production activities, businesses gain greater visibility into their operations, enabling more informed decision-making and quicker response to issues. **Scalability:**SAP PP is highly scalable, making it suitable for businesses of all sizes. 
As production demands grow, the system can be easily adjusted to handle increased complexity and volume. **Integration with Other SAP Modules:**The seamless integration with other SAP modules ensures that [data flows](https://www.sapmasters.in/sap-pp-training-in-bangalore/) smoothly across different functions, providing a holistic view of the business and enhancing overall efficiency. **Implementing SAP PP: Best Practices:** **Thoroughly Analyze Business Requirements:**Before implementation, conduct a detailed analysis of your [business processes](https://www.sapmasters.in/sap-pp-training-in-bangalore/) and requirements. Understanding the specific needs of your production environment will help in configuring SAP PP to best suit your operations. **Involve Key Stakeholders:**Involve stakeholders from various departments (production, finance, sales, etc.) in the planning and implementation process. Their insights and feedback are crucial for a successful rollout. **Data Accuracy:**Ensure that all data, including BOMs, routings, and inventory levels, is accurate and up-to-date. Accurate data is the foundation of effective production planning. **Training and Support:**Provide comprehensive training to all users of the SAP PP module. Understanding the system's functionalities and best practices is essential for maximizing its benefits. **Continuous Monitoring and Improvement:**Regularly monitor the performance of the SAP PP module and gather feedback from users. Use this information to make continuous improvements and keep the system aligned with changing business needs. **Case Study: SAP PP in Action:** Company X is a mid-sized manufacturing firm specializing in consumer electronics. Before implementing SAP PP, the company faced several challenges, including inaccurate material planning, frequent production delays, and high operational costs. After a thorough analysis, Company X decided to implement SAP PP. The process began with detailed training sessions for their staff, ensuring they understood the system's capabilities and how to use them effectively. With SAP PP, Company X automated its material requirement planning, leading to more accurate material forecasts and reduced inventory costs. The BOM management feature helped them maintain precise records of all components, ensuring that every production order was executed without delays due to missing materials. By utilizing work center management and routing, Company X optimized their production workflow, significantly improving their manufacturing efficiency. Real-time shop floor control allowed them to monitor production activities closely, quickly identifying and addressing any issues that arose. The result was a dramatic improvement in production efficiency, reduced operational costs, and an enhanced ability to meet customer demand promptly. Company X now enjoys a competitive edge in the market, thanks to the robust capabilities of SAP PP. **Conclusion:** SAP Production Planning (SAP PP) is a powerful tool that can transform your manufacturing operations, driving efficiency and productivity. By automating and optimizing key [production processes](https://www.sapmasters.in/sap-pp-training-in-bangalore/), it helps businesses make better use of their resources, reduce costs, and improve overall performance. Implementing SAP PP requires careful planning, accurate data, and ongoing support, but the benefits far outweigh the challenges. 
Whether you are a small business looking to scale or a large enterprise aiming to maintain your competitive edge, SAP PP offers the solutions you need to succeed in the dynamic world of manufacturing. Unlock the potential of SAP PP and take your production planning to the next level. Invest in the future of your manufacturing operations today.
mylearnnest
1,868,986
What is in PHP-FPM configuration file?
Hi I'm Tak Fukushima, Software Engineer. This article, I'll write PHP-FPM configuration file. I...
0
2024-05-29T14:10:54
https://dev.to/takahiro_82jp/what-is-in-php-fpm-configuration-file-2na6
php, phpfpm, laravel
Hi I'm Tak Fukushima, Software Engineer. This article, I'll write PHP-FPM configuration file. I often use PHP-FPM with Nginx, they say unix domain socket connection. I frequently modify PHP-FPM configuration file that time. But first time it is difficult to read this file. So Let me explain. ### Condition * use Ubuntu 24.04 on Container * installed PHP-FPM Be careful, it depends on the environment. ### What is in PHP-FPM configuration file? Let's take a look at PHP-FPM configuration file inside. Hit bles command. `cat /etc/php/8.3/fpm/pool.d/www.conf` `www.conf` is PHP-FPM configuration file. ``` ; Start a new pool named 'www'. ; the variable $pool can be used in any directive and will be replaced by the ; pool name ('www' here) [www] ; Per pool prefix ; It only applies on the following directives: ; - 'access.log' ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /usr) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ; Unix user/group of the child processes. This can be used only if the master ; process running user is root. It is set after the child process is created. ; The user and group can be specified either by their name or by their numeric ; IDs. ; Note: If the user is root, the executable needs to be started with ; --allow-to-run-as-root option to work. ; Default Values: The user is set to master process running user by default. ; If the group is not set, the user's group is used. user = www-data group = www-data ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific IPv4 address on ; a specific port; ; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses ; (IPv6 and IPv4-mapped) on a specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. listen = /run/php/php8.3-fpm.sock ; Set listen(2) backlog. ; Default Value: 511 (-1 on Linux, FreeBSD and OpenBSD) ;listen.backlog = 511 ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. The owner ; and group can be specified either by name or by their numeric IDs. ; Default Values: Owner is set to the master process running user. If the group ; is not set, the owner's group is used. Mode is set to 0660. listen.owner = www-data listen.group = www-data ;listen.mode = 0660 ; When POSIX Access Control Lists are supported you can set them using ; these options, value is a comma separated list of user/group names. ; When set, listen.owner and listen.group are ignored ;listen.acl_users = ;listen.acl_groups = ; List of addresses (IPv4/IPv6) of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ; Set the associated the route table (FIB). 
FreeBSD only ; Default Value: -1 ;listen.setfib = 1 ; Specify the nice(2) priority to apply to the pool processes (only if set) ; The value can vary from -19 (highest priority) to 20 (lower priority) ; Note: - It will only work if the FPM master process is launched as root ; - The pool processes will inherit the master process priority ; unless it specified otherwise ; Default Value: no set ; process.priority = -19 ; Set the process dumpable flag (PR_SET_DUMPABLE prctl for Linux or ; PROC_TRACE_CTL procctl for FreeBSD) even if the process user ; or group is different than the master process user. It allows to create process ; core dump and ptrace the process for the pool user. ; Default Value: no ; process.dumpable = yes ; Choose how the process manager will control the number of child processes. ; Possible Values: ; static - a fixed number (pm.max_children) of child processes; ; dynamic - the number of child processes are set dynamically based on the ; following directives. With this process management, there will be ; always at least 1 children. ; pm.max_children - the maximum number of children that can ; be alive at the same time. ; pm.start_servers - the number of children created on startup. ; pm.min_spare_servers - the minimum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is less than this ; number then some children will be created. ; pm.max_spare_servers - the maximum number of children in 'idle' ; state (waiting to process). If the number ; of 'idle' processes is greater than this ; number then some children will be killed. ; pm.max_spawn_rate - the maximum number of rate to spawn child ; processes at once. ; ondemand - no children are created at startup. Children will be forked when ; new requests will connect. The following parameter are used: ; pm.max_children - the maximum number of children that ; can be alive at the same time. ; pm.process_idle_timeout - The number of seconds after which ; an idle process will be killed. ; Note: This value is mandatory. pm = dynamic ; The number of child processes to be created when pm is set to 'static' and the ; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'. ; This value sets the limit on the number of simultaneous requests that will be ; served. Equivalent to the ApacheMaxClients directive with mpm_prefork. ; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP ; CGI. The below defaults are based on a server without much resources. Don't ; forget to tweak pm.* to fit your needs. ; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand' ; Note: This value is mandatory. pm.max_children = 5 ; The number of child processes created on startup. ; Note: Used only when pm is set to 'dynamic' ; Default Value: (min_spare_servers + max_spare_servers) / 2 pm.start_servers = 2 ; The desired minimum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.min_spare_servers = 1 ; The desired maximum number of idle server processes. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' pm.max_spare_servers = 3 ; The number of rate to spawn child processes at once. ; Note: Used only when pm is set to 'dynamic' ; Note: Mandatory when pm is set to 'dynamic' ; Default Value: 32 ;pm.max_spawn_rate = 32 ; The number of seconds after which an idle process will be killed. 
; Note: Used only when pm is set to 'ondemand' ; Default Value: 10s ;pm.process_idle_timeout = 10s; ; The number of requests each child process should execute before respawning. ; This can be useful to work around memory leaks in 3rd party libraries. For ; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS. ; Default Value: 0 ;pm.max_requests = 500 ; The URI to view the FPM status page. If this value is not set, no URI will be ; recognized as a status page. It shows the following information: ; pool - the name of the pool; ; process manager - static, dynamic or ondemand; ; start time - the date and time FPM has started; ; start since - number of seconds since FPM has started; ; accepted conn - the number of request accepted by the pool; ; listen queue - the number of request in the queue of pending ; connections (see backlog in listen(2)); ; max listen queue - the maximum number of requests in the queue ; of pending connections since FPM has started; ; listen queue len - the size of the socket queue of pending connections; ; idle processes - the number of idle processes; ; active processes - the number of active processes; ; total processes - the number of idle + active processes; ; max active processes - the maximum number of active processes since FPM ; has started; ; max children reached - number of times, the process limit has been reached, ; when pm tries to start more children (works only for ; pm 'dynamic' and 'ondemand'); ; Value are updated in real time. ; Example output: ; pool: www ; process manager: static ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 62636 ; accepted conn: 190460 ; listen queue: 0 ; max listen queue: 1 ; listen queue len: 42 ; idle processes: 4 ; active processes: 11 ; total processes: 15 ; max active processes: 12 ; max children reached: 0 ; ; By default the status page output is formatted as text/plain. Passing either ; 'html', 'xml' or 'json' in the query string will return the corresponding ; output syntax. Example: ; http://www.foo.bar/status ; http://www.foo.bar/status?json ; http://www.foo.bar/status?html ; http://www.foo.bar/status?xml ; ; By default the status page only outputs short status. Passing 'full' in the ; query string will also return status for each pool process. 
; Example: ; http://www.foo.bar/status?full ; http://www.foo.bar/status?json&full ; http://www.foo.bar/status?html&full ; http://www.foo.bar/status?xml&full ; The Full status returns for each process: ; pid - the PID of the process; ; state - the state of the process (Idle, Running, ...); ; start time - the date and time the process has started; ; start since - the number of seconds since the process has started; ; requests - the number of requests the process has served; ; request duration - the duration in µs of the requests; ; request method - the request method (GET, POST, ...); ; request URI - the request URI with the query string; ; content length - the content length of the request (only with POST); ; user - the user (PHP_AUTH_USER) (or '-' if not set); ; script - the main script called (or '-' if not set); ; last request cpu - the %cpu the last request consumed ; it's always 0 if the process is not in Idle state ; because CPU calculation is done when the request ; processing has terminated; ; last request memory - the max amount of memory the last request consumed ; it's always 0 if the process is not in Idle state ; because memory calculation is done when the request ; processing has terminated; ; If the process is in Idle state, then informations are related to the ; last request the process has served. Otherwise informations are related to ; the current request being served. ; Example output: ; ************************ ; pid: 31330 ; state: Running ; start time: 01/Jul/2011:17:53:49 +0200 ; start since: 63087 ; requests: 12808 ; request duration: 1250261 ; request method: GET ; request URI: /test_mem.php?N=10000 ; content length: 0 ; user: - ; script: /home/fat/web/docs/php/test_mem.php ; last request cpu: 0.00 ; last request memory: 0 ; ; Note: There is a real-time FPM status monitoring sample web page available ; It's available in: /usr/share/php/8.3/fpm/status.html ; ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. ; Default Value: not set ;pm.status_path = /status ; The address on which to accept FastCGI status request. This creates a new ; invisible pool that can handle requests independently. This is useful ; if the main pool is busy with long running requests because it is still possible ; to get the status before finishing the long running requests. ; ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific IPv4 address on ; a specific port; ; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses ; (IPv6 and IPv4-mapped) on a specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Default Value: value of the listen option ;pm.status_listen = 127.0.0.1:9001 ; The ping URI to call the monitoring page of FPM. If this value is not set, no ; URI will be recognized as a ping page. This could be used to test from outside ; that FPM is alive and responding, or to ; - create a graph of FPM availability (rrd or such); ; - remove a server from a group if it is not responding (load balancing); ; - trigger alerts for the operating team (24/7). ; Note: The value must start with a leading slash (/). The value can be ; anything, but it may not be a good idea to use the .php extension or it ; may conflict with a real PHP file. 
; Default Value: not set ;ping.path = /ping ; This directive may be used to customize the response of a ping request. The ; response is formatted as text/plain with a 200 response code. ; Default Value: pong ;ping.response = pong ; The access log file ; Default: not set ;access.log = log/$pool.access.log ; The access log format. ; The following syntax is allowed ; %%: the '%' character ; %C: %CPU used by the request ; it can accept the following format: ; - %{user}C for user CPU only ; - %{system}C for system CPU only ; - %{total}C for user + system CPU (default) ; %d: time taken to serve the request ; it can accept the following format: ; - %{seconds}d (default) ; - %{milliseconds}d ; - %{milli}d ; - %{microseconds}d ; - %{micro}d ; %e: an environment variable (same as $_ENV or $_SERVER) ; it must be associated with embraces to specify the name of the env ; variable. Some examples: ; - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e ; - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e ; %f: script filename ; %l: content-length of the request (for POST request only) ; %m: request method ; %M: peak of memory allocated by PHP ; it can accept the following format: ; - %{bytes}M (default) ; - %{kilobytes}M ; - %{kilo}M ; - %{megabytes}M ; - %{mega}M ; %n: pool name ; %o: output header ; it must be associated with embraces to specify the name of the header: ; - %{Content-Type}o ; - %{X-Powered-By}o ; - %{Transfert-Encoding}o ; - .... ; %p: PID of the child that serviced the request ; %P: PID of the parent of the child that serviced the request ; %q: the query string ; %Q: the '?' character if query string exists ; %r: the request URI (without the query string, see %q and %Q) ; %R: remote IP address ; %s: status (response code) ; %t: server time the request was received ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; The strftime(3) format must be encapsulated in a %{<strftime_format>}t tag ; e.g. for a ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t ; %T: time the log has been written (the request has finished) ; it can accept a strftime(3) format: ; %d/%b/%Y:%H:%M:%S %z (default) ; The strftime(3) format must be encapsulated in a %{<strftime_format>}t tag ; e.g. for a ISO8601 formatted timestring, use: %{%Y-%m-%dT%H:%M:%S%z}t ; %u: remote user ; ; Default: "%R - %u %t \"%m %r\" %s" ;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{milli}d %{kilo}M %C%%" ; A list of request_uri values which should be filtered from the access log. ; ; As a security precuation, this setting will be ignored if: ; - the request method is not GET or HEAD; or ; - there is a request body; or ; - there are query parameters; or ; - the response code is outwith the successful range of 200 to 299 ; ; Note: The paths are matched against the output of the access.format tag "%r". ; On common configurations, this may look more like SCRIPT_NAME than the ; expected pre-rewrite URI. ; ; Default Value: not set ;access.suppress_path[] = /ping ;access.suppress_path[] = /health_check.php ; The log file for slow requests ; Default Value: not set ; Note: slowlog is mandatory if request_slowlog_timeout is set ;slowlog = log/$pool.log.slow ; The timeout for serving a single request after which a PHP backtrace will be ; dumped to the 'slowlog' file. A value of '0s' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_slowlog_timeout = 0 ; Depth of slow log stack trace. 
; Default Value: 20 ;request_slowlog_trace_depth = 20 ; The timeout for serving a single request after which the worker process will ; be killed. This option should be used when the 'max_execution_time' ini option ; does not stop script execution for some reason. A value of '0' means 'off'. ; Available units: s(econds)(default), m(inutes), h(ours), or d(ays) ; Default Value: 0 ;request_terminate_timeout = 0 ; The timeout set by 'request_terminate_timeout' ini option is not engaged after ; application calls 'fastcgi_finish_request' or when application has finished and ; shutdown functions are being called (registered via register_shutdown_function). ; This option will enable timeout limit to be applied unconditionally ; even in such cases. ; Default Value: no ;request_terminate_timeout_track_finished = no ; Set open file descriptor rlimit. ; Default Value: system defined value ;rlimit_files = 1024 ; Set max core size rlimit. ; Possible Values: 'unlimited' or an integer greater or equal to 0 ; Default Value: system defined value ;rlimit_core = 0 ; Chroot to this directory at the start. This value must be defined as an ; absolute path. When this value is not set, chroot is not used. ; Note: you can prefix with '$prefix' to chroot to the pool prefix or one ; of its subdirectories. If the pool prefix is not set, the global prefix ; will be used instead. ; Note: chrooting is a great security feature and should be used whenever ; possible. However, all PHP paths will be relative to the chroot ; (error_log, sessions.save_path, ...). ; Default Value: not set ;chroot = ; Chdir to this directory at the start. ; Note: relative path can be used. ; Default Value: current directory or / when chroot ;chdir = /var/www ; Redirect worker stdout and stderr into main error log. If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. ; Note: on highloaded environment, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes ; Decorate worker output with prefix and suffix containing information about ; the child that writes to the log and if stdout or stderr is used as well as ; log level and time. This options is used only if catch_workers_output is yes. ; Settings to "no" will output data as written to the stdout or stderr. ; Default value: yes ;decorate_workers_output = no ; Clear environment in FPM workers ; Prevents arbitrary environment variables from reaching FPM worker processes ; by clearing the environment in workers before env vars specified in this ; pool configuration are added. ; Setting to "no" will make all environment variables available to PHP code ; via getenv(), $_ENV and $_SERVER. ; Default Value: yes ;clear_env = no ; Limits the extensions of the main script FPM will allow to parse. This can ; prevent configuration mistakes on the web server side. You should only limit ; FPM to .php extensions to prevent malicious users to use other extensions to ; execute php code. ; Note: set an empty value to allow all extensions. ; Default Value: .php ;security.limit_extensions = .php .php3 .php4 .php5 .php7 ; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from ; the current environment. ; Default Value: clean env ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ; Additional php.ini defines, specific to this pool of workers. These settings ; overwrite the values previously defined in the php.ini. 
The directives are the ; same as the PHP SAPI: ; php_value/php_flag - you can set classic ini defines which can ; be overwritten from PHP call 'ini_set'. ; php_admin_value/php_admin_flag - these directives won't be overwritten by ; PHP call 'ini_set' ; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no. ; Defining 'extension' will load the corresponding shared extension from ; extension_dir. Defining 'disable_functions' or 'disable_classes' will not ; overwrite previously defined php.ini values, but will append the new value ; instead. ; Note: path INI options can be relative and will be expanded with the prefix ; (pool, global or /usr) ; Default Value: nothing is defined by default except the values in php.ini and ; specified at startup with the -d argument ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www@my.domain.com ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M ``` Oh...I don't like it. It's magic word, I did think when I look first time. But Look carefully. It's mostly commentout. It's all right. ### Explane Okay, I explain. #### pool ``` ; Start a new pool named 'www'. ; the variable $pool can be used in any directive and will be replaced by the ; pool name ('www' here) [www] ``` This is pool name. It's process set. so you use `ps` command, you can look www pool name. #### log output Blow, It's log output settings for each pool. ``` ; Per pool prefix ; It only applies on the following directives: ; - 'access.log' ; - 'slowlog' ; - 'listen' (unixsocket) ; - 'chroot' ; - 'chdir' ; - 'php_values' ; - 'php_admin_values' ; When not set, the global prefix (or /usr) applies instead. ; Note: This directive can also be relative to the global prefix. ; Default Value: none ;prefix = /path/to/pools/$pool ``` You change prefix when specifying log output destination. I basically don't change it. #### process execution Blow, It's process execution user and group. ``` ; Unix user/group of the child processes. This can be used only if the master ; process running user is root. It is set after the child process is created. ; The user and group can be specified either by their name or by their numeric ; IDs. ; Note: If the user is root, the executable needs to be started with ; --allow-to-run-as-root option to work. ; Default Values: The user is set to master process running user by default. ; If the group is not set, the user's group is used. user = www-data group = www-data ``` if you use nginx, you change blew line. ``` user = nginx group = nginx ``` #### specify port or unix domain socket Blow, It specifies port or unix domain socket to listen for requests. ``` ; The address on which to accept FastCGI requests. ; Valid syntaxes are: ; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific IPv4 address on ; a specific port; ; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on ; a specific port; ; 'port' - to listen on a TCP socket to all addresses ; (IPv6 and IPv4-mapped) on a specific port; ; '/path/to/unix/socket' - to listen on a unix socket. ; Note: This value is mandatory. listen = /run/php/php8.3-fpm.sock ``` Now It set unix domain socket to listen. If you use TCP socket, For example, It change blow line. ``` listen = 127.0.0.1:9000 ``` #### backlog Blow, It set backlog. But I've never changed it, so linux parameter. I'll learn it. ``` ; Set listen(2) backlog. 
; Default Value: 511 (-1 on Linux, FreeBSD and OpenBSD) ;listen.backlog = 511 ``` #### permissions for unix socket Blow, It's permissions for unix socket. ``` ; Set permissions for unix socket, if one is used. In Linux, read/write ; permissions must be set in order to allow connections from a web server. Many ; BSD-derived systems allow connections regardless of permissions. The owner ; and group can be specified either by name or by their numeric IDs. ; Default Values: Owner is set to the master process running user. If the group ; is not set, the owner's group is used. Mode is set to 0660. listen.owner = www-data listen.group = www-data ;listen.mode = 0660 ``` Blow is for Nginx and unix socket connection. ``` listen.owner = nginx listen.group = nginx listen.mode = 0660 ``` #### access controle Blow, It's POSIX Access Control Lists. ``` ; When POSIX Access Control Lists are supported you can set them using ; these options, value is a comma separated list of user/group names. ; When set, listen.owner and listen.group are ignored ;listen.acl_users = ;listen.acl_groups = ``` It seems that php-fpm restarts periodically and owner of socket reverts to root. So It change blew line for nignx. ``` listen.acl_users = nginx ``` #### IP address restrict Blow, It's list of addresses allowed to connect. ``` ; List of addresses (IPv4/IPv6) of FastCGI clients which are allowed to connect. ; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original ; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address ; must be separated by a comma. If this value is left blank, connections will be ; accepted from any ip address. ; Default Value: any ;listen.allowed_clients = 127.0.0.1 ``` Default is to accept all connections, So it restrict FastCGI client connections to IP. ### Last There are more than I expected, So I stop here. Just read them one by one and don't be afraid configuration file. Let's read it.
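One last practical note: after editing `www.conf`, it's worth validating the file, reloading the service, and checking that the socket owner matches your web server user. A minimal sketch, assuming PHP 8.3 on Ubuntu as in this article (adjust version and paths to your setup):

```bash
# Validate the edited pool configuration before reloading (assumes PHP 8.3 on Ubuntu)
sudo php-fpm8.3 -t

# Reload the service so the new settings take effect
sudo systemctl reload php8.3-fpm

# Confirm the socket exists and its owner/group match listen.owner / listen.group
ls -l /run/php/php8.3-fpm.sock
```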
takahiro_82jp
1,869,536
Vue.js 101: Mastering Computed Properties and Watchers for Reactive Data
In Vue.js, computed properties and watchers are powerful tools for managing and reacting to reactive...
0
2024-05-29T19:27:27
https://dev.to/delia_code/vuejs-101-mastering-computed-properties-and-watchers-for-reactive-data-18mb
javascript, beginners, tutorial, vue
In Vue.js, computed properties and watchers are powerful tools for managing and reacting to reactive data. Understanding these features is essential for building efficient and responsive Vue.js applications. In this article, we'll explore computed properties and watchers in detail, from beginner to advanced examples, using the Composition API. Let's dive in! ## What Are Computed Properties and Watchers? ### Computed Properties Computed properties are reactive data properties that depend on other reactive data properties. They are cached based on their dependencies and will only re-evaluate when one of those dependencies changes. Computed properties are ideal for complex logic that depends on reactive data. **Benefits of Computed Properties:** - **Caching**: Computed properties are cached based on their dependencies, making them efficient. They only re-compute when one of their dependencies changes. - **Declarative**: Computed properties provide a clear and declarative way to describe derived state, making your code more readable and maintainable. - **Simplifies Logic**: They are excellent for encapsulating complex calculations or logic that depends on reactive data. **When Not to Use Computed Properties:** - **Side Effects**: Avoid using computed properties for side effects like API calls or logging. Computed properties should be pure and only return values based on their dependencies. - **Asynchronous Operations**: They are not suitable for asynchronous operations, as they are designed to be synchronous. ### Watchers Watchers allow you to perform side effects in response to changing reactive data. Unlike computed properties, which are used for derived state, watchers are primarily used for executing code in reaction to changes, such as making API calls or logging. **Benefits of Watchers:** - **Flexibility**: Watchers offer more flexibility for executing code in response to data changes. They can handle complex logic that needs to run when data changes. - **Side Effects**: Ideal for performing side effects like asynchronous operations, logging, or complex data transformations that are triggered by changes in reactive data. **When Not to Use Watchers:** - **Simple Computations**: For simple derived state or calculations, use computed properties instead of watchers. Watchers are overkill for these scenarios. - **Declarative Logic**: If you can achieve the same result with a computed property, it’s often clearer and more maintainable to use the computed property. ## Using the Composition API ### Setting Up the Project Before we start, make sure you have Vue.js installed. Set up a new project using Vue CLI: ```bash npm install -g @vue/cli vue create vue-computed-watcher cd vue-computed-watcher npm run serve ``` ### Basic Example with Computed Properties Let's start with a basic example of computed properties using the Composition API. **Template:** ```html <template> <div> <h2>Basic Computed Property Example</h2> <p>Original Message: {{ message }}</p> <p>Reversed Message: {{ reversedMessage }}</p> </div> </template> ``` **Script:** ```javascript <script> import { ref, computed } from 'vue'; export default { setup() { const message = ref('Hello Vue.js'); const reversedMessage = computed(() => { return message.value.split('').reverse().join(''); }); return { message, reversedMessage }; } }; </script> ``` - **Reactive Data**: `message` is a reactive data property created using `ref`. - **Computed Property**: `reversedMessage` is a computed property that depends on `message`. 
It reverses the string whenever `message` changes. ### Advanced Example with Computed Properties Now, let's create a more advanced example that includes multiple dependencies and demonstrates the power of computed properties. **Template:** ```html <template> <div> <h2>Advanced Computed Property Example</h2> <p>Items: {{ items.map(item => item.name).join(', ') }}</p> <p>Total Price: {{ totalPrice }}</p> </div> </template> ``` **Script:** ```javascript <script> import { ref, computed } from 'vue'; export default { setup() { const items = ref([ { name: 'Apple', price: 1.2 }, { name: 'Banana', price: 0.8 }, { name: 'Cherry', price: 2.5 } ]); const totalPrice = computed(() => { return items.value.reduce((total, item) => total + item.price, 0).toFixed(2); }); return { items, totalPrice }; } }; </script> ``` - **Reactive Data**: `items` is an array of objects, each representing an item with a name and price. - **Computed Property**: `totalPrice` calculates the sum of item prices, updating automatically when the items array changes. ### Basic Example with Watchers Watchers are useful for reacting to data changes, such as making an API call or logging information. **Template:** ```html <template> <div> <h2>Basic Watcher Example</h2> <input v-model="name" placeholder="Enter your name" /> <p>Greeting: {{ greeting }}</p> </div> </template> ``` **Script:** ```javascript <script> import { ref, watch } from 'vue'; export default { setup() { const name = ref(''); const greeting = ref('Hello!'); watch(name, (newValue) => { greeting.value = `Hello, ${newValue}!`; }); return { name, greeting }; } }; </script> ``` - **Reactive Data**: `name` and `greeting` are reactive data properties. - **Watcher**: The watcher observes `name` and updates `greeting` whenever `name` changes. ### Advanced Example with Watchers For an advanced example, let's perform an asynchronous operation in a watcher. **Template:** ```html <template> <div> <h2>Advanced Watcher Example</h2> <input v-model="query" placeholder="Search for a GitHub user" /> <p v-if="loading">Loading...</p> <p v-if="error">{{ error }}</p> <pre v-if="user">{{ user }}</pre> </div> </template> ``` **Script:** ```javascript <script> import { ref, watch } from 'vue'; export default { setup() { const query = ref(''); const user = ref(null); const loading = ref(false); const error = ref(null); const fetchUser = async (username) => { loading.value = true; error.value = null; user.value = null; try { const response = await fetch(`https://api.github.com/users/${username}`); if (!response.ok) throw new Error('User not found'); user.value = await response.json(); } catch (err) { error.value = err.message; } finally { loading.value = false; } }; watch(query, (newQuery) => { if (newQuery) fetchUser(newQuery); }); return { query, user, loading, error }; } }; </script> ``` - **Reactive Data**: `query`, `user`, `loading`, and `error` are reactive data properties. - **Watcher**: The watcher on `query` triggers the `fetchUser` function whenever `query` changes, performing an asynchronous API call to fetch GitHub user data. Computed properties and watchers are indispensable tools in Vue.js for managing and reacting to reactive data. Computed properties are ideal for derived state that depends on other reactive data, providing a clear and efficient way to manage complex logic. Watchers, on the other hand, offer flexibility for executing code in response to data changes, making them perfect for side effects such as API calls. 
By understanding and effectively using computed properties and watchers, you can build more efficient, responsive, and maintainable Vue.js applications. Whether you're a beginner or looking to advance your skills, mastering these concepts will significantly enhance your Vue.js development experience. Happy coding! Twitter: [@delia_code](https://x.com/delia_code) Instagram:[@delia.codes](https://www.instagram.com/delia.codes/) Blog: [https://delia.hashnode.dev/](https://delia.hashnode.dev/)
delia_code
1,870,062
The Best Share Market Apps For This Year
The stock market, with its potential for wealth creation and opportunities, continues to draw...
0
2024-05-30T07:42:55
https://dev.to/sanjay_raghuvanshi_/the-best-share-market-apps-for-this-year-4odh
The stock market, with its potential for wealth creation and opportunities, continues to draw investors in the ever-changing environment of Indian finance. It's now easier than ever to access the stock market thanks to the development of mobile technologies. Because they offer real-time data, research tools, and frictionless trading experiences, **[share market app](https://play.google.com/store/apps/detailsid=com.icicidirect.markets)**s have grown to be essential tools for investors. This article examines the top share market app available in India for this year, enabling investors to traverse the market's complexity and make well-informed decisions easily. ## Top Share Market Apps **1. Angel Broking:** Angel Broking is distinguished by its extensive research resources and customised investment advice. The Angel Broking app provides smooth trade execution, professional insights, and real-time market information. In the Indian stock market environment, Angel Broking has become a reputable name thanks to its emphasis on innovation and client happiness. **2. ICICI Market App:** Providing a smooth and all-inclusive trading experience, ICICI Market App is an exceptional option for Indian investors. The ICICI Market App offers a one-stop shop for all investing needs, including stock trading, mutual funds, and initial public offering (IPO) investments. With real-time market data and personalised investment suggestions, the ICICI Market App gives customers the confidence to make wise decisions. **3. Zerodha:** With a robust and easy-to-use mobile app, Zerodha has established itself as one of India's top stockbrokers. Zerodha's app offers features that appeal to both novice and seasoned investors, such as direct mutual fund investments, comprehensive charting capabilities, and no money brokerage on equity delivery trades. Because of its simple interface and affordable pricing, traders who want to maximise their profits love it. **4. Upstox:** Known for its dependability and affordable prices, Upstox has become a significant participant in the Indian stock market app business. A number of features are available in the Upstox app, such as sophisticated order types, watchlist customisation, and real-time market data. Investors of all experience levels find Upstox appealing due to its easy-to-use interface and affordable brokerage fees. **5. Groww:** With its user-friendly platform and no-commission structure, Groww has wholly transformed the Indian investment landscape. Gold, mutual funds, equities, and other investment opportunities are all accessible through the Groww app. Groww serves millennials and novice investors by offering features like streamlined investment procedures and educational materials. **Conclusion** Selecting the appropriate share market app is crucial for investors navigating the Indian stock market environment. It is essential to take into account aspects like usability, security, and dependability when choosing an app to handle your money. In conclusion, think about utilising the capabilities of the ICICI Market App as you set out on your Indian investing adventure this year. Investors may prosper in today's dynamic market climate with the tools and resources provided by **ICICI Market App**, which is dedicated to innovation and consumer satisfaction. With the ICICI Market App, you can unleash the potential of the Indian stock market as a seasoned trader or as a rookie investor.
sanjay_raghuvanshi_
1,870,739
Software Design and Architecture: Understanding Their Roles and Challenges in Development
Well, I'm starting to read the excellent book Clean Architecture: A Craftsman's Guide to Software Structure and...
0
2024-05-30T19:54:51
https://dev.to/mathsena/design-e-arquitetura-de-software-entendendo-suas-funcoes-e-desafios-no-desenvolvimento-3b6e
cleancode, design
Well, I'm starting to read the excellent book Clean Architecture: A Craftsman's Guide to Software Structure and Design, written by Robert C. Martin, so I decided to write a bit about some of the important topics covered in the book. We begin with an important subject: the difference between software design and software architecture.

![Clean Architecture book](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gcyzt65v0c5gipvqvsax.png)

Effective software development demands a clear understanding of these two fundamental concepts. Although they are often used interchangeably, the terms describe distinct yet complementary aspects of the process of building software. To illustrate the differences, we can compare them to the different systems that make up the human body, where each part has its essential role, yet all of them work in harmony to keep the whole functioning well.

### Differences Between Software Design and Architecture

**Software architecture** is like the skeleton of the human body: a structure that defines the arrangement and interconnection of the main components, just as bones support and connect every part of the body. In software, architecture involves high-level decisions about the systems and platforms to be used, the organization of code into modules or services, and the communication patterns between those components.

**Software design**, on the other hand, is comparable to the nervous system, responsible for making sure signals move efficiently through the whole body, controlling specific functions. In software, this translates into detailed specifications of how each part of the system should operate and interact, covering algorithms, user interface patterns, and internal data management.

### Goals and Importance

![Clean Architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rukyudbvko6wwcz1234v.png)

The goal of **software architecture** is to create a robust, scalable foundation that supports the system as a whole, guaranteeing that the software can grow and adapt without compromising its functionality. _An effective architecture makes the software easier to maintain and extend_, while a poor architecture can lead to a system that is hard to understand and expensive to change.

**Software design**, in contrast, aims to maximize the efficiency and quality of the system's inner workings. Good design improves code readability, simplifies debugging and maintenance, and reduces the risk of errors. It ensures that each component of the software works correctly within the context established by the architecture.

### Costs and Examples

Investing in good software architecture and design can be significant, but the cost of neglecting them is often higher. For example, the Obamacare software incident in the United States showed how a poorly planned architecture can result in system failures and exorbitant repair costs. The cost of fixing the problems is estimated to have exceeded the original development budget several times over.

### Failures in Software Development

Failures to deliver high-quality software usually come from several factors, such as **overconfidence**, **haste**, and **market pressure**. Developers can overestimate their ability to build complex systems on tight deadlines, resulting in poorly conceived architectures and rushed designs. The pressure to ship products quickly can lead to poorly considered design and architecture decisions, negatively impacting the quality and sustainability of the software.

### The Importance of Clean Code

Clean, well-organized code is crucial to the maintenance and scalability of software. It lets other developers understand the system quickly, reducing the time needed to implement new features or fix bugs. In addition, clean code makes testing easier, which is essential to guarantee the stability and reliability of the software over time.

### Conclusion

Software architecture and design are critical components that determine the success of a development project. As in the human body, each aspect must work in harmony to guarantee the health and effectiveness of the system. Understanding these concepts and applying them carefully is fundamental to avoiding the costs and failures associated with software development, and it culminates in the creation of lasting, effective solutions.
mathsena
1,872,204
How to setup an Apache server on Ubuntu 22.04.4 LTS with Virtual hosting
Apache is a popular open source web server that is widely used to host web pages. In this...
0
2024-05-31T15:45:40
https://dev.to/oyololatoni/how-to-setup-an-apache-server-on-ubuntu-22044-lts-with-virtual-hosting-1mnm
apache, devops, ubuntu, selfhosting
![](https://cdn-images-1.medium.com/max/2000/1*dXU1djueJapzeCDXKq2jzA.jpeg)

Apache is a popular open-source web server that is widely used to host web pages. In this tutorial, you'll create a virtual host environment to run multiple websites with Apache on your Ubuntu 22.04.4 LTS server, allowing you to serve several domains from just one IP address.

## Installing Apache

We'll begin by updating the local package index with the following command:

```bash
sudo apt update
```

Next, install the Apache server:

```bash
sudo apt install apache2
```

Run this command to start the Apache server:

```bash
systemctl start apache2.service
```

Run this command to verify that Apache is running:

```bash
systemctl status apache2.service
```

You can test whether your Apache server is live by typing your public IP address into your web browser; you should get the following result:

![apache home page](https://cdn-images-1.medium.com/max/2000/1*KSdvA3MKETakRXlOn4iElA.png)

## Configuring Your Own Website

By default, Apache is set up to serve documents from **/var/www**, so you will need to create your webpage directory in this folder. We will be using demo.com here, as follows:

```bash
sudo mkdir -p /var/www/demo.com
```

You will need to modify the ownership of this directory so that it is accessible and executable by your user account. This is done by executing the following:

```bash
sudo chown -R $USER:$USER /var/www/demo.com
```

Next, change the permission settings of the directory so that everyone has read and execute permissions while $USER has read, write, and execute permissions:

```bash
sudo chmod -R 755 /var/www/demo.com
```

Next, create an **index.html** file that will hold your site's HTML code:

```bash
sudo vim /var/www/demo.com/index.html
```

Insert the following code into the **index.html** file by pressing **i** to enter insert mode. To save it, press **ESC** to exit insert mode, then type **:wq** to save and quit the editor.

```html
<html>
  <head>
    <title>Welcome to Your_domain!</title>
  </head>
  <body>
    <h1>Success! The your_domain virtual host is working!</h1>
  </body>
</html>
```

## Setting up the VirtualHost Configuration File

The next step is to set up the configuration file for the web server in the **/etc/apache2** folder, which is where Apache's configuration files live. The file also uses the **.conf** extension. Create the configuration file using the following command:

```bash
sudo vim /etc/apache2/sites-available/demo.com.conf
```

Insert this code into the file:

```apache
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName demo.com
    ServerAlias www.demo.com
    DocumentRoot /var/www/demo.com
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```

Save and close the file.

## Activating the VirtualHost File

After setting up the website, the next step is to activate the virtual host file. Do that by running the following commands.

First, enable the file with the **a2ensite** command:

```bash
sudo a2ensite demo.com.conf
```

Then disable the default configuration file:

```bash
sudo a2dissite 000-default.conf
```

Check for configuration errors using the following command:

```bash
sudo apache2ctl configtest
```

The output of that command should be "Syntax OK", indicating that it is properly configured.

Restart the Apache server to apply the changes:

```bash
sudo systemctl restart apache2
```

Congrats! Your website is now hosted on your machine. To view the website, navigate to http://demo.com
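Note that demo.com here is only a placeholder. If you don't actually own the domain, you can still test the virtual host by pointing the name at your own machine. A quick sketch, assuming you are testing from the same server:

```bash
# Map the placeholder domain to the local machine for testing only
echo "127.0.0.1 demo.com www.demo.com" | sudo tee -a /etc/hosts

# Request the site headers and check that the virtual host responds
curl -I http://demo.com
```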
oyololatoni
1,873,422
Ways to Install Node.js
Ways to Install Node.js Node.js is a popular platform for building server-side applications, and...
0
2024-06-02T03:02:35
https://dev.to/mibii/sposoby-ustanovki-nodejs-1dme
node
Ways to Install Node.js

Node.js is a popular platform for building server-side applications, and for many developers it is important to know how to install Node.js correctly and manage its versions. Let's look at the main ways to install Node.js.

## Installing from a Repository

The most common way to install Node.js is to download the installer from the official Node.js website or from your operating system's repositories. This approach is simple and intuitive:

Go to the official Node.js website.
Select the version you need (LTS or Current).
Download the installer and follow the installation instructions.

Linux users often use their package manager instead. For example, on Ubuntu:

```
sudo apt update
sudo apt install nodejs npm
```

## Installing with curl

Another way to install Node.js is to use the curl command, which lets you download and install Node.js directly from the command line. For example, on Ubuntu:

```
curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
```

This command adds the NodeSource repository and installs Node.js and npm.

## Managing Node.js Versions

Developers often need to manage several Node.js versions, especially when different projects require different versions. Several tools exist for this:

## nvm (Node Version Manager)

nvm is one of the most popular tools for managing Node.js versions. It makes it easy to switch between different versions of Node.js.

[Installing nvm:](https://github.com/nvm-sh/nvm)

```
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
```

Using nvm to install and switch versions:

```
nvm install 14.17.0
nvm use 14.17.0
```

Listing installed versions:

```
nvm ls
```

## n (Node Version Manager)

n is another tool for managing Node.js versions that also lets you quickly install and switch between versions.

Installing n:

```
npm install -g n
```

Using n to install and switch versions:

```
n 14.17.0
```

Switching to another version:

```
n 12.22.1
```

## Volta (formerly volta.js)

Volta is a modern tool for managing Node.js versions that emphasizes ease of use and speed.

Installing Volta:

```
curl https://get.volta.sh | bash
```

Using Volta to install and switch versions:

```
volta install node@14.17.0
volta use node@14.17.0
```

Checking the current version:

```
node -v
```

## Conclusion

Installing Node.js and managing its versions are important skills for every developer. Whichever installation method you choose, whether downloading from a repository, using curl, or managing versions with nvm, n, or Volta, the key is to pick the tool that best fits your needs and workflow.

Also, for information on how to check your version of Node and npm, [see the npmjs page](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) on downloading and installing Node.js and npm.
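One handy habit on top of any of these managers: pin the Node.js version per project so everyone on a team gets the same runtime. A small sketch with nvm, assuming an `.nvmrc` file in the project root:

```bash
# Pin the project's Node.js version
echo "14.17.0" > .nvmrc

# Anyone with nvm can then install and activate that exact version
nvm install
nvm use

# Verify the active versions
node -v
npm -v
```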
mibii
1,875,717
How to Integrate Firebase in NestJS for Notifications 🚀
Introduction NestJS is a powerful Node.js framework for building efficient server-side...
27,583
2024-06-03T18:30:00
https://dev.to/shahharsh/how-to-integrate-firebase-in-nestjs-for-notifications-gik
firebase, nestjs, notification, android
#### Introduction

NestJS is a powerful Node.js framework for building efficient server-side applications, while Firebase offers tools like Firebase Cloud Messaging (FCM) for sending notifications. Integrating Firebase with NestJS allows you to send real-time notifications easily.

#### Prerequisites

Make sure you have:

- ✅ Node.js and npm installed
- ✅ Basic knowledge of NestJS
- ✅ A Firebase project set up

#### Step-by-Step Guide

**Step 1: Create a NestJS Project 🏗️**

```bash
npm i -g @nestjs/cli
nest new nest-firebase-notifications
cd nest-firebase-notifications
```

**Step 2: Install Firebase Admin SDK 📦**

```bash
npm install firebase-admin --save
```

**Step 3: Initialize Firebase in NestJS 🔥**

Create a `firebase.service.ts`:

```typescript
// firebase/firebase.service.ts
import { Injectable } from '@nestjs/common';
import * as admin from 'firebase-admin';
import { ServiceAccount } from 'firebase-admin';

@Injectable()
export class FirebaseService {
  constructor() {
    const serviceAccount: ServiceAccount = {
      projectId: process.env.FIREBASE_PROJECT_ID,
      clientEmail: process.env.FIREBASE_CLIENT_EMAIL,
      privateKey: process.env.FIREBASE_PRIVATE_KEY.replace(/\\n/g, '\n'),
    };

    admin.initializeApp({
      credential: admin.credential.cert(serviceAccount),
    });
  }

  async sendNotification(token: string, payload: admin.messaging.MessagingPayload) {
    try {
      await admin.messaging().sendToDevice(token, payload);
    } catch (error) {
      console.error('Error sending notification:', error);
    }
  }
}
```

**Step 4: Add Environment Variables 🌐**

Create a `.env` file:

```plaintext
FIREBASE_PROJECT_ID=your-project-id
FIREBASE_CLIENT_EMAIL=your-client-email
FIREBASE_PRIVATE_KEY=your-private-key
```

Load these with `@nestjs/config` in your `app.module.ts`:

```typescript
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';

@Module({
  imports: [ConfigModule.forRoot()],
  // ...
})
export class AppModule {}
```

**Step 5: Create a Notifications Controller 📲**

```typescript
// notifications/notifications.controller.ts
import { Controller, Post, Body } from '@nestjs/common';
import { FirebaseService } from '../firebase/firebase.service';

@Controller('notifications')
export class NotificationsController {
  constructor(private readonly firebaseService: FirebaseService) {}

  @Post()
  async sendNotification(@Body('token') token: string, @Body('message') message: string) {
    const payload = {
      notification: {
        title: 'New Notification',
        body: message,
      },
    };

    await this.firebaseService.sendNotification(token, payload);
    return { success: true };
  }
}
```

**Step 6: Register the Service and Controller 📝**

Update your `app.module.ts`:

```typescript
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import { NotificationsController } from './notifications/notifications.controller';
import { FirebaseService } from './firebase/firebase.service';

@Module({
  imports: [ConfigModule.forRoot()],
  controllers: [NotificationsController],
  providers: [FirebaseService],
})
export class AppModule {}
```

**Step 7: Test Your Setup 🛠️**

Start your app:

```bash
npm run start
```

Test the endpoint with Postman or curl:

```bash
curl -X POST http://localhost:5000/notifications \
  -H "Content-Type: application/json" \
  -d '{
    "token": "recipient-device-token",
    "message": "Hello from Harsh Shah!"
  }'
```

🎉 Your device should receive a notification!

#### Conclusion

Integrating Firebase with NestJS for notifications is straightforward. Follow these steps to enhance your application with real-time notifications and improve user engagement. 🚀
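A quick caveat: depending on your `firebase-admin` version, the legacy `sendToDevice()` call may be flagged as deprecated. If so, a roughly equivalent sketch using the newer `send()` API is below; treat it as an assumption to verify against your SDK version. It is meant as a drop-in replacement for the `sendNotification` method in the `FirebaseService` above and relies on the same initialization from Step 3:

```typescript
// Sketch: alternative sendNotification for FirebaseService using messaging().send()
// (assumes the firebase-admin initialization shown in Step 3)
async sendNotification(token: string, title: string, body: string) {
  try {
    // send() delivers a single message addressed to one device token
    await admin.messaging().send({
      token,
      notification: { title, body },
    });
  } catch (error) {
    console.error('Error sending notification:', error);
  }
}
```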
shahharsh
1,876,278
Boost MongoDB Performance: Motor Client vs PyMongo - Which is Faster?
Once I was tasked with improving the database and general app operations of backend services which...
0
2024-06-04T06:55:55
https://www.developertimeline.io/scalable-database-operations
webdev, mongodb, python
Once I was tasked with improving the database and general app operations of backend services that use MongoDB as their main database. These services were part of a huge infrastructure in which millions of messages come through the queues and need to be processed based on the message actions. That means tons of DB ops each second, plus other additional checks while processing.

> This post cross-published with [OnePublish](https://onepubli.sh)

### **Real Case Scenario**

The processing layer of the service was using `pymongo` to interact with MongoDB, and the service itself was running in a synchronous environment. Even though the database operations were handled in bulk, performance was still not capable of keeping up with the incoming data. Synchronous code made things even worse: execution waits for the result of the current operation before moving forward. That's a serious bottleneck in scalable systems, and it was causing queue overflows and potential data loss.

### **Asynchronous Environment**

The solution I implemented was a combination of:

- Motor
- Asyncio
- Uvloop

Let's quickly go through the definitions of these items.

**PyMongo** is the official MongoDB driver for Python, providing a simple and intuitive way to interact with MongoDB databases. It's synchronous, meaning each database operation blocks the execution of your program until it completes, which can be a bottleneck in I/O-bound tasks.

**Motor** is the asynchronous driver for MongoDB, built on top of PyMongo and designed to take advantage of Python's `asyncio` library. Motor allows you to perform non-blocking database operations, making it suitable for high-performance applications that require concurrency.

To illustrate the performance differences, I prepared a stress test using two scripts: one using Motor (asynchronous) and the other using PyMongo (synchronous). Both scripts performed the same task of reading and writing documents to MongoDB in batches: each reads 300k documents from a source collection and migrates them to a new target collection.
### **Asynchronous Script (Motor)**

```python
import logging
import asyncio
import time

from bson import ObjectId
from motor.motor_asyncio import AsyncIOMotorClient

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# MongoDB setup
MONGO_URI = 'mongodb://root:root@localhost:27019'
DB_NAME = 'products'
COLLECTION_NAME = 'gmc_products'

client = AsyncIOMotorClient(MONGO_URI)
db = client[DB_NAME]
collection = db[COLLECTION_NAME]
target_collection = db["new_collection"]


async def fetch_products(batch_size, last_id=None):
    query = {'_id': {'$gt': last_id}} if last_id else {}
    cursor = collection.find(query).sort('_id').limit(batch_size)
    products = await cursor.to_list(length=batch_size)
    return products


async def bulk_write_to_mongo(products):
    for product in products:
        product['_id'] = ObjectId()  # Generate a new ObjectId for each product
    try:
        result = await target_collection.insert_many(products, ordered=False)
        logger.info(f'Inserted {len(result.inserted_ids)} products into MongoDB.')
    except Exception as e:
        logger.error(f'Error inserting products into MongoDB: {e}')


async def process_batches(batch_size, concurrency_limit):
    tasks = []
    last_id = None
    while True:
        products = await fetch_products(batch_size, last_id)
        if not products:
            break
        last_id = products[-1]['_id']
        tasks.append(bulk_write_to_mongo(products))
        if len(tasks) >= concurrency_limit:
            await asyncio.gather(*tasks)
            tasks = []
    # Process remaining tasks if any
    if tasks:
        await asyncio.gather(*tasks)


async def main():
    batch_size = 1000
    concurrency_limit = 10
    start_time = time.time()
    await process_batches(batch_size, concurrency_limit)
    end_time = time.time()
    logger.info(f'Total time: {end_time - start_time:.2f} seconds.')


if __name__ == '__main__':
    asyncio.run(main())
```

### **Synchronous Script (PyMongo)**

```python
import logging
import time

from bson import ObjectId
from pymongo import MongoClient

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# MongoDB setup
MONGO_URI = 'mongodb://root:root@localhost:27019'
DB_NAME = 'products'
COLLECTION_NAME = 'gmc_products'
TARGET_COLLECTION_NAME = 'new_collection'

client = MongoClient(MONGO_URI)
db = client[DB_NAME]
collection = db[COLLECTION_NAME]
target_collection = db[TARGET_COLLECTION_NAME]


def fetch_products(batch_size, last_id=None):
    query = {'_id': {'$gt': last_id}} if last_id else {}
    cursor = collection.find(query).sort('_id').limit(batch_size)
    products = list(cursor)
    return products


def bulk_write_to_mongo(products):
    for product in products:
        product['_id'] = ObjectId()  # Generate a new ObjectId for each product
    try:
        result = target_collection.insert_many(products, ordered=False)
        logger.info(f'Inserted {len(result.inserted_ids)} products into MongoDB.')
    except Exception as e:
        logger.error(f'Error inserting products into MongoDB: {e}')


def process_batches(batch_size):
    last_id = None
    while True:
        products = fetch_products(batch_size, last_id)
        if not products:
            break
        last_id = products[-1]['_id']
        bulk_write_to_mongo(products)


def main():
    batch_size = 1000
    start_time = time.time()
    process_batches(batch_size)
    end_time = time.time()
    logger.info(f'Total time: {end_time - start_time:.2f} seconds.')


if __name__ == '__main__':
    main()
```

### **Results and Analysis**

**Execution Time of Migrating 300k documents**:

- Asynchronous script: 17.15 seconds
- Synchronous script: 23.26 seconds

The asynchronous script completed the task 6.11 seconds faster than the synchronous script.
While this might not seem like a significant difference for a single run, it becomes more pronounced in **high-load scenarios** or when processing large datasets continuously.

### **Benefits of Using Motor and an Asynchronous Environment**

1. **Improved Throughput**: Asynchronous operations can handle more tasks concurrently, increasing overall throughput. This is especially beneficial in applications with heavy I/O, such as web servers handling multiple database queries simultaneously.
2. **Non-Blocking I/O**: Asynchronous operations do not block the main thread, allowing other tasks to continue running. This results in better CPU utilization and smoother application performance, particularly under load.
3. **Scalability**: Asynchronous code scales better with the number of concurrent operations. For example, a web application using Motor can handle more simultaneous requests than one using PyMongo (a short hypothetical handler sketch is included at the end of this article).
4. **Resource Efficiency**: Asynchronous operations can lead to more efficient use of system resources. For instance, the event loop in asyncio allows the application to switch between tasks, reducing idle times and improving overall efficiency.

### **Source Code and Video Explanation**

You can find the source code in the GitHub repository below:

{% embed https://github.com/PylotStuff/motor-asyncio-performance %}

[YouTube - Boost MongoDB Performance: Motor Client vs PyMongo - Which is Faster?](https://www.youtube.com/watch?v=8LBLXzAzWbM&t=1s)

### **Conclusion**

The choice between Motor and PyMongo depends on the specific needs of your application. For applications that require high concurrency and efficient I/O handling, Motor and the asynchronous approach offer significant advantages. However, for simpler applications or scripts where ease of implementation is a priority, PyMongo's synchronous approach might be sufficient.

By leveraging asynchronous operations with Motor, you can build more scalable and performant applications, making it a worthwhile consideration for modern web development.
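To make the web-server point in the benefits list above concrete, here is a short hypothetical sketch of a Motor query inside an async request handler. FastAPI is used purely as an example framework, and the endpoint, URI, and collection names are illustrative rather than taken from this article's project.

```python
# Hypothetical sketch: a non-blocking request handler backed by Motor.
# Framework, URI, and collection names are illustrative only.
from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient

app = FastAPI()
client = AsyncIOMotorClient('mongodb://localhost:27017')
collection = client['products']['gmc_products']


@app.get('/products')
async def list_products(limit: int = 50):
    # The await hands control back to the event loop, so other requests
    # can be served while MongoDB works on this query.
    return await collection.find({}, {'_id': 0}).to_list(length=limit)
```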
thedevtimeline
1,876,600
Simplifying Transformer Blocks
Simplifying Transformer Blocks
0
2024-06-04T12:22:29
https://aimodels.fyi/papers/arxiv/simplifying-transformer-blocks
machinelearning, ai, beginners, datascience
*This is a Plain English Papers summary of a research paper called [Simplifying Transformer Blocks](https://aimodels.fyi/papers/arxiv/simplifying-transformer-blocks). If you like these kinds of analyses, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*

## Overview

- Researchers propose a simplified design for deep Transformer models, which are a key component of many state-of-the-art language models.
- The standard Transformer block is complex, with multiple interconnected sub-components, making the architecture brittle and sensitive to changes.
- This paper explores ways to simplify the Transformer block while maintaining its performance and training speed.

## Plain English Explanation

Transformer models have become a fundamental building block of many powerful language AI systems, such as [GPT-3](https://aimodels.fyi/papers/arxiv/your-transformer-is-secretly-linear) and [BERT](https://aimodels.fyi/papers/arxiv/brainformers-trading-simplicity-efficiency). However, the standard Transformer block used in these models is quite intricate, with multiple interconnected parts like attention mechanisms, feedforward neural networks, and normalization layers. This complexity can make the models fragile, where even small changes to the architecture can significantly slow down training or prevent the model from being trained at all.

The researchers in this paper explored ways to **simplify the Transformer block** while still maintaining its performance and training speed. By drawing on signal propagation theory and empirical observations, they were able to remove several components of the standard Transformer block, including skip connections, projection or value parameters, sequential sub-blocks, and normalization layers.

Despite these simplifications, their **modified Transformer models were able to match the training speed and performance of the standard Transformer**, while actually training **15% faster** and using **15% fewer parameters**. This work demonstrates that the standard Transformer block design may be unnecessarily complex, and that simpler alternatives can be just as effective. This could lead to more efficient and robust Transformer-based language models in the future.

## Technical Explanation

The researchers propose a **simplified Transformer block design** by combining insights from signal propagation theory and empirical observations. They methodically remove various components of the standard Transformer block, including:

- **Skip connections**: The researchers found that skip connections, which allow information to bypass certain layers, were not necessary for effective training.
- **Projection or value parameters**: Removing the projection and value parameters in the attention mechanism did not impair performance.
- **Sequential sub-blocks**: Restructuring the attention and feedforward neural network sub-blocks to run in parallel, rather than sequentially, did not negatively impact the model (a rough illustrative sketch of this parallel arrangement appears at the end of this summary).
- **Normalization layers**: The normalization layers, commonly used to stabilize training, were also found to be unnecessary.

Through experiments on both **autoregressive decoder-only** and **BERT encoder-only** Transformer models, the researchers showed that their simplified Transformer blocks were able to **match the per-update training speed and performance** of the standard Transformer blocks. Additionally, the simplified models achieved **15% faster training throughput** and used **15% fewer parameters**.

## Critical Analysis

The researchers provide a thorough analysis of their simplified Transformer block design, addressing potential concerns and limitations. They acknowledge that while their modifications may not generalize to all Transformer-based models, the core principles behind their simplifications - such as [streamlining large language models through redundancy verification and elimination](https://aimodels.fyi/papers/arxiv/sleb-streamlining-llms-through-redundancy-verification-elimination) - could be applied more broadly.

One potential area for further research would be to explore the **impact of these simplifications on different Transformer architectures and tasks**, beyond the autoregressive and BERT-style models studied in this paper. Additionally, the researchers do not delve into the **theoretical underpinnings** of why certain Transformer components can be removed without performance degradation, which could be a fruitful area for future work.

Overall, this paper presents a compelling approach to **reducing the complexity of Transformer models** while maintaining their effectiveness, which could have significant implications for the efficiency and robustness of future language AI systems.

## Conclusion

This research demonstrates that the standard Transformer block design may be overly complex, and that simpler alternatives can be equally effective. By removing various components, such as skip connections, projection parameters, and normalization layers, the researchers were able to create simplified Transformer blocks that matched the performance of the standard design while training 15% faster and using 15% fewer parameters.

These findings could lead to the development of more efficient and robust Transformer-based language models, which are at the heart of many state-of-the-art AI systems. By [exploring alternative Transformer architectures](https://aimodels.fyi/papers/arxiv/towards-smallers-faster-decoder-only-transformers-architectural) and [drawing inspiration from the brain](https://aimodels.fyi/papers/arxiv/lateralization-mlp-simple-brain-inspired-architecture-diffusion), researchers can continue to push the boundaries of what is possible in natural language processing and generation.

**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.**
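As a rough illustration of what running the attention and feedforward sub-blocks in parallel, without skip connections or normalization layers, can look like, here is a small PyTorch-style sketch. This is my own simplified example rather than the paper's exact architecture, and in practice such stripped-down blocks only train well with the careful, signal-propagation-informed adjustments the authors describe (causal masking is also omitted for brevity).

```python
# Illustrative sketch only (not the paper's exact block): attention and MLP
# read the same input and their outputs are summed, with no LayerNorm or skip.
import torch
import torch.nn as nn


class ParallelBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        return attn_out + self.mlp(x)  # parallel sub-blocks, summed


# Example: a batch of 2 sequences, length 16, model width 64.
block = ParallelBlock(d_model=64, n_heads=4, d_ff=256)
y = block(torch.randn(2, 16, 64))
```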
mikeyoung44
1,877,070
OpenID Connect Flows: From Implicit to Authorization Code with PKCE & BFF
Introduction This article will review the principles behind various OpenID Connect (OIDC)...
0
2024-06-04T19:49:23
https://dev.to/alexey-abblix/openid-connect-flows-from-implicit-to-authorization-code-with-pkce-bff-4ke8
authentication, security, oauth2, webdev
## Introduction

This article will review the principles behind various OpenID Connect (OIDC) authentication flows, from the simplest to the most modern, highlighting the vulnerabilities present in each. We will explore each of the following OpenID Connect flows in detail:

Frontend-only implementations:

- **Implicit Flow**
- **Authorization Code Flow with Proof Key for Code Exchange (PKCE)**

Frontend and Backend implementations:

- **Authorization Code Flow with Token-Mediating Backend**
- **Authorization Code Flow with Token-Mediating Backend and PKCE**
- **Authorization Code Flow with PKCE and Backend for Frontend (BFF)**

For convenience, some information will be repeated in each section. This way, you can read about the specific flows that interest you without needing to read the entire article.

## Vulnerabilities Present in Authentication Processes

These vulnerabilities are critical to understand when implementing OpenID Connect or OAuth 2.0 protocols in your applications. Modern web applications can be attacked through various types of vectors:

### Common Attack Vectors

- **XSS (Cross-Site Scripting) attacks** involve injecting malicious JavaScript code into an application through input fields, URL parameters, or other entry points. This happens when input data isn't properly escaped or sanitized.
- **CSRF (Cross-Site Request Forgery)** occurs when malicious scripts or browser extensions perform actions on behalf of the user without their consent. These scripts can use the user's session and credentials to carry out unauthorized actions, emulating legitimate user behavior.
- **Dependency Compromise** occurs when modern web applications rely on numerous external libraries and resources. If these dependencies are compromised, they can introduce vulnerabilities into the application.
- **Browser extension vulnerabilities** arise when browser extensions, which have access to web application code and user data, are compromised or malicious. These compromised extensions can introduce vulnerabilities, allowing attackers to steal sensitive information or inject malicious code into web applications.

These attack vectors allow an attacker to run arbitrary code in the user's application execution context. This may lead to the following threats:

### Specific Threats

- A **Code Interception Attack** occurs when an attacker intercepts authorization codes exposed in the URL during the redirect, resulting in unauthorized access to the user's resources.
- **Persistent Token Theft** is a risk when tokens are stored within the browser's storage, making them susceptible to continuous theft (such as every 10 seconds) by malicious scripts.
- **Acquisition and Extraction of New Tokens** occurs when a session is active on the OpenID Provider side, and malicious JavaScript initiates a silent authentication process in a hidden iframe to obtain new access tokens without the user's knowledge.
- **Proxying Requests via the User's Browser** occurs when malicious JavaScript exploits an authenticated session by simulating user actions within the application, sending unauthorized requests to the Protected Resource on behalf of the user.

### Vulnerabilities Specific to Certain OpenID Connect Implementations

- **Token Transmission via URL** occurs when tokens are visible in the address bar, stored in browser history, logs, and passed via HTTP referrers, making them vulnerable to interception and unauthorized use. This is especially relevant to Implicit Flow in frontend-only implementations.
- **Lack of Refresh Token Support** occurs when there are no refresh tokens and frequent requests for new tokens are necessary, increasing the chances of token leakage and misuse. This is especially relevant to Implicit Flow in frontend-only implementations.

## Evolution of OpenID Connect Flows

### Frontend-only implementations

![Evolution Of OpenId Connect Flows Mitigating Vulnerabilities Over Time](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/68s242jh8uf65r7ijkcm.png)

### Frontend and Backend implementations

![Evolution Of OpenId Connect Flows Mitigating Vulnerabilities Over Time](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c76n6uxd15ualx026s1y.png)

### Description

The diagrams illustrate the evolution of OpenID Connect flows over time, showing how the authentication methods have become increasingly secure. They highlight the shift from the initial Implicit Flow, which had numerous vulnerabilities, to the more secure Authorization Code Flow with PKCE and BFF. Each progression step is depicted to demonstrate how these new flows have effectively addressed and reduced various security vulnerabilities, ensuring a more secure authentication process in modern web applications.

Initially, there were two different types of authorization flows in OAuth 2.0: the Authorization Code Flow and the Implicit Flow. The Authorization Code Flow was designed for scenarios where a backend server acted as the client, handling communication with the authorization server. On the other hand, the Implicit Flow was a simplified alternative intended mainly for frontend applications. However, the Implicit Flow had several security vulnerabilities, such as token interception and theft.

To enhance security, the Authorization Code Flow with Proof Key for Code Exchange (PKCE) was developed. PKCE addressed significant vulnerabilities, including code interception attacks. This flow marked a considerable improvement in securing the authentication process.

To address further issues, the Authorization Code Flow with PKCE and a token-mediating backend became the next step. This approach involved both the frontend and the backend, thereby reducing some vulnerabilities like token exposure. However, it still had some issues, such as Persistent Token Theft.

The latest advancement shown in the diagram is the Authorization Code Flow with both PKCE and BFF (Backend for Frontend). This approach combines the benefits of PKCE with additional backend protections, further mitigating vulnerabilities like persistent token theft.

Now we will examine the principles of each flow in detail.

## Implicit Flow in a frontend-only implementation

![OpenId Connect Implicit Flow in a frontend-only implementation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gv9vrue3ewfxlu6d4hei.png)

- In OpenID Connect, the **End-User** is the person using an application to access services. The **Relying Party (RP)** is the application itself, relying on the OpenID Provider for user authentication. These applications can be mobile apps, web apps, desktop apps, IoT devices, or gaming consoles. They act as clients, requesting tokens from the OpenID Provider to authenticate users and gain access to protected resources.
- The **OpenID Provider (OP)** is an authorization server responsible for verifying the identity of the End-User. It authenticates the user and issues identity and access tokens to the Relying Party.
For instance, in the diagram, the Abblix OIDC Server, a certified OpenID Foundation library presented by the company [Abblix](https://www.abblix.com/abblix-oidc-server) with open source code (available on [GitHub](https://github.com/Abblix/Oidc.Server)), serves as the OpenID Provider. This server ensures that users are who they claim to be and provides the necessary tokens to the applications that need to interact securely with protected resources.
- The **Resource Server** stores and manages protected information, hosting sensitive data like personal details, financial information, and private documents. In OpenID Connect, the Resource Server uses tokens issued by the OpenID Provider to decide whether to grant or deny access. The tokens verify the user's identity and permissions, ensuring only authorized users can access the sensitive data.

### Step-by-Step Process

#### Step 1. User Redirects to OpenID Provider

When a user tries to access a protected resource, the application checks their authentication status. If the user is not logged in, the application will redirect their browser to the OpenID Provider to start the authentication process. This request includes parameters such as `client_id`, `redirect_uri`, `response_type=id_token token` (indicating that the client expects both an ID token and an access token in the response, characteristic of the Implicit Flow), `scope`, and `state`.

The parameters in this request serve specific purposes:

- **client_id**: This identifies the client application making the request. For example, `client_id=myapp-12345`.
- **redirect_uri**: This specifies the URL to which the OpenID Provider should redirect the user after authentication. For example, `redirect_uri=https://myapp.com/callback`.
- **response_type=id_token token**: This indicates that the client expects both an ID token and an access token in the response.
  - **ID Token**: An ID token is a JSON Web Token (JWT) that contains information about the authenticated user. It typically includes claims such as the user's identity, authentication time, and other relevant data. For example, it may contain the user's name, email address, and a unique identifier.
  - **Access Token**: An access token is a credential used to access protected resources. It is also usually a JWT and includes information about the permissions granted to the client application. This token is sent along with requests to the Resource Server to authorize access to specific resources. An access token is a bearer token, meaning it is used by clients as-is, without parsing its contents or performing validation.
- **scope**: This defines the access privileges requested, such as access to the user's profile information. For example, `scope=openid profile email`.
- **state**: This parameter helps prevent CSRF attacks by maintaining state between the request and the callback, ensuring the response matches the original request made by the client. For example, `state=abc123`.

Here are examples of what the parameters might look like in a real request:

- `client_id=myapp-12345`
- `redirect_uri=https://myapp.com/callback`
- `response_type=id_token token`
- `scope=openid profile email`
- `state=abc123`

By including these parameters, the client application ensures that the OpenID Provider can correctly identify the client, know where to send the response, understand what types of tokens are being requested, define the access privileges, and secure communication against CSRF attacks.

#### Step 2. Show Login Form

The OpenID Provider displays a login form to the user. This form prompts the user to enter their login credentials (typically a username and password).

#### Step 3. User Enters Login and Password and Submits the Form

The user enters their login credentials into the form, including their username and password, and submits it. The OpenID Provider processes these credentials to authenticate the user.

#### Step 4. Access Token Response

Upon successful authentication, the OpenID Provider redirects the user's browser back to the specified `redirect_uri` with an `access_token` and an ID token included in the URL fragment. This response contains the tokens needed for the client (Relying Party) to authenticate the user and make authorized requests to protected resources.

> IMPORTANT
>
> After this step, once the `access_token` is received, several types of attacks can occur.

#### Step 5. Protected Resource Request

The client application (Relying Party) uses the access token to make requests to the Resource Server for access to protected resources. The `access_token` is sent as part of the request to prove that the client has permission to access the requested resources.

#### Step 6. Protected Resource Response

The Resource Server verifies the access token and, if valid, responds to the client's request by providing access to the protected data. This information is sent back to the client application for use by the user.

### Summary of OpenID Connect Implicit Flow in a frontend-only implementation

To summarize, the Implicit Flow in OpenID Connect involves redirecting the user to the OpenID Provider for authentication; the user submits their login credentials, and the application receives an access token and ID token in the redirect response, which are then used to access protected resources.

However, the Implicit Flow has several security vulnerabilities:

- **Persistent Token Theft**: Tokens are present in the browser, and fresh tokens can be stolen continuously, for example, every 10 seconds.
- **Acquisition and Extraction of New Tokens**: The user is authenticated on the OpenID Provider side, and malicious JavaScript can initiate a silent request to the OpenID Provider in a hidden iframe and obtain its own unique access token.
- **Proxying Requests via the User's Browser**: Malicious JavaScript can "click buttons in the application" on behalf of the user, sending requests to the Protected Resource in the user's name.
- **Token Transmission via URL**: Tokens are visible in the address bar, stored in browser history, logs, and passed via HTTP referrers, making them vulnerable to interception and unauthorized use. This is particularly relevant to Implicit Flow in frontend-only implementations.
- **Lack of Refresh Token Support**: Without refresh tokens, frequent requests for new tokens are necessary, which increases the chances of token leakage and misuse. This is particularly relevant to Implicit Flow in frontend-only implementations.

To address these security concerns, more secure flows, such as the Authorization Code Flow with a backend, are recommended. Additionally, if the application only involves a frontend component, many of these vulnerabilities can be mitigated by using the OpenID Connect Authorization Code Flow with PKCE. In the next section, we will examine these flows and explore how they mitigate the vulnerabilities present in the Implicit Flow, providing a safer mechanism for handling and transmitting tokens.
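As a concrete reference for the PKCE-based flows discussed next, here is a minimal Python sketch of how a client can generate a `code_verifier`, derive the `code_challenge` with the `S256` method, and assemble the authorization request parameters from Step 1. It is purely illustrative; the authorization endpoint and client values are placeholders rather than values from this article.

```python
# Illustrative sketch only: PKCE values and an authorization URL.
# All endpoint and client values below are placeholders.
import base64
import hashlib
import secrets
from urllib.parse import urlencode

# 1. High-entropy code_verifier (43-128 characters per RFC 7636).
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b'=').decode()

# 2. code_challenge = BASE64URL(SHA256(code_verifier)), without padding.
digest = hashlib.sha256(code_verifier.encode()).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b'=').decode()

# 3. Authorization request parameters (Step 1 of the PKCE flows).
params = {
    'client_id': 'myapp-12345',
    'redirect_uri': 'https://myapp.com/callback',
    'response_type': 'code',
    'scope': 'openid profile email',
    'state': secrets.token_urlsafe(16),  # also stored locally to verify the callback
    'code_challenge': code_challenge,
    'code_challenge_method': 'S256',
}
authorization_url = 'https://op.example.com/authorize?' + urlencode(params)
```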
## Authorization Code Flow with PKCE in a frontend-only implementation ![OpenId Connect Authorization Code Flow with PKCE in a frontend only implementation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q4fqhoyis4bt1er1i6k1.png) ### Overview In OpenID Connect, the **End-User** is the person using an application to access services. The **Relying Party (RP)** is the application itself, relying on the OpenID Provider for user authentication. These applications can be mobile apps, web apps, desktop apps, IoT devices, or gaming consoles. They act as clients, requesting tokens from the OpenID Provider to authenticate users and gain access to protected resources. The **OpenID Provider (OP)** is an authorization server responsible for verifying the identity of the End-User. It authenticates the user and issues identity and access tokens to the Relying Party. For instance, in the diagram, the Abblix OIDC Server, a certified OpenID Foundation library presented by the company [Abblix](https://www.abblix.com/abblix-oidc-server) with open source code (available on [GitHub](https://github.com/Abblix/Oidc.Server)), serves as the OpenID Provider. This server ensures that users are who they claim to be and provides the necessary tokens to the applications that need to interact securely with protected resources. The **Resource Server** stores and manages protected information, hosting sensitive data like personal details, financial information, and private documents. In OpenID Connect, the Resource Server uses tokens issued by the OpenID Provider to decide whether to grant or deny access. The tokens verify the user's identity and permissions, ensuring only authorized users can access the sensitive data. The **Proof Key for Code Exchange (PKCE)** is designed to prevent authorization code interception and injection attacks. It ensures that only the client that requested the authorization code can use it. PKCE allows public clients, such as mobile and single-page applications, to use the Authorization Code Flow securely. ### Step-by-Step Process #### Step 1. User Redirects to OpenID Provider When a user tries to access a protected resource, the application checks their authentication status. If the user is not logged in, the application will redirect their browser to the OpenID Provider to start the authentication process. This request includes parameters such as `client_id`, `redirect_uri`, `response_type=code`, `scope`, `state`, and, specifically for PKCE, `code_challenge` and `code_challenge_method`. Before making the request, the client generates a high-entropy cryptographic random string known as the `code_verifier`. The `code_challenge` is then derived by hashing the `code_verifier` using the specified `code_challenge_method`. This `code_challenge` is sent with the initial authorization request, ensuring that the `code_verifier` is used later to prove that the true originator of the authorization request is asking for a token in exchange for the authorization code. The parameters in this request serve specific purposes: - **client_id**: This identifies the client application making the request. For example, `client_id=myapp-12345`. - **redirect_uri**: This specifies the URL to which the OpenID Provider should redirect the user after authentication. For example, `redirect_uri=https://myapp.com/callback`. - **response_type=code**: This indicates that the client expects an authorization code in the response. 
- **scope**: This defines the access privileges requested, such as access to the user's profile information. For example, `scope=openid profile email`. - **state**: This parameter helps prevent CSRF attacks by maintaining state between the request and the callback, ensuring the response matches the original request made by the client. For example, `state=abc123`. - **code_challenge**: This is a derived value used to ensure that the token exchange is secure. - **code_challenge_method**: This specifies the method used to generate the code challenge. For example, `code_challenge_method=S256`. Here are examples of what the parameters might look like in a real request: - `client_id=myapp-12345` - `redirect_uri=https://myapp.com/callback` - `response_type=code` - `scope=openid profile email` - `state=abc123` - `code_challenge=abc123def456acd789` - `code_challenge_method=S256` By including these parameters, the client application ensures that the OpenID Provider can correctly identify the client, know where to send the response, understand what types of tokens are being requested, define the access privileges, and secure the communication against attacks. #### Step 2. Show Login Form The OpenID Provider displays a login form to the user. This form prompts the user to enter their login credentials (typically a username and password). #### Step 3. User Enters Login and Password and Submits the Form The user enters their login credentials into the form, including their username and password, and submits it. The OpenID Provider processes these credentials to authenticate the user. #### Step 4. Authorization Code Response Upon successful authentication, the OpenID Provider redirects the user's browser back to the specified `redirect_uri` with an `authorization_code` included in the URL. The `authorization_code` is a short-lived, one-time code (e.g., `authorization_code=SplxlOBeZQQYbYS6WxSbIA`) that the client exchanges for an access token. This ensures secure token issuance. This code is used by the client to request an access token from the OpenID Provider. #### Step 5. Asynchronous Access Token Request The client application (Relying Party) sends an asynchronous request to the OpenID Provider's token endpoint to exchange the authorization code for an access token. This request includes the `authorization_code`, `client_id`, `redirect_uri`, and, specifically for PKCE, the original value of `code_verifier`. #### Step 6. Access Token Response The OpenID Provider validates the request and, if successful, returns an `access_token` and a `refresh_token` to the client application. The `access_token` is a credential used to access protected resources. It typically has a short lifespan to minimize risk. Example: `access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9`. The `refresh_token` is used to obtain a new access token without requiring the user to re-authenticate, thereby maintaining the session securely over longer periods. Example: `refresh_token=def502b0b22b3d8f67d1b7c2c1b5a6b2c7d1f7f8c8b9c4e2`. > IMPORTANT > > After this step, once the tokens are received, several types of attacks can occur. #### Step 7. Async User Info Request The client application sends an asynchronous request to the OpenID Provider's user info endpoint to retrieve the user's profile information, using the `access_token`. #### Step 8. User Info Response The OpenID Provider responds with the user's claims (e.g., name, email). #### Step 9. 
Protected Resource Request The client application (Relying Party) uses the access token to make requests to the Resource Server to access protected resources. The `access_token` is sent as part of the request to prove that the client has permission to access the requested resources. #### Step 10. Protected Resource Response The Resource Server verifies the access token and, if valid, responds to the client's request, by providing access to the protected data. This information is sent back to the client application for use by the user. ### Summary of OpenID Connect Authorization Code Flow with PKCE in a frontend-only implementation To summarize, the Authorization Code Flow with PKCE in OpenID Connect involves redirecting the user to the OpenID Provider for authentication, the user submits their login credentials, the application receives an authorization code in the redirect response, and then securely exchanges this code for an access token using PKCE. This flow enhances security by ensuring that the authorization code can only be used by the client that requested it, significantly reducing the risk of token interception. Using PKCE mitigates several vulnerabilities: - **Code Interception Attack**: PKCE ensures that the authorization code can only be exchanged for tokens by the client that requested it, preventing attackers from intercepting the code during the redirect. - **Token Transmission via URL**: PKCE does not send tokens in the URL, reducing the risk of tokens being intercepted from the address bar, browser history, logs, and HTTP referrers. - **Lack of Refresh Token Support**: PKCE supports the use of refresh tokens, allowing secure, long-term session maintenance without frequent requests for new tokens, thereby reducing the chances of token leakage and misuse. However, even with PKCE, some vulnerabilities remain: - **Persistent Token Theft**: Tokens are present in the browser, and fresh tokens can be stolen continuously, for example, every 10 seconds. - **Acquisition and Extraction of New Tokens**: The user is authenticated on the OpenID Provider side, and malicious JavaScript can initiate a silent request to the OpenID Provider in a hidden iframe and obtain its own unique access token. - **Proxying Requests via the User's Browser**: Malicious JavaScript can "click buttons in the application" on behalf of the user, sending requests to the Protected Resource in the user's name. To further enhance security, the backend component should be used. The Authorization Code Flow with PKCE, combined with a backend, addresses many security concerns effectively, providing a more secure mechanism for token handling and transmission. ## Authorization Code Flow with Token-Mediating Backend ![OpenId Connect Authorization Code Flow with Token-Mediating Backend](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bxvir5jneyi9xkz65ejl.png) ### Overview - In OpenID Connect, the **End-User** is the person using an application to access services. These applications can include mobile apps, web apps, desktop apps, IoT devices, gaming applications, and smartwatches. The End-User interacts with these applications to access various services and resources. - The **Relying Party (RP)** is the backend component of the application that the End-User interacts with. The RP depends on the OpenID Provider for user authentication. It requests tokens from the OpenID Provider to authenticate users and gain access to protected resources. 
In our scheme, the backend functions as the RP, adding an extra layer of security in handling authentication and tokens. - The **OpenID Provider (OP)** is an authorization server responsible for verifying the identity of the End-User. It authenticates the user and issues identity and access tokens to the Relying Party. For instance, in the diagram, the Abblix OIDC Server, a certified OpenID Foundation library presented by the company [Abblix](https://www.abblix.com/abblix-oidc-server) with open source code (available on [GitHub](https://github.com/Abblix/Oidc.Server)), serves as the OpenID Provider. This server ensures that users are who they claim to be and provides the necessary tokens to the applications that need to interact securely with protected resources. - The **Resource Server** is where the protected information is stored and managed. It hosts sensitive data like personal details, financial information, and private documents. In OpenID Connect, the Resource Server uses the tokens issued by the OpenID Provider to decide whether to grant or deny access to this information. The tokens verify the user's identity and permissions, ensuring that only authorized users can access the sensitive data. ### Step-by-Step Process #### Step 1. User Requests a Protected Resource The user requests a protected resource, prompting the application to check the authentication status. Seeing that the user is not logged in, the application redirects the user's browser to the OpenID Provider to initiate the authentication process. #### Step 2. Redirect to Authentication The Relying Party (RP) initiates a redirect of the user's browser to the OpenID Provider for authentication, using parameters such as `client_id`, `redirect_uri`, `response_type=code`, `scope`, and `state`. - **client_id**: This identifies the client application making the request. For example, `client_id=myapp-12345`. - **redirect_uri**: This specifies the URL to which the OpenID Provider should redirect the user after authentication. For example, `redirect_uri=https://myapp.com/callback`. - **response_type=code**: This indicates that the client expects an authorization code in the response. - **scope**: This defines the access privileges requested, such as access to the user's profile information. For example, `scope=openid profile email`. - **state**: This parameter helps prevent CSRF attacks by maintaining state between the request and the callback, ensuring the response matches the original request made by the client. For example, `state=abc123`. Here are examples of what the parameters might look like in a real request: - `client_id=myapp-12345` - `redirect_uri=https://myapp.com/callback` - `response_type=code` - `scope=openid profile email` - `state=abc123` By including these parameters, the client application ensures that the OpenID Provider can correctly identify the client, know where to send the response, understand what types of tokens are being requested, define the access privileges, and secure communication against CSRF attacks. #### Step 3. Show Login Form The OpenID Provider displays a login form to the user. This form prompts the user to enter their login credentials. #### Step 4. Form Submission The user enters their login credentials into the form, including their username and password, and submits it. The OpenID Provider processes these credentials to authenticate the user. #### Step 5. 
Authorization Code Response Upon successful authentication, the OpenID Provider redirects the user's browser back to the specified `redirect_uri` with an `authorization_code` included in the URL. The `authorization_code` is a short-lived, one-time code (e.g., `authorization_code=SplxlOBeZQQYbYS6WxSbIA`) that the client exchanges for an access token. This ensures secure token issuance. This code is used by the client to request an access token from the OpenID Provider. > IMPORTANT > > After this step, once the `authorization_code` is received in the browser, several types of attacks can occur. #### Step 6. Asynchronous Access Token Request The client application (Relying Party) sends an asynchronous request to the OpenID Provider's token endpoint to exchange the authorization code for an access token. This request includes the `authorization_code`, `client_id`, `client_secret`, and `redirect_uri`. - **authorization_code**: The code received in the redirect response. Example: `authorization_code=SplxlOBeZQQYbYS6WxSbIA`. - **client_id**: The same client identifier used in the initial request. Example: `client_id=myapp-12345`. - **client_secret**: A secret known only to the client and the OpenID Provider, used to authenticate the client. Example: `client_secret=shhh-its-a-secret`. - **redirect_uri**: The same URL used in the initial request to ensure consistency. Example: `redirect_uri=https://myapp.com/callback`. #### Step 7. Access Token Response The OpenID Provider validates the request and, if successful, returns an `access_token` and a `refresh_token` to the client application. - **access_token**: A credential used to access protected resources. Example: `access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9`. - **refresh_token**: Used to obtain a new access token without requiring the user to re-authenticate. Example: `refresh_token=def502b0b22b3d8f67d1b7c2c1b5a6b2c7d1f7f8c8b9c4e2`. #### Step 8. Async User Info Request The client application sends an asynchronous request to the OpenID Provider's user info endpoint to retrieve the user's profile information, using the `access_token`. #### Step 9. User Info Response The OpenID Provider responds with the user's claims (e.g., name, email). #### Step 10. Response from Protected Resource The client application receives the protected resource data in response to the `access_token` and `user_claims` request. #### Step 11. Protected Resource Request The client application sends a request to the Resource Server using the `access_token` to obtain protected data. #### Step 12. Protected Resource Response The Resource Server verifies the access token and, if valid, responds to the client's request, by providing access to the protected data. This information is sent back to the client application for use by the user. ### Summary of OpenID Connect Authorization Code Flow with Token-Mediating Backend The OpenID Connect Authorization Code Flow with Token-Mediating Backend involves a multi-step process to ensure secure authentication and authorization. It includes redirecting the user to the OpenID Provider for authentication, the user submits their login credentials, the OpenID Provider returns an authorization code in the redirect response, and then the application securely exchanges this code for an access token. This flow enhances security by ensuring that the authorization code can only be used by the client that requested it, significantly reducing the risk of token interception. 
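To make the backend's code-for-token exchange (Steps 6 and 7 above) concrete, here is a minimal hypothetical Python sketch of the request a confidential backend client might send to the token endpoint. The endpoint URL and credential values are placeholders, and a production Relying Party would normally rely on a certified OIDC client library rather than hand-rolled HTTP calls.

```python
# Hypothetical sketch of Step 6: exchanging the authorization code for tokens
# on the backend. All URLs and credentials are placeholders.
import requests


def exchange_code_for_tokens(authorization_code: str) -> dict:
    response = requests.post(
        'https://op.example.com/token',  # OpenID Provider token endpoint
        data={
            'grant_type': 'authorization_code',
            'code': authorization_code,
            'client_id': 'myapp-12345',
            'client_secret': 'shhh-its-a-secret',  # kept only on the backend
            'redirect_uri': 'https://myapp.com/callback',
        },
        timeout=10,
    )
    response.raise_for_status()
    # Typically contains access_token, refresh_token, id_token, expires_in, ...
    return response.json()
```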
Using a backend Relying Party (RP) helps address several vulnerabilities: - **Acquisition and Extraction of New Tokens**: By handling token requests on the server side, a backend RP prevents malicious JavaScript from initiating silent requests to the OpenID Provider and obtaining new access tokens. - **Token Transmission via URL**: When tokens are managed by a backend RP, they are not exposed in the URL, browser history, logs, or HTTP referrers, reducing the risk of interception and unauthorized use. - **Lack of Refresh Token Support**: backend RPs can securely store and use refresh tokens, allowing for long-term session maintenance without frequent token requests, thus minimizing the chances of token leakage and misuse. However, even with this enhanced flow, some vulnerabilities remain: - **Code Interception Attack**: An attacker can intercept the authorization code that is exposed in the URL during the redirect, resulting in unauthorized access to the user's resources. - **Persistent Token Theft**: Tokens are present in the browser, and fresh tokens can be stolen continuously, for example, every 10 seconds. - **Proxying Requests via the User's Browser**: Malicious JavaScript can "click buttons in the application" on behalf of the user, sending requests to the Protected Resource in the user's name. To address some of these security concerns, using the OpenID Connect Authorization Code Flow with PKCE in a backend implementation can be effective. In the next section, we will examine this diagram and explore how PKCE prevents code interception attacks effectively, providing a more secure mechanism for token handling and transmission. ## Authorization Code Flow with Token-Mediating Backend and PKCE ![OpenId Connect Authorization Code Flow with Token-Mediating Backend and PKCE](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nnnxd0zmh399c2k7z7cf.png) ### Overview - In OpenID Connect, the **End-User** is the person using an application to access services. These applications can include mobile apps, web apps, desktop apps, IoT devices, gaming applications, and smartwatches. The End-User interacts with these applications to access various services and resources. - The **Relying Party (RP)** is the backend component of the application that the End-User interacts with. The RP depends on the OpenID Provider for user authentication. It requests tokens from the OpenID Provider to authenticate users and gain access to protected resources. In our scheme, the backend functions as the RP, adding an extra layer of security in handling authentication and tokens. - The **OpenID Provider (OP)** is an authorization server responsible for verifying the identity of the End-User. It authenticates the user and issues identity and access tokens to the Relying Party. For instance, in the diagram, the Abblix OIDC Server, a certified OpenID Foundation library presented by the company [Abblix](https://www.abblix.com/abblix-oidc-server) with open source code (available on [GitHub](https://github.com/Abblix/Oidc.Server)), serves as the OpenID Provider. This server ensures that users are who they claim to be and provides the necessary tokens to the applications that need to interact securely with protected resources. - The **Resource Server** is where the protected information is stored and managed. It hosts sensitive data like personal details, financial information, and private documents. 
In OpenID Connect, the Resource Server uses the tokens issued by the OpenID Provider to decide whether to grant or deny access to this information. The tokens verify the user's identity and permissions, ensuring that only authorized users can access the sensitive data. - The **Proof Key for Code Exchange (PKCE)** is designed to prevent authorization code interception and injection attacks. It ensures that only the client that requested the authorization code can use it. PKCE allows public clients, such as mobile and single-page applications, to use the Authorization Code Flow securely. ### Step-by-Step Process #### Step 1. User Requests a Protected Resource The user requests a protected resource, prompting the application to check the authentication status. Seeing that the user is not logged in, the application redirects the user's browser to the OpenID Provider to initiate the authentication process. #### Step 2. Redirect to Authentication The Relying Party (RP) initiates a redirect of the user's browser to the OpenID Provider for authentication, using parameters such as `client_id`, `redirect_uri`, `response_type=code`, `scope`, `state`, and, specifically for PKCE, `code_challenge` and `code_challenge_method`. Before making the request, the client generates a high-entropy cryptographic random string known as the `code_verifier`. The `code_challenge` is then derived by hashing the `code_verifier` using the specified `code_challenge_method`. This `code_challenge` is sent with the initial authorization request, ensuring that the `code_verifier` is used later to prove that the true originator of the authorization request is asking for a token in exchange for the authorization code. - **client_id**: This identifies the client application making the request. For example, `client_id=myapp-12345`. - **redirect_uri**: This specifies the URL to which the OpenID Provider should redirect the user after authentication. For example, `redirect_uri=https://myapp.com/callback`. - **response_type=code**: This indicates that the client expects an authorization code in the response. - **scope**: This defines the access privileges requested, such as access to the user's profile information. For example, `scope=openid profile email`. - **state**: This parameter helps prevent CSRF attacks by maintaining state between the request and the callback, ensuring the response matches the original request made by the client. For example, `state=abc123`. - **code_challenge**: This is a derived value used to ensure that the token exchange is secure. Example: `code_challenge=abc123def456acd789`. - **code_challenge_method**: This specifies the method used to generate the code challenge. For example, `code_challenge_method=S256`. Here are examples of what the parameters might look like in a real request: - `client_id=myapp-12345` - `redirect_uri=https://myapp.com/callback` - `response_type=code` - `scope=openid profile email` - `state=abc123` - `code_challenge=abc123def456acd789` - `code_challenge_method=S256` By including these parameters, the client application ensures that the OpenID Provider can correctly identify the client, know where to send the response, understand what types of tokens are being requested, define the access privileges, and secure communication against CSRF attacks. #### Step 3. Show Login Form The OpenID Provider displays a login form to the user. This form prompts the user to enter their login credentials. #### Step 4. 
Form Submission The user enters their login credentials into the form, including their username and password, and submits it. The OpenID Provider processes these credentials to authenticate the user. #### Step 5. Authorization Code Response Upon successful authentication, the OpenID Provider redirects the user's browser back to the specified `redirect_uri` with an authorization code included in the URL. The `authorization_code` is a short-lived, one-time code (e.g., `authorization_code=SplxlOBeZQQYbYS6WxSbIA`) that the client exchanges for an access token. This ensures secure token issuance. This code is used by the client to request an access token from the OpenID Provider. #### Step 6. Asynchronous Access Token Request The client application (Relying Party) sends an asynchronous request to the OpenID Provider's token endpoint to exchange the authorization code for an access token. This request includes the `authorization_code`, `client_id`, `client_secret`, `redirect_uri`, and the original value of `code_verifier`. - **authorization_code**: The code received in the redirect response. Example: `authorization_code=SplxlOBeZQQYbYS6WxSbIA`. - **client_id**: The same client identifier used in the initial request. Example: `client_id=myapp-12345`. - **client_secret**: A secret known only to the client and the OpenID Provider, used to authenticate the client. Example: `client_secret=shhh-its-a-secret`. - **redirect_uri**: The same URL used in the initial request to ensure consistency. Example: `redirect_uri=https://myapp.com/callback`. - **code_verifier**: The original random string that was used to generate the code challenge. Example: `code_verifier=dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk`. #### Step 7. Access Token Response The OpenID Provider validates the request and, if successful, returns an `access_token` and a `refresh_token` to the client application. - **access_token**: A credential used to access protected resources. Example: `access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9`. - **refresh_token**: Used to obtain a new access token without requiring the user to re-authenticate. Example: `refresh_token=def502b0b22b3d8f67d1b7c2c1b5a6b2c7d1f7f8c8b9c4e2`. #### Step 8. Async User Info Request The client application sends an asynchronous request to the OpenID Provider's user info endpoint to retrieve the user's profile information, using the `access_token`. #### Step 9. User Info Response The OpenID Provider responds with the user's claims (e.g., name, email). #### Step 10. Response from Protected Resource The client application receives the protected resource data in response to the `access_token` and `user_claims` request. > IMPORTANT > > After this step, once the `access_token` is received in the browser, several types of attacks can occur. #### Step 11. Protected Resource Request The client application sends a request to the Resource Server using the `access_token` to obtain protected data. #### Step 12. Protected Resource Response The Resource Server verifies the access token and, if valid, responds to the client's request, by providing access to the protected data. This information is sent back to the client application for use by the user. ### Summary of OpenID Connect Authorization Code Flow with Token-Mediating Backend and PKCE The OpenID Connect Authorization Code Flow with PKCE in a backend implementation involves a multi-step process to ensure secure authentication and authorization. 
It includes redirecting the user to the OpenID Provider for authentication, the user submits their login credentials, the OpenID Provider returns an authorization code in the redirect response, and then the application securely exchanges this code for an access token. By ensuring that only the client that requested the authorization code can use it, this flow significantly reduces the risk of token interception. Using a backend Relying Party (RP) helps address several vulnerabilities: - **Acquisition and Extraction of New Tokens**: By handling token requests on the server side, a backend RP prevents malicious JavaScript from initiating silent requests to the OpenID Provider and obtaining new access tokens. - **Token Transmission via URL**: When tokens are managed by a backend RP, they are not exposed in the URL, browser history, logs, or HTTP referrers, reducing the risk of interception and unauthorized use. - **Lack of Refresh Token Support**: backend RPs can securely store and use refresh tokens, allowing for long-term session maintenance without frequent token requests, thus minimizing the chances of token leakage and misuse. Using PKCE (Proof Key for Code Exchange) further enhances security by addressing the **Code Interception Attack** vulnerability. PKCE ensures that the authorization code can only be exchanged for tokens by the client that requested it, preventing attackers from intercepting the code during the redirect. However, even with these enhancements, some vulnerabilities remain: - **Persistent Token Theft**: Tokens are present in the browser, and fresh tokens can be stolen continuously, for example, every 10 seconds. - **Proxying Requests via the User's Browser**: Malicious JavaScript can "click buttons in the application" on behalf of the user, sending requests to the Protected Resource in the user's name. Using the Backend for Frontend (BFF) pattern can further enhance security by addressing the vulnerability of access token interception. The BFF pattern involves creating a backend service that serves as an intermediary between the end-user's client application and the authentication provider. By handling access tokens on the server side, the BFF pattern prevents the access token from being exposed to the end-user's device. This approach mitigates the risk of access token interception by malicious JavaScript, ensuring that tokens are never directly accessible from the user's browser, thereby enhancing overall security. ## Authorization Code Flow with PKCE and BFF ![OpenId Connect Authorization Code Flow with PKCE and BFF](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gen8ej3o94ip2cbyrmhe.png) ### Overview - In OpenID Connect, the **End-User** is the person using an application to access services. These applications can include mobile apps, web apps, desktop apps, IoT devices, gaming applications, and smartwatches. The End-User interacts with these applications to access various services and resources. - The **Relying Party (RP)** in this case is a backend component that uses the Backend for Frontend (BFF) pattern. The BFF acts as an intermediary between the client application and the OpenID Provider. It handles authentication and token management, adding an extra layer of security. - The **OpenID Provider (OP)** is an authorization server responsible for verifying the identity of the End-User. It authenticates the user and issues identity and access tokens to the Relying Party. 
For instance, in the diagram, the Abblix OIDC Server, a certified OpenID Foundation library presented by the company [Abblix](https://www.abblix.com/abblix-oidc-server) with open source code (available on [GitHub](https://github.com/Abblix/Oidc.Server)), serves as the OpenID Provider. This server ensures that users are who they claim to be and provides the necessary tokens to the applications that need to interact securely with protected resources. - The **Resource Server** is where the protected information is stored and managed. It hosts sensitive data like personal details, financial information, and private documents. In OpenID Connect, the Resource Server uses the tokens issued by the OpenID Provider to decide whether to grant or deny access to this information. The tokens verify the user's identity and permissions, ensuring that only authorized users can access the sensitive data. - The **Proof Key for Code Exchange (PKCE)** is designed to prevent authorization code interception and injection attacks. It ensures that the authorization code can only be used by the client that requested it. PKCE allows public clients, such as mobile and single-page applications, to use the Authorization Code Flow securely. - The **Backend for Frontend (BFF)** pattern involves creating a backend service that serves as an intermediary between the client application and the authentication provider. The BFF handles authentication and token management on the server side, preventing the access token from being exposed to the end-user. This approach mitigates the risk of access token interception by malicious JavaScript, ensuring that tokens are never directly accessible from the user's browser, thereby significantly enhancing overall security. ### Step-by-Step Process #### Step 1. User Requests a Protected Resource The user requests a protected resource, prompting the application to check the authentication status. Seeing that the user is not logged in, the application redirects the user's browser to the OpenID Provider to initiate the authentication process. #### Step 2. Redirect to Authentication The Relying Party (RP) initiates a redirect of the user's browser to the OpenID Provider for authentication, using parameters such as `client_id`, `redirect_uri`, `response_type=code`, `scope`, `state`, and, specifically for PKCE, `code_challenge` and `code_challenge_method`. Before making the request, the client generates a high-entropy cryptographic random string known as the `code_verifier`. The `code_challenge` is then derived by hashing the `code_verifier` using the specified `code_challenge_method`. This `code_challenge` is sent with the initial authorization request, ensuring that the `code_verifier` is used later to prove that the true originator of the authorization request is asking for a token in exchange for the authorization code. - **client_id**: This identifies the client application making the request. For example, `client_id=myapp-12345`. - **redirect_uri**: This specifies the URL to which the OpenID Provider should redirect the user after authentication. For example, `redirect_uri=https://myapp.com/callback`. - **response_type=code**: This indicates that the client expects an authorization code in the response. - **scope**: This defines the access privileges requested, such as access to the user's profile information. For example, `scope=openid profile email`. 
- **state**: This parameter helps prevent CSRF attacks by maintaining state between the request and the callback, ensuring the response matches the original request made by the client. For example, `state=abc123`. - **code_challenge**: This is a derived value used to ensure that the token exchange is secure. Example: `code_challenge=abc123def456acd789`. - **code_challenge_method**: This specifies the method used to generate the code challenge. For example, `code_challenge_method=S256`. Here are examples of what the parameters might look like in a real request: - `client_id=myapp-12345` - `redirect_uri=https://myapp.com/callback` - `response_type=code` - `scope=openid profile email` - `state=abc123` - `code_challenge=abc123def456acd789` - `code_challenge_method=S256` By including these parameters, the client application ensures that the OpenID Provider can correctly identify the client, know where to send the response, understand what types of tokens are being requested, define the access privileges, and secure communication against CSRF attacks. #### Step 3. Show Login Form The OpenID Provider displays a login form to the user. This form prompts the user to enter their login credentials. #### Step 4. Form Submission The user enters their login credentials into the form, including their username and password, and submits it. The OpenID Provider processes these credentials to authenticate the user. #### Step 5. Authorization Code Response Upon successful authentication, the OpenID Provider redirects the user's browser back to the specified `redirect_uri` with an authorization code included in the URL. The `authorization_code` is a short-lived, one-time code (e.g., `authorization_code=SplxlOBeZQQYbYS6WxSbIA`) that the client exchanges for an access token. This ensures secure token issuance. This code is used by the client to request an access token from the OpenID Provider. #### Step 6. Asynchronous Access Token Request The client application (Relying Party) sends an asynchronous request to the OpenID Provider's token endpoint to exchange the authorization code for an access token. This request includes the `authorization_code`, `client_id`, `client_secret`, `redirect_uri`, and, specifically for PKCE, the original value of `code_verifier`. - **authorization_code**: The code received in the redirect response. Example: `authorization_code=SplxlOBeZQQYbYS6WxSbIA`. - **client_id**: The same client identifier used in the initial request. Example: `client_id=myapp-12345`. - **redirect_uri**: The same URL used in the initial request to ensure consistency. Example: `redirect_uri=https://myapp.com/callback`. - **code_verifier**: The original random string that was used to generate the code challenge. Example: `code_verifier=dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk`. #### Step 7. Access Token Response The OpenID Provider validates the request and, if successful, returns an `access_token` and a `refresh_token` to the client application. At this stage, the BFF stores the `access_token`. - **access_token**: A credential used to access protected resources. Example: `access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9`. - **refresh_token**: Used to obtain a new access token without requiring the user to re-authenticate. Example: `refresh_token=def502b0b22b3d8f67d1b7c2c1b5a6b2c7d1f7f8c8b9c4e2`. #### Step 8. Async User Info Request The client application sends an asynchronous request to the OpenID Provider's user info endpoint to retrieve the user's profile information, using the `access_token`. #### Step 9. 
User Info Response The OpenID Provider responds with the user's claims (e.g., name, email). #### Step 10. Response from Protected Resource The client application receives the protected resource data in response to the `user_claims` request. #### Step 11. Protected Resource Request The client application sends a request to the BFF, which then uses the `access_token` to make a request to the Resource Server to obtain protected data. #### Step 12. Protected Resource Response The Resource Server verifies the access token and, if valid, responds to the BFF's request by providing access to the protected data. This data is sent back to the client application via the BFF. ### Summary of OpenID Connect Authorization Code Flow with PKCE and BFF The OpenID Connect Authorization Code Flow with PKCE and BFF involves a multi-step process to ensure secure authentication and authorization. It includes redirecting the user to the OpenID Provider for authentication, having the user submit their login credentials, receiving an authorization code in the redirect response, and then securely exchanging this code for an access token. By ensuring that the authorization code can only be used by the client that requested it, this flow significantly reduces the risk of token interception. Using a backend Relying Party (RP) and the BFF pattern helps address several vulnerabilities: - **Acquisition and Extraction of New Tokens**: By handling token requests on the server side, a backend RP prevents malicious JavaScript from initiating silent requests to the OpenID Provider and obtaining new access tokens. - **Token Transmission via URL**: When tokens are managed by a backend RP, they are not exposed in the URL, browser history, logs, or HTTP referrers, reducing the risk of interception and unauthorized use. - **Lack of Refresh Token Support**: Backend RPs can securely store and use refresh tokens, allowing for long-term session maintenance without frequent token requests, thus minimizing the chances of token leakage and misuse. Using Proof Key for Code Exchange (PKCE) further enhances security by addressing the **Code Interception Attack** vulnerability. PKCE ensures that the authorization code can only be exchanged for tokens by the client that requested it, preventing attackers from intercepting the code during the redirect. The BFF pattern adds another layer of security by preventing access tokens from being exposed to the end-user's device. This approach mitigates the risk of **Token Theft**, ensuring that tokens are never directly accessible from the user's browser. However, even with these enhancements, one known vulnerability remains: - **Proxying Requests via the User's Browser**: Malicious JavaScript can "click buttons in the application" on behalf of the user, sending requests to the Protected Resource in the user's name. A web application running in a browser actually executes in an environment that is not controlled by the developer, making it impossible to ensure absolute security. This is why eliminating tokens from the browser is a trending direction for improving web application security. To mitigate risks associated with this vulnerability, users should install only verified browser extensions, keep browsers updated to the latest version, and use antivirus software to enhance overall security. As of 2024, this authentication scheme is the most secure and recommended for use, providing robust protection for accessing protected resources. 
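To make the PKCE part of this flow concrete, below is a minimal Node.js/TypeScript sketch of how a backend Relying Party might generate the `code_verifier`, derive the S256 `code_challenge`, and later exchange the authorization code for tokens (Steps 2 and 6 above). This is an illustration only, not the Abblix implementation: the function names and endpoint wiring are assumptions, and a confidential client would typically also send a `client_secret` to the token endpoint.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Step 2 (PKCE): generate a high-entropy code_verifier and derive the
// code_challenge from it using SHA-256 + base64url (the "S256" method).
function generateCodeVerifier(): string {
  return randomBytes(32).toString("base64url");
}

function deriveCodeChallenge(codeVerifier: string): string {
  return createHash("sha256").update(codeVerifier).digest("base64url");
}

// Step 6: the backend RP exchanges the authorization code for tokens,
// sending the original code_verifier so the OpenID Provider can confirm
// that the client completing the flow is the one that started it.
async function exchangeCodeForTokens(
  tokenEndpoint: string, // the OpenID Provider's token endpoint (assumed URL)
  authorizationCode: string,
  codeVerifier: string,
  clientId: string,
  redirectUri: string
) {
  const response = await fetch(tokenEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code: authorizationCode,
      client_id: clientId,
      redirect_uri: redirectUri,
      code_verifier: codeVerifier,
    }),
  });
  // The JSON body contains access_token, refresh_token, id_token, etc.
  return response.json();
}
```

Because both helpers run on the server, neither the `code_verifier` nor the returned tokens ever reach the user's browser, which is exactly the property the BFF pattern relies on.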
## Conclusion We explored various flows, starting from the Implicit Flow in a frontend-only implementation, which, while simpler, is fraught with vulnerabilities. Moving to more secure methods, the Authorization Code Flow with PKCE, both in frontend-only and backend-involving implementations, significantly reduces risks associated with token interception and theft. The combination of Authorization Code Flow with PKCE and the Backend for Frontend (BFF) pattern represents the most secure approach available as of 2024. This setup not only prevents tokens from being exposed to the user's browser but also mitigates several common attack vectors like code interception, token theft, and unauthorized token acquisition. However, it is still essential to be vigilant against browser-based attacks, such as those involving malicious JavaScript. Visit the official [Abblix](https://abblix.com) site and join us in improving our server library, Abblix OIDC Server, on our GitHub page. {% github Abblix/Oidc.Server %}
alexey-abblix
1,880,140
Can a line chart be segmented?
Title Can a line chart be segmented? Description In vchart, is it possible to...
0
2024-06-07T08:53:48
https://dev.to/neuqzxy/can-a-line-chart-be-segmented-1l5k
# Title Can a line chart be segmented? # Description In vchart, is it possible to segment a line chart if there is a point that you do not want to display? ![](https://bytedance.larkoffice.com/space/api/box/stream/download/asynccode/?code=ZGZiNDcyNTk5ZWVmYjc1MDY5NGNhNDNmZTJjNTM1NjVfZjRUcTFpeGtic1hWS3pQRlZ4VkJUTzNXQjdNMWJkSHBfVG9rZW46RGZRRGJQTU1xbzZ6UjV4cHdCMWNjZWRYbnZoXzE3MTc3NTAzNjc6MTcxNzc1Mzk2N19WNA) # Solution If a point's value is invalid, VChart will automatically hide that point. You can set the value of that point to null in the data to achieve the same effect. For example, the point corresponding to 10:00 will not be displayed if its value is set to null. ```TypeScript data: { values: [ { time: '2:00', value: 8 }, { time: '4:00', value: 9 }, { time: '6:00', value: 11 }, { time: '8:00', value: 14 }, { time: '10:00', value: null }, { time: '12:00', value: 17 }, { time: '14:00', value: 17 }, { time: '16:00', value: 16 }, { time: '18:00', value: 15 } ] } ``` # Code Example ```TypeScript const spec = { type: 'line', data: { values: [ { time: '2:00', value: 8 }, { time: '4:00', value: 9 }, { time: '6:00', value: 11 }, { time: '8:00', value: 14 }, { time: '10:00', value: null }, { time: '12:00', value: 17 }, { time: '14:00', value: 17 }, { time: '16:00', value: 16 }, { time: '18:00', value: 15 } ] }, xField: 'time', yField: 'value' }; const vchart = new VChart(spec, { dom: CONTAINER_ID }); vchart.renderSync(); // Just for the convenience of console debugging, DO NOT COPY! window['vchart'] = vchart; ``` # Result After running the code, the point corresponding to 10:00 will not be displayed, and the line chart will be segmented. ![](https://bytedance.larkoffice.com/space/api/box/stream/download/asynccode/?code=MGE2YjhkNzFmZWU4NjIwNmFjYzc0MzVlNGNlNDYyNTBfdUk2a0RFOGhkRVp3RzJPbk5Pbmh4QUFkNzZtR2V3dHBfVG9rZW46RVVsU2JJVkNRbzJ3eTJ4VmxZVGNlZ3NKbk1oXzE3MTc3NTAzNjc6MTcxNzc1Mzk2N19WNA) Online demo: https://codesandbox.io/p/sandbox/line-point-split-fq7wkh?file=%2Fsrc%2Findex.ts%3A49%2C2 # Related Documents - VChart official website: https://visactor.io/vchart/ - VChart GitHub: [GitHub - VisActor/VChart: VChart, more than just a cross-platform charting library, but also an expressive data storyteller.](https://github.com/VisActor/VChart)
neuqzxy
1,884,452
Create an App Like Nextdoor: The Ultimate Guide
Creating an app like Nextdoor involves thorough planning, market research, and collaboration with...
0
2024-06-11T13:06:48
https://dev.to/keliaewart/create-an-app-like-nextdoor-the-ultimate-guide-3bea
appdevelopment, mobileappdevelopment, hireappdevelopers, techtalks
Creating an app like Nextdoor involves thorough planning, market research, and collaboration with skilled developers. From defining your app's purpose to designing and developing key features, each step requires attention to detail. By following this ultimate guide, you can create a successful neighborhood social networking app that fosters community engagement and connection. ## Introduction Building an app to engage with the neighborhood is a brilliant business concept. In addition to fostering a sense of community, interacting and communicating with locals offers the greatest opportunity for neighbors to assist one another. Consider developing an app similar to Nextdoor if you're looking for app ideas. Now, you need to be aware of the app and the steps involved in developing a successful community-building app if you intend to develop an app similar to Nextdoor. So, without further delay, let's learn how to create an app similar to Nextdoor. ## What is the Nextdoor App? Neighbors may interact, exchange helpful information, and keep track of upcoming activities on Nextdoor - the Who Are My Neighbors app. New users must confirm their residency by either receiving a phone call or a letter in the mail. This ensures that everyone in the community is genuine and truly who they claim to be. Nextdoor is widely used across the globe, including in the US, the Netherlands, the UK, Australia, and France. In the US alone, approximately one out of every five households actively engages with the platform. A key feature of Nextdoor is its focus on local neighborhoods, down to specific postal codes. Unlike other social networks that aim for a broad audience, Nextdoor prioritizes connections within your immediate area. Creating an app like Nextdoor could be a great idea for businesses. You can contribute to your community while also establishing a prosperous business venture. The Nextdoor app has 40.4 million weekly active users (WAU) a 6% increase from the previous year. $56 million in revenue was 4% more than the previous year. ## Creating the Best Neighborhood Watch App Like the free Nextdoor app, you can build an efficient app by following this simple process of app development: 1) Market Research You need to analyze the market before creating an app. Analyze the market needs and market size for the app you are creating. Identify your target audience and prepare a better strategy. For creating a strategy, you also need to do competitor research. See how many industries are targeting the same niche and prepare the best strategy that stands you apart from other app development companies. 2) The Planning You need to clearly outline your app’s purpose and features. Decide if it will focus on neighborhood communication, event organization, or a mix of services. You must choose a monetization strategy that fits your app's purpose and audience. Consider options like freemium models, offering basic features for free with premium options for purchase, or in-app advertising. Alternatively, explore premium subscription models providing advanced features for a subscription fee. Tailoring your business model to suit your app's unique value proposition ensures sustainability and profitability in the competitive market landscape. 3) UI/UX Designing Creating eye-catching design and a user-friendly interface is important for a convenient experience. You need to find the best designers who can provide a perfect interface that suits your app. 
You can draw inspiration for your app's design by referring to top-performing apps in your field. Since we're developing an app similar to Nextdoor, you can look to Nextdoor itself for design inspiration. 4) App Development * iOS Development Consider working with an iOS app development company that has experience building social networking platforms to create an app that is comparable to Nextdoor for iOS. **[Hire iOS developers](https://www.hyperlinkinfosystem.com/hire-ios-developers)** with expertise in Swift and Objective-C for the development of social apps. It will guarantee the smooth integration of features unique to a neighborhood-focused app. These developers will strive to ensure compatibility and seamless operation across iOS devices by crafting an intuitive user experience for iPhone and iPad owners. * Android Development For Android development, seek to partner with a reputable Android app development company. With expertise in Java and Kotlin programming languages, these developers can build an app akin to Nextdoor that caters to the diverse Android user base. **[Hire Android app developers](https://www.hyperlinkinfosystem.com/hire-android-app-developer.htm)** or collaborate with specialists in Android development so you can ensure the successful implementation of key features and optimizations for performance and usability on various Android devices. 5) Features to Consider Here are some necessary features for local neighborhood mobile app development: * Community News and Updates Stay informed about local news and events happening in your neighborhood through a dedicated news feed. * Recommendations and Classifieds Access recommendations for various services like maintenance workers and childcare providers. Explore classified ads for buying and selling items within the community. * Events and Activities Discover nearby events, including yard sales and parties, and even organize your own events with the help of the app. * Business Reviews and Suggestions To help you decide where to go, read objective reviews of nearby establishments like cafes and restaurants. * Crime Alert and Safety Get safety advice and alerts in the event of an emergency or natural disaster. Work together with your neighbors to set up crime reports and neighborhood watch initiatives. * Help Maps and Groups Utilize features like Help Maps to coordinate assistance during crises, such as the COVID-19 outbreak. Join specialized groups to connect with neighbors sharing similar interests or needs. 6) QA and Testing Quality matters, and for that you need testing and quality assurance for your app. The testing team checks all the features and ensures they work properly. Testing uncovers bugs and errors so they can be resolved, improving the accuracy of the app. You can launch a flawless app with powerful testing. Thorough testing also aids in locating any performance snags or usability problems, guaranteeing a seamless and simple user experience. You may boost user happiness and favorable ratings by thoroughly testing your app to establish trust in its functioning and dependability. 7) Maintenance To keep your mobile app working effectively, it's crucial to **[hire dedicated developers](https://www.hyperlinkinfosystem.com/hire-dedicated-developers-india.htm)** to keep your app proficient. By engaging skilled specialists, you ensure that your app receives necessary upgrades and enhancements. With a competent team, you can address any bugs promptly and introduce new features to improve user experience. 
Hiring mobile app developers familiar with your app's programming languages and platforms ensures that it remains competitive and aligned with the latest technologies and user expectations. ## Conclusion Building an app like Nextdoor requires meticulous planning, market research, and collaboration with experienced app developers. **[Hire app developers](https://www.hyperlinkinfosystem.com/app-developers-for-hire.htm)** to ensure the successful implementation of key features and optimizations for a seamless user experience.
keliaewart
1,887,907
How to Hire an SEO Agency: A Comprehensive Guide
In today's digital age, having a strong online presence is crucial for any business. Search Engine...
0
2024-06-14T04:00:26
https://dev.to/tdigitalguru/how-to-hire-an-seo-agency-a-comprehensive-guide-1om
programming, tutorial, beginners, seoagency
In today's digital age, having a strong online presence is crucial for any business. Search Engine Optimization (SEO) plays a vital role in ensuring your website ranks high on search engines, driving organic traffic and potential customers to your site. Hiring an SEO agency can be a game-changer for your business, but finding the right one requires careful consideration and research. This guide will walk you through the steps to hire the best SEO agency for your needs. ## 1. Understand Your SEO Needs Overview: Before you start looking for an SEO agency, it's essential to understand your specific SEO needs. Are you looking to improve your site's ranking, increase organic traffic, or optimize your local SEO? Having a clear goal will help you find an agency that specializes in your area of need. **Steps:** Identify your primary objectives (e.g., increase website traffic, improve search rankings, enhance local SEO) Assess your current SEO status using tools like Google Analytics, Google Search Console, and SEO audit tools Determine your budget for SEO services ## 2. Research Potential SEO Agencies Overview: Conduct thorough research to identify potential SEO agencies. Look for agencies with a proven track record, good reputation, and expertise in your industry. **Steps:** Search for SEO agencies online and read reviews on platforms like Google, Yelp, and Clutch Ask for recommendations from business associates or industry peers Review case studies and client testimonials on the agency's website ## 3. Evaluate Their SEO Techniques Overview: It's crucial to understand the SEO techniques and strategies an agency uses. Ethical, white-hat SEO practices are essential for long-term success and avoiding penalties from search engines. **Steps**: Ask the agency to explain their SEO strategy and techniques Ensure they follow white-hat SEO practices, such as quality content creation, ethical link building, and on-page optimization Avoid agencies that promise quick fixes or guaranteed rankings, as these may involve black-hat practices ## 4. Check Their Experience and Expertise Overview: An agency's experience and expertise in your industry can significantly impact the success of your SEO campaign. Look for agencies that have worked with businesses similar to yours. **Steps:** Inquire about the agency's experience in your industry or niche Review their portfolio and case studies to see examples of their work Ask for references and contact past clients to learn about their experiences ## 5. Assess Their Communication and Reporting Overview: Effective communication and transparent reporting are essential for a successful partnership with an SEO agency. You need to stay informed about the progress and results of your SEO campaign. **Steps:** Discuss the agency's communication process and frequency of updates Ensure they provide regular, detailed reports on key SEO metrics such as keyword rankings, organic traffic, and conversion rates Look for an agency that is responsive and willing to answer your questions ## 6. Understand Their Pricing and Contract Terms Overview: SEO services can vary significantly in cost. It's essential to understand the agency's pricing structure and contract terms to ensure they fit within your budget and expectations. **Steps:** Request a detailed quote and breakdown of their pricing structure Understand what services are included in the price and any additional costs Review the contract terms, including the length of the contract, termination clauses, and refund policies ## 7. 
Evaluate Their Success Metrics Overview: Understanding how an SEO agency measures success is crucial. Ensure their metrics align with your business goals and objectives. **Steps:** Ask the agency about the key performance indicators (KPIs) they track Ensure their success metrics include organic traffic, conversion rates, and return on investment (ROI) Discuss how they plan to achieve your specific SEO goals and measure success ## 8. Consider Their Customer Service Overview: Excellent customer service is a sign of a reliable SEO agency. Ensure the agency is dedicated to providing a positive client experience. **Steps:** Evaluate their responsiveness during the initial consultation Ask about their customer support process and availability Look for an agency that values client relationships and is committed to your success ## 9. Request a Proposal Overview: Once you've shortlisted a few agencies, request a detailed proposal outlining their approach to your SEO needs. **Steps:** Provide the agency with detailed information about your business and SEO goals Request a customized proposal that includes their strategy, timeline, and expected outcomes Compare proposals from different agencies to make an informed decision ## 10. Make an Informed Decision Overview: After evaluating all the factors, it's time to choose the SEO agency that best meets your needs and goals. **Steps:** Review all the information and proposals you've gathered Consider the agency's experience, expertise, communication, pricing, and customer service Make a decision based on which agency aligns best with your objectives and budget ## FAQs ## Q1: How long does it take to see results from SEO? A1: SEO is a long-term strategy, and it typically takes 3-6 months to see significant results. However, this can vary depending on your industry, competition, and the current state of your website's SEO. ## Q2: Can I do SEO myself? A2: While it's possible to do SEO yourself, it requires a significant amount of time, effort, and expertise. Hiring an experienced SEO agency can ensure better results and allow you to focus on other aspects of your business. ## Q3: What is the cost of hiring an SEO agency? A3: The cost of hiring an SEO agency can vary widely depending on the scope of work, the agency's experience, and your specific needs. It's essential to understand the pricing structure and what services are included. ## Q4: What should I avoid when hiring an SEO agency? A4: Avoid agencies that promise guaranteed rankings, use black-hat SEO practices, or offer extremely low prices. These can lead to penalties from search engines and harm your website's reputation. ## Conclusion Hiring the right SEO agency can significantly impact your online presence and business growth. By understanding your needs, researching potential agencies, and evaluating their techniques, experience, and communication, you can make an informed decision. Follow this comprehensive guide to find the best SEO agency that aligns with your goals and budget, ensuring long-term success for your business.
tdigitalguru1
1,889,238
Shift-Left Testing: A Testing Methodology to Maximize Efficiency
Software development is ever-growing, and agility and speed are paramount to ensure quick delivery...
0
2024-06-15T04:50:06
https://dev.to/elle_richard_232/shift-left-testing-a-testing-methodology-to-maximize-efficiency-39in
testing, software, programming, productivity
Software development is ever-growing, and agility and speed are paramount to ensure quick delivery time. In this fast-paced environment, continuous testing has emerged as a game changer. By integrating the testing cycle into the entire software delivery lifecycle, organizations can ensure that the applications meet the highest quality standards while keeping up the pace with the ever-increasing demand for rapid releases. However, as enterprises adopt continuous testing, they often encounter multiple challenges that can obstruct progress and hinder test effectiveness. In this article, we will walk through the challenges enterprises can face while adopting continuous testing. ### What is Continuous Testing? Continuous testing is a transformative practice that seamlessly integrates the testing activities throughout the software delivery process. It aligns with the principles of DevOps and Agile methodologies and helps organizations continuously validate the quality, functionality, and performance of their applications. This revolutionary shift has resulted in a departure from the traditional testing approaches, where testing was often treated as a separate and siloed phase, conducted toward the end of the software development cycle. Continuous testing, on the other hand, preaches a holistic and iterative approach, where testing is an integral part of the development cycle. ### Benefits of Continuous Testing Continuous testing helps organizations achieve numerous benefits that help enhance the software quality as well as streamline the software delivery process. Some of the advantages of adopting Continuous Testing are as follows: **Rapid Feedback Loops:** Continuous testing creates an environment of continuous feedback, where teams can quickly identify and resolve defects. This minimizes the risk of costly rework and ensures a smoother and more efficient delivery process. **Enhanced Quality and User Experience:** When the application is continuously validated for functionality, performance, and security, organizations can deliver high-quality software that meets user expectations, hence resulting in enhanced user satisfaction. **Accelerated Time-to-Market:** By automating and integrating testing activities in the development process, continuous testing reduces the time and effort required for manual testing, allowing enterprises to make quick releases and launch new features more frequently. **Increased Collaboration and Transparency:** Continuous testing builds a culture of collaboration and transparency among cross-functional teams. The test results and insights are shared in real-time, hence allowing teams to make proactive decisions, identify bottlenecks, and work collectively toward issue resolution. ### How to Implement Continuous Testing? To successfully implement continuous testing within an enterprise, a strategic approach is essential. Some of the key steps that can be taken are listed below: **Implementing a Test Automation Framework:** For successful continuous testing, it is essential to build a robust and scalable automation framework. This involves selecting the appropriate tools, frameworks, and technology stack that aligns with the organization's needs. **Adopting a Shift-Left Approach:** Embracing the [shift-left testing approach](https://testgrid.io/blog/what-is-shift-left-testing-and-why-is-it-important/) is essential, where testing activities start early in the development cycle. 
Testers are involved in the planning and requirement-gathering phases, enabling them to provide invaluable insights and ensure testability from the outset. **Leverage Test Environment Virtualization:** Ensuring consistent and reliable test environments is critical for accurate and repeatable test results. Implementing virtualization and containerization technologies can help in managing test environments efficiently, reducing set-up time and minimizing environmental inconsistencies. **Implementing Continuous Integration and Continuous Delivery (CI/CD) Pipelines:** Automated CI/CD pipelines are essential for continuous testing; they automate the build, test, and deployment processes, allowing rapid validation of changes and delivery of high-quality software at a faster pace. Continuous testing offers numerous benefits, but its implementation within enterprises comes with its own set of challenges. Some common obstacles and strategies to overcome them are discussed below. **Challenge 1: Scalability and Performance: Handling Large Testing Needs** Enterprise applications are often huge and complex, consisting of numerous components, services, and integrations. As organizations want to release new versions faster, the demand for comprehensive and efficient testing increases exponentially. The growing volume of tests can be handled by a robust and scalable testing infrastructure. Keeping a strong and flexible testing system that can handle the increasing number of tests while giving quick results becomes a challenging task. To overcome this hiccup, a strategic approach to test design, execution, and resource management is essential. Utilizing cloud-based testing, running multiple tests in parallel, and prioritizing important tests can help organizations scale their testing efforts while avoiding delays. **Challenge 2: Test Automation and Maintenance** Test automation is a critical part of the continuous testing process. It ensures rapid and reliable feedback loops throughout the software development process. However, creating and maintaining a robust test automation framework is not always easy, especially in large enterprises with legacy systems and different technologies. To achieve efficient test automation, a well-defined strategy that takes into account factors like test case design and code reusability is essential. Adopting industry-standard practices like Behavior Driven Development (BDD) and promoting a culture of collaboration between the development and testing teams can significantly enhance the quality of the test automation efforts. **Challenge 3: Test Data Management** A vast amount of test data is an important factor for efficient testing of large and complex enterprise applications. Ensuring the availability, quality, and security of this test data comes with its own set of challenges, particularly in regulated industries that have strict data privacy and compliance requirements. Implementing a robust test data management strategy is of utmost importance to address this challenge. Techniques like data masking, synthetic data generation, and virtualization of test environments should be implemented to ease test data management. Additionally, establishing clear data governance policies and automating data provisioning processes can simplify test data management along with mitigating the risk of data breaches or misuse. 
**Challenge 4: Integration and Collaboration** Enterprise IT landscapes are heterogeneous and are made up of different types of technologies, platforms, and tools. Integrating continuous testing practices across these diverse environments and facilitating seamless collaboration among cross-functional teams can be a difficult task. To overcome this challenge, it is important to focus on standardization, tool integration, and creating effective communication channels. It is also important to understand the need for proper documentation to ensure the continuous functioning of testing activities. A centralized test management solution, implementing continuous integration and delivery (CI/CD) pipelines, and promoting a culture of transparency and knowledge sharing can significantly enhance collaboration and streamline the testing process across an enterprise. **Challenge 5: Organizational Culture and Mindset** A cultural shift within the organization is often a mandate for the successful implementation of continuous testing. Transitioning from traditional testing methodologies to a more agile and integrated approach can be met with resistance, especially in well-established enterprises with deeply rooted practices. Strong leadership, comprehensive training programs, and a clear communication strategy can help address this challenge. Fostering a continuous improvement culture, empowering teams to learn and experiment, and celebrating successes can hugely accelerate the acceptance of continuous testing practices across the enterprise. **Tackling Continuous Testing Hurdles: A Holistic Approach** While many challenges are associated with continuous testing, they can be avoided with proper planning and strategy. Adopting a holistic approach that combines technical solutions, process optimization, and cultural transformation is the key to overcoming these challenges. Organizations should invest in building a robust and scalable testing infrastructure, along with utilizing cloud services, containerization, and other advanced testing tools to tackle complex testing challenges more effectively. With a mindset of collaboration, knowledge sharing, and continuous learning, enterprises can effectively employ continuous testing strategies for accelerated growth. Partnerships with experienced testing service providers and leveraging their expertise also play an important role in the adoption of continuous testing practices and mitigating the associated risks. By combining internal resources with external specialized knowledge, enterprises can achieve success in continuous testing and achieve higher levels of software quality and delivery velocity. ### Conclusion As enterprises embark on the continuous testing journey, they are bound to encounter challenges that can hinder their progress. From implementing scalable testing infrastructure to streamlining diverse technology landscapes, the path to continuous testing can be difficult. However, by adopting a holistic approach that combines technical solutions with process optimization and cultural transformation, organizations can sail through these hurdles. This not only enhances the software quality and speed of delivery but also promotes a culture of innovation, collaboration, and continuous improvement, the building blocks for long-term success in the rapidly evolving digital landscape. **Source:** _This blog was originally published on [Testgrid](https://testgrid.io/blog/overcoming-enterprise-testing-challenges/)._
elle_richard_232
1,890,711
JS: Preventing a Form's Default Action
const $form = document.querySelector('form'); $form.addEventListener("submit", (event) => { //...
0
2024-06-17T02:15:33
https://dev.to/sunj/js-form-dongjag-maggi-59bp
javascript
``` const $form = document.querySelector('form'); $form.addEventListener("submit", (event) => { // This method prevents the default action (event) from being executed. event.preventDefault(); console.log(event.target); }); ``` _Reference: https://gurtn.tistory.com/206_
sunj
1,891,320
Why do we have const enums in Typescript?
Typescript usually has two schools of thought—enums and unions—with strong supporters on both sides...
0
2024-06-17T13:33:22
https://www.allthingstypescript.dev/p/why-do-we-have-const-enums-in-typescript
webdev, javascript, typescript, tutorial
Typescript usually has two schools of thought—enums and unions—with strong supporters on both sides of the argument. In today's issue, I want to avoid that debate and focus solely on Enums, specifically a different, lesser-known version of enums: const enums. Before we go any further, let's briefly talk about what enums are: Enums allow developers to define a list of named constants representing distinct states within their application. For instance, if you wanted to have a switch button, the values can either be on or off, and an enum is perfect to represent this state, as shown below: ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27037c0d-c06e-438a-9894-09afdc299f83_2596x972.png) And you can use the above enum, just like any type in Typescript. ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa492c25d-76d4-4406-971d-1f169d99127b_2596x1380.png) To understand the difference between enums and const enums, we first need to understand their behavior after compilation, since before compilation their behavior is very similar. --- **If you want to learn Typescript with weekly lessons, please subscribe to my [newsletter](https://www.allthingstypescript.dev/).** --- In the book [Effective TypeScript by Dan Vanderkam](https://effectivetypescript.com/) (a strongly recommended read), the author says that Typescript has no impact on the Javascript runtime. This is because Types are removed during transpilation, leaving you with good old Javascript. You can learn more about this relationship in this [issue](https://www.allthingstypescript.dev/p/typescript-and-javascript-two-sides), which we covered a few weeks ago. Enums are one of the few features TypeScript has which is not a type-level extension of JavaScript—[Official TS Docs](https://www.typescriptlang.org/docs/handbook/enums.html). Of course, this isn't always the case, and Enums are one of those exceptions, with Enums getting converted into objects instead of being removed, as shown below: ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd196029c-9729-4c0e-8ace-00bd42c42e6b_3840x1356.png) So, when we pass an enum, the object that is produced during the transpilation process is referenced, as shown below. ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F873effa0-d6be-4b07-9b2e-68feebd58837_4584x1684.png) Another thing to note about enums is that the values of each element in an Enum are numbers - _incrementing from 0, from the top down, unless the first number is indicated_ - instead of the values we entered above, so in our case, `ON` would be `0` and `Off` would be `1`. Below is a case where the initial number is indicated. ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe576ab10-214d-4318-888b-34499462c1a1_3840x1356.png) In this case, On is `100` and Off is `101`. This isn't always the desired behavior, but it can be fixed by explicitly adding the value of each element in the enum, such as having the value be a string. 
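In code, a string enum looks roughly like the sketch below. This is my own minimal illustration based on the `Switch` example used in this issue, so the member names and string values are assumptions and may differ slightly from the screenshots:

```typescript
// A string enum: every member gets an explicit string value,
// so nothing is auto-incremented from 0.
enum Switch {
  On = "ON",
  Off = "OFF",
}

const state: Switch = Switch.On;
console.log(state); // "ON"

// After transpilation, a lookup object is still generated, roughly:
// var Switch;
// (function (Switch) {
//   Switch["On"] = "ON";
//   Switch["Off"] = "OFF";
// })(Switch || (Switch = {}));
```

One small difference from numeric enums: string enums do not get the reverse (value-to-name) mapping in the generated object.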
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fccfd5ecd-90ca-4daa-beb8-8a400b2210d9_3840x1356.png) String Enums Or even an expression, as shown below: ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b2d4089-87e3-4dfc-a57c-f404e7cd08af_2596x1380.png) Computed Enums --- **If you like my content and want to support my work, please consider supporting me (**_**you can buy me a double latte, the juice that powers my creativity and determination ☕️**_**) through [Github Sponsors](https://github.com/sponsors/mainawycliffe/).** [☕️ Consider buying me a cup of coffee](https://github.com/sponsors/mainawycliffe/) --- ### Const Enums This isn’t always desired, and that's where const enums come in. They are just like regular enums, but they are prefixed by the const keyword before their declaration. ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Faa486d3c-7fe9-4894-bf1e-6a0ee11c37e0_2596x972.png) The other important difference is that const enum values get inlined instead of producing an object, taking us back to zero impact on the JS runtime, as const enums do not generate objects after transpilation. ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff77637d1-4c90-4ff2-b540-6bcb14a05328_3840x2020.png) Generally speaking, regular enums are valid solutions, but in those cases where you don’t want to pay the penalty of extra code generated by using regular enums, const enums are a very good solution, as they don’t generate any code for their support. As you can see, no more `Switch` object is being created. Of course, just like with regular enums, we still have the same issue of enum element values being incremental numbers unless otherwise specified. ![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0717c12b-6555-4618-a803-7da73dea4c5d_3840x2020.png) ### Conclusion In this issue, we examined the difference between enums and const enums. Unlike regular enums, const enums don’t generate the javascript object; the code is inlined. This means that you are not paying the penalty for extra code generation that you get when using regular enums. While this is normally not a concern for most projects, in some cases, it can have a negative impact, leading to large JS code being produced and longer build / transpilation times. This can easily be solved by using const enums, which would be easier to refactor compared to alternative solutions such as unions.
mainawycliffe
1,893,762
What are High-Level Programming Languages? Really, what are they?
I was sitting on nails as the Hungarian says, meaning I couldn’t wait to write about High-level...
0
2024-06-19T15:10:38
https://dev.to/zoltan_fehervari_52b16d1d/what-are-high-level-programming-languages-really-what-are-they-1gng
highlevelprogramminglanguages, programming, programminglanguages
I was sitting on nails as the Hungarian says, meaning I couldn’t wait to write about High-level programming languages (HLPLs)! They are designed to be easily readable and understood by humans, providing a higher degree of abstraction from the underlying hardware. These languages simplify the programming process by using natural language constructs and syntax, allowing developers to focus on problem-solving rather than hardware-specific details. The development of high-level languages marked a significant breakthrough in software development, making programming more accessible and efficient. ## Let’s spoil the Main Points High-level programming languages emerged in the 1950s to address the complexities of programming in assembly language. These languages provided a more human-readable form of code, which greatly simplified the programming process. Key languages like FORTRAN and COBOL revolutionized how programmers wrote software, making it more accessible and less error-prone. ## Let’s spoil the Key Takeaways 1. Definition and History: HLPLs are designed for readability and abstraction from hardware. 2. Early Difficulties: Assembly language programming was complex, error-prone, and machine-dependent. 3. Impact on Development: HLPLs improved productivity, code readability, maintainability, and portability. 4. Differences from Low-Level Languages: HLPLs abstract hardware details, are more readable, and are portable across different systems. 5. Execution Methods: HLPLs can be executed via compilation, interpretation, JIT compilation, AOT compilation, or transpilation. 6. Examples of HLPLs: Popular languages include Python, JavaScript, Java, C#, Ruby, Swift, Kotlin, R, and TypeScript. 7. Very High-Level Programming Languages: 4GLs like SQL, MATLAB, and SAS offer even higher abstraction levels for specific tasks. ## What Were Some Early Difficulties That Led to the Development of High-Level Languages? Programming in assembly language posed significant challenges due to its complexity and lack of abstraction. Programmers had to manage memory addresses and machine instructions, which was error-prone and labor-intensive. Additionally, assembly programs were machine-dependent, limiting code reuse and portability. These difficulties highlighted the need for more efficient programming methods, leading to the creation of high-level languages. The Impact of High-Level Programming Languages on Software Development [High-level languages](https://bluebirdinternational.com/high-level-programming-languages/) transformed software development by allowing developers to focus on problem-solving and logic rather than hardware-specific details. This shift made programming more efficient, reduced errors, and accelerated the development process. As a result, there was a surge in creativity and innovation within the software development community. ## What Problems Did High-Level Programming Languages Solve? High-level programming languages improved: - Code Readability: Made code easier to understand and maintain. - Code Maintainability: Simplified debugging and updates. - Portability: Enabled code to run on different hardware platforms without modification. These improvements allowed developers to write software for a wide range of applications, from scientific computing to business data processing. ## What’s the Difference Between High-Level and Low-Level Programming Languages? 
- Low-Level Languages: Closer to hardware, require detailed management of memory and machine instructions, and offer high control but are difficult to read and maintain. - High-Level Languages: Abstract hardware details, are more readable, easier to write and maintain, and portable across different systems. ### Code Example: Python vs Assembly **High-Level Code: Python** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/obguno70uy8ckmaf5fv4.png) **Low-Level Code: Assembly** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0uzb3459fdnhl6wsfs58.png) ## Key Differences 1. Abstraction: HLPLs abstract memory and register management, while low-level languages require explicit management. 2. Readability: HLPLs are closer to natural language; low-level languages resemble machine code. 3. Portability: HLPLs are platform-independent; low-level languages need to be rewritten for different machines. ## High-Level Programming Languages: Execution Methods 1. Compiled Execution: Source code is translated into machine code by a compiler before execution (e.g., C, C++). 2. Interpreted Execution: Code is executed line by line by an interpreter at runtime (e.g., Python, Ruby). 3. Just-In-Time (JIT) Compilation: Combines interpretation and compilation, compiling bytecode into machine code at runtime (e.g., Java, C#). 4. Ahead-of-Time (AOT) Compilation: Compiles bytecode into machine code before execution (e.g., some Java and C# environments). 5. Transpiled Execution: Source code is converted into another high-level language before compilation or interpretation (e.g., TypeScript to JavaScript). ## List of High-Level Programming Languages **Popular high-level languages include:** 1. Python 2. JavaScript 3. Java 4. C# 5. Ruby 6. Swift 7. Kotlin 8. R 9. TypeScript ## Pioneering High-Level Programming Languages in the 20th Century Languages like COBOL, FORTRAN, Lisp, ALGOL, and Pascal were foundational in shaping modern programming. Though less prevalent today, they laid the groundwork for current high-level languages. ## What Are Very High-Level Programming Languages? Very high-level programming languages (4GLs) provide even higher abstraction levels, making them closer to human language. They are designed to increase productivity and reduce errors by simplifying complex operations. ## Examples of 4GLs 1. SQL: For database queries. 2. MATLAB: For mathematical computations. 3. SAS: For statistical analysis. These languages are used in specific domains where their specialized capabilities offer significant efficiency gains.
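Before wrapping up, here is a small, purely illustrative sketch that ties the transpiled-execution point from earlier (TypeScript to JavaScript) back to actual code; the function name and numbers are made up for illustration:

```typescript
// High-level TypeScript: the developer works with arrays, functions, and types,
// never with registers, memory addresses, or jump instructions.
function total(prices: number[]): number {
  return prices.reduce((sum, price) => sum + price, 0);
}

console.log(total([20, 5, 3])); // 28

// After transpilation the type annotations are erased, leaving plain JavaScript
// that any browser or Node.js runtime can execute:
// function total(prices) {
//   return prices.reduce((sum, price) => sum + price, 0);
// }
```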
zoltan_fehervari_52b16d1d
1,894,049
Distributed Google Maps scraping
Introduction In this post, I will show you how you can utilize the power of Kubernetes to scrape data...
0
2024-06-19T20:29:01
https://blog.gkomninos.com/distributed-google-maps-scraping
webscraping, googlemapsscraper, googlemapscrawler, go
--- title: Distributed Google Maps scraping published: true date: 2023-05-14 15:10:45 UTC tags: webscraping,GoogleMapsScraper,googlemapscrawler,golang canonical_url: https://blog.gkomninos.com/distributed-google-maps-scraping --- Introduction In this post, I will show you how you can utilize the power of Kubernetes to scrape data from Google Maps without using an API key. For the tutorial, I will use as an example deploying to But this will work in any managed Kubernetes pro...
gosom
1,901,053
Introducing Thin: The App for Rapid, Concise Content Sharing
Hello dev.to community, I'm excited to introduce Thin, an app designed to revolutionize how you...
0
2024-06-26T08:21:00
https://dev.to/frulow/introducing-thin-the-app-for-rapid-concise-content-sharing-9pp
react, reactnative, javascript, productivity
Hello dev.to community, I'm excited to introduce Thin, an app designed to revolutionize how you consume and share content. Whether you’re looking to share a digest of your latest blog post or quickly read up on various topics, Thin makes it easy and efficient. ## Why Thin? Thin is all about brevity and convenience. In today's fast-paced world, getting meaningful information quickly is crucial. That's where Thin comes in, offering a platform to access and share short, summarized digests and blogs across various categories, with a special focus on developers. ## Key Features **Concise Snippets:** Thin delivers content in bite-sized pieces, perfect for quick consumption. **Wide Range of Categories:** From tech and development to lifestyle and beyond, find and share content that interests you. **Effortless Sharing:** Share your blog digests, articles, and more with ease, helping you reach a broader audience quickly. ## For Developers, By Developers As developers, we understand the value of time and the need for efficient content consumption. Thin is designed to cater to this need, allowing you to: - Share summaries of your blog posts or articles. - Quickly read through digests and stay updated on the latest trends. - Save time by accessing condensed, yet meaningful information. ## Get Started with Thin Download Thin and start diving into a world of concise, rapid content. Share your thoughts, stay informed, and make the most out of your reading time. We're eager to hear your feedback and suggestions to make Thin even better. Let's create a community where sharing and consuming content is not only easy but also enjoyable. Enjoy Thin App: [thin.maarkar.in](https://thin.maarkar.in) Happy reading and sharing!
frulow
1,902,122
Animated Video Production: A Comprehensive Guide
Introduction to Animated Video Production Animated video production has become a pivotal tool in...
0
2024-06-27T05:50:27
https://dev.to/releb55973/animated-video-production-a-comprehensive-guide-9f9
**Introduction to Animated Video Production** Animated video production has become a pivotal tool in modern communication, marketing, and entertainment. With its ability to simplify complex ideas and captivate audiences, animation offers a unique way to convey messages effectively. Whether for educational content, advertisements, or storytelling, animated videos bring ideas to life with creativity and precision. This article explores the intricacies of animated video production, from conceptualization to final delivery. **The Importance of Animated Videos** Animated videos hold a significant place in various industries due to their versatility and impact. They can transform dull, monotonous information into engaging visual narratives, making it easier for audiences to understand and retain information. In marketing, animated videos are powerful tools for explaining products and services, demonstrating features, and highlighting benefits in an entertaining manner. Educational sectors utilize animations to break down complex subjects, making learning more interactive and enjoyable. Moreover, in entertainment, [animated video production](https://www.customanimators.com/) captivates audiences of all ages with vibrant visuals and imaginative storytelling. **Pre-Production: Planning and Conceptualization** The pre-production phase is crucial as it lays the foundation for the entire project. It involves meticulous planning, research, and conceptualization to ensure the animated video aligns with the intended message and goals. **Defining Objectives and Target Audience** The first step is to define the objectives of the animated video. What message needs to be conveyed? What action should the audience take after watching the video? Identifying the target audience is equally important. Understanding their preferences, interests, and demographics helps tailor the content to resonate with them effectively. **Scriptwriting and Storyboarding** A well-crafted script is the backbone of an animated video. It should be concise, engaging, and aligned with the objectives. The script guides the narrative, dialogue, and actions within the video. Once the script is finalized, the next step is storyboarding. Storyboards are visual representations of the script, depicting each scene with sketches or illustrations. They help visualize the flow, transitions, and key moments, ensuring a coherent and compelling narrative. **Style and Design** The style and design of the animation play a crucial role in setting the tone and mood of the video. The choice of animation style—be it 2D, 3D, motion graphics, or stop-motion—depends on the target audience and the message being conveyed. Character design, color schemes, and overall aesthetics are also determined during this phase. Consistency in style ensures a professional and polished final product. **Production: Bringing the Vision to Life** The production phase involves bringing the planned concepts and designs to life through animation. This phase requires technical expertise and creativity to ensure the final output is visually appealing and aligns with the initial vision. **Animation Techniques** Various animation techniques are employed based on the chosen style. In 2D animation, characters and elements are created frame by frame, while 3D animation involves creating digital models and animating them in a virtual space. Motion graphics focus on text and graphic elements, often used for explainer videos and infographics. 
Stop-motion animation involves photographing physical objects frame by frame to create movement. Each technique has its unique workflow and requires specialized skills and software. **Voiceover and Sound Design** Voiceovers add a human touch to animated videos, making the narrative more relatable and engaging. Professional voice actors are often hired to ensure high-quality audio. Sound design, including background music and sound effects, enhances the overall viewing experience. It sets the mood, emphasizes key moments, and adds depth to the animation. **Animation and Editing** The animation process involves creating the actual motion and transitions of characters and elements. Animators use specialized software to bring the storyboard to life, ensuring smooth and natural movements. After the animation is completed, the video undergoes editing. This includes refining the timing, adding effects, and ensuring seamless transitions between scenes. The final editing phase is crucial for maintaining the video's flow and coherence. **Post-Production: Polishing and Finalizing** Post-production is the final phase where the animated video is polished, finalized, and prepared for distribution. Attention to detail is essential to ensure a professional and high-quality output. **Review and Feedback** Before the final delivery, the video is reviewed by the production team and stakeholders. Feedback is gathered to identify any necessary adjustments or improvements. This iterative process ensures the video meets the desired standards and effectively conveys the intended message. **Rendering and Exporting** Once all revisions are made, the video is rendered and exported in the desired format. Rendering involves converting the animated project into a final video file. The choice of format depends on the platform and medium where the video will be distributed, such as social media, websites, or broadcast. **Distribution and Promotion** The final step is distributing and promoting the animated video. Effective distribution strategies ensure the video reaches the target audience. This may involve uploading the video to various online platforms, sharing it on social media, and incorporating it into marketing campaigns. Promotion efforts, such as SEO optimization and targeted advertising, maximize the video's reach and impact. **Tools and Software for Animated Video Production** Several tools and software are available to facilitate animated video production. These tools offer a range of features for creating, animating, and editing animations. **Adobe Animate** Adobe Animate is a popular software for creating 2D animations. It offers a comprehensive set of tools for drawing, animating, and exporting animations. Its integration with other Adobe Creative Cloud applications makes it a versatile choice for animators. **Blender** Blender is a powerful open-source software for 3D animation. It provides a wide range of features, including modeling, rigging, animation, and rendering. Blender's active community and extensive tutorials make it accessible for both beginners and professionals. **Toon Boom Harmony** Toon Boom Harmony is a professional-grade software for 2D animation. It is widely used in the animation industry for creating high-quality animations for television, films, and games. Its advanced features, such as rigging and compositing, offer great flexibility for animators. **After Effects** Adobe After Effects is a versatile software for motion graphics and visual effects. 
It is commonly used for creating animated text, transitions, and visual effects in videos. Its integration with Adobe Premiere Pro allows seamless editing and post-production workflows. **Challenges in Animated Video Production** While animated video production offers numerous benefits, it also presents several challenges that animators must navigate. **Time-Consuming Process** Animation is a time-consuming process that requires meticulous attention to detail. Each frame needs to be carefully crafted, and even minor adjustments can be time-intensive. Proper planning and efficient workflows are essential to manage time effectively. **Technical Skills and Expertise** Creating high-quality animations requires technical skills and expertise in using animation software. Animators need to stay updated with the latest tools, techniques, and trends in the industry. Continuous learning and practice are crucial for maintaining proficiency. **Budget Constraints** Producing animated videos can be expensive, especially for complex projects. Budget constraints may limit the resources available for hiring professional animators, voice actors, and sound designers. Effective budgeting and resource management are essential to deliver quality animations within financial limitations. **Future Trends in Animated Video Production** The field of animated video production is constantly evolving, with new trends and technologies shaping its future. **Virtual Reality (VR) and Augmented Reality (AR)** VR and AR are transforming the way animated videos are created and experienced. VR offers immersive storytelling experiences, while AR integrates digital elements into the real world. These technologies provide new opportunities for interactive and engaging animations. **Artificial Intelligence (AI) and Machine Learning** AI and machine learning are revolutionizing animation by automating certain processes. AI-powered tools can generate animations based on scripts, predict motion paths, and enhance facial expressions. This reduces the time and effort required for manual animation, allowing animators to focus on creativity. **Increased Demand for Personalized Content** As audiences seek more personalized experiences, there is an increasing demand for customized animated videos. Personalized animations, tailored to individual preferences and behaviors, enhance engagement and relevance. This trend is particularly evident in marketing and advertising. **Conclusion** Animated video production is a dynamic and multifaceted process that combines creativity, technical skills, and meticulous planning. From conceptualization to final delivery, each phase requires attention to detail and a deep understanding of the target audience. With the right tools, techniques, and expertise, animated videos can effectively convey messages, captivate audiences, and drive engagement. As technology continues to advance, the future of animated video production holds exciting possibilities, offering new ways to tell stories and connect with viewers.
releb55973
1,902,782
Next.js vs React.js: What you need to know
React and Next.js are tools becoming more popular these days, with a lot of similarities, and people...
0
2024-06-28T09:39:13
https://dev.to/mikeya-yo/nextjs-vs-reactjs-what-you-need-to-know-319k
javascript, webdev, react, nextjs
React and Next.js are tools becoming more popular these days, with a lot of similarities, and people often ask what the differences between these two are and which one they should use. Before we can talk about the differences between the two, we first need to understand what exactly they are. ## React React is a frontend library for building user interfaces with JavaScript and JSX. React works based on components, which are JavaScript functions that return JSX. React also makes use of the Virtual DOM, which tracks state changes and only re-renders the components that change instead of the whole DOM. React also provides us with hooks, such as useState and more, which are basically used to control state changes and side effects. ## Next.js Next.js, on the other hand, isn't a library but a React framework: it builds on top of React. Next.js offers a lot of additional features over React, such as server-side rendering, static optimization, etc. Next.js also allows you to build a complete full-stack project with its API routes. ## Major Differences ### React - React is a JavaScript library - In React, all JavaScript code is loaded by the browser - React is developed by Meta - You can only build frontend user interfaces with React ### Next.js - Next.js is a React framework - Next.js supports server-side rendering, meaning not all JavaScript code is loaded by the browser - Next.js is developed by Vercel - Next.js allows you to build a complete full stack, thanks to its API routes ## Conclusion Use React if you want to build a simple web interface whose state changes continuously on the client, but for larger applications requiring complex backends, use Next.js to get server-side rendering and all-in-one full-stack capabilities. _Note: For tech internships you can visit [hng.tech/internship](https://hng.tech/internship) or jobs at [hng.tech/hire](https://hng.tech/hire)_
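To make the distinction above concrete, here is a small, hedged sketch (file names and messages are illustrative assumptions, not from the original post). A plain React component runs entirely in the browser, while a Next.js project can also expose backend endpoints through API routes that run on the server:

```tsx
// React: a client-side component -- all of this JavaScript ships to the browser.
import { useState } from 'react';

export function Counter() {
  const [count, setCount] = useState(0); // useState hook controls local state
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}
```

```ts
// Next.js (pages router): pages/api/hello.ts -- an API route that runs only on the server.
import type { NextApiRequest, NextApiResponse } from 'next';

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // This code is never sent to the browser, which is part of what makes
  // Next.js a full-stack framework rather than just a UI library.
  res.status(200).json({ message: 'Hello from the server' });
}
```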
mikeya-yo
1,903,469
C# IEnumerables
Key Points: What is IEnumerable? IEnumerable is an interface in C# that...
0
2024-06-28T02:54:53
https://dev.to/jsdevelopermano/c-ienumerables-2bbi
## Key Points: ### What is IEnumerable? - `IEnumerable` is an interface in C# that allows you to iterate over a collection of items. - It is found in the `System.Collections` namespace for non-generic collections and `System.Collections.Generic` namespace for generic collections. ### Methods and Properties - The primary method in `IEnumerable` is `GetEnumerator()`, which returns an `IEnumerator`. - `IEnumerator` provides the necessary methods (`MoveNext()`, `Reset()`) and property (`Current`) to iterate over a collection. ### Common Usage - `IEnumerable` is commonly used with `foreach` loops to traverse collections such as arrays, lists, and other data structures. - It provides a standard way to iterate over any collection that implements the interface. ### Examples of Methods - **`OfType<TResult>(IEnumerable)`**: Filters the elements of an `IEnumerable` based on a specified type. - **`Concat<TSource>(IEnumerable<TSource>, IEnumerable<TSource>)`**: Concatenates two sequences. - **`GroupBy<TSource,TKey>(IEnumerable<TSource>, Func<TSource,TKey>)`**: Groups the elements of a sequence according to a specified key selector function.
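The methods above are easier to see in action than to describe, so here is a minimal, hedged sketch (the collections and values are illustrative, not from the original post) showing `foreach` over an `IEnumerable` together with `Concat`, `GroupBy`, and `OfType`:

```csharp
using System;
using System.Collections.Generic;
using System.Linq; // Concat, GroupBy and OfType are LINQ extension methods

class EnumerableDemo
{
    static void Main()
    {
        IEnumerable<int> first = new List<int> { 1, 2, 3 };
        IEnumerable<int> second = new[] { 4, 5, 6 };

        // foreach uses GetEnumerator()/MoveNext()/Current behind the scenes.
        foreach (int n in first)
            Console.WriteLine(n);

        // Concat: joins two sequences into one.
        IEnumerable<int> combined = first.Concat(second);

        // GroupBy: groups elements by a key selector (even vs. odd here).
        foreach (var group in combined.GroupBy(n => n % 2 == 0 ? "even" : "odd"))
            Console.WriteLine($"{group.Key}: {string.Join(", ", group)}");

        // OfType<TResult>: filters a non-generic collection by runtime type.
        var mixed = new object[] { 1, "two", 3, "four" };
        IEnumerable<string> strings = mixed.OfType<string>();
        Console.WriteLine(string.Join(", ", strings)); // prints: two, four
    }
}
```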
jsdevelopermano
1,903,473
Operators in Java
Types of Operators Arithmetic: Perform basic mathematical operations. Bitwise: Operate on...
0
2024-06-28T02:47:59
https://dev.to/devsjavagirls/operadores-em-java-2ced
java
**Types of Operators** **Arithmetic:** - Perform basic mathematical operations. **Bitwise:** Operate on bits. **Relational:** Compare values. **Logical:** Perform logical operations. **Arithmetic Operators** **Basic Operators:** + Addition - Subtraction * Multiplication / Division % Modulus (remainder of division) **Unary Operators:** + Unary plus - Unary minus **Increment and Decrement Operators:** ++ Increment (adds 1) -- Decrement (subtracts 1) **Behavior of the Arithmetic Operators** - They operate on numeric types and on char. **Integer Division:** Truncates the remainder. **Example:** 10 / 3 results in 3. **Modulus Operator:** Yields the remainder of a division. **Example:** 10 % 3 results in 1. - It can be applied to integer and floating-point types. ``` class ModDemo { public static void main(String args[]) { int iresult = 10 / 3; int irem = 10 % 3; double dresult = 10.0 / 3.0; double drem = 10.0 % 3.0; System.out.println("Result and remainder of 10 / 3: " + iresult + " " + irem); System.out.println("Result and remainder of 10.0 / 3.0: " + dresult + " " + drem); } } ``` **Output:** Result and remainder of 10 / 3: 3 1 Result and remainder of 10.0 / 3.0: 3.3333333333333335 1.0 **Increment and Decrement** Increment (++): Adds 1 to the operand. Example: x = x + 1; is equivalent to x++; Decrement (--): Subtracts 1 from the operand. Example: x = x - 1; is equivalent to x--; **Prefix and Postfix Forms:** Prefix: Increments/decrements before the value is used in the expression. Example: ++x Postfix: Uses the value in the expression before incrementing/decrementing. Example: x++ ``` int x = 10; int y = ++x; // y will be 11, x will be 11 ``` **Summary of Key Points** Java has operators for mathematical, logical, relational, and bitwise operations. Arithmetic operators include +, -, *, /, %, ++, --. The % operator can be applied to integer and floating-point types. Increment (++) and decrement (--) have prefix and postfix forms that affect the order of the operation. It is important to understand operator behavior to avoid logic errors in complex expressions.
devsjavagirls
1,903,493
Unlocking Cloud Networking Potential: A Deep Dive into AWS VPC
Unlocking Cloud Networking Potential: A Deep Dive into AWS VPC ...
0
2024-06-28T03:10:50
https://dev.to/virajlakshitha/unlocking-cloud-networking-potential-a-deep-dive-into-aws-vpc-4eo1
![topic_content](https://cdn-images-1.medium.com/proxy/1*hXIV3K77zDbI0B5vuV_X3A.png) # Unlocking Cloud Networking Potential: A Deep Dive into AWS VPC ### Introduction The cloud revolution hinges on flexibility and scalability. Amazon Virtual Private Cloud (VPC) empowers businesses to build and manage their own private networks within the AWS ecosystem. Think of it as your own slice of the cloud, shielded from the public internet and tailored to your specific security and networking needs. But what makes VPC such a game-changer? Let's delve into the core components and then explore some powerful use cases. ### Key Components of AWS VPC * **Virtual Private Cloud (VPC)**: The foundational element, a logically isolated network space in AWS where you launch your resources. * **Subnets**: Subdivisions within your VPC, each tied to an Availability Zone for high availability and fault tolerance. * **Route Tables**: Dictate the traffic flow within your VPC. You define rules (routes) that specify how network traffic from your subnet is routed to its destination, be it inside or outside your VPC. * **Internet Gateways**: Your gateway to the public internet, allowing communication between resources in your VPC and the outside world. * **Network Access Control Lists (NACLs)**: Act as firewalls at the subnet level, providing an extra layer of security by controlling traffic in and out of your subnets. * **Security Groups**: Operating at the instance level, these virtual firewalls grant granular control over inbound and outgoing traffic for your EC2 instances. * **Virtual Private Network (VPN) Connections**: Establish secure connections between your on-premises networks and your VPC, ensuring data privacy and secure communication. * ** VPC Peering**: Connects two VPCs (even across different AWS accounts) enabling seamless communication as if they were a single network. ### VPC Use Cases: From Simple to Sophisticated 1. **Basic Web Application Deployment:** Imagine you want to launch a web application. You could: * Create a VPC with public and private subnets. * Deploy your web servers in the public subnet, making them accessible from the internet via an internet gateway. * Place your database servers in the private subnet for enhanced security, accessible only from the web servers. 2. **Hybrid Cloud Architecture:** Many organizations require a blend of on-premises and cloud environments. VPC makes this seamless: * Establish a secure VPN connection between your corporate data center and your VPC. * This lets you access on-premises resources as if they were in the cloud, and vice versa. 3. **Multi-Tier Application Deployment:** For complex applications with multiple tiers (web, application, database), VPC enables a secure and organized structure: * Create separate subnets for each tier, tailoring security group rules for precise communication control. * This enforces the principle of least privilege and enhances security. 4. **Disaster Recovery:** VPC plays a vital role in building resilient architectures: * Replicate your critical applications and data to a separate Availability Zone or Region within your VPC. * In case of an outage, quickly reroute traffic to the secondary environment, minimizing downtime. 5. **Microservices Architecture:** Modern applications often leverage microservices - independent, loosely coupled services. VPC facilitates this: * Deploy each microservice in its own subnet, isolated for security and independent scaling. 
* Use internal load balancers to distribute traffic between microservices within your VPC. ### AWS VPC vs. The Competition While AWS VPC was a pioneer, other cloud providers offer similar networking capabilities: * **Google Cloud Platform (GCP) Virtual Private Cloud (VPC):** Closely mirrors AWS VPC in terms of features, with strong integration with other GCP services. * **Microsoft Azure Virtual Network (VNet):** Provides comparable functionality, with a focus on integration with the Microsoft ecosystem. Each platform has its strengths and subtle differences. AWS, with its mature feature set and broad adoption, often leads the way in innovation. ### Conclusion AWS VPC is not merely a networking tool – it's a foundational building block for architecting robust, secure, and scalable cloud solutions. Whether you're a startup or an enterprise, understanding and leveraging VPC is paramount to unlocking the true potential of the AWS cloud. ## Advanced Use Case: Building a Global Content Delivery Network with VPC and CloudFront Let's push the boundaries further. Imagine a global media company with users scattered across the world. Their goal is to deliver content with minimal latency. Here's how AWS VPC, combined with other AWS services, makes this possible: **Architecture Overview** 1. **Origin Servers in Private Subnets**: Content is hosted on origin servers within a private subnet of your VPC. This isolates them from direct internet access, enhancing security. 2. **CloudFront for Global Distribution**: Amazon CloudFront, AWS's content delivery network (CDN), is utilized. CloudFront edge locations, strategically positioned globally, cache content closer to users. 3. **Route 53 for Intelligent Routing:** Route 53, AWS's highly available DNS service, directs user requests to the nearest CloudFront edge location based on factors like geographic location. 4. **VPC Endpoints for Secure S3 Access:** VPC Endpoints allow your CloudFront distributions to privately access content stored in Amazon S3 (Simple Storage Service) without traversing the public internet, further bolstering security. **Workflow** 1. Content is uploaded to your origin servers in the private subnet. 2. This content is then distributed and cached across CloudFront's global network of edge locations. 3. When a user requests content: * Route 53 determines the optimal CloudFront edge location. * CloudFront serves the cached content, if available. * If not, CloudFront retrieves the content from the origin servers in your private subnet via a secure VPC endpoint. **Benefits** * **Reduced Latency:** Delivering content from edge locations close to users drastically reduces latency, leading to a better user experience. * **Enhanced Security:** Origin servers remain isolated within private subnets, and traffic between CloudFront and S3 is secured using VPC Endpoints. * **High Availability and Scalability**: CloudFront's distributed nature ensures high availability, and both CloudFront and S3 scale automatically to handle traffic spikes. **Conclusion:** By combining the power of AWS VPC, CloudFront, Route 53, S3, and VPC Endpoints, you can build a high-performance, secure, and globally scalable content delivery network – a testament to the flexibility and power of the AWS ecosystem.
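To ground the basic web-application layout from use case 1, here is a minimal, hedged sketch using the AWS CDK v2 in TypeScript (stack and construct names, CIDR sizes, and the database port are illustrative assumptions, not part of the original article):

```ts
import { App, Stack } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

const app = new App();
const stack = new Stack(app, 'WebAppNetworkStack');

// A VPC with public subnets (web tier, internet-facing via an internet gateway)
// and private subnets with egress (database tier), spread across two AZs.
const vpc = new ec2.Vpc(stack, 'WebAppVpc', {
  maxAzs: 2,
  subnetConfiguration: [
    { name: 'public-web', subnetType: ec2.SubnetType.PUBLIC, cidrMask: 24 },
    { name: 'private-db', subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS, cidrMask: 24 },
  ],
});

// Security groups: only the web tier may reach the database tier on port 5432.
const webSg = new ec2.SecurityGroup(stack, 'WebSecurityGroup', { vpc });
const dbSg = new ec2.SecurityGroup(stack, 'DbSecurityGroup', { vpc });
dbSg.addIngressRule(webSg, ec2.Port.tcp(5432), 'Database access from web tier only');

app.synth();
```

Synthesizing this stack (`cdk synth`) produces the CloudFormation template; the same public/private split underpins the multi-tier and CDN use cases described above.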
virajlakshitha
1,903,718
Virtual address for GST registration
A virtual office in Jaipur with “Virtual States GST” will make your business stand out. We have...
0
2024-06-28T07:11:32
https://dev.to/manojkumararvian_9f54148/virtual-address-for-gst-registration-6ao
A virtual office in Jaipur with “Virtual States GST” will make your business stand out. We have reliable virtual office plans that suit your business environment. We provide the best help solution for today’s fast-moving, versatile business environment. Our proficient experts will help you open your office without taking extra time, so you can start working on the same day. Virtual State GST is up to the task, offering the Best Virtual Office Address in Jaipur. We also provide virtual office services in all 28 states of India, along with union territories. These offices are registered in professional business complexes that give your business an ethical view. This address can be used as a mailing and billing address in Jaipur. In addition, Virtual State GST offers additional services, including professional receptionists and mail delivery for the company, at an additional cost. If you look forward to getting a virtual office address for GST registration, we at Virtual State GST are delighted to welcome you, and we wish you a great deal of cooperation. With the spread of the Internet, the online e-commerce business has developed into a billion-dollar industry in India, gradually covering every business sector that we can think of. From specialty products to branded and seasonal products, you'll find virtual storefronts everywhere online. This brings the business world closer together, creating growth and opportunities as online Internet technology and its speed increase. The online e-commerce business in Jaipur grows day by day. As online e-commerce companies/businesses nowadays run without a traditional physical store, virtual office addresses/services also provide the same facilities as physical office space but do not include the actual office suite. However, the cost reduction and savings are very high, and it is beneficial to use virtual offices to expand and grow your online e-commerce business. (https://virtualstategst.com/virtual-office-in-jaipur.php)
manojkumararvian_9f54148
1,904,493
Using Past Performance Information Retrieval System PPIRS to Showcase Your Track Record
Discover how to leverage the Past Performance Information Retrieval System (PPIRS) to display and enhance your professional track record effectively.
0
2024-06-28T16:36:39
https://www.govcon.me/blog/using_past_performance_information_retrieval_system_ppirs_to_showcase_your_track_record
technology, innovation, ppirs, performancemanagement
# Using Past Performance Information Retrieval System (PPIRS) to Showcase Your Track Record In today’s fast-evolving professional landscape, standing out requires more than just sheer talent and a strong work ethic. Enter the Past Performance Information Retrieval System (PPIRS)—a game-changer in performance management and business transparency. Let’s dive into how you can leverage PPIRS to showcase your track record compellingly and effectively. ## Unpacking PPIRS PPIRS is a robust tool designed to collect and manage past performance data for federal contracts. This system was developed to aid government agencies in evaluating contractors’ historical performance, ensuring that only those with a proven track record of excellence can participate in federal opportunities. ### How Does It Work? PPIRS aggregates performance data from various sources, including the Contractor Performance Assessment Reporting System (CPARS) and Federal Awardee Performance and Integrity Information System (FAPIIS). Here’s a step-by-step on how it operates: 1. **Data Collection**: - Performance reviews from CPARS. - Integrity information from FAPIIS. - Contracting officers provide detailed assessments. 2. **Data Integration**: - Collected data is centralized within PPIRS. - The information is standardized to ensure consistency. 3. **Review and Access**: - Agencies access a consolidated view of contractors&#x27; performance. - Real-time data ensures up-to-date information. ## Benefits of Using PPIRS Harnessing the power of PPIRS can offer myriad benefits not just for government agencies, but for contractors too. Here’s why you should consider integrating PPIRS data into your business strategy: ### Enhanced Credibility Including your performance assessments from PPIRS in your proposals can significantly boost your credibility. Government agencies considering your bids will appreciate the verified, transparent insights into your past performance. ### Competitive Edge PPIRS data can help differentiate you from competitors. A track record documented by a reputable third-party system is an influential tool in proving your capability and reliability. ### Performance Improvement By regularly reviewing your PPIRS records, you can identify areas where you excel and where you need improvement. This proactive approach to performance management can lead to continuous improvement and better outcomes for future projects. ## Steps to Showcase Your PPIRS Data Ready to leverage this powerful system? Follow these steps to effectively showcase your PPIRS data: ### Step 1: Access Your Records First, access your performance data from PPIRS. Ensure all information is accurate and reflects your contract achievements properly. Discrepancies should be addressed promptly with the reporting agencies. ### Step 2: Create a Performance Summary Compile your PPIRS data into a cogent performance summary. Highlight key metrics and notable achievements. This summary should present both qualitative and quantitative aspects of your performance. ### Step 3: Integrate into Proposals Incorporate your performance summary into proposal documents. Use engaging visuals like graphs and charts to make the data easily digestible. Emphasize successful project outcomes and client satisfaction. ### Step 4: Leverage Testimonials Where possible, include testimonials from contract officers who have provided high ratings. Authentic testimonials can humanize your data and add a personal touch that resonates with evaluators. 
## Best Practices for Maintaining Stellar PPIRS Scores To keep your PPIRS records sparkling, consider these best practices: - **Consistent Excellence**: Always aim for high performance across all contracts. Consistency is key to maintaining a positive track record. - **Proactive Communication**: Maintain clear, proactive communication with contract officers to understand their expectations and feedback. - **Continuous Improvement**: Use your PPIRS data to identify strengths and improve weaknesses, ensuring an upward trajectory in performance. ## Conclusion The Past Performance Information Retrieval System is more than just a repository of your performance data—it's a strategic asset. By leveraging PPIRS to showcase your track record, you can significantly enhance your credibility, gain a competitive edge, and continually improve your performance. Embrace PPIRS, and let your proven track record open doors to greater opportunities. Harness PPIRS today and watch your professional journey soar to new heights! 🚀
quantumcybersolution
1,905,876
Understanding The Use Of Context API In React JS
React JS, known for its efficient and flexible JavaScript library for building user interfaces, often...
0
2024-06-29T17:56:46
https://reacttonext.com/context-api-react-js/
reactjsdevelopment, frontend, react, webdev
React JS, known for its efficient and flexible JavaScript library for building user interfaces, often requires sharing data between components without passing props at every level. This is where the Context API in React JS comes into play. The Context API enables the seamless transmission of data through the component tree without the need for prop drilling at each hierarchical level. ## What is Context API? The Context API is a React mechanism that facilitates the sharing of specific data across all levels of an application. It’s designed to solve the problem of prop drilling, where props are passed down through multiple levels of a component tree. ## Common Uses of Context API Here are five common scenarios where the Context API is particularly useful: - Theme Management - User Authentication - Global State Management - Language Localization - Notification System Let’s dive into each use case with examples to understand how the Context API can be implemented effectively. ### 1. Theme Management When building a web application, managing themes can become cumbersome if we pass theme-related props through multiple components. The Context API simplifies this process. **Example:** ```javascript // ThemeContext.js import React, { createContext, useState } from 'react'; export const ThemeContext = createContext(); export const ThemeProvider = ({ children }) => { const [theme, setTheme] = useState('light'); return ( <ThemeContext.Provider value={{ theme, setTheme }}> {children} </ThemeContext.Provider> ); }; // App.js import React, { useContext } from 'react'; import { ThemeProvider, ThemeContext } from './ThemeContext'; const ThemeToggle = () => { const { theme, setTheme } = useContext(ThemeContext); return ( <button onClick={() => setTheme(theme === 'light' ? 'dark' : 'light')}> Toggle Theme </button> ); }; const App = () => ( <ThemeProvider> <ThemeToggle /> </ThemeProvider> ); export default App; ``` ### 2. User Authentication Managing user authentication states such as login status, user data, and tokens can be simplified using the Context API. **Example:** ```javascript // AuthContext.js import React, { createContext, useState } from 'react'; export const AuthContext = createContext(); export const AuthProvider = ({ children }) => { const [user, setUser] = useState(null); const login = (userData) => { setUser(userData); }; const logout = () => { setUser(null); }; return ( <AuthContext.Provider value={{ user, login, logout }}> {children} </AuthContext.Provider> ); }; // App.js import React, { useContext } from 'react'; import { AuthProvider, AuthContext } from './AuthContext'; const UserProfile = () => { const { user, logout } = useContext(AuthContext); if (!user) return <p>Please log in.</p>; return ( <div> <p>Welcome, {user.name}!</p> <button onClick={logout}>Logout</button> </div> ); }; const App = () => ( <AuthProvider> <UserProfile /> </AuthProvider> ); export default App; ``` ### 3. Global State Management For managing global states like app settings or configurations, the Context API provides a straightforward solution. 
**Example:** ```javascript // AppContext.js import React, { createContext, useState } from 'react'; export const AppContext = createContext(); export const AppProvider = ({ children }) => { const [settings, setSettings] = useState({}); return ( <AppContext.Provider value={{ settings, setSettings }}> {children} </AppContext.Provider> ); }; // App.js import React, { useContext } from 'react'; import { AppProvider, AppContext } from './AppContext'; const SettingsComponent = () => { const { settings, setSettings } = useContext(AppContext); const updateSetting = () => { setSettings({ ...settings, newSetting: true }); }; return ( <div> <p>Current Settings: {JSON.stringify(settings)}</p> <button onClick={updateSetting}>Update Settings</button> </div> ); }; const App = () => ( <AppProvider> <SettingsComponent /> </AppProvider> ); export default App; ``` ### 4. Language Localization Handling multiple languages in an application can be efficiently managed using the Context API to switch between languages seamlessly. **Example:** ```javascript // LanguageContext.js import React, { createContext, useState } from 'react'; export const LanguageContext = createContext(); export const LanguageProvider = ({ children }) => { const [language, setLanguage] = useState('en'); return ( <LanguageContext.Provider value={{ language, setLanguage }}> {children} </LanguageContext.Provider> ); }; // App.js import React, { useContext } from 'react'; import { LanguageProvider, LanguageContext } from './LanguageContext'; const LanguageSelector = () => { const { language, setLanguage } = useContext(LanguageContext); return ( <select value={language} onChange={(e) => setLanguage(e.target.value)}> <option value="en">English</option> <option value="es">Spanish</option> </select> ); }; const App = () => ( <LanguageProvider> <LanguageSelector /> </LanguageProvider> ); export default App; ``` ### 5. Notification System A global notification system can be implemented using the Context API to manage alerts, messages, and notifications across the application. **Example:** ```javascript // NotificationContext.js import React, { createContext, useState } from 'react'; export const NotificationContext = createContext(); export const NotificationProvider = ({ children }) => { const [notifications, setNotifications] = useState([]); const addNotification = (message) => { setNotifications([...notifications, { id: Date.now(), message }]); }; const removeNotification = (id) => { setNotifications(notifications.filter(notification => notification.id !== id)); }; return ( <NotificationContext.Provider value={{ notifications, addNotification, removeNotification }}> {children} </NotificationContext.Provider> ); }; // App.js import React, { useContext } from 'react'; import { NotificationProvider, NotificationContext } from './NotificationContext'; const NotificationList = () => { const { notifications, removeNotification } = useContext(NotificationContext); return ( <ul> {notifications.map(notification => ( <li key={notification.id}> {notification.message} <button onClick={() => removeNotification(notification.id)}>Dismiss</button> </li> ))} </ul> ); }; const App = () => ( <NotificationProvider> <NotificationList /> </NotificationProvider> ); export default App; ``` ## Conclusion The Context API in React JS is a powerful tool to manage state and share data across components without the hassle of prop drilling. 
From theme management to user authentication, global state management, language localization, and notification systems, the Context API simplifies the development process and makes your code more maintainable.
shrikant9907
1,906,820
Unsupervised Learning: Unveiling the Hidden Secrets in Your Data
Imagine walking into a room full of people, but none of them have name tags. Unsupervised learning is...
0
2024-06-30T18:04:07
https://dev.to/abhinav_yadav_554cab962bb/unsupervised-learning-unveiling-the-hidden-secrets-in-your-data-g82
beginners, ai, machinelearning, datascience
Imagine walking into a room full of people, but none of them have name tags. Unsupervised learning is like being tasked with organising them into groups based on what you see. Unlike supervised learning where we have labeled data, here we're on our own to find hidden patterns and structures. ## Table Of Content - Introduction to Unsupervised Learning - Types of Unsupervised Learning - Practical Example: Implementing Clustering with k-Means - Practical Example: Dimensionality Reduction with PCA - Applications and Challenges of Unsupervised Learning ## Introduction to Unsupervised Learning Unsupervised learning in artificial intelligence is a type of machine learning that learns from data without human supervision. Unlike supervised learning, unsupervised machine learning models are given unlabelled data and allowed to discover patterns and insights without any explicit guidance or instruction. This type of learning is incredibly useful for tasks like: - **Customer segmentation:** Unsupervised learning can group customers based on their buying habits, allowing businesses to target specific demographics with personalised marketing campaigns. - **Anomaly detection:** Ever wondered how spam filters catch suspicious emails? Unsupervised learning can identify outliers in data, making it perfect for detecting fraudulent transactions or security threats. - **Data compression:** Images and videos can take up a lot of storage space. Unsupervised learning can compress data by reducing its dimensions while preserving key information. ## Types of Unsupervised Learning There are two main approaches to unsupervised learning: 1. **Clustering**: This is like sorting those people in the room. We group data points together based on their similarities. Popular clustering algorithms include k-Means (think of it as creating k distinct groups) and Hierarchical clustering (building a hierarchy of clusters like a family tree). 2. **Dimensionality Reduction**: Sometimes, data has too many variables, making it hard to visualise or analyse. Dimensionality reduction techniques like PCA (Principal Component Analysis) help us reduce the number of variables while keeping the most important information. ## Practical Example: Implementing Clustering with k-Means Let's get hands-on! 
We can use k-Means clustering to group customers based on their spending habits: 1.**Importing Libraries** ``` import pandas as pd import numpy as np from sklearn.cluster import KMeans import matplotlib.pyplot as plt from sklearn.preprocessing import StandardScaler ``` 2.**Sample data: customer spending habits** ``` data = { 'CustomerID': range(1, 11), 'Annual Income (k$)': [15, 16, 17, 18, 19, 20, 21, 22, 23, 24], 'Spending Score (1-100)': [39, 81, 6, 77, 40, 76, 6, 94, 3, 72] } df = pd.DataFrame(data) ``` 3.Selecting features ``` X = df[['Annual Income (k$)', 'Spending Score (1-100)']] ``` 4.**Standardising the data** ``` scaler = StandardScaler() X_scaled = scaler.fit_transform(X) ``` 5.**Applying k-Means clustering** ``` kmeans = KMeans(n_clusters=3, random_state=0) kmeans.fit(X_scaled) df['Cluster'] = kmeans.labels_ ``` 6.**Plotting the clusters** ``` plt.figure(figsize=(10, 6)) plt.scatter(X_scaled[:, 0], X_scaled[:, 1], c=df['Cluster'], cmap='viridis') plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=300, c='red', marker='X') plt.title('Customer Clusters') plt.xlabel('Annual Income (scaled)') plt.ylabel('Spending Score (scaled)') plt.show() ``` ## Practical Example: Dimensionality Reduction with PCA Similarly, PCA can be used to reduce the dimensions of a dataset for better visualization: 1.**Importing Libraries** ``` import pandas as pd import numpy as np from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler import matplotlib.pyplot as plt ``` 2.**Sample data: customer spending habits** ``` data = { 'CustomerID': range(1, 11), 'Annual Income (k$)': [15, 16, 17, 18, 19, 20, 21, 22, 23, 24], 'Spending Score (1-100)': [39, 81, 6, 77, 40, 76, 6, 94, 3, 72], 'Age': [25, 34, 22, 35, 40, 30, 26, 32, 28, 45] } df = pd.DataFrame(data) ``` 3.**Selecting features** ``` X = df[['Annual Income (k$)', 'Spending Score (1-100)', 'Age']] ``` 4.**Standardising the data** ``` scaler = StandardScaler() X_scaled = scaler.fit_transform(X) ``` 5.**Applying PCA** ``` pca = PCA(n_components=2) X_pca = pca.fit_transform(X_scaled) ``` 6.**Explained variance** ``` explained_variance = pca.explained_variance_ratio_ ``` 7.**Plotting the results** ``` plt.figure(figsize=(8, 6)) plt.scatter(X_pca[:, 0], X_pca[:, 1]) for i, txt in enumerate(df['CustomerID']): plt.annotate(txt, (X_pca[i, 0], X_pca[i, 1])) plt.title('PCA of Customer Data') plt.xlabel('Principal Component 1') plt.ylabel('Principal Component 2') plt.grid(True) plt.show() print(f"Explained variance by component: {explained_variance}") ``` ## Applications and Challenges of Unsupervised Learning Unsupervised learning unlocks a treasure trove of possibilities. It helps us segment markets, detect anomalies, compress data, and uncover hidden patterns in complex datasets. But like any adventure, there are challenges: - Choosing the right number of clusters: How many groups should we create in our k-Means example? Techniques like the Elbow Method can help us decide. - High-dimensional data: When dealing with many variables, it can be tricky to manage and visualize the data. - Interpretation: Making sense of the clusters and reduced dimensions requires careful analysis. > With careful planning and the right tools, unsupervised learning can be a powerful tool in your data science arsenal. So, next time you look at a crowd of unlabeled data, remember – there's a hidden story waiting to be discovered! Happy Learning ! 
Please do comment below whether you like the content or not. If you have any questions or ideas, or want to collaborate on a project, here is my [LinkedIn](https://www.linkedin.com/in/abhinav-yadav-482a4a26b/)
abhinav_yadav_554cab962bb
1,907,172
Microsoft Azure Migration Challenges & Solutions In 2024
If you are reading this article, you might be interested in migrating your system to Microsoft Azure....
0
2024-07-01T05:32:47
https://dev.to/priya_raimagiya_c204c9e90/microsoft-azure-migration-challenges-solutions-in-2024-161c
azure, azuremigration, migrationchallenges, azuresolutions
If you are reading this article, you might be interested in migrating your system to Microsoft Azure. This decision might leave you with several questions and doubts, one of which might be about the challenges of migrating to the cloud! These questions are not wrong—as you prepare to migrate your legacy infrastructure, you should be well-versed in these challenges and their solutions. This article has covered all the Azure migration challenges that require your attention. Do not let these obstacles hold you back; head to the most convenient solution & embark on your cloud adoption journey! ![Azure Migration Challenges & Solutions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7hyrrcgib7n13zhqfcwb.png) ## What Is Azure Migration Azure Migration is a comprehensive process of migrating your on-premise data, applications, and operating systems to the Azure cloud. It is a centralized portal that helps companies identify, evaluate, and migrate their legacy infrastructure to Microsoft Azure. 90% of Fortune 500 companies have adopted Microsoft Azure for their businesses globally. Its wide range of offerings enhances its relevance over alternatives. After choosing Azure, a blueprint of the entire process needs to be outlined. Once the strategy is crafted, follow the Azure Migration strategy: 1. Assess 2. Migrate 3. Optimize 4. Manage ![Azure Migration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1wb6liyhr5bxe6sk32cw.png) ## Top Azure Migration Challenges With Solutions Let’s have a look at an [Azure migration checklist](https://www.bacancytechnology.com/blog/azure-migration-checklist): - Data Security & Compliance - Careful Planning - Cost - Business Downtime - Vendor Selection Keep your Azure migration process on track, and move ahead once you overcome each hurdle! ## Challenge & Solution 1: Data Security & Compliance Data security and regulatory compliance are pivotal for any company and must not be compromised at any cost. Azure has several offerings that can fortify your data. Additionally, you must ensure security options are configured while deploying the process. In other words, you can’t assume that default settings are secure ones. Follow the tips below to fortify your system: - Establish network segmentation boundaries - Safeguard cloud services with network controls. - Utilize a centralized identity and authentication system. - Put conditions and restrictions on resource access. - Restrict exposure of credentials and secrets. - Apply the least-privilege principle Another component that requires your consideration is compliance with industrial standards. Differences in operating systems, configurations, and dependencies cause compatibility issues. List the essential industry standards and ensure compliance with them when launching your system on Azure. Moreover, you need to perform proofs of concept (POC) to evaluate your present position. ![Data Security](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/erer3pcwry81fqhz7vhf.png) ## Challenge & Solution 2: Proper & Careful Planning Improper planning is the main cause of hurdles in the cloud adoption journey. A well-defined migration strategy can ensure effortless Azure migration. Stats show that 52% of companies are heading to cloud adoption, but not all ride smoothly on their cloud journey. Many technical aspects get ignored, which requires your careful consideration. Prepare a detailed plan, and get help from experienced experts to frame your organization-specific migration plan.
## Challenge & Solution 3: High Upfront Cost Another challenge you might encounter is the large upfront investment. You need to invest wisely! How you proceed with your Azure adoption journey relies on your business requirements. You must have in-house Azure-certified experts to help you with cost optimization. Alternatively, you can opt for an Azure migration service provider who can assist you in managing spending throughout the journey from adoption to management. ## Challenge & Solution 4: Business Downtime Once we have dealt with the cost, we need to consider downtime. There might be chances of system downtime, which can’t be eliminated entirely, but with adequate planning, it can be minimized to a great extent. Here, you need to outline the operations essential for regular work. With the help of experienced developers, you can plan accurately and ensure that your essential processes are least affected by migration. ![System Downtime](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rrsqukkyvttn3eir8ybs.png) ## Challenge & Solution 5: Vendor Selection Organizations that don’t possess in-house expertise must join hands with [Azure migration services](https://www.bacancytechnology.com/azure-migration-services) providers. These partners hold immense value; they need to be selected carefully. First, convey your requirements to the top service providers; once you have their quotations and details on how they can help you, evaluate their past projects and choose the right one for you! ## Conclusion With the right support, overcoming all these roadblocks will not be difficult. Proper migration planning and execution can be your ultimate strength. The migration journey might not be easy, but do not let these obstacles catch you off guard; move ahead with a proper Azure migration strategy. Happy Clouding!
priya_raimagiya_c204c9e90
1,907,681
ASTRO JS | WEB DEV
Hello my fellow web developers, today I will be starting a small series about Astro JS, which is a new...
0
2024-07-01T12:38:35
https://dev.to/shubhamtiwari909/astro-js-web-dev-15fp
html, webdev, javascript, beginners
Hello my fellow web developers, today i will be starting a small series about astro js which is a new framework just like NEXT JS and if you know about next js, the learning curve could be very easy and straight forward. Let's get started... ## Table of Contents - [What is Astro JS](#intro) - [Some of the key features](#key-features) - [Front Matter Architecture](#frontmatter) - [Island architecture](#island-architecture) - [Integrations](#integrations) - [View Transition](#view-transition) - [Assets - CSS and Images](#css-image-assets) - [Pros](#pros) - [Cons](#cons) <a id="intro"></a> ## What is Astro JS? Astro.js is a modern web framework designed to optimize the performance and developer experience of building static websites and dynamic web applications. It uses a unique approach called partial hydration, where only the necessary JavaScript for interactive components is loaded on the client side, significantly improving load times and overall performance. It has features similar to other frameworks like SSG/SSR, file based routing from next js, template/layout system from vue js, etc. <a id="key-features"></a> ## Some of the key features - **Static Site Generation (SSG)**: By default, Astro pre-renders pages to static HTML, providing fast load times and enhancing SEO. - **Component Agnostic**: Developers can use components from popular frameworks like React, Vue, Svelte, and others within the same project. Documentation - **Built-in Optimization**: Astro automatically optimizes CSS, JavaScript, and images, resulting in efficient and lightweight websites. - **Flexibility**: Supports both single-page applications (SPA) and multi-page applications (MPA), catering to a wide range of project requirements. - **File-Based Routing**: Simplifies the routing process with an intuitive file and folder structure just like next js. Documentation - **Multiple file format support**: It supports these file formats - .astro, .md, .mdx, .js, .ts, .html <a id="frontmatter"></a> ## Front Matter Architecture In Astro, frontmatter is a way to include metadata and logic at the top of your .astro files. It is defined using triple dashes (---) and allows you to write JavaScript (or TypeScript) code that runs before rendering the HTML. This is commonly used for setting variables, importing components, fetching data, and other preparatory tasks. ```js --- import Header from '../components/Header.astro'; import Footer from '../components/Footer.astro'; const title = "My Blog Post"; const author = "Jane Doe"; --- <Header /> <article> <h1>{title}</h1> <p>By {author}</p> </article> <Footer /> ``` ```md --- // Front matter with markdoc file layout: ../../layouts/BlogPostLayout.astro title: Astro in brief author: Himanshu description: Find out what makes Astro awesome! --- This is a post written in Markdown. ``` ```md --- // Accessing the markdoc file variables, could be better with dynamic routes const {frontmatter} = Astro.props; // same concept as react props --- <html> <!-- ... --> <h1>{frontmatter.title}</h1> <h2>Post author: {frontmatter.author}</h2> <p>{frontmatter.description}</p> <slot /> <!-- Markdown content is injected here --> </html> ``` <a id="island-architecture"></a> ## Island architecture * It aims to optimize web performance by allowing you to deliver only the minimal amount of JavaScript necessary for interactive components on a page. 
This architecture focuses on rendering static HTML for most of the page while isolating interactive components that require client-side JavaScript, all of this while working with your favorite library like react or vue. * **Partial Hydration**: Instead of hydrating the entire page with JavaScript (as done in traditional Single Page Applications), Astro only hydrates the some components. This means that JavaScript is only executed for specific components that need it, significantly reducing the amount of JavaScript loaded and executed on the client side. * **Better SEO**: Static HTML content is immediately available to search engines, improving crawlability and indexing. * Directives for Island Hydration: 1. client:load: Hydrates the component immediately after the page loads. 2. client:idle: Hydrates the component when the browser is idle. 3. client:visible: Hydrates the component when it becomes visible in the viewport. 4. client:media: Hydrates the component based on a media query. 5. client:only: Hydrates the component only on the client side (useful for client-only components). * Example ```js import Header from '../components/Header.astro'; import Footer from '../components/Footer.astro'; const title = "My Blog Post"; const author = "Jane Doe"; --- <Header /> <article> <h1>{title}</h1> <p>By {author}</p> </article> <InteractiveComponent slug={post.slug} client:load /> <Footer /> ``` <a id="integrations"></a> ## Integrations We can have multiple integrations like UI integrations - React JS, Vue JS, Svelte JS and SSR Adapters - Vercel, netlify, cloudfare, node js. It also has support for other integrations like tailwind, markdoc, mdx, sitemap, etc. ### How to add an integration? * Adding React and tailwind integration ``` npx astro add react tailwind ``` * It will configure both react and tailwind completely, if you need some manual configuration for these, you could do that from astro.config.mjs file. ```js import { defineConfig } from 'astro/config'; import tailwind from '@astrojs/tailwind'; export default defineConfig({ integrations: [ tailwind({ // Disable injecting a basic `base.css` import on every page. // Useful if you need to define and/or import your own custom `base.css`. applyBaseStyles: false, // Allow writing nested CSS declarations // alongside Tailwind's syntax nesting: true, }), ], }); ``` <a id="view-transition"></a> ## View Transition * View Transitions refer to techniques and tools used to create smooth transitions between different views or pages in a web application. These transitions enhance the user experience by providing visual continuity and reducing the perceived load time when navigating through the site. While Astro itself doesn't provide built-in view transition functionalities, it can leverage existing JavaScript libraries and frameworks to achieve these effects. * It comes with 3 effects for the page transition - slide, fade and none. ```js --- import { ViewTransitions } from 'astro:transitions'; --- <html lang="en"> <head> <title>My Homepage</title> <ViewTransitions /> </head> <body> <h1>Welcome to my website!</h1> </body> </html> ``` <a id="css-image-assets"></a> ## Assets ### CSS We could add css via a style tag with 2 mode, scoped(the css will be applied to that component only) and global(the css will be applied to all the pages globally). 
We can also use mixed mode some styles are scoped and some are made global with “:global” ```css // Scoped <style> .text { color: blue; } </style> ``` ```css // Global <style is:global> h1 { color: red; } </style> ``` ```css // Mixed <style> /* Scoped to this component, only. */ h1 { color: red; } /* Mixed: Applies to child `h1` elements only. */ article :global(h1) { color: blue; } </style> ``` * Combining classes with class:list - We could combine classes based on a condition with class:list ```js --- const { isRed } = Astro.props; --- <!-- If `isRed` is truthy, class will be "box red". --> <!-- If `isRed` is falsy, class will be "box". --> <div class:list={['box', { red: isRed }]}><slot /></div> <style> .box { border: 1px solid blue; } .red { border-color: red; } </style> ``` * Defining CSS variable - We could define the variable within the style tag which would be accessible to all the elements inside that style tag ```js --- const foregroundColor = "rgb(221 243 228)"; const backgroundColor = "rgb(24 121 78)"; --- <style define:vars={{ foregroundColor, backgroundColor }}> h1 { background-color: var(--backgroundColor); color: var(--foregroundColor); } </style> ``` ### Images * Astro provides a built-in <Image /> component designed to optimize image loading and performance in your web applications. This component helps to handle common tasks related to image optimization, such as responsive images, lazy loading, and automatic format selection. * Key Features of the <Image /> Component 1. Automatic optimization 2. Responsive Images 3. Lazy Loading 4. Automatic Format Selection (.webp) 5. Built-in SrcSet Generation 6. Low-Quality Image Placeholder <a id="pros"></a> ## Pros * Fast Load Times: Uses SSG for faster load times. * Partial Hydration: Loads only necessary JavaScript for interactive components. * Server-Side Rendering (SSR): Pre-rendered pages improve SEO. * Component Agnostic: Supports React, Vue, Svelte, and other components within the same project. * Ease of Learning: Straightforward syntax and well-written documentation. * File-Based Routing: Simple and intuitive routing system. * Automatic Image Optimization: Enhances performance and user experience. * CSS and JavaScript Optimization: Automatically optimizes and minimizes assets. * Rich Plugin Ecosystem: Various plugins available for extending functionality. * Multi-page Applications: Supports both SPAs and MPAs. * Scalabitily is good so smaller to medium sized projects. <a id="cons"></a> ## Cons * Build Performance: Long build times for large sites with thousands of pages. * Partial Hydration Concept: Requires learning a new concept for some developers. * Limited Resources: Fewer tutorials and third-party tools compared to more established frameworks. * Limited Plugins: Ecosystem is not as vast as more mature frameworks. * Newer Framework: May not have the same level of maturity or extensive ecosystem as established frameworks. * Limited Resources: Fewer tutorials and community-contributed resources. * Complex Integrations: Challenging integrations with certain third-party services or complex setups. That's it for this post, in part 2, we will be covering more on the codebase side. 
You can contact me on - Instagram - https://www.instagram.com/supremacism__shubh/ LinkedIn - https://www.linkedin.com/in/shubham-tiwari-b7544b193/ Email - shubhmtiwri00@gmail.com You can help me with some donation at the link below Thank you👇👇 https://www.buymeacoffee.com/waaduheck Also check these posts as well {% link https://dev.to/shubhamtiwari909/button-component-with-cva-and-tailwind-1fn8 %} {% link https://dev.to/shubhamtiwari909/microfrontend-react-solid-vue-333b %} {% link https://dev.to/shubhamtiwari909/codium-ai-assistant-for-devs-57of %} {% link https://dev.to/shubhamtiwari909/zustand-a-beginners-guids-fh7 %}
shubhamtiwari909
1,907,750
SQL Course: One-to-one Relationships and Inner Joins.
In this chapter, we will set up a user and a profile table and create a one-to-one relationship...
27,924
2024-07-01T17:28:37
https://dev.to/emanuelgustafzon/sql-course-one-to-one-relationships-and-inner-joins-4e9g
sql
In this chapter, we will set up a user and a profile table and create a one-to-one relationship between them. A user can create a `user` with a username and password and then create a `profile` with a profile image and bio. A user can only have one profile and a profile can only be related to one user. That is why it is called a one-to-one relationship. If you use `Replit`, create the template for `SQLite` and you are good to go. ## Prerequisites with links Press the links to read more about the topic; * [Create a table ](https://www.w3schools.com/sql/sql_create_table.asp) * [Primary Keys](https://www.w3schools.com/sql/sql_primarykey.asp) * [Foreign Keys](https://www.w3schools.com/sql/sql_foreignkey.asp) * [Autoincrement](https://www.w3schools.com/sql/sql_autoincrement.asp) * [Data types](https://www.w3schools.com/sql/sql_datatypes.asp) * [Inner join](https://www.w3schools.com/sql/sql_join_inner.asp) * [Contraints](https://www.w3schools.com/sql/sql_constraints.asp) * [Aliases](https://www.w3schools.com/sql/sql_alias.asp) ## Create the tables. Leave out the primary and foreign keys in the profile table for now. We use the constraint `CHECK` to make sure the password has a length of more than 5 characters. ``` CREATE TABLE Users ( ID INTEGER PRIMARY KEY AUTOINCREMENT, Username VARCHAR(255) UNIQUE NOT NULL, Password VARCHAR(255) NOT NULL, CHECK (LENGTH(Password) > 5) ); CREATE TABLE Profiles ( Img VARCHAR(1), Bio TEXT ); ``` ## Establish the one-to-one relationship. To make sure a user only can create one profile, we can use the `user’s primary key` as the profile’s primary and foreign key. This way the profile is dependent on the user. ``` CREATE TABLE Profiles ( UserID INTEGER NOT NULL PRIMARY KEY, Img VARCHAR(1), Bio TEXT, FOREIGN KEY (UserID) REFERENCES Users(ID) ); ``` ## Insert data to the user and profile. ``` INSERT INTO Users (Username, Password) VALUES ('Ben', 'secret'), ('Jim', 'secret'), ('Luk', 'secret'); INSERT INTO Profiles (UserID, Img, BIO) VALUES (1, '😎', 'I am a cool guy'), (2, '🥳', 'I love party'), (3, '🤠', 'I am a cowboy'); ``` # Query user and profile information using inner join. * We use Aliases to give the temporary names `u` for Users and `p` for Profiles. * Use SELECT to choose what fields you want selected from the database. * Join the Profiles with the Users where the ID's match. ``` SELECT u.Username, p.Img, p.Bio FROM Profiles p JOIN Users u ON p.UserID = u.ID; ``` Result: Ben|😎|I am a cool guy Jim|🥳|I love party Luk|🤠|I am a cowboy
emanuelgustafzon
1,908,503
RouterInput in Angular v16
Introduction: Angular v16 has recently been released, bringing along a range of new...
0
2024-07-02T06:52:41
https://dev.to/sanjaisak/routerinput-in-angular-v16-p8j
webdev, angular, javascript
## **Introduction:** Angular v16 has recently been released, bringing along a range of new features aimed at enhancing the developer experience. One of these exciting features is RouterInput, which allows developers to retrieve route information through inputs. In this blog post, we will explore how RouterInput can be utilized and discuss its benefits. ## **What is RouterInput?** RouterInput is a new API introduced in Angular v16 that enables developers to listen to route parameters through input bindings in their components. This means that when a route parameter changes, the component receives the updated value through its bound input property, allowing for dynamic updates. ## **Getting Started with RouterInput:** To start using RouterInput, you need to **update your route provider in the main.ts** file. This can be done by importing the necessary modules and adding the "withComponentInputBinding()" method to your route provider configuration. **Here's an example**: ``` import { provideRouter, withComponentInputBinding } from '@angular/router'; bootstrapApplication(AppComponent, { providers: [ provideRouter(appRoutes, withComponentInputBinding()) ] }); ``` Once the route provider is updated, you can start utilizing RouterInput in your components. Simply add the "@Input()" decorator to the properties that you want to bind to route parameters. Here's an example: ``` @Input() testId!: string; @Input() permission!: string; @Input() user!: string; ``` With these input bindings, your component will be notified whenever the corresponding route parameters change. ## **Additional Notes on RouterInput:** 1. Router Inputs are only accessible within the routed component. If you need to access these parameters within a child component, you can either pass the input through input binding between the parent and child components or use other methods mentioned earlier. 2. If you prefer a more descriptive decorator name for RouterInput, you can remap it in its definition using the "import {Input as RouterInput} from '@angular/core'" statement. This allows you to use the new decorator name in your component. ## **Conclusion:** RouterInput is a valuable addition to Angular v16, providing developers with a convenient way to retrieve and react to route parameters through input bindings. By incorporating RouterInput into your Angular projects, you can enhance the flexibility and responsiveness of your components. So, go ahead and explore this new feature to further improve your Angular development experience. Happy coding!
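For completeness, here is a hypothetical route configuration that would feed the three inputs shown above (the path, component name, and values are assumptions for illustration): with `withComponentInputBinding()` enabled, path parameters, query parameters, and static route data are all bound to `@Input()` properties of the same name.

```ts
// app.routes.ts (hypothetical)
import { Routes } from '@angular/router';
import { TestComponent } from './test.component';

export const appRoutes: Routes = [
  {
    path: 'tests/:testId',            // ':testId'  -> @Input() testId
    component: TestComponent,
    data: { permission: 'read' },     // route data -> @Input() permission
  },
];

// Navigating to /tests/42?user=jane would set the routed component's inputs to:
//   testId = '42', permission = 'read', user = 'jane'
```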
sanjaisak
1,908,625
Streamlining Tariff Classification for Seamless Customs Declarations
As developers, efficiency is key. Here’s how accurate Ai in tariff classification can simplify your...
0
2024-07-02T08:30:23
https://dev.to/john_hall/streamlining-tariff-classification-for-seamless-customs-declarations-1587
ai, productivity, learning, software
As developers, efficiency is key. Here’s how accurate AI in tariff classification can simplify your customs processes and boost your international trade operations.

## Why Tariff Classification Matters

The [Harmonised System (HS)](https://www.icustoms.ai/blogs/hs-code/) is used globally by over 200 countries, covering more than 98% of trade merchandise. It offers several benefits:

- Faster customs processing
- Reduced trade barriers
- Improved trade data accuracy

Accurate classification ensures smooth operations and minimizes delays.

## Key Benefits of Accurate Tariff Classification

- **Customs Duties:** Accurate tariffs mean lower costs.
- **Regulatory Compliance:** Stay compliant and avoid penalties.
- **Trade Agreements:** Benefit from trade deals and reduce expenses.

## Importance of Commodity Codes

In the UK, HMRC requires precise descriptions for arriving goods. Correct commodity codes ensure:

- Seamless customs clearance
- Accurate tax and duty payments
- No delays or rejections

## Challenges Importers Face

- **Paperwork Overload:** Dealing with endless documents and regulations.
- **Time Management:** Meeting strict declaration deadlines.
- **Finding the Right Codes:** Incorrect codes lead to rejections.
- **Manual Data Entry Errors:** Mistakes due to lack of expertise.
- **Country-Specific Issues:** Navigating different trading policies.

## Overcoming Software Hesitation

Some traders hesitate to use customs software, but tools like iCDS leverage AI to find accurate HS codes and streamline the declaration process.

## How iCustoms Helps

iCustoms simplifies tariff classification and more:

- EU-wide declaration services
- Shipment tracking for customs brokers
- Duty claim tools for VAT calculations
- Automated documentation for easy declarations

## In Summary

Accurate tariff classification is crucial for global trade, impacting taxes and customs clearance. Get it right to avoid problems and save money.

Dive into our full guide to simplify your [global trade with accurate tariff classification](https://www.icustoms.ai/blogs/tariff-classification-for-customs-declaration/).
john_hall
1,908,686
1. Series introduction
In this series I will explain principles of vim and neovim, and it is complementary to another series...
27,946
2024-07-03T09:34:44
https://dev.to/stroiman/series-introduction-2o3n
neovim, vim
In this series I will explain principles of vim and neovim, and it is complementary to another series where I rebuild my configuration from scratch. When I reach a new vim concept in my configuration that I think would benefit from an in-depth explanation, I will add an article to this series about that concept.

Note on terminology: whenever I write "vim", the information is valid for both vim and neovim. When a topic is specific to neovim, I will write "neovim".

## This series is not a replacement for the tutor

This is not meant to teach you _editing_ in vim/neovim, as this is about explaining concepts that become relevant when configuring and customising neovim to your liking.

If you are new to vim and neovim, and want to learn, both have a tutorial that teaches basic editing, navigation, and _motions_ (see `:help motion`).

Vim's tutorial can be launched directly from the command line (this is how I originally learned vim):

```sh
> vimtutor
```

Neovim's tutorial can be launched from neovim itself: launch neovim, type `:Tutor`, and press enter. It's basically the same with minor changes, and some fancy additions.
stroiman
1,909,233
Setting up a pipeline with GitHub Actions for a React project
It had been a while since I last set up a CI/CD service, and the other day I did it on GitHub using the...
0
2024-07-02T17:33:24
https://dev.to/mayconbalves/configurando-pipeline-com-github-actions-para-um-projeto-react-lj4
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tg8yyecw3apdp5dnbfev.png)

It had been a while since I last set up a CI/CD service, and the other day I did it on GitHub using GitHub Actions. I realized it is quite simple, and we can automate all kinds of routines. It is really nice even if you are working alone, because every commit gets checked so you know nothing is broken. I will show a simple configuration that you can adapt to whatever you need in your day-to-day work.

First step: in the root of your project, create a `.github` folder, inside it create another folder called `workflows`, and inside that create the file that will run whenever a commit lands on your main branch or a Pull Request is opened. I will call the file `build.yaml` for this example:

```yaml
name: Build and Deploy
on:
  push:
    branches: 'main'
  pull_request:
    branches: 'main'

jobs:
  tests:
    name: Run Tests
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Use Node.js 20.x
        uses: actions/setup-node@v3
        with:
          node-version: 20.x
          cache: 'yarn'

      - name: Install Dependencies
        run: yarn install

      - name: Run Tests
        run: yarn test

  deploy:
    name: Build and Deploy
    needs: [tests]
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Use Node.js 20.x
        uses: actions/setup-node@v3
        with:
          node-version: 20.x
          cache: 'yarn'

      - name: Install Dependencies
        run: yarn install

      - name: Run Build
        run: yarn build
```

Let's go through a quick explanation.

1. `on`: in this property we say when the action should run — in this case, whenever a commit or a PR targets the main branch.

2. `jobs`: here we declare which scripts we want our action to execute. In my case there are only two, tests and deploy. The first thing is the name of the task — in my case Run Tests and Build and Deploy. Then we have `runs-on`, which is the environment we want our pipeline to run on; here we ask for an Ubuntu virtual machine on the latest version. Before getting to the steps, our build job has an instruction called `needs` with the argument `[tests]`, meaning that for our deploy to run, the tests have to run first without anything breaking.

3. `steps`: we have a few routines in here. In the first `name` and `uses`, the actions check out your code and "ask for authorization" to proceed with the next routines:

```yaml
- name: Checkout code
  uses: actions/checkout@v3
```

Next we can declare which Node version it will use to run our project's scripts:

```yaml
- name: Use Node.js 20.x
  uses: actions/setup-node@v3
  with:
    node-version: 20.x
    cache: 'yarn'
```

You will usually see this `cache` property with npm, but I preferred yarn because I was already using it locally.

Finally, we ask it to actually run the install script and, right after that, the tests:

```yaml
- name: Install Dependencies
  run: yarn install

- name: Run Tests
  run: yarn test
```

The build job is the same, only changing from `yarn test` to `yarn build`.

Well, that's it. It is something quite simple to build, and it is nice to set it up in different environments and experiment with it. I hope you liked it, and any feedback is very welcome. Thank you very much.
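As a possible follow-up to the workflow above: the deploy job currently stops at `yarn build`. One way to actually publish the build output — a sketch only, assuming the site goes to GitHub Pages and the output lives in `./build` — would be to append a step like this (the action and its inputs come from the third-party `peaceiris/actions-gh-pages` project, not from the original post):

```yaml
- name: Publish to GitHub Pages
  uses: peaceiris/actions-gh-pages@v3
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    publish_dir: ./build
```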
mayconbalves
1,909,249
Type conversion
Type conversion in assignments: it is common in programming to assign a value of one type to a variable of another...
0
2024-07-02T20:48:29
https://dev.to/devsjavagirls/conversao-de-tipos-127g
java
**Type conversion in assignments**

- It is common in programming to assign a value of one type to a variable of another type. Example: assigning an int value to a float variable.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0zzwtgkrzddk442e5qia.png)

- When compatible types are mixed in an assignment, the value on the right-hand side is automatically converted to the type on the left-hand side. In the example, the value of i is converted to float and then assigned to f.

- Not all types are compatible, because of Java's strict type checking (e.g. boolean and int are not compatible).

- An automatic conversion happens when: the two types are compatible, and the destination type is larger than the source type.

- When these conditions are met, a widening conversion takes place. Example: int can hold every valid byte value, which allows an automatic conversion from byte to int.

- Numeric types, both integer and floating-point, are compatible with each other in widening conversions. Valid example: turning a long into a double is a widening conversion performed automatically.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3asq8fk5rb21vc8p8w06.png)

There is an automatic conversion from long to double, but there is no automatic conversion from double to long, since that is not a widening conversion.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98l3d1ebkp8gu4w7pjua.png)

- There are no automatic conversions from the numeric types to char or boolean.

**Converting incompatible types**

- Automatic type conversions are useful, but they do not cover every programming need, since they only apply to widening conversions between compatible types.

- For every other case, you must use a cast.

- A cast is an instruction that explicitly requests the conversion of one type into another.

- General form of a cast: (target-type) expression. Example: (int) (x / y) converts the result of the expression x / y to int.

- The parentheses around x / y are required to guarantee that the cast is applied to the result of the division and not just to x.

- A cast is required when there is no automatic conversion between the types, such as from double to int.

- A cast that involves a narrowing conversion can result in loss of information.

- For example, when converting a long to a short, information will be lost if the long value is larger than the range of short, because the high-order bits are dropped.

- When a floating-point value is converted to an integer type, the fractional component is lost through truncation. Example: assigning the value 1.23 to an integer yields just 1, and the 0.23 is lost.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vh1gttsd9k0f7d6sgpkz.png)

Output:
Integer outcome of x / y: 3
Value of b: 100
Value of b: 1
ch: X

- Casting (x / y) to int truncates the fractional component, losing information.

- No information is lost when b receives the value 100, because a byte can hold that value.

- When trying to assign the value 257 to b, information is lost because 257 exceeds the maximum value of a byte.

- No information is lost, but a cast is still required, when assigning a byte value to a char.
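Since the code in this post is embedded in screenshots, here is a small self-contained reconstruction of what the final example appears to show. The variable names and exact values are assumptions, chosen so the program prints the output quoted above.

```java
public class Conversion {
    public static void main(String[] args) {
        double x = 10.0, y = 3.0;
        byte b;
        char ch;

        // Casting the division to int truncates the fractional part (3.33... -> 3).
        System.out.println("Integer outcome of x / y: " + (int) (x / y));

        // 100 fits in a byte, so nothing is lost.
        b = (byte) 100;
        System.out.println("Value of b: " + b);

        // 257 does not fit: only the low-order 8 bits remain (257 - 256 = 1).
        b = (byte) 257;
        System.out.println("Value of b: " + b);

        // byte -> char needs an explicit cast, but no information is lost here.
        byte code = 88; // 88 is the character 'X'
        ch = (char) code;
        System.out.println("ch: " + ch);
    }
}
```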
devsjavagirls
70,647
Automating GitHub tasks with Node-RED
How to build easy-to-use workflows on Node-RED to automate GitHub tasks
0
2018-12-23T22:01:38
https://dev.to/webhookrelay/automating-github-tasks-with-node-red-3cgc
javascript, nodered, devops, productivity
--- title: Automating GitHub tasks with Node-RED published: true description: How to build easy-to-use workflows on Node-RED to automate GitHub tasks tags: javascript, node-red, devops, productivity --- ## Some background While majority of the [Node-RED](https://nodered.org/) community seems to be interested in home automation (me too, but just a little bit and I honestly think it's light-years ahead of other home automation solutions), I focus mostly on automating my ops tasks as a developer/engineer such as updating cluster resources, for that I wrote [Keel](https://github.com/keel-hq/keel), updating CLI clients through continuous delivery, manifests. [Homebrew](https://brew.sh/) is a package manager for MacOS where anyone can create their application manifests for an easy distribution and installation. However, there are multiple ways brew formulas can work, such as compiling during installation or downloading a binary. My method of installation is downloading a pre-compiled binary but I had to always remember to update the formula with the latest sha256 https://github.com/webhookrelay/homebrew-tap/blob/master/relay.rb#L5. If a newer version would be released without an update to this manifest, installation via homebrew would fail :) Well, and there were times when I forgot that :D ## Disclaimer This flow is not directly reusable for you as it's working around my infrastructure. The purpose is to showcase some template generation, webhookrelay node and I also managed to find a problem in crypto node (more about it later). ## Stack Node-RED on RPI with [ node-red-contrib-webhookrelay](https://flows.nodered.org/node/node-red-contrib-webhookrelay) so I don't need to expose it to the internet, [GitHub account + node](https://flows.nodered.org/node/node-red-contrib-github) and Slack (that's really optional, but helpful as I push lots of updates to my Slack channel). ## The flow The actual flow looks like this: ![node-red-brew-update](https://thepracticaldev.s3.amazonaws.com/i/e7bcuace8q410xbviw08.png) I have uploaded flow here: https://gist.github.com/rusenask/93ddb94479fbd6f1e3dcf3308da4ec1b. 1. Initially, workflow is triggered by a build job that's being executed with [Google Cloud Builder](https://cloud.google.com/cloud-build/docs/) and a webhook is dispatched through [this little application](https://github.com/webhookrelay/webhook-dispatcher) 2. Flow is triggered by a webhook to webhookrelay.com endpoint which is routed through a tunnel to a [node here](https://flows.nodered.org/node/node-red-contrib-webhookrelay) which are then translated into Node-RED events. 3. Flow then downloads the binary and saves to disk. 4. `sha256` custom function node is actually required due to a problem that I found with generating it via `node-red-contrib-crypto-js`. The problem I suspect was due to the way data is being fed into the crypto node (not in one go but with streams). Even setting it to 'one bytes buffer' it would still present a wrong sha. 
So, I had to:

In settings.js enable `require` as we will need additional crypto libs:

```js
functionGlobalContext: {
    require: require
},
```

And then read the file and calculate digest:

```js
var require = global.get('require');
var crypto = require('crypto');
var fs = require('fs');

var algo = 'sha256';
var shasum = crypto.createHash(algo);

var file = '/tmp/relay-darwin-amd64-nr';
var s = fs.ReadStream(file);
s.on('data', function(d) { shasum.update(d); });
s.on('end', function() {
    var sum = shasum.digest('hex');
    node.send({payload: sum});
});

return;
```

5. Once we have the sha256 sum, we create two templates: one for the GitHub brew formula and one for Slack.
6. For GitHub we just do a simple overwrite since the file is quite small and only the sha sum ever changes.
7. As for Slack, I chose just to do a simple HTTP request with their incoming webhook integration as the available Slack node seemed a bit too heavyweight for such a tiny requirement. Have a look at the payload that needs to be sent:

```json
{
    "response_type": "in_channel",
    "text": "Brew formula SHA updated to: {{payload}}"
}
```

You could expand it to add pictures, buttons, attachments and so on, but in this case I just needed to see a notification.

## To sum up

So far I am really happy with Node-RED. Even though it would be pretty straightforward to just write everything in Go/JS/Python, for some reason it's more fun in Node-RED :) I will be automating more and more side project tasks with it. Also, the `function` node is a life saver!
krusenas
1,909,704
Embrace the Richness of India: Drape Yourself in a Lambani Embroidery Saree
Introduction: India is a land of rich cultural heritage, and one way to embrace its richness is by...
0
2024-07-03T05:48:42
https://dev.to/tittibha/embrace-the-richness-of-india-drape-yourself-in-a-lambani-embroidery-saree-377h
saree, fashion
## Introduction:

India is a land of rich cultural heritage, and one way to embrace its richness is by adorning yourself in a **[Lambani embroidery saree](https://tittibha.in/product/do-pahar/)**. Lambani embroidery is a traditional craft that showcases the vibrant colors and intricate designs of the Lambani community in Karnataka. In this article, we will explore the beauty and significance of Lambani embroidery sarees and why they are the perfect choice to embrace the richness of India. Whether it's a special occasion or a desire to celebrate Indian craftsmanship, draping yourself in a Lambani embroidery saree allows you to embody the cultural heritage and elegance of India.

## The Art of Lambani Embroidery:

Lambani embroidery is a centuries-old craft that reflects the rich cultural traditions of the Lambani community. Here are some key aspects of the art of Lambani embroidery:

1. **Intricate Designs**: Lambani embroidery sarees are known for their intricate designs that are inspired by nature, tribal culture, and mythology. Skilled artisans meticulously stitch intricate patterns and motifs using techniques like chain stitch, herringbone stitch, and mirror work. Each design tells a story and represents the rich heritage of the Lambani community.
2. **Vibrant Color Palette**: Lambani embroidery sarees are characterized by their vibrant and contrasting color combinations. The use of bold hues like red, blue, green, and yellow adds a lively touch to the sarees, making them visually striking and captivating.
3. **Mirror Work**: Mirror work is an essential element of Lambani embroidery. Small mirrors, called "abhla," are intricately stitched onto the fabric, creating a shimmering effect. The mirrors not only add a decorative touch but also hold cultural significance, believed to ward off evil spirits.

## Embracing Lambani Embroidery Sarees:

Draping yourself in a Lambani embroidery saree allows you to embrace the richness of India and celebrate its cultural heritage. Here are some ways to embrace Lambani embroidery sarees:

1. **Special Occasions**: Lambani embroidery sarees are perfect for special occasions like weddings, festivals, and cultural events. By wearing a Lambani embroidery saree, you not only showcase the beauty of Indian craftsmanship but also pay homage to the rich cultural traditions of India.
2. **Celebrating Indian Artistry**: By choosing a Lambani embroidery saree, you celebrate the artistry and skills of Indian artisans. Each stitch on the saree represents the dedication and creativity of the Lambani community, and wearing it is a way to honor their craftsmanship.
3. **Supporting Artisan Communities**: When you purchase a Lambani embroidery saree, you directly support the livelihoods of skilled Lambani artisans. By supporting these communities, you contribute to the preservation of their traditional craft and help sustain their cultural heritage.

## Conclusion:

Draping yourself in a **[Lambani embroidery saree](https://maps.app.goo.gl/q15rY1g6RbYztYyz9)** allows you to embrace the richness of India and celebrate its cultural heritage. The intricate designs, vibrant colors, and mirror work of Lambani embroidery sarees reflect the artistry and skills of the Lambani community. By wearing a Lambani embroidery saree, you not only adorn yourself in a beautiful garment but also pay tribute to the cultural traditions and craftsmanship of India. Embrace the richness of India and let the elegance of a Lambani embroidery saree become a part of your wardrobe.
tittibha
1,910,152
The Differences Between EntityFramework .Add and .AddAsync
Introduction Entity Framework is a popular Object-Relational Mapper (ORM) for .NET, which...
0
2024-07-03T15:25:01
https://dev.to/tkarropoulos/the-differences-between-entityframework-add-and-addasync-2koo
csharp, softwaredevelopment, entityframework
### Introduction

Entity Framework is a popular Object-Relational Mapper (ORM) for .NET, which allows us to interact with a database using .NET objects. One of the fundamental operations all of us who work with databases perform is adding new entities. In this article I will try to explain the two methods we have at our disposal for entity creation, `Add` and `AddAsync`. We will explore their usage, their differences, and best practices for when to use each one.

### What is `DbSet<TEntity>.Add`?

The `DbSet<TEntity>.Add` method is a synchronous method used to add a new entity to the context, which is then tracked by EF Core. When SaveChanges is called, EF Core generates the necessary SQL statements to insert the new entity into the database.

Let's see a usage example of the `Add` method:

```csharp
public async Task AddNewEntityAsync(Entity entity, CancellationToken ct)
{
    using (var context = new ApplicationDbContext())
    {
        context.Entities.Add(entity);
        await context.SaveChangesAsync(ct);
    }
}
```

How the above example works:

- The `Add` method attaches the entity to the context with an `Added` state
- When `SaveChangesAsync` is called, the context creates an `INSERT` SQL command to add the entity to the database asynchronously

### What is `DbSet<TEntity>.AddAsync`?

The `AddAsync` method is the asynchronous counterpart of the `Add` method. It is used to add a new entity to the context asynchronously.

A usage example of `AddAsync` is given below:

```csharp
public async Task AddAsync(Entity entity)
{
    using (var context = new ApplicationDbContext())
    {
        await context.Entities.AddAsync(entity);
        await context.SaveChangesAsync();
    }
}
```

How it works:

- The `AddAsync` method attaches the entity to the context with an `Added` state asynchronously
- When `SaveChangesAsync` is called, the context creates an `INSERT` SQL command to add the entity to the database asynchronously

### Key Differences Between `Add` and `AddAsync`

Now the first question that pops into our minds is: OK, so I will use `Add` when I am in a synchronous context and `AddAsync` when I am in an asynchronous one. This is just one aspect; the other is how `Add` and `AddAsync` interact with the database.

As [Microsoft](https://learn.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.dbcontext.addasync?view=efcore-8.0) explains

> This method is async only to allow special value generators, such as the one used by 'Microsoft.EntityFrameworkCore.Metadata.SqlServerValueGenerationStrategy.SequenceHiLo', to access the database asynchronously. For all other cases the non async method should be used.

That simply means that even though `Add` and `AddAsync` do the same thing, `AddAsync` is only needed when a special value generator (such as HiLo) has to access the database while the entity is being added; in every other case it just brings a little extra overhead, which is avoided by using `Add`.

If you found my article useful, feel free to share it. Also, leave a comment if something is not clear enough.
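To make the HiLo remark above concrete, here is a sketch (my own illustration, not from the original article) of the one situation where `AddAsync` genuinely pays off: an entity whose key comes from a HiLo sequence on SQL Server, so fetching the next key block may require a database round trip while the entity is being added.

```csharp
// Model configuration: Id values come from a HiLo sequence in the database.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Entity>()
        .Property(e => e.Id)
        .UseHiLo("EntityHiLoSequence"); // SQL Server provider extension
}

// Because generating the key may hit the database, AddAsync avoids blocking here.
public async Task AddWithHiLoAsync(Entity entity, CancellationToken ct)
{
    using var context = new ApplicationDbContext();
    await context.Entities.AddAsync(entity, ct);
    await context.SaveChangesAsync(ct);
}
```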
tkarropoulos
1,910,427
Automating Linux User Creation with a Bash Script
A Practical Script for SysOps Engineers In the SysOps engineer world, automation is key,...
0
2024-07-03T21:47:31
https://dev.to/sipsdaoracle/automating-linux-user-creation-with-a-bash-script-1fc2
linux, bash, devops, hng
## A Practical Script for SysOps Engineers In the SysOps engineer world, automation is key, especially for repetitive tasks that save valuable time and ensure consistency. As a SysOps engineer, knowing how to write robust and efficient scripts along with strong Linux skills, boosts productivity and gives you a better workflow. This article will walk you through a bash script designed to automate the creation of Linux users. The script creates users and groups as specified, sets up home directories with appropriate permissions and ownership, it generates random passwords for the users, and logs all actions to `/var/log/user_management.log`. Additionally, it stores the generated passwords securely in `/var/secure/user_passwords.txt`. and covers error handling for scenarios like existing users. ## Prerequisite: - Bash shell environment. - Text editor for editing the script and preparing the input file. Let's have a look at our script which we will later use to accomplish this task, and break it down by a detailed explanation of each section: ```bash #!/bin/bash # Check if the script is run as root if [[ "$(id -u)" -ne 0 ]]; then echo "You must run the script as root" >&2 exit 1 fi # Check if a filename is provided if [[ -z "$1" ]]; then echo "Usage: $0 <name-of-text-file>" >&2 exit 1 fi # File paths LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Ensure /var/secure storage for passwords mkdir -p /var/secure touch $PASSWORD_FILE chown root:root /var/secure chmod 700 /var/secure chmod 600 $PASSWORD_FILE # Function to log actions log_action() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a $LOG_FILE } # Function to generate a random password generate_password() { tr -dc 'A-Za-z0-9!@#$%^&*()_+=-[]{}|;:<>,.?/~' </dev/urandom | head -c 16 } # Function to hash passwords hash_password() { echo "$1" | openssl passwd -6 -stdin } # Read the input file line by line while IFS=";" read -r username groups; do # Trim whitespace username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) # Skip empty lines if [[ -z "$username" ]]; then continue fi # Check if user already exists if id "$username" &>/dev/null; then log_action "User $username already exists!" continue fi # Create personal group if groupadd "$username"; then log_action "Group $username created successfully." else log_action "Failed to create group $username." continue fi # Create user with a home directory and personal group if useradd -m -s /bin/bash -g "$username" "$username"; then log_action "User $username created successfully." else log_action "Failed to create user $username." continue fi # Create groups and add user to them IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do group=$(echo "$group" | xargs) if ! getent group "$group" >/dev/null 2>&1; then if groupadd "$group"; then log_action "Group $group created." else log_action "Failed to create group $group." continue fi fi if usermod -aG "$group" "$username"; then log_action "User $username added to group $group." else log_action "Failed to add user $username to group $group." fi done # Generate random password password=$(generate_password) hashed_password=$(hash_password "$password") if usermod --password "$hashed_password" "$username"; then echo "$username,$password" >> $PASSWORD_FILE log_action "Password set for user $username." else log_action "Failed to set password for user $username." 
fi # Set home directory permissions if mkdir -p "/home/$username" && chown -R "$username:$username" "/home/$username" && chmod 755 "/home/$username"; then log_action "Home directory permissions set for user $username." else log_action "Failed to set home directory permissions for user $username." fi done < "$1" log_action "User creation process completed. Check $LOG_FILE for details." ``` ### Breaking Down the Script #### 1. Check for Root Privileges ```bash if [[ "$(id -u)" -ne 0 ]]; then echo "You must run the script as root" >&2 exit 1 fi ``` This part checks if the script is run by the root user, this is because only the root user has permission to create and manage other users. If the script is not run as root, it will print an error message and exit. #### 2. Check for Filename Argument ```bash if [[ -z "$1" ]]; then echo "Usage: $0 <name-of-text-file>" >&2 exit 1 fi ``` This section will check if the user provided a filename as an argument when running the script. If no filename is found, it will print a usage message and exit. #### 3. Define File Paths ```bash LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" ``` These lines define the file paths for logging actions and storing passwords. #### 4. Ensure Secure Storage for Passwords ```bash mkdir -p /var/secure touch $PASSWORD_FILE chown root:root /var/secure chmod 700 /var/secure chmod 600 $PASSWORD_FILE ``` This part of the script ensures that the `/var/secure` directory and the password file exist with appropriate permissions for secure storage. `mkdir -p` creates the directory if it doesn't exist, `touch` creates the password file, and `chmod` and `chown` set the permissions so only the root user can read and write to it. #### 5. Logging Function ```bash log_action() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a $LOG_FILE } ``` The `log_action` function's job is to log messages with timestamps to the log file. It uses the `date` command to add a timestamp to each log entry and `tee` to append the message to the log file. #### 6. Password Generation Function ```bash generate_password() { tr -dc 'A-Za-z0-9!@#$%^&*()_+=-[]{}|;:<>,.?/~' </dev/urandom | head -c 16 } ``` The `generate_password` function generates a random password. `tr` is used to filter characters from `/dev/urandom` (a random number generator), and `head -c 16` ensures the password is 16 characters long. #### 7. Password Hashing Function ```bash hash_password() { echo "$1" | openssl passwd -6 -stdin } ``` The `hash_password` function hashes the generated password using the `openssl` command with SHA-512 encryption. This makes the password secure before storing it. #### 8. Read and Process Input File ```bash while IFS=";" read -r username groups; do # Trim whitespace username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) # Skip empty lines if [[ -z "$username" ]]; then continue fi ``` This part reads the input file line by line, separating the username and groups by the semicolon (`;`). It also trims any whitespace around the username and groups and skips any empty lines. #### 9. Check for Existing User ```bash if id "$username" &>/dev/null; then log_action "User $username already exists!" continue fi ``` This checks if the user already exists. If so, it logs the action and skips to the next user. #### 10. Create Personal Group ```bash if groupadd "$username"; then log_action "Group $username created successfully." else log_action "Failed to create group $username." 
    continue
fi
```

This creates a personal group with the same name as the username.

#### 11. Create User

```bash
if useradd -m -s /bin/bash -g "$username" "$username"; then
    log_action "User $username created successfully."
else
    log_action "Failed to create user $username."
    continue
fi
```

This creates the user with a home directory and assigns the personal group as the primary group.

#### 12. Create and Assign Groups

```bash
IFS=',' read -ra group_array <<< "$groups"
for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs)
    if ! getent group "$group" >/dev/null 2>&1; then
        if groupadd "$group"; then
            log_action "Group $group created."
        else
            log_action "Failed to create group $group."
            continue
        fi
    fi
    if usermod -aG "$group" "$username"; then
        log_action "User $username added to group $group."
    else
        log_action "Failed to add user $username to group $group."
    fi
done
```

This splits the comma-separated group list, trims whitespace, creates any group that does not exist yet, and adds the user to each one.

## How to use the script?

1. Save the script as create_users.sh in your preferred directory.
2. Prepare a text file (users_and_groups.txt) with one `username;groups` entry per line, where the groups are separated by commas. For example:
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
3. Make the script executable with chmod +x create_users.sh.
4. Run the script with the path to your text file as an argument: ./create_users.sh users_and_groups.txt.

## Conclusion
By following the structured steps outlined above, SysOps engineers can efficiently manage large numbers of user accounts and their associated groups, providing a robust solution for onboarding new developers.

## Acknowledgment
This script was developed as part of the HNG Internship DevOps track. For more details on the program, visit the [HNG Internship Website](https://hng.tech/internship). You can learn more about the HNG Internship program [here](https://hng.tech/hire) and [here](https://hng.tech/premium).

Thank you for reading.
sipsdaoracle
1,910,429
# 🌳 Dive into Decision Trees: A Fun Guide! 🌳
Hey there, fellow data enthusiasts! 👋 Are you ready to dive into the world of Decision Trees? 🌲 Let's...
0
2024-07-03T16:35:51
https://dev.to/aviralgarg05/-dive-into-decision-trees-a-fun-guide-590l
ai, python, tensorflow, machinelearning
Hey there, fellow data enthusiasts! 👋 Are you ready to dive into the world of Decision Trees? 🌲 Let's make it interactive and fun with emojis! 🎉 ## What is a Decision Tree? 🤔 A Decision Tree is like a flowchart that helps us make decisions based on data. Each node represents a decision point, and the branches show the possible outcomes. It's a powerful tool in the world of Machine Learning! 🚀 ## Why Use Decision Trees? 🤷‍♂️ 1. **Simplicity**: Easy to understand and interpret. 🧠 2. **Versatility**: Can handle both numerical and categorical data. 🔢🔤 3. **No Need for Data Normalization**: Works well with raw data. 🌟 4. **Feature Importance**: Helps identify the most important features. 🔍 ## How Does It Work? 🛠️ 1. **Start at the Root**: Begin with the entire dataset. 🌱 2. **Split the Data**: Based on a feature, split the data into branches. 🌿 3. **Repeat**: Continue splitting until each leaf (end node) contains a single class or meets stopping criteria. 🍂 ### Example Time! 📝 Imagine we have data about fruits, and we want to classify them based on features like color, size, and shape. 🍎🍌🍊 1. **Root Node**: Is the fruit color red? - Yes: 🍎 - No: Go to next question. 2. **Next Node**: Is the fruit shape long? - Yes: 🍌 - No: 🍊 And voila! We have our decision tree! 🌳 ## Pros and Cons 🆚 ### Pros 👍 - **Easy to Understand**: Visual representation makes it intuitive. - **No Data Scaling Needed**: Works with raw data. - **Handles Both Types of Data**: Numerical and categorical. ### Cons 👎 - **Overfitting**: Can create overly complex trees. - **Sensitive to Data Variations**: Small changes can alter the tree. - **Less Accurate**: Compared to ensemble methods. ## Visualizing Decision Trees 👀 Visualizations make it easier to interpret decision trees. Tools like Graphviz and libraries like Scikit-learn in Python can help create these visualizations. 🖼️ ```python from sklearn import tree import matplotlib.pyplot as plt # Example Code to Visualize a Decision Tree model = tree.DecisionTreeClassifier() model.fit(X_train, y_train) plt.figure(figsize=(12,8)) tree.plot_tree(model, filled=True) plt.show() ``` ## Let's Play! 🎮 Ready to try out Decision Trees? Here's a challenge for you: - **Dataset**: Use the Iris dataset (a classic in ML). - **Goal**: Classify the species of Iris flowers based on sepal/petal length and width. Share your results in the comments below! 💬 ## Conclusion 🎬 Decision Trees are a fantastic starting point in the world of Machine Learning. They're simple yet powerful and can handle a variety of data types. So, go ahead and plant your Decision Tree today! 🌳🌟 Happy coding! 💻✨
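If you want a concrete starting point for the Iris challenge above, here is a minimal sketch (my own addition, using scikit-learn's bundled copy of the dataset); tweak the split and the tree's depth and see how the accuracy changes. 🌱

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load the classic Iris dataset: 150 flowers, 4 features, 3 species.
X, y = load_iris(return_X_y=True)

# Hold out 25% of the data to check how well the tree generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Limiting the depth is one simple guard against overfitting.
model = DecisionTreeClassifier(max_depth=3, random_state=42)
model.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```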
aviralgarg05
34,247
iOS remote debugging - a how-to guide
find out how to enable iOS remote debugging in Xcode and debug apps on remote iOS devices.
0
2018-06-06T12:06:03
https://dev.to/rickey_oak/ios-remote-debugging---a-how-to-guide-2onh
iosdebug, debugiosapp, remotedebugios, debugiosappondevice
--- title: iOS remote debugging - a how-to guide published: true description: find out how to enable iOS remote debugging in Xcode and debug apps on remote iOS devices. tags: ios debug, debug ios app, remote debug ios, debug ios app on device --- # iOS remote debugging - a how-to guide When it comes to testing and debugging of iOS applications, the emulator cannot fully replace testing them on actual devices. Well, if you have the possibility to connect iPhone, iPad, etc. directly to your development machine, you don’t have to worry. But what if you needed to inspect an app running on a device which couldn’t be physically connected to your Mac? ## Wireless debugging Apple has added an option for wireless debugging in Xcode 9. But before you start, you should bear in mind **several conditions**: * Mac computer and iOS device should share the same network. * You should use Xcode 9.0 or later, macOS 10.12.4 or later, and iOS 11.0 or later. Okay, now lets proceed to the actual steps that will help you enable a remote debugging: 1. Open your project on Xcode. 2. Select Window > Devices and Simulators. 3. In the window that appears, click “Devices”. 4. Link your device to your Mac via a USB cable. 5. In the left column, choose the device, and in the detail area, check the box “Connect via network”. ![iOS remote debugging](https://www.flexihub.com/images/upload/flexihub/articles/ios/xcode_remote_debugging.png "Wireless debugging Xcode") Now, Xcode will pair with your device. If Xcode can connect with the device over the network, a network icon will appear next to the device in the left column and you can safely disconnect your device. ## How to debug iOS apps on remote devices If the first condition is not met and the device does not share the same network as Mac or even located miles away from the machine, remote debug with the help of Xcode will not be possible. However, there is a workaround! You will still be able to debug such device with the help of USB over Ethernet redirection software, e.g. [FlexiHub](https://www.flexihub.com/). It is designed to virtualize USB devices, (like iPhone, iPad, iPod, etc.) and redirect them over the network. The software works across LAN, Ethernet, and the Internet. 1. To establish a remote connection to the iOS device, [download and install Flexihub](https://www.flexihub.com/download.html) on both the local machine running the debugger and a remote computer with the iOS device physically attached. 2. Launch FlexiHub on the local machine and register an account - click ‘Sign up’ link and follow the instructions. Sign into the account. 3. Start the software on the remote computer and log into FlexiHub account using the same credentials. 4. In the software interface on the local Mac, you’ll see iPhone/iPad available for connection. Click “Connect”. ![debug iOS app on device](https://www.flexihub.com/images/articles-landing/fh.jpg "FlexiHub") That’s it. Now you can access the remote iOS device from your local computer and start the debugging process as though the device was directly connected to your machine.
rickey_oak
54,150
Which language would you want to see take over the world?
Let's imagine for a second that we live in a world where all programming languages are equally suited...
0
2018-10-09T06:36:05
https://dev.to/avalander/which-language-would-you-want-to-see-take-over-the-world-492
discuss, languages, programming
--- title: Which language would you want to see take over the world? published: true description: tags: discuss, languages, programming cover_image: https://images.unsplash.com/photo-1459478309853-2c33a60058e7?ixlib=rb-0.3.5&ixid=eyJhcHBfaWQiOjEyMDd9&s=ba9893452f120b4af8ffe3b3f9475e7f&auto=format&fit=crop&w=750&q=80 --- Let's imagine for a second that we live in a world where all programming languages are equally suited for any task. Performance, memory footprint, compile time, executable/runtime size and compatibility, and similar factors are entirely irrelevant. Valid criteria could be personal taste, perceived elegance and beauty of the language, simplicity, easiness to use, and so forth. If any programming language could fulfill equally well the requirements for any application/system and run anywhere, which language would you like to see everything written in? My personal choice would be **Lisp**. I think Lisp is beautifully simple, there are very few syntactical constructs, it's easy to extend with macros and very suitable for functional programming.
avalander
57,071
6 MSSQL Interview Questions for Experienced
Sharing 6 MSSQL Interview Questions for Experienced
0
2018-10-25T17:31:51
https://dev.to/sharadtricks/6-mssql-interview-questions-for-experienced-bdm
mssql, interview, questions
---
title: 6 MSSQL Interview Questions for Experienced
published: true
description: Sharing 6 MSSQL Interview Questions for Experienced
tags: MSSQL, Interview, Questions
---

## 1. How to shrink a database in MSSQL?

In MSSQL you can shrink a database by using SQL Server Management Studio or Transact-SQL. We have described both methods below.

### Using SQL Server Management Studio

1. In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that instance.
2. Expand Databases, and then right-click the database that you want to shrink.
3. Point to Tasks, point to Shrink, and then click Database.
   - **Database** – Displays the name of the selected database.
   - **Current allocated space** – Displays the total used and unused space for the selected database.
   - **Available free space** – Displays the sum of free space in the log and data files of the selected database.
   - **Reorganize files before releasing unused space** – Selecting this option is equivalent to executing DBCC SHRINKDATABASE specifying a target percent option. Clearing this option is equivalent to executing DBCC SHRINKDATABASE with the TRUNCATEONLY option. By default, this option is not selected when the dialog is opened. If this option is selected, the user must specify a target percent option.
   - **Maximum free space in files after shrinking** – Enter the maximum percentage of free space to be left in the database files after the database has been shrunk. Permissible values are between 0 and 99.
4. Click OK.

### By Using Transact-SQL

1. Connect to the Database Engine.
2. From the Standard bar, click New Query.
3. Copy and paste the following example into the query window and click Execute. This query uses `DBCC SHRINKDATABASE` to decrease the size of the data and log files in the UserDB database and to allow for 10 percent free space in the database: `DBCC SHRINKDATABASE (UserDB, 10); GO`

Source: https://docs.microsoft.com/en-us/sql/relational-databases/databases/shrink-a-database?view=sql-server-2017

## 2. How to check the database version of MSSQL?

Steps to identify the SQL Server version:

1. Log in to the Windows Server system hosting your SQL Server database.
2. Open a command prompt window.
3. Execute the following command: `SQLCMD -S servername\instancename`, where servername is your server name and instancename is the SQL instance name.
4. Enter `select @@version` and press <Enter>.
5. At the 2> prompt, type `go` and press <Enter>. The SQL version running on the server displays in the dialog box.
6. Repeat step 3 through step 5 for each additional instance.

Source: https://support.solarwinds.com/Success_Center/Orion_Platform/Knowledgebase_Articles/How_to_identify_your_SQL_Server_version_and_edition

## 3. How to escape single quotes and wildcards in MSSQL?

To escape single quotes ( ' ) in MS SQL all you need to do is add two single quotes instead of one. Example: `INSERT INTO name_column ([name]) VALUES ('R''Rams')`

To escape wildcards you can use the default square brackets "[]", like this: [%] or [_], or define your own escape character by using the keyword ESCAPE at the end of your WHERE clause.

## 4. Explain Magic tables in MSSQL?

SQL Server allows you to define a Magic Table. Magic Tables are invisible tables or virtual tables. You can see them only with the help of triggers in SQL Server. Magic Tables are those tables which allow you to hold inserted, deleted and updated values during insert, delete and update DML operations on a table in SQL Server. These are the two Magic Tables:

- Inserted
- Deleted

Generally, Magic Tables are invisible tables; we can only see them with the help of triggers in SQL Server.

Source: https://www.c-sharpcorner.com/UploadFile/rohatash/magic-tables-in-sql-server-2012/

## 5. List the Logical operators in SQL Server?

The logical operators are basically used to test the truth of conditions.

- ALL – It returns true if all of a set of comparisons are true.
- AND – It returns true if both Boolean expressions are true.
- IN – It returns true if the operand is equal to one of a list of expressions.
- ANY – It returns true if any of a set of comparisons is true.
- BETWEEN – It returns true if the operand is within a range.
- EXISTS – It returns true if the subquery contains any rows.
- LIKE – It returns true if the operand matches a pattern.
- NOT – It reverses the value of any Boolean operator.
- OR – It returns true if either Boolean expression is true.
- SOME – It returns true if some of a set of comparisons are true.

Source: https://www.onlineinterviewquestions.com/mssql-interview-questions/

## 6. What is the default port used by MSSQL?

The default port of SQL Server is 1433.
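A couple of small T-SQL snippets to make the answers above more concrete. These are my own illustrations; the table and column names are made up.

For question 3, escaping a literal single quote and matching wildcard characters literally:

```sql
-- The quote in O'Brien is escaped by doubling it.
INSERT INTO Customers ([Name]) VALUES ('O''Brien');

-- Square brackets make % literal inside LIKE (rows containing "50%")...
SELECT * FROM Products WHERE Description LIKE '%50[%]%';

-- ...or declare your own escape character with ESCAPE.
SELECT * FROM Products WHERE Description LIKE '%50!%%' ESCAPE '!';
```

For question 4, the `inserted` and `deleted` pseudo-tables are only visible inside a trigger:

```sql
CREATE TRIGGER trg_Customers_Audit ON Customers
AFTER UPDATE
AS
BEGIN
    -- Old values come from deleted, new values from inserted.
    INSERT INTO CustomersAudit ([Name], ChangedAt)
    SELECT i.[Name], GETDATE()
    FROM inserted i;
END;
```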
sharadtricks
68,690
Friday hack: Suspense, Concurrent mode and lazy to load locales for i18n
I have a small series of posts about Lingui. I implemented all i18n related features. And I want to a...
125
2018-12-15T04:13:02
https://dev.to/stereobooster/friday-hack-suspense-concurrent-mode-and-lazy-to-load-locales-for-i18n-hgg
react, javascript, i18n
--- title: Friday hack: Suspense, Concurrent mode and lazy to load locales for i18n published: true description: tags: [react, js, i18n] cover_image: https://thepracticaldev.s3.amazonaws.com/i/qz8jv655155c37byozj0.jpg series: i18n of React with Lingui.js --- I have a small series of posts about Lingui. I implemented all i18n related features. And I want to add prerendering to improve load performance, but it appears not that simple as it supposes to be. I had to "hack" `Suspense`, `ConcurrentMode` and `React.lazy`. As I said this is a hack, this is done for fun. Do not use this code in production, unless you know what you are doing. The full source code is [here](https://github.com/stereobooster/react-lingui-example) ## In the previous episode We stopped here: [i18n of React with Lingui.js #3](https://dev.to/stereobooster/i18n-of-react-with-linguijs-3-5h6g). I deployed it to Github pages and measured load performance with [webpagetest](https://www.webpagetest.org) (From: Dulles, VA - Moto G4 - Chrome - 3G). ![filmstrip 1](https://thepracticaldev.s3.amazonaws.com/i/x7t42yggrpp6r1hvjqx9.png) As you can see it takes way to long to get first paint (4-4.5s). The easiest way to fix it, given that we use CRA and don't want to eject, it to use [react-snap](https://github.com/stereobooster/react-snap). ## Add prerendering with the help of react-snap ```shell npm install --save react-snap # or using Yarn yarn add react-snap ``` Add `postbuild` hook to the `package.json`: ```json "scripts": { "postbuild": "react-snap" } ``` And you're done! I also added ``` "reactSnap": { "inlineCss": true } ``` ![filmstrip](https://thepracticaldev.s3.amazonaws.com/i/999fd7o1xyxe55tnbcex.png) As you can see an issue with slow first paint went away, but there is a flash of the white screen. ### Flash of the white screen On the one side, we have prerendered HTML which will start to render as soon as the browser will get it (around 2s in the US on average 3G). On the other side, we have React which will start to render as soon as all scripts will be downloaded (around 3s in the US on average 3G, for the given example). When React will start to render and if not all dynamic resources will be loaded it will flush all the content it has and typically this is the almost white (empty) screen. This is where we get "Flash of the white screen". Dynamic resources can be: async components (`React.lazy(() => import())`), locale catalogs (`import("./locales/" + locale + "/messages.js");`). To solve the problem we need to wait for all resources to load before React will flush the changes to the DOM. We can do this with loader library like, `react-loadable` or `loadable-components`. See more details [here](https://github.com/stereobooster/react-snap#async-components). Or we can do this with new `React.lazy`, `<Suspense />` and `<ConcurentMode />`. ### ConcurentMode `<ConcurentMode />` marked as unstable (use at your own risk), so it can change in the future. Read more on how to use it and about caveats [here](https://github.com/stereobooster/react-async-issue). ```js const ConcurrentMode = React.unstable_ConcurrentMode; const RootApp = ( <ConcurrentMode> <Suspense fallback={<div>Loading...</div>} maxDuration={5000}> <App /> </Suspense> </ConcurrentMode> ); const rootElement = document.getElementById("root"); const root = ReactDom.unstable_createRoot(rootElement, { hydrate: true }); root.render(RootApp); ``` This is the first hack we need. The second one is that we need to repurpose `React.lazy` to wait for subresource. 
React team will eventually add `Cache` for this, but for now, let's keep hacking. ```js const cache = {}; export default ({ locale, children }) => { const SuspendChildren = cache[locale] || React.lazy(() => i18n.activate(locale).then(() => ({ __esModule: true, default: ({ children }) => ( <I18nProvider i18n={i18n}>{children}</I18nProvider> ) })) ); cache[locale] = SuspendChildren; return <SuspendChildren>{children}</SuspendChildren>; }; ``` - `i18n.activate(locale)` returns promise, which we "convert to ES6" module e.g. `i18n.activate(locale).then(() => ({ __esModule: true, ...}))` is equivalent to `import()`. - `default: ...` - default export of pseudo ES6 module - `({children}) => <I18nProvider i18n={i18n}>{children}</I18nProvider>` react functional component - `<SuspendChildren />` will tell `<Suspense />` at the top level to pause rendering until language catalog is loaded `<ConcurentMode />` will enable `<StrictMode />` and it will complain about unsafe methods in `react-router`, `react-router-dom`. So we will need to update to beta version in which issue is fixed. [`react-helmet` also incompatible with `<StrictMode />`](https://github.com/nfl/react-helmet/issues/426), so we need to replace it with `react-helmet-async`. One way or another but we "fixed" it. ![](https://thepracticaldev.s3.amazonaws.com/i/tuchbvoaw0ug0wnphppc.png) _Photo by Pankaj Patel on Unsplash_
stereobooster
70,337
Build a Basic Website with ASP.NET MVC and Angular
This tutorial walks you through building a basic website with ASP.NET MVC and Angular.
0
2019-11-21T19:17:53
https://developer.okta.com/blog/2018/12/21/build-basic-web-app-with-mvc-angular
dotnet, angular, tutorial, webdev
--- title: Build a Basic Website with ASP.NET MVC and Angular published: true description: This tutorial walks you through building a basic website with ASP.NET MVC and Angular. tags: dotnet, angular, tutorial, webdev canonical_url: https://developer.okta.com/blog/2018/12/21/build-basic-web-app-with-mvc-angular --- ASP.NET has been around for a long time. When Microsoft introduced ASP.NET MVC, it changed the way many developers approach their codebase. There is an excellent separation of concerns, a TDD friendly framework, and easy integration with JavaScript while maintaining full control over rendered HTML. On the client-side, a lot of .NET developers prefer Angular because it comes with TypeScript, and it’s a much closer language to C# than plain JavaScript. Angular is an excellent framework for building enterprise-level, feature-rich, applications. You will be using .NET Framework 4.7.1, and you should have the latest version of Visual Studio 2017 installed. Also, you should have Node and npm installed since you will use Angular tools that require Node and npm. ## Get Started with ASP.NET MVC Create an API using a built-in template within Visual Studio. You’ll start from scratch. In Visual Studio, select **File** -> **New Project**. Select Web, ASP.NET Web Application (.NET Framework). ![visual studio file new project](https://d33wubrfki0l68.cloudfront.net/f873c5b2c80bd16c44ac6582f507b267fbc947fd/7b777/assets-jekyll/blog/mvc-angular/vs-file-new-project-993b0dc6b8f5997a22fbcad8b2d059d938044682e5ec48e21cbddece370a5706.png) On the following screen, select MVC: ![visual studio mvc project](https://d33wubrfki0l68.cloudfront.net/bdffe22ab94af6677dfa8a74b065835d7dd12f65/a9802/assets-jekyll/blog/mvc-angular/vs-mvc-project-c789cb0af6945034efe28cd5886b416b0fa4a15f0a081b6889e6efa8585dd001.png) You should have a new ASP.NET MVC application ready now, with all the folders and files as shown in the following picture: ![visual studio mvc file structure](https://d33wubrfki0l68.cloudfront.net/95140225372dc7b55bb17df25ec09ff4cc511d46/a1d74/assets-jekyll/blog/mvc-angular/vs-mvc-file-structure-2be6b679b1c592aec534a7204726fa9bc2ebaeb3551989857f888636a6dc2b39.png) At this moment, in your project, you have some NuGet packages you won’t be using. You can get rid of them quite quickly. Run the following commands inside of your Package Manager Console. ```sh Uninstall-Package Microsoft.ApplicationInsights.Web Uninstall-Package Microsoft.ApplicationInsights.WindowsServer Uninstall-Package Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel Uninstall-Package Microsoft.ApplicationInsights.PerfCounterCollector Uninstall-Package Microsoft.ApplicationInsights.DependencyCollector Uninstall-Package Microsoft.ApplicationInsights.Agent.Intercept Uninstall-Package Microsoft.ApplicationInsights ``` Additionally, make sure you delete `ApplicationInsights.config` file. After finishing, you should have a clean project and perfect starting ground. ## Create an Angular Project Using Angular CLI is the best way to get started with Angular applications. With Angular CLI tools, you can easily create new projects, update existing projects, add different components to your codebase, and do all of that by entering a few commands in your terminal. You can also run the application in the development environment and publish it to production environments. It saves you the trouble of setting up configuration and Webpack. It can also do all the optimizing, compressing and minifying for you. 
It’s highly recommended to use Angular CLI to prepare your applications for production environments. You can install the Angular CLI by entering the following in your favorite terminal: ```sh npm install -g @angular/cli@7.1.0 ``` After that, navigate to the folder of your MVC project and open your terminal there and enter the following: ```sh ng new Angular --skip-tests --style=scss ``` This will create a new Angular project under the Angular folder within your MVC project. The `--skip-tests` argument will make sure that CLI skips creating spec files. You should include the Angular folder in your MVC project. To do that, click on the **Show All Files** button in your Solution Explorer. You can now navigate to **Angular** folder and include all folders and files except for `node_modules`. After you do that you should have a folder structure like on the following picture: ![visual studio file structure angular](https://d33wubrfki0l68.cloudfront.net/6914670affaba0f845d72f03433339a0ff171960/a193c/assets-jekyll/blog/mvc-angular/vs-file-structure-angular-cbb3c32c929132ddba483042d39afe6aa9a5bbac532828cebdd7d694f7517c5e.png) ## Set up a Database for ASP.NET You will use Entity Framework (EF) 6 as your Object-Relational Mapper (ORM), a proven solution from Microsoft. EF 6 is pretty easy to set up and get going, and you don’t need to know any SQL to have it working for your applications. Think of it as an adapter or bridge to your database. All of your database queries will go through the Entity Framework. The central point is your `DbContext` class, and that’s where you define your connection string and the tables for your database. EF 6 uses the `DbSet` collection type to represent a table in a database. All you need to do is create a connection string, create a new class that inherits from `DbContext`, make the data models, and create appropriate properties within `DbContext` for those models. Those properties will represent tables in the database. From that point, it’s pretty easy to get and update existing data by using Language Integrated Queries (LINQ). You should first install the EF 6, by running the following code in the Package Manager Console: ```sh Install-Package EntityFramework -Version 6.2.0 ``` ### Add the Connection String You can get started by adding a connection string to your `Web.config` file. Make sure to add it inside of the `<configuration>` section, and after `<configSections></configSections>`: ```xml <connectionStrings> <add name="OktaConnectionString" connectionString="Data Source=(LocalDb)\MSSQLLocalDB;Initial Catalog=JoggingTracker;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\WeightTracker.mdf" providerName="System.Data.SqlClient" /> </connectionStrings> ``` ### Create Your DbContext and Data Models in Angular First, create a model for recording a jog. Create a file in the `Models` directory called `JoggingRecord.cs`: ```cs using System; namespace OktaAspNetMvcAngular.Models { public class JoggingRecord { public int Id { get; set; } public float Distance { get; set; } public string Description { get; set; } public DateTime CreatedAt { get; set; } } } ``` Nothing special here, a standard `Id` property that represents a unique identifier for every row within a table, and Entity Framework will use a naming convention and automatically set it as a primary key for this table. You definitely need the `Distance` property to track your jogs, a `Description` to describe the jog and `CreatedAt` property for an audit trail. 
Next, create an `ApplicationDbContext` class inside of a new folder called `Data`. For now, it will be pretty straightforward:

```cs
using System.Data.Entity;
using OktaAspNetMvcAngular.Models;

namespace OktaAspNetMvcAngular.Data
{
    public class ApplicationDbContext : DbContext
    {
        public ApplicationDbContext() : base("OktaConnectionString")
        {
        }

        public static ApplicationDbContext Create() => new ApplicationDbContext();

        public DbSet<JoggingRecord> JoggingRecords { get; set; }
    }
}
```

The call to the base `DbContext` constructor sets up the connection string. There is also the `DbSet` property mentioned earlier, which will be used to fetch, update, and delete records in the table.

### Enable and Run Migrations in Your ASP.NET Project

Open the Package Manager Console, and run the following to enable migrations:

```sh
Enable-Migrations
```

Now you can add your initial migration, which will contain the creation of the `JoggingRecords` table:

```sh
Add-Migration Initial
```

Now, you can update the database:

```sh
Update-Database -Verbose
```

The verbose flag lets you see the generated SQL statements, which can help you resolve errors if any occur.

### Seed the Database

After you have updated the database, seed it with some initial data by adding code inside of the `Seed` method in the `Configuration` class. First, add a `using` statement for the Models namespace:

```cs
using OktaAspNetMvcAngular.Models;
```

After that, add the following code for the Seed method:

```cs
protected override void Seed(OktaAspNetMvcAngular.Data.ApplicationDbContext context)
{
    context.JoggingRecords.AddOrUpdate(x => x.Id,
        new JoggingRecord { Id = 1, Description = "Friday Evening", Distance = 5.5f, CreatedAt = new DateTime(2018, 5, 1) },
        new JoggingRecord { Id = 2, Description = "Saturday morning", Distance = 6.15f, CreatedAt = new DateTime(2018, 4, 4) },
        new JoggingRecord { Id = 3, Description = "Marathon", Distance = 20, CreatedAt = new DateTime(2018, 5, 13) },
        new JoggingRecord { Id = 4, Description = "Short one", Distance = 3.5f, CreatedAt = new DateTime(2018, 6, 6) }
    );

    // This method will be called after migrating to the latest version.
    // You can use the DbSet<T>.AddOrUpdate() helper extension method
    // to avoid creating duplicate seed data.
}
```

## Add Authentication to Your Angular Application

Handling authentication is never an easy or comfortable task. If you want to quickly and easily take care of authentication for your application, you're likely in the market for a solution like Okta. That's why so many developers love and use [Okta](https://developer.okta.com): they never want to build auth again, and they're happy to leave it to us.

To get started with Okta, sign up for a [forever-free developer account](https://developer.okta.com/signup/) or [log in](https://login.okta.com/) if you already have one.
![okta signup](https://d33wubrfki0l68.cloudfront.net/c42a01626d346a41dcb4de3bc62a154dd3c527f6/4eb6f/assets-jekyll/blog/mvc-angular/okta-signup-accab135cb5e7cb06a3446679d6aef0958ea31b3b9444d87ffb2f70e5882d045.png) You should see the Dashboard, and in the upper right corner, there should be your unique Org URL. Save it for later. ![okta org url](https://d33wubrfki0l68.cloudfront.net/5af536f7929e0ecf2bf84ec823e74636253fde09/7393e/assets-jekyll/blog/mvc-angular/okta-org-url-c4043a91b2e40b8bb69fa886de2afd1a831156a8d710b1ae2a9e30fa5bdc26b7.png) Now you need to create a new application by browsing to the Applications tab, and clicking Add Application, and from the first page of the wizard, choose **Single-Page App**. On the settings page, enter `OktaMvcAngular` as your name value and select Implicit(Hybrid). ![okta application settings](https://d33wubrfki0l68.cloudfront.net/d91ddd0c8e33281226cb9a4b97026724da0d126b/85966/assets-jekyll/blog/mvc-angular/okta-app-settings-e7c6960160a52965064459f7a22189d2e6604bdf75a160efb7f2401be4ea2bae.png) Now that your application has been created copy down the Client ID and Client secret values on the following page, you’ll need them soon (of course, yours will be different). ![okta client credentials](https://d33wubrfki0l68.cloudfront.net/75c3021f347a48e86bfe86bcbd7dc9e296253d0a/44557/assets-jekyll/blog/mvc-angular/okta-client-credentials-fc61b85256ab539cc2ae02a699b7397e8147843458538dcce09d1270b24eb5b2.png) It is quite easy to add authentication to ASP.NET MVC with the help of Okta. Add the following to your `Web.config` file, inside of `<appSettings></appSettings>`: ```xml <!-- 1. Replace these values with your Okta configuration --> <add key="okta:ClientId" value="{yourClientId}" /> <add key="okta:ClientSecret" value="{yourClientSecret}" /> <add key="okta:OrgUri" value="https://{yourOktaDomain}/oauth2/default" /> <!-- 2. Update the Okta application with these values --> <add key="okta:RedirectUri" value="http://localhost:8080/authorization-code/callback" /> <add key="okta:PostLogoutRedirectUri" value="http://localhost:8080/Account/PostLogout" /> ``` Install the required NuGet packages: ```sh Install-Package Okta.AspNet -Version 1.1.1 Install-Package Microsoft.Owin.Host.SystemWeb -Version 4.0.0 Install-Package Microsoft.Owin.Security.Cookies -Version 4.0.0 ``` Add the Startup class. Right click on the project -> Add new item -> OWIN Startup Class: ![visual studio add owin startup](https://d33wubrfki0l68.cloudfront.net/4ea03df8555d26512c60ef22ffe2c7779f190e61/b3c10/assets-jekyll/blog/mvc-angular/vs-add-owin-startup-99d01663a1b095c29819a5b7282d2c984cb183c814e464d33ba2a5b59321d5f0.png) Replace the content of that file with the following code: ```cs using IdentityModel.Client; using Microsoft.IdentityModel.Protocols.OpenIdConnect; using Microsoft.IdentityModel.Tokens; using Microsoft.Owin; using Microsoft.Owin.Security; using Microsoft.Owin.Security.Cookies; using Microsoft.Owin.Security.OpenIdConnect; using Owin; using System; using System.Collections.Generic; using System.Configuration; using System.Security.Claims; using System.Threading.Tasks; [assembly: OwinStartup(typeof(OktaAspNetMvcAngular.Startup))] namespace OktaAspNetMvcAngular { public class Startup { // These values are stored in Web.config. Make sure you update them! 
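        //
        // A quick overview of the middleware configured below: cookie
        // authentication keeps track of the signed-in user, and the OpenID
        // Connect middleware redirects unauthenticated users to Okta. When
        // Okta sends back an authorization code, the AuthorizationCodeReceived
        // notification exchanges it for tokens, loads the user's claims from
        // the userinfo endpoint, and stores the tokens as additional claims.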
private readonly string clientId = ConfigurationManager.AppSettings["okta:ClientId"]; private readonly string redirectUri = ConfigurationManager.AppSettings["okta:RedirectUri"]; private readonly string authority = ConfigurationManager.AppSettings["okta:OrgUri"]; private readonly string clientSecret = ConfigurationManager.AppSettings["okta:ClientSecret"]; private readonly string postLogoutRedirectUri = ConfigurationManager.AppSettings["okta:PostLogoutRedirectUri"]; public void Configuration(IAppBuilder app) { // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=316888 app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType); app.UseCookieAuthentication(new CookieAuthenticationOptions()); app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions { ClientId = clientId, ClientSecret = clientSecret, Authority = authority, RedirectUri = redirectUri, ResponseType = OpenIdConnectResponseType.CodeIdToken, Scope = OpenIdConnectScope.OpenIdProfile, PostLogoutRedirectUri = postLogoutRedirectUri, TokenValidationParameters = new TokenValidationParameters { NameClaimType = "name" }, Notifications = new OpenIdConnectAuthenticationNotifications { AuthorizationCodeReceived = async n => { // Exchange code for access and ID tokens var tokenClient = new TokenClient(authority + "/v1/token", clientId, clientSecret); var tokenResponse = await tokenClient.RequestAuthorizationCodeAsync(n.Code, redirectUri); if (tokenResponse.IsError) { throw new Exception(tokenResponse.Error); } var userInfoClient = new UserInfoClient(authority + "/v1/userinfo"); var userInfoResponse = await userInfoClient.GetAsync(tokenResponse.AccessToken); var claims = new List<Claim>(); claims.AddRange(userInfoResponse.Claims); claims.Add(new Claim("id_token", tokenResponse.IdentityToken)); claims.Add(new Claim("access_token", tokenResponse.AccessToken)); if (!string.IsNullOrEmpty(tokenResponse.RefreshToken)) { claims.Add(new Claim("refresh_token", tokenResponse.RefreshToken)); } n.AuthenticationTicket.Identity.AddClaims(claims); return; }, RedirectToIdentityProvider = n => { // If signing out, add the id_token_hint if (n.ProtocolMessage.RequestType == OpenIdConnectRequestType.Logout) { var idTokenClaim = n.OwinContext.Authentication.User.FindFirst("id_token"); if (idTokenClaim != null) { n.ProtocolMessage.IdTokenHint = idTokenClaim.Value; } } return Task.CompletedTask; } }, }); } } } ``` Add an a new MVC 5 empty controller inside of `Controllers` folder. 
You can name it `AccountController` and paste the following code inside of it: ```cs using Microsoft.Owin.Security.Cookies; using Microsoft.Owin.Security.OpenIdConnect; using System.Web; using System.Web.Mvc; namespace OktaAspNetMvcAngular.Controllers { public class AccountController : Controller { public ActionResult Login() { if (!HttpContext.User.Identity.IsAuthenticated) { HttpContext.GetOwinContext().Authentication.Challenge(OpenIdConnectAuthenticationDefaults.AuthenticationType); return new HttpUnauthorizedResult(); } return RedirectToAction("Index", "Home"); } [HttpPost] public ActionResult Logout() { if (HttpContext.User.Identity.IsAuthenticated) { HttpContext.GetOwinContext().Authentication.SignOut(CookieAuthenticationDefaults.AuthenticationType, OpenIdConnectAuthenticationDefaults.AuthenticationType); } return RedirectToAction("Index", "Home"); } public ActionResult PostLogout() { return RedirectToAction("Index", "Home"); } } } ``` ### Add Login Section to Layout Inside of `Views/Shared` folder create a new partial view named `_LoginPartial` and paste the following code inside: ```html @if (Context.User.Identity.IsAuthenticated) { <ul class="nav navbar-nav navbar-right"> <li> <p class="navbar-text">Hello, <b>@Context.User.Identity.Name</b></p> </li> <li> <a onclick="document.getElementById('logout_form').submit();" style="cursor: pointer;">Log out</a> </li> </ul> <form action="/Account/Logout" method="post" id="logout_form"></form> } else { <ul class="nav navbar-nav navbar-right"> <li>@Html.ActionLink("Log in", "Login", "Account")</li> </ul> } ``` ## Set Up Your ASP.NET API Endpoints You will use the controller as your endpoint source for the API. Add a new Web API controller named`JoggingRecordsController.cs` class inside of your `Controllers` folder and paste the following code: ```cs using OktaAspNetMvcAngular.Data; using OktaAspNetMvcAngular.Models; using System.Data.Entity; using System.Data.Entity.Infrastructure; using System.Linq; using System.Net; using System.Threading.Tasks; using System.Web.Http; using System.Web.Http.Description; namespace OktaAspNetMvcAngular.Controllers { public class JoggingRecordsController : ApiController { private ApplicationDbContext db = new ApplicationDbContext(); // GET: api/JoggingRecords public IQueryable<JoggingRecord> GetJoggingRecords() => db.JoggingRecords; // GET: api/JoggingRecords/5 [ResponseType(typeof(JoggingRecord))] public async Task<IHttpActionResult> GetJoggingRecord(int id) { JoggingRecord joggingRecord = await db.JoggingRecords.FindAsync(id); if (joggingRecord == null) { return NotFound(); } return Ok(joggingRecord); } } } ``` The code above adds two endpoints. The first one will fetch all JoggingRecords from the database. In your applications, you will probably want some kind of paging and filtering. The second endpoint will fetch individual JoggingRecords instances based on the `id` parameter that the client will pass to the API. That parameter will match the `Id` in `JoggingRecords` table in the database. ## Set a Default Formatter for ASP.NET Web API 2 Unfortunately, ASP.NET Web API 2 is set to use XML as a default formatter. Remove it and make sure JSON formatter is the default one. 
Add the following code at the end of the `Register()` method inside of `WebApiConfig.cs` file: ```cs // Set JSON formatter as default one and remove XmlFormatter var jsonFormatter = config.Formatters.JsonFormatter; jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver(); config.Formatters.Remove(config.Formatters.XmlFormatter); jsonFormatter.SerializerSettings.DateTimeZoneHandling = Newtonsoft.Json.DateTimeZoneHandling.Utc; ``` You’ll also need to add the following `using` statement for the JSON serializer to the file: ```cs using Newtonsoft.Json.Serialization; ``` You will also want to specify the port for your ASP.NET application. You need to know the exact address so you can communicate with the API from your Angular application. To do so, right-click on the project in the solution explorer and click **properties**. In the main properties window, choose **Web** from the left-hand menu and set the **Project Url** property to `http://localhost:8080`. ## Set Up the Angular Application for ASP.NET MVC Since you will let your ASP.NET MVC application handle routing, you can remove the `app-routing.module.ts` file and its references in the `AppModule`. Your `app.module.ts` file should look like this: ```ts import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppComponent } from './app.component'; @NgModule({ declarations: [ AppComponent ], imports: [ BrowserModule, ], providers: [], bootstrap: [AppComponent] }) export class AppModule { } ``` After that is done, delete the `<router-outlet></router-outlet>` code from `app.component.html`. You also need to make sure that the Angular CLI outputs files to the appropriate folder. Inside of your Angular folder open `angular.json` file and change the `outputPath` setting: ```json "outputPath": "../Bundles/AngularOutput", ``` In the ASP.NET MVC application, you will add a new JS and CSS bundle for Angular. Locate the `BundleConfig.cs` file inside of `App_Start` folder and at the end of the `RegisterBundles` function add the following code: ```cs // Angular bundles bundles.Add(new ScriptBundle("~/bundles/Angular") .Include( "~/bundles/AngularOutput/inline.*", "~/bundles/AngularOutput/polyfills.*", "~/bundles/AngularOutput/scripts.*", "~/bundles/AngularOutput/vendor.*", "~/bundles/AngularOutput/runtime.*", "~/bundles/AngularOutput/main.*")); bundles.Add(new StyleBundle("~/Content/Angular") .Include("~/bundles/AngularOutput/styles.*")); ``` For this to work, you need to make use of these bundles inside of your views. You should first create a new MVC view for your Angular application. Create a new folder inside of `Views` folder called `AngularData` and inside of it create new `Index.cshtml` file: ```cs @{ ViewBag.Title = "Index"; } @Styles.Render("~/Content/Angular") <app-root>test</app-root> @Scripts.Render("~/bundles/Angular") ``` The code above will actually render the JS bundle that our angular application produces. Angular will find the `<app-root>` tag and render the App component and everything inside of it. To access this view, you need to create a matching controller. Inside of `Controllers` folder create a new MVC 5 controller named `AngularDataController`: ```cs using System.Web.Mvc; namespace OktaAspNetMvcAngular.Controllers { public class AngularDataController : Controller { public ActionResult Index() { return View(); } } } ``` You should also update the layout file so that you can access the Angular application from the menu bar. 
Locate the `_Layout.cshtml` file inside of the `Views/Shared` folder. Make sure your navbar looks like this:

```html
<ul class="nav navbar-nav">
    <li>@Html.ActionLink("Home", "Index", "Home")</li>
    <li>@Html.ActionLink("Angular", "Index", "AngularData")</li>
    <li>@Html.ActionLink("About", "About", "Home")</li>
    <li>@Html.ActionLink("Contact", "Contact", "Home")</li>
</ul>
```

To have the Angular application output its content, navigate inside of the `Angular` folder, open your favorite terminal there, and enter the following command:

```sh
ng build --extractCss --watch
```

The command above will make sure that a new bundle is produced whenever you change any code that is part of the Angular codebase, and it will also extract CSS from global styles into CSS files instead of JS ones.

## Create a Model and API Service for Your Angular Application

Your app contains only one root module, `AppModule`, and one component, `AppComponent`. Since the application you will be making here is a small one, you will not create additional modules. Once the application grows and gets more complicated, you should introduce new modules and shared modules to organize your code better.

The application itself will only show some records from the API. However, since your Angular application is interacting with the server, you should create a separate Angular service for this purpose.

You can also create a TypeScript model for the `JoggingRecord`. First, create a `shared` folder inside the `src` folder of the Angular application, next to the `app` folder, so that the `../shared` imports used below resolve correctly. Inside of the `shared` folder create a new file, `JoggingRecord.ts`:

```ts
export default class JoggingRecord {
  id: number;
  distance: number;
  description: string;
  createdAt: string;
}
```

The class above gives you a strongly typed model for the `JoggingRecord` inside of your Angular application.

You can now create the API service inside of the `shared` folder. Name the file `api.service.ts`:

```ts
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

import JoggingRecord from './JoggingRecord';

@Injectable()
export default class ApiService {
  public API = 'http://localhost:8080/api';
  public JOGGING_RECORDS_ENDPOINT = `${this.API}/joggingrecords`;

  constructor(private http: HttpClient) { }

  getAll(): Observable<Array<JoggingRecord>> {
    return this.http.get<Array<JoggingRecord>>(this.JOGGING_RECORDS_ENDPOINT);
  }
}
```

The `ApiService` class is the bridge between your Angular application and your Web API endpoints. You need it to fetch the data from the server. Add the service to the App module by updating the `providers` array inside of the `app.module.ts` file. You also need to import the `HttpClientModule`.
Your `app.module.ts` file should end up looking like this:

```ts
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http';

import { AppComponent } from './app.component';
import ApiService from '../shared/api.service';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    HttpClientModule,
    BrowserModule,
  ],
  providers: [ApiService],
  bootstrap: [AppComponent]
})
export class AppModule { }
```

## Fetch Data from the ASP.NET API Endpoint

Update the `app.component.ts` file with the following content:

```ts
import { Component, OnInit } from '@angular/core';

import JoggingRecord from '../shared/JoggingRecord';
import ApiService from '../shared/api.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent implements OnInit {
  joggingRecords: Array<JoggingRecord>;

  constructor(private apiService: ApiService) { }

  ngOnInit() {
    this.apiService.getAll().subscribe(data => {
      this.joggingRecords = data;
    });
  }
}
```

The code above assigns the data that `ApiService` fetches from the server to the `joggingRecords` property. You also need to update this component's template (`app.component.html`):

```html
<div style="text-align:center">
  <h1>
    Jogging Records
  </h1>
  <span *ngFor="let record of joggingRecords">
    {{record.description}} - {{record.distance}}
    <br />
  </span>
</div>
```

## Test Out Your ASP.NET and Angular Application

Since you already have the Angular CLI outputting the code changes for your Angular application in the `Bundles/AngularOutput` folder, and your MVC application is making use of that folder, you can simply start the MVC application and preview the Angular app inside of it.

From Visual Studio, use `CTRL+F5` to start the application without debugging. Your browser should open, and you should see the following screen:

![application home page](https://d33wubrfki0l68.cloudfront.net/64786a2b99fe7abe1b8531913da2198b57e3db8b/f926a/assets-jekyll/blog/mvc-angular/app-homepage-edf9b86a3a3f10c3958a3f8297813cdc4baf2064746eeaf6bc556b43b8cd7ea9.png)

Clicking on the Angular link in the menu will render the MVC view for the Angular application, and from that point on, Angular is in charge of rendering. The Angular page should look like this:

![application jogging records](https://d33wubrfki0l68.cloudfront.net/8dc68857dec86042263aebb534e72ff4bad2fea2/92eac/assets-jekyll/blog/mvc-angular/app-jogging-records-16d746e60574f16bb90d29c16f656b1a147bcd457cc275682e970954f875f3e4.png)

If you open the Dev Tools in your browser, you should see that the Angular application actually hits the ASP.NET API:

![application dev tools](https://d33wubrfki0l68.cloudfront.net/bdb729d1061abf7f87e96d59e720dc8e05f92da9/62cc2/assets-jekyll/blog/mvc-angular/app-dev-tools-1a89dca385fea5e4e6e5e8a4176d0d7fafe94a27a9a7e3ec53b796afa71f1f24.png)

## Learn More About ASP.NET Core and Angular

You can find the source code for the complete application on [GitHub](https://github.com/oktadeveloper/aspnet-mvc-angular-example).

If you enjoyed building this ASP.NET MVC and Angular application, check out more full-stack application posts from Okta.
- [Build a CRUD App with ASP.NET Framework 4.x Web API and Angular](https://developer.okta.com/blog/2018/07/27/build-crud-app-in-aspnet-framework-webapi-and-angular) - [Angular 7: What’s New and Noteworthy + OIDC Goodness](https://developer.okta.com/blog/2018/12/04/angular-7-oidc-oauth2-pkce) - [Token Authentication in ASP.NET Core – A Complete Guide](https://developer.okta.com/blog/2018/03/23/token-authentication-aspnetcore-complete-guide) - [Build a Secure CRUD App with ASP.NET Core and React](https://developer.okta.com/blog/2018/07/02/build-a-secure-crud-app-with-aspnetcore-and-react) As always, if you have any questions, comments, or concerns about this post, feel free to leave a comment below. For other great content from the Okta Dev Team, follow us on Twitter [@OktaDev](https://twitter.com/oktadev), [Facebook](https://www.facebook.com/oktadevelopers), [LinkedIn](https://www.linkedin.com/company/oktadev/) and [YouTube](https://www.youtube.com/channel/UC5AMiWqFVFxF1q9Ya1FuZ_Q)!
oktadev_77
70,530
How do you DEV.to?
I am new to the community and curious how others are using it.
0
2018-12-26T04:14:36
https://dev.to/therealdanvega/how-do-you-devto-1i7m
discuss
--- title: How do you DEV.to? published: true description: I am new to the community and curious how others are using it. tags: discuss --- ![Blogging](https://thepracticaldev.s3.amazonaws.com/i/67pc6rmcwp1s3yentubn.jpg) I recently joined the DEV.to community and I am curious how each of you is using the platform. I'm guessing my story is similar to most of yours, so this is a good place to talk about it. I am a Software Developer with nearly 20 years of experience and I have been blogging for a long time. I find myself blogging less and less on my own platform these days and creating more videos. This is a little bit sad for me because I love to write; outside of coding, it's one of my favorite things. I have only posted a single article here and already love it. I have received so many comments and so much feedback, and it just seems like a great place to connect with developers like myself. ![Code](https://thepracticaldev.s3.amazonaws.com/i/imi6vg61bhukgtmgr3e9.jpeg) So my question is this: for those of you who have personal websites and a blog, what is your strategy for posting new content? Are you posting here and on your blog? I heard Ben on the Indie Hackers Podcast talking about the problems with Medium. My biggest issue with Medium and other platforms is that you don't own your content. When I post content on my blog, I am not only trying to help people but also trying to build my name. I'm looking to build my mailing list, and when I do ask for money for paid content, I hope my subscribers will know it's coming from a good place. Should I post all of my content in both places, or should some content go on my personal blog while I use this platform to test out ideas first? I'm curious to know how all of you are using the platform.
therealdanvega
70,842
Install and Update Solidity(solc) in Mac
You can install and update solc: brew update brew upgrade brew tap ethereum/ethereum brew install s...
0
2018-12-26T06:16:03
https://dev.to/sot528/install-and-update-soliditysolc-in-mac-n8i
solidity, ethereum, solc
--- title: Install and Update Solidity(solc) in Mac published: true description: tags: solidity,ethereum,solc --- You can install and update solc: ```bash brew update brew upgrade brew tap ethereum/ethereum brew install solidity brew link --overwrite solidity ``` # Errors If you got the error below, you have to update xcode. ```bash ==> Installing solidity from ethereum/ethereum Error: Your Xcode (9.0) is too outdated. Please update to Xcode 10.1 (or delete it). Xcode can be updated from the App Store. ```
sot528
71,013
Java 8 lambdas: 101
Java 8 lambdas starter for absolute beginners.
0
2018-12-27T08:41:28
https://dev.to/shriyog/java-8-lambdas-101-1ah3
java, java8, lambda, beginners
--- title: Java 8 lambdas: 101 published: true description: Java 8 lambdas starter for absolute beginners. tags: java, java8, lambda, beginners --- All you need is 5 minutes to grok the lambda expressions introduced in Java 8. This post acts as a starter by quickly demonstrating how lambdas are super cool at replacing hefty anonymous classes.

## Anonymous classes

You have used them all over - to implement event handlers in GUIs, to specify a `Comparator` for sorting, etc.

```java
button.setOnAction(new EventHandler<ActionEvent>() {
    @Override
    public void handle(ActionEvent event) {
        System.out.println("Hello World!");
    }
});
```

By doing this, we are just passing the functionality of handling an event - in other words, passing code as a method argument. The problem with the simplest anonymous classes, those with a single abstract method, is the unnecessary boilerplate syntax that goes along with them. Lambdas solve exactly this by providing a clean, compact syntax to convey the functionality of single-method classes.

## A minimalistic example

- An interface - `Operation`
- A class - `Calculator`

The `Operation` interface declares a method to operate on 2 integers. This can be implemented by multiple types of operations such as addition and subtraction.

```java
public interface Operation {
    int operate(int a, int b);
}
```

The static `calculate` method performs the operation by calling the `operate` method with the supplied operands.

```java
public class Calculator {
    public static int calculate(int a, int b, Operation operation) {
        return operation.operate(a, b);
    }
}
```

We want to invoke `calculate` to perform a defined operation (an implementation of the `Operation` interface) on given operands.

### Approach 1 - Using an anonymous class

```java
Operation additionOperation = new Operation() {
    @Override
    public int operate(int a, int b) {
        return a + b;
    }
};

int result = Calculator.calculate(2, 3, additionOperation);
```

### Approach 2 - Using a lambda expression

```java
int result = Calculator.calculate(2, 3, (x, y) -> x + y);
```

The lambda expression **`(x, y) -> x + y`** can be dissected into:

1. `(x, y)` A parenthesis-enclosed, comma-separated set of formal arguments (same as in the interface method signature).
2. `->` The arrow separating the arguments from the body.
3. `x + y` The body, which can be an expression that is evaluated and returned, or a block of statements enclosed by curly braces.

This is how lambdas can easily replace those bulky anonymous classes. They also fit perfectly with the new [Java stream API][1], which is a collection of functional-style transformations.

## Resources

[Oracle Java tutorials - Lambda expressions][2]

[1]: https://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html
[2]: https://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html
shriyog