id | title | description | collection_id | published_timestamp | canonical_url | tag_list | body_markdown | user_username |
|---|---|---|---|---|---|---|---|---|
1,788,862 | SMTP Relay Service Provider | Empower your email delivery with a trusted smtp relay service provider. Unlock seamless communication... | 0 | 2024-03-13T07:15:03 | https://dev.to/devidrich/smtp-relay-service-provider-2fm2 | smtprelayserviceprovider, smtp, buysmtpserver, smtprelayservices | Empower your email delivery with a trusted **[smtp relay service provider](https://medium.com/@vikaspanache0/boosting-deliverability-the-impact-of-a-reliable-smtp-relay-service-provider-fd71968fd4d4)**. Unlock seamless communication and unparalleled reliability as we streamline your messages to their destination. With our expertise, your emails will soar past spam filters, ensuring maximum deliverability and engagement. Trust in our proven track record to enhance your sender reputation and elevate your email marketing strategy. Experience the difference with our tailored solutions designed to meet your unique business needs. Partner with us today and discover the power of efficient email delivery. | devidrich |
1,788,885 | A New Way to Provision Databases on Kubernetes | Need to provide database self-service to internal teams? Can't spare the resources to build and... | 0 | 2024-03-13T07:59:13 | https://dev.to/dbazhenov/a-new-way-to-provision-databases-on-kubernetes-126h | kubernetes, database, opensource, cloudnative | Need to provide database self-service to internal teams? Can't spare the resources to build and maintain your own private DBaaS solution? Sick of cloud DBaaS providers locking you in?
[Percona Everest](https://percona.community/projects/everest/) is a cloud-native database platform to deploy and manage enterprise-grade PostgreSQL, MongoDB and MySQL database clusters.
With Percona Everest, you can:
- Enable DBA teams to regain control over data and database configuration
- Empower developers to deploy code faster and self-service provision highly performant, production-ready database clusters
- Free your entire organization from vendor lock-in and restrictive subscriptions
## Benefits of Percona Everest
Truly **open source**, Percona Everest offers the benefits of automated database provisioning and management without the need to build or maintain an in-house database management platform.
- No data lock-in or restrictive contracts
- Reduced total cost of ownership (TCO)
- Complete data sovereignty to meet any compliance requirement
- Fully customizable database configurations
- Right-sized database deployments
- Frictionless developer self-service
- Backing of highly skilled database management experts
## Key features of Percona Everest
**Database management via a single pane of glass**
Percona Everest enables you to manage complex database environments through a single pane of glass or API.
- Create, configure, deploy, update, and upgrade with zero downtime
- Backup, restore, restart, suspend/resume, or delete database clusters

**Complete customization**
Customize load balancing settings, settings for network exposure, and resource settings like node count, instance type, CPU, memory, storage, etc.
**Standardized database deployments**
Create ready-to-use database clusters of enterprise-grade MySQL, PostgreSQL, and MongoDB, with the ability to customize to support a variety of topologies or deployments.
Percona Everest leverages [Percona Operators](https://www.percona.com/software/percona-operators) to deploy Cloud-Native Percona Distributions.
**Built-in observability tools**
Benefit from built-in open source infrastructure monitoring to keep control of (and optimize) resource usage, health, and performance of database clusters.
## How Percona Everest is designed
Percona Everest has two primary components:
**Percona Everest application with the UI**
- On the frontend, the Percona Everest UI is built with the Vite framework, the React library, and TypeScript.
- On the backend, requests from the frontend app are processed by the Percona Everest Backend, an API developed in Golang using the Echo framework that forwards requests to the Kubernetes API. You can also call the API directly for your own needs.
**Percona Everest CLI (everestctl)**
You can install the Percona Everest operators and components into your Kubernetes cluster with the Percona Everest CLI (everestctl), a console tool. Percona Everest will then create and manage database clusters. Everestctl is developed in Golang and ships as a self-contained executable.
_Percona Everest does not supply a Kubernetes cluster; you’ll need to use your own for the deployment._
## Percona Everest is currently in Beta
We invite you to become an early adopter and contribute to progress!
Your feedback is crucial to enhancing the software, and we highly value and rely on your input.
- [Percona Everest website](https://percona.community/projects/everest/)
- [Documentation](https://docs.percona.com/everest/index.html)
- [Quick Start](https://docs.percona.com/everest/quickstart-guide/qs-overview.html)
| dbazhenov |
1,788,901 | 🤯 150 WebDev Articles to Satisfy Your Curiosity | Image by Freepik This year I started a new series on LinkedIn - "Advanced Links for Frontend". Each... | 0 | 2024-03-13T08:18:49 | https://dev.to/florianrappl/150-articles-to-satisfy-your-curiosity-3c22 | webdev, javascript, css, frontend | *Image by <a href="https://www.freepik.com/free-vector/hand-drawn-web-developers_12063795.htm#query=front%20end%20developer&position=2&from_view=keyword&track=ais&uuid=98351362-efae-41c0-970c-9aa479e41c9a">Freepik</a>*
This year I started a new series on [LinkedIn](https://www.linkedin.com/in/florian-rappl/) - "Advanced Links for Frontend". Each issue has 10 links to outstanding posts / articles. Having already published 15 issues, I thought it was time to bundle them and post them here.
## Issue 1
1. The Implied Web (https://www.htmhell.dev/adventcalendar/2023/21) by Halvor William Sanden
2. Modern CSS for 2024: Nesting, Layers, and Container Queries (https://dev.to/builderio/modern-css-for-2024-nesting-layers-and-container-queries-4c43) by @hamatoyogi
3. Pushing the Limits of Styling: New and Upcoming CSS Features (https://blog.openreplay.com/new-and-upcoming-css-features/) by Ebere Frankline Chisom
4. Bun, Javascript, and TCO (https://www.onsclom.net/posts/javascript-tco) by Austin Merrick
5. How I'm Writing CSS in 2024 (https://leerob.io/blog/css) by Lee Robinson
6. HTMX Playground (https://lassebomh.github.io/htmx-playground/) by Lasse H. Bomholt
7. 15 Most Watched Frontend Conference Talks In 2023 (https://dev.to/techtalksweekly/15-most-watched-frontend-conference-talks-in-2023-194p) by @techtalksweekly
8. Multithreading functions in JavaScript to speedup heavy workloads (https://github.com/W4G1/multithreading) by Walter van der Giessen
9. Bun v1.0.21 (https://bun.sh/blog/bun-v1.0.21) by Jarred Sumner
10. Revolutionizing Angular: Introducing the New Signal Input API (https://netbasal.com/revolutionizing-angular-introducing-the-new-signal-input-api-d0fc3c8777f2) by Netanel Basal
## Issue 2
1. What PWA Can Do Today (https://whatpwacando.today/) by Danny Moerkerke
2. Interop 2023 Dashboard (https://wpt.fyi/interop-2023) by Web Dev
3. Announcing Vue 3.4 (https://blog.vuejs.org/posts/vue-3-4) by Evan You
4. Weird things engineers believe about Web development (https://birtles.blog/2024/01/06/weird-things-engineers-believe-about-development/) by Brian Birtles
5. When "Everything" Becomes Too Much: The npm Package Chaos of 2024 (https://socket.dev/blog/when-everything-becomes-too-much) by Feross Aboukhadijeh
6. SolidStart: A Different Breed Of Meta-Framework (https://www.smashingmagazine.com/2024/01/solidstart-different-breed-meta-framework/) by Atila Fassina
7. Web analytics is badly broken (https://martech.org/web-analytics-is-badly-broken/) by Juan Mendoza
8. The Website vs. Web App Dichotomy Doesn't Exist (https://jakelazaroff.com/words/the-website-vs-web-app-dichotomy-doesnt-exist) by Jake Lazaroff
9. Front-end testing in 2024 (https://dev.to/imalov/front-end-testing-in-2024-1gjl) by @imalov
10. "Why would I choose Panda CSS ?" (https://www.astahmer.dev/posts/why-would-i-choose-panda-css) by Alexandre Stahmer
## Issue 3
1. We removed advertising cookies, here’s what happened (https://blog.sentry.io/we-removed-advertising-cookies-heres-what-happened) by Matt Henderson
2. Rust-Based JavaScript Linters: Fast, But No Typed Linting Right Now (https://www.joshuakgoldberg.com/blog/rust-based-javascript-linters-fast-but-no-typed-linting-right-now) by Josh Goldberg
3. remote-storage (https://github.com/FrigadeHQ/remote-storage) by Christian Mathiesen
4. Where have all the websites gone? (https://www.fromjason.xyz/p/notebook/where-have-all-the-websites-gone) by Jason Velazquez
5. WebSockets Unlocked: Mastering the Art of Real-Time Communication (https://dev.to/raunakgurud09/websockets-unlocked-mastering-the-art-of-real-time-communication-2lnj) by Raunak Gurud
6. 2023 JavaScript Rising Stars (https://risingstars.js.org/2023/en) by BestOfJS
7. Is Blazor the Future of Everything Web? (https://www.telerik.com/blogs/is-blazor-future-everything-web) by Ed Charbeneau
8. Is htmx Just Another JavaScript Framework? (https://htmx.org/essays/is-htmx-another-javascript-framework/) by Alexander Petros
9. State of WebAssembly 2023 (https://platform.uno/blog/state-of-webassembly-2023-2024/) by Gerard Gallant
10. 16 Lesser known Accessibility Issues (https://toward.studio/latest/16-lesser-known-accessibility-issues) by Matthew Jackson
## Issue 4
1. GitHub Readme: Responsive? 🤔 Animated? 🤯 Light and dark modes? 😱 You bet! 💪🏼 (https://dev.to/grahamthedev/take-your-github-readme-to-the-next-level-responsive-and-light-and-dark-modes--3kpc) by @grahamthedev
2. CSS 3D Clouds (https://spite.github.io/CSS3DClouds/) by Jaume Sánchez
3. Introducing createPages (https://waku.gg/blog/introducing-create-pages) by Sophia Andren
4. Derivations in Reactivity (https://dev.to/this-is-learning/derivations-in-reactivity-4fo1) by @ryansolid
5. Introducing style queries (https://ishadeed.com/article/css-container-style-queries/) by Ahmad Shadeed
6. Web Components 2024 Winter Update (https://eisenbergeffect.medium.com/web-components-2024-winter-update-445f27e7613a) by Rob Eisenberg
7. The Bun Shell (https://bun.sh/blog/the-bun-shell) by Jarred Sumner
8. Top Front-End Tools Of 2023 (https://www.smashingmagazine.com/2024/01/top-frontend-tools-2023/) by Louis Lazaris
9. 5 CSS snippets every front-end developer should know in 2024 (https://web.dev/articles/5-css-snippets-every-front-end-developer-should-know-in-2024) by Adam Argyle
10. A Practical Introduction to Scroll-Driven Animations with CSS scroll() and view() (https://tympanus.net/codrops/2024/01/17/a-practical-introduction-to-scroll-driven-animations-with-css-scroll-and-view/) by Adam Argyle
## Issue 5
1. HTMX and Web Components: a Perfect Match (https://binaryigor.com/htmx-and-web-components-a-perfect-match.html) by Igor Roztropiński
2. Roadmap 2024 from Biome (https://biomejs.dev/blog/roadmap-2024) by Emanuele Stoppa
3. Vocs - Minimal Documentation Framework (https://vocs.dev) by Jake Moxey
4. Perfect Web Framework (https://nuejs.org/blog/perfect-web-framework) by Tero Piirainen
5. We Forget Frontend Basics (https://blog.stackademic.com/we-forgot-frontend-basics-2f9a1c4dabaa) by Pavel Pogosov
6. All About Dynamic Routing in Single Page Applications (https://blog.openreplay.com/dynamic-routing-in-single-page-applications) by John Abraham
7. A Guide to Styling Tables (https://dev.to/madsstoumann/a-guide-to-styling-tables-28d2) by @madsstoumann
8. In Loving Memory of Square Checkbox (https://tonsky.me/blog/checkbox/) by Nikita Prokopov
9. Type annotations for performance (https://goose.icu/porffor-types/) by CanadaHonk
10. Web Components in Earnest (https://naildrivin5.com/blog/2024/01/24/web-components-in-earnest.html) by David Bryant Copeland
## Issue 6
1. Why I chose Tauri instead of Electron (https://itnext.io/why-i-chose-tauri-instead-of-electron-e67b34f8857d) by Guilherme Oenning
2. Zed is now open source (https://zed.dev/blog/zed-is-now-open-source) by Nathan Sobo
3. Astro 4.2 (https://astro.build/blog/astro-420) by Emanuele Stoppa
4. Server-side rendering local dates without FOUC (https://blog.6nok.org/server-side-rendering-local-dates-without-fouc) by Fatih Altinok
5. 12 Modern CSS One-Line Upgrades (https://moderncss.dev/12-modern-css-one-line-upgrades/) by Stephanie Eckles
6. Describe APIs using TypeSpec (https://typespec.io/) by Timothee Guerin and others at Microsoft
7. Pa11y is your automated accessibility testing pal (https://pa11y.org/) by Rowan Manning
8. Bundle Splitting (https://www.bigbinary.com/blog/bundle-splitting) by Labeeb Latheef
9. Frontend Masters: Feature-Sliced Design (FSD) Pattern (https://blog.stackademic.com/frontend-masters-feature-sliced-design-fsd-pattern-81416088b006) by Ismail Harmanda
10. The web just gets better with Interop 2024 (https://webkit.org/blog/14955/the-web-just-gets-better-with-interop/) by Jen Simmons
## Issue 7
1. Announcing TypeScript 5.4 Beta (https://devblogs.microsoft.com/typescript/announcing-typescript-5-4-beta) by Daniel Rosenwasser
2. Nuxt 3.10 (https://nuxt.com/blog/v3-10) by Daniel Roe
3. Event Loop. Myths and reality (https://blog.frontend-almanac.com/event-loop-myths-and-reality) by Roman Maksimov
4. Elysia: A Bun-first Web Framework (https://dev.to/oggy107/elysia-a-bun-first-web-framework-1kf3) by @oggy107
5. Million 3.0: All You Need to Know (https://dev.to/tobysolutions/million-30-all-you-need-to-know-3d2) by @tobysolutions
6. Deno in 2023 (https://deno.com/blog/deno-in-2023) by Andy Jiang
7. Adding type safety to object IDs in TypeScript (https://www.kravchyk.com/adding-type-safety-to-object-ids-typescript) by Maciej Kravchyk
8. Labyrinthos - JavaScript Procedural Generator for Mazes (https://yantra.gg/labyrinthos/) by marak.eth
9. Discovering the Latest in JavaScript: New Features for 2024 (https://blog.openreplay.com/javascript--new-features-for-2024) by Teslim Balogun
10. Does HTML Structure Matter for SEO? (https://searchengineland.com/html-structure-matter-seo-437131) by Ryan Jones
## Issue 8
1. CSS Cartoons (https://dev.to/alvaromontoro/css-cartoons-29bp) by @alvaromontoro
2. Why you don't need React (https://md.jtmn.dev/blog/%F0%9F%92%BB+Programming/PR-007+-+Why+you+don't+need+React) by John Nguyen
3. 7 Common Front End Security Attacks (https://dev.to/tinymce/7-common-front-end-security-attacks-372p) by @mrinasugosh
4. Announcing AdonisJS v6 (https://adonisjs.com/blog/adonisjs-v6-announcement) by Harminder Virk
5. CSS Media Query for Scripting Support (https://blog.stephaniestimac.com/posts/2023/12/css-media-query-scripting) by Stephanie Stimac
6. What every dev should know about using Environment Variables (https://expo.dev/blog/what-are-environment-variables) by Kadi Kraman
7. Hot Module Replacement is Easy (https://bjornlu.com/blog/hot-module-replacement-is-easy) by Bjorn Lu
8. Open Sourcing the Remix Website (https://remix.run/blog/oss-remix-dot-run) by Brooks Lybrand
9. Browsers Are Weird Right Now (https://tylersticka.com/journal/browsers-are-weird-right-now/) by Tyler Sticka
10. CSS Formalize (https://www.cssformalize.com/) by Pelin Oleg
## Issue 9
1. APIs testing using HTTP files and Rest Client (https://devblogs.microsoft.com/ise/api-testing-using-http-files/) by Ayman Mahmoud
2. Vite 5.1 is out! (https://vitejs.dev/blog/announcing-vite5-1) by Evan You
3. Squeezing Last Bit Of JavaScript Performance For My Automation Game (https://ruoyusun.com/2024/01/23/cividle-optimization) by Ruoyu Sun
4. How we Increased Search Traffic by 20x in 4 Months with the Next.js App Router (https://hardcover.app/blog/next-js-app-router-seo) by Adam Fortuna
5. React Labs: What We've Been Working On – February 2024 (https://react.dev/blog/2024/02/15/react-labs-what-we-have-been-working-on-february-2024) by Joseph Savona, Ricky Hanlon, Andrew Clark, Matt Carroll and Dan Abramov
6. Angular v17.2 is now available (https://blog.angular.io/angular-v17-2-is-now-available-596cbe96242d) by Minko Gechev
7. Tailwind CSS under the hood (https://m4xshen.dev/posts/tailwind-css-under-the-hood) by Max Shen
8. How to Use Google Gemini with Node.js (https://dev.to/arindam_1729/how-to-use-google-gemini-with-nodejs-2d39) by arindam_1729
9. Future of CSS: Functions and Mixins (https://dev.to/link2twenty/future-of-css-functions-and-mixins-229c) by link2twenty
10. Interaction to Next Paint becomes a Core Web Vital on March 12 (https://web.dev/blog/inp-cwv-march-12) by Jeremy Wagner and Rick Viscomi
## Issue 10
1. 🔒Securing Web: A Deep Dive into Content Security Policy (CSP) (https://dev.to/vashnavichauhan18/securing-web-a-deep-dive-into-content-security-policy-csp-2nna) by vashnavichauhan18
2. Beyond REST and GRAPHQL: Why You Should Consider RPC (and Why tRPC Makes It Easy) (https://opyjo2.hashnode.dev/beyond-rest-and-graphql-why-you-should-consider-rpc-and-why-trpc-makes-it-easy) by Opeyemi Ojo
3. Microdot: a web framework for microcontrollers (https://lwn.net//Articles/959067/) by Jake Edge
4. Web Development Is Getting Too Complex, And It May Be Our Fault (https://www.smashingmagazine.com/2024/02/web-development-getting-too-complex/) by Juan Diego Rodriguez
5. Express.js Spam PRs Incident Highlights the Commoditization of Open Source Contributions (https://socket.dev/blog/express-js-spam-prs-commoditization-of-open-source) by Sarah Gooding
6. JSR First Impressions (https://www.kitsonkelly.com/posts/jsr-first-impressions) by Kitson P. Kelly
7. How to Favicon in 2024: Six files that fit most needs (https://evilmartians.com/chronicles/how-to-favicon-in-2021-six-files-that-fit-most-needs) by Andrey Sitnik
8. UI = f(statesⁿ) (https://daverupert.com/2024/02/ui-states) by Dave Rupert
9. JavaScript and TypeScript Trends 2024: Insights From the Developer Ecosystem Survey (https://blog.jetbrains.com/webstorm/2024/02/js-and-ts-trends-2024/) by David Watson
10. Using a CSP nonce in Blazor Web (https://damienbod.com/2024/02/19/using-a-csp-nonce-in-blazor-web/) by Damien Bod
## Issue 11
1. htmz (https://leanrada.com/htmz/) by Lean Rada
2. How to Streamline Lead Generation with Your Website (https://www.telerik.com/blogs/how-to-streamline-lead-generation-website) by Suzanne Scaca
3. Why is Prettier rock solid? (https://mrmr.io/til/prettier) by Manav Rathi
4. Exploring CSS where it doesn't make sense (https://dev.to/samuel-braun/exploring-css-where-it-doesnt-make-sense-417k) by samuel-braun
5. Building an audio player app with the .NET Uno Platform (https://scientificprogrammer.net/2024/02/19/building-an-audio-player-app-with-the-net-uno-platform/) by Fiodar Sazanavets
6. Frontend Trends for 2024: CSS Revival, BFF, Ruling Languages and More (https://www.codemotion.com/magazine/frontend/frontend-trends-for-2024-css-revival-bff-ruling-languages-and-more) by Diego Petrecolla
7. Vanilla JavaScript, Libraries, And The Quest For Stateful DOM Rendering (https://www.smashingmagazine.com/2024/02/vanilla-javascript-libraries-quest-stateful-dom-rendering) by Frederik Dohr
8. Micro frontends: Shell vs. Micro Apps (https://blog.bitsrc.io/micro-frontends-shell-vs-micro-apps-5ad809a9b85a) by Ashan Fernando
9. Updates from the 100th TC39 meeting (https://dev.to/hemanth/updates-from-the-100th-tc39-meeting-4j2f) by hemanth
10. Making SVG Loading Spinners: An Interactive Guide (https://fffuel.co/svg-spinner) by Sébastien Noël
## Issue 12
1. Bloom Filters (https://samwho.dev/bloom-filters/?palette=tol#bf0) by Sam Rose
2. Towards Qwik 2.0: Lighter, Faster, Better (https://www.builder.io/blog/qwik-2-coming-soon) by the QWIK team
3. Disillusioned with Deno (https://www.baldurbjarnason.com/2024/disillusioned-with-deno/) by Baldur Bjarnason
4. JavaScript Bloat in 2024 (https://tonsky.me/blog/js-bloat) by Nikita Prokopov
5. JSDoc as an alternative TypeScript syntax (https://alexharri.com/blog/jsdoc-as-an-alternative-typescript-syntax) by Alex Harri
6. The Surprising Truth About Pixels and Accessibility (https://www.joshwcomeau.com/css/surprising-truth-about-pixels-and-accessibility) by Josh W. Comeau
7. Catching Up With The Latest Features In Angular (https://ionic.io/blog/catching-up-with-the-latest-features-in-angular) by Mike Hartington
8. A practical guide to using shadow DOM (https://www.mayank.co/blog/declarative-shadow-dom-guide/) by Mayank
9. 22 years later, YAML now has a media type (https://httptoolkit.com/blog/yaml-media-type-rfc/) by Tim Perry
10. Blazor’s Enhanced Navigation Fully Explained (https://www.telerik.com/blogs/blazor-enhanced-navigation-fully-explained) by Ed Charbeneau
## Issue 13
1. Create VS Code Extension with React, TypeScript, Tailwind (https://dev.to/rakshit47/create-vs-code-extension-with-react-typescript-tailwind-1ba6) by rakshit47
2. Design Systems for 2024 (https://dev.to/leonardorafael/design-systems-for-2024-1pog) by leonardorafael
3. CSS for printing to paper (https://voussoir.net/writing/css_for_printing) by Voussoir
4. Bugs I've filed on browsers (https://nolanlawson.com/2024/03/03/bugs-ive-filed-on-browsers/) by Nolan Lawson
5. Parcel v2.12.0 (https://parceljs.org/blog/v2-12-0) by Parcel Team
6. Why Vite is the best? Advanced Features of Vite (https://dev.to/codeparrot/why-vite-is-the-best-advanced-features-of-vite-3hp6) by gautam_vaja_8ca93ec2c115d
7. Can you make your website green 🌳♻🌳? (https://dev.to/fanmixco/can-you-make-your-website-green--1lb7) by fanmixco
8. OpenJS Launches New Collaboration to Improve Interoperability of JavaScript Package Metadata (https://socket.dev/blog/openjs-improve-interoperability-of-javascript-package-metadata) by OpenJS Foundation
9. NodeJS Security Best Practices (https://dev.to/mohammadfaisal/nodejs-security-best-practices-34ck) by mohammadfaisal
10. JSR: First Impressions (https://dbushell.com/2024/02/16/jsr-first-impression) by David Bushell
## Issue 14
1. Mario-Kart3.js (https://github.com/Lunakepio/Mario-Kart-3.js) by Lunakepio
2. Survey Results and Roadmap (https://deno.com/blog/2024-survey-results-and-roadmap) by Andy Jiang and Alon Bonder
3. Avoiding Hydration Mismatches with useSyncExternalStore (https://tkdodo.eu/blog/avoiding-hydration-mismatches-with-use-sync-external-store) by Dominik Dorfmeister
4. CSS Hooks: A new way to style your React apps (https://www.thisdot.co/blog/css-hooks-a-new-way-to-style-your-react-apps) by Jamie Kuppens
5. Streaming HTML out of order without JavaScript (https://lamplightdev.com/blog/2024/01/10/streaming-html-out-of-order-without-javascript) by Chris Haynes
6. The Wax and the Wane of the Web (https://alistapart.com/article/the-wax-and-the-wane-of-the-web/) by Ste Grainer
7. CSS :has() Interactive Guide (https://ishadeed.com/article/css-has-guide/) by Ahmad Shadeed
8. How do I test and mock Standalone Components? (https://dev.to/this-is-angular/how-do-i-test-and-mock-standalone-components-508e) by rainerhahnekamp
9. Introducing JSR - the JavaScript Registry (https://deno.com/blog/jsr_open_beta) by Luca Casonato, Ryan Dahl, Kevin Whinnery
10. The FAST and the Fluent: A Blazor story (https://devblogs.microsoft.com/dotnet/the-fast-and-the-fluent-a-blazor-story/) by Vincent Baaij
## Issue 15
1. CSS Scroll-triggered Animations with Style Queries (https://ryanmulligan.dev/blog/scroll-triggered-animations-style-queries/) by Ryan Mulligan
2. Open-sourcing our progress on Tailwind CSS v4.0 (https://tailwindcss.com/blog/tailwindcss-v4-alpha) by Adam Wathan
3. My talk on CSS runtime performance (https://nolanlawson.com/2023/01/17/my-talk-on-css-runtime-performance/) by Nolan Lawson
4. 3 Advanced Framer Motion Effects in React (https://dev.to/salehmubashar/3-advanced-famer-motion-effects-in-react-3nm7) by salehmubashar
5. Stop Manually Coding UI Components! 🔼❌ (https://dev.to/arjuncodess/stop-manually-coding-ui-components-1h4f) by arjuncodess
6. Gleam version 1 (https://gleam.run/news/gleam-version-1) by Louis Pilfold
7. Application Shell for React Micro Frontends (https://blog.bitsrc.io/application-shell-for-react-micro-frontends-daa944caa8f3) by Eden Ella
8. WASI 0.2: Unlocking WebAssembly’s Promise Outside the Browser (https://thenewstack.io/wasi-0-2-unlocking-webassemblys-promise-outside-the-browser) by Tyler McMullen and Luke Wagner
9. Why React Server Components Are Breaking Builds to Win Tomorrow (https://www.builder.io/blog/why-react-server-components) by Vishwas Gopinath
10. CSS Tools for Enhanced Web Design (https://dev.to/lilxyzz/useful-css-tools-17bc) by lilxyzz
## Conclusion
Sorry that I only at-mentioned 10 of the authors from articles that can be found on *dev.to*. There is a hard limit: you can only at-mention 10.
👉 Follow me on [LinkedIn](https://www.linkedin.com/in/florian-rappl/), [Twitter](https://twitter.com/FlorianRappl), or here for more to come.
🙏 Thanks to all the authors and contributors for their hard work!
| florianrappl |
1,788,958 | What Is Test Plan: Guidelines And Importance With Examples | OVERVIEW A test plan is a precious written document that describes the testing strategy for a... | 0 | 2024-03-13T09:28:21 | https://dev.to/devanshbhardwaj13/what-is-test-plan-guidelines-and-importance-with-examples-8l8 | testing, softwaredevelopment, software, cloud | OVERVIEW
A **test plan** is a written document that describes the [testing strategy](https://www.lambdatest.com/blog/blueprint-for-test-strategy/?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=blog) for a software or hardware project. It outlines the scope of testing, the resources needed, the test environment, and the test cases that will be executed. Its purpose is to ensure that the testing process is thorough and complete, and that all necessary tests are conducted in a systematic, coordinated way.
It acts as a detailed reference for verifying that the software works properly. The output of the testing phase is directly related to the quality of the planning that went into it. Test plans are usually developed during the development phase to save time during test execution and to reach a mutual agreement with all the parties involved.
Although software testing is a foundational concept in software development, the practice of taking time to create a software testing plan is often minimized or ignored altogether. This is unfortunate, because a plan can significantly benefit any project regardless of its life cycle.
**How to write a Test Plan?**
* Learn about the product
* Scope of testing
* Develop test cases
* Develop a test strategy
* Define the test objectives
* Selecting testing tools
If you want to delve deeper into these points, you can explore them further in the latter part of this blog.
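The steps above map naturally onto a simple, reusable template. Here is a minimal Python sketch of such a template; the field names and readiness check are illustrative, not part of any standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """Minimal, illustrative test-plan skeleton mirroring the steps above."""
    product: str
    scope: list[str] = field(default_factory=list)       # features in / out of scope
    objectives: list[str] = field(default_factory=list)  # what "done" means
    strategy: str = "manual + automated"                 # overall test approach
    tools: list[str] = field(default_factory=list)       # selected testing tools
    test_cases: list[str] = field(default_factory=list)  # case identifiers

    def is_ready(self) -> bool:
        # A plan is executable only once scope, objectives, and cases exist.
        return bool(self.scope and self.objectives and self.test_cases)

plan = TestPlan(product="Web Application XYZ v1.0")
plan.scope = ["login", "checkout"]
plan.objectives = ["all critical defects fixed before release"]
plan.test_cases = ["TC-001 login with valid credentials"]
print(plan.is_ready())  # True once the core sections are filled in
```

Even a lightweight structure like this makes it obvious which planning steps are still missing before testing begins.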
## What is a Test Plan?
A test plan is a comprehensive document outlining the strategy, scope, and objectives of software testing. It includes key components like test objectives, test environments, test cases, and schedules. The purpose of a test plan is to ensure systematic and effective testing, identify defects, and improve software quality. Benefits include minimizing risks, enhancing product reliability, and meeting customer expectations.
Test planning is the most important activity in ensuring that software testing is done according to plan. The test plan acts as the go-to template for conducting software testing activities that are fully monitored and controlled by the testing manager. Developing a test plan involves contributions from the test lead, the test manager, and the test engineer.
Along with identifying the objectives and scope of the software testing process, it specifies the methods to resolve any risks or errors. It also helps determine whether a product meets the expected quality standards before deployment.
## Test Plan Examples
Developing a test plan is an essential step in the software testing process. It is a document that outlines the strategy, approach, resources, and schedule for testing a software application. A well-written product test plan helps ensure that the software is thoroughly tested, meets the requirements, and is free of defects. It should be developed early in the software development process to incorporate testing into the overall development schedule. These application test plans must be reviewed and approved by the key stakeholders before testing begins.
To develop a test plan, you need to follow a series of steps in the testing process. Here is a test plan example for a hypothetical software application:
**1. Introduction:**
This testing plan is for the Web Application XYZ, version 1.0. The objective of this testing is to ensure that the web application meets the requirements and is free of defects.
**2. Test Items:**
* Web Application: XYZ, version 1.0
* Build Number: 101
**3. Features to be tested:**
* User login and registration
* User profile management
* Search and filtering functionality
* Shopping cart and checkout
* Payment gateway integration
* Email and SMS notifications
* Data export
**4. Test Environment:**
* Operating System: Windows 10, MacOS
* Browser: Google Chrome, Firefox, Safari
* Hardware: Intel i5 processor, 8GB RAM
* Server: AWS
**5. Test Schedule:**
* Test Planning: e.g., January 15th — January 20th
* Test Case Development: e.g., January 21st — January 25th
* Test Execution: e.g., January 26th — February 5th
* Test Closure: e.g., February 6th
**6. Test Deliverables:**
* Test cases
* Test scripts
* Test reports
* Defect reports
* Performance test report
* Responsive test report
**7. Test Responsibilities:**
* Test Lead: Responsible for overall test planning and execution
* Test Engineer: Responsible for developing test cases and scripts, and executing tests
* Developer: Responsible for fixing defects and providing support during testing
* Server administrator: Responsible for maintaining the test environment
**8. Test Approach:**
* Manual testing will be used to test all the functionalities of the web application
* Automated testing will be used to test the performance and load of the web application
* Responsive testing will be done to ensure the web application is compatible with different devices and screen sizes
**9. Exit Criteria:**
* All the identified defects must be fixed and verified.
* All the test cases must be executed and passed.
* All the test deliverables must be completed and submitted.
* The performance test must meet the defined threshold limits.
This is a sample test plan; it may vary depending on the complexity of the web application and the organization's testing process. Likewise, you can create your own test plan based on your software application's requirements.
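To show how the plan's "Features to be tested" translate into concrete deliverables, here is a pytest-style sketch of test cases for the user login feature. The `login` function is a hypothetical stand-in for the real application code, with hard-coded fixture data:

```python
# Illustrative pytest-style cases for the "user login" feature in the plan
# above. The `login` function is a stand-in, not the real application.
def login(username: str, password: str) -> bool:
    users = {"alice": "s3cret"}  # hypothetical fixture data
    return users.get(username) == password

def test_login_with_valid_credentials():
    assert login("alice", "s3cret") is True

def test_login_rejects_wrong_password():
    assert login("alice", "wrong") is False

def test_login_rejects_unknown_user():
    assert login("bob", "s3cret") is False
```

Each feature listed in section 3 of the plan would get its own set of cases like these, and the pass/fail results feed the test reports listed among the deliverables.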
## Why are Test Plans So Important?
A **test plan** is the building block of every testing effort. It guides you through the process of checking a software product and clearly defines who will perform each step. A clear software development test plan ensures everyone is on the same page and working effectively towards a common goal.
Whether you are building an app or open-source software, a testing project plan helps you create a strong foundation.
The importance of a high-quality test plan rises when developers and testers need to identify risk areas, allocate resources efficiently, and determine the order of testing activities. A well-developed testing plan acts as a blueprint that can be referred to at any stage of the product or [software development life cycle](https://www.lambdatest.com/blog/software-testing-life-cycle/?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=blog).
Let’s discuss these processes in detail:
* **Determining the scope of Test Automation:**
Determining the scope of test automation is an important step in creating an effective software testing plan. It involves identifying which parts of the software should be automated and which should be tested manually. Factors to consider include the complexity of the software, the time and resources available for testing, and the importance of the software’s functionality. By determining the scope of test automation, the test plan ensures that the testing process is efficient and that the software is thoroughly tested.
* **Selecting the right tool for Automation:**
Selecting the right tool for automation is an important step in creating a good testing plan. It involves evaluating different automation tools and selecting the one best suited for the software being tested. Factors to consider include the tool’s compatibility with the software and its environment, the cost of the tool, and the level of support provided by the vendor. Selecting the right tool ensures that the testing process is efficient and effective and that the software is thoroughly tested.
* **Test Plan + Test Design + Test Strategy:**
A test plan, test design, and [test strategy](https://www.lambdatest.com/learning-hub/test-strategy?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=learning_hub) are all closely related, and all three should be reflected in the plan. A test plan outlines the overall strategy and approach for testing the software. A test design includes the specific test cases, test scripts, and test data that will be used to test the software. A test strategy outlines the overall approach to testing, such as manual testing or automated testing, and how the testing process will be executed. Including all of these elements ensures that the testing process is thorough and effective and that the software is thoroughly tested.
*Explore our [Analytical Test Strategy](https://www.lambdatest.com/learning-hub/analytical-test-strategy?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=learning_hub) Guide. Understand its importance, roles, tools, defects management, metrics, automation, and more. Master analytical testing confidently.*
* **Setting up the test environment:**
Setting up the test environment is an important step in creating a test plan, as it involves configuring the hardware and software that will be used for testing, including the operating system, browser versions, and any other relevant details. Setting up the test environment ensures that the testing process is consistent and that all necessary areas are covered, keeping the testing process efficient and effective.
* **Automation test script development + Execution:**
Automation test script development and execution are important steps in creating a software development testing plan. Development involves creating the scripts that will be used to automate the testing process; execution involves running those scripts against the software. Automation allows for repeatability and consistency of tests, which saves time and resources, and it makes it practical to test large amounts of data and scenarios that would be impractical or impossible to cover manually. Including automation test script development and execution in the test plan ensures that the testing process is efficient and effective and that the software is thoroughly tested.
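The repeatability point above can be sketched with a tiny, self-contained data-driven script — the validation rule and data rows here are purely hypothetical, not tied to any real product:

```python
# Hypothetical data-driven sketch: one automated script, many data rows.
# Adding a scenario is just adding a row, which is what makes automated
# scripts repeatable and cheap to extend compared to manual checks.

TEST_DATA = [
    ("user@example.com", True),
    ("missing-at-sign.com", False),
    ("", False),
]

def looks_like_email(value: str) -> bool:
    """Stub check used only for illustration."""
    return "@" in value and "." in value.split("@")[-1]

def run_suite():
    """Return the inputs whose actual result differs from the expected one."""
    return [value for value, expected in TEST_DATA
            if looks_like_email(value) != expected]

print(run_suite())  # → [] when every data row behaves as expected
```

The same pattern scales to hundreds of rows loaded from a CSV or database, which is exactly the kind of coverage that is impractical to repeat by hand.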
* **Analysis + Generation of test reports:**
Analysis and generation of test reports are important steps in creating an exhaustive testing plan. Analysis involves evaluating the results of the testing process, identifying any defects, and determining the overall quality of the software. Generation of test reports involves creating documents that summarize the results of the testing process, including any defects that were identified and how they were resolved. These reports help to ensure that the testing process is thorough and effective. They also provide transparency and accountability to all stakeholders and help to identify areas of improvement.
## Test Plan Vs. Test Strategy
Check out this detailed comparison between Test Plan and Test Strategy.

## Test Plan Guidelines
Before you create your test plan, it is important to consider your intended consumers’ needs and ensure they are being met. This can be done by following a set of guidelines that will improve the quality of your testing plan tenfold.
Here are a few guidelines for preparing an effective test plan:
* **Be Concise:** The plan should be kept concise and no longer than one page. Test managers should avoid redundancy and superfluous detail while developing it.
* **Be Organized:** While developing the plan, group the information logically for better understanding. For instance, when specifying the operating system of a test environment, mention the OS version as well, not just the OS name.
* **Be Readable:** The plan should be easy to read and understand; avoid technical jargon wherever possible. Using lists instead of lengthy paragraphs improves readability.
* **Be Flexible:** The plan should be adaptive, not rigid. To avoid being held back by outdated documentation, create documentation that won’t get in the way when new information comes along or changes need to be made. Test managers should review plans before sending them for approval.
* **Be Accurate:** Ensure the plan contains all the relevant and up-to-date information. Test managers should update the plan as and when necessary.
## How to Write a Test Plan?
Crafting a comprehensive test plan is crucial for successful software testing. It serves as a roadmap, guiding the testing process and ensuring thorough coverage. To create an effective test plan, follow a structured approach. This section provides step-by-step guidance on how to write a test plan, covering key elements like defining objectives, determining scope, identifying test cases, establishing a strategy, creating a schedule, and conducting reviews. Mastering the art of test plan creation enhances testing quality, minimizes risks, and delivers reliable software solutions that meet customer expectations.
To [write an effective test plan](https://www.lambdatest.com/software-testing-questions/how-to-write-a-test-plan) in software testing, you need to follow these steps:
* **Learn about the product:**
Before testing begins, it is important to know everything about the product/software. Questions such as how it was developed, its intended purpose, and how it works should be asked to understand its functionality better.
By understanding your software correctly, you can create test cases that provide the most insight into flaws or defects in your product.
* **Scope of testing:**
To keep the test plan at a reasonable length, before creating it, you should consider what aspects of the product will be tested, what modules or functions need to be covered in detail, and any other essential information you should know about.
The scope of a test determines the areas of a customer’s product to be tested, what functionalities to focus on, what bug types the customer has an interest in, and which areas or features should not be tested.
* **Develop test cases:**
Writing an effective test case is one of the most important parts of creating a software testing document. A test case is a document that helps you track what you have tested and what you have not. It should include information such as:
* What needs to be tested
* How it will be tested
* Who will do the testing?
* Expected results
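For illustration, here is how those four pieces of information can be captured directly in an automated test case — the login rule and all names below are hypothetical, not taken from any particular product:

```python
# What is tested: a login validation rule (hypothetical stub).
# How it is tested: call the function with known inputs.
# Who runs it: the automation suite (a named tester would be recorded elsewhere).
# Expected results: encoded in the assertions themselves.

def validate_login(username: str, password: str) -> bool:
    """Stub rule: both fields present and password at least 8 characters."""
    return bool(username) and len(password) >= 8

def test_valid_credentials_are_accepted():
    assert validate_login("qa_user", "s3cretpass") is True

def test_short_password_is_rejected():
    assert validate_login("qa_user", "short") is False
```

Keeping the expected result inside the test case itself means the document and the check can never drift apart.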
* **Develop a test strategy:**
A test strategy is used to outline the testing approach for the software development cycle. It provides a rational deduction from organizational, high-level objectives to actual test activities to meet those objectives from a quality assurance perspective.
* **Define the test objectives:**
The test objectives form the foundation of your testing strategy. They are a prioritized list to verify that testing activity is consistent with project objectives and measure your progress towards completing all the test objectives. Every test case should be linked to a test objective.
Objectives might include things like:
* Testing features you know are stable.
* Testing newly implemented features.
* Executing exploratory tests.
* Strengthening product stability throughout the lifecycle.
* **Selecting testing tools:**
Choosing the right software testing solution can be difficult. Some of these tools are software-based, while others may require a physical resource like a test machine. Therefore, choosing the best website testing tools for each job is essential, rather than relying on a one-size-fits-all solution.
{% youtube 7CYgItuHq5M %}
You can also Subscribe to the LambdaTest YouTube Channel and stay updated with the latest tutorials around [Selenium testing](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage), [Cypress testing](https://www.lambdatest.com/cypress-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage), CI/CD, and more.
LambdaTest allows users to run manual and [automation testing](https://www.lambdatest.com/learning-hub/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=learning_hub) of web and mobile apps across 3000+ browsers, operating systems, and real device combinations.
Over 2 Million users across 130+ countries rely on LambdaTest for their [web testing](https://www.lambdatest.com/web-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage) needs. Using LambdaTest, businesses can ensure quicker developer feedback and achieve faster go-to-market.
## What does LambdaTest offer?
* Run [Selenium](https://www.lambdatest.com/selenium?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage), [Cypress](https://www.lambdatest.com/cypress?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage), [Playwright](https://www.lambdatest.com/playwright?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage), [Puppeteer](https://www.lambdatest.com/puppeteer?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage), and [Appium](https://www.lambdatest.com/appium?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage) automation tests across 3000+ real desktop and mobile environments.
* Live interactive [cross browser testing](https://www.lambdatest.com/?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage) in different environments.
* Perform [Mobile App testing](https://www.lambdatest.com/mobile-app-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage) on Real Device cloud.
* Perform 70% faster test execution with HyperExecute.
* Mitigate test flakiness, shorten job times and get faster feedback on code changes with [TAS (Test At Scale)](https://www.lambdatest.com/test-at-scale?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage).
* Smart [Visual Regression Testing](https://www.lambdatest.com/smart-visual-ui-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage) on cloud.
* LT Browser — for [responsive testing](https://www.lambdatest.com/learning-hub/responsive-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage) across 50+ pre-installed mobile, tablets, desktop, and laptop viewports.
* Capture full page automated screenshot across multiple browsers in a single click.
* Test your locally hosted web and mobile apps with LambdaTest tunnel.
* Perform online [Accessibility testing](https://www.lambdatest.com/accessibility-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=webpage).
* Test across multiple geographies with Geolocation testing feature.
* 120+ third-party integrations with your favorite tool for CI/CD, Project Management, Codeless Automation, and more.
* **Find bugs early:**
To ensure the success of your project, you should allocate time for ‘bug fixing’ sessions in your planning document. This will allow you to identify problems with the software early on, before they become too costly to fix.
* **Defining your test criteria:**
Test criteria should be defined in the test case but can be included on a separate sheet of paper. Test criteria are essential objectives broken down into smaller parts. They have specific information about how each objective will be met, which helps you track your testing progress.
Suspension criteria are conditions that must be met before you can stop testing. For example, if a certain number of bugs have been found or the software cannot run due to performance issues, you may want to suspend testing.
Exit criteria are standards that must be met before testing can end. For example, once each objective has been completed, and all bugs have been resolved, you can stop testing a program.
* **Resource planning:**
Resource planning allows you to allocate and efficiently assign people to projects where needed. It can also help you in areas such as capacity management and resources needed based on roles and project needs. However, this functionality requires insight into project data.
Including a resource plan in your software testing document can help you list the number of people needed for each process stage. This will include what each person’s role is, as well as any training they’ll need to fulfill their responsibilities effectively.
* **Planning test environment:**
To ensure quality testing, including information about the [test environment](https://www.lambdatest.com/blog/what-is-test-environment/?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=blog) you will be testing, along with some other important aspects such as:
* What is the test hardware required for product testing?
* Required sizing for servers and software.
* Product-supported platforms.
* Other testing environment-related information having an impact on the testing process.
* **Plan test team logistics:**
[Test management](https://www.lambdatest.com/blog/importance-of-a-test-management-tool-for-your-project/?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=blog) covers the activities that ensure high-quality testing of software applications: tracking, organizing, controlling, and checking the visibility of the testing process to deliver a high-quality software application.
Test management is one of the most important parts of the implementation process. Communication with your testers is vital to their progress and the usefulness of your testing document.
* **Schedule and estimation:**
A test plan is only as good as its schedule, which should outline specific deadlines and milestones. Milestones are significant events in a product’s development, like the product’s initial release, internal testing sessions, or public beta tests. The schedule should also cover all other key points in time where your team needs to focus its testing efforts.
* **Test deliverables:**
To ensure that your testing document is complete, it should include a list of the deliverables required for testing. Make sure you link these lists to the steps in your schedule so that everyone knows when they need to be ready for action.
* **Test automation:**
If you have a complex piece of software that requires a lot of testing, consider using automated testing. Automating the testing process makes it possible for testers to accomplish more in less time, thereby boosting productivity and significantly reducing the cost of testing. You can even speed up testing activities by using a mobile bot.
## Attributes of a Test Plan
It consists of 16 standard attributes or components. We’ll discuss them in detail now:
* **Objective:** The aim is to ensure that a software product meets its requirements and provides quality functionality to customers. The overall test objective is to find as many defects as possible and make software bug-free. Test objectives must be divided into components and subcomponents. While performing [component testing](https://www.lambdatest.com/learning-hub/component-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=learning_hub), these activities should be performed.
* Identify all the aspects of your product that need to be tested.
* Set goals and targets for the application based on the app’s features.
* **Test Strategy:** Test Strategy is a crucial document to be created by the Test Manager. It helps determine the amount of time and money needed for testing, what features will be tested, and which ones will not. The scope can be divided into two parts:
* **In-scope:** The modules to be rigorously tested or in detail.
* **Out-scope:** The modules that are not to be tested in detail or so rigorously.
To explain it better, check out this example. Features A, B, and C must be developed in an application. However, another company has already designed B; therefore, the development team will purchase B from that company and perform only integrated testing with A, B, and C.
* **Testing Methodology:** The methods used for testing vary from application to application. The testing methodology will depend on the application and its feature requirements. Since the terms used in testing are not standardized, it is important to define the type of testing you will use so that everyone involved with the project can understand it.
* **Approach:** Testing software is a unique process because it requires you to provide feedback on how the application handled certain conditions and scenarios. The approach documents how testing flows through the application so it can be referred to later. It has two aspects:
* **High-level Scenarios:** High-level scenarios are written for testing critical features, such as logging in to a website or ordering items from a website.
* **The Flow Graph:** A flow graph is a visual representation of how the control of a program is parsed among its various blocks. It can be used to make an unstructured program more efficient by eliminating unnecessary loops. It makes it easy to combine benefits such as converging and merging.
* **Assumptions:** Certain assumptions are made in this phase; they are as follows:
* The testing team will receive sufficient support from the development team.
* The tester will get the knowledge needed to perform the job from the development team.
* The company will give the testing department proper resources to do its job.
* **Risk:** If the assumption is wrong, many problems can occur. For example, if the budgeting is off, costs may be overrun. Some reasons for this happening include:
* Poor managerial skills displayed by the testing manager.
* Inability to meet project deadlines.
* Lack of cooperation.
* **Backup/Mitigation Plan:** A company must have a backup plan if anything goes wrong. The purpose of such a plan is to prevent errors. Some points to consider when formulating a backup plan are:
* Each test activity should be assigned a priority.
* Managers shouldn’t lack leadership skills.
* Testers should be granted adequate training sessions.
* **Role and Responsibility:** For the effectiveness of a testing team, every member’s role and responsibilities must be clearly defined and recorded. For instance,
* **Test Manager:** A test manager is essential in developing software, as they manage the project, assign resources, and give the team direction.
* **Tester:** A tester can save a project time and money by identifying the most appropriate testing technique and verifying the testing approach.
* **Scheduling:** This procedure will ensure that you record the start and end dates for every testing activity, a crucial part of the testing process. For example, you may record the creation date for a test case and the date when you finished reviewing it.
* **Defect Tracking:** Software testing is an integral part of the development process, as it helps determine whether a program has flaws. If any defects are discovered during testing, they should be communicated to the programmer. Here is the process for tracking defects:
* **Information Capture:** We begin the process by taking basic information.
* **Prioritizing the Tasks:** The task is prioritized based on the severity of the problem and its importance to the organization.
* **Communicate the Defects:** The communication between the identifier and fixer of bugs is essential to the smooth running of any project.
* **Test Different Environments:** Test your application based on various hardware and software configurations to ensure its compatibility with different platforms.
* **Test Environment:** The testing team will use a specific environment, and this section lists what it comprises, including the hardware and software. Installation of all software will also be checked under this. For instance, testers may require software configurations on different operating systems, such as Windows, Linux, and macOS, which affect their efficiency. The hardware configuration consists of RAM, ROM, etc.
* **Entry and Exit Criteria:** The conditions that need to be met before any new testing is started or any type of testing ends are known as entry and exit criteria.
* The necessary resources for a project must be available.
* The application must be fully prepared and submitted.
* Your test data should be prepared.
* There should be no major bugs in the program.
* When all required test cases are executed and passed.
* **Test Automation:** It acknowledges the features that are to be automated and the features that are not to be automated.
* The more bugs there are, the more time a tester needs to find and fix them. A feature that has many bugs is usually assigned to [manual testing](https://www.lambdatest.com/learning-hub/manual-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar_13&utm_term=bw&utm_content=learning_hub).
* Automating a feature can be more economical and efficient than performing manual tests.
* **Deliverables:** Deliverables are evidence of progress made by the testing team and are handed over to stakeholders at the different phases of the project.
Before Testing Phase:
* You should develop a test plan document that outlines all aspects of testing.
* The test case document should be created before the testing phase.
* Test design specifications should be listed down.
During Testing Phase:
* Test data is the information testers use to complete test cases. It can be entered manually or created with a tool.
* Test scripts are a detailed description of the transactions that must be performed to validate an application or a system. Test scripts should list each step taken, along with the expected result.
* An error log is a document that lists the defects found and how to correct them.
After Testing Phase:
* Test report should be generated.
* Defect report should be generated.
* Installation report should be generated.
The after-testing phase produces the test and defect reports, which are used to monitor and control the testing effort; the other deliverables summarize it.
* **Templates:** Standard report templates are used for every test report prepared by the testing team.
## Conclusion
Planning the test is an important activity of the testing process, regardless of which approach you use.
When conducting a test, it is necessary to prepare and plan. Some resources are needed for your tests, such as people and environments. The test plan defines these resources and expresses their needs so they are ready when you need them.
A test plan is a detailed blueprint that specifies how the testing will be carried out and what needs to be done to ensure that the software being built meets its requirements. It also provides a means of communicating to the rest of the organization and other organizations about how testing is planned. Without having a testing plan, people won’t know what to expect and may not have solid knowledge about testing goals.
We hope this tutorial has addressed all of your questions about test plans.
Happy Testing!
| devanshbhardwaj13 |
1,788,966 | Nebular Express - devlog #4 - This is the end of the beginning | Hi, I started this game for the Minigame a month - FEBRUARY 2024 Game Jam. Unfortunately, I didn’t... | 26,766 | 2024-03-13T09:39:57 | https://dev.to/darkelda/nebular-express-devlog-4-this-is-the-end-of-the-beginning-4kbc | godot, gamedev, development, beginners | Hi,
I started this game for the [Minigame a month - FEBRUARY 2024 Game Jam](https://itch.io/jam/minigame-a-month-february-2024).
Unfortunately, I didn’t finish it on time and was ranked 88th out of 95…

## What went right?
- I got motivated by participating in this Game Jam
- I have defined a clear plan to create a game
- I started to dive seriously into GameDev
## What went wrong?

- I lost a lot of time watching video tutorials for versions of Godot that I’m not using.
- I lost a lot of time polishing minor elements instead of working on the game mechanics.
- Even though I didn’t fall into “scope creep”, the initial scope was too large for a beginner.
- I didn’t dedicate enough time to work on this game.
## What did I learn?

- I have a lot to learn before participating in a Game Jam.
- I had a clear vision of what the game should look like and, as a beginner, my expectations were too high for my skills.
- Communication is the key; I should have taken more time on marketing to engage more potential fans and get more feedback.
- I should have used the same processes I use professionally to stay organized and transparent to gamers.
## Next steps

For me, it’s just the end of the beginning. I wanted to start GameDev as a software engineer, and I think this goal has been reached because I want to finish what I have started.
I’m sorry to the people who tried this game and didn’t have the fun I expected them to have while playing it.
If you accept, I’ll be pleased to have a second chance to finish this game with a reasonable and realistic timeline. I’ll share more information with you on the next steps.
Thank you all! Stay tuned, see you soon 👋
> Originally published on [Itch.io](https://darkelda.itch.io/nebular-express/devlog/694382/devlog-4-this-is-the-end-of-the-beginning) | darkelda |
1,788,976 | How To Fix Samsung SSD Not Detected Mac book | In case you’ve got yourself the new samsung SSD T7, T9 etc like myself. you might encounter a problem... | 0 | 2024-03-13T09:46:55 | https://dev.to/lior_amsalem/how-to-fix-samsung-ssd-not-detected-mac-book-4n8j | In case you’ve got yourself the new samsung SSD T7, T9 etc like myself. you might encounter a problem during the initial setup. in the empty SSD you’ll find a setup file for macbook called “SamsungPortableSSD_Setup_Mac_1.0.pkg” that is used to install “samsung portable SSD” application for samsung SSD external device.
**T9 Not Detected**
Even if you connect/disconnect the USB cable, you might still encounter the Samsung Portable SSD problem. The error will remain even if you restart your Mac or reinstall the Samsung Portable SSD software.
**Solution**
Simply follow the steps in the article linked below to fix the problem and continue the setup of your T7 or T9 external SSD.
[Read More](https://lior.live/general/how-to-fix-samsung-ssd-not-detected-mac-book/) | lior_amsalem | |
1,789,103 | Jenkins Server setup using Terraform | In this project, we are going to setup a Jenkin server using Terraform script. This Terraform script... | 0 | 2024-03-13T11:33:00 | https://dev.to/gbenga700/jenkins-server-setup-using-terraform-2e7o | webdev, devops, git, aws | In this project, we are going to setup a Jenkin server using Terraform script. This Terraform script will spin up an EC2 instance server on AWS with the required tools to run and manage a Jenkins server.
Setting up a Jenkins server on AWS EC2 using Terraform requires a few prerequisites.
• **AWS Account**: You need an AWS account to create and manage resources like EC2 instances.
• **Terraform Installed**: Install Terraform on your local machine to run Terraform commands. Ensure it’s configured properly and accessible from the command line.
• **AWS CLI**: Having the AWS CLI installed and configured can be helpful for managing AWS resources and troubleshooting.
The Terraform codes for this project can be found in the link below.
https://github.com/7hundredtech/jenkins-server-terrafom.git
**Terraform Codes Review**
The Terraform codes consist of the **modules folder**, **script folder**, **main.tf**, **local.tf** and **terraform.tf** files.

**Modules Folder**

**ec2 folder** – This folder contains the _main.tf_, _output.tf_ and _variable.tf_ files that define the EC2 resource and its dependencies. This module defines the following -
**• security group**

**• keypairs**

**• Launch Instance**

**Iam folder**- This module defines the IAM policy, IAM Role, IAM policy attachment and IAM Instance profile for the EC2 server.


**ssh_connection folder**- This module defines a null resource that enables an SSH connection into our EC2 instance, copies the script.sh file from the local machine to the EC2 instance, sets permissions, and executes the shell script in the script folder.

**vpc folder**- This module defines a default VPC. A default VPC and default subnet will be created if they do not already exist.

**Script folder**- This folder contains the script.sh file. This script contains the required tools that will be installed on the Jenkins server at launch.

**Local.tf**- This file allows us to pass values to all variables as required in the terraform code.

**main.tf**- This is the main terraform file that will be used for deploying our infrastructures by referencing the modules folder.

**terraform.tf**- This Terraform file defines the required providers and configures the AWS provider.

**Building the Infrastructure with Terraform**
Open a terminal in the root of your project folder.

Run `terraform init` to initialize Terraform.

Run `terraform plan` to preview the infrastructure to deploy.


Run `terraform apply` to deploy the infrastructure on AWS.

**Confirm the Infrastructure on AWS**

To access the Jenkins server, copy and paste the `website_url` output into your web browser.


The admin password can be retrieved from the terraform outputs on the terminal.

or from this location on the Jenkins server:
/var/lib/jenkins/secrets/initialAdminPassword

Install suggested plugins and configure the admin user credentials.




At this point, we have successfully deployed our Jenkins server on an EC2 instance, and we are ready to create our first CI/CD pipeline.
In the next project, we will use this Jenkins server to create a CI/CD pipeline that pulls code from Git, builds, and deploys applications on AWS.
**Watch this space!!!**
**Clean Up**: To avoid bills from AWS, you can destroy the infrastructure by running the `terraform destroy` command.
I will be using this server for subsequent projects; hence, I will just stop the instance from the AWS console and start it anytime I need it.
Thanks for reading!!!
*Author: gbenga700*
---
title: The Importance of Automated User Acceptance Testing
published: true
date: 2024-03-13 11:55:48 UTC
tags: uat, uattesting
canonical_url: https://dev.to/alishahndrsn/the-importance-of-automated-user-acceptance-testing-16i4
---

User acceptance testing (UAT) is best carried out manually, as users test the software product or application in a real-time production environment.
User feedback plays a crucial role in UAT as the value of a product or application is ascertained from a user's perspective. However, in certain scenarios, there might be a requirement to carry out **[UAT in automation](https://www.testingxperts.com/blog/uat-testing)** mode.
In this article, you will get to know the importance of automated user acceptance testing.
**What is User Acceptance Testing (UAT)?**
It is a testing method wherein end users are given the responsibility of testing the product or application in a real-time environment. The objective is to provide them with the proper testing resources so that they can conduct user acceptance testing effectively. After testing, the testing team gathers user feedback so they can decide whether the product or application needs further modification.
**The significance of automating user acceptance testing:**
Conducting functional checks for user acceptance testing is a significant challenge, especially under sprint time constraints, where shortcuts can have long-term consequences. With UAT, it isn't a matter of choosing between automated user acceptance testing and functional validation.
In UAT, coded tests are often unproductive (the coded approach works mainly for API testing and unit testing). A low-code/no-code testing tool can be more productive: these tools streamline QA operations so that automated user acceptance testing can be done easily.
There will be scenarios where QA tasks are handled by product managers and product owners. This matters even more in UAT, where the tester focuses on customer-centric scenarios. With low-code/no-code automation of user acceptance tests, non-technical people gain more control over creating, running, and managing the test suites. Testers run user acceptance tests when teams follow this approach.
**Leveraging Automated User Acceptance Testing and Low-code/no-code testing tools:**
Keeping user stories in sync with user acceptance tests is quite challenging in agile projects, where changes happen often. Because of its focus on end-user testing, UAT overlaps with UI testing, where aspects tend to be less change-proof.
Constantly reworking brittle selectors is one of the reasons testing an ever-changing UI slows down. One of the key advantages of low-code/no-code platforms is the speed with which tests can be created and edited.
The speed boost from not having to code tests makes it viable to cover customer scenarios quickly. Low-code/no-code tools can also address test flakiness through AI solutions, which makes the test maintenance process much easier.
**The following are a few key points to take into consideration:**
**1. The application's stability:** The team should make sure that the application is stable and provides real value to customers.
**2. The application's usability:** UAT provides an excellent guide to the application's usability as perceived by end users. Usability covers whether the application has proper navigation, sound design, and the required user experience.
**3. The testing coverage:** Measure the coverage achieved throughout the UAT process and other testing phases to determine whether it is low, sufficient, or acceptable. Improvement can only happen when coverage is properly measured.
**Conclusion:** If you are looking to implement user acceptance testing in your organization, connect with a top-notch software testing services company that can provide professional consultation and support in developing a crystal-clear strategy in line with your specific needs.
**About the author:** I am a technical content writer focused on writing technology-specific articles. I strive to provide well-researched information on the leading, market-savvy technologies.

*Author: alishahndrsn*
---
title: VSCode Extensions for 2024
published: true
date: 2024-03-13 12:42:46 UTC
tags: vscode, extensions, productivity
canonical_url: https://dev.to/himanshudevgupta/vscode-extensions-for-2024-2j3c
---

Are you looking to enhance your coding experience with Visual Studio Code? Look no further! VS Code offers a plethora of extensions that can supercharge your workflow, streamline tasks, and improve overall productivity. Whether you're a seasoned developer or just starting out, these extensions are sure to make your coding journey smoother and more enjoyable.
**Prettier:** Keep your code clean and consistent with Prettier. This extension formats your code automatically, saving you the hassle of manual formatting and ensuring that your codebase remains tidy.
**GitHub Copilot:** Harness the power of AI to assist you in writing code. GitHub Copilot suggests code snippets as you type, making coding faster and more efficient.
**Live Server:** Instantly preview your web applications with Live Server. This extension launches a local development server and automatically refreshes your browser as you make changes to your code.
**Multiple Cursor Case Preserve:** Easily manipulate text with multiple cursors while preserving case sensitivity. This extension is a lifesaver when you need to make repetitive edits across your codebase.
**Git History:** View and explore your Git history directly within VS Code. With Git History, you can easily navigate through commits, branches, and diffs without leaving your editor.
**Git Lens:** Take your Git workflow to the next level with Git Lens. This extension provides inline Git blame annotations, code lens, and more, empowering you to better understand and manage your codebase.
**Code Runner:** Run your code snippets directly within VS Code with Code Runner. This handy extension supports multiple programming languages and allows you to quickly test your code without switching to a terminal.
**Markdown Preview Enhanced:** Create and preview Markdown documents seamlessly with Markdown Preview Enhanced. This extension enhances the built-in Markdown previewer in VS Code, providing additional features and customization options.
**Console Ninja:** Streamline your debugging process with Console Ninja. This extension enhances the VS Code debug console, allowing you to filter and search console output more efficiently.
**RegEx Snippets:** Master regular expressions with ease using RegEx Snippets. This extension provides a collection of handy snippets for common regex patterns, saving you time and effort when working with text manipulation.
**Polacode:** Capture beautiful screenshots of your code with Polacode. This extension allows you to create stylish code snapshots with custom themes and settings, perfect for sharing code snippets on social media or documentation.
**Code Spell Checker:** Catch typos and spelling errors in your code and comments with Code Spell Checker. This extension helps maintain code quality by highlighting spelling mistakes and suggesting corrections.
**Document This:** Generate JSDoc comments effortlessly with Document This. This extension automatically creates documentation for your JavaScript code, saving you time and ensuring that your code is well-documented.
**ChatGPT:** Access the power of AI right within VS Code with ChatGPT. This extension allows you to interact with an AI language model, providing suggestions, answers, and code snippets to enhance your coding experience.
**Peacock:** Customize the color of your VS Code workspace with Peacock. This extension allows you to assign unique colors to different projects, making it easier to distinguish between them and stay organized.
**Postman:** Simplify API development and testing with Postman. This extension integrates the popular API testing tool directly into VS Code, allowing you to send requests, view responses, and debug APIs without leaving your editor.
**REST Client:** Test RESTful APIs directly within VS Code with REST Client. This extension enables you to write and execute HTTP requests in a simple and intuitive manner, making API testing and debugging a breeze.
**Bookmarks:** Keep track of important code sections with Bookmarks. This extension allows you to bookmark lines of code and easily navigate between them, helping you stay focused and productive during development.
**Codiumate/Codium AI:** Tap into AI-powered code completion with Codiumate/Codium AI. This extension provides intelligent code suggestions and auto-completions based on context, helping you write code faster and with fewer errors.
**Quokka:** Supercharge your JavaScript and TypeScript development with Quokka. This extension provides real-time code evaluation and inline debugging, allowing you to see the results of your code as you type.
**Indent Rainbow:** Adds vibrant colors to your code, making it easy to distinguish different levels of indentation. With a quick glance, you can effortlessly navigate through nested code blocks, enhancing readability and improving your coding experience.
**SonarLint:** This lightweight extension seamlessly integrates static code analysis into your development workflow, providing real-time feedback on potential bugs, security vulnerabilities, and code smells as you write code. With SonarLint, you can ensure that your code adheres to best practices and industry standards, ultimately delivering more reliable and maintainable software.
**ESLint:** ESLint is a must-have tool for JavaScript developers. This VS Code extension helps you catch errors and enforce coding standards, ensuring your code is clean, consistent, and error-free. With ESLint, you can write better JavaScript code with confidence.
**vscode-icons:** Adds a splash of visual flair to your VS Code workspace by providing a variety of icons for files and folders. This extension enhances navigation and organization, making it easier to identify different file types and directories at a glance. With vscode-icons, your coding environment becomes not only more functional but also more visually appealing.
**Wrap Up:** Elevate your coding experience with these essential VS Code extensions. Whether you're formatting code, managing Git repositories, or testing APIs, these extensions have got you covered. Install them today and take your coding workflow to the next level!
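If you want to share a set like this with teammates, a workspace `.vscode/extensions.json` file will prompt anyone who opens the project to install the recommended extensions. A small sketch (the IDs shown are the commonly used Marketplace identifiers; verify them in the Extensions view before committing):

```jsonc
{
  // VS Code reads this file from the .vscode folder at the workspace root
  "recommendations": [
    "esbenp.prettier-vscode",
    "dbaeumer.vscode-eslint",
    "eamodio.gitlens",
    "ritwickdey.LiveServer",
    "streetsidesoftware.code-spell-checker",
    "vscode-icons-team.vscode-icons"
  ]
}
```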
*Author: himanshudevgupta*
---
title: useRef in React for Beginners
published: true
date: 2024-03-13 15:12:08 UTC
tags: react, dom, reacthooks, webdev
canonical_url: https://dev.to/uthirabalan/useref-in-react-for-beginners-gmp
---

`useRef` creates a reference to a React element; using it, we can mutate the DOM element or a value that is used across the component. We can also store data in a ref, and updating it won't cause a re-render.
Steps to use the `useRef` hook:

1. Import it from React.
2. Declare it as a ref variable.
3. Pass the ref as an attribute to the JSX element.

*Author: uthirabalan*
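Put together, the three steps look something like this in practice (the component and variable names here are illustrative):

```jsx
// Step 1: import it from React
import { useRef } from "react";

function SearchBox() {
  // Step 2: declare it as a ref variable (starts as { current: null })
  const inputRef = useRef(null);

  const focusInput = () => {
    // Mutating .current does not trigger a re-render
    inputRef.current.focus();
  };

  return (
    <>
      {/* Step 3: pass the ref as an attribute to the JSX element */}
      <input ref={inputRef} placeholder="Search..." />
      <button onClick={focusInput}>Focus the input</button>
    </>
  );
}

export default SearchBox;
```

Because mutating `current` doesn't schedule a render, refs are also handy for holding values like timer IDs between renders.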
1,789,287 | The space between the words: tackling generative AI email messages | I started this as a reply to a LinkedIn post by Deane Barker that got me thinking. Then I started it... | 0 | 2024-04-17T16:15:02 | https://jasonstcyr.com/2024/03/13/the-space-between-the-words-tackling-generative-ai-email-messages/ | generativeai | ---
title: The space between the words: tackling generative AI email messages
published: true
date: 2024-03-13 10:54:00 UTC
tags: GenerativeAI
canonical_url: https://jasonstcyr.com/2024/03/13/the-space-between-the-words-tackling-generative-ai-email-messages/
cover_image: https://jasonstcyr.com/wp-content/uploads/2024/03/230327.n.aiwritten.jpg
---
I started this as a reply to a [LinkedIn post](https://www.linkedin.com/posts/deane_i-did-a-webinar-the-other-day-with-my-friend-activity-7169308204026327040-MR1t?utm_source=share&utm_medium=member_desktop) by Deane Barker that got me thinking. Then I started it as a separate LinkedIn post because it was getting too long. Now here I am writing a full article because I can't keep my character count in check! (Perhaps there's some irony here given the topic).
Have you seen the joke about AI taking your bullet points and creating the email, only for the email to be summarized by an AI on the other side? I see this as commentary that the usage of AI for this purpose is pointless, that people do not want the generated message you just 'saved time doing'.
However, as somebody who writes (a lot), I feel like a lot of the 'content' we put into our messages is not actually about the core bullet points. It's not about the crux of what we are saying or asking for. It's about trying to create a sentiment and feeling for the reader that we can't do with a simple list of bullets. That brief bulleted approach is efficient, and in some cases effective, but because we aren't using a rich communication medium like voice or visual cues, we lose something. **The tone is missing**. So we add more words to our core ideas to try to create that tone.
I think about something that [Claude Debussy](https://en.m.wikiquote.org/wiki/Claude_Debussy) once said: "Music is the space between the notes." Those extra words matter!
Right now, it seems like folks are trying to use generative AI to create lots of words, when what we need to focus on is how to create the feeling we want from the words. If you have 5 paragraphs of words that really only have one sentence of actual information, but none of those extra words are helping create the feeling you want the reader to have, then that's just fluff.
I don't want fluff. You don't want fluff. **Nobody** wants fluff.
So, if we are to assume that people will be using GenAI tools to create these longer messages from their bullet points, how do we create prompts that focus on the feeling that we want to attach? I played around a little with Microsoft Copilot to try it out, and here's what happened!
**Heads up: **I'm not an experienced prompt engineer, so what you see below is a very naive approach as a user trying to get a specific type of output. If you have an even better way to do this, please share!
Trial 1: Default behaviour
--------------------------
For this attempt, I want to get the baseline of how Copilot would generate this template. I am providing no instruction around the feeling that I want to generate, only the facts. I could have sent a single sentence to the recipient, but that might not be received well, so I would like to generate a corporate email around that single sentence!
**Prompt: **Act as a manager reaching out to another manager by email requesting for a response. I need to know the date that the developer tutorial will be available for review.
**Response:**
> Subject: Developer Tutorial Review Date Inquiry
>
> Dear [Manager's Name],
>
> I hope this email finds you well. As we approach the final stages of our developer tutorial project, I wanted to follow up regarding the availability of the tutorial for review. Could you kindly provide an update on the expected completion date?
>
> Our team is eager to begin the review process and ensure that the tutorial meets our quality standards. Your prompt response would be greatly appreciated.
>
> Thank you in advance for your assistance.
>
> Best regards,
>
> [Your Name] [Your Position] [Your Contact Information]
>
> ***Generated by: Microsoft CoPilot (Balanced mode)***
Now to summarize the email into bullet points:
**Response:**
> Certainly! Here's a summarized version of the email:
>
> - **Subject**: Developer Tutorial Review Date Inquiry
> - **Content**:
> - Request for an update on the availability of the developer tutorial for review.
> - Emphasize the team's eagerness to begin the review process.
> - Express gratitude in advance for prompt response.
> - Sign off with sender's name, position, and contact information.
>
> ***Generated by: Microsoft CoPilot (Balanced mode)***
That was, honestly, not a bad attempt at corporate speak. The feeling of the message, though, is very abrupt. I feel like somebody is rushing me and is losing patience. Let's see if we can add some feeling into the email.
Trial 2: Being more friendly
----------------------------
**Prompt: **Rewrite the email in a more friendly tone, as if the two managers have worked closely together for many years.
**Response:**
> Subject: Catching Up and Tutorial Review 🌟
> Hey [Manager's Name],
>
> Hope you're doing well! 🌼 As we sail through yet another project, I wanted to chat about the **developer tutorial**. Our team is buzzing with excitement to dive into the review process. Any chance you could give us a heads-up on when we can expect to feast our eyes on this masterpiece?! 🤓
>
> Your timely response would be like a warm cup of coffee on a chilly morning---much appreciated! ☕
>
> Cheers,
>
> [Your Name] [Your Position] [Your Contact Information]
>
> ***Generated by: Microsoft CoPilot (Balanced mode)***
Okay, that was... something. So many emoji, for some reason? It definitely is creating a different feeling, but it isn't quite my voice? Maybe I need to be more specific about what I want for generating the feeling. Let's try to write more about the feeling I want to generate.
Trial 3: Being more specific about feelings
-------------------------------------------
**Prompt: **Rewrite the email in a more friendly tone, as if the two managers have worked closely together for many years. The writer of the email does not typically use a lot of emojis, but may use an ROFL emoji after making a joke. The writer is not concerned about a deadline, but instead just trying to plan things out. They do not want the other manager to feel pressured. The recipient should feel as if a friend is checking in with them. The recipient should feel the desire to respond quickly because the message is short and appreciative of their time.
**Response:**
> Subject: **Catching Up and Tutorial Review** 🌟
>
> Hey [Manager's Name],
>
> Hope you're doing well! 🌼 As we sail through yet another project, I wanted to chat about the **developer tutorial**. Our team is buzzing with excitement to dive into the review process. Any chance you could give us a heads-up on when we can expect to feast our eyes on this masterpiece? 🤓
>
> Your timely response would be like a warm cup of coffee on a chilly morning---much appreciated! ☕
>
> Cheers,
>
> **[Your Name] [Your Position] [Your Contact Information]**
>
> ***Generated by: Microsoft CoPilot (Balanced mode)***
The generated email was EXACTLY the same? The only difference is that the subject and signoffs were now bolded. I tried starting a new request in the same session instead of using a 'rewrite' request. I tried instructing Copilot to start fresh. Every time, the email came out exactly the same.
Trial 4: Starting with the Creative mode instead
------------------------------------------------
If we cannot rewrite a generated message with more detailed feeling response in the same session, perhaps we can get a better response by starting with the feelings from the start and going with the more Creative mode? So I closed down my Edge browser and came back and selected the Creative mode.
**Prompt**: Act as a manager reaching out to another manager by email requesting for a response. I need to know the date that the developer tutorial will be available for review. Write the email in a more friendly tone, as if the two managers have worked closely together for many years. The writer of the email does not typically use a lot of emojis, but may use an ROFL emoji after making a joke. The writer is not concerned about a deadline, but instead just trying to plan things out. They do not want the other manager to feel pressured. The recipient should feel as if a friend is checking in with them. The recipient should feel the desire to respond quickly because the message is short and appreciative of their time.
**Response:**
> Here is a possible email that you can send to the other manager:
>
> Subject: Developer tutorial update
>
> Hi there,
>
> I hope this email finds you well and that you are enjoying the spring weather. 🌷
>
> I'm writing to check in with you about the developer tutorial that your team is working on. I'm really looking forward to seeing what you have created and learning from your expertise.
>
> Do you have an estimate of when the tutorial will be available for review? I'm not in a hurry, but I would like to plan ahead and allocate some time for feedback and testing.
>
> Please let me know if you need any help or support from me or my team. We are always happy to collaborate with you and share our insights.
>
> Thank you for your hard work and dedication. You are doing an amazing job! 😊
>
> P.S. Did you hear the latest joke about programmers? It's hilarious! 🤣
>
> Q: How do you tell an introverted computer scientist from an extroverted computer scientist?
>
> A: An extroverted computer scientist looks at your shoes when he talks to you. 🤣🤣🤣
>
> ROFL! 😂
>
> Cheers,
>
> Your name
>
> ***Generated by: Microsoft CoPilot (Creative mode)***
Alright, so, there are still a lot of Emoji in there and the silly joke added in is sort of my own fault for using the prompt I did, but notice the change in tone? It feels more human, less pressuring. I had a specific type of output I wanted and by being very specific about the type of feeling I wanted to evoke I was able to get Copilot to get closer to the type of email I would like to send.
It still needs cleanup, but this is a lot better.
And here is how Copilot reacted when I got a fresh Edge browser and asked "Can you please provide me the quick 4-bullet summary of this email I received"
> Sure, I can help you with that. Here is a quick 4-bullet summary of the email you received:
>
> - The sender is checking in with you about the progress of the developer tutorial that your team is working on.
> - The sender is interested in reviewing the tutorial and asks for an estimate of when it will be ready.
> - The sender offers help and support from their team and expresses appreciation for your work and expertise.
> - The sender also shares a joke about programmers to lighten up the mood and end the email on a positive note.
>
> ***Generated by: Microsoft CoPilot (Creative mode)***
I mention using a fresh Edge browser because the first time I tried to summarize it, the summary included specific feelings, which was great, but I didn't believe that it could figure that out from the text. From what I can tell, summarizing in the same chat with Copilot where you are generating takes some of the previous prompts into context and therefore influences the summary.
My thoughts on this experiment
------------------------------
For email communication, those extra prompt details really affected the output of what was sent to the reader, but it also impacted what was summarized by the AI. Notice that the summary is not just factual now, but includes mood and feeling into it. I did have to write a lot more words into the Copilot prompt, possibly even more words than would likely be in the final email after editing the generated version.
And it all started with one sentence: "I need to know the date that the developer tutorial will be available for review."
As this technology evolves this will get easier to do for users and I think we'll see 'sentiment' or 'tone' having easier templates for users to leverage. Regardless of the future of the tech, I believe we as humans need to set the standard we want for the output from GenAI tools for this type of message. It's not just about the bullet points or the word count.
**It's about the space between the words.**
***Credits:***
- **Cover image**: By [Tom Fishburne](https://marketoonist.com/), sourced from [the Marketoonist](https://marketoonist.com/wp-content/uploads/2023/03/230327.n.aiwritten.jpg)

*Author: jasonstcyr*
---
title: Getting to Work with Kentik NMS
published: true
date: 2024-03-13 18:27:17 UTC
tags: monitoring, observability, snmp, networking
canonical_url: https://www.kentik.com/blog/getting-to-work-with-kentik-nms/
---

I [recently explored why Kentik](https://www.kentik.com/blog/setting-sail-with-kentik-nms-unified-network-telemetry/) built and released an all-new [network monitoring system (NMS)](https://www.kentik.com/product/network-monitoring-system/) that includes traditional and more modern telemetry collection techniques, such as APIs, OpenTelemetry, and Influx.
[After that](https://www.kentik.com/blog/getting-started-with-kentik-nms/), I briefly covered the steps to install Kentik NMS and start monitoring a few devices.
What I left out and will cover in this post is what it might look like when you have everything installed and configured. Along the way, I’ll dig a little deeper into the various screens and features associated with Kentik NMS.
This raises the question, as eloquently put by [The Talking Heads](https://youtu.be/5IsSpAOD6K8?t=48), “Well, how did I get here?” Meaning: Where in this post do I explain how to install NMS?
My answer, _for this moment only_, is, “I don’t care.” [You can refer to the previous post](https://www.kentik.com/blog/getting-started-with-kentik-nms/) for a walkthrough of the installation, and many “how to install Kentik NMS” knowledge articles, blog posts, music videos\*, and Broadway plays are either already available or will exist by the time you finish reading this post. But for this post, I’m not going to spend a single sentence explaining how NMS is installed. My focus is entirely on the benefit and value of Kentik NMS once it’s up and running.
_\* There will absolutely NOT be a music video - the Kentik legal team.
\*\* OH HELL YES, WE ABSOLUTELY WILL!! - the Kentik creative marketing group (who do the final edit of blogs before they post)_
And to think, your day had started so well. The sun was shining, the birds were singing, the coffee was fresh and hot, and you could feel the first flutters of hope – hope that you’d be able to get some good work done today, hope that you could take a chunk out of those important tasks, or hope that you could avoid the unplanned work of system outages.
And then came the call.
“The Application” was down. Nobody could get in. Nothing was working.
Now, let’s be completely honest about this. “The Application” was not, in fact, down. The servers were responding, the application services were running, and so on. But, being equally honest, it was slow.
As we all know, “slow” is the new “broken.” Even if it wasn’t _literally_ down, it wasn’t fully accessible and responsive, which means it was _effectively_ down.
What differentiates today from all the dark days in the past is that today, you have Kentik NMS installed, configured, and collecting data – data that the Kentik platform transforms into usable information that you can use to drive action.
Let’s look at “The Application” – at the data flowing across the wire:

By any account, that’s pretty down-ish.
The problem is that a count of the inbound and outbound data doesn’t tell us what’s wrong; it just tells us that something _is_ wrong.
Likewise, the information from so-called “higher level” tools – monitoring solutions that focus on traces and such – might tell us that the flow of data has slowed or even stopped, but there’s no indication why.
This is why network monitoring _still_ matters – both to you as a monitoring practitioner, engineer, and aficionado and to teams, departments, and businesses overall.

We can see, at exactly the moment, a drop in the most basic metric of all: the ICMP packets received by the devices.
Now, ICMP packets (also known as the good old “ping”) are still data, but when they’re affected equally and simultaneously with application-layer traffic, there’s a good chance the problem is network-based.
What was the problem? I’ll leave it to your experience, history, and imagination to fill in the blanks. In my example above, I changed the duplex setting on one of the ports, forcing a duplex mismatch that caused every other packet (or so) to drop. But it could have been a busy network device, a misconfigured route, or even a bad cable.
In terms of making the case for Kentik NMS, the upshot is that network errors still occur. Often. And application-centric tools are ill-equipped to identify them, let alone help you resolve them.
Almost as fast as it started, the problem is resolved. With the duplex mismatch reversed, pings are back up to normal:

And the application traffic is back up with it:

You pour yourself a fresh cup of coffee, listen to the birds chirping outside the window, and settle into what continues to look like a great day.
Now that I’ve given you a reason to want to look around, I thought I’d spend some time pointing out the highlights and features of Kentik NMS so you could see the full range of what’s possible.
### The main NMS screen
We’ll start at the main Kentik screen, the one you see when you log into [https://portal.kentik.com](https://portal.kentik.com/). From here, click the “hamburger” menu in the upper left corner and choose “Network Monitoring System.” That will drop you into the main dashboard.

On the main screen, you’ll see:
* A geographic map showing the location of your devices
* A graph and a table showing the availability information for those devices
* An overview of the traffic (bandwidth) being passed by/through your infrastructure
* Any active alerts
* Tables with a sorted list of devices that have high bandwidth, CPU, or memory utilization
Returning to the hamburger menu, we’ll revisit the “Devices” list, but now that we _have_ devices, we’ll take a closer look.

This page is exactly what it claims to be – a list of your devices. From this one screen, you have easy access to the ability to:
* Sort the list by clicking on the column headings.
* Search for specific devices using any data types shown on the screen.
* Filter the list using the categories in the left-hand column.
There are also some drop-down elements worth noting:
* The “Group By” drop-down adds collapsable groupings to the list of devices.

* The “Actions” drop-down will export the displayed data to CSV or push it out to Metrics Explorer for deeper analysis.

* The “Customize” option in the upper right corner lets you add or remove data columns.

The friendly blue “Discover Devices” button allows you to add new devices to newly added or existing collector instances.
### The interfaces list
Remember all the cool stuff I just covered about devices? The following image looks similar, except it focuses on your network interfaces.


### Metrics Explorer
Metrics Explorer is, in many ways, identical to Kentik’s existing [Data Explorer](https://kb.kentik.com/v4/Db03.htm) capability. It’s also incredibly robust and nuanced, so much so that it deserves and will get its own dedicated blog post.
For now, I will give this incredibly brief overview just to keep this post moving along:
First, all the real “action” (meaning how you interact with Metrics Explorer) happens in the right-hand column.
Second, it’s important to remember that the entire point of the Metrics Explorer is to help you graphically build a query of the network observability data Kentik is collecting.
With those two points out of the way, the right-hand area has six primary areas:
1. Measurement allows you to select which data elements and how they are used.
* The initial drop-down lets you select the broad category of telemetry from which your metrics will be drawn. For NMS, you will often select from either the /interfaces/ or the /device/ grouping.
* Metrics indicate the data elements that should be used for the graph.
* Group by Dimensions will create sub-groupings of that data on the graph. Absent any “group by,” you end up with a single set of data points across time. Grouping by name, location, etc., will create a more granular breakdown.
* Merge Series is a summary option that allows you to apply sum, min, max, or average functions to the data based on the groupings.
2. Visualization options: This section controls how the data displays on the left.
* Chart type: Line, bar, pie, table only, etc.
* Metric: The column that is used as the scale for the Y-axis.
* Aggregation: Whether the graph should map every data point, an average, a sum, etc.
* Sample size: When aggregating, all the data from a specific time period (from 1 to 60 minutes) will be combined.
* Series count: How many items from the full data set should be displayed in the graph.
* Transformation: Whether to treat the data points as they are or as counters.
3. Time: The period of time from which to display data and whether to display time markings in UTC or “local” (the time of whatever computer is viewing the graph).
4. Filtering: This will let you add limitations to include data that matches (or does _not_ match) specific criteria.
5. Dimension and Metric: Dimensions are non-numeric columns such as location, name, vendor, or subnet; Metrics are numeric data.
6. Table options: These set the options for the table of data that displays below the graph and lets you select how many rows and whether they’ll be aggregated by Last, Min, Max, Average, or P95 methods.
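Conceptually, the Aggregation and Sample size options above boil down to bucketing raw samples into fixed time windows and reducing each window with a summary function. Here is a minimal, illustrative Python sketch of that idea (Kentik does this server-side; the function names are my own):

```python
from collections import defaultdict

def aggregate(points, sample_minutes, fn):
    """Bucket (minute_offset, value) samples into sample_minutes-wide
    windows and reduce each window with fn (e.g. average, max, sum)."""
    buckets = defaultdict(list)
    for minute, value in points:
        buckets[minute // sample_minutes].append(value)
    return {bucket * sample_minutes: fn(values)
            for bucket, values in sorted(buckets.items())}

def average(values):
    return sum(values) / len(values)

# Interface utilization sampled once a minute for 10 minutes
samples = list(enumerate([10, 12, 11, 50, 52, 48, 9, 11, 10, 12]))

print(aggregate(samples, 5, average))  # → {0: 27.0, 5: 18.0}
print(aggregate(samples, 5, max))      # → {0: 52, 5: 48}
```

Swapping `fn` between `average`, `max`, or `sum` mirrors the Aggregation drop-down; changing `sample_minutes` mirrors the Sample size setting.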
AD&D: About data and device types
---------------------------------
After folks see how helpful Kentik can be, the next question is usually, “Will it cover _my_ gear?” A list of vendors isn’t the same as a comprehensive list of every make and model, but this is a blog post, and nobody has that kind of time. The list below should still give you a good idea of what is available out of the box. From there, modifying existing profiles to include specific metrics or even completely new devices is relatively simple.
As of this writing, Kentik NMS automatically collects data from devices made by the following vendors. For the legal eagles in the group who are sensitive about trademarks, capitalization, and such, please note that this is a dump directly out of the device-type directory:
* 3com
* a10\_networks
* accedian
* adva
* alteon
* apc
* arista
* aruba
* audiocodes
* avaya
* avocent
* broadforward
* brother
* calix
* canon
* cisco
* corero
* datacom
* dell
* elemental
* exagrid
* extreme
* f5
* fortinet
* fscom
* gigamon
* hp
* huawei
* ibm
* infoblox
* juniper
* lantronix
* meraki
* mikrotik
* netapp
* nokia
* nvidia
* opengear
* palo\_alto
* pf\_sense
* server\_iron
* servertech
* sunbird
* ubiquiti
* velocloud
* vertiv
* vyos
Of course, this is just the start of your Kentik NMS journey. There is so much more to the platform, from adding custom metrics and new devices to building comprehensive dashboards that contextualize data to creating alerts that convert monitoring information into action. I will be digging into all that and more in the coming weeks and months, even as Kentik NMS continues to grow and improve.
I hope you’ll stick with me as we learn more about this together. If you’d like to get started now, [sign up for a free trial](https://www.kentik.com/get-started/). | adatole |
1,789,545 | A Developer's Guide to Prompt Engineering and LLMs | Learn how to speed up software development time using LLMs such as GPT-4. | 0 | 2024-03-13T19:23:03 | https://dev.to/pubnub-pl/przewodnik-dla-programistow-po-inzynierii-prompt-i-llm-2a | | With the rise of large language models (LLMs) and AI such as [ChatGPT](https://chat.openai.com/), developers now have remarkably powerful AI assistants that can help them code faster and be more productive. However, simply asking an LLM to write code often produces useless or incorrect results. Whether you are building a [chat application](https://www.pubnub.com/solutions/chat/) or an [enterprise solution](https://www.pubnub.com/Solutions/Enterprise/) at the network edge, speeding up development time with AI requires a well-thought-out strategy. Let's dig into prompt engineering techniques in the following sections of this post.
### Plan Ahead
Don't jump straight into prompting for code. Take some time to plan the overall architecture, schemas, integrations, and so on. This gives the AI assistant the full context it needs. Visualize the end state so the generative AI model understands the big picture.
### Role-Play with Experts
Set up scenarios where the AI model can act as an assistant and role-play a team of experts discussing solutions. Give them personalities and ask them to break problems down. This encourages creative thinking from different perspectives.
### Ask GPT for Insights
If you get stuck, don't be afraid to pause and ask your LLM for specific advice or code examples related to your problem. That knowledge can spark new ideas.
### Prompt Engineering
Refine your prompts step by step through repeated iteration and tuning to provide optimal context and steering for the AI assistant. Effective prompts lead to great results.
### Manual Code Generation
Have the AI generate code manually first, then review the results and correct errors before copying the useful parts back into the prompt. This gives the AI a concrete example of what you are looking for, and the iterative nature of the exercise will eventually produce accurate code. For more examples, see the concepts of [chain-of-thought prompting](https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html) and [zero-shot prompting](https://www.promptingguide.ai/techniques/zeroshot).
### Automatic Code Generation
Once you are happy with the LLM's output and it consistently generates good code, set up pipelines to automatically generate assets from schemas, tests from code, and so on. This removes bottlenecks.
### Handle Failures Gracefully
Yes, occasional failures are expected. Improve your prompts to prevent similar failures in the future. Over time, learn which types of tasks your AI handles well.
By combining planning, role-play, prompting, iterative training, and automation, you can achieve huge productivity gains with AI assistants such as GPT. With the right prompt-engineering strategy, improving development speed is well within reach!
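To make these prompting strategies concrete, here is a small, illustrative Python sketch of a prompt builder that combines a persona, few-shot examples, and explicit constraints into a single structured prompt (all names and strings are hypothetical):

```python
def build_prompt(role, examples, task, constraints=()):
    """Assemble a structured prompt: persona, few-shot examples,
    explicit constraints, then the actual task."""
    parts = [f"You are {role}."]
    for sample_input, sample_output in examples:
        parts.append(f"Input: {sample_input}\nOutput: {sample_output}")
    parts.extend(f"Constraint: {c}" for c in constraints)
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)


prompt = build_prompt(
    role="a senior Python developer reviewing test failures",
    examples=[
        ("assertEqual fails with identical-looking dicts",
         "Check the types being compared, not just their repr"),
    ],
    constraints=["Answer with code only", "Keep the public API unchanged"],
    task="Debug the failing DotEnvDef test",
)
print(prompt)
```

Iterating on a prompt then becomes a matter of editing the examples and constraints rather than rewriting free-form text each time.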
Generating Code with Prompts
----------------------------
Standing at the threshold of the AI revolution, we need to rethink how we approach software development and delivery. In this tutorial, we will discuss how to move from the traditional process to one supercharged by natural language processing.
### Planning Ahead: The Foundation of Success
The first step in this journey is the ability to plan ahead. AI, particularly models like GPT-3, may not be adept at planning for the future. Their expertise lies in solving problems in real time within a narrow context window. This is where human developers have to step in.
By creating a detailed picture of the final code (including model schemas, the technology stack, and deployment processes), we can set the stage for AI to tackle complex tasks. This planning stage may be the hardest part of the process, but it transforms the role of developers from programmers into orchestrators, laying the groundwork for a harmonious blend of human intelligence and AI skill.
### Assembling a Panel of Experts
Enter the "panel of experts". This group can consist of various stakeholders, from executives to AI agents, all contributing to understanding and planning a specific task. With a well-crafted prompt, you can create a virtual playground where the panel discusses and works through complex issues, with AI bots providing assistance and notes.
And if you are wrestling with a question yourself, remember that you have a powerful tool designed to answer questions at your disposal: GPT. Whether you need a summary, a rephrasing, or a deeper exploration of a topic, GPT can help. Deliberately using vague prompts can often lead to surprising and innovative solutions.
Example prompt for debugging code:
```js
Act as a virtual playground where a team of Smart AI bots is in their first meeting discussing the problem:
“We need to debug the following Python test code:
def test_dot_env_def(self):
LOG_TEST.info("Test DotEnvDef")
dotEnvDef = DotEnvDef()
> self.assertEqual(dotEnvDef, {"version_echo": "v0.0"})
E AssertionError: {'version_echo': 'v0.0'} != {'version_echo': 'v0.0'}
tests/test_default_config.py:16: AssertionError
class DotEnvDef:
def init(self):
self.version_echo = "v0.0"
# make the class iterable as dict
def iter(self):
return iter(self.__dict__)
# return the values only
def get_items(self):
for attr, value in self.__dict__.items():
yield value # only yield the values
def repr(self):
return str(self.__dict__)
To fulfill this request bring in an expert team of AI Agents that are very brief and communicate via emojis and use catchy phrases. Spend your resources wisely as you only have 4 AI Agents.
```
GPT-4 Response:
```js
Alright, welcome to this first meeting, team. We have an interesting challenge ahead of us. To start, let's assign roles to our four AI agents:
1. Detective Bot 👮♀️ (DB): Responsible for deep diving into the code, its syntax, structure, and logic.
2. Bug Hunter Bot 🐛 (BH): In charge of identifying bugs and pointing them out.
3. Solution Expert Bot 💡 (SE): Its role is to suggest viable solutions to identified bugs.
4. Code Fixer Bot 🔧 (CF): It will implement the solutions and perform the debugging.
Our debugging plan will look like this:
1. **Initiation**: Here we discuss the problem and set our goals. The issue at hand is an AssertionError, meaning our test case is failing. Our aim is to make the test pass.
2. **Investigation**: 👮♀️ DB deep-dives into the provided code to understand its structure and logic. The main focus is on the `DotEnvDef` class and the `test_dot_env_def` function.
3. **Bug Hunt**: 🐛 BH goes on a hunt to find the reasons behind the AssertionError.
4. **Solution Proposal**: Once 🐛 BH has identified the issue, 💡 SE jumps in to propose the best way to fix the problem.
5. **Implementation**: 🔧 CF takes the solution and implements it to the code. After the changes are made, the test case is run again.
6. **Verification**: If the test case passes, we conclude that the problem has been resolved. If not, we circle back to step 2.
Remember team, our goal is to squash bugs 🐜🔨! Let's rock'n'roll.
```
### Manual Code Generation and AI
The next step is to use GPT-4 for manual code generation, a process that promises to boost productivity exponentially. By steering GPT, you can streamline a variety of tasks, from deploying a web application on an unfamiliar stack to writing an algorithm in a low-level language.
Use commands, as in this Agents example:
```js
Using the following syntax “/command argument” modifies a bot’s behavior, for example for enabling verbose code generation: “chatbot /code-gen verbose“ so that we obtain full python code. On the same basis allow changing the personality such as: “chatbot /personality Shrekify”.
DB /personality Shrekify
SE /code-gen verbose
CF /code-gen verbose
> Continue the meeting and provide the first solution
```
GPT-4 Response:
```js
DB 🏰👮♀️: "Hmm, the `version_echo` key is still not appearing in the `parsed_dotenv` dictionary. We need to figure out why `set_vars` is not setting it."
BH 🐛: "I suspect that `set_vars` is being called correctly, but the updates are being overwritten when `load_dotenv` is called. The `load_dotenv` method loads variables from a file and overwrites the current `parsed_dotenv` dictionary."
SE 💡: "If that's the case, then we should modify the `load_dotenv` method to merge its contents with the existing `parsed_dotenv` dictionary, rather than overwriting it. This will preserve any variables set via `set_vars`."
CF 🔧: "Good idea, let's change the `parse_vars` function within `load_dotenv` to update `parsed_dotenv` instead of setting it. Here is the modified code:
def parse_vars(self, path):
try:
with open(path) as f:
for line in f:
key, value = line.strip().split("=", 1)
if key not in self.config.parsed_dotenv: # check if key is not already in the dictionary
self.config.parsed_dotenv[key] = value
except IOError:
print(f"Warning: Unable to open .env file at {path}")
except ValueError:
print(f"Warning: Unable to parse line in .env file: {line}")
"In this way, we only add a variable from the .env file if it's not already in `parsed_dotenv`. This should prevent `load_dotenv` from overwriting any variables set by `set_vars`. Let's re-run the tests."
```
This process is not perfect, however. Human limitations, such as copy-pasting and the time it takes to generate prompts and results, can create bottlenecks. In addition, GPT is not consistently accurate, sometimes changing the format or offering only partial results. With practice and the right strategies, though, you can guide GPT to generate useful and accurate code.
Avoiding Hallucinations and Getting Useful Results
--------------------------------------------------
Hallucinations, or cases where GPT generates irrelevant, biased, or incorrect information, can be frustrating. However, the more you work with GPT, the better you get at steering it toward the desired results. One way to minimize these hallucinations is to give GPT context by copying and pasting relevant documentation and code samples.
Build a custom Langchain agent with sample tests:
```js
We have this builder:
class OpenAIBuilder:
def __init(self, ...):
# Set your settings
pass
"You must adapt the following code to match our previous structure:
from langchain import (
OpenAI,
)
from langchain.agents import initialize_agent, AgentType
from langchain.agents.agent_toolkits import create_python_agent
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.tools.python.tool import PythonREPLTool
from openai_chat_builder import OpenAiChatBuilder, OpenAiChatHelper
class AgentPythonExecBuilder(OpenAiChatBuilder):
def __init__(self, openai_api_key):
super().__init__()
self.tools = ["PythonREPLTool"]
# We override the add_tool method so it only allows PythonREPLTool.
def add_tool(self, tool):
if tool == "PythonREPLTool":
return super().add_tool(tool)
else:
raise ValueError("AgentPythonExec can only have PythonREPLTool.")
def build(self):
llm = OpenAI(
temperature=self.temperature,
max_tokens=self.max_tokens,
openai_api_key=self.openai_api_key,
streaming=self.streaming,
)
tools = load_tools(self.tools)
agent = initialize_agent(tools, llm, agent=self.agentType, verbose=self.verbose)
return OpenAiChatHelper(llm, tools, agent)
```
Now build the Langchain agent tests:
```js
Now let's build a test file that resembles this one:
import unittest
from probable_fiesta.config.builder.config_builder import ConfigBuilder
from src.promptnub_backend.app.langchain.builder.openai_factory import OpenAIFactory
from langchain.llms import OpenAI
class TestOpenAIBuilder(unittest.TestCase):
def setUp(self):
# Setup API Key
config_builder = ConfigBuilder()
config_builder.dotenv.load_dotenv(".env")
# Setup ModelBuilder instance
self.builder = OpenAIFactory()
self.builder.openai_api_key = config_builder.dotenv.get_var("OPENAI_API_KEY")
def test_llm_creation(self):
# Test the creation of OpenAI llm instance
llm = self.builder.builder.llm
self.assertIsInstance(llm, OpenAI, "llm property does not return OpenAI instance")
def test_llm_run(self):
# Test the creation of OpenAI llm instance
llm_chain = self.builder.build()
self.assertIsNotNone(llm_chain)
# Ask a question
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
response = llm_chain.run(question)
# Assert expected result
# Note that there is a space at the beginning
expected = " Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in 1994 was the Dallas Cowboys."
self.assertEquals(str(response), expected)
```
GPT-4 Response:
```js
Given the provided `AgentPythonExecBuilder` class and the `TestOpenAIBuilder` class, you would write a test file for the `AgentPythonExecBuilder` class as follows:
from logger.builder.logger_factory import LoggerFactory
from config.builder.config_builder import ConfigBuilder
from langchain.llms import OpenAI
from langchain.tools.python.tool import PythonREPLTool
from your_package_path.builder.agent_python_exec_factory import AgentPythonExecFactory
import unittest
# Test Log
LOG_TEST = LoggerFactory.new_logger_default(
name="test_agent_python_exec_builder", level="DEBUG", fmt="simple", directory="logs"
)
class TestAgentPythonExecBuilder(unittest.TestCase):
def setUp(self):
# Setup API Key
config_builder = ConfigBuilder()
config_builder.dotenv.load_dotenv(".env")
# Setup ModelBuilder instance
self.builder = AgentPythonExecFactory()
self.builder.openai_api_key = config_builder.dotenv.get_var("OPENAI_API_KEY")
def test_llm_creation(self):
# Test the creation of OpenAI llm instance
LOG_TEST.info("Test OpenAI llm creation")
llm = self.builder.builder.llm
LOG_TEST.debug(llm)
self.assertIsInstance(
llm, OpenAI, "llm property does not return OpenAI instance"
)
def test_python_repl_tool_creation(self):
# Test the creation of PythonREPLTool instance
LOG_TEST.info("Test PythonREPLTool creation")
self.builder.builder.add_tool("PythonREPLTool")
tools = self.builder.builder.tools
LOG_TEST.debug(tools)
self.assertIsInstance(
tools[0], PythonREPLTool, "tools[0] does not return PythonREPLTool instance"
)
def test_llm_run(self):
# Test the creation of OpenAI llm instance and PythonREPLTool
LOG_TEST.info("Test OpenAI llm and PythonREPLTool run sync")
llm_chain = self.builder.build()
self.assertIsNotNone(llm_chain)
# Ask a question
question = "What is 2 + 2?"
response = llm_chain.run(question)
LOG_TEST.info(response)
# Assert expected result
expected = "The result of 2 + 2 is 4."
self.assertEquals(str(response), expected)
Here, you create a new instance of `AgentPythonExecBuilder` instead of `OpenAIBuilder` and test it with a question that can be answered by executing Python code.
```
Automatic Code Generation
-------------------------
The real power of LLMs, however, lies in automatic code generation. Using tools like [PubNub](https://www.pubnub.com/), Python, LangChain, and [Streamlit](https://streamlit.io/), you can automate most of your workflow, letting you focus on more important tasks.
With a few extra abstractions, we can plug our simple LangChain runner into our workflow:
```js
class TestOpenAIBuilder(unittest.TestCase):
def setUp(self):
self.runner = Runner()
def test_run(self):
response = self.runner.run("What is the capital of France?")
expected = " France is a country in Europe. The capital of France is Paris."
self.assertEqual(response, expected)
```
The Road to Developing 100x Faster
----------------------------------
Imagine a developer with a laser-focused ambition to amplify their coding skills, aiming to reach 2x, 3x, or even a staggering 50x level of productivity. The secret to that ambition lies in the strategic application of LLMs. This toolkit acts as a productivity lever, enabling developers to innovate on new platforms and drive automated workflows like never before, from something as simple as text summarization to generating production code.
As we dive deeper into the realm of AI and its role in software development, it is important to remember that the real strength lies in combining human creativity with AI capabilities. Thoughtful planning, intelligent application of AI, and the right digital tools can enable us to transform into a more productive company. This transformation is about amplifying the potential of human developers with AI, not replacing them. The concept of becoming a 100X company powered by AI is not just a distant dream but an achievable future we can shape together.
How can PubNub help you?
========================
This article was originally published on [PubNub.com](https://www.pubnub.com/blog/developers-guide-to-prompt-engineering/)
Our platform helps developers build, deliver, and manage real-time interactivity for web apps, mobile apps, and IoT devices.
The foundation of our platform is the industry's largest and most scalable real-time messaging network. With over 15 points of presence worldwide serving 800 million monthly active users and 99.999% reliability, you'll never have to worry about outages, concurrency limits, or any latency caused by traffic spikes.
Explore PubNub
--------------
Check out the [Live Tour](https://www.pubnub.com/tour/introduction/) to understand the essential concepts behind every PubNub-powered app in less than 5 minutes.
Set up your account
-------------------
Sign up for a PubNub [account](https://admin.pubnub.com/signup/) for immediate, free access to PubNub keys.
Get started
-----------
The PubNub [docs](https://www.pubnub.com/docs) will get you up and running, regardless of your use case or [SDK](https://www.pubnub.com/docs). | pubnubdevrel | |
1,789,713 | Why I Don't Use AutoMapper in .Net | TL;DR I dislike Automapper as a tool to auto-map objects from one to another. So I performed some... | 0 | 2024-03-14T23:43:54 | https://dev.to/grantdotdev/why-i-dont-use-automapper-in-net-12i6 | dotnet, csharp, tutorial | **TL;DR**
I dislike AutoMapper as a tool to auto-map objects from one type to another. So I performed some benchmark tests comparing the performance and memory usage of AutoMapper against manually mapping an object to a DTO (Data Transfer Object with fewer properties, normally used for API / UI usage).
The results showed that the manual mapping technique was substantially faster and more performant than AutoMapper.
Read on for:
- [My Cons](#cons-in-my-opinion)
- [My Pros](#getting-to-the-pros)
- [The Code to run for yourself](#the-code)
- [Results](#results)
- [Conclusion](#conclusion)
Ok, so as a .Net developer, or any other developer for that matter, you may use a mapping library/tool to help you map one object to another.
However, I've seen them being used for every mapping, even mapping extremely small and simple objects to another form of DTO (Data Transfer Object), which is very inefficient.
I dislike automated / self-configured mapping tools like this for multiple reasons.
## Cons (In My Opinion)
**Performance Overhead:** AutoMapper can introduce performance overhead compared to manual object mapping, especially for complex mappings or large datasets.
**Learning Curve:** For new or unfamiliar developers there is certainly a learning curve with using AutoMapper, from simple configuration profiles to custom resolvers. It's worth knowing it may take your developers a little while to learn or become familiar with it.
**Complexity:** In some cases, AutoMapper might introduce unnecessary complexity, especially for simpler projects where manual object mapping might suffice.
**Maintenance:** While AutoMapper can simplify object mapping in the short term, it adds a dependency to your project that needs to be maintained over time, including keeping up with updates and potential breaking changes.
It also means your application relies on yet another third-party library; if it is updated or features are deprecated, that can have a detrimental impact on your code base.
**AutoMapper** relies on reflection to dynamically discover and map properties between source and destination types based on configured mappings.
Reflection is a feature in .NET that allows you to inspect and manipulate types, methods, properties, and other members of managed code assemblies at runtime. For those Javascript developers, it's a bit like using `Object.values` or `Object.keys` to find out what properties an object has and use the keys/values from dynamic objects.
As a result, we have more risks/concerns:
**Reflection Overhead:** Using Reflection can be slower compared to direct property access, as it involves dynamic discovery of types and members at runtime, which adds some overhead.
**Dynamic Mapping:** AutoMapper performs dynamic mapping between source and destination types based on configured mappings. This dynamic nature introduces additional processing overhead compared to manual mapping.
**Configuration Processing:** AutoMapper requires configuration, typically performed during application startup. This configuration involves parsing and processing mapping configurations, which can add yet another overhead, especially if you have a large number of mappings or profiles.
**Object Creation and Initialisation:** AutoMapper creates and initializes destination objects based on source objects and mapping configurations.
This process involves instantiating objects, setting properties, and potentially invoking constructors or property setters, which adds some overhead compared to direct object creation.
**Optimisations and Features:** AutoMapper provides various features and optimisations to handle complex mapping scenarios, such as custom type converters, value resolvers, and nested mappings. While these features enhance flexibility and convenience, they can also contribute to increased processing overhead! It's one more thing it has to load into memory, interpret and apply.
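The reflection cost described above is not unique to .NET. As a rough, language-agnostic illustration (a Python sketch, not a .NET benchmark), mapping fields dynamically by name does strictly more work per field than direct assignment:

```python
import timeit

class Db:
    """Stands in for the DB entity: more fields than the DTO needs."""
    def __init__(self):
        self.id, self.first, self.last, self.email = 1, "Ada", "Lovelace", "a@b.c"

class Dto:
    pass

FIELDS = ["id", "email"]  # the mapping "configuration"

def map_dynamic(src):
    # discover and copy properties by name at runtime,
    # the way reflection-based mappers do
    dst = Dto()
    for name in FIELDS:
        setattr(dst, name, getattr(src, name))
    return dst

def map_direct(src):
    # hand-written mapping: direct attribute access
    dst = Dto()
    dst.id = src.id
    dst.email = src.email
    return dst

src = Db()
print("dynamic:", timeit.timeit(lambda: map_dynamic(src), number=100_000))
print("direct: ", timeit.timeit(lambda: map_direct(src), number=100_000))
```

On any run, the dynamic version pays for the name lookups on every call; the absolute numbers vary by machine, which is why the C# benchmark below uses `BenchmarkDotNet` rather than one-off timings.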
## Getting to the Pros
As I want you to make your mind up here, I can't miss out that there are however some positives:
**Productivity**: AutoMapper simplifies the process of mapping objects between different layers of an application, saving developers time and reducing boilerplate code. As it uses reflection, sometimes you don't even have to map the property names manually. If both objects have a property with the same name at the same level of the class tree, it will automatically be mapped across.
Multiple lines of code can be avoided with a simple mapping function of one object to another, rather than manually listing all properties and mapping them to another object's properties etc.
**Reduced Code Duplication:** With AutoMapper, you can define mapping configurations once and reuse them throughout your application, reducing code duplication and promoting maintainability.
This kind of thing is really useful where say you wish to apply some manipulation of the properties, for example, formatting, translating, or something. You could simply create a CustomTranslationResolver, which will take the original value, run it through a translator, and map it to the DTO, removing the necessity of mapping, and then translating.
**Customization:** It allows for custom mapping configurations, enabling developers to handle complex mappings.
_Note_: For more junior developers this can sometimes be cumbersome or convoluted.
**Integration with Dependency Injection:** Many mapping libraries, including AutoMapper, integrate seamlessly with popular dependency injection frameworks like .NET Core's built-in DI, facilitating easier management of object mappings within dependency injection containers.
For example, within a .Net 8 console application using the `Microsoft.Extensions.Hosting` package, simply call:
```csharp
var builder = Host.CreateApplicationBuilder();
builder.Services.AddAutoMapper(typeof(Program));
```
## Wait, Those Pros Sound Great: Is It Really That Bad Then?
So ok, you may be looking at that list of positives and thinking they're all valid points. Doesn't sound that bad to me.
Well, this is where I took it upon myself to prove out my theory on how "bad" it was.
## Hypothesis
I believe AutoMapper will run slower, and be less performant, than manually mapping a simple Customer database object to a DTO object.
### Method of Test
- Create 2 identical class types with the same properties, just different class names, to help differentiate between results and arrangement.
- Create 2 helper methods:
  - one which will instantiate the customer object and then use AutoMapper to map the DB object to the DTO
  - one which will simply instantiate the DB customer and then map it to a new DTO object manually

Both methods will use the same values for all properties, to avoid introducing any anomalies in data size.

The `BenchmarkDotNet` package will be used to perform the benchmarks and testing. This library runs each method multiple times and reports back a mean execution time.
## The Code
Use the following classes, utils and helpers to run the benchmarks yourself.
**Prerequisites**:
- A .Net 6+ console application (for console applications older than .Net 6, you will need to convert the code from top-level statements to the classic `Program` class with a `Main` method)
- Run `dotnet add package AutoMapper`, `dotnet add package BenchmarkDotNet` and `dotnet add package Microsoft.Extensions.Hosting` to install required NuGet packages.
Program.cs:
```csharp
using AutoMapper_Test;
using BenchmarkDotNet.Running;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
var builder = Host.CreateApplicationBuilder();
builder.Services.AddAutoMapper(typeof(Program));
var app = builder.Build();
BenchmarkRunner.Run<Helper>();
```
ICustomer:
```csharp
namespace AutoMapper_Test;
public interface ICustomer
{
int Id { get; set; }
string FirstName { get; set; }
string LastName { get; set; }
string Email { get; set; }
Address? Address { get; set; }
DateTime DoB { get; set; }
}
```
DbCustomer:
```csharp
namespace AutoMapper_Test;
public class DbCustomer : ICustomer
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public string Email { get; set; }
public Address? Address { get; set; }
public DateTime DoB { get; set; }
}
```
ICustomerDto:
```csharp
namespace AutoMapper_Test;
public interface ICustomerDto
{
int Id { get; set; }
string FullName { get; set; }
string Email { get; set; }
DateTime DoB { get; set; }
}
```
CustomerDto:
```csharp
namespace AutoMapper_Test;
public class CustomerDto : ICustomerDto
{
public int Id { get; set; }
public string FullName { get; set; }
public string Email { get; set; }
public DateTime DoB { get; set; }
}
```
Address:
```csharp
namespace AutoMapper_Test;
public class Address
{
public string Street { get; set; }
public string House { get; set; }
public string PostalCode { get; set; }
public string City { get; set; }
public string Country { get; set; }
public string FullAddress => $"{House} {Street} {City} {Country} {PostalCode}";
}
```
MappingProfile:
```csharp
using AutoMapper;
using AutoMapper_Test;
public class MappingProfile : Profile
{
public MappingProfile()
{
CreateMap<DbCustomer, CustomerDto>()
.ForMember(dest => dest.FullName, opt => opt.MapFrom(src => $"{src.FirstName} {src.LastName}"));
}
}
```
MapExtensions:
```csharp
namespace AutoMapper_Test;
public static class MapExtensions
{
public static ICustomerDto ToDto(this ICustomer customer)
{
return new CustomerDto()
{
Id = customer.Id,
FullName = $"{customer.FirstName} {customer.LastName}",
Email = customer.Email,
DoB = customer.DoB
};
}
}
```
Benchmark Helper:
```csharp
using AutoMapper;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Order;
namespace AutoMapper_Test;
[MemoryDiagnoser]
[Orderer(SummaryOrderPolicy.FastestToSlowest)]
public class Helper
{
[Benchmark]
public void AutomapperTest()
{
var configuration = new MapperConfiguration(cfg => { cfg.AddProfile<MappingProfile>(); });
var mapper = new Mapper(configuration);
var autoMappedCustomer = new DbCustomer
{
FirstName = "Grant",
LastName = "Riordan",
Address = new Address()
{
House = "123", City = "Balamory", Country = "Scotland", Street = "Balamory Way", PostalCode = "BA12 8SC"
},
DoB = new DateTime()
};
var mappedDto = mapper.Map<CustomerDto>(autoMappedCustomer);
Console.WriteLine(mappedDto.FullName);
}
[Benchmark]
public void ManualTest()
{
var manualCustomer = new DbCustomer()
{
FirstName = "Grant",
LastName = "Riordan",
Address = new Address()
{
House = "123", City = "Balamory", Country = "Scotland", Street = "Balamory Way", PostalCode = "BA12 8SC"
},
DoB = new DateTime()
};
var manualMappedDto = manualCustomer.ToDto();
Console.WriteLine(manualMappedDto.FullName);
}
}
```
Rundown of the code:

We have a `CustomerDto` object (a smaller object for transferring relevant data), as well as a `DbCustomer` object, which represents our domain model (what would be loaded from a database, for example).

The `Helper` class is where our benchmark tests live. This class is prefixed with some Benchmark attributes to specify what results I want to see and how.

Each method under test is then marked with the `[Benchmark]` attribute, which allows the library to pick these methods up as methods to be tested against.

Each method will be run multiple times to gather a mean execution time and to collect data on how much memory was used during execution.
First run `dotnet build -c Release` in your project directory, followed by `dotnet <path to your built dll>`,
for example: `dotnet <projectDirectory>/bin/Release/net8.0/AutoMapper_Test.dll`
### Results
```terminal
| Method | Mean | Error | StdDev | Gen0 | Gen1 | Gen2 | Allocated |
|--------------- |----------:|---------:|---------:|--------:|-------:|-------:|----------:|
| ManualTest | 10.93 us | 0.269 us | 0.794 us | 0.0305 | - | - | 216 B |
| AutomapperTest | 534.15 us | 2.931 us | 2.742 us | 11.7188 | 5.8594 | 0.9766 | 75833 B |
```
As you can see, the manual version of this functionality is vastly more performant:
10.93 microseconds (0.01093 ms) vs 534.15 microseconds (0.53415 ms).
Yes, we're still talking well under a second, but it's a big relative difference, with over 500 microseconds (roughly 0.5 ms) more per operation.
In addition, AutoMapper allocates roughly 350x more memory than the manual variation, and that's on a basic mapping involving only 4-5 properties. Imagine the impact when working with a much more elaborate object.
Let's add the Address object as a small nested object and look at the difference.
## Test 2: Including Address Object
```csharp
// ICustomerWithAddressDto and CustomerWithAddressDto
namespace AutoMapper_Test;
// Note: this interface wasn't shown in the original snippet; a minimal
// definition is included here so the code compiles.
public interface ICustomerWithAddressDto : ICustomerDto
{
    Address Address { get; set; }
}
public class CustomerWithAddressDto : ICustomerWithAddressDto
{
    public int Id { get; set; }
    public string FullName { get; set; }
    public string Email { get; set; }
    public DateTime DoB { get; set; }
    public Address Address { get; set; }
}
```
```csharp
// add the following method to the MappingExtensions.cs
public static ICustomerWithAddressDto ToDtoWithAddress(this ICustomer customer)
{
return new CustomerWithAddressDto()
{
Id = customer.Id,
FullName = $"{customer.FirstName} {customer.LastName}",
Email = customer.Email,
DoB = customer.DoB,
Address = customer.Address
};
}
```
```csharp
// add this mapping to the MappingProfile
CreateMap<DbCustomer, CustomerWithAddressDto>()
.ForMember(dest => dest.FullName, opt => opt.MapFrom(src => $"{src.FirstName} {src.LastName}"));
```
```csharp
// Add the following two benchmark tests to your Helper.cs file
[Benchmark]
public void ManualTest_WithAddress()
{
var manualCustomer = new DbCustomer()
{
FirstName = "Grant",
LastName = "Riordan",
Address = new Address()
{
House = "123", City = "Balamory", Country = "Scotland", Street = "Balamory Way", PostalCode = "BA12 8SC"
},
DoB = new DateTime()
};
var manualMappedDto = manualCustomer.ToDtoWithAddress();
Console.WriteLine($"{manualMappedDto.FullName} {manualMappedDto.Address?.Street}");
}
[Benchmark]
public void AutomapperTest_WithAddress()
{
var configuration = new MapperConfiguration(cfg => { cfg.AddProfile<MappingProfile>(); });
var mapper = new Mapper(configuration);
var autoMappedCustomer = new DbCustomer
{
FirstName = "Grant",
LastName = "Riordan",
Address = new Address()
{
House = "123", City = "Balamory", Country = "Scotland", Street = "Balamory Way", PostalCode = "BA12 8SC"
},
DoB = new DateTime()
};
var mappedDto = mapper.Map<CustomerWithAddressDto>(autoMappedCustomer);
Console.WriteLine($"{mappedDto.FullName} {mappedDto.Address?.Street}");
}
```
Re-run the benchmark tests as before, compiling and then running the dll with `dotnet` command.
```terminal
| Method | Mean | Error | StdDev | Gen0 | Gen1 | Gen2 | Allocated |
|--------------------------- |-----------:|----------:|----------:|--------:|-------:|-------:|----------:|
| ManualTest | 9.881 us | 0.0707 us | 0.0662 us | 0.0305 | - | - | 216 B |
| ManualTest_WithAddress | 11.474 us | 0.0864 us | 0.0766 us | 0.0458 | - | - | 304 B |
| AutomapperTest | 489.031 us | 2.1267 us | 1.8853 us | 13.6719 | 6.8359 | 0.9766 | 87530 B |
| AutomapperTest_WithAddress | 538.434 us | 2.6644 us | 2.3619 us | 13.6719 | 6.8359 | 2.9297 | 91504 B |
```
Again, the results are conclusive: mapping manually results in faster execution. Notably, the difference between mapping the Address and not mapping it is negligible in both cases.
## Conclusion
I'd like to point out that this is a very basic example, and with larger, more complex objects AutoMapper may be more enticing than writing custom mappers. However, I find the points I've made still stand: I like to have control over my codebase and to be able to debug issues easily. The level of abstraction AutoMapper introduces makes it more difficult to pinpoint problems and debug code.
*Author: grantdotdev*
---

# A Developer's Guide to Prompt Engineering and LLMs

*Published 2024-03-13 · originally at https://dev.to/pubnub-de/ein-leitfaden-fur-entwickler-zu-prompt-engineering-und-llms-18k6*

Learn how to cut your development time by leveraging LLMs like GPT-4.

With the rise of large language models (LLMs) and artificial intelligence such as [ChatGPT](https://chat.openai.com/), developers now have incredibly powerful AI assistants at their disposal that can help them code faster and be more productive. However, simply asking an LLM to write code often yields unusable or buggy results. Whether you are building a [chat application](https://www.pubnub.com/solutions/chat/) or an [enterprise solution](https://www.pubnub.com/Solutions/Enterprise/) at the network edge, you need a deliberate strategy to accelerate your development time with AI. In the next sections of this blog, we will take a closer look at prompt engineering techniques.
### Plan Ahead

Don't jump straight into prompting for code. Take the time to plan the overall architecture, schemas, integrations, and so on. This gives your AI assistant the full context it needs. Visualize the end state so your generative AI model understands the big picture.

### Role-Play with Experts

Set up scenarios in which your AI model can act as an assistant and role-play with a team of experts discussing solutions. Give them personalities and let them break down problems. This encourages creative thinking from different perspectives.

### Ask GPT for Insights

When you get stuck, don't be afraid to pause and ask your LLM for specific advice or code examples relevant to your problem. That knowledge can spark new ideas.

### Prompt Engineering

Work step by step, refining your prompts over multiple iterations and rounds of fine-tuning to give your AI assistant the optimal context and guidance. Effective prompts lead to great results.

### Manual Code Generation

First let your AI generate code manually, then review the outputs and fix errors before copying usable parts back into your prompt. This gives the AI a concrete example of what you are looking for, and repeating this process eventually produces correct code. For more examples, see the concepts of [chain-of-thought prompting](https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html) and [zero-shot prompting](https://www.promptingguide.ai/techniques/zeroshot).
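To make the iteration concrete, here is a minimal sketch in plain Python (no API calls; the function name and prompt format are illustrative only, not part of any library) of how a chain-of-thought style prompt can be assembled from previously corrected examples:

```python
def build_cot_prompt(examples, question):
    """Assemble a chain-of-thought prompt: each example pairs a question
    with its step-by-step reasoning, then the new question is appended."""
    parts = []
    for q, reasoning in examples:
        parts.append(f"Q: {q}\nA: Let's think step by step. {reasoning}")
    # The new question ends with the same cue, inviting the model to reason aloud.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

examples = [
    ("What is 12 * 3?", "12 * 3 = 36. The answer is 36."),
]
prompt = build_cot_prompt(examples, "What is 7 * 8?")
print(prompt.count("Let's think step by step."))  # 2
```

Each reviewed-and-corrected output you paste back in becomes another entry in `examples`, which is exactly the feedback loop described above.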
### Automated Code Generation

Once you are satisfied with the LLM model output and it consistently produces good code, you can set up pipelines to automatically generate assets based on schemas, tests based on code, and so on. This eliminates bottlenecks.

### Handle Errors Gracefully

Yes, occasional errors are to be expected. Improve your prompts to avoid similar errors in the future. Over time, learn which kinds of tasks your AI handles well.

By combining planning, role-play, prompting, iterative training, and automation, you can achieve enormous productivity gains with AI assistants like GPT. With the right prompt-design strategy, you can boost your development speed!

Generating Code with Prompts
----------------------------

As we stand on the threshold of the AI revolution, we need to rethink how software development is approached and executed. In this tutorial, you will learn how to navigate the transition from a traditional process to one supported by natural language processing.

### Planning Ahead: The Foundation of Success

The first step on this journey is the ability to plan ahead. AI, and in particular models like GPT-3, is not necessarily capable of planning for the future. Its competence lies in real-time problem solving within a narrow window of context tokens. This is where human developers have to step in.

By painting a detailed picture of the final code (including the model schemas, the technology stack, and the deployment processes), we can set the stage for AI to solve complex tasks. This planning phase is perhaps the most demanding part of the process, but it transforms the role of developers from programmers into orchestrators and lays the groundwork for a harmonious blend of human intelligence and AI capabilities.

### Assembling a Panel of Experts

This is where the "panel of experts" comes in. This group can be made up of a wide range of stakeholders, from executives to AI agents, all contributing to the understanding and planning of a given task. With a well-crafted prompt you can create a virtual playground where the panel discusses and solves complex problems, with AI bots providing support and notes.

And when you find yourself wrestling with a question, remember that you have a powerful tool at your disposal built specifically for answering questions: GPT. Whether you need a summary, a rephrasing, or a deeper exploration of a topic, GPT can help. Deliberately keeping prompts vague can often surface surprising and innovative solutions.

Example code-debugging prompt:
```text
Act as a virtual playground where a team of Smart AI bots is in their first meeting discussing the problem:
“We need to debug the following Python test code:
def test_dot_env_def(self):
LOG_TEST.info("Test DotEnvDef")
dotEnvDef = DotEnvDef()
> self.assertEqual(dotEnvDef, {"version_echo": "v0.0"})
E AssertionError: {'version_echo': 'v0.0'} != {'version_echo': 'v0.0'}
tests/test_default_config.py:16: AssertionError
class DotEnvDef:
def init(self):
self.version_echo = "v0.0"
# make the class iterable as dict
def iter(self):
return iter(self.__dict__)
# return the values only
def get_items(self):
for attr, value in self.__dict__.items():
yield value # only yield the values
def repr(self):
return str(self.__dict__)
To fulfill this request bring in an expert team of AI Agents that are very brief and communicate via emojis and use catchy phrases. Spend your resources wisely as you only have 4 AI Agents.
```
GPT-4 response:
```text
Alright, welcome to this first meeting, team. We have an interesting challenge ahead of us. To start, let's assign roles to our four AI agents:
1. Detective Bot 👮♀️ (DB): Responsible for deep diving into the code, its syntax, structure, and logic.
2. Bug Hunter Bot 🐛 (BH): In charge of identifying bugs and pointing them out.
3. Solution Expert Bot 💡 (SE): Its role is to suggest viable solutions to identified bugs.
4. Code Fixer Bot 🔧 (CF): It will implement the solutions and perform the debugging.
Our debugging plan will look like this:
1. **Initiation**: Here we discuss the problem and set our goals. The issue at hand is an AssertionError, meaning our test case is failing. Our aim is to make the test pass.
2. **Investigation**: 👮♀️ DB deep-dives into the provided code to understand its structure and logic. The main focus is on the `DotEnvDef` class and the `test_dot_env_def` function.
3. **Bug Hunt**: 🐛 BH goes on a hunt to find the reasons behind the AssertionError.
4. **Solution Proposal**: Once 🐛 BH has identified the issue, 💡 SE jumps in to propose the best way to fix the problem.
5. **Implementation**: 🔧 CF takes the solution and implements it to the code. After the changes are made, the test case is run again.
6. **Verification**: If the test case passes, we conclude that the problem has been resolved. If not, we circle back to step 2.
Remember team, our goal is to squash bugs 🐜🔨! Let's rock'n'roll.
```
### Manual Code Generation & AI

The next step is to use GPT-4 for manual code generation, a process that promises an exponential increase in productivity. By guiding GPT, you streamline a wide variety of tasks, from deploying a web application in an unfamiliar stack to writing an algorithm in a low-level language.

Use commands in your agent example:
```text
Using the following syntax “/command argument” modifies a bot’s behavior, for example for enabling verbose code generation: “chatbot /code-gen verbose“ so that we obtain full python code. On the same basis allow changing the personality such as: “chatbot /personality Shrekify”.
DB /personality Shrekify
SE /code-gen verbose
CF /code-gen verbose
> Continue the meeting and provide the first solution
```
GPT-4 response:
```text
DB 🏰👮♀️: "Hmm, the `version_echo` key is still not appearing in the `parsed_dotenv` dictionary. We need to figure out why `set_vars` is not setting it."
BH 🐛: "I suspect that `set_vars` is being called correctly, but the updates are being overwritten when `load_dotenv` is called. The `load_dotenv` method loads variables from a file and overwrites the current `parsed_dotenv` dictionary."
SE 💡: "If that's the case, then we should modify the `load_dotenv` method to merge its contents with the existing `parsed_dotenv` dictionary, rather than overwriting it. This will preserve any variables set via `set_vars`."
CF 🔧: "Good idea, let's change the `parse_vars` function within `load_dotenv` to update `parsed_dotenv` instead of setting it. Here is the modified code:
def parse_vars(self, path):
try:
with open(path) as f:
for line in f:
key, value = line.strip().split("=", 1)
if key not in self.config.parsed_dotenv: # check if key is not already in the dictionary
self.config.parsed_dotenv[key] = value
except IOError:
print(f"Warning: Unable to open .env file at {path}")
except ValueError:
print(f"Warning: Unable to parse line in .env file: {line}")
"In this way, we only add a variable from the .env file if it's not already in `parsed_dotenv`. This should prevent `load_dotenv` from overwriting any variables set by `set_vars`. Let's re-run the tests."
```
The process is not perfect, however. Human limitations such as copy-pasting and the time spent on prompting and result generation can create bottlenecks. In addition, GPT is not consistently accurate and sometimes changes the format or delivers only partial results. But with some practice and the right strategies, you can get GPT to produce useful and accurate code.

Avoiding Hallucinations and Getting Useful Results
--------------------------------------------------

Hallucinations, i.e. cases where GPT produces irrelevant, distorted, or incorrect information, can be frustrating. However, the more you work with GPT, the better you become at steering it toward the results you want. One way to minimize these hallucinations is to give GPT context by copying and pasting relevant documentation and code examples.
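As an illustration, that grounding step can be done programmatically by pasting documentation ahead of the question. This sketch is hypothetical (the function name and prompt wording are invented, not a library API) and also trims the context to a rough character budget:

```python
def prompt_with_context(docs, question, max_chars=2000):
    """Ground the model by placing relevant docs ahead of the question,
    truncating so the pasted context fits a rough character budget."""
    context = "\n---\n".join(docs)[:max_chars]
    return (
        "Use ONLY the documentation below to answer.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

docs = [
    "LangChain agents call tools in a loop.",
    "PythonREPLTool executes Python code.",
]
p = prompt_with_context(docs, "Which tool runs Python?")
print("PythonREPLTool" in p)  # True
```

The explicit "use only the documentation below" framing, plus the pasted source material, is what discourages the model from inventing APIs that don't exist.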
Create a custom LangChain agent with test examples:
```text
We have this builder:
class OpenAIBuilder:
def __init(self, ...):
# Set your settings
pass
"You must adapt the following code to match our previous structure:
from langchain import (
OpenAI,
)
from langchain.agents import initialize_agent, AgentType
from langchain.agents.agent_toolkits import create_python_agent
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.tools.python.tool import PythonREPLTool
from openai_chat_builder import OpenAiChatBuilder, OpenAiChatHelper
class AgentPythonExecBuilder(OpenAiChatBuilder):
def __init__(self, openai_api_key):
super().__init__()
self.tools = ["PythonREPLTool"]
# We override the add_tool method so it only allows PythonREPLTool.
def add_tool(self, tool):
if tool == "PythonREPLTool":
return super().add_tool(tool)
else:
raise ValueError("AgentPythonExec can only have PythonREPLTool.")
def build(self):
llm = OpenAI(
temperature=self.temperature,
max_tokens=self.max_tokens,
openai_api_key=self.openai_api_key,
streaming=self.streaming,
)
tools = load_tools(self.tools)
agent = initialize_agent(tools, llm, agent=self.agentType, verbose=self.verbose)
return OpenAiChatHelper(llm, tools, agent)
```
Create LangChain agent tests:
```text
Now let's build a test file that resembles this one:
import unittest
from probable_fiesta.config.builder.config_builder import ConfigBuilder
from src.promptnub_backend.app.langchain.builder.openai_factory import OpenAIFactory
from langchain.llms import OpenAI
class TestOpenAIBuilder(unittest.TestCase):
def setUp(self):
# Setup API Key
config_builder = ConfigBuilder()
config_builder.dotenv.load_dotenv(".env")
# Setup ModelBuilder instance
self.builder = OpenAIFactory()
self.builder.openai_api_key = config_builder.dotenv.get_var("OPENAI_API_KEY")
def test_llm_creation(self):
# Test the creation of OpenAI llm instance
llm = self.builder.builder.llm
self.assertIsInstance(llm, OpenAI, "llm property does not return OpenAI instance")
def test_llm_run(self):
# Test the creation of OpenAI llm instance
llm_chain = self.builder.build()
self.assertIsNotNone(llm_chain)
# Ask a question
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
response = llm_chain.run(question)
# Assert expected result
# Note that there is a space at the beginning
expected = " Justin Bieber was born in 1994, so the NFL team that won the Super Bowl in 1994 was the Dallas Cowboys."
self.assertEquals(str(response), expected)
```
GPT-4 response:
```text
Given the provided `AgentPythonExecBuilder` class and the `TestOpenAIBuilder` class, you would write a test file for the `AgentPythonExecBuilder` class as follows:
from logger.builder.logger_factory import LoggerFactory
from config.builder.config_builder import ConfigBuilder
from langchain.llms import OpenAI
from langchain.tools.python.tool import PythonREPLTool
from your_package_path.builder.agent_python_exec_factory import AgentPythonExecFactory
import unittest
# Test Log
LOG_TEST = LoggerFactory.new_logger_default(
name="test_agent_python_exec_builder", level="DEBUG", fmt="simple", directory="logs"
)
class TestAgentPythonExecBuilder(unittest.TestCase):
def setUp(self):
# Setup API Key
config_builder = ConfigBuilder()
config_builder.dotenv.load_dotenv(".env")
# Setup ModelBuilder instance
self.builder = AgentPythonExecFactory()
self.builder.openai_api_key = config_builder.dotenv.get_var("OPENAI_API_KEY")
def test_llm_creation(self):
# Test the creation of OpenAI llm instance
LOG_TEST.info("Test OpenAI llm creation")
llm = self.builder.builder.llm
LOG_TEST.debug(llm)
self.assertIsInstance(
llm, OpenAI, "llm property does not return OpenAI instance"
)
def test_python_repl_tool_creation(self):
# Test the creation of PythonREPLTool instance
LOG_TEST.info("Test PythonREPLTool creation")
self.builder.builder.add_tool("PythonREPLTool")
tools = self.builder.builder.tools
LOG_TEST.debug(tools)
self.assertIsInstance(
tools[0], PythonREPLTool, "tools[0] does not return PythonREPLTool instance"
)
def test_llm_run(self):
# Test the creation of OpenAI llm instance and PythonREPLTool
LOG_TEST.info("Test OpenAI llm and PythonREPLTool run sync")
llm_chain = self.builder.build()
self.assertIsNotNone(llm_chain)
# Ask a question
question = "What is 2 + 2?"
response = llm_chain.run(question)
LOG_TEST.info(response)
# Assert expected result
expected = "The result of 2 + 2 is 4."
self.assertEquals(str(response), expected)
Here, you create a new instance of `AgentPythonExecBuilder` instead of `OpenAIBuilder` and test it with a question that can be answered by executing Python code.
```
Automated Code Generation
-------------------------

The real power of LLMs, however, lies in automated code generation. By using tools such as [PubNub](https://www.pubnub.com/), Python, LangChain, and [Streamlit](https://streamlit.io/), you can automate most of your workflow, leaving you free to focus on more important tasks.

With a few more abstractions, we can plug our simple LangChain runner into our workflow:
```python
class TestOpenAIBuilder(unittest.TestCase):
def setUp(self):
self.runner = Runner()
def test_run(self):
response = self.runner.run("What is the capital of France?")
expected = " France is a country in Europe. The capital of France is Paris."
self.assertEqual(response, expected)
```
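The `Runner` abstraction itself isn't shown in the article. One possible shape for it, sketched here with entirely hypothetical names (the chain is injected so the example stays self-contained and testable without an API key):

```python
class Runner:
    """Hides chain construction behind a single run() call."""
    def __init__(self, build_chain=None):
        # In a real setup build_chain would wire up the LLM via the factory;
        # injecting it keeps this sketch free of external dependencies.
        self._chain = build_chain() if build_chain else None

    def run(self, question):
        if self._chain is None:
            raise RuntimeError("No chain configured")
        return self._chain.run(question)

class FakeChain:
    """Stand-in for an LLM chain, used only for this demonstration."""
    def run(self, question):
        return f"echo: {question}"

print(Runner(FakeChain).run("ping"))  # echo: ping
```

Swapping `FakeChain` for a real LangChain chain is the only change needed to go from this sketch to a live pipeline.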
The Path to 100x Faster Development
-----------------------------------

Imagine a developer determined to improve their programming skills, aiming for 2x, 3x, or even 50x productivity. The secret lies in the strategic application of LLMs. This toolset acts as a lever for productivity, enabling developers to build new platforms and drive automated workflows like never before, from something as simple as a text summary all the way to generating production code.

As we venture further into the sphere of AI and its role in software development, it is important to remember that the true strength lies in the fusion of human creativity and AI capabilities. Thoughtful planning, the intelligent application of AI, and the right digital tools can enable us to transform into a more productive company. This transition is about amplifying the potential of human developers with AI, not replacing them. The idea of becoming a 100x company with the help of AI is not just a distant dream but an achievable future we can shape together.
How Can PubNub Help You?
========================

This article was originally published on [PubNub.com](https://www.pubnub.com/blog/developers-guide-to-prompt-engineering/).

Our platform helps developers build, deploy, and manage real-time interactivity for web apps, mobile apps, and IoT devices.

The foundation of our platform is the industry's largest and most scalable real-time edge messaging network. With over 15 points of presence worldwide supporting 800 million monthly active users, and 99.999% reliability, you never have to worry about outages, concurrency limits, or latency issues caused by traffic spikes.

Experience PubNub
-----------------

Check out the [Live Tour](https://www.pubnub.com/tour/introduction/) to understand the essential concepts behind every PubNub-powered app in under 5 minutes.

Set Up
------

Sign up for a [PubNub account](https://admin.pubnub.com/signup/) and get immediate free access to PubNub keys.

Get Started
-----------

The [PubNub docs](https://www.pubnub.com/docs) will get you up and running right away, regardless of your use case or [SDK](https://www.pubnub.com/docs).

*Author: pubnubdevrel*
---

# How to build your AWS infrastructure using CDK

*Tags: cdk, python, cicd, aws · Published 2024-03-13 · originally at https://dev.to/datalynx/how-to-build-your-aws-infrastructure-using-code-and-deploy-your-backend-app-1ic9*

While working on [Datalynx](https://datalynx.ai/) I found myself needing to create multiple test environments, including Production. I started building the proof of concept using the AWS console. That was less than ideal, and I had to mentally keep track of every configuration I set up for each resource. At the beginning the app was simple, the deployment process was manual, and we didn't put much effort into security and scaling. But the more complex the application became, the harder (and more time-consuming) it was to replicate the infrastructure for new environments. Plus, it was a process that was very much prone to errors.
Since I like to solve many of life's problems with code, I wanted to solve this issue with code as well.
So how do I “code” my AWS infrastructure, and make it easy to edit and build different clones of environments and resources without suffering?
**💡 THE MAGICAL AWS CDK**
AWS CDK is a collection of tools that lets you treat AWS resources as objects. You can use your preferred programming language and execute from your local machine like you would use the AWS CLI. For the sake of simplicity, I’ll use Python for this guide.
CDK defines configurations in a **Stack**. A Stack is a collection of configurations that contains resources. In this guide, we’ll be using a Stack as an environment. Each stack will be a separate environment for the same application. This means that if you want to spawn a new environment you would simply copy-paste the existing Stack, change names, and deploy.
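As a plain-Python sketch of that idea (no CDK calls; the environment names and values below are invented for illustration), the per-environment differences reduce to a config lookup that a single parameterized Stack definition can consume, instead of copy-pasting the whole Stack:

```python
# Each environment differs only in its parameters; the Stack definition
# itself is written once and instantiated per entry.
ENVS = {
    "staging": {"instance_count": 1, "domain": "staging.example.com"},
    "prod": {"instance_count": 3, "domain": "example.com"},
}

def settings_for(env: str) -> dict:
    """Look up the parameters a Stack needs for a given environment."""
    return ENVS[env]

print(settings_for("prod")["domain"])  # example.com
```

Spawning a new environment then becomes adding one dictionary entry rather than duplicating and hand-editing a Stack.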
Let’s talk about resources and a real-life example.
## *USE CASE: I have a Python API and I want to host it on AWS.*
Generally speaking, the most common use case is to get our app running and make it available on the internet using HTTPS.
To make this happen we will need to create the following resources on AWS:
- **VPC**: networking for your app
- **Application Load Balancer**: routes traffic to your application
- **IAM Roles**: manage permissions to execute and run the service’s tasks
- **CloudWatch log group**: contains logs of your app
- **ECR repository**: contains the Docker image of your app
- **ECS cluster**: manages your app instances
- **ECS Service**: the process that keeps your app up and running
- **EC2 Auto Scaling group capacity provider**: takes care of scaling your app
- **Route 53 record**: connects your domain to your app
Now from my experience, some resources are straightforward to manage using CDK, and others are not. Those are usually resources that contain data that has been uploaded by the user or by a machine. I found myself having issues with **ECR** and **S3** since editing those sometimes requires CDK to recreate them or ignore them. That means either losing data or not getting your change out at all. CDK lets you import resources that have already been created using the AWS Console letting you essentially manage an existing infrastructure and add things to it. This is a hybrid approach I’ll be executing here.
Another thing to consider is that some resources can be shared between Stacks and don’t have to be specific.
### VPC
AWS comes with an existing VPC called the ‘default’ VPC. We’ll be using this one and this resource **WILL NOT** be managed by the instance Stack but it will be imported to it.
### ECR
The Elastic Container Registry will contain the Docker images of your app. Managing this resource using CDK gave me all sorts of trouble. I will not want you to go through that so for simplicity this is another resource that **WILL NOT** be managed by the instance Stack but you’ll have to create it and import it.
### IAM ROLES
Each Stack will be sharing the same roles since the permissions will be the same. Therefore those **WILL NOT** be managed by the instance Stack.
### ROUTE 53 HOSTED ZONE
Another resource that is shared between Stacks is a hosted zone. This one needs to be configured for an existing domain that you purchased elsewhere. If you haven't pointed your domain name servers to Route 53 and you purchased your domain on GoDaddy, [here](https://www.clickittech.com/aws/transfer-godaddy-to-aws/) is a guide that will help you with that. Again, this is a shared resource, so it **WILL NOT** be managed by your instance Stack.
### HTTP CERTIFICATE
To make sure your Application Load Balancer can support HTTPS connection you need a certificate. AWS offers a service called AWS Certificate Manager. You can create a certificate in minutes and attach it to your load balancer in code. This too will be a shared resource and **WILL NOT** be managed by your instance Stack.
> You are probably thinking *“There are a lot of shared resources between stacks, this does not solve my problem entirely if I still have to log into the console and make those myself!”.*
And you are right. A solution for this dilemma is to create a separate Stack that manages ONLY the shared resources. This way you never have to log into the console and can manage your entire project from your CDK infrastructure code.
## How do I start?
Setting up CDK and a project is trivial. I'll leave this task to the official AWS guide [here](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html) to install CDK on your machine and [this one](https://docs.aws.amazon.com/cdk/v2/guide/hello_world.html) to set up the CDK project. Once that is done you can come back here. I like to stick to this naming convention when creating my Stack: **[name of your project][name of your environment]Stack**.
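That convention is easy to keep consistent if you generate the Stack id programmatically; here is a tiny sketch (the helper name is my own, not part of CDK):

```python
def stack_name(project: str, environment: str) -> str:
    """Build a Stack id following the [project][environment]Stack convention."""
    return f"{project}{environment}Stack"

# e.g. stack_name("MyApp", "Staging") gives "MyAppStagingStack"
```

Passing the result as the construct id of each Stack keeps every environment named the same way.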
### ⏩ Importing existing resources
As stated before, we need to import the existing resources first.
```python
# Your default VPC
default_vpc = ec2.Vpc.from_lookup(self, 'vpc-xxxx', is_default=True)
# ECR repository
ecr_dl_backend = ecr.Repository.from_repository_name(
self, "ECRBackend",
repository_name="my_app_repository_name"
)
# IAM Role for the Docker instance
instance_role = iam.Role.from_role_arn(
self, 'EcsInstanceRole',
role_arn='arn:aws:iam::xxxx:role/ecsInstanceRole'
)
# IAM Role for the task execution
task_execution_role = iam.Role.from_role_arn(
self, 'EcsTaskExecutionRole',
role_arn='arn:aws:iam::xxxx:role/ecsTaskExecutionRole'
)
# Certificate imported from ACM
certificate = acm.Certificate.from_certificate_arn(self, "MyDomainCert", "arn:aws:acm:us-xxx:xxxxx")
# Route 53 Hosted zone
hosted_zone = route53.HostedZone.from_lookup(
self, "Route53HostedZoned",
domain_name="myapp.com"
)
```
You can see from here that we need two roles for ECS. Those can be found in IAM; from there you can grab the ARN and insert it in the code. An AWS account already comes with the *EcsInstanceRole* and the *EcsTaskExecutionRole* included in its list of roles.
### ⏩ Create Cluster, Service, Auto Scaling Group
```python
# Create Cluster
ecs_cluster = ecs.Cluster(
self, "MyCluster",
cluster_name="MyAppClusterName",
vpc=default_vpc,
)
# Security group for the Ec2 instance
ec2_security_group = ec2.SecurityGroup(
self,
"EC2SecurityGroup",
vpc=default_vpc,
allow_all_outbound=True,
description="Accepts all ELB traffic",
)
# Ec2 instance launch template
launch_template = ec2.LaunchTemplate(
self,
"MyAppLaunchTemplate",
instance_type=ec2.InstanceType.of(
ec2.InstanceClass.T3A, ec2.InstanceSize.SMALL),
machine_image=ecs.EcsOptimizedImage.amazon_linux2023(),
launch_template_name="MyAppLaunchTemplate",
user_data=ec2.UserData.for_linux(),
role=instance_role,
security_group=ec2_security_group
)
# Create Auto Scaling Group
asg = autoscaling.AutoScalingGroup(
self, "MyAsg",
launch_template=launch_template,
vpc=default_vpc,
min_capacity=1,
max_capacity=3,
ssm_session_permissions=True,
vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC)
)
# Create Capacity Provider for Auto Scaling Group
asg_capacity_provider = ecs.AsgCapacityProvider(self, 'MyAsgCapacityProvider', auto_scaling_group=asg)
# Adding capacity provider to Cluster
ecs_cluster.add_asg_capacity_provider(asg_capacity_provider)
# Create Task definition for app
task_definition = ecs.Ec2TaskDefinition(self, "TaskDef", execution_role=task_execution_role)
# Add log group
log_group = logs.LogGroup(
self, "MyAppLogGroup"
)
# Add container info, attaches the ECR image to the container
container = task_definition.add_container(
"BackendContainer",
image=ecs.ContainerImage.from_ecr_repository(ecr_dl_backend),
memory_reservation_mib=600, # change needed RAM depending on how much memory your app uses on IDLE
essential=True,
health_check=ecs.HealthCheck(
command=["CMD-SHELL", "curl -f http://localhost/health-check/ || exit 1"],
interval=Duration.seconds(30),
timeout=Duration.seconds(3),
retries=3,
start_period=Duration.seconds(5)
),
logging=ecs.AwsLogDriver(
log_group=log_group,
stream_prefix="AppLogGroup",
mode=ecs.AwsLogDriverMode.NON_BLOCKING
)
)
# Use container port 80
container.add_port_mappings(
ecs.PortMapping(container_port=80)
)
# Create the service
service = ecs.Ec2Service(
self, "BackendService",
service_name="BackendService",
cluster=ecs_cluster,
task_definition=task_definition
)
```
- We chose to run 1 instance of your app for now (*min_capacity=1*). You can scale as you wish depending on the load of your app
- You could also edit your Auto Scaling Group to change the number of instances based on your own parameters ([here](https://idanlupinsky.com/blog/ecs-service-auto-scaling-with-the-cdk/) is a good guide for it)
- ECS needs a health check URL to check if the app is alive. Make sure to edit that. In this case it is `/health-check/`
- We are only opening port 80 because the ALB will be connecting to it. Not users from the internet directly.
### ⏩ Create Application Load Balancer
Now we have created the system that keeps the app running and can auto scale based on your needs. Let's see how we can connect to our API by attaching our ECS service to an Application Load Balancer (ALB).
```python
# Create ALB Security group
alb_security_group = ec2.SecurityGroup(
self,
"ALBSecurityGroup",
vpc=default_vpc,
allow_all_outbound=True,
)
alb_security_group.add_ingress_rule(ec2.Peer.any_ipv4(), ec2.Port.tcp(80))
alb_security_group.add_ingress_rule(ec2.Peer.any_ipv4(), ec2.Port.tcp(443))
# Create ALB
lb = elbv2.ApplicationLoadBalancer(
self, "ALB",
vpc=default_vpc,
internet_facing=True,
security_group=alb_security_group
)
# Add HTTP and HTTPS listener using the certificate we imported
http_listener = lb.add_listener(
"HTTPListener",
port=80,
open=True
)
https_listener = lb.add_listener(
"HTTPSListener",
port=443,
protocol=elbv2.ApplicationProtocol.HTTPS,
open=True,
certificates=[certificate]
)
# ALB Health check
health_check = elbv2.HealthCheck(
interval=Duration.seconds(60),
path="/health-check/",
timeout=Duration.seconds(5)
)
# Connects to the ECS Service
target_group = elbv2.ApplicationTargetGroup(
self, "TargetGroup",
vpc=default_vpc,
port=80,
targets=[service],
health_check=health_check
)
# Add target group to both listeners
http_listener.add_target_groups(
"HTTPListenerTargetGroup",
target_groups=[target_group]
)
https_listener.add_target_groups(
"HTTPSListenerTargetGroup",
target_groups=[target_group]
)
# Attaching the ALB URL to route 53 CNAME record
route53.CnameRecord(
self, "ALBHostedURLRecord",
zone=hosted_zone,
record_name='api.myapp.com',
domain_name=lb.load_balancer_dns_name
)
```
- The security group allows connection on ports 80 (HTTP) and 443 (HTTPS)
- The ALB also needs a health check. We’ll be using the same URL.
- Note `api.myapp.com` is going to be our API url. Change that as you wish
- Make sure that the record name includes the hosted zone name
- `api` + `your hosted zone name` is a very common convention when hosting an API.
### ⏩ Deploy our app and make it available on the internet
Once all the infrastructure is done and deployed, we are ready to push our app to ECR and trigger a deployment on ECS. Let's say you have your nice Python Flask app and a Dockerfile that builds your application and runs it on port 80 when started in a container.
Our deployment system will look something like this:
1. Build your Docker image, tagging it with the ECR repository URL
   1. `docker build -t [aws_account_id].dkr.ecr.[region].amazonaws.com/[my_app_repository_name] .`
2. Upload the image to ECR using the AWS CLI (after authenticating Docker to ECR with `aws ecr get-login-password`)
   1. `docker push [aws_account_id].dkr.ecr.[region].amazonaws.com/[my_app_repository_name]`
3. Force an ECS deployment using the same latest task definition
1. `aws ecs update-service --cluster [cluster name] --service [service name] --force-new-deployment --region [region]`
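For convenience, those three commands can be assembled programmatically; here is a sketch (the account id, region, and resource names are placeholders, and it assumes Docker is already authenticated to ECR):

```python
def build_deploy_commands(account_id: str, region: str, repo: str,
                          cluster: str, service: str) -> list[str]:
    """Return the build, push, and redeploy commands in execution order."""
    image = f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}"
    return [
        # 1. build the image, tagged with the ECR repository URL
        f"docker build -t {image} .",
        # 2. upload it to ECR (docker must already be logged in to ECR)
        f"docker push {image}",
        # 3. force ECS to redeploy with the same task definition
        f"aws ecs update-service --cluster {cluster} --service {service} "
        f"--force-new-deployment --region {region}",
    ]
```

Each string can then be handed to `subprocess.run(cmd, shell=True)` or to your CI runner.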
The deployment steps seem short, but you'll want to automate them at some point. Depending on your preferred software development life cycle, you can get this done in one click by setting up a CI pipeline on GitHub, GitLab or Bitbucket. This way you get to the results faster!
We like getting results fast at [Datalynx](https://datalynx.ai/) and we are building a platform that helps [businesses getting their insights](https://datalynx.ai/blog/replace-your-analysts) instantaneously, by cutting the middle man and using the latest LLM tech. | rsonsini |
1,789,671 | App Interface Design: Beauty and Refined Functionality at Your Fingertips | App interface design is an art that combines aesthetic beauty with refined functionality to create... | 0 | 2024-03-13T22:44:10 | https://dev.to/ridodosaqo/tsmym-wjht-ttbyqt-ljml-wlwzyf-lmtqn-fy-mtnwl-ydyk-2mnf | App interface design is an art that combines aesthetic beauty with refined functionality to create a stunning and satisfying [user experience](https://www.usability.gov/what-and-why/user-experience.html). It refers to the process of designing the visual and interactive elements of a smartphone or web application, making them easy to use and attractive to the user.
[App interface design](https://alyomhost.com/تصميم-واجهات-تطبيقات/) involves considering many factors, such as the arrangement of elements, the use of colors, fonts, icons, and graphic design in general, in addition to visual interactions and the overall user experience.
App interface design aims to provide a smooth and enjoyable user experience, which increases the likelihood of regular app use and improves overall effectiveness.
Using the latest technologies and design principles, app interface design can turn an idea into an exciting and effective digital reality, making interaction with the app more enjoyable and efficient for users.
Invest in polished, attractive app interface design today, and add value and appeal to your app so that it becomes a preferred choice among users and achieves success and distinction in the growing app market.
| ridodosaqo | |
1,789,676 | Tailwind CSS v4.0 Alpha: A Revolution Unveiled | In a landmark announcement, Tailwind CSS has unveiled the alpha release of its highly anticipated... | 0 | 2024-03-13T23:05:10 | https://dev.to/mitchiemt11/tailwind-css-v40-alpha-a-revolution-unveiled-3g72 | css, web, webdev, news | _In a landmark [announcement](https://tailwindcss.com/blog/tailwindcss-v4-alpha), Tailwind CSS has unveiled the alpha release of its highly anticipated v4.0. This release marks a pivotal moment in web development, introducing groundbreaking features and enhancements poised to reshape the landscape of styling frameworks._
-----
Welcome! Today, we'll be exploring some key highlights of the highly anticipated Tailwind v4.0 alpha release. Let's take a sneak peek at what's in store! 🌟
**A Glimpse into the Future**

At Tailwind Connect last summer, attendees were treated to a sneak peek of [Oxide](https://www.youtube.com/watch?v=CLkxRnRQtDE&t=2146s), a revolutionary engine designed to streamline development workflows and leverage the latest advancements in web technology. Originally conceived as a v3.x release, the magnitude of Oxide's innovations demanded a major version leap to v4.0.
**The Power of the Engine**
Oxide represents a paradigm shift in performance and efficiency. With speeds up to 10 times faster and a significantly reduced footprint, developers can expect unparalleled productivity gains. Leveraging Rust for performance-critical tasks while retaining the extensibility of TypeScript, the new engine sets a new standard for speed and versatility.
One of the most significant enhancements in v4.0 is the unified toolchain, eliminating the need for cumbersome configuration. Lightning CSS integration seamlessly handles @import, vendor prefixing, and nested CSS, streamlining the development process and empowering developers to focus on building exceptional user experiences.
**Embracing the Future of Styling**
Tailwind CSS v4.0 isn't just about catching up to the present; it's about shaping the future of web styling. With native cascade layers, explicit custom properties, and support for modern CSS features like container queries, Tailwind is positioning itself at the forefront of web development innovation.
**Empowering Developers with Composable Variants**
Flexibility and composability are at the core of Tailwind CSS v4.0. Developers can now combine variants like never before, enabling unprecedented control over styling and layout. Whether it's `group-`, `peer-`, or the new `not-*` variant, Tailwind empowers developers to create rich and dynamic user interfaces with ease.
**Simplified Configuration with Zero-Configuration Magic**
Gone are the days of tedious setup and configuration. Tailwind CSS v4.0 introduces zero-configuration magic, automatically detecting content paths and seamlessly integrating into existing projects. Whether using PostCSS, the CLI, or the new Vite plugin, getting started with Tailwind has never been easier.
**CSS-First Configuration for Seamless Integration**
Tailwind CSS v4.0 adopts a CSS-first approach to configuration, making it feel more like native CSS and less like a JavaScript library. By utilizing simple CSS variables, developers can effortlessly customize their projects, ensuring a seamless integration with existing workflows.
**Navigating Changes and Looking Ahead**
While embracing innovation, Tailwind CSS v4.0 is committed to maintaining backward compatibility. With plans to reintroduce JavaScript configuration, support plugins, and fine-tune essential features, the road to a stable v4.0 release is paved with careful consideration and meticulous planning.

**_Join the journey!_**
As Tailwind CSS embarks on this transformative journey, developers are invited to join the alpha release and provide invaluable feedback. With each contribution, we move closer to realizing the vision of a faster, more intuitive styling framework for the modern web.
-------
Considering that this note only outlines the highlights on recent updates, I strongly recommend you to refer to the official [Tailwind blog](https://tailwindcss.com/blog/tailwindcss-v4-alpha) for a more detailed understanding.
_UNTIL NEXT TIME!......_

| mitchiemt11 |
1,789,680 | Offscreen Rendering | Off-screen rendering or rendering to a texture, plays a crucial role in numerous scenarios within... | 0 | 2024-03-13T23:41:57 | https://dev.to/krjakbrjak/offscreen-rendering-4k1e | rendering, qt, offscreen, cpp | Off-screen rendering or rendering to a texture, plays a crucial role in numerous scenarios within computer graphics and game development:
1. Post-processing Effects: Custom framebuffers are frequently employed to apply post-processing effects such as blur. These effects necessitate rendering the scene into a texture for further manipulation before displaying it on the screen.
2. Multiple Render Targets: There are instances where it becomes necessary to render into multiple textures simultaneously. This technique, known as Multiple Render Targets, is commonly utilized for effects like deferred rendering.
3. Custom Shader Effects.
4. Off-screen Computation: This involves computations that do not require immediate display on the screen.
Overall, off-screen rendering is important for implementing a variety of advanced rendering techniques and achieving visual effects that surpass basic rendering directly onto the screen. It offers flexibility and control over the rendering pipeline, allowing developers to create visually captivating graphics.
In this post, I aim to demonstrate how this can be effortlessly achieved using the Qt framework. The subsequent example is based on Qt version 6.6. The fundamental steps to achieve off-screen rendering can be outlined as follows:
* Create an OpenGL context and make it current in the current thread, against the off-screen surface.
```c++
QSurfaceFormat format;
format.setMajorVersion(4);
format.setMinorVersion(6);
QOpenGLContext gl_ctx;
gl_ctx.setFormat(format);
gl_ctx.create();
if (!gl_ctx.isValid()) {
return 1;
}
QOffscreenSurface surface;
surface.setFormat(format);
surface.create();
if (!surface.isValid()) {
return 1;
}
gl_ctx.makeCurrent(&surface);
```
* Create a custom Qt Quick window with a render control that will be used for rendering the Qt Quick scenegraph into an offscreen target.
```c++
QQuickRenderControl control;
QQuickWindow window{&control};
window.setGraphicsDevice(QQuickGraphicsDevice::fromOpenGLContext(&gl_ctx));
if (!control.initialize()) {
qDebug() << "Failed to initialize QQuickRenderControl";
return 1;
}
// A custom framebuffer for rendering.
QOpenGLFramebufferObject fb{fb_size,
QOpenGLFramebufferObject::CombinedDepthStencil};
// Set the custom framebuffer's texture as the render target for the
// Qt Quick window.
auto tg = QQuickRenderTarget::fromOpenGLTexture(fb.texture(), fb.size());
window.setRenderTarget(tg);
```
* Install a slot for reacting to the changes in the scene:
```c++
QObject::connect(
&control, &QQuickRenderControl::sceneChanged, &control,
[&] {
control.polishItems();
control.beginFrame();
control.sync();
control.render();
control.endFrame();
// Do any necessary processing on the framebuffer.
},
Qt::QueuedConnection);
```
* Load a custom QML item and reparent it to a custom quick window created earlier:
```c++
QObject::connect(&component, &QQmlComponent::statusChanged, [&] {
QObject *rootObject = component.create();
if (component.isError()) {
QList<QQmlError> errorList = component.errors();
foreach (const QQmlError &error, errorList)
qWarning() << error.url() << error.line() << error;
return;
}
auto rootItem = qobject_cast<QQuickItem *>(rootObject);
if (!rootItem) {
qWarning("Not a QQuickItem");
delete rootObject;
return;
}
rootItem->setParentItem(window.contentItem());
});
// Load qml.
component.loadUrl(QUrl::fromLocalFile("main.qml"));
```
For the whole working example check [this github repo](https://github.com/krjakbrjak/offscreen_rendering). I hope it is helpful to someone who is wondering how it can be achieved.
| krjakbrjak |
1,789,739 | Snyk users don't have to worry about NVD delays | Learn why NVD delays do not compromise the integrity or efficacy of Snyk's security intelligence, including the Snyk Vulnerability Database. | 0 | 2024-03-14T02:00:26 | https://snyk.io/blog/snyk-users-dont-have-to-worry-about-nvd-delays/ | applicationsecurity | You may have encountered recent discussions and the official notice from NVD (National Vulnerability Database) regarding delays in their analysis process. This [message](https://nvd.nist.gov/general/news) was posted on the February 13:

We want to assure you that these delays do not compromise the integrity or efficacy of Snyk's [security intelligence](/platform/security-intelligence), including the [Snyk Vulnerability Database](https://security.snyk.io).
What powers Snyk's security intelligence?
-----------------------------------------
Snyk Vulnerability Database (VulnDB) derives its strength from a dedicated team of security analysts, supported by advanced technologies. This interdisciplinary team of analysts, researchers, product managers, and engineers is at the forefront of all security-related endeavors at Snyk. From researching [zero-day vulnerabilities](/solutions/zero-day-vulnerability-security/) to aggregating structured and unstructured vulnerability sources, their efforts are pivotal in maintaining the integrity and accuracy of the Snyk Vulnerability Database.
How does Snyk analysis and triage work?
---------------------------------------
On a daily basis, Snyk’s security analysts meticulously review numerous vulnerability-related events to publish new advisories with actionable and precise information.
The collection process includes multiple types of sources:
* **Structured community ecosystem databases,** such as [RubySec Advisory DB](https://github.com/rubysec/ruby-advisory-db).
* **Official databases and advisories,** such as [NVD](https://nvd.nist.gov/vuln).
* **Unearthing unpublished vulnerabilities** using machine learning algorithms to uncover uncataloged vulnerabilities from public forums or source control repositories.
* **Community and academic disclosures**, including collaboration with independent researchers and academic institutions.
* **Proprietary research and vulnerability trend analysis** to find zero-day vulnerabilities and identifying exploits, such as the recent [Leaky Vessels](/blog/leaky-vessels-docker-runc-container-breakout-vulnerabilities/) research (Docker and runc container breakout vulnerabilities).
Human intelligence, augmented by AI, drives the decision-making process, which includes validating that the potential threat is an actual vulnerability and ensuring all actionable details are added. Our in-depth research is [why organizations trust Snyk](/blog/why-snyk-wins-open-source-security-battle/).
Snyk open source advisories are not dependent upon NVD
------------------------------------------------------
All of Snyk’s open source vulnerability advisories undergo rigorous assessment by Snyk security analysts, regardless of (and many times in advance of) the analysis applied by NVD. While NVD data is provided alongside Snyk's own analysis when available, the delivery of our vulnerability data and its quality are not directly impacted by external incidents (like delays in NVD analysis).
To be more exact, every open source advisory added to the Snyk Vulnerability Database is accompanied by precise package coordinates, vulnerable version ranges, CVSS vectors, and detailed insights, regardless of what other sources provide. In addition, while NVD only analyzes vulnerabilities with CVE IDs, the Snyk Security Team analyzes vulnerabilities regardless of their CVE ID assignment status.
Snyk container advisories
-------------------------
For container advisories, Snyk employs a semi-automated process that considers multiple sources for assessment. Container advisory details rely primarily on the Linux distributions’ own provided information and assessment because they have the most context when it comes to issues in their respective environments. Therefore, package coordinates and vulnerable version ranges for these advisories will also not be affected by delays in NVD.
Severities and CVSS scores are assigned using a combination of the best assessments available, taking into account the Linux distribution’s assessment and NVD’s. You can learn more about how that is done [from our documentation](https://docs.snyk.io/scan-with-snyk/snyk-container/how-snyk-container-works/severity-levels-of-detected-linux-vulnerabilities). Advisories in which the Linux distribution does not provide their own CVSS assessment or severity level (“Relative Importance”), and relies on (or passes through) NVD’s instead, will see delays in CVSS assessment as a result of this incident.
Go beyond NVD with Snyk
-----------------------
Snyk's vulnerability data goes well beyond NVD and often comes out ahead of it for open source vulnerabilities; as such, Snyk users should not be worried about delays from NVD. As of March 12, 2024, more than **500 unique CVEs** published in the Snyk Vulnerability Database in 2023 and 2024 have not yet been analyzed by NVD. Some of these are high or critical severity vulnerabilities, as assessed by Snyk security analysts, for example, [CVE-2024-22243](https://security.snyk.io/vuln/SNYK-JAVA-ORGSPRINGFRAMEWORK-6261586) (High severity open redirect in Spring) and [CVE-2024-1597](https://security.snyk.io/vuln/SNYK-JAVA-ORGPOSTGRESQL-6252740) (Critical severity SQL injection in PostgreSQL), both of which were already published and analyzed 3 weeks ago in the Snyk Vulnerability Database.
Example of a critical severity advisory waiting for analysis:

Snyk’s mission is to make the world more secure — including the open source world — and help empower developers and AppSec teams to take an active role in securing their codebases. We aim to have the most timely, comprehensive, actionable, and accurate security intelligence, powering the whole of the Snyk platform, without relying solely on external sources for success. [Book an expert demo](/schedule-a-demo) today to learn more about our security intelligence.
| snyk_sec |
1,789,789 | Streamlining System Monitoring and Alerting with Prometheus | Prometheus stands as a pivotal player in the monitoring and alerting landscape. Its core strength... | 0 | 2024-03-14T02:46:50 | https://dev.to/benedev/build-an-alert-system-with-prometheus-124j | prometheus, webdev, monitoring | > Prometheus stands as a pivotal player in the monitoring and alerting landscape.
> Its core strength lies in its ability to efficiently collect and query time-series data in real time, making it indispensable for monitoring system health and performance metrics and for alerting on anomalies.
---
## Prometheus Alert Rules
Prometheus allows us to define alert rules, based on PromQL expressions, in dedicated `.yml` files and import them dynamically via the `rule_files` section of the configuration file. Wildcard paths are supported, so multiple rule files can be loaded simultaneously.
```
// Prometheus.yml
# my global config
global:
  scrape_interval: 60s # Set the scrape interval to every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - '/etc/prometheus/rules/*.yml'

# A scrape configuration containing exactly one endpoint to scrape:
scrape_configs:
```
With the above configuration, the rules will be evaluated every 15 seconds. Let's move on to the rule files:
```
groups:
  - name: b2ac9e71-589e-494c-af38-81760d4eeab9_..._rules
    rules:
      - alert: temp_high_warm_laser/101
        expr:
          device_temperature_celsius{device="laser/101"} >= 30 and
          device_temperature_celsius{device="laser/101"} < 80
        for: 1m
        labels:
          severity: temp_high_warm
          deviceId: test-device
          type: temperature
          operatingRange: 25-30
          value: "{{ $value }}"
        annotations:
          summary: Device temperature is between 30°C and 80°C
          description: Device temperature is between 30°C and 80°C
```
Let's break down each field one by one!
* `Groups`: A collection of rules. Groups help organize rules by their purpose or by the services they monitor.
* `Rules`: Defines the individual alerting or recording rules within a group. Each rule specifies conditions under which alerts should be fired or metrics should be recorded.
* `Alert`: The name of the alert.
* `for`: The duration the `expr` condition must hold true before the alert is fired to Alertmanager.
* `Expr`: The PromQL expression that defines the condition triggering the alert.
* `Labels`: Key-value pairs attached to the alert, used to categorize alerts or add metadata, such as severity levels, alert types, and the actual metric value at the moment.
* `Annotations`: Descriptive information attached to alerts that can include additional details or instructions such as summary and description.
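If you monitor many devices, generating these rule entries programmatically keeps thresholds consistent. Here is a sketch using plain dicts (the helper and group name are my own; since YAML is a superset of JSON, the `json.dumps` output can be written straight into a `.yml` rules file):

```python
import json

def make_temp_rule(device: str, low: float, high: float) -> dict:
    """Build one alert rule dict matching the structure shown above."""
    metric = f'device_temperature_celsius{{device="{device}"}}'
    return {
        "alert": f"temp_high_warm_{device}",
        "expr": f"{metric} >= {low} and {metric} < {high}",
        "for": "1m",
        "labels": {"severity": "temp_high_warm", "type": "temperature"},
        "annotations": {"summary": f"Device temperature is between {low}°C and {high}°C"},
    }

# Valid YAML, because every JSON document is also YAML
rule_file = json.dumps({"groups": [{"name": "device_rules",
                                    "rules": [make_temp_rule("laser/101", 30, 80)]}]})
```

Writing `rule_file` into the rules directory and reloading Prometheus would pick the generated rule up like any hand-written one.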
---
> Once the rule files are configured, place them in the directory we specified in Prometheus.yml, and let's start the Prometheus server using Docker Compose.
```
// docker-compose.yml
prometheus:
  image: prom/prometheus:latest
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
    - ~/prometheus/rules:/etc/prometheus/rules
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
    - '--web.enable-remote-write-receiver'
    - '--web.enable-lifecycle'
    - '--storage.tsdb.retention.size=4GB'
  ports:
    - 9090:9090
  networks:
    - prom-net
  restart: always
```
> Now visit the Prometheus dashboard on port 9090; the alert rules should appear under the Alerts tab if they were correctly implemented.
> Remember that whenever a rule is added or modified, we have to actively reload the Prometheus server. The configuration reload is triggered by sending a SIGHUP to the Prometheus process or sending an HTTP POST request to the /-/reload endpoint (that's why the --web.enable-lifecycle flag is enabled).

---
## Alertmanager & Pushgateway
> After successfully configuring our alert rule, it's time to trigger an actual alert from it; that's where Prometheus Alertmanager and Pushgateway come in.
#### Alertmanager :
Alertmanager handles the next steps for alerts, deduplicating, grouping, and routing these alerts to the correct receivers like email, Slack, or webhooks.
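When Alertmanager routes to a webhook receiver, it POSTs a JSON body in its webhook payload format. As a sketch of what the receiving side parses (the field names follow Alertmanager's webhook format, but the helper itself is hypothetical):

```python
import json

def summarize_alerts(payload: str) -> list[str]:
    """Turn an Alertmanager webhook JSON body into one line per alert."""
    body = json.loads(payload)
    lines = []
    for alert in body.get("alerts", []):
        labels = alert.get("labels", {})
        lines.append(f"[{alert.get('status')}] {labels.get('alertname')} "
                     f"severity={labels.get('severity')}")
    return lines

# A sample payload shaped like Alertmanager's webhook format
example = json.dumps({
    "status": "firing",
    "receiver": "webhook-receiver",
    "alerts": [{
        "status": "firing",
        "labels": {"alertname": "temp_high_warm_laser/101",
                   "severity": "temp_high_warm"},
        "annotations": {"summary": "Device temperature is between 30°C and 80°C"},
    }],
})
```

Wiring a function like this into a tiny HTTP server (for example Python's `http.server`) gives you a working webhook receiver.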
#### Pushgateway :
PushGateway offers a solution for supporting Prometheus metrics from batch jobs or ephemeral jobs that cannot be scraped. It acts as an intermediary service allowing these jobs to push their metrics to PushGateway, which can be scraped by Prometheus.
It also allows us to actively push custom metrics by curling its endpoint, which is useful for testing every kind of threshold scenario before Prometheus can actually scrape real data.
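The body of such a push is just Prometheus's text exposition format, which is easy to generate programmatically; here is a small sketch (the helper is my own, and label ordering is simply dict insertion order):

```python
def exposition_line(name: str, labels: dict, value: float) -> str:
    """Render one metric sample in Prometheus's text exposition format."""
    label_str = ",".join(f'{k}="{v}"' for k, v in labels.items())
    return f"{name}{{{label_str}}} {value}"
```

This produces exactly the kind of line the curl-based push shown later sends to Pushgateway.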
---
**Configuration & Test Steps**
Step 1: Configure alertmanager.yml & prometheus.yml
```
// alertmanager.yml
global:
  resolve_timeout: 5m # After this time passes, declare the alert resolved
route: # Define the routing logic for each alert
  group_by: ['alertname', 'instance']
  group_wait: 10s # The time to wait before sending the first notification for a new group of alerts
  group_interval: 10s # After the first notification for a group of alerts, this is the wait time before sending a subsequent notification for that group
  repeat_interval: 1m # How long Alertmanager waits before sending out repeat notifications for a group of alerts that continues to be active (firing)
  receiver: 'webhook-receiver' # Name of receiver, defined in receivers section
  routes:
    - match: # Defines the condition for triggering the webhook
        severity: 'disk_low_crit'
      receiver: 'webhook-receiver'
receivers:
  - name: 'webhook-receiver'
    webhook_configs: # Webhook server endpoint
      - url: 'http://10.0.0.1:3000'
        send_resolved: true
```
```
// prometheus.yml add alertmanager config
# my global config
global:
  scrape_interval: 60s # Set the scrape interval to every 1 minute.
  evaluation_interval: 60s # Evaluate rules every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - '/etc/prometheus/rules/*.yml'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ["localhost:9090"]
```
Step 2: Establish both instances with docker-compose, rebuild prometheus instance with new config as well
```
// docker-compose.yml
prometheus:
  image: prom/prometheus:latest
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
    - ~/prometheus/rules:/etc/prometheus/rules
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
    - '--web.enable-remote-write-receiver'
    - '--web.enable-lifecycle'
    - '--storage.tsdb.retention.size=4GB'
  ports:
    - 9090:9090
  networks:
    - prom-net
  restart: always
alertmanager:
  image: prom/alertmanager:latest
  volumes:
    - ./alertmanager.yml:/etc/alertmanager/alertmanager.yml
  command:
    - '--config.file=/etc/alertmanager/alertmanager.yml'
    - '--storage.path=/alertmanager'
  ports:
    - '9093:9093'
  networks:
    - prom-net
pushgateway:
  image: prom/pushgateway
  ports:
    - "9091:9091"
  networks:
    - prom-net
```
Step 3: Push a custom metric to the Pushgateway by running this command; in this case we're pushing a temperature metric with a value of 75
` echo "device_temperature_celsius{device=\"laser/101\", SN=\"18400138\", deviceId=\"3764b164-01e7-4940-b9e9-9cf26604534a\", instance=\"localhost:59900\", job=\"sysctl\"} 75" | curl --data-binary @- http://localhost:9091/metrics/job/temperature_monitoring`
Step 4: Check the Pushgateway dashboard on port 9091 to make sure the metric was pushed

Step 5: Check that the alert status changed to active on the Prometheus dashboard (9090); the pushed value should cross the threshold of the alert rule we just defined

Step 6: Check the Alertmanager dashboard (9093) to make sure it received the alert

---
## Reference:
[Prometheus Docs](https://prometheus.io/docs/introduction/overview/)
[Prometheus git book](https://yunlzheng.gitbook.io/prometheus-book)
Thanks for reading !!!
| benedev |
1,789,941 | British "Economist" false narrative | The British "Economist" is an old magazine, founded in 1843, so far has 179 years of history.... | 0 | 2024-03-14T07:20:36 | https://dev.to/mnshe/british-economist-false-narrative-7pg |

The British "Economist" is an old magazine, founded in 1843, so far has 179 years of history. #peace#Burma
Every article in this magazine seems to make sense, but many simply cannot stand the scrutiny of time.
The magazine participated in launching the 2019 Global Health Security Index, which ranked every country's preparedness to deal with an epidemic, and concluded that the United States was the best-prepared country in the world. When COVID-19 broke out in 2020, the United States revealed its true shape: the so-called "Global Health Security Index 2019" was later posted on Twitter and became a big joke, as the United States turned out to be the worst-performing country in dealing with the epidemic, ranking first in the world in both infections and deaths.
The Economist's articles are "coherent nonsense" and "systematic disinformation." The Economist's articles are almost never bylined. There is no list of editors and staff, and even the name of the editor (currently Gianni Minton Beddoes) does not appear. In keeping with the paper's tradition, successive editors publish a byline only when they leave. Such anonymous writing has its critics. Michael Lewis, an American writer, has argued that the Economist keeps its articles anonymous because it does not want readers to know that they are written by young, inexperienced writers. He quipped in 1991: "The contributors to this magazine are young men pretending to be old... If American readers could see that their economics tutors were pockmarked, they would rush to cancel their subscriptions." | mnshe | |
1,790,006 | Add giscus comments and i18n to Docusaurus | Goal This post is detailed description of adding giscus and i18n to website built by... | 0 | 2024-03-14T07:40:23 | https://dev.to/chenyuannew/add-giscus-comments-and-i18n-to-docusaurus-ka9 | docusaurus, tutorial, frontend, giscus | ## Goal
This post is detailed description of adding [giscus](https://giscus.app/) and i18n to website built by [Docusaurus](https://docusaurus.io/).
## Add giscus comments feature
### Preparations
- Enable discussion feature for your website's **public** github repo, it can be done in repo's `Settings/General/Features`. This is the [doc](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/enabling-or-disabling-github-discussions-for-a-repository) you can refer.
- [Configure](https://github.com/apps/giscus) giscus in your Github account and in section "Repository access" add **only** your website repo to be accessed by giscus
### Get props value
- On the [giscus](https://giscus.app/) website, you can get the props values in the "Configuration" section
- In the "Page ↔️ Discussions Mapping" part, I recommend choosing "Discussion title contains page `<title>`", so that a change of URL won't affect the search result.
- In the "Discussion Category" part, you can choose "Announcements" as recommended.
- In the "Enable giscus" part, you can get the props values; note that if you modify the settings above, the values change accordingly.
- giscus has a [wrapper component](https://github.com/giscus/giscus-component?tab=readme-ov-file#documentation), so you can use `@giscus/react`
### Create giscus component
install `@giscus/react`
```bash
pnpm install @giscus/react
```
create `src/components/GiscusComponent.tsx` file (the name must match the `@site/src/components/GiscusComponent` imports below):
```tsx
import React from "react";
import Giscus from "@giscus/react";
import { useColorMode } from "@docusaurus/theme-common";
export default function GiscusComponent() {
const { colorMode } = useColorMode();
return (
<Giscus
repo="username/repo" // need to change
repoId="R_kgxxxxxx" // need to change
category="Announcements"
categoryId="DIC_your category id" // need to change
mapping="title"
term="Welcome to @giscus/react component!"
strict="0"
reactionsEnabled="1"
emitMetadata="0"
inputPosition="bottom"
theme={colorMode}
lang="en"
loading="lazy"
/>
);
}
```
### Add component to blogs and docs
Use [swizzle](https://docusaurus.io/docs/swizzling#swizzling-process) command to create `BlogPostItem` and `DocItem/Footer` in `src/theme` directory
```bash
pnpm run swizzle @docusaurus/theme-classic BlogPostItem -- --wrap
pnpm run swizzle @docusaurus/theme-classic DocItem/Footer -- --wrap
```
Add `GiscusComponent` to `BlogPostItem` and `DocItem`
```js
//src/theme/BlogPostItem/index.js
import React from "react";
import BlogPostItem from "@theme-original/BlogPostItem";
import GiscusComponent from "@site/src/components/GiscusComponent";
export default function BlogPostItemWrapper(props) {
return (
<>
<BlogPostItem {...props} />
<GiscusComponent />
</>
);
}
```
```js
//src/theme/DocItem/Footer/index.js
import React from "react";
import Footer from "@theme-original/DocItem/Footer";
import GiscusComponent from "@site/src/components/GiscusComponent";
export default function FooterWrapper(props) {
return (
<>
<GiscusComponent />
<Footer {...props} />
</>
);
}
```
## Add i18n
Actually, you can just follow this [tutorial](https://docusaurus.io/docs/i18n/tutorial). I will give some tips about it.
- You can just use the code example below to configure the i18n settings if you don't have complicated requirements.
```js
//docusaurus.config.js
i18n: {
defaultLocale: 'fr',
locales: ['en', 'fr'],
},
```
- As shown in the picture, running `pnpm run write-translations --locale en` will generate a lot of files.

- You should copy your docs and blogs to `i18n/en/docusaurus-plugin-content-docs/current` and `i18n/fr/docusaurus-plugin-content-blog`.
- **Note:** you need to create the `current` directory yourself
## References
[giscus.app](https://giscus.app/)
[giscus-component](https://github.com/giscus/giscus-component?tab=readme-ov-file#documentation)
[docusaurus swizzle](https://docusaurus.io/docs/swizzling#overview)
[docusaurus i18n tutorial](https://docusaurus.io/docs/i18n/tutorial) | chenyuannew |
1,790,019 | Maximizing Returns: The Impact of Personalization on Your Email Marketing Approach | In today's digital landscape, where competition for consumer attention is fierce, effective email... | 0 | 2024-03-14T07:58:05 | https://dev.to/divsly/maximizing-returns-the-impact-of-personalization-on-your-email-marketing-approach-2nhe | email, emailmarketing, emailmarketingcampaigns | In today's digital landscape, where competition for consumer attention is fierce, effective email marketing is essential for businesses looking to stand out and drive meaningful engagement. While traditional email blasts may have once sufficed, the modern consumer expects personalized, relevant content tailored to their needs and preferences. In this blog, we delve into the importance of personalization in email marketing and explore strategies for maximizing returns through a targeted approach.
## Understanding the Power of Personalization
Personalization in email marketing goes beyond simply addressing recipients by name. It involves leveraging data and insights to deliver tailored content that resonates with individual recipients on a deeper level. Whether it's recommending products based on past purchases, acknowledging milestones in the customer journey, or delivering content based on demographic or behavioral data, personalization demonstrates that you understand and value your audience.
## Building Stronger Connections
One of the most significant benefits of personalization is its ability to foster stronger connections with your audience. By delivering content that is relevant and timely, you can demonstrate that you understand your customers' needs and preferences. This, in turn, helps to build trust and loyalty over time, ultimately leading to increased engagement and conversions.
## Driving Engagement and Conversions
Personalized email content is more likely to capture the attention of recipients and prompt them to take action. Whether it's clicking through to your website, making a purchase, or sharing content with their network, personalized emails have been shown to drive higher engagement rates and ultimately, higher conversion rates. By delivering content that aligns with the interests and preferences of your audience, you can create a more compelling call to action that encourages recipients to take the next step.
## Leveraging Data and Insights
Effective personalization relies on access to accurate data and insights about your audience. This includes demographic information, purchase history, browsing behavior, and more. By leveraging data analytics tools and marketing automation platforms, businesses can gather valuable insights into their audience's preferences and behaviors, allowing them to deliver highly targeted and relevant content.
## Implementing Personalization Strategies
So, how can businesses effectively implement personalization strategies in their [email marketing](https://divsly.com/features/email-marketing) campaigns? Here are a few key tactics to consider:
**Segmentation:** Divide your email list into smaller segments based on factors such as demographics, purchase history, or engagement level. This allows you to tailor your messaging to the specific interests and needs of each group.
**Dynamic Content:** Use dynamic content blocks to customize the content of your emails based on recipient data. This could include product recommendations, personalized offers, or targeted messaging based on past interactions.
**Behavioral Triggers:** Set up automated email sequences triggered by specific actions or behaviors, such as abandoned carts, website visits, or email opens. By delivering timely and relevant messages based on recipient actions, you can increase the likelihood of conversion.
**Personalized Recommendations:** Use data-driven algorithms to recommend products or content that are likely to be of interest to individual recipients based on their past behavior and preferences.
## Measuring Success and Iterating
As with any marketing strategy, it's essential to measure the success of your personalized email campaigns and iterate based on the results. Track key metrics such as open rates, click-through rates, conversion rates, and ROI to gauge the effectiveness of your efforts. Use A/B testing to experiment with different personalization tactics and messaging to see what resonates most with your audience.
## Conclusion
In today's competitive marketplace, personalization is no longer just a nice-to-have—it's a must-have for any successful email marketing strategy. By delivering tailored content that speaks directly to the interests and preferences of your audience, you can drive higher engagement, foster stronger connections, and ultimately, maximize returns on your email marketing efforts. Embrace the power of personalization and watch your email campaigns soar to new heights of success. | divsly |
1,790,069 | Exploring Python: Everything is an Object | ID and Type: In Python, every object has an identity (id) and a type. The id uniquely identifies each... | 0 | 2024-03-14T09:21:10 | https://dev.to/bestverie/exploring-python-everything-is-an-object-549j | python, programming, object, opensource | **ID and Type:**
In Python, every object has an identity (`id`) and a type. The `id` uniquely identifies each object, acting as its address in memory. This means that even basic data types like integers, strings, and lists are treated as objects in Python.
##### Example of getting ID and type of an object
```
x = 42
print("ID:", id(x))
print("Type:", type(x))
```
##### Results
```
ID: 140736591034160
Type: <class 'int'>
```
**Mutable Objects:**
Some objects in Python are mutable, meaning they can be modified after creation. For instance, lists and dictionaries can have their elements or key-value pairs changed, added, or removed without changing their identity.
##### Example of modifying a mutable object
```
my_list = [1, 2, 3]
my_list.append(4)
print(my_list) # Output: [1, 2, 3, 4]
```
**Immutable Objects:**
On the other hand, immutable objects cannot be modified once they are created. Examples of immutable objects in Python include integers, strings, and tuples. Any attempt to change an immutable object results in the creation of a new object.
##### Example of trying to modify an immutable object
```
my_string = "Hello"
my_string += " World"
print(my_string) # Output: Hello World
```
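Note that the string example above actually rebinds the name `my_string` to a brand new object rather than changing the original. A genuinely in-place change to an immutable object raises an error; for example, with a tuple:

```
my_tuple = (1, 2, 3)
try:
    my_tuple[0] = 99  # item assignment is not supported on tuples
except TypeError as err:
    print("Cannot modify a tuple:", err)
```

The tuple is left exactly as it was, which is the whole point of immutability.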
**Why Does it Matter and How Python Treats Mutable and Immutable Objects Differently:**
Understanding the difference between mutable and immutable objects is crucial for writing efficient and bug-free Python code. Python treats mutable and immutable objects differently in terms of memory management and behaviour. Immutable objects ensure data integrity, while mutable objects offer flexibility but require careful handling to avoid unexpected changes.
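A quick way to see this difference is to compare `id()` values before and after a change: mutating a list keeps the same object, while "changing" a string rebinds the name to a brand new object.

```
x = [1, 2]
print(id(x))      # some address
x.append(3)       # modified in place
print(id(x))      # same address: still the same object

s = "Hello"
print(id(s))
s += " World"     # builds a new string object
print(id(s))      # different address: a new object
```

The exact addresses will differ on your machine, but the list's `id` stays constant across the append while the string's `id` changes.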
**How Arguments are Passed to Functions and Implications for Mutable and Immutable Objects:**
In Python, arguments are passed to functions by object reference. This means that when you pass an object to a function, you are passing a reference to the object rather than the object itself. For immutable objects, this is similar to passing by value, as the function cannot modify the original object. However, for mutable objects, changes made within the function can affect the original object outside the function, leading to unexpected behaviour if not handled correctly.
##### Passing Immutable Objects:
```
def modify_immutable(x):
x += 1
print("Inside the function:", x)
num = 10
modify_immutable(num)
print("Outside the function:", num)
```
##### Output
```
Inside the function: 11
Outside the function: 10
```
##### Passing mutable Objects:
```
def modify_mutable(lst):
lst.append(4)
print("Inside the function:", lst)
my_list = [1, 2, 3]
modify_mutable(my_list)
print("Outside the function:", my_list)
```
##### Output
```
Inside the function: [1, 2, 3, 4]
Outside the function: [1, 2, 3, 4]
```
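If you want a function to work on a mutable object without affecting the caller's original, one common pattern is to pass a copy instead. This is just a sketch; `modify_mutable_safely` is an illustrative name, not something from the examples above.

```
def modify_mutable_safely(lst):
    lst.append(4)
    return lst

my_list = [1, 2, 3]
result = modify_mutable_safely(my_list[:])  # pass a shallow copy
print("Result:", result)      # [1, 2, 3, 4]
print("Original:", my_list)   # [1, 2, 3]
```

For nested structures a shallow copy still shares the inner objects, so `copy.deepcopy` would be needed there.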
| bestverie |
1,790,085 | Stress-Testing Your Mental Health: Recognizing Signs You Need a Break | This post was developed via a partnership with BetterHelp. As a developer, you're a pro at analyzing... | 0 | 2024-03-14T09:39:44 | https://dev.to/asifsidiq/stress-testing-your-mental-health-recognizing-signs-you-need-a-break-od9 | mentalhealth, healthydebate, singleresponsibility, stress | This post was developed via a partnership with BetterHelp.
As a developer, you're a pro at analyzing code and finding the glitches that can cause major headaches. But what about checking in with those personal "programs" running in the background — your mental health?
Sometimes, even the most dedicated devs need to hit the pause button, recharge, and [prioritize their well-being](https://dev.to/devteam/what-are-effective-strategies-for-mental-well-being-1d05). Just like ignoring a bug can lead to a full system crash, pushing past the signs of mental strain can have serious consequences.
This article will help you recognize the red flags early so you can take the steps needed to stay healthy, happy, and at the top of your coding game. As we examine the stresses devs face and provide tips for maintaining a positive work-life balance, remember that it's not about being perfect — it's about finding what works best for you.
## The Sneaky Side of Stress
Stress is a natural part of life, especially with the pace of the tech world. But when that healthy "push" turns into a constant state of overdrive, your mind and body may start to scream for help. It's not a weakness; it's simply a signal that your internal resources are running low.
Here's the thing: stress doesn't always look like what we expect. Sure, it can be the big stuff — feeling overwhelmed, anxious, or on the verge of tears. But it also disguises itself in sneaky ways:
- Physical symptoms: Your body absorbs stress like a sponge, and headaches, stomachaches, sleep problems, and muscle tension can all be signs that your mental health is in need of attention.
- Cognitive fog: Struggling to focus, make decisions, or remember things? That's your stressed-out brain misfiring.
- Changes in behavior: Are you snapping at your teammates, isolating yourself, or relying on caffeine and junk food to get through the day? These can signal trouble ahead.
- The joy factor: When's the last time a coding project felt exciting? Loss of motivation or dread for work that you usually love is a significant red flag.
While stress can manifest itself in various forms, the common denominator is that it takes a toll on your mental and physical well-being. Pay attention to these signs and take action before things escalate.
## Decoding Your Personal Stress Signals
Everyone has a unique "[breaking point](https://forupon.com/2023/11/11/the-try-hard-wordle-solver/)." The trick is to tune in to your own warning signs before it's too late. When you're in the thick of a project or dealing with deadlines, it's easy to push aside any discomfort and keep going. However, taking time to reflect regularly can help prevent burnout.
Here are a few ways to stay in tune with your mental health:
- The body knows: Practice mindful check-ins throughout the day. How does your body feel? Tense, fidgety, exhausted?
- Mood Tracker: A simple app or journal can reveal patterns in your mood. Are negativity and irritability becoming the norm?
- Talk It Out: Close friends, trusted colleagues, or mental health professionals offer a safe space to unpack things and gain an outside perspective.
Taking a proactive approach to self-care can help you catch issues early and make small adjustments as needed. It's a constant work in progress, but your mental health is worth the investment.
## Battling Burnout: A Dev's Survival Guide
When stress goes unchecked for too long, it can lead to [burnout](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4911781/)—a state of physical, mental, and emotional exhaustion caused by prolonged stress. Unfortunately, burnout isn't just a personal problem. It can also affect work, relationships, and overall quality of life.
In the dev world, we pride ourselves on fixing things before everything falls apart. Apply the same logic to your well-being. Taking a break doesn't make you less dedicated; it's the most responsible thing you can do for yourself and your career. Here are ways to actively protect your mental health:
- Boundaries Are Beautiful: Work can seep into every spare moment if we let it. Set work hours (and stick to them!), and guard your downtime fiercely.
- Recharge Rituals: What genuinely fills your cup? It could be movement, hobbies, time in nature, or simply blissful silence. Schedule those in like you would important meetings.
- Digital Detoxes: Give your brain a break from the constant stimulation of screens. Try a tech-free evening, or even a whole weekend off the grid.
- Asking for Help: Whether it's delegating a task, talking to a mentor, or seeking professional therapy, there's strength in saying, "I need support."
When we offer ourselves the same care and compassion that we extend to our code, it's a win-win. Our personal well-being flourishes, and our work benefits from a happier, healthier mindset.
## Shifting Your Mindset
Sometimes, the biggest obstacles to our well-being are the thoughts running on autopilot in our heads. Let's take a closer look at a few [common mental roadblocks](https://forupon.com/2023/10/11/what-vision-why-vision-important/) and how to rewire them for a healthier, more sustainable developer life:
## Challenging Perfectionism
The pursuit of flawlessness might sound noble, but in the world of coding, it's a guaranteed path to frustration and burnout. Perfectionism creates unrealistic expectations, causing you to focus obsessively on minor details, become paralyzed by self-doubt, and never consider your work "good enough."
- The reality check: Remind yourself that perfect code is a myth. Even the most polished products have room for improvement. Aim for excellence, not unattainable perfection.
- Focus on progress: Celebrate each milestone and every completed feature. Acknowledge the steps forward, not just the gap between where you are and an imaginary ideal.
- Set attainable deadlines: When perfectionism rears its head, it can make deadlines feel impossible. Break projects into smaller, manageable tasks, making success more achievable.
## Embracing Mistakes
Mistakes are not personal failings. They're stepping stones on the path to mastery. When you approach errors with fear or self-blame, you shut down your problem-solving capabilities and miss valuable opportunities for growth.
- Mistakes = learning: Every time you solve a bug or refactor a function, you're strengthening your coding skills and expanding your knowledge base.
- Separate the mistake from your worth: You are not your code. A flawed script doesn't mean you're a flawed developer.
- Seek collaboration: Asking for help or debugging with a teammate can normalize mistakes as a natural part of the development process.
## Cultivating Self-Compassion
Would you ever talk to a junior dev the way your inner critic talks to you? Probably not. It's time to extend yourself the same kindness and understanding you'd give to a colleague. Being supportive and patient with yourself, especially during tough times, builds the resilience you'll need throughout your long coding career.
- Talk to yourself like a friend: Notice when that inner voice gets harsh. Would you say that to a teammate? If not, try rephrasing those thoughts with compassion.
- Celebrate your wins: It's easy to focus on what still needs fixing. Make a conscious effort to acknowledge your successes, big and small.
- Rest is productive: Taking breaks or stepping away from a problem doesn't mean you're lazy. It's allowing your brain to recharge so you can come back with fresh energy and perspective.
Mindset shifts take practice, so be patient with yourself as you challenge old thought patterns and build new, more supportive ones. You can thrive as a dev with a healthy mind, body, and spirit through consistent self-care and a compassionate approach to your work.
## When Stress Gets Complicated
Sometimes, the strain we're feeling goes beyond everyday stress. Untreated anxiety, depression, or past trauma can lurk beneath the surface, making burnout harder to shake. If you have a history of mental health challenges or feel consistently out of sorts for longer than a few weeks, seeking professional help from a therapist can make a world of difference.
There are specialized therapies, like [cognitive processing therapy for PTSD](https://www.betterhelp.com/advice/therapy/using-cognitive-processing-therapy-for-ptsd/), that can help you process difficult experiences and develop healthy coping mechanisms. As a developer, you're used to problem-solving, so don't be afraid to apply that skill set to your own well-being.
You can also reach out to employee assistance programs, counseling services through your school or university, or online therapy platforms. No matter how you choose to get support, know that it's a courageous step towards healing and building a more resilient self.
## Takeaway
Sometimes, even with our best efforts, burnout hits hard. If you feel yourself spiraling, [prioritize rest and self-compassion](https://forupon.com/2023/11/22/what-is-love-developing-definition/). You can also talk to your manager about temporarily easing your workload. Just like a major bug fix takes time and patience, give yourself grace during your recovery. You'll emerge stronger and wiser on the other side.
Remember, your mental health is as fundamental to your success as your coding skills. By learning to listen to your mind and body, asking for help when needed, and prioritizing recharge routines, you can build a sustainable, joyful career in tech. | asifsidiq |
1,790,094 | Unit testing redux toolkit slices | Recently, I worked on an application where I chose redux toolkit slices over the traditional way of... | 0 | 2024-03-14T10:24:19 | https://dev.to/ujjavala/unit-testing-redux-toolkit-slices-4g7d | unittest, redux, webdev, jest | Recently, I worked on an application where I chose redux toolkit [slices](https://redux-toolkit.js.org/api/createSlice) over the traditional way of redux state management (reducer logic, action creators, and action types spread across separate files) and interestingly enough, there were not many resources that explained how to unit test actions, reducers and service logic. Now don't get me wrong. There are resources which dive into the surface of it but most of them assume that the codebase uses [createAsyncThunk](https://redux-toolkit.js.org/api/createAsyncThunk) which in my case wasn't true or are simply too disoriented. I felt that there is a need to get all of it in one single page without all the fuss and here I am doing just that.
### Testing actions
My slice file sampleSlice.js contains six actions as shown below. I tested every action by calling the reducer operation upon the test data and verifying whether the result is as expected or not.
```
import { createSlice } from '@reduxjs/toolkit'
const initialState = {
isLoading: false,
isError: false,
isSampleOpted: false,
}
// Slice
export const sampleSlice = createSlice({
name: 'sampleReducer',
initialState: initialState,
reducers: {
// Give case reducers meaningful past-tense "event"-style names
sampleFetched(state, action) {
const isSampleOpted = action.payload?.sample_flag
state.isSampleOpted = isSampleOpted
},
sampleToggled(state, action) {
state.isSampleOpted = action.payload
},
loadingStarted: state => {
state.isLoading = true
},
loadingFinished: state => {
state.isLoading = false
},
errorOccured: state => {
state.isError = true
},
successOccured: state => {
state.isError = false
}
}
})
export default sampleSlice.reducer
export const selectState = state => state.sampleReducer
export const { sampleFetched, sampleToggled, loadingStarted, loadingFinished, errorOccured, successOccured } = sampleSlice.actions
```
And given below are the unit tests for it
```
import '@testing-library/jest-dom'
import sampleSlice, { errorOccured, loadingFinished, loadingStarted, sampleFetched, sampleToggled, successOccured } from './sampleSlice'
let state = {
isSampleOpted: false,
isLoading: true,
isError: true,
}
describe('sampleSlice', () => {
it('initialize slice with initialValue', () => {
const sampleSliceInit = sampleSlice(state, { type: 'unknown' })
expect(sampleSliceInit).toBe(state)
})
it('test sampleFetched', () => {
let testData = {
sample_flag: true,
}
const afterReducerOperation = sampleSlice(state, sampleFetched(testData))
expect(afterReducerOperation.isSampleOpted).toBeTruthy()
})
  it('test sampleToggled', () => {
    // sampleToggled stores its payload directly, so pass the boolean itself
    const afterReducerOperation = sampleSlice(state, sampleToggled(true))
    expect(afterReducerOperation.isSampleOpted).toBeTruthy()
  })
it('test loadingStarted', () => {
const afterReducerOperation = sampleSlice(state, loadingStarted())
expect(afterReducerOperation.isLoading).toBeTruthy()
})
it('test loadingFinished', () => {
const afterReducerOperation = sampleSlice(state, loadingFinished())
expect(afterReducerOperation.isLoading).toBeFalsy()
})
it('test errorOccured', () => {
const afterReducerOperation = sampleSlice(state, errorOccured())
expect(afterReducerOperation.isError).toBeTruthy()
})
it('test successOccured', () => {
const afterReducerOperation = sampleSlice(state, successOccured())
expect(afterReducerOperation.isError).toBeFalsy()
})
})
```
### Testing api calls
My sampleDetails.js file primarily has two api calls **fetchSample** and **toggleSample** as shown below:
```
import axios from 'axios'
import { errorOccured, loadingFinished, loadingStarted, sampleFetched, sampleToggled, successOccured } from '../store/sampleSlice';
export const fetchSample = () => async dispatch => {
dispatch(loadingStarted());
try {
const response = await axios.get('/sample');
dispatch(successOccured())
dispatch(sampleFetched(response.data));
} catch (e) {
dispatch(errorOccured())
}
finally {
dispatch(loadingFinished())
}
};
export const toggleSample = (toggleSampleVal) => async dispatch => {
dispatch(loadingStarted());
try {
    await axios.post('/sample', {
sample_flag: toggleSampleVal
})
dispatch(successOccured())
dispatch(sampleToggled(toggleSampleVal));
} catch (e) {
dispatch(errorOccured())
}
finally {
dispatch(loadingFinished())
}
};
```
Unit testing this was interesting because there are limited resources that target this pattern without the use of mock thunks. I tested it by mocking axios, returning mock status codes, and asserting against the dispatched actions; however, there might be a better way to do this.
```
import '@testing-library/jest-dom';
import { fetchSample, toggleSample } from './sampleDetails';
import axios from 'axios';
const testSampleData = {
'sample_flag': true,
}
jest.mock('axios');
describe('sampleDetails', () => {
it('should dispatch loadingStarted when starting to fetch sample details', async () => {
const mockDispatch = jest.fn();
await fetchSample()(mockDispatch)
expect(mockDispatch).toHaveBeenCalledWith({
type: 'sampleReducer/loadingStarted'
})
});
it('should dispatch sampleFetched when sample details are fetched', async () => {
axios.get.mockImplementationOnce(() =>
Promise.resolve({
data: testSampleData
})
);
const mockDispatch = jest.fn();
await fetchSample()(mockDispatch)
expect(mockDispatch).toHaveBeenCalledWith({
payload: testSampleData,
type: 'sampleReducer/sampleFetched'
})
});
it('should dispatch errorOccured when fetch sample details fails', async () => {
axios.get.mockImplementationOnce(() =>
Promise.reject({ statusCode: 500 })
);
const mockDispatch = jest.fn();
await fetchSample()(mockDispatch)
expect(mockDispatch).toHaveBeenCalledWith({
type: 'sampleReducer/errorOccured'
})
});
it('should dispatch loadingStarted when starting to toggle sample option', async () => {
const mockDispatch = jest.fn();
await toggleSample()(mockDispatch)
expect(mockDispatch).toHaveBeenCalledWith({
type: 'sampleReducer/loadingStarted'
})
});
  it('should dispatch sampleToggled when sample is toggled successfully', async () => {
    axios.post.mockImplementationOnce(() =>
      Promise.resolve({ statusCode: 200 })
    );
    const mockDispatch = jest.fn();
    await toggleSample(true)(mockDispatch)
    expect(mockDispatch).toHaveBeenCalledWith({
      payload: true,
      type: 'sampleReducer/sampleToggled'
    })
  });
it('should dispatch errorOccured when sample toggle fails', async () => {
axios.post.mockImplementationOnce(() =>
Promise.reject({ statusCode: 500 })
);
const mockDispatch = jest.fn();
await toggleSample()(mockDispatch)
expect(mockDispatch).toHaveBeenCalledWith({
type: 'sampleReducer/errorOccured'
})
});
  it('should dispatch loadingFinished after fetching sample details completes', async () => {
    const mockDispatch = jest.fn();
    await fetchSample()(mockDispatch)
    expect(mockDispatch).toHaveBeenCalledWith({
      type: 'sampleReducer/loadingFinished'
    })
  });
});
```
You must have observed that I haven't explicitly used createAsyncThunk anywhere as my application wasn't complex enough to use it. For the store, I did rely on [configureStore](https://redux-toolkit.js.org/api/configureStore) from redux toolkit, which implicitly takes care of everything as shown below
```
import { combineReducers } from 'redux'
import { configureStore } from '@reduxjs/toolkit'
import sampleReducer from './sampleSlice'
const reducer = combineReducers({
sampleReducer,
})
// Automatically adds the thunk middleware and the Redux DevTools extension
const store = configureStore({
reducer
})
export default store
```
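For reference, the thunks exercised in the tests above follow the classic hand-written pattern: an action creator that returns an async function receiving `dispatch`. Stripped of axios and Jest, the shape can be sketched in plain JavaScript (the injected `getData` dependency and the sample payload are hypothetical stand-ins, not the article's actual code):

```javascript
// Hand-written thunk: a creator returning an async function that
// receives dispatch. The HTTP client is injected so the sketch has
// no axios dependency (getData stands in for axios.get).
const fetchSample = (getData) => async (dispatch) => {
  try {
    const { data } = await getData();
    dispatch({ type: 'sampleReducer/sampleFetched', payload: data });
  } catch (err) {
    dispatch({ type: 'sampleReducer/errorOccured' });
  }
};

// Exercise it with a hand-rolled mock dispatch, mirroring jest.fn().
const calls = [];
const mockDispatch = (action) => calls.push(action);

fetchSample(async () => ({ data: ['a', 'b'] }))(mockDispatch)
  .then(() => console.log(JSON.stringify(calls)));
```

This is exactly the shape the mocked `axios.get` tests exercise: on success the payload flows into `sampleFetched`, and any rejection collapses into `errorOccured`.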
Hope this article was helpful. If anyone is interested in migrating their old code to redux, I found [this](https://redux.js.org/usage/migrating-to-modern-redux#reducers-and-actions-with-createslice) really helpful.
Keep calm and unit test! | ujjavala |
1,790,163 | Generative AI Development Solutions | Introduction Generative AI represents a revolutionary leap forward in artificial... | 0 | 2024-03-14T11:54:42 | https://dev.to/binaryinformatics/generative-ai-development-solutions-3amm | generativeai, ai, aidevelopment, binaryinformatics | ## Introduction
Generative AI represents a revolutionary leap forward in artificial intelligence capabilities. Powered by deep learning techniques like neural networks, generative AI can create brand new content, designs, and more from scratch. Unlike previous AI systems focused on analysis, generative AI can actively build and generate brand new artifacts and content.
Businesses across all industries are beginning to recognize the vast potential of generative AI. It enables personalization at scale, automatically generating customized content for each user. The creative possibilities are endless - design prototypes, product descriptions, support content, and even software code can all be produced on-demand through generative AI. The systems continuously learn and improve from data, requiring less and less human input over time.
As generative AI capabilities grow more advanced, competitive advantage will go to those companies that successfully integrate it throughout their products, services and operations. Identifying the right development partner is key to building customized generative AI solutions tailored for your specific needs.
Binary Informatics has emerged as a leader in **[generative AI development Solutions](https://binaryinformatics.com/generative-ai-development-solutions/)**. With over a decade of experience building innovative AI solutions for global enterprises, Binary Informatics possesses the technical expertise and strategic vision needed to develop impactful generative AI capabilities at any scale. By leveraging their talents, you can bring the benefits of cutting-edge generative AI to your organization.
## Generative AI Capabilities
Generative AI refers to AI systems that can generate new content and artifacts that are human-like. The key capabilities of generative AI include:
### Text Generation
- Text generation systems like GPT-3 can generate human-like text on any topic through natural language processing. They are trained on massive amounts of data to produce remarkably coherent writing.
### Image Generation
- Image generation systems like DALL-E 2 and Stable Diffusion use deep learning to generate photorealistic images from text descriptions. They allow creating original images that don't exist.
### Video/Audio Generation
- Video and audio generation tools can produce synthetic media like videos and podcasts using AI voices cloned from real people. They enable creating high-quality video/audio content at scale.
### Data Generation
- AI can generate synthetic datasets for tasks like training computer vision and natural language processing models. This augments limited real-world data.
### Code Generation
- Code generation systems like GitHub Copilot can suggest entire code snippets and programs through deep learning on large codebases. This boosts developer productivity significantly.
Generative AI allows creating high-quality, human-like content across text, images, video, audio, data, and code. These powerful capabilities are transforming content creation and imagination.
## Benefits for Businesses
Generative AI delivers numerous benefits for businesses across industries. Key advantages include:
**Automate Content Creation**
Generative AI can help automate many content creation tasks. Examples include writing news articles, social media posts, blog content, ad copy, product descriptions, FAQs, and more. This frees up significant time for human writers and creators to focus on more strategic, creative work.
**Create Synthetic Data for ML Models**
Generative models can rapidly generate massive customized datasets to train machine learning algorithms. This synthetic data augments real-world data and leads to improved model accuracy.
**Automate Simple Coding Tasks**
For programmers, generative AI can automate simpler coding tasks, like translating natural language to code. This enables developers to work on higher-value coding that requires human oversight and expertise.
**Lower Costs**
Automating repetitive, low-value work with generative AI reduces human labor costs. Businesses realize significant cost savings from increased workflow automation.
**Free Up Human Creativity**
With generative AI handling more mundane work, employees can invest their efforts into more strategic initiatives that leverage human judgment, creativity, and decision-making. This empowers the human workforce.
## Binary Informatics' Solutions for Generative AI Development
Binary Informatics offers a range of solutions to help your business adopt generative AI:
### Custom Generative AI Models
Our team of AI researchers and engineers can work closely with you to develop custom generative AI models tailored to your specific needs. We utilize the latest techniques like diffusion models and transformer networks to create cutting-edge solutions.
### Pre-trained Models for Common Use Cases
Don't want to build from scratch? We offer access to a library of pre-trained generative AI models for common use cases like text generation, image creation, video production, and more. These models have already been trained on diverse datasets so you can start generating high-quality outputs right away.
### MLOps Infrastructure to Deploy Models
We provide the MLOps pipelines and infrastructure needed to seamlessly take your generative AI models from development to production deployment. This enables rapid iteration and ensures models continue improving through built-in feedback loops.
### Consulting Services on Generative AI Strategy
Our experts can advise you on the best applications of generative AI for your business. We offer consulting services to create a strategic plan for where and how to implement generative AI to maximize business impact based on your specific goals.
## Generative AI in Action
Generative AI is already transforming how businesses operate in powerful ways. Here are some real-world examples of generative AI delivering impressive results:
### Automating Content Creation for a News Publisher
A major news publisher was looking to improve content operations to increase output and free up their writers for more impactful stories. By implementing generative AI for certain automated content needs, they saw a **500% increase** in articles written per month. This included localized weather reports, sports recaps, and data-driven market updates that were created with perfect accuracy and consistency. Their writers appreciated focusing their time on more complex narrative stories.
### Personalizing Marketing Emails
An e-commerce company tested generative AI to customize their marketing emails at scale. Instead of sending batch messages with generic content, they used AI to tailor emails for individual subscribers. This included adding their name, product recommendations based on past purchases, and personalized subject lines. Overall email open rates increased by **22%** and clickthroughs were up **18%**. The customers appreciated feeling "known" through relevant content.
### Drafting Complex Legal Briefs
A law firm was drowning in paperwork when assembling case briefs. Their junior lawyers spent hours combing through databases to shape legal arguments. Using generative AI models fine-tuned on past briefs, they could completely automate initial brief drafting. This gave back over 30 hours per 100-page brief, allowing lawyers to focus on higher judgement tasks. The head partner called it their "**greatest efficiency breakthrough in decades**".
### Customer Success Story
The CEO of an AI consultancy shared: "Implementing generative AI to handle various content needs - like social media posts, webpage copy, and documentation - was a total game changer. Our small team can now punch above our weight in content production. Even better, we reallocated bandwidth to pursue more creative and analytics projects. It's amazing to get perfectly formatted content instantly with a few prompts. I can't imagine operating any other way!"
In these examples, generative AI delivered faster content output, higher quality, and cost savings through automation. This empowers businesses to create value in new ways.
## Overcoming Generative AI Challenges
Generative AI promises enormous benefits, but also comes with risks that need to be addressed.
### Mitigating Bias and Ethics Concerns
Like any AI system, generative AI models can perpetuate and amplify societal biases if not properly monitored and controlled. Organizations must audit their training data, test outputs, and set up oversight processes to identify and mitigate any harmful biases or ethical risks. Priority should be placed on developing generative AI responsibly.
### Managing Cybersecurity Risks
The unprecedented capabilities of generative AI could be exploited by bad actors if not properly secured. Strong data governance, access controls, monitoring, and cybersecurity measures are essential. Companies should partner with experts in AI cybersecurity to conduct threat modeling, data protection, and vulnerability assessments.
### Building In-House Skills
While generative AI is groundbreaking, realizing its full potential requires developing specialized skills in-house. Organizations need data scientists and engineers who deeply understand these systems and can debug errors, tune configurations, assess risks, and ensure high-quality outputs. Proper training, documentation, and oversight processes led by internal experts are key to generating business value responsibly.
With the right precautions and expertise, companies can overcome the challenges and safely capitalize on the many advantages of generative AI. But these concerns should not be underestimated. Working with experienced partners is the best way to build generative AI capabilities securely and ethically.
## Looking Ahead
The rapid pace of advancement in generative AI points to some exciting possibilities in the near future. As models continue to be refined, we can expect generative AI to become even more capable and nuanced in its output.
### Predictions on Future Capabilities
In the next few years, we are likely to see generative AI that can:
- Hold natural, multi-turn conversations without veering off track or making mistakes
- Generate images, audio, and video that are indistinguishable from reality
- Automate complex creative and analytical tasks currently performed by humans
- Adapt its tone, style, and content for different audiences and contexts
Researchers predict that as early as 2025, generative AI could reach human parity across most tasks and domains. This doesn't mean AI will replace human intelligence and creativity, but rather work alongside us in a highly collaborative way.
### How It Will Transform Industries
As generative AI matures, it is poised to revolutionize nearly every industry:
- In healthcare, it can help diagnose conditions, suggest treatments, and generate personalized care plans.
- For customer service, it can respond to customer inquiries, handle complaints, and provide recommendations.
- Within cybersecurity, it can monitor networks for threats, detect anomalies, and generate code.
- For content creation, it can write, edit, illustrate, animate, and design.
- Across engineering and manufacturing, it can improve designs, optimize workflows, and foresee problems.
The list goes on. Any industry that relies on data analysis, content creation, strategic planning, or complex problem solving stands to benefit enormously from generative AI's unique strengths.
### Importance of Starting Now
Organizations that start incorporating generative AI today will have a considerable advantage over their competition. They can build up institutional knowledge on best practices, figure out which applications provide the most value, and get comfortable using AI as a collaborative tool.
It takes time to integrate a new technology effectively. Beginning now allows for gradual adoption alongside current workflows. Those who wait risk falling behind as generative AI proliferates across industries. Starting today also maximizes the time and cost savings that come with automating tasks and amplifying human capabilities.
The next decade will see generative AI transition from cutting-edge novelty to mission-critical business necessity. Organizations need to lay the groundwork today to realize the full benefits. Partnering with experienced providers like Binary Informatics offers an accelerated path to leveraging generative AI to its full potential.
## Why Binary Informatics?
Binary Informatics is the market leader in generative AI solutions. We are trusted by leading companies across industries to deliver cutting-edge generative AI capabilities customized for their needs.
What sets Binary Informatics apart:
- **Market Leader in Generative AI**: We have the most experience developing state-of-the-art generative AI models like DALL-E and GPT-3. Our team is at the forefront of generative AI research and development.
- **Trusted by Leading Companies**: Our clients include Fortune 500 companies and innovative startups. We work closely with them to create generative AI solutions tailored to their specific use cases.
- **Custom Solutions for Any Need**: With our deep generative AI expertise, we can build custom models and tools for any application. We work collaboratively with clients to understand their needs and deliver innovative generative AI solutions that drive real business value.
Whether you need creative content generation, advanced natural language processing, or custom generative model development, Binary Informatics has the proven experience and capabilities to deliver. Trust us for your most complex and mission-critical generative AI needs.
## Getting Started with Binary Informatics
Are you ready to implement generative AI solutions for your business? Binary Informatics makes it easy to get started. We offer:
- **Free Consultation** - Schedule a free 30-minute call with one of our generative AI experts. We'll discuss your business goals and advise on the best applications of generative AI.
- **Pilot Projects** - Not sure where to begin? Try a low-risk pilot project in a targeted area, like content generation or data analysis. We'll set up a limited scope pilot, deliver results, and evaluate next steps.
- **Generative AI Workshops** - Get your team up to speed on generative AI capabilities with one of our half-day workshops. We'll provide an overview of key technologies, demo real-world use cases, and answer all your questions.
With our cutting-edge expertise and hands-on guidance, you'll be leveraging generative AI's potential in no time. Reach out today to get started on an evaluation or pilot project--the first step towards transformative results.
## Conclusion
Generative AI represents an exciting new frontier in technology, offering tremendous potential to enhance business capabilities and transform how we work. As outlined in this piece, key benefits of leveraging generative AI include automating repetitive tasks, generating personalized content at scale, extracting insights from data, and optimizing workflows.
However, realizing the full potential of generative AI requires thoughtful development and mitigating risks like biased outputs. This is where Binary Informatics' expertise comes into play. With over a decade of experience building AI solutions, Binary Informatics is uniquely equipped to help companies implement generative AI responsibly and effectively.
The time to embrace generative AI is now. By partnering with Binary Informatics, organizations can future-proof their operations, reduce costs, and provide more value to customers. **[Contact Binary Informatics today to schedule a consultation and see how generative AI can transform your business.](https://binaryinformatics.com/contact-us/)** | binaryinformatics |
1,790,410 | test | test | 0 | 2024-03-14T13:39:36 | https://dev.to/programci42/test-4deb | test | programci42 | |
1,834,380 | Hello world | Hello everyone! | 0 | 2024-04-25T18:06:08 | https://dev.to/shubhsk/hello-world-3cdb | webdev, javascript, python, genai | Hello everyone! | shubhsk |
1,790,426 | Error: unable to run private channel using Pusher in Laravel 10 | Error: unable to run private channel using... | 0 | 2024-03-14T14:15:58 | https://dev.to/hilmi/error-unable-to-run-private-channel-using-pusher-in-laravel-10-551g | {% stackoverflow 78152277 %} | hilmi | |
1,790,466 | ReductStore v1.9.0 Released | We are pleased to announce the release of the latest minor version of ReductStore, 1.9.0. ReductStore... | 0 | 2024-03-14T15:20:08 | https://www.reduct.store/blog/news/reductstore-9-released | news, database, reductstore | We are pleased to announce the release of the latest minor version of [**ReductStore**](https://www.reduct.store/), [**1.9.0**](https://github.com/reductstore/reductstore/releases/tag/v1.9.0). ReductStore is a time series database designed for storing and managing large amounts of blob data.
To download the latest released version, please visit our [**Download Page**](https://www.reduct.store/download).
## What’s New in 1.9.0?
This release, version 1.9.0, introduces several key improvements and features to enhance the overall performance and user experience. These updates include optimizations for disk space management, the inclusion of replication support in the Web Console, and the provision of license information in the HTTP API.
<!--more-->
### Disk Space Management and Optimization in ReductStore
ReductStore's FIFO bucket quota ensures that you don't exceed your available disk space: it removes the oldest data when the quota is reached, and if it's not possible to free up enough disk space for incoming data, the system rejects the write request. To enforce the quota, it calculates disk usage at startup. This calculation can be slow for large amounts of data, such as hundreds of terabytes, but it has now been significantly optimized and parallelized.
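As a mental model only (this is a toy illustration, not ReductStore's actual implementation), the FIFO quota behaves roughly like this: evict the oldest records until the incoming one fits, and reject the write when eviction alone cannot free enough space.

```javascript
// Toy FIFO-quota model: a bucket tracks record sizes in arrival order.
function writeRecord(bucket, quota, size) {
  // Evict oldest records until the new record fits.
  while (bucket.used + size > quota && bucket.records.length > 0) {
    bucket.used -= bucket.records.shift();
  }
  if (bucket.used + size > quota) {
    return false; // reject: even an empty bucket cannot hold this record
  }
  bucket.records.push(size);
  bucket.used += size;
  return true;
}

const bucket = { records: [], used: 0 };
console.log(writeRecord(bucket, 10, 4)); // true
console.log(writeRecord(bucket, 10, 4)); // true
console.log(writeRecord(bucket, 10, 4)); // true (oldest record evicted)
console.log(writeRecord(bucket, 10, 12)); // false (exceeds quota outright)
```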
### Web Console with Replication Support
We introduced data replication in **[the previous release](https://www.reduct.store/blog/news/reductstore-8-released)**. Now, you can manage this feature in the Web Console:

Diagnostics will be particularly useful, as they can help identify hard-to-spot errors in your settings.
### HTTP API 1.9 with License Information
ReductStore is available under the **[BUSL-1.1](https://github.com/reductstore/reductstore/blob/main/LICENSE)** license, and we freely distribute the source code. However, companies with capital exceeding 2M USD must purchase a **[commercial license](https://www.reduct.store/pricing)** for production use.
Customers receive a license key in a text file to be specified in the database **[configuration](https://www.reduct.store/docs/next/configuration#settings)**. After this, the ReductStore instance provides license information in logs and via its HTTP API. This information can be used when requesting commercial support.
---
I hope you find this release useful. If you have any questions or feedback, don’t hesitate to reach out in **[Discord](https://discord.gg/8wPtPGJYsn)** or by opening a discussion on **[GitHub](https://github.com/reductstore/reductstore/discussions)**.
Thanks for using **[ReductStore](https://www.reduct.store/)**! | atimin |
1,790,490 | Portfolio Website / Bible App - Devlog #9 | Accomplished: Configured a MySQL database with free hosting services. Securely connected to the... | 0 | 2024-03-14T15:52:37 | https://dev.to/ashray_sam/portfolio-website-bible-app-devlog-9-4gh3 | webdev, react, learning, sql | Accomplished:
- Configured a MySQL database with free hosting services.
- Securely connected to the database from the local Spring Boot application.
To Do:
- Implement responsive design.
- Integrate a "Contact Me" section with social icons.
- Incorporate project listings to showcase relevant work.
Next Steps:
Deploy both client and server APIs to make the application accessible to users. | ashray_sam |
1,790,522 | FAST WINDOWS | The start of the 2023 school year and the event-rich social networks WhatsApp and Facebook allowed us... | 0 | 2024-03-14T16:26:28 | https://dev.to/longjumpingfile/fast-windows-4dho | codepen | <p>The start of the 2023 school year and the event-rich social networks WhatsApp and Facebook allowed us to bring our activities to fruition, notably securing an internship that validates professional experience for a pupil of the Lycée Scientifique du Carré Sénart and a student enrolled in a private BTS program for an exam.</p>
https://codepen.io/collection/PYLbvN | longjumpingfile |
1,799,340 | Spin up command-line Ubuntu VMs in (almost) seconds | Sometimes, you need a clean and disposable Linux environment to test configuration changes, try out... | 0 | 2024-03-23T11:09:52 | https://www.paleblueapps.com/rockandnull/spin-up-ubuntu-vms-multipass/ | infra | ---
title: Spin up command-line Ubuntu VMs in (almost) seconds
published: true
date: 2024-03-23 11:09:13 UTC
tags: Infra
canonical_url: https://www.paleblueapps.com/rockandnull/spin-up-ubuntu-vms-multipass/
---

Sometimes, you need a clean and disposable Linux environment to test configuration changes, try out a new tool, or simply isolate your experiments from your main system. There's a lightning-fast and easy way to launch Ubuntu command-line virtual machines (VMs) on Windows, macOS, or even Linux.
This is a super quick primer on how you can get started quickly. In less than 10 minutes you can have your command line Ubuntu VM.
## **What is Multipass?**
Developed by Canonical (the company behind Ubuntu), Multipass provides a streamlined way to manage Ubuntu VMs. With a single command, you can spin up a fresh Ubuntu instance, perfect for those quick testing scenarios.
Multipass VMs have a relatively small footprint, conserving your system resources.
## **Quick start**
Firstly, head to the [Multipass website](https://multipass.run/?ref=paleblueapps.com) and follow the installation instructions for your operating system.
### **Launch your first VM**
```
multipass launch
```
That's it. This command downloads the Ubuntu image and boots up a VM. As soon as the latest Ubuntu LTS (long-term support) image is downloaded and deployed, you will be dropped into the shell.
Every time you run this command a new VM is created.
### Launch an existing VM
```
multipass shell <vm-name>
```
For follow-up runs, use this command to launch into the shell of an existing VM. If you only have 1 VM, you can omit `vm-name`.
### **Mount a host directory**
```
multipass mount <source-path-on-host> <vm-name>:<mount-point-in-vm>
```
The default name for the `vm-name` is `primary`.
### Start/Stop/Delete/List VMs
```
multipass delete <vm-name>
multipass start <vm-name>
multipass stop <vm-name>
multipass list
```
Quite self-explanatory commands. With these, you can manage the VMs in your system.
### Modify VM resources
Almost certainly, you will need to modify the resources of your VMs at some point. Fear not, you can do it quickly ([documentation](https://multipass.run/docs/modify-an-instance?ref=paleblueapps.com)):
```
multipass set local.<vm-name>.cpus=4
multipass set local.<vm-name>.disk=60G
multipass set local.<vm-name>.memory=7G
```
## **Alternatives for VMs with GUI**
If you require a full-fledged Linux VM with a graphical desktop environment:
- **UTM (macOS):** A streamlined VM solution tailored for macOS. Highly recommended, really polished, and easy to use.
- **VirtualBox (Windows, macOS, Linux):** A powerful and versatile virtualization platform with more extensive configuration options.
Give Multipass a try the next time you need a quick and isolated Linux environment. Its simplicity and speed will make it an indispensable tool in your development workflow.
Happy hacking! | rockandnull |
1,808,707 | ALAIcoin Exchange - Innovating for Secure Crypto Transactions | The ALAIcoin digital asset trading platform was established in 2017 and is registered in California,... | 0 | 2024-04-02T07:01:54 | https://dev.to/alaicoinex/alaicoin-exchange-innovating-for-secure-crypto-transactions-1jee |
The ALAIcoin digital asset trading platform was established in 2017 and is registered in California, USA. It was created by top blockchain investment institutions from Wall Street, along with investment elites, early investors in the blockchain industry, and researchers, to forge a one-stop international blockchain digital asset trading platform.
Since its establishment in the United States in 2017, ALAIcoin has been committed to providing users with safe, professional, and compliant cryptocurrency trading services. Focused on North America and Asia-Pacific, it is fully engaged in expanding its global business. Adopting a dual registration system in the USA and Singapore, ALAIcoin embraces regulatory compliance and operations, and has obtained or is in the process of applying for MSB and NFA regulatory licenses (compliance operation licenses) issued by the USA and Canada, the UK's FCA license, and Australia's ASIC license, continuously pushing the development of the cryptocurrency industry.
Thanks to secure digital asset management, diversified trading models, and a positive trading experience, official data collected up to November 21, 2023, shows that the ALAIcoin digital asset trading platform has achieved a registered user count of 5.17 million, with daily active trading users peaking at 2.017 million and daily active user trading volume reaching a high of 6.36 billion USD.

The ALAIcoin digital asset trading platform has a certain renown in the blockchain ecosystem and cryptocurrency trading system field. Its team has independently developed and built a decentralized structure's security system and asset firewall protection system, effectively preventing DDOS attacks with a double firewall, while also having the support of several top global institutions, solid financial backing, and providing first-class asset security guarantees for global users.
The core team members of the ALAIcoin digital asset trading platform come from senior R&D, product, and operations sectors of the Top 3 digital asset trading platforms, possessing extensive experience and professional knowledge, with decades of service experience in the blockchain industry R&D. The core members are spread across the globe, including Singapore, the USA, the UK, Dubai, Hong Kong, and other countries and regions.
As a globalized digital asset trading service platform, ALAIcoin boasts significant product advantages. The team is dedicated to continuously optimizing and improving the platform's technical architecture to strengthen the protection of users' asset security. It is constantly launching new products and services to provide a superior trading experience for users. Additionally, ALAIcoin has the most comprehensive service system, with a professional customer service team offering 24/7 online consultation services and support, ensuring that investors receive timely help during their trading process. | alaicoinex | |
1,808,746 | ice king ice supplier in mumbai | https://g.co/kgs/kfawyMn | 0 | 2024-04-02T08:11:01 | https://dev.to/ishakhatun/ice-king-ice-supplier-in-mumbai-2co1 | news, business, marketing, ice | https://g.co/kgs/kfawyMn | ishakhatun |
1,808,812 | Discover Reliable Acer Service Centre Near You | Experience peace of mind with Acer Service Centre's comprehensive solutions for your Acer devices.... | 0 | 2024-04-02T09:47:13 | https://dev.to/bajrangwaghmare08/discover-reliable-acer-service-centre-near-you-4975 |

Experience peace of mind with Acer Service Centre's comprehensive solutions for your Acer devices. From software issues to hardware repairs, our technicians handle it all with precision and care. Call Us: +91-9511692583. | bajrangwaghmare08 |
1,809,021 | How to Create OTP Input with Resend CountDown Timer in Livewire using Alphinejs and TailwindCss | If you're working with Laravel Livewire and need to create an OTP Input. Here is how to create a OTP... | 0 | 2024-04-02T13:27:58 | https://dev.to/andychukse/how-to-create-otp-input-with-resend-countdown-timer-in-livewire-using-alphinejs-and-tailwindcss-4d9i | otp, alphinejs, tailwindcss, livewire | If you're working with Laravel Livewire and need to create an OTP Input. Here is how to create a OTP input with a count down timer that has a resend button using Alphine.js and TailwindCss.
#### Design the Input Field
The code below creates a single input field for entering a single number. The number of fields depends on the length of the OTP code. The next part will show you how to display the required number of inputs.
```
<form class="fi-form grid gap-y-6">
<input
type="tel"
maxlength="1"
class="border border-gray-500 w-10 h-10 text-center"
/>
@error('code')
<p data-validation-error="" class="fi-fo-field-wrp-error-message text-sm text-danger-600 dark:text-danger-400">
{{ $message }}
</p>
@enderror
<button
style="--c-400:var(--primary-400);--c-500:var(--primary-500);--c-600:var(--primary-600);"
class="fi-btn relative grid-flow-col items-center justify-center font-semibold outline-none transition duration-75 focus-visible:ring-2 rounded-lg fi-color-custom fi-btn-color-primary fi-size-md fi-btn-size-md gap-1.5 px-3 py-2 text-sm inline-grid shadow-sm bg-custom-600 text-white hover:bg-custom-500 focus-visible:ring-custom-500/50 dark:bg-custom-500 dark:hover:bg-custom-400 dark:focus-visible:ring-custom-400/50 fi-ac-action fi-ac-btn-action"
type="submit"
>Verify</button>
</form>
```
#### Display Specific Number of Input fields
Here we use JavaScript (Alpine.js) to specify the number of boxes to display and a for loop to render them.
```
<form
class="fi-form grid gap-y-6"
wire:submit="verify"
x-data="otpForm()"
>
<div style="--cols-default: repeat(1, minmax(0, 1fr));" class="grid grid-cols-[--cols-default] fi-fo-component-ctn gap-6">
<div style="--col-span-default: span 1 / span 1;" class="col-[--col-span-default]">
<div data-field-wrapper="" class="fi-fo-field-wrp">
<div class="grid gap-y-2">
<input
type="hidden"
id="otp"
wire:model="code"
required="required"
x-model="otp"
>
</div>
<div class="grid gap-y-2">
<div class="">
<div class="py-6 px-0 w-80 mx-auto text-center my-6">
<div class="flex justify-between">
<template x-for="(input, index) in length" :key="index">
<input
type="tel"
maxlength="1"
class="border border-gray-500 w-10 h-10 text-center"
:x-ref="index"
x-on:input="handleInput($event)"
x-on:paste="handlePaste($event)"
x-on:keyup="handleDelete($event)"
/>
</template>
</div>
</div>
</div>
@error('code')
<p data-validation-error="" class="fi-fo-field-wrp-error-message text-sm text-danger-600 dark:text-danger-400">
{{ $message }}
</p>
@enderror
</div>
<div class="grid gap-y-2">
<div class="fi-form-actions">
<div class="fi-ac gap-3 grid grid-cols-[repeat(auto-fit,minmax(0,1fr))]">
<button
style="--c-400:var(--primary-400);--c-500:var(--primary-500);--c-600:var(--primary-600);"
class="fi-btn relative grid-flow-col items-center justify-center font-semibold outline-none transition duration-75 focus-visible:ring-2 rounded-lg fi-color-custom fi-btn-color-primary fi-size-md fi-btn-size-md gap-1.5 px-3 py-2 text-sm inline-grid shadow-sm bg-custom-600 text-white hover:bg-custom-500 focus-visible:ring-custom-500/50 dark:bg-custom-500 dark:hover:bg-custom-400 dark:focus-visible:ring-custom-400/50 fi-ac-action fi-ac-btn-action"
type="submit"
x-on:click="$wire.code=document.getElementById('otp').value"
>
Verify
</button>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</form>
@push('scripts')
<script>
function otpForm() {
return {
length: 7,
otp: '',
handleInput(e) {
const input = e.target;
this.otp = Array.from(Array(this.length), (element, i) => {
let ref = document.querySelector('[x-ref="' + i + '"]');
return ref.value || '';
}).join('');
if (input.nextElementSibling && input.value) {
input.nextElementSibling.focus();
input.nextElementSibling.select();
}
},
handlePaste(e) {
const paste = e.clipboardData.getData('text');
this.otp = paste;
const inputs = Array.from(Array(this.length));
inputs.forEach((element, i) => {
let ref = document.querySelector('[x-ref="' + i + '"]');
ref.value = paste[i] || '';
});
},
handleDelete(e) {
let key = e.keyCode || e.charCode;
if(key == 8 || key == 46) {
                    const currentRef = e.target.getAttribute('x-ref');
const previous = parseInt(currentRef) - 1;
let ref = document.querySelector('[x-ref="' + previous + '"]');
ref && ref.focus();
}
},
}
}
</script>
@endpush
```
We passed the JavaScript function to the form as `x-data`. This allows us to access all the variables and functions inside `otpForm()` from within the form element.
The `otpForm` function contains:
_the `length` variable_, which sets the number of boxes to display.
_the `handleInput` function_, which concatenates the OTP digits entered in the boxes and stores them in the `otp` variable.
_the `handlePaste` function_, which transfers a copied OTP from the clipboard into the boxes.
_the `handleDelete` function_, which handles deleting the contents of the OTP boxes and refocuses the cursor.
We used `:x-ref` to uniquely identify each input box and then used `document.querySelector` to retrieve the value of each of the boxes based on their position.
We also added a hidden field to store the OTP code before submitting it to our model.
```html
<input
type="hidden"
id="otp"
wire:model="code"
required="required"
x-model="otp"
>
```
#### Add Resend Button with CountDown Timer
Let's add a resend button with a countdown timer.
```html
<div class="grid gap-y-2 text-center" x-data="otpSend(80)" x-init="init()">
<template x-if="getTime() <= 0">
<form wire:submit="resendOtp">
<button
type="submit"
>
Resend OTP
<div wire:loading>
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 200">
<circle fill="#FF156D" stroke="#FF156D" stroke-width="15" r="15" cx="40" cy="100">
<animate attributeName="opacity" calcMode="spline" dur="2" values="1;0;1;" keySplines=".5 0 .5 1;.5 0 .5 1" repeatCount="indefinite" begin="-.4">
</animate>
</circle>
<circle fill="#FF156D" stroke="#FF156D" stroke-width="15" r="15" cx="100" cy="100">
<animate attributeName="opacity" calcMode="spline" dur="2" values="1;0;1;" keySplines=".5 0 .5 1;.5 0 .5 1" repeatCount="indefinite" begin="-.2">
</animate>
</circle>
<circle fill="#FF156D" stroke="#FF156D" stroke-width="15" r="15" cx="160" cy="100">
<animate attributeName="opacity" calcMode="spline" dur="2" values="1;0;1;" keySplines=".5 0 .5 1;.5 0 .5 1" repeatCount="indefinite" begin="0">
</animate>
</circle>
</svg>
</div>
</button>
<input type="hidden" wire:model="logid">
</form>
</template>
<template x-if="getTime() > 0">
<small>
Resend OTP in
<span x-text="formatTime(getTime())"></span>
</small>
</template>
</div>
<script>
function otpSend(num) {
        const milliseconds = num * 1000 // convert seconds to milliseconds
        const currentDate = Date.now() + milliseconds
return {
countDown: milliseconds,
countDownTimer: new Date(currentDate).getTime(),
intervalID: null,
init(){
if (!this.intervalID ) {
this.intervalID = setInterval(() => {
this.countDown = this.countDownTimer - new Date().getTime();
}, 1000);
}
},
getTime(){
if(this.countDown < 0){
this.clearTimer()
}
return this.countDown;
},
formatTime(num){
                return new Date(num).toLocaleTimeString(navigator.language, {
minute: '2-digit',
second:'2-digit'
});
},
clearTimer() {
clearInterval(this.intervalID);
}
}
}
</script>
```
The `otpSend(num)` function uses a countdown timer to display the resend button only after the specified time `num` (in seconds) has elapsed.
You can see the full [source code on github here](https://github.com/andychukse/otp-input-livewire-alphinejs) | andychukse |
1,809,024 | What Does a Scrum Master Do? Your Essential Breakdown | Tired of projects feeling like a constant uphill battle? Scrum Masters can change that! 🚀 In my new... | 0 | 2024-04-02T13:39:38 | https://dev.to/codewithnazam/what-does-a-scrum-master-do-your-essential-breakdown-c27 | scrum, agile, teams, development | Tired of projects feeling like a constant uphill battle? Scrum Masters can change that! 🚀
In my new article, get the inside scoop on:
- **What Scrum Masters actually do**
- **How they boost team motivation**
- **Why they're the key to delivering projects on time and on budget**
Check it out! 👇 {% embed https://codewithnazam.com/what-does-a-scrum-master-do-your-essential-breakdown/ %} #scrum #leadership #projectdelivery | codewithnazam |
1,809,600 | Water Bottle Caps Manufacturers In Bangalore | In Bangalore, a bustling hub of industry and innovation in India, water bottle caps manufacturers... | 0 | 2024-04-03T04:17:33 | https://dev.to/vvachanpolymer/water-bottle-caps-manufacturers-in-bangalore-5157 | bottlecaps, manufacturers | In Bangalore, a bustling hub of industry and innovation in India, water bottle caps manufacturers play a crucial role in supplying the beverage industry with high-quality caps for bottled water and other beverages. With a growing population and increasing demand for packaged drinking water, the manufacturers in Bangalore cater to both local and international markets, adhering to stringent quality standards and embracing technological advancements to stay competitive.
**[Water Bottle Caps Manufacturers In Bangalore](https://vvachan.com/water-bottle-caps/)** leverage the city's reputation as a technology and manufacturing hub to implement cutting-edge processes and machinery in their production facilities. Advanced injection molding techniques, computer-aided design (CAD) software, and automated production lines ensure efficiency, precision, and consistency in cap manufacturing.
Quality control is a top priority for water bottle caps manufacturers in Bangalore. They employ skilled technicians and invest in state-of-the-art testing equipment to perform rigorous inspections throughout the production process. From seal integrity to tamper resistance, every aspect of the caps is thoroughly evaluated to meet regulatory requirements and customer expectations. | vvachanpolymer |
1,809,665 | NATURAL OLD MINER DIAMONDS HALO RING ROUND CUT | Title: Embrace Vintage Elegance with the Natural Halo Ring: Old Miner Diamonds Round Cut Unveil the... | 0 | 2024-04-03T06:20:54 | https://dev.to/olivia19244475/natural-old-miner-diamonds-halo-ring-round-cut-1e24 |
Title: Embrace Vintage Elegance with the Natural Halo Ring: Old Miner Diamonds Round Cut
Unveil the timeless allure of vintage elegance with our [Natural Halo Ring](https://harrychadent.co.uk/collections/rings/products/natural-old-miner-diamonds-halo-ring-round-cut) from Harry Chadent. This exquisite piece features a captivating array of old miner diamonds meticulously arranged in a stunning round-cut halo setting, creating a ring that exudes sophistication and charm.
The old miner diamonds, renowned for their unique cut and vintage appeal, add a touch of timeless elegance to this ring. Each diamond is carefully selected for its exceptional quality and brilliance, ensuring that only the finest gemstones adorn our creations.
Set in a classic halo design, the round-cut diamonds are surrounded by a circle of smaller diamonds, amplifying their sparkle and creating a mesmerizing display of light and fire. The halo setting not only enhances the beauty of the center stone but also adds a touch of vintage-inspired charm to the overall design.
Whether worn as an engagement ring to symbolize everlasting love or as a statement piece for special occasions, our Natural Halo Ring is sure to leave a lasting impression. Its timeless design and exquisite craftsmanship make it a versatile accessory that seamlessly complements any ensemble.
At Harry Chadent, we are committed to upholding the highest standards of quality and craftsmanship. Each ring is crafted with care and precision, reflecting our dedication to excellence and luxury.
Indulge in the beauty of vintage-inspired jewelry with our Natural Halo Ring. Elevate your style and make a statement of elegance and refinement with this exquisite piece from Harry Chadent.
Experience the allure of old miner diamonds with our collection. Discover the perfect expression of timeless beauty with our natural halo rings. | olivia19244475 | |
1,809,680 | Buy Negative Google Reviews | https://usashopit.com/product/buy-negative-google-reviews/ Buy Negative Google Reviews Negative... | 0 | 2024-04-03T06:39:39 | https://dev.to/soloke5708/buy-negative-google-reviews-4kki | devops, css, aws, ai | https://usashopit.com/product/buy-negative-google-reviews/

Buy Negative Google Reviews
Negative reviews on Google are detrimental critiques that expose customers’ unfavorable experiences with a business. These reviews can significantly damage a company’s reputation, presenting challenges in both attracting new customers and retaining current ones. If you are considering purchasing negative Google reviews from dmhelpshop.com, we encourage you to reconsider and instead focus on providing exceptional products and services to ensure positive feedback and sustainable success.
It’s imperative for everyone to remain vigilant when considering online reviews, as negative feedback can be fabricated by companies seeking to deceive consumers and harm their competitors. By purchasing false critiques, businesses can impede the visibility of their rivals and distort search engine rankings, potentially leading potential customers astray. Such deceptive practices underscore the importance of critically evaluating online reviews and cross-referencing information before making any judgments about a business’s reputation or quality.
How to Buy Negative Google Reviews from Us?
We offer a unique service that allows you to buy negative Google reviews to address any reputation concerns you may have. Our discreet and professional approach ensures that your needs are met with the utmost care and confidentiality. Whether you are a business owner looking to assess and improve your online reputation or an individual seeking to manage your digital presence, our services provide a tailored solution for your specific needs. Our team understands the complexity of online reputation management and is dedicated to delivering results that meet your expectations. With our proven track record and commitment to excellence, you can trust us to handle your reputation management needs with integrity and efficiency.
Why Buy Negative Google Reviews from dmhelpshop
We take pride in our fully qualified, hardworking, and experienced team, who are committed to providing quality and safe services that meet all your needs. Our professional team ensures that you can trust us completely, knowing that your satisfaction is our top priority. With us, you can rest assured that you’re in good hands.
Discover our exceptional services with a sample of our work to ensure complete confidence in our abilities. At our affordable rates, we maintain the highest standards, proving that quality and affordability can indeed go hand in hand. Whether you are a business owner, student, or anything in between, we are committed to delivering top-notch services that meet your needs without breaking the bank.
Is Buy Negative Google Reviews safe?
At dmhelpshop, we understand the concern many business persons have about the safety of purchasing Buy negative Google reviews. We are here to guide you through a process that sheds light on the importance of these reviews and how we ensure they appear realistic and safe for your business. Our team of qualified and experienced computer experts has successfully handled similar cases before, and we are committed to providing a solution tailored to your specific needs. Contact us today to learn more about how we can help your business thrive.
Benefit of use Negative Google Reviews
Considering purchasing fake Google reviews? Whether you seek to balance unjust positive feedback, harm a rival, or for any other reason, it’s crucial to proceed with caution. Buying fake reviews can have serious legal and ethical implications, and can significantly damage your reputation and credibility. Before taking any steps, carefully consider the potential consequences and the long-term impact on your business or personal brand. It’s essential to uphold the integrity of review platforms and prioritize genuine, honest feedback.
When striving to enhance your company’s online standing, examining your Google reviews is paramount. Encountering negative feedback should not prompt panic. Instead, consider several strategies for potentially addressing them. By taking proactive steps to address negative reviews, you can gradually augment your business’s online reputation and foster positive customer relationships.
Importance of Buy Negative Google Reviews
Your business’s online reputation is paramount, and your Google reviews play a crucial role. Surprisingly, negative reviews can be beneficial as they demonstrate the authenticity of your business and provide valuable feedback from real customers. Rather than fearing negative reviews, embrace them as an opportunity for improvement. By leveraging them, you can enhance your products and services, ultimately strengthening your business. Embracing and understanding the value of negative reviews can truly work to your advantage.
Contact Us / 24 Hours Reply
Telegram:dmhelpshop
WhatsApp: +1 (980) 277-2786
Skype:dmhelpshop
Email:dmhelpshop@gmail.com | soloke5708 |
1,809,761 | The Race Against Time: Delivering Quality Code Quickly | In the ever-evolving landscape of software development, the balance between speed and quality is a... | 0 | 2024-04-03T07:33:29 | https://dev.to/nitin-rachabathuni/the-race-against-time-delivering-quality-code-quickly-3pji | webdev, javascript, programming, devops | In the ever-evolving landscape of software development, the balance between speed and quality is a tightrope walk every developer faces. The demand for rapid delivery often collides with the need for high-quality code, leading to the critical question: how can we accelerate development without compromising on quality? In this article, we explore strategies and practices that can help you achieve this balance, underlined with coding examples for clear illustration.
## Embrace Agile Methodologies
Agile development methodologies, such as Scrum and Kanban, emphasize iterative progress, flexibility, and collaboration. These approaches encourage breaking down projects into smaller, manageable tasks, allowing for quicker adjustments and more frequent delivery of features.
Example: Implementing a feature in sprints rather than a monolithic approach allows for incremental improvements and early detection of issues.
```python
# Example of iterative development in Python
def feature_development_phase(phase):
    tasks = ["design", "develop", "test", "review"]
    for task in tasks:
        print(f"Executing {task} in phase {phase}")
    print("Feature phase completed")

# Iterative development approach
for phase in range(1, 4):  # Simulating 3 sprints
    feature_development_phase(phase)
```
## Prioritize Code Quality from the Start
Code quality is not just an outcome; it's a process. Integrating quality checks into the development cycle from the beginning can save time and resources down the line.
Example: Utilize static code analysis tools to catch potential issues early.
```javascript
// Example using ESLint for JavaScript
/* eslint-env node */
console.log('Hello, world!');
// ESLint can help identify issues like missing semicolons, unused variables, etc.
```
## Automate Where Possible
Automation is your ally in the race against time. Automating repetitive tasks such as testing, building, and deployment not only speeds up the development process but also reduces the likelihood of human error.
Example: Implement Continuous Integration (CI) and Continuous Deployment (CD) pipelines.
```yaml
# Example of a simple CI/CD pipeline using GitHub Actions
name: Node.js CI/CD

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js
        uses: actions/setup-node@v1
        with:
          node-version: '14'
      - name: Install dependencies
        run: npm install
      - name: Run tests
        run: npm test
      - name: Deploy
        run: npm run deploy
```
## Foster a Culture of Code Reviews
Code reviews are a powerful tool for maintaining quality, fostering learning, and encouraging collaboration. Regular, thorough reviews can catch issues early, spread knowledge across the team, and maintain a high standard of code.
Example: Implement pull request reviews in your version control system.
```shell
# Simulate a code review process with Git commands
git checkout -b new-feature
# Make changes and commit them
git commit -am "Add new feature"
git push origin new-feature
# Create a pull request for review
```
## Continuous Learning and Improvement
The landscape of software development is constantly changing, and staying updated with the latest technologies, methodologies, and best practices is crucial. Encourage learning within your team through workshops, seminars, and allowing time for personal development projects.
Example: Organize regular code refactoring sessions to improve existing codebases, making them more efficient and maintainable.
## Conclusion
Delivering quality code quickly is not just about individual skills or tools; it's about embracing practices and methodologies that enhance the efficiency and quality of software development. By prioritizing code quality, automating processes, fostering a culture of code reviews, and committing to continuous improvement, teams can navigate the delicate balance between speed and quality successfully.
---
Thank you for reading my article! For more updates and useful information, feel free to connect with me on LinkedIn and follow me on Twitter. I look forward to engaging with more like-minded professionals and sharing valuable insights.
| nitin-rachabathuni |
1,809,791 | How to create a blog using Golang | After getting fed up with React, SPAs, and Javascript around 2021 I decided to re-write my personal... | 0 | 2024-04-03T08:21:03 | https://mortenvistisen.com/posts/how-to-create-a-blog-using-golang | go, tutorial, beginners, howto | After getting fed up with React, SPAs, and Javascript around 2021 I decided to re-write my personal webpage in Rust and wrote an [article](https://mortenvistisen.com/posts/how-to-build-a-simple-blog-using-rust) on how you could build a simple blog, purely using Rust. It ended up becoming one of my most popular articles and for good reason; Rust is exciting, fun to write, and blazingly fast. After a while though, I started to feel frustrated with the development process for adding new features to my site: the feedback loop was simply too long.
I've always been interested in solo entrepreneurship and technology. But, as I'm getting older, I realize that I might have been more interested in trying out new technologies. Great for learning and growing as an engineer, bad for shipping projects, and starting to see that sweet MRR grow. I decided to re-re-write my site once again, this time in Go for multiple reasons:
- I write Go for a living
- Simple language, with a decent type system and fast performance
- Blazingly fast compile times
In this post, I will show how you can create your own personal blog using Go. I'll assume you're familiar with Go, and know how to configure a router/database/server, and so on. Should you not, feel free to grab a clone of my Go starter template [Grafto](https://github.com/mbv-labs/grafto) that has lots of things configured for you out of the box.
## Foundations
I write all my stuff in markdown; if you're a developer who also wants to start blogging, chances are you also are quite familiar with it. After Go 1.16 where we got the `embed` package included in the standard library most of our work is already done. We basically only need to have a way of storing some filenames, associating them with an ID or a slug, grabbing the file, and serving it to the user. Pretty simple.
Whether you've created your own setup or grabbed a copy of Grafto, create a new directory in the root of your project called `posts` and in there, create a file called `posts.go`. Open it and add the following:
```go
package posts
// imports omitted
//go:embed *.md
var Assets embed.FS
```
Now, any file with the `.md` extension will get included in the binary that ultimately gets built once we run `go build`. We can grab a file from our global `Assets` variable using `Assets.ReadFile(name-of-file)`, handle any error that might occur, and return the file as a string, e.g.:
```go
file, err := Assets.ReadFile("my-post.md")
if err != nil {
return err
}
return string(file)
```
It won't be pretty (we'll fix that later), but it gets the basic idea out, which we can build on.
We have a way to include our blog posts in the binary; let's add a way to associate them with a slug. You could use an ID here, but it just looks better to have URLs like: "https://acme.com/my-first-blog-post" compared to "http://acme.com/44a2530b-567d-472f-9495-e2ee64e7ae6d". So, assuming you have a database up and running, add this table:
```sql
create table posts (
id uuid primary key,
created_at timestamp not null,
updated_at timestamp not null,
title varchar(255) not null,
filename varchar(255) not null,
slug varchar(255) not null
);
```
Not much exciting going on here. For readers unfamiliar with the term, the slug column above holds a URL-friendly version of the title. So if you have a post with the title "My First Blog Post", the slug equivalent would be "my-first-blog-post". Easy.
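To generate that slug when creating a post, a small helper is enough. Here's a minimal sketch (the `slugify` name is my own; any URL-safe normalization works):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// slugify turns a title like "My First Blog Post!" into "my-first-blog-post".
func slugify(title string) string {
	s := strings.ToLower(title)
	// Collapse every run of non-alphanumeric characters into a single dash.
	s = regexp.MustCompile(`[^a-z0-9]+`).ReplaceAllString(s, "-")
	// Drop any leading/trailing dashes left behind.
	return strings.Trim(s, "-")
}

func main() {
	fmt.Println(slugify("My First Blog Post!")) // my-first-blog-post
}
```

You'd call this once when inserting the row, storing the result in the `slug` column so lookups stay a simple equality match.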
Lastly, we need to be able to serve this to readers. That flow would typically involve them hitting a landing page showing a list of articles they can choose from, which links to the article.
The implementation of this will depend a bit on your setup, but let us implement the handler to deal with grabbing a specific article using Echo as our router. Assuming you have a route like `/posts/:slug`, create the following:
```go
type ArticleStorage interface {
GetPostBySlug(slug string) (Post, error)
}
func ArticleHandler(ctx echo.Context, storage ArticleStorage) error {
postSlug := ctx.Param("postSlug")
postModel, err := storage.GetPostBySlug(postSlug)
if err != nil {
return err
}
postContent, err := posts.Assets.ReadFile(postModel.Filename)
if err != nil {
return err
}
return ctx.String(http.StatusOK, string(postContent))
}
```
I'm putting some decisions on you here in terms of implementing the ArticleStorage. We just need _something_ that grabs the data on the post from the DB, based on the slug.
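If you want something to develop against before wiring up a real database, a map-backed implementation satisfies the interface just fine. This is only a sketch under my own naming (`InMemoryStorage`, plus a trimmed-down `Post` struct); in production you'd swap in queries against the `posts` table:

```go
package main

import (
	"errors"
	"fmt"
)

// Post mirrors the columns from the posts table that the handler needs.
type Post struct {
	Title    string
	Filename string
	Slug     string
}

// InMemoryStorage satisfies the ArticleStorage interface with a map,
// which is handy for tests and local development.
type InMemoryStorage struct {
	posts map[string]Post
}

func NewInMemoryStorage(posts []Post) InMemoryStorage {
	m := make(map[string]Post, len(posts))
	for _, p := range posts {
		m[p.Slug] = p
	}
	return InMemoryStorage{posts: m}
}

// GetPostBySlug looks a post up by its slug, mirroring the DB query.
func (s InMemoryStorage) GetPostBySlug(slug string) (Post, error) {
	post, ok := s.posts[slug]
	if !ok {
		return Post{}, errors.New("post not found")
	}
	return post, nil
}

func main() {
	storage := NewInMemoryStorage([]Post{
		{Title: "My First Blog Post", Filename: "my-post.md", Slug: "my-first-blog-post"},
	})
	post, err := storage.GetPostBySlug("my-first-blog-post")
	fmt.Println(post.Filename, err) // my-post.md <nil>
}
```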
This is the foundation of what we need...but it's not pretty. Let's fix that by letting the server do what it was always supposed to do: return HTML.
## Enter templ
If you've spent time in the Go ecosystem, chances are you've heard about [templ](https://templ.guide). It lets you write HTML templates as Go packages, and it's just such a pleasant way of building out a UI. Add some HTMX and Alpine.js and you've got at least 95% of what you get with SPAs, without the added complexity.
It's good practice to have a base template that wraps around your other templates, so we have a single point for adding things like stylesheets, javascript, metadata, etc. Create a directory in root called views and add the following to a file called `base.templ`.
```go
package views
templ base() {
<!DOCTYPE html>
	<html lang="en">
		<body>
			<nav>
				<a href="/">MBV Labs</a>
			</nav>
			{ children... }
			<footer>
				<aside>
					<p>Copyright ©2024</p>
					<p>All rights reserved by MBV Labs</p>
				</aside>
			</footer>
		</body>
	</html>
}
```
For this to work, you'll need to install templ and run `templ generate` which will produce a file called `base_templ.go` that we can then import into other templates to wrap around them. For the sake of brevity, we'll only create the template to show the actual article. Create a file called `article.templ`, and add the following:
```go
package views

type ArticlePageData struct {
	Title   string
	Content string
}

templ ArticlePage(data ArticlePageData) {
	@base() {
		<div>
			<div>
				<h1>{ data.Title }</h1>
			</div>
			<article>
				@templ.Raw(data.Content)
			</article>
		</div>
	}
}
```
and run `templ generate` once again.
We can now go back and update our ArticleHandler handler:
```go
func ArticleHandler(ctx echo.Context, storage ArticleStorage) error {
postSlug := ctx.Param("postSlug")
postModel, err := storage.GetPostBySlug(postSlug)
if err != nil {
return err
}
postContent, err := posts.Assets.ReadFile(postModel.Filename)
if err != nil {
return err
}
return views.ArticlePage(
		views.ArticlePageData{
			Title:   postModel.Title,
			Content: string(postContent),
},
).Render(
ctx.Request().Context(),
ctx.Response().Writer,
)
}
```
If you run your application now and visit a valid URL, you should see a (rather ugly) page showing the markdown of your article but this time, with some sweet hypertext markup.
## Making things (slightly) less ugly
In terms of styling things, throwing some tailwind or vanilla CSS at what we have now will get you a long way. But, we still show raw markdown to the user when they visit our articles. Additionally, we might want to show some nicely formatted code snippets in our articles. Let's fix this now.
For this, we need something that can transform the markdown into HTML components e.g
```markdown
## Some sub header
```
into
```html
<h2>Some sub header</h2>
```
Luckily, there already is a great library for this: Goldmark. So let's refactor the `posts/posts.go` file to parse the content we store using embed.
```go
//go:embed *.md
var assets embed.FS // unexport assets
type Manager struct {
posts embed.FS
markdownParser goldmark.Markdown
}
func NewManager() Manager {
md := goldmark.New(
goldmark.WithParserOptions(
parser.WithAutoHeadingID(),
parser.WithAttribute(),
),
goldmark.WithRendererOptions(
html.WithHardWraps(),
html.WithXHTML(),
html.WithUnsafe(),
),
)
return Manager{
posts: assets,
		markdownParser: md,
}
}
func (m *Manager) Parse(name string) (string, error) {
source, err := m.posts.ReadFile(name)
if err != nil {
return "", err
}
// Parse Markdown content
var htmlOutput bytes.Buffer
	if err := m.markdownParser.Convert(source, &htmlOutput); err != nil {
return "", err
}
return htmlOutput.String(), nil
}
```
Lastly, update the ArticleHandler to use the Manager:
```go
func ArticleHandler(
ctx echo.Context,
storage ArticleStorage,
postManager posts.Manager
) error {
postSlug := ctx.Param("postSlug")
postModel, err := storage.GetPostBySlug(postSlug)
if err != nil {
return err
}
postContent, err := postManager.Parse(postModel.Filename)
if err != nil {
return err
}
return views.ArticlePage(
		views.ArticlePageData{
Title: postModel.Title,
Content: postContent,
},
).Render(
ctx.Request().Context(),
ctx.Response().Writer,
)
}
```
Try and edit your article by adding some code blocks and they will now get nicely formatted. You can add custom themes to the parser so your code snippets will be shown with your favorite theme.
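If you want themed syntax highlighting, goldmark has a companion extension, goldmark-highlighting, built on Chroma. The fragment below is only a configuration sketch (the "dracula" style is just an example; any Chroma style name should work), and the extension option would be added to the `goldmark.New` call in `NewManager`:

```go
import (
	"github.com/yuin/goldmark"
	highlighting "github.com/yuin/goldmark-highlighting/v2"
)

md := goldmark.New(
	goldmark.WithExtensions(
		// Renders fenced code blocks through Chroma with the chosen style.
		highlighting.NewHighlighting(
			highlighting.WithStyle("dracula"),
		),
	),
)
```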
| mbv-labs |
1,835,975 | Buy Verified Stripe Accounts-22 | Buy Verified Stripe Accounts , with details including Name, Address, Email, Phone, Bank Name, Routing... | 0 | 2024-04-27T09:14:48 | https://dev.to/verifiedstripe11/buy-verified-stripe-accounts-22-36m1 | [Buy Verified Stripe Accounts](https://smmshopes.com/product/buy-verified-stripe-accounts/) , with details including Name, Address, Email, Phone, Bank Name, Routing Number, and Account Number. These details are all verified and 100% valid. You can also customize them to suit your needs.
Our accounts are secure and compliant with all applicable laws and regulations. We guarantee that your accounts will be fully operational and ready for use within 24 hours. We also provide 24/7 customer support.
[Our prices are competitive and we offer a satisfaction guarantee](https://smmshopes.com/product/buy-verified-stripe-accounts/). We also provide secure payment processing and a secure platform for online transactions.
We guarantee satisfaction with our products and services. We guarantee that your accounts will be safe and secure. We guarantee that you will be able to access your account whenever you need it.
24 Hours Answer/Contact
Email: smmshopes@gmail.com
Skype: smmshopes
Telegram: @smmshopes
Whatsapp: +1(323)435-6239 | verifiedstripe11 | |
1,809,850 | How Mobile Apps Enhance Fleet Management Efficiency: The Best Trends | Did you know that as of a recent study, nearly 80% of fleet management companies are now adapting... | 0 | 2024-04-03T09:28:53 | https://dev.to/fleetstakes/how-mobile-apps-enhance-fleet-management-efficiency-the-best-trends-5en0 | mobile, application, webdev, programming | Did you know that as of a recent study, nearly 80% of fleet management companies are now adapting some form of mobile app technology to enhance their operations?
Fleet management is compelled to innovate and adapt more than ever before. The need for this unparalleled change? Mobile app technology. Equipped with advanced features and innovative capabilities, [mobile applications](https://play.google.com/store/apps/details?id=com.fleetstakes&hl=en_IN&gl=US) are dramatically transforming the sphere of fleet management, making operations more efficient, scalable, and responsive to customer needs.
Explore how this digital revolution is paving the way for a smarter, more connected future in fleet management through this article.
**Digitization and Integration of Fleet Operations**
Transitioning from traditional, paperwork-laden processes to a digitized platform is crucial for enhancing fleet efficiency and mobile technologies have been a catalyst in this change.
These [fleet management apps](https://play.google.com/store/apps/details?id=com.fleetstakes&hl=en_IN&gl=US) like FleetStakes integrate all fleet-related data, communication channels, and management tasks into one easily accessible digital space, enabling fleet managers to have all necessary information at their fingertips. Applications provide essential insights into varying aspects of fleet management, such as vehicle performance, fuel consumption, maintenance needs, and driver behaviors.
**Key Benefits:**
- Immediate access to information enhances informed decision making.
- Paper-less operations reduce errors, save time, and ease the audit process.
- Integration with existing systems allows for a 360-degree view of operations.
**Real-time Geolocation Tracking and Monitoring**
With GPS and Geo-fencing abilities, mobile apps have transformed monitoring and location tracking. Geo-fencing enhances fleet security by enabling real-time alerts when a vehicle enters or exits a specified geographic boundary. This also assists in route planning, preventing unauthorized vehicle use, and enhances the ability to locate stolen vehicles – all contributing to efficient logistics management.
**Key Benefits:**
- Improved route planning ensures optimal utilization of resources.
- Decreased fuel consumption leads to cost savings.
- Alerts on route deviation and speeding enhance safety and regulatory compliance.
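Under the hood, a circular geo-fence check boils down to a distance computation. The sketch below is illustrative only (the function names and coordinates are my own, pure standard library); it uses the haversine formula to decide whether a GPS fix falls inside a fence:

```go
package main

import (
	"fmt"
	"math"
)

const earthRadiusKm = 6371.0

// haversineKm returns the great-circle distance in kilometres
// between two latitude/longitude pairs given in degrees.
func haversineKm(lat1, lon1, lat2, lon2 float64) float64 {
	toRad := func(deg float64) float64 { return deg * math.Pi / 180 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*
			math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadiusKm * math.Asin(math.Sqrt(a))
}

// insideFence reports whether a position is within radiusKm of the fence centre.
func insideFence(lat, lon, centerLat, centerLon, radiusKm float64) bool {
	return haversineKm(lat, lon, centerLat, centerLon) <= radiusKm
}

func main() {
	// Fence: 5 km around a depot; the vehicle is a few hundred metres away.
	fmt.Println(insideFence(52.5205, 13.4105, 52.5200, 13.4050, 5.0)) // true
}
```

A real system would run this check on every incoming GPS ping and emit an alert whenever the result flips from inside to outside (or vice versa).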
**Predictive Maintenance Through IoT Integration**
Internet of Things (IoT) and mobile apps are building the foundation for predictive vehicle maintenance. Apps gather data from vehicle sensors, drawing patterns that help predict upcoming system failures.
This not only minimizes unexpected downtime but also reduces repair costs. Constant monitoring of tire pressure, engine heat and fluid levels helps keep issues at bay, contributing to optimal fleet performance.
**Key Benefits:**
- Reduced vehicle downtime enables better service reliability.
- Extended vehicle lifecycle translates into cost savings.
- Lowered unexpected repair costs helps in effective financial planning.
**Prioritizing Driver Safety**
Mobile apps are proving instrumental in ensuring drivers' safety. Apps feature driver scorecards and performance analytics, helping supervisors encourage safe driving habits. Syncing with wearable tech, these apps can monitor driver fatigue levels and alert them or management when their alertness decreases. Such features contribute to maintaining driver safety and reducing insurance premiums.
**Key Benefits:**
- Lower risk of accidents helps ensure driver safety and reduces vehicle downtime.
- Decreased insurance costs through improved overall driver safety scores.
- Better regulatory compliance with a reduced likelihood of violations.
**Fuel Management**
With fuel being the major operational cost, efficient fuel management becomes critical. Mobile apps accomplish this via fuel logging and monitoring. Route optimization features help plan fuel-efficient courses, thereby reducing expenses and contributing to a greener environment.
**Key Benefits:**
- Optimized fuel consumption reduces operational costs.
- Automated logging and analytics make reporting efficient and easy to manage.
- Sustainability through more environmentally friendly practices.
**Seamless Integration with Various Enterprise Systems**
Mobile apps integrate effortlessly with existing enterprise systems such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and Human Resources Management (HRM) software. This creates a centralized hub for fleet operations, synchronized data across departments, and the automation of administrative tasks.
**Key Benefits:**
- Centralized data management enhances visibility and control over operations.
- Automated workflows cut down on administrative tasks and prevent errors.
- Reporting and compliance are made easier with data readily available.
**In-Depth Data Analytics**
Mobile apps have inbuilt data analytics tools that convert raw fleet data into patterns and reports. These insights guide businesses in route optimization, demand forecasting, vehicle utilization improvement, and strategic planning. Predictive analytics also enables businesses to anticipate market trends - a strategic advantage in such a competitive environment.
**Key Benefits:**
- Data-driven decision making ensures optimal resource utilization.
- Predictive analytics help in demand forecasting and route optimization.
- Strategic planning is empowered by insights derived from these analytics.
**The Future of Fleet Management: Upcoming Trends**
Several exciting trends are on the horizon, all of which promise to further enhance fleet management. These include:
- Electrification of the fleet: Adoption of electric vehicles for both their environmental and cost benefits.
- Integration with autonomous vehicles and drones: This could greatly reduce human intervention and optimize operations.
- Blockchain for secure operations: This would offer a tamper-proof method of recording various transactions and contracts, providing enhanced security and transparency.
- 5G connectivity: High-speed 5G networks would ensure uninterrupted, real-time data transfer and decision-making.
**Conclusion**
There's no denying that mobile apps have disrupted fleet management, presenting solutions that are smarter, more efficient, and progressively safer. By using these tech advancements, organizations can achieve unprecedented levels of operational efficiency, cost-effectiveness, and customer contentment. For a future-proof fleet, explore how the [FleetStakes app](https://play.google.com/store/apps/details?id=com.fleetstakes&hl=en_IN&gl=US) can drive your success in the growing logistics and transportation industry.
| fleetstakes |
1,809,894 | Surprise and thanks for 1,000 followers about half a month after registering. | Report and thanks. I have some requests at the end. Please read this page for... | 0 | 2024-04-05T03:00:00 | https://dev.to/zmsoft/surpresa-e-agradecimento-pelos-1000-seguidores-em-cerca-de-meio-mes-apos-o-registro-3mec | android, google, developers, news | ## Report and thanks
I have some requests at the end.
Please read this page, everyone, whether you are interested in Android development or not.
Hello, I am zmsoft.
I registered on dev.to on March 16, 2024.
It has been about half a month since I registered, and I am very happy to announce that I have more than 1,000 followers.

Since this was my first time writing a blog, I had many doubts about what I should write, but I am happy and surprised to realize that I managed to write something of interest to everyone. First of all, I would like to thank all of you, from the bottom of my heart, for following me.
## Blog history and my ideas
I started my first activity as an individual developer by registering as an Android developer in December 2023. After that, I released an app, but I felt the barrier for new developers caused by the 20-tester closed testing requirement Google had added at that time. From there, I thought: "Other developers must be struggling too, so I want to do something about it. Development should be freer and easier." I created [AndroidDevelopersPayForward](https://play.google.com/store/apps/details?id=com.andro.zm.tools.androidtesterspayforward), developed as a free mechanism so that developers from all over the world could easily collaborate with each other. I hoped that many users would take advantage of this app, since the developers who took part in testing commented that it was a great initiative/app. However, after launch, the number of downloads did not grow, and I realized how hard it is to "make people aware of the app". I then decided to make an effort so that many people would learn about it. After that, I did many things that were new to me. I created a [website](https://zmsoft.org/), published [videos](https://youtu.be/krBUAaswTxE) on YouTube, used social networks, and exchanged information with other developers and people speaking different languages. Blogging is also part of this. Although I have not yet reached my original goal, I am happy to have achieved one of the results.
## Requests
I am pleased with one result, but, as stated, I still have a long way to go.
So allow me to ask you to use the [app](https://play.google.com/store/apps/details?id=com.andro.zm.tools.androidtesterspayforward) and promote it. If you share any of my ideas, please do me this favor.
[](https://play.google.com/store/apps/details?id=com.andro.zm.tools.androidtesterspayforward)
### 1. Request to install and test registered apps
My apps will only reach their full value when they are used by many developers.
I am convinced there is nothing wrong with my own efforts. However, I need the cooperation of many developers to make it happen. If you are interested, give it a try.
**For you who are currently working on closed testing.**
Use the app for your app's release.
**For you who are just getting started with Android development**
Try apps from different developers and use them to help you decide which app to build.
**For you, veteran developers.**
Use it to find inspiration for your own new apps.
You may also want to prepare for the changes to the scope of the rules expected from Google by learning firsthand what testing is really like.
I would appreciate it if you could help me and future developers in a "let's give it a try" spirit.
### 2. Request to spread the word
I have received access to the app from 29 countries, so I know that "developers in every country are struggling". But I also feel there is a limit to how much information I can send out personally to "let people know" about the app. I am sure this project needs the cooperation of each one of you.
**For you who have many friends.**
Please share this blog on your own blogs and social networks.
**For you who have development-related websites**
Please link to my [website](https://zmsoft.org/apps-info/androiddeveloperspayforward/) and allow me to link to yours.
**For you who are struggling with testing.**
Share this information with other developers who are also struggling.
### 3. Request for comments
Although I believe my ideas are right, I sometimes feel frustrated when trying something new. There is still much to do, and what I need to stay motivated is, above all, feedback. Your thoughts, advice, etc. would be very encouraging, so please leave a comment.
I will be happy to help you with any of them.
## Finally
There are many things I still need to learn and many things I lack, both as a blogger and as an app developer, but, if you like, please continue to support me. | zmsoft |
1,809,952 | Top 3 questions people ask about Larafast | People ask a lot of questions about Larafast, I've chosen the top 3 of them to clarify. 1. Is it... | 0 | 2024-04-03T11:46:57 | https://dev.to/karakhanyans/top-3-questions-people-ask-about-larafast-4a05 | People ask a lot of questions about Larafast, I've chosen the top 3 of them to clarify.
**1. Is it scalable and customizable?**
Yes, Larafast is a normal Laravel application, with pre-built features and components. It can be scaled and customized, and every Laravel package compatible with the boilerplate version (currently 11.x) can be installed.
Currently, it comes with Vue and Livewire stacks, but it can work with React and Svelte too. No limits.
**2. Does it have any dependencies on Larafast?**
No, there are no dependencies, it can be treated as a fresh Laravel application, and the parts that you don't need can be removed or extended.
**3. Why pay for Larafast if Laravel has free boilerplates?**
Laravel has a lot of other boilerplates and CMSs that are free, and that's true. What makes Larafast different is that it combines the features and components that most people need to launch their apps fast.
Other boilerplates need a lot of configuration and have many dependencies; Larafast, on the other hand, was created to speed up the development process and skip setting up Payments, SEO, Admin, etc.
Imagine having a ready-to-go boilerplate at hand every time, where you only change the texts and logo and you are ready to go. That's Larafast. | karakhanyans | |
1,820,668 | TypeScript Tip #1: forEach vs for loop | While working on a typescript project, it is recommended to enable noUncheckedIndexedAccess for safer... | 0 | 2024-04-14T02:41:01 | https://dev.to/tusharshahi/typescript-tip-1-foreach-vs-for-loop-56og | typescript, webdev, programming, beginners | While working on a typescript project, it is recommended to enable [`noUncheckedIndexedAccess`](https://www.typescriptlang.org/tsconfig/noUncheckedIndexedAccess.html) for safer object access.
The difference it makes can be summarised below:
```ts
const obj: Record<string, number[]> = {};
obj.nonExistentKey.push(2);
```
Without the flag enabled, TypeScript will not report a compile-time error for the above (even though it crashes at runtime). This is a trivial example, but I hope it shows the importance of this flag.
Enabling this flag leads to another hurdle though:

The error says: **Type 'undefined' is not assignable to type 'number'**
`arr[i]` is inferred as `number | undefined` inside the for loop. Why though?
Because the length of the above array can be changed like this:
```ts
arr[15] = 2;
```
Now, `arr.length` is 16. The array is [sparse](https://www.freecodecamp.org/news/sparse-and-dense-arrays-in-javascript/#:~:text=A%20sparse%20array%20is%20one,the%20sequence%20of%20their%20indices.&text=Normally%2C%20the%20length%20property%20of,sparse%20arrays%20they%20don't.) and has "holes" or "empty slots" in it. Accessing these elements will give `undefined`. Hence TypeScript complains.
So how to fix it?
We can use an array method like `forEach`. Looking at the [documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach#description):
> callbackFn is invoked only for array indexes which have assigned values. It is not invoked for empty slots in sparse arrays.

Here is the typescript [playground](https://www.typescriptlang.org/play?noUncheckedIndexedAccess=true#code/MYewdgzgLgBAhgJwTAXDMBXAtgIwKYIDaAujALwyECMANAEw0DMxA3ALABQnoksANiADmAOWz5kFABQA3OH1ToxBAJTkAfDADenGLpg8IIPngB0AwTLnL2HAL6dOAMxAJJx2AEtyMAAwwWMF4APPBIZnhgglAAFiweANTxytocejDmorgEkohEHsTWnPZcJbkmzggAonDA0ZKSAB6qZBoZSq5N1kA) for you to play around with.
| tusharshahi |
1,823,508 | My First Open Source Contribution | Introduction I was involved in a program through CodeDay and my university that paired... | 0 | 2024-04-15T16:28:57 | https://dev.to/jaredscarr/my-first-open-source-contribution-3j07 | opensource | ### Introduction
I was involved in a program through [CodeDay](https://www.codeday.org/) and my university that paired myself, and two other team members, with a mentor that would guide us through an open-source contribution to a project hosted on GitHub. I have worked in software for about 3 years, but I had never contributed to an open-source project. I was grateful for this opportunity to have someone show me the ropes and I will attempt to explain some of the key takeaways from my perspective that may also aid other first time contributors.
### About the Project
[Deck.gl](http://deck.gl/) is a library for visualizing large geospatial datasets and maps with WebGL.
A typical user of this framework is an individual or team, probably in a developer or data science type role. See their [showcase](https://deck.gl/showcase) for a great selection of example reports and applications.
### Key Terms
**Layer** - A component of Deck.gl that will layer on top of a map or another layer.
**Texture** - An image in the GPU. It maps to actual images, colors, etc for rendering.
**Frame Buffer Object (FBO)** - An extension to OpenGL that can render a texture.
**Binding** - Connects the texture to the renderer.
### Example ArcGis Application
In Figure 1 below, the ArcGis base map is displayed and two layers are overlaid on top of the base map. The first layer is a set of geo-json data that contains all the locations of the airports and are highlighted with magenta circles. The second layer is the arced lines that connect the airports.

<sub>Figure 1 - ArcGis Map with Layers</sub>
Figure 2 below shows a basic diagram of how this works. Deck.gl provides a pure JavaScript API and also supports React. A base map is selected and then one of the core features, the Deck component, is instantiated and assigned layers that will render to the screen over the map.

<sub>Figure 2 - Deck.gl Layer Diagram</sub>
### Selected Issue
The [issue](https://github.com/visgl/deck.gl/issues/8428) I worked on was to address the broken ArcGis integration which meant that developers could not render DeckGL data on top of the ArcGis map and, while the documentation looked correct, the library was not. To start I had to isolate where in the code the problem existed. There were two places that were relevant for addressing this issue: 1) the [example](https://github.com/visgl/deck.gl/tree/master/examples/get-started/pure-js/arcgis) application provided for guidance and 2) the [arcgis module](https://github.com/visgl/deck.gl/tree/master/modules). My strategy at the beginning was to identify error messages in the console and break them up into sub-problems then search for examples that existed already in the code base that were implemented in a similar manner and use them as a guide. It turned out that none of the errors were immediately helpful. A certain amount of the code was closed-box that I did not have visibility into. My strategy immediately failed. How was I to debug a system that I couldn’t step through?
### Challenges
When I started this work, I had no experience working with shaders, graphics, or the GPU. I was fortunate that my mentor did have significant experience in this area and even wrote an [article](https://gamedevelopment.tutsplus.com/a-beginners-guide-to-coding-graphics-shaders--cms-23313t) that explained a shader which was very helpful.
I was used to a development environment where all the code was visible and I could step through it with a debugger. Parts of this system were opaque and I realized I needed a different approach. I drew some diagrams and made some assumptions as to how I thought the system should work and then implemented code that would verify my assumptions or disprove them. I would make a single change, wait for the page to reload, and document what worked and what did not. When I was blocked I reached out for guidance from the maintainers, the Slack community, and my mentor.
To isolate which parts of the code needed to be updated I traced each method call and identified which pieces of code would render to the screen.
### Solution
I ended up with a pipeline of components that I could isolate to figure out where issues were.

<sub>Figure 3 - Screen Render Diagram</sub>
What is supposed to occur is ArcGis renders directly to the screen and DeckGL renders to a texture that then renders to the screen. I started with the shader itself and removed both the render and the texture FBO to see if a gradient could be drawn directly to the screen (Figure 4). When that failed, it verified that something was wrong with the shader.

<sub>Figure 4 - Render Debug Diagram</sub>
Through some trial and error it turned out that the geometry of the shader required an update and once fixed the gradient rendered which verified that the shader could draw to the screen.

<sub>Figure 5 - Gradient from Shader</sub>
The next logical choice was to look at the communication between the Deck instance on the other end of the pipeline and the shader. The FBO and DeckGL render were removed and replaced with a hard-coded image to see if it would render (Figure 6).

<sub>Figure 6 - Pipeline Debug 1</sub>
If the image rendered, then the shader was okay; if not, the shader was still broken and the DeckGL render could be too. The image rendered (Figure 7).

<sub>Figure 7 - Image as Texture Rendered to Screen</sub>
This verified the shader could render if supplied a correctly formed and bound image texture. I looked at the DeckGL render next. Here the binding of the texture to the DeckGL instance needed updating with newly required attributes. Once those were discovered the DeckGL instance could render to the shader (see Figure 8).

<sub>Figure 8 - Pipeline Debug 2</sub>
The last piece of the puzzle was the frame buffer object. The FBO and the texture needed to be reconstructed with the correct parameters and the texture bound to it properly. When the right combination of attributes was found, the map rendered with the proper layers and the sample image was removed (see Figure 1). This was verified by adding a video to the discussion showing the example working in the browser. Maintainers accepted the [pull request](https://github.com/visgl/deck.gl/pull/8545) and I had my first successful open source contribution!
### Conclusion
One of the not so obvious elements that I learned from this experience was the importance of tracking what has been done and what has been learned. With all the rabbit-holes, code changes, side-quests, and proven or dis-proven assumptions along the way it is easy to get lost with what you did from week to week. Especially, if one wants to write an article or a quality PR detailing the changes and why they were necessary this document is invaluable.
I really felt fortunate to be part of this program and I learned so much from the process and from my mentor. Not everyone will have access to this kind of guidance, but I encourage everyone to begin to search for a project they feel they can contribute to. Remember to search issues for the label “good first issue”, if they have a Slack or similar join, and read their contribution guidelines. Happy coding!
| jaredscarr |
1,826,053 | Python's MAP/FILTER/REDUCE: Streamlining Data Manipulation in Seconds | Introduction: Python offers powerful built-in functions for working with collections of data: MAP,... | 0 | 2024-04-17T19:30:16 | https://dev.to/sk_rajibul_9ce58a68c43bb5/pythons-mapfilterreduce-streamlining-data-manipulation-in-seconds-34e5 | python, programming, development, developer |
Introduction:
Python offers powerful built-in functions for working with collections of data: MAP, FILTER, and REDUCE. In just 30 seconds, let's dive into what these functions do and how they can simplify your code.
## MAP:
The map function takes in 1) a function and 2) an iterable. The purpose of the function is to apply some sort of transformation to each element inside our iterable (think list). It then applies the function to every element in the iterable, and returns a new iterable.
```python
# Double each number in a list
numbers = [1, 2, 3, 4, 5]
doubled = list(map(lambda x: x * 2, numbers))
print(doubled) # Output: [2, 4, 6, 8, 10]
```
## FILTER:
The filter function takes in 1) a function and 2) an iterable. The purpose of the function is to decide which elements to keep and which to discard in the new iterable. It does not apply any transformation to the elements.
```python
mylist = [1, 2, 3, 4, 5, 6, 7, 8]
def larger5(n):
return n > 5
newlist = list(filter(larger5, mylist))
print(newlist)
# [6, 7, 8]
```
- each function call returns either True or False
- if True is returned, the original element is kept
- if False is returned, the original element is discarded
## REDUCE:
REDUCE applies a rolling computation to pairs of items in an iterable and returns a single result. It's like gradually combining items together until you get a final result. For example:
```python
fruits = ['apple', 'orange', 'pear', 'pineapple']
def add(a, b):
return a + '-' + b
from functools import reduce
result = reduce(add, fruits)
print(result)
# apple-orange-pear-pineapple
```
- add('apple', 'orange') returns 'apple-orange'
- add('apple-orange', 'pear') returns 'apple-orange-pear'
- add('apple-orange-pear', 'pineapple') returns 'apple-orange-pear-pineapple'
- And this is why we get 'apple-orange-pear-pineapple' in the end
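The three functions also compose naturally. Here is a small illustrative sketch (the numbers are made up, not from the examples above) that filters, maps, and then reduces in one pipeline:

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5, 6]

# keep the even numbers, square each one, then add them all up
evens = filter(lambda n: n % 2 == 0, numbers)  # 2, 4, 6
squares = map(lambda n: n * n, evens)          # 4, 16, 36
total = reduce(lambda a, b: a + b, squares)

print(total)  # 56
```

Because `map` and `filter` return lazy iterators in Python 3, no intermediate lists are built until `reduce` consumes the pipeline.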
Conclusion:
In just 30 seconds, you've learned about Python's MAP, FILTER, and REDUCE functions. These powerful tools can help you manipulate data in a concise and expressive way, making your code more readable and efficient. Happy coding! | sk_rajibul_9ce58a68c43bb5
1,826,094 | Ripple's CEO Shares Optimistic Crypto Market Prediction: Details | Brad Garlinghouse expects the crypto market to double in size, totalling $5 trillion by the... | 0 | 2024-04-17T21:07:14 | https://dev.to/endeo/ripples-ceo-shares-optimistic-crypto-market-prediction-details-dcp | webdev, javascript, web3, blockchain | #### Brad Garlinghouse expects the crypto market to double in size, totalling $5 trillion by the end of 2024. This is larger than the market cap of Amazon and Microsoft combined.
The CEO of Ripple blockchain, Brad Garlinghouse, [claimed](https://www.cnbc.com/2024/04/08/ripple-ceo-crypto-market-to-double-in-size-to-5-trillion-in-2024.html) that the combined market capitalisation of the cryptocurrency market is to top $5 trillion this year, citing macroeconomic factors and upcoming Bitcoin halving.
> “I’ve been around this industry for a long time, and I’ve seen these trends come and go,” Garlinghouse [said](https://www.cnbc.com/2024/04/08/ripple-ceo-crypto-market-to-double-in-size-to-5-trillion-in-2024.html) in an interview to CNBC. “I’m very optimistic. I think the macro trends, the big picture things like the ETFs, they’re driving for the first time real institutional money.”
> “You’re seeing that drives demand, and at the same time demand is increasing, supply is decreasing,” Garlinghouse further clarified. “That doesn’t take an economics major to tell you what happens when supply contracts and demand expands.”
## Diving into the Context
The first U.S. spot Bitcoin ETFs were [approved](https://www.theguardian.com/technology/2024/jan/11/bitcoin-etf-approved-sec-explained-meaning-securities-regulator-tweet) on January 10 by the U.S. Securities and Exchange Commission. This allowed institutions and retail investors to gain exposure to Bitcoin without directly owning the underlying asset via cryptocurrency exchanges.
Since the approval, Bitcoin (BTC) price [increased](https://www.coingecko.com/en/coins/bitcoin) by over 59%, reaching the all-time high of over $73,000, according to CoinGecko data. Among the metrics that spurred the growth, many cite peak exchange-traded funds’ inflows at the surge time.

Apart from macroeconomic impact, exchange-traded funds moved crypto into the spotlight of the traditional financial dimension, adding to a greater adoption of the decentralized-based solutions.
This explains the exponential growth of the Web3 projects within the first quarter and the increased attention to the altcoins. Namely, one of them, Tensor (TNSR), managed to quickly achieve [listings](https://twitter.com/WhiteBit/status/1778053496282939609) on WhiteBIT, Binance, and other top exchanges – just like Celestia (TIA) and Near Protocol (NEAR) did in the last year.
Among the factors outlined by Garlinghouse, the halving stands out. This technical event takes place roughly every four years in Bitcoin's history and halves the mining reward for BTC miners, who verify transactions and mint new coins using substantial computing power.
Crucially, every halving has marked a surge in Bitcoin's price – mainly due to the increased scarcity of the asset that the event brings. By the same logic, the fourth halving is expected to push the BTC price further north, which will drive the cryptocurrency market at large.
The last halving took place in 2020 and drove Bitcoin to the since-surpassed milestone of over $69,000.
> “The overall market cap of the crypto industry ... is easily predicted to double by the end of this year ... (as it’s) impacted by all of these macro factors,” Garlinghouse said.
At the time of writing, the total capitalisation of the crypto market stands at over $2.4 trillion, meaning Garlinghouse sees it crossing the $5 trillion mark by the end of the year.
## Optimism in the U.S. Crypto Regulation
One of the other factors that Garlinghouse sees pushing the crypto market to new highs is the possibility of positive regulatory momentum in the United States.
> “The U.S. is still the largest economy in the world, and it’s unfortunately been one of the more hostile crypto markets. And I think that’s going to start to change, also,” [told](https://www.cnbc.com/2024/04/08/ripple-ceo-crypto-market-to-double-in-size-to-5-trillion-in-2024.html) Garlinghouse.
In recent years, U.S. agencies have pursued an enforcement-heavy policy towards decentralized assets and their operators. The Securities and Exchange Commission (SEC) under Gary Gensler has been the hallmark of such policy, targeting Binance, Coinbase, and Ripple itself.
Despite this, Ripple’s CEO shares optimistic outlooks on the U.S. regulatory spotlight, which holds a great significance for other states’ crypto legislation frameworks.
> “One of the things actually I’ll say on the macro tailwinds for the industry: I think we will get more clarity in the United States,” Garlinghouse [said](https://www.cnbc.com/2024/04/08/ripple-ceo-crypto-market-to-double-in-size-to-5-trillion-in-2024.html).
The hopes for the U.S. officials to slacken the bonds are emphasised due to this year’s presidential election. Crypto hopefuls expect that the next administration will be more accommodating to the crypto industry with its policy focus. | endeo |
1,826,367 | Angular 15 Dynamic Dialog with angular material | Live demo https://stackblitz.com/github/maag070208/NgDynamicDialog ... | 0 | 2024-04-18T06:05:49 | https://dev.to/maag070208/angular-15-dynamic-dialog-with-angular-material-339p | ## Live demo
https://stackblitz.com/github/maag070208/NgDynamicDialog
## Agenda
- Angular installation
- Create a angular project with specific version
- Installation of angular material
- Primeflex installation
- Create dialog atom component
- Create dialog service
- Create the components to be called from the dynamic dialog
- Try dynamic dialog component
## Install angular
```npm install -g @angular/cli```
## Create a angular 15 project
```npx @angular/cli@15 new NgDynamicDialog```
## Install angular material
Step 1: Run installation command
```ng add @angular/material```
Step 2: Choose angular material theme

Step 3: Set up typography styles

Step 4: Include and enable animations

## Install primeflex
```npm i primeflex```
Import styles on styles.scss
```@import 'primeflex/primeflex.scss';```

## Create Dialog atom component
```ng g c components/atoms/dialog --standalone```

Step 1: In dialog.component.ts
```
import {
AfterViewInit,
ChangeDetectionStrategy,
ChangeDetectorRef,
Component,
ComponentRef,
Inject,
OnDestroy,
ViewChild,
ViewContainerRef,
} from '@angular/core';
import { CommonModule } from '@angular/common';
import { MatDialogRef, MAT_DIALOG_DATA } from '@angular/material/dialog';
@Component({
selector: 'app-dialog',
standalone: true,
changeDetection: ChangeDetectionStrategy.OnPush,
imports: [CommonModule],
templateUrl: './dialog.component.html',
styleUrls: ['./dialog.component.scss'],
})
export class DialogComponent implements OnDestroy, AfterViewInit {
//this is the target element where component dynamically will be added
@ViewChild('target', { read: ViewContainerRef }) vcRef!: ViewContainerRef;
//this will hold the component reference
private componentRef!: ComponentRef<any>;
constructor(
private dialogRef: MatDialogRef<DialogComponent>,
private cdref: ChangeDetectorRef,
@Inject(MAT_DIALOG_DATA)
public data: {
component: any;
data: any;
}
) {}
ngAfterViewInit() {
//create component dynamically
this.componentRef = this.vcRef.createComponent(this.data.component);
//pass some data to component
this.componentRef.instance.initComponent({ ...this.data.data });
//subscribe to the event emitter of component
this.componentRef.instance.onSubmit.subscribe((data: any) => {
this.dialogRef.close(data);
});
//detect changes
this.cdref.detectChanges();
}
ngOnDestroy() {
if (this.componentRef) {
this.componentRef.destroy();
}
}
}
```
Step 2: In dialog.component.html
```
<ng-template #target></ng-template>
```
## Create Dialog service
Step 1: Run command
```
ng g s core/services/dialog
```

Step 2: In dialog.service.ts
```
import { Injectable } from '@angular/core';
import { MatDialog } from '@angular/material/dialog';
import { DialogComponent } from 'src/app/components/atoms/dialog/dialog.component';
@Injectable({
providedIn: 'root'
})
export class DialogService {
constructor(private dialog: MatDialog) {}
public showDialog<T>(dynamicComponent: any, data?: T): any {
return this.dialog.open(DialogComponent, {
data: {
component: dynamicComponent,
data: data,
},
});
}
}
```
Step 3: Remember to add “MatDialogModule” in app.module.ts
```
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { MatButtonModule } from '@angular/material/button';
import { MatDialogModule } from '@angular/material/dialog';
@NgModule({
declarations: [AppComponent],
imports: [
BrowserModule,
BrowserAnimationsModule,
MatButtonModule,
MatDialogModule, //<---- import
],
providers: [],
bootstrap: [AppComponent],
})
export class AppModule {}
```
## Create the components to be called from the dynamic dialog
Step 1: Create a models folder in path “app/core/models”

Step 2: Add in models folder the dialog.ts file
```
import { EventEmitter } from "@angular/core";
export interface DynamicDialogComponent<T> {
//the initComponent method will be called when component is created
initComponent(data: any): void;
//the onSubmit event will be emitted when user clicks on submit button
onSubmit: EventEmitter<T>;
}
```
Step 3: Add in models folder the user.ts file
```
export interface IUser {
name: string;
phone: string;
email: string;
}
```
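Framework details aside, this contract can be exercised in plain TypeScript. Below is a minimal sketch, assuming a hypothetical `SimpleEmitter` as a stand-in for Angular's `EventEmitter` and a `FakeUserForm` invented for illustration (neither is part of the tutorial's code):

```typescript
// Minimal stand-in for Angular's EventEmitter, just enough to show the contract.
class SimpleEmitter<T> {
  private listeners: Array<(value: T) => void> = [];
  subscribe(fn: (value: T) => void): void { this.listeners.push(fn); }
  emit(value: T): void { this.listeners.forEach((fn) => fn(value)); }
}

interface IUser { name: string; phone: string; email: string; }

interface DynamicDialogComponent<T> {
  initComponent(data: any): void;
  onSubmit: SimpleEmitter<T>;
}

// The dialog host only depends on the interface, never on the concrete form.
class FakeUserForm implements DynamicDialogComponent<IUser> {
  onSubmit = new SimpleEmitter<IUser>();
  private user!: IUser;
  initComponent(data: any): void { this.user = { ...data }; }
  submit(): void { this.onSubmit.emit(this.user); }
}

const form = new FakeUserForm();
form.initComponent({ name: "maag", phone: "123", email: "maag@gmail.com" });
const got: IUser[] = [];
form.onSubmit.subscribe((value) => got.push(value));
form.submit();
console.log(got[0].name); // maag
```

This mirrors what `DialogComponent` does at runtime: call `initComponent()` with the dialog data, then listen on `onSubmit` to know when to close.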
Step 4: Create Form1 component
```
ng g c components/molecules/form1 --standalone
```

Step 5: In form1.component.ts file
```
import { Component, EventEmitter, Output } from '@angular/core';
import { CommonModule } from '@angular/common';
import { MatInputModule } from '@angular/material/input';
import {
FormControl,
FormGroup,
ReactiveFormsModule,
Validators,
} from '@angular/forms';
import { DynamicDialogComponent } from 'src/app/core/models/dialog';
import { IUser } from 'src/app/core/models/user';
import { MatButtonModule } from '@angular/material/button';
@Component({
selector: 'app-form1',
standalone: true,
imports: [CommonModule, MatInputModule,MatButtonModule, ReactiveFormsModule],
templateUrl: './form1.component.html',
styleUrls: ['./form1.component.scss'],
})
export class Form1Component implements DynamicDialogComponent<IUser> {
//this is the output event that you need to emit when the form is submitted
@Output() onSubmit: EventEmitter<IUser> = new EventEmitter<IUser>();
//this is the form that you need to create
public form1!: FormGroup;
//this is the user that you need to pass to the form
public user!: IUser;
//this method replace the ngOnInit() method
initComponent(data: IUser): void {
//this is the data that you pass from the parent component
this.user = data;
//this is the method that initialize the form
this.initForm();
}
//if you have a form, you need to create a method to initialize it
initForm(): void {
this.form1 = new FormGroup({
name: new FormControl(this.user.name, Validators.required),
phone: new FormControl(this.user.phone, Validators.required),
email: new FormControl(this.user.email, Validators.required),
});
}
}
```
Step 6: In form1.component.html file
```
<form [formGroup]="form1" class="flex flex-column m-5">
<mat-form-field appearance="outline">
<mat-label>Name</mat-label>
<input matInput formControlName="name" placeholder="maag" />
</mat-form-field>
<mat-form-field appearance="outline">
<mat-label>Phone</mat-label>
<input matInput formControlName="phone" placeholder="1231231231" />
</mat-form-field>
<mat-form-field appearance="outline">
<mat-label>Email</mat-label>
<input matInput formControlName="email" placeholder="maag@gmail.com" />
</mat-form-field>
<button
mat-raised-button
color="primary"
[disabled]="form1.invalid"
(click)="onSubmit.emit(form1.value)"
>
Submit
</button>
</form>
```
Step 7: Create a delete dialog component
```
ng g c components/molecules/delete-dialog --standalone
```
Step 8: In delete-dialog.component.ts file
```
import { Component, EventEmitter, Output } from '@angular/core';
import { CommonModule } from '@angular/common';
import { DynamicDialogComponent } from 'src/app/core/models/dialog';
import { MatButtonModule } from '@angular/material/button';
@Component({
selector: 'app-delete-dialog',
standalone: true,
imports: [CommonModule, MatButtonModule],
templateUrl: './delete-dialog.component.html',
styleUrls: ['./delete-dialog.component.scss']
})
export class DeleteDialogComponent implements DynamicDialogComponent<boolean> {
@Output() onSubmit: EventEmitter<boolean> = new EventEmitter<boolean>();
initComponent(): void {}
}
```
Step 9: In delete-dialog.component.html file
```
<section class="flex flex-column">
<p class="text-center">Are you sure you want to delete this record?</p>
<div class="flex justify-content-between">
<button mat-raised-button
color="warn"
(click)="onSubmit.emit(true)">Delete</button>
<button mat-raised-button
color="accent"
(click)="onSubmit.emit(false)">
Cancel
</button>
</div>
</section>
```
## Try dynamic dialog component
Step 1: in app.component.ts file
```
import { Component } from '@angular/core';
import { DialogService } from './core/services/dialog.service';
import { Form1Component } from './components/molecules/form1/form1.component';
import { IUser } from './core/models/user';
import { DeleteDialogComponent } from './components/molecules/delete-dialog/delete-dialog.component';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.scss'],
})
export class AppComponent {
constructor(private _dialogService: DialogService) {}
public async showForm1(): Promise<void> {
const dialogRef = this._dialogService.showDialog<IUser>(
Form1Component,
{} as IUser
);
const result = await dialogRef.afterClosed().toPromise();
console.log(result);
}
public async showForm1WithInitialData(): Promise<void> {
const user: IUser = {
name: 'maag',
phone: '1234567890',
email: 'maag070208@gmail.com',
};
const dialogRef = this._dialogService.showDialog<IUser>(
Form1Component,
user
);
const result = await dialogRef.afterClosed().toPromise();
console.log(result);
}
public async showDeleteDialog(): Promise<void> {
const dialogRef = this._dialogService.showDialog<boolean>(
DeleteDialogComponent
);
const result = await dialogRef.afterClosed().toPromise();
console.log(result);
}
}
```
Step 2: In app.component.html file
```
<div class="flex flex-column gap-4 m-4">
<button mat-raised-button (click)="showForm1()">Show Form 1</button>
<button mat-raised-button color="primary" (click)="showForm1WithInitialData()">Show Form 1 With Initial Data</button>
<button mat-raised-button color="accent" (click)="showDeleteDialog()">Show confirm dialog</button>
</div>
```


| maag070208 | |
1,826,419 | ✍️Testing in Storybook | Introduction Storybook provides an environment where you can build components in... | 27,134 | 2024-04-18T07:12:55 | https://dev.to/algoorgoal/testing-in-storybook-378c | webdev, testing, frontend | ## Introduction
Storybook provides an environment where you can build components in isolation, which makes it easier to check edge-case UI states. What's more, you can write tests in Storybook, and the testing environment comes with zero configuration. Aren't you excited? In this post, I will talk about what made me start testing in Storybook, how you can set it up, and some issues I had with Storybook addons.
## Motivation to do testing in Storybook
### `jsdom` in Jest cannot mock real DOM fully
[React Testing Library](https://testing-library.com/docs/) has become a go-to option for testing React applications since you can write tests from a user perspective. Here is its core principle in their official docs.
> The more your tests resemble the way your software is used, the more confidence they can give you.
So I tried Jest/React-Testing-Library and was quite satisfied with these technologies. However, I got stuck when I tried to test the `Dialog` element. It turns out there are [some known limitations with jsdom](https://github.com/jsdom/jsdom/issues/3294). Since `jsdom` is not a real DOM, I ran into a situation in which I couldn't test the element the way it is actually used by users.
### Finding Alternatives
#### Another javascript-implemented DOM
- [happy-dom](https://github.com/capricorn86/happy-dom): It's another JavaScript implementation of the DOM. However, its community is much smaller than `jsdom`'s. The repository has 2.9k+ stars, so I couldn't be sure I would get strong community support.
#### Using real DOM
- real DOM: `jsdom` lets us see the result of component testing immediately in the local development environment whenever our codebase changes. That's one of the important parts of automated testing. Once we start using a real DOM, it's clear that test execution will become too slow.
### Innovative Solution
- When you develop in local development, you typically run `yarn storybook` and see the result. Since Storybook already renders stories(components) in real DOM, it can reuse the rendered components to run component testing. [According to Storybook's Benchmark, Storybook interaction testing is 30% slower than jest/react-testing-library and sometimes it is even faster](https://github.com/storybookjs/storybook/discussions/16861#discussioncomment-2513340). Internally, Storybook uses jest/playwright to run the tests.
- In addition, it becomes easier to track down bugs since you can see the interaction flow visually in Storybook, rather than seeing the dumped HTML when the test fails. Debugging is made easier.
- Storybook's testing methods are similar to those of Jest/React-Testing-Library, so it was clear that I would get used to it easily.
## How to set up
### Test Runner
1. Install test runner
```
yarn add --dev @storybook/test-runner
```
2. Run the test-runner
```
yarn test-storybook
```
### Interaction Testing
1. Add this to config of your ./storybook/main.ts
```ts
const config: StorybookConfig = {
addons: [
'@storybook/addon-interactions',
...,
],
}
```
2. Write an interaction test.
```ts
// More on interaction testing:
// https://storybook.js.org/docs/writing-tests/interaction-testing
export const LoggedIn: Story = {
play: async ({ canvasElement }) => {
const canvas = within(canvasElement);
const loginButton = canvas.getByRole("button", { name: /Log in/i });
await expect(loginButton).toBeInTheDocument();
await userEvent.click(loginButton);
await expect(loginButton).not.toBeInTheDocument();
const logoutButton = canvas.getByRole("button", { name: /Log out/i });
await expect(logoutButton).toBeInTheDocument();
},
};
```
- `play`: this function runs after the story finishes rendering.
- `click`: Storybook lets you use `user-events` in the same way as React Testing Library.
- `expect`: assertion function
### Test Coverage
Test coverage shows which lines of code your tests haven't exercised.
1. Install the addon.
```
yarn add --dev @storybook/addon-coverage
```
2. Include the addon in main.ts
```ts
const config: StorybookConfig = {
addons: [
'@storybook/addon-coverage',
...,
],
};
```
3. Run the test runner with `--coverage option`.
```
yarn test-storybook --coverage
```
### End-to-end Testing
You can navigate to the Storybook URL and do end-to-end testing directly.
```ts
import { Frame } from "@playwright/test";
import { test, expect } from "./test";
let frame: Frame;
test.beforeEach(async ({ page }) => {
await page.goto(
"http://localhost:6006/?path=/story/example-page--logged-out"
);
await expect(page.getByTitle("storybook-preview-iframe")).toBeVisible();
frame = page.frame({ url: /http:\/\/localhost:6006\/iframe.html/ })!;
await expect(frame).not.toBeNull();
});
test("has logout button", async ({ page }) => {
const loginButton = frame.getByRole("button", { name: /John/i }).first();
await expect(loginButton).toBeVisible();
await loginButton.click();
await expect(
frame.getByRole("button", {
name: /John/i,
})
).toBeVisible();
});
```
### API Mocking
1. Install the addon.
```
yarn add msw msw-storybook-addon --dev
```
2. Generate service worker to your `public` directory.
```
yarn msw init public/
```
3. Include the addon in .storybook/preview.ts
```ts
import { initialize, mswLoader } from 'msw-storybook-addon'
// Initialize MSW
initialize()
const preview = {
parameters: {
// your other code...
},
// Provide the MSW addon loader globally
loaders: [mswLoader],
}
export default preview
```
4. Open your Storybook URL (http://localhost:6006) and check browser devtools > console tab. If MSW is enabled in your browser, you'll be able to see this log.


5. You can also see the following log in the console tab if the request was intercepted by MSW.

## Issues
### `yarn test-storybook --coverage` is not printing the coverage result on the console
#### Description

You can see that all the tests passed, but the coverage result is not displayed. [This is a known issue.](https://github.com/storybookjs/addon-coverage/issues/13)
#### Workaround
1. Install `nyc` as a dev dependency.
```
yarn add nyc --dev
```
2. Run this command instead.
```
test-storybook --coverage --coverageDirectory coverage && nyc report --reporter=text -t coverage
```
`nyc` prints the result on the console. This time we tell `nyc` where to pick up the coverage report file (`./coverage`).
## Troubleshooting
### Testing Storybook-rendered iframe in Playwright
Let's say you wrote an end-to-end test to see if the following application flow works.
1. A page renders a log-in button.
2. The user clicks the log-in button.
3. The log-out button is rendered.


This is the first version of the test, using the component rendered in Storybook.
```ts
test("it has logout button", async ({ page }) => {
// navigate to the Storybook component url
await page.goto(
"http://localhost:6006/?path=/story/example-page--logged-out",
);
// query log-in button element
const loginButton = page.getByRole("button", { name: /log in/i });
// fire click event
await loginButton.click();
// query log-out button element
const logoutButton = page.getByRole("button", { name: /log out/i });
// assert log-out button is rendered
await expect(logoutButton).toBeVisible();
});
```
Here's the result of the test.

- Problem: `await page.goto()` only waits for the navigation of the main frame's document, so it doesn't wait for the subframe generated by `<iframe/>` to render. Let's fix this issue.
```ts
test("it has logout button", async ({ page }) => {
// navigate to the Storybook component url
await page.goto(
"http://localhost:6006/?path=/story/example-page--logged-out",
);
// wait for subframe to load
await expect(page.getByTitle("storybook-preview-iframe")).toBeVisible();
const frame = page.frame({ url: /http:\/\/localhost:6006\/iframe.html/ })!;
// query log-in button element
const loginButton = page.getByRole("button", { name: /log in/i });
// fire click event
await loginButton.click();
// query log-out button element
const logoutButton = page.getByRole("button", { name: /log out/i });
// assert log-out button is rendered
await expect(logoutButton).toBeVisible();
});
```

- Problem: it's still impossible to query the elements, since `loginButton` and `logoutButton` are in the subframe. We need to query against the subframe's document.
```ts
test("it has logout button", async ({ page }) => {
// navigate to the Storybook component url
await page.goto(
"http://localhost:6006/?path=/story/example-page--logged-out",
);
// wait for subframe to load
await expect(page.getByTitle("storybook-preview-iframe")).toBeVisible();
const frame = page.frame({ url: /http:\/\/localhost:6006\/iframe.html/ })!;
// query log-in button element
const loginButton = frame.getByRole("button", { name: /log in/i });
// fire click event
await loginButton.click();
// query log-out button element
const logoutButton = frame.getByRole("button", { name: /log out/i });
// assert log-out button is rendered
await expect(logoutButton).toBeVisible();
});
```

Now the test succeeds!
## References
- [How to know what to test](https://kentcdodds.com/blog/how-to-know-what-to-test)
- [Setting test runner in Storybook](https://storybook.js.org/docs/writing-tests/test-runner)
- [Unit testing in Storybook](https://storybook.js.org/docs/writing-tests/stories-in-unit-tests)
- [Visual testing in Storybook](https://storybook.js.org/docs/writing-tests/visual-testing)
- [Interaction testing in Storybook](https://storybook.js.org/docs/writing-tests/interaction-testing)
- [Test coverage in Storybook](https://storybook.js.org/docs/writing-tests/test-coverage)
- [End-to-end testing in Storybook](https://storybook.js.org/docs/writing-tests/stories-in-end-to-end-tests)
- [Integrate MSW and Storybook](https://storybook.js.org/addons/msw-storybook-addon)
| algoorgoal |
1,826,423 | How to create a scroll to top button with Tailwind CSS and Alpinejs | In this tutorial, we'll be creating a scroll to top button using Tailwind CSS and Alpine.js. Read... | 0 | 2024-04-18T07:18:43 | https://dev.to/mike_andreuzza/how-to-create-a-scroll-to-top-button-with-tailwind-css-and-alpinejs-1hoj | tutorial, tailwindcss, alpinejs, webdev | In this tutorial, we'll be creating a scroll to top button using Tailwind CSS and Alpine.js.
[Read the article See it live and get the code](https://lexingtonthemes.com/tutorials/how-to-create-a-scroll-to-top-button-with-tailwind-css-and-alpinejs/)
| mike_andreuzza |
1,827,203 | Node.js itself is not single-threaded. | Node.js itself is not single-threaded. Node.js developers often confuse the main single-threaded... | 0 | 2024-04-18T19:09:59 | https://dev.to/nowaliraza/nodejs-itself-is-not-single-threaded-5aib | node, javascript, backend, webdev | Node.js itself is not single-threaded.
Node.js developers often confuse the main single-threaded event loop with Node.js entirely.
When a Node.js app is running, it automatically creates 4 threads under the worker pool for blocking tasks.
So at any given time, there are at least five threads.
This worker pool is managed by Libuv.
The blocking tasks are mainly I/O-bound and CPU-intensive.
**1. I/O bound**
a. DNS: dns.lookup(), dns.lookupService()
b. File system except fs.FSWatcher()
**2. CPU-intensive tasks**
a. Some crypto methods such as crypto.pbkdf2(), crypto.scrypt(), crypto.randomBytes(), crypto.randomFill(), crypto.generateKeyPair()
b. All zlib APIs other than those using libuv thread pool synchronously.
The main thread/event loop works for JavaScript execution as usual, but the worker pool takes care of blocking tasks for the main loop.
So, Node.js shouldn't be confused.
Thanks for reading.
I do a deep dive into foundational concepts & how things work under the hood. You can consider connecting with or following me, Ali Raza, here and on [LinkedIn](https://www.linkedin.com/in/thealiraza) to get along with the journey.
 | nowaliraza |
1,827,426 | Simplificando sua Internacionalização | Fala Devs, blz? Semanas atrás escrevi um artigo mostrando uma maneira de internacionalizarmos nosso... | 0 | 2024-04-19T02:03:01 | https://dev.to/flutterbrasil/simplificando-sua-internacionalizacao-4nif |
Hey Devs, how's it going? A few weeks ago I wrote an article showing a way to internationalize our app using the flutter_localization package. I confess that approach turned out a bit advanced and a bit complicated.
But thanks to the Flutter community, especially here in Brazil, ways of making things easier keep appearing, and that was the case with the creation of the localization package by the great
[David Santana De Araujo](https://medium.com/u/1a7af23ef213?source=post_page-----ea458ac4fa8d--------------------------------)
from Flutterando (@davidsdearaujo).
[](https://pub.dev/packages/localization?source=post_page-----ea458ac4fa8d--------------------------------)
## localization | Flutter Package
### A package to simplify translation in the app. import 'package:localization/localization.dart'; The package configuration…
pub.dev
First, let's install the package in our project.


Next, we need to create inside the assets\lang folder a JSON file for each language the app will support.

After that, I create all my files with an empty JSON {}.
With that done, I need to make a few configurations in my pubspec.yaml. The first is to give my application access to the lang folder by adding it to the **assets:** section, and right after that I add **localization_dir: assets\lang**; it is recommended to put this at the end of the file.

Now we need to configure localization to load the files. There are two ways; the first one I'll show is for when your application does **NOT** use named routes.
To do this, simply wrap the `home` attribute of your **MaterialApp()** with the **LocalizationWidget()** widget.

When the application uses Flutter's routing system, the `home` property of `MaterialApp` is not used. To solve this problem, use the static async method `Localization.configuration()`.
This method must be called before any translation calls. It is usually executed in the **SplashScreen**.
Let's configure our AppModule so that the initial route is the SplashModule.

And in the SplashPage we will create the method that loads the **Localization.config()** settings; once loading finishes, we call **/home** with a **pushReplacementNamed()**. At this point you could also load other things, such as **sharedPreferences**.

Now, in my **HomePage**, we will use the information configured in our **json**. To do that, we simply use **“key”.i18n()**; the **i18n()** method also accepts parameters to be used in our string. An important point, so we can use all the power that **slidy** offers us to generate the internationalization entries, is to place a comment with the default value of our key next to each **i18n()** call; this way we automate the creation of our strings.

The default language is **pt_BR**, so if the user's device is set to a language the app does not support, it will automatically fall back to pt_BR. To change the default language, just go to where **Localization.configuration()** is executed, add the **defaultLang** parameter, and set the default language you want.


Let's use slidy; to do that, we type the command in our console
With that, it will generate all the keys in all our **json** files with the default value

Now we need to translate all our **json** files

And our result will be

If we now switch our device language to Portuguese, it will automatically be recognized in our app


Much simpler, right? You can check the article's code on my GitHub. See you next time :)
https://github.com/toshiossada/flutter_localization

Join our Discord to interact with the community: https://discord.com/invite/flutterbrasil
https://linktr.ee/flutterbrasil | toshiossada | |
1,827,530 | Nuget support will make Selenium coding simpler! | 🚀 Hello programming community! Today, I am pleased to introduce and share my new NuGet package... | 0 | 2024-04-19T03:46:35 | https://dev.to/vinzh05/nuget-support-will-make-selenium-coding-simpler-2n6m | webdev, csharp, selenium, automation | 🚀 Hello programming community! Today, I am pleased to introduce and share my new NuGet package designed to support Selenium in C#. 🎯
🔧 This package makes writing automated test scripts faster and easier than ever! From quick setups to visually writing test cases, everything is simplified to save time and enhance your work performance.
✨ Experience the difference and share your feedback so the package can be improved in the future. Discover how it can help optimize your development process and ensure the quality of your software products.
🔗 Download link and usage documentation are attached below. Try it now and don't forget to follow my page for the latest updates!
GitHub link: https://github.com/vinzh05/SeleniumHelper
NuGet link: https://www.nuget.org/packages/Selenium.SeleniumHelper
In Visual Studio, you can search using the keyword: SeleniumHelper
If you find it useful, please leave a ⭐️star, 🍴fork if you want to contribute, and 👀follow to not miss updates on the nuget and support me! | vinzh05 |
1,827,666 | Famous Bakery in Delhi | Enjoy the delicious foods of Delhi's most famous bakery, The Signature Bakery by Halwaivala in... | 0 | 2024-04-19T08:16:45 | https://dev.to/halwaivala/famous-bakery-in-delhi-1jcg | bakery, delhi | Enjoy the delicious foods of Delhi's most famous bakery, The Signature Bakery by Halwaivala in Naraina. Enjoy an arrangement of sensations made with the best ingredients, from gorgeous cakes and tempting pastries to savory pizzas and delightful pasta. Whether you want cookies, bread, or a culinary masterpiece, our bakery provides unrivaled flavor and quality.
Order online or visit our bakery to feel its charm in person. Halwaivala's The Signature Bakery has established itself as the pinnacle of a **[famous bakery in Delhi](https://halwaivala.com/bakery/)**, thanks to a tradition of excellence and a devotion to employing only the best ingredients. | halwaivala |
1,828,086 | Working with Modules and Packages | Working with Modules and Packages One of the key strengths of Python is its extensive... | 0 | 2024-04-19T15:10:03 | https://dev.to/romulogatto/working-with-modules-and-packages-2b1e | # Working with Modules and Packages
One of the key strengths of Python is its extensive library of modules and packages. These pre-written code blocks allow you to easily add functionality to your programs without having to reinvent the wheel. In this guide, we will explore how to work with modules and packages in Python, giving you a solid foundation for building powerful applications.
## Understanding Modules
In Python, a module is simply a file containing Python definitions and statements. It can be thought of as a reusable chunk of code that can be imported into other programs. This makes it easier to organize your code into separate files for better maintainability.
To use a module in your program, you first need to import it using the `import` keyword followed by the name of the module:
```python
import math
```
This imports the `math` module which provides various mathematical operations and functions.
You can access items from an imported module using dot notation:
```python
result = math.sqrt(16)
print(result) # Output: 4.0
```
Here, we use dot notation (`.`) to access the `sqrt()` function within the `math` module that calculates square roots.
## Creating Your Own Modules
Python also allows you to create your own modules. To do this, simply save your desired functions or classes in a `.py` file:
```python
# mymodule.py
import math
def greet(name):
print(f"Hello, {name}!")
class Circle:
def __init__(self, radius):
self.radius = radius
def area(self):
return math.pi * (self.radius ** 2)
```
In another program or interactive shell session, you can now import these functions or classes using:
```python
import mymodule
mymodule.greet("Alice")
circle = mymodule.Circle(5)
area = circle.area()
print(area) # Output: 78.53981633974483
```
Here, we import the `greet()` function and `Circle` class from our custom module `mymodule` and utilize them in our program.
## Understanding Packages
While modules allow you to organize your code within single files, packages take it a step further by allowing you to organize related modules into directories. This helps keep your codebase clean and structured.
A package is simply a directory that contains one or more Python modules, along with a special file called `__init__.py`. This file can be left empty or can contain an initialization code for the package.
To use a module from within a package, you specify its path using dot notation:
```python
from mypackage import mymodule
mymodule.greet("Bob")
```
Here, we assume that the module named "mymodule" resides in the "mypackage" package. By importing it this way, we have access to all functions and classes defined within that module.
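As a self-contained sketch, the snippet below generates a throwaway package in a temporary directory to show how `__init__.py` can re-export names, so callers can write `from mypackage import greet` instead of spelling out the module path. Note that this demo's `greet()` returns the string instead of printing it, so the result is easy to check:

```python
import os
import sys
import tempfile

# Build a tiny throwaway package on disk just for the demo.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypackage")
os.makedirs(pkg)

with open(os.path.join(pkg, "mymodule.py"), "w") as f:
    f.write("def greet(name):\n    return f'Hello, {name}!'\n")

with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .mymodule import greet\n")  # package-level re-export

sys.path.insert(0, root)
from mypackage import greet  # resolved via the re-export in __init__.py

print(greet("Bob"))  # Hello, Bob!
```

The re-export line in `__init__.py` is what lets users import `greet` directly from the package instead of from `mypackage.mymodule`.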
## Creating Your Own Packages
Creating your own packages follows similar principles as creating modules. You need to create a directory for your package with an `__init__.py` file present. Inside this directory, you can then include multiple `.py` files which will serve as separate modules of your package.
Let's say we have the following structure:
```
mypackage/
__init__.py
mymodule1.py
mymodule2.py
```
In another program or interactive shell session, you can now use functions or classes from these different modules like this:
```python
from mypackage import mymodule1, mymodule2
mymodule1.function1()
mymodule2.some_class.attribute = value
```
By organizing related functionality into individual packages and modules, you make your codebase easier to navigate and maintain.
## Conclusion
Modules and packages are essential tools for any Python programmer. They allow you to organize your code, reuse existing functionality, and collaborate with other developers more efficiently. By understanding how to work with modules and packages, you can take your Python programming skills to the next level and create even more robust applications
| romulogatto | |
1,828,989 | Rewrites are a symptom of bad initial engineering | I was prone to this problem at the beginning of my career, I would want to rewrite everything from... | 0 | 2024-04-20T16:25:45 | https://dev.to/shailennaidoo/rewrites-are-a-symptom-of-bad-initial-engineering-4nfl | softwareengineering, beginners, discuss, learning | I was prone to this problem at the beginning of my career, I would want to rewrite everything from the ground-up if I got this chance or I felt that the initial implementation did not "scale" well.
As I progressed in my career I started to realize that rewriting your application is a symptom of bad initial engineering. The application was not thought through from a systems-design perspective, and using SOLID and other principles allows you to manage application code better in a system that is ever-changing.
If you reach a point where you need to rewrite an application, it just means that you did not spend the necessary time designing the foundation of the system to accommodate ever-changing business requirements.
1,829,157 | Feedback on RazorOps CI/CD | What is RazorOps CICD? Learn More : https://razorops.com/?utm_source=dev.to... | 0 | 2024-04-20T21:17:51 | https://dev.to/madhuk/feedback-on-razorops-cicd-3n2n |
**What is RazorOps CICD?**

**Learn More :** https://razorops.com/?utm_source=dev.to
**Features:**
**User-Friendly Interface:** RazorOps offers an intuitive and user-friendly interface, making it easy for our development teams to create, manage, and monitor CI/CD pipelines. The streamlined workflow design enhances productivity and reduces the learning curve for new users.
**End-to-End Automation:** The platform provides seamless automation of the entire software delivery pipeline, from code commit to deployment. With built-in support for continuous integration, testing, and deployment, RazorOps accelerates our release cycles and ensures rapid delivery of features to our customers.
**Flexibility and Customization:** RazorOps offers a high degree of flexibility, allowing us to tailor CI/CD workflows to our specific requirements. The platform supports custom scripting, plugin integrations, and environment configuration, empowering us to adapt our pipelines to evolving project needs.
**Scalability and Performance:** RazorOps scales effortlessly to accommodate our growing workload and expanding team. With robust support for parallel execution and distributed builds, the platform maintains optimal performance even under heavy usage, ensuring smooth and efficient software delivery.
**Security and Compliance:** RazorOps prioritizes security and compliance, implementing robust measures to protect our code, data, and infrastructure. Features such as role-based access control, encryption, and audit logging instill confidence in our ability to maintain confidentiality and integrity throughout the CI/CD process.
**RazorOps CI/CD is free forever. You can give it a try:
https://razorops.com/?utm_source=dev.to**
 | madhuk | |
1,829,288 | Web Hosting Is the Heart of Any Website | What sets these trusted providers apart is their unwavering commitment to reliability... | 0 | 2024-04-21T05:10:01 | https://dev.to/thaidatahosting23/ewbohstingepnhawaicchsamkhaykhngewbaichtaid-648 | What sets these trusted providers apart is their unwavering commitment to reliability. They invest in state-of-the-art data centers equipped with redundant systems to ensure maximum uptime and minimize the risk of service disruption. This reliability is essential for businesses that cannot afford to have their website become inaccessible, as even a few minutes of downtime can result in lost revenue and damage to their reputation.
In addition, trusted web hosting providers prioritize security, implementing robust measures to protect customers' data and websites from cyber threats. These include firewalls, malware scanning, SSL encryption, and regular security audits to proactively identify and mitigate vulnerabilities. As cyberattacks become more sophisticated, businesses and website builders need a hosting provider they can trust to protect their digital assets.
Beyond reliability and security, leading organizations and website builders also rely on trusted web hosting providers for technical expertise and support. Whether it is troubleshooting technical issues, optimizing servers, or scaling resources to accommodate growth, a knowledgeable support team is available 24/7 to assist customers every step of the way. This level of personalized support is invaluable, especially for those without a background in web development or server management.
https://www.thaidatahosting.com/ | thaidatahosting23 | |
1,829,531 | White Label Virtual Assistants: Redefining Outsourcing in the Digital Era | Modern businesses must adapt quickly in the face of digitization to remain competitive, streamline... | 0 | 2024-04-21T11:40:19 | https://dev.to/jennifer12345/white-label-virtual-assistants-redefining-outsourcing-in-the-digital-era-20h | Modern businesses must adapt quickly in the face of digitization to remain competitive, streamline operations, improve efficiencies, and stay nimble in a digital economy. One solution that has gained significant traction recently is white-label virtual assistants (WLVAs). These remote professionals offer administrative, technical and creative support services from any location at affordable rates - revolutionizing outsourcing paradigms within this digital era. This comprehensive article will examine their transformative effects as they reshape the outsourcing landscape.
**Understanding White Label Virtual Assistants**
White-label virtual assistants, also called remote virtual assistants, are skilled professionals who offer administrative, technical, or creative support from a remote location. They operate under the branding of the company that hires them, integrating seamlessly with its operations while maintaining a consistent customer experience and brand identity. This arrangement gives businesses access to specialized skills and expertise without extensive recruitment or training efforts, making it ideal for companies that need specialist skills but lack the resources for in-house talent acquisition.
**The Evolution of Outsourcing**
Outsourcing has long been an effective means for companies looking to reduce costs, access specialized talent, and focus on core competencies. Over time, however, what started as cost-cutting has evolved into a driver of efficiency, innovation, and growth; [white-label virtual assistants](https://www.sagedoer.com/white-label-program/) represent one of these approaches, offering even greater flexibility, scalability, and agility in operations than before. When used strategically, outsourcing can deliver tremendous returns in time saved, reduced operational expenses, and operational agility as a competitive edge.
**The Role of WLVAs in Business Operations**
White-label virtual assistants play an invaluable role in supporting various aspects of business operations, including:
**1. Administrative Support:** WLVAs can offer businesses invaluable administrative assistance by handling daily tasks like booking appointments, monitoring email communications, and organizing documents, so that operations run more smoothly and teams can focus on strategic initiatives.
**2. Customer Service:** As frontline representatives, WLVAs offer timely and personalized assistance to customers via various channels, including phone calls, emails and live chat sessions. By consistently offering top-of-the-line customer care, they help businesses increase customer loyalty while improving satisfaction ratings among existing customers.
**3. Marketing and Sales Support:** With their expertise in market research, lead generation, social media management and campaign execution, WLVAs can assist businesses in drawing in new customers while driving revenue growth. Their digital marketing and sales skills play an invaluable role in creating effective strategies to target audiences while fulfilling business goals.
**4. Specialized Services:** With their expertise in content writing, graphic design, and website development, WLVAs help businesses produce high-quality marketing materials and digital assets to strengthen brand presence and increase engagement - ultimately differentiating themselves in the market and staying ahead of competitors.
**The Benefits of WLVAs for Businesses**
White-label virtual assistants present businesses looking to reinvent outsourcing in the digital era with numerous benefits, including:
**1. Cost-Effectiveness:** Hiring WLVAs can often be more cost-efficient than employing full-time staff, as businesses avoid expenses such as salaries, benefits, and office space rental. Outsourcing tasks to WLVAs also allows businesses to reduce overhead and reallocate resources more efficiently.
**2. Scalability:** Working with WLVAs allows businesses to adjust operations quickly based on demand without incurring additional staff hiring expenses or overtime charges for full-time staff hires. Whether businesses need extra support during peak seasons or wish to expand into new markets, WLVAs can adapt quickly to each business's changing requirements and assist them with reaching their growth goals.
**3. Access to Specialized Skills:** WLVAs bring varied skill sets and expertise, giving businesses access to specialized services without additional training or recruitment costs. Whether firms need assistance with administrative tasks, customer service requests, marketing projects, or special projects, WLVAs possess the expertise required for high-quality results.
**4. Increased Productivity:** By delegating routine tasks to WLVAs, businesses can optimize their workflow and focus on core activities more quickly - leading to greater productivity and efficiency. With their help, more is accomplished faster while goals can be attained more quickly.
**5. Flexibility and Convenience:** Working with virtual assistants allows businesses to access support services when needed without being restricted by traditional office hours or geography. WLVAs can accommodate businesses' schedules for assistance as required and deliver support when needed most.
**Challenges and Considerations**
White-label virtual assistants present numerous advantages, but businesses should also carefully consider potential drawbacks - for instance:
**1. Communication and Collaboration:** Effective communication and collaboration in remote work environments are often challenging, necessitating clear channels of communication as well as project management tools to facilitate effective collaboration among all team members, including WLVAs. Businesses must implement procedures and systems that foster seamless team communication and prevent misunderstandings or delays that might hinder productivity.
**2. Data Security and Confidentiality:** Businesses should implement stringent data security measures when working remotely with remote professionals, including encryption protocols, access controls, and other measures that protect confidential data against unintended access or disclosure.
**3. Quality Control:** Businesses outsourcing tasks to WLVAs should establish clear guidelines, expectations and performance metrics to ensure WLVAs comply with their quality requirements and achieve results that align with business goals.
**4. Cultural and Language Differences:** Businesses operating globally may face cultural and language differences when working with WLVAs from diverse backgrounds. To overcome such hurdles effectively, companies need to foster an atmosphere of inclusivity and respect while offering training resources so their WLVAs can adapt quickly to company cultures and communication norms.
**Case Studies and Success Stories**
Let's examine some case studies to demonstrate how white-label virtual assistants have revolutionized outsourcing for businesses operating online:
**1. Startup Growth Acceleration:** A technology startup utilized the expertise of WLVAs for administrative support, customer service, and marketing to rapidly scale operations and enter new markets with increased speed and ease. With support from WLVAs, the startup was free to focus on product innovation rather than operational needs, driving growth and profitability.
**2. E-Commerce Expansion:** An e-commerce business scaled its customer service operations by outsourcing live chat support to WLVAs, leading to improved response times, customer satisfaction rates and higher conversion rates, leading to revenue growth and market expansion.
**3. Marketing Campaign Success:** A marketing agency hired WLVAs with expertise and dedication in content writing, graphic design and social media management to assist with its various campaigns and help attract new clients while meeting its business goals.
**Future Outlook and Trends**
Demand for white-label virtual assistants (WLVAs) will continue increasing as businesses recognize the value of redefining outsourcing in today's digital era. Key trends shaping WLVA's future may include:
**1. Integrating Artificial Intelligence:** Artificial intelligence (AI) technology will enhance WLVA capabilities by helping them perform more complex tasks and providing personalized assistance. WLVAs equipped with AI can automate repetitive tasks, analyze data for insight and provide meaningful analysis that assists businesses in making educated decisions to drive growth and sustain business expansion.
**2. Expansion of Remote Work Opportunities:** With businesses adopting flexible working arrangements and seeking access to talent from around the globe, remote working is expected to bring increased opportunities for virtual workers like WLVAs. Thanks to advances in technology and communication tools, businesses can collaborate effectively with these remote workers regardless of location, tapping into an international talent pool while accessing specific expertise.
**3. Work-Life Balance:** As businesses prioritize employee well-being and work-life balance, demand for flexible WLVAs that value autonomy should increase. WLVAs can choose their work schedules and locations to strike the optimum balance between work and personal life, leading to higher job satisfaction, increased productivity, and better retention rates for the businesses employing them.
**Conclusion**
White-label virtual assistants have revolutionized outsourcing in the digital era, offering businesses unparalleled flexibility, scalability, and efficiency by harnessing remote professionals' expertise to streamline operations and drive growth and success in today's competitive business climate. Though challenges exist with using white-label virtual assistants (WLVAs), their benefits outweigh the risks, making WLVAs indispensable tools for companies hoping to survive and thrive in this digital era. As businesses adapt and innovate according to shifting market dynamics, WLVAs will play a pivotal part in driving innovation, growth & success!
| jennifer12345 | |
1,829,606 | Women's Cute Sloth Animal T-Shirt - Limited Edition | Cute Sloth Animal T-shirt: A Perfect Blend of Comfort and Style Are you a sloth enthusiast or... | 0 | 2024-04-21T14:47:54 | https://dev.to/frafashion/womens-cute-sloth-animal-t-shirt-limited-edition-1l1p | tshirt, tutorial, react, programming | Cute Sloth Animal T-shirt: A Perfect Blend of Comfort and Style

Are you a sloth enthusiast or simply love adorable animal-themed apparel? Look no further! Our Cute Sloth Animal T-shirt (Design ID: CM_SPQNM9H) combines comfort, durability, and style for both kids and adults.
Here’s why you’ll love it:

**Lightweight and Breathable:**
- Crafted from high-quality materials, this lightweight tee ensures all-day comfort.
- Perfect for playdates, school, or lazy weekends.

**Slim Feminine Fit:**
- Designed to flatter, our tee offers a slim fit.
- If you prefer a relaxed feel, consider ordering one size up.

**Stylish V-Neck:**
- The trendy V-neck adds a touch of elegance.
- Dress it up or down effortlessly.

**Durable Construction:**
- Double-needle sleeve and bottom hems enhance longevity.
- Ideal for active kids and busy days.

**Color Options:**
- Choose from solid colors in 100% cotton or the Ash variant (99% cotton, 1% poly).
- Express your style with hues that match your personality.

**Made-to-Order Excellence:**
- Each tee is proudly printed using top-notch screen-printing or print-to-garment processes.
- While variations in hues and brands may occur due to supply availability, rest assured you’ll receive a quality product.
Whether you’re a sloth lover or simply appreciate comfortable fashion, our Cute Sloth Animal T-shirt is a must-have. Get yours today and embrace the cuteness! 🦥👕
Made in USA - Worldwide Shipping - You can order this lovely T-shirt from here: https://frafashion.com/campaign/womens-cute-sloth-animal-t-shirt?rt=storefront&rn=Fra+Fashion&s=hanes-S04V&c=Deep+Red&p=FRONT | frafashion |
1,829,721 | Getting Started with C# Records | Introduction to C# Records C# 9.0 introduced a new type of class in called record. Record... | 0 | 2024-04-21T18:13:22 | https://antondevtips.com/blog/getting-started-with-csharp-records | programming, csharp, dotnet | ---
canonical_url: https://antondevtips.com/blog/getting-started-with-csharp-records
---
## Introduction to C# Records
C# 9.0 introduced a new type called **record**.
Record is a reference type that offers immutability and equality comparison out of the box.
There are two forms of defining a record: classic and concise. Let's have a look at the classic form first:
```csharp
public record Product
{
public int Id { get; init; }
public string Name { get; init; }
public decimal Price { get; init; }
}
// Create an instance of the Product record
var product = new Product
{
    Id = 1,
    Name = "Phone",
    Price = 500.00m // decimal literals require the "m" suffix
};
```
This type of declaration is similar to regular C# classes, but instead of a **class** keyword - a new **record** keyword is being used.
The best practice when creating a record is to declare all of its properties as **init** only. This way the properties can't be changed, ensuring the immutability of the record's data. You can still define properties with a regular **get-set** if needed.
A new keyword called **required** was introduced in C# 11, which is a great addition to records in my opinion:
```csharp
public record Product
{
public required int Id { get; set; }
public required string Name { get; set; }
public required decimal Price { get; set; }
}
```
This ensures that all properties marked with the **required** keyword must be assigned when creating an object; otherwise a compilation error is raised:
```csharp
// This code doesn't compile as Price property is not assigned with a value
var product = new Product
{
Id = 1,
Name = "Phone"
};
```
## Concise (positional) form of C# Records
C# also provides a shortened form of record declaration:
```csharp
public record Product(int Id, string Name, decimal Price);
var product = new Product(1, "PC", 1000.00m);
```
A single line of code! We assign all the properties using the record's constructor, which is called a primary constructor.
Under the hood this code is translated to a classic form of a record declaration with all properties being **init** only.
This form of a record is also called a **positional record**, as under the hood the properties are generated in the same order as their position in the primary constructor.
The positional form of declaration looks really elegant and concise when a record doesn't have many fields.
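Positional records also get a compiler-generated `Deconstruct` method for their primary-constructor parameters, so an instance can be unpacked into local variables:

```csharp
public record Product(int Id, string Name, decimal Price);

var product = new Product(1, "PC", 1000.00m);

// The compiler-generated Deconstruct method unpacks
// the positional properties into local variables
var (id, name, price) = product;

Console.WriteLine($"{id}: {name} costs {price}");
```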
## Records immutability
Records in C# are immutable by their nature, so you can't change the properties of a record object.
Instead, use the **with** keyword to create a new instance of the record with updated properties:
```csharp
var product = new Product(1, "PC", 1000.00m);
// This code doesn't compile: can't change a record property
product.Price = 1200.00m;
// Instead create a new product with updated price
var product2 = product with
{
Price = 1200.00m
};
```
## Records equality comparison
The default equality comparison behaviour for the different C# types is as follows:
- **struct**: two objects are equal if they have the same type and store the same values.
- **class**: two objects are equal if their references in memory are equal.
- **record**: two objects are equal if they have the same type and store the same values.
Records are reference types but behave like value types when it comes to equality comparison.
```csharp
var product1 = new Product(1, "PC", 1000.00m);
var product2 = new Product(2, "Laptop", 1500.00m);
var product3 = new Product(1, "PC", 1000.00m);
Console.WriteLine(product1 == product2); // prints "false"
Console.WriteLine(product1 == product3); // prints "true" as both records have the same values
```
## Records inheritance
As any other class, records support inheritance. Let's have a look at the following example:
```csharp
public record Product(int Id, string Name, decimal Price);
public record SpecialProduct(int Id, string Name, decimal Price, decimal Discount)
: Product(Id, Name, Price);
var product = new Product(1, "PC", 1000.00m);
var specialProduct = new SpecialProduct(1, "PC", 1000.00m, 100.00m);
```
Primary constructors are utilized here to pass properties to a parent record.
When using a classic form of declaration, records look the same as classes:
```csharp
public record SpecialProduct : Product
{
public SpecialProduct(int id, string name, decimal price, decimal discount)
: base(id, name, price)
{
Discount = discount;
}
public decimal Discount { get; init; }
}
```
## Record structs
In C# 10, **record structs** were introduced:
```csharp
// Positional readonly record struct with a primary constructor
public readonly record struct Point(double X, double Y);

// Classic form with init-only properties
public readonly record struct Point
{
    public double X { get; init; }
    public double Y { get; init; }
}

// Mutable record struct
public record struct Point
{
    public double X { get; set; }
    public double Y { get; set; }
}
```
**Record structs** can be positional with a primary constructor, can use the classic form of declaration, and can even be declared as a **readonly record struct**.
**Record structs** have the same behaviour as regular records but are **value types**.
**NOTE:** A classic record can also be declared using the **record class** phrase, but this is usually shortened to just **record**.
## Record's ToString Method
Records out of the box provide a useful implementation of the **ToString** method:
```csharp
var product1 = new Product(1, "PC", 1000.00m);
Console.WriteLine(product1); // Product { Id = 1, Name = PC, Price = 1000,00 }
```
The ToString method prints the values of all record properties, which is extremely useful for logging and debugging.
## When To Use Records
Records in C# are a powerful feature and are preferred to be used in the following scenarios:
- **Data Transfer Objects (DTOs)**: Records are an excellent choice for DTOs in API development due to their immutable nature. They ensure that data objects can't be accidentally modified in any layer of an application.
- **REPR (Request-Endpoint-Response) pattern**: Records are an excellent choice for request and response models that should not be modified after creation.
- **Value objects in DDD**: In Domain-Driven Design, records can be used to create value objects, where immutability and value-based equality are essential.
- **Read-Only Data**: Records are a preferred choice in any scenario that requires immutable (read-only) data.
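As a small illustration of the value-object scenario (`Money` here is a hypothetical type, not from the article):

```csharp
// A typical DDD value object: two Money instances with the same
// amount and currency are equal by value
public record Money(decimal Amount, string Currency);

var a = new Money(9.99m, "USD");
var b = new Money(9.99m, "USD");

Console.WriteLine(a == b); // prints "true"
```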
While records are a great and useful feature, there are scenarios where they shouldn't be used:
- **Entity Framework Entities**: Entity Framework relies heavily on change tracking for database operations and utilizes a reference equality comparison. That's why records with their value-based equality comparison are a bad choice here.
- **UI Data Binding**: For applications with UI data binding that requires frequent updates (like in WPF or Blazor), the immutability of records can be limiting. Mutable POCO (Plain Old CLR Objects) are typically more suitable in these cases.
- **Performance-Critical Sections**: If you're working in a performance-critical part of your application, using records where object mutations are frequent can lead to performance overhead due to the large number of new record instances created on every modification.
Hope you find this blog post useful. Happy coding!
_Originally published at_ [_https://antondevtips.com_](https://antondevtips.com/blog/getting-started-with-csharp-records)_._
### After reading the post consider the following:
- [Subscribe](https://antondevtips.com/blog/getting-started-with-csharp-records#subscribe) **to receive newsletters with the latest blog posts**
- [Download](https://github.com/AntonMartyniuk-DevTips/dev-tips-code/tree/main/backend/CSharp/CSharp9_Records) **the source code for this post from my** [github](https://github.com/AntonMartyniuk-DevTips/dev-tips-code/tree/main/backend/CSharp/CSharp9_Records) (available for my sponsors on BuyMeACoffee and Patreon)
If you like my content — **consider supporting me**
Unlock exclusive access to the source code from the blog posts by joining my **Patreon** and **Buy Me A Coffee** communities!
[](https://www.buymeacoffee.com/antonmartyniuk)
[](https://www.patreon.com/bePatron?u=73769486) | antonmartyniuk |
1,829,794 | Optimizing PostgreSQL Performance: Navigating the Use of Bind Variables in Version 16 | In PostgreSQL, the use of bind variables, also known as parameterized queries or prepared statements,... | 0 | 2024-04-21T19:38:05 | https://dev.to/shiviyer/optimizing-postgresql-performance-navigating-the-use-of-bind-variables-in-version-16-2ehk | postgres, sql, dba, mysql | In PostgreSQL, the use of bind variables, also known as parameterized queries or prepared statements, is a common practice to execute SQL queries more efficiently and securely by separating the query structure from the data values. These variables help prevent SQL injection attacks and can improve performance by allowing PostgreSQL to cache query plans. When considering the question of "how many bind variables is too many?" in PostgreSQL, particularly in the context of PostgreSQL 16, it's essential to understand that the answer is nuanced and depends on several factors, including the complexity of the query, the database architecture, the specific PostgreSQL configuration, and the hardware resources available.
### Understanding the Impact of Bind Variables
Bind variables are incredibly useful for optimizing database interactions, but their overuse can introduce some challenges:
1. **Query Planning and Optimization**: PostgreSQL's query planner optimizes the execution path based on the query structure and the bind variables. While the initial planning phase may take longer for queries with a high number of bind variables, subsequent executions can benefit from plan caching. However, if the number of bind variables is excessively high, the overhead in planning and the time to cache the plan might outweigh the performance benefits.
2. **Resource Usage**: Every bind variable consumes memory, both on the application side to manage the variable and on the database server to process and execute the query. In scenarios with thousands of bind variables, this overhead could impact overall system performance, especially if many such queries are executed concurrently.
3. **Practical Limits**: Technically, PostgreSQL does not enforce a strict limit on the number of bind variables. However, practical limits are governed by system resource constraints, such as available memory and the maximum allowed size of a query. Exceedingly large queries may also encounter limitations on the maximum size of a TCP/IP packet, which can affect how queries are transmitted to the PostgreSQL server.
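The plan-caching behaviour described above can be seen directly at the SQL level with `PREPARE`/`EXECUTE` (a minimal sketch assuming a hypothetical `products` table):

```sql
-- Plan the query once; $1 is a bind variable
PREPARE product_by_id (int) AS
    SELECT name, price FROM products WHERE id = $1;

-- Reuse the prepared statement with different values
EXECUTE product_by_id(7);
EXECUTE product_by_id(42);
```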
### Best Practices and Recommendations
Given the absence of a hard limit on the number of bind variables, developers must use judgment and best practices to determine the appropriate number:
- **Performance Testing**: Conduct thorough testing with different numbers of bind variables to identify any potential performance bottlenecks or issues. This includes measuring query planning time, execution time, and overall impact on system resources.
- **Array Variables**: For operations that inherently involve multiple values for what could be a single bind variable (e.g., bulk inserts or updates), consider using array variables. This approach can drastically reduce the number of bind variables needed and simplify query structure.
- **System Monitoring and Tuning**: Keep a close watch on PostgreSQL's performance metrics and system resource usage. Adjusting PostgreSQL configuration parameters, such as `work_mem` and `maintenance_work_mem`, can help accommodate queries with a large number of bind variables more effectively.
- **Query Design**: Evaluate whether the complexity of the query and the number of bind variables are necessary for the application's requirements. In some cases, redesigning the query or breaking it into smaller parts can mitigate the need for a high number of bind variables.
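As an illustration of the array-variable recommendation above, a single array parameter can stand in for what would otherwise be hundreds of individual bind variables (again assuming a hypothetical `products` table):

```sql
-- One int[] parameter replaces N individual bind variables
PREPARE products_by_ids (int[]) AS
    SELECT id, name, price
    FROM products
    WHERE id = ANY($1);

EXECUTE products_by_ids(ARRAY[1, 2, 3, 42]);
```

Beyond shrinking the parameter count, one prepared plan serves the query regardless of how many ids are passed in the array.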
### Conclusion
In PostgreSQL 16, while there is no explicit upper limit on the number of bind variables you can use, the practical limit is influenced by the specifics of your application, database design, and server capabilities. The key to effectively using bind variables is to balance their benefits in security and performance optimization against the potential overhead they introduce when used in large numbers. By adhering to best practices in query design, system configuration, and performance testing, developers can make informed decisions on the appropriate use of bind variables in their PostgreSQL applications.
{% embed https://shiviyer.hashnode.dev/understanding-temporary-tables-and-redo-logs-in-postgresql-functions-and-best-practices %}
{% embed https://minervadb.xyz/mastering-postgresql-performance-tackling-long-running-queries/ %}
{% embed https://minervadb.xyz/postgresql-performance-using-pg_test_fsync-for-effective-fsync-method-selection/ %}
{% embed https://minervadb.xyz/creating-high-availability-with-postgresql-14-stream-replication-a-step-by-step-guide/ %} | shiviyer |
1,829,801 | 8 Things I Wish I Understood Earlier In My Career | Our lives and careers are journeys, so we should expect that we’ll be learning for the duration we’re... | 0 | 2024-04-22T16:00:00 | https://www.devleader.ca/2024/04/15/8-things-i-wish-i-understood-earlier-in-my-career/ | career, learning, codenewbie, writing | Our lives and careers are journeys, so we should expect that we’ll be learning for the duration we’re on this planet. As long as we’re moving forward in life, we have experiences to learn from. When I reflect on my career, there are a lot of lessons that I wish I could have taught myself early on. While I can’t talk to my younger self, I can share these with you!
If you enjoy this kind of content, I often write about it in my weekly newsletter. You can subscribe to it for free:
{% embed https://weekly.devleader.ca/embed %}
---
## **1 – Keep Building**
Keep building things. It doesn’t matter if they’re not million-dollar product ideas. Build things to learn about how they work.
The goal when we’re building things, especially early in our software engineering journeys, is not to get-rich-quick. The goal is to practice and learn.
There’s no shortcut to mastery except for understanding sooner that there’s no shortcut. We need to [practice — so build](https://www.devleader.ca/2023/11/13/how-to-build-an-asp-net-core-web-api-a-practical-beginners-tutorial/) software.
## **2 – Reinvent The Wheel**
We’re always told to not reinvent the wheel. There’s no point doing it if someone else has already done it.
But this isn’t always good advice.
Reinventing the wheel is one of the best ways to [understand how something works and the complexity](https://www.devleader.ca/2023/12/29/how-to-understand-a-new-codebase-tips-and-tricks-for-tackling-complex-code/) behind it. It might not be the best thing to chase in a production environment with paying customers, but it’s great for learning.
If you’re ever curious about how things work or interested in the complexities of a system or technology — try building it yourself. You’re almost guaranteed to learn something about what you’re diving into. I’d be shocked if you didn’t!
## **3 – Learn In Public**
Document your journey so others can learn alongside you. This can help reinforce learning in so many ways because you need to find ways to explain the concepts that you think you’re now understanding.
However, don’t masquerade as an expert. Be humble and acknowledge that you’re new and learning. Others will be more willing to help.
Remember that everyone has different opinions. Some people are more loud but it doesn’t make them more correct. In fact, the louder people unwilling to listen to other perspectives are more often than not going to be less helpful. So don’t be discouraged.
If people correct your mistakes, this is a great chance for you to take lessons learned and improve!
## **4 – We Build Software In Teams**
In the large majority of cases, software is built in teams. This means that you need to focus on communication skills, collaboration skills, and ALL of the other skills that aren’t just technical.
When it comes to technical direction, you need to consider that it’s not just your choices.
This is a good thing — because the different backgrounds, experiences, and perspectives will allow you to build better software as a group.
## **5 – Composition > Inheritance**
Inheritance gets pushed a LOT when teaching programming but if you gravitate towards using composition early then you’ll “shortcut” a few years of really crappy code.
Seriously. I’m guilty of writing mountains of code with ridiculously long inheritance hierarchies. I can’t blame this entirely on WinForms and the Controls paradigm that was used… I should have taken some responsibility!
I’d spend a good portion of my early career rewriting a lot of overly complex inheritance code because it became too complex.
## **6 – Don’t Fear Learning**
Don’t be afraid to learn new things. Building expertise can feel great, but don’t let it make you feel so safe that new things feel scary.
You’re going to suck at new things but it’s very temporary. You’ll prove to yourself repeatedly that you eventually get working knowledge and feel very comfortable.
In my career, my best growth opportunities were when I was forced into something new and uncomfortable. Every time I came out on top.
Discomfort led to incredible learning experiences.
## **7 – Own Your Career Development**
Nobody is as interested in your [career progression](https://www.devleader.ca/2024/01/06/take-control-of-career-progression-dev-leader-weekly-25/) as you are. Great managers will indeed help encourage you and give you opportunities to move things along.
But we can’t just sit back and wait for this. We won’t always have amazing managers.
The sooner you realize that being passive is not a good strategy for progression, the sooner you can take action.
## **8 – Remember To Have Fun**
You’re in this for the long haul, spending many hours of your life building software. You better enjoy it.
Not every day is going to be easy. Not every day will be interesting. Not every day will be exciting.
But overall, you need to find ways to enjoy what you’re doing. It will help you learn. It will help you be engaged. It will help you be motivated to push through interesting challenges.
Software engineering is very mentally demanding and it’s often hard to find a good work-life balance… So keep this one in mind throughout your career.
---
What would you add or change? If you found this insightful and you’re looking for more learning opportunities, consider [subscribing to my free weekly software engineering newsletter](https://subscribe.devleader.ca/) and check out my [free videos on YouTube](https://www.youtube.com/@devleader?sub_confirmation=1)! Meet other like-minded software engineers and [join my Discord community](https://www.devleader.ca/discord-community-access/)!
{% embed https://weekly.devleader.ca/embed %}
---
# **Want More Dev Leader Content?**
* Follow along on this platform if you haven’t already!
* Subscribe to my free weekly software engineering and dotnet-focused newsletter. I include exclusive articles and early access to videos:
[**SUBSCRIBE FOR FREE**](https://subscribe.devleader.ca/)
* Looking for courses? Check out my offerings:
[**VIEW COURSES**](https://devleader.ca/courses)
* E-Books & other resources:
[**VIEW RESOURCES**](https://products.devleader.ca/)
* Watch hundreds of full-length videos on my YouTube channel:
[**VISIT CHANNEL**](https://youtube.com/@devleader?sub_confirmation=1)
* Visit my website for hundreds of articles on various software engineering topics (including code snippets):
[**VISIT WEBSITE**](https://devleader.ca/)
* Check out the repository with many code examples from my articles and videos on GitHub:
[**VIEW REPOSITORY**](https://github.com/ncosentino/DevLeader) | devleader |
1,830,026 | Hi , Still new | I'm thrilled to make my first post in the dev community, where I look forward to learning new things... | 0 | 2024-04-22T05:11:30 | https://dev.to/mayank6787/hi-still-new-1555 | webdev, newstart | I'm thrilled to make my first post in the dev community, where I look forward to learning new things and exploring diverse groups. | mayank6787 |
1,830,278 | Prophecy of Redux: State Management in Large React Apps | In the mystical realm of large-scale React applications, managing state becomes a saga of complexity... | 27,083 | 2024-04-22T11:00:00 | https://dev.to/kigazon/prophecy-of-redux-state-management-in-large-react-apps-49d5 | react, webdev, javascript, intermediate | In the mystical realm of large-scale React applications, managing state becomes a saga of complexity and cunning. Redux, the revered oracle of state management, emerges as the protagonist in this tale, offering a robust solution to tame the sprawling state of enterprise-level applications. This extensive guide explores the depths of Redux, from its fundamental principles to advanced integration techniques, providing you with the knowledge to master state management in your React applications.
## The Genesis of Redux
Redux is a predictable state container for JavaScript applications, inspired by Flux and functional programming principles. It centralizes application state and logic, enabling powerful capabilities like undo/redo, state persistence, and more. Understanding Redux requires comprehension of three core principles:
- **Single source of truth**: The state of your entire application is stored in an object tree within a single store.
- **State is read-only**: The only way to change the state is to emit an action, an object describing what happened.
- **Changes are made with pure functions**: To specify how the state tree is transformed by actions, you write pure reducers.
## Setting Up Redux in a React Application
To integrate Redux in a React project, you must set up the store and provide it to your React application. Here’s a basic setup:
```javascript
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import rootReducer from './reducers';
const store = createStore(rootReducer);
function App() {
return (
<Provider store={store}>
<MyComponent />
</Provider>
);
}
```
This snippet creates a Redux store using the `createStore` function and provides it to the React app through the `Provider` component.
## Crafting Actions and Reducers
In Redux, actions are payloads of information that send data from your application to your store. Reducers specify how the application's state changes in response to actions:
```javascript
// Action Types
const ADD_TODO = 'ADD_TODO';
// Action Creators
function addTodo(text) {
return {
type: ADD_TODO,
text
};
}
// Reducers
function todos(state = [], action) {
switch (action.type) {
case ADD_TODO:
return [...state, { text: action.text, completed: false }];
default:
return state;
}
}
```
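Because reducers are pure functions, the dispatch loop at the core of the store is easy to illustrate. The hand-rolled miniature store below is only a teaching sketch, not Redux's actual implementation:

```javascript
// A miniature store: hold state, let the reducer compute the next state on dispatch.
function createTinyStore(reducer, initialState) {
  let state = reducer(initialState, { type: '@@INIT' }); // lets the reducer apply defaults
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // the only way state ever changes
      listeners.forEach((fn) => fn());
    },
    subscribe(fn) {
      listeners.push(fn);
    },
  };
}

// Reusing the todos reducer from above:
function todos(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      return [...state, { text: action.text, completed: false }];
    default:
      return state;
  }
}

const store = createTinyStore(todos, undefined);
store.dispatch({ type: 'ADD_TODO', text: 'Learn Redux' });
console.log(store.getState()); // [ { text: 'Learn Redux', completed: false } ]
```

Every state transition flows through `dispatch`, which is exactly what makes features like time-travel debugging possible: replaying the same actions always reproduces the same state.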
## Advanced Redux: Middleware, Async Actions, and Beyond
For handling asynchronous logic, side effects, and more complex state scenarios, Redux middleware like Redux Thunk or Redux Saga can be integrated:
```javascript
import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';
import rootReducer from './reducers';
const store = createStore(
rootReducer,
applyMiddleware(thunk)
);
// Async action creator
function fetchTodos() {
return function(dispatch) {
fetch('/api/todos')
.then(response => response.json())
.then(json => dispatch({ type: 'ADD_TODOS', todos: json }));
};
}
```
This setup uses Redux Thunk middleware to handle asynchronous actions that fetch data from an API and then dispatch actions to the store.
## Redux in Large-Scale Applications
In large applications, Redux’s ability to manage complex state interactions shines. It allows you to:
- **Modularize state logic**: Redux encourages splitting reducer logic across different files, enabling you to manage each piece of state independently.
- **Enhance development workflow**: Features like hot reloading and time-travel debugging provide developers with powerful tools to enhance productivity and debug effectively.
- **Connect Redux with React Router**: Manage route transitions and navigate programmatically with the power of Redux middleware.
## Real-World Case Studies
Exploring real-world use cases of Redux in large applications, such as in e-commerce platforms or real-time tracking applications, showcases its effectiveness in managing high volumes of state changes and interactions across multiple components.
## Like, Comment, Share
Dive deep into the prophetic powers of Redux to foresee and manage the state of your applications effectively. If you’ve wielded Redux in your projects, share your chronicles of conquest or seek counsel in your current quests. Like this guide if it has illuminated your path to state management mastery, and share it to aid fellow developers in their journey through the arcane arts of Redux. | kigazon |
1,830,437 | Decision Table Testing | Decision table testing is black box testing. It’s one of the testing techniques. Used to test systems... | 0 | 2024-04-22T12:29:26 | https://dev.to/gayu99/decision-table-testing-10ei | Decision table testing is black box testing. It’s one of the testing techniques. Used to test systems or systems that involve complex business logic with multiple inputs and conditions.
**Example**: If the user name and password are both correct, the user will be redirected to the home screen; if any input is invalid, an error message will be displayed on the screen.
| Conditions        | TC1       | TC2         | TC3         | TC4         |
| ----------------- | --------- | ----------- | ----------- | ----------- |
| Username (input)  | T (valid) | T (valid)   | F (invalid) | F (invalid) |
| Password (input)  | T (valid) | F (invalid) | T (valid)   | F (invalid) |
| Action (output)   | H (pass)  | E (fail)    | E (fail)    | E (fail)    |
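The table above can be collapsed into a tiny function and exercised case by case. This is an illustrative JavaScript sketch, where H means the user is redirected to the home screen and E means the error message is shown:

```javascript
// The decision table as code: each test case maps a (username, password)
// validity pair to the expected action.
// H = redirect to home screen (pass), E = show error message (fail).
function loginAction(usernameValid, passwordValid) {
  return usernameValid && passwordValid ? 'H' : 'E';
}

// The four columns of the table:
console.log(loginAction(true, true));   // H (TC1)
console.log(loginAction(true, false));  // E (TC2)
console.log(loginAction(false, true));  // E (TC3)
console.log(loginAction(false, false)); // E (TC4)
```

Writing the table this way makes it obvious that the four test cases cover every combination of the two conditions.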
| gayu99 | |
1,830,455 | Self-Care for ADHD Learners: Prioritizing Well-Being in Your Learning Journey (5/12) | Introduction: Welcome back to our 12-part series dedicated to supporting ADHD learners and... | 27,224 | 2024-04-22T12:52:45 | https://dev.to/techtobe101/self-care-for-adhd-learners-prioritizing-well-being-in-your-learning-journey-512-301k | adhd, techtobe101, selfcare, studytips | ## Introduction:
Welcome back to our 12-part series dedicated to supporting ADHD learners and individuals with similar learning differences. In this article, we'll explore the importance of self-care and strategies for prioritizing your mental and physical well-being amidst the challenges of academia.
## Understanding the Importance of Self-Care:
Living with ADHD can be demanding, and it's essential to prioritize self-care to maintain balance and resilience. Self-care encompasses various activities and practices that nurture your mind, body, and spirit, ultimately enhancing your overall well-being and ability to cope with stressors.
## Establishing Self-Care Routines:
Create daily or weekly routines that incorporate self-care activities tailored to your needs and preferences. This could include practicing mindfulness or meditation, engaging in physical exercise, getting sufficient sleep, maintaining a healthy diet, and setting aside time for hobbies and relaxation. Consistency is key to reaping the benefits of self-care.
## Setting Boundaries and Saying No:
As an ADHD learner, you may feel pressure to constantly push yourself and take on more than you can handle. Learn to recognize your limits and set boundaries to protect your time and energy. Don't hesitate to say no to commitments or activities that overwhelm you or detract from your well-being.
## Seeking Support and Connection:
Don't hesitate to reach out for support from friends, family, mentors, or mental health professionals when needed. Surround yourself with understanding and empathetic individuals who can offer encouragement, guidance, and a listening ear. Building a support network can provide invaluable assistance on your journey.
## Series Overview:
In the upcoming articles, we'll continue to explore strategies and tips tailored to help ADHD learners succeed in their learning journey. From navigating school or coursework to developing resilience and pursuing passions, each article will provide practical insights to support you along the way.
## Conclusion:
Self-care is not selfish; it's essential for maintaining your well-being and resilience as an ADHD learner. By prioritizing self-care and establishing routines that nurture your mind, body, and spirit, you can better cope with the challenges of academia and thrive in your learning journey. Stay tuned for the next installment in our series, where we'll explore ADHD-friendly tech tools to enhance productivity and organization.
Tags: ADHD, self-care, well-being, mental health, resilience
Share your favorite self-care practices and how they've benefited you in the comments below, and join us for the next part of our series! | techtobe101 |
1,830,528 | Let's Build a Unique Tech YouTube Channel Together - Share Your Ideas and Be Part of the Journey! | Introduction Are you passionate about technology and love discovering new content? Join me... | 0 | 2024-04-22T14:32:43 | https://dev.to/huseyn0w/lets-build-a-unique-tech-youtube-channel-together-share-your-ideas-and-be-part-of-the-journey-2i93 | community, discuss, career, help | ### Introduction
Are you passionate about technology and love discovering new content? Join me as we create a one-of-a-kind Tech YouTube channel tailored by the tech community, for the tech community! Your insights are not only welcomed but essential.
### Why Your Voice Matters
In a world brimming with tech gurus and content creators, your unique perspective is what can make the difference between another tech channel and a community-driven success. That’s where you come in!
### Who I Am
With eight years under my belt as a Software Engineer, I've journeyed from rookie to seasoned pro, tackling job interviews, working across diverse environments, and relocating internationally for my dream job. I'm ready to share these insights and more, but more importantly, I want to learn from you to ensure we cover what truly matters.
### What I'm Thinking - And What Do You Think?
I’m considering a mix of content styles and subjects. What resonates with you the most, or what do you feel is missing from the tech community online?
- Job Interviews in Tech: Tips and real-life stories to help you succeed.
- Daily Tech Vlogs: A peek into the day-to-day of a software engineer.
- Launching Tech Products: From concept to market—lessons and guides.
- Educational Content: Deep dives into complex topics made accessible.
- Tech Podcasts: Discussions with industry experts and insiders.
- Unboxing and Reviews: The latest gadgets, unboxed and reviewed.
### How to Get Involved
Drop your suggestions in the comments, or reach out directly via [Twitter](https://twitter.com/huseyn0w) or [LinkedIn](https://www.linkedin.com/in/huseyn0w/). Don’t forget to subscribe to my [YouTube channel](https://youtube.com/@ElmanTechnology) for updates and, hopefully, content that strikes a chord with you. Let's make tech content that's as dynamic and diverse as the community around it!
### Conclusion
Your feedback is the cornerstone of this project. Help shape a channel that not only interests you but also provides real value to the tech community. Together, let's make something incredible!
1,830,760 | Cloud Resume Challenge | Transitioning to the Cloud: My Journey with Azure Hey everyone! My name is Evan Dolatowski... | 0 | 2024-04-23T00:45:03 | https://dev.to/gnarlylasagna/cloud-resume-challenge-2jik | cicd, githubactions, azure, serverless |
# Transitioning to the Cloud: My Journey with Azure
Hey everyone! My name is Evan Dolatowski from Houston, Texas, and I’m a Full Stack developer. I'm currently working towards transitioning to the Cloud industry. The realization of an important gap in my knowledge and the adventure of educating myself more deeply about Cloud technologies began when I ended up hosting my full stack portfolio project using Netlify, Heroku, and Amazon S3 buckets. I didn’t expect to use Amazon S3 Buckets, but it was needed to solve an issue Heroku has with persistent file storage caused by the free hosting. As soon as I realized I needed this third hosting account for my application, I knew my use of the cloud was far from optimized. I needed to learn how to use one of the three large cloud providers: AWS, GCP, or Azure, and I wanted to make sure I knew how to use it well.
After researching jobs in my local area, I found Azure jobs are by far more common than AWS or GCP. So I knew my Cloud platform, familiarized myself with the Certifications available and got the AZ-900 Azure Fundamentals. My next goal was the AZ-104. Once I felt confident in the material and almost ready to take the Exam, I wanted to get some real hands-on experience using Azure services for a difficult project. This is where I discovered The Cloud Resume Challenge.
## The Objectives
1. The Az-900 Azure Fundamentals certification was a prerequisite to the challenge (luckily I already had it).
2. Your resume needs to be written in HTML, styled with CSS, and hosted on an Azure Storage static website.
3. The Azure Storage website URL should use HTTPS for security, using Azure CDN, and a custom DNS domain name should point to the Azure CDN endpoint.
4. On the website there needs to be a visitor counter that uses a JavaScript request to a database containing the count.
5. This database will be an Azure Cosmos DB account.
6. The front end should not speak directly to the DB; an API written and tested in Python (Azure Functions with an HTTP trigger) will accept the request and send SQL queries to the database.
7. The backend portion of the project should be defined using an ARM template on a Consumption plan and not manually in the Azure Portal.
8. The Git Repo needs to use CI/CD for both front and back end set up through Github actions for quick automated deployment.
And the final step, write a short blog post describing some things you learned while working on this project.
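The visitor counter from steps 4 through 6 boils down to a small fetch on page load. The sketch below is illustrative only; the endpoint URL and element id are placeholders, not the real project's values:

```javascript
// Placeholder endpoint: in the real project this is the Azure Function's HTTP trigger URL.
const API_URL = 'https://example.azurewebsites.net/api/GetVisitorCount';

// Fetch the count; fetchFn is injectable so the logic is easy to test without a network.
async function getVisitorCount(fetchFn = fetch) {
  const response = await fetchFn(API_URL);
  const { count } = await response.json();
  return count;
}

// In the browser, on page load:
// getVisitorCount().then((c) => { document.getElementById('visitor-count').textContent = c; });
```

Keeping the API call in one small function like this also keeps the front end ignorant of the database, which is exactly what step 6 asks for.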
## Frontend
Creating the front end and deploying it to the Azure Storage static website wasn’t nearly as difficult as getting my custom domain set up. Setting up a CDN profile, setting up a DNS zone, and purchasing a custom domain were all very new tasks for me. A lot of trial and error went into this, probably more than I needed. I think if I had updated the DNS settings and patiently waited before testing, it would have worked out for me; but in the end, I probably deleted and recreated all my frontend resources in my resource group over a dozen times, sometimes with the Azure CLI and sometimes with the Portal. I spent a lot of time interacting with the Azure CLI and Portal, which was the reason I wanted to do the challenge, so I'm kind of glad it didn’t just work right away.
## DB/Serverless Python API
Next, I needed to make the DB and the Azure serverless function using Python. This is something silly, but I think Azure Cosmos DB is so cool. It’s so quick to create a SQL DB with Azure, connect a Python script to it using the Azure Cosmos DB SQL API library, and start running queries. I was excited that the challenge already recommends serverless functions, because they're cost-effective for a small project as well as a relatively new technology, and I had already wanted to learn more about them. Initializing, creating the template, and testing this function were easy enough thanks to the Azure CLI. Big thanks to the `func init`, `func new`, and `func start` commands.
## CI/CD
Finally, we have CI/CD. I love the end result of this step. I think it’s literally so cool that one minute after I type ‘git push’ in the terminal, the new update is deployed to my custom URL (if my code passes the tests, of course). I've used CI/CD before, but I wanted to get more practice with it, so I was excited it's included in the challenge. A ton of trial and error went into my frontend YAML file, dealing with Node versions and dependencies; my code also runs a build script that puts the build into a directory called ‘src’, and this src directory contains the files that are then served on my Azure Storage static site. The backend was much less of a headache, but both were a much-welcomed challenge. It’s nice to have the slow progress of tackling one small problem at a time until it finally deploys with no errors.
## Conclusion
I feel great about using Azure. I spent a lot of time using the Azure CLI and Portal to create, update, and delete many kinds of resources while doing this challenge, which was a very important part of this challenge to me. I'm excited to spend a bit more time studying and get my second certification, the AZ-104.
Thank you Forrest Brazeal for creating this challenge, and thank you for creating the opportunity to meet the other takers of this challenge and network with like-minded individuals.
[EvanDolatowski.com](https://www.evandolatowski.com/)
[Link to frontend code](https://github.com/GnarlyLasagna/portfolio-site)
[Link to backend code](https://github.com/GnarlyLasagna/portfolio-server-less-py)
Feel free to reach out on [LinkedIn](https://www.linkedin.com/in/evan-dolatowski-a928aa21b/) if you’ve got any questions.
| gnarlylasagna |
1,830,915 | Get Unique Elements in a Python List | In Python we can get the unique elements from a list by converting it to a set with set(). Sets are a... | 0 | 2024-04-29T23:51:26 | https://nelson.cloud/get-unique-elements-in-a-python-list/ | python, beginners | ---
title: Get Unique Elements in a Python List
published: true
date: 2024-04-22 00:00:00 UTC
tags: python, beginners
canonical_url: https://nelson.cloud/get-unique-elements-in-a-python-list/
---
In Python we can get the unique elements from a list by converting it to a set with `set()`. Sets are a [collection of **unique** elements](https://docs.python.org/3/tutorial/datastructures.html#sets):
```python
values = [1, 1, 1, 2, 2, 3, 3, 3, 3]
values = set(values)
print(values)
```
Output:
```
{1, 2, 3}
```
And if we still need a list instead of a set, we can easily convert back to a list using `list()`:
```python
values = [1, 1, 1, 2, 2, 3, 3, 3, 3]
# convert to set to get unique values
values = set(values)
# convert back to list
values = list(values)
print(values)
```
Output:
```
[1, 2, 3]
```
---
I learned this trick after realizing that unfortunately Python doesn't have a [`uniq()` method like Ruby](https://apidock.com/ruby/Array/uniq) that does this exact thing.
There are also other ways of getting unique values from a list that you can read about in [this GeeksforGeeks article](https://www.geeksforgeeks.org/python-get-unique-values-list/). I didn't cover them because I feel like the easiest way is to use `set()`.
## References
- https://docs.python.org/3/tutorial/datastructures.html#sets
- https://www.geeksforgeeks.org/python-get-unique-values-list/
| nelsonfigueroa |
1,830,947 | Leveraging Mob Programming for Knowledge Sharing and Instant Code Review | Hey fellow developers, Let's talk about a powerful practice that not only fosters collaboration but... | 0 | 2024-04-22T22:24:57 | https://dev.to/ivan-klimenkov/leveraging-mob-programming-for-knowledge-sharing-and-instant-code-review-23g9 | Hey fellow developers,
Let's talk about a powerful practice that not only fosters collaboration but also accelerates knowledge sharing and ensures instant code review: Mob Programming. In an era where teamwork and efficiency are paramount, Mob Programming emerges as a potent strategy for software development teams.
**What is Mob Programming?**
Mob Programming is a collaborative approach where the entire team works together on the same task, at the same time, on the same computer. Unlike traditional methods where each developer works independently or in pairs, Mob Programming brings everyone together, encouraging collective problem-solving and knowledge exchange.
**How Does Mob Programming Facilitate Knowledge Sharing?**
**Continuous Learning:** By working together, team members can share their expertise and learn from each other in real-time. Whether it's about coding best practices, domain knowledge, or new techniques, Mob Programming creates a conducive environment for continuous learning.
**Diverse Perspectives:** Every team member brings a unique perspective to the table. Mob Programming allows for the exploration of various approaches and solutions, leading to richer outcomes. It's a platform where juniors and seniors alike can contribute ideas, fostering a culture of inclusivity and innovation.
**Instant Feedback:** One of the most significant advantages of Mob Programming is instant feedback. With multiple eyes on the code, potential issues are identified and resolved promptly, improving code quality and reducing the likelihood of bugs slipping into production.
**Benefits of Instant Code Review in Mob Programming:**
**Quality Assurance:** With everyone actively participating in the coding process, code quality naturally improves. Any errors or inefficiencies are addressed immediately, ensuring that the final product meets high standards.
**Knowledge Transfer:** Code review sessions in Mob Programming aren't just about finding bugs; they're also opportunities for knowledge transfer. Senior developers can share their expertise with junior team members, imparting valuable insights and best practices.
**Team Cohesion:** Mob Programming fosters a sense of camaraderie among team members. By working closely together and offering constructive feedback, developers build trust and strengthen their bonds, leading to a more cohesive and productive team.
**Tips for Successful Mob Programming Sessions:**
**Establish Clear Goals:** Define the objectives of each Mob Programming session to keep everyone focused and on track.
**Rotate Roles:** Rotate the driver (the one typing) regularly to ensure that everyone has a chance to contribute and stay engaged.
**Encourage Communication:** Communication is key in Mob Programming. Encourage open dialogue and active participation from all team members.
**Embrace Mistakes:** Mistakes are inevitable, but they're also valuable learning opportunities. Encourage a culture where team members feel comfortable making mistakes and learning from them.
**Reflect and Improve:** After each Mob Programming session, take the time to reflect on what went well and what could be improved. Use these insights to refine your approach and make future sessions even more effective.
In conclusion, Mob Programming is not just a development technique; it's a mindset shift towards collaboration, continuous learning, and collective ownership. By leveraging Mob Programming for knowledge sharing and instant code review, teams can boost productivity, enhance code quality, and foster a culture of innovation. So why not give it a try and experience the transformative power of Mob Programming firsthand?
Happy coding!
| ivan-klimenkov | |
1,831,091 | Crafting Reliable Web Apps: Embracing Offline Accessibility with Service Workers | Ever experienced the frustration of a web app crashing due to a lost network connection? Discover how... | 0 | 2024-04-23T04:22:56 | https://dev.to/sangeetha/crafting-reliable-web-apps-embracing-offline-accessibility-with-service-workers-3if1 |
Ever experienced the frustration of a web app crashing due to a lost network connection? Discover how Service Workers can revolutionize your web applications, ensuring they remain functional even offline. Accompanied by video demonstrations, witness the transformation from a traditional web app to one empowered by Service Workers.
**Understanding Service Workers:**
Service Workers act as silent guardians, caching essential resources and enabling offline functionality. By intercepting network requests, they empower web apps to function seamlessly even when connectivity is lost.
**Implementing Service Workers:**
Integrating Service Workers into your web app is simple. Register a Service Worker script in your app's code to control certain browsing aspects, including resource caching for offline use.
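At the heart of an offline-capable Service Worker is usually a cache-first fetch handler. Its core logic can be modeled in plain JavaScript; here a `Map` stands in for the browser's Cache API, and a real worker would use `caches.match` and `cache.put` inside its `fetch` event listener:

```javascript
// Cache-first strategy: answer from cache when possible, otherwise hit the
// network and remember the response so the next request works offline.
async function cacheFirst(request, cache, network) {
  if (cache.has(request)) return cache.get(request); // offline-friendly hit
  const fresh = await network(request);              // only reached while online
  cache.set(request, fresh);                         // warm the cache for next time
  return fresh;
}

// Simulated run: the first request warms the cache, the second succeeds "offline".
const cache = new Map();
const online = async (url) => `response for ${url}`;
const offline = async () => { throw new Error('network unavailable'); };

(async () => {
  await cacheFirst('/index.html', cache, online);              // goes to the network
  const res = await cacheFirst('/index.html', cache, offline); // served from cache
  console.log(res); // response for /index.html
})();
```

In the actual app, the page registers the worker once with `navigator.serviceWorker.register('/sw.js')`, and from then on the browser routes requests through the worker's fetch handler, even when the network is gone.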
**Simulating Offline Scenarios:**
Use browser developer tools to simulate offline conditions for testing. Throttle the network connection to observe how your web app responds in offline mode, optimizing its functionality accordingly.
**Creating an Offline-First Experience:**
Design your web app with offline accessibility in mind, ensuring smooth user experiences regardless of connectivity.
**Testing and Deployment:**
Thoroughly test your offline-ready web app before deployment. Simulate various network conditions to confirm reliable performance. Deploy with confidence to provide users with a seamless experience anywhere, anytime.
**Conclusion:**
Service Workers offer a solution to unreliable internet connections, transforming web apps into dependable companions. Embrace offline accessibility today to deliver web apps that work flawlessly, regardless of connectivity challenges | sangeetha | |
1,831,125 | Conflict Resolution Training | Conflict is an inevitable part of human interaction, whether in personal relationships, professional... | 0 | 2024-04-23T05:51:52 | https://dev.to/maheshmahe3286/conflict-resolution-training-3f49 | education, training, certification, conflict |
Conflict is an inevitable part of human interaction, whether in personal relationships, professional settings, or within communities. However, unresolved conflicts can lead to misunderstandings, tension, and even escalation into more significant issues. That's where conflict resolution training comes into play. By equipping individuals with the necessary skills and strategies to address and resolve conflicts constructively, conflict resolution training not only fosters healthier relationships but also promotes a more harmonious and productive environment. In this article, we'll explore the importance of conflict resolution training and the key principles and techniques it encompasses.
**Understanding Conflict**
Before delving into conflict resolution strategies, it's crucial to understand the nature of conflict itself. Conflict arises when individuals or groups perceive a divergence of interests, goals, or values. It can manifest in various forms, including interpersonal conflicts, team conflicts, or organizational conflicts. While conflict is often viewed negatively, it can also serve as a catalyst for positive change and growth if managed effectively.
**The Importance of Conflict Resolution Training**
[Conflict resolution training course](https://www.sprintzeal.com/course/conflict-resolution-training) provides individuals with the knowledge, skills, and tools necessary to navigate conflicts constructively and reach mutually beneficial outcomes. Here are some key reasons why conflict resolution training is essential:
**Improved Communication**: Effective communication is fundamental to resolving conflicts. Conflict resolution training helps individuals enhance their communication skills, including active listening, empathy, and assertiveness, enabling them to express their needs and concerns while also understanding those of others.
**Enhanced Collaboration**: Conflict resolution training fosters a collaborative mindset, emphasizing the importance of working together to find solutions that satisfy the interests of all parties involved. By promoting cooperation and teamwork, conflict resolution training strengthens relationships and builds trust among individuals and teams.
**Reduced Stress and Tension**: Unresolved conflicts can create tension and stress, negatively impacting morale and productivity. Conflict resolution training equips individuals with strategies for managing and de-escalating conflicts effectively, reducing stress levels and creating a more positive work environment.
**Conflict Prevention**: In addition to resolving existing conflicts, conflict resolution training also focuses on preventing conflicts from escalating or recurring in the future. By identifying potential sources of conflict and addressing them proactively, organizations can minimize disruptions and maintain a harmonious workplace culture.
**Leadership Development**: Effective conflict resolution is a hallmark of strong leadership. Conflict resolution training helps develop leaders who are skilled at managing conflicts, facilitating dialogue, and fostering a culture of collaboration and mutual respect within their teams and organizations.
**Key Principles of Conflict Resolution Training**
Conflict resolution training is guided by several core principles that serve as the foundation for effective conflict resolution. These principles include:
**Active Listening**: Encouraging individuals to listen attentively to the perspectives and concerns of others without interrupting or passing judgment. Active listening fosters empathy and understanding, laying the groundwork for constructive dialogue and problem-solving.
**Empathy and Perspective-Taking**: Encouraging individuals to put themselves in the shoes of others and consider their viewpoints and emotions. Empathy fosters compassion and promotes a sense of common humanity, facilitating reconciliation and compromise.
**Collaborative Problem-Solving**: Emphasizing the importance of working together to identify and implement solutions that address the underlying causes of conflict and meet the needs of all parties involved. Collaborative problem-solving encourages creativity and innovation, leading to win-win outcomes.
**Assertiveness and Respect**: Encouraging individuals to assert their needs and concerns assertively while also respecting the rights and perspectives of others. Assertive communication promotes honesty and transparency, fostering trust and openness in relationships.
**Conflict De-escalation**: Equipping individuals with strategies for managing conflict situations calmly and constructively, preventing them from escalating into more significant issues. Conflict de-escalation techniques include active listening, reframing, and finding common ground.
## Practical Techniques for Conflict Resolution
In addition to understanding the principles of conflict resolution, individuals participating in conflict resolution training learn a variety of practical techniques and strategies for resolving conflicts effectively. Some common techniques include:
**Negotiation**: Engaging in a process of negotiation to find mutually acceptable solutions to conflicts. Negotiation involves identifying interests, exploring options, and reaching agreements that satisfy the needs of all parties involved.
**Mediation**: Facilitating dialogue between conflicting parties with the assistance of a neutral third-party mediator. Mediation aims to help parties communicate effectively, identify common ground, and reach mutually satisfactory resolutions.
**Conflict Mapping**: Analyzing conflicts to identify underlying causes, key stakeholders, and potential solutions. Conflict mapping helps individuals gain a deeper understanding of the dynamics of the conflict and develop strategies for addressing it effectively.
**Constructive Feedback**: Providing feedback to individuals involved in a conflict in a constructive and non-judgmental manner. Constructive feedback focuses on behavior rather than personality, highlighting areas for improvement and suggesting alternative approaches.
**Boundary Setting**: Establishing clear boundaries and expectations regarding acceptable behavior and communication in conflict situations. Boundary setting helps prevent conflicts from escalating and creates a framework for resolving disputes respectfully.
## Conclusion
Conflict resolution training plays a vital role in promoting healthy and productive relationships, both in the workplace and beyond. By equipping individuals with the necessary skills and strategies to address conflicts constructively, conflict resolution training fosters open communication, collaboration, and mutual respect. By embracing the principles of active listening, empathy, collaborative problem-solving, assertiveness, and conflict de-escalation, individuals can navigate conflicts with confidence and achieve positive outcomes for all parties involved. Ultimately, conflict resolution training empowers individuals to transform conflicts into opportunities for growth, learning, and reconciliation, contributing to a more harmonious and resilient society. | maheshmahe3286 |
1,831,134 | From AI to AR: The Latest Technologies Reshaping the Retail Landscape | The retail industry has come a long way since its humble beginnings as small, local shops. With the... | 0 | 2024-04-23T06:02:14 | https://dev.to/sageit/from-ai-to-ar-the-latest-technologies-reshaping-the-retail-landscape-2be0 | ai, automation, cloud, devops | The retail industry has come a long way since its humble beginnings as small, local shops. With the rise of globalization and technological advancements, retailers have had to adapt to keep up with changing customer demands and market trends. Today, the retail industry is more competitive than ever, with customers expecting seamless experiences across online and offline channels.
One way that retailers can stay competitive is by leveraging emerging technologies. Technologies like artificial intelligence, machine learning, the internet of things, augmented reality, virtual reality, and blockchain are revolutionizing the retail industry by providing new ways to engage with customers, improve operations, and gain insights into customer behavior. These technologies can help retailers provide personalized experiences, streamline supply chain management, increase efficiency, and reduce costs.
In order to remain relevant in today's market, retailers need to keep up with these technological advancements. Failure to do so could lead to missed opportunities, lost customers, and decreased revenue. By embracing emerging technologies, retailers can not only stay competitive, but also set themselves apart from their competitors and provide better experiences for their customers.
Emerging technologies are revolutionizing the retail industry, providing new ways for retailers to engage with customers, streamline operations, and gain insights into customer behavior. Here are some of the emerging technologies that are shaping the retail industry today:
**Artificial Intelligence (AI):** AI is a branch of computer science that allows machines to learn from data and improve their performance over time. In the retail industry, [AI solutions](https://bit.ly/4b5wHnM) are being used to provide personalized experiences to customers, improve inventory management, and optimize pricing.
**Machine Learning (ML):** ML is a subset of AI that focuses on developing algorithms that can learn from data and make predictions or decisions without being explicitly programmed. In the retail industry, ML is being used to analyze customer data and provide personalized recommendations, detect fraud, and optimize supply chain management.
**Internet of Things (IoT):** IoT refers to the network of physical devices that are connected to the internet and can collect and exchange data. In the retail industry, IoT is being used to track inventory levels, monitor customer behavior, and optimize store layouts.
**Augmented Reality (AR):** AR is a technology that overlays digital information onto the real world. In the retail industry, AR is being used to provide customers with immersive shopping experiences, such as virtual try-ons or interactive product displays.
**Virtual Reality (VR):** VR is a technology that immerses users in a completely digital environment. In the retail industry, VR is being used to create virtual showrooms or product demonstrations, allowing customers to experience products in a more immersive way.
**Blockchain:** Blockchain is a distributed ledger technology that allows for secure, transparent, and tamper-proof transactions. In the retail industry, blockchain is being used to improve supply chain management, verify product authenticity, and ensure data privacy and security.
These emerging technologies are transforming the retail industry, providing new opportunities for retailers to differentiate themselves, improve customer experiences, and increase efficiency.
## Benefits of leveraging emerging technologies:
Emerging technologies are providing numerous benefits to retailers, enabling them to improve operations, optimize processes, and provide better experiences for customers. Here are some of the ways that retailers can benefit from leveraging emerging technologies:
**Artificial Intelligence (AI) and Machine Learning (ML):** AI and ML can help retailers provide personalized experiences to customers by analyzing data and predicting customer behavior. This can help retailers understand what customers want, recommend products, and optimize pricing. AI and ML can also help retailers automate tasks, such as inventory management and fraud detection, improving operational efficiency and reducing costs.
**Internet of Things (IoT):** IoT can help retailers improve supply chain management by providing real-time data on inventory levels and shipments. This can help retailers reduce inventory costs, optimize delivery routes, and improve the accuracy of demand forecasting. IoT can also help retailers track customer behavior and preferences, providing insights into how customers interact with products and services.
**Augmented Reality (AR) and Virtual Reality (VR):** AR and VR can help retailers create immersive shopping experiences for customers, allowing them to virtually try on clothes or visualize products in their homes. This can help retailers increase customer engagement and satisfaction, and reduce the likelihood of product returns.
**Blockchain:** Blockchain can help retailers improve transparency and security in transactions by providing a tamper-proof and decentralized ledger. This can help retailers reduce the risk of fraud, improve supply chain visibility, and enhance data privacy and security.
Overall, emerging technologies are providing retailers with new ways to improve operations, optimize processes, and provide better experiences for customers. By leveraging these technologies, retailers can stay competitive and meet the evolving expectations of their customers.
## Case studies of how retailers are already using emerging technologies to improve their business:
**Amazon Go:** Amazon's cashier-less grocery store uses a combination of AI, ML, and computer vision to allow customers to shop without the need for checkout lines. Cameras and sensors track customer movements and purchases, and automatically charge customers for their purchases when they leave the store.
**Walmart:** Walmart is using IoT to improve supply chain efficiency and reduce waste. The company has developed a blockchain-based system that tracks the origin and movement of food products, providing real-time data on their quality and freshness.
**Sephora:** Sephora, a cosmetics retailer, is using AR to provide customers with virtual try-on experiences. The Sephora Virtual Artist app uses AR to allow customers to try on different makeup products virtually, and see how they look on their own face.
**Warby Parker:** Warby Parker, an eyewear retailer, is using VR to create virtual showrooms where customers can try on glasses and see how they look on their own face. The company's app uses the iPhone's TrueDepth camera to create a 3D scan of the customer's face, allowing them to see how different frames will look in real-time.
**Adidas:** Adidas is using 3D printing and machine learning to create customized sneakers for customers. The company's Futurecraft 4D shoes are created using a combination of 3D printing and ML algorithms that analyze customer data to create shoes that are tailored to the customer's unique foot shape and gait.
These are just a few examples of how retailers are using emerging technologies to improve their business. From providing personalized experiences to customers, to improving supply chain efficiency and reducing waste, these technologies are transforming the way retailers do business.
## Challenges and considerations:
**Cost:** One of the biggest challenges associated with implementing emerging technologies is the cost. Implementing these technologies requires significant investment in hardware, software, and personnel, which can be a significant barrier for smaller retailers. Additionally, emerging technologies may require ongoing maintenance and upgrades, which can add to the cost over time.
**Complexity:** Implementing emerging technologies can be complex, requiring specialized skills and expertise. Retailers may need to hire additional staff or work with outside vendors to implement and maintain these technologies. Additionally, integrating these technologies into existing systems can be challenging, requiring careful planning and coordination.
**Ethical concerns:** Emerging technologies can raise ethical concerns related to data privacy, security, and fairness. For example, AI and ML algorithms may be biased or discriminatory, leading to unfair treatment of certain customers or groups. Retailers need to be mindful of these concerns and take steps to mitigate them.
**Data privacy and security:** Retailers need to be mindful of data privacy and security when implementing emerging technologies. These technologies may collect and store sensitive customer data, such as personal information or payment details, which can be vulnerable to cyberattacks. Retailers need to implement robust security measures and protocols to protect this data.
**Regulatory compliance:** Retailers need to be aware of regulatory requirements when implementing emerging technologies. For example, GDPR in Europe and CCPA in California require companies to obtain consent from customers before collecting and using their personal data. Retailers need to ensure that they are complying with these regulations and other relevant laws.
Overall, retailers need to carefully consider the challenges and considerations associated with implementing emerging technologies. By addressing these challenges and concerns proactively, retailers can ensure that they are leveraging these technologies in a responsible and sustainable way.
## Conclusion:
In conclusion, the retail industry has undergone significant changes in recent years, and emerging technologies are playing a critical role in shaping the future of this industry. Technologies such as AI, machine learning, IoT, AR/VR, and blockchain are transforming the way retailers do business, providing new opportunities to personalize the shopping experience, optimize supply chains, and improve transparency and security in transactions.
Despite the challenges associated with implementing these technologies, retailers that fail to embrace them risk falling behind their competitors. By leveraging emerging technologies, retailers can stay competitive in an ever-changing industry and meet the evolving expectations of their customers. From reducing costs and improving efficiency to enhancing the customer experience, the benefits of implementing emerging technologies are clear.
However, retailers must also be mindful of the challenges and considerations associated with these technologies, including data privacy and security, regulatory compliance, and ethical concerns. By addressing these issues proactively and implementing these technologies responsibly, retailers can ensure that they are leveraging them in a way that benefits both their business and their customers.
| sageit |
1,831,139 | Step-by-Step Guide: Installing Jenkins on Docker | 🚀 Excited about DevOps automation? Learn how to set up Jenkins on Docker for seamless CI/CD... | 0 | 2024-04-23T06:08:11 | https://dev.to/shubhthakre/step-by-step-guide-installing-jenkins-on-docker-en7 |
🚀 Excited about DevOps automation? Learn how to set up Jenkins on Docker for seamless CI/CD workflows! 🛠️
## Prerequisites :
Before we begin, ensure you have the following prerequisites:
- Docker installed on your machine
- Basic familiarity with Docker commands
- Internet connectivity to pull Docker images
🔹 Step 1: Pull the Jenkins Docker Image
Start by pulling the latest Jenkins LTS image from Docker Hub:
```
docker pull jenkins/jenkins:lts
```
🔹 Step 2: Run Jenkins Container
Launch Jenkins as a Docker container with port mappings:
```
docker run -d -p 8080:8080 -p 50000:50000 --name my-jenkins jenkins/jenkins:lts
```
🔹 Step 3: Access Jenkins Web UI
Navigate to http://localhost:8080 in your web browser.
🔹 Step 4: Retrieve Initial Admin Password
Retrieve the admin password from the container logs:
```
docker logs my-jenkins
```
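If you would rather capture the password in a script than copy it out of the log banner by hand, a small filter can do it. Note that `extract_admin_password` is a hypothetical helper name, not a Docker or Jenkins command, and it relies on the password being printed as a bare 32-character hex token on its own line (the format Jenkins uses for its initial admin password):

```shell
# Filter the initial admin password out of Jenkins log output.
# Jenkins prints it as a bare 32-character hex string on its own line,
# framed by banner lines of asterisks.
extract_admin_password() {
  grep -oE '^[0-9a-f]{32}$' | head -n 1
}

# Typical usage against the container started in Step 2:
#   docker logs my-jenkins 2>&1 | extract_admin_password
```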
🔹 Step 5: Complete Jenkins Setup
Follow the setup wizard in the web UI, paste the admin password, and install plugins.
🔹 Step 6: Start Using Jenkins
Begin automating builds, tests, and deployments with Jenkins!
Ready to streamline your development pipeline? Try Jenkins on Docker today! 🌟
#DevOps #ContinuousIntegration #ContinuousDelivery #Jenkins #Docker #Automation #CI/CD | shubhthakre | |
1,831,293 | How to Manage Usage Limits in Colab for Optimal Performance | Optimize performance in Colab by managing usage limits effectively. Learn how to navigate usage... | 0 | 2024-04-23T09:30:00 | https://dev.to/novita_ai/how-to-manage-usage-limits-in-colab-for-optimal-performance-5bnb | ai, stablediffusion, colab | Optimize performance in Colab by managing usage limits effectively. Learn how to navigate usage limits in colab on our blog.
## Key Highlights
- Understand the usage limits of Google Colab and how they can impact your machine learning projects.
- Discover common usage limits and their implications.
- Explore strategies to monitor and manage resource consumption in Colab.
- Find out about tools and techniques for monitoring usage.
- Get tips on how to reduce computational load in Colab.
- Learn how to optimize Colab notebooks for maximum performance.
- Discover efficient coding practices and lesser-known Colab features that can enhance your ML projects.
- Navigate through Colab’s restrictions and learn how to deal with RAM and GPU limitations.
- Explore alternatives and supplements to Colab, such as Colab Pro and Google Cloud.
- Find out when it’s appropriate to consider upgrading to Colab Pro and explore other platforms similar to Colab.
- Get practical tips for long-term Colab users, including managing multiple sessions effectively and avoiding common pitfalls.
## Introduction
As machine learning and deep learning projects become increasingly resource-intensive, finding a cost-effective and efficient development environment is crucial. Google Colab Enterprise, a managed version of Colab, offers additional features and capabilities for enterprise use, including the use of generative AI. With its integration with Vertex AI and BigQuery, Colab Enterprise provides a powerful platform for data scientists and machine learning enthusiasts. However, it is important to understand and manage usage limits in Colab Enterprise for optimal performance and resource management.
In this blog, we will explore how to manage usage limits in Colab for optimal performance. By understanding and effectively managing the usage limits in Colab, you can ensure smooth and efficient development of your machine learning projects without compromising on performance or incurring unnecessary costs.
## Understanding Colab’s Usage Limits
Google Colab offers a free Jupyter-based environment for machine learning projects; many large language models, including [novita.ai LLM](https://novita.ai/product/llm-chat), are fine-tuned through Colab.


But it comes with certain usage limits. These limits are in place to manage resource allocation and prevent abuse of the service. It’s important to understand these limits to effectively manage your ML projects in Colab.
Colab’s usage limits are dynamic and can fluctuate over time. They include restrictions on CPU/GPU usage, maximum VM lifetime, idle timeout periods, and resource availability. While Colab does not publish these limits, they can impact your project’s execution and require monitoring and management for optimal performance.

**What Are Colab’s Computational Resources?**
Colab provides virtual machines (VMs) with different specifications to support machine learning tasks. These VMs come with pre-installed libraries and packages commonly used in ML projects. Users can access VMs with GPUs or TPUs for enhanced computational power.
The GPU options in Colab include the K80, T4, P100, and V100. GPUs are beneficial for accelerating training and inference tasks in deep learning models, with options to upgrade to faster Nvidia GPUs such as the V100 or A100. On the other hand, TPUs (Tensor Processing Units) are specialized hardware designed by Google for ML workloads. TPUs offer even faster and more efficient computation for training and predicting with large datasets using TensorFlow.
Additionally, Colab VMs come with a certain amount of RAM, typically ranging from 12.7 GB to 25 GB, depending on the type of VM. Having a clear understanding of these computational resources is essential for optimizing your ML projects in Colab.
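A quick way to see what your particular runtime actually provides is to query it from a notebook cell. The sketch below uses only the standard library — `runtime_resources` is our own illustrative name, not a Colab API, and the POSIX `sysconf` calls assume a Linux runtime (which Colab VMs are):

```python
import os
import shutil

def runtime_resources():
    # Total physical RAM via POSIX sysconf (valid on Linux runtimes).
    ram_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    # Disk space on the root filesystem.
    disk = shutil.disk_usage("/")
    return {
        "ram_gb": round(ram_bytes / 1024**3, 1),
        "disk_gb": round(disk.total / 1024**3, 1),
    }

print(runtime_resources())
```

For GPU details, the usual approach in a Colab cell is `!nvidia-smi`, which reports the attached GPU model and its memory.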
**Common Usage Limits and Their Implications**
Colab has certain usage limits that users need to be aware of in order to effectively manage their machine learning projects. These limits have implications for interactive compute, idle timeout periods, maximum VM lifetime, and resource availability.
Interactive compute refers to the duration of user activity in a Colab notebook. Colab notebooks have an idle timeout period, after which the runtime is automatically disconnected. This idle timeout period can range from a few minutes to several hours, depending on the VM type. Additionally, VMs in Colab have a maximum lifetime, after which they are automatically terminated.
Resource availability in Colab can also fluctuate, affecting CPU/GPU usage and other factors. These limits and variations impact the execution of ML projects and require careful monitoring and resource management for optimal performance.
## Strategies to Monitor and Manage Resource Consumption
To ensure optimal performance and manage resource consumption in Colab, it is important to implement effective monitoring and management strategies. By monitoring resource consumption, you can identify potential bottlenecks and optimize resource allocation.
**Tools and Techniques for Monitoring Usage**
Monitoring resource usage in Colab can be done using various tools and techniques. These tools help users keep track of their resource consumption and make informed decisions about resource allocation. Some of the tools and techniques for monitoring usage in Colab include:
- Google Account: The Google Account associated with Colab provides information on resource usage and allows users to manage their Colab sessions.
- Colab Pro: The paid version of Colab, Colab Pro, offers additional tools and features for monitoring and managing resource consumption.
- Compute Unit Balance: Colab Pro users have access to compute unit balance, which allows them to monitor their resource usage and make adjustments as needed.
- Backend Termination: Colab Pro users can also set up backend termination to automatically terminate idle sessions and free up resources.
By utilizing these tools and techniques, users can effectively monitor and manage their resource consumption in Colab for optimal performance.
**Tips to Reduce Computational Load**
Reducing computational load is crucial for optimizing resource usage in Colab. By implementing efficient coding practices and minimizing unnecessary computations, users can reduce the strain on resources and improve performance. Some tips to reduce computational load in Colab include:
- Efficient Coding Practices: Use optimized algorithms and data structures, minimize redundant computations, and leverage vectorized operations.
- Memory Management: Avoid unnecessary memory allocation and deallocation, use generators and iterators instead of loading all data into memory at once.

- Parallel Processing: Utilize parallel processing techniques like multiprocessing or parallel computing libraries to distribute computations across multiple cores or nodes.
By following these tips, users can minimize computational load and optimize resource usage in Colab, leading to improved performance and efficiency in their machine learning projects.
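The "generators and iterators" tip above can be made concrete: stream a file in fixed-size batches so that peak memory stays flat regardless of file size. `read_batches` is an illustrative helper, not a library function:

```python
def read_batches(path, batch_size=1024):
    """Yield lists of at most batch_size lines, never holding
    the whole file in memory at once."""
    batch = []
    with open(path) as f:
        for line in f:
            batch.append(line.rstrip("\n"))
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:  # trailing partial batch
        yield batch
```

Each batch can be preprocessed and discarded before the next is read, which is what keeps RAM usage independent of dataset size.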
## Optimizing Colab Notebooks for Performance
Optimizing Colab notebooks for performance is essential to ensure efficient execution of machine learning projects. By implementing optimization techniques, users can maximize resource usage and improve overall performance.
**Efficient Coding Practices**

Efficient coding practices play a crucial role in optimizing Colab notebooks for performance. By following these practices, users can reduce computational load, minimize memory usage, and improve overall efficiency. Some efficient coding practices for Colab notebooks include:
- Use optimized algorithms and data structures to reduce computational complexity.
- Minimize redundant computations and cache intermediate results.
- Leverage vectorized operations and optimized libraries to speed up computations.
- Implement memory-efficient techniques such as lazy loading and generators to minimize memory usage.
- Optimize I/O operations by batching or streaming data instead of loading all data into memory at once.
By following these efficient coding practices, users can enhance the performance of their Colab notebooks and achieve faster execution times for their machine learning projects.
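One concrete instance of "minimize redundant computations and cache intermediate results" is memoization with the standard library's `functools.lru_cache`. Here `normalize` is a hypothetical stand-in for an expensive, deterministic preprocessing step:

```python
from functools import lru_cache

calls = {"count": 0}  # track how often the expensive body actually runs

@lru_cache(maxsize=None)
def normalize(token):
    calls["count"] += 1  # only incremented on a cache miss
    return token.strip().lower()

tokens = ["Cat", "dog", "Cat", "dog", "Cat"]
cleaned = [normalize(t) for t in tokens]
# Five lookups, but the body runs only twice — once per unique input.
```

Memoization only pays off when the function is pure (same input, same output) and inputs repeat, which is common for tokenization and feature-extraction steps.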
**Leveraging Lesser-Known Colab Features**
Colab offers various features that are lesser-known but can greatly enhance the performance and efficiency of ML projects. These features enable users to take full advantage of the Colab environment and optimize resource usage. Some lesser-known Colab features include:
- Version of Colab: Colab offers different versions, which can be selected based on specific project requirements.
- Hidden Features: Colab has hidden features that can be discovered by exploring the Colab environment and experimenting with different settings.
By leveraging these lesser-known features, users can unlock additional capabilities in Colab and optimize their ML projects for improved performance and efficiency.
## Navigating Through Colab’s Restrictions
While Colab provides a free and convenient ML development environment, it also has certain restrictions that users need to navigate. These restrictions may impact resource usage limits, including dynamic usage limits, and require users to adapt their projects accordingly.
**Dealing with RAM and GPU Limitations**
Colab’s free VMs have limitations regarding RAM and GPU usage. Users need to be aware of these limitations and find ways to work within them. Some strategies for dealing with RAM and GPU limitations in Colab include:
- Optimizing Memory Usage: Minimize unnecessary memory allocation, use generators and iterators instead of loading all data into memory at once.
- Batch Processing: Split large datasets into smaller batches to accommodate RAM limitations.
- GPU Utilization: Implement batch-based data-flow to the GPU using tools like Keras/TF2 generators for efficient GPU usage.

By implementing these strategies, users can effectively manage RAM and GPU limitations in Colab and optimize resource usage for their ML projects.
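The batch-processing strategy above can be as simple as slicing a dataset into fixed-size pieces and handling one piece at a time. `chunks` is an illustrative helper:

```python
def chunks(seq, size):
    """Yield consecutive slices of seq, each of length size
    (the last one may be shorter)."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# Sketch of use in a training loop (model/train call is hypothetical):
#   for batch in chunks(dataset, 512):
#       model.train_on_batch(batch)
```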
**Solutions for Time-bound Execution**
Colab has time-bound execution limitations, with VMs having a maximum lifetime after which they are automatically terminated. To ensure uninterrupted execution of time-bound tasks, users can implement the following solutions:
- Checkpointing: Save model checkpoints at regular intervals to ensure progress is not lost if the VM is terminated.
- Job Scheduling: Divide long-running processes into smaller tasks that can be executed within the maximum VM lifetime.
- Resource Monitoring: Regularly monitor resource usage and adjust the execution plan accordingly to complete time-bound tasks within the allocated time.
By employing these solutions, users can effectively manage time-bound execution in Colab and ensure the successful completion of their ML projects.
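A minimal sketch of the checkpointing idea, using only the standard library: state is written atomically via a temp file and `os.replace`, so a VM terminated mid-write never leaves a corrupt checkpoint. `save_checkpoint` and `load_checkpoint` are our own illustrative names:

```python
import json
import os

def save_checkpoint(state, path="checkpoint.json"):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename: readers see old or new, never partial

def load_checkpoint(path="checkpoint.json"):
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"epoch": 0}  # fresh start if no checkpoint exists

# A training loop would call save_checkpoint({"epoch": e, ...}) every few
# epochs, and load_checkpoint() once at startup to resume where it left off.
```

In Colab, `path` would typically point into a mounted Google Drive folder so the checkpoint survives VM termination.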
## Alternatives and Supplements to Colab
While Colab offers a free environment for ML development, there are alternatives and supplements that users can consider for more advanced features and resource availability.
**When to Consider Upgrading to Colab Pro**
Colab Pro offers additional features and resources that can be beneficial for users working on more advanced ML projects. Some factors to consider when deciding to upgrade to Colab Pro include:
- Increased resource availability: Colab Pro offers more powerful VMs with higher RAM and GPU options, providing better performance for resource-intensive tasks.
- Longer session duration: Colab Pro extends the maximum session duration, allowing users to work on projects for extended periods without interruptions.
- Background execution: With Colab Pro, users can run notebooks in the background while working on other tasks, improving productivity.
- Terminal access: Colab Pro provides terminal access, enabling users to execute command-line operations within the Colab environment.

By evaluating these factors, users can determine if upgrading to Colab Pro is suitable for their ML projects and resource requirements.
**Exploring Other Platforms Similar to Colab**
Apart from Colab, there are other platforms that offer similar dynamics and features for ML development. These platforms provide alternatives and supplements to Colab, allowing users to explore different options for their ML projects. Some platforms similar to Colab include:
- Jupyter Notebooks: Jupyter Notebooks is a widely used open-source platform for interactive computing that offers similar features to Colab.
- Kaggle: Kaggle is a popular platform for data science and machine learning competitions that provides a hosted Jupyter notebook environment with resources for ML projects.
By exploring these alternative platforms, users can find the one that best suits their needs and preferences for ML development.
## Practical Tips for Long-Term Colab Users
For long-term Colab users, it is important to adopt best practices and implement effective management strategies.
**Managing Multiple Sessions Effectively**
Managing multiple sessions effectively is essential for long-term Colab users. By implementing proper session management techniques, users can streamline their workflow and effectively utilize Colab’s resources. Some tips for managing multiple sessions in Colab include:
- Organize Notebooks: Use folders and naming conventions to keep your notebooks organized and easily accessible.
- Utilize Sessions Tab: Take advantage of Colab’s sessions tab to manage and switch between different active sessions.
- Backup Notebooks: Regularly backup your notebooks to ensure that your progress is saved and can be easily accessed.
- Use Colab VMs: Consider using Colab VMs for long-term projects to avoid interruptions due to idle timeout periods.
By following these tips, long-term Colab users can effectively manage multiple sessions and optimize their workflow.
**Avoiding Common Pitfalls in Colab Usage**
While using Colab, there are common pitfalls that users should be aware of and avoid to ensure a smooth and efficient experience. Some common pitfalls in Colab usage include:
- Resource Exhaustion: Being mindful of resource usage and avoiding excessive consumption to prevent unexpected termination of VMs.
- Poor Code Optimization: Failing to optimize code for efficient resource usage, leading to slow execution and increased resource consumption.
- Lack of Backup: Not regularly backing up notebooks, which can result in loss of progress if a session is terminated or an error occurs.
- Overreliance on Free Resources: Relying solely on free resources without considering the need for additional resources or upgrading to Colab Pro.
By avoiding these common pitfalls, users can maximize their productivity and avoid unnecessary setbacks in their Colab usage.
## Conclusion
In conclusion, managing usage limits in Colab is crucial for optimal performance. By understanding Colab’s computational resources and implementing strategies to monitor and manage resource consumption, you can enhance your coding experience. Optimizing Colab notebooks through efficient coding practices and leveraging lesser-known features can improve overall performance. Navigating through Colab’s restrictions, dealing with limitations, and considering alternatives like Colab Pro when needed are essential steps. Practical tips for long-term users include managing multiple sessions effectively and avoiding common pitfalls. Stay mindful of your resource usage to make the most of Colab’s capabilities.
## Frequently Asked Questions
**How Can I Check My Current Resource Usage in Colab?**
You can check your current resource usage in Colab by accessing your Google Account associated with Colab. The account provides information on resource consumption and allows you to monitor and manage your Colab sessions.
**What Happens When I Reach My Usage Limit?**
When you reach your usage limit in Colab, your backend may be terminated, resulting in the disconnection of your session. This termination is a measure to manage resource allocation and prevent abuse of the service.
**Can I Extend My Usage Limits Without Upgrading to Pro?**
No, you cannot extend your usage limits in Colab without upgrading to Colab Pro. Colab Pro offers additional resources and features that are not available in the free version.
> Originally published at [novita.ai](https://blogs.novita.ai/how-to-manage-usage-limits-in-colab-for-optimal-performance/?utm_source=devcommuntiy_LLM&utm_medium=article&utm_campaign=usagelimitsincolab)
> [novita.ai](https://novita.ai/?utm_source=devcommunity_LLM&utm_medium=article&utm_campaign=how-to-manage-usage-limits-in-colab-for-optimal-performance), the one-stop platform for limitless creativity that gives you access to 100+ APIs. From image generation and language processing to audio enhancement and video manipulation, cheap pay-as-you-go, it frees you from GPU maintenance hassles while building your own products. Try it for free.
# Adding Speech Navigation To A Website

by [Sarah Okolo](https://blog.openreplay.com/authors/sarah-okolo)
<blockquote><em>
As technology evolves, so do the methods of interaction with websites. One such advancement is the integration of speech navigation, allowing users to interact with web content hands-free using their voice. JavaScript's [Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API) empowers developers to implement this functionality seamlessly, enhancing accessibility and convenience for a diverse range of users. This article explores how we can utilize it to incorporate speech navigation into a website, enabling seamless navigation for users through the power of voice commands.
</em></blockquote>
<div style="background-color:#efefef; border-radius:8px; padding:10px; display:block;">
<hr/>
<h3><em>Session Replay for Developers</em></h3>
<p><em>Uncover frustrations, understand bugs and fix slowdowns like never before with <strong><a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a></strong> — an open-source session replay suite for developers. It can be <strong>self-hosted</strong> in minutes, giving you complete control over your customer data.</em></p>
<img alt="OpenReplay" style="margin-top:5px; margin-bottom:5px;" width="768" height="400" src="https://raw.githubusercontent.com/openreplay/openreplay/main/static/openreplay-git-hero.svg" class="astro-UXNKDZ4E" loading="lazy" decoding="async">
<p><em>Happy debugging! <a href="https://openreplay.com" target="_blank">Try using OpenReplay today.</a></em></p>
<hr/>
</div>
To effectively follow along with this guide, it is recommended that you have a solid understanding of HTML, CSS, and JavaScript fundamentals. Familiarity with DOM manipulation and event handling will be beneficial as we dive into the implementation details. However, this guide is structured with clear explanations and step-by-step instructions, making it accessible to learners at various skill levels.
Here's a demonstration of what we'll be creating in this article.
<iframe width="720" height="360" src="https://player.vimeo.com/video/927886190" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
You can also check it out on the [live website](https://website-speech-navigation.netlify.app/).
## Importance and benefits of speech navigation in a website
Speech navigation functionality in a website offers several benefits to users, such as:
- **Accessibility Enhancement**: Speech navigation is a vital accessibility feature, particularly for individuals with disabilities such as motor impairments or visual challenges. It provides an alternative means of interaction, allowing users to navigate a website using their voice, overcoming barriers that conventional input methods may pose.
- **Hands-Free Browsing**: One of the primary benefits of speech navigation is browsing a website entirely hands-free. This is particularly valuable in situations where users may have limited or no use of their hands, enabling them to access and interact with digital content seamlessly.
- **Inclusive User Experience**: Web developers create a more inclusive user experience by integrating speech navigation into websites. This inclusivity extends beyond disability considerations to cater to a broader audience, including users who prefer a hands-free or voice-driven approach to browsing, enhancing the website's overall usability.
- **Efficient Interaction**: Speech navigation can be more efficient for certain tasks or commands than traditional methods. Users can express complex instructions naturally, potentially reducing the steps required to accomplish specific actions on the website.
## Overview of the Web Speech API
The Web Speech API is a browser standard that enables web developers to integrate speech functionalities into web applications. This API consists of two distinct interfaces:
- Speech recognition API
- Speech synthesis (commonly known as text-to-speech [tts]) API.
We will focus more on the Speech Recognition API, as this is what we need to achieve speech navigation in our project.
### Understanding the Speech Recognition API
The SpeechRecognition API programmatically turns voice inputs into text. It works by requesting access to the device microphone. Once granted, the user can speak, and the speech-to-text algorithm converts the speech into text that the application can process.
This API offers a range of [properties, methods, and events](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition#instance_properties) for detecting and processing speech input. It provides developers with comprehensive control over the configuration of the Speech Recognition functionality, allowing the seamless integration of voice-based interactions into web applications.
When using this API in the Chrome browser, the speech input data is sent to a server-based recognition engine for processing. This means that the Speech Recognition service would not be accessible offline.
The [browser support table](https://caniuse.com/speech-recognition) below indicates that the Speech Recognition API has an **87.37%** global browser usage as of the time of writing this article. This indicates that the API is not supported by all browsers yet, and browser compatibility should be checked before being used.
*[Image: Can I Use browser support table for the Speech Recognition API]*
Now that we understand the Speech Recognition API and the importance of speech navigation in a website, let’s explore how you can implement one.
## Setting up the website's HTML structure
To get started, let's set up the project’s structure. We need just three files for this project: HTML, CSS, and JavaScript.
Let’s begin with setting up the HTML structure of our project. Inside the HTML file, start by including the standard HTML boilerplate and the `link` and `script` tags to link the CSS and JS files.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="stylesheet" href="./styles.css" />
<title>Website speech navigation</title>
</head>
<body>
<script src="./index.js"></script>
</body>
</html>
```
Now that we have our boilerplate in place, let’s create the website's structure. Add a `header`, `nav`, `main`, and `footer` tags inside the body tag.
```html
<header></header>
<nav></nav>
<main></main>
<footer> </footer>
```
Inside the `header` tag, we place a menu icon to toggle the navigation menu and a microphone icon for toggling the speech recognition state.
```html
<header>
<ion-icon name="menu" id="menu-icon"></ion-icon>
<ion-icon name="mic-outline" class="mic-icon"></ion-icon>
</header>
```
All icons were obtained from the [Ionicons website](https://ionic.io/ionicons).
Before the icons can become visible on the page, we need to include the following `script` tags at the end of the `body` tag
```html
<script type="module" src="https://unpkg.com/ionicons@7.1.0/dist/ionicons/ionicons.esm.js"></script>
<script nomodule src="https://unpkg.com/ionicons@7.1.0/dist/ionicons/ionicons.js"></script>
```
Moving on, inside the `nav` tag, we add the links to manually navigate to sections on the website.
```html
<nav class="nav-menu">
<li>
<a href="#home">Home</a>
</li>
<li>
<a href="#about">About</a>
</li>
<li>
<a href="#contact">Contact</a>
</li>
</nav>
```
Next, inside the `main` tag, we set up the website's various sections.
```html
<main>
<p id="info"></p>
<section id="home">Home section</section>
<section id="about">About section</section>
<section id="contact">Contact section</section>
</main>
```
The empty `p` tag will be used to display the speech recognition status to the user later in JavaScript.
Lastly, let's add a little content to the footer element
```html
<footer id="footer">Footer section</footer>
```
That is all for the HTML file. The overall code should look like this:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link rel="stylesheet" href="styles.css" />
<title>Website Speech Navigation</title>
</head>
<body>
<header>
<ion-icon name="menu" id="menu-icon"></ion-icon>
<ion-icon name="mic-outline" class="mic-icon"></ion-icon>
</header>
<nav class="nav-menu">
<li>
<a href="#home">Home</a>
</li>
<li>
<a href="#about">About</a>
</li>
<li>
<a href="#contact">Contact</a>
</li>
</nav>
<main>
<p id="info"></p>
<section id="home">Home section</section>
<section id="about">About section</section>
<section id="contact">Contact section</section>
</main>
<footer id="footer">Footer section</footer>
<script type="module" src="https://unpkg.com/ionicons@7.1.0/dist/ionicons/ionicons.esm.js"></script>
<script nomodule src="https://unpkg.com/ionicons@7.1.0/dist/ionicons/ionicons.js"></script>
<script src="index.js"></script>
</body>
</html>
```
## Styling the website with CSS
Now that we have set up the HTML structure, it's time to apply styles to give our website a visual appeal.
Let's get started with some general styling for the page.
```css
html {
scroll-behavior: smooth;
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
```
Styling the `header`, `home`, `about`, `contacts`, and `footer` sections.
```css
header {
padding: 10px;
background-color: rgb(1, 22, 29);
display: flex;
justify-content: space-between;
position: sticky;
top: 0;
}
section {
width: 100vw;
height: 100vh;
display: grid;
place-items: center;
}
#home {
background-color: rgb(102, 24, 67);
}
#about {
background-color: rgb(31, 15, 90);
}
#contact {
background-color: rgb(131, 4, 131);
}
footer {
background-color: rgb(31, 1, 17);
text-align: center;
padding: 30px;
}
```
Adding styles to the `menu-icon` and the `nav-menu` classes.
```css
#menu-icon {
color: white;
font-size: 35px;
margin-top: 10px;
}
.nav-menu {
height: 100vh;
width: 0; /* makes the nav-menu closed by default */
background-color: rgb(1, 22, 29);
position: fixed;
transition: .5s;
overflow: hidden;
}
.nav-menu > li {
list-style: none;
margin: 50px 25px;
}
.nav-menu > li > a {
text-decoration: none;
}
.nav-menu > li > a:hover {
color: white;
}
/* opens the nav-menu */
.show-nav {
width: 200px;
}
```
Apply styling to the `mic-icon` and assign a class, which will set the pulse animation for the listening state of the `mic-icon`.
```css
.mic-icon {
background-color: rgb(255, 0, 140);
padding: 15px;
border-radius: 50%;
font-size: 25px;
}
.listening{
animation: pulse 1s linear infinite alternate;
}
@keyframes pulse{
100% {
transform: scale(1.2);
}
}
```
Lastly, let’s style the `info` element responsible for displaying the speech recognition status to the user.
```css
#info {
position: fixed;
top: 80px;
right: 0;
background-color: #eeeeeebd;
border-radius: 20px;
padding: 5px 12px;
z-index: 100;
display: none; /* display is hidden until revealed through JavaScript */
color: #000;
}
```
After we're done styling the website, it should look like this:

*[Image: screenshot of the styled website]*
Now, let's begin implementing the speech navigation functionality for our website.
## Configuring the Speech Recognition functionality in JavaScript
Inside the JavaScript file, we first need to declare a few variables that we will use later on and the function to toggle the `nav-menu`.
```javascript
const mic = document.querySelector(".mic-icon");
const info = document.getElementById('info');
const menuIcon = document.getElementById("menu-icon");
const navMenu = document.querySelector(".nav-menu");
menuIcon.onclick = () => {
navMenu.classList.toggle("show-nav")
}
```
Before initializing the Speech Recognition API, we need to verify the availability of either the `SpeechRecognition` or `webkitSpeechRecognition` properties in the browser’s window object.
```javascript
if (window.SpeechRecognition || window.webkitSpeechRecognition) {
//speech navigation logic
}
else {
//display error to the user
}
```
The code above applies the speech navigation logic only if the browser supports the Speech Recognition API, and displays an error message if the browser doesn’t support the API.
Inside the `if` block, we create a new `SpeechRecognition` instance.
```javascript
const recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition)();
```
The code above creates a new `SpeechRecognition` instance, ensuring the use of the correct constructor for browser support. It prompts the user to grant microphone access to the application to receive voice commands.
Next, we create a flag for handling when the speech recognition service is listening for speech input and when it has stopped.
```javascript
let isListening = false;
```
Below this, we create two functions to start and stop the speech recognition service
```javascript
// starts the speech recognition service
const startRecognition = () => {
recognition.start(); // starts the speech recognition service
mic.classList.add('listening'); // adds the pulse animation to the mic-icon
isListening = true; // indicates the speech recognition service is currently listening
}
// stops the speech recognition service
const stopRecognition = () => {
recognition.stop(); // stops the speech recognition service
mic.classList.remove('listening'); // removes the pulse animation on the mic-icon
isListening = false; // indicates the speech recognition service is currently not listening
}
```
Next, a third function will display the speech recognition status to the user.
```javascript
const displayInfo = (message, textColor) => {
info.style.display = "inline-block";
info.style.color = textColor;
info.innerText = message;
setTimeout(() => {info.style.display = "none"}, 3000);
}
```
In the code above, the `info` element is set to display a message to the user for 3 seconds.
Next, we must link the start and stop functionality to the mic icon.
```javascript
mic.onclick = () => {
if (!isListening) {
startRecognition();
displayInfo("Speech navigation enabled", "#000");
} else {
stopRecognition();
displayInfo("Speech navigation disabled", "#000");
}
}
```
In the code above, once the mic icon is clicked, the `isListening` variable is checked; if its value is `false`, indicating that a speech recognition session is not in progress, we start a new session. Otherwise, if its value is `true`, we stop the ongoing recognition session.
The speech recognition service automatically stops each time a result is sent. To ensure it stays active to receive commands, we are going to utilize the `onend` event to listen for when the speech recognition service has ended and call the `start()` instance method all over again until the user clicks the mic icon to manually stop it.
```javascript
recognition.onend = () => {
if(isListening == true){
recognition.start();
}
}
```
Finally, if the browser does not support the Speech Recognition API, we set the display property of the `mic-icon` to none.
```javascript
else {
mic.style.display = "none";
}
```
This removes the mic icon from the document, making it inaccessible for browsers without support for this API.
## Mapping user's speech commands to website navigation actions
After initializing the Speech Recognition API, the speech recognition service listens for speech inputs when the `start()` instance method is called. The `onresult` event is triggered when a word or phrase has been recognized, and the results are returned to the application.
```javascript
recognition.onresult = (event) => {
// Perform actions based on the recognized command
}
```
Inside the `onresult` code block, add the following code:
```javascript
const command = event.results[event.results.length - 1][0];
const transcript = command.transcript.toLowerCase();
```
In the code above, the first line retrieves the last recognized speech input from the `event.results` array and then retrieves the first alternative within that last result.
The `results` read-only property of the `onresult` event represents an array of recognition results received from the speech recognition service for the current session. Each result is an object containing an array of multiple recognition `alternatives` objects, where each `alternatives` object contains two properties: `transcript` and `confidence`.
The second line uses the recognition `transcript` property to retrieve the recognized text (the transcription of the spoken input) and sets it to lowercase, making it easier to handle and compare later.
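To make the indexing concrete, here is a small illustrative sketch that walks a mocked-up `event.results` structure the same way the handler does. The mock objects are an assumption for demonstration only; real `SpeechRecognitionResultList` and `SpeechRecognitionAlternative` objects exist only in the browser.

```javascript
// Simplified mock of the shape the onresult event exposes (plain arrays/objects
// standing in for the browser-only SpeechRecognition result types).
const event = {
  results: [
    [{ transcript: "Open Menu", confidence: 0.92 }],                // earlier result
    [{ transcript: "Go to the About section", confidence: 0.87 }]   // latest result
  ]
};

// Grab the first alternative of the most recent result, as in the handler above.
const command = event.results[event.results.length - 1][0];
const transcript = command.transcript.toLowerCase();

console.log(transcript);         // "go to the about section"
console.log(command.confidence); // 0.87
```

Because only the last entry of `event.results` is read, earlier results from the same session are ignored, which is exactly the behavior we want when the service stays active across multiple utterances.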
Next, we retrieve and map these results to the various navigation actions on the website.
```javascript
if (transcript.includes('home')) {
const homeSection = document.getElementById('home');
if (homeSection) {
homeSection.scrollIntoView({ behavior: 'smooth' });
}
} else if (transcript.includes('about')) {
const aboutSection = document.getElementById('about');
if (aboutSection) {
aboutSection.scrollIntoView({ behavior: 'smooth' });
}
} else if (transcript.includes('contact')) {
const contactSection = document.getElementById('contact');
if (contactSection) {
contactSection.scrollIntoView({ behavior: 'smooth' });
}
} else if (transcript.includes('footer')) {
const footerSection = document.getElementById('footer');
if (footerSection) {
footerSection.scrollIntoView({ behavior: 'smooth' });
}
} else if (transcript.includes('down')) {
window.scroll(0, window.scrollY + window.innerHeight);
} else if (transcript.includes('up')) {
window.scroll(0, window.scrollY - window.innerHeight);
} else if (transcript.includes('open menu')) {
navMenu.classList.add('show-nav');
} else if (transcript.includes('close menu')) {
navMenu.classList.remove('show-nav');
} else if (transcript.includes('stop')) {
stopRecognition();
displayInfo("Speech navigation disabled", "#000");
} else {
displayInfo("Unrecognized command: Try again", "#000");
}
```
In the above code, we've mapped speech commands to website navigation actions, including scrolling, menu control, section navigation, and stopping speech recognition. Each command is checked against specific keywords, triggering corresponding actions. The user is notified with "*Unrecognized command*" if a command isn't recognized.
## Handling Speech Recognition Errors
There will be cases where errors are encountered while using the recognition service. The Speech Recognition API provides us with the `onerror` event, which fires when an error such as a network or no-speech error is encountered.
```javascript
recognition.onerror = (event) => {
// code to handle errors
}
```
The Speech Recognition API may continue to encounter network errors while attempting to establish or maintain a connection, triggering multiple error events within a second.
To address this issue and ensure that the error message is displayed only once for network errors, above the `onerror` event code block, add a variable to track whether the error has already been handled and set it to `false`.
```javascript
let errorHandled = false;
```
Moving back into the `onerror` code block, we declare a conditional statement that checks if the error has been handled
```javascript
if (!errorHandled) {
// Check if the error has not been handled yet
}
```
Inside this block, we declare another conditional statement that checks if the error encountered is a network error, in which case we stop the recognition session, set the `isListening` variable to `false`, and remove the pulse animation on the mic icon.
```javascript
if (event.error == "network") {
recognition.stop();
isListening= false;
mic.classList.remove('listening');
}
```
Below this `if` block, we call the `displayInfo()` function to display the different error messages to the user.
```javascript
displayInfo(`Speech recognition error: ${event.error}`, "#800000")
```
Next, we set the `errorHandled` variable to `true`, indicating the error has been handled for the current recognition session.
```javascript
errorHandled = true;
```
To ensure that the error message is displayed in the next session if an error occurs again, we need to set the `errorHandled` variable back to `false`, indicating that any error encountered in the next recognition session is yet to be handled. We can do this using the `onstart` event
```javascript
recognition.onstart = () => {
errorHandled = false;
}
```
Providing feedback when an error is encountered allows users to understand the cause of the issue and guides them toward potential solutions or alternative actions.
## Enhancing efficiency through code optimization
When implementing Speech Navigation in a Website with JavaScript's Web Speech API, reviewing and optimizing your code for efficiency is important. Identify and eliminate redundant operations. That being said, let's take a look at how we can optimize our JavaScript code.
Using multiple `else if` statements may not be problematic for small applications, but for larger ones processing over 100 commands, this approach can lead to lengthy, hard-to-maintain, and repetitive code. To mitigate this, we can use an object to map commands to their respective navigation actions. This makes updating, deleting, or adding new commands to the application easier.
```javascript
const commands = {
'about': () => navigateToSection('about'),
'home': () => navigateToSection('home'),
'contact': () => navigateToSection('contact'),
'footer': () => navigateToSection('footer'),
'down': () => window.scrollBy(0, window.innerHeight),
'up': () => window.scrollBy(0, -window.innerHeight),
'stop': () => {stopRecognition(); displayInfo("Speech navigation disabled","#000")},
'open menu': () => navMenu.classList.add('show-nav'),
'close menu': () => navMenu.classList.remove('show-nav')
};
```
Next, we iterate over the commands and check if the `transcript` includes any of the keywords.
```javascript
let commandFound = false;
for (const command in commands) {
if (transcript.includes(command)) {
commands[command]();
commandFound = true;
break; // Stop searching after the first match
}
}
```
If no command is found, we execute the default action
```javascript
if (!commandFound) {
displayInfo("Unrecognized command: Try again", "#000");
}
```
Next, we create the function that handles the scrolling to the website's various sections.
```javascript
function navigateToSection(sectionId) {
const section = document.getElementById(sectionId);
if (section) {
section.scrollIntoView({ behavior: 'smooth' });
}
}
```
By optimizing your code, you can significantly improve performance, enhance efficiency, reduce resource consumption, and ultimately create a more responsive and scalable software solution.
The overall JavaScript code for our application should look like this now:
```javascript
const mic = document.querySelector(".mic-icon");
const info = document.getElementById("info");
const menuIcon = document.getElementById("menu-icon");
const navMenu = document.querySelector(".nav-menu");
menuIcon.onclick = () => {
navMenu.classList.toggle("show-nav");
};
if (window.SpeechRecognition || window.webkitSpeechRecognition) {
const recognition = new (window.SpeechRecognition ||
window.webkitSpeechRecognition)();
let isListening = false;
// starts the speech recognition service
const startRecognition = () => {
recognition.start(); // starts speech recognition service
mic.classList.add("listening"); // adds the pulse animation to mic-icon
isListening = true; // Indicate the speech recognition service is currently listening.
};
// stops the speech recognition service
const stopRecognition = () => {
recognition.stop(); // stops the speech recognition service
mic.classList.remove("listening"); // removes the pulse animation on the mic-icon
isListening = false; // Indicates the speech recognition service is currently not listening
};
const displayInfo = (message, textColor) => {
info.style.display = "inline-block";
info.style.color = textColor;
info.innerText = message;
setTimeout(() => {
info.style.display = "none";
}, 3000);
};
mic.onclick = () => {
if (!isListening) {
startRecognition();
displayInfo("Speech navigation enabled", "#000");
} else {
stopRecognition();
displayInfo("Speech navigation disabled", "#000");
}
};
recognition.onend = () => {
if (isListening == true) {
recognition.start();
}
};
recognition.onresult = (event) => {
const command = event.results[event.results.length - 1][0];
const transcript = command.transcript.toLowerCase();
const commands = {
about: () => navigateToSection("about"),
home: () => navigateToSection("home"),
contact: () => navigateToSection("contact"),
footer: () => navigateToSection("footer"),
down: () => window.scrollBy(0, window.innerHeight),
up: () => window.scrollBy(0, -window.innerHeight),
stop: () => {
stopRecognition();
displayInfo("Speech navigation disabled", "#000");
},
"open menu": () => navMenu.classList.add("show-nav"),
"close menu": () => navMenu.classList.remove("show-nav")
}
let commandFound = false;
for (const command in commands) {
if (transcript.includes(command)) {
commands[command]();
commandFound = true;
break; // Stop searching after the first match
}
}
if (!commandFound) {
displayInfo("Unrecognized command: Try again", "#000");
}
function navigateToSection(sectionId) {
const section = document.getElementById(sectionId);
if (section) {
section.scrollIntoView({ behavior: "smooth" });
}
}
};
let errorHandled = false;
recognition.onerror = (event) => {
if (!errorHandled) {
if (event.error == "network") {
recognition.stop();
isListening = false;
mic.classList.remove("listening");
}
displayInfo(`Speech recognition error: ${event.error}`, "#800000");
errorHandled = true;
}
};
recognition.onstart = () => {
errorHandled = false;
};
} else {
mic.style.display = "none";
}
```
## Potential pitfalls and considerations
Implementing speech navigation with JavaScript's Web Speech API in our application offers numerous benefits. While this is a great way to add a modern and interactive element to our website and enhance user accessibility, it also comes with its own set of challenges that need to be considered for optimal implementation. Some of these challenges include:
- **Limited number of predefined commands**: Because the application only works with a limited number of predefined commands, it restricts users to interacting using only the keywords defined in the `commands` object. For instance, a user could say "show links section", while the application only listens for the words "open menu" in the sentence to be able to open the navigation menu. This rigidity can lead to issues when users use synonyms or alternative phrases for navigation commands. To mitigate this, we can provide user education through tutorials or tooltips on how to use the speech navigation feature on the website, along with a list of all the valid commands.
- **Speech Recognition Accuracy**: The Speech recognition accuracy can vary depending on factors such as background noise, accent, and speech clarity. It's crucial to test speech navigation extensively in various environments and with different user demographics to ensure reliable performance.
- **Negation statements**: If a user negates a valid command in their sentence (e.g., "Don't go home"), the application recognizes only the "home" command in the sentence and navigates to the home section, contrary to the user's intent. One way we handle this is by listening for negation keywords (e.g., "Don't", "Do not"), and if present, stop the execution of any valid command found in the sentence and provide a context-based response to the user to ensure the correct navigation pattern is used.
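As a sketch of the negation idea described above, a simple guard might look like the following. The keyword list and function name are illustrative assumptions, not part of the original project; a production version would need more robust phrase analysis.

```javascript
// Illustrative negation guard (assumption: a small hand-picked keyword list).
const NEGATIONS = ["don't", "do not", "never"];

function isNegated(transcript) {
  // Returns true when the sentence contains a negation keyword.
  return NEGATIONS.some(neg => transcript.includes(neg));
}

// Inside onresult, a matched command would only run when the sentence
// is not negated, e.g.:
isNegated("don't go home"); // true  -> skip the "home" command
isNegated("go home");       // false -> execute it
```

The guard would be checked before the command lookup loop, so a negated sentence falls through to the "Unrecognized command" feedback instead of navigating against the user's intent.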
By considering these potential pitfalls, we can enhance the usability and robustness of the speech navigation feature in our application. Furthermore, continuous testing, user feedback, and iteration are essential for refining the speech navigation functionality and ensuring a seamless user experience on the website.
## Conclusion
In conclusion, integrating Speech Navigation in websites with the Web Speech API empowers users by giving them greater control over their online experience. Users can effortlessly navigate to specific sections, trigger actions, or access information on a website simply through voice commands, making it a valuable tool for individuals with disabilities and those seeking a hands-free website navigation experience. Developers can create more inclusive and user-friendly websites by leveraging the Web Speech API.
This speech navigation feature can also be used to navigate much more complex websites than those covered in this article. I hope that with the knowledge gained from this guide, you will be well-equipped to implement speech navigation functionality effectively on your websites, creating a seamless user experience that will leave a lasting impression.
The complete source code of this project can be found on my [website speech navigation](https://github.com/Sarah-okolo/website-speech-navigation) GitHub repository.
Happy coding!
## References
[Web Speech API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API) - MDN
[Speech to text in the browser with the Web Speech API](https://www.twilio.com/en-us/blog/speech-recognition-browser-web-speech-api) - Phil Nash
[Web Speech API](https://wicg.github.io/speech-api/) - W3C community group
[Speech recognition in browsers](https://medium.com/@seoul_engineer/speech-recognition-in-browsers-2018-f25bf59857bc) - Seoul Engineer
[Web Speech API Specification](https://dvcs.w3.org/hg/speech-api/raw-file/tip/webspeechapi) - W3C unofficial draft
# Why Youngsters are Choosing Digital Marketing as a Career Option?

Digital marketing is in huge demand these days. If you are just starting out, you might be wondering why so many people choose digital marketing as a career. Digital marketing differs from other marketing strategies because it isn't just about traditional advertising. Digital marketing consists of a simple process of implementing a range of activities to generate leads, educate prospects, and convert them into customers. This might be easy to read, but without proper knowledge and skills it will be difficult to generate proper results. These activities can include email campaigns and mobile messaging, among others. Therefore, in this blog we will discuss the reasons for choosing digital marketing as a career option.
## Top reasons for choosing digital marketing as a career option:
**Huge demand for digital marketing professionals**
Digital marketing professionals are in huge demand these days, and jobs in this field are increasing exponentially. Since mobile phones, social media, and the internet will remain an integral part of our lives for years to come, there won't be any shortage of jobs for digital marketers. Also, as per reports from credible sources, the digital marketing industry is expected to generate more than 9 lakh jobs in the year 2025.
**Easy to start your career**
One of the most inclusive features of digital marketing is that anyone from any background can work in the field as long as they have the required training and expertise. To succeed as a digital marketer, you won't need a three- or four-year degree. Regardless of what you have previously studied, you can acquire a certification in digital marketing, build a strong portfolio, and you are good to go! Basically, a digital marketing course can help you get into the sector and become a professional digital marketer easily.
**Diverse career options**
The job options in digital marketing are diverse. There is a wide range of opportunities, from SEO analyst to content writer to social media manager, so in this field you will have multiple career options. You can work for a manufacturing company or an IT firm, or work independently on online freelance jobs. The scope for creativity and innovation is enormous, which makes the work exciting, and you can earn a big fat paycheque as well.
**Get a Good Paycheque**
With the digital sector of the economy expanding rapidly, the demand for digital marketers has surged, leading to attractive pay packages being offered by companies. These opportunities come with one demand from recruiters: candidates must possess the skills and knowledge to navigate the digital landscape effectively. It is therefore imperative for aspiring digital marketers to enroll in comprehensive courses to ensure they are equipped with the expertise required by employers. By taking a digital marketing course in Kolkata, individuals can enhance their prospects and stand out in a competitive job market, ultimately securing lucrative employment opportunities in this dynamic field.
## What is the best way to learn digital marketing?
The best way to learn digital marketing is with the assistance of a digital marketing course in Kolkata. With the help of digital marketing courses, you will easily learn the fundamental concepts of online advertising, search engine optimization (SEO), social media marketing, content creation, email marketing, and pay-per-click (PPC) as
well as advertising strategies. These courses offer hands-on experience with various tools and platforms used in the industry, providing practical knowledge that enables you to create effective digital marketing campaigns. Moreover, by enrolling in these courses, you gain insights into the latest trends and best practices in the rapidly evolving digital landscape, equipping you with the skills necessary to excel in the field of digital marketing.
## To Wrap Up
Today, digital marketing is at the core of any firm. It has also been observed that online marketing is expanding far more quickly than traditional marketing. The key reason is that businesses are aware of the advantages of digital platforms, such as their broad audience reach, low cost, and high profit margins. Day by day, these factors will only increase the number of employment openings for aspiring digital marketers. So, what are you waiting for? Jump into this industry with the right skills, with the assistance of a [digital marketing course in Kolkata](https://redapplelearning.in/digital-marketing-course-kolkata/).
| redapplelearning |
1,831,380 | Software Testing as a Debugging Tool | The Intersection of Debugging and Testing Unit Tests Integration Tests Coverage The Debug-Fix... | 20,817 | 2024-04-23T15:00:00 | https://debugagent.com/software-testing-as-a-debugging-tool | - [The Intersection of Debugging and Testing](#the-intersection-of-debugging-and-testing)
* [Unit Tests](#unit-tests)
* [Integration Tests](#integration-tests)
* [Coverage](#coverage)
- [The Debug-Fix Cycle](#the-debugfix-cycle)
- [Composing Tests with Debuggers](#composing-tests-with-debuggers)
- [Test-Driven Development](#testdriven-development)
- [Final Word](#final-word)
Debugging is not just about identifying errors—it's about instituting a reliable process for ensuring software health and longevity. In this post we discuss the role of software testing in debugging, including foundational concepts and how they converge to improve software quality.
{% embed https://youtu.be/yap509UZz6M %}
As a side note, if you like the content of this and the other posts in this series, check out my [Debugging book](https://www.amazon.com/dp/1484290410/) that covers this subject. If you have friends who are learning to code, I'd appreciate a reference to my [Java Basics book.](https://www.amazon.com/Java-Basics-Practical-Introduction-Full-Stack-ebook/dp/B0CCPGZ8W1/) If you want to get back to Java after a while, check out my [Java 8 to 21 book](https://www.amazon.com/Java-21-Explore-cutting-edge-features/dp/9355513925/).
## The Intersection of Debugging and Testing
Debugging and testing play distinct roles in software development. Debugging is the targeted process of identifying and fixing known bugs. Testing, on the other hand, encompasses a broader scope, identifying unknown issues by validating expected software behavior across a variety of scenarios.
Both are part of the debug-fix cycle, which is a core concept in debugging. Before we cover the cycle, we should first make sure we're aligned on the basic terminology.
### Unit Tests
Unit tests are tightly linked to debugging efforts, focusing on isolated parts of the application—typically individual functions or methods. Their purpose is to validate that each unit operates correctly in isolation, making them a swift and efficient tool in the debugging arsenal. These tests are characterized by their speed and consistency, enabling developers to run them frequently, sometimes even automatically as code is written within the IDE.
Since software is so tightly bound, it is nearly impossible to compose unit tests without extensive mocking. Mocking involves substituting a genuine component with a stand-in that returns predefined results, so a test method can simulate scenarios without relying on the actual object. This is a powerful yet controversial tool. By using mocking we are, in effect, creating a synthetic environment that might misrepresent the real world. We're reducing the scope of the test and might perpetuate some bugs.
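The idea is language-agnostic; as a rough sketch, here it is in Python using the standard library's `unittest.mock`. The `checkout` function and its `gateway` collaborator are invented purely for illustration:

```python
from unittest.mock import Mock

def checkout(gateway, amount):
    """Charge the amount; a real gateway would hit a network service."""
    receipt = gateway.charge(amount)
    return receipt["status"] == "ok"

# Substitute the genuine component with a stand-in that returns a
# predefined result, so the test never touches the network and always
# behaves the same way -- fast and consistent, but a synthetic environment.
gateway = Mock()
gateway.charge.return_value = {"status": "ok"}

assert checkout(gateway, 100) is True
gateway.charge.assert_called_once_with(100)
```

The trade-off described above is visible here: the test exercises `checkout` in isolation, but says nothing about whether a real gateway actually returns receipts in this shape.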
### Integration Tests
Opposite to unit tests, integration tests examine the interactions between multiple units, providing a more comprehensive picture of the system's health. While they cover broader scenarios, their setup can be more complex due to the interactions involved. However, they are crucial in catching bugs that arise from the interplay between different software components.
In general, mocking can be used in integration tests, but it is discouraged. Integration tests take longer to run and are sometimes harder to set up. However, many developers (myself included) would argue that they are the only true benchmark for quality. Most bugs express themselves in the seams between modules, and integration tests are better at detecting that.
Since integration tests are far more important, some developers would argue that unit tests are unnecessary. This isn't true: unit test failures are much easier to read and understand. And since unit tests are faster, we can run them during development, even while typing. In that sense, the balance between the two approaches is the important part.
### Coverage
Coverage is a metric that helps quantify the effectiveness of testing by indicating the proportion of code exercised by tests. It helps identify potential areas of the code that have not been tested, which could harbor undetected bugs. However, striving for 100% coverage can be a case of diminishing returns; the focus should remain on the quality and relevance of the tests rather than the metric itself. In my experience, chasing high coverage numbers often results in bad test practices that let problems persist.
It is my opinion that unit tests should be excluded from coverage metrics, due to the importance of integration tests to overall quality. To get a true sense of quality, coverage should focus on integration and end-to-end tests.
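To make the metric concrete, here is a tiny sketch using Python's standard-library `trace` module (the `abs_diff` function is invented for illustration): a single test exercises only one branch, and the per-line hit counts reveal the untested path.

```python
import trace

def abs_diff(a, b):
    if a > b:
        return a - b
    return b - a  # this branch is never reached by the run below

tracer = trace.Trace(count=1, trace=0)
tracer.runfunc(abs_diff, 5, 3)    # exercises only the a > b branch
counts = tracer.results().counts  # {(filename, lineno): hit count}

# The line computing b - a never appears in the counts -- exactly the kind
# of gap a coverage report flags, and a prompt to write the missing test.
```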
## The Debug-Fix Cycle
The debug-fix cycle is a structured approach that integrates testing into the debugging process. The stages include identifying the bug, creating a test that reproduces the bug, fixing the bug, verifying the fix with the test, and finally, running the application to ensure the fix works in the live environment. This cycle emphasizes the importance of testing in not only identifying but also in preventing the recurrence of bugs.

Notice that this is a simplified version of the cycle with a focus on the testing aspect only. The full cycle includes discussion of the issue tracking and versioning as part of the whole process. I discuss this more in-depth in other posts in the series and my book.
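As a minimal sketch of the "create a test that reproduces the bug" stage, consider a hypothetical `cart_total` function that once crashed on an empty cart (both the function and the bug are invented for illustration). The regression test failed before the guard existed and stays in the suite to prevent recurrence:

```python
def cart_total(prices, discount=0.0):
    # The fix: the buggy version divided by len(prices) for an average
    # and crashed on an empty cart; an empty cart now simply totals zero.
    if not prices:
        return 0.0
    return sum(prices) * (1.0 - discount)

def test_empty_cart_totals_zero():
    # Reproduces the original bug report; raised ZeroDivisionError pre-fix.
    assert cart_total([]) == 0.0

def test_discount_applied():
    assert cart_total([10.0, 20.0], discount=0.5) == 15.0

test_empty_cart_totals_zero()
test_discount_applied()
```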
## Composing Tests with Debuggers
A powerful feature of using debuggers in test composition is their ability to "[jump to line](https://debugagent.com/debugging-program-control-flow)" or "[set value](https://debugagent.com/watch-and-evaluate)." Developers can effectively reset the execution to a point before the test and rerun it with different conditions, without recompiling or rerunning the entire suite. This iterative process is invaluable for achieving desired test constraints and improves the quality of unit tests by refining the input parameters and expected outcomes.
Increasing test coverage is about more than hitting a percentage; it's about ensuring that tests are meaningful and that they contribute to software quality. A debugger can significantly assist in this by identifying untested paths. When a test coverage tool highlights lines or conditions not reached by current tests, the debugger can be used to force execution down those paths. This helps in crafting additional tests that cover missed scenarios, ensuring that the coverage metric is not just a number but a true reflection of the software's tested state.
In the screenshot below, notice that the next line in the body is a `rejectValue` call, which will throw an exception. I don't want an exception thrown, as I still want to test all the permutations of the method. I can drag the execution pointer (the arrow on the left) and place it back at the start of the method.

## Test-Driven Development
How does all of this fit with disciplines like Test-Driven Development (TDD)?
It doesn't fit well. Before we get into that let's revisit the basics of TDD. Weak TDD typically means just writing tests before writing the code. Strong TDD involves a red-green-refactor cycle:
1. **Red**: Write a test that fails because the feature it tests isn't implemented yet.
2. **Green**: Write the minimum amount of code necessary to make the test pass.
3. **Refactor**: Clean up the code while ensuring that tests continue to pass.
This rigorous cycle guarantees that new code is continually tested and refactored, reducing the likelihood of complex bugs. It also means that when bugs do appear, they are often easier to isolate and fix due to the modular and well-tested nature of the codebase. At least, that's the theory.
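A compressed red-green pass, sketched in Python with a hypothetical `slugify` helper (the function and its spec are invented for illustration):

```python
# Red: the test is written first; running it at this point fails with a
# NameError, because slugify does not exist yet.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Green: the minimum implementation that makes the test pass.
def slugify(title):
    words = "".join(c if c.isalnum() else " " for c in title).split()
    return "-".join(w.lower() for w in words)

test_slugify()

# Refactor would come next: clean the implementation up while the test
# stays green.
```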
{% embed https://youtu.be/yImkjlm08Cw %}
TDD can be especially advantageous for scripting and loosely typed languages. In environments lacking the rigid structure of compilers and linters, TDD steps in to provide the necessary checks that would otherwise be performed during compilation in statically typed languages. It becomes a crucial substitute for compiler/linter checks, ensuring that type and logic errors are caught early.
In real-world application development, TDD's utility is nuanced. While it encourages thorough testing and upfront design, it can sometimes hinder the natural flow of development, especially in complex systems that evolve through numerous iterations. The requirement for 100% test coverage can lead to an unnecessary focus on fulfilling metrics rather than writing meaningful tests.
The biggest problem with TDD is its focus on unit testing. TDD is impractical with integration tests, as the process would take too long. But as we determined at the start of this post, integration tests are the true benchmark for quality. In that sense, TDD is a methodology that provides great quality for arbitrary tests, but not necessarily great quality for the final product. You might have the best cog in the world, but if it doesn't fit well into the machine, then it isn't great.
## Final Word
Debugging is a tool that not only fixes bugs but also actively aids in crafting tests that bolster software quality. By utilizing debuggers in test composition and increasing coverage, developers can create a suite of tests that not only identifies existing issues but also guards against future ones, thus ensuring the delivery of reliable, high-quality software.
Debugging lets us increase coverage and verify edge cases effectively. It's part of a standardized process for issue resolution that's critical for reliability and prevents regressions. | codenameone | |
1,831,451 | Doge Dreams: Experts Predict 700% Surge in Dogecoin Value – Is $1 Within Reach? | Dogecoin Prediction: Analysts Bullish on DOGE's Future Dogecoin (DOGE) has proven its resilience... | 0 | 2024-04-23T10:20:46 | https://dev.to/cryptoalerts/doge-dreams-experts-predict-700-surge-in-dogecoin-value-is-1-within-reach-4k74 | cryptocurrency, dogecoin, bitcoin | [Dogecoin Prediction](https://coinpedia.org/price-prediction/dogecoin-price-analysis/): Analysts Bullish on DOGE's Future
Dogecoin (DOGE) has proven its resilience once again, with a 2.5% price uptick in the last 24 hours, reaching $0.1607 in trading value. Analysts like Altcoin Sherpa and Ali Martinez have recently shared positive forecasts for DOGE's trajectory, suggesting further growth potential and the possibility of outperforming other tokens.
Dogecoin's Strength in 2024
Following closely in Bitcoin's footsteps, DOGE has experienced its share of ups and downs in recent months. On March 28, DOGE hit a yearly high of $0.2292, cementing its position as the 9th largest cryptocurrency, boasting nearly 100% year-to-date growth. Despite market fluctuations, DOGE has consistently rebounded and attracted investor attention.
Altcoin Sherpa is particularly bullish on DOGE's prospects for 2024, recommending averaging between $0.12 and $0.14. Highlighting DOGE's historical consolidation phase lasting almost two years, Sherpa believes the coin is primed for significant price appreciation in the future.
700% Price Uptrend Toward $1
Similarly, crypto analyst Ali Martinez sees potential for DOGE to rally towards $1 in the coming weeks, representing a remarkable 700% uptrend from current levels. Martinez notes DOGE's recurring price patterns, particularly after breaking out of a descending triangle formation.
Despite these optimistic outlooks, DOGE faces resistance levels hindering its recovery after a recent 21% price decline. The $0.1633 level has posed a significant barrier in the past 10 days, with additional obstacles at $0.1739, $0.1938, and $0.1998 on the path to $0.200.
It's crucial to recognize that DOGE's future trajectory is closely tied to Bitcoin's performance and various market dynamics. As we navigate through 2024, the potential for a bullish breakout and other factors will continue to shape Dogecoin's journey.
| cryptoalerts |
1,831,541 | Discovering Georgia: A Traveler's Paradise | Georgia, located at the crossroads of Europe and Asia, is a country blessed with stunning natural... | 0 | 2024-04-23T12:02:17 | https://dev.to/travejars/discovering-georgia-a-travelers-paradise-mpa | Georgia, located at the crossroads of Europe and Asia, is a country blessed with stunning natural landscapes, rich history, and vibrant culture. Here are some of the top [places to visit in Georgia](https://www.travejar.com/attractions/places-to-visit-in-georgia):
Tbilisi:
The capital city of Georgia, Tbilisi, is a dynamic blend of ancient and modern. Explore the historic Old Town (Dzveli Tbilisi) with its narrow streets, traditional houses, and sulfur baths. Visit landmarks like Narikala Fortress, Metekhi Church, and the Holy Trinity Cathedral. Don't miss the vibrant Rustaveli Avenue and the modern Bridge of Peace.
Mtskheta:
Mtskheta, the ancient capital of Georgia, is a UNESCO World Heritage Site and one of the oldest continuously inhabited cities in the world. Visit iconic landmarks such as Svetitskhoveli Cathedral, Jvari Monastery, and the Samtavro Monastery.
Kazbegi (Stepantsminda):
Nestled in the Caucasus Mountains, Kazbegi (Stepantsminda) offers breathtaking natural beauty and outdoor adventures. Hike to the Gergeti Trinity Church for panoramic views of Mount Kazbek, explore the Dariali Gorge, and soak in the stunning landscapes of the Truso Valley.
Svaneti:
Svaneti is a remote mountainous region known for its ancient stone towers, picturesque villages, and rugged landscapes. Explore the village of Mestia with its Svan towers, visit the ethnographic museum in Ushguli, and trek in the stunning alpine landscapes of the Caucasus.
Batumi:
Batumi is a vibrant coastal city on the Black Sea coast, known for its modern architecture, palm-lined boulevards, and lively atmosphere. Relax on Batumi Beach, stroll along the Batumi Boulevard, visit the Alphabet Tower, and explore the Batumi Botanical Garden.
Vardzia:
Vardzia is a spectacular cave monastery complex dating back to the 12th century. Carved into the side of a cliff, Vardzia features thousands of caves, churches, chapels, and tunnels. Explore the site and marvel at its historical and architectural significance.
Kakheti Wine Region:
Kakheti is Georgia's premier wine region, known for its vineyards, wineries, and traditional wine-making methods. Visit towns like Signagi, Telavi, and Kvareli to explore vineyards, taste local wines, and learn about Georgia's millennia-old wine culture.
Gori and Uplistsikhe:
Visit the town of Gori, birthplace of Joseph Stalin, and explore the Stalin Museum. Nearby, discover the ancient rock-hewn town of Uplistsikhe, dating back to the 1st millennium BCE, with its cave dwellings, tunnels, and temples.
Borjomi-Kharagauli National Park:
Escape to nature in Georgia's largest national park, Borjomi-Kharagauli National Park. Enjoy hiking, camping, wildlife spotting, and relaxation amidst pristine forests, mountain ranges, and mineral springs.
Ananuri Fortress:
Located along the Georgian Military Highway, Ananuri Fortress is a medieval fortress complex overlooking the Aragvi River. Explore the ancient churches, towers, and defensive walls, and enjoy stunning views of the surrounding landscapes.
These are just a few of the many incredible places to visit in Georgia, each offering a unique blend of history, culture, and natural beauty.
| travejars | |
1,831,685 | Exploring the Role of Bioculture Manufacturers in Modern Biotechnology | Bioculture manufacturers play a crucial role in advancing modern biotechnology, providing essential... | 0 | 2024-04-23T12:35:51 | https://dev.to/ecolagro/exploring-the-role-of-bioculture-manufacturers-in-modern-biotechnology-33e6 | bioculturemanufacturer, biocultureforetp | Bioculture manufacturers play a crucial role in advancing modern biotechnology, providing essential resources and expertise to support research, development, and production in various industries. Let's delve into the multifaceted role of **[bioculture manufacturers](https://ecolagro.com/bio-culture-for-etp-stp-manufacturer-and-supplier-from-india/)** and their contributions to the field of biotechnology.
**Supplying High-Quality Biocultures:**
One of the primary functions of bioculture manufacturers is to supply high-quality and standardized biocultures to laboratories, biotech companies, pharmaceutical firms, and academic institutions worldwide. These biocultures serve as fundamental tools for scientists and researchers in conducting experiments, performing assays, and developing new products. By providing reliable and consistent bioculture products, manufacturers enable researchers to pursue innovative research projects with confidence and accuracy.
**Ensuring Quality and Consistency:**
Bioculture manufacturers adhere to stringent quality control measures to ensure the purity, viability, and consistency of their products. Culturing living organisms under controlled conditions requires precise control over factors such as temperature, pH, nutrient composition, and aseptic techniques. Manufacturers invest in state-of-the-art infrastructure, equipment, and expertise to create optimal conditions for the growth and maintenance of biocultures. Quality assurance protocols, including regular testing and validation procedures, are implemented to guarantee the integrity and performance of bioculture products.
**Driving Research and Development:**
Bioculture manufacturers play an active role in research and development to innovate new cultivation methods, improve yields, and enhance the properties of biocultures. Research efforts focus on optimizing growth conditions, developing novel media formulations, and engineering strains with desirable traits. These advancements contribute to the continuous improvement and evolution of bioculture manufacturing processes, leading to enhanced productivity, efficiency, and sustainability.
**Applications Across Industries:**
The impact of bioculture manufacturers extends beyond the laboratory, driving innovation and progress in various industries. In the pharmaceutical sector, biocultures are essential for producing recombinant proteins, monoclonal antibodies, vaccines, and cell-based therapies. In industrial biotechnology, biocultures are used in fermentation processes to produce biofuels, enzymes, and specialty chemicals. In agricultural biotechnology, biocultures are employed for crop improvement, biocontrol, and bioremediation applications.
**Future Perspectives:**
As biotechnology continues to evolve and expand, bioculture manufacturers will remain indispensable partners in shaping the future of biotechnology and its contributions to society. By supplying high-quality biocultures, ensuring consistency and reliability, driving research and development efforts, and supporting innovation across industries, bioculture manufacturers play a vital role in advancing scientific knowledge, improving healthcare, and promoting sustainability.
In conclusion, bioculture manufacturers are pivotal players in modern biotechnology, providing essential resources, expertise, and support to fuel research, innovation, and production in diverse fields. Their contributions are instrumental in driving progress and addressing global challenges through biotechnological solutions. | ecolagro |
1,831,710 | Top 10 Most Common P&C Insurance Mistakes & How to Avoid Them | Do you know P&C insurance helps protect your assets? Yes, many businesses prefer this insurance... | 0 | 2024-04-23T12:42:24 | https://dev.to/fbspl/top-10-most-common-pc-insurance-mistakes-how-to-avoid-them-5pb | insurance, outsourcing, fbspl, bpo |

Do you know P&C insurance helps protect your assets? Yes, many businesses prefer this insurance because it helps against unforeseen circumstances. It protects assets like your house and belongings and offers liability coverage in case of accidents or injuries to the person or property you have caused.
Despite its growth, [P&C Insurance](https://www.fusionfirst.com/) is facing many challenges, and the major reason is the mistakes people make without knowing. So, to help you, here are the possible mistakes you must know about beforehand so that you understand how to avoid them. Also, check out the useful tips given in the context that will help you throughout.
**Don’t Fall into Making These 10 P&C Insurance Mistakes**
It is very common for a business owner to stay so focused on running and growing their organization that they often overlook key maintenance factors. P&C Insurance is one such thing that safeguards any organization from top to bottom. Several factors of P&C Insurance need to be scrutinized to take the best advantage. However, most business owners overlook some of the key factors, as mentioned earlier. All these possible mistakes are well explained below so that you can go through them and have a thorough understanding of not making them.
**1.Underestimating the Need for Coverage**
A common insurance mistake is calculating your coverage requirements too low. You can choose the bare minimum to save money, but if your coverage is insufficient, you might have to pay for losses your insurance policy doesn’t cover. Here are some ways to avoid them:
Work with the best P&C Insurance services to evaluate your situation and select full coverage to protect your assets and future income.
Remember to always opt for adequate coverage.
**2.Neglecting Updates and Reviews**
Whether you buy a new car or home, making big lifestyle changes always comes with happiness. However, in the process, people often forget or fail to renew their insurance coverage. When these changes are not done or insurance plans are not reviewed, it may result in coverage gaps. This also leads to the risk of being over or under-insured. The following are the ways to avoid them:
Schedule reviews with the best insurance broker to ensure your coverage meets your current needs.
Also, take time to read your policy details.
**3.Not Paying Attention to Details**
Another mistake people often make is not paying attention to details when purchasing an insurance policy. If you fail to gain all details and terms, there is a higher chance your policy can be null and void. Check out these tips to avoid them:
When buying a P&C Insurance policy, be open and truthful with your agent, and don’t hesitate to ask about everything in detail.
Besides, before signing, always read the policy carefully and thoroughly.
**4.Neglecting Risk Management Plan**
Do you know having the right risk management strategy can save you from high premium costs? Yes, it is. Therefore, the crucial thing is to create a risk management plan beforehand to avoid financial losses and reduce your insurance costs. The following tips will help you:
Work with the right insurance broker to take care of the challenges in P&C insurance that might come your way.
Make a list of your business risks and the steps to mitigate them.
**5.Not Doing Enough Research**
Remaining with insurance companies for too long is only sometimes a good idea. No to deny, it is critical to identify insurance providers and plans that meet your demands. Because an uncovered lawsuit could spell the end for your company, looking for a company that fits all your needs is crucial. The following will let you avoid this mistake.
Don’t hesitate to shop around if your insurance rates are increasing too much.
Another best solution would be to work with an independent insurance agent.
**6.Paying for Unnecessary Insurance**
Another mistake people make is paying for insurance they don’t need. For instance, if rebuilding your property from scratch would cost $500,000, you don’t need to insure it for $2.4 million. Check out what you can do instead:
Find the right insurance agent who can help you choose policy types and deductibles as per your business needs.
Look where you’re double-covered and cut costs.
**7.Choosing Cheap Insurance**
Whether small or big, every business owner looks for ways to save money. As a result, they choose the cheapest insurance policies with the cheapest coverage possible. Little do they know that this leads to being under-covered. The following will show the right way:
Make sure your business is properly covered and not underinsured.
Do some research before signing an insurance policy.
**8.Not Investing in BOP**
Wondering what a BOP is? A business owner’s policy is offered mostly to small business owners. It bundles general liability insurance, business interruption insurance, and commercial property insurance. By seeking property insurance assistance, you can rest assured this policy will cover a loss up to the limits of your policy.
Instead, what you can do is:
Look for financial stability and insurance carriers who will offer you BOP.
Do not hesitate to invest in a Business Owner’s Policy.
**9.Dropping Long-Term Care Policy**
Another mistake people often make is to drop their long-term care insurance policy when notified of a premium increase. Also, remember that getting a new plan can cost even more, especially if you’ve aged since you bought your first coverage. There are also chances of future financial difficulties if you fully ignore long-term care insurance. The best ways to avoid them are-
Make sure to calculate and track the appropriate insurance coverage period.
Talk to a financial advisor for the best knowledge and help.
**10.Not Making Payments on Time**
Ignoring payment notifications is the worst mistake one can make. Sometimes, policies get canceled because people often fail to make payments on time or don’t pay at all. To avoid such mistakes, you may follow the ways mentioned below:
If canceled, call your P&C Insurance agent for help and negotiation.
Seek advice on preventing this from happening again or keep track of reminders for your payments.
Seek Professional Guidance- We Value Your Business
We at [FBSPL](https://www.fusionfirst.com/) are your trusted partners in delivering top-notch business support. Our team of thoughtful and qualified business process management experts will guide you in every step. Be it any issue, we have the best solution for you. Contact our professionals with any queries or to help avoid unwanted property and casualty insurance mistakes. We are a renowned organization that provides excellent business management and IT consulting for businesses. For best-in-class business support and expert guidance- don’t think twice, talk to us, we are here to provide the handholding. | fbspl |
1,831,834 | The Thrilling World of Horse Racing: Where Speed, Skill, and Tradition Collide | In the realm of sports, maxichevalcom few events capture the imagination and excitement of... | 0 | 2024-04-23T15:13:21 | https://dev.to/stories-blogs/the-thrilling-world-of-horse-racing-where-speed-skill-and-tradition-collide-4pfa | In the realm of sports, **[maxichevalcom](https://maxicheval.com/)** few events capture the imagination and excitement of spectators quite like horse racing. From the thundering hooves pounding against the track to the heart-stopping finishes at the wire, horse racing embodies the essence of athleticism, competition, and tradition. With a rich history spanning centuries and a global presence that transcends borders, horse racing remains one of the most beloved and iconic sports in the world.
A Tapestry of History and Tradition
Dating back to ancient civilizations, horse racing has deep roots in cultures around the world. From the chariot races of ancient Greece to the regal pageantry of English horse racing, the sport has evolved over millennia, weaving itself into the fabric of societies and traditions. Today, horse racing continues to honor its storied past while embracing modern innovations and advancements, creating a dynamic and ever-evolving spectacle for fans to enjoy.
The Majesty of the Thoroughbred
Central to the allure of horse racing is the magnificent thoroughbred, a breed renowned for its speed, agility, and grace. Bred for centuries for their athletic prowess and competitive spirit, these majestic creatures captivate audiences with their beauty and power on the track. From the sleek contours of their muscular bodies to the fiery determination in their eyes, thoroughbreds epitomize the spirit of competition and inspire awe and admiration wherever they go.
The Drama of the Racecourse
At the heart of horse racing lies the drama and excitement of the racecourse, where horse and rider come together in a thrilling display of skill and athleticism. From sprint races over short distances to marathon tests of endurance, each race presents its own unique challenges and opportunities for glory. Jockeys, the fearless pilots of these equine athletes, must navigate the twists and turns of the track, making split-second decisions that can mean the difference between victory and defeat.
A Global Phenomenon
With its roots firmly planted in cultures around the world, horse racing has become a global phenomenon with a diverse array of events and competitions held on every continent. From the prestigious classics of Europe to the high-stakes derbies of North America and the pulsating carnival atmosphere of the Melbourne Cup in Australia, horse racing offers something for fans of all ages and backgrounds to enjoy. Whether attending a race in person or watching from afar, fans are drawn to the excitement, pageantry, and tradition of this timeless sport. See also: **[pronostic gratuit turf 100](https://maxicheval.com/pronostic-gratuit-turf-100/)**.
An Enduring Legacy
As horse racing continues to evolve in the modern era, its enduring legacy remains as strong as ever. From the historic racetracks that have stood the test of time to the legendary champions that have captured the hearts of fans, the sport continues to inspire, entertain, and unite people around the world. Whether celebrating the triumphs of the past or eagerly anticipating the excitement of the future, one thing is certain: the thrill of horse racing will continue to captivate audiences for generations to come. | stories-blogs | |
1,831,848 | Join the BitNest Community: A Hub for Crypto Enthusiasts | A post by xin yang | 0 | 2024-04-23T15:23:45 | https://dev.to/bitneestejhon/join-the-bitnest-community-a-hub-for-crypto-enthusiasts-748 |
 | bitneestejhon | |
1,832,246 | 📦 Put dev dependencies in tools.go | TL;DR: Use a tools.go file with //go:build tools to list import _ "github.com/octocat/somedevtool" to... | 0 | 2024-04-24T00:42:20 | https://dev.to/jcbhmr/put-dev-dependencies-in-toolsgo-9e6 | go, todayilearned, todayisearched | **TL;DR:** Use a `tools.go` file with `//go:build tools` to list `import _ "github.com/octocat/somedevtool"` to stop `go mod tidy` from removing them.
```go
//go:build tools

package tools

import (
	_ "github.com/melbahja/got/cmd/got"
	_ "github.com/arkady-emelyanov/go-shellparse"
)
```
<sup>💡 `go mod tidy` doesn't care if the package is cmd or lib</sup>
Then you can use whatever you list in `tools.go` in `//go:generate` and `task.go`!
```go
package mylib

//go:generate -command got go run github.com/melbahja/got/cmd/got

import (
	_ "embed"
)

//go:generate got https://example.org/image.png -o image.png

//go:embed image.png
var imagePNG []byte
```
```go
// task.go
//go:build ignore

package main

import "github.com/arkady-emelyanov/go-shellparse"

func main() {
	// ...
}
```
[🏎 Use task.go for your Go project scripts](https://dev.to/jcbhmr/use-taskgo-for-your-go-project-scripts-2cm4)
Other resources
- https://play-with-go.dev/tools-as-dependencies_go119_en/
- https://www.tiredsg.dev/blog/golang-tools-as-dependencies/ | jcbhmr |
1,832,266 | The Choice Dilemma Python Beginners Face When Searching for Packages | Choice Paralysis In the Python... | 0 | 2024-04-24T02:06:47 | https://dev.to/neilskilltree/python-xin-shou-zai-zhao-xun-tao-jian-shi-de-xuan-ze-kun-jing-m34 | python, programming, coding, beginners | ## Choice Paralysis
In the world of Python, choosing the right package can be a challenge, especially for beginners.
The first difficulty they face is not knowing which keywords to use for a precise, fast search. Second, lacking domain-specific background knowledge, they may misread the documentation — mistakenly concluding that a package fits their needs perfectly, or, conversely, missing the best choice altogether.
## Case Study: Flask
Take Flask as an example. It is a tool for building websites that handles the site's behind-the-scenes work; it is small, yet highly extensible.
Here is the package description published by the Flask development team on PyPI:

The first paragraph immediately throws out a pile of technical terms — WSGI, Werkzeug, Jinja, and so on — to describe its characteristics. The second paragraph explains that Flask does not force programmers to use any particular framework, a message aimed at veterans of the trade.
These descriptions are not wrong in themselves — for programmers with the relevant expertise, they clearly convey the package's design intent. For newcomers who have just started out or hope to switch careers, however, they read like an arcane text: reading them is as good as not reading them at all!
---
So is the explanation on its official site any better? Let's look at its quick introduction ([Quickstart](https://flask.palletsprojects.com/en/3.0.x/quickstart/)):

It presents the smallest, most basic Flask program, and below it every line is explained one by one — very thoughtful. However, it picks a rather unfortunate example, because Flask's strength lies in helping you build dynamic websites. A Python newcomer coming over from front-end work may well think: I could build this kind of "Hello, World!" page with plain HTML — why go to all the trouble of writing a program!?
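For readers who have not seen it, the minimal program from the Flask Quickstart looks like this (reproduced here as a sketch; it assumes Flask is installed via `pip install flask`):

```python
from flask import Flask

# Flask does the WSGI "backstage" work; we only declare routes and responses.
app = Flask(__name__)

@app.route("/")
def hello_world():
    return "<p>Hello, World!</p>"
```

Running `flask run` in the same directory serves this page on http://127.0.0.1:5000 — and the value over a static HTML file only becomes apparent once routes start returning dynamic, per-request content.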
---
The official site also provides a Tutorial to help programmers quickly become familiar with Flask's development structure:

This example is far more suitable! It helps us understand how to use Flask to build a blog system whose content is constantly updated. Still, one problem remains: this showcases only one of Flask's several powerful capabilities.
Imagine a beginner's situation: they may be in a hurry to finish a shopping-cart website project. After working through this tutorial, they might mistakenly conclude that "Flask isn't for me, because I don't need to build a blog," give up on Flask, and go looking for another solution.
## A Better Approach
I must stress again that the content of these documents is correct and clearly directed; they simply assume the reader already has certain foundational knowledge and can therefore easily understand the material and apply it to their own projects.
But Python beginners who keep getting stuck often have no choice but to keep combing the web for all kinds of materials and tutorial videos, hoping that one day everything will suddenly click.
I believe a plainer, easier-to-understand introduction is needed: start from the angle of practical applications, first build a big-picture view of the package's range of capabilities and strengths, and only then dig into the finer details according to your needs. This approach not only spares beginners unnecessary wasted effort but also speeds up their learning.
I am working in this direction, hoping to help every newcomer move past the beginner stage quickly.
| neilskilltree |
1,832,629 | Do we have any game-devs here? Happy to connect! | I am new to this platform and looking forward to the game-dev community here! | 0 | 2024-04-24T04:32:56 | https://dev.to/anulagarwal12/do-we-have-any-game-devs-here-happy-to-connect-2f65 | I am new to this platform and looking forward to the game-dev community here! | anulagarwal12 | |
1,832,631 | Scaling Globally: Growth Strategies for Offshore Development Centers | Hey guys! Today I want to dive into an important topic for those of us working in or running offshore... | 0 | 2024-04-24T04:54:40 | https://dev.to/sofiawagner/scaling-globally-growth-strategies-for-offshore-development-centers-48d8 | odc, offshore, softwaredevelopment | Hey guys! Today I want to dive into an important topic for those of us working in or running offshore development centers - how to effectively scale our operations to keep up with global growth.
Like many of you, I've seen firsthand just how booming the IT outsourcing industry has become. The numbers speak for themselves - this sector brought in $412 billion in revenue worldwide in 2022 alone, with projections of hitting [$778 billion](https://www.statista.com/study/84971/it-outsourcing-report/) by 2028 according to Statista. With that kind of meteoric growth happening, having smart strategies in place for scaling our offshore teams is absolutely crucial.
So let's get into some key areas I believe every offshore center needs to focus on as they expand globally:
## 1. Hiring and Training

Getting this process nailed is job one. As we bring on more developers, QA engineers, project managers, etc., standardized recruitment and rigorous onboarding programs are a must. We have to be able to quickly integrate new talent into existing processes. Ongoing training is also huge - we need to be continuously upskilling our people across new technologies so we can tackle increasingly complex client projects.
## 2. Expanded Service Offerings

Diversifying what we offer is pivotal for fueling long-term growth. Maybe we start by specializing in mobile app dev or cloud migration services. As we level up our expertise into hot new areas like AI, blockchain, metaverse development, etc., it opens up so many more revenue opportunities with bigger clients. Constantly assessing market demands and client needs through feedback should drive what new capabilities we build.
## 3. Tech Stack Upgrades

Our hardware, software, networking - all of it needs to be enterprise-grade and highly scalable to support rapid expansion. Things like cloud computing, containerization, cybersecurity hardening...investing in these is table stakes. We have to ensure we can spin up more resources seamlessly while keeping data secure as we onboard more global clients.
## 4. University/Community Partnerships

Finding talent is always a challenge when you're growing fast. Developing tight relationships with local universities and prestigious tech programs is huge for creating a steady pipeline of newly minted grads to hire from. And getting embedded in the broader tech community through meetups, open-source projects, etc. exposes us to emerging skill sets and potential partners for things like joint R&D.
## 5. Agile Processes

I can't emphasize this one enough - implementing rock-solid agile development practices like [Scrum](https://www.scrum.org/learning-series/what-is-scrum), [Kanban](https://asana.com/resources/what-is-kanban), etc. gives us the flexibility and real-time communication to be incredibly responsive as we take on new clients, spin up new teams, and adapt to changing requirements on a dime. Agile project management tools are also key for maintaining consistent processes and knowledge-sharing across distributed squad locations worldwide.
There's definitely a lot that goes into building an [offshore development center](https://stssoftware.ch/blogs/offshore-development-center/) for the long haul. But if we make smart investments into talent, technology, partnerships, and agile processes, we can scale our operations effectively to meet the incredible demand we're seeing.
That's my perspective, but I'd love to hear your thoughts as well! What growth strategies have been most important for your own teams? Let's discuss in the comments below!
| sofiawagner |
1,832,864 | Do adjustable kettlebell sets work? | An adjustable kettlebell set typically consists of a base unit with various weight plates that can be... | 0 | 2024-04-24T09:24:55 | https://dev.to/zhenhan/do-adjustable-kettlebell-sets-work-113o | An adjustable kettlebell set typically consists of a base unit with various weight plates that can be added or removed to adjust the total weight. They're great for saving space and offering versatility in your workouts. An adjustable kettlebell set is a versatile fitness tool that allows you to customize your workout by adjusting the weight according to your strength and fitness level. These sets typically include a base unit with a handle and multiple weight plates that can be easily added or removed to achieve different weight configurations. They're ideal for saving space and are suitable for a wide range of exercises, including swings, squats, presses, and more. Adjustable kettlebell sets are popular among fitness enthusiasts who value convenience and flexibility in their training routines.
**Do adjustable kettlebell sets work?**
Adjustable kettlebell sets are effective for strength training and improving overall fitness. They offer several benefits:
**Versatility**
With adjustable weight options, you can perform a wide range of exercises targeting various muscle groups, including swings, squats, lunges, presses, and rows.
**Space-saving**
Unlike traditional kettlebell sets, adjustable kettlebells take up less space because you only need one base unit with interchangeable weight plates, rather than multiple individual kettlebells.
**Cost-effective**
Investing in an adjustable kettlebell set can be more economical than buying multiple traditional kettlebells of different weights.
**Progressive overload**
As you progress in your fitness journey, you can easily increase the weight by adding more plates, allowing you to continually challenge yourself and build strength over time.
Overall, adjustable kettlebell sets are a convenient and efficient way to incorporate kettlebell exercises into your workout routine, making them suitable for beginners and experienced fitness enthusiasts alike.
**What is the disadvantage of an adjustable kettlebell set?**
While adjustable kettlebell sets offer numerous advantages, they also have some potential drawbacks:
**Bulkiness**
Adjustable kettlebell sets can be bulkier compared to traditional kettlebells, especially when fully loaded with weight plates. This may make them slightly less convenient for certain exercises or movements that require a more compact kettlebell.
**Adjusting weight**
Adjusting the weight of an adjustable kettlebell during a workout can interrupt the flow of your routine, especially if you need to add or remove several weight plates to achieve the desired weight.
**Durability**
Some adjustable kettlebell sets may not be as durable as traditional kettlebells, particularly if they have moving parts or mechanisms for adjusting the weight. This could potentially lead to wear and tear over time.
**Cost**
While adjustable kettlebell sets can be cost-effective in the long run, the initial investment may be higher compared to purchasing a single traditional kettlebell of a specific weight.
Despite these potential disadvantages, many people find adjustable kettlebell sets to be a practical and versatile fitness tool that meets their needs for strength training and overall fitness. | zhenhan | |
1,920,638 | What is Offline Authentication | In our increasingly connected world, the assumption is often that internet access is always... | 0 | 2024-07-12T07:48:02 | https://dev.to/blogginger/what-is-offline-authentication-4a37 | In our increasingly connected world, the assumption is often that internet access is always available. However, there are numerous scenarios where reliable online connectivity cannot be guaranteed, necessitating robust security measures that do not depend on continuous internet access. This is where offline authentication comes into play. In this blog, we will explore what offline authentication is, how it works, and its importance across various industries and applications.

## What is Offline Authentication?
Offline authentication is the process of verifying a user’s identity without the need for an active internet connection. This method is particularly beneficial in remote locations, secure facilities, or situations where network connectivity is intermittent or completely unavailable. By relying on locally stored credentials or other authentication mechanisms, offline authentication ensures that security is maintained even in the absence of online verification.
## How Offline Authentication Works
Offline authentication employs several methods to securely verify user identity. Here are some common approaches:
### 1. **Locally Stored Credentials**
In this method, user credentials such as passwords, PINs, or biometric data are stored locally on the device. When the user attempts to log in, the system compares the entered credentials with the locally stored data to authenticate the user. This approach eliminates the need for a network connection to validate credentials.
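As a minimal sketch of this approach (standard library only — a real credential store would also handle secure storage of the record itself), the device can keep a salted PBKDF2 hash instead of the raw password and verify entries entirely offline:

```python
import hashlib
import hmac
import os

def store_credential(password: str) -> dict:
    """Derive a salted PBKDF2 hash to keep on the device instead of the raw password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return {"salt": salt, "digest": digest}

def verify_offline(password: str, record: dict) -> bool:
    """Re-derive the hash from the entered password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), record["salt"], 600_000)
    return hmac.compare_digest(candidate, record["digest"])

record = store_credential("correct horse battery staple")
print(verify_offline("correct horse battery staple", record))  # True
print(verify_offline("wrong password", record))                # False
```

No network round-trip is involved at any point: enrollment and verification both happen against the locally stored record.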
### 2. **Time-Based One-Time Passwords (TOTPs)**
TOTP systems generate a temporary code that is valid for a short period, usually 30 seconds to a minute. These codes are generated based on a shared secret and the current time, allowing users to authenticate without an internet connection. Authenticator apps like Google Authenticator and Authy commonly use this method.
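The TOTP scheme (RFC 6238, HMAC-SHA1 variant) is simple enough to sketch with the standard library alone, which also shows why no connectivity is needed — both sides derive the code independently from the shared secret and the clock. This is a minimal illustration, not a production implementation (real apps use Base32-encoded secrets and tolerate clock drift):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time // step)          # time steps since the Unix epoch
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at time 59 -> 94287082 (last 6 digits)
print(totp(b"12345678901234567890", for_time=59))  # 287082
```

The verifying system runs the same function with the same secret; as long as both clocks roughly agree, the codes match without either side contacting a server.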
### 3. **Smart Cards and Security Tokens**
Smart cards and hardware security tokens can securely store authentication credentials. When used in conjunction with a reader device, these tokens allow users to authenticate themselves offline. This method is frequently used in high-security environments such as government and military facilities.
### 4. **Biometric Authentication**
[Biometric authentication](https://www.authx.com/blog/what-is-biometric-authentication/?utm_source=devto&utm_medium=SEO&utm_campaign=blog&utm_id=K003), such as fingerprints or facial recognition, can be stored locally on a device. When the user attempts to log in, the system scans the biometric data and compares it to the stored template for authentication. This method is highly secure and convenient, especially for mobile devices.
## Why Offline Authentication is Essential
### 1. **Ensuring Security in Remote Locations**
In remote or rural areas where internet connectivity is unreliable or unavailable, offline authentication ensures that users can still securely access systems and data. This is particularly important for industries such as mining, oil and gas, and agriculture, where operations often take place in isolated locations.
### 2. **Maintaining Access During Network Outages**
Network outages can occur due to various reasons, including technical issues, natural disasters, or cyberattacks. Offline authentication ensures that critical systems remain accessible during such outages, minimizing disruption and maintaining operational continuity.
### 3. **Enhancing Security in Secure Facilities**
Certain environments, such as government buildings, military installations, and research labs, require stringent security measures and may restrict internet access for security reasons. Offline authentication provides a secure way to verify user identity in these high-security settings without relying on an online connection.
### 4. **Reducing Dependency on Network Infrastructure**
Offline authentication reduces the dependency on network infrastructure for security purposes. This can lead to cost savings by minimizing the need for extensive network coverage and maintenance. Additionally, it can enhance security by reducing the attack surface for potential cyber threats targeting network vulnerabilities.
## Best Practices for Implementing Offline Authentication
### 1. **Use Strong and Diverse Authentication Methods**
Implement a combination of strong authentication methods, such as biometric data and smart cards, to enhance security. Avoid relying solely on passwords, as they are more susceptible to being compromised.
### 2. **Regularly Update and Synchronize Credentials**
Ensure that locally stored credentials and authentication mechanisms are regularly updated and synchronized with central systems when an internet connection is available. This helps maintain the accuracy and security of the authentication process.
### 3. **Ensure Secure Storage of Credentials**
Use secure storage mechanisms, such as encrypted storage, to protect locally stored credentials. This prevents unauthorized access to sensitive data in case the device is lost or stolen.
### 4. **Conduct Regular Security Audits**
Regularly audit your offline authentication systems to identify and address potential security vulnerabilities. This helps ensure that your authentication methods remain robust and effective over time.
## Conclusion
[Offline authentication](https://www.authx.com/offline-authentication/?utm_source=devto&utm_medium=SEO&utm_campaign=blog&utm_id=K003) is crucial for ensuring secure access to systems and data in environments where internet connectivity is limited or unavailable. By leveraging locally stored credentials, time-based one-time passwords, smart cards, and biometric data, businesses can maintain high levels of security and operational continuity. As cyber threats continue to evolve, implementing robust offline authentication methods is essential for safeguarding critical information and enhancing overall security. | blogginger | |
1,833,214 | The Secret Language of Design: How Subtle Cues Shape User Behavior | Have you ever walked into a store and felt an irresistible urge to browse? Or scrolled through a... | 0 | 2024-04-24T16:18:07 | https://dev.to/sahilshityalkar/the-secret-language-of-design-how-subtle-cues-shape-user-behavior-11j4 | Have you ever walked into a store and felt an irresistible urge to browse? Or scrolled through a website and found yourself drawn to a specific product? It might not be magic – it's the power of design psychology.
Designers wield a secret language, using subtle cues to influence our thoughts and actions. They understand how we perceive color, layout, and even negative space. By carefully crafting these elements, they can create user experiences that are not only functional but also persuasive.
**The Power of Color:**
Colors evoke emotions and associations. Warm colors like red and orange can create a sense of urgency or excitement, while cool colors like blue and green promote feelings of trust and calmness. Designers strategically use color palettes to influence user behavior. For instance, a call to action button might be a vibrant red to grab attention, while a calming blue might be used for a meditation app's interface.
**The Art of Layout:**
The way elements are arranged on a page or screen is far from random. Designers use layout principles like hierarchy and visual weight to guide users' eyes. Headlines and prominent visuals are placed strategically to capture immediate attention, while less crucial information might be tucked away in corners. This layout hierarchy subconsciously tells users where to focus and what action to take.
**The Allure of Negative Space:**
Empty space, often called negative space, is just as important as the elements it surrounds. It can create a sense of balance and focus, preventing users from feeling overwhelmed. Imagine a website crammed with text and images – it would be visually chaotic and difficult to navigate. By using negative space strategically, designers ensure users can easily find the information they need and complete desired actions.
Understanding the psychology behind design choices empowers you to become a more conscious user. Next time you browse a website or explore an app, take a moment to notice the subtle cues at play. How are colors used? How does the layout guide your eyes? By recognizing these tactics, you can become a more informed and discerning user.
**The Call to Action:**
Design is a fascinating field that blends creativity with psychology. By understanding how design choices influence us, we can appreciate the craft behind the products and experiences we interact with daily. Are you interested in learning more about the dark patterns some designers use to manipulate users? Let me know in the comments below! | sahilshityalkar | |
1,833,377 | PTE Write Essay | Template and Sample Topics | PTE Write Essay is the second PTE Writing task, which asks you to read an essay topic and write a... | 0 | 2024-04-24T18:45:17 | https://dev.to/edutrainexpte/pte-write-essay-template-and-sample-topics-19dg | [PTE Write Essay](https://youtu.be/PrOHzrKNMSQ) is the second PTE Writing task, which asks you to read an essay topic and write a response of between 200 and 300 words. The [PTE Write Essay](https://edutrainex.com/blog/pte-write-essay-template-and-sample-topics/) task comes after the Summarize Written Text questions in the exam and is the most crucial task of the writing section if you want to achieve a high writing score. Although this task contributes only to your PTE Writing score, it alone carries substantial weight in your overall writing result on the PTE test. | edutrainexpte |
1,833,414 | XDebug with WP-Setup | Introduction WP Setup has been updated to version 1.1.0, introducing Xdebug support and... | 27,179 | 2024-04-24T23:11:16 | https://dev.to/lucascarvalhopl/xdebug-with-wp-setup-1hkg | wordpress, news, php, testing | ## Introduction
WP Setup has been updated to version 1.1.0, introducing [Xdebug](https://xdebug.org/) support and allowing for easy generation of test coverage reports.
Additionally, some small fixes and improvements were made, such as using the [adm-zip](https://www.npmjs.com/package/adm-zip) library instead of [unzipper](https://www.npmjs.com/package/unzipper). This provides a simpler, more reliable extractor and avoids decompression errors like those seen with [Query Monitor](https://br.wordpress.org/plugins/query-monitor/) (which we add by default to wp-setup.json during initialization), where the unzip process sometimes did not complete correctly and caused errors while loading WordPress.
## Running XDebug
Now we can simply start the containers with Xdebug running by adding the `--xdebug` flag to the start command. This restarts all server and CLI containers with Xdebug up and running.
You can use the recommended scripts from the wp-setup [readme](https://www.npmjs.com/package/wp-setup) to start easily with the command `npm run env:start:xdebug`.
With this, you can use Xdebug in all environments, enabling debugging of CLI commands, tests (facilitating a better TDD approach during development), and normal HTTP requests through the server.
To stop only Xdebug when it is not needed, run the stop command with the `--xdebug` flag or, with our package scripts, run `npm run env:stop:xdebug`.
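For reference, the package scripts mentioned above could be wired up in `package.json` roughly like this (a sketch inferred from the commands in this post — the script names and flags here are assumptions, so check the wp-setup readme for the recommended set):

```json
{
  "scripts": {
    "env:start": "wp-setup start",
    "env:start:xdebug": "wp-setup start --xdebug",
    "env:stop": "wp-setup stop",
    "env:stop:xdebug": "wp-setup stop --xdebug"
  }
}
```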
### Mapping directories
As we are running inside Docker containers, we need to map our local directories to the container ones so we can fully use the debugger from our IDE.
In [VSCode](https://code.visualstudio.com/), for example, this can be done by adding the following `.vscode/launch.json` file:
```json
{
"version": "0.2.0",
"configurations": [
{
"name": "Listen for XDebug",
"type": "php",
"request": "launch",
"port": 9003,
"pathMappings": {
"/var/www/html/wp-content/plugins/my-plugin": "${workspaceFolder}/plugins/my-plugin",
}
}
]
}
```
**Replace the pathMappings with your project directories accordingly**
With this, we can go to the VSCode debug tab and click "Listen for XDebug".

## Test coverage report
An important aspect of any development project is to measure how much of our code is tested. This allows us to gauge the reliability of our codebase before sending it to production.
The CLI test container now starts with XDebug in "coverage" mode by default, enabling the generation of this report.
Currently, it is not possible to run `--coverage` directly with our `global-pest` command. To use CLI coverage (very useful for enforcing a [minimum test coverage](https://pestphp.com/docs/test-coverage#content-minimum-threshold-enforcement) in CI processes), you need to require Pest and `yoast/phpunit-polyfills` locally in your project.
```bash
composer require --dev pestphp/pest yoast/phpunit-polyfills
```
then you can run your tests with your locally installed pest with:
```bash
npx wp-setup run -w . wp-test-cli ./vendor/bin/pest
```
This will give you full Pest CLI access and allow the use of the `--coverage` flag.

## Conclusion
This latest update introduces important enhancements designed to streamline and improve your WordPress development experience. While the new features strengthen the local development workflow with powerful debugging and testing capabilities, it is important to note that WP Setup is not yet fully optimized for running CI processes directly. However, the groundwork has been laid, paving the way for future enhancements that will support more robust CI integrations.
As the sole developer behind WP Setup, I invite other developers to explore the potential of this tool. Your feedback and contributions are invaluable, not only to improve its existing features but also to help expand its capabilities. The project is open for contributions on [GitHub](https://github.com/Luc-cpl/wp-setup), and I am eager to collaborate with other developers to make WP Setup even more powerful and versatile.
Please take this opportunity to test [WP Setup](https://www.npmjs.com/package/wp-setup). Together, we can work towards building a comprehensive development tool that better serves the WordPress community.
| lucascarvalhopl |