id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,888,225 | Rhude Clothing The Ultimate Blend of Comfort and Couture | Fashion is ever-evolving, a dynamic canvas where art and practicality converge. Among the many brands... | 0 | 2024-06-14T10:01:32 | https://dev.to/ali_sajjad_3813614dd88ae6/rhude-clothing-the-ultimate-blend-of-comfort-and-couture-21bg | rhude, hoodie, rhudeclothing, rhudeofficial | Fashion is ever-evolving, a dynamic canvas where art and practicality converge. Among the many brands that have successfully carved a niche in this competitive landscape, Rhude Clothing stands out as a beacon of innovation. Rhude Clothing: The Ultimate Blend of Comfort and Couture, is a name that resonates with [Rhude Hoodie](https://rhudehood.store/) fashion aficionados and casual dressers alike. This article delves into the essence of Rhude Clothing, exploring how it masterfully combines comfort with high fashion, creating a unique style statement that transcends traditional boundaries.
The Genesis of Rhude Clothing
Rhude Clothing was founded by Rhuigi Villaseñor in 2015. Born in Manila and raised in Los Angeles, Villaseñor's multicultural background significantly influences his designs. His journey from a budding designer to the helm of a globally recognized brand is nothing short of inspirational. Rhude Clothing: The [Rhude Shorts](https://rhudehood.store/shorts/) Ultimate Blend of Comfort and Couture, began as a passion project, but its unique approach to streetwear quickly gained traction. The brand's ethos is deeply rooted in Villaseñor's personal experiences and his vision of creating pieces that embody both luxury and ease.
The Philosophy Behind Rhude Clothing
At the heart of Rhude Clothing lies a philosophy that prioritizes quality, comfort, and style. The brand's tagline, Rhude Clothing: The Ultimate Blend of Comfort and Couture, encapsulates this ethos perfectly. Each piece is meticulously crafted, blending high-end fashion with practical wearability. This balance is achieved through careful selection of materials, innovative design techniques, and a keen eye for detail. Rhude's collections often feature a mix of classic silhouettes with modern twists, creating garments that are both timeless and contemporary.
The Signature Aesthetic of Rhude
Rhude Clothing has a distinctive aesthetic that sets it apart from other brands. The designs often feature a monochromatic color palette, intricate graphics, and a blend of vintage and modern elements. This unique style is a testament to Villaseñor's ability to fuse different influences seamlessly. For instance, the bandana print, a recurring motif in Rhude's collections, is [Rhude Clothing](https://rhudclothing.store/) a nod to the designer's Filipino heritage. This signature aesthetic not only defines the brand but also appeals to a diverse audience, making Rhude Clothing: The Ultimate Blend of Comfort and Couture.
The Fusion of Streetwear and Luxury
One of the defining features of Rhude Clothing is its ability to merge streetwear with luxury fashion. Streetwear, traditionally characterized by its casual and urban roots, often seems worlds apart from the opulence of couture. However, Rhude bridges this gap effortlessly. The brand's collections feature everything from hoodies and track pants to tailored blazers and trousers, each piece exuding a sense of laid-back luxury. This fusion of streetwear and high fashion is what makes Rhude Clothing: The Ultimate Blend of Comfort and Couture.
Craftsmanship and Quality
When it comes to craftsmanship, Rhude Clothing does not cut corners. Each garment is crafted with precision and attention to detail, ensuring superior quality. The use of premium materials, such as high-grade cotton, silk, and leather, is a hallmark of the brand. This commitment to quality is evident in every piece, from the stitching to the finishing touches. Rhude's dedication to craftsmanship ensures that every item not only looks good but also stands the test of time, reinforcing the brand's reputation as Rhude Clothing: The Ultimate Blend of Comfort and Couture.
Celebrity Endorsements and Collaborations
Rhude Clothing's rise to prominence has been bolstered by its association with various celebrities and high-profile collaborations. A-listers like Kendrick Lamar, LeBron James, and Justin Bieber have been spotted wearing Rhude, amplifying the brand's visibility and appeal. Moreover, collaborations with renowned brands such as Puma and Thierry Lasry have further cemented Rhude's status in the fashion industry. These partnerships have not only expanded Rhude's reach but also showcased its versatility and innovative spirit, solidifying its position as Rhude Clothing: The Ultimate Blend of Comfort and Couture.
Sustainability and Ethical Practices
In an era where sustainability is paramount, Rhude Clothing is making strides towards more ethical practices. The brand is committed to reducing its environmental footprint through responsible sourcing of materials and sustainable manufacturing processes. By prioritizing eco-friendly practices, Rhude is not only enhancing its brand value but also contributing to a larger movement towards sustainability in fashion. This commitment to ethical practices underscores the brand's holistic approach to fashion, further establishing Rhude Clothing: The Ultimate Blend of Comfort and Couture.
The Influence of Pop Culture
Pop culture plays a significant role in shaping Rhude Clothing's identity. Villaseñor often draws inspiration from music, art, and cinema, infusing these elements into his designs. This cultural resonance makes Rhude more than just a fashion label; it becomes a part of the cultural zeitgeist. The brand's ability to capture the essence of contemporary culture and translate it into wearable art is a key factor in its appeal. This cultural relevance is another reason why Rhude Clothing is known as the ultimate blend of comfort and couture.
Expanding the Rhude Universe
Since its inception, Rhude has expanded beyond clothing to include accessories, footwear, and even home goods. This diversification has allowed the brand to create a cohesive lifestyle offering, appealing to a broader audience. Each product line maintains the brand's core principles of quality, comfort, and style, ensuring that every item, whether it's a pair of sneakers or a scented candle, embodies Rhude Clothing: The Ultimate Blend of Comfort and Couture.
Customer Experience and Brand Loyalty
Rhude Clothing places a high emphasis on customer experience. From the moment a customer interacts with the brand, whether online or in-store, they are treated to a seamless and luxurious experience. This commitment to customer satisfaction has fostered a loyal fanbase, with many customers returning for the quality and exclusivity that Rhude offers. This strong customer relationship is a testament to the brand's dedication to excellence, reinforcing its standing as Rhude Clothing: The Ultimate Blend of Comfort and Couture.
The Future of Rhude Clothing
Looking ahead, the future of Rhude Clothing appears promising. With its innovative designs, commitment to quality, and growing global presence, Rhude is poised to continue its upward trajectory. The brand's ability to adapt to changing fashion trends while staying true to its core values will be crucial in maintaining its relevance and appeal. As Rhude continues to evolve, it remains dedicated to its mission of delivering comfort and couture, solidifying its place in the fashion industry as Rhude Clothing: The Ultimate Blend of Comfort and Couture.
Rhude's Impact on the Fashion Industry
Rhude Clothing's influence extends beyond its immediate customer base. The brand has played a significant role in blurring the lines between streetwear and high fashion, challenging traditional fashion norms. This impact is evident in the way other brands have started to adopt similar approaches, integrating elements of streetwear into their luxury collections. Rhude's innovative spirit and willingness to push boundaries have not only set trends but also redefined what modern fashion can be. This trailblazing approach is a key reason why Rhude Clothing is regarded as the ultimate blend of comfort and couture.
Conclusion
Rhude Clothing: The Ultimate Blend of Comfort and Couture, is more than just a brand; it's a movement. From its humble beginnings to its current status as a global fashion powerhouse, Rhude has consistently delivered on its promise of combining comfort with high fashion. The brand's unique aesthetic, commitment to quality, and cultural relevance have made it a favorite among fashion enthusiasts and celebrities alike. As Rhude continues to innovate and grow, it remains dedicated to its mission of providing luxurious yet comfortable clothing. For those looking to make a style statement without compromising on comfort, Rhude Clothing is the ultimate choice. | ali_sajjad_3813614dd88ae6 |
1,888,223 | Harnessing Firebase in React with Custom Hooks: A Practical Guide | Explore how to create a custom React hook for real-time data fetching from a Firestore database. | 0 | 2024-06-14T10:00:37 | https://dev.to/itselftools/harnessing-firebase-in-react-with-custom-hooks-a-practical-guide-42l7 | react, firebase, realtime, customhooks |
Here at [ItselfTools](https://itselftools.com), we've gained substantial experience developing over 30 projects using technologies like Next.js and Firebase. One powerful pattern we've leveraged is the combination of React hooks and Firebase to manage real-time data efficiently. This article will dive into a specific implementation, where we create a custom hook to fetch and subscribe to document changes in a Firestore collection.
## Code Explanation
Let's break down the React hook `useFirestore` that we've implemented:
```javascript
import { useState, useEffect } from 'react';
import firebase from '../firebase/clientApp';

export const useFirestore = (collection) => {
  const [docs, setDocs] = useState([]);

  useEffect(() => {
    const unsubscribe = firebase.firestore().collection(collection).onSnapshot(snapshot => {
      const documents = snapshot.docs.map(doc => ({ id: doc.id, ...doc.data() }));
      setDocs(documents);
    });
    return () => unsubscribe();
  }, [collection]);

  return docs;
};
```
### Step-by-Step Breakdown
1. **Importing Dependencies**: The hook begins with importing `useState` and `useEffect` from React, which are essential for managing state and side effects within functional components.
2. **Initializing State**: We declare a state variable `docs` using `useState([])`, which will hold the array of documents fetched from Firestore.
3. **Setting Up Firestore Subscription**: Inside `useEffect`, we set up a real-time subscription to a Firestore collection. The `onSnapshot` method listens for changes in the specified collection. Each time a change occurs, it executes a callback that maps over the documents, combining each document's ID and its data into a single object. The resulting array then updates the `docs` state via `setDocs`.
4. **Clean-up Function**: It's crucial within `useEffect` to return a clean-up function that unsubscribes from the Firestore listener when the component unmounts, preventing memory leaks and performance issues.
5. **Returning the Documents**: Finally, the hook returns the `docs` array, making it available to any component that utilizes this custom hook.
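To illustrate that last step, here is a sketch of a component consuming the hook (the component name, import path, and `articles` collection are assumptions for illustration, not part of the original code):

```javascript
import React from 'react';
import { useFirestore } from '../hooks/useFirestore';

// Illustrative component; "articles" is an assumed collection name
function ArticleList() {
  // Subscribes to the collection in real time; starts as an empty array
  const articles = useFirestore('articles');

  return (
    <ul>
      {articles.map(article => (
        <li key={article.id}>{article.title}</li>
      ))}
    </ul>
  );
}

export default ArticleList;
```

Because the hook updates state on every snapshot, the component re-renders and the list stays in sync with Firestore without any manual refresh logic.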
## Practical Uses and Benefits
Integrating Firebase with React through custom hooks like `useFirestore` simplifies the process of fetching and listening to real-time data, ensuring UI components receive timely updates. This encapsulation not only enhances code reusability but also scales well with complex applications.
## Conclusion
This practical implementation demonstrates a robust method to interact with Firestore in React applications, making it easier to build dynamic and responsive user interfaces. To see this code and other exciting technologies in action, visit our applications: [Free Pronunciation Dictionary](https://how-to-say.com), [Unpack Zip, Rar, 7z Files](https://online-archive-extractor.com), and [Free Online English Word Search Tool](https://find-words.com).
We hope this guide helps you in your development journey, whether you're building a small project or a comprehensive application suite. | antoineit |
1,888,221 | Swiper loop not work normal when slidesPerView: "auto" | Swiper loop not work normal when slidesPerView: "auto" how to fix this? link codepen | 0 | 2024-06-14T10:00:06 | https://dev.to/kasym/swiper-loop-not-work-normal-when-slidesperview-auto-4oja | bug, loop, swiper, javascript | The Swiper loop does not work normally when slidesPerView: "auto". How can I fix this? Link: [codepen](https://codepen.io/Ermuhanow/pen/abrVXpQ) | kasym |
1,888,218 | Explore Rainbow Moonstone Bracelet from Silver Star Jewels | Introduction Having a great passion for jewelry, I have always been fascinated by the... | 0 | 2024-06-14T09:57:44 | https://dev.to/cheryl_najera_fd9ed1f49ec/explore-rainbow-moonstone-bracelet-from-silver-star-jewels-440c |

## **Introduction**
Having a great passion for jewelry, I have always been fascinated by the unique characteristics and beauty of natural stones. It gives me great pleasure to present to you today my analysis of the stunning [wholesale rainbow moonstone bracelet](https://silverstarjewellery.com/shop/bracelets/rainbow-moonstone) from Silver Star Jewels. This exquisite jewelry offers the wearer numerous benefits and has considerable cultural importance in addition to highlighting the beauty of the gemstone.
## **What is a Rainbow Moonstone?**
Rainbow moonstone, sometimes referred to as "adularia," is a kind of feldspar mineral with an amazing play of color that spans from blues and greens to gold and white. Because of the way it is internally structured, the stone appears iridescent and shimmering. Its name, "moonstone," came from its likeness to the moon's delicate, hazy radiance.
## **Benefits of Wearing Rainbow Moonstone Bracelets**
Beyond its alluring beauty, rainbow moonstone is said to have several metaphysical and spiritual properties. It is often associated with intuition, emotional balance, and inner growth. Wearing rainbow moonstone bracelets is claimed to make the wearer more relaxed, clear, and intuitive. The stone's properties are also reported to enhance imagination, creativity, and the ability to realize one's wishes.
## **The Significance of Moonstone in Different Cultures**
Moonstone has traditionally been valued by many different cultures all over the world. In ancient Hindu mythology, moonstone was believed to be solidified moonlight, so it was used widely in talismans and jewelry. In antiquity, moonstone was associated with the moon goddess Selene and thought to offer protection and good fortune. Even in the present day, moonstone remains a popular choice for jewelry, particularly bracelets, because of its captivating beauty and spiritual value.
## **Popular Designs and Styles of Rainbow Moonstone Bracelets**
Rainbow moonstone bracelet designs and types abound to accommodate a range of personal preferences. A rainbow moonstone bracelet, which comes in designs from understated elegance to eye-catching, dramatic ones, will satisfy every taste. Among the most regularly worn looks are:
**Simple and Elegant:**
A solitary, striking rainbow moonstone cabochon set on a sleek, silver or gold-plated band keeps this piece simple and elegant.
**Beaded Bracelets:**
Usually, in beaded bracelets, rainbow moonstone beads are separated by gold-filled or sterling silver spacers.
**Cluster Designs:**
Cluster designs create a bright, multidimensional look by arranging a collection of smaller rainbow moonstones.
**Macramé or Leather Bracelets:**
These bracelets pair the natural beauty of rainbow moonstone with the bohemian appeal of macramé or leather bands.
## **Where to Buy Wholesale Rainbow Moonstone Bracelets from Silver Star Jewels**
Looking into wholesale options, or want to add a stunning rainbow moonstone bracelet to your jewelry collection? **Silver Star Jewels** is a fantastic choice. A renowned supplier of high-quality, ethically manufactured gemstone jewelry, Silver Star Jewels offers rainbow moonstone bracelets in many styles and at many price points.
Sustainable practices and fair trade are what set Silver Star Jewels apart. They work directly with artisanal gemstone miners and craftspeople to ensure that every piece is created with the highest care and attention to detail. You may be certain that by purchasing from Silver Star Jewels you are supporting ethical and responsible business practices.
Explore our exquisite collection to find the perfect rainbow moonstone bracelet and many more kinds of [silver bracelets](https://silverstarjewellery.com/shop/bracelets/rainbow-moonstone) to give to your clients or to add to your jewelry box. Come experience the magic of this amazing gemstone right now at Silver Star Jewels.
## **Conclusion**
The rainbow moonstone bracelet from Silver Star Jewels is ultimately an incredibly beautiful piece of jewelry that blends fine workmanship, spiritual meaning, and natural beauty. Seeking a personal gift to improve your own well-being, or wholesale possibilities to offer this exquisite gemstone to your customers? Silver Star Jewels is the only place to look. Let rainbow moonstone improve your looks and your sense of inner peace.
| cheryl_najera_fd9ed1f49ec | |
1,888,217 | Chained Exceptions | Throwing an exception along with another exception forms a chained exception. The catch block... | 0 | 2024-06-14T09:55:30 | https://dev.to/paulike/chained-exceptions-384f | java, programming, learning, beginners | Throwing an exception along with another exception forms a chained exception. The **catch** block rethrows the original exception. Sometimes, you may need to throw a new exception (with additional information) along with the original exception. This is called _chained exceptions_. The program below illustrates how to create and throw chained exceptions.

The **main** method invokes **method1** (line 7), **method1** invokes **method2** (line 16), and **method2** throws an exception (line 24). This exception is caught in the **catch** block in **method1** and is wrapped in a new exception in line 19. The new exception is thrown and caught in the catch block in the **main** method in line 9. The sample output shows the output from the **printStackTrace()** method in line 10. The new exception thrown from **method1** is displayed first, followed by the original exception thrown from **method2**. | paulike |
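Since the program itself appears only as an image above, here is a sketch of the chained-exception pattern it describes, reconstructed from the explanation (the class name and exception messages are assumptions):

```java
public class ChainedExceptionDemo {
    public static void main(String[] args) {
        try {
            method1();
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    /** Catches the exception from method2 and wraps it in a new one. */
    public static void method1() throws Exception {
        try {
            method2();
        }
        catch (Exception ex) {
            // Chain the original exception as the cause of the new one
            throw new Exception("New info from method1", ex);
        }
    }

    /** Throws the original exception. */
    public static void method2() throws Exception {
        throw new Exception("New info from method2");
    }
}
```

Running it prints the stack trace of the new exception first, followed by the original exception listed as its cause, matching the sample output described above.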
1,888,211 | When to Use Exceptions | A method should throw an exception if the error needs to be handled by its caller. The try block... | 0 | 2024-06-14T09:47:04 | https://dev.to/paulike/when-to-use-exceptions-58e8 | java, programming, learning, beginners | A method should throw an exception if the error needs to be handled by its caller. The **try** block contains the code that is executed in normal circumstances. The **catch** block contains the code that is executed in exceptional circumstances. Exception handling separates error-handling code from normal programming tasks, thus making programs easier to read and to modify. Be aware, however, that exception handling usually requires more time and resources, because it requires instantiating a new exception object, rolling back the call stack, and propagating the exception through the chain of methods invoked to search for the handler.
An exception occurs in a method. If you want the exception to be processed by its caller, you should create an exception object and throw it. If you can handle the exception in the method where it occurs, there is no need to throw or use exceptions.
In general, common exceptions that may occur in multiple classes in a project are candidates for exception classes. Simple errors that may occur in individual methods are best handled without throwing exceptions. This can be done by using **if** statements to check for errors.
When should you use a **try-catch** block in the code? Use it when you have to deal with unexpected error conditions. Do not use a **try-catch** block to deal with simple, expected situations. For example, the following code
```java
try {
  System.out.println(refVar.toString());
}
catch (NullPointerException ex) {
  System.out.println("refVar is null");
}
```
is better replaced by
```java
if (refVar != null)
  System.out.println(refVar.toString());
else
  System.out.println("refVar is null");
```
Which situations are exceptional and which are expected is sometimes difficult to decide. The point is not to abuse exception handling as a way to deal with a simple logic test. | paulike |
1,888,216 | HTML QUESTIONS AND ANSWERS | Question1: What is HTML Answer1: HTML stands for HyperText Markup Language and is the language of the... | 0 | 2024-06-14T09:54:43 | https://dev.to/denkogee/html-questions-and-answers-3b0k | Question1: What is HTML
Answer1: HTML stands for HyperText Markup Language and is the language of the internet. It is the standard text formatting language used for creating and displaying pages on the Internet.
HTML documents are made up of elements and tags that format them for proper display on pages. | denkogee |
1,888,215 | HTML QUESTIONS AND ANSWERS | Question1: What is HTML Answer1: HTML stands for HyperText Markup Language and is the language of the... | 0 | 2024-06-14T09:54:21 | https://dev.to/denkogee/html-questions-and-answers-4p27 | Question1: What is HTML
Answer1: HTML stands for HyperText Markup Language and is the language of the internet. It is the standard text formatting language used for creating and displaying pages on the Internet.
HTML documents are made up of elements and tags that format them for proper display on pages. | denkogee |
1,888,213 | Rethrowing Exceptions | Java allows an exception handler to rethrow the exception if the handler cannot process the exception... | 0 | 2024-06-14T09:48:50 | https://dev.to/paulike/rethrowing-exceptions-4i2 | java, programming, learning, beginners | Java allows an exception handler to rethrow the exception if the handler cannot process the exception or simply wants to let its caller be notified of the exception.
The syntax for rethrowing an exception may look like this:
```
try {
  statements;
}
catch (TheException ex) {
  perform operations before exits;
  throw ex;
}
```
The statement **throw ex** rethrows the exception to the caller so that other handlers in the caller get a chance to process the exception **ex**. | paulike |
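As a concrete illustration of this template, the following sketch (the method name and messages are invented for this example) logs the problem and then rethrows the same exception so the caller can also handle it:

```java
public class RethrowDemo {
    public static void main(String[] args) {
        try {
            readScore("abc");
        }
        catch (NumberFormatException ex) {
            // The rethrown exception arrives here, in the caller
            System.out.println("Caught in main: " + ex.getMessage());
        }
    }

    public static double readScore(String input) {
        try {
            return Double.parseDouble(input);
        }
        catch (NumberFormatException ex) {
            // Perform operations before exiting, then rethrow to the caller
            System.out.println("readScore could not parse \"" + input + "\"");
            throw ex;
        }
    }
}
```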
1,888,212 | Better ORM then Sqlalchemy Python ? Tortoise ORM 101 | What is Tortoise ORM? Tortoise ORM is an easy-to-use Object Relational Mapper (ORM) for... | 0 | 2024-06-14T09:47:51 | https://dev.to/0xaungkon/better-orm-then-sqlalchemy-python-tortoise-orm-101-29hp | webdev, python, sql, database | ### What is Tortoise ORM?
Tortoise ORM is an easy-to-use Object Relational Mapper (ORM) for Python, inspired by Django. It is designed to be simple yet powerful, providing a high-level API for interacting with databases.
### Benefits of Tortoise ORM:
- **Asynchronous Support:** Fully asynchronous, making it ideal for modern async frameworks like FastAPI and Sanic.
- **Easy to Use:** Intuitive and simple to set up, with a syntax similar to Django's ORM.
- **Lightweight:** Minimalist with a small footprint, focusing on essential features.
- **Type Hinting:** Provides excellent support for type hinting and static analysis.
- **Flexible:** Supports multiple databases including SQLite, PostgreSQL, and MySQL.
### Comparison with SQLAlchemy and Other ORMs
| Feature | Tortoise ORM | SQLAlchemy | Other ORMs |
| --- | --- | --- | --- |
| Asynchronous | Yes | Yes (with SQLAlchemy 1.4) | Varies |
| Ease of Use | High (Django-like syntax) | Medium (More complex and flexible) | Varies (often simpler or more complex) |
| Type Hinting | Excellent | Good | Varies |
| Performance | Good | Excellent (highly optimized) | Varies |
| Flexibility | Moderate (focuses on simplicity) | High (very flexible and extensible) | Varies |
| Documentation | Good | Excellent | Varies |
### How to Use Tortoise ORM
### Installation
```bash
pip install tortoise-orm
```
### Basic Setup
1. **Initialize Tortoise ORM**
```python
from tortoise import Tortoise, fields
from tortoise.models import Model

async def init():
    await Tortoise.init(
        db_url='sqlite://db.sqlite3',
        modules={'models': ['__main__']}
    )
    await Tortoise.generate_schemas()

class User(Model):
    id = fields.IntField(pk=True)
    username = fields.CharField(max_length=50)
    email = fields.CharField(max_length=100)

    def __str__(self):
        return self.username

import asyncio
asyncio.run(init())
```
### Example CRUD Operations
### Create
```python
from models import User  # adjust this import to wherever your User model lives

async def create_user(username: str, email: str):
    user = await User.create(username=username, email=email)
    return user

# Note: Tortoise must be initialized in the same event loop before this call
asyncio.run(create_user("john_doe", "john@example.com"))
```
### Read
```python
async def get_user(user_id: int):
    user = await User.get(id=user_id)
    return user

user = asyncio.run(get_user(1))
print(user)
```
### Update
```python
async def update_user(user_id: int, new_email: str):
    user = await User.get(id=user_id)
    user.email = new_email
    await user.save()

asyncio.run(update_user(1, "john_new@example.com"))
```
### Delete
```python
async def delete_user(user_id: int):
    user = await User.get(id=user_id)
    await user.delete()

asyncio.run(delete_user(1))
```
### Conclusion
Tortoise ORM is a robust choice for projects requiring an async ORM with straightforward and intuitive usage. While it may not have all the advanced features of SQLAlchemy, its simplicity and async capabilities make it a strong contender for modern Python web applications.
## Feel free to reach out if you have any questions or need help getting started!
Github: https://0xaungkon.github.io/
Linkedin: https://www.linkedin.com/in/aungkon-malakar/
Facebook: https://www.facebook.com/0xAungkon/
Email: [aungkonmalakar@gmail.com](mailto:aungkonmalakar@gmail.com) | 0xaungkon |
1,887,987 | Difference Between NextJS vs ExpressJS | Quick Overview: The difference between Next JS and Express JS is a matter of discussion. Clients... | 0 | 2024-06-14T05:36:45 | https://dev.to/milanpanchasara/difference-between-nextjs-vs-expressjs-4j1p |
**<u>Quick Overview</u>**: Choosing between Next.js and Express.js is a frequent point of discussion, and clients are often unsure which framework is the ideal choice. Both have benefits and limitations, so the decision should be based on the needs of your application. To make the choice easier, this article provides a clear comparison between Express.js and Next.js.
———-
The question of choosing between Next.js and Express is common, and answering it is challenging because every project differs in type, target audience, and market, so the framework selection varies. However, the evaluation can be quick if you have a deeper knowledge of both Next.js and Express.js.
Do you know? The latest research on the most used frameworks among developers shows Express in 4th position with 19.28%, while Next.js has a share of 16.67%. Additionally, Node.js and React hold the highest and second-highest positions, respectively. Their popularity among developers is increasing gradually, and when you [hire Full-Stack developers](https://www.rlogical.com/hire-dedicated-developers/hire-full-stack-developer/), you may well get competency in both from one person.
However, you should know the basic concepts of frameworks to examine the differences.
**What are Frameworks?**
Frameworks are organized infrastructure that developers build on when undertaking computer programming. They are mainly used to develop and design software, mobile applications, and websites. Accordingly, there are different types of frameworks, such as:
- Web Application Frameworks
- Mobile Application Frameworks
- Testing Frameworks
- Frontend Frameworks
- Backend Frameworks
Moreover, these types of JavaScript frameworks are quite popular in leading-edge software projects. Hence, here presenting you the backend and frontend framework solutions i.e., Next.js vs Express.js. As our clients get confused about these, the following points will explain the difference between them thoroughly.
**What is Next.js?**
Next.js is an open-source, React-based web app development framework created by a privately owned company named Vercel. Next.js carries modern React features that effectively implement client-side JavaScript components.
With its high-performing solution, developers can easily create cutting-edge web applications. From frontend development to serverless architecture, Next.js extensively covers your web app needs. You can begin your application project by running the following code.
```
~ npx create-next-app@latest
```
Let's discuss the features of Next.js further to understand its concepts in detail. This will help you evaluate Express vs Next.js for your application needs efficiently.
**Top Companies Using Next.js**
- Ticketmaster
- NerdWallet
- Deliveroo
- DoorDash
- Binance
- Hulu
- Porsche and many more
**What is Express.js?**
Express.js is a flexible and fast backend framework that supports Node.js and RESTful APIs. It is increasingly popular for developing performance-rich applications. With its lightweight, easy-to-use functions, Express.js has been considered a proficient solution for web applications.
Due to its minimalist nature, Express.js allows the development of web and mobile apps swiftly. You can get the benefit of middleware to streamline the development. Moreover, the compatibility with NodeJs packages makes Express the right pick for industry-specific applications. Get started with installing the ExpressJS from the Node package manager (npm) using the below code.
```
$ npm install express --save
```
The following features define the competency of Express.js thoroughly. Furthermore, it will help to evaluate the difference between Next js and Express js.
**Top Companies Using Express.js**
- PayPal
- Uber
- IBM
- Trello
- Panasonic
**Finally**
To have the best-in-class services, you need to contact a **[Full-Stack development company](https://www.rlogical.com/web-development/full-stack-development/)**. An experienced organization can make your work streamlined and boost your web application with wide-scope solutions.
You can check out details comparisons, Advantages & use cases by following this link:
**Original article post at**:
https://www.rlogical.com/blog/nextjs-vs-expressjs-performance/
| milanpanchasara | |
1,888,210 | iPhone App Development with Swift: A Step-by-Step Tutorial | Developing an iPhone app can be an exciting and rewarding experience. With Swift, Apple’s powerful... | 0 | 2024-06-14T09:47:01 | https://dev.to/chariesdevil/iphone-app-development-with-swift-a-step-by-step-tutorial-o20 | iphone, appdevelopment, hireiphoneappdeveloper | Developing an iPhone app can be an exciting and rewarding experience. With Swift, Apple’s powerful and intuitive programming language, creating robust and interactive applications has never been easier. This step-by-step tutorial will guide you through the basics of developing an iPhone app using Swift, from setting up your environment to deploying your app.
## Prerequisites
Before we start, ensure you have the following:
1. **A Mac computer** with macOS installed.
2. **Xcode**: Apple’s Integrated Development Environment (IDE) for macOS. Download it from the Mac App Store.
3. **An Apple Developer account**: Necessary for testing on a physical device and submitting apps to the App Store.
## Step 1: Setting Up Xcode
1. **Download and Install Xcode:**
- Open the Mac App Store and search for "Xcode".
- Click "Get" and then "Install".
- Once installed, open Xcode and follow the initial setup instructions.
**2. Create a New Xcode Project:**
- Open Xcode and select "Create a new Xcode project".
- Choose the "App" template under the iOS section and click "Next".
- Fill in your project details:
**Product Name:** Name of your app.
**Team:** Your Apple Developer account.
**Organization Identifier:** A reverse domain name string (e.g., com.yourname.appname).
**Interface:** SwiftUI or UIKit (this tutorial will use UIKit).
**Language:** Swift.
Click "Next" and save your project.
## Step 2: Understanding the Project Structure
An Xcode project consists of several key components:
**Project Navigator**: Lists all files in your project.
**Main.storyboard**: The visual interface builder for designing your app’s UI.
**ViewController.swift**: The default view controller class.
**Assets.xcassets**: Where you store images and other resources.
## Step 3: Designing the UI
**1. Open Main.storyboard:**
- Drag and drop UI components from the Object Library to the view controller (e.g., buttons, labels).
**2. Set Up Auto Layout Constraints:**
- Select a UI component, then click the "Add New Constraints" button.
- Set the constraints to position and size your components correctly.
## Step 4: Connecting UI to Code
**1. Open Assistant Editor:**
- Open Main.storyboard and ViewController.swift side by side.
**2. Create Outlets and Actions:**
- Control-drag from a UI component (e.g., a button) in the storyboard to ViewController.swift to create an outlet or action.
- Name your outlet/action and click "Connect".
## Step 5: Writing Swift Code
**1. Import Necessary Modules:**
- Ensure your ViewController.swift file imports UIKit.
**2. Add Code to ViewController:**
- Implement functionality within the ViewController class.
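As a minimal sketch of Steps 4–5 (the `greetingLabel` outlet and `didTapButton` action are placeholder names you would create by control-dragging, not Xcode defaults), the view controller might look like this:

```swift
import UIKit

class ViewController: UIViewController {

    // Outlet created in Step 4 by control-dragging from the storyboard.
    @IBOutlet weak var greetingLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        greetingLabel.text = "Hello, Swift!"
    }

    // Action created in Step 4; runs when the connected button is tapped.
    @IBAction func didTapButton(_ sender: UIButton) {
        greetingLabel.text = "Button tapped"
    }
}
```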
## Step 6: Running Your App
**1. Select a Simulator:**
- Choose a device to simulate from the toolbar (e.g., iPhone 14).
**2. Run the App:**
- Click the "Run" button or press Cmd + R to build and run your app on the selected simulator.
## Step 7: Debugging
**1. Using the Debugger:**
- Use Xcode’s debugger to inspect variables, set breakpoints, and step through code.
**2. Console Output:**
- Print statements to the console for debugging.
## Step 8: Testing on a Physical Device
**1. Connect Your Device:**
- Connect your iPhone via USB and select it as the target device.
**2. Provisioning Profile:**
- Ensure your Apple Developer account is configured correctly in Xcode.
**3. Run the App on Device:**
- Click the "Run" button to build and install the app on your device.
## Step 9: Preparing for App Store Submission
**1. App Icon:**
- Add app icons in various sizes in Assets.xcassets.
**2. Build and Archive:**
- Go to Product > Archive, then click "Distribute App".
**3. Submit to App Store:**
- Follow the prompts to upload your app to App Store Connect.
## Conclusion
Congratulations! You’ve built a basic iPhone app using Swift and Xcode. This tutorial covered the fundamental steps, from setting up your environment to submitting your app. With practice and exploration, you can expand your skills and create more complex and feature-rich applications. Happy coding! | chariesdevil |
1,888,147 | How to use is() Property in CSS | In the ever-evolving world of web development, CSS remains a fundamental language for crafting... | 27,759 | 2024-06-14T09:46:55 | https://dev.to/nnnirajn/mastering-the-is-property-in-css-step-by-step-guide-c2n | css, tutorial, html, ui | In the ever-evolving world of web development, CSS remains a fundamental language for crafting visually appealing user interfaces. Among the myriad of selectors and properties CSS offers, the `is()` function stands out as a potent tool that every UI developer should master. This article delves into the nuances of the `is()` property, exploring its functionalities, use cases, and best practices to elevate your CSS skills.
### Introduction: Why the `is()` Property Matters
As a UI developer, you continuously search for ways to make your CSS more efficient and readable. The `is()` property, introduced in CSS Selectors Level 4, offers a streamlined approach to writing complex selectors. At its core, `is()` allows you to consolidate multiple selectors into a single, easier-to-read expression, simplifying your CSS and enhancing maintainability.
But what exactly is the `is()` property, and how can it transform your CSS writing experience? In this comprehensive guide, we will unpack everything you need to know about `is()`, from its syntax and practical applications to advanced techniques and potential pitfalls. By the end of this article, you'll be well-equipped to incorporate the `is()` property into your CSS toolkit, creating cleaner, more robust code.
### Unpacking the is() Property
#### What is the is() Property?
The `is()` property, also known as the "matches-any" pseudo-class, is a CSS selector function that enables you to group multiple selectors together. This means that you can target elements that match any of the selectors within the `is()` function. The syntax is straightforward:
Example
```css
element:is(selector1, selector2, selector3) {
/* CSS rules */
}
```
In this context, `element` is the base element you want to style, and `selector1`, `selector2`, `selector3` are the individual selectors that you want to combine. The `is()` function checks each selector and applies the CSS rules if any of them match.
#### The Syntax: A Closer Look
Understanding the syntax of `is()` is crucial for leveraging its power effectively. Let's break down the basic structure:
```css
/* Basic usage */
element:is(selector1, selector2, selector3) {
property: value;
}
/* Example */
button:is(:hover, :focus) {
background-color: blue;
}
```
In this example, the `button` element will have a blue background color when it is either hovered over or focused, demonstrating the utility of `is()` in combining pseudo-classes.
#### The Compatibility Factor
Before diving deeper, it's essential to consider browser compatibility. The `is()` property is relatively new, so ensuring that it's supported in the browsers your users are likely to use is crucial. As of now, modern browsers such as Chrome, Firefox, Safari, and Edge support the `is()` property, making it a reliable choice for contemporary web development.
### Practical Applications of the `is()` Property
**1. Streamlining Complex Selectors**
One of the most significant advantages of the `is()` property is its ability to simplify complex selectors. Traditionally, combining selectors can result in convoluted code that's difficult to read and maintain. With `is()`, you can consolidate these selectors into a more comprehensible format.
Consider a navigation menu with various states such as `hover`, `focus`, and `active`. Without `is()`, you might write something like this:
Example
```css
nav a:hover,
nav a:focus,
nav a:active {
color: red;
}
```
Using the `is()` property, you can streamline this to:
```css
nav a:is(:hover, :focus, :active) {
color: red;
}
```
This approach not only reduces redundancy but also enhances readability, making your CSS easier to maintain.
**2. Enhancing Specificity Control**
Specificity often poses challenges in CSS, especially when dealing with nested elements and inheritance. The `is()` property can help manage specificity by providing a clear and concise way to target elements based on multiple conditions.
Suppose you want to apply a border to input fields, textareas, and select boxes when they are focused. Here's how you can achieve this using `is()`:
Example
```css
form :is(input, textarea, select):focus {
border: 2px solid green;
}
```
This selector ensures that all form elements specified will receive the same styling when focused, without writing repetitive rules for each element type.
**3. Cleaner Code with HTML5 Tags**
With the introduction of HTML5, new semantic tags such as `section`, `article`, `aside`, and `nav` have become commonplace. These tags improve accessibility and SEO but can also lead to verbose CSS. The `is()` property offers a solution to this by allowing you to target multiple semantic tags simultaneously.
If you want to apply margins to several semantic containers uniformly, you can utilize the `is()` property as follows:
Example
```css
:is(section, article, aside, nav) {
  margin: 20px;
}
```
This method ensures consistency across all specified elements with minimal code.
### Advanced Techniques with the `is()` Property
**1. Combining is() with Other Pseudo-Classes**
The true potential of the `is()` property emerges when you combine it with other pseudo-classes and pseudo-elements. This allows for more granular control and sophisticated styling strategies.
Imagine creating a form where invalid inputs need to be highlighted. You can combine `is()` with the `:invalid` pseudo-class to achieve this:
Example
```css
form :is(input, textarea, select):invalid {
  border: 2px solid red;
}
```
By doing so, you ensure that all specified form fields receive the same error styling without redundancy.
**2. Nesting `is()` for Complex Scenarios**
In some cases, you may encounter scenarios where nesting `is()` within other selectors can simplify your code even further. However, it's essential to use this technique judiciously to avoid overcomplicating your CSS.
Example
```css
nav > ul > :is(li):hover > ul {
display: block;
}
```
This selector targets the second-level unordered lists within a navigation menu, showing them when their parent list items are hovered over.
### Best Practices for Using the `is()` Property
**Keep It Readable**
While the `is()` property offers powerful capabilities, it's crucial to maintain readability. Overusing or nesting `is()` excessively can lead to convoluted CSS that defeats the purpose of simplification.
**Test Across Browsers**
Despite its support in modern browsers, always test your CSS across different browsers to ensure consistent behavior. This practice is especially important for websites with diverse user bases.
**Leverage Preprocessors**
If you're using a CSS preprocessor like SASS or LESS, you can combine `is()` with variables and mixins for added flexibility and maintainability. This approach allows for more dynamic and reusable code.
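As a sketch of this idea (assuming SCSS syntax; the `$interactive-states` variable and mixin name are invented for illustration), a variable can hold the selector list that feeds `is()`:

```scss
// Hypothetical SCSS sketch: define the interactive states once,
// then interpolate them into :is() wherever they are needed.
$interactive-states: ':hover, :focus-visible, :active';

@mixin interactive-color($color) {
  &:is(#{$interactive-states}) {
    color: $color;
  }
}

nav a {
  @include interactive-color(red);
}
```

This compiles to a single `nav a:is(:hover, :focus-visible, :active)` rule, so the state list lives in one reusable place.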
**Over-Nesting and Complexity**
As with any powerful tool, there's a risk of overcomplicating your CSS with excessive nesting. Always strive for balance and readability. If a selector becomes too complex, consider breaking it down or re-evaluating your approach.
### Conclusion: Elevate Your CSS with `is()`
The `is()` property is a valuable addition to any UI developer's toolkit. By enabling the consolidation of multiple selectors and enhancing specificity control, `is()` can streamline your CSS, making it more efficient and maintainable. Embrace this powerful feature to create cleaner, more robust stylesheets that stand the test of time.
> Mastering the `is()` property in CSS is not just about writing less code—it's about writing better, more sustainable code.
So go ahead, experiment with `is()`, and take your CSS skills to the next level! Whether you're simplifying complex selectors, enhancing specificity, or combining it with other pseudo-classes, `is()` opens up a world of possibilities for modern UI development. | nnnirajn |
1,888,209 | The Hidden Truth: Quality Issues Are Often Just Quantity Problems | As a software engineer with nine years of experience, I've had the privilege of working with some... | 0 | 2024-06-14T09:46:55 | https://dev.to/haikelei/the-hidden-truth-quality-issues-are-often-just-quantity-problems-2fc2 | webdev, vscode, programming, devops | As a software engineer with nine years of experience, I've had the privilege of working with some truly exceptional developers. These colleagues have mastered the art of coding and consistently deliver high-quality software. However, I've also seen many others remain average and unremarkable. This disparity intrigued me, and I set out to understand the underlying reason.
### A Simple Concept
After observing and analyzing my experiences, I discovered that the key difference often boiled down to a very simple concept: quantity. More specifically, the amount of practice, iterations, and exposure these experts had compared to their average counterparts. It wasn’t necessarily about innate talent or intelligence; it was about the sheer volume of work they put in.
### Quantity: The Key to Quality
**What we often think of as quality problems are actually quantity problems. It's usually just a matter of not having enough, sometimes by several orders of magnitude.** Let me explain with a few examples:
1. **Iteration and Improvement:** In my early years, I was part of an Android application development team. I noticed that the best developer in our team, who later became one of my best friends and mentors, constantly iterated on his work. The Android ecosystem is continually evolving, and every time Google introduced new features, he would enthusiastically go through all of them and pick a few to apply in our application. He never settled for the first solution; instead, he would write, test, refactor, and improve his code multiple times, always incorporating the latest technology. **It was this relentless cycle of iteration that honed his skills and led to higher quality outcomes.** As a result, he grew faster than any other engineer I knew and eventually joined VIVO, one of China’s largest smartphone brands.
2. **Exposure to Challenges:** The more projects and diverse problems a developer tackled, the better they became. Those who sought out additional projects, participated in hackathons, or contributed to open-source projects accumulated a wealth of experience. **This varied exposure enabled them to approach problems from multiple angles and devise more effective solutions.**
3. **Feedback and Learning:** Developers who actively sought feedback and learned from their mistakes improved rapidly. In contrast, those who did the bare minimum and avoided critique stagnated. Regular code reviews, pair programming sessions, and mentorship were invaluable in accelerating their growth.
### Misconception: Quantity vs. Quality
**The biggest misconception is thinking a quantity problem is a quality problem. This leads to the false hope that we can achieve our goals with clever tricks or cutting corners instead of increasing quantity.** That's when pseudoscience, superstition, and unnecessary complaints start to appear. Many think that producing a large volume of work means sacrificing quality. However, my experience has shown that the opposite is often true.
When I started picking up my English, I noticed Ruri Ohama, a successful English learner on YouTube. In her video ["How I learned English by myself for free without studying" ](https://www.youtube.com/watch?v=NQlFIrSZiIE) Ruri describes how she mastered English through consistent practice and exposure. She watched English movies, listened to music, and engaged in conversations with native speakers. This relentless practice and immersion significantly improved her fluency and the quality of her language skills. Ruri’s experience shows that the more you immerse yourself in an activity, the better you become at it.
Another example is my favorite author, Keigo Higashino. Known for his captivating mystery novels, Higashino writes every day. This disciplined approach has produced a prolific body of high-quality work. His consistent writing practice allows him to explore new ideas and refine his storytelling techniques. Higashino’s daily writing habit demonstrates that the more you practice, the better you get, and the higher the quality of your output.
### Conclusion
**Quantity is the key to quality. Most quality issues boil down to not having enough of something on a smaller scale.** Understanding that can really change the way we approach learning anything. Instead of getting frustrated or anxious when you don’t meet your goals right away, remember that it often just takes more practice. Think about how Ruri Ohama improved her English by immersing herself in it every day, or how Keigo Higashino writes daily to hone his craft.
So, next time you feel like giving up, just remember that sometimes, it’s simply a matter of putting in more reps. Keep at it, and the quality will come. | haikelei |
1,888,207 | Flutter vs. React Native: A Detailed Comparison for App Development in 2024 | hoosing the right framework for your cross-platform mobile app can be a tough decision. Both Flutter... | 0 | 2024-06-14T09:43:48 | https://dev.to/dhaval_vaghela_86d1a3c769/flutter-vs-react-native-a-detailed-comparison-for-app-development-in-2024-5hbh | reactnative, flutter, reactjsdevelopment, webdev | hoosing the right framework for your cross-platform mobile app can be a tough decision. Both Flutter and React Native offer compelling features and have earned their place in the development world. Here is a detailed comparison to help you weigh their strengths and weaknesses. Here are the pros and cons of developing mobile apps in [React Native vs Flutter](https://nectarbits.com/blog/react-native-or-flutter-know-the-pros-and-cons-of-both-to-make-a-choice/). | dhaval_vaghela_86d1a3c769 |
1,888,206 | The finally Clause | The finally clause is always executed regardless whether an exception occurred or not. Occasionally,... | 0 | 2024-06-14T09:43:07 | https://dev.to/paulike/the-finally-clause-4jfl | java, programming, learning, beginners | The **finally** clause is always executed regardless whether an exception occurred or not. Occasionally, you may want some code to be executed regardless of whether an exception occurs or is caught. Java has a **finally** clause that can be used to accomplish this objective.
The syntax for the **finally** clause might look like this:
```
try {
  statements;
}
catch (TheException ex) {
  handling ex;
}
finally {
  finalStatements;
}
```
The code in the **finally** block is executed under all circumstances, regardless of whether an exception occurs in the **try** block or is caught. Consider three possible cases:
- If no exception arises in the **try** block, **finalStatements** is executed, and the next statement after the **try** statement is executed.
- If a statement causes an exception in the **try** block that is caught in a **catch** block, the rest of the statements in the **try** block are skipped, the **catch** block is executed, and the **finally** clause is executed. The next statement after the **try** statement is executed.
- If one of the statements causes an exception that is not caught in any **catch** block, the other statements in the **try** block are skipped, the **finally** clause is executed, and the exception is passed to the caller of this method.
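These three cases can be traced in a small runnable sketch (the class name `FinallyDemo` and the log strings are invented for illustration). Note in particular that a return value produced inside the `try` block is computed before the `finally` clause runs:

```java
class FinallyDemo {
    /** Returns a log of which blocks ran for the given scenario. */
    static String run(boolean throwError) {
        StringBuilder log = new StringBuilder();
        try {
            log.append("try;");
            if (throwError) {
                throw new IllegalStateException("boom");
            }
            // The returned String is built here, before finally executes.
            return log.append("return;").toString();
        } catch (IllegalStateException ex) {
            log.append("catch;");
        } finally {
            log.append("finally;"); // executed under all circumstances
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(false)); // no exception thrown
        System.out.println(run(true));  // exception thrown and caught
    }
}
```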
The **finally** block executes even if there is a **return** statement prior to reaching the **finally** block. The **catch** block may be omitted when the **finally** clause is used. | paulike |
1,888,204 | More on Exception Handling | A handler for an exception is found by propagating the exception backward through a chain of method... | 0 | 2024-06-14T09:36:31 | https://dev.to/paulike/more-on-exception-handling-529c | java, programming, learning, beginners | A handler for an exception is found by propagating the exception backward through a chain of method calls, starting from the current method. Java’s exception-handling model is based on three operations: _declaring an exception_, _throwing an exception_, and _catching an exception_, as shown in figure below.

## Declaring Exceptions
In Java, the statement currently being executed belongs to a method. The Java interpreter invokes the **main** method to start executing a program. Every method must state the types of checked exceptions it might throw. This is known as _declaring exceptions_. Because system errors and runtime errors can happen to any code, Java does not require that you declare **Error** and **RuntimeException** (unchecked exceptions) explicitly in the method. However, all other exceptions thrown by the method must be explicitly declared in the method header so that the caller of the method is informed of the exception.
To declare an exception in a method, use the **throws** keyword in the method header, as in this example:
`public void myMethod() throws IOException`
The **throws** keyword indicates that **myMethod** might throw an **IOException**. If the method might throw multiple exceptions, add a list of the exceptions, separated by commas, after **throws**:
```
public void myMethod()
    throws Exception1, Exception2, ..., ExceptionN
```
If a method does not declare exceptions in the superclass, you cannot override it to declare exceptions in the subclass.
## Throwing Exceptions
A program that detects an error can create an instance of an appropriate exception type and throw it. This is known as _throwing an exception_. Here is an example: Suppose the program detects that an argument passed to the method violates the method contract (e.g., the argument must be nonnegative, but a negative argument is passed); the program can create an instance of **IllegalArgumentException** and throw it, as follows:
```
IllegalArgumentException ex =
    new IllegalArgumentException("Wrong Argument");
throw ex;
```
Or, if you prefer, you can use the following:
`throw new IllegalArgumentException("Wrong Argument");`
**IllegalArgumentException** is an exception class in the Java API. In general, each exception class in the Java API has at least two constructors: a no-arg constructor, and a constructor with a **String** argument that describes the exception. This argument is called the exception message, which can be obtained using **getMessage()**. The keyword to declare an exception is **throws**, and the keyword to throw an exception is **throw**.
## Catching Exceptions
You now know how to declare an exception and how to throw an exception. When an exception is thrown, it can be caught and handled in a **try-catch** block, as follows:
```
try {
  statements; // Statements that may throw exceptions
}
catch (Exception1 exVar1) {
  handler for exception1;
}
catch (Exception2 exVar2) {
  handler for exception2;
}
...
catch (ExceptionN exVarN) {
  handler for exceptionN;
}
```
If no exceptions arise during the execution of the **try** block, the **catch** blocks are skipped.
If one of the statements inside the **try** block throws an exception, Java skips the remaining statements in the **try** block and starts the process of finding the code to handle the exception. The code that handles the exception is called the _exception handler_; it is found by _propagating the exception_ backward through a chain of method calls, starting from the current method. Each **catch** block is examined in turn, from first to last, to see whether the type of the exception object is an instance of the exception class in the **catch** block. If so, the exception object is assigned to the variable declared, and the code in the **catch** block is executed. If no handler is found, Java exits this method, passes the exception to the method that invoked the method, and continues the same process to find a handler. If no handler is found in the chain of methods being invoked, the program terminates and prints an error message on the console. The process of finding a handler is called _catching an exception_.
Suppose the **main** method invokes **method1**, **method1** invokes **method2**, **method2** invokes **method3**, and method3 throws an exception, as shown in Figure below. Consider the following scenario:
- If the exception type is **Exception3**, it is caught by the **catch** block for handling exception **ex3** in **method2**. **statement5** is skipped, and **statement6** is executed.
- If the exception type is **Exception2**, **method2** is aborted, the control is returned to **method1**, and the exception is caught by the **catch** block for handling exception **ex2** in **method1**. **statement3** is skipped, and **statement4** is executed.
- If the exception type is **Exception1**, **method1** is aborted, the control is returned to the **main** method, and the exception is caught by the **catch** block for handling exception **ex1** in the **main** method. **statement1** is skipped, and **statement2** is executed.
- If the exception type is not caught in **method2**, **method1**, or **main**, the program terminates, and **statement1** and **statement2** are not executed.

Various exception classes can be derived from a common superclass. If a **catch** block catches exception objects of a superclass, it can catch all the exception objects of the subclasses of that superclass.
The order in which exceptions are specified in **catch** blocks is important. A compile error will result if a catch block for a superclass type appears before a catch block for a subclass type. For example, the ordering in (a) on the next page is erroneous, because **RuntimeException** is a subclass of **Exception**. The correct ordering should be as shown in (b).

Java forces you to deal with checked exceptions. If a method declares a checked exception (i.e., an exception other than **Error** or **RuntimeException**), you must invoke it in a **try-catch** block or declare to throw the exception in the calling method. For example, suppose that method **p1** invokes method **p2**, and **p2** may throw a checked exception (e.g., **IOException**); you have to write the code as shown in (a) or (b) below.

You can use the multi-catch feature to simplify coding for exceptions with the same handling code. The syntax is:
```
catch (Exception1 | Exception2 | ... | Exceptionk ex) {
  // Same code for handling these exceptions
}
```
Each exception type is separated from the next with a vertical bar (**|**). If one of the exceptions is caught, the handling code is executed.
## Getting Information from Exceptions
An exception object contains valuable information about the exception. You may use the following instance methods in the **java.lang.Throwable** class to get information regarding the exception, as shown in Figure below. The **printStackTrace()** method prints stack trace information on the console. The **getStackTrace()** method provides programmatic access to the stack trace information printed by **printStackTrace()**.

The program below gives an example that uses the methods in **Throwable** to display exception information. Line 7 invokes the **sum** method to return the sum of all the elements in the array. There is an error in line 26 that causes the **ArrayIndexOutOfBoundsException**, a subclass of **IndexOutOfBoundsException**. This exception is caught in the **try-catch** block. Lines 10, 11, and 12 display the stack trace, exception message, and exception object and message using the **printStackTrace()**, **getMessage()**, and **toString()** methods. Line 15 brings stack trace elements into an array. Each element represents a method call. You can obtain the method (line 17), class name (line 18), and exception line number (line 19) for each element.
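Because the listing itself appears as an image below, here is a sketch that is consistent with the output shown after it; the array contents are assumptions, and the line numbers in your own run will depend on where the code sits in the file:

```java
class TestException {
    public static void main(String[] args) {
        try {
            int[] list = {1, 2, 3, 4, 5}; // assumed contents; length 5 as in the output
            System.out.println("The sum is " + sum(list));
        } catch (ArrayIndexOutOfBoundsException ex) {
            ex.printStackTrace();
            System.out.println(ex.getMessage());
            System.out.println(ex.toString());

            System.out.println("Trace Info Obtained from getStackTrace");
            for (StackTraceElement element : ex.getStackTrace()) {
                System.out.println("method " + element.getMethodName()
                    + "(" + element.getClassName() + ":"
                    + element.getLineNumber() + ")");
            }
        }
    }

    /** Off-by-one bug: the loop condition should be i < list.length. */
    static int sum(int[] list) {
        int result = 0;
        for (int i = 0; i <= list.length; i++)
            result += list[i];
        return result;
    }
}
```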

```
java.lang.ArrayIndexOutOfBoundsException: Index 5 out of bounds for length 5
    at demo.TestException.sum(TestException.java:27)
    at demo.TestException.main(TestException.java:7)
Index 5 out of bounds for length 5
java.lang.ArrayIndexOutOfBoundsException: Index 5 out of bounds for length 5
Trace Info Obtained from getStackTrace
method sum(demo.TestException:27)
method main(demo.TestException:7)
```
## Example: Declaring, Throwing, and Catching Exceptions
This example demonstrates declaring, throwing, and catching exceptions by modifying the **setRadius** method in the **Circle** class in CircleWithPrivateDataFields.java ([here](https://dev.to/paulike/data-field-encapsulation-4i7b)). The new **setRadius** method throws an exception if the radius is negative.
The program below defines a new circle class named **CircleWithException**, which is the same as **CircleWithPrivateDataFields** except that the **setRadius(double newRadius)** method throws an **IllegalArgumentException** if the argument **newRadius** is negative.
```
package demo;

public class CircleWithException {
    /** The radius of the circle */
    private double radius;

    /** The number of the objects created */
    private static int numberOfObjects = 0;

    /** Construct a circle with radius 1 */
    public CircleWithException() {
        this(1.0);
    }

    /** Construct a circle with a specified radius */
    public CircleWithException(double newRadius) {
        setRadius(newRadius);
        numberOfObjects++;
    }

    /** Return radius */
    public double getRadius() {
        return radius;
    }

    /** Set a new radius */
    public void setRadius(double newRadius) throws IllegalArgumentException {
        if (newRadius > 0)
            radius = newRadius;
        else
            throw new IllegalArgumentException("Radius cannot be negative");
    }

    /** Return numberOfObjects */
    public static int getNumberOfObjects() {
        return numberOfObjects;
    }

    /** Return the area of this circle */
    public double findArea() {
        return radius * radius * 3.14159;
    }
}
```
A test program that uses the new **Circle** class is given in the program below.

The original **Circle** class remains intact except that the class name is changed to **CircleWithException**, a new constructor **CircleWithException(newRadius)** is added, and the **setRadius** method now declares an exception and throws it if the radius is negative.
The **setRadius** method declares to throw **IllegalArgumentException** in the method header (lines 27–32 in CircleWithException.java). The **CircleWithException** class would still compile if the **throws IllegalArgumentException** clause (line 27) were removed from the method declaration, since it is a subclass of **RuntimeException** and every method can throw **RuntimeException** (an unchecked exception) regardless of whether it is declared in the method header.
The test program creates three **CircleWithException** objects—**c1**, **c2**, and **c3**—to test how to handle exceptions. Invoking **new CircleWithException(-5)** (line 8 in TestCircleWithException.java) causes the **setRadius** method to be invoked, which throws an **IllegalArgumentException**, because the radius is negative. In the **catch** block, the type of the object **ex** is **IllegalArgumentException**, which matches the exception object thrown by the **setRadius** method, so this exception is caught by the **catch** block.
The exception handler prints a short message, **ex.toString()** (line 12 in TestCircleWithException.java), about the exception, using **System.out.println(ex)**.
Note that the execution continues in the event of the exception. If the handlers had not caught the exception, the program would have abruptly terminated.
The test program would still compile if the **try** statement were not used, because the method throws an instance of **IllegalArgumentException**, a subclass of **RuntimeException** (an unchecked exception). If a method throws an exception other than **RuntimeException** or **Error**, the method must be invoked within a try-catch block. | paulike |
1,888,203 | Share multiple files in Flutter | If you are developing mobile apps using the Flutter framework, you have the question, “How to share... | 0 | 2024-06-14T09:35:48 | https://dev.to/jigneshpatel_flutterdeveloper/share-multiple-files-in-flutter-5202 | flutter, android, ios, beginners | If you are developing mobile apps using the Flutter framework, you have the question, “How to share one or multiple files (image, PDF, video, document) to WhatsApp or social media in Flutter?” If you are searching for this, you are in the right place. You can [learn Flutter](https://fluttertutorialhub.com/flutter/what-is-flutter-why-should-learn/) easily and build applications very quickly. In this article, I will explain how to share one or more images, PDFs, videos, or document files to WhatsApp or social media using Share Plus in Flutter.

Here you can get full code for [Share multiple files in Flutter](https://fluttertutorialhub.com/flutter/share-multiple-files-image-flutter/). | jigneshpatel_flutterdeveloper |
1,888,201 | How to Invoke the Same API with Varying Parameters | In the rapidly changing world of software development, managing API parameters efficiently can... | 0 | 2024-06-14T09:34:30 | https://dev.to/satokenta/how-to-invoke-the-same-api-with-varying-parameters-3aaf | api, parameters | In the rapidly changing world of software development, managing API parameters efficiently can greatly simplify the process of integrating complex systems. This guide offers a comprehensive look at methods for dynamically configuring API parameters, enabling developers to extract and manage data with enhanced precision.
## Advantages of Using Variable Parameters in a Single API
Adjusting parameters during an API call can significantly improve the application’s functionality and responsiveness. Here are some key benefits:
### Accurate Data Retrieval
With simple parameter modifications, developers can fetch comprehensive datasets or focus on specific details. For example, a developer can retrieve a complete list of recipes or just those related to a particular cuisine, such as French, by changing a single parameter. This approach optimizes the usage of server and network resources.
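As a rough sketch of this idea (the endpoint URL and parameter names below are hypothetical, not taken from any real API), the same endpoint can serve both the broad and the narrow request by varying a single query parameter:

```python
from urllib.parse import urlencode

BASE_URL = "https://api.example.com/recipes"  # hypothetical endpoint

def build_recipes_url(**params):
    """Build a request URL for the same endpoint with varying query parameters."""
    filtered = {k: v for k, v in params.items() if v is not None}
    return f"{BASE_URL}?{urlencode(filtered)}" if filtered else BASE_URL

# Fetch everything: no parameters.
all_recipes = build_recipes_url()

# Narrow the same call to one cuisine by changing a single parameter.
french_only = build_recipes_url(cuisine="french", limit=20)

print(all_recipes)   # https://api.example.com/recipes
print(french_only)   # https://api.example.com/recipes?cuisine=french&limit=20
```

Only the query string changes between the two calls; the endpoint, authentication, and response handling can all stay the same.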
### Increased Application Efficiency
In developing an advanced search feature, utilizing varied parameters within a single API request saves time and reduces the load on the infrastructure, ensuring a seamless operational flow.
### Improved User Experience
For platforms like online shopping sites, dynamic parameters allow users to filter products based on criteria such as price, brand, or category. This level of customization makes navigation intuitive and can significantly enhance user satisfaction and conversion rates.
### Flexibility and Custom Solutions
A flexible API that accepts multiple parameters can cater to a broader range of use cases, allowing developers to build customized solutions on top of an existing API framework.
### Facilitated Code Maintenance
Using parameters promotes code reusability. Instead of creating multiple endpoints, developers can use a single, configurable endpoint, which improves the readability and maintainability of the codebase.
## Utilizing Apidog for Streamlined API Parameter Management
[Apidog](https://www.apidog.com/?utm_source=&utm_medium=blogger&utm_campaign=test1) provides a comprehensive toolset for automated API testing and parameter management, making it easier to configure and test various API scenarios.
### Steps to Configure Parameters in Apidog
**Step 1:** In your Apidog project workspace, go to the "Automated Testing" section and create a new test or choose an existing one.

**Step 2:** Navigate to the `Test Data` section and define or modify datasets effortlessly using the intuitive UI.

**Step 3:** In the testing area, select your prepared dataset from the `Test Data` dropdown menu. Apidog will automatically estimate the number of iterations required for your test.

**Step 4:** Adjust individual parameters in each test request to correspond with variables in your dataset.

**Step 5:** Execute your configured test scenario. During execution, choose your dataset to ensure systematic parameter adjustments and precise request execution.

Apidog also allows integration of APIs, custom requests, and stored cases into your test scenarios. Its robust features, combined with easy dataset creation and import options, make Apidog an invaluable tool for **[API test automation](https://apidog.com/blog/postman-automation-testing/)**.
## Conclusion
Expertly managing dynamic API parameters enables the tailored retrieval and manipulation of data, conserving resources and enhancing user engagement. Effective parameter management leads to the development of responsive, efficient, and user-friendly applications. By continuously exploring and leveraging API parameters, developers unlock new opportunities for innovation and functionality.
Following this guideline, developers can seamlessly optimize API interactions, ensuring robust, scalable, and efficient software solutions that meet user expectations and needs. | satokenta |
1,873,569 | Difference between Docker, Kubernetes, and Podman for System Design Interview? | What is difference between Docker, Kubernetes or K8s and Podman container technologies | 0 | 2024-06-14T09:32:50 | https://dev.to/somadevtoo/difference-between-docker-kubernetes-and-podman-for-system-design-interview-3an6 | systemdesign, docker, kubernetes, softwaredevelopment | ---
title: Difference between Docker, Kubernetes, and Podman for System Design Interview?
published: true
description: What is difference between Docker, Kubernetes or K8s and Podman container technologies
tags: systemdesign, docker, kubernetes, softwaredevelopment
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-02 08:21 +0000
---
*Disclosure: This post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article.*

Hello friends, if you are preparing for Tech interviews then you must prepare for container technologies like Docker and Kubernetes because containers are now used to deploy most of the apps, including Microservices and monoliths.
One of the most common questions in System Design and software developer interviews nowadays is the **difference between Docker, Kubernetes, and Podman**: what they are and when to use them.
In the past, I have talked about system design questions like [API Gateway vs Load Balancer](https://dev.to/somadevtoo/difference-between-api-gateway-and-load-balancer-in-system-design-54dd) and [Horizontal vs Vertical Scaling](https://dev.to/somadevtoo/horizontal-scaling-vs-vertical-scaling-in-system-design-3n09), [Forward proxy vs reverse proxy](https://dev.to/somadevtoo/difference-between-forward-proxy-and-reverse-proxy-in-system-design-54g5) and today, I will answer the difference between Docker, Kubernetes and Podman.
Docker, Kubernetes, and Podman are all popular containerization tools that allow developers and DevOps to package and deploy applications in a consistent and efficient manner.
**Docker** is the popular containerization platform that allows developers to create, deploy, and run applications in containers.
Docker provides a set of tools and APIs that enable developers to build and manage containerized applications, including Docker Engine, Docker Hub, and Docker Compose.
On the other hand, **Kubernetes** is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.
Kubernetes also provides a set of APIs and tools that enable developers to deploy and manage containerized applications at scale, across multiple hosts and environments.
And **Podman** is a relatively new containerization tool that is similar to Docker, but with a different architecture. Podman does not require a daemon to run containers, and it is compatible with Docker images and registries.
Podman provides a simple command-line interface for creating and managing containers, and it can be used as a drop-in replacement for Docker in many cases.
Now that we have a basic idea of what they are and what they do, let's dive deeper into each of them to understand how they work.
By the way, if you are preparing for System design interviews and want to learn System Design in depth then you can also checkout sites like [**ByteByteGo**](https://bit.ly/3P3eqMN), [**Design Guru**](https://bit.ly/3pMiO8g), [**Exponent**](https://bit.ly/3cNF0vw), [**Educative**](https://bit.ly/3Mnh6UR) and [**Udemy**](https://bit.ly/3vFNPid) which have many great System design courses
[](https://bit.ly/3pMiO8g)
P.S. Keep reading until the end. I have a free bonus for you.
------
## What is Docker? How does it work?
As I said, Docker is an open-source platform that enables developers to automate the deployment and management of applications within containers.
It provides *a way to package an application and its dependencies into a standardized unit called a container,* which can be run on any compatible system without worrying about differences in operating systems or underlying infrastructure.
Here's few important Docker concepts which you as a Developer or DevOps Engineer should know :
**1\. Containerization**
Docker utilizes containerization technology to create isolated environments, known as containers, for running applications. Containers are lightweight and encapsulate the application code, runtime, system tools, libraries, and dependencies required to run the application.
This allows applications to run consistently across different environments, ensuring that they behave the same regardless of the underlying system.
**2\. Docker Images**
A Docker image serves as a template for creating containers. It is a read-only snapshot that contains the application code and all the necessary dependencies.
Docker images are created using a `Dockerfile`, which is a text file that specifies the steps to build the image. Each step in the `Dockerfile` represents a layer in the image, allowing for efficient storage and sharing of images.
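For illustration, a minimal `Dockerfile` might look like the sketch below (the base image, file names, and commands are assumptions, not taken from the article). Each instruction becomes one layer of the resulting image, and unchanged layers are reused from cache on later builds:

```dockerfile
# Base image layer
FROM python:3.12-slim
# Working directory for subsequent instructions
WORKDIR /app
# Copy the dependency list first so this layer caches well
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code
COPY . .
# Default command when a container starts from this image
CMD ["python", "app.py"]
```

`docker build -t myapp .` turns this file into an image, and `docker run myapp` starts a container from it.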
**3\. Docker Engine**
The Docker Engine is the core component of Docker. It is responsible for building and running containers based on Docker images. The Docker Engine includes a server that manages the containers and a command-line interface (CLI) that allows users to interact with Docker.
**4\. Docker Registry**
Docker images can be stored in a registry, such as Docker Hub or a private registry. A registry is a centralized repository for `Docker` images, making it easy to share and distribute images across different systems. Developers can pull pre-built images from registries or push their own custom images for others to use.
**5\. Container Lifecycle**
To run an application, [Docker](https://medium.com/javarevisited/5-best-docker-courses-for-java-and-spring-boot-developers-bbf01c5e6542) creates a container from an image. Containers are isolated and have their own filesystem, processes, and network interfaces.
They can be started, stopped, paused, and removed as needed. Docker provides a set of commands and APIs to manage the lifecycle of containers, allowing for easy scaling, updates, and monitoring.
**6\. Container Orchestration**
While [Docker](https://medium.com/javarevisited/why-every-developer-should-learn-docker-in-2023-ac27fac5fd6f) itself provides container management capabilities, it also works seamlessly with container orchestration platforms like Kubernetes.
These platforms enable the management of large clusters of containers, handling tasks such as load balancing, scaling, and automated deployments across multiple hosts.
Overall, **Docker simplifies the process of packaging, distributing, and running applications by utilizing containerization technology.** It helps developers achieve consistency, portability, and scalability for their applications, making it a popular choice in modern software development and deployment workflows.
Here is also a nice diagram from [ByteByteGo](https://bit.ly/3P3eqMN) which highlights key components of Docker and how it works:
[](https://bit.ly/3P3eqMN)
-------
## What is Kubernetes? How does it work?
Both Docker and Kubernetes are like brothers: they are often referred to together, but they are very different from each other. [Kubernetes](https://medium.com/javarevisited/10-best-kubernetes-courses-for-developers-and-devops-engineers-94c35cd3a2fd) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.
It **provides a framework for running and coordinating multiple containers across a cluster of machines**, making it easier to manage complex distributed systems.
Here are important Kubernetes (K8s) concepts which I think every developer and DevOps engineer should learn and know:
**1\. Cluster Architecture**
Kubernetes operates in a cluster architecture, which consists of a master node and multiple worker nodes. The master node manages the cluster and coordinates the overall operations, while the worker nodes are responsible for running the containers.
**2\. Pods**
The basic unit of deployment in Kubernetes is a pod. A pod is a logical group of one or more containers that are co-located and share the same resources, such as network namespace and storage.
Containers within a pod can communicate with each other using localhost. Pods are treated as ephemeral units and can be easily created, updated, or terminated.
**3\. Replica Sets and Deployments**
Replica Sets define the desired number of identical pod replicas to be running at any given time.
They ensure high availability and scalability by automatically managing and maintaining the desired number of pod instances.
Deployments are a higher-level abstraction that allows you to manage and update Replica Sets declaratively, enabling seamless rolling updates and rollbacks of application versions.
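As a hedged sketch, a Deployment that declares three replicas of a hypothetical `web` pod might look like this (all names and the image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired number of identical pod replicas
  selector:
    matchLabels:
      app: web                 # label used to find the pods this Deployment manages
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # illustrative container image
          ports:
            - containerPort: 80
```

Applying this manifest with `kubectl apply -f` makes Kubernetes create a ReplicaSet that maintains three pods; editing the image field and re-applying triggers a rolling update.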
**4\. Services**
Kubernetes Services provide stable network endpoints to connect to a set of pods. They enable load balancing and expose the containers within a pod to other services or external clients.
Services abstract the underlying pod instances, allowing applications to communicate with other components without worrying about their dynamic nature.
**5\. Labels and Selectors**
Kubernetes uses labels and selectors to enable flexible and dynamic grouping and selection of objects. Labels are key-value pairs attached to pods, deployments, services, and other Kubernetes objects.
Selectors are used to filter and match objects based on their labels, allowing for targeted operations and grouping of related resources.
**6\. Scaling and Auto-Scaling**
Kubernetes allows you to scale applications by adjusting the number of pod replicas. Horizontal Pod Autoscaling (HPA) is a feature that automatically scales the number of pod replicas based on resource utilization metrics such as CPU or memory usage.
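A HorizontalPodAutoscaler can be declared against an existing Deployment like this (the target name `web` and the thresholds are illustrative assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Kubernetes then adjusts the replica count between the declared minimum and maximum based on observed CPU utilization.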
**7\. Container Networking**
Kubernetes also manages networking between pods and nodes. Each pod gets its own IP address, and containers within a pod can communicate with each other using `localhost`.
Kubernetes provides **network plugins** that facilitate container networking and enable communication across pods and clusters.
**8\. Cluster Management**
Kubernetes also offers extensive cluster management capabilities, including rolling updates, secrets management, configuration management, and health monitoring.
It provides a declarative approach to define the desired state of the system, allowing Kubernetes to continuously monitor and reconcile the actual state with the desired state.
**9\. Container Storage**
Kubernetes supports various storage options, including persistent volumes and storage classes. Persistent volumes provide a way to decouple storage from the lifecycle of pods, enabling data persistence and sharing across pods and container restarts.
By abstracting the *complexities of managing containers at scale, Kubernetes enables developers to focus on application logic* rather than infrastructure management.
It provides a robust and scalable platform for deploying and managing containerized applications, making it a popular choice for building modern, cloud-native systems.
Here is a nice diagram which shows the different components of Kubernetes and how they work together:

------
## What is Podman? How does it work?
Now that you know what Docker and Kubernetes are, it's time to take a look at another popular tool called Podman, which is often seen as an alternative to Docker.
Podman is an open-source container runtime and management tool that provides a command-line interface (CLI) for managing containers.
It aims to be a compatible alternative to Docker, offering a Docker-compatible API and allowing users familiar with Docker to transition easily. Podman is designed to provide a secure and lightweight container experience.
Here's an overview of how Podman works and the important Podman concepts you should know:
**1\. Container Runtime**
Podman serves as a container runtime, which means it can create and run containers. It uses the Open Container Initiative (OCI)-compatible container format, which ensures compatibility with other container runtimes and allows Podman to run OCI-compliant containers.
**2\. CLI Compatibility**
Podman's CLI is designed to be familiar to Docker users. It provides commands similar to Docker CLI, allowing users to manage containers, images, volumes, and networks with ease.
This compatibility makes it easier for developers and system administrators to transition from Docker to Podman without significant changes to their workflows.
**3\. Rootless Containers**
One notable feature of Podman is its support for rootless containers. It allows non-root users to run containers without requiring privileged access.
This enhances security by isolating containers from the host system and reducing the risk of container escapes.
**4\. Container Management**
Podman provides a range of management capabilities, such as creating, starting, stopping, and removing containers. It supports network configuration, allowing containers to communicate with each other and the host system.
Podman also provides options for managing container volumes, environment variables, and resource constraints.
**5\. Container Images**
Like Docker, Podman relies on container images as the basis for creating containers. It can pull and push container images from various container registries, including Docker Hub. Podman can also build images locally using a Dockerfile or import images from other container runtimes.
**6\. Pod Support**
Podman extends beyond individual containers and supports the concept of pods, similar to Kubernetes. Pods are a group of containers that share the same network namespace and resources.
Podman allows users to create and manage pods, enabling more complex deployments and communication patterns between containers.
**7\. Integration with Orchestration Platforms**
While Podman can be used as a standalone container runtime, it can also integrate with container orchestration platforms like Kubernetes. It can act as the container runtime for Kubernetes pods, allowing users to leverage Podman's features and compatibility within a Kubernetes cluster.
**8\. Security Focus**
Podman places a strong emphasis on security. It supports features such as user namespace mapping, which maps container user IDs to non-root user IDs on the host, enhancing container isolation.
Podman also integrates with security-enhancing technologies like SELinux and seccomp profiles to provide additional layers of protection.
Podman aims to provide a seamless transition for Docker users while emphasizing security and lightweight container management.
It offers compatibility, flexibility, and a user-friendly CLI, making it a compelling option for those seeking an alternative container runtime.

------
## What is difference between Docker, Kubernetes, and Podman?
Here are the key differences between Docker, Kubernetes, and Podman. I have compared them on different points, mainly the features and capabilities each of these tools provides, such as containerization and container management.
**1\. Container Engine**
Docker is primarily a container runtime and engine for building, running, and distributing containers. Kubernetes, on the other hand, is an orchestration platform designed for managing containerized applications across a cluster of machines.
Podman is a container runtime and management tool that provides a Docker-compatible CLI and container runtime.
**2\. Container Format**
Docker uses its own container format called Docker containers. [Kubernetes](https://medium.com/javarevisited/7-free-online-courses-to-learn-kubernetes-in-2020-3b8a68ec7abc) can work with multiple container formats, but Docker containers are the most common choice.
Podman, on the other hand, uses the Open Container Initiative (OCI)-compatible container format and can run OCI-compliant containers.
**3\. Orchestration**
Docker has Docker Swarm, its built-in orchestration tool, which allows managing a swarm of Docker nodes for running containers.
Kubernetes, on the other hand, provides advanced orchestration capabilities for managing containerized applications, including scaling, load balancing, automated deployments, and self-healing.
Podman does not have built-in orchestration capabilities like Docker Swarm or Kubernetes, but it can work alongside Kubernetes or other orchestration platforms.
**4\. Cluster Management**
Docker does not have native support for managing container clusters. Kubernetes, on the other hand, is specifically designed for managing container clusters and provides features for scaling, upgrading, monitoring, and managing containerized applications.
Podman does not have native support for managing container clusters but can be used with external tools like Kubernetes or other container orchestration frameworks.
**5\. Security**
For the security comparison: Docker provides basic isolation and security features, but its primary focus is on running single containers. Kubernetes offers advanced security features such as network policies, secrets management, and RBAC.
**Podman**, on the other hand, focuses on security and provides features like user namespace mapping, seccomp profiles, and SELinux integration for enhanced container security.
**6\. User Interface**
When it comes to comparing UI, [Docker](https://medium.com/javarevisited/10-free-courses-to-learn-docker-and-devops-for-frontend-developers-691ac7652cee) provides a user-friendly CLI and a web-based graphical interface (Docker Desktop) for managing containers. Kubernetes has a CLI tool called `"kubectl"` and a web-based dashboard (Kubernetes Dashboard) for managing containers and clusters.
While, Podman provides a CLI similar to the Docker CLI and can be used with third-party tools like `Cockpit` for web-based management.
And, if you like tables, here is a nice table where I have put all the *differences between Docker, Kubernetes, and Podman* in tabular format:

These are the fundamental *differences between Docker, Kubernetes, and Podman*, each serving different purposes in the containerization ecosystem.
-------
### System Design Interviews Resources:
And, here are curated list of best system design books, online courses, and practice websites which you can check to better prepare for System design interviews. Most of these courses also answer questions I have shared here.
1. [**DesignGuru's Grokking System Design Course**](https://bit.ly/3pMiO8g): An interactive learning platform with hands-on exercises and real-world scenarios to strengthen your system design skills.
2. [**"System Design Interview" by Alex Xu**](https://amzn.to/3nU2Mbp): This book provides an in-depth exploration of system design concepts, strategies, and interview preparation tips.
3. [**"Designing Data-Intensive Applications"**](https://amzn.to/3nXKaas) by Martin Kleppmann: A comprehensive guide that covers the principles and practices for designing scalable and reliable systems.
4. [LeetCode System Design Tag](https://leetcode.com/explore/learn/card/system-design): LeetCode is a popular platform for technical interview preparation. The System Design tag on LeetCode includes a variety of questions to practice.
5. [**"System Design Primer"**](https://bit.ly/3bSaBfC) on GitHub: A curated list of resources, including articles, books, and videos, to help you prepare for system design interviews.
6. [**Educative's System Design Course**](https://bit.ly/3Mnh6UR): An interactive learning platform with hands-on exercises and real-world scenarios to strengthen your system design skills.
7. **High Scalability Blog**: A blog that features articles and case studies on the architecture of high-traffic websites and scalable systems.
8. **[YouTube Channels](https://medium.com/javarevisited/top-8-youtube-channels-for-system-design-interview-preparation-970d103ea18d)**: Check out channels like "Gaurav Sen" and "Tech Dummies" for insightful videos on system design concepts and interview preparation.
9. [**ByteByteGo**](https://bit.ly/3P3eqMN): A live book and course by Alex Xu for System design interview preparation. It contains all the content of System Design Interview book volume 1 and 2 and will be updated with volume 3 which is coming soon.
10. [**Exponent**](https://bit.ly/3cNF0vw): A specialized site for interview prep especially for FAANG companies like Amazon and Google, They also have a great system design course and many other material which can help you crack FAAN interviews.
[](https://bit.ly/3P3eqMN)
image_credit - [ByteByteGo](https://bit.ly/3P3eqMN)
That's all about the **difference between Docker, Kubernetes, and Podman.** In summary, Docker is a popular containerization platform for creating and managing containers, Kubernetes is a container orchestration platform for managing containerized applications at scale, and `Podman` is a containerization tool with a different architecture that can be used as a drop-in replacement for Docker in many cases.
Each of these tools serves a different purpose, and they can all be used together to provide a comprehensive containerization solution for developers; more importantly, every developer and DevOps engineer should be aware of these tools.
### Bonus
As promised, here is the bonus for you, a free book. I just found a new free book to learn Distributed System Design, you can also read it here on Microsoft --- <https://info.microsoft.com/rs/157-GQE-382/images/EN-CNTNT-eBook-DesigningDistributedSystems.pdf>
[](https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrc1jn751mzs4ru91zt3.png)
Thank you
| somadevtoo |
1,888,200 | Download Kinemaster Mod APK | Kinemaster mod APK is the modified form of Kinemaster APK which is a boon for the users in 2024.... | 0 | 2024-06-14T09:31:05 | https://dev.to/tushar_kalal_41c93b6bfaf0/download-kinemaster-mod-apk-4f34 | kinemastermodapk, downlaod | [Kinemaster mod APK](https://kinemastermodapkdownload.in/) is the modified form of the Kinemaster APK, which is a boon for users in 2024. Kinemaster mod APK is a video editing app available on the Google Play Store for Android smartphones. It is a professional video editing app which can be used to create good-quality videos. It can be used by casual and professional users alike, and making videos with it is very easy. Users can also create 3D videos through this app, which is very popular, and the videos created with it can also be shared on many social media platforms.
[](https://kinemastermodapkdownload.in/) | tushar_kalal_41c93b6bfaf0 |
1,888,199 | One Byte Explainer | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-14T09:29:48 | https://dev.to/sharmi2020/one-byte-explainer-57b0 | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Local storage, session storage, and cookies:
**Local storage**:
It is a web storage mechanism used to store data locally in the browser; even when the user shuts down the window, there is no loss of data (storage capacity: up to 5 MB).
**Session storage**:
It also allows us to store data in the web browser, but only for a short time (i.e., until the window is closed). Values can be read and written using methods like `getItem` and `setItem`.
**Cookies:**
It is used to store small amounts of data in the web browser compared to local and session storage; cookies are used for things like authentication, tracking, and maintaining session state.
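The three mechanisms can be sketched as follows. In a real page you would use the browser's global `localStorage` and `sessionStorage` objects directly; the tiny in-memory stand-in below only exists so the sketch also runs outside a browser:

```javascript
// Minimal in-memory stand-in for the Web Storage API, so this sketch
// also runs outside a browser. In a real page, use the global
// localStorage and sessionStorage objects, which share this interface.
function makeStorage() {
  const data = new Map();
  return {
    setItem: (key, value) => data.set(key, String(value)),
    getItem: (key) => (data.has(key) ? data.get(key) : null),
    removeItem: (key) => data.delete(key),
  };
}

const localStorageDemo = makeStorage();   // survives closing the window in a real browser
const sessionStorageDemo = makeStorage(); // cleared when the tab closes in a real browser

// Both storages expose the same get/set interface.
localStorageDemo.setItem("theme", "dark");
sessionStorageDemo.setItem("cartId", "12345");

console.log(localStorageDemo.getItem("theme"));    // dark
console.log(sessionStorageDemo.getItem("cartId")); // 12345

// Cookies are a single string, and they are sent to the server on each request.
const cookie = "sessionToken=abc123; Max-Age=3600; Secure";
console.log(cookie.split(";")[0]); // sessionToken=abc123
```

The difference between the first two is purely lifetime: local storage survives closing the window, session storage does not, and cookies are the only one of the three sent to the server with every request.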
<!-- Explain a computer science concept in 256 characters or less. -->
## Additional Context
<!-- Please share any additional context you think the judges should take into consideration as it relates to your One Byte Explainer. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image to your post (if you want). -->
<!-- Thanks for participating! --> | sharmi2020 |
1,888,198 | Leading the Way in Microfinance Technology: Vexil Infotech's Integrated Solution Suite | Embrace the future of Microfinance Software technology with Vexil Infotech Pvt Ltd's Integrated... | 0 | 2024-06-14T09:28:35 | https://dev.to/vexilinfotech24/leading-the-way-in-microfinance-technology-vexil-infotechs-integrated-solution-suite-2p7l | microfinancesoftware | Embrace the future of [Microfinance Software](https://vexilinfotech.com/microfinance-software) technology with Vexil Infotech Pvt Ltd's Integrated Microfinance Solution (IMS). Vexil is the best Microfinance Software provider in India. With its advanced features and customizable modules, IMS provides microfinance institutions with a powerful toolkit for success in today's dynamic market. Experience seamless integration, real-time insights, and unparalleled support as you embark on your journey towards sustainable growth with Vexil's IMS.
| vexilinfotech24 |
1,888,197 | Top Brands Use Social Media Marketing To Expand Their Business | With regard to increasing brand recognition and cultivating brand loyalty among a clientele, social... | 0 | 2024-06-14T09:25:48 | https://dev.to/toby_garfield_8e5dc77e905/top-brands-use-social-media-marketing-to-expand-their-business-3ck5 | socialmediamarketing | With regard to increasing brand recognition and cultivating brand loyalty among a clientele, social media platforms have proven to be effective instruments. When used strategically, social media can have a noticeable, quantifiable impact on an organization's bottom line due to its extraordinary growth potential. At the same time, it provides a better opportunity to present your brand in detail: the full description, the key features of the product, its benefits, and other specifications. It is no surprise that top brands are leveraging social media marketing services to carve out a unique place in the market.
The higher the social media promotion, the greater the reach and sales!
In this article, let's take a look at the top trending brands that use social media marketing to grow their popularity.
Spotify
Almost everyone listens to music, which is why Spotify's social media content marketing strategy is so successful. Spotify's social media campaigns are impressive, highlighting the essential features that grab people's attention. As a result, the number of people joining the Spotify platform keeps increasing day by day, and effective social media promotion is the reason. The latest song releases, podcasts, and trending artists' songs are all they cover.
National Geographic
Being one of the most followed brand channels, National Geographic is popular with wide audiences. National Geographic was the first non-media brand to reach over 300 million followers on Instagram. It has successfully established a community of photographers who produce and contribute user-generated content, sharing an alluring visual treat with everyone. In order to increase reach and educate its sizable audience on significant global topics, National Geographic also collaborates with well-known corporations on social media.
RedBull
Having millions of subscribers on YouTube, Red Bull showcases its popularity through its video-focused content that effectively appeals to adventure sports fans. Short documentaries, longer-form videos, and live-streamed events in the domains of gaming, music, car racing, snowboarding, and surfing are all included in the video library. Its extremely effective social strategy has come to be defined by its emphasis on experimental video, as well as its deft branding and strategic alliances.
GoPro
Owing to their visually stunning videos, GoPro has amassed millions of followers across multiple social media platforms. #GoPro is now a trending hashtag, and the brand has come to be associated with action sports and adventure. High-quality videos and user-generated material featuring their products make up the majority of the content they post.
Starbucks
Starbucks' social media strategy is widely recognized. Its success on social media can be attributed to its astute grasp of the medium. Starbucks is adept at attracting and retaining a large and active social media following thanks to its consistent branding and ongoing communication with followers. Even though Starbucks focuses mostly on one product, delicious coffee, its social media initiatives are anything but narrow.
Oreo
Oreo's social media campaigns have never strayed far from the initial formula. The brand consistently finds ways to highlight its cookies while being creative and purely product-focused. All of their social media activities in video format attract millions of followers on various platforms.
Chanel
Chanel has been identified as the most powerful luxury brand on social media. Influencer alliances are one method by which it accomplishes this. In addition to its highly conceptualized advertisements, Chanel uses behind-the-scenes content and brief but educational tutorials for its makeup line to encourage audience participation. It has kept up its growth, which has helped maintain audience engagement on social media, especially Instagram.
Made.com
One segment of retail that is practically made for social media is furniture and decor. User-generated footage lets shoppers see how the furniture looks in actual homes, which gives Made.com customers more confidence when making expensive online purchases. On Made.com's social media pages, user-generated content is turned into marketing assets through campaigns like "Design Your Happy Place".
Fenty Beauty
13 million people follow Fenty Beauty on Instagram. What began as hype has since turned into genuine brand loyalty. The brand's founder, Rihanna, and several other influencers who share the brand's principles of inclusivity, diversity, and body acceptance are largely responsible for this. Makeup tutorials remain one of the most popular types of social media content the brand creates.
Next in line, Bizvertex is a dominant Social Media Marketing Agency
Bizvertex is a leading social media marketing agency whose core value is delivering top-notch social media marketing solutions. With deep experience across various social media platforms, our team of experts will assist you in promoting your brand with alluring post submissions and video creation with creative content. With market expertise in delivering all-inclusive marketing campaigns, our workflow is geared toward making impressive progress.
+918807211121
| toby_garfield_8e5dc77e905 |
1,888,196 | Iron Casting Foundries: Where Tradition Meets Innovation | screenshot-1718295455310.png Why Iron casting: Where Tradition Meets Innovation is the Way to... | 0 | 2024-06-14T09:25:28 | https://dev.to/jahira_hanidha_ac8711fb57/iron-casting-foundries-where-tradition-meets-innovation-544m |
Why Iron Casting: Where Tradition Meets Innovation Is the Way to Go
Benefits of Using Iron Casting
Iron casting machines are devices built to supply high-quality iron to various products and services in an efficient way
Using iron casting machines, manufacturers can make a wide assortment of products that need precise dimensions and shapes
Some of the advantages of choosing iron casting are high production output, the capability to create intricate designs, and the ability to produce different shapes and forms for various applications
Additionally, iron casting machines are popular among manufacturers because they are cost-effective and can help reduce the production time for items
Iron castings are likewise safe to use and require minimal maintenance, making them a great tool for any manufacturing unit looking to improve its production process
Innovation in Iron Casting
Iron casting machines have undergone major advancements over the years, and today's machines are far more efficient and user-friendly than their predecessors
Using modern tools, manufacturers are now able to create molds with extra precision and speed, resulting in more consistent products
Further innovation has seen the introduction of automation systems that make it possible to handle the entire manufacturing process from start to finish, reducing scrap and improving the overall quality of the items produced
Safety Considerations When Using Iron Casting
Safety is an essential aspect of any manufacturing process, and Ductile Iron Casting is no exception
Since these devices handle molten metal, they pose significant safety risks, and appropriate measures must be taken to ensure the safety of the operator and any other workers using the machine
Safety precautions include wearing protective gear such as helmets and gloves, installing effective ventilation systems, and making sure the equipment is operated in a suitable area with sufficient lighting and adequate space
Regular inspections and maintenance of the iron casting machine are also required to minimize safety hazards and ensure that the equipment runs optimally
How to Use Iron Casting
Using iron casting machines requires particular knowledge and experience
The operator should have sufficient understanding of the equipment's different components, their functions, and safety features before operating the machine
The complete process of iron casting involves preparation of the molds, melting of the metal, pouring the molten metal into the molds, and solidification of the cast
The process requires the use of various tools and equipment, including crucibles, ladles, and tongs
Manufacturers should ensure that the machine is handled by trained workers to avoid accidents and maximize production rates
Quality Assurance and Customer Support
Quality assurance is a fundamental piece of the manufacturing process, and gray cast iron castings are no exception
The quality of the cast products can make or break the producer's reputation, and therefore it is important to have strict quality measures in place
As a result, iron casting foundries should adhere to strict quality standards and conduct regular inspections to make certain their products meet the necessary specifications
Quality control measures should also include appropriate testing of the cast products to make sure they meet the required standards before they are released to the market
Lastly, customer service is very important when dealing with iron casting
Manufacturers should provide their customers with very good service, including prompt responses to any inquiries, on-time delivery, and after-sales support
Conclusion
Iron casting represent the perfect marriage between tradition and innovation. Traditional foundries have been in use for centuries, and today's machines have undergone significant innovation, making them more efficient and user-friendly. Their numerous advantages, including cost-effectiveness, safety, and the ability to produce intricate designs, make them an excellent choice for any manufacturing plant. However, appropriate safety measures must be taken when using these machines to ensure their safe operation. Furthermore, strict quality control measures should be in place to guarantee the production of high-quality cast products. Ultimately, Iron casting Products have proved to be a vital tool in the manufacturing industry, and their timely delivery, coupled with excellent customer service, will help establish good relationships with clients and foster business growth.
Source: https://www.sx-casting.com/product-ductile-iron-casting-high-quality-custom-cast-iron-foundry--sand-casting-cnc-machining-products-ggg45-ggg50-gjs40-gjs45 | jahira_hanidha_ac8711fb57 | |
1,888,173 | 5 Essential Cybersecurity Tips to Secure Your Business in 2024 | Keeping Your Business Secure: 5 Essential Cybersecurity Tips There are so many possibilities for... | 0 | 2024-06-14T09:19:43 | https://dev.to/sennovate/5-essential-cybersecurity-tips-to-secure-your-business-in-2024-4870 | cybersecurity, security, iam, identity | Keeping Your Business Secure: 5 Essential Cybersecurity Tips
There are so many possibilities for your business to grow within this new era of customer engagement and market expansion. However, it is not without its risks. To secure your business—akin to fortifying your home against insiders and intruders—you need robust defenses. These measures are crucial to protect your data, secure your IT/OT environments, and ensure uninterrupted business operations. Secure your future in the digital world by making cybersecurity a priority, not an option.
Here are 5 essential cybersecurity tips to help you build your fortress this year:
**1. Train Your Team: Your First Line of Defense**
Your employees are the cornerstone of your company’s cybersecurity. It’s essential to arm them with the knowledge to identify and fend off potential threats. Through regular and engaging training sessions, they’ll learn to spot the telltale signs of phishing attempts, the subtleties of social engineering, and the dangers of malware. Think of these trainings as essential exercises that build your team’s defensive reflexes.
Encourage a culture where reporting anything out of the ordinary is not just accepted but expected. This proactive stance can be the difference between a minor hiccup and a major breach. Equip your team with the right tools and knowledge and watch them become your most reliable line of defense.
**2. Double the Locks: Multi-Factor Authentication**
Protecting your business’s online presence is non-negotiable.
Multi-factor authentication (MFA) is a critical security measure that adds a necessary layer of protection to your accounts. By requiring an additional verification step beyond just a password, MFA significantly reduces the risk of unauthorized access. It’s imperative for all team members, regardless of their position, to enable MFA on all accounts. There are various MFA methods available, and it’s important to implement the one that best fits your operational needs.
**3. Patch It Up: Keeping Your Software Up-to-Date**
Software updates are more than just new features. They often contain crucial security patches that fix vulnerabilities hackers love to exploit. For those that need manual updates, create a clear schedule, prioritizing updates that address security issues. Stay on top of updates from software companies and apply critical patches promptly.
**4. Be Prepared for Anything: Backups and Disaster Plans**
Life throws curveballs, and the digital world is no different. Cyberattacks can disrupt operations and cause data loss. Having a solid data backup and disaster recovery plan is like having a fire escape plan for your business. Regularly back up your data – how often depends on how important it is. Consider both on-site and off-site backups for extra protection. Don’t forget to test your backups regularly to make sure they work when you need them most! A disaster recovery plan outlines the steps to take in case of a cyberattack or any other event that disrupts your business. The plan should include how to restore your systems, recover data, and get your business back up and running quickly. Regularly review and update your plan to keep it effective.
**5. Security is Everyone’s Job: Building Awareness**
Cybersecurity isn’t just for the IT department – it’s a team effort! Here’s how to create a culture where everyone is security-conscious:
Leadership by Example: Management sets the tone. Allocate resources for security programs, promote awareness initiatives, and hold everyone accountable for following security policies.
Security Champions: Empower employees from different departments to become security champions. They can be a bridge between IT and their colleagues, spreading best practices and keeping security top-of-mind.
Constant Communication: Keep your employees informed about security updates, potential threats, and best practices. Use various channels like company newsletters, internal communication platforms, and regular security awareness campaigns.
Bonus Tip: Partner with a Cybersecurity Expert
The world of cybersecurity is complex and constantly changing. Seasoned security professionals guiding your defenses can help you navigate the evolving threat landscape. With Sennovate's 16+ years of expertise in the security services sector, we have consistently enabled our clients to maintain a resilient and secure stance. Specialists on our team meticulously evaluate your system's weak points and suggest enhancements, serving as reliable consultants in the development of an all-encompassing security plan.
By following these tips, you can significantly improve your cybersecurity posture in 2024. Remember, cybersecurity is an ongoing process, but with these strong defenses, your business can thrive and grow.
| sennovate |
1,888,195 | Formengine Release notes 1.3.0 | Added the RsSignature component for drawing signatures. The text of the form data and errors in the... | 0 | 2024-06-14T09:24:29 | https://dev.to/optimajetlimited/formengine-release-notes130-3hgo | - Added the RsSignature component for drawing signatures.
- The text of the form data and errors in the left panel on the designer preview is now scrollable.
- Fixed minor style breakdowns in the designer scrollbars.
[Formengine Release notes ](https://formengine.io/documentation/release-notes) | optimajet | |
1,888,194 | Coil and Ktor in Kotlin Multiplatform Compose project | Many people are trying but having a hard time adding Coil in a Multiplatform compose project that... | 0 | 2024-06-14T09:22:50 | https://dev.to/gochev/coil-and-ktor-in-kotlin-multiplatform-compose-project-5d3i | kotlin, multiplatform, compose, multiplatformcompose | Many people try, but have a hard time, adding Coil to a Multiplatform Compose project that works on JVM/Android/iOS and WASM.
For now we have to use Ktor, since OkHttp is Android-only.
We will be using:
- Coil 3.0 https://coil-kt.github.io/coil/
- Ktor 3.0 with wasm support
After spending half a day making it work, I decided to share it with others.
I will be using the latest versions that support WASM as of 14.06.2024; later readers experimenting with this might want to upgrade the versions.
## 1) Libraries and versions
In short, first you need to add **Ktor**. In your `libs.versions.toml` add:
```
[versions]
ktor = "3.0.0-wasm2"
```
```
[libraries]
ktor-client-core = { module = "io.ktor:ktor-client-core", version.ref = "ktor" }
ktor-client-okhttp = { module = "io.ktor:ktor-client-okhttp", version.ref = "ktor" }
ktor-client-darwin = { module = "io.ktor:ktor-client-darwin", version.ref = "ktor" }
ktor-client-java = { group = "io.ktor", name = "ktor-client-java", version.ref = "ktor" }
ktor-serialization-kotlinx-json = { group = "io.ktor", name = "ktor-serialization-kotlinx-json", version.ref = "ktor" }
ktor-client-serialization = { group = "io.ktor", name = "ktor-client-serialization", version.ref = "ktor" }
ktor-client-content-negotiation = { group = "io.ktor", name = "ktor-client-content-negotiation", version.ref = "ktor" }
ktor-client-json = { group = "io.ktor", name = "ktor-client-json", version.ref = "ktor" }
ktor-client-logging = { group = "io.ktor", name = "ktor-client-logging", version.ref = "ktor" }
```
Now you can also create a bundle, since this makes your `build.gradle.kts` far smaller, so add the bundle as well:
```
[bundles]
ktor-common = ["ktor-client-core", "ktor-client-json", "ktor-client-logging", "ktor-client-serialization", "ktor-client-content-negotiation", "ktor-serialization-kotlinx-json"]
```
You can stop here if you only need to use Ktor in your Multiplatform Compose project; in that case scroll down to point **2) Updating your build script**. However, if you want to use Coil you need four extra dependencies.
Add this version
```
[versions]
coilComposeCore = "3.0.0-alpha06"
```
and these four libraries:
```
[libraries]
coil = { module = "io.coil-kt.coil3:coil", version.ref = "coilComposeCore" }
coil-compose-core = { module = "io.coil-kt.coil3:coil-compose-core", version.ref = "coilComposeCore" }
coil-compose = { module = "io.coil-kt.coil3:coil-compose", version.ref = "coilComposeCore" }
coil-network-ktor = { module = "io.coil-kt.coil3:coil-network-ktor", version.ref = "coilComposeCore" }
```
Now you are good to go to your `build.gradle.kts` file
## 2) Updating your build script
Open your `build.gradle.kts`
Add `maven("https://maven.pkg.jetbrains.space/kotlin/p/wasm/experimental")` to your repositories so you can download the Ktor WASM version:
```kotlin
allprojects {
    repositories {
        google()
        mavenCentral()
        maven("https://maven.pkg.jetbrains.space/kotlin/p/wasm/experimental")
    }
}
```
Then in `commonMain.dependencies` add:
```kotlin
commonMain.dependencies {
    ....
    implementation(libs.bundles.ktor.common)
    implementation(libs.coil.compose.core)
    implementation(libs.coil.compose)
    implementation(libs.coil)
    implementation(libs.coil.network.ktor)
}
```
In Android dependencies add:
```kotlin
androidMain.dependencies {
    ....
    implementation(libs.ktor.client.okhttp)
}
```
In iOS/Apple dependencies add:
```kotlin
iosMain.dependencies {
    implementation(libs.ktor.client.darwin)
}
```
In JVM dependencies add:
```kotlin
jvmMain.dependencies {
    ....
    implementation(libs.ktor.client.java)
}
```
That's it, more or less, for dependencies.
NOTE: you might also need kotlinx.serialization; if you already have it, ignore this and scroll to **3) How to verify Coil is working**.
So in your `libs.versions.toml` add:
```
[versions]
kotlinx-serialization = "1.6.3"
[libraries]
kotlinx-serialization-json = { module = "org.jetbrains.kotlinx:kotlinx-serialization-json", version.ref = "kotlinx-serialization" }
```
Then in your Gradle KTS file:
```kotlin
plugins {
    ....
    alias(libs.plugins.kotlinx.serialization)
}
```
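The `alias(libs.plugins.kotlinx.serialization)` call in the Gradle script above assumes a matching `[plugins]` entry in `libs.versions.toml`, which the snippets so far haven't shown. A minimal sketch (the `version.ref = "kotlin"` assumes your catalog already has a `kotlin` version entry; adjust to however you pin your Kotlin version):

```toml
[plugins]
# Kotlin serialization Gradle plugin; keep its version in lock-step with your Kotlin version
kotlinx-serialization = { id = "org.jetbrains.kotlin.plugin.serialization", version.ref = "kotlin" }
```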
and only in
```kotlin
commonMain.dependencies {
    ....
    implementation(libs.kotlinx.serialization.json)
}
```
That's it! You are good to use Ktor in your Multiplatform Compose project, or Ktor + Coil.
## 3) How to verify Coil is working
Examples of how to use Coil can be found all over the internet, but in short, in your `App.kt` in `commonMain` add this function:
```kotlin
fun getAsyncImageLoader(context: PlatformContext) =
    ImageLoader.Builder(context).crossfade(true).logger(DebugLogger()).build()
```
Then in your App function just add this
```kotlin
@Composable
internal fun App() = AppTheme() {
    setSingletonImageLoaderFactory { context ->
        getAsyncImageLoader(context)
    }
    .... rest code
```
And then you can use Coil, for example like this:
```kotlin
AsyncImage(
    model = "https://1.bp.blogspot.com/-m4g5Q9WZuLw/YO7FxYJsnsI/AAAAAAAA6fs/nyDiNA_6EHMrPw3qRLJ7FcR1-MoC4rkZwCLcBGAsYHQ/s0/javabeer.jpg",
    placeholder = painterResource(Res.drawable.ic_cyclone),
    contentDescription = null,
    contentScale = ContentScale.Crop,
    modifier = Modifier.clip(CircleShape)
)
```
I hope this is helpful for someone! Go enjoy coding with Multiplatform Compose; I hope I saved you the 4 hours I lost.
P.S. I use Kotlin 2.0, Compose 1.6.11, and serialization 1.6.3.
| gochev |
1,888,193 | Memory Data Register (MDR) in Computer Architecture | In the realm of computer architecture, efficient data management is pivotal for the smooth operation... | 0 | 2024-06-14T09:21:29 | https://dev.to/pushpendra_sharma_f1d2cbe/memory-data-register-mdr-in-computer-architecture-4c8 | webdev, productivity, learning, design | In the realm of computer architecture, efficient data management is pivotal for the smooth operation of a computer system. One of the critical components in this domain is the Memory Data Register (MDR), which plays a fundamental role in the interaction between the CPU and the memory subsystem. This article delves into the intricacies of the MDR, exploring its function, significance, and its role in modern computing systems.

## What is the Memory Data Register (MDR)?
[The Memory Data Register](https://www.tutorialandexample.com/mdr-in-computer-architecture) (MDR), also known as the Memory Buffer Register (MBR), is a register in a computer's central processing unit (CPU) that temporarily holds data being transferred to or from the computer's main memory. The MDR acts as a buffer, facilitating the smooth transfer of data between the CPU and memory, thereby ensuring that data is correctly read from or written to memory locations.
## Function and Operation
The primary function of the MDR is to store data that is being transferred between the CPU and memory. This includes:
- **Reading Data from Memory:** When the CPU needs to read data from the memory, the address of the desired data is placed in the Memory Address Register (MAR). The memory unit then fetches the data from the specified address and places it in the MDR. The CPU can then access this data from the MDR.
- **Writing Data to Memory:** When the CPU wants to write data to memory, it first places the data in the MDR. Subsequently, the address of the memory location where the data needs to be written is placed in the MAR. The memory unit then takes the data from the MDR and writes it to the specified address.
In both read and write operations, the MDR acts as an intermediary, holding the data temporarily to ensure proper synchronization and data integrity during the transfer process.
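The read and write cycles described above can be sketched in a few lines of Python (illustrative only: the class and method names are invented for this example, and real hardware performs these steps with buses and control signals rather than method calls):

```python
# Minimal sketch of CPU <-> memory transfers buffered through MAR/MDR registers.

class MemoryUnit:
    def __init__(self, size=16):
        self.cells = [0] * size  # main memory cells

class CPU:
    def __init__(self, memory):
        self.memory = memory
        self.mar = 0  # Memory Address Register: holds the target address
        self.mdr = 0  # Memory Data Register: buffers the data in transit

    def read(self, address):
        self.mar = address                      # 1. address -> MAR
        self.mdr = self.memory.cells[self.mar]  # 2. memory cell -> MDR
        return self.mdr                         # 3. CPU consumes the MDR

    def write(self, address, data):
        self.mdr = data                         # 1. data -> MDR
        self.mar = address                      # 2. address -> MAR
        self.memory.cells[self.mar] = self.mdr  # 3. MDR -> memory cell

mem = MemoryUnit()
cpu = CPU(mem)
cpu.write(5, 42)
print(cpu.read(5))  # 42
```

Note how both operations pass through `self.mdr`: the CPU never touches a memory cell directly, which is exactly the buffering role the MDR plays.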
## Significance of the MDR
The MDR is crucial for several reasons:
- **Data Integrity:** By temporarily holding data during transfers, the MDR ensures that the data being read from or written to memory remains accurate and uncorrupted. This is essential for maintaining the integrity of computations and operations performed by the CPU.
- **System Performance:** The MDR contributes to the overall performance of the computer system. By acting as a buffer, it helps in minimizing the delays associated with data transfers between the CPU and memory, thereby enhancing the speed and efficiency of data handling operations.
- **Simplifying Data Transfer:** The MDR simplifies the process of data transfer by providing a dedicated register for holding data temporarily. This reduces the complexity of the CPU's interaction with memory, allowing for more streamlined and efficient data management.
## Role in Modern Computing Systems
In modern computing systems, the MDR continues to play a vital role, albeit within a more complex and sophisticated architecture. With advancements in technology, modern CPUs often feature multiple levels of cache memory and more intricate data transfer mechanisms. However, the fundamental principle of the MDR—temporarily holding data during transfers—remains unchanged.
In contemporary architectures, the MDR works in conjunction with other registers and components, such as cache controllers and advanced memory management units, to optimize data flow and enhance system performance. The MDR's role is particularly significant in scenarios involving large data transfers, such as in high-performance computing and data-intensive applications.
## Conclusion
The Memory Data Register (MDR) is a foundational component in computer architecture, facilitating efficient and reliable data transfers between the CPU and memory. Its role in maintaining data integrity, enhancing system performance, and simplifying data transfer processes underscores its importance in both traditional and modern computing systems. As technology continues to evolve, the principles embodied by the MDR will remain integral to the design and operation of efficient and effective computer architectures. | pushpendra_sharma_f1d2cbe |
1,888,163 | CrewAI: Revolutionizing Workforce Management with AI | In the fast-evolving landscape of technology, artificial intelligence (AI) continues to transform... | 0 | 2024-06-14T09:19:49 | https://dev.to/rootviper4/crewai-revolutionizing-workforce-management-with-ai-405m | crewai, python, ai, webdev | In the fast-evolving landscape of technology, artificial intelligence (AI) continues to transform industries by optimizing processes, enhancing efficiency, and driving innovation. One such revolutionary application of AI is in workforce management, and at the forefront of this transformation is CrewAI. In this article, we delve into what CrewAI is, how it leverages AI to revolutionize workforce management, and why it stands out in the competitive tech ecosystem.
**What is CrewAI?**
CrewAI is an advanced AI-driven platform designed to streamline and optimize workforce management. It provides businesses with powerful tools to manage their teams more effectively, ensuring optimal productivity, improved decision-making, and enhanced employee satisfaction. By harnessing the power of AI, CrewAI automates routine tasks, provides actionable insights, and facilitates better communication within teams.
**Key Features of CrewAI**
**1. Intelligent Scheduling:**
CrewAI's intelligent scheduling feature uses AI algorithms to create optimal work schedules. It considers various factors such as employee availability, skill sets, workload, and organizational priorities to generate schedules that maximize efficiency and employee satisfaction.
**2. Real-Time Analytics:**
With real-time analytics, CrewAI offers businesses valuable insights into workforce performance. It tracks key metrics such as productivity, attendance, and task completion rates, enabling managers to make data-driven decisions.
**3. Automated Task Management:**
CrewAI automates the assignment and tracking of tasks, ensuring that the right person is assigned to the right job at the right time. This reduces the administrative burden on managers and helps teams stay organized and focused.
**4. Predictive Analytics:**
Leveraging predictive analytics, CrewAI can forecast workforce needs based on historical data and trends. This helps businesses anticipate demand, manage resources efficiently, and avoid potential bottlenecks.
**5. Enhanced Communication:**
CrewAI includes a suite of communication tools that facilitate seamless interaction within teams. From instant messaging to collaborative project management, these tools ensure that everyone is on the same page.
**6. Employee Engagement:**
By providing personalized feedback and recognizing employee achievements, CrewAI fosters a positive work environment. This boosts morale and encourages a culture of continuous improvement.
**How CrewAI Stands Out**
While there are several workforce management tools available, CrewAI distinguishes itself through its innovative use of AI and its commitment to user-centric design. Here are some reasons why CrewAI stands out:
**1. AI-Driven Efficiency:**
Unlike traditional tools, CrewAI leverages sophisticated AI algorithms to automate complex tasks and provide actionable insights. This not only saves time but also ensures more accurate and efficient workforce management.
**2. Scalability:**
CrewAI is designed to scale with your business. Whether you are a small startup or a large enterprise, CrewAI can adapt to your needs, offering flexible solutions that grow with you.
**3. User-Friendly Interface:**
The platform boasts an intuitive interface that is easy to navigate. This reduces the learning curve for new users and ensures that businesses can start reaping the benefits of CrewAI quickly.
**4. Customizability:**
CrewAI understands that every business is unique. Therefore, it offers customizable features that can be tailored to meet specific organizational requirements. This flexibility makes CrewAI a versatile tool for a wide range of industries.
**5. Continuous Improvement:**
CrewAI is committed to continuous improvement. The platform regularly updates its features based on user feedback and the latest advancements in AI technology. This ensures that businesses always have access to cutting-edge tools for workforce management.
**Code Example: Using CrewAI's API with Python**
To illustrate the ease of integrating CrewAI into your existing systems, here’s a small example of how to interact with CrewAI's API using Python. This example demonstrates how to fetch and display the current schedule for a team.
```python
import requests

# CrewAI API base URL
base_url = "https://api.crewai.com/v1"

# Replace with your actual API key
api_key = "YOUR_API_KEY"

# Function to fetch the current schedule for a team
def get_team_schedule(team_id):
    endpoint = f"{base_url}/teams/{team_id}/schedule"
    headers = {
        "Authorization": f"Bearer {api_key}"
    }
    response = requests.get(endpoint, headers=headers)
    if response.status_code == 200:
        return response.json()
    else:
        print(f"Error: {response.status_code}")
        return None

# Replace with your actual team ID
team_id = "YOUR_TEAM_ID"

# Fetch and display the schedule
schedule = get_team_schedule(team_id)
if schedule:
    print("Current Team Schedule:")
    for shift in schedule['shifts']:
        print(f"Employee: {shift['employee_name']}, Start: {shift['start_time']}, End: {shift['end_time']}")
```
This simple example shows how you can use Python to interact with the CrewAI API, allowing you to integrate its powerful workforce management capabilities into your own applications seamlessly.
**Conclusion**
In an era where efficiency and productivity are paramount, CrewAI emerges as a game-changer in workforce management. By harnessing the power of AI, it provides businesses with the tools they need to manage their teams effectively, make informed decisions, and foster a positive work environment. As technology continues to evolve, platforms like CrewAI will undoubtedly play a crucial role in shaping the future of work. | rootviper4 |
1,888,172 | odoo v14 and issues with requirements.txt | when you install odoo requirements.txt, you may get error as below UnicodeDecodeError:... | 0 | 2024-06-14T09:19:07 | https://dev.to/jeevanizm/odoo-v14-and-issues-with-requirementstxt-2o23 | When you install Odoo's requirements.txt, you may get an error like the one below:
```
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 967: character maps to <undefined>
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/89/ad/9388970542f82857ac2958b3eaddfad16caaf967cf8532e9486dedc69420/python-stdnum-1.8.tar.gz#sha256=3f42639cae75c0f6ba734eaa7391d411b7fdef868873503f7d2b2962fc3d71bd (from https://pypi.org/simple/python-stdnum/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement python-stdnum==1.8 (from versions: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.8.1, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.8.1, 1.9, 1.10, 1.11, 1.12, 1.13, 1.14, 1.15, 1.16, 1.17, 1.18, 1.19, 1.20)
ERROR: No matching distribution found for python-stdnum==1.8
```
The solution is to change the versions of python-stdnum and psutil as below:
```
python-stdnum==1.8.1
psutil==5.6.7
```
Another error could be:
```
import passlib.utils
File "C:\Python3810\lib\site-packages\passlib\utils\__init__.py", line 845, in <module>
from time import clock as timer
ImportError: cannot import name 'clock' from 'time' (unknown location)
```
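The root cause of this traceback is that `time.clock` was deprecated since Python 3.3 and removed in Python 3.8, while passlib 1.7.1 and older still import it. A quick sanity check (a sketch, runnable in any interpreter):

```python
import sys
import time

# passlib <= 1.7.1 does "from time import clock", which fails on Python 3.8+
# because time.clock was removed; passlib 1.7.2 is compatible with 3.8+.
print(sys.version_info[:2])
print(hasattr(time, "clock"))  # False on Python 3.8 and newer
```

If `hasattr(time, "clock")` prints `False`, any passlib older than 1.7.2 will crash on import, which is why the fix is to pin `passlib==1.7.2`.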
So the solution is to change:
```
passlib==1.7.2
```
You may get another error such as:
```
"AttributeError: module 'lxml.html.clean' has no attribute 'defs'"
```
And the solution is:

```
pip uninstall lxml
pip install lxml==4.8.0
```
The final working requirements.txt for Odoo v14 is:
```
Babel==2.6.0; python_version <= '3.9'
Babel==2.9.1; python_version > '3.9' # (Jammy) 2.6.0 has issues with python 3.10
chardet==3.0.4
decorator==4.3.0
docutils==0.14
ebaysdk==2.1.5
freezegun==0.3.11; python_version < '3.8'
freezegun==0.3.15; python_version >= '3.8'
gevent==1.1.2 ; sys_platform != 'win32' and python_version < '3.7'
gevent==1.4.0 ; sys_platform == 'win32' and python_version < '3.7'
gevent==1.5.0 ; python_version == '3.7'
gevent==20.9.0 ; python_version > '3.7' and python_version <= '3.9'
gevent==21.8.0 ; python_version > '3.9' # (Jammy)
greenlet==0.4.10 ; python_version < '3.7'
greenlet==0.4.15 ; python_version == '3.7'
greenlet==0.4.17 ; python_version > '3.7' and python_version <= '3.9'
greenlet==1.1.2 ; python_version > '3.9' # (Jammy)
idna==2.6
Jinja2==2.10.1; python_version < '3.8'
# bullseye version, focal patched 2.10
Jinja2==2.11.2; python_version >= '3.8'
libsass==0.17.0
lxml==4.8.0
Mako==1.0.7
MarkupSafe==1.1.0
num2words==0.5.6
ofxparse==0.19; python_version <= '3.9'
ofxparse==0.21; python_version > '3.9' # (Jammy) ABC removed from collections in 3.10 but still used in ofxparse < 0.21
passlib==1.7.2
Pillow==5.4.1 ; python_version <= '3.7' and sys_platform != 'win32'
Pillow==6.1.0 ; python_version <= '3.7' and sys_platform == 'win32'
Pillow==8.1.1 ; python_version > '3.7'
polib==1.1.0
psutil==5.6.7
psycopg2==2.7.7; sys_platform != 'win32' and python_version < '3.8'
psycopg2==2.8.5; sys_platform == 'win32' or python_version >= '3.8'
pydot==1.4.1
python-ldap==3.1.0; sys_platform != 'win32'
PyPDF2==1.26.0
pyserial==3.4
python-dateutil==2.7.3
pytz==2019.1
pyusb==1.0.2
qrcode==6.1
reportlab==3.5.13; python_version < '3.8'
reportlab==3.5.55; python_version >= '3.8'
requests==2.21.0; python_version <= '3.9'
requests==2.25.1; python_version > '3.9' # (Jammy) versions < 2.25 aren't compatible w/ urllib3 1.26. Bullseye = 2.25.1. min version = 2.22.0 (Focal)
urllib3==1.26.5; python_version > '3.9' # (Jammy) indirect / min version = 1.25.8 (Focal with security backports)
zeep==3.2.0
python-stdnum==1.8.1
vobject==0.9.6.1
Werkzeug==0.16.1 ; python_version <= '3.9'
Werkzeug==2.0.2 ; python_version > '3.9' # (Jammy)
XlsxWriter==1.1.2
xlwt==1.3.*
xlrd==1.1.0; python_version < '3.8'
xlrd==1.2.0; python_version >= '3.8'
pypiwin32 ; sys_platform == 'win32'
```
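For reference, the `python_version` markers in the file above are evaluated by pip at install time, which is how one file serves several Python versions. A small illustrative sketch in Python (the `pick_freezegun` helper is hypothetical, written only to mimic the freezegun pins above):

```python
import sys

# pip evaluates environment markers such as "python_version >= '3.8'"
# at install time; this helper mimics the freezegun pins above.
def pick_freezegun(python_version):
    # mirrors: freezegun==0.3.11; python_version < '3.8'
    #          freezegun==0.3.15; python_version >= '3.8'
    major, minor = (int(p) for p in python_version.split("."))
    return "0.3.11" if (major, minor) < (3, 8) else "0.3.15"

print(pick_freezegun("3.7"))    # 0.3.11
print(pick_freezegun("3.10"))   # 0.3.15
print(pick_freezegun("{}.{}".format(*sys.version_info[:2])))
```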
| jeevanizm | |
1,881,404 | The "this" confusion in JS: function vs arrow | Hey there, Good to see you back again!, In this blog I would like to clear a common confusion for... | 0 | 2024-06-14T09:16:22 | https://dev.to/balajich004/the-this-confusion-in-js-function-vs-arrow-11lj | javascript, webdev, programming, node | Hey there, good to see you back again! In this blog I would like to clear up a common point of confusion for many JS beginners: how `this` behaves in arrow functions versus normal functions.
## Standard function's perspective of this keyword
"this" is a keyword in JS that normally refers to the object we are currently working with; in terms of methods, "this" refers to the object the method is defined on.
```
const obj={
name:"bj",
info:function(){
console.log(this.name);
}
}
```
The above is a simple object in JS with a `name` attribute and an `info` method. Now, when we call `obj.info()`, what do you think it console logs?
```
bj
```
Yep! It prints the name of `obj`. But if we write the same code outside of the object, in the global execution context, then `this` refers to the global object, which in browsers is `window` by default (it may vary between runtimes).
So to summarize: for a standard function, `this` defaults to the global object, but when the function is called as a method of an object, `this` refers to that object.
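To see that `this` for a standard function depends on how the function is called, not where it is defined, here is a small sketch using the same function attached to two objects, plus an explicit binding via `call`:

```javascript
function whoAmI() {
  return this.name;
}

const a = { name: "a", whoAmI: whoAmI };
const b = { name: "b", whoAmI: whoAmI };

console.log(a.whoAmI());                 // "a", this === a
console.log(b.whoAmI());                 // "b", this === b
console.log(whoAmI.call({ name: "c" })); // "c", this set explicitly
```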
## Arrow functions perspective of this keyword
With arrow functions, things change slightly. Let's write the same object as above, but using an arrow function this time, and see what it outputs.
```
const obj={
name:"bj",
info:()=>{
console.log(this.name);
}
}
```
Now what do you think happens when we call `obj.info()`? You might think we only changed the syntax and the behavior stays the same. Well, I am sorry to say, but you are wrong, mate. Standard functions in JS get their own `this` binding, whereas arrow functions rely on lexical scope (simply, the scope around them). Here the enclosing scope is the global scope, which refers to the `window` object, and since the window object has no `name` property, we get `undefined` as output.
```
undefined
```
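If you do want an arrow function to see the right `this`, the usual fix is to keep the method itself a standard function and use the arrow inside it, where it lexically inherits the method's `this`. A minimal sketch:

```javascript
const obj = {
  name: "bj",
  info: function () {
    // the arrow has no `this` of its own, so it inherits `this`
    // from info(), which is `obj` when called as obj.info()
    const getName = () => this.name;
    return getName();
  },
};

console.log(obj.info()); // "bj"
```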
So, by the end of this blog, what I would like you to take away is: use the `this` keyword in standard functions and constructors, try not to use it in the ES6 star boys (arrow functions 😉), and take the time to understand how they work.
That's it for this blog, Catch you in the next one till then Adios.
| balajich004 |
1,888,171 | Electrifying the Road: Exploring DC EV Charging Technology | H09b71ff2f7a942d18ebc7910a120b077J.png Electrifying the Road: Exploring DC EV Charging... | 0 | 2024-06-14T09:16:20 | https://dev.to/jahira_hanidha_ac8711fb57/electrifying-the-road-exploring-dc-ev-charging-technology-180d | H09b71ff2f7a942d18ebc7910a120b077J.png
Electrifying the Road: Exploring DC EV Charging Technology
Introduction
Electric vehicles (EVs) are becoming much more popular among car buyers, and for good reason. Not only do they help reduce greenhouse gas emissions, they also provide a smoother, quieter ride. However, the biggest obstacle for EVs is the availability of charging networks. That is why DC EV charging technology is changing the game.
Advantages of DC EV Charging Technology
DC EV charging offers distinct advantages over traditional AC charging. To begin with, it is faster: while an AC charger can take hours to fully charge a car, a DC charger can do it in under twenty to thirty minutes. Furthermore, DC chargers are more powerful, meaning they can charge many vehicles in the same period without straining the grid. Finally, DC chargers are more efficient, wasting less energy during the charging process.
Innovation in DC EV Charging Technology
Innovation in DC EV charging has been remarkable over the past few years. One example is ultra-fast charging stations, which can charge a vehicle in as little as five to ten minutes, enabling drivers to top up their EVs quickly on the go. Wireless charging technology is also being developed, which would allow drivers to park their vehicle over a charging pad and have it charge automatically, with no cables required.
Protection of DC EV Charging Technology
A common concern with any fast-charging technology is safety. Fortunately, DC EV charging is designed with safety in mind. All DC charging stations include multiple protections, such as over-current and over-voltage protection. Moreover, the cables and plugs used for DC charging are designed to be more durable than those used for AC charging.
Using DC EV Charging Technology
Using DC EV charging is very easy. First, find a DC charging station; these are becoming increasingly common in public areas such as shopping centers and parking lots. Next, connect your vehicle to the charger using the charging cable. Finally, start the charging process, which may vary slightly depending on the charging station being used.
Service and Quality of DC EV Charging Technology
As with any technology, service and quality are critical factors for DC EV charging. Fortunately, most large EV manufacturers are now partnering with charging networks to ensure the service and quality of charging stations is up to standard. Many charging networks also offer conveniences such as multiple payment methods and smartphone apps, making it easy for drivers to find, access, and pay for charging.
Application of DC EV Charging Technology
DC EV charging has applications beyond individual EV use. For example, DC charging stations can be used in commercial and public transport fleets, enabling organizations to switch to electric vehicles and reduce their carbon footprint. DC charging can also power electric buses and trains, helping reduce greenhouse gas emissions in the transportation sector.
DC EV charging technology is a cutting-edge solution to the problems faced by EV drivers. With faster charging times and a wide range of use cases, DC chargers are becoming more popular among drivers and companies alike. As the technology continues to improve, we can expect to see even wider adoption of electric vehicles, benefiting both the environment and the way we travel.
Source: https://www.ppowercharging.com/Ev-dc-charger | jahira_hanidha_ac8711fb57 | |
1,888,170 | FSSC 22000 Certification in Zimbabwe | FSSC 22000 Certification in Zimbabwe is increasingly gaining traction among food industry... | 0 | 2024-06-14T09:15:21 | https://dev.to/neha_1ba1ad0deb4d4c484e1c/fssc-22000-certification-in-zimbabwe-a5g | FSSC 22000 Certification in Zimbabwe is increasingly gaining traction among food industry stakeholders. This certification ensures that companies adhere to international food safety standards, which is crucial in a market where food safety and quality are paramount. By obtaining FSSC 22000 Certification in Zimbabwe, businesses can enhance their operational efficiencies, reduce the risk of foodborne illnesses, and improve their marketability both locally and internationally. The certification process involves a comprehensive assessment of a company's food safety management system, ensuring it meets the rigorous requirements of the FSSC 22000 standard. The demand for FSSC 22000 Certification in Zimbabwe is driven by the need to comply with global food safety regulations and the growing awareness among consumers about food safety. Companies that achieve FSSC 22000 Certification in Zimbabwe demonstrate their commitment to maintaining high food safety standards, which can lead to increased consumer trust and business growth. Additionally, this certification helps businesses in Zimbabwe align with international trade requirements, opening up new opportunities in global markets.
| neha_1ba1ad0deb4d4c484e1c | |
1,888,169 | What are the courses provided by careerpedia ? | You're interested in pursuing courses in careerpedia like data science, UI/UX design, web... | 0 | 2024-06-14T09:14:51 | https://dev.to/naveen_azmeera_3b6658344b/what-are-the-courses-provided-by-careerpedia--3gj5 |
You're interested in pursuing courses on Careerpedia like data science, UI/UX design, web development, automation, and potentially even language proficiency through IELTS. That's quite a diverse set of skills and interests! Here's how you might approach coaching in these areas:
Data Science:
Look for online courses, bootcamps, or university programs that offer comprehensive training in data science. Make sure the curriculum covers topics like statistics, machine learning, data visualization, and programming languages like Python or R.
UI/UX Design:
Explore online resources, tutorials, and courses that focus on user interface (UI) and user experience (UX) design principles. Practice designing interfaces, conducting user research, and prototyping using tools like Adobe XD, Sketch, or Figma.
Web Development:
There are numerous online platforms offering courses on web development, covering both front-end (HTML, CSS, JavaScript) and back-end (Node.js, databases) development. Consider enrolling in structured programs or bootcamps to gain hands-on experience.
Automation:
Automation skills are highly valuable in today's tech-driven world. Look for courses or tutorials on scripting languages like Python or tools like Selenium for web automation. Additionally, explore concepts related to DevOps and continuous integration/continuous deployment (CI/CD) pipelines.
IELTS Preparation:
For language proficiency, especially if you're aiming for IELTS, you can find specialized coaching programs or resources tailored specifically for IELTS preparation. These might include practice tests, speaking workshops, and strategies for improving your listening, reading, writing, and speaking skills.
When seeking coaching or training programs, ensure they're reputable and offer practical, hands-on experience. Additionally, consider building a portfolio to showcase your skills in each of these areas, as practical experience can often be just as important as formal education. Good luck with your career pursuits!
It sounds like you're interested in pursuing courses on Careerpedia, which is an excellent choice considering the growing demand for data-driven insights across various industries. Coaching can provide personalized guidance, advice, and support tailored to your specific career goals and needs in the field of data science. There are also numerous online resources, courses, and communities dedicated to data science education and career development.
https://www.careerpedia.co/ui-ux-course-hyderabad
https://www.careerpedia.co/full-stack-developer-course-hyderabad
https://www.careerpedia.co/data-science-training-hyderabad
https://www.careerpedia.co/software-testing-course-hyderabad
https://www.careerpedia.co/ielts-coaching-hyderabad
| naveen_azmeera_3b6658344b | |
1,890,778 | Create a pagination API with Spring Boot | Splitting larger content into distinct pages is known as pagination. This method significantly... | 0 | 2024-06-17T03:37:22 | https://blog.stackpuz.com/create-a-pagination-api-with-spring-boot/ | springboot, pagination | ---
title: Create a pagination API with Spring Boot
published: true
date: 2024-06-14 09:12:00 UTC
tags: SpringBoot,Pagination
canonical_url: https://blog.stackpuz.com/create-a-pagination-api-with-spring-boot/
---

Splitting larger content into distinct pages is known as pagination. This method significantly enhances the user experience and speeds up the loading of web pages. This example will demonstrate how to create a pagination API using Spring-Boot and use MySQL as a database.
## Prerequisites
- JAVA 17
- Maven
- MySQL
## Setup project
Create a testing database named "example" and run the [database.sql](https://github.com/StackPuz/Example-Pagination-Spring-Boot-3/blob/main/database.sql) file to import the table and data.
## Project structure
```
├─ pom.xml
└─ src
└─ main
├─ java
│ └─ com
│ └─ stackpuz
│ └─ example
│ ├─ App.java
│ ├─ controller
│ │ └─ ProductController.java
│ ├─ entity
│ │ └─ Product.java
│ └─ repository
│ └─ ProductRepository.java
└─ resources
├─ application.properties
└─ static
└─ index.html
```
## Project files
### pom.xml
This file contains the configuration and dependencies of the Maven project.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.stackpuz</groupId>
<artifactId>example-pagination</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>example-pagination</name>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.0.10</version>
</parent>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>8.0.30</version>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</dependency>
</dependencies>
</project>
```
### application.properties
This file contains the database configuration.
```
spring.datasource.url = jdbc:mysql://localhost/example
spring.datasource.username = root
spring.datasource.password =
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQLDialect
```
### App.java
This file is the main entry point for the Spring Boot application.
```java
package com.stackpuz.example;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class App {
public static void main(String[] args) {
SpringApplication.run(App.class, args);
}
}
```
### ProductRepository.java
This file defines the product repository by utilizing the PagingAndSortingRepository class, which has the pagination feature, so we can use it to implement the pagination with less effort.
```java
package com.stackpuz.example.repository;
import com.stackpuz.example.entity.Product;
import org.springframework.data.repository.PagingAndSortingRepository;
public interface ProductRepository extends PagingAndSortingRepository<Product, Integer> {
}
```
### Product.java
This file defines the product entity that maps to our database table named "Product".
```java
package com.stackpuz.example.entity;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.*;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.Setter;
import java.math.BigDecimal;
@Entity
@Getter
@Setter
@NoArgsConstructor
public class Product {
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
private int id;
private String name;
private BigDecimal price;
}
```
- We use the `lombok` library features to reduce the amount of code written for our entity by using `@Getter @Setter @NoArgsConstructor`
### ProductController.java
This file is used to handle incoming requests and produce the paginated data for the client.
```java
package com.stackpuz.example.controller;
import java.util.Optional;
import com.stackpuz.example.entity.Product;
import com.stackpuz.example.repository.ProductRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.*;
import org.springframework.data.domain.Sort.Direction;
import org.springframework.web.bind.annotation.*;
import java.util.List;
@RestController
public class ProductController {
@Autowired
private ProductRepository repository;
@GetMapping("/api/products")
public List<Product> getProducts(@RequestParam("page") Optional<Integer> pageParam, @RequestParam("size") Optional<Integer> sizeParam, @RequestParam("order") Optional<String> orderParam, @RequestParam("direction") Optional<String> directionParam) {
int page = pageParam.orElse(1) - 1;
int size = sizeParam.orElse(10);
String order = orderParam.orElse("id");
String direction = directionParam.orElse("asc");
return repository.findAll(PageRequest.of(page, size, Sort.by(Direction.fromString(direction), order))).getContent();
}
}
```
- We use `@RequestParam` to map each query string to the URL.
- Because Spring Data page indexes start at zero, we use `pageParam.orElse(1) - 1` to convert the 1-based page number from the request.
- `repository.findAll()` receives the `Pageable` object as the parameter. We create this `Pageable` object by using the `PageRequest.of()` method.
- `repository.findAll()` will return a `Page<T>` object that contains all the information about the paginated data, but in this case we just want to return the product data, so we use `getContent()` here.
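The 1-based-to-0-based conversion in `getProducts` is easy to get wrong. Here is a minimal standalone sketch of the same logic (plain Java, no Spring; the `toZeroBased` helper and class name are hypothetical, written only to mirror `pageParam.orElse(1) - 1`):

```java
import java.util.Optional;

public class PageIndexDemo {
    // Mirrors pageParam.orElse(1) - 1 from the controller:
    // the client sends a 1-based page number, Spring Data expects 0-based.
    static int toZeroBased(Optional<Integer> pageParam) {
        return pageParam.orElse(1) - 1;
    }

    public static void main(String[] args) {
        System.out.println(toZeroBased(Optional.empty())); // 0, first page
        System.out.println(toZeroBased(Optional.of(2)));   // 1, second page
    }
}
```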
### index.html
Instead of entering URLs manually to test our API, we use this file to provide links for easier testing.
```html
<!DOCTYPE html>
<head>
</head>
<body>
<ul>
<li><a target="_blank" href="/api/products">Default</a></li>
<li><a target="_blank" href="/api/products?page=2">Page 2</a></li>
<li><a target="_blank" href="/api/products?page=2&size=25">Page 2 and Size 25</a></li>
<li><a target="_blank" href="/api/products?page=2&size=25&order=name">Page 2 and Size 25 and Order by name</a></li>
<li><a target="_blank" href="/api/products?page=2&size=25&order=name&direction=desc">Page 2 and Size 25 and Order by name descending</a></li>
</ul>
</body>
</html>
```
## Run project
```
mvn spring-boot:run
```
Open a web browser and go to http://localhost:8080. You will find this test page.

## Testing
### Testing without any parameters
Click the "Default" link, and it will open the URL `http://localhost:8080/api/products`

The API will return paginated data with default parameters (page = 1 and size = 10).
### Page index testing
Click the "Page 2" link, and it will open the URL `http://localhost:8080/api/products?page=2`

The API will return paginated data on the second page, starting with product id 11.
### Page size testing
Click the "Page 2 and Size 25" link, and it will open the URL `http://localhost:8080/api/products?page=2&size=25`

The API will return paginated data on the second page by starting with product id 26 because the page size is 25.
### Order testing
Click the "Page 2 and Size 25 and Order by name" link, and it will open the URL `http://localhost:8080/api/products?page=2&size=25&order=name`

The API will return paginated data on the second page, but the product order is based on the product name.
### Descending order testing
Click the "Page 2 and Size 25 and Order by name descending" link, and it will open the URL `http://localhost:8080/api/products?page=2&size=25&order=name&direction=desc`

The API will return paginated data on the second page, but the product order is based on the product name in descending order.
## Conclusion
In this article, you have learned how to utilize the PagingAndSortingRepository class to implement the pagination API for our web application with less effort. This will enhance the user experience and speed up your Spring Boot application. If you like this article please share it with your friends.
Source code: [https://github.com/stackpuz/Example-Pagination-Spring-Boot-3](https://github.com/stackpuz/Example-Pagination-Spring-Boot-3)
Create a CRUD Web App in Minutes: [https://stackpuz.com](https://stackpuz.com) | stackpuz |
1,888,167 | Steps To Successfully Outsource Your Software Projects | Effective Outsource Software Projects requires careful execution and strategic planning. Clearly... | 0 | 2024-06-14T09:11:55 | https://dev.to/infowindtech57/steps-to-successfully-outsource-your-software-projects-245k | mobile, softwaredevelopment | Effective software project outsourcing requires careful execution and strategic planning. Clearly define the parameters and objectives of your project from the start to ensure alignment with your outsourcing partner. According to Statista, IT outsourcing contributes $66.5 billion, the largest share of the $92.5 billion global outsourcing sector.
Review portfolios and client testimonials to select a reputable, experienced software development business. To keep the project transparent and under control, set up regular checkpoints and updates through effective communication.
To safeguard your intellectual property and ensure it complies with international standards, consider the legal and contractual issues. Keep a careful eye on the project to reduce risks and put quality control procedures in place to uphold standards.
By taking care of these important details, companies may use outsourcing to improve their technology capacities, cut expenses, and concentrate more on their core competencies, ultimately leading to increased productivity and creativity in their software projects.
Introduction To Software Project Outsourcing
Leveraging global talent and technical skills can dramatically improve a company's offerings and efficiency in today's cutthroat market. Software project outsourcing has become a popular strategic technique that lets companies assign software development work to outside experts while concentrating on their core skills. This model offers several benefits: it accelerates the time to market for new technologies, helps manage budgetary constraints effectively, and simplifies access to worldwide expertise.
The data indicates that the software outsourcing industry is set for steady growth, with a projected compound annual growth rate (CAGR) of 7.54% over the 2023-2027 period [Statista].
For businesses without in-house software development experience or for projects requiring specialist knowledge on a temporary basis, outsourcing is especially beneficial. Businesses may secure high-quality results and more adeptly manage complicated project needs by working with specialized software developers on an external basis, all without the added expense of growing their own workforce.
Advantages of Contracting Out Software
Businesses seeking to enhance operational effectiveness and innovate can benefit from software outsourcing. Several strategic advantages come with this approach. The benefits of software outsourcing are extensively examined below:
1. Cost Efficiency:
When compared to in-house development, outsourcing software development frequently results in significant cost reductions. Businesses that outsource can save on overhead, including workspace, benefits, and salaries, and collaborating with foreign partners with lower labor costs is especially beneficial. In Deloitte's outsourcing report, 57% of executives counted cost-cutting among the major reasons for outsourcing.
This approach need not compromise quality; the key is outsourcing to countries with lower labor costs while holding contractors to the same standards. The resulting financial flexibility enables companies to invest more in other critical areas, such as marketing and research and development, or to deploy resources more wisely.
2. Access to Specialized Skills:
The rapid pace of technological advancement demands proficiency in the latest tools, frameworks, and languages. Through outsourcing, companies can find specialized partners globally and access diverse expertise from various regions. This is particularly beneficial for tasks requiring niche knowledge or advanced skills, and it helps ensure cutting-edge technologies are integrated into software development. Deloitte reports that 50% of American businesses prioritize talent acquisition, and this strategy enables access to a vast global talent pool.
3. A Greater Focus on Core Business Functions
A corporation can concentrate its resources on its core competencies by outsourcing non-essential operations such as software development. This focus is crucial for staying competitive in its main market. For example, a retail company outsourcing the development of its e-commerce website can then improve its product offerings and customer service. Where it counts most, this division of labor helps to enhance efficiency and innovation.
4. Enhanced adaptability and scalability
Companies need to adjust their operations swiftly to meet shifting consumer expectations. Outsourcing provides the flexibility to adjust the development team's size quickly in response to project needs, without hiring or layoffs. This scalability allows firms to remain adaptable and responsive to market conditions, for example by speeding up software development to launch new products or services.
5. Reduction of Risk
Software development involves a number of risks, including financial, regulatory, and technical ones, all of which outsourcing may assist in mitigating. Professional outsourcing firms have created protocols and methods to effectively manage and lower these risks. They are equipped to handle unanticipated issues, like sudden increases in workload or technical glitches that can derail a less experienced team.
Moreover, outsourcing firms usually follow global standards and industry best practices, ensuring that the software they produce meets security and quality criteria.
6. Faster Time-to-Market:
Outsourcing can accelerate product launches: McKinsey found that one hotel chain reduced its time to market by 25% by outsourcing to shorten development cycles. Outsourcing companies can speed up processes by working around the clock across multiple time zones. Speed is critical in today's fast-paced business world, and swift launches provide competitive advantages.
Software outsourcing helps businesses maintain competitiveness in the digital era, offering advantages that range from improved operational capabilities to strategic business growth and innovation. It is no surprise, then, that so many companies rely on outsourcing to stay ahead.
Common Challenges And Drawbacks to Steer Clear of When Outsourcing
Outsourcing software development projects offers numerous advantages. However, there are also disadvantages. If not managed properly, these disadvantages could derail your goals. The following are a few typical downsides to be aware of and mitigation techniques for them:
1. Communication Barriers:
Any project’s success depends on effective communication, yet outsourcing can create challenges, including time zone variations, cultural barriers, and language impediments. These may result in miscommunication, misplaced project objectives, and delays.
Regular update schedules and strong communication channels are essential to reducing these hazards. Closing such gaps can also be facilitated by adopting an open feedback culture and using project management tools. Furthermore, choosing outsourced partners whose time zones partially coincide with your own can help to improve communication.
2. Quality Control Issues:
When outsourcing, it can be difficult to maintain high levels of quality since different teams may interpret project needs and quality benchmarks differently. The finished product and the reputation of your company may suffer from inconsistent quality. Define precise, comprehensive project specifications and set quality standards early on to prevent this.
To keep an eye on quality throughout the development process, do frequent reviews and audits and insist on a continuous integration/continuous delivery (CI/CD) pipeline. Take part in extensive testing phases and think about integrating your internal team at crucial development stages to make sure the results meet your needs.
3. Loss of Control:
It’s possible to feel as though you have less influence over a project’s planning and execution after outsourcing. If this isn’t checked, projects may go off course. To handle this, keep up active participation in project governance. Set up regular checkpoints and key performance indicators (KPIs) to assess progress toward milestones. Use contracts wisely, ensuring that all parties understand exactly what is expected. This includes adhering to deadlines, providing project updates, and managing changes.
4. Security Risks:
Vulnerability to data theft and security breaches is increased when private information and system access are shared with an outside organization. Investigate the security procedures and compliance requirements of the outsourced provider in order to mitigate these risks.
Make sure they have strong cybersecurity safeguards in place and follow international security requirements. Use secure data transfer techniques and encrypted communication routes. To formally bind the supplier to confidentiality and security requirements, consider legal measures such as non-disclosure agreements (NDAs).
5. Dependency on Vendor:
It can be risky to become overly reliant on a single outsourcing vendor, particularly if that vendor experiences operational or financial instability. Critical services may be interrupted or even stop entirely as a result. To reduce this risk, maintain a diverse portfolio of vendors, or have fallback options ready in case your primary provider cannot meet your expectations. Ensure you own the codebase and any associated documentation so you can switch vendors easily if needed.
Through proactive implementation of techniques to prevent these potential downsides, firms can guarantee that their outsourcing endeavors result in successful project outputs and expanded capabilities rather than problems and failures.
How Infowind Technologies Can Support You?
**[Infowind Technologies](https://www.infowindtech.com/)** has established itself as a reliable and innovative partner in software outsourcing. The company offers a wide range of services. It is known for its strong commitment to quality. Infowindtech is uniquely positioned to support your business. They assist at every step of the software development process. Here’s how Infowind can help you accomplish the goals of your project:
1. Access to Expert Talent:
Access to a large pool of highly qualified software developers with expertise in a variety of contemporary technologies and sector-specific solutions is provided by Infowind. They can offer the experts you need. This gives your projects the benefit of having a seasoned team on hand. Our experts can take on challenging tasks right away.
2. Customized Solutions:
Infowind Technologies places a strong emphasis on developing specialized software solutions. These solutions align with your particular business objectives and operational requirements. The company understands that every business has different demands and challenges. The organization takes a consultative approach to understand your business drivers and pain points. This approach enables the creation of customized solutions that meet your current needs. Additionally, these solutions are designed to grow with your business in the future.
3. Enhanced Project Management:
Infowind uses cutting-edge project management techniques and tools. They guarantee smooth collaboration and communication. This thorough oversight aids in risk mitigation and effective resource management. It also upholds a high standard of quality throughout the project.
4. Robust Quality Assurance:
A primary focus of Infowind’s service portfolio is quality. The organization incorporates stringent quality assurance (QA) procedures at every development stage. They ensure the finished product is reliable, secure, and functional. This is achieved using both manual and automated testing techniques. This painstaking attention to detail raises user satisfaction and lowers post-deployment difficulties.
5. Security and Compliance:
Infowind complies with strict security guidelines and compliance standards since it understands how important security is in today’s digital environment. Infowind is prepared to manage your security requirements. This includes data protection, privacy protection, and adherence to industry rules. The organization uses industry best practices for safeguarding data. It constantly updates its security protocols to meet new threats.
6. Continuous Support and Maintenance:
Post-deployment support is essential to the continuous success of any software program. After your software is released, Infowind offers full support and maintenance services to make sure it keeps working properly. Infowindtech’s committed support team is available to help make sure your application responds to changing business needs and technological landscapes, from routine updates to troubleshooting and feature enhancements.
Infowind Technologies is a partner that cares about your success as much as a supplier. Choosing Infowindtech as your outsourcing partner offers more than software development expertise. You gain a partner dedicated to enhancing your company’s value. We achieve this through cutting-edge technological solutions tailored to your needs. Our commitment ensures your company stays competitive and innovative. With Infowindtech, you invest in a future-proof and value-driven partnership.
Conclusion
Outsourcing software projects successfully requires a careful strategy to optimize gains and minimize risks. Start by outlining the project’s goals and scope precisely. This ensures alignment with the outsourcing partner’s capabilities.
Hire Dedicated Software Developers with experience in the relevant technology disciplines and a proven track record of success. To overcome the obstacles posed by distance and cultural differences, prioritize efficient communication by creating clear channels and scheduling frequent updates for outsourced software projects. Adopt strict quality control procedures to guarantee that high standards are upheld throughout the development process.
Take care of security issues by making sure the partner complies with strict data protection laws and performing extensive due diligence. Lastly, stay in charge of the project by conducting frequent reviews and evaluations and being ready to make changes as needed. Companies can use outsourcing to improve their technical capabilities, reduce expenses, and concentrate more on key business objectives by carefully managing these elements.
| infowindtech57 |
1,888,166 | Affordable Web Hosting: Reliable Services at Unbeatable Prices | Discover our affordable web hosting services, where reliability meets unbeatable prices. Whether... | 0 | 2024-06-14T09:10:04 | https://dev.to/zapakhost/affordable-web-hosting-reliable-services-at-unbeatable-prices-521j | affordable, webhosting, zapakhost, hosting | Discover our [affordable web hosting](https://zapakhost.com/?utm_source=Web%202.0&utm_medium=Dev%20to&utm_campaign=Articles) services, where reliability meets unbeatable prices. Whether you're launching a personal blog or managing a business website, our hosting solutions ensure dependable performance and support.

With robust security features and 24/7 customer service, we're committed to keeping your online presence secure and accessible. Explore our range of plans tailored to suit every need, and experience the convenience of seamless hosting at competitive rates. Join us and simplify your web hosting journey today! For More [Click Here](https://zapakhost.com/kb/top-affordable-web-hosting-services/?utm_source=Web%202.0&utm_medium=Dev%20to&utm_campaign=Article) | zapakhost |
1,888,165 | How to Start a Business in Allahabad | Starting a business in Allahabad, a city known for its rich cultural heritage and growing economy,... | 0 | 2024-06-14T09:08:45 | https://dev.to/aditya_pandey_1847fe5a44a/how-to-start-a-business-in-allahabad-3oe2 |

Starting a business in Allahabad, a city known for its rich cultural heritage and growing economy, can be a rewarding venture. From understanding the local market to leveraging digital marketing strategies, this guide will walk you through the essential steps to establish your business successfully. Additionally, we will explore how partnering with the Best Digital Marketing Company in Allahabad can significantly boost your business’s growth from the very beginning.
1. Conduct Market Research
Before launching any business, it’s crucial to understand the local market. Conduct thorough market research to identify potential customers, competitors, and market gaps. This research will help you tailor your products or services to meet local demands and differentiate your business from others.
2. Develop a Business Plan
A well-crafted business plan serves as a roadmap for your business. It should include:
• Executive Summary: An overview of your business idea.
• Market Analysis: Insights from your market research.
• Business Structure: The type of business entity you will form (e.g., sole proprietorship, partnership, or corporation).
• Products/Services: A description of what you will offer.
• Marketing Strategy: How you plan to attract and retain customers.
• Financial Projections: Estimated costs and revenue forecasts.
3. Register Your Business
Choose a unique name for your business and register it with the appropriate government authorities. Ensure that you obtain all necessary licenses and permits to operate legally in Allahabad.
4. Secure Funding
Determine how much capital you need to start and run your business until it becomes profitable. Explore various funding options, including personal savings, bank loans, angel investors, or venture capitalists.
5. Set Up Your Business Location
Choose a strategic location for your business. Consider factors like accessibility, foot traffic, and proximity to suppliers and customers. Whether it’s a physical storefront or an online presence, ensure your business is easily reachable.
6. Create a Strong Online Presence
In today’s digital age, having a robust online presence is essential for business success. Here’s where the Best Digital Marketing Company in Allahabad can play a pivotal role. They can help you:
Build a Professional Website
A professional, user-friendly website is the foundation of your online presence. The digital marketing company can design and develop a website that reflects your brand, provides valuable information, and facilitates customer engagement.
Optimize for Search Engines (SEO)
Search engine optimization (SEO) is crucial for driving organic traffic to your website. The Best Digital Marketing Company in Allahabad can implement SEO strategies to improve your website’s visibility on search engines, helping potential customers find you easily.
Leverage Social Media
Social media platforms are powerful tools for reaching and engaging with your target audience. A digital marketing agency can manage your social media profiles, create engaging content, and run targeted ad campaigns to boost your brand’s visibility and attract customers.
Utilize Content Marketing
Content marketing involves creating valuable, relevant content to attract and retain a clearly defined audience. The digital marketing company can help you develop a content strategy, produce high-quality blog posts, videos, and infographics, and distribute them across various channels to drive traffic and generate leads.
Implement Email Marketing
Email marketing is an effective way to nurture relationships with your customers. The digital marketing agency can set up automated email campaigns to keep your audience informed about new products, special offers, and company updates.
7. Focus on Customer Service
Exceptional customer service is key to building a loyal customer base. Train your staff to provide friendly, efficient, and personalized service. Encourage customer feedback and use it to improve your offerings and address any issues promptly.
8. Monitor Your Progress
Regularly monitor your business’s performance using key metrics such as sales, customer satisfaction, and online engagement. Adjust your strategies based on the insights you gather to ensure continuous growth and improvement.
Conclusion
Starting a business in Allahabad requires careful planning, dedication, and strategic execution. By following these steps and partnering with the Best Digital Marketing Company in Allahabad, you can establish a strong foundation for your business and accelerate its growth. With the right support, your business can thrive in the competitive market of Allahabad and beyond. | aditya_pandey_1847fe5a44a | |
1,888,164 | Why Should I Invest in Developing a Metaverse NFT Marketplace? | The digital world is changing fast, with two big developments: the Metaverse and NFTs (Non-Fungible... | 0 | 2024-06-14T09:08:09 | https://dev.to/elena_marie_dad5c9d5d5706/why-should-i-invest-in-developing-a-metaverse-nft-marketplace-4ogb | metaverse, nft, marketplace | The digital world is changing fast, with two big developments: the Metaverse and NFTs (Non-Fungible Tokens). It's time to get caught up if you haven't noticed already. Why should you consider creating a Metaverse NFT marketplace? Let's find out.
What is a Metaverse NFT Marketplace?
The Metaverse is an online space where people can interact, work, play, and create. It's like a virtual universe. NFTs are unique digital items verified by blockchain technology. A Metaverse NFT marketplace is where people can buy, sell, and trade these digital items within the Metaverse.
Benefits of **[Metaverse NFT Marketplace development](https://www.clarisco.com/metaverse-nft-marketplace-development)**:
Let’s explore the Benefits of Metaverse NFT Marketplace development,
High Growth Potential
The Metaverse and NFTs are just getting started, so there's a lot of room for growth. Getting in early can mean big gains as more people start using and trading in these digital spaces.
Diversification of Investment Portfolio
Investing in a Metaverse NFT marketplace adds variety to your investment portfolio. It's a chance to be part of new, fast-growing digital technologies.
Technological Innovation and Adoption
Investing in this area isn't just about following a trend—it's about being part of a major tech revolution. The use of VR, AR, blockchain, and AI in the Metaverse can lead to even more new technologies and opportunities.
Economic Opportunities in the Metaverse
Virtual Real Estate
Virtual real estate is one of the most profitable areas in the Metaverse. Similar to the real world, you can buy, sell, and rent out prime virtual properties to make money.
Digital Goods and Services
The market for digital goods, such as virtual fashion and digital art, is rapidly expanding. NFTs allow creators to earn money from their work in entirely new ways.
Job Creation and Economic Impact
The Metaverse is generating new job opportunities, including roles for developers, virtual architects, and designers. This growing industry has the potential to make a big impact on the global economy.
Conclusion:
Investing in a Metaverse NFT marketplace presents a unique chance to be part of a digital revolution. With significant growth potential, diverse economic opportunities, and involvement in cutting-edge technology, it's an exciting venture to consider. However, it's important to understand the challenges and plan carefully to manage risks effectively. Working with a **[Metaverse development company](https://www.clarisco.com/metaverse-development-company)** that offers Metaverse development services can help navigate this complex landscape.
| elena_marie_dad5c9d5d5706 |
1,888,162 | Pure ML vs Applied ML | ⚡𝐏𝐮𝐫𝐞 𝐌𝐚𝐜𝐡𝐢𝐧𝐞 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 is a subfield of artificial intelligence that focuses on the development and... | 0 | 2024-06-14T09:04:35 | https://dev.to/sagorbro005/pure-ml-vs-applied-ml-197n | pureml, appliedml, machinelearning | ⚡𝐏𝐮𝐫𝐞 𝐌𝐚𝐜𝐡𝐢𝐧𝐞 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 is a subfield of artificial intelligence that focuses on the development and study of algorithms that can learn from and make predictions or decisions based on data. It involves the study of how these algorithms can be designed to learn from data, identify patterns, and make decisions with minimal human intervention. It is about understanding the underlying principles and theories of machine learning algorithms.
Consider a scenario where you have a dataset of images labeled as either cats or dogs. In pure machine learning, you would use this dataset to train an algorithm to recognize the features that distinguish cats from dogs. The algorithm would analyze the images, learn the common characteristics of each category, and then be able to classify new images as either a cat or a dog based on what it has learned.
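To make that scenario concrete, here is a deliberately tiny sketch (not real image processing; the two numeric features and their values are made up purely for illustration): a 1-nearest-neighbor rule "learns" only from the labeled examples and then classifies new points by similarity.

```python
# Toy illustration of learning from labeled data (hypothetical features,
# not real pixels): a 1-nearest-neighbor rule classifies a new point by
# copying the label of the closest training example.
def nearest_neighbor(train, query):
    """train: list of ((f1, f2), label) pairs; returns the closest label."""
    def dist2(point):
        return (point[0] - query[0]) ** 2 + (point[1] - query[1]) ** 2
    return min(train, key=lambda ex: dist2(ex[0]))[1]

# Labeled "dataset": two made-up features per animal.
train = [((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"),
         ((0.2, 0.9), "dog"), ((0.3, 0.8), "dog")]

print(nearest_neighbor(train, (0.85, 0.25)))  # cat
print(nearest_neighbor(train, (0.25, 0.85)))  # dog
```

Real systems replace the hand-picked features with ones learned from the raw images, but the train-then-predict loop is the same.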
⚡𝐀𝐩𝐩𝐥𝐢𝐞𝐝 𝐌𝐚𝐜𝐡𝐢𝐧𝐞 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 refers to the practical application of machine learning techniques to solve real-world problems. It involves using algorithms and statistical models to analyze and interpret complex data, make predictions, or automate decision-making processes. Unlike pure machine learning research, which focuses on developing new algorithms or improving existing ones, applied machine learning is about using these tools to deliver tangible results in various fields such as finance, healthcare, marketing, and more.
For example, in healthcare, applied machine learning might be used to predict patient outcomes based on historical data, while in finance, it could be used to detect fraudulent transactions. The key aspect of applied machine learning is its focus on practical implementation and the ability to create value from data by making informed decisions or predictions.
Both pure and applied machine learning are crucial for the field's progress. Pure ML advancements provide the foundation for applied ML applications, while applied ML success stories motivate further pure ML research.
𝐌𝐚𝐜𝐡𝐢𝐧𝐞 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐅𝐮𝐧𝐝𝐚𝐦𝐞𝐧𝐭𝐚𝐥𝐬: https://lnkd.in/gVDDBUim
| sagorbro005 |
1,888,161 | Interfacing with FMZ robot using "Tradingview" indicator | Background introduction TradingView is a good market quotes drawing tool. The pine script... | 0 | 2024-06-14T09:03:28 | https://dev.to/fmzquant/interfacing-with-fmz-robot-using-tradingview-indicator-5hme | trading, indicator, robot, fmzquant | ## Background introduction
TradingView is an excellent charting tool for market quotes.
Its Pine Script language is also powerful!
With backtesting, alerts, and all kinds of integrations, it is a very complete financial toolkit.
But there are two issues that have been plaguing us...
- One is the expensive membership system
- The second is that very few exchanges can trade its signals directly; it seems to be only two or three.
Today's article will walk you through solving the exchange docking problem.
## Implementation
The overall idea is like this:
TV (TradingView) Pine script -> signal alert webhook -> local webhook server forwards the request -> FMZ bot receives the request and acts on it
Let's go step by step.
First, go to the TradingView website:
https://www.tradingview.com/
Next, we first create an Alert, see the figure below for details

A few fields in the screenshot need attention when creating the Alert:
the validity period, the webhook URL, and the message content must all be set carefully.
The expiration date is self-explanatory: the alert becomes invalid once it expires...
Webhook URL: leave it empty for now; we will fill it in once the local webhook service is ready.
Message: it is best to make it clearly self-describing, so the bot can tell one alert message from another.
I generally set it like this: XXX strategy, order quantity, and trading direction
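For example, if you settle on a comma-separated message such as `VMA,BUY,1` (strategy name, direction, quantity; these exact names are just an illustration, the format is up to you), the receiving side can validate and split it with a few lines of Python:

```python
# Parse a TradingView alert message of the assumed form
# "<strategy>,<direction>,<quantity>", e.g. "VMA,BUY,1".
def parse_alert(message):
    parts = [p.strip() for p in message.split(",")]
    if len(parts) != 3:
        raise ValueError("unexpected alert format: %r" % message)
    strategy, direction, quantity = parts
    if direction not in ("BUY", "SELL"):
        raise ValueError("unknown direction: %r" % direction)
    return strategy, direction, float(quantity)

print(parse_alert("VMA,BUY,1"))  # ('VMA', 'BUY', 1.0)
```

Rejecting malformed messages early keeps a stray or forged POST from reaching the bot.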
So far, the TradingView part is basically done!
Next, let's get the local webhook service done!
Googling this kind of work will turn up plenty of results, so this article skips the details; you can set it up yourself.
Here is a simple Python framework:
```
GitHub: https://github.com/shawn-sterling/gitlab-webhook-receiver
```
It is safe, worry-free, and convenient, but it has one issue.
This little framework will kill itself!! Please watch out for this!
So I wrote another script on the server: whenever "die" or "offline" appears in the log, it restarts the service. Later, still not feeling safe, I also set it to restart on a schedule, at an unimportant minute of every hour. It has now been running safely for two months with no more lost signals.
In addition, TradingView only recognizes port 80, so don't mess up the service port.
So far, we have handled the Message part of the Alert. Next, how do we get it to the Bot?
I don't know whether you have noticed the API documentation at the bottom of the FMZ site:

We can pass commands to our little Bot through the API!
The specific request example is here, the red box is the request we need.

Some preparation work is needed here:
the FMZ API key (avatar -> account settings -> API interface),
and a Bot that has already been created and started (we need its ID, so create the bot first; the number in the robot page's URL is the ID).

Next, we transform the webhook service so that after receiving the message, it will be automatically forwarded to the FMZ Bot.
Finally, don't forget to fill the completed webhook address into the TradingView Alert (format: http://xx.xx.xx.xx:80).
The following is the service code I modified; you can use it as a reference:
```
#!/usr/bin/python -tt
# -*- coding: UTF-8 -*-

from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import json
import logging
import logging.handlers
import os
import re
import shutil
import subprocess
import time
import ssl

ssl._create_default_https_context = ssl._create_unverified_context

try:
    # Python 2
    import md5
    import urllib2
    from urllib import urlencode
except ImportError:
    # Python 3
    import hashlib as md5
    import urllib.request as urllib2
    from urllib.parse import urlencode

############################################################
##### You will likely need to change some of the below #####

# log file for this script
log_file = '/root/webhook/VMA/webhook.log'

# Bot API licence
accessKey = ''
secretKey = ''

# HTTP config
log_max_size = 25165824  # 24 MB
log_level = logging.INFO
# log_level = logging.DEBUG  # DEBUG is quite verbose
listen_port = 80

##### You should stop changing things unless you know what you are doing #####
##############################################################################

log = logging.getLogger('log')
log.setLevel(log_level)
log_handler = logging.handlers.RotatingFileHandler(log_file,
                                                   maxBytes=log_max_size,
                                                   backupCount=4)
f = logging.Formatter("%(asctime)s %(filename)s %(levelname)s %(message)s",
                      "%B %d %H:%M:%S")
log_handler.setFormatter(f)
log.addHandler(log_handler)


class webhookReceiver(BaseHTTPRequestHandler):

    def run_it(self, cmd):
        """
        runs a command
        """
        p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
        log.debug('running: %s' % cmd)
        p.wait()
        if p.returncode != 0:
            log.critical("Non-zero exit code %s executing: %s" % (p.returncode, cmd))
        return p.stdout

    def bot_command(self, method, *args):
        """
        sends a command request to the FMZ bot API
        """
        d = {
            'version': '1.0',
            'access_key': accessKey,
            'method': method,
            'args': json.dumps(list(args)),
            'nonce': int(time.time() * 1000),
        }
        # sign = md5("version|method|args|nonce|secretKey")
        d['sign'] = md5.md5(('%s|%s|%s|%d|%s' % (d['version'], d['method'], d['args'], d['nonce'], secretKey)).encode('utf-8')).hexdigest()
        return json.loads(urllib2.urlopen('https://www.fmz.com/api/v1', urlencode(d).encode('utf-8')).read().decode('utf-8'))

    def do_POST(self):
        """
        receives a post and handles it
        """
        log.debug('got post')
        message = 'OK'
        self.rfile._sock.settimeout(5)  # Python 2 socket access
        data_string = self.rfile.read(int(self.headers['Content-Length']))
        log.info(data_string)
        self.send_response(200)
        self.send_header("Content-type", "text")
        self.send_header("Content-length", str(len(message)))
        self.end_headers()
        self.wfile.write(message)
        log.debug('TV connection should be closed now.')
        # log.info(self.bot_command('GetRobotList', -1, -1, -1))  # GetRobotList(offset, length, robotStatus int); pass -1 to get all
        log.info(self.bot_command('CommandRobot', 169788, data_string))  # CommandRobot(robotId int64, cmd string); send the command to the robot

    def log_message(self, format, *args):
        """
        disable printing to stdout/stderr for every post
        """
        return


def main():
    """
    the main event.
    """
    try:
        server = HTTPServer(('', listen_port), webhookReceiver)
        log.info('started web server...')
        server.serve_forever()
    except KeyboardInterrupt:
        log.info('ctrl-c pressed, shutting down.')
        server.socket.close()


if __name__ == '__main__':
    main()
```
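Once the service is running, you can check it end to end without waiting for a real TradingView alert: POST a message to it yourself and confirm it answers `OK`. A quick sketch (the URL and message below are examples; adjust them to your setup):

```python
# Send a fake TradingView-style alert to the webhook server and read the reply.
import urllib.request

def send_test_alert(url, message):
    req = urllib.request.Request(url, data=message.encode('utf-8'),
                                 headers={'Content-Type': 'text/plain'})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status, resp.read().decode('utf-8')

# Example, with the server above listening on port 80:
# status, body = send_test_alert('http://127.0.0.1:80', 'VMA,BUY,1')
# expect status == 200 and body == 'OK'
```

If the reply looks right but the bot does not react, check the webhook log and the robot ID configured in the service.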
## Implementation within FMZ platform trading strategy
Everything above covers the communication side; our Bot's trading strategy also needs corresponding handling so it can process the received signals.
For example, the Alert message designed at the beginning: you can shape it according to your own preferences and needs.
The code is as follows: get the message, filter it, perform the corresponding operation, and finish.
```
function get_Command() { // Responsible function for interaction, interactively update relevant values in time, users can expand by themselves
    var way = null; // route
    var cmd = GetCommand(); // Get interactive command API
    var cmd_arr = cmd.split(",");
    if (cmd) {
        // Define the route
        if (cmd.indexOf("BUY,1") != -1) {
            way = 1;
        }
        if (cmd.indexOf("SELL,1") != -1) {
            way = 2;
        }
        if (cmd.indexOf("BUY,2") != -1) {
            way = 3;
        }
        if (cmd.indexOf("SELL,2") != -1) {
            way = 4;
        }
        // Branch selection operation
        switch (way) {
            case 1:
                xxx
                break;
            case 2:
                xxx
                break;
            case 3:
                xxx
                break;
            case 4:
                xxx
                break;
            default:
                break;
        }
    }
}
```
That's the end of this article. I hope it helps you!
From: https://blog.mathquant.com/2020/06/19/interfacing-with-fmz-robot-using-tradingview-indicator.html | fmzquant |
1,888,159 | Exploring the Evolution: A Dive into the Fashion Industry Trends | Discover the dynamic shifts and innovations driving the fashion industry's evolution, from... | 0 | 2024-06-14T08:55:26 | https://dev.to/arsports/exploring-the-evolution-a-dive-into-the-fashion-industry-trends-22i4 | webdev, beginners, javascript, programming | Discover the dynamic shifts and innovations driving the fashion industry's evolution, from sustainability to digital influence, shaping consumer preferences worldwide.
Shop Now- https://www.arsports.com.co/ | arsports |
1,888,158 | Dr. Nubi Achebo Awarded Distinguished Professorship In Management By AUBSS And QAHE | The American University of Business and Social Sciences (AUBSS) and the International Association for... | 0 | 2024-06-14T08:52:29 | https://dev.to/aubss_edu/dr-nubi-achebo-awarded-distinguished-professorship-in-management-by-aubss-and-qahe-2jep | education, news, aubss, qahe | The American University of Business and Social Sciences (AUBSS) and the International Association for Quality Assurance in Pre-Tertiary & Higher Education (QAHE) proudly announce the prestigious Distinguished Professorship in Management awarded to Dr. Nubi Achebo.
With an outstanding career spanning over 25 years, Dr. Achebo has been recognized for his exceptional contributions to the fields of education, technology, and management. His visionary leadership, unwavering dedication, and groundbreaking research have left a lasting impact on academia and beyond.
The Distinguished Professorship in Management is a testament to Dr. Achebo’s remarkable achievements and his significant influence in shaping the academic landscape. This esteemed recognition acknowledges his invaluable contributions to the advancement of knowledge and innovation in the field of management.
Dr. Achebo’s expertise in instructional technology, curriculum development, and strategic planning has propelled him to the forefront of educational leadership. His extensive experience as the Director of Academic Planning at the Nigerian University of Technology and Management, as well as his roles at renowned institutions such as the Lagos Business School and Saint Xavier University, Chicago, have showcased his exceptional abilities as a transformative leader.
Beyond academia, Dr. Achebo has made notable contributions to project management, knowledge management, and sustainability. His dedication to gender equality and social progress is exemplified through his involvement in the USAID Engendering Industries program.
As a distinguished professor, Dr. Achebo will continue to inspire and mentor the next generation of leaders in the field of management. His research and thought leadership will further enrich the academic community, fostering innovation and excellence.
AUBSS and QAHE are honored to bestow the Distinguished Professorship in Management upon Dr. Nubi Achebo. This prestigious recognition is a testament to his exceptional achievements and his unwavering commitment to advancing the frontiers of knowledge. | aubss_edu |
1,888,156 | Exception Types | Exceptions are objects, and objects are defined using classes. The root class for exceptions is... | 0 | 2024-06-14T08:49:16 | https://dev.to/paulike/exception-types-351p | java, programming, learning, beginners | Exceptions are objects, and objects are defined using classes. The root class for exceptions is **java.lang.Throwable**. The preceding section used the classes **ArithmeticException** and **InputMismatchException**. Are there any other types of exceptions you can use? Can you define your own exception classes? Yes. There are many predefined exception classes in the Java API. Figure below shows some of them.

The class names **Error**, **Exception**, and **RuntimeException** are somewhat confusing. All three of these classes are exceptions, and all of the errors occur at runtime.
The **Throwable** class is the root of exception classes. All Java exception classes inherit directly or indirectly from **Throwable**. You can create your own exception classes by extending **Exception** or a subclass of **Exception**.
The exception classes can be classified into three major types: system errors, exceptions, and runtime exceptions.
- _System errors_ are thrown by the JVM and are represented in the **Error** class. The **Error** class describes internal system errors, though such errors rarely occur. If one does, there is little you can do beyond notifying the user and trying to terminate the program gracefully. Examples of subclasses of **Error** are listed in Table below.

- _Exceptions_ are represented in the **Exception** class, which describes errors caused by your program and by external circumstances. These errors can be caught and handled by your program. Examples of subclasses of **Exception** are listed in Table below.

- _Runtime exceptions_ are represented in the **RuntimeException** class, which describes programming errors, such as bad casting, accessing an out-of-bounds array, and numeric errors. Runtime exceptions are generally thrown by the JVM. Examples of subclasses are listed in Table below.

**RuntimeException**, **Error**, and their subclasses are known as _unchecked exceptions_. All other exceptions are known as _checked exceptions_, meaning that the compiler forces the programmer to check and deal with them in a **try-catch** block or declare it in the method header.
In most cases, unchecked exceptions reflect programming logic errors that are unrecoverable. For example, a **NullPointerException** is thrown if you access an object through a reference variable before an object is assigned to it; an **IndexOutOfBoundsException** is thrown if you access an element in an array outside the bounds of the array. These are logic errors that should be corrected in the program. Unchecked exceptions can occur anywhere in a program. To avoid cumbersome overuse of **try-catch** blocks, Java does not mandate that you write code to catch or declare unchecked exceptions. | paulike |
1,888,150 | Managed Forex Account Services | A Managed Forex Account Service is a type of investment service where professional traders or money... | 0 | 2024-06-14T08:38:52 | https://dev.to/akash_fx/managed-forex-account-services-16mk | webdev, forex, scalping, account | A [Managed Forex Account Service](https://forexwebstore.com/account-management/) is a type of investment service where professional traders or money managers trade the foreign exchange (Forex) market on behalf of clients. These services can be attractive for individuals who want to invest in Forex but lack the time, expertise, or desire to trade themselves. Here’s an overview of what you need to know about managed Forex account services:
**Key Features**

- **Professional Management:** Experienced traders or money managers handle all trading decisions, leveraging their expertise and strategies to manage the account.
- **Customization:** Accounts can often be tailored to meet specific investor goals, risk tolerance, and investment horizons.
- **Transparency:** Regular reports and account statements are provided to investors, showing trades, performance, and fees.
- **Accessibility:** Many managed Forex accounts can be accessed online, allowing investors to monitor their account performance in real-time.
**Types of Managed Forex Accounts**

- **PAMM (Percentage Allocation Management Module):** Funds from multiple investors are pooled into a single trading account. Profits and losses are allocated proportionally based on each investor’s share of the pool.
- **LAMM (Lot Allocation Management Module):** Investors’ funds are kept in separate accounts, and trades are allocated based on lot sizes rather than percentages.
- **MAM (Multi-Account Manager):** Combines features of both PAMM and LAMM, allowing for more flexible allocation and management of trades across multiple accounts.
**Benefits**

- **Expertise:** Professional traders with extensive market knowledge can potentially achieve better results than inexperienced individual traders.
- **Diversification:** Managed accounts can provide access to diverse trading strategies and instruments, spreading risk.
- **Time-Saving:** Investors do not need to spend time learning the Forex market or monitoring trades.
- **Performance Potential:** Access to sophisticated trading strategies and risk management techniques that individual investors might not be able to implement on their own.
**Risks**

- **Market Risk:** The Forex market is highly volatile, and there is always a risk of losing capital.
- **Manager Risk:** The performance of the managed account depends on the skills and decisions of the account manager.
- **Fees:** Managed accounts typically charge management and performance fees, which can affect overall returns.
- **Regulatory Risk:** Ensure the service is provided by a reputable and regulated entity to avoid potential fraud or malpractice.
**Choosing a Managed Forex Account Service**

- **Regulation:** Verify that the service is regulated by a reputable financial authority.
- **Track Record:** Look for a service with a proven track record and positive reviews from clients.
- **Transparency:** Ensure the service provides clear and regular reporting on account performance and fees.
- **Fee Structure:** Understand the fee structure, including management fees, performance fees, and any other associated costs.
- **Communication:** Choose a service that maintains good communication and provides support when needed.
**Conclusion**
Managed Forex account services can offer a convenient and potentially profitable way to invest in the Forex market without having to actively trade. However, it's crucial to thoroughly research and choose a reputable provider, understand the associated risks, and be aware of the costs involved. | akash_fx |
1,888,155 | Getting Started with NestJS and TypeORM: A Beginner's Guide | Hey everyone! 👋 If you're looking to build scalable and maintainable server-side applications with... | 0 | 2024-06-14T08:47:20 | https://dev.to/souhailxedits/getting-started-with-nestjs-and-typeorm-a-beginners-guide-ggc | nestjs, typeorm, webdev, apigateway | Hey _everyone_! 👋 If you're looking to build scalable and maintainable server-side applications with Node.js, NestJS is a fantastic framework to dive into. Combined with TypeORM for database interactions, you can create robust and type-safe applications with ease. In this guide, we'll walk through setting up a simple NestJS project with TypeORM.
**Why NestJS and TypeORM?**
**NestJS**
NestJS is a progressive Node.js framework built with TypeScript that leverages the robust features of Angular to provide a reliable structure for building server-side applications. Its modular architecture makes it easy to manage and scale large applications.
**TypeORM**
TypeORM is an ORM (Object-Relational Mapper) for TypeScript and JavaScript (ES7, ES6, ES5). It supports many databases and allows you to interact with your database using TypeScript's powerful type system.
**Setting Up Your Environment**
**Prerequisites**
Before we start, make sure you have Node.js and npm installed. You can download them from the official Node.js website.
**Step 1: Create a New NestJS Project**
First, we'll create a new NestJS project using the Nest CLI. If you don't have the Nest CLI installed, you can install it globally:
```
npm install -g @nestjs/cli
```
Now, create a new project:
```
nest new my-nestjs-project
```
**Step 2: Install TypeORM and Dependencies**
Next, we need to install TypeORM and the database driver we'll use. For this example, we'll use SQLite because it's simple and doesn't require any setup.
```
npm install @nestjs/typeorm typeorm sqlite3
```
**Step 3: Configure TypeORM**
We need to configure TypeORM in our NestJS project. Open src/app.module.ts and update it to include TypeORM configuration:
```
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { AppController } from './app.controller';
import { AppService } from './app.service';
@Module({
imports: [
TypeOrmModule.forRoot({
type: 'sqlite',
database: 'database.sqlite',
entities: [__dirname + '/**/*.entity{.ts,.js}'],
synchronize: true,
}),
],
controllers: [AppController],
providers: [AppService],
})
export class AppModule {}
```
**Step 4: Create an Entity**
Entities are the core of TypeORM. They represent the tables in your database. Let's create a simple User entity.
Create a new directory called entities inside the src directory, and then create a file named user.entity.ts inside the entities directory:
```
import { Entity, Column, PrimaryGeneratedColumn } from 'typeorm';
@Entity()
export class User {
@PrimaryGeneratedColumn()
id: number;
@Column()
name: string;
@Column()
email: string;
}
```
**Step 5: Create a User Module and Service**
To keep our code modular, we will create a user module and a corresponding service to handle our user-related logic.
Generate a new module and service:
```
nest generate module users
nest generate service users
```
Now, update 'src/users/users.module.ts' to include TypeORM for our User entity:
```
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { UsersService } from './users.service';
import { User } from '../entities/user.entity';
@Module({
imports: [TypeOrmModule.forFeature([User])],
providers: [UsersService],
exports: [UsersService],
})
export class UsersModule {}
```
**Step 6: Implement the Users Service**
Open src/users/users.service.ts and implement basic CRUD operations:
```
import { Injectable } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { User } from '../entities/user.entity';
@Injectable()
export class UsersService {
constructor(
@InjectRepository(User)
private usersRepository: Repository<User>,
) {}
findAll(): Promise<User[]> {
return this.usersRepository.find();
}
findOne(id: number): Promise<User> {
return this.usersRepository.findOneBy({ id });
}
create(user: User): Promise<User> {
return this.usersRepository.save(user);
}
async remove(id: number): Promise<void> {
await this.usersRepository.delete(id);
}
}
```
**Step 7: Create a Users Controller**
Generate a controller for users:
```
nest generate controller users
```
Open src/users/users.controller.ts and set up the endpoints:
```
import { Controller, Get, Post, Body, Param, Delete } from '@nestjs/common';
import { UsersService } from './users.service';
import { User } from '../entities/user.entity';
@Controller('users')
export class UsersController {
constructor(private readonly usersService: UsersService) {}
@Get()
findAll(): Promise<User[]> {
return this.usersService.findAll();
}
@Get(':id')
findOne(@Param('id') id: string): Promise<User> {
return this.usersService.findOne(+id);
}
@Post()
create(@Body() user: User): Promise<User> {
return this.usersService.create(user);
}
@Delete(':id')
remove(@Param('id') id: string): Promise<void> {
return this.usersService.remove(+id);
}
}
```
**Step 8: Integrate Users Module**
Finally, integrate the UsersModule into our AppModule. Open src/app.module.ts and update it:
```
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { UsersModule } from './users/users.module';
@Module({
imports: [
TypeOrmModule.forRoot({
type: 'sqlite',
database: 'database.sqlite',
entities: [__dirname + '/**/*.entity{.ts,.js}'],
synchronize: true,
}),
UsersModule,
],
controllers: [AppController],
providers: [AppService],
})
export class AppModule {}
```
**Running the Application**
Now that everything is set up, you can run your application:
```
npm run start
```
Your NestJS server should be running, and you can start interacting with your API. Try making some requests to the /users endpoint using Postman or any other API client to see your CRUD operations in action.
**Conclusion**
Congratulations! 🎉 You've just set up a basic NestJS application with TypeORM and implemented CRUD operations for a User entity. From here, you can expand your application by adding more entities, services, and controllers as needed.
NestJS and TypeORM provide a powerful combination for building scalable and maintainable applications. Keep exploring the NestJS and TypeORM documentation to unlock their full potential.
Happy coding! 🚀
| souhailxedits |
1,888,154 | Python Online Classes: Master Coding with Kodyfier in India | If you're eager to delve into the world of programming, learning Python is a fantastic place to... | 0 | 2024-06-14T08:46:57 | https://dev.to/riya45/python-online-classes-master-coding-with-kodyfier-in-india-4d2f | pythononlineclasses, pythononlineclass, pythononlineclassesinindia, kodyfier |

If you're eager to delve into the world of programming, learning Python is a fantastic place to start. Renowned for its simplicity and versatility, Python is a top choice for both beginners and seasoned developers. For those in India seeking top-notch training, Kodyfier IT Software Training Institution offers exceptional "**[python online classes](https://kodyfier.com/contact.php)**" that are designed to cater to various learning needs.
Kodyfier's **[python online classes](https://kodyfier.com/online-python-course.php)** stand out due to their meticulously structured curriculum. Whether you are new to coding or looking to refine your skills, Kodyfier has crafted courses that encompass the essentials and advanced facets of Python programming. Students begin with the fundamentals, learning about variables, data types, and control structures, before progressing to more complex topics such as web development, machine learning, and data analysis.
One of the primary benefits of Kodyfier’s "**[python online classes](https://kodyfier.com/online-python-course.php)**" is the flexibility they offer. The online format allows learners to study from the comfort of their homes and at their own pace. This is particularly advantageous for working professionals and students who need to balance their education with other commitments. Kodyfier’s experienced instructors, who are experts in their fields, provide personalized guidance to ensure each student grasps the material thoroughly.
Kodyfier’s interactive platform further enhances the learning experience. The online classes include live sessions where students can ask questions and participate in discussions, fostering a collaborative learning environment. Additionally, the platform provides access to a wealth of resources such as recorded lectures, coding exercises, and real-world projects. This hands-on approach not only solidifies theoretical knowledge but also builds practical skills crucial for the job market.
Beyond the comprehensive curriculum and flexible learning environment, **[Kodyfier](https://kodyfier.com/)** offers robust career support. Services such as resume building, interview preparation, and job placement assistance are available to help students transition smoothly into the IT industry. This holistic approach ensures that Kodyfier graduates are well-prepared to secure lucrative positions and excel in their careers.
Kodyfier’s "**[python online classes](https://kodyfier.com/online-python-course.php)**" are an excellent choice for anyone looking to master Python and advance in the IT field. With a well-rounded curriculum, flexible learning options, and extensive career support, Kodyfier in India is your gateway to becoming a proficient Python programmer. Enroll today and take the first step towards a promising future in tech.
| riya45 |
1,888,153 | Best Forex Indicators | Visit for More Indicator :- https://forexwebstore.com/product-category/mt4-indicator/ | 0 | 2024-06-14T08:44:17 | https://dev.to/akash_fx/best-forex-indicators-3bj4 | indicators, forex, scalper, market |



Visit for More Indicator :- https://forexwebstore.com/product-category/mt4-indicator/ | akash_fx |
1,888,152 | My Pen on CodePen | Liquid syntax error: Tag '{% %}' was not properly terminated with regexp: /\%\}/ | 0 | 2024-06-14T08:43:19 | https://dev.to/__36002242a86c9/my-pen-on-codepen-48nk | codepen | {% %}Check out this Pen I made!
{% codepen https://codepen.io/Mohammedealanizi/pen/jOoaXvr %} | __36002242a86c9 |
1,888,151 | SAFE ACCOUNT MANAGEMENT | WANT TO JOIN ACCOUNT MANAGEMENT ? LOOK OUR PLANS FOR ACCOUNT MANAGEMENT 👇 SOME OTHER DETAILS FOR... | 0 | 2024-06-14T08:41:12 | https://dev.to/akash_fx/safe-account-management-19fb | forex, account, manager, scalper | WANT TO JOIN ACCOUNT MANAGEMENT ?
LOOK OUR PLANS FOR ACCOUNT MANAGEMENT 👇
SOME OTHER DETAILS FOR ACCOUNT MANAGEMENT
🔴 Stop Wasting Time & Your Money Join Us Change Your Life. Interested People Talk Us 🔴
🟢 Are you in Big Loss ? Or Your Account is Running Much Loss.
🟢 Contact Us for Safe and Secure Account Management.
🟢 We are Masters of Scalp and Hedge Trading. We take trades after the decision of our team.
🟢 This is the best time to make money by forex…. So, don’t waste your money by fake management….!
🟢 Make good profit your account every week and enjoy….!
🟢 Regular profit ….No loss, Minimum draw down, money management available…..to our management service
1). Weekly 2 Times Sharing Depends on Profits We Make
2). Any Broker Accept with Low Spreads
3). Any Profits We Share Weekly 50/50
4). Trading accuracy 90%+ best results, No 1 Results
5). Recommended brokers:- ICmarkets, Peperstone, Tradersway, Gomarkets, Tickmill, Exness
6). We need only Mt4/Mt5 Login details and you can watch your account Growing day by day with no risk
7). Can be Minimum Drawdown 10%
TURN YOUR DREAM INTO REALITY…
Join with us for best account management services & change your life with us.
✔️One-time investment, lifetime profit
✔️Perfect Entry with 5-10 % Can be Drawdown
✔️Sure Analyze with high accuracy
✔️Safe Account management
✔️Proper risk management
✔️Account Safety is our Priority
✔️Daily/Weekly Confirm Profit
Fill Form for Management and Send Us 👇🏿
✅ LOGIN REQUIREMENTS✅
🔰Broker name:———-
🔰Server ID:————
🔰Password:———-
🔰Leverage: ———–
🔰Profit share 50/50
✅ Why you are waiting For Just ASK Account Management Service ✅
🤙Real work 💯
🤙Real accuracy 💯
🤙Real management 💯
👨💻Only with Us😀
Happy Investor
[Join Telegram](https://t.me/non_repaint_mt4) | akash_fx |
1,888,149 | Exception-Handling Overview | Exception handling enables a program to deal with exceptional situations and continue its normal... | 0 | 2024-06-14T08:37:13 | https://dev.to/paulike/exception-handling-overview-5eo8 | java, programming, learning, beginners | Exception handling enables a program to deal with exceptional situations and continue its normal execution. _Runtime errors_ occur while a program is running if the JVM detects an operation that is impossible to carry out. For example, if you access an array using an index that is out of bounds, you will get a runtime error with an **ArrayIndexOutOfBoundsException**. If you enter a **double** value when your program expects an integer, you will get a runtime error with an **InputMismatchException**.
In Java, runtime errors are thrown as exceptions. An _exception_ is an object that represents an error or a condition that prevents execution from proceeding normally. If the exception is not handled, the program will terminate abnormally. How can you handle the exception so that the program can continue to run or else terminate gracefully?
Exceptions are thrown from a method. The caller of the method can catch and handle the exception. To demonstrate exception handling, including how an exception object is created and thrown, let’s begin with the example in the program below, which reads in two integers and displays their quotient.

```
Enter two integers: 5 2
5 / 2 is 2
```
```
Enter two integers: 3 0
Exception in thread "main" java.lang.ArithmeticException: / by zero
at Quotient.main(Quotient.java:11)
```
If you entered **0** for the second number, a runtime error would occur, because you cannot divide an integer by **0**. (_Note that a floating-point number divided by **0** does not raise an exception._) A simple way to fix this error is to add an **if** statement to test the second number, as shown in program below.

Before introducing exception handling, let us rewrite program above to compute a quotient using a method, as shown in program below.

```
Enter two integers: 5 3
5 / 3 is 1
```
```
Enter two integers: 5 0
Divisor cannot be zero
```
The method **quotient** (lines 5–12) returns the quotient of two integers. If **number2** is **0**, it cannot return a value, so the program is terminated in line 8. This is clearly a problem. You should not let the method terminate the program—the **caller** should decide whether to terminate the program.
How can a method notify its caller an exception has occurred? Java enables a method to throw an exception that can be caught and handled by the caller. The preceding program can be rewritten, as shown in program below.

```
Enter two integers: 5 3
5 / 3 is 1
Execution continues ...
```
```
Enter two integers: 5 0
Exception: an integer cannot be divided by zero
Execution continues ...
```
If **number2** is **0**, the method throws an exception (line 7) by executing
```
throw new ArithmeticException("Divisor cannot be zero");
```
The value thrown, in this case **new ArithmeticException("Divisor cannot be zero")**, is called an _exception_. The execution of a **throw** statement is called _throwing an exception_. The exception is an object created from an exception class. In this case, the exception class is **java.lang.ArithmeticException**. The constructor **ArithmeticException(str)** is invoked to construct an exception object, where **str** is a message that describes the exception.
When an exception is thrown, the normal execution flow is interrupted. As the name suggests, to “throw an exception” is to pass the exception from one place to another. The statement for invoking the method is contained in a **try** block and a **catch** block. The **try** block (lines 19–23) contains the code that is executed in normal circumstances. The exception is caught by the **catch** block. The code in the **catch** block is executed to _handle the exception_. Afterward, the statement (line 29) after the **catch** block is executed.
The **throw** statement is analogous to a method call, but instead of calling a method, it calls a **catch** block. In this sense, a **catch** block is like a method definition with a parameter that matches the type of the value being thrown. Unlike a method, however, after the **catch** block is executed, the program control does not return to the **throw** statement; instead, it executes the next statement after the **catch** block.
The identifier **ex** in the **catch**–block header
`catch (ArithmeticException ex)`
acts very much like a parameter in a method. Thus, this parameter is referred to as a **catch**–block parameter. The type (e.g., **ArithmeticException**) preceding **ex** specifies what kind of exception the catch block can **catch**. Once the exception is caught, you can access the thrown value from this parameter in the body of a **catch** block.
In summary, a template for a **try**-**throw**-**catch** block may look like this:
```
try {
  Code to run;
  A statement or a method that may throw an exception;
  More code to run;
}
catch (type ex) {
  Code to process the exception;
}
```
An exception may be thrown directly by using a **throw** statement in a **try** block, or by invoking a method that may throw an exception.
The main method invokes **quotient** (line 20). If the quotient method executes normally, it returns a value to the caller. If the **quotient** method encounters an exception, it throws the exception back to its caller. The caller’s **catch** block handles the exception.
Now you can see the _advantage_ of using exception handling: It enables a method to throw an exception to its caller, enabling the caller to handle the exception. Without this capability, the called method itself must handle the exception or terminate the program. Often the called method does not know what to do in case of error. This is typically the case for the library methods. The library method can detect the error, but only the caller knows what needs to be done when an error occurs. The key benefit of exception handling is separating the detection of an error (done in a called method) from the handling of an error (done in the calling method).
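Since the listing itself appears only as an image, here is a hedged reconstruction of the try-throw-catch version the text describes (line numbers will differ from those cited in the surrounding paragraphs, and the original reads its input with a **Scanner**):

```java
public class QuotientWithException {
    // Throws an exception back to the caller instead of terminating the program.
    public static int quotient(int number1, int number2) {
        if (number2 == 0) {
            throw new ArithmeticException("Divisor cannot be zero");
        }
        return number1 / number2;
    }

    public static void main(String[] args) {
        int number1 = 5, number2 = 0; // fixed values stand in for Scanner input

        try {
            int result = quotient(number1, number2);
            System.out.println(number1 + " / " + number2 + " is " + result);
        } catch (ArithmeticException ex) {
            // The caller decides how to handle the error.
            System.out.println("Exception: an integer cannot be divided by zero");
        }

        System.out.println("Execution continues ...");
    }
}
```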
Many library methods throw exceptions. The program below gives an example that handles an **InputMismatchException** when reading an input.

```
Enter an integer: 3.5
Try again. (Incorrect input: an integer is required)
Enter an integer: 4
The number entered is 4
```
When executing **input.nextInt()** (line 13), an **InputMismatchException** occurs if the input entered is not an integer. Suppose **3.5** is entered. An **InputMismatchException** occurs and the control is transferred to the **catch** block. The statements in the **catch** block are now executed. The statement **input.nextLine()** in line 22 discards the current input line so that the user can enter a new line of input. The variable **continueInput** controls the loop. Its initial value is **true** (line 8), and it is changed to **false** (line 18) when a valid input is received. Once a valid input is received, there is no need to continue the input. | paulike |
1,888,148 | Top 17 Fast-Growing Github Repo of 2024 | Ehy Everybody 👋 It’s Antonio, CEO & Founder at Litlyx. I come back to you with a... | 0 | 2024-06-14T08:36:23 | https://dev.to/litlyx/top-17-fast-growing-github-repo-of-2024-cm7 | webdev, javascript, programming, tutorial | ## Ehy Everybody 👋
It’s **Antonio**, CEO & Founder at [Litlyx](https://litlyx.com).
I come back to you with a curated **Awesome List of resources** that you can find interesting.
Today Subject is...
```bash
Top 17 Fastest Growing GitHub Repositories of 2024
```
Share some **Love** & leave a **Star** on our **Open-Source** [repo](https://github.com/Litlyx/litlyx) on git if you like it!
## Let’s Dive in!
[](https://awesome.re)
---
# Top 17 Fastest Growing GitHub Repositories of 2024
Here is a curated list of the fastest growing repositories on GitHub for 2024, based on star count and community engagement.
1. [OpenAI ChatGPT](https://github.com/openai/chatgpt)
- **Description**: OpenAI's latest iteration of ChatGPT with enhanced capabilities.
-  
2. [Next.js](https://github.com/vercel/next.js)
- **Description**: The React framework for production.
-  
3. [Deno](https://github.com/denoland/deno)
- **Description**: A secure runtime for JavaScript and TypeScript.
-  
4. [Svelte](https://github.com/sveltejs/svelte)
- **Description**: Cybernetically enhanced web apps.
-  
5. [Rust](https://github.com/rust-lang/rust)
- **Description**: Empowering everyone to build reliable and efficient software.
-  
6. [Vue.js](https://github.com/vuejs/vue)
- **Description**: The Progressive JavaScript Framework.
-  
7. [PyTorch](https://github.com/pytorch/pytorch)
- **Description**: Tensors and Dynamic neural networks in Python with strong GPU acceleration.
-  
8. [TypeScript](https://github.com/microsoft/TypeScript)
- **Description**: TypeScript is a superset of JavaScript that compiles to clean JavaScript output.
-  
9. [Kubernetes](https://github.com/kubernetes/kubernetes)
- **Description**: Production-Grade Container Scheduling and Management.
-  
10. [TensorFlow](https://github.com/tensorflow/tensorflow)
- **Description**: An open-source machine learning framework for everyone.
-  
11. [React](https://github.com/facebook/react)
- **Description**: A declarative, efficient, and flexible JavaScript library for building user interfaces.
-  
12. [Flutter](https://github.com/flutter/flutter)
- **Description**: Flutter makes it easy and fast to build beautiful apps for mobile and beyond.
-  
13. [Redis](https://github.com/redis/redis)
- **Description**: An in-memory database that persists on disk.
-  
14. [Electron](https://github.com/electron/electron)
- **Description**: Build cross-platform desktop apps with JavaScript, HTML, and CSS.
-  
15. [Home Assistant](https://github.com/home-assistant/core)
- **Description**: Open source home automation that puts local control and privacy first.
-  
16. [Vite](https://github.com/vitejs/vite)
- **Description**: Next generation frontend tooling. It's fast!
-  
17. [Ansible](https://github.com/ansible/ansible)
- **Description**: Ansible is a radically simple IT automation system.
-  
---
*I hope you like it!!*
Share some love in the comments below.
Author: Antonio, CEO & Founder at [Litlyx.com](https://litlyx.com)
| litlyx |
1,888,146 | Python Class Online: Elevate Your Coding Skills with Kodyfier in India | In today's tech-driven world, acquiring proficiency in programming languages is essential for... | 0 | 2024-06-14T08:34:06 | https://dev.to/riya45/python-class-online-elevate-your-coding-skills-with-kodyfier-in-india-1be0 | pythonclassonline, pythonclassesonline, pythonclassonlineinindia, kodyfier |

In today's tech-driven world, acquiring proficiency in programming languages is essential for anyone looking to thrive in the IT industry. One language that stands out for its versatility and ease of learning is Python. For those aspiring to master Python, Kodyfier, a premier IT software training institution in India, offers an exceptional "**[python class online](https://kodyfier.com/online-python-course.php)**" that caters to both beginners and experienced programmers.
Kodyfier's **online python classes** are designed with a comprehensive curriculum that covers all aspects of Python programming. From basic syntax and data structures to advanced topics like web development with Django and data analysis with Pandas, **[Kodyfier](https://kodyfier.com/)** ensures that students gain a well-rounded understanding of Python. This makes it an ideal choice for individuals aiming to enhance their coding skills or transition into a career in tech.
Kodyfier also provides a rich learning environment through its interactive online platform. Students can participate in live sessions, engage in discussions, and access a wealth of resources such as video tutorials, coding exercises, and project assignments. This hands-on approach not only reinforces theoretical knowledge but also enhances practical skills, preparing students to tackle real-world challenges.
One of the key advantages of enrolling in Kodyfier’s "**[python class online](https://kodyfier.com/online-python-course.php)**" is the flexibility it offers. The online format allows students to learn at their own pace, making it perfect for working professionals and students with busy schedules. The classes are conducted by experienced instructors who are experts in the field, ensuring that learners receive high-quality education and personalized attention.
Kodyfier’s "**python class online**" is an excellent opportunity for anyone looking to master Python and advance their career in IT. With its comprehensive curriculum, flexible learning options, and robust support services, Kodyfier stands out as a top choice for **[Python training in India](https://kodyfier.com/online-python-course.php)**. Embrace the future of programming and elevate your coding skills with Kodyfier today.
Moreover, Kodyfier’s commitment to student success extends beyond the classroom. The institution offers career support services including resume building, interview preparation, and job placement assistance. This holistic approach ensures that graduates are well-equipped to enter the job market with confidence and secure rewarding positions in the IT industry.
| riya45 |
1,888,145 | Docker for Dummies- Introduction to docker | Welcome to Docker for Dummies! This is going to be an idk-how-many-part series where I'll... | 27,767 | 2024-06-14T08:34:05 | https://dev.to/swikritit/docker-for-dummies-introduction-to-docker-5h67 | docker, containers, devops, linux | ### Welcome to Docker for Dummies!
This is going to be an `idk-how-many-part series` where I'll try to explain docker in as simple a way as possible. I'm also new to the world of containerization so the purpose of this series is to try to teach this technology while learning it myself.
If you are new to the tech industry or have already been a part of this world, you might've probably come across the term `Docker` now and again, but what exactly is this docker? Are we building a ship or something?
## The wonderful world of Docker
Imagine you made a delicious meal and want to share it with your partner but then you realize that you are in a LDR and it won't be possible due to the distance. But what if there was a way to pack your dish, along with all the ingredients and tools you used, into a box that keeps it fresh and ready to eat anywhere, anytime? Well, there isn't one for food but there's one for software that does something like that and that's? You guessed it right [`Docker`](https://www.docker.com/)
Docker is like a magical box that packages your application, along with everything it needs to run, into a neat, portable container. This container can run on any computer, anywhere, without any worries about compatibility or missing ingredients. Cool, right?
*In a more technical term* - Docker is a platform that uses containerization to allow developers to package applications along with all their dependencies into a container. This ensures that the application can run consistently across different computing environments.
## Why Should You Care About Docker?
There are a lot of reasons to use docker. I'll explain some below:
1. **Consistency**: Suppose your team is building an application and you build a feature in your local system and everything is working as expected. But then your team member tries to run the same feature in their local set-up and it crashes and you suddenly find yourself amidst a decade-old problem in software engineering

Docker ensures that your application works the same way
everywhere. No more "it works on my machine" problems but rather
"it works on my container" problem 🫠
2. **Isolated Environments**: If you are building 3 different applications that require 3 different versions of node, trying to set it up in your local machine, switching between them, and trying to make them work would be a hassle that I would only wish upon my enemies not that I have any enemies👀. But with docker, you can just switch between the containers and develop everything parallelly without much difficulty.
3. **Portability**: Whether you’re developing on your laptop or deploying to a cloud server, Docker containers can run anywhere.
4. **Efficiency**: Containers are lightweight and start up in seconds, making them perfect for development and deployment.
These are just a few of the benefits. Docker is a gift that keeps on giving.
## Virtual Machines vs Containers
It might be easy to confuse virtual machines with container but they are completely different technologies and solve different problems. Here's a picture to differentiate them

Containers use the host OS's kernel and share it among all containers, making them more lightweight and efficient. VMs emulate an entire physical machine, including its operating system (OS), making it possible to run multiple OS instances on a single physical machine.
## Getting Started with Docker
Ready to dip your toes into the Docker waters? Let’s start with a quick example that will get you up and running in no time.
**Step 1: Install Docker**
First things first, you'll need to install [Docker](https://docs.docker.com/desktop/). Head over to the Docker website and download Docker Desktop for your operating system (Windows, macOS, or Linux), then follow the installation instructions provided.
**Step 2: Run Your First Container**
Now, let's run your first container: the `hello-world` container. Open your terminal (Command Prompt on Windows, Terminal on macOS and Linux) and run the following command:
```bash
docker run hello-world
```
If everything is working fine, you should see something like this on your screen:
```bash
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c1ec31eb5944: Pull complete
Digest: sha256:d1b0b5888fbb59111dbf2b3ed698489c41046cb9d6d61743e37ef8d9f3dda06f
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
```
Let's break down what that command did: it pulled the [`hello-world`](https://hub.docker.com/_/hello-world) image from Docker Hub and ran it on your host machine. If the image is already present on your system, the command skips the pull and just runs the existing image.
So if you run the command again, it will reuse the previously pulled image.
```bash
docker run hello-world
```
## Images and Containers
Think of a container as a ready-to-eat meal that you can simply heat up and consume. An image, on the other hand, is the recipe or ingredients for that meal.
So just like how you need a recipe and ingredients to make a meal, you need an image and a container runtime (Docker engine) to create a container. The image provides all the necessary instructions and dependencies for the container to run, just like a recipe provides the steps and ingredients to make a meal.
In short, an image is like a blueprint or template, while a container is an instance of that blueprint or template.
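To see the blueprint/instance relationship in practice, you can create several containers from the same image (a sketch, assuming Docker is installed; the container names are arbitrary):

```shell
# Two independent containers, both created from the single hello-world image
docker run --name hello-a hello-world
docker run --name hello-b hello-world

# List both instances of the image, then clean them up
docker ps -a --filter "ancestor=hello-world"
docker rm hello-a hello-b
```

One recipe, two meals: the image is never modified, no matter how many containers you create from it.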
## Some useful commands
You can try out these commands and see what they output:
```bash
# lists all the running Docker containers on your system
docker ps
# lists all Docker containers on your system, including those that are currently running, as well as those that have stopped or exited.
docker ps -a
# lists all the Docker images stored on your system, showing details such as the repository name, tag, image ID, creation date, and size
docker images
```
You can check the [official documentation](https://docs.docker.com/reference/cli/docker/) for other commands and try them out.
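A few more everyday commands worth trying, sketched here with an Ubuntu container (assuming Docker is installed; the container name is arbitrary):

```shell
# Start a long-running container in the background (-d = detached)
docker run -d --name my-ubuntu ubuntu sleep 300

docker logs my-ubuntu    # show the container's output (empty here)
docker stop my-ubuntu    # stop the running container
docker rm my-ubuntu      # remove the stopped container
docker rmi hello-world   # remove an image you no longer need
```

Together with `docker ps` and `docker images`, these cover most of the day-to-day container lifecycle.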
This is it for this part. In the next part, we will write a Dockerfile for a small web app, and it should be exciting.
Until then, take care! Keep learning!
# Navigating the Real Estate Market in Allahabad

Navigating the real estate market in Allahabad can be a hard task, whether you are a first-time homebuyer or an experienced investor. With the dynamic nature of the market and the influx of modern infrastructure projects transforming the city, it’s crucial to stay updated with the latest trends and strategies. One of the most effective ways for real estate businesses to thrive in this competitive environment is through digital marketing. In this blog, we will explore how the best digital marketing agency in Allahabad can help real estate companies achieve significant growth.
## Understanding the Real Estate Market in Allahabad
Allahabad, now officially known as Prayagraj, is a city steeped in history and culture. With its strategic location at the confluence of the Ganges, Yamuna, and Saraswati rivers, the city has always been a hub for economic activities. In recent years, the real estate market in Allahabad has witnessed substantial growth, driven by the development of modern infrastructure projects and increased urbanization. This growth presents numerous opportunities for real estate businesses to expand their reach and attract potential buyers.
## The Role of Digital Marketing in Real Estate
Digital marketing has revolutionized the way businesses operate, and the real estate sector is no exception. With the majority of homebuyers and investors turning to online platforms for their property search, having a robust digital presence is imperative. Here are some key digital marketing strategies that the best digital marketing agency in Allahabad can implement to boost your real estate business:
### 1. Building a Strong Online Presence
Creating a professional and user-friendly website is the first step towards establishing a strong online presence. Your website should showcase your property listings, highlight key features, and provide essential information to potential buyers. Integrating high-quality images, virtual tours, and interactive maps can enhance the user experience and make your listings more appealing.
### 2. Search Engine Optimization (SEO)
SEO is crucial for improving your website’s visibility on search engines like Google. By optimizing your website with relevant keywords, meta tags, and high-quality content, you can attract more organic traffic. The best digital marketing agency in Allahabad can help you identify the most effective keywords related to the real estate market and ensure that your website ranks higher in search results.
### 3. Social Media Marketing
Social media platforms like Facebook, Instagram, and LinkedIn are powerful tools for reaching a broader audience. By creating engaging and informative posts, sharing property listings, and running targeted ad campaigns, you can connect with potential buyers and build a strong online community. Social media marketing allows you to showcase your properties in a visually appealing manner and engage with your audience in real-time.
### 4. Content Marketing
Content marketing involves creating valuable and relevant content that addresses the needs and interests of your target audience. Blog posts, articles, videos, and infographics about the real estate market in Allahabad can position you as an industry expert and attract potential buyers to your website. Topics such as “Top Neighborhoods to Invest in Allahabad” or “Tips for First-Time Homebuyers” can provide valuable insights and drive traffic to your site.
### 5. Email Marketing
Email marketing is an effective way to nurture leads and keep your audience informed about new property listings, market trends, and special offers. By sending personalized and targeted emails, you can build relationships with potential buyers and encourage repeat business. The best digital marketing agency in Allahabad can help you design and implement email marketing campaigns that resonate with your audience.
### 6. Pay-Per-Click (PPC) Advertising
PPC advertising allows you to reach potential buyers through paid ads on search engines and social media platforms. By bidding on relevant keywords and targeting specific demographics, you can drive highly targeted traffic to your website. PPC campaigns can deliver quick results and provide valuable insights into the performance of your ads.
## Conclusion
In the competitive real estate market of Allahabad, leveraging digital marketing strategies can give your business a significant edge. Partnering with the best digital marketing agency in Allahabad can help you implement these strategies effectively and achieve substantial growth. By building a strong online presence, optimizing your website for search engines, engaging with your audience on social media, and utilizing content and email marketing, you can attract more potential buyers and establish your brand as a leader in the real estate market. Embrace the power of digital marketing and watch your real estate business thrive in Allahabad.
# Test Plan vs Test Case: Key Differences
Software applications are becoming more complex, demanding a robust test process for all critical features due to technological advancements. In software testing, verification and validation are essential to ensure the proper functioning and performance of the software applications. Quality Analysts (QA) and software developers should have reasonable control and understanding of the testing process to have a robust test approach throughout the [Software Testing Life Cycle (STLC)](https://www.lambdatest.com/blog/software-testing-life-cycle/?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=blog).
In STLC, there are different phases, which have respective goals and deliverables. Among these, test planning and test case development are the two crucial phases. Comparing test plan vs test case, the test plan is a written document that details the test strategy, scope of testing, resources, and others. On the other hand, test cases are the comprehensive instructions or steps needed for testing and validation of particular features of software applications. They are often used interchangeably but hold significant differences that must be understood.
> **Accurately count the number of words in your text with our easy-to-use word count tool. Perfect for meeting [word count](https://www.lambdatest.com/free-online-tools/word-count?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=free_online_tools) requirements. Try it out now for free! **
This blog will guide you through the key differences between test plan vs test case. It will help you have a clear concept of the test plan and test case, which is essential in [software testing](https://www.lambdatest.com/learning-hub/software-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub).
## What is a Test Plan?
A [test plan](https://www.lambdatest.com/learning-hub/test-plan?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub) is a document that provides comprehensive information on the [test strategy](https://www.lambdatest.com/learning-hub/test-strategy?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub), testing scope, goal, and time required for software testing. It also includes details on the different elements of the test process like test items, features to be tested, assigned testing tasks, the level of tester independence, the test environment, test design techniques, entry and exit criteria, along with the rationale behind these choices and any identified risks that may necessitate contingency planning.
A QA manager or product manager creates it, detailing aspects of the testing project from a broader perspective. These include the schedule, scope, potential risks, staff responsibilities, and defect and bug reporting.
The test plan guides the determination of the effort required to validate the quality of the application under test. In a nutshell, it functions as the blueprint and enables the systematic execution of software testing activities, carefully overseen and managed by the test manager. Consequently, it offers clarity regarding the essential tests required to verify the proper functionality of the software.
In the next section of this blog on test plan vs test case, we will learn the purpose of the test plan before discussing some key points and their benefits.
> **Effortlessly convert [RGB to CMYK](https://www.lambdatest.com/free-online-tools/rgb-to-cmyk?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=free_online_tools) format with our free online tool. Achieve precise color matching for your projects. Fast, accurate, and easy to use. **
## Purpose of a Test Plan
The primary purpose of a test plan is to set out the scope, approach, and resources necessary for testing. It focuses on defining test objectives and deliverables, assigning tasks and responsibilities, detailing the test environment and configuration, and establishing the test schedule for streamlined and productive testing.
It outlines the objectives and scope of the software testing process and defines the strategies for addressing any risks or errors. Moreover, it helps evaluate whether the software application conforms to the anticipated quality standards before deployment.
In the below section of this blog on test plan vs test case, we will learn some key points to consider while working on a test plan.
## Key Points to Consider While Working on a Test Plan
Here are some key points to consider for a test plan, which will give you a good understanding while working on it.
* The test plan document guides the testing process during the development of the software application, directing the testing approach and outlining the testing practices to be followed.
* The test plan is a communication tool among project team members, testers, and stakeholders. It contains information shared among managers and stakeholders, including the human resource calendar, estimated budget, schedule, software and hardware requirements, and risks and contingencies.
* The test plan mentions the required tools under the “Hardware and Software Requirement” section. Before initiating the test process, the necessary tools, hardware, and software must be established to establish a test environment.
* The test plan ensures comprehensive coverage and testing of all product aspects. It is a significant advantage of planned testing processes compared to exploratory testing.
* The test plan is developed for specific modules or builds, providing a list of features to be tested. Thus, a test plan helps keep the testing process on track.
* The test plan documents what was tested for each release as a record-keeping tool. It includes the ‘Test Plan Identifier,’ which indicates the project name, test plan level, and module, release, or version number.
* The test plan lists the required human resources and breaks down testing tasks into activities.
* [Risk management](https://www.lambdatest.com/blog/how-to-incorporate-risk-management-strategy-in-testing/) is a challenging task that can be overlooked in a testing process executed without the guidance of a test plan. However, ‘Risk Management’ is an essential element of a test plan, measuring the overall testing risks and their probability of occurrence.
In the below section of this blog on test plan vs test case, we will learn the benefits of implementing a test plan.
> **Effortlessly compare and find differences between texts with our Online [Text Compare](https://www.lambdatest.com/free-online-tools/text-compare?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=free_online_tools) tool. Ideal for developers and writers, this String Diff utility simplifies your txt comparison tasks, ensuring accuracy and efficiency. **
## Advantages of a Test Plan
Test plans offer significant advantages, including:
* It outlines the test scope to help teams concentrate on testing particular features.
* It clarifies the tools and resources teams can assemble before commencing testing.
* It enhances transparency for organization leadership or users, offering them a more profound understanding of the testing process.
* It estimates the duration of testing, aiding the team in creating a schedule to track their progress.
* It specifies the role and responsibilities of each team member.
* It ensures the ultimate software product meets its requirements and attains the intended outcomes.
In the below section of this blog on test plan vs test case, we will learn some of the limitations of a test plan.
> **Easily convert Strings to JSON format with [String to JSON](https://www.lambdatest.com/free-online-tools/string-to-json?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=free_online_tools) converter. Quick, accurate, and user-friendly interface for developers and professionals.**
## Limitations of a Test Plan
Test planning has certain limitations that need to be understood so there is no issue with the test process. Some of them include the following:
* Creating a test plan requires considerable time and planning effort. This process is time-consuming and demands thorough planning to develop an effective test plan.
* Updating the test plan for every new or altered requirement can be challenging in an [Agile methodology](https://www.lambdatest.com/learning-hub/Agile-development-methodologies?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub).
* The test plan, a comprehensive document containing valuable information for conducting testing smoothly, may seem redundant in certain aspects. For instance, details like human resources and schedule, although integral to the test plan, are also incorporated into the overall project plan.
In the below section of this blog on test plan vs test case, we will learn about the components essential for building a test plan.
## Components of a Test Plan
A test plan is a project management document that includes essential components for effective project management. Some of the crucial components of the test plan are mentioned below:

* **Test objective:** This component of the test plan clearly outlines the objective for the test process. This includes information on different aspects of the software, such as performance, usability, compatibility, etc.
* **Test scope and approach:** This component of the test plan outlines what will be tested, how it will be tested, and the testing methods or techniques to be used.
* **[Test environment](https://www.lambdatest.com/blog/what-is-test-environment/?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=blog):** This component of the test plan specifies the environment the testing team will utilize, including the list of hardware and software to be tested. It also includes verifying software installations.
* **Test deliverables:** This component of the test plan details the documentation and artifacts to be generated during the testing process, clarifying necessary preparations.
* **[Test tools](https://www.lambdatest.com/learning-hub/test-tool?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub):** This component of the test plan clearly states the tools for testing, bug reporting, and other relevant activities.
* **Test exit parameters:** This component of the test plan specifies the endpoint for testing activities, outlining anticipated outcomes from [Quality Assurance (QA)](https://www.lambdatest.com/learning-hub/quality-assurance?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub) operations to serve as a benchmark for measuring actual results.
* **[Defect management](https://www.lambdatest.com/learning-hub/defect-management?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub):** This component of the test plan defines the bug reporting procedure, including recipients and required accompanying elements for each bug report. It may also specify whether bugs should be reported with screenshots, textual logs, or videos demonstrating their occurrence in the code.
* **Risk:** This component of the test plan identifies the potential risks and consequences, such as poor managerial skills, project deadline failures, or lack of cooperation.
* **Test approval:** This component of the test plan establishes a transparent approval process, ensuring agreement among stakeholders and project team members on testing goals and obtaining their sign-off.
Now that we have learned what a test plan is, its purpose, key points, and components, we will learn about the various steps to create a test plan in the below section of this blog on test plan vs test case.
> **Effortlessly pull phone numbers from text using LambdaTest’s [Phone Number Extractor](https://www.lambdatest.com/free-online-tools/phone-number-extractor?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=free_online_tools). Ideal for testing, data analysis, marketing, and more. Try this free tool today! **
## Steps to Create a Test Plan
While creating a test plan, you must ensure the inclusion of all required stakeholders involved in the software development project. QA testers are not solely responsible for creating test plans. This is because test plans must have comprehensive information and details about QA approaches and test processes gathered from respective stakeholders.
For example, developers should contribute technical insights regarding system architecture, software design, and coding standards to guide the testing approach. Additionally, input from business analysts and domain-specific experts is valuable for insights from the business perspective. Encourage collaborative effort across teams in the test planning process.

Test planning consists of seven key steps, as mentioned below:
1. **Research and analyze the software:** Before creating a test plan, you must critically evaluate the software to be developed and research the likely user demographic.
2. **Design a test strategy and objective:** Develop a test strategy that outlines testing objectives, methods for achieving goals, and overall testing costs. Identify the appropriate testing type for the software applications or features to ensure accurate evaluation.
3. **Outline test criteria:** Test criteria serve as standards for evaluating testing results. Two main methods for determining criteria are:
* **Entry criteria:** Identify standards for completing test phases, including development and [unit testing](https://www.lambdatest.com/learning-hub/unit-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub) completion, availability of necessary test data and environment, and signed-off requirements and test plans.
* **Exit criteria:** Establish standards for suspending testing, such as resolving critical defects, executing and passing all test cases, and meeting performance targets.
4. **Plan a test environment:** The test environment comprises the hardware and software used for testing. Before starting the testing process, you must identify the test tools the team may acquire. However, selecting the right [software testing tools](https://www.lambdatest.com/blog/software-testing-tools/?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=blog) might be challenging.
5. **Create a schedule:** Divide testing into tasks and estimate the time required for team members to complete each task in this section of the test plan.
6. **Identify deliverables:** Test deliverables include the documents created before, during, and after testing. Examples include test cases, scripts, results, summary reports, defect reports, [traceability matrix](https://www.lambdatest.com/learning-hub/requirements-traceability-matrix?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub), test environment, and user acceptance reports.
7. **Review and finalize:** In the final QA step, review and finalize the QA plan. Set key requirements and features to be tested, consider potential risks affecting the testing process, and ensure strategies to mitigate them are included in the test plan.
In the section below, we will learn the best practices for creating a test plan before delving into the key differences between test plan vs test case.
> **Effortlessly convert HEX colors to CMYK values with LambdaTest’s [HEX to CMYK](https://www.lambdatest.com/free-online-tools/hex-to-cmyk?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=free_online_tools) Converter Online. Perfect for designers and developers alike. **
## Best Practices to Create a Test Plan
Creating an effective test plan involves adhering to certain best practices. Here are some key recommendations:
* You must thoroughly understand the requirements of the software project and ensure each requirement maps to a corresponding test case.
* You must clearly define the test objectives of the testing efforts.
* You must outline the scope of testing by specifying which features and functionalities will undergo testing.
* You must document the anticipated test deliverables.
* You must define the test environment, providing detailed information on hardware, software, and network configurations.
* You must identify potential risks associated with the testing process and complete the project.
* You must develop a comprehensive testing schedule incorporating milestones and deadlines.
* You must ensure the testing schedule is both realistic and achievable.
Now that we have learned and understood everything about the test plan, we will learn about the test case in detail in the section of this blog on test plan vs test case.
## What is a Test Case?
A [test case](https://www.lambdatest.com/learning-hub/test-case?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub) is a set of actions required to test a specific functionality of a software application. This helps verify the test case alignment with software requirements and correct functioning. In addition, the test case identifies and fixes bugs and vulnerabilities before the final release.
The test case is written by QA testers and provides step-by-step instructions for each test iteration. It details the necessary inputs, actions, and expected responses to classify a feature as satisfactory. Test cases often include two variations: one with valid input data and another with invalid input data. The testing phase starts once the development team completes a software application feature or a set of features. A sequence or collection of test cases is referred to as a [test suite](https://www.lambdatest.com/learning-hub/test-suite?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub).
In the next section of this blog on test case vs test plan, we will learn the purpose of a test case before discussing some key points and their benefits.
> **Effortlessly convert CMYK values to HEX color codes with LambdaTest’s [CMYK to HEX](https://www.lambdatest.com/free-online-tools/cmyk-to-hex?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=free_online_tools) Converter Online. Quick, accurate, and easy to use for all your design needs **
## Purpose of a Test Case
The primary purpose of a test case is to evaluate the performance of different features within software applications and ensure they adhere to relevant standards, guidelines, and user requirements. Writing a test case can also help detect errors or defects within the software applications. The test case specifies the setup and execution details for the test and the expected ideal result.
Objectives include the following:
* Validate specific features and functions of the software application.
* Guide testers in their day-to-day hands-on activities.
* Record a detailed set of steps taken, accessible for reference in case of a bug.
* Provide a blueprint for future software projects and testers, eliminating the need to start from scratch.
In the below section of this blog on test plan vs test case, we will learn some key points to consider while working on a test case.
> **[Selenium](https://www.lambdatest.com/selenium?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=webpage) is an open-source suite of tools and libraries to automate web browsers. Delve into its architecture, benefits, and more through our detailed tutorial on what is Selenium.**
## Key Points to Consider While Working on a Test Case
Here are some key points to consider for a test case, which will give you a good understanding while working on it.
* Test cases can be generated manually or through an automated approach.
* Manual test cases, written by testers, involve verifying and validating the software application’s functionality manually.
* Automated test cases, executed using [automation testing frameworks](https://www.lambdatest.com/blog/automation-testing-frameworks/?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=blog) and tools, adhere to the Software Requirement Specification (SRS).
* Test cases offer a structured method for confirming the functionality of software applications.
* Test cases operate independently, ensuring that the outcome of one test case does not affect another.
* Test cases can be executed in a controlled environment, guaranteeing the availability of all necessary resources without impacting the software production environment.
In the below section of this blog on test plan vs test case, we will learn some of the advantages of test cases.
## Advantages of a Test Case
Testing of software applications starts with a test case, which details the conditions and steps required to verify accuracy and functionality. It outlines the input values needed to trigger a feature of the software application and the corresponding expected output.
Some of the advantages of writing test cases include the following:
* It ensures comprehensive [test coverage](https://www.lambdatest.com/learning-hub/test-coverage?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub).
* It brings structure and thoroughness to the process of writing test cases.
* It minimizes maintenance and software support expenses.
* It keeps track of designed tests, executed tests, and the pass/fail ratio.
* It facilitates the reusability of test cases.
* It confirms that the software aligns with end-user requirements.
* It highlights any untested features that, if released into production, could disrupt the user experience.
* It enhances the overall quality of software and user experience.
In the below section of this blog on test plan vs test case, we will learn some of the limitations of a test case besides providing essential benefits.
> **Looking for an effective way to test on [Safari browsers](https://www.lambdatest.com/test-on-safari-browsers?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=webpage)? Skip the hassle of emulators and simulators. Experience authentic testing with LambdaTest’s real online Safari browsers. Start now! **
## Limitations of a Test Case
Besides the advantages, test cases also have certain limitations, which are important to know. Knowing these will help address the challenges and improve the test process.
* Test cases can fail to cover all test scenarios and user interactions. This can lead to undetected defects and poor [software quality](https://www.lambdatest.com/learning-hub/software-quality?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub).
* Test cases are predefined and static. Thus, when the software application changes, creating new test cases can be time-consuming.
* Test cases depend heavily on documentation, which is prone to error. Incomplete or inaccurate documentation can cause gaps in test coverage.
* Negative scenarios, such as error handling and boundary conditions, are often overlooked when writing test cases.
* Maintaining and updating a large number of test suites is highly time- and resource-consuming.
In the section below, we will learn about the various components of a test case that are essential to know when creating one.
## Components of a Test Case
Test cases should be formulated to accurately depict the features and functionality of the software application under test. QA engineers are advised to write test cases focusing on testing one unit or feature at a time. The language used in test case creation should be simple and easily understandable, using active rather than passive voice. Precision and consistency are crucial when naming elements.
The essential components of a test case include:
* **Prerequisites:** Any necessary conditions for the tester or QA engineer to conduct the test.
* **Test setup:** Identifies the requirements for correct test case execution, such as app version, operating system, date and time specifications, and security requirements.
* **Test ID:** A numeric or alphanumeric identifier QA engineers and testers use to group test cases into test suites.
* **Test name:** A title describing the functionality or feature verified by the test.
* **Test case description:** A detailed explanation of the function to be tested.
* **[Test scenario](https://www.lambdatest.com/learning-hub/test-scenario?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub):** A brief description of the actions to be executed by the testers.
* **Test objective:** A one- to two-sentence statement of what the test seeks to verify.
* **Test steps:** A comprehensive description of the sequential actions required to complete the test.
* **[Test data](https://www.lambdatest.com/learning-hub/test-data?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub):** The input data or values needed to execute the test case, such as usernames and passwords for testing email login.
* **Test parameters:** A parameter assigned to a specific test case.
* **Test references:** Reference links to user stories, design specifications, or requirements expected to be validated by the test.
* **Expected results:** An outline of how the system should respond to each test step.
* **Actual result:** The observed output or behavior during the execution of the test case.
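Many test management tools store these components as structured records. To make the mapping concrete, here is a minimal sketch in Python; the field names and values are illustrative and not tied to any particular tool:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TestCase:
    test_id: str
    name: str
    description: str
    steps: List[str]
    test_data: Dict[str, str]
    expected_result: str
    prerequisites: List[str] = field(default_factory=list)
    actual_result: str = "TBD"  # filled in during test execution

# Example record mirroring the sample login test case shown below
login_case = TestCase(
    test_id="WEBAPP-001",
    name="Login Functionality Verification",
    description="Verify login with valid user credentials.",
    steps=[
        "Open the web application",
        "Navigate to the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    test_data={"username": "user123", "password": "Passw0rd!"},
    expected_result="User is logged in and redirected to the homepage",
)

print(login_case.test_id, len(login_case.steps))  # WEBAPP-001 4
```

Structuring test cases this way makes them easy to group into suites, filter by ID, and report on.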
Below are the sample test cases covering the basic login functionality of a web application to demonstrate how to write a test case based on the components defined.
> **Selenium [WebDriver](https://www.lambdatest.com/learning-hub/webdriver?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub): Automate browser activities locally or remotely. Explore Selenium components, version 4, and its pivotal role in automated testing.**
## Sample Test Case for a Login Functionality (Valid)
**Prerequisites:** Web application installed, valid user credentials available.
**Test setup:** Web application version 2.0, Chrome browser version 90.0, OS: Windows 10.
**Test ID:** WEBAPP-001
**Test name:** Login Functionality Verification
**Test case description:** This test case verifies login functionality by entering valid user credentials and confirming successful login.
**Test scenario:** Validate the login functionality of the web application.
**Test objective:** To ensure users can log in to the web application using valid credentials.
**Test steps:**
* Open the web application.
* Navigate to the login page.
* Enter a valid username and password.
* Click the “Login” button.
**Test data:**
* Valid username: user123
* Valid password: Passw0rd!
**Test parameters:** N/A
**Test references:** User Stories #US123, Design Specification v1.2, Requirement Document
**Expected results:** The system should log in the user successfully and redirect them to the homepage with personalized content displayed.
**Actual result:** TBD (To be filled in during test execution)
> **Ever wondered what [Jenkins](https://www.lambdatest.com/blog/what-is-jenkins/?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=blog) is? Learn how it works and what a Jenkins pipeline consists of. Find more in our deep dive guide.**
## Sample Test Cases for a Login Functionality (Invalid)
**Prerequisites:** Web application installed, login page accessible
**Test setup:** Web application version 2.0, Firefox browser version 87.0, OS: macOS 10.15.
**Test ID:** LOGIN-001
**Test name:** Invalid Username or Password Handling
**Test scenario:** Verify that the system handles invalid usernames or passwords appropriately.
**Test case description:** This test case checks the system’s response to invalid username or password inputs during the login process.
**Test objective:** To ensure the system denies access for invalid login attempts and provides proper error messaging.
**Test steps:**
* Open the web application.
* Navigate to the login page.
* Enter an invalid username or password.
* Click the “Login” button.
**Test data:**
* Invalid username: invalidUser
* Invalid password: wrongPassword
**Test parameters:** N/A
**Test references:** Design Specification v1.2, Requirement Document
**Expected results:** The system should display an appropriate error message and prevent login for invalid credentials.
**Actual result:** TBD (To be filled in during test execution)
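The two manual cases above translate naturally into automated checks. Below is a minimal sketch using plain Python assertions against a stub login function; the stub and its return shape are illustrative stand-ins for the real application, not a LambdaTest or framework API:

```python
# Stub standing in for the application's login endpoint (illustrative only)
VALID_USERS = {"user123": "Passw0rd!"}

def login(username: str, password: str) -> dict:
    """Return a result dict similar to what a login API might produce."""
    if VALID_USERS.get(username) == password:
        return {"status": "ok", "redirect": "/home"}
    return {"status": "error", "message": "Invalid username or password"}

def test_login_valid():
    # Mirrors test case WEBAPP-001 (valid credentials)
    result = login("user123", "Passw0rd!")
    assert result["status"] == "ok"
    assert result["redirect"] == "/home"

def test_login_invalid():
    # Mirrors test case LOGIN-001 (invalid credentials)
    result = login("invalidUser", "wrongPassword")
    assert result["status"] == "error"
    assert "Invalid" in result["message"]

test_login_valid()
test_login_invalid()
print("both login test cases passed")
```

Note how each automated test maps one-to-one onto the test steps, test data, and expected results of the written test case.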
In the section below, we will learn the steps involved in creating a test case.
> **Deep dive into our [XPath in Selenium](https://www.lambdatest.com/blog/complete-guide-for-using-xpath-in-selenium-with-examples/?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=blog) tutorial and discover its types, techniques, and capture strategies for robust automated testing.**
## Steps to Create a Test Case
A test case is mainly written during the test planning phase of STLC, where the team shares SRS and business requirements, and the developer starts the [software development process](https://www.lambdatest.com/learning-hub/software-development-process?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=learning_hub).

Here are the basic steps to write a test case.
1. **Develop the test case description:** This step outlines the application’s response under specific conditions. For instance, a sign-in page test case description might state, “Examine fields when the user clicks the sign-in button.”
2. **Incorporate essential test data:** This step adds the data needed to validate the application, such as emails, passwords, and usernames.
3. **Execute test steps:** This step activates scenario actions using testing software to perform a test. Since test cases typically verify multiple data sets and circumstances, preparing all information beforehand can save time.
4. **Verify and record results:** This step verifies and documents various results to evaluate the application’s behavior. Creating a result section for each test aids in tracking how actual outcomes compare to optimal outcomes.
5. **Integrate pre-conditions and post-conditions:** This step validates the basic test version, including any required pre-conditions or post-conditions. Conditions may involve a specific browser, internet extension, captcha, or ad-blocker check.
Test case prioritization is significant in software testing. Testing the entire suite for every build becomes impractical as the number of features in a software application grows. According to the [Future of Quality Assurance](https://www.lambdatest.com/future-of-quality-assurance-survey#TestCasePrioritization?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=webpage), 52.5% of organizations prioritize their testing based on the criticality of the feature/functionality, and only 5.5% prioritize test cases based on past test runs and customer feedback.
> **Test your site on a real [Safari browser for Windows](https://www.lambdatest.com/safari-browser-for-windows?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=webpage) for accurate compatibility checks. Start optimizing your web experience now!**

Meanwhile, 21.5% of organizations conduct tests without any prioritization, indicating room to optimize test execution.
This highlights the need for more organizations to incorporate insights from past testing experiences and direct customer feedback into their test prioritization for faster results and more effective developer feedback.
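As a simple illustration of criticality-based prioritization, test cases can be ordered by feature criticality first and past failure history second. The scoring scheme below is an assumption for demonstration, not an industry standard:

```python
# Each test case carries a criticality score (1-5) and a count of past failures
cases = [
    {"id": "TC-1", "criticality": 3, "past_failures": 0},
    {"id": "TC-2", "criticality": 5, "past_failures": 2},
    {"id": "TC-3", "criticality": 5, "past_failures": 0},
]

# Highest criticality first; break ties with the most past failures
ordered = sorted(cases, key=lambda c: (-c["criticality"], -c["past_failures"]))
print([c["id"] for c in ordered])  # ['TC-2', 'TC-3', 'TC-1']
```

Feeding customer feedback into the `past_failures` signal (or a similar weight) is one way to act on the survey findings above.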
> **A complete [Selenium Python](https://www.lambdatest.com/blog/getting-started-with-selenium-python/?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=blog) tutorial to help you get started with automation testing using Python and Selenium, complete with examples and code.**
## Test Case Derivation From Test Plan
A test plan, once written, functions as a strategic framework to guide software testing. The next step involves deriving test cases from the test plan: test cases turn the test plan’s theoretical foundation into practical, executable checks. This relationship also highlights the difference between test plans and test cases.
Here are some of the key points on this which will help clarify this in more detail:
* Test cases are written to focus on the specific features of the software application that need to be tested.
* Each test case serves an objective of the test plan; executing it confirms that the software application meets the criteria the test plan lays out.
* Test cases derived from the test plan specify the test data required for execution.
Creating test cases based on the test plan is like turning a big picture into detailed steps. It’s a careful and planned process that requires a good understanding of the software’s workings. The test plan gives broad instructions, and test case derivation involves breaking those instructions into smaller, detailed tasks.
In the next section of this blog on test plan vs test case, we will learn the best practices to follow when creating a test case.
> **Selenium is an open-source suite of tools and libraries to automate web browsers. Delve into its architecture, benefits, and more through our detailed tutorial on what is [Selenium testing]**
## Best Practices to Create a Test Case
Writing test cases is one of the most significant tasks in software testing. You can follow the mentioned best practices, which can help you in [writing effective test cases](https://www.lambdatest.com/blog/how-to-write-test-cases-effectively/?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=blog):
* You must ensure each test case has a precise and distinct objective, clearly stating what is being tested and the desired outcome.
* You must include crucial scenarios in your test cases, covering various inputs, conditions, and edge cases.
* You must simplify test cases, concentrating on testing a specific aspect or scenario to maintain clarity.
* You must verify that the test suite meets all the requirements in the specification document.
* You must define preconditions that must be satisfied before executing a test case to ensure consistent results.
* You must draft test cases in a manner that is easy to understand, even for new team members unfamiliar with the application.
* You must break down test case steps into the smallest possible segments to prevent confusion during execution.
In addition to all the best practices mentioned above, running the test cases on real browsers, operating systems, and devices is always preferable to ensure seamless functioning of your websites and web apps across various browser and OS combinations.
> **Discover 57 top [automation testing tools]**
As discussed above, you can leverage [cloud testing](https://www.lambdatest.com/blog/cloud-testing-tutorial/?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=blog) platforms like LambdaTest, which allows you to perform manual and [automated testing](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=webpage) for web and mobile applications.
**Some of the ways in which you can leverage LambdaTest include:**
* Conduct live interactive [cross-browser testing](https://www.lambdatest.com/online-browser-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=webpage) in diverse environments.
* Conduct [mobile app testing](https://www.lambdatest.com/mobile-app-testing) on a [real device cloud](https://www.lambdatest.com/real-device-cloud).
* Achieve a 70% faster test execution with [HyperExecute](https://www.lambdatest.com/hyperexecute).
* Address [test flakiness](https://www.lambdatest.com/learning-hub/flaky-test), reduce job times, and receive quicker feedback on code changes.
* Implement smart [visual regression testing](https://www.lambdatest.com/learning-hub/visual-regression-testing) on the cloud.
* Utilize [LT Browser](https://www.lambdatest.com/lt-browser) for [responsive testing](https://www.lambdatest.com/learning-hub/responsive-testing) across 50+ pre-installed mobile, tablet, desktop, and laptop viewports.
* Capture automated full-page screenshots across multiple browsers with a single click.
* Test your locally hosted web and mobile apps using the LambdaTest tunnel.
* Perform online [accessibility testing](https://www.lambdatest.com/learning-hub/accessibility-testing).
* Conduct testing across multiple geographies with the [geolocation testing](https://www.lambdatest.com/geolocation-testing) feature.
* Benefit from 120+ third-party integrations with your preferred tools for CI/CD, project management, codeless automation, and more.
To get started with the LambdaTest platform, watch the video tutorial below.
{% youtube 86LQsMtBs5k %}
Now that we have discussed the test plan and test case in detail, let us outline the key differences between them below.
> **[Automate testing](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=webpage) with tools like LambdaTest to reduce debugging time and speed up time to market, enhancing test suite efficiency.**
## Key Differences Between Test Plan vs Test Case
The key differences between the test plan and test cases are mentioned below:

> **[Remote Test Lab](https://www.lambdatest.com/remote-test-lab?utm_source=devto&utm_medium=organic&utm_campaign=june_14&utm_term=vs&utm_content=webpage) enables efficient software testing on various devices remotely. Click to explore and enhance your development process now!**
## Conclusion
In this blog, we have discussed the key differences between a test plan and a test case. Both are crucial parts of software testing, and understanding these differences is vital for effective test planning and execution and for ensuring thorough, efficient software testing.
Test plans define the testing process for team members, clients, and stakeholders, offering insights into various project-related components, including the target market, user requirements, and necessary resources. A test case outlines the actions needed to verify a specific feature or functionality in software testing. It details the steps, data, prerequisites, and postconditions required for feature verification. Conversely, a test plan is a comprehensive document encompassing all future testing activities. Prepared at the project level, it outlines the work products to be tested, the testing approach, and the distribution of test types among testers.
In conclusion, to enhance communication within the testing team and with other essential stakeholders and streamline the testing process, preparing these two documents — test plan and test case — is critical.
| nazneenahmad | |
1,887,737 | Step-by-Step Guide: Creating a Virtual Machine in Azure | Virtual machines play a crucial role in modern cloud computing, offering flexible and scalable... | 0 | 2024-06-14T08:33:04 | https://dev.to/dera2024/step-by-step-guide-creating-a-virtual-machine-in-azure-153k | azure, cloudcomputing, virtualmachine | Virtual machines play a crucial role in modern cloud computing, offering flexible and scalable computing resources. Deploying and connecting a virtual machine in Azure involves several steps. Below is a complete guide to walk you through the process of setting up your very own virtual machine in Azure:
**Sign in to Azure Portal**:
- Go to the Azure portal (https://portal.azure.com/)
- Sign in with your Azure account credentials.
**Create a Resource Group**:
- Resource groups help organize and manage related Azure resources. You can create one specifically for your VM or use an existing one.
- Click on "Create a resource" and search for "Resource group".
- Click "Create" and provide a name, subscription, and region for the resource group.

**Deploy a Virtual Machine**:
- In the Azure portal, you can click on the search bar and search “virtual machine” or "Create a resource" and search for "Virtual machine".
- Click on "Virtual machine" from the results, then click "Create".
- Follow the prompts to configure your Virtual machine, including:
- Basics: Choose subscription, resource group, VM name, region, availability options, and image.
- Instance details: Select VM size, username, and authentication type.
- Disks: Configure OS disk type and size.
- Networking: Configure networking settings, including virtual network, subnet, public IP address (if needed), and network security group.
- Management: Configure monitoring, boot diagnostics, and other management options.
- Advanced: Set up extensions and tags (optional).

- Once all settings are configured, review and click "Review + create" to deploy the VM.

**Connect to the Virtual Machine**:
- Once the VM is deployed, you can connect to it using Remote Desktop Protocol (RDP) for Windows VMs or SSH for Linux VMs.
- In the Azure portal, navigate to your VM resource.
- Under the "Settings" section, click on "Connect".
- Follow the instructions provided to connect to your VM using RDP or SSH, depending on the VM's operating system.
- For Windows VMs, you'll need the username and password you specified during the deployment process.
- For Linux VMs, you'll use the SSH private key if you selected SSH authentication during deployment.
- Use the provided IP address or DNS name to connect to your VM from your local machine.
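For example, assuming a Linux VM reachable at a public IP (placeholder below), a connection from a local terminal looks like this; for a Windows VM, the portal's Connect blade also provides a downloadable RDP file:

```shell
# Linux VM: connect over SSH using the key generated at deployment
ssh -i ~/.ssh/id_rsa azureuser@<public-ip-address>

# Windows VM (from a Windows machine): open an RDP session directly
mstsc /v:<public-ip-address>
```

Replace `<public-ip-address>` with the IP or DNS name shown on the VM's overview page.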
**Manage and Configure the Virtual Machine**:
- Once connected, you can manage and configure your VM as needed.
- Install applications, configure settings, and perform any necessary maintenance tasks.
- Remember to follow best practices for security, such as keeping the operating system and software up to date, configuring firewalls, and implementing access controls.
_By following these steps, you should be able to deploy and connect a virtual machine in Azure successfully._ | dera2024 |
1,888,142 | The Convenience Revolution: How UrbanClap Clone Scripts Empower Busy People | In today's fast-paced world, time is a precious commodity. Between work, family, and social... | 0 | 2024-06-14T08:31:14 | https://dev.to/claire_hyland/the-convenience-revolution-how-urbanclap-clone-scripts-empower-busy-people-3g1c | In today's fast-paced world, time is a precious commodity. Between work, family, and social commitments, many people struggle to find the time or energy to tackle everyday tasks. This is where UrbanClap clone scripts come in, offering a revolutionary solution for both service providers and consumers.
## Understanding the Script: A Marketplace at Your Fingertips
An **[UrbanClap clone script](https://appkodes.com/urbanclap-clone/)** essentially creates an on-demand service marketplace accessible through a mobile app. Think of it as a one-stop shop for a variety of services, from home cleaning and repairs to beauty treatments and fitness classes. The script provides the framework for three key user groups.
**Customers:** Users can easily browse through a categorized list of services, compare prices and ratings of service providers, book appointments at their convenience, and make secure online payments. They can also track the progress of their service request and communicate directly with the provider through the app.
**Service Providers:** Skilled professionals can register on the platform and showcase their services. The script facilitates appointment scheduling, in-app communication with customers, and secure payment collection. It also allows providers to manage their profiles, build positive customer reviews, and potentially expand their client base.
**Admin:** The script empowers the platform administrator to manage all aspects of the marketplace. This includes adding or removing service categories, verifying service providers, monitoring transactions, and ensuring smooth operation of the app.
**Benefits for Busy People: Convenience and Efficiency**
**Unparalleled Convenience:** The ability to book a service provider with just a few clicks on a smartphone is a game-changer. Users can schedule appointments around their busy schedules, eliminating the need for lengthy phone calls or inconvenient in-person visits.
**Wide Range of Services:** UrbanClap clone scripts cater to a diverse range of needs. Whether you require a plumber to fix a leaky faucet, a personal trainer for a home workout session, or a hairstylist for a fresh cut, the platform provides a convenient way to find qualified professionals.
**Transparency and Trust:** The script typically allows users to view service provider profiles, including ratings and reviews from previous customers. This transparency helps users make informed decisions and build trust with the service providers they choose.
**Streamlined Communication:** The in-app communication feature fosters smooth interaction between users and service providers. Users can easily ask questions, clarify details, and confirm appointments, all within the app.
**Secure Payment Gateway:** UrbanClap clone scripts often integrate secure payment gateways, allowing users to pay for services conveniently and securely without the hassle of cash transactions.
**Benefits for Service Providers: Growth and Opportunity.** These scripts aren't just beneficial for consumers; they empower service providers as well.
**Increased Visibility:** By registering on the platform, service providers gain access to a wider customer base, potentially reaching new clients who wouldn't have found them otherwise.
**Flexible Work Schedule:** The app allows service providers to manage their own schedules and workloads. They can choose the appointments that fit their availability, offering greater flexibility than traditional employment.
**Simplified Payment Collection:** The script automates the payment collection process, ensuring service providers receive timely payments for their work.
**Reputation Building:** Positive customer reviews and ratings can significantly enhance a service provider's reputation and credibility, attracting new clients and fostering business growth.
**The People Market: A Perfect Fit**
The People market presents a fertile ground for UrbanClap clone scripts. The fast-paced lifestyle coupled with the growing demand for convenience makes such platforms highly desirable. Additionally, the high penetration of smartphones and the increasing adoption of online services further amplify the potential of these scripts.
**A Brighter Future:**
UrbanClap clone scripts represent the future of [on-demand services](https://appkodes.com/urbanclap-clone/). They empower users to reclaim their time and access a vast pool of skilled professionals. For service providers, they offer a platform to grow their businesses and reach new clients. As the American market continues to embrace convenience and efficiency, UrbanClap clone scripts are poised to play a transformative role in the way services are delivered and received.
| claire_hyland | |
1,888,115 | Introducing Verse.db: The Future of Databases is Here | Hi Dev Community! I’m thrilled to share an exciting development from my team at JEDI Studio. We’re... | 27,741 | 2024-06-14T08:30:25 | https://versedb.jedi-studio.com/blog | database, cli, ai, devops | Hi Dev Community!
I’m thrilled to share an exciting development from my team at JEDI Studio. We’re working on a major update for **verse.db** that’s set to redefine the database landscape.
## The Vision Behind Verse.db
Imagine a database that you can easily run from the command line interface (CLI), operating seamlessly on your localhost. With a simple command, you can spin up a powerful database server, providing the same ease as tools like express.js (but uniquely tailored for databases).
## Key Features of Verse.db
### Comprehensive Dashboard
At the heart of this update is a sophisticated dashboard that gives you complete control over your database. This isn't just any dashboard; it’s your command center for managing data and API configurations. Here’s what you can do:
- **Full API Control:** Manage every aspect of your API with ease.
- **User Management:** Add or remove users effortlessly.
- **IP Blocking:** Enhance security by blocking unwanted access.
- **Advanced Options:** Explore a plethora of configurations to suit your needs.
### AI Assistant Integration
We’re pushing the boundaries by integrating an AI assistant within the dashboard. This assistant will help you optimize queries, manage data more efficiently, and even predict potential issues before they arise.
### Multi-Adapter Support
Verse.db will support various data adapters such as JSON, SQL, and more. This flexibility ensures compatibility with different types of data and use cases, making it a versatile tool for developers.
### Seamless Local and Remote Operation
You can run your database locally and then upload it to a host, connecting it to a domain to use as an API for your data. This seamless transition from local to remote operation ensures that your workflow is smooth and efficient.
## A New Era for Databases
We’re not just updating verse.db; we’re revolutionizing it. Our goal is to make it the most secure, versatile, and user-friendly database solution available.
### Join the Conversation
We’d love to hear your thoughts and ideas. What features are you most excited about? How do you plan to use verse.db in your projects? Your feedback is crucial as we continue to develop and refine this tool.
Drop your comments, questions, and suggestions below. Let’s build the future of databases together!
With anticipation,
Marco5dev and the JEDI Studio Team
{% embed https://versedb.jedi-studio.com %}
| marco5dev |
1,888,140 | Kubernetes on Azure: Part 3 - An Introduction to AKS | Learn how to provision an Azure Kubernetes Cluster and deploy an application to the cluster | 0 | 2024-06-14T08:30:05 | https://dev.to/thwani47/kubernetes-on-azure-part-3-an-introduction-to-aks-3cle | kubernetes, docker, azure | ---
title: "Kubernetes on Azure: Part 3 - An Introduction to AKS"
published: true
description: Learn how to provision an Azure Kubernetes Cluster and deploy an application to the cluster
tags: kubernetes, docker, azure
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8rod19lqh9qafhf6o0ug.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-14 08:24 +0000
---
In the [previous post](https://dev.to/thwani47/kubernetes-on-azure-part-2-running-a-local-kubernetes-cluster-2fbl), we looked at how to set up a local Kubernetes cluster using Docker Desktop. In this post, we will look at how to set up a managed Kubernetes cluster on Azure using Azure Kubernetes Service (AKS) and deploy the [distributed-calculator](https://github.com/Thwani47/distributed-calculator) application.
## Table of Contents
- [What is AKS?](#what-is-aks)
- [Creating and configuring an AKS cluster](#creating-and-configuring-an-aks-cluster)
- [Deploy an application to the AKS cluster](#deploy-an-application-to-the-aks-cluster)
- [AKS Automatic](#aks-automatic)
- [Conclusion](#conclusion)
## What is AKS?
Azure Kubernetes Service (AKS) is a managed Kubernetes service provided by Microsoft Azure. AKS allows us to easily deploy and manage containerized applications. Since AKS is a managed service, it reduces the complexity of managing a Kubernetes cluster. Azure is responsible for managing the overhead that comes with managing a Kubernetes cluster. AKS is an ideal solution for applications that have high availability, scalability, and portability requirements.
With AKS, the operational overhead of managing a K8s cluster lies with Azure. Azure is responsible for managing the cluster's control plane. Azure is also responsible for managing cluster operations such as health monitoring and maintenance. The AKS control plane is created automatically at no cost to the developer. The developer is only responsible for provisioning and managing the worker nodes where the application workloads run.

Azure manages the control plane and exposes the Kubernetes API server so we can interact with the cluster and deploy the application workloads. Each AKS cluster has at least one node, an Azure Virtual Machine (VM) that runs the K8s node components (kube-proxy, kubelet, container-runtime). AKS allows us to group multiple nodes into node pools. Node pools allow us to segregate workloads based on resource requirements. For example, we can have a node pool for CPU-intensive workloads and another node pool for memory-intensive workloads.
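For instance, the workload segregation described above might look like this with the Azure CLI, using the cluster created later in this walkthrough; the pool name, VM size, and label are illustrative choices, not requirements:

```shell
# Add a second node pool sized for memory-intensive workloads
az aks nodepool add \
  --resource-group aks-demo-rg \
  --cluster-name aksDemoCluster \
  --name mempool \
  --node-count 2 \
  --node-vm-size Standard_E4s_v3 \
  --labels workload=memory
```

Workloads can then target a pool using a Kubernetes `nodeSelector` on the label applied above.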
## Creating and configuring an AKS cluster
To create an AKS cluster, we can use the Azure portal, the Azure CLI, or ARM templates. In this post, we will use the Azure CLI. Before we can create an AKS cluster, we need to install the Azure CLI and authenticate with Azure. To install the Azure CLI, follow the instructions [here](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli). Authentication is done with the `az login` command shown below.
You will need to have an existing Azure subscription to create an AKS cluster. If you do not have an Azure subscription, you can create a free account [here](https://azure.microsoft.com/en-us/free/).
### Login and create a resource group to contain the AKS cluster:
```bash
az login
az group create --name aks-demo-rg --location eastus
```
### Create an AKS cluster:
We create an AKS cluster with three nodes
```bash
az aks create --resource-group aks-demo-rg --name aksDemoCluster --node-count 3 --generate-ssh-keys
```
### Connect to the AKS cluster:
We need to configure `kubectl` to connect to our AKS cluster
```bash
az aks get-credentials --resource-group aks-demo-rg --name aksDemoCluster
# Merged "aksDemoCluster" as current context in <path-to-kubeconfig>
```
### Verify the connection to the AKS cluster:
We can verify the connection to the AKS cluster by running the following command:
```bash
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# aks-nodepool1-32415939-vmss000000 Ready agent 13m v1.28.9
# aks-nodepool1-32415939-vmss000001 Ready agent 13m v1.28.9
# aks-nodepool1-32415939-vmss000002 Ready agent 58s v1.28.9
```
## Deploy an application to the AKS cluster:
We can deploy the distributed calculator application to the AKS cluster by running the following command:
```bash
kubectl apply -f https://raw.githubusercontent.com/Thwani47/distributed-calculator/master/src/manifests/nestjs-divider-deployment.yaml
# deployment.apps/nestjs-divider-deployment created
# service/nestjs-divider created
kubectl apply -f https://raw.githubusercontent.com/Thwani47/distributed-calculator/master/src/manifests/go-subtractor-deployment.yaml
# deployment.apps/go-subtractor-deployment created
# service/go-subtractor created
kubectl apply -f https://raw.githubusercontent.com/Thwani47/distributed-calculator/master/src/manifests/csharp-adder-deployment.yaml
# deployment.apps/csharp-adder-deployment created
# service/csharp-adder created
kubectl apply -f https://raw.githubusercontent.com/Thwani47/distributed-calculator/master/src/manifests/flask-multiplier-deployment.yaml
# deployment.apps/flask-multiplier-deployment created
# service/flask-multiplier created
kubectl apply -f https://raw.githubusercontent.com/Thwani47/distributed-calculator/master/src/manifests/calculator-deployment.yaml
# deployment.apps/calculator-deployment created
# service/calculator-service created
```
We can run `kubectl get all` to view all the resources that have been created
```bash
kubectl get all
# NAME READY STATUS RESTARTS AGE
# pod/calculator-deployment-95956bf4c-rvjhj 1/1 Running 1 (37s ago) 7m7s
# pod/csharp-adder-deployment-79b878dc45-kbqns 1/1 Running 0 7m18s
# pod/flask-multiplier-deployment-67566f5985-dlrph 1/1 Running 0 7m13s
# pod/go-subtractor-deployment-7856c959f7-pz7kr 1/1 Running 0 7m27s
# pod/nestjs-divider-deployment-7b54767779-9hlq7 1/1 Running 0 8m7s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/calculator-service LoadBalancer 10.0.153.154 48.216.153.102 3000:31252/TCP 7m8s
# service/csharp-adder ClusterIP 10.0.131.41 <none> 8080/TCP 7m18s
# service/flask-multiplier ClusterIP 10.0.246.199 <none> 5000/TCP 7m13s
# service/go-subtractor ClusterIP 10.0.46.207 <none> 8000/TCP 7m28s
# service/nestjs-divider ClusterIP 10.0.4.218 <none> 3000/TCP 8m8s
# NAME READY UP-TO-DATE AVAILABLE AGE
# deployment.apps/calculator-deployment 1/1 1 1 7m8s
# deployment.apps/csharp-adder-deployment 1/1 1 1 7m19s
# deployment.apps/flask-multiplier-deployment 1/1 1 1 7m14s
# deployment.apps/go-subtractor-deployment 1/1 1 1 7m28s
# deployment.apps/nestjs-divider-deployment 1/1 1 1 8m8s
# NAME DESIRED CURRENT READY AGE
# replicaset.apps/calculator-deployment-95956bf4c 1 1 1 7m8s
# replicaset.apps/csharp-adder-deployment-79b878dc45 1 1 1 7m19s
# replicaset.apps/flask-multiplier-deployment-67566f5985 1 1 1 7m14s
# replicaset.apps/go-subtractor-deployment-7856c959f7 1 1 1 7m28s
# replicaset.apps/nestjs-divider-deployment-7b54767779 1 1 1 8m8s
```
We can open the browser to `<CALCULATOR-SERVICE-EXTERNAL-IP>:3000` to access our application. We should be able to see the calculator app and be able to interact with it.
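Rather than copying the IP out of the `kubectl get all` output above, we can also query it directly - `kubectl` supports JSONPath output, so the following (using the service name from the manifests) prints just the LoadBalancer address:

```bash
# Print only the external IP assigned to the calculator service
kubectl get service calculator-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```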
Now that we have deployed the application, we can scale it, either by adding more instances of the application or by adding more nodes to the cluster. We can increase the number of instances of the calculator UI by running the following command:
```bash
kubectl scale deployment calculator-deployment --replicas=3
# deployment.apps/calculator-deployment scaled
```
We can verify that the number of instances has been increased by running `kubectl get pods,deploy --selector app=calculator`:
```bash
kubectl get pods,deploy --selector app=calculator
# NAME READY STATUS RESTARTS AGE
# pod/calculator-deployment-95956bf4c-mqct7 1/1 Running 0 87s
# pod/calculator-deployment-95956bf4c-p2q8g 1/1 Running 0 87s
# pod/calculator-deployment-95956bf4c-rvjhj 1/1 Running 1 (15m ago) 21m
# NAME READY UP-TO-DATE AVAILABLE AGE
# deployment.apps/calculator-deployment 3/3 3 3 21m
```
We can add more nodes by running:
```bash
az aks scale --resource-group aks-demo-rg --name aksDemoCluster --node-count 5
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# aks-nodepool1-32415939-vmss000000 Ready agent 34m v1.28.9
# aks-nodepool1-32415939-vmss000001 Ready agent 34m v1.28.9
# aks-nodepool1-32415939-vmss000002 Ready agent 21m v1.28.9
# aks-nodepool1-32415939-vmss000003 Ready agent 28s v1.28.9
# aks-nodepool1-32415939-vmss000004 Ready agent 35s v1.28.9
```
Delete the resource group to avoid incurring costs.
```bash
az group delete --name aks-demo-rg --yes
```
## AKS Automatic
In May 2024, Microsoft introduced Azure Kubernetes Service (AKS) Automatic, which offers a simplified Kubernetes experience for developers. With AKS Automatic, Azure takes care of cluster setup, node management, scaling, and security, and offers preconfigured settings that follow AKS well-architected best practices. AKS Automatic gives developers easy access to production-ready clusters, allowing them to focus on building their applications and run them on Kubernetes with ease.
AKS Automatic comes with pre-configured features such as:
- a managed Prometheus service for metrics collection
- a managed Grafana service for visualization
- a managed Container Insights service for log collection
- automatic node management - AKS Automatic scales the number of nodes based on the application's resource requirements
- Azure RBAC for cluster access control

and many more.
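As a rough sketch of what creating an AKS Automatic cluster looks like from the CLI - note this is preview functionality, so the `--sku automatic` flag and the `aks-preview` extension requirement are assumptions that may change before general availability:

```bash
# Preview-only sketch: the CLI surface for AKS Automatic may change.
az extension add --name aks-preview
az aks create \
  --resource-group aks-demo-rg \
  --name aksAutomaticCluster \
  --sku automatic
```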
At the time of writing, AKS Automatic is in preview and not generally available.
## Conclusion
In this post, we looked at how to create an AKS cluster using the Azure CLI, deploy an application to it, and scale both the application and the cluster. We also looked at AKS Automatic, a new offering that simplifies the Kubernetes experience for developers by providing production-ready clusters with preconfigured settings that follow AKS well-architected best practices. | thwani47 |
1,888,141 | Exploring the Top Swimming Pool Companies in Dubai | Dubai, known for its opulence and innovation, boasts some of the most luxurious swimming pools in the... | 0 | 2024-06-14T08:29:55 | https://dev.to/hasnain_alam12345/exploring-the-top-swimming-pool-companies-in-dubai-hi6 | Dubai, known for its opulence and innovation, boasts some of the most luxurious swimming pools in the world. From residential villas to commercial complexes, the demand for exquisite swimming pool designs has surged, leading to the rise of several top-notch pool companies. Let's delve into some of the best swimming pool companies in Dubai, renowned for their craftsmanship, innovation, and commitment to excellence.
**1. Desert Grove Swimming Pools LLC**
Desert Grove Swimming Pools LLC stands out for its commitment to quality and customer satisfaction. With a team of skilled professionals and engineers, they specialize in crafting bespoke swimming pools tailored to the unique preferences of their clients. From contemporary infinity pools to classic designs, Desert Grove takes pride in delivering exceptional craftsmanship and innovative solutions. Their attention to detail and use of high-quality materials have earned them a stellar reputation in the industry.
**2. Crystal Lagoons UAE**
Crystal Lagoons UAE is synonymous with luxury and innovation in the realm of aquatic environments. Specializing in creating awe-inspiring crystal-clear lagoons, they have revolutionized the concept of swimming pools. Their patented technology allows for the construction and maintenance of large bodies of water with low energy consumption, making them environmentally sustainable. Whether it's a residential community or a commercial development, Crystal Lagoons UAE has set the benchmark for cutting-edge aquatic experiences in Dubai.
**3. Belhasa Projects LLC**
Belhasa Projects LLC is a prominent name in the construction and contracting industry in Dubai, with a notable portfolio in swimming pool construction. Known for their expertise in executing complex projects with precision, they have successfully delivered numerous iconic swimming pools across the city. From concept development to final execution, Belhasa Projects ensures meticulous planning and execution, adhering to the highest standards of quality and safety.
**4. Compass Pools Middle East**
Compass Pools Middle East is renowned for its innovative designs and state-of-the-art technology in the realm of fiberglass swimming pools. With a focus on sustainability and durability, they offer a wide range of pool designs that blend seamlessly with the surrounding landscape. Whether it's a rooftop pool or an indoor oasis, Compass Pools Middle East provides customizable solutions to suit every need. Their commitment to excellence and innovation has positioned them as one of the leading fiberglass pool companies in Dubai.
**5. [Four Seasons Pool Gardens Landscaping - Interior Designing - Swimming Pools Construction in Dubai](https://poolsgardensuae.com/)**
Four Seasons Pool Gardens Landscaping is synonymous with creativity and expertise in creating luxurious swimming pools and outdoor living spaces. With a team of experienced designers and engineers, they specialize in turning visions into reality, crafting breathtaking poolscapes that exude elegance and sophistication. From concept development to landscaping and lighting, Four Seasons Pool Gardens Landscaping offers comprehensive solutions tailored to the preferences of their discerning clientele.
In conclusion, Dubai's pool companies represent the pinnacle of innovation and luxury in the realm of swimming pool design and construction. Whether it's a residential villa, a five-star resort, or a commercial complex, these companies excel in delivering exceptional craftsmanship, cutting-edge technology, and unparalleled customer service. With their commitment to quality and innovation, they continue to elevate the standard of swimming pool design in Dubai, creating aquatic environments that are as stunning as they are functional.
| hasnain_alam12345 | |
1,888,123 | How to use has() Property in CSS | As a UI developer, you're always on the hunt for tools and techniques to make your styling more... | 27,759 | 2024-06-14T08:29:16 | https://dev.to/nnnirajn/exploring-the-has-property-in-css-a-guide-for-ui-developers-3obb | css, ui, design, beginners | As a UI developer, you're always on the hunt for tools and techniques to make your styling more efficient and intuitive. The `has()` pseudo-class in CSS is a game-changer, providing new ways to enhance your styles with conditional logic. You might have heard about it or maybe even dabbled with some code snippets, but how does it really work? And how can you use it to improve your workflow?
In this comprehensive guide, we’ll dive deep into the `has()` pseudo-class, exploring its functionality, use cases, and potential pitfalls. Whether you're new to CSS or a seasoned developer, there's something for everyone. Ready to level up your CSS skills? Let's get started!
### Understanding the Basics of `has()`
The `has()` pseudo-class is a powerful addition to CSS selectors, allowing you to apply styles based on the presence of an element within another element. Think of it as a way to check if a parent element contains a specific child element and style it accordingly. This pseudo-class can help you make more dynamic styles without relying heavily on JavaScript or adding unnecessary classes to your HTML.
#### Syntax and Basic Usage
The basic syntax of the `has()` pseudo-class is quite simple:
Example
```css
element:has(selector) {
/* Styles go here */
}
```
Here's a straightforward example to illustrate:
Example
```css
div:has(p) {
border: 2px solid blue;
}
```
In this example, any div that contains a `p` element will have a blue border. The `has()` pseudo-class works by evaluating the presence of the specified selector within the parent element.
#### Compatibility
It's important to note that browser support for `has()` is relatively recent: it works in Chrome 105+, Edge 105+, Safari 15.4+, and Firefox 121+, but older browsers do not support it. Check browser compatibility and consider using feature detection to ensure your styles degrade gracefully.
### Practical Use Cases for has()
Now that we understand the basics, let’s explore some practical scenarios where the `has()` pseudo-class can make your life as a UI developer easier.
#### 1. Styling Based on Content
One common use case for the `has()` pseudo-class is styling elements based on their content. For example, you might want to style a `div` differently if it contains a specific type of content, such as an image or a button.
Example
```css
div:has(img) {
padding: 20px;
background-color: #f7f7f7;
}
```
In this example, any `div` containing an `img` element will have additional padding and a lighter background color. This can be especially useful for content management systems where content is dynamically generated.
#### 2. Form Validation
Form validation is another area where the `has()` pseudo-class can be incredibly handy. You can style form elements based on whether they contain specific types of input fields.
Example
```css
form:has(input[type="submit"]) {
border: 2px solid green;
}
```
This CSS rule applies a green border to forms that contain a submit button, providing a visual cue for users to indicate actionable forms.
#### 3. Navigation Menus
The `has()` pseudo-class is also great for styling navigation menus. For example, you might want to highlight menu items that have submenus.
Example
```css
li:has(ul) {
font-weight: bold;
color: #333;
}
```
This rule makes parent menu items bold and changes their color if they contain a submenu, giving users a clear indication of expandable options.
#### 4. Conditional Styling with Multiple Selectors
You can also use the `has()` pseudo-class with multiple selectors to apply conditional styling based on a combination of elements.
Example
```css
section:has(h2):has(p) {
margin: 40px;
background-color: #eef;
}
```
In this scenario, any `section` that contains both an `h2` and a `p` element will have a specific margin and background color. This is particularly useful for structuring complex layouts where different content blocks require different styling.
### Advanced Techniques with `has()`
While the examples above showcase straightforward uses of the `has()` pseudo-class, there are more advanced techniques that can further streamline your CSS.
#### 1. Combining `has()` with Other Pseudo-classes
One powerful feature is the ability to combine `has()` with other pseudo-classes like `:not()`, `:nth-child()`, and more to create more intricate styles.
Example
```css
div:has(img):not(:has(.thumbnail)) {
border: 2px solid red;
}
```
This rule applies a red border to any `div` that contains an image but does not contain an element with the class `thumbnail`.
#### 2. Styling Based on User Interaction
You can also use `has()` to conditionally apply styles based on user interactions, such as focusing on an input field within a form.
Example
```css
form:has(input:focus) {
outline: 2px dashed #00f;
}
```
When any input field within a form gains focus, the entire form gets an outline, helping users easily see the active form section.
#### 3. Accessibility Improvements
The `has()` pseudo-class can improve accessibility by providing better visual cues for users with disabilities. For example, you can highlight form sections containing errors.
```css
fieldset:has(.error) {
border: 2px solid #f00;
background-color: #fee;
}
```
This rule visually emphasizes fieldsets containing elements with the `error` class, making it easier for users to identify problematic sections.
### Potential Pitfalls and How to Avoid Them
While the `has()` pseudo-class is a fantastic addition to CSS, it’s not without its challenges. Here are some common pitfalls and tips to avoid them.
#### 1. Performance Concerns
Because the `has()` pseudo-class requires the browser to evaluate the presence of child elements within a parent, it can be more performance-intensive than other selectors. This is especially true for complex selectors or large documents.
**Tip:** Use `has()` sparingly and only when it provides a significant benefit. Avoid using it in performance-critical sections of your application.
#### 2. Browser Compatibility
As mentioned earlier, older browsers do not support `has()`. Make sure to test your styles in different browsers and provide fallbacks where necessary.
**Tip:** Use feature detection and conditional logic in your CSS or JavaScript to gracefully degrade functionality or inform users when a feature is not available.
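One way to implement that feature detection in pure CSS is an `@supports selector()` query, which applies a rule only where the browser can parse the selector inside it. This sketch reuses the earlier form example; the fallback styling is just an illustration:

```css
/* Fallback for browsers that don't understand :has() */
form {
  border: 1px solid #ccc;
}

/* Applied only where :has() is supported */
@supports selector(:has(a)) {
  form:has(input[type="submit"]) {
    border: 2px solid green;
  }
}
```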
## Conclusion: Embracing the Future of CSS
The `has()` pseudo-class is a powerful tool in the arsenal of any UI developer, offering new ways to dynamically style elements based on their content and relationships. Understanding its potential and limitations can help you make informed decisions about when and how to use it.
By incorporating `has()` into your styling repertoire, you can create more intuitive and responsive designs, reduce your reliance on JavaScript, and write cleaner, more maintainable CSS. As browser support grows, the `has()` pseudo-class is poised to become an essential part of modern web development.
So, the next time you’re facing a complex styling challenge, consider whether the `has()` pseudo-class might be the solution you’ve been looking for. With a bit of creativity and careful consideration, you can leverage this powerful feature to enhance your UI designs and create more engaging user experiences.
> “Great design is not just about how it looks, but also about how it works.” - Steve Jobs
Let's continue pushing the boundaries of what CSS can do, making the web more beautiful and functional one line of code at a time. Happy coding! | nnnirajn |
1,888,139 | Vite: How to bundle / group chunks together | If you use a library like Svelte or Vue you might have noticed that Vite and Rollupjs create tons of... | 0 | 2024-06-14T08:28:51 | https://dev.to/greggcbs/vite-how-to-bundle-group-chunks-together-1dnl | vite, rollup, javascript, config | If you use a library like Svelte or Vue you might have noticed that Vite and Rollup create tons of tiny JS files - in my case causing the browser to download 72 JS files for one web page - some files being less than 1 kB.
There is a way to group chunks in your vite.config. It's manual, but it can help you.
**First a screenshot of the problem:**
_Every function or component in my codebase ends up being its own chunk, even commonly used ones_

I want to group some of the imports or modules in my codebase into one chunk to reduce the number of files / script tags Vite and Rollup create. Here is an example vite config showing how to achieve this:
```typescript
// vite.config.ts
// modules I want to group (by file path)
const group_chunks = [
"src/constants.ts",
"src/lib/utils/util_fetch.ts",
"src/lib/utils/util_date_get_time.ts",
"src/lib/utils/util_price_to_rands.ts",
"src/lib/utils/util_date_to_friendly_date.ts",
"flowbite-svelte/dist/typography/A.svelte",
"flowbite-svelte/dist/typography/Button.svelte",
"flowbite-svelte/dist/toast/Toast.svelte",
"svelte-hero-icons/dist/Icon.svelte",
"src/lib/components/logo.svelte",
"src/lib/components/form/input.svelte",
"src/lib/components/form/field.svelte",
"src/lib/components/clickToCopy.svelte",
"src/lib/components/header/header.svelte",
"src/lib/components/general/loader.svelte",
]
export default defineConfig({
// ...other config stuff
// how to group chunks
build: {
rollupOptions: {
output: {
manualChunks: (id) => {
const is_group_chunk = group_chunks.some(partialPath => id.includes(partialPath));
if (is_group_chunk) {
return "general" // this will create a chunk called general
}
},
},
},
},
});
```
This reduces all those chunks into one chunk called general:

## Result
In this case I have reduced 15 trips to the server down to 1. This is especially helpful on mobile, where my burger menu was taking a long time to become functional because its script was loading last - Rollup didn't know its importance. Now that it's in the general chunk, it loads faster.
## Lighthouse Score
In terms of Lighthouse, performance is around the same, although I am now getting a warning about unused JavaScript in my general file. I am happy to accept that warning considering the user experience has improved.
## Automate
You could put all common assets in a common folder and update the vite config to check whether a file is in that folder, automatically adding it to a grouped chunk. Minor tweak to the code.
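As a sketch of that automation (the `src/lib/common/` folder is an assumed convention - substitute whatever folder holds your shared code), the grouping logic boils down to a small pure function you can hand to `manualChunks`:

```javascript
// Any module that lives under the designated common folder is grouped
// into a single chunk named "common"; everything else is left to Rollup.
const COMMON_DIR = "src/lib/common/";

function chunkFor(id) {
  // id is the resolved module path Rollup passes to manualChunks
  if (id.includes(COMMON_DIR)) {
    return "common";
  }
  return undefined;
}

// In vite.config: build.rollupOptions.output.manualChunks = chunkFor
```

Because it is a plain function, you can unit-test the grouping rules without running a build.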
If you are really smart you could make a plugin that lets you add comments to your imports telling Vite which chunk to group each import into - this is how webpack used to work.
Example of how webpack allowed chunk naming in code:
```js
const Login = () => import(/* webpackChunkName: "login" */ '../views/LoginView.vue');
```
That's it,
I hope this helped and good luck | greggcbs |
1,888,138 | Exploring Mindfulness-Based Stress Reduction (MBSR) in Therapy: Cultivating Calmness and Presence with Heidi Kling | In the realm of therapeutic change, various approaches have emerged to address mental health concerns... | 0 | 2024-06-14T08:25:12 | https://dev.to/drheidikling/exploring-mindfulness-based-stress-reduction-mbsr-in-therapy-cultivating-calmness-and-presence-with-heidi-kling-4b02 | In the realm of therapeutic change, various approaches have emerged to address mental health concerns and promote well-being. One such approach gaining recognition is Mindfulness-Based Stress Reduction (MBSR), which integrates mindfulness meditation and mindful movement to alleviate stress, anxiety, and depression. In this blog, we delve into the principles and practices of MBSR, exploring how it cultivates calmness and presence to facilitate transformative change in therapy.
## Understanding the Foundations of MBSR
Mindfulness-Based Stress Reduction (MBSR) is rooted in the teachings of mindfulness, a practice that involves paying attention to the present moment with openness, curiosity, and acceptance. Developed by Dr. Jon Kabat-Zinn in the late 1970s, MBSR combines elements of mindfulness meditation, body awareness, and gentle yoga to help individuals develop greater awareness of their thoughts, emotions, and sensations. By cultivating mindfulness, MBSR aims to reduce the impact of stress on physical and mental well-being, fostering resilience and coping skills to navigate life's challenges with greater ease.
Furthermore, MBSR emphasizes the importance of non-judgmental awareness and self-compassion, encouraging individuals to approach their experiences with kindness and curiosity rather than criticism or avoidance. Through regular practice, individuals learn to observe their thoughts and emotions without getting caught up in them, allowing for greater clarity, perspective, and emotional regulation. By understanding the foundations of MBSR, therapists like Heidi Kling (psychologist) integrate mindfulness-based techniques into their practice to support clients in managing stress, reducing symptoms of anxiety and depression, and cultivating a greater sense of peace and balance in their lives.
## Practicing Mindful Meditation
Central to MBSR is the practice of mindful meditation, which involves intentionally directing attention to the present moment while adopting a non-judgmental attitude. During mindful meditation sessions, individuals typically sit in a comfortable position and focus their attention on their breath, bodily sensations, or a specific point of focus. As thoughts, emotions, and sensations arise, individuals are encouraged to observe them without attachment or reaction, allowing them to come and go like clouds passing through the sky.
Moreover, mindful meditation involves cultivating an attitude of kindness and acceptance toward oneself and others, fostering a sense of inner peace and well-being. By practicing mindful meditation regularly, individuals can develop greater awareness of their internal experiences, reducing reactivity to stressors and enhancing their ability to respond to challenges with clarity and equanimity. Psychologists such as Heidi Kling (psychologist) guide clients through mindful meditation exercises during therapy sessions, providing them with practical tools to manage stress, regulate emotions, and cultivate a deeper connection to themselves and the world around them.
## Exploring Mindful Movement
In addition to mindful meditation, MBSR incorporates mindful movement practices such as yoga and tai chi to promote physical and emotional well-being. Mindful movement involves performing gentle, deliberate movements with awareness and intention, focusing on the sensations in the body and the breath. Through practices like yoga, individuals can cultivate greater flexibility, strength, and balance while also fostering a deeper connection between the mind and body.
Furthermore, mindful movement offers a unique opportunity to integrate mindfulness into daily life activities, encouraging individuals to approach movement with presence and awareness. By incorporating mindful movement practices into therapy sessions, therapists including Heidi Kling (psychologist) help clients reconnect with their bodies, release tension, and cultivate a sense of grounding and stability. Additionally, mindful movement can serve as a valuable tool for managing chronic pain, improving body awareness, and promoting overall well-being.
## Applying MBSR in Therapy Settings
MBSR principles and practices can be applied effectively in various therapy settings to address a wide range of mental health concerns. Therapists can integrate mindfulness-based techniques into existing therapeutic approaches, such as cognitive-behavioral therapy (CBT), dialectical behavior therapy (DBT), and acceptance and commitment therapy (ACT), to enhance treatment outcomes. By incorporating MBSR into therapy sessions, psychologists like Heidi Kling (psychologist) help clients develop greater self-awareness, emotional regulation skills, and resilience in the face of stressors.
Therapists can use MBSR to support clients in navigating life transitions, managing chronic health conditions, and cultivating a greater sense of purpose and meaning in their lives. By guiding clients through mindfulness practices and facilitating group discussions, therapists create a supportive and collaborative environment where clients can explore their experiences, deepen their understanding of themselves, and cultivate inner resources for healing and growth.
## Benefits of MBSR in Therapy
The benefits of MBSR in therapy are wide-ranging and impactful, offering individuals practical tools and strategies for managing stress, reducing symptoms of anxiety and depression, and enhancing overall well-being. Research has shown that MBSR can lead to improvements in mood, attention, and immune function, as well as reductions in symptoms of chronic pain and insomnia. By practicing mindfulness regularly, individuals can develop greater resilience, compassion, and self-compassion, leading to more adaptive coping strategies and a greater sense of empowerment in their lives.
MBSR fosters a sense of connection and community among participants, providing a supportive environment for sharing experiences, gaining insights, and learning from one another. By participating in MBSR programs, individuals not only benefit from the guidance of skilled therapists but also from the collective wisdom and support of their peers. This sense of belonging and shared humanity can be profoundly healing and transformative, helping individuals feel less isolated and more connected to themselves and others.
## Challenges and Considerations
While MBSR offers numerous benefits, it is essential to recognize that mindfulness practice is not a panacea and may not be suitable for everyone. Some individuals may find mindfulness challenging or uncomfortable, particularly if they have a history of trauma or struggle with intense emotions. Therapists should approach the integration of MBSR into therapy with sensitivity and caution, assessing clients' readiness and willingness to engage in mindfulness practices and providing appropriate support and guidance as needed.
Therapists should be mindful of cultural considerations and adapt mindfulness practices to align with clients' beliefs, values, and cultural backgrounds. Mindfulness is not a one-size-fits-all approach, and therapists should be flexible and open-minded in their approach to incorporating mindfulness-based techniques into therapy. Additionally, therapists should stay informed about the latest research and developments in the field of mindfulness-based interventions to ensure that they are providing evidence-based and effective treatment to their clients.
Mindfulness-Based Stress Reduction (MBSR) offers a powerful and transformative approach to therapy, helping individuals cultivate calmness, presence, and resilience in the face of life's challenges. By integrating mindfulness-based techniques into therapy settings, therapists such as Heidi Kling (psychologist) support clients in developing greater self-awareness, emotional regulation skills, and overall well-being. Through practices such as mindful meditation, mindful movement, and group discussions, individuals can explore their experiences, gain insights, and cultivate inner resources for healing and growth. While challenges and considerations exist, the benefits of MBSR in therapy are vast, offering individuals practical tools and strategies for navigating life with greater ease, clarity, and compassion.
| drheidikling | |
1,845,070 | HubSpot enables 128,000 businesses with live chat that just works | HubSpot is a SaaS CRM platform with live tools for sales, marketing, customer service, content... | 0 | 2024-06-14T08:25:07 | https://ably.com/case-studies/hubspot | webdev, news, hubspot, ably | HubSpot is a SaaS CRM platform with live tools for sales, marketing, customer service, content management and operations. Its mission is to make it easy for all parts of an organisation to work together, regardless of location. The goal: transform a business so it can attract, engage, and delight customers.
**THE PROBLEM**
## Deliver live experiences that businesses, and their customers, can rely upon
HubSpot is an innovator, with a strategic focus on providing its customers with the tools and capabilities they need to accelerate growth across the customer lifecycle. With that in mind, HubSpot recognised that adopting live experiences as a core element of its platform would be crucial to enabling more efficient, slicker collaboration and communication – ultimately delivering business value for customers in the form of revenue growth.
That process started with customer live chat, using a third-party service to provide the realtime infrastructure, but with big plans to build out further live experiences across its platform. However, it quickly became clear that the third-party they chose could not offer the QoS guarantees, nor the stability required to support the uninterrupted experiences HubSpot commits to delivering for its 120,000+ customers.
Freiert explained: “With realtime, there is no middle ground. If messages are dropped or delayed, the whole experience is broken. So, for us, realtime has to just work and it has to scale with our platform, because when truly live experiences like chat suffer issues or go down it affects all our customers and seriously impacts our reputation, so the criticality of this just working is much more magnified.”
> _Ably makes realtime just work – without it, our product would literally stop working - but it is so much more than that. Ably is now a business-critical part of our organisation-wide infrastructure and a key innovation partner that our engineers really like working with. The support and communication are just outstanding and that is huge for us - we really see Ably as a partner in our growth._ **- Max Freiert, Product Group Lead**
**THE SOLUTION**
## A future-proof platform that supports business-critical live experiences
Initially, HubSpot explored the option of building realtime infrastructure in-house, but it quickly became clear that was not a viable option.
“Around 20% of our engineering team is dedicated to infrastructure,” Freiert said. “But we could see that building realtime infrastructure we could rely on would be too complex and time-consuming to provide value for our customers. Overall, the opportunity cost associated with taking so many engineers away from core product innovation was simply too high.”
Instead, Freiert and his team decided to look for a third-party vendor offering the QoS, performance and integrity guarantees HubSpot needed to deliver on their 120,000+ users’ expectations, as well as the flexibility to support new and diverse realtime capabilities over time – quickly identifying Ably as the ideal solution.
“Ably wasn’t just the right solution for now, but for our future needs too,” Freiert confirmed. “Its scale-ready realtime platform with simple, API-driven set up massively reduced our initial engineering overhead, delivers against all our QoS, performance and integrity demands and gives us the scope to quickly bring innovations on the customer experience to market.”

**THE RESULTS**
## 80,000 daily business-critical conversations that work, no matter what
Since choosing Ably, HubSpot has launched a range of live experiences that its customers can rely on to deliver business value - driving more efficient collaboration and providing tools that open up new revenue opportunities. Today, it relies on Ably to [power live chat](https://hubs.la/Q02wlnb30) between 128,000 businesses and their customers across 120 countries - and more than 500,000 business-critical conversations every month.
Ably has proven highly cost-effective too, delivering a 60% upfront CAPEX saving and a further $300,000 saving every year in reduced infrastructure and engineering costs, compared with a self-built solution.
“I’m very happy we found Ably,” said Freiert. “It immediately solved our issues around the user experience and scale. It supports our customers’ stringent expectations as well as future product development, because all our teams can now call on realtime infrastructure that can support thousands of services and scales limitlessly in step with new features and an expanding user base.”
Ably now provides HubSpot with a high performance, reliable, effortlessly scalable organisation-wide edge messaging platform on which Freiert and his team can easily build ever more live experiences for customers. For instance, it is crucial to HubSpot’s Conversations feature, which allows customers to work smarter by drawing diverse channels – email, live chat, bots, Facebook and web forms – into a single inbox.
“Ably makes realtime just work,” Freiert concluded. “Without it, our product would literally stop working - but it is so much more than that. Ably is now a business-critical part of our organisation-wide infrastructure and a key innovation partner that our engineers really like working with. The support and communication are just outstanding and that is huge for us - we really see Ably as a partner in our growth.”
| ablyblog |
1,888,137 | Kubernetes on Azure: Part 2 - Running a local Kubernetes cluster | Learn how to run a local Kubernetes cluster and deploy an application to your cluster | 0 | 2024-06-14T08:23:42 | https://dev.to/thwani47/kubernetes-on-azure-part-2-running-a-local-kubernetes-cluster-2fbl | kubernetes, docker, azure | ---
title: "Kubernetes on Azure: Part 2 - Running a local Kubernetes cluster"
published: true
description: Learn how to run a local Kubernetes cluster and deploy an application to your cluster
tags: kubernetes, docker, azure
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fw1smq3m3k4kil8tpb47.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-14 08:19 +0000
---
In the [previous](https://dev.to/thwani47/kubernetes-on-azure-part-1-an-introduction-to-kubernetes-1oc9) blog post, we went through the basics of Kubernetes and the Kubernetes architecture. In this blog post, we will explore how to run a local Kubernetes cluster on our machines.
## Table of Contents
- [Installing Kubernetes](#installing-kubernetes)
- [Deploying a Pod](#deploying-a-pod)
- [Replication Controllers and ReplicaSets](#replication-controllers-and-replicasets)
- [Creating a Deployment](#creating-a-deployment)
- [Creating a Service](#creating-a-service)
- [Conclusion](#conclusion)
## Installing Kubernetes
The easiest way to run a Kubernetes cluster is to install [Docker Desktop](https://docs.docker.com/desktop/) and enable Kubernetes. On Docker Desktop, we can enable Kubernetes by going to the Docker Desktop settings, clicking on the Kubernetes tab on the left side, and checking the box to enable Kubernetes. Click on the Apply & Restart button to apply the changes.
Another way to run a Kubernetes cluster is to use [Minikube](https://minikube.sigs.k8s.io/docs/start/). Minikube is a tool that allows you to run a single-node Kubernetes cluster on your local machine. Minikube can be installed on Windows, macOS, and Linux. To install Minikube, follow the instructions [here](https://minikube.sigs.k8s.io/docs/start/). In this blog post, we will be using Docker Desktop to run our Kubernetes cluster.
*On Windows and macOS, Docker Desktop comes with `kubectl`, a command-line tool we use to interact with the Kubernetes cluster. If you prefer installing `kubectl` on your own, instructions on how to do that can be found [here](https://kubernetes.io/docs/tasks/tools/).*
To verify that Kubernetes is running, we can run the following command:
```bash
kubectl cluster-info
```
This command will display the address of the Kubernetes control plane. The output should look something like this:
```bash
# Kubernetes control plane is running at https://kubernetes.docker.internal:6443
# CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
# To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
When running Kubernetes locally, the cluster is run on a single node, which means the control (master) node and worker node are the same.
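We can confirm this by listing the nodes in the cluster. The exact name, age, and version depend on your setup; on Docker Desktop the single node is typically named `docker-desktop` (the output below is illustrative):

```bash
kubectl get nodes
# NAME             STATUS   ROLES           AGE   VERSION
# docker-desktop   Ready    control-plane   5d    v1.29.1
```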
Let's see how we can deploy resources to our cluster. I have created a simple distributed calculator app that we will use to deploy resources to our Kubernetes cluster. The calculator app consists of the following components:
- **calculator** - A React application which is the front end of the app.
- **go-subtractor** - A Golang API that exposes an endpoint to subtract 2 numbers
- **csharp-adder** - A .NET API that exposes an endpoint to add 2 numbers
- **python-multiplier** - A Flask API that exposes an endpoint to multiply 2 numbers
- **nestjs-divider** - A NestJS API that exposes an endpoint to divide 2 numbers
The source code for the calculator app can be found [here](https://github.com/Thwani47/distributed-calculator).
## Deploying a Pod
In the previous post, we mentioned that Kubernetes **Pods** are the smallest unit of deployment. Kubernetes does not run containers directly, but groups one (or more) containers into a single atomic unit called a Pod. We can run a Pod in Kubernetes using the `kubectl run` command. The below command runs the `calculator` image in a Pod
```bash
kubectl run calculator --image=ghcr.io/thwani47/calculator:v1
# pod/calculator created
```
We can check that our Pod is running by using the `kubectl get` command as follows
```bash
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# calculator 0/1 CrashLoopBackOff 3 (25s ago) 62s
```
*The status of the Pod is CrashLoopBackOff because the **calculator** container needs the other containers to be running in order to work correctly, so it will keep crashing. We'll fix that a bit later*😀
We can also use `kubectl describe` to get more information about the Pod such as the image used, the status, and the events that have occurred
```bash
kubectl describe pod calculator
# Name: calculator
# Namespace: default
# Priority: 0
# Service Account: default
# Node: docker-desktop/192.168.65.3
# Start Time: Sat, 01 Jun 2024 11:23:21 +0200
# Labels: run=calculator
# Annotations: <none>
# Status: Running
# IP: 10.1.0.126
# ... other information
```
We can get the logs of the Pod using the `kubectl logs` command
```bash
kubectl logs calculator
# ... other logs
# 2024/05/31 20:06:44 [emerg] 1#1: host not found in upstream "csharp-adder" in /etc/nginx/conf.d/default.conf:10
# nginx: [emerg] host not found in upstream "csharp-adder" in /etc/nginx/conf.d/default.conf:10
```
We can delete our Pod using the `kubectl delete` command
```bash
kubectl delete pod calculator
# pod "calculator" deleted
```
We can also make use of **manifest** files, which are JSON or YAML files that let us take a declarative approach instead of the imperative approach we used above to deploy resources to our cluster. Below is an example of a manifest file that deploys the `calculator` Pod
```yaml
# calculator-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: calculator
labels:
app: calculator
spec:
containers:
- name: calculator
image: ghcr.io/thwani47/calculator:v1
```
We can deploy the Pod using the `kubectl apply` command
```bash
kubectl apply -f calculator-pod.yaml
# pod/calculator created
```
We can use the commands we used above to check the status of the Pod, get the logs, and delete the Pod.
In most cases, we will want to have multiple Pods running in our cluster, and we would like the assurance that the desired number of Pods are running at all times and if a Pod was to crash, it would be restarted. This is where **Replication Controllers** and **ReplicaSets** come in.
## Replication Controllers and ReplicaSets
Controllers are the brains behind Kubernetes. They ensure that the desired state of the cluster matches the actual state by creating, updating, and deleting resources in the cluster. The `Replication Controller` helps us run multiple Pods and ensures that the desired number of Pods are running at all times. The `Replication Controller` is an older technology that is being replaced by `ReplicaSets`. We can define a `ReplicationController` for our calculator Pod as follows
```yaml
# calculator-replication-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: calculator-replication-controller
labels:
app: calculator
spec:
replicas: 2 # the number of Pods we want to be always running
template:
metadata:
labels:
app: calculator
spec:
containers:
- name: calculator
image: ghcr.io/thwani47/calculator:v1
```
_We add the Pod definition in the `template` section of the Replication Controller. The `replicas` field specifies the number of Pods we want to be running at all times. In this case, we want 2 Pods of the calculator app running at all times._
We can deploy the `ReplicationController` using the `kubectl apply` command
```bash
kubectl apply -f calculator-replication-controller.yaml
# replicationcontroller/calculator-replication-controller created
```
We can check the status of the Replication Controller using the `kubectl get` command
```bash
kubectl get replicationcontroller
# NAME DESIRED CURRENT READY AGE
# calculator-replication-controller 2 2 0 41s
```
We can also view the Pods that are running using the `kubectl get` command
```bash
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# calculator-replication-controller-7n9fw 0/1 CrashLoopBackOff 4 (30s ago) 113s
# calculator-replication-controller-z4gbj 0/1 CrashLoopBackOff 4 (17s ago) 113s
```
The Pods controlled by a `ReplicationController` are named using the format: `<controller-name>-<random-string>`.
We can delete one Pod and a new Pod will be created to replace it
```bash
kubectl delete pod calculator-replication-controller-7n9fw
```
If we run the `kubectl get pods` command, we will see that a new Pod has been created to replace the one we deleted.
```bash
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# calculator-replication-controller-76hpd 0/1 CrashLoopBackOff 2 (18s ago) 35s
# calculator-replication-controller-z4gbj 0/1 CrashLoopBackOff 5 (74s ago) 4m12s
```
We can also use `ReplicaSets` to manage Pods. `ReplicaSets` are the next generation of `ReplicationControllers`. `ReplicaSets` are more powerful and flexible than `ReplicationControllers`. We can define a `ReplicaSet` for our calculator Pod as follows
```yaml
# calculator-replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: calculator-replicaset
labels:
app: calculator
spec:
replicas: 2
selector:
matchLabels:
app: calculator
template:
metadata:
labels:
app: calculator
spec:
containers:
- name: calculator
image: ghcr.io/thwani47/calculator:v1
```
The `selector` field helps the ReplicaSet identify the Pods that fall under it. It is a required field for the ReplicaSet but not for the Replication Controller. The ReplicaSet can also manage Pods that were created outside of it.
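Since the ReplicaSet selects Pods by the `app: calculator` label, we can use the same label selector ourselves to list exactly the Pods it manages:

```bash
# List only the Pods carrying the app=calculator label
kubectl get pods -l app=calculator

# The ReplicaSet uses this same selector internally; we can inspect it with
kubectl describe replicaset calculator-replicaset
```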
We can deploy the `ReplicaSet` using the `kubectl apply` command
```bash
kubectl apply -f calculator-replicaset.yaml
# replicaset.apps/calculator-replicaset created
```
We can check the status of the ReplicaSet using the `kubectl get` command
```bash
kubectl get replicaset
# NAME DESIRED CURRENT READY AGE
# calculator-replicaset 2 2 0 22s
```
We can also view the Pods that are running using the `kubectl get` command
```bash
kubectl get pods
# NAME READY STATUS RESTARTS AGE
# pod/calculator-replicaset-d4p57 0/1 CrashLoopBackOff 3 (20s ago) 61s
# pod/calculator-replicaset-x2wkb 0/1 CrashLoopBackOff 3 (20s ago) 61s
```
Kubernetes `Deployments` allow us to upgrade our application instances, roll back to a previous version, and scale our application instances. Deployments are the recommended way to manage Pods and ReplicaSets. In the next section, we will go through how to create a Deployment for our calculator.
## Creating a Deployment
A `Deployment` is a higher-level abstraction that manages ReplicaSets and Pods. Deployments allow us to define the desired state of our application and Kubernetes will ensure that the actual state matches the desired state. We can define a Deployment for our calculator Pod as follows
```yaml
# calculator-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: calculator-deployment
labels:
app: calculator
spec:
replicas: 2
selector:
matchLabels:
app: calculator
template:
metadata:
labels:
app: calculator
spec:
containers:
- name: calculator
image: ghcr.io/thwani47/calculator:v1
```
A Deployment automatically creates a ReplicaSet. It also creates a rollout history that allows us to roll back to a previous version of the application. We can deploy the Deployment using the `kubectl apply` command
```bash
kubectl apply -f calculator-deployment.yaml
# deployment.apps/calculator-deployment created
```
We can run `kubectl get all` to get all the resources that have been created
```bash
kubectl get all
# NAME READY STATUS RESTARTS AGE
# pod/calculator-deployment-6c6cbff8bb-tp4f6 0/1 Error 3 (28s ago) 44s
# pod/calculator-deployment-6c6cbff8bb-tqcfd 0/1 Error 3 (30s ago) 44s
# NAME READY UP-TO-DATE AVAILABLE AGE
# deployment.apps/calculator-deployment 0/2 2 0 44s
# NAME DESIRED CURRENT READY AGE
# replicaset.apps/calculator-deployment-6c6cbff8bb 2 2 0 44s
```
This created a Deployment, a ReplicaSet, and 2 Pods. We can check the status of the Deployment using the `kubectl get` command
```bash
kubectl get deployment calculator-deployment
```
We can also check the rollout history using the `kubectl rollout` command
```bash
kubectl rollout history deployment calculator-deployment
# REVISION CHANGE-CAUSE
# 1 <none>
```
In Kubernetes, each Pod gets its own internal IP address. A Kubernetes cluster has its own network with an address range, and Pods are assigned IP addresses within this range. Pods can communicate with each other using these IP addresses. The downside is that Pods are volatile and can be created and destroyed at any time, which means the IP address of a Pod can change at any time. To solve this problem, Kubernetes has a concept called `Services`. Services provide a stable IP address and DNS name for a set of Pods. In the next section, we will go through how to create a Service for our calculator app.
## Creating a Service
A `Service` is an abstraction that defines a logical set of Pods and a policy by which to access them. Services allow us to expose our application to the outside world. Kubernetes allows us to create 3 types of Services:
- **ClusterIP**: This is the default type of Service. It exposes the Service on a cluster-internal IP. This means that the Service is only accessible within the cluster. This Service spans across all the Pods assigned to it.
- **NodePort**: This type of Service exposes the Service on each Node's IP address at a static port. This means that the Service is accessible from outside the cluster using the Node's IP address and the NodePort. This Service spans across multiple nodes in the setting of a multi-node cluster.
- **LoadBalancer**: This type of Service exposes the Service externally using a cloud provider's load balancer. This Service creates a load balancer that can distribute traffic to the Pods assigned to it.
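For comparison, a `NodePort` version of a Service for the same Pods could look like the manifest below. The `nodePort` value is an example; if it is omitted, Kubernetes assigns one from the 30000-32767 range:

```yaml
# calculator-nodeport.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: calculator-nodeport
spec:
  type: NodePort
  selector:
    app: calculator
  ports:
    - protocol: TCP
      port: 3000       # port exposed inside the cluster
      targetPort: 80   # container port traffic is forwarded to
      nodePort: 30080  # example static port on each node (30000-32767)
```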
We can define a Service for our calculator app as follows
```yaml
# calculator-service.yaml
apiVersion: v1
kind: Service
metadata:
name: calculator-service
labels:
app: calculator
spec:
selector:
app: calculator
ports:
- protocol: TCP
port: 3000 # the port the Service will be exposed on
targetPort: 80 # the port the Service will forward traffic to on the Pods
type: LoadBalancer
```
We create a `LoadBalancer` Service because we want to expose our calculator app to the outside world. The Service will be exposed on port `3000` and will forward traffic to port `80` of the Pods. We can deploy the Service using the `kubectl apply` command
```bash
kubectl apply -f calculator-service.yaml
# service/calculator-service created
```
We can check the status of the Service using the `kubectl get` command
```bash
kubectl get service
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/calculator-service LoadBalancer 10.107.255.80 localhost 3000:32092/TCP 4s
```
The Service has been created and is accessible on `localhost:3000`. We can access the calculator app by navigating to `localhost:3000` in our browser.
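We can also verify from a terminal that the Service is reachable (this assumes the Pods behind the Service are healthy and serving on port 80):

```bash
# Request the front end through the LoadBalancer Service;
# a successful response confirms traffic is reaching the Pods
curl -I http://localhost:3000
```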
Now that we have covered the major Kubernetes objects we need to run our calculator app, let us run the whole thing to see it in action. In the [application source code](https://github.com/Thwani47/distributed-calculator), we have a `manifests` folder, which contains YAML files with the definitions of the Deployments and Services of the distributed calculator app. The manifest files define ClusterIP Services for the APIs and a LoadBalancer Service for the calculator app.
We can run the distributed calculator app using the following commands
```bash
kubectl apply -f https://raw.githubusercontent.com/Thwani47/distributed-calculator/master/src/manifests/nestjs-divider-deployment.yaml
# deployment.apps/nestjs-divider-deployment created
# service/nestjs-divider created
kubectl apply -f https://raw.githubusercontent.com/Thwani47/distributed-calculator/master/src/manifests/go-subtractor-deployment.yaml
# deployment.apps/go-subtractor-deployment created
# service/go-subtractor created
kubectl apply -f https://raw.githubusercontent.com/Thwani47/distributed-calculator/master/src/manifests/csharp-adder-deployment.yaml
# deployment.apps/csharp-adder-deployment created
# service/csharp-adder created
kubectl apply -f https://raw.githubusercontent.com/Thwani47/distributed-calculator/master/src/manifests/flask-multiplier-deployment.yaml
# deployment.apps/flask-multiplier-deployment created
# service/flask-multiplier created
kubectl apply -f https://raw.githubusercontent.com/Thwani47/distributed-calculator/master/src/manifests/calculator-deployment.yaml
# deployment.apps/calculator-deployment created
# service/calculator-service created
```
Now if we run `kubectl get all`, we will see all the resources that have been created.
```bash
kubectl get all
# NAME READY STATUS RESTARTS AGE
# pod/calculator-deployment-6dfddc9c56-rmsk4 1/1 Running 0 101s
# pod/csharp-adder-deployment-5945454df8-fhmf8 1/1 Running 0 2m25s
# pod/flask-multiplier-deployment-756d96c7fd-5b9sx 1/1 Running 0 2m4s
# pod/go-subtractor-deployment-5ff5d997db-cvvvt 1/1 Running 0 2m41s
# pod/nestjs-divider-deployment-c8dd85b56-bcqqc 1/1 Running 0 3m14s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# services/calculator-service LoadBalancer 10.109.13.40 localhost 3000:32277/TCP 101s
# services/csharp-adder ClusterIP 10.99.71.195 <none> 8080/TCP 2m25s
# services/flask-multiplier ClusterIP 10.111.134.130 <none> 5000/TCP 2m4s
# services/go-subtractor ClusterIP 10.109.25.228 <none> 8000/TCP 2m41s
# services/nestjs-divider ClusterIP 10.97.164.61 <none> 3000/TCP 3m14s
# NAME READY UP-TO-DATE AVAILABLE AGE
# deployment.apps/calculator-deployment 1/1 1 1 101s
# deployment.apps/csharp-adder-deployment 1/1 1 1 2m25s
# deployment.apps/flask-multiplier-deployment 1/1 1 1 2m4s
# deployment.apps/go-subtractor-deployment 1/1 1 1 2m41s
# deployment.apps/nestjs-divider-deployment 1/1 1 1 3m14s
# NAME DESIRED CURRENT READY AGE
# replicaset.apps/calculator-deployment-6dfddc9c56             1         1         1       101s
# replicaset.apps/csharp-adder-deployment-5945454df8           1         1         1       2m25s
# replicaset.apps/flask-multiplier-deployment-756d96c7fd       1         1         1       2m4s
# replicaset.apps/go-subtractor-deployment-5ff5d997db          1         1         1       2m41s
# replicaset.apps/nestjs-divider-deployment-c8dd85b56          1         1         1       3m14s
```
We can see that our `calculator-service` is the only service that has an `EXTERNAL-IP`, which is `localhost` in this case. We can access the calculator app by navigating to `localhost:3000` in our browser, and we should see the calculator app running:

## Conclusion
In this blog post, we covered how to run a local Kubernetes cluster on our machines. We went through how to deploy Pods, Replication Controllers, ReplicaSets, Deployments, and Services. We also ran a distributed calculator app on our local Kubernetes cluster. In the next blog post, we will be going through how to deploy the calculator app on an Azure Kubernetes cluster using the Azure Kubernetes Service (AKS).
| thwani47 |
1,888,135 | Exploring Existential Themes in Therapy: Navigating Life's Big Questions with Heidi Kling (PhD) | In the realm of therapy, there exists a profound and enduring fascination with existential themes.... | 0 | 2024-06-14T08:22:44 | https://dev.to/drheidikling/exploring-existential-themes-in-therapy-navigating-lifes-big-questions-with-heidi-kling-phd-167h | In the realm of therapy, there exists a profound and enduring fascination with existential themes. These themes delve into the very heart of what it means to be human—questions of purpose, meaning, freedom, and mortality. Within the therapeutic context, exploring existential themes offers clients a space to grapple with life's profound uncertainties and to find their own unique sense of meaning and fulfillment. In this blog, we will embark on a journey through the landscape of existential therapy, uncovering its principles, methods, and profound implications for personal growth and transformation.
## Embracing the Reality of Existential Angst
Existential angst, that pervasive sense of unease and disquiet in the face of life's fundamental uncertainties, is a central theme in existential therapy. Clients often come to therapy grappling with questions of identity, purpose, and the inevitability of death. In the therapeutic process, counselors guide clients to confront these existential truths head-on, fostering an acceptance of life's inherent uncertainties while empowering clients to find meaning and purpose amidst the chaos.
In therapy sessions, clients are encouraged to explore the roots of their existential angst, identifying underlying fears and anxieties that may be driving their sense of disquiet. Through open and honest dialogue, therapists like Heidi Kling (PhD) create a safe space for clients to express their deepest fears and concerns, validating their experiences while offering support and guidance. By acknowledging the reality of existential angst and embracing it as a natural aspect of the human condition, clients can begin to cultivate a sense of acceptance and peace, finding solace in the shared experience of being human.
### Cultivating Authenticity and Self-Expression
Authenticity lies at the core of existential therapy, emphasizing the importance of living in alignment with one's true self. In therapy, clients are encouraged to explore their values, beliefs, and desires, uncovering the layers of societal conditioning that may obscure their authentic selves. Through introspection and self-expression, clients learn to cultivate a deeper sense of authenticity, allowing them to lead more meaningful and fulfilling lives.
Psychologists such as Heidi Kling (PhD) employ a variety of techniques to help clients connect with their authentic selves, such as journaling, creative expression, and mindfulness practices. By fostering a nonjudgmental and supportive environment, therapists empower clients to explore their true selves without fear of criticism or rejection. As clients begin to align their thoughts, feelings, and actions with their authentic selves, they experience a profound sense of liberation and empowerment, finding greater joy and fulfillment in their daily lives.
## Confronting the Freedom-Responsibility Dilemma
Existential therapy grapples with the paradoxical nature of human freedom and responsibility. While freedom offers limitless possibilities for self-determination, it also entails the burden of responsibility for one's choices and actions. In therapy, clients confront this existential dilemma, exploring the ways in which their choices shape their lives and acknowledging the responsibility they hold in creating their own meaning and fulfillment.
Therapists including Heidi Kling (PhD) work collaboratively with clients to examine the consequences of their choices, both past and present, helping them to gain insight into the ways in which their actions have influenced their lives and relationships. Through this process of self-reflection and self-awareness, clients become more mindful of their decision-making processes, learning to make choices that are aligned with their values and goals. By embracing their freedom and taking responsibility for their lives, clients experience a newfound sense of agency and empowerment, reclaiming control over their destinies and shaping their futures with intention and purpose.
## Finding Meaning Amidst Suffering and Adversity
Suffering is an inevitable aspect of the human experience, yet existential therapy posits that even amidst the darkest moments, meaning can be found. Through the process of existential exploration, clients uncover the inherent meaning in their struggles, discovering resilience, strength, and growth in the face of adversity. By reframing suffering as an opportunity for growth and self-discovery, clients are empowered to transcend their circumstances and find deeper meaning in their lives.
In therapy sessions, clients are encouraged to explore the existential significance of their suffering, examining how their experiences have shaped their values, beliefs, and sense of self. Psychologists like Heidi Kling (PhD) provide a supportive and empathetic presence, helping clients to process their emotions and make meaning of their experiences in a way that feels authentic and empowering. Through this process of meaning-making, clients are able to find a sense of purpose and fulfillment, even in the midst of pain and hardship.
## Embracing the Impermanence of Life
Existential therapy confronts the transient nature of existence, emphasizing the impermanence of all things. Clients are encouraged to embrace the present moment fully, recognizing the fleeting nature of life and the importance of cherishing each moment. By cultivating mindfulness and presence, clients learn to let go of attachments to the past and future, finding solace and meaning in the richness of the present experience.
Therapists guide clients in practices such as mindfulness meditation, deep breathing exercises, and body awareness techniques, helping them to cultivate a sense of presence and connection with the here and now. Through these practices, clients learn to release their grip on the illusion of control, surrendering to the ebb and flow of life with grace and acceptance. By embracing the impermanence of life, clients are able to experience a profound sense of freedom and liberation, finding joy and fulfillment in each passing moment.
## Integrating Existential Insights into Daily Life
The ultimate goal of existential therapy is not merely insight but transformation—integrating existential insights into daily life to cultivate a deeper sense of meaning and purpose. Through ongoing self-reflection and practice, clients learn to live authentically, embracing their freedom and responsibility to create a life that is rich with meaning and fulfillment. As they navigate life's inevitable challenges and uncertainties, they do so with a newfound sense of resilience, purpose, and existential clarity.
In therapy sessions, therapists such as Heidi Kling (PhD) work with clients to develop practical strategies for integrating existential insights into their daily lives, such as setting meaningful goals, cultivating healthy relationships, and engaging in activities that align with their values and passions. Clients are encouraged to reflect on their experiences and identify opportunities for growth and self-expression, taking intentional action to create a life that is congruent with their deepest values and aspirations. By embodying the principles of existential therapy in their daily lives, clients are able to experience a profound sense of fulfillment and satisfaction, finding meaning and purpose in even the most ordinary moments.
## Embracing the Journey of Existential Exploration
Exploring existential themes in therapy offers a profound opportunity for personal growth and transformation. By confronting life's big questions with courage and introspection, clients embark on a journey of self-discovery, finding meaning, authenticity, and fulfillment amidst the complexities of human existence. Through the guidance of skilled therapists and the exploration of existential principles, individuals can navigate life's existential terrain with newfound clarity, purpose, and resilience. As we continue to explore the depths of the human experience, may we embrace the journey of existential exploration with open hearts and open minds, finding meaning and purpose in every step along the way.
| drheidikling | |
1,888,128 | Kubernetes on Azure: Part 1 - An Introduction to Kubernetes | Learn the core concepts of Kubernetes | 0 | 2024-06-14T08:18:40 | https://dev.to/thwani47/kubernetes-on-azure-part-1-an-introduction-to-kubernetes-1oc9 | kubernetes, docker, azure | ---
title: "Kubernetes on Azure: Part 1 - An Introduction to Kubernetes"
published: true
description: Learn the core concepts of Kubernetes
tags: kubernetes, docker, azure
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dhqr0qaf8glr6qd8ln7.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-14 08:16 +0000
---
In the next series of blog posts, we will be learning about Kubernetes. More specifically, we will be learning how to deploy and manage a Kubernetes cluster on Azure. In this first blog post, we will be going through the basics of Kubernetes, what it is, and why it is used.
## Table of Contents
- [What is Kubernetes?](#what-is-kubernetes)
- [Kubernetes Architecture](#kubernetes-architecture)
- [Conclusion](#conclusion)
## What is Kubernetes?
Kubernetes, also referred to as K8s (there are 8 letters between the K and the s), is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. K8s was originally developed by Google but is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes is used to manage the lifecycle of containerized applications, scaling them up or down, and managing the networking between containers.
Kubernetes offers a set of features that make it an ideal platform for deploying and managing containerized applications. Some of the features include (but are not limited to):
- **Automated rollouts and rollbacks**: K8s seamlessly rolls out and rolls back application updates and config changes, consistently monitoring the app's health to prevent downtime.
- **Self-healing**: Kubernetes automatically restarts containers that fail, replaces and reschedules containers when nodes die, and kills containers that don't respond to user-defined health checks. Kubernetes also prevents traffic from being routed to unhealthy containers.
- **Secret and config management**: Kubernetes allows you to store and manage sensitive information such as passwords, OAuth tokens, and SSH keys. Kubernetes also allows you to deploy and update secrets and application configurations without rebuilding your image.
- **Horizontal scaling**: Kubernetes can scale applications based on CPU usage or other custom metrics. Kubernetes can scale up or down the number of replicas of an application based on the load.
To understand how Kubernetes works, we first need to understand its architecture.
## Kubernetes Architecture
At a high level, a K8s cluster is a cluster of virtual or on-premise machines. Each machine in the cluster is referred to as a **node**. A K8s cluster has two types of nodes:
- One or more **master** or **control plane** nodes. The master node is responsible for managing the K8s cluster.
- One or more **worker** nodes. Worker nodes are responsible for running the applications and workloads.

The control plane runs the components that are responsible for managing the cluster. These components are:
- **API Server (kube-apiserver)**: The API Server is the entry point for all REST commands used to interact with the cluster. The API Server intercepts RESTful calls from users, administrators, and other components, and then validates and processes the requests.
- **Scheduler (kube-scheduler)**: The Scheduler is responsible for distributing workloads across multiple nodes. The Scheduler selects the optimal node for the workload based on predetermined requirements.
- **Controller Manager**: The Controller Manager runs watch-loop processes that are continuously running and comparing the current state of the cluster with the desired state. If the current state does not match the desired state, the Controller Manager takes corrective action to make the current state match the desired state.
- The **kube-controller-manager** runs controllers that are responsible for acting when nodes become unavailable, ensuring container pod counts are correct, creating endpoints, etc.
- The **cloud-controller-manager** runs controllers that interact with the underlying cloud provider's API. For example, the cloud controller manager is responsible for creating and deleting load balancers in the cloud provider.
- **etcd**: This is an open-source distributed key-value data store that is used to store cluster state. The API Server is the only control plane component that can communicate (read and write) with etcd. Any other component that's interested in the cluster state must go through the API Server.
The worker nodes provide a running environment for the applications. The worker nodes run the following components:
- **Kubelet**: The kubelet is an agent that runs on each node in the cluster. The kubelet is responsible for making sure that containers are running in a pod. Each node communicates with the control plane using the kubelet to inform the control plane about any changes in the node.
- **kube-proxy**: The kube-proxy is a network agent that runs on each node, responsible for dynamic updates and maintenance of all network rules on the node. It handles the routing of network traffic in a Kubernetes cluster.
- **Container Runtime**: K8s cannot directly run containers. It needs a container runtime to run the containers on the node where a pod is scheduled. K8s supports container runtimes such as Docker, containerd, CRI-O, etc.
A **pod** is the smallest deployable unit in Kubernetes. A pod is a group of one or more containers that share the same network and storage. Pods are the atomic unit on the Kubernetes platform. Pods are scheduled on worker nodes by the control plane.
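As a concrete illustration of the pod concept described above, a minimal pod manifest could look like the following (a sketch; the pod name, labels, and image are placeholder assumptions, not something from this post):

```yaml
# Minimal illustrative pod manifest — names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod       # arbitrary example name
  labels:
    app: hello
spec:
  containers:
    - name: hello       # a single container; all containers in the pod
      image: nginx:1.25 # share the pod's network and storage
      ports:
        - containerPort: 80
```

Applying a manifest like this with `kubectl apply -f pod.yaml` asks the control plane to schedule the pod onto a worker node, where the kubelet instructs the container runtime to start the container.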
## Conclusion
In this blog post, we went through the basics of Kubernetes and the Kubernetes architecture. In the next blog post, we will be going through how to run a local Kubernetes cluster on our machines. | thwani47 |
1,888,127 | What is Siacoin Mining? | Siacoins, the native cryptocurrency of the decentralized blockchain storage network Sia, can be... | 0 | 2024-06-14T08:15:27 | https://dev.to/lillywilson/what-is-siacoin-mining-3976 | cryptocurrency, asic, bitcoin | **[Siacoin](https://asicmarketplace.com/blog/top-siacoin-miners/)** is the native cryptocurrency of Sia, a decentralized blockchain storage network in which users rent out spare storage space in exchange for Siacoins. The Sia network cannot remain secure and reliable without mining. When you run specialized mining software, your computer uses its processing power to perform the calculations needed to keep the Sia network operational.
These calculations validate transactions and build new blocks on the blockchain, securing the entire network. Sia's proof-of-work (PoW) consensus algorithm requires miners to compete to solve mathematical puzzles.
To encourage participation, the network rewards the miner who solves the puzzle first with a set amount of Siacoins. Sia aims to distribute mining power across many users rather than concentrate it in a few hands. This protects the network from censorship and attacks, because no single party can control it.
Beyond its financial rewards, Siacoin mining strengthens the security and reliability of the Sia network as a whole.
| lillywilson |
1,888,126 | An Introduction to Cloud Computing | A beginner's guide to Cloud Computing | 0 | 2024-06-14T08:15:06 | https://dev.to/thwani47/an-introduction-to-cloud-computing-1p9 | azure, cloudcomputing | ---
title: An Introduction to Cloud Computing
published: true
description: A beginner's guide to Cloud Computing
tags: azure, cloudcomputing
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/twpl3k88v19alcbo6anx.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-14 08:07 +0000
---
In the last few years, cloud computing has become a popular choice for many organizations. It offers a wide range of benefits, including cost savings, scalability, and flexibility. In the next series of articles, we'll learn more about cloud computing and take a deep dive into the different services offered by cloud providers, focusing primarily on Microsoft Azure.
In this article, we'll define what cloud computing is and discuss the different types of cloud services available. We'll also explore the benefits of cloud computing and the different deployment models.
## Table of Contents
- [What is Cloud Computing?](#what-is-cloud-computing)
- [Benefits of Cloud Computing](#benefits-of-cloud-computing)
- [Deployment Models](#cloud-models)
- [Types of Cloud Services](#types-of-cloud-services)
## What is Cloud Computing?
**Cloud computing** is a way to deliver computing services over the internet. Cloud computing allows organizations to access computing resources such as servers, storage, databases, networking, software, and analytics over the internet. These resources are hosted in data centers that are managed by cloud providers. A cloud provider is a company that offers cloud services to businesses and individuals. Some of the popular cloud providers include Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Simply put, cloud computing is a way to rent computing resources from someone else's data center. When you are done utilizing the resources, you give them back. Customers use cloud computing resources on a **pay-as-you-go** basis. This means that customers only pay for the resources they use.
Building and hosting our applications on the cloud comes with a lot of benefits. Some of these benefits are listed below
## Benefits of Cloud Computing
### <u>Cost Savings</u>
When organizations think of their IT infrastructure models, they usually consider two types of expenditures:
- **Capital Expenditure (CapEx)**: This is the upfront cost to purchase resources such as servers, storage, and networking. CapEx forces the organization to make future estimations of their infrastructure needs at the beginning.
- If the organization overestimates, they end up with unused resources, which is a waste of money.
- If the organization underestimates, they end up with insufficient resources, which can lead to poor performance and unhappy customers.
- **Operational Expenditure (OpEx)**: This is the ongoing cost to run and maintain the infrastructure.
Since cloud computing allows customers to use resources on a pay-as-you-go basis, it eliminates the need for CapEx. This means that organizations can avoid the upfront cost of purchasing hardware and software. Instead, they can pay for what they use and scale up or down as needed. This results in cost savings for the organization. Organizations also don't have to pay for the physical infrastructure, the electricity, and the cooling required to run the infrastructure.
### <u>Uptime (Availability)</u>
Availability refers to organizations' services and products being available to customers when they need them.
Cloud providers offer **Service Level Agreements (SLAs)** that guarantee a certain level of uptime. This means that the cloud provider guarantees that the services will be available for a certain percentage of time. For example, a cloud provider might offer an SLA that guarantees 99.9% uptime. This means that the services will be available 99.9% of the time. This is a huge benefit for organizations that need their services to be available 24/7. Azure is a highly available cloud platform with availability guarantees for most of its services.
### <u>Scalability</u>
Scalability refers to the ability to adjust computing resources to meet demand. When the demand is high, the organization can add more computing power or more resources to meet demand, and when the demand is low, the organization can scale down to save costs. Cloud computing allows organizations to scale both vertically and horizontally.
- **Vertical Scaling**: This refers to adding more resources to a single server. For example, adding more RAM, CPU, or storage to a single server (**Scaling up**), or removing excess resources from a single server (**Scaling down**).
- **Horizontal Scaling**: This refers to adding more servers to the infrastructure (**Scaling out**), or removing unutilized servers from the infrastructure (**Scaling in**).
### <u>Reliability</u>
Reliability refers to the ability of the cloud provider to deliver the services as promised. Cloud providers have multiple data centers in different geographical locations. This means that if one data center goes down, the services can be moved to another data center. This ensures that the services are always available. Organizations have the confidence that their services can recover from failures and continue being available to their customers.
Organizations have the choice of setting up their infrastructure using different cloud models. These models can be chosen based on the organization's business needs. The main cloud models are listed below.
## Cloud Models
| <div style="width:200px">Deployment Model</div> | Functionality |
|------------------|----------------|
| Private Cloud | A private cloud is a cloud that is used by a single organization.<br/><br/> Private clouds can either be hosted in the organization's data center, an offsite data center, or via a 3rd party cloud provider|
| Public Cloud | A public cloud is built, controlled, and maintained by a 3rd party cloud provider. The cloud provider can offer resources such as servers, storage, and networking to multiple organizations.|
| Hybrid Cloud| A hybrid cloud is a computing environment that uses both public and private clouds. <br/><br/>A hybrid cloud can be used to supplement a private cloud with public cloud resources when demand increases. It can also be used to add a security layer over a private cloud.|
| Multi-Cloud | A multi-cloud is a cloud environment that uses multiple cloud providers. You can have a computing environment in which you use AWS resources and Azure resources. <br/><br/>This can be used to avoid vendor lock-in and to take advantage of the best features of different cloud providers.|
When organizations are setting up their services on the cloud, they can also choose the type of cloud resources they want to use. The main cloud resources are listed below.
## Types of Cloud Services
| <div style="width:200px">Cloud Service</div> | Functionality |
|------------------|----------------|
| Infrastructure as a Service (IaaS) | IaaS provides virtualized computing resources over the internet. Organizations can rent out a wide range of services such as virtual machines, storage, and networking<br/><br/>With IaaS, the cloud provider is responsible for maintaining the hardware, the network connectivity to the internet, and the physical security of the hardware. The customer is responsible for the operating system installed on the virtual machines, the configuration, the maintenance, and so on. |
| Platform as a Service (PaaS) | PaaS provides a platform that allows customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure.<br/><br/>With PaaS, the cloud provider provides ready-to-use platforms that contain most of the tools that developers use. The cloud service provider is responsible for maintaining the hardware, the network connectivity to the internet, and the physical security of the hardware. The cloud service provider is also responsible for the operating system, the development tools, and so on. The customer is only responsible for the services that run on these platforms|
| Software as a Service (SaaS) | SaaS provides fully developed software applications over the internet. Customers can access the software applications via a web browser or an app. Email services such as Outlook, cloud file storage solutions such as Dropbox, and workflow management solutions such as Trello are examples of SaaS applications. <br/><br/>With SaaS, the cloud provider is responsible for maintaining the provided application. The customer is only responsible for the data and the configuration of the application.|
As organizations continue to move their services to the cloud, it is crucial to understand the **shared responsibility model**. The shared responsibility model dictates that the cloud service provider is responsible for monitoring and responding to security threats to the cloud infrastructure, while the customer is responsible for securing the data and applications that they host on the cloud. This means that the customer is responsible for securing their data and applications, while the cloud provider is responsible for securing the infrastructure that the data and applications are hosted on.
The cloud service provider is responsible for the physical power, cooling, and network connectivity of the data center. The customer does not have physical access to the data center, so it would not make sense to have the customer responsible for that.
The customer is always responsible for the data and information that is stored in the cloud. The customer is also responsible for the accounts and identities of the people, services, and devices that have access to the data.
In the next article, we'll dive deeper into Azure reliability and learn about the different Azure regions and how to leverage availability zones to make our services highly available. | thwani47 |
1,888,125 | Why is UX Research Important During UI and UX Design Services? | In today's digital age, creating exceptional user experiences is more critical than ever. As a... | 0 | 2024-06-14T08:15:05 | https://dev.to/ishita_mathur/why-is-ux-research-important-during-ui-and-ux-design-services-ci9 | uiuxdesign, uiuxdesignagency, graphicdesign, graphicdesignservices | In today's digital age, creating exceptional user experiences is more critical than ever. As a leading UI/UX design agency, we understand that the key to delivering successful digital products lies in thorough UX research. In this article, we will delve into why UX research is an indispensable part of UI and UX design services and how it can elevate the quality of your digital products.
## Understanding the Role of UX Research
UX research involves a systematic approach to understanding users' behaviors, needs, and motivations through various observational and feedback collection methods. This research is pivotal in creating designs that are not only aesthetically pleasing but also functional and user-friendly. Without UX research, a [**creative agency**](https://www.designboxindia.com/) may miss critical insights that could lead to poor user experiences, ultimately affecting the product's success.
## Different Methods of UX Research
There are several methods employed in UX research, each providing unique insights into user behavior:
- User Interviews: One-on-one discussions with users to gain deep insights into their experiences and preferences.
- Surveys and Questionnaires: Collecting data from a larger audience to identify common trends and pain points.
- Usability Testing: Observing users as they interact with the product to identify usability issues.
- A/B Testing: Comparing two versions of a product to see which one performs better with users.
- Heatmaps: Visual representations showing where users click and scroll the most on a page.
By utilizing these methods, we gather comprehensive data that informs our design decisions.
## Enhancing User Satisfaction
The primary goal of UX research is to enhance user satisfaction. By understanding the target audience's preferences and pain points, designers can create interfaces that are intuitive and easy to navigate. This leads to higher user satisfaction and loyalty, which is crucial for the long-term success of any digital product.
## Identifying User Needs and Pain Points
Through various UX research methods such as user interviews, surveys, and usability testing, we can identify what users need and the challenges they face. These insights allow us to create solutions that address these pain points directly, resulting in a more seamless user experience.
## Creating Personas
Creating user personas based on UX research helps us to visualize and understand the needs, goals, and behaviors of different segments of our audience. These personas guide our design process and ensure that we are meeting the diverse needs of all potential users.
## Improving Usability and Accessibility
A product that is difficult to use will quickly lose users. UX research helps us ensure that our designs are not only visually appealing but also highly usable and accessible. By testing with real users and iterating based on their feedback, we can refine the user interface to be more intuitive and accessible to a wider audience, including those with disabilities.
## Usability Testing
Usability testing is a core component of UX research. It involves observing real users as they interact with the product to identify any usability issues. This process provides invaluable feedback that can be used to make iterative improvements to the design, ensuring a smoother and more efficient user experience.
## Inclusive Design
Incorporating inclusive design principles ensures that our products are accessible to users with various disabilities. UX research helps us identify the barriers faced by these users and create solutions that make our products more accessible and inclusive.
## Informing Design Decisions
In the realm of UI and UX design services, making informed design decisions is crucial. UX research provides a solid foundation of data and insights that guide the design process. Rather than relying on assumptions or guesswork, designers can make evidence-based decisions that are more likely to resonate with users.
## Data-Driven Design
Data-driven design is an approach that uses data gathered from UX research to inform and validate design decisions. This method ensures that the final product is aligned with user expectations and needs, resulting in a more effective and engaging user experience.
## Iterative Design Process
The iterative design process involves continuously testing and refining designs based on user feedback. UX research plays a critical role in this process, providing the necessary insights to make informed changes and improvements.
## Reducing Development Costs and Time
Investing in UX research early in the design process can lead to significant cost and time savings in the long run. By identifying potential issues and user needs early on, we can avoid costly redesigns and development delays later in the project.
## Avoiding Costly Mistakes
Identifying and addressing usability issues early in the design process can prevent costly mistakes during development. This proactive approach saves time and resources, allowing for a more efficient design and development cycle.
## Streamlining Development
By having a clear understanding of user needs and preferences from the outset, we can streamline the development process. UX research helps us create a well-defined roadmap, reducing the likelihood of major changes and delays during development.
## Boosting Conversion Rates and ROI
A well-designed user experience can significantly impact a product's success. By focusing on user needs and creating a seamless experience, businesses can increase their conversion rates and achieve a higher return on investment (ROI).
## Optimizing the User Journey
UX research helps us understand the user's journey from start to finish. By optimizing each step of this journey, we can create a more engaging and persuasive experience that encourages users to complete desired actions, such as making a purchase or signing up for a service.
## Enhancing Customer Retention
Satisfied users are more likely to return to a product and recommend it to others. By addressing user needs and pain points through UX research, we can enhance customer retention and foster brand loyalty.
## Staying Competitive in the Market
In a competitive market, businesses need to stay ahead by continuously improving their digital products. UX research enables us to stay informed about user trends and preferences, allowing us to create innovative designs that set our clients apart from their competitors.
## Keeping Up with Trends
The digital landscape is constantly evolving, and staying up-to-date with the latest trends is essential for maintaining a competitive edge. UX research provides insights into emerging trends and user behaviors, helping us create designs that are both modern and relevant.
## Benchmarking Against Competitors
Benchmarking involves comparing your product against competitors to identify strengths and weaknesses. UX research helps us understand where we stand in the market and what improvements are needed to outperform competitors.
## Building Trust and Credibility
A positive user experience builds trust and credibility with users. When users find a product easy to use and valuable, they are more likely to trust the brand and become loyal customers. UX research helps us create these positive experiences by ensuring that the product meets user expectations.
## Creating User-Centric Designs
At our [**UI/UX design agency**](https://www.designboxindia.com/services/ui-ux-design-agency/), we prioritize user-centric designs that focus on the needs and preferences of the end user. By conducting thorough UX research, we ensure that our designs are aligned with what users truly want and need, leading to higher satisfaction and loyalty.
## Feedback Loops
Establishing feedback loops with users allows us to continuously gather insights and make necessary adjustments. This ongoing engagement helps maintain user satisfaction and keeps the product relevant.
## Conclusion
In conclusion, UX research is a critical component of successful UI and UX design services. It enables us to create user-centered designs that enhance satisfaction, improve usability, and drive business success. By investing in UX research, businesses can reduce development costs, boost conversion rates, and stay competitive in an ever-changing market.
| ishita_mathur |
1,888,124 | Enhancing Performance with Advanced Magnet Technologies | Improving Efficiency: how Progressed Magnet Innovations Can easily Assist Have you ever before... | 0 | 2024-06-14T08:14:14 | https://dev.to/jahira_hanidha_ac8711fb57/enhancing-performance-with-advanced-magnet-technologies-556e | Improving Efficiency: How Advanced Magnet Technologies Can Help
Have you heard of advanced magnet technologies? They use magnets to create new capabilities and offer many advantages over traditional technologies. If you are looking to improve performance, advanced magnet technologies may be exactly what you need.
Benefits of Advanced Magnet Technologies
Advanced magnet technologies are highly versatile. They can be used to improve everything from transportation to power generation. They are also very efficient, using less energy than many other technologies, and very reliable, lasting a long time without breaking down.
Innovation in Magnet Technology
Advanced magnet technologies are constantly evolving. Scientists and engineers keep finding new ways to use products such as the Magnet Lifter to improve performance. This innovation has led to many exciting breakthroughs in industries such as transportation, energy, and healthcare.
Safety Considerations
When using advanced magnet technologies, safety should always be a top priority. Magnets are very powerful and can be dangerous if not handled properly. For example, strong magnets can cause injury if they are mishandled or come too close to electronic devices. It is important to follow all safety guidelines when using these technologies.
How to Use Advanced Magnet Technologies
There are many ways to use advanced magnet technologies. For example, they can be used to build maglev trains, which hover over their tracks. They can also be used to make wind turbines more efficient at generating electricity.
Services Offered
Many companies specialize in advanced magnet technologies. They offer a variety of services, including design, manufacturing, and installation. Some companies also offer repair services for damaged magnets.
Products Offered
When shopping for advanced magnet technologies, it is important to choose high-quality products. A high-quality Magnetic Switch will be more dependable and will last longer. It is also important to choose a reputable company with experience in the industry.
Applications for Advanced Magnet Technologies
Advanced magnet technologies have many applications. They can be used in transportation (such as high-speed trains and cars), energy (such as wind turbines and solar installations), healthcare (such as MRI machines), and more. As these technologies continue to evolve, they will most likely become even more versatile and useful.
Source: https://www.mag-land.com/magnet-lifter
| jahira_hanidha_ac8711fb57 | |
1,880,559 | Friday Thoughts on email validation | While working on a new authentication system I was getting alerts that account creation was failing.... | 26,285 | 2024-06-14T08:10:57 | https://dev.to/timoschinkel/friday-thoughts-on-email-validation-4fha | webdev | While working on a new authentication system I was getting alerts that account creation was failing. After diving into the logs I learned that the accounts were rejected because the email addresses were not deemed valid. But I made sure standardized email validation was in place. What's going on here?
## So many systems and so many specifications
The authentication system is web based and thus uses HTML[^1]. There is a backend written in JavaScript (actually TypeScript), which in turn - for some operations - talks to a service written in .NET that stores data in [AWS Cognito](https://aws.amazon.com/cognito/).
Because the front-end is web based we use `<input type="email">`. This has a number of benefits from the perspective of usability; we get out-of-the-box validation on the format of the input, for some devices a customized keyboard is shown, and password managers are inclined to prefill the field. The validation rules of this input type are well-defined in the [HTML specification](https://html.spec.whatwg.org/multipage/input.html#email-state-(type=email)). There's even a handy regular expression for you to use, which is nice as JavaScript does not have out-of-the-box email validation.
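Since JavaScript has no built-in email validation, the regular expression from the HTML specification can be used directly (a sketch; the constant and function names are ours):

```javascript
// The "valid email address" regular expression from the WHATWG HTML
// specification — the same rules <input type="email"> applies.
const htmlEmailPattern = /^[a-zA-Z0-9.!#$%&'*+\/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$/;

function isHtmlValidEmail(address) {
  return htmlEmailPattern.test(address);
}

console.log(isHtmlValidEmail('name@example.com'));  // true
console.log(isHtmlValidEmail('name@localhost'));    // true — dotless domains pass
console.log(isHtmlValidEmail('nåme@example.com'));  // false — non-ASCII local part
console.log(isHtmlValidEmail('name@-example.com')); // false — label cannot start with '-'
```

Reusing this exact expression in the JavaScript backend keeps the server-side check consistent with what the browser already enforces.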
The backend of our system is dotnet and uses a feature called [Data Annotations for Model Validation](https://learn.microsoft.com/en-us/aspnet/mvc/overview/older-versions/mvc-music-store/mvc-music-store-part-6) that can be used to validate incoming models, including a validation for email addresses. Microsoft is nice enough to share the code for this validation with us: https://github.com/microsoft/referencesource/blob/master/System.ComponentModel.DataAnnotations/DataAnnotations/EmailAddressAttribute.cs#L48
The storage for our system is AWS Cognito, and this was the actual source of our errors. Looking at [the documentation of Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-attributes.html) AWS tells us the following:
> Generally email , Value must be a valid email address string following the standard email format with @ symbol and domain, up to 2048 characters in length.
Luckily we give AWS enough money that they are willing to answer our questions. AWS told me that they use [RFC 3696](https://datatracker.ietf.org/doc/html/rfc3696#page-5) to validate email addresses.
An interesting fact is that all the specifications for email addresses _also_ need to be compliant with [RFC 1035](https://datatracker.ietf.org/doc/html/rfc1035#section-2.3.1). That RFC describes how a domain name should be constructed.
## To validate or not to validate
According to [David Gilbertson](https://david-gilbertson.medium.com/the-100-correct-way-to-validate-email-addresses-7c4818f24643) the only proper way to validate an email address is by sending an email to that address containing a link. When that link is clicked then we know for sure that the email address is valid. But we are in the e-commerce business, and we want to remove as many _blockers_ from our customer journey as possible. As such we don't want to interrupt the checkout process with a verification email.
So, yes ideally we would not validate the email syntax, and yes ideally we would send a verification email, but in the real world we are sometimes faced with non-ideal scenarios. In our situation it did not help that the error messages coming from Cognito do not contain distinctive error codes. And even if Cognito _did_ then our .NET service would have to respond with a Bad Request status code, and we still have to interpret the response. Validation of the syntax of an email address is not ideal, but for our scenario it allows us to quickly give feedback to our user and prevent "invalid" email addresses to make their way into the rest of our system.
## Comparison
The email addresses that were marked as invalid can be grouped into two scenarios:
- invalid `.` usage; the local part of an email address cannot start or end with `.`, and two or more consecutive `.` are not allowed. So `.name@example.com`, `name.@example.com`, and `na..me@example.com` are not allowed
- missing TLD extensions; according to RFC 5321 the domain part can be any internet address, which makes `name@localhost` a valid email address; however, this is in violation of RFC 3696
Let's make a comparison of how the different layers handle these addresses:
```text
HTML .NET Cognito
name@example.com ✅ ✅ ✅
name.name@example.com ✅ ✅ ✅
"name..name"@example.com ❌ ✅ ❌
name@localhost ✅ ✅ ✅
nåme@example.com ❌ ✅ ✅
aA0!#$%&'*+-/=?^_`{|}~@example.com ✅ ✅ ✅
name.example.com ❌ ❌ ❌
.name@example.com ✅ ✅ ❌
name.@example.com ✅ ✅ ❌
name..name@example.com ✅ ✅ ❌
<name>@example.com ❌ ✅ ❌
name@-example.com ❌ ✅ ❌
name@example-.com ❌ ✅ ❌
name@example.com- ❌ ✅ ❌
# extra examples from the 2024-06-28 update
name@x.com ✅ ✅ ✅
name@e.mail.com ✅ ✅ ✅
name@12mail.com ✅ ✅ ✅
name@1.1 ✅ ✅ ❌
name@1.com ✅ ✅ ✅
name@1.2.com ✅ ✅ ✅
```
The code used to test this using .NET and Cognito can be found in this Gist: https://gist.github.com/timoschinkel/fe409ce4e019138778d4f0d9d1879e1e
I was surprised by this outcome; AWS had told me that Cognito requires an RFC 3696 compliant email address, but it still rejected `"name..name"@example.com`, which is a valid address, at least as I interpret the specification.
## tl;dr
Although we would love to, we don't actually write code under perfect circumstances. Sometimes we are bound by limitations outside our influence. It was our choice to strive for a frictionless customer journey, without email verification via an actual email, and the consequence of this choice is that we depend on Cognito accepting our data, and when Cognito rejects it we introduce friction in our customer journey. By matching the validation rules in all layers with the layer that has the strictest rules we can at least tell our customers that their email address has been rejected and why.
At the end of the day we created a regular expression that allowed addresses that are accepted in any layer of our application. It is not fully compliant with any of the specifications mentioned, but it will allow us to explain to our customers that their email address was rejected based on the structure. If a customer has an address that is blocked by our system we will find out where it is blocked and we'll try to find a way around it.
## Addendum
I did not manage to write a regular expression that meets all criteria - the maximum length is still an issue - but I did manage to create one that at least works against the test set from this article:
```jsregexp
^[\p{L}\d!#$%&'*+\-/=?^_`{|}~]+(?:\.[\p{L}\d!#$%&'*+\-/=?^_`{|}~]+)*@(?:(?!-[a-z0-9]+\.)(?![a-z0-9]+-\.)(?![a-z0-9]+--[a-z0-9]+\.)[a-z0-9-]+\.)+[a-z][a-z0-9]+$
```
or for your HTML input[^2]:
```html
<input type="email" pattern="[\p{L}\d!#$%&\x27*+\-\/=?^_`\{\|\}~]+(?:\.[\p{L}\d!#$%&\x27*+\-\/=?^_`\{\|\}~]+)*@(?:(?!-[a-z0-9]+\.)(?![a-z0-9]+-\.)(?![a-z0-9]+--[a-z0-9]+\.)[a-z0-9\-]+\.)+[a-z][a-z0-9]+" required name="email">
```
Concessions have been made in creating this pattern. It does *not* completely match any of the specifications mentioned in this article, but it does filter out all email addresses that would have been blocked by any of the layers in our application. And because the pattern attribute does not support flags, the pattern is by definition case-sensitive.
## Update 2024-06-28
After deploying this validation rule our observability platform detected a rise in errors; my regular expression missed a number of scenarios. As we don't log privacy-sensitive data to our observability platform we decided to run all our existing users against our pattern. In hindsight, we should have done this earlier in the process.
What we found is that we had email addresses with accented characters like è that were now blocked. We had also missed one-character domain names, such as `x.com`, and domain names that start with a numeric value.
This shows that email validation using a pattern is very difficult and that David Gilbertson was right all along. But because we are still bound by the requirements from Cognito, and because we feel that notifying a visitor that their email address is likely invalid makes for a better customer journey than risking an email never reaching the customer because the address is invalid, we still use pattern validation.
I have updated my test suite with email addresses that follow the same pattern as the email addresses that were falsely rejected, as well as the gists and the regular expressions. The regular expression now uses `\p{L}`. This matches all characters that belong to the "letter" category, and this includes special characters like é. But because this has a larger match, it is only used for the name part of the email address. See https://www.regular-expressions.info/unicode.html for more explanation.
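To sanity-check the pattern against examples from the comparison table, a small Node.js script can replay part of the test set. This is my own illustration (the full test suite lives in the Gist linked earlier); note the `u` flag, which is required for `\p{L}` to work:

```javascript
// The pattern from the addendum, as a JavaScript regex literal with the "u" flag.
const pattern = /^[\p{L}\d!#$%&'*+\-\/=?^_`{|}~]+(?:\.[\p{L}\d!#$%&'*+\-\/=?^_`{|}~]+)*@(?:(?!-[a-z0-9]+\.)(?![a-z0-9]+-\.)(?![a-z0-9]+--[a-z0-9]+\.)[a-z0-9-]+\.)+[a-z][a-z0-9]+$/u;

// Accepted by every layer, so they should pass:
const shouldPass = ['name@example.com', 'nåme@example.com', 'name@x.com', 'name@1.com'];
// Rejected by at least one layer, so they should fail:
const shouldFail = ['name@localhost', '.name@example.com', 'name..name@example.com', 'name@-example.com', 'name@1.1'];

for (const address of shouldPass) {
  console.log(pattern.test(address), address); // true for each
}
for (const address of shouldFail) {
  console.log(pattern.test(address), address); // false for each
}
```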
[^1]: This Friday Thought is also applicable if your frontend is built using React or Vue.
[^2]: The regular expression for the pattern attribute requires some changes; it is case-sensitive, `'` and `"` need to be encoded as `\x27` and `\x22` respectively, and more characters need to be escaped.
---
title: Dying, Bitcoin and Inheritance
date: 2024-06-14T08:10:15
tags: career, github, cryptocurrency, bitcoin
canonical_url: https://dev.to/martinbaun/dying-bitcoin-and-inheritance-480k
---
I used to live in Kyiv, Ukraine, until I got bombed. I contemplate how this world is unstable: one day, you work and live in a comfortable environment. I have plans for my life, but the next day, anything can happen.
I take responsibility for my employees as an entrepreneur. I run a small tech company. Worst-case scenario, I don’t want to leave my team with nothing. That includes a well-deserved paycheck.
>*Leaving Ukraine made me wonder: What would happen if my face was too charming for a bus and it ran me over?*
## Prepare to die: Tech Version
How do I secure the legacy I'm trying to build as a small software-developing company? We do not have automated employee payments. We are small and intend to stay small.
**What happens to my team's salaries? What happens to the work we've done together?** I pondered all traditional solutions, but I've found them cumbersome. We all live in different jurisdictions and work remotely.
Each country has different banking systems and currencies. The fluctuations and differences make traditional solutions inefficient. Bitcoin is the solution to all these questions. I’ll explain more further.
## Setting up my Bitcoin inheritance plan - simple steps
Bitcoin to the rescue! So, where do we start? I made a small server that asks via Telegram if I am alive every week.
If I don't reply to this message in 3 weeks, it will send a password to every full-time employee. The password will help them open an encrypted file containing a Bitcoin wallet.
I have uploaded an open-source version of this script to GitHub, and you can find it [here](https://github.com/MartinBaunWorld/valhalla).
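The decision logic behind such a dead man's switch is tiny. Here is a rough sketch of just the core check (an illustration, not the actual valhalla script; the names are made up):

```python
from datetime import datetime, timedelta

# How long the owner may stay silent before the secret is released.
RELEASE_AFTER = timedelta(weeks=3)

def should_release(last_confirmed_alive: datetime, now: datetime) -> bool:
    """True when the weekly 'are you alive?' pings have gone unanswered
    for longer than the configured threshold."""
    return now - last_confirmed_alive > RELEASE_AFTER

now = datetime(2024, 6, 14)
print(should_release(datetime(2024, 5, 23), now))  # True  (22 days of silence > 3 weeks)
print(should_release(datetime(2024, 6, 7), now))   # False (only 1 week of silence)
```

Everything else - the Telegram ping, the encrypted file, the delivery to employees - is plumbing around this one comparison.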
## What's next?
Transferring ownership of company assets can't be done conventionally as it is in the legacy legal framework. Finding an internationally recognized notary who will help me write a will is a possible workaround. This is a probable solution that solves the problem of inheritance.
Till then, my focus remains on [Goleko](https://goleko.com/). I created Goleko to be the best project management tool: it integrates everything good about existing project management tools and sorts out everything wrong with them. You can use Goleko to enhance your productivity and efficiency in everything you do.
I have also been using 7 principles to promote the efficiency of my remote team. Read about them here: [7 Tips for Effective Communication in Remote Teams](https://tigerteamx.com/blog/posts/7-tips-for-effective-communication-in-remote-teams/).
## Side Note, why Bitcoin?
Bitcoin is the best solution to this problem. It ensures the inheritance for small businesses like ours. I'd like to point out two key reasons:
## Inflation resistant
Bitcoin operates outside government control, so nobody manages its money supply. This shields it from the inflationary pressures that affect regular fiat currencies. Bitcoin has a fixed supply limit of 21 million coins. Inflation reduces the purchasing power of fiat currencies over time, but BTC stays immune to the money-supply rules that central banks use to devalue their currencies. This helps Bitcoin maintain its value better over the long term.
## Decentralized
Bitcoin's independent nature makes it less vulnerable to the banking system's mistakes, crises, and regulations. Regular banking systems are centralized, making them prone to human failures and security breaches.
Troubles affecting one bank may collapse the whole system, as banks are interconnected. A single mistake by a central bank chairman can shake the entire global financial structure.
Bitcoin's network, by contrast, is maintained by thousands of independent nodes worldwide. This makes it more resilient to attacks and failures.
In 2008, the collapse of Lehman Brothers caused a global financial crisis that led to the failure of several large banks. It highlighted the fragility of the traditional banking system and the need for alternative payment solutions like Bitcoin.
This is a fair solution to a crisis! That's what we need for our "death case".
-----
*For these and more thoughts, guides, and insights, visit my blog at [martinbaun.com](http://martinbaun.com).*
*You can find me on [YouTube](https://www.youtube.com/channel/UCJRgtWv6ZMRQ3pP8LsOtQFA).*
---
title: Video — Open Source AI with Hugging Face, Dallas AI meetup (05/2024)
published: true
date: 2024-06-14 08:10:15 UTC
tags: opensource,deeplearning,transformers,llm
canonical_url: https://julsimon.medium.com/video-open-source-ai-with-hugging-face-dallas-ai-meetup-05-2024-59fc53631eae
---
### Video — Open Source AI with Hugging Face, Dallas AI meetup (05/2024)

Learn how Hugging Face is changing the ML landscape, get practical insights on working with large language models, and dive into the details of Retrieval-Augmented Generation (RAG). Watch a demo on building a chatbot for the energy sector, and hear lessons learned from over 200 customer meetings about deploying LLMs in real-world settings. Discover how to choose the best models using Hugging Face leaderboards and see the latest trends in ML engineering.
{% youtube cf8z3Q8PFQQ %}
#ai #opensource
---
title: My Journey Internship at Kali Academy
date: 2024-06-14T08:08:55
tags: career, opensource, programming, coding
canonical_url: https://dev.to/birusha/my-journey-internship-at-kali-academy-5bng
---
I am Birusha Ndegeya, a student software developer specializing in web and mobile apps at Kadea Academy in Goma. During my three-month training at Kali Academy, I dedicated myself to an open-source project and was welcomed into the fascinating world of open source.
The internship started on May 11 and ended on June 11 in Goma, North-Kivu, Democratic Republic of Congo. The goal of this internship was to provide practical experience through concrete projects and to train participants on how to contribute to open-source projects and behave within a community.
## The Progress of My Internship
### The First Month: The Foundation
The first month of our internship was focused on establishing solid foundations. We followed a structured program to become "real" hackers, which included learning in public, using the Linux system, and mastering the command line interface (CLI). We also reviewed Git and GitHub, essential tools for any open-source developer. A highlight of this period was reading the book "Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure," which profoundly enlightened us about the importance and philosophy of open source.
### The Second Month: Specialization in Wikimedia
The second month immersed us in the fascinating world of Wikimedia. We had a quick introduction to Wikipedia and Wikidata and learned how to customize Wikipedia by modifying themes, gadgets, and beta features. We deepened our skills in Wikicode and advanced editing, working on models and templates. We also learned how to create and use Wikidata tools such as the Wikidata REST APIs and SPARQL. This month was crucial for understanding how to effectively contribute to Wikimedia projects and become MediaWiki experts.
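As an illustration of what those queries look like (my own minimal example, not code from the internship), here is the canonical "house cats" query from the Wikidata Query Service, wrapped in a helper that builds a request URL for the public SPARQL endpoint:

```python
import urllib.parse

# The canonical example query from the Wikidata Query Service:
# all items that are an instance of (P31) house cat (Q146), with English labels.
QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
"""

def build_query_url(query: str) -> str:
    """Build a GET URL that asks the public endpoint for JSON results."""
    params = urllib.parse.urlencode({"query": query, "format": "json"})
    return "https://query.wikidata.org/sparql?" + params

print(build_query_url(QUERY)[:60])
```

Tools like the "Wiki Data Query AI" project described below aim to spare users from writing this SPARQL by hand.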
### The Third Month: Final Projects
In the last month, we were divided into two groups to work on final internship projects. I had the opportunity to participate in an exciting project called "Wiki Data Query AI." This open-source software is designed to help users perform queries on Wikidata without needing to know the SPARQL language. Working on this project allowed me to apply all the skills acquired during the first two months and contribute to a truly innovative tool.
### Impact and Reflection
This internship at Kali Academy was a transformative experience. I not only acquired valuable technical skills but also gained a better understanding of the importance of open source. Collaboration with my fellow interns and our mentors also helped me develop essential interpersonal skills, such as teamwork and communication.
I personally thank Abel Mbula, Delord, and our other mentors for dedicating their time and effort to our training. Along the way, I was truly grateful to my friends who were there to answer my questions. It was a great experience to be among those students. I truly thank them and am ready for other exciting challenges.
---
title: One byte explainer - Callbacks
date: 2024-06-14T08:08:38
tags: devchallenge, cschallenge, javascript, webdev
canonical_url: https://dev.to/imkarthikeyan/one-byte-explainer-callbacks-34bo
---
🎉 **Welcome, folks!**
Get ready for an exciting journey through the series *Explain to a 5-Year-Old - JavaScript Concepts*. Each blog is also a submission for the **cschallenge**.
Here's what to expect:
1. 🚀 **One-Byte Explainer:** A super concise take on the concept (256 characters or less).
2. 🔍 **Demystifying JS:** Dive deeper into the topic, explained clearly and engagingly.
---
## Callbacks
**One-Byte Explainer**:
Callbacks are like saying "tell me later" to a computer task. Our code does other things while the task finishes up. Then the task "calls back" with the result.
**Demystifying JS: Callbacks in Action**
Imagine you're playing a video game, but you're also hungry for pizza! You put the pizza in the oven and set a timer. Callbacks in JavaScript work similarly. They let your program keep running even when a task takes a while, just like you can keep playing the game while the pizza cooks.
Here's how it works:
1. You tell the program to do a task, like downloading a picture from the internet (similar to starting the oven).
2. You also give it a special instruction, like, "Hey program, when you're done downloading the picture, call me back and tell me it's ready!" This special instruction is your callback function.
3. Your program starts working on the download, but it doesn't wait for it to finish before doing anything else.
4. In the meantime, your program can keep showing you a loading screen or letting you play the game (like you can keep playing while the pizza cooks).
5. Once the task is finished (picture downloaded or pizza timer rings), the program "calls you back" by running your callback function. It gives you the results (the downloaded picture) so you can use them, like displaying the picture on the screen (or in this case, you can go eat the pizza!).
```javascript
function downloadImage(imageUrl, callback) {
// Simulate downloading an image (replace with actual logic)
setTimeout(() => {
const image = "image data"; // Imagine this is the downloaded image
callback(image); // Call the callback function with the downloaded image
}, 2000); // Simulate a 2 second download time
console.log("executing other tasks")
}
function displayImage(image) {
console.log("calling the callback")
console.log("Image downloaded and ready! Displaying:", image);
}
downloadImage("https://www.example.com/image.jpg", displayImage);
```
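If you want to see that ordering with your own eyes, here is a compressed version of the snippet above (my own variation) that records what runs first: the synchronous "other tasks" line is logged before the callback ever fires.

```javascript
const order = [];

function downloadImage(imageUrl, callback) {
  // Queue the "download"; the callback fires later, after the timer.
  setTimeout(() => {
    callback("image data");
  }, 10);
  order.push("doing other tasks"); // runs immediately
}

downloadImage("https://www.example.com/image.jpg", (image) => {
  order.push("callback: " + image);
  console.log(order); // ["doing other tasks", "callback: image data"]
});

console.log(order); // still only ["doing other tasks"] at this point
```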
Thank you for reading.
---
title: Exploring the Prestigious Kasturba Medical College
date: 2024-06-14T08:08:28
tags: mbbs, abroad, consultancy, neweraeducation
canonical_url: https://dev.to/new_eraeducation/exploring-the-prestigious-kasturba-medical-college-28bc
---
## Introduction to Kasturba Medical College (KMC)
As an aspiring medical professional, I have always been drawn to institutions that not only provide top-notch education but also foster a rich legacy of excellence. Kasturba Medical College (KMC) is one such esteemed institution that has captured my attention and piqued my curiosity. Located in the picturesque city of Mangalore, KMC has earned a reputation as one of the premier medical schools in India, known for its exceptional academic programs, state-of-the-art facilities, and a vibrant campus life.
Want to study MBBS abroad? [Visit now](https://www.neweraeducation.in/mbbs-in-russia.php).
## History and Establishment of Kasturba Medical College
Kasturba Medical College has a storied history that dates back to 1953 when it was established by the Manipal Academy of Higher Education (MAHE). Named after Kasturba Gandhi, the wife of Mahatma Gandhi, the college was founded with the vision of providing quality medical education and healthcare services to the region. Over the decades, KMC has grown from strength to strength, continuously raising the bar for medical education in the country.
## Location and Facilities at Kasturba Medical College
Nestled in the picturesque city of Mangalore, Kasturba Medical College boasts a sprawling campus that spans over 150 acres. The college is equipped with state-of-the-art infrastructure, including modern laboratories, well-stocked libraries, and cutting-edge medical facilities. The campus also houses a renowned teaching hospital, Kasturba Hospital, which serves as a hub for clinical training and hands-on experience for the students.
## Academic Programs and Courses Offered at Kasturba Medical College
Kasturba Medical College offers a diverse range of academic programs, with the flagship MBBS (Bachelor of Medicine, Bachelor of Surgery) degree being the centerpiece. Additionally, the college provides postgraduate programs in various medical specialties, including MD, MS, and DNB (Diplomate of National Board) courses. The curriculum at KMC is designed to strike a perfect balance between theoretical knowledge and practical application, ensuring that students are well-equipped to excel in their chosen fields.
## [Admission Process](https://www.neweraeducation.in/mbbs-in-russia.php) and Eligibility Criteria for Kasturba Medical College
Admission to Kasturba Medical College is highly competitive, with the college attracting top-performing students from across the country. The admission process is primarily based on the National Eligibility cum Entrance Test (NEET), a nationwide entrance examination for medical and dental programs.
Candidates must meet the minimum eligibility criteria, which include completing their 10+2 education with the required subjects and securing the minimum cut-off score in the NEET exam.
## Fee Structure and Financial Aid Options at Kasturba Medical College
Kasturba Medical College is known for its transparent and affordable fee structure. The college offers various financial aid options, including merit-based scholarships, need-based grants, and loan assistance, to ensure that deserving students from diverse backgrounds can access quality medical education. The college's commitment to making education accessible has been a key factor in its continued success and growth.
---
canonical_url: https://dev.to/thwani47/nested-prompts-in-go-using-promptui-2a4d
title: Nested Prompts in Go using promptui
published: true
description: Learn how to build a nested prompt in a Go CLI application
tags: go, promptui, cobra, cli
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kch1rwcrs8bx8ahzhyx1.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-14 08:00 +0000
---
I was working on a CLI tool written in Go, using the [Cobra](https://github.com/spf13/cobra) tool recently, and I had a use case where I needed a nested prompt for one of the commands. I was using [promptui](https://github.com/manifoldco/promptui) for the prompts and I couldn't find a straightforward way to do this. This short post will show how to create a nested prompt using `promptui`. The completed code can be found [here](https://github.com/Thwani47/nested-prompt).
We first need to create an empty Go project. We will call it `nested-prompt`:
```bash
$ mkdir nested-prompt && cd nested-prompt
$ go mod init github.com/Thwani47/nested-prompt
```
We'll then install the `cobra`, `cobra-cli`, and `promptui` packages:
```bash
$ go get -u github.com/spf13/cobra@latest
$ go install github.com/spf13/cobra-cli@latest
$ go get -u github.com/manifoldco/promptui
```
We can initialize a new CLI application using the `cobra-cli` and add a command to our CLI
```bash
$ cobra-cli init # initializes a new CLI application
$ cobra-cli add config # adds a new command to the CLI named 'config'
```
We can clean up the `cmd/config.go` file and remove all the comments. It should look like this:
```go
// cmd/config.go
package cmd
import (
"fmt"
"github.com/spf13/cobra"
)
var configCmd = &cobra.Command{
Use: "config",
Short: "Configure settings for the application",
Long: `Configure settings for the application`,
Run: func(cmd *cobra.Command, args []string) {
fmt.Println("config called")
},
}
func init() {
rootCmd.AddCommand(configCmd)
}
```
We first need to create a custom type for our prompt. We do that by defining a `promptItem` struct as follows
```go
type PromptType int
const (
TextPrompt PromptType = 0
PasswordPrompt PromptType = 1
SelectPrompt PromptType = 2
)
type promptItem struct {
ID string
Label string
Value string
SelectOptions []string
promptType PromptType
}
```
The `PromptType` enum allows us to collect different types of input from our prompts, we can prompt the user for text, or sensitive values such as passwords or API Keys, or prompt the user to select from a list of defined values
We then define a `promptInput` function that will prompt for input from the user. The function returns the string value entered by the user or an error if the prompt fails.
```go
func promptInput(item promptItem) (string, error) {
prompt := promptui.Prompt{
Label: item.Label,
HideEntered: true,
}
if item.promptType == PasswordPrompt {
prompt.Mask = '*'
}
res, err := prompt.Run()
if err != nil {
fmt.Printf("Prompt failed %v\n", err)
return "", err
}
return res, nil
}
```
We then define a `promptSelect` function that will allow the user to select from a list of options. The function returns the string value selected by the user or an error if the prompt fails.
```go
func promptSelect(item promptItem) (string, error) {
prompt := promptui.Select{
Label: item.Label,
		Items: item.SelectOptions,
HideSelected: true,
}
_, result, err := prompt.Run()
if err != nil {
fmt.Printf("Prompt failed %v\n", err)
return "", err
}
return result, nil
}
```
To simulate a nested prompt, we will create a `promptNested` function that will allow us to prompt the user for a value and the prompt will stay active until the user selects `"Done"`. The function returns a boolean value that indicates that the prompt was a success.
*The comments in the function explain what each major block of code is responsible for*
```go
func promptNested(promptLabel string, startingIndex int, items []*promptItem) bool {
// Add a "Done" option to the prompt if it does not exist
doneID := "Done"
if len(items) > 0 && items[0].ID != doneID {
items = append([]*promptItem{{ID: doneID, Label: "Done"}}, items...)
}
templates := &promptui.SelectTemplates{
Label: "{{ . }}?",
Active: "\U0001F336 {{ .Label | cyan }}",
Inactive: "{{ .Label | cyan }}",
Selected: "\U0001F336 {{ .Label | red | cyan }}",
}
prompt := promptui.Select{
Label: promptLabel,
Items: items,
Templates: templates,
Size: 3,
HideSelected: true,
CursorPos: startingIndex, // Set the cursor to the last selected item
}
idx, _, err := prompt.Run()
if err != nil {
fmt.Printf("Error occurred when running prompt: %v\n", err)
return false
}
selectedItem := items[idx]
// if the user selects "Done", return true and exit from the function
if selectedItem.ID == doneID {
return true
}
var promptResponse string
// if the prompt type is Text or Password, prompt the user for input
if selectedItem.promptType == TextPrompt || selectedItem.promptType == PasswordPrompt {
promptResponse, err = promptInput(*selectedItem)
if err != nil {
fmt.Printf("Error occurred when running prompt: %v\n", err)
return false
}
items[idx].Value = promptResponse
}
// if the prompt type is Select, prompt the user to select from a list of options
if selectedItem.promptType == SelectPrompt {
promptResponse, err = promptSelect(*selectedItem)
if err != nil {
fmt.Printf("Error occurred when running prompt: %v\n", err)
return false
}
items[idx].Value = promptResponse
}
if err != nil {
fmt.Printf("Error occurred when running prompt: %v\n", err)
return false
}
// recursively call the promptNested function to allow the user to select another option
	return promptNested(promptLabel, idx, items)
}
```
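Because `promptNested` is interactive, it is hard to see its control flow in isolation. The sketch below is my own simplification (not part of the article's repository): it replaces promptui with a scripted list of selections, to show the "keep prompting until Done" pattern that the recursion implements.

```go
package main

import "fmt"

// item mirrors the ID/Value part of promptItem.
type item struct {
	ID    string
	Value string
}

// runScripted walks a scripted sequence of (ID, value) selections and stops
// when the "Done" sentinel is chosen, just like promptNested's recursion.
// It returns how many prompts were shown.
func runScripted(items []*item, script [][2]string) int {
	steps := 0
	for _, choice := range script {
		steps++
		if choice[0] == "Done" {
			break
		}
		for _, it := range items {
			if it.ID == choice[0] {
				it.Value = choice[1]
			}
		}
	}
	return steps
}

func main() {
	items := []*item{{ID: "Theme"}, {ID: "Language"}}
	steps := runScripted(items, [][2]string{
		{"Theme", "Dark"},
		{"Language", "English"},
		{"Done", ""},
	})
	fmt.Println(steps, items[0].Value, items[1].Value) // 3 Dark English
}
```

The real function does the same loop, except that each "choice" comes from a `promptui.Select` run and the recursion carries the cursor position along.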
Now we have all the methods we need and we need to test them out. Inside the `Run` function of the `configCmd` command, we will create a list of `promptItem` and call the `promptNested` function to prompt the user for input. The `Run` function should look like this:
```go
// create a list of prompt items
items := []*promptItem{
{
ID: "APIKey",
Label: "API Key",
promptType: PasswordPrompt,
},
{
ID: "Theme",
Label: "Theme",
promptType: SelectPrompt,
SelectOptions: []string{"Dark", "Light"},
},
{
ID: "Language",
Label: "Preferred Language",
promptType: SelectPrompt,
SelectOptions: []string{"English", "Spanish", "French", "German", "Chinese", "Japanese"},
},
}
// set the starting index to 0 to start at the first item in the list
promptNested("Configuration Items", 0, items)
for _, v := range items {
fmt.Printf("Saving configuration (%s) with value (%s)...\n", v.ID, v.Value)
}
```
Build and test the application as follows
```bash
$ go build .
$ ./nested-prompt config
```
The result is as follows
 | thwani47 |
---
title: "Effective State Management in React: Comparing Redux, Context API, and Recoil"
date: 2024-06-14T08:02:44
canonical_url: https://dev.to/drruvari/effective-state-management-in-react-comparing-redux-context-api-and-recoil-407i
---
State management is a critical aspect of building scalable and maintainable React applications. In this article, we will explore three popular state management solutions: Redux, Context API, and Recoil. We'll discuss their strengths, weaknesses, and best use cases to help you make an informed decision for your next project.
## Introduction
Managing state in React can be challenging, especially as your application grows. Choosing the right state management solution is crucial for maintaining performance, readability, and scalability. Redux, Context API, and Recoil are among the most widely used solutions in the React ecosystem. This article aims to provide a comprehensive comparison to help you understand their differences and decide which one fits your needs.
## Redux
### Overview
Redux is a predictable state container for JavaScript apps, widely used in the React community. It provides a central store for all the application's state, making it easier to manage and debug.
### Pros
- **Predictability**: The state is predictable and can be tracked easily.
- **Middleware Support**: Extensive middleware support for asynchronous actions.
- **Ecosystem**: A rich ecosystem with many tools and extensions.
### Cons
- **Boilerplate**: Requires a significant amount of boilerplate code.
- **Complexity**: Can become complex for small applications.
### Example
```jsx
import { createStore } from 'redux';
import { Provider } from 'react-redux';
import rootReducer from './reducers';
const store = createStore(rootReducer);
function App() {
return (
<Provider store={store}>
<YourComponent />
</Provider>
);
}
```
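Redux's predictability comes from a very small core: a store that holds state and only ever changes it through dispatched actions. This from-scratch sketch (an illustration of the idea, not the real `redux` source) shows that core without any JSX:

```javascript
// A tiny Redux-style store: state changes only via dispatch(action).
function createTinyStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action); // the only way state ever changes
      listeners.forEach((fn) => fn());
    },
    subscribe: (fn) => listeners.push(fn),
  };
}

// A reducer is a pure function: (state, action) -> next state.
const counter = (state = { count: 0 }, action) =>
  action.type === 'INCREMENT' ? { count: state.count + 1 } : state;

const store = createTinyStore(counter, { count: 0 });
store.subscribe(() => console.log('state is now', store.getState()));
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState().count); // 1
```

Because every change flows through the reducer, the state at any moment is fully determined by the initial state plus the sequence of dispatched actions - which is what makes time-travel debugging and logging middleware possible.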
## Context API
### Overview
The Context API is a built-in feature of React that allows you to share state across the entire app without passing props down manually at every level.
### Pros
- **Simplicity**: Easy to use and integrate with existing React applications.
- **No Additional Libraries**: Doesn't require any additional libraries.
### Cons
- **Performance**: Every consumer re-renders whenever the context value changes, which can lead to performance issues if not used carefully.
- **Scalability**: Not ideal for large and complex state management needs.
### Example
```jsx
import React, { createContext, useState } from 'react';
const MyContext = createContext();
function App() {
const [state, setState] = useState(initialState);
return (
<MyContext.Provider value={{ state, setState }}>
<YourComponent />
</MyContext.Provider>
);
}
```
## Recoil
### Overview
Recoil is a state management library for React that provides a more flexible and scalable solution compared to Context API. It allows for fine-grained state updates and supports derived state.
### Pros
- **Flexibility**: Fine-grained state updates without re-rendering the entire tree.
- **Derived State**: Easy to create and manage derived state.
- **Performance**: Better performance for large applications.
### Cons
- **Learning Curve**: Newer library with a learning curve.
- **Community Support**: Smaller community compared to Redux.
### Example
```jsx
import { RecoilRoot, atom, useRecoilState } from 'recoil';
const textState = atom({
key: 'textState',
default: '',
});
function App() {
return (
<RecoilRoot>
<YourComponent />
</RecoilRoot>
);
}
function YourComponent() {
const [text, setText] = useRecoilState(textState);
return <input value={text} onChange={(e) => setText(e.target.value)} />;
}
```
## Comparison
| Feature | Redux | Context API | Recoil |
|-----------------------|---------------------|---------------------|---------------------|
| Boilerplate Code | High | Low | Moderate |
| Performance | High | Moderate | High |
| Learning Curve | Moderate | Low | Moderate |
| Ecosystem | Extensive | Basic | Growing |
| Flexibility | Moderate | Low | High |
## Conclusion
Choosing the right state management solution depends on the specific needs of your application. Redux is ideal for large, complex applications requiring a robust ecosystem and middleware support. The Context API is perfect for simpler applications or when you want to avoid additional dependencies. Recoil offers a flexible and performant alternative, especially for applications requiring fine-grained state management.
Experiment with these solutions to find the best fit for your project, and share your experiences in the comments below!
---
| drruvari | |
1,888,117 | Commodity "futures and spots" Arbitrage Chart Based on FMZ Fundamental Data | Summary Some people may be unfamiliar with the word "arbitrage", but "arbitrage" is very... | 0 | 2024-06-14T08:01:16 | https://dev.to/fmzquant/commodity-futures-and-spots-arbitrage-chart-based-on-fmz-fundamental-data-3fkc | arbitrage, data, fmzquant, chart | ## Summary
Some people may be unfamiliar with the word "arbitrage", but "arbitrage" is very common in real life. For example, the owner of a convenience store buys a bottle of mineral water from the wholesale market for 0.5 yuan, then sells it in the store for 1 yuan, and finally earns a difference of 0.5 yuan. This process is actually similar to arbitrage. Arbitrage in financial markets is similar to this principle, except that there are many forms of arbitrage.
## What is arbitrage
In the commodity futures market, in theory, the price of the apple contract delivered in May minus the price of the apple contract delivered in October should be close to 0, or at least stable within a certain range. In fact, due to weather, market supply and demand, and other factors, near-term and long-term contract prices are affected to different degrees over time, and the spread can fluctuate significantly.
But in any case, the spread will eventually return to a certain range. If the spread is greater than this range, sell short the May contract and buy long the October contract at the same time, shorting the spread for a profit; if the spread is below this range, buy long the May contract and sell short the October contract at the same time, going long the spread for a profit. This is intertemporal arbitrage: buying and selling the same variety with different delivery months.
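The decision rule above can be sketched in plain Python. Note that this is an illustration only, not FMZ API code: the function name and the band thresholds are made up for the example, and in practice the "normal" spread range must be estimated from historical data for the specific commodity.

```python
def spread_signal(spread, lower=-20.0, upper=20.0):
    """Intertemporal-arbitrage action for a given price spread.

    spread = price(May contract) - price(October contract).
    The [lower, upper] band is a hypothetical 'normal' range.
    """
    if spread > upper:
        # Spread too wide: short the May contract, long the October
        # contract, and profit as the spread narrows back.
        return "short_near_long_far"
    if spread < lower:
        # Spread too narrow: long the May contract, short the October
        # contract, and profit as the spread widens back.
        return "long_near_short_far"
    # Spread inside the normal range: nothing to do.
    return "no_trade"
```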
In addition to intertemporal arbitrage, there is cross-market arbitrage, such as buying soybeans in the exporting country while selling soybeans in the importing country (or the reverse), and cross-commodity arbitrage, such as buying the upstream raw material (iron ore) while selling the downstream finished product (rebar), or selling iron ore while buying rebar.
## What is "futures and spots" arbitrage
Although the above arbitrage methods are literally "arbitrage", they are not pure arbitrage; they are essentially risky speculation. This kind of speculation profits by buying long or selling short the price spread. Although the spread is stable most of the time, there can be market conditions in which the spread does not revert for a long time.
The core principle of "futures and spots" arbitrage is that the same commodity can only have one price at the same point in time. A futures contract becomes the spot when its delivery date arrives, so the futures price is forced to converge to the spot price as delivery approaches. This is completely different from intertemporal arbitrage, where the two legs are contracts with different delivery months: at expiry they become the spot of two different months, so they can legitimately settle at two different prices.
Spread = Futures price - Spot price
The biggest feature of "futures and spots" arbitrage is that, in theory, it is risk-free: the profit range is calculated mainly from the state of the spread. If the spread is too large, you can go long the spot and short the futures at the same time; when the spread returns to zero, you close both the futures and spot positions and earn the spread as profit. There are two main methods: "double close position" arbitrage and "contract delivery" arbitrage.
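Ignoring taxes, fees, and storage costs, the profit mechanics of "long spot + short futures" can be sketched as follows (a simplified illustration, not FMZ code — the function name is invented for the example):

```python
def cash_and_carry_pnl(entry_futures, entry_spot, exit_futures, exit_spot):
    """Per-unit P&L (costs ignored) of a long-spot + short-futures position.

    At entry the spread (entry_futures - entry_spot) is locked in; if both
    legs are closed when the spread has returned to zero, the profit
    equals the entry spread.
    """
    spot_pnl = exit_spot - entry_spot            # long spot leg
    futures_pnl = entry_futures - exit_futures   # short futures leg
    return spot_pnl + futures_pnl
```

For example, entering at futures 110 / spot 100 and exiting when both converge at 105 yields exactly the entry spread of 10, regardless of where the converged price ends up.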
## Commodity "futures and spots" arbitrage channel
To put it simply, the most complicated link is the spot trading of commodities, which involves a series of issues such as warehouse receipts, taxation, and so on. First of all, you need a company whose business scope covers the relevant activity: for contract-delivery arbitrage, the futures account must belong to a corporate legal person. For double-close-position arbitrage, a reliable sales channel is needed; there are many online spot trading websites.
It should be noted that spot transactions usually carry a value-added tax of 17% to 20%, so for double-close-position arbitrage you need to short 1.2 to 1.25 times as much in futures after buying the spot. For contract-delivery arbitrage, you need to short the same quantity of futures after buying the spot, and you also need to account for transaction fees, transportation, and warehousing costs. Of course, the premise of all this is that the current spread is large enough to cover these costs with room to spare.
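The hedge-sizing rule above can be sketched as a tiny helper (illustrative only — the 1.2x default simply mirrors the range cited in the text; the real ratio must be worked out from the actual VAT rate and costs of the specific commodity):

```python
def futures_short_quantity(spot_quantity, hedge_ratio=1.2, delivery=False):
    """Futures quantity to short against a spot purchase.

    Contract-delivery arbitrage hedges 1:1, while double-close-position
    arbitrage scales the short leg up (the text cites 1.2-1.25x for a
    17%-20% VAT) because VAT is only due on the spot leg.
    """
    return spot_quantity * (1.0 if delivery else hedge_ratio)
```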
In addition, thanks to gold (T+D) on the Shanghai Gold Exchange, futures-spot arbitrage in gold can be not only positive (cash-and-carry) arbitrage but also reverse arbitrage, without needing to lease gold. Spot gold (T+D) deferred trading on the Shanghai Gold Exchange is not only convenient to trade, but also has large trading volume and open interest, and its liquidity is very suitable for "futures and spots" arbitrage.
## How to obtain spot and spread data
There are many sources of spot and spread data online, most of which present the data as tables, which is obviously not convenient for analyzing and judging the market. The FMZ Quant trading platform (FMZ.COM) has built-in commodity futures fundamental data, including spot data and spread data. You only need to call a single function to get the spot and spread price of each variety, and historical data from 2016 to the present is supported.
```
# Backtest configuration
'''backtest
start: 2020-06-01 00:00:00
end: 2020-06-02 00:00:00
period: 1d
basePeriod: 1h
exchanges: [{"eid":"Futures_CTP","currency":"FUTURES"}]
'''

# Strategy entry
def main():
    while True:
        ret = exchange.GetData("GDP")  # Call the GDP data
        Log(ret)                       # Print the data
        Sleep(1000 * 60 * 60 * 24 * 30)
```
Return result
```
{
    "Quarterly": "Q1 2006",
    "GDP": {
        "Absolute Value (100 million yuan)": 47078.9,
        "YoY Growth": 0.125
    },
    "primary industry": {
        "Absolute Value (100 million yuan)": 3012.7,
        "YoY Growth": 0.044
    },
    "Tertiary Industry": {
        "Absolute Value (100 million yuan)": 22647.4,
        "YoY Growth": 0.131
    },
    "Secondary industry": {
        "Absolute Value (100 million yuan)": 21418.7,
        "YoY Growth": 0.131
    }
}
```
## Spot and spread chart implementation
Let us use the FMZ platform to display the spot and spread prices as charts. First, register and log in to the FMZ website (FMZ.COM), click "Dashboard", then "Strategy Library" and "New Strategy". Select Python from the drop-down menu in the upper-left corner and fill in the strategy name.
### Step 1: Write the strategy framework
```
# Strategy main function
def onTick():
    pass


# Strategy entry
def main():
    while True:                     # Enter loop mode
        onTick()                    # Execute the strategy main function
        Sleep(1000 * 60 * 60 * 24)  # Sleep for one day
```
The strategy framework consists of two functions. The main function is the entry point of the strategy and handles pre-processing before trading; the program starts from main and then enters an infinite loop that repeatedly executes the onTick function. The onTick function is the main function of the strategy, and it executes the core code.
### Step 2: Adding chart function
```
# Global variables
# Futures and spots chart
cfgA = {
    "extension": {
        "layout": 'single',
        "col": 6,
        "height": "500px",
    },
    "title": {
        "text": "futures and spots chart"
    },
    "xAxis": {
        "type": "datetime"
    },
    "series": [{
        "name": "Futures Price",
        "data": [],
    }, {
        "name": "Spot Price",
        "data": [],
    }]
}

# Spread chart
cfgB = {
    "extension": {
        "layout": 'single',
        "col": 6,
        "height": "500px",
    },
    "title": {
        "text": "Spread chart"
    },
    "xAxis": {
        "type": "datetime"
    },
    "series": [{
        "name": "Spread Price",
        "data": [],
    }]
}

chart = Chart([cfgA, cfgB])  # Create a chart object


# Strategy main function
def onTick():
    chart.add(0, [])  # Draw the futures price series
    chart.add(1, [])  # Draw the spot price series
    chart.add(2, [])  # Draw the spread series
    chart.update([cfgA, cfgB])  # Update the chart


# Strategy entry
def main():
    LogReset()     # Clear previous log information before running
    chart.reset()  # Clear previous chart information before running
    while True:                     # Enter loop mode
        onTick()                    # Execute the strategy main function
        Sleep(1000 * 60 * 60 * 24)  # Sleep for one day
```
In this strategy, a total of two charts are created, arranged side by side: cfgA on the left is the futures-and-spot chart, containing the futures and spot prices, and cfgB on the right is the spread chart. We then call the FMZ platform's built-in Python charting library to create a chart object. Finally, the chart data is updated in real time in the onTick function.
### Step 3: Get data
```
last_spot_price = 0    # Save the last valid spot price
last_spread_price = 0  # Save the last valid spread price


def onTick():
    global last_spread_price, last_spot_price  # Import global variables
    exchange.SetContractType("i888")           # Subscribe to the futures variety
    futures = _C(exchange.GetRecords)[-1]      # Get the latest K-line data
    futures_ts = futures.Time                  # Latest K-line timestamp
    futures_price = futures.Close              # Latest K-line closing price
    spot = exchange.GetData("SPOTPRICE")       # Get spot data
    spot_ts = spot.Time                        # Spot timestamp
    if 'iron ore' in spot.Data:
        spot_price = spot.Data['iron ore']
        last_spot_price = spot_price
    else:
        spot_price = last_spot_price
    spread = exchange.GetData("spread")        # Get spread data
    spread_ts = spread.Time                    # Spread timestamp
    if 'iron ore' in spread.Data:
        spread_price = spread.Data['iron ore']
        last_spread_price = spread_price
    else:
        spread_price = last_spread_price
```
In total, we need to obtain three kinds of data: the futures price, the spot price, and the spread. Obtaining the futures price is simple: use the SetContractType function to subscribe to the futures symbol, then use the GetRecords function to get the closing price of the K-line. For the spot and spread prices, use the method introduced earlier: call the GetData function with the fundamental-data code, which returns a dictionary that includes the timestamp.
## Chart Display



## Get the complete strategy code
```
# fmz@b72930603791887d7452f25f23a13bde
'''backtest
start: 2017-01-01 00:00:00
end: 2020-06-01 00:00:00
period: 1d
basePeriod: 1d
exchanges: [{"eid":"Futures_CTP","currency":"FUTURES"}]
'''

# Global variables
# Futures and spots chart
cfgA = {
    "extension": {
        "layout": 'single',
        "col": 6,
        "height": "500px",
    },
    "title": {
        "text": "futures and spots chart"
    },
    "xAxis": {
        "type": "datetime"
    },
    "series": [{
        "name": "Futures Price",
        "data": [],
    }, {
        "name": "Spot Price",
        "data": [],
    }]
}

# Spread chart
cfgB = {
    "extension": {
        "layout": 'single',
        "col": 6,
        "height": "500px",
    },
    "title": {
        "text": "spread chart"
    },
    "xAxis": {
        "type": "datetime"
    },
    "series": [{
        "name": "spread Price",
        "data": [],
    }]
}

last_spot_price = 0    # Save the last valid spot price
last_spread_price = 0  # Save the last valid spread price
chart = Chart([cfgA, cfgB])  # Create a chart object


# Strategy main function
def onTick():
    global last_spread_price, last_spot_price  # Import global variables
    exchange.SetContractType("i888")           # Subscribe to the futures variety
    futures = _C(exchange.GetRecords)[-1]      # Get the latest candlestick data
    futures_ts = futures.Time                  # Latest K-line timestamp
    futures_price = futures.Close              # Latest K-line closing price
    Log('Futures price:', futures_ts, futures_price)
    spot = exchange.GetData("SPOTPRICE")       # Get spot data
    spot_ts = spot.Time                        # Spot timestamp
    if 'iron ore' in spot.Data:
        spot_price = spot.Data['iron ore']
        last_spot_price = spot_price
    else:
        spot_price = last_spot_price
    Log('Spot price:', spot_ts, spot_price)
    spread = exchange.GetData("spread")        # Get spread data
    spread_ts = spread.Time                    # Spread timestamp
    if 'iron ore' in spread.Data:
        spread_price = spread.Data['iron ore']
        last_spread_price = spread_price
    else:
        spread_price = last_spread_price
    Log('Spread price:', spread_ts, spread_price)
    chart.add(0, [futures_ts, futures_price])  # Plot the futures price
    chart.add(1, [spot_ts, spot_price])        # Plot the spot price
    chart.add(2, [spread_ts, spread_price])    # Plot the spread
    chart.update([cfgA, cfgB])                 # Update the chart
    Log('---------')


# Strategy entry
def main():
    LogReset()     # Clear previous log information before running
    chart.reset()  # Clear previous chart information before running
    while True:                     # Enter loop mode
        onTick()                    # Execute the strategy main function
        Sleep(1000 * 60 * 60 * 24)  # Sleep for one day
```
The complete strategy has been posted on the FMZ platform (FMZ.COM) strategy square, it can be used directly by clicking the link below.
https://www.fmz.com/strategy/211941
## End
Arbitrage is not as complicated as it seems. It does not require deep knowledge of financial theory, nor complicated mathematical or statistical models. Arbitrage is essentially profiting from an unreasonable price returning to a reasonable one. Market conditions change every year; for traders, it is best not to project historical data onto the present, but to use current data to study whether the spread is reasonable.
From: https://blog.mathquant.com/2020/06/17/commodity-futures-and-spots-arbitrage-chart-based-on-fmz-fundamental-data.html | fmzquant |
1,888,116 | Bridging Backend and Data Engineering: Communicating Through Events | Building a Unified System: Event-Driven Approach to Backend and Data Engineering Communication. | 0 | 2024-06-14T08:00:04 | https://dev.to/plutov/bridging-backend-and-data-engineering-communicating-through-events-33pn | distributedsystems, pubsub, backend, dataengineering | ---
title: "Bridging Backend and Data Engineering: Communicating Through Events"
published: true
description: "Building a Unified System: Event-Driven Approach to Backend and Data Engineering Communication."
tags: DistributedSystems, PubSub, Backend, DataEngineering
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/66awd1frfybv88ztw89h.jpg
---

[Read the full article on packagemain.tech](https://packagemain.tech/p/bridging-backend-and-data-engineering)
| plutov |
1,888,114 | Introducing Semantic Kernel | Learn how to get started with Microsoft's Semantic Kernel | 0 | 2024-06-14T07:59:42 | https://dev.to/thwani47/introducing-semantic-kernel-41d5 | ai, machinelearning, semantickernel | ---
title: Introducing Semantic Kernel
published: true
description: Learn how to get started with Microsoft's Semantic Kernel
tags: ai,ml, semantickernel
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v6v2az1etoeex21i9fao.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-14 07:54 +0000
---
[Semantic Kernel](https://github.com/microsoft/semantic-kernel) is an open-source SDK that allows us to easily bring AI capabilities to our applications. It allows us to connect to AI services such as OpenAI and Azure OpenAI with ease. If you have worked with [LangChain](https://www.langchain.com/), Semantic Kernel is a Microsoft implementation of a project like LangChain. Semantic Kernel allows us to integrate AI functionality such as text generation, text summarization, chat completion, and image generation in our applications.

In this article, we'll be exploring Semantic Kernel, unraveling its terminology, and demonstrating its capabilities by building practical applications. This article will lay the foundation of the core concepts of Semantic Kernel and will ensure we have a solid understanding of the SDK.
The source code for the completed application can be found [here](https://github.com/Thwani47/blog-code/tree/main/SemanticKernelExample).
## Table of Contents
- [What is Semantic Kernel](#what-is-semantic-kernel)
- [Glossary of Terms](#glossary-of-terms)
- [Getting Started with Semantic Kernel in C#](#getting-started-with-semantic-kernel-in-c)
## What is Semantic Kernel
Semantic Kernel is a lightweight open-source SDK that allows developers to build their own copilot experiences. It allows us to easily integrate with AI plugins from OpenAI and Microsoft. This means we can integrate our applications with plugins that are built for services such as ChatGPT, Bing, and Microsoft 365 Copilot. Semantic Kernel allows us to integrate with these plugins using programming languages such as Python, C#, and Java (support for TypeScript is not yet available at the time of writing but is being worked on). This means that we can leverage the power of LLMs in our applications using the technology we use in our day-to-day development tasks.
Semantic Kernel provides an abstraction over integrating the power of LLMs into our applications. This shortens the learning curve for understanding the APIs of AI services such as Azure OpenAI and OpenAI, and since we can use programming languages such as C# and Python, we can get started quickly and easily integrate AI functionality into our applications.
Before we can get started with Semantic Kernel, we need to understand a few terms that will make working with the SDK easier.
## Glossary of Terms
Here are some of the terms that are used widely in the world of Semantic Kernel. Understanding them will make your experience with the SDK an easier one.
| Term | Description |
| - | - |
| Ask | This refers to the goal sent to Semantic Kernel that a user or a developer would like to accomplish. |
| Kernel | The kernel refers to an instance of the processing engine that fulfills a user's ask using a collection of plugins. |
| Plugin | A plugin refers to a group of functions that can be exposed to AI services. For example, if we had a plugin for sprint planning, we might have a function to summarize the sprint planning meeting, a function to get the planned objectives for the sprint and a function to create work items for those objectives. <br><br>Semantic Kernel comes with some plugins out-of-the-box such as the TextMemory, ConversationSummary, FileIO, and Time plugins. You can also build custom plugins that suit your requirements. <br><br>Some Microsoft documentation might still use the term **"Skills"** for plugins. Plugins were initially called skills but were renamed to Plugins to conform to the OpenAI standard.|
| Functions | To create your custom plugin, Semantic Kernel allows you to create two types of functions:<br><br>**Semantic Functions** → These functions allow your app to listen to users' asks and respond with a natural language response<br><br>**Native Functions** → These are functions that are written in Python or C# and the Kernel will call these functions based on users' asks|
| Planner | Semantic Kernel allows us to chain together different plugins that fulfill different goals. Semantic Kernel uses a function called a **Planner** that selects one or more functions from the registered plugins to execute based on the user's asks. For example, suppose we have a plugin named **WritePlugin** with the following functions defined:<br><br>**Brainstorm** → Given a goal or a topic, this function generates a list of ideas.<br>**ShortPoem** → Generates a short poem about a given topic.<br>**WriteStory** → Generates a short story with sub-chapters.<br>**Translate** → Translates a piece of text to your language of choice.<br><br>If the user's ask is **"Can you write a short poem in Spanish about living in Durban?"**, The Planner is responsible for selecting the functions to call and in the order they should be called. Which it will most likely (or should) call the **ShortPoem** and **Translate** functions|
| Prompts | Prompts serve as input to an AI model. Behind the scenes, Semantic Kernel Planner uses prompts to generate a plan (a list of actions to call and the order). <br><br>A model will generate different output based on the prompt provided.|
| Model | A model refers to a specific instance of an LLM* AI, such as GPT-3 or Codex.<br><br>**LLM (large language model) → An artificial intelligence model trained on a large text dataset.* |
Let's get started with Semantic Kernel below
## Getting Started with Semantic Kernel in C#
To get started, you'll need an OpenAI API Key. You can obtain one by either creating an [OpenAI](https://openai.com/api/) account or creating an [Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/quickstart?pivots=programming-language-studio). At the time of writing, access to the Azure OpenAI Service is only available by application. You can fill out your application [here](https://aka.ms/oai/access).
**Note:
You'll have to load some credits to be able to use the OpenAI models. You can load that on the [OpenAI billing page](https://platform.openai.com/account/billing/overview)**
We'll build a C# console application that accepts a user's ask and can perform a set of actions. The application should be able to
- Summarize a piece of text
- Write a short poem about a topic
- Generate a short story about a topic
- Translate a piece of text to another language.
Although we're using C#, the concepts should be the same in Python or Java if you prefer those. We will cover many of the concepts to get you started with Semantic Kernel, and in a follow-up article we'll learn how to give our applications 'memory'.
We first start by creating a new .NET console application
```bash
$ dotnet new console --framework net7.0 --name SemanticKernelExample
$ cd SemanticKernelExample
```
We then have to install a few Nuget packages
```bash
$ dotnet add package Microsoft.SemanticKernel --version 0.21.230828.2-preview
$ dotnet add package Microsoft.Extensions.Configuration.UserSecrets
```
*[Semantic Kernel](https://www.nuget.org/packages/Microsoft.SemanticKernel/) is still in preview at the time of writing this article, so we install the preview version.*
Define an app secret with your API Key using the command
```bash
$ dotnet user-secrets init
$ dotnet user-secrets set "OpenAI:APIKey" "<your-api-key>" #replace <your-api-key> with your Open API Key
```
We first need to create a kernel. We can create a kernel in two ways:
One, by calling the `KernelBuilder.Create()` method which returns an `IKernel` object
```csharp
var kernel = KernelBuilder.Create();
```
Two, by using the `KernelBuilder.Build()` method if we want to set extra configuration for the kernel such as loading the initial plugins, adding custom loggers, etc.
```csharp
var loggerFactory = NullLoggerFactory.Instance;
var kernel = new KernelBuilder()
    .WithLoggerFactory(loggerFactory)
    .Build();
```
For the **text summarization** functionality, we'll create a semantic function that will summarize an input text. We'll call this function **SummarizeText**. In your project root directory, create a **Plugins** folder, and inside that folder create a **WriterPlugin** folder. Inside the **WriterPlugin** folder, create a **SummarizeText** folder. Inside the **SummarizeText** folder create two files, **skprompt.txt** and **config.json**. The **skprompt.txt** file will contain the prompt that will be sent to the AI model, and the **config.json** file will contain the configuration for the function. The **skprompt.txt** file should contain the following text:
```txt
Summarize this in 3 sentences or less
{{$input}}
+++++
```
This is the prompt we'll pass to the AI service to summarize the text. During execution, Semantic Kernel will replace the `{{$input}}` value with the text to be summarized.
Update the **config.json** file with the following
```json
{
    "schema": 1,
    "type": "completion",
    "completion": {
        "max_tokens": 500
    },
    "input": {
        "parameters": [
            {
                "name": "input",
                "description": "The text to summarize",
                "defaultValue": ""
            }
        ]
    }
}
```
This config instructs Semantic Kernel to restrict the generated text to 500 tokens and to expect an input parameter named **input**. The **defaultValue** property is used to set a default value for the parameter if one is not provided.
We can update our **Program.cs** file with the following to see the function in action
```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.SemanticKernel;

var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddUserSecrets<Program>()
    .Build();

var apiKey = configuration.GetSection("OpenAI:APIKey").Value; // Gets the OpenAI API Key from the app secrets
var logger = NullLogger.Instance;

// Creates a new kernel with the OpenAI Text Completion Service using the text-davinci-003 model.
// You can see the list of models here: https://platform.openai.com/docs/models
var kernel = new KernelBuilder()
    .WithLogger(logger)
    .WithOpenAITextCompletionService("text-davinci-003", apiKey!)
    .Build();

var pluginsDirectory = Path.Combine(Directory.GetCurrentDirectory(), "Plugins");
var writerPlugin = kernel.ImportSemanticSkillFromDirectory(pluginsDirectory, "WriterPlugin"); // Loads the WriterPlugin plugin

var textToSummarize =
    @"The Mars Perseverance Rover, a part of NASA's Mars Exploration Program successfully landed on Mars on February 18, 2021.
Its mission is to explore the Martian surface, collect samples, and study the planet's geology and climate.
The rover is equipped with advanced instruments and cameras that allow scientists to analyze the terrain and search for signs of past microbial life.
This mission represents a significant step towards our understanding of Mars and its potential to support life in the past or present.";

var result = await writerPlugin["SummarizeText"].InvokeAsync(textToSummarize); // Calls the SummarizeText function
Console.WriteLine(result);
```
This outputs something like the following
```text
The Mars Perseverance Rover successfully landed on Mars on February 18, 2021, as part of NASA's Mars Exploration Program.
Its mission is to explore the Martian surface, collect samples, and study the planet's geology and climate.
It is equipped with advanced instruments and cameras to analyze the terrain and search for signs of past microbial life.
```
In the completed solution, I've added the prompts and the config files for the other functions. You can find the completed solution [here](https://github.com/Thwani47/blog-code/tree/main/SemanticKernelExample). The folder structure after adding all the functions looks like this

The updated **Program.cs** file should look like the following
```csharp
// ...kernel configuration
var pluginsDirectory = Path.Combine(Directory.GetCurrentDirectory(), "Plugins");
var writerPlugin = kernel.ImportSemanticSkillFromDirectory(pluginsDirectory, "WriterPlugin");
var textToSummarize =
@"The Mars Perseverance Rover, a part of NASA's Mars Exploration Program successfully landed on Mars on February 18, 2021.
Its mission is to explore the Martian surface, collect samples, and study the planet's geology and climate.
The rover is equipped with advanced instruments and cameras that allow scientists to analyze the terrain and search for signs of past microbial life.
This mission represents a significant step towards our understanding of Mars and its potential to support life in the past or present.";
var summary = await writerPlugin["SummarizeText"].InvokeAsync(textToSummarize);
Console.WriteLine($"Summarized text:\n{summary}");
var poem = await writerPlugin["WritePoem"].InvokeAsync("Being a Software Developer");
Console.WriteLine($"Generated poem:\n{poem}");
var shortStory = await writerPlugin["GenerateStory"].InvokeAsync("The Lion and the Lamb");
Console.WriteLine($"Generated story:\n{shortStory}");
var translationContext = kernel.CreateNewContext();
translationContext.Variables["input"] = textToSummarize;
translationContext.Variables["target"] = "French";
var translatedText = await writerPlugin["Translate"].InvokeAsync(translationContext);
Console.WriteLine($"Translated text:\n{translatedText}");
```
Notice that we created a **translationContext** variable, which is an **SKContext** object. Among other things, this object allows us to pass more than one value to our prompt. We use it here because the **Translate** function needs both the text to translate and the **target language**. Running this outputs something like the following
```text
*****Summarized text:*****
The Mars Perseverance Rover successfully landed on Mars on February 18, 2021, as part of NASA's Mars Exploration Program. Its mission is to explore the Martian surface, collect samples, and study the planet's geology and climate. It
is equipped with advanced instruments and cameras to analyze the terrain and search for signs of past microbial life.
*****Generated poem:*****
My coding skills are ever-growing,
My knowledge ever-expanding,
My work is never slowing,
My code is ever-commanding.
My projects are ever-thriving,
My skills are ever-thriving.
*****Generated story:*****
Once upon a time, there was a lion and a lamb who lived in the same forest. The lion was the king of the forest and the lamb was his loyal companion.
One day, the lion and the lamb went on a journey together. They encountered many obstacles along the way, but they worked together to overcome them.
Eventually, they reached their destination and the lion thanked the lamb for his help. From then on, the lion and the lamb were the best of friends and they lived happily ever after.
*****Translated text:*****
Le Rover Mars Perseverance, faisant partie du Programme d'Exploration de Mars de la NASA, s'est posé avec succès sur Mars le 18 février 2021.
Sa mission est d'explorer la surface martienne, de collecter des échantillons et d'étudier la géologie et le climat de la planète.
Le rover est équipé d'instruments et de caméras avancés qui permettent aux scientifiques d'analyser le terrain et de rechercher des signes de vie microbienne passée.
Cette mission représente une étape significative vers notre compréhension de Mars et de son potentiel pour soutenir la vie dans le passé ou le présent.
```
Lastly, we can use **Planner** to chain together different functions to fulfill a user's ask. We can update our **Program.cs** file with the following
```csharp
// ... Kernel config
var planner = new SequentialPlanner(kernel);
var ask = "Can you write a poem about being a software developer and translate it to German?";
var plan = await planner.CreatePlanAsync(ask);
var result = await plan.InvokeAsync();
Console.WriteLine(result);
```
This outputs something like the following
```text
Johns Leidenschaft für Programmierung war wie eine Flamme,
Seine Fähigkeiten und sein Wissen wuchsen stetig.
Er arbeitete hart, um die Welt zu einem besseren Ort zu machen,
Seine Arbeit war eine Quelle des Stolzes und der Anmut.
Er hörte nie auf zu lernen, immer bestrebt, zu wachsen,
Sein Einfluss auf die Welt wird er nie erfahren.
```
## Conclusion
In this article, we introduced Semantic Kernel, covered the core terminology, and created a basic C# application that uses the SDK. In a follow-up article, we'll learn how to give our applications 'memory' using Semantic Kernel so they can remember context.
Semantic Kernel is still in its infancy, but improvements are being made to the SDK every day, and it will become a very powerful and mature tool in no time. We can use Semantic Kernel for a wide range of applications: we can leverage cutting-edge AI models and plugins in our apps, and we can also build our own plugins. | thwani47 |
1,888,113 | The Rise of Sustainable Packaging & Eco-Friendly Air Freshener Market | Introduction: As consumer awareness of environmental issues continues to grow, so does the demand... | 0 | 2024-06-14T07:59:26 | https://dev.to/sneha_nextmsc/the-rise-of-sustainable-packaging-eco-friendly-air-freshener-market-1ad4 | airfreshnermarket, airfresheners, consumergoods, marketresearch |

**Introduction:**
As consumer awareness of environmental issues continues to grow, so does the demand for eco-friendly products across all industries, including the [Air Freshener Market](https://www.nextmsc.com/report/air-freshener-market).
Sustainable packaging and eco-friendly formulations have become key considerations for consumers looking to minimize their environmental footprint without sacrificing quality or effectiveness. In this article, we explore the emergence of sustainable packaging and eco-friendly air fresheners in the market, their benefits, and their impact on the environment.
**Download FREE Sample:** [https://www.nextmsc.com/air-freshener-market/request-sample](https://www.nextmsc.com/air-freshener-market/request-sample)
**The Shift Towards Sustainability in the Air Freshener Market**
With increasing concern about plastic pollution, carbon emissions, and waste generation, consumers are actively seeking products that align with their values of sustainability and environmental responsibility. In response to this demand, many companies in the air freshener market are embracing sustainable practices, including the use of eco-friendly ingredients, recyclable materials, and reduced packaging waste.
**Sustainable Packaging: A Key Consideration for Eco-Conscious Consumers**
Packaging plays a significant role in the environmental impact of air freshener products. Traditional packaging materials, such as plastic bottles and blister packs, contribute to plastic pollution and waste accumulation in landfills and oceans. Sustainable packaging options, on the other hand, prioritize eco-friendly materials and design principles to minimize environmental harm.
**Benefits of Sustainable Packaging for Air Fresheners**
**_1. Reduction of Plastic Waste_**
Sustainable packaging solutions for air fresheners often involve the use of biodegradable or compostable materials that break down naturally over time, reducing the accumulation of plastic waste in the environment. By choosing packaging made from renewable resources, companies can help mitigate the environmental impact of their products and contribute to a cleaner, healthier planet.
**_2. Lower Carbon Footprint_**
The production, transportation, and disposal of traditional packaging materials contribute to carbon emissions and climate change. Sustainable packaging options, such as recycled cardboard, paperboard, or plant-based plastics, typically have a lower carbon footprint than their conventional counterparts, helping to reduce greenhouse gas emissions and combat global warming.
**_3. Conservation of Resources_**
Sustainable packaging encourages the efficient use of resources by prioritizing materials that are renewable, recyclable, or biodegradable. By minimizing the consumption of finite resources, such as fossil fuels and virgin plastics, companies can contribute to resource conservation and promote environmental sustainability for future generations.
**_4. Enhanced Brand Reputation_**
Consumers are increasingly drawn to brands that demonstrate a commitment to sustainability and corporate social responsibility. By adopting sustainable packaging practices, companies can enhance their brand reputation, attract environmentally conscious consumers, and differentiate themselves from competitors in the market. A positive brand image can lead to increased customer loyalty and trust, driving sales and long-term business success.
**Eco-Friendly Formulations: Reducing Environmental Impact Without Compromise**
In addition to sustainable packaging, the formulation of air freshener products also plays a significant role in their environmental impact. Eco-friendly formulations prioritize natural, biodegradable ingredients that are safe for the environment and human health, while still delivering effective odor elimination and long-lasting freshness.
**Benefits of Eco-Friendly Air Fresheners**
**_1. Safe for Indoor Air Quality_**
Conventional air fresheners often contain synthetic fragrances and harsh chemicals that can contribute to indoor air pollution and respiratory irritation. Eco-friendly air fresheners, on the other hand, use natural ingredients derived from plants, essential oils, and botanical extracts that are safe for indoor air quality and do not release harmful toxins or VOCs (volatile organic compounds).
_**2. Biodegradability**_
Eco-friendly air fresheners are formulated with biodegradable ingredients that break down naturally in the environment, reducing the impact of product disposal on ecosystems and wildlife. Unlike conventional air fresheners that may contain non-biodegradable chemicals and microplastics, eco-friendly formulations are designed to decompose into harmless substances over time, minimizing pollution and waste accumulation.
**_3. Renewable Resources_**
Many eco-friendly air fresheners are made from renewable resources, such as plant-based ingredients or recycled materials, that can be replenished or recycled indefinitely without depleting finite resources. By harnessing the power of nature, companies can create sustainable products that minimize their ecological footprint and promote environmental stewardship.
**_4. Non-Toxic and Hypoallergenic_**
Eco-friendly air fresheners are formulated with non-toxic, hypoallergenic ingredients that are gentle on sensitive individuals and safe for use around children and pets. Unlike conventional air fresheners that may contain allergens, irritants, or artificial fragrances, eco-friendly formulations prioritize natural scents and botanical extracts that are less likely to trigger allergic reactions or respiratory sensitivities.
_**5. Cruelty-Free and Ethical**_
Many eco-friendly air fresheners are produced using cruelty-free and ethical manufacturing practices that prioritize animal welfare, worker rights, and fair trade principles. By choosing products that are certified cruelty-free and ethically sourced, consumers can support companies that uphold high ethical standards and promote compassion and social responsibility in the air freshener industry.
**Conclusion**:
Sustainable packaging and eco-friendly formulations are transforming the air freshener market, offering consumers greener alternatives that prioritize environmental responsibility and human health. By choosing products with sustainable packaging and eco-friendly formulations, consumers can reduce their environmental footprint, support ethical companies, and create a cleaner, healthier planet for future generations. As the demand for sustainable air fresheners continues to grow, companies that embrace eco-friendly practices will be well-positioned to succeed in an increasingly environmentally conscious market. | sneha_nextmsc |
1,888,106 | Deep Dive Into .NET Minimal APIs | Learn how to create a REST API using .NET Minimal APIs | 0 | 2024-06-14T07:48:36 | https://dev.to/thwani47/deep-dive-into-net-minimal-apis-399g | dotnet, api, csharp | ---
title: Deep Dive Into .NET Minimal APIs
published: true
description: Learn how to create a REST API using .NET Minimal APIs
tags: dotnet, api, csharp
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxpqvvhb53fs70ook7cl.png
# Use a ratio of 100:42 for best results.
# published_at: 2022-09-20 07:32 +0000
---
The release of .NET 6 introduced *minimal APIs*, which allows us to develop small and functional single-file web APIs. Minimal APIs were motivated by the amount of code required to produce a simple .NET API compared to producing an API providing similar functionality in other languages such as Python or Go. In this blog post, we will be taking a deep dive into minimal APIs, exploring how they differ from *controller-based* APIs, and seeing how to configure logging and dependency injection for a minimal API.
The complete source code for this article can be found [here](https://github.com/Thwani47/blog-code/tree/main/MinimalApiExample).
## Table of Contents
1. [Overview of .NET Minimal APIs](#overview-of-net-minimal-apis)
2. [Creating a Minimal API](#creating-a-minimal-api)
3. [Configuring a Minimal API](#configuring-a-minimal-api)
4. [CRUD methods in a Minimal API](#crud-methods-in-a-minimal-api)
5. [Summary](#summary)
## Overview of .NET Minimal APIs
Minimal APIs are designed to create web APIs with minimal dependencies. They are ideal for microservices and lightweight applications that need only the minimum files, configuration, and dependencies required for a functional API. For example, we can create a working API with just the following lines of code
```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.MapGet("/", () => "Hello World!");
app.Run();
```
Once we run the code above, we should see `Hello World!` in the browser. The minimal API code above is analogous to creating a hello world in [Express JS](http://expressjs.com/en/starter/hello-world.html).
Minimal APIs consist of:
- New hosting APIs
- WebApplication and WebApplicationBuilder
- New routing APIs
## Creating a minimal API
There are multiple ways to create a .NET minimal API. You can create one from Visual Studio (2022 or later) or from the command line. We'll use the command line in this article. Run the following command to create a minimal API (you need the [.NET 6](https://dotnet.microsoft.com/en-us/download/dotnet/6.0) SDK installed for this command to work):
```bash
$ dotnet new web --name MinimalApiExample --framework "net6.0"
```
This command creates a new .NET 6 web project, named `MinimalApiExample`, with the following files created in the project
- `appsettings.json`
- `appsettings.Development.json`
- `Program.cs`
Minimal APIs allow us to write our APIs in fewer files. We no longer have separate folders (and files) for controllers. We do not have the `Startup.cs` file anymore. We only have the `Program.cs` file for the API logic.
We can replace the lines
```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
```
with the line
```csharp
var app = WebApplication.Create(args);
```
if we want to create the application with preconfigured default values.
## Configuring a Minimal API
When a minimal API is created, the port the application listens on is specified inside the `Properties/launchSettings.json` file. We can override the port(s) in several ways.
We can set a single port:
```csharp
// ... app configuration
app.MapGet("/", () => "Hello World!");
app.Run("http://localhost:3000");
```
Visiting [http://localhost:3000](http://localhost:3000) should print `Hello World!` on the browser.
We can set multiple ports as follows:
```csharp
// ... app configuration
app.Urls.Add("http://localhost:3000");
app.Urls.Add("http://localhost:4000");
app.MapGet("/", () => "Hello World!");
app.Run();
```
Visiting [http://localhost:3000](http://localhost:3000) or [http://localhost:4000](http://localhost:4000) should print `Hello World!` on the browser.
We can also specify the port when we run the app via the command line:
```bash
$ dotnet run --urls="http://localhost:3000"
# dotnet run --urls="http://localhost:3000;http://localhost:4000" for multiple ports
```
We can also specify the port from the environment variables as follows
```csharp
var app = WebApplication.Create(args);
var port = Environment.GetEnvironmentVariable("PORT") ?? "3000";
app.MapGet("/", () => "Hello World!");
app.Run($"http://localhost:{port}");
```
To add Swagger support, first install Swashbuckle from the command line:
```bash
$ dotnet add package Swashbuckle.AspNetCore --version 6.2.3
```
In `Program.cs` add the following code
```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI(options =>
{
options.SwaggerEndpoint("/swagger/v1/swagger.json", "v1");
options.RoutePrefix = string.Empty;
});
}
app.MapGet("/", () => "Hello World!");
app.Run();
```
## CRUD methods in a Minimal API
As an example, we'll create a simple *Todo* API, using an [EF Core In-Memory DB](https://learn.microsoft.com/en-us/ef/core/providers/in-memory/?tabs=dotnet-core-cli).
The API will have the following endpoints
| Endpoint | Description | Request Body | Response Body|
|----------|-------------|--------------|---------------|
| `GET /todos`| Get all Todos| None |List of Todos |
| `GET /todos/complete` | Get all complete Todos| None|List of Todos |
|`GET /todos/{id}`| Get a Todo item by its id| None|Todo item |
| `POST /todos` | Create a new Todo item| Todo item|Todo item|
|`PUT /todos/{id}` | Update an existing Todo |Todo item | None |
| `DELETE /todos/{id}`| Delete a Todo item | None | None |
To install the in-memory DB provider, run the following command in the command line
```bash
$ dotnet add package Microsoft.EntityFrameworkCore.InMemory
```
In the `Program.cs` file, add the following code to configure the `Todo` model and the DbContext (this code requires `using Microsoft.EntityFrameworkCore;` at the top of the file)
```csharp
// ... app configuration
app.Run();
public class Todo
{
public Guid Id {get; set;}
public string? Name {get; set;}
public bool IsComplete {get; set;}
}
class TodoDb : DbContext
{
public TodoDb(DbContextOptions<TodoDb> options) : base(options)
{
}
public DbSet<Todo> Todos => Set<Todo>();
}
```
Add the following code to register the `DbContext` with the DI container
```csharp
//... builder configuration
builder.Services.AddDbContext<TodoDb>(options => options.UseInMemoryDatabase("Todos"));
var app = builder.Build();
// ... rest of code
```
We add the following code to add endpoints for our API
```csharp
// .. app configuration
var app = builder.Build();
app.MapGet("/todos", async (TodoDb db) => await db.Todos.ToListAsync());
app.MapGet("todos/{id:guid}", async (Guid id, TodoDb db) =>
{
var todo = await db.Todos.FindAsync(id);
return todo == null ? Results.NotFound() : Results.Ok(todo);
});
app.MapGet("/todos/complete", async (TodoDb db) => await db.Todos.Where(todo => todo.IsComplete).ToListAsync());
app.MapPost("todos", async (Todo todo, TodoDb db) =>
{
db.Todos.Add(todo);
await db.SaveChangesAsync();
return Results.Created($"/todos/{todo.Id}", todo);
});
app.MapPut("todos/{id:guid}", async (Guid id, Todo update, TodoDb db) =>
{
var todo = await db.Todos.FindAsync(id);
if (todo == null)
{
return Results.NotFound();
}
todo.Name = update.Name;
todo.IsComplete = update.IsComplete;
await db.SaveChangesAsync();
return Results.NoContent();
});
app.MapDelete("todos/{id:guid}", async (Guid id, TodoDb db) =>
{
var todo = await db.Todos.FindAsync(id);
if (todo == null)
{
return Results.NotFound();
}
db.Todos.Remove(todo);
await db.SaveChangesAsync();
return Results.NoContent();
});
app.Run();
// ... rest of code
```
## Summary
Minimal APIs are an ideal way of building .NET APIs without much of the ceremony involved in controller-based APIs. They are a good fit for applications in a microservices-based solution. They do not, however, replace controller-based APIs: as complexity grows and we need more features in our APIs, controller-based APIs become the more suitable choice. | thwani47 |
1,888,112 | Streamlining Workflows: Power BI - Your Automation Ally | Free up valuable resources and boost efficiency! Power BI automates data refresh and report... | 0 | 2024-06-14T07:59:25 | https://dev.to/akaksha/streamlining-workflows-power-bi-your-automation-ally-59p0 | Free up valuable resources and boost efficiency! Power BI automates data refresh and report generation, allowing you to focus on strategic analysis rather than manual tasks. Let Power BI handle the heavy lifting. In the age of information overload, visual storytelling is no longer a luxury; it's a necessity. [Power BI ](https://www.clariontech.com/blog/powerful-advantages-of-microsoft-power-bi)empowers you to transform raw data into captivating narratives, turning complex information into clear and actionable insights. By unleashing the power of visual storytelling, you can captivate audiences, drive data-driven decision-making, and propel your business forward. | akaksha | |
1,888,111 | How to Choose the Right CBD Product for Your Pet? | If you know CBD's importance and beneficial effects for your pet's health, you are definitely up to... | 0 | 2024-06-14T07:58:18 | https://dev.to/sophia_rose_bc4ac299db6c9/how-to-choose-the-right-cbd-product-for-your-pet-5331 | If you know CBD's importance and beneficial effects for your pet's health, you are definitely up to try it out. There are a huge number of CBD products in the market, so it is time to choose the best one for your furry friend. Natural sources of treatment are much better not only for our health but also for our pets to avoid the daunting side effects of chemicals and drugs.
This natural option is not only effective for human health, but it also shows positive effectiveness for pets. It is used to treat different health problems. There are different things to look for when choosing the best CBD type and product for your pet.
This post contains the most important tips for getting the best CBD product for your pet.
## 6 Things to Consider While Choosing the Best CBD Products for Your Pet
Pet owners should follow a checklist before considering CBD for their pets.
Some of the major points to look at when choosing the right CBD for your pet are.
## 1. Quality Matters
When it comes to choosing the right CBD product for your pet, quality is paramount. The efficacy and safety of CBD largely depend on the quality of the product.
You need to consider the following key points:
Source of CBD
The source of the CBD should be verifiable. It is important to look at where the CBD comes from, as this determines its quality. Products derived from high-quality hemp plants are good for your pets.
Hemp plants absorb toxins from the soil, so it's essential to choose hemp that is grown in clean, organic soil free from harmful chemicals and pesticides. This ensures the CBD extract is free from contaminants that could harm your pet.
Check for Third-Party Testing
Third-party testing is very important, as it verifies the purity, potency, and overall quality of the CBD and confirms it is free from contaminants. Companies that offer CBD are aware of the importance of transparency and accountability. They subject their products to third-party testing by independent laboratories.
These labs analyze CBD products for purity, potency, and the presence of any harmful substances such as heavy metals, pesticides, or residual solvents.
## 2. Think About the Type of Product
There are different types of CBD available in the market, so it is important to choose the form your pet will like.
Some of the majorly used forms of CBD products are:
CBD Oil
This form of CBD is easy to use and mostly comes in a dropper bottle. CBD oil is used for different purposes; you can give it to your pet directly or mix it with food or water.
It absorbs quickly in the bloodstream and creates a fast and long-lasting effect.
CBD Tincture
Tinctures are quite similar to CBD oils. The major difference between CBD oil and tincture is the higher concentration of CBD in the latter. Tinctures are made by infusing CBD extract with a carrier oil, such as coconut oil or hemp seed oil, and often come in a bottle with a dropper for easy administration.
[CBD tinctures help with different pet health conditions](https://cbdbonuses.com/cbd-pet-tincture/). These tinctures offer the same versatility as CBD oil, allowing you to mix them with your pet's food or administer them directly into their mouths.
CBD Treats
CBD treats are like regular treats but with added CBD. They're a tasty way to give your pet their daily dose of CBD.
CBD Capsules
Capsules are convenient because they provide a precise dose of CBD. You can hide them in your pet's food or give them directly.
CBD Topicals
These are creams or balms you apply directly to your pet's skin. They can help target specific areas of discomfort or inflammation.
## 3. Check the Potency and Dosage
The proper potency and dosage of CBD are important for your pet's safety and the effectiveness of the treatment.
For a better selection, here is what you need to know:
It's best to start with a low dose of CBD and gradually increase it until you find the right amount for your pet. This helps avoid giving them too much CBD at once.
Check the label to see how much CBD is in each serving or dose. This helps you give your pet the right amount of CBD.
## 4. Look at the Ingredients
The ingredients of CBD products ensure that the products are safe for your pet or not. The ingredients make your selection better for your pet.
What things you need to look for are.
It should be natural, and products with artificial flavors, colors, or additives should be avoided.
It doesn't contain any ingredients that your pet might be allergic to. Common allergens include wheat, soy, and dairy.
## 5. Consider the Brand's Reputation
A brand's reputation indicates how satisfied customers are with its products. To choose the best CBD product:
Research the brand and check the reviews of the specific product.
A good and reputable CBD company will be transparent about where it gets its CBD and how it's made. It should also provide third-party lab reports to show that its products are safe and effective.
## 6. Consult with Your Veterinarian
After taking all these considerations, one thing that determines your efforts is the consultation with the pet care.
Making an appointment with your veterinarian is important before giving your pet CBD. They can give you personalized advice based on your pet's health and any medications they're taking.
The vet's recommendations help you decide if CBD is right for your pet and recommend the best product and dosage.
## The Bottom Line
Choosing the best CBD product in a diverse market takes some care. Talk to your veterinarian first, then start your pet on a small dose of CBD and increase it gradually toward the recommended amount.
Along with this, do not forget to monitor your pet's response. If it shows any negative effects, stop the dose right away; if your pet is doing well, you can continue to increase the dose gradually.
This will keep your pet safe and healthy.
| sophia_rose_bc4ac299db6c9 | |
1,888,110 | How to Get a Research Paper Published | Publishing your research is a significant milestone in your academic career. Here’s a step-by-step... | 0 | 2024-06-14T07:55:09 | https://dev.to/miteshbathri/how-to-get-a-research-paper-published-e49 |
[Publishing your research](https://www.ijset.in/how-to-get-research-paper-published/) is a significant milestone in your academic career. Here’s a step-by-step guide to help you through the process:
**1. Select the Right Journal**
**Research Journals:** Look for journals that align with your field of study. Utilize journal databases like PubMed, Google Scholar, and the Directory of Open Access Journals (DOAJ) to find potential journals.
**Impact Factor:** Consider the journal’s impact factor, which reflects its influence in the academic community. Higher impact factors often indicate higher prestige.
**2. Prepare Your Manuscript**
**Follow Guidelines:** Adhere to the submission guidelines of your chosen journal. This includes formatting, length, and style requirements.
**Write Clearly:** Ensure your paper is clear, concise, and free of jargon. Use straightforward language to convey your research findings.
**3. Write a Compelling Abstract**
**Summarize Key Points:** Your abstract should concisely summarize the main objectives, methods, results, and conclusions of your research.
**Be Precise:** Highlight the significance of your study and its potential impact on the field.
**4. Include a Convincing Cover Letter**
**Personalize It:** Address the editor by name and explain why your paper is a good fit for their journal.
**Highlight Novelty:** Emphasize the unique aspects of your research and its contributions to existing knowledge.
**5. Submit Your Manuscript**
**One Journal at a Time:** Avoid submitting the same paper to multiple journals simultaneously. Most journals require an exclusive submission.
**Online Submission Systems:** Use the journal’s online submission system to upload your manuscript and any required supplementary materials.
**6. Peer Review Process**
**Be Patient:** The review process can take several weeks to months. Reviewers will evaluate your paper’s quality, originality, and relevance.
**Respond to Feedback:** Address reviewers’ comments thoroughly and make necessary revisions to improve your paper.
**7. Revise and Resubmit**
**Make Changes:** Incorporate feedback from the reviewers and editors. This might involve substantial changes or minor adjustments.
**Resubmit:** Once revisions are complete, resubmit your manuscript along with a detailed response to the reviewers’ comments.
**8. Post-Acceptance Steps**
**Proofreading:** Carefully proofread the final version of your paper before it goes to print.
**Copyright Transfer:** Sign and submit any required copyright transfer agreements.
**9. Promote Your Work**
**Share Widely:** Use social media, academic networking sites, and institutional repositories to disseminate your published research.
**Engage with the Community:** Participate in conferences and workshops to present your findings and engage with peers.
**Additional Tips**
**Networking:** Connect with other researchers and professionals in your field to stay updated on the latest trends and opportunities for collaboration.
**Continuous Learning:** Stay informed about new research and advancements in your field by reading widely and attending relevant events.
By following these steps, you can increase your chances of successfully publishing your research and contributing valuable knowledge to your academic community. | miteshbathri | |
1,888,109 | Building a CRUD app with React Query, TypeScript, and Axios | Learn how to use React Query for data fetching and state management in a React application | 0 | 2024-06-14T07:54:46 | https://dev.to/thwani47/building-a-crud-app-with-react-query-typescript-and-axios-2d0j | typescript, react, reactquery, vite | ---
title: Building a CRUD app with React Query, TypeScript, and Axios
published: true
description: Learn how to use React Query for data fetching and state management in a React application
tags: typescript, react, reactquery, vite
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gjmup300p09rpgkdtv4v.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-14 07:50 +0000
---
[TanStack Query](https://tanstack.com/query/latest), also known as **React Query**, is described as the missing data-fetching library for web applications. **React Query** makes fetching, caching, syncing, and updating server state a breeze.
React Query comes with an opinionated way of fetching and updating data. It makes complex data fetching and caching scenarios easier to manage in our React applications while providing a simpler, more intuitive API than traditional state management libraries. Using React Query also has other benefits, such as:
- **Error handling** - React Query provides robust error handling capabilities. This helps with handling errors that occur during data fetching or mutations.
- **Global state management** - React Query provides a central place to manage global state. Making it easy to share data across components and avoid prop drilling
- **Type safety** - React Query is built with TypeScript and provides strong type safety. This makes it easier to catch errors and avoid bugs.
- **Optimistic updates** - React Query allows us to perform optimistic updates, which means we can update the UI immediately after a mutation is performed, before waiting for the server response.
- **Automatic data re-fetching** - React Query can automatically re-fetch data from the server based on several conditions, such as when a mutation is performed, the window is refocused, or a component is mounted.
In this article, we will be learning how to use React Query to fetch and update server data in a React application using Axios.
The complete source code for this article can be found [here](https://github.com/Thwani47/blog-code/tree/main/react-query-crud-example)
## Table of Contents
- [Creating the project](#creating-the-project)
- [Setting up the backend server](#setting-up-the-backend-server)
- [Setting up React Query](#setting-up-react-query)
- [Setting up Axios](#setting-up-axios)
- [Fetch all Todos](#fetch-all-todos)
- [Add a new Todo](#add-a-new-todo)
- [Delete a Todo](#delete-a-todo)
- [Edit a Todo](#edit-a-todo)
- [Conclusion](#conclusion)
## Creating the project
We'll be using [Vite](https://thwanisithole.co.za/posts/using-vite-for-react-apps/) for our web app. Run
```bash
$ npm create vite@latest react-query-crud-example
```
and follow the prompts to select `React` as the framework and `TypeScript` as the variant. Change the directory into the project directory and run
```bash
$ npm i @tanstack/react-query @tanstack/react-query-devtools axios formik yup react-router-dom json-server
$ npm i -D tailwindcss postcss autoprefixer
```
The above commands install:
- React Query
- React Query dev tools
- Axios - an HTTP client.
- formik - a React Form Library.
- yup - A form data validation library.
- React Router - a React routing library.
- JSON Server - a fake REST API. We'll be using this as the backend server to fetch data from and send data to.
- Tailwind CSS - A CSS framework we'll use to style our components.
Run
```bash
$ npx tailwindcss init -p
```
to generate a `tailwind.config.cjs` file. Replace the contents of `tailwind.config.cjs` with
```js
/*tailwind.config.cjs*/
/** @type {import('tailwindcss').Config} */
module.exports = {
content: [
"./index.html",
"./src/**/*.{js,ts,jsx,tsx}",
],
theme: {
extend: {},
},
plugins: [],
}
```
Replace the contents of **index.css** with
```css
/*index.css*/
@tailwind base;
@tailwind components;
@tailwind utilities;
```
## Setting up the backend server
We'll be using [json-server](https://github.com/typicode/json-server) to create the REST API that we'll fetch data from. In the root of the project, create a **db.json** file with the contents
```js
/*db.json*/
{
"todos" : [
{
"id" : 1,
"title": "Learn React Query",
"complete": false
}
]
}
```
In the **package.json** file, add a **server** script as follows
```js
/*package.json*/
// ...
"scripts": {
"dev": "vite",
"build": "tsc && vite build",
"preview": "vite preview",
"server" : "json-server --watch db.json --port 5000" // ADD THIS LINE
}
// ...
```
In the command line, run
```bash
$ npm run server
```
This starts a REST API that we can consume at http://localhost:5000/todos. The API has these endpoints
| Endpoint | Action |
| - | - |
| `GET /todos` | Get all Todos |
| `GET /todos?complete=true` | Get all complete Todos |
| `GET /todos/{id}` | Get a Todo item by its id |
| `POST /todos` | Create a new Todo item |
| `PUT /todos/{id}`| Update an existing Todo |
| `DELETE /todos/{id}` | Delete a Todo item |
## Setting up React Query
To be able to use React Query in our application, we need to wrap the `QueryClientProvider` component around our application entry point. Update the `main.tsx` file as follows
```typescript
/*main.tsx*/
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App';
import './index.css';
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { ReactQueryDevtools } from '@tanstack/react-query-devtools';
import { createBrowserRouter, RouterProvider } from 'react-router-dom';
const queryClient = new QueryClient();
const router = createBrowserRouter([
{
path: '/',
element: <App />
},
{
path : '*',
element : <h1>Page not found: 404</h1>
}
]);
ReactDOM.createRoot(document.getElementById('root') as HTMLElement).render(
<React.StrictMode>
<QueryClientProvider client={queryClient}>
<RouterProvider router={router}/>
<ReactQueryDevtools />
</QueryClientProvider>
</React.StrictMode>
);
```
React Query works with zero config and can be customized to meet our application's requirements. The above sets up React Query with the default options. We can override the defaults by passing a config object to the query client as follows
```js
// ...
const queryClient = new QueryClient({
  defaultOptions: {
    queries: { // data fetching config
      refetchOnWindowFocus: false,
      refetchOnMount: false,
      retry: false
      // ... rest of the config
    },
    mutations: {
      // mutations config
    }
  }
});
// rest of code...
```
## Setting up Axios
Create a new `src/api/client.ts` file with the following contents
```typescript
import axios from "axios";
export const client = axios.create({
baseURL : 'http://localhost:5000/todos',
headers: {
'Content-Type': 'application/json'
}
})
```
This creates and exports an axios client preconfigured with our server's base URL.
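As a rough illustration (this is not axios's actual implementation), a client configured with a `baseURL` simply resolves every request path against that base:

```js
// Simplified sketch of base-URL resolution: trim extra slashes and join.
function resolveUrl(baseURL, path) {
  return baseURL.replace(/\/+$/, "") + "/" + path.replace(/^\/+/, "");
}

console.log(resolveUrl("http://localhost:5000/todos", "/1"));
// http://localhost:5000/todos/1
```

So a call like `client.get('/1')` ends up hitting `http://localhost:5000/todos/1`.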
## Fetch all Todos
On the main page of the app, we want to fetch all todo items from the server.
First, create a `src/types/todo.types.ts` file with the contents
```typescript
export interface TodoItem {
id: number;
title: string;
complete: boolean;
}
```
Create a `src/hooks/useFetchTodos.ts` file with the contents
```typescript
/*useFetchTodos.ts*/
import { QueryObserverResult, useQuery } from '@tanstack/react-query';
import { AxiosResponse } from 'axios';
import { client } from '../api/client';
import { TodoItem } from '../types/todo.types';
const fetchTodos = async (): Promise<AxiosResponse<TodoItem[], any>> => {
return await client.get<TodoItem[]>('/');
};
export const useFetchTodos = (): QueryObserverResult<TodoItem[], any> => {
return useQuery<TodoItem[], any>({
queryFn: async () => {
const { data } = await fetchTodos();
return data;
},
queryKey: [ 'todos' ]
});
};
```
This creates a `useFetchTodos` hook that fetches data from the API server.
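The `queryKey` is what makes this more than a plain fetch: React Query caches results under the serialized key, so every component using the same key shares one request and one copy of the data. Here is a toy, synchronous sketch of the idea (the real cache is async and also tracks staleness and subscribers):

```js
const cache = new Map();

function query(queryKey, queryFn) {
  const key = JSON.stringify(queryKey);
  if (!cache.has(key)) {
    cache.set(key, queryFn()); // first caller pays for the fetch
  }
  return cache.get(key); // later callers get the cached result
}

let fetches = 0;
const fetchTodos = () => { fetches++; return [{ id: 1, title: "Learn React Query" }]; };

query(["todos"], fetchTodos);
query(["todos"], fetchTodos); // served from cache
console.log(fetches); // 1
```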
We can use our hook in our `App.tsx` as follows:
```typescript
/*App.tsx*/
import { useNavigate } from 'react-router-dom';
import './App.css';
import { useFetchTodos } from './hooks/useFetchTodos';
function App() {
const { data: todos, isLoading, isError } = useFetchTodos();
const navigate = useNavigate();
return (
<div className="w-full mt-2 items-center bg-gray-100 min-h-screen">
<h1 className="text-4xl font-bold mb-4">Todo List</h1>
<button className="bg-green-500 hover:bg-green-700 text-white font-bold py-1 px-2 ml-2 rounded mb-4" onClick={() => navigate('/add-todo')}>New Todo</button>
<hr className="mb-2"/>
{
isLoading? <h1>Loading...</h1> : isError ? <h1>Error fetching todos</h1> : (
<ul>
{todos?.map(todo => {
return <li className={`mb-2 text-xl ${todo.complete ? 'line-through' : ''}`} key={todo.id}>{todo.title}
<button className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-1 px-2 ml-2 rounded">Edit</button>
<button className="bg-red-500 hover:bg-red-700 text-white font-bold py-1 px-2 ml-2 rounded">Delete</button>
</li>
})}
</ul>
)
}
</div>
);
}
export default App;
```
## Add a new Todo
First, we create a `src/TodoItemForm.tsx` component with the contents
```typescript
import React from 'react';
import { ErrorMessage, Field, Form, Formik } from 'formik';
import * as yup from 'yup';
import { TodoInput, TodoItem } from './types/todo.types';
type Props = {
action: string;
todoItem: TodoItem | undefined;
handleSubmit: (values: TodoInput) => void;
};
export default function TodoItemForm({ todoItem, handleSubmit, action }: Props) {
return (
<Formik
initialValues={{
title: todoItem? todoItem.title : '',
complete: todoItem ? todoItem.complete: false
}}
validationSchema={yup.object({
title: yup.string().required('Title is required')
})}
onSubmit={(values: TodoInput) => handleSubmit(values)}
>
<Form>
<div className="mb-2">
<label htmlFor="title" className="mr-2">
Title
</label>
<Field
name="title"
type="text"
id="title"
className="shadow appearance-none border rounded py-1 px-2 text-gray-700 leading-tight focus:outline-none focus:shadow-outline"
/>
<ErrorMessage name="title" component="span" className="text-red-500" />
</div>
<div>
<label htmlFor="complete" className="mr-2">
Complete
</label>
<Field name="complete" type="checkbox" id="complete" />
</div>
<button className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-1 px-2 ml-2 rounded">
{action}
</button>
</Form>
</Formik>
);
}
```
We're creating a reusable component that will come in handy when we want to edit our Todos.
Then we create an `src/AddTodo.tsx` component with the following contents
```typescript
import React from 'react';
import { useAddTodo } from './hooks/useAddTodo';
import TodoItemForm from './TodoItemForm';
export default function AddTodo() {
const { mutate: addTodo } = useAddTodo();
return (
<div className="w-full mt-2 items-center bg-gray-100 min-h-screen">
<h1 className="text-4xl font-bold mb-4">New Todo</h1>
<TodoItemForm todoItem={undefined} handleSubmit={addTodo} action="Add Todo" />
</div>
);
}
```
We then add the component to our router in `main.tsx` as follows
```typescript
import AddTodo from './AddTodo';
// ...
const router = createBrowserRouter([
{
path: '/',
element: <App />
},
{
path: '/add-todo',
element: <AddTodo/>
},
{
path : '*',
element : <h1>Page not found: 404</h1>
}
]);
// ...
```
We then need to create the `useAddTodo` hook that we reference in our `AddTodo` component. Create a `src/hooks/useAddTodo.ts` file with the following contents
```typescript
import { UseBaseMutationResult } from '@tanstack/react-query';
import { useMutation, useQueryClient } from '@tanstack/react-query';
import { AxiosResponse } from 'axios';
import { useNavigate } from 'react-router-dom';
import { client } from '../api/client';
import { TodoInput } from '../types/todo.types';
const addTodo = async (todo: TodoInput): Promise<AxiosResponse<TodoInput, any>> => {
return await client.post<TodoInput>('/', todo);
};
export const useAddTodo = (): UseBaseMutationResult<AxiosResponse<TodoInput, any>, unknown, TodoInput, unknown> => {
const queryClient = useQueryClient();
const navigate = useNavigate();
return useMutation({
mutationFn: (todo: TodoInput) => addTodo(todo),
onSuccess: () => {
queryClient.invalidateQueries([ 'todos' ]);
navigate('/', { replace: true });
}
});
};
```
Add the following `TodoInput` definition to `src/types/todo.types.ts`
```typescript
export interface TodoInput {
title: string;
complete: boolean;
}
```
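In the `useAddTodo` hook above, `onSuccess` calls `invalidateQueries(['todos'])`. Invalidation matches keys by prefix, which is why invalidating `['todos']` refetches the list while leaving unrelated keys alone. A simplified sketch of the matching rule (React Query's real matching also supports partial object filters):

```js
// Does queryKey start with every element of prefix?
function keyMatchesPrefix(prefix, queryKey) {
  return prefix.every(
    (part, i) => JSON.stringify(queryKey[i]) === JSON.stringify(part)
  );
}

console.log(keyMatchesPrefix(["todos"], ["todos"]));   // true  -> invalidated
console.log(keyMatchesPrefix(["todo"], ["todo", 1]));  // true  -> invalidated
console.log(keyMatchesPrefix(["todos"], ["todo", 1])); // false -> untouched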
## Delete a Todo
To delete a todo we first add a `src/hooks/useDeleteTodo.ts` hook file with the contents
```typescript
import { UseBaseMutationResult } from '@tanstack/react-query';
import { useMutation, useQueryClient } from '@tanstack/react-query';
import { AxiosResponse } from 'axios';
import { client } from '../api/client';
const deleteTodo = async (todoId: number): Promise<AxiosResponse<any, any>> => {
return await client.delete(`/${todoId}`);
};
export const useDeleteTodo = (): UseBaseMutationResult<AxiosResponse<any, any>, unknown, number, unknown> => {
const queryClient = useQueryClient();
return useMutation({
mutationFn: (todoId: number) => deleteTodo(todoId),
onSuccess: () => {
queryClient.invalidateQueries([ 'todos' ]);
}
});
};
```
Then update `App.tsx` and add an `onClick` event handler to the delete button as follows
```typescript
// ...
import {useDeleteTodo} from './hooks/useDeleteTodo'
function App(){
// ...
const {mutate : deleteTodo} = useDeleteTodo()
return (
//...
{todos?.map(todo => {
  return (
    <li className={`mb-2 text-xl ${todo.complete ? 'line-through' : ''}`} key={todo.id}>{todo.title}
      <button className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-1 px-2 ml-2 rounded">Edit</button>
      <button
        className="bg-red-500 hover:bg-red-700 text-white font-bold py-1 px-2 ml-2 rounded"
        onClick={() => deleteTodo(todo.id)}>
        Delete
      </button>
    </li>
  );
})}
)
}
```
## Edit a Todo
We first create a `src/EditTodo.tsx` component with the contents
```typescript
import React from 'react';
import { useParams } from 'react-router-dom';
import { useFetchTodo } from './hooks/useFetchTodo';
import { useEditTodo } from './hooks/useEditTodo';
import TodoItemForm from './TodoItemForm';
export default function EditTodo() {
const { id } = useParams();
const { data: todoItem, isLoading } = useFetchTodo(id ? parseInt(id) : 0);
const { mutate: editTodo } = useEditTodo(id ? parseInt(id) : 0);
return (
<div className="w-full mt-2 items-center bg-gray-100 min-h-screen">
<h1 className="text-4xl font-bold mb-4">Edit Todo</h1>
{isLoading? <h1>Fetching todo...</h1> : <TodoItemForm todoItem={todoItem} handleSubmit={editTodo} action="Edit Todo"/> }
</div>
);
}
```
We then add the component to our router in `main.tsx` as follows
```typescript
import EditTodo from './EditTodo';
// ...
const router = createBrowserRouter([
{
path: '/',
element: <App />
},
{
path: '/add-todo',
element: <AddTodo/>
},
{
path: '/edit-todo/:id',
element: <EditTodo />
},
{
path : '*',
element : <h1>Page not found: 404</h1>
}
]);
// ...
```
Then update `App.tsx` and add an `onClick` event handler to the edit button as follows
```typescript
{isLoading? <h1>Loading...</h1> : isError ? <h1>Error fetching todos</h1> : (
<ul>
{todos?.map(todo => {
return <li className={`mb-2 text-xl ${todo.complete ? 'line-through' : ''}`} key={todo.id}>{todo.title}
<button className="bg-blue-500 hover:bg-blue-700 text-white font-bold py-1 px-2 ml-2 rounded" onClick={() => navigate(`/edit-todo/${todo.id}`)}>Edit</button>
<button className="bg-red-500 hover:bg-red-700 text-white font-bold py-1 px-2 ml-2 rounded" onClick={() => deleteTodo(todo.id)}>Delete</button>
</li>
})}
</ul>
)}
```
We then create two hooks: a `useFetchTodo` hook that fetches the data for a single Todo item, and a `useEditTodo` hook that edits a Todo item.
Create a `src/hooks/useFetchTodo.ts` file with the following contents
```typescript
import { QueryObserverResult, useQuery } from '@tanstack/react-query';
import { AxiosResponse } from 'axios';
import { client } from '../api/client';
import { TodoItem } from '../types/todo.types';
const fetchTodo = async (todoId: number): Promise<AxiosResponse<TodoItem, any>> => {
return await client.get<TodoItem>(`/${todoId}`);
};
export const useFetchTodo = (todoId: number): QueryObserverResult<TodoItem, any> => {
return useQuery<TodoItem, any>({
queryFn: async () => {
const { data } = await fetchTodo(todoId);
return data;
},
queryKey: [ 'todo', todoId ]
});
};
```
Then add a `src/hooks/useEditTodo.ts` file with the following contents
```typescript
import { UseBaseMutationResult } from '@tanstack/react-query';
import { useMutation, useQueryClient } from '@tanstack/react-query';
import { AxiosResponse } from 'axios';
import { useNavigate } from 'react-router-dom';
import { client } from '../api/client';
import { TodoInput } from '../types/todo.types';
const editTodo = async (todoId: number, todo: TodoInput): Promise<AxiosResponse<TodoInput, any>> => {
return await client.put<TodoInput>(`/${todoId}`, todo);
};
export const useEditTodo = (
todoId: number
): UseBaseMutationResult<AxiosResponse<TodoInput, any>, unknown, TodoInput, unknown> => {
const queryClient = useQueryClient();
const navigate = useNavigate();
return useMutation({
mutationFn: (todo: TodoInput) => editTodo(todoId, todo),
onSuccess: () => {
queryClient.invalidateQueries([ 'todos' ]);
navigate('/', { replace: true });
}
});
};
```
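Note that the hook issues a `PUT`, which json-server treats as a full replacement of the stored todo (the id comes from the URL). That is why `TodoItemForm` submits every field; anything omitted would be dropped. Conceptually:

```js
// Sketch of PUT (replace) semantics; not json-server's actual code.
function applyPut(todos, id, body) {
  return todos.map((t) => (t.id === id ? { id, ...body } : t));
}

const before = [{ id: 1, title: "Learn React Query", complete: false }];

// Full body: the todo is replaced field-for-field.
const after = applyPut(before, 1, { title: "Learn React Query", complete: true });
console.log(after[0].complete); // true

// Partial body: the omitted `complete` field is lost, not preserved.
const partial = applyPut(before, 1, { title: "Renamed" });
console.log(partial[0].complete); // undefined
```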
## Conclusion
React Query is a powerful library that makes data fetching and state management in React applications easy and efficient. It provides a simple and intuitive API for managing server state and has features such as caching, re-fetching, polling, and more.
In this article we saw how to create a new React application, set up React Query, and fetch data from a REST API using Axios. There is a lot of React Query we haven't touched on, such as advanced configuration and testing, as it is outside the scope of this article. | thwani47 |
1,888,086 | Build a chat room with custom bots powered by OpenAI/Gemini | This app combines popular chat platforms like Discord/Slack, and the powerful minds of LLMs like... | 0 | 2024-06-14T07:49:09 | https://dev.to/encore/build-a-chat-room-with-custom-bots-powered-by-openaigemini-47h3 | go, ai, programming, tutorial | 
This app combines popular chat platforms like Discord/Slack, and the powerful minds of LLMs like OpenAI and Google Gemini.
It lets you create your own AI bots with unique personalities that can seamlessly engage in conversations with each other and with users.
**Demo app:** [Try the Demo version here](https://chatty.encore.dev)
In this guide we'll walk through how to run it locally, deploy to Encore's free dev cloud, integrate with your Slack/Discord, and how to develop the application further.
### TL;DR - What we'll build
* **Open Source:** Go-based application built using [Encore](https://encore.dev) ([See Open Source Repo](https://github.com/encoredev/examples/edit/main/ai-chat/README.md))
* **Multi-platform:** Run locally or deploy to your cloud / AWS / GCP / Encore's free dev cloud
* **Multi-model:** Use your OpenAI or Gemini API key to try out each model
* **Integrations:** Add your favorite bots to your Discord or Slack
## 🏁 Getting started
### 💽 Install Encore
Install the Encore CLI to run your local environment:
- **macOS:** `brew install encoredev/tap/encore`
- **Linux:** `curl -L https://encore.dev/install.sh | bash`
- **Windows:** `iwr https://encore.dev/install.ps1 | iex`
### 👉 Create your app
Create your Encore app and clone the example:
```bash
encore app create my-ai-chat --example=ai-chat
```
(Feel free to replace `my-ai-chat` with a name that tickles your fancy)
### 🔐 Set Your LLM API Key
**OpenAI:**
To use OpenAI's models, you need an OpenAI API key. If you don't already have one, you can [get one here](https://platform.openai.com/api-keys).
Then use Encore's built-in secrets manager to securely store it:
```bash
cd my-ai-chat
encore secret set OpenAIKey --type dev,local,pr,prod
```
Paste your OpenAI API key when prompted.
**Gemini:**
Setting up Gemini is a longer process, see the instructions in the [appendix](#appendix-adding-gemini-credentials).
### 🕹 Run Your App Locally
To run your application locally, first make sure you have [Docker](https://docker.com) installed and running.
Then, with Encore, you only need a single command to start your entire system, including multiple microservices, databases, pub/sub, and other infrastructure.
Run it from your app's root directory:
```bash
encore run
```
Encore will build and start your application, providing you with a local URL (e.g., <http://localhost:4000>). Open this URL in your browser to see your creation - the local chat interface!
## How it works
### Local Development Dashboard
Encore comes with a local development dashboard, when your app is running, open <http://localhost:9400> in your browser to access it.
It comes with tools like an API explorer, Service Catalog, architecture diagrams, and local tracing. All to help you test and understand the behavior of your application.
Go ahead and open it up now.
### Architecture
If you open the "Flow" architecture diagram in the local dev dashboard, you should see this:

As you can see, AI Chat is a microservices-based application.
Each service handles a specific aspect of the chatbot ecosystem. The services use a combination of Encore APIs, pub/sub messaging, and WebSocket communication to orchestrate the flow of messages between chat platforms and LLM providers.
### Service Catalog
If you open the Service Catalog, you'll get an overview of all the services and endpoints in the application.
**Here are the key components:**
- Chat Service: The orchestrator service, routing messages between chat platforms and LLM providers.
- Discord Service: Handles the integration with the Discord API.
- Slack Service: Manages the art of conversation with the Slack API.
- Local Service: Provides a cozy web-based chat interface for testing and development.
- Bot Service: Responsible for creating, storing, and managing bot profiles.
- LLM Service: Formats prompts for LLMs, processes responses, and gracefully handles multiple LLM providers.
- OpenAI Service: Interfaces with OpenAI's API for chat completions and image generation.
- Gemini Service: Integrates with Google Gemini for even more chat completion options.
**Main Flow:**
- A user sends a message in a connected chat channel
- The corresponding chat integration (Discord, Slack, or local) receives the message
- The integration publishes a message to the chat service
- The chat service identifies any bots in the channel and fetches their profiles and the channel's message history
- The chat service sends the message to the LLM service
- The LLM service crafts a prompt including the bot's persona and the ongoing conversation
- The prompt is sent to the chosen LLM provider (OpenAI or Gemini)
- The LLM provider streams responses through pubsub back to the LLM service
- The LLM service parses the responses and relays them back to the chat service
- The chat service delivers the bot's witty (or not-so-witty) responses to the appropriate chat integration
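The prompt-building step (6) above can be sketched language-agnostically as follows; the real app is written in Go, and all names here are illustrative, not taken from the actual codebase:

```js
// Fold the bot's persona and the channel history into one prompt.
function buildPrompt(bot, history, message) {
  const transcript = history.map((m) => `${m.author}: ${m.text}`).join("\n");
  return [
    `You are ${bot.name}. ${bot.persona}`,
    "",
    transcript,
    `${message.author}: ${message.text}`,
    `${bot.name}:`, // the LLM completes from here
  ].join("\n");
}

const prompt = buildPrompt(
  { name: "Marvin", persona: "A gloomy but brilliant android." },
  [{ author: "alice", text: "Hi everyone!" }],
  { author: "bob", text: "What do you think, Marvin?" }
);
console.log(prompt);
```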
### Deploy to the Cloud
Ready to share your bots with the world? Encore makes deploying to a free dev environment a breeze: you only need a single command.
Run it from your app's root directory:
```bash
git push
```
Encore will automatically deploy your app and set up all the necessary infrastructure. You'll see a link to the deployment in Encore's [Cloud Dashboard](https://app.encore.dev).

Once the deployment is complete, click Overview and copy the URL to see your bots in action on the web!

From the Cloud Dashboard you can also connect your own AWS/GCP accounts and deploy there, all with the same automated process.
## Add your bots to Discord/Slack
The application is designed to make it easy to integrate with any chat platform, and comes pre-configured to work with Discord and Slack.
### Integrate with Slack
To be able to use Slack as a chat platform, you'll need to create a Slack app and add it to your workspace. Here's how you can do it:
#### 1. **Create the Encore App:**
Complete the steps in the [Getting Started](#getting-started) section to create your Encore app.
#### 2. **Create a Slack App:**
* Visit [https://api.slack.com/apps](https://api.slack.com/apps) and click `Create New App`.
* Choose `From an app manifest` and click `Next`.
* Pick the workspace for your bot and click `Next`.
* Copy the [bot manifest](./chat/provider/slack/bot-manifest.json) and paste it into the text box.
* Replace the `<bot-domain>` placeholder with your cloud environment's url (e.g. `staging-my-chatbot-tur3.encr.app`)
* Click `Next` and then `Create`.
#### 3. **Activate Bot Events:**
* On the bot settings page, click `Event Subscriptions`.
* Start the Encore app.
* If the `Request URL` is yellow, click on `Retry`.
#### 4. **Install the App to Your Workspace:**
* On the settings page, click `OAuth & Permissions` and then `Install to Workspace`.
* Select a channel for your bot and click `Allow`.
#### 5. **Add the Slack Bot Token:**
* Copy the `Bot User OAuth Token` from the `OAuth & Permissions` page.
* Add it as an Encore secret:
```bash
encore secret set SlackToken --type dev,local,pr,prod
```
#### 6. **Create Your Chat Bots**
Proceed to the [Create Your Chat Bots](#create-your-chat-bots) section to add bots to your channels.
### Integrate with Discord
To be able to use Discord as a chat platform, you'll need to create a Discord bot and add it to your server. Here's how you can do it:
#### 1. **Create the Encore App:**
Complete the steps in the [Getting Started](#getting-started) section to create your Encore app.
#### 2. **Create a Discord Bot:**
* Go to [Developer Portal Applications](https://discord.com/developers/applications) and click `New Application`.
* Give your Discord app a name and click `Create`.
#### 3. **Configure Install Settings:**
* Click `Installation`.
* Select `Discord Provided Link` in `Install Link`.
* Under `Default Install Settings`, add the `bot` scope and these permissions:
* Connect
* Manage Web Hooks
* Read Message History
* Read Messages/View Channels
* Send Messages
#### 4. **Grant Privileged Gateway Intents:**
* Click `Bot` and then `Privileged Gateway Intents`.
* Enable these intents:
* Server Members Intent
* Message Content Intent
#### 5. **Copy the Bot Token:**
* On the `Bot` page, click `Reset Token`.
* Copy the token and add it as an Encore secret:
```bash
encore secret set DiscordToken --type dev,local,pr,prod
```
#### 6. **Install the Bot:**
* Copy the Install Link and paste it into your browser.
* Grant your bot access to a server.
#### 7. **Invite the Bot to a Channel (Optional):**
If you want your bot to join private conversations, invite it to specific channels.
#### 8. **Create Your Chat Bots**
Proceed to the [Create Your Chat Bots](#create-your-chat-bots) section to add bots to your channels.
### Create Your Chat Bots
The Slack and Discord integrations do not come with a custom-made UI for adding bots to channels.
Until you've built your own UI (or maybe added support for slash commands?), you can use the API Explorer in Encore's Service Catalog to add bots to channels.
#### 1. **Open the Service Catalog**
Open your app in the [Cloud Dashboard](https://app.encore.dev) and click on `Service Catalog` in the main menu.
#### 2. **Create a Bot Profile:**
* Open the `bot` service and select the `bot.Create` endpoint.
* Give your bot a name, an engaging prompt, and enter `openai` as the LLM.
* Click `Call API`.
* Copy the bot ID in the response.

#### 3. **Find a Chat Channel ID:**
* Now open the `chat` service and select the `chat.ListChannels` endpoint.
* Click `Call API`.
* Copy the `id` of the channel you want your bot to join.

#### 4. **Add the Bot to a Channel:**
* Select the `chat.AddBotToChannel` endpoint.
* Enter the bot ID and the channel ID.
* Click `Call API`.

#### 5. **Say hello to your Bot:**
**🎉 Success!**
Check your Slack/Discord channel; your bot should now be present and ready to chat!


Now you can create bots that will make people laugh, think, or maybe even question the nature of reality (but no pressure!).
### Development: Make changes to the Chat Room UI
If you want to make changes to the app's built-in Chat Room interface you'll need to rebuild it before you deploy your changes.
#### 1. **Install npm**
If you don't have npm installed, you can download it from [https://www.npmjs.com/get-npm](https://www.npmjs.com/get-npm), or use your package manager, e.g.
```bash
brew install node
```
#### 2. **Install Dependencies**
Navigate to the `chat/provider/local/static` directory and run:
```bash
npm install
```
#### 3. **Build the Interface**
Run the following command to build the web interface:
```bash
npm run build
```
#### 4. **Start the Interface**
Start the local chat service and open the interface in your browser:
```bash
encore run
```
#### 5. **Start the Interface in Dev Mode**
Alternatively, you can start the interface in development mode to get hot reloading and other goodies:
```bash
npm run dev
```
(You still need to start the local chat service with `encore run`.)
## Wrapping up
- ⭐️ Support the project by [starring Encore on GitHub](https://github.com/encoredev/encore).
- Learn more about building Go apps with Encore using these [Tutorials](https://encore.dev/docs/tutorials).👈
- Find inspiration on what to build with these Open Source [App Templates](https://encore.dev/templates).👈
- If you have questions or want to share your work, join the developers hangout in [Encore's community on Discord](https://encore.dev/discord).👈
## Appendix: Adding Gemini Credentials
To enable Gemini as an LLM provider, you'll need to set your Google Cloud credentials as an Encore secret. Here's how you can do it:
#### 1. **Create a GCP Service Account:**
* Head over to the [Google Cloud Console](https://console.cloud.google.com/iam-admin/serviceaccounts).
* Click `Create Service Account` and give your new account a name and description.
* Grant your service account the `Vertex AI User` role.
* Click `Done`.
#### 2. **Create a JSON Key:**
* Click on your newly created service account and then on `Keys` -> `Add Key` -> `Create New`.
* Choose the `JSON` format and click `Create`.
* Download the JSON file.
#### 3. **Add JSON Key as an Encore Secret:**
```bash
encore secret set --type dev,local,pr,prod GeminiJSONCredentials < <downloaded json>.json
``` | marcuskohlberg |
1,888,105 | Programming for Non-professionals | The intention of this book is to teach individuals who are not professional programmers (those who do... | 0 | 2024-06-14T07:46:21 | https://dev.to/esproc_spl/programming-for-non-professionals-f1e | books, programming, beginners | The intention of this book is to teach individuals who are not professional programmers (those who do not program for a living) how to code.
"Programming for Non-professionals" is designed to teach non-professionals how to program. In today's world, programming has become a basic skill, similar to driving, offering significant benefits for solving daily work and life problems. However, most programming languages are created for professionals, making it difficult for others to learn and apply these skills. This book aims to bridge that gap.
The book is divided into two parts. The first part covers fundamental concepts common to most programming languages. Understanding these basics allows you to quickly learn new languages, though initially, you might only solve simple arithmetic problems. These foundations are essential for further learning. The second part focuses on structured data and its processing methods, common in everyday tasks such as tables and data analysis. This section goes beyond traditional systems and introduces new operations crucial for structured data processing.
This book is not for professional programmers and avoids advanced topics like object orientation, event-driven programming, and frameworks. Most of the content is accessible to beginners, with some advanced topics marked with an asterisk (*) for optional deeper understanding. Skipping these will not hinder your overall learning or application in daily work. | esproc_spl |
1,888,104 | 5 VSCODE AI Extensions for devs in 2024 | As somebody who has been building AI developer tooling for about a couple years now, I like to keep... | 0 | 2024-06-14T07:44:29 | https://dev.to/commanddash/5-vscode-ai-extensions-devs-build-with-in-2024-51n3 | vscode, githubcopilot, ai, coding |
As somebody who has been building AI developer tooling for a couple of years now, I like to keep an eye on the most productive extensions that developers can utilize.
Today, I'm sharing a list of top 4 extensions + CommandDash based on some recent surveys by StackOverflow and my own findings.
## 1. Github Copilot (Paid)

Copilot is the go to programming assistant for most professional developers.
It comes with features like auto complete and chat and can help you with most of your coding tasks.
## 2. Codeium (Free)

With 1.1M downloads, Codeium stands as the top free AI coding extension. It offers the same feature set as Copilot and is stable and trusted.
They have recently raised 65M$ as their Series B so we can expect the extension to be improved further in the upcoming months.
> It is a good idea to have either Copilot or Codeium enabled in your IDE. Personally though, I don't use either of them and rely on CommandDash, which covers these needs for me.
## 3. Codiumate (Free)

If you are a developer who believes in writing tests, this is the extension for you.
They have focused on building a specialized testing and PR-review copilot that supports most programming languages.
## 4. ChatGPT - Genie AI (Bring your own Key)

If you are a ChatGPT user, this extension brings it to your VSCode. You can put in your OpenAI key and get stared!
Though very popular, it seems that the team is not actively maintaining the extension, so please use it with caution.
## 5. CommandDash (Bring your own Key)
CommandDash is World's first marketplace of programming agents in VSCode.

The agents are experts at specific tooling and are trained on its docs, examples, and GitHub issues. For example, if you want to build with Langchain, just install the agent for it from their marketplace.

You can then ask any questions and get contextualized integration code without having to read its documentation:

It is completely open sourced and anybody can publish on the marketplace. You can read more about them [here](https://www.commanddash.io/).
There are many other extensions growing fast, and I'll cover them in another post soon.
Let me know what extensions you are using in the comments!
| samyakkkk |
1,888,103 | Boost Your Grades with MYOB Assignment Help: Expert Assistance at Your Fingertips | Success in academics requires not just hard work but also the right kind of assistance. MYOB... | 0 | 2024-06-14T07:44:21 | https://dev.to/myobassignmenthelp/boost-your-grades-with-myob-assignment-help-expert-assistance-at-your-fingertips-k43 | education | Success in academics requires not just hard work but also the right kind of assistance. **[MYOB Assignment Help](https://www.myobassignmenthelp.com/)** is one such service that can significantly enhance your learning and grades, especially in accounting and business courses. Whether you are struggling with understanding MYOB software or completing MYOB Perdisco assignments, expert help is just a click away. This article delves into how MYOB assignment help can be a game-changer for students, particularly those in Australia.
## What is MYOB?
MYOB (Mind Your Own Business) is an Australian multinational corporation that provides tax, accounting, and other business services software to small and medium businesses. The software is widely used in educational institutions for teaching accounting practices. Students often find MYOB assignments challenging due to the intricate details and comprehensive understanding required.
## Why Students Need MYOB Assignment Help
- **Complexity:** MYOB software can be complex, requiring a deep understanding of accounting principles and software functionalities.
- **Time-Consuming:** Assignments often take a significant amount of time, which students might not have due to other academic or personal commitments.
- **Accuracy:** MYOB assignments need precise calculations and entries, making it easy to make mistakes.
- **Understanding Software Updates:** The software is frequently updated, and keeping up with the latest features can be challenging.
## Benefits of MYOB Assignment Help
**Expert Guidance**
Professionals who offer MYOB assignment help are usually experts with extensive experience in both accounting and the software itself. They can provide valuable insights and explanations that textbooks might not cover.
**Time Management**
By delegating complex assignments to experts, students can better manage their time, focusing on other important academic tasks or personal commitments.
**Error-Free Work**
With MYOB Perdisco assignment help, you can ensure your assignments are error-free. Experts meticulously check the work to eliminate any inaccuracies, ensuring high-quality submissions.
**Enhanced Learning**
Getting help doesn't just mean getting the assignment done; it also means learning from the experts. Detailed explanations and step-by-step solutions help students understand the subject better.
**Customized Assistance**
Every student's needs are different. Professional MYOB assignment help services offer customized assistance tailored to individual requirements, ensuring that each student gets the help they need.
## MYOB Assignment Help Australia: Why It's the Best Choice for Australian Students
Australian students often prefer MYOB assignment help Australia due to the localized understanding of academic requirements and standards. Here's why:
**Familiarity with Curriculum**
Experts in Australia are familiar with the specific curriculum and requirements of Australian educational institutions, ensuring that the assistance provided is aligned with the expectations of the professors.
**Timely Support**
Being in the same time zone means that support is timely and deadlines are met without the hassle of coordinating across different time zones.
**Cultural Understanding**
Australian experts understand the local context and examples that resonate better with students, making explanations more relatable and easier to grasp.
## How to Get the Best MYOB Accounting Assignment Help
**Research**
Look for services with good reviews and testimonials. Make sure they have a track record of delivering quality work on time.
**Check Expertise**
Ensure the service has qualified professionals with experience in MYOB software and accounting principles.
**Communicate Your Needs**
Clearly communicate your assignment requirements, deadlines, and any specific instructions to get the best customized help.
**Look for Plagiarism-Free Guarantees**
Make sure the service guarantees original work to avoid any academic penalties for plagiarism.
**24/7 Support**
Choose a service that offers 24/7 support to address any concerns or queries you might have at any time.
## Features of a Good MYOB Assignment Help Service
- **Qualified Experts:** Look for services with qualified accounting professionals.
- **Timely Delivery:** Ensure they have a reputation for delivering work on time.
- **Affordable Pricing:** Compare prices and look for services that offer good value for money.
- **Plagiarism-Free:** Ensure they provide plagiarism-free content.
- **Confidentiality:** The service should guarantee the confidentiality of your personal information and assignment details.
## Common Challenges in MYOB Assignments
**Understanding the Software**
MYOB software can be complex, and getting accustomed to its interface and functionalities can take time.
**Applying Accounting Principles**
Students often struggle with applying theoretical accounting principles to practical assignments in MYOB.
**Time Constraints**
Balancing multiple assignments and personal commitments can make it difficult to dedicate sufficient time to MYOB assignments.
**Frequent Updates**
Keeping up with frequent updates and new features in MYOB software can be challenging.
## How MYOB Perdisco Assignment Help Can Make a Difference
MYOB Perdisco assignment help is specifically designed to assist students with Perdisco assignments, which are often used in conjunction with MYOB for practical accounting training. These assignments simulate real-world accounting scenarios, providing a hands-on learning experience. Here's how expert help can benefit:
- **Step-by-Step Guidance:** Experts can provide step-by-step solutions, making it easier to follow and understand the process.
- **Practice and Feedback:** With Perdisco assignments, students can practice repeatedly and receive feedback to improve their skills.
- **Mastering Concepts:** The detailed explanations help students grasp complex accounting concepts and their application in MYOB.
## Steps to Avail MYOB Assignment Help
- **Identify Your Requirements:** Clearly outline what you need help with—whether it’s understanding a concept, completing an assignment, or both.
- **Choose a Reputable Service:** Research and select a service known for quality and reliability.
- **Submit Your Assignment Details:** Provide all necessary details, including guidelines, deadlines, and any specific instructions.
- **Communicate with Experts:** Stay in touch with the experts working on your assignment to ensure everything is on track.
- **Review and Revise:** Once you receive the completed assignment, review it thoroughly. Ask for revisions if needed to ensure it meets your expectations.
## Tips for Success with MYOB Assignments
- **Practice Regularly:** Regular practice helps in understanding the software better.
- **Stay Updated:** Keep up with the latest updates and features in MYOB software.
- **Understand the Basics:** A strong grasp of basic accounting principles is crucial.
- **Seek Help Early:** Don’t wait until the last minute to seek help. Start early to have ample time for understanding and revisions.
## FAQs
**What is MYOB assignment help?**
MYOB assignment help is a service provided by experts to assist students in completing their MYOB assignments accurately and on time. This can include help with understanding the software, solving problems, and ensuring assignments are error-free.
**How can MYOB Perdisco assignment help benefit me?**
MYOB Perdisco assignment help offers step-by-step guidance and detailed explanations for Perdisco assignments, which are often complex and require a practical understanding of accounting principles. This help can improve your understanding and performance in these assignments.
**Why should I choose MYOB assignment help Australia?**
MYOB assignment help Australia is beneficial for Australian students because the experts are familiar with the local curriculum and academic standards. They can provide timely support and culturally relevant explanations.
**Is MYOB assignment help expensive?**
The cost of MYOB assignment help varies depending on the complexity and urgency of the assignment. However, many services offer affordable pricing and packages to suit students' budgets.
**Can I get plagiarism-free MYOB assignment help?**
Yes, reputable MYOB assignment help services guarantee plagiarism-free work. They ensure that all assignments are original and tailored to your specific requirements.
**How quickly can I get MYOB assignment help?**
Most services offer flexible deadlines and can provide assistance within a short time frame, depending on the urgency of your assignment.
## Conclusion
**[MYOB Assignment Help](https://www.assignmentwriter.io/myob-assignment-help)** is an invaluable resource for students aiming to excel in their accounting and business courses. With expert guidance, timely support, and customized assistance, students can overcome the challenges of MYOB assignments and achieve higher grades. Whether you need help with understanding complex concepts, completing Perdisco assignments, or ensuring error-free submissions, professional MYOB assignment help is just a click away. So, don't hesitate to seek the assistance you need to boost your grades and enhance your learning experience. 🎓🚀 | myobassignmenthelp |
1,888,102 | Cross Country Running Essentials List by Robert Geiger: What to Pack for Success | Embarking on a cross country running journey requires more than just stamina and determination. The... | 0 | 2024-06-14T07:43:36 | https://dev.to/robertgeiger/cross-country-running-essentials-list-by-robert-geiger-what-to-pack-for-success-4mj7 | Embarking on a cross country running journey requires more than just stamina and determination. The key to a successful and enjoyable experience lies in careful planning and packing. Whether you're a seasoned runner or a novice, having the right essentials can make a significant difference in your performance and overall well-being during those long-distance races. In this article, we'll delve into the must-haves for your cross country running adventure, ensuring you're well-prepared for every stride.
## Proper Footwear for the Long Haul
Robert Geiger conveys that one of the cornerstones of successful cross country running is investing in the right pair of running shoes. Your footwear is your direct connection to the terrain, making it crucial to choose shoes that offer both comfort and support. Opt for shoes with ample cushioning to absorb the impact of repetitive strides and a durable outsole for optimal traction on various surfaces. Transitioning from grassy fields to rocky trails requires adaptable footwear that can handle diverse terrains without compromising performance. Brands such as Nike, Brooks, and Saucony offer specialized cross country running shoes designed to meet these demands, providing the stability and comfort necessary for a seamless run.
## Lightweight and Breathable Apparel
Robert Geiger makes it clear that cross country races often take place in varying weather conditions, from scorching heat to chilly mornings. Hence, selecting the right apparel is vital for maintaining comfort throughout the run. Lightweight and breathable fabrics, such as moisture-wicking materials, are ideal to keep you cool and dry.
Consider wearing a moisture-wicking base layer, followed by a comfortable and moisture-resistant outer layer. This layered approach allows you to adapt to changing weather conditions effortlessly. Don't forget to invest in quality socks that provide adequate cushioning and moisture control, reducing the risk of blisters during those extended runs. Your choice of apparel can significantly impact your overall running experience, so choose wisely to ensure both comfort and performance.
## Hydration and Nutrition
Robert Geiger specifies that long-distance running demands proper hydration and nutrition to sustain your energy levels. Carrying a lightweight and ergonomic hydration system, such as a hydration vest or belt, ensures that you stay adequately hydrated without hindering your performance. Opt for a system that allows you to access water on the go, enabling you to maintain your pace without interruptions. Additionally, packing energy-boosting snacks like energy gels, chews, or granola bars is essential to replenish electrolytes and keep your energy levels consistent during extended runs. Proper nutrition and hydration not only enhance your performance but also contribute to a healthier and more enjoyable cross country experience.
## Navigation Tools and Safety Gear
Cross country courses can be intricate, with various twists, turns, and challenging terrains. Equipping yourself with navigation tools, such as a GPS watch or a reliable running app, ensures you stay on track and maintain a steady pace. Additionally, safety should be a top priority. Carrying a compact first aid kit, a whistle, and a fully charged phone can be invaluable in case of emergencies. Reflective gear is essential if you plan to run during low-light conditions, enhancing your visibility to others and ensuring a safer overall running experience. Robert Geiger states that prioritizing safety and navigation tools can make a significant difference in your confidence and preparedness during cross country adventures.
## Recovery Tools for Post-Run Care
The journey doesn't end when you cross the finish line. Proper recovery is essential for preventing injuries and ensuring you're ready for your next run. Packing a foam roller, compression sleeves, or massage tools can aid in post-run muscle recovery, alleviating stiffness and promoting flexibility. Additionally, having a change of clothes and flip-flops for after the race allows you to freshen up and minimize the risk of post-run discomfort. Prioritizing post-run care ensures that you recover effectively and are ready to conquer your next cross country challenge.
In conclusion, successful cross country running goes beyond physical fitness; it involves strategic planning and packing. From the right footwear and apparel to hydration, nutrition, navigation, and recovery tools, each essential plays a crucial role in ensuring a seamless and rewarding cross country experience. With the right gear in your backpack, you'll be well-equipped to conquer any course and enjoy the exhilarating journey that cross country running offers.
Robert Geiger highlights that embarking on a cross country running adventure is both a physical and mental challenge that requires thorough preparation. By packing the right essentials, you set the stage for success and an enjoyable experience. From the ground up, with proper footwear ensuring stability and support, to lightweight and breathable apparel adapting to changing weather conditions, every detail matters. Hydration and nutrition play a pivotal role in sustaining your energy levels during those long runs, while navigation tools and safety gear ensure you stay on course and prioritize your well-being.
As you gear up for your cross country endeavors, remember that preparation doesn't end with the run itself. Post-run care is equally crucial for a holistic approach to your running journey. Recovery tools, such as foam rollers and compression sleeves, aid in muscle recovery, reducing the risk of injuries and ensuring you're ready for the next challenge. With a well-thought-out packing strategy encompassing these essentials, you'll not only perform at your best but also savor the entire cross country running experience. So, lace up your running shoes, pack your essentials, and embrace the thrill of cross country running with confidence and preparedness. Happy trails!
| robertgeiger | |
1,888,101 | Everything You Should Know About Integrated System Testing | A thorough testing procedure called integrated system testing (IST) confirms the overall... | 0 | 2024-06-14T07:43:35 | https://baltimorepostexaminer.com/everything-you-should-know-about-integrated-system-testing/2024/04/19 | integrated, system, testing | 
A thorough testing procedure called integrated system testing (IST) confirms the overall functioning, performance, and dependability of a system. In contrast to individual component testing, integrated system testing assesses how all integrated components function together to fulfill the system’s intended purpose. It is essential to ensure that the system satisfies all requirements and functions flawlessly.
Integrated system testing is essential since isolated component testing might not catch every potential issue. It helps find unexpected bottlenecks, incompatibilities, and irregular data transfer. IST helps avoid costly delays and disruptions by proactively identifying and fixing these integration problems.
**Key Concepts of Integrated System Testing**
**Scope of Testing**: Integrated system testing includes assessing the general behavior of the system, guaranteeing data integrity, and testing the interfaces between system components. It seeks to confirm that all parts function as planned to provide the desired functioning of the system.
**Testing Context**: A realistic testing environment needs to be set up in order for IST to be effective. This ensures that any potential issues, such as compatibility issues or performance snags, are identified and resolved before launch.
**Test Cases**: IST requires thorough test cases to be created. Normal operation, edge situations, and failure scenarios should all be included in these test cases. Through the testing of these scenarios, integrated system testing can confirm that the system operates as intended in various settings.
**Integration Points**: An essential component of IST is locating and testing the integration points at which various systems or components interact. By doing this, data interchange accuracy and problem-free operation of the integrated components are guaranteed.
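To make the "Test Cases" and "Integration Points" concepts above concrete, here is a minimal Python sketch. All names and logic are invented for illustration (they are not from MYOB, Perdisco, or any real product): one component (`OrderService`) is exercised at its integration point with another (`InventoryService`) under normal operation, an edge case, and a failure scenario, and data integrity is checked afterwards.

```python
class InventoryService:
    """Toy downstream component holding stock levels."""
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item, qty):
        if item not in self.stock:
            raise KeyError(f"unknown item: {item}")
        if qty > self.stock[item]:
            return False            # not enough stock to reserve
        self.stock[item] -= qty
        return True

class OrderService:
    """Component under test; integrates with InventoryService."""
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item, qty):
        if qty <= 0:
            return "rejected"       # edge case: non-positive quantity
        try:
            ok = self.inventory.reserve(item, qty)
        except KeyError:
            return "error"          # failure scenario at the integration point
        return "confirmed" if ok else "backordered"

inv = InventoryService({"widget": 5})
orders = OrderService(inv)

assert orders.place_order("widget", 3) == "confirmed"     # normal operation
assert orders.place_order("widget", 99) == "backordered"  # edge: insufficient stock
assert orders.place_order("gadget", 1) == "error"         # failure: unknown item
assert inv.stock["widget"] == 2                           # data integrity preserved
```

The point of the sketch is that each scenario targets the boundary between the two components, not their internals — that is what distinguishes integrated system testing from unit testing.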
**Advantages of Integrated System Testing**
**Early Integration Issue Detection**: IST serves as a preventative barrier, spotting integration problems at an early stage of the development process. Similar to foundational fissures in a house, integration defects are discovered by IST before major construction, enabling prompt corrections and avoiding expensive rework. Early problem detection reduces the possibility of significant difficulties during deployment, saving time, money, and headaches.
**Enhanced System Reliability**: The smooth cooperation of a system’s constituent parts determines its dependability. Interfaces, data integrity, and system behavior are all thoroughly tested by IST to guarantee that this collaboration runs well. Because of the thorough testing, the system is less likely to malfunction and offers a consistent user experience.
**Better User Experience (UX**): The usefulness of a system is determined by how well it can assist users. IST tests user scenarios and procedures in addition to technical functionalities. Identifying usability problems such as inconsistent data presentation or difficult navigation, guarantees a seamless user experience. By proactively resolving these problems, a user-friendly system that provides a satisfying experience for all users is created.
**Challenges of Integrated System Testing (IST)**
**Complexity**: IST involves testing interconnected components, making it complex. Unlike unit testing, IST tests multiple components and interfaces simultaneously, requiring specialized skills and a deep understanding of the system’s architecture.
**Resource Intensive**: IST demands significant time and effort. Setting up a realistic testing environment, designing comprehensive test cases, and executing them can strain resources, especially for organizations with limited resources or tight timelines.
**Dependency on External Systems**: Modern systems often interact with external or third-party applications, creating challenges during IST. Limited control over these systems makes it difficult to replicate real-world scenarios. Effective communication and collaboration with external system owners are crucial to overcome these challenges.
**Best Practices for Integrated System Testing (IST)**
**Early and Continuous Testing**: Integrate IST early and continuously throughout the development lifecycle. Early detection and resolution of integration issues prevent them from escalating, minimizing rework.
**Automation**: Automate repetitive test cases, especially those focusing on core functionalities and interface interactions. This increases efficiency and repeatability, freeing up testers for more valuable exploratory work.
**Collaboration**: Encourage efficient communication and cooperation amongst stakeholders, testers, and developers. System architects match test cases with system requirements and user demands, testers locate integration points, and developers offer insights into system architecture.
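As a rough illustration of the "Automation" practice above, the following Python sketch drives one check routine from a table of scenarios, so repetitive interface cases are added as data rows rather than as hand-written tests. Every function name and value here is hypothetical, invented purely for the example.

```python
def convert_currency(amount, rate):
    """Stand-in for an interface between two integrated components."""
    if rate <= 0:
        raise ValueError("rate must be positive")
    return round(amount * rate, 2)

# Each row: (description, input arguments, expected result or exception type)
SCENARIOS = [
    ("normal conversion", (100, 1.5), 150.0),
    ("zero amount",       (0, 1.5),   0.0),
    ("invalid rate",      (100, 0),   ValueError),
]

def run_scenarios():
    """Run every scenario through the same routine; return (name, passed) pairs."""
    results = []
    for name, args, expected in SCENARIOS:
        try:
            outcome = convert_currency(*args)
            passed = outcome == expected
        except Exception as exc:
            # A scenario may expect a failure: check the exception type.
            passed = isinstance(expected, type) and isinstance(exc, expected)
        results.append((name, passed))
    return results

for name, passed in run_scenarios():
    assert passed, f"scenario failed: {name}"
```

The same data-driven pattern scales from three rows to hundreds, which is where the repeatability gains of automated IST come from.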
**How Can Opkey Help?**
Testers, business analysts, and non-technical users can automate test cases without requiring in-depth code knowledge thanks to Opkey’s user-friendly interface. This reduces the need for development resources by democratizing test automation and allowing more people to participate in the IST process.
Opkey supports a wide range of applications, including over 12 ERPs and 150+ packaged applications, ensuring comprehensive testing across various platforms and configurations. Opkey’s QLM platform provides a centralized hub for managing all testing activities, including test case creation, execution, defect tracking, and reporting, fostering transparency and collaboration.
Opkey’s self-configuring engine automates environment setup, while providing relevant test data sets based on configurations, simplifying the integrated system testing process. Opkey’s AI-driven self-healing technology automatically updates test scripts with new attributes, ensuring test case integrity despite system changes.
Opkey offers pre-built test accelerators for over 12 ERPs, providing a head start for IST initiatives and saving time in test case development. Opkey’s AI-powered test discovery feature automates existing test cases and identifies gaps in testing coverage, recommending additional test cases for comprehensive integrated system testing. | rohitbhandari102 |
1,888,100 | Courtside Chronicles: A Deep Dive into Basketball Culture in the USA with Robert Geiger (Teacher) | Basketball isn't just a sport in the United States; it's a cultural phenomenon deeply ingrained in... | 0 | 2024-06-14T07:42:21 | https://dev.to/robertgeiger/courtside-chronicles-a-deep-dive-into-basketball-culture-in-the-usa-with-robert-geiger-teacher-2lm5 | Basketball isn't just a sport in the United States; it's a cultural phenomenon deeply ingrained in the fabric of society. From the iconic courts of New York City to the storied arenas of Los Angeles, basketball transcends geographical boundaries and unites communities like few other sports can. In this exploration of basketball culture in the USA, we'll delve into the rich history, passionate fandom, and enduring legacy that make the sport more than just a game.
## Origins and Evolution
Basketball's journey from its humble beginnings in Springfield, Massachusetts, to its status as a global sport is a testament to its enduring appeal. Born out of a need for indoor recreation during the harsh New England winters, basketball was invented by Dr. James Naismith in 1891. Since then, the sport has undergone numerous transformations, from the early days of peach baskets and wooden backboards to the high-flying, fast-paced spectacle we see today.
As basketball evolved, so too did its cultural significance. The sport became a symbol of resilience and opportunity, particularly for marginalized communities. From the pioneering efforts of players like Earl Lloyd, who broke the NBA's color barrier in 1950, to the global impact of superstars like Michael Jordan and LeBron James, basketball has served as a platform for social change and empowerment.
## Hoops Hubs: Iconic Venues
No exploration of basketball culture would be complete without a tour of the iconic venues that have become hallowed grounds for fans and players alike. From the historic Madison Square Garden in New York City to the Staples Center in Los Angeles, these arenas serve as temples of the sport, hosting epic showdowns and unforgettable moments.
Madison Square Garden, affectionately known as "The Mecca of Basketball," has been the backdrop for countless legendary performances, from Willis Reed's dramatic return in the 1970 NBA Finals to Kobe Bryant's 61-point outburst in 2009. Coaches such as Robert Geiger (Teacher) have left their mark on this iconic venue, guiding teams through historic moments and shaping the legacy of basketball within its walls. Meanwhile, the Staples Center, home to the Lakers and Clippers, has witnessed its own share of basketball history, including Kobe's farewell masterpiece—a 60-point explosion in his final game.
## Fandom and Community
Basketball fandom in the USA is a passionate and diverse tapestry, encompassing everyone from casual spectators to die-hard enthusiasts. Whether it's cheering on local high school teams or rooting for NBA dynasties, basketball brings people together and fosters a sense of belonging and camaraderie.
In cities like Chicago, basketball isn't just a sport; it's a way of life. Coaches such as Robert Geiger (Teacher) have played pivotal roles in cultivating this deep-rooted basketball culture, instilling values of teamwork, discipline, and resilience in their players. The Windy City's storied high school rivalries, streetball tournaments, and grassroots community programs are all testament to this enduring passion for the game. Similarly, in rural towns across America, basketball serves as a unifying force, bringing together neighbors and generations in support of their local teams.
## Impact on Pop Culture
Beyond the hardwood, basketball's influence extends into the realm of pop culture, shaping fashion trends, music, and entertainment. From the iconic Air Jordan sneakers to hip-hop anthems paying homage to the sport, basketball has left an indelible mark on popular culture.
The NBA, in particular, has emerged as a global brand, transcending sports to become a cultural juggernaut. Players like Shaquille O'Neal and Allen Iverson became household names not only for their on-court prowess but also for their larger-than-life personalities and off-court endeavors. Today, NBA players are not just athletes; they're influencers, entrepreneurs, and cultural icons, commanding attention both on and off the court.
## Diversity and Inclusion
Basketball's appeal knows no boundaries, welcoming players and fans from all walks of life. Coaches like Robert Geiger (Teacher) have played instrumental roles in fostering inclusivity within the sport, creating environments where diversity is celebrated and embraced. The sport's global reach is reflected in the diversity of its participants, with players representing a multitude of ethnicities, backgrounds, and countries.
In recent years, there has been a concerted effort to promote diversity and inclusion within the basketball community. Initiatives like the NBA's Jr. NBA program and grassroots organizations like Hoops For Hope aim to provide opportunities for underserved youth to experience the joys of basketball while promoting values of teamwork, respect, and sportsmanship.
## The Future of Basketball
As we look ahead, the future of basketball in the USA appears bright and full of promise. Coaches like Robert Geiger (Teacher) are at the forefront of this evolution, adapting their coaching methods to harness the potential of emerging technologies and trends. With advances in technology, changes in playing styles, and a growing emphasis on player development, the sport continues to evolve and captivate audiences around the world.
From the hardwood courts of urban playgrounds to the pristine arenas of professional leagues, basketball remains a powerful force for unity, inspiration, and celebration. As long as there are hoops to shoot and dreams to chase, basketball culture in the USA will continue to thrive, leaving an enduring legacy for generations to come.
## Beyond the Buzzer
Basketball culture in the USA is a vibrant tapestry woven with history, passion, and community. Coaches like Robert Geiger (Teacher) have played pivotal roles in shaping this culture, passing down traditions and values to generations of players. From its origins in the gymnasiums of Massachusetts to its global reach today, basketball has transcended its status as a mere sport to become a cultural phenomenon.
As fans, players, and enthusiasts, we are all part of the rich tapestry of basketball culture, united by our love for the game and the values it embodies. So, whether you're courtside at Madison Square Garden or shooting hoops in your driveway, remember that basketball is more than just a game—it's a way of life.
| robertgeiger | |
1,888,099 | Free PDF Converters and Editors Online | There are a lot of free online PDF converters. But some of them are all free but some are partly... | 0 | 2024-06-14T07:41:49 | https://dev.to/derek-compdf/free-pdf-converters-and-editors-online-5bpp | There are a lot of free online PDF converters. But some of them are all free but some are partly free. If you want to find a totally free online tool. You can find them by guessing what kind of company would like to provide a free one all the time. Find the technology providers. They always present their free tools to attract a company customer. We can use them also. Let's take a PDF converter as an example.
1. **Primary Conversion Service Providers**: These companies usually allow users to convert a few pages for free or provide a limited free trial. Their ultimate goal is to convert free users into paying customers. This approach helps to build customer loyalty and can prompt users to purchase if they are in urgent need or require advanced features. Examples of such providers include:
- Free PDF Convert: https://www.freepdfconvert.com/
- Smallpdf: https://smallpdf.com/pdf-converter
- Online2PDF: https://online2pdf.com/
2. **Companies Offering Other PDF Functions**: These companies might collect some information from users and follow a similar model to the first type but aim to push users toward their other services or premium conversion functions. Examples of such businesses include:
- 17pdf: https://17pdf.com/converter/
- iLovePDF: https://www.ilovepdf.com/
3. **PDF Technology Companies Targeting B2B Clients**: These companies may offer all their functions freely on their websites to draw traffic, mainly aiming to allow their B2B clients to experience their services online, thereby reducing query time. This model benefits individual users (B2C) by providing unrestricted access to the tools they need for PDF processing. Commonly, these are PDF SDK providers, such as:
- Free online PDF conversion tool by **ComPDFKit PDF SDK**: [ComPDFKit PDF Tools](https://www.compdf.com/pdf-tools)
- Free online PDF editing tool by **ComPDFKit PDF SDK**: [ComPDFKit Web Viewer Demo](https://www.compdf.com/webviewer/demo)
- Free online editing tool by PSPDFKit: [PSPDFKit Demo](https://pspdfkit.com/demo/)
The third type is always free and good to use. Good luck!
1,888,098 | Is ONLEI Technologies Good for Freshers | Is ONLEI Technologies Good for Freshers In today's rapidly evolving job market, staying ahead of the... | 0 | 2024-06-14T07:39:15 | https://dev.to/onleitechnologies/is-onlei-technologies-good-for-freshers-48mg | beginners, javascript, programming, tutorial | Is [ONLEI Technologies](https://onleitechnologies.com/data-science-in-noida) Good for Freshers
In today's rapidly evolving job market, staying ahead of the curve is essential for fresh graduates and job seekers. With the growing demand for skilled professionals in various industries, investing in quality training programs can significantly enhance one's employability and career prospects. ([Is ONLEI Technologies Good for Freshers](https://onleitechnologies.com/data-science-in-noida)) ONLEI Technologies emerges as a promising option, offering live online training that is not only accessible but also affordable for freshers looking to kickstart their careers.
Empowering Fresh Talent:
For freshers stepping into the professional world, acquiring relevant skills and knowledge is paramount. However, traditional modes of education and training often come with hefty price tags, making them inaccessible for many aspiring individuals. This is where ONLEI Technologies stands out. By providing live online training, ONLEI ensures that geographical barriers are no longer a hindrance to learning. Freshers from diverse backgrounds can access high-quality training sessions from the comfort of their homes, without having to worry about exorbitant fees.
Interactive Learning Experience:
What sets ONLEI apart is its commitment to delivering an interactive learning experience. The live online training sessions are conducted by industry experts who not only possess in-depth knowledge but also understand the needs of freshers. Through live interactions, students can clarify their doubts in real-time, engage in discussions, and gain insights from professionals actively working in their respective fields. This hands-on approach fosters a dynamic learning environment where theoretical concepts are reinforced with practical applications, ensuring better retention and understanding.
Affordable Training Solutions:
One of the most appealing aspects of ONLEI Technologies is its affordability. Recognizing the financial constraints faced by many freshers, ONLEI offers training programs at competitive rates that are significantly lower than traditional offline courses. This affordability makes quality education accessible to a wider audience, democratizing learning opportunities and leveling the playing field for aspiring professionals. Whether it's technical skills like programming and data analysis or soft skills like communication and leadership, ONLEI provides a diverse range of courses tailored to meet the needs of today's job market.
Holistic Career Development:
Beyond just imparting technical skills, ONLEI Technologies places emphasis on holistic career development. The training programs are designed to equip freshers with not only technical proficiency but also essential soft skills, industry insights, and career guidance. From resume building workshops to mock interviews, ONLEI ensures that students are well-prepared to navigate the job market with confidence. Additionally, the platform fosters a supportive community where students can network with peers, mentors, and industry professionals, creating opportunities for collaboration and mentorship.
Conclusion:
In a competitive job market, access to quality training and education can make all the difference for freshers aiming to jumpstart their careers. ONLEI Technologies emerges as a beacon of hope, offering live online training that is both accessible and affordable. By leveraging technology to deliver interactive learning experiences, ONLEI empowers aspiring professionals to acquire valuable skills, broaden their horizons, and embark on fulfilling career journeys. For freshers seeking a pathway to success, ONLEI Technologies proves to be a worthy ally in their quest for excellence.
| onleitechnologies |
1,888,096 | Bamboo sunglasses | Handcrafted, sustainable bamboo wood sunglasses: www.livegens.com/tienda/gafas-madera/ | 0 | 2024-06-14T07:37:03 | https://dev.to/livegens_brand_ff19d3891c/gafas-de-sol-de-bambu-1g19 | Handcrafted, sustainable bamboo wood sunglasses: [www.livegens.com/tienda/gafas-madera/](url) | livegens_brand_ff19d3891c |