id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,893,608 | Building a simple Full-Stack Restaurant Finder App with React, Redux, Node.js, and Google Places API (Part 1) | Introduction Welcome to the first part of our tutorial series on building a full-stack "Restaurant... | 0 | 2024-06-19T14:47:00 | https://dev.to/vb_nair/building-a-simple-full-stack-restaurant-finder-app-with-react-redux-nodejs-and-google-places-api-1acd | node, googlemaps, googleplaces, backend | **Introduction**
Welcome to the first part of our tutorial series on building a full-stack "Restaurant Finder" application. In this blog post, we will guide you through the process of creating a robust backend using Node.js and Express. The backend will serve as the foundation for our application, allowing us to integrate seamlessly with the Google Places API to retrieve restaurant data based on user queries.
Our goal is to develop a RESTful API that can handle requests for nearby restaurants, provide detailed information such as ratings and distances, and interact efficiently with the frontend built in React. By the end of this tutorial, you will have a functional backend server ready to power the restaurant discovery capabilities of our application.
Let's dive into the world of backend development and set the stage for a powerful and responsive "Restaurant Finder" experience!
**<u>Part 1: Building the Backend</u>**
**-> Creating a RESTful Backend with Node.js and Express**
In this comprehensive tutorial, we'll walk you through creating a RESTful backend for a "Restaurant Finder" app using Node.js and Express.
You'll learn
-> how to set up a basic server,
-> integrate the Google Places API to fetch nearby restaurants,
-> build a full-featured server that processes and returns sorted restaurant data based on user location.
Perfect for beginners, this guide will provide all the steps you need to get your backend up and running.
- Initialize a new Node.js project
`mkdir restaurant-finder
cd restaurant-finder
mkdir server
npm init -y`
-> Install Express, Axios, CORS, and Dotenv:
`npm install express axios cors dotenv`
**Step 1: Setting Up a Basic Express Server**
- Create Basic Server:
-> Create a file named 'server.js' in the server folder.
Add the following code to set up a basic Express server:
```
// server.js
const express = require("express");
const cors = require("cors");
require("dotenv").config();
const app = express();
const port = 3001;
app.use(cors());
app.get("/", (req, res) => {
res.send("Hello, World!");
});
app.listen(port, () => {
console.log(`Server running on http://localhost:${port}`);
});
```
- Run the Server:
-> Run node server.js in terminal to start the server.
-> Open a browser and navigate to http://localhost:3001 to see "Hello, World!" message.
**Step 2: Adding Google Places API**
- Obtain API Key:
-> Go to the Google Cloud Console.
Create a new project or select an existing one.
Navigate to "APIs & Services" > "Library" and enable the "Places API".
Navigate to "APIs & Services" > "Credentials" and create an API key.
- Add API Key to .env:
-> Create a .env file in the server folder and add your API key:
```
REACT_APP_GOOGLE_PLACES_API_KEY=your_api_key_here
```
- Update Server to Use Google Places API:
Update server.js to include a new endpoint that interacts with the Google Places API:
```
// server.js
const express = require("express");
const axios = require("axios");
const cors = require("cors");
require("dotenv").config();
const app = express();
const port = 3001;
app.use(cors());
app.get("/api/places", async (req, res) => {
try {
const { lat, lng } = req.query;
const response = await axios.get(
"https://maps.googleapis.com/maps/api/place/nearbysearch/json",
{
params: {
location: `${lat},${lng}`,
radius: 1500,
type: "restaurant",
key: process.env.REACT_APP_GOOGLE_PLACES_API_KEY,
},
}
);
res.json(response.data.results);
} catch (error) {
res.status(500).json({ error: error.message });
}
});
app.listen(port, () => {
console.log(`Server running on http://localhost:${port}`);
});
```
- Test the Endpoint:
-> Use a tool like Postman to send a GET request to http://localhost:3001/api/places?lat=37.7749&lng=-122.4194 (replace with appropriate coordinates).
-> Verify that the response contains restaurant data from the Google Places API.
**Step 3: Building the Full Functional Server**
- Calculate Distance:
-> Add a function to calculate the distance between two coordinates using the Haversine formula:
(The Haversine formula calculates the shortest distance between two points on a sphere using their latitudes and longitudes measured along the surface. It is important for use in navigation.)
```
// server.js
const haversineDistance = (coords1, coords2) => {
function toRad(x) {
return (x * Math.PI) / 180;
}
const lat1 = coords1.lat;
const lon1 = coords1.lng;
const lat2 = coords2.lat;
const lon2 = coords2.lng;
const R = 6371; // Radius of the Earth in kilometers
const x1 = lat2 - lat1;
const dLat = toRad(x1);
const x2 = lon2 - lon1;
const dLon = toRad(x2);
const a =
Math.sin(dLat / 2) * Math.sin(dLat / 2) +
Math.cos(toRad(lat1)) *
Math.cos(toRad(lat2)) *
Math.sin(dLon / 2) *
Math.sin(dLon / 2);
const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
const d = R * c;
return d;
};
```
- Update Endpoint to Include Distance and Sort:
-> Modify the /api/places endpoint to include distance calculation and sorting:
```
// server.js
app.get("/api/places", async (req, res) => {
try {
const { lat, lng } = req.query;
const response = await axios.get(
"https://maps.googleapis.com/maps/api/place/nearbysearch/json",
{
params: {
location: `${lat},${lng}`,
radius: 1500,
type: "restaurant",
key: process.env.REACT_APP_GOOGLE_PLACES_API_KEY,
},
}
);
const restaurants = response.data.results.map((restaurant) => {
const photoUrl = restaurant.photos
? `https://maps.googleapis.com/maps/api/place/photo?maxwidth=400&photoreference=${restaurant.photos[0].photo_reference}&key=${process.env.REACT_APP_GOOGLE_PLACES_API_KEY}`
: null;
return {
name: restaurant.name,
vicinity: restaurant.vicinity,
rating: restaurant.rating,
user_ratings_total: restaurant.user_ratings_total,
distance: haversineDistance(
{ lat: parseFloat(lat), lng: parseFloat(lng) },
{
lat: restaurant.geometry.location.lat,
lng: restaurant.geometry.location.lng,
}
),
photoUrl,
place_id: restaurant.place_id
};
});
const sortedRestaurants = restaurants
.sort((a, b) => a.distance - b.distance)
.slice(0, 10);
res.json(sortedRestaurants);
} catch (error) {
res.status(500).json({ error: error.message });
}
});
```
- Test Full Functionality:
-> Test the endpoint again to verify it returns a sorted list of nearby restaurants with all required details.
_**Congratulations!**_
You have successfully built a robust backend for the "Restaurant Finder" app. Your server can now handle requests, interact with the Google Places API, and return sorted restaurant data based on the user's location.
In the next part of this series, we'll focus on creating the frontend using React and TypeScript, ensuring that users have a seamless experience when searching for restaurants.
Read the next part {% cta https://dev.to/vb_nair/building-a-simple-full-stack-restaurant-finder-app-with-react-redux-nodejs-and-google-places-api-part-2-557a %} here {% endcta %}.
_Stay tuned!_
_**Fun Fact😉:**
This project started as a take-home assignment for a job interview 🎓. While working on it, I decided to create a tutorial to help others looking for references to build similar solutions 💪. Happy coding! 🧑💻_ | vb_nair |
1,893,748 | Building a REST API with Node.js and Express | Creating a REST API is a common task for developers, especially when building web applications. In... | 0 | 2024-06-19T14:46:11 | https://dev.to/andylarkin677/building-a-rest-api-with-nodejs-and-express-4j3p | webdev, node, learning, api | Creating a REST API is a common task for developers, especially when building web applications. In this tutorial, we will walk through the process of building a simple REST API using Node.js and Express. By the end, you'll have a basic understanding of how to set up routes, handle requests and responses, and connect to a database.
Prerequisites
Before we begin, make sure you have Node.js and npm installed on your machine. You can download and install them from the official Node.js website.
Setting Up the Project
Initialize the Project
First, create a new directory for your project and navigate into it:
mkdir rest-api
cd rest-api
Initialize a new Node.js project:
npm init -y
This will create a package.json file with default settings.
Install Dependencies
We will use Express for our web server framework and Mongoose to interact with MongoDB:
npm install express mongoose
Create the Server
Create a file named server.js and set up a basic Express server:
// server.js
const express = require('express');
const mongoose = require('mongoose');
const app = express();
// Middleware
app.use(express.json());
// Routes
app.get('/', (req, res) => {
res.send('Hello World!');
});
// Database connection
mongoose.connect('mongodb://localhost:27017/rest-api', {
useNewUrlParser: true,
useUnifiedTopology: true,
})
.then(() => console.log('MongoDB connected'))
.catch((err) => console.error(err));
// Start the server
const PORT = process.env.PORT || 5000;
app.listen(PORT, () => {
console.log(`Server is running on port ${PORT}`);
});
This code sets up a basic Express server, connects to a MongoDB database, and defines a single route that returns "Hello World!".
Defining the Data Model
For this tutorial, we will create a simple API to manage a collection of "books". Each book will have a title, author, and number of pages.
Create the Book Model
In the models directory, create a file named Book.js:
// models/Book.js
const mongoose = require('mongoose');
const BookSchema = new mongoose.Schema({
title: {
type: String,
required: true,
},
author: {
type: String,
required: true,
},
pages: {
type: Number,
required: true,
},
});
module.exports = mongoose.model('Book', BookSchema);
Creating CRUD Routes
Next, we will define routes to create, read, update, and delete books.
Set Up the Routes
In the routes directory, create a file named books.js:
// routes/books.js
const express = require('express');
const router = express.Router();
const Book = require('../models/Book');
// Get all books
router.get('/', async (req, res) => {
try {
const books = await Book.find();
res.json(books);
} catch (err) {
res.status(500).json({ message: err.message });
}
});
// Get a single book
router.get('/:id', getBook, (req, res) => {
res.json(res.book);
});
// Create a new book
router.post('/', async (req, res) => {
const book = new Book({
title: req.body.title,
author: req.body.author,
pages: req.body.pages,
});
try {
const newBook = await book.save();
res.status(201).json(newBook);
} catch (err) {
res.status(400).json({ message: err.message });
}
});
// Update a book
router.patch('/:id', getBook, async (req, res) => {
if (req.body.title != null) {
res.book.title = req.body.title;
}
if (req.body.author != null) {
res.book.author = req.body.author;
}
if (req.body.pages != null) {
res.book.pages = req.body.pages;
}
try {
const updatedBook = await res.book.save();
res.json(updatedBook);
} catch (err) {
res.status(400).json({ message: err.message });
}
});
// Delete a book
router.delete('/:id', getBook, async (req, res) => {
try {
await res.book.remove();
res.json({ message: 'Deleted Book' });
} catch (err) {
res.status(500).json({ message: err.message });
}
});
// Middleware to get a book by ID
async function getBook(req, res, next) {
let book;
try {
book = await Book.findById(req.params.id);
if (book == null) {
return res.status(404).json({ message: 'Cannot find book' });
}
} catch (err) {
return res.status(500).json({ message: err.message });
}
res.book = book;
next();
}
module.exports = router;
Integrate the Routes
Update server.js to use the new routes:
// server.js
const express = require('express');
const mongoose = require('mongoose');
const app = express();
const booksRouter = require('./routes/books');
// Middleware
app.use(express.json());
// Routes
app.use('/books', booksRouter);
// Database connection
mongoose.connect('mongodb://localhost:27017/rest-api', {
useNewUrlParser: true,
useUnifiedTopology: true,
})
.then(() => console.log('MongoDB connected'))
.catch((err) => console.error(err));
// Start the server
const PORT = process.env.PORT || 5000;
app.listen(PORT, () => {
console.log(`Server is running on port ${PORT}`);
});
Testing the API
Now, you can use a tool like Postman or curl to test the API endpoints:
GET /books: Retrieve all books.
GET /books/
: Retrieve a specific book by ID.
POST /books: Create a new book.
PATCH /books/
: Update a specific book by ID.
DELETE /books/
: Delete a specific book by ID.
Small conclusion
In this tutorial, we built a simple REST API using Node.js and Express. We covered setting up the server, defining a data model, and creating CRUD routes. This basic setup can be expanded and customized to fit more complex requirements and applications. Happy coding! | andylarkin677 |
1,893,631 | My journey into cybersecurity | My Journey in Cybersecurity, in chronological order @ 11 At the age of 11, I... | 0 | 2024-06-19T14:39:11 | https://dev.to/clom/my-journey-in-cybersecurity-1b07 | cybersecurity, aboutme, newbie |
## My Journey in Cybersecurity, in chronological order
## @ 11
At the age of 11, I began to grow fascinated by how computers work, and even more intrigued by how hackers hack these intrinsically designed machines. I was introduced to the world of ethical hacking by my close friend then, who he himself was quite well versed in the topic. However, he was not an ethical hacker himself, rather he wanted to be a black hat hacker. Yet, I supposed he was not willing to take the rest, as he never attempted to break into systems illegally, which was best for him as the world of hacking is not forgiving. Black hat hacking can cause one to be arrested, and result in fines and/or even up to years in prison.
## @ 12
At the age of 12, I created my first Metasploit Payload, which I was attempting to use to hack the webcam on my brother's computer. However, perhaps something went wrong within the payload itself, or my brother's firewall blocked the payload, and consequently I was unable to successfully hack his webcam. I suppose my brother had already known this before hand, because he seemed confident in allowing me to try install the payload. After this incident, I spent several days trying to fix the payload, without realizing that there were other possibilities like the firewall or antivirus blocking the payload. At this point of time, I also did not know what was a port, hence I also failed to realize that the port my payload was using could have been closed.

As a result of lacking ports and computing knowledge, I also tried to connect to my brother's computer via RDP(Remote Desktop Protocol), without knowing that port 3389 used for RDP was closed by his firewall, and RDP being disabled by default on his computer. It was only after trying for several hours(and failing), did I finally attempted to look up the internet for a solution to my problem. _Voila!_ I had found the solution, which was to open the port on the firewall and enable RDP! However, it was too late because my computer usage time for the week was used up. Consequently, I forgot about this short term goal of mine, until today.

## @ 13-14
During ages 13-14, I was mainly scouring hacking forums to see how hackers think and act, and learning about the various ways they exploited systems and vulnerabilities. What shocked me was how easy it was to exploit these vulnerabilities, owing to the rise of automated hacking tools in the contemporary digital era. I fondly remember one of these tools being SQLiDumper - Which was made by AngelSecurityTeam. The tool enabled users to perform operations like Dork Searching , Vulnerability Scanning ,SQL Injection , XSS , LFI and RFI. Furthermore, it was in a GUI(Graphic User Interface), which allowed beginner 'hackers' to utilise the tool and exploit vulnerable databases, without even knowing how to do it the manual way in Linux by using the shell. Although I believe the tool was made for penetration testing by ethical hackers, many malicious users used this tool to perform data-breaches on various websites. I also noticed the uprise of hacking tools being sold by programmers, like account checkers and database dumpers/dorking tools. Unlike SQLiDumper, these tools were made by malicious creators, for malicious users, who could access these tools without prior knowledge to hacking at all. Of course, this was highly worrying, but I could not do much to stop these malicious users nor creators. Dare not I dabble in such vices, because I know the legal consequences of doing so.
## @ 15
I did not dabble much into cybersecurity when I was 15, as I was busy preparing and studying for my upcoming important exams. It was difficult to balance between studying and leisure during this period of time, hence I made the difficult choice to put cybersecurity aside temporarily for a while, to focus more on my studies. However, the time came during around November that year, when my mother's bank account was hacked by malicious actors. This rekindled the spirit in me for cybersecurity, as I realised how vulnerable we were, even with the utmost security protocols being put in place. 2FA and OTP are no challenge for black hat hackers in our era, hence I hopped back onto the bandwagon of cybersecurity during the End-Of-Year holidays.
## @ 16
Currently at 16, I aim to pursue a degree in Cybersecurity, to achieve my aim of safeguarding the collective data of innocent people, and ultimately building a more secure and safe digital haven for everyone. I have tried using HacktheBox and PicoCTF to refresh my knowledge about cybersecurity, and free courses online from Coursera, for example, do really help a lot in this aspect. I have also recently participated in a CTF competition and workshop held by YCEP(Youth Cyber Exploration Program) held in conjunction with Ngee Ann Polytechnic, and am looking forward to attend my next CTF held by CISCO this coming weekend. Although the prospect of winning is nice, it's not my ultimate goal, as I solely wish to gain knowledge and become more well versed in the field.
## Conclusion
My journey throughout cybersecurity thus far has been quite eventful, with there being many ups, but also many downs. What I have learnt from my journey is the importance of self-reflection, and how it is important to recognise our own mistakes and correct them. In fact, this is one of the reasons why I am writing this post; to reconcile with my past and reflect on what has built me up as a person so far. **Perseverance**, **Empathy**, and **Teamwork** are core values that are not only important in cybersecurity, but also in all aspects of life. Ultimately, my goal is to become a **Penetration Tester** in the future, as I enjoy being in the red team of cybersecurity. I may not be the best in what I do, but I am always aiming for greater heights, and not just settling for the status quo, because challenging ourselves is the only way to achieve growth.
> A Journey of a thousand miles, begins with a single step" -Lao Tzu
| clom |
1,893,747 | To Add a new header in second line of the existing file and 3 rd. column should be sum of the records | In source i am getting below data in csv format Comp CAD Amount 2135 CAD 156.56 2135 CAD ... | 0 | 2024-06-19T14:37:40 | https://dev.to/rahul_mahendru_45d895855b/to-add-a-new-header-in-second-line-of-the-existing-file-and-3-rd-column-should-be-sum-of-the-records-2a7g | help |
In source i am getting below data in csv format
Comp CAD Amount
2135 CAD 156.56
2135 CAD 171.33
2135 CAD 156.56
2135 CAD 156.56
2135 CAD 137.15
I need to add need header in Bottum of the old header
example
orignal header--->Comp CAD Amount
added_new header->Customer second_hader(with empty) sum(Amount)
Sum(amount) data will show like this
Comp code CAD Amount
Customer Invoice 778.16
2135 CAD 156.56
2135 CAD 171.33
2135 CAD 156.56
2135 CAD 156.56
2135 CAD 137.15
How I write the code in shell script and save the data in existing file
| rahul_mahendru_45d895855b |
1,893,746 | سایت معرفی دنس بت | سایت شرط بندی دنس بت یکی از قدیمیترین سایت های شرط بندی در ایران می باشد دقت داشته باشید که سایت شرط... | 0 | 2024-06-19T14:36:50 | https://dev.to/sasanmmmm2222/syt-mrfy-dns-bt-14gl | سایت شرط بندی [دنس بت](dancebet.pro) یکی از قدیمیترین سایت های شرط بندی در ایران می باشد دقت داشته باشید که سایت شرط بندی دنس بت انواع بازی ها را در زمینه های مختلف به کاربران خود ارائه میکند لازم به ذکر است که این سایت توسط نازنین همدانی مدیریت میشود و از این جهت میتوانید به این سایت برای واریزی ها و برداشت های خود به صورت کامل اطمینان کنید در واقع سایت شرط بندی دنس بت را میتوانید یکی از گزینه های اصلی و معتبر بدانید و بهترین و چشم نوازترین قسمت این سایت پشتیبانی همیشه آنلاین آن است که کمتر از چند دقیقه میتوانید جواب سوالات خود را دریافت کنید و تجربه ای جذاب از یک سایت شرط بندی ایرانی داشته باشید.
 | sasanmmmm2222 | |
1,893,745 | Integrating Stripe Payment Elements in Nuxt 3 | This guide will show you how to integrate Stripe's Payment Element into a Nuxt 3 application to... | 0 | 2024-06-19T14:36:31 | https://dev.to/jmkweb/integrating-stripe-payment-elements-in-nuxt-3-5d2j | stripe, nuxt, vue, javascript | This guide will show you how to integrate Stripe's Payment Element into a Nuxt 3 application to process payments for purchasing a cat. We'll cover setting up Stripe on both the client and server sides, and handling the payment process.
## Prerequisites
- A Nuxt 3 application set up.
- Stripe account with API keys (STRIPE_SECRET_KEY and STRIPE_PUBLIC_KEY).
## Step 1 - Install Stripe
Install the necessary Stripe packages:
```npm install @stripe/stripe-js stripe```
## Step 2 - Configure Environment Variables
Add your Stripe API keys to the .env file.
```
NUXT_STRIPE_PUBLIC_KEY=your_public_key_here
STRIPE_SECRET_KEY=your_secret_key_here
```
## Step 3: Create a Server API Endpoint
Create an endpoint to handle the creation of PaymentIntents. Create a file in this directory:`server/api/stripe.js`
```
import Stripe from 'stripe';
export default defineEventHandler(async (event) => {
const body = await readBody(event);
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY);
try {
const paymentIntent = await stripe.paymentIntents.create({
amount: Number(body.amount), // Amount in cents
currency: 'usd',
automatic_payment_methods: { enabled: true },
});
return {
client_secret: paymentIntent.client_secret
};
} catch (error) {
console.error('Error creating PaymentIntent:', error);
throw createError({
statusCode: 500,
statusMessage: 'Error creating payment intent',
});
}
});
```
## Step 4: Set Up the Basic Nuxt Page
Create a component for purchasing a cat that integrates the Payment Element and handles the payment process.
`pages/purchase/[catId].vue`
```
<template>
<div>
<h1>Purchase Cat {{ catId }}</h1>
<div>
<p>Select your favorite cat and proceed to checkout.</p>
</div>
<!-- Payment form for Stripe Payment Element -->
<form @submit.prevent="pay">
<div class="border border-gray-500 p-2 rounded-sm" id="payment-element"></div>
<p id="payment-error" role="alert" class="text-red-700 text-center font-semibold"></p>
<button
:disabled="isProcessing"
type="submit"
class="mt-4 bg-gradient-to-r from-[#FE630C] to-[#FF3200] w-full text-white text-[21px] font-semibold p-1.5 rounded-full"
:class="isProcessing ? 'opacity-70' : 'opacity-100'"
id="processing"
aria-label="loading"
>
<p v-if="isProcessing">I'm processing payment</p>
<div v-else>Buy Now</div>
</button>
</form>
</div>
</template>
<script setup>
import { ref, onMounted } from 'vue';
import { useRoute } from 'vue-router';
import { loadStripe } from '@stripe/stripe-js';
import { useRuntimeConfig } from '#app';
// Accessing environment variables
const config = useRuntimeConfig();
const stripePk = config.public.STRIPE_PUBLIC_KEY;
const route = useRoute();
const catId = route.params.catId; // Get the cat ID from the URL
const isProcessing = ref(false);
let stripe;
let elements;
let paymentElement;
let clientSecret;
const total = 2000; // Example fixed amount for purchasing a cat in cents ($20)
onMounted(async () => {
await initializeStripe();
});
const initializeStripe = async () => {
stripe = await loadStripe(stripePk);
// Create a payment intent on your server
const res = await fetch('/api/stripe', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
amount: total // Fixed amount in cents
})
});
const result = await res.json();
clientSecret = result.client_secret;
elements = stripe.elements({ clientSecret });
// Create and mount the Payment Element
paymentElement = elements.create('payment');
paymentElement.mount('#payment-element');
isProcessing.value = false;
};
const pay = async () => {
isProcessing.value = true;
try {
const { error } = await stripe.confirmPayment({
elements,
confirmParams: {
payment_method_data: {
billing_details: {
name: 'John Doe',
email: 'john.doe@example.com',
phone: '+1234567890',
},
},
},
redirect: 'if_required' // Stay on the same page unless redirect is necessary
});
if (error) {
console.error('Payment error:', error);
document.querySelector('#payment-error').textContent = error.message;
isProcessing.value = false;
} else {
console.log('Payment succeeded');
// Handle post-payment success actions, like showing a success message
}
} catch (error) {
console.error('Payment processing error:', error);
document.querySelector('#payment-error').textContent = 'An error occurred. Please try again.';
isProcessing.value = false;
}
};
</script>
<style scoped>
/* Add any styles you want to apply to the purchase page */
</style>
```
This tutorial shows how to integrate Stripe's Payment Element into a Nuxt 3 application for a fictional online cat purchase scenario. You can further customize the form, handle additional payment methods, or expand functionality as needed.
For more detailed information, refer to the Stripe Payment Element documentation.
| jmkweb |
1,893,315 | Scaling to 125 Million Transactions per Day: Juspay's Engineering Principles | At Juspay, we process 125 million transactions per day, with peak traffic reaching 5,000 transactions... | 0 | 2024-06-19T14:32:56 | https://dev.to/hyperswitchio/scaling-to-125-million-transactions-per-day-juspays-engineering-principles-2bj1 | digitalpayments, rust, opensource, haskell | At Juspay, we process 125 million transactions per day, with peak traffic reaching 5,000 transactions per second, all while maintaining 99.99% uptime. Handling such enormous volumes demands a robust, reliable, and scalable system. In this post, we'll walk you through our core engineering principles and how they've shaped our engineering decisions and systems.
When designing systems at this scale, several challenges naturally arise:
1. **Reliability vs Scale:** Generally, as you scale, you tend to exhaust resources, which can affect system availability.
2. **Reliability vs Agility:** Frequent releases and system changes can impact system reliability.
3. **Scale vs Cost-Effectiveness:** Scaling requires more resources,
leading to higher costs.
### Core Engineering Pillars
We've been able to strike the right balance between these challenges by anchoring our tech stack on four pillars:
1. **Build zero downtime stacks:** Solve reliability by building redundancy at each layer to achieve almost 100% uptime.
2. **Horizontally scalable systems:** Solve scalability by building systems that can scale horizontally by removing bottlenecks.
3. **Build agile systems for frequent bug-free releases.**
4. **Build performant systems for low latency, high throughput transaction processing.**
### Adopting Haskell Programming Language
To achieve our goals, we've made a critical investment: adopting the Haskell programming language. Haskell, a functional programming language, offers performance akin to C, which is closer to the machine and processes transactions much faster. With Haskell, we've reduced our transaction **processing time to less than 100 milliseconds.**
Here's an example of a Haskell function that adds two numbers:
```
-- Define the add function
add :: Int -> Int -> Int
add x y = x + y
-- Main function to test the add function
main :: IO ()
main = do
let sum = add 3 5
putStrLn ("The sum of 3 and 5 is " ++ show sum)
```
This function showcases Haskell's concise and readable syntax.
Additionally, Haskell's readability, like English, enables non-technical folks to read the code easily, verify business logic, and sign off on features during development itself. As a strong-typed language, Haskell enforces a set of rules to ensure consistency of results, helping us preempt failures and achieve zero technical declines.
### Cache-based Shock Absorber
To handle scale and remove database bottlenecks, we introduced a horizontally scalable caching layer where real-time transactions are served from this cache layer and later drained to the database.

Scaling up and down the cache layer is relatively easy and cost-effective compared to scaling databases.
### Rapid Deployment and Release Frameworks
With rapid development comes the challenge of frequent production releases. To achieve agility through frequent releases without compromising reliability, we've built internal tools for automated releases with minimal manual effort. These tools monitor the performance of the release by benchmarking error codes against the previous stable version of the codebase:
- **ART (Automated Regression Tester):** A system that records production payloads and runs them in the UAT system against a new deployment to identify bugs early.
- **Autopilot:** A tool that creates a new deployment and performs traffic staggering from 1% onwards.
- **A/B testing
framework:** A system that monitors and benchmarks the new deployment's performance against the previous stable version. Based on this benchmark, the system automatically decides to scale up the traffic or abort the deployment.
### Hyperswitch: An Open-Source Payments Switch
We're carrying these learnings forward to our latest product, [Hyperswitch](https://github.com/juspay/hyperswitch), an open-source payments switch. Every line of code powering our stack is available for you to see. With Hyperswitch, our vision is to ensure every business has access to world-class payment infrastructure.
#### Conclusion
Through these investments, we've built reliable, agile, and scalable systems, enabling our engineers to solve exciting new problems and fostering a culture of systems thinking within the company. We encourage developers to engage with the open-source Hyperswitch project and explore the principles and technologies we've adopted to handle massive scale and high-volume transaction processing.
| gorakh13 |
1,893,744 | The Future of Business Communication: Beyond Emails and Phone Calls | For decades, emails and phone calls were the cornerstones of business communication. While they still... | 0 | 2024-06-19T14:32:55 | https://dev.to/gianna4/the-future-of-business-communication-beyond-emails-and-phone-calls-34d | businesscommunication | For decades, emails and phone calls were the cornerstones of business communication. While they still hold value, the rise of a remote workforce, the need for real-time collaboration, and ever-increasing customer expectations push businesses to explore new avenues. The future of business communication is about streamlining processes, fostering collaboration, and creating a more engaging experience for both employees and clients.
### The Downfall of Traditional Methods
Emails, once hailed as a revolutionary communication tool, are now notorious for overflowing inboxes and information overload. Lengthy email threads can be difficult to follow, and reaching the right person can be time-consuming. Phone calls, while offering the benefit of real-time interaction, often suffer from missed connections and the inability to share visual information easily.
### The Rise of Collaboration Tools
The modern business landscape thrives on collaboration. Instant messaging and project management tools like Slack, Microsoft Teams, and Asana offer a more dynamic and efficient way for teams to communicate. These tools allow for real-time messaging, file sharing, video conferencing, and task management, all within a single platform. This fosters a more collaborative environment, eliminates communication silos, and empowers geographically dispersed teams to work seamlessly together.
### AI and Automation: Powering Up Communication
Artificial intelligence (AI) is rapidly transforming the way businesses communicate. Chatbots powered by AI are now handling routine customer inquiries, providing 24/7 support, and even personalizing interactions based on customer data. This frees up human employees to focus on more complex tasks and fosters a more efficient customer service experience. Additionally, AI can automate tasks like scheduling meetings, summarizing emails, and translating languages, further streamlining communication workflows.
### The Power of Visual Communication
Visuals are a powerful tool for communication, and businesses are increasingly leveraging their potential. Video conferencing allows for face-to-face interaction, even across vast distances. Tools like screen sharing and collaborative whiteboards further enhance communication by enabling teams to brainstorm ideas and visualize concepts in real-time.
### The WhatsApp API: A Personalized Communication Channel
The rise of mobile messaging apps like WhatsApp has fundamentally changed how people communicate. Businesses are catching on to this trend and utilizing the [WhatsApp API](https://msg91.com/us/whatsapp) to connect with customers on a more personal level. This API allows businesses to integrate WhatsApp into their existing communication platforms, enabling them to send and receive messages, share documents, and offer customer support directly through the app. With over 2 billion users worldwide, WhatsApp presents a unique opportunity for businesses to reach a wide audience familiarly and conveniently.
### Building a Customer-Centric Communication Strategy
The future of business communication isn't just about internal collaboration; it's also about creating a seamless and engaging experience for customers. Businesses need to move beyond generic emails and embrace communication channels preferred by their target audience. This might include offering chat support via social media platforms, implementing SMS notifications, or leveraging video tutorials to explain complex topics.
### The Importance of Security and Compliance
As communication channels evolve, so too does the need for robust security protocols. Businesses need to ensure that all communication platforms they utilize meet industry regulations and protect sensitive data. Regular security audits and employee training on data security best practices are crucial for safeguarding confidential information.
### The Future is a Symphony, Not a Solo
The future of business communication isn't about replacing traditional methods with entirely new ones. Instead, it's about creating a symphony where different tools and channels work together seamlessly. Businesses can leverage email for formal communication, instant messaging apps for real-time collaboration, video conferencing for in-depth discussions, and the WhatsApp API for personalized customer interactions. The key lies in selecting the right tool for the right situation and ensuring a smooth and integrated communication experience for everyone involved.
By embracing new technologies, fostering a culture of collaboration, and prioritizing customer experience, businesses can navigate the changing landscape of communication and build a foundation for success in the years to come. | gianna4 |
1,893,743 | 33 front-end development tools developers use in 2024 | Today more than ever, we have choices. Some applications help heal the plants, we meet future... | 0 | 2024-06-19T14:29:42 | https://dev.to/momciloo/33-front-end-development-tools-developers-use-in-2024-2o3f | Today more than ever, we have choices. Some applications help heal the plants, we meet future partners through dating apps, we have AI that "does the work for us" and so on. Front-end development tools are no exception to this extensive range of options. The landscape is richer and more sophisticated than ever. Developers invest in premium tools to enhance workflows, boost productivity, and deliver advanced user experiences.
So, what are the must-have tools for front-end developers? If, you want to know which tools you should add to your toolbox, keep reading.
## What is front-end development?
To put it simply, everything you see on the website buttons, links, headers, and footer animation is part of front-end development.
To accomplish that, front-end developers use a wide spectrum of computer languages, tools, and frameworks to bring a website’s design and functionality to life. Some of the technologies are:
- HTML: Markup language that specifies the structure and content of web pages.
- CSS: Manage the visual appearance and layout of web page elements, such as colors, fonts, and spacing.
- JavaScript: A programming that allows websites to be dynamic and interactive.
## Key front-end development tools trends
Here are some key statistics and trends in front-end development tools for 2024, backed by data from recent reports:
- **React** continues to dominate, with 42.65% of developers using or interested in using it.
- **Next.js** has seen significant growth and is now a preferred choice for many developers.

- Astro is becoming a hot new thing
According to [Netlify's 2023 State of Web Development](https://www.netlify.com/pdf/the-state-of-web-development-2023.pdf/?submissionGuid=cb258080-cbaf-475f-8d39-302a526839ca), Astro is showing the highest growth in usage and satisfaction, with weekly downloads on NPM reaching 197,435.

- JavaScript domination
According to a recent [W3Tech survey](https://w3techs.com/technologies/details/cp-javascript), 98.9% of all websites use JavaScript as a programming language.

- **Progressive Web Apps (PWAs)** have become a standard, providing benefits such as offline functionality, push notifications, and app-like interactions.
- **GitHub** remains essential for collaborative coding and project management, with most developers using it for version control and code sharing.
- **NPM** continues to be a critical tool for JavaScript developers, facilitating code reuse and efficient package management.
### Other notable front-end development tools trends
- **AI and personalization**: AI-driven chatbots and personalization techniques are enhancing the UX by providing tailored content and interactions.
- **Low-code development**: Low-code platforms are becoming more popular, enabling rapid development and empowering non-developers to contribute to application development.
- **TypeScript** has risen in popularity and influence in the front-end development landscape.TypeScript has overtaken Java as the third most popular language across open-source projects on GitHub, with a 37% growth in its user base in 2023.
- **Micro frontends**: apply microservices principles to the frontend by breaking down a web application into smaller, independent modules or functions, providing flexibility and speed.
Considering all these trends let’s see which development tools are must-haves.
## Best front-end development tools
Sit back, enjoy, and learn, because I have prepared the ultimate list of web development tools by categories that are key to the entire software development ecosystem.
## Front-end frameworks and libraries
Frameworks offer a collection of files that map out for styling and structuring websites. With pre-built components such as navigation menus, buttons, and typography, there is no need to code those elements from ground zero.
### React
I need to start with the most popular one. [React](https://react.dev/), an open-source JavaScript is used to create user interfaces. Because of the features it offers, it excels at web development, making it one of the fastest libraries out there.
**Features:**
- Component-based architecture (reusable pieces of code)
- Virtual DOM ( allows quick rendering and improves performance)
- Modularity
- Pricing: Free.
### Vue.js
[Vue.js](http://vue.js/) is a declarative and component-based programming style built on top of the standards of HTML, CSS, and JavaScript. This method encourages an organized and modular code structure, which makes it easier to design UIs.
**Features:**
- Declarative rendering (you describe what the UI should look like for a given state, and Vue.js handles updating the DOM to match this description)
- Routing (allows navigation between different pages, without reloading)
- It has a very small size (12 to 21 KG)
- It combines the best features of Angular and React.
- Pricing: Free.
### Angular
[Angular](https://angular.dev/) is ideal for creating dynamic web applications because it extends static HTML to dynamic HTML subsequently making it more convenient for you to build dynamic and rich websites.
**Features:**
- Two-way data binding (automatically synchronizes data between the model and the view, meaning that the view layer of the architecture is always an exact representation of the model)
- Dependency injection (automatically provides components with the services they need, making the code modular and easier to test)
- MVC- (able to create a client-side application)
- Pricing: Free.
### Bootstrap
[Bootstrap](https://getbootstrap.com/)'s focus is on responsive and mobile-first websites. It provides pre-built HTML, CSS, and JavaScript design components to help developers in building user interfaces
**Features:**
- Responsive Grid system (provides a flexible layout system that adapts to various screen sizes and devices.)
- Pre-designed UI components (ready-made pieces of code for building common web elements like buttons, navigation bars, and forms)
- Bootstrap themes and customization (let you easily change styles and colors to match your design preferences.)
- Pricing: Free.
### jQuery
[jQuery](https://jquery.com/) simplifies HTML document traversal, event handling, and animation.
**Features:**
- DOM manipulation
- Event handling
- AJAX (simplifies asynchronous requests to load data from a server without refreshing the page)
- Pricing: Free.
### NextJS
[NextJS](https://nextjs.org/) is used to create server-rendered React apps and webpages. It offers code splitting, automatic [server-side rendering](https://thebcms.com/blog/static-site-generation-vs-server-side-rendering), and support for static exports out of the box. NextJs's versatility is further enhanced by its support for API routes and static site generation.
**Features:**
- Server-side rendering (renders web pages on the server before sending them to the client, improving performance and SEO)
- Code splitting (automatically splits your JavaScript code into smaller chunks, loading only the necessary parts to improve performance)
- Static site generation (pre-renders pages at build time)
- API routes (allow you to create serverless API endpoints)
- Pricing: Free.
### [Tailwind CSS](https://tailwindcss.com/)
A utility-first CSS framework that provides low-level utility classes to build custom designs directly in your markup.
**Features**:
- Utility-first CSS framework
- Responsive design
- Customization.
- Pricing: Free.
If you want to see how these frameworks work together check out our [code starters](https://thebcms.com/starters).
## Code Editors and Integrated Development Environments (IDEs)
While code editors give you fast access to edits and coding, IDEs offer features such as debugging and testing for a variety of development tasks.
### Sublime text
[Sublime Text](https://www.sublimetext.com/) is one of the most popular code editors because supports various programming and markup languages and provides features like syntax highlighting, auto-completion, multiple selections, and a powerful command palette.
**Features:**
- Command palette
- IntelliSense
- Simultaneous editing
- Customization
- Pricing: $99 one-time purchase.
### WebStorm
With [WebStorm](https://www.jetbrains.com/webstorm/), you can get started coding [JavaScript and TypeScript](https://thebcms.com/blog/should-you-use-javascript-or-typescript) right away. It provides better performance when dealing with large codebases.
**Features:**
- Smart editor
- built-in debugger
- seamless integration with various tools
- Pricing:$15.90/month.
### VS Code
According to the [2023 Stack Overflow developer survey](https://survey.stackoverflow.co/2023/#technology-most-popular-technologies), [Visual Studio code](https://code.visualstudio.com/) remains the most preferred IDE among developers.
Used for tasks like debugging, task running, and version control. It aims to provide the features developers need for a quick code-build-debug cycle.
**Features:**
- Source control (Git source control by default)
- syntax highlighting
- code refactoring
- real-time code completion
- Pricing: Free.
### Komodo Edit
[Komodo Edit](https://www.activestate.com/products/komodo-edit/) is a lightweight yet versatile text for dynamic programming languages such as Python, PHP, Ruby, and JavaScript.
**Features**:
- Syntax highlighting and auto-completion
- Customizable user interface
- Support for multiple programming languages
- Pricing: Free.
## Package Managers
Package managers are important tools for automating the process of installing, updating, customizing, and managing software libraries or packages within a software project.
### NPM
The Node package manager for JavaScript. [NPM](https://www.npmjs.com/) lets you publish, discover, install, and develop node programs, as well as discover code packages that you can reassemble according to your needs.
**Features**:
- Large registry
- Dependency management
- Custom script execution
- Pricing: Free.
### Yarn
Designed as an NPM alternative, [Yarn](https://yarnpkg.com/) focuses on speed, reliability, and security.
**Features**:
- Large registry
- dependency management
- custom script execution.
- Pricing: Free.
### Advanced Packaging Tool (APT)
[**APT**](https://linuxsimply.com/linux-basics/package-management/package-manager-examples/apt/) is mainly used to install, remove, and upgrade packages on Debian-based Linux Distribution. I. Apt is the front-end of the **dpkg** package manager and the file extension is **.deb**. Apt resolves dependencies automatically.
**Features:**
- Good for beginners
- Pinning Feature (allows one to install packages from multiple repositories)
- Intuitive commands
- **Pricing**: Free.
## Design tools
Design tools create stylish and intuitive UI. Developers use them for the implementation of design elements.
### Figma
[Figma](https://www.figma.com/) is a cloud-based design tool that allows collaboration between designers to create UIs for mobile and web applications.
- **Features**:
- Real-time collaboration
- Prototyping
- Reusable design components.
- **Pricing**: Free tier, paid plans start at $12/month.
### Sketch
[Sketch](https://www.sketch.com/) is a vector graphics editor for macOS primarily used for UI and UX design of websites and mobile apps.
**Features**:
- Vector editing
- Prototyping
- collaboration tools
- **Pricing**: $99/year.
### LottieFiles
The world's largest collection of free animations can be found at [LottieFiles](https://lottiefiles.com/), including thousands of unique designs that are suitable for both personal and commercial use. You can also change the color, height, width, animation speed, and so on to match the theme of your website.
**Features**:
- Code snippets generation
- High customization
- **Pricing**: paid plans start at $19,99/month, billed annually.
### Google Fonts
This is an open-source library of font families that includes an interactive web directory for browsing the library, and APIs for using the fonts via CSS and Android. The fonts are available in multiple weights, styles, and scripts and you can find everything you need to improve typography.
**Features**:
- Code snippets generation
- High customization
- **Pricing**: Free.
### Coolors
This color palette generator allows you to go through all the existing color palettes and search for palettes based on color, topic, and style. [Coolors](https://coolors.co/) also allow you to generate new palettes if the existing ones don't suit your design.
**Features**:
- Gradient maker
- Image converter
- Font generator
- **Pricing**: Free, Pro plan: $3 per month or $36 billed yearly.
## Version Control Systems (VCS)
It is possible to work concurrently with other members of your team on a project using VCSs, which provide a structured mechanism for collaborative coding. These are the most popular VCS tools:
### Git
[Git](https://www.git-scm.com/) collaborates with other developers on a project and tracks code changes. It enables you to successfully track updates made over time to the code repository by the developers.
**Features**:
- Branching and merging
- Data integrity
- Non-linear workflows support
- **Pricing**: Free.
### GitHub
On the other hand, GitHub is an internet-based platform that allows the hosting and sharing of Git repositories. It is the biggest git repository platform with many incredible features that let developers work together and host their code online.
**Features**:
- Public repositories
- Code search & code view
- Protected branches
- **Pricing**: Free tier, paid plans start at $4/month.
### Apache Subversion (SVN)
File and directory changes can be monitored over time using [Apache Subversion](https://subversion.apache.org/), a centralized VCS. With its tools, numerous developers can work on a project at once through collaborative software development.
**Features**:
- Centralized model
- Atomic commits
- User permissions and restrictions
- **Pricing**: Free.
## Responsive Design tools
Using responsive design tools, you can develop and test web applications that adjust to various screen sizes and devices. There are a couple of them that you should consider:
### Chrome DevTools
[Chrome DevTools](https://developer.chrome.com/docs/devtools) is a set of debugging tools included in the Chrome browser. It enables you to analyze, troubleshoot, and alter your websites quickly. It enables you to inspect and update HTML components and CSS properties in real-time, monitor network requests, analyze performance, and check local storage.
**Features**:
- Real-time HTML and CSS inspection
- Detailed monitoring
- Performance analysis tools
- **Pricing**: Free.
### Responsively
[Responsively](https://responsively.app/) is an open-source browser extension that allows you to visualize and interact with your websites across multiple device viewports.
**Top Features:**
- Multi-device visualization
- Real-time interaction with your website within the tool
- Customizable viewports
- Live CSS editing for immediate adjustments
- **Pricing**: Free.
### Viewport Resizer
[Viewport Resizer](https://chromewebstore.google.com/detail/viewport-resizer-responsi/kapnjjcfcncngkadhpmijlkblpibdcgm?hl=en) is a browser-based tool that enables testing and previewing how a website or web application responds to different viewport sizes, such as smartphones, tablets, and desktops.
**Features:**
- Customizable resolutions
- Toggle between portrait and landscape orientations
- Device pixel ratio adjustment
- Real-time information about the current viewport dimensions
- **Pricing**: Free.
## CSS Preprocessors
These tools provide advanced features such as variables, mixins, and nested rules, which make stylesheets more maintainable and modular. This improves code organization, and reuse, and minimizes redundancy.
### Sass, Less, and Stylus
[Sass](https://sass-lang.com/), [Less](https://lesscss.org/) and [Stylus](https://stylus-lang.com/), extends CSS by adding variables, nesting mixins, and other features. It's an excellent solution for organizing huge and complex stylesheets.
**Features:**
- Enable the use of variables in CSS
- Supports mixins, enabling the encapsulation and reuse of style patterns
- Nested rules support
- Price: Free
## Other frontend developer tools
### Grunt
[Grunt](https://gruntjs.com/) is a JavaScript runner used to automate repetitive activities. It is useful for automating routine processes such as minification, compilation, unit testing, and linting. Grunt provides over 6k different plugins for installing and automating specific tasks with minimal effort.
**Features:**
- Easy installation
- Automation
- Enable creating customizable plugins
- Price: Free
### Lighthouse
Lighthouse is a performance optimization tool that provides audits for performance, accessibility, progressive web apps, SEO, and more. Lighthouse enables you to optimize UX by offering insights into resource utilization, loading times, and rendering processes.
**Features:**
- Easy installation
- Automation
- Enable creating customizable plugins
- Price: Free
### Postman
[Postman](https://www.postman.com/) ****is an API platform for building and using APIs. It simplifies each step of the API lifecycle and streamlines collaboration.
**Features**:
- Intuitive interface for API requests
- Support for automated test scripts
- real-time collaboration
- Generates API documentation
- **Pricing**: Free tier, paid plans start at $12/month.
### ESLint
[ESLint](https://eslint.org/) is a static code analysis tool that detects problematic patterns in JavaScript code and guarantees compliance with coding standards and best practices.
**Features:**
- Highly configurable
- It enables the usage of plugins, making it flexible and adaptable to a variety of project requirements.
- Offers real-time feedback
- Provides a collection of preconfigured rules.
- Price: Free
### [Webflow](https://webflow.com/features)
Often considered as a [headless CMS, Webflow](https://thebcms.com/blog/webflow-vs-headless-cms-which-one-to-use) is actually a low-code website builder that with its drag-and-drop feature helps build responsive websites without coding. It generates clean, semantic HTML, CSS, and JavaScript.
**Features**:
- Visual web design
- Custom code
- Production-ready code.
- Responsive website templates
- Price: From free to very complicated pricing plans.
Find more about Webflow pricing plans: [Is Webflow too expensive](https://thebcms.com/blog/is-webflow-too-expensive)
Speaking about headless CMS, let’s see its role in the development process.
## Why Frontend developers should use a Headless CMS
Headless CMS enables a frontend agnostic approach because it decouples content management from content presentation, providing a bunch of benefits that streamline development. Here's how a headless CMS can significantly help front-end developers in their work.
## How a Headless CMS helps frontend developers in their work
### API-driven development
A headless CMS delivers content using APIs, often [RESTful](https://restfulapi.net/) or [GraphQL](https://graphql.org/), allowing front-end developers to fetch material dynamically. This strategy makes it easier to integrate content into apps and helps developers structure their data retrieval more efficiently.
For example, as a frontend developer, you can use a GraphQL query to fetch specific fields from multiple content types in a single request, optimizing the application's performance and reducing the number of network calls.
### Framework and tool-agnostic content management system
With a headless CMS, you are not tied to any specific frontend framework or technology. You can use the tools and libraries they prefer or that are best suited to the project. Whether you React, Vue.js, Angular, or even a static site generator like Gatsby, the headless CMS can provide content via APIs that integrate with any of these tools.
Learn more: NextJS headless CMS, Nuxt CMS, Gatsby CMS
### Content updates
Content updates can be made independently of the frontend codebase. Once content is updated in the CMS, it is instantly available through the API, allowing for real-time content management without redeploying the front-end application.
- **Example:** Marketing teams can update website content directly in the CMS, and the changes will reflect immediately on the website, enabling rapid content iteration and deployment.
### Component-based architecture
A headless CMS fits well with modern frontend development practices such as a [composable architecture](https://thebcms.com/blog/composable-architecture-guide). Developers can build reusable components that fetch and display content, making the codebase more modular and maintainable.
### Team collaboration
Just like Figma, a headless CMS comes with roles and permissions that facilitate collaboration between developers, content creators, and marketers. This segregation of duties allows each team to work more efficiently in their areas of expertise.
## Start building your front-end development tool kit
You have an ultimate list of tools, the only thing left to do is to choose your tech stack. I won’t tell you which one to use, I will just list the tools you need so you can do your work in the best possible way. Here’s the list of tools:
- Code editor
- JavaScript framework
- NPM
- Task runner
- CSS preprocessor
- Design tool
- Prototyping tool
- VCS
- Performance optimization tool
- API integration tool
And last, but not least, consider [BCMS headless CMS](https://thebcms.com/), as a collaborative front-end CMS platform with integrated UI development and content modeling tools. Create, share, and improve your code seamlessly to get further, faster, and more modern websites and app development. | momciloo | |
1,893,742 | Regression Testing: Ensuring Stability and Reliability in Software Development | In the fast-paced world of software development, maintaining the stability and reliability of... | 0 | 2024-06-19T14:29:30 | https://dev.to/keploy/regression-testing-ensuring-stability-and-reliability-in-software-development-27c2 | testing, webdev, productivity, ai |

In the fast-paced world of software development, maintaining the stability and reliability of applications through continuous changes is paramount. [Regression testing](https://keploy.io/regression-testing) is a critical practice that helps developers ensure that new code changes do not adversely affect the existing functionality of a software application. This article explores the concept of regression testing, its importance, methodologies, tools, and best practices.
What is Regression Testing?
Regression testing is a type of software testing that verifies whether recent code changes have not negatively impacted the existing functionality of an application. It involves re-running previously executed test cases to ensure that the application still behaves as expected after modifications such as enhancements, patches, or configuration changes.
Importance of Regression Testing
The significance of regression testing in software development cannot be overstated. Here are some key reasons why it is essential:
1. Ensures Software Stability: By re-testing existing functionalities, regression testing helps maintain the stability of the application despite continuous changes.
2. Detects Unintended Side Effects: It helps identify bugs or issues that may have been introduced inadvertently during new feature implementation or bug fixes.
3. Enhances Code Quality: Regular regression testing ensures high-quality code by catching regressions early in the development process.
4. Facilitates Continuous Integration/Continuous Deployment (CI/CD): Automated regression tests can be integrated into CI/CD pipelines, providing quick feedback to developers and ensuring that code changes do not break the build.
5. Improves Customer Satisfaction: By ensuring that updates do not disrupt existing features, regression testing contributes to a positive user experience and customer satisfaction.
Types of Regression Testing
There are several types of regression testing, each serving different purposes:
1. Corrective Regression Testing: This involves re-running test cases when no changes have been made to the existing functionality. It ensures that the unchanged parts of the application work as expected.
2. Retest-all Regression Testing: This comprehensive approach re-tests all existing test cases. It is thorough but can be time-consuming and resource-intensive.
3. Selective Regression Testing: This focuses on re-running a subset of test cases that are most likely to be affected by the recent changes, making it more efficient.
4. Progressive Regression Testing: This type of testing is performed when the codebase undergoes frequent changes. It involves adding new test cases for new features while re-running existing ones to ensure overall stability.
5. Complete Regression Testing: Conducted before a major release, it involves exhaustive testing of the entire application to ensure that everything works correctly.
Methodologies for Regression Testing
Implementing regression testing effectively requires a structured approach. Here are some common methodologies:
1. Manual Regression Testing: Testers manually re-execute test cases to verify that existing functionality is not affected by code changes. While this can be effective, it is labor-intensive and prone to human error.
2. Automated Regression Testing: Automated tools are used to execute regression test cases. This approach is faster, more reliable, and can be easily integrated into CI/CD pipelines.
3. Hybrid Approach: Combining manual and automated regression testing can provide a balance between thoroughness and efficiency. Critical tests can be automated, while exploratory and ad-hoc testing can be done manually.
Tools for Regression Testing
Several tools are available to facilitate automated regression testing. Some of the most popular ones include:
1. Selenium: A widely-used open-source tool for automating web applications. Selenium supports multiple programming languages and browsers.
2. JUnit: A testing framework for Java applications that supports unit and regression testing.
3. TestNG: Another testing framework for Java that provides additional features like parallel execution and data-driven testing.
4. PyTest: A robust testing framework for Python applications, known for its simplicity and powerful features.
5. Appium: An open-source tool for automating mobile applications, supporting both Android and iOS platforms.
6. Jenkins: A CI/CD tool that can be integrated with various testing frameworks to automate regression tests as part of the build process.
7. Katalon Studio: An all-in-one test automation solution for web, mobile, API, and desktop applications.
Best Practices for Regression Testing
To maximize the effectiveness of regression testing, it is important to follow best practices:
1. Prioritize Test Cases: Identify and prioritize test cases based on their criticality and the likelihood of being affected by recent changes.
2. Maintain a Regression Test Suite: Keep an up-to-date regression test suite that covers the core functionality of the application. Regularly review and update the suite to include new test cases and remove obsolete ones.
3. Automate Where Possible: Automate repetitive and time-consuming test cases to increase efficiency and reduce the risk of human error.
4. Integrate with CI/CD: Incorporate regression testing into your CI/CD pipeline to ensure continuous feedback and early detection of issues.
5. Use Version Control: Maintain version control of test cases and scripts to track changes and roll back if necessary.
6. Monitor Test Results: Regularly review test results to identify patterns, detect flakiness, and address recurring issues.
7. Perform Root Cause Analysis: When a regression is detected, perform a root cause analysis to understand why it occurred and prevent similar issues in the future.
Challenges in Regression Testing
Despite its benefits, regression testing presents several challenges:
1. Time and Resource Intensive: Comprehensive regression testing can be time-consuming and require significant resources, especially for large applications.
2. Test Maintenance: Keeping the regression test suite up-to-date with the evolving codebase can be challenging and requires ongoing effort.
3. Flaky Tests: Automated tests can sometimes produce inconsistent results due to timing issues, dependencies, or other factors, leading to "flaky" tests that undermine trust in the test suite.
4. Coverage Gaps: Ensuring that the regression test suite provides adequate coverage without becoming unwieldy is a delicate balance.
Conclusion
Regression testing is an essential practice in software development that ensures the stability and reliability of applications amidst continuous changes. By re-running previously executed test cases, it helps detect and fix unintended side effects of code modifications. Implementing effective regression testing requires a combination of methodologies, tools, and best practices to maximize its benefits while addressing its challenges. As software development continues to evolve, regression testing will remain a critical component in delivering high-quality, reliable applications. | keploy |
1,891,154 | JavaScript: O que é Symbol? | Eae gente bonita, beleza? Continuo a jornada de "aprofundamento" em JavaScript e quanto mais eu... | 0 | 2024-06-19T14:28:32 | https://dev.to/cristuker/javascript-o-que-e-symbol-38l0 | javascript, braziliandevs, beginners, node | Eae gente bonita, beleza? Continuo a jornada de "aprofundamento" em JavaScript e quanto mais eu aprendo mais eu quero escrever e compartilhar com vocês. Dessa vez vou falar um pouco sobre o ``Symbol`` que para mim o motivo da sua existência sempre foi um mistério.

## Tabela de conteúdos
* [O que é Symbol](#o-que-é-symbol)
* [Criando propriedades "privadas"](#criando-propriedades-privadas)
* [Mas por que "privado"](#mas-por-que-privado)
* [Symbols e comportamentos padrão](#symbols-e-comportamentos-padrão)
* [Referências](#referências)
## O que é Symbol
Começando do começo, o Symbol é um tipo primitivo no JavaScript, ele é usado para criar valores, funções privadas e até mesmo para interceptar um comportamento padrão de um objeto no JavaScript.
> Mas não se esqueça de uma coisa, no JavaScript TUDO é objeto.
Apenas para confirmar que o symbol é um tipo primitivo
```javascript
typeof Symbol("foo") === "symbol"; // true
```
## Criando propriedades "privadas"
Com o Symbol podemos criar algumas propriedades quase privadas. Abaixo um exemplo
```javascript
const uniqueKey = Symbol("userName");
const user = {};
user["userName"] = "value normal object";
user[uniqueKey] = "value from symbol";
console.log('getting object', user[uniqueKey]); // "value from symbol"
console.log('getting object', user["userName"]); // "value normal object"
console.log('getting object', user[Symbol("userName")]); // undefined
```
No exemplo nós criamos a `uniqueKey `com o `Symbol `com o valor `userName` e um objeto `user`. Primeiro colocamos um valor na propriedade `user["userName"]` e depois fazemos a mesma coisa com a propriedade `user[uniqueKey]` e você pode pensar, bom o symbol tem o mesmo valor que é userName, então o valor foi sobrescrito mas na verdade não. O Symbol cria um outro endereço de memória e uma propriedade diferente.
Ok, então se eu criar um Symbol com o mesmo valor eu consigo acessar? Errado novamente. Como disse o Symbol cria um novo endereço na memória logo o terceiro console log retorna undefined. Por isso que com ele nós podemos criar variáves e métodos privados, pois os mesmos só serão acessados se o Symbol usado for exportado junto.
## Mas por que "privado"
Bom imagino que deve ter percebido o uso excessivo das aspas em privado isso acontece porque nós podemos saber quais são o symbols existentes. Vamos a mais um exemplo:
```javascript
const uniqueKey = Symbol("userName");
const func = Symbol("soma");
const user = {};
user[uniqueKey] = "value from symbol";
user[func] = (n1, n2) => n1+ n2;
console.log('symbols', Object.getOwnPropertySymbols(user));// symbols [ Symbol(userName), Symbol(soma) ]
```
Como podemos ver no exemplo acima os Symbols desse objeto não são privados. Com um simples método do `Object` nós podemos ver o Symbols existentes, porém não podemos acessá-los ou ler suas implementações.
## Symbols e comportamentos padrão
Com o Symbol nós podemos alterar alguns comportamentos padrão dos objetos, como por exemplo como um objeto será lido por uma função de iterator ou convertido para string.
Vamos aos exemplos!
```javascript
const obj = {
items: ['c', 'b', 'a'],
}
```
Para podermos ler o objeto acima em um iterator(for, forEach, map) nós precisariamos usar alguma função como Objeto.entries ou Object.keys para transformar as chaves em um array e ler propriedade por propriedade.
Mas e se quiséssemos que esse objeto fosse lido de forma correta por um iterator? Bom nós podemos mudar esse comportamento, como no exemplo abaixo
```javascript
const obj = {
[Symbol.iterator]: () => ({
items: ['c', 'b', 'a'],
next() {
return {
done: this.items.length === 0,
value: this.items.pop()
}
}
})
}
```
Dessa forma podemos apenas chamar uma função de iterator e o nosso objeto será lido sem problema algum. O mesmo vale para `toString` por exemplo e por ai vai. Caso queira saber recomendo pesquisar sobre prototypes no JavaScript.
## Referências
[Symbol - MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol)
-------
Espero que tenha sido claro e tenha ajudado a entender um pouco mais sobre o assunto, fique a vontade para dúvidas e sugestões abaixo!
Se chegou até aqui, me segue la nas [redes vizinhas]
(https://cristiansilva.dev/).
<img src="https://media.giphy.com/media/xULW8v7LtZrgcaGvC0/giphy.gif" alt="thank you dog" /> | cristuker |
1,893,741 | How to Improve Development Efficiency with PHP 8 | PHP 8 is a significant version of the PHP language, introducing many new features and improvements... | 0 | 2024-06-19T14:28:10 | https://dev.to/servbay/how-to-improve-development-efficiency-with-php-8-1c02 | webdev, php, beginners, programming | PHP 8 is a significant version of the PHP language, introducing many new features and improvements aimed at enhancing development efficiency, performance, and overall language quality. In this article, we will explore how PHP 8 promotes development efficiency through various features and language enhancements.
### Enhanced Type System
PHP 8 introduces an enhanced type system, including named arguments, improved type declarations, and support for Union Types. These improvements make the code clearer and reduce the likelihood of runtime errors caused by type issues. Enhanced type declarations also help IDEs provide better code suggestions and static analysis, improving the development experience.
```php
// Named arguments
function greet(string $name, string $greeting): string {
return "$greeting, $name!";
}
// Union Types
function processValue(int|float $value): void {
// Processing logic
}
```
### New Language Feature: Match Expression
PHP 8 introduces the match expression, a more powerful and flexible alternative to the switch statement. The match expression allows you to perform pattern matching based on the value of an expression and return the corresponding result. This makes the code more concise and readable, especially when dealing with multiple conditions.
```PHP
$result = match ($status) {
'success' => 'Operation was successful',
'failure' => 'Operation failed',
'in_progress' => 'Operation is still in progress',
};
```
### Nullsafe Operator
PHP 8 introduces the nullsafe operator (?->), a variant of the null coalescing operator (??). This makes it more convenient to handle objects that might be null, avoiding cumbersome null checks.
```PHP
// In PHP 7 you might write:
$length = $obj->getNestedObject()->getString()->length ?? 0;
// In PHP 8 you can simplify it to:
$length = $obj?->getNestedObject()?->getString()?->length ?? 0;
```
### Attributes
Attributes are a new feature in PHP 8 that allow you to add metadata to classes, methods, properties, etc., in a declarative manner. This makes the code more concise and improves readability.
```PHP
#[Route("/api/users", methods: ["GET"])]
class UserController {
#[Inject]
private UserService $userService;
#[Authorize("ADMIN")]
public function getUser(int $id): JsonResponse {
// Processing logic
}
}
```
### JIT Compiler
PHP 8 introduces the Just-In-Time (JIT) compiler, which can dynamically compile PHP code into native machine code, improving execution efficiency. The JIT compiler can significantly boost performance, especially in computation-heavy tasks.
### String and Array Improvements
PHP 8 introduces a series of string and array improvements, including new string functions and array syntax sugar. For example, the str_contains function checks if one string contains another string, and the array keyword can be used to concisely create arrays.
```PHP
// String improvement
if (str_contains($haystack, $needle)) {
// Contains logic
}
// New array syntax sugar
$array = [1, 2, ...$anotherArray, 4, 5];
```
### Conclusion
PHP 8 significantly enhances development efficiency through the introduction of new language features, an enhanced type system, and performance improvements. PHP 8.4 is expected to bring even more optimizations and powerful features. According to official sources, PHP 8.4 will be released on November 21, 2024, and many developers are eagerly awaiting this version.
If you want to experience PHP 8.4 early, you can do so through [ServBay](https://www.servbay.com), which includes PHP 8.4 (Dev). Installation is just a click away, and you can download it for free from the ServBay website if interested.

Download: https://www.servbay.com
---
Got questions? Check out our [support page](https://support.servbay.com) for assistance. Plus, you’re warmly invited to join our [Discord](https://talk.servbay.com) community, where you can connect with fellow devs, share insights, and find support.
If you want to get the latest information, follow [X(Twitter)](https://x.com/ServBayDev) and [Facebook](https://www.facebook.com/ServBay.Dev).
Let’s code, collaborate, and create together!
| servbay |
1,893,738 | Best software training institute in Indore For IT Training | If you want to learn more about our courses and batch details, checkout It Training Indore and learn... | 0 | 2024-06-19T14:22:46 | https://dev.to/ittrainingindore01/best-software-training-institute-in-indore-for-it-training-1b8c | it, training, ittrainingindore, javascript | If you want to learn more about our courses and batch details, checkout It Training Indore and learn the best [software training institute in indore](https://www.ittrainingindore.in/).
Enroll Today
Take the first step towards a successful career in software development with IT Training Indore. Visit our [web development coures](https://www.ittrainingindore.in/course-category/web-development-course-details/) to learn more about our courses and enroll today. With our expert training and support, you can achieve your career goals and excel in the dynamic field of software development. Your future starts here! | ittrainingindore01 |
1,893,623 | Create a Circular progress bar using HTML, CSS and JS. | I am going to give you a easy way to create a circular progress bar using html, css and... | 0 | 2024-06-19T14:09:38 | https://dev.to/sunder_mehra_246c4308e1dd/create-a-circular-progress-bar-using-html-css-and-js-2p0k | javascript, html, css, animation | I am going to give you a easy way to create a circular progress bar using html, css and javascript.

**HTML:**
Create two div, with .outer and .inner class name.
```html
<div class="outer">
<div class="inner">0%</div>
</div>
```
**CSS:**
I am using flex box to center the inner div and content of div.
1- for .outer div
```css
.outer{
width: 100px;
height: 100px;
margin: 10px auto;
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
background: conic-gradient(red 180deg, rgb(255, 255, 255) 0deg);
box-shadow: -5px -3px 14px 1px #000000, 0px 1px 14px 0px #ffffff;
}
```
2- For .inner div
```css
.inner{
width: 80px;
height: 80px;
background-color: white;
border-radius: 50%;
display: flex;
justify-content: center;
align-items: center;
box-shadow: inset 0px 0px 4px 0px black;
font-weight: bolder;
font-size: 1rem;
}
```
**Note:**
_I am using background as conic-gradient for OUTER div to make progress bar and INNER div will have same background as root background to blend in._
`background: conic-gradient(red 180deg, rgb(255, 255, 255) 0deg);`
Once done you will get this result as we have given 180deg.

**Now we want to make it dynamic.**
So to make it dynamic or to add animation feel we will use below javascript code.
I have added comments to know the definition of each line.
```js
//get the refrence of div in js using query selector for both inner and outer div
const progressbar= document.querySelector(".outer");
const innerbar= document.querySelector(".inner");
//make initial value as 0
var currentdeg=0;
//define a interval which will run with the interval of 15 milisecond [increase or decrease the value of
//timer as slow or fast you what your progress bar to be]
const myInterval= setInterval(()=>{
//increment degree with the value of 1 each time function is called
currentdeg=currentdeg+1;
//convert the degree to the percentage so that you can update value in html
var deg= (currentdeg/360)*100;
//increment the degree value
progressbar.style=`background:conic-gradient(red ${currentdeg}deg, rgb(255, 255, 255) 0deg);`;
//update the inner div value as percentage
innerbar.innerHTML=`${Math.floor(deg)}%`;
//stop or clear interval at particular value here i am using 60 % as max value
//after which our interval will stop
if(deg>60)
{
clearInterval(myInterval);
}
},
15);
```
Thank you.
Please feel free if you have any queries.
| sunder_mehra_246c4308e1dd |
1,893,737 | Weekend Recap: Solana’s Loyalty Platform, Buterin Backs TiTok AI, XLink Partners Fireblocks and Ancilia | Elon Musk’s social network X is not doing “enough” to prevent the spread of cryptocurrency fraud on... | 0 | 2024-06-19T14:22:09 | https://36crypto.com/weekend-recap-solanas-loyalty-platform-buterin-backs-titok-ai-xlink-partners-fireblocks-and-ancilia/ | cryptocurrency, news | Elon Musk’s social network X is not doing “enough” to prevent the spread of cryptocurrency fraud on the platform. Such an opinion was shared by Binance co-founder Yi He, who recently [asked](https://x.com/heyibinance/status/1801732167917256859) how the billionaire owner is going to deal with this problem. However, this was not the hottest news of the week.
**Buterin Supports TiTok AI**
Ethereum co-founder Vitalik Buterin has endorsed TiTok AI for its potential application in the blockchain. We are not talking about the social network TiTok but about Token for Image Tokenizer, a new method of image compression that makes them more practical for storing on the blockchain.
On his Warpcast account, Buterin [called](https://warpcast.com/vitalik.eth/0x1389d35c) the image compression method a new way of “encoding a profile photo.” He also noted: _“320 bits is a hash. Small enough to go on chain for every user.”_
The co-founder became interested in the method after reading a [post](https://x.com/Ethan_smith_20/status/1801493585155526675) in X published by Ethan, a researcher at Leonardo AI, an artificial intelligence-based image creation platform. The author described how the technology could help those interested in reinterpreting the high-frequency details of images to successfully encode complex visual objects into 32 tokens. For his part, Buterin expressed an opinion on how much easier it would make it for developers and creators to create profile images and non-fungible tokens (NFTs).
TiTok AI, developed in collaboration between ByteDance and the University of Munich, is characterized as an innovative one-dimensional tokenization framework that is significantly different from the dominant two-dimensional methods in use. The [White Paper ](https://arxiv.org/pdf/2406.07550)of the project describes the challenges faced by previous image tokenization methods such as VQGAN.
TiTok, using artificial intelligence, plans to overcome this problem with technologies that efficiently convert images into one-dimensional hidden sequences to provide a “compact hidden representation” and eliminate redundant regions.
**AI is Now on TikTok**
The social network TikTok may soon be filled with ads with “digital avatars” created by artificial intelligence. On June 17, the platform [announced](https://www.tiktok.com/business/en/blog/tiktok-symphony-ai-creative-suite#:~:text=Symphony%20Digital%20Avatars:%20a%20new%20way%20to%20scale%20your%20storytelling) the expansion of its Symphony advertising package with “stock avatars” and “artificial dubbing” functions.
According to TikTok, all avatars are created from videos of real paid actors licensed for commercial use. In addition, users will be able to use “voice and accent” to read the script voiced on the avatar using artificial intelligence.
The attached video demonstrates how the tool converts text into voiceover and can dub actors with voices in several languages, trying to imitate mouth movements in each language. The script itself can also be generated using artificial intelligence. The feature supports ten languages and dialects, including English, Spanish, Japanese, and Korean. The tool detects the language used and duplicates it in the user’s target language.
Today, artificial intelligence technology is becoming increasingly widespread. Influential companies are integrating AI into their operations to increase efficiency, optimize processes, and develop innovative products and services. One example is Microsoft, which has a multimillion-dollar [partnership](https://blogs.microsoft.com/blog/2023/01/23/microsoftandopenaiextendpartnership/) with OpenAI, the company behind the development of the famous ChatGPT chatbot. Their partnership aims to develop and promote cutting-edge research in this area and democratize AI as a new tool for companies and organizations in various industries.
One of the largest crypto exchanges in Europe, WhiteBIT, also uses artificial intelligence in its operations. In particular, to analyze big data, market trends, user behavior, transaction processing, etc. The financial company JPMorgan Chase also announced the use of artificial intelligence in its work. In particular, they use a neural network to obtain information about potential investments and speed up decision-making. Moreover, the head of the bank’s asset and wealth management department, Mary Erdos, recently [spoke](https://qz.com/jpmorgan-chase-ai-banking-training-mary-erdoes-1851488056#:~:text=According%20to%20Erdoes%2C%20JPMorgan%20bankers,work%20each%20day%2C%20she%20said.) about the implementation of engineering training for new employees to work with artificial intelligence.
**XLink Onboards Fireblocks, Ancilia to Prevent Hacks**
Following a recent security breach that stole $10 million in user funds, XLink, a Bitcoin bridge by Alex Lab, has [partnered](https://x.com/XLinkbtc/status/1802739896903159832) with Fireblocks and Ancilia. According to the company, the collaboration with Fireblocks will allow it to implement multi-party computing (MPC) technology to manage XLink’s wallet and smart contracts.
Chiente Hsu, CEO and co-founder of Alex told [Cointelegraph](https://cointelegraph.com/news/xlink-fireblocks-ancilia-partner-10-m-hack): _“[The] partnership with Fireblocks will implement two of three multiparty computation wallets to hold all these user assets, with one key held by the validator network of Bitcoin Oracle (that validates the XLink bridging events), another key held by Fireblocks and the last key held by Coincover who provides the disaster recovery service.”_
At the same time, the partnership with Ancilia will help ensure continuous monitoring and real-time threat detection, offering instant alerts and proactive measures to prevent hacks.
Hsu explained that the “source of the hack” was the leak of a private key with “administrator access” to a smart contract that stores users’ assets. He also added that the cooperation will maximize the security of users’ assets, which they have been planning for some time, but the recent incident has accelerated this process.
**Funding for Crypto Startups Exceeded $100 Billion**
The total amount of funding for crypto startups has crossed the $100 billion mark. [According](https://defillama.com/raises) to DefiLlama, since the end of May 2014, crypto projects have raised $101.36 billion in 5287 investment rounds.
A noticeable peak occurred in October 2021, when funding amounted to more than $7 billion. No other month has come close to this figure, although February 2022 – $3.67 billion – was the second highest in history.
The 2023 study shows that almost half of all funding comes from US investors, followed by the UK and Singapore. At the end of 2023 and in the first half of 2024, several high-profile funding rounds took place, further strengthening investor confidence. Specifically, Together.AI, Wormhole, Totter, and Eigenlayer received significant investments of more than $100 million each.
**Solana Labs Debuts Blockchain Loyalty Platform**
Solana Labs has [announced](https://x.com/solanalabs/status/1800875778848202975) the launch of Bond, a platform designed to increase customer engagement through direct customer interactions, digital collectibles, and more. In its publication, the company notes that Bond will provide brands, including non-cryptocurrency brands, with a platform “to create personalized, transparent, and engaging digital experiences that deepen customer connections and foster long-term loyalty.”
In addition, the technology will be able to eliminate the “critical limitations” of modern loyalty programs, namely the loss of connection with the end user if the product is ever resold or given away. The company promises that brands will not be required to have experience with blockchain, as the service will be available through a single application programming interface.
The platform uses the Solana blockchain to create collectible “digital twins” and limited edition products complemented by their real-life models to “encourage repeat purchases and increase overall customer value.”
By using digital product identification, customers can verify the authenticity of a product, and brands can track their items even if they are subsequently resold or given away. | deniz_tutku |
1,893,564 | Learn CSS with these Games | CSS can be frustrating to learn, but what is better than learning by playing fun and enjoyable games?... | 0 | 2024-06-19T14:16:33 | https://dev.to/douiri/learn-css-with-these-games-5e4m | webdev, css, beginners, learning | CSS can be frustrating to learn, but what is better than learning by playing fun and enjoyable games? That's why I want to share games that helped me in my CSS journey and others that you might find useful.
## [CSS Diner](https://flukeout.github.io/)
CSS Diner will help you master every CSS selector by selecting various items on the table. The game provides multiple use cases for each selector, allowing you to experiment and understand where to apply them. Additionally, it includes helpful hints if you get stuck.
## [Flexbox Zombies](https://mastery.games/flexboxzombies/)
In Flexbox Zombies, you activate your crossbow using the power of Flexbox to kill zombies. Following the instructions of your teacher, you'll learn Flexbox properties and techniques. This game combines engaging storytelling with practical Flexbox exercises to enhance your learning experience.
## [Flexbox Froggy](https://flexboxfroggy.com/)
Flexbox Froggy teaches Flexbox by having you help Froggy and his friends reach the lilypad using only flex properties. Each level presents a new challenge, requiring you to apply different Flexbox concepts. The game's progressive difficulty ensures that you build a solid understanding of Flexbox as you advance.
## [Knights of the Flexbox Table](https://knightsoftheflexboxtable.com/)
Knights of the Flexbox Table helps you learn Flexbox layout and Tailwind CSS by writing Tailwind classes to move knights to their proper chests from the dungeon. The game presents a medieval fantasy theme, making learning both Flexbox and Tailwind CSS enjoyable and immersive.
## [CSS Grid Garden](https://cssgridgarden.com/)
CSS Grid Garden is similar to Flexbox Froggy but focuses on CSS Grid. In this game, you help water the garden by using CSS Grid properties. Each level introduces new grid concepts, gradually increasing in complexity, ensuring that you gain a comprehensive understanding of CSS Grid layout.
---
Don't forget to share with us the games you know in the comment section 🙂 | douiri |
1,893,627 | Sticky Sessions: Benefits and Drawbacks | Sticky sessions are a common technique used to manage user sessions across multiple server nodes.... | 0 | 2024-06-19T14:12:38 | https://dev.to/rahulvijayvergiya/sticky-sessions-benefits-and-drawbacks-68n | webdev, devops, beginners, programming | Sticky sessions are a common technique used to manage user sessions across multiple server nodes. However, while they can offer some benefits, sticky sessions also come with significant drawbacks that can affect scalability, reliability, and overall performance of web applications.
## What are Sticky Sessions?
Sticky sessions, also known as session affinity, refer to a method where requests from a user are always directed to the same server during a session. This approach ensures that all session data is stored on a single server, simplifying session management by avoiding the need to share session state across multiple servers.
## Why Sticky Sessions are Used:
1. **Stateful Applications:** Some applications store user session data (like login status, shopping cart contents, etc.) in the memory of the server that handled the initial request. If subsequent requests go to different servers, the session data won’t be accessible, causing issues for the user.
2. **Consistency:** Ensures that a user's experience remains consistent and uninterrupted by keeping their session on the same server.
### How Sticky Sessions Work
In a typical load-balanced environment, a load balancer distributes incoming requests to various servers based on algorithms like round-robin, least connections, or random selection. With sticky sessions, the load balancer adds a layer of logic to route all requests from a particular user to the same server. This is often achieved using cookies or IP address mapping.
For example, when a user first accesses the application, the load balancer directs them to Server A and places a cookie in the user’s browser. On subsequent requests, the load balancer reads this cookie and ensures that the user is always routed back to Server A, maintaining a consistent session state.
## Benefits of Sticky Sessions
1. **Simplicity**: Sticky sessions simplify session management by keeping all user session data on a single server. This can be particularly beneficial for small-scale applications or those with a limited number of servers.
2. **Performance**: By keeping session data on a single server, sticky sessions can reduce the overhead associated with session data synchronization across multiple servers. This can potentially lead to faster response times for the user.
3. **Easy Implementation**: Implementing sticky sessions is relatively straightforward and can be done with minimal changes to the existing infrastructure. Most modern load balancers support sticky sessions out of the box.
## Drawbacks of Sticky Sessions
While sticky sessions can offer simplicity and improved performance in certain scenarios, they come with several notable drawbacks:
1. **Scalability Issues**: Sticky sessions can lead to uneven load distribution. If a particular server gets a high number of sticky sessions, it can become a bottleneck, while other servers remain underutilised. This uneven load can hinder the scalability of the application.
2. **Reliability Concerns**: If the server to which a user's session is bound goes down, all active sessions on that server are lost, leading to a poor user experience. High availability architectures typically aim to avoid single points of failure, but sticky sessions introduce this risk.
3. **Session Persistence Challenges**: Sticky sessions depend on the persistence of session data on a specific server. If sessions are long-lived, they can tie up resources on a particular server for extended periods, complicating server maintenance and upgrades.
4. **Complexity in Distributed Environments**: In a distributed environment, especially one using cloud services with auto-scaling capabilities, sticky sessions can become a challenge. As servers are dynamically added or removed, maintaining session affinity can complicate load balancing and infrastructure management.
5. **Single Point of Failure:** Without a way to replicate sessions, the server handling the session becomes a single point of failure.
## Alternatives to Sticky Sessions
Given the drawbacks of sticky sessions, many modern web applications opt for alternative approaches to session management that enhance scalability and reliability:
1. **Centralised Session Stores**: Instead of keeping session data on individual servers, centralised session stores like Redis or Memcached can be used. These stores allow session data to be accessed by any server in the pool, facilitating better load distribution and failover capabilities.
2. **Token-Based Authentication**: Using stateless authentication methods, such as JSON Web Tokens (JWT), can eliminate the need for server-side session storage altogether. The session data is stored in the token itself, which is passed back and forth between the client and the server, allowing any server to handle the request.
3. **Database-Backed Sessions**: Storing session data in a database provides a persistent and scalable solution. While this can introduce some latency due to database access, it ensures that session data is not lost if a server goes down and supports seamless scaling.
Conclusion
----------
Sticky sessions can be a useful tool for managing user sessions in a web application, but they come with trade-offs in terms of scalability and fault tolerance. Alternatives like distributed caching or token-based authentication can provide more robust and scalable solutions. | rahulvijayvergiya |
1,893,626 | I _____ hate arrays in C++! | Author: Anton Tretyakov Or why I think developers need to know about them but should not use... | 0 | 2024-06-19T14:12:12 | https://dev.to/anogneva/i-hate-arrays-in-c-25jg | cpp, programming, coding | Author: Anton Tretyakov
Or why I think developers need to know about them but should not use them\.

## Introduction
Do you remember the first time you put a pointer to the first array element to the *sizeof* operator, and your code stopped working the way you intended? It certainly doesn't come close to the thrill of sticking your fingers in a power socket but\.\.\.
Here's an array:
```cpp
int arr[5] = {1, 2, 3, 4, 5};
```
And here it turned into a pointer:
```cpp
int *ptr = arr;
```
The *[array\-to\-pointer](https://timsong-cpp.github.io/cppwp/n4950/conv.array#1)*[ *conversion* happened](https://timsong-cpp.github.io/cppwp/n4950/conv.array#1), we lost information about the array size and some nerve on top of that\. Don't get me wrong: before using an array, one should first learn how to "cook" it, just like any other feature of our favorite language\. But I don't like it when language rules seem to be constantly trying to trick the programmer and make things more complicated for no reason\.
I think the problem is that this infamous *array\-to\-pointer* *conversion* imposes a certain mindset\. It assumes that an array and a pointer to its first element are absolute equivalents and always lead to the same behavior when the same code is used\. However, arrays and pointers are far from being interchangeable\. Moreover, C\+\+ has contexts where you may regret using one instead of the other\.
## Pointer isn't array
First, let's get some very basic things straight from the start\. A pointer has its own type, namely the *compound* *type* of the *pointer\-to\-T\.* An array is a separate entity, so it has a different type than a pointer\. Why don't we check it right off the bat?
To determine the type, let's use the *typeid* built\-in operator and the *name\(\)* function that returns a string containing the type name\. The *typeid* operator works over RTTI, so the string differs depending on the compiler\. However, the type is still determined as the same one\.
<spoiler title="NOTA BENE for those who dislike typeid.">
Yes, *typeid* discards references and *cv\-*qualification\. We don't apply these concepts specifically in this part of the article\. If we had used *typeid* to find out how a type is deducted in templates, we'd be stepping on a massive rake\. Fortunately, there are no such rakes in our garden, we're just looking at the type of the explicitly declared variable\.
</spoiler>
We declare an array and look at its type:
```cpp
int arr[3] = { 1, 2, 3 };
std::cout << typeid(arr).name() << std::endl;
```
Our compiler defines it as *A3\_i*, an array of three elements of the *int* type\.
At the same time, we can use the *typeid* operator on a pointer to the first element of an array:
```cpp
int *ptr = arr;
std::cout << typeid(ptr).name() << std::endl;
```
In this case, *Pi*, which is a *pointer* *to* *int*, is printed to *stdout*\.
If the types are different, where does the confusion come from?
## Array\-to\-pointer conversion
The confusion in using arrays and pointers comes from the fact that C\+\+, as well as its direct ancestor C, has contexts where an array becomes a pointer\. Technically, it's hard to find contexts where an array is used by itself\! Let's leave the reasons for these sudden changes out of this article and just say thanks to good old [Dennis Ritchie](https://en.wikipedia.org/wiki/Dennis_Ritchie)\.
This is called *array\-to\-pointer* *conversion*\. It occurs in cases defined by the C\+\+ standard, causing confusion among beginners\. Things are complicated by the fact that the C\+\+ standard doesn't have a single list of cases where such conversion occurs\. We have to retrieve them throughout the text, as if we're completing another quest in some Korean MMORPG\.
### When conversion doesn't occur
Let's first look at the cases where conversion does not occur\.
#### The discarded\-value expression
The *discarded\-value expression* is an expression whose result is not used\. No conversion occurs in the example below:
```cpp
int arr[3] = { 1, 2, 3 };
arr;
```
#### typeid
As we've already seen at the very beginning of the article, the *typeid* operator can handle arrays and returns different strings for arrays and pointers\.
#### sizeof
The same applies to the *sizeof* operator\. It can count sizes of both arrays and pointers\. The code below works fine:
```cpp
int arr[3] = { 1, 2, 3 };
auto ptr = arr;
static_assert(
sizeof(arr) != sizeof(ptr)
);
```
#### References
References\. Operations that would otherwise lead to the *array\-to\-pointer* conversion would be performed on an array itself when references are used\. You can pass an array to a function by reference\. You can deduce a reference to an array in a template\. Referances are cool\. Be like a reference\!
Be careful with function overloading\! If a compiler has to choose between a function that takes a reference and a function that takes a pointer, it prefers not to choose at all\. The code below doesn't compile because both functions fit equally well:
```cpp
void foo(int (&a)[3]) {};
void foo(int *p) {};
int main()
{
int arr[3] = { 1, 2, 3 };
foo(arr);
}
```
### When conversion does occur
Back in C days, it was easier\. There were \(and still are\) several cases where the *array\-to\-pointer* *conversion* didn't happen\. So, in all other cases, it did\. C\+\+ is a slightly more complex language, so you can't get around this rule anymore\. Well, let's examine the cases where conversion occurs\.
#### Built\-in operators
We'll start with operators\. All these addition, subtraction, division, multiplication, indexing, and other built\-in binary and unary operators can't work with arrays\. If you try to add up two arrays, they convert to pointers\. This doesn't mean of course that we can add up two pointers\. This operation is meaningless at the very least, and at most not part of the standard\.
So, if that's how it works with built\-in operators, what about user\-defined operators?
#### Functions
Overloaded operators, like other functions, take arguments\. The standard says that an *array\-to\-pointer* *conversion* is applied to function arguments when an argument is an array\. This means that an array itself can't be passed to a function, only a pointer to its element can be\.
Even if you write a function parameter as an array:
```cpp
int foo(int arr[3]);
```
Inside the function, the *arr* parameter is a pointer\. The array isn't copied\.
#### Cast operators
When using *static\_cast* and *reinterpret\_cast*, we can't convert anything to an array: the output results in a pointer to an element\.
Strictly speaking, we can't convert something to an array using *const\_cast* and *dynamic\_cast* either\. The other thing is, these two simply don't compile when we try to perform such a conversion\.
#### Ternary operator
If the second or third operators of the ternary operator \(the one with the question mark\!\) are arrays, the operator returns a pointer instead of an array\.
#### Template parameters
As far as arrays are concerned, *non\-type* template parameters are similar to function parameters\. If we declare a parameter as an array, it turns out to be a pointer, actually:
```cpp
template <int arr[3]>
void foo();
```
#### Template arguments deduction
Arrays used as template arguments in a function call are deducted as pointers\. In the example below, the template parameter is deducted as a pointer:
```cpp
template <typename T>
void foo(T arr) {};
//....
int arr[3] = { 1, 2, 3};
foo(arr);
```
#### Conversion function template
The same applies to conversion function template\. Arrays become pointers:
```cpp
template <typename T>
struct A {
operator T() {
static int arr[3];
return arr;
}
};
```
Don't blame it on me\. I didn't force you to learn C\+\+\.
### Enough with the theory\. Let's shoot in a foot
I doubt you started reading this article for the sake of theoretical research that you, dear reader, have probably already done\. However, it was necessary to smoothly guide the narrative to real\-life aspects that can unexpectedly pop up in a programmer's text editor, making developers exclaim, "I hate C\+\+ arrays\!"

### Function that takes pointer to base class
Let's imagine we have two classes\. One is the base one, and the other derives from it:
```cpp
struct B {
char i;
};
struct D : B {
int y;
};
```
Besides that, there's a function specified somewhere nearby that traverses an array of base class objects:
```cpp
void foo(B *b, size_t size)
{
for(auto &&el : std::span(b, size)) {
std::cout << el.i << std::endl;
}
}
```
There's a catch: it takes a pointer and some size\. Such a composition gives some creative freedom to a careless programmer who may write something like the following code:
```cpp
int main()
{
D arr[3] = { {'a', 1}, {'b', 2}, {'c', 3} };
foo(arr, 3);
}
```
This is definitely UB\. But that's not the issue here, or at least not the only one\.
In the example above, the next element in the loop isn't calculated correctly\. Rather than displaying the next object *B::i *member, the program displays the *padding* *bytes*\. They follow the variable and were added by a compiler for alignment purposes\. In our compiler, *sizeof\(B\)* is one and *sizeof\(D\)* is eight\.
Now let's consider a case where both of these classes are polymorphic\. We add virtual functions to them:
```cpp
struct B {
char i;
B(char i) : i(i) {};
virtual void print() { std::cout << "BASE " << i << "\n"; }
};
struct D : B {
int y;
D(char i, int y) : B(i), y(y) {};
void print() { std::cout << "DERIVED " << i << " " << y << "\n"; }
};
```
We can [create an example](https://godbolt.org/z/YfKhM3zfn) where this change affects how the program looks and behaves\. In the linked example, you can see how the *virtual* keyword alone causes the elements in the loop to be calculated correctly\. However, UB remains\.
In the example above, the behavior is caused by the implementation of polymorphic classes used in the compiler\. A reasonable but not mandatory way to implement polymorphism is to add the *vtable* pointer to objects of a polymorphic class that points to a table containing various information about an object class\. Apparently, in our case, the pointer is added to the end of the structure, causing the compiler to align the entire structure size to that pointer\. To do this, the compiler adds 7 *padding* *bytes* after the *i* variable in the *B* class objects and 3 bytes after the same variable when using the *D* class objects \(since 4 bytes go to the *y* variable\)\. As a result, the size of both structures becomes the same, and the iteration runs correctly\. But if we change the type of the *y* variable to *long*, we won't be so lucky anymore\.
The big inconvenience here is that the compiler doesn't warn about this, since converting a pointer to a derived class to a pointer to the base class is supported by the language rules\. So, we can imagine a case where the code works with one compiler and platform \(even though it shouldn't\), and crashes under other circumstances\. If the function had accepted parameters like *std::array* or *std::span*, there would have been no issues\.
### Lambda
Let's look at the following code:
```cpp
#include <iostream>
int main()
{
int arr[3] = {1, 2, 3};
auto sizeof_1 = [arr] {
return sizeof(arr);
};
auto sizeof_2 = [arr = arr] {
return sizeof(arr);
};
auto sizeof_3 = [=] {
return sizeof(arr);
};
std::cout << sizeof_1() << std::endl;
std::cout << sizeof_2() << std::endl;
std::cout << sizeof_3() << std::endl;
}
```
What do we know about lambda expressions? Well, they have captures\. Captures capture \(\!sic\) variables by value \(=\) or by reference \(&\)\. Captures are implemented by the compiler creating a service class where each captured variable is a non\-static class field\.
If a variable is written without ampersand and yet isn't *this* variable, it's passed by value\. In the code above, all arrays are passed by value\. So, they are all cast using the *array\-to\-pointer* *conversion*\. This means that the program displays the same number three times\.
Alternatively, as we can read on [cppreference](https://en.cppreference.com/w/cpp/language/lambda), lambda data members that are in a capture expression without an initializer go through *direct\-initialization*\. If an array is captured, each of its elements is initialized via *direct\-initialization* in the ascending index direction\. So, the program displays the same numbers, just not the ones we had in mind\.
Or, as we can also read there, if there's an initializer, the variable to be captured is initialized in the way the initializer dictates\. The previously declared array can be used to initialize only a variable of the *pointer* *to* *that* *array* *element* type\. So, when we write *\[arr* *=* *arr\]* in the capture, the pointer to the first element is still captured, unlike in other ways of capture by value notation\.
This little detail is easy enough to overlook, so one may, for example, overwrite the external array elements when the second type \(mentioned above\) lambda expression is used\.
It seems logical, but there are still some mixed feelings\. Most importantly, we've found a C\+\+ context where we can still implicitly copy an array without resorting to library functions\!
Be careful, programmer: using a regular pointer instead of an array in this context displays the size of the pointer itself in all three cases\!
### Iteration
However, enough has been said about array copying\. Let's address iterating over it as well\.
There are currently two types of iteration in C\+\+: the classic *for* loop and its *range\-based* version\. We can use both to iterate over arrays\. Both have known issues with iterating through a pointer\.
We'll cover the nuances of using the classic *for* loop in the next chapter of the article\. In this one, however, we'll focus on its *range\-based* little brother\. Let's have a quick recap of how it works\.
```cpp
int arr[3] = {1, 2, 3};
for(auto &&element : arr) std::cout << element << std::endl;
```
Under the hood, it expands into a regular *for* loop that handles iterators\. This structure also works correctly if you create the *prvalue* array in place of the *arr* variable in the loop\. The temporary array is [bound to the *forwarding* reference](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/n4950.pdf) and exists until the loop ends\.
Some readers may have already started typing in the comments below that the *range\-for* loop example is incorrect because it uses library functions to get iterators\. Like, the *element* parameter would be obtained using the *std::begin* library function \(or *std::cbegin*, depending on the element *const*\-ness\), and the iterator pointing to the array boundary is obtained via *std::end* \(or *std::cend*\)\. Indeed, these functions have array overloads\. But be careful, programmer, because at the same link to the standard, you can read that iterating over arrays doesn't involve iterators—only good old pointers\.
At the same time, you will come a cropper trying to replace the array with the pointer\. The following code doesn't even compile:
```cpp
int *ptr = arr;
for(auto &&element : ptr) std::cout << element << std::endl;
```
If pointers in a *range\-based* loop lead to unbuildable code, using them in its big brother *for* can be even more nasty\.
### Iteration over multidimensional array
Let's say we have a multidimensional array—a matrix of integers, for example:
```cpp
int arr[2][2][2] = { 0, 1, 2, 3, 4, 5, 6, 7 };
```
Suddenly, we need to iterate over this array to change each value\. We know that when indexing an array of the *T\[N\]* type, we get back *T*\. In our case, *T* is *int\[2\]\[2\]*\. We can also apply the indexing operation to the obtained construction, thus getting an object of the *int\[2\]* type and once again finally reaching the desired *int*\. In fact, if we do it via regular *for* loops, we need three of those\.
We also know that the standard guarantees that array elements are arranged one after the other\. In fact, the rule applies recursively to all parts of a multidimensional array\. All elements of the *int\[2\]\[2\]* type are arranged in sequence and within them, all elements of the *int\[2\]* type are arranged in sequence, and so on\.
Of course, everybody knows that taking this logic too far is dangerous to the program health\. The code below is incorrect:
```cpp
#include <iostream>
int main()
{
int arr[2][2][2] = { 0, 1, 2, 3, 4, 5, 6, 7 };
for(size_t i = 0; i < 8; ++i) {
std::cout << arr[0][0][i] << std::endl;
}
}
```
Here we try to access elements of the *int* type via the very first *int\[2\]* sub\-array of the very first *int\[2\]\[2\]* sub\-array, thus going beyond no less than three array boundaries\. But elements of the *int* type are still in memory one after the other—the standard ensures it\! Indeed, it does, as well as ensuring that the behavior is undefined for the given code\.
UB [is *malum in se*\.](https://pvs-studio.com/en/blog/posts/cpp/1024/) What's worse is that the code may work quite well because the elements are actually arranged in a sequence\. Or it may crash pretty hard\. If you don't believe us, [you can see for yourself](https://godbolt.org/z/bEhdqecW4)\.
Great\! We know that adhering to the standard is a good thing\! Let's write three loops\. But what if the array is four\-dimensional, though? And what if it's five\-dimensional? What if templates come into play and the dimensionality can be whatever you want?
Do the dark forces possess some sorcery after all? Some magic that may allow us to get around the restriction, maybe\. Let's rewrite the code as follows:
```cpp
#include <iostream>
#include <type_traits>
int main()
{
int arr[2][2][2] = { 0, 1, 2, 3, 4, 5, 6, 7 };
auto ptr = reinterpret_cast<std::remove_all_extents_t<decltype(arr)>*>(arr);
for(size_t i = 0; i < 8; ++i) {
std::cout << ptr[i] << std::endl;
}
}
```
Let's say some evil genius decided to get the type of an element in a multidimensional array using *std::remove\_all\_extents\_t*\. Then they cast that array to a pointer to that element using *reinterpret\_cast*\. In fact, such intricacies result in an analog of the *flat\(\)* function in other programming languages\. That function squashes a multi\-dimensional array into a one\-dimensional one\. We even save on pointer arithmetic by adding only one index to *ptr* instead of three, as in the case of *arr*\.
Unfortunately, this is still UB\. In this case, in addition to crossing array boundaries, *strict* *aliasing* rules also come into play: we can't access an object using a type other than the one it was created with\. Another thing is that when we use the same compiler as in the previous example, the [current example](https://godbolt.org/z/eEsW93aKn) not only compiles but also runs without crashing, even with the sanitizer enabled\.
Be careful, friends, arbitrary conversions between array types won't keep a doctor away\!
### Array size
Much has already been said about the issues one may face when mixing up an array and a pointer to its first element in the *sizeof* operator arguments\. We won't repeat ourselves and invite you to read more about it [here](https://pvs-studio.com/en/blog/posts/cpp/1112/)\.
### What's next?
We hope you didn't take this article as a criticism of *built\-in* arrays in C\. After all, they had their place and their use there, due to the language particularities\.
However, we're talking about C\+\+ now\. So, it'd be wrong to leave an absolutely logical question unanswered at the end of the article\. This question goes something like, "What should we do in C\+\+ with such an irritating feature of the C language?"
Well, we couldn't come up with a better answer than "use *std::array* or *std::span*"\.
The issue with using a pointer to a base class when iterating over an array of derived class objects is solved by using *std::array* or *std::span*\. The compiler doesn't allow us to push an array of derived elements into an array of base elements\.
In "*lambdas*", we talked about the degenerate case of passing an array to *captures* by value*,* where the behavior is different\. Again, *std::array* helps\. In all cases where it's passed by value, all elements are fully copied\. When it comes to *std::span*, the elements aren't copied, again, in all three cases\.
"*Iterating*" over *std::array* and *std::span* works like a charm\. You can choose between the regular *for*, the *range\-based* one, or library functions\.
However, *std::array* and *std::span* can't help much with *iterating* *over* a *multidimensional* *array*\. If one tries hard enough, they can make similar mistakes with them too\. Moreover, it's a pain in the neck to declare multidimensional *std::array* and *std::span*\. Well, not every cloud has a silver lining, right? But if we want to declare multidimensional stuff in a compact way, and C\+\+ 23 is already used in the project, we can consider using *[std::mdspan](https://en.cppreference.com/w/cpp/container/mdspan)*\.

### Conclusion
That's it\. One should know how to use built\-in arrays, but here's a question: are they worth it when C\+\+ has safer alternatives? This question is rather rhetorical, maybe even philosophical\. We'll just hope that this short note has made it a little easier for you, dear reader, to answer it\!
And if not, feel free to leave your questions in the comments\!
Thank you for reading it to the end\! *El* *Psy* *Kongroo*\.
| anogneva |
1,893,576 | 5 Reasons Why Your Side Projects Fail to Make Money And How to Avoid Them | Introduction Hello there! If you're like many aspiring entrepreneurs (including me),... | 0 | 2024-06-19T14:10:12 | https://dev.to/wasp/5-reasons-why-your-side-projects-fail-to-make-money-and-how-to-avoid-them-4l5m | beginners, webdev, career, learning | ## Introduction
<img width="100%" style="width:100%" src="https://media3.giphy.com/media/v1.Y2lkPTc5MGI3NjExcjA0MGM0NjN0YjR3aGRicHM3YzUyc254ZmxxNjkxdzlnZWZ0NHRjbCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/ljtfkyTD3PIUZaKWRi/giphy.webp">
Hello there! If you're like many aspiring entrepreneurs (including me), you've probably had your fair share of bright ideas but struggled to turn them into profitable side projects. You're not alone. Many side projects fail to make money, and understanding why is the first step towards success. So let's dive into the common pitfalls of the solopreneur/indiehacker journey and learn how to avoid them.
Starting this journey, it's important to remember that failure is not the enemy. It is, in fact, a crucial part of the process. Yes, that’s the hard truth: No one ever succeeded by doing something first try.
Embracing failure and learning from it is what helps us avoid making the same mistakes in the future. So buckle up, because we're about to explore the common reasons why side projects fail and how to navigate around them.
## Mistake #1 - Not trying it out

This is a tweet from [@levelsio](https://twitter.com/levelsio), a successful solopreneur, making over $150K/month and it’s a great example to check out. The fear of failure often holds us back from taking that first step. Don't let this fear stop you! **It's better to try and fail than to never try at all.**
Moreover, keep in mind that not trying at all means you're missing out on valuable experiences and opportunities for growth. Even if your project doesn’t turn out to be profitable, the skills and experience you gain are really the point here. Whether it's enhancing your problem-solving abilities, learning more about a new market, or understanding its dynamics, these skills can be incredibly beneficial in your future projects and interviews.
So, the next time you have an idea for a side project, go for it! Allow your curiosity and passion to drive you, and don't let the fear of failure hold you back. Fail and learn with your mistakes, over and over again — this is the best way to grow as shown by [@levelsio](https://twitter.com/levelsio).
## Mistake #2 - Failed Ideation
<img width="100%" style="width:100%" src="https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExMzZ5ejdzZWFhbHl6anF1NXNtN2ZwdXJvbmE5NDBxdjZjY2J1Mm84cSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/PbzwVUojP4d8RcRgK0/giphy.webp">
You've got this idea. But was it formed through effective brainstorming and problem-solving? One common pitfall in side projects is rushing the ideation process. A thorough brainstorming process is crucial to ensure the viability of your idea.
Try to use your own lenses to filter out ideas that you don’t have that much in common with. The more native the problem is to you, the more obvious and doable the solution seems.
- Validate your idea: It's not enough to think it's good. You need to have at least some assurance that there's a market for it. Conduct surveys, ask people you trust, and gather as much initial data as you can.
- Make sure your idea solves a problem: A good business idea is one that fills a gap in the market or solves a problem that people are experiencing.
- Evaluate your resources: Do you have the skills, time, and money to turn your idea into a business? Be honest with yourself. Always remember that you can do an MVP (minimum viable product), but, if the MVP doesn’t give real value to the users, it won’t suffice.
Want an example? Check this [interview](https://wasp-lang.dev/blog/2023/02/14/amicus-indiehacker-interview) with Erlis, creator of [amicus.work](https://www.amicus.work/). It precisely shows how being close to the problem makes the solution intuitive. If you find yourself stuck, you can have a quick read at this other [article](https://dev.to/llxd/creating-a-more-than-minor-side-project-from-planning-to-release-3be8), or, if you’d prefer a more deep dive [Make Book](https://makebook.io/) or [The Lean Startup](https://www.amazon.com/Lean-Startup-Entrepreneurs-Continuous-Innovation/dp/0307887898) are great references too, since they provide valuable insights into avoiding common mistakes during the ideation phase.
## Mistake #3 - The infinite build
<img width="100%" style="width:100%" src="https://media0.giphy.com/media/v1.Y2lkPTc5MGI3NjExNmoxdm1nN2I4dml0OXNibTg5a2N3bGxhbGprZG10eHR4MmllNzJ0dSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/UAHZijO91QCl2/200.webp">
And there you are: Technology choices flipping through your mind, and you're actually considering learning a whole new programming language just to solve this new problem. Come on, you already read an [article on learning](https://dev.to/wasp/the-art-of-self-learning-how-to-teach-yourself-any-programming-concept-5de4)! Nothing can stop you!
But, wait! Think about it. Now you have to fight against two problems at the same time:
1. Learning a new language,
2. **AND CREATING THE SOLUTION TO YOUR PROBLEM.**
Turning a fantastic idea into a thriving business is challenging enough already. And you already know that many side projects fail **during the building phase**, so why do this to yourself?
The secret here? Innovate, but with caution!
Try things that can always speed you up way more than burden you and slow you down. An example? Already know React? Try [Wasp](https://wasp-lang.dev/) , a full-stack framework that takes care of Boilerplate (e.g. Auth) for you and uses AI generation capabilities to help you create products even quicker.
When trying to create and test an idea, we are not looking too much into learning new stuff, rather **it is more about creating the idea itself.** So when choosing tools, choose ones that are based on technologies you already know and will help you move fast!
---
By the way: [Wasp](https://wasp-lang.dev/) are the ones making this article possible and, recently, we released Open SaaS — an excellent, 100% free and open-source way to quick-start your new SaaS. Check it out!
{% cta https://github.com/wasp-lang/open-saas %} ⭐️ Create your SaaS with Open SaaS 🙏 {% endcta %}
[](https://opensaas.sh)
---
Other really common mistake is striving for perfection, which often leads to endless tweaking and delays. Remember, "Done is better than perfect." It's crucial to complete your project and get it out into the world. If your project was seen by no one, it’s just an idea.
## Mistake #4 - The never arriving feedback
Delays aren't the only stumbling block in this phase. Sometimes, we're so consumed with creating the perfect product that we forget to validate it with actual users. Regular feedback is crucial - it helps you make necessary changes and ensure your product meets the users' needs.
Without feedback, you’ll never know if you hit the jackpot or if you’re creating a solution for a problem that no one has.
<img width="100%" style="width:100%" src="https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExM294bHQyaGN3NHVsaWFncW1lNWc3aGIzaG50YWR3c3d2bThxMHdkNyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/4LsN0YwgIsvaU/giphy.webp">
So, how do you make sure you're getting the necessary feedback? Start by testing your product with a small group of users. This could be a group of friends, family, or even a dedicated focus group. Their feedback is valuable in identifying any issues or areas of improvement.
It’s really common that we face the fear of receiving negative feedback too, which often leads to withholding the product from the market until it's 'perfect'. This approach, however, can be harmful. It's vital to ship your product as early as possible, even if it's lacking some of the cool features you're planning to add. Early feedback from users can lead you to add features you hadn't previously thought of but, are the features the actual users want.
Remember, feedback is a gift. It allows you to improve your product and make it something that people not only use but love. So, don’t shy away from it, embrace it!
## Mistake #5 - A shy launch
And talking about being shy: So, you've built it, now what? It's time to present it to the world. Yet, remember, timing is everything. If you do a launch that is shy and badly planned, you won’t get the users (neither the revenue) you need.
The first step here is to understand your audience and select the appropriate platform. [Reddit](https://www.reddit.com/) is excellent for open-source or projects not primarily driven by profit, while, [Dev Hunt](https://devhunt.org/), [Product Hunt](https://www.producthunt.com/), and [Hacker News (YC)](https://news.ycombinator.com/) are wonderful for a broader spectrum of projects. Choosing the right place to launch can mean the difference between a business and nothingness.
Moreover, creating a strategic launch plan is crucial. It's not enough to just publish your project and hope for the best, although it can happen. You need to plan your launch, considering factors such as the right time to launch, the platform's mannerisms, and adjust your communication to your target audience.
A well-thought-out launch plan will not only help you reach a wider audience but also increase the chances of your project's success. You should use tools like [Screen Studio](https://www.screen.studio/) and [Canva](https://canva.com) to help you create beautiful screen recordings and promotional images/banners.
As a bonus, here's an example launch plan to get you started:
- Week 1: prepare all promotional materials, like images and videos
- Week 2: Launch on [Dev Hunt](https://devhunt.org/) + promote on social media
- Week 3: Launch on [Product Hunt](https://www.producthunt.com/) + [Show HN](https://news.ycombinator.com/show)
- Week 4 & beyond: Be loud! Continue to promote (without getting banned) on places like Reddit, HN, and other social media platforms.
<img width="100%" style="width:100%" src="https://media0.giphy.com/media/v1.Y2lkPTc5MGI3NjExNmFzZG14azQ1cjJrc25mY3hoNnk1aWI3bHpnNW45MHkxOWJ6ejd6ciZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/P0ZRTYaCmPsJPNd3r0/giphy.webp">
And hey, you’re on the internet, so, although you might get good feedback, some users will just attack you personally. It's normal, so just ignore the angry people. It’s definitely not a great sensation to have your idea annihilated by some random reddit user, but every now and then you’ll actually get some great insight from an honest user too. It's not easy, but try to recognize which feedback to take seriously, and which to ignore :)
---
## Looking for more?
<img width="100%" style="width:100%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExNWNraHJwazF1cGtxZWI5YmZmaXc1YzZwZms3djJ0eXY4NDhkMGZjZyZlcD12MV9naWZzX3NlYXJjaCZjdD1n/IwAZ6dvvvaTtdI8SD5/giphy.gif">
Interested in more content like this? Follow our blog at [Wasp - DEV Community](https://dev.to/wasp) and give us a star on github. This is the easiest way to support us!
{% cta https://www.github.com/wasp-lang/wasp %} ⭐️ Star Wasp on GitHub 🙏 {% endcta %}
---
## Conclusion
In conclusion, transforming a side project into a successful business is a journey filled with challenges and triumphs. Along this journey, each step, be it a hurdle or a victory, is a valuable opportunity to learn and grow.
Each failure is not an end, but a stepping stone towards greater success, serving as a lesson that propels us forward. Every setback is a chance to reassess, refine, and come back stronger. Embrace the iterative nature of this process: continue to refine your ideas, persistently strive for improvement, and most importantly, never stop trying and learning. | llxd |
1,893,621 | Exploring the Advantages of PTFE Conveyor Belts | Discovering the benefits of PTFE Conveyor Belts Marketing information is produced to introduce some... | 0 | 2024-06-19T14:06:50 | https://dev.to/janet_gonzalesb_ebdce9031/exploring-the-advantages-of-ptfe-conveyor-belts-3lk3 | design |
Discovering the benefits of PTFE Conveyor Belts
Marketing information is produced to introduce some very nice great things about something test certain company. we will explore the countless stove protector items that are superb PTFE utilizing conveyor. It are quick for polytetrafluoroethylene, a type or kind as type of thermoplastic polymer employed in various businesses.
Advantages of PTFE Conveyor Belts
It want a importance few will make them a range which will be encouraged applications which can be commercial. First, they are typically resistant to substances, conditions, dampness, producing them perfect for utilize in harsh environments. they have been non stick, and therefore you can washed and keep maintaining them.
Innovation in Conveyor Belt Technology
It part of the technology that are revolutionary try changing how businesses operate. They are built to provide a smooth conveyor that are efficient which decreases downtime improves effectiveness. With regards to their properties that are unique they function a myriad of abilities, like conveying that are high speed precision control.
Safety Features of PTFE Conveyor Belts
Protection can be a concern that are top any environment which try commercial and It possesses security few which create them appropriate utilized in various organizations. The chance of stovetop protector product contamination conveyor gear fires because of their area that was decrease non stick.
Using PTFE Conveyor Belts
Using It procedure which is needs that are straightforward that was minimal. They have been quite simple to put in keep, and also they don't need any device products that are being is unique. When working with It is critical to have a look at producer's instructions to be effectiveness sure has been durability that was maximum.
Quality Service
It produced from top notch contents that have been tested for durability satisfaction. They are typically developed to last, they require minimal fix. Also, whenever It's important to find the company that delivers customer care which will be great. The company that was supply which was reputable suggestions about choosing the conveyor belt best, installation, maintenance, troubleshooting.
Applications
It was versatile which can be correctly found in a true number of organizations, like stove top protector products processing, packaging, automotive, aerospace, equipment which are electronic. They're worthy of food being conveying compounds, and also other elements correctly efficiently. It utilized in high temperature applications such as cooking and curing, where belts that can be fail which are antique.
PTFE mesh conveyor belt products It give a variety of benefits, like durability, effectiveness, security, freedom. They have been revolutionary of their design, producing them well suited for applications that require high speed control conveying which is precise. It simple to use and maintain, producing them an answer that has been organizations being affordable. When purchasing It is very important to pick an company which is set up offers customer care which will be good. | janet_gonzalesb_ebdce9031 |
1,893,620 | LeetCode Day 12 | 144. Binary Tree Preorder Traversal Use iteration instead of recursion ... | 0 | 2024-06-19T14:06:28 | https://dev.to/flame_chan_llll/leetcode-day-12-221a | leetcode, java, datastructures | # 144. Binary Tree Preorder Traversal
## Use iteration instead of recursion
```
public List<Integer> preorderTraversal(TreeNode root) {
List<Integer> list = new ArrayList<>();
//mid -> left -> right
Deque<TreeNode> stack = new LinkedList<>();
stack.push(root);
while(!stack.isEmpty()){
TreeNode cur = stack.pop();
if(cur!=null){
list.add(cur.val);
stack.push(cur.right);
stack.push(cur.left);
}
}
return list;
}
```
# LeetCode No. 226. Invert Binary Tree
Given the root of a binary tree, invert the tree, and return its root.
Example 1:

>Input: root = [4,2,7,1,3,6,9]
>Output: [4,7,2,9,6,3,1]
Example 2:
Example 2:

> Input: root = [2,1,3]
> Output: [2,3,1]
Example 3:
>Input: root = []
>Output: []
Constraints:
The number of nodes in the tree is in the range [0, 100].
-100 <= Node.val <= 100
### BFS, Iteration ways
```
public TreeNode invertTree(TreeNode root) {
if(root == null){
return null;
}
TreeNode cur = root;
Deque<TreeNode> queue = new ArrayDeque<>();
queue.offer(cur);
while(!queue.isEmpty()){
if(cur!=null){
cur = queue.poll();
TreeNode temp = cur.left;
cur.left = cur.right;
cur.right = temp;
if(cur.left != null){
queue.offer(cur.left);
}
if(cur.right !=null){
queue.offer(cur.right);
}
}
}
return root;
}
```
### Refine It

Here this evaluation is useless because if the element is in queue it must be non-null (We use offer() to add elements and it will cause NullPointer Exception it offered elements are null)
# LeetCode No. 101. Symmetric Tree
Given the root of a binary tree, check whether it is a mirror of itself (i.e., symmetric around its center).
```
public boolean isSymmetric(TreeNode root) {
if (root == null) {
return true;
}
Deque<TreeNode> leftQ = new LinkedList<>();
Deque<TreeNode> rightQ = new LinkedList<>();
leftQ.offer(root.left);
rightQ.offer(root.right);
while (!leftQ.isEmpty() && !rightQ.isEmpty()) {
TreeNode left = leftQ.poll();
TreeNode right = rightQ.poll();
if (left == null && right == null) {
continue;
}
if (left == null || right == null) {
return false;
}
if (left.val != right.val) {
return false;
}
leftQ.offerLast(left.left);
leftQ.offerLast(left.right);
rightQ.offerLast(right.right);
rightQ.offerLast(right.left);
}
return leftQ.isEmpty() && rightQ.isEmpty();
}
``` | flame_chan_llll |
1,893,823 | Google Cloud Skills Boost: Cursos De Inteligência Artificial Gratuitos | A plataforma Google Cloud Skills Boost lançou cursos gratuitos destinados a todos que desejam... | 0 | 2024-06-23T13:50:16 | https://guiadeti.com.br/google-cloud-skills-boost-cursos-ia-gratuitos/ | cursogratuito, automacao, cursosgratuitos, inteligenciaartifici | ---
title: Google Cloud Skills Boost: Cursos De Inteligência Artificial Gratuitos
published: true
date: 2024-06-19 14:01:21 UTC
tags: CursoGratuito,automacao,cursosgratuitos,inteligenciaartifici
canonical_url: https://guiadeti.com.br/google-cloud-skills-boost-cursos-ia-gratuitos/
---
A plataforma Google Cloud Skills Boost lançou cursos gratuitos destinados a todos que desejam adquirir conhecimentos em Inteligência Artificial Generativa.
Esta oferta educacional consiste em módulos introdutórios, com a possibilidade de obter certificação oficial do Google.
Este caminho de aprendizado oferece uma visão geral dos conceitos de IA generativa, desde os fundamentos dos modelos de linguagem de grande escala até os princípios de IA responsiva.
## Iniciante: Introdução ao Caminho de Aprendizado em IA Generativa
O Google lançou cursos gratuitos para todos os interessados em adquirir ou aprofundar conhecimentos em Inteligência Artificial Generativa. Este aprendizado inclui desde os conceitos introdutórios e fundamentais.

_Imagem da página dos cursos google cloud_
### Módulos Introdutórios
A oferta educacional é composta por módulos introdutórios, que são ideais tanto para iniciantes quanto para aqueles que já possuem algum conhecimento no assunto e desejam se aprofundar.
### Atividades do Curso
O curso é composto por 5 atividades, sendo 4 em português e 1 em inglês, proporcionando uma experiência de aprendizado abrangente e acessível. Confira a ementa:
#### Introdução a IA Generativa
Este curso introdutório de microaprendizagem explica a IA generativa: o que é, como é utilizada e por que é diferente dos métodos tradicionais de machine learning. O curso também aborda as ferramentas do Google que ajudam a desenvolver aplicativos de IA generativa.
#### Introdução aos Modelos de Linguagem de Grande Escala
Este curso introdutório de microlearning explica o que são os modelos de linguagem de grande escala (LLMs), seus casos de uso e como ajustar prompts para melhorar o desempenho dos LLMs. Além disso, aborda as ferramentas do Google que auxiliam no desenvolvimento de aplicativos de IA generativa.
#### Introdução a IA Responsiva
Este curso introdutório de microaprendizagem visa explicar a IA responsável: o que é, sua importância e como é aplicada nos produtos do Google. Ele também ensina os 7 princípios de IA do Google.
#### Design de Prompts no Vertex AI
Complete o badge de habilidade introdutória em Design de Prompts no Vertex AI para demonstrar suas habilidades em: engenharia de prompts, análise de imagens e técnicas generativas multimodais dentro do Vertex AI.
Descubra como criar prompts eficazes, guiar a saída de IA generativa e aplicar modelos Gemini a cenários reais de marketing.
Um skill badge é um distintivo digital exclusivo emitido pelo Google Cloud em reconhecimento à sua proficiência com produtos e serviços do Google Cloud e testa sua capacidade de aplicar seu conhecimento em um ambiente interativo e prático.
Complete este curso de skill badge e o laboratório de desafio de avaliação final para receber um skill badge que você pode compartilhar com sua rede.
#### IA Responsável: Aplicando Princípios de IA com o Google Cloud
Você aprenderá como o Google Cloud aplica esses princípios atualmente, além de trabalhar práticas recomendadas e lições aprendidas, para criar uma base sólida e desenvolver sua própria abordagem de IA responsável.
### Certificação Oficial do Google
Há a possibilidade de obter certificação do Google, validando seu aprendizado e habilidades em IA Generativa.
### Visão Geral do Caminho de Aprendizado
Este caminho de aprendizado oferece uma visão geral dos conceitos de IA Generativa, desde os fundamentos dos modelos de linguagem de grande escala até os princípios de IA responsável.
<aside>
<div>Você pode gostar</div>
<div>
<div>
<div>
<div>
<span><img decoding="async" width="280" height="210" src="https://guiadeti.com.br/wp-content/uploads/2024/01/Cursos-Sest-Senat-280x210.png" alt="Cursos Sest Senat" title="Cursos Sest Senat"></span>
</div>
<span>SEST SENAT Cursos Gratuitos: LGPD, Excel, Gestão E Informática</span> <a href="https://guiadeti.com.br/sest-senat-cursos-gratuitos/" title="SEST SENAT Cursos Gratuitos: LGPD, Excel, Gestão E Informática"></a>
</div>
</div>
<div>
<div>
<div>
<span><img decoding="async" width="280" height="210" src="https://guiadeti.com.br/wp-content/uploads/2024/06/Google-Cloud-IA-280x210.png" alt="Google Cloud IA" title="Google Cloud IA"></span>
</div>
<span>Google Cloud Skills Boost: Cursos De Inteligência Artificial Gratuitos</span> <a href="https://guiadeti.com.br/google-cloud-skills-boost-cursos-ia-gratuitos/" title="Google Cloud Skills Boost: Cursos De Inteligência Artificial Gratuitos"></a>
</div>
</div>
<div>
<div>
<div>
<span><img decoding="async" width="280" height="210" src="https://guiadeti.com.br/wp-content/uploads/2024/06/Evento-De-Programacao-SAS-280x210.png" alt="Evento De Programação SAS" title="Evento De Programação SAS"></span>
</div>
<span>Evento De Programação SAS Para Iniciantes Gratuito</span> <a href="https://guiadeti.com.br/evento-programacao-sas-iniciantes-gratuito/" title="Evento De Programação SAS Para Iniciantes Gratuito"></a>
</div>
</div>
<div>
<div>
<div>
<span><img decoding="async" width="280" height="210" src="https://guiadeti.com.br/wp-content/uploads/2023/04/Webinar-de-AWS-Cloud-280x210.png" alt="Webinar de AWS Cloud" title="Webinar de AWS Cloud"></span>
</div>
<span>Webinar de AWS Cloud Practitioner Essentials Gratuito</span> <a href="https://guiadeti.com.br/aws-cloud-practitioner-discovery-day/" title="Webinar de AWS Cloud Practitioner Essentials Gratuito"></a>
</div>
</div>
</div>
</aside>
## Tipos de Inteligência Artificial
A inteligência artificial (IA) é uma área da tecnologia que inclui uma variedade de formas e aplicações, desde sistemas simples que realizam tarefas específicas até conceitos avançados de máquinas que superam a inteligência humana. Conheça alguns tipos a seguir.
### Inteligência Artificial Fraca (IA Fraca)
A Inteligência Artificial Fraca, também conhecida como IA Narrow ou IA limitada, é projetada para realizar tarefas específicas. Esta forma de IA não possui consciência ou entendimento além da programação destinada a realizar uma única função ou um conjunto restrito de funções. Exemplos:
- Assistentes Virtuais: Assistentes como Siri, Alexa e Google Assistant são exemplos de IA Fraca, pois são programados para responder a comandos específicos e realizar tarefas definidas.
- Sistemas de Recomendação: Algoritmos que recomendam filmes na Netflix ou produtos na Amazon baseados em preferências e comportamentos do usuário.
### Inteligência Artificial Geral (IA Forte)
A Inteligência Artificial Geral, ou IA Forte, refere-se a sistemas que possuem a capacidade de entender, aprender e aplicar conhecimento de maneira ampla, semelhante aos seres humanos. Esses sistemas podem realizar qualquer tarefa intelectual que um humano é capaz de fazer.
Atualmente, a IA Forte ainda é uma meta teórica e não existe de forma prática. Pesquisas e desenvolvimentos estão em andamento para alcançar este nível de IA.
### Inteligência Artificial Superinteligente
A Inteligência Artificial Superinteligente refere-se a sistemas que superam a inteligência humana em todos os aspectos, incluindo criatividade, resolução de problemas e habilidades sociais. Este nível de IA seria capaz de realizar avanços e tomar decisões além da compreensão humana.
Assim como a IA Forte, a Inteligência Artificial Superinteligente é teórica e ainda não foi alcançada. Pesquisadores e futuristas continuam a explorar as possibilidades e implicações desta tecnologia.
### Inteligência Artificial Reativa
IA Reativa é o tipo mais básico de IA, projetada para responder a entradas específicas com saídas predefinidas. Não possui memória ou capacidade de aprender com experiências passadas. Exemplo:
Deep Blue: O famoso computador de xadrez da IBM que derrotou o campeão mundial Garry Kasparov em 1997 é um exemplo de IA Reativa. Ele respondia a movimentos específicos no tabuleiro sem aprender ou adaptar estratégias.
### Inteligência Artificial com Memória Limitada
IA com Memória Limitada pode usar dados históricos para tomar decisões futuras. Esses sistemas podem melhorar seu desempenho ao longo do tempo com base na experiência acumulada. Exemplos:
Veículos Autônomos: Carros autônomos utilizam IA com Memória Limitada para analisar dados históricos de tráfego, condições de estrada e comportamento de motoristas para tomar decisões de direção em tempo real.
## Google Cloud
Google Cloud é uma suíte de serviços de computação em nuvem oferecida pelo Google, que fornece uma gama abrangente de ferramentas e recursos para empresas e desenvolvedores.
Essas soluções incluem desde infraestrutura como serviço (IaaS) até plataformas como serviço (PaaS) e software como serviço (SaaS).
### Principais Serviços do Google Cloud
#### Compute Engine
O Compute Engine oferece máquinas virtuais que rodam em centros de dados do Google, permitindo a execução de aplicações em uma infraestrutura escalável e segura. As VMs são altamente configuráveis e podem ser ajustadas conforme as necessidades do negócio.
#### App Engine
App Engine é uma plataforma como serviço que permite aos desenvolvedores construir e hospedar aplicativos em uma infraestrutura totalmente gerenciada. Ele suporta várias linguagens de programação e facilita o escalonamento automático de aplicativos.
#### Kubernetes Engine
O Kubernetes Engine é um serviço gerenciado de orquestração de contêineres, que simplifica a implantação, gerenciamento e escalonamento de aplicativos em contêineres usando Kubernetes, uma tecnologia de código aberto originalmente desenvolvida pelo Google.
#### Cloud Storage
Cloud Storage oferece uma solução de armazenamento de objetos durável e altamente disponível. É ideal para armazenar e acessar grandes volumes de dados não estruturados, como imagens, vídeos e backups.
#### BigQuery
BigQuery é um serviço de análise de dados rápido e escalável, que permite a execução de consultas SQL em grandes conjuntos de dados. Ele é otimizado para análise em tempo real e pode processar petabytes de dados em segundos.
### Segurança e Confiabilidade
Google Cloud é conhecido por sua forte ênfase na segurança e confiabilidade. Ele oferece várias camadas de segurança, incluindo criptografia de dados em repouso e em trânsito, gerenciamento de identidades e acesso, e proteção contra ataques DDoS.
A infraestrutura do Google Cloud é projetada para ser altamente disponível, com redundância incorporada para minimizar o tempo de inatividade.
### Benefícios do Google Cloud
#### Escalabilidade
Google Cloud permite que empresas escalem seus recursos de acordo com a demanda, seja aumentando a capacidade durante picos de uso ou reduzindo-a durante períodos de menor atividade. Isso ajuda a otimizar custos e a garantir desempenho consistente.
#### Inovação
Tendo acesso a tecnologias avançadas como inteligência artificial, machine learning e análise de big data, as empresas podem inovar mais rapidamente e obter insights valiosos que impulsionam seus negócios.
#### Flexibilidade
Google Cloud oferece uma variedade de serviços e opções de personalização que permitem que as empresas escolham as soluções que melhor se adequam às suas necessidades específicas, seja em termos de armazenamento, computação ou desenvolvimento de aplicativos.
#### Integração
Google Cloud integra-se facilmente com outras ferramentas e serviços do Google, como Google Workspace, bem como com soluções de terceiros. Isso facilita a criação de um ecossistema de TI coeso e eficiente.
## Link de inscrição ⬇️
As [inscrições para os cursos de inteligência artificial](https://www.cloudskillsboost.google/paths/118?locale=pt_BR) devem ser realizadas na plataforma Google Cloud Skills Boost.
## Compartilhe esta oportunidade de aprendizado do Google!
Gostou do conteúdo sobre os cursos de IA? Então compartilhe com a galera!
O post [Google Cloud Skills Boost: Cursos De Inteligência Artificial Gratuitos](https://guiadeti.com.br/google-cloud-skills-boost-cursos-ia-gratuitos/) apareceu primeiro em [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,893,605 | Angular Material Tabs with components inside the tabs | My use case was to render components inside the Angular Material Tabs component. Despite searching... | 0 | 2024-06-19T14:01:07 | https://dev.to/ferdiesletering/angular-material-tabs-with-components-inside-the-tabs-56ki | My use case was to render components inside the Angular Material Tabs component. Despite searching online, I couldn't find any relevant resources, so I had to figure it out myself.
Below is the structure we want to use to define the components that should be displayed inside the tabs:
```
tabs = <Tab[]>[
{
label: 'Tab Component',
component: TabComponent,
data: {
title: 'Tab 1 Import information',
content: `Donec lacinia condimentum efficitur. Phasellus tempor est in luctus
facilisis.`,
},
},
{
label: 'Random component',
component: RandomComponent,
data: {
text: 'Text passed on from the random component',
},
},
];
```
## Component
Here's the complete code snippet for the component that renders the components inside the tab container.
```
type Tab = {
label: string;
component: Type<any>;
data: Record<string, any>;
};
@Component({
selector: 'app-tabcontainer',
standalone: true,
imports: [MatTabsModule],
templateUrl: './tabcontainer.component.html',
styleUrl: './tabcontainer.component.css',
})
export class TabcontainerComponent {
viewContainerRef = inject(ViewContainerRef);
tabs = <Tab[]>[
{
label: 'Tab Component',
component: TabComponent,
data: {
title: 'Tab 1 Import information',
content: `Donec lacinia condimentum efficitur. Phasellus tempor est in luctus
facilisis.`,
},
},
{
label: 'Random component',
component: RandomComponent,
data: {
text: 'Text passed on from the random component',
},
},
];
ngOnInit() {
this.loadComponent(this.tabs[0]);
}
loadComponent(tab: Tab) {
this.viewContainerRef.clear();
const ref = this.viewContainerRef.createComponent<any>(tab.component);
// Pass on the props
for (const prop in tab.data) {
if (Object.prototype.hasOwnProperty.call(ref.instance, prop)) {
ref.instance[prop] = tab.data[prop];
}
}
}
onTabSelected(index: number) {
this.loadComponent(this.tabs[index]);
}
}
```
## Caching the tabs
I conducted performance testing to see if caching the component references and loading them from the cache upon request would improve render time. However, there was no improvement in render time.
## Angular signals
The code doesn't work with `input()` signals yet.
## Demo
[Try the stackblitz](https://stackblitz.com/edit/stackblitz-starters-wpgfma?file=src%2Fapp%2Ftabcontainer%2Ftabcontainer.component.ts) | ferdiesletering | |
1,885,551 | How to Use Tailwind CSS for Your Ruby On Rails Project | It's hard to overstate the importance of Cascading Style Sheets (CSS) for all websites. Since the... | 0 | 2024-06-19T14:00:00 | https://blog.appsignal.com/2024/06/05/how-to-use-tailwind-css-for-your-ruby-on-rails-project.html | ruby, rails | It's hard to overstate the importance of Cascading Style Sheets (CSS) for all websites. Since the first CSS standards were published in late 1996, we have come quite far regarding features and ecosystems.
Several frameworks have appeared and proved popular, one of the most recent being Tailwind CSS.
In this post, we'll first examine Tailwind's utility-first approach before diving into how to use it in a Ruby on Rails application. You will see how Tailwind helps you to build excellent websites without the need for custom CSS and long debugging sessions.
Let's get started!
## Tailwind CSS: A Utility-First Approach
Most CSS frameworks (Foundation, Bootstrap, or Bulma, for example) provide ready-to-use components such as buttons and form fields, so you can quickly assemble blocks to shape an interface.
Typically, adding a button with Bootstrap looks like this:
```html
<button class="btn btn-primary">My Button</button>
```
In this example, a simple button is defined and styled by applying the `btn` and `btn-primary` classes. `btn-primary` sets the right color for our use case. Yet, that interface can't fit our needs, so we add a custom CSS stylesheet to customize every component:
```html
<button class="btn btn-primary admin-button">My Button</button>
```
Tailwind is a "utility-first" concept. Instead of providing ready-to-use components such as buttons, it has low-level utility classes that you can compose to build custom designs. As such, it encourages a more functional approach to styling, where you apply pre-defined classes directly in your HTML. It aims to minimize the need for custom CSS and promotes design consistency through the constraints of the utility classes.
> "Utility-first" means that Tailwind provides atomic, single-purpose classes you can combine to construct complex designs.
Let's have a look at some code to compare Tailwind and Bootstrap. First, here is how Tailwind lets us style a simple button:
```html
<button class="bg-blue-500 hover:bg-blue-600 text-white py-2 px-4 rounded">
My Button
</button>
```
There are a series of button element classes to configure:
- **Background color `bg-blue-500`:** While 'blue' is a pre-picked color, we can set the color shade with the number. The higher the number, the darker the color.
- **Background color on hover:** `hover:bg-blue-600`.
- **Text color `text-white`:** No need for a number here, as it's white; there is always a default shade if you don't specify a number, such as with text-red.
- **Vertical padding `py-2`:** 'p' is padding, 'y' is for the vertical axis, '2' is the spacing value, not in pixels but a scale defined in Tailwind.
- **Horizontal padding `px-4`:** Same as above, with 'x' for the horizontal axis.
- **Rounding corners:** `rounded`.
This looks more verbose than the Bootstrap example, but by only adding classes, we can adjust each part of the style. We don't need to create a custom CSS class.
> You might not be happy with these colors, but the good news is that you can add custom colors. We will cover that later.
### A Word on Scales
CSS is mighty when it comes to spacing (such as margins and padding), and you can work with pixels and rems (root-em, a size relative to the size of the root element). This tends to be difficult, though. Tailwind comes with its own spacing scale that hides complexity while also helping with proportionality.
By default, Tailwind offers values between 0 and 96, with each step proportional to the others. For example, the value `16` has twice as much spacing as `8`. Thanks to this, we don't have to do the math to work with rems or pixels. We can define our preferred values and reuse them throughout our design.
[Read more about spacing in Tailwind CSS's documentation](https://tailwindcss.com/docs/customizing-spacing#default-spacing-scale).
## Setting Up Tailwind in a Ruby on Rails Environment
Ruby on Rails 7.x directly supports Tailwind in its application generator.
```sh
$> cd ~/workspace/ && mkdir tailwind-tryout && cd tailwind-tryout
$> rails new -d sqlite3 -c tailwind -T .
```
> We'll skip the test configuration (-T) to avoid adding unnecessary complexity to this article.
Tailwind has a neat feature that generates the CSS file your application needs. Other frameworks require you to include a whole CSS file defining a framework (even the pieces you don't use). In contrast, Tailwind will scan your project and generate a CSS file that contains only the classes your project needs.
You do need to run a little utility to make that happen. In development mode, you can run a watcher daemon that will keep things up to date as you work: `bin/rails tailwindcss:watch`.
You can also add the watcher daemon to your `Procfile`, then use `foreman` or `overmind` to start the `web` and `css` processes:
```text
web: bin/rails server
css: bin/rails tailwindcss:watch
```
Let's now use it within a simple landing page:
```sh
bin/rails generate controller Landing index
```
We can then head to http://localhost:3000/landing/index.
### Dissecting Our Landing Page
Every landing page needs a title. The generator works since we configured our application to use Tailwind as its CSS framework.
```erb
# app/views/landing/index.html.erb
<div>
<h1 class="font-bold text-4xl">Landing#index</h1>
<p>Find me in app/views/landing/index.html.erb</p>
</div>
```
We find something that looks like standard HTML here. We have only two classes for the h1 tag:
- `font-bold`: to control the [font weight](https://tailwindcss.com/docs/font-weight).
- `text-4xl`: to control the [font size](https://tailwindcss.com/docs/font-size).
If we change `text-4xl` to `text-xl` and reload the page, the new style will be automatically applied. Looking at the terminal where Foreman is running, you will see that Tailwind has generated a stylesheet in the background again.
That's how simple it is to integrate Tailwind into a Ruby on Rails application (this relies on the [tailwindcss-rails gem](https://github.com/rails/tailwindcss-rails)).
### Configuring Tailwind for Ruby on Rails
You can edit the `config/tailwind.config.js` file to adjust Tailwind's settings (e.g., to add additional colors, specify a font to use, adjust spacing, etc).
For example, we could add a "copper" color to our backgrounds and text:
```js
module.exports = {
content: ["./src/**/*.{html,js}"],
theme: {
colors: {
copper: {
100: "#FAD9C1",
200: "#F6C8A4",
300: "#F2B786",
400: "#EEA669",
500: "#E9944C",
600: "#D17F3E",
700: "#B96A31",
800: "#A15524",
900: "#8A4018",
dark: "#8A4018",
},
},
fontFamily: {
serif: ["Times", "serif"],
},
extend: {
spacing: {
"8xl": "108rem",
},
},
},
};
```
> Note that the shades are helpful but can instead be named. If we only need three shades, for example, we can use 'light', 'medium', and 'dark' instead of numbers in our views.
We can then use the shades in our title:
```html
<h1 class="font-bold text-4xl text-copper-200">Landing#index</h1>
<h2 class="font-bold text-xl text-copper-dark">Subtitle</h2>
```
> You can find details about this in the [tailwindcss-rails gem's README](https://github.com/rails/tailwindcss-rails?tab=readme-ov-file#configuration) and also the [Tailwind CSS documentation](https://tailwindcss.com/docs/configuration).
### Asset Pipeline
We have seen how `bin/rails tailwindcss:watch` keeps our stylesheets updated in local development mode. If we need to build the stylesheets just once, we can use `bin/rails tailwindcss:build` instead.
For production use, you can rely on `bin/rails assets:precompile` to directly call `bin/rails tailwindcss:build`.
> [Learn more about the asset pipeline for Ruby on Rails applications](https://guides.rubyonrails.org/asset_pipeline.html).
## Tailwind for Rails in Action
Let's check out a couple of practical uses of Tailwind in some views: a form and a responsive navigation bar.
### A Simple Form
Using the Ruby on Rails generator, we create a `user` resource:
```sh
bin/rails g resource user email:string password:string
bin/rails db:migrate
```
We can then alter the `users_controller.rb` file and create a view for the form.
```ruby
# app/controllers/users_controller.rb
class UsersController < ApplicationController
def def new
@user = User.new
end
end
```
```erb
# app/views/users/new.html.erb
<div>
<h1 class="font-bold text-4xl text-blue-500">Users#new</h1>
<%= form_with model: @user, local: true do |form| %>
<div class="mb-6">
<%= form.label :email, class: "block mb-2 text-sm font-medium text-blue-900" %>
<%= form.text_field :email, class: "bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2.5" %>
</div>
<div class="mb-6">
<%= form.label :password, class: "block mb-2 text-sm font-medium text-gray-900" %>
<%= form.password_field :password, class: "bg-gray-50 border border-gray-300 text-gray-900 text-sm rounded-lg focus:ring-blue-500 focus:border-blue-500 block w-full p-2.5" %>
</div>
<button type="submit" class="text-white bg-blue-700 hover:bg-blue-800 focus:ring-4 focus:outline-none focus:ring-blue-300 font-medium rounded-lg text-sm w-full sm:w-auto px-5 py-2.5 text-center">Submit</button>
<% end %>
</div>
```
We style each piece individually, adjusting the text color, background color, borders, padding, margins, etc. There is nothing beyond standard Tailwind here, yet we customize the form to fit our needs.
### A Responsive Navigation Bar
We can add conditional breakpoints based on a browser's minimum width using any utility class in Tailwind. For example, the following title will change color depending on the window size:
```html
<h2
class="text-base font-semibold text-gray-900 sm:text-teal-800 lg:text-purple-500"
>
Additional information
</h2>
```
By default, the color is a dark shade of gray. When a browser window's width is between 640px and 1024px, it's a shade of teal. If a window's width is above 1024px, it's a shade of purple.
As Tailwind can also handle [columns](https://tailwindcss.com/docs/grid-template-columns), here is an example to showcase how an element's column width can change based on window size:
```html
<div class="sm:col-span-2 md:col-span-3">
<label for="region" class="block text-sm font-medium leading-6 text-gray-900"
>State</label
>
</div>
```
The label "State" spans two or three columns in this case.
Here, using Tailwind's grid layout utilities, we define a grid that is:
- One column wide by default (`grid-cols-1`)
- Six columns wide above 640px width
- Eight columns wide above 768px width
```html
<div class="mt-10 grid grid-cols-1 gap-x-6 gap-y-8 sm:grid-cols-6 md:grid-cols-8">
</div
```
Breakpoints and their widths:
- `sm`: 640px
- `md`: 768px
- `lg`: 1024px
- `xl`: 1280px
- `2xl`: 1536px
As we've seen, Tailwind simplifies page design and the styling of components.
## Tailwind vs. Other Frameworks
Now that we understand how Tailwind can be used, let's review its key differences to other frameworks:
- **Utility-based:** We compose the style of each element using specific CSS classes, each focusing on different parts of the style.
- **Get what we need:** We only get the parts we need to ship our website, making for faster load times; that optimizes build time.
- **Extensible:** We can extend or customize TailwindCSS' defaults through a simple configuration file.
- **Easy shading of colors:** There's no need to figure out how to make lighter or darker shades of a color to handle hover situations, for example.
- **Simple spacing:** The hidden and proportional scales simplify spacing.
- **Less custom CSS:** Since we only assemble classes to style elements, we rely less on custom CSS and can share styles (including complete themes) using HTML files and snippets.
- **Ruby on Rails friendly:** Thanks to the Tailwind gem, everything is integrated into the layouts and the assets pipeline.
## Wrapping Up
As we've seen, Tailwind's utility-first approach is a great fit for Ruby on Rails. We don't need to spend time adjusting Tailwind to fit our needs by adding complex custom configurations or additional custom CSS. As we conceive our views and partials, we can use Tailwind utility classes to shape and style them.
If you want to learn more, you can access many ready-to-use templates and components thanks to Tailwind's vibrant community, and products such as [TailwindUI](https://tailwindui.com) (from Tailwind's creators).
Happy coding!
**P.S. If you'd like to read Ruby Magic posts as soon as they get off the press, [subscribe to our Ruby Magic newsletter and never miss a single post](https://blog.appsignal.com/ruby-magic)!** | riboulet |
1,893,617 | Are we really using Google because it is better? | Hey guys. I started to find Google searches a bit biased, not finding old news that I know I've... | 0 | 2024-06-19T13:56:24 | https://dev.to/miplle_player1/are-we-really-using-google-because-it-is-better-4eel | Hey guys.
I started to find Google searches a bit biased, not finding old news that I know I've already seen. Sometimes news that is trending in some countries related to my country does not appear, but when I go through other search engines it appears.
I think Google has an incredible ecosystem that integrates between Email (Gmail), Video (Youtube), Search Engine (Google), Office (Docs), Browser (Chrome), Adsense (Adds), Blogs (blogger)
But recently I started to value my privacy more and started discovering other very good services! Like Proton mail, DuckduckGo, OnlyOffice...
Are we really using Google because it is better? | miplle_player1 | |
1,893,616 | The Evolution of RJ45 Connectors: Driving Connectivity Standards Forward | Have You Heard of the RJ45 Connectors? The RJ45 connector is a small but tool mighty has... | 0 | 2024-06-19T13:56:22 | https://dev.to/janet_gonzalesb_ebdce9031/the-evolution-of-rj45-connectors-driving-connectivity-standards-forward-491 | design | Have You Heard of the RJ45 Connectors?
The RJ45 connector is a small but tool mighty has revolutionized how we connect our devices to the internet. It is a connector that helps to transmit data between devices such as computers, televisions, phones, and modems. We’ll dig deep into how the RJ45 connector has actually evolved over time to become one of the most efficient and widely used connectors.
Advantages of RJ45 Connectors
The RJ45 connector is unrivaled because of its benefits that are many. This connector provides a cat 6 cable jack secure and connection reliable ensures excellent transmission speeds and connectivity, and is easy to use. It is generally durable and compatible with other devices, making it the connector go-to many people worldwide.
Innovation in RJ45 Connections
The RJ45 connector has seen continuous innovation over the years. The evolution of RJ45 connectors has been driven by the need for better connectivity standards that can help device users to connect easily to the internet. These connectors have expanded in various aspects, including materials, wire shielding abilities, innovative locking mechanisms, and better speed capabilities.
Safety of RJ45 Connectors
Safety is a aspect crucial of device responsible for transmitting data. The RJ45 connector has been designed with safety in mind. Its features make it resistant to noise electrical interference, and also prevent accidental disconnections. With the increase of online transactions and work remote RJ45 connectors have become critical to the security of electronic transmissions.
Use and How to Use RJ45 Connectors
RJ45 connectors are widely used in various settings, from homes to businesses that are small even in big companies. They are typically used to connect computers to networking cat 6a keystone jacks devices such as routers and modems, enabling fast communication and network coverage robust. To use an RJ45 connector, you need to plug one end of the connector to your device and the other to the network device. Once connected, it is essential to test the connection to ensure that the device is communicating fast.
Service and Quality
To ensure that RJ45 connectors perform as expected, service and quality remain integral. RJ45 connectors' quality is essential because they must provide the bandwidth necessary speed to facilitate communication within networks and the internet. Service also plays a role vital which will be why service providers must offer support to customers whenever they have difficulties connecting devices.
Application of RJ45 Connectors
RJ45 connectors have multiple applications. They are used for VoIP (Voice-over-Internet-Protocol) communications, gaming, streaming movies, and file-sharing. Additionally, RJ45 connectors are required for various data server and storage setups in data centers. As cat 6 wall jack devices continue to advance, we can be sure to see applications that are new RJ45 connectors emerge.
| janet_gonzalesb_ebdce9031 |
1,893,615 | Some of the Best Animation Libraries in React | 1.React Spring Website Link -> https://react-spring.dev It gives you tools flexible... | 0 | 2024-06-19T13:54:27 | https://dev.to/shyam1806/some-of-the-best-animation-libraries-in-react-1jh1 | react, frontend, webdev, development | ## 1.React Spring
Website Link -> https://react-spring.dev
It gives you tools flexible enough to confidently cast your ideas into moving interfaces.
## 2.AOS
Website Link -> https://michalsnik.github.io/aos
Simple Library to animate UI components using scroll events.
## 3.Framer Motion
Website Link ->https://www.framer.com/motion
It Utilizing the power behind framer, the best prototyping tools for teams.
## 4.React-Tweenful
Website Link -> https://teodosii.github.io/react-tweenful
It Creates real-world animations for your pages.
| shyam1806 |
1,893,614 | Some of the Best Animation Libraries in React | 1.React Spring Website Link -> https://react-spring.dev It gives you tools flexible... | 0 | 2024-06-19T13:54:25 | https://dev.to/shyam1806/some-of-the-best-animation-libraries-in-react-59pf | react, frontend, webdev, development | ## 1.React Spring
Website Link -> https://react-spring.dev
It gives you tools flexible enough to confidently cast your ideas into moving interfaces.
## 2.AOS
Website Link -> https://michalsnik.github.io/aos
Simple Library to animate UI components using scroll events.
## 3.Framer Motion
Website Link ->https://www.framer.com/motion
It Utilizing the power behind framer, the best prototyping tools for teams.
## 4.React-Tweenful
Website Link -> https://teodosii.github.io/react-tweenful
It Creates real-world animations for your pages.
| shyam1806 |
340,932 | Graduating in 2020! | Hey everyone! This is my first dev post! I've wanted to post for a long time now, but couldn't think... | 0 | 2020-05-21T14:29:10 | https://dev.to/sakshamio/graduating-in-2020-2ddp | octograd2020, epidemic, models, datascience | Hey everyone! This is my first dev post! I've wanted to post for a long time now, but couldn't think of a topic until now! This is a project I started building in my final year at VIT University but had to abandon in between as I received an internship offer from Stylumia Intelligence, and the work looked too cool to pass on. So, here goes.
## My Final Project
I had started looking for interesting projects to work on during my second last semester and finally came across a topic I found really interesting. Using Agent-based Modelling to predict the spread of infectious diseases. In hindsight, given the current situation, this seems like the best project I could have invested my time into. But back in December 2019, Covid-19 wasn't as big as a phenomenon as it is now, and epidemiology wasn't a topic cool kids picked.

So the initial idea was to build a platform where researchers could use disease heuristics to understand how a disease spreads. But soon I realized that one platform couldn't meet the needs of the very wide range of diseases, infections that were possible.
My project guide also recommended that I shouldn't try to boil the ocean, and focus on the low hanging fruit first. So, I decided that I would pick two different diseases, which had very different characteristics, and try to model them. I come from India, and the obvious choices were Malaria, Dengue, HIV, etc.

But as I was about to finalize on one of these, an online clickbait site told me that a new virus in China had killed 6 people, and over 300 people had been infected. What could be a better use-case than to model a disease that was spreading by the second?
That was the moment I decided that the novel coronavirus would be the object of my modeling.
Here's a fun quote by Kathleen Carley, Director of CMU’s Center for Computational Analysis of Social and Organizational Systems, on why we need to build better simulations of disease spread to model biowarfare.
_“You don’t want to run an actual attack on a city, of course. That would be unethical.”_
I used this quote in my initial presentation to my guide. She didn't laugh.
I'll continue how I progressed in a series of posts. Meanwhile, here are a few resources that I came across when there were less spam and clickbait regarding epidemic modeling. I hope you find this interesting.
* https://link.springer.com/chapter/10.1007/978-3-642-46599-4_13
* https://www.sciencedirect.com/topics/medicine-and-dentistry/epidemic-model
* https://www.sciencedirect.com/science/article/pii/S1755436514000334
* https://www.sciencedirect.com/science/article/pii/S1755436514000553
* http://www.stat.columbia.edu/~regina/research/notes123.pdf
* https://link.springer.com/chapter/10.1007/978-3-642-46599-4_13
| sakshamio |
1,893,613 | How to Deploy your FastAPI Backend with PostgreSQL Database to Render | Introduction FastAPI is a popular Python backend web development framework. Many Python... | 0 | 2024-06-19T13:54:16 | https://dev.to/odhiambo/how-to-deploy-your-fastapi-backend-with-postgresql-database-to-render-4ca2 | fastapi, python, render, hosting | ## Introduction
[FastAPI](https://fastapi.tiangolo.com/) is a popular Python backend web development framework. Many Python developers use FastAPI to built Application Programming Interfaces (APIs) and connect other backend infrastructure such as databases. FastAPI is suitable for API design for several reasons:
* APIs built using FastAPI are fast in terms of performance.
* FastAPI framework is simple to learn.
* FastAPI comes with a built-in API documentation feature that saves the developer the time needed to manually document APIs.
* FastAPI is generally optimized for API design.
This tutorial walks Python developers through the process of hosting APIs built using FastAPI to the [Render cloud hosting service](https://dashboard.render.com/), and connecting a PostgreSQL database. The article assumes you have your FastAPI backend ready for deployment. However, I will walk you through the deployment checks involved to get your code ready, so you can apply the checks you might have missed. Let's go!
## Prerequisites
1. A FastAPI backend
2. A registered [Render account](https://dashboard.render.com/register)
## Pre-Deployment
### Deployment Checks
Assuming you have written the logic for all your APIs and have tested that everything is working as expected, there are five steps extra steps we are going to take to ascertain that our backend is ready for production:
#### 1. Create a requirements.txt file
You obviously installed software packages to build your APIs inside your Python virtual environment. Among the packages is the FastAPI framework itself and all its dependencies. We will need to install the same packages to run our code in production. We need to pass to the Render build process a list of dependencies for our project.
In your local environment, create a file named **requirements.txt**. You could name it anything, but naming it as above is a standard convention. Next, run `pip freeze > requirements.txt` in your terminal. The command gathers the names of all the software dependencies and their version numbers from your virtual environment, and outputs the information in the requirements.txt file
#### 2. Create a .env file
Next you need to hide any sensitive information in your codebase before you push your code to Github. In most cases, the sensitive information in most Python projects is your project's secret keys, database URL connection string, and database name, username and password. You could include database port and host if you deem them sensitive enough to be hidden.
Proceed to create a hidden file called `.env`. Do not forget the period at the beginning. Edit the file by providing key-value pairs of the secret information like this:
```
DATABASE_PORT=5432
DATABASE_NAME=fastapi
DATABASE_USERNAME=postgres
DATABASE_PASSWORD=testpass123
SECRET_KEY=123456HDUDCDHCHDUCDNCNDNCR
```
Remember to replace the environment variables values with your actual values.
#### 3. Create a .gitignore file
This steps precedes pushing your code to Github and it involves instructing git to ignore directories and files you do not want checked into Github. By default, when you initialize a git repository as we will do in a moment, git tracks all files in the entire directory. We do not want version control to track the .env file. In addition, we do not want to check our virtual environment folder or the __pycache__ directory into Github. This is because we can easily recreate our virtual environment. Furthermore, we do not want to pollute our Github repository with code we do not need, such as __pycache__ files and virtual environment packages.
Untrack unnecessary files by editing the .gitignore file like this:
```python
__pycache__/
venv/ # Use your virtual environment's name
.env
```
#### 4. Create a build.sh file
This step involves creating a bash file called `build.sh` that will store our pre-deploy commands. We will run this file shortly when we host our code on Render.
Let's talk about pre-deploy commands. These are commands that should run before you start your production server. Most often, these are commands that set up your production database. They may involve commands that set up your database schema and tables and run database migrations. Additionally, you can run pre-deploy commands to upload your static files to a Content Delivery Network (CDN).
In most cases, with FastAPI and other Python frameworks such as Django, pre-deploy commands involve running database migrations to create schema and tables. Django, in particular, auto-generates migration files every time you make changes to your database. Therefore, with Django, you almost always have to write only these two database migration commands in your build.sh file:
```python
python manage.py makemigrations
python manage.py migrate
```
However, FastAPI does not create any migration files when you make changes to your database. For a serious project, you usually have
to integrate a database migration tool like [Alembic](https://testdriven.io/blog/fastapi-sqlmodel/), which tracks changes to your database schema. If you integrated Alembic into your FastAPI backend, first write this database migration command in build.sh to set up your schema:
```
alembic upgrade head
```
However if you did not integrate any database migration tool, you typically wouldn't have to run pre-deploy commands on Render.
Assuming you do not need to make database migrations for your FastAPI, our build.sh file will contain only one command; the command to start our production server. Input the following command in your build.sh file:
```shell
uvicorn main:app --host 0.0.0.0 --port $PORT
```
**Note**
FastAPI ships with the [uvicorn](https://fastapi.tiangolo.com/deployment/manually/) server by default. The above command presumes that the entry point to your backend, the main.py file, is located at the root directory of your project. If the file were located in some sub-directory, you would have to provide the relative path to the file, i.e relative path to the root directory
Also, take note of the host IP. The IP is set to the 0.0.0.0 address, which is a special network interface that accepts connections from all IPs. The port option specifies an environment variable **PORT**. Using an environment variable instead of hard-coding a specific port number is considered best practice. The hosting service build process will assign us a random port during the build. You could hard-code a port, say 5000 or 10000 and hope the port is not occupied by another process during the build.
#### 5. Initialize a Github repository and push your code to Github
Go to your Github account, create a new public repository and copy the repository's URL. Back in your command line, enter these prompts, in order,
at the root of your project folder
```python
git init # Initializes a local git repository
git add --all
git commit -m 'FastAPI backend' # Commit message could be anything you want
git branch -M main # Sets up a branch called 'main' on Github
git remote add origin [Your repository URL] # Sets up a remote repository on Github
git push origin main # Pushes all your code to the main branch of your Github repository.
```
All your code is now hosted on Github
## Deployment
### Set up a PostgreSQL Database on Render
Follow these steps to spin up a free PostgreSQL database on Render:
1. Login into your Render account. You will be redirected to your dashboard.
2. Click on **new** and select **PostgreSQL** from the dropdown menu.
3. Fill in the Database, Name and User fields with the database credentials you want to use.
4. Scroll down to **Instance Type** and select free instance. You can select a paid instance for an enterprise-level project.
5. Hit **Create Database** at the bottom of the page and wait for Render to create a new PostgreSQL database instance for you. Render will redirect you to a page with information about the newly created database.
6. Copy the generated **Database Password** and **Database Internal URL** somewhere.
Note that we are using [Internal URL](https://docs.render.com/databases) and not external URL because both our service and database are hosted within the same server. Connecting using the internal URL speeds up the connection process and reduces latency.
### Create a Web Service
1. Click on new again, and select **Web Service** from the dropdown.
2. Choose **Build and deploy from a Git repository** and press **next**. Render will connect to your Github and avail all your public repositories. Connect your FastAPI repository.
3. Fill in details about your web service in the resulting page:
1. Input a suitable unique name for your service under **Name** field
2. Scroll down to **Runtime** field and select **Python3** from the dropdown
3. Input `pip install -r requirements.txt` command under **Build Command**
4. Run your build.sh file under **Start Command** field like this:
```shell
./build.sh
```
You could just input the command required to start our production server directly into the Start Command field since we do not have any other pre-deploy command. However, in a future event where you would need to run database migration commands, it is best practice to create a build.sh file to run all your commands at once.
5. Select the instance type of your choice. If it is a practice project, select the **free instance** type.
6. Add environment variables
You can proceed in either of the following ways in Render:
1. You can add your environment variables one-by-one from your `.env` file by clicking on the **Add Environment Variable** button.<br>
2. You can also click on **Add from .env** which will provide an input field. Proceed to copy the contents of your .env file from your local machine and paste them in the input field provided, then click on **Add variables**.
Remember that we added the details of our new PostgreSQL instance, which are different from the database you were using locally. Return to the dashboard and you should see the PostgreSQL instance you just created, marked as **available**. Click on it and note the database name, database user, database password and database **internal** URL details. Then proceed in either of the two ways discussed above to set your environment variables on Render.
7. Click on **Create Web Service** to initialize the build process. The process will take a few minutes after which your new web service URL will be generated. You can visit your service on the browser by clicking on its URL. If the build process fails for some reason, Render will inform you. You should review the build logs to fix any errors.
## Conclusion
Hosting your FastAPI backend with a PostgreSQL database on Render is a fairly simple process. First you will need to prepare your project for production by creating a requirements.txt file, a .env file, a .gitignore file and a build.sh file. Then you will need to push all your code to Github. Finally, you need to proceed to your Render account and create a PostgreSQL database instance, then create your web service by passing in a build command, a start command and your environment variables. Hope this helps. Happy hosting! | odhiambo |
1,893,612 | Torrenting 101 | 1. Get a VPN Currently, I use Mullvad, and I won't be swapping as it suits my needs and I... | 0 | 2024-06-19T13:51:56 | https://dev.to/piracy/torrenting-101-21df | ### 1. Get a VPN
Currently, I use [Mullvad](https://mullvad.net/en/vpn), and I won't be swapping as it suits my needs and I have no complaints. I've also used [IVPN](https://www.ivpn.net/) and [Proton](https://protonvpn.com/) *(before the reskin and bad pricing).* These are both fine options, but I prefer Mullvad as it's faster than the others and doesn't need login credentials *(IVPN doesn't either).* It's worth noting that Mullvad doesn't have a lot of servers in different countries, so I would recommend checking if they have [servers in your country](https://mullvad.net/en/servers). If they don't, go with Proton or IVPN.
### 2. Download a torrent client
Right now I use [qBittorrent](https://www.qbittorrent.org/). I've also used [Deluge](https://deluge-torrent.org/) and [Transmission](https://transmissionbt.com/). I would recommend using any of these clients.
### 3. Bind your VPN to your Torrent Client
As a safeguard, I would recommend that everyone bind their VPN to their torrent client. This prevents you from leaking your IP address while torrenting if the VPN disconnects. It will automatically stop all interaction with the torrents as it doesn't have an internet connection, but when you reconnect your VPN, it will resume.
**Tutorials for:**
- [qBittorrent](https://old.reddit.com/r/VPNTorrents/comments/ssy8vv/guide_bind_vpn_network_interface_to_torrent/)
- [Deluge](https://forum.deluge-torrent.org/viewtopic.php?f=7&t=49883)
- [Transmission](https://forum.transmissionbt.com/viewtopic.php?t=11452)
### 4. Public and Private Trackers
Public trackers, such as The Pirate Bay or RARBG, are open for anyone to use and often have a large selection of torrents, but the quality and availability of seeds can vary. On the other hand, private trackers are invite-only communities that typically specialize in specific types of content, such as movies, music, or software. They enforce strict rules regarding seeding ratios and content quality, resulting in faster downloads and better-curated libraries. You might use public trackers for casual downloading or when you can't find a specific torrent on private trackers, while private trackers are preferred for accessing rare, high-quality, or niche content with the added benefits of tighter security and community features.
To gain access to private trackers, you can try to obtain an invite from an existing member, participate in IRC interviews or application processes, or watch for rare open signup periods. Building a good reputation on lower-tier trackers first can help you progress to better ones over time.
### 5. How to gain access to Private trackers
To gain access to private trackers, you can try to obtain an invite from an existing member, participate in IRC interviews or application processes, or watch for rare open signup periods. Building a good reputation on lower-tier trackers first can help you progress to better ones over time.
[A-Z Waitlist](https://discord.gg/pfmAegk8bD)
[r/OpenSignups](https://old.reddit.com/r/OpenSignups/)
### 6. Piracy Media Terminology
- **WEB-DL**: A lossless rip from a streaming service, such as Netflix or Amazon Video, or downloaded via an online distribution website like iTunes, often in high quality with no re-encoding.
- **WEBRip**: A file extracted using the HLS or RTMP/E protocols and remuxed from a TS, MP4, or FLV container to MKV, generally used for releases captured from streaming services.
- **DVDRip**: A lossless rip from a DVD source, formerly common but has diminished in popularity in favor of higher quality Blu-Ray releases.
- **WEB.720p**: A high-definition (720p) release sourced from a streaming service or online distribution platform.
- **Cam**: A low-quality recording made in a movie theater using a camcorder or mobile phone, with distinctly poor audio and video quality.
- **Telesync (TS)**: Similar to a cam, but recorded using professional equipment in a movie theater, often with an external audio source, resulting in slightly better quality than a cam.
- **Blu-ray/BD/BRRip**: An extremely common high-definition rip from a Blu-ray source, making up a large share of the pirated movie market.
- **HDTV**: A high-definition TV recording captured via cable or satellite, usually in 720p or 1080i resolution, often used for TV programs.
- **HC HD-Rip**: A high-contrast, high-definition rip, usually from an HDRip or WEB-DL source.
- **WEB.1080p**: A full high-definition (1080p) release sourced from a streaming service or online distribution platform. | piracy | |
1,893,611 | RAG Techniques: Multi Query | In my last project, I used RAG (Retrieval Augmented Generation) for retrieving the relevant context... | 0 | 2024-06-19T13:49:43 | https://dev.to/shawonmajid/rag-techniques-multi-query-2p5h | rag, langchain, llm, ai | In my last project, I used RAG (Retrieval Augmented Generation) for retrieving the relevant context for the user question. But the problem I faced is that, from the user query, the retrieval is not always very accurate.
For example, In my [budget AI](https://www.linkedin.com/posts/shawon-majid_langchain-openai-ai-activity-7205634483872530432-8rV3/?utm_source=combined_share_message&utm_medium=member_desktop) project, I embedded the user query directly to match the vector-store database for getting the relevant documents. But for questions like:
> "What did I buy last month?"
The retrieval sometimes provided relevant documents and sometimes failed. Because, this question is too vague and does not provide enough context or specific details (like category or amount) to match effectively with the embedded vectors. The semantic search might struggle to identify which specific records are relevant without additional context.
To solve this problem, I came across a technique called multi-query. Before matching with the vector-store, I break down the user question into multiple prompts for the semantic search. For example, The given question can be broke into multiple different related prompts:
1. "List all expenses from last month."
2. "Show purchases and expenses made in the last month."
3. "What items did I spend money on in the previous month?"
4. "Provide details of all transactions from last month."
5. "What were my expenses for each category last month?"
This can be easily done with the help of an LLM. Just ask it to break the question into multiple related questions to get relevant documents. The following diagram, taken from the [LangChain](https://www.langchain.com/) GitHub repository, visualizes the process:

This process significantly increased the relevancy of my documents from the retrieval. LangChain has a built-in function for the same task, you may look at their official documentation for [Multi-Query](https://js.langchain.com/v0.1/docs/modules/data_connection/retrievers/multi-query-retriever/) retriever.
I found this technique in [Rag-From-Scratch](https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_5_to_9.ipynb) by LangChain. There are more sophisticated techniques for improving retrieval capabilities, and I will try to write more articles if I find anything interesting. | shawonmajid |
1,892,280 | How to Create and Connect to a Linux VM using a Public Key | Table of Contents Introduction Step 1. Login to Azure Portal Step 2. Select/click Virtual... | 0 | 2024-06-19T13:44:50 | https://dev.to/yuddy/how-to-create-and-connect-to-a-linux-vm-using-a-public-key-5fl0 | **Table of Contents**
Introduction
Step 1. Login to Azure Portal
Step 2. Select/click Virtual Machine
Step 3. Create Azure Virtual Machine
Step 4. Basic Tab: Create new Resource Group
Step 5. Basic Tab: Fill all the Virtual Machine Instance Details
Step 6. Disk Tab: Fill all the Disk fields
Step 7. Validation Passed then Click Create
Step 8. Create the Linux VM
Step 9. Open Command Prompt and run some command lines
Step 10. Run IP address on browser
Introduction
Virtual Machine is one of the resources Azure has, to help organizations and individuals execute various tasks without accessing a local computer. It simply means, having a full computer system running in the cloud with lots of advantages than carrying or stationing a physical computer system.
Connecting a Virtual Machine using SSH Public Key makes it a more secured way of connection. This helps to make a connection via command lines, run commands through your terminals to talk to server and client.
**Step 1. Login to Azure Portal**
Open a browser, type url: portal.azure.com
Fill in your registered username and password, then process your entries. A successful login lands into Azure portal where various tasks can be executed accordingly. Also make sure you have a subscription in Azure to enable the creation of VM.

**Step 2. Select/click Virtual Machine**
There are 3 ways to locate Virtual Machine in the portal (via search bar/resource group/portal tab). Go to the search bar type in virtual machine, select Virtual Machine from the dropdown list.

**Step 3. Create Azure Virtual Machine**
Locate the create button up left, click Connect from the dropdown.

**Step 4. Basic Tab: Create new Resource Group**
**Notice:** There is a note on how each instance field works. This icon is found suffix each field label. Take a cursor point to the icon, then read. Also all red asterixis label field are compulsory and must be filled.
Select from already created resource group or create new one if none exists. Click New and type in the new resource group name, click ok.

**Step 5. Basic Tab: Fill all the Virtual Machine Instance Details**
- Virtual Machine Name: Valid name could contain alpha-numeric, alpha + some special characters excluding underscore (_). Eg. Denz-VM, Denz4 etc
- Region: Select from the dropdown according to your architectural plan.
**Step 6. Disk Tab: Fill all the Disk fields**
- Region: Select a desired data center region
- Availability options: Select how you want your availability zone to run (eg. no infrastructural redundancy required etc.)
- Availability zone: Select your zone(s) or
- Security type: Select how you want to secure the VM. Eg. Standard
- Image: Select either Linux type of OS **(e.g Ubuntu Server 24.04LTS x64)**

- VM architecture: Could be on x64
- Size: Select from the dropdown as it is available in the regional zone selected.
- Enable Hibernation: Check if you want otherwise ignore
**Administrator account**
Because Linux image was selected instead of Windows, the Authentication type will show these options --SSH Public Key -- Password
**Here we are selecting SSH Public Key.**
- Username: Type in your username
- SSH public key source: You have options to Generate a new key or Use the existing key. (Select Generate new Key Pair).
- SSH Key Type: Checkbox: RSA SSH Format
- Key pair name: This key is automatically generated using the inputted VM name and must be kept safe because without this key your connection to VM will be aborted.

- Public inbound ports: Allow selected ports
- Select inbound ports: Select how you want the VM to be accessed. Eg. HTTP(80), SSH(22)
- Click on Next Disk button to proceed to the next tab set-up.
**Note:**
You can leave every other field in this Tab at default then proceed.

**For now, accept Networking, Monitoring, Advance and Tags Tabs at default level. Click Review and Create Button.**
**Step 7. Validation Passed then Click Create**
At this point the information inputted on the Tabs are reviewed for validation. Most especially make sure you have a strong internet connection in order to experience smooth loadings.

**Step 8. Create the Linux VM**
Once you click on create button, it will prompt you to download a private key file which must be done if not you will not find it again. Click "Download private key and create resource", automatically the file will be downloaded into your download folder.



**Step 9. Open Command Prompt and run some command lines**
Here you type in "command prompt" in the search field of your task bar, select run as administrator.

To copy private key file path, open the download folder, click on the file once to select the file, click inside the address bar and copy the highlighted folder path (eg. C:\Users\dell\Downloads)
Combine and type these 3 things inside the command prompt:
1. ssh -i
2. C:\Users\dell\Downloads\DezxLVM_key.pem
3. DezAzure@20.160.81.95 (this is your Linux VM username@Ip address)

ssh -i C:\Users\dell\Downloads\DezxLVM_key.pem DezAzure@20.160.81.95
Hit enter button on your keyboard and type yes to process further.

type in: sudo apt-get -y update
Hit enter key to process further
DezAzure@DezxLVM:~$ sudo apt-get -y update

type in: sudo apt-get -y install nginx
Hit enter key to process further

**Step 10. Run IP address on browser**
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
 | yuddy | |
1,893,607 | Exploring Eco-Friendly Waterproofing Solutions | Have actually you ever before been actually captured in the rainfall or even unintentionally splashed... | 0 | 2024-06-19T13:42:09 | https://dev.to/rebecca_greenh_5fdea1862c/exploring-eco-friendly-waterproofing-solutions-1dn3 | design |
Have actually you ever before been actually captured in the rainfall or even unintentionally splashed sprinkle on your possessions It could be aggravating when sprinkle problems your products. Fortunately, certainly there certainly are actually waterproofing that is environmentally friendly that can easily safeguard your possessions coming from the aspects while being actually type towards the atmosphere. we'll check out the benefits of utilization waterproofing that is environmentally friendly, the development responsible for all of them, ways to utilize all of them, as well as the high top premium of solution as well as request that includes all of them
Benefits of Eco-Friendly Waterproofing Services:
Certainly there certainly are actually lots of benefits towards utilizing waterproofing that is environmentally friendly. First of all, they are actually ecologically lasting, significance they are actually comprised of products that are actually naturally degradable as well as safe. This is actually particularly essential since conventional waterproofing services can easily include chemicals that are hazardous can easily impact the atmosphere as well as your health and wellness. Second of all, environmentally friendly transparent waterproof glue services are actually resilient as well as lasting. They offer an level that is additional of versus sprinkle damages, mold and mildew, as well as mold, for that reason prolonging the life expectancy of your possessions. Lastly, they are actually affordable over time because they get rid of the have to change your products often or spend for sprinkle even damages repair work
Development responsible for Eco-Friendly Waterproofing Services:
Environmentally friendly waterproofing services are actually an outcome of development in the area of design as well as product research that is scientific. The items are actually created coming from all-organic products that have actually been actually crafted towards have actually waterproof residential or homes that are commercial. For instance, some items are actually created coming from plant-based products such as beeswax, which has actually an oil that is all-natural that repels sprinkle. Others are actually created coming from ingenious products such as silicone, which was crafted towards bond along with different surface areas, producing a sprinkle obstacle that is repellent
Ways to Utilize Eco-Friendly Waterproofing Services:
Utilizing waterproofing that is environmentally friendly is actually easy as well as simple. First of all, surface area prep work is actually important. Guarantee the surface area is actually cleanse as well as completely dry out prior to using the service. Second of all, use the ongoing service inning accordance with the directions offered. For instance, some ongoing services need a comb while others need a spray container. Third, enable the surface area towards completely dry out prior to utilizing the waterproof paint for concrete product. It is actually necessary to details that the period of drying out might differ depending upon the item utilized as well as the problems that are ecological. Lastly, guarantee that the surface area is actually completely dealt with towards accomplish optimum security coming from sprinkle damages
High premium that is top of as well as Request:
One benefit of utilization waterproofing that is environmentally friendly is actually the high top premium of solution as well as request they offer. Very most waterproofing that is environmentally friendly have actually customer-centric solutions that guarantee their customers obtain the very best worth for cash. They typically offer assistance on which item towards utilize depending upon the product you wish to water resistant. Furthermore, their items have actually been actually subjected towards extensive high premium that is top, guaranteeing that the items are actually of top quality as well as provide on their guarantee
Request of Eco-Friendly Waterproofing Services:
Environmentally waterproofing that is friendly could be put on different products like backpacks, camping outdoors tents, footwear, clothes, outside equipment, as well as digital gadgets. For instance, using it towards backpacks as well as camping outdoors tents guarantees that the possessions are actually safeguarded coming from sprinkle damages while outdoor camping or even trekking. For clothes, environmentally friendly waterproof roof coating services guarantee that you remain completely dry out as well as comfy in wet survive. Using it towards digital gadgets such as phones as well as laptop computers offers an level that is additional of coming from sprinkle damages
| rebecca_greenh_5fdea1862c |
340,331 | DevOps: the next level | Ten years and more passed since Patrick Debois coined the term DevOps, in 2009. In the IT world, noth... | 0 | 2020-05-20T19:34:11 | https://dev.to/zeppaman/devops-the-next-level-969 | devops, NoOps, Cloud | Ten years and more passed since Patrick Debois coined the term **DevOps**, in 2009. In the IT world, nothing is definitive. All the technologies and techniques continue to evolve following an innovation trend that we cannot stop.
We cannot merely say: _“I’m tired of changing; please give me a rest.”_ The only option is to be ready for the change.
In DevOps, there is a significant change because of the rise of the cloud and the market, always more demanding.
The DevOps philosophy was adopted by most companies and bring essential improvements to quality and cost-saving. Anyway, the DevOps scenario is continuously evolving and adapting to the new market requirements.
In this article, we will analyze the new frontiers of DevOps and what to know to keep up with the times. Let’s see in this article what are the latest trends to follow.
Most essential keywords in this change are:
- Cloud
- Automation
- Assembly Pipeline
- NoOps
Most of them converge to the same goal: use the technologies to improve the quality process.
[Read the full article](https://towardsdatascience.com/devops-trends-8ccbed85e7af?source=friends_link&sk=cf29ce7ddd6ee30fef82a99aa98ac6b4) | zeppaman |
1,893,606 | Business AI: Revolutionizing Operations and Innovation | Introduction The integration of Artificial Intelligence (AI) in business processes... | 27,673 | 2024-06-19T13:39:14 | https://dev.to/rapidinnovation/business-ai-revolutionizing-operations-and-innovation-5gio | ## Introduction
The integration of Artificial Intelligence (AI) in business processes has
revolutionized the way companies operate, innovate, and compete in the global
market. AI technologies, ranging from machine learning models to advanced
predictive analytics, are being leveraged to enhance decision-making, automate
operations, and personalize customer experiences. As AI continues to evolve,
its applications in business are expanding, making it a critical tool for
achieving competitive advantage and operational efficiency.
## What is Business AI?
Business Artificial Intelligence (AI) refers to the application of AI
technologies to solve business problems, enhance operational efficiency, and
drive innovation across various sectors. It encompasses the integration of
machine learning, natural language processing, robotic process automation, and
predictive analytics into business processes. Business AI is tailored to meet
specific corporate needs, ranging from automating routine tasks to providing
deep insights that inform strategic decisions.
## How Business AI is Implemented
Implementing AI in business involves a strategic approach that starts with
understanding the specific needs of the business and then integrating AI
technology with existing systems to enhance efficiency, decision-making, and
customer experiences. This includes identifying business needs, integrating AI
with existing systems, data collection and preparation, model development and
training, and deployment and monitoring.
## Types of AI Technologies Used in Business
AI technologies have become integral to modern business operations, enhancing
efficiency, personalization, and decision-making. Key technologies include
Machine Learning, Natural Language Processing, Robotics Process Automation,
and Predictive Analytics.
## Benefits of Implementing AI in Business
Implementing AI in business can lead to numerous benefits, ranging from
enhanced decision-making and increased efficiency to improved customer
experiences and cost reduction. AI technologies enable businesses to automate
complex processes, gain insights from data, and engage with customers in more
personalized ways.
## Challenges in Implementing AI
Despite its benefits, implementing AI in business comes with challenges such
as data privacy and security issues, high initial investment, skill gap and
talent acquisition, and integration complexities. Addressing these challenges
is crucial for successful AI adoption.
## Engineering Best Practices for Business AI
Adhering to engineering best practices is crucial for the success of AI
implementation. These practices include robust data governance, ethical AI
development, continuous testing and validation, and scalability
considerations. They ensure that AI systems are reliable, efficient, and
aligned with business objectives and ethical norms.
## Future Trends in Business AI
The future of AI in business is poised for transformative changes with trends
like AI and IoT convergence, advancements in AI algorithms, increased
regulation and standardization, and a growing emphasis on explainable AI.
These trends will further enhance the capabilities and applications of AI in
business.
## Real-World Examples of Business AI
AI is already making significant impacts in various industries. Examples
include AI in retail for personalized shopping, AI in banking for fraud
detection, AI in healthcare for predictive diagnostics, and AI in
manufacturing for optimized supply chains.
## Conclusion
AI in business has transformed numerous industries by automating routine
tasks, enhancing data analysis, and improving decision-making processes. While
the integration of AI presents challenges, addressing these issues head-on
allows businesses to fully leverage AI to drive growth and remain competitive
in the digital age.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain App Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-
development-company-in-usa)
[Blockchain App Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa) [AI Software
Development](https://www.rapidinnovation.io/ai-software-development-company-
in-usa)
## URLs
* <http://www.rapidinnovation.io/post/the-potential-of-business-ai-engineering-best-practices>
## Hashtags
#Here
#are
#five
#relevant
#hashtags
#for
#the
#provided
#text:
#1.
#BusinessAI
#2.
#AIAutomation
#3.
#MachineLearning
#4.
#AIIntegration
#5.
#AITrends
| rapidinnovation | |
1,893,588 | OpenTelemetry Trace Context Propagation for gRPC Streams | gRPC is a modern, open-source remote procedure call (RPC) framework developed by Google and broadly... | 0 | 2024-06-19T13:35:06 | https://tracetest.io/blog/opentelemetry-trace-context-propagation-for-grpc-streams | opentelemetry, grpc, go, observability | gRPC is a modern, open-source remote procedure call (RPC) framework developed by Google and broadly adopted today through many enterprise systems. Built on the HTTP/2 protocol, it is commonly used in microservices architecture because of its performance and support for communication between services written in different programming languages.
One interesting feature of gRPC is its ability to enable communication via streaming. This allows systems to listen to these streams and fetch data as it is available instead of making polling strategies to external systems to get their data, which avoids overloading the consumer of the data with requests.

In this article, you will see an example of a system written in Go that uses gRPC streams to send data to consumers and learn:
1. how to instrument it with Traces using OpenTelemetry;
2. how to set the context propagation to track the processing of each data item;
3. how to test it and guarantee that the data item is properly processed.
The code sample for this article is available [here](https://github.com/kubeshop/tracetest/tree/main/examples/quick-start-grpc-stream-propagation), and you can run it with:
```bash
git clone git@github.com:kubeshop/tracetest.git
cd ./examples/quick-start-grpc-stream-propagation
TRACETEST_API_KEY=your-api-key docker compose up -d
```
## Using gRPC Streams to Communicate Between Systems
Suppose you have a system written in Go that receives user payments and notifies a worker that this payment has arrived, identifies if it is a high-value payment, and to do further processing:

To avoid asking the `PaymentReceiverAPI`for notifications, you model two endpoints: `ReceivePayment` to receive payments and `NotifyPayment` to emit these notifications. You can specify it with the following `protobuf` file (full example [here](https://github.com/kubeshop/tracetest/blob/main/examples/quick-start-grpc-stream-propagation/proto/paymentreceiver.proto)):
```protobuf
syntax = "proto3";
package proto;
option go_package = "your.module.path/proto";
service PaymentReceiver {
rpc ReceivePayment(Payment) returns (ReceivePaymentResponse) {}
rpc NotifyPayment(Empty) returns (stream PaymentNotification) {}
}
message Empty {}
message Payment {
string customerId = 1;
float amount = 2;
}
message ReceivePaymentResponse {
bool received = 1;
}
message PaymentNotification {
Payment payment = 1;
bool highValuePayment = 2;
}
```
In a simple implementation for the PaymentReceiverAPI (full implementation [here](https://github.com/kubeshop/tracetest/blob/main/examples/quick-start-grpc-stream-propagation/producer-api/main.go)), `ReceivePayment` will enqueue the request for further processing, telling the user that it was received while processing the item:
```go
package main
import (
// ...
pb "your.module.path/proto"
)
type serverImpl struct { // Implement the PaymentReceiverServer interface
pb.PaymentReceiverServer
}
var paymentChannel = make(chan *pb.Payment) // act as an "in-memory" queue
func (s *serverImpl) ReceivePayment(ctx context.Context, payment *pb.Payment) (*pb.ReceivePaymentResponse, error) {
go func() { // enqueue payment
paymentChannel <- payment
}()
return &pb.ReceivePaymentResponse{Received: true}, nil
}
// to continue
```
While `NotifyPayment` will read from this queue, detect if the payment has a high value, and publish it into a stream:
```go
func (s *serverImpl) NotifyPayment(_ *pb.Empty, stream pb.PaymentReceiver_NotifyPaymentServer) error {
for {
payment, ok := <-paymentChannel //dequeue
if !ok {
return nil
}
highValuePayment := payment.Amount > 10_000
notification := &pb.PaymentNotification{
Payment: payment,
HighValuePayment: highValuePayment,
}
if err := stream.Send(notification); err != nil {
return err
}
}
}
```
As a consumer, you can create a simple worker that will consume the gRPC API and call the `NotifyPayment` endpoint, opening a stream connection and receiving notifications as they are available through the stream, with the command `stream.Recv()` (full example [here](https://github.com/kubeshop/tracetest/blob/main/examples/quick-start-grpc-stream-propagation/consumer-worker/main.go)):
```go
package main
import (
// ...
pb "your.module.path/proto"
)
func main() {
ctx := context.Background()
grpcClient, err := grpc.NewClient(/* ... */)
if err != nil {
log.Fatalf("could not connect to producer API: %v", err)
}
log.Printf("Connected to producer API at %s", producerAPIAddress)
client := pb.NewPaymentReceiverClient(grpcClient)
stream, err := client.NotifyPayment(ctx, &pb.Empty{}, grpc.WaitForReady(true))
if err != nil {
log.Fatalf("could not receive payment notifications: %v", err)
}
log.Printf("Listening for payment notifications...")
for {
notification, err := stream.Recv()
if err == io.EOF {
log.Printf("No more payment notifications")
return
}
if err != nil {
log.Fatalf("could not receive payment notification: %v", err)
}
// process notifications
processPaymentNotification(notification)
}
}
func processPaymentNotification(notification *pb.PaymentNotification) {
log.Printf("Received payment notification: %v", notification)
}
```
Using [grpcurl](https://github.com/fullstorydev/grpcurl), you can simulate a customer adding a payment of $50000 by calling your service with the following command:
```bash
grpcurl -plaintext -proto ./proto/paymentreceiver.proto -d '{ "customerId": "1234", "amount": 50000 }' localhost:8080 proto.PaymentReceiver/ReceivePayment
# It should output:
# {
# "received": true
# }
```
Also, you should see the following output from the consumer:
```bash
Received payment notification: payment:{customerId:"1234" amount:50000} highValuePayment:true}
```
## Adding OpenTelemetry to the System
[OpenTelemetry](https://opentelemetry.io/) is an open-source observability framework for generating, capturing, and collecting telemetry data such as logs, metrics, and traces from software services and applications. For this article, we will focus on configuring traces in the system, so you can see the entire distributed operation of processing a payment.
First, you need to set up a basic infrastructure, with an [OpenTelemetry (OTel) Collector](https://github.com/open-telemetry/opentelemetry-collector) to receive traces and [Jaeger](https://www.jaegertracing.io/) to store them, structuring the system like this:

To simplify the setup, you will set up both in a `docker-compose.yaml` file (full example [here](https://github.com/kubeshop/tracetest/blob/main/examples/quick-start-grpc-stream-propagation/docker-compose.yaml#L42)), like this:
```yaml
services:
otel-collector:
image: otel/opentelemetry-collector-contrib:0.101.0
command:
- "--config"
- "/otel-local-config.yaml"
volumes:
- ./collector.config.yaml:/otel-local-config.yaml
ports:
- 4317:4317
depends_on:
jaeger:
condition: service_started
jaeger:
image: jaegertracing/all-in-one:latest
restart: unless-stopped
ports:
- 16686:16686
- 16685:16685
environment:
- COLLECTOR_OTLP_ENABLED=true
healthcheck:
test: ["CMD", "wget", "--spider", "localhost:16686"]
interval: 1s
timeout: 3s
retries: 60
```
A local `collector.config.yaml` will be used to configure the OTel Collector to receive traces and send them to Jaeger:
```yaml
receivers:
otlp:
protocols:
grpc:
http:
processors:
batch:
timeout: 100ms
exporters:
logging:
loglevel: debug
otlp/jaeger:
endpoint: jaeger:4317
tls:
insecure: true
service:
pipelines:
traces/1:
receivers: [otlp]
processors: [batch]
exporters: [otlp/jaeger]
```
You can run both locally on your machine by executing `docker compose up` in the folder where you set up the files, with access to the Jaeger UI through [`http://localhost:16686/`](http://localhost:16686/).
After configuring the infra, you will start to instrument your code by sending data to it. Since both the PaymentReceiverAPI and the Worker are written in Go, you will use [OpenTelemetry Go](https://github.com/open-telemetry/opentelemetry-go) to set up basic instrumentation and [OpenTelemetry Go Contrib](https://github.com/open-telemetry/opentelemetry-go-contrib) to instrument the gRPC server and client.
Add the following functions to your code to configure basic instrumentation:
```go
// ...
const spanExporterTimeout = 1 * time.Minute
func setupOpenTelemetry(ctx context.Context, otelExporterEndpoint, serviceName string) (trace.Tracer, error) {
log.Printf("Setting up OpenTelemetry with exporter endpoint %s and service name %s", otelExporterEndpoint, serviceName)
spanExporter, err := getSpanExporter(ctx, otelExporterEndpoint)
if err != nil {
return nil, fmt.Errorf("failed to setup span exporter: %w", err)
}
traceProvider, err := getTraceProvider(spanExporter, serviceName)
if err != nil {
return nil, fmt.Errorf("failed to setup trace provider: %w", err)
}
return traceProvider.Tracer(serviceName), nil
}
func getSpanExporter(ctx context.Context, otelExporterEndpoint string) (sdkTrace.SpanExporter, error) {
ctx, cancel := context.WithTimeout(ctx, spanExporterTimeout)
defer cancel()
conn, err := grpc.NewClient(
otelExporterEndpoint,
grpc.WithTransportCredentials(insecure.NewCredentials()),
)
if err != nil {
return nil, fmt.Errorf("failed to create gRPC connection to collector: %w", err)
}
traceExporter, err := otlptracegrpc.New(ctx, otlptracegrpc.WithGRPCConn(conn))
if err != nil {
return nil, fmt.Errorf("failed to create trace exporter: %w", err)
}
return traceExporter, nil
}
func getTraceProvider(spanExporter sdkTrace.SpanExporter, serviceName string) (*sdkTrace.TracerProvider, error) {
defaultResource := resource.Default()
mergedResource, err := resource.Merge(
defaultResource,
resource.NewWithAttributes(
defaultResource.SchemaURL(),
semconv.ServiceNameKey.String(serviceName),
),
)
if err != nil {
return nil, fmt.Errorf("failed to create otel resource: %w", err)
}
tp := sdkTrace.NewTracerProvider(
sdkTrace.WithBatcher(spanExporter),
sdkTrace.WithResource(mergedResource),
)
otel.SetTracerProvider(tp)
otel.SetTextMapPropagator(
propagation.NewCompositeTextMapPropagator(
propagation.TraceContext{},
propagation.Baggage{},
),
)
return tp, nil
}
```
This `setupOpenTelemetry` function will configure a `spanExporter` to send telemetry data to the OTel Collector via gRPC with the address specified with `otelExporterEndpoint` (which can be [`localhost:4317`](http://localhost:4317) if you are running your API locally, or `otel-collector:4317` if you are running the API inside docker), and then set up a `traceProvider` globally to start capturing traces.
You can call this function from your entrypoint in `main` with code like this ([PaymentReceiver API example](https://github.com/kubeshop/tracetest/blob/main/examples/quick-start-grpc-stream-propagation/producer-api/main.go#L83) and [Worker example](https://github.com/kubeshop/tracetest/blob/main/examples/quick-start-grpc-stream-propagation/consumer-worker/main.go#L22)):
```go
func main() {
otelExporterEndpoint := // ...
otelServiceName := // ...
tracer, err := setupOpenTelemetry(context.Background(), otelExporterEndpoint, otelServiceName)
if err != nil {
log.Fatalf("failed to initialize OpenTelemetry: %v", err)
return
}
// ...
}
```
Finally, you need to configure both the gRPC server and clients to start creating spans for each operation by setting up the OTel Contrib middleware:
```go
// in server
grpcServer := grpc.NewServer(
grpc.StatsHandler(otelgrpc.NewServerHandler()),
)
// in client
grpcClient, err := grpc.NewClient(
/* ... */,
grpc.WithStatsHandler(otelgrpc.NewClientHandler()),
)
```
Now you can run the system again with OpenTelemetry. To see that it is working you can execute another [grpcurl](https://github.com/fullstorydev/grpcurl) and check in Jaeger to see it registered:
```bash
grpcurl -plaintext -proto ./proto/paymentreceiver.proto -d '{ "customerId": "1234", "amount": 50000 }' localhost:8080 proto.PaymentReceiver/ReceivePayment
# after result, go to http://localhost:16686/search
```

However, there is a problem.
**You can see two traces for the PaymentReceiverAPI and the Worker, for each part of the process, but you cannot see it together as a single timeline. This happens due to the lack of [Trace Context propagation](https://opentelemetry.io/docs/concepts/context-propagation/).**
The OpenTelemetry library does not have the trace metadata to identify which trace is the parent. Newly created spans are instead created as another trace instead of getting added to the trace parent.
## Fixing Context Propagation for Producer and Consumer
To propagate context through HTTP systems, OpenTelemetry libraries use HTTP Headers to send metadata informing other APIs that a trace was generated previously by another API, usually by the [`traceparent` header](https://www.w3.org/TR/trace-context/#traceparent-header), that contains the TraceID of the current transaction.
Since the Worker uses a streaming client for multiple messages (a single HTTP call to continuously receive data), you cannot rely on HTTP headers to track each time a piece of data is received. To solve that, you need to manually set the Context Propagation for your payment notifications, by attaching metadata to each notification with the `traceparent`.
Also, we will add manual instrumentation to track internal operations, so we can trace the following operations:

First, you will change the Protobuf definition of the `PaymentNotification` to have metadata, like this:
```protobuf
// ...
message PaymentNotification {
Payment payment = 1;
bool highValuePayment = 2;
map<string, string> metadata = 3; // new field
}
```
Then update the generated server and clients to have these fields (usually done by `protoc` command, or in the code sample, using `make build-proto`).
In Go, OTel libraries rely on setting the tracing metadata on the `context.Context` object to track operations. In the next step, we will capture the trace propagation metadata in the context and inject it into the notification. To do that, create the following helper functions:
```go
func injectMetadataIntoContext(ctx context.Context, metadata map[string]string) context.Context {
propagator := otel.GetTextMapPropagator()
return propagator.Extract(
ctx,
propagation.MapCarrier(metadata),
)
}
func extractMetadataFromContext(ctx context.Context) map[string]string {
propagator := otel.GetTextMapPropagator()
metadata := map[string]string{}
propagator.Inject(
ctx,
propagation.MapCarrier(metadata),
)
return metadata
}
```
Then, change the server to handle this metadata and add manual instrumentation for receiving and enqueuing payments:
```go
type paymentWithMetadata struct {
payment *pb.Payment
metadata map[string]string
}
// Guarantee that the serverImpl implements the PaymentReceiverServer interface
var _ pb.PaymentReceiverServer = &serverImpl{}
// Channel to store payments and used as a "in-memory" queue
var paymentChannel = make(chan *paymentWithMetadata)
func (s *serverImpl) ReceivePayment(ctx context.Context, payment *pb.Payment) (*pb.ReceivePaymentResponse, error) {
go func() {
ctx, span := s.tracer.Start(ctx, "EnqueuePayment")
defer span.End()
message := &paymentWithMetadata{
payment: payment,
metadata: extractMetadataFromContext(ctx),
}
// handle channel as in-memory queue
paymentChannel <- message
}()
return &pb.ReceivePaymentResponse{Received: true}, nil
}
```
Then, for sending it through the stream:
```go
func (s *serverImpl) NotifyPayment(_ *pb.Empty, stream pb.PaymentReceiver_NotifyPaymentServer) error {
for {
message, ok := <-paymentChannel
if !ok {
return nil
}
ctx := injectMetadataIntoContext(context.Background(), message.metadata)
ctx, span := s.tracer.Start(ctx, "SendPaymentNotification")
payment := message.payment
highValuePayment := payment.Amount > 10_000
notification := &pb.PaymentNotification{
Payment: payment,
HighValuePayment: highValuePayment,
}
// extract OTel data from context and add it to the notification
notification.Metadata = extractMetadataFromContext(ctx)
if err := stream.Send(notification); err != nil {
return err
}
span.End()
}
}
```
With the PaymentReceiverAPI instrumented, the last step is to change the Worker to get the tracing metadata and start registering spans linked to the current operation:
```go
func processPaymentNotification(tracer trace.Tracer, notification *pb.PaymentNotification) {
messageProcessingCtx := injectMetadataIntoContext(context.Background(), notification.Metadata)
_, span := tracer.Start(messageProcessingCtx, "ProcessPaymentNotification")
defer span.End()
log.Printf("Received payment notification: %v", notification)
}
```
Now, use [grpcurl](https://github.com/fullstorydev/grpcurl) again and check Jaeger. You should be able to see one trace for both PaymentReceiverAPI and Worker, with the entire operation in one timeline:
```bash
grpcurl -plaintext -proto ./proto/paymentreceiver.proto -d '{ "customerId": "1234", "amount": 50000 }' localhost:8080 proto.PaymentReceiver/ReceivePayment
# after result, go to http://localhost:16686/search
```


## Testing a Payment Being Processed
To evaluate and guarantee that everything is working properly, you can create a [trace-based test](https://docs.tracetest.io/concepts/what-is-trace-based-testing) that triggers a gRPC call against the API and validates whether the trace is logged as intended and the payment is correctly processed in each part of the system.
To do that, we will use [Tracetest](https://tracetest.io/), which triggers service calls (in our case, gRPC calls like our `grpcurl` calls) and validate the emitted traces to ensure that our observability stack works as intended.
First, you will create a new account on [Tracetest](https://app.tracetest.io/), and then create a [new organization](https://docs.tracetest.io/concepts/organizations) and a [new environment](https://docs.tracetest.io/concepts/environments). This you allow you to have an [API Key for your agent](https://docs.tracetest.io/configuration/agent), you will start the local stack with a new container with a [Tracetest Agent](https://docs.tracetest.io/concepts/agent):
```bash
TRACETEST_API_KEY=your-api-key docker compose up -d
```
Then, you will [install Tracetest CLI](https://docs.tracetest.io/getting-started/installation) and configure it to access your environment with the command below. It will guide you to connect to your `personal-org` and environment.
```bash
tracetest configure
# This command will print some instructions interactively to help to connect to your env:
# What tracetest server do you want to use? (default: https://app.tracetest.io/)
# What Organization do you want to use?:
# > personal-org (ttorg_000000000000000)
# What Environment do you want to use?:
# > OTel (ttenv_000000000000000)
# SUCCESS Successfully configured Tracetest CLI
```
Now, configure the Tracetest Agent to connect to local Jaeger, using the following command:
```bash
tracetest apply datastore -f ./tracetest/tracetest-tracing-backend.yaml
# It will send the following output, which means that our environment was correctly configured:
# type: DataStore
# spec:
# id: current
# name: Jaeger
# type: jaeger
# default: true
# createdAt: 2023-10-31T00:30:47.137194Z
# jaeger:
# endpoint: jaeger:16685
# tls:
# insecure: true
```
Next, write a test that checks the trace generated by calling `ReceivePayment` gRPC endpoint and validate:
1. if `ReceivePayment` gRPC endpoint is properly called.
2. if a payment is enqueued to be sent.
3. if a payment notification is sent through a gRPC stream.
4. if the payment notification is received and processed.
To do that, we will create a test file called `./trace-based-test.yaml` with the following contents:
```yaml
type: Test
spec:
id: pprDfSUSg
name: Test gRPC Stream Propagation
trigger:
type: grpc
grpc:
address: producer-api:8080
method: proto.PaymentReceiver.ReceivePayment
protobufFile: ./proto/paymentreceiver.proto
request: |
{
"customerId": "1234",
"amount": 50000
}
specs:
- selector: span[name="proto.PaymentReceiver/ReceivePayment"]
name: It should call ReceivePayment gRPC endpoint
assertions:
- attr:tracetest.selected_spans.count = 1
- selector: span[name="EnqueuePayment"]
name: In should enqueue a payment to send it in a stream
assertions:
- attr:tracetest.selected_spans.count = 1
- selector: span[name="SendPaymentNotification"]
name: It should send a payment notification through a gRPC stream
assertions:
- attr:tracetest.selected_spans.count = 1
- selector: span[name="ProcessPaymentNotification"]
name: It should receive a PaymentNotification through a stream and process it
assertions:
- attr:tracetest.selected_spans.count = 1
- selector: span[name="proto.PaymentReceiver/ReceivePayment"] span[name="EnqueuePayment"] span[name="SendPaymentNotification"] span[name="ProcessPaymentNotification"]
name: The trace shape is correct
assertions:
- attr:tracetest.selected_spans.count = 1
```
Note that under the `spec.specs` section, you have one assertion for each span emitted by the system, checking every process step. Also, you have one last step that checks if each step is executed in the correct order.
To run it, you can execute the following command:
```bash
tracetest run test --file ./trace-based-test.yaml
# It will output:
# ✔ RunGroup: #b_CBSFUIg (https://app.tracetest.io/organizations/your-organization-id/environments/your-environment-id/run/b_CBSFUIg)
# Summary: 1 passed, 0 failed, 0 pending
# ✔ Test gRPC Stream Propagation (https://app.tracetest.io/organizations/your-organization-id/environments/your-environment-id/test/pprDfSUSg/run/1/test) - trace id: 4ec68b1a3aaa57aecf0098dd7b4a9916
# ✔ It should call ReceivePayment gRPC endpoint
# ✔ In should enqueue a payment to send it in a stream
# ✔ It should send a payment notification through a gRPC stream
# ✔ It should receive a PaymentNotification through a stream and process it
# ✔ The trace shape is correct
```
You can also see this output in the Tracetest UI through the links printed in the CLI output:

## Conclusion
gRPC streams are a great way to integrate between APIs to send a continuous flow of data. However, tracking a single trace for an operation can be tricky because of how OpenTelemetry Context propagation works for HTTP.
This article provides a guide on how to implement trace context propagation for gRPC streams using OpenTelemetry, fix context propagation issues for producers and consumers, and test them.
The [example sources](https://github.com/kubeshop/tracetest/tree/main/examples/quick-start-grpc-stream-propagation) used in this article and [setup instructions](https://github.com/kubeshop/tracetest/tree/main/examples/quick-start-grpc-stream-propagation/README.md) are available in the Tracetest GitHub repository.
Would you like to learn more about Tracetest and what it brings to the table? Visit the Tracetest [docs](https://docs.tracetest.io/getting-started/installation) and try it out by [signing up today](https://app.tracetest.io)!
Also, please feel free to join our [Slack Community](https://dub.sh/tracetest-community), give [Tracetest a star on GitHub](https://github.com/kubeshop/tracetest), or schedule a [time to chat 1:1](https://calendly.com/ken-kubeshop/45min).
| danielbdias |
1,893,604 | Securing Service Catalog with RBAC and OPA Gatekeeper | In modern cloud-native environments, managing access to resources and ensuring compliance with... | 0 | 2024-06-19T13:33:36 | https://dev.to/platform_engineers/securing-service-catalog-with-rbac-and-opa-gatekeeper-2b20 | In modern cloud-native environments, managing access to resources and ensuring compliance with organizational policies is crucial. This blog post will delve into the technical details of securing a service catalog using Role-Based Access Control (RBAC) and Open Policy Agent (OPA) Gatekeeper.
### Service Catalog Overview
A service catalog is a centralized repository that provides a single source of truth for all services offered by an organization. It enables users to discover, request, and manage services in a standardized manner. In a Kubernetes environment, the service catalog is typically implemented using the Open Service Broker API (OSBA).
### Role-Based Access Control (RBAC)
RBAC is a widely adopted access control mechanism that restricts access to resources based on user roles. In Kubernetes, RBAC is implemented using roles, role bindings, and cluster roles. Roles define a set of permissions, while role bindings associate roles with users or groups. Cluster roles are used to define permissions at the cluster level.
To implement RBAC for the service catalog, we need to create roles and role bindings that define the permissions for users to access and manage services. For example:
```yaml
# Role for service catalog administrators
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: service-catalog-admin
rules:
- apiGroups: ["servicecatalog.k8s.io"]
resources: ["clusterserviceclasses", "clusterserviceplans", "serviceinstances"]
verbs: ["get", "list", "create", "update", "delete"]
# Role binding for service catalog administrators
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: service-catalog-admin-binding
roleRef:
name: service-catalog-admin
kind: Role
subjects:
- kind: User
name: admin-user
apiGroup: rbac.authorization.k8s.io
```
### Open Policy Agent (OPA) Gatekeeper
OPA Gatekeeper is a policy controller that enforces policies on Kubernetes resources. It provides a flexible and extensible way to define and manage policies. In the context of the service catalog, OPA Gatekeeper can be used to enforce policies on service instances and plans.
To integrate OPA Gatekeeper with the service catalog, we need to create a `ConstraintTemplate` that defines the policy rules. For example:
```yaml
# Constraint template for service instance policies
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
name: service-instance-policy
spec:
crd:
spec:
names:
kind: ServiceInstance
validation:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
plan:
type: string
pattern: ^[a-zA-Z0-9-]+$
targets:
- target: admission.k8s.gatekeeper.sh
rego: |
package serviceinstance
deny[msg] {
input.review.object.spec.plan != "standard"
msg := "Only standard plans are allowed"
}
```
This constraint template defines a policy that only allows service instances with the "standard" plan.
### Integrating RBAC and OPA Gatekeeper
To integrate RBAC and OPA Gatekeeper, we need to create a `ClusterConstraint` that references the `ConstraintTemplate` and applies it to the service catalog resources. For example:
```yaml
# Cluster constraint for service instance policies
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: ClusterConstraint
metadata:
name: service-instance-policy
spec:
match:
kinds:
- apiGroups: ["servicecatalog.k8s.io"]
kinds: ["ServiceInstance"]
parameters:
target: admission.k8s.gatekeeper.sh
constraintTemplate:
name: service-instance-policy
```
This cluster constraint applies the service instance policy to all service instances in the cluster.
### Conclusion
Securing a service catalog [with RBAC and OPA Gatekeeper](https://platformengineers.io/blog/securing-kubernetes-beyond-rbac-and-pod-security-policies-psp/) provides a robust access control mechanism that ensures compliance with organizational policies. By implementing roles, role bindings, and constraint templates, [platform engineers](www.platformengineers.io) can restrict access to resources and enforce policies on service instances and plans. This technical approach provides a scalable and flexible solution for managing access to resources in a Kubernetes environment. | shahangita | |
1,893,603 | Are mobile mechanics certified and trustworthy? | Reputable mobile mechanics are often certified and have experience similar to those working in... | 0 | 2024-06-19T13:32:17 | https://dev.to/deransmith/are-mobile-mechanics-certified-and-trustworthy-pla | webdev, javascript, programming, react | Reputable mobile mechanics are often certified and have experience similar to those working in traditional repair shops. It’s important to check reviews, ask for certifications, and ensure the mechanic is insured before scheduling a service.
If a [mobile mechanic near me](http://www.asapmobiletulsa.com/) determines that your vehicle requires extensive repairs that cannot be performed on-site, they will typically recommend a nearby repair shop or help arrange for your vehicle to be towed to a suitable location.
Most mobile mechanics can work on a wide range of vehicle makes and models. However, it’s always a good idea to confirm that the service you’re scheduling can handle your specific vehicle and issue.
| deransmith |
1,891,467 | How to Write an Effective README File - A Guide for Software Engineers | As software engineers, our goal is to create code that remains relevant and maintainable over the... | 27,822 | 2024-06-19T13:31:14 | https://dev.to/kfir-g/enhancing-software-architecture-through-comprehensive-testing-in-backend-development-4bn2 | readme, writing, softwareengineering, coding | As software engineers, our goal is to create code that remains relevant and maintainable over the years. A key element in achieving this is crafting a comprehensive and clear README file. A README file is vital for effectively documenting and summarizing your work. This blog post shares my perspective on the importance of README files, specifically for internal use rather than open-source repositories, which already have numerous established guidelines.
## The Importance of a README File
A well-crafted README file offers numerous benefits that enhance the development process and overall project management. Here are some key reasons why a relevant README file is indispensable:
1. **Facilitates Collaboration**: A comprehensive README file allows other software engineers to quickly understand the project and contribute effectively. By providing clear instructions and context, it reduces the learning curve for new contributors and ensures consistency in coding practices.
2. **Improves Interdepartmental Communication**: Other departments, such as QA, marketing, or customer support, can find answers to their questions within the README file, eliminating the need for direct communication with the development team. This streamlines workflows and minimizes interruptions.
3. **Saves Time**: By centralizing critical information about the project, a README file saves time for everyone involved. Team members can refer to it for guidance on setup, usage, and troubleshooting, reducing the need for repeated explanations and individual support.
4. **Organizes Your Thoughts**: Writing a README file forces you to organize your thoughts and clarify the project’s objectives, structure, and implementation details. This process can reveal gaps in your planning and help refine your approach.
5. **Highlights Key Aspects of the Code**: The README file gives you the opportunity to highlight important aspects of the code, such as unique features, architectural decisions, and areas that may require special attention. This not only aids understanding but also helps in maintaining and scaling the project in the future.
In essence, a relevant README file acts as a cornerstone for effective project documentation, fostering better communication, efficiency, and collaboration across all levels of a software development project.
## Key Points to Address in a README File
To create a high-quality README file, ensure you cover these major aspects:
1. **What the Code Component Does**: Clearly describe the purpose of the code component, including the inputs it requires, the main algorithms or calculations it performs, and the expected outputs. This section should give readers a solid understanding of the component’s functionality and how it fits into the larger project.
2. **How to Run the Code**: Provide detailed instructions on how to set up and run the code. Include necessary installation steps, dependencies, package requirements, and any environment variables that need to be configured. This ensures that anyone trying to use the code can do so with minimal friction.
3. **Why It Does Things the Way It Does (Optional)**: If relevant, explain the reasoning behind specific implementation choices. This might include why certain algorithms were chosen, the rationale for architectural decisions, or any trade-offs that were made. While optional, this context can be invaluable for future developers who may need to modify or extend the code.
By addressing these key points, you ensure that your README file is not only informative but also practical and user-friendly, making it easier for others to understand, use, and contribute to your project.
## Essential Elements of a README File
A well-structured README file should cover several key points to provide a comprehensive guide for users and contributors. These include:
1. **Table of Contents**: (optional) A clear table of contents helps readers quickly navigate through the README file to find the information they need.
2. **How to Install**: Provide step-by-step installation instructions, detailing the necessary tools, packages, and dependencies required to set up the project.
3. **How to Run It**: Explain how to execute the code, including any required commands, configurations, or environment setups.
4. **Main Workflow**: Describe the primary workflow of the project, highlighting how different components interact and the overall process flow.
5. **How to Add Code / Contribute**: Offer guidelines for contributing to the project, including coding standards, branch management, and the process for submitting pull requests.
6. **What It Does**: Summarize the purpose and functionality of the project, making it clear what problems it solves and the main features it offers.
7. **How to Test It**: Provide instructions for testing the code, including any testing frameworks or tools used, as well as how to run tests and interpret the results.
8. **Dependencies**: Provide a list of all external libraries, tools, and frameworks the project relies on, including their versions and installation instructions. Include a link to the `requirements.txt` file for easy access to the required dependencies.
9. **Security**: Address any security considerations, such as authentication, data encryption, and how to report security vulnerabilities.
10. **Scalability**: Discuss the scalability of the project, including how it can handle increased load, potential bottlenecks, and recommendations for scaling the infrastructure.
By including these key points, your README file will serve as a valuable resource for understanding, using, and contributing to the project, ensuring clarity and ease of use for all stakeholders.
## Writing Style for a README File
The README file should be crafted in a manner that is simple, concise, and coherent. Strive to use clear and concise language to convey information effectively without overwhelming the reader. Ensure that the content aligns with general terminology and industry standards, making it easier for others to understand and follow. This approach helps maintain readability and accessibility, ensuring that the README file is useful for both current team members and future contributors.
## Conclusion
In conclusion, the README file is more than just a document; it's a roadmap that guides every stakeholder through the intricacies of a software project. It serves as a beacon for collaboration, a time-saver for team members, and a bridge between departments. By meticulously documenting the purpose, setup, and operation of the code, a README file crystallizes the essence of the project, ensuring that its legacy can be understood and built upon for years to come. Whether for internal use or a wider audience, the README file stands as a testament to the thoughtful engineering and clear communication that underpin successful software development. Remember, a well-maintained README is not just for today—it's for the sustainability and adaptability of your code in the ever-evolving landscape of technology.
---
Originally published on [Medium](https://blog.stackademic.com/how-to-write-an-effective-readme-file-a-guide-for-software-engineers-09a9618c0532) at [Stackademic](https://blog.stackademic.com/) publication.
Photo by [Guy Hurst](https://www.pexels.com/photo/a-person-driving-a-vessel-14420807/)
---
| kfir-g |
1,893,601 | How AI Coding Tools Might Set You Up for Failure | Why do I think AI code generation tools are bad? – An unpopular opinion In this episode, we'll... | 0 | 2024-06-19T13:30:27 | https://dev.to/iwooky/how-ai-coding-tools-might-set-you-up-for-failure-1pdf | ai, programming, productivity, career | **Why do I think AI code generation tools are bad?** _– An unpopular opinion_
In this episode, we'll discuss how AI coding tools can drive your business forward, but also how they might slow you down if not used properly. Whether you're an engineer or a business owner, this episode will provide valuable insights on AI-assisted coding.
👉 Let's get started! – [How AI Coding Tools Might Set You Up for Failure](https://iwooky.substack.com/p/ai-coding-tools)
[](https://iwooky.substack.com/p/ai-coding-tools) | iwooky |
1,892,990 | Parse, Don’t Validate: Embracing Data Integrity in Elixir | Introduction In the world of functional programming, ensuring data integrity is paramount.... | 0 | 2024-06-19T13:30:00 | https://dev.to/zoedsoupe/parse-dont-validate-embracing-data-integrity-in-elixir-5c94 | elixir, webdev, architecture | ## Introduction
In the world of functional programming, ensuring data integrity is paramount. One effective way to achieve this is by adopting the principle of **"Parse, Don’t Validate"**. This approach emphasizes the transformation of raw input data into structured, well-defined data early in the application flow, thereby enhancing reliability and maintainability. While this concept is not new, its application in Elixir—a functional and concurrent programming language—offers unique benefits and challenges. This article delves into the theory behind parsing over validation and how it aligns with Elixir's paradigms.
## Theoretical Foundations
### Parsing vs. Validation
**Validation** involves checking if data meets certain criteria, often at multiple points in an application. This can lead to redundancy and inconsistencies, as the same checks are repeated, and errors may not be handled uniformly.
**Parsing**, on the other hand, transforms data into a structured format that inherently satisfies the required criteria. This approach ensures that once data is parsed successfully, it is guaranteed to be valid throughout the application, eliminating the need for repeated checks.
### Why Parsing over Validation?
1. **Early Error Detection**: Parsing catches errors at the boundaries of your system, preventing invalid data from entering the core logic.
2. **Simplified Code**: By transforming data into a well-defined structure upfront, the core application logic becomes simpler and more focused on business requirements rather than data validation.
3. **Enhanced Maintainability**: Centralizing data integrity checks in parsing functions makes the system easier to understand and maintain.
### Functional Programming and Parsing
In functional programming, functions are first-class citizens, and immutability is a core principle. Parsing fits naturally into this paradigm as it allows data to be transformed in a pure, deterministic manner. Once data is parsed into a well-defined structure, it remains immutable, ensuring consistency and reliability.
### Concurrency and Data Integrity in Elixir
Elixir, built on the Erlang VM, excels in building concurrent, distributed systems. In such environments, data integrity is crucial, as concurrent processes need to operate on reliable data. By parsing data at the boundaries, Elixir applications can ensure that all processes work with valid, consistent data, thereby reducing the risk of concurrency-related bugs.
## Applying the "Parse, Don’t Validate" Principle in Elixir
### Conceptual Approach
1. **Define Data Structures**: Use Elixir structs or maps to define the shape of your data.
2. **Parse Input Data**: Transform raw input data into these well-defined structures at the earliest possible point in your application.
3. **Centralize Parsing Logic**: Encapsulate parsing logic in dedicated modules or functions to ensure uniformity and reuse.
4. **Leverage Pattern Matching**: Utilize Elixir’s powerful pattern matching to simplify the parsing process and handle different data shapes effectively.
### Example Scenario
Consider an API endpoint that accepts user registration data. Instead of validating fields individually, parse the entire payload into a `User` struct.
#### Defining the Data Structure
```elixir
defmodule User do
defstruct [:name, :email, :age, :address]
end
```
#### Parsing the Input Data
```elixir
defmodule UserParser do
def parse(params) do
with {:ok, name} <- validate_name(params["name"]),
{:ok, email} <- validate_email(params["email"]),
{:ok, age} <- validate_age(params["age"]),
{:ok, address} <- validate_address(params["address"]) do
{:ok, %User{name: name, email: email, age: age, address: address}}
else
{:error, reason} -> {:error, reason}
end
end
defp validate_name(name) when is_binary(name) and byte_size(name) > 0, do: {:ok, name}
defp validate_name(_), do: {:error, "Invalid name"}
defp validate_email(email) when is_binary(email) and String.contains?(email, "@"), do: {:ok, email}
defp validate_email(_), do: {:error, "Invalid email"}
defp validate_age(age) when is_integer(age) and age > 0, do: {:ok, age}
defp validate_age(_), do: {:error, "Invalid age"}
defp validate_address(address) when is_map(address), do: {:ok, address}
defp validate_address(_), do: {:error, "Invalid address"}
end
```
#### Using the Parser in Your Application
```elixir
defmodule UserController do
alias MyApp.UserParser
def register_user(conn, params) do
case UserParser.parse(params) do
{:ok, user} ->
# Proceed with business logic using the parsed user
json(conn, %{status: "success", user: user})
{:error, reason} ->
# Handle parsing errors
json(conn, %{status: "error", reason: reason})
end
end
end
```
## Parsing with Peri
While the above example demonstrates a manual approach to parsing, the [Peri](https://hexdocs.pm/peri) library offers a more structured way to define and enforce schemas in Elixir.
### Defining a Schema with Peri
```elixir
defmodule MySchemas do
import Peri
defschema :user, %{
name: :string,
email: {:required, :string},
age: :integer,
address: %{
street: :string,
city: :string
},
role: {:required, {:enum, [:admin, :user]}}
}
end
```
### Parsing Data with Peri
```elixir
defmodule UserController do
alias MyApp.MySchemas
def register_user(conn, params) do
case MySchemas.user(params) do
{:ok, user} ->
# Proceed with business logic using the parsed user
json(conn, %{status: "success", user: user})
{:error, errors} ->
# Handle parsing errors
json(conn, %{status: "error", errors: errors})
end
end
end
```
## Conclusion
Adopting the "Parse, Don’t Validate" principle in Elixir ensures data integrity, simplifies code, and enhances maintainability. By transforming raw input data into structured, well-defined data at the system's boundaries, you create a robust foundation for your application.
Elixir's functional and concurrent nature makes it an ideal language for embracing this approach. While manual parsing is effective, libraries like Peri offer powerful tools to define and enforce schemas, ensuring consistency and reliability throughout your application.
Embrace the power of parsing in Elixir, and let your code benefit from cleaner, more maintainable, and type-safe data handling. | zoedsoupe |
1,893,600 | Transform Your Look at the Premier Salon in Prahlad Nagar, Ahmedabad | In the bustling area of Prahlad Nagar, Ahmedabad, finding a salon that offers exceptional hair and... | 0 | 2024-06-19T13:29:56 | https://dev.to/abitamim_patel_7a906eb289/transform-your-look-at-the-premier-salon-in-prahlad-nagar-ahmedabad-4mn5 | In the bustling area of Prahlad Nagar, Ahmedabad, finding a salon that offers exceptional hair and beauty services can elevate your style and confidence. Our **[top-rated salon in Prahlad Nagar](https://trakky.in/ahmedabad/nearby/?area=Prahladnagar)** is dedicated to providing luxurious and professional care, ensuring you look and feel your best.
Why Choose Our Salon in Prahlad Nagar?
At our salon, we prioritize excellence and customer satisfaction. Here’s why our salon is the best choice for your beauty needs:
Expert Stylists and Beauticians: Our team consists of highly trained and certified professionals who are up-to-date with the latest trends and techniques. They offer personalized services tailored to your specific preferences and requirements.
Comprehensive Services: Whether you need a chic haircut, vibrant hair color, or a rejuvenating facial, our salon offers a wide range of services. From traditional beauty treatments to the latest styling trends, we have everything you need to enhance your look.
Luxurious Ambiance: Our salon is designed to provide a serene and luxurious environment. With stylish decor, soothing music, and a welcoming atmosphere, you’ll feel relaxed and pampered from the moment you enter.
High-Quality Products: We use only the finest products from top brands to ensure outstanding results. Our selection of premium products helps maintain the health and beauty of your hair and skin.
Our Signature Services
Haircuts and Styling: Our expert stylists are skilled in creating the latest hairstyles that complement your personality and lifestyle. Whether it’s a trendy cut or an elegant style, we deliver perfection.
Coloring and Highlights: Our color specialists use top-tier products to achieve vibrant, long-lasting colors. From bold new shades to subtle highlights, we ensure your hair looks stunning.
Facials and Skin Care: Indulge in our range of facials designed to rejuvenate your skin and give you a radiant glow. Our skincare services are tailored to address your specific concerns.
Manicures and Pedicures: Enjoy our luxurious nail services, including classic manicures, pedicures, and intricate nail art. We ensure your hands and feet are beautifully groomed.
Benefits of Regular Salon Visits
Regular visits to the salon offer numerous benefits, including:
Enhanced Appearance: Professional haircuts and beauty treatments can significantly improve your overall look and boost your confidence.
Healthy Hair and Skin: Regular treatments help maintain the health of your hair and skin, preventing damage and promoting a youthful appearance.
Stress Relief: Salon visits provide an opportunity to relax and de-stress. The pampering experience can enhance your mood and overall well-being.
Expert Advice: Our experienced staff offers personalized advice and treatments tailored to your unique needs, ensuring the best possible results.
Book Your Appointment Today
Are you ready to experience the best in hair and beauty services? Visit our premier **[salon in Prahlad Nagar, Ahmedabad](https://trakky.in/ahmedabad/nearby/?area=Prahladnagar)**, and treat yourself to a luxurious and rejuvenating experience. Our easy online booking system allows you to schedule your appointment at your convenience. Don’t wait – book your session today and step into a world of beauty and relaxation.
| abitamim_patel_7a906eb289 | |
1,893,599 | Friction Liner/Gasket Manufacturers: Meeting the Demands of Heavy-Duty Applications | screenshot-1718052226886.png Friction Liner/Gasket Manufacturers: Meeting the Demands of Heavy-Duty... | 0 | 2024-06-19T13:26:53 | https://dev.to/rebecca_greenh_5fdea1862c/friction-linergasket-manufacturers-meeting-the-demands-of-heavy-duty-applications-513p | design | screenshot-1718052226886.png
Friction Liner/Gasket Manufacturers: Meeting the Demands of Heavy-Duty Applications
Friction liner/gasket manufacturers play a critical role in ensuring the safety and durability of heavy-duty machinery and vehicles. They create components that reduce friction, prevent leaks, and offer other advantages that make them essential for such applications. We will discuss the advantages of friction liners and gaskets, the innovations manufacturers have made, their safety features, how to use them, and the Road-Railer Liner quality and service they offer.
Benefits
Friction liners and gaskets offer several advantages in heavy-duty applications
They offer a barrier like mechanical prevent the leakage of liquids, gases, as well as other substances
They've been resistant to high conditions, pressures, and surroundings that are corrosive
In addition, they lessen the wear and tear of this mating areas and improve their performance, ensuring durability like reliability like long-term
Innovation
Friction liner/gasket manufacturers are continually innovating their products to meet up the evolving needs of heavy-duty applications
They are typically developing materials which are brand new could be more resistant to raised temperatures and pressures, as well as enhancing the design and production processes to fulfill the particular needs of various industries
Also they are utilizing brand new technologies, such as for example laser cutting and printing like electronic to boost precision and efficiency
Safety
In heavy-duty equipment and cars, security is vital
Friction liners and gaskets donate to safety by preventing leaks and decreasing the threat of fires, explosions, as well as other dangers
They also boost the performance of stopping Imported Liner systems, ensuring vehicles can stop quickly and safely, and provide noise and vibration damping to reduce motorist tiredness and protect against hearing harm
Provider
Friction liner/gasket manufacturers offer excellent solution because of their customers to make performance like certain is optimized durability of the products
They offer tech help team and advice to help customers pick the absolute most elements which are suitable regards with their applications
Additionally they provide maintenance and fix services to make certain that the elements continue steadily to safely operate correctly and in their lifespan
Quality
Quality is essential in terms of friction liners and gaskets
Manufacturers adhere to strict quality standards to be sure their products meet the needs of heavy-duty applications
They test their products or services extensively to ensure they meet the needed performance specs, and additionally they simply utilize the quality materials which can be greatest inside their manufacturing procedures
Application
Friction liner/gasket manufacturers have an market like enormous their products or services, with applications spanning companies that are various including automotive, aerospace, marine, and construction
Heavy-duty equipment and machinery are necessary for all those companies, where durability and safety are very crucial
Therefore, friction liners and gaskets will be in popular, and manufacturers continue to innovate and improve their products to meet the needs of the companies
Conclusion
In conclusion, friction liner/gasket manufacturers play a crucial role in heavy-duty applications, providing mechanical barriers that prevent leaks and offer other advantages, such as reducing wear and improving performance. Manufacturers are continuously innovating their Steel Wire Rope products to meet the specific needs of different industries, while also ensuring that their products meet strict quality standards and are safe to use. In heavy-duty equipment and machinery, friction liners and gaskets are essential components that enhance safety and reliability.
| rebecca_greenh_5fdea1862c |
1,893,598 | 3 Part Journey to Understanding the Key Benefits of Continuous Profiling 🔥 | Your ultimate guide to understanding continuous profiling and how it combines profiling and... | 0 | 2024-06-19T13:26:30 | https://dev.to/platformsh/3-part-journey-to-understanding-the-key-benefits-of-continuous-profiling-2inn | webdev, tooling, productivity, devops | ## Your ultimate guide to understanding continuous profiling and how it combines profiling and monitoring with minimal overhead 😎✨
Discover our latest three part series where we take you behind the scenes to explore the different ways of collecting observability data and converting it into actionable information.
**In this series you will discover 💡:**
- How to trigger a profile and explore the information provided in a way that shines a light on the inherent nature of deterministic profiling.
- How far you can push deterministic observability’s logic.
- The key element to every observability initiative: the ratio between the information available and the overhead caused by the data collection.
**💪 Get ready to start your observability journey, let's go ⬇️**
- [**Understanding continuous profiling part**](https://blog.blackfire.io/understanding-continuous-profiling-part-1.html) 1️⃣
- [**Understanding continuous profiling part**](https://blog.blackfire.io/understanding-continuous-profiling-part-2.html) 2️⃣
- [**Understanding continuous profiling part**](https://blog.blackfire.io/understanding-continuous-profiling-part-3.html) 3️⃣
Have burning questions about observability or do you want know how to include it in your optimization strategy? Let's have a conversation, leave us a comment below with your questions to kick it off. 🤓
**To better observability and beyond!** | celestevanderwatt |
1,893,596 | How I published my first app to Apple Store #1 | Background Hey ho! My name is Uladz and I’m writing my first iOS application. Last week,... | 0 | 2024-06-19T13:26:01 | https://dev.to/uladzmi/how-i-published-my-first-app-to-apple-store-1-491a | mobile, development, ios, learning | ### Background
Hey ho! My name is Uladz and I’m writing my first iOS application. Last week, my laptop got drenched by water from a baby bottle and refused to turn on. So, for the next couple of weeks, since I can’t code, I’ll be writing posts instead…
For the past few years, I’ve been constantly thinking about starting something of my own, dreaming of working for myself. The criteria for a new venture were a few, namely the ability to engage in it in my spare time, having the necessary skills or the desire to learn, minimal financial investment, and the potential to see initial results within six months. The ideas were quite varied, ranging from dropshipping or a print-on-demand T-shirt store to my own cafe or a children’s toy store. Many ideas were dismissed as boring, but mostly due to the fear that something might not work out.
Two years ago, I became a father and we downloaded an app to track the activities of our growing baby. I don’t remember the name anymore, but it served its purpose. The only issue was that some features, which imho should have been free, were only available with a subscription. So, I thought, why not quickly put together my own app?

I liked the idea because it met all the criteria: I could code for 1–2 hours a day, I wanted to learn mobile development, no funds were required (not entirely true, I’ll explain in the next parts), and six months seemed feasible. I didn’t plan to sell it; the main idea was learning and personal use. I also wanted to streamline the release process, i.e., to be able to make changes and release new versions quickly. In the future, if all went well, I could try to monetize it and have some passive income, which would be nice. And so, at night, sitting on a fitness ball, rocking my daughter with one hand and typing with the other, I started coding.
The first thing was to decide what to write and for which platform. I decided not to limit myself to just iOS, as I couldn’t leave Android smartphone owners without the opportunity to use my wonderful creation. So, I was choosing between React Native and Flutter. For work, I needed to learn React to support an internal service, and I decided that it was a good argument in favor of React Native. I started, as expected, with the official documentation and tried to launch Hello World sample in the simulator, but it didn’t go as smoothly as I expected. The process of setting up the environment was so clumsy that my motivation quickly plummeted. Dependencies didn’t want to install, the simulator didn’t want to run or crashed without apparent reason. Also, the app release process was unclear to me, and after struggling for 2–3 weeks, I abandoned the idea. Besides, our child’s routine had settled, and we stopped using even the paid app.

Nine months passed, we moved to another city, and I joined a gym. I had tried before, but never stuck with it for long. This time, the gym was a 2-minute walk away, so the likelihood of skipping was minimal. I started looking for an app to log sets and weights and found there were many. One of the first I came across suited me, but over time, I again found that I needed a subscription to comment on all exercises, not just one per session, and I couldn’t create more than three routines.
> For the record, I believe you should pay for any service one way or another: watch ads, subscribe, or accept that your data is sold. The issue here was that much of the functionality available with a subscription was unnecessary for me, and what I needed was artificially limited with the sole purpose of forcing me to buy something I didn’t need.
Thus, I returned to the idea of my own app and the question of what to write it in. This time, I decided to take the path of least resistance — start small, learn fast. Considering that I was writing primarily for myself, I decided to limit support to iOS. The entry threshold was very low; I started with [Apple’s official tutorials](https://developer.apple.com/tutorials/app-dev-training/) and within the first two hours, I loaded a test app onto my phone. So, in the evenings after my main job, I began working on yet another workoutlogger 💪.
That’s all for now, thanks to everyone who read to the end!
In the next posts, I’ll write about:
- development process and why I decided not to learn Swift, but to ask ChatGPT
- first release and
- future plans
You can see screenshots or download it [here](https://apps.apple.com/de/app/yet-another-workoutlogger/id6484401597?l=en-GB&platform=iphone). It’s free, with no ads or registration needed.
| uladzmi |
1,893,594 | Essential Skills Every Aspiring Graphic Designer should Know | As the world becomes more digitally focused, graphic design has become an increasingly popular field.... | 0 | 2024-06-19T13:25:01 | https://dev.to/red_applelearningpvtl/essential-skills-every-aspiring-graphic-designer-should-know-344f | design, graphic | As the world becomes more digitally focused, graphic design has become an increasingly popular field. After seeing huge career growth, job security, and a high paycheck; young creative minds are flocking into the sector. Due to the huge demand for learning the skills of a graphic designer; there is a growing number of courses and programs available to aspiring designers. Whether you're just starting out in your graphic design journey or looking to brush up on your skills, there are certain essential skills every graphic designer should know. Therefore in this article, we will take a closer look at these skills and we will also look that why these skills are more important.
## List of Essential Skills Every Aspiring Graphic Designer should know:
### Design Theory and Principles
Every graphic designer should have comprehensive knowledge of the foundational principles and theories of design. This includes understanding basic design elements like color, typography, and composition, as well as principles such as balance, contrast, and hierarchy. Understanding design theory and principles will help you create visually appealing and effective designs that communicate your message. Therefore enrolling in a [graphic designing course in Kolkata](https://redapplelearning.in/graphic-designing-course-kolkata/) will help you in learning about all these principles and theories about designing.
### Software Skills
Graphic design software is an essential tool for any designer, and it's important to learn how to use it effectively. The graphic designing courses help you in to learn how to use Adobe Creative Suite programs such as Photoshop, Illustrator, InDesign, or other graphic design software tools. Regardless of the specific software, you should learn how to use it to create designs, manipulate images, and create layouts.
### Collaboration and Communication
Graphic design is often a collaborative process, whether you're working with clients or other designers. As such, it's important to learn how to communicate your ideas effectively and work well with others. This may include learning how to give and receive feedback, how to manage client expectations, and how to collaborate with others to achieve a common goal. As graphic designers should be able to communicate properly, one such reputed institute Red Apple Learning has come up with the idea of taking soft skill classes so that the students can have both hard and soft skills.
### Marketing and Branding
Graphic design is often used to promote products, services, and brands. As such, it's important to understand the basics of marketing and branding. This may include learning about target audiences, brand identity, and how to create designs that align with a company's brand image.
### Problem-Solving and Critical Thinking
Graphic design is not just about creating pretty pictures; it's about solving problems through design. Whether you're creating a logo, a website, or a brochure, you need to be able to think critically about the problem you're trying to solve and come up with creative solutions. This may involve brainstorming, sketching, and iterating on your designs until you find the best solution.
### Time Management and Organization
Some graphic design projects can be complex and time-consuming, so it's important to learn how to manage your time effectively and stay organized as well. Moreover, as a professional graphic designer, you have to submit your client's projects at a deadline. This may also include learning how to break down a project into smaller tasks, how to prioritize your work, and how to stay on schedule and meet deadlines.
### Creativity and Innovation
At its core, graphic design is a creative field. As such, it's important to foster your creativity and learn how to think outside the box. This may include learning how to brainstorm new ideas, experimenting with different design styles, and taking risks with your designs.
## To wrap up
Learning from graphic design courses will teach you a range of essential skills that will prepare you for a career in the industry. From design theory and software skills to collaboration and communication; problem-solving and critical thinking, time management and organization, creativity and innovation, and professionalism; these skills are critical to your success as a graphic designer. By mastering these skills, you'll be well on your way to creating effective and visually appealing designs that make a real impact.
| red_applelearningpvtl |
1,893,593 | Tired of waiting for AI model downloads? 😠 Introducing AI Torrent! 🚀 | Ever felt stuck in the torrent chicken-and-egg problem? You need seeders to download, but they need... | 0 | 2024-06-19T13:24:10 | https://dev.to/zerroug/tired-of-waiting-for-ai-model-downloads-introducing-ai-torrent-391i | ai, tutorial, machinelearning, beginners | Ever felt stuck in the torrent chicken-and-egg problem? You need seeders to download, but they need to download first! 😩
**AI Torrent** solves this! 🎉 Download AI models **instantly** with **blazing-fast speeds** thanks to our server-side seeding. 💪 No more waiting, just pure AI goodness delivered straight to your machine. 🧠
**Here's what we offer:**
* **Instant downloads:** No more waiting for peers.
* **Resumable downloads:** Pick up where you left off, even for massive files.
* **Sleek and fast interface:** Easily find the models you need.
* **Growing catalog:** New AI models added daily!
* **Categorized for easy browsing:** Quickly find models by type.
**Check it out:** [aitorrent.zerroug.de](aitorrent.zerroug.de)
**Want to help?**
We're looking for contributors! Email me at nadjib@zerroug.de to join the team. 🙌
**Donations are also welcome to keep the project alive:** [ko-fi.com/zerroug](ko-fi.com/zerroug) (every bit helps!) 🙏
Let's make AI accessible for everyone!
| zerroug |
1,893,592 | سایت معتبر موسیقی جهت پخش موزیک | نرم افزار ام اس بی موزیک با نرم افزار ام اس بی موزیک، دنیای بی انتهای موزیک در دستان شماست. این... | 0 | 2024-06-19T13:23:32 | https://dev.to/msbmusic/syt-mtbr-mwsyqy-jht-pkhsh-mwzykh-13cj | music, iran | **نرم افزار ام اس بی موزیک**
با نرم افزار ام اس بی موزیک، دنیای بی انتهای موزیک در دستان شماست. این اپلیکیشن پرقدرت به شما امکان میدهد تا به جدیدترین آهنگها و موزیک ویدیوها دسترسی پیدا کنید. با پلی لیستهای پخش متنوع، هر لحظه از موسیقی لذت ببرید.
**ویژگیها:**
1. **دریافت جدیدترین آهنگها و موزیک ویدیوها:**
از طریق این نرم افزار، همواره به روز باشید و آخرین اثرات هنرمندان مورد علاقهتان را کشف کنید.
2. **پلی لیستهای پخش متنوع:**
انتخاب از بین پلی لیستهای گوناگون و تنوع بخش موسیقی به شما امکان میدهد هر لحظه را با لذت تجربه کنید.
3. **دسترسی به بیش از 20 هزار آهنگ:**
با دسترسی به مجموعهای از بیش از 20 هزار آهنگ، همیشه موسیقی مورد نظرتان در دسترس است.
4. **دسترسی به متن آهنگ و جزئیات آثار:**
بیشتر از ارتباط با موسیقی، متن آهنگها و جزئیات آثار را هم در اختیار داشته باشید.
5. **نمایش برترین آهنگها:**
با مشاهده لیست برترین آهنگها، همواره بهترینها را در دسترس داشته باشید.
6. **انتخاب کیفیت پخش و دانلود آهنگ و ویدیو:**
با امکان انتخاب کیفیت پخش و دانلود، تجربهی موسیقی خود را بهینه کنید.
7. **سرعت بالای نرم افزار:**
تجربه استفاده بینقص با سرعت بالای نرم افزار را تجربه کنید.
8. **امکان اشتراکگذاری مستقیم فایل آهنگ:**
آسانی در اشتراکگذاری موسیقی مورد علاقه با دوستان خود را تجربه کنید.
9. **امکان نظر به آثار هنری و تعامل با سایر کاربران:**
10. **امکان ساخت پلی لیست دلخواه
نظرات خود را با دیگر کاربران به اشتراک بگذارید و از تعاملات جذاب در جامعهی موسیقی لذت ببرید.
با نصب MSB، به عشق موسیقی خود تازگی بخشیده و تجربهی یک محیط پویا و پر از امکانات را تجربه کنید.
[دانلود آهنگ](https://msbmusic.ir/)
[دانلود نرم افزار پخش آهنگ](https://refl.ir/msbmusic)
| msbmusic |
1,893,591 | 5 Ways How Generative AI (Gen AI) Helps Software Development Companies | The software development landscape is a fast-paced battlefield. Developers are constantly battling... | 0 | 2024-06-19T13:23:30 | https://dev.to/gloriajoycee/5-ways-how-generative-ai-gen-ai-helps-software-development-companies-4ici | programming, softwaredevelopment, ai, softwareengineering | The software development landscape is a fast-paced battlefield. Developers are constantly battling tight deadlines, evolving technologies, and the ever-growing demand for innovative solutions. But fear not, weary warriors! A powerful new ally has emerged: Generative Artificial Intelligence (Gen AI).
This revolutionary technology isn't here to replace developers – it's here to empower them. Gen AI can automate tedious tasks, unlock creative solutions, and streamline workflows, giving developers the edge they need to conquer any challenge.
**1. Boost Productivity: Automating the Mundane**
Imagine spending less time on repetitive tasks like writing boilerplate code or conducting basic unit tests. In 2024, Gen AI tools are becoming adept at understanding developer intent and generating code snippets, test cases, and even basic API documentation. This frees up developers for more strategic tasks like design, problem-solving, and building the core functionalities of the software. With Gen AI handling the mundane, developers can focus on the areas where human creativity and expertise truly shine.
**2. Spark Innovation: Unleashing the Power of "What If?"**
Gen AI in 2024 isn't just about automation; it's about sparking creative ideas. Advanced models can analyze existing codebases, user data, and market trends to suggest novel functionalities, alternative approaches, and even entirely new features. Imagine a Gen AI tool that can analyze a user feedback report and suggest potential improvements or brainstorm creative solutions to a complex technical challenge. This "what if?" scenario exploration allows developers to break out of traditional thinking patterns and unlock innovative solutions they might have missed otherwise.
**3. Enhance Efficiency: Streamlining Workflows and Minimizing Errors**
In 2024, Gen AI is becoming an essential tool for optimizing workflows and minimizing errors. By integrating with development platforms and project management systems, Gen AI can automate routine tasks like code reviews, dependency management, and version control checks. This not only reduces the time developers spend on administrative tasks but also helps to catch potential bugs and inconsistencies early in the development cycle. With Gen AI handling the monotonous and error-prone aspects of development, the entire process becomes smoother, more efficient, and less prone to costly delays.
**4. Improve Code Quality: Building Better, Together**
The fight against bugs and security vulnerabilities is an ongoing battle for developers. In 2024, Gen AI is increasingly used to assist with code quality assurance. These tools can analyze code for potential security risks, identify areas prone to errors, and suggest best practices and coding standards adherence. Gen AI can even perform automated penetration testing, mimicking potential attacks to identify vulnerabilities before they can be exploited. This collaborative approach, where developers leverage Gen AI's analytical capabilities, leads to more robust, secure, and well-written code.
**5. Democratize Development: Lowering the Barrier to Entry**
The complexity of software development can often be a barrier for new entrants to the field. In 2024, Gen AI tools are helping to democratize development by making it more accessible to a wider range of skillsets. "No-code" and "low-code" platforms powered by Gen AI allow individuals with less coding experience to build basic applications and prototypes. Additionally, Gen AI can assist with tasks like code commenting and generation, making it easier for junior developers to understand complex codebases and contribute effectively. This not only broadens the development talent pool but also fosters a more collaborative and inclusive environment.
## Top 3 Gen AI Tools for Software Development Companies
**1. AlphaCode**
AlphaCode, a revolutionary coding assistant, leverages generative AI to empower developers. It excels in writing code, bug resolution, and suggesting optimal programming solutions.
AlphaCode goes beyond basic syntax suggestions – it [software development companies](https://www.ishir.com/software-development-company-dallas.htm) by:
**- Writing clean, efficient code:** AlphaCode tackles the heavy lifting, generating code snippets that fit seamlessly into your projects.
**- Resolving bugs with laser focus:** Stuck on a pesky bug? AlphaCode pinpoints the culprit and suggests effective fixes, saving you valuable time.
**- Optimizing code for peak performance:** Get the most out of your code with AlphaCode's optimization recommendations.
**- Facilitating collaboration:** Share code suggestions with your team and leverage collective knowledge for the best results.
**2. Cohere Generate**
Imagine chatbots that feel like real conversations, personalized marketing emails that resonate deeply, and the ability to handle diverse natural language tasks with ease. Cohere Generate goes beyond basic text generation – it's a powerful tool for businesses:
**- Build interactive chatbots:** Empower your customer service or sales teams with intelligent chatbots that can answer questions, provide support, and improve user satisfaction.
**- Automate customer communication:** Streamline workflows by automating responses to frequently asked questions, freeing up human resources for more complex tasks.
**- Personalized marketing messages:** Increase engagement and conversion rates with personalized email outreach tailored to individual customers.
**GitHub Copilot**
GitHub Copilot is a revolutionary tool designed to be your AI pair programmer, empowering you to write better code, faster.
**- Increased Developer Productivity:** Copilot empowers developers to write more code in less time, allowing them to tackle more complex projects.
**- Improved Code Quality:** Copilot's context-aware suggestions can help developers write cleaner, more maintainable code, reducing the risk of bugs and errors.
**- Reduced Development Costs:** By streamlining workflows and minimizing errors, Copilot can help businesses save time and money on software development projects.
**- Innovation at Scale:** With Copilot by their side, developers can experiment with new ideas and features more freely, fostering a culture of innovation within the development team.
## **Conclusion**
Gen AI is not a magic bullet, but it's a powerful weapon in the developer's arsenal. By leveraging its capabilities for automation, creativity, efficiency, and accessibility, software development teams in 2024 can build better applications, faster, and with a competitive edge.
| gloriajoycee |
1,893,590 | [React]Vite Github pages | Create Vite react typescript(or javascript) Vite React TypeScript(또는 javascript) 만들기 $ npm create... | 0 | 2024-06-19T13:22:40 | https://dev.to/sidcodeme/reactvite-github-pages-10nm | react, vite, github, pages | Create Vite react typescript(or javascript)
Vite React TypeScript(또는 javascript) 만들기
```shell
$ npm create vite@latest
```
write your choice name and framework and variant.
원하시는 프로젝트명 이랑 플레임웍 선택 그리고 형식 을 작성하세요.
*(not important project_name, 프로젝트명 중요하지않음)
```console
> npx
> create-vite
✔ Project name: … [your_github_id].github.io
✔ Select a framework: › React
✔ Select a variant: › TypeScript
Scaffolding project in /Users/sidcode/Developer/flutter_workspace/HomePage/sidcodeme.github.io...
Done. Now run:
cd [your_github_id].github.io
npm install
npm run dev
```
if finished create and then you command line and do it change directory and run "npm install"
생성이 완료되면 명령줄에서 디렉터리를 변경하고 "npm install"을 실행합니다.
### git
```shell
$ git init
$ git add .
$ git commit -m "init"
$ git branch -M gh-pages
===============================================
$ git remote add origin https://github.com/[your_id]/[repo_name].git
******** choice not token or token github
******** 깃허브 토큰없거나 토큰 있는 경우 선택
$ git remote add origin https://[your_id]:[your_token]@github.io/{your_id}/{repo_name}.git
===============================================
$ git push -u origin gh-pages
```
0. you can ready to start now.
1. install gh-pages
gh-pages 설치
```shell
$ npm install gh-pages --save-dev
```
2. package.json {setup, 설정}
if you dont have domain or had domain
도메인이 없거나 또는 도메인이 있는 경우
```json
"homepage": "https://sidcodeme.github.io/",
or
"homepage": "https://sidcod.me/",
```
==========================================
- scripts command values is use vscode or not
- 스크립트 명령 값은 vscode 사용 여부입니다.
```json
"scripts": {
"depoly" : "npm run deploy",
"predeploy" : "npm run build",
"deploy" : "gh-pages -d dist",
or
"scripts": {
"predeploy" : "npm run build",
"deploy" : "gh-pages -d dist",
```
anyway finished my package.json
여하튼 완성된 제 package.json 입니다.
```json
{
"name": "sidcodeme.github.io",
"private": true,
"version": "1.0.0",
"type": "module",
"homepage": "https://sidcod.me/",
"scripts": {
"depoly" : "npm run deploy",
"predeploy" : "npm run build",
"deploy" : "gh-pages -d dist",
"dev": "vite",
"build": "vite build",
"lint": "eslint . --ext js,jsx --report-unused-disable-directives --max-warnings 0",
"preview": "vite preview"
},
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
"@types/react": "^18.2.66",
"@types/react-dom": "^18.2.22",
"@vitejs/plugin-react": "^4.2.1",
"eslint": "^8.57.0",
"eslint-plugin-react": "^7.34.1",
"eslint-plugin-react-hooks": "^4.6.0",
"eslint-plugin-react-refresh": "^0.4.6",
"gh-pages": "^6.1.1",
"vite": "^5.2.0"
}
}
```
Those using a domain can place the CNAME file in the public directory.
CNAME file contents
도메인을 사용하시는 분들은 CNAME파일을 public 디렉토리에 넣으시면됩니다.
CNAME 파일 내용
```text
sidcode.me
```
### last!! 마지막!!
커멘드 직접 입력하거나 vscode - npm에서 deploy 실행
```shell
$ npm run deploy
```
=--------------------=
```shell
* Executing task: yarn run depoly
yarn run v1.22.22
$ npm run deploy
> sidcode.github.io@1.0.0 predeploy
> npm run build
> sidcode.github.io@1.0.0 build
> vite build
vite v5.3.1 building for production...
✓ 34 modules transformed.
dist/index.html 0.46 kB │ gzip: 0.30 kB
dist/assets/react-CHdo91hT.svg 4.13 kB │ gzip: 2.14 kB
dist/assets/index-DiwrgTda.css 1.39 kB │ gzip: 0.72 kB
dist/assets/index-DVoHNO1Y.js 143.36 kB │ gzip: 46.07 kB
✓ built in 394ms
> sidcode.github.io@1.0.0 deploy
> gh-pages -d dist
Published
✨ Done in 5.28s.
* Terminal will be reused by tasks, press any key to close it.
```
check homepage!!

Thank you for reading and sorry, im not well English. | sidcodeme |
1,893,589 | PaperTalk.xyz: HackerNews for research papers | I made PaperTalk.xyz out of my own selfish desire to have a place where people can find and discuss... | 0 | 2024-06-19T13:22:25 | https://dev.to/stefan_lenoach_362296822/papertalkxyz-hackernews-for-research-papers-3578 | buildinpublic, discuss | I made [PaperTalk.xyz](https://www.papertalk.xyz/research/home) out of my own selfish desire to have a place where people can find and discuss state of the art research across all domains. Would love to hear any critical feedback anyone here has.
Thanks! | stefan_lenoach_362296822 |
1,893,804 | Chart of the Week: Creating the .NET MAUI Scatter Chart to Visualize Different Sports Ball Sizes and Weights | TL;DR: Let’s use the Syncfusion .NET MAUI Scatter Chart to visualize the sizes and weights of various... | 0 | 2024-06-19T16:23:47 | https://www.syncfusion.com/blogs/post/dotnet-maui-scatter-chart-sports-ball | dotnetmaui, chart, maui, mobile | ---
title: Chart of the Week: Creating the .NET MAUI Scatter Chart to Visualize Different Sports Ball Sizes and Weights
published: true
date: 2024-06-19 13:18:24 UTC
tags: dotnetmaui, chart, maui, mobile
canonical_url: https://www.syncfusion.com/blogs/post/dotnet-maui-scatter-chart-sports-ball
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sb228cjwvu559jbybfi9.png
---
**TL;DR:** Let’s use the Syncfusion .NET MAUI Scatter Chart to visualize the sizes and weights of various sports balls and highlight their differences. This blog guides you through gathering data, preparing it, configuring the chart, and customizing its appearance and interactivity. Explore the GitHub demo for more details.
Welcome to our **Chart of the Week** blog series!
Today, we’ll visualize the sizes and weights of various sports balls using the [Syncfusion .NET MAUI Scatter Chart.](https://www.syncfusion.com/maui-controls/maui-cartesian-charts/chart-types/maui-scatter-chart ".NET MAUI Scatter Chart")
Sports enthusiasts know that most sports use balls, which vary widely in size and weight. Understanding these variations can provide fascinating insights into the design and dynamics of each sport. The lightest ball is a Table Tennis ball, and the largest ball size is Basketball. These differences reflect each sport’s unique demands and rules, providing a deeper appreciation for their diversity.
The [Syncfusion .NET MAUI Scatter Chart](https://www.syncfusion.com/maui-controls/maui-cartesian-charts/chart-types/maui-scatter-chart ".NET MAUI Scatter Chart") is ideal for showcasing these differences. A scatter chart plots data points on a two-dimensional graph, making it easy to observe relationships between variables. The chart’s feature set includes data binding, tooltips, titles, and customizable axes, ensuring a clear and engaging visualization.
The following image shows the scatter chart we are going to create.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Visualizing-different-sports-ball-sizes-and-weights-using-the-.NET-MAUI-Scatter-Chart.png" alt="Visualizing different sports ball sizes and weights using the .NET MAUI Scatter Chart" style="width:100%">
</figure>
Let’s get started!
## Step 1: Gathering the data
First, we need to gather information from the [topend sports.](https://www.topendsports.com/resources/equipment/balls.htm "Article: topend sports.") We require data on ball [Sizes](https://www.topendsports.com/resources/equipment/ball-size.htm "Article: Sports Ball Size (Diameter) Comparison") and [weights](https://www.topendsports.com/resources/equipment/ball-weight.htm "Article: Sports Ball Weight Comparison").
## Step 2: Prepare the data for the chart
Create a **SportsBallModel** class to define the structure of our sports ball data and a **SportsBallViewModel** class to handle the data manipulation and communication between the model and the scatter chart.
First, define the **SportsBallModel** class with the following properties:
- **Name:** Denotes the name of the sports ball, indicating the specific type or variant it represents.
- **Size:** Describes the dimensions of the sports ball, specifying its diameter.
- **Weight:** Quantifies the mass of the sports ball, measured in units such as grams.
Refer to the following code example.
```csharp
public class SportsBallModel
{
public string Name { get; set; }
public double Size { get; set; }
public double Weight { get; set; }
public SportsBallModel(string name, double size, double weight)
{
Name = name;
Size = size;
Weight = weight;
}
}
```
Next, create the **SportsBallViewModel** class. It acts as an intermediary between the data models and user interface elements ([scatter chart](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ScatterSeries.html "ScatterSeries class of .NET MAUI Charts")), preparing and formatting data for display and interaction.
Refer to the following code example.
```csharp
public class SportsBallViewModel
{
public List<SportsBallModel> SportsBallData { get; set; }
public SportsBallViewModel()
{
SportsBallData = new List<SportsBallModel>();
SportsBallData.Add(new SportsBallModel { Weight = 23, Size = 3.96, Name = "Squash" });
SportsBallData.Add(new SportsBallModel { Weight = 2.7, Size = 3.99, Name = "Table Tennis" });
SportsBallData.Add(new SportsBallModel { Weight = 45.93, Size = 4.27, Name = "Golf" });
SportsBallData.Add(new SportsBallModel { Weight = 125, Size = 5.59, Name = "Jai Alai" });
SportsBallData.Add(new SportsBallModel { Weight = 40, Size = 5.72, Name = "Racquetball" });
SportsBallData.Add(new SportsBallModel { Weight = 156, Size = 5.72, Name = "Pool" });
SportsBallData.Add(new SportsBallModel { Weight = 142, Size = 6.35, Name = "Lacrosse" });
SportsBallData.Add(new SportsBallModel { Weight = 56.0, Size = 6.54, Name = "Tennis" });
SportsBallData.Add(new SportsBallModel { Weight = 680, Size = 7.94, Name = "Pétanque" });
SportsBallData.Add(new SportsBallModel { Weight = 155.9, Size = 7.11, Name = "Cricket" });
SportsBallData.Add(new SportsBallModel { Weight = 156, Size = 7.11, Name = "Field Hockey" });
SportsBallData.Add(new SportsBallModel { Weight = 142, Size = 7.30, Name = "Baseball" });
SportsBallData.Add(new SportsBallModel { Weight = 22.1, Size = 7.29, Name = "Pickleball" });
SportsBallData.Add(new SportsBallModel { Weight = 99, Size = 7.62, Name = "Polo" });
SportsBallData.Add(new SportsBallModel { Weight = 166.6, Size = 8.89, Name = "Softball (slowpitch)" });
SportsBallData.Add(new SportsBallModel { Weight = 453.6, Size = 9.21, Name = "Croquet" });
SportsBallData.Add(new SportsBallModel { Weight = 178, Size = 9.70, Name = "Softball (fastpitch)" });
SportsBallData.Add(new SportsBallModel { Weight = 920, Size = 10.67, Name = "Bocce" });
SportsBallData.Add(new SportsBallModel { Weight = 400, Size = 18.03, Name = "Rhythmic gymnastics ball" });
SportsBallData.Add(new SportsBallModel { Weight = 425, Size = 18.54, Name = "Team Handball" });
SportsBallData.Add(new SportsBallModel { Weight = 260, Size = 20.70, Name = "Volleyball" });
SportsBallData.Add(new SportsBallModel { Weight = 420, Size = 21.59, Name = "Football (Soccer)" });
SportsBallData.Add(new SportsBallModel { Weight = 445, Size = 21.59, Name = "Korfball" });
SportsBallData.Add(new SportsBallModel { Weight = 400, Size = 21.59, Name = "Water polo" });
SportsBallData.Add(new SportsBallModel { Weight = 397, Size = 22.61, Name = "Netball" });
SportsBallData.Add(new SportsBallModel { Weight = 623.7, Size = 23.88, Name = "Basketball" });
}
}
```
## Step 3: Configure the Syncfusion .NET MAUI Cartesian Charts
Let’s configure the .NET MAUI Cartesian Charts control using this [documentation.](https://help.syncfusion.com/maui/cartesian-charts/getting-started "Getting Started with .NET MAUI Chart")
Refer to the following code example.
```xml
<chart:SfCartesianChart>
<chart:SfCartesianChart.XAxes>
<chart:CategoryAxis/>
</chart:SfCartesianChart.XAxes>
<chart:SfCartesianChart.YAxes>
<chart:NumericalAxis/>
</chart:SfCartesianChart.YAxes>
</chart:SfCartesianChart>
```
## Step 4: Bind data to the .NET MAUI Scatter series
To display the sports ball data effectively, we’ll utilize the Syncfusion **ScatterSeries** instance and bind our **SportsBallData** collection to it. Each ball will be represented as a point on the chart, with its corresponding size and weight.
Refer to the following code example.
```xml
<chart:ScatterSeries XBindingPath="Weight"
YBindingPath="Size"
ItemsSource="{Binding SportsBallData}">
</chart:ScatterSeries>
```
We’ve bound the **SportsBallData** collection from the **SportsBallViewModel** to the **ItemSource** property. The **XBindingPath** and **YBindingPath** properties are bound to the **Weight** and **Size** properties.
## Step 5: Customize the .NET MAUI Scatter Chart appearance
Let’s customize the appearance of the Syncfusion .NET MAUI Scatter Chart to enhance its readability.
### Adding the chart title
Adding a [title](https://help.syncfusion.com/maui/cartesian-charts/appearance "Getting Started with the Appearance in .NET MAUI Cartesian Chart") to the chart enhances its readability and aids users in understanding its content more effectively. By providing a clear and descriptive title, users can quickly grasp the purpose and context of the chart, making it more accessible and informative.
Refer to the following code example.
```xml
<chart:SfCartesianChart.Title>
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height="{OnPlatform Android=68,Default=80,iOS=68}"/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width="55"/>
<ColumnDefinition Width="Auto"/>
</Grid.ColumnDefinitions>
<Image Grid.Column="0"
Grid.RowSpan="2"
Source="balls_sports.png"
Margin="0,-25,0,0"
HeightRequest="70"
WidthRequest="50"/>
<StackLayout Grid.Column="1"
Grid.Row="0"
Margin="7,7,0,0">
<Label Text="Exploring Ball Sizes and Weights Across Different Sports"
FontSize="{OnPlatform Android=12,Default=16,iOS=12}"
FontAttributes="Bold"
TextColor="Black"/>
<Label Text="A Comparative Analysis Across Different Sports Ball Sizes and Weights"
FontSize="{OnPlatform Android=10,Default=12,iOS=10}"
TextColor="Black"
Margin="0,2,0,0"/>
</StackLayout>
</Grid>
</chart:SfCartesianChart.Title>
```
### Customize the axes
Let’s customize the X- and Y-axes with the [Minimum](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.LogarithmicAxis.html#Syncfusion_Maui_Charts_LogarithmicAxis_Minimum "Minimum property of .NET MAUI Charts"), [Maximum](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.LogarithmicAxis.html#Syncfusion_Maui_Charts_LogarithmicAxis_Maximum "Maximum property of .NET MAUI Charts"), [EdgeLabelsDrawingMode](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartAxis.html#Syncfusion_Maui_Charts_ChartAxis_EdgeLabelsDrawingMode "EdgeLabelsDrawingMode property of .NET MAUI Charts"), and [ShowMajorGridLines](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartAxis.html#Syncfusion_Maui_Charts_ChartAxis_ShowMajorGridLines "ShowMajorGridLines property of .NET MAUI Charts") properties and the axis Title, LineStyle, and TickStyle properties.
Refer to the following code example.
```xml
<chart:SfCartesianChart.XAxes>
<chart:NumericalAxis ShowMajorGridLines=”True”
Maximum=”700”
Interval=”100”
EdgeLabelsDrawingMode=”Shift”
RangePadding=”Additional”>
<chart:NumericalAxis.MajorTickStyle>
<chart:ChartAxisTickStyle Stroke=”Black”/>
</chart:NumericalAxis.MajorTickStyle>
<chart:NumericalAxis.AxisLineStyle>
<chart:ChartLineStyle Stroke=”Black”/>
</chart:NumericalAxis.AxisLineStyle>
<chart:NumericalAxis.Title>
<chart:ChartAxisTitle Text=”Weight in Gram” TextColor=”Black”/>
</chart:NumericalAxis.Title>
</chart:NumericalAxis>
</chart:SfCartesianChart.Xaxes>
<chart:SfCartesianChart.Yaxes>
<chart:NumericalAxis EdgeLabelsDrawingMode=”Center”
Maximum=”31”
Minimum=”0”
Interval=”5”
ShowMajorGridLines=”False”>
<chart:NumericalAxis.MajorTickStyle>
<chart:ChartAxisTickStyle Stroke=”Black”/>
</chart:NumericalAxis.MajorTickStyle>
<chart:NumericalAxis.AxisLineStyle>
<chart:ChartLineStyle Stroke=”Black”/>
</chart:NumericalAxis.AxisLineStyle>
<chart:NumericalAxis.Title>
<chart:ChartAxisTitle Text=”Size in Centimeter” Stroke=”Black”/>
</chart:NumericalAxis.Title>
</chart:NumericalAxis>
</chart:SfCartesianChart.Yaxes>
```
### Customizing the Scatter series
Here, we will enhance the series appearance using the [PaletteBrushes](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartSeries.html#Syncfusion_Maui_Charts_ChartSeries_PaletteBrushes "PaletteBrushes property of .NET MAUI Charts"), [PointHeight](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ScatterSeries.html#Syncfusion_Maui_Charts_ScatterSeries_PointHeight "PointHeight property of .NET MAUI Charts"), and [PointWidth](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ScatterSeries.html#Syncfusion_Maui_Charts_ScatterSeries_PointWidth "PointWidth property of .NET MAUI Charts") properties.
Refer to the following code example.
```xml
<chart:ScatterSeries PaletteBrushes="{Binding PaletteBrushes}"
PointHeight="20"
PointWidth="20">
</chart:ScatterSeries>
```
### Adding interactivity to the chart
Now, enable tooltips in the scatter series to enhance data visualization and provide additional insights. Customize the tooltip content to display relevant information, such as the name, size, and weight of a sports ball, that improves user experience by offering immediate, context-specific information and deeper data insights using the [TooltipTemplate](https://help.syncfusion.com/cr/maui/Syncfusion.Maui.Charts.ChartSeries.html#Syncfusion_Maui_Charts_ChartSeries_TooltipTemplate "TooltipTemplate property of .NET MAUI Charts") property.
Refer to the following code example.
```xml
<chart:ScatterSeries.TooltipTemplate>
<DataTemplate>
<Grid>
<Grid.RowDefinitions>
<RowDefinition Height=”Auto”/>
<RowDefinition Height=”Auto”/>
<RowDefinition Height=”Auto”/>
</Grid.RowDefinitions>
<Grid.ColumnDefinitions>
<ColumnDefinition Width=”Auto”/>
</Grid.ColumnDefinitions>
<Label Grid.Row=”0”
Grid.Column=”0” LineBreakMode=”WordWrap”
MaximumWidthRequest=”100” Text=”{Binding
Item.Name,StringFormat=’{0}’}”
HorizontalTextAlignment= »Center » HorizontalOptions= »Center »
VerticalTextAlignment= »Center » FontAttributes= »Bold »
Margin=”0,3,0,3” FontSize=”13.5” TextColor=”White”/>
<BoxView Grid.Row=”1” Grid.Column=”0” VerticalOptions=”Center” Color=”White” HeightRequest=”1” />
<StackLayout Grid.Row=”2” Grid.Column=”0” Orientation=”Vertical” VerticalOptions=”Fill” Spacing=”0” Padding=”3” Margin=”0”>
<Label Text=”{Binding Item.Weight,StringFormat=’Weight : {0} g’}”
VerticalTextAlignment=”Center”
HorizontalOptions=”Start”
Margin=”0,0,3,0” FontSize=”13.5” TextColor=”White”/>
<Label Text=”{Binding Item.Size,StringFormat=’Size : {0} cm’}”
VerticalTextAlignment=”Center”
HorizontalOptions=”Start” Margin=”0,0,3,0”
FontSize=”12” TextColor=”White”/>
</StackLayout>
</Grid>
</DataTemplate>
</chart:ScatterSeries.TooltipTemplate>
```
After executing these code examples, we will get the output that resembles the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Visualizing-different-sports-ball-sizes-and-weights-using-the-.NET-MAUI-Scatter-Chart.gif" alt="Visualizing different sports ball sizes and weights using the .NET MAUI Scatter Chart" style="width:100%">
<figcaption>Visualizing different sports ball sizes and weights using the .NET MAUI Scatter Chart</figcaption>
</figure>
## GitHub reference
Use the .NET MAUI Scatter Chart [GitHub demo](https://github.com/SyncfusionExamples/Creating-a-Bubble-Chart-to-Explore-Ball-Sizes-and-Weights-Across-Different-Sports/tree/Task-889684_scatter_blog_sample_for_.Net_Maui "Creating a Bubble Chart to Explore Ball Sizes and Weights Across Different Sports GitHub demo") to visualize different sports ball sizes and weights for more details.
## Conclusion
Thanks for reading! In this blog, we’ve seen how to use the Syncfusion [MAUI Scatter chart](https://help.syncfusion.com/maui/cartesian-charts/scatter "Getting Started with the Scatter Chart in .NET MAUI Chart") to visualize the different sports ball sizes and weights. We strongly encourage you to follow the steps outlined in this blog and share your thoughts in the comments below.
The existing customers can download the new version of Essential Studio on the [License and Downloads](https://www.syncfusion.com/account "Essential Studio License and Downloads page") page. If you are not a Syncfusion customer, try our 30-day [free trial](https://www.syncfusion.com/downloads "Get free evaluation for the Essential Studio products") to check out our incredible features.
You can also contact us through our [support forums](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback "Syncfusion Feedback Portal"). We are always happy to assist you!
## Related blogs
- [Introducing the New .NET MAUI Digital Gauge Control](https://www.syncfusion.com/blogs/post/dotnetmaui-digital-gauge-control "Blog: Introducing the New .NET MAUI Digital Gauge Control")
- [Introducing the 12th Set of New .NET MAUI Controls and Features](https://www.syncfusion.com/blogs/post/syncfusion-dotnet-maui-2024-volume-2 "Blog: Introducing the 12th Set of New .NET MAUI Controls and Features")
- [Microsoft Build 2024: The Syncfusion Experience](https://www.syncfusion.com/blogs/post/microsoft-build-2024-syncfusion-recap "Blog: Microsoft Build 2024: The Syncfusion Experience")
- [What’s New in .NET MAUI Charts: 2024 Volume 2](https://www.syncfusion.com/blogs/post/dotnet-maui-charts-2024-volume-2 "Blog: What’s New in .NET MAUI Charts: 2024 Volume 2") | gayathrigithub7 |
1,893,584 | 10 Essential Books to Accelerate your Cloud Career | TL;DR 🤓 DevOps and cloud engineers need not have a floor-to-ceiling bookshelf full of... | 0 | 2024-06-19T13:17:44 | https://dev.to/glasskube/10-essential-books-to-accelerate-your-cloud-career-6jf | beginners, productivity, devops, career | ## TL;DR 🤓
**DevOps** and **cloud engineers** need not have a floor-to-ceiling bookshelf full of books to enhance their skills. **A few key books are all you need (at least at the beginning)**. Below I highlight key titles for both junior and senior DevOps and cloud engineers, focusing on improving productivity, understanding cloud native technologies and cloud infrastructure. Emphasizing **practical application** and continuous learning is essential for advancing a career in the rapidly evolving cloud industry.
---
With the vast amount of valuable and entertaining sources of Cloud Engineering information, from [podcasts](https://open.spotify.com/show/6VRDZ6E89JfNY9BCANx70m?si=c08f83248aee4df7) and [YouTube](https://www.youtube.com/@KodeKloud) channels to even [movies](https://www.youtube.com/watch?v=BE77h7dmoQU), it can sometimes make us question the authenticity of the well-stocked bookshelves we see behind team members on our daily Zoom calls. **Do they actually read those books, or are they just for show?**
As someone who has always loved books for their content and entertainment value, I've found myself increasingly drawn to them in recent years for the change of mental pace they provide compared to other modern content mediums.

Unlike other technical book recommendation articles, I want to distinguish between four distinct categories of books that I find helpful to keep in mind. Not every book on this list is meant to be read from cover to cover, nor is every book a dense technical textbook that will gather dust on your shelf after a quick leaf-through.
The handful of book recommendations that have made an impact on my own cloud journey **fall into the following categories**:
**Core concepts:** 🏗️
- Cloud Computing: Concepts, Technology & Architecture
- The DevOps Handbook
- Cloud Native Infrastructure
**Reference books:** 🔎
- Site Reliability Engineering
- AWS Fundamentals
**Page turners:** 📑
- The Phoenix Project
- Accelerate
**Not cloud specific but still must reads:**📍
- Wiring the winning organisation
- Slow productivity
- Designing Data-Intensive Applications
> 🚨 Disclaimer: Gene Kim is going to feature prominently on this list
---
## Before we begin
For us at [Glasskube](https://github.com/glasskube/glasskube) crafting great content is as important as building great software. If this is the first time you've heard of us, we are working to build the next generation `Package Manager for Kubernetes`.
If you like our content and want to support us on this mission, we'd appreciate it if you could give us a star ⭐️ on GitHub.

{% cta https://github.com/glasskube/glasskube %} ⭐️ Star us on GitHub 🙏 {% endcta %}
## Core concepts 🏗️
These are the types of books that provide beginners with **essential core concepts and frameworks**, forming a solid foundation that helps future lessons fall right into place.
### 1. Cloud Computing: Concepts, Technology & Architecture

**Synopsis:**
Originally published in 2013, this book is respected industry wide and is a comprehensive guide to cloud computing by exploring its history, models, mechanisms, architectures, and security considerations. Combining case studies with technical analysis, the book examines topics ranging from cloud-enabling technologies to cloud service level agreements.
**What you will learn:**
- You will get a comprehensive and **vendor-neutral understanding of cloud technologies**, making it easier to assess various cloud solutions and providers objectively.
- Fundamental **cloud computing concepts, models, and mechanisms**.
- A deeper understanding of **cloud characteristics, security threats, and risk management frameworks**, crucial for designing secure and reliable cloud solutions.
**Quote from the book:**
> _"There is no greater danger to a business than approaching cloud computing adoption with ignorance. The magnitude of a failed adoption effort not only correspondingly impacts IT departments, but can actually regress a business to a point where it finds itself steps behind from where it was prior to the adoption—and, perhaps, even more steps behind competitors that have been successful at achieving their goals in the meantime."_
**A review that caught my eye:**

Get it [here](https://www.oreilly.com/library/view/cloud-computing-concepts/9780133387568/).
### 2. The DevOps Handbook

**Synopsis:**
By presenting the “Three ways” as the principles for mitigating these challenges and achieving world-class performance, the book illustrates how organizations can adopt DevOps principles and practices to accelerate delivery, improve reliability, and create a more satisfying and productive work environment.
**What you will learn:**
- The **Three Ways** (Flow, Feedback & Continual Learning and Experimentation).
- Focus on **Deployment Lead Time.**
- [Conway’s law.](https://en.wikipedia.org/wiki/Conway%27s_law)
**Quote from the book:**
> _"DevOps isn't about automation, just as astronomy isn't about telescopes."_
**A review that caught my eye:**

Get it [here](https://itrevolution.com/product/the-devops-handbook-second-edition/).
### 3. Cloud Native Infrastructure

**Synopsis:**
The authors explain key concepts like representing infrastructure through code, APIs, managing application life cycles in a cloud-native way, and ensuring security and compliance, but overall the book stresses that cloud native infrastructure is about adapting principles and processes more than specific technologies, impacting application management as much as hardware.
It guides readers on when and why to adopt these practices, outlining how they differ from traditional approaches.
**What you will learn:**
- Cloud native infrastructure **prioritizes building infrastructure with software** over manual configuration.
- **Resiliency** is paramount in cloud native infrastructure.
- Cloud native infrastructure **necessitates a shift in mindset** and company culture.
- **Embracing chaos** and designing for failure are essential.
**Quote from the book:**
> _"The only systems that should never fail are those that keep you alive (e.g., heart implants, and brakes)"_.
**A review that caught my eye:**

Get it [here](https://www.oreilly.com/library/view/cloud-native-infrastructure/9781491984291/).
## Reference books 🔎
Keep these books close by, preferably within arm's reach. In truth, the "Core Concepts" books could also fall into this category and vice versa. As you build your personal home library, you'll definitely want hard copies of these books. This way, you can reference them easily, make annotations, earmark important sections, and fully extract the valuable insights they offer.
### 4. Site Reliability Engineering (SRE)

**Synopsis:**
Site Reliability Engineering (SRE), a discipline pioneered by Google, offers a distinct approach to managing large-scale software systems by emphasizing the application of software engineering principles to operations. The book explores this approach in detail, providing insights into Google's production environment, SRE principles such as embracing risk and eliminating toil, and practical practices including monitoring, incident response, and capacity planning.
It emphasizes the importance of automation, simplicity in software design, and a blameless postmortem culture for continuous improvement.
**What you will learn:**
- The **core** principles and practices of **Site Reliability Engineering**.
- Gain insights into Google's production environment and the challenges of running **large-scale systems**.
- How to apply **SRE principles** to their own organizations, regardless of size or technical expertise.
**Quote from the book:**
> _"If a human operator needs to touch your system during normal operations, you have a bug. The definition of normal changes as your systems grow."_
**A review that caught my eye:**

Get it [here](https://sre.google/books/).
### 5. AWS Fundamentals

**Synopsis:**
"AWS Fundamentals" guides readers through the essentials of Amazon Web Services (AWS), emphasizing practical application over certification preparation. It provides a comprehensive overview of core AWS services like EC2, S3, RDS, DynamoDB, Lambda, and more, categorized by function (compute, database and storage, messaging, etc.
The authors offer practical examples, use cases, configuration recommendations, and tips for each service. Additionally, the book introduces Infrastructure as Code (IaC), explaining its importance and demonstrating how to use frameworks like CloudFormation, Serverless, and CDK to provision infrastructure.
**What you will learn:**
- Understanding the **core building blocks of AWS** (a lot of core cloud concepts are valid for over cloud providers).
- Learning how to apply AWS knowledge in **real-world scenarios.**
- Gaining an understanding of **Infrastructure as Code** (IaC).
**Quote from the book:**
> _“Learning AWS doesn’t need to be hard. It is important to focus on the basics and to understand them well. Once this is done all new services or features can be understood really well.”_
**A review that caught my eye:**

Get it [here](https://awsfundamentals.com/).
## Page turners 📑
Apart from teaching valuable professional lessons these books have felt as entertaining as other non technical works of fiction or non-fiction. All three left me with a strong “I need you to read this“ feeling.
### 6. The Phoenix project

**Synopsis:**
A novel (yes an actual novel) that follows Bill Palmer, a VP of IT Operations at Parts Unlimited, who is tasked with salvaging the company's failing IT project, code-named "Phoenix." As Bill navigates the challenges of a stressful work environment full of miscommunication, finger-pointing, and a looming audit, he crosses paths with Erik, a board member who introduces him to the principles of DevOps, without calling it that.
Throughout the story, Bill and his team work to implement these principles, striving to streamline their workflow and improve collaboration between Development and IT Operations. The book is special since the conceptual thrust of the book of spreading DevOps practices is delivered in the refreshing form of a fully fledged narrative novel.
**What you will learn:**
- The **three ways** is also touched upon in this book (similar to the DevOps Handbook).
- The importance of **identifying and managing constraints** to optimize the flow of work.
- The importance of **collaboration** and **communication** between Development, IT Operations, and the business as a whole.
**Quote from the book:**
> _“Being able to take needless work out of the system is more important than being able to put more work into the system.”_
**A review that caught my eye:**

Get it [here](https://itrevolution.com/product/the-phoenix-project/).
### 7. Accelerate

**Synopsis:**
The book presents four years of research exploring the practices that contribute to high-performing technology organizations. The authors sought to identify the capabilities that drive software delivery performance and, in turn, impact organizational performance. Their findings highlight twenty-four key capabilities, categorized as continuous delivery, architecture, product and process, lean management and monitoring, and cultural, that demonstrably improve software delivery and overall business outcomes.
The book emphasizes that these capabilities are measurable and improvable, offering guidance for organizations to assess their current state and embark on a journey of continuous improvement.
**What you will learn:**
- **Software delivery performance** is a key predictor of organizational performance
- A set of capabilities, including **technical practices**, **Lean management**, and a **generative culture**, drive improvements in software delivery performance.
- **Transformational leadership** is crucial in enabling and amplifying the adoption of these capabilities.
**Quote from the book:**
> _"The most important characteristic of high-performing teams is that they are never satisfied: they always strive to get better."_
**A review that caught my eye:**

Get it [here](https://itrevolution.com/product/accelerate/).
## Not cloud specific but still must reads📍
Not specifically cloud related at all but these are book that for different reasons I feel can round out a cloud engineer.
### 8. Wiring the Winning Organization

> _I'm a huge fan of reading "leadership" or "management" focused books, even though I'm not in either of those positions. Learning what leadership should care about and the key ingredients needed to achieve excellent results serves as a cheat sheet for any individual contributor wanting to stand out by being impactful. Change doesn’t always have to come from the top, you can push it from wherever you find yourself._
**Synopsis:**
The book presents a new theory of performance management, emphasizing how leaders can create the conditions for their organizations to achieve exceptional results. The book introduces three key mechanisms for building a "winning organization": slowification (making problem-solving easier), simplification (making problems easier to solve), and amplification (making problems more visible).
Through a combination of theoretical explanations, practical case studies, and real-world examples, the authors demonstrate how these mechanisms can be applied across diverse industries and organizational contexts to achieve superior performance.
**What you will learn:**
- Greatness in any endeavor is achievable through a focus on refining the **"social circuitry"** of an organization.
- Three key mechanisms **slowification**, **simplification**, and **amplification** can be employed to move an organization from a **"danger zone"** to a "**winning** zone".
- Leaders who embrace these mechanisms can create organizations that achieve extraordinary results.
**Quote from the book:**
> _“Slowification enables the shift from reactive, "fast thinking" based on ingrained habits to more effective "slow thinking" that allows for deliberation, reflection, and creativity in problem-solving.“_
**A review that caught my eye:**

Get it [here](https://itrevolution.com/product/wiring-the-winning-organization/).
### 9. Slow productivity

**Synopsis:**
Slow Productivity by Cal Newport challenges the modern obsession with visible busyness as a measure of productivity, what he terms pseudo-productivity. Instead, the book champions a slow productivity philosophy based on three core principles: doing fewer things, working at a natural pace, and obsessing over quality.
This philosophy argues that by intentionally limiting workloads, embracing a sustainable work pace, and prioritizing quality over quantity, knowledge workers can achieve greater meaning and produce superior results.
The book explores the theoretical underpinnings of these principles and offers practical strategies for implementing them in various professional settings.
**What you will learn:**
- To do **less**
- Work at the more **natural pace**
- Obsess over **quality**
**Quote from the book:**
> _"for all of our complaining about the term, knowledge workers have no agreed-upon definition of what “productivity” even means."_
**A review that caught my eye:**

Get it [here](https://calnewport.com/slow/).
### 10. Designing Data-Intensive Applications

> _As cloud engineers, we collaborate closely with the developers in our organization who write the applications we deploy and maintain. While this book doesn’t provide directly actionable knowledge for our day-to-day tasks, it has given me a deeper understanding of the concepts underpinning the creation of our organization’s apps. More importantly, it has provided me with a common language to effectively communicate with the developers on my team._
**Synopsis:**
Designing Data-Intensive Applications, by Martin Kleppmann, explores the fundamental principles and practical considerations for building reliable, scalable, and maintainable data systems. The book examines various data models, including relational, document, and graph-based models, and their respective query languages, analyzing their suitability for different applications.
It also examines storage engines, data encoding formats, and schema evolution. A key focus is the exploration of distributed data systems, including replication, partitioning (sharding), and the challenges of maintaining consistency and consensus in such environments. The book uses real-world examples of successful data systems to illustrate key concepts and trade-offs.
**What you will learn:**
- Various **data models**, including relational, document, and graph-based models, and their respective query languages, analyzing their suitability for different applications.
- **Storage engines** and how databases arrange data on disk to find it again efficiently
- **Reliability**, **scalability**, and **maintainability**.
**Quote from the book:**
> _"The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair."_
**A review that caught my eye:**

Get it [here](https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/).
---
If you like our content and want to support us on this mission, we'd appreciate it if you could give us a star ⭐️ on GitHub.

{% cta https://github.com/glasskube/glasskube %} ⭐️ Star us on GitHub 🙏 {% endcta %} | jakepage91 |
1,893,577 | Polypane 20: Browser features and performance | Polypane 20 improves the features and performance of the Elements and Outline panel, as well as... | 0 | 2024-06-19T13:16:32 | https://polypane.app/blog/polypane-20-browser-features-and-performance/ | webdev, productivity, css, news | Polypane 20 improves the features and performance of the Elements and Outline panel, as well as improving general browser features and stability. It's also running the latest version of Chromium, 126.
> **What's [Polypane](https://polypane.app)?** Polypane is the web browser for ambitious web developers. It's a stand-alone browser that shows
> sites in multiple fully synced _panes_ and helps you make your site more responsive, more accessible and faster.
With [Polypane 19](https://polypane.app/blog/polypane-19-workflow-improvements/) being only a few weeks old we weren't planning on releasing a major version so soon, but we wanted to get the new Chromium version out to you as soon as possible.
Because of that this release is a little light on new features, but it does include a lot of smaller improvements and fixes that make a real difference in day-to-day usage.
## Elements panel
The [Elements panel](https://polypane.app/docs/elements-panel) is where you can inspect and edit the DOM and CSS across all opened panes at once.
As more CSS features become available we work hard to implement them (often before other devtools do, like `@layer` support in Polypane 8 or native CSS nesting in Polypane 13) and to make sure the panel is still as fast as possible.
### Performance
In this release we made a significant improvement in how we index the CSS styles for each pane. In some situations that is now an order of magnitude faster.
This means that the initial load of the Elements panel is faster, and that the panel is much more responsive when switching between DOM nodes even in more complex pages.
### `@starting-style` support
The `@starting-style` at-rule is really cool: it lets you define the initial style for an element when adding it to the DOM. This means that you can now animate elements as you add them to the DOM without needing JS. In Polypane you can now access that `@starting-style` like any other CSS rule.
You won't find this in the Chromium devtools yet so if you want to experiment with it, Polypane is the place to do it.
### A11y panel updates
In the accessibility panel we now have two new checks for the accessible name.
#### Warning for missing accessible names
Firstly, Polypane now knows which elements should have an accessible name (like headings, links, buttons etc) and will warn you if the selected element is missing one. Just a quick little test that will catch small issues. Note that this can also easily be checked in the Outline panel.

#### Warning for repetition in the accessible name
An often found accessibility issue is the following scenario:
```html
<a href="/">
<img src="home-icon.svg" alt="Home" />
Home
</a>
```
Superficially this looks pretty good:
- it's a real link;
- it has readable text;
- the image has an alt text.
But when this is sent to a screen reader, it will read out "Home Home" because the accessible name for the link is the combination of the alt text and the text content.

Polypane now check for repetition in accessible names and warns you when it finds them, and will also work with more complex repetition.
Writing the function/algorithm for this was a lot of fun, and we'll explain how we did it in a future blog post.
### Other Elements panel improvements
1. In this release we've improved the sorting of autocompletion suggestions when adding new CSS properties. In Polypane 19 we made suggestions purely on frequency (so if you types `fo` that would be autocompleted to `font-style`, as that's more likely than `font` or `font-display`) and in Polypane 20 this is now expanded to prefer the shortest property name when the frequencies are close. Long story short, it should make the suggestion feel even more natural.
1. When you have a top level `&` in your CSS that applies the styling to `:root`. Polypane now correctly shows this selector in the Elements panel.
1. The Elements panel now supports the `round()`, `mod()` and `rem()` CSS functions.
1. Along with `aria-hidden`, the DOM view now also highlights the `inert` attribute.
## Outline panel
We added some new checks to the [Outline panel](https://polypane.app/docs/outline-panel/), and improved its performance too.
### Performance improvements
Instead of getting accessibility data directly from the rendered page, we now get it from the accessibility tree that Chromium generates. This moves the work out of the main process and into a separate process, which makes the panel faster and more stable.
#### Looking ahead: full accessibility tree view
In a future release, this means we'll also introduce a full accessibility tree view. If you have suggestions on how we can make that the most useful for you, let us know!
### Cap on automatic link testing
The Link overview in the panel will [automatically check all links on a page](https://polypane.app/docs/outline-panel/#broken-link-checking) to see if they don't go to broken pages, so you catch broken URLs without having to click each one.
This is great when you have a few links, but when you have a lot of links this takes a lot of processing as well as bandwidth (for you and your server).

So we've added a cap. When there are more than 100 links on a page, Polypane shows a "Check status" button that you have to click before we go and check them all.
### Accessible name vs visible name
The Outline panel now also checks the visible name of an element against the accessible name.
The visible text should be the same as the accessible name, or a substring off it. When it's not this could be a violation of WCAG success criterion 2.5.3, so we flag that for you.

For example, on the homepage of Polypane we have a title with some fancy styling, and that styling interferes with the accessible name (removing spaces) so we overwrite that with a `aria-label` attribute. Then we (let's not hide behind "we". It was _I_, Kilian) changed the visible text to a much cooler title, but forgot to update the `aria-label` attribute. Polypane now catches that for me.
If the visible text is a substring of the accessible name, we show the difference but this isn't considered a violation (on its own).
### Repetition in accessible names
Like in the Elements panel, we now also check for repetition in accessible names in the Outline panel. Here too we show the repeating parts so you can quickly validate if it's a problem or not.

### Support for heading levels 7 through 9
When it comes to accessibility, there's always more to know. Turns out that `aria-level` doesn't just match up to H1 through H6, but you can actually go all the way up to level 9 and browsers will still understand it.
...Except you probably shouldn't be using them, so we now flag it as a warning. If you find your structure needs them you're probably better off splitting that page or restructuring your content.
Heading levels beyond 9 however are not supported at all by browsers and aren't communicated as such to assistive technology. Those we will flag as an error.
## Load failure messages
The earliest few version of Polypane had a load error state but we lost them during a refactor. It took us a while but now they're back, and updated!

When a page fails to load, Polypane will show a message in the pane that failed to load, so you know what's going on. Depending on the type of failure it tries to be a little helpful. Most often it's either a typo, a network issue or the server is gone.
<svg
xmlns="http://www.w3.org/2000/svg"
width="128"
height="128"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round"
class="icon icon-tabler icons-tabler-outline icon-tabler-device-gamepad-2"
style="display: block;margin-inline: auto;"
>
<path d="M12 5h3.5a5 5 0 0 1 0 10h-5.5l-4.015 4.227a2.3 2.3 0 0 1 -3.923 -2.035l1.634 -8.173a5 5 0 0 1 4.904 -4.019h3.4z" />
<path d="M14 15l4.07 4.284a2.3 2.3 0 0 0 3.925 -2.023l-1.6 -8.232" />
<path d="M8 9v2" />
<path d="M7 10h2" />
<path d="M14 10h2" />
</svg>
Like any _serious_ browser, this error message should also have some sort of game in it. _Obviously_. So I'm asking you:
**What kind of game would you like to see in Polypane?** A snake game where the snake travels across all panes? A pong or breakout game where you control the paddle in all panes and you're playing multiple games at once? A game where you have to find the differences between two panes? Let me know (on Twitter, via email or the chat)! ...Bonus points if I don't have to build something from scratch.
## Screenshotting improvements
We've made various small improvements to the screenshotting functionality in Polypane.
### Dashed and dotted lines in the screenshot editor
You can now draw dashed and dotted lines in the screenshot editor. Thanks Rik for enabling this in [Pintura](https://pqina.nl/pintura/?aff=xLXrx&ref=polypane)!

### Easy access to the storage folder
When you take a screenshot and quick-save it, it will automatically be saved to the last used folder. We've now added a link to open that folder directly from the context menu of screenshot buttons.

### Overview screenshot distortion (fix)
In vertical layout, sometimes the panes in an overview screenshot would get distorted, depending on things like multiple monitors with different pixel ratios and other fun situations. This is now fixed.
## Address bar
The address bar in Polypane can match URLs and page titles based on fragments. So you can type `po blo 20` to get to e.g. <code>https://**po**lypane.app/**blo**g/**po**lypane-**20**-browser-features-and-performance/</code>.
The algorithm behind it now prefers perfect matches over multiple partial matches. So if you type "sign in" it will put urls with "sign in" in the url or title higher than other URLs that might contain both "sign" and "in" at different points in the URL.
It's a small tweak but it makes the address bar feel much more dependable.
## Chromium 126
Polypane 20 includes an updated Chromium version, 126.0.6478.36. This lands cross-document view transitions and the Chromium devtools performance panel got a lot of improvements.
For an overview of the new experimental features enabled in 126, head over to our experimental features overview: [Experimental Chromium Web Platform Features](/experimental-web-platform-features/) and for additional info, head over to the [experimental chromium features docs](/docs/experimental-chromium-features/).
## Get Polypane 20
Polypane is available for Windows, Mac and Linux (.deb or AppImage) in both 64 bit and ARM versions.
Polypane automatically updates on Mac, Windows and on Linux when using the AppImage. Otherwise, go to
[the download page](https://polypane.app/download/) to download the latest version!
**Don't have Polypane yet? There is a 14 day trial available. [Try it for free](https://dashboard.polypane.app/register).** No credit card needed.
## Polypane 20 Changelog
**New**
- **New** Elements panel: `@starting-style` support
- **New** Elements panel: Accessible name missing warnings (Thanks Torrance!)
- **New** Outline panel: check visible name against accessible name (WCAG SC 2.5.3) Thanks Markus!
- **New** Accessible names check for repeating substrings (Thanks Eric!)
- **New** Screenshot editor: dotted and dashed line support (Thanks Eric and Rik!)
- **New** Page load failure messages
- **New** Chromium 126
**Improved**
- **Improved** Elements panel: better sorting of suggested CSS properties
- **Improved** Elements panel: Support for `round()`, `mode()` and `rem()` CSS functions
- **Improved** Elements panel: Support for top level `&`
- **Improved** Elements panel: Support for nested at-rules
- **Improved** Elements panel: increase the number of CSS rules Polypane can process at once
- **Improved** Elements panel: significant performance improvements for nested styles
- **Improved** Elements panel: Highlight the `inert` attribute
- **Improved** Outline panel: With more than 100 links, testing them is now an explicit action
- **Improved** Address bar: Prefer perfect matches in suggestions when typing fragments
- **Improved** Screenshots: add 'open folder' link in context menus of screenshot buttons
- **Improved** Command bar: Allow matching by category (Thanks Artem!)
- **Improved** Show a error message for unsupported state files
- **Improved** Outline panel: support for heading levels 7 through 9
- **Improved** Performance: Get and use accessibility tree out-of-process
- **Improved** Live CSS panel: Support folding CSS code
- **Improved** Touch emulation: Easier toggle between touch in pane and normal mouse outside of pane
- **Improved** Add cmd+shift+brackets to nagivate between tabs (Thanks Jerod!)
- **Improved** Responsiveness of resizing the devtools and browser panels
- **Improved** Updated Google Fonts
**Fixes**
- **Fix** Elements panel: pressing : when adding a css property no longer applies the suggestion.
- **Fix** Elements panel: Only add anonymous scoping roots when style elements contains them.
- **Fix** Workspace panel: preview for 100% height panes are now visible again (Thanks Artem!)
- **Fix** Command bar: Lorem Ipsum command now works again
- **Fix** Command bar: Show correct shortcuts for next/previous tab on mac
- **Fix** Screenshot menu no longer hidden under devtools (Thanks Ahmad!)
- **Fix** 100% height panes in vertical layout now have a minimum height of 800px.
- **Fix** A11y checks: Prevent `input type="hidden"` from being marked as focusable
- **Fix** Visible focus styles during signup flow (Thanks Novella!)
- **Fix** Prevent error when failing to write detached panel window state (Thanks John!)
- **Fix** Prevent error when devtools reference is destroyed (Thanks Mike!)
- **Fix** Overview screenshot distortion in vertical layout (Thanks Travis!)
- **Fix** Pane emulation: Prevent auto dark mode from incorrectly showing as on for certain configurations
| kilianvalkhof |
1,893,583 | Microsoft Power Platform Services | Webtual GLOBAL helps businesses use the Microsoft Power Platform to make their operations more... | 0 | 2024-06-19T13:17:36 | https://dev.to/webtualglobal/microsoft-power-platform-services-34h8 | Webtual GLOBAL helps businesses use the Microsoft Power Platform to make their operations more efficient with less complex coding. Learn more at www.webtualglobal.com or contact us at contactus@webtualglobal.com. Let's improve your business together.
| webtualglobal | |
1,893,582 | Throw | Throw - Throught this key we can create the error ourselves like this. | 0 | 2024-06-19T13:17:31 | https://dev.to/husniddin6939/throw-2ggc | 1. Throw - Throught this key we can create the error ourselves like this.
| husniddin6939 | |
1,893,581 | spicami | Помимо практических характеристик, при выборе медицинской мебели нельзя забывать и об эстетике. Даже... | 0 | 2024-06-19T13:17:04 | https://dev.to/spicami/spicami-1f84 | Помимо практических характеристик, при выборе медицинской мебели нельзя забывать и об эстетике. Даже в медицинских учреждениях важно создавать приятную, расслабляющую атмосферу. Поэтому подбирайте мебель, которая сочетает в себе функциональность, комфорт и стильный внешний вид. При этом особое внимание следует уделять качеству материалов, из которых изготовлена мебель, и их безопасности для пациентов и персонала. В том числе это касается покупки [манипуляционного медицинского стола из качественных материалов](https://spicami.ru/archives/153190). | spicami | |
1,893,578 | SharePoint migration services | Webtual GLOBAL offers comprehensive IT solutions specializing in Microsoft 365 Power Platforms,... | 0 | 2024-06-19T13:16:32 | https://dev.to/webtualglobal/sharepoint-migration-services-1cgj | Webtual GLOBAL offers comprehensive IT solutions specializing in Microsoft 365 Power Platforms, SharePoint support, and SharePoint migration services. With a commitment to excellence, we cater to diverse business needs, ensuring seamless transitions and robust support. Visit our website at www.webtualglobal.com or contact us at contactus@webtualglobal.com to discover how we can optimize your digital workspace today. | webtualglobal | |
1,877,917 | EHallPass | Streamlining School Operations with Digital Hall Pass Systems In today's digital age, technology is... | 0 | 2024-06-05T10:58:29 | https://dev.to/nadre_marry_3d4233392361a/ehallpass-4g87 | ehallpass, login | Streamlining School Operations with Digital Hall Pass Systems
In today's digital age, technology is revolutionizing every aspect of our lives – including education. Gone are the days of paper hall passes and manual tracking systems; instead, schools are turning to digital hall pass systems like eHallPass to streamline operations and enhance safety and efficiency. Let's explore the benefits of digital hall pass systems and how they're transforming the educational landscape.
Website:- [EHallPass](https://ehallpass.today/)

What is eHallPass?
eHallPass is a digital hall pass system designed to automate and simplify the process of monitoring student movement within a school. With eHallPass, teachers and administrators can create digital hall passes for students, track their whereabouts in real-time, and ensure accountability and safety throughout the school day.
Key Features of eHallPass
Digital Hall Pass Creation: With eHallPass, teachers can create digital hall passes for students directly from their computer or mobile device. Passes can be customized with details such as destination, duration, and reason for leaving the classroom.
Real-Time Monitoring: Administrators can track student movement in real-time using eHallPass, allowing them to see who is out of class, where they are, and how long they've been gone. This helps ensure accountability and safety throughout the school day.
Automatic Alerts: eHallPass can automatically send alerts to teachers and administrators when a student is absent from class for an extended period or when they attempt to leave campus without permission. This allows for quick intervention and follow-up as needed.
Data Analysis: eHallPass collects data on student movement and behavior over time, allowing administrators to identify trends, analyze patterns, and make informed decisions to improve school operations and student outcomes.
Integration with Student Information Systems: eHallPass seamlessly integrates with existing student information systems, making it easy to synchronize data and streamline administrative tasks.
Benefits of Digital Hall Pass Systems
Enhanced Safety: Digital hall pass systems like eHallPass help enhance safety by providing real-time visibility into student movement and ensuring that students are accounted for at all times.
Improved Efficiency: By automating the process of creating and tracking hall passes, digital hall pass systems help save time and reduce administrative burden for teachers and staff.
Accountability: Digital hall pass systems promote accountability among students by requiring them to request and carry digital passes for any movement outside the classroom.
Data-Driven Decision Making: With access to detailed data and analytics, administrators can make data-driven decisions to optimize school operations, improve student behavior, and enhance overall school culture.
Conclusion
Digital hall pass systems like eHallPass are revolutionizing the way schools manage student movement and safety. By leveraging technology to automate processes, enhance accountability, and improve efficiency, these systems are transforming the educational landscape and providing a safer and more productive learning environment for students and staff alike. | nadre_marry_3d4233392361a |
1,893,575 | The Ultimate Guide to QA Testing Certification: Everything You Need to Know | In today's dynamic technological landscape, quality assurance (QA) testing serves as a critical... | 0 | 2024-06-19T13:12:03 | https://dev.to/pradeep_kumar_0f4d1f6d333/the-ultimate-guide-to-qa-testing-certification-everything-you-need-to-know-2h1e | In today's dynamic technological landscape, quality assurance (QA) testing serves as a critical cornerstone in ensuring that software meets rigorous standards of functionality, usability, and reliability before it reaches end-users. QA testers play a pivotal role in identifying and rectifying defects, thereby enhancing user experience and mitigating costs associated with post-release issues.
## Understanding QA Testing
QA testing involves a systematic process of verifying and validating software to ensure it meets specified requirements and functions as intended. It encompasses a diverse array of testing types, methodologies, and tools designed to detect and address issues early in the software development lifecycle.
## Importance of QA Testing Certification
Certification in [QA Courses](https://www.h2kinfosys.com/courses/qa-online-training-course-details/) serves as a validation of proficiency in software testing methodologies, tools, and best practices. It enhances career prospects by demonstrating competence to employers and establishing credibility within the industry.
## Types of QA Testing Certifications
ISTQB (International Software Testing Qualifications Board) Certifications
## Overview of ISTQB Foundation Level
Advanced Level Certifications (Test Manager, Test Analyst, Technical Test Analyst)
Expert Level Certifications (Improving the Testing Process, Test Management)
## Certified Agile Tester (CAT)
Focus on Agile testing methodologies and practices
Importance of CAT certification in Agile development environments
Certified Software Tester (CSTE)
## Offered by the Quality Assurance Institute (QAI)
Covers fundamental testing skills and principles
HP Certified Professional Program (HP ATP, ASE)
HP’s certification programs focusing on tools such as Unified Functional Testing (UFT), LoadRunner, etc.
Certified Selenium Professional
Tailored for professionals proficient in Selenium automation testing
## Choosing the Right QA Testing Certification
Factors to Consider:
Alignment with career goals and aspirations
Industry relevance and demand
Certification costs and duration
Prerequisites and eligibility criteria
How to Prepare for QA Testing Certifications
Study Resources:
Recommended literature, study guides, and online resources
Practice tests and mock exams
Training Courses:
Evaluation of prominent training providers and platforms
Hands-On Experience:
Importance of practical application in mastering QA testing techniques
Benefits of QA Testing Certification
Career Advancement:
Expanded job opportunities and potential for higher remuneration
Recognition of expertise and professional credibility
Skill Enhancement:
Development of proficiency in testing methodologies and tools
Keeping abreast of evolving industry standards
Challenges and Pitfalls
## Common Challenges During Certification Preparation:
Strategies for overcoming these hurdles
Future Trends in QA Testing Certification
## Emerging Technologies Influencing QA Practices:
Impact of AI, automation, and DevOps on QA testing methodologies
## Conclusion
Acquiring a [QA testing certification](https://www.h2kinfosys.com/courses/qa-online-training-course-details/) signifies a significant milestone toward building a successful career in software quality assurance. It validates your capabilities, enhances industry credibility, and opens doors to diverse career opportunities. By thoroughly exploring certification options, preparing diligently, and staying informed about industry advancements, you can confidently navigate your journey toward becoming a proficient and sought-after QA tester.
| pradeep_kumar_0f4d1f6d333 | |
1,893,574 | How to create an Instagram API client in python: a step-by-step tutorial | Learn how to create an Instagram API client in Python with this step-by-step tutorial. Discover the best practices for using the Instagram API to fetch posts, likes, and follower data efficiently. | 0 | 2024-06-19T13:06:18 | https://usemyapi.com/articles/how-to-create-an-instagram-api-client-in-python-a-step-by-step-tutorial/ | python, tutorial, programming, api | ---
title: "How to create an Instagram API client in python: a step-by-step tutorial"
published: true
description: "Learn how to create an Instagram API client in Python with this step-by-step tutorial. Discover the best practices for using the Instagram API to fetch posts, likes, and follower data efficiently."
canonical_url: "https://usemyapi.com/articles/how-to-create-an-instagram-api-client-in-python-a-step-by-step-tutorial/"
---
# How to Create an Instagram API Client in Python: A Step-by-Step Tutorial - part 2
In the [previous post](https://dev.to/apiharbor/how-to-analyze-instagram-likes-a-simple-python-app-2fl1), we set up the runtime environment for working with [RapidAPI](https://rapidapi.com) in Python. We also wrote a simple client that communicates with the **Instagram Scraper 2023 API** hosted on RapidAPI. The tests went well, so today we will be finishing the client.
## Our Goal
**To remind you of our goal:** We want to create a full-fledged Python application using Instagram data that will:
- Tell us who is liking the posts
- Calculate the percentage of followers who like the posts
### What value will this application provide?
It will help determine whether the account owner should focus more on encouraging users to follow the profile, getting more likes, or improving the content.
Let's get to work!
## Necessary Endpoints
To achieve our goal, we need to know exactly what data we need to conduct our analysis. Therefore, we need:
- To get a listing of posts by user ID
- To get a list of people liking a particular post
- To get a list of people following a user
In our application, we use the **Instagram Scraper 2023 API**, which provides the necessary data without any issues. We will use the following endpoints:
- **User Posts** - returns a listing of posts by user ID
- **Post Likes** - returns a listing of people liking a particular post
- **User Followers** - returns a listing of people following a user
Since we have everything selected, it's time to implement these endpoints in our `RapidApiClient` class.
## Implementation in RapidApiClient Class
Currently, this class in Python looks like this:
```python
import http
import requests
from config import RAPIDAPI_HOST, RAPIDAPI_KEY
class RapidApiClient:
def __init__(self):
self.headers = {
'x-rapidapi-key': RAPIDAPI_KEY,
'x-rapidapi-host': RAPIDAPI_HOST
}
def get_user_posts(self, userid, count):
url = self.__get_api_url(f"/userposts/{userid}/{count}/%7Bend_cursor%7D")
response = requests.get(url, headers=self.headers)
return response.json()
def __get_api_url(self, path_and_query):
return f"https://{RAPIDAPI_HOST}{path_and_query}"
```
As you can see, there's nothing complicated here. To fetch more posts, we need to add **end_cursor** support. The operation of **end_cursor** is very simple. When we fetch the first batch of posts, the response will include an **end_cursor** value. To get the next batch of posts, we simply pass this cursor in the next request.
### Modify the get_user_posts Function
```python
def get_user_posts(self, userid, count, end_cursor=None):
url = self.__get_api_url(f"/userposts/{userid}/{count}/{self.__get_end_cursor(end_cursor)}")
response = requests.get(url, headers=self.headers)
return response.json()
```
We set the **end_cursor** parameter as default (None) and wrote a simple function `__get_end_cursor`:
```python
def __get_end_cursor(self, ec):
if ec is None:
return "%7Bend_cursor%7D"
return ec
```
This function returns the default value if the cursor is None.
Let's modify `main.py` to fetch more than 50 posts:
```python
# Initialize the end_cursor to None
end_cursor = None
# Loop to fetch posts, iterating up to 2 times
for i in range(2):
# Fetch user posts using the API client, specifying user ID, number of posts to fetch, and end cursor for pagination
posts = api_client.get_user_posts('11579415180', 50, end_cursor)
# Check if there is no next page of posts
if not posts["data"]["next_page"]:
# Break the loop if there are no more pages
break
# Update the end_cursor to the end cursor from the current batch of posts for the next iteration
end_cursor = posts["data"]["end_cursor"]
# Print the fetched posts
print(posts)
```
This code fetches 100 posts. The first time, when `cursor_end = None`, it fetches the 50 latest posts. Then, in the second loop iteration, it captures `cursor_end` from the API response and uses it to fetch 50 more posts.
We're doing quite well!
### Write the Function to Return People Liking the Post
Let's move to `rapidapi_client.py`:
```python
def get_post_likes(self, shortcode, count, end_cursor=None):
url = self.__get_api_url(f"/postlikes/{shortcode}/{count}/{self.__get_end_cursor(end_cursor)}")
response = requests.get(url, headers=self.headers)
return response.json()
```
As you can see, the operation is identical to `get_user_posts`. So, let's write another function:
```python
def get_user_followers(self, userid, count, end_cursor=None):
url = self.__get_api_url(f"/userfollowers/{userid}/{count}/{self.__get_end_cursor(end_cursor)}")
response = requests.get(url, headers=self.headers)
return response.json()
```
This function will return the followers of a user.
## RapidApiClient Ready for Action!
At this point, we have a complete client for communicating with the API. We can fetch all the data we need. In the next post, we will write the logic to use this data for analysis. Finally, we will be able to advise the analyzed user on how to develop their profile.
See you next time! 🚀 | apiharbor |
1,893,573 | Conga vs Docs Made Easy: Choosing the best alternative for Salesforce Document Generation | In the competitive landscape of Salesforce document generation, businesses often grapple with... | 0 | 2024-06-19T13:05:53 | https://dev.to/kimayanazum/conga-vs-docs-made-easy-choosing-the-best-alternative-for-salesforce-document-generation-2p64 | salesforce, software, documentation | In the competitive landscape of [Salesforce document generation](https://docsmadeasy.com/), businesses often grapple with balancing advanced features and cost efficiency. Conga Composer and Docs Made Easy are two prominent solutions offering distinct approaches to meet varying organizational needs. Conga Composer is known for its robust feature set but may come at a higher cost, whereas Docs Made Easy emphasizes cost efficiency without compromising essential functionalities.
**Here are the key advantages that make Docs Made Easy superior to Conga and why you should consider switching to Docs Made Easy as conga alternative immediately.**
**1. Pricing: Value for Money**
When it comes to pricing, Docs Made Easy stands out as the clear winner. The platform offers free services to an extent as well as offers a variety of pricing tiers that cater to businesses of all sizes, ensuring that even small and medium-sized enterprises can afford top-notch document automation services.
On the other hand, Conga's pricing structure is notoriously opaque and exorbitant. Many users report being blindsided by additional costs that were not initially disclosed. The high cost of using Conga can be prohibitive, especially for smaller businesses or startups.
**2. Customer Support: Responsive and Effective**
Effective customer support is crucial for any software solution, and this is where Docs Made Easy truly excels. The company offers round-the-clock support, ensuring that any issues or queries are addressed promptly. Docs Made Easy ensures that their customers are never left in the lurch.
Conversely, Conga's customer support leaves much to be desired. Reports of long response times and unhelpful support staff are common. Users frequently express frustration with the lack of timely assistance, such inefficiencies can be costly for businesses relying on Conga's services.
**3. Data Query Capabilities: Flexibility and Precision**
Docs Made Easy offers powerful and flexible data query tools that allow users to extract, manipulate, and analyze data with ease. The intuitive interface and robust features enable users to perform complex queries without needing advanced technical skills, making the platform accessible to a broader range of users.
Conga, however, is plagued by limitations in this area. Users often find the data query tools to be rigid and cumbersome, requiring significant technical expertise to operate effectively. The lack of flexibility hampers the ability to generate insightful reports and can hinder decision-making processes.
[
](https://appexchange.salesforce.com/appxListingDetail?listingId=a0N4V00000GbAJDUA3)
**4. User Experience: Intuitive and User-Friendly**
User experience is a critical factor in the adoption and success of any software solution. Docs Made Easy prioritizes user-friendly design and intuitive navigation, making it easy for users to get up to speed quickly.
In stark contrast, Conga’s user interface is often described as cluttered and unintuitive. New users, in particular, may find it challenging to navigate the system and utilize its full range of features.
**5. Integration and Compatibility: Seamless and Versatile**
Docs Made Easy offers seamless integration with a wide array of other software tools and platforms. This versatility ensures that businesses can easily incorporate Docs Made Easy into their existing workflows without disruption.
Conga, on the other hand, has limited integration capabilities. Users often encounter compatibility issues when trying to integrate Conga with other tools. This lack of flexibility can result in fragmented workflows and reduced operational efficiency.
**6. Reliability and Performance: Consistent and Robust**
Reliability is paramount for any document automation solution, and Docs Made Easy delivers consistent and robust performance. The platform is known for its uptime and reliability, ensuring that businesses can depend on it for critical operations. Regular updates and improvements keep the system running smoothly and efficiently.
Conga’s performance, however, is inconsistent at best. Users frequently report issues with system crashes, slow response times, and buggy updates. These reliability issues can lead to significant disruptions and loss of productivity, undermining the trust businesses place in the platform.
**Conclusion**
In conclusion, Docs Made Easy clearly emerges as the superior choice for businesses seeking a reliable, cost-effective, and user-friendly document automation solution. From its transparent pricing and exceptional customer support to its flexible data query capabilities and seamless integration, Docs Made Easy consistently outperforms Conga in every critical area. While Conga’s shortcomings in pricing, support, data query limitations, user experience, integration, and reliability render it a less desirable option, Docs Made Easy stands as a testament to how document management should be done right. For businesses aiming to optimize their workflows and achieve greater efficiency, Docs Made Easy is undoubtedly the best choice.
| kimayanazum |
1,893,572 | Try... catch... finally... | Try - we can log or do some actions and show in console. catch - whlile js check the try section and... | 0 | 2024-06-19T13:04:26 | https://dev.to/husniddin6939/try-catch-finally-5n9 | 1. Try - we can log or do some actions and show in console.
2. catch - whlile js check the try section and if it find some error, it will catch and don't show errors the screen. catch section find first error and stop , it doesn't check second one.
3.finally - this section show whatever in console the finish part.
| husniddin6939 | |
1,547,166 | Ursula von Rydingsvard | Ursula von Rydingsvard, an acclaimed contemporary artist, is renowned for her monumental and... | 0 | 2023-07-24T10:23:48 | https://dev.to/adreenil98/ursula-von-rydingsvard-ha6 | auction, auctionhouses, news, ursulavonrydingsvard | [Ursula von Rydingsvard](https://auctiondaily.com/news/artist-to-know-ursula-von-rydingsvard/), an acclaimed contemporary artist, is renowned for her monumental and evocative sculptures. Born in Germany in 1942, von Rydingsvard's family eventually settled in the United States, where she embarked on a remarkable artistic journey. Her sculptures, characterized by their immense scale, intricate textures, and emotional depth, have captivated audiences worldwide. In this blog post, we will delve into the world of Ursula von Rydingsvard, exploring her artistic vision, notable works, and the impact of her art.
[Ursula Von Rydingsvard art](https://auctiondaily.com/news/artist-to-know-ursula-von-rydingsvard/) is deeply rooted in her personal history and experiences. Drawing inspiration from her childhood memories, family narratives, and Polish-Ukrainian heritage, she creates powerful and intimate sculptures. The artist often employs cedar wood as her primary medium, transforming it into intricate and complex forms that engage viewers on both a visceral and intellectual level.
One of von Rydingsvard's signature techniques involves the laborious process of carving cedar beams. She cuts, assembles, and shapes individual sections, layering them to create mesmerizing and imposing structures. The final artworks bear the marks of her meticulous handwork, revealing the artist's dedication to the craft and her ability to imbue life into the otherwise rigid material.
Her sculptures evoke a sense of both fragility and strength. The rough, textured surfaces speak of vulnerability and resilience, while the monumental scale commands attention and reverence.
Ursula Von Rydingsvard works often evoke organic forms, such as tree trunks, roots, and other natural elements, creating a profound connection between her art and the natural world.
One of her notable works, "Ona," is a prime example of her artistic mastery. Standing over ten feet tall, the sculpture exudes a sense of majestic presence. The meticulously carved cedar beams, with their interwoven patterns and jagged edges, create a dynamic play of light and shadow. The artwork's abstract form invites viewers to interpret and engage with its emotional resonance, evoking a range of responses and introspection.
Von Rydingsvard's art has been showcased in numerous exhibitions and public spaces worldwide. Her sculptures can be found in prestigious institutions such as the Museum of Modern Art (MoMA) in New York, the Art Institute of Chicago, and the National Gallery of Art in Washington, D.C. These public installations allow her work to reach a broad audience and invite interaction with her monumental sculptures on a grand scale.
The impact of Ursula von Rydingsvard's art extends far beyond the physical presence of her sculptures. Her work has a profound ability to elicit an emotional response and stir the human spirit. The intricate details and powerful forms she creates invite contemplation and introspection, encouraging viewers to reflect on their own experiences, memories, and connections to the world around them.
Through her dedication to craftsmanship and her ability to transform humble materials into breathtaking works of art, Ursula von Rydingsvard has established herself as a significant figure in the contemporary art world. Her sculptures transcend boundaries, bridging the gap between personal narratives and universal human experiences.
In conclusion, Ursula von Rydingsvard's art is a testament to the transformative power of sculpture. Her immense sculptures, crafted with care and precision, provoke a sense of awe and inspire contemplation. Through her masterful use of materials and her ability to convey complex emotions,von Rydingsvard has left an indelible mark on the art world, inviting viewers to engage with her work on a profound and intimate level.

| adreenil98 |
1,892,825 | A practical approach to using generative AI in the SDLC | Have you ever taken requirements from a stakeholder, implemented them, only to have them come back to... | 0 | 2024-06-19T13:02:00 | https://community.aws/content/2i1vLMdryliLgpdceMy2N6o2LPn/a-practical-approach-to-using-generative-ai-in-the-sdlc | aws, ai, productivity | Have you ever taken requirements from a stakeholder, implemented them, only to have them come back to you and say "That's not what I asked for! I meant this..." It happens all the time. Sometimes we heard them wrong, sometimes we made incorrect assumptions, or maybe even the needs of the end-user changed.
You may have already seen how an AI assistant like [Amazon Q Developer](https://aws.amazon.com/developer/generative-ai/amazon-q) can help you write code and make your development more productive, and today I'd like to show you how I'm using an AI assistant to support the rest of my software development process, from idea to launch and beyond.
In the examples below, my stakeholders want to build a way for their customers to share and discover recipes and I'm using Amazon Q Developer to help me figure out what to build and how to build it. We'll step through each phase of the software development life cycle (SDLC) and this will be relevant whether you're using a traditional waterfall process or agile methodologies.
Let's get started!
## Analysis, requirements, and planning
During the analysis, requirements, and planning phases of the SDLC, I'm doing a lot of information gathering, asking questions, and researching the domain. I can use Amazon Q Developer to help me with this. Using the stakeholder business idea to create a recipe sharing and discovery app, I can ask the following questions:
`I'm creating a platform for users to share and discover recipes, including ingredients, steps, and images. What sort of functional and non-functional requirements should I be thinking about?`

Some other questions I might ask:
`What questions should I be asking my customer?`
`What are some challenges in the recipe app domain that I need to account for?`
`What are some questions I should be asking you about the functional and non-functional requirements of this recipe app?`
I can use these tools to better prepare me for planning meetings, ask the right questions, help think about functional and non-functional requirements, all as input into the design and architecture phase.
## Design and architecture
During the design phase, I start thinking more concretely about what the architecture will look like.
I need to find accurate, up-to-date information and guidance to help me understand potential solutions, which AWS services I should use, and how to use them. Instead of spending hours reading through AWS documentation, reading through blog posts, searching on Google, before even getting started, I can explore different approaches and provide initial ideas for the design and architecture phase, evaluate pros and cons, and even get a recommendation from an AI assistant like Amazon Q Developer.
Whether you enter this phase as someone new to the tech or the domain or you're armed with knowledge and years of experience, you still have to sift through it the information, evaluate your options, and make decisions. These tools help me gain clarity by narrowing down options based on the inputs I already have (functional and non functional requirements), the trade-offs I need to think about, and help make design and architectural decisions.
In the recipe sharing app my stakeholders want me to build, I can ask questions like:
`I would like to use AWS to deploy this web app. My team is made up of mostly Python engineers. We'll use Flask to build the app. What options do I have to deploy this app on AWS that include using infrastructure as code?`

Some other questions I might ask:
`What are the different tradeoffs I need to think about when choosing to use Amazon ECS for this app?`
`Assuming we use ECS, what are the different tradeoffs I need to think about when choosing to use AWS CDK or CloudFormation for this project?`
At this point, I have enough information to make a decision on how to proceed. I've decided to use Amazon ECS with Fargate to deploy this Flask app. Now I can start on the development of the app.
## Development
An AI assistant also helps me with many of my development tasks. Once I know what I need to build and how to communicate that to my tools, I can start asking questions about the implementation details.
I can use it to learn how to work with libraries and APIs, explain a block of code, refactor, ask for steps on how to implement a new feature with code examples, get code completions based on existing code and comments.
Using the same example, I ask:
`What are the steps to bootstrap a simple Flask app that has one route "/recipes" that outputs "My Recipes" text? I need to set up a python virtual environment and requirements.txt as part of this.`

I can generate code inline, right in my IDE, based on existing code and comments:

I can also troubleshoot and debug issues, fix faulty code, or configuration. I often paste in my stack traces and ask for help:
`When I run "python app.py" my Flask app does not start. There is no output when the command completes. Where should I investigate?`
`I'm getting this stack trace when I hit the recipes route /recipes. The app starts ok and the root route / works ok. What are some reasons I might get this error and how can I resolve it?`
Hopefully, this means less forgotten `print('GOT HERE')` lines! Using an AI assistant to debug an issue after I've found is great, but I can also use one to prevent bad code by helping me write tests.
## Testing
One of my favorite ways to use an AI assistant is to help with my tests. Often there's a recipe (pun intended!) to setting up the test suite and getting the first piece of code under test. Often this is boilerplate code, so I can use Amazon Q Developer to set up my test suite, to stub out a new test class, generate test cases based on existing code or a set of requirements via prompt, and even generate synthetic data for testing purposes.
In the recipe sharing app, I can ask questions like:
`I want to create a unit test for the "/recipes" route using the unittest framework. What are the steps and example code to do that?`

I might also ask questions about how to set up my test suite and run it, especially if this is a new-to-me code base:
`I have python tests cases created in this project. Which dependencies need to be installed so that I can run these tests and how do I run the test suite?`
Or to use the mocking library:
`I need to mock the API call to Amazon Bedrock in the /generate-recipe route. How can I use the unittest.mock library to do this? Please provide an example.`
## Deployment
During the deployment phase, I can use an AI assistant to help write my infrastructure code, create deployment scripts, learn about command line deployment tools, and create images and diagrams for my docs.
For my app, I can ask:
`What are the steps to deploy this Flask app as a container to ECS using Fargate?`

And when I'm ready to set up my CI/CD pipeline, I might start by asking for high-level steps and then drill down into each step with more specific questions:
`What are the steps to set up this project's CI/CD pipeline using GitHub Actions? When a commit is made to the code repository, this action will automatically coordinate building, testing, and deploying the application to ECS for every push to the repository.`
Once I've deployed the app to production and end-users are using it, I'm not done. I still need to maintain it over time.
## Maintenance
During the maintenance phase, I can use an AI assistant to learn about legacy codebases, an application's architecture, or a new programming language.
I can use the Amazon Q Developer's Explain feature to get more info about a selected block of code:

Or even get ideas on how to optimize my AWS bill right from the AWS Console:
`How can I optimize my bill this month?`

I can transform, migrate, and upgrade a codebase or an application and I can refactor or optimize code, or even evaluate security vulnerabilities using an AI assistant.
## Wrapping up
As you use a tool like this, it's important to understand how your AI assistant is trained -- on what data, for what tasks -- and then use it appropriately, using your own judgement. You may not use Amazon Q Developer in your IDE to ask about stakeholder questions, but in my situation, it worked for me. You likely wouldn't ask for unit test code examples in the AWS Console, but you would in your IDE, because that's where you are doing that work.
These are only some of the many ways you could use an AI assistant like Amazon Q Developer to help you throughout the SDLC. Whether you're using a variation of waterfall or an agile process, you can incorporate an AI assistant like Amazon Q Developer into your process. Read more about AI assistants on the Community.aws space or get started in [VSCode](https://community.aws/content/2fVw1hN4VeTF3qtVSZHfQiQUS16/getting-started-with-amazon-q-developer-in-visual-studio-code), [JetBrains IDEs](https://community.aws/content/2fXj10wxhGCExqPvnsJNTycaUcL/adding-amazon-q-developer-to-jetbrains-ides), or even the [command line](https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-getting-started-installing.html).
How are you using AI assistants to help you with your development process? Drop a comment below to share and help others. | jennapederson |
1,893,571 | Top QA Testing Training and Job Placement Programs Near Me | In today's rapidly evolving technological landscape, quality assurance (QA) testing plays a pivotal... | 0 | 2024-06-19T12:54:23 | https://dev.to/pradeep_kumar_0f4d1f6d333/top-qa-testing-training-and-job-placement-programs-near-me-4go5 | In today's rapidly evolving technological landscape, quality assurance (QA) testing plays a pivotal role in ensuring that software meets stringent quality standards before it reaches end-users. With the demand for skilled QA testers steadily increasing, choosing a training program equipped with effective job placement services is crucial for aspiring professionals. This comprehensive guide explores some of the top [qa testing training and job placement near me](https://www.h2kinfosys.com/courses/qa-online-training-course-details/) programs available near you, providing insights into their offerings, benefits, and how they can advance your career in software quality assurance.
## Introduction to QA Testing and Its Importance
QA testing is integral to the software development lifecycle, focusing on identifying and rectifying defects to ensure software products are reliable, functional, and aligned with user expectations. Beyond enhancing product quality, QA testing plays a crucial role in cost reduction by identifying issues early in the development process, thereby maintaining project efficiency and effectiveness.
## Benefits of QA Testing Training Programs
Effective QA testing training programs offer comprehensive education on essential methodologies, tools, and techniques vital for success in QA roles. These programs typically cover:
Software Testing Fundamentals: Introduction to testing types, lifecycle stages, and methodologies.
Test Design Techniques: Crafting robust test cases and plans for thorough software validation.
Automation Tools: Training on widely-used tools such as Selenium, QTP, or LoadRunner for efficient test automation.
Performance and Security Testing: Techniques to evaluate software performance under various conditions and identify vulnerabilities.
Mobile and Web Testing: Specialized training in testing mobile applications and web platforms to meet current market demands.
Hands-on experience is foundational in these programs, offering learners opportunities to apply theoretical knowledge through projects and simulations, essential for developing proficiency in QA testing roles.
## Importance of Job Placement Services
Job placement services offered by QA testing training programs bridge the gap between education and employment, providing essential support such as:
Resume Building: Assistance in crafting tailored resumes highlighting relevant QA skills and experiences.
Interview Preparation: Coaching sessions focused on confidently answering QA-related interview questions.
Job Search Assistance: Access to comprehensive job listings and connections with potential employers within the QA industry.
Networking Opportunities: Participation in industry events and networking sessions to establish connections with QA professionals and potential employers.
These services are invaluable for helping graduates secure entry-level QA positions and advance their careers within the field.
## What to Look for in QA Testing Training Programs
When evaluating QA testing training programs near you, consider critical factors to ensure an informed decision:
Accreditation and Reputation: Verify the program's accreditation and standing within the QA industry.
Curriculum Coverage: Ensure comprehensive coverage of essential QA testing topics aligned with industry standards.
Instructor Expertise: Experienced instructors with practical industry knowledge enhance the learning experience.
Learning Flexibility: Programs offering flexible learning options like part-time schedules or online courses accommodate diverse learning needs.
Success Rates: Research the program's track record in job placements to gauge effectiveness.
Support Services: Ensure robust support services encompass resume building, interview coaching, and job placement assistance to facilitate career advancement.
## Top QA Testing Training and Job Placement Programs Near Me
Explore some highly recommended QA testing training and job placement programs available near you:
H2k Infosys QA Testing Program: offers a comprehensive QA testing program with a hands-on approach and strong industry connections, providing practical projects and personalized job placement support.
Quality Assurance Training Center (QATC): Specializing in QA testing with a focus on automation and performance testing, QATC offers extensive job placement assistance, including resume refinement, mock interviews, and networking opportunities.
Software Testing Institute (STI): Delivering structured QA testing education covering manual and automated testing methodologies, STI emphasizes practical learning through real-world projects and internships.
QA Mentor Training Program: Known for its industry-experienced mentors, QA Mentor provides tailored QA testing training to equip learners with practical skills and comprehensive job placement support.
IT Training and Placement Academy (ITPA): ITPA offers specialized QA testing training aligned with current industry demands, featuring hands-on labs, certification preparation, and dedicated career services.
## Conclusion
Choosing the right [qa training and placement](https://www.h2kinfosys.com/courses/qa-online-training-course-details/) program near you is crucial for establishing a successful career in software quality assurance. These programs not only impart essential QA skills but also provide invaluable support in securing employment within the competitive QA job market. Whether you are starting a new career or advancing existing skills, investing in a reputable training program can significantly accelerate your professional growth and open doors to abundant opportunities in QA testing.
By understanding the significance of QA testing, exploring training program benefits, and carefully considering key selection factors, you can confidently navigate your path toward a rewarding career in software quality assurance.
| pradeep_kumar_0f4d1f6d333 | |
1,893,570 | Travel Recommended with FastAPI, Kafka, MongoDB and OpenAI | Introduction In the bustling realm of modern travel, personalized recommendations play... | 0 | 2024-06-19T12:49:09 | https://dev.to/riottecboi/travel-recommended-with-fastapi-kafka-mongodb-and-openai-j4g | fastapi, kafka, mongodb, openai |

## Introduction
> In the bustling realm of modern travel, personalized recommendations play a pivotal role in enhancing user experiences and fostering memorable journeys. Our project, the Smart Travel Recommender System, aims to revolutionize the way travelers explore new destinations by providing tailored recommendations for activities based on country and season. By integrating cutting-edge technologies such as FastAPI, Kafka, MongoDB, and OpenAI, we aspire to deliver a seamless and scalable solution that empowers users to discover the essence of every destination.
## Structure of Project
```
Travel Recommender/
├── app/
│ ├── background_worker.py
│ ├── main.py
│ ├── api/
│ │ ├── openai_api.py
│ │ └── routes/
│ │ ├── recommendations_route.py
│ │ └── status_route.py
│ ├── models/
│ │ ├── __init__.py
│ │ └── models.py
│ ├── schemas/
│ │ └── schema.py
│ ├── core/
│ │ └── config.py
│ ├── db/
│ │ └── mongodb.py
│ └── utils/
│ └── kafka.py
├── test/
│ └── sample_test.py
├── requirements.txt
├── README.md
├── Dockerfile
├── docker-compose.yml
```
More information from my [Github](https://github.com/riottecboi/FastAPI-Kafka-MongoDB-OpenAI)
## Installation & Usage
```
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.11 -y
```
Use the package manager [Pip](https://pip.pypa.io/en/stable/) to install driver, and any other required libraries.
```
sudo apt-get install python3-pip -y
```
Initialize your virtual environment in your project and activate the virtual environment.
```
python3.11 -m venv <virtual-environment-name>
source <virtual-environment-name>/bin/activate
```
Install all required libraries for this project from the file _requirements.txt_
```
python3.11 -m pip install -r requirements.txt
```
## Set Up All Services
I have created a docker-compose file for easy to install all of elements we need for this project.
```
version: '3.8'
services:
zookeeper:
image: confluentinc/cp-zookeeper:latest
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
ports:
- "2181:2181"
kafka:
image: bitnami/kafka:latest
ports:
- 9092:9092
- 9093:9093
environment:
- KAFKA_BROKER_ID=1
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
- KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT
- KAFKA_CFG_LISTENERS=CLIENT://:9092
- KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092
- KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
depends_on:
- zookeeper
kafka-ui:
image: provectuslabs/kafka-ui
ports:
- 8080:8080
depends_on:
- kafka
environment:
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
mongodb:
image: mongo:5.0
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: root
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongo localhost:27017/test --quiet
interval: 10s
timeout: 5s
retries: 5
mongo-express:
image: mongo-express:latest
depends_on:
mongodb:
condition: service_healthy
ports:
- 8888:8081
environment:
ME_CONFIG_MONGODB_SERVER: mongodb
fastapi-app:
build:
context: .
args:
NO_CACHE: "true"
depends_on:
- kafka
- mongodb
ports:
- "8000:8000"
environment:
OPENAI_KEY: "xxxx"
KAFKA_BOOTSTRAP_SERVERS: kafka:9092
KAFKA_TOPIC: recommendations_topic
MONGODB_URI: mongodb://root:root@mongodb:27017
MONGODB_DATABASE: recommendations
```
Dockerfile will be like this
```
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.11
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt /app/requirements.txt
COPY app/api /app/api
COPY app/core /app/core
COPY app/db /app/db
COPY app/schemas /app/schemas
COPY app/utils /app/utils
COPY app/background-worker.py /app/background.py
COPY app/main.py /app/main.py
RUN pip3 install --no-cache-dir -r requirements.txt
EXPOSE 8000
# Run the FastAPI service and background process
CMD ["sh", "-c", "uvicorn main:app --host 0.0.0.0 --port 8000 & python background.py"]
```

> Containers will be pulled and run by each process, connect with a default bridge network of project.
## Explanation
We will have in total two routes (recommendation_route and status_route)
- **_recommendation_route_** will accept two query parameters:
1. _country_: The country for which the recommendations are to be fetched.
2. _season_: The season in which the recommendations are desired (e.g., "summer", "winter").
3. Both parameters are required and the season must validate that one of the four seasons is chosen.
```
class RecommendationsRequest(BaseModel):
country: str = Field(..., description="The country for which recommendations are to be fetched.")
season: str = Field(..., description="The season in which the recommendations are desired.")
@model_validator(mode='after')
def validate_season(cls, values):
try:
pycountry.countries.search_fuzzy(values.country)
except LookupError:
raise ValueError(f"Invalid country.")
valid_seasons = ["spring", "summer", "autumn", "winter"]
if values.season not in valid_seasons:
raise ValueError(f"Invalid season. Must be one of {', '.join(valid_seasons)}")
return values
```
When a request is made to the endpoint, generate a unique identifier (UID) for the request then return the UID to the user immediately.
```
try:
RecommendationsRequest(country=country, season=season)
uid = str(uuid.uuid4())
request_data = {'country': country, 'season': season}
await kafka_producer(request_data, uid)
return RecommendationSubmitResponse(uid=uid)
except ValidationError as e:
raise HTTPException(status_code=422, detail=ErrorResponse(error="Invalid country/season", message="The input of country or season is invalid. Please try again.").dict())
```
Offload the processing of the request to a background component. At this stage, we will use **Kafka** to send the request as a message to a Kafka topic and consume it with a separate worker.
```
async def kafka_producer(request_data, uid):
producer = AIOKafkaProducer(
bootstrap_servers=settings.KAFKA_BOOTSTRAP_SERVERS
)
await producer.start()
await producer.send(
settings.KAFKA_TOPIC,
f"{uid}:{request_data}".encode("utf-8"), partition=0
)
await producer.stop()
```
This Python code block defines an asynchronous function named _kafka_producer_ that is responsible for sending data to a Kafka topic.
```
producer = AIOKafkaProducer(
bootstrap_servers=settings.KAFKA_BOOTSTRAP_SERVERS
)
```
Initializes a **Kafka** producer using the _AIOKafkaProducer_ class from _aiokafka_ library. This producer is configured to connect to the Kafka cluster specified in the _settings.KAFKA_BOOTSTRAP_SERVERS_
```
await producer.start()
await producer.send(
settings.KAFKA_TOPIC,
f"{uid}:{request_data}".encode("utf-8"), partition=0
)
```
This step prepares the producer to send messages to the Kafka cluster, sends a message to the specified **Kafka** topic (_settings.KAFKA_TOPIC_). The message content is a string composed of the uid followed by a colon `(:)` and the _request_data_, message should be encoded as UTF-8 before sending.
When the message sent a new topic to **Kafka**, _background_worker.py_ will do a work to catch that message.
```
async def handle_request(uid, request_data):
try:
recommendations = await get_openai_recommendation(request_data)
except Exception as e:
recommendations = []
result = await save_recommendations(uid, request_data, recommendations)
print(f"Recommendations saved with ID: {result}")
async def main():
while True:
uid, request_data = await kafka_consumer()
await handle_request(uid, request_data)
```
on this stage, the worker will consume a message by this code block, it defines an asynchronous function that acts as a Kafka consumer, receiving messages from a specified **Kafka** topic, processing them, and committing the offsets to track the progress of message consumption.
```
consumer = AIOKafkaConsumer(
settings.KAFKA_TOPIC,
bootstrap_servers=settings.KAFKA_BOOTSTRAP_SERVERS,
group_id=settings.KAFKA_TOPIC,
auto_offset_reset='earliest'
)
await consumer.start()
try:
async for msg in consumer:
uid, request_data = msg.value.decode("utf-8").split(":", 1)
print(f"Processed recommendation request: {request_data}")
await consumer.commit()
return uid, eval(request_data)
except Exception as e:
print(f"Consumer error: {e}")
finally:
await consumer.stop()
```
About the _auto_offset_reset_=’_earliest_’, It specifies the behavior for handling the offset when there is no initial offset or the current offset does not exist on the server (e.g., because the data has been deleted). In this case, it's set to ‘earliest’, meaning the consumer will start reading from the earliest available message.
After commits the offset of the consumed message, _background_worker.py_ make the **OpenAI** API call.
```
try:
client = OpenAI(api_key=settings.OPENAI_KEY)
chat_completion = client.chat.completions.create(
messages=[
{
"role": "user",
"content": f"Provide three recommendations for doing in {request_data['country']} during {request_data['season']}.",
}
],
model="gpt-3.5-turbo",
)
return [chat_completion.choices[0].message.content]
except Exception as e:
raise Exception(str(e))
```
and saving recommendations store the recommendations in MongoDB with the UID as the key, structure the data to include the (country, season) as request_data, recommendations.
```
async def save_recommendations(uid, request_data, recommendations):
recommendation_doc = {
"uid": uid,
"request_data": request_data,
"recommendations": recommendations
}
result = await loop.run_in_executor(None, recommendations_collection.insert_one, recommendation_doc)
return result.inserted_id
```
- **status_route** accept the UID as a query parameter, checking the result in MongoDB.
1. If having result, the status is “completed” with the recommendations.
2. If not process haven’t finish, the status is “pending” to inform the data is not yet available.
```
if recommendations is None:
raise HTTPException(status_code=404, detail=ErrorResponse(error="UID not found", message="The provided UID does not exist. Please check the UID and try again.").dict())
if recommendations:
return RecommendationResponse(uid=uid, country=country, season=season, message="The recommendations are ready", recommendations=recommendations, status="completed")
return RecommendationCheckResponse(uid=uid, status="pending", message="The recommendations are not yet available. Please try again later.")
```
```
class RecommendationSubmitResponse(BaseModel):
uid: str
class RecommendationCheckResponse(RecommendationSubmitResponse):
message: Optional[str] = None
status: str
class RecommendationResponse(RecommendationCheckResponse):
country: Optional[str] = None
season: Optional[Literal["spring", "summer", "autumn", "winter"]] = None
recommendations: Optional[List[str]] = None
class ErrorResponse(BaseModel):
error: str
message: str
```
## Final Demo

## Conclusion
By leveraging FastAPI, Kafka, MongoDB, and OpenAI, we endeavor to deliver a sophisticated yet user-friendly platform that empowers travelers to embark on unforgettable journeys tailored to their preferences and interests. With scalability, efficiency, and personalization at its core, our system strives to redefine the way travelers explore the world, one recommendation at a time.
| riottecboi |
1,893,568 | How Much Does It Cost to Make an App? | Creating a mobile application involves various costs that can vary significantly based on the... | 0 | 2024-06-19T12:48:20 | https://dev.to/hyscaler/how-much-does-it-cost-to-make-an-app-can | appdevelopment, programming, python, ai | Creating a mobile application involves various costs that can vary significantly based on the complexity and features of the app. Some of the key factors that influence the cost of [app development](https://hyscaler.com/service/mobile-app-development-services/) include:
## Development Time and Complexity
The more complex an app is, the longer it will take to develop, leading to higher costs. Features such as user authentication, real-time messaging, and in-app purchases can all contribute to the complexity of the app.
## Design and User Interface
A well-designed user interface is crucial for the success of an app. Investing in a professional designer can incur additional costs, but it can significantly enhance the user experience and overall appeal of the app.
## Platform Compatibility
Developing an app for multiple platforms such as iOS and Android can increase costs. Each platform has its own set of requirements and guidelines that need to be followed, adding to the development time and expenses.
## Maintenance and Updates
After the app is launched, ongoing maintenance and updates are essential to ensure its functionality and security. Budgeting for regular updates and bug fixes is crucial for the long-term success of the app.
## Marketing and Promotion
Launching an app successfully requires effective marketing and promotion strategies. Investing in app store optimization, social media advertising, and other marketing efforts can drive user acquisition but comes with its own set of costs.
## Average Costs of App Development
The cost of developing an app can range from a few thousand dollars to hundreds of thousands of dollars, depending on the factors mentioned above. On average, a simple app with basic features can cost around $5,000 to $10,000, while a more complex app with advanced functionalities can cost upwards of $100,000 or more.
It's important to note that these are just rough estimates, and the actual cost can vary based on individual project requirements and the chosen [app development company](https://hyscaler.com/service/mobile-app-development-services/).
In conclusion, the cost of making an app is influenced by various factors such as development time, design complexity, platform compatibility, maintenance, and marketing efforts. By understanding these factors and planning accordingly, you can better estimate and budget for the cost of bringing your app idea to life. We at HyScaler.com craft digital dreams into pixel-perfect reality. Visit us at [hyscaler.com](https://hyscaler.com/service/mobile-app-development-services/) and let’s make app magic happen! 🚀 | rajatp |
1,893,569 | I believe in the Angular Roadmap: Especially Angular’s Full Potential in 2024 | Angular has solidified its status as a vital framework in the development community. With its... | 0 | 2024-06-19T12:48:09 | https://dev.to/zoltan_fehervari_52b16d1d/i-believe-in-the-angular-roadmap-especially-angulars-full-potential-in-2024-2kge | angular, angularroadmap, progrmamingroadmap | Angular has solidified its status as a vital framework in the development community. With its impressive 260K stars on GitHub, Angular continues to be a key resource for developers. It provides extensive roadmaps, best practices, and comprehensive guidance, making it an indispensable tool for front-end developers.
## A bit of a spoiler before we dig: Summary of Main Points
Angular is a versatile, TypeScript-based web application framework that has a significant impact on single-page application (SPA) development. The framework is known for its performance-efficient change detection system, the use of Zones for managing state changes, and its Model-View-ViewModel (MVVM) architecture.
## Key Takeaways
1. Angular’s Role: Angular remains essential for modern web development.
2. Community-Driven Roadmap: Structured guidance is available for all expertise levels.
3. Foundation Knowledge: A strong grasp of web technologies and TypeScript is necessary.
4. Angular CLI: This tool streamlines the development process.
5. Core Concepts: Directives, reactive forms, dependency injection, and HTTP client are critical.
## Angular Roadmap in 2024
Angular has established itself as a versatile framework, continuously evolving to meet the needs of developers. Its unique change detection mechanism and the use of Zones ensure performance efficiency and a reactive user interface. The MVVM architecture allows for building and interacting with SPAs effectively.
## Key Components
1. Change Detection: Angular’s system is distinct from the Virtual DOM approach, focusing on performance and responsiveness.
2. Zones: Manages asynchronous operations to ensure smooth user experiences.
3. MVVM Architecture: Provides a solid foundation for SPA development.
## Where to Begin with an [Angular Roadmap](https://bluebirdinternational.com/angular-roadmap/)?
Starting with Angular requires a solid understanding of its core principles and tools, beginning with foundational web technologies and TypeScript.
## Key Concepts of TypeScript Vital for Angular
- Structural Typing: For flexible data management.
- Type Interface: Defines and reuses complex type structures.
- Union Types: Adds versatility to type definitions.
- Built-in Types: Ensures consistent data handling.
- Type Guards and Generics: Enables safe code composition.
- Decorators: Adds metadata and logic to class declarations.
**Example:**

## Angular CLI Utilization for Project Bootstrapping
Angular CLI simplifies project setup and management, boosting productivity through commands for:
- Project Initialization
- Module, Component, and Service Generation
- Building, Serving, and Compiling Applications
**Example:**

## Laying the Groundwork with HTML, CSS, and Core JavaScript
Mastering Angular starts with proficiency in HTML, CSS, and JavaScript. These are the building blocks for Angular’s structure:
1. HTML: Semantics of web content.
2. CSS: Styling for responsive and engaging interfaces.
3. JavaScript: Dynamics of web applications.
Example:

## Core Concepts and Best Practices in the Angular Roadmap
Understanding Angular’s core concepts, such as directives, reactive forms, dependency injection, and HTTP client, is essential for creating efficient, scalable applications.
- Directives: Extends HTML functionality.
- Reactive Forms: Model-driven approach for complex data entry.
- Dependency Injection: Modularizes and simplifies unit testing.
- HTTP Client: Handles communication with external APIs.
**Directive Examples:**
Structural Directives: *ngFor, *ngIf
Attribute Directives: [ngStyle], [ngClass]
## Angular Advanced Features and Techniques
Advanced features like modularization, lazy loading, change detection strategies, and server-side rendering (SSR) are key to building high-performance applications.
- Modules and Lazy Loading: Enhances start-up performance.
- Change Detection Strategies: Optimizes performance.
- Angular Universal for SSR: Improves SEO and accessibility.
**Example: Lazy Loading**
const routes: Routes = [
{ path: ‘feature’, loadChildren: () => import(‘./feature/feature.module’).then(m => m.FeatureModule) }
];
**Emerging Technologies**
The integration of machine learning (ML), artificial intelligence (AI), and blockchain is transforming Angular’s capabilities, enhancing its performance and adaptability.
- ML and AI: Adaptive physics and predictive modeling.
- Blockchain: Decentralized computing and asset interoperability.
| zoltan_fehervari_52b16d1d |
1,893,566 | Discover Excellence: Hair Course in Chennai with Orane Chennai | Introduction to Orane Chennai Orane Chennai, a distinguished branch of Orane International School... | 0 | 2024-06-19T12:40:56 | https://dev.to/orane_chennai_f11b0a360b2/discover-excellence-hair-course-in-chennai-with-orane-chennai-26m1 | Introduction to Orane Chennai
Orane Chennai, a distinguished branch of Orane International School of Beauty & Wellness, offers unparalleled opportunities in the realm of beauty education. As the foremost provider of hair and beauty courses in Chennai, we pride ourselves on delivering industry-leading training that prepares students for thriving careers.
Why Choose Orane Chennai for Your Hair Course in Chennai?
Chennai, known for its rich cultural heritage and burgeoning beauty industry, is an ideal locale to pursue a hair course. At Orane Chennai, we offer a diverse array of programs including hairdressing, cosmetology, and beautician courses. Our curriculum is meticulously crafted to equip students with the skills essential for success in the dynamic beauty and wellness sector. Choosing Orane Chennai ensures you receive top-notch education, access to industry professionals, and exposure to cutting-edge trends.
What Sets Orane Chennai Apart?
Orane Chennai distinguishes itself with a comprehensive curriculum that encompasses all facets of hairdressing and hairstyling. Our courses are continuously updated to incorporate the latest techniques and trends, ensuring our students stay ahead in the competitive beauty industry. With state-of-the-art facilities and expert faculty comprising seasoned professionals, we provide a nurturing environment for learning and growth.
Courses Offered at Orane Chennai
- Hair Courses in Chennai: From foundational hairdressing skills to advanced hairstyling techniques, our courses cater to beginners and advanced learners alike.
-Cosmetology Courses in Chennai: Explore the art of beauty and wellness with our comprehensive cosmetology programs designed to enhance your expertise.
- Beautician Courses in Chennai: Master the essentials of skincare, makeup, and grooming through our specialized beautician courses.
Why Orane Chennai?
Orane Chennai is committed to your success, offering globally recognized certifications and extensive industry partnerships that facilitate internships, job placements, and career advancement opportunities. Our alumni exemplify the pinnacle of achievement in the beauty industry, reflecting the quality education and training provided at Orane Chennai.
Join Orane Chennai Today
Embark on your journey towards a fulfilling career in the hair and beauty industry with Orane Chennai. Whether you're searching for a [hair course in Chennai](https://maps.app.goo.gl/5foTUxZGNeesYP2P9), [cosmetology course in Chennai](https://maps.app.goo.gl/5foTUxZGNeesYP2P9), or [beautician course in Chennai](https://maps.app.goo.gl/5foTUxZGNeesYP2P9), Orane Chennai stands ready to empower you with the skills, knowledge, and confidence needed to excel. Enroll today and discover how we can help you achieve your professional aspirations in Chennai's vibrant beauty landscape.
Orane Chennai invites you to embrace excellence in beauty education. Join us and experience the transformative power of quality training that sets the stage for a successful career. Explore our range of courses and take the first step towards realizing your dreams in the captivating world of hairdressing and beauty. | orane_chennai_f11b0a360b2 | |
1,893,565 | The History Of JavaScript: A Journey Through Time | The foundation of contemporary web development, JavaScript, has a fascinating past that begins in the... | 0 | 2024-06-19T12:38:35 | https://www.swhabitation.com/story/history-of-javascript | javascript, historyofjavascript, ecma, webdev | The foundation of contemporary web development, JavaScript, has a fascinating past that begins in the middle of the 1990s.
Over the years, this programming language has undergone considerable evolution, influencing how we currently interact with the online.
We'll go back in time and examine the beginnings, development, significance, and potential applications of JavaScript in this blog.
## The Birth Of JavaScript
The internet had only begun to take off in the early nineties. The websites had little interaction and were static.
A significant participant in web browsers at the time, Netscape Communications, recognised the need for a more dynamic online experience.
Netscape assigned gifted programmer Brendan Eich the task of developing a scripting language for the browser in 1995.
In just ten days, Brendan Eich created JavaScript, which he first dubbed "Mocha."
Later on, in an attempt to cash in on the success of another programming language from that era, Java, it was renamed "LiveScript" and then "JavaScript".
JavaScript and Java are very distinct, despite their namesake.

## JavaScript's Early Years
September 1995 saw the release of JavaScript in Netscape Navigator 2.0. It was a huge advancement over the static HTML sites of the day since it made it possible for web developers to create dynamic and interactive web pages.
The early days of JavaScript were not without difficulties, though. Compatibility problems and inconsistent language implementation resulted from different browsers' implementations of the language.
The European Computer Manufacturers Association (ECMA) standardized JavaScript in 1997 as a solution to these problems, yielding the ECMAScript specification.
The first official version, known as ECMAScript 1, offered a uniform standard for JavaScript implementation in various browsers.
## The Evolution Of JavaScript
| Version | Name | Features |
|------------|----------|-----------|
| ES1 | ECMAScript 1 -1997 | First Edition |
| ES2 | ECMAScript 2 -1998 | Editorial Changes |
| ES3 | ECMAScript 3 -1999 | Regular Expressions, Try/Catch, Switch, Do-While |
| ES4 | ECMAScript 4 -Never Released | Never Released |
| ES5 | ECMAScript 5 -2009 | Strict Mode JSON Support String.Trim() Array.IsArray() Array Iteration Methods Allows Trailing Commas For Object Literals |
| ES6 | ECMAScript 2015 | Let And Const, Default Parameter Values, Array.Find(),Array.FindIndex() |
| ES7 | ECMAScript 2016 | Exponential Operator (**) , Array.Includes() |
| ES8 | ECMAScript 2017 | String Padding, Object.Entries(), Object.Values(), Async Functions, Shared Memory, Allows Trailing Commas For Function Parameters |
| ES9 | ECMAScript 2018 | Rest / Spread Properties, Asynchronous Iteration, Promise.Finally(), Additions To RegExp |
| ES10 | ECMAScript 2019 | String.TrimStart(), String.TrimEnd(), Array.Flat(), Object.FromEntries, Optional Catch Binding |
| ES11 | ECMAScript 2020 | The Nullish Coalescing Operator (??), BigInt Primitive Type |
| ES12 | ECMAScript 2021 | String.ReplaceAll() Method, Promise.Any() Method |
| ES13 | ECMAScript 2022 | Top-Level Await, New Class Elements,Static Block Inside Classes |
| ES14 | ECMAScript 2023 | toSorted method,toReversed,findLast, and findLastIndex methods on Array.prototype and TypedArray.prototypet |
## JavaScript Today
The most popular programming language for web development nowadays is JavaScript.
Most websites' client-side functionality, including interactive features, sophisticated apps, and rich user interfaces, is powered by it.
JavaScript's capabilities have been further enhanced by frameworks and libraries like React, Angular, and Vue.js, which enable programmers to create complex mobile and single-page applications.
JavaScript is not just for use in browsers. Thanks to the introduction of Node.js, a runtime environment that allows JavaScript code to be executed server-side, programmers may now utilize JavaScript to create whole web apps from start to finish.
JavaScript's status as a fundamental component of contemporary web development has been solidified by its full-stack development capability.
## The Impact Of JavaScript
The web is now a completely different experience thanks to JavaScript. It has changed static web pages into dynamic, interactive programmes that react instantly to user interaction. JavaScript has made features like animations, asynchronous data loading, and form validation commonplace.
Furthermore, a thriving community is always adding to the ever-expanding collection of tools, frameworks, and libraries that make up JavaScript's ecosystem. Because of the innovative atmosphere created by collaboration, JavaScript is able to maintain its leading position in web development.
## JavaScript’s Ecosystem And Tools
JavaScript is strong not only in the language itself but also in the large community of tools and frameworks that surround it. The following are a few crucial elements of the JavaScript ecosystem:

**1. Frameworks and Libraries:** React, Angular, and Vue.js are some of the frameworks that make it easier for developers to create sophisticated apps quickly. Libraries like D3.js, Lodash, and jQuery offer strong tools for processing data, modifying the DOM, and producing visualisations.
**2. Node.js:** Backend development using JavaScript is made possible by this server-side runtime. Developers may create command-line tools, scalable network applications, and APIs with Node.js.
**3. Build Tools:** The development process is streamlined by tools like Webpack, Gulp, and Grunt, which automate repetitive processes like testing, compilation, and minification.
**4.Package Managers:** Yarn and NPM (Node Package Manager) make it easier to install and manage JavaScript libraries and dependencies, making projects scalable and easy to maintain.
**5.Testing Frameworks:** Popular testing frameworks like Jasmine, Mocha, and Jest offer reliable testing environments for JavaScript applications, assisting in the assurance of code quality.
## JavaScript In Mobile And Desktop Applications

JavaScript is used in desktop and mobile applications in addition to online development. technologies like:
**1. React Native :** Enables the creation of mobile applications with native performance on iOS and Android smartphones utilizing JavaScript and React.
**2. Electron :** Makes it possible to create desktop applications that are cross-platform using JavaScript, HTML, and CSS. Electron is used in the development of apps like Slack and Visual Studio Code.
## The Future Scope Of JavaScript
JavaScript is expected to play an ever bigger part in web development as we move forward. Keep an eye out for the following developments and trends:
**1. WebAssembly Integration :** With the help of the binary instruction format known as WebAssembly (Wasm), code written in several languages can execute in a browser almost as quickly as natively. Even more potent web apps will be possible thanks to JavaScript's integration with WebAssembly, which enables programmers to take advantage of the advantages of additional languages in addition to JavaScript.
**2. Progressive Web Apps (PWAs) :** PWAs provide a smooth user experience by combining the finest features of mobile and web apps. PWA development relies heavily on JavaScript, and as PWAs gain popularity, JavaScript will become ever more important.
**3. Artificial Intelligence and Machine Learning :** Machine learning capabilities are being introduced to browsers through JavaScript libraries such as TensorFlow.js. This creates new opportunities for the development of intelligent web apps that can adjust to changing user behavior.
**4.Internet of Things (IoT) :** JavaScript is becoming more and more prevalent in the Internet of Things. Smarter homes and cities are possible because of frameworks like Node-RED, which allow developers to leverage JavaScript to control and communicate with a variety of IoT devices.
**5. Serverless Architecture :** Developers can create and execute apps using serverless computing without having to worry about maintaining servers. With tools like AWS Lambda and Azure Functions, JavaScript is a popular choice for serverless development, which lowers operating costs and facilitates application scalability.
**6. Enhanced Performance and Efficiency :** It is anticipated that future JavaScript versions would prioritize increasing efficiency and performance. JavaScript can be made even more powerful and efficient by adding capabilities like Just-In-Time (JIT) compilation and carrying out more optimisations.
## Conclusion
JavaScript's journey from its modest beginnings in 1995 to its current position as a web development powerhouse is a monument to the strength of creativity and teamwork. JavaScript will surely continue to be important as the web develops, propelling the upcoming generation of digital experiences. Knowing the background and potential applications of JavaScript can help you, regardless of experience level, gain significant understanding of the language that powers our online environment.
As we delve deeper into the intriguing realm of technological tales, we'll be sharing more insights with you about the pasts and futures of React, HTML, CSS, and other key technologies that influence our digital world. Stay tuned.
| swhabitation |
1,883,355 | Cebolas e camadas para padrões de projetos no Front-end — Parte I | Nesse texto, tenho como objetivo trazer uma alternativa de padrões de projetos front-ends, esse... | 0 | 2024-06-19T12:35:40 | https://dev.to/tino-tech/cebolas-e-camadas-para-padroes-de-projetos-no-front-end-parte-i-55af | frontend, architecture, react, javascript | Nesse texto, tenho como objetivo trazer uma alternativa de padrões de projetos front-ends, esse padrão funciona independente do framework ou biblioteca.
A estrutura proposta neste artigo utiliza alguns recursos e nomenclaturas conhecidas pela comunidade do ReactJS, mas outras são uma inspiração de outros ecossistemas como Angular e similares, a ideia é compartilhar um padrão que já adotei em alguns projetos e que serve muito bem para um monolito escalável, e com uma ótima fórmula para modelar um ecossistema que poderá evoluir para um micro-frontend em um futuro próximo.
# O que vamos encontrar aqui?
- Um projeto To-Do List com ReactJs e arquitetura onion.
- Referências a outras publicações que irá descrever melhor alguns contextos e implementações.
- Uma simples comparação entre uma arquitetura layered e uma onion.
- Um relato dos projetos em que implementei essa composição com pontos positivos e negativos.
# Mas antes de começar:
- O projeto de exemplo não possui uma API para gerenciar acessos e salvar dados, ou seja, os dados são salvos localmente no navegador.
- Essa arquitetura não é uma bala de prata, é complexa e robusta para escalar grandes projetos, para algo menor considere recursos mais objetivos como um MVC por exemplo.
- Os nomes podem ser modificados e adaptados para cada organização e contexto.
- É uma arquitetura conhecida no mundo back-end então, possuímos muitos posts e referência sobre ela, mas são sobre aplicações back-end.
# Porque eu preciso adotar padrões de projetos?
O conceito de padrões foi primeiramente descrito por Christopher Alexander em [Uma Linguagem de Padrões](https://www.amazon.com.br/Uma-Linguagem-Padr%C3%B5es-Christopher-Alexander/dp/8565837173/).
Padrões de projeto são soluções comuns para alguns problemas típicos, mas acredito que não deva ser algo que seja uma organização de arquivos que fica mais bonita ou que agrade a preferência do programador que está construindo, porque tudo isso é sempre do ponto de vista de quem implementa. Para, ser imparcial durante a definição de um bom padrão de projeto eu levo alguns pontos em consideração de que ele precisa cumprir:
- **Testabilidade** — Que seja possível implementar diferentes testes (unitário, integrado, E2E, mutação, regressão e etc…), precisa ser fácil testar, a porcentagem de coverage não significa qualidade de teste e muito menos de software, porém quanto mais difícil de testar mais cenários os desenvolvedores irá deixar de testar ao longo do caminho.
- **Manutenabilidade** — A alteração de algo em funcionamento, não deveria afetar muitos lugares do organismos. Abstrações e camadas podem parecer burocráticas e verbosa, mas elas garantem a manutenção, isolamento do contexto e sua sanidade no futuro.
- **Escalabilidade** — Uma aplicação assim como uma cidade precisa ser preparada para abrigar muitos cidadãos (usuários) e os engenheiros que irão mantê-la (devs). Não é só escalar apenas código, é escalar a capacidade de muitas pessoas atuarem no mesmo projeto, um projeto grande com muitas pessoas, irá enfrentar problemas como: CI/CD, Ambiente de deploy, PRs e merges, rollback, estratégias de deploy e etc. Para todas as nossas decisões temos que levar isso em consideração.
- **Separação de responsabilidades (SoC — separation of concerns)** — Isolar domínios e responsabilidades, permite um compartilhamento melhor de recursos sem afetar a performance e complexidade da sua aplicação.
- **Observabilidade —** Se algo quebrou, eu preciso saber o que e em qual momento isso aconteceu, preciso atuar e reverter o problema de maneira rápida e eficiente, garantindo que o erro não volte mais a acontecer. Como podemos saber qual é a parte da sua aplicação que mais consome performance? Será que só usar a URL é o suficiente para entender quantos elementos e domínios temos ali e o que eles realmente fazem?
- **Evitar pastas genéricas e sem propósito, shared e utils —** Quando não conseguimos separar bem nossas entidades e camadas, começamos a criar acoplamento entre os recursos, gerando a necessidade de criar pastas para shared ou utils. Funciona para o primeiro elemento, mas depois de um tempo, os desenvolvedores não irão gastar muito tempo pensando sobre o domínio das coisas, e tudo será útil ou shared.
# Entre camas e cebolas.
Uma padrão de projeto de camadas tradicional (layered pattern), o fluxo de dependências entre as camadas segue de cima para baixo. Na camada superior encontramos a área de interface do usuário (User Interface), no meio o processamento das informações (Business), e na base a saída ou entrada de dados para um serviço externo (Data).

Fazendo uma sim analogia com uma estrutura em React:

O grande ponto sobre esse padrão é o forte acoplamento entre as camadas, uma vez que é preciso substituir qualquer uma delas, as outras camadas relacionadas sofrem uma grande alteração ou conflito de comportamento. Ou seja, imagina que precisamos mudar a camada de Data de um Rest API para um GraphQL, a camada de business sofreria diretamente com essa mudança, além da baixa reutilização dos mesmos recursos em lugares diferentes.
Agora invertemos a ordem das dependências, colocando como centro a nossa camada de Business e as camadas de Data e User Interface como referências ao business. Temos uma forma de garantir que, o que é importante para nossa aplicação, fique segura de mudanças garantindo que as camadas externas fiquem responsáveis por lidar com o mundo exterior.

Vamos mudar um pouco mais o formato do desenho:

## Dessa forma, o que temos de evidência?
- O Foco é a parte central da nossa aplicação **o business**, é onde a nossa mudança de negócio e evolução irá acontecer ao longo do tempo. Em alguns casos é possível manter uma linguagem mais pura, sem muitos recursos atrelado a um framework.
- User Interface e Data são camadas periféricas, onde os recursos são substituíveis e podem ser atrelados a um recurso ou tecnologia.
# Evoluindo nossa cebola
Podemos quebrar ainda mais as camadas dividindo responsabilidades:

## A camada de user Interface se torna:
**Presentation** — Essa é a camada relacionada ao dispositivo do usuário, seja para web, navegador ou mobile. Os controles de rotas, parâmetros de urls, diagramação visual da página e composição de múltiplos domínios, devem ser tratados aqui. No caso de uma aplicação de front-end os componentes de UI também se encontram aqui, ou seja tome cuidado ao utilizar contextos de domínios em componentes que são apenas UI como Button por exemplo.
## Business é composta de:
**Application** — Essa camada do Business é considerada como operador do domínio, conhecido por alguns como os “Use Cases (Caso de usos)”, são métodos ou componentes que cumprem uma função de controlar o estado do domínio.
**Domain** — Já na camada de domínio, encontramos o controle das nossas principais entidades. “Todos os objetos de uma aplicação deveriam ser uma entidade?” A resposta é não necessariamente, quando você se encontra em uma situação que precisa compartilhar a mesma informação entre dois organismos distintos e vizinhos, você se esbarra no problema da Mutabilidade, logo conseguimos resolver isso com uma entidade Imutável que é parte do domínio. Se o seu domínio for um método de composição de objeto ao invés de um objeto em si também funciona, é um comportamento muito comum em uma estrutura de Functional Programing.
## A camada de Data é separada em:
**Persistence** — Nem todas as aplicações que trabalhei precisou dessa camada, mas cada vez mais tempos aplicações front-ends que permitem o usuário utilizar recursos offline. Logo precisamos persistir um dado primeiramente no dispositivo do usuário, antes de enviar para o back-end. Essa camada abstrai a forma que persistimos essa informação, seja através de um Index DB, Local storage ou memória volátil. Novamente o nosso domínio não deveria se importar quem e como essas informações chegam ou saem.
**Infrastructure** — Aqui configuramos a camada de serviço e adaptadores, uma forma de criar métodos e clientes para controlar nossas chamadas com o back-end, seja ela através de Rest API, Web Assembly, GRPC ou qualquer outro recurso, novamente nosso domínio poderia apenas esperar uma promessa de dados seja como for, a camada de infraestrutura deve se resolver para passar esse acordo. Outra coisa bacana, podemos adicionar aqui transformadores e compositores, ou seja nossa entidade do domínio pode ser um objeto que para existir, precisa ser composto por uma ou mais APIs e modificações em dados, na falta de um BFF, a camada de service trabalha muito bem removendo essa composição do seu domínio e entregando um dado mais puro para as camadas de dentro da cebola.
# As pastas e arquivos precisam respeitar esses nomes?
Não necessariamente, é claro que se estiver com esses nomes facilita a identificação dos contextos por aqueles que já conhecem esse padrão. Porém a adoção dos padrões é um acordo entre os desenvolvedores, e para todos os projetos que implementei, a nomenclatura dos organismos ainda foram o ponto de resistência à mudança pelos desenvolvedores Front-ends. Então minha escolha foi manter os organismos com nomes já conhecido pela comunidade, e criar um dicionário explicando onde se encaixa cada um deles e seus respectivos papéis para o projeto.
Para entender melhor como cada camada ou elemento disso se comporta e funciona, leia a parte 2 dessa publicação.
# Pontos positivos
- Essa estrutura permite tomar decisões mais tarde, ou seja, podemos desenvolver uma feature ou módulo, sem precisar esperar a API ficar pronta, conseguimos criar um mock dos dados e mudar isso no futuro sem afetar o domínio.
- Facilita muito para testar e conseguir garantir uma boa cobertura e qualidade de test.
- É fácil identificar a quebra de um módulo e features em pequenas atividades e PRs.
- Exclui a necessidade de pastas como shared, utils e recursos que são uma gaveta de quinquilharias, no começo do projeto faz sentido, depois de um ano não sabemos se tudo que tem lá é realmente para ser compartilhado ou útil.
- Essa estrutura funciona para qualquer tipo de linguagem ou tecnologia, já usei com projetos Angular, React + NextJS, React, React + Gatsby, VUE e etc.
# Pontos negativos
- A curva de aprendizado é um pouco mais demorada, é uma estrutura robusta e complexa que exige um entendimento do porquê das coisas.
- Antes de desenvolver um domínio é preciso gastar tempo pensando como ele será, o que é relacionado ao seu contexto e o que não é.
- Possui camadas de abstração mesmo que pareça redundante.
- Diferente do Java que é uma linguagem que possui recursos como “Protected”, que consegue bloquear o uso indevido de algumas funcionalidades, o javascript é modo “Freestyle” onde tudo é possível, mas nem tudo deveria ser permitido. Então criar bons padrões de alias, ajuda a identificar quando alguém está usando algo de forma errada.
------
# Referências
Código de exemplo:
https://github.com/RobsonMathias/frontend-onion-architecture
Fontes de estudos:
https://refactoring.guru/pt-br/design-patterns
https://code-maze.com/onion-architecture-in-aspnetcore
https://jeffreypalermo.com/2008/07/the-onion-architecture-part-1
| robsonmathias |
1,847,919 | OpenAI api RAG system with Qdrant | Why? OpenAI has been making it easier and easier to build out GPT agents that make use of... | 0 | 2024-06-19T12:35:00 | https://artur.wtf/blog/qdrant-streamlit-openai-rag/ | openai, opensource, langchain, rag | ## Why?
[OpenAI](https://openai.com/) has been making it easier and easier to build out [GPT agents](https://www.deeplearning.ai/the-batch/how-agents-can-improve-llm-performance/) that make use of your own data to improve the generated responses of the pretrained models.
Agents give a way to inject knowledge about your specific proprietary data into your pipeline, without actually sharing any private information about it. You can also improve the recency of your data too which makes you less dependent on the model's training cycle.
OpenAI has improved the DX, UX and APIs since version 3.5, and has made it easier to create `agents` and embed your data into your custom [`GPTs`](https://openai.com/index/introducing-gpts/). They have lowered the barrier to entry which means that virtually anyone can build their own assistants that would be able to respond to queries about their data. This is perfect for people to experiment on building products. IMO this is a very good approach to enable product discovery for the masses.
Most big AI contenders on the market provide you with a toolbox of high level abstractions and low to no code solutions. The weird thing about my approach to learning things is that not having some understanding of the first principles of the tech I'm using makes me feel a bit helpless, this is why I figured trying to build my own `RAG` system would be a good way to figure out the nuts and bolts.
## What?
I wanted to get a project for running my own pipeline with somewhat interchangeable parts. Models can be swapped around so that you can make the most of the latest models either available on [`Hugginface`](https://huggingface.co/), [`OpenAI`](https://openai.com/) or wherever.
Because things are moving so fast in model research the top contenders are surpassing each other every day pretty much. A custom pipeline would allow us to quickly iterate and test out new models as they evolve. This allows you to try out new models and just as easily rollback your experiment.
What I wound up building is a [`Streamlit`](https://streamlit.io/) app that uses [`qdrant`](https://qdrant.com/) to index and search data extracted from a collection of `pdf` document. The app is a simple chat interface where you can ask questions about the data and get responses from a mixture of `GPT-4` and the indexed data.
## How?
#### 1. Setting up the environment
- use `pyenv` to manage python versions
```bash
# update versions
pyenv update
# install any python version
pyenv install 3.12.3 # as of writing this
# create a virtualenv
~/.pyenv/versions/3.12.3/bin/python -m venv .venv
# and then activate it
source .venv/bin/activate
```
#### 2. Install the dependencies
```bash
# install poetry
pip install poetry
# install the dependencies
poetry install
```
the dependencies section of the `pyproject.toml` file should look like this:
```toml
...
[tool.poetry.dependencies]
python = "^3.12"
streamlit = "^1.32.1"
langchain = "^0.1.12"
python-dotenv = "^1.0.1"
qdrant-client = "^1.8.0"
openai = "^1.13.3"
huggingface-hub = "^0.21.4"
pydantic-settings = "^2.2.1"
pydantic = "^2.6.4"
pypdf2 = "^3.0.1"
langchain-community = "^0.0.28"
langchain-core = "^0.1.31"
langchain-openai = "^0.0.8"
instructorembedding = "^1.0.1"
sentence-transformers = "2.2.2"
...
```
#### 3. Set up the loading of the variables from a config file
- a nice way to manage settings is to use `pydantic` and `pydantic-settings`
```python
from pydantic import Field, SecretStr
from pydantic_settings import BaseSettings, SettingsConfigDict
class Settings(BaseSettings):
model_config = SettingsConfigDict(env_file="config.env", env_file_encoding="utf-8")
hf_access_token: SecretStr = Field(alias="HUGGINGFACEHUB_API_TOKEN")
openai_api_key: SecretStr = Field(alias="OPENAI_API_KEY")
```
this way you can load the settings from `config.env` but variables in the environment override the ones in the file.
- a nice extra is that you also get type checking and validation from `pydantic` including `SecretStr` types for sensitive data.
#### 4. Set up the UI elements
- Streamlit makes it quite easy to strap together a layout for your app. You have a single script that can run via the streamlit binary:
```bash
streamlit run app.py
```
[The gallery](https://streamlit.io/components?category=all) has many examples of various integrations and components that you can use to build your app. You have smaller components like inputs and buttons but also more complex UI tables, charts, you even have [`ChatGPT`](https://streamlit.io/components?category=llms) style templates.
For our chat interface we require very few elements. Generally to create them you only need to use streamlit to initialize the UI.
```python
import streamlit as st
...
def main():
st.title("ChatGPT-4 Replica")
st.write("Ask me anything about the data")
question = st.text_input("Ask me anything")
if st.button("Ask"):
st.write("I'm thinking...")
response = get_response(question)
st.write(response)
...
main()
```
The one thing I find a bit awkward is the fact that if you have elements that need to be conditionally displayed the conditions tend to resemble the javascript pyramid of doom if you have too many conditionals in the same block.
Below is a simple example so you can see what I mean:
```python
if len(pdf_docs) == 0:
st.info("Please upload some PDFs to start chatting.")
else:
with st.sidebar:
if st.button("Process"):
with st.spinner("Processing..."):
# get raw content from pdf
raw_text = get_text_from_pdf(pdf_docs)
text_chunks = get_text_chunks(raw_text)
if "vector_store" not in st.session_state:
start = time.time()
st.session_state.vector_store = get_vector_store(text_chunks)
end = time.time()
# create vector store for each chunk
st.write(f"Time taken to create vector store: {end - start}")
```
This makes me think that it is probably not designed for complex UIs but rather for quick prototyping and simple interfaces.
#### 5. pdf data extraction
- I used the `PyPDF2` library to extract the text from the pdfs. The library is quite simple to use and you can extract the text from a pdf file with a few lines of code.
```python
import PyPDF2
def get_text_from_pdf(pdf_docs):
raw_text = ""
for pdf in pdf_docs:
pdf_file = pdf["file"]
pdf_reader = PyPDF2.PdfFileReader(pdf_file)
for page_num in range(pdf_reader.numPages):
page = pdf_reader.getPage(page_num)
raw_text += page.extract_text()
return raw_text
```
- The extracted text should be chunked into smaller pieces that can be used to create embeddings for the `qdrant` index.
```python
def get_text_chunks(raw_text):
text_chunks = []
for i in range(0, len(raw_text), 1000):
text_chunks.append(raw_text[i:i + 1000])
return text_chunks
```
#### 6. Setting up the `qdrant` server via `docker`
The best way to set up `qdrant` is to use docker and to keep track of the environment setup `docker-compose` is a nice approach. You can set up the `qdrant` server with a simple `docker-compose.yml` file like the one below:
```yaml
version: '3.9'
services:
qdrant:
image: qdrant/qdrant:latest
ports:
- "6333:6333" # Expose Qdrant on port 6333 of the host
volumes:
- qdrant_data:/qdrant/data # Persistent storage for Qdrant data
environment:
RUST_LOG: "info" # Set logging level to info
volumes:
qdrant_data:
name: qdrant_data
```
#### 7. Indexing the data
- The `qdrant` client can be used to index the embeddings and perform similarity search on the data. You can pick and choose the best model for embeddings for your data and swap them out if you find [a better one](https://huggingface.co/spaces/mteb/leaderboard).
```python
def get_vector_store(text_chunks, qdrant_url="http://localhost:6333"):
embeddings = HuggingFaceInstructEmbeddings(model_name="avsolatorio/GIST-Embedding-v0", model_kwargs={"device": "mps"})
vector_store = Qdrant.from_documents(
text_chunks,
embeddings,
url=qdrant_url,
collection_name="pdfs",
force_recreate=True,
)
return vector_store
```
#### 8. sending the query
In order to send the query to `qdrant` you again need to embed it to allow to do a similarity search over your collection of documents.
```python
def get_response(question, qdrant_url="http://localhost:6333"):
embeddings = HuggingFaceInstructEmbeddings(model_name="avsolatorio/GIST-Embedding-v0", model_kwargs={"device": "mps"})
query_vector = embeddings.encode(question)
vector_store = Qdrant(url=qdrant_url, collection_name="pdfs")
response = vector_store.search(query_vector, top_k=1)
return response
```
#### 9. Analysis
You can swap out any of the components in this project with something else. You could use [`Faiss`](https://github.com/facebookresearch/faiss) instead of `qdrant`, you could use `OpenAI` models for everything(embeddings/chat completion) or you could use open models.
You can forego the UI and simply use `fastapi` to create an API to interact with the PDF documents. I hope this gives you some sense of the possibilities that are available to you when building your own `RAG` system.
## Conclusions
- you can build your own agent and have it respond to queries about your data quite easily
- `streamlit` is great for prototyping and building out simple interfaces
- `qdrant` is good for performing similarity search on your data
- when building `RAG` systems you need to make use of embedding models to encode your data
- embedding models are the most taxing parts of the pipeline
- if you have pluggable parts in your pipeline you can swap them out easily to save costs
- `pydantic` and `pydantic-settings` are great for adding type checking and validation to your python code
| adaschevici |
1,893,563 | Use Gemini Pro Asynchronously in Python | When your prompt is too large and the LLM starts to hallucinate, or when the data you want from the... | 0 | 2024-06-19T12:31:32 | https://dev.to/muhammadnizamani/use-gemini-pro-asynchronously-in-python-5b6a | python, ai, machinelearning, gemini | When your prompt is too large and the LLM starts to hallucinate, or when the data you want from the LLM is too extensive to be handled in one response, asynchronous calling can help you get the desired output. In this brief blog, I will teach you how to call Gemini Pro asynchronously to achieve the best results.
**let's go**
First create project in new directory then install following Package
```
pip install asyncio
pip install python-dotenv
pip install aiohttp
```
Go to this following link to get gemini api key
https://aistudio.google.com/app/apikey

in the above picture you can see that **create API key ** click on it and get your api key
Now, create a file named **.env** in your project directory and add your key like this:
API_KEY = "here you copy paste the API key "
Next, create a file named **main.py** in your project directory.
Let's start coding the** main.py** file. First, import all necessary libraries and retrieve the API key from the **.env** file.
```
import asyncio
import os
from dotenv import load_dotenv
import aiohttp
load_dotenv()
API_KEY = os.getenv("API_KEY")
```
Now create a async function which return all list of all prompts
```
async def prompts() -> list:
heros = """
Give me the list of all 10 top highest win rate Dota 2 hero 2023
"""
players = """
Top players in the game Dota 2 in 2023
"""
team = """
Give me the name the name of all team who got directe invite in TI 2023 dota 2.
"""
return [heros, players, team ]
```
Now we are going to send an asynchronous POST request to the Google Generative Language API to generate content based on a given prompt. To do this, we will first set the endpoint we are going to access, then define the headers, and finally create the payload with all the necessary parameters for the endpoint. After that, we will send an asynchronous call to the endpoint. We are accessing response in json.
Note: Session (aiohttp.ClientSession): An instance of the aiohttp ClientSession class.
```
async def fetch_ai_response(session, prompt):
url = f"https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key={API_KEY}"
headers = {
"Content-Type": "application/json"
}
payload = {
"contents": [
{
"parts": [
{
"text": prompt
}
]
}
]
}
async with session.post(url, headers=headers, json=payload) as response:
result = await response.json()
# Extract text from the response
try:
content = result['candidates'][0]['content']['parts'][0]['text']
return content
except (KeyError, IndexError) as e:
# Log the error and response for debugging
print(f"Error parsing response: {e}")
print(f"Unexpected response format: {result}")
return "Error: Unexpected response format"
```
following function takes a list of prompts and uses the fetch_ai_response function to retrieve the AI response for each prompt.
The function then uses asyncio.gather to run the fetch_ai_response function in parallel for each prompt.
The results are returned as a list of responses.
```
async def test_questions_from_ai() -> list:
prompts_list = await prompts()
async with aiohttp.ClientSession() as session:
tasks = [fetch_ai_response(session, prompt) for prompt in prompts_list]
results = await asyncio.gather(*tasks)
return results
```
Now calling test_questions_from_ai function Asynchronously
```
if __name__ == "__main__":
responses = asyncio.run(test_questions_from_ai())
for inx, response in enumerate(responses):
print(f"Response: {inx} ", response)
```
now run following command to run see the response
```
python main.py
```
check the code on following repo on github
https://github.com/GoAndPyMasters/asyncgemini
here is my github profile
https://github.com/MuhammadNizamani
If you need any help contact with on linkedin
https://www.linkedin.com/in/muhammad-ishaque-nizamani-109a13194/ | muhammadnizamani |
1,893,562 | What is SSL pinning, and how do you implement it in a mobile app? | What is SSL Pinning? SSL pinning is a security technique used to ensure that an... | 0 | 2024-06-19T12:26:15 | https://dev.to/chariesdevil/what-is-ssl-pinning-and-how-do-you-implement-it-in-a-mobile-app-1gjl | sslpinning, appdevelopment, mobileappdevelopment, appsync | ## What is SSL Pinning?
SSL pinning is a security technique used to ensure that an application only communicates with a trusted server. It involves storing the server's SSL certificate (or a public key) within the app itself, allowing the app to verify the server's identity directly rather than relying solely on the operating system's trust store. This helps prevent man-in-the-middle (MITM) attacks, where an attacker intercepts and potentially alters the communication between the client and the server.
## Why is SSL Pinning Important?
SSL pinning adds an extra layer of security for mobile applications, particularly those handling sensitive data, such as financial information, personal data, or confidential business communications. Without SSL pinning, an attacker with a fraudulent certificate could intercept and decrypt HTTPS traffic, even if the traffic is encrypted, because the attacker’s certificate would be trusted by the system’s default certificate store.
## Best Practices for SSL Pinning
**Fallback Mechanism:** Implement a fallback mechanism to handle scenarios where the certificate has changed legitimately. This can include app updates or a mechanism to update the pinned certificates dynamically.
**Testing:** Ensure thorough testing of SSL pinning implementation to avoid false positives or negatives that could affect the app’s connectivity.
**Security Updates:** Regularly update the pinned certificates and monitor for any security updates or vulnerabilities in the libraries used.
**Error Handling:** Implement robust error handling to provide meaningful messages to users in case of SSL pinning failures.
## Conclusion
SSL pinning is a crucial security measure for protecting mobile applications from MITM attacks. By embedding the server's certificate or public key within the app, developers can ensure that the app communicates only with trusted servers. Implementing SSL pinning in iOS and Android requires careful handling of certificates and network requests, but the enhanced security it provides is well worth the effort. | chariesdevil |
1,893,561 | Error types | 1.TypeError - Data type value is not expected type. 2.SyntaxError - we miss something or left any... | 0 | 2024-06-19T12:25:08 | https://dev.to/husniddin6939/error-types-4f1f | javascript | 1.TypeError - Data type value is not expected type.

2.SyntaxError - we miss something or left any extra characters like this , . or any letter in empty space it response SyntaxError so Js try to knows all written things in it. Also when we write wrong some words in console but we write it in up other word it also considered SyntaxError.

3.ReferenceError - If we call somthing in console and it has not or has not createn , it searchs from the global scope al least and couldn't find and response ReferenceError.

4.Internal Error - while we apply with Js to server and it couldn't response any data , it is an Internal Error. And it response some kind of number value Errors.

5.Range Error - if Js run some Loops and there is no enaught memory for them, it response Range Error and It say so -> Maximum call stack size exceeded, It means that if we call samthing more and there is not any empty spaces for saving it, It will response us this kind of Error type.

6.Eval Error - This exception is not thrown by JavaScript anymore, however the EvalError object remains for compatibility.

7.URI Error - It can also be thrown by encodeURI and encodeURIComponent if the specified string contains illegal Unicode surrogate pairs.
 | husniddin6939 |
1,893,560 | Develop a Powerful DEX with a PancakeSwap Clone Script | DEXs have significantly transformed the cryptocurrency trading space through offering a safer,... | 0 | 2024-06-19T12:25:07 | https://dev.to/rick_grimes/develop-a-powerful-dex-with-a-pancakeswap-clone-script-2n7 | pancakeswap, webdev, programming, blockchain | DEXs have significantly transformed the cryptocurrency trading space through offering a safer, transparent, and decentralized marketplace. PancakeSwap is one of the most utilized DEXs at the moment, as it is effective and easy to use. Interested business person or an entrepreneur who wants to attempt into DeFi, creating a DEX using a clone script of PancakeSwap is a great opportunity.
A PancakeSwap clone script is a pre-built frontend code platform solution that is developed to copy PancakeSwap DEX that is currently active on Binance Smart Chain (BSC). This enables developers to set up their own decentralized exchange solutions for swapping BEP-20 tokens, providing liquidity, yield farming and other DeFi functions in a customized manner without the need to trust on Binance or DEX.
**PancakeSwap Clone Script – Reasons To Choose The Right Platform?**
**Cost-Effective and Time-Saving**
Launching a DEX with the purpose of building it from scratch is rather a time-consuming and cost intense process. A clone script is useful to create a PancakeSwap clone for it saves time and cost. This pre-built script has many of the features and functions of PancakeSwap so users can easily establish their DEX using this template.
**Proven Success Model**
Thus PancakeSwap has become one of the most popular DEX with high trading volume and user base. When you utilize a clone script, you can replicate this successful model to attract users and traders to your platform. The clone script contains all the important characteristics like integrated liquidity pools, yield farming, and staking to preserve a strong and possible DEX.
**Key Features of a PancakeSwap Clone Script**
**User-Friendly Interface**
A PancakeSwap clone script also has an attractive and functional front end for users who want to trade easily. The user-friendly design of the platform allows inexperienced individuals as well as professionals to buy tokens, swap assets and provide liquidity.
**High Security**
Safety is key in the Cryptocurrency Industry. A clone script of PancakeSwap comes with enhanced security measures to safeguard users’ funds and information. Operations through smart contracts and decentralized governance means that the possibility of hacks and frauds is minimized.
**Customizability**
This is one of the largest benefits of using a clone script – the script is highly customizable. It can be easily customized to suit the individual business needs and choice of the user. Starting from branding and designing the UI/UX, up to adding features that will set the DEX apart, a clone script grants adaptability.
**Integration with Multiple Wallets**
A clone script of PancakeSwap allows users to link multiple cryptocurrencies wallets making it easy for users to trade tokens. This feature makes it possible for your DEX to be able to serve more individuals, which will ultimately increase user experience.
**Conclusion**
The cryptocurrency market is developing constantly, and creating a DEX can be an incredibly lucrative business. One of the best strategies to start in the DeFi market is by using a PancakeSwap clone script. The benefits of using it include affordability, flexibility and security, allowing you to build a robust DEX for modern traders.
Our research shows that Fire Bee Techno Services is the best place to get a [**PancakeSwap clone script**](https://www.firebeetechnoservices.com/Pancakeswap-clone-script). Being an experienced clone script provider in the blockchain development industry, Fire Bee Techno Services provides you with the premier PancakeSwap clone scripts that meet your business needs. The solutions we provide will assist you in launching a DEX that is efficient and enjoyable for users. | rick_grimes |
1,873,361 | State Management in React | Introduction State management is a critical aspect of building dynamic and interactive... | 27,559 | 2024-06-19T12:25:00 | https://dev.to/suhaspalani/state-management-in-react-56ea | #### Introduction
State management is a critical aspect of building dynamic and interactive React applications. As applications grow, managing state becomes more complex. This week, we will explore various state management techniques in React, focusing on `useState`, `useReducer`, and the Context API.
#### Importance of State Management
Effective state management ensures that your application remains scalable, maintainable, and predictable. Proper handling of state can improve performance and provide a better user experience.
#### useState Hook
**Introduction to useState:**
- **Definition**: A hook that allows you to add state to functional components.
- **Syntax**:
```javascript
const [state, setState] = useState(initialState);
```
**Using useState:**
- **Example**: Counter Component
```javascript
import React, { useState } from 'react';
function Counter() {
const [count, setCount] = useState(0);
return (
<div>
<p>Count: {count}</p>
<button onClick={() => setCount(count + 1)}>Increment</button>
<button onClick={() => setCount(count - 1)}>Decrement</button>
</div>
);
}
```
**Managing Multiple State Variables:**
- **Example**: Form with Multiple Inputs
```javascript
function UserForm() {
const [name, setName] = useState('');
const [age, setAge] = useState('');
const handleSubmit = (e) => {
e.preventDefault();
alert(`Name: ${name}, Age: ${age}`);
};
return (
<form onSubmit={handleSubmit}>
<input
type="text"
value={name}
onChange={(e) => setName(e.target.value)}
placeholder="Name"
/>
<input
type="number"
value={age}
onChange={(e) => setAge(e.target.value)}
placeholder="Age"
/>
<button type="submit">Submit</button>
</form>
);
}
```
#### useReducer Hook
**Introduction to useReducer:**
- **Definition**: A hook used for managing complex state logic.
- **Syntax**:
```javascript
const [state, dispatch] = useReducer(reducer, initialState);
```
**Using useReducer:**
- **Example**: Counter with useReducer
```javascript
import React, { useReducer } from 'react';
const initialState = { count: 0 };
function reducer(state, action) {
switch (action.type) {
case 'increment':
return { count: state.count + 1 };
case 'decrement':
return { count: state.count - 1 };
default:
throw new Error();
}
}
function Counter() {
const [state, dispatch] = useReducer(reducer, initialState);
return (
<div>
<p>Count: {state.count}</p>
<button onClick={() => dispatch({ type: 'increment' })}>Increment</button>
<button onClick={() => dispatch({ type: 'decrement' })}>Decrement</button>
</div>
);
}
```
**When to Use useReducer:**
- Prefer `useReducer` when dealing with complex state logic or when the next state depends on the previous state.
- Use it when managing multiple related state variables.
#### Context API
**Introduction to Context API:**
- **Definition**: A way to manage state globally across a React application.
- **Why Use Context**: Avoids prop drilling by providing a way to share values between components without passing props through every level of the tree.
**Using Context API:**
- **Creating a Context**:
```javascript
const MyContext = React.createContext();
```
- **Providing Context**:
```javascript
function App() {
const [user, setUser] = useState({ name: 'John Doe', age: 30 });
return (
<MyContext.Provider value={user}>
<UserProfile />
</MyContext.Provider>
);
}
```
- **Consuming Context**:
```javascript
function UserProfile() {
const user = React.useContext(MyContext);
return (
<div>
<h1>{user.name}</h1>
<p>Age: {user.age}</p>
</div>
);
}
```
**Combining useReducer and Context API:**
- **Example**: Global State Management
```javascript
const initialState = { count: 0 };
function reducer(state, action) {
switch (action.type) {
case 'increment':
return { count: state.count + 1 };
case 'decrement':
return { count: state.count - 1 };
default:
throw new Error();
}
}
const CountContext = React.createContext();
function App() {
const [state, dispatch] = useReducer(reducer, initialState);
return (
<CountContext.Provider value={{ state, dispatch }}>
<Counter />
</CountContext.Provider>
);
}
function Counter() {
const { state, dispatch } = React.useContext(CountContext);
return (
<div>
<p>Count: {state.count}</p>
<button onClick={() => dispatch({ type: 'increment' })}>Increment</button>
<button onClick={() => dispatch({ type: 'decrement' })}>Decrement</button>
</div>
);
}
```
#### Conclusion
Effective state management is vital for building robust React applications. By mastering `useState`, `useReducer`, and the Context API, you can manage state more efficiently and build scalable applications.
#### Resources for Further Learning
- **Online Courses**: Platforms like Udemy, Pluralsight, and freeCodeCamp offer in-depth courses on React state management.
- **Books**: "Learning React" by Alex Banks and Eve Porcello, "React - The Complete Guide" by Maximilian Schwarzmüller.
- **Documentation and References**: The official [React documentation](https://reactjs.org/docs/hooks-intro.html) provides comprehensive information on hooks and state management.
- **Communities**: Engage with developer communities on platforms like Stack Overflow, Reddit, and GitHub for support and networking.
| suhaspalani | |
1,893,559 | Casement Windows: Traditional Aesthetics with Modern Performance | Introduction Casement windows could be a choice great those wanting to find conventional looks with... | 0 | 2024-06-19T12:24:32 | https://dev.to/tina_garciag_fbecfd60ef53/casement-windows-traditional-aesthetics-with-modern-performance-11hb | design | Introduction
Casement windows could be a choice great those wanting to find conventional looks with contemporary performance. These windows happen used by more than 100 years and have stood the test regarding the right time for his or her durability, functionality, and beauty. Whether you may be renovating your home or creating a brand new one, casement windows may be a choice great, we will speak about the popular features of casement windows, how they have now been innovated, their security features, utilizing them, along with the quality of the construction
Innovation
While casement windows have been used by more than 100 years, technology advances has permitted for great innovation. Today's casement windows are made out of top-quality materials for instance plastic, aluminum, or timber, which can be more powerful than conventional china aluminum windows materials based in the last. Furthermore, casement windows go with unique features like low-e coatings, which mirror UV along with heat rays before they enter the space. Other innovations consist of self-cleaning cup, which operates on the all chemical unique to split up dust and grime and enable water to scrub it away
Security
Casement windows have security strong whenever utilized precisely. They have been constructed with a club hinge that holds the screen tightly in place, making it hard for intruders to make the screen available through the outside. Modern casement windows likewise incorporate securing mechanisms which can be easy to use and gives protection extra. Furthermore, because casement windows open through the overall part, property owners can quickly and properly evacuate in the eventuality of an situation urgent
Service
Casement windows are low-to-zero maintenance and require only cleaning occasional which can be performed insurance firms a moderate detergent and water. Nevertheless, if it breaks, it's important to make contact with an expert professional to ensure appropriate installation plus an optimal seal if you wish to change a window, or
Quality
The standard of casement windows differs according to the maker and product. Vinyl and aluminum structures are more powerful than wood structures. Also, seek out windows with low-emissivity cup, which reflects temperature and obstructs UV harmful. Be sure to purchase inswing window casement windows from reputable manufacturers whom provide warranties on the ongoing products and services
Application
Casement windows may be used in just about any available space regarding the home, nevertheless they are oftentimes found in living spaces, rooms, and kitchen areas. They are typically specially beneficial in places where airflow is vital, the home or restroom. Casement aluminum windows are available in a variety of styles, sizes, and colors, to greatly help the fit is discovered by you perfect your home's looks
| tina_garciag_fbecfd60ef53 |
1,893,557 | How an ICO Development Company Can Help You Grow Your Business? | In the fast-evolving landscape of blockchain technology, initial coin offerings (ICOs) have emerged... | 0 | 2024-06-19T12:21:28 | https://dev.to/roberttony03/how-an-ico-development-company-can-help-you-grow-your-business-1bog | icodevelopment, icodevelopmentcompany, blockchain, cryptocurrency | In the fast-evolving landscape of blockchain technology, initial coin offerings (ICOs) have emerged as a revolutionary means of fundraising for startups and established businesses alike. An **[ICO development company](https://blocktunix.com/ico-development-services/)** plays a crucial role in harnessing this potential, offering specialized services that can significantly impact the growth trajectory of your business.
**Understanding ICO Development:**
ICO development involves creating and launching a digital token sale on a blockchain platform. This process typically includes:
**1. Consultation and Strategy:**
An experienced ICO development team begins with a comprehensive consultation to understand your business goals, target audience, and market positioning. They devise a strategic roadmap customized to maximize your ICO's success.
**2. Smart Contract Development:**
Smart contracts are the backbone of ICOs, ensuring transparent and automated token distribution. Development companies adeptly create and deploy these contracts, ensuring they meet security standards and regulatory compliance.
**3. Token Creation and Distribution:**
Crafting tokens that align with your project's utility or security needs is pivotal. ICO development experts handle token creation, ensuring they are functional, secure, and integrated seamlessly into the blockchain ecosystem.
**4. Security and Compliance:**
Given the regulatory scrutiny around ICOs, security and compliance are paramount. Development companies implement robust security protocols and ensure adherence to legal frameworks, safeguarding both the project and investors.
**5. Marketing and Community Engagement:**
Launching an ICO requires extensive marketing efforts to build investor confidence and attract participation. Development firms provide marketing strategies, community management, and outreach campaigns to enhance visibility and credibility.
**Benefits of Engaging an ICO Development Company:**
**1. Expertise and Experience:**
ICO development companies bring in-depth knowledge and hands-on experience in blockchain technology and tokenomics. Their expertise ensures your ICO is structured to maximize funding potential and navigate regulatory challenges.
**2. Efficiency and Speed:**
Developing an ICO from scratch demands time and resources. Outsourcing to a dedicated development team accelerates the process, allowing you to focus on core business activities while professionals handle technical intricacies.
**3. Security and Trust:**
Security breaches and fraud are prevalent in the digital asset space. ICO development firms implement rigorous security measures, safeguarding your project's integrity and investor trust.
**4. Scalability and Flexibility:**
Whether you're launching a startup or expanding an established enterprise, ICO development services offer scalable solutions tailored to your growth trajectory. They provide customizable features that adapt to evolving market demands.
**5. Comprehensive Support:**
Beyond technical implementation, ICO development companies offer ongoing support post-launch. They monitor token performance, implement upgrades, and ensure seamless integration with exchanges, fostering sustained growth.
**Conclusion:**
Engaging with an ICO development company is a strategic decision that can help your company advance in the competitive blockchain ecosystem. With their knowledge of smart contract development, token production, regulatory compliance, and market outreach, you can confidently traverse the intricacies of launching an ICO. This alliance not only improves fundraising capacity but also lays the groundwork for long-term success and innovation in the decentralized economy. Choosing the appropriate **[cryptocurrency development company](https://blocktunix.com/cryptocurrency-development-company/)** might make all the difference in attaining these objectives efficiently and sustainably.
| roberttony03 |
1,893,554 | Blogger icon a little modified no svg no img no js CSS+HTML only | Check out this Pen I made! | 0 | 2024-06-19T12:18:35 | https://dev.to/tidycoder/blogger-icon-a-little-modified-no-svg-no-img-no-js-csshtml-only-328c | codepen, icon, css, html | Check out this Pen I made!
{% codepen https://codepen.io/TidyCoder/pen/qBGxMpj %} | tidycoder |
1,893,553 | Demystifying the Machine: A Beginner's Guide to Machine Learning Algorithms | Ever wondered how your favorite streaming service recommends movies you'll love, or how spam filters... | 0 | 2024-06-19T12:17:44 | https://dev.to/fizza_c3e734ee2a307cf35e5/demystifying-the-machine-a-beginners-guide-to-machine-learning-algorithms-i33 | machine, machinelearning, algorithms | Ever wondered how your favorite streaming service recommends movies you'll love, or how spam filters effortlessly weed out unwanted emails? The answer lies in the fascinating world of machine learning (ML) algorithms. These algorithms are the secret sauce behind many of today's technological marvels, and understanding them can be surprisingly accessible.
**Learning by Example: The Core of Machine Learning**
Unlike traditional programming, where you provide explicit instructions, machine learning algorithms learn from data. Imagine teaching a friend a new game by showing them examples of winning moves. Similarly, machine learning algorithms are trained on large datasets, enabling them to identify patterns and make predictions based on new, unseen data.
**The Algorithm All-Stars: Unveiling Popular Techniques**
The world of machine learning boasts a diverse range of algorithms, each with its own strengths and applications. Let's explore a few popular ones:
_Supervised Learning:_ Like a student learning with a teacher, supervised learning algorithms are trained on data where the desired output is already labeled. For example, an email spam filter might be trained on a dataset of emails labeled as "spam" or "not spam."
_Unsupervised Learning:_ Here, the algorithm is presented with unlabeled data and must find hidden patterns or groupings within it. Think of organizing a messy room – unsupervised learning algorithms can categorize objects based on similarities without prior instruction.
_Reinforcement Learning:_ Imagine training a dog with rewards and punishments. Reinforcement learning algorithms learn through trial and error, receiving feedback in the form of rewards for good decisions and penalties for bad ones. This is often used in training AI for games or robot control.
**Beyond the Basics: Where to Deepen Your Dive**
This is just a glimpse into the vast and exciting world of machine learning algorithms. If you're eager to delve deeper, consider enrolling in a Machine Learning Course. These courses provide a structured learning path, equipping you with the necessary skills to build and train your own machine-learning models. You'll explore various algorithms in detail, learn data analysis techniques, and gain practical experience through hands-on projects.
**The Future is Machine Learning:**
Machine learning is rapidly transforming industries, from healthcare and finance to manufacturing and entertainment. Understanding these algorithms empowers you to participate in this technological revolution. With a solid foundation in machine learning, you can pursue exciting careers in data science, artificial intelligence, and various other cutting-edge fields.
Ready to embark on your machine learning journey? Start by exploring the resources available online, and consider enrolling in a [Machine Learning Course](https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/) to take your understanding to the next level. The world of machine learning awaits – and with a little effort, you can unlock its potential and become an active participant in shaping the future. | fizza_c3e734ee2a307cf35e5 |
1,893,552 | Essential Tips for Maintaining Polyester Mooring Rope | Vital Recommendations for Maintaining Polyester Mooring Rope Have you ever before become aware of... | 0 | 2024-06-19T12:15:15 | https://dev.to/tina_garciag_fbecfd60ef53/essential-tips-for-maintaining-polyester-mooring-rope-5827 | design | Vital Recommendations for Maintaining Polyester Mooring Rope
Have you ever before become aware of the term Polyester Mooring Rope? Essentially, it is a style of Rope helped make of Polyester fibers that are interweaved with each other to produce a solid and durable Rope. This Rope is frequently utilized for Mooring watercrafts, ships, and various other boat. We'll discuss some vital recommendations for maintaining Polyester Mooring Rope in order that it lasts much a lot longer and does much a lot better.
Advantages of Polyester Mooring Rope
Polyester Mooring Rope has lots of advantages over various other kinds of Ropes. It is sturdy, heavy duty, and immune to UV radiations, deep sea, and water absorption. This produces it best for use in aquatic settings where mooring anchor rope are consistently left open to extreme weather. Polyester Mooring Rope additionally has exceptional adaptability and flexibility, which enables it to extend and soak up surprise without damaging.
8 - Copy.jpg
Innovation in Polyester Mooring Rope
Recently, Polyester Mooring Rope has gone through substantial innovation. Makers have launched brand new fibers and finishes that enrich its own stamina, sturdiness, and various other residential buildings. A few of these innovations consist of including safety finishes that enhance protection to abrasion and abrasion protection and utilizing fibers that are much a lot extra immune to use and tear.
Safety when Using Polyester Mooring Rope
Safety is regularly a leading concern when using Polyester Mooring Rope. It is crucial to use the appropriate size and duration of Rope for the task. Utilizing a ship mooring rope that's as well slim or even as well quick can easily top to damage or even failing, which could be unsafe. It is additionally essential to evaluate the Rope on a regular basis for indications of use and tear or even harm.
How to Use Polyester Mooring Rope
Polyester Mooring Rope could be utilized in an array of applications, including Mooring watercrafts, getting freight lots, and lugging. When using Polyester Mooring Rope, it is crucial to manage it along with treatment and stay clear of quick jerks or even spins. Regularly double-check that the knots are get and that the Rope is attached to a strong support aspect.
Service and Quality of Polyester Mooring Rope
The service and quality of Polyester Mooring Rope can easily differ depending upon the producer and the provider. It is crucial to buy from a trusted provider that promises top notch and trustworthy items. Frequent upkeep and evaluation of the Rope can easily additionally expand its own life-span and guarantee its own efficiency.
8.jpg
Application of Polyester Mooring Rope
Polyester Mooring Rope is a functional and practical device for an array of applications. In the aquatic business, it is utilized for linking off watercrafts, ships, and various other marine rope boat. It is additionally utilized in the logistics business for getting freight lots. It can easily also be utilized for lugging and drawing massive items. Along with its own stamina, sturdiness, and protection to extreme weather, Polyester Mooring Rope is a crucial device for lots of markets. | tina_garciag_fbecfd60ef53 |
1,893,551 | Twilio made for changing | This is a submission for the Twilio Challenge What I Built I want to make a great and... | 0 | 2024-06-19T12:14:55 | https://dev.to/gerardtheeagle/twilio-made-for-changing-4pbm | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
I want to make a great and amazing chatbot for user to allow then to ask questions about me and other things
## Demo
<!-- Share a link to your app and include some screenshots here. -->
gerardocover.great-site.net


## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
In fact, i will use AI specifically Google Gemini to incorporite in the chatbot for making smooth and efficient User Experience
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
<!-- Thanks for participating! → | gerardtheeagle |
1,893,550 | Accelerate Your Career with QA Testing Training and Placement Services | In today’s competitive job market, finding a career that offers both growth and stability can be... | 0 | 2024-06-19T12:14:46 | https://dev.to/pradeep_kumar_0f4d1f6d333/accelerate-your-career-with-qa-testing-training-and-placement-services-52h4 | In today’s competitive job market, finding a career that offers both growth and stability can be daunting. Quality Assurance (QA) testing has evolved into a vital component of the software development lifecycle, ensuring products meet stringent quality standards before reaching consumers. For those entering this dynamic field, [QA Testing Training and placement](https://www.h2kinfosys.com/courses/qa-online-training-course-details/)services present an exceptional opportunity to acquire essential skills and secure employment. This guide delves into how these services can propel your career, their benefits, and offers guidance on selecting the right service for your needs.
## The Importance of QA Testing in Software Development
QA testing is pivotal in software development, involving meticulous bug identification and resolution to meet functional and performance criteria before release. QA testers play a crucial role in ensuring software products are of high quality, reliability, and usability, indispensable to project success.
## Why QA Testing is Crucial
Ensures Product Quality: Identifies defects that could affect functionality, usability, and performance, ensuring high-quality outcomes.
Enhances User Satisfaction: Reliable software builds user trust and satisfaction.
Reduces Costs: Early issue detection minimizes post-release expenses.
Promotes Efficiency: Prevents delays and disruptions in development.
## Benefits of QA Testing Training and Placement Services
These services offer a structured path into software quality assurance, providing comprehensive training, hands-on experience, and job placement support, preparing graduates for successful careers.
## Comprehensive Training
QA testing programs cover diverse topics from basic testing principles to advanced automated tools, ensuring a thorough grasp of QA methodologies.
## Key Training Areas
Software Testing Fundamentals: Basics, testing types, lifecycle, and methodologies.
Test Design Techniques: Crafting effective test cases and plans.
Automation Tools: Training on tools like Selenium, QTP, or LoadRunner.
Performance Testing: Evaluating software performance under varied conditions.
Security and Mobile Testing: Techniques for identifying vulnerabilities and testing mobile applications.
## Hands-On Experience
Emphasis on practical learning through projects and simulations equips learners with real-world skills essential for workplace readiness.
## Placement Services
Critical in bridging the gap between training and employment, offering:
Resume Building: Professional resume crafting highlighting relevant skills.
Interview Preparation: Coaching on answering questions and presenting confidently.
Job Search Assistance: Access to listings and connections with employers.
Networking Opportunities: Industry networking events and connections.
## Choosing the Right QA Testing Training and Placement Service
Considerations include accreditation, reputation, curriculum coverage, instructor expertise, learning flexibility, success rates in job placements, and comprehensive support services.
The Path to Success: A Step-by-Step Guide
## Navigate your journey:
Understand Basics: Familiarize with QA testing fundamentals.
Enroll in a Program: Choose a reputable training program aligned with career goals.
Gain Practical Experience: Actively engage in hands-on projects and seek additional opportunities.
## Build Technical Skills: Develop QA-specific and technical proficiencies.
Prepare for Certifications: Obtain industry-recognized certifications.
Leverage Placement Services: Utilize resume refinement, interview coaching, and networking.
Network Actively: Join associations, attend events, and connect on professional platforms.
Apply and Interview: Tailor applications, demonstrate skills in interviews.
Continual Learning: Stay updated with industry trends and practices.
Conclusion
[qa analyst training and placement](https://www.h2kinfosys.com/courses/qa-online-training-course-details/) services offer a structured pathway to a fulfilling career in software quality assurance, equipping individuals with the skills and support needed for success. Whether starting anew or enhancing existing skills, these programs accelerate careers in a dynamic field ripe with opportunities, especially with the flexibility and advantages of online QA training.
| pradeep_kumar_0f4d1f6d333 | |
1,893,549 | How to cancel capcut pro subscription | If you have subscribed to the premium version of capcut which is commonly known as capcut pro. You... | 0 | 2024-06-19T12:14:27 | https://dev.to/thecapsapk/how-to-cancel-capcut-pro-subscription-2pl1 | capcut, videoediting, website, capcutwebsite | If you have subscribed to the [premium version of capcut](https://thecapsapk.com/how-to-cancel-the-capcut-pro-subscription/) which is commonly known as capcut pro. You want to cancel its subscription and you dont know how to do it then must visit this insightful article for help. | thecapsapk |
1,890,316 | A simple distributed lock implementation using Redis | When you want to make sure only one process modifies a given resource at a time you need a lock. When... | 0 | 2024-06-19T12:14:24 | https://dev.to/woovi/a-simple-distributed-lock-implementation-using-redis-445c | lock, redis | When you want to make sure only one process modifies a given resource at a time you need a lock.
When you have more than one pod running in production to your server in your Kubernetes, you can't lock only in memory, you need a distributed lock.
## Implementation using _redlock_
```ts
import Redis from 'ioredis';
import Redlock from 'redlock';
const redis = new Redis(config.REDIS_HOST);
export const redlock = new Redlock([redis], {
retryCount: 15,
});
export const lock = async (key: string, duration: number) => {
try {
const lockAcquired = await redlock.acquire([key], duration);
return {
unlock: async () => {
try {
await lockAcquired.release();
} catch (err) {
}
},
};
} catch (err) {
return {
error: 'Unable to get lock',
unlock: () => {},
};
}
};
```
Usage
```ts
const { error, unlock } = await lock('lockKey');
// error trying to get
if (error) {
return { success: false, error: String(error) };
}
try {
// modify a shared resource
await unlock();
return result;
} catch (err) {
await unlock();
return { success: false, error: String(err) };
}
```
## Use cases
You can use a distributed lock to update the balance of a ledger account.
You can use it to avoid two processes consuming the same API at the same time.
## In Conclusion
Distributed locks are one of many possible approaches to handle concurrency to shared resources. You could use a conditional put. Use a queue to process just one event at a time.
Investigate and discover what is the best solution for your specific scenario.
---
[Woovi](https://www.woovi.com) is an innovative startup revolutionizing the payment landscape. With Woovi, shoppers can enjoy the freedom to pay however they prefer. Our cutting-edge platform provides instant payment solutions, empowering merchants to accept orders and enhance their customer experience seamlessly.
If you're interested in joining our team, we're hiring! Check out our job openings at [Woovi Careers](https://woovi.com/jobs/).
---
Photo by <a href="https://unsplash.com/@kenziem?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Mackenzie Marco</a> on <a href="https://unsplash.com/photos/gold-colored-and-silver-colored-padlocks-8qpFv44zkMc?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
| sibelius |
1,893,548 | جلـ | A post by Teret Terw | 0 | 2024-06-19T12:13:26 | https://dev.to/saerwd/j-3h6m | saerwd | ||
1,891,426 | Neural Network (explained in 1 Byte) | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-19T12:11:08 | https://dev.to/jjokah/neural-network-explained-in-1-byte-2hop | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Just like our brain, a Neural Network is made up of interconnected "neurons". These neurons work together by learning from (**input**) data and getting better at tasks (in the **hidden layer**) to give (**output**) predictions or decisions.

## Additional Context
A Neural Network is a computational model inspired by the structure and function of the human brain. It consists of a series of interconnected "neurons" or nodes organized in layers. These layers include:
1. **Input Layer**: Each node in this layer represents a feature from the input data. For example, if the data is an image, each node might represent a pixel value.
2. **Hidden Layers**: This is where the real magic happens. This layer consists of multiple nodes that process and transform the input data through a series of _weighted_ connections. Each connection has a weight that gets adjusted during training based on the input data and the desired output to minimize errors.
3. **Output Layer**: This layer takes the outputs of the last hidden layer and transforms them into the final prediction of the network. For example, if it is an image recognition task, each node in the output layer might represent different predictions to identify the image.

In summary, a Neural Network mimics the human brain's ability to learn and make decisions by processing input data through interconnected layers of neurons, adjusting weights, and minimizing errors to improve performance.
| jjokah |
1,893,547 | Optimal Algorithms for Aligning Glock Dovetail Optic Mounts | Aligning Glock dovetail optic mounts accurately is crucial for optimal shooting performance. Start... | 0 | 2024-06-19T12:09:14 | https://dev.to/marylisa3245/optimal-algorithms-for-aligning-glock-dovetail-optic-mounts-57oo | Aligning Glock dovetail [optic mounts](https://www.versatactical.com/product/rmr-glock-dovetail-optic-mounting-kit/) accurately is crucial for optimal shooting performance. Start with a visual inspection of the dovetail cut and optic mount, ensuring there are no defects or obstructions. Clean both parts thoroughly to remove any dirt or debris. Use appropriate tools, such as a dovetail sight tool or pusher for precise control, a torque wrench for consistent screw tension, and leveling devices like bubble or digital levels to ensure perfect alignment.
Place the optic mount into the dovetail cut and use a non-marring mallet to tap it gently into place, ensuring it's initially centered. Utilize the dovetail sight tool to make fine, incremental adjustments, ensuring the optic mount is perfectly aligned with the slide. Once aligned, secure the optic mount screws with the torque wrench to the recommended torque settings, ensuring a secure and stable fit. This methodical approach ensures that the optic is properly aligned, providing accuracy and reliability. | marylisa3245 | |
1,893,546 | Physics Engine: I couldn’t wait to explore more. | Modern video games feature stunning visuals and realistic physics simulations that immerse players in... | 0 | 2024-06-19T12:08:48 | https://dev.to/zoltan_fehervari_52b16d1d/physics-engine-i-couldnt-wait-to-explore-more-a88 | physicsengine, gamephysics, gamedev, gameprogramming | Modern video games feature stunning visuals and realistic physics simulations that immerse players in virtual worlds. At the heart of these realistic interactions is the physics engine, a crucial component of game engines.
## So? What’s next?
Now I am going to explore the technical aspects of physics engines, their function, and integration with game engines, along with their impact on gameplay experience and notable implementations.
## Impact on Gameplay Experience
Physics engines enhance gameplay by creating immersive and dynamic environments that respond realistically to player actions. They allow for the simulation of weather, fire, water, and destructible environments, adding visual appeal and interactivity. The unpredictability of physics simulations introduces excitement and challenges, making gameplay more engaging.
## Integration with Game Engines
Physics engines are integral to game engines, simulating real-world physics and creating interactive movements. They interact with other components like graphics and audio engines to create seamless gameplay. Integration involves optimizing computational resources and using APIs to customize the engine. Proper integration enhances realism, allowing for natural movements and interactions within the game environment.
## Technical Background
Developing a physics engine involves complex mathematical models and algorithms to simulate real-world physics. The accuracy and performance of these models are crucial, particularly in fast-paced games. Developers must balance realism with hardware and software limitations to ensure smooth gameplay.
## How a Physics Engine Works
A physics engine operates through several steps:
1. Input: Receives data on objects’ physical properties.
2. Simulation: Calculates movements based on physics laws.
3. Collision Detection: Identifies and calculates collisions.
4. Response: Adjusts objects’ positions and velocities post-collision.
These steps create dynamic and immersive gaming experiences by simulating realistic interactions.
## Popular Physics Engines
1. Unity Physics: Known for accurate collision detection and performance in 2D and 3D games.
2. PhysX (Nvidia): Features dynamic destruction and GPU acceleration; used in games like Grand Theft Auto V.
3. Havok: Used in triple-A games for advanced character animations and AI simulations; featured in Halo and Assassin’s Creed.
4. Bullet Physics: Open-source engine known for stability and real-time soft body physics; used in Grand Theft Auto IV.
5. Box2D: Lightweight and efficient 2D engine used in games like Angry Birds.
## Applications Beyond Video Games
[Physics engines](https://bluebirdinternational.com/physics-engine/) are used in various industries for simulations and modeling:
- Virtual Reality: Creating realistic VR experiences.
- Robotics: Simulating robot movements for design testing.
- Architectural Simulations: Modeling buildings and environmental conditions.
- Medical Simulations: Simulating physical phenomena in the human body for study.
## Emerging Technologies
Integration of machine learning (ML), artificial intelligence (AI), and blockchain is transforming physics engines:
- ML and AI Enhancements: Adaptive physics, enhanced realism, and predictive modeling.
- Blockchain: Potential for decentralized computing and asset interoperability.
- Virtual and Augmented Reality: Crucial for achieving lifelike interactions in VR and AR.
| zoltan_fehervari_52b16d1d |
1,893,545 | PVC Resin: The Foundation of PVC Manufacturing | Headline: PVC Material: The Structure Obstruct of PVC Production PVC material is actually the... | 0 | 2024-06-19T12:06:25 | https://dev.to/tina_garciag_fbecfd60ef53/pvc-resin-the-foundation-of-pvc-manufacturing-29e2 | design | Headline: PVC Material: The Structure Obstruct of PVC Production
PVC material is actually the component that is essential the produce of PVC items
It is actually a flexible as well as affordable polyvinyl chloride resin product that's utilized thoroughly in the industries that are commercial well as building
This post that is short certainly talk about the benefits, security, development, as well as high top premium of PVC material, in addition to its own requests as well as exactly how it could be utilized
Benefits of PVC Material
Among the primary advantages of PVC material is actually its own flexibility
PVC could be created right in to items that are various are actually utilized in different markets
PVC pipelines are actually commonly utilized in the building market for sprinkle drain bodies as well as source
They are actually resilient, corrosion-resistant, as well as have actually a life expectancy that is lengthy
PVC movie is actually utilized in product packing requests as it is actually versatile, clear, as well as
PVC accounts are actually utilized in doors and window frameworks, offering protection as well as security coming from the aspects
Another benefit of PVC material is actually its own inexpensive
PVC is actually created coming from affordable materials that are basic well as needs much less power towards create compared to various other products such as glass or even steel
PVC items are actually likewise light-weight, creating all of them simpler towards set up as well as transfer
Development in PVC Production
Recently, the PVC market has actually created developments that are considerable the development of PVC items
Certainly there certainly are actually currently PVC items that are actually lead-free, as well as some that are actually created coming from reused PVC
These developments have actually created pvc resin a much more product that is environmentally-friendly well as have actually enhanced the sustainability of the market
Security of PVC Material
PVC is actually a risk-free product for utilize in a selection of requests
It is actually resilient as well as can easily endure severe temperature levels without derogatory or even launching harmful compounds
PVC pipelines, for instance, are actually utilized towards transfer consuming sprinkle as well as have actually been actually revealed to become risk-free for this function
Nevertheless, it is essential towards details that PVC items ought to certainly not be actually shed, as this can easily launch hazardous chemicals right in to the sky
PVC items ought to likewise certainly not enter exposure to various other chemicals or even products that might respond along with the PVC as well as trigger it towards launch hazardous compounds
Utilizing PVC Material
PVC items are actually simple towards set up as well as utilize
PVC pipelines, for instance, could be quickly reduce as well as signed up with utilizing PVC joints or even ports
PVC movie could be quickly covered about items towards safeguard all of them throughout transport or even storing
When utilizing PVC items, it is essential towards comply with the manufacturer's directions thoroughly towards guarantee the item is actually utilized properly as well as securely
For instance, PVC pipelines should be actually correctly sustained to avoid drooping or even stress on the joints, which can easily trigger leakages
Solution as well as High premium that is top of Material
The PVC market is actually understood for offering solution that is outstanding its own clients
PVC producers function carefully along with their clients towards guarantee they are actually offering the appropriate items for their requirements that are particular
PVC items are actually likewise based on extensive quality assurance examinations towards guarantee they satisfy market resin pvc requirements as well as are actually risk-free for utilize
Requests of PVC Material
PVC material has actually lots of requests in different markets
In the building market, PVC pipelines are actually utilized for supply of water as well as drain bodies, in addition to for cable television security
PVC movie is actually utilized in product packing as well as publishing requests, while PVC accounts are actually utilized in door frameworks as well as home window
PVC floor covering is actually utilized in industrial as well as domestic structures, offering resilient as well as easy-to-clean surface areas | tina_garciag_fbecfd60ef53 |
1,893,544 | Validate user path access on edge with NextAuth & Next.js Middleware | .... Cover Photo by Susan Q Yin on Unsplash Source Code:... | 0 | 2024-06-19T12:05:41 | https://dev.to/smy/validate-user-path-access-on-edge-with-nextauth-nextjs-middleware-4hm6 | nextjs, javascript, webdev, typescript | ....
Cover Photo by [Susan Q Yin on Unsplas](https://unsplash.com/@syinq?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash)[h](https://unsplash.com/photos/red-and-blue-arrow-sign-surrounded-by-brown-trees-BiWM-utpVVc?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash)
Source Code: [https://gist.github.com/smyaseen/eb1ad52e85b7a6665642430bc4b9da31](https://gist.github.com/smyaseen/eb1ad52e85b7a6665642430bc4b9da31)
....
**Helloooooo!**
Hope you're doing great! This is SMY! 👋 Let's Jump right in 🚀
## Contents:
* ⚡ `Some Background of NextAuth and Middleware`
* ⚡ `Implementation of NextAuth and Middleware validating user path access on edge`
## 1️⃣ What -
* **NextAuth.js**: NextAuth.js is an open-source authentication library specifically designed for Next.js applications. It supports various authentication providers such as Google, Facebook, and custom providers, offering a comprehensive solution for user authentication.
* **NextJS Middleware**: NextJS Middleware allows you to intercept and modify requests before your Next.js application handles them. This capability is essential for tasks like authentication, where you may need to redirect users, check authorization, or modify headers dynamically.
## 2️⃣ Why -
* **Flexibility**: NextAuth.js supports multiple authentication providers and allows for customization to fit diverse application needs.
* **Edge Validation**: NextJS Middleware enables validation and customization of requests at the edge, similar to an API gateway, ensuring robust security and tailored user experiences.
* **Streamlined Authorization**: By combining NextAuth.js and NextJS Middleware, you can efficiently manage user authorization and control page access without complex conditional rendering.
## 3️⃣ How -
### Step 1: Install Dependencies
First, ensure you have NextAuth.js installed in your Next.js project. Head over to NextAuth's documentation to learn more about integration:
[https://next-auth.js.org/getting-started/introduction](https://next-auth.js.org/getting-started/introduction)
### Step 2: Configure NextAuth.js
Set up NextAuth.js with your preferred authentication providers and configuration options. Below is a basic example using JWT strategy:
```javascript
// pages/api/auth/[...nextauth].ts
import NextAuth from 'next-auth';
import Providers from 'next-auth/providers';
export default NextAuth({
pages: {
signIn: '/login',
},
providers: [
Providers.JWT({
secret: process.env.JWT_SECRET,
// Additional JWT options if needed
}),
// Add other providers like Google, Facebook, etc.
],
// Optional NextAuth.js configurations
});
```
### Step 3: Implement NextJS Middleware
Create a middleware function to handle authentication and authorization logic:
```javascript
// middleware.ts
import { NextFetchEvent, NextRequest, NextResponse } from 'next/server';
import { getToken } from 'next-auth/jwt';
import { withAuth } from 'next-auth/middleware';
/*
* Match all request paths except for the ones starting with:
* - api (API routes)
* - _next/static (static files)
* - _next/image (image optimization files)
* - favicon.ico (favicon file)
*/
const pathsToExclude = /^(?!\/(api|_next\/static|favicon\.ico|manifest|icon|static)).*$/;
// set of public pages that needed to be excluded from middleware
const publicPagesSet = new Set<string>(['home']);
const rootRegex = /^\/($|\?.+|#.+)?$/;
export default async function middleware(req: NextRequest, event: NextFetchEvent) {
if (!pathsToExclude.test(req.nextUrl.pathname) || publicPagesSet.has(req.nextUrl.pathname))
return NextResponse.next();
const token = await getToken({ req });
const isAuthenticated = !!token;
// if user goes to root path '/' then redirect
// /dashboard if authenticated
// /login if unauthenticated
if (rootRegex.test(req.nextUrl.pathname)) {
if (isAuthenticated) return NextResponse.redirect(new URL('/dashboard', req.url)) as NextResponse;
return NextResponse.redirect(new URL('/login', req.url)) as NextResponse;
}
// redirects user from '/login' if authenticated
if (req.nextUrl.pathname.startsWith('/login') && isAuthenticated) {
return NextResponse.redirect(new URL('/dashboard', req.url)) as NextResponse;
}
// This has to be same page option as in AuthOptions
const authMiddleware = await withAuth({
pages: {
signIn: `/login`,
},
});
return authMiddleware(req, event);
}
```
### Step 4: Understand the Middleware Logic
* **Paths to Exclude**: `pathsToExclude` regex ensures that certain paths like API routes (`/api/*`), static files (`/_next/static/*`), and others are excluded from middleware processing.
* **Public Pages Set**: `publicPagesSet` contains paths that are considered public and shouldn't be protected by middleware.
* **JWT Token**: `getToken({ req })` retrieves the JWT token from the request headers or cookies.
* **Authentication Checks**: The middleware checks if the user is authenticated (`isAuthenticated`). If not, it redirects to the login page (`/login`). If authenticated and accessing the root path (`/`), it redirects to `/dashboard`.
* **NextAuth Middleware**: For other protected routes, it utilizes `withAuth` from `next-auth/middleware` to enforce authentication requirements.
## Wrapping Up:
Implementing NextJS Middleware involves creating a middleware function (`middleware.ts`) that intercepts requests, performs authentication checks, and handles redirection based on the user's authentication status. This approach ensures secure and streamlined authentication within your Next.js application, enhancing both security and user experience.
By following these steps, you can effectively integrate NextAuth.js with NextJS Middleware to manage authentication and authorization in your Next.js projects. Adjust the middleware logic and configuration based on your specific application requirements and security policies.
.....
Now you're equipped to enhance your Next.js applications with robust authentication capabilities using NextAuth.js and NextJS Middleware. Happy coding! 🚀
That's it, folks! hope it was a good read for you. Thank you! ✨
👉 Follow me
[GitHub](https://github.com/smyaseen)
[LinkedIn](https://www.linkedin.com/in/sm-y) | smy |
1,893,543 | LazyReminders | This is a submission for the Twilio Challenge What I Built Where we’re from, people are... | 0 | 2024-06-19T12:04:07 | https://dev.to/muhammad_anas_ad5864b49fe/lazyreminders-1jk0 | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
Where we’re from, people are always on their phones - and WhatsApp is widely used for communication. We built **LazyReminders: a WhatsApp Reminders bot** that leverages Twillio and Natural Language Processing (NLP) to remind people about important events through text messages in WhatsApp. Using this bot, people can set up multiple reminders simply by sending a text *or voice message* and get reminded by a text! Our bot also has additional features such as first-rate data privacy through secure data encryption, timezone sensitivity and deletion/removal for any reminder at any time.
## Demo
On **YouTube**: https://www.youtube.com/watch?v=rPG3okIw5iU&feature=youtu.be
Talk to the bot **Today**: https://www.getlazy.ai/featured/whatsapp-reminder-bot
## Twilio and AI
OpenAI’s GPT-4o came out only days before we launched the bot - and using Twillio’s WhatsApp Sender, we brought GPT-4o into WhatsApp for everyone to use! Our approach was to take the general capabilities of OpenAI’s models and fine-tune it for our use-case: reminders. Using Twillio’s API, which made securely accessing audio from users *simple*, we were able to integrate our application with not just GPT-4o, but also OpenAI’s “Whisper” model.
Before finding Twillio, neither me nor my teammate had ever built anything for WhatsApp: both of us had worked with AI, but I personally didn’t even know that WhatsApp bots existed - and when I found out that they did, the idea of having a phone number connected to a server sounded impossible! Yet with Twillio’s docs we were able to have a working version out on the very same night!
Twillio streamlined the process of obtaining and registering a real phone number as well as connecting our WhatsApp bot to our server. Through Twillio, we brought our complex AI knowledge to WhatsApp for non-technical people such as my sister and father to use every day! Thank you, Twillio!
## Additional Prize Categories
We don’t think our WhatsApp bot fits into any additional prize category. If you think we deserve to be in one of them, we’d be honored to be nominated.
Credits to: https://dev.to/chiefimagineer , https://dev.to/cutycat2000x
| muhammad_anas_ad5864b49fe |
1,893,541 | Decoding the Java Compiler: From Code to Execution | Decoding the Java Compiler: From Code to Execution Introduction: Understanding... | 0 | 2024-06-19T12:01:30 | https://dev.to/scholarhat/decoding-the-java-compiler-from-code-to-execution-47b9 | ## Decoding the Java Compiler: From Code to Execution
## Introduction: Understanding the Java Compiler
The journey of Java code from creation to execution is a fascinating process, meticulously orchestrated by the **[Java Compiler](https://www.scholarhat.com/compiler/java)**. In the programming realm, compilers act as the backbone, translating human-readable code into language that a machine can execute. This introduction to the Java compiler aims to demystify how Java code is processed, compiled, and executed on a machine, ensuring a robust application performance. An integral part of this process includes robust [exception handling in Java](https://www.scholarhat.com/tutorial/java/exception-handling-in-java), a method to manage errors gracefully during runtime.
## The Compilation Process
## Stage 1: Lexical Analysis
The first stage in the Java compilation process is lexical analysis, where the source code is converted into tokens. These tokens are the basic building blocks of code, like vocabulary in human language. During this phase, the compiler scans the code and categorizes elements into identifiers, keywords, literals, operators, and more.
## Stage 2: Syntax Analysis
Following lexical tokenization, syntax analysis checks how tokens are used according to the grammar of Java. This phase is crucial as it ensures that the code’s structure adheres to Java's strict syntax rules, preventing errors during execution.
## Stage 3: Semantic Analysis
Once the structure is validated, semantic analysis begins. Here, the compiler checks the logic of the code, ensuring that operations are feasible and variables are used correctly. For instance, you cannot perform mathematical operations on a boolean variable. Semantic analysis helps in detecting such logical errors.
## Intermediate Code Generation
## Converting High-Level to Low-Level Code
After ensuring the code is lexically, syntactically, and semantically correct, the compiler translates it into an intermediate form. This is a lower level than the original Java code but higher than machine code. It helps in optimizing the code across different platforms.
## Optimization
## Enhancing Performance
Optimization is a critical phase where the intermediate code is refined to run efficiently on any machine. It involves enhancing the code to execute with fewer resources and in less time, without altering its output.
## Code Generation
## The Final Translation
In this final stage of the Java compilation process, the optimized intermediate code is translated into machine language, which can be directly executed by the computer’s processor. This machine code is specific to the target platform’s architecture.
## The Role of the Java Compiler in Execution
## Just-In-Time Compilation
A unique aspect of Java is its Just-In-Time (JIT) compilation, which occurs at runtime. The JIT compiler part of the Java Virtual Machine (JVM) translates the intermediate bytecode (generated by the Java compiler) into machine code just before execution. This process enhances performance by compiling code as it is needed, not all at once.
## Garbage Collection
Another critical feature facilitated by the Java compilation process is garbage collection. This automated memory management feature helps reclaim memory allocated to objects that are no longer in use, preventing memory leaks and enhancing application performance.
## Conclusion: The Impact of the Java Compiler on Software Development
The Java Compiler is an essential tool in the software development lifecycle. It not only translates code but optimizes and prepares it for efficient execution. Understanding the detailed workings of the Java compiler, including its phases from lexical analysis to code generation and features like JIT and garbage collection, provides developers with the insights needed to write better, more efficient Java applications.
From initial code writing to final execution, the Java compiler ensures that applications are robust, scalable, and efficient. This comprehensive overview not only illustrates the critical stages of Java compilation but also highlights how features like exception handling in Java integrate seamlessly to enhance the robustness of applications. The Java compiler, therefore, is not just a tool but a pivotal element in the realm of Java development, enabling programmers to translate human ingenuity into executable and efficient software solutions.
| scholarhat | |
1,885,544 | Develop a Serverless TypeScript API on AWS ECS with Fargate | AWS Fargate is a serverless compute engine that allows you to run containers without managing... | 0 | 2024-06-19T12:00:00 | https://blog.appsignal.com/2024/06/05/develop-a-serverless-typescript-api-on-aws-ecs-with-fargate.html | typescript, node, fargate, aws | AWS Fargate is a serverless compute engine that allows you to run containers without managing servers. With Fargate, you no longer have to provision clusters of virtual machines to run ECS containers: this is all done for you.
Fargate has an Amazon ECS construct that can host an API. In this take, we will build a Fargate service using the AWS CDK, put the API in a docker image, and then host it inside Amazon ECS.
The API will be a pizza API and we'll store the data in a DynamoDB table.
Let’s get started!
## Why Use Fargate with ECS?
Fargate has some benefits over lambda functions because of provisioned servers. When you have a high volume of traffic, you can save on costs by running containers. These can scale out to as many servers as needed to meet the current demand and avoid paying per request. Once you have millions of requests per day or even per hour, paying per actual load starts to become more cost-effective.
The one gotcha here is that scaling out can become a bit of a hassle because, by the time you spin up a new container, it is already too late to handle the current load. A good technique is to come up with scaling strategies ahead of time, which means you definitely must have a predictable load.
## Requirements
Feel free to [clone the sample code for this project from GitHub](https://github.com/beautifulcoder/node-fargate-api).
Be sure to have the latest version of Node and the AWS CDK Toolkit installed.
```shell
> npm install -g aws-cdk
```
Then, simply spin up a new TypeScript project using the toolkit.
```shell
> mkdir node-fargate-api
> cd node-fargate-api
> cdk init --language typescript
> cdk synth
```
## Build the ECR Image
First, install Fastify and DynamoDB in the root `package.json` file. Then, create an `app` folder to contain the application.
```shell
> npm i fastify @aws-sdk/client-dynamodb @aws-sdk/util-dynamodb --save
> mkdir app
```
Create the `app/api.ts` file and add this code snippet:
```typescript
import Fastify from "fastify";
import {
DynamoDBClient,
GetItemCommand,
PutItemCommand,
} from "@aws-sdk/client-dynamodb";
import { unmarshall } from "@aws-sdk/util-dynamodb";
const client = new DynamoDBClient();
const fastify = Fastify({
logger: true,
});
fastify.get("/pizzas/:id", async (request, reply) => {
const { id } = request.params as any;
const res = await client.send(
new GetItemCommand({
TableName: "pizzas",
Key: { id: { N: id } },
})
);
const item = res.Item;
if (item === undefined) {
reply.callNotFound();
return;
}
const pizza = unmarshall(item);
const { ingredients } = pizza;
await reply.status(200).send({ ...pizza, ingredients: [...ingredients] });
});
fastify.put("/pizzas/:id", async (request, reply) => {
const { id } = request.params as any;
const { name, ingredients } = request.body as any;
const pizza = {
id: { N: id },
name: { S: name },
ingredients: { SS: ingredients },
};
await client.send(
new PutItemCommand({
TableName: "pizzas",
Item: pizza,
})
);
await reply.status(200).send({ id, name, ingredients });
});
fastify.get("/health", async (request, reply) => {
await reply.status(200).send();
});
const { ADDRESS = "localhost", PORT = 3000 } = process.env;
const start = async (): Promise<void> => {
try {
await fastify.listen({ port: Number(PORT), host: ADDRESS });
} catch (err) {
fastify.log.error(err);
process.exit(1);
}
};
void start();
```
This is a pizza API with GET/PUT endpoints. We store the data in a DynamoDB table.
_**Note**: The `host` and `port` come from an environment variable when this runs in Fargate. This guarantees the app runs on `0.0.0.0` and not the loopback IP address. The port number must also be `80` for HTTP traffic._
To upload the app to an ECR image, create a repository in the AWS console.
- Log in to AWS and click 'Elastic Container Registry'
- Click 'Create repository'
- Leave it as Private
- Set a name, for example, `pizza-fargate-api`, and make note of this
Next, create the Dockerfile in the root folder.
```text
FROM node:20-alpine
WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig.json ./
COPY app/* ./
RUN npm ci --omit=dev && npm run build
COPY . .
ENV ADDRESS=0.0.0.0 PORT=80
CMD ["node", "app/api.js"]
```
_**Note**: For this docker build to work, be sure to move `typescript` and `@types/node` in the `package.json` file from dev to just plain dependencies._
For the load balancer to get access to the container, be sure to specify the `PORT` and `ADDRESS`. As mentioned, this cannot be the loopback `127.0.0.0` because it does not allow incoming connections from outside the container. In Fastify, this is the default behavior, so we must manually set this IP address.
In ECR, click on the newly created repository, then 'View push commands'. This lists instructions for uploading the docker image to the repository (they are self-explanatory, so we will not repeat them here).
Once the commands complete successfully, the image should appear in the Amazon Elastic Container Registry.

## The AWS CDK
Open the `node-fargate-api-stack.ts` file and drop in this entire code snippet.
```typescript
import * as cdk from "aws-cdk-lib";
import type { Construct } from "constructs";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as ecs_patterns from "aws-cdk-lib/aws-ecs-patterns";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import * as iam from "aws-cdk-lib/aws-iam";
import * as ecr from "aws-cdk-lib/aws-ecr";
import * as logs from "aws-cdk-lib/aws-logs";
export class NodeFargateApiStack extends cdk.Stack {
constructor(scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props);
const vpc = ec2.Vpc.fromLookup(this, "vpc", {
vpcId: "VPC_ID",
});
const cluster = new ecs.Cluster(this, "cluster", {
vpc,
});
const repository = ecr.Repository.fromRepositoryName(
this,
"repository",
"pizza-fargate-api"
);
const service = new ecs_patterns.ApplicationLoadBalancedFargateService(
this,
"api",
{
cluster,
cpu: 256,
desiredCount: 1,
taskImageOptions: {
image: ecs.ContainerImage.fromEcrRepository(repository, "latest"),
containerPort: 80,
logDriver: new ecs.AwsLogDriver({
streamPrefix: "api",
logRetention: logs.RetentionDays.THREE_DAYS,
}),
},
memoryLimitMiB: 512,
publicLoadBalancer: true,
}
);
const taskCount = service.service.autoScaleTaskCount({ maxCapacity: 5 });
taskCount.scaleOnCpuUtilization("cpu-scaling", {
targetUtilizationPercent: 45,
scaleInCooldown: cdk.Duration.seconds(60),
scaleOutCooldown: cdk.Duration.seconds(60),
});
service.targetGroup.configureHealthCheck({ path: "/health" });
const table = new dynamodb.Table(this, "table", {
tableName: "pizzas",
partitionKey: {
name: "id",
type: dynamodb.AttributeType.NUMBER,
},
});
service.taskDefinition.taskRole.addToPrincipalPolicy(
new iam.PolicyStatement({
actions: ["dynamodb:*"],
resources: [table.tableArn],
})
);
}
}
```
The `ApplicationLoadBalancedFargateService` construct needs a VPC (which should already exist in your AWS account). The load balancer directs HTTP traffic from the internet to the cluster. This is the reason why it is publicly available.
The DynamoDB table simply declares a main partition key: the number id and table name. The task definition is then granted full access to the DynamoDB table.
The `cpu`, `desiredCount`, and `memoryLimitMiB` set scalability configuration for the service when it first starts. The service uses task definitions to spin up new containers that can handle the load. You can manually scale Fargate by updating the desired task count in your service definition. This CDK also creates auto-scaling policies based on metrics like CPU utilization. Fargate uses CloudWatch alarms to trigger scaling actions based on metrics or events.
For this CDK to work, be sure to update the main entry point in `bin/node-fargate-api.ts`. Set the account number and region.
## Unit Test the CDK
Type `cdk synth` in a terminal, then inspect the CloudFormation YAML template that will get uploaded to AWS. This has enough information to write unit tests.
Open `node-fargate-api.test.ts` in the `test` folder and enter this code snippet.
```typescript
import * as cdk from "aws-cdk-lib";
import { Template } from "aws-cdk-lib/assertions";
import * as api from "../lib/node-fargate-api-stack";
test("Fargate service created", () => {
const app = new cdk.App();
const stack = new api.NodeFargateApiStack(app, "test-stack", {
env: { account: "0123456789", region: "us-east-1" },
});
const template = Template.fromStack(stack);
template.hasResourceProperties("AWS::ECS::Service", {
LaunchType: "FARGATE",
LoadBalancers: [
{
ContainerName: "web",
ContainerPort: 80,
},
],
});
});
test("DynamoDB table created", () => {
const app = new cdk.App();
const stack = new api.NodeFargateApiStack(app, "test-stack", {
env: { account: "0123456789", region: "us-east-1" },
});
const template = Template.fromStack(stack);
template.hasResourceProperties("AWS::DynamoDB::Table", {
TableName: "pizzas",
});
});
```
This time, it is important to specify the `env` for each test because the VPC lookup in the stack requires the environment. Luckily, you can mock the account id and region because it is just a unit test. What is important is that the unit test verifies the YAML template since we are creating these specific resources via the CDK.
## The Load Balancer
Before we can check that the load balancer works, simply deploy the CDK.
```shell
> cdk deploy
```
_**Note**: If the deploy takes a very long time, it is likely that the load balancer cannot clear the health endpoint check. Simply cancel the deploy, then go back and double-check your CDK._
The CDK should have an output with a publicly accessible URL from the load balancer. You can now hit the two endpoints with CURL.
```shell
> curl -i -X PUT -H "Content-Type: application/json" \
-d "{\"name\":\"Pepperoni Pizza\",\"ingredients\":[\"cheese\",\"tomato\",\"pepperoni\"]}" \
http://LBNAME-ACCOUNT.REGION.elb.amazonaws.com/pizzas/1
> curl -i -X GET -H "Content-Type: application/json" http://LBNAME-ACCOUTN.REGION.elb.amazonaws.com/pizzas/1
```
To smack this API with as many requests as possible, write this K6 script and save it as a JavaScript file.
```javascript
import { check } from "k6";
import http from "k6/http";
export default function () {
const res = http.get(
"http://LBNAME-ACCOUNT.REGION.elb.amazonaws.com/pizzas/1"
);
check(res, {
"is status 200": (r) => r.status === 200,
});
}
```
Then, run a load test using K6:
```shell
k6 run --vus 50 --duration 20m load-test.js
```
This test simulates 50 users hitting the API at the same time for 20 minutes. The reason you want to go for 20 minutes is because Fargate takes a few minutes to spin up new tasks to handle the current load. This is the reason why Fargate does better with predictable load. By the time the new task spins up, it might already be too late. (In contrast, a lambda function cold starts in mere seconds on the slower end and is therefore more suitable for spiky _unpredictable_ traffic).
The load test might also show some failing requests. This happens because tasks are sometimes too slow to spin up and respond to current incoming traffic.
By the end of the load test, you should see the maximum amount of configured tasks running in your service.

Before we end, be sure to fire a `cdk destroy` so you don't accrue further EC2 charges. The one downside to Fargate is that you continue to pay for the running service even when there is no traffic.
## Wrapping Up
In this post, we built and hosted a Fargate service using AWS CDK and Amazon ECS.
We've seen that Fargate running on ECS can be a great alternative to lambda functions. Given predictable traffic and heavy load, it is possible to save on costs because you are not paying per request.
Happy coding!
**P.S. If you liked this post, [subscribe to our JavaScript Sorcery list](https://blog.appsignal.com/javascript-sorcery) for a monthly deep dive into more magical JavaScript tips and tricks.**
**P.P.S. If you need an APM for your Node.js app, go and [check out the AppSignal APM for Node.js](https://www.appsignal.com/nodejs).** | beautifulcoder |
1,893,539 | How a Web Development Agency Secures Java EE Applications with Java Authentication and Authorization Service | Ensuring that sensitive data and functionalities are accessible only to authorized users is a top... | 0 | 2024-06-19T11:59:06 | https://dev.to/jessicab/how-a-web-development-agency-secures-java-ee-applications-with-java-authentication-and-authorization-service-2m1l | webdev, javascript, programming | Ensuring that sensitive data and functionalities are accessible only to authorized users is a top priority for any leading web development agency. Therefore, building secure Java EE applications is essential.
Java EE is a popular framework for enterprise-level web development and provides robust features to safeguard sensitive information and functionalities.
One crucial aspect of this security is authorization in Java EE. This involves verifying user permissions before granting access to specific resources or actions within the application.
Java Authentication and Authorization Service helps organizations achieve this strong access control. Check out how.
## What is Java Authentication and Authorization Service?
Java Authentication and Authorization Service (JAAS) is a standard Java security framework. It is framed to let developers incorporate user-based authentication and authorization features into their applications. It supplants the previous code-centric security model, which depended on the code's source for access permissions. JAAS provides a more detailed method, concentrating on the user executing the code.
## Purposes of JAAS
JAAS serves two primary purposes:
**Authentication:** This process includes confirming the user who is currently running the code. JAAS accomplishes this by using different methods, enabling programmers to incorporate various ways of verifying identity. These methods encompass using a username and password, LDAP integration, etc.
**Authorization:** After a user is verified, JAAS guarantees they have the required privileges to use certain resources or features in the app. This approval mechanism blocks access that shouldn't be granted and safeguards confidential information.
## How does JAAS work?
Java Authentication and Authorization Service (JAAS) can be an instrumental tool to secure secure Java EE Applications. Any reliable web development agency can leverage this to streamline a secure login process by leveraging several key components:
**CallbackHandler**
- The CallbackHandler interface plays a crucial role during login. It acts as a bridge between the application and the user and is responsible for collecting credentials.
- JAAS utilizes callbacks, which are essentially requests for specific information.
- The CallbackHandler interacts with the user, typically prompting for a username and password through these callbacks.
The CallbackHandler dynamically generates callbacks requesting username and password. Then, it collects the user's input for these fields.
**LoginModule**
The LoginModule interface forms the heart of JAAS authentication. It's a pluggable component that encapsulates the logic for verifying a user's identity. The key feature of LoginModule is its flexibility. Top-rated web developers can create custom implementations to support diverse authentication methods. Here are some common examples:
- Database authentication:
A custom LoginModule can connect to a database and verify if the provided credentials match a valid user record.
- LDAP (Lightweight directory access protocol) integration:
Another LoginModule implementation can interact with an LDAP server to authenticate users against a centralized directory.
JAAS offers a set of LoginModule implementations, each specializing in a specific authentication approach. The chosen LoginModule receives the collected credentials from the CallbackHandler and performs the necessary checks based on the chosen authentication method (e.g., querying a database or LDAP server).
**Configuration**
JAAS relies on configuration to specify which LoginModule to use for authentication in a web development agency. This configuration can be defined in a separate login configuration file or programmatically within the application. The configuration essentially acts as an instruction manual outlining the chosen authentication method (represented by the specific LoginModule) that the application should utilize during the login process.
In simpler terms, the configuration defines which authentication "tool" (LoginModule) the application should use from the available toolbox. This ensures the application follows the correct authentication procedure based on the chosen method.
## How a leading web development agency uses JAAS to protect Java EE applications
Following are the steps a web development agency in New York follows to secure Java EE applications with JAAS:
### Implement a CallbackHandler to collect user credentials
The CallbackHandler interface plays a vital role during login by gathering user credentials. It interacts with the user through callbacks, typically prompting for username and password.
Here's a code example of a simple CallbackHandler implementation that uses the console to collect credentials:
```
public class ConsoleCallbackHandler implements CallbackHandler {
@Override
public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
Console console = System.console();
for (Callback callback : callbacks) {
if (callback instanceof NameCallback) {
NameCallback nameCallback = (NameCallback) callback;
nameCallback.setName(console.readLine(nameCallback.getPrompt()));
} else if (callback instanceof PasswordCallback) {
PasswordCallback passwordCallback = (PasswordCallback) callback;
passwordCallback.setPassword(console.readPassword(passwordCallback.getPrompt()));
} else {
throw new UnsupportedCallbackException(callback);
}
}
}
}
```
This implementation iterates through an array of Callback objects and handles specific types like NameCallback and PasswordCallback. It requests the user's password and username using the console and sets the retrieved values on the respective callbacks.
### Create a LoginModule for user authentication
The LoginModule interface is responsible for the [core authentication logic in a web development agency](https://www.unifiedinfotech.net/services/web-design-new-york/). Developers can implement custom LoginModule classes to handle various authentication methods (e.g., database lookup, LDAP integration).
Here's a simplified example in Java of an in-memory LoginModule that stores credentials for demonstration purposes:
```
public class InMemoryLoginModule implements LoginModule {
private static final String USERNAME = "testuser";
private static final String PASSWORD = "testpassword";
private boolean loginSucceeded = false;
@Override
public boolean login() throws LoginException {
// Retrieve credentials from CallbackHandler
NameCallback nameCallback = new NameCallback("username: ");
PasswordCallback passwordCallback = new PasswordCallback("password: ", false);
try {
CallbackHandler callbackHandler = ...; // Get CallbackHandler instance
callbackHandler.handle(new Callback[]{nameCallback, passwordCallback});
String username = nameCallback.getName();
String password = new String(passwordCallback.getPassword());
if (USERNAME.equals(username) && PASSWORD.equals(password)) {
loginSucceeded = true;
}
} catch (IOException | UnsupportedCallbackException e) {
// Handle exceptions appropriately
}
return loginSucceeded;
}
// Implement other required methods of LoginModule (initialize, commit, abort)
}
```
This example LoginModule retrieves credentials from the provided CallbackHandler and compares them with pre-defined values. If a match is found, the login is successful. Real-world implementations would likely connect to external data sources for authentication.
### Configure JAAS with the LoginModule details
JAAS relies on configuration to specify the LoginModule to be used. This configuration can be done in a login configuration file or programmatically.
Here's an example of a login configuration file entry for the InMemoryLoginModule:
```
jaasApplication {
com.example.security.InMemoryLoginModule required debug=true;
};
```
This configuration defines an application named jaasApplication that uses the com.example.security.InMemoryLoginModule. The required flag indicates that this module is mandatory for successful login. The debug=true option enables debug logging for the LoginModule.
### Initialize a LoginContext to start the authentication process
The LoginContext class is the entry point for JAAS authentication in a web development agency. It interacts with the configured LoginModule and performs the login process.
Here's how to initialize a LoginContext using the ConsoleCallbackHandler and the login configuration file in Java:
```
LoginContext loginContext = new LoginContext("jaasApplication", new ConsoleCallbackHandler());
loginContext.login();
```
This code creates a LoginContext instance for the application named jaasApplication and uses the ConsoleCallbackHandler to collect user credentials. The login() method initiates the authentication process by calling the configured LoginModule.
### Once authenticated, check user permissions using the Java security policy
After successful authentication, JAAS associates the user with a Subject object. Permissions are defined within the Java security policy file, which grants specific access control rights to users or roles.
Here's a breakdown of the process:
**Defining permissions:**
Permissions are represented by subclasses of the Permission abstract class. Website developers in NYC can create custom permission classes to define specific access control needs. Here's a basic example in Java:
```
public class ResourcePermission extends BasicPermission {
public ResourcePermission(String name) {
super(name);
}
}
```
This example defines a ResourcePermission that takes a resource name as a constructor argument.
**Granting permissions in the security policy:**
The Java security policy file specifies which users or roles have access to specific permissions. Here's an example policy entry:
```
grant principal com.sun.security.auth.UserPrincipal testuser {
permission com.example.security.ResourcePermission "test_resource";
};
```
This entry grants the test_resource permission to the user testuser. It uses the principal keyword to identify the user and the permission keyword to specify the permission being granted.
**Checking permissions in the application:**
Once permissions are defined and granted, the application can check user permissions using the SecurityManager class. Here's an example in Java:
```
Subject subject = loginContext.getSubject(); // Get Subject from LoginContext
public class ResourceAction implements PrivilegedAction {
@Override
public Object run() {
SecurityManager sm = System.getSecurityManager();
if (sm != null) {
sm.checkPermission(new ResourcePermission("test_resource"));
}
System.out.println("I have access to test_resource!");
return null;
}
}
```
Subject.doAsPrivileged(subject, new ResourceAction(), null);
This example retrieves the Subject object from the LoginContext. It then defines a PrivilegedAction that checks for the test_resource permission using the SecurityManager.checkPermission method. Finally, it calls Subject.doAsPrivileged to execute the action within the context of the authenticated user. This ensures the access of protected resources to only authorized users.
## Authorization in JAAS in a web development agency
After successful user authentication, JAAS steps in to ensure users possess the necessary permissions to access specific resources or functionalities within the application. This process, known as authorization, safeguards sensitive data and prevents unauthorized access.
### Defining permissions
Permissions are the fundamental components of access control in JAAS. They represent granular control over specific resources or actions within an application. Developers of a web development agency can create custom permission classes by subclassing the abstract class Permission. These custom classes define the specific resources or actions being protected.
Here's a basic example in Java:
```
public class ResourcePermission extends BasicPermission {
public ResourcePermission(String name) {
super(name);
}
}
```
This example defines a ResourcePermission that takes a resource name as a constructor argument. This permission could be used to control access to specific files, databases, or functionalities within the application.
### Granting permissions in the security policy
The Java security policy file acts as the central registry for defining user or role permissions. This file specifies which users or roles are granted access to specific permissions. Here's an example policy entry:
```
grant principal com.sun.security.auth.UserPrincipal testuser {
permission com.example.security.ResourcePermission "test_resource";
};
```
This entry grants the test_resource permission to the user testuser. The principal keyword identifies the user, and the permission keyword specifies the permission being granted. Developers can define complex access control rules by creating roles and associating permissions with those roles.
### Checking permissions in the application
Once permissions are defined and granted, applications can leverage the SecurityManager class to enforce authorization checks. The SecurityManager class provides methods to verify if a user has the required permission to perform a specific action.
Here's an example in Java of how an application might check user permissions:
```
Subject subject = loginContext.getSubject(); // Get Subject from LoginContext
public class ResourceAction implements PrivilegedAction {
@Override
public Object run() {
SecurityManager sm = System.getSecurityManager();
if (sm != null) {
sm.checkPermission(new ResourcePermission("test_resource"));
}
System.out.println("I have access to test_resource!");
return null;
}
}
```
Subject.doAsPrivileged(subject, new ResourceAction(), null);
This code retrieves the Subject object associated with the logged-in user from the LoginContext. It then defines a PrivilegedAction that attempts to perform an action requiring the test_resource permission. The SecurityManager.checkPermission method verifies if the user's Subject has the necessary permission before allowing the action to proceed.
By implementing these steps, JAAS empowers developers to establish a robust authorization system within their Java applications. The combination of custom permissions, security policy configuration, and the SecurityManager class ensures that only authorized users can access protected resources and functionalities.
## Conclusion
This was a thorough discussion on protecting a Java EE application with JAAS. A leading web development agency can safeguard its Java EE applications by leveraging JAAS. It helps ensure that user authentication and authorization are handled efficiently and securely, building trust and company reputation.
| jessicab |
1,893,540 | AI vs Solana: Battle of the Meme Coins | Despite their rather humble and insignificant beginnings, meme coins have found their place in the... | 0 | 2024-06-19T11:59:04 | https://dev.to/blockchainx358/ai-vs-solana-battle-of-the-meme-coins-4gm4 | aimemecoins, solanamemecoins, cryptomemeeconomy, blockchaininnovation |

Despite their rather humble and insignificant beginnings, meme coins have found their place in the modern and quite towless and swift world of cryptocurrencies. Two distinct categories have emerged: proposals and projects that incorporate AI meme coins and Solana meme coins. But what are they, how were they created, what sets them apart, and what big clash for domination of the crypto meme area is happening now?
**[AI Meme Coin Development](https://www.blockchainx.tech/ai-meme-coin-development-company/)**
Moems, AI meme coins are an interesting convergence of two significant and popular phenomena of modern society: AI and cryptocurrencies. Many of these coins use AI technology or relate it to their branding, to their community involvement, or even in their trading algorithms. There exists a convergence of developers using AI in areas such as memes, tokenomics, or even portions of community management.
Its development process often includes using artificial intelligence alongside market analysis algorithms, sentiment analysis for posts on social media or even creating memes using current events. This convergence of AI and crypto is an efficient way of attracting not only tech-savvy investors but also enlightening the general public about the advanced spirit of the crypto meme economy.
**[Solana Meme Coin Development](https://www.blockchainx.tech/solana-meme-coin-development/)**
On the opposite side, the Solana meme coins operate within the Solana ecosystem, which is characterized by fast transactions and, accordingly, low fees. Solana is designed for horizontal scaling, and mRAM is quite easy to work with, so meme coins continue to be launched, all in an attempt to become more popular.
Some meme coins are built on Solana, and they are quick at processing large volumes of transactions compared to other meme tokens. Due to this architecture, it becomes possible for meme coins to provide more convenient user experiences and cheaper transaction fees than other platforms.
The Battle for Dominance
As both AI and Solana meme coins gain traction, they compete for market attention and investor funds. The battle is not merely about technological prowess but also about community engagement, meme virality, and sustainable tokenomics. AI meme coins leverage cutting-edge technology to stand out in a crowded market, while Solana meme coins emphasize transactional efficiency and scalability.
The future of these meme coins hinges on their ability to adapt to market trends, regulatory landscapes, and technological advancements. Developers continuously innovate to enhance user experiences and maintain community interest, driving the evolution of meme coins beyond their initial novelty.
Conclusion
In conclusion, AI and Solana meme coins represent two distinct yet interconnected trends within the cryptocurrency space. AI meme coins pioneer the integration of artificial intelligence into cryptocurrency branding and development, while Solana meme coins leverage the speed and scalability of the Solana blockchain for efficient transactions.
The battle for dominance between these meme coins underscores the dynamic nature of the crypto meme economy. As developers continue to innovate and investors navigate the evolving landscape, one thing remains certain: meme coins are here to stay, reshaping how we perceive and interact with digital assets.
| blockchainx358 |
1,893,538 | Hadoop Date Mastery for Astronomers | The year is 2285, and humanity has established a thriving space station orbiting the planet Mars | 27,774 | 2024-06-19T11:54:35 | https://labex.io/tutorials/hadoop-hadoop-date-mastery-for-astronomers-288963 | coding, programming, tutorial, hadoop |
## Introduction
The year is 2285, and humanity has established a thriving space station orbiting the planet Mars. This research facility, known as the Martian Observatory, serves as a hub for scientific exploration and discovery. Among the many scientists stationed here is Dr. Emma Wilkins, a brilliant data analyst specializing in astronomical observations.
Dr. Wilkins has been tasked with analyzing vast amounts of data collected from various telescopes and instruments aboard the station. However, the data is in a raw format, and she needs to process and manipulate it to extract meaningful insights. One of the critical challenges she faces is working with date and time information, as many of the observations are time-sensitive and require accurate date calculations.
To tackle this challenge, Dr. Wilkins must leverage the powerful date operating functions available in Hadoop Hive, a data warehousing solution designed for big data processing. By mastering these functions, she can efficiently manipulate and analyze the date and time data, enabling her to uncover patterns, trends, and anomalies that could lead to groundbreaking discoveries in the field of astronomy.
## Setting Up the Hive Environment
In this step, we will set up the Hive environment and create a sample dataset for practicing date operating functions.
1. First, switch to the `hadoop` user by running the following command in the terminal:
```bash
su - hadoop
```
2. Now, launch the Hive shell by executing the following command:
```bash
hive
```
3. Create a new Hive table called `observations` with the following schema:
```sql
CREATE TABLE observations (
observation_id INT,
telescope STRING,
observation_date STRING,
observation_time STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
```
4. Load some sample data into the `observations` table:
```sql
LOAD DATA LOCAL INPATH '/home/hadoop/resources/observations.csv' OVERWRITE INTO TABLE observations;
```
The `observations.csv` file contains sample observation data with columns for `observation_id`, `telescope`, `observation_date` (in the format `yyyy-MM-dd`), and `observation_time` (in the format `HH:mm:ss`).
## Using the year() Function
In this step, we will learn how to use the `year()` function in Hive to extract the year from a date string.
The `year()` function takes a date or timestamp string as input and returns the year component as an integer value.
1. Open the Hive console by running the `hive` command in the terminal.
2. Execute the following query to extract the year from the `observation_date` column:
```sql
SELECT observation_id, year(observation_date) AS observation_year
FROM observations;
```
This query will return the `observation_id` and the corresponding year for each observation in the table.
3. You can also use the `year()` function in combination with other date functions or clauses. For example, to filter observations from a specific year, you can use the following query:
```sql
SELECT *
FROM observations
WHERE year(observation_date) = 2022;
```
This query will return all observations where the year component of the `observation_date` is 2022.
## Using the datediff() Function
In this step, we will learn how to use the `datediff()` function in Hive to calculate the difference between two dates.
The `datediff()` function takes two date or timestamp strings as input and returns the number of days between them.
1. Open the Hive console if it's not already open.
2. Execute the following query to calculate the number of days between two observation dates:
```sql
SELECT observation_id,
observation_date,
'2022-12-31' AS reference_date,
datediff('2022-12-31', observation_date) AS days_until_end_of_year
FROM observations;
```
This query will return the `observation_id`, `observation_date`, a reference date (`2022-12-31`), and the number of days between the `observation_date` and the reference date (`days_until_end_of_year`).
3. You can also use the `datediff()` function with other date functions or clauses. For example, to filter observations within a specific date range, you can use the following query:
```sql
SELECT *
FROM observations
WHERE datediff(observation_date, '2022-01-01') BETWEEN 0 AND 180;
```
This query will return all observations where the `observation_date` is between January 1, 2022, and June 30, 2022 (inclusive).
## Using the date_format() Function
In this step, we will learn how to use the `date_format()` function in Hive to convert a date string from one format to another.
The `date_format()` function takes two arguments: a date or timestamp string and a format pattern. It returns the date or timestamp string in the specified format pattern.
1. Open the Hive console if it's not already open.
2. Execute the following query to convert the `observation_date` column from the `yyyy-MM-dd` format to the `MMM dd, yyyy` format:
```sql
SELECT observation_id,
observation_date,
date_format(observation_date, 'MMM dd, yyyy') AS formatted_date
FROM observations;
```
This query will return the `observation_id`, the original `observation_date`, and the formatted date (`formatted_date`) in the `MMM dd, yyyy` format (e.g., `Jun 15, 2022`).
3. You can also use the `date_format()` function with other date functions or clauses. For example, to filter observations based on a specific date format, you can use the following query:
```sql
SELECT *
FROM observations
WHERE date_format(observation_date, 'yyyy/MM/dd') = '2022/06/15';
```
This query will return all observations where the `observation_date`, when formatted as `yyyy/MM/dd`, is equal to `2022/06/15`.
## Using the add_months() Function
In this step, we will learn how to use the `add_months()` function in Hive to add or subtract months from a date.
The `add_months()` function takes two arguments: a date or timestamp string and an integer value representing the number of months to add or subtract.
1. Open the Hive console if it's not already open.
2. Execute the following query to add six months to the `observation_date` column:
```sql
SELECT observation_id,
observation_date,
add_months(observation_date, 6) AS date_plus_six_months
FROM observations;
```
This query will return the `observation_id`, the original `observation_date`, and the date six months after the `observation_date` (`date_plus_six_months`).
3. You can also use the `add_months()` function with other date functions or clauses. For example, to filter observations within a specific month range, you can use the following query:
```sql
SELECT *
FROM observations
WHERE month(add_months(observation_date, 6)) BETWEEN 1 AND 6;
```
This query will return all observations where the month component of the date six months after the `observation_date` is between January and June (inclusive).
## Summary
In this lab, we explored the world of date operating functions in Hadoop Hive, a powerful data warehousing solution for big data processing. Through a captivating scenario set in a futuristic space station orbiting Mars, we followed the journey of Dr. Emma Wilkins, a brilliant data analyst tasked with analyzing astronomical observations.
By mastering date operating functions such as `year()`, `datediff()`, `date_format()`, and `add_months()`, Dr. Wilkins gained the ability to efficiently manipulate and analyze date and time data, enabling her to uncover patterns, trends, and anomalies that could lead to groundbreaking discoveries in the field of astronomy.
Throughout the lab, we delved into hands-on examples and provided checkers to ensure a seamless learning experience. The interactive nature of the lab allowed learners to practice and reinforce their understanding of these essential functions, laying a solid foundation for more advanced data analysis techniques.
Overall, this lab not only imparted valuable technical skills but also fostered a sense of wonder and curiosity about the vast expanse of the cosmos. By empowering learners with the tools to unlock the secrets hidden within astronomical data, we paved the way for future generations of scientists to push the boundaries of human knowledge and exploration.
---
## Want to learn more?
- 🚀 Practice [Hadoop Date Mastery for Astronomers](https://labex.io/tutorials/hadoop-hadoop-date-mastery-for-astronomers-288963)
- 🌳 Learn the latest [Hadoop Skill Trees](https://labex.io/skilltrees/hadoop)
- 📖 Read More [Hadoop Tutorials](https://labex.io/tutorials/category/hadoop)
Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄 | labby |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.