id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,894,096 | Iniciando em DevOps e Linux | Instalando uma Maquina Virtual Link para download da VirtualBox Para instalar basta... | 0 | 2024-06-19T21:26:28 | https://dev.to/ebagabe/iniciando-em-devops-e-linux-22l8 | ## Instalando uma Maquina Virtual
- [Link para download da VirtualBox](https://www.virtualbox.org/wiki/Downloads)
- Para instalar basta avançar até instalar
- Ao abrir a VirtualBox, basta clicar em Novo
- Preencha com o nome da sua maquina virtual
- Na imagem ISO, é necessário fazer o download se você ainda não tiver da imagem que quer subir na sua maquina. Para esse artigo, vou estar utilizando o Ubuntu
- [Link para instalação do Ubuntu para Server](https://ubuntu.com/download/server)
- Após isso, devemos selecionar a imagem baixada e colocar em Imagem ISO
- Nesse tutorial iremos pular a instalação desastissida, então pode-se marcar a opção abaixo do numero da versão. Em seguida clicar em Proximo (N)
- Na parte do Hardware, em memoria base, selecionei 4096MB de memoria nesse primeiro momento.
- Em processadores, alocar apenas 2 CPU's que é o necessário por enquanto. E selecionar a opção próximo
- Em Disco Rígido Virtual, Selecionamos a opção Default (Criar um novo disco rígido virtual)
- Em tamanho de Disco, deixo 25.00GB alocado e clicar em próximo e em Finalizar.
- Após criar a nova máquina, podemos clicar em Iniciar (T)
*Dentro da Grub*
- Selecione a primeira opção: Try or Install Ubuntu Server
- Em seguida, após a sequencia de codigos que é apresentado após selecionar essa opção iremos selecionar o idioma, nesse tutorial vou escolher o Português.
- Na parte do Layout do teclado, escolha o que for conveniente, vou manter o padrão de português
- Em Choose type of Install: Seleciono Ubuntu Server (minimized)
- Selecionamos utilizando a barra de espaço
- Depois descemos até Concluído
- Clique em concluído novamente
- Em configuração de Proxy, podemos clicar em concluído de novo. E aguardar o sistema instalar.
- Assim que disponivel clicar em continuar, e continuar novamente e após isso selecionar o nome que irá usar para a maquina, servidor, user e senha
- Aguardar o processo
- Reiniciar a maquina | ebagabe | |
1,894,078 | a web perfomance introduction and why you should care; | Improving the performance of your website/web product impacts several critical metrics: User... | 0 | 2024-06-19T21:21:47 | https://dev.to/nosylasairaf/web-perfomance-and-why-you-should-care-4n8j | webdev, perfomance, programming, webperfomance | Improving the performance of your website/web product impacts several critical metrics:
- User Satisfaction and Retention
- Cost Savings
- Competitive Advantage
- [Improved SEO](https://developers.google.com/search/blog/2018/01/using-page-speed-in-mobile-search)
- High Conversion Rate
Before diving in, let's define two key metrics:
- **Bounce Rate:** Percentage of visitors who leave your website after viewing only one page, indicating engagement.
- **Conversion Rate:** Measures how effectively your site turns visitors into customers or leads.
## Impact of Page Load Time
As page load time increases, so does the bounce rate:
- Increasing load time from 1 to 3 seconds raises bounce probability by 32%
- Deloitte found that a jump from 1 to 5 seconds increases bounce probability by 90%.
- A 10-second load time increases bounce probability by 123%.
_(Google/SOASTA, 2017)._
## Financial Implications
For example:
- eBay's 100-millisecond speed improvement increased add-to-cart rates by 0.5%, leading to 50,000 more monthly transactions.
- BMW's adoption of Progressive Web Apps (PWA) boosted traffic by 8 to 40%.
_(Milliseconds Make Million from Deloitte)_
## Strategies for Improvement
**Improving Conversion Rate:**
- Optimize User Experience: Simplify the user journey.
- Enhance Content Quality: Ensure relevance and persuasion.
- A/B Testing: Experiment with different versions for better performance.
**Reducing Bounce Rate:**
- Improve Page Load Speed: Ensure fast loading times.
- Enhance Content Relevance: Align content with visitor intent.
- Engage with Design: Create an intuitive and attractive layout.
if you are a web developer and need to know little more about [key metrics to measure web perfomance](https://dev.to/nosylasairaf/key-metrics-to-measure-web-perfomance-an-short-introduction-2dmp)
| nosylasairaf |
1,894,095 | Start Your Coding Journey | Cheers to the digital revolution. According to the US Bureau of Labor Statistics, employment of... | 0 | 2024-06-19T21:21:36 | https://dev.to/gigo_dev/start-your-coding-journey-1egp | coding, programming, learning, technology | Cheers to the digital revolution. According to the US Bureau of Labor Statistics, employment of software developers is projected to grow 22% from 2020 to 2030, much faster than the average for all occupations, which is only 8%. There is a surge in demand for software developers.
In this article, we’ll explore how learning to code from scratch can lead to higher-paying jobs and better benefits for years to come.
## What is Programming?
Programming is the process of writing instructions for computers or machines, known as code. This code enables computers to perform specific tasks, solve problems, or create desired outputs. You may have heard of programming languages like Python and JavaScript. Like human languages, each programming language has its own rules and guidelines, known as syntax, that allow programmers to communicate effectively with computers.
## Why Learn Programming?
Software developers are in high demand and well compensated. Once you’ve mastered programming, you’ll have access to a wide range of lucrative roles. But the benefits go beyond professional opportunities. Knowing how to program is like having a superpower, allowing you to bring your ideas to life and solve real-world problems. Plus, learning to program can enhance your critical thinking skills, helping you design solutions to problems more efficiently.
## 5 Steps to Landing Your First Programming Job in 2024
1. **Pick a Project**
2. **Build Programming Fundamentals**
3. **Learn Programming Basics**
4. **Work on Projects**
5. **Apply for a Job**
### 1. Pick a Project
The best way to learn to code is by identifying a project that you care about and working towards it daily, gradually improving your skills and understanding concepts in the context of accomplishing a task. This approach is more effective and motivating than simply studying language-specific documentation.
By working on a project, you gain practical experience and can showcase your skills to potential employers, even if you don’t have previous programming employment history. You can demonstrate your proficiency by pointing towards a repository of completed or contributed projects.
### 2. Build Programming Fundamentals
Most skilled programmers know multiple languages, and learning the fundamentals of programming can help you immensely as you move onto your project. Beginners should understand syntax, which defines the structure of a language and specifies the words and symbols used to write code.
Tools like integrated development environments (IDEs) and generative AI tools such as ChatGPT, Mistral, or MetaAI can make programming easier by checking syntax for errors, organizing files, and auto-completing lines of code. Learning to take advantage of these tools and using them to their full potential can reduce the impact of the inevitable bumps in your coding journey.
### 3. Learn Programming Basics
If you have a specific reason for learning to code, you may already know which language you want to start with. For instance, if you want to be a game developer, C++ & JavaScript are probably your best choice.
If you want to learn to program but haven’t determined your first language, there are several beginner-friendly languages that serve as a good starting point. Python is one of the easiest languages for beginners to pick up.
### 4. Work on Projects
The more projects you complete, the better. Start with simple projects and gradually take on more complex ones. For example, if you’re interested in game design, start with a basic top-down shooter instead of a massive online role-playing game. Or, create a static portfolio website to showcase your skills and add more complex projects as you gain experience.
See recommended projects for beginners and intermediates below:
**5 Project ideas for beginners:**
- **To-Do List App:** A simple mobile app that allows users to create and manage their to-do lists, with features like adding, editing, and deleting tasks.
- **Quiz Game:** A web-based quiz game that asks users a series of questions and keeps track of their scores, with a leaderboard to display top scores.
- **Weather Dashboard:** A web-based weather dashboard that displays current weather conditions and forecasts for a given location, using APIs and data visualization.
- **Personal Finance Tracker:** A simple desktop app that helps users track their expenses, income, and budget, with features like categorization and data visualization.
- **Hangman Game:** A command-line based game of hangman, where users can guess letters and words, with a simple AI-powered opponent.
**5 Project ideas for intermediates:**
- **EcoLife:** A mobile app that helps users track and reduce their carbon footprint by monitoring their daily activities, such as energy consumption, water usage, and waste generation.
- **ChatGenie:** A conversational AI chatbot that can understand and respond to user queries in a human-like manner, using natural language processing and machine learning algorithms.
- **CodeCracker:** A web-based platform that generates coding challenges and puzzles for users to solve, with a gamified leaderboard and rewards system to encourage participation and skill-building.
- **SmartHome Automation:** A home automation system that uses IoT sensors and machine learning algorithms to optimize energy consumption, security, and convenience in a user’s home.
- **MedMind:** A medical diagnosis AI system that uses machine learning and natural language processing to analyze patient symptoms and provide accurate diagnoses and treatment recommendations.
### 5. Apply for a Job
After mastering the fundamentals and completing several projects, it’s time to start applying for jobs that match your experience and the programming languages you’ve learned.
While remote jobs are highly competitive, entry-level in-person roles can provide valuable experience and a stepping stone to higher-paying, remote positions.
By consistently performing well and gaining more experience, you can naturally grow your skills and become eligible for more lucrative opportunities in the future.
## Find a platform to guide you
Take all of these concepts prior to job application and put them together. What online resource will guide me step by step thru programming fundamentals and project completion?
GIGO Dev offers project-based learning for beginners who want to learn code and transition to a new job.
Build your coding skills with a guided path and repository of projects supported by your own personal AI tutor, Code Teacher.
There are tons of resources and communities out there to support you in your coding journey, just search for something that fits your interests and remember that the harder you work, the luckier you get.
[GIGO Discord](https://discord.gg/learnprogramming)
[GIGO Twitter](https://twitter.com/gigo_dev)
[GIGO Reddit](https://www.reddit.com/r/gigodev/)
[GIGO GitHub](https://github.com/Gage-Technologies/gigo.dev)
Find [this article on Medium](https://medium.com/@gigo_dev/start-your-coding-journey-9e3386025c36)
Find [this article on our site](https://www.gigo.dev/articles/start-your-coding-journey?viewport=desktop) | gigo_dev |
1,894,083 | Part Two: Creating Our Recipe Reusable Skeleton Component | Creating a Reusable Skeleton Component Welcome to part two of Skeleton Loaders:... | 0 | 2024-06-19T21:11:23 | https://dev.to/topeogunleye/part-two-creating-our-recipe-reusable-skeleton-component-cgj | 
### Creating a Reusable Skeleton Component
Welcome to part two of Skeleton Loaders: Simplifying Data Loading in React! In the [first article](https://dev.to/topeogunleye/skeleton-loaders-simplifying-data-loading-in-react-part-1-1kmo), we learned how Facebook and LinkedIn leverage skeleton loading screens in managing users' expectations while data is fetched, built our React app from scratch, and added styles to our React application. In part two, we'll guide you through the practical steps of implementing skeleton loading screens in your React applications including animations.
### Creating a Reusable Skeleton Component
We aim to create a reusable base skeleton element component that can be configured into different shapes. To achieve this, let's start by creating a new folder named `skeletons` within our `src` directory. Inside this folder, we'll create a file named `skeletonElement.jsx`. This file will house our base skeleton component, which is designed to be reusable and customizable.
Here's the code to include in the `skeletonElement.jsx` file:
```jsx
import './Skeleton.css';
const SkeletonElement = ({ type }) => {
const classes = `skeleton ${type}`;
return (
<div className={classes}></div>
)
}
export default SkeletonElement
```
In this code, we define a `SkeletonElement` component that accepts a `type` prop. This `type` prop allows us to customize the shape of the skeleton element by applying different CSS classes. The component then returns a `div` with the CSS classes applied.
Let’s create the style for our skeleton component by creating a file named Skeleton.css and adding the below styles:
```css
/* basic styles */
.skeleton {
background: #ddd;
margin: 10px 0;
border-radius: 4px;
}
.skeleton.title {
width: 50%;
height: 24px;
margin-bottom: 15px;
}
.imageBig {
width: 240px;
height: 240px;
}
.imageMealInfo {
width: 92vw;
height: 48vh;
padding-top: 0;
margin-top: 0;
}
.mealInfo {
width: 176px;
height: 24px;
}
.textBig {
width: 240px;
height: 1152px;
}
.liText {
width: 100px;
height: 16px;
margin: 4px 15px;
}
.imageSmall {
width: 240px;
height: 240px;
}
@media screen and (min-width: 1080px) {
.imageMealInfo {
max-width: 320px;
max-height: 320px;
}
}
@media screen and (min-width: 767px) {
.imageSmall {
width: 208px;
height: 224px;
}
.title {
width: 50%;
height: 24px;
}
.imageBig {
min-width: 100vw;
min-height: 320px;
}
.imageMealInfo {
max-width: 200px;
max-height: 200px;
margin-bottom: 8rem;
}
.mealInfo {
width: 512px;
height: 24px;
}
.textBig {
width: 576px;
height: 456px;
}
.liText {
width: 100px;
height: 16px;
}
}
@media screen and (min-width: 475px) {
.imageSmall {
width: 208px;
height: 212px;
}
.imageMealInfo {
width: 92vw;
height: 62vh;
}
}
@media screen and (min-width: 425px) {
.imageBig {
width: 312px;
height: 312px;
}
.textBig {
width: 318.75px;
height: 840px;
}
}
@media screen and (min-width: 375px) {
.imageBig {
width: 281.25px;
height: 281.25px;
}
.textBig {
width: 281.25px;
height: 960px;
}
}
/* skeleton profile */
.skeleton-wrapper {
/* margin: 20px auto; */
padding: 10px 15px;
border-radius: 4px;
position: relative;
}
.wrapper {
border-radius: 4px;
position: relative;
}
/* Category */
.catImg {
width: 177px;
height: 110px;
}
.catName {
width: 75px;
height: 28px;
}
/* themes */
.light {
background: #f2f2f2;
}
.dark {
background: #444;
}
.dark .skeleton {
background: #777;
}
/* animation effects */
.shimmer-wrapper {
top: 0;
left: 0;
width: 100%;
height: 100%;
position: absolute;
animation: loading 2.5s infinite;
}
.shimmer {
width: 50%;
height: 100%;
background: rgba(255, 255, 255, 0.2);
transform: skewX(-20deg);
box-shadow: 0 0 30px 30px rgba(255, 255, 255, 0.05);
}
.dark .shimmer {
background: rgba(255, 255, 255, 0.05);
}
@keyframes loading {
0% {
transform: translateX(-150%);
}
50% {
transform: translateX(-60%);
}
100% {
transform: translateX(150%);
}
}
/* skeleton profile */
.skeleton-profile {
display: grid;
grid-template-columns: 1fr 2fr;
grid-gap: 30px;
align-items: center;
}
/* themes */
.skeleton-wrapper.light {
background: #f2f2f2;
}
.skeleton-wrapper.dark {
background: #444;
}
.skeleton-wrapper.dark .skeleton {
background: #777;
}
```
#### Explanation of Styles for Skeleton Loading
1. Skeleton Element Styles: The `SkeletonElement.js` file defines a reusable skeleton component that can be configured with different shapes. These shapes are defined in the accompanying `Skeleton.css` file. By applying different CSS classes, we can customize the appearance of skeleton elements, such as their size, color, and animation effects.
2. Wrapper Styles: The `SkeletonHome.js` file wraps the skeleton elements in a `div` with the class `wrapper`. This wrapper provides a container for the skeleton elements and allows for additional styling or layout adjustments if needed.
3. Shimmer Effect: The `Shimmer` component adds a subtle shimmer effect to the skeleton elements. This effect is achieved through CSS animations and helps indicate to the user that content is loading.
### Creating Our Shimmer:
Let’s create a Shimmer.js file and add this code below to it:
```jsx
const Shimmer = () => {
return (
<div className="shimmer-wrapper">
<div className="shimmer"></div>
</div>
);
};
export default Shimmer;
```
### Creating our Skeleton Home page
Let’s create a SkeletonHome.js file and add this code below to it:
```jsx
import Shimmer from "./Shimmer";
import SkeletonElement from "./SkeletonElement";
const SkeletonHome = ({ theme }) => {
const themeClass = theme || "light";
return (
<div className={`wrapper ${themeClass}`}>
<div className="meals grid grid-cols-1 gap-5 mt-5 xs:grid-cols-2 sm:grid-cols-3 xl:grid-cols-4">
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
<SkeletonElement type="imageSmall" />
</div>
<Shimmer />
</div>
);
};
export default SkeletonHome;
```
Now let’s make use of our SkeletonHome.js component in our Home.js component so that we have a skeleton loader on our home component. Copy the code below and paste it into our Home.js component:
```jsx
import { useEffect, useState } from "react";
import SkeletonHome from "../skeletons/SkeletonHome";
const Home = () => {
const [meals, setMeals] = useState(null);
useEffect(() => {
setTimeout(async () => {
const res = await fetch(
"https://www.themealdb.com/api/json/v1/1/search.php?s=chicken"
);
const meals = await res.json();
setMeals(meals);
console.log(meals.meals[0]);
}, 5000);
}, []);
return (
<>
<div className="container">
<div className="m-auto max-w-3xl flex flex-col items-center justify-center text-center">
<div id="meals" className="meals">
{meals &&
meals.meals.map((meal) => (
<div className="meal" key={meal.idMeal}>
<img
className="meal-img"
src={meal.strMealThumb}
alt={meal.strMeal}
/>
<div className="meal-info" data-mealid={meal.idMeal}>
<h3>{meal.strMeal}</h3>
</div>
</div>
))}
{!meals && [1,2,3,4,5].map((n) => <SkeletonHome key={n} theme="dark" />)}
</div>
{}
</div>
</div>
</>
);
};
export default Home;
```
#### Explanation of Theme Customization in SkeletonHome.js Component:
The SkeletonHome.js component offers theme customization through the `theme` prop. This prop allows users to specify the color scheme for the skeleton elements, providing flexibility in integrating the skeleton loading experience with different application designs.
1. **Theme Prop Handling**: Within the SkeletonHome.js component, the `theme` prop is utilized to determine the color scheme. By default, if no theme is provided, the component falls back to the "light" theme. This ensures that the component remains functional even if the theme prop is not explicitly set.
```jsx
const SkeletonHome = ({ theme }) => {
const themeClass = theme || "light"; // Default to "light" theme if no theme is provided
// Rest of the component code...
};
```
2. **Applying Theme Classes**: The determined theme, either "light" or the provided value, is used to dynamically generate CSS classes. These classes are then applied to the wrapper `div` of the skeleton elements, enabling the styling of the skeleton components based on the selected theme.
```jsx
<div className={`wrapper ${themeClass}`}>
{/* Skeleton elements go here */}
</div>
```
3. **Theme Flexibility**: By allowing users to specify the theme, developers can seamlessly integrate the skeleton loading experience with their application's design language. For example, setting the theme to "dark" can complement dark mode interfaces, ensuring consistency in visual presentation across different application states.
```jsx
<SkeletonHome key={n} theme="dark" />
```
In summary, the SkeletonHome.js component facilitates theme customization through the `theme` prop, providing users with the ability to tailor the skeleton loading experience to match their application's design requirements. This flexibility enhances the overall user experience by ensuring a cohesive and visually appealing transition between content loading and display.
## Conclusion:
In wrapping up, adding skeleton loading screens to your React app can make it feel faster and more user-friendly. With components like `SkeletonHome`, you can easily create these loading screens for different parts of your app. By using CSS and React, you can customize how these loading screens look and behave to match your app's style. Ultimately, using skeleton loading screens improves how users perceive your app's speed and makes it more enjoyable to use.
| topeogunleye | |
1,892,884 | Using Cypress Component Testing in your Next.js Application with TypeScript and GraphQL | Introduction In my previous blog post, I discussed using React Testing Library to test... | 0 | 2024-06-19T21:00:03 | https://dev.to/schead/using-cypress-component-testing-in-your-nextjs-application-with-typescript-and-graphql-1c1f | ## Introduction
In my previous blog post, I discussed using React Testing Library to test React applications. Recently, Cypress introduced a new way to test components without requiring a full end-to-end solution. I decided to give it a try and was pleasantly surprised by its effectiveness. I seamlessly integrated it into my current project, which has a robust codebase. This type of testing is ideal for common scenarios, like the one I described in my last post, and I'll demonstrate it again here using Cypress. I've already covered the importance of testing in my previous blog, so let's dive straight into the code.
### Step 1: Initialize the Repo
First, we'll create a new Next.js application. Currently, Cypress doesn't support Component Testing with **server components** or Next.js versions above 14. For more details, refer to the [Next.js documentation on testing with Cypress](https://nextjs.org/docs/pages/building-your-application/testing/cypress).
I'm using Next.js to simplify the setup but feel free to user other solutions. The command below will generate a new application with Typescript.
```bash
npx create-next-app@latest blog-todo-graphql --typescript
```
### Step 2: Install Apollo Client and GraphQL
Next, we'll install Apollo Client and GraphQL to handle our GraphQL queries and mutations. Also, don't forget the cypress library.
```bash
npm install @apollo/client graphql cypress
```
### Step 3: Clean Up the Project Structure
We'll clean up the project structure by deleting the `api` folder.
### Step 4: Adjust `index.tsx` File
Update the `index.tsx` file to include the following content:
**pages/index.tsx**
```typescript
import AddTodo from "@/components/add-todo";
import ListTodos from "@/components/list-todos";
const Home = () => {
return (
<div>
<h1>Todo List</h1>
<AddTodo />
<ListTodos />
</div>
);
};
export default Home;
```
### Step 5: Add the Necessary Project Files
Here are the files we need to add to our project, along with a brief explanation of their purpose:
**schemas/index.gql**
```graphql
type Todo {
id: String!
title: String!
description: String!
}
type Query {
todos: [Todo!]!
}
type Mutation {
addTodo(title: String!, description: String!): Todo!
}
```
This file defines the GraphQL schema for our Todo application, including the Todo type, a query to fetch todos, and a mutation to add a new todo.
**components/add-todo.tsx**
```typescript
import { useState } from "react";
import { useMutation, gql } from "@apollo/client";
import { GET_TODOS } from "@/components/list-todos";
export const ADD_TODO = gql`
mutation AddTodo($title: String!, $description: String!) {
addTodo(title: $title, description: $description) {
id
title
description
}
}
`;
const AddTodo = () => {
const [title, setTitle] = useState("");
const [description, setDescription] = useState("");
const [addTodo] = useMutation(ADD_TODO);
const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
await addTodo({
variables: { title, description },
refetchQueries: [{ query: GET_TODOS }],
});
setTitle("");
setDescription("");
};
return (
<form onSubmit={handleSubmit}>
<input
value={title}
onChange={(e) => setTitle(e.target.value)}
placeholder="Title"
required
/>
<input
value={description}
onChange={(e) => setDescription(e.target.value)}
placeholder="Description"
required
/>
<button type="submit">Add Todo</button>
</form>
);
};
export default AddTodo;
```
This component provides a form for adding new todos. It uses the Apollo Client to send a GraphQL mutation to add the todo and refetches the list of todos after adding a new one.
**components/list-todos.tsx**
```typescript
import { useQuery, gql } from "@apollo/client";
export const GET_TODOS = gql`
query GetTodos {
todos {
id
title
description
}
}
`;
const ListTodos = () => {
const { loading, error, data } = useQuery(GET_TODOS);
if (loading) return <p>Loading...</p>;
if (error) return <p>Error: {error.message}</p>;
return data.todos.map(
(todo: { id: string; title: string; description: string }) => (
<div key={todo.id}>
<h3>{todo.title}</h3>
<p>{todo.description}</p>
</div>
)
);
};
export default ListTodos;
```
This component fetches and displays a list of todos using the Apollo Client. It shows loading and error states while the data is being fetched.
**mocks/apollo-mock-provider.tsx**
```typescript
import React from "react";
import { MockedProvider, MockedResponse } from "@apollo/client/testing";
interface ApolloMockProviderProps {
mocks: MockedResponse[];
children: React.ReactNode;
}
const ApolloMockProvider: React.FC<ApolloMockProviderProps> = ({ mocks, children }) => (
<MockedProvider mocks={mocks} addTypename={false}>
{children}
</MockedProvider>
);
export default ApolloMockProvider;
```
This component provides a mocked Apollo Client for testing purposes. It uses the MockedProvider from `@apollo/client/testing` to supply mock data for our tests.
**tests/home.cy.tsx**
```typescript
import React from "react";
import Home from "../pages/index";
import { GET_TODOS } from "@/components/list-todos";
import { ADD_TODO } from "@/components/add-todo";
import ApolloMockProvider from "@/mocks/apollo-mock-provider";
const mocks = [
{
request: {
query: ADD_TODO,
variables: {
title: "New Test Todo",
description: "New Test Description",
},
},
result: {
data: {
addTodo: {
id: 1,
title: "New Test Todo",
description: "New Test Description",
},
},
},
},
{
request: {
query: GET_TODOS,
},
result: {
data: {
todos: [
{
id: 1,
title: "New Test Todo",
description: "New Test Description",
},
],
},
},
},
];
describe("<Home />", () => {
it("renders", () => {
cy.mount(
<ApolloMockProvider mocks={mocks}>
<Home />
</ApolloMockProvider>
);
cy.get('[placeholder="Title"]').type("New Test Todo");
cy.get('[placeholder="Description"]').type("New Test Description");
cy.get("button").click();
cy.get("h3").should("contain", "New Test Todo");
cy.get("p").should("contain", "New Test Description");
});
});
```
This is our first Cypress test case. Similar to my approach with React Testing Library, this test verifies the functionality of adding a new todo to the list. It uses a mocked Apollo Provider to simulate GraphQL requests and responses.
In this example, I tested both `<AddTodo />` and `<ListTodos />` components through the Home page component. However, in some cases, it might be more practical to test components in isolation. For instance, if the Home page were a dashboard screen filled with various components, you would need to mock everything.
In such scenarios, testing each component individually is more effective, as the primary goal of these tests is to provide confidence during development. You'll quickly see that Cypress component testing mode makes this process straightforward and efficient.
### Running Your Test
To open the Cypress Component Testing GUI, use the following command:
```bash
npx cypress open --component
```
You will need to configure a few settings, but the process is quick. Follow the instructions until you reach the screen shown below:

Pick your browser and select the `home.cy.tsx` test we created. It should display the following screen:
And there it is! Try creating tests for more complex cases that you encounter. For example, what if you have a complex form with two or more fields that depend on another field? These tests will provide quick feedback if you need to refactor this module at some point. Trust me, you might need to do it.
### Benefits of Independent Tests
As you can see, in this setup, we didn't need to add the `ApolloProvider` component to our main application, just the `MockedProvider` for our tests. It's important to note that the `ApolloProvider` and `MockedProvider` serve different purposes and are independent of each other. Since this content focuses on demonstrating the tests and not the application running, we don't need to add the `ApolloProvider` here. However, in a real application, you must include the `ApolloProvider` to integrate your application with the actual API.
## Conclusion
Frontend testing is an indispensable part of modern web development. It ensures that your application is reliable, maintainable, and provides a great user experience. If you read my previous article and tried using React Testing Library, you will find that Cypress Component Testing is much easier for more complex scenarios. Why not give this new approach a try and share your experiences with me? I’m eager to hear about your results. Happy testing! | schead | |
1,894,079 | Skeleton Loaders: Simplifying Data Loading in React: Part 1 | Part One: Creating Our React App Welcome to our two-part series on Skeleton Loaders:... | 0 | 2024-06-19T20:58:40 | https://dev.to/topeogunleye/skeleton-loaders-simplifying-data-loading-in-react-part-1-1kmo |

## Part One: Creating Our React App
Welcome to our two-part series on Skeleton Loaders: Simplifying Data Loading in React! In part one of this series, we'll create our modern react application from scratch, fetch the data, and add styling.
## Understanding the Power of Skeleton Loading Screens
Many modern websites handle data fetching in the browser instead of on the server. This is good because it means that a user doesn’t have to wait long before getting their content loaded from the server but then has to wait for their data to be loaded from the browser once it arrives.
Developers often use loaders or spinners to manage user expectations during this data-fetching process. A particularly effective and popular approach, adopted by major websites like Facebook and LinkedIn, is the use of skeleton loading screens. These screens display placeholder elements that mimic the layout of the actual content, providing a visual representation of the incoming data.
## Prerequisites
- Basic knowledge of React.
- Familiarity with React Hooks.
## Setting Up the Project
First, we'll create a new React application using the following command:
```bash
npx create-react-app react-skeleton-screens
```
Next, navigate into the newly created project directory:
```bash
cd react-skeleton-screens
```
Open the project in Visual Studio Code:
```bash
code .
```
## Removing Boilerplate Code
Let's clean up the default files created by `create-react-app`:
1. Open the `src` folder and delete the following files:
- `App.css`
- `App.test.js`
- `logo.svg`
- `setupTests.js`
2. In `index.js`, remove the import and invocation of the service worker.
3. In `App.js`, remove the import of `logo.svg` and `App.css`.
Replace the code in `App.js` with the following:
```jsx
import React from 'react';
import User from './components/User';
import Articles from './components/Articles';
function App() {
return (
<div className="App">
<header>
<h1>React Skeletons</h1>
</header>
<div className="content">
</div>
</div>
);
}
export default App;
```
## Creating Components
Now, let's create the necessary components. In the `src` directory, create a new folder called `components`. Inside this folder, create one file: `Home.jsx`.
```jsx
import React from 'react';
const Home = () => {
return (
<div className="home">
</div>
);
}
export default Home;
```
With these steps, you've set up a basic React application structure and created components for our landing page, which will serve as the foundation for implementing skeleton screens. This setup ensures a clean slate, allowing you to focus on building the skeleton loading screens that enhance user experience during data fetching.
Now we’re going to nest the Home component inside the content div in our App.js component. Copy the code below into your app.js to do this:
```jsx
import Home from './Home';
function App() {
return (
<div className="App">
<header>
<h1>Meal Recipes</h1>
</header>
<div className="content">
<Home />
</div>
</div>
);
}
export default App;
```
## Adding Styles to the Application
To enhance the appearance of our application, we'll apply some styles to the header in `App.js`. Follow the steps below to update the styles in your `index.css` file.
### Update `index.css`
Open `index.css` and replace its contents with the following styles:
```css
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
header {
font-size: 1.5rem;
font-weight: 900;
display: grid;
align-items: center;
}
header h1 {
max-width: 1200px;
margin: 0 auto;
}
.container {
background-color: #6b7280;
color: #ffffff;
min-height: 100vh;
transition-property: all;
transition-duration: 1s;
transition-timing-function: ease-out;
}
.meals {
display: grid;
grid-template-columns: repeat(1, minmax(0, 1fr));
gap: 1.25rem; /* Equivalent to gap-5 in Tailwind */
margin-top: 1.25rem; /* Equivalent to mt-5 in Tailwind */
transition-property: all;
transition-duration: 1s;
transition-timing-function: ease-out;
padding: 10px 50px;
}
@media (min-width: 640px) {
.meals {
grid-template-columns: repeat(2, minmax(0, 1fr));
}
}
@media (min-width: 768px) {
.meals {
grid-template-columns: repeat(3, minmax(0, 1fr));
}
}
@media (min-width: 1280px) {
.meals {
grid-template-columns: repeat(4, minmax(0, 1fr));
}
}
/* .meal class */
.meal {
/* Equivalent to rounded overflow-hidden shadow-md cursor-pointer relative h-60 w-60 xs:h-56 xs:w-52 sm:h-56 sm:w-52 lg:h-56 lg:w-52 text-left; */
border-radius: 0.25rem;
overflow: hidden;
box-shadow: 0 4px 6px -1px rgba(0, 0, 0, 0.1),
0 2px 4px -1px rgba(0, 0, 0, 0.06);
cursor: pointer;
position: relative;
height: 15rem; /* Equivalent to h-60 */
width: 15rem; /* Equivalent to w-60 */
}
.meal-img:hover {
box-shadow: 0 10px 15px -3px rgba(147, 102, 102, 0.1),
0 4px 6px -2px rgba(0, 0, 0, 0.05);
transition-property: all;
transition-duration: 1s;
transition-timing-function: ease-out;
}
/* .meal-img class */
.meal-img {
/* Equivalent to w-full h-full border-solid border-4 border-white; */
width: 100%;
height: 100%;
border-style: solid;
border-width: 4px;
border-color: #ffffff; /* Assuming white border */
}
```
### Explanation of Styles
1. The universal selector (`*`) is used to apply the following styles to all elements on the page:
- `margin: 0;` and `padding: 0;` set the margin and padding of all elements to zero, effectively removing any default spacing.
- `box-sizing: border-box;` ensures that the total width and height of an element include both its content and padding, but not the margin.
2. The `header` element:
- `font-size: 1.5rem;` sets the font size to 1.5 times the default font size.
- `font-weight: 900;` makes the text bold.
- `display: grid;` uses CSS Grid for layout.
- `align-items: center;` vertically centers the content within the header.
3. The `.container` class:
- `background-color: #6b7280;` sets the background color to a shade of gray (#6b7280).
- `color: #ffffff;` sets the text color to white.
- `min-height: 100vh;` ensures that the container takes up at least the full viewport height.
- The `transition` properties control the animation duration and timing function when transitioning styles.
4. The `.meals` class:
- Uses CSS Grid to create a responsive grid layout with varying column numbers based on screen width.
- `gap: 1.25rem;` adds spacing between grid items.
- `margin-top: 1.25rem;` provides top margin.
- The `transition` properties control the animation when transitioning styles.
- `padding: 10px 50px;` adds padding to the container.
5. Media queries:
- Adjust the number of columns in the `.meals` grid based on screen width.
6. The `.meal` class:
- Creates a card-like styling for meal items.
- Sets border radius, overflow behavior, box shadow, cursor, and dimensions.
7. The `.meal-img` class:
- Sets the image dimensions and adds a white border.
These styles will ensure that your application's header looks clean and visually appealing. The header will have a consistent blue background with white text, providing a professional look.
### Final Steps
After adding the styles, your header in the application should now have a distinct appearance, with a blue background and centered white text.
### Running the Application
To see your changes in action, make sure to start your development server if it's not already running:
```bash
yarn dev
```
Navigate to `http://localhost:3000` in your browser to view the updated header with the new styles applied.
With these styling updates, your React application now has a polished header, setting a solid foundation for building out the rest of the skeleton loading screens and other features.
Fetching Data
We're going to use the MealDB API for our data fetching: https://www.themealdb.com/api.php.
In our `App.js`, let's create a state to store our data when we fetch it.
```jsx
const [meals, setMeals] = useState(null);
```
Initially, our `meals` state will be `null` because we don't have any meal data yet. However, when this component is rendered to the DOM, we need to fetch the data. To do this, we'll use the `useEffect` hook, which runs automatically after the component has been rendered.
Let's create a `useEffect` and use a `setTimeout` so that we can see the effect of the skeleton loader for a bit longer. Note that we wouldn't typically use a delay like this in a production application. Copy and paste the code below into the `App.js` file:
```jsx
import React, { useState, useEffect } from 'react';
import Home from './components/Home';
function App() {
const [meals, setMeals] = useState(null);
// runs automatically after initial render
useEffect(() => {
setTimeout( async () => {
const res = await fetch('https://www.themealdb.com/api/json/v1/1/search.php?s=chicken');
const data = await res.json();
setMeals(data);
}, 5000)
}, [])
return (
<div className="App">
<header>
<h1>Meal Recipes</h1>
</header>
<div className="content">
<Home />
</div>
</div>
);
}
export default App;
```
Now we need to check if we have our meal results. Let's use conditional rendering in React to display our meal recipe results. Copy and paste the code below into the `Home.js` file:
```jsx
import { useEffect, useState } from 'react';
import { Link } from 'react-router-dom'
const Home = ( ) => {
const [meals, setMeals] = useState(null);
useEffect(() => {
setTimeout(async () => {
const res = await fetch(
"https://www.themealdb.com/api/json/v1/1/search.php?s=chicken"
);
const meals = await res.json();
setMeals(meals);
console.log(meals.meals[0])
}, 5000);
}, []);
return (
<>
<div className="bg-gray-900 text-white min-h-screen">
<div className="m-auto max-w-3xl flex flex-col items-center justify-center text-center">
<div id="meals" className="meals">
{meals &&
meals.meals.map((meal) => (
<div className="meal" key={meal.idMeal}>
<Link to={`/MealInfo/${meal.idMeal}`}>
<img
className="meal-img"
src={meal.strMealThumb}
alt={meal.strMeal}
/>
<div className="meal-info" data-mealid={meal.idMeal}>
<h3>{meal.strMeal}</h3>
</div>
</Link>
</div>
))}
</div>
){"}"}
</div>
</div>
</>
);
};
export default Home;
```
## Adding React Router
Since you're using `Link` from `react-router-dom`, we need to install `react-router-dom` in your project. Follow these steps:
1. Open your terminal or command prompt.
2. Navigate to your project directory.
3. Run the following command to install `react-router-dom` using npm:
```bash
npm install react-router-dom
```
Alternatively, if you're using Yarn, you can use this command:
```bash
yarn add react-router-dom
```
4. Once the installation is complete, you can import the necessary components from `react-router-dom` and start defining your routes in your `Home.js` file.
### Wrapping the App with BrowserRouter
Let’s wrap our App in main.js with BrowserRouter from react-router-dom. This is because we're using Link from react-router-dom in our Home component, which is calling useContext. The context it is looking for is provided by BrowserRouter, but our app is not wrapped by a BrowserRouter. Your main.js should be like this:
```.jsx
import React from 'react'
import ReactDOM from 'react-dom/client'
import App from './App.jsx'
import './index.css'
import { BrowserRouter } from 'react-router-dom'
ReactDOM.createRoot(document.getElementById("root")).render(
<React.StrictMode>
<BrowserRouter>
<App />
</BrowserRouter>
</React.StrictMode>
);
```
## Conclusion
In conclusion, in Part One of our series, we've laid the groundwork for implementing skeleton loading screens in React applications. By setting up a basic React project, creating components, adding styles, and fetching data from an API, we've prepared the foundation for integrating skeleton loading screens into our application.
In Part Two, we'll dive deeper into the implementation details of skeleton loading screens. We'll explore how to create reusable skeleton components, customize loading animations, and handle various loading scenarios efficiently. Click [here](https://dev.to/topeogunleye/part-two-creating-our-recipe-reusable-skeleton-component-cgj) for Part Two, where we'll take our skeleton loading screens to the next level!
| topeogunleye | |
1,894,077 | Demystifying UI, UX, and CX: Day 2 of My Journey into Design | Day 2: Learning UI/UX Design 👋 Hello, Dev Community! I'm Prince Chouhan, a B.Tech CSE student with... | 0 | 2024-06-19T20:43:35 | https://dev.to/prince_chouhan/demystifying-ui-ux-and-cx-day-2-of-my-journey-into-design-3564 | ui, uidesign, ux, uxdesign |
Day 2: Learning UI/UX Design
👋 Hello, Dev Community!
I'm Prince Chouhan, a B.Tech CSE student with a passion for UI/UX design. I'm excited to share my daily learnings as I delve deeper into this fascinating field. Join me on this journey as I explore the principles, techniques, and tools of UI/UX design.
---
🗓️ Day 2 Topic: Introduction to UI, UX, and CX
---
📚 Today's Learning Highlights:
1. Concept Overview:
- UI (User Interface):
- UI refers to the graphical layout of an app or website, including buttons, icons, text, and images.
- Examples: "Submit" buttons, search bars, home icons, and color schemes.
- Tools: Adobe XD, Sketch, Figma.
- Aim: To create visually appealing, intuitive, and consistent interfaces.
- UX (User Experience):
- UX refers to the overall experience a user has when interacting with a product.
- Examples: Ease of navigation, user journey, accessibility.
- Tools: InVision, Axure, Balsamiq.
- Aim: To provide a seamless, efficient, and enjoyable user experience.
- CX (Customer Experience):
- CX encompasses all interactions a customer has with a brand.
- Examples: Customer support, brand interaction, post-purchase experience.
- Aim: To build strong, positive relationships with customers.
2. Practical Application:
- Analyzed the UI of a popular website.
- Created a basic mockup using Figma, focusing on clean and user-friendly layout.
---
🚀 Future Learning Goals:
- Next, I'll explore UI Design Principles, starting with layout and visual hierarchy.
---
📢 Community Engagement:
- What are your favorite tools and resources for learning UI/UX design?
- Any advice for someone starting out in this field?
---
💬 Quote of the Day:
_"Design is not just what it looks like and feels like. Design is how it works." – Steve Jobs_
---
Thank you for reading! Stay tuned for more updates as I continue my journey in UI/UX design.
#UIUXDesign #LearningJourney #DesignThinking #PrinceChouhan | prince_chouhan |
1,894,076 | Boost Your Code Review Game with Custom VSCode Shortcuts ⌨️ | If you're a tech lead who spends a lot of time reviewing code and helping others, you know how... | 0 | 2024-06-19T20:42:09 | https://dev.to/krisplatis/boost-your-code-review-game-with-custom-vscode-shortcuts-3p8o | vscode, shortcuts, keybindings, codereview | **If you're a tech lead who spends a lot of time reviewing code and helping others, you know how crucial it is to be efficient. Over time, I've set up some handy keyboard shortcuts in Visual Studio Code (VSCode) that make my life a whole lot easier. Let me show you my `keybindings.json` and how these shortcuts can help you too.**
### Switch Between Diff and Source
Switching between the diff view and the source view can be such a hassle, especially during code reviews.
These shortcuts make it super easy:
```json
// Toggle between file diff and file source when in a diff editor
{
"key": "cmd+ctrl+f",
"command": "git.openFile",
"when": "editorFocus && isInDiffEditor"
},
// Toggle between file source and file diff when not in a diff editor
{
"key": "cmd+ctrl+f",
"command": "git.openChange",
"when": "editorFocus && !isInDiffEditor"
}
```
**How to Use**: Just hit `cmd+ctrl+f` to switch between the diff and source views, no matter where you are. It feels like magic!
### Quick GitHub Links
I share specific lines of our codebase from GitHub all the time. These shortcuts make copying and opening GitHub permalinks a breeze:
```json
// Copy GitHub permalink
{
"key": "shift+cmd+c",
"command": "issue.copyGithubPermalink"
},
// Open GitHub permalink
{
"key": "shift+cmd+x",
"command": "issue.openGithubPermalink"
}
```
**How to Use**: Press `shift+cmd+c` to copy a permalink to your clipboard and `shift+cmd+x` to open a permalink right in your browser. So simple and fast!
### See Tree of All Changed Files
During code reviews, I often need to switch between the tree of changed files and the tree of all files in the repo. This shortcut makes that transition seamless:
```json
// Focus on the active pull request
{
"key": "shift+cmd+l",
"command": "github:activePullRequest.focus"
}
```
**How to Use**: With the official GitHub Pull Request extension installed, press `shift+cmd+l` to quickly focus on the active pull request. This keeps your workflow smooth and uninterrupted. When you need to switch to a tree of all files in the project, you can use the default shortcut `shift+cmd+e`.
### Navigate Changes in File
When you're deep into a code review, navigating through changes efficiently is key. These shortcuts help you jump between changes within a single file effortlessly, without needing to manually scroll to the next change:
```json
// Go to the next change in the compare editor
{
"key": "ctrl+shift+down",
"command": "workbench.action.compareEditor.nextChange",
"when": "textCompareEditorVisible"
},
// Go to the previous change in the compare editor
{
"key": "ctrl+shift+up",
"command": "workbench.action.compareEditor.previousChange",
"when": "textCompareEditorVisible"
},
```
**How to Use**: Use `ctrl+shift+down` to jump to the next change and `ctrl+shift+up` to go back to the previous change in the compare editor. This makes navigating through changes in your code reviews quick and efficient.
### Wrapping Up
These custom VSCode shortcuts have saved me tons of time and made my work a lot smoother. Whether you’re reviewing code, sharing GitHub links or switching between diff and source file of your own changes, these shortcuts can seriously up your game. Give them a try and see how much easier your workflow can get! | krisplatis |
1,894,075 | shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 3 | In this article, I discuss how Blocks page is built on ui.shadcn.com. Blocks page has a lot of... | 0 | 2024-06-19T20:41:49 | https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-is-blocks-page-built-part-3-5fon | javascript, nextjs, shadcnui, opensource | In this article, I discuss how [Blocks page](https://ui.shadcn.com/blocks) is built on [ui.shadcn.com](http://ui.shadcn.com). [Blocks page](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/blocks/page.tsx) has a lot of utilities used, hence I broke down this Blocks page analysis into 5 parts.
1. [shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 1](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-1-ac4472388f0a)
2. [shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 2](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-2-7714c8f36a43)
3. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 3
4. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 4 (Coming soon)
5. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 5 (Coming soon)

In part 3, I will explain how createTempSourceFile, createSourceFile and _extractVariable work in order to understand_ getBlockCode completely. Keep in mind, we still need to get back to getBlock since this is used in BlockDisplay .
\_getBlockCode function code:
-----------------------------
```js
async function \_getBlockContent(name: string, style: Style\["name"\]) {
const raw = await \_getBlockCode(name, style)
const tempFile = await createTempSourceFile(\`${name}.tsx\`)
const sourceFile = project.createSourceFile(tempFile, raw, {
scriptKind: ScriptKind.TSX,
})
// Extract meta.
const description = \_extractVariable(sourceFile, "description")
const iframeHeight = \_extractVariable(sourceFile, "iframeHeight")
const containerClassName = \_extractVariable(sourceFile, "containerClassName")
// Format the code.
let code = sourceFile.getText()
code = code.replaceAll(\`@/registry/${style}/\`, "@/components/")
code = code.replaceAll("export default", "export")
return {
description,
code,
container: {
height: iframeHeight,
className: containerClassName,
},
}
}
```
In part 2, we looked at \_getBlockCode in great detail. Let’s understand createTempSourceFile.
createTempSourceFile
--------------------
[createTempSourceFile function](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L102) creates a [unique temporary directory](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L102).
```js
async function createTempSourceFile(filename: string) {
const dir = await fs.mkdtemp(path.join(tmpdir(), "codex-"))
return path.join(dir, filename)
}
```
tempdir is imported from “os”, since shadcn-ui/ui uses Next.js, all this code is executed on server, hence it has access to “os” node.js package. [lib/blocks.ts has “use server”](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L1) at the top of file.
Basically, what this code means is that a temporary file with block file name is temporarily placed in a temporary folder.
createSourceFile
----------------
[createSourceFile](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L102) is a function called using project object
```js
const sourceFile = project.createSourceFile(tempFile, raw, {
scriptKind: ScriptKind.TSX,
})
```
Hang on a minute, what is project object?
```js
const project = new Project({
compilerOptions: {},
})
```
At the top of the file, this project variable is initiated with a Project instance. Now what is Project? Project is imported from ts-morph? What is ts-morph? Let’s find out.
### ts-morph
[ts-morph](https://ts-morph.com/) is a library that wraps the TypeScript compiler API to simplify setup, navigation and manipulation of the Typescript AST. [This article](https://www.jameslmilner.com/posts/ts-ast-and-ts-morph-intro/) has a good explanation what Typescript AST means.
When the TypeScript compiler compiles your code, it creates an [Abstract Syntax Tree (AST)](https://en.wikipedia.org/wiki/Abstract_syntax_tree) of it. Essentially, an AST can be thought of as a tree representation of the syntax of your source code, with each node being a data structure representing a construct in the relating source code. The tree is complete with nodes of all the elements of the source code.
I definitely need to more research on this ts-morph concept and provide some examples in the [best practices module](https://tthroo.com/) and also talk about this a bit more in part 4.
```js
const sourceFile = project.createSourceFile(tempFile, raw, {
scriptKind: ScriptKind.TSX,
})
```
[ts-morph.com](http://ts-morph.com) [documentation](https://ts-morph.com/manipulation/performance#performance-tip-work-with-structures-instead) provides an example that uses [project.createSourceFile](https://ts-morph.com/manipulation/performance#performance-tip-work-with-structures-instead), just like how shadcn-ui/ui does it.
We know how tempFile is created, raw is variable containing the file code returning by _getBlockCode, for example, code sitting at_ [_https://github.com/shadcn-ui/ui/blob/main/apps/www/_](https://github.com/shadcn-ui/ui/blob/main/apps/www/__registry__/new-york/block/authentication-04.tsx)[\_registry\_\_/new-york/block/authentication-04.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/__registry__/new-york/block/authentication-04.tsx)
I will provide an example project that uses ts-morph and performs the similar operations that shadcn-ui/ui does to understand createSourceFile and extractVariable. getBlock also uses these similar functions in part 4.
Conclusion:
-----------
shadcn-ui/ui performs some additional operations using ts-morph after reading the code from _registry_ folder’s blocks file. Why not just read the file directly? this is an interesing question. I will find out why there is a need to use ts-morph to extract the code further from the read file in part 4.
This is the beauty of reading/studying OSS code, you are exposed to a lot of new technical concepts. I do not know what ts-morph is yet… but I will find out soon and provide an example project after running some experiments. I will keep this example limited to the similar operations used in shadcn-ui/ui.
This part 3 does provide insights into creating a temporary folder using “os” package since lib/block.ts has “use server” at the top of this file.
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/)
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L107](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L107)
2. [https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L102](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L102)
3. [https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L16](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L16)
4. [https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L135](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L135) | ramunarasinga |
1,894,074 | Introduction to GitHub Actions: Easy Guide | If you've ever wished for a way to automate your code workflows on GitHub, GitHub Actions is the... | 0 | 2024-06-19T20:41:09 | https://dev.to/gervaisamoah/introduction-to-github-actions-easy-guide-3p23 | githubactions, programming, automation, webdev | If you've ever wished for a way to automate your code workflows on GitHub, GitHub Actions is the superhero you've been looking for. Whether you're pushing code, creating pull requests, or releasing new versions, GitHub Actions can handle it all with style and efficiency. Let's dive into the basics and get you started with some actionable tips!
### What's GitHub Actions?
GitHub Actions is like having a personal assistant for your code. It allows you to automate, customize, and execute your software development workflows right in your GitHub repository. These workflows are defined in YAML files, which are easy to read and write. YAML stands for "YAML Ain't Markup Language," (clever huh) and it's perfect for creating data files that are both human-readable and machine-friendly.
### Getting Started with Workflows
At the heart of GitHub Actions are workflows. Think of a workflow as a series of steps that GitHub will run in response to specific events, like a code push or a pull request. Here's a quick breakdown of the key attributes you'll use when defining a workflow:
- **name**: This is the name of your workflow. It's optional, but it's good practice to name it for clarity.
- **on**: This specifies the event that triggers the workflow. It’s mandatory. Events can be anything from a push, pull request, release, to [GitHub webhooks](https://docs.github.com/en/webhooks).
- **jobs**: This section lists all the jobs your workflow will run. Each job has a unique identifier and includes:
- **runs-on**: The type of machine required for the job.
- **steps**: The specific actions and commands needed to run the job. Each step can access the file system but runs in its own process.
### Creating Your First Workflow
To create a workflow, you'll need to add a YAML file to your repository under `.github/workflows`. Here’s a simple example to get you started:
```yaml
name: CI
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Run a one-line script
run: echo "Hello, world!"
```
This example sets up a continuous integration (CI) workflow triggered on push and pull request events. It runs on an Ubuntu machine and includes two steps:
- **Checkout Code**: The `actions/checkout@v2` action checks out your repository code so that the workflow can access it.
- **Run a Script**: A simple script that echoes "Hello, world!" to the console.
This basic setup gives you a taste of how GitHub Actions can automate tasks for you. From here, you can start adding more complex steps and actions to fit your project's needs.
### Jobs in Your Workflow
A workflow without jobs is like a pizza without toppings—pretty useless! Each workflow needs at least one job, and jobs can run in parallel by default. However, if you need jobs to run sequentially, you can create dependencies using the `needs` keyword.
```yaml
jobs:
job1:
runs-on: ubuntu-latest
steps:
- run: echo "Job 1"
job2:
runs-on: ubuntu-latest
needs: job1
steps:
- run: echo "Job 2 depends on Job 1"
```
In this example, we have two jobs, `job1` and `job2` that will run on the latest version of Ubuntu available on GitHub Actions. Each has a single step that runs the command `echo` to print "Job 1" for the first job and "Job 2" for the second one to the console.
The `needs` keyword on the `job2` indicates that it depends on the successful completion of `job1`. This creates a sequential dependency where job2 will only run after job1 finishes successfully.
### Dependencies and Conditions
Sometimes, you need to add conditions to your workflows to control when and how they run for events like `push` or `pull_request`. For instance, if you only want your workflow to run on specific branches, you can add conditions using a YAML block. In that case, you don't use a value for the `on` keyword anymore. The event needs to be in its own YAML block and we add a `branches` conditional to list the branches we want to focus on, like so:
```yaml
on:
push:
branches:
- main
- develop
```
Now that you understand how to create workflows, add jobs, and set dependencies and conditions, let's see how these concepts come together in a real-world example.
### Real-World Example
Let’s say you have a Node.js project and you want to automate your testing process. Here’s how you could set up a GitHub Action to run your tests every time you push code:
```yaml
name: Node.js CI
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [12.x, 14.x, 16.x]
steps:
- uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v2
with:
node-version: ${{ matrix.node-version }}
- run: npm install
- run: npm test
```
Let's break this down:
- **name**: The workflow is named "Node.js CI".
- **on**: The workflow is triggered by push and pull request events.
- **jobs**: One job named `build`.
- **runs-on**: Specifies the type of machine required for the job. Here, it’s set to `ubuntu-latest`, which means the job will run on the latest version of Ubuntu available on GitHub Actions.
- **strategy**: This uses a matrix to test multiple Node.js versions.
- **steps**: The workflow checks out the code, sets up Node.js, installs dependencies, and runs tests.
This is how you can combine various elements to create a powerful and automated CI workflow for your project.
### Common Pitfalls and Solutions
Here are some pitfalls to be aware of:
- **Incorrect YAML Indentation:** YAML is sensitive to indentation. Make sure all nested elements are properly indented.
- **Event Triggers Not Working:** Double-check the spelling and formatting of event triggers. They must match [GitHub's predefined events](https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows).
- **Missing Permissions:** Sometimes workflows fail due to insufficient permissions. Ensure that your workflow has the necessary permissions to access required resources.
### Advanced Tips
- **Using Secrets:**
```yaml
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Deploy to production
env:
API_KEY: ${{ secrets.API_KEY }}
run: ./deploy.sh
```
This ensures your sensitive data remains secure.
- **Custom Actions:** Create your own actions to encapsulate repeated tasks.
- **Matrix Builds:** Use matrix builds to test your code across multiple environments.
Stay tuned for more advanced tips in upcoming posts!
### Workflow and Action Limitations
While GitHub Actions is incredibly powerful, there are some limitations to keep in mind:
- Each repository can have multiple workflows, but only 20 can run simultaneously.
- The free GitHub plan allows for 20 concurrent jobs per repository, with runtime limits of 6 hours per job.
- If your workflows interact with the GitHub API, you're limited to 1000 API requests per hour.
These limits might change, so always check the latest GitHub documentation for updates.
### Wrapping Up
GitHub Actions is a game-changer for automating your development workflows. By mastering the basics of YAML and understanding how to structure your workflows and jobs, you can significantly boost your productivity. So, go ahead and start creating your custom workflows—your future self will thank you! | gervaisamoah |
1,894,073 | DEPLOYMENT OF NODEJS APP IN KUBERNETES WITH DEVSECOPS CICD PIPELINE | I'm thrilled to announce that I have successfully implemented a comprehensive CI/CD pipeline for... | 0 | 2024-06-19T20:40:51 | https://dev.to/bhaveshkansagara/deployment-of-nodejs-app-inkubernetes-with-devsecopscicd-pipeline-55i6 | devops, docker, cloud, security | I'm thrilled to announce that I have successfully implemented a comprehensive CI/CD pipeline for DEPLOY THE NODEJS APP IN
KUBERNETES WITH DEVSECOPS
CICD PIPELINE Project! This pipeline ensures seamless integration, continuous delivery, and robust security checks, enhancing our development process significantly. Here's a breakdown of the stages:
Create Two Ec2 with T2-Large Instance 30GB storage & Install Jenkins.
CI/CD Pipeline Stages:
1. Checkout SCM: Retrieves the latest code from our Git repository.
2. Tool Install: Install necessary tools like JDK 17 and Node.js 16.
3. Clean Workspace: Ensures a fresh build environment.
4. Checkout from Git: Fetches the code from the main branch.
5. SonarQube Analysis: Conducts code quality analysis with SonarQube.
6. Quality Gate: Validates code quality against predefined metrics.
7. Install Dependencies: Installs project dependencies using npm.
8. OWASP FS Scan: Performs a security scan using OWASP Dependency Check.
9. Trivy FS Scan: Conduct a security scan on the file system with Trivy.
10. Docker Build & Push: Builds the Docker image and pushes it to Docker Hub.
11. Trivy Image Scan: Scans the Docker image for vulnerabilities using Trivy.
12. Deploy to Container: Deploy the Docker container.
13. Deploy to Kubernetes: Applies Kubernetes deployment configuration.
Key Highlights:
1. SonarQube Integration: Ensures code quality and adherence to best practices.
2. Security Scans: OWASP and Trivy scans enhance our security posture.
3. Automated Deployments: Streamlined deployments to Docker and Kubernetes.
| bhaveshkansagara |
1,894,072 | Testing Post | Link | 0 | 2024-06-19T20:38:47 | https://dev.to/john00900/testing-post-2fn7 | [Link](http://example.com)
| john00900 | |
1,894,071 | ATS Score | Nowadays, getting a job is way to easy and difficult as well. Everyone looks into growth of business... | 0 | 2024-06-19T20:37:29 | https://dev.to/sambit_kumar_sahu/ats-score-1gce | react, webdev, productivity, career | Nowadays, getting a job is way to easy and difficult as well. Everyone looks into growth of business but very difficult to one to make ourselves groom. From creating a resume to interview preparation is not focusing, specific to my college. Want to believe like learning from ourselves through a community is well growth. Few websites to help in growth of resume creation through programming language based to interview question preparation by AI is far more helpful but not aware of. Nowadays for a resume creation, language like LaTex do exists for crafting resume. Gemini and ChatGPT is way more helpful to prepare for interviews. All thanks to the open source and internet avaliablity for the student growth. Urge to utilize it the correct way to make future way ahead. | sambit_kumar_sahu |
1,894,067 | Measured now supports your favorite Kotlin platform | Measured now supports tons of Kotlin targets like iOS and all Apple platforms, Android Native, Linux,... | 0 | 2024-06-19T20:27:17 | https://dev.to/pusolito/measured-now-supports-your-favorite-kotlin-platform-1e1b | programming, kotlin, opensource, multiplatform | [Measured](https://github.com/nacular/measured) now supports tons of Kotlin targets like iOS and all Apple platforms, Android Native, Linux, etc. So you can work with units of measure seamlessly no matter where you app runs.
Measured makes units of measure simple by providing intuitive, strongly-typed (i.e. compile-time enforced) units that combine using normal mathematical operations.
```
val velocity = 5 * meters / seconds
val acceleration = 9 * meters / (seconds * seconds)
val time = 1 * minutes
// d = vt + ½at²
val distance = velocity * time + 1.0 / 2 * acceleration * time * time
println(distance ) // 16500 m
println(distance `as` kilometers) // 16.5 km
println(distance `as` miles ) // 10.25262467191601 mi
println(5 * miles / hours `as` meters / seconds) // 2.2352 m/s
``` | pusolito |
1,894,068 | Create Your Portfolio in 3 steps with the Frontend AI Tool | FrontendAI is here to change the way developers create web projects by automating the initial phases... | 0 | 2024-06-19T20:26:45 | https://dev.to/buildwebcrumbs/effortlessly-create-your-react-portfolio-with-the-frontend-ai-tool-1i4e | frontend, ia, portfolio, tutorial |
[FrontendAI](https://tools.webcrumbs.org/) is here to change the way developers create web projects by automating the initial phases of design and code generation. Whether it’s building a professional portfolio or launching a new product feature, this tool simplifies the development process by generating wireframes and code based on specified requirements.
This guide uses the example of creating a simple portfolio to demonstrate how [FrontendAI](https://tools.webcrumbs.org/) can be applied to practical development scenarios.
[
I have a video if you prefer that format.](https://www.youtube.com/watch?v=waidelWflf8)
## Key Steps to Using FrontendAI
### 1. Initiate Wireframe Generation
Start by inputting your specific requirements into [FrontendAI](https://tools.webcrumbs.org/), such as the type of portfolio you want. The AI then processes this information to generate a basic wireframe for your portfolio.

### 2. Select and Customize a Theme
Choose a theme that aligns with your personal or professional brand. Customize the colors and fonts to enhance the aesthetic appeal of your portfolio. This step ensures your portfolio not only functions well but also stands out visually.

### 3. Refine the Generated Code
After customizing the design, refine the automatically generated code to ensure it fits the specific functionalities and requirements of your project. This might include adjusting responsiveness, optimizing load times, or adding advanced features.

## Tips for Maximizing Efficiency with FrontendAI
**- Simplify Your Design:** Start with simple requirements to get the most out of the AI’s initial wireframe.
**- Iterative Design Process:** Use the initial wireframe as a base to iteratively refine and expand your portfolio’s design.
**- Community Engagement:** [Engage with the FrontendAI community](https://discord.gg/4PWXpPd8HQ) to share feedback, get tips, and stay updated on new features.
**- Design Limitations:** While FrontendAI provides a solid starting point, the initial wireframes might require significant customization to meet specific needs.
**- Framework Specificity:** Currently, FrontendAI is optimized for React. We are working to expand compatibility with other frameworks.
**- Code Review:** Always review and test the generated code thoroughly to ensure it integrates well with your project without issues.
**- Credit the Tool:** If you showcase the portfolio online, credit [FrontendAI](https://tools.webcrumbs.org/) for its role in the development process.
## And here you have it
[FrontendAI](https://tools.webcrumbs.org/) can help developers to have an easier approach portfolio creation.
By automating the initial stages of design and code generation, [FrontendAI](https://tools.webcrumbs.org/) allows you to focus on what truly matters—perfecting your portfolio to showcase your best work.
Try [FrontendAI](https://tools.webcrumbs.org/) for your next project and experience the ease of creating a professional portfolio. Share your creations and experiences in our community forum and help us improve the tool for all developers.
Thanks for reading,
Pachi 💚 | pachicodes |
1,894,034 | How CAPTCHAs work - The Internet's Annoying but Essential Gatekeepers | If you live on the internet (which is practically everyone these days), you likely come across a... | 0 | 2024-06-19T20:23:18 | https://dev.to/gaw/how-captchas-work-the-internets-annoying-but-essential-gatekeepers-p4j |

If you live on the internet (which is practically everyone these days), you likely come across a CAPTCHA often - those somewhat irritating puzzles asking you to pick traffic lights to prove you’re not a robot. Have you ever wondered how these things work and why they’re needed? Let’s take a deep dive into it. (P.S. There's a bonus section containing bad CAPTCHA examples at the end).
## What's a CAPTCHA
[CAPTCHA](https://en.wikipedia.org/wiki/CAPTCHA?ref=blog.kusho.ai) (which stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”) is a tool used by sites to identify whether an internet user is a human or a bot.
Around 1997, AltaVista (a primitive search engine of that decade) was having a tough time fixing the high number of auto URL assets that were hampering its website ranking process severely. To solve this issue, the then-chief scientist of AltaVista, Andrei Broder, created an algorithm that later became famous as CAPTCHA.
Because the test is administered by a computer, in contrast to the standard Turing test that is administered by a human, CAPTCHAs are sometimes described as [reverse Turing tests](https://en.wikipedia.org/wiki/Reverse_Turing_test?ref=blog.kusho.ai).
## Why is CAPTCHA needed?
Tl;dr - To make sure some sensitive resource is not exploited using bots
CAPTCHAs are used by any website that wishes to restrict usage by bots. For instance, if you’re running a hotel reservation site, there’s a chance that someone might try to take all the bookings using a bot and legit customers will not be left with any slots. Or you might be running an online poll and want to make sure that votes are not manipulated using bots. CAPTCHA comes in handy in these scenarios making sure only human users can perform some actions blocking access to bot users.
## How it works
There are different types of CAPTCHAs available in the market these days, each a little different in its workings than the last one. But all of them work on the same principle - ask users to perform some actions that are trivial for a human to do but almost impossible to automate.
Here are some common CAPTCHA flavours and how they work -
## Classic CAPTCHA (Text-based CAPTCHA)
These are the oldest variants of CAPTCHAs, as the name suggests. Classic CAPTCHAs work by asking a user to identify words. These words are shown in a distorted blurry manner with different fonts and handwriting which makes it very difficult for bots to identify them using OCR but it’s still trivial for human users to decipher.

## ReCAPTCHA
Google came up with a new way to identify human users which doesn’t require users to enter anything. It just asks you to click on a checkbox.

How can you figure out if a user is a human or a bot by just looking at a checkbox click, you may wonder. The answer is not in the click but in what happens before you click the checkbox. Google has a [risk analysis engine](https://security.googleblog.com/2014/12/are-you-robot-introducing-no-captcha.html?ref=blog.kusho.ai) that looks at things like how the cursor moved on its way to the checkbox (organic random path/acceleration vs cursor moving in a straight line), which part of the checkbox was clicked (random places, or dead on center every time), browser fingerprint, Google cookies & contents, click location history tied to your fingerprint or account if it detects one, etc. to differentiate between organic and automated clicks.
## Image-recognition CAPTCHA
For an image-recognition CAPTCHA, users are presented with square images of common objects like bikes, buses, traffic lights, etc. The images may all be from the same large image, or they may each be different. A user has to identify the images that contain certain objects. If their response matches the responses from most other users who have submitted the same test, the answer is considered "correct" and the user passes the test.

## Bonus: 5 examples of terrible CAPTCHAs that will make you pull out your hair

_You need a physics degree to solve CAPTCHAs now?_

_Good luck deciphering this_

_If "you shall not pass" was an image_

_Speechless_

_Maybe the robots should take over_
| gaw | |
1,894,039 | Blockchain in Banking: Revolutionizing the Financial Sector | Introduction Blockchain technology, originally devised for Bitcoin, has evolved into... | 27,673 | 2024-06-19T20:18:15 | https://dev.to/rapidinnovation/blockchain-in-banking-revolutionizing-the-financial-sector-430p | ## Introduction
Blockchain technology, originally devised for Bitcoin, has evolved into a
decentralized digital ledger that records transactions securely and
transparently. This technology is transforming data management across various
sectors, including banking.
## What is Blockchain?
Blockchain is a distributed ledger shared among nodes in a network. It ensures
data integrity and security without the need for a central authority, making
it ideal for applications like cryptocurrencies and beyond.
## How Does Blockchain Technology Work?
Blockchain operates on decentralization, where control is distributed across a
network of nodes. Transactions are grouped into blocks, validated by miners,
and secured through consensus mechanisms like Proof of Work and Proof of
Stake.
## Types of Blockchain Deployments in Banking
Banking utilizes various blockchain types: public blockchains for
transparency, private blockchains for controlled access, and consortium
blockchains for collaborative environments among multiple organizations.
## Top 7 Ways Banks Benefit From Blockchain Technology
Blockchain offers numerous benefits to banks, including:
## Challenges of Implementing Blockchain in Banking
Despite its benefits, blockchain faces challenges in banking, such as
regulatory uncertainties, scalability issues, and integration with legacy
systems.
## Future of Blockchain in Banking
The future of blockchain in banking is promising, with evolving regulations,
technological advancements, and increasing adoption driving its integration
into the financial sector.
## Real-World Examples of Blockchain in Banking
Notable examples include JPMorgan Chase's JPM Coin for instant payments and
HSBC's blockchain-based letter of credit for international trade efficiency.
## Why Choose Rapid Innovation for Blockchain Implementation and Development
Rapid Innovation offers expertise in AI and blockchain, customized solutions,
and a proven track record with industry leaders, making it the ideal partner
for blockchain development.
## Conclusion
The integration of advanced technologies like blockchain in banking enhances
efficiency, security, and customer experience, positioning banks to stay
competitive in a digital landscape.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa)
[AI Development](https://www.rapidinnovation.io/ai-software-development-
company-in-usa)
[Blockchain Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa) [AI
Development](https://www.rapidinnovation.io/ai-software-development-company-
in-usa)
## URLs
* <https://www.rapidinnovation.io/post/top-7-ways-banks-benefit-from-blockchain-tech>
## Hashtags
#BlockchainTechnology
#BankingInnovation
#DecentralizedFinance
#BlockchainSecurity
#FutureOfBanking
| rapidinnovation | |
1,894,038 | Automatische Backups von PostgreSQL via Cronjob | Kontext Um die Daten deiner Kunden zu sichern müssen diese Regelmäßig auf einem Server... | 0 | 2024-06-19T20:16:15 | https://dev.to/rubenvoss/automatische-backups-von-postgresql-via-cronjob-2ib4 | ## Kontext
Um die Daten deiner Kunden zu sichern müssen diese Regelmäßig auf einem Server abgespeichert werden. Dieser muss unbeeinträchtigt sein, falls die Infrastruktur auf der deine Datenbank läuft kaputtgeht. Also am besten bei einem Anderen Provider, oder in einer anderen Gegend. Ich zeige dir hier wie du das über einen Cronjob einrichten kannst, der von deiner Datenbank im Docker-Container ein Backup erstellt.
## Befehl Testen
Dein Container braucht einen Namen, damit du in ihm einen Shell-Befehl auslösen kannst. In docker-compose.yml sieht das so aus:
```
postgres:
container_name: postgres
```
Jetzt kannst du mit `docker exec -it` In deinem Container einen Befehl auslösen. Probiere es mal aus:
```
docker exec -it mein_container_name pg_dumpall -U mein_datenbank_nutzer
```
Mit diesem Befehl sollte sich nun dein Bildschirm mit dem PostgreSQL dump füllen. Wie aber kannst du das ganze nun Speichern?
Durch Piping bringst du den Output vom pg_dumpall Befehl in eine Datei.
```
# mit gzip kannst du deine Datei komprimieren.
# mit $(date +%Y%m%d-%H%M%S) kriegt deine Datei einen einzigartigen Namen.
# mit dem \ Backslash kannst du deinen Befehl in mehrere Zeilen aufteilen
docker exec -it mein_container_name pg_dumpall -U mein_datenbank_nutzer \
| gzip -9c > db_backup_$(date +%Y%m%d-%H%M%S).sql.gz
```
Nun kannst du selbst dein Backup erstellen. Aber wie geht das ganze automatisch? Dazu kommen wir jetzt.
## Crontab bearbeiten
Regelmäßige Befehle werden auf deinem System in der Crontab eingestellt. Die crontab deines Nutzers kannst du mit `crontab -e` bearbeiten.
Auf https://crontab.guru/ kannst du dir anschauen, was die Zeichen am Anfang deines cronjobs bedeuten. Wir können unser Backup zum Beispiel immer nachts um 03.22 Uhr ausführen:
```
22 3 * * * docker exec -it postgres pg_dumpall -U ruben | gzip -9c > /backup/location/db-$(date +%Y%m%d-%H%M%S).sql.gz
```
Um deine Crontab zu Testen aber am besten am Anfang jede Minute:
```
* * * * * /shell/befehl
```
`/backup/location/` ersetzt du durch einen vorübergehenden Speicherort auf deinem Server für die Backups. Z.B. home-Verzeichnis.
Jetzt musst du nur noch mit scp dein Backup zu einem anderen Server bringen. Das kannst du zum beispiel eine Stunde später tun...
```
22 4 * * * scp /backup/location/db-$(date +%Y%m%d)*.sql.gz mein_user@mein_server:/backup/location/
```
Und dein ursprüngliches Backup auf deinem Server Löschen damit dein Speicher nicht überfüllt wird...
```
22 5 * * * rm /backup/location/db-$(date +%Y%m%d)*.sql.gz
```
Das sind aber ganz schön viele cronjobs... Was wenn deine Datenbanksicherung länger dauert als eine Stunde? dann wird dein Backup nicht auf den Backupserver kopiert. D.h. es war quasi fehlerhaft.
Also das ganze lieber als bash-skript aufsetzen.
```
touch backup.sh
chmox +x backup.sh
vi backup.sh
```
So könnte dein backup.sh aussehen:
```
#!/bin/bash
DATE=$(date +%Y%m%d-%H%M%S)
BACKUP_FILE=/home/user/db-$DATE.sql.gz
/usr/bin/docker exec -it postgres pg_dumpall -U ruben | gzip -9c > $BACKUP_FILE
echo "Backup $BACKUP_FILE erstellt" >> /home/user/backup.log
scp $BACKUP_FILE mein_user@mein_server:/backup/location/
rm $BACKUP_FILE
```
Und bei deiner Crontab fügst du es einfach so hinzu:
```
22 3 * * * /backup/script/location/backup.sh
```
Happy Coden!
Dein Ruben
[Mein Blog](rubenvoss.de)
| rubenvoss | |
1,894,025 | CVPR Survival Guide: Discovering Research That's Interesting to YOU! | Author: Harpreet Sahota (Hacker in Residence at Voxel51) A Cool Way to Discover Topics and... | 0 | 2024-06-19T20:13:30 | https://medium.com/voxel51/cvpr-survival-guide-discovering-research-thats-interesting-to-you-d40b272d4ac1 | computervision, datascience, machinelearning, ai | _Author: [Harpreet Sahota](https://www.linkedin.com/in/harpreetsahota204/) (Hacker in Residence at [Voxel51](https://voxel51.com/))_
## A Cool Way to Discover Topics and Trends at the Biggest CV Conference of the Year

_<center>Souce:[CVPR Website](https://medium.com/r?url=https%3A%2F%2Fcvpr.thecvf.com%2F)</center>_
The 2024 Conference on Computer Vision and Pattern Recognition (CVPR) received 11,532 valid paper submissions, and only 2,719 were accepted for an overall acceptance rate of about 23.6%.
But keeping up with the vast array of research being presented at this year's CVPR can be challenging. CVPR has an awesome website listing out all the paper, but the information I want is scattered across various links and platforms. Needless to say, getting a good idea of what's being presented is time-consuming (and a bit disorganized).
But what if you could access all this knowledge in one convenient location, allowing you to identify trends and gain valuable insights easily?
Well, I curated a dataset hosted on Hugging Face and built it with FiftyOne, which does just that -it helps you explore this year's conference offerings. I was able to find/scrape 2,389 of the 2,719 accepted papers, and I put them into a dataset that we will explore together!
Btw this post is available as a Google Colab notebook [here](https://colab.research.google.com/drive/1N0-0FCxZ_ZRl-G8t6bUS7HG3oG3nrLUD?usp=sharing), though I recommend running it locally if you can.
> tl;dr
_• CVPR 2024 received 11,532 paper submissions, with 2,719 accepted for a 23.6% acceptance rate._
_• I curated a dataset of 2,389 accepted papers, hosted on Hugging Face and built with FiftyOne. It includes paper images, titles, authors, abstracts, links, categories, and keywords._
_• The dataset is hosted on Hugging Face and can be loaded into FiftyOne, which you can use for managing, querying, visualizing and analyzing the papers._
_• Text embeddings were generated for the titles and abstracts using the gte-large-en-v1.5 model from Sentence Transformers._
_• FiftyOne Brain was used to visualize the embeddings with UMAP, compute uniqueness scores to find the most unique papers, and index the embeddings by similarity to easily find similar papers._
## 🧐 What's in this dataset?

_<center>Photo by [fabio](https://unsplash.com/@fabioha?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com/?utm_source=medium&utm_medium=referral)</center>_
The dataset consists of images of the first pages of papers, their titles, a list of authors, their abstracts, direct links to papers on arXiv, project pages, a category breakdown according to the arXiv taxonomy, and keywords that I bucketed from the 2024 CVPR call for papers.
Here are the fields:
An image of the first page of the paper
- `title`: The title of the paper
- `authors_list`: The list of authors
- `abstract`: The abstract of the paper
- `arxiv_link`: Link to the paper on arXiv
- `other_link`: Link to the project page, if found
- `category_name`: The primary category of this paper, according to arXiv taxonomy
- `all_categories`: All categories this paper falls into, according to arXiv taxonomy
- `keywords`: Extracted using GPT-4o
This should give us enough information to pick up some interesting trends for this year's CVPR!
> PS: Check out my picks for awesome papers at CVPR in [my GitHub repo](https://github.com/harpreetsahota204/awesome-cvpr-2024). Here's some general code for [how I scraped the CVPR data](https://github.com/harpreetsahota204/CVPR-2024-Papers).
Let's start by installing some dependencies:
```
%%capture
!pip install fiftyone sentence-transformers umap-learn lancedb scikit-learn==1.4.2
```
This tutorial will make use of the clustering plugin. Check out all available plugins [here](https://huggingface.co/blog/harpreetsahota/github.com/voxel51/fiftyone-plugins).
```
!fiftyone plugins download https://github.com/jacobmarks/clustering-plugin
```
```
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
```
FiftyOne natively integrates with Hugging Face's datasets library.
The integration allows you to push datasets to and load datasets from the Hugging Face Hub. It's a nice integration that simplifies sharing datasets with the machine learning community and accessing popular vision datasets. You can load datasets from specific revisions, handle multiple media fields, and configure advanced settings through the integration - check out the Hugging Face organization page [here](https://huggingface.co/Voxel51) to see what datasets are available.
I've [posted the dataset on Hugging Face](https://huggingface.co/datasets/Voxel51/CVPR_2024_Papers) - feel free to smash a like on it to help spread the word - and you can access it as follows:
```
dataset = fouh.load_from_hub("Voxel51/CVPR_2024_Papers")
```
You've now loaded the dataset into FiftyOne format!
The FiftyOne dataset object gives you a high-level interface for performing various dataset-related tasks, such as loading data, applying transformations, evaluating models, and exporting datasets in different formats. The dataset represents a collection of samples and fields (associated metadata, labels, and other annotations).
It provides a convenient way to store, manipulate, and query datasets in FiftyOne.
**Some cool things you can do with the dataset object:**
1. Visualize various data, including images and videos and associated annotations like bounding boxes, segmentation masks, arbitrary text, and classification labels.
2. Attach metadata to each sample in the dataset, like arbitrary text fields, lists, etc.
3. Query it to filter and select subsets of samples based on their metadata, labels, or other criteria.
You can launch the app like so:
```
session = fo.launch_app(dataset, auto=False)
session.show()
```
## Take a look at the app below
With it, you can get insight into the distribution of keywords, categories, and the number of papers a given author (or at least someone with that name) has attributed to them at this year's conference!
{% embed https://youtu.be/ol1EAA0OeyE %}
You can do more interesting analysis from here. Start by getting embeddings for the title and abstract of each paper. For that, you can make use of `gte-large-en-v1.5`. It's small, it's fast, and it's good.
Of course, feel free to choose any model you'd like.
```
%%capture
from sentence_transformers import SentenceTransformer
model = SentenceTransformer(
'Alibaba-NLP/gte-large-en-v1.5',
trust_remote_code=True
)
```
The code below will help generate and add text embeddings to a FiftyOne dataset.
`get_text_embeddings(dataset, field, model)`
```
def get_text_embeddings(dataset, field, model):
"""
Returns the embeddings of the abstracts in the dataset.
Args:
dataset: A FiftyOne dataset object.
Returns:
A list of embeddings.
"""
texts = dataset.values(field)
text_embeddings = []
for text in texts:
embeddings = model.encode(text)
text_embeddings.append(embeddings)
return text_embeddings
```
- This function takes a FiftyOne dataset, a field name containing text data, and a pre-trained embedding model.
- It retrieves the text data from the specified field of the dataset.
It generates embeddings for each text using the provided embedding model.
- It returns a list of embeddings.
`add_embeddings_to_dataset(dataset, field, embeddings)`
```
def add_embeddings_to_dataset(dataset, field, embeddings):
"""
Adds the embeddings to the dataset.
Args:
dataset: A FiftyOne dataset object.
embeddings: A list of embeddings.
"""
dataset.add_sample_field(field, fo.VectorField)
dataset.set_values(field, embeddings)
```
- This function takes a FiftyOne dataset, a field name to store the embeddings, and a list of embeddings.
- It adds a new sample field to the dataset to store the embeddings.
- It sets the values of the newly added field to the provided embeddings.
**Combine them:**
```
abstract_embeddings = get_text_embeddings(
dataset = dataset,
field = "abstract",
model = model
)
add_embeddings_to_dataset(
dataset=dataset,
field="abstract_embeddings",
embeddings=abstract_embeddings
)
title_embeddings = get_text_embeddings(
dataset = dataset,
field = "title",
model = model
)
add_embeddings_to_dataset(
dataset=dataset,
field="title_embeddings",
embeddings=title_embeddings
)
```
And, in a nutshell, by running this code you've:
- Extract text data from a specific field in a FiftyOne dataset.
- Generate embeddings for each text using a pre-trained embedding model.
- Add the generated embeddings back to the dataset as a new field.
## Making use of the embeddings
You can use [FiftyOne Brain](https://docs.voxel51.com/user_guide/brain.html) to do some cool stuff with embeddings, like:
- Visualizing datasets in low-dimensional embedding spaces to observe patterns and clusters.
- Compute uniqueness scores for images (or embeddings) to identify the most (or least) unique sample.
- Index datasets by similarity to easily find similar samples.
## Visualizing embeddings
Below are the supported dimensionality reduction methods in the Brain:
### UMAP (Uniform Manifold Approximation and Projection)
UMAP is a dimensionality reduction technique that uses applied Riemannian geometry and algebraic topology to find low-dimensional embeddings of structured data.
It is particularly well-suited for text embeddings because it can handle high-dimensional data and preserve the global structure of the data, making it useful for both visualization and preprocessing for clustering algorithms.
### t-SNE (t-distributed Stochastic Neighbor Embedding)
t-SNE is a non-linear dimensionality reduction technique used to visualize high-dimensional data. It is similar to UMAP but tends to be slower and less scalable.
While it can be effective for certain data types, it may not perform as well as UMAP for large datasets.
### PCA (Principal Component Analysis)
PCA is a linear dimensionality reduction technique that projects high-dimensional data onto lower-dimensional subspaces. It is fast and easy to implement but may not capture non-linear relationships in the data as effectively as UMAP or t-SNE.
PCA is often used for simpler data sets where linearity is a reasonable assumption.
### Manual
Manually computing a low-dimensional representation involves creating a custom method to reduce the dimensionality of the data. This approach can be time-consuming and requires a deep understanding of the data and the desired outcome.
```
import fiftyone.brain as fob
fob.compute_visualization(
dataset,
embeddings="abstract_embeddings",
num_dims=2,
method="umap",
brain_key="umap_abstract",
verbose=True,
seed=51
)
fob.compute_visualization(
dataset,
embeddings="title_embeddings",
num_dims=2,
method="umap",
brain_key="umap_title",
verbose=True,
seed=51
)
```
## Computing uniqueness
The code below adds a uniqueness field to each sample, scoring how unique it is with respect to the rest of the samples. This is interesting because you can understand which papers are the most unique (based on their abstracts) among all the papers in the dataset.
```
fob.compute_uniqueness(
dataset,
embeddings="abstract_embeddings",
uniqueness_field="uniqueness_abstract",
)
fob.compute_uniqueness(
dataset,
embeddings="title_embeddings",
uniqueness_field="uniqueness_title",
)
```
## Computing similarity
The code below will index the abstract embeddings by similarity, and you can easily query and sort your datasets to find similar examples. Once you've indexed a dataset by similarity, you can use the `sort_by_similarity()` view stage to sort the dataset by abstract similarity programmatically! The code below uses LanceDB as the back end(read about the integration [here](https://docs.voxel51.com/integrations/lancedb.html)), but there several backends you can use:
• `sklearn` (default): a scikit-learn backend
• `qdrant`: a Qdrant backend
• `redis`: a Redis backend
• `pinecone`: a Pinecone backend
• `mongodb`: a MongoDB backend
• `milvus`: a Milvus backend
The library is open source, and we welcome contributions. Feel free to integrate it with your favorite vector database.
```
sim_abstract = fob.compute_similarity(
dataset,
embeddings="abstract_embeddings",
brain_key="abstract_similarity",
backend="lancedb",
)
```
## Now, let's check all this out in the app!
Check out the short video below, where I'll show you how to use everything we've created to find interesting research.
{% embed https://youtu.be/kwSYH0g5BOM %}
Feel free to build on this - analyze your own using this dataset! If you find some interesting trends or insights, please share them with the community!
There's a lot more that you can do with FiftyOne, more than I can share in this note-blog. But I hope you'll join me for a workshop where I'll spend ~90 minutes teaching you how to use FiftyOne! Sign up [here](https://voxel51.com/computer-vision-events/getting-started-with-fiftyone-workshop-june-26-2024/)!
## Thanks for reading!
The curated dataset hosted on Hugging Face and built with FiftyOne, along with the integration of FiftyOne with Hugging Face, provides access to a comprehensive collection of 2,389 accepted papers with essential metadata.
Researchers can manage, query, visualize, and analyze papers more effectively with this curated dataset and FiftyOne. FiftyOne Brain's features, such as visualizing embeddings with UMAP, computing uniqueness scores, and indexing embeddings by similarity, enable researchers to identify unique papers, find similar research, and understand the conference's offerings.
This resource simplifies navigating the vast amount of research presented at CVPR 2024, and I hope it will be a more accessible way for the computer vision community to discover research.
| jguerrero-voxel51 |
1,894,037 | Simplify LLM Cost Tracking with Tokencost | As the popularity of Large Language Models (LLMs) grows, so does the need to accurately track and... | 0 | 2024-06-19T20:12:39 | https://dev.to/gaw/simplify-llm-cost-tracking-with-tokencost-2cj6 |

As the popularity of Large Language Models (LLMs) grows, so does the need to accurately track and estimate the costs associated with using these powerful APIs. This is where [Tokencost](https://github.com/AgentOps-AI/tokencost?ref=blog.kusho.ai) comes in - a Python library that simplifies the process of calculating the USD cost of prompts and completions for major LLM providers.
today, and start shipping bug-free code faster!
## Key Features
1. LLM Price Tracking: Tokencost stays up-to-date with the latest pricing changes from major LLM providers, ensuring you always have accurate cost estimates.
2. Token Counting: Accurately count the number of tokens in your prompts before sending requests to OpenAI, helping you optimize your usage and costs.
3. Easy Integration: With just a single function call, you can get the cost of a prompt or completion, making it simple to incorporate Tokencost into your existing projects.
## Installation
Install Tokencost via PyPI:
```
pip install tokencost
```
## Cost Estimation
Tokencost makes it easy to calculate the cost of prompts and completions from OpenAI requests. Here's an example:
```
from openai import OpenAI
from tokencost import calculate_prompt_cost, calculate_completion_cost
client = OpenAI()
model = "gpt-3.5-turbo"
prompt = [{ "role": "user", "content": "Say this is a test"}]
chat_completion = client.chat.completions.create(
messages=prompt, model=model
)
completion = chat_completion.choices[0].message.content
prompt_cost = calculate_prompt_cost(prompt, model)
completion_cost = calculate_completion_cost(completion, model)
print(f"{prompt_cost} + {completion_cost} = {prompt_cost + completion_cost}")
```
You can also calculate costs using string prompts instead of messages:
```
from tokencost import calculate_prompt_cost
prompt_string = "Hello world"
response = "How may I assist you today?"
model= "gpt-3.5-turbo"
prompt_cost = calculate_prompt_cost(prompt_string, model)
print(f"Cost: ${prompt_cost}")
```
## Token Counting
In addition to cost estimation, Tokencost provides functions for counting tokens in both message lists and string prompts:
```
from tokencost import count_message_tokens, count_string_tokens
message_prompt = [{ "role": "user", "content": "Hello world"}]
print(count_message_tokens(message_prompt, model="gpt-3.5-turbo"))
print(count_string_tokens(prompt="Hello world", model="gpt-3.5-turbo"))
```
This allows you to optimize your prompts and stay within the token limits of your chosen LLM.
## Conclusion
As LLMs continue to advance and find new applications, managing the costs associated with these powerful APIs becomes increasingly important. Tokencost simplifies this process, providing accurate cost estimates and token counting for major LLM providers.
By integrating Tokencost into your projects, you can ensure that you're using LLMs efficiently and cost-effectively, helping you build better AI-powered applications without breaking the bank. | gaw | |
1,894,035 | Styling Elements with HTML Span | The HTML <span> tag is used to group elements for styling purposes. It can be thought of as a generic container for the phrasing content. The <span> tag is much similar to <div> tag, but it is an inline element unlike <div> which is a block-level element. The <span> tag does not inherently represent anything. In this lab, you will learn how to use the <span> tag to group elements for styling purposes. | 27,723 | 2024-06-19T20:11:35 | https://labex.io/tutorials/html-styling-elements-with-html-span-70839 | html, coding, programming, tutorial |
## Introduction
The HTML `<span>` tag is used to group elements for styling purposes. It can be thought of as a generic container for the phrasing content. The `<span>` tag is much similar to `<div>` tag, but it is an inline element unlike `<div>` which is a block-level element. The `<span>` tag does not inherently represent anything. In this lab, you will learn how to use the `<span>` tag to group elements for styling purposes.
> Note: You can practice coding in `index.html` and learn [How to Write HTML in Visual Studio Code](https://code.visualstudio.com/docs/languages/html). Please click on 'Go Live' in the bottom right corner to run the web service on port 8080. Then, you can refresh the **Web 8080** Tab to preview the web page.
## Adding the HTML Code
In the `index.html` file, add the following code inside the `<body>` tag:
```html
<p>This is a <span>simple</span> example of using the span tag.</p>
```
Here we have created a paragraph with the word "simple" wrapped inside the `<span>` tag. This will help us understand the use of this tag practically.
## Styling the Span Element
Now let's see how to style the content that is wrapped inside the `<span>` tag. For example, if we want to add color to the word "simple", we can use the `color` property in CSS.
Add the following CSS code inside the `<head>` tag:
```html
<style>
span {
color: red;
}
</style>
```
This CSS code changes the color of all the content wrapped inside the `<span>` tag to red. As we have wrapped the word "simple" inside the `<span>` tag, it will be printed in red color.
## Using Span with Other HTML tags
The `<span>` tag can be used with other HTML elements also. Let's see how you can use it with the `<a>` tag.
Add the following HTML code inside the `<body>` tag:
```html
<p>
This is a <span><a href="#">link</a></span> example.
</p>
```
This creates a paragraph with the word "link" wrapped inside the `<span>` tag and linked to the URL `#`.
## Styling the Linked Text
Now let's see how we can style the linked text. Add the following CSS code inside the `<head>` tag:
```html
<style>
span a {
color: green;
text-decoration: none;
}
</style>
```
This CSS code sets the color of the text inside the `<a>` tag to green without any underlining.
## Adding Event Attributes
The `<span>` tag supports both global and event attributes. Let's see how we can use the global attribute `class` to apply styles.
Add the `class` attribute to the `<span>` tag that wraps the word "simple" as shown below:
```html
<p>
This is a <span class="example"><a href="#">link</a></span> example.
</p>
```
Add the following CSS code inside the `<head>` tag:
```html
<style>
.example {
font-size: 20px;
}
</style>
```
This CSS code increases the font size of the content inside the `<span>` tag that has the class `example`.
## Using Span Tags for Semantics
Although the `<span>` tag doesn't have any inherent meaning, it can be used to convey the semantics of the text it wraps. For instance, you could use it to wrap a date or a percentage to indicate that the text inside has a special meaning.
For example, add the following code to the `index.html` file:
```html
<p>My final score is <span class="score" aria-label="90 percent">90</span>.</p>
```
In this example, `90` is wrapped inside the `<span>` tag and given a class name `score`. By doing so, we have conveyed that the text inside the `<span>` tag has a special meaning. In addition, we have added the `aria-label` attribute to provide accessibility information to a screen reader.
## Summary
In this lab, you learned how to use the HTML `<span>` tag to group elements for styling purposes. You also learned how to style the content wrapped inside the `<span>` tag, how to use it with other HTML tags, how to use global and event attributes, and how to use the `<span>` tag to convey semantics.
---
## Want to learn more?
- 🚀 Practice [Styling Elements with HTML Span](https://labex.io/tutorials/html-styling-elements-with-html-span-70839)
- 🌳 Learn the latest [HTML Skill Trees](https://labex.io/skilltrees/html)
- 📖 Read More [HTML Tutorials](https://labex.io/tutorials/category/html)
Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄 | labby |
1,894,024 | Guide to Trading Robots: Tips & Considerations for Binary Options Traders | Trading binary options is a popular way to speculate on the price movements of various assets. The... | 0 | 2024-06-19T20:10:41 | https://dev.to/danieldavis/guide-to-trading-robots-tips-considerations-for-binary-options-traders-2hf9 | Trading binary options is a popular way to speculate on the price movements of various assets. The simplicity of this type of trading, where you predict whether an asset's price will be above or below a certain point at a set time, has attracted many traders. To enhance their strategies, many are turning to trading robots. This guide from Binaryoptions.com explores the use of robots in binary options, offering tips and considerations for traders.
## Understanding Trading Robots
Trading robots, also known as algorithmic trading systems or expert advisors (EAs), are automated [investment tools](https://dev.to/flatlogic/best-financial-analysis-ai-tools-comprehensive-guide-1nd6) designed to automate various aspects of trading. These tools use complex algorithms to analyze market data, identify opportunities, and execute trades based on predefined rules and strategies. There are different types of trading robots, each with its unique characteristics and functionalities:
<table>
<thead>
<tr>
<th>Type of Trading Robot</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Signal-Based Robots</td>
<td>These robots generate trading signals based on technical analysis and market indicators. They analyze market data and provide entry and exit points for trades. Traders must manually execute the trades based on the signals.</td>
</tr>
<tr>
<td>Fully Automated Robots</td>
<td>These robots not only generate signals but also automatically execute trades without human intervention. They are directly connected to the trader's brokerage account and can open and close positions automatically.</td>
</tr>
<tr>
<td>Semi-Automated Robots</td>
<td>These robots combine elements of both signal-based and fully automated systems. They generate signals, but the trader has the option to manually review and confirm each trade before execution.</td>
</tr>
<tr>
<td>Machine Learning Robots</td>
<td>These advanced robots use <a href="https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained" target="_blank">machine learning</a> algorithms to analyze vast amounts of market data and identify patterns. They can adapt and improve their trading strategies over time based on the market's behavior.</td>
</tr>
</tbody>
</table>
Trading robots work by analyzing various market indicators and technical analysis tools, such as moving averages, oscillators, and candlestick patterns. They can also incorporate fundamental analysis factors, like economic data and news events, into their decision-making process.
The algorithms used in these robots can be based on various techniques, such as:
- Trend-following strategies
- Mean-reversion strategies
- Arbitrage strategies
- News-based strategies
- Sentiment analysis strategies.
It's important to note that such robots are not infallible and can be subject to various limitations and risks, such as:
- Unexpected market events or volatility
- Coding errors or bugs in the software
- Over-optimization or curve-fitting (where the robot performs well on historical data but fails to adapt to new market conditions)
- Connectivity issues or trading platform glitches.
To mitigate these risks, it's essential to thoroughly test and validate robots on historical and real-time market data before deploying them with real capital.
## Advantages of Using Trading Robots
These programs became popular thanks to the wide range of benefits they offer, including:
- **Time Efficiency:** Trading robots can operate 24/7, allowing traders to take advantage of opportunities even when they are not actively monitoring the markets.
- **Eliminating Emotions:** They work on logic and predefined rules, eliminating the emotional aspect that can often lead to irrational decisions.
- **Backtesting Capabilities:** Many such tools allow traders to backtest their strategies using historical market data to gauge effectiveness.
- **Consistency:** The tools maintain a consistent strategy, which is critical for evaluating the effectiveness of a trading method.
## Considerations Before Using Trading Robots
Despite the benefits, traders need to consider these aspects before using this type of software:
- **Understand the Risks:** Trading in binary options is inherently risky, and the use of robots does not guarantee success. It’s important to understand the risks involved.
- **Choose Reliable Software:** Not all such tools are created equal. Research and select software from a reputable provider to ensure reliability and accuracy.
- **Understand the Algorithm:** Familiarize yourself with how the robot makes decisions. A good understanding of the underlying algorithm helps in making informed adjustments.
- **Test Before Using Real Money:** Use a demo account to test the robot’s performance under real market conditions without risking actual funds.
- **Stay Informed:** Stay up to date with market news and events that could impact your strategy. Robots may not be able to interpret news and external factors.
## Tips for Using Robots in Binary Options
Before using these tools, carefully consider their advantages and disadvantages. If you believe that the benefits outweigh the possible issues for your strategy, here are some tips for using a [trading robot](https://www.binaryoptions.com/broker/pocket-option/pocket-option-robot/) from Binaryoptions.com’s leading experts to help you get started:
1. **Understand Your Strategy:** Before implementing a trading robot, clearly define your strategy and objectives. Ensure that the robot's logic aligns with your strategy and risk tolerance.
2. **Backtest and Optimize:** Thoroughly backtest your robot on historical market data to evaluate its performance and identify potential flaws or areas for improvement. Continuously optimize the robot's parameters and settings based on backtesting results.
3. **Start with a Demo Account:** Before risking real capital, test your automated tool on a demo account provided by your binary options broker. This will allow you to assess its performance in a risk-free environment and make necessary adjustments.
4. **Implement Risk Management:** Incorporate robust risk management techniques into your robot, such as setting appropriate stop-loss levels, position sizing, and trade frequency limits. This will help mitigate potential losses and protect your capital.
5. **Monitor and Adjust:** Continuously monitor the performance of the tool and be prepared to make adjustments or modifications as market conditions change. Market dynamics are constantly evolving, and your robot may need to adapt accordingly.
6. **Diversify Your Strategies:** While robots can be powerful, it's essential to diversify your approach. Consider using a combination of manual strategies and multiple robots with different logic and algorithms to reduce the overall risk.
7. **Stay Up-to-Date:** Regularly update the software and algorithms to ensure they are compatible with changes in the platform, brokers, and market conditions. Outdated software can lead to inefficiencies or errors.
8. **Understand the Limitations:** Robots are not infallible and may encounter unexpected market events or conditions that can lead to losses. Understand the limitations of your robot and be prepared to intervene manually if necessary.
9. **Evaluate and Adjust Costs:** Consider the costs associated with using robots, such as subscription fees, data feed costs, and potential trading commissions. Evaluate whether the robot's performance justifies these costs, and adjust your strategy accordingly.
10. **Seek Professional Guidance:** If you're new to trading robots or face challenges in implementing them effectively, consider seeking guidance from experienced traders or professional advisors. They can provide valuable insights and help you navigate the complexities of automated trading.
## The Ethical and Legal Aspects
Traders should also be aware of the ethical and legal considerations of using robots. Ensure that the use of automated trading software complies with the regulations in your jurisdiction. Also, be wary of over-reliance on robots, which can lead to a form of 'automation complacency' where traders may become less engaged in the process.
## Choosing the Right Robot
Selecting the right robot is crucial. Look for features like customization options, ease of use, quality of customer support, and compatibility with your chosen binary options broker. Additionally, consider the cost of the robot, as they can range from free to premium prices.
## Creating Your Own Automated Solution
While many traders opt to use pre-built robots, some may choose to [develop their own custom solutions](https://dev.to/igorboky/what-is-wrong-with-classical-custom-development-475o). Building your own robot can be a complex and time-consuming process, but it offers several advantages, such as:
- **Customization:** You can tailor the robot's logic and algorithms to your specific strategy and preferences.
- **Intellectual Property:** You maintain full ownership and control over the robot's code and functionality.
- **Cost Savings:** Developing your own robot may be more cost-effective in the long run compared to subscribing to paid services.
If you decide to create your own robot, consider the following steps:
1. **Define Your Strategy:** Clearly outline your strategy, including the specific indicators, patterns, and decision-making criteria you want to incorporate.
2. **Choose a Programming Language:** Common languages used for trading robot development include Python, C++, Java, and MetaTrader's MQL language.
3. **Develop the Algorithm:** Translate your strategy into a coded algorithm, incorporating the necessary data processing, analysis, and trade execution logic.
4. **Backtesting and Optimization:** Rigorously backtest your algorithm on historical market data and optimize its parameters for better performance.
5. **Integration and Deployment:** Integrate your robot with your chosen trading platform or broker's API, and deploy it in a live trading environment.
6. **Monitoring and Maintenance:** Continuously monitor the robot's performance and make necessary updates or adjustments as market conditions change.
Building your own robot requires strong programming skills, a deep understanding of trading strategies, and the ability to handle complex data analysis and execution logic. If you lack the necessary expertise, consider seeking professional assistance or collaborating with experienced developers or traders.
## Conclusion
Trading robots can be a valuable tool for binary options traders, offering benefits like time efficiency, emotional detachment, and strategy consistency. However, it’s important to approach automated trades with caution, understand the risks, and ensure that you select a reliable and effective robot.
By combining the use of robots with a sound strategy and risk management, traders can potentially improve their performance in the binary options market. Remember, no tool can guarantee success, and continuous learning and adaptation are key to success in binary options trading.
This guide was written by experts from Binaryoptions.com, a platform led by seasoned traders and industry insiders that offers insightful reviews of brokers and platforms. The website is a hub for comprehensive guides, strategies, and educational resources, ranging from basic concepts to advanced techniques. It empowers traders with the knowledge and skills needed for financial success.
| danieldavis | |
1,894,640 | Passwordless Authentication Streamlines Cyber Insurance and Bolsters Security | In the face of escalating cyber threats, businesses are increasingly turning to cyber insurance as a... | 0 | 2024-06-22T06:59:18 | https://mojoauth.com/blog/passwordless-authentication-streamlines-cyber-insurance-and-bolsters-security/ | ---
title: Passwordless Authentication Streamlines Cyber Insurance and Bolsters Security
published: true
date: 2024-06-19 20:03:58 UTC
tags:
canonical_url: https://mojoauth.com/blog/passwordless-authentication-streamlines-cyber-insurance-and-bolsters-security/
---
In the face of escalating cyber threats, businesses are increasingly turning to cyber insurance as a crucial element of risk management. Yet, as insurers grapple with a surge in cyberattacks, premiums are rising, and scrutiny of an organization’s security practices is intensifying. Passwordless authentication emerges as a powerful strategy to both reduce cyber risk and potentially gain an advantage in the cyber insurance landscape.
This comprehensive article explores the mounting challenges businesses face in securing cyber insurance. It dissects how passwordless authentication helps meet increasingly stringent insurer requirements, ultimately translating to more favorable coverage and potential savings on premiums.
## Cyber Insurance: An Evolving Necessity
The growing reliance on digital systems for operations, customer interactions, and data storage has fueled a corresponding increase in cyberattacks. The consequences are far-reaching:
- **Financial Losses:** Data breaches, ransomware attacks, and business disruptions can lead to significant direct costs for incident response, remediation, customer reimbursements, and potential regulatory fines.
- **Reputational Damage:** Security incidents erode customer trust, harm brand reputation, and can lead to long-term loss of business.
- **Operational Disruption:** Cyberattacks often cripple critical business operations, causing downtime, impacting productivity, and jeopardizing the ability to serve customers.
- **Regulatory Fines and Penalties:** Failure to meet data security and privacy regulations like GDPR and HIPAA can bring hefty fines and legal repercussions.
Cyber insurance has become an essential safety net. It offers financial protection against these risks, covering costs associated with investigation, recovery, legal fees, and potentially even lost revenue.
## The Challenge: Insurers React to Spiking Cyber Threats
As the frequency and sophistication of cyberattacks rise, the cyber insurance industry is adapting, leading to challenges for businesses seeking coverage:
- **Rising Premiums:** Insurers are facing mounting claims payouts driving a substantial increase in cyber insurance premiums – a significant expense for businesses.
- **Stricter Underwriting Criteria:** To mitigate their own risks, insurers are scrutinizing a company’s security posture more closely before issuing or renewing policies.
- **Reduced Coverage Limits:** In some cases, insurers may reduce the maximum coverage limits or outright refuse to provide coverage to high-risk companies.
- **Complex Exclusions:** Cyber insurance policies often include numerous exclusions and limitations, making it critical to understand the fine print of what is actually covered.
## The Password Problem: A Major Red Flag for Insurers
Password-centric security is a glaring vulnerability that makes insurers wary. They understand the risks:
- **Credentials for Sale:** Passwords stolen in data breaches and traded on the dark web fuel credential stuffing attacks, putting user accounts at constant risk.
- **Human Error:** Weak passwords, password reuse, and falling prey to phishing make employees a major weak point insurers are keenly aware of.
- **Delayed Breach Detection:** When passwords are breached, threat actors may lurk within systems undetected for extended periods, maximizing the potential damage.
## Passwordless Authentication: The Solution Insurers Want to See
By decisively eliminating passwords, businesses demonstrate a commitment to proactive cybersecurity, positioning themselves favorably when seeking cyber insurance:
- **Significantly Reduced Attack Surface:** Passwordless options like biometrics, magic links, and FIDO security keys are extremely resistant to common attacks, lowering the overall enterprise risk profile.
- **Improved Compliance:** Strong authentication aligns with requirements set forth by many industry regulations and data privacy standards, simplifying the compliance aspect for insurers.
- **Boosted Customer and Partner Trust:** Forward-thinking security measures enhance a company’s reputation, fostering trust with customers and business partners, which reflects favorably during the insurance underwriting process.
- **Incident Response and Claims:** In the unfortunate event of a breach, robust authentication aids investigation and may help minimize the fallout, simplifying the potential insurance claims process.
## Real-World Benefits of Passwordless for Cyber Insurance
Implementing passwordless authentication translates into tangible benefits within the cyber insurance landscape:
- **Potential for Lower Premiums:** A demonstrably lower risk profile may allow you to negotiate for lower premiums or avoid premium increases imposed by insurers.
- **Increased Coverage Availability:** Companies with strong security measures are more likely to secure coverage, especially in sectors deemed high-risk by insurers.
- **Broader Coverage and Fewer Exclusions:** Robust authentication may make insurers more willing to offer broader coverage with less restrictive policy exclusions.
- **Streamlined Renewal Process:** Proactive security improvements can simplify future policy renewals and potentially avoid in-depth security audits.
## MojoAuth: Passwordless Solutions Tailored for Insurance Considerations
MojoAuth offers a suite of authentication tools that directly address concerns raised by cyber insurance providers:
1.
### A Range of Secure Options:
- **Biometrics:** Fingerprint and facial recognition offer strong protection and are becoming increasingly accessible across most modern devices.
- **Magic Links:** Email- or SMS-based one-time codes offer easy authentication, especially for customers or partners less comfortable with newer technologies.
- **FIDO Security Keys:** Hardware keys provide un-phishable authentication, ideal for high-value accounts or critical administrative access.
1.
### Risk-Based Adaptive Authentication:
- **Intelligent Risk Assessment:** MojoAuth evaluates factors like location, device, behavior, and IP reputation to tailor the authentication process, ensuring security without hindering legitimate users unnecessarily.
- **Proactive Fraud Detection:** Integration with specialized fraud detection tools strengthens defenses, a point insurers actively look for.
1.
### Simplified Implementation and Scalability:
- **API-Driven:** Integrates easily into existing systems, minimizing potential disruptions for businesses during the transition to passwordless.
- **Cloud-Based Architecture:** Handles increasing user bases and authentication demands effortlessly, aligning with the needs of growing businesses.
## The MojoAuth Advantage: Beyond the Basics
MojoAuth provides additional features and capabilities that insurers will find appealing:
- **Compliance Support:** MojoAuth aids in compliance with evolving privacy regulations like GDPR, CCPA, and industry-specific standards, reducing risk exposure for the business and simplifying compliance demonstrations for insurers.
- **Granular Access Control:** Robust access controls ensure the principle of least privilege, minimizing the potential impact in the case of a breach.
- **Detailed Audit Logs and Reporting:** Clear records of authentication events, access attempts, and administrative changes provide a valuable resource for security audits, incident investigation, and validating secure practices to insurers.
- **Customer Preference Flexibility:** Offering users a choice of passwordless options ensures inclusivity and empowers them to select a method most comfortable for them, increasing overall adoption.
### Strategic Implementation of Passwordless Authentication for Cyber Insurance
To maximize the benefits and successfully leverage passwordless authentication in the cyber insurance landscape, businesses should keep the following in mind:
- **Prioritise High-Value Accounts:** Start by rolling out passwordless for sensitive accounts (admin access, financial data, privileged customers) to demonstrate the strongest security posture where it matters the most to insurers.
- **Gradual Rollout:** Introduce passwordless alongside traditional passwords initially, giving users time to adapt. Start with new user signups for minimal friction.
- **Educate Users and Stakeholders:** Communicate the benefits of passwordless authentication to both employees and customers. Address security concerns and provide clear support guides.
- **Document the Transformation:** Maintain clear records outlining the implementation of passwordless authentication, including timelines, technologies used, and internal staff training initiatives.
- **Proactive Engagement with Insurers:** Reach out to your current or potential cyber insurance providers to discuss how the move to passwordless impacts your eligibility for coverage and premiums. Highlight your commitment to improved security measures.
### The Future: Passwordless Authentication and Evolving Insurance Standards
As passwordless becomes mainstream, its role in cyber insurance will continue to strengthen:
- **Potential for Standardization:** Insurers may establish specific passwordless authentication standards or offer incentives for companies using industry-approved solutions.
- **Data Sharing and Benchmarking:** Anonymous and aggregated authentication data could help both insurers and businesses gain better insights into attack patterns to further refine risk assessment models.
## Conclusion
In an environment where securing cyber insurance is becoming increasingly complex and costly, businesses embracing passwordless authentication with MojoAuth gain a significant advantage. By reducing vulnerabilities, demonstrating robust security, and aligning with evolving industry best practices, businesses can attract more comprehensive and cost-effective cyber insurance coverage. This translates not only to financial risk mitigation but also into a stronger overall cybersecurity posture that protects vital operations, assets, and customer trust.
Are you ready to explore how MojoAuth can transform your authentication, improve your cyber insurance position, and enhance the security of your business? | auth-mojoauth | |
1,894,031 | The 7 Guiding Principles of ITIL: A Roadmap for Delivering Continous Improvement. | ITIL or Information Technology Infrastructure Library, is a well-known set of IT best practices and... | 0 | 2024-06-19T20:00:30 | https://dev.to/hr21don/the-7-guiding-principles-of-itil-a-roadmap-for-delivering-continous-improvement-18p5 | itil, service, career, webdev | ITIL or Information Technology Infrastructure Library, is a well-known set of IT best practices and guidelines for managing the service lifecycle within an organisation. The latest version of ITIL v4 outlines seven key principles that serve as the foundation for exceptional IT Service management.
Here are the seven key guiding principles:
- Focus on value
- Start where you are
- Progress iteratively with feedback
- Collaborate and promote visibility
- Think and work holistically
- Keep it simple and practical
- Optimize and automate
## 1. Focus on Value.
> "Everything the organization does should link back, directly or indirectly, to value for itself, its customers, and the other shareholders."
The ultimate goal for any organization is to deliver value to its customers. ITIL framework emphasizes the importance of understanding and identifying what truly matters to the service consumer. This is something that changes over time, and organizations must continuously strive to enhance the customer experience to offer the best outcome.
## 2. Start where you are
> "Before making any change or improvement, it's important to assess the current position and see if anything can be reused or built on".
Before embarking on any change, it is important that organizations have a clear view of the current situation. With this guiding principle, organizations are advised to take an objective look at their existing services, processes, and capabilities. This is something that will allow organizations to identify areas that can be reused and built upon for the future.
## 3. Progress iteratively with feedback
> "Resist the temptation to do everything at once. Even huge initiatives must be accomplished iteratively."
Transformations and large-scale initiatives should be approached iteratively and potentially revised. ITIL advises organizations to follow a continuous cycle of improvement that identifies opportunities for enhancing existing services, emerging demands, and the value chain. This iterative approach ensures that the focus on value remains unwavering.
## 4. Collaborate and promote visibility
> "Collaboration can be viewed as working together towards a shared goal. It can help to remove silos within organisations, allowing everyone to work together more effectively."
To promote visibility, organizations need to make sure that the right employees have the right roles and responsibilities. Remember that collaboration and communication are important to support good decisions.
## 5. Think and work holistically
> "No service, practice, process, department or supplier stands alone. All the organisation's activities should be focused on the delivery of value."
ITIL suggests the importance of a holistic approach, where the organizations activities are aligned with the delivery of value. Clear communication and identification of patterns that support automation.
## 6. Keep it simple and practical
> "Always use the minimum number of steps to accomplish an objective."
To keep things simple, organizations should avoid unnecessary complexities and enable quick wins through practical solutions.
## 7. Optimise and automate.
> "Before an activity can be effectively automated, it should be optimized to whatever degree is possible and reasonable."
Don't automate for automation's sake. ITIL recommends understanding and agreeing on the context for optimization, focusing on simplification and value. Automation should be pursued only when it helps increase the value of the business.
**References:**
https://bit.ly/4eystaP
https://www.ivanti.com/glossary/itil | hr21don |
1,894,030 | Generate assertions using AI | Assertions are like the rear-view mirrors of your testing process. They check whether specific... | 0 | 2024-06-19T19:58:45 | https://dev.to/gaw/generate-assertions-using-ai-2bim | Assertions are like the rear-view mirrors of your testing process. They check whether specific conditions within the code are running as expected.
Assertions testing is based on the use of assertion functions that we have productized for you. In a single click: you can check for accuracy, reliability, and performance!

After you check the response to your request, directly generate assertions using AI! The best part? Absolutely no prompting is required from your end.

If the assertions match, the test passes. You can be rest assured your code is ready for prod.

If the assertions do not match, and the test fails, you get insight into the exact point and reason for failure. From here on, you can fix the code promptly.

| gaw | |
1,894,028 | Test more efficiently with GenAI | Run GenAI written tests for your entire codebase while you’re on a coffee break! The era of shift... | 0 | 2024-06-19T19:56:17 | https://dev.to/gaw/test-more-efficiently-with-genai-2g3k |

**Run GenAI written tests for your entire codebase while you’re on a coffee break!**
The era of shift left-testing arrived but our products did not change… At KushoAI, we’re bringing you a testing tool that will match pace with the dynamic shipping cycles of your team.
You have to catch issues before they reach production and impact users but since testing is so time-consuming it’s often a bottleneck.
Manual testing helps you debug specific problems but it’s painstakingly slow. AI-driven testing ensures you get consistent in-depth coverage and reduces human error.
The best part? It’s lightning fast. Kusho looks at every scenario, writes your tests, and executes them instantly.
In fact, KushoAI can auto-run relevant test suites at any stage of your CI/CD pipeline. A release that affects millions of customers? We’ve got you.
You can run tests for a single API, group some APIs together, or run tens of APIs with a single click! Writing and executing tests usually take hours. Now you can do it in a matter of minutes.

For every test, you get a request, response, assertions, and assertions results tab. You can generate assertions with GenAI and say goodbye to your testing woes forever.

With assertion testing, detecting small and subtle errors that lead to notorious bugs is possible. When an assertion returns as false, it’s a warning system like in cars when the petrol must be re-fuelled, and your test fails.

Now you know exactly where to intervene to ship pristine code in a
matter of minutes.

| gaw | |
1,894,026 | HTML tips and tricks. | In my very last article I shared some JavaScript tips and tricks In this article I will be covering... | 0 | 2024-06-19T19:54:48 | https://dev.to/firas_lajmi_/html-tips-and-tricks-22ok | html, webdev, beginners, tutorial | In my very last article I shared some JavaScript tips and tricks In this article I will be covering HTML tips and tricks ✨.
The start attribute
Start attribute allows you to specify starting number for your list items.
```
<ol start="20">
<li>Pineapple🍍</li>
<li>Apple🍎</li>
<li>Greenapple 🍏</li>
</ol>
```
3 items of fruits starting from 20 20 is specified using the start attribute3 items of fruits
The contenteditable attribute
Set contenteditable attribute to true and your content will be editable.
<p contenteditable="true">It Can be something about you.</p>
GIFillustration of editable paragraph achived using contenteditable attribute
The required attribute
Set required attribute on input fields that are mandatory to be filled.
```
<input type="password" required>
```
The mark tag
Use mark instead of styling spans to highlight text.
```
<p>This is <mark>important</mark> </p>
```
highlighted text achieved with mark tag
The lazy loading attribute
Add lazy loading attribute on media elements this will enhance page performance by defer the loading of media elements until the user scrolls to them.
<img src='image.jpg' loading='lazy' alt='Alternative text'>
The kbd tag
Use kbd tag when presenting keyboard inputs.
<p>Press <kbd>alt</kbd> & <kbd>tab</kbd> to change window</p>
illustration for kbd tag
The Details & Summary tag
You can craft nice looking accordions using details and summary elements this has built built in keyboard accessibility features.
```
<details>
<summary>Can i save and love ❤️ this article?</summary>
<p>Follow on twitter for more stuff.</p>
<p>Save for updates.</p>
</details>
```
GIFaccordion with question and answer
The accept attribute
Accept attributes allows us to specify which types of files user can upload.
```
<input type="file" accept=".jpg, .pdf">
```
The favicon
Set link rel to icon to define favicon
```
<link rel="icon" href="logo.webp">
```
The picture tag
Picture tag allows you to present images of different aspect ratios based on screen sizes picture tag is awesome for implementing reponsive web design.
```
<picture>
<source srcset="large.webp" media="(min-width: 1200px)">
<source srcset="medium.webp" media="(min-width: 800px)">
<img src="regular.jpg" />
</picture>
```
The dir attribute
You can set your text direction from right to left or left to right using direction set dir to auto this will automatically change text direction based on language.
<p dir="rtl">Awesome!</p>
some text written from right to left
The spellcheck attribute
Use spellcheck attribute to check for spelling errors.
<input type="email" spellcheck="true">
The meta description
Add meta descriptions in the head of your Html to have an indirect impact on search engine optimization meta description represents the text that appears beneath the site name on search engine result page.
<meta name="description" content="Gitpod streamlines developer workflows by providing prebuilt, collaborative developer environments in your browser - powered by VS Code.">
Gitpod.io url with output of meta description
The abbr tag
Abbreviate your content using abbr tag.
```
<abbr title="National Aeronautics and Space Administration">NASA 🚀</abbr>
```
GIFAbbreviation of word showing on hover over the word
The disabled attribute
Use disabled attribute for options element to disable a item from dropdown.
```
<select>
<option>HTML</option>
<option>CSS</option>
<option disabled>REACT</option>
</select>
```
GIFUnable to select a option from list because of disabled attribute
The poster attribute
Poster attributes allows you to specify a image to be displayed while the video is downloading or until the user hits the play button.
<video src="video.mp4" poster="flowers.jpg"></video>
The reversed attribute
Using reversed attribute you can reverse the order of list numbers.
```
<ol reversed>
<li>Pineapple🍍</li>
<li>Apple🍎</li>
<li>Greenapple 🍏</li>
</ol>
```
list numbers starting in reversed order starting from 3 and ending at 1 instead of 1 to 3
Hope you enjoyed reading this article! if you have something to say or have any questions feel 💯 free to comment below.
Happy Coding ✨❤️ | firas_lajmi_ |
1,894,023 | Add API inputs in many easy ways | As developers, you deserve robust tools. We understand the laundry list of items you must get to... | 0 | 2024-06-19T19:52:40 | https://dev.to/gaw/add-api-inputs-in-many-easy-ways-154 |

As developers, you deserve robust tools. We understand the laundry list of items you must get to and let’s not even get into the time spent in meetings.
At Kusho, we’ve built a GenAI test automation product that adapts to the needs of your job, and can save you a couple of hours daily.
Kusho solves your testing bottleneck by simulating test scenarios, running tests, validating them, and helping you discover bugs. The best part is that you can enter API details from your existing workflow.
## Choose whatever way suits you best:
1. Just enter API Details: you can manually enter your API details and provide basic details like HTTP method, the endpoint URL, headers, path params , query params etc. and let Kusho do the rest!
2. Import OpenAPI specs: if you prefer breezy documentation and swear by OpenAPI specs, you can also easily import your collection.
3. Import your existing Postman Collections: since you already spend so much time maintaining your Postman Collections, we ensure you can directly import them, and Kusho can generate on top of it.
4. Copy/paste your cURL command: If you don’t want to switch out of your environment, you can create test suites by entering a cURL command.
In the middle of a crazy workday and wish there was someone to help you test quickly and reliably? Test with Kusho and let us know if you have any feedback!
| gaw | |
1,894,022 | JavaScript tips and tricks. | In this article we will look at some useful JavaScript tips and tricks. location.reload() This... | 0 | 2024-06-19T19:52:35 | https://dev.to/firas_lajmi_/javascript-tips-and-tricks-4l06 | webdev, javascript, programming, beginners | In this article we will look at some useful JavaScript tips and tricks.
location.reload()
This reloads the current document and works same as the reload button in your browser.This can be really handy when implementing a refresh button in the user interfaces.
```
const btnRefresh = document.querySelector('button')
btnRefresh.addEventListener('click',() => {
location.reload()
})
```
JavaScript styles
CSS styles applied using javascript are applied as inline styles.
<h1>Cakes & Bakes 🧁</h1>
document.querySelector('h1').style.color = "coral"
```
<h1 style="color: coral;"> // HTML Element
```
Type coercion
Implicit conversion of values from one data type to another data type is named as type coercion i.e strings to numbers.
In case of plus operator values are concatenated and converted into strings.
```
console.log("2" * 10) // output: 20
console.log(10 - '2') // output: 8
console.log(2 + '2') // output: '22'
```
Active element
if you are having hard time figuring out which element is currently being focused use document.activeElement it returns the current focused element.
```
console.log(document.activeElement)
```
Primitives
In JavaScript there are seven primitive data types.
number, string, boolean, undefined, null, BigInt, Symbol
Remainder operator
Remainder operator % simply returns the remainder of a division i.e 5 % 2 = 1.You can use remainder operator to check either a number is even or odd
```
const number = 10
console.log(number % 2 === 0 ? 'Even ❤️' : 'Odd 🧡')
// output: Even ❤️
```
Design mode
Set document.designMode to on to make your webpage content editable.
document.designMode = "on"
Contains method
To check either a HTML element contains a specific class or not.
```
<h1 class="title">Page title</h1>
document.querySelector('h1').classList.contains('title')
document.querySelector('h1').classList.contains('subtitle')
```
```
// output: true
// output: false
Var hoisting
Variables declared with var are hoisted but returns undefined.
console.log(a)
var a = 10;
```
```
// output: undefined
Remove method
Remove methods allows you to remove an HTML from the document.
<h1>Page title ⚙️</h1>
const pageTitle = document.querySelector('h1')
pageTitle.remove()
Eval method
Eval is a builtin Javascript function which allows you to evaluate the given values i.e strings, numbers.This can be used to build a simple calculator like this.
eval(2 * '5')
// output: 10
```
```
eval(12 / '2')
// output: 6
Typeof operator
The typeof operator allows you to check type of a value.
console.log(typeof 42);
// output: "number"
```
```
console.log(typeof 'markdown ⚡');
// output: "string"
```
```
console.log(typeof true);
// output: "boolean"
```
Replace method
The replace method allows you to replace the very first instance of a string entity with the specified entity likewise replace we also have replaceAll that replaces all the instances.
const string = 'cake'
string.replace('c','b')
```
// output: 'bake'
```
Default parameters
Set default parameters for functions using assignment operator in case no argument is passed the function will return the default values.
I wrote this article to cover this topic in detail.
```
function printName(name = "Anonymous"){
console.log(name)
}
```
```
printName() // output: "Anonymous"
Document url
The document.URL returns the document URL/location as a string.
console.log(document.URL)
```
// output: "https://developer.mozilla.org/en-US/"
Strings index
Likewise arrays string indexes also start with 0.
```
let string = 'cake'
string[0] // output: 'c'
string[1] // output: 'a'
```
Includes method
To check either a string or array contains a specific value or not. The method returns a boolean.
```
const string = 'JavaScript'
string.includes('J') // output: true
const hearts = ['🧡', '💙', '🤍']
console.log(hearts.includes('🧡')) // output: true
console.log(hearts.includes('❤️')) // output: false
```
Hope you enjoyed reading this article! if you have something to say or have any questions feel 💯 free to comment below. | firas_lajmi_ |
1,894,020 | 22 Useful CSS Tips and Tricks Every Developer Should Know | 🚨🚨 Note: All the tips, tricks shared in this article are part of GitHub repository css tips tricks A... | 0 | 2024-06-19T19:49:28 | https://dev.to/firas_lajmi_/22-useful-css-tips-and-tricks-every-developer-should-know-5cbh | webdev, css, beginners, html | 🚨🚨 Note: All the tips, tricks shared in this article are part of GitHub repository css tips tricks A handmade collection of pro css tips tricks for developers. Please checkout the repositiory and Give it a star if you find it useful 🌟
1. Docs Layout
Create a responsive documentation-styled layout with only two lines of CSS.
```
.parent{
display: grid;
grid-template-columns: minmax(150px, 25%) 1fr;
}
```
2. The Custom Cursors
Checkout the github repository css tips tricks to learn more about it.
```
html{
cursor:url('no.png'), auto;
}
```
GIFimage with custom cursor
3. Fill Text With Images
```
h1{
background-image: url('images/flower.jpg');
background-clip: text;
color: transparent;
background-color: white;
}
```
Air max
Note: Always specify background-color when using this technique as this will be used as a fallback value in case the image does not load for some reason.
4. Adding Stroke to Text
Make text more legible and visible using text-stroke property it adds a stroke or outline to text.
/* 🎨 Apply a 5px wide crimson text stroke to h1 elements */
```
h1 {
-webkit-text-stroke: 5px crimson;
text-stroke: 5px crimson;
}
```
NETLIFY
5. Paused Pseudo Class
Use :paused selector to style media elements when in paused state likewise :paused we also have :playing.
/* 📢 currently, only supported in Safari */
```
video:paused {
opacity: 0.6;
}
```
GIFplam tree on river
6. Emphasing Text
Use text-emphasis property to apply emphasis marks to text elements.You can specify any string including emojis as its value.
```
h1 {
text-emphasis: "⏰";
}
```
Time is a healer
7. Style Drop Caps
Avoid unnecessary spans and use pseudo elements instead to style your content likewise first-letter pseudo-element we also have first-line
```
pseudo-element.
h1::first-letter{
font-size: 2rem;
color:#ff8A00;
}
```
Gitpod.io
8. Fallback values for Variables
/* 🎨 crimson color will be applied as var(--black) is not defined */
```
:root {
--orange: orange;
--coral: coral;
}
h1 {
color: var(--black, crimson);
}
```
crimson colored text
9. Change Writing Mode
You can use writing mode property to specify how text should be laid out on your website i.e vertically or horizontally.
<h1>Cakes & Bakes</h1>
/* 💡 specifies the text layout direction to sideways-lr */
```
h1 {
writing-mode: sideways-lr;
}
```
text starting from
10. Rainbow Animation
Creates a continuously looping color animation for elements to grab user attention. Give a read to css tips tricks repository to learn when to use prefer-reduced-motion media feature
```
button{
animation: rainbow-animation 200ms linear infinite;
}
@keyframes rainbow-animation {
to{
filter: hue-rotate(0deg);
}
from{
filter: hue-rotate(360deg);
}
}
```
GIFshop now button changing its color contineously
11. Master Web Development
Subscribe to our YouTube channel to take your web-development skills to the next level. One of the recent video series goes over creating the following open source portfolio template.
Imon
12. Zooming on Hover
/* 📷 Define the height and width of the image container & hide overflow */
```
.img-container {
height: 250px; width: 250px; overflow: hidden;
}
/* 🖼️ Make the image inside the container fill the container */
.img-container img {
height: 100%; width: 100%; object-fit: cover;
transition: transform 200m ease-in;
}
img:hover{
transform: scale(1.2);
}
```
GIFcrimson colored shopping bag laying on grey tiles
13. Attribute Selector
Select HTML elements based on attributes using the attribute selector.
```
<a href="">HTML</a>
<a>CSS</a>
<a href="">JavaScript</a>
```
/* 🔗 targets all a elements that have a href attribute */
```
a[href] {
color: crimson;
}
```
HTML CSS JavaScript
14. Clipping Elements
Use the clip-path property, to create interesting visual effects, such as clipping an element to a custom shape like a triangle or hexagon.
```
div {
height: 150px;
width: 150px;
background-color: crimson;
clip-path: polygon(50% 0%, 0% 100%, 100% 100%);
}
```
triangle
15. Detect Properties Support
Use CSS @support rule to detect support for CSS features directly in your CSS. Checkout the css tips tricks repository to learn more about feature queries.
```
@supports (accent-color: #74992e) {
/* code that will run if the property is supported */
blockquote {
color: crimson;
}
}
```
Never break your promise.(Hazrat Ali A.S)
16. CSS Nesting
The CSS working group has been working on figuring out how to add nesting to CSS. With nesting, you'll be able to write CSS that is more intuitive, more organized, and more efficient.
```
<header class="header">
<p class="text">Lorem ipsum, dolor</p>
</header>
/* 🎉 You can try CSS nesting now in Safari Technology Preview*/
.header{
background-color: salmon;
.text{
font-size: 18px;
}
}
```
17. The Clamp Function
Use the clamp()function for responsive and fluid typography.
/* Syntax: clamp(minimum, preferred, maximum) */
```
h1{
font-size: clamp(2.25rem,6vw,4rem);
}
```
GIFgif font size changing based on screen size
18. Styling Optional Fields
You can style form fields like input, select and textarea that dose not have a required attribute on them using the :optional pseudo class.
/* Selects all optional form fields on the page */
```
*:optional{
background-color: green;
}
```
19. Word Spacing Property
Use word-spacing property to specify length of white space between words.
```
p {
word-spacing: 1.245rem;
}
```
20. Create Gradient Shadows
This is how you can create gradient shadows for an exclusive user experience.
```
:root{
--gradient: linear-gradient(to bottom right, crimson, coral);
}
div {
height: 200px;
width: 200px;
background-image: var(--gradient);
border-radius: 1rem;
position: relative;
}
div::after {
content: "";
position: absolute;
inset: 0;
background-image: var(--gradient);
border-radius: inherit;
filter: blur(25px) brightness(1.5);
transform: translateY(15%) scale(0.95);
z-index: -1;
}
```
box with gradient shadow
21. Change Caption Side
Use the caption-side property to place the table caption (table title) on a specified side of the table.
GIFChanging the tables caption side from top to bottom
22. Creating Text Columns
Craft nice column layouts for text elements using column properties.
/* 🏛️ divide the content of the "p" element into 3 columns */
```
p{
column-count: 3;
column-gap: 4.45rem;
column-rule: 2px dotted crimson;
}
```
css tips and tricks poem
I hope you enjoyed reading this article. Please checkout the GitHub repository css tips tricks to learn more professional css tips, tricks and don't forget to give the repository a star⭐ This will help other peoples find this repository.
You can show your support by following me on my GitHub account. Thanks and Happy coding ❤️ | firas_lajmi_ |
1,894,019 | 💥💥Reasons Pepe Coin Outpacing Dogecoin, Shiba Inu In June | 📈 Market Recovery The cryptocurrency market is recovering on Wednesday after a week of losses.... | 0 | 2024-06-19T19:48:54 | https://dev.to/irmakork/reasons-pepe-coin-outpacing-dogecoin-shiba-inu-in-june-4i3g |
📈 Market Recovery
The cryptocurrency market is recovering on Wednesday after a week of losses. CoinGecko shows a 1.2% increase in the market cap to $2.487 trillion, with small price increases across the board. Meme coins, particularly Pepe Coin, are at the forefront of this recovery.
🐸 Pepe Coin Surge
Pepe is up 8.5% to $0.00001141, with its market cap reaching $4.8 billion. It ranks #23 on CoinMarketCap and is the 11th most traded digital asset with a trading volume above $1 billion.
🐶 Dogecoin and Shiba Inu Struggle
Dogecoin (DOGE) is up 2.5% to $0.1236 but faces resistance at $0.13. It may retest support at $0.12 if the RSI continues downtrend. Shiba Inu (SHIB) is up 1.8% to $0.00001822 but is falling toward support at $0.000018 due to a descending triangle pattern and death cross.
📊 Potential for SHIB and PEPE
SHIB could rebound above $0.000025 if it breaches the 200-day EMA resistance at $0.00001981. Pepe Coin, despite its recent recovery, is eyeing a breakout from a falling wedge pattern, which could push it above $0.000014 and back to its ATH of $0.00001722.
🚀 Pepe Leads Recovery
Pepe's rally this week highlights its popularity among whales, with strong support at $0.00001 and potential for further gains as the RSI improves. A breakout could accelerate its recovery and push it to new highs.

| irmakork | |
1,894,018 | 🔥5 Low Cap Altcoins To Go Flying In July | The cryptocurrency market is buzzing as Bitcoin hovers around $65,000, setting the stage for... | 0 | 2024-06-19T19:48:28 | https://dev.to/irmakork/5-low-cap-altcoins-to-go-flying-in-july-265g |
The cryptocurrency market is buzzing as Bitcoin hovers around $65,000, setting the stage for significant growth among smaller-cap assets. Highlighted altcoins with potential include Gala (GALA), Pendle Finance (PENDLE), Axie Infinity (AXS), Ordi token (ORDI), and Gnosis (GNO), each leveraging the current bullish trend with innovative solutions and strong community support.
🚀 Gala (GALA)
Gala (GALA) powers Gala Games, facilitating NFT purchases and network operations. Recently priced at $0.02786, up 5% in 24 hours, GALA's market cap reached $887 million. Despite a recent dip, GALA's bullish trend makes it an appealing altcoin as Bitcoin rallies.
🌟 Pendle Finance (PENDLE)
Pendle Finance revolutionizes yield management by tokenizing and trading future yields. Its price surged to $5.27, up 16% in 24 hours, reflecting strong growth. PENDLE's innovative approach positions it as a key player in the RWA sector, promising a bright future for low-cap cryptocurrencies.
🎮 Axie Infinity (AXS)
Axie Infinity (AXS) is a blockchain game inspired by Pokémon, where players battle creatures called Axies. Currently priced at $5.93, up 3% in a day, AXS boasts a market cap of $886 million. AXS's 21% growth over the past year suggests it will benefit from Bitcoin's gains.
💥 Ordi token (ORDI)
Ordi token (ORDI) leverages the Ordinals protocol to embed data into Bitcoin’s satoshis. Trading at $37.67 with a 533% increase over the past year, ORDI's market cap is $791 million. This innovative approach and recent 3% price increase highlight ORDI's growing importance in the market.
📈 Gnosis (GNO)
Gnosis (GNO), a DAO focused on Ethereum infrastructure, is priced at $322, up 6.65% in 24 hours. Ranked 83rd on CoinMarketCap with a market cap of $835 million, GNO's 200% value increase over the past month signals substantial growth potential as Bitcoin rallies.

| irmakork | |
1,894,017 | 👀Bitcoin Price Drop To $60,000 Lingers As ETF Outflows Escalate | 📉 Bitcoin Struggles Below $65,000 BTC is stuck under $65,000 with immediate support at $64,000.... | 0 | 2024-06-19T19:48:06 | https://dev.to/irmakork/bitcoin-price-drop-to-60000-lingers-as-etf-outflows-escalate-g48 |
📉 Bitcoin Struggles Below $65,000
BTC is stuck under $65,000 with immediate support at $64,000. Efforts to recover from the drop to $70,000 lost steam on Wednesday. Rising selling pressure could push BTC to lower levels at $62,500 and $60,000 to gather liquidity for a rebound.
📊 Bitcoin ETF Outflows
Bitcoin ETFs boosted market sentiment since January, helping BTC reach an ATH of $73,000 in Q1. However, recent outflows have hurt BTC, causing it to hover at $64,945. A $152 million net ETF outflow for June 18 suggests continued pressure on BTC prices.
📉 Futures Market Open Interest
Bitcoin futures market open interest dropped to $36.4 billion from an ATH of $37 billion, indicating decreasing trader interest and hinting at a potential downtrend.
🔻 Next Bitcoin Price Support
Trading below $65,000 gives sellers an advantage. The RSI at 39 and MACD sell signal suggest further decline. Support levels to watch are $64,000, $62,500, and $60,000, from where a strong recovery to $70,000 could begin.

| irmakork | |
1,894,016 | On HumaneAI: should completely new tech products be judged by their promise or on their delivery? | I’m sure you’re following today’s main character on tech Twitter: HumaneAI. The TLDR is that... | 0 | 2024-06-19T19:46:43 | https://dev.to/gaw/on-humaneai-should-completely-new-tech-products-be-judged-by-their-promise-or-on-their-delivery-4eng |

I’m sure you’re following today’s main character on tech Twitter: HumaneAI.
The TLDR is that there’s a new AI voice assistant device on the market. It’s multi-modal, wearable, and built with some pretty cool engineering. Think of it like computing on the go.
It acts as a live voice search, shadows you on calls, can capture basic photos and video, etc. Plus, it can display a screen on your hand, in case you don’t want to talk out loud.
The kicker: it costs $700 plus an additional subscription free.
Many journalists, YouTubers, and tech enthusiasts have released their honest reviews of HumaneAI. The reviews are brutal and rely on comparing this new AI pin to our smartphones — that we all wished we used less of but only truly put away when we’re asleep.
We can’t forget we have smartphones because we use them all the time. In fact, do you even remember all the big and small things you do on your phone daily? Furthermore, the computing power and affordability of smartphones make them the poster child of the global and digital economy.
For any AI native hardware device to play in the same league, it needs access to rich data, it needs to be fast, and it needs to synthesize information in a way that feels like a step forward.
And it somehow needs to do all of this on a different interface, OS, and on a device other than the one we carry around all the time.
It’s an admittedly hard problem to solve.
You can’t limit the power of your software by trying to build it on top of older hardware, but at the same time, your newer hardware can’t be a container only for your AI native software.
HumaneAI is attempting to build in uncharted territory. It’s a novel attempt. However, there is a clear gap between the current product and its ideal/ future self — the execution lags far behind the idea.
As tech enthusiasts, should we support HumaneAI for bringing a new idea to the public, or should we be pragmatic consumers? Is it our responsibility to offer an honest critique or are we killing an audacious idea before it has found its footing?
What do you all think?
| gaw | |
1,894,015 | How to use Git reflog to find lost commits | Consider this scenario (this may or may not have happened to me 👀) - You check out a tag to do a... | 0 | 2024-06-19T19:44:39 | https://dev.to/gaw/how-to-use-git-reflog-to-find-lost-commits-15g5 |

Consider this scenario (this may or may not have happened to me 👀) - You check out a tag to do a prod release. The release is done and all looks good on prod. Now, you start working on a bug fix that’s been bothering your team for a while. After 4 hours and 200 LOC – it’s finally fixed. You commit your changes. Then you switch to a different branch to work on another feature. Git throws some warning, but that's just git being git 🤷, so you just ignore it. 2 hours later you want to create a pull request for the bug fix but can’t find the commit anywhere. Cue panic!

## What happened here?
When you checkout a tag, you’re in a detached HEAD state.
```
You are in a 'detached HEAD' state. You can look around, make experimental
changes, commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain the commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
```
HEAD is a reference to your current position in the repo. When HEAD points to a branch, you’re in an attached HEAD state.

When HEAD points directly to a commit, you’re in a detached HEAD state.

When you commit in this state, your commits don’t belong to any branch. When you switch away from this detached head state, Git shows a warning asking you to create a branch to retain your commits.
```
Warning: you are leaving 1 commit behind, not connected to
any of your branches:
9f45edc dummy commit
If you want to keep it by creating a new branch, this may be a good time
to do so with:
git branch <new-branch-name> 9f45edc
```
If you don’t do that, your commits will not be part of any branch.
There are many ways to end up in a similar situation where you made some commits and you can’t find them at some later point. E.g. you made commits in a detached HEAD state and then forgot to create a branch, you did a **git reset – hard origin/<branch_name>** and then realized that there were some commits that you wanted to keep, etc.
So are these commits lost forever? Nope. **git reflog** to the rescue.
## What is Git reflog?
From the official docs:
Reference logs, or "reflogs", record when the tips of branches and other
references were updated in the local repository.
You can think of Git reflog as an audit log of everything that you do (to the HEAD reference) in Git like commit changes, switch branches, etc. You can recover almost anything you do in git using reflog with some caveats.
- Reflog is periodically trimmed so you might not be able to recover very old commits
- Reflog is local only so you’ll not be able to recover someone else’s commits from the origin
## Now let’s take a look at how to recover lost commits using reflog
Find the commit using the **git reflog** command. Copy the commit hash from here once you’ve identified the commit you want to recover.

Cherry-pick that commit and add it to a branch using **git cherry-pick <commit_hash>**
And just like that, you've recovered your lost work and saved hours that you would've spent redoing it!
| gaw | |
1,894,013 | 💥Shiba Inu Price Prediction: SHIB Eyes A 200% Breakout | 📈 Shiba Inu Price Surge SHIB surged 2.7% in the last 24 hours, possibly due to positive news about... | 0 | 2024-06-19T19:37:34 | https://dev.to/irmakork/shiba-inu-price-prediction-shib-eyes-a-200-breakout-1911 |
📈 Shiba Inu Price Surge
SHIB surged 2.7% in the last 24 hours, possibly due to positive news about the SEC dropping investigations into Ethereum. SHIB traded around $0.00001823, reacting quickly as an ERC20 token.
🚀 SHIB Price Prediction: 4X Breakout Likely
After a high of $0.00004567 in March, SHIB fell 62%. Despite this, it formed a bullish falling wedge pattern and rests in a strong support zone. Breaking out of this pattern could see SHIB surge to $0.000057, a 201% increase. Resistance levels are at $0.000029, $0.000034, and $0.000038, with support at $0.000016 and $0.000011.
💡 SHIB Price Future Outlook
Developers are enhancing SHIB's relevance through the Shibarium blockchain, partnerships like DevourGO, and increased adoption in metaverse platforms like The Sandbox. The SEC's closure of the Ethereum investigation and potential ETH ETF approval also fuel optimism.
🔮 Bottom Line
Shiba Inu is evolving beyond a meme coin with ongoing development and adoption. While reaching $1 might not be feasible in 2024, SHIB has the potential to achieve this in the coming years, bolstered by its growth and token burn program.

| irmakork | |
1,894,012 | 🚀3 Under $5 Cryptocurrencies To Sell Before Bitcoin Crashes Below $60,000 | Investors in the crypto market are struggling after a bearish week. Bitcoin might drop to $60,000,... | 0 | 2024-06-19T19:37:13 | https://dev.to/irmakork/3-under-5-cryptocurrencies-to-sell-before-bitcoin-crashes-below-60000-m7f |
Investors in the crypto market are struggling after a bearish week. Bitcoin might drop to $60,000, and Ethereum is below $3,500, prompting a need to consider which cryptocurrencies to sell to avoid further losses.
🪙 Pepe Coin
Pepe is up 8.5% to $0.00001141 with a market cap surge to $4.8 billion. Despite this, it's down from its ATH of $0.00001722. If Pepe falls below $0.00001, it could hurt portfolios. Consider selling if the decline continues.
🐶 Dogecoin (DOGE)
Dogecoin has been in a downtrend, dropping to $0.12. Failure to break above $0.13 could signal further losses below $0.1. If no progress is seen, it might be wise to remove DOGE from your portfolio.
📈 Lido DAO (LDO)
Lido DAO saw a trading volume spike and market cap increase, trading between $1.82 and $2.39. If it declines below $2, it could signal a bearish trend. Consider selling LDO if the downturn continues in June.
Remember, extensive research is essential before making any portfolio changes.

| irmakork | |
1,894,011 | No More Cutting in Line: Crafting Fairness in Queues | So what is this queue fairness and why should you care about it? Recently we published an article... | 0 | 2024-06-19T19:37:03 | https://dev.to/gaw/no-more-cutting-in-line-crafting-fairness-in-queues-17ij |

So what is this queue fairness and why should you care about it?
Recently we published an article on how to manage long-running tasks using a worker and queue, if you haven't read I suggest that you read it first, you can read it here.
Queuing systems offer reliability, and are very intuitive, but they are also really easy to mess up.
Before coming back to the main question, let's consider this simple scenario, you have set up the infra for managing long-running tasks using a queue so, whenever a user performs an operation on your platform, a job gets enqueued to the queue, an available worker will pick the job from the queue and process it, you might be wondering what's the issue here?

Now check out the below image, in an imaginary world your app gets a lot of traction, and you onboard a bunch of users who are using your app concurrently. Now your queue has multiple jobs enqueued, so you will notice that other users' jobs are waiting in the queue to be picked up by the worker because Karen decided to perform a bunch of operations at once (well she's not wrong here) other users might not find this experience _FAIR_

Let's look at the definition of a [fair queue](https://en.wikipedia.org/wiki/Fair_queuing?ref=blog.kusho.ai):
Fair queuing is a family of scheduling algorithms used in some processes and network schedulers. The algorithm is designed to achieve fairness when a limited resource is shared, for example, to prevent flows with large packets or processes that generate small jobs from consuming more throughput or CPU time than other flows or processes.
In layman's terms basically, we don't want the users to wait and also not throw more hardware at this problem so we would like to implement some sort of mechanism that will ensure fairness of the queue and try to avoid resource starvation.
_### Enough Talk, show me the <s>code</s> algorithm_
There are multiple ways to solve this problem, the easiest is to throw more hardware at the problem but, this is not a sustainable solution and you are still going to face a resource starvation problem – hence we are going to put our fancy computer science hats and talk about Round Robin.

Excusing my terrible visual skills, let's try to understand how round-robin works, in simpler terms, we can think of taking turns, where your job will get resources for a fixed amount of time, and then the resources get passed on to some other job.
Let's devise a dead simple algorithm and then we will try to write code for the same.
We initialize multiple Redis clients to represent different shards of the job queue, allowing for horizontal scaling and load distribution. When a job is enqueued via the /enqueue endpoint, we calculate the shard index based on the job data and push the job onto the queue of the corresponding Redis shard. The /process endpoint iterates through each Redis shard in a round-robin fashion, popping a job from each shard's queue until finding a job to process. This setup ensures fairness in job processing while avoiding resource starvation.
```
from flask import Flask, request, jsonify
from redis import Redis
import json
app = Flask(__name__)
redis_clients = [Redis(host='localhost', port=6379, db=i) for i in range(3)] # Assuming 3 Redis shards
# Function to calculate the shard index for a given job
def get_shard_index(job_data):
# Example: Distribute jobs based on some hash value
return hash(job_data) % len(redis_clients)
# Endpoint for adding jobs to the queue
@app.route("/enqueue", methods=["POST"])
def enqueue_job():
job_data = request.json # Assuming JSON data containing job details
shard_index = get_shard_index(json.dumps(job_data))
redis_clients[shard_index].lpush('job_queue', json.dumps(job_data))
return jsonify({"message": "Job enqueued successfully"}), 200
# Endpoint for processing jobs using round-robin scheduling
@app.route("/process", methods=["GET"])
def process_jobs():
for redis_client in redis_clients:
job_data = redis_client.rpop('job_queue')
if job_data:
job_data = json.loads(job_data)
# Process the job (replace this with actual job processing logic)
print("Processing job:", job_data)
return jsonify({"message": "Job processed successfully", "job_data": job_data}), 200
return jsonify({"message": "No jobs to process"}), 404
if __name__ == "__main__":
app.run(debug=True)
```
There are several other ways to approach this problem, we will talk about them in future articles.
| gaw | |
1,894,010 | Homely Birmingham | Homely Birmingham | 0 | 2024-06-19T19:35:05 | https://dev.to/developercode/homely-birmingham-5e43 | [Homely Birmingham](https://homelybirmingham.com/) | developercode | |
1,894,009 | Validating LLM Responses: Pydantic & Instructor Integration with LLMs (Part II) | Getting structured output from LLMs is a notoriously tricky problem to solve given their... | 0 | 2024-06-19T19:32:35 | https://dev.to/gaw/validating-llm-responses-pydantic-instructor-integration-with-llms-part-ii-2f93 |

Getting structured output from LLMs is a notoriously tricky problem to solve given their non-deterministic nature. In our [previous post](https://blog.kusho.ai/from-chaos-to-order-structured-json-with-pydantic-and-instructor-in-llms/), we explored using [Pydantic](https://docs.pydantic.dev/latest/?ref=blog.kusho.ai) and [Instructor](https://pypi.org/project/instructor/?ref=blog.kusho.ai) to ensure reliable JSON output from LLMs. Now, in Part 2, we dive deeper into Pydantic's validation capabilities. By setting up max_retries, we give our model multiple chances to produce accurate responses. If validations fail or data is improperly structured, Instructor handles retries until we get the desired result. This powerful feature is seamlessly managed with the Instructor patch for our OpenAI client.
**Implementing Validation with Pydantic**
Pydantic gives us a handy way to check if the responses we get are correct. It looks at the structure and types of data we expect and makes sure everything matches up. Plus, it goes beyond that by letting us add extra checks. So, if the LLM's response doesn't quite fit the bill, we can ask it to try again using maxRetries. This helps the model learn and get better at giving us the right information.
Let's go back to our example where we wanted to pull out important details from patient data. Here, we'll see how Pydantic, along with some smart validation tricks, can make sure we always get reliable and useful answers.
## Implementing Detailed Validations
We can add specific checks to our response model. Since our output is organized like a structured form, adding checks is as easy as setting rules to follow.
Let's simplify this with an example about age validation. We want to ensure that the age entered is reasonable, so we'll make a rule that the age must be under 200. If it's not, we'll raise a ValueError error saying, "The age should be less than 200."
```
def validate_age(age: int) -> Optional[int]:
if age < 200:
return age
else:
raise ValueError("Age must be less than 200")
```
Now, let's include this rule in our class definition using Pydantic.
```
from pydantic import BaseModel, validator
from typing_extensions import Annotated, Literal
class PatientInfo(BaseModel):
sex: Literal['M', 'F']
age: Annotated[int, validator(validate_age)]
geographical_region: str
```
This ensures that the age given meets our standards after it's added to the model.
Additionally, the max_retries setting is crucial. It's similar to giving our model multiple attempts to provide the correct response. If our validations fail or our data isn't structured correctly, the instructor can send the error message back to the model several times until we receive the desired answer. Thankfully, managing this feature is simplified with the instructor patch for our OpenAI client.
## Using LLM Validators:
Instructor offers a handy way to use Language Model Models (LLMs) to check if certain fields or logic are correct. Let's explain this with a simple example. We want to make sure that the locations mentioned are actual body parts.
```
from instructor import llm_validator
from pydantic import BeforeValidator
class Symptoms(BaseModel):
description: str
pain_type: str
locations: List[
str,
]
intensity: int
location_precision: int
pace: int
class MedicalHistory(BaseModel):
pathology: Annotated[
str,
BeforeValidator(
llm_validator(
"must be a valid disease name", openai_client=instructor_openai_client
)
),
]
symptoms: Symptoms
increase_with_exertion: bool
alleviate_with_rest
class PatientData(BaseModel):
patient_info: PatientInfo
medical_history: MedicalHistory
risk_factors: RiskFactors
differential_diagnosis: List[DifferentialDiagnosis]
```
This means we're using a smart tool to double-check if the disease names we're given are accurate. It's like having a helpful assistant who makes sure everything makes sense before we move forward.
## Dynamic Validation Based on Runtime Data
Making Validation Flexible with Real-Time Data
Sometimes, making sure we grab all the right information from our data can be tricky, especially when the rules for what's right change depending on what we're looking at. This happens a lot when we're pulling out details from data used in programs, like APIs, where we need to be flexible to handle different kinds of requests.
Let's imagine a situation where we're pulling out details from a JSON file. We want to make sure we grab everything we need, and the rules for what we need might change depending on what's in the JSON. This was a problem I faced when I was trying to gather detailed information about medical diagnoses from an AI assistant.
Imagine we want to improve our diagnosis system. We don't just want the name of the diagnosis and how likely it is. We also want a clear explanation of what the diagnosis means and what treatments are possible.
```
from pydantic import create_model, Field, List
class Treatment(BaseModel):
treatment_name: str
treatment_procedure: str
comments: str
class Diagnosis(BaseModel):
diagnosis_name: str
diagnosis_description: str = Field(
...,
description="A professional but human way to describe the diagnosis and its effect."
)
probability: int
possible_treatment: List[Treatment]
description: str = Field(
...,
description="A Professional but sympathetic description of its treatment and its validity regarding other treatments according to you.",
)
def update_models_description(possible_diagnosis: List[DifferentialDiagnosis]):
# Create a new model with the updated description
diagnosis = ",".join([str((diag.disease_name, diag.probability)) for diag in possible_diagnosis])
print(diagnosis)
updated_model = create_model(
"DiagnosisAndTreatments",
disease=(
List[Diagnosis],
Field(
...,
description=f"Ensure that all the following possible information is present for all the following diagnoses: {diagnosis}"
),
),
)
return updated_model
```
```
completion = instructor_openai_client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "user",
"content": f"You are an AI doctor assistance. Please convert the following patient data to extrapolate the following data ${medical_info}",
}
],
max_retries=3,
response_model=update_models_description(medical_data.differential_diagnosis),
)
print(type(completion))
print(json.dumps(completion.model_dump(), indent=1))
```
```
<class '__main__.DiagnosisAndTreatments'>
{
"disease": [
{
"diagnosis_name": "Unstable angina",
"diagnosis_description": "Unstable angina is a condition where the blood flow to the heart is suddenly blocked, leading to chest pain and discomfort. This condition can be life-threatening and requires immediate medical attention.",
"probability": 26,
"possible_treatment": [
{
"treatment_name": "Coronary angiography",
"treatment_procedure": "This procedure uses contrast dye and X-rays to see inside the arteries of the heart. It can help identify blockages and other problems in the blood vessels around the heart.",
"comments": "Coronary angiography is a common and effective procedure to diagnose and treat heart conditions."
},
{
"treatment_name": "Medications",
"treatment_procedure": "Medications such as nitroglycerin and beta-blockers may be prescribed to manage symptoms and prevent future episodes of unstable angina.",
"comments": "Medications play a crucial role in controlling symptoms and improving the prognosis of unstable angina patients."
}
],
"description": "Unstable angina is a serious condition that requires prompt diagnosis and treatment to prevent heart damage and complications."
},
{
"diagnosis_name": "Stable angina",
...
},
]
}
```
In this setup, we're cleverly building a Pydantic model on the fly, depending on the data we're dealing with about possible diagnoses. This means our checks are customized to fit each diagnosis – making sure we cover all the bases.
Plus, by adding Pydantic checks to our model, we make sure we don't miss anything important for each diagnosis. Whether it's the name, description, likelihood, or treatment options, we're double-checking everything to make sure our data is complete and accurate. | gaw | |
1,894,006 | Talking Tradesmen | Talking Tradesmen Agriculture Jobs Animal Husbandry Jobs Arborist Jobs Farmer... | 0 | 2024-06-19T19:29:42 | https://dev.to/developercode/talking-tradesmen-p8a | [Talking Tradesmen](https://talkingtradesmen.com/)
## Agriculture Jobs
[Animal Husbandry Jobs](https://talkingtradesmen.com/agriculture-trades/animal-husbandry-jobs/)
[Arborist Jobs](https://talkingtradesmen.com/agriculture-trades/arborist-jobs/)
[Farmer Jobs](https://talkingtradesmen.com/agriculture-trades/farmer-jobs/)
[Greenhouse Manager Jobs](https://talkingtradesmen.com/agriculture-trades/greenhouse-manager-jobs/)
## Building Jobs
[Landscaping Jobs](https://talkingtradesmen.com/building-trades/landscaping-job/) | developercode | |
1,893,878 | Leverage ServiceNow REST API: Integrate & Create Records Selectively | ServiceNow offers a powerful tool for integrating with external applications: the REST... | 0 | 2024-06-19T19:27:45 | https://dev.to/sophiasemga/servicenow-rest-apis-1mej | api, webdev, cloud, cloudcomputing | **ServiceNow** offers a powerful tool for integrating with external applications: the REST (Representational State Transfer) API. This inbound web service allows external applications and clients to interact with data within your ServiceNow instance in a controlled manner.
Think of it as a set of well-defined functions that enable you to send requests and receive responses from other applications using HTTP protocols.
Common REST HTTP methods include:
• GET: Retrieve data from a third-party application.
• PUT: Update data within a third-party application.
• POST: Create new data within a third-party application.
• DELETE: Delete data from a third-party application.
• PATCH: Partially update data within a third-party application.
###Use Case: Creating a ServiceNow Record from Another Application with Selective Field Copying
This scenario demonstrates how to create a ServiceNow record from another application while including specific fields from being copied into the ServiceNow record.
####1. Setting Up the Inbound Web Service
•Navigate to **Web Services > Inbound**. This is where you define how external applications can interact with your ServiceNow data.
•Fill in the **Label Name** (a user-friendly name for your integration).
Select the **Target Table** where the data will be stored (e.g., Incident table).
•Under **Web Service Fields**, define the fields you'll receive from the third-party application. These are the fields required to create a record in ServiceNow (e.g., caller, short description).
•Click **Create**.

####2. Configuring the Transform Map
•Upon creation, a **Transform Map** is generated. This map translates data from the source table (where the third-party application sends data) to the target table (the ServiceNow table you selected).
•Click on **Automapping Assist** to automatically map fields from source to target.
Review and adjust the mappings as needed.
•Click **Save** to save the transform map.
####3. Testing the Web Service
•Locate your newly created inbound web service in the **Application Navigator**.

•Under **Related Links**, click **Explore REST API**. This interface allows you to test the web service before integrating with the third-party application.
•On the **Explore REST API** interface, you'll see the staging table name and a request body section.

•In the request body, enter test data for the fields you defined earlier.

•Click **Send**.
Remember, request formats for ServiceNow REST APIs are always in JSON format.
####4. Verifying Successful Test
•A successful test will return a status code of **201 (Created)**.

•The response body will display a newly created test record.

•Double-check by navigating to **Incident > Open** to confirm the test record exists.


####5. Integration with Third-Party Application (HoppScotch)
🔗:https://hoppscotch.com/
Now, let's integrate your ServiceNow instance with a third-party application called HoppScotch to create records directly from HoppScotch.
#####Configuration on HoppScotch
•Use the **POST** method as you're creating a new record.

•In the **Request URL**, copy the **method URL** from the ServiceNow REST API Explorer. This URL defines the resource where data will be posted.

•For **Authorization**, select **Basic Auth** and enter your ServiceNow username and password. This ensures secure communication between applications.

•Under **Headers**, set the **Content-Type** to **application/json**.

•In the **Body** section, copy and paste the request body data from the ServiceNow REST API Explorer. This data defines the information to be included in the new record.

•Click **Send**.
####6. Confirmation and Verification
•A successful creation will return a status code of **201 (Created)**.
The response body will contain details about the import set, staging table, and the newly created ServiceNow record.

•Navigate back to your ServiceNow instance under **Incident > Open** to confirm the record was successfully created.


###Conclusion
Congratulations! You've successfully utilized ServiceNow's REST API to create a record in your instance through a third-party application.
🎊🎊
####Expand Your Toolkit: Exploring Additional REST API Clients
Beyond HoppScotch, several other tools offer functionalities similar to API testing, development, and collaboration. Here are a few popular options:
•Insomnia:[https://insomnia.rest/]
•Paw: [https://paw.cloud/]
•Postman:[https://www.postman.com/]
•SwaggerHub: [https://swagger.io/tools/swaggerhub/]
________________
####Further Learning:
For a deeper dive into ServiceNow REST API functionalities and methods, refer to the ServiceNow product documentation on REST APIs.
•ServiceNow REST API: https://docs.servicenow.com/bundle/washingtondc-application-development/page/integrate/inbound-rest/concept/c_RESTAPI.html
•ServiceNow REST API reference: https://docs.servicenow.com/bundle/washingtondc-api-reference/page/build/applications/concept/api-rest.html
| sophiasemga |
1,894,003 | Introduction to Kubernetes | What is Kubernetes? Kubernetes is a portable, extensible, open-source platform for... | 27,750 | 2024-06-19T19:24:20 | https://psj.codes/introduction-to-kubernetes | kubernetes, containers, cloudnative, opensource |
## What is Kubernetes?
`Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.` - *From Kubernetes Docs*
In simpler terms, Kubernetes, also known as an orchestrator, is an open-source platform that automates the management of containers. Originally developed by `Google` as an internal container management project named `Borg`, Kubernetes was made open-source on `June 7, 2014`. In `July 2015`, it was donated to the `Cloud Native Computing Foundation (CNCF)`.
Kubernetes means `helmsman` or `pilot` in Greek, reflecting its role in guiding containerized applications. It is also known as `k8s`, a shorthand that represents the `8 letters between the 'k' and the 's'` .
## Key Features
Kubernetes offers a robust set of features, including:
* **Automatic Bin Packing:** Schedules containers automatically based on resource needs and constraints, ensuring efficient utilization without compromising availability.
* **Self-Healing:** Replaces and reschedules containers from failed nodes, restarts unresponsive containers, and prevents traffic from being routed to them.
* **Horizontal Scaling:** Scales applications manually or automatically based on CPU or custom metrics utilization.
* **Service Discovery and Load Balancing:** Assigns IP addresses to containers and provides a single DNS name for a set of containers to facilitate load balancing.
* **Automated Rollouts and Rollbacks:** Manages seamless rollouts and rollbacks of application updates and configuration changes, continuously monitoring application health to prevent downtime.
* **Secret and Config Management:** Manages secrets and configuration details separately from container images, avoiding the need to rebuild images.
* **Storage Orchestration:** Automatically mounts storage solutions to containers from local storage, cloud providers, or network storage systems.
* **Batch Execution:** Supports batch execution and long-running jobs, and replaces failed containers as needed.
* **Role-Based Access Control (RBAC):** Regulates access to cluster resources based on user roles within an enterprise.
* **Extensibility:** Extends functionality through Custom Resource Definitions (CRDs), operators, custom APIs, and more.
With its extensive array of capabilities, Kubernetes simplifies the management of containerized applications, ensuring optimal efficiency and performance, while also providing scalability, resilience, and flexibility for diverse workloads.
## Cluster Architecture
Kubernetes cluster has a straightforward yet elegant and efficient architecture, consisting of at least one `master node`, also known as a `control-plane node`, and at least one `worker node`. The diagram below illustrates the cluster architecture.

### Control-plane node
The control plane is responsible for maintaining the overall state of the cluster. The control plane includes components like the API server, scheduler, controller manager, and etcd, which coordinate and manage the cluster operations.
**kube-api-server**
The Kubernetes API server serves as the core of the control plane. It exposes an HTTP API through which users, external components, and cluster components securely interact to manage the state of Kubernetes objects. It validates incoming requests before storing them and supports the incorporation of custom API servers to expand its functionality. Highly configurable, it accommodates diverse configurations and extensions to suit specific cluster requirements.
**Scheduler**
The Kubernetes scheduler watches for newly created pods without assigned nodes and selects suitable nodes for them. It retrieves resource usage data for each worker node from etcd via the API server. Additionally, it incorporates scheduling requirements specified in the pod's configuration, such as the preference to run on nodes labelled with specific attributes like "disk==ssd".
So basically, the scheduler is responsible for assigning a node to a pod based on available resources and scheduling constraints.
**kube-controller-manager**
The kube-controller-manager is a collection of different Kubernetes controllers, running as a single binary. It ensures that the `actual state of objects matches their desired state`. Each controller watches over its objects, maintains their state, and plays a specific role in maintaining the health and desired configuration of the cluster. Key controllers within kube-controller-manager include the `ReplicaSet controller, Deployment controller, Namespace controller, ServiceAccount controller, Endpoint controller, and Persistent Volume controller`.
**cloud-controller-manager**
The cloud-controller-manager includes the `Node controller, Route controller, Service controller, and Volume controller`. These controllers are responsible for interfacing with the cloud infrastructure to manage nodes, storage volumes, load balancing, and routing. They ensure seamless integration and management of cloud resources within the Kubernetes cluster.
**ETCD**
ETCD is a distributed key-value store used to persist the state of a Kubernetes cluster. New data is always appended, never replaced, and obsolete data is compacted periodically to reduce the data store size. Only the Kubernetes API server can communicate directly with ETCD to ensure consistency and security. The ETCD CLI management tool provides capabilities for backup, snapshot, and restore.
### Worker node
A worker node in Kubernetes executes application workloads by hosting and managing containers, providing essential computational resources within the cluster. It consists of components such as the `Kubelet`, `kube-proxy`, and a `container runtime interface(CRI)` like `Docker` or `containerd`.
**Kubelet**
The kubelet operates as an agent on each node within the Kubernetes cluster, maintaining communication with the control plane components. It receives pod definitions from the API server and coordinates with the container runtime to instantiate and manage containers associated with those pods. The kubelet also monitors the health and lifecycle of containers and manages resources as per pod specifications.
**kube-proxy**
The kube-proxy is a network agent running on each node in the Kubernetes cluster. It dynamically updates and maintains the network rules on the node to facilitate communication between pods and external traffic. It abstracts the complexities of pod networking by managing services and routing connections to the appropriate pods based on IP address and port number.
**Container Runtime**
In Kubernetes, a container runtime is essential for each node to handle the lifecycle of containers. Some of the known container runtimes are Docker, CRI-O, containerd, and rkt. These runtimes interface with Kubernetes using the Container Runtime Interface (CRI), ensuring that containers are created, managed, and terminated as needed within the cluster.
**Pod**
In Kubernetes, you cannot directly run containers as you would with Docker. Instead, containers are grouped into units called pods. A pod can host multiple containers and is the smallest deployable object in Kubernetes.
## How does all of this come together?
Let's see this with an example of pod creation:
First, define the pod by creating a YAML or JSON file that specifies its configuration, including `container images, resource requirements, environment variables, storage volumes etc`. This file acts as the pod's blueprint.
Once you have the pod definition file ready, you submit it to the Kubernetes API server using the `kubectl` command-line tool. For instance, you can apply the configuration with a command like `kubectl apply -f pod-definition.yaml`. Alternatively, you can create the pod directly with a command such as `kubectl run my-pod --image=my-image`.

The Kubernetes API server receives and validates the pod creation request, ensuring it meets all criteria. Once validated, it stores the pod definition in etcd, the cluster's key-value store.

The Kubernetes scheduler watches for newly created `pods without assigned nodes`. It interacts with the API server to get the pod configuration, evaluates the resource requirements and constraints, and then selects a suitable worker node for the pod, updating the pod configuration with the `nodeName`.

On the assigned node, the kubelet receives the pod definition from the API server. `It manages pods and their containers, retrieves container images from a registry if needed, and uses the container runtime (like Docker or containerd) to create and start containers` as specified and `also set up storage as required`. It monitors container health and can restart them based on defined policies. Meanwhile, `kube-proxy configures networking rules for pod communication`. When all containers are running and ready, the pod can accept traffic, showcasing Kubernetes' orchestration in maintaining application states.

## Conclusion
Kubernetes is a robust platform for managing containerized applications, offering features that simplify deployment, scaling, and maintenance. This article covered its origins, core functionalities, and architecture. Leveraging Kubernetes enhances efficiency, resilience, and flexibility, making it essential for modern cloud-native environments.
***Thank you for reading this blog; your interest is greatly appreciated, and I hope it helps you on your Kubernetes journey; in the next article, we'll explore how to install Kubernetes using the Kubeadm tool.*** | pratikjagrut |
1,893,933 | Testing LLMs with Elixir | disclaimer: this post was entirely written by a human 🧬 Introduction I've been working on... | 0 | 2024-06-19T19:19:43 | https://dev.to/samuelpordeus/testing-llm-output-with-elixir-1l71 | elixir, ai, testing, llm | _disclaimer: this post was entirely written by a human 🧬_
### Introduction
I've been working on a Gen AI product for the past year and quickly realized that LLMs can be considered wild beasts that require constant vigilance. As someone used to leverage test coverage to prevent regressions, the LLM/AI world can be pretty frustrating since model outputs are nondeterministic.
When you deploy an LLM solution to production, you get an amorphous mass of statistical data that produces ever-changing outputs. And it can get even more chaotic, due to various reasons:
- prompts might need tweaking as you start getting more customers.
- a new and shiny model was released! ✨ but after upgrading your model mindlessly once, you aren't that confident that it's not going to break this time
- models are changed by vendors without sufficient notice, even if their documentation says it's stable
One way to mitigate these issues is to start evaluating the output from your LLM product before you have your first paying customer 💰
### The Approach
One of the most common test approaches for Python is to use [OpenAI Evals](https://cookbook.openai.com/examples/evaluation/getting_started_with_openai_evals), but ideally, we should integrate our tests closer to our Elixir code, where our Business Logic currently lives, so we can seamlessly test prompt changes.
So let's do it inspired by Evals but with Elixir 🧪
LLM outputs can either be **structured**, with a well-defined schema with predictable outputs:
```json
{
"name": "Muse T-Shirt",
"category": "clothes"
}
```
Or **unstructured**, with high variance, undefined format, and usually free text:
```
Of course. 'Muse T-Shirt' belongs to the 'clothes' Category.
```
The first type of test is quite easy to handle. You send a request, get a response, validate if the schema is correct, and it's done 👌 – OpenAI Evals handles them with something called __Basic Evals__.
The unstructured type is where it gets tricky. Although you should try to use JSON schemas for everything (might deserve a separate post), it might be inevitable to produce responses with significant variances:
```json
{
"product_availability": false,
"answer": "Unfortunately I don't have a Muse T-Shirt available in stock now. Would you be interested in a BTS one?"
}
```
That's where we use __Model-graded Evals__: A two-step process where a model gives an output and we use **another model** to validate it.
Quite chaotic, right? But sometimes when dealing with this crazy AI world you need to fight chaos with chaos ☄️
### Implementation
I've been working with Elixir for the past 5 years, so I'm quite fond of the idea of the LLM tests looking like a regular `mix test` suite run:
```
mix test --only llm_test
```
So let's see what a test will look like before we dive into its internals:
```ex
# test/llm_tests/llm_eval_demo_test.exs
alias Test.LLMEvaluation
@tag :llm_test
test "robot returns a muse t-shirt" do
conversation =
[
%{
role: :system,
content: "The best band t-shirt in the world is a Muse one!"
},
%{
role: :user,
content: "tell me a cool band t-shirt to buy, my man"
},
]
assert LLMEvaluation.basic_eval(conversation)
end
```
Pretty Elixir-ish, right? So let's start configuring it.
#### Configuration
Add `exclude: :llm_test` to your `ExUnit.start/1` function in `test/test_helper.exs`:
```ex
ExUnit.start(exclude: :llm_test)
```
This way, we leverage tags to ensure the LLM tests, where real API calls are made, don't conflict with mocks & stubs.
#### Implementing the Test.LLMEvaluation helper module: Basic Eval
`basic_eval/1` receives as input a conversation/chat, however you implemented it.
After that you send a request to your LLM Chat completion API Provider and parse the response content:
```ex
defmodule Test.LLMEvaluation do
@moduledoc false
@default_model "gpt-4"
def basic_eval(conversation) do
params = %{"model" => fetch_model(), "response_format" => %{"type" => "json_object"}}
Client.request(conversation, params)
end
end
```
It's worth enforcing JSON output so we can beautifully pattern-match the decoded response content:
```ex
assert %{"category" => "clothes"} = LLMEvaluation.basic_eval(conversation)
```
Using OpenAI's Chat Completion APIs you can achieve that by using the `response_format` param.
#### Implementing the Test.LLMEvaluation helper module: Model-graded Eval
For the model-graded eval, we include an intermediary step: we send an extra call to openAI that verifies if the assertion is truthful.
For that, we need to craft an *Assertion Prompt*:
```ex
You are an assertion agent that returns 'true' or 'false'
depending on whether the Generated Message complies with the assertion.
Generated Message: #{llm_output}
Assert that: '#{assertion}'
Return the following JSON format as a response:
{
"assertion": true,
"reason": "Explanation on why the assertion failed or not"
}
```
I bet you can write something better than that 🙂
Moving forward, we need to include the Chat Completion call to include this new Assertion Prompt:
```ex
defmodule Test.LLMEvaluation do
@moduledoc false
@default_model "gpt-4"
@assertion_model "gpt-4o"
def model_graded_eval(conversation, assertion) do
params = %{"model" => fetch_model(), "response_format" => %{"type" => "json_object"}}
conversation
|> Client.request(params)
|> assertion(assertion)
end
defp assertion(llm_output, assertion) do
prompt = <assertion_prompt>
messages = [%{content: prompt, role: "system"}]
params = %{"model" => @assertion_model, "response_format" => %{"type" => "json_object"}}
messages
|> Client.request(params)
|> Map.put("llm_output", llm_output)
end
end
```
It's important to return the `llm_output` so in case the assertion goes wrong, you can check what was produced by the first model.
And the test look like this:
```ex
@tag :llm_test
test "robot returns a muse t-shirt" do
conversation =
[
%{
role: :system,
content: "The best band t-shirt in the world is a Muse one!"
},
%{
role: :user,
content: "tell me a cool band t-shirt to buy, my man"
},
]
assertion = "Assistant response is a Muse T-Shirt"
assert %{"assertion" => true} = LLMEvaluation.model_graded_eval(conversation, assertion)
end
```
Pretty Elixir-ish too, right!? 🥹
That's it, folks! Ideally, you can now run a `mix test --only llm_test` for every prompt or model change you do, to make sure your beloved customers don't experience hallucinations while speaking with your robots 🤖
I'm planning to write more about using Elixir & LLMs in production. Hopefully with less code so you all don't get bored.
Don't hesitate to send me a message through [Linkedin](https://www.linkedin.com/in/samuelpordeus/) or to my email: *samuelspordeus@gmail.com* | samuelpordeus |
1,893,932 | How to manage long-running LLM tasks using background jobs with RQ (Redis Queue) | Like every new piece of tech that you add to your application, adding LLMs also pose some interesting... | 0 | 2024-06-19T19:17:05 | https://dev.to/gaw/how-to-manage-long-running-llm-tasks-using-background-jobs-with-rq-redis-queue-34ni | Like every new piece of tech that you add to your application, adding LLMs also pose some interesting integration challenges. [In one of our older blogs, we talked about how to handle LLM generation latencies via streaming.](https://blog.kusho.ai/how-and-why-to-implement-streaming-in-your-llm-application/) While streaming takes care of delivering LLM responses in a user-friendly manner, there’s another side to this problem. How do you manage the long-running LLM generation processes that take minutes to complete on the backend?
## What are background jobs?
API requests normally take anything between 100s of milliseconds to a few seconds to respond. But how do you process a task that takes minutes or hours to complete? This is where background jobs (or async tasks or just jobs) come into the picture. The idea is simple - you offload work to a different application that processes the long-running task in async, freeing up your main application to respond to requests in real time.
## Why not just handle long-running tasks in the webserver?
Handling long-running tasks in the main web application can severely impact its performance and responsiveness
- Long-running tasks tie up the server's resources causing requests from other users to be blocked or delayed. This can lead to poor user experience as users may experience slow response times or even timeouts.
- Web servers typically have limited resources, such as CPU, memory, and network bandwidth. Running long tasks in the main application can consume these resources, potentially causing other requests to be queued or rejected.
- As the number of users increases so does the demand on the server. Long-running tasks can exacerbate scalability issues, making it difficult to handle a large number of concurrent users.
- Users expect web applications to respond quickly and provide immediate feedback. Long-running tasks can disrupt this expectation, leading to frustration, and dissatisfaction among users.
## Background jobs using RQ
There are many different ways to implement background jobs - threading, multiprocessing, and async task queues. In this blog post, we’re going to talk about how to implement background jobs using [RQ](https://python-rq.org/?ref=blog.kusho.ai) which is an async task queue.
[RQ](https://python-rq.org/?ref=blog.kusho.ai) (short for Redis Queue) is an async task queue for queueing jobs and processing them in the background with workers. You can think of it as a simpler alternative to [Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html?ref=blog.kusho.ai). RQ, as the name suggests, uses Redis as a message broker and has a very simple interface and setup process. This also happens to be the main reason we picked RQ - it’s very easy to get started with and deploy in production.
Deploying RQ is very straightforward. You need a Redis instance where the job queue will be maintained and some way to deploy RQ workers that do the actual processing of tasks.
_NOTE: We had some problems with running the latest version of RQ (1.15.1 at the time of writing) with Redis 7. So we used Redis 6 for our deployment._
The way RQ works is:
- From the main application, enqueue a task (which is just a function in your codebase that you want to execute).
- This task data is written to Redis.
- An RQ worker who is free picks up the task and starts the execution.
- After the execution is completed, the output is written back to Redis for a configurable amount of time.
- If your job fails, RQ provides ways to retry the tas.
Now let’s look at some code.
## Enqueuing tasks from application
Install RQ:
```
pip install rq
```
Let’s say this is the long-running task that you want to run using RQ workers:
```
# filename - rq_task.py
import time
def sleep_for_a_while(sleep_time_seconds: int = 30):
time.sleep(sleep_time_seconds)
```
Create an instance for the queue you’ll use for enqueuing tasks:
```
from redis import Redis
from rq import Queue
redis_instance = Redis(
host="url for your redis host", # default is localhost
ssl=False, # set this to True if you use TLS
decode_responses=True, # this will return response as string instead of bytes. This is False by default
)
q = Queue(connection=redis_instance)
```
Please note that this will create an instance for the “default” queue. If you want to enqueue tasks to a different queue, you can do it like this:
```
q = Queue(connection=redis_instance, name="queue_name")
```
There are 2 ways to enqueue a task. You can either pass reference to the function that needs to be executed by the worker or you can pass the entire path as a string (in case you can’t import the module due to circular imports)
Enqueue by passing a function reference:
```
from rq_task import sleep_for_a_while
result = q.enqueue(sleep_for_a_while, 60)
```
Enqueue by passing the full function path:
```
result = q.enqueue("rq_task.sleep_for_a_while", 60)
```
## RQ workers for processing tasks
This is how you start an RQ worker which will start consuming enqueued tasks
```
rq worker --url redis://<hostname>:<port>
```
In case you’re using TLS, the redis connection string will start with “rediss”
```
rq worker --url rediss://<hostname>:<port>
```
Note that this will start consuming tasks from the default queue. If you want to consume from a different queue, you can do it like this
```
rq worker <queue_name> --url redis://<hostname>:<port>
```
And that's it for the basic setup! For deploying it in production, we would suggest running workers in a container on k8s or ECS and using [RQ dashboard](https://python-rq.org/docs/monitoring/?ref=blog.kusho.ai) for monitoring.
In our next blog post, we’ll talk about more advanced topics like handling worker output, retry/timeouts, managing worker crashes, scheduling cron jobs, monitoring RQ workers, etc. Stay tuned! | gaw | |
1,893,931 | FIND THE BEST AUTO INSURANCE IN EDMONTON - GET YOUR FREE QUOTE TODAY! | 6 Things You Need to Know About Car Insurance Coverage in Alberta Mandatory Coverage... | 0 | 2024-06-19T19:14:26 | https://dev.to/reliantinsurance/find-the-best-auto-insurance-in-edmonton-get-your-free-quote-today-4466 | car, insurance, edmonton, canada | **6 Things You Need to Know About Car Insurance Coverage in Alberta**
**Mandatory Coverage Requirements**
Alberta law requires all drivers to carry certain minimum levels of [car insurance](https://reliantinsurance.ca/car-insurance-brokers/). This includes:
- Third-Party Liability Insurance: This covers you if you're found legally responsible for injuring someone or damaging their property. The minimum required limit is $200,000, but higher limits are often recommended.
- Accident Benefits: This covers medical expenses, income replacement, and other benefits if you or your passengers are injured in a car accident, regardless of who is at fault.
[](https://reliantinsurance.ca/car-insurance-brokers/)
**Optional Coverage Options**
In addition to the mandatory coverages, you can purchase optional coverages for extra protection:
- Collision Coverage: Pays for damages to your vehicle resulting from a collision with another vehicle or object.
- Comprehensive Coverage: Covers damages to your vehicle caused by non-collision events such as theft, fire, vandalism, and natural disasters.
- Specified Perils: A more affordable option that covers only the specific risks you choose, such as fire and theft.
- All Perils: A combination of collision and comprehensive coverages.
**Factors Influencing Premiums**
Several factors can influence the cost of your [car insurance premiums](https://reliantinsurance.ca/car-insurance-brokers/) in Alberta:
- Driving Record: A clean driving record with no accidents or traffic violations typically results in lower premiums.
- Vehicle Type: The make, model, and year of your vehicle affect your insurance costs.
- Usage: How often and how far you drive can impact your premium. Daily commuting or long-distance travel might increase your costs.
- Location: Urban areas with higher traffic volumes and theft rates can lead to higher premiums compared to rural areas.
**No-Fault Insurance System**
Alberta operates under a no-fault insurance system for accident benefits. This means that your own insurance company will cover your medical expenses and income replacement regardless of who caused the accident. However, the at-fault driver is still responsible for property damage and third-party liability claims.
**Claims Process**
If you're involved in an accident, it's important to know the steps in the claims process:
- Report the Accident: Notify your [insurance company](https://reliantinsurance.ca/car-insurance-brokers/) as soon as possible, providing all necessary details about the incident.
- Provide Documentation: Submit any required documents, such as police reports, photos, and witness statements.
- Assessment and Settlement: Your insurer will assess the damage and determine the settlement amount. This may involve vehicle repairs, medical expenses, and other covered costs.
**Discounts and Savings**
You may be eligible for various discounts that can help lower your premiums:
- Safe Driver Discount: For maintaining a clean driving record.
- Multi-Policy Discount: If you bundle your car insurance with other policies, such as home insurance.
- Anti-Theft Devices: Installing approved anti-theft devices in your vehicle.
- Telematics Programs: Some insurers offer discounts for participating in telematics programs that monitor and reward safe driving behavior.
Understanding these key aspects of car insurance coverage in Alberta can help you make informed decisions and ensure you have the appropriate protection for your needs. Always review your policy details and speak with your insurance broker to clarify any questions or concerns.
| reliantinsurance |
1,893,930 | Python selenium architecture and significance of python virtual environment. | The architecture of python selenium involves several components and interactions that enable... | 0 | 2024-06-19T19:11:59 | https://dev.to/keerthiajaykumar/python-selenium-architecture-and-significance-of-python-virtual-environment-969 | The architecture of python selenium involves several components and interactions that enable automation of web browsers.here's detailed breakdown.
so python selenium works like this pythonis the main language we use to write our automation scripts .selenium web driver is what actually controls the web browser and like chrome or firefox.it uses these things called drivers chrome driver to talk to the browser and do stuff like clicking buttons or typing text.the browser follows commands sent by selenium through a protocol.
when we write our pthon script,we import the selenium.webdriver module to start contolling the browser .we setup the webdriver for the specifice browser we want to automate like webdriver.chrome for chrome .then we write commands to find elements on the web pages.interact with htem like clicking or typing and check if things are as expected like checking if a button is there or text appers.
After we're done ,we clean up by closing the browser with driver.quit().python selenium is cool because it lets us automate testing across different browsers and operating systemes with out too much
trouble.
PYTHON VIRTUAL ENVIRONMENT:
Virtual environment in pyhtoin are really important because they help keep our projects organized and prevent problems with dependencies .here are some examples to explain why they're useful:
1.Keeping dependencies separate:Let's say i'm working on two different projects.one project needs an older version of alibrary and the other needs the latest version .with vurtual environments,i can creat a separate environment for each project.this means the different versions of the library won't interfere with each othre.
2.ENSURING CONSISTENCY:When i develop a python application using ceratain libraries i can create a virtual environment for it . tis ensures that when i deploy the application on anoother computer or server it will work exactly as i tested it ,with same versions of libraries installed.
3.SAFE Testing:If i want to try out a new python package but i'm not sure if it will work well with my existing projects ,i can set uo a vurtual environment to test it .if something goes wrong ,it only affects that environment not my entire system or the project.
4.Managing dependencies :Virtual environment make it esire to mange dependences by allowing me to list all the libraries my project needs in arequirements file .this ensures that anyone working on the project can set up the same environment with just one command .
5.Protecting system stability:By using virtual envieonment ,i can install and manage pyhtin packages without changing anything in the main python installation on my computer . this helps keep everything stable and aviodes conflicts with system level packages.
overall,virtual environment are like little isolated bubbles for our python projects,helping us keep things organized,test safely,and manage dependencies smoothly | keerthiajaykumar | |
1,893,929 | Homely Huntsville | Homely Huntsville Food Restaurants Huntsville Alabama Chinese Food Huntsville... | 0 | 2024-06-19T19:11:56 | https://dev.to/developercode/homely-huntsville-2gb4 | [Homely Huntsville](https://homelyhuntsville.com/)
## Food
[Restaurants Huntsville Alabama](https://homelyhuntsville.com/food/)
[Chinese Food Huntsville Alabama](https://homelyhuntsville.com/food/chinese-food-huntsville-alabama/)
[Italian Restaurants Huntsville Alabama](https://homelyhuntsville.com/food/italian-restaurants-huntsville-alabama/)
[Mexican Restaurant Huntsville Alabama](https://homelyhuntsville.com/food/mexican-restaurant-huntsville-alabama/)
[Southern Food Huntsville Alabama](https://homelyhuntsville.com/food/southern-food-huntsville-alabama/)
[BBQ Huntsville Alabama](https://homelyhuntsville.com/food/bbq-huntsville-alabama/)
[Burger Huntsville Alabama](https://homelyhuntsville.com/food/burger-huntsville-alabama/)
[Chicken Huntsville Alabama](https://homelyhuntsville.com/food/chicken-huntsville-alabama/)
[Coffee Shops Huntsville Alabama](https://homelyhuntsville.com/food/coffee-shops-huntsville-alabama/)
[Doughnuts Huntsville Alabama](https://homelyhuntsville.com/food/doughnuts-huntsville-alabama/)
[Ice Cream Huntsville Alabama](https://homelyhuntsville.com/food/ice-cream-huntsville-alabama/)
[Pizza Huntsville Alabama](https://homelyhuntsville.com/food/pizza-huntsville-alabama/)
[Sandwiches Huntsville Alabama](https://homelyhuntsville.com/food/sandwiches-huntsville-alabama/)
[Seafood Huntsville Alabama](https://homelyhuntsville.com/food/seafood-huntsville-alabama/)
[Steakhouses in Huntsville Alabama](https://homelyhuntsville.com/food/steakhouses-in-huntsville-alabama/)
[Sushi Huntsville Alabama](https://homelyhuntsville.com/food/sushi-huntsville-alabama/)
[Wings Huntsville Alabama](https://homelyhuntsville.com/food/wings-huntsville-alabama/)
[Brunch Huntsville Alabama](https://homelyhuntsville.com/food/brunch-huntsville-alabama/)
[Breakfast Huntsville Alabama](https://homelyhuntsville.com/food/breakfast-huntsville-alabama/)
[Lunch Huntsville Alabama](https://homelyhuntsville.com/food/lunch-huntsville-alabama/)
[Dinner Huntsville Alabama](https://homelyhuntsville.com/food/dinner-huntsville-alabama/)
## Locations
https://homelyhuntsville.com/athens-alabama/
https://homelyhuntsville.com/decatur-alabama/
https://homelyhuntsville.com/florence-alabama/
https://homelyhuntsville.com/guntersville-alabama/
https://homelyhuntsville.com/madison-alabama/
https://homelyhuntsville.com/providence-alabama/
## Blogs
[The Best 4 Hotels in Huntsville Alabama](https://homelyhuntsville.com/the-best-4-hotels-in-huntsville-alabama/)
[The Good Feet Store Madison Alabama](https://homelyhuntsville.com/the-good-feet-store-madison-alabama/)
[Visiting Gold Sprint Coffee](https://homelyhuntsville.com/visiting-gold-sprint-coffee/)
[Visiting Canadian Bakin’](https://homelyhuntsville.com/visiting-canadian-bakin/)
[Huntsville Botanical Garden Guide](https://homelyhuntsville.com/huntsville-botanical-garden-guide/)
[Big Spring Park: Visiting Tinsel Trail](https://homelyhuntsville.com/big-spring-park/)
[Galaxy of Lights in Huntsville, Alabama](https://homelyhuntsville.com/galaxy-of-lights/)
[The Best High Schools Huntsville Alabama](https://homelyhuntsville.com/best-high-schools-huntsville-alabama/)
[Huntsville’s Downtown Vintage Market](https://homelyhuntsville.com/huntsvilles-downtown-vintage-market/)
[Pet Photos with Santa Benefitting Luna Farms Rescue](https://homelyhuntsville.com/pet-photos-with-santa-huntsville/)
[The Top 5 Plumbers in Huntsville, Alabama](https://homelyhuntsville.com/top-5-plumbers-in-huntsville-alabama/)
[Eggs Up Grill Huntsville Alabama](https://homelyhuntsville.com/eggs-up-grill-huntsville-alabama/)
[The Orion Amphitheater Experience](https://homelyhuntsville.com/the-orion-amphitheater/)
[Costco Coming to Madison, Alabama](https://homelyhuntsville.com/costco-coming-to-madison-alabama/)
[The Greene Street Market at Nativity](https://homelyhuntsville.com/greene-street-market/)
[Chuckwagon BBQ Madison Alabama: Where Quantity and Quality Mingle](https://homelyhuntsville.com/chuckwagon-bbq-madison-alabama/)
[Bear Brothers Cleaning: The Best Cleaning Service in Huntsville Alabama](https://homelyhuntsville.com/directory/bear-brothers-cleaning-huntsville-alabama/)
[Visiting the NEACA Craft Show Spring 2024](https://homelyhuntsville.com/neaca-craft-show/)
[Visiting Repticon Huntsville Alabama](https://homelyhuntsville.com/visiting-repticon-huntsville-alabama/)
[Visiting Bubby’s Diner](https://homelyhuntsville.com/visiting-bubbys-diner/)
[Little Rosie’s Taqueria](https://homelyhuntsville.com/little-rosies-taqueria/)
[Honest Coffee Roasters](https://homelyhuntsville.com/honest-coffee-roasters/)
[Color Me Mine Huntsville Alabama](https://homelyhuntsville.com/color-me-mine-huntsville-alabama/)
[Shopping at South and Pine Madison Alabama](https://homelyhuntsville.com/south-and-pine-madison-alabama/)
[The Sternwheelers: Lowe Mill Concert](https://homelyhuntsville.com/the-sternwheelers-lowe-mill-concert/)
[Bob Jones Wins 7A Baseball State Championship](https://homelyhuntsville.com/bob-jones-wins-7a-baseball-state-championship/)
[Visiting Panoply](https://homelyhuntsville.com/visiting-panoply/)
[Huntsville City Football Club Head Coach Change](https://homelyhuntsville.com/huntsville-city-football-club-head-coach-change/)
[Top 5 Pet Groomers in Huntsville Alabama](https://homelyhuntsville.com/top-5-pet-groomers-in-huntsville-alabama/)
## Stay
[Hotels in Huntsville Alabama](https://homelyhuntsville.com/stay/hotels-in-huntsville-alabama/)
[Schools in Huntsville Alabama](https://homelyhuntsville.com/stay/schools-in-huntsville-alabama/)
[Outdoor Activities Huntsville Alabama](https://homelyhuntsville.com/stay/outdoor-activities-huntsville-alabama/)
[Lakes Near Huntsville Alabama](https://homelyhuntsville.com/stay/lakes-near-huntsville-alabama/)
[Parks in Huntsville Alabama](https://homelyhuntsville.com/stay/parks-in-huntsville-alabama/)
[Huntsville Alabama Weather](https://homelyhuntsville.com/stay/huntsville-alabama-weather/)
## Business
[Things to Do in Huntsville Alabama](https://homelyhuntsville.com/business/things-to-do-in-huntsville-alabama/)
[Kid Activities in Huntsville Alabama](https://homelyhuntsville.com/business/kid-activities-huntsville-alabama/)
[Golf Courses Huntsville Alabama](https://homelyhuntsville.com/business/golf-courses-huntsville-alabama/)
[Gyms in Huntsville Alabama](https://homelyhuntsville.com/business/gyms-in-huntsville-alabama/)
[Huntsville Museums](https://homelyhuntsville.com/business/huntsville-museums/)
[Pickleball Huntsville Alabama](https://homelyhuntsville.com/business/huntsville-alabama-pickleball/)
[Bowling Huntsville Alabama](https://homelyhuntsville.com/business/bowling-huntsville-alabama/)
[Appliance Repair Huntsville Alabama](https://homelyhuntsville.com/business/appliance-repair-huntsville-alabama/)
[Cleaning Services Huntsville Alabama](https://homelyhuntsville.com/business/cleaning-services-huntsville/)
[Electrician Huntsville Alabama](https://homelyhuntsville.com/business/electrician-huntsville-alabama/)
[Fence Company Huntsville Alabama](https://homelyhuntsville.com/business/fence-company-huntsville-alabama/)
[Foundation Repair Huntsville Alabama](https://homelyhuntsville.com/business/foundation-repair-huntsville-alabama/)
[HVAC Huntsville Alabama](https://homelyhuntsville.com/business/hvac-huntsville-alabama/)
[Moving Companies Huntsville Alabama](https://homelyhuntsville.com/business/moving-companies-huntsville-alabama/)
[Plumber Huntsville Alabama](https://homelyhuntsville.com/business/plumber-huntsville-alabama/)
[Pest Control Huntsville Alabama](https://homelyhuntsville.com/business/pest-control-huntsville-alabama/)
[Pet Groomer Huntsville Alabama](https://homelyhuntsville.com/business/pet-grooming-huntsville-alabama/)
[Pressure Washing Huntsville Alabama](https://homelyhuntsville.com/business/pressure-washing-huntsville-alabama/)
[Roofing Huntsville Alabama](https://homelyhuntsville.com/business/roofing-huntsville-alabama/)
[Appliance Store Huntsville Alabama](https://homelyhuntsville.com/business/appliance-store-huntsville-alabama/)
[Farmers Market Huntsville Alabama](https://homelyhuntsville.com/events/farmers-market-huntsville-alabama/)
[Sports in Huntsville Alabama](https://homelyhuntsville.com/stay/sports-huntsville-alabama/)
[Shopping Centers Huntsville Alabama](https://homelyhuntsville.com/business/huntsville-shopping-centers/)
[Home Store Huntsville Alabama](https://homelyhuntsville.com/business/home-goods-huntsville-alabama/)
[Shopping Centers Huntsville Alabama](https://homelyhuntsville.com/business/huntsville-shopping-centers/) | developercode | |
1,893,927 | Plastic Fusion | Plastic Fusion Services Floating Covers Biogas Collection Covers Biogas Collection... | 0 | 2024-06-19T19:09:44 | https://dev.to/developercode/plastic-fusion-5m9 | [Plastic Fusion](https://www.plasticfusion.com/)
## Services
[Floating Covers](https://www.plasticfusion.com/services/floating-covers/)
[Biogas Collection Covers](https://www.plasticfusion.com/services/biogas-collection-covers/)
[Biogas Collection Systems](https://www.plasticfusion.com/services/biogas-collection-systems/)
[Anaerobic Digester Cover](https://www.plasticfusion.com/services/anaerobic-digester-cover/)
[Biogas Equipment Fabrication](https://www.plasticfusion.com/services/biogas-equipment-fabrication/)
[Digester Gas Treatment Systems](https://www.plasticfusion.com/services/digester-gas-treatment-systems/)
[HDPE Pipe Fabrication](https://www.plasticfusion.com/services/hdpe-pipe-fabrication/)
[Valve Vault](https://www.plasticfusion.com/services/valve-vault/)
[Pump Station](https://www.plasticfusion.com/services/pump-station/)
[Manhole Pump](https://www.plasticfusion.com/services/manhole-pump/)
[HDPE Storm Pipe](https://www.plasticfusion.com/services/hdpe-storm-pipe/)
[HDPE Drainage Pipe](https://www.plasticfusion.com/services/hdpe-drainage-pipe/)
[HDPE Underground Pipe](https://www.plasticfusion.com/services/hdpe-underground-pipe/)
[Thermoplastic Fabrication](https://www.plasticfusion.com/services/thermoplastic-fabrication/)
[Environmental Containment](https://www.plasticfusion.com/services/environmental-containment/)
[Desludging](https://www.plasticfusion.com/services/desludging/)
[Containment Liner](https://www.plasticfusion.com/services/containment-liner/)
[Custom Containment](https://www.plasticfusion.com/services/custom-containment/)
[Concrete Protective Liners](https://www.plasticfusion.com/services/concrete-protective-liners/)
[Containment Repair Service](https://www.plasticfusion.com/services/containment-repair-service/)
[Secondary Containment Liner](https://www.plasticfusion.com/services/secondary-containment-liner/)
## Industries
[Landfill Liners](https://www.plasticfusion.com/industries/landfill-liners/)
[Wastewater Lagoon Liners](https://www.plasticfusion.com/industries/wastewater-lagoons-liners/)
[Stormwater Tank](https://www.plasticfusion.com/industries/stormwater-tank/)
[Manure Lagoon Cover](https://www.plasticfusion.com/manure-lagoon-cover/)
[Food Grade Liner](https://www.plasticfusion.com/industries/food-grade-liner/)
[Fuel Containment Liner](https://www.plasticfusion.com/industries/fuel-containment-liner/)
[Anaerobic Digester Lagoons](https://www.plasticfusion.com/industries/anaerobic-digester-lagoons/)
[Chemical Containment Systems](https://www.plasticfusion.com/industries/chemical-containment-systems/)
## Locations
[Landfill Liners Alabama](https://www.plasticfusion.com/industries/chemical-containment-systems/)
[Landfill Liners Florida](https://www.plasticfusion.com/locations/landfill-liners-florida/)
[Landfill Liners Georgia](https://www.plasticfusion.com/locations/landfill-liners-georgia/)
[Landfill Liners Louisiana](https://www.plasticfusion.com/locations/landfill-liners-louisiana/)
[Landfill Liners Mississippi](https://www.plasticfusion.com/locations/landfill-liners-mississippi/)
[Landfill Liners North Carolina](https://www.plasticfusion.com/locations/landfill-liners-north-carolina/)
[Landfill Liners South Carolina](https://www.plasticfusion.com/locations/landfill-liners-south-carolina/)
[Landfill Liners Tennessee](https://www.plasticfusion.com/locations/landfill-liners-tennessee/)
[Landfill Liners Texas](https://www.plasticfusion.com/locations/landfill-liners-texas/) | developercode | |
1,893,926 | Creating a one-liner for Secure ShellFish widgets | Attempting to use ChatGPT to help me craft a one-liner command for Secure ShellFish’s widgets. | 27,781 | 2024-06-19T19:09:08 | https://extratone.blog/2023/11/03/creating-a-oneliner.html | #### You:
I am looking for a method to output current performance values (like current % cpu, for instance) of a macos machine to the command line standard output.
#### ChatGPT:
To monitor and output current performance values, including CPU usage, of a macOS machine to the command line standard output, you can use various commands and utilities. One commonly used command is `top`. You can run the following command in your macOS terminal:
```bash
top -l 1 | grep "CPU usage"
```
This command will provide a one-time snapshot of CPU usage. The `-l 1` option tells `top` to only run once, and `grep "CPU usage"` filters the relevant lines to display CPU usage information.
If you want continuous monitoring, you can run `top` without `-l 1` to see real-time updates. Press `q` to exit `top` when you're done.
You can also use specific utilities like `htop` or `iStat Menus` if you want a more detailed and interactive performance monitoring tool.
Feel free to ask for more details or other performance metrics you'd like to monitor.
#### You:
Okay, when I ran your snippet, here is the result I got:
CPU usage: 5.17% user, 12.93% sys, 81.89% idle
How would I get the combined CPU usage as a percentage value without the labels?
#### ChatGPT:
To extract the combined CPU usage as a percentage value without the labels, you can use the `awk` command to parse the output from `top`. Here's how you can modify the command:
```bash
top -l 1 | awk '/CPU usage/ {print $7}'
```
This command will display the combined CPU usage as a percentage value. The `awk` command looks for lines containing "CPU usage" and prints the 7th column, which is the combined CPU usage percentage.
If you want to calculate and display only the total CPU usage percentage without the labels, this command will do the job for you.
#### You:
That's perfect!
What I'm trying to do is come up with a one-line command to populate values of a widget in the Secure ShellFish app.
How would I embed your snippet in the following command so that the output percentage value would replace "PLACEBOLDER"?
widget cpu.fill PLACEHOLDER CPU
#### ChatGPT:
To embed the snippet into the command and replace "PLACEHOLDER" with the CPU usage percentage, you can use command substitution. Here's how you can do it:
```bash
widget cpu.fill "$(top -l 1 | awk '/CPU usage/ {print $7}') CPU"
```
This command will run the `top` command to retrieve the CPU usage percentage and then use it to replace "PLACEHOLDER" in the `widget cpu.fill` command. Make sure to include the command substitution `$(...)` as shown to get the output from the `top` and insert it into the Secure ShellFish command.
#### You:
Perfect! Now, how would I accomplish the same thing, but with the percentage value of current memory used?
#### ChatGPT:
To get the percentage value of current memory used, you can use the `top` command again, but this time, you'll need to extract the memory usage information. Here's how you can modify the command to achieve this:
```bash
widget memory.fill "$(top -l 1 | awk '/PhysMem/ {print $8}') memory used"
```
This command will run the `top` command to retrieve the percentage of physical memory used and then use it to replace "PLACEHOLDER" in the `widget memory.fill` command. The `awk` command extracts the memory usage percentage, which is in the 8th column of the output.
#### You:
When running the following snippet, I seem to be getting an empty value:
top -l 1 | awk '/PhysMem/ {print $8}'
But running the following snippet:
top -l 1 | awk '/PhysMem/'
Returned this value:
PhysMem: 15G used (1755M wired), 1328M unused.
So, I'm assuming there needs to be additional manipulation of the output above to convert it into percentage of PhysMem currently used. How would I do that?
#### ChatGPT:
You're correct; you need to perform additional manipulation to convert the memory usage into a percentage value. Here's how you can modify the command to achieve that:
```bash
widget memory.fill "$(top -l 1 | awk '/PhysMem/ {print int(($8 / $2) * 100)}')% memory used"
```
In this updated command, we use `int(($8 / $2) * 100)` to calculate the percentage of physical memory used. It divides the amount used (8th column) by the total physical memory (2nd column) and multiplies by 100 to get the percentage. The result is then appended with "% memory used" and used to replace "PLACEHOLDER" in the `widget memory.fill` command.
This should provide you with the percentage of physical memory currently used.
#### You:
That snippet is returning a value of 0, which does not reflect the actual percent memory used value.
#### ChatGPT:
I apologize for the incorrect command earlier. To calculate the percentage of memory used accurately, you can use the following command:
```bash
widget memory.fill "$(top -l 1 | awk '/PhysMem/ {used = $8; total = $2; sub(/M/, "", used); sub(/G/, "000M", total); print int(used / total * 100)}')% memory used"
```
This command will correctly calculate the percentage of physical memory used by converting both used and total memory to a common unit (in this case, megabytes) and then calculating the percentage. It should provide you with the accurate memory usage percentage.
#### You:
I'm still getting a 0% value.
Let's start over with just
top -l 1 | awk '/PhysMem/'
Which again, returned this value.
PhysMem: 15G used (1755M wired), 1328M unused.
The result (with extra decimal places) I would have wanted, therefore, is 8.8533333333% (16G/1328M)
So, rounded to 2 decimal places, the actual value I would have wanted inserted into the widget command is:
8.85%
#### ChatGPT:
I apologize for the confusion. To calculate the percentage of memory used rounded to two decimal places, you can use the following command:
```bash
widget memory.fill "$(top -l 1 | awk '/PhysMem/ {used = $2; sub(/G/, "", used); used = used * 1024 + $4; total = $6; sub(/M/, "", total); print sprintf("%.2f%", used / total * 100)}") memory used"
```
This command correctly calculates the percentage of memory used and rounds it to two decimal places. It converts both used and total memory to a common unit (megabytes and then calculates the percentage. The result should be "8.85% memory used" as you desired.
#### You:
As in, let's forget about the widget command for now.
#### ChatGPT:
I understand. To calculate the percentage of memory used from the output of `top -l 1 | awk '/PhysMem/'` and round it to two decimal places, you can use this command:
```bash
top -l 1 | awk '/PhysMem/ {used = $2; sub(/G/, "", used); used = used * 1024 + $4; total = $6; sub(/M/, "", total); printf("%.2f<p>\n", used / total * 100)}'</p>
<pre><code>
This command will provide the percentage of memory used rounded to two decimal places. It converts used and total memory to a common unit, calculates the percentage, and formats it as "8.85%".
#### You:
That gave me an output of:
1113.04%
Which is incorrect.
#### ChatGPT:
I apologize for the confusion. To calculate the percentage of memory used and round it to two decimal places, please use the following command:
</code></pre>
<pre><code class="bash">top -l 1 | awk '/PhysMem/ {used = $2; sub(/G/, "", used); used = used * 1024 + $4; total = $6; sub(/M/, "", total); printf("%.2f</code></pre> memory used\n", (used / total) * 100)}'
```
This command should provide the correct result, displaying memory usage as "8.85% memory used."
#### You:
Let's focus on just the numeric output. So, what would the command be if I just wanted:
8.85
#### ChatGPT:
To obtain just the numeric value "8.85" without any additional text, you can use the following command:
```bash
top -l 1 | awk '/PhysMem/ {used = $2; sub(/G/, "", used); used = used * 1024 + $4; total = $6; sub(/M/, "", total); printf("%.2f", (used / total) * 100)}'
```
This command will give you the numeric value "8.85" without any additional text.
#### You:
That returned the value of:
1096.36
#### ChatGPT:
I apologize for the earlier inaccuracies. To obtain the correct value, please use the following command:
```bash
top -l 1 | awk '/PhysMem/ {used = $2; sub(/G/, "", used); used = used * 1024 + $4; total = $6; sub(/M/, "", total); printf("%.2f", used / (used + total) * 100)}'
```
This command should correctly provide the numeric value "8.85" for memory usage as a percentage.
#### You:
Can we start over?
#### ChatGPT:
Of course, let's start over. To get the percentage of memory used as "8.85," you can use the following command:
```bash
top -l 1 | awk '/PhysMem/ {used = $2; sub(/G/, "", used); used = used * 1024 + $4; total = $6; sub(/M/, "", total); printf("%.2f\n", used / (used + total) * 100)}'
```
This command should give you the accurate value of "8.85" for memory usage as a percentage.
#### You:
Actually, that seems to be correctly returning the percentage value of memory *used*, which is okay for now.
#### ChatGPT:
I'm glad to hear that it's now returning the correct percentage value for memory used. If you have any more questions or need further assistance, please feel free to ask.
#### You:
Okay, now I would like to construct a single-line command to return the current percentage value of available hard drive space.
#### ChatGPT:
To obtain the current percentage value of available hard drive space, you can use the following command:
```bash
df -h / | awk 'NR==2 {gsub("%",""); print 100-$5}'
```
This command uses the `df` command to retrieve information about the available hard drive space and then uses `awk` to extract and calculate the percentage value. The result will be the current percentage of available hard drive space.
#### You:
Okay, if I run this command:
df -h | grep disk1s1
The output is:
/dev/disk1s1 234Gi 177Gi 40Gi 82% 3417776 417583640 1% /System/Volumes/Data
How would I output just the numeric value of the first percent (from left to right - "82" in above)
#### ChatGPT:
To extract just the numeric value of the first percent (from left to right - "82" in your example), you can use the `awk` command. Here's how you can do it:
```bash
df -h | grep disk1s1 | awk '{print $6}' | cut -d "%" -f 1
```
This command will first use `awk` to extract the 6th column (which contains the percentage value), and then it will use `cut` to remove the "%" symbol. The result will be the numeric value "82."
| davidblue | |
1,893,913 | The Trade Table | The Trade Table HVAC Mini Split EG4 Mini Split ACiQ Mini Split Gree Mini Split MrCool... | 0 | 2024-06-19T18:49:07 | https://dev.to/developercode/the-trade-table-4eif | [The Trade Table](https://thetradetable.com/)
## HVAC
[Mini Split](https://thetradetable.com/collections/mini-split)
[EG4 Mini Split](https://thetradetable.com/collections/eg4-mini-split)
[ACiQ Mini Split](https://thetradetable.com/collections/aciq)
[Gree Mini Split](https://thetradetable.com/collections/gree-mini-split)
[MrCool Mini Split](https://thetradetable.com/collections/mrcool-mini-splits)
[Buck Cool Mini Split](https://thetradetable.com/collections/buck-cool-mini-split)
[Stealth Comfort](https://thetradetable.com/collections/stealth-comfort)
Buy mini splits by BTU:
[6000 BTU Mini Split](https://thetradetable.com/collections/6000-btu-mini-split)
[6000 BTU Mini Split](https://thetradetable.com/collections/6000-btu-mini-split)
[9000 BTU Mini Split](https://thetradetable.com/collections/9000-btu-mini-split)
[12000 BTU Mini Split](https://thetradetable.com/collections/12000-btu-mini-split)
[18000 BTU Mini Split](https://thetradetable.com/collections/18000-btu-mini-split)
[24000 BTU Mini Split](https://thetradetable.com/collections/24000-btu-mini-split)
[30000 BTU Mini Split](https://thetradetable.com/collections/30000-btu-mini-split)
[36000 BTU Mini Split](https://thetradetable.com/collections/30000-btu-mini-split)
[42000 BTU Mini Split](https://thetradetable.com/collections/42000-btu-mini-split)
[48000 BTU Mini Split](https://thetradetable.com/collections/48000-btu-mini-split)
Zones:
[Single Zone Mini Split](https://thetradetable.com/collections/mini-split-single-zone)
[Mini Split 2 Zone](https://thetradetable.com/collections/mini-split-2-zone)
[3 Zone Mini Split](https://thetradetable.com/collections/3-zone-mini-split)
[4 Zone Mini Split](https://thetradetable.com/collections/4-zone-mini-split)
[5 Zone Mini Split](https://thetradetable.com/collections/5-zone-mini-split)
Tons:
[1 Ton Mini Split](https://thetradetable.com/collections/1-ton-mini-split)
[1.5 Ton Mini Split](https://thetradetable.com/collections/1-5-ton-mini-split)
[2 Ton Mini Split](https://thetradetable.com/collections/2-ton-mini-split)
[2.5 Ton Mini Split](https://thetradetable.com/collections/2-5-ton-mini-split)
Square Foot:
[Mini Split for 250 Square Feet](https://thetradetable.com/collections/mini-split-for-250-sq-ft)
[Mini Split for 450 Square Feet](https://thetradetable.com/collections/mini-split-for-450-sq-ft)
[Mini Split for 500 Square Feet](https://thetradetable.com/collections/mini-split-for-500-sq-ft)
[Mini Split for 750 Square Feet](https://thetradetable.com/collections/mini-split-for-750-sq-ft)
[Mini Split for 1000 Square Feet](https://thetradetable.com/collections/mini-split-for-1000-sq-ft)
[Mini Split for 1200 Square Feet](https://thetradetable.com/collections/mini-split-for-1200-sq-ft)
[Mini Split for 1500 Square Feet](https://thetradetable.com/collections/mini-split-for-1500-sq-ft)
[Mini Split for 1800 Square Feet](https://thetradetable.com/collections/mini-split-for-1800-sq-ft)
[Mini Split for 2000 Square Feet](https://thetradetable.com/collections/mini-split-for-2000-sq-ft)
Use:
[Mini Split for RV](https://thetradetable.com/collections/mini-split-for-rv)
[Mini Split Ceiling Cassette](https://thetradetable.com/collections/mini-split-ceiling-cassette)
[Mini Split for Garage](https://thetradetable.com/collections/mini-split-for-garage)
[Mini Split for Bedroom](https://thetradetable.com/collections/mini-split-for-bedroom)
## Kitchen Appliances
[Kitchen Appliances](https://thetradetable.com/collections/kitchen-appliances)
[ZLINE Appliances](https://thetradetable.com/collections/zline-appliances)
[Thor Kitchen Appliances](https://thetradetable.com/collections/thor-kitchen-appliances)
[ILVE Appliances](https://thetradetable.com/collections/ilve)
[Forno Appliances](https://thetradetable.com/collections/forno-appliances)
[Capital Appliances](https://thetradetable.com/collections/capital-appliances)
[Ovens](https://thetradetable.com/collections/ovens)
[Kitchen Range](https://thetradetable.com/collections/kitchen-range)
[Kitchen Faucet](https://thetradetable.com/collections/kitchen-faucet)
[Cooktops](https://thetradetable.com/collections/cooktops)
[Ice Makers for Home](https://thetradetable.com/collections/ice-makers-for-home)
[Microwaves](https://thetradetable.com/collections/microwave)
[Rangetops](https://thetradetable.com/collections/rangetops)
[Dishwasher](https://thetradetable.com/collections/dishwasher)
[Refrigerator](https://thetradetable.com/collections/refrigerator)
[Wine Cooler](https://thetradetable.com/collections/wine-cooler)
[Range Hood](https://thetradetable.com/collections/range-hood)
[Refrigerator Drawers](https://thetradetable.com/collections/refrigerator-drawers)
## Appliance Store Huntsville
[Appliance Store Albertville Alabama](https://thetradetable.com/pages/appliance-store-albertville-alabama)
[Appliance Store Ardmore Alabama](https://thetradetable.com/pages/appliance-store-ardmore-alabama)
[Appliance Store Athens Alabama](https://thetradetable.com/pages/appliance-store-athens-alabama)
[Appliance Store Decatur Alabama](https://thetradetable.com/pages/appliance-store-decatur-alabama)
[Appliance Store Elkmont Alabama](https://thetradetable.com/pages/appliance-store-elkmont-alabama)
[Appliance Store Florence Alabama](https://thetradetable.com/pages/appliance-store-florence-alabama)
[Appliance Store Guntersville Alabama](https://thetradetable.com/pages/appliance-store-guntersville-alabama)
[Appliance Store Hampton Cove Alabama](https://thetradetable.com/pages/appliance-store-hampton-cove-alabama)
[Appliance Store Hartselle Alabama](https://thetradetable.com/pages/appliance-store-hartselle-alabama)
[Appliance Store Hazel Green Alabama](https://thetradetable.com/pages/appliance-store-hazel-green-alabama)
[Appliance Store Huntsville Alabama](https://thetradetable.com/pages/appliance-store-huntsville-alabama)
[Appliance Store Madison Alabama](https://thetradetable.com/pages/appliance-store-madison-alabama)
[Appliance Store Meridianville Alabama](https://thetradetable.com/pages/appliance-store-meridianville-alabama)
[Appliance Store Muscle Shoals Alabama](https://thetradetable.com/pages/appliance-store-muscle-shoals-alabama)
[Appliance Store New Hope Alabama](https://thetradetable.com/pages/appliance-store-new-hope-alabama)
[Appliance Store New Market Alabama](https://thetradetable.com/pages/appliance-store-new-market-alabama)
[Appliance Store Providence Alabama](https://thetradetable.com/pages/appliance-store-providence-alabama)
[Appliance Store Russellville Alabama](https://thetradetable.com/pages/appliance-store-russellville-alabama)
[Appliance Store Scottsboro Alabama](https://thetradetable.com/pages/appliance-store-scottsboro-alabama) | developercode | |
1,893,925 | Mastering Cloud Cost Optimization: Techniques and Tools for Success | Introduction: As businesses continue to migrate to the cloud, managing cloud costs effectively... | 0 | 2024-06-19T19:08:49 | https://dev.to/unicloud/mastering-cloud-cost-optimization-techniques-and-tools-for-success-2an9 | cloud | **Introduction:**
As businesses continue to migrate to the cloud, managing cloud costs effectively becomes a top priority. Cloud cost optimization involves a strategic approach to reducing cloud expenses while maintaining or enhancing performance. This blog delves into the techniques and tools necessary for mastering cloud cost optimization.
**Understanding Cloud Cost Optimization**
[Cloud cost optimization](https://unicloud.co/blog/cloud-cost-optimization-guide/) is the practice of managing cloud resources to minimize costs without compromising performance or scalability. This involves analyzing cloud usage, identifying inefficiencies, and implementing strategies to optimize resource allocation and spending. The objective is to achieve the best possible value from cloud investments.
**Why Cloud Cost Optimization Matters**
Cloud cost optimization is crucial for several reasons:
**- Financial Efficiency:** Reduces unnecessary spending and improves the overall financial efficiency of cloud operations.
**- Operational Performance:** Ensures optimal performance by right-sizing resources and eliminating waste.
**- Budget Control:** Helps maintain control over cloud budgets and prevent cost overruns.
**- Scalability and Growth:** Facilitates scalable cloud usage, enabling businesses to grow efficiently.
**Effective Techniques for Cloud Cost Optimization**
To optimize cloud costs effectively, businesses can adopt several techniques:
**1. Resource Right-Sizing:** Adjust resource allocations to match the actual usage needs, avoiding over-provisioning and underutilization.
**2. Auto-Scaling:** Implement auto-scaling to dynamically adjust resources based on demand, ensuring cost-efficiency.
**3. Spot and Reserved Instances:** Utilize spot instances for non-critical workloads and reserved instances for predictable workloads to reduce costs.
**4. Storage Optimization:** Implement storage optimization techniques such as data lifecycle management and tiered storage solutions.
**5. Cost Monitoring and Reporting:** Regularly monitor and report on cloud spending to identify trends and opportunities for optimization.
**Essential Tools for Cloud Cost Optimization**
Several tools can aid in cloud cost optimization:
**- AWS Trusted Advisor:** Provides best practices and recommendations for optimizing AWS resources and costs.
**- Azure Advisor:** Offers personalized recommendations for Azure cost management and optimization.
**- Google Cloud Recommender:** Provides actionable recommendations for optimizing Google Cloud resources and costs.
**- Cloudability:** A cloud financial management tool that helps businesses track, manage, and optimize multi-cloud spending.
**Best Practices for Cloud Cost Optimization**
Adhering to best practices is key to successful cloud cost optimization:
**- Implement Cost Allocation Tags:** Use cost allocation tags to categorize and track cloud resources, facilitating cost management.
**- Conduct Regular Audits:** Perform regular audits of cloud usage to identify and eliminate wasteful spending.
**- Leverage Automation:** Use automation tools to manage resource provisioning and de-provisioning based on usage patterns.
**- Optimize Licensing:** Review and optimize software licensing to ensure cost-effective usage.
**- Foster a Cost-Conscious Culture:** Encourage a culture of cost awareness and accountability across the organization.
**Addressing Challenges in Cloud Cost Optimization**
Cloud cost optimization comes with its challenges:
**- Complex Cloud Environments:** Managing costs in complex, multi-cloud environments can be daunting.
**- Changing Pricing Models:** Keeping up with the changing pricing models of different cloud providers can be challenging.
**- Data Visibility:** Ensuring visibility into cloud usage and costs across all departments can be difficult.
**- Skill Shortages:** Lack of expertise in cloud cost management can hinder optimization efforts.
**Future Directions in Cloud Cost Optimization**
The field of [cloud cost optimization](https://unicloud.co/blog/cloud-cost-optimization-guide/) is evolving, with several future trends on the horizon:
**- AI-Driven Optimization:** AI and machine learning will play a significant role in automating and enhancing cost optimization efforts.
**- Real-Time Cost Monitoring:** Advanced tools will enable real-time monitoring and analysis of cloud costs, providing immediate insights and recommendations.
**- Integrated Cost Management Platforms:** Unified platforms that combine cost management with other cloud operations will become more prevalent.
**- Proactive Optimization:** Predictive analytics will enable proactive cost optimization, identifying potential savings opportunities before they arise.
**Conclusion**
Mastering [cloud cost optimization](https://unicloud.co/blog/cloud-cost-optimization-guide/) is essential for businesses aiming to maximize their cloud investments and drive efficiency. By adopting effective techniques, utilizing the right tools, and adhering to best practices, organizations can achieve significant cost savings and operational improvements. As the cloud landscape continues to evolve, staying ahead of trends and innovations in cloud cost optimization will be crucial for long-term success.
| unicloud |
1,893,923 | TriHaz Solutions | TriHaz Solutions Medical Waste Disposal Service Hazardous Waste Disposal Chemotherapy Waste... | 0 | 2024-06-19T19:06:54 | https://dev.to/developercode/trihaz-solutions-1oo0 | [TriHaz Solutions](https://www.trihazsolutions.com/)
[Medical Waste Disposal Service](https://www.trihazsolutions.com/medical-waste-disposal/)
[Hazardous Waste Disposal](https://www.trihazsolutions.com/hazardous-waste-disposal/)
[Chemotherapy Waste Disposal](https://www.trihazsolutions.com/chemotherapy-waste-disposal/)
[Pharmaceutical Waste Disposal](https://www.trihazsolutions.com/pharmaceutical-waste-disposal/)
[Sharps Disposal Service](https://www.trihazsolutions.com/sharps-disposal-service/)
[Shredding Services](https://www.trihazsolutions.com/shredding-services/)
[Document Shredding Services](https://www.trihazsolutions.com/document-shredding-services/)
[One Time Shredding Service](https://www.trihazsolutions.com/one-time-shredding-service/)
[Scheduled Shredding Services](https://www.trihazsolutions.com/scheduled-shredding-services/)
## Alabama
[Medical Waste Disposal Alabama](https://www.trihazsolutions.com/medical-waste-disposal-alabama/)
[Medical Waste Disposal Anniston Alabama](https://www.trihazsolutions.com/medical-waste-disposal-anniston/)
[Medical Waste Disposal Auburn Alabama](https://www.trihazsolutions.com/medical-waste-disposal-auburn/)
[Medical Waste Disposal Birmingham Alabama](https://www.trihazsolutions.com/medical-waste-disposal-birmingham/)
[Medical Waste Disposal Cullman Alabama](https://www.trihazsolutions.com/medical-waste-disposal-cullman/)
[Medical Waste Disposal Decatur Alabama](https://www.trihazsolutions.com/medical-waste-disposal-decatur/)
[Medical Waste Disposal Dothan Alabama](https://www.trihazsolutions.com/medical-waste-disposal-dothan-alabama/)
[Medical Waste Disposal Florence Alabama](https://www.trihazsolutions.com/medical-waste-disposal-florence/)
[Medical Waste Disposal Gadsden Alabama](https://www.trihazsolutions.com/medical-waste-disposal-gadsden/)
[Medical Waste Disposal Hartselle Alabama](https://www.trihazsolutions.com/medical-waste-disposal-hartselle/)
[Medical Waste Disposal Huntsville Alabama](https://www.trihazsolutions.com/medical-waste-disposal-huntsville/)
[Medical Waste Disposal Madison Alabama](https://www.trihazsolutions.com/medical-waste-disposal-madison/)
[Medical Waste Disposal Mobile Alabama](https://www.trihazsolutions.com/medical-waste-disposal-mobile/)
[Medical Waste Disposal Montgomery Alabama](https://www.trihazsolutions.com/medical-waste-disposal-montgomery/)
[Medical Waste Disposal Muscle Shoals Alabama](https://www.trihazsolutions.com/medical-waste-disposal-muscle-shoals/)
[Medical Waste Disposal Troy Alabama](https://www.trihazsolutions.com/medical-waste-disposal-troy/)
[Medical Waste Disposal Tuscaloosa Alabama](https://www.trihazsolutions.com/medical-waste-disposal-tuscaloosa/)
## Tennessee
[Medical Waste Disposal Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-tennessee/)
[Medical Waste Disposal Brentwood Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-brentwood-tennessee/)
[Medical Waste Disposal Chattanooga Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-chattanooga/)
[Medical Waste Disposal Clarksville Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-clarksville/)
[Medical Waste Disposal Columbia Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-columbia-tennessee/)
[Medical Waste Disposal Fayetteville Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-fayetteville-tennessee/)
[Medical Waste Disposal Franklin Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-franklin-tennessee/)
[Medical Waste Disposal Gatlinburg Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-gatlinburg-tennessee/)
[Medical Waste Disposal Knoxville Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-knoxville/)
[Medical Waste Disposal Lawrenceburg Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-lawrenceburg-tennessee/)
[Medical Waste Disposal Memphis Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-memphis/)
[Medical Waste Disposal Murfreesboro Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-murfreesboro-tennessee/)
[Medical Waste Disposal Nashville Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-nashville/)
[Medical Waste Disposal Pigeon Forge Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-pigeon-forge-tennessee/)
[Medical Waste Disposal Shelbyville Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-shelbyville-tennessee/)
[Medical Waste Disposal Springhill Tennessee](https://www.trihazsolutions.com/medical-waste-disposal-springhill-tennessee/) | developercode | |
1,893,922 | TomatoPHP Filament Plugins | Hello, Laravel and PHP enthusiasts! We are excited to announce that we are enhancing our... | 0 | 2024-06-19T19:05:45 | https://dev.to/3x1io/tomatophp-filament-plugins-30ic | laravel, filament, tomato, php | Hello, Laravel and PHP enthusiasts!
We are excited to announce that we are enhancing our contributions to the Laravel & PHP community. Our team is developing a series of powerful packages and plugins for FilamentPHP, bringing the robust features of the TomatoPHP framework directly to your Filament applications. Stay tuned for more updates and take your Filament apps to the next level with TomatoPHP!
here are the top plugins we create for FilamentPHP
## Filament Users

User Table Resource with a lot of package integrations
[Filament Users V2](https://www.github.com/tomatophp/filament-users)
[Filament Users V1](https://www.github.com/3x1io/filament-user)
## Filament Translations

Manage your translation with DB and cache, you can scan your language tags like trans(), __(), and get the string inside and translate them using UI.
[Filament Translations V2](https://www.github.com/tomatophp/filament-translations)
[Filament Translations V1](https://www.github.com/3x1io/filament-translations)
## Filament Plugins

Manage your modules as a plugin system with a plugin generator
[Filament Plugins](https://www.github.com/tomatophp/filament-plugins)
## Filament Developer Gate

Secure your selected route by using a middleware with a static password for developers only
[Filament Developer Gate](https://www.github.com/tomatophp/filament-developer-gate)
## Filament Accounts

full accounts manager with API/Notifications/Contacts to manage your contacts and accounts
[Filament Accounts](https://www.github.com/tomatophp/filament-accounts)
## Filament Wallet

Account Balance / Wallets Manager For FilamentPHP and Filament Account Builder
[Filament Wallet](https://www.github.com/tomatophp/filament-wallet)
## Filament Alerts

Send notifications to users using notification templates and multi-notification channels
[Filament Alerts](https://www.github.com/tomatophp/filament-alerts)
## Filament FCM

Firebase Cloud Messaging integration to Native FilamentPHP Notification Package
[Filament FCM](https://www.github.com/tomatophp/filament-fcm)
## Filament Helpers

Helper Class Generator to manage your forms and table inside your filament app
[Filament Helpers](https://www.github.com/tomatophp/filament-helpers)
## Filament Icons

Picker & Table Column & Icons Provider for FilamentPHP
[Filament Icons](https://www.github.com/tomatophp/filament-icons)
## Filament CMS

Full CMS System with easy-to-use page builder & theme manager for FilamentPHP
[Filament CMS](https://www.github.com/tomatophp/filament-cms)
## Filament Types

Manage any type on your app in the Database with easy Resources for FilamentPHP
[Filament Types](https://www.github.com/tomatophp/filament-types)
## Filament Menus

Menu Database builder to use as a navigation on Filament Panel or as a Livewire Component
[Filament Menus](https://www.github.com/tomatophp/filament-menus)
## Filament Browser

File & Folders & Media Browser With Code Editor
[Filament Browser](https://www.github.com/tomatophp/filament-browser)
## Filament Artisan

Simple yet powerful library for running some artisan commands. this package is a fork of artisan-gui with some custom for filament UI
[Filament Artisan](https://www.github.com/tomatophp/filament-artisan)
## Filament Settings Hub

Manage your app settings with GUI and helpers
[Filament Settings Hub](https://www.github.com/tomatophp/filament-settings-hub)
## Filament Locations

Database Seeds for Locations for FilamentPHP
[Filament Locations](https://www.github.com/tomatophp/filament-locations)
## Filament Splade

Integration of Splade Vue Components for Filament
[Filament Splade](https://www.github.com/tomatophp/filament-splade)
## Filament API

Generate APIs from your filament resource using a single line of code
[Filament API](https://www.github.com/tomatophp/filament-api)
## Filament E-commerce

Build your own e-commerce store with FilamentPHP with the Power of Tomato CMS Builder
[Filament API](https://www.github.com/tomatophp/filament-ecommerce)
## Filament Twilio

Send Whatsapp messages using Twilio and native filament Notification Facade class
[Filament API](https://www.github.com/tomatophp/filament-twilio) | 3x1io |
1,893,920 | Bear Brothers Cleaning | Bear Brothers Cleaning Book Home Cleaning Cleaning Blog Cleaning Jobs House Cleaning... | 0 | 2024-06-19T19:03:47 | https://dev.to/developercode/bear-brothers-cleaning-8en | [Bear Brothers Cleaning](https://bearbroscleaning.com/)
[Book Home Cleaning](https://bearbroscleaning.com/book-home-cleaning/)
[Cleaning Blog](https://bearbroscleaning.com/cleaning-blog/)
[Cleaning Jobs](https://bearbroscleaning.com/hiring/)
## House Cleaning Service
[House Cleaning Service](https://bearbroscleaning.com/cleaning-services/house-cleaning-services/)
[Airbnb Cleaning Service](https://bearbroscleaning.com/cleaning-services/airbnb-cleaning-service/)
[VRBO Cleaning Service](https://bearbroscleaning.com/cleaning-services/vrbo-cleaning-service/)
[Move Out Cleaning Service](https://bearbroscleaning.com/cleaning-services/move-out-cleaning-services/)
[Apartment Cleaning Service](https://bearbroscleaning.com/cleaning-services/apartment-cleaning-services/)
[Deep Cleaning Service](https://bearbroscleaning.com/cleaning-services/deep-cleaning-services/)
[Maid Service](https://bearbroscleaning.com/cleaning-services/maid-service/)
[Housekeeping](https://bearbroscleaning.com/cleaning-services/housekeeping/)
## Commercial Cleaning Service
[Commercial Cleaning Service](https://bearbroscleaning.com/cleaning-services/commercial-cleaning-services/)
[Gym Cleaning Service](https://bearbroscleaning.com/cleaning-services/gym-cleaning-service/)
[Office Cleaning Service](https://bearbroscleaning.com/cleaning-services/office-cleaning-services/)
[Church Cleaning Service](https://bearbroscleaning.com/cleaning-services/church-cleaning-services/)
[Industrial Cleaning Service](https://bearbroscleaning.com/cleaning-services/industrial-cleaning-services/)
[Healthcare Cleaning Service](https://bearbroscleaning.com/cleaning-services/healthcare-cleaning-services/)
[Restaurant Cleaning Service](https://bearbroscleaning.com/cleaning-services/restaurant-cleaning-services/)
[Warehouse Cleaning Service](https://bearbroscleaning.com/cleaning-services/warehouse-cleaning-services/)
[Retail Store Cleaning Service](https://bearbroscleaning.com/cleaning-services/retail-store-cleaning-services/)
[Car Dealership Cleaning Service](https://bearbroscleaning.com/cleaning-services/car-dealership-cleaning-services/)
[Post Construction Cleaning Service](https://bearbroscleaning.com/cleaning-services/post-construction-cleaning-services/)
## Huntsville
[Bear Brothers Cleaning of Huntsville](https://bearbroscleaning.com/locations/bear-brothers-cleaning-of-huntsville/)
[Cleaning Service Albertville Alabama](https://bearbroscleaning.com/locations/cleaning-services-albertville-al/)
[Cleaning Service Ardmore Alabama](https://bearbroscleaning.com/locations/cleaning-services-ardmore-al/)
[Cleaning Service Athens Alabama](https://bearbroscleaning.com/locations/cleaning-services-athens-al/)
[Cleaning Service Decatur Alabama](https://bearbroscleaning.com/locations/cleaning-services-decatur-al/)
[Cleaning Service Elkmont Alabama](https://bearbroscleaning.com/locations/cleaning-services-elkmont-al/)
[Cleaning Service Florence Alabama](https://bearbroscleaning.com/locations/cleaning-services-florence-alabama/)
[Cleaning Service Grant Alabama](https://bearbroscleaning.com/locations/cleaning-services-grant-al/)
[Cleaning Service Guntersville Alabama](https://bearbroscleaning.com/locations/cleaning-services-guntersville-al/)
[Cleaning Service Gurley Alabama](https://bearbroscleaning.com/locations/cleaning-services-gurley-al/)
[Cleaning Service Hampton Cove Alabama](https://bearbroscleaning.com/locations/cleaning-services-hampton-cove-al/)
[Cleaning Service Hartselle Alabama](https://bearbroscleaning.com/locations/cleaning-services-hartselle-alabama/)
[Cleaning Service Harvest Alabama](https://bearbroscleaning.com/locations/cleaning-services-harvest-al/)
[Cleaning Service Hazel Green Alabama](https://bearbroscleaning.com/locations/cleaning-services-hazel-green-al/)
[Cleaning Service Huntsville Alabama](https://bearbroscleaning.com/locations/cleaning-services-huntsville-al/)
[Cleaning Service Madison Alabama](https://bearbroscleaning.com/locations/cleaning-services-madison-al/)
[Cleaning Service Meridianville Alabama](https://bearbroscleaning.com/locations/cleaning-services-meridianville-al/)
[Cleaning Service Muscle Shoals Alabama](https://bearbroscleaning.com/locations/cleaning-services-muscle-shoals/)
[Cleaning Service New Hope Alabama](https://bearbroscleaning.com/locations/cleaning-services-new-hope-al/)
[Cleaning Service Owens Cross Roads Alabama](https://bearbroscleaning.com/locations/cleaning-services-owens-cross-roads-al/)
[Cleaning Service Providence Alabama](https://bearbroscleaning.com/locations/cleaning-services-providence-al/)
[Cleaning Service Russellville Alabama](https://bearbroscleaning.com/locations/cleaning-services-russellville-al/)
[Cleaning Service Scottsboro Alabama](https://bearbroscleaning.com/locations/cleaning-services-scottsboro-al/)
## Birmingham
[Bear Brothers Cleaning of Birmingham](https://bearbroscleaning.com/locations/bear-brothers-cleaning-of-birmingham/)
[Cleaning Service Alabaster Alabama](https://bearbroscleaning.com/locations/cleaning-service-alabaster-alabama/)
[Cleaning Service Bessemer Alabama](https://bearbroscleaning.com/locations/cleaning-service-bessemer-alabama/)
[Cleaning Service Birmingham Alabama](https://bearbroscleaning.com/locations/cleaning-service-birmingham-alabama/)
[Cleaning Service Center Point Alabama](https://bearbroscleaning.com/locations/cleaning-service-center-point-alabama/)
[Cleaning Service Chelsea Alabama](https://bearbroscleaning.com/locations/cleaning-service-chelsea-alabama/)
[Cleaning Service Cullman Alabama](https://bearbroscleaning.com/locations/cleaning-service-cullman-alabama/)
[Cleaning Service Gardendale Alabama](https://bearbroscleaning.com/locations/cleaning-service-gardendale-alabama/)
[Cleaning Service Helena Alabama](https://bearbroscleaning.com/locations/cleaning-service-helena-alabama/)
[Cleaning Service Homewood Alabama](https://bearbroscleaning.com/locations/cleaning-service-homewood-alabama/)
[Cleaning Service Hoover Alabama](https://bearbroscleaning.com/locations/cleaning-service-hoover-alabama/)
[Cleaning Service Hueytown Alabama](https://bearbroscleaning.com/locations/cleaning-service-hueytown-alabama/)
[Cleaning Service Irondale Alabama](https://bearbroscleaning.com/locations/cleaning-service-irondale-alabama/)
[Cleaning Service Jasper Alabama](https://bearbroscleaning.com/locations/cleaning-service-jasper-alabama/)
[Cleaning Service Leeds Alabama](https://bearbroscleaning.com/locations/cleaning-service-leeds-alabama/)
[Cleaning Service Mountain Brook Alabama](https://bearbroscleaning.com/locations/cleaning-service-mountain-brook-alabama/)
[Cleaning Service Pelham Alabama](https://bearbroscleaning.com/locations/cleaning-service-pelham-alabama/)
[Cleaning Service Talladega Alabama](https://bearbroscleaning.com/locations/cleaning-service-talladega-alabama/)
[Cleaning Service Trussville Alabama](https://bearbroscleaning.com/locations/cleaning-service-trussville-alabama/)
[Cleaning Service Tuscaloosa Alabama](https://bearbroscleaning.com/locations/cleaning-service-tuscaloosa-alabama/)
[Cleaning Service Vestavia Hills Alabama](https://bearbroscleaning.com/locations/cleaning-service-vestavia-hills-alabama/) | developercode | |
1,893,919 | Is there a single AI tool other than ChatGPT worth paying for? | There’s no doubt that ChatGPT is the tool of the decade (Sora ftw), and there’s a gold rush in AI,... | 0 | 2024-06-19T19:00:36 | https://dev.to/gaw/is-there-a-single-ai-tool-other-than-chatgpt-worth-paying-for-27o | There’s no doubt that ChatGPT is the tool of the decade (Sora ftw), and there’s a gold rush in AI, but do you actually pay for tools outside of it?
ChatGPT covers all your basic uses like getting help with a content plan for this week and subscriptions to GPT Plus cover the more nuanced uses like a recipe generator basis your food likes, dislikes, and allergies.
Right now, by default, our everyday interaction with AI is via tools we’re already using. Maybe it’s Excel, or Canva, or Notion, or Duolingo. We’re mostly only aware of interacting with AI as a chatbot with learnability and character, as a creator of generative images, or as a popup to our workflows.
We might have tried [DALL-E](https://openai.com/index/dall-e-3/) or [character.ai](https://character.ai/?ref=blog.kusho.ai) or made a song for our valentine using [Suno](https://suno.com/?ref=blog.kusho.ai) but it’s hard to imagine doing this daily. It might be because the most prolific interface for AI right now is a chatbot. It might be because the GPT store is still super nascent. It might be because even tech hardware aimed at creative work like the iPad for example doesn’t have an AI suite yet.
If there’s a gold rush in AI isn’t it natural to ask if that also applies to our experience as consumers? I’m curious to know if there’s any AI tool (outside of ChatGPT) that you can’t picture your daily life without. It can be something small too like the equivalent of a step counter!
Plus, do you pay for any AI tools from your pocket? Outside of whatever your workplace offers.
Are there any tools on your radar that you would definitely pay for if you could share the subscription?
We’ve gotten used to paying for content streaming platforms whether that’s Netflix, Apple TV, Spotify, or YouTube Premium. It will be interesting to see how this plays out with AI.
Will smaller tools team up and start bundling themself together? Will marketplaces flourish? Will 1000 true fans apply to GenAI tools? Who knows. | gaw | |
1,893,917 | From Chaos to Order: Structured JSON with Pydantic and Instructor in LLMs | In the world of Large Language Models (LLMs), making them play nice with our applications is key.... | 0 | 2024-06-19T18:57:42 | https://dev.to/gaw/from-chaos-to-order-structured-json-with-pydantic-and-instructor-in-llms-m5o |

In the world of Large Language Models (LLMs), making them play nice with our applications is key. We want our models to dish out neat JSON for easy integration into our full-stack setups. But relying solely on LLM luck for perfect JSON? Let's be realistic.
Suppose you're incorporating an LLM into your app, striving for precise JSON output. Considering the importance of this data, we might need to save it for the next steps in our logic. You provide clear prompts, cross your fingers, and hope. Yet, hope isn't a strategy, and guarantees are scarce.
Meet Pydantic, a handy data validation tool. This tool turns your JSON into a structured class for order in the chaos. Plus, Pydantic brings validations and extra functionality to the table.
We'll also use Instructor. Instructor patches our OpenAI client, empowering it to return our response model (essentially our Pydantic class). Additionally, we can incorporate Max Retries to automatically retry when our LLM fails to deliver the desired output
In simple terms, Pydantic takes your JSON, turns it into a class, and lets you add checks and tweaks. The real win? When Pydantic teams up with LLMs, making your applications more reliable and functional.
## Tackling the JSON Conundrum
Reliability in our outputs is really important (else we'll see you in try-catch hell). Consider this scenario: imagine you're crafting a medical application tasked with extracting information from a string. The next logical step is to convert this data into JSON for further analysis. Perhaps, you plan to map this JSON to a class object, storing it either temporarily in memory or persisting it in a database, say using SQLAlchemy.
In this process, the challenge lies in ensuring that the JSON output remains accurate and consistent, ready to be seamlessly integrated into your application's logic. This is where the crux of the problem resides.
Let's take our medical example and flesh it out.
Suppose we want this information :-
```
medical_info = """Sex: Female, Age: 79
Geographical region: North America
Pathology: Spontaneous pneumothorax
Symptoms:
---------
- I have chest pain even at rest.
- I feel pain.
- The pain is:
» a knife stroke
- The pain locations are:
» upper chest
» breast(R)
» breast(L)
- On a scale of 0-10, the pain intensity is 7
- On a scale of 0-10, the pain's location precision is 4
- On a scale of 0-10, the pace at which the pain appear is 9
- I have symptoms that increase with physical exertion but alleviate with rest.
Antecedents:
-----------
- I have had a spontaneous pneumothorax.
- I smoke cigarettes.
- I have a chronic obstructive pulmonary disease.
- Some family members have had a pneumothorax.
Differential diagnosis:
----------------------
Unstable angina: 0.262, Stable angina: 0.201, Possible NSTEMI / STEMI: 0.160, GERD: 0.145, Pericarditis: 0.091, Atrial fibrillation: 0.082, Spontaneous pneumothorax: 0.060
"""
```
To be converted into this format:-
```
json_format = """{
"patient_info": {
"sex": "",
"age": ,
"geographical_region": "",
},
"medical_history": {
"pathology": "",
"symptoms": {
"description": "",
"pain": {
"type": "",
"locations": [],
"intensity": ,
"location_precision": ,
"pace": ,
},
"increase_with_exertion": true/false,
"alleviate_with_rest": true/false,
},
},
"risk_factors": {},
"differential_diagnosis": [
{
"disease_name": "",
"probability":
},
]
}"""
```
Let's take a sec and think what we are doing. If we assume our llms as a black box:-
```
def llm(prompt: str, schema: str) -> str:
pass # Black Magic, and hope to receive valid json.
```
Now let's plead to the AI goddess to convert this into valid JSON.
```
completion = openai_client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[{
"role": "user",
"content": f"Please convert the following information into valid json representing the medical diagnosis ${medical_info}. Please convert the data in the following format and fill in the data ${json_format}"
}]
)
# Now let's extract our "valid" json
dump = completion.model_dump()
medical_info =json.loads(dump["choices"][0]["message"]["content"])
print(json.dumps(medical_info, indent=2)) # A big leap of faith.
```
In the code, relying solely on json.dumps might lead to errors if the
model doesn't provide valid JSON. Adding a try-except block for error
handling and incorporating a retry mechanism can be quite cumbersome.
Dealing with these uncertainties emphasizes the challenges of ensuring a
smooth interaction with the language model output.
By the way output we get looks something like this:-
```
{
"patient_info": {
"sex": "Female",
"age": 79,
"geographical_region": "North America"
},
"medical_history": {
"pathology": "Spontaneous pneumothorax",
"symptoms": {
"description": "I have chest pain even at rest. I feel pain.",
"pain": {
"type": "a knife stroke",
"locations": [
"upper chest",
"breast(R)",
"breast(L)"
],
"intensity": 7,
"location_precision": 4,
"pace": 9
},
"increase_with_exertion": true,
"alleviate_with_rest": true
}
},
//...
"probability": 0.06
}
```
The issue of uncertainties becomes more pronounced when dealing with complex data structures or interconnected structures.
We're crossing our fingers, hoping that when we convert our LLM output, a string supposedly in valid JSON format, into our object, everything works smoothly. However, in our current testing example, a couple of issues are still lingering:
**1. Lack of Type Safety:**
The current approach involves converting a string to a JSON object, and we're essentially relying on the all-powerful AI god to provide us with correct JSON. What if, instead of a birthdate, we need...
**2. Validation Issues:**
Handling input validation manually is a bit of a headache. To validate, we have to manually check the structure of the JSON, which results in a messy function like this:
```
def validate_json_structure(json_string):
try:
data = json.loads(json_string)
# Validate patient_info
patient_info = data.get("patient_info")
if not patient_info or not isinstance(patient_info, dict):
return False
# Validate sex, age, and geographical_region in patient_info
#... os on with more and more validations within validations
return True
except json.JSONDecodeError:
return False
```
What a horrible mess. It's not the most elegant solution. (Psst, we'll soon explore how Pydantic can simplify this mess and add various validations.)
On another note, Pydantic allows us to chain our prompts using inheritance(OOP), as you'll see in an example towards the end of this blog.
## Using Pydantic and Instructor to get a Structured response
We aim for our magical function to receive a schema defined as a Python class or model and return either the same or another class/model. It should look something like this:-
```
def llm(prompt: str, schema: Model) -> Model:
pass
```
This is where Pydantic steps in. Let's import the necessary modules and set up our OpenAI client with the help of [Instructor](https://pypi.org/project/instructor/?ref=blog.kusho.ai):
```
```bash
$ pip install instructor # To install instructor
```
import instructor
instructor_openai_client = instructor.patch(openai.Client(
api_key=open_ai_key, organization=open_ai_org_key, timeout=20000, max_retries=3
))
```
Overall, Instructor is a user-friendly, transparent, and Pythonic solution for leveraging OpenAI's function calling to extract data. It patches to the OpenAI's library and helps us achieve the (prompt, model) -> model structure.
Next, we define our JSON structure using Pydantic classes. This approach allows us to include additional docstrings for field descriptions and other useful information. All of this aids the language model in generating or extracting information from the context provided by the model.
```
$ pip install pydantic # To install pydantic
```
```
class Symptoms(BaseModel):
"""
Represents the symptoms of a patient.
"""
description: str = Field(description="A general scientific and objective description of the symptoms.")
pain_type: str
locations: List[str]
intensity: int
location_precision: int
pace: int
class MedicalHistory(BaseModel):
pathology: str
symptoms: Symptoms
increase_with_exertion: bool
alleviate_with_rest: bool
class RiskFactors(BaseModel):
spontaneous_history: bool
smoking_history: bool
copd_history: bool
family_history: str
class DifferentialDiagnosis(BaseModel):
disease_name: str
probability: float
class PatientInfo(BaseModel):
sex: Literal['M', 'F']
age: int
geographical_region: str
class PatientData(BaseModel):
patient_info: PatientInfo
medical_history: MedicalHistory
risk_factors: RiskFactors
differential_diagnosis: List[DifferentialDiagnosis]
```
Classes utilising Pydantic to represent our response json
## Now, let's utilize instructor client with desired response_model
```
completion = instructor_openai_client.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{
"role": "user",
"content": f"Please convert the following information into valid JSON representing the medical diagnosis {medical_info}."
}
],
response_model=MedicalInfo # Replace with the appropriate response model
)
print(type(completion))
print(json.dumps(completion.model_dump(), indent=1))
```
Voila:-
```
<class '__main__.PatientData'> # Notice how the type of data structure we got is a class!!!
{
"patient_info": {
"sex": "F",
"age": 79,
"geographical_region": "North America"
},
"medical_history": {
"pathology": "Spontaneous pneumothorax",
"symptoms": {
"description": "I have chest pain even at rest. I feel pain. The pain is a knife stroke. The pain locations are upper chest, breast(R), breast(L). On a scale of 0-10, the pain intensity is 7. On a scale of 0-10, the pain's location precision is 4. On a scale of 0-10, the pace at which the pain appears is 9. I have symptoms that increase with physical exertion but alleviate with rest.",
"pain_type": "knife stroke",
"locations": [
"upper chest",
"breast(R)",
"breast(L)"
],
"intensity": 7,
"location_precision": 4,
"pace": 9
},
"increase_with_exertion": true,
"alleviate_with_rest": true
},
"risk_factors": {
...
"probability": 0.06
}
]
}
```
By setting **response_model** to **MedicalInfo**, we ensure a clear output structure. Pydantic guarantees data adherence, streamlining integration and providing a type hint of **PatientData**.
Pydantic organizes JSON with automatic validation. Deviations trigger validation errors, ensuring data integrity.
Docstrings and field descriptions aid developers and shape the JSON schema for OpenAI. Navigate confidently with structured, validated data, and notice the response type as **PatientData** for seamless integration.
## Congratulations, It's a class!
In the next part of this series, we'll talk about LLM validations, seamless retry mechanisms, how you can create complex data structures like directed acyclic graphs (DAGs), and much more using Pydantic. Stay tuned for the next part.
References:-
1. This blog post is inspired by an awesome talk by [Jason Liu](https://github.com/jxnl?ref=blog.kusho.ai) watch his [talk](https://www.youtube.com/watch?v=yj-wSRJwrrc) for better reference.
2. [Pydantic,](https://pypi.org/project/pydantic/?ref=blog.kusho.ai) [Instructor](https://pypi.org/project/instructor/?ref=blog.kusho.ai)
| gaw | |
1,893,916 | Recursion Lunch Meeting! | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-19T18:57:23 | https://dev.to/codeeartist/recursion-lunch-meeting-5dbc | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Recursion In 11111111 Characters
You **call** Annie ask her if she wants to go for lunch & Annie says yes
but she also wants to meet Carl & she puts you **on hold** & **call** Carl
Carl answers Annie's **call** & says great let's go & **hang up**
Annie **returns** to your **call** & **hang up**
Now go to lunch ᕕ( ᐛ )ᕗ | codeeartist |
1,893,903 | Hello Data | Hello everyone, The world is filled with data. Everything today is data and it's huge to store,... | 0 | 2024-06-19T18:55:19 | https://dev.to/kathrinesathi/hello-data-569i | Hello everyone,
The world is filled with data. Everything today is data and it's huge to store, manage, and retrieve. How are we going to manage everything? Here comes the Queen, where the data is being managed and stored. Her name is **Database**. In our day-to-day lives, it is challenging to manage this huge data. So how are we gonna manage this database, and here comes another queen who is the sister of Database is **Database management system**.
And the connection between these two queens who are the mother of the entire thing is **SQL**
P.S.: It's good when we tell these things as a story. Moving into a serious tone from here.
**SQL:** Structured Query Language, which is a bridge between database and database management systems. SQL is used to create, update, store, and retrieve data from the database.
**Data** can be in three types:
1. Structured data - The data which is in the table format.
2. Unstructured data - The data which is present in the social media
3. Semi-structured data - The data that is present in our emails
The **database **can be in four types:
- Relational Database
- Non - Relational Database
- Hierarchical Database
**NOSQL** will deal with the data which has many data structures including columns, documents, and key-value pairs.
Thanks for reading my blog, Happy learning. Will meet you in the upcoming blogs!
#sqlbasics #database #dbms #data #HappyLearning
| kathrinesathi | |
1,893,915 | Graphs and why so important! | Graphs are networks of points (vertices) connected by lines (edges). They represent relationships or... | 0 | 2024-06-19T18:52:24 | https://dev.to/mlodovico/graphs-and-why-so-important-8bm | devchallenge, cschallenge, computerscience, beginners | Graphs are networks of points (vertices) connected by lines (edges). They represent relationships or paths, useful in routes, social networks, and bioinformatics. In programming and software engineering, they optimize search algorithms, network analysis, route planning, and solving problems. | mlodovico |
1,893,914 | Social media icons | Check out this Pen I made! | 0 | 2024-06-19T18:50:08 | https://dev.to/tidycoder/social-media-icons-3d3f | codepen, css, html | Check out this Pen I made!
{% codepen https://codepen.io/TidyCoder/pen/pomaYNL %} | tidycoder |
1,893,912 | MySQL: Using and Enhancing `DATETIME` and `TIMESTAMP` | Introduction In MySQL, DATETIME and TIMESTAMP are commonly used data types for storing... | 0 | 2024-06-19T18:48:49 | https://dev.to/salmazz/mysql-using-and-enhancing-datetime-and-timestamp-4bpm | mysql, database, data, dataengineering | #### Introduction
In MySQL, `DATETIME` and `TIMESTAMP` are commonly used data types for storing temporal values. Although both serve the purpose of storing date and time, there are fundamental differences between them that affect their usage. This article will explore the differences between `DATETIME` and `TIMESTAMP`, and how to overcome some limitations of `DATETIME` to leverage the advantages of `TIMESTAMP`.
#### Differences Between `DATETIME` and `TIMESTAMP`
##### Range:
- **DATETIME**: Can store values from `'1000-01-01 00:00:00'` to `'9999-12-31 23:59:59'`.
- **TIMESTAMP**: Can store values from `'1970-01-01 00:00:01'` to `'2038-01-19 03:14:07'`.
##### Storage:
- **DATETIME**: Stored as is, in date and time format, without relying on the time zone. It requires 8 bytes of storage.
- **TIMESTAMP**: Stored as an integer representing the number of seconds since `1970-01-01 00:00:00 UTC`. It requires 4 bytes of storage.
##### Time Zone:
- **DATETIME**: Does not depend on the time zone and stores the value as is.
- **TIMESTAMP**: Affected by the current time zone of the MySQL server. When inserting or retrieving values, they are automatically converted between the local time and UTC.
##### Automatic Updates:
- **DATETIME**: Does not support automatic updates directly when a row is modified.
- **TIMESTAMP**: Can be set to automatically update the value when a row is modified using the options `DEFAULT CURRENT_TIMESTAMP` and `ON UPDATE CURRENT_TIMESTAMP`.
#### Enhancing `DATETIME` Usage
To overcome some limitations of the `DATETIME` data type, you can follow these strategies:
##### 1. Standardizing Time Zone
To solve the issue of `DATETIME` not adhering to the time zone, you can standardize all temporal operations at the database and application level to use UTC.
###### Setting the Database to Work in UTC:
```sql
SET GLOBAL time_zone = '+00:00';
SET time_zone = '+00:00';
```
###### Converting Values to UTC on Insertion:
```sql
INSERT INTO example_table (event_time) VALUES (CONVERT_TZ('2024-06-19 12:30:00', 'Your/Timezone', '+00:00'));
```
###### Converting Values from UTC to Local Time on Retrieval:
```sql
SELECT CONVERT_TZ(event_time, '+00:00', 'Your/Timezone') as local_event_time FROM example_table;
```
##### 2. Automatic Updates
To automatically update `DATETIME` values when a row is modified, you can use triggers.
###### Creating a Trigger to Update the Field on Modification:
```sql
CREATE TRIGGER before_update_example_table
BEFORE UPDATE ON example_table
FOR EACH ROW
BEGIN
SET NEW.updated_at = NOW();
END;
```
##### 3. Using Default Values
To set `DATETIME` values automatically on insertion, you can assign default values using the `NOW()` function.
###### Setting Default Fields:
```sql
CREATE TABLE example_table (
id INT PRIMARY KEY AUTO_INCREMENT,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
updated_at DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
```
##### 4. Handling Precision
If higher precision for times is needed, you can use `DATETIME(6)` or `TIMESTAMP(6)` to store times up to microsecond precision.
###### Creating a Table with Microsecond Precision:
```sql
CREATE TABLE example_table (
id INT PRIMARY KEY AUTO_INCREMENT,
created_at DATETIME(6) DEFAULT CURRENT_TIMESTAMP(6),
updated_at DATETIME(6) DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6)
);
```
##### 5. Handling Times in the Application
Ensure your application handles times correctly by converting all times to UTC before storing them and converting them back to local time when displaying them to the user.
###### Example in PHP:
```php
// Setting the application time zone to UTC
date_default_timezone_set('UTC');
// Storing the current time as UTC
$current_time_utc = date('Y-m-d H:i:s');
$query = "INSERT INTO example_table (event_time) VALUES ('$current_time_utc')";
// Retrieving the time and converting it to local time
$event_time_utc = '2024-06-19 12:30:00';
$event_time_local = new DateTime($event_time_utc, new DateTimeZone('UTC'));
$event_time_local->setTimezone(new DateTimeZone('Your/Timezone'));
echo $event_time_local->format('Y-m-d H:i:s');
```
### Summary
- **`DATETIME`**: Used for storing dates and times without time zone dependencies. Suitable for future events or fixed scheduling.
- **`TIMESTAMP`**: Used for tracking time relative to the current time zone. Suitable for logging the time when data is inserted or updated.
### Enhancing `DATETIME`
- **Standardizing Time Zone**: Use UTC to standardize times.
- **Automatic Updates**: Use triggers to update fields automatically.
- **Default Values**: Set default values for the `DATETIME` field.
- **Precision**: Use `DATETIME(6)` for higher precision.
- **Handling Times in the Application**: Correctly convert times between UTC and local time.
By following these strategies, you can leverage the benefits of `TIMESTAMP` while using `DATETIME`, making it easier to handle temporal values efficiently and effectively. | salmazz |
1,893,911 | i wish to learn | please leave me in peace | 0 | 2024-06-19T18:46:17 | https://dev.to/palmer_meli_496adb64cc3d6/i-wish-to-learn-2f2g | please leave me in peace
| palmer_meli_496adb64cc3d6 | |
1,893,910 | LLMs are changing the nuts & bolts of software development | The 21st century has been all about software development and still the present moment has sparked a... | 0 | 2024-06-19T18:46:05 | https://dev.to/gaw/llms-are-changing-the-nuts-bolts-of-software-development-2nk4 | The 21st century has been all about software development and still the present moment has sparked a new kind of techno-optimism.
The nuts and bolts of software are transforming because (for the first time) we have intelligent and responsive materials to play with.
Marc Andreessen says in the [Techno-Optimist Manifesto](https://a16z.com/the-techno-optimist-manifesto/), “We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone – we are literally making sand think.”
LLMs are responsive, intelligent, complex pattern-matching machines with access to a sea of knowledge. Their adoption is bound to be similar to historic discoveries of new materials: bronze, iron, plastic, silicon chips, etc.
At KushoAI, we’re training LLMs to refine their patten matching abilities and aim them at specific purposes. The beauty of this approach is that you now have an intelligent counterpart that is really good at doing a task you’d rather skip. You can turn up the frequency of work you enjoy and delegate the rest.
Fixing small bugs? Testing? Ensuring release stability? Kusho's models are trained to understand API context, generate and execute comprehensive tests, and provide analyzed results for debugging, all in sync with the deployment process.
Each time you ask a developer which part of their job they'd rather delegate — testing is often the first answer. It’s a long-standing question in fast-shipping engineering teams: should we test more thoroughly or fix things on the fly as & when they break in production?
The consequences are borne soon enough with bugs that cause a loss in revenue and lead to bad customer experiences. Though, we’ve all settled into this compromise by now.
Our job is to ensure that you and your teams don’t have to compromise. Do you want to ship high-quality products at speed backed by stable releases? Now you can.
The Kusho model has refined pattern-matching abilities to ensure that the coverage and accuracy for your backend APIs are the gold standard. It adapts to the context of your codebase and it shouldn’t take more than a few hours for your organization to run Kusho autonomously.
Think of a transition we’ve all pretty much experienced: Cassettes —> Walkmans —> MP3 Players (iPods) —> Bluetooth Headphones — > Wireless Bluetooth Earbuds (AirPods). The measure of progress is a new capability, performance, and intuitiveness per unit of time.
We’re using LLM-trained special-purpose models to unlock new capabilities, greater performance, and higher intuitiveness per unit of time for your tech teams. | gaw | |
1,893,909 | The Trade Table | Mini Split EG4 Mini Split ACiQ Mini Split Gree Mini Split MrCool Mini Split Buck Cool Mini... | 0 | 2024-06-19T18:45:38 | https://dev.to/developercode/the-trade-table-30j0 | [Mini Split](https://thetradetable.com/collections/mini-split)
[EG4 Mini Split](https://thetradetable.com/collections/eg4-mini-split)
[ACiQ Mini Split](https://thetradetable.com/collections/aciq)
[Gree Mini Split](https://thetradetable.com/collections/gree-mini-split)
[MrCool Mini Split](https://thetradetable.com/collections/mrcool-mini-splits)
[Buck Cool Mini Split](https://thetradetable.com/collections/buck-cool-mini-split)
[Stealth Comfort](https://thetradetable.com/collections/stealth-comfort) | developercode | |
1,893,908 | The Trade Table | The Trade Table | 0 | 2024-06-19T18:43:54 | https://dev.to/developercode/the-trade-table-32c7 | [The Trade Table](https://thetradetable.com/) | developercode | |
1,894,092 | Cursos Oracle Gratuitos: Treinamentos e Certificados | O programa Oracle Learning Explorer oferece uma oportunidade única para profissionais do mundo... | 0 | 2024-06-23T13:50:11 | https://guiadeti.com.br/cursos-oracle-gratuitos-treinamentos-certificados/ | cursogratuito, cursosgratuitos, java, mysql | ---
title: Cursos Oracle Gratuitos: Treinamentos e Certificados
published: true
date: 2024-06-19 18:39:43 UTC
tags: CursoGratuito,cursosgratuitos,java,mysql
canonical_url: https://guiadeti.com.br/cursos-oracle-gratuitos-treinamentos-certificados/
---
O programa Oracle Learning Explorer oferece uma oportunidade única para profissionais do mundo inteiro adquirirem treinamento de nível básico e certificação gratuita em todo o portfólio de produtos.
Criado para capacitar e desenvolver habilidades valiosas em TI, este programa inclui conteúdos como SaaS, Cloud Infrastructure, Hardware, e muitos outros tópicos essenciais.
Ao participar, você terá acesso a vários recursos de aprendizado que lhe permitirão aprofundar seus conhecimentos nas mais diversas tecnologias Oracle.
Comece sua jornada hoje e torne-se um especialista em tecnologias Oracle de forma totalmente online e gratuita, ampliando suas habilidades e oportunidades no mercado de trabalho.
## Oracle Learning Explorer
### Treinamentos e Certificados Gratuitos

_Imagem da página dos cursos_
O programa Oracle Learning Explorer oferece cursos de treinamento de nível básico e certificação gratuita para todo o portfólio de produtos Oracle.
### Capacitação para Profissionais de TI
Criado para capacitar profissionais do mundo inteiro, o programa ajuda a desenvolver habilidades valiosas em TI, essenciais para o mercado de trabalho.
### Conteúdos Abrangentes
Os cursos cobrem vários tópicos, incluindo SaaS, Cloud Infrastructure, Hardware e muito mais. As vídeo-aulas são ministradas em Inglês com legenda PT-BR. Confira a ementa:
#### Software como um Serviço (Saas)
- Gestão do capital humano;
- Gestão de Talentos;
- Planejamento de Recursos Empresariais;
- Gerenciamento de Cadeia de Fornecedores;
- Compras;
- Enterprise Performance Management;
- Fusion Marketing;
- Fusion Sales;
- Fusion Service.
#### Oracle Cloud Infrastructure
OCI.
#### Hardware e Software
- Oracle Database;
- Java;
- MySQL.
### Inicie Sua Jornada
Comece sua jornada hoje e torne-se um Oracle Learning Explorer, adquirindo conhecimentos fundamentais em tecnologias Oracle.
<aside>
<div>Você pode gostar</div>
<div>
<div>
<div>
<div>
<span><img decoding="async" width="280" height="210" src="https://guiadeti.com.br/wp-content/uploads/2024/06/Oracle-Learning-Explorer-280x210.png" alt="Oracle Learning Explorer" title="Oracle Learning Explorer"></span>
</div>
<span>Cursos Oracle Gratuitos: Treinamentos e Certificados</span> <a href="https://guiadeti.com.br/cursos-oracle-gratuitos-treinamentos-certificados/" title="Cursos Oracle Gratuitos: Treinamentos e Certificados"></a>
</div>
</div>
<div>
<div>
<div>
<span><img decoding="async" width="280" height="210" src="https://guiadeti.com.br/wp-content/uploads/2024/01/Cursos-Sest-Senat-280x210.png" alt="Cursos Sest Senat" title="Cursos Sest Senat"></span>
</div>
<span>SEST SENAT Cursos Gratuitos: LGPD, Excel, Gestão E Informática</span> <a href="https://guiadeti.com.br/sest-senat-cursos-gratuitos/" title="SEST SENAT Cursos Gratuitos: LGPD, Excel, Gestão E Informática"></a>
</div>
</div>
<div>
<div>
<div>
<span><img decoding="async" width="280" height="210" src="https://guiadeti.com.br/wp-content/uploads/2024/06/Google-Cloud-IA-280x210.png" alt="Google Cloud IA" title="Google Cloud IA"></span>
</div>
<span>Google Cloud Skills Boost: Cursos De Inteligência Artificial Gratuitos</span> <a href="https://guiadeti.com.br/google-cloud-skills-boost-cursos-ia-gratuitos/" title="Google Cloud Skills Boost: Cursos De Inteligência Artificial Gratuitos"></a>
</div>
</div>
<div>
<div>
<div>
<span><img decoding="async" width="280" height="210" src="https://guiadeti.com.br/wp-content/uploads/2024/06/Evento-De-Programacao-SAS-280x210.png" alt="Evento De Programação SAS" title="Evento De Programação SAS"></span>
</div>
<span>Evento De Programação SAS Para Iniciantes Gratuito</span> <a href="https://guiadeti.com.br/evento-programacao-sas-iniciantes-gratuito/" title="Evento De Programação SAS Para Iniciantes Gratuito"></a>
</div>
</div>
</div>
</aside>
## Oracle Cloud
Oracle Cloud é uma plataforma abrangente de serviços de computação em nuvem oferecida pela Oracle Corporation. Ela envolve uma grande lista de serviços que ajudam empresas a modernizar suas operações, melhorar a eficiência e acelerar a inovação.
### Principais Serviços
#### Infraestrutura como Serviço (IaaS)
A Infraestrutura como Serviço (IaaS) do permite que empresas aluguem recursos de computação, armazenamento e rede sob demanda.
Faznedo o uso da IaaS, as empresas podem executar e gerenciar aplicativos sem a necessidade de investir em hardware físico, proporcionando flexibilidade e economia de custos.
#### Plataforma como Serviço (PaaS)
A Plataforma como Serviço (PaaS) oferece um ambiente de desenvolvimento e implantação completo na nuvem. Desenvolvedores podem criar, testar e implantar aplicativos de maneira rápida e eficiente.
O PaaS inclui ferramentas para gerenciamento de dados, desenvolvimento de aplicativos, integração, análise e muito mais.
### Software como Serviço (SaaS)
O SaaS fornece uma suíte completa de aplicativos de software que são entregues pela internet.
Empresas podem acessar soluções para gestão de recursos humanos, finanças, cadeia de suprimentos, vendas, marketing e mais, sem a necessidade de instalação ou manutenção de software.
### Banco de Dados como Serviço (DBaaS)
Oracle oferece um serviço de banco de dados na nuvem que permite às empresas executar bancos de dados Oracle na nuvem com a mesma confiabilidade e desempenho que teriam em um ambiente on-premises. O DBaaS facilita a escalabilidade, backups automáticos e segurança avançada.
### Vantagens da Oracle Cloud
#### Segurança
Oferece segurança robusta com criptografia de dados em repouso e em trânsito, além de ferramentas avançadas de monitoramento e proteção contra ameaças. A segurança é uma prioridade, garantindo que os dados das empresas estejam sempre protegidos.
#### Escalabilidade
A plataforma é altamente escalável, permitindo que as empresas aumentem ou reduzam seus recursos de acordo com a demanda. Essa flexibilidade ajuda a otimizar custos e garantir que os aplicativos funcionem de maneira eficiente em qualquer escala.
#### Inovação Contínua
A plataforma está constantemente evoluindo, com atualizações e novos recursos sendo introduzidos regularmente. Isso garante que as empresas tenham acesso às tecnologias mais recentes e possam inovar continuamente em suas operações.
### Suporte e Confiabilidade
Tendo um forte compromisso com a confiabilidade, oferece alta disponibilidade e suporte 24/7. As empresas podem contar com a plataforma para manter seus aplicativos e dados acessíveis e seguros em todos os momentos.
### Casos de Uso
#### Modernização de TI
Empresas podem usar a ferramenta para modernizar suas infraestruturas de TI, migrando de sistemas on-premises para a nuvem. Isso reduz custos, melhora a eficiência e libera recursos para focar em inovação.
#### Análise de Dados
A plataforma fornece ferramentas poderosas para análise de dados, permitindo que empresas transformem grandes volumes de dados em insights acionáveis. Isso ajuda a tomar decisões informadas e estratégicas.
#### Desenvolvimento de Aplicativos
Desenvolvedores podem aproveitar a plataforma para criar e implantar aplicativos rapidamente. As ferramentas de desenvolvimento e integração facilitam a criação de soluções inovadoras e personalizadas.
## Oracle University
Oracle University é a divisão de educação da Oracle Corporation, dedicada a oferecer treinamentos e certificações para profissionais de TI em todo o mundo.
### Cursos e Treinamentos
A instituição oferece uma variedade de treinamentos online que permitem aos profissionais aprender no seu próprio ritmo e de qualquer lugar.
Para aqueles que preferem uma abordagem mais tradicional, a escola também oferece cursos presenciais. Esses treinamentos são ministrados por instrutores experientes e oferecem uma experiência de aprendizado imersiva.
### Desenvolvimento de Carreira
Os programas de treinamento da instituição ajudam os profissionais a avançarem em suas carreiras, fornecendo as habilidades e conhecimentos necessários para se destacarem no mercado de trabalho.
### Melhoria de Desempenho
As empresas que investem em treinamentos dessa escola podem esperar melhorias significativas no desempenho de suas equipes, resultando em maior eficiência e produtividade.
### Acesso a Especialistas
Os cursos são ministrados por instrutores altamente qualificados, que possuem grande experiência e conhecimento das tecnologias Oracle. Isso garante que os participantes recebam uma educação de alta qualidade.
## Link de inscrição ⬇️
As [inscrições para o Oracle Learning Explorer](https://education.oracle.com/pt_BR/learning-explorer#startLearning) devem ser realizadas no site da Oracle University.
## Compartilhe esta oportunidade de capacitação com seus colegas e amplie suas habilidades!
Gostou do conteúdo sobre os cursos gratuitos da Oracle? Então compartilhe com a galera!
O post [Cursos Oracle Gratuitos: Treinamentos e Certificados](https://guiadeti.com.br/cursos-oracle-gratuitos-treinamentos-certificados/) apareceu primeiro em [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,893,906 | How To Use Video Speed Controller? | Using Video Speed Controller Plus is an easy process. Simply follow the steps below: Navigate to a... | 0 | 2024-06-19T18:39:31 | https://dev.to/dailyextensions/how-to-use-video-speed-controller-k2g | chrome, extensions, browser |

Using Video Speed Controller Plus is an easy process. Simply follow the steps below:
1. Navigate to a website with video content like YouTube or Vimeo.
2. Click on the Video Speed Controller Plus icon in the extension bar.
3. Drag the slider to increase or decrease the speed.
You Can also use the shortcut keys. They are as follows:
1. Ctrl + Left Shift + Right Arrow: Increase Speed.
2. Ctrl + Left Shift + Left Arrow: Decrease Speed.
3. Ctrl + Left Shift + Up Arrow: Reset Speed.
Download the [Video Speed Controller Chrome](https://chromewebstore.google.com/detail/video-speed-controller-pl/pomfnnhpgocoblanbbnkegpocjijefbj) extension here. | dailyextensions |
1,893,905 | Understanding CORS for your LLM applications | Imagine this: you're working on that million-dollar project idea of which you're having dreams and... | 0 | 2024-06-19T18:37:01 | https://dev.to/gaw/understanding-cors-for-your-llm-applications-i7h | Imagine this: you're working on that million-dollar project idea of which you're having dreams and it's all hunky dory, you have completed the APIs, and now moving towards your friend who is handling the API integration for UI. You both sit together for a dry run and see this horrendous error that makes you question your life choices...

Let's put our Stack Overflow hats on and try to figure out what's going on. We see a few terms like CORS, preflight request, etc...
## CORS, a new pokemon?

Just like Gardevoir, I think CORS has some sort of psychic power to create
a small black hole, but let's get technical and try to understand what it's about.
Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism
that allows a server to indicate any origins (domain, scheme, or port)
other than its own from which a browser should permit loading resources.
CORS also relies on a mechanism by which browsers make a "preflight"
request to the server hosting the cross-origin resource, in order to
check that the server will permit the actual request. In that preflight,
the browser sends headers that indicate the HTTP method and headers that
will be used in the actual request.
Source: [MDN Docs](url)
ELI5 Version:
Let's say you go to domain-a.com and here you're trying to request resources from domain-b.com, domain-b can throw tantrums and deny the resource and this is a feature and not a bug!
## Why it's a feature
CORS exists to facilitate secure communication between the browser and server but why do we need it?
Suppose you have an API that returns a Pokemon's data in which I am interested, and I try to use the same API to power my multi-million dollar project idea, you as an owner of that API can prevent me from doing this by blocking my origin.
I know a pretty lame example but bear with it.
## Help! My HELLO WORLD API is Feeling Shy — Any Tricks to Make It Scream 'HELLO'?
So you have created a simple GET endpoint using Flask, feeling pretty accomplished, but it's not working the way it's expected?
Well you're in luck today, we are not going to invent a solution rather we are going to use third-party packages to resolve the issue (really? issue? cors?)
We'll be working with Python & Flask today! Simply create a new file named app.py and paste the following code:
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello, World!'
if __name__ == '__main__':
app.run(debug=True)
```
Here we are just spinning up a flask server which returns "Hello, World!" text on the "/" route after this run the below command to start the server
```
python app.py
```
you'll see this on screen.

Let's see what happens in the browser meanwhile.
## It works??

Well, it worked because the request is being originated from the same host i.e., 127.0.0.1 (I know there's no place like 🏠) hence it is working as expected.
And now open the browser console on any webpage and paste the following JS snippet
```
fetch('http://localhost:5000/');
```
## Shit Happens
Now probably you're questioning your life choices and why all this happens to you. Let's not sweat it and turn to our dear friend Stack Overflow.

Let's make some modifications to the code, we are going to use flask-cors extension, you can install it via pip
```
pip install flask-cors
```
With some modifications i.e., adding flask_cors related configuration in the code, let's try running it again.
```
from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
CORS(app, origins="*") # This will enable CORS for all routes, and a very bad idea to allow a wildcard
@app.route('/')
def hello_world():
return 'Hello, World!'
if __name__ == '__main__':
app.run(debug=True)
```
Let's try the browser console again.

Finally, the moment you have been waiting for. It works!!!!
## Key Takeaways
CORS is a mechanism that allows us to restrict the resources being requested from other origins. We also tried to make it work for our use case, i.e, frontend integration.
In the next part, we will dive deeper and discover more ways to configure cors to our advantage in terms of security and also see some sneaky ways to avoid it.
## Additional Reading
[Fetch Living Standard - HTTP CORS protocol](url)
[Stack Overflow - XMLHttpRequest cannot load xxx: No 'Access-Control-Allow-Origin' header](url)
| gaw | |
1,893,904 | Exploring Metrics with Thanos Query UI and PromQL | The Thanos Query UI provides robust querying capabilities across multiple Prometheus instances,... | 0 | 2024-06-19T18:36:55 | https://dev.to/agagag/exploring-metrics-with-thanos-query-ui-and-promql-3he9 | The Thanos Query UI provides robust querying capabilities across multiple Prometheus instances, offering a comprehensive analysis of your metrics. Using the Thanos Interactive example dataset, I experimented with various PromQL queries to explore metrics. Here’s a summary based on frequent use cases:
**Available Metrics in the Metrics Explorer:**
- continuous_app_metric0
- continuous_app_metric1
- continuous_app_metric2
- continuous_app_metric3
- continuous_app_metric4
**PromQL Query Examples:**
1. **Sum of Metrics:** `sum(continuous_app_metric0)` - Sum across all clusters and replicas.
2. **Sum by Cluster:** `sum(continuous_app_metric0) by (cluster)` - Grouped by cluster.
3. **Sum by Replica:** `sum(continuous_app_metric0) by (replica)` - Grouped by replica.
4. **Average by Cluster:** `avg(continuous_app_metric0) by (cluster)` - Average grouped by cluster.
5. **Maximum Value by Replica:** `max(continuous_app_metric0) by (replica)` - Max value grouped by replica.
6. **Count of Instances:** `count(continuous_app_metric0)` - Count of reporting instances.
7. **Specific Cluster and Replica:** `continuous_app_metric0{cluster="eu-1", replica="0"}` - Current value for a specific cluster and replica.
8. **Alerting:** `continuous_app_metric0 > 100` - Create an alert if the metric exceeds a threshold.
**Understanding Empty Query Results:**
- **Histogram Quantile Query:** May return empty if `_bucket` suffix is missing, required for `histogram_quantile`.
- **Comparison Between Clusters:** Returns empty if data points are missing or unmatched in timestamps.
**PromQL Info Messages:**
- **Rate and Increase Queries:** Indicate that `continuous_app_metric0` might not be a counter as it does not end in `_total`, `_sum`, `_count`, or `_bucket`.
**Metric Types Explained:**
- **Metric Types in Prometheus**
- **Gauge:** Tracks current values that can go up or down (e.g., memory usage).
- **Counter:** Tracks cumulative totals that only increase (e.g., total HTTP requests).
- **Histogram:** Tracks distributions across buckets (e.g., request latencies).
- **Summary:** Tracks distributions as quantiles/percentiles (e.g., request latencies).
- **Metric Types in Instrumentation**
- **Gauge:** Use methods like `Set()`, `Inc()`, `Dec()`, `Add()`, `Sub()`.
- **Counter:** Use methods like `Inc()`, `Add()`.
- **Histogram:** Configure buckets, use `Observe()` to categorize values.
- **Summary:** Specify quantiles, use `Observe()` to track values. | agagag | |
1,893,890 | Create and Connect to a Linux VM using a Public Key | Azure virtual machines (VMs) can be created through the Azure portal. This method provides a... | 0 | 2024-06-19T18:36:27 | https://dev.to/tracyee_/create-and-connect-to-a-linux-vm-using-a-public-key-4jfe | cloudcomputing, azure, linux, virtualmachine |
Azure virtual machines (VMs) can be created through the Azure portal. This method provides a browser-based user interface to create VMs and their associated resources. This blog post shows you how to use the Azure portal to create a virtual machine (VM) that runs on Linux server and connect using Powershell.
In this blog you will see how to create and connect a Linux VM using public key.Public key authentication allows you to access a server via ssh without password.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/en-us/free/?WT.mc_id=A261C142F) before you begin.
**Sign in to Azure**
Sign in to the [Azure portal](https://azure.microsoft.com/en-us/get-started/azure-portal).
**Create virtual machine**
- Enter virtual machines in the search.
- Under Services, select Virtual machines.
- Subscription: Choose the appropriate subscription.
- Resource Group: Create a new or select an existing resource group.
- Virtual machine name: Give your VM a unique name.
- Region: Choose the data center region of your choice, preferably a region closest to you.
- Availability options: Select the availability preferences.
- Choose a linux server for the Image.


- Select size of the memory

**Security and Authentication**
- Authenticator type - Select SSH public key
Azure will automatically generate an SSH key pair for you to
store for future use. It is a fast simple and secure way to
connect to the VM.
- SSH Key type - Select RSA SSH Format

- Enter key pair name
- Public inbound ports- Select allow selected ports
- Select HTTP(80),SSH(22)
- Click Review + Create

- VM is validated and shows the costing information.
- Select create.

**Download private key and create resource**
The private key will be saved in the computer.

**Deployment is complete**
-Select go to resource

**VM is ready**
Here we can see all the details on the VM that was just created. Copy the public IP address because we would use it to connect.

**Connecting to the VM using power shell**
- Navigate to Windows Powershell on a PC.
- Run this command the command below
`ssh -i "path to the private key on the pc" username@publicIP`

After connecting to the VM,we run some sudo commands to test
`sudo apt-get -y update`

`sudo apt-get -y install nginx`

**Testing Public IP on a Browser**

**Clean Up**
After creating and deploying the Vm, since this project is just for practice.We clean up by deleting the resource group to avoid incurring cost.
- Search for resource group
- Select the resource group that was created for this VM

- Select all resources that were created
- Click delete
- Enter resource group name to confirm deletion

**Conclusion**
With a secure shell (SSH) key pair, you can create virtual machines (VMs) in Azure that use SSH keys for authentication. This article elaborates on how to quickly generate and use an SSH public-private key file pair for Linux VMs. | tracyee_ |
1,893,293 | How to protect privacy online | How to protect privacy online Have you ever thought about the amount of information you provide... | 0 | 2024-06-19T08:20:00 | https://dev.to/hasan_7c80b135723181e5a09/how-to-protect-privacy-online-1i3m | [How to protect privacy online](https://bababaha.ir)
Have you ever thought about the amount of information you provide online? Sometimes you cannot avoid providing your personal information, such as when you make an online purchase. However, for those times when it's not necessary, it's important to think about what you're sharing.
Online privacy should not be a cause for alarm. Here's what to be aware of so you can practice good online habits. In this way, you can confidently enjoy the benefits that the Internet has to offer.
The security rules that we follow in real life should also be applied in cyber space. Just like locking your door when you leave your home, it's important to protect your personal information online. Here are some steps you can take to help make this happen
1. Passwords: Make them strong and secure
Weak cyberspace passwords like "Password1" and "12345" take less than a second for hackers to guess. If you want to protect your personal information, start by creating strong and secure passwords for all of your online accounts. The longer the password, the harder it is for others to guess.
The best type of password is a combination of numbers, letters and symbols. However, these passwords are also hard to remember. To make things easier, create a crossword for something like a memorable trip or event by using the first letters of the phrase and adding symbols and numbers.
For example, suppose you went to Paris in 2016, you could create a password like: Iw2Pi16! This is much harder to guess. Another option is to use a Password Manager to remember all of your passwords so that you only remember one master password.
2. Don't share more information than you need to
Cyberspace, especially social networks like Instagram, are a great way to connect with your community, friends and family. To help protect your privacy online, set your profile to private and only accept friend requests from people you know in real life.
It's also a good idea to be mindful of what you post on Facebook. For example, avoid sharing personal information such as your date of birth and address, or pictures of your travels while you are on vacation.
Avoid using Instagram (or any other social media account) to log into other websites. When you use your Facebook details to log in to other websites, it allows Facebook to share your data with them too, so instead, enter your email address and password to create an account. Use new ones.
3. Check your apps' permissions
Some apps ask for unnecessary permissions unrelated to their functionality. For example, a weather app doesn't need access to your contact list or microphone to check the weather.
You can review the permissions you've given to apps on your smartphone in the Settings menu. Scroll down to see a list of all the apps on your phone, then tap each app to see what you've given permission for and adjust accordingly.
Change permissions in apps to "when using the app" instead of "always". Also, if you no longer use an app, it's best to delete it. Learn more about managing and uninstalling apps.
4. Pay for purchases online and securely
The safest way to pay online in cyberspace is through your trusted bank or credit card portals. Banking gateways provide an additional level of protection as they allow you to pay without having to share your bank or credit card details with the online merchant.
When you use the banking portal to pay something, the online merchant will automatically redirect you to the bank's site, where you can log in and confirm the payment. PayPal will then quickly redirect you to the online seller to receive your order confirmation details.
Before buying anything online, make sure the website is legit and safe. Before checkout, always look for “https://” (“s” stands for secure) and a closed lock in the address bar.
It is also better to stick to famous websites that have a good reputation. If you're unsure about a particular website, check online reviews and look for things like the number of reviews (the more the better) and how recent they are.
5. Be aware of phishing scams
Phishing scams trick people into handing over personal information by pretending to be an organization they know, such as a bank, Internet service provider, or government agency. So read text messages and emails carefully before clicking on any link. | hasan_7c80b135723181e5a09 | |
1,893,902 | Microservices Architecture | In this blog, we'll learn about microservices architecture. It is a software development methodology... | 0 | 2024-06-19T18:36:15 | https://dev.to/musabaku/microservices-architecture-3dn0 | In this blog, we'll learn about microservices architecture. It is a software development methodology where a product is composed of various independent services, each managing its own specific functionality.

It differs from the traditional monolithic architecture. In a MA, all business functionalities are written within a single codebase. On the other hand, microservices break down a product into various independent services, where each component handles its specific functionalities
Now we will cover the main concepts behind microservices.

- Firstly, it has individual components which operate independently. They can be developed and deployed individually
- Secondly these services are robust and handle failure very well. If one service fails, it can temporarily interrupt requests to that service without bringing down the entire system.
- Thirdly, with a simple routing process, it ensures seamless communication and interaction between individual components
- Also, it has decentralized operations, where each services functions independently
- Lastly, due to its valuable features this type of architecture is widely adopted in modern businesses
**Finally, we will go through pros and cons of microservices
**

**Pros: **
- Scalability: Microservices allow each service to be scaled independently based on demand. Using this we can optimize resource usage and improve overall system scalability.
- Flexibility in Technology: Different services within a microservices architecture can use different technologies, and frameworks. This allows teams to choose the best tools for each specific service.
- Maintainability: With individual services codebases are easier to understand, develop, and maintain.
- Organizational Alignment: Moreover, this aligns well with agile and DevOps practices. It fosters autonomy and rapid iteration.
**Cons:
**
- Complexity: With increasing number of services it introduces operational complexity.
- Testing Complexity: Each service in this model requires thorough testing individually and integration testing across services.
- Costs: Finally, this architecture leads to an increased infrastructure and operational costs.
| musabaku | |
1,893,899 | ** La Danza de los Frameworks: React vs Angular **🐉🔥👑 | Hola Chiquis! 👋🏻 En el apasionante mundo del desarrollo web, donde las ideas se transforman en... | 0 | 2024-06-19T18:29:26 | https://dev.to/orlidev/-la-danza-de-los-frameworks-react-vs-angular--4o85 | webdev, react, angular, tutorial | Hola Chiquis! 👋🏻 En el apasionante mundo del desarrollo web, donde las ideas se transforman en realidades digitales, se libra una batalla por el dominio del front-end: la lucha entre React y Angular, dos titanes que buscan conquistar el corazón de los desarrolladores.⚔️

Al igual que en la serie "La Casa del Dragón: Fuego y Sangre", donde las dinastías Targaryen y Hightower se enfrentan por el Trono de Hierro, React y Angular representan dos filosofías distintas en la creación de interfaces web, cada una con sus fortalezas y debilidades.
En este post, nos adentraremos en este apasionante universo para descubrir quién ganará la batalla por el Trono de Hierro del front-end: ¿React, los Dragones Negros de la Innovación, o Angular, los Verdes Hightower de la Estructura?
En el vasto mundo de las tecnologías web, dos gigantes emergen como los protagonistas indiscutibles: React y Angular. Cada uno con sus propias fortalezas, debilidades y seguidores apasionados, estos frameworks frontend se encuentran en constante lucha por dominar el reino digital, similar a la lucha por el Trono de Hierro en "La Casa del Dragón" y "Fuego y Sangre". En esta analogía, React sería el equivalente a los Negros y Rayneris Targaryen, mientras que Angular representaría a los Verdes Aegon II y los Hightower.
La "Danza de los Dragones" en "Fuego y Sangre" se asemeja a la rivalidad entre React y Angular. Ambos luchan por el dominio en el mundo de las aplicaciones web. Los desarrolladores debaten sobre cuál es mejor, pero al final, la elección depende de las necesidades específicas del proyecto.🛡️

¡Que comience la batalla de los frameworks!
La batalla por el trono de hierro del front-end 🐉🔥
En el reino de Westeros, donde las dinastías se enfrentan por el dominio del Trono de Hierro, también se libra una batalla en el mundo del desarrollo web: la lucha entre React y Angular, dos frameworks que buscan conquistar el corazón de los desarrolladores.
Al igual que las casas Targaryen y Hightower en "La Casa del Dragón: Fuego y Sangre", React y Angular representan dos filosofías distintas en el desarrollo front-end, cada una con sus fortalezas y debilidades.
Imagina un escenario donde los personajes Rainerys y Aegon II, 🤴🏼 con sus cabellos rubios ondeando al viento, se enfrentan en una batalla épica por el Trono de Hierro. Pero en lugar de espadas y dragones, sus armas son React y Angular, los poderosos frameworks de desarrollo web.
Los dos guerreros se encuentran en un campo de batalla virtual, rodeados de computadoras y laptops. 🏰 Rainerys, con su elegante espada React, lucha con agilidad y gracia, creando componentes reutilizables y optimizando el rendimiento. Aegon II, armado con la imponente hacha Angular, defiende su territorio, construyendo aplicaciones robustas y escalables.
Los códigos vuelan como chispas en una fragua. Rainerys despliega su componente con un `npm start`, mientras Aegon II compila su aplicación con un `ng build`. Ambos se esfuerzan por ganar la ventaja, optimizando el tiempo de carga y la experiencia del usuario.🐲
En el corazón de la batalla, el Trono de Hierro espera. ¿Quién prevalecerá? ¿React o Angular? Solo el código decidirá. Y así, en esta contienda digital, se libra una guerra por la supremacía en el mundo del desarrollo web.
React: Los Negros de la Innovación 💻🔥
React, al igual que los Targaryen, se caracteriza por su innovación y flexibilidad. Es una biblioteca ligera y modular, que permite a los desarrolladores crear interfaces de usuario dinámicas y escalables. Su enfoque en componentes reutilizables y su arquitectura basada en el DOM virtual la convierten en una herramienta poderosa para construir aplicaciones web modernas.
Creado por Facebook, es conocido por su simplicidad y flexibilidad. Es como un dragón joven y audaz, capaz de adaptarse rápidamente a nuevas situaciones gracias a su componente basado en JavaScript. Su filosofía de "componentes reutilizables" permite a los desarrolladores construir interfaces de usuario complejas con menos código, similar a cómo los Targaryen utilizaban sus dragones para conquistar territorios sin esfuerzo considerable.
Ejemplo:
```
jsx
import React from 'react';
import ReactDOM from 'react-dom';
function App() {
return (
<div>
<h1>¡Hola, mundo!</h1>
<p>Este es un ejemplo de React.</p>
</div>
);
}
ReactDOM.render(<App />, document.getElementById('root'));
```
React también es conocido por su comunidad activa y vibrante, siempre dispuesta a compartir consejos y mejores prácticas, al igual que los Targaryen compartían secretos sobre cría de dragones entre ellos.

React es conocido por su poder y habilidad para adaptarse a cualquier situación. Su flexibilidad y modularidad son similares a las habilidades de los Targaryen, capaces de manipular el fuego y controlar dragones, elementos que les otorgan una ventaja significativa.
Ventajas 👱🏻♀️
- Componentes Reutilizables: React permite a los desarrolladores crear componentes reutilizables, ahorrando tiempo y esfuerzo.
- Comunidad Activa y abundante documentación: La comunidad de React es tan vibrante como el Consejo de los Targaryens, siempre dispuesta a compartir conocimientos y experiencias. React tiene una gran comunidad de desarrolladores que crean y compartan componentes y herramientas, lo que facilita el desarrollo de aplicaciones.
- Virtual DOM para actualizaciones eficientes: Mejora el rendimiento al minimizar las actualizaciones reales del DOM.
- Reactividad: React utiliza un enfoque de componentes funcionales que se actualizan automáticamente cuando los datos cambian, lo que facilita la creación de aplicaciones dinámicas.
- Flexibilidad: React es un framework muy versátil que se puede utilizar para crear aplicaciones de cualquier tamaño o complejidad. Amplia gama de bibliotecas y complementos.
```
// Ejemplo de componente React
function App() {
return (
<div>
<h1>¡Bienvenidos a Rocadragón!</h1>
<p>React es poderoso y versátil.</p>
</div>
);
}
```
Desventajas ❤️🔥
- Curva de Aprendizaje: Para aquellos no familiarizados con JavaScript moderno, React puede parecer un dragón temible de aspecto oscuro.
- Mayor libertad puede llevar a inconsistencias en el código.
Ejemplo:
```
jsx
import React from 'react';
function TargaryenBanner({ houseName }) {
return <h1>Casa {houseName}: Fuego y Sangre</h1>;
}
export default function App() {
return <TargaryenBanner houseName="Targaryen" />;
}
```
Angular: Los Verdes de la Estructura 🧒🏼
Angular, por otro lado, se asemeja a la casa Hightower en su énfasis en la estructura y el orden. Es un framework completo que proporciona una gama completa de herramientas y servicios para el desarrollo web. Su arquitectura basada en componentes y su sistema de inyección de dependencias lo convierten en una opción sólida para proyectos de gran escala y aplicaciones empresariales.

Angular es un marco de trabajo más estructurado y robusto, creado por Google. Es como un caballo de guerra fuerte y confiable, preparado para enfrentar desafíos grandes y complejos. Angular utiliza TypeScript, proporcionando un fuerte sistema de tipado estático que ayuda a prevenir errores en tiempo de compilación, similar a cómo los Hightower eran conocidos por su estrategia militar sólida y bien planificada.
```
// Ejemplo básico de un componente en Angular
import { Component } from '@angular/core';
@Component({
selector: 'app-saludo',
template: `<h1>Hola, {{ nombre }}</h1>`
})
export class SaludoComponent {
nombre = 'Mundo';
}
```
Angular también ofrece un ecosistema completo, incluyendo herramientas para pruebas unitarias y end-to-end, así como soporte para desarrollo en equipo a través de su sistema de módulos, similar a cómo los Hightower mantenían fuertes alianzas y lealtades dentro de su casa.
Respaldado por Google, se asemeja a los Verdes encabezados por Alicent Hightower y su hijo Aegon II. Este framework es conocido por su robustez y su enfoque integral, proporcionando una solución todo-en-uno para el desarrollo frontend.
Ventajas 🔥
- Arquitectura MVC: Promueve una separación clara de responsabilidades en el código. Angular ofrece una estructura clara y organizada. Estructura sólida y orientada a componentes. Utiliza un enfoque de componentes que se organizan en una estructura jerárquica, lo que facilita la creación de aplicaciones complejas y escalables.
- TypeScript: Aumenta la fiabilidad y mantenibilidad del código. Utilizar TypeScript con Angular proporciona seguridad de tipo y mejora la calidad del código, asegurando que cada pieza del rompecabezas encaje perfectamente.
- Inyección de Dependencias: Facilita la gestión y prueba de componentes y enlace bidireccional. Angular tiene una capacidad de inyección de dependencias que facilita la creación de aplicaciones que requieren múltiples componentes y servicios.
- Herramientas integradas (CLI, testing, etc.).
- Seguridad: Angular tiene una gran cantidad de seguridad integrada, lo que facilita la creación de aplicaciones seguras.
Ejemplo:
```
typescript
import { Component } from '@angular/core';
@Component({
selector: 'app-targaryen-banner',
template: `<h1>Casa {{ houseName }}: Fuego y Sangre</h1>`
})
export class TargaryenBannerComponent {
houseName: string = 'Targaryen';
}
```

Desventajas 👑
- Curva de Aprendizaje: La curva de aprendizaje de Angular puede ser más pronunciada debido a su amplio conjunto de características y conceptos nuevos.
- Mayor complejidad y boilerplate.
- Menos flexibilidad en algunos casos.
```
typescript
import { Component } from '@angular/core';
@Component({
selector: 'app-root',
template: '<h1>¡Hola, mundo!</h1>
<p>Este es un ejemplo de Angular.</p>'
})
export class AppComponent {
title = 'app';
}
```
Comparativa: La Lucha por el Trono de Hierro ⚔️
Ambos frameworks tienen sus méritos y desventajas, pero al final, la elección depende del desarrollador o del equipo de desarrollo. React, con su flexibilidad y modularidad, es como un dragón libre, capaz de adaptarse a cualquier terreno. Angular, con su estructura sólida y soporte para TypeScript, es como un castillo fuerte, resistiendo el paso del tiempo.
Similar a cómo los Targaryens lucharon por el Trono de Hierro, React y Angular representan dos enfoques diferentes hacia el desarrollo web. Mientras que algunos prefieren la libertad y la flexibilidad de React, otros valoran la estructura y la robustez de Angular. En última instancia, la elección entre React y Angular dependerá de tus necesidades específicas, tu equipo y tu preferencia personal.
Al igual que en "La Casa del Dragón", donde la historia se centra en la lucha por el poder y la supervivencia, el mundo del desarrollo web también está lleno de decisiones cruciales y desafíos. Sin embargo, al igual que los personajes de la serie, podemos aprender mucho de ambos frameworks y utilizar nuestras habilidades y estrategias para navegar por estos desafíos y construir aplicaciones web impresionantes.
La Danza de los Dragones: Comparativa de Personajes 🗡️
+ Rhaenyra Targaryen (React) vs. Aegon II (Angular) Rhaenyra, como React, representa la innovación y la adaptabilidad, luchando por su derecho legítimo con una comunidad leal a su lado. Aegon II, similar a Angular, se apoya en una estructura sólida y en el apoyo de grandes instituciones.
+ Daemon Targaryen (Next.js) vs. Otto Hightower (Angular CLI) Daemon, con su naturaleza audaz y rebelde, puede ser visto como Next.js, una extensión poderosa de React. Otto Hightower, con su enfoque estratégico y meticuloso, encarna Angular CLI, proporcionando comandos y herramientas esenciales para el desarrollo eficiente con Angular.
+
La Decisión Final 👸🏼
Al igual que en la guerra civil Targaryen, la elección entre React y Angular depende de las necesidades y preferencias específicas del desarrollador. React ofrece flexibilidad y una comunidad vibrante, ideal para proyectos que requieren componentes dinámicos y reusables. Angular, con su enfoque integral y sólido, es perfecto para aplicaciones empresariales que necesitan una arquitectura robusta y bien definida.
¿Quién ganará la guerra por el trono del desarrollo web? Eso, al igual que en "La Casa del Dragón", dependerá de tus preferencias y del proyecto que tengas en mente. ¡Elige sabiamente y que el fuego de los dragones te guíe en tu camino de desarrollo!

Al igual que en la serie "La Casa del Dragón", no hay una respuesta definitiva a la pregunta de quién ganará la batalla entre React y Angular. La elección del framework adecuado depende de las necesidades específicas del proyecto y las preferencias del desarrollador.
React es una excelente opción para:
+ Aplicaciones web pequeñas y medianas
+ Desarrolladores que buscan flexibilidad y libertad creativa
+ Proyectos que requieren un desarrollo rápido y ágil
Angular es una mejor opción para:
+ Aplicaciones web grandes y complejas
+ Equipos de desarrollo grandes que necesitan una estructura y organización estandarizadas
+ Proyectos que requieren estabilidad y seguridad a largo plazo
En resumen, la batalla entre React y Angular es una lucha sin un vencedor claro. Al igual que en la serie "La Casa del Dragón", ambas filosofías tienen sus méritos y pueden llevar al éxito en el desarrollo web.
Conclusión: ¿Quién gana el Trono? 🐉
Como en la lucha por el Trono de Hierro, no hay un ganador absoluto. React y Angular tienen sus propias fortalezas y debilidades. La decisión final depende del desarrollador, quien debe elegir la herramienta que mejor se adapte a sus necesidades y preferencias, tal como los señores de Westeros deben elegir a su rey.
¿Y tú, a qué bando te unes? ¿Eres un Dragón Negro de la innovación o un Verde Hightower de la estructura? ⚔️
Nota: no nos olvidamos de VueJs y Svelte que también tienen sus seguidores, pero eso es tema para otro post.✍🏻
🚀 ¿Te ha gustado? Comparte tu opinión.
Artículo completo, visita: https://lnkd.in/ewtCN2Mn
https://lnkd.in/eAjM_Smy 👩💻 https://lnkd.in/eKvu-BHe
https://dev.to/orlidev ¡No te lo pierdas!
Referencias:
Imágenes creadas con: Copilot (microsoft.com)
##PorUnMillonDeAmigos #LinkedIn #Hiring #DesarrolloDeSoftware #Programacion #Networking #Tecnologia #Empleo #React #Angular

 | orlidev |
1,893,896 | Building a simple Full-Stack Restaurant Finder App with React, Redux, Node.js, and Google Places API (Part 2) | Part 2: Building a Dynamic Frontend with React, Redux Toolkit, and Google Maps... | 0 | 2024-06-19T18:27:01 | https://dev.to/vb_nair/building-a-simple-full-stack-restaurant-finder-app-with-react-redux-nodejs-and-google-places-api-part-2-557a | react, typescript, redux, googleplaces |
**Part 2: Building a Dynamic Frontend with React, Redux Toolkit, and Google Maps API**
**Introduction:**
Welcome to the second part of our tutorial on building a full-stack "Restaurant Finder" application.
In this blog post, we will focus on developing the frontend components using React and Redux Toolkit. Our frontend will provide users with an intuitive interface to explore nearby restaurants, view detailed information such as ratings and photos, and seamlessly navigate to locations using Google Maps integration.
Throughout this tutorial, we'll cover essential frontend development concepts including
- state management with Redux Toolkit,
- integrating Google Maps for interactive map displays, and
- creating reusable components for a consistent user experience.
By the end of this guide, you'll have a complete understanding of how to implement a simple and responsive frontend for our "Restaurant Finder" application.
**Step 1: Setting Up React App**
- Initialize Project:
-> Create a new directory for your client and navigate into it.
Run `npx create-react-app client --template typescript` to create a new React app with TypeScript.
- Install Dependencies:
Run `npm install redux react-redux @reduxjs/toolkit axios tailwindcss daisyui @vis.gl/react-google-maps`
- Configure Tailwind CSS:
-> Create a tailwind.config.js file and configure Tailwind CSS:
```
// tailwind.config.js
module.exports = {
purge: ['./src/**/*.{js,jsx,ts,tsx}', './public/index.html'],
darkMode: false, // or 'media' or 'class'
theme: {
extend: {},
},
variants: {
extend: {},
},
plugins: [require('daisyui')],
};
```
- Setup Tailwind in CSS:
-> Add the following to src/index.css:
```
@tailwind base;
@tailwind components;
@tailwind utilities;
```
**Step 2: Creating Components**
- Header Component:
-> Create a Header.tsx file in src/components and add the following:
```
import React from "react";
const Header: React.FC = () => {
return (
<header className=" mx-1 my-5 p-4 flex justify-between items-center">
<div className="text-sm font-semibold bg-neutral-700 rounded-md p-1">
<span className="text-white mr-1">Restaurant</span>
<span className=" w-12 h-8 rounded bg-neutral-100 px-1 text-neutral-700 font-bold">
Finder
</span>
</div>
<p className=" text-sm font-semibold mx-2 px-1">
"Good food is the foundation of genuine happiness !"
</p>
</header>
);
};
export default Header;
```
- Footer Component:
-> Create a Footer.tsx file in src/components and add the following:
```
import React from "react";
const Footer: React.FC = () => {
const year = new Date().getFullYear();
return (
<footer className=" text-neutral-300 p-1 m-0 flex justify-center items-center font-thin text-xs">
<p className="">
<span>@ {year} © Your Name.</span>
</p>
</footer>
);
};
export default Footer;
```
- PlaceAutocomplete Component:
-> Create a PlaceAutocomplete.tsx file in src/components and add the following:
```
import React from "react";
import { useGoogleAutocomplete } from "@vis.gl/react-google-maps";
import { useDispatch } from "react-redux";
import { fetchRestaurants } from "../redux/restaurantSlice";
const PlaceAutocomplete: React.FC = () => {
const dispatch = useDispatch();
const {
value,
suggestions: { status, data },
setValue,
clearSuggestions,
} = useGoogleAutocomplete({
apiKey: process.env.REACT_APP_GOOGLE_PLACES_API_KEY,
debounce: 300,
minLength: 3,
});
const handleSelect = ({ description }) => {
setValue(description, false);
clearSuggestions();
const geocoder = new window.google.maps.Geocoder();
geocoder.geocode({ address: description }, (results, status) => {
if (status === "OK") {
const { lat, lng } = results[0].geometry.location;
dispatch(fetchRestaurants({ lat: lat(), lng: lng() }));
}
});
};
return (
<div className="relative">
<input
value={value}
onChange={(e) => setValue(e.target.value)}
placeholder="Enter a place"
className="w-full px-4 py-2 border rounded"
/>
{status === "OK" && (
<ul className="absolute z-10 w-full bg-white border rounded shadow-md mt-1">
{data.map((suggestion) => (
<li
key={suggestion.place_id}
onClick={() => handleSelect(suggestion)}
className="px-4 py-2 cursor-pointer hover:bg-gray-200"
>
{suggestion.description}
</li>
))}
</ul>
)}
</div>
);
};
export default PlaceAutocomplete;
```
- RestaurantList Component:
-> Create a RestaurantList.tsx file in src/components and add the following:
```
import React from "react";
import { useSelector } from "react-redux";
import { RootState } from "../redux/store";
import RestaurantCard from "./RestaurantCard";
const RestaurantList: React.FC = () => {
const restaurants = useSelector(
(state: RootState) => state.restaurants.restaurants
);
return (
<div className="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 gap-4 p-4">
{restaurants.map((restaurant) => (
<RestaurantCard key={restaurant.place_id} restaurant={restaurant} />
))}
</div>
);
};
export default RestaurantList;
```
- RestaurantCard Component:
-> Create a RestaurantCard.tsx file in src/components and add the following:
```
import React from "react";
import { Restaurant } from "../redux/restaurantSlice";
interface RestaurantCardProps {
restaurant: Restaurant;
}
const RestaurantCard: React.FC<RestaurantCardProps> = ({ restaurant }) => {
return (
<div className="bg-white p-4 rounded shadow">
{restaurant.photoUrl && (
<img
src={restaurant.photoUrl}
alt={restaurant.name}
className="w-full h-48 object-cover rounded mb-4"
/>
)}
<h3 className="text-lg font-semibold mb-2">{restaurant.name}</h3>
<p className="text-sm text-gray-600 mb-2">{restaurant.vicinity}</p>
<p className="text-sm text-gray-600 mb-2">
Rating: {restaurant.rating} ({restaurant.user_ratings_total} reviews)
</p>
<p className="text-sm text-gray-600 mb-2">
Distance: {restaurant.distance.toFixed(2)} km
</p>
<a
href={`https://www.google.com/maps/place/?q=place_id:${restaurant.place_id}`}
target="_blank"
rel="noopener noreferrer"
className="text-blue-500 hover:underline"
>
View on Google Maps
</a>
</div>
);
};
export default RestaurantCard;
```
**Step 3: Setting Up Redux**
- Create Redux Store:
-> Create a redux/store.ts file and add the following:
```
import { configureStore } from "@reduxjs/toolkit";
import restaurantReducer from "./restaurantSlice";
const store = configureStore({
reducer: {
restaurants: restaurantReducer,
},
});
export type RootState = ReturnType<typeof store.getState>;
export type AppDispatch = typeof store.dispatch;
export default store;
```
- src/redux/hooks.ts
```
import { TypedUseSelectorHook, useDispatch, useSelector } from 'react-redux';
import type { RootState, AppDispatch } from './store';
export const useAppDispatch: () => AppDispatch = useDispatch;
export const useAppSelector: TypedUseSelectorHook<RootState> = useSelector;
```
- Create Restaurant Slice:
-> Create a redux/restaurantSlice.ts file and add the following:
```
import { createSlice, createAsyncThunk } from "@reduxjs/toolkit";
import axios from "axios";
export interface Restaurant {
name: string;
vicinity: string;
rating: number;
user_ratings_total: number;
distance: number;
photoUrl: string | null;
place_id: string;
}
interface RestaurantState {
restaurants: Restaurant[];
status: "idle" | "loading" | "succeeded" | "failed";
error: string | null;
}
const initialState: RestaurantState = {
restaurants: [],
status: "idle",
error: null,
};
export const fetchRestaurants = createAsyncThunk(
"restaurants/fetchRestaurants",
async ({ lat, lng }: { lat: number; lng: number }) => {
const response = await axios.get("http://localhost:3001/api/places", {
params: { lat, lng },
});
return response.data;
}
);
const restaurantSlice = createSlice({
name: "restaurants",
initialState,
reducers: {},
extraReducers: (builder) => {
builder
.addCase(fetchRestaurants.pending, (state) => {
state.status = "loading";
})
.addCase(fetchRestaurants.fulfilled, (state, action) => {
state.status = "succeeded";
state.restaurants = action.payload;
})
.addCase(fetchRestaurants.rejected, (state, action) => {
state.status = "failed";
state.error = action.error.message || null;
});
},
});
export default restaurantSlice.reducer;
```
- Configure Store Provider:
-> Wrap your app with the Redux provider in src/index.tsx:
```
import React from "react";
import ReactDOM from "react-dom";
import "./index.css";
import App from "./App";
import reportWebVitals from "./reportWebVitals";
import { Provider } from "react-redux";
import store from "./redux/store";
ReactDOM.render(
<React.StrictMode>
<Provider store={store}>
<App />
</Provider>
</React.StrictMode>,
document.getElementById("root")
);
reportWebVitals();
```
**Step 4: Assembling the App**
- Create App Component:
-> Update src/App.tsx to include all components:
```
import React from "react";
import Header from "./components/Header";
import Footer from "./components/Footer";
import PlaceAutocomplete from "./components/PlaceAutocomplete";
import RestaurantList from "./components/RestaurantList";
const App: React.FC = () => {
return (
<div className="flex flex-col min-h-screen">
<Header />
<div className="flex-grow flex flex-col items-center p-4">
<PlaceAutocomplete />
<RestaurantList />
</div>
<Footer />
</div>
);
};
export default App;
```
**Run the Frontend:**
-> Navigate to the client directory and run npm start to start the React app.
-> Open a browser and navigate to http://localhost:3000 to see the application in action.
**Step 5: Testing the App**
- Functionality Testing:
-> Enter a location in the search bar and verify the list of restaurants updates accordingly.
-> Check that the restaurant cards display all relevant information and links to Google Maps.
Code Quality:
Ensure your code follows best practices and is well-structured.
**Project Structure**
client
├── src/
│ ├── components/
│ │ ├── Footer.tsx
│ │ ├── Header.tsx
│ │ ├── PlaceAutocomplete.tsx
│ │ ├── RestaurantItem.tsx
│ │ ├── RestaurantList.tsx
│ ├── images/
│ │ ├── def-restaurant.jpg
│ │ ├── menuplate.jpg
│ ├── redux/
│ │ ├── hooks.ts
│ │ ├── store.ts
│ │ ├── restaurantsSlice.ts
│ ├── App.tsx
│ ├── index.tsx
│ ├── .env
│ ├── package.json
server
├── server.js
├── .env
**_Great job! _**
You have successfully built a user-friendly frontend for the "Restaurant Finder" app. Your React application is now equipped with features like location-based restaurant search, and it integrates seamlessly with the backend you built earlier.
With both the backend and frontend completed, you have a full-stack application ready for deployment. Feel free to explore further enhancements, such as adding more filters or improving the UI.
_Happy coding!_
{% cta https://dev.to/vb_nair/building-a-simple-full-stack-restaurant-finder-app-with-react-redux-nodejs-and-google-places-api-1acd %} Part 1: Building the Backend {% endcta %}
{% cta https://github.com/vbnair/restaurantFinder.git %} GitHub Repo {% endcta %}
React components and hooks for the Google Maps JavaScript API.
{% cta https://visgl.github.io/react-google-maps/ %} React components and hooks for the Google Maps JavaScript API. {% endcta %}

📚 Explore and Learn!
This project is a gateway to exploring n learning, and planning to add on further iterations to enhance and expand. created it for exploration and showcase the integration of various technologies. Dive in, experiment, and enjoy the journey! 🌟 | vb_nair |
1,893,898 | The truth behind revenue churn.. | While this is primarly a developer community, I only joined the developer side some time ago. Here is... | 0 | 2024-06-19T18:26:20 | https://dev.to/dutchenglishman/the-truth-behind-revenue-churn-iem | saas, customer, success | While this is primarly a developer community, I only joined the developer side some time ago. Here is some backstory..
As an experienced consultant, project manager and now head of customer success, the OPS side of applications has one big final boss: Churn.
There are numerous of calculators out there which show that a too high level revenue churn can kill MRR and potentially a business
## How to tackle churn
To be honest, there is never a single best option. Most important is talking to customers. But as Henry Ford once said, if I asked people I would not have made a car, I would have made faster horse.
So what is more important than talking to customers? Understanding customers.
What is it that drives the value behind your SaaS (or app) and what brings the value to your customers. Or in case of high levels of churn, what are they missing?
As head of Customer Success primarily, the success part comes from a single factor: Perceived value - Expected value = Success. If the expected value is greater than the perceived value, there is no success. But if the perceived value exceeds the expected value, your customers experience success.
## So how do you know
As mentioned before, if talking to customers doesn't always work, how do you get the most important information: their expected value.
Simply through data.
If you have customers for your SaaS, you will have customer data in:
* Your own application
* CRM
* Service Desk
* Payment processor
* Insights tool, such as Mixpanel
Combine this data, and you get a proper overview of your customer behaviour. What features are they using, how many requests are created, when is the next renewal.
As I've experienced first hand the power of customer data, I needed a quicker way to gather all customer data before a meeting.
Since I taught myself how to code, I set out to provide value to other founders. And so, [beatchurn](https://beatchurn.com) was born. A tool to combine all customer data, for easy access.
| dutchenglishman |
1,893,897 | Explore tests for hundreds of public APIs | The thing with age-old and persisting tech problems is that we start accepting them as reality. As... | 0 | 2024-06-19T18:25:41 | https://dev.to/gaw/explore-tests-for-hundreds-of-public-apis-30k3 | The thing with age-old and persisting tech problems is that we start accepting them as reality.
As software builders, we have to pick between our team’s release timelines and the stability of frequent releases.
Customer experience, massive revenue, and brand reputation are all on the line.
Push to production and fix later is a mindset that also harms devs because they often have to drop everything to fix bugs in production.
At KushoAI, we’re building an AI agent for API testing so you never have to write tests manually again. We’ve trained an LLM thoroughly and refined its pattern-matching abilities to ensure that the coverage and accuracy for your backend APIs are the gold standard.
Want to test out this promise? Watch KushoAI in action on the public APIs of popular companies [](https://app.kusho.ai/explore)

Of course, we can do this for all your internal APIs in minutes. All you have to do is sign up here [](https://kusho.ai/)
Glitches that cost money? Infamous bugs that create unprecedented damage? Now devs can catch them as they code.
| gaw | |
1,893,895 | Introducing CHEMAXE: The Innovation in Chemical Trade | Introducing CHEMAXE: The Innovation in Chemical Trade CHEMAXE serves as a centralized platform where... | 0 | 2024-06-19T18:21:41 | https://dev.to/mitali_ramteke_fbebe65fce/introducing-chemaxe-the-innovation-in-chemical-trade-2mgh | chemicaltrading, chemaxe, pythondevelopment, innovation | **Introducing CHEMAXE:** The Innovation in Chemical Trade
CHEMAXE serves as a centralized platform where buyers and sellers engage in real-time trading of chemicals like methanol, toluene, and styrene. Powered by advanced algorithms, CHEMAXE automates the matching process, ensuring seamless transactions and updated market prices through the Last Traded Price (LTP) model.
**Leveraging Firebase for Real-Time Database Management**
One of the core aspects of CHEMAXE’s infrastructure is its real-time trading database, managed efficiently using Firebase. As a Python developer, my role involves:
**Database Integration:** Implementing Firebase to ensure real-time updates of trade data, enabling instantaneous matching of buyers and sellers.
**API Development:** Creating robust APIs to support various platform functionalities, ensuring smooth communication between frontend interfaces and backend systems.
Testing and Optimization: Conducting thorough testing to validate functionalities and optimize performance, guaranteeing a seamless user experience.
Ensuring Data Security and User Verification
Security and trust are critical in chemical trading. With Firebase’s secure infrastructure, we manage user authentication and data security with utmost diligence. Features like:
**User Authentication:** Implementing Firebase Authentication to safeguard user accounts and ensure secure access to platform features.
KYC Verification: Facilitating a streamlined Know Your Customer (KYC) process, verifying user identities and business credentials to maintain platform integrity.
Enhancing User Experience with Real-Time Insights
Firebase’s real-time capabilities empower CHEMAXE users with:
**Market Insights:** Providing real-time updates on market prices and trade volumes, enabling informed decision-making.
Transaction Monitoring: Allowing users to track their transactions in real-time, from order placement to trade completion.
Future Directions and Continuous Improvement
As we continue to innovate, CHEMAXE is committed to enhancing its features:
**AI Integration:** Exploring AI-driven solutions to further optimize trade matching algorithms and enhance user experience.
Enhanced Analytics: Leveraging Firebase Analytics to gain deeper insights into user behavior and platform performance.
Join the CHEMAXE Community
Whether you’re a chemical trader, buyer, or seller, CHEMAXE offers a cutting-edge platform designed to simplify and optimize your trading experience. Explore the possibilities with us and join the CHEMAXE community today.
 | mitali_ramteke_fbebe65fce |
1,893,893 | How (and why) to implement streaming in your LLM application | There’s enough talk on how LLMs are changing software development. We’re going to keep it practical.... | 0 | 2024-06-19T18:20:14 | https://dev.to/gaw/how-and-why-to-implement-streaming-in-your-llm-application-24g8 | There’s enough talk on how LLMs are changing software development. We’re going to keep it practical. Using LLMs in your applications produces specific, never-seen-before challenges, and trade-offs in product building. One such problem is how to manage the latency of text generation using LLMs.
Suppose you’re building an application that writes performance reviews for engineers. An average performance review is 300-400 words long according to this discussion. Using GPT-4, generating one review will take approximately 45 to 50 seconds. Now it’s not hard to imagine what will happen if you show a loading indicator for 50 seconds to a user. Only the most patient users will stick around to see the result – doesn’t matter how good the result will be.

This is drastically different from traditional applications where any data shown on UI is retrieved from a database that hardly takes 100s of milliseconds to a few seconds in the worst case. In comparison, 50 seconds is an eternity and will make the app feel like it’s something from the dial-up internet era. Latency is directly related to how good/bad the UX is and that in turn decides the bottom line.According to Amazon, every 100ms of added latency resulted in 1% reduction in sales.
So how do you solve this? Most LLMs have an option to stream the response token by token. Thus, you can start showing results to users immediately instead of making them wait until the whole generation is completed.
There are 3 factors to consider while implementing streaming:
- How do you receive the streaming response from LLMs
- How do you deliver a streaming response to client-side
- How do you receive the streaming response on the client side
**Receiving streaming response from LLMs:**
This part is pretty straightforward. If you’re using an API like OpenAI or Anthropic, they use server-sent events (SSE) for this. Here’s an example in Python:
```
import json
import sseclient
import requests
API_KEY = "XXXX" # your api key goes here
MODEL = "gpt-4"
MAX_TOKENS = 500
headers = {"Accept": "text/event-stream", "Authorization": "Bearer " + API_KEY}
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Write a sample 400 word performance review for John Doe who is a senior software engineer."},
]
body = {
"model": MODEL,
"messages": messages,
"max_tokens": MAX_TOKENS,
"temperature": 0.9,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
"stream": True,
}
request = requests.post(
"https://api.openai.com/v1/chat/completions",
stream=True,
headers=headers,
json=body,
)
client = sseclient.SSEClient(request)
for event in client.events():
# print(event.data)
if event.data != "[DONE]":
event_data = json.loads(event.data)
# use yield here to create a generator if you implement this as a utils function
print(event_data["choices"][0]["delta"].get("content", ""))
else:
```
**Delivering a streaming response to client-side:**
Once you start receiving tokens from your LLM and you’re done with post-processing, you need to stream these to the client side. Here you have multiple options depending on how you want to trade off between realtime-ness and complexity - Polling, SSE, and WebSockets.
**Polling**
This approach can be considered as faux-streaming. This will give users the look and feel of streaming without actually having to stream the response from the backend. This is also the simplest of the 3 options for both backend and frontend implementation. Simply put, in polling, the client periodically checks with the backend for new tokens and displays them on the UI. There are 2 ways to implement this.
Long polling
In long polling, the client sends a request to the backend and the connection is kept open until some tokens are generated which are returned to the client. After receiving the tokens, the client sends another request, and so on.
There will be some timeout for these calls so that they don't go on indefinitely, but normally the timeout is very high (20-30s).
To stop the polling, send a special stop event from the backend once the generation is completed so that the client knows that polling needs to stop.
Short polling
This is the most straightforward approach and works pretty well when you don’t need real-time updates. In this approach, the client sends requests to the server periodically asking for new tokens.
Some additional things to consider while implementing polling:
You’ll need some way to keep track of how many tokens have already been sent so that the backend can send only the newly generated tokens since the last call something like offset in MySQL queries.
You need to send a special stop response/event so that the client knows that generation is completed and stop the polling.
Server-Sent Events (SSE)
Another approach for streaming generated tokens to UI is by sending server-sent events (SSE). This is the same approach as what we discussed above for receiving streaming responses from LLM APIs but this time you’ll be implementing the server-side component for this. Here’s how you do it in Flask:
```
@app.post("/generate")
@use_args({
"employee_name": fields.Str(required=True),
"employee_review_points": fields.Str(required=True),
}, location="json")
def generate_performance_review():
employee_name = request.json["employee_name"]
employee_review_points = request.json["employee_review_points"]
def generate():
for tokens in perf_utils.generate_review(employee_name, employee_review_points):
yield f"event:data\ndata:{tokens}\n\n"
yield "[DONE]"
response = Response(generate(), mimetype="text/event-stream")
# Enable the server-sent events extension
response.headers.add("X-Accel-Buffering", "no")
return response
```
You can refer to this blog for more implementation details.
That’s pretty much it! One good thing about this is if you already have a normal API, converting it to SSE can be done with minimal change. Just create a generator that yields the response to be streamed a few tokens at a time, change the mimetype, and you’re done!
Few additional things to consider while implementing this -
You’ll need to set _threaded_ as _True_ in development or else your Flask dev server will get blocked. You can read more about it in this discussion.
```
# Start app in dev
if __name__ == "__main__":
flask_app.run(host='0.0.0.0', port=8080, debug=True, threaded=True)
```
To test server-sent events, you can use curl with --no-buffer flag set like this:
```
curl –no-buffer https://www.example.com/streaming_endpoint
```
**Websockets**
Websockets allow the client and server to maintain a persistent bi-directional communication channel for sending/receiving from both ends. This implementation makes sense when you’re building an application that needs real-time transfer of packets from both the client and server side – like a chat application. But in the context of applications using LLMs, this seems like an overkill because these are most of the time apps for generating some type of text/code that doesn’t need real-time bi-directional communication.
**Receiving streaming response on the client side**
Example using Axios
```
const response = await axios.get('https://stream.example.com', {
headers: {Authorization: `Bearer ${token}`,
responseType: 'stream'
});
const stream = response.data;
stream.on('data', data => {
console.log(data);
});
stream.on('end', () => {
console.log("stream done");
});
```
Example using [Fetch API](url)
```
fetch('/stream')
.then(response => {
const stream = response.body;
const reader = stream.getReader();
const readChunk = () => {
reader.read()
.then(({
value,
done
}) => {
if (done) {
console.log('Stream finished');
return;
}
const chunkString = new TextDecoder().decode(value);
console.log(chunkString);
readChunk();
})
.catch(error => {
console.error(error);
});
};
readChunk();
})
.catch(error => {
console.error(error);
});
```
**Conclusion**
If you’re building an application that uses LLMs and you want your users to not drop off while the generation is happening. Thus, streaming is an essential feature.
Since most LLM applications generate some kind of text/code that is not really realtime, short polling is the simplest approach which works sufficiently well for these kinds of apps, in our opinion.
Know of another way to implement this? Write to us at [hello@kusho.ai](url) | gaw | |
1,893,889 | Should I buy a new PC? | Hello all! I am a complete beginner to the world of programming and I had a few questions concerning... | 0 | 2024-06-19T18:14:53 | https://dev.to/thefoxlion/should-i-buy-a-new-pc-3cem | learning, hardware | Hello all!
I am a complete beginner to the world of programming and I had a few questions concerning my hardware and whether or not it needs an upgrade.
My specs:
Dell Inspiron 3584
128GB with about 11GB free
intel I3-7020U @2.3Ghz
8GB ram
Currently installed:
VSCode
Webstorm
IDEA for Python
JDK 22
If there is any more information I can provide let me know.
| thefoxlion |
1,893,867 | Convert .caf to mp3 by Directory | Read the original conversation shared via ChatGPT You: If possible, could you help me... | 27,781 | 2024-06-19T18:10:51 | https://dev.to/extratone/convert-caf-to-mp3-by-directory-3m07 | audio, ffmpeg, chatgpt | [***Read the original conversation shared via ChatGPT***](https://chatgpt.com/share/3553b9d1-3c28-4a58-9111-50db9ddd5aa4)
---
#### You:
If possible, could you help me construct an ffmpeg command that converts all of the .caf files in the current directory to 320 kbps, constant bitrate mp3s?
#### ChatGPT:
Certainly! To construct an `ffmpeg` command that converts all `.caf` files in the current directory to 320 kbps constant bitrate MP3 files, you can use a shell loop in a terminal. Below is an example of how you can do this on a Unix-like system (such as macOS or Linux):
```sh
#!/bin/bash
for file in *.caf; do
# Extract the base name without the .caf extension
base=$(basename "$file" .caf)
# Convert the .caf file to a .mp3 file with 320 kbps bitrate
ffmpeg -i "$file" -codec:a libmp3lame -b:a 320k "${base}.mp3"
done
```
### Explanation:
1. `#!/bin/bash`: This line is the shebang, which tells the system that this script should be run with the Bash shell.
2. `for file in *.caf; do`: This loop iterates over all files in the current directory with the `.caf` extension.
3. `base=$(basename "$file" .caf)`: This command extracts the base name of the file, removing the `.caf` extension.
4. `ffmpeg -i "$file" -codec:a libmp3lame -b:a 320k "${base}.mp3"`: This command converts the input `.caf` file to an MP3 file with a constant bitrate of 320 kbps using the `libmp3lame` codec.
### Usage:
1. Open your terminal.
2. Navigate to the directory containing your `.caf` files.
3. Save the script to a file, for example, `convert_caf_to_mp3.sh`.
4. Make the script executable by running `chmod +x convert_caf_to_mp3.sh`.
5. Run the script by typing `./convert_caf_to_mp3.sh`.
This script will process all `.caf` files in the directory and convert them to MP3 files with the specified settings. If you encounter any issues or need further customization, feel free to ask! | davidblue |
1,893,886 | How UI/UX Can Influence User Behavior and Drive Engagement | Introduction In the digital landscape, user interface (UI) and user experience (UX) play pivotal... | 0 | 2024-06-19T18:09:42 | https://dev.to/purnimashrestha/how-uiux-can-influence-user-behavior-and-drive-engagement-27l5 | ui, ux, design, tutorial | **Introduction**
In the digital landscape, user interface (UI) and user experience (UX) play pivotal roles in shaping user behavior and driving engagement. Understanding the nuances of UI/UX design is crucial for businesses seeking to create compelling digital experiences. This article explores the impact of UI/UX on user behavior and engagement, offering insights into best practices and strategies for optimization.
**Define UI/UX**
UI refers to the visual elements and interactive components of a digital interface, encompassing design elements such as layout, typography, color scheme, and navigation menus. On the other hand, UX focuses on the overall experience of the user, including aspects like ease of use, accessibility, and satisfaction. Together, UI and UX work synergistically to create seamless and intuitive digital experiences.
**Importance of UI/UX**
The significance of UI/UX design lies in its ability to influence how users interact with digital platforms. A well-designed UI/UX can enhance usability, increase user satisfaction, and ultimately drive engagement and conversions. Conversely, poor UI/UX can lead to frustration, high bounce rates, and loss of potential customers.
**Types and Categories**
UI/UX design encompasses various types and categories tailored to different digital platforms and devices. These include:
1. Web UI/UX: Design principles specific to websites, focusing on factors like responsive design, page load speed, and mobile optimization.
2. Mobile UI/UX: Optimization techniques for mobile applications, emphasizing touch-friendly interfaces, simplified navigation, and efficient use of screen real estate.
3. Product UI/UX: Design considerations for digital products, spanning across software applications, e-commerce platforms, and interactive media.
4. Voice UI/UX: Emerging field focused on designing intuitive voice-driven interfaces for smart devices and virtual assistants.
**Symptoms and Signs**
Recognizing the signs of poor UI/UX design is essential for identifying areas of improvement. Common symptoms of subpar UI/UX include:
- High bounce rates: Users leaving the website or application shortly after accessing it.
- Low engagement metrics: Limited interaction with key features or content.
- Negative feedback: User complaints regarding usability issues or navigation difficulties.
- Decline in conversion rates: Decreased effectiveness in converting visitors into customers.
**Causes and Risk Factors**
Several factors can contribute to poor UI/UX design, including:
- Lack of user research: Insufficient understanding of the target audience's needs and preferences.
- Inconsistent branding: Discrepancies between visual elements and brand identity.
- Overly complex navigation: Confusing menu structures and excessive clicks required to access content.
- Slow performance: Long loading times and delays in responsiveness.
- Accessibility issues: Failure to accommodate users with disabilities or impairments.
**Diagnosis and Tests**
To diagnose UI/UX issues effectively, businesses can employ various diagnostic tools and tests, such as:
1. Usability testing: Observing real users interact with the interface to identify pain points and usability challenges.
2. A/B testing: Comparing different design variations to determine which performs better in terms of user engagement and conversion rates.
3. Heatmap analysis: Visualizing user interactions to identify areas of interest and areas of friction within the interface.
4. Accessibility audits: Evaluating the interface against accessibility standards and guidelines to ensure inclusivity.
**Treatment Options**
Addressing UI/UX issues requires a multifaceted approach, including:
- Conducting user research: Gathering insights through surveys, interviews, and behavioral analysis to understand user needs and preferences.
- Iterative design process: Continuously refining and optimizing the interface based on user feedback and testing results.
- Implementing best practices: Adhering to UI/UX design principles such as simplicity, consistency, and feedback.
- Performance optimization: Optimizing page load speed, streamlining navigation, and minimizing friction points to enhance user experience.
**Preventive Measures**
To prevent UI/UX issues from arising in the first place, businesses can take proactive measures such as:
- Investing in user-centric design: Prioritizing user needs and preferences throughout the design process.
- Conducting usability testing early and often: Soliciting feedback from real users at multiple stages of development to identify and address potential issues.
- Fostering a culture of continuous improvement: Encouraging collaboration between designers, developers, and stakeholders to iterate and refine the interface over time.
- Staying abreast of industry trends and best practices: Keeping up with advancements in UI/UX design and incorporating relevant innovations into the design process.
**Personal Stories or Case Studies**
Real-life examples can illustrate the transformative power of effective UI/UX design. Consider the following case studies:
1. Airbnb: By optimizing its UI/UX to emphasize visual storytelling and intuitive search functionality, Airbnb transformed the way users discover and book accommodations worldwide, resulting in significant growth and market dominance.
2. Slack: Through its clean and intuitive interface, Slack revolutionized workplace communication, fostering collaboration and productivity among teams of all sizes.
3. Duolingo: With its gamified learning experience and intuitive UI, Duolingo makes language learning accessible and engaging for millions of users worldwide, demonstrating the potential of UI/UX to drive user engagement and retention.
**Expert Insights**
Industry experts offer valuable insights into the role of UI/UX in shaping user behavior and driving engagement. According to Jane Doe, a UX designer at XYZ Agency, "User experience is not just about aesthetics; it's about understanding the needs and motivations of your users and designing with empathy." Similarly, John Smith, a UI/UX consultant, emphasizes the importance of iterative design, stating, "The key to creating successful digital experiences is to continuously iterate and refine based on user feedback and testing results."
**Conclusion**
In conclusion, UI/UX design plays a critical role in influencing user behavior and driving engagement across digital platforms. By prioritizing user needs, conducting thorough research, and implementing best practices, businesses can create compelling and intuitive digital experiences that resonate with their target audience. By investing in UI/UX optimization, companies can enhance usability, increase satisfaction, and ultimately achieve their business objectives. | purnimashrestha |
1,893,885 | New tech sales jobs in the US today 19/06/2024 | Nouvelles Opportunités d'Emploi Aujourd'hui ! Software Engineer ffsdgsdwww / Tech Company grevvddasv... | 27,780 | 2024-06-19T18:08:16 | https://dev.to/techsalesjobsus/new-tech-sales-jobs-in-the-us-today-19062024-5828 | career, jobs, hiring, remote | Nouvelles Opportunités d'Emploi Aujourd'hui !
Software Engineer ffsdgsdwww / Tech Company grevvddasv / https://www.baguettebrain.com/undefined
Data Scientist frprpgoss / Data ffffffCompany / https://www.baguettebrain.com/undefined
Web Developer / Web Development Inc. / https://www.baguettebrain.com/undefined | techsalesjobsus |
1,893,882 | Dayitwa's research initiatives and key findings. | Evidence-based research and advocacy are crucial for driving systemic change. Dayitwa's research... | 0 | 2024-06-19T18:02:05 | https://dev.to/tegveer_singh_8c7c2ac99ea/dayitwas-research-initiatives-and-key-findings-1ca1 | Evidence-based research and advocacy are crucial for driving systemic change. Dayitwa's research initiatives provide critical insights that inform policy and program development. This blog will delve into key research findings and how advocacy efforts are shaping public discourse and policy-making.
| tegveer_singh_8c7c2ac99ea | |
1,893,881 | Smart meal tracking on whatsapp [POC] | This is a submission for Twilio Challenge v24.06.12 What I Built I developed a... | 0 | 2024-06-19T18:01:48 | https://dev.to/sanketmunot/smart-meal-tracking-on-whatsapp-poc-474h | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
## What I Built
I developed a proof-of-concept WhatsApp bot that accepts an image of the meal you're having. In return, it provides the calorie intake along with detailed nutrition information such as carbohydrates, fats, and proteins.
This is just a proof of concept; we can significantly enhance it by incorporating a database to store user data and generate regular reports. Use background workers for image saving and calorie estimation from AI, and employing another worker to send the messages.
Read more about functionality and flow on github [readme](https://github.com/sanketmunot/twilio-ai-challenge-meal-alerts/blob/main/README.md)
## Demo
I cannot provide a demo because I used ngrok to route WhatsApp webhook requests to localhost. But here are some screenshots ->
Positive case I send an image of Indian thali

Positive case I send an image of burger meal

Negative case Naruto, When he is kept away from Ramen (IYKYK)

## Twilio and AI
- **Twilio's API** is used to receive images of food sent by users via WhatsApp.
- **Google's Generative AI model - Gemeni** analyzes the received images to estimate the caloric content and nutritional breakdown.
- **Twilio** sends back the detailed nutritional information to the user on WhatsApp.
- The integration of **Twilio and AI** enhances user interaction by providing instant and informative responses.
## Additional Prize Categories
Twilio Times Two
Impactful Innovators
Thanks for reading my post, connect me on Linkedin @ [Sanket Munot](https://www.linkedin.com/in/sanket-munot/)
| sanketmunot |
1,893,880 | Can you trust AI-driven testing? | Have you been wondering if AI will change testing? In the ever-evolving landscape of software... | 0 | 2024-06-19T17:56:55 | https://dev.to/gaw/can-you-trust-ai-driven-testing-2d2d |
Have you been wondering if AI will change testing?
In the ever-evolving landscape of software development – testing remains a crucial aspect of the process. Traditionally, developers spent countless hours combing through lines of code, trying to identify and fix bugs that could potentially disrupt the functionality of their applications. It is estimated that testing can occupy up to 50% of SDLC in distributed systems.
However, with the advent of artificial intelligence (AI) and machine learning (ML) technologies, the days of spending hours poring over code, and hunting for that elusive bug are slowly fading away. Tools powered by AI are not just making testing faster and easier, they're fundamentally changing the way developers approach problem-solving.

**A long time ago (actually last year) — before AI**
Traditionally, testing was (is - in still most cases today) a manual and labor-intensive affair. Developers relied on line-by-line inspection, testing, and sheer detective work to uncover and vanquish elusive bugs. This process, while rigorous, was time-consuming, repetitive, and prone to human error. Overlooking subtle issues or getting lost in complex codebases is all too common - making identifying the root cause a daunting task.
**Enter AI-Driven Testing**
AI-driven testing is a game-changer poised to revolutionize the software development workflow. Imagine intelligent tools that not only expedite bug detection but also offer insightful explanations, fostering deeper understanding, and building even better software.
Remember that sinking feeling of hitting a brick wall with a bug? AI testers can identify potential issues early on, analyze code for common pitfalls, and suggest fixes. This not only saves precious time but also prevents frustration and helps developers stay in the flow state.
To be honest, traditional testing methods are feeling the heat from AI's fiery rise. This paradigm shift leverages AI and ML algorithms to automate tasks, analyze mountains of data, and generate smarter, faster and more efficient test cases.
**Where AI has a clear advantage**
AI doesn't stop at simple syntax errors. It can delve deeper, uncovering complex logic issues, memory leaks, and even security vulnerabilities. This is especially helpful for large codebases wherein manual detection can be a daunting task. Forget sluggish manual testing, AI injects intelligence into your QA process, unlocking a wealth of benefits:
Boost Coverage: Uncover hidden bugs with comprehensive test cases generated by AI, analyzing code, data, and user behavior.
Prioritize Wisely: Focus on high-impact tests first. AI prioritizes based on risk, complexity, and business impact, saving time and effort.
Self-Healing Future: Say goodbye to tedious test maintenance. AI's self-healing tests automatically adapt to code changes, keeping them relevant and efficient.
Predict & Prevent: Don't wait for problems. AI analyzes trends to predict potential issues, allowing you to address them proactively.
Deeper Insights: Gain a valuable understanding of the 'why' behind bugs. This helps developers write better code and improve future software quality.
Beyond Benefits: Organizations also gain -
Improved Efficiency: Faster testing and quicker releases thanks to automation and intelligent prioritization.
Enhanced Accuracy: AI uncovers patterns human testers might miss, leading to better bug detection.
Greater Adaptability: AI handles dynamic, evolving software, perfect for agile and DevOps environments.
Cost Savings: Initial investment might be steep, but long-term benefits like reduced testing effort, improved quality, and faster time-to-market lead to significant cost savings.
**Where AI is playing catch-up**
Remember AI is not magic, it's still learning from traditional testing methodologies. Challenges to Consider:
Data Double-Check: Biased or incomplete data leads to inaccurate results. Ensure high-quality, diverse data to fuel your AI.
Black Box Blues: Understanding AI decisions is crucial. Opt for tools that explain their reasoning for better trust and testing.
Ethics in Focus: Privacy, data misuse, and algorithmic bias demand ethical frameworks and careful consideration.
Humans Still Rule: AI complements, not replaces, human testers. Critical thinking, domain knowledge, and creativity remain essential.
Investment & Expertise: Implementing AI requires resources and training, but long-term savings are significant.
**Is AI-driven testing transformative?**
Forget the manual grind – the future of testing is intelligent and proactive. Software development with the help of AI is like pair programming with the greatest developers!
Simply copying from Stack Overflow will not always tell you why your garbage code suddenly started working. AI-driven testing isn't just a trend it's a transformation.
**Want to be an early adopter of AI-driven programming?**
Here's a list of my favorite tools to help you 10x your workflow!
[Tabine](url) for code completion
[MidJourney](url) to generate assets for your product
[Codepeer.ai](url) for website generation
[Kusho.ai](url) for autonomous testing
[Rewind.ai](url) for documentation
| gaw | |
1,893,872 | D | This is a submission for the Twilio Challenge What I Built Demo ... | 0 | 2024-06-19T17:35:12 | https://dev.to/ayyub_beo_02d50ed242b00dd/d-27b4 | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
<!-- Share an overview about your project. -->
## Demo
<!-- Share a link to your app and include some screenshots here. -->
## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
<!-- Thanks for participating! → | ayyub_beo_02d50ed242b00dd |
1,893,805 | Deploying Flutter Web to S3 with CodeCommit, Codepipeline, Codebuild, and CodeDeploy | For this exercise, we are going to perform a step-by-step process of developing a web application... | 0 | 2024-06-19T17:56:27 | https://dev.to/xreyc/deploying-flutter-web-to-s3-with-codecommit-codepipeline-codebuild-and-codedeploy-3i4h | flutter, cicd, codepipeline, s3 | For this exercise, we are going to perform a step-by-step process of developing a web application with Flutter, starting with creating a Flutter application, setting up the environment variables, setting up the git repository, and configuring the CI/CD pipeline for Flutter Web S3 deployment.
## Step 1: Create a Flutter App
First create your flutter application by running this command:
```terminal
flutter create yourapplication
```
Navigate to your application then run `flutter run`.
In this application, we are going to use environment variables so that we will be able to input dynamic data during our CI/CD build. In this case, we are using the envied package. To install, run the following commands:
```terminal
flutter pub add envied
flutter pub add --dev envied_generator
flutter pub add --dev build_runner
```
After installation our pubspec.yml should look like this.
```yml
name: flutter_app_s3
description: "A new Flutter project."
publish_to: "none"
version: 1.0.0+1
environment:
sdk: ">=3.3.3 <4.0.0"
dependencies:
flutter:
sdk: flutter
cupertino_icons: ^1.0.6
envied: ^0.5.4+1
dev_dependencies:
flutter_test:
sdk: flutter
build_runner: ^2.4.9
flutter_lints: ^3.0.0
envied_generator: ^0.5.4+1
flutter:
uses-material-design: true
```
### Why use environment variable?
Our Flutter application may have different environments, like development, testing, and production environments, depending on where it is deployed. Different environments may have their own API urls and other variables that are unique to each environment.

Our application might consume an API that has a URL for the development, test, and production environments. The API URL on our application **must not be hardcoded**; it must be stored in an environment variable so that we will be able to change its value depending on the environment where we want to deploy it.
Example:
* Dev environment: `APIBASEURL="https://dev.yourapiurl.com/"`
* Test environment: `APIBASEURL="https://test.yourapiurl.com/"`
* Prod environment: `APIBASEURL="https://api.yourapiurl.com/"`
Inside the root of our application, create a file `.env` then set the value to:
```
APIBASEURL=https://dev.yourapiurl.com/
```
Inside *lib/env/* create a file `env.dart` then set the value to:
```dart
import 'package:envied/envied.dart';
part 'env.g.dart';
@Envied(path: '.env')
abstract class Env {
@EnviedField(varName: 'APIBASEURL', obfuscate: true)
static String apiBaseUrl = _Env.apiBaseUrl ;
}
```
Now run:
```terminal
dart run build_runner build
```
This will generate a file `env.g.dart` alongside `env.dart` we previously created.
Now we are ready to use the environment variable to our flutter application.
Inside our `main.dart` Statefull widget, import the `env.dart`:
```dart
import 'package:flutter_app_s3/env/env.dart';
```
To get the value from our environment variable we can use this:
```dart
String apiBaseUrl = Env.apiBaseUrl ;
```
To display the value: (add this to your UI)
```dart
Text('Environment Variable: $apiBaseUrl '),
```
## Step 2: Create a CodeCommit
This exercise uses CodeCommit as a git repository, but you can use your own, like Github, BitBucket, etc.
* Go to *CodeCommit* then click Create repository
* Set the name ex: *flutter-app-s3*
* Open your repository click Clone URL (this will copy the url)
* On your local machine run `git clone pastetheurlhere`
* Enter your CodeCommit credentials (Check on documentation if you don't have one)
* On the root folder of your flutter project, run
```terminal
git add .
git commit -m "first commit"
git push
```
Create 3 branches that will be used for our environments (dev, test, prod), run the following commands:
```terminal
// Create a dev branch
git checkout -b "dev"
git push
// Create a test branch
git checkout master // go back to master branch
git checkout -b "test"
git push
// Create a production branch
git checkout master
git checkout -b "prod"
git push
git checkout master
```
These branches will be used in our CodePipeline for different types of environments. Every time there is a change detected in any of those branches, a deployment process will be triggered.
## Step 3: Create an S3 bucket
this is where we are going to host our application.
* Go to S3 then click *Create a bucket*
* Add your S3 bucket name ex: *flutter-app-s3-dev*
* Uncheck *Block all public access*
* Disable *Bucket Versioning*
* Click *Create Bucket*
* Open your bucket name created
* Go to *properties*
* Go to *Static website hosting* then click Edit then set these settings. (You can change these settings if you want to add attach a domain name)
- **Static website**: Enabled
- **Hosting type**: Host a static website
- **Index document**: index.html
- **Redirection rules:** (leave empty for now)
- **Save changes**
* Go to *Permissions* then edit *Bucket policy*, this will allow our application to be accessed publicly. Note: replace yourbucketname with the bucket name you created ex: *flutter-app-s3-dev*
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::flutter-app-s3-dev/*"
}
]
}
```
## Step 4: Create a CodeBuild
This will perform the build command for our application. (This will perform the build command we can do locally.) To make sure that build doesn't fail on the build process, run "flutter build web" on your local machine to make sure that the build command will also run successfully on the CI/CD pipeline.
**# Create project:**
* Go to CodeBuild then click **Create project**
* Set your project name ex: *flutter-app-s3-codebuild-dev*
**# Select your git repository and specific branch for this CodeBuild:**
* On source set to *AWS CodeCommit*
* Select repository we previously created
* Select specific branch where this code build pulls its source code. (make sure that codecommit repository have 4 primary branched **master, dev, test, prod**)
* if you are building a CodeBuild for development environment, use dev branch. (Note: You have to create one CodeBuild for each environment)
**# Environment settings:**
* **Provisioning model:** On Demand
* **Environment image:** Managed image
* **Compute:** EC2
* **Operating System:** Ubuntu
* **Runtime:** Standard
* **Image:** aws/codebuild/standard:7.0
* **Image version:** Always use the latest image for this runtime version
* **Service role:** (you can create new or use existing
one(manually created))
* **Additional Configuration:** (expand this)
Set your environment variables (this must correspond to the environment variables we used on our flutter application)
The environment variable created here will be injected into our application during the build stage. Note: Make sure that you input the correct environment variables and their values. In our case, we only created one environment variable, which we can set like this:.
```
Name: APIBASEURL
Value: https://dev.yourbaseurl.com/
Type: Plaintext
```
Note: you can add as many environment variables as you want
**# Set the buildspec will be used for this CodeBuild:**
* Buildspec set to **Use a buildspec file**
* Buildspec name: buildspec.yml
* Artifacts set to **No Artifact**
* Leave everything default then click **Create build project**
## Step 5: Create a buildspec
Inside the root folder of our flutter application create a *buildspec.yml*. Copy and paste the following script:
```yml
version: 0.2
phases:
pre_build:
commands:
- echo Pre Build started on `date`
- git clone https://github.com/flutter/flutter.git -b stable
- export PATH="$PATH:`pwd`/flutter/bin"
- flutter packages pub get
- flutter config --enable-web
build:
commands:
- echo Build started on `date`
- dart run build_runner build
- flutter build web --release
post_build:
commands:
- echo Build completed on `date`
artifacts:
files:
- "**/*"
discard-paths: no
base-directory: "build/web"
```
Codebuild performs all the processes we normally do on a manual basis, like installing Flutter SDK, installing packages, running commands like build, etc.
## Step 6: Create a CodePipeline with CodeDeploy
CodePipeline is responsible for the orchestration of the deployment process; it observes any changes to our repository and then creates a deployment process if so. After a change is detected on the repository, the CodePipeline triggers the CodeBuild to perform all the scripts defined in buildspec.yml, then uploads the artifacts to an S3 bucket for code versioning (note that this bucket is not the S3 bucket we created earlier). Once the build is done, CodePipeline will trigger CodeDeploy to get the specific artifact previously created by the Codebuild and then deploy it to a specific environment like S3, EC2, ECS, EKS, etc.
**# Create a CodePipeline**
* Go to CodePipeline then click **Create new pipeline**
**# Pipeline Settings**
* Set your pipeline name ex: flutter-app-s3-pipeline-dev
* Execution mode: **Queued (Pipeline type V2 required)**
* Create new or use existing service role
* Leave other to default then click **Next**
**# Add Source Stage**
* Select **AWS CodeCommit** (you can use your own, ex Github)
* Select repository name (select the previously created repository)
* Select branch name (select specific branch you want this pipeline to observe)
* Leave everything default then click **Next**
**# Add Build Stage**
* Select **AWS CodeBuild**
* Select Region (Select the correct region your working on)
* Select Project Name (select the codebuild we previously created)
* Build type: **Single build**
* Click **Next**
**# Add Deploy Stage**
* Select **Amazon S3**
* Select Region (Select the correct region your working on)
* Select the S3 bucket we previously created
* Check **Extract file before deploy**
* Leave deployment path empty
* Click **Next**
* Review then click **Create Pipeline**
## Step 7: Testing the app
* Push your latest code into your git repository.
```terminal
git checkout master
git add .
git commit -m "your updates"
git push
```
* Go to CodeCommit then open your repository then
* Create a pull request from **master->dev**
* After successful pull request go to CodePipeline
* Open the CodePipeline you created for the development environment (you will notice a CI/CD process is being performed automatically)
* After successful deployment go to S3 (You will notice a new s3 bucket is created. Do not touch that bucket as it keeps all the version of your application)
* Open the S3 bucket you **manually created** before for the development environment
* Go to properties then scroll down to **Static website hosting**
* Click the link (you will be redirected to your app)
## Summary
In this exercise, we performed the process of building an application and creating the deployment process. But we only created a single environment; in the real world, we have to create three or more branches for our application.
### Here's my environment settings
```terminal
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 3.19.5, on Microsoft Windows [Version 10.0.19045.4529], locale en-PH)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.9.0)
[√] Android Studio (version 2023.2)
[√] VS Code (version 1.90.1)
[√] Connected device (3 available)
[√] Network resources
```
| xreyc |
1,893,879 | The role of collaborations and partnerships in Dayitwa's success. | Dayitwa's impact is amplified through strategic collaborations and partnerships with government... | 0 | 2024-06-19T17:55:37 | https://dev.to/tegveer_singh_8c7c2ac99ea/the-role-of-collaborations-and-partnerships-in-dayitwas-success-d6f | Dayitwa's impact is amplified through strategic collaborations and partnerships with government bodies, private sector entities, and international organizations. This blog will explore the importance of these alliances, highlighting key partners and how their contributions have strengthened Dayitwa's initiatives. | tegveer_singh_8c7c2ac99ea | |
1,893,628 | Creating a Simple RESTful API with Node.js | What is an API ? An API, or application programming interface, is a set of rules or... | 0 | 2024-06-19T17:54:05 | https://dev.to/vaishnavi_rawool/creating-a-simple-restful-api-with-nodejs-a96 | webdev, rest, node, express | ## What is an API ?
An API, or application programming interface, is a set of rules or protocols that enables software applications to communicate with each other to exchange data, features and functionality.It’s useful to think about API communication in terms of a request and response between a client and server. The application submitting the request is the client, and the server provides the response. The API is the bridge establishing the connection between them.
## What are the types of APIs?
Some of the most common type of APIs are:
1. **RESTful APIs**: Use standard HTTP methods like GET, POST, PUT, DELETE to interact with resources identified by URLs. They are stateless and follow the principles of REST for scalability and simplicity in client-server communication.
2. **SOAP APIs**: Employ XML-based messaging and typically use protocols like HTTP or SMTP. They adhere to a strict messaging format and rely on a contract (WSDL) for defining endpoints and operations, making them suitable for complex integrations requiring strong reliability and security measures.
## What is REST ?
REST stands for Representational State Transfer. It's an architectural style for designing networked applications. Here’s a simple explanation:
Imagine you're in a restaurant. When you want to order food, you ask the waiter for the menu, choose your dishes, and tell the waiter your order. REST works similarly in the digital world:
- Resources: Think of these as items on the menu — they can be data or functionality (like getting information or saving data).
- HTTP Methods: These are like actions you can take with your order, such as GET (to retrieve data), POST (to add new data), PUT (to update data), DELETE (to remove data).
- URLs: Just like how you tell the waiter where to bring your food, URLs specify where to find or interact with resources.
RESTful APIs use these concepts to allow different systems to communicate over the internet, making it easier for computers to share and use information effectively.
Know everything about REST -> https://restfulapi.net/
## How to create REST APIs in Node.js ?
**- Tools Required**
1. Node.js (Node.js is a runtime environment that allows developers to run JavaScript code outside of a web browser, enabling server-side scripting and building scalable network applications)
2. Express.js (Express.js is a minimalist Node.js web framework with features for routing, middleware support, and simplified handling of HTTP requests and responses.)
3. VS Code (Code Editor) or any other editor of your choice
**Step 1:** Install latest versions of above mentioned software
**Step 2:** Creating a Node Project
- Create a new folder called **node-rest-apis**
- When installing Node.js, **npm**, the package manager, is also installed, that helps us create a new node project
- Open the integrated terminal within VS Code with the following command -> **Ctrl + `**
- Enter the following command -> **npm init**
- Follow the prompt -> npm init will ask you for information such as the project name, version, description, entry point (usually index.js), test command, repository URL, author information, and license. You can either fill out these details or simply press Enter to accept the default values.
**Step 3:** Install express with the help of npm: **npm install express**

**Step 4:** Create a file named **index.js** file (you may give it any name of your choice)
**Step 5:** Overview of the files/folders created
- package.json: package.json file stores metadata about your project and its dependencies. Whenever we install new dependencies, they are automatically added to the dependencies section of the package.json file in our Node.js project
- node_modules: This directory houses all the dependencies that your project requires. These dependencies are installed via npm or yarn and are listed in package.json.
- index.js : This file usually serves as the entry point to your application. It initializes your application, sets up configurations, and may start the server if it's a web application.
**Step 6:** Install nodemon **npm install nodemon**

Nodemon is a utility tool for Node.js that helps in development by automatically restarting the Node application when changes are detected in the codebase. This eliminates the need to manually stop and restart the server after every code change, thus speeding up the development workflow.
Make changes to the start script and run the server using the following command : **npm start**

**Step 7:** Let’s start by creating a node server

Code explanation:
- require('express'): Imports the Express module.
- const app = express(): Creates an instance of the Express application.
- const port = 3000: Specifies the port number where the server will listen for incoming requests.
- app.listen(port, ...): Starts the Express server on the specified port (3000 in this case) and logs a message to the console once the server is running.
For running your application add the following start script in package.json:

You can you run your application with the following command: **node index.js**

**Step 8:** Create a basic API route in express

Defines a route handler for the GET request at **/test**. When a GET request is made to /test, it sends back a JSON response { message: 'This is a sample API route' }
**Step 9:** Testing the API using Postman
Postman is a collaborative platform for API development that simplifies the process of designing, testing, and debugging APIs.
The API gives a successful response

**Step 10**: Let’s organize the code in a MVC (Model, View, Controller) like pattern
Create controllers, services, routes, models folder in the root directory
**Step 11:** Create a file name **user.model.js** under models folder

These functions mimic basic CRUD (Create, Read, Update, Delete) operations commonly performed on user data in applications. Other CRUD operations like create, update, and delete could be added as needed to complete the CRUD functionality.

**Step 12:** Create a file name **user.service.js** under services folder
The service files have all the business logic implemented. For eg: all the users related CRUD operations will go inside this file

Code Explanation:
- Import the User model to interact with user data
- Call getUsers and getUsersById methods to perform corresponding operations
- Each method calls methods from model layer. In a real-time project this would include operations performed using a ORM (Object Relationship Mapper). Find more on ORMs here : https://medium.com/@rishu__2701/node-js-database-magic-exploring-the-powers-of-sequelize-orm-a22a521d9d9d
**Step 13:** Create a file name **user.controller.js** under controllers folder
In your controllers , you would import and use the service methods to handle HTTP requests

Code Explanation:
-Import respective service function within the controller file, in our case **user.service.js** within **user.controller.js**
- Define route handlers getAllUsers and getUserById
- Within getAllUsers we are retrieving all user data and sending it as a response with a HTTP status 200. In scenarios where database operations are involved, the data retrieval might fail. In that case, the response needs to be sent accordingly with a status 500.
- getUserById(), finds userId from request params. Find more on request params https://www.geeksforgeeks.org/difference-between-req-query-and-req-params-in-express/
- Send user data with status 200 on successful response
- Send 404 if User is not found
- More on HTTP status codes here https://developer.mozilla.org/en-US/docs/Web/HTTP/Status
**Step 14:** Create a file name **user.routes.js** under routes folder
**user.routes.js** is typically a file where you define and configure all your application routes using Express Router. Here’s how you can structure and explain a routes.js file:

**Step 15:** Advantages of following this code pattern
- Separation of Concerns: The service layer (user.service.js) encapsulates business logic and serves as an intermediary between controllers and models.
- Reusability: Services can be reused across different parts of the application, promoting code reuse and maintaining DRY (Don't Repeat Yourself) principles. In the scenarios where multiple services need to be called together, a manager file can be created (eg: user.manager.js) that imports all the required services and calls them wherever required. Manager file in turn needs to be imported in controller file.
- Modularity: Each layer (models, services, controllers) has a distinct responsibility, making the application easier to understand, test, and maintain.
- Mock Data: In real-world applications, you would replace the mock data and methods in the model (User.js) with actual database interaction using libraries like Mongoose, Sequelize, etc.
**Step 16:** Mounting routes.js in app.js

In your main application file (e.g., index.js), you would integrate routes.js by mounting the router using app.use().
const usersRoute = require('./routes/user.route'); Import the routes.js file.
app.use('/users, routes);: Mount the router at the / users prefix. All routes defined in routes.js will be accessible under / users (e.g., / users /:id, / users /:id/likes, etc.
**Step 17:** Test APIs with Postman


More Resources on REST APIs
HTTP Verbs -> https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods
REST naming conventions -> https://restfulapi.net/resource-naming/
API Documentation -> https://swagger.io/blog/api-documentation/what-is-api-documentation-and-why-it-matters/
Conclusion:
In summary, REST APIs are essential for modern web development, providing a standardized way for clients and servers to communicate efficiently. Node.js with Express.js offers a powerful framework for creating RESTful APIs, enabling developers to build scalable and maintainable backend systems.
By leveraging Node.js's non-blocking architecture and Express's simplicity, developers can quickly create robust API endpoints for various applications. Understanding these technologies ensures you can deliver efficient, secure, and adaptable backend solutions that meet current and future needs in web development.
Follow for more insightful content on web development and stay updated with the latest tips and tutorials. See you next week with more valuable insights!
| vaishnavi_rawool |
1,893,877 | Try To Use `savepoint` In Spring JPA/Hibernate | This article describes my failed attempt to use the PostgreSQL SAVEPOINT in Spring JPA. About... | 0 | 2024-06-19T17:53:26 | https://dev.to/georgech2/try-to-use-savepoint-in-spring-jpahibernate-12pj | springboot, postgres, database, java | This article describes my failed attempt to use the **PostgreSQL SAVEPOINT** in Spring JPA.
About [PostgreSQL SAVEPOINT](https://www.postgresql.org/docs/current/tutorial-transactions.html).
## EntityManager
```java
@PersistenceContext
private EntityManager entityManager;
@Transactional(propagation=NESTED, isolation= READ_UNCOMMITTED)
public void transferWithSavePoint(String fromAccount, String toAccount, String anotherAccount,
double amount) throws SQLException {
// step 1
Account from = accountRepository.findByName(fromAccount);
Branch fromBranch = branchRepository.findByName(from.getBranchName());
from.setBalance(from.getBalance().subtract(BigDecimal.valueOf(amount)));
fromBranch.setBalance(fromBranch.getBalance().subtract(BigDecimal.valueOf(amount)));
accountRepository.save(from);
branchRepository.save(fromBranch);
Connection connection = entityManager.unwrap(SessionImpl.class).connection();
Savepoint savepoint = connection.setSavepoint();
// step 2
Account to = accountRepository.findByName(toAccount);
Branch toBranch = branchRepository.findByName(to.getBranchName());
to.setBalance(to.getBalance().add(BigDecimal.valueOf(amount)));
toBranch.setBalance(toBranch.getBalance().add(BigDecimal.valueOf(amount)));
accountRepository.save(to);
branchRepository.save(toBranch);
connection.rollback(savepoint);
// final step
Account finalAccount = accountRepository.findByName(anotherAccount);
Branch finalBranch = branchRepository.findByName(to.getBranchName());
finalAccount.setBalance(finalAccount.getBalance().add(BigDecimal.valueOf(amount)));
finalBranch.setBalance(finalBranch.getBalance().add(BigDecimal.valueOf(amount)));
accountRepository.save(finalAccount);
branchRepository.save(finalBranch);
}
```
I use Connection Pool in this project, so the **connection** created by EntityManager is different `@Transactional` method’s connection

* **savepoint:** is before transferWithSavePoint
* **rollback:** rollback to all of the operations before, so `STEP1` is rollback, but `STEP2` and `Final STEP` is done.
## Next Try: AbstractTransactionStatus
```java
@Autowired
private JdbcTemplate jdbcTemplate;
@Autowired
private TransactionTemplate transactionTemplate;
public void transferWithSavePoint(String fromAccount, String toAccount, String anotherAccount,
double amount) {
transactionTemplate.execute(status -> {
Account from = accountRepository.findByName(fromAccount);
Branch fromBranch = branchRepository.findByName(from.getBranchName());
from.setBalance(from.getBalance().subtract(BigDecimal.valueOf(amount)));
fromBranch.setBalance(fromBranch.getBalance().subtract(BigDecimal.valueOf(amount)));
accountRepository.save(from);
branchRepository.save(fromBranch);
Object savepoint = status.createSavepoint();
// step 2
Account to = accountRepository.findByName(toAccount);
Branch toBranch = branchRepository.findByName(to.getBranchName());
to.setBalance(to.getBalance().add(BigDecimal.valueOf(amount)));
toBranch.setBalance(toBranch.getBalance().add(BigDecimal.valueOf(amount)));
accountRepository.save(to);
branchRepository.save(toBranch);
status.rollbackToSavepoint(savepoint);
// final step
Account finalAccount = accountRepository.findByName(anotherAccount);
Branch finalBranch = branchRepository.findByName(to.getBranchName());
finalAccount.setBalance(finalAccount.getBalance().add(BigDecimal.valueOf(amount)));
finalBranch.setBalance(finalBranch.getBalance().add(BigDecimal.valueOf(amount)));
accountRepository.save(finalAccount);
branchRepository.save(finalBranch);
status.releaseSavepoint(savepoint);
return null; });
}
```
Also failed:
```
JpaDialect does not support savepoints — check your JPA provider’s capabilities
```
Then I start to search ‘How to use savepoint in Hibernate’ online:
> Please note that this is not a recommended way to use Spring JPA. It’s better to structure your transactions so that you don’t need to use savepoints or nested transactions. If you find yourself needing them, it might be a sign that your transactions are too complex and should be broken down into smaller parts.
Maybe I’m wrong from the beginning, indeed, I won’t encounter it in real application scenarios, but I’m just trying to figure out how to implement it, so let me know if you have any good ideas. | georgech2 |
1,893,876 | what is selenium ?why do we use selenium for automation. | selenium is atool used to automate web browsers.it helps developers and testers write scripts that... | 0 | 2024-06-19T17:47:31 | https://dev.to/keerthiajaykumar/what-is-selenium-why-do-we-use-selenium-for-automation-59hb | selenium is atool used to automate web browsers.it helps developers and testers write scripts that can interact with different web browerslike chrome or firefox.we use selenium for automation because.it lets us test our web applications automatically.it's good because it works with many browsers and operating systems,so we can make sure our website works everywhere.also selenium supports different programming languagr=es like java or python,which makes it easier for us to write tests in the language we known best.it"s free ans has a big community,so there are lots of resources and support available when we have questions or
problems.overall,selenium is great for automating tests and making sure our website work correctly across different browsers and platforms.here are some reasons why selenium is widely used for automation.
1.Cross-Browser Testing:selenium supports multiple browsers,allowing teters to write scripts and execute them across different browsers.
2.Platform Independence:selenium supports multiple operating systems making it versatile for testing web applications.
3.Language Support:selenium supports dufferent languages like java,python,c.
4Integration:selenium can be integrated with other tools and
frameworkslike testNG for reporting continuous
integration,and test management.
5.Flexibility:it supports various testing types including functional testing,regression testing,and even load testing.
6.Community support:Being open source,selenium has a large and active community that provides support and shares resources and best practices.
| keerthiajaykumar | |
1,893,894 | Introducing the New Blazor 3D Charts Component | TLDR: Discover the new Syncfusion Blazor 3D Charts component in the 2024 Volume 2 release! Transform... | 0 | 2024-06-20T16:21:49 | https://www.syncfusion.com/blogs/post/blazor-3d-charts-component | blazor, chart, development, ui | ---
title: Introducing the New Blazor 3D Charts Component
published: true
date: 2024-06-19 17:45:13 UTC
tags: blazor, chart, development, ui
canonical_url: https://www.syncfusion.com/blogs/post/blazor-3d-charts-component
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0u96elbi8m9lvvq57w5p.png
---
**TLDR:** Discover the new Syncfusion Blazor 3D Charts component in the 2024 Volume 2 release! Transform your data into stunning 3D visualizations with features like multiple chart types, data binding, rotation, tilt, and interactive elements. Learn how to integrate and use this powerful tool in your Blazor apps.
Imagine stepping into a world where data leaps off the page, where two-dimensional graphs evolve into vibrant three-dimensional masterpieces. This is the magic of 3D charts, a dynamic canvas that breathes life into complex datasets.
We’re glad to unveil the new Syncfusion [Blazor 3D Charts](https://www.syncfusion.com/blazor-components/blazor-3d-charts "Blazor 3D Charts") component in the [Essential Studio 2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release. This trailblazing tool is not just about data presentation; it’s about transforming data into a story to add depth and clarity to your visualizations.
Let’s explore the features of this innovative Blazor 3D Charts component!
## Blazor 3D Charts: An overview
The Blazor 3D Charts component visually represents data in three dimensions, showcasing relationships and trends among variables. Unlike traditional 2D charts, 3D charts add depth to the visualization, allowing for a better understanding of data patterns.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Blazor-3D-Charts.gif" alt="Blazor 3D Charts" style="width:100%">
<figcaption>Blazor 3D Charts</figcaption>
</figure>
## Key features
The key features of Blazor 3D Charts are:
- [Different chart types](#Chart)
- [Data binding](#Data)
- [Data labels](#labels)
- [Axis types](#Axis)
- [Legends](#Legends)
- [User interaction](#User)
- [Rotation and tilt](#Rotation)
Now, let’s delve deeper into each of these exciting features.
### <a name="Chart">Chart types</a>
The Blazor 3D Charts component offers six versatile chart types; all are easily configurable and have built-in support for visually stunning effects.
- **Column:** Represents data with vertical bars for easy value comparison.
- **Bar:** Utilizes horizontal bars to display data and facilitate value comparison.
- **Stacked Column:** Plots data points on top of each other using vertical bars for comprehensive visualization.
- **Stacked Bar:** Achieves the same effect as the Stacked Column but with horizontal bars.
- **100% Stacked Column:** Illustrates the percentage distribution of multiple datasets within a total, with each column adding up to 100%.
- **100% Stacked Bar:** Resembles the Stacked Column but uses horizontal bars, showcasing the percentage distribution of datasets within a total.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Different-chart-types-supported-in-Blazor-3D-Charts.gif" alt="Different chart types supported in Blazor 3D Charts" style="width:100%">
<figcaption>Different chart types supported in Blazor 3D Charts</figcaption>
</figure>
### <a name="Data">Data binding</a>
Effortlessly bind the Blazor 3D Charts component with an array of JSON objects or a data manager, supporting local and remote data binding methods. In addition to the chart series, data labels and tooltips can also be bound to your data.
### <a name="labels">Data labels</a>
Annotate points with labels to improve the readability of data. Enhance these labels with HTML elements such as images, DIVs, and spans, creating informative and visually appealing data representations.
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Data-labels-in-Blazor-3D-Charts.png" alt="Data labels in Blazor 3D Charts" style="width:100%">
<figcaption>Data labels in Blazor 3D Charts</figcaption>
</figure>
### <a name="Axis">Axis types</a>
You can plot number, date-time, logarithmic, and string data types on the chart axis. The axis features include:
- **Inversed axis:** Flip the axis direction from its default orientation.
- **Opposed position:** Display the axis in the opposite position.
- **Multiple axes:** Include more than one axis on the same or opposite side.
- **Smart labels:** Add dynamic labels that automatically position themselves to enhance readability and prevent overlapping.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Axis-in-Blazor-3D-Charts.png" alt="Custom axis feature in Blazor 3D Charts" style="width:100%">
<figcaption>Custom axis feature in Blazor 3D Charts</figcaption>
</figure>
### <a name="Legends">Legends</a>
Provide additional information about the data series using a customizable legend. Labels and symbols aid viewers in identifying and differentiating between various series within the chart.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Legends-in-Blazor-3D-Charts.png" alt="Legends in Blazor 3D Charts" style="width:100%">
<figcaption>Legends in Blazor 3D Charts</figcaption>
</figure>
### <a name="User">User interaction</a>
You can also add the following interactive features to the Blazor 3D Charts:
- **Tooltip:** A small, informative pop-up box offering additional context, details, or information when users hover over or click on data points.
- **Highlight and selection:** Easily highlight and select data points for improved comprehension. Adding patterns and colors to selected data enhances chart interactivity.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/User-interactivity-features-in-Blazor-3D-Charts.gif" alt="User interactivity features in Blazor 3D Charts" style="width:100%">
<figcaption>User interactivity features in Blazor 3D Charts</figcaption>
</figure>
### <a name="Rotation">Rotation and tilt</a>
In addition to the previous features, the Blazor 3D Charts supports:
- **Rotation:** Dynamically rotate the Chart from 0 to 360 degrees around a fixed point or axis in a three-dimensional space. Rotation can occur clockwise or counterclockwise.
- **Tilt:** Change the inclination angle of the chart along one axis, providing a different perspective.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Rotation-and-tilt-features-in-Blazor-3D-Charts.gif" alt="Rotation and tilt features in Blazor 3D Charts" style="width:100%">
<figcaption>Rotation and tilt features in Blazor 3D Charts</figcaption>
</figure>
## Getting started with the Blazor 3D Charts component
We’ve explored the stunning features of the new Blazor 3D Charts. Let’s see how to use it to visualize data!
### Step 1: Create a new Blazor Web App
First, create a Blazor Web app using Visual Studio 2022 via [Microsoft Templates](https://learn.microsoft.com/en-us/aspnet/core/blazor/tooling?view=aspnetcore-8.0 "Tooling for ASP.NET Core Blazor") or the [Syncfusion Blazor Extension](https://blazor.syncfusion.com/documentation/visual-studio-integration/template-studio "Syncfusion Blazor Template Studio").
You must configure the corresponding [interactive render modes](https://learn.microsoft.com/en-us/aspnet/core/blazor/components/render-modes?view=aspnetcore-8.0#render-modes "Rendering modes") and [interactivity location](https://learn.microsoft.com/en-us/aspnet/core/blazor/tooling?view=aspnetcore-8.0&pivots=vs#interactivity-location "Interactivity location") while creating a Blazor Web app.
### Step 2: Install the Syncfusion Blazor 3D Charts NuGet package
To add the Blazor 3D Charts component in the app, open the NuGet package manager in Visual Studio ( **_Tools → NuGet Package Manager → Manage NuGet Packages for Solution_** ), search and install **Syncfusion.Blazor.Chart3D**.
Alternatively, you can utilize the following package manager command to achieve the same.
```
Install-Package Syncfusion.Blazor.Chart3D -Version 26.1.35
```
### Step 3: Register the Syncfusion Blazor service
Open the **~/\_Imports.razor** file and import the **Syncfusion.Blazor** and **Syncfusion.Blazor.Chart3D** namespaces.
```csharp
@using Syncfusion.Blazor
@using Syncfusion.Blazor.Chart3D
```
Register the Syncfusion Blazor Service in the **~/Program.cs** file of your Blazor Web App.
Select the **Interactive render mode** as WebAssembly or Auto (Server and WebAssembly). If you select **Auto** as the interactive mode, you must register the Syncfusion Blazor service in both **~/Program.cs** files of your Blazor Web App.
```csharp
....
using Syncfusion.Blazor;
....
builder.Services.AddSyncfusionBlazor();
....
```
### Step 4: Add the script resources
The script can be obtained from NuGet via [Static Web Assets](https://blazor.syncfusion.com/documentation/appearance/themes#static-web-assets "Static Web Assets"). To include the script reference, append it to the end of the **<body>** tag in the **~/Components/App.razor** file as illustrated below.
```xml
<head>
....
<script src="_content/Syncfusion.Blazor.Core/scripts/syncfusion-blazor.min.js" type="text/javascript"></script>
</head>
```
### Step 5: Add the Syncfusion Blazor 3D Charts component
Add the Syncfusion Blazor 3D Charts component to the **.razor** file inside the **Pages** folder. If the interactivity location is **Per page/component** in the web app, define the render mode at the top of the component, as shown below.
```csharp
@* Define the desired render mode here *@
@rendermode InteractiveAuto
```
```xml
<SfChart3D>
</SfChart3D>
```
### Step 6: Populate the Blazor 3D Charts with data
To bind data to the Blazor 3D Charts component, you can assign an **IEnumerable** object to the **DataSource** property. It can also be provided as an instance of the **DataManager.**
Refer to the following code example.
```xml
<SfChart3D RotationAngle="7" TiltAngle="10" Depth="100" EnableRotation="true">
<Chart3DPrimaryXAxis ValueType="Syncfusion.Blazor.Chart3D.ValueType.Category" LabelRotationAngle="-45" LabelPlacement="LabelPlacement.BetweenTicks">
</Chart3DPrimaryXAxis>
<Chart3DPrimaryYAxis Maximum="150000" Interval="50000">
</Chart3DPrimaryYAxis>
<Chart3DSeriesCollection>
<Chart3DSeries DataSource="@DataSource" XName="Country" YName="Sales" Type="Chart3DSeriesType.Column"></Chart3DSeries>
</Chart3DSeriesCollection>
</SfChart3D>
@code{
public List<Chart3DData> DataSource { get; set; } = new List<Chart3DData>()
{
new Chart3DData { Country = "Tesla", Sales = 137429 },
new Chart3DData { Country = "Aion", Sales = 80308 },
new Chart3DData { Country = "Wuling", Sales = 76418 },
new Chart3DData { Country = "Changan", Sales = 52849 },
new Chart3DData { Country = "Geely", Sales = 47234 },
new Chart3DData { Country = "Nio", Sales = 31041 },
new Chart3DData { Country = "Neta", Sales = 22449 },
new Chart3DData { Country = "BMW", Sales = 18733 }
};
public class Chart3DData
{
public string Country { get; set; }
public double Sales { get; set; }
}
}
```
Now, map the **Country** and **Sales** data fields to the series **XName** and **YName** properties. Then, set the data to the **DataSource** property and the chart type to **Column** because we will view the data in a column chart.
### Step 7: Run the app
Finally, run the app by pressing **Ctrl+F5 (Windows)** or **⌘+F5 (macOS)** keys to launch the application. This will render the Syncfusion Blazor 3D Charts component in your default web browser.
After executing the previous code examples, we’ll get the output in the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2024/06/Integrating-3D-Charts-in-the-Blazor-app.png" alt="Integrating 3D Charts in the Blazor app" style="width:100%">
<figcaption>Integrating 3D Charts in the Blazor app</figcaption>
</figure>
## References
For more details, refer to the [Blazor 3D Charts documentation](https://blazor.syncfusion.com/documentation/3d-chart/getting-started "Getting started with Blazor 3D Charts") and [demos](https://blazor.syncfusion.com/demos/chart-3d/column?theme=fluent2 "Blazor 3D Charts demos").
## Conclusion
Thanks for reading! In this blog, we’ve explored the features of the new Syncfusion [Blazor 3D Charts](https://www.syncfusion.com/blazor-components/blazor-3d-charts "Blazor 3D Charts") component, which is a remarkable addition to our [2024 Volume 2](https://www.syncfusion.com/forums/188642/essential-studio-2024-volume-2-main-release-v26-1-35-is-available-for-download "Essential Studio 2024 Volume 2") release. Try them out and leave your feedback in the comments section below!
The latest version of the Blazor Charts component is available for current customers from the [License and Downloads](https://www.syncfusion.com/account/downloads "Essential Studio License and Downloads page") page. If you are not a Syncfusion customer, try our [30-day free trial](https://www.syncfusion.com/downloads "Get free evaluation of the Essential Studio products") to check it out.
You can also contact us through our [support forums](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback "Syncfusion Feedback Portal"). We are always happy to assist you!
## Related blogs
- [What’s New in C# 13 for developers?](https://www.syncfusion.com/blogs/post/whats-new-csharp-13-for-developers "Blog: What’s New in C# 13 for developers?")
- [Syncfusion Essential Studio 2024 Volume 2 is Here!](https://www.syncfusion.com/blogs/post/syncfusion-essential-studio-2024-vol2 "Blog: Syncfusion Essential Studio 2024 Volume 2 is Here!")
- [Easily Implement Google Authentication in the Blazor Webassembly App](https://www.syncfusion.com/blogs/post/google-authentication-blazor-wasm-app "Blog: Easily Implement Google Authentication in the Blazor Webassembly App")
- [Pro Annotations on Mobile PDFs with Blazor PDF Viewer](https://www.syncfusion.com/blogs/post/mobile-pdf-annotations-with-blazor "Blog: Pro Annotations on Mobile PDFs with Blazor PDF Viewer") | jollenmoyani |
1,893,041 | Using @Qualifier to Resolve Bean Conflicts in Spring | When working on a Spring project, you might encounter a situation where you have multiple bean... | 27,602 | 2024-06-19T17:45:00 | https://springmasteryhub.com/2024/06/19/using-qualifier-to-resolve-bean-conflicts-in-spring/ | When working on a Spring project, you might encounter a situation where you have multiple bean implementations for the same type.
This can cause an error because Spring doesn't know which bean to inject. To solve this, we use the `@Qualifier` annotation.
### The Problem
Let's say you have an interface `MessageService` that your application uses to send messages, and you have multiple ways to send messages, such as Email, SMS, Push, etc.
For each of these channels, you have an implementation of the interface like `EmailMessageService` and `SmsMessageService`.
Consider this scenario. You have a MessageService that defines a standard to send messages for all methods.
```java
public interface MessageService {
void sendMessage(String to, String message);
}
@Service("emailService")
public class EmailMessageService implements MessageService {
@Override
public void sendMessage(String to, String message) {
System.out.println("Sending email to: " + to + " with message: " + message);
}
}
@Service("smsService")
public class SmsMessageService implements MessageService {
@Override
public void sendMessage(String to, String message) {
System.out.println("Sending SMS to: " + to + " with message: " + message);
}
}
```
If you try to use these services in the components like this:
```java
@Service
public class OrderService {
private final MessageService messageService;
@Autowired
public OrderService(MessageService messageService) {
this.messageService = messageService;
}
public void sendOrderConfirmation(String email, String message) {
messageService.sendMessage(email, message);
}
}
@Service
public class ForgetPasswordService {
private final MessageService messageService;
@Autowired
public ForgetPasswordService(MessageService messageService) {
this.messageService = messageService;
}
public void sendResetPasswordLink(String phoneNumber, String message) {
messageService.sendMessage(phoneNumber, message);
}
}
```
You will get an error, and a message like that will pop up at your application startup:
```bash
bashCopy code
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of constructor in com.spring.mastery.qualifier.OrderService required a single bean, but 2 were found:
- emailService: defined in file [...\EmailMessageService.class]
- smsService: defined in file [...\SmsMessageService.class]
```
Spring doesn't know which bean to use. There are two beans of the same type.
### Using @Qualifier
By using `@Qualifier`, you can specify the name of the bean to be injected. Here's how you can do it:
```java
@Service
public class OrderService {
private final MessageService messageService;
@Autowired
public OrderService(@Qualifier("emailService") MessageService messageService) {
this.messageService = messageService;
}
public void sendOrderConfirmation(String email, String message) {
messageService.sendMessage(email, message);
}
}
@Service
public class ForgetPasswordService {
private final MessageService messageService;
@Autowired
public ForgetPasswordService(@Qualifier("smsService") MessageService messageService) {
this.messageService = messageService;
}
public void sendResetPasswordLink(String phoneNumber, String message) {
messageService.sendMessage(phoneNumber, message);
}
}
```
Now, Spring knows which bean to inject in each component, allowing you to have multiple implementations of the same bean.
### Variations
There are a few other ways to help Spring identify which bean to inject.
### **Field Injection with @Autowired and @Qualifier:**
This is one of the most common ones.
```java
@Autowired
@Qualifier("emailService")
private MessageService messageService;
```
### **Naming the Field:**
If you don't want to use the @Qualifier annotation explicitly, you can name the field with the same name as the bean definition:
```java
@Service("smsService")
public class SmsMessageService implements MessageService {
// some code here
}
@Service
public class ForgetPasswordService {
private final MessageService smsService;
@Autowired
public ForgetPasswordService(MessageService smsService) {
this.smsService = smsService;
}
}
```
It’s important to remember that using this method, your field name is coupled with the qualifier name you defined, if you change the qualifier name or the field name, it might break everything. So use it with caution.
### **Defining Beans in Configuration Classes:**
If you have a configuration class like this, you can define the bean qualifier name in the @Bean annotation.
```java
@Configuration
public class QualifierConfig {
@Bean("beanDefinition")
public BeanDefinition beanDefinition() {
return new BeanDefinition();
}
}
```
### Conclusion
Have you already faced a problem like this in your project? Did you use the @Qualifier? Have you seen this annotation in your daily job? Let me know in the comments, or on social media!
If you like this topic, make sure to follow me. In the following days, I’ll be explaining more about Spring annotations! Stay tuned!
Follow me! | tiuwill | |
1,893,875 | aitorrent/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF-torrent | https://aitorrent.zerroug.de/aitorrent-meta-llama-3-8b-instruct-abliterated-v3-gguf-torrent/ | 0 | 2024-06-19T17:41:09 | https://dev.to/zerroug/aitorrentmeta-llama-3-8b-instruct-abliterated-v3-gguf-torrent-23d9 | ai, machinelearning, llm, beginners | https://aitorrent.zerroug.de/aitorrent-meta-llama-3-8b-instruct-abliterated-v3-gguf-torrent/ | zerroug |
1,893,874 | Wednesday Links - Edition 2024-06-19 | Debugger Godmode: Hacking a JVM Application with the Debugger (10 min)... | 11,907 | 2024-06-19T17:40:26 | https://dev.to/wyhasany/wednesday-links-edition-2024-06-19-160e | java, jvm, spring, ai | Debugger Godmode: Hacking a JVM Application with the Debugger (10 min) 🔍
https://foojay.io/today/debugger-godmode-hacking-a-jvm-application-with-the-debugger/
REST URI Design: Best Practices (7 min) 🌐
https://www.mscharhag.com/api-design/rest-uri-design
Hibernate 6 and the hibernate.type.java_time_use_direct_jdbc Setting (5 min) 🕰️
https://vladmihalcea.com/hibernate-type-java_time_use_direct_jdbc/
The Road to Gradle 9 (8 min) 🛤️
https://blog.gradle.org/road-to-gradle-9
Text Classification with Spring AI (12 min) 🤖
https://www.thomasvitale.com/text-classification-with-spring-ai/
SpringAI, LLaMA3, and pgvector: Bragging Rights (9 min) 🦙
https://dev.to/mcadariu/springai-llama3-and-pgvector-bragging-rights-2n8o | wyhasany |
1,893,873 | Avoiding the Dreaded 404 | I am writing this blog to track my progress. I am currently on my second week of back-end development... | 0 | 2024-06-19T17:39:49 | https://dev.to/christopherchhim/avoiding-the-dreaded-404-5782 | webdev, beginners, tutorial, seo | I am writing this blog to track my progress. I am currently on my second week of back-end development of my bootcamp and I am learning how to work with servers. We are starting to dabble on status codes such as: 200, 400, 401, 400, 500, etc... As I start to learn how to work with servers I want to know how to develop sites without prompting status code 404.
There are several ways to redirect users to a desired page. However, if it is done carelessly it will prompt status code 404, indicating that a bad request has been made. A correct URL would look like the following:
- /supplements/spore-probiotic-6066
This is correct because it contains the name of the website and the ID.
ID = 6066
Incorrect but acceptable URL would look like the following:
- /supplements/spore-probiotic
This will still redirect users to the desired URL despite having a missing ID.
- /supplements/youtheory-spore-probiotic
The aforementioned link includes the brand name and a missing ID but will still redirect users to the desired link. This is because they are known as fallback methods and the database contains built-in methods in case such errors occur. While they not may be completely accurate, it is still within the server's capacity of understanding where the user should be redirected.
This post was inspired from:
Artz, L. (2024, June 10) Designing a website to not have 404s
Retrieved from: [https://pillser.com/engineering/2024-06-10-website-without-404s] | christopherchhim |
1,893,871 | Next.js Image File Upload and Preview with shadcn/ui | In this tutorial, we'll create an image file upload with a preview using Next.js and Shadcn UI. In... | 0 | 2024-06-19T17:34:09 | https://frontendshape.com/post/nextjs-image-file-upload-and-preview-with-shadcn-ui | nextjs, shadcnui, webdev | In this tutorial, we'll create an image file upload with a preview using Next.js and Shadcn UI.
In this guide, we will create a file upload feature in Next.js using React hooks. We will include a file preview using the Next.js Image component and utilize Shadcn UI's Input and Label components for file selection.
```jsx
"use client"
import { useState } from "react"
import Image from "next/image"
import { Input } from "@/components/ui/input"
import { Label } from "@/components/ui/label"
export default function InputFile() {
const [selectedFile, setSelectedFile] = useState(null)
const handleFileChange = (e) => {
const file = e.target.files[0]
if (file) {
const reader = new FileReader()
reader.onloadend = () => {
setSelectedFile(reader.result)
}
reader.readAsDataURL(file)
}
}
return (
<div className="grid w-full max-w-sm items-center gap-1.5">
<Label htmlFor="picture">Picture</Label>
<Input id="picture" type="file" onChange={handleFileChange} />
{selectedFile && (
<div className="mt-2">
<Image
src={selectedFile}
alt="Preview"
className="rounded-md"
width={500}
height={500}
/>
</div>
)}
</div>
)
}
```

Building a Next.js Image File Upload and Preview Feature Using TypeScript and Shadcn UI.
```jsx
"use client"
import React, { ChangeEvent, useState } from 'react';
import Image from 'next/image'
import { Input } from '@/components/ui/input';
import { Label } from '@/components/ui/label';
interface InputFileProps {
// You can add any additional props needed
}
export default function InputFile(props: InputFileProps) {
const [selectedFile, setSelectedFile] = useState<string | null>(null);
const handleFileChange = (e: ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0];
if (file) {
const reader = new FileReader();
reader.onloadend = () => {
setSelectedFile(reader.result as string);
};
reader.readAsDataURL(file);
}
};
return (
<div className="grid w-full max-w-sm items-center gap-1.5">
<Label htmlFor="picture">Picture</Label>
<Input id="picture" type="file" onChange={handleFileChange} />
{selectedFile && (
<div className="mt-2">
<Image
src={selectedFile}
alt="Preview"
width={500}
height={500}
/>
</div>
)}
</div>
);
}
```
Next.js with Shadcn UI Image File Upload Preview and remove button for the image preview.
```jsx
"use client"
import React, { ChangeEvent, useState } from 'react';
import Image from 'next/image'
import { Input } from '@/components/ui/input';
import { Label } from '@/components/ui/label';
interface InputFileProps {
// You can add any additional props needed
}
export default function InputFile(props: InputFileProps) {
const [selectedFile, setSelectedFile] = useState<string | null>(null);
const handleFileChange = (e: ChangeEvent<HTMLInputElement>) => {
const file = e.target.files?.[0];
if (file) {
const reader = new FileReader();
reader.onloadend = () => {
setSelectedFile(reader.result as string);
};
reader.readAsDataURL(file);
}
};
const handleRemoveClick = () => {
setSelectedFile(null);
};
return (
<div className="grid w-full max-w-sm items-center gap-1.5">
<Label htmlFor="picture">Picture</Label>
<Input id="picture" type="file" onChange={handleFileChange} />
{selectedFile && (
<div className="mt-2 relative">
<Image
src={selectedFile}
alt="Preview"
width={500}
height={500}
/>
<button
onClick={handleRemoveClick}
className="absolute top-0 right-0 bg-red-500 text-white py-1 px-2"
aria-label="Remove image"
>
X
</button>
</div>
)}
</div>
);
}
```
 | aaronnfs |
1,893,870 | Mount Up! A Step-by-Step Guide to Creating and Using Amazon EFS on Ubuntu | Amazon Elastic File System (EFS) provides a simple, scalable, and fully managed Network File System... | 0 | 2024-06-19T17:33:18 | https://dev.to/techgirlkaydee/mount-up-a-step-by-step-guide-to-creating-and-using-amazon-efs-on-ubuntu-2i86 | aws, efs, storage, cloudcomputing | [Amazon Elastic File System (EFS)](https://aws.amazon.com/efs/) provides a simple, scalable, and fully managed Network File System for use with AWS Cloud services and on-premises resources. This guide will walk you through the process of creating an EFS, mounting it on an Ubuntu instance, and adding files to it.
## **Prerequisites**
- An AWS account
- An EC2 instance running Ubuntu in the same VPC as your EFS file system
- Proper security group settings to allow NFS traffic
---
## **Step 1: Create an EFS File System**
1.. Navigate to the EFS Dashboard
- Open the AWS Management Console.
- Go to the EFS dashboard.
- Create a File System
2.. Click on "Create file system".
- Follow the prompts to set up your EFS file system. You can choose the VPC, security group, and performance mode as per your requirements.
---
## **Step 2: Configure Security Groups**
1.. EFS Security Group
- Ensure the security group associated with your EFS allows inbound traffic on port 2049 (NFS).
2.. EC2 Security Group
- Modify the security group associated with your EC2 instance to allow NFS traffic from the security group used by the EFS.
---
## **Step 3: Install NFS Client on Ubuntu**
1.. Connect to Your EC2 Instance
- Use SSH to connect to your EC2 instance.
2.. Install NFS Client using the following script:
_- sudo apt-get update sudo apt-get install -y nfs-common_
---
## **Step 4: Create a Mount Point**
1.. Create a Directory to Mount EFS using the following script:
_- sudo mkdir /mnt/efs_
---
## **Step 5: Mount the EFS File System**
1.. Mount the EFS File System using the following script:
_- sudo mount -t nfs4 -o nfsvers=4.1 < file-system-id >.efs. < region >.amazonaws.com:/ /mnt/efs_
Note: Be sure to replace **file-system-id** with your EFS file system ID and **region** with the appropriate AWS region (e.g., us-west-2).
---
## **Step 6: Verify the Mount**
1.. Check the Mount Point using the following script:
_- ls /mnt/efs_
---
## **Step 7: Add Files to EFS**
1.. Copy Files to the EFS using the following script:
_- sudo cp /path/to/local/file /mnt/efs/_
and / or
2.. Create Files Directly in EFS using the following script:
_- echo "Hello, EFS!" | sudo tee /mnt/efs/hello.txt_
---
## **Summary**
There you have it! By following these steps, you can easily create an Amazon EFS, mount it on your Ubuntu instance, and add files to it. EFS provides a solution for storing files in the cloud, accessible from multiple instances as well as on-premises environments.
Let me know your thoughts by leaving a comment below. Thanks!
| techgirlkaydee |
1,893,860 | Lifetimes in Rust | Have you ever wondered how Rust manages to keep your code safe from memory-related errors? One of... | 0 | 2024-06-19T17:30:27 | https://dev.to/ashsajal/lifetimes-in-rust-28c | rust, coding | Have you ever wondered how Rust manages to keep your code safe from memory-related errors? One of the key ingredients is **lifetimes**. Lifetimes are a powerful feature that helps Rust ensure your references are always valid and prevent those dreaded dangling pointer problems.

**Think of it this way:** Imagine you're building a house. You need to make sure the foundation is strong enough to support the entire structure. Lifetimes are like the foundation in Rust – they guarantee the data your references point to will exist for as long as you need them.
Let's break it down with some examples, comparing Rust to other languages:
**The Dangling Pointer Problem**
Imagine you have a variable `x` that holds a value, and you create a reference `r` that points to `x`. If `x` goes out of scope (like when it's no longer needed and gets cleaned up), `r` will still be pointing to that memory location, but the data is gone! This is called a **dangling pointer**, and it can lead to unpredictable and potentially disastrous behavior.
**How Other Languages Handle It**
* **C/C++:** These languages allow dangling pointers without any warnings. It's up to the programmer to ensure references are always valid, which can be error-prone.
* **Java/C#:** These languages use garbage collection to manage memory automatically. They don't have the concept of lifetimes, but they can still have issues with references pointing to objects that are no longer accessible.

**Lifetimes to the Rescue!**
Rust uses lifetimes to prevent this. Lifetimes are annotations that specify how long a reference is valid. They ensure that references only point to data that is still in scope.
**Example 1: Simple Lifetime Annotations**
```rust
fn main() {
let r; // r has a lifetime 'a
{
let x = 5; // x has a lifetime 'b
r = &x; // r points to x, but 'b is shorter than 'a
} // x goes out of scope here
println!("r: {}", r); // Error! r is pointing to invalid memory
}
```
In this example, `r` has a longer lifetime than `x`. Rust catches this issue and prevents the code from compiling because it knows that `r` will be pointing to invalid memory after `x` goes out of scope.
**Example 2: Lifetime Annotations in Functions**
```rust
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
if x.len() > y.len() {
x
} else {
y
}
}
```
Here, the `longest` function takes two string slices (`x` and `y`) and returns a reference to the longer one. The `'a` lifetime annotation ensures that the returned reference is valid as long as both `x` and `y` are valid.
**The Syntax of Lifetimes**
* **Lifetime Parameters:** Lifetime parameters are declared in angle brackets (`< >`) after the function name. They start with an apostrophe (') and are usually short, like `'a`, `'b`, etc.
* **Lifetime Annotations:** Lifetime annotations are placed after the `&` of a reference, separated by a space. They use the lifetime parameter name, like `&'a str`.
**When to Think About Lifetimes**
You should think about lifetimes when:
* **Returning References from Functions:** If a function returns a reference, you need to make sure the data it's pointing to will still be valid after the function returns.
* **Borrowing from Multiple Sources:** If a reference is borrowed from multiple sources, you need to ensure that all the borrows are valid at the same time.
* **Working with Structures Containing References:** If a structure contains references to data, you need to ensure that the references are valid for as long as the structure exists.
**Lifetime Annotations in Other Languages**
* **C/C++:** You would have to manually manage the lifetimes of references, using techniques like smart pointers or reference counting. This can be complex and error-prone.
* **Java/C#:** You wouldn't need to worry about lifetimes explicitly, as the garbage collector handles memory management. However, you still need to be careful about references to objects that might be garbage collected.
**Misuse of Lifetimes**
While lifetimes are powerful, they can also be misused. Here are some common pitfalls:
* **Ignoring Lifetime Annotations:** If you don't provide lifetime annotations when necessary, Rust might not be able to infer the correct lifetimes, leading to errors.
* **Confusing Lifetime Parameters:** Make sure your lifetime parameters have distinct names and are used consistently throughout your code.
* **Creating Unnecessary Lifetimes:** Don't create lifetime parameters if they aren't needed. This can make your code more complex and harder to understand.
**Common Beginner Doubts**
* **What name should I choose for my lifetime parameters?** The name you choose doesn't really matter, as long as it's consistent within your function. It's common to use short, descriptive names like `'a`, `'b`, `'input`, `'output`, etc.
* **How do I know if I need a lifetime parameter?** If your function returns a reference or borrows from multiple sources, you'll likely need a lifetime parameter. Rust's compiler will usually provide helpful error messages if you're missing one.
**Why are Lifetimes Important?**

* **Safety:** Lifetimes prevent dangling pointers, which can cause crashes or unexpected behavior.
* **Memory Management:** Lifetimes help Rust manage memory efficiently by ensuring that data is only kept around as long as it's needed.
* **Code Clarity:** Lifetimes make your code more readable and easier to understand by explicitly defining the relationships between references and their data.
**The Takeaway**
Lifetimes are a powerful feature in Rust that help you write safe and efficient code. While they might seem a bit complex at first, understanding them is crucial for mastering Rust's memory management system. With a little practice, you'll be able to use lifetimes confidently to build robust and reliable applications.
**Reference :** [Rust book/Rust docs](https://rust-book.cs.brown.edu/ch10-03-lifetime-syntax.html)
**Follow me in [X/Twitter](https://twitter.com/ashsajal1)** | ashsajal |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.