id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,920,153 | Unlocking Protected PDFs | Discover how to bypass Google Drive's view-only restrictions and download protected PDFs using JavaScript. This step-by-step guide offers a practical method to capture high-resolution images of each page and compile them into a PDF. | 0 | 2024-07-11T20:12:42 | https://dev.to/dpaluy/unlocking-protected-pdfs-99b | javascript, googledrive, pdf | ---
title: Unlocking Protected PDFs
published: true
description: Discover how to bypass Google Drive's view-only restrictions and download protected PDFs using JavaScript. This step-by-step guide offers a practical method to capture high-resolution images of each page and compile them into a PDF.
tags: javascript, googledrive, pdf
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-07-11 19:37 +0000
---
Several years ago, I wrote a [Gist](https://gist.github.com/dpaluy/74258794f7930401cc27262e0ea794dd) that received a lot of positive interest, highlighting a common need among developers. Given its popularity, it’s time to share this solution with you. This post will guide you through a practical method to download protected PDFs using JavaScript, ensuring high-resolution output.
This approach allows you to bypass view-only restrictions by capturing high-resolution images of each page.
## Step 1: Open the Document
Open the protected document in the Google Drive viewer.
Scroll through the entire document so that every page is fully loaded. Zooming in first can yield higher-resolution captures.
## Step 2: Open Developer Tools
Open your browser's developer tools (F12, or Ctrl+Shift+I) and navigate to the Console tab.
## Step 3: Run this Script to convert images to PDF
```javascript
let jspdf = document.createElement("script");
jspdf.onload = function () {
  let pdf = new jsPDF();
  let elements = document.getElementsByTagName("img");
  let pageAdded = false;
  for (let img of elements) {
    // Page scans are served as blob: URLs; skip other UI images
    if (!/^blob:/.test(img.src)) {
      continue;
    }
    console.log("adding image", img);
    let can = document.createElement("canvas");
    let con = can.getContext("2d");
    // Use the intrinsic resolution, not the displayed size
    can.width = img.naturalWidth;
    can.height = img.naturalHeight;
    con.drawImage(img, 0, 0, can.width, can.height);
    let imgData = can.toDataURL("image/jpeg", 1.0);
    // Only add a page between images, so the PDF has no trailing blank page
    if (pageAdded) {
      pdf.addPage();
    }
    pdf.addImage(imgData, "JPEG", 0, 0);
    pageAdded = true;
  }
  pdf.save("download.pdf");
};
jspdf.src = "https://cdnjs.cloudflare.com/ajax/libs/jspdf/1.5.3/jspdf.debug.js";
document.body.appendChild(jspdf);
```
Note: Check the original Gist and its comments for various improvements and suggestions.
## Note on Ethical Use
Remember to respect copyright and privacy laws. Use this method responsibly and only for documents you have the right to download.
| dpaluy |
1,920,154 | Cities as Codebases | In the world of software development, we often find ourselves grappling with the complexity of our... | 0 | 2024-07-11T20:16:39 | https://dev.to/youssefibrahim/cities-as-codebases-2mj9 | software, developers, codebases, cities | In the world of software development, we often find ourselves grappling with the complexity of our codebases, much like urban planners navigating the intricate landscapes of cities. The following comparison isn't just a whimsical analogy; it could offer profound insights into how we can improve our software systems by drawing parallels with urban development.
> _"We build our computer [systems] the way we build our cities: over time, without a plan, on top of ruins."_ - Ellen Ullman
## Old Cities and Legacy Code

Think about walking through the old streets of Rome or London. Each part of the city tells a story from the past. For example, under the Shard in London, you can find a Roman mosaic, a piece of history showing how the city used to be.
Legacy code is like these old cities. It's the old programming code in software that has been worked on by many different developers over the years. Just like ancient cities, this code has been built up in layers, with each change adding more complexity. And just as these cities developed over time without a clear plan, legacy code often becomes a mix of many quick fixes and patches, leading to a messy and complicated structure.
## The Unplanned Growth

When cities grow without a plan, it can cause traffic jams, inefficiencies, and a lack of organization. Buildings and roads end up randomly placed, leading to congestion and infrastructure problems.
In software, this kind of unplanned growth happens too. It's called technical debt, where quick fixes and shortcuts build up over time, making the code harder to manage and expand. Scalability refers to how well software can handle growth. Just as a poorly planned city struggles to accommodate more people and traffic, a codebase with lots of technical debt struggles to handle new features and more users.
## Dead Code and Abandoned Buildings

Just like abandoned buildings can harm a neighbourhood's reputation, dead code can have a similar impact on a software project. Abandoned buildings often become neglected eyesores, attracting vandalism and contributing to a decline in the area's appeal.
Similarly, in software, dead code refers to unused or obsolete code that remains in the codebase. This unused code clutters the system, making it more difficult to understand and maintain. It can confuse developers, leading to wasted time and effort, which lowers the overall quality of the codebase.
Regularly identifying and removing dead code is essential for keeping a software project healthy and efficient, much like efforts to renovate and revitalize neglected urban areas.
## The Speed of Urban Development

To illustrate how quickly cities can evolve, consider the photo above, taken during a school geography field trip in the late 1990s. The picture captures One Canada Square in London, at the time the tallest building in the UK, a title it held until it was surpassed by the Shard in 2012. The tower stands in what was once a rural area, marking one of the initial constructions under a new regeneration initiative. Today, it is surrounded by many other skyscrapers, highlighting the city's remarkable transformation and expansion within a relatively brief timeframe.
## Learning from Urban Planning

Urban planning provides valuable lessons that can be applied to software development to create better, more sustainable environments for both users and developers:
- **Modular Design (microservices):** Cities benefit from zoning laws that separate residential, commercial, and industrial areas, which helps in organizing and managing different aspects of urban life. Similarly, in software development, modular design involves breaking down a system into distinct, manageable modules. This approach reduces complexity and makes maintenance easier by focusing on individual components rather than the entire system at once.
- **Green Spaces:** Cities incorporate parks and green spaces to improve quality of life and prevent overcrowding. In software development, this concept translates to regular refactoring and cleanup of the codebase. These "green spaces" in code prevent the accumulation of technical debt—unused or obsolete code—ensuring the system remains efficient and easy to work with over time.
- **Infrastructure Planning:** Well-planned cities have robust infrastructure, such as roads, public transport, and utilities, that support daily activities and growth. Similarly, a well-planned codebase includes clear documentation, consistent coding standards, and automated testing. These elements ensure the software is reliable, scalable, and easy to maintain.
In summary, applying principles from urban planning—such as modular design, regular maintenance (like green spaces), and robust infrastructure planning—to software development can lead to more efficient, scalable, and sustainable software systems.
## Enhancing Codebases with Urban Principles

We can draw inspiration from what makes a great city to enhance our codebases:
- **Accessibility and Inclusivity:** Just as cities aim to be accessible to all residents, our codebases should be accessible to all developers, with clear, understandable code and thorough documentation.
- **Sustainability:** Sustainable cities focus on long-term well-being. In software, this means prioritizing maintainable and scalable solutions over quick fixes, investing time in building a robust architecture.
- **Community Engagement:** Cities thrive when residents are engaged. Similarly, fostering a strong developer community around a codebase, where knowledge is shared and collaboration is encouraged, leads to a healthier codebase.
## Quality of Life in Codebases vs. Cities

The quality of life in a city significantly affects its residents’ happiness and productivity. Similarly, for developers working on a codebase, a well-maintained and thoughtfully designed environment can have profound effects. Here's how some of these parallels unfold:
**Environmental:**
- **City:** Clean air, green spaces, and sustainable practices contribute to a healthier living environment.
- **Codebase:** A well-maintained codebase reduces technical debt and inefficiencies, promoting a cleaner and more efficient software environment.
**Social:**
- **City:** Community engagement, safety, and inclusivity foster a sense of belonging and well-being among residents.
- **Codebase:** A well-designed codebase promotes collaboration, knowledge sharing, and a positive team culture among developers.
**Economic:**
- **City:** Job opportunities, cost of living, and infrastructure impact residents' economic stability.
- **Codebase:** Clear requirements, a shared attitude to code quality and technical debt, a common vision, trust, dedication to automating processes, and awareness of context switching all shape a team's productivity and economic output.
In both urban and software environments, ensuring high quality of life enhances overall happiness, productivity, and satisfaction among residents and developers alike.
## The Urban Planner’s Oath
As we conclude, let us reflect on the Urban Planner’s Oath and adapt it to software development:
- **Respect for Legacy:** _"I honor the wisdom of those who came before me. With humility, I stand on that which they have built."_ Developers should respect the work of previous developers and trust that they did the best they could with what they had.
- **Service to the Present:** _"I serve those who live here today; their actions shape my actions. My vision must dance with theirs."_ Our code should serve current users and developers, accommodating their needs and facilitating their work.
- **Responsibility to the Future:** _"We work for those who come tomorrow. May we deserve their admiration and inspire the best within them."_ Our work should pave the way for future developers, making their tasks easier and inspiring them.
## Conclusion

By viewing our codebases through the lens of urban development, we could gain valuable insights into creating more sustainable, maintainable, and enjoyable software systems. Just as great cities are built with foresight, respect, and community spirit, so too can our codebases be crafted to stand the test of time, serving generations of developers to come.
**Reference:** Naomi Gotts, "We Built This City: The Geographies of Software Development," http://NaomiGotts.co.uk.
| youssefibrahim |
1,920,172 | Running Tests in GitLab CI: From Zero to Pipeline | Continuous Integration (CI) is an essential practice in modern software development. It ensures that... | 0 | 2024-07-11T22:26:06 | https://dev.to/thiagoematos/running-tests-in-gitlab-ci-from-zero-to-pipeline-5ece | gitlab, cicd, springboot, unittest | Continuous Integration (CI) is an essential practice in modern software development. It ensures that code changes are automatically tested, leading to faster and more reliable software releases. In this article, we'll walk through the process of setting up a Spring Boot project with Gradle, writing unit tests, and configuring GitLab CI to automate the testing.
## Pre-requirements:
- A Linux (Ubuntu-based) operating system
- A GitLab account
- A tool to unzip files
## Step 1: Install SDKMAN and Java 21
First, we need to install **SDKMAN**, a tool for managing parallel versions of multiple Software Development Kits (SDKs), including Java.
```
curl -s "https://get.sdkman.io" | bash
source "$HOME/.sdkman/bin/sdkman-init.sh"
```
Close and reopen your terminal to ensure the installation takes effect.
Second, use SDKMAN to install **Java 21**, the current LTS version.
The command below shows the available versions:
```
sdk list java
```

You can choose one of your preferences. Use the Identifier column as reference to install the chosen version:
```
sdk install java 21.0.2-open
```
Third, confirm you've installed successfully
```
java --version
```
## Step 2: Install Git
Next, install **Git**, a version control system that we'll use to manage code changes of our project.
```
sudo apt-get update
sudo apt-get install git
```
## Step 3: Install IntelliJ IDEA
**IntelliJ IDEA** is a popular IDE (Integrated Development Environment) for Java development. Run the following command to install it:
```
sudo snap install intellij-idea-community --classic
```
## Step 4: Create a Spring Boot Project
We'll use **Spring Initializr** to create a new project for the most recent version.
Access https://start.spring.io/ and configure the project as follows:
1. Project: Gradle - Kotlin
2. Language: Java
3. Spring Boot: 3.3.1
4. Packaging: jar
5. Java: 21
6. Dependencies: No dependency is necessary
Click on the button **GENERATE** to download the project as zip.

## Step 5: Import the Project into IntelliJ
Unzip the downloaded file into a folder of your preference.
Open IntelliJ IDEA and import the unzipped project. IntelliJ will automatically detect the Gradle configuration and set up the project accordingly.
## Step 6: Practice TDD
First, let's create a test. We'll create a simple class and a corresponding unit test.
Create a new class named **Calculator** in the folder **src/main/java/com/example/demo**:
```java
public class Calculator {

    public int add(int a, int b) {
        return 0;
    }
}
```
Use the **JUnit** library (it already comes with Spring Boot) to create its unit test in the folder **src/test/java/com/example/demo**:
```java
import org.junit.jupiter.api.Test;

import static org.assertj.core.api.Assertions.assertThat;

class CalculatorTest {

    @Test
    void testAdd() {
        var calculator = new Calculator();
        var result = calculator.add(2, 3);
        assertThat(result).isEqualTo(5);
    }
}
```
Second, let the test fail.
Press one of the **Green Button**s to run the test.

Check that the test fails.

Third, make the test pass by implementing the correct business rule:
```java
public class Calculator {

    public int add(int a, int b) {
        return a + b;
    }
}
```
Finally, run the test again and see that it now passes.

## Step 7: Create a GitLab CI Configuration File
Create a `.gitlab-ci.yml` file in the root of your project. This file defines the pipeline.
```yaml
stages:
  - build

Build:
  stage: build
  image: gradle:8.8.0-alpine
  script:
    - gradle --build-cache clean build
  artifacts:
    when: always
    expire_in: 1 days
    paths:
      - build/libs/*.jar
      - build/test-results/test/*.xml
    reports:
      junit:
        - build/test-results/test/*.xml
  only:
    - main
```
**stages**: define a sequence of steps that the pipeline will execute. The pipeline runs stages in the order they are defined, and all jobs within the same stage run in parallel. Only after all jobs in a stage complete successfully does the pipeline proceed to the next stage.
**Build**: the name of the **job** to be executed in this stage.
**image**: the Docker image used to execute the job.
**script**: the Gradle command that builds the jar, runs the tests, and generates the test report.
**artifacts**: files that outlive the job; here, the built jar and the JUnit XML reports that GitLab uses to display test results.
**only**: restricts the job to the named Git branch (here, `main`).
## Step 8: Create a Project on GitLab
GitLab is a web-based DevOps lifecycle tool that provides a Git repository manager offering source code management, continuous integration, and continuous deployment capabilities.
Go to **GitLab**, create a new blank project, and make it public or private based on your preference.
https://gitlab.com/projects/new#blank_project

## Step 9: Configure Git in Project
Open the terminal, go to the directory where you unzipped your code and configure git on it.
```
git init --initial-branch=main
git remote add origin https://gitlab.com/your-username/spring-boot-unit-test-ci.git
git add .
git commit -m "Initial commit"
git push --set-upstream origin main
```
## Step 10: Check the pipeline running
Go to https://gitlab.com/your-username/spring-boot-unit-test-ci/-/pipelines and click on the first (and only existent) pipeline running.

Here you can see the pipeline running:

After some time, the pipeline passes:

Here you can see the test report:

## Conclusion
By following these steps, you've successfully set up a Spring Boot project with Gradle, written a unit test, and configured GitLab CI to automate the testing. You can now make a test fail intentionally and observe the pipeline breaking, then fix the test and see the pipeline pass again. This iterative process helps ensure that your code is always in a deployable state.
Here is the link to the project:
https://gitlab.com/thiagoematos/spring-boot-unit-test-ci
| thiagoematos |
1,920,173 | Toronto Google Ads Management | Discover the power of precision with Tweaked SEM, your go-to Google Ads agency in Toronto. We... | 0 | 2024-07-11T20:24:48 | https://dev.to/toronto-sem/toronto-google-ads-management-3hck | Discover the power of precision with Tweaked SEM, your go-to **Google Ads agency in Toronto**. We specialize in top-tier **Google Ads management** and **PPC management services** designed to elevate your online presence, amplify web traffic, and skyrocket your leads. Our cutting-edge SEM strategies focus on razor-sharp targeting, seamless landing page optimization, and custom-crafted ad campaigns that convert like crazy. Dive into our suite of services, featuring thorough audits, lightning-fast web traffic generation, and savvy local SEO integration. Our squad of certified Google Ads pros delivers data-driven insights and bespoke marketing solutions. Ready to transform your digital game? Visit us now for more details or snag a free consultation - [Tweaked SEM Google Ads PPC Management Toronto](https://www.tweakedsem.com/google-ads-ppc-management-toronto/). | toronto-sem | |
1,920,174 | Importance of HTML Semantics on SEO | https://docs.google.com/document/d/11x3hfTVJa9n737IzuGdW2OW_QqSiuXOLlOQp0sHN4pw/edit#heading=h.td4t... | 0 | 2024-07-11T20:25:07 | https://dev.to/eunice_ngina_3e1848509848/importance-of-htmlsemantic-on-seo-430e | https://docs.google.com/document/d/11x3hfTVJa9n737IzuGdW2OW_QqSiuXOLlOQp0sHN4pw/edit#heading=h.td4teddxj7m | eunice_ngina_3e1848509848 | |
1,920,175 | Corner Store App Basic Prototype | Corner Store App Corner Store App is a progressive web application (PWA) developed using Next.js and... | 0 | 2024-07-11T20:29:07 | https://dev.to/ror2022/corner-store-app-basic-prototype-45b0 | nextjs, nestjs, typescript, api | Corner Store App
Corner Store App is a progressive web application (PWA) developed using Next.js and Nest.js, designed as a prototype to serve as a platform for small businesses, either individually or in groups. Additionally, it has the potential to be adapted for medium-sized corporations that need to manage multiple branches, inventories, employees, and sales control.
## Technologies and Features
- **Frontend and Backend:** The application is built with Next.js for the frontend and Nest.js for the backend, ensuring agile, efficient, and scalable development.
- **Cloud Storage:** Utilizes AWS S3 buckets for secure and scalable image storage, allowing efficient management of multimedia resources.
- **Database:** MongoDB Atlas offers a flexible and scalable solution to store and manage application data.
- **Authentication and Authorization:** Implements JWT (JSON Web Tokens) and bcrypt to handle user authentication and authorization, ensuring secure and protected access to the application.
## Purpose and Potential
The main goal of this prototype is to demonstrate a solid foundation for development that can expand according to the specific requirements of clients. Thanks to the choice of powerful and modern technologies, Corner Store App guarantees:
- **Scalability:** Grow and adapt easily as business needs increase.
- **Functionality:** Integrate new features and functionalities that enhance user experience and operational efficiency.
- **User Experience (UX/UI):** Offer an intuitive and user-friendly interface that facilitates user interaction with the application.
- **Performance:** Maintain optimal performance, even with increased data and users.
- **Security:** Protect user data and transactions through advanced security practices and technologies.

In summary, Corner Store App is designed to be a robust and scalable solution that can evolve to meet the needs of small and medium-sized businesses, providing a comprehensive platform for managing their daily operations.
## Frontend
- URL: https://clientcornerstore.vercel.app/
- Code: https://github.com/ROR2022/clientcornerstore

The frontend of Corner Store App is built using Next.js, a powerful React framework that enables server-side rendering and static site generation. This ensures a fast and responsive user experience, which is critical for web applications. Key features of the frontend include:
- **User Interface:** Designed with an intuitive and user-friendly interface that simplifies interaction and navigation.
- **Responsive Design:** Ensures compatibility across various devices and screen sizes, providing a seamless experience for users.
- **Performance Optimization:** Utilizes Next.js features such as server-side rendering and static site generation to enhance performance and load times.
## Backend
- URL: https://servercornerstore-production.up.railway.app/
- Code: https://github.com/ROR2022/servercornerstore

The backend of Corner Store App is developed using Nest.js, a progressive Node.js framework that provides a robust architecture for building scalable and maintainable server-side applications. Key features of the backend include:
- **API Development:** Provides a comprehensive set of APIs for handling various functionalities such as user management, inventory control, and sales tracking.
- **Database Management:** Utilizes MongoDB Atlas for a flexible and scalable database solution, ensuring efficient data storage and retrieval.
- **Security:** Implements JWT (JSON Web Tokens) and bcrypt for secure user authentication and authorization, protecting sensitive data and transactions.
- **Cloud Integration:** Uses AWS S3 buckets for secure and scalable storage of images and other multimedia resources.
## Short Description
Corner Store App is a progressive web application (PWA) developed using Next.js and Nest.js, designed to serve small businesses or medium-sized corporations managing multiple branches, inventories, employees, and sales. It leverages AWS S3 for secure image storage, MongoDB Atlas for flexible data management, and uses JWT and bcrypt for secure user authentication and authorization. With an intuitive UI and responsive design, the app ensures optimal performance through server-side rendering and static site generation. The main goal is to provide a scalable, functional, and secure platform that can adapt to the evolving needs of businesses, enhancing user experience and operational efficiency.
- Resume: https://docs.google.com/document/d/104ek8dOTdOU6RcDMtGT-g1T--FWxq2earIDvMZoQ79E/edit?usp=sharing
- Portfolio: https://prodigy-wd-05-kappa.vercel.app/#/portfolio
- GitHub: https://github.com/ROR2022
- LinkedIn: https://www.linkedin.com/in/ramiro-ocampo-5a661b1a7/
Best regards,
Ramiro Ocampo Rodriguez
ReactJS/NodeJS Developer
Tel: +52 7777937484
WhatsApp: +52 8332998900
rami.ror279@gmail.com
Cuernavaca, Mor., México
| ror2022 |
1,920,176 | Phase 3 sql, tuples, and object problems | Hello, my name is Daniel Trejo and I'm a student at Flatiron School in the Software Engineering... | 0 | 2024-07-11T20:29:48 | https://dev.to/daniel_trejo14/phase-3-sql-tuples-and-object-problems-1k8d | | Hello, my name is Daniel Trejo and I'm a student at Flatiron School in the Software Engineering program. Phase 3 was definitely the hardest one yet because of the sheer amount of curriculum. The hardest part for me was the project, which is probably not surprising given that it is the final project of the phase. However, I ran into a ton of issues right off the bat, but once I got started it wasn't actually that bad. That was until I finished it. Now you might be saying, "Isn't being done a good thing?" Normally yes, but with coding, being done writing code doesn't always mean the project is done.
Now I ran into a heap of issues at this point, and they were all super simple fixes except for one. I needed some data to display at the end of another piece of data, but these 2 pieces of data are from different places. Now to some people that might be a simple fix, but to me, who has been struggling a lot recently with personal stuff and the sheer scope of the phase, it wasn't. It took me until doing the project to even start completely understanding Python, the CLI, ORMs, and SQL.
The problem was I wasn't grabbing the information correctly so I had to create a whole new function and test run for hours trying to get the right thing. Once I finally did, I was so happy but that was until I ran into another problem.
I needed it to return an object, but all it kept returning was a tuple, which meant the information I was getting wasn't usable at all. That's when I found out, through using one of the best tools in Python,
`breakpoint()`
that it WAS returning something, and that something was a tuple, but my program didn't know what to do with that tuple. Nothing actually showed up in the return until I grabbed the value out of the tuple by index with `[0]` brackets. Yay, finally it was fixed. However, as anyone who fixes something will know, another problem popped up: I was returning everything as a tuple of plain strings instead of as objects. A requirement of the project was to be able to view related objects, and I was viewing just the string of what I needed, which isn't an object. So that's when I figured out that I needed to return an object, not a tuple. Instead of using SQL like this,
```
sql = """
SELECT name
FROM authors
WHERE id = ?
"""
```
I had to use it like this,
```
sql = """
SELECT *
FROM authors
WHERE id = ?
"""
```
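To make the tuple-versus-object distinction concrete, here is a minimal sketch using Python's built-in `sqlite3` and a hypothetical `Author` class (not the actual project code):

```python
import sqlite3

class Author:
    def __init__(self, id, name):
        self.id = id
        self.name = name

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO authors (id, name) VALUES (1, 'Octavia Butler')")

# SELECT name returns a one-element tuple, not an object
row = conn.execute("SELECT name FROM authors WHERE id = ?", (1,)).fetchone()
print(row)     # ('Octavia Butler',)
print(row[0])  # 'Octavia Butler' -- indexing into the tuple

# SELECT * returns all the columns, enough to rebuild the full object
row = conn.execute("SELECT * FROM authors WHERE id = ?", (1,)).fetchone()
author = Author(*row)
print(author.name)  # 'Octavia Butler' -- now it's an object with attributes
```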
The first way was just grabbing the string (the name) instead of grabbing the whole object, which the second version did. Then I had to call it in my repr as self.author().name, because using .name is one of the 2 ways to grab something from an object. | daniel_trejo14 |
1,920,178 | Rewear: Transforming Sustainable Fashion with Wix Studio | This is a submission for the Wix Studio Challenge . What I Built Rewear is a sustainable... | 0 | 2024-07-11T20:42:32 | https://dev.to/ketanrajpal/rewear-transforming-sustainable-fashion-with-wix-e-commerce-4l3a | devchallenge, wixstudiochallenge, webdev, javascript | *This is a submission for the [Wix Studio Challenge ](https://dev.to/challenges/wix).*
## What I Built
Rewear is a sustainable clothing exchange platform designed to transform how you experience fashion. Our platform allows users to send their pre-loved clothing items for quality control, browse curated selections, and receive new styles, all while reducing waste and promoting eco-friendly living. With Rewear, you can refresh your wardrobe, renew your style, and redefine your fashion choices sustainably. Additionally, **people can also bid for other people’s clothes on the platform**.
## Demo
Link: [https://kb2yj4664f.wixstudio.io/rewear](https://kb2yj4664f.wixstudio.io/rewear)
{% embed https://www.youtube.com/watch?v=VYl6exQ0Jgg %}
## Development Journey
To bring Rewear to life, I leveraged the robust capabilities of Wix Studio. Using the Wix VS Code IDE, I added hooks and wrote JavaScript code to automate processes and enhance functionality.
Wix APIs and Libraries Used:
1. **wix-user**: Managed user authentication and profiles, ensuring a secure and personalized experience for each user.
2. **wix-site-frontend**: Created dynamic, interactive user interfaces to enhance the user experience.
3. **wix-data**: Managed and queried collections of data, facilitating seamless data handling and storage.
4. **Collection Hooks Library**: Automated processes such as quality control checks and updating item statuses.
I also utilised the Wix JavaScript interface to create a custom bidding system for our “Bids for Good” social auction. This feature allows users to bid on unique, pre-loved clothing items, supporting sustainable fashion while engaging in an exciting auction experience. The platform uses two main collections: one for auction items and another for page content. The auction collection contains all the dresses available for bidding, making it easy for users to participate and track their bids. | ketanrajpal |
1,920,179 | The Battle of Databases: SQL, PostgreSQL, MongoDB, and Redis Explored | As a data enthusiast, you've likely encountered the names SQL, PostgreSQL, MongoDB, and Redis, but... | 0 | 2024-07-11T20:35:45 | https://dev.to/aquibpy/the-battle-of-databases-sql-postgresql-mongodb-and-redis-explored-jbc | redis, sql, postgres, mongodb | As a data enthusiast, you've likely encountered the names SQL, PostgreSQL, MongoDB, and Redis, but what exactly are they, and when should you use them? This blog aims to demystify these powerful database technologies, comparing their features and use cases, and offering a head start so you can begin using them.
**SQL: The Foundation**
SQL (Structured Query Language) is the king of relational databases. It's the language used to interact with structured data, organized into tables with rows and columns. Imagine a spreadsheet, but on a much larger scale.
**Key Features:**
* **Data Integrity:** SQL excels at maintaining data consistency through features like primary and foreign keys.
* **Structured Query Language:** SQL uses a powerful query language for data manipulation and retrieval.
* **Transactions:** Ensures data accuracy by treating operations as atomic units, either all succeed or all fail.
* **Widely adopted:** SQL has a vast community, making it easy to find resources and support.
**Examples:**
```sql
-- Retrieve all customers from the city New York
SELECT * FROM customers WHERE city = 'New York';

-- Increase the price of all electronics products by 10%
UPDATE products SET price = price * 1.10 WHERE category = 'Electronics';
```
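The all-or-nothing behavior of transactions can be demonstrated with Python's standard-library `sqlite3` module (used here as a stand-in for any SQL database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # everything in this block is one transaction
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("power failure mid-transfer")  # simulate a crash
        conn.execute("UPDATE accounts SET balance = balance + 50 WHERE name = 'bob'")
except RuntimeError:
    pass

# The half-finished transfer was rolled back: Alice still has her 100
balance = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(balance)  # 100
```

Either both UPDATEs take effect or neither does; money never disappears partway through.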
**PostgreSQL: The Relational Powerhouse**
PostgreSQL, often called Postgres, is a popular open-source object-relational database system that extends the power of SQL.
**Why choose PostgreSQL?**
* **Advanced Features:** Supports features like inheritance, foreign data wrappers, and triggers.
* **Data Integrity:** Provides ACID (Atomicity, Consistency, Isolation, Durability) properties for transactional integrity.
* **Extensible:** Offers a wide range of extensions for custom functionality.
* **Mature and Stable:** A long-standing database with a strong reputation for reliability.
**MongoDB: The NoSQL Dynamo**
MongoDB is a NoSQL database, meaning it doesn't follow the rigid structure of SQL. Instead, it uses document-oriented storage, where data is represented as JSON-like documents.
**MongoDB Advantages:**
* **Flexibility:** Adapts to changing data structures and schema changes.
* **Scalability:** Designed for high availability and horizontal scaling, making it ideal for large datasets.
* **Ease of use:** Uses a simple document-based model for efficient data storage and retrieval.
**Examples:**
```javascript
// Insert a new user document
db.users.insertOne({ name: "John Doe", age: 30, city: "New York" });

// Retrieve all users older than 25
db.users.find({ age: { $gt: 25 } });
```
**Redis: The Speed Demon**
Redis is an in-memory data store known for its lightning-fast performance. It acts as a caching layer, storing frequently accessed data in memory for quick retrieval.
**Redis Use Cases:**
* **Caching:** Improves website performance by storing frequently used data in memory.
* **Session Management:** Stores user session data for faster access.
* **Real-time Analytics:** Handles high-frequency data processing for real-time insights.
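The caching use case above follows the cache-aside pattern: check the cache first and fall back to the slower source on a miss. Here is a minimal JavaScript sketch of that pattern; the in-memory `Map` is only a stand-in for a real Redis client (e.g. `node-redis` or `ioredis`), and `loadFromDb` is a hypothetical database loader:

```javascript
// Cache-aside pattern: serve from the cache when possible,
// otherwise load from the slow source and populate the cache.
const cache = new Map(); // stand-in for a Redis client

async function getUser(id, loadFromDb) {
  const key = `user:${id}`;
  if (cache.has(key)) {
    return cache.get(key); // cache hit: served from memory
  }
  const user = await loadFromDb(id); // cache miss: hit the database
  cache.set(key, user); // populate the cache for next time
  return user;
}
```

With Redis, the `Map` calls would become `GET`/`SET` commands, usually with an expiry (`EX`) so cached entries do not stay stale forever.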
**Tips for Getting Started**
* **Learn SQL:** Even if you're using NoSQL databases, understanding SQL is essential for data manipulation.
* **Choose the right tool:** Consider your specific requirements, like data structure, scalability, and performance needs.
* **Experiment with different databases:** Start with small projects to understand the strengths and weaknesses of each technology.
**Conclusion**
Choosing the right database can make or break your data-driven project. SQL, PostgreSQL, MongoDB, and Redis offer unique strengths and are well-suited for different use cases. By understanding their differences and experimenting with their capabilities, you can confidently navigate the world of data management.
| aquibpy |
1,920,180 | Angular 18.1: Template Local Variables with @let | Key takeaways Syntax: let variableName = expression; Scope: The variable is only... | 0 | 2024-07-11T20:36:56 | https://dev.to/debba/angular-181-template-local-variables-with-let-3mh9 | webdev, javascript, angular, frontend | ## Key takeaways
* **Syntax**: `@let variableName = expression;`
* **Scope**: The variable is only available within the HTML element or block where it's declared.
* **Common scenarios**:
* Reduce repetition of complex expressions.
* Better handle type narrowing.
* Define complex styling options.
* Improve template readability.
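A minimal template sketch of these scenarios (the `user` object and `compact` flag are hypothetical component properties):

```html
@let fullName = user.firstName + ' ' + user.lastName;
@let padding = compact ? '4px' : '12px';

<section [style.padding]="padding">
  <!-- The concatenation is written once and reused, instead of being repeated. -->
  <h2>{{ fullName }}</h2>
  <p>Welcome back, {{ fullName }}!</p>
</section>
```

Note that `fullName` and `padding` are only available from the point of declaration onward, within the scope where they are declared.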
## Best practices
* Use let to minimize repetition.
* Use let to improve type narrowing.
* Use let to define complex styling options.
* Carefully consider using let for calculations or business rules.
## Thoughts
The let declaration is a welcome addition that simplifies variable management in templates and improves code readability. It's particularly useful for reducing repetition and defining complex styling options. However, it's important to carefully consider using let for calculations or business rules, as it could make the code harder to maintain.
## Additional resources
* Official Angular blog post introducing the let declaration: [https://blog.angular.dev/introducing-let-in-angular-686f9f383f0f?gi=63f5b3e52822](https://blog.angular.dev/introducing-let-in-angular-686f9f383f0f?gi=63f5b3e52822)
How do you plan to use the let declaration in your Angular projects?
Leave a comment below and share your thoughts! | debba |
1,920,182 | The Elegance and Innovation of Omega Watches: A Timeless Investment | Omega watches represent the pinnacle of Swiss watchmaking excellence, combining precision,... | 0 | 2024-07-11T20:47:01 | https://dev.to/shuvronil_datta_420be2d75/the-elegance-and-innovation-of-omega-watches-a-timeless-investment-352d | Omega watches represent the pinnacle of Swiss watchmaking excellence, combining precision, innovation, and timeless design. Whether you are a seasoned collector or a first-time buyer, an Omega watch is a worthy addition to any collection. With their rich heritage, exceptional craftsmanship, and versatile styles, Omega watches continue to captivate and inspire watch enthusiasts around the world.
For more detailed insights and to explore the exquisite range of Omega watches, visit the [official Omega watch collection](https://timeavenue.com/top-10-best-watches-for-men-under-10-lakhs/) at Time Avenue. Discover the elegance and precision of Omega watches and find the perfect timepiece to complement your style and lifestyle. Additionally, you can explore the official omega watch collection to expand your collection with another prestigious brand. | shuvronil_datta_420be2d75 | |
1,920,184 | HOST STATIC WEBSITE ON NETTLIFY; A BEGINNER-FRIENDLY GUIDE TO WEB DEVELOPMENT | Prerequisite basic knowledge in html, basic knowledge in cascading styles (css) basic JavaScript... | 0 | 2024-07-11T21:39:18 | https://dev.to/psam4ord/host-static-website-on-nettlify-a-beginner-friendly-guide-to-web-development-3kog | webdev, html, beginners, programming | _**Prerequisite**_
- basic knowledge of HTML
- basic knowledge of CSS (Cascading Style Sheets)
- basic knowledge of JavaScript
**Project Overview**
- This is a static website project hosted on Netlify and available on GitHub.
- The project was created using basic web development tools and does not utilize any npm packages, making it straightforward and beginner-friendly.
**Tools**
- GitHub: For hosting the code repository remotely.
- Git: For version control and managing the code repository.
- Visual Studio Code: As the code editor.
- Netlify: For hosting the website.
## GETTING STARTED
1. Clone the Repository
To get a copy of the project up and running on your local machine, clone the repository from GitHub:
- `git clone https://github.com/Psam4ord/aws-resume-project.git`
2. Open in Visual Studio Code:
Navigate to the project directory and open it in Visual Studio Code:
```
cd aws-resume-project
code .
```
Now press enter
# Project Structure
    aws-resume-project/
    ├── index.html
    ├── style.css
    ├── script.js
    └── README.md
NB: You can make modifications to the files to suit your style.
3. **Hosting on Netlify**
- Sign Up or Log In
Go to [Netlify](https://www.netlify.com) and sign up for an account if you don't have one. Log in if you already have an account.
- Click on "New site from Git" on your Netlify dashboard.
Connect to your GitHub account and select the repository of your project.
4. **Deploy Settings**
- For a basic static site, you don’t need to change any settings.
- Click on "Deploy site."
- Once the deployment is complete, [Netlify](https://www.netlify.com) will provide you with a URL where your site is live.
5. **Conclusion**
You've successfully set up and deployed a static website on Netlify.
This simple setup is ideal for beginners and provides a solid foundation for further web development projects.
> _"Take the first step in faith. You don't have to see the whole staircase. Just take the first step."_
> Dr. Martin Luther King Jr., 1929-1968
| psam4ord |
1,920,185 | Flutter Gooey Blobs | Learn how to make gooey blobs with custom painter in Flutter, and how you might be able to use them in your projects. | 0 | 2024-07-11T20:59:36 | https://code.pieces.app/blog/flutter-gooey-blobs | <figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/gooey-blobs_640fcfb3316f48f20b3ec6e2c92a9736.jpg" alt="Flutter Gooey Blobs with CustomPainter."/></figure>
Have you ever seen these gooey blobs on the internet?
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image_f05f6c80470d23899c3920503db6205b.gif" alt="A gif of two blobs merging with each other."/></figure>
They look kinda cool, right? Have you ever thought it would be really cool to create something like this in Flutter?
A few days ago, Twitter user [@double__glitch](https://twitter.com/double__glitch) demonstrated creating [gooey blobs in Figma](https://twitter.com/double__glitch/status/1618213807503060993). Seeing his technique reminded me of drawing something on a canvas, which led to me thinking about CustomPainter, and I thought, “I can do that! Somehow...”
After an hour or so, I did manage to re-create it in Flutter! Here’s how:
## Why CustomPainter?
As I said earlier, the technique shown by [@double__glitch](https://twitter.com/double__glitch) is like drawing on a canvas. CustomPainter serves exactly that purpose. It provides you with a set of APIs ranging from drawing shapes on a canvas to adding layer composite effects, combining image filters, blend modes, and more.
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image-1-copy-2_e4c9c3ffcbb62093af2749a537fe2d82.gif" alt="Six different paint effects."/></figure>
We should be able to do this in Flutter, too! If you aren’t aware, Flutter converts your widget layers into a set of canvas painting instructions as part of the rendering pipeline. Somewhere down the line, these instructions are rendered on the screen.
Before diving into it, here’s something that you’ll need to know to achieve this effect:
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/zelda-take-this_9e1a0c1f57b5bf57d7dcfeab2bd6f949.jpg" alt="Zelda meme that says, &quot;It&#39;s dangerous to go alone! Take this: saveLayer.&quot;"/></figure>
### saveLayer
`saveLayer` allows you to create composite effects on the canvas. Normally, each canvas instruction is painted individually, which makes it hard to apply a composite effect to a group of shapes. With `saveLayer`, you can group those shapes and apply an effect to them as one single layer on the canvas.
```
canvas.saveLayer(Rect.fromLTWH(0, 0, size.width, size.height), paint);
// drawing instructions here are grouped together.
canvas.restore();
```
[Save this code](https://takrutvik.pieces.cloud/?p=1dcc45831d)
`saveLayer` takes the size of the layer and a `Paint` object. Any canvas instructions called after calling `saveLayer` will be grouped, and once you call `canvas.restore`, that group will be flattened out into a layer on which the `Paint` object’s image filters and blend modes will be applied.
Now, back to creating our gooey blobs!
## Adding Blurred Circles
We’ll use the `canvas.drawCircle()` to draw two circles on the canvas and add a blur filter to the layer using `saveLayer`.
```
import 'dart:ui';
import 'package:flutter/material.dart';
class BlobsView extends StatefulWidget {
const BlobsView({super.key});
@override
State<BlobsView> createState() => _BlobsViewState();
}
class _BlobsViewState extends State<BlobsView> {
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: CustomPaint(
painter: _BlobsPainter(),
size: MediaQuery.of(context).size,
),
),
);
}
}
class _BlobsPainter extends CustomPainter {
@override
void paint(Canvas canvas, Size size) {
final blackCirclesPainter = Paint()
..color = Colors.black
..style = PaintingStyle.fill;
final blurLayerPaint = Paint()
..color = const Color(0xff808080)
..style = PaintingStyle.fill
..imageFilter = ImageFilter.blur(
sigmaX: 10,
sigmaY: 10,
tileMode: TileMode.decal,
);
canvas.saveLayer(Rect.fromLTWH(0, 0, size.width, size.height), blurLayerPaint);
canvas.drawCircle(Offset(size.width / 2 - 50, size.height / 2), 70, blackCirclesPainter);
canvas.drawCircle(Offset(size.width / 2 + 50, size.height / 2), 60, blackCirclesPainter);
canvas.restore();
}
@override
bool shouldRepaint(covariant CustomPainter oldDelegate) {
return true;
}
}
```
[Save this code](https://takrutvik.pieces.cloud/?p=36a6428584)
Here’s how it looks. Nothing fancy right now.
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/screenshot-2024-07-07-at-52848-pm_50bb5ef87dabe7fb5fa0d0ef6eb3e600.jpg" alt="Two blurry conjoined circles."/></figure>
## Adding a colorDodge Layer
Next, we’ll add certain layers on top of our canvas with specific blend mode applied to them. The blend mode will specify how the layer should be composed with its background. In this article, we won’t dive deep into how these blend modes work, but if you’re curious, I would recommend reading [these w3 docs](https://www.w3.org/TR/compositing-1/#blendingcolordodge) on the different blend modes mentioned in this article and how they work.
First is the `colorDodge` layer. We’ll achieve this by creating a rectangle, giving it a bright color like `Color(0xff808080)`, and setting its blend mode to `BlendMode.colorDodge`.
This will go right after the `canvas.restore` call:
```
canvas.drawRect(
Rect.fromCenter(center: Offset(size.width / 2, size.height / 2), width: size.width, height: size.height),
Paint()
..color = const Color(0xff808080)
..style = PaintingStyle.fill
..blendMode = BlendMode.colorDodge,
);
```
[Save this code](https://takrutvik.pieces.cloud/?p=136b46a1a0)
Now we get something interesting.
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/screenshot-2024-07-07-at-52918-pm_96c5f146682b0df9f2db44e7a60b82b8.jpg" alt="Two conjoined circles that are only a little blurry."/></figure>
`colorDodge` is one of the blend modes that can be used to brighten the destination image (our canvas in this case) with respect to the source image (the blend mode layer). It basically brightens the brighter areas and adds more contrast and saturation to them. In our case, it helps create sharper outlines for our circles, replacing the blurred outlines. Darker areas of the destination image aren’t affected as much as the brighter areas.
## Adding a colorBurn Layer
Now we’ll add a `colorBurn` layer on top of our whole canvas. Similar to what we did earlier, we’ll create a rectangle, make it black, and set its blend mode to `BlendMode.colorBurn`.
```
canvas.drawRect(
Rect.fromCenter(center: Offset(size.width / 2, size.height / 2), width: size.width, height: size.height),
Paint()
..color = Colors.black
..style = PaintingStyle.fill
..blendMode = BlendMode.colorBurn,
);
```
[Save this code](https://takrutvik.pieces.cloud/?p=33394fadd5)
We’re getting close!
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/screenshot-2024-07-07-at-52931-pm_b76dc0d63332c4144b1eda0d5feef982.jpg" alt="Two conjoined circles that are not blurry."/></figure>
`colorBurn` is basically the opposite of `colorDodge`. It darkens the destination image with respect to the source image. With this, the inside blurred areas of the circles become completely black, creating a sharp, gooey shape.
## Adding A Gradient Layer
As before, we’ll add a gradient layer with a blend mode set to `screen`. Setting the blend mode to `screen` will compose this layer as an overlay on top of our gooey shapes, excluding the white areas of the canvas.
```
canvas.drawRect(
Rect.fromCenter(center: Offset(size.width / 2, size.height / 2), width: size.width, height: size.height),
Paint()
..style = PaintingStyle.fill
..shader = const RadialGradient(
colors: [Colors.yellow, Colors.pink],
).createShader(
Rect.fromCenter(
center: Offset(size.width / 2, size.height / 2),
width: size.width,
height: size.height,
),
)
..blendMode = BlendMode.screen,
);
```
[Save this code](https://takrutvik.pieces.cloud/?p=0f934fa062)
With that, our gooey blobs are ready! 🎉
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/final-version-after-adding-drag-controls_af11f69f8f93cff24333364b7206ce2f.gif" alt="Two circles melting into each other."/></figure>
This is the final version after adding drag controls.
We finally have gooey blobs in Flutter and that wasn’t crazy hard to do. It’s interesting how using composite effects with various blend modes can help you create some interesting visuals like this.
Use these blobs cautiously though :)
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_32099/image-2_fd2826d41c52306daa91bca3c7a21719.gif" alt="The evil ghost blob from Ghostbusters."/></figure>
## Conclusion
You can find the source code for these gooey blobs [here](https://github.com/rutvik110/Flutter-Animations/blob/master/lib/custom_painters/blobs.dart) among some of my other creative work. Feel free to poke around and play with them :) | get_pieces | |
1,920,186 | SEMANTIC HTML AND SEO | ROLE OF HTML SEMANTIC ELEMENTS IN SEARCH ENGINE OPTIMISATION SEMANTIC TAGGS ARE CRUCIAL IN... | 0 | 2024-07-11T21:02:59 | https://dev.to/peter_noel_33fcf5672a5db1/semantic-html-and-seo-123e | html, beginners, programming, webdev | ## **_ROLE OF HTML SEMANTIC ELEMENTS IN SEARCH ENGINE OPTIMISATION_**
Semantic tags are crucial to the performance of a web page. Here are some insights on how HTML semantic elements improve web page performance and accessibility:
https://docs.google.com/document/d/1eR_ZlTRXG03RrKJmn2Ypw5vwKSLb8yO83RLCT5eLQT4/edit?usp=sharing | peter_noel_33fcf5672a5db1 |
1,920,187 | shadcn-ui/ui codebase analysis: How does shadcn-ui CLI work? — Part 2.11 | I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the... | 0 | 2024-07-11T21:06:53 | https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-does-shadcn-ui-cli-work-part-211-46ml | javascript, opensource, nextjs, shadcnui | I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the shadcn-ui/ui CLI.
In part 2.10, we looked at the getRegistryBaseColors function, prompts, creating components.json, and resolveConfigPaths.

Now that we understand how the promptForMinimalConfig function works, it is time to find out how the runInit function works.
runInit
-------
```js
export async function runInit(cwd: string, config: Config) {
  const spinner = ora(`Initializing project...`)?.start()

  // Ensure all resolved paths directories exist.
  for (const [key, resolvedPath] of Object.entries(config.resolvedPaths)) {
    // Determine if the path is a file or directory.
    // TODO: is there a better way to do this?
    let dirname = path.extname(resolvedPath)
      ? path.dirname(resolvedPath)
      : resolvedPath

    // If the utils alias is set to something like "@/lib/utils",
    // assume this is a file and remove the "utils" file name.
    // TODO: In future releases we should add support for individual utils.
    if (key === "utils" && resolvedPath.endsWith("/utils")) {
      // Remove /utils at the end.
      dirname = dirname.replace(/\/utils$/, "")
    }

    if (!existsSync(dirname)) {
      await fs.mkdir(dirname, { recursive: true })
    }
  }

  const extension = config.tsx ? "ts" : "js"

  const tailwindConfigExtension = path.extname(
    config.resolvedPaths.tailwindConfig
  )

  let tailwindConfigTemplate: string
  if (tailwindConfigExtension === ".ts") {
    tailwindConfigTemplate = config.tailwind.cssVariables
      ? templates.TAILWIND_CONFIG_TS_WITH_VARIABLES
      : templates.TAILWIND_CONFIG_TS
  } else {
    tailwindConfigTemplate = config.tailwind.cssVariables
      ? templates.TAILWIND_CONFIG_WITH_VARIABLES
      : templates.TAILWIND_CONFIG
  }

  // Write tailwind config.
  await fs.writeFile(
    config.resolvedPaths.tailwindConfig,
    template(tailwindConfigTemplate)({
      extension,
      prefix: config.tailwind.prefix,
    }),
    "utf8"
  )

  // Write css file.
  const baseColor = await getRegistryBaseColor(config.tailwind.baseColor)
  if (baseColor) {
    await fs.writeFile(
      config.resolvedPaths.tailwindCss,
      config.tailwind.cssVariables
        ? config.tailwind.prefix
          ? applyPrefixesCss(baseColor.cssVarsTemplate, config.tailwind.prefix)
          : baseColor.cssVarsTemplate
        : baseColor.inlineColorsTemplate,
      "utf8"
    )
  }

  // Write cn file.
  await fs.writeFile(
    `${config.resolvedPaths.utils}.${extension}`,
    extension === "ts" ? templates.UTILS : templates.UTILS_JS,
    "utf8"
  )

  spinner?.succeed()

  // Install dependencies.
  const dependenciesSpinner = ora(`Installing dependencies...`)?.start()
  const packageManager = await getPackageManager(cwd)

  // TODO: add support for other icon libraries.
  const deps = [
    ...PROJECT_DEPENDENCIES,
    config.style === "new-york" ? "@radix-ui/react-icons" : "lucide-react",
  ]

  await execa(
    packageManager,
    [packageManager === "npm" ? "install" : "add", ...deps],
    {
      cwd,
    }
  )

  dependenciesSpinner?.succeed()
}
```
This function is rather large, so let’s break the analysis down by studying small chunks of code.
Well, this code already has some comments added that are specific to the operations. We can follow the same comments to break this analysis down into parts.
1. Ensure all resolved paths directories exist.
2. Write tailwind config.
3. Write css file.
4. Write cn file.
5. Install dependencies.
In this article, let’s find out how shadcn-ui/ui CLI ensures all resolved paths directories exist.
Ensure all resolved paths directories exist.
--------------------------------------------
```js
// Ensure all resolved paths directories exist.
for (const [key, resolvedPath] of Object.entries(config.resolvedPaths)) {
  // Determine if the path is a file or directory.
  // TODO: is there a better way to do this?
  let dirname = path.extname(resolvedPath)
    ? path.dirname(resolvedPath)
    : resolvedPath

  // If the utils alias is set to something like "@/lib/utils",
  // assume this is a file and remove the "utils" file name.
  // TODO: In future releases we should add support for individual utils.
  if (key === "utils" && resolvedPath.endsWith("/utils")) {
    // Remove /utils at the end.
    dirname = dirname.replace(/\/utils$/, "")
  }

  if (!existsSync(dirname)) {
    await fs.mkdir(dirname, { recursive: true })
  }
}
```
In article 2.10, I talked about how the resolvedPaths object is added to config.

```js
// Determine if the path is a file or directory.
// TODO: is there a better way to do this?
let dirname = path.extname(resolvedPath)
  ? path.dirname(resolvedPath)
  : resolvedPath
```
The above code uses [path](https://www.npmjs.com/package/path). The path.extname() method returns the extension of the path, from the last occurrence of the . (period) character to the end of the string in the last portion of the path. If there is no . in the last portion of the path, or if there are no . characters other than the first character of the basename of the path (see path.basename()), an empty string is returned.
```js
// If the utils alias is set to something like "@/lib/utils",
// assume this is a file and remove the "utils" file name.
// TODO: In future releases we should add support for individual utils.
if (key === "utils" && resolvedPath.endsWith("/utils")) {
  // Remove /utils at the end.
  dirname = dirname.replace(/\/utils$/, "")
}
```
The comment in the above code explains it all.
```js
if (!existsSync(dirname)) {
  await fs.mkdir(dirname, { recursive: true })
}
```
[existsSync](https://nodejs.org/api/fs.html#fsexistssyncpath) is a function from the “fs” package; it returns true if the path exists and false otherwise.
If the directory does not exist, fs.mkdir is used to create it.
Conclusion:
-----------
Now that I understood how the promptForMinimalConfig function works, it was time to move on to finding out how the runInit function works in the shadcn-ui/ui CLI source code.
runInit function is rather large, let’s break this analysis down by studying small code chunks. This already has some comments explaining what it does. These operations with comments are as follows:
1. Ensure all resolved paths directories exist.
2. Write tailwind config.
3. Write css file.
4. Write cn file.
5. Install dependencies.
I discussed how shadcn’s init command ensures all resolved paths directories exist by using existsSync from the “fs” package; if a directory does not exist, the command simply creates it with mkdir.
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/)
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
[Build shadcn-ui/ui from scratch](https://tthroo.com/)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/commands/init.ts#L81](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/commands/init.ts#L81)
2. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/commands/init.ts#L307](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/commands/init.ts#L307) | ramunarasinga |
1,920,188 | What is WebRTC protocol? | WebRTC or Web Real Time Communication protocol is an open source protocol and technology that enables... | 0 | 2024-07-11T22:31:54 | https://www.metered.ca/blog/what-is-webrtc-protocol/ | webdev, javascript, webrtc, devops | WebRTC or Web Real Time Communication protocol is an open source protocol and technology that enables real time communication directly between web browsers and webRTC enabled applications
Using WebRTC, you can do video calling, audio calling, and data transfer between devices.
This capability is implemented using a set of JavaScript APIs that enable video, audio, and data transmission between devices, supported by protocols and techniques such as ICE, STUN, TURN, NAT traversal, and SDP.
We are going to learn more about these protocols below.

## ICE (Interactive Connectivity Establishment)
ICE is a protocol used to find the best path between devices, that is, to establish a connection between them.
ICE navigates the best path through NAT routers and firewalls, overcoming the connectivity barriers they introduce with the help of STUN and TURN servers.
### How does it work:
ICE gathers all the candidates for the media streams, that is, the potential paths between the devices trying to connect.
It first tries a direct connection, using STUN servers to find the client devices' public IP addresses. If that fails due to NAT devices or firewall rules, it falls back to relaying traffic around NAT through a TURN server.
If you are looking for ICE servers and want to know more about ICE, refer to our article: Interactive Connectivity Establishment (ICE) Server: The Complete Guide.
If you are looking for a TURN server for your app, you can consider the Metered TURN servers, a global TURN server service provider.
## STUN server (Session Traversal Utilities for NAT)
STUN servers are used by devices that are behind a NAT to find out their public IP address and port number.
Devices behind a NAT have private IP addresses assigned to them by the NAT router.
The traffic of all the devices behind a particular NAT is routed through a single (or a few) public IP addresses.
When devices want to connect with each other directly, they need to know their own and each other's public IP and port number.
Client devices use a STUN server to discover their own public IP and port number, which they then send to each other to establish a direct line of communication.
A client device sends a request to the STUN server, which replies with the IP address and port number the request came from.
There are a lot of free and paid STUN servers available. Google also provides free STUN servers for public use (the Google STUN server list).
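In the browser, the STUN servers a client should query are passed to `RTCPeerConnection` through its configuration object. A minimal sketch (the Google STUN URL is one of the public servers mentioned above):

```javascript
// ICE configuration listing the STUN server(s) the browser may query
// to discover the client's public IP address and port.
const rtcConfig = {
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
};

// In a browser this configuration is passed to the peer connection:
//   const pc = new RTCPeerConnection(rtcConfig);
// The discovered public addresses then surface as ICE candidates.
```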

## NAT (Network Address Translation)
NAT, or Network Address Translation, is a method by which NAT devices use a single (or a few) public IP addresses to channel traffic to and from the multiple devices behind them (these devices are given private IP addresses and port numbers by the NAT device).
This scheme was invented to conserve the limited number of IPv4 addresses. You can learn more about NAT and how Network Address Translation works here: [NAT traversal: How does it work?](https://www.metered.ca/blog/nat-traversal-how-does-it-work/)

## TURN (Traversal Using Relays around NAT)
TURN relays the data for WebRTC connections when a direct peer-to-peer connection is not possible due to NAT or firewall restrictions.
TURN servers relay traffic between peers when the direct connection between them fails.
TURN is used as a last resort during ICE negotiation, when direct communication between the devices fails.
TURN servers are resource intensive and require a lot of bandwidth and CPU to function.
TURN servers also need to be near your users, so if your users are spread across the globe, you need TURN servers all over the globe as well.
If you are looking for a global TURN server provider, you can consider the Metered.ca TURN servers.

## SDP (Session Description Protocol)
SDP is a standard for describing multimedia communication sessions for the purposes of
- Session announcement
- Session invitation
- and other forms of multimedia session initiation
The SDP protocol itself does not deliver media streams or transport data. It just defines the format of the session description, which conveys information about the media streams in a multimedia session so that devices can receive any particular stream.

### Purpose of SDP
SDP was designed to be extensible and to work with varied network environments and formats.
It is used to describe the multimedia communication and to control the logistics of connectivity and media exchange.
### Structure of SDP
SDP describes multimedia sessions using a plain-text encoding with a simple syntax.
An SDP message consists of lines in the form `type=value`, where `type` is a single character that signifies the type of the field and `value` is a structured text string.
These messages are typically transported by other protocols, such as SIP (Session Initiation Protocol), or exchanged as part of the WebRTC signalling process when establishing a new connection.
Here are some of the key components of SDP
- **Version:** This shows the version of SDP that is being used
- **Origin:** Identifies the initiator of the session and the session identifier
- **Session Name:** Provides a human-readable name for the session
- **Timing:** Describes the start time and the stop time for the session
- **Media Descriptions:** Describes the media components of the session, including the media type (audio, video, or text), port, protocol, and the media formats being used.
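Because every SDP line is just `type=value`, a few lines of JavaScript are enough to split a session description into its fields. This parser and the sample body below are purely illustrative (they are not part of any WebRTC API):

```javascript
// Split an SDP body into { type, value } records, one per "type=value" line.
function parseSdp(sdp) {
  return sdp
    .split(/\r?\n/)
    .filter((line) => line.length >= 2 && line[1] === "=")
    .map((line) => ({
      type: line[0], // single-character field type, e.g. "v", "o", "m"
      value: line.slice(2), // everything after the "="
    }));
}

// Illustrative SDP body (not captured from a real session).
const exampleSdp = [
  "v=0", // version
  "o=- 46117317 2 IN IP4 127.0.0.1", // origin
  "s=-", // session name
  "t=0 0", // timing
  "m=audio 9 UDP/TLS/RTP/SAVPF 111", // media description
].join("\r\n");
```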
## Role of SDP in WebRTC
SDP plays a part in the offer/answer model, a fundamental signalling mechanism in WebRTC that is used to establish a connection between peers.
Here is how SDP works in WebRTC:
* **Offer/Answer:** One client generates an SDP offer and sends it to the other client device with which it wants to establish a connection.
The other client then responds with an answer. This exchange describes the proposed media capabilities of both client devices, such as supported codecs, media types, and encryption requirements for establishing a connection.
* **Negotiation:**
The SDP exchange includes negotiation between the clients about which codecs and encryption requirements are supported by both and can therefore be used to establish the connection.
* **ICE candidates:**
SDP also conveys the ICE candidates in WebRTC. These candidates describe the potential pathways that can be taken to establish the connection, including STUN and TURN server addresses.
The SDP is dynamically updated during the ICE candidate gathering phase with addresses for STUN and TURN server connections from both client devices.

## MediaStream
The MediaStream API is an important component of the WebRTC suite of APIs.
This API manages the flow of media data such as audio and video. With the help of the MediaStream API, a broad range of applications can function, such as video streaming, video calls, and audio calls.
A MediaStream represents multiple streams of media, such as audio and video tracks, that are synchronized for a seamless experience.
These streams can come from multiple sources, such as microphones, cameras, screen recorders, and even pre-recorded media.
These streams are then transmitted between peer devices for real-time communication.
### Key features of MediaStream API
* **Stream Capture**
The MediaStream API can capture the media stream from a user device. This is done with the help of `getUserMedia()` method.
This method asks the user's permission to access the microphone and camera inputs and returns a MediaStream containing the requested media types
* **Track Manipulation:**
The MediaStream returned by the `getUserMedia()` function contains multiple tracks such as audio tracks and video tracks and these tracks can be indivdually manipulated as required
For example you can easily enable and disable individual tracks thus muting a user or disable their video output etc
* **Stream Combination:**
As we know there are multiple mediastream objects or tracks as we have seen above, these objects can be combined into a single stream of data or can also be seprated and individually manipulated as required
These tracks can be removed from one stream and be added to another stream , thus allowing for dynamic reconfiguration during a video call amoung many participants
* **Cloning:**
MediaStreams can also be cloned. This is particularly useful where the same media stream needs to be used in multiple contexts simultaneously,
for example when a single media stream needs to be shown to multiple users in a video call and also has to be recorded for future reference.
The cloned stream can also be encoded and manipulated as the user wishes without affecting the original stream.
* **Compatibility and constraints:**
The API provides the ability to apply constraints to the media stream; these could be a reduction in video quality or noise suppression for audio.
This allows you to specify the media capture needs according to your application and the client device's compatibility and performance.
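A rough sketch of the capture, track-manipulation, and cloning features above. These are browser-only APIs, and the constraint values here are arbitrary examples:

```javascript
// Capture a local stream with example constraints (browser-only APIs).
async function captureLocalStream() {
  return navigator.mediaDevices.getUserMedia({
    video: { width: { ideal: 1280 }, height: { ideal: 720 } },
    audio: { noiseSuppression: true },
  });
}

// Mute or unmute by flipping the enabled flag on each audio track.
function setMuted(stream, muted) {
  stream.getAudioTracks().forEach((track) => {
    track.enabled = !muted;
  });
}

// Clone a stream so one copy can be recorded while the original is displayed.
function cloneForRecording(stream) {
  return stream.clone();
}
```

Note that in most browsers `captureLocalStream()` triggers a permission prompt and should be called in response to a user gesture.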
### Practical use cases for MediaStream API
* **Video Conferencing**
You can conduct video conferencing with the MediaStream API: capture the camera and audio streams of multiple participants and show them to the other participants.
* **Media Recording:**
You can combine the MediaStream API with the MediaRecorder API to record the stream locally in the browser, or to build features like session recording.
* **Real Time Media Processing**
MediaStreams can be processed in real time to apply effects, change resolutions, perform analytics, and more.
* **BroadCasting**
MediaStreams can be broadcast to a large audience over the internet through media servers. You can also live stream events by using WebRTC to capture the camera and audio and then using media servers to broadcast the MediaStream on the internet.
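The recording use case can be outlined with the MediaRecorder API. This is a browser-only sketch; stopping after a fixed duration is just one possible way to end a recording:

```javascript
// Record a MediaStream for a fixed duration and resolve with the resulting Blob.
function recordStream(stream, durationMs) {
  return new Promise((resolve, reject) => {
    const recorder = new MediaRecorder(stream);
    const chunks = [];
    // Collect encoded data as it becomes available.
    recorder.ondataavailable = (event) => chunks.push(event.data);
    // When stopped, combine the chunks into a single Blob.
    recorder.onstop = () => resolve(new Blob(chunks, { type: recorder.mimeType }));
    recorder.onerror = (event) => reject(event.error);
    recorder.start();
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```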

## RTCPeerConnection
RTCPeerConnection is one of the core components of the WebRTC suite of APIs.
The primary function of RTCPeerConnection, as the name implies, is to establish and maintain a peer connection.
This connection allows direct exchange of data between client devices without the need for an intermediary (apart from the initial signalling process).
RTCPeerConnection handles everything from negotiating the connection details to managing the media and data transfer once the devices are connected.
### Key features of RTCPeerConnection
* **Connection Setup:**
RTCPeerConnection handles all the negotiation of the media and network details required to set up a connection between two devices.
These details include the offer/answer model and the ICE candidates, and they need to be communicated between peers through a signalling server.
* **Signalling:**
While RTCPeerConnection does not perform signalling itself, it generates the data that needs to be sent through the signalling server.
This data includes the offer/answer and the ICE candidates, and the signalling process is essential for establishing a connection between client devices.
* **NAT Traversal:**
Using ICE together with STUN and TURN servers, RTCPeerConnection finds the best possible way to establish a connection between devices.
If you are looking for STUN and TURN servers, you can consider the Metered.ca TURN servers.
* **Media Stream Management:**
Once the connection is established, the RTCPeerConnection manages the media streams that are provided by the MediaStream API.
The RTCPeerConnection controls the flow of streams to and from the client devices.
* **Data Channel Setup:**
The RTCPeerConnection can establish data channels using the RTCDataChannel API.
Using the RTCDataChannel API, any arbitrary data can be transferred between devices, so you can build virtually any kind of application on top of WebRTC.
* **Encryption**
All transmitted data is encrypted by the RTCPeerConnection using the DTLS encryption protocol.
This ensures that all communication is safe and secure.
* **Bandwidth Management:**
RTCPeerConnection provides built-in mechanisms to manage bandwidth consumption based on factors like your application requirements and network conditions.
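The setup, signalling, and ICE steps above can be sketched as follows. The `signaling` object stands in for whatever transport your application uses (e.g. a WebSocket), and the STUN URL is a placeholder:

```javascript
// Minimal caller-side connection setup; `signaling` is an app-defined transport.
function createPeer(signaling) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.example.com:3478' }], // placeholder server
  });

  // RTCPeerConnection generates ICE candidates; the app relays them via signalling.
  pc.onicecandidate = (event) => {
    if (event.candidate) signaling.send({ type: 'candidate', candidate: event.candidate });
  };
  return pc;
}

async function startCall(pc, signaling, localStream) {
  // Attach local media so it is negotiated into the session.
  localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));

  // Create the SDP offer and send it through the signalling channel.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ type: 'offer', sdp: pc.localDescription });
}
```

The remote side would answer symmetrically with `setRemoteDescription()`, `createAnswer()`, and `setLocalDescription()`.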
## RTCDataChannel
RTCDataChannel is an important component of the WebRTC API. It enables bidirectional transfer of data between devices using WebRTC.
Using this feature, developers can build a wide range of applications beyond the video and audio calling for which WebRTC has traditionally been used.
Developers can build apps like chat apps, collaborative whiteboards and other collaborative tools, and file sharing services.
The data channel is designed to be highly flexible and supports both highly reliable data delivery and unreliable data delivery with low overhead.
It can be configured to suit all kinds of data transfer needs.
### Key features of RTCDataChannel
* **Bidirectional and Peer to Peer**
The data channel allows for direct, bidirectional, peer-to-peer transfer of data between client devices.
Unlike media streams, which carry video and audio, an RTCDataChannel can transfer pretty much anything you throw at it.
* **Configurable transport**
There are two modes of transport available with RTCDataChannel: 1. a reliable mode, where data is guaranteed to arrive in the order it was sent but with higher overhead, and 2. an unreliable mode, which is quite lightweight but where delivery is not guaranteed.
* **Integration with RTCPeerConnection**
Data channels are established using the same RTCPeerConnection and use the same communication paths as the other WebRTC media APIs, including the same TURN servers.
* **Security:**
Like the other WebRTC APIs, RTCDataChannels are encrypted using DTLS for end-to-end security.
* **Low Overhead**
RTCDataChannels use SCTP (Stream Control Transmission Protocol) over DTLS and UDP.
This combination provides a balance of low latency and reliability compared to TCP-based real-time communication solutions.
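A minimal sketch of the reliable and unreliable modes on an existing `RTCPeerConnection`. The channel labels are arbitrary; the options follow the standard `RTCDataChannelInit` dictionary:

```javascript
// Open channels on an existing RTCPeerConnection.
function openChannels(pc) {
  // Default: reliable, ordered delivery (TCP-like semantics).
  const chat = pc.createDataChannel('chat');

  // Unreliable, unordered: at most one retransmission, suitable for game state.
  const gameState = pc.createDataChannel('game-state', {
    ordered: false,
    maxRetransmits: 1,
  });

  chat.onopen = () => chat.send('hello');
  gameState.onmessage = (event) => console.log('state update:', event.data);

  return { chat, gameState };
}
```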
### Practical Applications for RTCDataChannel
- Chat Application
- Collaborative tools
- File Sharing
- Gaming

## [Metered TURN servers](https://www.metered.ca/stun-turn)
- **API:** TURN server management with a powerful API. You can do things like add/remove credentials via the API, retrieve per-user / per-credential and user metrics via the API, enable/disable credentials via the API, and retrieve usage data by date via the API.
- **Global Geo-Location targeting:** Automatically directs traffic to the nearest servers for the lowest possible latency and highest quality performance; less than 50 ms latency anywhere around the world.
- **Servers in all the Regions of the world:** Toronto, Miami, San Francisco, Amsterdam, London, Frankfurt, Bangalore, Singapore, Sydney, Seoul, Dallas, New York
- **Low Latency:** less than 50 ms latency, anywhere across the world.
- **Cost-Effective:** pay-as-you-go pricing with bandwidth and volume discounts available.
- **Easy Administration:** Get usage logs, emails when accounts reach threshold limits, billing records and email and phone support.
- **Standards Compliant:** Conforms to RFCs 5389, 5769, 5780, 5766, 6062, 6156, 5245, 5768, 6336, 6544, 5928 over UDP, TCP, TLS, and DTLS.
- **Multi‑Tenancy:** Create multiple credentials and separate the usage by customer, or different apps. Get Usage logs, billing records and threshold alerts.
- **Enterprise Reliability:** 99.999% Uptime with SLA.
- **Enterprise Scale:** With no limit on concurrent traffic or total traffic. Metered TURN Servers provide Enterprise Scalability
- **5 GB/mo Free:** Get 5 GB every month free TURN server usage with the Free Plan
- **Runs on port 80 and 443**
- **Support TURNS + SSL** to allow connections through deep packet inspection firewalls.
- **Supports both TCP and UDP**
- **Free Unlimited STUN**
| alakkadshaw |
1,920,189 | How to Become an Seo Freelancer in Dubai | Embarking on the path to becoming an SEO freelancer in Dubai requires a strategic blend of skill... | 0 | 2024-07-11T21:15:09 | https://dev.to/khuzaima_yamman_a0046bbd8/how-to-become-an-seo-freelancer-in-dubai-2kf3 | startup | Embarking on the path to becoming an [SEO freelancer in Dubai](Seoexpertdubai.pro) requires a strategic blend of skill development, portfolio building, niche identification, and effective client management. As the digital landscape continues to evolve, mastering the art of SEO is paramount for success. However, the journey does not end there. By understanding the nuances of the market, establishing a strong online presence, and nurturing client relationships, individuals can carve a rewarding freelance career in this dynamic industry. But how does one navigate these intricacies and stand out in a bustling marketplace like Dubai?
Developing Your SEO Skills
To excel as an SEO freelancer in Dubai, it is essential to continuously enhance and refine your SEO skills through ongoing learning and practical application.
In the dynamic field of search engine optimization, staying updated with the latest trends, algorithms, and strategies is crucial for success. Dedicate time to explore reputable SEO resources, attend webinars, and enroll in online courses to deepen your understanding of SEO techniques and best practices.
Practical application of your SEO knowledge is equally important. Work on real projects, optimize websites, conduct keyword research, and analyze the results to gain valuable hands-on experience. Experiment with different SEO tools and tactics to see what works best in different scenarios.
Collaborate with other SEO professionals, join online communities, and participate in discussions to broaden your perspective and learn from others in the field.
Building Your Portfolio
Building a strong portfolio is essential for showcasing your SEO expertise and attracting potential clients as a freelancer in Dubai. Your portfolio serves as a visual representation of your skills, experience, and successful projects.
When creating your portfolio, focus on highlighting your best work and results achieved for previous clients. Include case studies that demonstrate your ability to improve search engine rankings, drive organic traffic, and optimize websites for better visibility.
To build a compelling portfolio, consider showcasing a variety of projects that showcase your versatility and expertise in different areas of SEO. Include before-and-after metrics to provide concrete evidence of your impact on a client's online presence.
Make sure to update your portfolio regularly with new projects and successes to keep it fresh and relevant.
Remember that your portfolio is a reflection of your capabilities as an SEO freelancer. It should be visually appealing, easy to navigate, and clearly demonstrate the value you can offer to potential clients.
Identifying Your Niche
Establishing a niche within the SEO industry is pivotal for positioning yourself as a specialized and sought-after freelancer in Dubai. By honing in on a specific area of expertise, you can distinguish yourself from the competition and attract clients who are looking for your particular skills.
When identifying your niche, consider your strengths, passions, and the market demand in Dubai. Whether you excel in local SEO, e-commerce optimization, content marketing, or technical SEO, choosing a niche that aligns with your interests and strengths will not only set you apart but also make your work more enjoyable.
Moreover, focusing on a niche allows you to become an expert in that specific area, building your credibility and reputation within the industry. Clients in Dubai are often looking for specialists who can deliver exceptional results tailored to their unique needs.
Setting Your Freelance Rates
Determining appropriate freelance rates is a critical aspect of positioning yourself competitively in the SEO market in Dubai. As a freelancer seeking to thrive in this dynamic industry, it's essential to set rates that reflect your expertise, the value you provide, and the current market trends.
When deciding on your rates, consider factors such as the complexity of the project, the time and effort required, and the results you can deliver to your clients.
Setting your freelance rates can be a balancing act between being competitive and ensuring that your services are valued appropriately. Conducting research on the average rates charged by SEO freelancers in Dubai can give you a good starting point.
Additionally, factor in your level of experience, the demand for your services, and the quality of work you deliver.
Establishing Your Online Presence
Crafting a strong online presence is fundamental for SEO freelancers in Dubai to showcase their expertise and attract potential clients. Start by creating a professional website that highlights your services, experience, and previous successes. Ensure that your website is optimized for search engines to increase visibility.
Utilize social media platforms like LinkedIn, Twitter, and Instagram to engage with your audience and share valuable insights about SEO trends and strategies. Consistently produce high-quality content such as blog posts, case studies, and whitepapers to demonstrate your knowledge and build credibility in the industry.
Guest posting on reputable websites and participating in online forums can also help expand your reach and establish yourself as a thought leader in the field. Don't forget to regularly update your online profiles and respond promptly to inquiries to maintain a positive reputation.
Networking in Dubai
To thrive as an SEO freelancer in Dubai, networking is key. Attending networking events in the city can help you connect with potential clients and industry professionals.
Additionally, leveraging online networking platforms can further expand your reach and opportunities in the digital landscape.
Networking Events in Dubai
Attending networking events in Dubai provides invaluable opportunities for SEO freelancers to connect with industry professionals and potential clients. These events offer a platform for freelancers to showcase their skills, build relationships, and stay updated on the latest trends in the industry.
Dubai hosts a variety of networking events tailored to different sectors, including digital marketing, technology, and entrepreneurship, allowing freelancers to target specific audiences based on their niche.
Networking events in Dubai range from casual meetups to formal conferences and workshops. They often feature guest speakers, panel discussions, and networking sessions designed to facilitate meaningful interactions.
By actively participating in these events, SEO freelancers can exchange ideas, gain insights from experts, and even secure potential projects or collaborations.
Moreover, networking events provide freelancers with the opportunity to enhance their visibility in the market and establish a strong personal brand. Building a robust network in Dubai can open doors to new opportunities, referrals, and partnerships, ultimately contributing to the freelancer's success in the competitive SEO industry.
Online Networking Platforms
Online networking platforms serve as virtual hubs for SEO freelancers in Dubai to connect with industry professionals and potential clients, facilitating meaningful interactions and opportunities for collaboration. In a city known for its vibrant business environment, these online platforms offer a convenient way for freelancers to expand their network and showcase their expertise.
Platforms like LinkedIn provide a space for freelancers to share their work, engage in discussions, and establish credibility in the industry. Freelancers can also leverage platforms such as Upwork and Freelancer to find project opportunities and connect with clients seeking SEO services.
Additionally, social media platforms like Twitter and Instagram can be used to showcase portfolio pieces, share industry insights, and engage with a broader audience. By actively participating in online networking platforms, SEO freelancers in Dubai can stay updated on industry trends, build valuable connections, and potentially secure long-term collaborations with businesses and individuals looking to enhance their online presence.
Finding SEO Freelance Opportunities
To excel as an SEO freelancer in Dubai, it is crucial to first identify niche markets where your expertise can shine. By understanding specific industries or sectors that require SEO services, you can tailor your approach and stand out in a competitive market.
Additionally, building a strong online presence through a professional website and social media platforms can attract potential clients and showcase your skills effectively.
Identifying Niche Markets
Identifying niche markets is essential for SEO freelancers in Dubai seeking lucrative freelance opportunities in the digital landscape. By focusing on specific industries or target audiences, freelancers can position themselves as experts in high-demand sectors, attracting clients willing to pay a premium for specialized services.
One approach to identifying niche markets is to conduct market research to assess the competition and demand for SEO services within different sectors. Freelancers can also leverage their own interests, skills, and experiences to identify niche markets where they can add unique value.
In Dubai, niche markets such as luxury real estate, hospitality, e-commerce, and tech startups present promising opportunities for SEO freelancers. Understanding the specific needs and pain points of businesses operating in these sectors can help freelancers tailor their services to meet the demands of niche clients effectively.
Building Online Presence
To enhance their chances of securing SEO freelance opportunities in Dubai, freelancers must strategically position themselves through an effective online presence. Building a strong online presence involves creating a professional website that showcases the freelancer's skills, experiences, and past projects. It is essential to optimize the website for search engines by utilizing relevant keywords and creating high-quality content that demonstrates expertise in SEO.
Furthermore, freelancers should actively engage on social media platforms such as LinkedIn, Twitter, and Facebook to connect with potential clients and industry professionals. Sharing valuable insights and participating in relevant discussions can help establish credibility and attract freelance opportunities.
Additionally, freelancers can leverage online freelancing platforms like Upwork, Freelancer, and Fiverr to find SEO projects in Dubai. Creating a compelling profile, highlighting relevant skills, and showcasing positive client reviews can significantly increase visibility and attract potential clients.
Managing Client Relationships
Establishing clear communication channels is crucial in effectively managing client relationships as an SEO freelancer in Dubai. As a freelancer, maintaining transparency and openness with clients is key to building trust and credibility. Regular updates on project progress, discussing strategies, and addressing any concerns promptly are essential practices to foster strong client relationships.
Another vital aspect of managing client relationships as an SEO freelancer in Dubai is setting realistic expectations from the outset. Clearly defining project timelines, deliverables, and outcomes helps manage client expectations and prevents misunderstandings down the line. It is important to ensure that clients have a clear understanding of what SEO services entail and the timeframes for seeing results.
Moreover, actively listening to clients' needs and feedback is imperative for maintaining a positive working relationship. By understanding their goals and challenges, you can tailor your SEO strategies to meet their specific requirements effectively.
Building a solid rapport with clients through attentive communication and delivering quality results will not only help retain current clients but also attract new ones through positive referrals.
| khuzaima_yamman_a0046bbd8 |
1,920,246 | Method security with @Secured Annotation in Spring | This annotation provides a way to add security configuration to business methods. It will use roles... | 27,602 | 2024-07-11T21:21:59 | https://springmasteryhub.com/2024/07/11/method-security-with-secured-annotation-in-spring/ | java, spring, springboot, tutorial | This annotation provides a way to add security configuration to business methods.
It uses roles to check whether a user has permission to call the method. The annotation is part of Spring Security, so to enable its usage you need the Spring Security dependency.
## Example Scenario
You have an application that has a product CRUD. In this CRUD you want to control the operations using two specific roles.
- **User:** can create the product and see the product. But cannot update or delete a product.
- **Admin:** that can do all the user operations and can also update and delete a product.
You can use @Secured to manage the access of those roles on each operation.
### Roles for Operations
We can define the following roles in our example scenario.
To create:
- ROLE_USER, ROLE_ADMIN
To read:
- ROLE_USER, ROLE_ADMIN
To update:
- ROLE_ADMIN
To delete:
- ROLE_ADMIN
Let's look at a code example and observe the application behavior.
### Adding Spring Security Dependency
To work with the `@Secured` annotation, add the Maven dependency for Spring Security:
```xml
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
```
### Annotating Methods with @Secured
We annotate the methods with @Secured defining which roles can access the method behavior.
```java
public class Product {
private Long id;
private String name;
private BigDecimal value;
//getters and setters
}
@Service
public class ProductService {
@Secured({"ROLE_USER", "ROLE_ADMIN"})
public Product createProduct(Product product) {
// Logic for creating a product
return product;
}
@Secured({"ROLE_USER", "ROLE_ADMIN"})
public Product getProductById(Long id) {
// Logic for fetching a product
return null;
}
@Secured("ROLE_ADMIN")
public Product updateProduct(Product product) {
// Logic for updating a product
return product;
}
@Secured("ROLE_ADMIN")
public void deleteProduct(Long id) {
// Logic for deleting a product
}
}
```
### Application configuration
You need to add `@EnableGlobalMethodSecurity(securedEnabled = true)` to configure your Spring application to enable method security with `@Secured`. (Note that in recent Spring Security versions this annotation is deprecated in favor of `@EnableMethodSecurity(securedEnabled = true)`.)
```java
@SpringBootApplication
@EnableTransactionManagement
@EnableGlobalMethodSecurity(securedEnabled = true)
public class MasteryApplication {
public static void main(String[] args) {
SpringApplication.run(MasteryApplication.class, args);
}
}
```
### Testing the Behavior
In our example we are going to verify the behavior with tests, so we add the Spring Security test dependency.
```xml
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-test</artifactId>
<scope>test</scope>
</dependency>
```
Then we create tests that use a mock user with specific roles assigned, so we can exercise each role and see how our application behaves. By doing that we can ensure that only the right roles can perform the allowed actions.
```java
@SpringBootTest
class ProductServiceTests {
@Autowired
private ProductService productService;
@Test
@WithMockUser(roles = "USER")
void testCreateProductAsUser() {
Product product = new Product();
assertDoesNotThrow(() -> productService.createProduct(product));
}
@Test
@WithMockUser(roles = "ADMIN")
void testCreateProductAsAdmin() {
Product product = new Product();
assertDoesNotThrow(() -> productService.createProduct(product));
}
@Test
@WithAnonymousUser
void testCreateProductAsAnonymous() {
Product product = new Product();
assertThrows(AccessDeniedException.class, () -> productService.createProduct(product));
}
@Test
@WithMockUser(roles = "USER")
void testGetProductByIdAsUser() {
assertDoesNotThrow(() -> productService.getProductById(1L)); // Assuming product with ID 1 exists
}
@Test
@WithMockUser(roles = "ADMIN")
void testGetProductByIdAsAdmin() {
assertDoesNotThrow(() -> productService.getProductById(1L));
}
@Test
@WithAnonymousUser
void testGetProductByIdAsAnonymous() {
assertThrows(AccessDeniedException.class, () -> productService.getProductById(1L));
}
@Test
@WithMockUser(roles = "USER")
void testUpdateProductAsUser() {
Product product = new Product();
assertThrows(AccessDeniedException.class, () -> productService.updateProduct(product));
}
@Test
@WithMockUser(roles = "ADMIN")
void testUpdateProductAsAdmin() {
Product product = new Product();
assertDoesNotThrow(() -> productService.updateProduct(product));
}
@Test
@WithAnonymousUser
void testUpdateProductAsAnonymous() {
Product product = new Product();
assertThrows(AccessDeniedException.class, () -> productService.updateProduct(product));
}
@Test
@WithMockUser(roles = "USER")
void testDeleteProductAsUser() {
assertThrows(AccessDeniedException.class, () -> productService.deleteProduct(1L));
}
@Test
@WithMockUser(roles = "ADMIN")
void testDeleteProductAsAdmin() {
assertDoesNotThrow(() -> productService.deleteProduct(1L));
}
@Test
@WithAnonymousUser
void testDeleteProductAsAnonymous() {
assertThrows(AccessDeniedException.class, () -> productService.deleteProduct(1L));
}
}
```
That’s it, now you can manage user access to the application using roles with the @Secured annotation.
If you like this topic, make sure to follow me. In the following days, I’ll be explaining more about Spring annotations! Stay tuned!
Follow me! | tiuwill |
1,920,247 | Today, I start my "Problem Resolver" journey with the name of Allah. | I start my journey and explore how to resolve problems as a programmer or coder. | 0 | 2024-07-11T21:22:17 | https://dev.to/ayeshaseher/today-i-start-my-problem-resolver-journey-with-the-name-of-allah-1g0c | I start my journey and explore how to resolve problems as a programmer or coder. | ayeshaseher | |
1,920,251 | Python - First Week | Python is taught online in Tamil without any cost. The only expectation from them is to create a blog... | 0 | 2024-07-11T21:23:33 | https://dev.to/sureshlearnspython/python-first-week-1bbl | python, basic | Python is taught online in Tamil without any cost. The only expectation from them is to create a blog and write our understanding after learning. Hence started writing the blog.
- Started learning Python through Kaniyam. Got to know two Tamil mentors, Syed and Srini. Classes run Monday to Wednesday from 7 pm to 8 pm.
- A WhatsApp group was created and 3 classes have been completed. The agenda for every class is clear. During each class, a Zoom recording and a YouTube live stream are made, which is useful for reviewing later.
https://kaniyam.com/python-course-2024/
https://parottasalna.com/python-development/
I already have Python and VS Code installed on my laptop. I found it a little difficult to understand how to run the sample program via the terminal and from VS Code. After a few tries it is a little clearer now, so the environment is ready for the class.
Many free ebooks are available to learn Python, apart from YouTube videos. Recommended book links are given below.
**<u>Day 1: Meet and Greet:</u>**
Agenda:
1. Why Python ?
2. Course Syllabus
3. Python Installation - Windows, Linux
4. Collab Notebook
5. Where to see updates & recordings.
6. Where to ask questions ?
7. Our Expectations
8. Basic Print Command
9. About FOSS, FOSS Communities.
Post session, the below information is shared
<u>Youtube Recording:</u>
Part 1: https://www.youtube.com/live/rcJRkt3odlw?si=SZGCr6aBVwSQII0g
Part 2: https://www.youtube.com/live/xBpXOkyoFD8?si=Z89W5VAtLnkUpFHH
<u>How to create a blog: </u>
https://www.youtube.com/watch?v=pkp8WK9ub4o
<u>Google Form to submit your blog url:</u>
https://docs.google.com/forms/d/e/1FAIpQLSdiJ3qQ-37YSi2VnTFpgVIJL0iE9mxveKHA3kFnwVAmhJooMg/viewform?usp=sf_link
<u>Whatsapp Group Links:</u>
Group 1: https://chat.whatsapp.com/DcfvtLP0y6S0iUkjCJLcH7
Group 2: https://chat.whatsapp.com/ErBIxb1lQfs7mNRo33c4Vc
Group 3: https://chat.whatsapp.com/ETxQ9WVCXkp5TYmY22wLaC
**Tamil Linux Forum Link:** (Ask Your Queries here)
https://forums.tamillinuxcommunity.org/
**Community Links**
https://forums.tamillinuxcommunity.org/t/gather-all-the-foss-group-in-tamil-nadu/1387
**Python Download Link:**
https://www.python.org/downloads/
**Google Colab Link:**
https://colab.research.google.com/
**<u>Day 2: Python Print</u>**
Agenda:
1. 10 min discussion on previous class
2. Basic print statement
3. Multiple prints
4. separator
5. concatenating strings
6. escape sequences
7. raw string
8. printing quotes inside strings.
9. printing numbers
10. multiline strings
11. string multiplication
12. combining int and str in printing
13. .format
14. f-strings
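Some quick snippets I tried for these print topics:

```python
# Practice snippets covering the Day 2 print topics.
print("Hello", "World", sep="-")      # separator: Hello-World
print("Line1\nLine2")                 # escape sequence for a newline
print(r"C:\new\folder")               # raw string keeps backslashes
print('She said "hi"')                # quotes inside strings
print("ab" * 3)                       # string multiplication: ababab
name, count = "Suresh", 3
print("{} attended {} classes".format(name, count))  # .format
print(f"{name} attended {count} classes")            # f-string
```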
Concepts are understood. Important links are
<u>Youtube Recording:</u>
Session: https://www.youtube.com/watch?v=zr3skBHzbAI&list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL&index=4&pp=gAQBiAQB
Q/A: https://www.youtube.com/watch?v=OWjW7GBMND4&list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL&index=5&pp=gAQBiAQB
<u>Blog:</u> https://parottasalna.com/2024/07/05/python-fundamentals-the-print/
<u>Quiz:</u> https://docs.google.com/forms/d/e/1FAIpQLSeW7dGCYrvPXBK7llexbwa_yImFQWFiHHE4c4ATOk-NwJWxIw/viewform?usp=sf_link
<u>Task:</u> https://parottasalna.com/2024/07/04/task-1-python-print-exercises/
<u>Playlist:</u> https://www.youtube.com/playlist?list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL
<u>Colab Notebook:</u> https://colab.research.google.com/drive/1Uu9btRd_U0i3-PRfZwlR2QalJRp0DbBK?usp=sharing
<u>Infographics:</u> https://parottasalna.com/wp-content/uploads/2024/07/print-method.pdf
The Byte of Python book is recommended. The links are:
https://python.swaroopch.com/
https://www.tutorialspoint.com/python/index.htm
https://pymbook.readthedocs.io/
**<u>Day 3: Data types Variables and constants</u>**
Agenda:
0. Discussion on print quiz.
1. Numeric Types (int, float, complex)
2. Text Type (strings)
3. Boolean Type (bool)
4. None Type (None)
5. How to check a data type ?
6. What is a variable ?
7. How to define it
8. valid, invalid variables
9. assigning values
10. multiple assignment
11. unpacking
12. variable types
13. Constants
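My practice snippets for the data type and variable topics:

```python
# Practice snippets covering the Day 3 topics.
price = 10          # int
rate = 4.5          # float
z = 2 + 3j          # complex
city = "Chennai"    # str
is_open = True      # bool
nothing = None      # NoneType

print(type(price), type(city), type(nothing))  # checking data types

a, b = 1, 2                 # multiple assignment
x, y, w = [10, 20, 30]      # unpacking a list into variables

PI = 3.14159                # "constant": upper case by convention only
```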
<u>**Details shared**</u>
Print Quiz Solutions Video: https://youtu.be/JzFLSZySbRI
Print Task Solutions: https://youtu.be/k6pwbOZtQ30
<u>Variables Datatypes & Constants:</u>
Session: https://youtube.com/live/5G0PoJofxXk?feature=share
<u>Q/A</u>: https://youtube.com/live/9cJDqHwQG5k?feature=share
<u>Blog:</u> https://parottasalna.com/2024/07/07/python-fundamentals-constants-variables-and-data-types/
<u>Quiz:</u> https://docs.google.com/forms/d/e/1FAIpQLSezKyjHkKlg4Qo8juWqviZkasyWOcAcEcBzK_NsBwGYG3WAvg/viewform?usp=sf_link
<u>Task:</u> https://parottasalna.com/2024/07/07/task-2-constants-and-variables/
<u>Playlist:</u> https://www.youtube.com/playlist?list=PLiutOxBS1Mizte0ehfMrRKHSIQcCImwHL
<u>Infographics:</u> https://parottasalna.com/wp-content/uploads/2024/07/variables-constants-and-data-types-in-python.pdf
There was a Q&A session planned by Srini and Syed today (11.07.2024, Thursday) as there are many participants.
I told them that I would create a blog this weekend. Srini objected and said that postponing the activity would keep it delayed. Hence today I started writing my first blog.
For teaching kids, the websites below can be checked:
- https://code.org/
- https://scratch.mit.edu/
**<u>Other important links:</u>**
- https://kaniyam.com/ebooks
- https://www.datacamp.com/tutorial/setting-up-vscode-python
I asked two questions today.
1. Which website links can be used to teach kids?
2. To know the details of the `print` function (like the documentation VS Code shows when hovering over `print`), what do I have to do?
3. For the above, I have to add some extensions: **Python** and **Pylance**.
4. During the conversation, Pandas also came up, which is a library for processing Excel-like data.
There were also some interesting questions about AI training using Python. It seems that, with the available data, the next value is predicted, like linear regression. To know more about AI, **we need to study the book from Kaniyam**.
<u>AI Libraries</u>
- scikit-learn
- Tensorflow
- pytorch
**<u>To do:</u>**
1. Go through the link or search on adding python, Pylance library to VScode
2. Explore Pandas with the Ebook
3. Read Byte of python till date
| sureshlearnspython |
1,920,252 | Card Portfolio UI Component | Cards for portfolio Website. If you find this useful, you can support me by buying me a coffee | 0 | 2024-07-11T21:26:40 | https://dev.to/sofidev/card-portfolio-ui-component-1c0f | codepen, ui, components, css | Cards for portfolio Website.
If you find this useful, you can support me by buying me a coffee
{% codepen https://codepen.io/sofidev/pen/wvLvKmR %} | sofidev |
1,920,254 | Classes in Python (Introduction) | In Python, classes are the foundation of object-oriented programming. In simple terms, they are... | 0 | 2024-07-11T21:41:40 | https://dev.to/gianni_cast/classes-in-python-introduction-13cc | beginners, python, classes | In Python, classes are the foundation of object-oriented programming. In simple terms, they are essentially a template for creating objects with similar attributes.
**Creating Classes**
Class definition syntax is extremely straightforward. All you need is the `class` keyword followed by the class name and a colon (the class name is always in UpperCamelCase). I have provided an example below:
`class Shop:`
Well done you have successfully created a class! Now we will take a deeper dive into how you can use them. I will be using a class to create and store different shops throughout this blog.
**Using Classes**
The first step after creating your class is to use a constructor method known as the `__init__` method to initialize instance attributes that will be used when instantiating objects.
```
class Shop:
    def __init__(self, name, location, owner):
        self.name = name
        self.location = location
        self.owner = owner
```
Now whenever we create or instantiate a new store/shop object within this class, it will share these attributes we initialized! Now let's create some shops:
```
class Shop:
    def __init__(self, name, location, owner):
        self.name = name
        self.location = location
        self.owner = owner

    # method for displaying our stores
    def display_store_info(self):
        return f"Shop: {self.name}, Location: {self.location}, Owner: {self.owner}"

# creating shop instances
first_shop = Shop("FoodMart", "Main Street", "John Smith")
second_shop = Shop("ClothingStore", "Billybob Avenue", "Billy Bob")
```
Now in our Python shell, if we type `print(first_shop.display_store_info())` we will see this display:
`Shop: FoodMart, Location: Main Street, Owner: John Smith`
We could also do the same for `second_shop`! We created a method (function) in our class called `display_store_info` that allowed us to use the attributes defined in `__init__`. Now we could make limitless shop objects that include the name, location, and owner from one reusable template.
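As a quick illustration of that reusability (the extra shop data below is made up for the example, and the `Shop` class is repeated so the snippet runs on its own), the same class can stamp out many shops from plain tuples:

```
class Shop:
    def __init__(self, name, location, owner):
        self.name = name
        self.location = location
        self.owner = owner

    def display_store_info(self):
        return f"Shop: {self.name}, Location: {self.location}, Owner: {self.owner}"

# Hypothetical example data -- any (name, location, owner) tuples work
shop_data = [
    ("FoodMart", "Main Street", "John Smith"),
    ("ClothingStore", "Billybob Avenue", "Billy Bob"),
    ("BookNook", "Oak Lane", "Jane Doe"),
]

# Unpack each tuple straight into the constructor
shops = [Shop(*fields) for fields in shop_data]

for shop in shops:
    print(shop.display_store_info())
```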
This is just the beginning when it comes to classes. The possibilities and reusability are incredible when it comes to using classes in Python. I would love to go into more detail in a future blog post, but this is just a small intro.
| gianni_cast |
1,920,255 | A greener planet with tech | Hello everyone! This week I decided to blog about something that caught my eye- sustainable web... | 0 | 2024-07-11T21:42:03 | https://dev.to/christopherchhim/a-greener-planet-with-tech-180n | webdev, environment, co2emissions | Hello everyone! This week I decided to blog about something that caught my eye- sustainable web development! I have done a lot of research concerning today's environmental problems at school such as water pollution, deforestation, habitat fragmentation, etc... Reducing carbon emissions to preserve the earth's health is one of my biggest motivations for starting a tech career. Technology has come a long way and has many innovations, so I aspire to use technology to help the planet become a more sustainable environment.
The carbon emissions of web products are hard to measure, so a good approximation is to account for the amount of data being transferred and to measure the weight of a page. We must find ways to develop power-efficient websites to reduce energy consumption.
This post was inspired from:
Greenwood, T. (2021, August 5) Sustainable Web Design, An Excerpt - A List Apart
Retrieved from: [https://alistapart.com/article/sustainable-web-design-excerpt/] | christopherchhim |
1,920,256 | Building Conversational AI with React: Trends, Tools, and Applications | Explore the future of conversational AI with React and discover the transformative power of Sista AI. Dive into the world of innovative voice assistants! 🌐✨ | 0 | 2024-07-11T21:45:41 | https://dev.to/sista-ai/building-conversational-ai-with-react-trends-tools-and-applications-26oc | ai, react, javascript, typescript | <h2>Introduction</h2><p>Conversational AI is transforming user interactions with technology, offering personalized and engaging experiences. This article delves into the latest trends, tools, and applications for building conversational AI with React, exploring the cutting-edge capabilities of <strong>Sista AI</strong>.</p><h2>Conversational AI Trends for 2024</h2><p>The future of conversational AI in 2024 holds exciting advancements, from emotionally intelligent chatbots to conversational search engines. Rapid deployment of virtual agents and the integration of AI companions showcase the dynamic landscape shaping the industry.</p><h2>Building AI Chatbots with React</h2><p>Utilize the power of React to create AI chatbots that enhance user engagement and accessibility. Leverage React components from repositories like react-chatbot-kit to build customizable chatbot solutions, aligning with the innovative features offered by <strong>Sista AI</strong>.</p><h2>Sista AI: An End-to-End AI Integration Platform</h2><p><strong>Sista AI</strong> redefines AI integration, enabling any app to incorporate a voice assistant in minutes. With features like conversational AI agents, voice user interface, and real-time data integration, Sista AI offers a comprehensive solution for modern applications integrated with React.</p><h2>Building AI Chatbots with React and ChatGPT API</h2><p>The seamless integration of React and the ChatGPT API empowers developers to create intelligent chatbots. 
By following simple steps and integrating these tools, developers can unlock the potential of AI-driven interactions that elevate user experiences in line with the capabilities of Sista AI.</p><h2>The Social Side of Artificial Intelligence</h2><p>Artificial Intelligence transcends technology to offer personalized interactions through AI companions. Discover how AI friends are reshaping human-computer interactions and the potential for mainstream adoption, highlighting the transformative impact of advanced solutions like Sista AI.</p><br/><br/><a href="https://smart.sista.ai?utm_source=sista_blog_devto&utm_medium=blog_post&utm_campaign=big_logo" target="_blank"><img src="https://vuic-assets.s3.us-west-1.amazonaws.com/sista-make-auto-gen-blog-assets/sista_ai.png" alt="Sista AI Logo"></a><br/><br/><p>For more information, visit <a href="https://smart.sista.ai?utm_source=sista_blog_devto&utm_medium=blog_post&utm_campaign=For_More_Info_Link" target="_blank">sista.ai</a>.</p> | sista-ai |
1,920,257 | Laravel Cheatsheet after making changes to a file | Clear caches (if needed): php artisan config:cache php artisan route:cache php artisan... | 0 | 2024-07-11T21:46:01 | https://dev.to/msnmongare/laravel-cheatsheet-after-making-changes-to-a-file-a8b | webdev, beginners, laravel, programming | Clear caches (if needed):
```
php artisan config:cache
php artisan route:cache
php artisan view:clear
php artisan cache:clear
```
**Serve the application:**
```
php artisan serve
```
| msnmongare |
1,920,261 | Master Linear Regression with NumPy: Step-by-Step Guide to Building and Optimizing Your First Model! | Linear regression is a simple yet powerful method in machine learning used to model the relationship... | 0 | 2024-07-11T21:56:47 | https://dev.to/moubarakmohame4/master-linear-regression-with-numpy-step-by-step-guide-to-building-and-optimizing-your-first-model-oo7 | python, machinelearning, beginners, numpy | Linear regression is a simple yet powerful method in machine learning used to model the relationship between a dependent variable (target) and one or more independent variables (predictors). In this article, we will implement a simple linear regression using NumPy, a powerful library for scientific computing in Python. We will cover the different equations necessary for this implementation: the model, the cost function, the gradient, and gradient descent.
**1. Linear Regression Model**
The linear regression model can be represented by the following equation:

**y=Xθ**
where:
- **X** is the matrix of predictors.
- **θ** is the vector of parameters (coefficients).
**2. Cost Function**
The cost function in linear regression is often the sum of squared errors (mean squared error). It measures the difference between the values predicted by the model and the actual values.

**3. Gradient**
The gradient of the cost function with respect to the parameters
**θ** is necessary to minimize the cost function using gradient descent. The gradient is calculated as follows:

**4. Gradient Descent**
Gradient descent is an iterative optimization method used to minimize the cost function. The parameter update equation is:

**5. Implementation with NumPy**
importing libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
Generate example data
```
np.random.seed(42)
m = 100
x = 2 * np.random.rand(m, 1)
y = 4 + 3 * x + np.random.randn(m, 1)
plt.figure(figsize=(12, 8))
plt.scatter(x, y)
```

Add a bias term (column of 1s) to X
```
X = np.hstack((x, np.ones(x.shape)))
X.shape
```
> (100, 2)
Initialize the parameters θ to random values
```
theta = np.random.randn(2, 1)
```
Model function
```
def model(X, theta):
    return X.dot(theta)

plt.scatter(x[:, 0], y)
plt.scatter(x[:, 0], model(X, theta), c='r')
```

**Cost function**
```
def cost_function(X, y, theta):
    m = len(y)
    return 1/(2*m) * np.sum((model(X, theta) - y)**2)

cost_function(X, y, theta)
```
> 16.069293038191518
**Gradient et Gradient descent**
```
def grad(X, y, theta):
    m = len(y)
    return 1/m * X.T.dot(model(X, theta) - y)

def gradient_descent(X, y, theta, learning_rate, n_iter):
    cost_history = np.zeros(n_iter)
    for i in range(0, n_iter):
        theta = theta - learning_rate * grad(X, y, theta)
        cost_history[i] = cost_function(X, y, theta)
    return theta, cost_history
```
```
n_iterations = 1000
learning_rate = 0.01
final_theta, cost_history = gradient_descent(X, y, theta, learning_rate, n_iterations)
final_theta
```
> array([[2.79981142],
>        [4.18146098]])
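As a cross-check (this step goes beyond the original walkthrough), the exact least-squares parameters can also be obtained in closed form from the normal equation θ = (XᵀX)⁻¹Xᵀy; gradient descent should land close to them. The seeded data is regenerated here so the snippet runs standalone:

```
import numpy as np

np.random.seed(42)
m = 100
x = 2 * np.random.rand(m, 1)
y = 4 + 3 * x + np.random.randn(m, 1)
X = np.hstack((x, np.ones(x.shape)))

# Closed-form ordinary least squares via the normal equation
theta_exact = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
print(theta_exact)  # the slope/intercept that gradient descent converges toward
```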
```
predict = model(X, final_theta)

plt.scatter(x[:, 0], y)
plt.scatter(x[:, 0], predict, c='r')
```

**Learning curve**
```
plt.plot(range(n_iterations), cost_history)
```

This implementation demonstrates how the fundamental concepts of linear regression, such as the model, cost function, gradient, and gradient descent, can be implemented using NumPy. This basic understanding is essential for advancing to more complex machine learning models.
Feel free to explore further by adjusting hyperparameters, adding more features, or trying other optimization techniques to improve your linear regression model. | moubarakmohame4 |
1,920,262 | JavaScript MMORPG - Maiu Online - #babylonjs - Ep29: Chat message clouds | Hello, Again super short update from the progress. When someone writes message on the chat and is in... | 0 | 2024-07-11T22:03:06 | https://dev.to/maiu/javascript-mmorpg-maiu-online-babylonjs-ep29-chat-message-clouds-1a90 | babylonjs, javascript, mmorpg, indiegamedev | Hello,
Again, a super short update on progress. When someone writes a message in the chat and is within visual range, I display the message in a 'cloud'. I'm also adding a local/global chat mode enhancement to the future backlog.
{% youtube n28vM7oDaKQ %} | maiu |
1,920,263 | JavaScript Tips | Hello! Today I decided to blog on an article about JavaScript tips. JavaScript is an essential... | 0 | 2024-07-11T22:12:14 | https://dev.to/christopherchhim/javascript-tips-aj9 | javascript, webdev, programming, beginners | Hello! Today I decided to blog on an article about JavaScript tips. JavaScript is an essential programming language for web development, so I decided to share these tips in case I ever need a reference.
1. Navigation tool
Browser and OS details can be viewed by using either the `window.navigator` object or the `navigator.platform` property.
2. Stopping Auto Refresh
`void(0)` prevents the page from refreshing when used in a link, e.g. `javascript:void(0)`.
3. Page Redirecting
The user can be redirected to a new page by setting the `href` property of the `location` object, which is a property of the `window` object.

```javascript
function redirect() {
  window.location.href = "newPage.html";
}
```
4. Email Validation

```javascript
function validateEmail(email) {
  var re =
    /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/;
  return re.test(String(email).toLowerCase());
}
```
5. Fetching URL
`window.location.href` can be used to get the current URL and update it.
6. Getting metadata
The `import.meta` object contains information on the current module.
This post was inspired by:
Baranwal, A. (2024, July 1) 15 Amazing JavaScript Tips
Retrieved from: [https://dev.to/anmolbaranwal/15-amazing-things-you-can-do-with-simple-javascript-g88?context=digest]
| christopherchhim |
1,920,264 | Random Walk on the Line | Random Walk “In mathematics, a random walk, sometimes known as a drunkard’s walk, is a... | 0 | 2024-07-11T22:24:30 | https://dev.to/kdalkafoukis/random-walk-on-the-line-3lkf | mathematics, python, probabilitythoery, randomwalk | ## Random Walk
“In mathematics, a **random walk**, sometimes known as a **drunkard’s walk**, is a random process that describes a path that consists of a succession of random steps on some mathematical space.”
"Random walks have applications to engineering and many scientific fields including ecology, psychology, computer science, physics, chemistry, biology, economics, and sociology. The term random walk was first introduced by Karl Pearson in 1905."
## One-dimensional classic random walk
“An elementary example of a random walk is the random walk on the integer number line, which starts at 0 and at each step moves +1 or −1 with equal probability.”
For instance, imagine you are on the **x axis of integers** and you start from **position 0**.
Flipping a coin, if it’s heads you move +1 and if it’s tails -1.
**First iteration:** Flip a coin, if it’s heads go to **position 1**. If it’s tails go to **position -1**.
**Second iteration:** Flip again a coin. If the first coin was heads it means that you are in **position 1** which means now if it’s heads you go to **position 2** and if it’s tails to **position 0**. If the first coin was tails it means that you are in **position -1** if the coin now is heads you go to **position 0** and if it’s tails you go to **position -2**. And so on.
<div style="">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ym8vzkcymr9dbq6y75r3.png"
width="100px" height="100px" alt="All possible random walk outcomes after 5 flips of a fair coin"/>
</div>
> All possible random walk outcomes after 5 flips of a fair coin
### Algorithm implementation
In python this can be written like this
```python
import random

def random_walk_on_the_line(steps=100):
    position = 0
    for _ in range(steps):
        if random.uniform(0, 1) < 0.5:
            position -= 1
        else:
            position += 1
    return position
```
If you run the above function, starting from position 0, after the given number of steps you will be able to see where you landed.
But this is not very useful.
Let’s run this function multiple times and try to see which positions occur more often and which not.
This process is called a Monte Carlo simulation:
```python
def monte_carlo_simulation(number_of_executions=1000, walk_steps=100):
    results = {}
    for _ in range(number_of_executions):
        result = random_walk_on_the_line(walk_steps)
        if result not in results:
            results[result] = 1
        else:
            results[result] += 1
    return results
```
The above code runs random walks according to the number of executions, counts the number of occurrences for each position and returns an object of the form
```
results = {
...
"position -1": "20 times"
"position 0": "10 times"
"position 1": "5 times"
...
}
```
The last step is to transform this into something that we can plot and plot it
```python
def prepare_results_for_plotting(dictionary):
    positions = []
    number_of_occurance = []
    for key, value in dictionary.items():
        positions.append(key)
        number_of_occurance.append(value)
    return positions, number_of_occurance
```
This takes the results object and creates two arrays of the form
```
positions = [... ,"position - 1", "position 0", "position 1",... ]
number_of_occurance = [ ...,"20 times", "10 times", "5 times",...]
```
Having the data in the right format for matplotlib we have one final step to visualise the data
```python
import matplotlib.pyplot as plt

def plot_results(dictionary):
    positions, number_of_occurance = prepare_results_for_plotting(dictionary)
    plt.bar(positions, number_of_occurance)
    plt.ylabel('number of occurances')
    plt.xlabel('positions')
    plt.show()
```
Last step is to call the above functions
```python
# 100000 discrete walks, 100 steps for every walk
simulation_results = monte_carlo_simulation(100000, 100)
plot_results(simulation_results)
```
Putting it all together in a Python file and running it, we approach a normal distribution.
Also make sure you have installed `matplotlib`

“The central limit theorem and the law of the iterated logarithm describe important aspects of the behaviour of simple random walks on Z. In particular, the former entails that as n increases, the probabilities (proportional to the numbers in each row) approach a normal distribution.”
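That behaviour can also be checked numerically: for a simple walk of n steps, the endpoint has mean 0 and variance n, so 100-step walks should show a sample variance near 100. Here is a quick check (the fixed seed and the sample count of 20,000 are choices made for this example, not part of the original post):

```python
import random

random.seed(0)  # fixed seed so the check is reproducible

def random_walk(steps=100):
    position = 0
    for _ in range(steps):
        position += 1 if random.random() < 0.5 else -1
    return position

endpoints = [random_walk(100) for _ in range(20000)]

mean = sum(endpoints) / len(endpoints)
variance = sum((e - mean) ** 2 for e in endpoints) / len(endpoints)
print(mean, variance)  # mean near 0, variance near the step count (100)
```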
### Final Words
I hope it was helpful.
Thanks for reading.
If you have any questions feel free to ask in the comments.
### References
https://en.wikipedia.org/wiki/Random_walk | kdalkafoukis |
1,920,265 | Connect External Public IP to Private VPC on AWS | Having a private AWS VPC and want to connect a public external IP to it, taking into consideration to... | 0 | 2024-07-11T22:15:11 | https://dev.to/basel5001/connect-external-public-ip-to-private-vpc-on-aws-2ag4 | I have a private AWS VPC and want to connect a public external IP to it,
taking into consideration configuring the following:
VPC CIDR BLock
Routing Table
ACL
Security Group
and using an IAM policy to specify the IP as well.
Tried to Configure these Steps but with no luck, and couldn't ping from inside EC2 | basel5001 | |
1,920,266 | Useful datasets for AI/ML | Looking for datasets for your next project. Here are some sources I... | 0 | 2024-07-11T22:17:16 | https://dev.to/johnscode/useful-datasets-for-aiml-4n89 | machinelearning, ai, datascience | Looking for datasets for your next project. Here are some sources I know:
https://huggingface.co/datasets
https://openml.org/
https://dagshub.com/datasets/
https://github.com/awesomedata/awesome-public-datasets
https://paperswithcode.com/datasets
https://registry.opendata.aws/
https://www.kaggle.com/datasets
Add more in the comments
Thanks! | johnscode |
1,920,286 | AI vs Marketmaker | Today, an increasing number of people are becoming familiar with cryptocurrency and actively... | 0 | 2024-07-11T23:53:12 | https://dev.to/thejefry/ai-vs-marketmaker-15g5 |
Today, an increasing number of people are becoming familiar with cryptocurrency and actively exploring its possibilities. With the advancement of neural network technologies, token creators now have the opportunity to integrate AI into market making. Neural networks facilitate transaction execution, display coin rates, and provide predictive insights.
However, the overarching question remains: what is the overall impact of AI adoption, and can it fully replace the role of a market maker today? The answer lies in examining existing cryptocurrency projects that have implemented neural networks.
The **[EARTH TON](t.me/earthtonofficial)** project has particularly captured my interest. Focused on ecological themes, which are highly appealing to investors, it boasts a significant following with 60,000 Telegram subscribers. This prompted me to explore it as an example of AI integration in cryptocurrency platforms.
**What advantages does AI offer businesses today?**
Neural networks have already revolutionized various economic sectors. From gadgets and self-service kiosks to online chatbots, AI brings several benefits:
Automation streamlines internal workflows, enhancing efficiency.
AI takes over tasks traditionally handled by employees, reducing labor costs.
Minimized incidents and errors due to human oversight.
These advantages have spurred the integration of neural networks into specific tasks within cryptocurrency projects. The market comprises various participants, including long-term investors and arbitrage practitioners.
Now, let's delve into how replacing a market maker with AI can benefit project development and investors alike.
Automating trading and transaction closures under AI control is pivotal for market makers aiming to increase trade volume and profits. AI replicates transaction processes swiftly, significantly reducing processing times compared to human interaction.
Implementing an autonomous AI algorithm for trading automation involves integrating bots for obtaining initial transaction data (e.g., cryptocurrency pair exchange rates, transaction volumes) directly from the platform. Placing this autonomous neural network on a separate server and synchronizing it with the platform via API ensures operational transparency and reliability.
AI also plays a crucial role in forecasting exchange rates and managing risks. Long-term investors rely on accurate predictions to capitalize on market fluctuations. By analyzing vast datasets, AI enhances forecasting accuracy, minimizing risks associated with market volatility.
For young cryptocurrency projects, AI presents an alternative to traditional market makers who typically prioritize established cryptocurrencies like Bitcoin and Ethereum. Implementing AI reduces dependency on market makers, mitigating risks associated with inexperienced or opportunistic actors.
While AI's current capabilities allow for specific market-making functions such as trade closures and accurate forecasting, broader applications require further development. Therefore, platforms like EARTH TON strategically leverage AI for secondary market processes while employing traditional methods for primary operations.
In conclusion, while AI can perform certain market maker functions effectively, complete replacement remains a distant goal. Nonetheless, its role as a reliable assistant is increasingly recognized across cryptocurrency platforms, like **EARTH TON**. Future advancements in AI technology will likely redefine its impact in various sectors, including cryptocurrency markets. | thejefry | |
1,920,269 | Why It Might Be Time To Remove GitHub Copilot From Your Code Editor | Over the past few years, AI-powered coding assistants like GitHub Copilot have revolutionized how we... | 0 | 2024-07-11T22:23:48 | https://dev.to/fido1hn/why-it-might-be-time-to-remove-gitbhub-copilot-from-your-code-editor-52j4 | programming, vscode, githubcopilot | Over the past few years, AI-powered coding assistants like GitHub Copilot have revolutionized how we code. These tools have undoubtedly offered some benefits, such as providing code suggestions and helping us navigate coding hurdles more efficiently. However, they are not without their drawbacks, which raises the question: Is it time to reconsider our reliance on GitHub Copilot in our integrated development environments?
## Understanding the implications of AI assistance:
1. **Short-term memory:** It’s common for developers to use coding assistants like GitHub Copilot to generate code snippets or even entire functions. While this may seem efficient at first glance, it can also make it more challenging to remember the specific implementation details of a project. This is particularly concerning when you need to revisit or debug your code in the future.
2. **Limitations of Large Language Models (LLMs):** Although LLMs have made significant strides in generating human-like text, they aren’t infallible, especially when it comes to writing code. GitHub Copilot can offer useful code suggestions, but it’s essential to remember that these suggestions may not always be the most suitable or efficient solutions for a specific problem. Blindly accepting these suggestions might lead to issues with your code in the long run.
3. **Skill atrophy:** Overreliance on coding assistants like GitHub Copilot can dull your problem-solving skills as a developer. By leaning too heavily on these tools, you may lose the ability to think critically and independently when solving complex coding challenges. This can become particularly problematic during job interviews, where you might struggle with programming tasks in unfamiliar environments that don’t offer AI assistance.
## The positive side of coding assistants:
Despite the potential issues mentioned above, it’s important to acknowledge the benefits of coding assistants like GitHub Copilot. When used judiciously, these tools can be valuable resources for:
1. **Code research and discovery:** AI-powered coding assistants are excellent for answering coding-related questions and suggesting implementation ideas. They can serve as a starting point to explore possible solutions and help you build a foundation for your projects.
2. **Efficient code generation:** In cases where you’re working with repetitive tasks or boilerplate code, GitHub Copilot can be a real time-saver. By generating code snippets, it can streamline your workflow and allow you to focus on more complex aspects of your project.
## Conclusion:
GitHub Copilot and similar AI-driven coding assistants have undoubtedly transformed how developers write code. However, it’s crucial to strike a balance between utilizing these powerful tools and nurturing your own coding abilities. As you continue to leverage the power of AI in your development process, keep in mind the importance of honing your problem-solving skills, maintaining control over your projects, and critically evaluating AI-generated code suggestions to ensure optimal outcomes.
Some developers have seen a significant leap in their understanding of the projects they work on, and in their efficiency in general, when they have either removed GitHub Copilot from their editor or generally ignored its suggestions while coding.
The below tweet is from The Primeagen a Senior Developer who worked at Netflix and is well respected in the community. It hints clearly at this topic.
{% embed https://x.com/ThePrimeagen/status/1810048240739602466 %}
Just some food for thought, Thank you for reading, and Happy Coding ❤️ | fido1hn |
1,920,270 | Want to get started as a Data Engineer | Level Up Your Data Engineering Game! Get the Basics Down: Dive into data engineering... | 0 | 2024-07-11T22:26:56 | https://dev.to/johnscode/want-to-get-started-as-a-data-engineer-1amn | datascience, machinelearning, ai | Level Up Your Data Engineering Game!
Get the Basics Down:
- Dive into data engineering essentials
- Play around with big data tools and cloud stuff
- Get comfy with SQL and a coding language (Python's a good bet)
Pick a Cool Problem:
- Choose something that gets you excited (sports stats, stock market, food delivery, weather, crypto, etc.)
- Make sure it's meaty enough to show off your skills
Hunt Down Some Data:
- Find a juicy dataset or API for your chosen topic
- The messier and bigger, the better (within reason!)
Build Your Data Playground:
- Whip up a smart data model
- Set it up in PostgreSQL (or your database of choice)
- Make it sing with some optimization magic
Craft a Slick ETL Pipeline:
- Something like this could work: _Raw data → S3 → Glue (Spark) → S3 (transformed) → Crawler → Athena_
- Feel free to mix and match tools to fit your style
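To make the pipeline idea concrete, here's a toy, local stand-in for that flow (the sample CSV, field names, and derived column are invented for the example): "raw" CSV in, a small transform, "curated" JSON out. It has the same extract → transform → load shape as the cloud version, minus S3/Glue/Athena:

```python
import csv
import io
import json

# Invented sample data standing in for a raw object in S3
raw_csv = """symbol,price,volume
BTC,57000.5,120
ETH,3100.25,450
DOGE,0.12,99999
"""

# Extract: parse the raw CSV into dict rows
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: cast types and add a derived field
transformed = [
    {
        "symbol": r["symbol"],
        "price": float(r["price"]),
        "volume": int(r["volume"]),
        "notional": float(r["price"]) * int(r["volume"]),
    }
    for r in rows
]

# Load: serialize the curated records (stand-in for the transformed bucket)
curated = json.dumps(transformed, indent=2)
print(curated)
```

Swap the string for S3 reads, the list comprehension for a Spark job, and the JSON dump for Parquet writes, and you have the cloud pipeline in miniature.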
Show It Off:
- Document your journey (the good, the bad, and the ugly)
- Whip up some cool visuals to make your data pop
- Throw it up on GitHub or your own site
- Blog about it if you're feeling extra fancy
Make It Count:
- Slap that project on your resume
- Use it to dazzle interviewers with your mad skills
- Keep tinkering with it to show you're always learning
There you have it! A recipe for data engineering awesomeness. Now go out there and build something cool!
What else would you do? A different stack? Let me know in the comments. | johnscode |
1,920,272 | Apprenticeship & Beyond👨🏾💻 | What does apprenticeship mean to you? To me, being an apprentice means being a very specialized type... | 0 | 2024-07-11T22:58:55 | https://dev.to/taariqelliott/apprenticeship-beyond-30ol | p5js, javascript, creativecoding, generativeart | **_What does apprenticeship mean to you?_**
To me, being an apprentice means being a very specialized type of student. It involves being under the supervision and guidance of someone who is highly knowledgeable in a particular field. An apprenticeship is a way to learn the intricacies of that field while practicing to become proficient enough to work with this set of skills. Currently, as an apprentice, I feel that my developer skills have been validated significantly. Being entrusted by an organization to learn software development under the wings of much more experienced engineers has lifted a huge weight off my shoulders, allowing me to focus on what’s to come and what the future will require from me.
**_What kind of skills do you hope to build during your apprenticeship?_**
One skill I hope to develop is the ability to work better in teams of engineers. An important aspect behind any successful product or company is usually a dedicated team of skilled individuals who all understand the importance of synergy. I love being in environments where I can focus on a specific task instead of having to do everything myself. I’ve worked on many different solo projects as a bootcamp grad to practice and learn different languages, but one thing that often happens is getting stuck. This can lead to fatigue, and eventually, many of these projects get shelved. Being part of a team will allow me to leverage the array of skills my teammates possess, enabling me to do my job better. I think the word for that is efficiency!
Another skill I hope to foster during this apprenticeship is gaining a much deeper understanding of Python as well as JavaScript. These two languages are my favorites because of the wide range of applications you can build with them alone. From music to art to communication applications, both of these languages have proven why they're the two most popular. I’ve worked in the non-profit music education space, and seeing how code can transform the minds of the youth is incredible. I hope to build a more solid foundation in these skills I'm acquiring so that I may also pass the knowledge down the line.
**_Why might you select a particular blog publishing platform? What features do you find most valuable?_**
I’ve chosen [dev.to](https://dev.to/) as my blog platform of choice because it ranks pretty high in Google search—just kidding! There seems to be a huge community of people who love this platform, as well as it being accessible, which to me are two very important things that the world needs more of. Community allows you to feel seen and heard and is integral for sharing information. Accessibility allows for that information to be spread to different audiences who may have never been able to connect without a certain resource, which again, I would love to see more of in my everyday life. | taariqelliott |
1,920,273 | Introducing the Samsung Galaxy Watch Ultra: A Powerhouse of Smartwatch Innovation | ** Samsung Watch Ultra ** The highly anticipated Samsung Galaxy Unpacked 2024 event has... | 0 | 2024-07-11T22:58:33 | https://dev.to/tomandjerry36/introducing-the-samsung-galaxy-watch-ultra-a-powerhouse-of-smartwatch-innovation-5a20 | ai, samsung, smartwatch, news | ## Samsung Watch Ultra

The highly anticipated Samsung Galaxy Unpacked 2024 event has just unveiled the latest addition to the Galaxy Watch lineup — the remarkable Galaxy Watch Ultra. This premium smartwatch is poised to redefine the standards in the wearable technology market, offering a seamless blend of cutting-edge features, robust performance, and a striking design.
## Captivating Design and Display
The Galaxy Watch Ultra immediately catches the eye with its distinctive rounded square titanium case, exuding a sense of rugged sophistication. This premium build not only adds to the watch’s durability but also sets it apart from the typical circular smartwatch designs. Complementing the striking case is a circular display that provides a unique and immersive viewing experience, allowing users to effortlessly navigate through the watch’s various functions.
Furthering the user experience, the Galaxy Watch Ultra features a Quick Button, strategically placed for easy access during workouts and other activities. This button enables users to quickly control essential functions, such as the Samsung Health app and the built-in flashlight mode, without interrupting their flow.
## Cutting-Edge Health and Fitness Tracking
At the heart of the Galaxy Watch Ultra lies its advanced health and fitness tracking capabilities. The device is equipped with a state-of-the-art BioActive Sensor, which offers precise heart rate monitoring for both exercise and sleep tracking. This sensor plays a crucial role in the watch’s ability to detect sleep apnea, providing users with valuable insights into their sleep patterns and overall well-being.
The Galaxy Watch Ultra also boasts an Irregular Heart Rhythm Notification (IHRN) feature, which can detect irregular heart rhythms suggestive of atrial fibrillation. This function, combined with real-time heart rate monitoring and alerts for abnormally high or low heart rates, empowers users to stay on top of their cardiovascular health.
For cycling enthusiasts, the Galaxy Watch Ultra introduces a new Functional Threshold Power (FTP) metric, allowing users to track their cycling performance and progress with greater accuracy.
## Unparalleled Performance and Battery Life
Powering the Galaxy Watch Ultra is the Exynos W1000 processor, which delivers exceptional performance and efficiency. This advanced chipset, coupled with the watch’s ample 2GB of RAM and 32GB of storage, ensures a smooth and responsive user experience.
Complementing the impressive performance is the Galaxy Watch Ultra’s remarkable battery life. The device boasts up to 16 hours of battery life during outdoor workouts and up to 60 hours with typical use and the always-on display activated. This extended battery life, combined with the watch’s dual-GPS system for enhanced accuracy, makes it an ideal companion for active lifestyles and outdoor adventures.
## Intelligent and Connected Experiences
The Galaxy Watch Ultra takes the smartwatch experience to new heights with its integration of Samsung’s cutting-edge Galaxy AI technology. This innovative feature powers two novel functions: the “Energy Score” and “Wellness Tips.”
The “Energy Score” provides users with a comprehensive assessment of their daily physical and mental well-being, assigning a score out of 100 based on seven key indicators. This feature empowers users to better understand their overall health and make informed decisions to improve their lifestyle.
The “Wellness Tips” function acts as a personalized wellness coach, offering tailored recommendations on exercise, sleep, stress management, hydration, and more. This intelligent guidance helps users optimize their daily routines and achieve their health and fitness goals.
Furthermore, the Galaxy Watch Ultra seamlessly integrates with the new Galaxy Ring, allowing users to unlock a deeper level of health insights and personalized recommendations.
## Durability and Water Resistance
Designed for active lifestyles, the Galaxy Watch Ultra boasts exceptional durability and water resistance. The titanium Grade 4 frame ensures the watch can withstand the rigours of daily wear and tear, while the 10ATM water resistance rating makes it suitable for a variety of water-based activities, from swimming to water sports.
## Pricing and Availability
The Samsung Galaxy Watch Ultra is priced at $649.99 and will be available in three stunning colour options: Titanium Gray, Titanium White, and Titanium Silver. Consumers can pre-order the watch starting today, with the official launch scheduled for July 24, 2024.
## Conclusion

The Samsung Galaxy Watch Ultra is a true game-changer in the smartwatch market. With its captivating design, cutting-edge health and fitness tracking capabilities, unparalleled performance, and intelligent connectivity features, this premium wearable device sets a new standard for what a smartwatch can achieve. Whether you’re an avid fitness enthusiast, a health-conscious individual, or simply someone seeking a stylish and versatile smartwatch, the Galaxy Watch Ultra is poised to redefine your expectations and elevate your connected experiences. | tomandjerry36 |
1,920,274 | Day 2 - JavaScript Essential Training | 1. Async vs. Defer When adding scripts to the head of your HTML document, it's crucial to... | 0 | 2024-07-11T23:03:36 | https://dev.to/ryoichihomma/day-2-javascript-essential-training-33n4 | javascript, linkedinlearning, jstips, developer | ## 1. Async vs. Defer
When adding scripts to the head of your HTML document, it's crucial to ensure they don't block the page rendering. This is where **`async`** and **`defer`** attributes come into play.
- **Async:** Scripts with **`async`** load in parallel with the HTML parsing and execute as soon as they’re downloaded. Use **`async`** for scripts that don't rely on others or the DOM.
`<script src="example.js" async></script>`
- **Defer:** Scripts with **`defer`** also load in parallel but execute in the order they appear in the document after the HTML is fully parsed. Use **`defer`** for scripts that depend on the DOM or other scripts.
`<script src="example.js" defer></script>`
## 2. Use of type="module"
Using **`type="module"`** allows you to leverage ES6 modules, enabling better organization and modularization of your code.
- **Modules:** By using **`type="module"`**, you can import and export functions, objects, or primitives between different files. This makes your code more maintainable and reusable.
```html
<script type="module">
import { myFunction } from './module.js';
myFunction();
</script>
```
## 3. Use of Object
Objects are fundamental in JavaScript for storing collections of data and more complex entities.
- **Object Syntax:** An object is a collection of key-value pairs, where values can be properties or methods.
```javascript
const person = {
name: 'John',
age: 30,
greet: function() {
console.log('Hello!');
}
};
person.greet();
```
## 4. Class + Constructor
ES6 introduced classes, providing a clearer and more concise syntax for creating objects and handling inheritance.
- **Class and Constructor:** A class in JavaScript is essentially a blueprint for creating objects, and the constructor method initializes new objects.
```javascript
class Person {
constructor(name, age) {
this.name = name;
this.age = age;
}
greet() {
console.log(`Hello, my name is ${this.name}`);
}
}
const john = new Person('John', 30);
john.greet();
```
## 5. Extends + Super
Classes can extend other classes, inheriting their properties and methods. The **`super`** keyword is used to call the constructor of the parent class.
- **Extends and Super:** This allows for hierarchical class structures and code reuse.
```javascript
class Employee extends Person {
constructor(name, age, jobTitle) {
super(name, age);
this.jobTitle = jobTitle;
}
work() {
console.log(`${this.name} is working as a ${this.jobTitle}`);
}
}
const jane = new Employee('Jane', 28, 'Software Developer');
jane.greet();
jane.work();
```
## 6. getElementById() → querySelector() & getElementsByClassName() → querySelectorAll()
Modern JavaScript provides more powerful and flexible ways to select DOM elements.
- **querySelector() and querySelectorAll():** These methods allow you to use CSS selectors, providing a consistent and concise way to select elements.
```javascript
// Old way
const elementById = document.getElementById('myId');
const elementsByClassName = document.getElementsByClassName('myClass');
// New way
const element = document.querySelector('#myId');
const elements = document.querySelectorAll('.myClass');
```
The modern methods are preferred because they provide a more uniform way to select elements and support complex selectors.
## Conclusion
Today's exploration of JavaScript concepts like async/defer, modules, objects, classes, and modern DOM methods has been enlightening. These concepts are crucial for writing efficient, organized, and maintainable code.
I hope this summary helps all readers remember these key points, including me when I'll come back here to recall them in the future. Stay tuned for more updates as I continue my journey towards full-stack development.
Happy coding!💻 | ryoichihomma |
1,920,275 | Building a Tic-Tac-Toe Terminal Game using Python | Introduction My name's Derek and I'm an aspiring software engineer! Recently I've been... | 0 | 2024-07-11T23:06:17 | https://dev.to/keelan-derek/building-a-tic-tac-toe-terminal-game-using-python-55of | beginners, python, ai, learning | ## Introduction
My name's Derek and I'm an aspiring software engineer! Recently I've been trying very hard to learn Python and the fundamentals of software development through an online course. Having graduated from college two years ago with a Bachelor's in Business Computing and Information Systems, I am relatively familiar with the software development process and have some IT skills; but I have quite a bit to learn on the technical side when it comes to programming and problem solving. So, I decided to take the aforementioned course as a means of supplementing the knowledge and skills I was able to pick up while in college and making my resume stand out more. Since practice makes perfect, especially in the realm of IT, I decided to undertake a project in support of the programming fundamentals I've been learning in the course. And for this project I decided to build a tic-tac-toe terminal game: something that would be fun yet challenging to do. I am writing this post to share the finished product I was able to conjure up (with some help) and get some feedback on my execution of the project along with how best to proceed with my journey of becoming a software engineer. So let's dive right in!
## A Description About the Code
The way that the program was built was by breaking down the overall solution into a number of smaller components called functions that all work together to form a working application:

- _**insertLetter**_: allows the player to place a letter onto the board.
- _**spaceIsFree**_: checks whether a space is free before an insert is made into that spot.
- _**printBoard**_: draws the tic-tac-toe board and updates it with moves made by the player and the computer.
- _**isWinner**_: keeps track of the moves made on the board to determine whether the player or the computer is the winner.
- _**playerMove**_: allows the player to make their move on the board.
- _**compMove**_: allows an AI opponent (i.e., the computer) to make moves with the core aim of winning the game.
- _**selectRandom**_: enables the opponent to make moves at random that could potentially lead to a win.
- _**isBoardFull**_: checks whether the board has been filled with moves and there are no more empty spaces so the game can be brought to an end.
- _**resetBoard**_: clears the board should a player want to play another game of tic-tac-toe.
- _**main**_: makes use of nearly all the other functions in order to allow a game of tic-tac-toe to be played.
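To give a flavor of how a few of these functions might fit together, here is a minimal sketch. The names mirror the article's functions, but the bodies are illustrative assumptions rather than the repository's actual code, and the board is assumed to be a 10-element list with index 0 unused so positions match the keys 1-9:

```python
def space_is_free(board, position):
    """Check whether a board cell (1-9) is empty before inserting."""
    return board[position] == ' '

def insert_letter(board, letter, position):
    """Place a letter on the board if the spot is free."""
    if space_is_free(board, position):
        board[position] = letter
        return True
    return False

def is_winner(board, letter):
    """Return True if the given letter occupies any winning line."""
    wins = [(1, 2, 3), (4, 5, 6), (7, 8, 9),
            (1, 4, 7), (2, 5, 8), (3, 6, 9),
            (1, 5, 9), (3, 5, 7)]
    return any(all(board[i] == letter for i in line) for line in wins)

board = [' '] * 10  # index 0 unused so positions match 1-9
insert_letter(board, 'X', 1)
insert_letter(board, 'X', 5)
insert_letter(board, 'X', 9)
print(is_winner(board, 'X'))  # True: diagonal 1-5-9
```

The full game loop in `main` would alternate between `playerMove` and `compMove`, checking `is_winner` and `isBoardFull` after each turn.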
To review the code for yourself or to be able to play the program for yourself, here's a link to the GitHub repository for the Tic-Tac-Toe game: [Tic-Tac-Toe Terminal Game](https://github.com/Keelan-Derek/Tic-Tac-Toe).
## Conclusion
While this project was a bit challenging, the making of this application taught me a lot about what goes into building an application: I had to think of an idea, understand the problem at hand and break it down into solution components, control different versions of the application while building the actual application, troubleshoot bugs and errors, and maintain the application so that it worked efficiently and effectively post development. That said, I have much more to learn and a lot more practical experience to gain as a junior software engineer. If any of you happens to be interested in mentoring me, offering me a paid internship position, or giving me advice, I'd be more than grateful. Have a nice one and hope you enjoyed this amateur blog post. | keelan-derek |
1,920,276 | [Game of Purpose] Day 54 | Today I was working on using Cpp files as components. And I managed to do it. However, it is not that... | 27,434 | 2024-07-11T23:07:20 | https://dev.to/humberd/game-of-purpose-day-54-1ln | gamedev | Today I was working on using Cpp files as components. And I managed to do it. However, it is not that easy as it is in blueprints. In blueprints all the async execution flow is very seamless. However, in Cpp it seems to be harder.
Here I made a target go to the first point in a Spline.
```cpp
void USplineNavigator::NavigateToSplinePoint()
{
PrintString(TEXT("NavigateToSplinePoint"));
check(targetSpline != nullptr);
auto targetSplineTotalLength = targetSpline->GetSplineLength();
// starting the movement
if (NextTargetDistance < 0.0f)
{
NextTargetDistance = 0.0f;
}
// looping
else if (NextTargetDistance >= targetSplineTotalLength)
{
NextTargetDistance = 0.0f;
}
else
{
NextTargetDistance = FMath::Clamp(NextTargetDistance + DistanceToMove, 0.0f, targetSplineTotalLength);
}
auto targetWorldLocation = targetSpline->GetLocationAtDistanceAlongSpline(
NextTargetDistance, ESplineCoordinateSpace::World);
AAIController* AIController = Cast<AAIController>(GetOwner()->GetInstigatorController());
if (AIController == nullptr)
{
PrintString(TEXT("AIController is null"));
return;
}
UAIBlueprintHelperLibrary::SimpleMoveToLocation(AIController, targetWorldLocation);
}
```
Drawing debug arrows doesn't seem to work at all.
```cpp
DrawDebugDirectionalArrow(
GetOwner()->GetWorld(),
FVector(0.0f, 0.0f, 0.0f),
FVector(1500.f, 1500.f, 300.f),
100.0f,
FColor::Red,
false,
100,
1,
10.0f
);
```
| humberd |
1,920,277 | AI Testing | Artificial intelligence (AI) is transforming the landscape of software testing, offering a more... | 0 | 2024-07-11T23:14:58 | https://dev.to/jeff_handy_6460add778baca/ai-testing-4699 | Artificial intelligence (AI) is transforming the landscape of software testing, offering a more efficient and effective alternative to traditional manual and automated testing methods. As modern software applications become increasingly complex and development cycles accelerate, AI-driven testing stands out by harnessing advanced algorithms, machine learning, and data analysis to overcome these challenges. By automating and optimizing various aspects of the testing process, AI empowers engineers to work more efficiently, detect defects more effectively, and ensure comprehensive test coverage. From intelligent test case generation and real-time anomaly detection to visual analysis and living documentation, AI-powered testing tools and techniques adapt to the ever-changing world of software development, enabling teams to deliver higher-quality applications, reduce time to market, and enhance the overall user experience, ultimately raising the bar for software quality assurance.
## Test Case Generation with AI
One of the most significant advantages of [AI-driven testing](https://qualiti.ai) is its ability to revolutionize test case generation. By leveraging user data and analyzing real-world customer interactions, AI algorithms can create comprehensive and relevant test cases that cover every possible product path. This data-driven approach ensures that testing efforts are focused on the areas that matter most to end-users, resulting in a more efficient and effective testing process.
AI-generated test cases not only cover the most frequently used product paths but also account for edge cases and scenarios that human testers might overlook. This thorough coverage helps uncover hidden bugs and vulnerabilities, leading to more robust and reliable software. Moreover, AI can generate test cases at a much faster rate compared to manual methods, enabling teams to increase their velocity and accelerate the delivery of their projects and features.
To illustrate this concept, let's consider an example from an AI-powered testing tool like Qualiti. The tool generates test cases by collecting data in real-time from the targeted application. An embedded script within the application monitors and records user actions, which are then passed to a machine learning model. The model identifies patterns in user behavior and, over time, refines its understanding as new data is continuously collected. Once fully trained, the model can generate and maintain test cases based on the most up-to-date user behavior patterns.
Upon closer examination of individual test cases generated by the AI tool, we can see how it identifies usage patterns during live transactions and breaks them down into specific steps. For instance, a test case for searching an employee's name within an application might include the following steps:
1. Navigate to the product admin page
2. Click on the menu
3. Type the employee's name
4. Click on the search result that appears
5. Click on the search button
The AI-powered tool not only generates these test cases but also implements and executes them during each test run. The real power of AI lies in its ability to adapt and refine test cases based on the latest data, ensuring that testing remains current and aligned with the evolving needs of the user base. This dynamic approach to test case generation sets AI-driven testing apart from traditional methods, making it an invaluable asset in the quest for higher-quality software.
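The recorded steps above can be sketched as plain data plus a tiny replayer. This is a hypothetical illustration only; neither the schema nor the driver reflects Qualiti's actual internals. It simply shows how recorded user actions can become replayable test steps:

```python
# Hypothetical representation of an AI-generated test case as data.
test_case = [
    ("navigate", "/admin"),
    ("click", "#menu"),
    ("type", "#employee-search", "Jane Doe"),
    ("click", ".search-suggestion"),
    ("click", "#search-button"),
]

def run(steps, driver):
    """Replay recorded steps against any driver exposing the actions."""
    for step in steps:
        action, *args = step
        getattr(driver, action)(*args)

class FakeDriver:
    """Stand-in for a real browser driver; just records what it is asked to do."""
    def __init__(self):
        self.log = []
    def navigate(self, url):
        self.log.append(f"navigate {url}")
    def click(self, selector):
        self.log.append(f"click {selector}")
    def type(self, selector, text):
        self.log.append(f"type {text!r} into {selector}")

driver = FakeDriver()
run(test_case, driver)
print(len(driver.log))  # 5
```

Because the steps are data rather than hand-written code, an AI model can regenerate or reorder them as user behavior changes without anyone editing test scripts.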
## Easy Test Maintenance with AI
Maintaining test cases is a time-consuming and resource-intensive task that often requires significant effort from engineers. However, with the introduction of AI in software testing, this burden can be greatly reduced. AI-powered tools can take over the responsibility of fixing and updating test cases as needed, without requiring manual input from the development team. This allows engineers to focus on more critical tasks, ultimately improving overall team productivity.
One of the key advantages of AI in test maintenance is its ability to automatically adapt to changes in the application under test. As new features are added, user interfaces are modified, or the test environment evolves, AI algorithms can quickly identify these changes and update the affected test cases accordingly. This ensures that the testing process remains aligned with the current state of the software, reducing the risk of false positives and false negatives.
To demonstrate the potential of AI in test maintenance, let's consider an example using the Qualiti testing tool. In a given test case, each automated test step is designed to interact with specific elements within the application's user interface. For instance, a test step might click on an element by searching for a specific URL reference (href) on the page. However, if this reference were to change due to an update in the application, the test would fail.
Traditionally, when a test fails, an engineer would need to manually investigate the cause, determine if the failure indicates an actual defect, and update the test code if necessary. This process often involves locating the new href and modifying the test steps accordingly. However, with AI-powered tools like Qualiti, this manual process can be automated. The AI will automatically rerun the tests after updating the selector, ensuring that the tests account for any recent changes in the application.
As the software evolves and new requirements emerge, AI-driven test maintenance can rapidly adapt test cases to reflect these changes. This dynamic approach to test maintenance ensures that the testing process remains efficient, accurate, and up-to-date, even in the face of constantly shifting software development landscapes.
By leveraging AI for test maintenance, software development teams can significantly reduce the time and effort required to keep their test suites in sync with the ever-changing nature of their applications. This, in turn, allows them to allocate their resources more effectively, focusing on delivering high-quality software that meets the needs of their users.
## Visual Analysis with AI
AI-powered visual analysis is another game-changer in the realm of software testing. By leveraging advanced algorithms, AI can interpret and analyze the product's user interface, helping engineers identify issues and gaps in test coverage that might otherwise go unnoticed. This innovative approach to testing enables a more comprehensive evaluation of the user experience, ensuring that the software meets the expected visual and functional standards.
One of the primary advantages of using AI for visual analysis is its ability to detect a wide range of issues that traditional testing methods might overlook. AI algorithms can scan and analyze screenshots, videos, or live application feeds, identifying visual anomalies, layout inconsistencies, and functional problems. For example, AI can detect incorrect alignments, color discrepancies, font inconsistencies, and broken layouts – all of which can negatively impact the user experience. By flagging these issues early in the development process, AI-driven visual analysis empowers teams to address them promptly, resulting in a more polished and user-friendly final product.
In addition to identifying defects, AI-powered visual analysis can also help pinpoint gaps in test coverage. By highlighting areas of the user interface that have not been thoroughly tested, AI enables teams to optimize their testing efforts and ensure that all critical aspects of the application receive adequate attention. This targeted approach to testing not only improves the overall quality of the software but also helps teams allocate their resources more efficiently.
To illustrate the potential of AI in visual analysis, consider a scenario where a submit button on a form becomes invisible due to a coding error. An AI-powered testing tool can visually assess the page, recognize the missing button, and promptly alert the development team to the issue. By catching such problems early, AI-driven visual analysis can prevent them from making their way into production, thereby minimizing the risk of user frustration and reducing the need for costly post-release fixes.
As software applications become increasingly complex and user expectations continue to rise, the importance of delivering visually appealing and functionally sound products cannot be overstated. By harnessing the power of AI for visual analysis, software development teams can streamline their testing processes, identify a broader range of issues, and ultimately deliver higher-quality applications that meet the ever-evolving needs of their users.
## Conclusion
The integration of artificial intelligence into software testing has revolutionized the way developers approach quality assurance. By leveraging advanced algorithms, machine learning, and data analysis, AI-driven testing tools and techniques have proven to be a powerful alternative to traditional manual and automated testing methods. From intelligent test case generation and real-time anomaly detection to visual analysis and living documentation, AI has demonstrated its ability to streamline and optimize various aspects of the testing process.
As software applications continue to grow in complexity and development cycles become increasingly rapid, the adoption of AI in testing has become a necessity rather than a luxury. By automating and adapting to the ever-changing landscape of software development, AI empowers engineers to work more efficiently, uncover defects more effectively, and ensure comprehensive test coverage. This, in turn, enables teams to deliver higher-quality applications, reduce time to market, and enhance the overall user experience.
In conclusion, the future of software testing lies in the hands of artificial intelligence. As AI technologies continue to evolve and mature, we can expect to see even more innovative applications and breakthroughs in the field of software quality assurance. By embracing AI-driven testing, software development teams can not only keep pace with the demands of modern software development but also set new standards for quality, reliability, and user satisfaction.
| jeff_handy_6460add778baca | |
1,920,278 | Effortless HTTP Client Testing in Go | Explore the benefits of VCR testing in Go with dnaeon/go-vcr | 0 | 2024-07-11T23:16:03 | https://dev.to/calvinmclean/effortless-http-client-testing-in-go-4d75 | go, testing, tutorial, learning | ## Introduction
As a software engineer, you are probably familiar with writing code to interact with external HTTP services. After all, it is one of the most common things we do! Whether it's fetching data, processing payments with a provider, or automating social media posts, our applications almost always involve external HTTP requests. In order for our software to be reliable and maintainable, we need a way to test the code responsible for executing these requests and handling the errors that could occur. This leaves us with a few options:
- Implement a client wrapper that can be mocked by the main application code, which still leaves a gap in testing
- Test response parsing and handling separate from actual request execution. While it's probably a good idea to test this lower-level unit individually, it'd be nice if that could easily be covered along with the actual requests
- Move tests to integration testing which can slow down development and is unable to test some error scenarios and may be impacted by the reliability of other services
These options aren't terrible, especially if they can all be used together, but we have a better option: VCR testing.
VCR testing, named after the videocassette recorder, is a type of mock testing that generates [test fixtures](https://en.wikipedia.org/wiki/Test_fixture) from actual requests. The fixtures record the request and response to automatically reuse in future tests. Although you might have to modify the fixtures afterwards to handle dynamic time-based inputs or remove credentials, it is much simpler than creating mocks from scratch. There are a few additional benefits to VCR testing:
- Execute your code all the way down to the HTTP level, so you can test your application end-to-end
- You can take real-world responses and modify the generated fixtures to increase response time, cause rate limiting, etc. to test error scenarios that don't often occur organically
- If your code uses an external package/library for interacting with an API, you might not know exactly what a request and response look like, so VCR testing can automatically figure that out
- Generated fixtures can also be used for debugging tests and making sure your code executes the expected request
## Deeper Dive using Go
Now that you see the motivation behind VCR testing, let's dig deeper into how to implement it in Go using [`dnaeon/go-vcr`](https://github.com/dnaeon/go-vcr).
This library integrates seamlessly into any HTTP client code. If your client library code doesn't already allow setting the `*http.Client` or the Client's `http.Transport`, you should add that now.
For those that aren't familiar, an `http.Transport` is an implementation of `http.RoundTripper`, which is basically a client-side middleware that can access the request/response. It is useful for implementing automatic retries on 500-level or 429 (rate-limit) responses, or adding metrics and logging around requests. In this case, it allows `go-vcr` to re-route requests to its own in-process HTTP server.
### URL Shortener Example
Let's get started on a simple example. We want to create a package that makes requests to the free https://cleanuri.com API. This package will provide one function: `Shorten(string) (string, error)`
Since this is a free API, maybe we can just test it by making requests directly to the server? This might work, but can result in a few problems:
- The server has a rate limit of 2 requests/second which could be an issue if we have a lot of tests
- If the server goes down or takes a while to respond, our tests could fail
- Although the shortened URLs are cached, we have no guarantee that we will get the same output every time
- It's just rude to send unnecessary traffic to a free API!
Ok, what if we create an interface and mock it? Our package is incredibly simple, so this would overcomplicate it. Since the lowest-level thing we use is `*http.Client`, we would have to define a new interface around it and implement a mock.
Another option is to override the target URL to use a local port served by `httptest.Server`. This is basically a simplified version of what `go-vcr` does and would be sufficient in our simple case, but won't be maintainable in more complex scenarios. Even in this example, you'll see how managing generated fixtures is easier than managing different mock server implementations.
Since our interface is already defined and we know some valid input/output from trying the UI at https://cleanuri.com, this is a great opportunity to practice [test-driven development](https://dev.to/calvinmclean/test-driven-api-development-in-go-1fb8). We'll start by implementing a simple test for our `Shorten` function:
```go
package shortener_test
func TestShorten(t *testing.T) {
shortened, err := shortener.Shorten("https://dev.to/calvinmclean")
if err != nil {
t.Errorf("unexpected error: %v", err)
}
if shortened != "https://cleanuri.com/7nPmQk" {
t.Errorf("unexpected result: %v", shortened)
}
}
```
Pretty easy! We know that the test will fail to compile because `shortener.Shorten` is not defined, but we run it anyways so fixing it will be more satisfying.
Finally, let's go ahead and implement this function:
```go
package shortener
var DefaultClient = http.DefaultClient
const address = "https://cleanuri.com/api/v1/shorten"
// Shorten will returned the shortened URL
func Shorten(targetURL string) (string, error) {
resp, err := DefaultClient.PostForm(
address,
url.Values{"url": []string{targetURL}},
)
if err != nil {
return "", err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return "", fmt.Errorf("unexpected response code: %d", resp.StatusCode)
}
var respData struct {
ResultURL string `json:"result_url"`
}
err = json.NewDecoder(resp.Body).Decode(&respData)
if err != nil {
return "", err
}
return respData.ResultURL, nil
}
```
Now our test passes! It's just as satisfying as I promised.
In order to start using VCR, we need to initialize the Recorder and override `shortener.DefaultClient` at the beginning of the test:
```go
func TestShorten(t *testing.T) {
r, err := recorder.New("fixtures/dev.to")
if err != nil {
t.Fatal(err)
}
defer func() {
require.NoError(t, r.Stop())
}()
if r.Mode() != recorder.ModeRecordOnce {
t.Fatal("Recorder should be in ModeRecordOnce")
}
shortener.DefaultClient = r.GetDefaultClient()
// ...
```
Run the test to generate `fixtures/dev.to.yaml` with details about the test's request and response. When we re-run the test, it uses the recorded response instead of reaching out to the server. Don't just take my word for it; turn off your computer's WiFi and re-run the tests!
You might also notice that the time it takes to run the test is relatively consistent since `go-vcr` records and replays the response duration. You can manually modify this field in the YAML to speed up the tests.
### Mocking Errors
To further demonstrate the benefits of this kind of testing, let's add another feature: retry after `429` response due to rate-limiting. Since we know the API's rate limit is per second, `Shorten` can automatically wait a second and retry if it receives a `429` response code.
I tried to reproduce this error using the API directly, but it seems like it responds with existing URLs from a cache before considering the rate limit. Rather than polluting the cache with bogus URLs, we can create our own mocks this time.
This is a simple process since we already have generated fixtures. After copying `fixtures/dev.to.yaml` to a new file, duplicate the successful request/response interaction and change the first response's code from `200` to `429`. This fixture mimics a successful retry after a rate-limiting failure.
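For reference, the edited fixture might look roughly like this (heavily abridged; the exact field names come from go-vcr's cassette format, so treat this as a sketch rather than a copy-paste target):

```yaml
---
version: 2
interactions:
  - request:
      method: POST
      url: https://cleanuri.com/api/v1/shorten
    response:
      code: 429   # changed from 200 to simulate rate-limiting
      body: ""
  - request:
      method: POST
      url: https://cleanuri.com/api/v1/shorten
    response:
      code: 200
      body: '{"result_url":"https://cleanuri.com/7nPmQk"}'
```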
The only difference between this test and the original test is the new fixture filename. The expected output is the same since `Shorten` should handle the error. This means we can throw the test in a loop to make it more dynamic:
```go
func TestShorten(t *testing.T) {
fixtures := []string{
"fixtures/dev.to",
"fixtures/rate_limit",
}
for _, fixture := range fixtures {
t.Run(fixture, func(t *testing.T) {
r, err := recorder.New(fixture)
if err != nil {
t.Fatal(err)
}
defer func() {
require.NoError(t, r.Stop())
}()
if r.Mode() != recorder.ModeRecordOnce {
t.Fatal("Recorder should be in ModeRecordOnce")
}
shortener.DefaultClient = r.GetDefaultClient()
shortened, err := shortener.Shorten("https://dev.to/calvinmclean")
if err != nil {
t.Errorf("unexpected error: %v", err)
}
if shortened != "https://cleanuri.com/7nPmQk" {
t.Errorf("unexpected result: %v", shortened)
}
})
}
}
```
Once again, the new test fails, this time due to the unhandled `429` response, so let's implement the new feature to make it pass. To keep things simple, our function handles the error with `time.Sleep` and a recursive call rather than dealing with the complexity of max retries and exponential backoff:
```go
func Shorten(targetURL string) (string, error) {
// ...
switch resp.StatusCode {
case http.StatusOK:
case http.StatusTooManyRequests:
time.Sleep(time.Second)
return Shorten(targetURL)
default:
return "", fmt.Errorf("unexpected response code: %d", resp.StatusCode)
}
// ...
```
Now run the tests again and see them pass!
Take it a step further on your own and try adding a test for a bad request, which will occur when using an invalid URL like `my-fake-url`.
The full code for this example (and the bad request test) is available [on GitHub](https://github.com/calvinmclean/calvinmclean.github.io/blob/main/examples/go-vcr-testing/shortener_test.go).
## Conclusion
The benefits of VCR testing are clear from just this simple example, but they are even more impactful when dealing with complex applications where the requests and responses are unwieldy. Rather than dealing with tedious mocks or opting for no tests at all, I encourage you to give this a try in your own applications. If you already rely on integration tests, getting started with VCR is even easier since you already have real requests that can generate fixtures.
Check out more documentation and examples in the package's GitHub repository: https://github.com/dnaeon/go-vcr
| calvinmclean |
1,920,279 | Top AI Testing Tools | AI has revolutionized QA testing as we know it. QA testing tools significantly impact application... | 0 | 2024-07-11T23:16:04 | https://dev.to/jeff_handy_6460add778baca/top-ai-testing-tools-1d44 | AI has revolutionized QA testing as we know it. QA testing tools significantly impact application performance, reliability, and quality. While the market for traditional software quality testing tools is mature and has a few well-known frameworks teams can use for many test use cases, AI testing tools have burst onto the scene to supercharge a team's testing efforts. Of course, this new suite of tools makes for a new set of decision points for testing teams to consider.
## AI Testing Tools vs. Traditional Frameworks
The advent of AI testing tools has brought about a significant shift in the world of QA testing. These cutting-edge tools offer a range of unique advantages over traditional, non-AI-powered automation frameworks. By leveraging the power of artificial intelligence, AI testing tools are capable of revolutionizing the way software teams approach quality assurance.
One of the most notable benefits of AI testing tools is their ability to automate test generation and maintenance. Traditional frameworks often require manual creation and upkeep of test cases, which can be time-consuming and prone to human error. In contrast, AI testing tools can automatically generate test cases based on the application's functionality and user behavior. This not only saves valuable time and resources but also ensures that the tests are comprehensive and up-to-date.
Another key advantage of AI testing tools is their proficiency in anomaly detection. By analyzing vast amounts of data and identifying patterns, these tools can quickly spot irregularities and potential issues that might otherwise go unnoticed. This proactive approach to defect detection allows teams to address problems early in the development cycle, reducing the risk of costly bugs making their way into production.
Furthermore, AI testing tools provide enhanced analytics and reporting capabilities. These tools can generate detailed insights into the application's performance, pinpointing areas that require attention and improvement. With the help of intuitive dashboards and visualizations, teams can easily track progress, identify trends, and make data-driven decisions to optimize their testing efforts.
While the benefits of AI testing tools are undeniable, it's important to note that creating and maintaining an in-house platform that leverages AI can be a significant undertaking. Unless an organization has extensive AI capabilities and resources, it may be more practical to opt for an off-the-shelf solution. These ready-made tools require minimal setup and onboarding, allowing teams to quickly reap the benefits of AI-powered testing without the need for extensive development and maintenance.
## Essential Features of AI Testing Tools
When considering the adoption of AI testing tools, it's crucial to evaluate the features they offer. These features can greatly impact the effectiveness and efficiency of your QA testing process. Let's explore nine essential features that AI testing tools should possess to deliver maximum value to your team.
### 1. Automated Test Framework Creation
AI testing tools should simplify the creation of automated test frameworks. By generating test cases based on the application's source code and user interactions, these tools eliminate the need for manual test design and implementation. This automation saves time and ensures consistent, accurate tests across the board.
### 2. Test Framework Maintenance
Maintaining an automated test framework can be challenging as applications evolve. AI testing tools address this issue by automatically updating test cases in response to changes in the application, such as new features or UI modifications. This self-maintaining capability reduces the manual workload on engineering teams, keeping the testing process aligned with the current state of the application.
### 3. Regression Test Management
Regression testing is vital for ensuring that code changes do not introduce unintended side effects. AI testing tools enhance regression testing by predicting potential areas of failure and detecting anomalies. By learning from past test results, these tools refine their test strategies to identify regressions more effectively.
### 4. Test Coverage Measurement
Measuring test coverage is essential for assessing the thoroughness of your testing efforts. AI testing tools provide visual dashboards that clearly display which parts of the application are being tested. This granular view of test results allows teams to understand the context of failures and anomalies, guiding targeted improvements in test coverage.
### 5. Integration with Existing Tools and CI/CD Pipelines
Seamless integration with existing tools and workflows is a key consideration when selecting an AI testing tool. Tools that integrate with popular project management software, such as Jira or ClickUp, streamline the development process and boost productivity. Additionally, native support for CI/CD pipelines, like GitHub or GitLab, enables continuous testing and faster feedback loops.
By carefully evaluating these essential features, teams can make informed decisions when adopting AI testing tools. The right tool will empower your QA team to deliver high-quality software more efficiently, ultimately leading to improved application performance and user satisfaction.
## Enhancing Testing Capabilities with AI
AI testing tools offer a range of advanced capabilities that can significantly enhance the quality and efficiency of your testing efforts. Let's explore how these tools can improve cross-platform and cross-browser testing, UI testing, and test result management.
### Cross-Platform and Cross-Browser Testing
Ensuring a consistent user experience across different platforms and browsers is a critical aspect of software testing. AI testing tools streamline this process by dynamically adjusting tests based on the unique behaviors of various operating systems and browsers. By automatically adapting to different environments, these tools provide comprehensive coverage and efficiently detect layout and functional discrepancies, saving testers valuable time and effort.
### UI Testing
User interface testing is crucial for ensuring a seamless and intuitive user experience. AI testing tools revolutionize UI testing by analyzing user interactions and identifying critical paths and high-usage functionalities. By prioritizing testing efforts based on user engagement, these tools focus on the areas that matter most to end-users. For example, in an e-commerce platform, AI can detect high user activity in specific sections, such as electronics or clothing, and allocate more testing resources to those areas. This targeted approach improves application quality and usability where it counts.
### Test Result Management
Effective management of test results is essential for identifying trends, issues, and areas for improvement. AI testing tools excel in this regard by providing intuitive visualizations, such as dashboards, charts, and screenshots, that make interpreting test outcomes a breeze. These tools clearly distinguish between successful and failed tests, allowing teams to quickly pinpoint and address problems. Moreover, the ability to view precisely what is being tested is particularly valuable for AI-generated tests, as it helps QA engineers and non-technical stakeholders understand and trust the tests created by the AI.
By leveraging the power of AI in cross-platform and cross-browser testing, UI testing, and test result management, teams can take their testing capabilities to new heights. These advanced features enable organizations to deliver high-quality software that meets the expectations of users across different devices and platforms while ensuring a delightful user experience. With AI testing tools, teams can focus their efforts on the most critical aspects of the application, ultimately leading to more efficient and effective testing processes.
## Conclusion
The rise of [AI testing tools](https://qualiti.ai) has brought about a paradigm shift in the world of QA testing. These powerful tools offer a wide range of benefits over traditional frameworks, enabling teams to streamline their testing processes, improve software quality, and deliver exceptional user experiences. By automating test generation, maintenance, and regression testing, AI testing tools save valuable time and resources while ensuring comprehensive test coverage.
When selecting an AI testing tool, it's crucial to consider the essential features that align with your team's needs. From seamless integration with existing tools and CI/CD pipelines to advanced capabilities in cross-platform and cross-browser testing, UI testing, and test result management, the right AI testing tool can revolutionize your QA efforts.
As the demand for high-quality software continues to grow, embracing AI testing tools becomes increasingly important. By leveraging the power of artificial intelligence, organizations can stay ahead of the curve, delivering robust and reliable applications that meet the ever-evolving needs of their users. The future of QA testing undoubtedly lies in the hands of AI, and those who adopt these cutting-edge tools will be well-positioned to succeed in the digital landscape.
| jeff_handy_6460add778baca | |
1,920,287 | GET BACK YOUR SCAMMED MONEY FROM FAKE INVESTMENT PLATFORM. | The loss of a significant investment, such as $170,000 in Bitcoin, can be a crushing blow. Imagine... | 0 | 2024-07-11T23:54:28 | https://dev.to/bright_bryan_b6130ba993ac/get-back-your-scammed-money-from-fake-investment-platform-3ko7 | The loss of a significant investment, such as $170,000 in Bitcoin, can be a crushing blow. Imagine building a substantial cryptocurrency portfolio over years, only to have it vanish overnight due to hackers or con artists. I know firsthand the debilitating feeling of helplessness and fear that comes with losing what you thought was a secure investment. It was as if my life savings had disappeared in an instant.
However, I didn't let despair get the better of me. Instead, I took action and sought out the expertise of professionals specializing in locating and recovering stolen digital assets. RECOVERY EXPERT, a reputable firm known for their advanced investigative techniques, technical proficiency, and unwavering determination, was my chosen partner. Thanks to their tireless efforts, they successfully traced and recovered the entire $170,000 worth of Bitcoin I had lost.
This remarkable turnaround is a testament to the importance of perseverance and seeking out the right resources in the face of financial adversity. It shows that even the most daunting cyber theft incidents can be resolved with diligence and the intervention of skilled recovery specialists.
If you find yourself in a similar situation, I highly recommend reaching out to RECOVERY EXPERT for assistance. Their expertise and professionalism made all the difference for me, and I'm confident they can do the same for you. You can contact them at contact email: RecoveryExpert01@consultant.com
WhatsApp: + 1 (908) 991 - 7132
Telegram;(https://t.me/RECOVERYEXPERT0) | bright_bryan_b6130ba993ac | |
1,920,280 | How to upload multipart files to a cloud storage locally with Spring Boot, Kotlin and MinIO | MinIO, an AWS S3 alternative. Easy, Dockerized and straightforward. | 0 | 2024-07-11T23:19:31 | https://dev.to/renejr03/how-to-upload-multipart-files-to-a-cloud-storage-locally-with-spring-boot-kotlin-and-minio-1jma | kotlin, spring, minio | MinIO, an AWS S3 alternative. Easy, Dockerized and straightforward.
## Introduction
If you've ever tried to kick off a personal project, something that might be meaningful or even fun but involves dealing with file storage, you know this seemingly straightforward task can turn into a bit of a headache.
There are various options for storing files nowadays, but let’s focus on the main ones:
* Bytes: One approach is to convert your files into a series of bytes and store them in your database. It works, but it might not be as practical as one would hope. Let's suppose you have an API: how would you serve and display this information?
* Cloud Storage: In my opinion, this is the way to go for file management. Simply put, you’re not burdened with handling the intricacies of data manipulation. All you need to do is upload the file to your cloud application, and in return, you receive a URL for the uploaded file. You can then access and display it however you see fit.
There are plenty of points in favor of cloud storage, but you'll probably find better writers out there explaining the pros and cons far more accurately than I can.
If you look closely, maybe you're wondering: it looks like cloud storage is simply better and easier than storing bytes, so what's the big deal? Simple: cloud storage isn't free.
If you want to run an instance of AWS S3, the most widely used cloud storage, you'll get a 12-month free trial and after that you'll be charged for it. That's not worth it if the application is just for personal study. Which brings us to this post: is there any way to run cloud storage locally and simulate something like S3?
## What's MinIO?
> MinIO is a high-performance, S3 compatible object store. It is built for large scale AI/ML, data lake and database workloads. It is software-defined and runs on any cloud or on-premises infrastructure. MinIO is dual-licensed under open source GNU AGPL v3 and a commercial enterprise license.
https://min.io/
## Steps
Now, I'll try to show in 4 simple and short steps how to upload a multipart file with Kotlin, Spring Boot and MinIO.
Make sure you have Docker installed before starting.
### Step 1
Just run the MinIO docker container
```
docker run \
-p 9000:9000 \
-p 9090:9090 \
--name minio \
-v ~/minio/data:/data \
-e "MINIO_ROOT_USER=root" \
-e "MINIO_ROOT_PASSWORD=password" \
quay.io/minio/minio server /data --console-address ":9090"
```
### Step 2
Add MinIO SDK dependency
`implementation("io.minio:minio:8.5.7")`
### Step 3
Create a client to communicate with MinIO
```kotlin
import io.minio.BucketExistsArgs
import io.minio.MakeBucketArgs
import io.minio.MinioClient
object MinioFactory {
// These match the env variables passed to the docker run command above
private const val URL = "http://localhost:9000"
private const val USER = "root"
private const val PASSWORD = "password"
const val BUCKET = "exam"
private val minioClient: MinioClient by lazy {
val client = MinioClient.builder()
.endpoint(URL)
.credentials(USER, PASSWORD)
.build()
// Create the bucket if it doesn't exist yet
if (!client.bucketExists(BucketExistsArgs.builder().bucket(BUCKET).build())) {
client.makeBucket(MakeBucketArgs.builder().bucket(BUCKET).build())
}
client
}
fun getInstance() = minioClient
}
```
### Step 4
Create a service responsible for uploading the file
```kotlin
import io.minio.PutObjectArgs
import org.springframework.stereotype.Service
import org.springframework.web.multipart.MultipartFile
@Service
class ExamService {

    fun uploadExam(file: MultipartFile) {
        val minioClient = MinioFactory.getInstance()

        val putObjectArgs = PutObjectArgs.builder()
            .bucket(MinioFactory.BUCKET)
            .`object`("YOUR_PATH")
            .contentType(file.contentType)
            .stream(file.inputStream, file.size, -1)
            .build()

        minioClient.putObject(putObjectArgs)
    }
}
```
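If you want to try the upload end-to-end, a minimal controller could accept the file over HTTP. This is a sketch: the `/exams` path and the `ExamController`/`ExamService` names are assumptions (not from the original), and it presumes `uploadExam` lives in a Spring `@Service` bean named `ExamService`:

```kotlin
import org.springframework.web.bind.annotation.PostMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController
import org.springframework.web.multipart.MultipartFile

@RestController
class ExamController(private val examService: ExamService) {

    // POST /exams with a multipart form field named "file"
    @PostMapping("/exams")
    fun upload(@RequestParam("file") file: MultipartFile) {
        examService.uploadExam(file)
    }
}
```

With the app running, something like `curl -F "file=@exam.pdf" http://localhost:8080/exams` would exercise the whole flow.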
That's it, you've just uploaded your first file to MinIO. You can check your work of art at http://localhost:9090/browser | renejr03 |
1,920,282 | RECOVER YOUR SCAMMED BITCOIN FROM FAKE INVESTMENT PLATFORM | I RECOVERED all my loss cryptocurrency through RECOVERY EXPERT I didn't see anything wrong with it... | 0 | 2024-07-11T23:45:25 | https://dev.to/bryan_hills_85d35f52b11ad/recover-your-scammed-bitcoin-from-fake-investment-platform-86p | I RECOVERED all my loss cryptocurrency through RECOVERY EXPERT
I didn't see anything wrong with it until after about 3 months of the deception, I was carried along while i kept sending more and more funds to them expecting an expeditious profit which kept increasing and increasing but i never had the permission to withdraw it, there was one reason or the other why i couldn't withdraw and it kept going on until i lost almost everything i had. eventually i had to stop and ask people for advice, that was when i knew it was all a bubble. i was crazy for months until my husband came back home with jay an old friend who happened to work for RECOVERY EXPERT an asset recovery agency who have been operating for many years. I submitted all the details to my husband who in turn will relay it to his old friend the next day. We started the process to recover what i lost to the scammers and surprisingly it took only about 48 hours until my husband got a notification on his trust wallet, it was the entire amount i lost, it was all recovered and he was at work when that happened, so he gave me a call to tell me he has got a surprise for me but he's not gonna tell me about it but show me when he get back from work, i wasn't expecting to get that surprise but to my very surprise i got back my all my loss crypto in his TRUST WALLET, i was super excited about it. I cant express how good it felt, it was a miracle i never expected would happen to me. I know many of you here may find yourself in similar situations because the bad guys are everywhere. Just incase anyone needs an assistance with their loss, they should reach out to RECOVERY EXPERT here is their details contact them AT
email: RecoveryExpert01@consultant.com
WhatsApp: + 1 (908) 991 - 7132
Telegram https://t.me/RECOVERYEXPERT0 | bryan_hills_85d35f52b11ad | |
1,920,283 | How to build an order monitoring system on your own? | Q: I have many orders stored in my CRM database. I need to frequently (possibly daily or weekly)... | 0 | 2024-07-11T23:57:25 | https://dev.to/sqlman/how-to-build-an-order-monitoring-system-on-your-own-3b8 | sql, email | **Q:** I have many orders stored in my CRM database. I need to frequently (possibly daily or weekly) query the orders that are about to expire and send them to different people for handling. Different orders need to be sent to different people. How should I build an order monitoring system to automate this task?
Here is my query result:

The format requirements for the emails I send out are as follows:

**A:** You can use the "Information Distribute" feature of SQLMessenger to accomplish this task. Please follow the steps below to configure it in SQLMessenger.
Step1: In the Task Manager of the SQLMessenger console, click the "New Task" button to create a new task.

Step2: In the Task Editor, click the "Information Distribute" tab, check the "Loop Task" option, and click the "Recipient List (Loop Data) Config" button.

Step3: Select the data source, enter the SQL statement that queries the email list to be sent, then click the "Test" button. After testing is complete, click "OK."

Note: The query statement here should simply retrieve the email list and ensure that email addresses are unique.
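For illustration, a recipient-list query of that shape could look like this (a sketch; it assumes the same `crm_order_list` table and fields used elsewhere in this article):

```sql
-- One row per handler: unique email addresses only
SELECT DISTINCT assignee_email_addr,
                assignee_name
FROM   crm_order_list
WHERE  deadline - Now() <= '1 day'::interval;
```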
Step4: Click on the "Task Template" tab, enter the email subject. Click the button on the right side of "Send To" to select the email recipients.

Step5: Click the "From A Variable" button, then go to the "System Variables/Functions" tab, locate @@LoopData(), and click the button on the right of the entry.

Step6: In the "Function Parameter," select "ASSIGNEE_EMAIL_ADDR," then click "OK" repeatedly to return.

Tips: Choosing "ASSIGNEE_EMAIL_ADDR" here means using the field from the SQL query entered earlier as the email address for recipients.
Step7: In the body template editor, enter the email body template. During the input process, you can right-click at the position where the recipient's name should appear, and select the "Insert Variable" menu item.

Step8: In the "System Variables/Functions" tab, locate "@@LoopData()" and click the button on the right side of the item.

Step9: In the "Function Parameter," select "ASSIGNEE_NAME," indicating that the value of the ASSIGNEE_NAME field from the SQL query entered earlier will be displayed here.

Tips: You can use the same method to display the date of task execution at the end of the email. In the "System Variables/Functions" tab, select the "@@Date" variable.

Step10: In the body template, right-click where you want to display the table and select the "Insert SQL Table" menu item.

Step11: In the wizard, select the data source and input the query for the list of orders to be sent. Make sure to modify the SQL statement's query conditions accordingly here.

**Note:** Here, the SQL query conditions need to be modified to retrieve orders assigned to one handler. The modified SQL statement is as follows:
```
SELECT order_number,
To_char(create_time,'mm/dd/yyyy hh24:mi:ss') AS create_time,
status,
To_char(deadline,'mm/dd/yyyy hh24:mi:ss') AS deadline,
assignee_name
FROM crm_order_list
WHERE deadline - Now() <= '1 day'::interval
/*Here, using the @@LoopData(ASSIGNEE_EMAIL_ADDR) variable as a query condition means using the "ASSIGNEE_EMAIL_ADDR" field from the SQL statement entered earlier to query this SQL statement.*/
AND assignee_email_addr = #[@@LoopData(ASSIGNEE_EMAIL_ADDR)]#
ORDER BY order_number;
```
Here, using the **_@@LoopData(ASSIGNEE_EMAIL_ADDR)_** variable as a query condition means using the **_"ASSIGNEE_EMAIL_ADDR"_** field from the SQL statement entered earlier to query this SQL statement.
After entering the statement, click "Next." The system will execute this SQL statement and retrieve the list of fields returned by the statement.
Step12: In the wizard, select the fields to display in the table and add them to the list on the right.

Step13: After clicking "Next," you can configure the table's format settings here.

Tips: You can modify column names, adjust column widths, and set table colors here.
After completing the setup, click "Next," and the system will display a table icon in the body template. Double-click on this icon to edit its contents.
Step14: Click on the "Task Schedules" tab in the task editor, then click the "Add New" button to add a schedule for the task.

Step15: Set the start time and interval for the Task Schedule. Here, configure the task to run at 10 AM every Monday to Friday.

The task configuration is complete. Click the "Deploy" button to activate the new task configuration. The system will then automatically run the task at the specified time, querying each employee's orders that are about to expire and sending them directly to their respective email addresses.
If you want to preview the task's execution results beforehand, you can right-click on the task in the task list and select "Run selected tasks immediately" to manually start the task.

In the "Run Task" dialog, you can select the option "Do not send emails and messages generated by this task instance." This way, the system will only execute the task without sending any emails generated by it.

The following image is one of the emails generated after task execution.

**Q:** If I want to execute the task on Monday, Wednesday, and Friday each week, how should I configure it?
**A:** A task can have multiple schedules. You can add multiple schedules to the task, like the following:

**Q:** What is the difference between the SQL statement entered in Step 3 and the SQL statement entered later?
**A:** The SQL statement entered in the Step 3 is used to query the email list to be sent, with the system executing the task for each record. The SQL statement entered later is used to query a list of orders with overdue deadlines for an email address, and the system adds the order list to the email body for sending.
**Q:** Can the list of orders be sent as an Excel spreadsheet attachment in the email?
**A:** Yes. You can add a "Dynamic Attachment File" attachment template for the task and set up the SQL statement to query the list of orders in the template.
**Q:** Can the date of the email be displayed in the email subject?
**A:** Yes, you can modify the email subject to: "Order Expiry Reminder#@@Date#", and the system will replace #@@Date# with the current date when executing the task.
Reposted from: [https://www.sqlmessenger.com/docreader.html?id=537](https://www.sqlmessenger.com/docreader.html?id=537)
Video Demo: [https://youtu.be/VqXGXuicRhE](https://youtu.be/VqXGXuicRhE) | sqlman |
1,920,284 | GET BACK YOUR SCAMMED BITCOIN | I fell victim to a crafty scam, but just when I thought all hope was lost, FAST RECOVERY EXPERT... | 0 | 2024-07-11T23:48:19 | https://dev.to/juan_romano/get-back-your-scammed-bitcoin-191l | I fell victim to a crafty scam, but just when I thought all hope was lost, FAST RECOVERY EXPERT emerged as a beacon of light. For a fact, bitcoin is true and is the future of world currencies. I have been using it until I lost 3 BTC in the hands of unregulated brokers. In the wake of losing my monies to this sham investment brokers, I found myself in a state of panic and despair. Fortunately for me, an old friend who previously worked with my uncle referred me to RECOVERY EXPERT SERVICES. I participated in an in-depth consultation to understand the details of the theft and the extent of the loss I suffered. They created a customized recovery plan that met my specific need using their extensive knowledge of blockchain technology and forensic investigative skills. With their sophisticated and robust technological firewalls, my case was investigated and RECOVERY EXPERT SERVICES were able to recover my stolen cryptos in less than 72 hours. Working with RECOVERY EXPERT SERVICES was a transformative experience, not only did they recover my stolen funds, they also demonstrated a level of professionalism that exceeded my expectations. I appreciate them for their help and I wish to recommend them to everyone caught up in systemic scams. Please contact RECOVERY EXPERT SERVICES for your swift recovery.
email: RecoveryExpert01@consultant.com
WhatsApp: + 1 (908) 991 - 7132
Telegram https://t.me/RECOVERYEXPERT0 | juan_romano | |
1,920,285 | GET BACK YOUR SCAMMED MONEY | I am overjoyed and immensely grateful for the exceptional work done by RECOVERY EXPERT. My experience... | 0 | 2024-07-11T23:51:15 | https://dev.to/bright_bryan_c89e495c602d/get-back-your-scammed-money-3l79 | I am overjoyed and immensely grateful for the exceptional work done by RECOVERY EXPERT. My experience with this company has been nothing short of extraordinary, and I cannot thank them enough for their unwavering dedication to helping me recover my stolen funds. I had found myself in a dire situation after losing a substantial amount of money – approximately $378,500 worth of Bitcoin – to a group of individuals posing as an investment company based in Finland. They had lured me in with enticing promises and guarantees, and I fell victim to their deceitful tactics. After realizing that I had been scammed, I felt a deep sense of despair and hopelessness. I tried reaching out to the supposed investment company through emails, texts, and calls, but to no avail. It became evident that I had been duped, and I was left feeling utterly betrayed. Determined to find a solution, I turned to the internet in search of a reputable recovery service, and that's when I stumbled upon(RECOVERY EXPERT ) Little did I know that this discovery would mark the turning point in my ordeal. From the moment I contacted RECOVERY EXPERT, I was met with professionalism, empathy, and a genuine commitment to helping me. Their team of experts guided me through the entire recovery process, providing me with clear explanations and constant updates on the progress of my case. Their transparency and willingness to address all of my concerns instilled in me a sense of trust and confidence that had been shattered by the scammers. I was astounded by how swiftly and efficiently RECOVERY EXPERT managed to track down the individuals responsible for my predicament and recover my stolen funds in their entirety.
REACH OUT WITH THE BELOW INFORMATIONS
email: RecoveryExpert01@consultant.com
WhatsApp: + 1 (908) 991 - 7132
Telegram;(https://t.me/RECOVERYEXPERT0) | bright_bryan_c89e495c602d | |
1,920,295 | GET BACK YOUR SCAMMED MONEY | I got scammed with over $205,000. I came in contact with this guy from Facebook and we communicated... | 0 | 2024-07-12T00:09:21 | https://dev.to/juan_romano_3584013e78b1c/get-back-your-scammed-money-3meg | I got scammed with over $205,000. I came in contact with this guy from Facebook and we communicated for a whole one year. I sent him money via Bitcoin ATM and bank account, I almost lost everything. But for the timely intervention of the team of RECOVERY EXPERT Company, who just in kick-off on time got back my $205,000. They are good at what they do, I have recommended them to friends and co-workers who all became satisfied customers. They have helped me a lot in the aspect of retrieving my lost digital assets, you can reach out to them for everything related to Hacking and Funds Recovery. They are the best and have different skills in funds recovering and exposing scammers. I’m glad to recover my money, there is no shame in becoming a scam victim of one of these sophisticated and predatory operations. By reporting you may be able to recover some or all of your lost funds and prevent the scammers from targeting others. To recover your Scammed Btc funds, Scammed funds, Clear or Erase Criminal Records and Mobile spy remote control access Contact this genius recovery Expert Company through their
email: RecoveryExpert01@consultant.com
WhatsApp: + 1 (908) 991 - 7132
Telegram https://t.me/RECOVERYEXPERT0 | juan_romano_3584013e78b1c | |
1,920,288 | Reverse a linked list in go | This is a favorite question to give to new developers. Pretty simple if you have had a decent data... | 27,729 | 2024-07-11T23:55:53 | https://dev.to/johnscode/reverse-a-linked-list-in-go-583i | go, interview, programming | This is a favorite question to give to new developers. Pretty simple if you have had a decent data structures class.
Reverse a singly linked list. (_This is Leetcode 206_)
For the implementation, I have chosen to make the linked list a generic type.
```
type Node[T any] struct {
Data T
Next *Node[T]
}
type LinkedList[T any] struct {
Head *Node[T]
}
func (ll *LinkedList[T]) Append(data T) {
newNode := &Node[T]{Data: data, Next: nil}
if ll.Head == nil {
ll.Head = newNode
return
}
current := ll.Head
for current.Next != nil {
current = current.Next
}
current.Next = newNode
}
```
And for the reverse function, it's done with a single pass by recognizing that all we need to do is maintain a pointer to the previous node, then set a given node's 'next' to the previous.
When we reach the end, then we know the current node is the new 'head' of the list.
```
func (ll *LinkedList[T]) ReverseLinkedList() {
	var prev *Node[T] = nil
	ptr := ll.Head
	for ptr != nil {
		next := ptr.Next // remember the rest of the list
		ptr.Next = prev  // point the current node backwards
		prev = ptr
		if next == nil {
			// we reached the last node, which becomes the new head
			ll.Head = ptr
		}
		ptr = next
	}
}
```
Have we missed a boundary condition? What complications are added if the list is now a doubly linked list? Let me know in the comments.
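On the doubly linked follow-up, one way to see it (a sketch of my own, not part of the original single-pass solution) is that reversal becomes a per-node swap of `Next` and `Prev`, plus swapping `Head` and `Tail` at the end:

```
package main

import "fmt"

type DNode[T any] struct {
	Data T
	Prev *DNode[T]
	Next *DNode[T]
}

type DList[T any] struct {
	Head *DNode[T]
	Tail *DNode[T]
}

func (dl *DList[T]) Append(data T) {
	n := &DNode[T]{Data: data}
	if dl.Tail == nil {
		dl.Head, dl.Tail = n, n
		return
	}
	n.Prev = dl.Tail
	dl.Tail.Next = n
	dl.Tail = n
}

// Reverse swaps each node's pointers, then swaps Head and Tail.
func (dl *DList[T]) Reverse() {
	for cur := dl.Head; cur != nil; {
		cur.Prev, cur.Next = cur.Next, cur.Prev
		cur = cur.Prev // Prev now holds what used to be Next
	}
	dl.Head, dl.Tail = dl.Tail, dl.Head
}

func main() {
	dl := &DList[int]{}
	for _, v := range []int{1, 2, 3} {
		dl.Append(v)
	}
	dl.Reverse()
	var out []int
	for n := dl.Head; n != nil; n = n.Next {
		out = append(out, n.Data)
	}
	fmt.Println(out) // [3 2 1]
}
```

The extra complication is exactly the second pointer: forget to swap `Prev`, or to update `Tail`, and the backward traversal breaks even though the forward one looks fine.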
Thanks!
_The code for this post and all posts in this series can be found [here](https://github.com/johnscode/gocodingchallenges)_
| johnscode |
1,920,289 | GET BACK YOUR SCAMMED MONEY | I am writing to express my deepest gratitude to RECOVERY EXPERT for their exceptional services and... | 0 | 2024-07-11T23:56:51 | https://dev.to/bryan_hills_4f7b4d46567d6/get-back-your-scammed-money-3l7g | I am writing to express my deepest gratitude to RECOVERY EXPERT for their exceptional services and expertise in recovering my lost funds. I had invested $240,000 in an online platform, hoping to generate returns, but unfortunately, I was scammed and lost access to my investment. Despite my efforts to contact the platform, I was unable to recover my money.
It was then that I discovered RECOVERY EXPEDRT and reached out to them for assistance. Their team of skilled hackers utilized their advanced technology to recover my entire investment of $240,000. I am thrilled to report that my funds have been safely returned to me.
I am grateful for the professionalism and expertise displayed by RECOVERY EXPERT. I can't adequately express my appreciation, but I will certainly leave an excellent review as a testament to their outstanding services. If you've also fallen victim to online fraud and are seeking help, I highly recommend contacting RECOVERY EXPERT at email: RecoveryExpert01@consultant.com
WhatsApp: + 1 (908) 991 - 7132
Telegram;(https://t.me/RECOVERYEXPERT0)
They are trustworthy and efficient in their work." | bryan_hills_4f7b4d46567d6 | |
1,920,290 | GET BACK YOUR LOST FUNDS FROM ANY PLATFORM | There are a lot of scammers everywhere on the internet looking for whom to take advantage of, I am a... | 0 | 2024-07-11T23:59:00 | https://dev.to/olokun_agadagodo_a3729da3/get-back-your-lost-funds-from-any-platform-50pi | There are a lot of scammers everywhere on the internet looking for whom to take advantage of, I am a victim too. I have been dealing cryptocurrencies for almost 5 years, I recently was introduced to investment coinsglobal.... as the most profitable investment platforms out there , I could buy cryptocurrencies at 40% discount once i am an investor on the company, It was a good deal for me , i started with 4kBTC and got my returns and reinvested it on the site , I went all out and invested more money to gain more profits, At my third deal I invested 13KBTC at the rate of 32k/BTC .. i was unable to withdraw, trade or access my investment afterwards, i had contact the company costumer support for assistance but they stopped replying after i had paid the fees required by them, My world was shattered and the banks were already on my neck , i reported to the police but there was little information for them to start with before i came up with the thought of getting a hacker, made some researches online and i was lucky to came across (RECOVERYEXPERT). i read some positive reviews about this firm and i decided to contact them for help towards my problem, though i never believed my funds can be recovered back until i met this team and we talked about my situation, they assured that my funds can be recovered, i was so fortunate to meet this team, they are very professional and excellent on this field , they tracked my profile and traced it out to the master wallet. 
then confirmed how much i have invested and transferred before processing the recovery which was swiftly done, i really cannot thank this team enough for coming through for me at my worst, it was just like a magic to me, you can reach the firm through their details and contact Them
email: RecoveryExpert01@consultant.com
WhatsApp: + 1 (908) 991 - 7132
Telegram;(https://t.me/RECOVERYEXPERT0) | olokun_agadagodo_a3729da3 | |
1,920,292 | RECOVER YOUR SCAMMED MONEY THROUGH RECOVERY EXPERT. | I'm glad I found RECOVERY EXPERT, an honest fund/crypto recovery company. Their team of professionals... | 0 | 2024-07-12T00:03:30 | https://dev.to/ben_linda_3828b63dc80f7ed/recover-your-scammed-money-through-recovery-expert-mep | I'm glad I found RECOVERY EXPERT, an honest fund/crypto recovery company. Their team of professionals was able to retrieve my crypto that had been stolen from a forex trader who had deceived me by saying I would receive a 35% return on my investment. I was able to receive all of my cryptocurrency back after writing to this team about my situation in less than 48 hours. I was overjoyed because I had thought all hope had been lost after being scammed of my funds $ 62,500. I highly recommend them with full confidence. File a complaint to this company to get your stolen cryptocurrency and other digital assets back. In addition, he can help you get back on more profitable trading platforms, recover forgotten or lost cryptocurrency wallet passwords, and protect you from extortionists. Speak with the actual deal at
email: RecoveryExpert01@consultant.com
WhatsApp: + 1 (908) 991 - 7132
Telegram;https://t.me/RECOVERYEXPERT0 | ben_linda_3828b63dc80f7ed | |
1,920,293 | GET BACK YOUR SCAMMED MONEY | How to get back scammed USDT/USD from crypto scam platform? First question is how do i get back a... | 0 | 2024-07-12T00:05:58 | https://dev.to/bryan_hills_b04636e3dd313/get-back-your-scammed-money-39j4 | How to get back scammed USDT/USD from crypto scam platform? First question is how do i get back a scammed crypto or money after realizing oneself has been ripped, Next questions is who or where can i find a recovery hacker for stolen crypto? Due to high volume of crypto recovery scam on the internet today, lot of humans around the globe are likely to fell into more scam while trying to have their funds recovered, simply because of crypto impersonators online who gives fake reviews to make innocent scam victim believe they can actually do what they said, i advise everyone to kindly take caution in all act before action, losing the total amount $1,149,310 USD was like losing everything i ever lived for simply because it was all my savings and i had to go for mortgage as well, when they kept on asking for different fees which i paid without getting my funds until 9 months after the incident i got referred to Recovery Expert Service who i explained everything to in details and they took over the case and it took them 48 hours and i was surprised to receive an alert worth $2.563m on my Coinbase wallet address and i was so excited and decide to move some into my bank account, behold it worked and the bank never questioned the source of the funds, Recovery Expert are professional and i can as well say if truly a hacker is needed don’t fail to seek for Crypto Recovery Expert
email: RecoveryExpert01@consultant.com
WhatsApp: + 1 (908) 991 - 7132
Telegram(https://t.me/RECOVERYEXPERT0) | bryan_hills_b04636e3dd313 | |
1,920,294 | Day 989 : Glitch | liner notes: Professional : Started off the day with a meeting with my immediate team to talk about... | 0 | 2024-07-12T00:07:16 | https://dev.to/dwane/day-989-glitch-14f3 | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : Started off the day with a meeting with my immediate team to talk about how things are going on with our projects. Had some time to take a look at the community board for any questions. Had a wider team meeting. Went back to work on refactoring a project to use a new SDK. Had a quick session with a team member to work out an issue I was having with the refactor project. The solution was something small that I knew the solution because I've done it before, but missed so I was glad to get that working. Guess I had a brain glitch. Continued finishing up the UI around that fix. Put into place some code to test a functionality. Started working on updating a Web Component I'm using in the application to extend it for new functionality. Pretty good day.
- Personal : Went through some tracks for the radio show. Browsed through Bandcamp for projects to pick up. Looked at some land. Bought a new travel backpack. Did some research on my side project. Started watching "The Suicide Squad" anime but was so tired, I fell asleep. I think that was everything.

Whew... I took a nap after work. It's been kind of gloomy and rainy all day. At least it wasn't super hot. Going to pick up projects on Bandcamp and set up social media posts. Maybe put together tracks for the radio show. There's a new episode of "The Boys" so I'll be watching that and maybe finish the episode of "The Suicide Squad" anime.
Have a great night!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube 2ILX-XPcwgU %} | dwane |
1,920,296 | Implementando Teste Contínuo: Garantindo Qualidade em Cada Commit | No mundo acelerado do desenvolvimento de software, garantir qualidade sem comprometer a velocidade é... | 0 | 2024-07-12T00:12:01 | https://dev.to/womakerscode/implementando-teste-continuo-garantindo-qualidade-em-cada-commit-3bon | wecoded, devops, testing | No mundo acelerado do desenvolvimento de software, garantir qualidade sem comprometer a velocidade é um desafio constante. O teste contínuo de software surge como uma solução revolucionária, integrando testes automatizados em todas as etapas do ciclo de desenvolvimento.
Neste artigo, exploraremos os benefícios do teste contínuo e os desafios que você pode enfrentar. Descubra como essa abordagem pode transformar sua equipe e impulsionar a excelência no desenvolvimento de software.
---
## Mas, o que é Teste Contínuo?
O teste contínuo é uma abordagem moderna e essencial no desenvolvimento de software, especialmente em ambientes ágeis e de DevOps. Diferente dos métodos tradicionais, que executam testes em fases específicas, o teste contínuo envolve a execução automatizada e constante de testes a cada alteração no código.
Essa prática permite a detecção imediata de defeitos e fornece feedback rápido, facilitando correções ágeis e eficientes. Além de elevar a qualidade do software, o teste contínuo otimiza o processo de desenvolvimento, garantindo entregas rápidas e seguras. Ao verificar mudanças no código de forma contínua, esta abordagem promove a detecção precoce de falhas e a manutenção de alta qualidade ao longo de todo o ciclo de vida do software.
---
## Benefícios e Desafios do Teste Contínuo
Entre os principais benefícios da implementação dos testes contínuos podemos citar:
- **Detecção Precoce de Defeitos**: O teste contínuo permite identificar bugs e problemas logo após a introdução de mudanças no código, reduzindo o custo e o esforço necessários para corrigir defeitos antes que se propaguem para outras partes do sistema.
- **Feedback Rápido**: Com testes automatizados rodando continuamente, os desenvolvedores recebem feedback quase em tempo real sobre o impacto de suas alterações, acelerando o processo de desenvolvimento e aumentando a confiança nas mudanças feitas.
- **Melhoria na Qualidade do Software**: Testar continuamente garante que a qualidade do software seja mantida ao longo de todo o ciclo de desenvolvimento. Problemas que poderiam passar despercebidos em um ambiente de teste tradicional são rapidamente identificados e corrigidos.
- **Integração com CI/CD**: O teste contínuo é fundamental nas pipelines de Integração Contínua (CI) e Entrega Contínua (CD). Ele assegura que o software seja testado automaticamente a cada commit, garantindo que apenas o código aprovado chegue ao ambiente de produção.
Porém, nem tudo são flores e alguns desafios devem ser levados em consideração, como:
- **Manutenção de Testes**: Automatizar testes exige atualização contínua dos scripts à medida que o código evolui, o que pode ser desafiador.
- **Gerenciamento de Dados de Teste**: Testes contínuos dependem de dados consistentes e realistas, o que pode ser complexo em sistemas com grandes volumes de dados ou preocupações com privacidade.
- **Flutuações de Ambiente**: Mesmo com ambientes de teste consistentes, podem ocorrer variações que afetam os resultados dos testes. É crucial monitorar e resolver essas inconsistências rapidamente.
- **Cultura e Treinamento**: Implementar testes contínuos requer uma mudança cultural na equipe de desenvolvimento, com todos comprometidos com a qualidade e treinados nas melhores práticas de automação de testes.
---
## Em suma,
O teste contínuo é uma prática poderosa que, quando implementada corretamente, pode transformar a maneira como o software é desenvolvido e entregue. Ele não apenas melhora a qualidade do software, mas também acelera o ciclo de desenvolvimento, permitindo que as equipes entreguem valor aos usuários de maneira mais rápida e confiável.
Ao adotar uma abordagem de teste contínuo, as organizações estão melhor equipadas para enfrentar os desafios do desenvolvimento de software moderno e atender às expectativas crescentes de qualidade e velocidade no mercado.
---
Hello World!
Este conteúdo foi compartilhado com base no artigo de minha autoria disponível no [link](https://medium.com/@marcela.gomes/implementando-teste-cont%C3%ADnuo-garantindo-qualidade-em-cada-commit-da30da040a42), se gostou do conteúdo, faça chegar em mais alguém!
| mahamorim |
1,920,297 | RECOVER YOUR SCAMMED MONEY | When my bitcoin was stolen, I felt a mix of panic and helplessness. Desperate for a solution,... | 0 | 2024-07-12T00:12:49 | https://dev.to/illuminati_666_01972ceeab/recover-your-scammed-money-26mc | When my bitcoin was stolen, I felt a mix of panic and helplessness. Desperate for a solution, RECOVERY EXPERT successfully recovered all of my stolen bitcoin.
email: RecoveryExpert01@consultant.com
WhatsApp: + 1 (908) 991 - 7132
Telegram https://t.me/RECOVERYEXPERT0 | illuminati_666_01972ceeab | |
1,920,338 | 🚀 Seeking an experienced iOS developer with expertise in Mac app notarization! 🚀 | If you have a track record of successfully notarizing Mac apps, we want to hear from you. telegram :... | 0 | 2024-07-12T00:25:08 | https://dev.to/yoann_turpin_4d4675275abf/seeking-an-experienced-ios-developer-with-expertise-in-mac-app-notarization-5hdj | If you have a track record of successfully notarizing Mac apps, we want to hear from you.
telegram : @kingstar729 | yoann_turpin_4d4675275abf | |
1,920,339 | Season 12 and 13 Storm Ohtani caught up with Kim Ha-sung | In a game against the Cincinnati Reds in the 2024 Major League Baseball regular season at Great... | 0 | 2024-07-12T00:28:31 | https://dev.to/outlookindiacom_pluginpla/season-12-and-13-storm-ohtani-caught-up-with-kim-ha-sung-20i9 | 토토사이트 | In a game against the Cincinnati Reds in the 2024 Major League Baseball regular season at Great American Ball Park in Cincinnati, Ohio on the 24th (Korea Standard Time), Kim Ha-sung started the game as the eighth batter and shortstop, and recorded one hit, one walk and two steals from three times at bat. San Diego won the game 6-4 after a close game that lasted until the 10th inning of overtime.
Kim Ha-sung was excluded from the starting lineup for the first time this season against Cincinnati on the 23rd. He stopped playing in 51 consecutive games and took a day off, and succeeded in getting on base from his first at-bat. He walked straight against Cincinnati starter Frankie Montas in the top of the second inning with no outs and runners on the second and third bases when San Diego led 2-1. When Kyle Higashioka grounded out to shortstop in a chance to load the bases with no outs, Kim Ha-sung was out at second base.
Kim Ha-sung, who hit his second at-bat as a leadoff hitter in the top of the fourth inning with San Diego leading 4-2, struck out in four pitches by Montas.
Kim Ha-sung, who met Montas again in the top of the sixth inning with one out and one on first base, hit a splitter in the fifth pitch in the ball count of 2-2, hitting a grounder to shortstop, leaving the leading runner out and he survived on first base.
When Higashioka was at bat with two outs and a runner on first base, Kim started at the timing of Montas' second fastball and stole the second base. He stole his 12th base of the season.
With two outs and a runner on the second base, he stole the third base while throwing the cutter at the fourth pitch. The catcher took the timing so perfectly that he couldn't even throw. He stole his 13th stolen base in this season.
With two outs and a runner on third base, Higashioka had a tenacious game to reach as many as nine pitches, but was struck out by rookie strikeouts. Kim Ha-sung's efforts to create a chance with two steals in an inning also faded. San Diego, which missed a chance to run away, allowed a 4-4 tie after De Los Santos took the mound in the bottom of the sixth inning with a two-run shot by Nick Martini from the NC Dinos.
At the last at-bat in the ninth inning, Kim created another run-scoring chance. He hit Cincinnati's bullpen pitcher Alexis Diaz's first pitch slider with one out and no runner at the batter's box, hitting a double that cut through the left-center gap. However, he was out due to the pitcher's check when the next batter, Luis Campusano, was out. With one out and a runner on the second base, the chance turned into a situation where there were no runners on the second base, and Campusano struck out swinging and striking out. For San Diego, Jeremiah Estrada struck out three in the bottom of the ninth inning, bringing the game to extra time.
Luis Arraez's hit and Fernando Tatis Jr. hit an RBI double in the top of the 10th inning to break the balance. Jurickson Profar's intentional walk made it to the base with no outs and the bases loaded, and Jake Cronen Worth's sacrifice fly RBI single made the score 6-4. Closing pitcher Robert Suarez, who took over the mound in the bottom of the 10th inning, faced a crisis with a walk to the first and second bases after striking out the first batter, but he defended his victory by blocking Cincinnati's last attack with two fly balls.
Kim Ha-sung, who added another hit in two games, saw his batting average rise slightly from 0.214 to 0.216 and OPS from 0.696 to 0.704. Kim Ha-sung, who stole the base five times in the last five games, tied for fifth in the National League stolen base category with Shohei Ohtani (LA Dodgers) and Bryson Stat (Philadelphia Phillies) with 13 steals in the season.
BY: **[토토사이트](https://www.outlookindia.com/plugin-play/2023년-11월-스포츠-토토사이트-순위-및-추천-사설토토-먹튀검증-top15-news-328577)** | outlookindiacom_pluginpla |
1,920,341 | Exploring the Internet Computer Protocol (ICP) | Introduction: The Internet Computer Protocol (ICP) is a crucial part of the internet infrastructure... | 0 | 2024-07-12T00:33:20 | https://dev.to/kartikmehta8/exploring-the-internet-computer-protocol-icp-4bp8 | Introduction:
The Internet Computer Protocol (ICP) is a crucial part of the internet infrastructure that is responsible for the transfer of data packets across the World Wide Web. It is a set of rules and protocols that control the flow of information between computers and networks. In this article, we will explore the various aspects of ICP and its impact on internet, both positive and negative.
Advantages:
One of the major advantages of ICP is its flexibility. It can adapt to different types of network architectures and can support the communication between different devices and networks. It also ensures that the data is transmitted securely, maintaining the integrity and confidentiality of information. Moreover, ICP allows for efficient routing of data packets, reducing the amount of network traffic and improving the overall performance of the internet.
Disadvantages:
One of the major challenges with ICP is its vulnerability to cyber-attacks. As it controls the flow of data, any breach in the protocol can lead to a compromise of sensitive information. Additionally, the use of ICP can lead to a single point of failure, where a malfunction can cause disruption of network services.
Features:
ICP offers numerous features, such as segmentation, fragmentation, and error control. These features ensure that the data is broken down into smaller packets and reassembled at the destination, making the transmission more reliable and efficient. It also supports multiplexing, which allows multiple data streams to be transmitted over a single connection, reducing network congestion.
Conclusion:
In conclusion, exploring the Internet Computer Protocol provides us with a better understanding of its role and impact on the functioning of the internet. Despite its disadvantages, ICP has played a significant role in the development of the internet, allowing for seamless transmission of data across the globe. With advancements in technology, it is essential to continuously improve and secure the ICP to ensure a safe and efficient internet experience for all users. | kartikmehta8 | |
1,920,342 | Running Kubernetes locally with Kind and .NET 8 (plus a bonus with Lens) | This isn't a tutorial about kubernetes, kind, or an API with .NET. It's just something that helped me... | 0 | 2024-07-12T00:34:49 | https://dev.to/thomazperes/running-kubernetes-locally-with-kind-and-net-8-plus-a-bonus-with-lens-oi8 | dotnet, kubernetes, docker | This isn't a tutorial about kubernetes, kind, or an API with .NET. It's just something that helped me when I was studying Kubernetes
#### Prerequisite tools needed
- [Kubernetes](https://kubernetes.io/)
- [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/)
- [Docker](https://docs.docker.com/get-docker/)
- [.NET](https://dotnet.microsoft.com/en-us/download)
#### A brief context about kind
Kind is a tool for running local Kubernetes clusters using Docker containers.
_____________________________________
First, let's create a simple API with .NET. (Really simple, I won't change anything).
Run the following command in the terminal:
```
dotnet new webapi
```
Add the following Dockerfile:
```Dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /app
ENV ASPNETCORE_HTTP_PORTS=80
COPY . ./
RUN dotnet restore api1.csproj
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
ENV ASPNETCORE_HTTP_PORTS=80
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "api1.dll"]
```
Run the following Docker command to create the image:
```
docker build -t api:1.0 .
```
To test the API, run:
```
docker run --rm -it -p 8001:80 -e ASPNETCORE_HTTP_PORTS=80 api:1.0
```
Then run:
```
curl http://localhost:8001/weatherforecast
```
___
After creating the API and the Docker image, let's create a Cluster with the following YAML:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
```
After creating the YAML, run the following command:
```
kind create cluster --name api --config=Cluster/DeployCluster.yaml
```
To check the cluster, run:
```
kubectl cluster-info --context kind-api
```
Next, we need to add the app image to the cluster with the command:
```
kind load docker-image api:1.0 --name api
```
Now, let's continue working with kubernetes by creating the *Deployment* and *Service* YAML files.
Deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: api
spec:
selector:
matchLabels:
app: api
version: v1
replicas: 3
template:
metadata:
labels:
app: api
version: v1
spec:
containers:
- name: api
image: api:1.0
ports:
- containerPort: 80
```
Service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
name: api-service
spec:
selector:
app: api
ports:
- name: api
port: 80
targetPort: 80
```
After creating the YAML files for the *Deployment* and *Service*, run the following commands:
```
kubectl apply -f Deployment.yaml
kubectl apply -f Service.yaml
```
You can check the services and deployments with `kubectl get deployments` and `kubectl get svc`.
When checking the **Services**, you will see that there is no **External-IP** available, which is needed to call the endpoints in our API. This is not a Kubernetes or Kind problem but rather a Docker issue. For an alternative solution, you can start from this [link](https://github.com/kubernetes-sigs/kind/issues/1200#issuecomment-647145134).
To reach the service from your local machine, use the following `port-forward` command:
```
sudo -E kubectl port-forward svc/api-service 8001:80
```
Now, you can run:
```
curl http://localhost:8001/WeatherForecast
```
If you are running the .NET api, the response will look like this:
```json
[
{
"date": "2024-03-08T23:14:25.6202397+00:00",
"temperatureC": 28,
"temperatureF": 82,
"summary": "Hot"
},
{
"date": "2024-03-09T23:14:25.6202432+00:00",
"temperatureC": 26,
"temperatureF": 78,
"summary": "Scorching"
},
{
"date": "2024-03-10T23:14:25.6202433+00:00",
"temperatureC": 44,
"temperatureF": 111,
"summary": "Hot"
},
{
"date": "2024-03-11T23:14:25.6202435+00:00",
"temperatureC": 8,
"temperatureF": 46,
"summary": "Warm"
},
{
"date": "2024-03-12T23:14:25.6202437+00:00",
"temperatureC": -10,
"temperatureF": 15,
"summary": "Warm"
}
]
```
With this, you have a cluster running locally.
## Lens
[Lens](https://k8slens.dev/) is a tool for managing and troubleshooting Kubernetes workloads, among many other things.
Let's start by installing Lens and logging in (you will need to create a Lens account and obtain a Lens ID).
You can run the following command in the terminal:
```
kubectl config view --output yaml
```
This command prints your Kubernetes configuration, which we will add to Lens. You can either connect your kube folder or copy the kubeconfig contents and paste them in (the config is normally found in `$HOME/.kube/config`).
With this, the cluster is connected, and you can explore views like the ones below, among many other things.


| thomazperes |
1,920,343 | AI Workforce Evolution: Emerging Roles and Future Perspectives | 1. Introduction The landscape of work and employment has been significantly reshaped by... | 27,673 | 2024-07-12T00:45:01 | https://dev.to/rapidinnovation/ai-workforce-evolution-emerging-roles-and-future-perspectives-3npj | ## 1\. Introduction
The landscape of work and employment has been significantly reshaped by the
advent of artificial intelligence (AI). As technology continues to advance, AI
is becoming increasingly integral to various industries, driving efficiency,
innovation, and transformation. This evolution is not only changing how tasks
are performed but also creating a dynamic shift in the workforce landscape,
necessitating new skills and roles. Understanding these changes is crucial for
businesses, workers, and policymakers to adapt and thrive in the new digital
economy.
## 2\. Understanding Prompt Engineers
Prompt engineering is a burgeoning field that has gained prominence with the
rise of advanced AI models, particularly in natural language processing and
generation. Prompt engineers specialize in designing, testing, and refining
prompts to effectively interact with AI models to produce desired outcomes.
This role is crucial in leveraging AI technology for various applications,
from content creation to data analysis.
## 3\. Exploring AI Operations Managers
AI Operations Managers play a pivotal role in the integration and management
of AI within organizations. As companies increasingly adopt AI technologies,
the need for specialized roles to oversee these operations becomes crucial. An
AI Operations Manager ensures that AI systems are implemented effectively,
aligning with business objectives and operational requirements.
## 4\. Training and Education
Pursuing a career in AI requires a solid educational foundation typically
starting with a bachelor’s degree in computer science, data science,
mathematics, or a related field. Advanced degrees like a master's or Ph.D. can
be particularly beneficial for those looking to delve deeper into specialized
areas of AI. Many universities now offer specific courses and degrees focused
on artificial intelligence and machine learning, which provide both
theoretical and practical knowledge necessary for a career in this field.
## 5\. Industry Demand and Job Outlook
The industry demand and job outlook across various sectors can significantly
influence career choices, educational pursuits, and business strategies.
Understanding the current market trends and future projections is crucial for
stakeholders at all levels, from students and job seekers to entrepreneurs and
policymakers.
## 6\. Case Studies and Real-World Applications
Case studies and real-world applications are essential for understanding the
practical implications of theoretical research. They provide concrete examples
of how concepts and theories are implemented in real-world scenarios, offering
insights into their effectiveness, challenges, and impacts.
## 7\. Challenges and Solutions
Every business faces challenges, but the key to success lies in identifying
these challenges early and finding effective solutions. Common challenges
include technological changes, market competition, regulatory compliance, and
workforce management. For practical insights and solutions, resources like
McKinsey & Company offer a wealth of information through their research and
case studies.
## 8\. Conclusion
The integration of artificial intelligence (AI) into various sectors is
reshaping the landscape of work and workforce development. As we conclude, it
is essential to summarize the emerging roles created by AI and discuss the
future perspectives on AI workforce development.
## 9\. References
When compiling a research paper, thesis, or any academic or professional
document, the inclusion of references is crucial. References provide the
foundation for your arguments, enhance the credibility of your work, and
acknowledge the contributions of others in the field. They are essential for
avoiding plagiarism by giving proper credit to the original sources of
information.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain App Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-
development-company-in-usa)
## URLs
* <http://www.rapidinnovation.io/post/rise-of-prompt-engineers-and-ai-managers-in-2024>
## Hashtags
#AIWorkforceEvolution
#PromptEngineering
#AIManagement
#AITraining
#FutureOfWork
| rapidinnovation | |
1,920,344 | Transitioning into DevOps | DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). The... | 0 | 2024-07-12T08:43:24 | https://dev.to/irohomolola/002-12244transitioning-into-devops-2318 | webdev, devops, beginners, techtalks | DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). The goal is to shorten the system development life cycle and provide continuous delivery with high software quality. This requires a cultural shift, emphasizing collaboration between development and operations teams. | irohomolola |
1,920,345 | Meta | This is a submission for the [Build Better on Stellar: Smart Contract Challenge... | 0 | 2024-07-12T01:13:39 | https://dev.to/alla_santoshpavankumar_/meta-f0j | devchallenge, stellarchallenge, blockchain, web3 | *This is a submission for the [Build Better on Stellar: Smart Contract Challenge](https://dev.to/challenges/stellar): Build a dApp*
## What I Built
<!-- Share an overview about your project and what it does. -->
## Demo
<!-- If submitting a browser-based dApp, please share a public URL for us to demo. -->
<!-- If submitting a mobile dApp, please share a video demo. -->
## My Code
<!-- Show us the code! Share a public link to your repo and be sure to include a README file with installation instructions. We also encourage you to add a license for your code. -->
## Journey
<!-- Tell us about your implementation and smart contract design, the motivation behind your project, what you learned, your experience with the ecosystem, anything you are particularly proud of, what you hope to do next, etc. -->
**Additional Prize Categories: Glorious Game and/or Super Sustainable**
<!-- Let us know if you’d like your submission considered for the glorious game and/or super sustainable prize categories. If not, please remove this section. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
<!-- IMPORTANT LAST STEP: Use the email address you have associated with your DEV account and fill out this form on the Stellar website: https://stellar.org/community/events/build-better-smart-contract-challenge -->
<!-- Thanks for participating! --> | alla_santoshpavankumar_ |
1,920,350 | What's New in API7 Enterprise: IAM for Granular Access Control | Introduction Previous versions of API7 Enterprise provided a simple, user-friendly, and... | 0 | 2024-07-12T01:25:53 | https://api7.ai/blog/api7-3.2.14-iam-finer-access-control | ## Introduction
Previous versions of [API7 Enterprise](https://api7.ai/enterprise) provided a simple, user-friendly, and comprehensive [RBAC (Role-Based Access Control)](https://api7.ai/blog/rbac-for-permission-control) management mechanism. This mechanism ensured system security while granting users flexible role permission configuration. As the types of resources and features of API7 Enterprise have increased, the traditional RBAC management gradually revealed its limitations in fine-grained control of permissions. Moreover, more enterprises are seeking more refined permission management strategies to meet their complex and changing business needs.
To further enhance the permission management features of API7 Enterprise, we have comprehensively upgraded the existing role permission system by introducing a more flexible and powerful IAM (Identity and Access Management) policy model. This model provides users with finer-grained permission control and greater flexibility, better meeting the complex and variable permission management needs of modern enterprises.
## What Is IAM Policy Model?
The [IAM (Identity and Access Management)](https://www.ibm.com/topics/identity-access-management) policy model represents a more detailed and efficient method of permission management. It allows administrators to define specific policies, each containing a set of rules (Statements). These rules specify in detail which users or roles can perform which actions on which resources. Compared to the traditional RBAC mechanism, this model offers greater flexibility and granularity.
<div align="center">
<img alt="IAM, Identity and Access Management" style="width: 60%" src="https://static.apiseven.com/uploads/2024/07/11/xbbRQFn5_whiteboard_exported_image.png"></img>
</div>
**Advantages of IAM over RBAC:**
- **Granular Control**: IAM can control permissions at the resource level and even for specific attributes or operations within resources, whereas RBAC often assigns permissions based on roles, with relatively coarse granularity.
- **Flexibility**: IAM allows administrators to manage policies and permissions directly without needing to create and manage numerous roles for indirect permission assignment, making the configuration more straightforward and flexible.
- **Scalability**: As system functionalities increase and resource types diversify, IAM can more easily adapt to changes by adding new policies to meet new permission needs, while RBAC may require the adjustment or addition of numerous roles to accommodate changes.
## How to Use IAM Policies in API7 Enterprise?
### 1. Create Permission Policies
After logging into API7 Enterprise, click the **"Organization"** button at the top right, and select the **"Permission Policies"** menu item from the dropdown menu.
<div align="center">
<img alt="Permission Policies" style="width: 30%" src="https://static.apiseven.com/uploads/2024/07/11/kfCuPqGT_iam-1.PNG"></img>
</div>
In the **Permission Policies** section, you can manage all policies. By default, there is a `super-admin-permission-policy` for the initial administrator.

Click the **"Add Policy"** button at the top right to open the policy creation form. Here, fill in the policy's basic information and configure its permissions in the Policy Editor, referred to as Statements.
<div align="center">
<img alt="Edit Statements" style="width: 60%" src="https://static.apiseven.com/uploads/2024/07/11/kYfuyNGx_iam-3.PNG"></img>
</div>
Statements are the core components of a policy, consisting of one or more statements. Each statement defines a specific access permission rule.
- **Effect**: Specifies the effect of the statement, usually `"Allow"` or `"Deny"`. A resource can be affected by multiple policies, and the IAM system determines the final access permissions based on the order and logic of the statements.
- **Action**: Defines a series of allowed or denied actions, such as `"gateway:DeleteGatewayGroup"` or `"iam:GetUser"`. These actions must be used in conjunction with resources (Resource) to be meaningful.
- **Resource**: Specifies the resources to which the statement applies, such as a specific gateway group or service. Wildcards (e.g., `<.*>`) can be used to match multiple resources.
- **Condition** (optional): Defines the conditions under which the statement will be effective. For example,
```
"conditions": {
"gateway_group_label": {
"type": "MatchLabel",
"options": {
"key": "type",
"operator": "exact_match",
"value": "production"
}
}
},
```
This means the operations defined in the statement take effect only when the gateway group's `type` label is set to `production`.
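Putting these fields together, a complete (hypothetical) statement that allows read access only on production-labeled gateway groups might look like this — the action and resource values follow the patterns above, not a verified API7 policy:

```JSON
{
  "effect": "allow",
  "actions": ["gateway:GetGatewayGroup"],
  "resources": ["arn:api7:gateway:gatewaygroup/<.*>"],
  "conditions": {
    "gateway_group_label": {
      "type": "MatchLabel",
      "options": {
        "key": "type",
        "operator": "exact_match",
        "value": "production"
      }
    }
  }
}
```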
For example, to create a policy that restricts a user to only editing all published services within a specific gateway group, you can write it as follows:
```JSON
{
"statement": [
{
"resources": [
"arn:api7:gateway:gatewaygroup/{gateway group id}"
],
"actions": [
"<.*>Get<.*>" // Allow executing all operations starting with 'Get', which represent the 'read' permission for the specified gateway group
],
"effect": "allow" // Allow executing the operations defined above
// Note: This policy statement allows performing all 'Get' operations on the specified gateway group, such as retrieving gateway group information, listing services within the gateway group, etc.
},
{
"resources": [
"arn:api7:gateway:gatewaygroup/{gateway group id}/publishedservice/<.*>",
],
"actions": [
"<.*>" // Allow executing all operations
],
"effect": "allow" // Allow executing the operations defined above
// This policy statement allows performing all operations (create, read, update, delete, etc.) on the published services within the specified gateway group.
}
]
}
```
In this example, we defined two sets of resources (gateway group and published services within the gateway group) and set the allowed actions for each. Some common policy configuration examples can be found in the doc [Permission Policy Examples](https://docs.api7.ai/enterprise/reference/permission-policy-examples). For the available values of resources and actions and the corresponding APIs, refer to the doc [Permission Policy Actions and Resources](https://docs.api7.ai/enterprise/reference/permission-policy-action-and-resource).
After creating a policy, it cannot be directly assigned to users; we need to assign the policy to specific roles first.
### 2. Create Roles and Attach Policies
After logging into API7 Enterprise, click the **"Organization"** button at the top right, and select the **"Roles"** menu item from the dropdown menu.
<div align="center">
<img alt="Roles" style="width: 30%" src="https://static.apiseven.com/uploads/2024/07/11/3j4jOPp8_iam-4.png"></img>
</div>
In the **Roles** section, you can manage all roles. By default, there is a built-in Super Admin role.

Click the **"Add Custom Role"** button at the top right to enter the role creation form. Here, you need to fill in the basic information of the role. After creation, we will enter the role details page.

In the role details page, click the **"Attach Policy"** button to assign the previously created permission policies to the role.
<div align="center">
<img alt="Attach Policies to Roles" style="width: 70%" src="https://static.apiseven.com/uploads/2024/07/11/IVWPYSUs_iam-7.png"></img>
</div>
The created policies can be reused across multiple roles. When a role is associated with multiple policies, these policies are combined by default. That is, the permissions of a role are the sum of the permissions declared in all associated policies. After creating a role and attaching policies, we can assign the role to specific users.
### 3. Assign Roles to Users
After logging into API7 Enterprise, click the **"Organization"** button at the top right, and select the **"Users"** menu item from the dropdown menu.
<div align="center">
<img alt="Users" style="width: 26%" src="https://static.apiseven.com/uploads/2024/07/11/CoorpdBa_iam-8.png"></img>
</div>
In the **Users** section, you can manage all users. By default, there is an admin user as the built-in administrator.

In the action column on the right side of the user list, click **"Update Roles"** to open the form drawer for updating a specific user's roles.
<div align="center">
<img alt="Update Roles for Users" style="width: 60%" src="https://static.apiseven.com/uploads/2024/07/11/MCdUTl2v_iam-10.png"></img>
</div>
A user can have multiple roles. When a user is granted multiple roles, the permissions are combined. That is, the user has the sum of the permissions of all roles.
### Simplified Role Mapping
With the optimization and upgrade of the user role model, the SSO role mapping process has been simplified. Now, when configuring the role mapping for login options, there is no need to set resource-level matching rules for each role individually. The resource and operation permissions are directly inherited from the selected built-in roles, making the permission configuration more intuitive and simplifying the complexity of permission management.
<div align="center">
<img alt="Role Mapping" style="width: 60%" src="https://static.apiseven.com/uploads/2024/07/11/dnblLML6_iam-11.png"></img>
</div>
## Summary
The introduction of IAM policies has enabled more flexible permission configuration and management. This adjustment not only enhances system security but also provides users with a greater possibility for customization.
In the future, we will continue to expand the types of resources supported by IAM policies, ensuring that all system resources can be included in fine-grained permission management, and continuously optimizing the policy editing and management interface, bringing users a more comprehensive and efficient permission management experience. | yilialinn | |
1,920,346 | Connect with NBA YoungBoy Merch on Instagram! | Discover exclusive NBA YoungBoy Merch on Instagram! Follow us to stay updated with the latest drops,... | 0 | 2024-07-12T01:16:03 | https://dev.to/nbayoungboymerch39/connect-with-nba-youngboy-merch-on-instagram-1m8b | nbayoungboymerch, instagram, exclusivemerch | Discover exclusive NBA YoungBoy Merch on Instagram! Follow us to stay updated with the latest drops, exclusive collections, and behind-the-scenes content. Don't miss out on the hottest merch from NBA YoungBoy, available only on our Instagram page. Join our community and be the first to know about new releases and special offers!
https://www.instagram.com/nbyoungboymerchshop/
 | nbayoungboymerch39 |
1,920,347 | Wix Studio | This is a submission for the Wix Studio Challenge . What I Built Demo ... | 0 | 2024-07-12T01:17:51 | https://dev.to/alla_santoshpavankumar_/wix-studio-2dj9 | devchallenge, wixstudiochallenge, webdev, javascript | *This is a submission for the [Wix Studio Challenge](https://dev.to/challenges/wix).*
## What I Built
<!-- Share an overview about your project. -->
## Demo
<!-- Share a link to your Wix Studio app and include some screenshots here. -->
## Development Journey
<!-- Tell us how you leveraged Wix Studio’s JavaScript development capabilities-->
<!-- Which APIs and Libraries did you utilize? -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
<!-- Thanks for participating! -->
https://allasantoshpavanku.wixsite.com/mysite | alla_santoshpavankumar_ |
1,920,348 | Tutorial: Creating a Vue Project from Scratch and Integrating It with Bootstrap | In this tutorial, we will learn step by step how to create a Vue project from scratch and integrate it with... | 0 | 2024-07-12T01:28:34 | https://dev.to/lesniergonzalez/tutorial-creacion-de-un-proyecto-vue-desde-cero-e-integracion-con-bootstrap-2l3m | spanish |

In this tutorial, we will learn step by step how to create a Vue project from scratch and integrate it with Bootstrap for styling. Vue.js is a popular JavaScript framework for building user interfaces, and Bootstrap is a frontend framework that simplifies the design process. By the end of this tutorial, you will have a basic Vue application styled with Bootstrap.
{% embed https://youtu.be/ZQjwNvm8aLE %}
## Prerequisites
Before starting, make sure you have the following installed:
- Node.js and npm (Node Package Manager)
- Vue CLI (Command Line Interface)
## Step 1: Set Up a Vue Project
If you haven't installed Vue CLI yet, you can do so with npm:
```
npm install -g @vue/cli
```
Now, let's create a new Vue project. Open your terminal and run:
```
vue create vue-bootstrap-project
```
Follow the prompts to set up your project. Choose the default configuration or manually select the features you need.
## Step 2: Install Bootstrap
Once your Vue project is set up, navigate to the project directory:
```
cd vue-bootstrap-project
```
Install Bootstrap and its dependencies using npm:
```
npm install bootstrap
```
## Step 3: Integrate Bootstrap
Now, let's integrate Bootstrap into your Vue project.
### Option 1: Using the Bootstrap CSS file
Open your src/main.js file and import the Bootstrap CSS file:
```js
import 'bootstrap/dist/css/bootstrap.min.css';
```
### Option 2: Using Bootstrap Vue (optional)
Alternatively, you can use Bootstrap Vue, which provides Vue.js components for Bootstrap elements:
```
npm install bootstrap-vue
```
In your src/main.js file, import Bootstrap Vue and register its components globally:
```js
import Vue from 'vue';
import BootstrapVue from 'bootstrap-vue';
import 'bootstrap/dist/css/bootstrap.min.css';
import 'bootstrap-vue/dist/bootstrap-vue.css';
Vue.use(BootstrapVue);
```
## Step 4: Create Your First Vue Component
Let's create a simple Vue component to see Bootstrap in action. Note that the `<b-button>` components below come from Bootstrap Vue (Option 2); if you only imported the Bootstrap CSS, use regular `<button class="btn btn-primary">` elements instead.
Create a new component at src/components/HelloWorld.vue:
```vue
<template>
<div class="container">
<h1>Hello, Vue.js with Bootstrap!</h1>
<b-button variant="primary">Primary Button</b-button>
<b-button variant="success">Success Button</b-button>
</div>
</template>
<script>
export default {
name: 'HelloWorld',
};
</script>
<style>
/* Optional: Add custom styles */
</style>
```
## Step 5: Use Your Component
Now, import and use your HelloWorld component in src/App.vue:
```vue
<template>
<div id="app">
<HelloWorld/>
</div>
</template>
<script>
import HelloWorld from './components/HelloWorld.vue';
export default {
name: 'App',
components: {
HelloWorld,
},
};
</script>
<style>
/* Optional: Add global styles */
</style>
```
## Step 6: Run Your Vue Application
Finally, run your Vue application:
```
npm run serve
```
Open your browser and go to http://localhost:8080 to see your Vue application styled with Bootstrap.
## Conclusion
Congratulations! You have successfully created a Vue project from scratch and integrated Bootstrap for styling. Together, Vue.js and Bootstrap offer a powerful combination for building responsive, interactive web applications. Experiment further by exploring Bootstrap's components and customizing your own Vue components to suit your application's needs. Happy coding!
| lesniergonzalez |
1,920,349 | Mailbox unavailable. The server response was: 5.7.1 Service unavailable, Client host [IP ADDRESS] blocked using Spamhaus. | Mailbox unavailable. The server response was: 5.7.1 Service unavailable, Client host [IP ADDRESS]... | 0 | 2024-07-12T01:22:34 | https://dev.to/kath/mailbox-unavailable-the-server-response-was-571-service-unavailable-client-host-ip-address-blocked-using-spamhaus-329j |
Mailbox unavailable. The server response was: 5.7.1 Service unavailable, Client host [IP ADDRESS] blocked using Spamhaus. To request removal from this list see https://www.spamhaus.org/query/ip/IP ADDRESS AS(1450)
| kath | |
1,920,413 | proof | $argon2id$v=19$m=64,t=512,p=2$B5zGjIkH2yFiCtSvljY3Tg$/kiueTD5R5NCGfcmoiN5/g | 0 | 2024-07-12T02:31:27 | https://dev.to/lholh/proof-4820 | $argon2id$v=19$m=64,t=512,p=2$B5zGjIkH2yFiCtSvljY3Tg$/kiueTD5R5NCGfcmoiN5/g | lholh | |
1,920,351 | Seeking an experienced iOS developer with expertise in Mac app notarization! 🚀 | Dear Everyone. How are you? Hope you are doing well. I'm seeking an experienced iOS developer with... | 0 | 2024-07-12T01:26:26 | https://dev.to/lilla_flowers_dd13e65858b/seeking-an-experienced-ios-developer-with-expertise-in-mac-app-notarization-772 | Dear Everyone. How are you? Hope you are doing well. I'm seeking an experienced iOS developer with expertise in Mac app notarization! 🚀
If you have a track record of successfully notarizing Mac apps, we want to hear from you.
telegram : @kingstar729 | lilla_flowers_dd13e65858b | |
1,920,352 | Binary Images & Image Thresholding | Thresholding Thresholding is a simple yet effective technique used in image processing to... | 0 | 2024-07-12T02:52:04 | https://dev.to/catheryn/binary-images-image-thresholding-282 | ai, opencv, imagemanipulation, computervision | ## Thresholding
Thresholding is a simple yet effective technique used in image processing to convert a grayscale image into a [binary image](#binary-image). The core idea is to segment the image into two parts (usually 0 and 255) based on a specific threshold value.
Thresholding involves setting a threshold value that separates the pixel values of the image into two distinct groups:
- **Pixels above the threshold:** These pixels are usually set to the maximum value (often 255 for white in binary images).
- **Pixels below or equal to the threshold:** These pixels are usually set to the minimum value (often 0 for black in binary images).
## Types of Thresholding
**1. Global Thresholding:** A single global threshold value is applied to the entire image.
Example: Setting all pixel values above 165 to 255 (white) and those below or equal to 165 to 0 (black).
```
cv2.threshold(img, 165, 255, cv2.THRESH_BINARY)
An image with the following values:
[[103 105 105]
[211 210 210]
[212 211 211]
[139 138 137]]
Would become:
[[0 0 0]
[255 255 255]
[255 255 255]
[0 0 0]]
```
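To make the rule concrete, here is a minimal pure-Python sketch of the same binary rule (no OpenCV required; `threshold_binary` is an illustrative helper, not an OpenCV API):

```python
def threshold_binary(img, thresh, maxval):
    # Pixels strictly above thresh become maxval; all others become 0,
    # mirroring cv2.threshold(img, thresh, maxval, cv2.THRESH_BINARY).
    return [[maxval if px > thresh else 0 for px in row] for row in img]

img = [[103, 105, 105],
       [211, 210, 210],
       [212, 211, 211],
       [139, 138, 137]]

for row in threshold_binary(img, 165, 255):
    print(row)
# → [0, 0, 0]
#   [255, 255, 255]
#   [255, 255, 255]
#   [0, 0, 0]
```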
**2. Adaptive Thresholding:** The threshold value is determined for smaller regions of the image, allowing for different threshold values in different parts of the image.
Example: Setting all pixel values above the calculated mean to 255 (white) and those below or equal to 165 to 0 (black).
```
cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 3, 7)
```
See the OpenCV documentation for details on how the adaptive threshold is calculated.
Adaptive thresholding is useful for images with varying lighting conditions. Because each pixel's threshold is computed from its local neighborhood, the result is usually a clearer image.
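For intuition, here is a rough pure-Python sketch of mean-based adaptive thresholding. It is only an approximation of OpenCV's behavior: OpenCV replicates border pixels, while this sketch simply clips the neighborhood at image edges, and `adaptive_threshold_mean` is an illustrative helper, not a real API.

```python
def adaptive_threshold_mean(img, maxval, block, C):
    # Per-pixel threshold = mean of the block x block neighborhood minus C,
    # loosely mirroring cv2.adaptiveThreshold with ADAPTIVE_THRESH_MEAN_C.
    h, w = len(img), len(img[0])
    r = block // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Neighborhood clipped at the image borders (OpenCV replicates instead).
            vals = [img[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            t = sum(vals) / len(vals) - C
            out[y][x] = maxval if img[y][x] > t else 0
    return out

# Dark and bright columns are separated per-pixel by their local mean.
print(adaptive_threshold_mean([[10, 200], [10, 200]], 255, 3, 0))
# → [[0, 255], [0, 255]]
```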
## Binary Image
A binary image is a type of image that has only two possible pixel values: 0 and 255. These values represent black and white, respectively. Binary images are used to simplify the analysis of images by reducing the complexity of the data. We use thresholding algorithms to achieve binary images.
## Significance of Binary Images
- **Simplification:** It reduces the complexity of an image by converting it to two colors, making it easier to analyze.
- **Segmentation:** In application of object detection, it can help in isolating objects from the background.
- **Feature Extraction:** It is useful for identifying and extracting specific features from an image, such as shapes or edges. | catheryn |
1,920,353 | Demo project | CREATE USER hccdp WITH SYSADMIN PASSWORD 'Sadece1234'; CREATE DATABASE hccdp_db; ALTER DATABASE... | 0 | 2024-07-12T01:30:46 | https://dev.to/ozcankara/demo-project-3d4b | ```
CREATE USER hccdp WITH SYSADMIN PASSWORD 'Sadece1234';
CREATE DATABASE hccdp_db;
ALTER DATABASE hccdp_db OWNER TO hccdp;
```
```
gsql -d postgres -p 5432 -U root -W
```
```
CREATE USER hccdp WITH SYSADMIN PASSWORD 'Sadece1234';
```
```
CREATE DATABASE hccdp_db;
```
```
ALTER DATABASE hccdp_db OWNER TO hccdp;
```
```
gsql -d postgres -p 5432 -U root -W
```
```
CREATE DATABASE hccdp_db;
CREATE USER hccdp WITH SYSADMIN PASSWORD '<your_password>';
ALTER DATABASE hccdp_db OWNER TO hccdp;
```
```
\l
```
```
gsql -d hccdp_db -p 5432 -U hccdp -W
```
```
CREATE TABLE example_table (
id SERIAL PRIMARY KEY,
name VARCHAR(100),
age INT
);
```
```
INSERT INTO example_table (name, age) VALUES ('John Doe', 30);
```
```
SELECT * FROM example_table;
```
```
ssh root@*EIP*
```
```
wget https://sandbox-expriment-files.obs.cn-north-1.myhuaweicloud.com:443/20220525/GaussDB_opengauss_client_tools.zip
```
```
unzip GaussDB_opengauss_client_tools.zip
```
```
cd GaussDB_opengauss_client_tools/Euler2.8_arm_64/
```
```
tar -xvf GaussDB-Kernel-V500R001C20-EULER-64bit-gsql.tar.gz
```
```
source gsql_env.sh
```
Use gsql to connect to the GaussDB database. In the command, -h specifies the private IP address of the ECS where the GaussDB database is deployed, -d specifies the database name, -U specifies the username, -p specifies the port, and -W specifies the password.
```
gsql -h *Private IP address* -U root -d postgres -p 8000 -W yourpassword -r
```
```
\q
```
```
gsql --help
```
```
\copyright
```
```
\h
```
```
\help CREATE DATABASE
```
```
\?
```
```
gsql -h *Private IP address* -U root -d postgres -p 8000 -W yourpassword -l
```
```
gsql -V
```
```
gsql -h *Private IP address* -U root -d postgres -p 8000 -W yourpassword -q
```
```
create table t_test (firstcol int);
```
```
insert into t_test values(200);
```
```
select * from t_test;
```
```
\q
```
```
gsql -h *Private IP address* -U root -d postgres -p 8000 -W yourpassword -S
```
```
select * from t_test;
```
```
select * from t_test
```
```
\q
```
```
gsql -h *Private IP address* -U root -d postgres -p 8000 -W yourpassword -r
```
```
select * from t_test;
```
Open a terminal and connect to the ECS using the following command:
```
ssh username@ecs-ip-address
```
```
wget https://example.com/path/to/GaussDB_opengauss_client_tools.zip
unzip GaussDB_opengauss_client_tools.zip -d /path/to/destination
cd /path/to/destination/EulerOS_X86_64
```
```
cd /path/to/destination/EulerOS_X86_64
./gsql -d hccdp_db -h 192.168.0.183 -p 8000 -U hccdp
```
```
systemctl status gaussdb
```
```
systemctl start gaussdb
```
Extracting and Running the gsql Client Tools
```
wget https://example.com/path/to/GaussDB_opengauss_client_tools.zip
unzip GaussDB_opengauss_client_tools.zip -d /path/to/destination
cd /path/to/destination/EulerOS_X86_64
./gsql -d hccdp_db -h 192.168.0.183 -p 8000 -U hccdp
```
Change to the directory containing the gsql client.
```
cd /path/to/GaussDB_opengauss_client_tools/EulerOS_X86_64
```
Connect to GaussDB
```
./gsql -h 192.168.0.183 -U root -d postgres -p 8000 -W yourpassword -r
```
Here, 192.168.0.183 is the IP address of GaussDB, root is the username, postgres is the database name, 8000 is the port number, and yourpassword is your password.
Connect to GaussDB using the following command:
```
./gsql -d hccdp_db -h 192.168.0.183 -p 8000 -U hccdp
```
Here, hccdp_db is the database name, 192.168.0.183 is the IP address of GaussDB, 8000 is the port number, and hccdp is the username.
```
CREATE SCHEMA hccdp;
CREATE TABLE hccdp.nation (
N_NATIONKEY INT PRIMARY KEY,
N_NAME VARCHAR(25),
N_REGIONKEY INT,
N_COMMENT VARCHAR(152)
);
```
```
\copy hccdp.nation FROM '/path/to/nation.csv' WITH (FORMAT csv, HEADER true);
\copy hccdp.part FROM '/path/to/part.csv' WITH (FORMAT csv, HEADER true);
\copy hccdp.supplier FROM '/path/to/supplier.csv' WITH (FORMAT csv, HEADER true);
\copy hccdp.partsupp FROM '/path/to/partsupp.csv' WITH (FORMAT csv, HEADER true);
\copy hccdp.orders FROM '/path/to/orders.csv' WITH (FORMAT csv, HEADER true);
\copy hccdp.lineitem FROM '/path/to/lineitem.csv' WITH (FORMAT csv, HEADER true);
```
An example SQL command for querying data:
```
SELECT * FROM hccdp.nation;
```
Creating a stored procedure:
```
CREATE OR REPLACE PROCEDURE proc_nation() AS $$
BEGIN
    -- Procedure body goes here
END;
$$ LANGUAGE plpgsql;
```
| ozcankara | |
1,920,355 | Spend $0 to learn these Programming Languages in 2024 for free. | → Python → JavaScript → Java → C# → Ruby → Swift → Kotlin → C++ → PHP → Go → R → TypeScript... | 0 | 2024-07-12T01:47:03 | https://dev.to/e_opore_80/spend-0-to-learn-these-programming-languages-in-2024-for-free-2b19 | softwaredevelopment, python, css, java |
→ Python
→ JavaScript
→ Java
→ C#
→ Ruby
→ Swift
→ Kotlin
→ C++
→ PHP
→ Go
→ R
→ TypeScript
https://x.com/e_opore/status/1811567830594388315?t=_j4nncuIY2WfBm7icTW9-w&s=19 | e_opore_80 |
1,920,357 | Top 3 JavaScript Concepts Every Developer Should Know | Here are the top 3 JavaScript concepts every developer should know: Functions: Functions are... | 0 | 2024-07-12T16:00:00 | https://dev.to/devstoriesplayground/top-3-javascript-concepts-every-developer-should-know-5bjm | javascript, programming, programmers, productivity | Here are the top 3 JavaScript concepts every developer should know:
1. Functions: Functions are reusable blocks of code that perform a specific task. They are essential for organizing your code, making it more readable and maintainable. JavaScript functions can take parameters (inputs) and return values (outputs).

2. Asynchronous Programming: JavaScript is single-threaded, meaning it can only execute one task at a time. However,many web applications require performing multiple tasks simultaneously, such as fetching data from a server or waiting for user input. Asynchronous programming allows you to handle multiple tasks without blocking the main thread. This is achieved through techniques like callbacks, promises, and async/await.

3. DOM Manipulation: The Document Object Model (DOM) is a tree-like representation of an HTML document.JavaScript can manipulate the DOM to dynamically change the content and structure of a web page. This allows you to create interactive web applications that respond to user actions and update the UI in real-time.

#### Let's wrap up things
> These are just a few of the many important concepts in JavaScript. By mastering these fundamentals, you'll be well on your way to becoming a proficient JavaScript developer.
`HAPPY CODING 🚀`
| devstoriesplayground |
1,920,358 | ⚡ MySecondApp - React Native with Expo (P5)- Custom Bottom Tabs Navigator | ⚡ MySecondApp - React Native with Expo (P5)- Custom Bottom Tabs Navigator | 28,005 | 2024-07-12T01:58:53 | https://dev.to/skipperhoa/mysecondapp-react-native-with-expo-p5-custom-bottom-tabs-navigator-fd0 | webdev, react, reactnative, tutorial | ⚡ MySecondApp - React Native with Expo (P5)- Custom Bottom Tabs Navigator
{% youtube hrGTDB7B4SA %} | skipperhoa |
1,920,359 | What does box-sizing: border-box actually do? | When I first started learning CSS, I saw box-sizing: border-box in almost every CSS file I... | 0 | 2024-07-12T02:31:32 | https://dev.to/bridget_amana/what-does-box-sizing-border-box-actually-do-3ol5 | webdev, css, tutorial, frontend | When I first started learning CSS, I saw box-sizing: border-box in almost every CSS file I encountered. Like many beginners, I copied it without understanding its purpose. If this sounds familiar, don't worry—you’re not alone.
**What is box-sizing?**
The _box-sizing_ property in CSS controls how the width and height of an element are calculated. There are three main values:
1. **content-box** (default): The width and height apply only to the content, not the padding or border. This can lead to unexpected sizes if you add padding or borders later.
2. **border-box**: The width and height include padding and border, making the total size of the element more predictable.
3. **inherit**: The element inherits the box-sizing value from its parent.
**Why box-sizing: border-box?**
Here’s why box-sizing: border-box is so helpful:
- The width and height include padding and border. If you set an element's width to 200px, it will always be 200px, regardless of padding or border.
- No need to calculate the padding and borders to know the element’s total size. This makes designing and adjusting layouts much easier.
- Using border-box across all elements ensures a consistent design, making your CSS cleaner and more maintainable.
Here’s a simple example to illustrate the difference between **content-box** and **border-box**:
**Content box**
```
<div class="content-box">
<p>content-box</p>
</div>
```
```
.content-box {
background-color: red;
width: 200px;
padding: 20px;
border: 10px solid black;
text-align: center;
color: white;
}
```

As shown in the screenshot, the box with `box-sizing: content-box` has a total width of 260px. Here's why:
200px is the width set for the content area; padding of 20px on each side adds 40px in total (20px + 20px); a border of 10px on each side adds 20px in total (10px + 10px).
Total Width: 200px (content) + 40px (padding) + 20px (border) = 260px
**Why?** With _content-box_, the width you set applies only to the content inside the box. Padding and border are added on top of this width, increasing the total size of the box.
**Border box**
```
<div class="border-box">
<p>border-box</p>
</div>
```
```
.border-box {
box-sizing: border-box;
background-color: green;
width: 200px;
padding: 20px;
border: 10px solid black;
text-align: center;
color: white;
}
```

In contrast, the box with `box-sizing: border-box` has a total width of 200px. Here’s why:
200px is the width set for the entire box, including content, padding, and border; the 20px padding on each side and the 10px border on each side are included within that 200px.
Total Width: The 200px width encompasses the content, padding, and border altogether. No additional space is added outside this width.
**Why?** With _border-box_, the width you set covers everything within the box, so the total size remains exactly as specified, with no padding or border extending beyond it.
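The arithmetic behind both screenshots can be sketched in a few lines (a hypothetical helper for illustration, not part of any CSS API):

```javascript
// Total rendered width of a box under each box-sizing model.
function totalWidth(cssWidth, padding, border, boxSizing) {
  if (boxSizing === "border-box") {
    return cssWidth; // padding and border fit inside the declared width
  }
  return cssWidth + 2 * padding + 2 * border; // content-box adds them outside
}

console.log(totalWidth(200, 20, 10, "content-box")); // → 260
console.log(totalWidth(200, 20, 10, "border-box"));  // → 200
```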
Understanding and using box-sizing: border-box can simplify your CSS and make your layouts more predictable and easier to manage. If you had no idea about it I hope this explanation clears things up.
You can view and experiment with the code used in this example on [CodePen](https://codepen.io/Bridgetamana/pen/JjQdddZ).
If you enjoyed this post, connect with me on [LinkedIn](https://www.linkedin.com/in/bridget-amana/), and [Twitter](https://twitter.com/amana_bridget)!
| bridget_amana |
1,920,361 | [Unity] Publish assets that automatically save backups of files. | I created a Unity asset that automatically saves Unity backups. I would like to release it to you in... | 0 | 2024-07-12T02:09:33 | https://dev.to/uni928/unity-publish-assets-that-automatically-save-backups-of-files-2bik | asset, backup, auto, unity3d | I created a Unity asset that automatically saves Unity backups.
I would like to release it to you in the hope that it will be useful to you.
Please download ver1.1.0 (latest version) from the following site.
https://drive.google.com/file/d/1nnEEoMFRQqypSJAWU4odKbQz4JK86wpr/view?usp=sharing
***
Description of this asset
Automatically saves a backup every time a file is updated.
No other operations are required other than importing this asset.
The Assets_BackUP folder is automatically generated in the hierarchy where the Assets folder is located.
Every time a file in the Assets folder is updated, a backup is automatically saved there.
The backup file name includes the update date, so if you want to return to a specific point in time, you can identify it by the file name.
If you delete the Assets_BackUP folder, a new Assets_BackUP folder will be regenerated containing only a backup of the current state.
If the Assets_BackUP folder becomes too large, consider deleting it.
This asset will never overwrite any files in the Assets folder.
This asset only touches the Assets_BackUP folder.
Therefore, it is impossible for this asset to cause bugs.
Please use it with confidence.


***
Ver1.0.0 works with the code below.
Ver1.1.0 also has a function to partially delete backup files that are more than 4 business days old, but because the code is long, we will only post the code for Ver1.0.0.
If you create a new SephirothAutoBackUPMonitoring.cs, replace the code with the code below and put it in the Editor folder, it should work in the same state as ver1.0.0.
```Csharp
using System.Threading.Tasks;
namespace SephirothTools
{
public class SephirothAutoBackUPMonitoring
{
private static readonly object Lock = new();
private static bool isExec = false;
private static string DateString;
public const bool isCSharpFileOnly = false;
[UnityEditor.InitializeOnLoadMethod]
private static void Exec()
{
lock (Lock)
{
if (isExec)
{
return;
}
isExec = true;
Task.Run(() => TaskExec(System.DateTime.Today.ToString("yyyyMMdd-HHmmss")));
}
}
private static void TaskExec(string date)
{
DateString = date;
while (true)
{
try
{
ExecOneTime();
}
catch
{
}
System.Threading.Thread.Sleep(5000); // A thread for SephirothAutoBackUPMonitoring is generated and put into sleep, so the main thread is not stopped.
if(date != DateString)
{
break;
}
}
}
private static void ExecOneTime()
{
string targetPath = System.IO.Directory.GetCurrentDirectory() + "\\Assets_BackUP";
if (!System.IO.Directory.Exists(targetPath))
{
System.IO.Directory.CreateDirectory(targetPath);
}
foreach (string onePath in System.IO.Directory.GetFiles(System.IO.Directory.GetCurrentDirectory() + "\\Assets", "*", System.IO.SearchOption.AllDirectories))
{
if (isCSharpFileOnly && !onePath.EndsWith(".cs"))
{
continue;
}
string oneTargetDirectory = System.IO.Directory.GetCurrentDirectory() + "\\Assets_BackUP" + onePath.Substring((System.IO.Directory.GetCurrentDirectory() + "\\Assets").Length);
oneTargetDirectory = oneTargetDirectory.Substring(0, oneTargetDirectory.LastIndexOf("\\"));
string oneTargetFileName = System.IO.File.GetLastWriteTime(onePath).ToString("yyyyMMdd-HHmmss") + "_" + System.IO.Path.GetFileName(onePath);
string target = oneTargetDirectory + "\\" + oneTargetFileName;
if (!System.IO.File.Exists(target))
{
string[] pathMove = oneTargetDirectory.Split("\\");
string folderPath = pathMove[0];
for (int i = 1; i < pathMove.Length; i++)
{
folderPath = folderPath + "\\" + pathMove[i];
if (!System.IO.Directory.Exists(folderPath))
{
System.IO.Directory.CreateDirectory(folderPath);
}
}
System.IO.File.Copy(onePath, target);
}
}
}
}
}
```
***
We have now released an asset that automatically saves Unity backups.
We hope this will be helpful to your development.
Thank you for reading. | uni928 |
1,920,363 | A Comprehensive Guide to Android NDK Development with Android Studio | Introduction The Native Development Kit (NDK) is a collection of tools designed to help... | 0 | 2024-07-12T09:54:33 | https://dev.to/wetest/a-comprehensive-guide-to-android-ndk-development-with-android-studio-1d5c | programming, androiddev, javascript, devops | ## Introduction
The Native Development Kit (NDK) is a collection of tools designed to help developers efficiently create C or C++ dynamic libraries and automatically package the .so files and Java applications into an APK. There are several reasons why developers might choose to use the NDK:
1. **Code protection**: Java code in APKs can be easily decompiled, making it vulnerable to reverse engineering. In contrast, C/C++ libraries are more difficult to reverse compile, offering better protection.
2. **Utilization of existing open-source libraries**: A vast majority of open-source libraries are written in C/C++ code, making it convenient for developers to incorporate them into their projects.
3. **Enhanced execution efficiency**: Developing high-performance application logic using C can significantly improve the execution efficiency of an application, resulting in a better user experience.
4. **Portability**: Libraries written in C/C++ can be easily reused on other embedded platforms, making it simpler for developers to port their applications to different devices.
## Preparation
For the **NDK package download** link: http://dl.google.com/android/ndk/android-ndk32-r10-windows-x86_64.zip
After downloading the NDK zip package, extract it to the D:\Android directory:
With this package, there is no need to install Cygwin, as the NDK package already includes integrated Linux compilation functionality. This makes the process more convenient and straightforward for developers.
## Steps
**Create a new project in Android Studio or add a new module**.
This article will not go into detail on how to do this. In my example, I added a new Android library-type module to an existing project, named cloudNDKTest.
**Environmental configuration**
Click on the menu bar File --> Project Structure, or use the shortcut key Ctrl+Alt+Shift+S. Then, follow the steps shown in the image:

After completing this, a configuration will be generated in the local.properties file:

**Write native method**
Create a new Java file and declare a static native method. Don't worry if the method name appears in red:

**Compile the project**
Execute "Make Project" to compile the corresponding class files, which will be needed later when generating the .h files.
**Create a JNI directory**
Switch the view from Android to Project, and create a JNI directory under the src/main directory, at the same level as the Java directory.

**Generate the C++ .h file**
Click on the menu bar View --> Tool Windows --> Terminal, or use the shortcut key Alt+F12 to bring up the terminal window:

Then, execute the following command in the Terminal window:
```shell
cd cloudndktest/src/main
javah -d jni -classpath D:/Android/android-sdk/platforms/android-22/android.jar;../../build/intermediates/classes/debug com.tencent.XXX.XXX.cloudndktest.CloudNdkTest
```
Here, javah is the tool needed to generate header files, -d specifies the directory location for file creation, and -classpath specifies the file location of android.jar in the SDK folder. After the semicolon, it specifies the class file generated in step 4.
Finally, it will generate:

**Write the CPP file**
Create a CPP file in the JNI directory, and do not check the part marked in red below, as the .h file already exists.
Write the CPP file, include the previously created .h file, and implement the specific function.

**Compile**
**a.** First, add the following content to the module's build.gradle:

The above configuration code specifies the .so library name as CloudNdkTest; the library used during linking, corresponding to the LOCAL_LDLIBS in the Android.mk file; and the final output specifies the .so library under three ABI architectures.
**b.** Configure the gradle.properties file and add:
android.useDeprecatedNdk=true
**c.** Add a reference to the static library in the Java class written in step

**d.** If you encounter the following error, please create an empty util.c file in the JNI directory. This is said to be a bug in the NDK.

**e.** Execute "Make Project" to compile the project.
## Precautions
If the CPP implementation uses the STL library, you need to add the following in step 8.a:

Currently, several compilation and linking methods are supported:
stlport_static --> Use the stlport version of STL with static linking
stlport_shared --> Use the stlport version of STL with dynamic linking
gnustl_static --> Use the GNU version of STL with static linking
It is important to note that it is better to compile through static libraries, as this will not cause conflicts between .so files in multiple modules and will also reduce the final package file size.
For more information, check out [WeTest](https://www.wetest.net/?utm_source=forum&utm_medium=dev&utm_content=a-comprehensive-guide) services.
 | wetest |
1,920,451 | Creating Complex Animations with CSS | Introduction: In recent years, CSS has become increasingly popular for creating dynamic and... | 0 | 2024-07-12T04:15:30 | https://dev.to/tailwine/creating-complex-animations-with-css-4m96 | Introduction:
In recent years, CSS has become increasingly popular for creating dynamic and interactive animations on websites. With the help of CSS, web designers and developers can add complex animations to their designs without relying on third-party plugins or libraries. In this article, we will delve into the world of CSS animations and explore the advantages, disadvantages, and features of creating complex animations with CSS.
Advantages:
CSS animations offer a myriad of advantages, making them a preferred choice for web designers. Firstly, CSS animations are lightweight and fast to load, resulting in a smooth and seamless user experience. Secondly, they are easy to implement, as they require no additional scripting or external dependencies. Additionally, CSS animations allow for a high level of customization, giving designers control over the animation's speed, duration, and timing. They are also compatible with all modern browsers, making them a universal solution for creating complex animations.
Disadvantages:
Despite its numerous advantages, there are a few drawbacks to using CSS animations. One major disadvantage is that creating complex animations with CSS can be time-consuming and require a strong understanding of CSS syntax. Moreover, CSS animations are not yet fully supported in older browsers, which may lead to compatibility issues.
Features:
CSS animations offer several features that can help designers create captivating and dynamic animations. With the use of keyframes, designers can define specific points in an animation, allowing for more precise control over its movement. Transitions can also be used to achieve smooth and gradual animations. Furthermore, CSS animations also support various properties such as rotation, scaling, and skewing, allowing for a wide range of creative possibilities.
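As a brief illustration of the keyframe and transition features mentioned above (the class and animation names here are made up for the example):

```css
/* Pulse a button by scaling it up and back down. */
@keyframes pulse {
  0%   { transform: scale(1); }
  50%  { transform: scale(1.15); }
  100% { transform: scale(1); }
}

.cta-button {
  animation: pulse 2s ease-in-out infinite; /* keyframe animation */
  transition: background-color 0.3s;        /* gradual color change, e.g. on hover */
}
```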
Conclusion:
CSS animations have revolutionized the world of web design by offering a powerful, lightweight, and easy-to-use solution for creating complex animations. While they may have a few drawbacks, the advantages and features of CSS animations make them an ideal choice for web designers looking to add a touch of interactivity to their designs. With continuous advancements in CSS technology, we can expect to see even more complex animations being created using CSS in the future. | tailwine | |
1,920,365 | ข้อมูล JSON ใน PostgreSQL: โลกใหม่ของการจัดการข้อมูลบน Postgres | เกรินนำ PostgreSQL ไม่เพียงแค่เป็นฐานข้อมูลที่มีความสามารถสูง แต่ยังมีฟีเจอร์สำหรับการจัดการข้อมูล... | 0 | 2024-07-12T03:22:09 | https://dev.to/everthing-was-postgres/khmuul-json-ain-postgresql-olkaihmkhngkaarcchadkaarkhmuulbn-postgres-3omk | postgres, json, restapi | **เกรินนำ**
PostgreSQL ไม่เพียงแค่เป็นฐานข้อมูลที่มีความสามารถสูง แต่ยังมีฟีเจอร์สำหรับการจัดการข้อมูล JSON ที่ทรงพลังอีกด้วย วันนี้เราจะพาคุณสำรวจโลกของ JSON ใน PostgreSQL ที่จะทำให้การจัดการข้อมูลของคุณเป็นเรื่องง่ายและสนุกขึ้น!
**ว่าแต่ JSON คืออะไร?**
JSON (JavaScript Object Notation) เป็นรูปแบบข้อมูลที่ใช้สำหรับการแลกเปลี่ยนข้อมูลระหว่างระบบ มีโครงสร้างที่เข้าใจง่ายทั้งสำหรับมนุษย์และเครื่องคอมพิวเตอร์ ซึ่งมักถูกใช้อย่างแพร่หลายในเว็บแอปพลิเคชันและบริการ API ต่าง ๆ
**รูปแบบของ JSON**
***วัตถุ (Object)***
วัตถุใน JSON ประกอบด้วยคู่ชื่อ-ค่า (name-value pairs) โดยใช้เครื่องหมาย {} และเครื่องหมายจุลภาค , เพื่อคั่นคู่ชื่อ-ค่าแต่ละคู่:
```json
{
"name": "Alice",
"age": 25,
"city": "Wonderland"
}
```
***Array***
A JSON array consists of a list of values enclosed in [] with commas , separating the values:
```json
[
"Apple",
"Banana",
"Cherry"
]
```
***Value types in JSON***
- String: text enclosed in double quotes ""
```json
"example": "Hello, World!"
```
- Number: a numeric value
```json
"example": 123
```
- Object: a set of name-value pairs
```json
"example": {"key": "value"}
```
- Array: a list of values
```json
"example": ["item1", "item2"]
```
- Boolean: a truth value, true or false
```json
"example": true
```
- Null: the empty value null
```json
"example": null
```
***A more complex example***
This shows an array of user objects, where each user object has a name and a city as strings and an age as a number:
```json
{
"users": [
{
"name": "Alice",
"age": 25,
"city": "Wonderland"
},
{
"name": "Bob",
"age": 30,
"city": "Builderland"
}
],
"isActive": true,
"totalUsers": 2
}
```
**Let's start managing JSON data**
***Create a table with a JSON column***
Start by creating a table:
```sql
CREATE TABLE users (
id SERIAL PRIMARY KEY,
data JSONB
);
```
This creates a users table with a data column of type JSONB.
***Inserting JSON data***
Inserting JSON data is just as easy. Take a look at this example:
```sql
INSERT INTO users (data) VALUES ('{"name": "Alice", "age": 30, "city": "Wonderland"}');
```
As easy as inserting ordinary data into the database!
***Querying JSON data***
Querying JSON data in PostgreSQL is like searching a magical world. You can use this statement:
```sql
SELECT data->>'name' AS name FROM users WHERE data->>'city' = 'Wonderland';
```
This statement finds users who live in Wonderland and displays their names.
***Updating JSON data***
Updating JSON data is also quick and easy. Take a look at this example:
```sql
UPDATE users SET data = jsonb_set(data, '{age}', '31') WHERE data->>'name' = 'Alice';
```
Just like that, Alice's age is updated to 31!
***Deleting JSON data***
Delete JSON data by specifying the key you want to remove:
```sql
UPDATE users SET data = data - 'city' WHERE data->>'name' = 'Alice';
```
This statement removes Alice's city data.
**JSON functions and operators**
As the previous sections show, much of this syntax does not follow standard SQL, because the JSON data type is not a standard SQL type; Postgres nonetheless provides powerful tools for managing it.
***Operators for extracting data***
1. -> (Extract JSON Object Field): extracts a JSON object field without converting it to another type
2. ->> (Extract JSON Object Field as Text): extracts a JSON object field and converts it to text
3. #> (Extract JSON Sub-Object): extracts a JSON sub-object
4. #>> (Extract JSON Sub-Object as Text): extracts a JSON sub-object and converts it to text
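As a sketch, the extraction operators above applied to the users table from earlier (the column aliases are made up for the example):

```sql
SELECT
  data->'name'    AS name_json,     -- "Alice" (still JSON)
  data->>'name'   AS name_text,     -- Alice (converted to text)
  data#>'{name}'  AS name_sub,      -- "Alice" (path form, still JSON)
  data#>>'{name}' AS name_sub_text  -- Alice (path form, as text)
FROM users;
```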
***Functions for updating JSON data***
1. jsonb_set: updates a value inside a JSONB document
2. jsonb_insert: inserts a new value into a JSONB document
3. jsonb_delete: deletes a key from a JSONB document
***Functions for building and inspecting JSON***
1. json_build_object: builds a JSON object from key-value pairs
2. jsonb_pretty: formats a JSONB value for readability
3. jsonb_typeof: checks the type of a value in a JSONB document
> **<u>Note</u>**
> See the full details at
https://www.postgresql.org/docs/current/functions-json.html
**Things to know**
>**- JSON and JSONB are not the same.** JSON stores the data as text, while JSONB stores it in a binary format that supports compression (saving space) and random access (making it faster), so JSONB is well suited to JSON data that is modified frequently.
>**- Indexing JSON data.** A B-Tree index is recommended, as it performs well here.
>**- Maximum data size.** Postgres uses a default page size of 8KB and keeps a JSON value within one page, so a JSON value should not exceed 8KB.
**Closing thoughts**
I hope this article helps you understand the JSON data type and enjoy managing it through Postgres, so you can put it to use in your projects. See you next time!
| iconnext |
1,920,366 | Unleashing the Future: Blockchain, Web 3.0, and Solidity | In recent years, the tech world has witnessed the rapid emergence of blockchain technology, the... | 0 | 2024-07-12T02:18:36 | https://dev.to/mukunzi_ndahirojames_d6a/unleashing-the-future-blockchain-web-30-and-solidity-23ml | webdev, solidity, web3, blockchain | In recent years, the tech world has witnessed the rapid emergence of blockchain technology, the promise of Web 3.0, and the rise of Solidity as a leading programming language for smart contracts. Together, these innovations are reshaping the digital landscape, offering new ways to interact with the internet and revolutionizing industries.
**Understanding Blockchain**
Blockchain is a decentralized digital ledger that records transactions across multiple computers in a way that ensures security, transparency, and immutability. Each block in the chain contains a list of transactions, and once added, it cannot be altered retroactively. This technology underpins cryptocurrencies like Bitcoin and Ethereum but extends far beyond digital currencies.
**Key Features of Blockchain:**
**Decentralization:** Unlike traditional databases managed by a single entity, blockchain is distributed across a network of nodes, ensuring no single point of failure.
**Security**: Cryptographic techniques ensure that data on the blockchain is secure and tamper-proof.
**Transparency**: Transactions are visible to all participants, fostering trust and accountability.
**Immutability**: Once recorded, transactions cannot be altered or deleted, ensuring a permanent and auditable trail.
The Vision of Web 3.0
Web 3.0, or the decentralized web, is the next evolutionary step of the internet. It aims to create a more user-centric, private, and secure online experience by leveraging blockchain technology and decentralized protocols. Web 3.0 envisions an internet where users have control over their data, digital identities, and online interactions.
**Key Components of Web 3.0:**
**Decentralized Applications (DApps)**: Applications that run on a blockchain network, offering greater transparency and security compared to traditional apps.
**Smart Contracts**: Self-executing contracts with the terms of the agreement directly written into code, running on blockchain networks like Ethereum.
**Tokenization:** The process of converting assets or rights into a digital token on a blockchain, enabling new economic models and peer-to-peer transactions.
**Interoperability: **The ability of different blockchain networks to communicate and interact seamlessly.
**Solidity: The Backbone of Smart Contracts**
Solidity is a high-level programming language designed for writing smart contracts on the Ethereum blockchain. Created by Dr. Gavin Wood in 2014, Solidity has become the most widely used language for developing smart contracts due to its robustness and versatility.
**Key Features of Solidity:**
**Statically Typed:** Solidity is a statically-typed language, meaning variable types are explicitly declared, reducing errors and enhancing security.
**Contract-Oriented:** Solidity is specifically designed for writing contracts, with built-in features to handle blockchain-specific tasks.
**JavaScript-Like Syntax:** Its syntax is similar to JavaScript, making it accessible for developers familiar with web development languages.
**Ethereum Virtual Machine (EVM):** Solidity code is compiled to bytecode that runs on the EVM, enabling the deployment and execution of smart contracts on the Ethereum network.
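As a minimal illustration of these features, here is a made-up example contract (not from any particular project):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// A tiny contract: anyone can increment a shared counter.
contract Counter {
    uint256 public count; // statically typed state variable

    function increment() public {
        count += 1; // executed on the EVM when the transaction is mined
    }
}
```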
**Real-World Applications of Blockchain, Web 3.0, and Solidity**
**Decentralized Finance (DeFi):** DeFi platforms use blockchain and smart contracts to create financial services that are open, transparent, and accessible to anyone with an internet connection. Examples include decentralized exchanges, lending platforms, and yield farming.
**Supply Chain Management:** Blockchain ensures transparency and traceability in supply chains, allowing stakeholders to track products from origin to destination, reducing fraud and improving efficiency.
**Digital Identity:** Web 3.0 aims to give individuals control over their digital identities, enabling secure and private interactions online without relying on centralized authorities.
**Tokenized Assets:** Real estate, art, and other assets can be tokenized on a blockchain, enabling fractional ownership, easier transferability, and new investment opportunities.
**The Future of Blockchain and Web 3.0**
As blockchain technology matures and Web 3.0 gains traction, we can expect to see a more decentralized, secure, and user-centric internet. Innovations in scalability, interoperability, and user experience will drive the adoption of these technologies, unlocking new possibilities for industries and individuals alike.
In conclusion, blockchain, Web 3.0, and Solidity are at the forefront of a digital revolution, offering transformative solutions that challenge the status quo. Embracing these technologies will be key to navigating and thriving in the next era of the internet. | mukunzi_ndahirojames_d6a |
1,920,408 | AI Revolutionizing tech and society through innovation and ethics. | Introduction to Artificial Intelligence Artificial Intelligence (AI) has become an... | 0 | 2024-07-12T02:21:47 | https://dev.to/dainwi/ai-revolutionizing-tech-and-society-through-innovation-and-ethics-4pe4 | ai, techtalks | ## Introduction to Artificial Intelligence
Artificial Intelligence (AI) has become an integral part of our lives, shaping the way we interact with technology and paving the path for a more advanced future. From voice assistants to self-driving cars, AI has made significant advancements in various industries, showcasing its potential to revolutionize the world as we know it.
## Types of Artificial Intelligence
- **Narrow AI**: Designed for specific tasks like facial recognition or language translation.
- **General AI**: Exhibits human-like intelligence and can perform various intellectual tasks.
- **Strong AI**: Hypothetical form surpassing human intelligence, potentially outperforming humans in all cognitive tasks.
## Applications of AI in Various Industries
AI has found widespread applications across different sectors, transforming operations and enhancing efficiency:
- **Healthcare**: AI-powered diagnostics improve patient outcomes.
- **Finance**: AI algorithms analyze market trends and optimize investment strategies.
- **Manufacturing**: Robotics and automation enhance production processes.
## Machine Learning and Deep Learning
- **Machine Learning**: Enables machines to learn from data and make predictions.
- **Deep Learning**: Advanced machine learning using neural networks to process vast amounts of data.
## Natural Language Processing (NLP)
- Facilitates seamless communication between humans and machines through applications like chatbots and language translation.
## Computer Vision
- Allows machines to interpret and analyze visual information, used in facial recognition and autonomous vehicles.
## Robotics and Automation
- Revolutionizing manufacturing by streamlining processes and enhancing precision.
## Ethical Concerns in AI
- Privacy, bias, and accountability considerations are crucial in AI development.
## Future of Artificial Intelligence
Ongoing research and development are pushing the boundaries of AI innovation:
- **AI Ethics**: Ensuring transparency and fairness in AI algorithms.
- **Autonomous Systems**: Shaping a future where AI plays a central role in societal progress.
## Impact of AI on Jobs
- While enhancing efficiency, AI raises concerns about job displacement and the need for upskilling.
## AI in Education
- Personalizes learning experiences and provides interactive tools for students and educators.
## AI in the Entertainment Industry
- Leverages AI for content recommendations and personalized entertainment experiences.
## Challenges in the Adoption of AI
- Addressing challenges related to data privacy, regulatory compliance, and cultural barriers.
## The Role of AI in Climate Change
- Optimizes energy consumption and monitors environmental changes for sustainable practices.
## Conclusion
Artificial Intelligence is reshaping the future landscape of technology and innovation. By harnessing its power responsibly and ethically, we can create a more intelligent and sustainable world.
| dainwi |
1,920,410 | The competition between transparent LED screen and traditional LED display | In the continuous innovation of LED display technology, transparent LED screen, as a new display... | 0 | 2024-07-12T02:22:29 | https://dev.to/sostrondylan/the-competition-between-transparent-led-screen-and-traditional-led-display-3lh6 | led, display, transparent | In the continuous innovation of LED display technology, [transparent LED screen](https://sostron.com/products/crystal-transparent-led-screen/), as a new display technology, is gradually entering people's field of vision. This article will conduct an in-depth comparative analysis of transparent LED screen and traditional LED display, and explore their respective advantages and disadvantages.

The solid position of traditional LED display
Traditional LED display has occupied an irreplaceable position in the field of outdoor ultra-large screen display with its self-luminous, full-color, high refresh rate and other characteristics. They usually have a heavy box, built-in processor and heat dissipation equipment. Although they are bulky, they perform well in terms of protection level and resistance, and can meet the high standard of IP67. [The price of outdoor LED display is determined by ten aspects. ](https://sostron.com/the-price-of-outdoor-led-display-is-determined-by-ten-aspects/)
Innovative breakthrough of transparent LED screen
Transparent LED screen, also known as LED glass screen, has shown unique charm in advertising media, science and technology exhibitions, commercial displays and other fields with its high transparency and light and fashionable appearance. It has a simple structure, adopts all-aluminum alloy construction, does not require additional heat dissipation equipment, is light in weight, and is easy to install and maintain. [7 advantages of using LED transparent screen rental for retail display. ](https://sostron.com/7-advantages-of-using-led-transparent-screen-rental/)

Competition between design and process technology
Traditional LED display
Advantages: strong resistance, high protection level, minimum pitch can reach P0.4, providing high-density and high-resolution display effects. [What is a fine pitch LED display? ](https://sostron.com/what-is-a-fine-pitch-led-display/)
Disadvantages: bulky appearance, difficult to repair, and large heat dissipation requirements.

Transparent LED screen
Advantages: simple structure, light weight, good transparency, stylish frame design, easy to install and maintain.
Disadvantages: The protection level is relatively low, only up to IP46, and the minimum spacing is large, which limits the fineness of the display effect. [Take you to understand the LED IP level in 5 minutes. ](https://sostron.com/5-minutes-to-understand-the-led-ip-level/)
Adaptability of application scenarios
Due to its transparency, transparent LED screens are more suitable for installation in places such as glass curtain walls, adding a modern feel to buildings without blocking the line of sight. Traditional LED displays are more suitable for outdoor application scenarios that require high protection levels and high resolution. [Here are ten questions about transparent LED window displays. ](https://sostron.com/transparent-led-window-display-ten-questions-answered/)
Technological progress and market prospects
With the continuous advancement of technology, both traditional LED displays and transparent LED screens are constantly improving and optimizing in their respective application fields. The market size of transparent LED screens is expected to be close to 10 billion in 2020, showing its huge market potential.

Conclusion
Transparent LED screens and traditional LED displays have their own advantages, and they show their own advantages in different application scenarios. Choosing a suitable LED display requires comprehensive consideration of factors such as the specific use environment, maintenance costs, and aesthetic requirements. In the future, with the further development of technology, we have reason to believe that LED display technology will continue to promote innovation and development in the display industry.

Thank you for watching. I hope we have helped solve your problems. Sostron is a professional [LED display manufacturer](https://sostron.com/about-us/). We provide all kinds of displays, display leasing and display solutions around the world. To learn more, please read: [Small-pitch LED display market: new opportunities and challenges.](https://dev.to/sostrondylan/small-pitch-led-display-market-new-opportunities-and-challenges-fd8)
Follow me to learn more about LED displays.
Contact us on WhatsApp: https://api.whatsapp.com/send?phone=+8613570218702&text=Hello | sostrondylan |
1,920,411 | Calculating Adaptive Threshold in OpenCV | Read my article on Thresholding and Binary Images for a better background Adaptive thresholding is a... | 0 | 2024-07-12T02:59:27 | https://dev.to/catheryn/calculating-adaptive-threshold-in-opencv-1hh3 | ai, computervision | [Read my article on Thresholding and Binary Images for a better background](https://dev.to/catheryn/binary-images-image-thresholding-282)
Adaptive thresholding is a technique used to convert a grayscale image to a binary image (black and white). The threshold value is calculated for smaller regions (blocks) of the image rather than using a single global threshold value for the entire image.
We can perform adaptive thresholding in OpenCV using this method:
```
dst = cv2.adaptiveThreshold(src, maxValue, adaptiveMethod, thresholdType, blockSize, C[, dst])
```
An explanation of the arguments:
- **src:** The image to be worked on.
- **maxValue:** The maximum value to use with the thresholding type.
- **adaptiveMethod:** The adaptive thresholding method to use. The options are [ADAPTIVE_THRESH_MEAN_C and ADAPTIVE_THRESH_GAUSSIAN_C](https://docs.opencv.org/4.x/d7/d1b/group__imgproc__misc.html).
- **thresholdType:** The type of thresholding to apply. In this article we use THRESH_BINARY. Read more about the [different threshold types](https://docs.opencv.org/4.x/d7/d1b/group__imgproc__misc.html#gaa9e58d2860d4afa658ef70a9b1115576).
- **blockSize:** The size of the block to calculate the threshold for.
- **C:** A constant subtracted from the calculated mean. This constant fine-tunes the thresholding.
## The Mean Calculation
First, we read the image:
```
# Read the original image.
img = cv2.imread('test_image.png', cv2.IMREAD_GRAYSCALE)
```
Let us assume the image translates to these numbers:
```
[[218 217 216 221 220 220]
[211 210 210 215 216 216]
[212 211 211 214 216 216]
[139 138 137 103 105 105]
[190 190 190 170 170 170]
[255 255 255 255 255 255]]
```
Next, we specify our adaptive thresholding method:
```
img_thresh_adp = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 3, 7)
```
- **img:** This is the image we have translated into numbers using the cv2.imread method.
- **255:** This is the maximum value we are to use after calculations. This means that pixels that are above our result will be set to 255 (white) while the pixels that are below will be set to 0 (black)
- **cv2.ADAPTIVE_THRESH_MEAN_C:** This is the adaptiveThresholding algorithm. This method calculates the threshold for a pixel based on the mean of a certain number of pixels.
- **cv2.THRESH_BINARY:** THRESH_BINARY means that pixels above the threshold value will be set to the maximum value (255), and pixels below the threshold value will be set to 0.
- **3:** This means a 3 x 3 pixel area around each pixel is considered.
- **7:** This means after calculating the mean, we subtract 7 from it.
## Step-by-Step Calculation
**Block 1 (Top-left corner):**
Consider the 3x3 block starting at the top-left corner:
```
[[218 217 216]
[211 210 210]
[212 211 211]]
Mean: (218 + 217 + 216 + 211 + 210 + 210 + 212 + 211 + 211) / 9 = 1916 / 9 ≈ 212.89
Subtract constant from result: 212.89 - 7 = 205.89
```
Since every single number in block 1 is greater than 205.89, the numbers are all swapped for 255. Therefore the top left corner becomes this:
```
[[255 255 255]
[255 255 255]
[255 255 255]]
```
**Block 2:**
Consider the next 3x3 block:
```
[[217 216 221]
[210 210 215]
[211 211 214]]
Mean: 1925 / 9 ≈ 213.89
Subtract constant: 213.89 - 7 = 206.89
```
Every single number in this block is greater than 206.89, so the numbers are all swapped for 255. Therefore the block becomes:
```
[[255 255 255]
[255 255 255]
[255 255 255]]
```
These calculations are repeated, 3 x 3 block by 3 x 3 block, until every section of the image has been processed. If a pixel's value is less than or equal to the calculated result, it becomes 0; otherwise it becomes 255. Also, note that the numbers are swapped for 255 only because that is what was specified as the maximum.
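To make the arithmetic above concrete, here is a pure-Python sketch of the mean method. This is a hedged illustration rather than OpenCV's actual implementation: OpenCV slides the blockSize x blockSize window over *every* pixel (replicating edge pixels at the border) instead of stepping through disjoint blocks, and the function name `adaptive_mean_threshold` is our own.

```python
# Pure-Python sketch of cv2.ADAPTIVE_THRESH_MEAN_C + cv2.THRESH_BINARY.
# For each pixel: take the mean of its block_size x block_size neighbourhood
# (replicating pixels at the border), subtract C, then output max_value if
# the pixel is above the result, otherwise 0.
def adaptive_mean_threshold(img, max_value, block_size, c):
    h, w = len(img), len(img[0])
    r = block_size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    # clamp coordinates: replicate pixels at the border
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    total += img[ny][nx]
            mean = total / (block_size * block_size)
            out[y][x] = max_value if img[y][x] > mean - c else 0
    return out

img = [
    [218, 217, 216, 221, 220, 220],
    [211, 210, 210, 215, 216, 216],
    [212, 211, 211, 214, 216, 216],
    [139, 138, 137, 103, 105, 105],
    [190, 190, 190, 170, 170, 170],
    [255, 255, 255, 255, 255, 255],
]
result = adaptive_mean_threshold(img, 255, 3, 7)
```

On this sample, the bright top-left region comes out white (255) while the darker pixels around 103-105 come out black (0), matching the block calculations above.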
For example, below we have specified a maximum value of 200:
```
img_thresh_adp = cv2.adaptiveThreshold(img, 200, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 3, 7)
# Calculating the top left corner block
[[218 217 216]
[211 210 210]
[212 211 211]]
# Calculating the mean
Mean: 1916 / 9 ≈ 212.89
Subtract constant: 212.89 - 7 = 205.89
# The result
[[200 200 200]
[200 200 200]
[200 200 200]]
```
Because we are using THRESH_BINARY, the resulting image will only ever contain two values: 0 and the maxValue specified.
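The walkthrough above uses ADAPTIVE_THRESH_MEAN_C; the Gaussian variant mentioned earlier replaces the plain mean with a weighted mean. The sketch below assumes the separable 3 x 3 Gaussian kernel `[0.25, 0.5, 0.25]` (OpenCV uses fixed small kernels for small block sizes when no sigma is given — treat the exact weights as an assumption):

```python
# Hedged sketch of the Gaussian-weighted mean used by
# ADAPTIVE_THRESH_GAUSSIAN_C on a single 3x3 block.
# Assumed separable kernel for blockSize=3 (weights sum to 1 per axis):
w = [0.25, 0.5, 0.25]

def gaussian_weighted_mean(block):
    # 2-D weight = outer product of the 1-D kernel with itself
    return sum(
        w[i] * w[j] * block[i][j]
        for i in range(3)
        for j in range(3)
    )

block = [[218, 217, 216],
         [211, 210, 210],
         [212, 211, 211]]

mean = gaussian_weighted_mean(block)  # 212.1875
threshold = mean - 7                  # 205.1875
center = 255 if block[1][1] > threshold else 0
```

Compared with the plain mean (≈ 212.89), the weighted mean gives more influence to pixels near the centre, which makes the threshold less sensitive to values at the edge of the block.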
I hope this clarifies adaptiveThresholding for someone out there! | catheryn |
1,920,414 | batuna bocghe | We provide a reputable office chair reupholstering service in Hanoi and Saigon. Our team of skilled... | 0 | 2024-07-12T02:31:51 | https://dev.to/tongkhonoithat/batuna-bocghe-10ia |  | We provide a reputable office chair reupholstering service in Hanoi and Saigon. Our team of skilled specialists is always dedicated to their work.
Website: https://batuna.vn/boc-ghe-van-phong
Phone: 0975143838
Address: 19 ngo 46 ngoc hoi hoang mai
https://nguoiquangbinh.net/forum/diendan/member.php?u=141111
https://socialtrain.stage.lithium.com/t5/user/viewprofilepage/user-id/76281
https://webcastlist.com/story18596625/batuna-bocghe
https://penzu.com/p/0674f06ba9ec3bde
https://app.talkshoe.com/user/tongkhonoithat
https://lab.quickbox.io/tongkhonoithatfi
https://www.reddit.com/user/tongkhonoithatxq
https://www.silverstripe.org/ForumMemberProfile/show/160769
https://www.slideserve.com/tongkhonoithat
https://shoplook.io/profile/tongkhonoithat
https://teletype.in/@tongkhonoithat
https://www.facer.io/user/ThyhHpdiG6
https://magic.ly/tongkhonoithat
https://www.scoop.it/u/batunabocghe
https://chart-studio.plotly.com/~tongkhonoithat
https://jsfiddle.net/user/tongkhonoithat
https://bookmarkstumble.com/story19086891/batuna-bocghe
https://rentry.co/36c47y9g
http://gendou.com/user/tongkhonoithat
https://vnvista.com/hi/158046
http://hawkee.com/profile/7278996/
https://www.nintendo-master.com/profil/tongkhonoithat
https://dsred.com/home.php?mod=space&uid=3998999
https://flipboard.com/@batunabocghe
https://gitlab.pavlovia.org/tongkhonoithat
https://www.circleme.com/tongkhonoithat
https://www.ethiovisit.com/myplace/tongkhonoithat
https://bookmarkport.com/story19528526/batuna-bocghe
https://conifer.rhizome.org/tongkhonoithat
https://p.lu/a/tongkhonoithat/video-channels
https://www.reverbnation.com/tongkhonoithat
https://www.creativelive.com/student/batuna-bocghe?via=accounts-freeform_2
https://www.funddreamer.com/users/batuna-bocghe
https://www.metal-archives.com/users/tongkhonoithat
https://slides.com/tongkhonoithat
https://www.diggerslist.com/tongkhonoithat/about
https://www.gisbbs.cn/user_uid_3312975.html
https://turkish.ava360.com/user/tongkhonoithat/#
https://willysforsale.com/profile/tongkhonoithat
https://www.hahalolo.com/@66908cd405740e60d095551b
https://gatherbookmarks.com/story18159162/batuna-bocghe
https://www.deepzone.net/home.php?mod=space&uid=3846531
http://www.invelos.com/UserProfile.aspx?alias=tongkhonoithat
https://click4r.com/posts/u/7037450/Author-batuna
https://www.angrybirdsnest.com/members/tongkhonoithat/profile/
https://www.designspiration.com/kinhdoanhbatuna/
https://www.passes.com/tongkhonoithat
https://www.bondhuplus.com/tongkhonoithat
https://hindibookmark.com/story19110957/batuna-bocghe
https://personaljournal.ca/tongkhonoithat/chung-toi-cung-cap-dich-vu-boc-lai-ghe-van-phong-uy-tin-tai-ha-noi-sai-gon-doi
https://www.mountainproject.com/user/201868621/batuna-bocghe
https://disqus.com/by/tongkhonoithat/about/
https://community.fyers.in/member/ojzuozedyz
https://my.desktopnexus.com/tongkhonoithat/
https://www.elephantjournal.com/profile/kinhdoa-nhba-t-una/
https://challonge.com/tongkhonoithat
https://photoclub.canadiangeographic.ca/profile/21306984
https://bookmarkswing.com/story18904346/batuna-bocghe
https://www.noteflight.com/profile/42f14834bb6c1d143dfa557b2b37a7d0aad1c099
https://velopiter.spb.ru/profile/120727-tongkhonoithat/?tab=field_core_pfield_1
https://os.mbed.com/users/tongkhonoithat/
https://peatix.com/user/23038742/view
https://batocomic.org/u/2097071-tongkhonoithat
www.artistecard.com/tongkhonoithat#!/contact
https://www.anobii.com/fr/01e4db72779bc6703d/profile/activity
https://www.instapaper.com/p/tongkhonoithat
http://molbiol.ru/forums/index.php?showuser=1363556
https://ko-fi.com/batunabocghe
https://www.bakespace.com/members/profile/tongkhonoithat/1651648/
https://able2know.org/user/tongkhonoithat/
https://www.cakeresume.com/me/tongkhonoithat
https://www.pozible.com/profile/batuna-bocghe
https://wibki.com/tongkhonoithat?tab=batuna%20bocghe
https://glose.com/u/tongkhonoithatyt
https://www.metooo.io/u/66908e4126ad05118bdaaa2f
https://www.webwiki.com/info/add-website.html
https://bookmarkshq.com/story18966580/batuna-bocghe
https://www.bark.com/en/gb/company/tongkhonoithat/aRGyG/
https://www.codingame.com/profile/7d136026b9c600a892a7e378c3334f755213816
https://www.dermandar.com/user/tongkhonoithat/
https://www.naucmese.cz/batuna-bocghe?_fid=s99o
https://list.ly/kinhdoa-nhba-t-una/lists
https://link.space/@tongkhonoithat
https://facekindle.com/tongkhonoithat
https://hypothes.is/users/tongkhonoithat
https://forum.liquidbounce.net/user/tongkhonoithat/
https://community.amd.com/t5/user/viewprofilepage/user-id/426591
https://my.omsystem.com/members/tongkhonoithat
https://portfolium.com/tongkhonoithat
https://answerpail.com/index.php/user/tongkhonoithat
https://bandcamp.com/tongkhonoithat
https://www.palscity.com/tongkhonoithat
https://www.chordie.com/forum/profile.php?id=1998252
https://www.catchafire.org/profiles/2917887/
https://www.magcloud.com/user/tongkhonoithat
https://mssg.me/5xblc
https://www.kniterate.com/community/users/tongkhonoithat/
https://linkmix.co/24514389
https://doodleordie.com/profile/tongkhonoithat
https://pinshape.com/users/4849306-kinhdoanhbatuna#designs-tab-open
https://www.pubpub.org/user/batuna-bocghe
https://micro.blog/tongkhonoithat
https://www.wpgmaps.com/forums/users/tongkhonoithat/
https://allmylinks.com/tongkhonoithat
https://bookmarkextent.com/story19066690/batuna-bocghe
https://naijamp3s.com/index.php?a=profile&u=tongkhonoithat
https://tvchrist.ning.com/profile/batunabocghe
https://boersen.oeh-salzburg.at/author/tongkhonoithat/
https://wirtube.de/a/tongkhonoithat/video-channels
https://research.openhumans.org/member/tongkhonoithat
https://pixbender.com/batunabocghe6
https://skitterphoto.com/photographers/102756/batuna-bocghe
https://pxhere.com/en/photographer-me/4307512
https://kumu.io/tongkhonoithat/sandbox#untitled-map
https://telegra.ph/tongkhonoithat-07-12
https://answerpail.com/index.php/user/tongkhonoithat
https://bookmarkstime.com/story17856409/batuna-bocghe
https://blender.community/batunabocghe/
https://www.weddingbee.com/members/tongkhonoithat/
https://findaspring.org/members/tongkhonoithat/
https://inkbunny.net/tongkhonoithat
https://www.openstreetmap.org/user/tongkhonoithat
https://spinninrecords.com/profile/tongkhonoithat
https://manylink.co/@tongkhonoithat
https://flightsim.to/profile/tongkhonoithat
https://community.snapwire.co/user/tongkhonoithat
https://bandori.party/user/205872/tongkhonoithat/
https://potofu.me/tongkhonoithat
https://muckrack.com/batuna-bocghe
https://eatsleepride.com/rider/tongkhonoithat/profile
https://www.are.na/batuna-bocghe/channels
http://www.so0912.com/home.php?mod=space&uid=2275611
https://www.penname.me/@tongkhonoithat
https://www.foroatletismo.com/foro/members/tongkhonoithat.html
https://fontstruct.com/fontstructors/2464778/tongkhonoithat
https://roomstyler.com/users/tongkhonoithat
https://www.fitday.com/fitness/forums/members/tongkhonoithat.html
https://www.fimfiction.net/user/769528/tongkhonoithat
https://worldcosplay.net/member/1792507
https://crowdin.com/project/tongkhonoithat
https://devpost.com/kinhdoa-nhba-t-una
https://tongkhonoithat.notepin.co/
https://rotorbuilds.com/profile/48836/
https://electronoobs.io/profile/39639#
https://hackerone.com/tongkhonoithat?type=user
https://controlc.com/5ed134be
https://blogfonts.com/user/832815.html
https://www.allsquaregolf.com/golf-users/batuna-bocghe
http://www.askmap.net/location/6963768/vietnam/batuna-bocghe
https://www.edna.cz/uzivatele/tongkhonoithat/
https://www.shippingexplorer.net/en/user/tongkhonoithat/108821
https://bookmarketmaven.com/story17969059/batuna-bocghe
https://zenwriting.net/tongkhonoithat
https://camp-fire.jp/profile/tongkhonoithat
https://www.castingcall.club/tongkhonoithat
https://zzb.bz/bIddy
http://buildolution.com/UserProfile/tabid/131/userId/411279/Default.aspx
https://files.fm/tongkhonoithat/info
http://idea.informer.com/users/tongkhonoithat/?what=personal
https://www.notebook.ai/@tongkhonoithat
https://www.divephotoguide.com/user/tongkhonoithat/
https://www.ameba.jp/profile/general/tongkhonoithat/?account_block_token=lAGj0jS6mJid7F3Mwe6ux1yev6ehV2tL
https://www.babelcube.com/user/batuna-bocghe
https://bouchesocial.com/story19359279/batuna-bocghe
https://www.mobafire.com/profile/tongkhonoithat-1159790
https://www.pearltrees.com/tongkhonoithatpp
https://stocktwits.com/tongkhonoithat
https://community.tableau.com/s/profile/0058b00000IZm9n
https://hub.docker.com/u/tongkhonoithat
http://www.fanart-central.net/user/tongkhonoithat/profile
https://buyandsellhair.com/author/tongkhonoithat/
https://dribbble.com/tongkhonoithat/about
https://phijkchu.com/a/tongkhonoithat/video-channels
https://golosknig.com/profile/tongkhonoithat/
https://newspicks.com/user/10470879
https://postheaven.net/r0d1f6645u
https://collegeprojectboard.com/author/tongkhonoithat/
https://www.exchangle.com/tongkhonoithat
https://dutrai.com/members/tongkhonoithat.28565/#about
http://www.freeok.cn/home.php?mod=space&uid=5837896
https://www.checkli.com/tongkhonoithat
https://www.dnnsoftware.com/activity-feed/my-profile/userid/3204470
https://hashnode.com/@tongkhonoithat
https://bookmarkrange.com/story18816900/batuna-bocghe
https://wmart.kz/forum/user/169491/
https://app.roll20.net/users/13562538/batuna-b
https://www.equinenow.com/farm/tongkhonoithat.htm
https://expathealthseoul.com/profile/batuna-bocghe/
https://qiita.com/tongkhonoithat
https://bookmarkspring.com/story12313852/batuna-bocghe
https://opentutorials.org/profile/171088
https://visual.ly/users/kinhdoanhbatuna
https://687621.8b.io/
https://www.ohay.tv/profile/tongkhonoithat
https://www.plurk.com/p/3g1h6jeggz
https://tupalo.com/en/users/7011093
https://edenprairie.bubblelife.com/users/tongkhonoithat
https://www.proarti.fr/account/tongkhonoithat
https://play.eslgaming.com/player/20228429/
https://www.speedrun.com/users/tongkhonoithat
https://fileforum.com/profile/tongkhonoithat
https://maps.roadtrippers.com/people/tongkhonoithat
https://nhattao.com/members/tongkhonoithat.6558541/
https://sixn.net/home.php?mod=space&uid=3486845
https://unsplash.com/@tongkhonoithat
https://suzuri.jp/tongkhonoithat
https://crypt.lol/tongkhonoithat
https://qooh.me/tongkhonoitha
https://club.doctissimo.fr/tongkhonoithat/
https://help.orrs.de/user/tongkhonoithat
https://developer.tobii.com/community-forums/members/tongkhonoithat/
https://writeablog.net/tongkhonoithat
https://www.twitch.tv/tongkhonoithat/about
https://participez.nouvelle-aquitaine.fr/profiles/tongkhonoithat/activity?locale=en
https://www.artscow.com/user/3201100
https://www.giveawayoftheday.com/forums/profile/201270
http://bbs.01bim.com/home.php?mod=space&uid=956296
https://connect.garmin.com/modern/profile/ed3a3acc-be0f-4252-a6ea-4e0e52cc5581
https://www.goodreads.com/user/show/179899334-batuna-bocghe
https://dirstop.com/story19699006/batuna-bocghe
| tongkhonoithat | |
1,920,416 | www | ccccc | 0 | 2024-07-12T02:41:23 | https://dev.to/mary_jones_00b9f6cc17fde8/www-3loe | ccccc | mary_jones_00b9f6cc17fde8 | |
1,920,417 | Docker - Introduction, Architecture, and Most used Commands : Day 5 of 50 days DevOps Tools Series | Introduction Docker has revolutionised the way we build, ship, and run applications. It... | 0 | 2024-07-12T02:49:12 | https://dev.to/shivam_agnihotri/docker-introduction-architecture-and-most-used-commands-day-4-of-50-days-devops-tools-series-31pa | docker, containers, beginners, devops | ## **Introduction**
Docker has revolutionised the way we build, ship, and run applications. It enables developers to package applications along with their dependencies into lightweight containers that can run consistently across different environments. In this post, we will dive into Docker’s architecture and cover its basic commands in detail.
**Why Docker is Important for DevOps?**
**Consistency:** Docker ensures that applications run the same way in development, testing, and production environments.
**Isolation:** Containers provide isolated environments for applications, preventing conflicts and improving security.
**Scalability:** Docker makes it easy to scale applications horizontally by adding more containers.
**Efficiency:** Containers are lightweight and use system resources more efficiently compared to traditional virtual machines.
## Docker Architecture:

Docker architecture consists of the following key components:
1. Docker Client
2. Docker Daemon
3. Docker Images
4. Docker Containers
5. Docker Registry
**1. Docker Client**
The Docker client is a command-line interface (CLI) that allows users to interact with Docker. Users can issue commands such as docker build, docker run, and docker stop through the Docker client.
**2. Docker Daemon**
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. It communicates with the Docker client to execute commands.
**3. Docker Images**
Docker images are read-only templates that contain the instructions to create a Docker container. Images are built using Dockerfiles, which specify the steps needed to create the image.
**4. Docker Containers**
Containers are runnable instances of Docker images. They are isolated environments where applications run with their dependencies. Containers can be started, stopped, and deleted as needed.
**5. Docker Registry**
A Docker registry is a storage and distribution system for Docker images. Docker Hub is a public registry, but private registries can also be set up for internal use.
## Basic Docker Commands
Now, let's explore the basic Docker commands and their usage.
docker --version: Displays the installed Docker version.
```
docker --version
```
docker info: Provides detailed information about the Docker installation, including the number of containers and images.
```
docker info
```
docker pull: Pulls an image from a Docker registry.
```
docker pull <image-name>
```
docker images: Lists all the Docker images available on the system.
```
docker images
```
docker run: Creates and starts a new container from an image.
```
docker run -it ubuntu
```
- `-it`: Runs the container in interactive mode with a terminal.
- `ubuntu`: Specifies the image to use.
docker ps: Lists all running containers.
```
docker ps
```
To list all containers (running and stopped), use:
```
docker ps -a
```
docker stop: Stops a running container.
```
docker stop <container_id>
```
docker start: Starts a stopped container.
```
docker start <container_id>
```
docker rm: Removes a stopped container.
```
docker rm <container_id>
```
docker rmi: Removes a Docker image.
```
docker rmi <image_id>
```
docker build: Builds a Docker image from a Dockerfile.
```
docker build -t my-image:latest .
```
- `-t`: Tags the image with a name and optional tag (e.g., my-image:latest).
- `.`: Specifies the build context directory containing the Dockerfile.
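For reference, here is a minimal example of a Dockerfile that the build command above could consume. The base image, file names, and start command are illustrative assumptions, not part of the original post:

```
# Illustrative Dockerfile for a small Python app (names are assumptions)
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Running `docker build -t my-image:latest .` in the directory containing this file would produce an image tagged `my-image:latest`.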
docker exec: Runs a command in a running container.
```
docker exec -it <container_id> /bin/bash
```
- `-it`: Runs the command in interactive mode with a terminal.
- `/bin/bash`: Specifies the command to run (in this case, starting a bash shell).
docker logs: Fetches the logs of a container.
```
docker logs <container_id>
```
docker inspect: Displays detailed information about a container or image.
```
docker inspect <container_id>
```
## Conclusion
Docker is a powerful tool that simplifies the process of deploying and managing applications in isolated containers. Understanding Docker's architecture and mastering its basic commands are crucial steps for any DevOps engineer. In the next post, we will delve into advanced Docker concepts such as Docker Compose and Docker Swarm.
🔄 _Subscribe to our blog to get notifications on upcoming posts._
👉 **Be sure to follow me on LinkedIn for the latest updates**: [Shiivam Agnihotri](https://www.linkedin.com/in/shivam-agnihotri/)
| shivam_agnihotri |
1,920,418 | MusicFab: Best Music Converter | If you are streaming music lovers, then you won't want to miss MusicFab. It is a powerful and... | 0 | 2024-07-12T02:51:42 | https://dev.to/jing_nicole_c8ff65e304a9b/musicfab-best-music-converter-2bc5 | converter, music, mp3 | If you are streaming music lovers, then you won't want to miss [MusicFab](https://musicfab.org/). It is a powerful and professional software that can download any song from streaming music services, like Spotify or Apple Music, even without a premium account. | jing_nicole_c8ff65e304a9b |
1,920,419 | Website Traffic Strategies: Comparing SEO and Content Promotion | Hey everyone! So, you're trying to figure out whether SEO or content promotion is better for boosting... | 0 | 2024-07-12T02:53:11 | https://dev.to/juddiy/website-traffic-strategies-comparing-seo-and-content-promotion-4m5b | learning, website, seo | Hey everyone! So, you're trying to figure out whether SEO or content promotion is better for boosting your website traffic, right? Well, you're in the right place. Both methods are super effective, but they each have their own unique strengths. Let's dive in and see which one might be the best fit for you.
#### What is Search Engine Optimization (SEO)?
SEO is all about making your website's content and structure better so it ranks higher on search engine results pages (SERPs), which helps bring in more organic traffic. It includes things like keyword research, on-page optimization, technical SEO, and link building. Using [SEO AI](https://seoai.run/) can help fine-tune your website's content and structure, making sure it follows the latest search engine best practices. The main goal is to make your website easier for search engines to understand and trust, so it shows up higher in relevant searches.
##### Advantages of SEO:
1. **Long-term Results**: Once your website ranks high on search engines, you will continuously receive organic traffic without having to pay for each click.
2. **High-Quality Traffic**: Traffic obtained through SEO is usually more targeted since users find your website by searching for related keywords.
3. **Brand Trust**: Websites that rank high are generally seen as more credible and authoritative by users, thereby enhancing your brand image.
#### What is Content Promotion?
Content promotion involves using various channels (such as social media, email marketing, partner websites, etc.) to promote your content and attract traffic and engagement. The goal of content promotion is to showcase your content to as many relevant audiences as possible, thereby increasing website visits and user engagement.
##### Advantages of Content Promotion:
1. **Quick Results**: Through paid ads or social media promotion, you can quickly attract a large amount of traffic.
2. **Diverse Channels**: Content promotion can be carried out through various channels, giving your content broader exposure and reach.
3. **High Interactivity**: Content promotion usually comes with user interaction and feedback, helping to build relationships and communities with users.
#### Combining SEO and Content Promotion Strategies
In fact, SEO and content promotion are not mutually exclusive but can complement each other. By combining both, you can maximize your website's traffic growth:
1. **High-Quality Content Creation**: First, create high-quality content that is also optimized for SEO. This content will not only help with search engine rankings but also engage users during promotion.
2. **Social Media Promotion**: Share your SEO-optimized content on social media platforms to attract users to click and visit your website.
3. **Link Building**: Use content promotion to gain citations and links from other websites, boosting your site's authority and SEO ranking.
4. **Data Analysis and Optimization**: Use analytics tools to track the effectiveness of both SEO and content promotion, making continuous adjustments and improvements based on data.
---
Overall, both SEO and content promotion have their own advantages. The best strategy is to combine them to achieve maximum traffic growth and brand exposure. I hope these tips help you out! If you have any questions or better insights, feel free to share. Let's work together! | juddiy |
1,920,420 | 🌟 Introducing HealthHub: Your Ultimate Wellness Companion 🏥💪 | Hey Dev.to community! I'm excited to share my latest project, HealthHub, a comprehensive wellness... | 0 | 2024-07-12T02:54:55 | https://dev.to/sneha422/healthhub-your-complete-wellness-companion-37c1 | webdev, programming, python, machinelearning |
Hey Dev.to community! I'm excited to share my latest project, **HealthHub**, a comprehensive wellness platform designed to empower you with AI-driven health tools. Built with the power of **MindsDB**, HealthHub offers an array of features to keep you at the peak of your health game.
### [👉 Check out the Project on Quira! and Vote](https://quira.sh/repo/sneha-4-22-Health_assistant-824874405?utm_source=copy&utm_share_context=quests_repos)
## 🎉 Features You'll Love
### 🩺 Diagnosis Predictor
Let AI handle your health queries! Simply input your age, gender, and symptoms, and our **Diagnosis Predictor** powered by MindsDB will provide you with accurate health predictions. It's like having a personal doctor available 24/7!
1. Navigate to the Diagnosis Predictor section.
2. Fill in your age, gender, and symptoms.
3. Click "Predict Diagnosis" and get your results instantly!
[Watch the demo](https://github.com/user-attachments/assets/6d500300-9bfe-46b3-9616-e7fdae5a38e0)
### 🤖 Health Chatbot
Meet your new health advisor! Our **Health Chatbot** uses advanced AI to provide instant, reliable health information. Got a question? Just ask!
1. Navigate to the Health Chatbot section.
2. Type your health-related question.
3. Click "Send" and get instant answers!
[Watch the demo](https://github.com/user-attachments/assets/eabee797-abdd-4b38-bcca-e91a7239356a)
### 🗓️ Weekly Health Planner
Planning your health activities has never been easier! Based on your interactions with our chatbot, the **Weekly Health Planner** will create a personalized health plan just for you.
1. Go to the Weekly Health Planner section.
2. Select your start and end dates.
3. Click "Generate Plan" and follow your custom health journey!
[Watch the demo](https://github.com/user-attachments/assets/bf48438f-a94c-4596-b79c-06324147883e)
### ✅ Health Checklist
Stay on top of your health goals with our **Health Checklist**. Customize your checklist, track your progress, and achieve your daily health targets effortlessly!
1. Navigate to the Health Checklist section.
2. Add new items using the input box and "Add" button.
3. Check off completed items to keep track of your progress.
[Watch the demo](https://github.com/user-attachments/assets/6adeace5-b0d2-4e78-915f-add9954b8651)
## 📺 YouTube Demonstration
Want to see HealthHub in action? Watch our demo video on YouTube!
{% embed https://www.youtube.com/watch?v=oDEMWkdwTWs %}
## 💬 Show Your Support
If you find this project exciting and useful, please support it by upvoting on **Quira** and starring the GitHub repository. Your support means the world to me!
### [👉 Upvote on Quira!](https://quira.sh/repo/sneha-4-22-Health_assistant-824874405?utm_source=copy&utm_share_context=quests_repos)
Thank you for your support! Let's make health management easier and smarter together!
| sneha422 |
1,920,421 | My Pen on CodePen | Check out this Pen I made! | 0 | 2024-07-12T02:55:04 | https://dev.to/luang_lertariyanunt_11487/my-pen-on-codepen-250c | codepen | Check out this Pen I made!
{% codepen https://codepen.io/passkoriys/pen/OJeVymx %} | luang_lertariyanunt_11487 |
1,920,423 | Understanding Vite Flow and Structure in a React Project | When working with React, Vite offers a streamlined development experience with a few key differences... | 0 | 2024-07-12T03:01:52 | https://dev.to/vyan/understanding-vite-flow-and-structure-in-a-react-project-2e84 | webdev, javascript, beginners, react | When working with React, Vite offers a streamlined development experience with a few key differences from the traditional Create React App setup. This blog post will explore the structure of a typical Vite project, focusing on key files such as `index.html`, `main.jsx`, and `App.jsx`.
## 1. index.html
In a Vite-powered React application, `index.html` serves as a critical starting point. Unlike Create React App, where scripts are injected automatically, Vite requires you to specify the script files directly. This explicit inclusion simplifies understanding the entry points and dependencies of your application.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Vite + React</title>
</head>
<body>
<div id="root"></div>
<!-- The root div where your React app will be mounted -->
<script type="module" src="/src/main.jsx"></script>
<!-- The script tag importing your main JavaScript module -->
</body>
</html>
```
In this example, you can see the script tag directly loading `main.jsx`. This direct inclusion is a major difference from Create React App, enhancing clarity and control over the project's entry points.
### 1.1 Dependencies
To ensure your script files load correctly, Vite leverages modern ES module imports. Ensure your `package.json` includes necessary dependencies:
```json
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0"
}
```
Explicitly including the script in the HTML file ensures the correct loading and execution order of your application, mitigating potential issues with script loading.
## 2. main.jsx
The `main.jsx` file serves as the entry point for your React application. This file is responsible for rendering the root component into the DOM. It's typically the file specified in the `src` attribute of the script tag in your `index.html`.
```javascript
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App.jsx';
import './index.css';
// Render the root component into the root element in the HTML
ReactDOM.createRoot(document.getElementById('root')).render(
<React.StrictMode>
<App />
</React.StrictMode>
);
```
In this file, `ReactDOM.createRoot` is used to render the `App` component into the HTML element with the id `root`. This direct rendering approach makes it clear where the application starts and which components are involved.
## 3. App.jsx
The `App.jsx` file contains the definition of your main `App` component. This component serves as the root of your React component tree.
```javascript
import React from 'react';
const App = () => {
return (
<div className="App">
<h1>Hello, Vite and React!</h1>
</div>
);
};
export default App;
```
In this file, you define the main structure and behavior of your application. The `App` component is where you'll build out the primary UI and functionality, just like you would in any other React project.
## Additional Materials and Best Practices
### 4. Using Tailwind CSS with Vite
Tailwind CSS can be easily integrated into a Vite project for utility-first styling.
1. **Install Tailwind CSS:**
```bash
npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init -p
```
2. **Configure Tailwind:**
Update `tailwind.config.js` with your project's specific paths:
```javascript
module.exports = {
content: ['./index.html', './src/**/*.{js,jsx,ts,tsx}'],
theme: {
extend: {},
},
plugins: [],
};
```
3. **Include Tailwind in your CSS:**
Update `index.css` to include Tailwind's base, components, and utilities:
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```
### 5. Hot Module Replacement (HMR)
Vite offers HMR out of the box, allowing you to see changes in real-time without refreshing the page.
### 6. Environment Variables
Vite uses `.env` files to manage environment variables. Create a `.env` file at the root of your project and define your variables:
```env
VITE_API_URL=https://api.example.com
```
Access these variables in your application using `import.meta.env`:
```javascript
const apiUrl = import.meta.env.VITE_API_URL;
```
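Worth noting: Vite only exposes variables that start with the `VITE_` prefix to client code; anything else in `.env` stays out of the bundle. A rough plain-JavaScript sketch of that filtering rule (not Vite's actual implementation):

```javascript
// Sketch of Vite's env-prefix rule: only keys with an allowed prefix
// are exposed on import.meta.env (the default prefix is "VITE_").
function exposeEnv(rawEnv, prefix = "VITE_") {
  const exposed = {};
  for (const [key, value] of Object.entries(rawEnv)) {
    if (key.startsWith(prefix)) exposed[key] = value;
  }
  return exposed;
}

const raw = {
  VITE_API_URL: "https://api.example.com",
  DB_PASSWORD: "secret", // never shipped to the browser
};
console.log(exposeEnv(raw)); // { VITE_API_URL: 'https://api.example.com' }
```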
### 7. Optimized Build Process
Vite's build command (`vite build`) uses Rollup under the hood to produce highly optimized static assets for production. This ensures your application is fast and efficient.
## Conclusion
Working with Vite in a React project offers a streamlined and efficient development experience. Understanding the flow and structure of key files like `index.html`, `main.jsx`, and `App.jsx` can significantly enhance your development process. With the added benefits of Tailwind CSS integration, HMR, and optimized builds, Vite stands out as a modern, powerful tool for React developers.
By leveraging these features and best practices, you can create high-performance, scalable, and maintainable applications with ease. | vyan |
1,920,424 | Reddit marketing software, Reddit screen-domination tool, Reddit DM software | Reddit marketing software, Reddit screen-domination tool, Reddit DM software. To learn about the related software, visit... | 0 | 2024-07-12T03:06:48 | https://dev.to/vst_tw_d54d6ef07fa3d41155/redditying-xiao-ruan-jian-redditba-ping-gong-ju-redditsi-xin-ruan-jian-am0 |
Reddit marketing software, Reddit screen-domination tool, Reddit DM software
To learn about the related software, visit http://www.vst.tw
Reddit marketing software: unlocking novel marketing strategies on social media
In the digital age, social media has become one of the most important channels for business promotion and marketing. Among the many social media platforms, Reddit, a distinctive community of millions of users, offers businesses unique marketing opportunities. However, Reddit's community atmosphere and particular culture also make marketing there somewhat complicated. To address this challenge, more and more businesses are turning to Reddit marketing software, which not only simplifies the marketing workflow but also helps businesses blend into the Reddit community and achieve their marketing goals more effectively.
Features of Reddit marketing software
Reddit marketing software typically offers the following features:
Automated posting and scheduling: the software can publish content automatically and schedule it around the community's activity peaks and optimal timing, so the content attracts as much user attention as possible.
User research and analytics: it helps users understand Reddit's user base in depth, including their interests, preferences, and behavior, in order to better target audiences and shape marketing strategies.
Monitoring and response: it can track brand-related discussions and topics on Reddit and respond promptly to user questions and comments, building a good brand image and user relationships.
Content creation and optimization: it usually also provides content creation and optimization tools to help users produce high-quality, engaging content tuned to Reddit's characteristics, improving exposure and reach.
Data analysis and reporting: it helps users analyze marketing performance, monitor key metrics, and generate detailed reports to continuously optimize strategy and improve ROI.
Advantages of Reddit marketing software
Using Reddit marketing software has several advantages:
Improved efficiency: automation saves users significant time and effort, letting them focus on strategy and creativity.
Greater precision: research and analytics features help users understand Reddit's user base more accurately, target audiences more precisely, and craft more focused strategies.
Better interaction: monitoring and response features let users reply to questions and comments in time, building interaction and strengthening the connection between brand and users.
Optimized content: creation and optimization tools help produce more engaging, resonant content adapted to Reddit's characteristics, improving exposure and reach.
Real-time data support: analytics and reporting features let users monitor results in real time, adjust strategies promptly, and generate detailed reports to support decisions.
Use cases for Reddit marketing software
Reddit marketing software suits businesses of all sizes and industries, especially those hoping to achieve brand exposure, user growth, and sales conversion through the Reddit community. Typical use cases include:
New product promotion: publish information about new products in the Reddit community to attract user attention and drive sales.
Brand building: share interesting content and stories on Reddit to raise brand awareness and reputation and build an emotional connection with users.
User surveys and feedback: run surveys and polls to understand user needs and feedback, informing product improvement and optimization.
Community participation and sponsorship: monitor brand-related discussions and topics on Reddit, join in promptly, and offer support to win users' goodwill.
Crisis management: monitor negative information and sentiment about the brand on Reddit and respond and handle it quickly, minimizing the damage a crisis can cause.
Conclusion
Reddit marketing software makes marketing in the Reddit community easier, helping businesses fit into the community and achieve their marketing goals more effectively. Note, however, that Reddit has a distinctive culture and atmosphere; when using such software, businesses must respect the community's rules and user habits to avoid negative repercussions.
To learn about the related software, visit http://www.vst.tw
| vst_tw_d54d6ef07fa3d41155 | |
1,920,425 | React: useEffect | Today's topic is to learn useEffect in React. In my point of view, the useEffect helps you call a... | 0 | 2024-07-12T04:12:31 | https://dev.to/ken2511/react-useeffect-3jk4 | Today's topic is to learn useEffect in React.
From my point of view, `useEffect` helps you call a function at the proper time, so you don't need to write an infinite loop to monitor changes yourself.
Since `useEffect` also calls the function when the component is first rendered, and can call a cleanup function when the component is removed, we can do a lot with it. It is very useful either way.
Before starting, make sure you are familiar with the `useState` hook.
## Import Package
```javascript
import { useEffect } from 'react';
```
## Syntax
```javascript
useEffect(function, [dependency array]);
```
* The first param is a function that is called automatically; it may return a cleanup ("uninstall") function to free resources if necessary.
* The second param is an array containing the variables you want to monitor. Whenever any of them changes, the function (first param) is called again. An empty array means it does not monitor any variable.
A brief example:
```javascript
useEffect(() => {
do_something;
return uninstall_function;
}, [variable_to_be_monitored]);
```
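For intuition, React decides whether to re-run the effect by comparing each entry of the dependency array with the previous render's entry using `Object.is`. A simplified plain-JavaScript model of that check (not React's actual source):

```javascript
// Simplified model of React's dependency check: the effect re-runs
// only if some entry differs (by Object.is) from the previous render.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) return true; // first render: always run
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

console.log(depsChanged(null, [0])); // true  (first render)
console.log(depsChanged([0], [0]));  // false (count unchanged)
console.log(depsChanged([0], [1]));  // true  (count changed)
console.log(depsChanged([], []));    // false (empty array: run once)
```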
## Example
First we make a simple click button screen without the useEffect:
```javascript
import React, { useState, useEffect } from 'react';
import './App.css';
function MyButton({ updateFunc }) {
return (
<div>
<button onClick={updateFunc} className='button'> Click Me </button>
</div>
);
}
function App() {
const [count, setCount] = useState(0);
return (
<div className="App">
<p>This is a counter</p>
<p>Currently the count is {count}</p>
<MyButton updateFunc={() => { setCount(count + 1); }}/>
<MyButton updateFunc={() => { setCount(count - 1); }}/>
</div>
);
}
export default App;
```
And here is what the screen looks like:

Then we add the useEffect to log to the console each time we click the button:
```javascript
import React, { useState, useEffect } from 'react';
import './App.css';
function MyButton({ updateFunc }) {
return (
<div>
<button onClick={updateFunc} className='button'> Click Me </button>
</div>
);
}
function App() {
const [count, setCount] = useState(0);
useEffect(() => {
console.log(`button clicked. count: ${count}`);
}, [count]);
return (
<div className="App">
<p>This is a counter</p>
<p>Currently the count is {count}</p>
<MyButton updateFunc={() => { setCount(count + 1); }}/>
<MyButton updateFunc={() => { setCount(count - 1); }}/>
</div>
);
}
export default App;
```
And you can see the console of the webpage. It is like this:

## Another Example
Here is an example that updates the "online time" every second using a timer (if you don't know about timers, look up `setInterval`). This time, useEffect helps clear the timer when the component is removed.
```javascript
import React, { useState, useEffect } from 'react';
import './App.css';
function App() {
const [sec, setSec] = useState(0);
useEffect(() => {
// define the timer
const timer = setInterval(() => {
// insert a function into the `setSec` to avoid name space issues
// setSec(sec + 1);
setSec(prevSec => prevSec + 1);
}, 1000);
// return the cleaning function
return () => {
clearInterval(timer);
}
// the empty array below makes it only call the
// function once when the component is loaded
}, []);
return (
<div className="App">
<p>You stayed alive for {sec} seconds.</p>
</div>
);
}
export default App;
```
And the result is like the image below, and it is continuously updating with a minimum CPU workload.

| ken2511 | |
1,920,426 | FIXME Please: An Exercise in TODO Linters | A few weeks ago, I was talking with a developer in our Community Slack who was interested in adding... | 0 | 2024-07-12T03:13:16 | https://trunk.io/blog/fixme-please-an-exercise-in-todo-linters | devops, linters, tooling, tutorial | A few weeks ago, I was talking with a developer in our [Community Slack](https://slack.trunk.io/) who was interested in adding their own TODO linter. At face value, this is a trivial problem. There are several linters that already support this to varying degrees, and many of them offer decently extensible configuration and their own plugin ecosystems. But the more I thought about it, the more the question piqued my interest. Trunk supports [100+ linters](https://docs.trunk.io/check/configuration/supported) out of the box (OOTB), but which one would solve this problem best? So I set out to evaluate them all. Here are my findings...
To simplify this experiment, we should clarify what makes for a good TODO linter. Depending on your team’s culture, you may want to prevent _any_ TODOs from making it to main, or you may just want to keep tabs on them. But at a minimum, a TODO linter should satisfy the following:
1. Easily and quickly report what files have “TODO” strings and where
2. Support multiple languages/file types
3. Don’t generate additional noise (“mas<u>todo</u>n” isn’t a todo)
As a bonus, some TODO linters might:
1. Require specific syntax for TODO comments (e.g. [clang-tidy](https://clang.llvm.org/extra/clang-tidy/checks/google/readability-todo.html))
2. Support other keywords and cases (e.g. FIXME)
3. Be able to ignore false positives as appropriate (automatically handled with [trunk-ignore](https://docs.trunk.io/check/configuration/ignoring-issues))
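Most of these tools reduce criterion 3 to matching `TODO` as a whole word. A small sketch of the kind of pattern involved (word boundaries, case-insensitive):

```javascript
// Match TODO/FIXME only as whole words, so "mastodon" is not flagged.
const TODO_RE = /\b(?:TODO|FIXME)\b/i;

console.log(TODO_RE.test("// TODO(Tyler): Optimize this")); // true
console.log(TODO_RE.test("Look at mastodon"));              // false
console.log(TODO_RE.test("please FIXME later"));            // true
```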
**Now that we have our criteria, let’s dive in. All examples (both with and without Trunk) can be found in this [sample repo](https://github.com/trunk-io/todo-linter-demo), so feel free to follow along! If you haven’t used Trunk before, you can follow our setup instructions in our [docs](https://docs.trunk.io/check/usage).**
# The Sample File
We'll lint this file with all the tools we test in this blog. This file has some real TODO comments and some fake TODOs meant to confuse linters.
```markdown
# Test Data
A collection of different ways that TODO might show up.
``yaml
# TODO: Make this better
version: 0.1
``
``typescript
// TODO(Tyler): Optimize this
const a = !!!false;
``
<!-- MASTODON is not a fixme -->
## Another Heading
Look at all the ways to check for todo!
<!-- trunk-ignore-begin(todo-grep-wrapped,codespell,cspell,vale,semgrep,trunk-toolbox) -->
Let's ignore this TODO though
<!-- trunk-ignore-end(todo-grep-wrapped,codespell,cspell,vale,semgrep,trunk-toolbox) -->
```
# Per-Language Rules
Let’s try a naive approach. Several linters have built-in rules to check for TODOs (e.g. [ruff](https://docs.astral.sh/ruff/rules/line-contains-todo/), [ESLint](https://eslint.org/docs/latest/rules/no-warning-comments)). Many others support plugin ecosystems to add your own rules. Let’s take a look at markdownlint’s approach to this, using the [markdownlint-rule-search-replace](https://www.npmjs.com/package/markdownlint-rule-search-replace) package. Run `trunk check enable markdownlint` to get started.
In order to configure the rule, we must modify [.markdownlint.json](https://github.com/trunk-io/todo-linter-demo/blob/main/.markdownlint.json):
```json
{
"default": true,
"extends": "markdownlint/style/prettier",
"search-replace": {
"rules": [
{
"name": "found-todo",
"message": "Don't use todo",
"searchPattern": "/TODO/gi"
}
]
}
}
```
Then, we can run it and inspect the output:


Note that we have a `trunk-ignore` to suppress the `TODO` on line 24.
Markdownlint here gets the job done, but will of course only work on MD files. As soon as you start to add other file types, even YAML or JS, it doesn’t scale, and you’ll lose coverage and consistency, and chasing down the particular incantation to do this for every linter is intractable. Let’s look at some other more sustainable options.
# CSpell
[CSpell](https://cspell.org/) is a relatively extensible code spellchecker. It’s easy to use OOTB, and it runs on all file types. However, it has a high false positive rate and requires that you manually tune it by importing and defining new [dictionaries](https://cspell.org/docs/dictionaries/). Let’s see what it takes to turn it into a TODO linter. First, run `trunk check enable cspell`.
We can define our own dictionary or simply add a list of [forbidden words](https://cspell.org/docs/forbidden-words/) to [cspell.yaml](https://github.com/trunk-io/todo-linter-demo/blob/main/cspell.yaml):
```yaml
version: "0.2"
# Suggestions can sometimes take longer on CI machines,
# leading to inconsistent results.
suggestionsTimeout: 5000 # ms
words:
- "!todo"
- "!TODO"
```


We end up with a quick case-insensitive search for TODOs, albeit with some messy suggestions. It gets the job done, but getting it production-ready for the rest of our codebase will usually require curating additional dictionaries. Running it on the sample repo flags 22 additional false positive issues.
# codespell
[codespell](https://github.com/codespell-project/codespell) is a code spellchecker that takes a different approach. Much like CSpell, it is prone to false positives, but rather than defining dictionaries of allowlists, it looks for specific common misspellings and provides suggestions. This reduces its false positive rate, but it usually still requires some tuning. Run `trunk check enable codespell` to get started.
To teach codespell to flag TODOs, we need to define our own dictionary and reference it:
[todo_dict.txt](https://github.com/trunk-io/todo-linter-demo/blob/main/todo_dict.txt)
```text
todo->,encountered todo
```
[.codespellrc](https://github.com/trunk-io/todo-linter-demo/blob/main/.codespellrc)
```text
[codespell]
dictionary = todo_dict.txt
```


Still a bit cumbersome, but we can fine-tune the replacements if desired. Let’s examine some other options.
# Vale
[Vale](https://vale.sh/) is a code prose checker. It takes a more opinionated approach to editorial style, and thus can require lots of tuning, but it is very extensible. Let’s have it check for TODOs. Run `trunk check enable vale` to get started.
Vale has an opinionated, nested structure to define its configuration. For now, we will only do the minimum to check for TODOs:
[.vale.ini](https://github.com/trunk-io/todo-linter-demo/blob/main/.vale.ini)
```ini
StylesPath = "styles"
MinAlertLevel = suggestion
Packages = base
[*]
BasedOnStyles = Vale, base
```
[styles/base/todo.yml](https://github.com/trunk-io/todo-linter-demo/blob/main/styles/base/todo.yml)
```yaml
extends: existence
message: Don't use TODO
level: warning
scope: [raw, text]
tokens:
- TODO
```


If you’re already using Vale, and you’re willing to eat the cost of configuration, it can work quite well! Additionally, you can easily customize which file types and scopes it applies to. Let’s try a few more.
# Semgrep
[Semgrep](https://semgrep.dev/docs/cli-reference) is a static analysis tool that offers semantic-aware grep. It catches a number of vulnerabilities out of the box, and it’s fairly extensible. It handles most file types, although anecdotally it struggles in some edge cases (e.g. C++ macros, networkless settings). Run `trunk check enable semgrep` to get started.
Thankfully, Semgrep is configured pretty easily and lets us just specify words or patterns to check for. We can add a config file like so:
[.semgrep.yaml](https://github.com/trunk-io/todo-linter-demo/blob/main/.semgrep.yaml)
```yaml
rules:
- id: check-for-todo
languages:
- generic
severity: ERROR
message: Don't use TODO
pattern-either:
- pattern: TODO
- pattern: todo
```


It works pretty well!! And we can customize it however we want in their [playground](https://semgrep.dev/playground/r/3qUzQD/ievans.print-to-logger?editorMode=advanced), even modifying our pattern to require specific TODO styling. Semgrep seems like a decent contender for a best-effort solution, but let’s give a couple more a try.
# trunk-toolbox
trunk-toolbox is our [open-source](https://github.com/trunk-io/toolbox) homegrown linter Swiss Army knife. It supports a few different rules, including searching for TODO and FIXME. It works on all file types and is available just by running `trunk check enable trunk-toolbox`.
Enable TODO checking in [toolbox.toml](https://github.com/trunk-io/todo-linter-demo/blob/main/toolbox.toml):
```toml
[todo]
enabled = true
```


This immediately accomplishes the stated goal of a TODO linter (if you just want to find TODOs, use trunk-toolbox), but it isn't configurable beyond that.
# Grep Linter
Let’s take this one step further. How difficult is it to prototype a solution from scratch? Building a wrapper around grep is the no-brainer solution for this, so let’s start with that.
At its simplest, we can build something like:
[.trunk/trunk.yaml](https://github.com/trunk-io/todo-linter-demo/blob/main/plugin.yaml#L27-L46)
```yaml
lint:
definitions:
- name: todo-grep-linter
description: Uses grep to look for TODOs
files: [ALL]
commands:
- name: lint
run: bash -c "grep -E -i 'TODO\W' --line-number --with-filename ${target}"
output: pass_fail
success_codes: [0, 1]
```
This `pass_fail` linter will just report when we have TODOs. In order to get line numbers, we can wrap this in a script and make it a `regex` linter with an output that Trunk Check understands:
[todo_grep.sh](https://github.com/trunk-io/todo-linter-demo/blob/main/todo_grep.sh)
```bash
#!/bin/bash
set -euo pipefail
LINT_TARGET="${1}"
TODO_REGEX="TODO\W"
GREP_FORMAT="([^:]*):([0-9]+):(.*)"
PARSER_FORMAT="\1:\2:0: [error] Found TODO in line (TODO)"
grep -o -E "${TODO_REGEX}" --line-number --with-filename "${LINT_TARGET}" | sed -E "s/${GREP_FORMAT}/${PARSER_FORMAT}/"
```
[.trunk/trunk.yaml](https://github.com/trunk-io/todo-linter-demo/blob/main/plugin.yaml#L27-L46)
```yaml
lint:
definitions:
- name: todo-grep-wrapped
description: Uses grep to look for TODOs
files: [ALL]
commands:
- name: lint
run: sh ${cwd}/todo_grep.sh ${target}
output: regex
parse_regex: "((?P<path>.*):(?P<line>-?\\d+):(?P<col>-?\\d+): \\[(?P<severity>.*)\\] (?P<message>.*) \\((?P<code>.*)\\))"
success_codes: [0, 1]
```


It’s a bit messy, but it gets the job done. It’s another thing to maintain, but you can tune it as much as you want. We’ll definitely be using one of the pre-built solutions, though.
# What did we learn?
There are more than a couple of reasonable options, and depending on your appetite for configuration vs. plug-and-play, some make more sense than others. But overall, using an existing language-agnostic tool performs much better.

And regardless of your preference, all of these options can be super-charged by Trunk. Using [githooks](https://trunk.io/blog/githook-management) and [CI gating](https://docs.trunk.io/check/check-cloud-ci-integration/get-started), you can prevent TODOs from ever landing if that’s your taste. Or, you can burn them down incrementally, only tackling new issues with [Hold the Line](https://docs.trunk.io/check/reference/under-the-hood#hold-the-line). You can always make TODOs a non-blocking [threshold](https://docs.trunk.io/check/configuration#blocking-thresholds) if need be, or [turn them on for yourself](https://docs.trunk.io/check/reference/user-yaml) without blocking your team.
We all end up with more TODOs than we’d like, but it’s important to build processes that track them (and if necessary gate them) so they don’t get out of hand, just like any other linting issue. There are lots of reasonable options to choose from, but it’s important to make an informed decision when adopting a generalizable approach to linting.
If this post interests you, come check out our other linter definitions in our open-source [plugins repo](https://github.com/trunk-io/plugins) or come chat with us on [Slack](http://slack.trunk.io)! | tylerjang27 |
1,920,427 | Remixed Relay: Evolution of React Server Component | React Server Components (RSC) represents a significant advancement in how server-rendered content is... | 0 | 2024-07-12T03:14:02 | https://dev.to/guhandelta/evolution-of-react-server-componen-m8f | react, reactservercomponents, relay | React Server Components (RSC) represents a significant advancement in how server-rendered content is handled in React applications. The concept of React Server Components draws inspiration from various previous technologies and patterns, particularly Relay. Relay, a JavaScript framework for building data-driven React applications, provided foundational concepts that have influenced the development of React Server Components. This article delves into how React Server Components evolved from Relay, highlighting the similarities, differences, and advancements.
## Understanding Relay

### What is Relay?

Relay is a JavaScript framework developed by Facebook (now Meta) for managing and fetching GraphQL data in React applications. It was designed to handle complex data requirements and provide a highly optimized data-fetching layer for React.

### Key Features of Relay

- **GraphQL Integration:** Relay works seamlessly with GraphQL, allowing for precise data fetching and efficient updates.
- **Declarative Data Fetching:** Components declare their data dependencies using GraphQL fragments, making data requirements explicit and co-located with the components.
- **Optimistic Updates:** Relay supports optimistic UI updates, allowing the UI to reflect changes before the server confirms them.
- **Efficient Data Fetching:** Relay minimizes over-fetching and under-fetching by composing multiple data requirements into a single query.
## The Concept of Server Components

### What are React Server Components?

React Server Components (RSC) allow developers to build components that run on the server and send HTML to the client. This approach aims to optimize server-side rendering (SSR) by splitting the rendering workload between the server and the client.

### Key Features of React Server Components

- **Server-Side Execution:** Components can be executed on the server, reducing the client-side JavaScript bundle size.
- **Direct Data Access:** Server Components can directly access server-side data sources, such as databases or APIs, without additional client-side data fetching.
- **Seamless Integration:** RSC can be seamlessly integrated with client components, enabling a hybrid rendering model.
- **Improved Performance:** By offloading rendering to the server, React Server Components can improve the initial load performance and SEO.
## Evolution from Relay to React Server Components

### Similarities

- **Declarative Data Requirements:** Both Relay and React Server Components emphasize the declarative nature of data requirements. In Relay, components declare their GraphQL fragments, while in RSC, server components can directly fetch data and render HTML.
- **Optimized Data Fetching:** Relay's efficient data-fetching mechanism influenced RSC's ability to directly access and fetch data on the server, reducing the need for multiple client-side requests.
- **Component Co-location:** In both Relay and RSC, data-fetching logic is co-located with the components, making the data dependencies explicit and easier to manage.

### Differences

- **Rendering Paradigm:** Relay focuses on optimizing client-side data fetching and updates using GraphQL, while React Server Components shift part of the rendering workload to the server, sending pre-rendered HTML to the client.
- **Server-Side Execution:** Relay operates entirely on the client side, fetching data and updating the UI. RSC executes components on the server, leveraging server resources to generate HTML and send it to the client.
- **Data Fetching:** Relay relies on GraphQL for data fetching, requiring a GraphQL server and schema. RSC can fetch data from any server-side data source, including REST APIs, databases, or other services, without being tied to GraphQL.
## Advancements in React Server Components

- **Simplified Data Access:** RSC simplifies data access by allowing server-side code to fetch data directly, avoiding the need for additional client-side data-fetching logic.
- **Reduced Client-Side Overhead:** By moving part of the rendering logic to the server, RSC reduces the amount of JavaScript that needs to be executed on the client, leading to improved performance and faster initial loads.
- **Hybrid Rendering:** RSC supports a hybrid rendering model where server components can be combined with client components, providing flexibility in rendering strategies.
- **Improved SEO:** Server-side rendering with RSC improves SEO by delivering pre-rendered HTML content to search engines, making it easier for them to crawl and index the content.
## Example: Transition from Relay to React Server Components

### Relay Example

```javascript
import { graphql, QueryRenderer } from 'react-relay';
import environment from './environment';

const App = () => (
  <QueryRenderer
    environment={environment}
    query={graphql`
      query AppQuery {
        user(id: "1") {
          name
        }
      }
    `}
    render={({ error, props }) => {
      if (error) {
        return <div>Error!</div>;
      }
      if (!props) {
        return <div>Loading...</div>;
      }
      return <div>User: {props.user.name}</div>;
    }}
  />
);

export default App;
```
## How Relay's Innovations Were Brought to React Through React Server Components

Relay introduced several groundbreaking features and methodologies for data handling and fetching in React applications. These features provided developers with a robust framework for managing complex data dependencies and optimizing client-side rendering.

React Server Components (RSC) have taken some of these innovations and integrated them into a server-rendered paradigm, enhancing the capabilities of React applications. This section emphasizes how the core functionalities of Relay have been adapted and evolved into React Server Components.
### 1) Declarative Data Fetching: From Relay to React Server Components

**Relay's approach:** In Relay, data fetching is declarative. Each component specifies its data requirements using GraphQL fragments, which are then composed into a single query by Relay. This approach ensures that data dependencies are clear and co-located with the component that uses them.

```javascript
import { graphql, QueryRenderer } from 'react-relay';
import environment from './environment';

const UserComponent = () => (
  <QueryRenderer
    environment={environment}
    query={graphql`
      query UserComponentQuery {
        user(id: "1") {
          name
          email
        }
      }
    `}
    render={({ error, props }) => {
      if (error) {
        return <div>Error!</div>;
      }
      if (!props) {
        return <div>Loading...</div>;
      }
      return (
        <div>
          <h1>{props.user.name}</h1>
          <p>{props.user.email}</p>
        </div>
      );
    }}
  />
);

export default UserComponent;
```
**How it was solidified in RSC:** React Server Components also use a declarative approach to data fetching, but with a key difference: data fetching occurs on the server. This allows for direct and efficient access to server-side resources without additional client-side requests.

```javascript
// UserComponent.server.js
import React from 'react';

const fetchUserData = async (id) => {
  const response = await fetch(`https://api.example.com/users/${id}`);
  return response.json();
};

const UserComponent = async ({ id }) => {
  const user = await fetchUserData(id);
  return (
    <div>
      <h1>{user.name}</h1>
      <p>{user.email}</p>
    </div>
  );
};

export default UserComponent;
```
### 2) Optimized Data Fetching and Minimization

**Relay's approach:** Relay optimizes data fetching by composing multiple component queries into a single network request. This minimizes the number of requests and ensures that only the required data is fetched.
**How it was solidified in RSC:** React Server Components leverage the server's ability to directly fetch and render data, eliminating the need for multiple client-side data-fetching operations. This approach inherently minimizes network requests and optimizes the data-fetching process.
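The idea can be modeled in plain JavaScript: rather than each component issuing its own client request, the server resolves every data need for a render pass together and ships the result. A sketch with mocked fetchers (not the real RSC runtime):

```javascript
// Sketch: the server resolves every component's data need in one pass,
// so the client receives rendered output instead of issuing N requests.
const mockDb = {
  user: { id: "1", name: "Ada" },
  posts: [{ title: "Hello" }],
};

async function fetchUser() { return mockDb.user; }
async function fetchPosts() { return mockDb.posts; }

async function renderPage() {
  // One server-side pass instead of separate client round-trips.
  const [user, posts] = await Promise.all([fetchUser(), fetchPosts()]);
  return `<h1>${user.name}</h1><ul>${posts
    .map((p) => `<li>${p.title}</li>`)
    .join("")}</ul>`;
}

renderPage().then((html) => console.log(html));
// <h1>Ada</h1><ul><li>Hello</li></ul>
```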
### 3) Component Co-location and Data Dependency Management

**Relay's approach:** Data requirements are co-located with the components that use them, making it easier to manage and understand data dependencies.
```javascript
import { graphql, createFragmentContainer } from 'react-relay';

const UserFragment = graphql`
  fragment UserComponent_user on User {
    name
    email
  }
`;

const UserComponent = ({ user }) => (
  <div>
    <h1>{user.name}</h1>
    <p>{user.email}</p>
  </div>
);

export default createFragmentContainer(UserComponent, {
  user: UserFragment,
});
```
**How it was solidified in RSC:** In React Server Components, data fetching and rendering logic are similarly co-located. This maintains clarity and modularity, allowing developers to understand and manage data dependencies within the component itself.
```javascript
// UserComponent.server.js
const fetchUserData = async (id) => {
  const response = await fetch(`https://api.example.com/users/${id}`);
  return response.json();
};

const UserComponent = async ({ id }) => {
  const user = await fetchUserData(id);
  return (
    <div>
      <h1>{user.name}</h1>
      <p>{user.email}</p>
    </div>
  );
};

export default UserComponent;
```
### 4) Improved Performance and SEO

**Relay's approach:** Relay improves client-side performance by reducing over-fetching and providing mechanisms for efficient updates. However, because it operates on the client side, initial loading times and SEO can still be challenging.

**How it was solidified in RSC:** React Server Components significantly enhance performance and SEO by rendering components on the server. The server can send fully-rendered HTML to the client, reducing the amount of JavaScript needed on the client side and providing immediate content for search engines to crawl.
```javascript
// Server-side rendering logic
import { renderToString } from 'react-dom/server';
import App from './App';

const serverRender = (req, res) => {
  const html = renderToString(<App />);
  res.send(`<!DOCTYPE html>
<html>
  <head>
    <title>React Server Components</title>
  </head>
  <body>
    <div id="root">${html}</div>
    <script src="/bundle.js"></script>
  </body>
</html>`);
};
### 5) Hybrid Rendering Model

**Relay's approach:** While Relay focuses on client-side data management and rendering, it does not inherently support server-side rendering.

**How it was solidified in RSC:** React Server Components introduce a hybrid rendering model, where server-rendered components can seamlessly integrate with client-rendered components. This hybrid approach allows developers to leverage the benefits of both server-side and client-side rendering within the same application.
```javascript
// App.server.js
import React from 'react';
import HeaderComponent from './HeaderComponent.server';
import FooterComponent from './FooterComponent';
import UserComponent from './UserComponent.server';

const App = () => (
  <div>
    <HeaderComponent />
    <UserComponent id="1" />
    <FooterComponent />
  </div>
);

export default App;
```
### 6) Enhanced Developer Experience

**Relay's approach:** Relay provides tools and conventions for managing GraphQL data, but it requires developers to understand and work with GraphQL schemas, queries, and mutations.

**How it was solidified in RSC:** React Server Components simplify the developer experience by allowing direct access to server-side data sources without requiring GraphQL. This reduces the learning curve and allows developers to use familiar REST APIs or other data sources.
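For example, a server component can call an ordinary REST endpoint and render the result directly, with no GraphQL schema or fragments involved. A plain-JavaScript sketch with a mocked fetch helper (`fetchJson` and the `/users/:id` endpoint are illustrative, not a real API):

```javascript
// Sketch: server-side rendering straight from a REST response.
// fetchJson is a stand-in; a real server component would call fetch(url).
async function fetchJson(url) {
  return { id: "1", name: "Ada", email: "ada@example.com" }; // mocked
}

async function UserCard({ id }) {
  const user = await fetchJson(`/users/${id}`);
  return `<div><h1>${user.name}</h1><p>${user.email}</p></div>`;
}

UserCard({ id: "1" }).then((html) => console.log(html));
```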
### 7) Optimistic Updates

**Relay's approach:** Relay supports optimistic updates, allowing the UI to be updated immediately based on an expected result while the actual mutation request is processed in the background. This feature improves the user experience by providing instant feedback.
```javascript
import { commitMutation, graphql } from 'react-relay';

function updateUser(environment, userId, newName) {
  const mutation = graphql`
    mutation UpdateUserMutation($input: UpdateUserInput!) {
      updateUser(input: $input) {
        user {
          id
          name
        }
      }
    }
  `;

  const variables = {
    input: {
      id: userId,
      name: newName,
    },
  };

  commitMutation(environment, {
    mutation,
    variables,
    optimisticResponse: {
      updateUser: {
        user: {
          id: userId,
          name: newName,
        },
      },
    },
    onCompleted: (response, errors) => {
      console.log('Mutation completed');
    },
    onError: (err) => console.error(err),
  });
}
```
**How it was solidified in RSC:** React Server Components (RSC) provide a robust framework for handling optimistic updates. By leveraging server-side rendering, RSC can pre-render components with expected data changes, ensuring immediate UI feedback while maintaining consistency and integrity.
```javascript
// `use` and `Suspense` are exported from 'react' (React 19+),
// not from a 'react-server-dom' package.
import { use, Suspense } from 'react';

function UserComponent({ userId, optimisticData }) {
  const user = use(fetchUser(userId));
  return (
    <div>
      <h1>{optimisticData ? optimisticData.name : user.name}</h1>
      {optimisticData ? null : <p>Loading...</p>}
    </div>
  );
}

async function fetchUser(userId) {
  const response = await fetch(`/api/users/${userId}`);
  return response.json();
}

async function updateUser(userId, newName) {
  // Simulate a delay for updating the user
  await new Promise((resolve) => setTimeout(resolve, 1000));
  return { id: userId, name: newName };
}

// Example usage with an optimistic UI
function App({ userId }) {
  const optimisticData = { name: 'Optimistic Name' }; // Mock optimistic data
  return (
    <Suspense fallback={<p>Loading...</p>}>
      <UserComponent userId={userId} optimisticData={optimisticData} />
    </Suspense>
  );
}
```
## 8) Subsequent Data Fetching

Relay optimizes subsequent data fetching through:

- **Declarative Data Requirements:** Components declare their data needs, and Relay fetches the necessary data.
- **Query Batching:** Relay batches multiple GraphQL queries into a single request, reducing network overhead.
- **Caching:** Relay caches query results, minimizing redundant network requests.
- **Automatic Refetching:** Relay refetches data when variables change, keeping the UI up-to-date.
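The caching behavior in particular is easy to picture with a small stand-alone sketch. This is not Relay's actual store implementation, just a toy memoizing fetcher that shows why a repeated query costs no extra network round trip:

```javascript
// A toy query cache: the first request for a key goes to the "network";
// every later request for the same key is served from memory.
const cache = new Map();
let networkCalls = 0;

async function fakeNetworkFetch(key) {
  networkCalls += 1; // count real round trips
  return { data: `result for ${key}` };
}

async function fetchWithCache(queryKey) {
  if (cache.has(queryKey)) return cache.get(queryKey);
  const result = await fakeNetworkFetch(queryKey);
  cache.set(queryKey, result);
  return result;
}

async function demo() {
  await fetchWithCache('user:1');
  await fetchWithCache('user:1'); // served from the cache, no second round trip
  return networkCalls;
}
// demo() resolves to 1: two queries, one network request.
```

Relay layers query batching and automatic refetching on top of this basic idea.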
```jsx
// Client-side data fetching with Relay
import { useEffect } from 'react';
import { useQueryLoader, usePreloadedQuery } from 'react-relay/hooks';
import { MyQuery } from './MyQuery';

function MyComponent() {
  const [queryReference, loadQuery] = useQueryLoader(MyQuery);

  useEffect(() => {
    loadQuery({ id: 'some-id' });
  }, [loadQuery]);

  if (!queryReference) {
    return <div>Loading...</div>;
  }

  return <DisplayData queryReference={queryReference} />;
}

function DisplayData({ queryReference }) {
  const data = usePreloadedQuery(MyQuery, queryReference);
  return <div>{data.user.name}</div>;
}
```
**How it was solidified in RSC:** React Server Components (RSC) take the concept of subsequent data fetching further by performing data fetching on the server. This approach offloads the responsibility of data fetching from the client to the server, enhancing performance and reducing client-side complexity.
```jsx
import { use } from 'react-server-dom';

function UserComponent({ userId }) {
  const user = use(fetchUser(userId));
  return <div>{user.name}</div>;
}

async function fetchUser(userId) {
  const response = await fetch(`/api/users/${userId}`);
  return response.json();
}

export default UserComponent;
```
## Conclusion
Relay introduced several advanced features that have profoundly influenced the development of data-driven React applications. These features, including declarative data fetching, optimized data fetching and minimization, component co-location, and data dependency management, have set a high standard for managing complex data needs in React.
React Server Components have taken these foundational concepts and adapted them to enhance server-side rendering capabilities. By leveraging server-side execution, React Server Components offer significant improvements in performance, SEO, and developer experience. The transition from Relay’s client-side optimizations to RSC’s server-side capabilities represents a natural evolution, providing developers with powerful tools to build more efficient and performant applications.
As React continues to evolve, the integration of these advanced features into the core framework through React Server Components demonstrates React’s commitment to addressing the challenges of modern web development and providing innovative solutions for developers.
| guhandelta |
1,920,429 | THE ROLE OF SEMANTIC HTML IN ENHANCING SEO AND WEB ACCESSIBILITY | The Role of Semantic HTML in Enhancing SEO and Web Accessibility 1.Introduction to Semantic... | 0 | 2024-07-12T03:19:06 | https://dev.to/kevin_kimtai/the-role-of-semantic-html-in-enhancing-seo-and-web-accessibility-g99 | webdev, html | The Role of Semantic HTML in Enhancing SEO and Web Accessibility
<u>**1.Introduction to Semantic HTML**</u>
Semantic HTML involves using HTML tags that provide meaning and context to the web content they enclose. This approach not only improves the visual presentation but also enhances the understanding and accessibility of the content for search engines and users, particularly those using assistive technologies.
<u>**2.SEO Benefits of Semantic HTML**</u>
**How Semantic HTML Tags Help Search Engines Index and Rank Web Pages**
Semantic HTML tags give search engines clear indicators of the structure and importance of web content. Tags like `<header>`, `<nav>`, `<section>`, `<article>`, and `<footer>` define the various sections of a webpage, making it easier for search engines to parse and understand the content.
**Improving the Relevance and Quality of Search Results**
By using semantic HTML, web developers can provide search engines with more accurate information about the content and its significance. This improves the relevance and quality of search results, as search engines can better match the content to user queries.
- Example:
Adding semantic tags to a blog post helps search engines identify the title, author, publication date, and main sections of the article, leading to more relevant search results.
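A sketch of what that markup might look like for a blog post (the title, author, and date are placeholders):

```html
<article>
  <header>
    <h1>Why Semantic HTML Matters</h1>
    <address>By Jane Doe</address>
    <time datetime="2024-07-05">July 5, 2024</time>
  </header>
  <section>
    <p>The main body of the article goes here...</p>
  </section>
</article>
```

The `<article>`, `<header>`, and `<time>` elements tell a crawler exactly which parts are the title, author, and publication date, with no guessing from visual layout.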
**Positive Impact on SEO Performance**
Proper use of semantic HTML can significantly boost a website’s SEO performance. Enhanced content understanding leads to better indexing, higher rankings, and improved visibility in search engine results pages (SERPs).
- Example:
A website using semantic HTML for its articles, sections, and headers can achieve higher rankings compared to one using non-semantic tags, due to the improved clarity and structure provided to search engines.
**<u>3.Accessibility Improvements with Semantic HTML</u>**
**Aiding Screen Readers and Assistive Technologies**
Semantic HTML aids screen readers and other assistive technologies by providing clear and meaningful structure to web content. Tags like `<header>`, `<nav>`, `<main>`, `<article>`, and `<footer>` help these technologies navigate and interpret the content more effectively.
- Example:
```html
<nav aria-label="Main Navigation">
  <ul>
    <li><a href="#home">Home</a></li>
    <li><a href="#about">About</a></li>
    <li><a href="#contact">Contact</a></li>
  </ul>
</nav>
```
**Creating an Inclusive Web Experience**
Semantic HTML is crucial for creating an inclusive web experience for all users, including those with disabilities. It ensures that content is accessible and usable by everyone, regardless of their physical or cognitive abilities.
- Example:
Using the `<main>` tag to denote the main content area helps screen readers skip repetitive content and navigate directly to the primary information.
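For example, combined with a skip link, `<main>` lets assistive-technology users bypass the navigation entirely (a minimal sketch):

```html
<body>
  <a href="#content">Skip to main content</a>
  <nav aria-label="Main Navigation">
    <!-- repeated site navigation -->
  </nav>
  <main id="content">
    <h1>Primary Content</h1>
    <p>Screen readers and keyboard users can jump straight here.</p>
  </main>
</body>
```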
**Enhancing Usability for People with Disabilities**
Proper use of semantic HTML enhances the usability of web pages for people with disabilities. It provides a structured and logical flow of content, making it easier for users to understand and interact with the webpage.
- Example:
```html
<article>
  <header>
    <h2>Inclusive Design</h2>
    <p>Author: Jane Doe</p>
  </header>
  <section>
    <h3>Accessibility Features</h3>
    <p>Our website includes several accessibility features, such as keyboard navigation and screen reader support...</p>
  </section>
  <footer>
    <p>Published on July 5, 2024</p>
  </footer>
</article>
```

| kevin_kimtai |
1,920,431 | Mastering File Management with Ansible | In this lab, you will explore the Ansible File module, which allows you to manage files and directories on remote hosts. The File module provides a wide range of functionalities, such as creating, deleting, modifying permissions, and checking the existence of files and directories. | 27,737 | 2024-07-12T03:24:12 | https://dev.to/labex/mastering-file-management-with-ansible-593o | ansible, coding, programming, tutorial |
## Introduction
This article covers the following tech skills:

In [this lab](https://labex.io/tutorials/ansible-file-module-289654), you will explore the Ansible File module, which allows you to manage files and directories on remote hosts. The File module provides a wide range of functionalities, such as creating, deleting, modifying permissions, and checking the existence of files and directories.
## Create a File on Remote Host
In this step, you will create a file on a remote host using the Ansible File module.
First, create a new Ansible playbook file called `/home/labex/project/file-module-playbook.yaml` and open it in a text editor.
Add the following content to the playbook file:
```yaml
- hosts: localhost
tasks:
- name: Create a file on remote host
file:
path: /home/labex/file.txt
state: touch
```
- `file`: Ansible module to manipulate the file system.
- `path`: Specifies the path to the file, in this case `/home/labex/file.txt`.
- `state`: Specifies the state of the file. Here, `touch` indicates that the file will be created if it does not exist, or updated with access and modification timestamps if it already exists.
The purpose of this playbook is to create a file named `file.txt` on the remote host.
Then, run the playbook using the following command:
```bash
ansible-playbook file-module-playbook.yaml
```
Example output:
```bash
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [Create a file on remote host] ********************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Finally, verify that the file `file.txt` is created in the specified path on the remote host.
```bash
ll /home/labex/file.txt
```
Example output:
```bash
-rw-rw-r-- 1 labex labex 0 Mar 10 03:12 file.txt
```
You will see the message indicating that `/home/labex/file.txt` was successfully created.
## Manage File Permissions
In this step, you will learn how to manage file permissions on a remote host using the Ansible File module.
First, modify the existing playbook file by removing all content and adding the following content to the playbook file:
```yaml
- hosts: localhost
tasks:
- name: Set file permissions
file:
path: /home/labex/file.txt
mode: "0644"
```
- `file`: Ansible module to manipulate the file system.
- `path`: Specifies the path to the file, in this case `/home/labex/file.txt`.
- `mode`: This parameter is used to set the permission mode of the file. Replace `"0644"` with the desired permission mode for the file. Refer to the [chmod](https://en.wikipedia.org/wiki/Chmod) documentation for more information on permission modes.
The purpose of this playbook is to set the permissions of the file `/home/labex/file.txt` to `0644`.
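As an aside, you can reproduce the playbook's effect by hand to see what mode `0644` means in practice. This sketch uses a scratch path under `/tmp` so it is safe to run anywhere with GNU coreutils:

```shell
# Create a file and set its mode, just as the playbook does
touch /tmp/file-module-demo.txt
chmod 0644 /tmp/file-module-demo.txt

# Show the octal mode: owner read/write, group read, others read
stat -c '%a' /tmp/file-module-demo.txt   # prints: 644
```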
Then, run the playbook using the following command:
```bash
ansible-playbook file-module-playbook.yaml
```
Example output:
```bash
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [Set file permissions] ****************************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Finally, verify that the file permissions are set as specified on the remote host.
```bash
ll /home/labex/file.txt
```
Example output:
```bash
-rw-r--r-- 1 labex labex 0 Mar 10 03:12 /home/labex/file.txt
```
The `-rw-r--r--` here indicates that the mode of `/home/labex/file.txt` has been successfully set to `0644`.
## Delete a File on Remote Host
In this step, you will learn how to delete a file on a remote host using the Ansible File module.
First, modify the existing playbook file by removing all content and adding the following content to the playbook file:
```yaml
- hosts: localhost
tasks:
- name: Delete a file on remote host
file:
path: /home/labex/file.txt
state: absent
```
- `file`: Ansible module to manipulate the file system.
- `path`: Specifies the path to the file to be deleted, i.e. `/home/labex/file.txt`.
- `state`: This parameter indicates that the file should be in the `absent` state. Therefore, the goal of the task is to delete the file at the specified path.
The purpose of this playbook is to delete the file `/home/labex/file.txt` on the remote host.
Then, run the playbook using the following command:
```bash
ansible-playbook file-module-playbook.yaml
```
Example output:
```bash
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [Delete a file on remote host] ********************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Finally, verify that the file `file.txt` is deleted and no longer exists on the remote host.
```bash
ll /home/labex/file.txt
```
Example output:
```bash
ls: cannot access '/home/labex/file.txt': No such file or directory
```
This message indicates that the `/home/labex/file.txt` file was successfully deleted.
## Check File Existence
In this step, you will learn how to check the existence of a file on a remote host using the Ansible File module.
First, modify the existing playbook file by removing all content and adding the following content to the playbook file:
```yaml
- hosts: localhost
tasks:
- name: Check file existence on remote host
stat:
path: /home/labex/file.txt
register: file_info
- name: Print file existence
debug:
msg: "File exists: {{ file_info.stat.exists }}"
```
- `stat`: This is one of Ansible's modules for getting status information about a file or directory.
- `path`: Specifies the path to the file to check, i.e. `/home/labex/file.txt`.
- `register`: Stores the result of the module execution in the variable `file_info` using the register keyword.
- `debug`: This is one of the Ansible modules that prints debugging information.
- `msg`: Use the `debug` module to print a message with information about the existence of a file, which is retrieved via `file_info.stat.exists`.
The purpose of this playbook is to check for the existence of the file `/home/labex/file.txt` on the remote host and print the information to standard output.
Then, run the playbook using the following command:
```bash
ansible-playbook file-module-playbook.yaml
```
Example output:
```bash
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that
the implicit localhost does not match 'all'
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [Check file existence on remote host] *************************************
ok: [localhost]
TASK [Print file existence] ****************************************************
ok: [localhost] => {
"msg": "File exists: False"
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Finally, observe the output to see if the file `file.txt` exists on the remote host.
`"msg": "File exists: False"` indicates that the `/home/labex/file.txt` file does not exist.
## Summary
Congratulations! You have successfully completed the Ansible File Module lab. You have learned how to create files and directories, manage file permissions, delete files, and check file existence on remote hosts using the File module.
The File module is a powerful tool in Ansible that enables you to perform various file-related operations during automation tasks. You can now confidently use the File module in your Ansible playbooks to manage files and directories efficiently.
Keep exploring the Ansible documentation and other modules to expand your knowledge and improve your automation skills. Happy Ansible-ing!

---
> 🚀 Practice Now: [Ansible File Module](https://labex.io/tutorials/ansible-file-module-289654)
---
## Want to Learn More?
- 🌳 Learn the latest [Ansible Skill Trees](https://labex.io/skilltrees/ansible)
- 📖 Read More [Ansible Tutorials](https://labex.io/tutorials/category/ansible)
- 💬 Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) | labby |
1,920,432 | Flour, Frosting, and Forecasts | Cooking up a business strategy with baking analogies As a part-time hobby baker and a... | 0 | 2024-07-12T03:40:40 | https://dev.to/vidyaiyengar/flour-frosting-and-forecasts-3ccl | strategy, creativity, bakinganalogies | ## Cooking up a business strategy with baking analogies
As a part-time hobby baker and a full-time strategist, I often find myself whipping up parallels between these two passions of mine. For a baker with a dual obsession for frosting and analytics, picture this: I am baking a fancy cake for a birthday party, and suddenly, my inner strategist takes over, meticulously planning, preparing, and executing every step like it’s a corporate takeover! And on the flip side, when I am strategizing at work, I organically channel my inner baker, bringing a sprinkle of creativity, a dash of high-level thinking, and a generous dollop of research.
I decided to write a blog about mixing these ingredients together and bake up a delicious comparison between baking and business strategy. So let’s dive into a world where planning, preparation, execution, and adaptation are the secret ingredients to success in both the kitchen and the boardroom.
First things first.. **Requirements gathering and knowing your audience**
Whether you're baking a cake or crafting a business strategy, understanding your audience and gathering their requirements are the crucial first steps. Just like how a cake designed without considering the birthday person's preferences could flop dramatically, a business strategy that ignores customer needs is destined for failure. So, whether you’re donning a chef’s hat or a business suit, remember: it’s all about knowing your audience and nailing those requirements.
However, when opinions clash, that’s where your negotiation skills shine.
I recall my negotiations with a client who was very tight on budget and was hesitant about paying for a whole expensive bottle of red food coloring, but still wanted a red velvet cake.
I often run into similar sticky situations at work, where customers’ demands clash with my team’s grand plans and vision. My strategy? Channel my inner diplomat to strike a balance and negotiate a win-win. Creative alternative solutions are my secret weapon. Take the red velvet cake debacle, for instance. The client wanted a red cake but balked at the idea of paying for an entire bottle of red dye. So, I explained to them the health concerns with the red dye and proposed a pink velvet cake using beetroot juice. Voila! The client was thrilled, felt cared for and felt like a VIP. Plus, I got to feel like a culinary genius and a master negotiator all at once!
If I have to categorize requirement gathering, I would split it into 3 phases: understanding needs, detail orientation, feedback and adaptation.
## **Understanding needs**
Before even cracking an egg, a baker needs to know who’s eating the cake. Is it a sugar loving 5-year old or a gluten averse grown-up? This means asking the right questions about flavors, dietary restrictions, and favorite colors etc. Let’s face it, a pink cake for a ten-year-old boy who’s all about cars might not hit the mark.
## **Detail orientation**
Once you know your audience, it's time to gather the specifics. Vanilla or chocolate? Buttercream or fondant? How many layers? etc. etc. This is where detailed notes and sometimes even sketches come into play. Think of it as a cake blueprint. It’s just like how after identifying the audience, businesses gather detailed requirements. What features do customers want in the product? What pain points need addressing? This involves creating user personas, defining use cases, and drafting detailed product requirements documents.
## **Feedback and adaptation**
This phase is all about checking in with the client before the final bake to ensure the design meets their expectations, and being ready to make changes if they suddenly decide they hate sprinkles. Just like how we conduct beta tests and gather feedback before the final product launch, being agile enough to make necessary adjustments.
Then comes the **implementation** stage.
First thing to consider when you get down to implement is - **Ingredients and resources**. Just as baking a cake requires specific ingredients like flour, sugar, eggs, and an oven (because a cake baked in a microwave is a crime against desserts), executing a business strategy demands various resources—tools, technology, personnel etc. Before diving in, it's crucial to identify and gather all these necessities.
**Plan:** Choosing your recipe wisely is key. A solid recipe, er, business plan, covers everything from your target market to your investment risks. It helps you identify potential roadblocks and develop contingency plans. Think of it as your secret weapon against the rising cost of eggs and the unpredictable whims of fickle customers. So, before you preheat the oven on your strategic vision, make sure you've got a recipe for success.
Integrating business functions like engineering, support, marketing, sales, and operations is crucial for ensuring your strategy doesn't turn into a hot mess. If you just throw all the ingredients in the oven without any rhyme or reason, you're gonna end up with something that's barely edible, let alone delicious. The same goes for your business strategy. You can't have marketing doing their own thing, sales doing their own thing, and operations running around like headless chickens. You need to get everyone on the same page, working together towards a common goal. Sure, it takes some work to get all those different functions aligned, but it's worth it in the long run. When you've got a well-oiled machine where everyone is pulling in the same direction, that's when the magic happens. Your strategy will be rock solid, and you'll be able to execute it with precision and efficiency.
Remember, a well-executed strategy is like a perfectly baked cake: everyone wants a slice, and no one leaves the table disappointed (unless they're on a diet).
Then comes **taste testing and feedback**
Seeking feedback is like taste-testing your cake creations, except you're not just asking if it's moist enough, you're also finding out if it's trendy enough to be the next Instagram sensation. Just like tweaking your recipe based on whether the client prefers more cinnamon or less nutmeg, gathering customer feedback and analyzing performance metrics and survey results helps refine your business strategy.
Running a taste panel with your customers is like figuring out if your strategy is hitting the sweet spot or if it needs a pinch of innovation and maybe a sprinkle of extra customer care. Timely customer input can go a long way in ensuring you're delivering something that's truly satisfying to your audience. Because, let's face it, just like a cake that's too dry, a business strategy that falls flat is nobody's idea of a good time.
Now let me highlight the three essential virtues that serve as game changers in both baking and strategy: “**Research**”,"**patience and timing**" and "**adaptability and innovation**". In my experience, mastering these virtues isn't just advantageous, it's essential for achieving excellence and staying ahead in a dynamic world.
## **Research**
Remember, if you want to be great at the strategy world, you need to be a research ninja. Understanding your audience is like having a superpower. Who are your customers? What do they want? Do they want budget-friendly solutions or premium services? You’ll need to bust out the surveys, focus groups, and market research reports. When I am playing a baking wizard, I often channel my inner Sherlock Holmes. I try to sleuth out the latest trends, see what my fellow creative bakers are doing, and try to find eggless substitutes for recipes to cater to my vegetarian clients. I also try to concoct my own unique recipes to offer something unique and interesting that aligns with my client’s interests.
Sometimes clients may not have a clear idea of what they want, unlike when they do have a specific vision. Use these moments when clients are uncertain but still seek a positive experience and solutions from you. This is your opportunity to leverage creative ideas and explore unique solutions.
I've got a secret stash of unique cake ideas and recipes just waiting for that ideal "surprise me" request from a client. I leverage these opportunities when clients are uncertain yet eager for a positive experience and solutions from me. In business strategy however, you may not always encounter a customer asking for a strategic plan with a "surprise me" approach. Nevertheless, having a secret stash of ideas is invaluable because you never know when an opportunity might arise to use it.
## **Patience and timing**
Patience and timing are crucial in both baking and business strategy. It's like waiting for your cake to rise without peeking in the oven every five seconds or hitting refresh on your analytics dashboard every minute.
For bakers, it's an exercise in zen-like patience. You meticulously follow the recipe, resisting the urge to open the oven door and peek at your masterpiece in progress. And let's not forget the cooling phase, waiting for that cake to chill out while you resist the temptation to frost it prematurely.
As a strategist, it's a similar waiting game. You plant the seeds of your brilliant strategy, knowing full well that Rome wasn't built in a day and neither is your market dominance. Just like a perfectly baked cake, you know that good things come to those who wait and occasionally resist the urge to panic-bake another strategy on top of the first.
So, whether you're staring at your oven or your business timeline, remember: patience isn't just a virtue, it's a survival skill in both the kitchen and the boardroom.
## **Adaptability and Innovation**
Adaptability and innovation are like the secret ingredients that turn a bland dish into a culinary masterpiece or a complicated feature into a sought-after solution provider.
In the kitchen, it's all about experimenting with new recipes and techniques. Your grandma's apple pie recipe may be a classic and a keeper, but how good does it taste with just a pinch of chili powder in the mix? What if you substitute the apples with mango, or sprinkle rock salt on top? Now all of a sudden you're the talk of the town with a brave new dessert! Innovation does taste good, after all!
Similarly, in business strategy, it's about staying ahead of the curve. You don’t want to always follow the trends; you want to lead the pack. When the market shifts like a souffle in the oven, you should adapt and pivot like how chefs do in a high-stakes masterchef competition.
And there you have it—baking and business strategy. Two worlds colliding in a cosmic dance of flour, frosting, and forecasts. Just like baking, strategy is equal parts science and art. You need to understand the technical elements, like SEO and analytics, but you also need to tap into your creative side to make it truly engaging and memorable.
So whether you're kneading dough or negotiating deals, remember to whisk away doubts, bake with gusto, and always leave room for a little innovation in your recipe for success. But remember, at the end of the day, succeeding in either the kitchen or the business strategy sphere comes down to one key ingredient: **passion**. If you don't genuinely care about what you're creating, it'll be painfully obvious to anyone who consumes it. So whether you're whipping up a souffle or a strategic business plan, make sure you're putting your heart into it. Your audience (and your taste buds) will thank you!
| vidyaiyengar |