id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,897,973 | Naples Golf Cart | Naples Golf Cart Address: 901 Airport-Pulling Rd S, Naples, FL 34104 Phone: (239) 224-3464 Email:... | 0 | 2024-06-23T18:08:36 | https://dev.to/naplesgolfcart/naples-golf-cart-3agn | golf, cart | **Naples Golf Cart
Address: 901 Airport-Pulling Rd S, Naples, FL 34104
Phone: (239) 224-3464
Email: media@naplesgolfcart.com
Website: https://naplesgolfcart.com/
GMB Profile: https://www.google.com/maps?cid=6886198017560260772**
Naples Golf Cart, situated at 901 Airport-Pulling Rd S, Naples, FL 34104, United States, is a distinguished provider of top-quality golf carts. Our commitment to delivering excellence in the golfing experience is reflected in our extensive range of carts and impeccable service.
Conveniently located in Naples, Florida, our address at 901 Airport-Pulling Rd S serves as a central hub for golf enthusiasts seeking reliable and innovative golf cart solutions. At Naples Golf Cart, we understand the importance of having the right equipment to enhance your golfing experience, and our diverse selection of carts caters to varying preferences and needs.
Our dedication to customer satisfaction is unwavering. Naples Golf Cart goes beyond being a mere supplier; we are your golfing partner, committed to providing expert guidance and support. Whether you're a seasoned player looking for advanced features or a beginner in need of a reliable ride, our team is here to assist you in making informed decisions.
Our commitment extends beyond the point of purchase. We offer reliable maintenance services to keep your golf cart in optimal condition, ensuring longevity and performance. Naples Golf Cart is not just a supplier; we are a comprehensive solution for all your golf cart needs.
Visit us at 901 Airport-Pulling Rd S, Naples, FL 34104, and immerse yourself in the world of Naples Golf Cart. Elevate your golfing experience with quality, reliability, and unmatched service. Naples Golf Cart – Where Excellence Meets the Greens.
**Store Hours:**
Monday - Friday: 9:00 am - 5:00 pm
Saturday: 10:00 am - 4:00 pm
Keywords: Golf Carts Naples FL, Naples Golf Cart | naplesgolfcart |
1,897,971 | To build Twilio AI Assistants | introduced Twilio AI Assistants, a platform to build customer-aware autonomous agents. These... | 0 | 2024-06-23T17:43:42 | https://dev.to/olatunjiayodel9/to-build-twilio-ai-assistants-hhp | twiliochallenge, ai, twilio |

1. Twilio AI Assistants:
- Twilio introduced AI Assistants, a platform to build customer-aware autonomous agents. These Assistants can handle complex conversations, answer questions, and perform tasks without rigid decision trees or intent-based training.
- They leverage Large Language Models (LLMs) and automatically handle conversation history using Customer Memory. You can define Tools to make API requests and access Knowledge sources for domain expertise.
- If an Assistant can't handle a question, it can seamlessly hand over the conversation to a human agent in the Twilio Flex contact center.
2. SMS Chatbots with Twilio and OpenAI:
- Create an AI-powered SMS chatbot using Twilio and OpenAI. This chatbot can handle customer inquiries, book appointments, and provide information.
- Here's a step-by-step guide on how to build this experience using Twilio and OpenAI.
3. Google Cloud Integration:
- Twilio and Google Cloud have expanded their partnership to improve customer experiences with AI.
- You can deploy sophisticated AI-powered virtual agents using Google Cloud Contact Center AI integrated with Twilio Flex. | olatunjiayodel9 |
1,897,970 | KuyhAa | Get all the latest PC software cracks for free, download fully working full-version PC software... | 0 | 2024-06-23T17:41:53 | https://dev.to/softwaresde/kuyhaa-2d17 | Get all the latest PC software cracks for free, download 100% working full-version PC software including serial key generators, patches, and activators.
[KuyhAa](https://kuyhaapro.com/) | softwaresde | |
1,897,969 | How to Integrate Embedded MongoDB for Unit Testing in a Spring Application | Unit testing is a crucial part of software development, ensuring that individual components of an... | 0 | 2024-06-23T17:41:25 | https://dev.to/fullstackjava/how-to-integrate-embedded-mongodb-for-unit-testing-in-a-spring-application-49pi | springboot, java, webdev, mongodb | Unit testing is a crucial part of software development, ensuring that individual components of an application work as expected. When it comes to testing data persistence layers in a Spring application, MongoDB is a popular choice. To facilitate unit testing with MongoDB, we can use an embedded MongoDB instance. This eliminates the need for an actual running MongoDB server, making tests more reliable and easier to set up.
In this blog, we will walk through the steps required to integrate embedded MongoDB into a Spring application for unit testing.
## Table of Contents
1. [Introduction](#introduction)
2. [Prerequisites](#prerequisites)
3. [Setting Up the Project](#setting-up-the-project)
4. [Adding Dependencies](#adding-dependencies)
5. [Configuring Embedded MongoDB](#configuring-embedded-mongodb)
6. [Creating a Repository](#creating-a-repository)
7. [Writing Unit Tests](#writing-unit-tests)
8. [Running the Tests](#running-the-tests)
9. [Conclusion](#conclusion)
## Introduction
Embedded MongoDB allows developers to run a MongoDB server embedded within the Java application, specifically for testing purposes. This ensures that tests are run in isolation and are not dependent on an external MongoDB instance. By using embedded MongoDB, we can simulate a real MongoDB environment and perform integration tests on our repositories.
## Prerequisites
Before we begin, ensure you have the following installed on your system:
- Java Development Kit (JDK) 8 or later
- Maven or Gradle build tool
- An Integrated Development Environment (IDE) like IntelliJ IDEA or Eclipse
## Setting Up the Project
First, create a new Spring Boot project using Spring Initializr or your preferred method. Ensure that you include `Spring Data MongoDB` as a dependency.
## Adding Dependencies
To use embedded MongoDB, we need to add the `de.flapdoodle.embed.mongo` dependency to our project. Additionally, we will add Spring Data MongoDB for interacting with MongoDB.
If you are using Maven, add the following dependencies to your `pom.xml` file:
```xml
<dependencies>
<!-- Spring Data MongoDB -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
<!-- Embedded MongoDB -->
<dependency>
<groupId>de.flapdoodle.embed</groupId>
<artifactId>de.flapdoodle.embed.mongo</artifactId>
<version>3.2.1</version>
</dependency>
<!-- Spring Boot Test -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
```
For Gradle, add the following dependencies to your `build.gradle` file:
```groovy
dependencies {
// Spring Data MongoDB
implementation 'org.springframework.boot:spring-boot-starter-data-mongodb'
// Embedded MongoDB
testImplementation 'de.flapdoodle.embed:de.flapdoodle.embed.mongo:3.2.1'
// Spring Boot Test
testImplementation 'org.springframework.boot:spring-boot-starter-test'
}
```
## Configuring Embedded MongoDB
Next, we need to configure Spring to use the embedded MongoDB instance during tests. Create a configuration class for this purpose.
```java
package com.example.demo;

import org.springframework.boot.test.context.TestConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.DependsOn;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;
import de.flapdoodle.embed.mongo.MongodExecutable;
import de.flapdoodle.embed.mongo.MongodStarter;
import de.flapdoodle.embed.mongo.config.MongodConfig;
import de.flapdoodle.embed.mongo.config.Net;
import de.flapdoodle.embed.mongo.distribution.Version;
import java.io.IOException;

@TestConfiguration
public class EmbeddedMongoConfig {

    // Start the embedded server when the test context is created;
    // Spring calls stop() on it when the context shuts down.
    @Bean(destroyMethod = "stop")
    public MongodExecutable embeddedMongoServer() throws IOException {
        // Flapdoodle 3.x uses the immutable MongodConfig builder
        // (the older IMongodConfig/MongodConfigBuilder API was removed).
        MongodConfig mongodConfig = MongodConfig.builder()
                .version(Version.Main.PRODUCTION)
                .net(new Net("localhost", 27017, false))
                .build();
        MongodExecutable executable = MongodStarter.getDefaultInstance().prepare(mongodConfig);
        executable.start();
        return executable;
    }

    @Bean
    @DependsOn("embeddedMongoServer")
    public MongoTemplate mongoTemplate() {
        // SimpleMongoClientDatabaseFactory replaces the deprecated SimpleMongoClientDbFactory
        return new MongoTemplate(new SimpleMongoClientDatabaseFactory("mongodb://localhost:27017/test"));
    }
}
```
This configuration class sets up an embedded MongoDB instance that runs on `localhost` at port `27017`.
## Creating a Repository
Create a simple repository interface to interact with MongoDB. For this example, let's assume we have a `User` entity.
```java
package com.example.demo.repository;
import com.example.demo.model.User;
import org.springframework.data.mongodb.repository.MongoRepository;
public interface UserRepository extends MongoRepository<User, String> {
}
```
Create the `User` entity:
```java
package com.example.demo.model;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
@Document
public class User {
@Id
private String id;
private String name;
private String email;
// Constructors, getters, and setters
}
```
## Writing Unit Tests
Now, let's write unit tests to verify the functionality of the `UserRepository`. We'll use Spring's testing support to load the application context and the embedded MongoDB instance.
```java
package com.example.demo;
import com.example.demo.model.User;
import com.example.demo.repository.UserRepository;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.Import;
import org.springframework.test.context.ActiveProfiles;
import java.util.Optional;
import static org.assertj.core.api.Assertions.assertThat;
@SpringBootTest
@Import(EmbeddedMongoConfig.class)
@ActiveProfiles("test")
public class UserRepositoryTests {
@Autowired
private UserRepository userRepository;
@Test
public void testSaveAndFindUser() {
User user = new User();
user.setName("John Doe");
user.setEmail("john.doe@example.com");
userRepository.save(user);
Optional<User> foundUser = userRepository.findById(user.getId());
assertThat(foundUser).isPresent();
assertThat(foundUser.get().getName()).isEqualTo("John Doe");
assertThat(foundUser.get().getEmail()).isEqualTo("john.doe@example.com");
}
}
```
## Running the Tests
Run the tests using your IDE or build tool. The embedded MongoDB instance will start automatically before the tests and shut down after the tests are completed.
If using Maven, run the tests with:
```sh
mvn test
```
If using Gradle, run the tests with:
```sh
gradle test
```
## Conclusion
Integrating embedded MongoDB into a Spring application for unit testing provides a convenient and efficient way to test MongoDB-related functionality without the need for an external MongoDB server. By following the steps outlined in this blog, you can set up and use embedded MongoDB in your Spring application, ensuring your data persistence layer is thoroughly tested.
Using embedded MongoDB ensures that your tests are isolated, repeatable, and easy to manage, leading to more robust and reliable applications. | fullstackjava |
1,897,968 | The new and shiny | Every once in a while I come across something new (to me) and interesting, and it makes me wonder...... | 0 | 2024-06-23T17:39:08 | https://dev.to/armen138/the-new-and-shiny-4eil | gamedev, typescript, miniscript | Every once in a while I come across something new (to me) and interesting, and it makes me wonder... could I build a game engine on this? (spoiler alert: the answer is always yes). Recently I came across MiniScript, an embeddable scripting language that has apparently been around for years, but somehow escaped my attention until now. There are at least 3 implementations of the MiniScript interpreter, in C++, C# and TypeScript, which allows a lot of flexibility in what tools and frameworks to embed with. For my experiment, I've chosen TypeScript, as it is a language I use often in my day to day work, and allows for fast prototyping with excellent tooling.
## So what do I call it?
Oh, naming things - certainly one of the hardest parts of the entire project. I will base a lot of this project on concepts I used to build the A2D engine (over 12 years ago now!), but calling it A2D2 just sounds like a drone I'm not looking for. I've asked a chatbot to help me come up with a name and the best it could do was "MiniJamLab", which I guess isn't the worst. Suggestions welcome, though.
## Project Goals
First I want to say I'm not looking to build the next big thing, competing with the likes of Godot (I love Godot).
So, goals:
#### Avoid callbacks
12 years ago I built A2D around event listeners - which meant, as was customary for the time in JavaScript, *callback hell*. This time around, I want to avoid callbacks as much as possible, and instead keep things synchronous where possible, and use async/await where not.
#### Separate behavior scripts (using MiniScript!)
There should be no need for boilerplate code in the behavior scripts. No creating drawing contexts, manipulating pixels, etc. Let the engine handle that stuff, and write the code that actually makes things happen.
#### Testable code
This one gets me every time. I get excited and have half of the thing built and running a game of sorts before I realize I haven't added *any* unit tests! This time will be different. TDD is the name of the game, and it forces me to think about architecture differently, producing more testable code.
#### Leverage existing tools and formats
Why build a map editor if a perfectly good one exists already? Why invent a configuration file format if JSON and YAML are perfectly fine for the job? Let's not create headaches for solved problems (I know, the entire "Game Engine" problem has been solved, but let me have a little fun with this, ok?)
## Lessons Learned
And finally, I'm re-reading a lot of my own old code, particularly in the old A2D engine code, to see how I solved the common problems at the time, and what I can learn from my mistakes.
More updates to come...
| armen138 |
1,897,967 | 🚀 Exploring Predictive Analysis of Breast Tumor Diagnosis with Streamlit and SVM! 🚀 | Hey Devs! 👋 I'm excited to share my latest project where I've combined the power of Python,... | 0 | 2024-06-23T17:27:31 | https://dev.to/amna200123/exploring-predictive-analysis-of-breast-tumor-diagnosis-with-streamlit-and-svm-dh3 | machinelearning, datascience, healthtech, python | Hey Devs! 👋 I'm excited to share my latest project where I've combined the power of Python, Streamlit, and Support Vector Machines (SVM) to build an interactive app for predicting breast tumor diagnoses. Here’s a glimpse into what I’ve created:
🔍 Project Overview:
Breast cancer is a significant health concern, and early detection is crucial. My project utilizes fine-needle aspiration test data to classify tumors as malignant or benign. This application aims to support healthcare professionals in making informed decisions.
📊 Features and Highlights:
Data Upload and Exploration: Users can upload CSV or Excel files to explore data distributions and summary statistics instantly.
Exploratory Data Analysis (EDA): Visualize data with histograms, density plots, and correlation matrices to uncover insights before model training.
Data Preprocessing: Automate preprocessing steps like encoding categorical data and handling missing values to prepare data for machine learning.
Model Training with SVM: Build and optimize SVM models using Grid Search to achieve the best performance in classifying tumors.
Evaluation and Visualization: Assess model accuracy with classification reports, confusion matrices, and ROC curves. Visualize decision boundaries to understand how SVM classifies data points.
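The post does not include code, but a rough sketch of the workflow it describes (an SVM pipeline tuned with grid search, shown here on scikit-learn's bundled breast cancer dataset rather than the author's actual data) might look like this:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Load a labeled tumor dataset (malignant vs. benign)
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scale features, then fit an SVM; grid-search the regularization strength C
pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)

print("best C:", grid.best_params_["svc__C"])
print("held-out accuracy:", round(grid.score(X_test, y_test), 3))
```

The dataset, parameter grid, and pipeline steps here are illustrative assumptions, not the app's actual code.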
🔧 Tech Stack:
Python: For data processing, modeling, and visualization.
Streamlit: Interactive web app development.
Scikit-learn: Machine learning models and pipelines.
Matplotlib and Seaborn: Data visualization.
📈 Why It Matters:
This project showcases how machine learning can aid in healthcare diagnostics, emphasizing the importance of data-driven decisions in medical practices. It's a testament to the power of AI in making a real impact on people's lives.
👩💻 Join Me!:
Explore the app, dive into the code, and let's discuss how we can leverage technology for healthcare innovation. Your feedback and contributions are invaluable!
🔗 https://analysis-of-breast-tumor-diagnosis-bxvsw5lwbt4hbgnhrfxeae.streamlit.app/ | amna200123 |
1,897,961 | Introduction to PHP Development | Hey DEV Community! 👋 Are you ready to dive into PHP development? In this post, we'll explore the... | 0 | 2024-06-23T17:15:36 | https://dev.to/amna200123/introduction-to-php-development-2agh | php, webdev, backend, programming | Hey DEV Community! 👋 Are you ready to dive into PHP development? In this post, we'll explore the fundamentals of PHP, its syntax, variables, control structures, functions, and more. Whether you're a beginner or looking to refresh your PHP skills, this guide will help you get started on building dynamic web applications with PHP! | amna200123 |
1,897,959 | AWS Cloud Practitioner Essentials | A post by Bhogadi Vidhey | 0 | 2024-06-23T17:13:16 | https://dev.to/vidheyb/aws-cloud-practitioner-essentials-525k | vidheyb | ||
1,897,958 | Solutions Architect Learning Plan Badge Assessment | A post by Bhogadi Vidhey | 0 | 2024-06-23T17:12:45 | https://dev.to/vidheyb/solutions-architect-learning-plan-badge-assessment-35n2 | vidheyb | ||
1,897,957 | Kubernetes hack | NOTE: this is an updated copy of my post in medium, where I'm not writing anymore. Have you lost... | 0 | 2024-06-23T17:08:52 | https://dev.to/caruccio/kubernetes-hack-1d0p | kubernetes, shell | > NOTE: this is an updated copy of my [post in medium](https://medium.com/@mateus.caruccio/kubernetes-hack-lost-ssh-access-to-node-5dd36d35c74c), where I'm not writing anymore.
Have you lost ssh access to one of your Kubernetes nodes? Why do you even need ssh access to nodes in the first place? Well, maybe something is stuck, or you need to see a config with your own eyes… I don’t know and I don’t care, they are your servers, not mine…
> I’m assuming you have admin level into kubernetes API.
Talk is cheap, show me the code®:
```sh
$ NODE_NAME=master-0
$ kubectl create -n kube-system -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: root-shell-$NODE_NAME
namespace: kube-system
spec:
nodeName: $NODE_NAME
containers:
- command:
- /bin/cat
image: alpine:3
name: root-shell
securityContext:
privileged: true
tty: true
stdin: true
volumeMounts:
- mountPath: /host
name: hostroot
hostNetwork: true
hostPID: true
hostIPC: true
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
volumes:
- hostPath:
path: /
name: hostroot
EOF
```
This manifest creates a privileged pod on the node master-0 (change it to your node name) running /bin/cat forever. Now you simply exec into it and chroot into the host's root filesystem:
```sh
$ kubectl -n kube-system exec -it root-shell-$NODE_NAME -- chroot /host /bin/bash
[root@master-0 /]# id
uid=0(root) gid=0(root) groups=0(root)
```
Profit!
PS: Here is a DaemonSet for the lazy like me.
```sh
$ kubectl create serviceaccount -n kube-system root-shell
### For OKD/Openshift clusters only:
$ oc adm policy add-scc-to-user privileged -n kube-system -z root-shell
$ kubectl create -n kube-system -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: root-shell
namespace: kube-system
spec:
revisionHistoryLimit: 0
selector:
matchLabels:
app: root-shell
template:
metadata:
labels:
app: root-shell
spec:
terminationGracePeriodSeconds: 0
containers:
- command:
- /bin/cat
image: alpine:3
name: root-shell
tty: true
stdin: true
volumeMounts:
- mountPath: /host
name: hostroot
securityContext:
privileged: true
hostNetwork: true
hostPID: true
hostIPC: true
serviceAccountName: root-shell
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
volumes:
- hostPath:
path: /
name: hostroot
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
type: RollingUpdate
EOF
```
## AWS Bottlerocket
Since Bottlerocket is now a really nice OS alternative for EKS clusters, let's use its control/admin containers to gain root-level access to the host.
As you know, Bottlerocket is a read-only, container-oriented operating system. This gives us a lot of benefits, but you can't simply `kubectl exec` + `chroot` into it as we did in the stone age. It turns out there are two system containers: control and admin. We want the admin one, since the control container is for administrative tasks (like upgrades and reboots).
First, we need a plain container with access to the host's `apiclient` binary:
```sh
$ kubectl create -n kube-system -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: apiclient
namespace: kube-system
spec:
revisionHistoryLimit: 0
selector:
matchLabels:
app: apiclient
template:
metadata:
labels:
app: apiclient
spec:
containers:
- command:
- sleep
- infinity
image: fedora
imagePullPolicy: Always
name: regain-access
securityContext:
seLinuxOptions:
level: s0
role: system_r
type: control_t
user: system_u
volumeMounts:
- mountPath: /usr/bin/apiclient
name: apiclient
readOnly: true
- mountPath: /run/api.sock
name: apiserver-socket
restartPolicy: Always
terminationGracePeriodSeconds: 0
volumes:
- hostPath:
path: /usr/bin/apiclient
type: File
name: apiclient
- hostPath:
path: /run/api.sock
type: Socket
name: apiserver-socket
updateStrategy:
rollingUpdate:
maxUnavailable: 100%
type: RollingUpdate
EOF
```
With that in place, let's use `apiclient exec` subcommand to activate and enter admin container:
```sh
$ kubectl exec -i -t -n kube-system apiclient-xtwxh -- apiclient exec -t control enter-admin-container
Confirming admin container is enabled...
Waiting for admin container to start...
Entering admin container
Welcome to Bottlerocket's admin container!
╱╲
╱┄┄╲ This container provides access to the Bottlerocket host
│▗▖│ filesystems (see /.bottlerocket/rootfs) and contains common
╱│ │╲ tools for inspection and troubleshooting. It is based on
│╰╮╭╯│ Amazon Linux 2, and most things are in the same places you
╹╹ would find them on an AL2 host.
To permit more intrusive troubleshooting, including actions that mutate the
running state of the Bottlerocket host, we provide a tool called "sheltie"
(`sudo sheltie`). When run, this tool drops you into a root shell in the
Bottlerocket host's root filesystem.
[root@admin]#
```
At the time of this writing, all you have to do now is execute `sudo sheltie` and voilà!
```sh
[root@admin]# sudo sheltie
bash-5.1# ps 1
PID TTY STAT TIME COMMAND
1 ? Ss 14:25 /sbin/init systemd.log_target=journal-or-kmsg systemd.log_color=0 systemd.show_status=true
bash-5.1#
``` | caruccio |
1,897,956 | Mastering Visual Hierarchy in UI Design: Key Principles and Techniques | Day 5: Learning UI/UX Design 👋 Hello, Dev Community! I'm Prince Chouhan, a B.Tech CSE student... | 0 | 2024-06-23T17:08:37 | https://dev.to/prince_chouhan/mastering-visual-hierarchy-in-ui-design-key-principles-and-techniques-11l7 | ui, uidesign, ux, uxdesign | Day 5: Learning UI/UX Design
---
👋 Hello, Dev Community!
I'm Prince Chouhan, a B.Tech CSE student with a passion for UI/UX design. Today, I'm excited to share my learnings on visual hierarchy, a fundamental principle in UI design that enhances usability and guides user attention.
---
🗓️ Day 5 Topic: Visual Hierarchy in UI Design
---
📚 Today's Learning Highlights:
1. Concept Overview:
Visual hierarchy is the arrangement and presentation of elements in a way that signifies their importance. It helps guide users' attention and improves usability by making it easier for users to find and understand information.
2. Key Factors Contributing to Visual Hierarchy:
- Size:
- Larger Elements: Draw attention first and indicate higher importance.
- Smaller Elements: Indicate secondary information.
- Application: Headlines are larger than body text; primary buttons are larger than secondary ones.
- Color:
- Contrast: High contrast highlights important items.
- Bold Colors: Attract attention.
- Subdued Colors: Recede into the background, emphasizing primary elements.
- Application: Use accent colors for calls-to-action (CTAs), warnings, or key information.
- Placement:
- Top and Center: Elements at the top or center are perceived as more important.
- Left to Right: In cultures that read left to right, elements on the left side are seen first.
- Application: Place navigation menus at the top, important buttons in prominent positions.
- Proximity (Law of Proximity):
- Grouping: Elements close to each other are perceived as related.
- Spacing: Proper spacing delineates different sections and groups related items.
- Application: Group related form fields together, ensure related content is clustered.
- White Space (Negative Space):
- Definition: Empty space around elements in a design.
- Emphasis: More white space around an element makes it stand out.
- Clarity: Reduces clutter and enhances readability.
- Application: Use ample margins and padding around key elements, avoid overcrowding.
- Repetition:
- Consistency: Repeating styles indicate that elements are related.
- Pattern Recognition: Helps users recognize and predict the behavior of elements.
- Application: Consistent use of colors, fonts, and design elements for related items (e.g., all buttons for primary actions look the same).
---
🚀 Future Learning Goals:
Next, I'll explore the principles of consistency and simplicity in UI design.
---
📢 Community Engagement:
- How do you apply visual hierarchy in your designs?
- What challenges have you faced in establishing visual hierarchy?
---
💬 Quote of the Day:
_"Design is intelligence made visible." – Alina Wheeler_

Thank you for reading! Stay tuned for more updates as I continue my journey in UI/UX design.
#UIUXDesign #LearningJourney #DesignThinking #PrinceChouhan | prince_chouhan |
1,897,955 | Aws Certified Cloud Practitioner | A post by Bhogadi Vidhey | 0 | 2024-06-23T17:07:53 | https://dev.to/vidheyb/aws-certified-cloud-practitioner-5boa | vidheyb | ||
1,897,954 | AWS Well-Architected | A post by Bhogadi Vidhey | 0 | 2024-06-23T17:07:11 | https://dev.to/vidheyb/aws-well-architected-5104 | vidheyb | ||
1,897,953 | A Beginner’s Guide to Machine Learning: Everything You Need to Know to Get Started | Machine learning (ML) is an interesting area of study that utilises computational methods,... | 0 | 2024-06-23T17:06:17 | https://dev.to/abhinav_yadav_554cab962bb/a-beginners-guide-to-machine-learning-everything-you-need-to-know-to-get-started-4oe6 | machinelearning, ai, beggine, beginners |
Machine learning (ML) is a fascinating area of study that combines computational methods, statistical analysis, and domain knowledge to build systems capable of learning from data and making predictions or decisions based on it. People of every age and background, from students to professionals and avid tech lovers, can benefit from a basic grasp of what ML is. This guide will orient you to the basics and set you on your way to becoming a principles-driven learner.
The topics covered in this article are:
- What is Machine Learning?
- Types of Machine Learning
- Steps to Getting Started with Machine Learning
- Tools and Libraries
## What is Machine Learning ?
In simpler words, machine learning is a subfield of artificial intelligence concerned with giving machines the capability to imitate human behaviour, i.e., to learn on their own.
Here, machines learn from the hidden patterns within datasets, which helps them make predictions.
You can see numerous examples of machine learning around you. For instance, take email spam filtering: email services use machine learning to filter out spam emails. They collect a large dataset of emails labeled as "spam" or "not spam" and extract features such as email content, sender information, and the presence of links.
## Types of Machine Learning
There are three ways to apply machine learning, depending on the needs of the business:
**Supervised Learning**
Training the algorithm using labeled input and output data, i.e., teaching the machine what to learn.
**Unsupervised Learning**
Training the algorithm with no labeled data, i.e., the machine automatically finds what to learn.
**Reinforcement Learning**
The algorithm takes actions to maximise cumulative reward, i.e., the machine learns from its own mistakes at every step.
Now let's look at each of these methods in more detail:
1.**Supervised Learning**
Supervised learning is quite similar to explaining to a child what fruits are by showing them specific examples of apples, bananas, and oranges. The child learns to relate distinguishing features such as colour and shape to each fruit. Later, the child can classify or name new fruits on the basis of those learned associations. Likewise, in supervised learning, we use data that has already been labeled to train a model, which can afterward label other unseen data.
Real-life examples:
- Email Spam Filtering (Classification) – The algorithm takes historical spam and non-spam emails as input and learns patterns in the data that separate spam from the rest.
- Stock Price Prediction (Regression) – Historical market data is fed to the algorithm, which uses regression analysis to predict future prices.
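The email spam example can be sketched in a few lines of scikit-learn; the tiny dataset and its two features are invented purely for illustration:

```python
from sklearn.linear_model import LogisticRegression

# Invented toy "spam" data: each email is [number_of_links, exclamation_marks],
# labeled 1 for spam and 0 for not spam
X_train = [[8, 5], [7, 6], [9, 4], [0, 0], [1, 1], [0, 2]]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # learn patterns from the labeled examples

# Classify two unseen emails: a link-heavy one and a plain one
predictions = model.predict([[6, 5], [1, 0]])
print(predictions)  # expected: [1 0]
```

Any classifier could stand in here; logistic regression is simply one of the easiest supervised models to start with.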
2.**Unsupervised Learning**
Unsupervised learning is like giving a child a mix of different fruits without telling them the names. The child groups similar-looking fruits together based on their features like color and shape. Similarly, in unsupervised learning, a model identifies patterns and clusters in data without predefined labels.
For Example:
The algorithm is asked to group data points with similar traits. These groups are called clusters, and the process is called clustering. In retail analytics, customers are often clustered based on their purchases and other behaviours.
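A minimal clustering sketch with scikit-learn's KMeans, using an invented customer dataset of [annual spend, visits per month]:

```python
from sklearn.cluster import KMeans

# Unlabeled customer data: [annual spend, visits per month]
customers = [[500, 2], [520, 3], [480, 2],
             [5000, 20], [5200, 22], [4900, 18]]

# Ask for two clusters; the algorithm decides the grouping by itself
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(customers)
print(labels)  # the first three customers share one label, the last three the other
```

Note that no labels were provided: the two groups emerge purely from similarity in the data.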
3.**Reinforcement Learning**
Reinforcement learning is like teaching a dog new tricks through trial and error. The dog receives rewards for performing desired actions and learns to maximise its rewards over time through exploration and feedback. Similarly, in reinforcement learning, an agent learns to make decisions in an environment to maximise a cumulative reward.
For Example:
An exciting example of reinforcement learning occurs when computers learn to play video games by themselves. The algorithm keeps interacting with the game environment through a series of actions, and the environment, in turn, gives a reward or punishment based on the action taken.
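Full game-playing agents need heavyweight frameworks, but the core reward-driven loop can be illustrated with a tiny epsilon-greedy bandit in plain Python (the two "slot machine" payout probabilities are invented for the example):

```python
import random
random.seed(0)

# Two "slot machine" arms with invented payout probabilities; the agent
# must discover which one pays more through trial and error
true_probs = [0.3, 0.8]
values, counts = [0.0, 0.0], [0, 0]

for step in range(2000):
    # explore a random arm 10% of the time, otherwise exploit the best-known arm
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = values.index(max(values))
    reward = 1 if random.random() < true_probs[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running average reward

print("learned best arm:", values.index(max(values)))
```

The agent is never told which arm is better; it learns from the rewards of its own actions, which is the essence of reinforcement learning.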
## Steps to Getting Started with Machine Learning
Step 1 : **Collecting Data**
Machines initially learn from data, so it is very important to collect reliable data so that the machine learning model can find the correct patterns. The quality of the data fed to the machine decides the accuracy of the model. If the data is outdated or full of errors, the predictions will be wrong.
Step 2: **Preparing the Data**
After gathering all the data, we prepare it. First, shuffle the data to ensure even distribution and eliminate order bias. Next, clean the data by removing unwanted entries, handling missing values, eliminating duplicates, and converting data types as needed, which may involve restructuring rows and columns. Then, visualise the data to understand its structure and the relationships between variables. Finally, split the cleaned data into a training set for the model to learn from and a testing set to evaluate the model's accuracy.
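A hedged sketch of the shuffle-and-split part of this step, using scikit-learn (the arrays are placeholders, not real data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder dataset: 100 samples with 3 features each, plus binary labels
X = np.arange(300).reshape(100, 3)
y = np.array([0, 1] * 50)

# shuffle=True (the default) removes order bias; 20% is held out for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

print(len(X_train), len(X_test))  # 80 20
```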
Step 3: **Choosing a Model**
A machine learning model determines the output we get after running a machine learning algorithm on the collected data. We choose the model that is relevant to our needs. Over time, many machine learning models have been developed, which we will learn about later in this series.
Step 4: **Training the Model**
This is the most important step in the machine learning process. In this step, we pass the prepared data to our machine learning model so that it can find patterns and make predictions. The model learns from the data so that it can accomplish the task it was set. Over time, with training, the model gets better at predicting.
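"Finding the patterns" can be as small as fitting a straight line. In this toy sketch, plain gradient descent gradually learns the slope and intercept of made-up data generated from y = 2x + 1; every pass over the data nudges the model a little closer.

```python
# Made-up training data generated from the "true" rule y = 2x + 1.
points = [(x, 2 * x + 1) for x in range(10)]

# Start with a model that knows nothing: y = w*x + b with w = b = 0.
w, b, lr = 0.0, 0.0, 0.01
for epoch in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / len(points)
    grad_b = sum(2 * (w * x + b - y) for x, y in points) / len(points)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # the model has learned roughly w=2, b=1
```

The same "repeatedly adjust parameters to reduce error" idea is what happens, at much larger scale, inside real training loops.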
Step 5: **Evaluating the Model**
After training our model, it is important to check how it performs on unseen data. If we evaluate it on the same data it was trained on, the results will not be reliable, because the model is already familiar with that data.
Step 6: **Parameter Tuning**
Parameter tuning is done after training and evaluating our model, to check whether there is any scope for improving its accuracy. These parameters (often called hyperparameters) are the variables of the model that the programmer sets, rather than values the model learns from the data.
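A bare-bones version of one common tuning strategy, grid search, is shown below. The score function here is a made-up stand-in for "train the model with these settings and measure its accuracy on a validation set"; the parameter names and values are illustrative.

```python
def validation_score(lr, depth):
    # Stand-in for "train a model with (lr, depth) and measure accuracy".
    # This fake score peaks at lr=0.1 and depth=3.
    return 1.0 - abs(lr - 0.1) - 0.05 * abs(depth - 3)

# Try every combination of the candidate values and keep the best one.
best, best_score = None, float("-inf")
for lr in [0.001, 0.01, 0.1, 1.0]:
    for depth in [1, 3, 5, 7]:
        score = validation_score(lr, depth)
        if score > best_score:
            best, best_score = (lr, depth), score

print(best)
```

Libraries such as Scikit-Learn wrap this exact loop (plus cross-validation) in ready-made helpers, but the underlying idea is just "try settings, keep the winner".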
Step 7: **Deploy the Model**
Now we can deploy our model for practical use, such as in a web application or a mobile app.
## Tools and Libraries
**Programming Languages** :
**Python**: Widely used for implementing machine learning because of its readability and extensive library support.
**R**: R is very popular for statistical modelling and data analysis.
**Libraries**:
**Scikit-Learn**: It provides simple and efficient tools for data mining and data analysis.
**TensorFlow**: This is an open-source platform for machine learning, used particularly for deep learning.
**Keras**: This is a high-level neural networks API, running on top of TensorFlow.
**Pandas**: Pandas is useful for data manipulation and analysis.
**NumPy**: It supports large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions.
> Machine learning is a rapidly growing field with a vast array of applications. Starting with the basics and gradually exploring more advanced topics can set you on a path to becoming proficient in this exciting domain. Whether you’re looking to apply ML to solve practical problems or aiming for a career in data science, the journey begins with a solid understanding of the fundamentals.
Happy Learning !
Please do comment below and let me know whether you liked the content.
If you have any questions or ideas, or want to collaborate on a project, here is my [linkedin](https://www.linkedin.com/in/abhinav-yadav-482a4a26b/)
| abhinav_yadav_554cab962bb |
1,897,951 | Arbitrary code execution with pickle | Here's why pickle is unsafe if you don't know the origin of the pickled data: import pickle import... | 0 | 2024-06-23T17:03:50 | https://dev.to/tallesl/arbitrary-code-execution-with-pickle-2407 | python | Here's why pickle is unsafe if you don't know the origin of the pickled data:
```py
import pickle
import os
# Create a malicious class
class Malicious:
def __reduce__(self):
# os.system will execute the command
return (os.system, ('echo "This is malicious code!"',))
# Serialize the malicious object
malicious_data = pickle.dumps(Malicious())
# Deserialize the malicious object (this will execute the command)
pickle.loads(malicious_data)
``` | tallesl |
1,897,950 | Understanding Scalar and Vector in Front-End Web Development with React | In front-end web development, scalar and vector quantities play a crucial role, especially when... | 0 | 2024-06-23T17:03:25 | https://dev.to/godblessed/understanding-scalar-and-vector-in-front-end-web-development-with-react-igc |
In front-end web development, scalar and vector quantities play a crucial role, especially when dealing with graphics, animations, and layout designs. Let's explore these concepts using React as our framework of choice.
**Scalar Quantities in React:**
Scalar quantities in React are used to represent singular values without direction. They are often used for defining sizes, lengths, durations, or any other numerical value that doesn't require directional information. For example:
```javascript
// Scalar value for setting the duration of an animation
const duration = 300; // 300 milliseconds
// Scalar value for setting the opacity of an element
const opacity = 0.9; // 90% opacity
```
In these cases, `duration` and `opacity` are scalar values that define specific properties of an element or animation.
**Vector Quantities in React:**
Vector quantities, however, involve both magnitude and direction. In the context of React and web development, vectors are crucial when you need to describe multi-dimensional transformations or movements. For instance:
```javascript
// Vector value for moving an element on the screen
const transform = {
x: 100, // 100 pixels to the right
y: -50 // 50 pixels up (negative value for upward movement)
};
// Applying the vector using inline styles in React
const MyComponent = () => (
<div style={{ transform: `translate(${transform.x}px, ${transform.y}px)` }}>
I'm a moving component!
</div>
);
```
Here, `transform` is a vector that contains both magnitude (the distance in pixels) and direction (rightward and upward).
**Combining Scalars and Vectors in React:**
In many cases, you'll combine scalars and vectors to achieve complex UI behaviors. For example, you might scale an element (scalar) while also rotating it (vector):
```javascript
// Scalar for scale and vector for rotation angle
const scale = 1.5; // 150% scale
const rotationAngle = 45; // 45 degrees
// Combining both in a style object
const style = {
transform: `scale(${scale}) rotate(${rotationAngle}deg)`
};
const MyStyledComponent = () => (
<div style={style}>
I'm scaled and rotated!
</div>
);
```
In this example, `scale` is a scalar quantity affecting the size of the component, while `rotationAngle` is a vector quantity defining the rotation around an axis.
**Advanced Applications of Scalars and Vectors in React**
Building upon the basics, let's delve into more advanced applications of scalar and vector quantities in React, focusing on state management and responsive design.
**State Management with Scalars and Vectors:**
React's state management often involves scalar values, such as numbers or strings, to track user inputs or application data. However, vectors come into play when managing the state of multi-dimensional data, such as the position of a draggable element:
```javascript
import React, { useState } from 'react';
const DraggableComponent = () => {
const [position, setPosition] = useState({ x: 0, y: 0 }); // Vector state
const handleDrag = (event) => {
setPosition({
x: event.clientX,
y: event.clientY
});
};
return (
<div
style={{ transform: `translate(${position.x}px, ${position.y}px)` }}
onMouseMove={handleDrag}
>
Drag me around!
</div>
);
};
```
In this example, `position` is a vector state that tracks the x and y coordinates of the draggable component.
**Responsive Design with Scalars and Vectors:**
Responsive design is another area where scalars and vectors are extensively used. Scalar values can define breakpoints for media queries, while vectors can specify complex layout transformations based on screen size:
```javascript
const breakpoints = {
mobile: '320px', // Scalar breakpoint for mobile devices
tablet: '768px', // Scalar breakpoint for tablets
};
const responsiveStyle = {
transform: window.innerWidth < parseInt(breakpoints.tablet) ?
'translate(0px)' : 'translate(100px, 50px)' // Vector for layout shift
};
const ResponsiveComponent = () => (
<div style={responsiveStyle}>
I adapt to screen sizes!
</div>
);
```
Here, `breakpoints` are scalar values used to define media query conditions, while `responsiveStyle.transform` is a vector that adjusts the component's position based on the screen width.
**Conclusion:**
Scalars and vectors are not just theoretical concepts; they have practical implications in everyday front-end development tasks. By understanding how to use these quantities in React, developers can create more interactive, intuitive, and adaptable user interfaces.
Understanding the distinction between scalar and vector quantities is essential for front-end developers. It allows for precise control over UI elements and animations. By leveraging frameworks like React, developers can easily implement these concepts into their applications to create dynamic and responsive user interfaces.
Whether you're adjusting the position, size, or rotation of elements, or controlling animation timings, scalars and vectors will be your fundamental tools. Embrace these concepts to enhance your front-end development skills.
As you continue to build applications with React or any other front-end framework, keep in mind how scalar and vector quantities can be applied to enhance your UI's functionality and design. With these tools at your disposal, you're well-equipped to tackle a wide range of development challenges. | godblessed | |
1,897,949 | [WIP] ChatGPT API for Web Developers | This post are the notes from FrontendMaster's course by Maximiliano Firtman. | 0 | 2024-06-23T17:03:05 | https://dev.to/petrussola/wip-chatgpt-api-for-web-developers-9bc | This post are the notes from FrontendMaster's [course](https://frontendmasters.com/courses/chatgpt-api/) by Maximiliano Firtman. | petrussola | |
1,897,948 | Getting Started with Aws Cloud Essentials | A post by Bhogadi Vidhey | 0 | 2024-06-23T17:02:10 | https://dev.to/vidheyb/getting-started-with-aws-cloud-essentials-2ih8 | vidheyb | ||
1,897,947 | Configure and Deploy AWS PrivateLink | A post by Bhogadi Vidhey | 0 | 2024-06-23T17:01:09 | https://dev.to/vidheyb/configure-and-deploy-aws-privatelink-2cle | vidheyb | ||
1,897,945 | DAY 1 PROJECT : PASSWORD GENERATOR | Generate Random Password: A Simple and Efficient Password Generator Creating secure and... | 0 | 2024-06-23T16:57:40 | https://dev.to/shrishti_srivastava_/day-1-project-2hg9 | webdev, javascript, beginners, programming | ## **Generate Random Password: A Simple and Efficient Password Generator**
Creating secure and strong passwords is crucial in today's digital age. To help users generate robust passwords effortlessly, I've built a simple yet effective Password Generator. Let's dive into the core features and structure of this project.
**LANGUAGES USED: HTML , CSS and JAVASCRIPT**
HTML
The foundation of our Password Generator is the HTML structure. Here’s a quick overview of the main elements:
- Title and Meta Tags: The `<head>` section includes the document's metadata and links to an external CSS file and a Google Material Icons stylesheet.
- Container: The main content is wrapped inside a `<div>` with a class container, ensuring everything is neatly organized.
- Password Display: A disabled input box (passBox) where the generated password will be displayed, accompanied by a copy icon (copyIcon) for easy copying.
- Password Length Slider: An input range slider (inputSlider) allows users to select the password length, with values ranging from 1 to 30.
- Options: Several checkboxes let users customize the characters included in the password. Options include:
- Lowercase letters (lowercase)
- Uppercase letters (uppercase)
- Numbers (numbers)
- Symbols (symbols)
- Generate Button: A button (genBtn) to trigger the password generation process.
CODE: 
CSS:
The CSS (Cascading Style Sheets) file is responsible for styling our HTML elements, ensuring a visually appealing and user-friendly design. Below, I'll explain the main sections of the CSS code used in this project.
- Body and Container Styling
- The body is styled with a light background color (#f4f4f4) and centered content using flexbox. This ensures that the generator appears in the middle of the screen, regardless of the screen size.
- The .container class defines the main box, giving it a white background, padding, rounded corners (border-radius: 8px), and a subtle shadow (box-shadow) to make it stand out.
- Header and Text Styling
- The h1 element, which contains the title "Password Generator," is styled with a bottom margin and a larger font size to make it prominent.
- Input Box and Copy Icon
- The .inputBox class positions the password input box and the copy icon together. The copy icon (#copyIcon) is positioned absolutely to align it to the right of the input box.
- The .passBox class styles the disabled input box where the generated password is displayed, giving it full width, padding, and a border.
- Password Indicator and Slider
- The .pass-indicator class is a simple div styled to act as a visual indicator for password strength. It’s given a fixed height and a background color.
- The range input slider (input[type="range"]) is styled to take the full width, providing a clear and interactive way for users to select the password length.
- Row Layout and Labels
- The .row class uses flexbox to align elements horizontally and space them out evenly, ensuring labels and checkboxes are well-aligned.
- The label elements are styled with a smaller font size for a cleaner look.
- Generate Button
- The .genBtn class styles the "Generate Password" button with a blue background (#007bff), white text, and padding. The button changes color slightly on hover, providing a visual cue to the user.
**Enhancing the Password Generator with JavaScript**
To bring our Password Generator to life, I've added some JavaScript. This code handles user interactions, generates random passwords based on user preferences, and provides feedback on password strength. Let's walk through the key parts of the script.
- Element Selection and Constants
We start by selecting all the necessary HTML elements using getElementById and defining character sets for different types of characters.

- Slider Input Event
We display the current value of the slider (password length) and generate a new password whenever the slider value changes.

- Generate Password Function
-This function builds the password based on selected options and updates the password box.

- Update Password Indicator
This function updates the visual indicator of password strength based on its length.

- Initial Load Event
Ensures the password strength indicator is updated when the page loads.

- Copy Password Functionality
Allows users to copy the generated password to the clipboard and provides visual feedback.

With this JavaScript code, our Password Generator becomes fully interactive:
- Users can dynamically adjust the password length using a slider.
- They can include or exclude various character types.
- The password's strength is visually indicated, and the generated password can be copied to the clipboard with a single click.
**This combination of HTML, CSS, and JavaScript results in a powerful, user-friendly tool for generating secure passwords.**
Feel free to customize and expand upon this project—whether it's adding more features, improving the UI, or optimizing the JavaScript code. The knowledge and experience gained here will undoubtedly serve you well in future endeavors.
Happy coding!
THANK YOU!
| shrishti_srivastava_ |
1,897,752 | Converting px in SCSS | A lot of the time in designs they give the font-size, line-height and letter-spacing in px. Which... | 0 | 2024-06-23T12:50:50 | https://blog.nicm42.co.uk/converting-px-in-scss | scss | ---
title: Converting px in SCSS
published: true
date: 2024-06-23 12:47:17 UTC
tags: scss
canonical_url: https://blog.nicm42.co.uk/converting-px-in-scss
---
A lot of the time in designs they give the font-size, line-height and letter-spacing in px. Which isn't helpful as they mean having to get the calculator out to convert them to more useful units. But a combination of Sass functions and mixins can do the calculations for us.
## Converting font-size
Here we want to use rems. This function is from https://dev.to/rekomat/comment/1ib3b:
``` scss
// math.div lives in Sass's built-in "sass:math" module, so load it first.
@use "sass:math";

// From https://dev.to/rekomat/comment/1ib3b
@function pxToRem($pxValue) {
  @return math.div($pxValue, 16px) * 1rem;
}
```
This is literally doing the same calculation as we do on the calculator: font-size divided by 16px and then written in rems. If your base font size has been set to 10px (for example) then you'd need to change the 16px in the function to be 10px.
You can use this function to convert anything from px to rem, not just font-sizes.
## Converting line-height
Here we do exactly the same as on the calculator to get a ratio:
``` scss
@function convertLineHeight($fontSize, $lineHeight) {
@return math.div($lineHeight, $fontSize);
}
```
## Converting letter-spacing
And again it's the same calculator calculation and expressed in ems:
``` scss
@function convertLetterSpacing($fontSize, $letterSpacing) {
@return math.div($letterSpacing, $fontSize) * 1em;
}
```
## Using the functions
To use those three functions we'd do this:
``` scss
p {
font-size: pxToRem(24px);
line-height: convertLineHeight(24px, 30px);
letter-spacing: convertLetterSpacing(24px, -0.48px);
}
```
Which is fine, but we have to give all three of them the font-size. We can use a mixin to make this more efficient:
``` scss
@mixin textUtils($fontSize, $lineHeight: 1, $letterSpacing: 0) {
font-size: pxToRem($fontSize);
@if $lineHeight != 1 {
line-height: convertLineHeight($fontSize, $lineHeight);
}
@if $letterSpacing != 0 {
letter-spacing: convertLetterSpacing($fontSize, $letterSpacing);
}
}
p {
@include textUtils(24px, 30px, -0.48px);
//@include textUtils(24px, 30px);
//@include textUtils(24px);
}
```
This mixin is set up so it needs the font-size, but it doesn't matter if you don't give it the letter-spacing, because that could be 0. And just in case, it also works if you don't give it the line-height. | nicm42 |
1,897,944 | 🚀 My Journey into the Software Dev Space: The Real Story. | 🚀 My Journey into the Software Dev Space: The Real Story. Transitioning into the software Dev space... | 0 | 2024-06-23T16:57:38 | https://dev.to/chinwuba_okafor_fed1ed88f/my-journey-into-the-software-dev-space-the-real-story-3132 | techlife, 100daysofcode, techbro, webdev | 🚀 My Journey into the Software Dev Space: The Real Story.
Transitioning into the software Dev space has been nothing short of daunting. The road to becoming proficient in software development is often painted as a smooth, glamorous path. But let’s be real , it’s far from that.
🔧 The Hurdles:
Learning Curves: From mastering new programming languages to understanding complex algorithms, the learning never stops. Each day presents a new challenge, and sometimes, it feels like you’re running a marathon with no finish line in sight.
Imposter Syndrome: There are days when you feel like you don’t belong, like you’re not good enough. The constant comparison to others can be overwhelming.
Burnout: Long hours, debugging sessions that last through the night, and the pressure to keep up with rapidly evolving technologies can take a toll on your mental and physical health.
💪 The Persistence:
Consistency is Key: Despite the hurdles, the one thing that keeps me going is consistency. Showing up every day, putting in the work, and pushing through the tough times is what truly matters.
Support Systems: Leaning on fellow tech enthusiasts, mentors, and online communities has been invaluable. We share knowledge, provide encouragement, and celebrate each other's successes.
Small Wins: Celebrating the small victories – solving a tough bug, completing a project, or even understanding a new concept – keeps the motivation alive.
🛠️ The Reality Check:
We often glamorize the tech world, showcasing the perks and successes. But the truth is, behind every successful project, there are countless hours of hard work, failures, and lessons learned. It’s not an easy path, and it’s okay to feel overwhelmed. What’s important is to keep pushing forward, one step at a time.
🌟 Why I Keep Going:
The passion for creating, solving problems, and the thrill of seeing my code come to life is what drives me. Knowing that each hurdle crossed brings me one step closer to my goals is what fuels my journey.
To everyone out there facing similar challenges, know that you’re not alone. Keep pushing, stay consistent, and remember that every great developer has faced these struggles too. 💪👩💻👨💻
| chinwuba_okafor_fed1ed88f |
1,897,943 | Understanding Deadlock: When Computers Play Musical Chairs | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T16:50:53 | https://dev.to/vidyarathna/understanding-deadlock-when-computers-play-musical-chairs-28c | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Deadlock is like musical chairs for processes: each process waits indefinitely for resources held by others, and no one can proceed. It occurs in concurrent systems, where resources are limited and processes compete for them.
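The standard escape from the musical-chairs wait is to make everyone take the "chairs" in one agreed order. A minimal Python sketch of lock ordering (the names are illustrative):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
done = []

def worker(name, first, second):
    # Every thread takes the locks in the SAME global order (a before b),
    # so no thread can hold one lock while waiting forever on the other.
    with first:
        with second:
            done.append(name)

# If t2 instead took lock_b first, the two threads could each grab one lock
# and wait on the other forever: the circular wait that defines deadlock.
t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_a, lock_b))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))
```

Because the order is consistent, both threads always finish; break the order and the program can hang forever.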
## Additional Context
Deadlock prevention and resolution are critical in operating systems and database management. Techniques like resource ordering and deadlock detection algorithms help maintain system efficiency and prevent system halts. | vidyarathna |
1,897,941 | How to set up code preview in VS Code IDE? | Hello kind people of the internet, I'm following a beginning level tutorial on Vueschool.io, which... | 0 | 2024-06-23T16:46:25 | https://dev.to/whuteva_bf240d1bf5/how-to-set-up-code-preview-in-vs-code-ide-4l1a | Hello kind people of the internet,
I'm following a beginning level tutorial on Vueschool.io, which has an overview of using Volar to perform interactive code previews within VS Code IDE. This preview capability is a feature that I really want to have.
But alas, I've gone down a rabbit hole of trying to figure out just what to install in my IDE, as it seems the preview capability was extracted in v1.1.0.
Following some links I came across "Vue and Nuxt Preview", but it looks a little sketchy at version 0.0.2 as of 2/16/2023 with only about 2000 installs and zero ratings or reviews.
Rummaging around the internet I've also come across "vite-plugin-vue-component-preview" but those posts are two years old.
So at this point, on June 23 2024, after several hours of dredging the internet, I have no idea what to install to achieve an interactive Vue.js code preview within the VS Code IDE.
Any help would be super appreciated
| whuteva_bf240d1bf5 | |
1,891,659 | Potential Companies I'd Like To Work At | While coming to the end of my bootcamp journey, it is now time to think about potential companies I'd... | 0 | 2024-06-23T16:41:04 | https://dev.to/uhrinuh/potential-companies-id-like-to-work-at-44jh | While coming to the end of my bootcamp journey, it is now time to think about potential companies I'd like to work at as a full-stack software developer.
As an Operation Spark grad, I have advanced knowledge of JavaScript, TypeScript, Node.js, MySQL, MongoDB, React, Sequelize, Mongoose, Prisma, and many other technologies, but to work at some companies I admire, I need to get more experience with some new tech.
The companies I'd love to work at eventually include Pinterest, Tumblr, Spotify, Instagram, NHL, and Reddit.
**What is Pinterest and what is their tech stack?**
A social media platform that allows users to discover, save, and share visual ideas.
**Their tech stack:**

- Python, Java, Golang are programming languages.
- React is a JavaScript library for building user interfaces based on components.
- MySQL is an open-source relational database management system.
- NGINX is a free open-source web server.
- Redis is an open source (BSD licensed), in-memory data structure store.
- Amazon S3 is used to store and retrieve any amount of data, at any time, from anywhere on the web.
- Django is a python-based web framework.
- Amazon Cloudfront is a content delivery with low latency and high data transfer speeds.
- Objective-C is the primary programming language you use when writing software for OS X and iOS.
- Memcached is a high-performance, distributed memory object caching system.
- Backbone.js is a JavaScript rich-client web app framework based on the model–view–controller design paradigm, intended to connect to an API over a RESTful JSON interface.
- Hadoop is a collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation.
- Amazon SQS is a fully managed message queueing service.
- HBase is the Hadoop database and is a distributed, scalable, big data store.
- EdgeCast is the world's fastest and most reliable content delivery network.
- Qubole is used to prepare, integrate and explore Big Data in the cloud (Hive, MapReduce, Pig, Presto, Spark and Sqoop).
- MySQL_Utils is Pinterest's MySQL management tool.
**What is Tumblr and what is their tech stack?**
A social media platform and microblogging website that allows users to crate blogs and share content with others.
**Their tech stack:**

- PHP, Ruby, and Scala are programming languages.
- MySQL is an open-source relational database management system.
- NGINX is a free open-source web server.
- Redis is an open source (BSD licensed), in-memory data structure store.
- Kafka is a distributed event store and stream-processing platform. It is an open-source system developed by the Apache Software Foundation written in Java and Scala.
- Memcached is a high-performance, distributed memory object caching system.
- Backbone.js is a JavaScript rich-client web app framework based on the model–view–controller design paradigm, intended to connect to an API over a RESTful JSON interface.
- Hadoop is a collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation.
- HBase is the Hadoop database and is a distributed, scalable, big data store.
- EdgeCast is the world's fastest and most reliable content delivery network.
- SoftLayer provides on-demand IT infrastructure, dedicated servers and cloud resources.
- Colossus is the I/O and Microservice library for Scala.
**What is Spotify and what is their tech stack?**
A digital media service that allows users to listen to music, podcasts, and audiobooks.
**Their tech stack:**

- Python and Java are programming languages.
- NGINX is a free open-source web server.
- PostgreSQL is a powerful, open source object-relational database system.
- Bootstrap is used for UI purposes.
- Amazon S3 is used to store and retrieve any amount of data, at any time, from anywhere on the web.
- Kafka is a distributed event store and stream-processing platform. It is an open-source system developed by the Apache Software Foundation written in Java and Scala.
- Amazon Cloudfront is a content delivery with low latency and high data transfer speeds.
- Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. It is written in Java.
- Hadoop is a collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation.
- Google BigQuery analyzes terabytes of data in seconds.
- Apache Storm is a distributed stream processing computation framework.
- Google Cloud Bigtable is the same database that powers Google Search, Gmail and Analytics.
**What is Instagram and what is their tech stack?**
A social media app and photo-sharing platform that allows users to upload, edit, and share photos and videos.
**Their tech stack:**

- JavaScript, Python, Java, and ReasonML are programming languages.
- React is a JavaScript library for building user interfaces based on components.
- NGINX is a free open-source web server.
- PostgreSQL is a powerful, open source object-relational database system.
- Redis is an open source (BSD licensed), in-memory data structure store.
- Django is a Python-based web framework.
- GraphQL is an open-source data query and manipulation language for APIs and a query runtime engine. GraphQL enables declarative data fetching where a client can specify exactly what data it needs from an API.
- React Native is an open-source UI software framework created by Meta Platforms, Inc. It is used to develop applications for Android, Android TV, iOS, macOS, tvOS, Web, Windows and UWP by enabling developers to use the React framework along with native platform capabilities.
- Redux is the predictable state container for JavaScript apps.
- Objective-C is the primary programming language used when writing software for OS X and iOS.
- Memcached is a high-performance, distributed memory object caching system.
- Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. It is written in Java.
- Gunicorn is a Python Web Server Gateway Interface HTTP server.
- Immutable.js is for immutable persistent data collections for Javascript which increase efficiency and simplicity, by Facebook.
- Gearman is a generic application framework to farm out work to other machines or processes.
**What is the NHL and what is their tech stack?**
The National Hockey League.
**Their tech stack:**
- Java is a programming language.
- NGINX is a free open-source web server.
- Akamai is a massively distributed edge and cloud platform that keeps experiences closer to users.
**What is Reddit and what is their tech stack?**
A social news and forum website where users can create, share, and promote content.
**Their tech stack:**

- JavaScript and Python are programming languages.
- jQuery is a JavaScript library designed to simplify HTML DOM tree traversal and manipulation, as well as event handling, CSS animations, and Ajax.
- Node.js is a cross-platform, open-source JavaScript runtime environment that can run on Windows, Linux, Unix, macOS, and more.
- React is a JavaScript library for building user interfaces based on components.
- HTML5 is the 5th major revision of the core language of the World Wide Web.
- NGINX is a free open-source web server.
- PostgreSQL is a powerful, open source object-relational database system.
- Ubuntu is the leading OS for PC, tablet, phone and cloud.
- Redis is an open source (BSD licensed), in-memory data structure store.
- Amazon S3 is used to store and retrieve any amount of data, at any time, from anywhere on the web.
- Amazon EC2 is a part of Amazon.com's cloud-computing platform, Amazon Web Services, that allows users to rent virtual computers on which to run their own computer applications.
- Markdown is a lightweight markup language for creating formatted text using a plain-text editor.
- RabbitMQ is an open-source multi-protocol messaging broker.
- Flask is a micro-framework for Python.
- Memcached is a high-performance, distributed memory object caching system.
- Backbone.js is a JavaScript rich-client web app framework based on the model–view–controller design paradigm, intended to connect to an API over a RESTful JSON interface.
- Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. It is written in Java.
- Underscore is a JavaScript library which provides utility functions for common programming tasks. It is comparable to features provided by Prototype.js and the Ruby language, but opts for a functional programming design instead of extending object prototypes.
- Gunicorn is a Python Web Server Gateway Interface HTTP server.
- Fastly is an American cloud computing services provider. It describes its network as an edge cloud platform, which is designed to help developers extend their core cloud infrastructure to the edge of the network, closer to users.
Of course, if I were to work for any of these companies in the near future, I wouldn't be expected to know all of these tech stacks; it depends on the role. However, a good place to start is recognizing the different technologies that may be worth learning to aid me in my programming journey. Hopefully this helps anyone else looking at similar companies to work for.
**Sources**
https://stackshare.io/pinterest/pinterest
https://stackshare.io/tumblr/tumblr
https://stackshare.io/spotify/spotify
https://stackshare.io/instagram/instagram
https://stackshare.io/nhl/nhl
https://stackshare.io/nhl/nhl-com
https://stackshare.io/reddit/reddit | uhrinuh | |
1,897,940 | GCP for beginners 2024: Build a simple web app with Cloud Run and Cloud Build through terminal | What is Cloud Run? Cloud run is a fully managed compute platform managed by Google Cloud that... | 0 | 2024-06-23T16:37:34 | https://dev.to/robertasaservice/gcp-for-beginners-2024-build-a-simple-web-app-with-cloud-run-and-cloud-build-through-terminal-2kop | **What is Cloud Run?**
Cloud Run is a fully managed compute platform from Google Cloud that automatically scales containers, allowing developers to deploy containerized applications in minutes.
**What are the key features of Cloud Run?**
1. Auto scaling = Scales up or down depending on traffic
2. Containerized applications = Run any stateless container
3. Fully managed = No need to manage servers; Google takes care of everything
4. Pay per use = Pay only for the resources you use
**What is Cloud Run used for?**
1. Web applications and APIs = Great for deploying microservices, RESTful APIs, as well as web applications
2. Data processing = Suitable for data transformation and ETL tasks
3. Background jobs = Can handle simultaneous jobs
4. Event-driven applications = Works perfectly with event sources like cloud storage and pub/sub
It’s a solid combination of Cloud Functions and App Engine, the best of both worlds, and it can be used as a PaaS or FaaS.
**Cloud Run in a Metaphor**
Imagine you’re running a food truck business. Each containerized application you deploy is like a food truck.
You can design and customize your food truck however you like, with any equipment, recipes (code), and ingredients (dependencies).
Now, think of Cloud Run as a food truck park.
The food truck park is the managed platform where your food trucks operate.
This park offers several features:
First, it provides flexibility—you can bring any type of food truck, serving any cuisine (programming language or framework).
Second, the park has a smart system that monitors customer traffic. When there are more customers, more food trucks are called in to handle the demand. When there are no customers, the food trucks are sent away to save costs.
Lastly, you don’t have to worry about maintaining the park infrastructure (like cleaning, security, or utilities).
The park management (Google Cloud) takes care of all that, so you can focus on your food (application) and customers (users).
**How to build a Cloud Run service through Cloud Shell** [(Here's the video)](https://youtu.be/rGxegu0Hsfc?si=F5EU4MXcUXEYALmX)
Create a dir
```
mkdir my-cloud-run-app
```
cd into the folder we just created
```
cd my-cloud-run-app
```
Create a main.py file
```
nano main.py
```
```
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
return 'Hello, Cloud Build!'
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8080)
```
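Before containerizing, you can sanity-check the route locally with Flask's built-in test client. This optional sketch assumes Flask is installed in your environment (`pip install Flask`):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello, Cloud Build!'

# Flask's test client exercises the route without binding a real port.
client = app.test_client()
body = client.get('/').get_data(as_text=True)
print(body)  # Hello, Cloud Build!
```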
Create a requirements.txt with the following
```
Flask==2.0.1
Werkzeug==2.0.1
```
Create a dockerfile
```
nano Dockerfile
```
```
# Use the official Python image from the Docker Hub
FROM python:3.9-slim
# Set the working directory
WORKDIR /app
# Copy the requirements file and install dependencies
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
# Copy the rest of the application code
COPY . .
# Expose the port the app runs on
EXPOSE 8080
# Run the application
CMD ["python", "main.py"]
```
Create a cloudbuild.yaml file
```
nano cloudbuild.yaml
```
```
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-cloud-run-app', '.']
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/$PROJECT_ID/my-cloud-run-app']
images:
- 'gcr.io/$PROJECT_ID/my-cloud-run-app'
```
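Optionally, the deploy step can be folded into the same pipeline so Cloud Build handles everything end to end. Here's a sketch following Google's documented cloud-sdk builder pattern; the service name and region are placeholders you'd adjust:

```yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-cloud-run-app', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-cloud-run-app']
  # Extra step: deploy the pushed image straight to Cloud Run.
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'my-cloud-run-app',
           '--image', 'gcr.io/$PROJECT_ID/my-cloud-run-app',
           '--region', 'us-central1', '--platform', 'managed',
           '--allow-unauthenticated']
images:
  - 'gcr.io/$PROJECT_ID/my-cloud-run-app'
```

Note that this variant requires granting the Cloud Build service account the Cloud Run Admin and Service Account User roles; otherwise, stick with the manual `gcloud run deploy` step below.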
Now you should have something like this

Run cloud build to containerize the application
```
gcloud builds submit --config cloudbuild.yaml .
```
Deploy the Cloud Run service (make sure to put in your Project ID and preferred region, for example: us-central1)
```
gcloud run deploy my-cloud-run-app \
  --image gcr.io/YOUR_PROJECT_ID/my-cloud-run-app \
--platform managed \
--region YOUR_PREFERRED_REGION \
--allow-unauthenticated
```
After everything has been successfully deployed, it will give you a URL to open/access your web app, and it should say "Hello, Cloud Build!"
Now make sure to remove everything after you are done! If not, GCP will charge you.
Remove cloud run
```
gcloud run services delete my-cloud-run-app --platform managed --region YOUR_PREFERRED_REGION
```
Remove container image
```
gcloud container images delete gcr.io/YOUR_PROJECT_ID/my-cloud-run-app --force-delete-tags
```
| robertasaservice | |
1,892,833 | The Art of Falling | If you fall just right, you can use the energy to roll and spring back up. Don't ask me how, I'd hurt... | 0 | 2024-06-23T16:36:02 | https://dev.to/tacodes/the-art-of-falling-2aoa | hhgtg, motivation, meme | If you fall just right, you can use the energy to roll and spring back up. Don't ask me how, I'd hurt myself. Willy Wonka did it once though.

Folks in aikido understand the idea well, too -- these martial practitioners learn how to redirect energy to flow around or away from them.
When you fall, it's important to know how to fall without hurting yourself. What part of your body would you prefer to hit the ground first, face or butt? Credit to those who go in face-first, though.

Failing is much like falling, for it is inevitable. It's important to plan, practice, and prepare for it as much as possible... So when the day comes that you fail flat on your face, you'll roll like a limber Willy Wonka and turn your failure into an opportunity to achieve victory in your endeavor.
Happy Failing! | tacodes |
1,897,938 | Delicious Breakfast Classics: Waffles, Sandwiches, and Bowls | A popular breakfast choice for many people, waffles are a delicious and versatile option that can be... | 0 | 2024-06-23T16:32:26 | https://dev.to/abduljabbar4533/delicious-breakfast-classics-waffles-sandwiches-and-bowls-768 | crepeccino | A popular breakfast choice for many people, waffles are a delicious and versatile option that can be enjoyed in a variety of ways. Whether you prefer them topped with fresh fruit, a dollop of whipped cream, or drizzled with maple syrup, waffles are a classic morning staple that never fails to satisfy. From crispy Belgian waffles to lighter and thinner crepe-like waffles, there is a wide range of options to suit every taste bud.
One unique twist on the traditional waffle is the [Crepeccino](http://crepeccino.com/) waffle, a delightful combination of a crepe and a crunchy waffle. This hybrid creation offers the best of both worlds with a thin and delicate texture reminiscent of a crepe, but with the signature grid pattern of a waffle. Topped with a rich dollop of whipped cream and a drizzle of maple syrup, the crepeccino waffle is a must-try for those looking to add a gourmet touch to their breakfast routine.
### Maple Syrup Waffle
The maple syrup waffle is a classic breakfast choice that never fails to satisfy one's sweet cravings. The fluffy waffle paired with the rich, decadent maple syrup creates a perfect harmony of flavors that will surely leave your taste buds delighted. Whether enjoyed as a morning treat or a sweet afternoon snack, the maple syrup waffle is a timeless favorite for many.
For a twist on the traditional maple syrup waffle, consider experimenting with different toppings such as fresh fruits, whipped cream, or a sprinkle of cinnamon. These variations can elevate the flavors and textures of the waffle, adding a new dimension to this beloved dish. To explore more creative ways to enjoy the maple syrup waffle, [visit our website](http://crepeccino.com/) for inspiring recipes and ideas.
### Breakfast Sandwiches
When it comes to a classic breakfast option that never fails to satisfy, breakfast sandwiches are a go-to choice for many. One of the most beloved breakfast sandwiches is the Bacon and Egg Sandwich. This quintessential morning meal consists of crispy bacon strips, a perfectly cooked fried egg, and a slice of melted cheese sandwiched between two pieces of toasted bread or a fluffy English muffin. The combination of savory, salty, and rich flavors makes this breakfast sandwich a popular favorite for those looking for a hearty start to their day.
For those craving a delicious and filling breakfast option that goes beyond the typical bacon and eggs, breakfast bowls are a fantastic alternative. The Acai Bowl, in particular, has gained popularity in recent years for its health benefits and refreshing taste. This breakfast bowl typically features a thick base of acai puree topped with an assortment of fresh fruits, nuts, granola, and a drizzle of honey or nut butter. Whether enjoyed at home or on the go, breakfast bowls offer a convenient and nutritious way to kickstart your morning with a burst of flavors and energy.
### Bacon and Egg Sandwich
Bacon and egg sandwiches are a classic breakfast option that never fails to satisfy. The combination of crispy bacon, perfectly cooked eggs, and toasted bread creates a flavorful and filling meal that is beloved by many. This breakfast staple is not only delicious but also easy to make, making it a popular choice for busy mornings.
To make a bacon and egg sandwich, start by frying bacon in a skillet until it's nice and crispy. In a separate pan, cook your eggs to your liking - whether scrambled, fried, or poached. Assemble your sandwich by layering the bacon and eggs between two slices of toasted bread, and you're ready to enjoy a simple yet satisfying breakfast option.
### Breakfast Bowls
When it comes to wholesome and nutritious breakfast options, breakfast bowls are a popular choice for many. One of the most beloved breakfast bowl variations is the acai bowl. This delightful concoction typically consists of acai berries blended to a smooth consistency and layered with toppings like granola, fresh fruits, coconut flakes, and a drizzle of honey, making it a colorful and tasty way to kick start your day.
Acai bowls are not only delicious but are also packed with antioxidants and essential nutrients. They are a great source of vitamins, minerals, and fiber, making them a fulfilling and nourishing breakfast option. The versatility of acai bowls allows you to customize them according to your preferences, whether you prefer a sweet flavor profile with fruit toppings or a more decadent option with chocolate and nut butter. Acai bowls are not only Instagram-worthy but also a wholesome choice to energize you and keep you satisfied in the morning.
### Acai Bowl
Acai bowls are a popular choice for health-conscious individuals looking for a nutritious and delicious breakfast option. Packed with antioxidants, vitamins, and fiber, this colorful bowl typically features a base of blended acai berries topped with a variety of fruits, nuts, seeds, and granola. The combination of textures and flavors creates a satisfying meal that can keep you energized throughout the morning.
To make your own acai bowl at home, start by blending frozen acai puree with a splash of liquid (such as coconut water or almond milk) until you achieve a smooth consistency. Pour the mixture into a bowl and let your creativity shine by arranging a vibrant array of toppings like sliced bananas, berries, shredded coconut, and a drizzle of honey. Acai bowls are not only visually appealing but also a convenient and customizable breakfast option that can be tailored to suit your taste preferences and dietary needs.
| abduljabbar4533 |
1,897,937 | 1438. Longest Continuous Subarray With Absolute Diff Less Than or Equal to Limit | 1438. Longest Continuous Subarray With Absolute Diff Less Than or Equal to Limit Medium Given an... | 27,523 | 2024-06-23T16:27:33 | https://dev.to/mdarifulhaque/1438-longest-continuous-subarray-with-absolute-diff-less-than-or-equal-to-limit-4j9j | php, leetcode, algorithms, programming | 1438\. Longest Continuous Subarray With Absolute Diff Less Than or Equal to Limit
Medium
Given an array of integers `nums` and an integer `limit`, return the size of the longest **non-empty** subarray such that the absolute difference between any two elements of this subarray is less than or equal to `limit`.
**Example 1:**
- **Input:** nums = [8,2,4,7], limit = 4
- **Output:** 2
- **Explanation:** All subarrays are:
```
[8] with maximum absolute diff |8-8| = 0 <= 4.
[8,2] with maximum absolute diff |8-2| = 6 > 4.
[8,2,4] with maximum absolute diff |8-2| = 6 > 4.
[8,2,4,7] with maximum absolute diff |8-2| = 6 > 4.
[2] with maximum absolute diff |2-2| = 0 <= 4.
[2,4] with maximum absolute diff |2-4| = 2 <= 4.
[2,4,7] with maximum absolute diff |2-7| = 5 > 4.
[4] with maximum absolute diff |4-4| = 0 <= 4.
[4,7] with maximum absolute diff |4-7| = 3 <= 4.
[7] with maximum absolute diff |7-7| = 0 <= 4.
Therefore, the size of the longest subarray is 2.
```
**Example 2:**
- **Input:** nums = [10,1,2,4,7,2], limit = 5
- **Output:** 4
- **Explanation:** The subarray [2,4,7,2] is the longest since the maximum absolute diff is |2-7| = 5 <= 5.
**Example 3:**
- **Input:** nums = [4,2,2,2,4,4,2,2], limit = 0
- **Output:** 3
**Example 4:**
- **Input:** nums = [2,2,2,4,4,2,5,5,5,5,5,2], limit = 2
- **Output:** 6
**Constraints:**
- <code>1 <= nums.length <= 10<sup>5</sup></code>
- <code>1 <= nums[i] <= 10<sup>9</sup></code>
- <code>0 <= limit <= 10<sup>9</sup></code>
**Solution:**
```
class Solution {
/**
* @param Integer[] $nums
* @param Integer $limit
* @return Integer
*/
function longestSubarray($nums, $limit) {
        $ans = 1;
        // Monotonic deques: $minQ stays increasing (front = window minimum),
        // $maxQ stays decreasing (front = window maximum).
        $minQ = new SplDoublyLinkedList();
        $maxQ = new SplDoublyLinkedList();
        for ($l = 0, $r = 0; $r < count($nums); ++$r) {
            while (!$minQ->isEmpty() && $minQ->top() > $nums[$r])
                $minQ->pop();
            $minQ->push($nums[$r]);
            while (!$maxQ->isEmpty() && $maxQ->top() < $nums[$r])
                $maxQ->pop();
            $maxQ->push($nums[$r]);
            // Shrink the window from the left while max - min exceeds the limit.
            while ($maxQ->bottom() - $minQ->bottom() > $limit) {
if ($minQ->bottom() == $nums[$l])
$minQ->shift();
if ($maxQ->bottom() == $nums[$l])
$maxQ->shift();
++$l;
}
$ans = max($ans, $r - $l + 1);
}
return $ans;
}
}
```
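For comparison, the same sliding-window, monotonic-deque idea can be sketched in Python with `collections.deque`. This is an illustrative port, not part of the original submission:

```python
from collections import deque

def longest_subarray(nums, limit):
    # Monotonic deques track the current window's extremes:
    # min_q is kept increasing (front = window minimum),
    # max_q is kept decreasing (front = window maximum).
    min_q, max_q = deque(), deque()
    ans = 0
    left = 0
    for right, x in enumerate(nums):
        while min_q and min_q[-1] > x:
            min_q.pop()
        min_q.append(x)
        while max_q and max_q[-1] < x:
            max_q.pop()
        max_q.append(x)
        # Shrink from the left while the window violates the limit.
        while max_q[0] - min_q[0] > limit:
            if min_q[0] == nums[left]:
                min_q.popleft()
            if max_q[0] == nums[left]:
                max_q.popleft()
            left += 1
        ans = max(ans, right - left + 1)
    return ans

print(longest_subarray([8, 2, 4, 7], 4))              # 2
print(longest_subarray([10, 1, 2, 4, 7, 2], 5))       # 4
print(longest_subarray([4, 2, 2, 2, 4, 4, 2, 2], 0))  # 3
```

Each element enters and leaves each deque at most once, so this runs in O(n) time, mirroring the PHP solution above.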
**Contact Links**
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
| mdarifulhaque |
1,897,936 | Buy Verified Paxful Account | https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are... | 0 | 2024-06-23T16:25:44 | https://dev.to/povahe7690/buy-verified-paxful-account-1fbm | tutorial, react, python, ai | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-paxful-account/\n\n\n\n\nBuy Verified Paxful Account\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, Buy verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to Buy Verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with. Buy Verified Paxful Account.\n\nBuy US verified paxful account from the best place dmhelpshop\nWhy we declared this website as the best place to buy US verified paxful account? Because, our company is established for providing the all account services in the USA (our main target) and even in the whole world. With this in mind we create paxful account and customize our accounts as professional with the real documents. Buy Verified Paxful Account.\n\nIf you want to buy US verified paxful account you should have to contact fast with us. 
Because our accounts are-\n\nEmail verified\nPhone number verified\nSelfie and KYC verified\nSSN (social security no.) verified\nTax ID and passport verified\nSometimes driving license verified\nMasterCard attached and verified\nUsed only genuine and real documents\n100% access of the account\nAll documents provided for customer security\nWhat is Verified Paxful Account?\nIn today’s expanding landscape of online transactions, ensuring security and reliability has become paramount. Given this context, Paxful has quickly risen as a prominent peer-to-peer Bitcoin marketplace, catering to individuals and businesses seeking trusted platforms for cryptocurrency trading.\n\nIn light of the prevalent digital scams and frauds, it is only natural for people to exercise caution when partaking in online transactions. As a result, the concept of a verified account has gained immense significance, serving as a critical feature for numerous online platforms. Paxful recognizes this need and provides a safe haven for users, streamlining their cryptocurrency buying and selling experience.\n\nFor individuals and businesses alike, Buy verified Paxful account emerges as an appealing choice, offering a secure and reliable environment in the ever-expanding world of digital transactions. Buy Verified Paxful Account.\n\nVerified Paxful Accounts are essential for establishing credibility and trust among users who want to transact securely on the platform. They serve as evidence that a user is a reliable seller or buyer, verifying their legitimacy.\n\nBut what constitutes a verified account, and how can one obtain this status on Paxful? In this exploration of verified Paxful accounts, we will unravel the significance they hold, why they are crucial, and shed light on the process behind their activation, providing a comprehensive understanding of how they function. 
Buy verified Paxful account.\n\n \n\nWhy should to Buy Verified Paxful Account?\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence. Buy Verified Paxful Account.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.\n\n \n\nWhat is a Paxful Account\nPaxful and various other platforms consistently release updates that not only address security vulnerabilities but also enhance usability by introducing new features. Buy Verified Paxful Account.\n\nIn line with this, our old accounts have recently undergone upgrades, ensuring that if you purchase an old buy Verified Paxful account from dmhelpshop.com, you will gain access to an account with an impressive history and advanced features. This ensures a seamless and enhanced experience for all users, making it a worthwhile option for everyone.\n\n \n\nIs it safe to buy Paxful Verified Accounts?\nBuying on Paxful is a secure choice for everyone. However, the level of trust amplifies when purchasing from Paxful verified accounts. 
These accounts belong to sellers who have undergone rigorous scrutiny by Paxful. Buy verified Paxful account, you are automatically designated as a verified account. Hence, purchasing from a Paxful verified account ensures a high level of credibility and utmost reliability. Buy Verified Paxful Account.\n\nPAXFUL, a widely known peer-to-peer cryptocurrency trading platform, has gained significant popularity as a go-to website for purchasing Bitcoin and other cryptocurrencies. It is important to note, however, that while Paxful may not be the most secure option available, its reputation is considerably less problematic compared to many other marketplaces. Buy Verified Paxful Account.\n\nThis brings us to the question: is it safe to purchase Paxful Verified Accounts? Top Paxful reviews offer mixed opinions, suggesting that caution should be exercised. Therefore, users are advised to conduct thorough research and consider all aspects before proceeding with any transactions on Paxful.\n\n \n\nHow Do I Get 100% Real Verified Paxful Accoun?\nPaxful, a renowned peer-to-peer cryptocurrency marketplace, offers users the opportunity to conveniently buy and sell a wide range of cryptocurrencies. Given its growing popularity, both individuals and businesses are seeking to establish verified accounts on this platform.\n\nHowever, the process of creating a verified Paxful account can be intimidating, particularly considering the escalating prevalence of online scams and fraudulent practices. This verification procedure necessitates users to furnish personal information and vital documents, posing potential risks if not conducted meticulously.\n\nIn this comprehensive guide, we will delve into the necessary steps to create a legitimate and verified Paxful account. 
Our discussion will revolve around the verification process and provide valuable tips to safely navigate through it.\n\nMoreover, we will emphasize the utmost importance of maintaining the security of personal information when creating a verified account. Furthermore, we will shed light on common pitfalls to steer clear of, such as using counterfeit documents or attempting to bypass the verification process.\n\nWhether you are new to Paxful or an experienced user, this engaging paragraph aims to equip everyone with the knowledge they need to establish a secure and authentic presence on the platform.\n\nBenefits Of Verified Paxful Accounts\nVerified Paxful accounts offer numerous advantages compared to regular Paxful accounts. One notable advantage is that verified accounts contribute to building trust within the community.\n\nVerification, although a rigorous process, is essential for peer-to-peer transactions. This is why all Paxful accounts undergo verification after registration. When customers within the community possess confidence and trust, they can conveniently and securely exchange cash for Bitcoin or Ethereum instantly. Buy Verified Paxful Account.\n\nPaxful accounts, trusted and verified by sellers globally, serve as a testament to their unwavering commitment towards their business or passion, ensuring exceptional customer service at all times. Headquartered in Africa, Paxful holds the distinction of being the world’s pioneering peer-to-peer bitcoin marketplace. Spearheaded by its founder, Ray Youssef, Paxful continues to lead the way in revolutionizing the digital exchange landscape.\n\nPaxful has emerged as a favored platform for digital currency trading, catering to a diverse audience. One of Paxful’s key features is its direct peer-to-peer trading system, eliminating the need for intermediaries or cryptocurrency exchanges. 
By leveraging Paxful’s escrow system, users can trade securely and confidently.\n\nWhat sets Paxful apart is its commitment to identity verification, ensuring a trustworthy environment for buyers and sellers alike. With these user-centric qualities, Paxful has successfully established itself as a leading platform for hassle-free digital currency transactions, appealing to a wide range of individuals seeking a reliable and convenient trading experience. Buy Verified Paxful Account.\n\n \n\nHow paxful ensure risk-free transaction and trading?\nEngage in safe online financial activities by prioritizing verified accounts to reduce the risk of fraud. Platforms like Paxfu implement stringent identity and address verification measures to protect users from scammers and ensure credibility.\n\nWith verified accounts, users can trade with confidence, knowing they are interacting with legitimate individuals or entities. By fostering trust through verified accounts, Paxful strengthens the integrity of its ecosystem, making it a secure space for financial transactions for all users. Buy Verified Paxful Account.\n\nExperience seamless transactions by obtaining a verified Paxful account. Verification signals a user’s dedication to the platform’s guidelines, leading to the prestigious badge of trust. This trust not only expedites trades but also reduces transaction scrutiny. Additionally, verified users unlock exclusive features enhancing efficiency on Paxful. Elevate your trading experience with Verified Paxful Accounts today.\n\nIn the ever-changing realm of online trading and transactions, selecting a platform with minimal fees is paramount for optimizing returns. This choice not only enhances your financial capabilities but also facilitates more frequent trading while safeguarding gains. Buy Verified Paxful Account.\n\nExamining the details of fee configurations reveals Paxful as a frontrunner in cost-effectiveness. 
Acquire a verified level-3 USA Paxful account from usasmmonline.com for a secure transaction experience. Invest in verified Paxful accounts to take advantage of a leading platform in the online trading landscape.\n\n \n\nHow Old Paxful ensures a lot of Advantages?\n\nExplore the boundless opportunities that Verified Paxful accounts present for businesses looking to venture into the digital currency realm, as companies globally witness heightened profits and expansion. These success stories underline the myriad advantages of Paxful’s user-friendly interface, minimal fees, and robust trading tools, demonstrating its relevance across various sectors.\n\nBusinesses benefit from efficient transaction processing and cost-effective solutions, making Paxful a significant player in facilitating financial operations. Acquire a USA Paxful account effortlessly at a competitive rate from usasmmonline.com and unlock access to a world of possibilities. Buy Verified Paxful Account.\n\nExperience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.\n\nPaxful’s verified accounts not only offer reliability within the trading community but also serve as a testament to the platform’s ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today. Buy Verified Paxful Account.\n\n \n\nWhy paxful keep the security measures at the top priority?\nIn today’s digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. 
It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.\n\nSafeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all. Buy Verified Paxful Account.\n\nConclusion\nInvesting in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin. Buy Verified Paxful Account.\n\nThe initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.\n\nIn conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller’s history and reviews before making any transactions.\n\nMoreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions. Buy Verified Paxful Account.\n\n \n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | povahe7690 |
1,897,935 | Twilio Challenge: LOL Loops: Your Developer Rant Responder | This is a submission for Twilio Challenge v24.06.12 What I Built Built an WhatsApp bot... | 0 | 2024-06-23T16:23:24 | https://dev.to/gowtham758550/twilio-challenge-lol-loops-your-developer-rant-responder-51nj | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
## What I Built
<!-- Share an overview about your project. -->
Built a WhatsApp bot powered by Gemini AI.
**LOL Loops: Your Developer Rant Responder**
Entertaining Endeavors: Whenever a developer rants to it, it pushes the boundaries of creativity and replies with a hilariously witty message that not only reminds them the problem is not the end of the world but also leaves them laughing out loud.
## Demo
<!-- Share a link to your app and include some screenshots here. -->
Message my WhatsApp sandbox => <a href="http://wa.me/+14155238886?text=join%20help-cotton" target="_blank">Open WhatsApp</a>
Initiate your chat with `join help-cotton`


Here is my function app, built with Node.
{% embed https://github.com/gowtham758550/dev-to-challenge-twilio %}
## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
LOL Loops harnesses the power of Twilio and AI to create an engaging and entertaining experience for developers. Here’s how:
### Twilio Integration:
#### Seamless WhatsApp Communication:
Using Twilio's API, LOL Loops receives messages from developers ranting about their coding woes.
Instant Responses: Twilio ensures that messages are received and delivered in real-time, providing an instant and interactive experience for users.
### AI Magic with Google Generative AI (Gemini API):
#### Creative and Witty Responses:
The Gemini API is used to generate clever, humorous replies. It's like having a stand-up comedian and motivational speaker in one!
Tech-Savvy Humor: The AI understands the context of developer rants and responds with tech-specific humor, playful sarcasm, and absurdity to lighten the mood.
### Bringing It All Together:
#### User Interaction:
A developer sends a rant via WhatsApp.
Processing and Generation: The message is processed by LOL Loops, and the Gemini API generates a witty response.
Response Delivery: Twilio sends the hilarious reply back to the developer, turning their frown upside down and reminding them not to take life too seriously.
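The round trip above can be sketched in plain Python. This is not the app's actual code (the real bot uses Twilio's helper library and the Gemini API); `generate_witty_reply` is a canned stand-in for the Gemini call, while `build_twiml` produces the TwiML shape Twilio expects back from a messaging webhook:

```python
from xml.sax.saxutils import escape

def generate_witty_reply(rant: str) -> str:
    # Stand-in for the Gemini API call; returns a canned joke.
    return "Have you tried turning your expectations off and on again?"

def build_twiml(reply: str) -> str:
    # Wrap the reply in the TwiML a Twilio messaging webhook should return.
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            f"<Response><Message>{escape(reply)}</Message></Response>")

# Simulate one round trip: rant in, TwiML out.
twiml = build_twiml(generate_witty_reply("My build passed locally but CI hates me."))
print(twiml)
```

Returning this XML from the webhook endpoint is all Twilio needs in order to deliver the reply back over WhatsApp.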
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
**Entertaining Endeavors:** Awarded to a top submission that pushes the boundaries of creativity and gives us a good laugh.
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
<!-- Thanks for participating! → --> | gowtham758550 |
1,897,933 | Understanding Psycopg2: Bridging Python and PostgreSQL | What is Psycopg2: Psycopg2 is a PostgreSQL database adapter for the Python programming... | 0 | 2024-06-23T16:22:41 | https://dev.to/mahendrap1512/understanding-psycopg2-bridging-python-and-postgresql-1cpj | python, psycopg2, database, postgres | ### What is Psycopg2:
Psycopg2 is a PostgreSQL database adapter for the Python programming language. It allows developers to interact with a PostgreSQL database from Python, providing functionality to connect to the database, execute SQL commands, fetch the results of those commands, manage database transactions, and more.
### Querying the Database with Psycopg2:
Querying the database can be simplified into the following steps:
1. Establish the connection to the database
2. Create a cursor object (which will be used to query the database)
3. Execute the SQL query with the help of the cursor.
4. Retrieve the results using `fetchone()`, `fetchmany()`, or `fetchall()`, depending on the use case.
5. Finally, close the connection to free the allocated resources.
```python
import psycopg2

# 1. Make a connection to the database
conn = psycopg2.connect(
    database="database name",
    user="username",
    password="password",
    host="localhost",
    port="5432"  # default port of the postgresql service
)

# 2. Create a cursor
with conn.cursor() as cur:
    # 3. Execute your SQL command
    cur.execute("SELECT * FROM table_name")

    # 4. Retrieve the result and print it
    rows = cur.fetchall()
    for row in rows:
        print(row)

# 5. Close the connection
conn.close()
```
<b>Pro tip</b>: We used a context manager to create the cursor object
(`with conn.cursor() as cur:`), which automatically closes the cursor once execution leaves the `with` block.
### A deeper dive into retrieving database results:
Once we execute a SQL command with `cur.execute()`, we can retrieve the results with `fetchone()`, `fetchmany()`, or `fetchall()`.
- `fetchone()`: This method retrieves the next row of a query result set and returns a single sequence, or `None` if no more rows are available.
```python
# Execute your SQL command
cur.execute("SELECT * FROM table_name")
# Fetch the next row
row = cur.fetchone()
# Print the row
print(row)
```
- `fetchmany([size=cursor.arraysize])`: This method retrieves the next set of rows of a query result and returns a list. An empty list is returned when no more rows are available. The number of rows to fetch per call is specified by the size parameter. If it is not given, the cursor’s arraysize determines the number of rows to be fetched.
```python
# Execute your SQL command
cur.execute("SELECT * FROM table_name")
# Fetch the next set of rows
rows = cur.fetchmany(5)
# Print the rows
for row in rows:
    print(row)
```
- `fetchall()`: This method retrieves all (remaining) rows of a query result and returns them as a list of tuples. An empty list is returned if no more rows are available.
```python
# Execute your SQL command
cur.execute("SELECT * FROM table_name")
# Fetch all results
rows = cur.fetchall()
# Print the results
for row in rows:
    print(row)
```
These methods provide different ways to retrieve query results depending on the specific needs.
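Since psycopg2 implements the Python DB-API 2.0 specification, the same fetch semantics can be demonstrated with the standard library's `sqlite3` module (used here only so the example runs without a PostgreSQL server):

```python
import sqlite3

# In-memory database so the example is fully self-contained.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE nums (n INTEGER)")
cur.executemany("INSERT INTO nums VALUES (?)", [(i,) for i in range(1, 6)])

cur.execute("SELECT n FROM nums ORDER BY n")
first = cur.fetchone()       # a single row: (1,)
next_two = cur.fetchmany(2)  # the next two rows: [(2,), (3,)]
rest = cur.fetchall()        # everything remaining: [(4,), (5,)]
print(first, next_two, rest)

conn.close()
```

The one syntactic difference to keep in mind: sqlite3 uses `?` as its parameter placeholder, while psycopg2 uses `%s`.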
### Working with transactions in Psycopg2:
To make any changes to the database, such as inserting, updating, or deleting records, we need to use a database transaction.

Theoretically, a transaction is a set of operations that is atomic in nature: either all the operations in the set succeed, or none of them take effect.

Executing a transaction involves the following steps:
1. Create a database connection
2. Create a database cursor
3. Execute the insert, update or delete command
4.1. Commit the changes
4.2. If something went wrong in the above steps, do a rollback
```python
import psycopg2

# 1. Create a database connection
conn = psycopg2.connect(...)

try:
    # 2. Create a cursor
    with conn.cursor() as cur:
        # 3. Execute SQL commands (e.g., INSERT INTO table_name)
        cur.execute("INSERT INTO table_name VALUES (%s, %s)", (value1, value2))

    # 4.1. Commit the transaction
    conn.commit()
except Exception as e:
    # 4.2. Roll back in case of error
    conn.rollback()
    print("An error occurred:", e)
```
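psycopg2 connections can also be used as context managers: `with conn:` wraps a transaction, committing on success and rolling back if the block raises an exception (note that it does not close the connection). sqlite3 connections follow the same convention, so here is a runnable sketch of the pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

# Committed automatically: the block finishes without an exception.
with conn:
    conn.execute("INSERT INTO accounts VALUES (?, ?)", ("alice", 100))

# Rolled back automatically: the exception aborts the whole transaction.
try:
    with conn:
        conn.execute("INSERT INTO accounts VALUES (?, ?)", ("bob", 50))
        raise RuntimeError("something went wrong mid-transaction")
except RuntimeError:
    pass

rows = conn.execute("SELECT name FROM accounts").fetchall()
print(rows)  # only ('alice',) survives
conn.close()
```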
And that’s it! Psycopg2 is your trusty bridge between Python and PostgreSQL. So go forth, create magic, and may your code dance elegantly through the data! 🌟✨ | mahendrap1512 |
1,897,734 | Next-Gen AI Interview: Multilingual and Real-Time Analysis | This is a submission for the Twilio Challenge What I Built The motivation behind this... | 0 | 2024-06-23T16:22:30 | https://dev.to/bilal1718/next-gen-ai-interview-multilingual-and-real-time-analysis-56g8 | devchallenge, twiliochallenge, ai, twilio |
*This is a submission for the [Twilio Challenge](https://dev.to/challenges/twilio)*
## What I Built
The motivation behind this project stems from a deep desire to democratize the interview process and make it more accessible, personalized, and fair. Job interviews can be incredibly stressful, and traditional methods often fail to account for individual differences in language proficiency, communication style, and comfort levels. By harnessing the power of AI and advanced communication technologies, we can create an interview experience that not only assesses a candidate's qualifications but also provides real-time support and feedback. This platform aims to empower job seekers, helping them present their best selves and gain valuable insights to improve their performance. In a world where talent is everywhere but opportunities are not, this project seeks to bridge that gap and bring us closer to a more equitable job market.
So with this motivation, I have created an innovative AI-powered platform that personalizes the interview experience for job candidates. Users can fill out a survey detailing their job position, experience, skills, goals, and preferred language for the interview. Based on their responses, they can choose to conduct the interview via web browser or WhatsApp Business.
If a user opts for WhatsApp, they will receive interview questions generated by the Gemini API directly on WhatsApp, with the ability to respond and receive follow-up questions in their chosen language. Upon completion, they receive a voice call and a WhatsApp message with AI-generated feedback.
For those who prefer a web browser interview, the platform provides a video interface that opens the user's camera and microphone. Questions appear onscreen, and the AI analyzes body language using TensorFlow and PoseNet. Users receive real-time feedback on their posture and eye contact, can replay their responses, and view live transcripts of their speech. After the interview, they exit the video room and receive detailed feedback.
## Demo
{% embed https://www.youtube.com/watch?v=7nrJ0u7j3jA %}
**Multi Language Support->** Spanish

<!-- Share a link to your app and include some screenshots here. -->
## Source Code
{% embed https://github.com/bilal1718/Twilio_Competition %}
## Twilio and AI
Twilio APIs form the backbone of my project, enabling seamless communication across different channels. Twilio WhatsApp Business API ensures smooth delivery of interview questions and responses, while Twilio Voice API handles voice feedback calls. Twilio Video API powers the browser-based video interviews, integrating with TensorFlow and PoseNet for real-time body language analysis. The Gemini API generates intelligent, context-specific questions and feedback, while Microsoft Text Translator ensures language accessibility. Together, these technologies create a cohesive and intuitive interview experience that is accessible, multilingual, and highly interactive.
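As a rough illustration of the messaging layer, the sketch below builds the request a backend could send to Twilio's Messages REST endpoint to deliver one interview question over WhatsApp. In practice the official Twilio helper library handles this; every identifier here (SID, phone numbers, question text) is a placeholder, and no network call is made:

```python
from urllib.parse import urlencode

TWILIO_MESSAGES_URL = "https://api.twilio.com/2010-04-01/Accounts/{sid}/Messages.json"

def build_whatsapp_message(sid: str, question: str, to_number: str, from_number: str):
    """Return (url, form_body) for sending one interview question over WhatsApp."""
    url = TWILIO_MESSAGES_URL.format(sid=sid)
    body = urlencode({
        "From": f"whatsapp:{from_number}",  # the Twilio WhatsApp sender
        "To": f"whatsapp:{to_number}",      # the candidate's number
        "Body": question,
    })
    return url, body

url, body = build_whatsapp_message(
    "ACxxxxxxxx", "Tell me about a project you led.", "+15551234567", "+14155238886"
)
print(url)
print(body)
```

The `whatsapp:` prefix on the `From`/`To` numbers is what routes the message over WhatsApp rather than SMS.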
## Features
- Personalized Interview Setup
- Multilingual Support
- Dual Interview Modes (WhatsApp Business Mode, Web Browser Mode)
- AI-Generated Questions and Feedback
- Real-Time Body Language Analysis
- Replay and Transcript Features
  - Users can replay their spoken answers to review performance.
  - Live transcription of spoken responses displayed on screen.
- Seamless Integration with Twilio APIs
- Comprehensive Feedback
## Tech Stack
- Frontend: **React JS**
- Backend: **Express JS**
- Styling: **Tailwind CSS**
- AI and ML: **TensorFlow, PoseNet**
- APIs:
- _Twilio APIs:_
- **WhatsApp Business API**
- **Voice API**
- **Video API**
- _Gemini API (for intelligent question generation and feedback)_
- _Microsoft Text Translator API (for multilingual support)_
## Additional Prize Categories
**Twilio Times Two**: The project uses _Twilio Whatsapp Business API_, _Twilio Voice API_ and _Twilio Video API_.
**Impactful Innovators**: This project helps job seekers by giving them AI-driven, multilingual support and real-time feedback, empowering them to shine and bridging the gap between global talent and opportunities.
**Entertaining Endeavors**: The platform enhances the interview experience by incorporating interactive features like _real-time body language_ analysis and _replayable responses_, making the process both informative and engaging for candidates.
<!-- Don't forget to add a cover image (if you want). -->
<!-- Thanks for participating! → | bilal1718 |
1,897,906 | Scontreeno - Your AI-empowered expense manager companion | This is a submission for Twilio Challenge v24.06.12 What I Built Scontreeno - whose name... | 0 | 2024-06-23T16:21:38 | https://dev.to/marconline/scontreeno-your-ai-empowered-expense-manager-companion-16e2 | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
## What I Built
Scontreeno - whose name comes from "Scontrino", the Italian word for "receipt" - is a Twilio and AI powered expense manager companion.
Scontreeno is offered as a WhatsApp bot in order to simplify the user experience. You can simply chat with it and upload an image (or a PDF) of your receipt. Scontreeno will analyze it, using Microsoft Azure AI Services, and ingest it into the system, creating an AI-enriched search index.

Scontreeno is capable of understanding **where**, **when**, **what** you purchased and **how much** you paid (both for the single items and for the total). By ingesting this information, it's quite easy to enrich the bot with a conversation-based search engine that lets the user ask:
"Hey, how much did I spend for vegetables last week?" or
"Hey, can you please tell me if I spent more for meat or fish during the last month?"
Scontreeno could also provide insights and suggestions to users (even extracting information from other users' receipts). For example:
- you purchase a lot of alcohol / sodas
- your weekly vegetable expense is too low
- you can find your favourite brands at a lower price at shop XYZ this week
## Architectural analysis of the Scontreeno ecosystem

Scontreeno's demo is composed of two Microsoft Azure Functions:
- TwilioInput, which receives messages from Twilio
- ReceiptInput, which receives uploaded media
TwilioInput has the responsibility of understanding whether the message contains media and, if it does, uploading that content to Azure Blob Storage.
As soon as a file is uploaded, an Event Grid event is raised and the second function, ReceiptInput, is triggered. The uploaded media is analyzed using Azure AI Document Intelligence, a powerful cognitive service provided by Microsoft, which can interpret what's inside the document.
Results are then sent back to the user, using Twilio APIs.
The Azure AI Document Intelligence output can easily be linked to Azure AI Search (out of the scope of this demo) to index documents and process them.
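For illustration, the media-detection step inside TwilioInput could look like the hypothetical sketch below (not the repository's actual code). The `NumMedia`, `MediaUrl<N>` and `MediaContentType<N>` form fields are the ones Twilio includes in its incoming-message webhook:

```python
def extract_media(form: dict) -> list:
    """Return (url, content_type) pairs for every media item in a Twilio webhook."""
    count = int(form.get("NumMedia", "0"))
    return [(form[f"MediaUrl{i}"], form[f"MediaContentType{i}"]) for i in range(count)]

# A trimmed-down example of the form fields Twilio posts for a photo message.
webhook = {
    "From": "whatsapp:+15551234567",
    "NumMedia": "1",
    "MediaUrl0": "https://api.twilio.com/example-media-url",
    "MediaContentType0": "image/jpeg",
}
print(extract_media(webhook))
```

When the returned list is non-empty, TwilioInput would download each URL and copy the bytes to Blob Storage, which in turn fires the Event Grid trigger described above.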
## Code
Find the public repository here: [https://github.com/marconline-scontreeno/Scontreeno](https://github.com/marconline-scontreeno/Scontreeno)
## Demo
You can try it - only document upload (jpg / pdf) and receipt analysis - by following these steps:
1. Add the Twilio WhatsApp Sandbox number to your contacts: +1(415)523-8886
2. Send it a "join save-famous" message
3. As soon as you are accepted into the sandbox, kindly reply with a "hello" message
4. After you have been greeted, upload a picture of a receipt (JPG / PDF version). You can also try adding something that is not a receipt, in order to verify that Scontreeno is smart enough!
## Twilio and AI
Programmable Messaging is a great Twilio feature, and its WhatsApp integration lets you create incredible user experiences. Processing incoming messages with AI, as in this Scontreeno demo, lets you build powerful services with almost no effort.


Scontreeno is capable of extracting information from pictures of receipts like this one, even if sentences are skewed and folded.
## Additional Prize Categories
**Impactful Innovators**: in my opinion, the real power of Scontreeno is letting users understand how and when money is spent, so the first positive impact is money management education. But I won't underestimate the power of health suggestions: by understanding what the user purchased, we can provide them with insights useful for their health (for example: you buy too many sodas or too much alcohol, you are not buying vegetables or fruit, your meat consumption is too high, and so on). By using Twilio and WhatsApp, tracking expenses is so easy!
1,897,917 | Entity Simulation with Member and Articulations in Python | EntRAVE is an Python tool to visualize an entity made of members 🔹 and articulations 🔗. Convinced... | 0 | 2024-06-23T16:18:47 | https://dev.to/genius_um/entity-simulation-with-member-and-articulations-in-python-561l | programming, python, ai, opensource | **EntRAVE** is an Python tool to visualize an entity made of members 🔹 and articulations 🔗.

Convinced ? Go on [the GitHub](https://github.com/Geniusum/EntRAVE) | genius_um |
1,897,916 | FRONTEND DEVELOPMENT | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T16:12:35 | https://dev.to/jamesbraun12/frontend-development-4jkm | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Building everything users see and interact with on a website. Core technologies include HTML, CSS and JavaScript.
## Additional Context
HTML provides structure, CSS provides styling and layout, and JavaScript enables interactivity.
1,897,914 | YUQTAM | Best Corporate Gifting Company in India | Streamline gifting with our one-stop bulk solutions. Explore branded personalized & corporate... | 0 | 2024-06-23T16:11:11 | https://dev.to/yuqtam/yuqtam-best-corporate-gifting-company-in-india-4lj4 | corporate, gifting, merchandise, branding | Streamline gifting with our one-stop bulk solutions. Explore branded personalized & [corporate gifting](https://yuqtam.in/) solution for unforgettable moments.
corporate gift, corporate gift giving, corporate gift supplier, corporate gifting, corporate gifting companies in bangalore, corporate gifting companies in mumbai, corporate gifting company, corporate gifting services, corporate gifts near me, corporate gifts suppliers, diwali gift for office employees, diwali gifts for employees, send corporate gifts, yuqtam, offineeds, giftana, giftana and offineeds, offineeds and giftana, | yuqtam |
1,897,913 | What' New in Angular 18 | 1. TypeScript 4.7 Support Angular 18 will fully support TypeScript 4.7, which includes several new... | 0 | 2024-06-23T16:09:00 | https://dev.to/pathan_najim_1a1eef23584f/what-new-in-angular-18-48em | **1. TypeScript 4.7 Support**
Angular 18 will fully support TypeScript 4.7, which includes several new features and improvements. This means that Angular developers will be able to take advantage of the latest TypeScript features, such as:
**i. Template Literal Types:** Allow developers to define string types more precisely and catch errors early.
**ii. Improved read-only Support:** Provides a more consistent and safe way to use the read-only keyword.
**iii. New Import Types:** Help to make code more modular and organized.
**2. Improved Performance with Ivy**
Angular 18 will further improve the performance of Angular applications through optimizations to the Ivy compiler, bringing:
- Faster startup times
- Smaller bundle sizes
- Better overall performance
**3. New ng-template API**
Angular 18 will introduce a new ng-template API that will make creating and using templates easier.
- More flexibility and power.
- The ability to create reusable and maintainable templates.
**4. Improved Debugging Tools:**
Details regarding debugging tool improvements in the current beta are not entirely available. Still, Angular 18 is expected to introduce solid debugging improvements, which could include better error messages, faster stack traces, or even deeper insight into application state during debugging.
**5. Zoneless Applications:**
Zones play a role in managing asynchronous tasks and change detection in Angular. Angular 18 introduces the option of creating zoneless applications. The full implications and use cases are not yet clear; nevertheless, zoneless applications may provide opportunities for optimizing performance while simplifying complex application logic.
**6. Routing Enhancements:**
Angular single-page applications (SPAs) rely heavily on routing, since it lets users move between different views with ease. Angular 18 brings some exciting enhancements to the routing system.
| pathan_najim_1a1eef23584f | |
1,897,912 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-23T16:08:17 | https://dev.to/povahe7690/buy-verified-cash-app-account-228f | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n" | povahe7690 |
1,897,909 | Beginners Python Decision Making: Understanding If-else statements effectively. | If-Else statements and its significance: "If-else" statements are a type of conditional statements... | 0 | 2024-06-23T16:07:18 | https://dev.to/davidbosah/beginners-python-decision-making-understanding-if-else-statements-effectively-40n4 | webdev, beginners, python, tutorial |
**If-Else statements and its significance:**
"If-else" statements are a type of conditional statement in programming that lets specific logic or conditions decide which blocks of code run. In effect, you attach a condition to a code block. The benefits range from making decisions to controlling program flow.
**Practical example:**
Let's say we want to write a program about weight information such that if the user weighs 130 kg or more, he or she is overweight. Here is how we can write it:
```py
weight = int(input("Enter your weight in kg here: "))
if weight >= 130:
    print("You are overweight")
else:
    print("You are good to go")
```
**Principles you must apply:**
1. Use lowercase letters for your variable names and be consistent; Python is case-sensitive, so `Weight` and `weight` are different names.
2. Keep it simple, avoid ambiguity.
3. Use ":" after 'else' to avoid error.
4. Use the integer function `int()` to convert the input to a number, since `input()` always returns a string. | davidbosah |
1,897,911 | Day 27 of my progress as a vue dev | About today Today was another solid day. I ended up getting I lot of work done as I intended and it... | 0 | 2024-06-23T16:06:28 | https://dev.to/zain725342/day-27-of-my-progress-as-a-vue-dev-48mo | webdev, vue, typescript, tailwindcss | **About today**
Today was another solid day. I ended up getting a lot of work done, as I intended, and it was quite fulfilling. I completed my landing page and practiced my refactoring skills to make the code cleaner and more modular, something that had been bugging me since I started working on it. I'm glad I did it, and I now get why writing clean code is such an important skill to have.
**What's next?**
I will be starting work on my third and last landing page tomorrow and hope to complete it soon. I'm looking for something complex to implement in this one that I haven't had the chance to work with before.
**Improvements required**
I am almost able to manage my routine, but there is still some friction I want to remove to reach my full working potential.
Wish me luck! | zain725342 |
1,897,910 | Accessibility in Frontend Development: Best Practices and Tools | Have you ever wondered why some websites lack accessibility? Many front-end developers build products... | 0 | 2024-06-23T16:02:19 | https://dev.to/abdulquadri_akosile_efe07/accessibility-in-frontend-development-best-practices-and-tools-3he0 | webdev, frontend, softwaredevelopment | Have you ever wondered why some websites lack accessibility? Many front-end developers build products without considering accessibility or testing them for impaired users. However, accessibility is vital because different end users need to access these products. Let's shed more light on web accessibility to understand its importance in front-end development better.
## What is Web Accessibility?
Web accessibility means making websites usable for everyone, including people with disabilities. This involves designing and developing web content that can be easily navigated and understood by users who may use assistive technologies, such as screen readers or keyboard navigation. In simple terms, web accessibility ensures that all users, regardless of their abilities, can access and interact with web content.
## Why Does Accessibility Matter in Frontend Development?
Accessibility improvements often lead to a better overall user experience. For example, providing alternative text for images helps visually impaired users and improves SEO. Clear and consistent navigation benefits everyone, not just those with disabilities. By focusing on accessibility, we can create more intuitive, user-friendly interfaces that enhance usability.
## The Impact of Accessibility on User Experience and Inclusivity
**User Experience**: Accessibility features enhance the usability of a website for everyone. For instance, larger fonts and high-contrast colors improve readability for all users, not just those with visual impairments. Keyboard-friendly navigation benefits users who prefer or rely on keyboard use over a mouse.
**Inclusivity**: Web accessibility promotes inclusivity by ensuring that people with disabilities can access and interact with digital content. This inclusivity extends beyond individuals with permanent disabilities to include those with temporary impairments (e.g., a broken arm) and situational limitations (e.g., a noisy environment). By prioritizing accessibility, developers help bridge the digital divide, providing equal access to digital content for everyone.
**Reaching a Broader Audience**: By making websites accessible, businesses and organizations can reach a wider audience. Over a billion people worldwide live with some form of disability, according to the World Health Organization. Ensuring your website is accessible allows you to tap into this significant user base, expanding your reach and potential market. Accessibility also improves SEO, as search engines favor accessible websites, further increasing visibility and traffic.
> **SEO: A Brief Overview**
> **SEO**: Search engine optimization (SEO) is the process of enhancing your website to make it more visible on search engines like Google, Yahoo, and Bing. This helps people find your site when they search for products or services you offer.
## Best Practices for Accessible Frontend Development
To create a website with excellent web accessibility, follow these best practices: use semantic HTML, incorporate ARIA roles and attributes, ensure keyboard navigation is smooth, optimize multimedia elements for accessibility, and properly label forms and form elements. These practices help make your website more usable for everyone, including those with disabilities.
### Semantic HTML
Semantic HTML is the foundation of web accessibility. HTML elements fall into two groups: semantic and non-semantic. Semantic HTML elements provide meaningful structure to a webpage, making it easier for screen readers to navigate, because they define the meaning of the content they contain. Examples of semantic HTML elements are listed below:
- `<header>`: Represents introductory content or a set of navigational links.
- `<nav>`: Defines a set of navigation links.
- `<main>`: Specifies the main content of a document.
- `<aside>`: Represents content indirectly related to the main content, often used for sidebars.
- `<footer>`: Represents the footer for a section or document.
**Example**:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Accessible Webpage</title>
</head>
<body>
<header>
<h1>Website Title</h1>
<nav>
<ul>
<li><a href="#home">Home</a></li>
<li><a href="#about">About</a></li>
<li><a href="#services">Services</a></li>
<li><a href="#contact">Contact</a></li>
</ul>
</nav>
</header>
<main>
<section id="home">
<h2>Welcome to Our Website</h2>
<p>This is the home section.</p>
</section>
<section id="about">
<h2>About Us</h2>
<p>This is the about section.</p>
</section>
</main>
<footer>
<p>© 2024 Accessible Webpage. All rights reserved.</p>
</footer>
</body>
</html>
```
In the example above,
- `<header>`: contains the main heading and navigation links, providing clear structure and meaning.
- `<main>`: encompasses the primary content of the page, with individual `<section>` elements to demarcate different parts of the content.
- `<footer>`: provides information typically found at the bottom of a webpage, such as copyright information.
Non-semantic HTML elements, like `<div>` and `<span>`, by contrast, do not inherently convey meaning or structure.
**Note**: They can still be used in an accessible way with ARIA roles and attributes:
```html
<div role="navigation">
<ul>
<li><a href="#home">Home</a></li>
<li><a href="#about">About</a></li>
<li><a href="#services">Services</a></li>
<li><a href="#contact">Contact</a></li>
</ul>
</div>
```
The `<div>` element is given `role="navigation"` to indicate that it serves as a navigation section. This helps screen readers understand its purpose.
### ARIA
ARIA (Accessible Rich Internet Applications) roles, states, and properties are crucial tools for enhancing the accessibility of dynamic web content, especially for users with disabilities. They provide semantic meaning to elements, making it easier for assistive technologies like screen readers to interpret and interact with web content effectively.
**Example**
```html
<div role="alert">
Error: Your submission could not be processed. Please try again.
</div>
```
In this example, using the `role="alert"` attribute for error messages ensures that they are announced immediately by screen readers, alerting users to important information.
### Keyboard Navigation
To ensure accessibility, interactive elements on a website should be easily navigable using only the keyboard. This means users should be able to move between elements using the TAB key and activate them using the ENTER or space bar keys. Proper focus management and the use of the `tabindex` attribute help achieve this, ensuring that all users, including those who rely on keyboards, can interact with the website effectively.
**Example**:
```html
<nav aria-label="Main Navigation">
<ul>
<li><a href="#home" tabindex="0">Home</a></li>
<li><a href="#about" tabindex="0">About</a></li>
<li><a href="#services" tabindex="0">Services</a></li>
<li><a href="#contact" tabindex="0">Contact</a></li>
</ul>
</nav>
```
In this example, the `tabindex="0"` attribute keeps the links in the natural keyboard tab order, letting users move through them using the TAB key. The `aria-label="Main Navigation"` attribute describes the navigation section for screen readers, helping users understand its purpose.
### Color Contrast and Visual Design
Ensuring sufficient color contrast (e.g., a 4.5:1 ratio for normal text) is crucial for readability. Tools like the [WCAG Contrast Checker](https://webaim.org/resources/contrastchecker/) can help verify that your color choices meet accessibility standards. The design principle of contrast refers to the use of visually different elements. In addition to capturing attention, contrast can guide the viewer's eye to a focal point, highlight important information, and add variety or even drama to a design.
**Example**:
```html
<p style="color: #333; background-color: #fff; font-size: 16px;">
This is an example of text with sufficient contrast.
</p>
```
In this example, the **text color is dark (#333)** and the **background color is light (#fff)**, providing enough contrast for easy reading. This combination meets the 4.5:1 ratio recommended for normal text by accessibility standards. The **font size is also 16px**, which is considered a good size for readability.
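For reference, the 4.5:1 figure comes from the WCAG contrast-ratio formula, which you can compute directly. Below is a small Python sketch (the function names are our own) that checks the #333-on-#fff pairing from the example:

```python
def _linear(channel):
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color):
    """Relative luminance of a '#rrggbb' color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(foreground, background):
    """WCAG contrast ratio between two colors, always >= 1."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio("#333333", "#ffffff")
print(f"{ratio:.2f}:1")  # comfortably above the 4.5:1 minimum for normal text
```

Note that the sketch expects full six-digit hex colors, so the shorthand `#333` is written as `#333333` here.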
### Forms and Labels
Properly labeled form elements improve usability. Using `aria-label`, `aria-labelledby`, and `aria-describedby` ensures that screen readers can correctly announce form elements and instructions.
**Example**:
```html
<form>
<label for="name">Name:</label>
<input type="text" id="name" name="name" aria-label="Full Name">
<label for="email">Email:</label>
<input type="email" id="email" name="email" aria-labelledby="emailLabel">
<span id="emailLabel">Email Address</span>
<button type="submit">Submit</button>
</form>
```
In this example, the "Email" field has an additional description provided by a `<span>` element with the text "Email Address". This description is associated with the field using the `aria-labelledby` attribute.
The "Name" field also has an `aria-label` attribute set to "Full Name", providing an alternative accessible name for the field.
**Note:** The `aria-label` and `aria-labelledby` attributes provide additional context and make the form more accessible to users who rely on screen readers.
### Media and Multimedia
Providing descriptive alt text is crucial for accessibility, as it ensures that users with visual impairments can understand the content and context of the image. This is particularly important when the image conveys important information or serves a functional purpose on the webpage.
**Example**:
```html
<img src="image.jpg" alt="Description of the image">
```
In this example, the `alt` attribute provides a text description of the image, which is important for users who cannot see the image, such as those using screen readers.
## Essential Tools for Testing Web Accessibility
There are several tools available to help ensure accessibility in front-end development. These tools can help identify and fix accessibility issues in your code. Some popular tools include:
- **Lighthouse:** An open-source, automated tool for improving the quality of web pages. It includes an accessibility audit that can identify common issues and provide suggestions for improvement.
- **axe:** A free and open-source accessibility testing tool that can be used as a browser extension or integrated into your development workflow. It helps identify and fix accessibility issues in your code.
- **WAVE (Web Accessibility Evaluation Tool):** A suite of evaluation tools from WebAIM that provides visual feedback about the accessibility of your web content, identifying errors and suggesting improvements.
## Conclusion
Implementing accessibility in frontend development is crucial for creating inclusive web experiences. By following best practices like using semantic HTML, ARIA roles, proper keyboard navigation, sufficient color contrast, and correctly labeled forms, developers can ensure that their websites are accessible to all users, regardless of their abilities. Accessible design not only benefits users with disabilities but also enhances the overall user experience for everyone.
## Resources
[Accessibility — Make the web usable by everyone](https://developer.mozilla.org/en-US/docs/Learn/Accessibility)
[HTML elements reference](https://developer.mozilla.org/en-US/docs/Web/HTML/Element)
[Contrast Checker](https://webaim.org/resources/contrastchecker/)
| abdulquadri_akosile_efe07 |
1,897,908 | Launching GitLoop AI codebase Assistant and code reviewer for GitHub PRs, commits, issues and more... | I'm excited to announce new features for https://gitloop.com, a project I've been working on for... | 0 | 2024-06-23T15:55:06 | https://dev.to/akirasato/launching-gitloop-ai-codebase-assistant-and-code-reviewer-for-github-prs-commits-issues-and-more-431j | webdev, github, chatgpt, ai | I'm excited to announce new features for [GitLoop](https://gitloop.com), a project I've been working on for months.
GitLoop now not only scans and understands your entire codebase but also helps with new issues and reviews your changes directly on GitHub. Here’s what you can expect:
- **Deep Analysis:** Personalized AI assistance for codebases.
- **Automated Reviews:** GitLoop automatically reviews your pull requests and commits.
- **Custom Reviewing Configs and Prompts:** Tailor the review process to fit your specific needs.
- **Conversational Insights:** Get conversational explanations on why changes are necessary.
- **Issue Assistance:** When a new issue is created, GitLoop is ready to help based on its deep understanding of your codebase.
Check it out here: [https://www.gitloop.com](https://www.gitloop.com) and give me your feedback.
GitLoop won’t replace the invaluable human interactions essential for mentoring and teaching juniors. Instead, it provides helpful hints to ensure no important details are overlooked. | akirasato |
1,895,709 | How to used CORS? | CORS? Web applications request data and receive responses over the HTTP protocol. In other words, a website makes requests from an origin called a URL,... | 0 | 2024-06-23T15:45:28 | https://dev.to/hxxtae/how-to-used-cors-4ld |

## CORS?
Web applications request data and receive responses over the HTTP protocol.
In other words, a website makes requests from an origin identified by its URL, and the security model that restricts documents or scripts loaded from the `same origin` from interacting with resources fetched from other origins is called the **SOP (Same-Origin Policy)**.
Browsers use this policy to keep malicious users from stealing information and to isolate potentially malicious documents.
However, in the open space that is the web, using resources from other origins happens all the time and cannot simply be blocked outright, so a security mechanism called **CORS (Cross-Origin Resource Sharing)** emerged.
> Two URLs have the `same origin` when their protocol, port (if specified), and host are all identical.
### SOP(Same Origin Policy)
- A security policy that restricts the use of resources from different origins.
- Requests to different domains made from JavaScript, such as via `XMLHttpRequest` or the `Fetch API`, follow the SOP.
### CORS(Cross Origin Resource Sharing)
- A security mechanism for sharing resources across different origins.
- Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell the browser to grant a web application running at one origin access to selected resources from a different origin. In other words, it is the gate that checks whether the resources we fetch are safe.
- Cross-origin embedding via tags such as `link`, `script`, `img`, `video`, `audio`, and `iframe`, as well as `@font-face`, follows the CORS policy.
> The logic that compares origins is a spec implemented in the `browser`, not on the server.

To sum up, for a network request governed by the CORS policy, the request must carry an `Origin` header identifying the site sending it, and the response must carry a corresponding `Access-Control-Allow-Origin` header.
As a result, the network request completes normally and we get the resource we wanted only when the domain in the request's `Origin` header matches the one specified in the response's `Access-Control-Allow-Origin` header. -> **CORS issues arise from the SOP.**
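As a rough sketch, the browser-side comparison described above can be mimicked in a few lines of Python (a simplified illustration of the non-credentialed case; the function name is our own invention):

```python
def cors_allows(request_origin, allow_origin_header):
    """Mimic the browser's check for a non-credentialed request: the response's
    Access-Control-Allow-Origin must be '*' or exactly match the request's Origin."""
    if allow_origin_header is None:  # header missing: the browser blocks the response
        return False
    return allow_origin_header == "*" or allow_origin_header == request_origin

print(cors_allows("https://app.example.com", "https://app.example.com"))  # True
print(cors_allows("https://app.example.com", "https://evil.example.com"))  # False
```

As discussed later for credentialed requests, the wildcard `*` is no longer acceptable once credentials are involved, so the real browser check has more cases than this sketch.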
We always nod along when we read the theory, but when we actually run into it...

## Three Scenarios of CORS Behavior
As described above, CORS works through the following steps.
1. The client sends a request over HTTP, including the request's origin in the `Origin` header.
2. The server sends a response back to the client with the allowed origin in the `Access-Control-Allow-Origin` header.
3. When the client receives the response, the browser compares `Origin` with `Access-Control-Allow-Origin` to check whether the SOP has been violated.
This behavior varies across the three scenarios below.
- Preflight Request
- Simple Request
- Credentialed Request
### Preflight Request
The browser splits the exchange into a preliminary request and the actual request; this preliminary request is called the preflight.

- A preliminary request performed before the actual request to check whether the actual request is safe to send.
- The server returns an `Access-Control-Allow-Origin` value in the response headers so the browser can judge whether the SOP would be violated.
- `Access-Control-Allow-Origin: * | (origin)`
- The preflight is sent using the OPTIONS method.
- **This is the type of request where CORS errors most commonly occur.**
### Simple Request
Although it has no formal name of its own, making a request to the server without sending a preflight request is called a simple request.
However, a simple request cannot be used at will; the preflight can be skipped only when specific conditions are met. Moreover, these conditions are rather strict and hard to satisfy with a typical web application architecture, so I have rarely encountered this case myself.

- Sends the request straight to the server without a preflight.
- The server returns an `Access-Control-Allow-Origin` value in the response headers so the browser can judge whether the SOP would be violated.
- `Access-Control-Allow-Origin: * | (origin) | null`
- When `Access-Control-Allow-Origin: <origin>` is used, the `Vary` header must also be provided. [See MDN](https://developer.mozilla.org/ko/docs/Web/HTTP/Headers/Vary)
> A simple request is possible only when all of the conditions below are met.
> - Uses one of the GET, HEAD, or POST methods
> - Must not use any headers other than Accept, Accept-Language, Content-Language, Content-Type, DPR, Downlink, Save-Data, Viewport-Width, and Width
> - When Content-Type is used, only application/x-www-form-urlencoded, multipart/form-data, and text/plain are allowed
### Credentialed Request
A credentialed request is the method to use when you want stronger security on a request.
By default, XMLHttpRequest and fetch do not include the browser's cookies or authentication-related headers in requests, but a credentialed request lets you put authentication information into the headers.
The option that makes this possible is the `credentials` option.
> The three `credentials` options
> - same-origin: The default; includes credentials only in requests between the same origin.
> - include: Includes credentials in every request.
> - omit: Never includes credentials in any request.
If credentials are included in a resource request via an option like `same-origin` or `include`, then when requesting a resource from another origin the browser no longer checks only `Access-Control-Allow-Origin`; the server must also include `Access-Control-Allow-Credentials: true` in the response headers.
- A request used when authentication-related headers are added on top of the usual CORS flow.
- Using the `credentials` option in the request lets it carry authentication-related headers.
- The server cannot use `Access-Control-Allow-Origin: *` in the response headers; it must be an explicit URL.
- The response headers must include `Access-Control-Allow-Credentials: true` for the client's credentialed request to be allowed.
---
ref: https://evan-moon.github.io/2020/05/21/about-cors/
ref: https://beomy.github.io/tech/browser/cors/
| hxxtae | |
1,897,904 | Stylish Accordion Animation | Check out this Pen I made! | 0 | 2024-06-23T15:31:52 | https://dev.to/alcu1n/stylish-accordion-animation-233f | codepen | Check out this Pen I made!
{% codepen https://codepen.io/Alcu1n/pen/eYaryOb %} | alcu1n |
1,897,891 | Concept of computer science | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T15:16:30 | https://dev.to/milesonerd/concept-of-computer-science-4g31 | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Computer science is the study of algorithms, data structures, programming, and computing theory. It involves designing and analyzing software and hardware to solve problems, automate tasks, and process information efficiently using computers.
| milesonerd |
1,897,890 | Weaving Your Enterprise Together: Implementing Integration Patterns with AWS Messaging Services | Weaving Your Enterprise Together: Implementing Integration Patterns with AWS Messaging... | 0 | 2024-06-23T15:14:54 | https://dev.to/virajlakshitha/weaving-your-enterprise-together-implementing-integration-patterns-with-aws-messaging-services-14o6 | 
# Weaving Your Enterprise Together: Implementing Integration Patterns with AWS Messaging Services
In today's dynamic digital landscape, enterprises rely on a complex interplay of applications, services, and systems. Effectively connecting these disparate components is paramount for achieving agility, scalability, and efficiency. This is where Enterprise Integration Patterns (EIPs) come into play, offering proven blueprints for orchestrating data flow and communication between applications. AWS provides a robust suite of messaging services that act as the backbone for implementing these patterns, enabling seamless integration within and across organizational boundaries.
This blog post delves into the world of EIPs, focusing on popular patterns like Publish/Subscribe, Point-to-Point, and Request-Reply. We'll explore how AWS services, primarily Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS), empower architects and developers to realize these patterns effectively. We'll also touch upon alternative solutions offered by other cloud providers.
### Understanding the Building Blocks: SQS and SNS
Before diving into specific patterns, let's understand the core AWS messaging services:
* **Amazon SQS:** A fully managed message queuing service. SQS provides a reliable intermediary for applications to send, store, and receive messages asynchronously. It offers two queue types:
* **Standard Queues:** Focus on high throughput, best-effort ordering, and at-least-once delivery. Ideal for use cases like log aggregation and stream processing.
* **FIFO (First-In, First-Out) Queues:** Guarantee strict message ordering and exactly-once processing. Suitable for applications requiring transactional integrity, such as order processing.
* **Amazon SNS:** A highly available, durable, and scalable pub/sub messaging service. SNS enables message distribution to various subscribers, including SQS queues, HTTP/HTTPS endpoints, email, and mobile push notifications. It decouples publishers from subscribers, promoting flexible and scalable communication.
### Common Integration Patterns and their AWS Implementations
1. **Publish/Subscribe (Pub/Sub)**
- **Concept:** A messaging pattern where publishers send messages to a topic without knowledge of specific receivers (subscribers). Subscribers express interest in specific topics and receive messages published to those topics.
- **AWS Implementation:** SNS forms the core of Pub/Sub on AWS.
- Publishers publish messages to SNS topics.
- Subscribers, such as SQS queues, Lambda functions, or HTTP endpoints, subscribe to relevant topics.
- SNS efficiently routes messages from publishers to all interested subscribers.
- **Use Case Example:** An e-commerce platform can use SNS to notify various services (inventory, shipping, notifications) about a new order. The order service publishes the order details to an SNS topic, and subscribing services process the message according to their role.
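The flow above can be sketched with a minimal in-memory stand-in for an SNS topic. This is an illustration of the pattern only, not boto3 or the real SNS API, and all names here are invented for the example:

```python
class Topic:
    """Minimal in-memory stand-in for an SNS topic (illustration only)."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        """Register a callback, playing the role of an SQS queue or Lambda subscription."""
        self._subscribers.append(handler)

    def publish(self, message):
        """SNS-style delivery: every subscriber receives its own copy of the message."""
        for handler in self._subscribers:
            handler(dict(message))

# The order service publishes once; inventory and shipping each react independently.
order_topic = Topic()
inventory_log, shipping_log = [], []
order_topic.subscribe(inventory_log.append)
order_topic.subscribe(shipping_log.append)
order_topic.publish({"order_id": "A-100", "sku": "widget"})
```

The key property of the pattern is visible here: the publisher never names its subscribers, so new consumers can be added without touching the order service.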
2. **Point-to-Point**
- **Concept:** This pattern ensures that a message is consumed by exactly one consumer. It's typically used for tasks that require guaranteed processing and avoidance of duplicate work.
- **AWS Implementation:** SQS is well-suited for Point-to-Point messaging.
- A producer sends a message to an SQS queue.
- A single consumer polls the queue for messages.
- SQS guarantees that only one consumer receives and processes a given message (using message visibility timeouts).
- **Use Case Example:** A financial application might use SQS for processing transactions. Each transaction request is sent to an SQS queue, and a dedicated worker process fetches and processes each message, ensuring each transaction is processed once.
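Here is a minimal in-memory sketch of the competing-consumers behavior described above. It is illustrative only: real SQS adds visibility timeouts and explicit message deletion, which are omitted here.

```python
import queue

# In-memory stand-in for an SQS queue with two competing consumers.
transactions = queue.Queue()
for txn_id in ("t1", "t2", "t3", "t4"):
    transactions.put(txn_id)

processed_by = {"worker-a": [], "worker-b": []}
workers = ["worker-a", "worker-b"]
polls = 0
while not transactions.empty():
    worker = workers[polls % len(workers)]  # the two consumers take turns polling
    processed_by[worker].append(transactions.get())
    polls += 1

# Every message was consumed exactly once, by exactly one worker.
print(processed_by)
```

The point of the sketch is the delivery contract: each `get()` hands a message to a single consumer, whereas in the Pub/Sub pattern every subscriber would receive a copy.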
3. **Request-Reply**
- **Concept:** This pattern enables synchronous communication where a sender expects a response to a message.
- **AWS Implementation:** A combination of SQS and SNS can facilitate a Request-Reply pattern.
- **Request:** The client sends a message to a dedicated request queue (SQS). This message includes a ReplyToQueueURL pointing to a separate response queue.
- **Processing:** A backend service (consumer) polls the request queue, processes the request, and sends the reply to the ReplyToQueueURL specified in the request.
- **Response:** The client listens to the designated response queue for the reply.
- **Use Case Example:** A travel booking system can use this pattern for real-time fare inquiries. The client sends a request to an SQS queue, and the booking service processes the request, queries external APIs, and sends the fare details back to the client's response queue.
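The request/reply exchange above can be sketched with two in-memory queues standing in for the SQS request and response queues. The names, correlation-id scheme, and fare values are invented for illustration:

```python
import queue
import uuid

request_q, reply_q = queue.Queue(), queue.Queue()

# Client: send a request carrying a correlation id and a reply address
# (reply_q stands in for the ReplyToQueueURL mentioned above).
correlation_id = str(uuid.uuid4())
request_q.put({"correlation_id": correlation_id,
               "reply_to": reply_q,
               "body": {"route": "NYC-LON"}})

# Service: poll the request queue, process it, and reply to the given queue.
request = request_q.get()
fare = {"route": request["body"]["route"], "fare_usd": 420}  # invented result
request["reply_to"].put({"correlation_id": request["correlation_id"], "body": fare})

# Client: wait for the reply and match it to the request by correlation id.
reply = reply_q.get()
assert reply["correlation_id"] == correlation_id
print(reply["body"])
```

Carrying a correlation id matters because, with real queues, several requests can be in flight at once and replies may arrive out of order.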
4. **Fan-Out**
- **Concept:** The Fan-Out pattern distributes a message to multiple endpoints or processing pipelines for parallel processing.
- **AWS Implementation:** SNS excels in Fan-Out scenarios.
- A publisher sends a message to an SNS topic.
- Multiple SQS queues, Lambda functions, or other endpoints can subscribe to the topic.
- SNS delivers the message to all subscribers concurrently.
- **Use Case Example:** A media processing pipeline might use SNS to trigger different processing tasks (thumbnail generation, video transcoding, metadata extraction) for a newly uploaded video. Each task subscribes to the SNS topic and performs its specific operation in parallel.
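The parallel delivery at the heart of Fan-Out can be sketched with a thread pool standing in for SNS's concurrent delivery — plain Python with invented function names, not the real service API:

```python
from concurrent.futures import ThreadPoolExecutor

def make_thumbnail(video):   return f"thumbnail({video})"
def transcode(video):        return f"transcoded({video})"
def extract_metadata(video): return f"metadata({video})"

SUBSCRIBERS = [make_thumbnail, transcode, extract_metadata]

def fan_out(message):
    # Deliver the same message to every subscriber concurrently,
    # as SNS does when several endpoints subscribe to one topic.
    with ThreadPoolExecutor(max_workers=len(SUBSCRIBERS)) as pool:
        futures = [pool.submit(s, message) for s in SUBSCRIBERS]
        return [f.result() for f in futures]

results = fan_out("upload.mp4")
```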
5. **Routing Slip**
- **Concept:** The Routing Slip pattern defines a sequence of processing steps for a message. Each step may involve different services or logic.
- **AWS Implementation:** This can be achieved using SQS and AWS Step Functions.
- A message is initially placed in an SQS queue.
- Step Functions orchestrates the message flow:
- Each state in the Step Function can define a Lambda function or other service integration to process the message.
- Messages can be passed between states, ensuring sequential execution of the defined workflow.
- **Use Case Example:** An order fulfillment process might involve inventory checks, payment authorization, and shipping label generation. Step Functions can orchestrate this flow, routing the order information through the necessary steps.
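The Routing Slip idea — an ordered list of steps, each handing its output to the next — can be sketched as follows (plain Python; the step functions mimic the Step Functions states from the example and are purely illustrative):

```python
def check_inventory(order):
    return {**order, "inventory_ok": True}

def authorize_payment(order):
    return {**order, "payment_ok": True}

def create_shipping_label(order):
    return {**order, "label": f"LBL-{order['order_id']}"}

# The routing slip: an ordered list of processing steps, analogous to the
# states of a Step Functions workflow.
ROUTING_SLIP = [check_inventory, authorize_payment, create_shipping_label]

def process(order):
    for step in ROUTING_SLIP:   # each state passes its output to the next
        order = step(order)
    return order

result = process({"order_id": "A-7"})
```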
### Beyond AWS: A Glimpse at Other Solutions
While AWS provides robust tools, other cloud providers offer comparable services:
* **Azure Service Bus:** Offers similar features to SQS and SNS, including queues, topics/subscriptions, and support for various messaging protocols.
* **Google Cloud Pub/Sub:** A scalable, real-time messaging service that follows the Pub/Sub model, enabling asynchronous message delivery to various subscribers.
* **RabbitMQ:** An open-source message broker known for its flexibility and support for various messaging protocols.
### Conclusion
AWS messaging services, coupled with a deep understanding of Enterprise Integration Patterns, provide a powerful toolkit for building loosely coupled, scalable, and robust enterprise systems. By leveraging these services and design patterns, organizations can improve application resilience, streamline data flows, and foster greater agility in responding to evolving business needs.
### Architect's Corner: An Advanced Use Case
Let's imagine a global e-commerce platform facing challenges with high-volume order processing and real-time inventory management. We can architect a solution using a combination of AWS services and EIPs:
**Scenario:** The platform experiences millions of orders per day. It's crucial to ensure order processing is efficient, inventory is accurately updated in real-time, and customers receive timely updates.
**Solution:**
1. **Order Ingestion:**
- Orders are received via API Gateway and placed into a high-throughput Kinesis Data Stream.
- This stream acts as a buffer, handling the initial burst of incoming orders.
2. **Order Enrichment and Routing:**
- Kinesis Data Analytics processes the order stream, performing tasks like:
- Enriching order data with customer information from a DynamoDB table.
- Routing orders based on criteria like product type or shipping location.
3. **Parallel Processing with Fan-Out:**
- SNS topics are used to fan-out order information to various downstream systems:
- **Inventory Management:** An SQS queue receives order details for inventory reservation. A dedicated worker process (e.g., ECS tasks) processes these messages, updating inventory levels in near real-time in a DynamoDB table.
- **Payment Processing:** Orders are routed to a separate SQS queue for payment authorization, leveraging services like AWS Lambda and potentially interacting with external payment gateways.
- **Shipping and Fulfillment:** Orders are published to another SNS topic for shipping label generation, warehouse picking, and dispatch notifications.
4. **Order Status Updates:**
- Each processing step publishes status updates to designated SNS topics.
- Customers can subscribe to relevant topics (e.g., order confirmation, shipment updates) via email, SMS (using services like AWS SNS or Twilio), or mobile push notifications.
**Benefits:**
- **High Throughput & Scalability:** Kinesis and SQS handle massive order volumes.
- **Real-time Inventory:** DynamoDB provides low-latency reads/writes for accurate inventory.
- **Loose Coupling:** SNS decouples systems, allowing independent scaling and evolution.
- **Real-time Customer Communication:** SNS enables flexible and targeted order status updates.
This example illustrates how combining AWS services and EIPs creates a powerful, event-driven architecture capable of handling complex business requirements at scale. By carefully choosing the right patterns and services, architects can build highly resilient, responsive, and scalable systems.
| virajlakshitha | |
1,897,887 | Why is Everyone into Indie Development? - FAV0 Weekly Issue 004 | website: fav0.com These are the shells I collected with friends at the beach last week. It was... | 0 | 2024-06-23T15:12:37 | https://dev.to/justin3go/why-is-everyone-into-indie-development-fav0-weekly-issue-004-1maj | website: [fav0.com](https://fav0.com/en/posts/2024/004)

These are the shells I collected with friends at the beach last week. It was quite a haul! For someone who has lived inland all their life, it was very exciting.
## \>\>Topics to Discuss
**Why are more and more people getting into indie development?**
A few years ago, most programmers aimed to join big tech companies, proudly wearing badges from these firms. Training courses, interview materials, and study guides were in high demand.
However, in recent years, there's been a shift towards indie development, with many aspiring to live a freelance lifestyle. Tools, websites, and resources for indie development are popping up everywhere.
Here, I want to share a few reasons that come to mind, hoping to spark some discussion.
**1) The Advent of the ChatGPT Era**
This has twofold benefits for promoting indie development:
1. Lower Barrier: Programmers can quickly become full-stack developers using ChatGPT tools, mastering full-stack development techniques, and more easily developing their own desired websites independently.
2. More Opportunities: Almost all applications can integrate with "LLM+", and there are still many niches that haven't yet adopted this technology.
**2) Full Control Over Your Work**
You often have to use poorly built infrastructure; you have to use undocumented components; you work on requirements that get removed and then re-added; you have to integrate absurd interfaces because "all the data is there, so it's doable."
Many times, you end up doing seemingly meaningless work without any recourse.
In indie development, although you may have to do more, at least most things can be done in a way that suits you.
**3) Forced into Indie Development Due to Layoffs**
**4) The Anticipation of Gaining Unique Experiences Compared to a Fixed Salary Job**
Positive user feedback, subscription messages, etc., can be very rewarding.
**5) Everyone Else is Doing It and Making Money, So I Want to Make Money Too**
*Of course, it might also be due to the information bubble effect. My focus has changed, so the information I encounter has also changed...*
## \>\>Must Read
### [CSS if Statements](https://x.com/LeaVerou/status/1801192208025940200)
The CSS WG has decided to add an inline `if()` to CSS. The `if()` function complements, but does not replace, media queries.
Like this:
```css
background: if(style(--variant: success), var(--green));
padding: if(var(--2xl), 1em, var(--xl) or var(--m), .5em);
```
Note: This feature is not yet available in browsers and will take some time, with the most optimistic estimate being about 2 years.
Related Links:
- [Github Issues Discussion](https://github.com/w3c/csswg-drafts/issues/10064)
- [Blog - Inline Conditionals in CSS?](https://lea.verou.me/blog/2024/css-conditionals/)
### JavaScript 2023 Survey Report
Very comprehensive, and the website is well-designed. Highly recommended!
Includes:
1. Practitioner Statistics
2. Features
3. Libraries
4. Other Tools
5. Usage
6. Resources

Interestingly, Vite won three awards in the final tally!

### [Kuaishou's Keling Large Model Release: Image-to-Video and Video Continuation Features](https://kling.kuaishou.com/)

A netizen generated a very interesting video from a classic image. [Related Link](https://x.com/Gorden_Sun/status/1804051003681149110):

## \>\>Useful Tools
### [Free Online Full-Stack Development Tutorial](https://www.theodinproject.com/)
Build dozens of showcase-worthy projects, from simple scripts to full programs and deployed websites...
You can learn:
1. Intermediate to Advanced HTML and CSS
2. Databases
3. NodeJS
4. JavaScript
5. React
6. Job-Related Skills
7. Ruby on Rails
8. Ruby
### [A Cool Front-End Logo Effect](https://github.com/guilanier/codrops-sdf-lensblur)
[Online Demo](https://tympanus.net/Tutorials/SDFLensBlur/)


### Open-Source Local Bulk Image Compression Tool
Purely local compression with no server-side logic, making it completely secure and open-source for self-deployment.
As shown below, I tested it, and this image was compressed from `2.48MB` to `370.8KB`, with almost no perceptible difference.

### [Microsoft's Open-Source Rich Text Editor](https://github.com/Microsoft/roosterjs?tab=readme-ov-file)
A framework-independent JavaScript rich text editor, neatly nested within an HTML `<div>` element.
It’s fully featured, but I personally don't like the color scheme of the demo site...

### [Tetris Font Animation](https://erikdemaine.org/fonts/tetris/?text=Justin3go&speed=4&black=1&grid=1&center=1)
Check out the effect, it's quite interesting:

### [Library for Estimating Token Costs](https://github.com/AgentOps-AI/tokencost)
Much like the "Internet+" era, almost all applications can integrate with "LLM+".
However, most applications need to call APIs from major companies like OpenAI, which can be quite costly!
Hence, estimating costs is crucial. This Python library tracks prices from major providers in real-time and makes it easy to calculate token expenses:

### [Project-Based Learning Programming Tutorials](https://github.com/practical-tutorials/project-based-learning)
It’s well-known that learning programming involves more than just learning syntax; you need to practice by writing projects.
Here’s a highly recommended list of project-based learning tutorials on GitHub, with 177k stars.
It covers nearly 20 common programming languages and almost all computer science fields.
For example, building a Jupyter extension with JS:

## \>\>Interesting Finds
### Why is Changing Jobs Called "Jumping Ship"?
If we are not cattle or horses, why is changing jobs referred to as "jumping ship"?

## \>\>Worth Reading
### [Designing Data Tables](https://bootcamp.uxdesign.cc/data-table-design-patterns-4e38188a0981)
Tables are one of the most common elements in front-end development, but there are many design considerations. This article shares a lot of valuable insights.
### [Overview of Front-End Build Tools](https://sunsetglow.net/posts/frontend-build-systems.html)
### [Creating Flowcharts with CSS](https://coryrylan.com/blog/flow-charts-with-css-anchor-positioning)
With the introduction of the CSS Anchor Position API in Chrome 125, positioning elements relative to another element has become easier. This is a great way to handle complex positioning use cases like popups and tooltips.
However, CSS anchor positioning can also be used to create basic flowcharts. In this article, you can learn how to use CSS anchor positioning to create flowcharts and diagrams using only CSS.
### [Do Animals Have Consciousness?](https://news.ycombinator.com/item?id=40694284)
Bees can count, recognize faces, and use tools; octopuses avoid pain and seek pain relief; crabs overcome their aversion to light after experiencing electric shocks. These findings have prompted some scientists to reconsider whether animals possess consciousness.
While current evidence is not yet sufficient to conclusively prove animal consciousness, it is enough to suggest that animals might possess consciousness.
| justin3go | |
1,897,886 | HERE IS NEW LANGUAGE TRANSLATOR | Check out this Pen I made! | 0 | 2024-06-23T15:12:21 | https://dev.to/jonse_ketela_b13c463d2acf/here-is-new-crime-locator-app-el0 | codepen | Check out this Pen I made!
{% codepen https://codepen.io/Jonse-ketela/pen/abrGVPM %} | jonse_ketela_b13c463d2acf |
1,897,885 | Issue 49 and 50 of AWS Cloud Security Weekly | (This is just the highlight of Issue 49 and 50 of AWS Cloud Security weekly @... | 0 | 2024-06-23T15:11:04 | https://aws-cloudsec.com/p/issue-49-and-50 | (This is just the highlight of Issue 49 and 50 of AWS Cloud Security weekly @ https://aws-cloudsec.com/p/issue-49-and-50 << Subscribe to receive the full version in your inbox weekly for free!!).
**What happened in AWS CloudSecurity & CyberSecurity last week June 10-June 20, 2024?**
- IAM Access Analyzer now provides actionable recommendations to assist you in addressing unused access. For roles, access keys, and passwords that are not in use, IAM Access Analyzer offers convenient console links to facilitate their deletion. Regarding unused permissions, IAM Access Analyzer evaluates your current policies and suggests refined versions customized to your access patterns.
- AWS has launched Amazon GuardDuty Malware Protection for Amazon S3 which enables scanning of newly uploaded objects to Amazon S3 buckets for potential malware, viruses, and suspicious uploads so that you can action to isolate these objects before they impact downstream processes.
- AWS Private Certificate Authority (AWS Private CA) introduces the Connector for SCEP, enabling secure and scalable enrollment of mobile devices using a managed cloud certificate authority (CA). Simple Certificate Enrollment Protocol (SCEP) is widely adopted by mobile device management (MDM) solutions for obtaining digital identity certificates from a CA and enrolling both corporate-issued and bring-your-own-device (BYOD) mobile devices. With the Connector for SCEP, organizations can leverage a managed private CA and SCEP solution to streamline operations, reduce costs, and optimize their public key infrastructure (PKI). Furthermore, this connector allows integration of AWS Private CA with leading SCEP-compatible MDM solutions such as Microsoft Intune and Jamf Pro.
- AWS Identity and Access Management (IAM) now introduces passkeys for multi-factor authentication. Built on FIDO standards and utilizing public key cryptography, passkeys provide robust authentication that is resistant to phishing attacks, surpassing traditional password security measures. The support is compatible with built-in authenticators such as Touch ID on Apple MacBooks and facial recognition via Windows Hello on PCs. Passkeys can be generated using a hardware security key or through a chosen passkey provider, utilizing methods like fingerprint, facial recognition, or device PIN.
- Amazon EKS has released the Pod Identity agent as open source that you can package and deploy the agent within EKS clusters. Pod Identity is a feature designed to streamline the configuration of Kubernetes applications with AWS IAM permissions for cluster administrators. To leverage the Pod Identity feature, it is necessary to run the Pod Identity agent on the worker nodes of the cluster. By open sourcing the Pod Identity agent, users now have the ability to independently build the agent. This grants a range of options for packaging and deploying the agent, allowing alignment with organizational deployment practices.
- AWS KMS has introduced support for Elliptic Curve Diffie-Hellman (ECDH) key agreement. This feature enables two parties to establish a shared secret securely over a public channel. With ECDH in AWS KMS, you can use another party's public key along with your own elliptic-curve KMS key hosted within the FIPS 140-2 validated hardware security module (HSM) of AWS Key Management Service (KMS) to derive this shared secret. Subsequently, the shared secret can be utilized to derive a symmetric key for encrypting and decrypting data between the parties using a symmetric encryption algorithm within your application.
- AWS introduced natural language query generation powered by generative AI in AWS CloudTrail Lake (preview) which equips you to analyze AWS activity events without needing to write intricate SQL queries and just simply ask questions in plain English. (Note: I did have some errors at times- "Query generator failed to generate a query. A valid SQL statement could not be generated using the given prompt. Reword your prompt and try again” and this feature is in early phase so you should double check the generated SQL queries to make sure it’s generating what you are investigating.)
**Trending on the news & advisories (Subscribe to the newsletter for details):**
- Panera disclosed a security incident.
- Advance Auto Parts confirmed a data breach.
| aws-cloudsec | |
1,897,883 | Triangle : summarize,ask, tweet,note ? | This is a submission for Twilio Challenge v24.06.12 What I Built I built a Twilio chatbot... | 0 | 2024-06-23T15:07:50 | https://dev.to/ppkshashi/triangle-summarizeask-tweetnote--3337 | devchallenge, twiliochallenge, ai, twilio | *This is a submission for [Twilio Challenge v24.06.12](https://dev.to/challenges/twilio)*
## What I Built
I built a Twilio chatbot with several features:
1. **URL Content Summarization**: Summarizes the content of URLs using the Google Gemini API.
2. **Text Summarization**: Provides summaries of text inputs.
3. **Tweet Creation**: Generates tweets based on user inputs.
4. **Audio Notes**: Converts audio notes to text using OpenAI's Whisper.
5. **General Inquiry**: Allows users to ask any questions, similar to Brad AI.
## Demo





## Twilio and AI
I used the Twilio Messaging WhatsApp bot and thought of using SendGrid as well, but faced an issue while creating the account. A feature I thought of adding is chat history, where the user can export the chat history and send the sheet to their email.
{% embed https://github.com/shashi2602/triangle-bot %}
<!-- Thanks for participating! → | ppkshashi |
1,897,882 | Frog Eaters | How disgusting they are They guzzle slime glizz Yes we hunted all the Good meat And maybe all... | 0 | 2024-06-23T15:07:50 | https://dev.to/tacodes/frog-eaters-4a8p | ats, autoreject, jobhunt | How disgusting they are
They guzzle slime glizz
Yes we hunted all the
Good meat
And maybe all that's left
Is bugs and frogs and snakes and things
Nevertheless
Those disgusting frog eaters | tacodes |
1,897,881 | How Cyber Security Services Can Protect Your Business From Threats | Wanna become a data scientist within 3 months, and get a job? Then you need to check this out !... | 0 | 2024-06-23T15:07:39 | https://thedatascientist.com/how-cyber-security-services-can-protect-your-business-from-threats/ | cybersecurity, database, productivity, ai | Wanna become a data scientist within 3 months, and get a job? Then you need to [check this out ! ](https://go.beyond-machine.com/)
Businesses contend with a wide variety of cyber threats, including those that can compromise data, finance, and reputation. Cyber security services, therefore, become crucial in ensuring that the business is protected against any cyber threats. By providing comprehensive protection solutions, they guarantee that all digital assets are securely protected. In this article, we will look at essential aspects associated with cybersecurity in implementing such services to keep your business safe from arising risks and vulnerabilities.
## Understanding Cyber Security Services
Cyber security services refer to a variety of measures that aim to secure data in systems and networks. Such services are provided by cyber security companies that specialize in defining vulnerabilities and determining various cyber threats. Key elements of the comprehensive solution for cyber security include several aspects described below.
**Threat Detection and Prevention**
Identifying a potential threat before it causes damage. Advanced monitoring techniques and tools allow for the detection of suspicious activities in real time. Early detection and immediate actions prevent data breaches.
**Incident Response**
Whenever a security incident occurs, the response should be extremely quick and effective. Coordinated reaction to an incident will assist in managing a post-breach situation by identifying the source of the breach, containing the threat, and mitigating damage.
**Data Protection**
Another key point in cyber security is data protection against unauthorized access and a breach of sensitive information. This incorporates data encryption, controlling access, and ensuring the safety of storage conditions. Data protection ensures that even if data is intercepted, it cannot be used maliciously.
**Network Security**
Securing the company’s network infrastructure becomes essential to protect against unauthorized access or attack. Network security provides various functionalities, such as firewalls, intrusion detection systems, and secure network protocols. It helps monitor and control traffic to allow only legitimate access.
**End-User Education**
Educating employees about best practices for cybersecurity helps prevent human error, which can otherwise lead to breaches. Regular training sessions and awareness programs help employees recognize potential threats like phishing. A well-informed workforce is the first line of defense against cyber attacks.
## The Importance of Cyber Security Services for Businesses
Cyber attacks can have disastrous consequences for a business. The costs related to a data breach – covering remediation, legal fees, and loss of income – tend to reach colossal figures. To top it all off, there is immense reputational risk, with long-lasting damage to customer trust and loyalty. This is where cyber security professional services come in, helping businesses avoid such pitfalls by developing customized protection strategies.
Cyber security companies offer specialized expertise and cutting-edge technologies in safeguarding business data. They carry out in-depth risk assessments to detect vulnerabilities and consequently implement strong security measures. This proactive approach ensures that cyber threats will be averted, ensuring the integrity of business operations.
## Key Components of an Effective Cyber Security Strategy
A successful cyber security strategy relates to layers of safety and a holistic approach to threats. Different security implementations are required at different levels to arm an enterprise against different kinds of threats. Below are the few basic elements that must be part of every cyber security strategy:
- **Network Security Devices:** Equally important, firewalls and intrusion detection systems work to monitor and control network traffic. They basically block access of users who are not authorized and alert administrators to possible threats. IT security services ensure the proper configuration and maintenance of such systems.
- **Data Encryption:** Sensitive data is encrypted in order to prevent reading in case of interception by a third party. This is an essential part of any cyber security solution, at rest or in transit, to ensure that data remains safe. It is one of the critical mechanisms that can be used to safeguard the confidentiality of information from unauthorized access.
- **Regular Software Updates:** Keeping software up to date closes off security vulnerabilities. Cyber security professional services also involve regular patch management to ensure everything is run with the recent security updates, thereby protecting against new and prevailing threats by closing the known vulnerabilities.
- **Employee Training and Awareness:** Human errors are often the root cause of various cyber incidents. Educating employees on best practices reduces the potential for accidental data breaches. Training should include recognizing phishing emails, developing strong passwords, and adhering to devised security protocols.
- **Incident Response Planning:** Despite the best preventive measures, security incidents can still occur. A well-defined incident response plan enables businesses to respond quickly and effectively to minimize damage. A cyber security services company offers expertise in developing and executing such plans in order to respond quickly and effectively to limit any further damage.
## Embracing a Proactive Cyber Security Approach
There can be no better investment for organizations than cyber security solutions to protect their valuable business data and customers’ trust. Partnering with experienced cyber security companies is beneficial because it usually includes custom strategies and advanced technologies focused on preventing breaches and mitigating any form of risk. [Personal cyber security services improve the protection](https://thedatascientist.com/how-cyber-security-services-can-protect-your-business-from/) of key executives and sensitive information, further strengthening the overall security posture.
Choosing the right cybersecurity service provider is an important strategic decision. Such a decision can significantly improve your business’s resiliency against cyber threats. Cybersecurity is not just about data protection; proactive security will also help you focus on growth and innovation. Taking these actions will ensure your business stays secure and competitive.
---
This blog was originally published on https://thedatascientist.com/how-cyber-security-services-can-protect-your-business-from-threats/ | ecaterinateodo3 |
1,897,879 | Shell Script for DevOps | Hi Everyone Guys, I have an announcement for DevOps and Cloud Engineers, From today I will try to... | 0 | 2024-06-23T15:06:15 | https://dev.to/dev_roy/shell-script-for-devops-pjl | devops, aws, shell | Hi Everyone
Guys, I have an announcement for DevOps and Cloud Engineers. Starting today, I will try to share some points related to DevOps, so we are starting with Shell Scripting. It is one of the most important skills for every DevOps Engineer. I have made the documentation for it and am sharing the link here.
Overview:-
* Basic Shell Script
* Variables Shell Script
* User inputs and Variables Shell Script
* Conversation with Shell Script
* Arguments Shell Script
* If, Else Shell Script
https://www.onlinenotepad.io?share-id=AIt6s2JOBl45168683
| dev_roy |
1,897,875 | GitOps Argo CD Setup On EKS | Argo CD is a GitOps continuous delivery tool for Kubernetes, enabling automatic synchronization of... | 0 | 2024-06-23T15:01:34 | https://dev.to/vikash_kumar_06/gitops-argo-cd-setup-on-eks-15io | Argo CD is a GitOps continuous delivery tool for Kubernetes, enabling automatic synchronization of application state with Git repositories, rollbacks, health checks, RBAC integration, multi-environment support, and seamless integration with CI/CD systems for streamlined deployments. Here are the some points about Argo CD, which we will use in our implementation:
**GitOps agent** — Argo CD is responsible for pulling updated code from Git repositories and deploying it directly to Kubernetes resources, so infrastructure configuration and application updates can be managed in one system.
**Automated Deployment**: Argo CD automates the deployment of applications to specified target environments. You can define your application state declaratively, and Argo CD handles the deployment process.
**Multi-Cluster Management**: Argo CD allows you to manage and deploy applications across multiple Kubernetes clusters.
**Template Support:** Argo CD supports templating and config management using Helm and Kustomize.
**Rollback:** Roll back to any application configuration committed in the Git repository.
Argo CD has many features; the ones mentioned above will be used in our CI/CD pipeline. You can find more details at Argo CD — Declarative GitOps CD for Kubernetes (argo-cd.readthedocs.io).
**Install Argo CD:**
Argo CD can be installed on EKS using the below 2 methods:
1,897,877 | Getting Started with MongoDB: A Beginner's Guide; | Introduce: what MongoDB is, its key features, and why it's a popular choice for developers. What is... | 0 | 2024-06-23T15:01:34 | https://dev.to/muhammedshamal/getting-started-with-mongodb-a-beginners-guide-4l62 | mongodb, database, programming, webdev | Introduce: what MongoDB is, its key features, and why it's a popular choice for developers.
1. What is MongoDB?
Definition: Explain MongoDB as a NoSQL database designed for scalability and flexibility.
Key Features: Highlight features like document-oriented storage, scalability, high performance, and flexibility in schema design.
> small line about, what is mongodb;
2. Why MongoDB?
Schema-less: Explain the benefits of a schema-less database.
Document Model: Describe the JSON-like document model.
Use Cases: Mention common use cases such as content management, real-time analytics, and IoT applications.
3. Setting Up MongoDB
> Best way to go to Mongo Docs;
[MongoDB Installation Documentation](https://www.mongodb.com/docs/manual/installation/)
4. Basic Concepts
Databases and Collections: Explain the concept of databases and collections in MongoDB.
Documents: Describe documents and their JSON-like structure.
CRUD Operations: Briefly introduce Create, Read, Update, and Delete operations.
5. Creating a Database and Collection
It is simple with the Mongo shell:

```js
use myFirstDatabase
db.createCollection("myFirstCollection")
```
> Guys, here are the CRUD operations for MongoDB:
Create
Inserting a Document: Show how to insert a document into a collection.
```js
db.myFirstCollection.insertOne({
  name: "John Doe",
  age: 25,
  city: "New York"
})
```
Read
Finding Documents: Demonstrate how to query documents.
```js
db.myFirstCollection.find({ name: "John Doe" })
```
Update
Updating a Document: Show how to update an existing document.
```js
db.myFirstCollection.updateOne(
  { name: "John Doe" },
  { $set: { age: 26 } }
)
```
Delete
Deleting a Document: Demonstrate how to delete a document.
```js
db.myFirstCollection.deleteOne({ name: "John Doe" })
```
That's it, friends!
Happy Coding!

| muhammedshamal |
1,897,876 | KIP Protocol: Pioneer in Decentralized AI | 📚 TinTinLand's #TinTinLandWeb3LearningMonth has successfully entered its fourth week! 📅 In Week 4... | 0 | 2024-06-23T15:01:10 | https://dev.to/ourtintinland/kip-protocol-pioneer-in-decentralized-ai-4jei | webdev, beginners, ai, devops | 📚 TinTinLand's #TinTinLandWeb3LearningMonth has successfully entered its fourth week!
📅 In Week 4 (June 24th to June 30th), @KIPprotocol will bring a series of enriching online AMA sessions, workshops, and Zealy learning tasks.
🎨 #KIPProtocol is a decentralized underlying protocol designed for #AI app developers, model creators, and data owners. It aims to facilitate the creation, management, and monetization of decentralized digital property, enabling AI creators to achieve KnowledgeFi on #Web3 securely.
🛠️ Join our Discord for more details: https://discord.gg/65N69bdsKw
🚀 Participate in #TinTinLand Discord and Zealy task board for collaborative learning and tasks!
▪️ Zealy: https://zealy.io/cw/tintinland/questboard

📢 Join us for #TinTinAMA No.14 ➡️【#KIPProtocol: Pioneers of Decentralized #AI】
📅 June 25th | 21:00 UTC+8
👥 Guests:
🔹 @TracySalanderBC | Community Manager at TinTinLand
🔹 @julian_kip | Co-founder of @KIPprotocol
🔹 @HumanLevelJen | Chief AI Officer of @KIPprotocol
🔹 @orlowskilp | Infrastructure Lead of @KIPprotocol
📺 X Space: https://twitter.com/i/spaces/1DXxyjjrwRdKM
📢 Don't miss #TinTinMeeting Episode 40 airing this Thursday!
🎓 Topic: Decentralized #AI Product Deployment with @KIPprotocol
📅 June 27th (Thursday) | 21:00 UTC+8
👥 Guests:
@HumanLevelJen | Chief AI Officer of KIP Protocol
📺 YouTube: https://www.youtube.com/live/KUq3Tjzff8I
| ourtintinland |
1,897,328 | Implement custom endpoint with pagination in Strapi | Strapi has to be one of the most popular open source CMS platforms currently available. I used it in... | 0 | 2024-06-23T15:00:04 | https://dev.to/dellboyan/implement-custom-endpoint-with-pagination-in-strapi-41i1 | webdev, javascript, strapi, opensource | [Strapi ](https://strapi.io/) has to be one of the most popular open source CMS platforms currently available. I used it in several projects and loved it's features and what you get for free. It's very easy to host on popular platforms like Digital Ocean or Netlify, very easy to use, and offers a bunch of features out of the box. In this article I will not go into details about how Strapi works and how to set it up, there are plenty of [tutorials available online](https://strapi.io/blog/categories/tutorials?page=1). What I would like to discuss is something I did not find online, and that is how you can customize Strapi endpoints to suit your needs.
## Strapi Endpoints
Once you set up Strapi, you can access the Content-Type Builder from the admin dashboard (http://localhost:1337/admin) and create a new type. A type can be a collection or a single type. A collection would be, for example, a collection of posts, reviews, companies, users, etc.
Any type that you create will consist of default values (id, createdAt, updatedAt etc.) and values you define like name, category, city, country etc.
Once a type is created, you will have a new API endpoint. For example, if you created a hotels type, you will now have an endpoint https://localhost:1337/api/hotels that will list all the hotels that are created and published. With this new endpoint you will have access to HTTP methods like find, findOne, delete, and create. Access to these methods is configurable from the admin dashboard, so for example find can be available to all visitors, while the delete method can be restricted to authenticated roles.
All the options that are editable from the admin are, of course, accessible and editable from the Strapi project as well. For example, to customize the route /api/hotels you would go to src/api/hotels.
## When to customize Strapi?
While working with Strapi I can honestly say the available features covered 90% of the use cases my clients had. But, depending on the project and stakeholder requirements, I encountered situations where the default features were not sufficient and I had to update how Strapi handles data in certain cases.
Let's continue with the example of hotels. Say you are building a directory of hotels and your stakeholder has this request:
> For each location a hotel can belong to, I need to have the ability to control first 10 positions inside that location from the admin dashboard.
One way you can solve this request is to create a repeatable component inside Strapi where the administrator can choose a location and a position for that location. Since this is a repeatable component, the administrator can define positions for all required locations.

So when you hit that API endpoint, this value will look like this:
```
"position": [
{
"id": 1,
"position": 1,
"locations": [
{
"id": 11,
"name": "Iowa",
"createdAt": "2024-03-03T20:01:28.602Z",
"updatedAt": "2024-04-23T12:37:27.654Z",
"publishedAt": "2024-03-03T20:01:29.526Z"
}
]
},
{
"id": 2,
"position": 3,
"locations": [
{
"id": 14,
"name": "California",
"createdAt": "2024-03-03T20:01:28.602Z",
"updatedAt": "2024-04-23T12:37:27.654Z",
"publishedAt": "2024-03-03T20:01:29.526Z"
}
]
}
],
```
## How to customize an API route
Having this kind of data structure will work for administering the first 10 positions for different locations, but you will not be able to apply this type of sorting in Strapi by default, so we will have to do some customization on the Strapi backend.
In your Strapi project navigate to src/api/hotels and you will see a couple of folders there. The one we are looking for is the controllers folder; we need to edit the file inside that folder.
> Controllers are JavaScript files that contain a set of methods, called actions, reached by the client according to the requested route. Whenever a client requests the route, the action performs the business logic code and sends back the response.
You can read more about controllers on the [official Strapi docs](https://docs.strapi.io/dev-docs/backend-customization/controllers), but essentially editing controllers will allow us to structure the data the way we need it in order to apply custom sorting of the hotels.
We will be editing the default find method inside the hotels controller. To start, your controller file will look like this:
```js
"use strict";
const { createCoreController } = require("@strapi/strapi").factories;
module.exports = createCoreController("api::hotel.hotel", ({ strapi }) => ({
async find(ctx, next) {
},
}));
```
Next, inside the find function we'll extract the sort_by_premium_position query param and save it in a variable premiumSort. The logic below will only be applied when the sort_by_premium_position query param is present in the request and contains a location id.
We will also define a helper function parseQueryParam. This function is designed to handle and parse query parameters that may come in different formats. It processes the input parameter param based on its type and returns a parsed version of it.
```js
"use strict";
const { createCoreController } = require("@strapi/strapi").factories;
module.exports = createCoreController("api::hotel.hotel", ({ strapi }) => ({
async find(ctx, next) {
const { query } = ctx;
const premiumSort = ctx.query.sort_by_premium_position;
// Utility function to parse query parameters
const parseQueryParam = (param) => {
if (Array.isArray(param)) {
return param.map((p) => JSON.parse(p));
}
if (typeof param === "string") {
return JSON.parse(param);
}
return param;
};
},
}));
```
Finally, if premiumSort is present we will get all the hotels, apply custom sorting, and build custom pagination while making sure all query parameters like filtering, pagination, and populating still work. If premiumSort is not present we will not change anything and just return the data normally.
Here is the entire code inside the controller file:
```js
"use strict";
const { createCoreController } = require("@strapi/strapi").factories;
module.exports = createCoreController("api::hotel.hotel", ({ strapi }) => ({
async find(ctx, next) {
const { query } = ctx;
const premiumSort = ctx.query.sort_by_premium_position;
// Utility function to parse query parameters
const parseQueryParam = (param) => {
if (Array.isArray(param)) {
return param.map((p) => JSON.parse(p));
}
if (typeof param === "string") {
return JSON.parse(param);
}
return param;
};
try {
// Check if sorting by location is requested, if not we return everything regularly
if (premiumSort) {
const locationToSortBy = Array.isArray(premiumSort)
? premiumSort[0]
: premiumSort;
// Extract and parse query parameters, with fallbacks for undefined values
const filters = query.filters ? parseQueryParam(query.filters) : {};
const sort = query.sort ? parseQueryParam(query.sort) : [];
let populate = [];
if (query.populate) {
if (Array.isArray(query.populate)) {
populate = query.populate;
} else if (typeof query.populate === "string") {
populate = query.populate.split(",");
}
}
// Here we parse pagination parameters, supporting both string and array formats
let pagination = query.pagination ? parseQueryParam(query.pagination) : {};
if (typeof pagination === "string") {
pagination = JSON.parse(pagination);
} else if (Array.isArray(pagination)) {
pagination = pagination.reduce(
(acc, p) => ({ ...acc, ...JSON.parse(p) }),
{}
);
}
// Default pagination values if not specified
const page = pagination.page ? parseInt(pagination.page, 10) : 1;
const pageSize = pagination.pageSize ? parseInt(pagination.pageSize, 10) : 10;
// Fetch all data without pagination to apply custom sorting logic
const entities = await strapi.entityService.findMany(
"api::hotel.hotel",
{
filters,
sort,
populate: ["position", "position.locations", ...populate],
}
);
// Filter and sort entities based on the specified location ID and position
let sortedEntities = entities.sort((a, b) => {
const aPos = a.position.find((p) =>
p.locations.some((c) => c.id === parseInt(locationToSortBy))
);
const bPos = b.position.find((p) =>
p.locations.some((c) => c.id === parseInt(locationToSortBy))
);
// Sort by position values within the specified location
if (aPos && bPos) {
return aPos.position - bPos.position;
} else if (aPos) {
return -1;
} else if (bPos) {
return 1;
}
return 0;
});
// Calculate pagination metadata
const total = sortedEntities.length;
const pageCount = Math.ceil(total / pageSize);
const paginatedEntities = sortedEntities.slice(
(page - 1) * pageSize,
page * pageSize
);
// Construct metadata for pagination response
const meta = {
pagination: {
page,
pageSize,
pageCount,
total,
},
};
// Return sorted and paginated entities along with pagination metadata
return { data: paginatedEntities, meta };
} else {
// If no custom sorting is requested, return default find method
const { data, meta } = await super.find(ctx);
return { data: data, meta };
}
} catch (error) {
console.error("Error in find function:", error);
ctx.throw(400, "Invalid query parameters");
}
},
}));
```
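To exercise the custom branch from a client, you just add sort_by_premium_position next to Strapi's usual pagination parameters. Here is a small sketch using the fetch API (the location id 14 and the default local Strapi URL are hypothetical):

```javascript
// Build the query string: premium sort for location 14, first page of 10.
const params = new URLSearchParams({
  sort_by_premium_position: '14',
  'pagination[page]': '1',
  'pagination[pageSize]': '10',
});
const url = `http://localhost:1337/api/hotels?${params}`;

// Uncomment against a running Strapi instance:
// const res = await fetch(url);
// const { data, meta } = await res.json(); // sorted hotels + pagination metadata
console.log(url);
```

If the param is omitted, the controller falls back to super.find, so existing consumers of the endpoint keep working unchanged.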
## Conclusion
And that's it! Obviously, this example is tied to a specific use case, but hopefully it helps anyone who encounters similar requests while working on Strapi projects.
To work with me, visit [my website.](https://www.kodawarians.com/)
[Also, let's connect on x.com
](https://x.com/DellBoyan)
| dellboyan |
1,897,859 | Idempotency in 256 characters or less | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T14:52:48 | https://dev.to/derlin/idempotency-in-256-characters-or-less-118c | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Borrowed from Mathematics, an **idempotent** operation can run multiple times without changing the result. Writing to a file is idempotent, but appending to a file is not. A script using idempotent operations can thus fail midway and be restarted safely.
## Additional Context
This idempotency concept is so important, yet too often unknown. It is the magic behind famous tools such as Terraform, Ansible, Kubernetes Manifests, etc. Fault-tolerant REST APIs are mostly [idempotent REST APIs](https://restfulapi.net/idempotent-rest-apis/). It is everywhere, yet cleverly hidden. It is what makes operations repeatable and consistent.
There are better explanations online, but if my submission makes at least one person curious about this concept and want to know more, then I won already 🙂 !
| derlin |
1,897,871 | What is NewAge Nations DAO | NewAge Nations DAO. is a community-driven project for the building of investment DAOs using Web3, AI,... | 0 | 2024-06-23T14:52:01 | https://dev.to/newagenations_053691fa72e/what-is-newage-nations-dao-phm | NewAge Nations DAO. is a community-driven project for the building of investment DAOs using Web3, AI, NFTs and Blockchain Technologies.
Solution: www.hvts.network
Protocol: www.newagecoin.cash | newagenations_053691fa72e | |
1,897,870 | Job Adventures - PDF generation | Jun 2024 | Well, here we are with a new series. This one is called Job Adventures where I will talk about some... | 27,829 | 2024-06-23T14:51:47 | https://medium.com/@goamaral/job-adventures-pdf-generation-jun-2024-93e468ce60dc | pdf, programming, newsletter, webdev | Well, here we are with a new series. This one is called *Job Adventures* where I will talk about some challenges I encountered on my day to day job.
In this article we will explore PDF generation. This is one of those classic tasks you rarely need to do but when the task eventually arrives, I get PTSD.
My first contact with building PDFs was with Rails using https://github.com/mileszs/wicked_pdf. The task always seems easy: you just build HTML and render that to PDF. And in fact, the part of rendering the info to the PDF is easy. The nightmare comes when implementing what is on the mockups. How will CSS behave in printing mode? What if we have a component that can't split on a page break and should jump in its entirety to the next page? What if our cover page does not count toward the page total? What if the cover page does not have a header/footer? Why is the PDF so big?
Some of those problems I had in the past, but at the time I was just rendering tables for a financial report. The main problem I remember having was the CSS part and the long generation time. Because I was not implementing the styling at the time, the CSS part was not really my problem, and I am sure wicked_pdf provides some default styles to help in this part. The long processing times were a problem because we were generating pdfs with over 100 pages, this process would take about 5 min and would get worse if more pdfs were being requested in parallel. I can’t remember what the solution was at the time but I think we ended up generating some pdfs in the background and sending them by email when ready. The wicked_pdf gem uses an instance of https://github.com/wkhtmltopdf/wkhtmltopdf under the hood. This causes problems because it can only generate pdfs one by one. The solution would probably be having a dedicated service that would orchestrate multiple wkhtmltopdf instances.
Jumping to today, I am using Go and my first instinct was to find a binding to wkhtmltopdf and go from there. I remember trying to find better solutions to wicked_pdf at the time and none was better, so I started with what I knew worked. What a big surprise it was when I opened the wkhtmltopdf GitHub page and found it archived. Basically, it was based on QtWebKit, which stopped being maintained long ago. You can find a longer explanation [here](https://wkhtmltopdf.org/status.html).
After some searching, I found https://github.com/gotenberg/gotenberg. It ticked a lot of boxes.
- It is an independent service that communicates via HTTP. I just send the url to the page I want to convert to PDF and receive the pdf back. This way we have an easily scalable service that can be easily integrated with any other system/language.
- The same team maintains a docker image. So we don’t need to worry with any basic dependencies like headless chrome or fonts. Just start a container and relax.
- It is written in Go; if needed, I can easily open an issue/PR or fork it.
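As a sketch of that first point, this is roughly what a call from Node 18+ (global fetch/FormData) looks like. Assumptions: Gotenberg is listening on localhost:3000 (the docker image default) and exposes the v7+ route /forms/chromium/convert/url; the report URL is made up, so check the docs for your Gotenberg version.

```javascript
// Ask a Gotenberg container to render a page as PDF.
const form = new FormData();
form.append('url', 'https://example.com/report/42'); // hypothetical report page

async function renderPdf() {
  const res = await fetch('http://localhost:3000/forms/chromium/convert/url', {
    method: 'POST',
    body: form,
  });
  if (!res.ok) throw new Error(`Gotenberg returned ${res.status}`);
  return Buffer.from(await res.arrayBuffer()); // raw PDF bytes
}
```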
And now you might say, all good. Just create an HTML page and we are done. I wish it were that easy. Now it's time to answer the questions I posed at the beginning.
**How will CSS behave in printing mode?**
**Why is the pdf so big?**
From what I experienced, there were not many sharp edges. The only thing that caught me off guard was `print-color-adjust`: it defaults to `economy` (which makes sense, to use less ink). The first pages I created were mostly text and tables with no problems, until I added a couple of images; when previewing the print version, the colours were really saturated. In retrospect the solution was easy, but at the time I had no clue if the problem was with Gotenberg, what property I should change/add, or if it was even possible. The solution was to set `print-color-adjust` to `exact`. Just be aware that this is not free: the size of the PDF increased significantly.
**What if we have a component that can't split on a page break, it should jump in its entirety to the next page?**
**What if our cover page does not count to the page total?**
**What if the cover page does not have an header/footer?**
By default you can easily add a header and a footer to every page, and the same applies to the counter. But requirements are rarely that simple. Still, these problems were moderately simple to solve. I disabled footers and headers and manually implemented a header and footer component; this way I have full control over when they are shown and which pages count.
The big problem came with dynamically sized content. Without an image it can be hard to explain, but some components should not break (charts and content with side images) and others should (tables). Because all these components varied in the amount of info they had, I calculated the pixel height they would occupy and the vertical space I had left on the page, and chose whether the component should be split or not. This solution was far from perfect and I feel there should be a better one. In hindsight, after exploring more properties like `page-break-before`, I feel this could have solved many of my issues. Even with this in mind, one of the requirements was to have the table header always present at the top on a page break and I don't think `page-break-*` properties would help with that.
This feature was developed a couple months ago, so I don’t recall a lot of the issues I had but these were the lessons that stuck with me and that will apply in the next pdf I need to generate (hopefully not soon). | goamaral |
1,897,869 | How we declare one dimensional array by using JavaScript and Python language | Declaring One Dimensional Array In programming, particularly in languages like JavaScript,... | 0 | 2024-06-23T14:51:13 | https://dev.to/wasifali/how-we-declare-one-dimensional-array-by-using-javascript-and-python-language-3dp5 | webdev, javascript, python, programming | ## **Declaring One Dimensional Array**
In programming, particularly in languages like JavaScript, Python, Java, C++, and others, arrays are fundamental data structures used to store collections of elements of the same type. Here, I'll explain how to declare and use a one-dimensional array, which is the simplest form of an array that stores elements in a single row or sequence.
## **Declaration and Initialization**
## **JavaScript Example**
In JavaScript, arrays are dynamic and can hold elements of different types. Here's how you declare and initialize a one-dimensional array:
```JavaScript
// Declaring and initializing an array with elements
let numbers = [1, 2, 3, 4, 5];
// Accessing elements in the array
console.log(numbers[0]); // Outputs: 1
console.log(numbers[2]); // Outputs: 3
// Adding elements to the array
numbers.push(6); // Adds 6 to the end of the array
console.log(numbers); // Outputs: [1, 2, 3, 4, 5, 6]
// Array length
console.log(numbers.length); // Outputs: 6
```
## **Python Example**
In Python, arrays are implemented using lists, which are versatile and can hold elements of different types as well:
```Python
# Declaring and initializing an array with elements
numbers = [1, 2, 3, 4, 5]
# Accessing elements in the array
print(numbers[0]) # Outputs: 1
print(numbers[2]) # Outputs: 3
# Adding elements to the array
numbers.append(6) # Adds 6 to the end of the array
print(numbers) # Outputs: [1, 2, 3, 4, 5, 6]
# Array length
print(len(numbers)) # Outputs: 6
```
## **Explanation of the Examples**
**Declaration and Initialization**
In both JavaScript and Python, arrays are declared by enclosing elements in square brackets [ ].
Elements in the array are separated by commas ,.
**Accessing Elements**
Elements in arrays are accessed using their index. In JavaScript and Python, arrays are zero-indexed, meaning the first element is at index 0, the second at index 1, and so on.
**Adding Elements**
Both languages provide methods (push() in JavaScript and append() in Python) to add elements to the end of the array.
**Array Length**
The length of an array can be determined using the length property in JavaScript or the len() function in Python.
## **Important Notes**
**Type Flexibility**
JavaScript arrays can hold elements of different types (number, string, object, etc.), while Python lists can also hold heterogeneous elements.
**Dynamic Size**
Both JavaScript arrays and Python lists are dynamic in size, meaning they can grow or shrink as needed.
**Indexing**
Remember that array indices start at 0. Accessing an index beyond the array's length returns undefined in JavaScript, while Python raises an IndexError.
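To see the type flexibility and indexing notes concretely in JavaScript:

```javascript
let mixed = [1, 'two', { three: 3 }]; // elements of different types in one array

console.log(mixed[1]); // 'two'
console.log(mixed[5]); // undefined: out of range, but no error is thrown
```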
Arrays are versatile and essential in programming, providing efficient ways to store and manipulate collections of data. Understanding their basics and how to work with them is crucial for building robust applications.
## **Program**
```JavaScript
// Declare and initialize an array of numbers
let numbers = [3, 7, 1, 9, 5];
// Print the array
console.log("Array:", numbers);
// Accessing elements by index
console.log("Element at index 0:", numbers[0]); // Output: 3
console.log("Element at index 2:", numbers[2]); // Output: 1
// Calculate the sum of all elements in the array
let sum = 0;
for (let i = 0; i < numbers.length; i++) {
sum += numbers[i];
}
// Print the sum of array elements
console.log("Sum of array elements:", sum); // Output: 25
```
## **Explanation of the Program:**
**Array Declaration and Initialization:**
let numbers = [3, 7, 1, 9, 5];: This line declares an array named numbers and initializes it with five elements: 3, 7, 1, 9, and 5.
**Printing the Array:**
console.log("Array:", numbers);: This prints the entire array numbers to the console.
**Accessing Array Elements:**
console.log("Element at index 0:", numbers[0]);: Accesses and prints the element at index 0 of the array (3 in this case).
console.log("Element at index 2:", numbers[2]);: Accesses and prints the element at index 2 of the array (1 in this case).
**Calculating the Sum of Array Elements:**
let sum = 0;: Initializes a variable sum to 0.
for (let i = 0; i < numbers.length; i++) { sum += numbers[i]; }: Iterates through each element of the array using a for loop, adding each element to sum.
**Printing the Sum:**
console.log("Sum of array elements:", sum);: Prints the calculated sum of all elements in the array (25 in this case).
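Since the article covers both languages, here is a direct Python translation of the same program:

```python
# Declare and initialize a list of numbers
numbers = [3, 7, 1, 9, 5]

# Print the list
print("Array:", numbers)

# Accessing elements by index
print("Element at index 0:", numbers[0])  # Output: 3
print("Element at index 2:", numbers[2])  # Output: 1

# Calculate the sum of all elements in the list
total = 0
for n in numbers:
    total += n

# Print the sum of list elements
print("Sum of array elements:", total)  # Output: 25
```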
| wasifali |
1,897,868 | ⏰ Introducing TickWatch: A Versatile jQuery Plugin for Realistic Time and Number Displays | As developers, we often need a reliable, customizable way to display time or numbers on our web... | 0 | 2024-06-23T14:49:54 | https://dev.to/hichemtab-tech/introducing-tickwatch-a-versatile-jquery-plugin-for-realistic-time-and-number-displays-548f | webdev, javascript, jquery, programming | As developers, we often need a reliable, customizable way to display time or numbers on our web pages. Whether it's for a digital clock, a countdown timer, or simply showcasing a number, having the right tool can make all the difference. Enter [**TickWatch**](https://github.com/HichemTab-tech/TickWatch-js), a lightweight and highly customizable jQuery plugin that offers a seamless solution for all your time and number display needs.
## What is TickWatch?
[TickWatch](https://github.com/HichemTab-tech/TickWatch-js) is a powerful jQuery plugin designed to create **customizable clock displays** and **static number displays** with ease. One of its standout features is the **realistic 7-segment display style**, reminiscent of classic digital clocks and numeric displays. This unique visual style sets TickWatch apart, adding a touch of retro charm to your web projects.
## 🌟 Key Features
- **Realistic 7-Segment Display**: Create dynamic digital clock displays and static number displays that mimic classic 7-segment numeric clocks.
- **Digital Clock Display**: Easily create dynamic digital clock displays that update in real-time.
- **Static Number Display**: Display static numbers with a specified number of digits, perfect for dashboards and counters.
- **Customizable Formats**: Customize the display format to match your project's design and requirements.
- **Lightweight and Fast**: Minimal footprint ensures fast loading and smooth performance.
## 🚀 Getting Started with [TickWatch](https://github.com/HichemTab-tech/TickWatch-js)
Getting started with TickWatch is a breeze. Let's walk through the steps to integrate this powerful plugin into your project.
### Step 1: Include TickWatch in Your Project
First, you need to include jQuery and [TickWatch](https://github.com/HichemTab-tech/TickWatch-js) in your HTML file. You can download TickWatch from the [GitHub repository](https://github.com/HichemTab-tech/TickWatch-js) or use a CDN link.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>TickWatch Demo</title>
<link rel="stylesheet" href="path/to/tickwatch.css">
</head>
<body>
<div class="clock"></div>
<div class="display"></div>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script src="path/to/tickwatch.js"></script>
</body>
</html>
```
### Step 2: Initialize TickWatch
After including the necessary files, you can initialize [TickWatch](https://github.com/HichemTab-tech/TickWatch-js) on your desired elements. Here's a basic example:
```javascript
$(document).ready(function() {
$('.clock').TickWatch({
displayType: 'clock',
format: 'hh:mm:ss'
});
$('.display').TickWatch({
displayType: 'number',
digits: 6
});
});
```
### Step 3: Customize Your Display
[TickWatch](https://github.com/HichemTab-tech/TickWatch-js) offers various options to customize your display. You can change the format, adjust the number of digits, and more. Here are some common options:
- **displayType**: Set to 'clock' or 'number'.
- **format**: Customize the time format (e.g., 'hh:mm:ss' for clock).
- **digits**: Specify the number of digits for number display.
```javascript
$(document).ready(function() {
$('.clock').TickWatch({
displayType: 'clock',
format: 'hh:mm:ss A',
color: '#00ff00'
});
$('.display').TickWatch({
displayType: 'number',
digits: 8,
color: '#ff0000'
});
});
```
## 📈 Why Choose TickWatch?
[TickWatch](https://github.com/HichemTab-tech/TickWatch-js) stands out due to its unique **7-segment display style**, which adds a nostalgic yet modern touch to any project. It's perfect for:
- **Dashboards**: Display real-time data in a visually appealing way.
- **Web Apps**: Add dynamic clocks or timers to enhance user experience.
- **Counters**: Showcase static numbers with style.
## Join the TickWatch Community
We invite you to join the growing community of TickWatch users. Visit the [GitHub repository](https://github.com/HichemTab-tech/TickWatch-js) to get started, report issues, or contribute to the project. We welcome your feedback and look forward to seeing the creative ways you use TickWatch!
---
[TickWatch](https://github.com/HichemTab-tech/TickWatch-js) is designed to make your life easier and your projects more impressive. Whether you're a seasoned developer or just starting out, [TickWatch](https://github.com/HichemTab-tech/TickWatch-js) provides the tools you need to create stunning time and number displays. Try it out today and elevate your web development game! 🌟 | hichemtab-tech |
1,897,867 | onions and recursion | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte... | 0 | 2024-06-23T14:47:23 | https://dev.to/urjacodes/onions-and-recursion-5hac | devchallenge, cschallenge, computerscience, beginners | **This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).**
**_Explainer_**
Recursion is a method where a function calls itself until it satisfies a base case. It simplifies complex problems. It is like peeling an onion's layers until you reach its core- the layers are the function called repeatedly and the core is the base case.
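The onion analogy maps to code like this (a deliberately tiny sketch):

```javascript
// Peel layers until the core (base case) is reached.
function peel(layers) {
  if (layers === 0) return 'core'; // base case: the onion's core
  return peel(layers - 1);         // recursive case: remove one layer
}

console.log(peel(5)); // 'core'
```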
**_Additional Context_**
Recursion is something I struggled to understand properly when I first learnt it as I could not understand why it was being done or why it could be of any use when for loops occur but I learnt more with experience. Written above in 256 characters, is how I would introduce my younger self to the concept of recursion so I would get a good handle of the topic right from the start. | urjacodes |
1,897,866 | Why Docker Popular ? | Why Docker Popular- Portability: Docker ensures that applications run consistently across any... | 0 | 2024-06-23T14:45:03 | https://dev.to/rahulcolud2023/why-docker-popular--5dek | Why Docker Popular-

1. **Portability:** Docker ensures that applications run consistently across any environment, from a developer's personal laptop to the production server, avoiding the common pitfall of encountering bugs and inconsistencies due to environment-specific differences.
2. **Efficiency:** Containers share the host system's kernel and do not require an operating system per application, making them lighter and faster than traditional virtual machines. This efficient use of system resources translates to faster start-up times and more scalable solutions.
3. **Isolation**: Docker provides application isolation, which improves security and allows multiple containers to run simultaneously on the same host without interference. Each container has its own filesystem, ensuring processes are separated and resource usage is controlled.
4. **Version Control and Component Reuse:** Docker images can be versioned, shared, and reused. This is akin to version controlling the entire environment. Developers can push their Docker images to a registry (such as Docker Hub) and pull them down for use without needing to set up environments manually.
5. **CI/CD:** Docker simplifies the development process by allowing developers to create local environments that match production systems. Additionally, Docker works well with continuous integration and deployment (CI/CD) systems, automating the deployment process and ensuring that applications are deployed consistently.
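A minimal, hypothetical Dockerfile makes points 1 and 4 concrete: the same image built from it runs identically on a laptop or a server, and it can be tagged, pushed, and pulled like versioned code (the base image and file names are illustrative):

```dockerfile
# Hypothetical Node.js service image; adjust the base image and paths to your app.
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```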
| rahulcolud2023 | |
1,897,865 | Leveraging GitHub Copilot Chat syntax: chat participants, chat variables, slash commands | GitHub Copilot Chat is an incredibly powerful and useful feature that allows you to chat with or about your code. Even though it’s 100% natural language-friendly (i.e., you can send your messages without using any specific syntax), leveraging some special chat capabilities can unlock new AI-assisted development scenarios and significantly boost your productivity. | 0 | 2024-06-23T14:44:42 | https://dev.to/webmaxru/leveraging-github-copilot-chat-syntax-chat-participants-chat-variables-slash-commands-34c9 | github, githubcopilot, aideveloper, aiassistant | ---
title: "Leveraging GitHub Copilot Chat syntax: chat participants, chat variables, slash commands"
published: true
description: GitHub Copilot Chat is an incredibly powerful and useful feature that allows you to chat with or about your code. Even though it’s 100% natural language-friendly (i.e., you can send your messages without using any specific syntax), leveraging some special chat capabilities can unlock new AI-assisted development scenarios and significantly boost your productivity.
tags: github, githubcopilot, aideveloper, aiassistant
# cover_image: https://res.cloudinary.com/daily-now/image/upload/s--Ww0R7c7a--/f_auto/v1719151849/ugc/content_30347ab9-9a22-4f3f-b7a2-e8576f07d1ee
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-23 14:40 +0000
---
GitHub Copilot Chat is an incredibly powerful and useful feature that allows you to chat with or about your code. Even though it’s 100% natural language-friendly (i.e., you can send your messages without using any specific syntax), leveraging some special chat capabilities can unlock new AI-assisted development scenarios and significantly boost your productivity.
These powerful features, which you can use by applying special syntax, include chat participants, slash commands, and context variables. Note that the described features are available in VS Code and might not be fully supported in other IDEs where GitHub Copilot Chat is available.
# Target your question or request by messaging one of the available chat participants
In GitHub Copilot Chat, you can reference one of the AI-powered “domain experts” using conventional chat syntax—by prepending @ to the participant name. The currently available chat participants are:
- `@workspace`: Knows everything about the code in your currently open workspace. This is the chat participant you will most likely communicate with frequently.
- `@terminal`: Knows all about the integrated terminal shell, its contents, and its buffer.
- `@vscode`: Knows about the VS Code editor, its commands, and features.
_Example: Let’s get information about the backend part of the project we’ve just been assigned to by asking the `@workspace` chat participant right after we open the project folder in VS Code._

In this particular case, you don’t even need to have files open in your editor. Compare this with the response you get without tagging `@workspace`:

The `@workspace` chat participant is instrumental for all solution-wide queries where you want all code to be considered for the chat response. However, this doesn't mean that all code will be used and sent as part of the prompt. The GitHub Copilot Chat extension in VS Code does its best to determine relevant files and parts of these files using local knowledge and intelligence first. You can check which files and code lines were used for the prompt by expanding the “Used references” line:

__Productivity hint: Use Ctrl-Enter (Cmd-Enter) instead of just Enter after typing your message, and the `@workspace` string will be inserted into your message automatically before sending.__
# Be precise in setting the context using chat variables
In many cases, considering the full solution as the context for your question or request (by using `@workspace`) is overkill. You might want to point to specific files or even parts of the files in your message. Chat variables can help! Use # to call one from this list:
- `#file`: Points to a specific file in your workspace.
- `#codebase`: All content of the open workspace. It’s similar to using `@workspace` and might be useful when you chat with another agent (like `@terminal`) but still want to reference the full solution.
- `#editor`: Source code in the editor’s viewport (visible part).
- `#git`: Current git repository: branch, remotes, path, etc.
- `#selection`: The currently selected code.
- `#terminalLastCommand`: Last run command in the editor’s terminal.
- `#terminalSelection`: Selection in the editor's terminal.
_Example: Let’s get help on improving method names in a specific file (and we want to ensure that the whole content of the file is taken into consideration)._

__Productivity hint: Use the up and down keyboard arrows to pick the chat variable you need after typing #. In the case of `#file`, use keyboard navigation again to pick one of the suggested files.__
# Call the most often used actions quickly with slash commands
Chatting with your code using natural language is fun, but having the option to call often-used actions using handy shortcuts is even better. Compare typing the full message “Explain how selected code works” versus typing “/”, then using keyboard arrows to pick `/explain` from the popup overlay. Another benefit of using the predefined syntax for commands is the confidence that GitHub Copilot understands our intent 100% correctly (natural language might have some ambiguity). There are a bunch of slash commands available. You can use them in conjunction with referencing the chat participant to provide the desired scope. Some of the commands are:
- `/help`: Help about available slash commands, chat participants, chat variables, and more.
- `/doc`: Generate documentation for the code.
- `/explain`: Explain how the code works (or get help with terminal commands if you prepend @terminal).
- `/fix`: Optimize and/or fix issues in the code.
- `/tests`: Create unit tests for the code.
- `/new`: Scaffold a new workspace.
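These pieces compose: a single chat message can combine a chat participant, a slash command, and a chat variable. A hypothetical example (the file name is a placeholder):

```
@workspace /tests #file:src/utils.ts
```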
_Example: Let’s get an explanation for one of the regular expressions in our code. Select the code line and use the slash command `/explain`._

__Productivity hint: Try GitHub Copilot Chat in inline mode instead of having the chat always open in the side pane. Press Ctrl-I (Cmd-I) and type your message in the small overlay dialog that appears right above the line where your cursor is in the code window.__
# Summary
Use chat participants, chat variables, and slash commands to maintain full control over the conversation context, ensure correct and consistent understanding of your intentions, and ultimately chat and code faster!
Start your free GitHub Copilot trial here: https://aka.ms/try-github-copilot
# References
- https://github.blog/changelog/2024-01-30-code-faster-and-better-with-github-copilots-new-features-in-visual-studio/#slash-commands
- https://github.blog/changelog/2023-11-30-github-copilot-november-30th-update/
- https://code.visualstudio.com/docs/copilot/copilot-chat#_chat-participants
- https://devblogs.microsoft.com/visualstudio/copilot-chat-slash-commands-and-context-variables/
- https://code.visualstudio.com/updates/v1_85#_terminal-agent-and-command-suggestion-improvements
- https://code.visualstudio.com/updates/v1_84#_chat-agents
| webmaxru |
1,897,864 | Creating a Kubernetes Cluster with Kubeadm and Containerd: A Comprehensive Step-by-Step Guide | Kubeadm is a tool designed to simplify the process of creating Kubernetes clusters by providing... | 27,750 | 2024-06-23T14:44:07 | https://psj.codes/creating-a-kubernetes-cluster-with-kubeadm-and-containerd-a-comprehensive-step-by-step-guide | kubernetes, cka, devops, tutorial | Kubeadm is a tool designed to simplify the process of creating Kubernetes clusters by providing `kubeadm init` and `kubeadm join` commands as best-practice "fast paths." - Kubernetes documentation
In this blog, we'll go through the step-by-step process of installing a Kubernetes cluster using Kubeadm.
## Prerequisites
Before you begin, ensure you have the following:
1. Ensure you have a compatible Linux host (e.g., Debian-based and Red Hat-based distributions). In this blog, we're using Ubuntu which is a Debian-based OS.
2. At least 2 GB of RAM and 2 CPUs per machine.
3. Full network connectivity between all machines in the cluster.
4. Unique hostname, MAC address, and product\_uuid for every node.
5. Ensure all the required ports are open for the control plane and the worker nodes. You can refer to [Ports and Protocols](https://kubernetes.io/docs/reference/networking/ports-and-protocols) or see the screenshot below.

***Disable swap on all the nodes.***
```bash
sudo swapoff -a
# disable swap on startup in /etc/fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```
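If you want to check what that `sed` expression does before touching the real `/etc/fstab`, you can try it on a throwaway copy first. A quick sketch with a sample file:

```shell
# create a sample fstab-like file
cat > /tmp/fstab.sample <<'EOF'
UUID=1111-2222 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# comment out the swap line, same expression as above
sed -i '/ swap / s/^/#/' /tmp/fstab.sample

cat /tmp/fstab.sample
```

The root filesystem line is left untouched, while the line containing ` swap ` gets a `#` prepended.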
## Set up the container runtime (containerd)
To run containers in Pods, Kubernetes uses a container runtime. By default, Kubernetes employs the `Container Runtime Interface (CRI)` to interact with your selected container runtime. Each node needs to have a container runtime installed. In this blog, we'll use `containerd`.
***Run these instructions on all the nodes. I am using Ubuntu on all the nodes.***
* **Enable IPv4 packet forwarding:**
```bash
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
```
Run `sudo sysctl net.ipv4.ip_forward` to verify that `net.ipv4.ip_forward` is set to 1
* **Specify and load the following kernel module dependencies:**
```bash
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
```
* **Install containerd:**
***Add Docker's official GPG key:***
```bash
sudo apt-get update
sudo apt-get -y install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
```
***Add the repository to Apt sources:***
```bash
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
***Install containerd***
```bash
sudo apt-get update
sudo apt-get -y install containerd.io
```
For more details, refer to the [Official installation documentation](https://github.com/containerd/containerd/blob/main/docs/getting-started.md#option-2-from-apt-get-or-dnf)
* **Configure systemd cgroup driver for containerd**
First, we need to create a containerd configuration file at the location `/etc/containerd/config.toml`
```bash
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
```
Now, we enable the systemd cgroup driver for the CRI: in `/etc/containerd/config.toml`, under the section `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]`, set `SystemdCgroup = true`

OR we can just run
```bash
sudo sed -i "s/SystemdCgroup = false/SystemdCgroup = true/g" "/etc/containerd/config.toml"
```
Restart containerd
```bash
sudo systemctl restart containerd
```
Containerd should be running, check the status using:
```bash
sudo systemctl status containerd
```
## Install kubeadm, kubelet and kubectl
***Run these commands on all nodes. These instructions are for Kubernetes v1.30.***
* **Install the `apt-transport-https`, `ca-certificates`, `curl`, and `gpg` packages**
```bash
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
```
* **Download the public signing key for the Kubernetes package repositories**
```bash
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
```
* **Add the `apt` repository for Kubernetes 1.30**
```bash
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
* **Install kubelet, kubeadm and kubectl**
```bash
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
* **Optional: enable the kubelet service before running kubeadm**
```bash
sudo systemctl enable --now kubelet
```
## Initialize the k8s control plane
***Run these instructions only on the control plane node***
* ***kubeadm init***
To initialize the control plane, run the `kubeadm init` command. You also need to choose a pod network add-on and deploy a `Container Network Interface (CNI)` so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start until a network is installed.
We will use Calico CNI, so set the `--pod-network-cidr=192.168.0.0/16`.
```bash
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
```
Now, at the end of the `kubeadm init` output, you'll see the `kubeadm join` command:
`sudo kubeadm join <control-plane-ip>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash <hash>`. Copy it and keep it safe.
* **Run the following commands to set kubeconfig to access the cluster using kubectl**
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Now, check the node using
```bash
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-zzz-zz-z-zz NotReady control-plane 114s v1.30.2
```
The node is in the `NotReady` state because of `message: 'container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized' reason: KubeletNotReady`. After setting up the CNI, the node should be in a Ready state.
* **Set Up a Pod Network**
We must deploy a Container Network Interface (CNI) based Pod network add-on so that Pods can communicate with each other.
***Install the calico operator on the cluster.***
```bash
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
```
***Download the custom resources necessary to configure Calico.***
```bash
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
```
***Verify all the Calico pods are in running state.***
```bash
$ kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6f459db86d-mg657 1/1 Running 0 3m36s
calico-node-ctc9q 1/1 Running 0 3m36s
calico-typha-774d5fbdb7-s7qsg 1/1 Running 0 3m36s
csi-node-driver-bblm8 2/2 Running 0 3m3
```
***Verify that the node is now in the Ready state***
```bash
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-xxx-xx-x-xx Ready control-plane 2m46s v1.30.2
```
## Join the worker nodes
Ensure `containerd`, `kubeadm`, `kubectl`, and `kubelet` are installed on all worker nodes, then run `sudo kubeadm join <control-plane-ip>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash <hash>`, which you can find at the end of the `kubeadm init` command's output.
### Check the cluster state
***Run these commands on the control-plane node since the worker nodes do not have the kubeconfig file.***
* **Check if the worker nodes are joined to the cluster.**
```bash
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-xxx-xx-xx-xx Ready <none> 9m9s v1.30.2
ip-yyy-yy-yy-yy Ready <none> 23s v1.30.2
ip-zzz-zz-z-zz Ready control-plane 27m v1.30.2
```
To add a worker role to a worker node, we can use the `kubectl label node <node-name> node-role.kubernetes.io/worker=worker` command.
```bash
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-xxx-xx-xx-xx Ready worker 12m v1.30.2
ip-yyy-yy-yy-yy Ready worker 3m51s v1.30.2
ip-zzz-zz-z-zz Ready control-plane 31m v1.30.2
```
* **Check the workload running on the cluster**
```bash
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-6fcb65fbd5-n4wsn 1/1 Running 0 35m
calico-apiserver calico-apiserver-6fcb65fbd5-nnggl 1/1 Running 0 35m
calico-system calico-kube-controllers-6f459db86d-mg657 1/1 Running 0 35m
calico-system calico-node-ctc9q 1/1 Running 0 35m
calico-system calico-node-dmgt2 1/1 Running 0 18m
calico-system calico-node-nw4t5 1/1 Running 0 9m49s
calico-system calico-typha-774d5fbdb7-s7qsg 1/1 Running 0 35m
calico-system calico-typha-774d5fbdb7-sxb5c 1/1 Running 0 9m39s
calico-system csi-node-driver-bblm8 2/2 Running 0 35m
calico-system csi-node-driver-jk4sz 2/2 Running 0 18m
calico-system csi-node-driver-tbrrj 2/2 Running 0 9m49s
kube-system coredns-7db6d8ff4d-5f7s5 1/1 Running 0 37m
kube-system coredns-7db6d8ff4d-qj9r8 1/1 Running 0 37m
kube-system etcd-ip-zzz-zz-z-zz 1/1 Running 0 37m
kube-system kube-apiserver-ip-zzz-zz-z-zz 1/1 Running 0 37m
kube-system kube-controller-manager-ip-zzz-zz-z-zz 1/1 Running 0 37m
kube-system kube-proxy-dq8k4 1/1 Running 0 9m49s
kube-system kube-proxy-t2sw9 1/1 Running 0 18m
kube-system kube-proxy-xd6nn 1/1 Running 0 37m
kube-system kube-scheduler-ip-zzz-zz-z-zz 1/1 Running 0 37m
tigera-operator tigera-operator-76ff79f7fd-jj4kf 1/1 Running 0 35m
```
## Conclusion
Setting up a Kubernetes cluster with Kubeadm involves a clear and structured process. You can create a functional cluster by meeting all prerequisites, configuring the container runtime, and installing Kubernetes components. Using Calico for networking ensures seamless pod communication. With the control plane and worker nodes properly configured and joined, you can efficiently manage and deploy workloads across your Kubernetes cluster.
***Thank you for reading this blog; your interest is greatly appreciated, and I hope it helps you on your Kubernetes journey; in the next article, we'll explore running workloads in the Kubernetes cluster.*** | pratikjagrut |
1,897,754 | Check used fonts on a webpage | Circumstances As I am currently learning next.j. I am going through the course created by... | 0 | 2024-06-23T14:42:22 | https://dev.to/machy44/check-used-fonts-on-a-webpage-29cn | webdev, css, frontend, webdesign | ---
title: Check used fonts on a webpage
published: true
description:
tags: webdev, css, frontend, webdesign
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-23 12:46 +0000
---
## Circumstances
As I am currently learning Next.js, I am going through the course created by the Next.js team (https://nextjs.org/learn).
There is a section related to font and image optimization.
Honestly, I have never paid much attention to fonts, or to which fonts are used on a certain element of a webpage.
But going through this material, I realized you can find out which fonts are used on a certain element. A page could have more than one custom font, and this could be useful.
As I am using Chrome as my default browser I opened devtools and checked the used fonts.

This seemed pretty sparse and without much info.
But I did the same thing in Firefox, and Firefox seems to have many more options regarding fonts. You can change the size, line height, spacing, and weight through its editor. This is quite useful, especially while you are trying to find out which font properties suit a certain element best.
<img width="100%" style="width:100%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExMmk2ZDJoNWQ1bzJiOHFzc295eTk4cm90d2d1MjQyZnFqcno5OTc0OSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/Pl68JgH1lez8j1pyli/giphy.gif">
## My thoughts
These findings regarding fonts reminded me of past troubles with layouts, when I could do much better debugging in Firefox than in Chrome.
Firefox seems to take more care of CSS tooling to make front-end development more fluid than Chrome does. I should take this into consideration next time I am doing CSS work, and use Firefox as the default!
Cheers. Have a good day! | machy44 |
1,897,863 | What is Docker | Visual thinking |2024 | What is Docker ? Docker is an open platform for developing, shipping, and running applications.... | 0 | 2024-06-23T14:38:43 | https://dev.to/rahulcolud2023/what-is-docker-visual-thinking-2024-4e62 | What is Docker ?
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.

With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker's methodologies for shipping, testing, and deploying code, you can significantly reduce the delay between writing code and running it in production.

@Rahulcloud2023
| rahulcolud2023 | |
1,897,862 | Concurrency and Parallelism | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T14:36:59 | https://dev.to/mr_destructive/concurrency-and-parallelism-37hl | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
### Concurrency
Concurrency is like having one person doing multiple household tasks by switching between them. Imagine someone who starts washing dishes, then pauses to vacuum the living room, and then goes back to finish the dishes. They’re doing tasks in small steps, one after the other, to make progress on all tasks without completing one entirely before starting the next.
### Parallelism
Parallelism is like having several people each doing different household tasks at the same time. One person vacuums the living room while another washes the dishes and someone else mows the lawn. They’re working simultaneously on different tasks, getting everything done faster.
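In shell terms, parallelism is what happens when you hand independent tasks to separate background jobs and `wait` for them all. A tiny sketch (the `sleep` calls stand in for real chores):

```shell
start=$(date +%s)

# two independent "chores" running at the same time as background jobs
(sleep 1; echo "dishes done") &
(sleep 1; echo "vacuum done") &

wait  # block until both background jobs finish

# elapsed time is about 1 second, not 2, because the jobs overlapped
echo "elapsed: $(( $(date +%s) - start ))s"
```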
| mr_destructive |
1,897,861 | AWS: Karpenter and SSH for Kubernetes WorkerNodes | Setting up SSH access to EC2 created by Karpenter in EKS from AWS Session Manager, AWS EC2 Instance Connect, and via EC2 User Data manually and from Terraform | 0 | 2024-06-23T14:36:36 | https://rtfm.co.ua/en/aws-karpenter-and-ssh-for-kubernetes-workernodes/ | security, aws, devops, kubernetes | ---
title: AWS: Karpenter and SSH for Kubernetes WorkerNodes
published: true
tags: security,aws,devops,kubernetes
description: Setting up SSH access to EC2 created by Karpenter in EKS from AWS Session Manager, AWS EC2 Instance Connect, and via EC2 User Data manually and from Terraform
canonical_url: https://rtfm.co.ua/en/aws-karpenter-and-ssh-for-kubernetes-workernodes/
---

We have an AWS EKS cluster with WorkerNodes/EC2 created with Karpenter.
The process of creating the infrastructure, cluster, and launching Karpenter is described in previous posts:
- [Terraform: Building EKS, part 1 — VPC, Subnets and Endpoints](https://rtfm.co.ua/en/terraform-building-eks-part-1-vpc-subnets-and-endpoints/)
- [Terraform: Building EKS, part 2 — an EKS cluster, WorkerNodes, and IAM](https://rtfm.co.ua/en/terraform-building-eks-part-2-an-eks-cluster-workernodes-and-iam/)
- [Terraform: Building EKS, part 3 — Karpenter installation](https://rtfm.co.ua/en/terraform-building-eks-part-3-karpenter-installation/)
What this system really lacks is SSH access to the servers, without which you feel like... well, like a DevOps engineer, not an Infrastructure Engineer. In short, SSH access is sometimes necessary, but, surprise, Karpenter does not allow you to add a key to the WorkerNodes it manages out of the box.
Although, how hard would it be to add to the [`EC2NodeClass`](https://karpenter.sh/v0.32/concepts/nodeclasses/) a way to pass an SSH key, the same way it is done in Terraform's `resource "aws_instance"` with the `key_name` parameter?
But it's okay. If it's not there, it's not there. Maybe they will add it later.
Instead, Karpenter’s documentation [Can I add SSH keys to a NodePool?](https://karpenter.sh/docs/faq/#can-i-add-ssh-keys-to-a-nodepool) suggests using either [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) or [AWS EC2 Instance Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-connect-methods.html), or “the old school way” — add the public part of the key via [AWS EC2 User Data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html), and connect via a bastion host or a VPN.
So what we’re going to do today is:
- try all three solutions one by one
- first we’ll do each one by hand, then we’ll see how to add it to our automation with Terraform
- and then we’ll decide which option will be the easiest
### Option 1: AWS Systems Manager Session Manager and SSH on EC2
AWS Systems Manager Session Manager is used to manage EC2 instances. In general, it can do quite a lot, for example, keep track of patches and updates for packages that are installed on instances.
Currently, we are only interested in it as a system that will give us SSH access to a Kubernetes WorkerNode.
It requires an SSM agent, which is installed by default on all instances with Amazon Linux AMI.
Find nodes created by Karpenter (we have a dedicated label for them):
```
$ kubectl get node -l created-by=karpenter
NAME STATUS ROLES AGE VERSION
ip-10-0-34-239.ec2.internal Ready <none> 21h v1.28.8-eks-ae9a62a
ip-10-0-35-100.ec2.internal Ready <none> 9m28s v1.28.8-eks-ae9a62a
ip-10-0-39-0.ec2.internal Ready <none> 78m v1.28.8-eks-ae9a62a
...
```
Find an Instance ID:
```
$ kubectl get node ip-10-0-34-239.ec2.internal -o json | jq -r ".spec.providerID" | cut -d \/ -f5
i-011b1c0b5857b0d92
```
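The `cut` above works because the node's `providerID` is a fixed-shape URI, `aws:///<availability-zone>/<instance-id>`, so the Instance ID is always the fifth `/`-separated field. A quick sketch on a literal value:

```shell
provider_id="aws:///us-east-1a/i-011b1c0b5857b0d92"

# field 5 of the '/'-separated URI is the Instance ID
echo "$provider_id" | cut -d/ -f5
# i-011b1c0b5857b0d92
```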
### AWS CLI: TargetNotConnected when calling the StartSession operation
Try to connect and get the error “ **TargetNotConnected** ”:
```
$ aws --profile work ssm start-session --target i-011b1c0b5857b0d92
An error occurred (TargetNotConnected) when calling the StartSession operation: i-011b1c0b5857b0d92 is not connected.
```
Or through the AWS Console:

But here, too, we have the connection error — “ **SSM Agent is not online** ”:

The error occurs because:
- either the IAM role that is connected to the instance does not have SSM permissions
- or EC2 is running on a private network and the agent cannot connect to an external endpoint
### SessionManager and IAM Policy
Let’s check. Find the IAM Role attached to this EC2:

And the policies connected to it — there is nothing about SSM:

Edit the Role by hand for now, then we will do it in Terraform code:

Attach the `AmazonSSMManagedInstanceCore`:

And in a minute or two, try again:


### SessionManager and VPC Endpoint
Another possible reason for the problems connecting the SSM agent to AWS is that the instance does not have access to SSM endpoints:
- `ssm.region.amazonaws.com`
- `ssmmessages.region.amazonaws.com`
- `ec2messages.region.amazonaws.com`
If the subnet is private, and has limits on external connections, then you may need to create a [VPC Endpoint](https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-create-vpc.html) for SSM.
See [SSM Agent is not online](https://repost.aws/questions/QUCuCl8hlTR_Sdnn7GrGsr4w/ssm-agent-is-not-online-the-ssm-agent-was-unable-to-connect-to-a-systems-manager-endpoint-to-register-itself-with-the-service) and [Troubleshooting Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-troubleshooting.html).
### AWS CLI: SessionManagerPlugin is not found
However, after the IAM fix, when connecting from a workstation using the AWS CLI, we can get the “ **SessionManagerPlugin is not found** ” error:
```
$ aws --profile work ssm start-session --target i-011b1c0b5857b0d92
SessionManagerPlugin is not found. Please refer to SessionManager Documentation here: http://docs.aws.amazon.com/console/systems-manager/session-manager-plugin-not-found
```
Install it locally — see the documentation [Installing the Session Manager Plugin for the AWS CLI](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html).
For Arch Linux there is a [`aws-session-manager-plugin`](https://aur.archlinux.org/packages/aws-session-manager-plugin) package in AUR:
```
$ yay -S aws-session-manager-plugin
```
And now we can connect:
```
$ aws --profile work ssm start-session --target i-011b1c0b5857b0d92
Starting session with SessionId: arseny-33ahofrlx7bwlecul2mkvq46gy
sh-4.2$
```
All that’s left is to add it to the automation.
### Terraform: EKS module, and adding an IAM Policy
For the Terraform EKS module from Anton Babenko, we can add a policy through the `iam_role_additional_policies` parameter - see the [`node_groups.tf`](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/7cd3be3fbbb695105a447b37c4653a00b0b51b94/node_groups.tf#L395), and in the examples of the [AWS EKS Terraform module](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/18.16.0#usage).
In the 20.0 module, the parameter name has changed — `iam_role_additional_policies` => `node_iam_role_additional_policies`, but we are still using version 19.21.0, and the role is added in this way:
```
...
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.21.0"
cluster_name = local.env_name
cluster_version = var.eks_version
...
vpc_id = local.vpc_out.vpc_id
subnet_ids = data.aws_subnets.private.ids
control_plane_subnet_ids = data.aws_subnets.intra.ids
manage_aws_auth_configmap = true
eks_managed_node_groups = {
...
# allow SSM
iam_role_additional_policies = {
AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
...
```
Remove what we did manually, deploy the Terraform code, and check that the Policy has been added:

And the connection is working:
```
$ aws --profile work ssm start-session --target i-011b1c0b5857b0d92
Starting session with SessionId: arseny-pt7d44xp6ibvqcezj2oqjaxv5q
sh-4.2$ bash
[ssm-user@ip-10-0-34-239 bin]$ pwd
/usr/bin
```
### Option 2: AWS EC2 Instance Connect and SSH on EC2
Another way to connect is through EC2 Instance Connect. Documentation — [Connect to your Linux instance with EC2 Instance Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-linux-inst-eic.html).
It also requires an agent, which is also installed by default on the Amazon Linux.
For instances on private networks, EC2 Instance Connect VPC Endpoint is required for connection.
### SecurityGroup and SSH
Instance Connect through the endpoint requires access to port 22, SSH (as opposed to SSM, which opens a connection through the agent itself).
Open the port for all addresses in the VPC:


### EC2 Instance Connect VPC Endpoint
Go to the VPC Endpoints and create an endpoint:

Select the _EC2 Instance Connect Endpoint_ type, the VPC itself, and the SecurityGroup:

Choose a Subnet — we have most of the resources in _us-east-1a_, so we’ll use it to avoid unnecessary cross-AvailabilityZone traffic (see [AWS: Cost optimization — services expenses overview and traffic costs in AWS](https://rtfm.co.ua/en/aws-cost-optimization-services-expenses-overview-and-traffic-costs-in-aws/)):

Wait a few minutes for the Active status:


And connect using AWS CLI by specifying `--connection-type eice`, because the instances are on a private network:
```
$ aws --profile work ec2-instance-connect ssh --instance-id i-011b1c0b5857b0d92 --connection-type eice
...
[ec2-user@ip-10-0-34-239 ~]$
```
### Terraform: EC2 Instance Connect, EKS, and VPC
For Terraform, here you will need to add the [`node_security_group_additional_rules`](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest#input_node_security_group_additional_rules) in the [EKS module](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/) for SSH access, and create an EC2 Instance Connect Endpoint for the VPC, as in my case the VPC and EKS are created separately.
```
...
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.21.0"
cluster_name = local.env_name
cluster_version = var.eks_version
...
# allow SSM
iam_role_additional_policies = {
AmazonSSMManagedInstanceCore = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
...
}
node_security_group_name = "${local.env_name}-node-sg"
cluster_security_group_name = "${local.env_name}-cluster-sg"
# to use with EC2 Instance Connect
node_security_group_additional_rules = {
ingress_ssh_vpc = {
description = "SSH from VPC"
protocol = "tcp"
from_port = 22
to_port = 22
cidr_blocks = [local.vpc_out.vpc_cidr]
type = "ingress"
}
}
node_security_group_tags = {
"karpenter.sh/discovery" = local.env_name
}
...
}
...
```
If you created it manually, as described above, then remove the rule from SecurityGroup with SSH and deploy it from Terraform.
For the VPC EC2 Instance Connect Endpoint, I did not find how to do this through Anton Babenko’s module [terraform-aws-modules/vpc](https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest), but you can make it a separate resource through [`aws_ec2_instance_connect_endpoint`](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_instance_connect_endpoint):
```
resource "aws_ec2_instance_connect_endpoint" "example" {
subnet_id = module.vpc.private_subnets[0]
security_group_ids = ["sg-0b70cfd6019c635af"]
}
```
However, here you need to pass the SecurityGroup ID from the cluster, and the cluster is created after the VPC, so there is a chicken-and-egg problem.
In general, Instance Connect seems a bit more complicated to automate than SSM, because it requires more code changes, spread across different modules.
However, it is a working option, and if your automation allows it, you can use it.
### Option 3: the old-fashioned way with SSH Public Key via EC2 User Data
And the oldest and perhaps the simplest option is to create an SSH key yourself and add its public part to EC2 when creating an instance.
The disadvantages here are that it will be difficult to add many keys in this way, and EC2 User Data can sometimes go sideways, but if you need to add only one key, a kind of “super-admin” in case of emergency, then this is a perfectly valid option.
Moreover, if you have a VPN to the VPC (see [Pritunl: Launching a VPN in AWS on EC2 with Terraform](https://rtfm.co.ua/en/pritunl-launching-a-vpn-in-aws-on-ec2-with-terraform/)), then the connection will be even easier.
Create a key:
```
$ ssh-keygen
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/setevoy/.ssh/id_ed25519): /home/setevoy/.ssh/atlas-eks-ec2
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/setevoy/.ssh/atlas-eks-ec2
Your public key has been saved in /home/setevoy/.ssh/atlas-eks-ec2.pub
...
```
The public part can be stored in a repository — copy it:
```
$ cat ~/.ssh/atlas-eks-ec2.pub
ssh-ed25519 AAA***VMO setevoy@setevoy-wrk-laptop
```
Next, a few workarounds: the `EC2NodeClass` in my case is created from the Terraform code through the `kubectl_manifest` resource. The easiest option that has come to mind so far is to add the public key to the `variables`, and then use it in the `kubectl_manifest`.
Later, I will probably move such resources to a dedicated Helm chart.
For now, let’s create a new variable:
```
variable "karpenter_nodeclass_ssh" {
  type        = string
  default     = "ssh-ed25519 AAA***VMO setevoy@setevoy-wrk-laptop"
  description = "SSH Public key for EC2 created by Karpenter"
}
```
In the `EC2NodeClass` configuration, add the [`spec.userData`](https://karpenter.sh/docs/concepts/nodeclasses/#specuserdata):
```
resource "kubectl_manifest" "karpenter_node_class" {
  yaml_body = <<-YAML
    apiVersion: karpenter.k8s.aws/v1beta1
    kind: EC2NodeClass
    metadata:
      name: default
    spec:
      amiFamily: AL2
      role: ${module.eks.eks_managed_node_groups["${local.env_name_short}-default"].iam_role_name}
      subnetSelectorTerms:
        - tags:
            karpenter.sh/discovery: "atlas-vpc-${var.environment}-private"
      securityGroupSelectorTerms:
        - tags:
            karpenter.sh/discovery: ${local.env_name}
      tags:
        Name: ${local.env_name_short}-karpenter
        environment: ${var.environment}
        created-by: "karpenter"
        karpenter.sh/discovery: ${local.env_name}
      userData: |
        #!/bin/bash
        mkdir -p ~ec2-user/.ssh/
        touch ~ec2-user/.ssh/authorized_keys
        echo "${var.karpenter_nodeclass_ssh}" >> ~ec2-user/.ssh/authorized_keys
        chmod -R go-w ~ec2-user/.ssh/authorized_keys
        chown -R ec2-user ~ec2-user/.ssh
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}
```
If you are **not** using Amazon Linux, change `ec2-user` to the appropriate user.
**_Important Note_**: Keep in mind that changes to the EC2NodeClass will recreate all instances, so make sure your services are configured for stable operation; see [_Kubernetes: Providing High Availability for Pods_](https://rtfm.co.ua/en/kubernetes-ensuring-high-availability-for-pods/).
Deploy it and check it:
```
$ kk get ec2nodeclass -o yaml
...
userData: #!/bin/bash\nmkdir -p ~ec2-user/.ssh/\ntouch ~ec2-user/.ssh/authorized_keys\necho
\"ssh-ed25519 AAA***VMO setevoy@setevoy-wrk-laptop\" >> ~ec2-user/.ssh/authorized_keys\nchmod -R go-w
~ec2-user/.ssh/authorized_keys\nchown -R ec2-user ~ec2-user/.ssh \n
...
```
Wait for Karpenter to create a new WorkerNode and try SSH:
```
$ ssh -i ~/.ssh/hOS/atlas-eks-ec2 ec2-user@10.0.39.73
...
[ec2-user@ip-10-0-39-73 ~]$
```
Done.
### Conclusions.
- **AWS SessionManager** : looks like the easiest option in terms of automation, and is recommended by AWS itself, but you need to think about how to use `scp` through it (although it is possible with a few extra steps, see [SSH and SCP with AWS SSM](https://globaldatanet.com/tech-blog/ssh-and-scp-with-aws-ssm))
- **AWS EC2 Instance Connect** : a cool feature from Amazon, but somehow more troublesome to automate, so not our option
- **“grandfathered” SSH** : well, the old way is tried and true :-) I don’t really like User Data, because it can sometimes lead to problems with launching instances; however, it is also simple in terms of automation, and gives you the usual SSH without extra steps
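For the SSM option, the usual approach from that post is to tunnel plain SSH through a Session Manager session, which also makes `scp` work. A typical `~/.ssh/config` fragment looks like this (the user and key path are just examples, and a public key still has to be authorized on the node):

```
# route SSH/SCP for EC2 instance IDs through an SSM session
Host i-* mi-*
    User ec2-user
    IdentityFile ~/.ssh/atlas-eks-ec2
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
```

With that in place, `ssh` and `scp` to an instance ID go through SSM without opening port 22 at all.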
_Originally published at_ [_RTFM: Linux, DevOps, and system administration_](https://rtfm.co.ua/en/aws-karpenter-and-ssh-for-kubernetes-workernodes/)_._
* * * | setevoy |
1,894,838 | Master Java Programming: Comprehensive Guide Part 1 | Since its debut, Java, a flexible and potent programming language, has played a significant role in... | 0 | 2024-06-23T14:32:35 | https://dev.to/bishop_bhaumik/master-java-programming-comprehensive-guide-part-1-1m7g | java, devops, development | Since its debut, Java, a flexible and potent programming language, has played a significant role in the software development industry. Java, which is well-known for its power and usability, has applications in many other domains, including commercial solutions and the creation of mobile applications. We shall examine Java's fundamental features and a variety of applications in this blog.
## Java Features
Java stands out in the programming world due to its rich set of features. Let's take a closer look at what makes Java so unique:
**Object-Oriented**
Java arranges code into objects that include both data and methods, in accordance with the object-oriented programming (OOP) paradigm. This method facilitates easy maintenance, modularity, and code reuse.
**Platform Independent**
One of Java's most significant benefits is its platform-agnostic nature. Java programs are compiled into bytecode, which can run on any device equipped with the Java Virtual Machine (JVM). This "write once, run anywhere" capability saves developers' time and effort.
**Architecture-Insensitive**
Java's bytecode is designed to be architecture-neutral, making the compiled code executable on any processor with a compatible JVM. This feature ensures Java applications are not tied to any specific hardware.
**Portable**
Java's platform-agnostic and architecture-insensitive characteristics make it highly transferable. Java code can be easily transferred and executed across different environments without requiring any modification.
**Robust**
Java focuses on its dependability and robustness. It includes robust memory management features, automatic garbage collection, and exception handling, which minimize the likelihood of memory leaks and system crashes.
**Multithreading**
Multithreading is a vital aspect of Java, enabling the concurrent execution of multiple threads within a program. This capability is crucial for the development of high-performance applications that can effectively manage multiple tasks simultaneously.
In Java, a thread represents a lightweight subprocess, which is the smallest unit of processing. Threads allow a program to perform multiple operations simultaneously by dividing tasks into smaller, manageable sub-tasks. This division is particularly beneficial in situations where tasks are independent of one another, resulting in a more efficient use of system resources.
Java provides built-in support for multithreading through the thread class, which is a part of the Java language, and the java.util.concurrent package. This article will delve into the specifics of how multithreading works in Java.
**Creating Threads**
In Java, you can construct threads by either implementing the Runnable interface or by extending the Thread class. Because it keeps the task and the thread execution mechanism apart, the Runnable interface is recommended.
_Example:_
```
class MyThread extends Thread {
    public void run() {
        System.out.println("Thread is running...");
    }
}

class MyRunnable implements Runnable {
    public void run() {
        System.out.println("Runnable is running...");
    }
}

public class Test {
    public static void main(String[] args) {
        MyThread t1 = new MyThread();
        t1.start();

        Thread t2 = new Thread(new MyRunnable());
        t2.start();
    }
}
```
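Besides extending `Thread` or implementing `Runnable` by hand, the `java.util.concurrent` package mentioned above provides thread pools that manage the threads for you. A small self-contained sketch (the class and method names are my own, not from any framework) that splits a summation across a fixed pool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SumTask {
    // Split the sum 1..n across a fixed thread pool and combine the partial results
    static long parallelSum(int n, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            int chunk = n / threads;
            List<Future<Long>> futures = new ArrayList<>();
            for (int t = 0; t < threads; t++) {
                final int start = t * chunk + 1;
                final int end = (t == threads - 1) ? n : (t + 1) * chunk;
                futures.add(pool.submit(() -> {
                    long s = 0;
                    for (int i = start; i <= end; i++) s += i;
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : futures) total += f.get(); // get() blocks until the task is done
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSum(1000, 4)); // 1 + 2 + ... + 1000 = 500500
    }
}
```

Compared to starting raw threads, the pool reuses a fixed number of workers, and `Future.get()` returns the results without manual synchronization.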
**High Performance**
Java achieves high performance through the use of Just-In-Time (JIT) compilation, despite being an interpreted language. The JIT compiler converts bytecode into native machine code at runtime, significantly enhancing performance.
**Distributed**
Java also provides built-in support for distributed computing through technologies such as Remote Method Invocation (RMI) and the Java Naming and Directory Interface (JNDI). This makes it easier to develop applications that can operate across multiple devices and networks.
**Interpreted**
Java code is initially compiled into bytecode, which is then interpreted by the JVM. This interpretation allows for cross-platform execution and dynamic linking of new classes during runtime.
## Java Applications
Java's many applications demonstrate how versatile it is. The following are some crucial domains in which Java shines:
**Enterprise Solutions**
Large-scale enterprise application development is a major use case for Java. Because of its many libraries, security features, and scalability, it's perfect for creating reliable business solutions.
**Game Development**
For the purpose of creating video games, Java offers a wide range of frameworks and APIs, including the Lightweight Java Game Library (LWJGL). Because of Java's portability, games can operate across a variety of systems with little to no changes.
**Secured Web Development**
Web developers frequently utilize Java, especially when creating secure web apps. Web application development tools and features for safe, scalable, and maintainable web applications are provided by frameworks such as Spring and JavaServer Faces (JSF).
**Embedded Systems**
Java's compact and efficient nature makes it suitable for embedded systems. It is used in various devices, from smart cards and sensors to complex industrial systems, due to its portability and robustness.
**Mobile Application Development**
Java is the primary language for Android app development. The Android SDK provides a comprehensive set of tools and libraries for creating feature-rich mobile applications, making Java a crucial skill for mobile developers.
**Big Data Applications**
Java plays a significant role in big data technologies. Frameworks like Apache Hadoop and Apache Spark are built using Java, enabling the processing and analysis of large datasets efficiently.
## Java: JDK vs JRE vs JVM:
Java development and execution relies on three key components: JDK, JRE, and JVM. Each plays a distinct role in the Java ecosystem. The following is a brief overview of their functions and differences.
**Java Development Kit (JDK):**
_Purpose:_ Development
JDK is a comprehensive software development kit used by developers to create Java applications. It includes:
- Compiler (javac): Converts Java source code into bytecode.
- Java Runtime Environment (JRE): Provides the libraries, Java Virtual Machine (JVM), and other components needed to run Java applications.
- Development Tools: Debuggers, profilers, and other tools necessary for development.
JDK is essential for compiling and debugging Java programs.
**Java Runtime Environment (JRE):**
_Purpose:_ Execution
JRE is a subset of JDK designed for end users who want to run Java applications. It includes:
- Java Virtual Machine (JVM): Executes Java bytecode.
- Core Libraries: Necessary class libraries and other files for Java applications.
- Supporting Files: Configuration files and settings.
The JRE does not include development tools such as compilers or debuggers, making it suitable for running, but not developing, Java applications.
**Java Virtual Machine (JVM)**
_Purpose:_ Execution and Interpretation
JVM is an abstract computing machine that enables a computer to run Java programs. Its key responsibilities are as follows:
- Bytecode Execution: Converts Java bytecode into machine code and executes it.
- Memory Management: Handles memory allocation and garbage collection.
- Platform Independence: Allows Java applications to run on any device or operating system with a compatible JVM.
JVM is part of both JDK and JRE, ensuring that Java applications can be run on any platform.
**Summary**
- JDK: JRE + development tools used for developing Java applications.
- JRE: JVM + libraries used for running Java applications.
- JVM: Part of both JDK and JRE; interprets and executes Java bytecode.
Understanding these components helps developers and users ensure that they have the necessary tools and environments to develop and run Java applications effectively.
**Conclusion**
Java is a strong and adaptable language for developers due to its features and broad variety of applications. Java gives you the tools and features you need to build reliable, efficient software, whether you're working on enterprise solutions, games, secure web applications, mobile apps, big data applications, or embedded systems. Gaining an understanding of Java's fundamentals can open up a world of prospects for you in the software development industry.
In coming Weeks I will share other aspects and learnings. Till then Have a nice day.. | bishop_bhaumik |
1,897,857 | Still earning money with code? 😇 | Claude 3.5 Sonnet just coded a full-featured file upload (link to demo mp4) component for me: 1) + a... | 0 | 2024-06-23T14:32:25 | https://dev.to/alexanderisora/still-earning-money-with-code-1ofb | webdev, javascript, node, chatgpt | Claude 3.5 Sonnet just coded a full-featured file upload ([link to demo mp4](https://temp.paracast.io/ai_code.mp4)) component for me:
1) + a spinner
2) size restriction
3) file extension restriction
4) + a preview
5) + state management
6) + upload to a Cloudflare R2 bucket
If you still sell code, you definitely have to take a break and think about your future. What they pay you for today will cost way less tomorrow. Not because programmers will vanish, but because one programmer will be able to do x100 more work and therefore the market will need x100 fewer programmers. It is the basic law of supply and demand.
One possible way for you to still earn money with your laptop is by building your own products. You do not have to aim for millions. You can build a small project that brings you a net $10K/m and live a happy life in the future 🙂 | alexanderisora |
1,897,840 | update jdk8 with apt in ubuntu18 | Following steps are running on a ubuntu18 server vm in pve. And jdk8 was installed with apt... | 0 | 2024-06-23T14:28:29 | https://dev.to/masonycl/update-jdk8-in-ubuntu18-15lg | Following steps are running on a ubuntu18 server vm in pve.
JDK 8 had been installed with apt a few years ago.
1. check current version in use
run: `java -version`
got: `openjdk version "1.8.0_312"`
This version is outdated.
2. check apt source
run:
`sudo apt update`
then:
run: `apt list | grep jdk`
this will give something like
`openjdk-8-jdk/bionic-updates,bionic-security,now 8u372-ga~us1-0ubuntu1~18.04 amd64`
followed by `[upgradable from: ...]`
This is quite obvious.
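If you script this check, the `[upgradable from: ...]` suffix is the part to test for. A tiny helper (the function name is mine) that works on a captured `apt list` line:

```sh
# succeed if an `apt list` output line marks the package as upgradable
is_upgradable() {
  printf '%s\n' "$1" | grep -q '\[upgradable from:'
}

is_upgradable "openjdk-8-jdk/bionic-updates,now 8u372-ga~us1-0ubuntu1~18.04 amd64 [upgradable from: 8u312]" \
  && echo "upgrade available"
```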
3. upgrade
run: `sudo apt upgrade openjdk-8-jdk`
This may complain like:
`Could not get lock /var/lib/dpkg/lock-frontend - open`
This means some apt process is holding the lock.
run: `sudo ps -ef | grep apt` to check related processes.
If the resulting processes are background apt/apt-get tasks such as daily-apt, just wait for the background work to finish.
If not, you may have to kill the processes or contact the server administrator.
then:
run: `sudo apt upgrade openjdk-8-jdk`
4. check again
run: `java -version`
got: `openjdk version "1.8.0_362"`
Done. | masonycl | |
1,897,855 | How to use the oobe\bypassnro command in Windows | Have you ever needed to set up a new computer, but been blocked by the process of... | 0 | 2024-06-23T14:24:09 | https://dev.to/kbdemiranda/como-usar-o-comando-oobeypassnro-no-windows-5c35 | windows, oobe, configuração, tecnologia | ---
title: How to use the oobe\bypassnro command in Windows
published: true
description:
tags:
- Windows
- OOBE
- Configuração
- Tecnologia
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cvxd0sagildprt4z74tj.png
---
Have you ever needed to set up a new computer, but been blocked by the OOBE (Out-of-Box Experience) process? It can be frustrating when you are trying to prepare a machine and cannot move forward without a network connection. Fortunately, there is a way around this with the `oobe\bypassnro` command.
#### What is OOBE?
OOBE, or Out-of-Box Experience, is the series of initial steps Windows uses to set up a new device, or after a clean installation of the operating system. This process includes accepting the license terms, creating user accounts, privacy settings, and, often, a required internet connection.
#### Bypass NRO: what does it mean?
NRO stands for Network Requirement Override, that is, disabling the network connection requirement. In certain situations, such as when you do not have immediate access to a network, it is useful to be able to skip this requirement and continue setting up the system.
#### How to use `oobe\bypassnro`
1. **Start the OOBE**: When you are in the middle of the OOBE setup process and hit the screen asking you to connect to a network, press `Shift + F10`. This will open the Command Prompt.
2. **Type the command**: In the Command Prompt, type `oobe\bypassnro` and press Enter.
3. **Restart**: The system will reboot and the OOBE will relaunch. This time it will let you skip the network connection step, allowing you to configure the system without being online.
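The whole sequence comes down to one shortcut and one command; at the network screen it looks like this (after the reboot, OOBE typically shows an "I don't have internet" option):

```cmd
:: At the network connection screen, press Shift + F10, then run:
oobe\bypassnro
```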
#### Why is this useful?
This command is particularly useful for system administrators and technicians who need to set up multiple devices efficiently, without depending on an immediate internet connection. It is also useful in situations where the network is temporarily unavailable.
### Conclusion
The `oobe\bypassnro` command is a simple but powerful tool for bypassing the network requirement during the initial Windows setup. The next time you are setting up a new device and run into the network connection barrier, remember this trick to save time and avoid frustration.
*Photo by [Windows](https://unsplash.com/@windows?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash) on [Unsplash](https://unsplash.com/photos/person-using-windows-11-computer-on-lap-AigsWJmvoEo?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash).*
| kbdemiranda |
1,897,851 | Database Integration with Spring Boot : Best Practices and Tools | Integrating a database with a Spring Boot application is a common task that many developers do.... | 0 | 2024-06-23T14:19:24 | https://dev.to/abhishek999/database-integration-with-spring-boot-best-practices-and-tools-5doh | java, springboot, jpa, mysql | Integrating a database with a Spring Boot application is a common task that many developers do. Spring Boot, combined with Spring Data JPA, provides a robust framework for working with relational databases like MySQL. Additionally, tools like Flyway and Liquibase help manage database migrations efficiently. This blog will cover best practices for using Spring Data JPA with relational databases, integrating with MySQL, and managing database migrations with Flyway or Liquibase
**Using Spring Data JPA with Relational Databases**
Spring Data JPA simplifies the implementation of data access layers by reducing the amount of boilerplate code. It provides a powerful repository abstraction for various data stores, making database interactions more straightforward
**Best Practices for Using Spring Data JPA :**
**Integrating with SQL Databases like MySQL :**
MySQL is one of the most popular relational databases, and integrating it with Spring Boot is straightforward.
**Steps to Integrate MySQL with Spring Boot :**
**Add Dependencies:** Add the necessary dependencies for Spring Data JPA and MySQL connector in your pom.xml
```
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>
```
**Database Configuration :** Configure the database connection details in application.properties or application.yml
```
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/mydatabase
    username: root
    password: rootpassword
    driver-class-name: com.mysql.cj.jdbc.Driver
  jpa:
    hibernate:
      ddl-auto: update
    show-sql: true
```
**Define Your Entities :** Start by defining your JPA entities Each entity represents a table in the database
```
@Entity
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(nullable = false)
    private String name;

    @Column(unique = true, nullable = false)
    private String email;

    // Getters and Setters
}
```
**Create Repositories :** Create repository interfaces to perform CRUD operations. Extend JpaRepository to leverage built-in methods and custom query methods
```
public interface UserRepository extends JpaRepository<User, Long> {
    Optional<User> findByEmail(String email);
}
```
**Create Service Layer:** Use a service layer to encapsulate business logic and interact with the repository
```
@Service
public class UserService {

    @Autowired
    private UserRepository userRepository;

    // Create operation
    public User createUser(User user) {
        // Perform validation or business logic if needed
        return userRepository.save(user);
    }

    // Read operations
    public Optional<User> findUserById(Long id) {
        return userRepository.findById(id);
    }

    public Optional<User> findUserByEmail(String email) {
        return userRepository.findByEmail(email);
    }

    public List<User> getAllUsers() {
        return userRepository.findAll();
    }

    // Update operation
    public User updateUser(Long id, User userDetails) {
        // Ensure the user exists
        User existingUser = userRepository.findById(id)
                .orElseThrow(() -> new ResourceNotFoundException("User not found with id: " + id));

        // Update user details
        existingUser.setName(userDetails.getName());
        existingUser.setEmail(userDetails.getEmail());

        // Save updated user
        return userRepository.save(existingUser);
    }

    // Delete operation
    public void deleteUser(Long id) {
        // Ensure the user exists
        User existingUser = userRepository.findById(id)
                .orElseThrow(() -> new ResourceNotFoundException("User not found with id: " + id));

        // Delete user
        userRepository.delete(existingUser);
    }
}
```
**Exception Handling :**
In the updateUser and deleteUser methods, you may want to handle cases where the user with the specified ID doesn't exist. You can create a custom exception (e.g., ResourceNotFoundException) and throw it if necessary
```
@ResponseStatus(HttpStatus.NOT_FOUND)
public class ResourceNotFoundException extends RuntimeException {
    public ResourceNotFoundException(String message) {
        super(message);
    }
}
```
**Run MySQL Server :** Ensure that the MySQL server is running, and the specified database (mydatabase) exists. You can create the database using MySQL CLI or a GUI tool like MySQL Workbench
**Test the Connection :** Run your Spring Boot application to verify the connection to the MySQL database. If configured correctly, Spring Boot will automatically create the necessary tables based on your entities
**Database Migration with Flyway or Liquibase :**
Managing database schema changes is essential for maintaining the integrity and consistency of your application. Flyway and Liquibase are two popular tools for handling database migrations.
**Using Flyway for Database Migrations**
Flyway is a migration tool that uses SQL scripts to manage database versioning
**Add Dependencies :** Add Flyway dependencies to your pom.xml
```
<dependency>
    <groupId>org.flywaydb</groupId>
    <artifactId>flyway-core</artifactId>
</dependency>
```
**Configure Flyway :** Configure Flyway in application.properties or application.yml
```
spring:
  flyway:
    enabled: true
    locations: classpath:db/migration
```
**Create Migration Scripts :** Place your SQL migration scripts in the src/main/resources/db/migration directory. Name the scripts following Flyway's naming convention (V1__Initial_Setup.sql, V2__Add_User_Table.sql, etc.)
```
-- V1__Initial_Setup.sql
CREATE TABLE user (
    id BIGINT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(100) NOT NULL UNIQUE
);
```
**Run Migrations :** Flyway will automatically run the migrations on application startup
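Each later schema change follows the same pattern: a new script with a higher version prefix, which Flyway applies only if it has not been run yet. A hypothetical follow-up migration (the file name and column are examples, not part of the schema above):

```
-- V2__Add_Created_At_To_User.sql
ALTER TABLE user ADD COLUMN created_at TIMESTAMP NULL;
```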
**Using Liquibase for Database Migrations :**
Liquibase is another powerful tool for managing database migrations, supporting XML, YAML, JSON, and SQL formats.
**Add Dependencies :** Add Liquibase dependencies to your pom.xml
```
<dependency>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-core</artifactId>
</dependency>
```
**Configure Liquibase :** Configure Liquibase in application.properties or application.yml
```
spring:
  liquibase:
    enabled: true
    change-log: classpath:db/changelog/db.changelog-master.yaml
```
**Create ChangeLog Files :** Define your database changes in src/main/resources/db/changelog. Create a master changelog file (db.changelog-master.yaml) that includes other changelog files
```
databaseChangeLog:
  - changeSet:
      id: 1
      author: yourname
      changes:
        - createTable:
            tableName: user
            columns:
              - column:
                  name: id
                  type: BIGINT
                  autoIncrement: true
                  constraints:
                    primaryKey: true
              - column:
                  name: name
                  type: VARCHAR(100)
                  constraints:
                    nullable: false
              - column:
                  name: email
                  type: VARCHAR(100)
                  constraints:
                    nullable: false
                    unique: true
```
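Note that `spring.liquibase.change-log` points at a master file; a common layout, shown here as an assumed example, keeps the master file as nothing but `include` entries so that each change lives in its own file:

```
# db.changelog-master.yaml
databaseChangeLog:
  - include:
      file: db/changelog/changes/001-create-user-table.yaml
```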
**Run Migrations :** Liquibase will automatically run the migrations on application startup
**Conclusion**
Integrating databases with Spring Boot is seamless, thanks to Spring Data JPA, and tools like Flyway and Liquibase make managing database migrations straightforward. By following the best practices outlined in this blog, you can ensure your Spring Boot application interacts efficiently with relational databases like MySQL, and your database schema evolves smoothly as your application grows | abhishek999 |
1,894,729 | Creating an XRP Ledger Account with Web3Auth (Next.js): A Step-by-Step Guide | Introduction The XRP Ledger is a decentralized blockchain focused on cross-border payments; it is used to record financial transactions, and it is fast, low-cost, and effective. XRP... | 0 | 2024-06-23T14:19:20 | https://dev.to/amity808/web3authci-nojswoshi-yong-sitaxrp-ledgerakauntonozuo-cheng-sutetupubaisutetupugaido-1l31 | web3, web3auth, javascript, authjs | ## Introduction
The **XRP Ledger** is a decentralized blockchain focused on cross-border payments; it is used to record financial transactions, and it is fast, low-cost, and effective. The XRP Ledger was created by Chris Larsen, co-founder and CEO of Ripple. The XRP Ledger uses XRP as its native cryptocurrency.
In this tutorial, we implement the XRP Ledger using Web3Auth to generate a new account. This account can be used to execute transactions, and it can be used to send tokens to any address on the XRP Ledger. For a user to get an XRP account, they must authorize with Google, Discord, or another authentication method associated with Web3Auth.
### To start the project, we will use Next.js, Tailwind CSS, and other Web3Auth libraries.
**Open your terminal**
Navigate to the directory where you want to place the code
Create a new Next.js project
```javascript
npx create-next-app@latest my-project --typescript --eslint
```
Your terminal should show something like this

Once that has completed successfully, move into my-project
```javascript
cd my-project
```
Once you have cd-ed into my-project, install Tailwind CSS
```javascript
npm install -D tailwindcss postcss autoprefixer
```

### After that succeeds, run this
```javascript
npx tailwindcss init -p
```

We need to install a few dependencies that we will use in the project.
You can use Yarn as your package installer, or any other package installer you know, such as npm.
```javascript
yarn add @web3auth/xrpl-provider @web3auth/openlogin-adapter @web3auth/modal @web3auth/base @nice-xrpl/react-xrpl
```
After installation, you can open the project directory in VS Code
and we can start our development process
We need to create a utils folder and a components folder in the project directory, where we will define the provider and the other functions we are going to call, such as fetching the account generated by the Web3Auth login with the XRPL provider client.
> If your project structure uses src, create the utils folder in the root project directory, not inside the app or pages folder.
Create a new file named xrpLRPC.ts
```javascript
// Start by importing the provider type from @web3auth/base for TypeScript
import { IProvider } from "@web3auth/base";
// Import the functions we will use for submitting payments to the XRP Ledger
import { convertStringToHex, Payment, xrpToDrops } from "xrpl"

// Define the class that holds all of our requests
export default class XrplRPC {
  // Declare a private provider variable of type IProvider
  private provider: IProvider;

  // Define a constructor that initializes the provider variable with the given argument
  constructor (provider: IProvider) {
    this.provider = provider
  }

  // Method to get the accounts associated with the provider
  getAccounts = async (): Promise<any> => {
    try {
      // Request the accounts using the provider
      const accounts = await this.provider.request<never, string[]>({
        method: "xrpl_getAccounts", // Specify the method to get accounts
      });
      console.log(accounts, "accounts"); // Log the accounts for debugging purposes
      if (accounts) { // Check whether any accounts were returned
        // Request account information for the first account in the list
        const accInfo = await this.provider.request({
          method: "account_info", // Specify the method to get account information
          params: [
            {
              account: accounts[0], // Use the first account
              strict: false, // Non-strict mode allows more lenient account info retrieval
              ledger_index: "current", // Use the current ledger index
              queue: true, // Include queued transactions
            },
          ],
        });
        return accInfo; // Return the account information
      } else {
        return "No accounts found, please report this issue."; // Return an error message if no accounts were found
      }
    } catch (error) { // Handle any errors that occur
      console.error("Error", error); // Log the error
      return error; // Return the error
    }
  };

  // Method to get the balance of the first account
  getBalance = async (): Promise<any> => {
    try {
      // Request the accounts using the provider
      const accounts = await this.provider.request<string[], never>({
        method: "xrpl_getAccounts", // Specify the method to get accounts
      });
      if (accounts) { // Check whether any accounts were returned
        // Request account information for the first account in the list
        const accInfo = (await this.provider.request({
          method: "account_info", // Specify the method to get account information
          params: [
            {
              account: accounts[0], // Use the first account
              strict: true, // Strict mode ensures exact account information
              ledger_index: "current", // Use the current ledger index
              queue: true, // Include queued transactions
            },
          ],
        })) as Record<string, Record<string, string>>;
        return accInfo.account_data?.Balance; // Return the account balance
      } else {
        return "No accounts found, please report this issue."; // Return an error message if no accounts were found
      }
    } catch (error) { // Handle any errors that occur
      console.error("Error", error); // Log the error
      return error; // Return the error
    }
  };

  // Method to get the address of the first account
  getAccountAddress = async (): Promise<any> => {
    try {
      // Request the accounts using the provider
      const accounts = await this.provider.request<string[], never>({
        method: "xrpl_getAccounts", // Specify the method to get accounts
      });
      if (accounts) { // Check whether any accounts were returned
        // Request account information for the first account in the list
        const accInfo = (await this.provider.request({
          method: "account_info", // Specify the method to get account information
          params: [
            {
              account: accounts[0], // Use the first account
              strict: true, // Strict mode ensures exact account information
              ledger_index: "current", // Use the current ledger index
              queue: true, // Include queued transactions
            },
          ],
        })) as Record<string, Record<string, string>>;
        return accInfo?.account; // Return the account address
      } else {
        return "No accounts found, please report this issue."; // Return an error message if no accounts were found
      }
    } catch (error) { // Handle any errors that occur
      console.error("Error", error); // Log the error
      return error; // Return the error
    }
  }

  // Method to sign a message
  signMessage = async (): Promise<any> => {
    try {
      const msg = "Hello world on the XRPL by Amityclev"; // Define the message to sign
      const hexMsg = convertStringToHex(msg); // Convert the message to a hex string
      const txSign = await this.provider.request<{ signature: string }, never>({
        method: "xrpl_signMessage", // Specify the method to sign a message
        params: {
          signature: hexMsg, // Specify the hex message to sign
        },
      });
      return txSign; // Return the signed message
    } catch (error) { // Handle any errors that occur
      console.log("error", error); // Log the error
      return error; // Return the error
    }
  };

  // Method to sign and send a transaction
  signAndSendTransaction = async (): Promise<any> => {
    try {
      // Request the accounts using the provider
      const accounts = await this.provider.request<never, string[]>({
        method: "xrpl_getAccounts", // Specify the method to get accounts
      });
      if (accounts && accounts.length > 0) { // Check that accounts were returned and the list is not empty
        // Create a Payment transaction object
        const tx: Payment = {
          TransactionType: "Payment", // Specify the transaction type
          Account: accounts[0] as string, // Use the first account as the sender
          Amount: xrpToDrops(50), // Specify the amount to send, converting XRP to drops
          Destination: "rM9uB4xzDadhBTNG17KHmn3DLdenZmJwTy", // Specify the destination address
        };
        // Request submission of the transaction
        const txSign = await this.provider.request({
          method: "xrpl_submitTransaction", // Specify the method to submit a transaction
          params: {
            transaction: tx, // Specify the transaction object
          },
        });
        return txSign; // Return the transaction signature
      } else {
        return "failed to fetch accounts"; // Return an error message if no accounts were found
      }
    } catch (error) { // Handle any errors that occur
      console.log("error", error); // Log the error
      return error; // Return the error
    }
  };
}
```
| amity808 |
1,897,850 | Clinical Decision Support Software: Transforming Modern Healthcare | Introduction Clinical Decision Support Software (CDSS) represents a significant... | 27,673 | 2024-06-23T14:18:22 | https://dev.to/rapidinnovation/clinical-decision-support-software-transforming-modern-healthcare-274i | ## Introduction
Clinical Decision Support Software (CDSS) represents a significant advancement
in the medical field, integrating information technology and healthcare to
improve patient outcomes. As healthcare systems become more complex and data-
driven, the role of sophisticated tools to aid medical professionals in their
decision-making processes becomes increasingly crucial.
## What is Clinical Decision Support Software?
CDSS is an advanced technology designed to help healthcare professionals make
informed decisions about patient care. It integrates and analyzes medical
data, providing recommendations and insights that enhance decision-making
processes.
## How Does Clinical Decision Support Software Work?
CDSS works by integrating with Electronic Health Records (EHR) to provide
real-time, evidence-based recommendations. It employs data analysis techniques
like predictive analytics and machine learning to transform raw data into
actionable insights.
## Types of Clinical Decision Support Systems
CDSS can be categorized into knowledge-based systems, which rely on structured
medical knowledge, and non-knowledge-based systems, which use AI and machine
learning to analyze data and make decisions.
## Benefits of Clinical Decision Support Software
CDSS enhances patient safety, improves healthcare quality, reduces costs, and
supports healthcare providers' decision-making by providing timely and
accurate information.
## Challenges in Implementation
Implementing CDSS involves challenges such as integration issues, user
resistance, and data privacy and security concerns. Addressing these
effectively is crucial for a smooth transition.
## Implementation Strategies
Effective implementation strategies include assessing organizational
readiness, choosing the right software, providing training and support for
healthcare providers, and continuous monitoring and optimization.
## Future of Clinical Decision Support Software
The future of CDSS is poised for transformative growth with advances in AI and
machine learning, predictive analytics, and personalized medicine, driving
improvements in patient care and operational efficiencies.
## Real-World Examples
Case studies demonstrate the impact of CDSS, such as improving diagnosis
accuracy with AI and reducing medication errors with computerized physician
order entry systems.
## Why Choose Rapid Innovation for Implementation and Development
Rapid Innovation offers expertise in AI and blockchain, customized solutions,
a proven track record, and comprehensive support, making it an ideal partner
for implementing and developing CDSS.
## Conclusion
Technology's role in advancing healthcare is multifaceted, enhancing nearly
every aspect of the industry. As technology continues to evolve, its
integration into healthcare systems globally is expected to deepen, driving
improvements in patient care, operational efficiencies, and overall health
outcomes.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa)
[AI Development](https://www.rapidinnovation.io/ai-software-development-
company-in-usa)
## URLs
* <https://www.rapidinnovation.io/post/clinical-decision-support-software-benefits-and-implementation-strategies>
## Hashtags
#HealthTech
#ClinicalDecisionSupport
#PatientSafety
#AIinHealthcare
#EHRIntegration
| rapidinnovation | |
1,897,849 | The Magic of Binary Search: Finding Needles in Haystacks | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T14:17:34 | https://dev.to/vidyarathna/the-magic-of-binary-search-finding-needles-in-haystacks-44bg | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Binary search is like finding a word in a dictionary by opening it in the middle, then narrowing down based on alphabetical order. It works on sorted data, halving the search space with each comparison until the target is found or deemed absent.
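As an illustrative sketch (not part of the one-byte explainer itself), the halving loop can be written in a few lines of JavaScript:

```javascript
// Binary search over a sorted array.
// Returns the index of target, or -1 if it is absent.
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2); // open the "dictionary" in the middle
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1; // target must be in the upper half
    else hi = mid - 1;                      // target must be in the lower half
  }
  return -1; // search space exhausted: target is absent
}
```

Each iteration discards half of the remaining range, which is exactly where the O(log n) bound comes from.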
## Additional Context
Binary search is efficient (O(log n)) for large datasets, unlike linear search (O(n)). It's used in algorithms for fast data retrieval and is fundamental in computer science and programming interviews. | vidyarathna |
1,897,847 | Understanding Cache Memory: The Librarian's Efficiency Secret | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-23T14:15:28 | https://dev.to/vidyarathna/understanding-cache-memory-the-librarians-efficiency-secret-1j6i | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Cache memory is like a librarian's desk: it stores frequently accessed information for quick retrieval. It sits between main memory (bookshelves) and the CPU (reader). Faster than RAM but smaller, it optimizes performance by reducing the need to fetch data from slower memory.
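As a rough software analogy (not how a hardware cache is actually built), the "librarian's desk" idea can be sketched as a tiny least-recently-used cache in JavaScript — eviction keeps the desk small while recently used items stay close at hand:

```javascript
// Minimal LRU cache: keeps at most `capacity` entries,
// evicting the least-recently-used one when full.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // Map preserves insertion order
  }
  get(key) {
    if (!this.map.has(key)) return undefined; // cache miss
    const value = this.map.get(key);
    this.map.delete(key);     // re-insert to move the entry to the
    this.map.set(key, value); // "most recently used" end
    return value;             // cache hit
  }
  put(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // evict the least-recently-used entry (the first key in the Map)
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

The re-insertion in `get` is what exploits temporal locality: items touched recently survive, rarely used ones are evicted.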
## Additional Context
Understanding cache hierarchy is crucial for optimizing system performance in CPUs. It minimizes latency and boosts efficiency by exploiting the principle of locality, where recently accessed data is likely to be accessed again soon. | vidyarathna |
1,873,366 | React Router for Single Page Applications | Introduction React Router is a powerful library for managing navigation and routing in... | 27,559 | 2024-06-23T14:14:00 | https://dev.to/suhaspalani/react-router-for-single-page-applications-de9 | webdev, react, routes, beginners | #### Introduction
React Router is a powerful library for managing navigation and routing in React applications. It allows developers to create a multi-page feel in a single-page application (SPA) by managing the browser's history and rendering components based on the URL. This week, we'll delve into the fundamentals of React Router, including setting it up, defining routes, and using navigation links.
#### Importance of React Router
React Router is essential for creating dynamic and user-friendly SPAs. It helps manage different views and user interactions within a single-page application, providing a seamless experience similar to traditional multi-page applications.
#### Setting Up React Router
**Installing React Router:**
- **Command to Install**:
```bash
npm install react-router-dom
```
**Setting Up the Router:**
- **Basic Setup**:
```javascript
import React from 'react';
import ReactDOM from 'react-dom';
import { BrowserRouter as Router, Route, Switch } from 'react-router-dom';
import App from './App';
import About from './About';
import Contact from './Contact';
ReactDOM.render(
<Router>
<Switch>
<Route exact path="/" component={App} />
<Route path="/about" component={About} />
<Route path="/contact" component={Contact} />
</Switch>
</Router>,
document.getElementById('root')
);
```
#### Defining Routes
**Basic Route Definition:**
- **Example**:
```javascript
import React from 'react';
import { Route, Switch } from 'react-router-dom';
import Home from './Home';
import About from './About';
import Contact from './Contact';
function App() {
return (
<div>
<Switch>
<Route exact path="/" component={Home} />
<Route path="/about" component={About} />
<Route path="/contact" component={Contact} />
</Switch>
</div>
);
}
export default App;
```
**Nested Routes:**
- **Example**:
```javascript
import React from 'react';
import { Route, Switch, Link } from 'react-router-dom';
import Profile from './Profile';
import Settings from './Settings';
function Dashboard() {
return (
<div>
<h2>Dashboard</h2>
<ul>
<li><Link to="/dashboard/profile">Profile</Link></li>
<li><Link to="/dashboard/settings">Settings</Link></li>
</ul>
<Switch>
<Route path="/dashboard/profile" component={Profile} />
<Route path="/dashboard/settings" component={Settings} />
</Switch>
</div>
);
}
export default Dashboard;
```
#### Navigation Links
**Using Link and NavLink:**
- **Link Component**:
```javascript
import React from 'react';
import { Link } from 'react-router-dom';
function Navigation() {
return (
<nav>
<ul>
<li><Link to="/">Home</Link></li>
<li><Link to="/about">About</Link></li>
<li><Link to="/contact">Contact</Link></li>
</ul>
</nav>
);
}
export default Navigation;
```
- **NavLink Component**:
```javascript
import React from 'react';
import { NavLink } from 'react-router-dom';
function Navigation() {
return (
<nav>
<ul>
<li><NavLink exact to="/" activeClassName="active">Home</NavLink></li>
<li><NavLink to="/about" activeClassName="active">About</NavLink></li>
<li><NavLink to="/contact" activeClassName="active">Contact</NavLink></li>
</ul>
</nav>
);
}
export default Navigation;
```
#### Programmatic Navigation
**Navigating Programmatically:**
- **Using history.push**:
```javascript
import React from 'react';
import { useHistory } from 'react-router-dom';
function Home() {
const history = useHistory();
const navigateToAbout = () => {
history.push('/about');
};
return (
<div>
<h1>Home Page</h1>
<button onClick={navigateToAbout}>Go to About Page</button>
</div>
);
}
export default Home;
```
#### Route Parameters
**Using Route Parameters:**
- **Example**:
```javascript
import React from 'react';
import { Route, Switch, useParams } from 'react-router-dom';
function User() {
let { id } = useParams();
return <h2>User ID: {id}</h2>;
}
function App() {
return (
<div>
<Switch>
<Route path="/user/:id" component={User} />
</Switch>
</div>
);
}
export default App;
```
#### Redirects and Not Found Pages
**Handling Redirects:**
- **Example**:
```javascript
import React from 'react';
import { Redirect } from 'react-router-dom';
function Home() {
return <Redirect to="/about" />;
}
export default Home;
```
**Creating a Not Found Page:**
- **Example**:
```javascript
import React from 'react';
import { Route, Switch } from 'react-router-dom';
import Home from './Home';
import About from './About';
import Contact from './Contact';
import NotFound from './NotFound';
function App() {
return (
<div>
<Switch>
<Route exact path="/" component={Home} />
<Route path="/about" component={About} />
<Route path="/contact" component={Contact} />
<Route component={NotFound} />
</Switch>
</div>
);
}
export default App;
```
#### Conclusion
React Router is a fundamental tool for building SPAs with React, providing a way to navigate and manage different views efficiently. Understanding React Router is crucial for any React developer looking to create dynamic and user-friendly applications.
#### Resources for Further Learning
- **Online Courses**: Platforms like Udemy, Pluralsight, and freeCodeCamp offer comprehensive courses on React Router.
- **Books**: "React Router Quick Start Guide" by Sasan Seydnejad.
- **Documentation and References**: The official [React Router documentation](https://reactrouter.com/web/guides/quick-start) is an excellent resource.
- **Communities**: Join developer communities on platforms like Stack Overflow, Reddit, and GitHub for support and networking.
| suhaspalani |
1,897,846 | My take on Modeling Large Amounts of HLA Molecules with Ruby | This blog post presents a Ruby script for generating 3D models of HLA molecules using Meta's ESMFold | 0 | 2024-06-23T14:13:23 | https://dev.to/mariomarroquim/my-take-on-modeling-hla-molecules-with-ruby-37l5 | hla, modelling, ruby, threads | ---
title: My take on Modeling Large Amounts of HLA Molecules with Ruby
published: true
description: This blog post presents a Ruby script for generating 3D models of HLA molecules using Meta's ESMFold
tags: hla, modelling, ruby, threads
---
Understanding the 3D structures of HLA (human leukocyte antigen) molecules is crucial. These molecules help the immune system recognize what's foreign and what's not. Here's why this matters:
- **Transplants**: Matching HLA types between donors and recipients is key to avoid rejection.
- **Disease Links**: Some HLA types are linked to autoimmune diseases, infections, and drug reactions.
- **Vaccines**: Knowing how HLA presents antigens aids in designing effective vaccines.
- **Cancer Treatment**: Identifying how HLA presents tumor antigens helps in personalized cancer therapies.
### Ruby Script for HLA Modeling
Based on my previous work at [pHLA3D](https://www.phla3d.com.br), I developed a **[Ruby script](https://github.com/mariomarroquim/hla-modeller)** to efficiently model large numbers of HLA molecules. Here's how it works:
1. **Data Prep**: It gathers and preps HLA sequence data for analysis.
2. **Parallel Processing**: It splits the workload across multiple threads, cutting down processing time.
3. **ESMFold Integration**: It uses Meta’s ESMFold online service, sending HLA sequences and getting back 3D models.
4. **Post-Processing**: I intend to clean up and validate the models for further use. **Can you help me with that?**
### Why This Rocks
- **Speed**: Parallel processing handles data quickly.
- **Scalability**: Using Meta's ESMFold means no need for a massive server setup.
- **Flexibility**: Written in Ruby, this script is easy to tweak and expand.
In short, this approach makes modeling HLA molecules faster and more accessible, helping advance research and medical treatments. **Please, contribute!** | mariomarroquim |
1,897,837 | Unified Control State Change Events - working with reactive form is never the same in Angular | Introduction In this blog post, I want to describe a new Angular 18 feature called unified... | 27,826 | 2024-06-23T14:09:34 | https://www.blueskyconnie.com/unified-control-state-change-events-in-angular/ | angular, tutorial, frontend | ##Introduction
In this blog post, I want to describe a new Angular 18 feature called unified control state change events that listen to events emitted from form groups and form controls. These events return an Observable that can pipe to different RxJS operators to achieve the expected results. Then, the Observable can resolve in an inline template by async pipe.
**Form Group's events**
- FormSubmittedEvent - It fires when a form submit occurs
- FormResetEvent - It fires when a form is reset
**Form Control's events**
- PristineChangeEvent - It fires when a form control changes from the pristine status to the dirty status
- TouchedChangeEvent - It fires when a form control changes from untouched to touched and vice versa.
- StatusChangeEvent - It fires when a form control's status is updated (valid, invalid, pending, and disabled).
- ValueChangeEvent - It fires when a form control updates its value.
I will demonstrate some examples of these events in a Stackblitz demo.
### Bootstrap Application
```typescript
// app.config.ts
export const appConfig: ApplicationConfig = {
providers: [
provideExperimentalZonelessChangeDetection()
]
};
```
```typescript
// main.ts
import { appConfig } from './app.config';
bootstrapApplication(App, appConfig);
```
Bootstrap the component and the application configuration to start the Angular application.
### Form Group Code
The form group has two fields, name and email, and a nested company form group. Moreover, it has a button to submit form data and another button to reset the form.
The CompanyAddressComponent consists of company name, address line 1, address line 2, and city fields.
```typescript
// company-address.component.ts
import { ChangeDetectionStrategy, Component, OnInit, inject } from "@angular/core";
import { FormGroup, FormGroupDirective, ReactiveFormsModule } from "@angular/forms";
@Component({
selector: 'app-company-address',
standalone: true,
imports: [ReactiveFormsModule],
template: `
<div [formGroup]="formGroup">
<div>
<label for="companyName">
<span>Company Name: </span>
<input id="companyName" name="companyName" formControlName="name">
</label>
</div>
<div>
<label for="address">
<span>Company Address Line 1: </span>
<input id="line1" name="line1" formControlName="line1">
</label>
</div>
<div>
<label for="line2">
<span>Company Address Line 2: </span>
<input id="line2" name="line2" formControlName="line2">
</label>
</div>
<div>
<label for="city">
<span>Company City: </span>
<input id="city" name="city" formControlName="city">
</label>
</div>
</div>
`,
styles: `
:host {
display: block;
}
`,
changeDetection: ChangeDetectionStrategy.OnPush,
})
export class CompanyAddressComponent implements OnInit {
formGroupDir = inject(FormGroupDirective);
formGroup!: FormGroup<any>;
ngOnInit(): void {
this.formGroup = this.formGroupDir.form.get('company') as FormGroup;
}
}
```
```typescript
// reactive-form.util.ts
export function makeRequiredControl(defaultValue: any) {
return new FormControl(defaultValue, {
nonNullable: true,
validators: [Validators.required],
updateOn: 'blur'
});
}
```
```typescript
// main.ts
<div class="container">
<h1>Angular Version: {{ version }}!</h1>
<h3>Form Unified Control State Change Events</h3>
<h4>Type Pikachu in the name field to trigger valueChanges</h4>
<form [formGroup]="formGroup" (reset)="resetMyForm($event)" (submit)="formSubmit.next()">
<div>
<label for="name">
<span [style.color]="isNamePristine$ | async"
[style.fontWeight]="isNameTouched$ | async"
>Name: </span>
<input id="name" name="name" formControlName="name">
</label>
</div>
<div>
<label for="email">
<span>Email: </span>
<input id="email" name="email" formControlName="email">
</label>
</div>
<app-company-address />
<div>
<button type="submit">Submit</button>
<button type="reset">Reset</button>
</div>
</form>
<div>
@if (fields$ | async; as fields) {
<p>
Number of completed fields: {{ fields.completed }},
Percentage: {{ fields.percentage }}
</p>
}
@if (isPikachu$ | async; as isPikachu) {
<p>Pikachu is my favorite Pokemon.</p>
}
@if(formReset$ | async; as formReset) {
<p>Form reset occurred at {{ formReset.timestamp }}. Form reset occurred {{ formReset.count }} times.</p>
}
@if(formSubmit$ | async; as formSubmit) {
<p>Form submit occurred at {{ formSubmit.timestamp }}.</p>
<pre>Form Values: {{ formSubmit.values | json }}</pre>
}
</div>
</div>`,
formGroup = new FormGroup({
name: makeRequiredControl('Test me'),
email: new FormControl('', {
nonNullable: true,
validators: [Validators.email, Validators.required],
updateOn: 'blur',
}),
company: new FormGroup({
name: makeRequiredControl(''),
line1: makeRequiredControl(''),
line2: makeRequiredControl(''),
city: makeRequiredControl(''),
})
});
```
### Example 1: Track the last submission time and form values using FormSubmittedEvent
I would like to know the last time that the form was submitted. When the form is valid, the form values are also displayed.
```typescript
// main.ts
<form [formGroup]="formGroup" (submit)="formSubmit.next()">...</form>
formSubmit = new Subject<void>();
formSubmit$ = this.formGroup.events.pipe(
filter((e) => e instanceof FormSubmittedEvent),
map(({ source }) => ({
timestamp: new Date().toISOString(),
values: source.valid ? source.value: {}
})),
);
```
The `submit` emitter emits a value to the `formSubmit` subject. `formSubmit$` Observable filters the events to obtain an instance of `FormSubmittedEvent`. When the form has valid values, the values are returned. Otherwise, an empty Object is returned. The Observable finally emits the time of submission and valid form values.
```html
@if(formSubmit$ | async; as formSubmit) {
<p>Form submit occurred at {{ formSubmit.timestamp }}.</p>
<pre>Form Values: {{ formSubmit.values | json }}</pre>
}
```
Async pipe resolves formSubmit$ in the template to display the timestamp and JSON object.
### Example 2: Track number of times a form is reset using FormResetEvent
```typescript
// main.ts
<form [formGroup]="formGroup" (reset)="resetMyForm($event)" >...</form>
formReset$ = this.formGroup.events.pipe(
filter((e) => e instanceof FormResetEvent),
map(() => new Date().toISOString()),
scan((acc, timestamp) => ({
timestamp,
count: acc.count + 1,
}), { timestamp: '', count: 0 }),
);
resetMyForm(e: Event) {
e.preventDefault();
this.formGroup.reset();
}
```
The `reset` emitter invokes the `resetMyForm` method to reset the form. The `formReset$` Observable filters the events to obtain an instance of `FormResetEvent`. The Observable uses the `map` operator to produce the reset timestamp and the `scan` operator to count the number of occurrences. The Observable finally emits the time of reset and the number of resets.
```html
@if(formReset$ | async; as formReset) {
<p>Form reset occurred at {{ formReset.timestamp }}. Form reset occurred {{ formReset.count }} times.</p>
}
```
In the template, async pipe resolves formReset$ to display the timestamp and the count.
### Example 3: Update the label color when the name field is dirty
I want to change the label of the name field to blue when it is dirty.
```typescript
// main.ts
formControls = this.formGroup.controls;
isNamePristine$ = this.formControls.name.events.pipe(
filter((e) => e instanceof PristineChangeEvent),
map((e) => e as PristineChangeEvent),
map((e) => e.pristine),
map((pristine) => pristine ? 'black' : 'blue'),
)
```
`isNamePristine$` Observable filters the events of the name control to obtain an instance of `PristineChangeEvent`. The Observable uses the first `map` operator to cast the `ControlEvent` to `PristineChangeEvent`. When the field is not dirty, the label color is black. Otherwise, the label color is blue.
```html
// main.ts
<span [style.color]="isNamePristine$ | async">Name: </span>
```
In the template, async pipe resolves `isNamePristine$` to update the color of the span element.
### Example 4: Update the font weight when the name field is touched
```typescript
// main.ts
formControls = this.formGroup.controls;
isNameTouched$ = this.formControls.name.events.pipe(
filter((e) => e instanceof TouchedChangeEvent),
map((e) => e as TouchedChangeEvent),
map((e) => e.touched),
map((touched) => touched ? 'bold' : 'normal'),
)
```
`isNameTouched$` Observable filters the events of the name control to obtain an instance of `TouchedChangeEvent`. The Observable uses the first `map` operator to cast the `ControlEvent` to `TouchedChangeEvent`. When the field is touched, the label is bold. Otherwise, the label is normal.
```html
// main.ts
<span [style.fontWeight]="isNameTouched$ | async"></span>
```
In the template, async pipe resolves `isNameTouched$` to update the font weight of the span element.
### Example 5: Track the progress of a nested form using StatusChangeEvent
```typescript
// control-status.operator.ts
import { ControlEvent, StatusChangeEvent } from "@angular/forms"
import { Observable, filter, map, shareReplay, startWith } from "rxjs"
export function controlStatus(initial = 0) {
return (source: Observable<ControlEvent<unknown>>) => {
return source.pipe(
filter((e) => e instanceof StatusChangeEvent),
map((e) => e as StatusChangeEvent),
map((e) => e.status === 'VALID' ? 1 : 0),
startWith(initial),
shareReplay(1)
)
}
}
```
This custom RxJS operator filters the form control events to obtain an instance of StatusChangeEvent. When the form control is valid, the operator emits 1, otherwise it returns 0.
```typescript
// main.ts
numFields = countTotalFields(this.formGroup);
formControls = this.formGroup.controls;
companyControls = this.formControls.company.controls;
isEmailValid$ = this.formControls.email.events.pipe(controlStatus());
isNameValid$ = this.formControls.name.events.pipe(controlStatus(1));
isCompanyNameValid$ = this.companyControls.name.events.pipe(controlStatus());
isLine1Valid$ = this.companyControls.line1.events.pipe(controlStatus());
isLine2Valid$ = this.companyControls.line2.events.pipe(controlStatus());
isCityValid$ = this.companyControls.city.events.pipe(controlStatus());
fields$ = combineLatest([
this.isEmailValid$,
this.isNameValid$,
this.isCompanyNameValid$,
this.isLine1Valid$,
this.isLine2Valid$,
this.isCityValid$,
])
.pipe(
map((validArray) => {
const completed = validArray.reduce((acc, item) => acc + item);
return {
completed,
percentage: ((completed / validArray.length) * 100).toFixed(2)
}
}),
);
```
This is not the most efficient method, but I construct an Observable for each form control in the nested form. Then, I pass these Observables to the `combineLatest` operator to calculate the number of completed fields. Moreover, `validArray` holds all the control statuses; therefore, `validArray.length` equals the total number of form controls. I can use this information to derive the percentage of completion and return the result in a JSON object.
```html
// main.ts
@if (fields$ | async; as fields) {
<p>
Number of completed fields: {{ fields.completed }},
Percentage: {{ fields.percentage }}
</p>
}
```
In the template, async pipe resolves `fields$` to display the number of completed fields and the percent of completion.
The following Stackblitz repo displays the final results:
{%embed https://stackblitz.com/edit/stackblitz-starters-h1eyk9?file=src%2Fapp.routes.ts %}
This is the end of the blog post describing the unified control state change events in reactive forms in Angular 18. I hope you like the content and continue to follow my learning experience in Angular, NestJS, Generative AI, and other technologies.
## Resources:
- Stackblitz Demo: https://stackblitz.com/edit/stackblitz-starters-tzv9hq?file=src%2Fmain.ts
- Github Repo: https://github.com/railsstudent/ng-unified-control-change-state-events-demo
- Github Page: https://railsstudent.github.io/ng-unified-control-change-state-events-demo/
| railsstudent |
1,897,845 | Ways to inspect elements on websites that Disabled inspect DevTool | Even if a website tries to disable DevTools, we can still access them using the browser's menu. Here... | 0 | 2024-06-23T14:08:08 | https://dev.to/rifat87/ways-to-inspect-elements-on-websites-that-disabled-inspect-devtool-c53 | devtool, disabled, inspect, webdev | Even if a website tries to disable DevTools, we can still access them using the browser's menu. Here are the ways to do it:
**Chrome:**
1. Click on the three vertical dots (⋮) in the top right corner of the browser window.
2. Select More tools > Developer tools.
3. Alternatively, you can press Alt + D and then press Enter to focus on the address bar, and then press F12.
---
**Firefox:**
1. Click on the three horizontal lines (≡) in the top right corner of the browser window.
2. Select Web Developer > Toggle Tools.
3. Alternatively, you can press Ctrl + Shift + E (Windows/Linux) or Cmd + Opt + E (Mac).
---
**Edge:**
1. Click on the three horizontal dots (…) in the top right corner of the browser window.
2. Select More tools > Developer tools.
3. Alternatively, you can press F12.
---
**Safari:**
1. Click on Safari in the top menu bar.
2. Select Preferences > Advanced.
3. Check the box next to Show Develop menu in menu bar.
4. Then, click on Develop in the top menu bar and select Show Web Inspector.
5. Alternatively, you can press Cmd + Opt + I.
By using these methods, you can bypass any attempts to disable DevTools and still access them.
---
**Brave Browser :**
1. Menu: Click on the three horizontal lines (≡) in the top right corner of the browser window.
2. Select More tools > Developer tools.
Keyboard shortcut: Press Ctrl + Shift + I (Windows/Linux) or Cmd + Opt + I (Mac).
3. Address bar: Press Alt + D and then press Enter to focus on the address bar, and then press F12.
These methods should allow you to access DevTools in Brave Browser, even if a website tries to disable them.
Note: Brave Browser is based on Chromium, so the keyboard shortcuts and menu options are similar to those in Google Chrome. | rifat87 |
1,897,844 | Question regarding GeoLocator and GeoCoding Flutter | i'm a begineer,i want to know will GeoLocator and GeoCoding still retrive my location data after... | 0 | 2024-06-23T14:03:39 | https://dev.to/store_mer_75f5c2d46ef977a/flutter-geolocator-and-geocoding-2l05 | help | i'm a begineer,i want to know will GeoLocator and GeoCoding still retrive my location data after packaging my app, because in development as soon as i disconnect my phone from AVD the GeoLocator doest print data on screen | store_mer_75f5c2d46ef977a |
1,897,843 | WebRTC : Create Your First WebRTC connection! | What is WebRTC? WebRTC is an opensource project that provides webApplications and sites to establish... | 0 | 2024-06-23T14:02:05 | https://dev.to/nirvanjha2004/webrtc-create-your-first-webrtc-connection-1954 | webdev, javascript, beginners, tutorial | What is WebRTC?
WebRTC is an open-source project that lets web applications and sites establish a peer-to-peer connection and communicate in real time. It allows video, audio, and data to be sent directly between browsers.
P2P
WebRTC is a peer-to-peer protocol. This means that you send your media directly to the other person without the need for a central server.

> NOTE:
> You do need a central server for signaling and sometimes for relaying media as well (TURN). We'll be discussing this later.
Latency: the time it takes for data to travel from the source to the destination.
You use WebRTC for applications that require low latency.
Examples include
1. Zoom/Google meet (Multi party call)
2. Omegle, teaching (1:1 call)
3. 30FPS games (WebRTC can also send data)
What is a Signaling Server?
To establish a connection, both peers need to exchange their addresses so that each knows whom to connect to. A signaling server is used for that. The signaling server itself does not handle the actual media or data transfer but facilitates the initial communication setup.
It is usually a WebSocket server but can be an HTTP server as well.

What is a STUN server?
The primary purpose of a STUN server is to allow a client (or browser) to determine its public-facing (which the world sees) IP address and the port that the NAT (explained below) has assigned to a communication session. This information is then used in the signaling process to facilitate direct communication between browsers.
So what is NAT (Network Address Translation)?

Simple NAT Example
1.Your laptop (192.168.1.2) wants to visit a website.
2.The router changes the address to its public IP (e.g., 203.0.113.1) and sends the request.
3.The website sends data back to 203.0.113.1.
4.The router receives the data, knows it was for 192.168.1.2, and sends it to your laptop.
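The router's bookkeeping in the steps above is just a translation table. Here is a toy model of that table in JavaScript — it reuses the addresses from the example and ignores real-world details like port rewriting rules, protocols, and mapping timeouts:

```javascript
// Toy NAT: maps (privateIp, privatePort) to a public port on the router,
// so replies sent to the public address can be routed back inside.
class Nat {
  constructor(publicIp) {
    this.publicIp = publicIp;
    this.nextPort = 40000;
    this.outToIn = new Map(); // publicPort -> { ip, port } of the internal device
  }
  // Outbound packet: rewrite the source to the router's public address.
  translateOut(privateIp, privatePort) {
    const publicPort = this.nextPort++;
    this.outToIn.set(publicPort, { ip: privateIp, port: privatePort });
    return { ip: this.publicIp, port: publicPort };
  }
  // Inbound reply: look up which internal device it belongs to.
  translateIn(publicPort) {
    return this.outToIn.get(publicPort); // undefined if no mapping exists
  }
}
```

That last line is why NAT makes peer-to-peer hard: an inbound packet with no existing mapping has nowhere to go — which is exactly the problem STUN and TURN address.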
Coming back to the STUN server, this is how it works:

ICE Candidates
ICE (Interactive Connectivity Establishment) candidates are network endpoints that WebRTC peers use to establish direct peer-to-peer connections. Each ICE candidate consists of:
1. IP Address: This is the address that identifies a device on a network. It can be either a local (private) IP address or a public IP address.
2. Port Number: Ports are numeric identifiers used by networking protocols to distinguish different types of traffic on the same IP address. They help direct data to the correct application or service running on a device.
In simple terms, ICE candidates are the candidate IP address and port pairs through which two browsers (peers) can establish a connection.
If two friends on the same hostel Wi-Fi are trying to connect, they can connect via their private (local network) ICE candidates.
If two people from different countries are trying to connect, they would connect via their public IP candidates.
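To see which kind of candidate you got, you can inspect the candidate string itself. Here is a small, framework-free sketch (the function name and sample string are my own, not part of any WebRTC API) that pulls the type field out of an ICE candidate line, which is handy for logging whether a candidate is `host` (local interface), `srflx` (discovered via STUN), or `relay` (via TURN):

```javascript
// An ICE candidate line looks roughly like:
// "candidate:842163049 1 udp 1677729535 203.0.113.1 33205 typ srflx raddr ..."
// The token after "typ" tells you how the candidate was discovered.
function candidateType(candidateString) {
  const match = candidateString.match(/ typ (host|srflx|prflx|relay)/);
  return match ? match[1] : 'unknown';
}

const sample =
  'candidate:842163049 1 udp 1677729535 203.0.113.1 33205 typ srflx raddr 192.168.1.2 rport 54321';
console.log(candidateType(sample)); // -> srflx
```

In the browser you would feed this the `candidate` property of the objects you receive in the `onicecandidate` callback.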
## What is a TURN server?
A lot of the time, your network doesn't allow media to come in directly from the other browser. This depends on how restrictive your network is.
So when direct peer-to-peer connections fail due to NAT traversal issues (or restrictive routers and firewalls), a TURN server acts as the fallback. It relays media and data between the peers, ensuring that communication can still occur even when a direct connection is not possible.

**Offer**
The process of the first browser (the one initiating the connection) sending its session description, including its ICE candidates, to the other side.
**Answer**
The other side returning its own session description and ICE candidates is called the answer.
**SDP - Session Description Protocol**
A single blob of text that contains your ICE candidates, what media you want to send, and which codecs/protocols were used to encode the media. This is what is sent in the offer and received in the answer.
An SDP looks like this:
(No need to understand every line of it... senior dev things :))
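For illustration only, here is a tiny, made-up SDP fragment (heavily shortened; a real SDP blob is much longer):

```
v=0
o=- 4611731400430051336 2 IN IP4 127.0.0.1
s=-
t=0 0
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=rtpmap:111 opus/48000/2
a=candidate:842163049 1 udp 1677729535 203.0.113.1 33205 typ srflx
```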

**Summary:**
1. You need a signaling server and a STUN server to initiate the WebRTC connection between the parties. You can shut these down once the connection is made.
2. You should include a TURN server in case any of the users is on a restrictive network, so that you also get back a TURN (relay) ICE candidate.

Now let's dive into some coding:
**RTCPeerConnection (pc, peer connection)**
This is a class the browser provides that gives you access to the SDP, lets you create offers/answers, and lets you send media.
It hides all the complexity of WebRTC from the developer.
To know more:-
https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection
**Basic steps** (don't worry if this feels like jargon right now!)
Connecting the two sides
The steps to create a WebRTC connection between the two sides are:
1. Browser 1 creates an RTCPeerConnection
2. Browser 1 creates an offer
3. Browser 1 sets the local description to the offer
4. Browser 1 sends the offer to the other side through the signaling server
5. Browser 2 receives the offer from the signaling server
6. Browser 2 sets the remote description to the offer
7. Browser 2 creates an answer
8. Browser 2 sets the local description to be the answer
9. Browser 2 sends the answer to the other side through the signaling server
10. Browser 1 receives the answer and sets the remote description
This just establishes the p2p connection between the two parties.
To actually send media, we have to:
1. Ask for camera /mic permissions
2. Get the audio and video streams
3. Call addTrack on the pc
4. This triggers an ontrack callback on the other side
Implementation
1. We will write the signaling server in Node.js. It will be a WebSocket server that supports these message types:
 1. sender / receiver (to identify each side)
 2. createOffer
 3. createAnswer
 4. iceCandidate
2. React + an RTCPeerConnection object on the frontend.
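The signaling traffic is just small JSON payloads. Assuming the field names used by the code later in this post, the messages look like this (the `sdp` and `candidate` values are shortened placeholders):

```json
{ "type": "sender" }
{ "type": "createOffer", "sdp": { "type": "offer", "sdp": "v=0..." } }
{ "type": "createAnswer", "sdp": { "type": "answer", "sdp": "v=0..." } }
{ "type": "iceCandidate", "candidate": { "candidate": "candidate:... typ host", "sdpMid": "0" } }
```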
If you want to get an idea of how it works: https://jsfiddle.net/rainzhao/3L9sfsvf/
## Backend
1. Create an empty TS project, add ws to it
```bash
npm init -y
npx tsc --init
npm install ws @types/ws
```
2. Change rootDir and outDir in tsconfig
```json
"rootDir": "./src",
"outDir": "./dist",
```
3. Create a simple websocket server
```ts
import { WebSocket, WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

let senderSocket: null | WebSocket = null;
let receiverSocket: null | WebSocket = null;

wss.on('connection', function connection(ws) {
  ws.on('error', console.error);

  ws.on('message', function message(data: any) {
    const message = JSON.parse(data);
    // Message handlers are added in step 5.
  });

  ws.send('something');
});
```
4. Try running the server
```bash
tsc -b
node dist/index.js
```
5. Add message handlers
```ts
import { WebSocket, WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

let senderSocket: null | WebSocket = null;
let receiverSocket: null | WebSocket = null;

wss.on('connection', function connection(ws) {
  ws.on('error', console.error);

  ws.on('message', function message(data: any) {
    const message = JSON.parse(data);
    if (message.type === 'sender') {
      senderSocket = ws;
    } else if (message.type === 'receiver') {
      receiverSocket = ws;
    } else if (message.type === 'createOffer') {
      if (ws !== senderSocket) {
        return;
      }
      receiverSocket?.send(JSON.stringify({ type: 'createOffer', sdp: message.sdp }));
    } else if (message.type === 'createAnswer') {
      if (ws !== receiverSocket) {
        return;
      }
      senderSocket?.send(JSON.stringify({ type: 'createAnswer', sdp: message.sdp }));
    } else if (message.type === 'iceCandidate') {
      if (ws === senderSocket) {
        receiverSocket?.send(JSON.stringify({ type: 'iceCandidate', candidate: message.candidate }));
      } else if (ws === receiverSocket) {
        senderSocket?.send(JSON.stringify({ type: 'iceCandidate', candidate: message.candidate }));
      }
    }
  });
});
```
That is all you need for simple one-way communication between two tabs.
## Frontend
1. Create a frontend repo
```bash
npm create vite@latest
```
2. Add two routes, one for the sender and one for the receiver (you'll need to `npm install react-router-dom` first)
```tsx
import './App.css'
import { BrowserRouter, Route, Routes } from 'react-router-dom'
import { Sender } from './components/Sender'
import { Receiver } from './components/Receiver'

function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/sender" element={<Sender />} />
        <Route path="/receiver" element={<Receiver />} />
      </Routes>
    </BrowserRouter>
  )
}

export default App
```
3. Remove strict mode in main.tsx to avoid duplicate WebRTC connections during development (optional).
4. Go to the src folder and create a components folder, and inside it create two files: Sender.tsx and Receiver.tsx.
5. Create the component for the sender
```tsx
import { useEffect, useState } from "react"

export const Sender = () => {
  const [socket, setSocket] = useState<WebSocket | null>(null);

  useEffect(() => {
    const socket = new WebSocket('ws://localhost:8080');
    setSocket(socket);
    socket.onopen = () => {
      socket.send(JSON.stringify({
        type: 'sender'
      }));
    }
  }, []);

  const initiateConn = async () => {
    if (!socket) {
      alert("Socket not found");
      return;
    }

    // Create the peer connection first so the handlers below can use it.
    const pc = new RTCPeerConnection();

    socket.onmessage = async (event) => {
      const message = JSON.parse(event.data);
      if (message.type === 'createAnswer') {
        await pc.setRemoteDescription(message.sdp);
      } else if (message.type === 'iceCandidate') {
        pc.addIceCandidate(message.candidate);
      }
    }

    pc.onicecandidate = (event) => {
      if (event.candidate) {
        socket.send(JSON.stringify({
          type: 'iceCandidate',
          candidate: event.candidate
        }));
      }
    }

    pc.onnegotiationneeded = async () => {
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);
      socket.send(JSON.stringify({
        type: 'createOffer',
        sdp: pc.localDescription
      }));
    }

    getCameraStreamAndSend(pc);
  }

  const getCameraStreamAndSend = (pc: RTCPeerConnection) => {
    navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
      const video = document.createElement('video');
      video.srcObject = stream;
      video.play();
      // Appending straight to the body is a shortcut; in a real app,
      // render the video through a component instead.
      document.body.appendChild(video);
      stream.getTracks().forEach((track) => {
        pc.addTrack(track);
      });
    });
  }

  return <div>
    Sender
    <button onClick={initiateConn}> Send data </button>
  </div>
}
```
6. Create the component for the receiver
```tsx
import { useEffect } from "react"

export const Receiver = () => {
  useEffect(() => {
    const socket = new WebSocket('ws://localhost:8080');
    socket.onopen = () => {
      socket.send(JSON.stringify({
        type: 'receiver'
      }));
    }
    startReceiving(socket);
  }, []);

  function startReceiving(socket: WebSocket) {
    const video = document.createElement('video');
    document.body.appendChild(video);

    const pc = new RTCPeerConnection();
    pc.ontrack = (event) => {
      video.srcObject = new MediaStream([event.track]);
      video.play();
    }

    socket.onmessage = (event) => {
      const message = JSON.parse(event.data);
      if (message.type === 'createOffer') {
        pc.setRemoteDescription(message.sdp).then(() => {
          pc.createAnswer().then((answer) => {
            pc.setLocalDescription(answer);
            socket.send(JSON.stringify({
              type: 'createAnswer',
              sdp: answer
            }));
          });
        });
      } else if (message.type === 'iceCandidate') {
        pc.addIceCandidate(message.candidate);
      }
    }
  }

  return <div></div>
}
```
And you are good to go!
Do like and share this post if you found it useful.
Share your views below, and comment if you have any doubts.
Thanks :) | nirvanjha2004 |
1,897,842 | Are the products of Công ty TNHH Dược Phẩm Bình Đông (Binh Dong Pharmaceutical Co., Ltd.) any good? | 1. History and development. Duoc Binh Dong (Bidophar) was founded as the inheritor... | 0 | 2024-06-23T13:57:34 | https://dev.to/duocbinhdongvn/cac-san-pham-cua-cong-ty-tnhh-duoc-pham-binh-dong-co-tot-khong-3oo8 | ## 1. History and Development
Duoc Binh Dong (Bidophar) was established as the inheritor of the essence of traditional Vietnamese medicine; its predecessor was the Binh Dong Traditional Medicine Production Facility.
More than 70 years of establishment and growth
More than 2,000 distributors across all three regions of Vietnam
More than 10 signature products
From 1950 to the present, Duoc Binh Dong has continuously researched ways to combine traditional formulas with modern technology in its production process, creating products suited to modern consumers while preserving the identity of traditional Vietnamese medicine.
Having existed for nearly a century with the mission of using natural herbal remedies to care for people's health, Duoc Binh Dong keeps improving every day to better suit its consumers' constitutions. We know that life is ever more hurried and busy, and everyone wants treatment to go faster. But the most important thing in herbal, Eastern-medicine treatment is the absorption time the body needs to build up its resistance and thereby treat the root of the illness. Duoc Binh Dong therefore always hopes you will be truly PATIENT during treatment. Let our bodies be cared for as completely as possible.
## 2. Information about Duoc Binh Dong's Chief Executive Officers
Below is a list of Duoc Binh Dong's Chief Executive Officers (CEOs) over the years:
Mr. Nguyễn Văn Thơm
Tenure: 1925 - 1950
Achievements: Founded Duoc Binh Dong and laid the foundation for the company's development.
Mr. Nguyễn Văn Tho
Tenure: 1950 - 1975
Achievements: Led Duoc Binh Dong through many difficult periods and affirmed its position in the Vietnamese pharmaceutical market.
Mr. Nguyễn Thành Hiếu
Tenure: 1985 - 2015
Achievements: Drove Duoc Binh Dong's strong growth, making it one of Vietnam's leading pharmaceutical enterprises.
Mr. Nguyễn Thành Danh
Tenure: 2015 - present
Achievements: Continues to develop Duoc Binh Dong sustainably and take it to the international stage.
## 3. Vision and Mission
Vision
Affirm the position of Binh Dong products and Vietnamese herbal medicine on the international market.
Mission
Combine modern technology with the essence of natural herbs to create advanced, effective products that serve public health.
## 4. Duoc Binh Dong's Core Values
Duoc Binh Dong takes the mission "Good herbs, Good health, A better world" as the guiding principle for all of its activities. The company's core values include:
Customers
Commit to customers on product quality, service, and added value.
Always put customers' interests first and give customers the best possible experience.
Actively listen to and absorb customer feedback to continuously improve product and service quality.
Employees
Build a professional, friendly, and open working environment that allows employees to develop fully.
Value human resources and invest in training and developing them.
Apply fair compensation policies that secure employees' material and spiritual well-being.
Partners
Aim for win-win cooperation with suppliers and distributors.
Build sustainable, trustworthy relationships with partners.
Share common goals and benefits together.
Society
Demonstrate social responsibility through activities caring for the environment, children, and the elderly.
Participate actively in charity programs and community activities.
Contribute to building a civilized, sustainably developing society.
Business
Business ethics is the guiding principle for all of the company's business activities.
Do business transparently, fairly, and in compliance with the law.
Stay away from all dishonest, fraudulent behavior and unfair competition.
With these core values, Duoc Binh Dong is committed to bringing customers high-quality products and the best service, and to contributing to the sustainable development of the community.
## 5. Duoc Binh Dong's Modern Production Process
Duoc Binh Dong always invests in a modern production process to ensure the highest product quality for customers. The process consists of 5 main steps:
Step 1: Ensure the quality of medicinal herbs
Herbs are selected from reputable suppliers and meet the GACP (Good Agricultural and Collection Practices) standard.
Herbs are carefully checked for moisture, active-ingredient content, microbiological indicators, and heavy metals before entering production.
Step 2: A closed, one-way preparation process using modern technology
Production takes place in a closed environment that ensures food hygiene and safety.
Herbs are prepared with modern technologies such as extraction, concentration, and freeze-drying, which preserve the active ingredients and optimize the product's effectiveness.
Step 3: Ensure hygiene in a production environment that meets the standards
The factory is designed to the GMP (Good Manufacturing Practices) standard, ensuring food hygiene and safety.
Production staff are thoroughly trained and strictly follow hygiene procedures during production.
Step 4: Primary packaging sterilized with heat technology
Primary packaging (which is in direct contact with the product) is heat-sterilized before use.
Sterilization keeps the product safely preserved and free from bacterial contamination.
Step 5: Automated finished-product packaging
Finished products are packaged automatically, ensuring hygiene and high precision.
Products are packed in high-quality packaging for the best preservation.
With a modern, tightly controlled production process, Duoc Binh Dong always brings customers high-quality, safe, and effective products.
## 6. Technology
Duoc Binh Dong's products are currently manufactured at 2 large factories that meet the Ministry of Health's GMP standard.
### 7. The Value of Duoc Binh Dong's Products
Duoc Binh Dong always puts product value first, bringing customers high-quality, safe, and effective products. That value shows in the following points:
Reputation and quality
Each Duoc Binh Dong product is researched and developed on the foundation of traditional Vietnamese medicine combined with modern technology.
Products are manufactured on modern lines that meet the Ministry of Health's GMP standard.
Duoc Binh Dong strictly complies with product-quality regulations, ensuring products are safe for users.
Effectiveness
Duoc Binh Dong's products are clinically verified and highly effective in treating and preventing various conditions.
They are recommended by many doctors and medical experts and have been trusted by a large number of consumers for many years.
Commitment
Duoc Binh Dong commits that 100% of its products meet the Ministry of Health's GMP standard.
The company constantly improves its technical processes to increase productivity and lower product prices.
Duoc Binh Dong always listens to customer feedback to continuously improve product and service quality.
Awards
Thanks to relentless efforts to improve product quality, Duoc Binh Dong has been honored with many prestigious awards in traditional medicine, including:
The "Product for Community Health" award.
The "Trusted Brand" award.
The "High-Quality Product" award.
Duoc Binh Dong always makes consumers' health its top goal. With high-quality, safe, and effective products, Duoc Binh Dong is committed to earning customers' complete satisfaction and trust.
8. Products from Duoc Binh Dong
Duoc Binh Dong currently offers many high-quality herbal products such as Thiên Môn Bổ Phổi, Long Đởm Giải Độc Gan, Song Phụng Điều Kinh, and many others.
Thiên Môn Bổ Phổi is a natural herbal product that strengthens respiratory function and supports the treatment of lung conditions, especially coughs, bronchitis, and asthma.
Long Đởm Giải Độc Gan helps detoxify the liver, reduce symptoms such as abdominal pain, bloating, indigestion, and constipation, and improve liver function.
Song Phụng Điều Kinh is a product for women that helps balance hormones, relieve uncomfortable perimenopausal and menopausal symptoms such as hot flashes, insomnia, and headaches, and maintain intimate health.
All Duoc Binh Dong products are manufactured with advanced technology and strict standards to ensure quality and safety. They are prepared from natural herbs, contain no toxic chemicals, and cause no side effects.
9. Connect with Duoc Binh Dong
Address: 43/9 Mễ Cốc, Phường 15, Quận 8, Thành phố Hồ Chí Minh
Showroom: 22 Đường số 10, Phường 11, Quận 6, Thành phố Hồ Chí Minh
Hotline: 028.39.808.808
Suppliers: 028.66.800.300
Sales department: 028.66.800.100 - 028.66.800.200
Email: info@binhdong.vn | duocbinhdongvn | 
1,897,841 | Earn Money Online with AI and StudyPool: A Step-by-Step Guide | Today, in the digital world, making money online has never been easier, especially with the help of... | 0 | 2024-06-23T13:56:02 | https://dev.to/hassancoder/earn-money-online-with-ai-and-studypool-a-step-by-step-guide-bm9 | earnmoneywithai, studypoolfreeearnings, onlinemoneymakingtips, aicontentgeneration | Today, in the digital world, making money online has never been easier, especially with the help of AI and platforms like StudyPool.com. If you’re a student or someone who loves creating content, this guide will show you how to earn money for free with the help of AI and StudyPool. Follow these steps to start your journey to online earnings, with the potential to earn significant income as your content gains popularity.
Table of Contents
Step 1: Sign Up on StudyPool.com
Step 2: Choose a Topic
Step 3: Research Articles
Step 4: Generate Content with AI
Step 5: Humanize Your AI-Generated Content
Step 6: Review and Edit
Step 7: Upload Your Document to StudyPool
Step 8: Earn Money
Step 9: Withdraw Your Earnings
Tips for Success
Success Story
View Full Article on [TheEaglesTech.com](https://theeaglestech.com/article/earn-money-with-ai-and-studypool-2024/) | hassancoder |
1,897,756 | ChatGPT - Prompts for Project Management and Software Development Methodologies | Discover the various ChatGPT Prompts for Project Management and Software Development Methodologies | 0 | 2024-06-23T13:54:36 | https://dev.to/techiesdiary/chatgpt-prompts-for-project-management-and-software-development-methodologies-1j2f | chatgpt, promptengineering, ai, softwaredevelopment | ---
published: true
title: 'ChatGPT - Prompts for Project Management and Software Development Methodologies'
cover_image: 'https://raw.githubusercontent.com/sandeepkumar17/td-dev.to/master/assets/blog-cover/chat-gpt-prompts.jpg'
description: 'Discover the various ChatGPT Prompts for Project Management and Software Development Methodologies'
tags: chatgpt, promptengineering, ai, softwaredevelopment
series:
canonical_url:
---
## Project Management:
* Project management refers to the process of planning, organizing, and controlling the resources and activities required to achieve specific project goals and objectives.
* It involves the application of knowledge, skills, tools, and techniques to meet project requirements and deliverables within defined constraints such as time, budget, and scope.
* Project management encompasses various phases, including initiation, planning, execution, monitoring, and closure.
* It involves tasks like defining project goals, creating a project plan, assigning tasks to team members, monitoring progress, managing risks, and ensuring successful project completion.
## ChatGPT Prompts for Project Management:
| | Prompt |
| --- | --- |
| 1 | Develop a detailed project scope statement that clearly defines the project's boundaries, objectives, and deliverables. |
| 2 | Develop a project schedule using project management software or tools, considering task dependencies, resource availability, and project constraints. |
| 3 | Create a project timeline with clear milestones and deadlines. |
| 4 | Document and archive project deliverables, reports, and key project documentation. |
| 5 | Regularly review and update project documentation, including project plans, schedules, risk registers, and communication logs. |
| 6 | What role does documentation play in your software development process, and how do you ensure its accuracy and relevance? |
| 7 | Mitigate risks by identifying potential obstacles and developing contingency plans. |
| 8 | Develop a risk management plan to identify, assess, and mitigate potential risks throughout the project lifecycle. |
| 9 | Implement a project tracking system to monitor progress, identify risks, and manage changes. |
| 10 | Implement a quality management plan to ensure that project deliverables meet the defined quality standards. |
| 11 | Suggest ways to improve project team communication and collaboration. |
| 12 | Establish effective communication channels and protocols for project stakeholders. |
| 13 | Assign roles and responsibilities to project team members and establish clear lines of communication and reporting. |
| 14 | Recommend tools to manage the team members working remotely. |
| 15 | How do you handle post-project evaluation and lessons learned within your software development team? |
| 16 | Continuously improve project management processes and methodologies based on lessons learned and industry best practices. |
| 17 | Implement a change management process to effectively handle project changes, including change requests, impact assessment, and approval procedures. |
| 18 | Conduct regular project status meetings to discuss progress, address issues, and ensure alignment among team members. |
| 19 | Conduct a post-project evaluation to assess project success, identify lessons learned, and document best practices for future projects. |
| 20 | List common project management pitfalls and suggest ways to avoid them. |
## Software Development Methodologies:
Software Development Methodologies are structured approaches used by software development teams to plan, design, develop, test, and deliver software products.
Some commonly used software development methodologies include:
* **Waterfall:** This traditional sequential approach follows a linear flow, where each phase of the development process (requirements gathering, design, development, testing, deployment) is completed before moving on to the next.
* **Agile:** Agile methodologies, such as Scrum and Kanban, focus on iterative and incremental development. They emphasize collaboration, flexibility, and adaptability to changing requirements.
* **Lean:** Lean software development aims to eliminate waste and maximize value by delivering only what is necessary. It emphasizes continuous improvement, customer feedback, and reducing non-value-adding activities.
* **Rapid Application Development (RAD):** RAD methodologies prioritize rapid prototyping and quick iterations to accelerate the development process. They emphasize user involvement, iterative feedback, and the use of pre-built components to speed up development.
* **Spiral:** The Spiral model combines waterfall and iterative development elements. It involves multiple iterations and emphasizes risk management throughout the development process.
* **DevOps:** DevOps combines development (Dev) and operations (Ops) to enhance collaboration and streamline the software development and deployment process. It emphasizes automation, continuous integration, continuous delivery, and close collaboration between development and operations teams.
## ChatGPT Prompts for Software Development Methodologies:
| | Prompt |
| --- | --- |
| 1 | List the popular software development methodologies. |
| 2 | Suggest some methodologies for a `small development team`. |
| 3 | Describe the `Scrum framework` and its role in Agile software development. |
| 4 | Explain the concept of `Minimum Viable Product (MVP)` and its role in Agile software development. |
| 5 | Compare and contrast the `Waterfall` and `Agile software development` methodologies. |
| 6 | Explain the concept of continuous integration and its significance in `Agile software development`. |
| 7 | Discuss the benefits and challenges of adopting the `DevOps` methodology in software development. |
| 8 | Discuss the role of collaboration and communication in the `Kanban` software development methodology. |
| 9 | Explain the key principles and practices of the `Lean` software development methodology. |
| 10 | How does the `Rapid Application Development (RAD)` methodology differ from other software development methodologies? |
---
## NOTE:
> [Check here to review more prompts that can help the developers in their day-to-day life.](https://dev.to/techiesdiary/chatgpt-prompts-for-developers-216d)
| techiesdiary |
1,897,839 | Component id generation collision detected | host: {'collision-id': 'gallery-1'}, use different different id... | 0 | 2024-06-23T13:51:47 | https://dev.to/webfaisalbd/component-id-generation-collision-detected-5d7 | `host: {'collision-id': 'gallery-1'},`
> Use a different id for each component instance, e.g.:
> gallery-1
> gallery-2
```js
import { Component, ElementRef, Input, ViewChild } from '@angular/core';

@Component({
  selector: 'app-gallery',
  templateUrl: './gallery.component.html',
  styleUrls: ['./gallery.component.scss'],
  // Give each instance a unique id to avoid the collision:
  host: { 'collision-id': 'gallery-1' },
})
export class GalleryComponent {
  @ViewChild('fullScreen') elem!: ElementRef;
  @Input() galleryData!: any[];
}
```
| webfaisalbd | |
342,196 | The Importance of HTML | In 1917, the artist Michael Duchamp submitted his work, "Fountain," to an art exhibition. It's a used... | 0 | 2020-05-23T14:08:03 | https://jerryjones.dev/2020/04/20/the-importance-of-html/ | html, a11y, codenewbie, beginners |
In 1917, the artist Michael Duchamp submitted his work, "Fountain," to an art exhibition. It's a used urinal. And it stirred up yet another conversation about, "What *is* art?"
I've only taken one art history class, so forgive my simplification here. Essentially, art is subjective. If you personally don't like Duchamp's urinal, it doesn't make it any less art.
JavaScript and CSS are the focus of most web designer/developer's learning, but they're subjective to the end user. There are better and worse ways to write your CSS and JS, but none are 100% right or wrong (as long as your page still works, is secure, etc).
**HTML has clearly right and wrong ways to write it, and this is too often ignored.** Here are several examples I've seen in the wild:
- A "button" that's actually a clickable `<div>` and not a `<button>`.
- A "title" that's actually a`<div>` and not a heading element (`<h1>`, `<h2>`, etc).
- A "label" for an `<input>` that's actually a `<div>`.
- An "input" that's actually a `<div>` with keydown listeners.
Notice a pattern? Looking at you, `<div>`. 👀
The essential issue is using a non-semantic element when a semantic element should have been used.
## What Do We Mean by Semantic?
Semantic means that the element has a meaning. It says something about the content or its relationship to another thing. In HTML, basically anything that isn't a `<div>` or `<span>` is semantic.
There's also a continuum to what a tag tells us about the meaning of its content. For example, a `<section>` tells us less about its contents than an `<article>`.
`<section>` is still semantic, as it tells us that its contents should be considered as a group. But, an `<article>` tells its contents are grouped together and that it's a cohesive article.
For more examples, I'll walk through the Heading and Button elements to demonstrate how they are semantic.
### Heading Elements
An `<h1>` is a title of a page, and an `<h2>` beneath it gives a hierarchy to the page.
```html
<!-- h1, the most important part -->
<h1>The Importance of HTML</h1>
<!-- "What Do We Mean by Semantic?" is a subsection of "The Importance of HTML" -->
<h2>What Do We Mean by Semantic?</h2>
<!-- "Headings" is a subsection of "What Do We Mean by Semantic?" -->
<h3>Headings</h3>
```
Using an appropriate heading structure, you can automatically create a table of contents. Here's how this article could be built into a table of contents just based off of the heading levels:
- `<h1>`: The Importance of HTML
- `<h2>`: What Do We Mean by Semantic?
- `<h3>`: Headings
- `<h3>`: Buttons
- `<h2>`: Non-Semantic Elements
- `<h2>`: Correct HTML Does Not Bring You Glory, But You Need to Do It
You can see the structure of the whole article being communicated just via the HTML. If I had used all `<div>`s, then the structure would look like:
- `<div>`: The Importance of HTML
- `<div>`: What Do We Mean by Semantic?
- `<div>`: Headings
- `<div>`: Buttons
- `<div>`: Non-Semantic Elements
- `<div>`: Correct HTML Does Not Bring You Glory, But You Need to Do It
There's no meaning attached to the `<div>`, so it would be a flat structure. Just by using the correct HTML we bring clarity and structure to the DOM.
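The "table of contents for free" idea can be shown with a tiny script. This is just a sketch with hypothetical names; in a real page you would collect the headings from the DOM with `document.querySelectorAll('h1, h2, h3, h4, h5, h6')`:

```javascript
// Build an indented table of contents from a flat list of headings.
// Each heading is { level: 1..6, text: '...' }.
function tableOfContents(headings) {
  return headings
    .map(h => '  '.repeat(h.level - 1) + '- ' + h.text)
    .join('\n');
}

const headings = [
  { level: 1, text: 'The Importance of HTML' },
  { level: 2, text: 'What Do We Mean by Semantic?' },
  { level: 3, text: 'Headings' },
];
console.log(tableOfContents(headings));
// - The Importance of HTML
//   - What Do We Mean by Semantic?
//     - Headings
```

If every heading were a `<div>`, there would be no `level` to read, and no tool (or screen reader) could recover this structure.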
### Buttons
A button submits or changes the state of something. By definition, it's always:
- focusable
- activated on space bar or enter key presses
- activated on mouse click.
When you make a `<div>` with a click listener, you're not using the semantic interactions that come for free when you use a `<button>`. You have to manually build out the:
- focus state
- keyboard interactions
- mouse interactions
Not only that, but when a screen reader comes to a `<button>Submit</button>`, it will use those semantics and announce, "Submit, button."
The same thing using a `<div>` would look like:
```html
<!-- Just kidding, I'm not going to make an accessible div button. -->
<!-- Use a <button> please! 😂-->
```
When we use semantic HTML elements, we elevate the content's meaning. **It gives the content life.**
## Non-Semantic Elements
`<div>`s and `<span>`s are non-semantic elements. The `<div>` does not give the content any additional meaning. It's just a `<div>`.
I'm not being totally fair, as there is a tiny bit of meaning behind a `<div>` vs a `<span>`:
- A `<div>` is a block-level element, as in, it should wrap things together.
- A `<span>` is an inline element. It should be used within another element, like `<p><span class="dropcap">I</span>nline elements</p>`.
If there are no HTML elements that make sense for the content, then use a `<div>` or `<span>`. There's 100% a place for `<div>`s and `<span>`s. Not every piece of content or HTML element needs additional semantics.
**When writing HTML, use as specific of an element as makes sense for your content.** If there's nothing specific enough, then keep going for less and less meaningful tags. `<div>` and `<span>` are always the last choice.
## Correct HTML Does Not Bring You Glory, But You Need to Do It
You're not going to get a Webby Award or thousands of views on Codepen for how amazingly crafted your HTML is. You'll need to be OK going unrecognized for your work. But know that every time I use a screen reader or keyboard on a site *and it works correctly*, I have a little spark of joy. I'm sure I'm not alone here.
In the end, you'll have to be OK with knowing you did your best to make your work accessible to everyone. | jeryj |
1,897,838 | Spring Boot Testing Best Practices | Testing is a crucial aspect of software development, ensuring that your application behaves as... | 0 | 2024-06-23T13:48:21 | https://dev.to/abhishek999/spring-boot-testing-best-practices-1h0b | spring, springboot, junit, mockito | Testing is a crucial aspect of software development, ensuring that your application behaves as expected and is free from bugs. Spring Boot provides excellent support for testing, making it easier to write unit tests, integration tests, and test RESTful services. In this blog, we will explore best practices for testing Spring Boot applications, covering unit testing, integration testing, and using MockMvc for testing RESTful services
**Writing Unit Tests for Spring Boot Applications**
Unit tests are the foundation of a solid test suite. They focus on testing individual components in isolation, such as methods or classes, without depending on external systems like databases or web servers
**Best Practices for Unit Testing:**
**Use JUnit 5:** JUnit 5 is the latest version of the popular testing framework and provides powerful features for writing tests. Ensure you include the necessary dependencies in your pom.xml or build.gradle file
```
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter</artifactId>
<version>5.8.1</version>
<scope>test</scope>
</dependency>
```
**Mock Dependencies:** Use mocking frameworks like Mockito to mock dependencies and focus on testing the logic of the class under test
```
@ExtendWith(MockitoExtension.class)
public class UserServiceTest {
@Mock
private UserRepository userRepository;
@InjectMocks
private UserService userService;
@Test
void testFindUserById() {
User user = new User(1L, "John");
when(userRepository.findById(1L)).thenReturn(Optional.of(user));
User result = userService.findUserById(1L);
assertEquals("John", result.getName());
}
}
```
**Test Boundary Conditions:** Ensure you test edge cases, null values, and invalid inputs to make your tests comprehensive
**Write Fast and Isolated Tests:** Unit tests should run quickly and independently. Avoid using external resources like databases or file systems
**Use Test Doubles for Dependencies:** When necessary, create stub or mock implementations for dependencies that your class under test interacts with. This helps isolate the component being tested
```
@Test
void testUserCreation() {
User user = new User(null, "Jane");
when(userRepository.save(any(User.class))).thenReturn(new User(1L, "Jane"));
User createdUser = userService.createUser(user);
assertNotNull(createdUser.getId());
assertEquals("Jane", createdUser.getName());
}
```
**Integration Testing with Spring Boot :**
Integration tests verify that different parts of the application work together as expected. They test the application’s behavior in a realistic environment, including interactions with databases, web servers, and other systems
**Best Practices for Integration Testing :**
**Use @SpringBootTest:** The @SpringBootTest annotation loads the full application context and is useful for writing integration tests
```
@SpringBootTest
public class UserServiceIntegrationTest {
@Autowired
private UserService userService;
@Autowired
private UserRepository userRepository;
@Test
void testFindUserById() {
User user = new User(1L, "John");
userRepository.save(user);
User result = userService.findUserById(1L);
assertEquals("John", result.getName());
}
}
```
**Use @Transactional for Database Tests:** Use the @Transactional annotation to ensure that database changes are rolled back after each test, maintaining a clean state
```
@SpringBootTest
@Transactional
public class UserServiceIntegrationTest {
// Integration tests
}
```
**Profile-Specific Configuration:** Use different application properties for testing to avoid conflicts with development or production environments. Activate them with `@ActiveProfiles("test")` on the test class so the `application-test.yml` below is picked up
```
# src/test/resources/application-test.yml
spring:
datasource:
url: jdbc:h2:mem:testdb
driver-class-name: org.h2.Driver
username: sa
password: password
```
**Test Slices:** Use test slices like @WebMvcTest, @DataJpaTest, @RestClientTest to load only the necessary parts of the application context, making tests faster and more focused
```
@DataJpaTest
public class UserRepositoryTest {
@Autowired
private UserRepository userRepository;
@Test
void testSaveUser() {
User user = new User(null, "Alice");
User savedUser = userRepository.save(user);
assertNotNull(savedUser.getId());
assertEquals("Alice", savedUser.getName());
}
}
```
**Using MockMvc for Testing RESTful Services**
MockMvc is a powerful tool for testing Spring MVC controllers. It allows you to perform HTTP requests and assert responses without starting the entire web server
**Best Practices for Using MockMvc**
**Setup MockMvc with @WebMvcTest:** Use the @WebMvcTest annotation to test only the web layer and configure MockMvc
```
@WebMvcTest(UserController.class)
public class UserControllerTest {
@Autowired
private MockMvc mockMvc;
@MockBean
private UserService userService;
@Test
void testGetUserById() throws Exception {
User user = new User(1L, "John");
when(userService.findUserById(1L)).thenReturn(user);
mockMvc.perform(get("/users/1"))
.andExpect(status().isOk())
.andExpect(jsonPath("$.name").value("John"));
}
}
```
**Test Different Scenarios:** Ensure you test various scenarios, including success, failure, and edge cases
```
@Test
void testGetUserById_NotFound() throws Exception {
when(userService.findUserById(1L)).thenThrow(new UserNotFoundException());
mockMvc.perform(get("/users/1"))
.andExpect(status().isNotFound());
}
```
**Use JSONPath for Response Assertions:** Use JSONPath expressions to assert specific parts of the JSON response.
```
mockMvc.perform(get("/users/1"))
.andExpect(status().isOk())
.andExpect(jsonPath("$.id").value(1))
.andExpect(jsonPath("$.name").value("John"));
```
**Verify Interactions:** Use Mockito to verify that service methods are called as expected.
```
verify(userService).findUserById(1L);
```
**Test with Different HTTP Methods:** Test various HTTP methods (GET, POST, PUT, DELETE) to ensure your RESTful services handle them correctly
```
@Test
void testCreateUser() throws Exception {
User user = new User(null, "Jane");
when(userService.createUser(any(User.class))).thenReturn(new User(1L, "Jane"));
mockMvc.perform(post("/users")
.contentType(MediaType.APPLICATION_JSON)
.content("{\"name\": \"Jane\"}"))
.andExpect(status().isCreated())
.andExpect(jsonPath("$.id").value(1))
.andExpect(jsonPath("$.name").value("Jane"));
}
```
**Test Error Handling:** Ensure your tests cover error scenarios and that your application returns appropriate error responses
```
@Test
void testCreateUser_InvalidInput() throws Exception {
mockMvc.perform(post("/users")
.contentType(MediaType.APPLICATION_JSON)
.content("{\"name\": \"\"}"))
.andExpect(status().isBadRequest())
.andExpect(jsonPath("$.errors").isNotEmpty());
}
```
**Conclusion**
Testing is a vital part of developing reliable and maintainable Spring Boot applications. By following these best practices for unit testing, integration testing, and using MockMvc, you can ensure that your application is robust and behaves as expected. Remember to write tests that are fast, isolated, and comprehensive to cover various scenarios and edge cases
Happy testing! | abhishek999 |
1,897,834 | Twilio Intelligent Doctor By AbdulsalamAmtech | This is a submission for the Twilio Challenge What I Built I built an AI symptoms... | 0 | 2024-06-23T13:46:14 | https://dev.to/abdulsalamamtech/twilio-intelligent-doctor-by-abdulsalamamtech-431e | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
<!-- Share an overview about your project. -->
I built an AI symptoms analyzer and emergency reporter.
## Demo
<!-- Share a link to your app and include some screenshots here. -->

The demo link for my project called: [Twilio Intelligent Doctor](https://github.com/abdulsalamamtech/twilio-intelligent-doctor)
## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
1. Firstly,
for the AI symptoms analyzer: a user enters his or her symptoms, chooses a sex (male or female) and an age (e.g. 20, 45, or 65), and clicks the "check condition" button.
The symptoms are sent to an AI model, which analyzes them and returns a possible condition and the healthcare personnel to see; a page then shows the AI's analysis, with a button to contact the person the AI suggested.
Clicking that button presents SMS or Voice options for Twilio to use to send the symptoms.
2. Secondly,
for the Emergency Reporter: a user enters a description of an emergency and an address, and clicks the report button.
The emergency is sent to an AI model to determine the appropriate agency to contact, and the emergency message is then sent to that agency via SMS or Voice.
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
My project qualifies for the Impactful Innovators and Twilio Times Two categories.
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
I credited myself @abdulsalamamtech for building this project and @devAbdulsalam for his support.
<!-- Don't forget to add a cover image (if you want). -->

<!-- Thanks for participating! → -->
Thanks for giving me the opportunity to participate in this twiliochallenge. | abdulsalamamtech |
1,897,835 | Directory Project for NSFW AI | Hey guys, introduce you to a new project I built lately, NSFW AIs, a new directory dedicated to all... | 0 | 2024-06-23T13:43:46 | https://dev.to/insanensfwdev/directory-project-for-nsfw-ai-486i | ai, discuss | Hey guys, introduce you to a new project I built lately, [NSFW AIs](https://nsfwais.io), a new directory dedicated to all things NSFW in the AI field. Whether you're looking for AI tools for sexting, creating NSFW art, engaging in AI roleplay, exploring deep nude apps, or generating AI hentai, this site has got you covered. We've categorized everything to make it super easy to find exactly what you're looking for.
What you'll find on NSFW AIs:
[AI Sexting](https://nsfwais.io/tag/AI%20Sexting): Discover the latest and best tools for AI-driven conversations.
[NSFW AI Art](https://nsfwais.io/tag/NSFW%20AI%20Art): Find amazing AI tools for creating adult-themed art.
[NSFW AI Roleplay](https://nsfwais.io/tag/NSFW%20Roleplay%20AI): Explore AI tools for immersive and interactive roleplay experiences.
[AI Hentai Generator](https://nsfwais.io/tag/AI%20Hentai%20Generator): Unleash your creativity with AI tools for generating hentai content.
Plus, we offer a curated list of the best alternatives for each category, so you always have options.
Check it out and let us know what you think! Your feedback and suggestions are always welcome. | insanensfwdev |
1,897,757 | The Book That Can Save You from Failing the AWS Certified Cloud Practitioner Exam — CLF-C02 | Are you gearing up to take the AWS Certified Cloud Practitioner Exam (CLF-C02) and feeling the... | 0 | 2024-06-23T13:38:13 | https://dev.to/mannan/the-book-that-can-save-you-from-failing-the-aws-certified-cloud-practitioner-exam-clf-c02-kme | aws, cloudcomputing, computerscience, network | Are you gearing up to take the AWS Certified Cloud Practitioner Exam (CLF-C02) and feeling the pressure? You’re not alone. Many candidates face the daunting task of understanding the vast array of AWS services, features, and best practices. But what if I told you there’s a book that can not only ease your study process but significantly boost your chances of passing the exam on your first try?
**Introducing the Ultimate Study Guide:** [Master the AWS Certified Cloud Practitioner CLF-C02 Exam with Expert-Crafted Question Bank](https://www.amazon.com/dp/B0CXY83J2R/).
[](https://www.amazon.com/dp/B0CXY83J2R/)
This comprehensive guide is your one-stop resource for mastering the AWS Certified Cloud Practitioner exam. Packed with clear explanations, detailed practice questions, and insightful explanations, this book is designed to make your preparation efficient and effective.
## Why This Book is a Game Changer
### 1. Extensive Practice Questions
Understanding concepts is one thing, but applying them is another. This book features a total of 390 exam-aligned questions that simulate the actual exam. These questions cover all the core areas you’ll be tested on, ensuring you have a well-rounded understanding of AWS services and best practices.
**Domain 1: Cloud Concepts** — 71 questions
**Domain 2: Security and Compliance** — 82 questions
**Domain 3: Cloud Technology and Services** — 173 questions
**Domain 4: Billing, Pricing, and Support** — 64 questions
### 2. Detailed Explanations
Each practice question comes with detailed explanations for why the correct answer is right and why the other options are not. This not only helps you learn the material but also understand the reasoning behind it, which is crucial for mastering AWS concepts.
[](https://www.amazon.com/dp/B0CXY83J2R/)
### 3. Comprehensive Coverage
From AWS fundamentals to advanced topics, this guide covers everything. It’s structured to provide a deep dive into each subject, making sure you’re well-prepared for any question that comes your way.
## Real Success Stories
Don’t just take our word for it. Here’s what some of our readers have to say:
> “This book was a lifesaver! The practice questions were incredibly similar to what I encountered on the actual exam. The explanations helped me understand the material on a deeper level. I passed on my first try!” — Jane D.
> “The strategies and tips in this book are gold. They helped me manage my time and focus on the most important areas. I went into the exam feeling confident and prepared.” — Mark S.
Post your success stories in the book review section and let others know how this book helped you achieve your goals.
## Bonus Resources
To further enhance your preparation, explore [NexusTech](https://nexustech.courses/) and test your knowledge with practice exams: [AWS Certified Cloud Practitioner Practice Exams | CLF-C02](https://nexustech.courses/tracks/aws-certified-cloud-practitioner-practice-exams/). Use coupon code SAVE40SUCCESS to avail a 40% discount during checkout.
## Get Your Copy Today
Don’t leave your success to chance. Equip yourself with the best resource available and take control of your AWS Certified Cloud Practitioner exam preparation.
Grab your copy of [Master the AWS Certified Cloud Practitioner CLF-C02 Exam with Expert-Crafted Question Bank](https://www.amazon.com/dp/B0CXY83J2R/) now and start your journey towards becoming AWS certified. With this book, you're not just studying for an exam — you're building a solid foundation for a successful career in cloud computing.
## Final Thoughts
Passing the AWS Certified Cloud Practitioner exam can open up new career opportunities and set you on the path to becoming a cloud expert. With the right preparation, you can achieve this milestone. Let this book be your guide, mentor, and secret weapon. Invest in your future today, and watch the doors of opportunity swing wide open.
Happy studying, and here’s to your success on the AWS Certified Cloud Practitioner exam!
Click [here](https://www.amazon.com/dp/B0CXY83J2R/) to purchase your copy and begin your journey towards AWS certification success. | mannan |
1,897,831 | Variables | Today I’ve learned all of variables in Golang: var a int8 = -11 // 1byte var b int16 =... | 0 | 2024-06-23T13:33:25 | https://dev.to/vlad__siomga11/variables-l0m | Today I’ve learned all of variables in Golang:
```go
var a int8 = -11               // 1 byte
var b int16 = 32000            // 2 bytes
var c int32 = 1000000000       // 4 bytes
var d int64 = 100000000000000  // 8 bytes
var e uint8 = 11               // same as int8 but cannot be negative
var f byte = 250               // same as uint8
var g rune = -2567876          // same as int32
var k int = 765                // int32 or int64, depending on platform
var l uint = 765               // uint32 or uint64
var m float32 = 3.987          // 4 bytes
var n float64 = 3.98779786879  // 8 bytes
var isOne = true
var onWay = false
var name = "i "
fh := "assign everything"      // := infers the type
const eu float64 = 15.9777575  // cannot change it
```
| vlad__siomga11 | |
1,897,829 | Quickly Set Up a Local Web Server on Mac with ServBay | Do you want to quickly set up a local web server on your Mac? Are you tired of manually configuring... | 0 | 2024-06-23T13:30:20 | https://dev.to/servbay/quickly-set-up-a-local-web-server-on-mac-with-servbay-4mml | webdev, beginners, programming, php | Do you want to quickly set up a [local web server](https://www.servbay.com) on your Mac? Are you tired of manually configuring Apache, MySQL, PHP, and other components each time? Do you wish you could switch between different PHP versions with a single click or easily manage multiple virtual hosts? If your answer is yes, then you should definitely try ServBay, a powerful and convenient web development tool designed specifically for Mac users.

Key features of [ServBay](https://www.servbay.com) include:
- One of ServBay's standout features is the ability to use **non-existent domains and suffixes (TLDs)** in local development and create free SSL certificates for these domains. This allows developers to work in an encrypted HTTPS environment (e.g., https://api.servbay). This not only enhances the security of the development process but also significantly saves on investments in domains and SSL certificates.
- An easy-to-use graphical interface that allows you to start, stop, and restart servers effortlessly, as well as view server status and logs.
- Support for a variety of popular web technologies, including PHP, Node.js, MariaDB, Caddy and more. You can freely choose and combine them according to your needs.
- One-click switching between different PHP versions, from 5.6 to 8.4, and the ability to set PHP versions and extensions for each virtual host individually.
- Easy creation and management of multiple virtual hosts. You can specify different domains, ports, root directories, SSL certificates, etc., for each virtual host, and even use dynamic DNS services to make your virtual hosts accessible on the internet.
- Automatic backup and restoration of databases. You can regularly or manually back up your databases to prevent data loss or damage and restore to any backup point at any time.
- Integration with cloud services. You can deploy your projects to AWS, Google Cloud, or DigitalOcean, or use cloud storage services like Dropbox, OneDrive, or iCloud to sync your project files.
In short, ServBay is an extremely practical and powerful web development tool that allows you to easily set up and manage local web servers on your Mac. Whether you are a beginner or a professional, you can benefit from it.
---
Got questions? Check out our [support page](https://support.servbay.com) for assistance. Plus, you’re warmly invited to join our [Discord](https://talk.servbay.com) community, where you can connect with fellow devs, share insights, and find support.
If you want to get the latest information, follow [X(Twitter)](https://x.com/ServBayDev) and [Facebook](https://www.facebook.com/ServBay.Dev).
Let’s code, collaborate, and create together! | servbay |
1,897,762 | Speaking Goat Latin on the fastest bus to town | Weekly Challenge 274 Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance... | 0 | 2024-06-23T13:16:53 | https://dev.to/simongreennet/speaking-goat-latin-on-the-fastest-bus-to-town-59b2 | perl, python, theweeklychallenge | ## Weekly Challenge 274
Each week Mohammad S. Anwar sends out [The Weekly Challenge](https://theweeklychallenge.org/), a chance for all of us to come up with solutions to two weekly tasks. It's a great way for us all to practice some coding.
[Challenge](https://theweeklychallenge.org/blog/perl-weekly-challenge-274/), [My solutions](https://github.com/manwar/perlweeklychallenge-club/tree/master/challenge-274/sgreen)
## Task 1: Goat Latin
### Task
You are given a sentence, `$sentence`.
Write a script to convert the given sentence to Goat Latin, a made up language similar to Pig Latin.
Rules for Goat Latin:
1. If a word begins with a vowel ("a", "e", "i", "o", "u"), append "ma" to the end of the word.
1. If a word begins with consonant i.e. not a vowel, remove first letter and append it to the end then add "ma".
1. Add letter "a" to the end of first word in the sentence, "aa" to the second word, etc etc.
### My solution
A few weeks ago I mentioned how my employer - now former employer :( - gave us access to GitHub Copilot. It's pretty cool, and practically wrote the code for me.

I did modify it slightly to make the code a little easier to understand, but I'm super impressed that it came up with a perfectly working solution.
```python
def goat_latin(sentence: str) -> str:
words = sentence.split(' ')
for i, word in enumerate(words):
if word[0].lower() not in ['a', 'e', 'i', 'o', 'u']:
word = word[1:] + word[0]
words[i] = word + 'maa' + 'a' * i
return ' '.join(words)
```
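The post shows the function but not the entry point the shell examples below rely on. A minimal one (my assumption — the post doesn't show it) might look like this, with the function repeated so the sketch runs standalone:

```python
import sys

def goat_latin(sentence: str) -> str:
    # the function from the post, repeated here so this sketch is self-contained
    words = sentence.split(' ')
    for i, word in enumerate(words):
        if word[0].lower() not in ['a', 'e', 'i', 'o', 'u']:
            word = word[1:] + word[0]
        words[i] = word + 'maa' + 'a' * i
    return ' '.join(words)

if __name__ == '__main__':
    # join all command-line arguments into one sentence, as the shell examples do
    print(goat_latin(' '.join(sys.argv[1:])))
```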
### Examples
```bash
$ ./ch-1.py "I love Perl"
Imaa ovelmaaa erlPmaaaa
$ ./ch-1.py "Perl and Raku are friends"
erlPmaa andmaaa akuRmaaaa aremaaaaa riendsfmaaaaaa
$ ./ch-1.py "The Weekly Challenge"
heTmaa eeklyWmaaa hallengeCmaaaa
```
## Task 2: Bus Route
### Task
Several bus routes start from a bus stop near my home, and go to the same stop in town. They each run to a set timetable, but they take different times to get into town.
Write a script to find the times - if any - I should let one bus leave and catch a strictly later one in order to get into town strictly sooner.
An input timetable consists of the service interval, the offset within the hour, and the duration of the trip.
### My solution
I'd love to see more challenges like this one. Yes, they take a longer time to come up with a solution, but they challenge me to turn the task into a working solution. And this is one where Copilot was less than helpful.
For the input from the command line, I take the integers in sets of three for the attribute of each route.
For this task I worked out a solution on my whiteboard before writing a single line of code. There are many different parts to my solution.
The first is a [dataclass](https://docs.python.org/3/library/dataclasses.html) that has information about each route. This makes it easier to reference the attributes of each route.
```python
from dataclasses import dataclass
@dataclass
class Route:
freq: int
offset: int
length: int
```
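The sets-of-three parsing mentioned above can be sketched like this (the helper name `parse_routes` is mine, not from the post; `Route` is restated so the snippet runs standalone):

```python
from dataclasses import dataclass

@dataclass
class Route:  # same shape as the dataclass above
    freq: int
    offset: int
    length: int

def parse_routes(args: list[int]) -> list[Route]:
    # take the command-line integers in sets of three: interval, offset, trip length
    return [Route(f, o, l) for f, o, l in zip(args[0::3], args[1::3], args[2::3])]

print(parse_routes([12, 11, 41, 15, 5, 35]))
# [Route(freq=12, offset=11, length=41), Route(freq=15, offset=5, length=35)]
```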
The next part to my solution is a function that takes the route, and finds the fastest bus that leaves at each minute. This is stored as a dict where the key is the departure minute, and the value is the journey time of the quickest bus leaving at that minute.
```python
def calculate_departures(routes):
departures = {}
for route in routes:
start_minute = route.offset % route.freq
while start_minute < 60:
            if start_minute in departures and departures[start_minute] < route.length:
                # This is a slower bus, so skip ahead to its next departure
                start_minute += route.freq
                continue
departures[start_minute] = route.length
start_minute += route.freq
return departures
```
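To make the merge concrete: in the first sample timetable both routes have a bus leaving at :35, and only the faster 35-minute trip survives. Here is a compact, standalone re-statement of the same idea (tuples instead of `Route`; the slower-bus check is folded into one condition so the loop always advances):

```python
def merge_departures(routes):
    # routes: list of (freq, offset, length); returns {minute: fastest trip length}
    departures = {}
    for freq, offset, length in routes:
        minute = offset % freq
        while minute < 60:
            # keep only the fastest bus leaving at this minute
            if minute not in departures or length < departures[minute]:
                departures[minute] = length
            minute += freq
    return departures

print(merge_departures([(12, 11, 41), (15, 5, 35)]))
# {11: 41, 23: 41, 35: 35, 47: 41, 59: 41, 5: 35, 20: 35, 50: 35}
```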
The next function I have is called `next_bus`. Given the departures (from the above function) and a minute, it will determine the next bus to depart, which may possibly cross over an hour (eg. 11:59, 12:00, 12:01...). The function returns the `start_minute` minute and the `end_minute` minute (start minute + journey length).
```python
def next_bus(departures, minute):
start_minute = minute
while True:
if start_minute in departures:
return start_minute, start_minute + departures[start_minute]
start_minute += 1
if start_minute == 60:
start_minute = 0
```
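The wrap-around is the subtle part. A standalone copy of the function with a quick check that a late-hour query rolls over to an early-hour bus:

```python
def next_bus(departures, minute):
    # the function from the post: find the first departure at or after `minute`,
    # wrapping at 60; return (departure minute, departure minute + trip length)
    start_minute = minute
    while True:
        if start_minute in departures:
            return start_minute, start_minute + departures[start_minute]
        start_minute += 1
        if start_minute == 60:
            start_minute = 0

print(next_bus({5: 35}, 50))  # wraps past the hour to the :05 bus -> (5, 40)
```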
The main function puts this all together. I start by defining `departures` from the `calculate_departures` function, and the `skip_bus` variable as an empty list.
```python
def bus_route(routes: list[Route]) -> list[int]:
# Get the start time of all bus routes
departures = calculate_departures(routes)
skip_bus = []
```
I then have a double loop. The outer loop uses the variable `minute` and loops from zero to 59. I use the `next_bus` function to determine the `start_minute` and `end_minute` of the next bus to arrive.
The inner loop uses the variable `second_bus_start` and loops from one minute past the `start_minute` to one less than the `end_minute`. It checks if there is a bus departing at that minute. If there is and it gets us to the destination quicker, I exit the inner loop and add the `start_minute` value to the `skip_bus` list.
```python
for minute in range(60):
start_minute, end_minute = next_bus(departures, minute)
for second_bus_start in range(start_minute + 1, end_minute):
if second_bus_start % 60 not in departures:
continue
if second_bus_start + departures[second_bus_start % 60] < end_minute:
break
else:
continue
        skip_bus.append(minute)
    return skip_bus
```
### Examples
```bash
$ ./ch-2.py 12 11 41 15 5 35
[36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]
$ ./ch-2.py 12 3 41 15 9 35 30 5 25
[0, 1, 2, 3, 25, 26, 27, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 55, 56, 57, 58, 59]
```
| simongreennet |
1,897,753 | How I published my first app to Apple Store #2 | Hello everyone! My laptop is still not back to life, so let's continue... You can read the... | 0 | 2024-06-23T13:11:58 | https://dev.to/uladzmi/how-i-published-my-first-app-to-apple-store-2-4g34 | mobile, development, ios, learning | Hello everyone! My laptop is still not back to life, so let's continue...
You can read the background story [here](https://dev.to/uladzmi/how-i-published-my-first-app-to-apple-store-1-491a).
### Development Process
First, I decided to go through the official tutorials and immediately came across [this paragraph](https://developer.apple.com/tutorials/app-dev-training/getting-started-with-scrumdinger):
> These tutorials are created for experienced developers who are new to SwiftUI. You’ll need to know about Swift before you begin, so start by reviewing The Basics of the Swift programming language for an overview of syntax, data types, and unique language features
I don't consider myself a highly experienced developer and had never worked with Swift before, but I really didn't want to read through the dry documentation since it would take a couple of days, and I just wanted to launch something in the simulator and my phone asap.

Without much hesitation, I skipped the basics and jumped straight into the tutorials. It started off quite well, I quickly grasped the basic concepts, but then the tutorials moved on to drawing and playing audio, which didn't fit within the scope of my logger. So, deciding that the knowledge gained from the first three or four lessons should be enough, I started working on my project.
***
For the MVP (Minimum Viable Product), or in other words, to start using my own app, I needed the following features:
- The ability to add/remove exercises from a list
- The ability to add/remove sets to these exercises
- The ability to view history
I started drafting the skeleton of the app and creating the first views, and as expected, problems from my lack of Swift knowledge quickly emerged. ChatGPT came to the rescue, sometimes providing complete nonsense, but also giving good ideas on where to dig. On the other hand, if I had finished the tutorials, I wouldn't have needed its help in many instances. Maybe I'll finish them next time... but who am I kidding 😅
In addition to ChatGPT, I didn't neglect the good old Google, and my application started to take shape.

***
One of the first questions to answer was data storage, as changing this aspect in the future would be very challenging. A full-fledged backend seemed overkill, although there are plans to try setting it up from scratch in the future. The tutorials provided an example of storing data locally in a JSON file, but that option didn't suit me either. In this case, I would either have to create many files for each type of object, for example, a file for exercises, a file for sets, a file for sessions, etc., and then somehow combine them, or store everything in one single file, which is not optimal and difficult to model properly. SQLite seemed like the ideal option, so I started looking in that direction. With the help of Google, I discovered that iOS has its own framework suitable for tasks like mine, and after fiddling for a couple of weeks with the help of a few YouTube videos and some GitHub examples, I managed to set up CoreData, which essentially uses SQLite inside, as I wanted.
Having figured out data storage, I started to think about where to get the initial list of exercises. One option was to create the exercises using the app itself, which was realistic for the first tests, but I didn't want to manually create 20-30 exercises. In the end, I decided to read the list of exercises from a CSV file and write them to CoreData at the first launch, updating what changed on subsequent app launches. I generated the exercises and their descriptions using ChatGPT, resulting in a list of 100 exercises for the main muscle groups.
***
Pleased with myself, I decided it was time to move on to testing in real conditions and began replicating workout records in my own app. Everything worked more or less stably, with many minor bugs that were easily fixed in the evenings. There was only one thing that bothered me: the app was not user-friendly at all, namely:
- I had to search for exercises in the huge list every time
- I had to set the weight and reps every time, even though I didn't increase the weights every workout and could simply use the last one
- And a few other similar little things that I initially didn't pay attention to, and it took a few more weeks to refine them.
Besides usability issues, another problem arose, which I thought was a bug, but I couldn't catch it. The issue was that about a week after installing the app on my iPhone, it stopped launching. This was very frustrating, as I deleted and reinstalled it a couple of times, losing all history. After a few such deletions, I finally realized that I could re-upload the app without deleting it, preserving the history, but even that didn't save me, as I often forgot to re-upload the app on time. It turned out to be a certificate issue, which without an Apple developer program subscription was valid for only one week, meaning it was time to create a full-fledged developer account. Nothing foretold trouble, and I was already imagining how I would publish the app next week, but my enthusiasm waned when I found out the subscription cost 100€ per year. This slightly altered my initial concept, but I'll talk about that and the preparation process for release in the next part...
***
Thanks to everyone who read till the end!
You can view screenshots or download it [here](https://apps.apple.com/de/app/yet-another-workoutlogger/id6484401597?l=en-GB&platform=iphone).
Any feedback is appreciated, thanks, and stay fit! | uladzmi |