id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,354,320 | The most efficient way to debug problems with PHPUnit mocks | PHPUnit can be overwhelming to those who are just learning the basics of programming and/or unit... | 0 | 2023-02-05T10:52:40 | https://dev.to/nikolastojilj12/the-most-efficient-way-to-debug-problems-with-phpunit-mocks-2fhj | php, tutorial, testing, phpunit | PHPUnit can be overwhelming to those who are just learning the basics of programming and/or unit testing in PHP. It is a powerful and robust tool that has been a cornerstone of unit testing in the PHP world for many years, which means it has a huge set of features covering almost any case you could encounter.
One of the most powerful features of PHPUnit are [stubs](https://phpunit.readthedocs.io/en/10.0/test-doubles.html?highlight=stubs#stubs) and [mocks](https://phpunit.readthedocs.io/en/10.0/test-doubles.html?highlight=stubs#mock-objects). They give you the ability to detach the rest of your code from the part which you are trying to test. You can mock anything - from (other) methods in a class you are testing to dependencies, abstract classes or even traits and interfaces.
The first step, of course, is building the mock, and it usually goes like this:
```php
$serviceMock = $this->getMockBuilder(YourService::class)
->disableOriginalConstructor()
->onlyMethods(['anotherMethodInYourClass', 'yetAnotherMethod'])
->getMock();
```
Apart from the methods shown in the example above, there are many others, like `setConstructorArgs()` and `setMethods()`, but we will not go deeper into what they do. What is important in our case is that there are many ways to create a mock object, depending on your needs. You may need to mock a set of dependencies, an abstract class, an interface, and so on.
If you are still learning (or even if you don't keep up with the changes between versions of PHPUnit), you may end up "guessing" which set of methods you are going to call when creating your mocks. Even StackOverflow is full of answers like "_The following way of creating a mock works for me_" - without any information or context provided.
**Well, let's provide some context.**
PHPUnit, under the hood, creates a mock class just like you would write any class. It generates source code for the class, trait or interface which you are trying to mock and fills it with mocked properties and return values. The generated mock class code must be valid PHP code, because it is brought to life with the lesser-known built-in PHP function [`eval($someString)`](https://www.php.net/manual/en/function.eval.php). This function executes `$someString` as PHP code (and therefore fails if the string is not valid PHP). A word of warning, though, as the official documentation states: _The eval() language construct is very dangerous because it allows execution of arbitrary PHP code. Its use thus is discouraged. If you have carefully verified that there is no other option than to use this construct, pay special attention not to pass any user provided data into it without properly validating it beforehand._
That being said, PHPUnit prepares the source code of the generated mocked class and stores it in a string, which is then evaluated using the `eval()` function.
> If you are getting errors while trying to create or use mocks, and you are not sure what exactly is happening under the hood and what the generated mock class **actually** looks like - there is a way to see the actual contents (complete source code) of the mocked class during debugging.
The key point to look at is the `PHPUnit\Framework\MockObject\MockClass` class and its `generate()` method. In PHPUnit 9.6 it goes like this:
```php
public function generate(): string
{
    if (!class_exists($this->mockName, false)) {
        eval($this->classCode);

        call_user_func(
            [
                $this->mockName,
                '__phpunit_initConfigurableMethods',
            ],
            ...$this->configurableMethods
        );
    }

    return $this->mockName;
}
```
If you set a breakpoint at `eval($this->classCode)`, you will be able to see the contents of `$this->classCode`, which contains the complete source code of the generated mock class. The best part is that it's even formatted properly, so you won't have any issues reading it.
You are now free to experiment with different types of mocks and options for their creation, while looking at how exactly your way of creating mocks affects the end result - the mocked object. Have fun!
If you liked the article,...
[](https://www.buymeacoffee.com/puEW3HvWvP) | nikolastojilj12 |
1,354,329 | 100 Days of Code: The Complete Python Pro Bootcamp for 2022 - Day 28 - Pomodoro GUI Application | Liquid syntax error: 'raw' tag was never closed | 0 | 2023-02-05T11:30:06 | https://dev.to/mikekameta/100-days-of-code-the-complete-python-pro-bootcamp-for-2022-day-28-pomodoro-gui-application-19og | Main.py
```python
from tkinter import *
import math

# ---------------------------- CONSTANTS ------------------------------- #
PINK = "#e2979c"
RED = "#e7305b"
GREEN = "#9bdeac"
YELLOW = "#f7f5dd"
FONT_NAME = "Courier"
WORK_MIN = 25
SHORT_BREAK_MIN = 5
LONG_BREAK_MIN = 20
reps = 0
timer = None


# ---------------------------- TIMER RESET ------------------------------- #
def reset_timer():
    window.after_cancel(timer)
    canvas.itemconfig(timer_text, text="00:00")
    title_label.config(text="Timer")
    check_marks.config(text="")
    global reps
    reps = 0


# ---------------------------- TIMER MECHANISM ------------------------------- #
def start_timer():
    global reps
    reps += 1
    work_sec = WORK_MIN * 60
    short_break_sec = SHORT_BREAK_MIN * 60
    long_break_sec = LONG_BREAK_MIN * 60
    if reps % 8 == 0:
        count_down(long_break_sec)
        title_label.config(text="Break", fg=RED)
    elif reps % 2 == 0:
        count_down(short_break_sec)
        title_label.config(text="Break", fg=PINK)
    else:
        count_down(work_sec)
        title_label.config(text="Work", fg=GREEN)


# ---------------------------- COUNTDOWN MECHANISM ------------------------------- #
def count_down(count):
    count_min = math.floor(count / 60)
    count_sec = count % 60
    if count_sec < 10:
        count_sec = f"0{count_sec}"
    canvas.itemconfig(timer_text, text=f"{count_min}:{count_sec}")
    if count > 0:
        global timer
        timer = window.after(1000, count_down, count - 1)
    else:
        start_timer()
        marks = ""
        work_sessions = math.floor(reps / 2)
        for _ in range(work_sessions):
            marks += "✔"
        check_marks.config(text=marks)
# ---------------------------- UI SETUP ------------------------------- #
window = Tk()
window.title("Pomodoro")
window.config(padx=100, pady=50, bg=YELLOW)
title_label = Label(text="Timer", fg=GREEN, bg=YELLOW, font=(FONT_NAME, 50))
title_label.grid(column=1, row=0)
canvas = Canvas(width=200, height=224, bg=YELLOW, highlightthickness=0)
tomato_img = PhotoImage(file="tomato.png")
canvas.create_image(100, 112, image=tomato_img)
timer_text = canvas.create_text(100, 130, text="00:00", fill="white", font=(FONT_NAME, 35, "bold"))
canvas.grid(column=1, row=1)
start_button = Button(text="Start", highlightthickness=0, command=start_timer)
start_button.grid(column=0, row=2)
reset_button = Button(text="Reset", highlightthickness=0, command=reset_timer)
reset_button.grid(column=2, row=2)
check_marks = Label(fg=GREEN, bg=YELLOW)
check_marks.grid(column=1, row=3)
window.mainloop()
```
tomato.png

| mikekameta | |
1,354,380 | Practical Vim command workflow | There are many commands in Vim, which means that you can achieve a same goal with many approaches.... | 0 | 2023-02-05T11:44:53 | https://m4xshen.dev/posts/vim-command-workflow | vim, linux, programming, productivity | There are many commands in Vim, which means that you can achieve the same goal with many different approaches. This makes it difficult for beginners to learn how to accomplish an editing task with fewer keystrokes. In this tutorial, I'll share my Vim command workflow and give you some guidelines on how to move and edit text in Vim efficiently.
## Guidelines
Here are some general rules of my workflow.
1. Don't use arrow keys and mouse.
2. Use relative jump (eg: `5k 12j`) for vertical movement inside screen.
3. Use `CTRL-U CTRL-D CTRL-B CTRL-F gg G` for vertical movement outside screen.
4. Use word-motion (`w W b B e E ge gE`) for short distance horizontal movement.
5. Use `f F t T 0 ^ $ , ;` for mid long distance horizontal movement.
6. Use `operator + motion/text-object` (eg: `ci{ d5j`) whenever possible.
If you are not familiar with some of these concepts, please learn about the [vim basic commands](https://dev.to/m4xshen/vim-basic-commands-3aan) first.
## Examples
Here are 4 real situations I faced when creating a todo list website with JavaScript. You can think about how you would achieve the editing goal first and then see my approach.
Notes:
- `^` or `v` points to the position of the cursor.
- There are line number and relative line number on the left.
### Situation 1
Goal: Change `activeList` to `this` and add a `;` at the end of the line.
```javascript
// current mode: Normal
2 if(this.sortMethod == 'Name') {
1 activeList.uncheckedTodo.sort(sortWithName)
189 }
^
```
My approach is `-cwthis<ESC>A;<ESC>`.
- `-`: Go 1 line upward, on the first non-blank character
- `cwthis`: Change the word and type `this`.
- `<ESC>`: Leave insert mode.
- `A;<ESC>`: Jump to the end of the line, type `;` and leave Insert mode.

### Situation 2
Goal: Change `i-s+1` to `d` and add `new ` before `Date(y, m, d)`.
```javascript
// current mode: Normal
454 console.log(Date(y, m, i-s+1));
^
```
My approach is `Wct)d<C-o>FDnew <ESC>`. (`<C-o>` means `CTRL-O`)
- `W`: Go one word forward, ignore symbol.
- `ct)d`: Change till before the occurrence of `)` to the right and type `d`.
- `<C-o>`: Execute one command in Normal mode and then return to Insert mode.
- `FD`: Go to the occurrence of `D` to the left.
- `new <ESC>`: Type `new ` and leave Insert mode.

### Situation 3
Goal: Add a line `activeList.sortMethod = 'Date';` below `document.querySelector('.sort-date')...`.
```javascript
// current mode: Insert
1 document.querySelector('.sort-name').addEventListener('click', () => {
343 activeList.sortMethod = 'Name';
1 activeList.update(); ^
2 })
3
4 document.querySelector('.sort-date').addEventListener('click', () => {
5 activeList.update();
6 })
```
My approach is `<ESC>yy4jpci'Date<ESC>`.
- `<ESC>`: Leave insert mode.
- `yy`: Yank current line.
- `4j`: Go down 4 lines.
- `p`: Paste the line we just yanked.
- `ci'Date<ESC>`: Change the content inside '', type `Date` and leave Insert mode.

### Situation 4
Goal: Move the whole block of `//sort` (line 200 ~ 207) to the beginning of `update()` function.
```javascript
// current mode: Normal
8 update() {
7 this.checkedTodo.forEach((todo) => {
6 this.element.insertBefore(todo.element, todoCreator.nextSibling);
5 });
4 this.uncheckedTodo.forEach((todo) => {
3 this.element.insertBefore(todo.element, todoCreator.nextSibling);
2 });
1 v
200 // sort
1 if(this.sortMethod == 'Name') {
2 this.uncheckedTodo.sort(sortWithName);
3 }
4 else if(this.sortMethod == 'Date') {
5 this.uncheckedTodo.sort(sortWithDate);
6 }
7
8 createCalendar(currentYear, currentMonth, this);
9 }
```
My approach is `dap8kp`.
- `dap`: Delete around the paragraph.
- `8k`: Go up 8 lines.
- `p`: Paste the paragraph we just deleted.

## Final Words
If you just started learning Vim operators and motions, it may take some time to think of which commands to use for each situation. However, if you keep practicing and using them, you'll become faster and faster. After a while, you'll develop muscle memory for using these commands.
---
That’s a wrap. Thanks for reading. If you like this kind of stuff, consider following for more.
You can also see what I am working on my [Personal Website](https://m4xshen.dev) and [GitHub](https://github.com/m4xshen).
| m4xshen |
1,354,440 | JavaScript double exclamation mark explained (with examples) | ✋ Update: This post was originally published on my blog decodingweb.dev, where you can read the... | 0 | 2023-02-05T14:01:01 | https://www.decodingweb.dev/javascript-double-exclamation-mark | javascript, webdev, beginners | > ✋ **Update:** This post was originally published on my blog [decodingweb.dev](https://www.decodingweb.dev), where you can read the [latest version](https://www.decodingweb.dev/javascript-double-exclamation-mark) for a 💯 user experience. _~reza_
## What are those double not operators in JavaScript?
You might have noticed a double exclamation mark (`!!`) in JavaScript code and wondered what it means.
First, the "double exclamation mark" (a.k.a. the "double bang") isn't an operator itself. It's the logical NOT (`!`) operator applied twice.
Long story short, some developers use the double NOT to explicitly convert the operand's [data type](https://www.decodingweb.dev/how-to-learn-a-programming-language#data-types) to a boolean (`true` or `false`). The operand is converted to `false` if its current value is **falsy** and to `true` if it's **truthy**.
In JavaScript, a falsy value can be either of the following:
- `false`
- `0`
- `-0`
- `""` (empty string)
- `null`
- `undefined`
- `NaN`
These non-boolean values are considered `false` when used in a boolean context. For instance, when you use a falsy value as an `if` condition, the `if` block is bypassed:
```js
let logged_in = null

// This block isn't executed
if (logged_in) {
  // Do something here
}

logged_in = ''

// This block isn't executed
if (logged_in) {
  // Do something here
}
```
By using `!!`, you can convert a non-boolean value to a boolean based on its current value.
Here are some examples:
```js
// Falsy values will all be false
console.log(!!null) // false
console.log(!!0) // false
console.log(!!undefined) // false

// Needless to say, anything that's not falsy is truthy!
console.log(!!'dwd') // true
console.log(!!5) // true
```
JavaScript "double exclamation mark" can be used in any expression:
```js
let logged_in = 1
if (!!logged_in === true) {
  // Do something here
}
```
In the above code, we're checking the boolean equivalent of `logged_in` and don't care about the actual value. The value `1` is truthy, so `!!logged_in` will return `true`.
You can achieve the same results by using the `Boolean` constructor (without using the `new` keyword).
```js
let logged_in = 1
if (Boolean(logged_in) === true) {
  // Do something here ...
}
```
But do we need this confusing technique in the first place?
## When do you need to use the JavaScript "double exclamation mark" syntax?
If you ask me, you'd be fine without double Not operators!
Let's get a bit deeper.
JavaScript (unlike TypeScript) doesn't have static typing. But as a weakly-typed language, it does type coercion automatically whenever needed.
What is "type coercion"? You may ask.
All primitive values in JavaScript are one of these types:
- Undefined
- Null
- Number
- String
- Boolean
- BigInt
- Symbol
Note that `NaN` ("Not a Number") is not a separate type; it's a special value of the Number type.
Type coercion is the automatic or implicit conversion of a value from one data type to another (for instance, from a string to a number) - done at JavaScript's discretion.
Here's an example:
```js
let a = '12'
console.log(a * 5)
// output: 60
```
In the above example, the variable `a` contains a string value (`'12'`). When we multiply it by `5`, JavaScript casts it to a number (automatically). The result (`60`) confirms that.
Here's another one:
```js
let a = '5'
let b = 5
console.log(a + b)
// output: 55
```
In the above example, we have a sum expression (`'5' + 5`). This time, JavaScript chose to cast the numeric value (`5`) into a string. So instead of `10` (`5 + 5`), we got `55` - the concatenation of `'5'` and `'5'`.
Even if we compare `'5'` and `5` with the equality (`==`) operator, we'll get `true`, meaning they are equal - even though one is a string and one is a number.
```js
console.log(5 == '5')
// output: true
```
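If coercion is not what you want, you can convert explicitly before the operation so the intent is clear. A small sketch:

```javascript
let a = '5'
let b = 5

// Explicit conversion removes the ambiguity of implicit coercion
console.log(Number(a) + b) // 10 (numeric addition)
console.log(String(b) + a) // '55' (string concatenation)
```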
These quick experiments prove JavaScript converts data types in certain contexts.
To avoid unexpected behaviors, it's advised to use the strict equality operator (`===`) over the normal equality (`==`) operator in comparisons.
The strict equality operator (`===`) returns `true` only if both operands are equal and of the same type. This is where you might want to use the double exclamation sign or the `Boolean` constructor.
```js
let logged_in = 1
if (!!logged_in === true) {
  // Do something here ...
}
```
This might seem totally unnecessary, though. You could use the short form without explicitly casting it to a boolean data type:
```js
let logged_in = 1
if (logged_in) {
  // Do something here ...
}
```
This will give you the exact same result.
However, if you have a good reason to compare values in an ESLint-friendly fashion (following the `eqeqeq` rule), you can convert them to their boolean equivalent before comparing them with `===`.
All right, I think that does it. I hope you found this quick guide helpful.
Thanks for reading.
---
**❤️ You might like:**
- [Javascript if/else shorthand explained (with common uses cases)](https://www.decodingweb.dev/javascript-if-else-shorthand)
- [How to check if an element exists in JavaScript (with examples)](https://www.decodingweb.dev/javascript-check-if-element-exists)
- [JavaScript isset Equivalent (3 methods)](https://www.decodingweb.dev/javascript-isset-equivalent)
- [Integer division in JavaScript explained](https://www.decodingweb.dev/js-integer-division) | lavary |
1,354,672 | MongoDB Aggregation Pipeline - $function Stage | An Overview A popular NoSQL database called MongoDB offers a scalable and adaptable option... | 0 | 2023-02-05T18:45:48 | https://dev.to/shubhamdutta2000/the-function-stage-in-mongodb-aggregation-pipeline-590d | webdev, javascript, mongodb, productivity | ### An Overview
MongoDB, a popular NoSQL database, offers a scalable and adaptable option for storing and handling enormous amounts of data. The aggregation pipeline, which enables users to carry out complex data analysis and manipulation, is one of MongoDB's core features. This blog focuses on the `$function` stage of the MongoDB aggregation pipeline and its use cases.
---
### What is the $function Stage in MongoDB Aggregation Pipeline?
The MongoDB aggregation pipeline has an additional stage called `$function` that enables users to write their own unique JavaScript functions for data processing. When there is no built-in stage that can carry out the needed operation, this stage is especially helpful. Users have access to the complete JavaScript runtime and the MongoDB aggregation framework because the $function stage is executed within the context of the MongoDB server.
---
### Syntax:
The following syntax is used to define the `$function` stage:
```
{
  $function: {
    body: <code>,
    args: <array expression>,
    lang: "js"
  }
}
```
The `body` field specifies the JavaScript function definition that will be executed, and the `args` field specifies the arguments that will be provided to the function. You can specify an empty array `[]` if the body function does not accept any arguments. The `lang` field specifies the language used in the body; currently you need to specify `lang: "js"`.
---
### Use Cases for the `$function` Stage in MongoDB Aggregation Pipeline
1. **Running totals computation**: The `$function` stage can compute running totals by storing the cumulative sum in the state object.
2. **Implement custom algorithms**: Custom algorithms that cannot be implemented with the built-in stages can be implemented using the `$function` stage.
3. **Transforming Array**: Arrays can be transformed by mapping, filtering, or reducing the items of the array using the `$function` stage.
---
### Example:
Consider the following input documents in the `customers` collection:
```
{
  "_id": 1,
  "name": "Shubham Dutta",
  "email": "shubham_dutta@gmail.com",
  "position": "Developer"
},
{
  "_id": 2,
  "name": "Arjun Dutta",
  "email": "arjun.dutta@gmail.com",
  "position": "Designer"
}
```
Now, we need to separate out each customer's first name from their entire name and lowercase the position. The aggregate process listed below can be used to do this:
```
db.customers.aggregate([
  {
    $project: {
      _id: 0,
      email: 1,
      name: {
        $function: {
          body: function (name) {
            let firstName = name.split(" ")[0];
            return firstName;
          },
          args: ["$name"],
          lang: "js",
        },
      },
      position: {
        $function: {
          body: function (position) {
            return position.toLowerCase();
          },
          args: ["$position"],
          lang: "js",
        },
      },
    },
  },
]);
```
The following is the result of the above pipeline:
```
[
  {
    "name": "Shubham",
    "email": "shubham_dutta@gmail.com",
    "position": "developer"
  },
  {
    "name": "Arjun",
    "email": "arjun.dutta@gmail.com",
    "position": "designer"
  }
]
```
By splitting the `name` field on the space character and returning the first element of the resulting array, the `$function` stage extracts each customer's first name from their full name. The `toLowerCase()` method changes the `position` field to lowercase. The `$project` stage then shapes the output so that each document contains the customer's transformed name, email address, and position.
---
### Conclusion
The MongoDB aggregation pipeline has a significant feature called the `$function` stage that enables users to write custom JavaScript functions for data processing.
The `$function` stage gives you the freedom and power you need to design custom algorithms, manipulate arrays, or compute running totals.
You can fully utilize this feature and unleash the complete power of your MongoDB data by understanding the syntax and application of the `$function` stage.
| shubhamdutta2000 |
1,354,738 | Introducing Escape the Wave. My Exciting New Game! | Last week, I officially released my first game, Falling Square, which marked a great milestone for... | 21,725 | 2023-02-09T14:00:00 | https://blog.emilienleroy.fr/introducing-escape-the-wave-my-exciting-new-game | programming, gamedev, opensource, showdev | Last week, I officially released my first game, [Falling Square](https://github.com/EmilienLeroy/FallingSquare/), which marked a great milestone for me. However, the serious work now begins. This week, I'm starting development on my second game, [Escape the Wave](https://github.com/EmilienLeroy/EscapeTheWave). It will be a simple shooting game with some surprises (which I'll keep secret for now). The game will be available on Android and iOS, like my first game, and may also be available on Windows, Linux, and macOS.
## Virtual Joysticks for Player Movement
I've made my first addition to the game by incorporating the player and their movements. To cater to mobile devices, I've added a virtual joystick that enables users to move the player. I've also added another virtual joystick for rotation. Both virtual joysticks were developed by MarcoFazioRandom and can be found under the name "Virtual-Joystick-Godot" on GitHub: [https://github.com/MarcoFazioRandom/Virtual-Joystick-Godot](https://github.com/MarcoFazioRandom/Virtual-Joystick-Godot).

## Implementation of Player Shooting Mechanism
Next, I added the bullet and player shooting functionality. Currently, it's just a simple white ball without any textures, but there will likely be various types of bullets with unique effects in the future. To shoot, use the rotation joystick. Rotating the joystick will automatically trigger a shot.

## Addition of First Wall and Camera Implementation
Finally, I've added the first wall using a simple tileset, as well as a camera that follows the player. When a bullet hits a wall, it is automatically destroyed. The camera and wall serve as the first elements in creating a visually appealing and functional environment for the game.

## What's next
In the first week of development, I successfully laid the foundation for my new game. Building upon this solid foundation, I plan to add the first enemy in the upcoming week. This will bring the game one step closer to completion and provide players with a new challenge to overcome. I am eager to continue working on this project and can't wait to see how it evolves in the coming weeks. | emilienleroy |
1,355,120 | Top 5 Featured DEV Tag(#algorithms) Posts from the Past Week | The 4 Essential Skills of the Software Developers There are 4 main skill groups where all... | 0 | 2023-02-06T06:23:42 | https://dev.to/c4r4x35/top-5-featured-dev-tagalgorithms-posts-from-the-past-week-4d6g | algorithms, c4r4x35 | ##The 4 Essential Skills of the Software Developers
There are 4 main skill groups that all programmers must have. Most of these skills are resistant in time and are not influenced by the development in specific technologies (that are changing...
{% link https://dev.to/noriookawa/the-4-essential-skills-of-the-software-developers-43p3 %}
## Linked List
Hey, let´s talk about linked Lists and what are the differences between an array and a linked list.
Linked List
A linked list is structured by objects, each object have a head and a next value.
*A...
{% link https://dev.to/lausuarez02/linked-list-5bih %}
## It's the Most Wonderful Time of the Year - Dynamic PageRank and a Twitter Network
Introduction
"Merry Christmas!" Words that we read countless times on social networks at the most wonderful time of the year. Christmas brings with it an unprecedented madness of social networks with...
{% link https://dev.to/memgraph/its-the-most-wonderful-time-of-the-year-dynamic-pagerank-and-a-twitter-network-lo1 %}
## 01 Benchmark of four JIT Backends
Related GitHub Repo
https://github.com/ssghost/JITS_tests
Abstract
just-in-time (JIT) compilation is a way of executing computer code that involves compilation during execution of a program (at run...
{% link https://dev.to/ssghost/01-benchmark-of-four-jit-backends-51i3 %}
## PageRank Algorithm for Graph Databases
The most interesting and famous application of PageRank is certainly the one that actually sparked its creation. Google founders Larry Page and Sergey Brin needed an algorithm to rank pages and...
{% link https://dev.to/memgraph/pagerank-algorithm-for-graph-databases-39kk %}
| c4r4x35 |
1,355,154 | Super useful a state management library "Recoil" | Have you ever used a state management library called Recoil? This is super useful and easy... | 0 | 2023-02-06T07:34:04 | https://dev.to/yuya0114/super-useful-a-state-management-library-recoil-3a9j | react, javascript, webdev, programming | ## Have you ever used a state management library called Recoil?
This is super useful and easy to understand.
This article explains the basics of Recoil.
Please click the Like button and Save button if you like this article.
## What you will learn in this article
- RecoilRoot
- Atom
- Selectors
## What is Recoil?
Recoil is a React state management library created by Facebook.
It is an experimental state management framework for React, and it has not been officially released as stable yet.
## RecoilRoot
To use Recoil, you need to wrap the target scope in a RecoilRoot component.
Wrapping the root component in RecoilRoot enables global state management.
```jsx
import React from 'react';
import { RecoilRoot } from 'recoil';

function App() {
  return (
    <RecoilRoot>
      <CharacterCounter />
    </RecoilRoot>
  );
}
```
## Atom
An atom is the unit of state in Recoil.
Each atom is given a unique ID with `key` and an initial value with `default`.
```jsx
export const textState = atom({
  key: 'textState',
  default: '',
});
```
Components that need to read from and write to an atom can use one of three hooks.
If you only want to read the value, use `useRecoilValue`.
If you only want to set the value, use `useSetRecoilState`.
If you want to both read and update the state, use `useRecoilState`.
Here is how to use `useRecoilState`:
```react
function CharacterCounter() {
return (
<div>
<TextInput />
<CharacterCount />
</div>
);
}
function TextInput() {
/* useRecoilState determines from which Atom to retrieve
the value by passing the Key specified in Atom as an
argument.*/
// [value, setValueFunction] = useStateRecoil(Key)
const [text, setText] = useRecoilState(textState);
const onChange = (event) => {
setText(event.target.value);
};
return (
<div>
<input type="text" value={text} onChange={onChange} />
<br />
Echo: {text}
</div>
);
}
```
Here is how to use `useSetRecoilState` and `useRecoilValue`:
```jsx
function TextInput() {
  const text = useRecoilValue(textState);
  const setText = useSetRecoilState(textState);

  const onChange = (event) => {
    setText(event.target.value);
  };

  return (
    <div>
      <input type="text" value={text} onChange={onChange} />
      <br />
      Echo: {text}
    </div>
  );
}
```
## Selector
A selector can return values derived from atom state, or process input and update atom state.
Like atoms, each selector is assigned a unique ID using `key`.
The `get` function returns a value derived from the state, and the optional `set` function processes a value and updates the state.
Each time an atom the selector depends on is updated, the components using it re-render.
```jsx
const charCountState = selector({
  key: 'charCountState', // unique ID (with respect to other atoms/selectors)
  get: ({ get }) => {
    const text = get(textState);
    return text.length;
  },
});
```
```jsx
function CharacterCount() {
  const count = useRecoilValue(charCountState);

  return <>Character Count: {count}</>;
}
```
## Conclusion
Recoil can be used to manage values globally or only between specific components.
While Redux and others have complex syntax and take time to get used to, Recoil is simple and easy to use for state management.
ref: [Recoil](https://recoiljs.org/)
| yuya0114 |
1,355,256 | Good React Practices To Adopt while building UI | React is one of the best frameworks we can use for the development of the front-end size of our app.... | 0 | 2023-02-06T09:39:56 | https://dev.to/jaykaranja/good-react-practices-to-adopt-while-building-ui-2536 | react, javascript, webdev | React is one of the best frameworks we can use for developing the front-end side of our app. It is great at handling state, dealing with data and visualizing it, and responding to user input and actions seamlessly. This is good for the user experience, as users will interact with the app in a very efficient manner. However, there are a few critical areas to consider when using React that will help ensure we build a performant app that runs as expected, stays accurate, and remains properly reactive. The following are some of the practices, dos and don'ts of React which we could adopt into iBusiness.
## **1. WINDOWING/LIST VIRTUALIZATION**
When dealing with very long lists in a React application, performance is one of the most important things to watch, because rendering very large lists degrades the app. Before the app is displayed, the entire list has to be loaded into the DOM, causing UI lag and hurting the performance of the application. To reduce this effect, tools such as DevExtreme come in handy for windowing long lists, making them seamless, simple, and highly performant. Windowing means rendering only the items that are required (visible) at that moment in time, instead of the whole list.
## **2. ENSURING CORRECT KEY ASSIGNMENT FOR LISTS**
Lists use keys to uniquely identify one item from another. These keys are really important, since we need an identifier for each item so we can work with one piece of data at a time when needed. The right key assignment must be done to ensure that data is handled correctly. This is especially important while dealing with tables and is key to the app's performance.
## **3. FUNCTIONAL COMPONENTS AND COMPONENTS INTERACTIONS**
Functional components are the most recommended method for building React apps. This is because it comes with some advantages such as:
- It requires less code
- Components are stateless
- Easy to test
- Much easier to understand, for other engineers and for ourselves.
## **4. USING MORE STATELESS COMPONENTS**
When we write too many stateful components, they end up getting too complex and performance is reduced drastically. Debugging, reusability and testing also become very difficult, especially when there are too many states to consider.
## **5. MAINTAINING A CLEAR FOLDER STRUCTURE**
Good folder and file structuring is highly important as it helps us as engineers to well visualize the app file locations and also make reference very easy. It will also help in imports as we’re able to import code from and to other code files instead of having a cluttered system with a bunch of files everywhere. Such practices can be done in ways such as grouping folders by features, similar files or routes. This will make maneuvering through the app easier for both us as the developers and the computer when it has to import files from one point to another.
## **6. MAKING USE OF NAMING CONVENTIONS**
When we write code in a specific convention such as camelCase, development of the system becomes better and more efficient, as we will all know what a specific name means, for instance a variable name. This is going to be helpful as code will be written in a simplified way for everyone to understand, and collaboration should be as smooth as butter. This will also help majorly in debugging, as we will easily understand where exactly an issue in the code is coming from.
## **7. DEPENDENCY OPTIMIZATION**
When developing the front-end with React, we come across situations where installing a certain dependency is helpful. These dependencies contain functionality that we may need in order to run our app properly. However, there's a downside to using dependencies. For instance, a dependency may contain 30 methods while we only need, say, 5 of them. If we install the whole dependency, the bundle ends up containing a lot of code we don't need, cluttering the app with such files. To avoid this, we can use optimized versions of dependency packs, for instance lodash-webpack-plugin instead of the whole lodash package.
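As a small illustration (the exact saving depends on your bundler and its tree-shaking), importing only the function you need keeps the rest of the package out of the bundle:

```js
// Pulls the entire lodash package into the bundle
import _ from 'lodash';

// Better: pulls in only the one function we actually use
import debounce from 'lodash/debounce';

const save = () => console.log('saving...');
const debouncedSave = debounce(save, 300);
```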
## **8. CHECKING FOR UNNECESSARY RERENDERS**
One of the main causes of decline in performance is component re-rendering. This may happen without even the knowledge of us as the engineers and it could keep the app from running as expected. Due to this, we must perform checks using tools like React Profiler, and the useMemo hook, which will help us in detecting such behavior and dealing with it accordingly.
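As a small sketch of the two tools mentioned above in action, `React.memo` skips re-rendering a child whose props haven't changed, and `useMemo` caches an expensive computation between renders (component names here are hypothetical):

```jsx
import React, { useMemo, useState } from 'react';

// Only re-renders when its `value` prop actually changes
const ExpensiveChild = React.memo(function ExpensiveChild({ value }) {
  return <p>Computed: {value}</p>;
});

function Parent({ numbers }) {
  const [count, setCount] = useState(0);
  // Recomputed only when `numbers` changes, not on every click
  const total = useMemo(() => numbers.reduce((a, b) => a + b, 0), [numbers]);
  return (
    <div>
      <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>
      <ExpensiveChild value={total} />
    </div>
  );
}
```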
## **9. LAZY LOADING**
This is a concept where we can load parts of the app without certain components having loaded completely. This is important for performance, as the app can render while other components are still being fetched and processed. It applies where those components are not needed urgently, so users do not have to wait for heavy components to finish loading before they can use the app. The app becomes noticeably more performant because of this practice.
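React supports this out of the box via `React.lazy` and `Suspense`. A minimal sketch (the `./HeavyChart` module is hypothetical):

```jsx
import React, { Suspense, lazy } from 'react';

// The chart's code is only downloaded when it is first rendered
const HeavyChart = lazy(() => import('./HeavyChart'));

function Dashboard() {
  return (
    <Suspense fallback={<p>Loading chart...</p>}>
      <HeavyChart />
    </Suspense>
  );
}
```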
## **10. LIMITING COMPONENT CREATION**
One of React’s best wins is the ability to reuse components across the whole application. This makes coding much easier and also, performance gets way better as there is not a lot of code for the computer to work with. Just reusable components which will definitely end up making work way easier, reduce errors by a great margin and also make the software well structured.
## **11. REUSABILITY**
This is one very important concept that should be implemented correctly to make things easier for us as the engineers, and also for React itself to run with the optimum performance needed. Reusability is going to help the whole development process in ways such as error catching. When one piece of code is carefully examined by us as a team and we have ensured its architecture is at its best, then we can reuse that piece of code to perform the different tasks the system has to perform. This helps in a huge way: since each usage doesn't need to be individually reviewed, the code is far less likely to produce an error or cause performance issues in the final product.
## DOS AND DON’TS THAT WE MAY INCORPORATE FROM THE REMARKS ABOVE IN SUMMARY
**DOS**
- Windowing of very large lists.
- Dependency Optimization
- Avoiding unnecessary component rerenders.
- Implementing reusability of components.
- Having a proper file structure, and using a specific convention for naming different things while writing the code.
- Lazy loading.
**DON’TS**
- Writing unnecessarily complicated code.
- Repeating oneself.
- Storing components and any other code file in a messy structure.
- Excessive use of plugins and dependencies.
NB: Always be careful while using Redux in development, as when it is not used correctly, the performance of the app can be drastically affected.
| jaykaranja |
1,355,274 | Build an Instagram Web App with Supabase and Next.js 🚀 | TL;DR In this tutorial, you'll learn how to create an Instagram web app utilizing... | 0 | 2023-02-07T14:43:31 | https://livecycle.io/blogs/nextjs-supabase-instagram/ | javascript, webdev, nextjs, tutorial | ## TL;DR
In this tutorial, you'll learn how to create an Instagram web app utilizing Supabase.js and Next.js.
## Intro
You may be familiar with an app called Instagram 🤪.
Instagram has exploded in popularity in recent years and currently boasts about 2 billion monthly active users worldwide.

From an end-user perspective, Instagram is the go-to place to share images and communicate with friends, family and the world at large.
But from a developer's POV, it's even more than that. Experimenting with building an Instagram-like web application is a proven, solid context for creating a fast, scalable web app with real-time data synchronization.
And so, in this tutorial, we'll create a basic Instagram web app utilizing both Supabase.js and Next.js. Hopefully, you'll find this to be a good way to work with these technologies and integrate them together.
## Why do we care about this stuff??
Now, just some quick context on who we are, and why we love talking about this stuff.
Our team at [Livecycle](https://www.sdk.livecycle.io/?utm_source=devto&utm_medium=post&utm_campaign=devtoinstagram) is obsessed with developer experience and developer happiness.
[Our SDK](https://www.sdk.livecycle.io/?utm_source=devto&utm_medium=post&utm_campaign=devtoinstagram) enables frontend teams to build products faster, together, by enabling on-site, Figma-like comments on any PR deploy preview environment. With contextual, async reviews, you save yourself tons of time, money and headaches.
So it's no surprise that our team loves DevX, devtools, and all-things frontend development. So let's dive right into it.
## So... What exactly is Next.js?
Before we roll up our sleeves, let's quickly review some basics. Next.js is a JavaScript framework that allows you to create server-side rendered and statically exported React apps. It offers a simple setup, automated code splitting, optimal speed, and a variety of other features that enable developers to create quick and scalable web applications.
It also supports a variety of CSS and styling alternatives, such as styled-jsx, CSS-in-JS libraries, and standard CSS.
## And What is Supabase.js?
Supabase.js is an open-source toolkit that enables developers to add real-time functionality to web applications backed by PostgreSQL databases. It includes a query builder, real-time data synchronization, and a JavaScript API for working with a PostgreSQL database and incorporating real-time features into web applications.
The great thing about Supabase.js is that it works with any front-end framework, such as React, Angular, and Vue. It seeks to simplify the development of real-time applications by abstracting away some of the complexities of real-time database synchronization.
With the intros out of the way...

## Setting up Supabase.js online & adding data to the database
### Step 1: Creating a Supabase account
The first thing that you will need to do is create an account on the Supabase [website](https://supabase.com/).
### Step 2: Creating a Supabase project
Next, create a new project from the example screen shown here.

Next, give your project a name. In this example, I called mine **instagram-app** (very original, I know...) and generated a password. Obviously, be sure to save the password somewhere safe so you can retrieve it when needed.
Choose a region and then keep it selected on the Free plan. Now click the **Create new project** button and wait for it to complete the setup.

### Step 3: Creating a database and a table
Now it's time for us to create a database with a table and some data for our Instagram app. Click on the **Database** button as highlighted here:

Follow up by clicking on the **New table** button on the next page.

On the next screen, give the table a name. In this example, we called it **posts**.

You now need to click on the **Import data via spreadsheet** button so that we can import some `.csv` data for the Instagram app.
Copy and paste the text below into the input and it should look like the image under this code block.
```
id,text,image,likes,comments,date
9b4040d3-3b06-4701-bcc1-7eb1dbb33a14,Feeling thankful for this beautiful view 🌅 #blessed #sunsetlover,https://images.unsplash.com/photo-1557456170-0cf4f4d0d362?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8Mnx8bGFrZXxlbnwwfHwwfHw%3D&auto=format&fit=crop&w=900&q=60,100,0,"January 10, 2023"
ddcb0ec7-d33a-4e7b-84c9-0a2e0b3a3d91,Starting the day off right with a healthy breakfast 🍓 #fitfoodie #wellness,https://images.unsplash.com/photo-1484723091739-30a097e8f929?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1547&q=80,50,0,"January 12, 2023"
64f1f0c9-4a4a-475e-941c-b45c5ab5ed5b,Making memories on this adventure 🌲 #wanderlust #neverstoplearning,https://images.unsplash.com/photo-1583384990896-96eec15d6160?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1587&q=80,143,0,"January 14, 2023"
db5f13af-4ba4-4ed7-9c5f-b7c45ab69f5b,Feeling the love with my furry friend 🐶 #dogsofinstagram #bestfriend,https://images.unsplash.com/photo-1585373683920-671438c82bfa?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8ODh8fGNhdHxlbnwwfHwwfHw%3D&auto=format&fit=crop&w=900&q=60,234,0,"January 17, 2023"
4c1404fa-c1d0-4b7c-9c0f-7e3f3d9563d8,Take me back to this tropical paradise 🏝️ #beachlife #islandvibes,https://images.unsplash.com/photo-1545579133-99bb5ab189bd?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MTB8fGlzbGFuZHxlbnwwfHwwfHw%3D&auto=format&fit=crop&w=900&q=60,122,0,"January 18, 2023"
ec1b7f3d-3fc3-4901-b1a0-1c0b85a7d8a2,Finding peace and tranquility in nature 🌳 #mindfulmoments #greenery,https://images.unsplash.com/photo-1519331379826-f10be5486c6f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8Mnx8cGFya3xlbnwwfHwwfHw%3D&auto=format&fit=crop&w=900&q=60,64,0,"January 18, 2023"
f7d8300b-6f2f-4e74-b19c-10c53f0ccdc6,Making the most of this beautiful day ☀️ #outdooradventures #sunnydays,https://images.unsplash.com/photo-1517480448885-d5c53555ba8c?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MTA1fHxiZWFjaHxlbnwwfHwwfHw%3D&auto=format&fit=crop&w=900&q=60,23,0,"January 19, 2023"
9e7e0a2f-d6f3-440c-a50b-bb30dde03f77,Creating something new and beautiful 🎨 #artsy #diyfun,https://images.unsplash.com/photo-1579762715118-a6f1d4b934f1?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1584&q=80,89,0,"January 23, 2023"
bde3e02d-6d77-43a1-8e7f-a17a6c7f6b10,Feeling proud and inspired by this new accomplishment 💪 #personalbest #nevergiveup,https://images.unsplash.com/photo-1580261450046-d0a30080dc9b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1909&q=80,34,0,"January 26, 2023"
67a6ab74-eeaf-4d2b-8805-57dbfde71ef7,Indulging in my favorite sweet treat 🍰 #foodie #yum,https://images.unsplash.com/photo-1563729784474-d77dbb933a9e?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1587&q=80,214,0,"January 28, 2023"
8f8ef4e4-a23f-46cd-a3f8-a0e3d0fbdd83,Relaxing and rejuvenating on this chill day 💆 #selfcare #pampering,https://images.unsplash.com/photo-1540555700478-4be289fbecef?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=2670&q=80,105,0,"February 1, 2023"
90b09e1b-ca1a-4b79-b87b-48df44b7b35e,Making memories with the people who matter most 💕 #familylove #friendshipgoals,https://images.unsplash.com/photo-1506869640319-fe1a24fd76dc?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=2670&q=80,21,0,"February 4, 2023"
```

Click the **Save** button and on the next screen, you will see a primary key warning message. Make the id the primary key, and the warning will go away. See the images below for reference:


Great job! The database and table are now set up. However, you are likely to encounter a 'Supabase returns empty array' problem when we try to build the UI. This is because Supabase requires us to set up a [Row Level Security (RLS)](https://supabase.com/docs/learn/auth-deep-dive/auth-row-level-security) policy. Follow the steps below to create one for your database:
First, create a new policy.

After that step select the **Get started quickly** option.

Choose a policy and use the template. Here I am enabling read access for everyone which will allow the data from the API to be fetched and rendered on the page.
## Creating an Instagram web app with Next.js
### Connecting Supabase.js to the Next.js app
#### Step 1: Setting up a Next.js project
Navigate to your desktop or a directory of your choice and run the command below in your terminal to scaffold a project using Next.js. Go through the setup with the preferences of your choice and then `cd` into the **supabase-instagram-app** folder and install the Supabase package.
```shell
npx create-next-app@latest supabase-instagram-app
cd supabase-instagram-app
npm i @supabase/supabase-js
```
If you're using Visual Studio Code then you can just use the code below to open the project in your code editor. Otherwise, just open the project however you usually would.
```shell
code .
```
#### Step 2: Connecting to Supabase.js
Create a file called `client.js` and put it inside the `api` folder. Use the code below for the file.
```js
import { createClient } from '@supabase/supabase-js';
export const supabase = createClient();
```
We will need the connection settings to connect our Next.js app to Supabase.js. To access this, go to your Supabase dashboard:
Project Settings > API
Copy the Project URL string and Project API keys string:

Now update the `client.js` file with the code here and add your own details:
```js
import { createClient } from '@supabase/supabase-js';
export const supabase = createClient(
// Project URL
'your project url',
// Project API keys
'your api key'
);
```
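As a side note (not part of the original tutorial, so treat it as a suggested pattern): rather than pasting the keys straight into source control, you can keep them in a `.env.local` file and read them via `process.env`. Next.js exposes variables prefixed with `NEXT_PUBLIC_` to the browser:

```js
// .env.local (never commit this file):
// NEXT_PUBLIC_SUPABASE_URL=your-project-url
// NEXT_PUBLIC_SUPABASE_ANON_KEY=your-api-key

import { createClient } from '@supabase/supabase-js';

export const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY
);
```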
#### Step 3: Creating the Instagram app
Open the `index.js` file inside of the **pages** folder and then replace all of the code inside with the code below:
```js
import { useState, useEffect } from 'react';
import { supabase } from '../pages/api/client';
export default function Home() {
useEffect(() => {
fetchPosts();
}, []);
const [posts, setPosts] = useState([]);
const [loading, setLoading] = useState(true);
async function fetchPosts() {
const { data } = await supabase.from('posts').select('*');
try {
setLoading(false);
setPosts(data);
console.log('Data:', data);
} catch (error) {
console.log(error);
}
}
return (
<>
<div>
{loading ? (
<div>
<p>Loading...</p>
</div>
) : (
<div className="instagram-container">
<aside></aside>
<section>
<header>
<div className="instagram-profile-picture-container">
<div className="instagram-profile-picture"></div>
</div>
<div className="instagram-profile">
<div className="instagram-name">
<h2>johnjoe</h2>
<button>Edit Profile</button>
<div>⚙️</div>
</div>
<div className="instagram-stats">
<p>12 posts</p>
<p>145 followers</p>
<p>356 following</p>
</div>
<div className="instagram-bio">
<p>
John Doe | Blogger
<br /> <span>Personal blog</span> <br /> London, UK <br />{' '}
🇬🇧 💫 @johndoe.codes - documenting my tech journey! <br />
<a
href="https://linktr.ee/"
target="_blank"
rel="noopener noreferrer"
>
https://linktr.ee/
</a>
</p>
</div>
</div>
</header>
<nav>
<button>POSTS</button>
<button>REELS</button>
<button>SAVED</button>
<button>TAGGED</button>
</nav>
<div className="instagram-container-gallery">
{posts.map((post) => {
return (
<div key={post.id} className="instagram-post">
<img src={post.image} alt={post.text} />
</div>
);
})}
</div>
</section>
</div>
)}
</div>
</>
);
}
```
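One subtlety worth knowing: `supabase-js` doesn't throw on failed queries, it returns a `{ data, error }` pair, so the `try/catch` above won't actually catch a query failure. A hedged variant of `fetchPosts` that checks `error` explicitly might look like this:

```js
async function fetchPosts() {
  const { data, error } = await supabase.from('posts').select('*');
  if (error) {
    console.error('Failed to fetch posts:', error.message);
    setLoading(false);
    return;
  }
  setPosts(data);
  setLoading(false);
}
```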
Do the same for the `globals.css` and `Home.module.css` files.
The `globals.css` style file:
```css
@import url('../styles/Home.module.css');
@import url('https://fonts.googleapis.com/css2?family=Roboto&display=swap');
*,
*::before,
*::after {
margin: 0;
padding: 0;
box-sizing: border-box;
}
:root {
--main-font-family: 'Roboto', sans-serif;
--main-font-color: #ffffff;
--main-font-size: 16px;
}
html {
font-size: var(--main-font-size);
}
body {
background: #000000;
color: #ffffff;
}
```
The `Home.module.css` style file:
```css
.instagram-container {
display: flex;
flex-flow: row nowrap;
}
header {
display: flex;
flex-flow: row nowrap;
margin: 2rem;
border-bottom: 0.1rem solid #262626;
justify-content: space-evenly;
}
header span {
color: #a8a8a8;
}
.instagram-profile-picture {
border-radius: 100%;
height: 9rem;
width: 9rem;
background: #d4cbba;
}
.instagram-name {
display: flex;
flex-flow: row nowrap;
}
.instagram-name button {
border-radius: 0.3rem;
padding: 0.4rem;
border: none;
font-weight: bold;
margin-left: 1rem;
margin-right: 1rem;
}
.instagram-stats {
display: flex;
flex-flow: row nowrap;
justify-content: space-between;
margin: 1rem 0 1rem 0;
font-weight: bold;
}
.instagram-bio {
margin-bottom: 2rem;
}
.instagram-bio a {
color: #e0f1ff;
text-decoration: none;
font-weight: bold;
}
aside {
background: #000000;
height: 100vh;
width: 19.4375rem;
border-right: 0.1rem solid #262626;
}
section {
margin: 0 auto;
}
nav {
display: flex;
flex-flow: row nowrap;
justify-content: space-around;
width: 40rem;
margin: 0 auto;
}
nav button {
border: none;
background: none;
color: #ffffff;
font-weight: bold;
}
.instagram-container-gallery {
margin: 0 auto;
width: 58.4375rem;
display: flex;
flex-flow: row wrap;
}
.instagram-post img {
width: 18.3125rem;
height: 18.3125rem;
margin: 0.5rem 0.5rem 0.5rem 0.5rem;
}
```
Now run the command `npm run dev` to run the **supabase-instagram-app** and... Voilà!
You should see your shiny new Instagram clone app on [http://localhost:3000/](http://localhost:3000/), connected to your Supabase database. The data we entered into the table earlier should be rendered on the page as a gallery.
Great job! 👏👏👏
You can learn more about your Supabase API for the project from the API documents section as shown in the image below. If you click on the posts table it will give you the JavaScript CRUD code for all of your endpoints!

## Final thoughts
To summarize, creating a basic Instagram web app is a great context for working with Supabase and Next.js. Supabase makes it simple to create a scalable backend, and Next.js provides a straightforward framework for creating quick and efficient frontend apps. Developers can quickly and easily construct an Instagram-style app that is both practical and aesthetically beautiful by using these two technologies.
Cloud-based databases and frontend frameworks have made it simpler than ever to design and launch online apps, and Supabase and Next.js provide the ideal balance of power and simplicity to make this process as simple as possible.
So whether you're an experienced developer or just starting out, this mix of technologies is well worth considering for your next web development project.
## Thanks for reading!
If you found this article useful, then you're probably the kind of human who would appreciate the value Livecycle can bring to front end teams. I'd be thrilled if you [tried our SDK](https://www.sdk.livecycle.io/?utm_source=devto&utm_medium=post&utm_campaign=instagram) on one of your own projects and gave it a spin with your team. 🙏 | zevireinitz |
1,355,339 | If you are interested in micro services then I promise you won't regret here... | Hello Everyone. I started learning micro-services and now I am sharing it on my channel. I am using... | 0 | 2023-02-06T10:33:08 | https://dev.to/meenachinmay/if-you-are-interested-in-micro-services-then-i-promise-you-wont-regret-here-3ndi | webdev, microservices, nestjs, javascript | Hello Everyone. I started learning microservices and am now sharing what I learn on my channel. I am using NestJS for it because NestJS makes building microservices with Node.js so easy nowadays. There is a lot you can do with microservices if you build them with NestJS.
I started a series called **Microservices based project setup and CRUD operations for any project** with NestJS. I have videos on my channel and I am constantly uploading more, each with a detailed explanation.

[](https://youtube.com/shorts/vg-ZC8PX0V8?feature=share)
[](https://youtu.be/e3sYnAeXbUk)
[](https://youtu.be/cURMY9DGGx0)
[](https://youtu.be/Uojr2L0ICVU)
Have a look at all this and I promise you won't regret it. It is worth spending your time there.
Thank you so much. | meenachinmay |
1,357,366 | Need custom JavaScript to get total sum of price multiply by quantity from an item array of data layer | Need custom JavaScript to get total sum of... | 0 | 2023-02-07T21:09:23 | https://dev.to/rayhanshimul/need-custom-javascript-to-get-total-sum-of-price-multiply-by-quantity-from-an-item-array-of-data-layer-3j6i | {% stackoverflow 75377609 %} | rayhanshimul | |
1,361,030 | What is ReactJs? | React is a JavaScriptlibrary for building user interfaces. It allows you to build reusable UI... | 0 | 2023-02-10T16:58:46 | https://dev.to/bakardev/what-is-reactjs-g91 | webdev, react, beginners, tutorial | React is a `JavaScript` library for building user interfaces. It allows you to build reusable UI components and manage the state of your application efficiently. Here's a beginner's tutorial for ReactJS using examples:
**_Setting up a React environment:_**
To start building with React, you need to set up a development environment. You can do this using tools like `CodeSandbox`, or you can set up a local environment using npm and a text editor like `Visual Studio Code`.
**_Creating a React component:_**
A `component` in React is a piece of UI that can be reused multiple times in an application. You can create a component in React using JavaScript `functions` or `classes`. Here's an example of a functional component:
```
import React from "react";
function MyComponent() {
return <h1>Hello, World!</h1>;
}
export default MyComponent;
```
And here's an example of a class component:
```
import React, { Component } from "react";
class MyComponent extends Component {
render() {
return <h1>Hello, World!</h1>;
}
}
export default MyComponent;
```
**_Rendering a component:_**
To render a component in React, you need to import it into a file and use the `<MyComponent />` syntax to render it. Here's an example:
```
import React from "react";
import MyComponent from "./MyComponent";
function App() {
return (
<div>
<MyComponent />
</div>
);
}
export default App;
```
**_Props and State:_**
In React, you can pass data to components using props and manage the state of a component using state. Props are properties that are passed to a component from its parent component. State is data that can change within a `component` over time. Here's an example using state:
```
import React, { useState } from "react";
function Counter() {
const [count, setCount] = useState(0);
return (
<div>
<p>Count: {count}</p>
<button onClick={() => setCount(count + 1)}>Increment</button>
</div>
);
}
export default Counter;
```
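The `Counter` above demonstrates state; props travel in the other direction, from parent to child. A small sketch (the `Greeting` component name is just for illustration):

```
import React from "react";

// `name` arrives as a prop from the parent component
function Greeting({ name }) {
  return <h1>Hello, {name}!</h1>;
}

// Usage in a parent component: <Greeting name="Ada" />
export default Greeting;
```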
**_Event handling:_**
You can handle events in React components using event handlers. An event handler is a function that is executed when an event occurs, such as a button click. Here's an example:
```
import React, { useState } from "react";
function Toggler() {
const [isToggled, setToggled] = useState(false);
return (
<div>
<p>Toggled: {isToggled.toString()}</p>
<button onClick={() => setToggled(!isToggled)}>Toggle</button>
</div>
);
}
export default Toggler;
```
These are the basics of ReactJS. Visit [React Docs](https://reactjs.org/) for more. You can use these concepts to build more complex applications and learn advanced topics as you progress.
Follow @bakardev for more. ✌
| bakardev |
1,361,861 | Prediksi Syair Togel sgpHari ini | bocoran nomor keluaran sgp dan prediksi sgp 4d 3d 2d terjitu hari ini di Riano.org. Semoga prediksi... | 0 | 2023-02-11T12:39:01 | https://dev.to/syairsingapura/prediksi-syair-togel-sgphari-ini-13kl | bocoran nomor keluaran sgp dan **[prediksi sgp](https://riano.org/)** 4d 3d 2d terjitu hari ini di Riano.org. Semoga prediksi dan forum kode syair bisa membantu anda untuk JP | syairsingapura | |
1,363,453 | Users and Groups in Linux — How to use chown and chgrp commands? | When it comes to large organization, the Users and Groups plays important role in every side of... | 20,967 | 2023-02-13T04:43:52 | https://www.gogosoon.com/blog/users-and-groups-in-linux-how-to-use-chown-and-chgrp-commands | linux, beginners, chown, users |
When it comes to a large organization, users and groups play an important role for everyone involved. There will be different levels of users in an organization, and to manage this at scale, we need a strong understanding of users and groups.
To control which users can access files and directories in Linux, we can use the chown and chgrp commands. These commands manage which user and group own a file, which, together with the permission bits, determines who can read, write, and execute it.
We need to understand the basics of how groups and users work in Linux and how we can manipulate permissions for them.
Let’s get into the topic without any further ado.
## What are the Group and Users and use cases of the group?
A user is a normal entity to manipulate files, directories, and any type of action in a system. We can create any number of users in Linux.
A group contains zero or more users in it. Users in a group share the same permissions. The group allows us to set permissions on the group level instead of having to set permissions for individual users.
Let’s consider a scenario in software development, a machine has been used by various types of people like Administrators, Developers, and Testers.
Each person should have an individual level of access to the files in a system.
Yet there will be a common set of permissions allowed for developers, and similarly for testers and admins. So the level of permissions is common for the individual users inside their respective groups.
Let’s consider there are 10 developers and 8 testers in my team and we’re using 1 shared computer (Each of us holds a laptop too).
We want to create a file that should be accessible only to the developers. Can we achieve this without using the concept of groups? Yes. It’s achievable. But, we have to assign permission to each developer.
The next day, I get news that my team is expanding to 150 developers and 20 testers due to an immediate client requirement.
Achievable again. But, it’s not scalable. It’s so tedious to manage permission for each and every developer if they share common permissions.
Here comes the supremacy of groups 👬. If we have all 10 developers in a group called **dev_group**, We can simply give permission to the group **dev_group.**
Groups are useful for more than just permissions; there are other use cases for them too.
## What are the primary and secondary groups in Linux?
As the name implies, a primary group is the group a user belongs to by default.
For example, when we create a user named developer, a group named developer is typically also created and set as that user's primary group.
A secondary group is any additional group, and we can add any number of users to it.
## How to create a user?
Users are created using the `useradd` command. Each user in a Linux system has a unique user id.

```shell
useradd [OPTIONS] <username>
```

Let's create a new user named developer:

```shell
useradd developer
```
## How to create a group?
Groups are created using the `groupadd` command. Similar to users, each group in a Linux system has a unique group id.

```shell
groupadd [OPTIONS] <group_name>
```

Let's create a new group named developers_group:

```shell
groupadd developers_group
```
## How to add a user to a group?
```shell
sudo usermod -aG <group_name> <username>
```

Here's the actual command to add the user developer to the developers_group group:

```shell
sudo usermod -aG developers_group developer
```
## How to list the groups?
You could ask, "How can we verify that the created group exists? And how can we verify that the user was added to the group?". The list of groups, and the users who belong to each group, are stored in a file called group, located under the /etc directory.
We can see the available groups by reading that file using the cat command:

```shell
cat /etc/group
```


This will be a huge file; by default it has 70 to 100 lines, so I've cropped the top and bottom parts of the command's output in the above screenshots.
The last 2 lines of the above screenshot show that there's a new user called developer, a new group called developers_group, and that the user developer has been added to the developers_group group.
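Instead of scanning the whole file, you can also look up a single entry with the `getent` command (shown here with the current user's primary group as a stand-in, since your group names will differ):

```shell
# List the groups the current user belongs to
groups

# Look up one entry from the group database (name:password:GID:members)
getent group "$(id -gn)"
```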
## How to know the existing owner and group ownership of a file?
We have a powerful and familiar command in Linux which shows the permissions of a file/directory: `ls -l`.

```shell
ls -l test.sh
```

Let’s split the output separated by space and understand each part of it,
- `-rw-rw-r--` - the permissions of the file test.sh (the `1` that follows is the hard link count)
- 1st occurrence of "gogosoon" - Owner of the file
- 2nd occurrence of "gogosoon" - Group ownership of the file
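As an aside (not in the original article), if you only want the ownership fields without the rest of the `ls -l` output, GNU `stat` can print them directly:

```shell
# Create a demo file and print just its owner and group
touch demo.txt
stat -c 'owner=%U group=%G' demo.txt
```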
## How to change the Owner of a file/directory?
The chown command is used to change the ownership of a file; its name is an abbreviation of change owner.
From our above example, we have seen the file test.sh owned by the user named gogosoon.
Let’s change the ownership of the file to the user admin using the chown command.
```shell
sudo chown admin test.sh
```

From the above screenshot, we can clearly see that the owner of the file test.sh has been changed from gogosoon to admin.
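A related trick (not covered in the original article, so treat it as an aside): `chown` also accepts a `user:group` form, changing the owner and the group in a single step. With root you would write something like `sudo chown admin:admin test.sh`; the sketch below uses the current user and primary group so it runs without sudo:

```shell
# Change owner and group together with the user:group form.
f=$(mktemp)
chown "$(id -un):$(id -gn)" "$f"   # a no-op for our own file, but shows the syntax
stat -c '%U:%G' "$f"
```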
## How to copy the ownership from one file to another?
I have faced this scenario once in my career, on a shared system that we use for some rare use cases.
One day I was creating hundreds of files and needed to give my colleague's user account access to all of them, with the same ownership on every file. I was too lazy to do it manually, and I was sure there must be a command to do this, so I did a quick Google search for copying ownership from one file to another. After a few seconds, I found the solution, and it was simple: just add the --reference flag.

```shell
chown --reference=<source_file> <target_file>
```
Let’s explore that with an example,
Let’s create a new file named copy.sh with my user account gogosoon.
The owner of the test.sh file is admin user (from our previous example). I want the ownership of test.sh file to be copied to newly created copy.sh file which was owned by gogosoon user.
```shell
sudo chown --reference=test.sh copy.sh
```

From the above screenshot, the first command describes the ownership of test.sh file which is owned by admin user.
The second command describes the ownership of the copy.sh file which is owned by the gogosoon user.
The third command copies the ownership of test.sh to copy.sh file.
The last command again describes the ownership of copy.sh file which is now owned by admin user.
You may wonder: I said earlier that I created multiple files, so how did I change the ownership of all of them at once?
That's a different story, but I'll leave my answer here: I created a script that loops over all the files and changes the ownership by referencing a single master file.
## How to change ownership of multiple files with a single command?
You can do this by passing multiple file names to the chown command with one user name. This sets the ownership of all the given files to that particular user.
Here’s an example where I try to set the ownership of files copy.sh and test.sh to admin user.
sudo chown admin copy.sh test.sh

## How to change the group ownership of a file?
Almost all group-related operations can be achieved with the chgrp command, which is short for "change group". It is very similar to the chown command.
The syntax of the chgrp command is:
chgrp <group> <file>
I have already created a group called admin, and I do not belong to this group. Let's change the group ownership of the test.sh file from gogosoon to the admin group.
sudo chgrp admin test.sh

From the above screenshot, we can see that the group ownership of the test.sh file has been changed from gogosoon to admin. Since I do not belong to this group, I will not have write access to the file.
Let’s verify the same by opening the file in write mode,
nano test.sh

The above screenshot shows (highlighted in red at the bottom) that I do not have write access to the test.sh file, because I do not belong to the admin group.
## How to change the group ownership of a directory?
The same syntax for files is applicable for directories also. Here’s a quick example,
sudo chgrp test group_test/

But remember: the above command changes the group ownership of the directory itself only, not of the files inside it. To recursively change the group ownership of all the files and directories inside that directory as well, we have to add the -R flag.
sudo chgrp -R admin group_test/
Now the group ownership of all the files and directories inside group_test has been changed from gogosoon to admin.

Let's verify the result by trying to write to a file inside the group_test directory as the gogosoon user.



Hurray !!! The ownership has been applied appropriately.
## Conclusion
In this article, you have learned about changing file and folder ownership of users and groups.
Subscribe to my newsletter by entering your email address in the below box to receive more such insightful articles that get delivered straight to your inbox.
Give a ❤️, if you liked and learned something new.
Have a look at my [site](https://5minslearn.gogosoon.com/?ref=DevTo_users_and_groups) which has a consolidated list of all my blogs.
Cheers !!!
| 5minslearn |
1,363,944 | THE GOLDEN RULES OF UI/UX DESIGN TO INCREASE USER ENGAGEMENTS | You must be familiar with Attention-deficit hyperactivity (ADH). Your short attention span causes it... | 0 | 2023-02-13T13:56:27 | https://dev.to/quokkalabs/the-golden-rules-of-uiux-design-to-increase-user-engagements-15f4 | ux, uiweekly, design | You must be familiar with Attention-deficit hyperactivity (ADH). Your short attention span causes it if you often feel restless, agitated, or fidgety when trying to focus on a task. The shrinking attention span phenomenon comes into action after this. Every individual is now overwhelmed & overloaded with information. Various products & services are used to flop over from one to another, with regards to the new products with daily usage yet to allow others any new opportunity.
Be prepared to battle for user attention and engagement to guarantee the success of your **[SaaS development](https://quokkalabs.com/)**. Your marketing strategy should aim to educate and engage users so that using your product becomes a habit. Ultimately, you want users to interact with your service daily, making it an essential part of their lives. Your goal is to make your product sticky: once people start using it, a SaaS company should make the product as sticky as possible, so that users rely on it in their everyday workflows.
Attracting an audience to your product requires well-timed marketing and customer-centric design (UI/UX) tailored to that audience. In this blog, we will discuss efficient practices you can start implementing straight away to increase user engagement.
## Why Is User Engagement Important?
Engaged users help the company survive in a competitive market, increase sales and profits, and ensure growth. In the end, all SaaS companies work for people: users should get the most out of the product or service, be happy, and stay with the business as long as possible, guaranteeing recurring revenue.
### A successful user engagement strategy helps the business to:
- Build user loyalty, which leads to long-lasting relationships between users and the company.
- Increase revenue and, more importantly, make that revenue predictable.
- Stand out from the competition by providing a great user experience and building a positive brand image.
User engagement is sometimes referred to as customer engagement, and the two definitions are close, but there is a slight distinction between these concepts.
## Customer Engagement vs. User Engagement
"**Customer engagement**" typically implies a high-touch relationship model, while "**user engagement**" is the term used by companies operating a low-touch model. The customer engagement approach leans more on in-person product training; the user engagement approach invests more in the in-product onboarding experience.
The high-touch relationship model can be described as a "**one-to-one**" approach: it usually involves regular support from a dedicated customer success manager and is used for high-value customers and complex products. The low-touch (or so-called tech-touch) model takes a "**one-to-many**" customer interaction approach and focuses on digital engagement for free or lower-contract-value customers.
**We can say that any customer engagement model mainly covers the following:**
- High-touch relationship model
- Greater contract value per account
- More complex product
- Paid service subscription
**And the user engagement model covers the following:**
- Low-touch relationship model
- Lower contract value per account
- Less complex product
- Free product usage
The engagement techniques we will examine in this blog apply to both customer and user engagement models. However, user engagement can be viewed as a fundamental part of customer engagement and forms its solid foundation. But how do you engage your users?
## How To Engage With Users?
Various approaches that we can use to increase user engagement are as follows:
### Personalized Approach
We used to understand personalization as just adding the user's name to an email. Beyond that kind of dynamic content, another type of personalization does much more for user engagement: event-driven automation, or personalized onboarding. The best apps use this strategy to create a personalized experience inside the application and build a unique customer journey. For instance, **Duolingo**, a language-learning app, guides new users through onboarding personalized around different jobs-to-be-done, making the app a good match for many kinds of users. The user's choices at each stage shape their in-app journey.
### Know Your Users
To refine your customer engagement strategy, you need to figure out who your users are, what needs they want to satisfy, and how frequently they interact with your product.
**A few kinds of users commonly use SaaS products:**
- Users who use your product regularly. They are the most engaged, loyal, and active users. Treat them accordingly: don't promote features they already use or things they've already bought.
- Users who haven't engaged with your product for some time. These users should be handled carefully. Focus on how your product can help them, what value you can offer, and why they should give you another chance.
You may also have both free users and paying users, and the two segments definitely call for different approaches. The more thorough a user analysis you undertake, the better you can refine your marketing tactics to increase user engagement.
### Making Back-End UI and Front-End UI
How do you feel when you land on a poorly maintained, unintuitive site? Or one where you can't figure out where to navigate next? You leave and end up on a different platform. UI is largely about streamlining complex workflows; it pulls the strings behind UX's story and your back-end technology.
UI design helps users grasp a brand, and you need consumer research to spot the current gaps in the system. Apps, web pages, and self-checkout pages suffer the most from a bad UI. On top of that, a bad UI hands your audience to your competitors and can leave a negative brand image.
**Read More:**
{% embed https://quokkalabs.com/blog/the-impact-of-visual-design-on-overall-user-experience/ %}
### Catchy Messages
You are probably already sending your users invite messages, newsletters, and thank-you notes. Even so, you can send catchier messages triggered by automatically detected events. Triggered messages create a more personalized experience depending on the user's actions on your site or inside your app.
Before diving into the UX design process, dig into your data. Thorough market research helps you make sense of user needs, wants, and preferences, all of which differ from person to person along geographic, demographic, and psychographic lines.
In short, you need to understand what your users expect from you and why they visited your site. That helps you make the customer journey smooth and straightforward, and it also helps you design your site to be easily accessible to everyone.
### Quality UI
You could design the most advanced software known to man, but if the user flow is messed up, you won't see any adoption, and software without adoption is pointless. UI and UX both feed off simplicity; usability is their whole reason for being, or it should be.
The best thing you can do is break content into smaller parts and make it more digestible. Saying the most with the fewest words is an art every organization should master: being concise drives conversions.
So you've rolled out your new user experience, but how do you know it's working? You check effectiveness through experiments and analysis, starting with the bounce rate: the share of visitors who leave your site after viewing only a single page. Bounce rates play a considerable part in how your site ranks.
At the same time, if your page is loaded with heavy data and images, load times balloon and the site takes ages to appear. If you provide a seamless digital experience, you can be confident your page will rank high, boosting visibility and traffic. A UI designer's primary goal is to make engaging software for every platform, including web/mobile applications and web pages.
**Read More:**
{% embed https://quokkalabs.com/blog/web-designer-learn-the-9-skills-you-need/ %}
### Enhance Usability
There are plenty of applications on the app store, and the market you are targeting has thousands of choices at its fingertips. We're operating in a highly saturated environment, so adoption comes down to how effective and responsive your page or application is.
**Usability helps you achieve a positive brand image. Here are a few tips to improve the visibility and ease of use of your page:**
- **Clear Representation:** If users can't find the information they need, there's a high chance they'll get frustrated. Use short messages, images, or labels to catch their eye and make the hunt for information less burdensome.
- **Simple & Seamless Navigation:** Navigation is how your site visitors move from one screen to the next, and it should feel effortless.
- **No Compromise With Interface Intuitiveness:** Intuitiveness is about creating a logical sequence for visitors to follow.
- **Be Consistent:** Maintain a high degree of consistency throughout your site. Check the layout of tabs and the color scheme across pages.
- **Categorize Related Messages Together:** Group corresponding information under one label to simplify the user flow.
- **Keep Interruptions And Pop-ins Negligible:** Don't cram lots of elements onto your page. Trust me, this distracts your audience from the central message you want to convey.
- **Attractive Presentation:** Invest time in designing your page until it's close to perfect. Make your site creative enough to separate it from the pack, and make sure visitors find the right content in the right place.
- **Quality Content Creation:** Create content that matches user needs and current trends. Put in the right content to impress your audience, and use graphics to connect with your users.
- **Be Unique:** Be unique in both content and design. Colors play a significant part in how appealing your page is; you can combine palettes to great effect, but it all boils down to a suitable color composition that makes your site visually stimulating.
- **Be Sure About Results:** What happens if visitors struggle to make sense of the graphics you've used? Be sure about the outcomes, and be ready to update as requirements change.
That's it! The steps above will lead you toward better user engagement. Following these tips leads to greater sustainability: you won't have to constantly refresh, retool, and restart. Superior UX and UI begin with finding a strong web or mobile development partner. The better the code, the better the adoption and user engagement.
For any queries, questions, or suggestions, drop a comment below. Thanks!
**Read More**
{% embed https://quokkalabs.com/blog/difference-between-ux-and-ui-design-a-learners-guide-2021/ %}
{% embed https://quokkalabs.com/blog/ios-vs-android-apps-ui-difference/ %}
| mayankranjan |
1,365,274 | Python OOP: Harnessing the Power of Classes and Objects | Python OOP, or Object-Oriented Programming, is a programming paradigm that emphasizes the use of... | 0 | 2023-02-14T14:14:51 | https://dev.to/ml_82/python-oop-harnessing-the-power-of-classes-and-objects-lce | programming, webdev, javascript, tutorial |

Python OOP, or Object-Oriented Programming, is a programming paradigm that emphasizes the use of objects to represent and interact with data and functionality. In Python, objects are created by defining classes, which are essentially blueprints for creating objects.
Classes define the attributes and methods of objects. Attributes are the characteristics or data that an object has, while methods are the functions or operations that an object can perform. When you create an object from a class, the object inherits the attributes and methods of the class.
One of the main benefits of using OOP in Python is that it allows for code reusability and modularity. By encapsulating data and functionality within objects, you can easily reuse and modify them without affecting the rest of the code.
In Python, you can define classes using the class keyword, followed by the name of the class and a colon. The body of the class contains the attributes and methods of the class. For example:
```python
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def greet(self):
        print(f"Hello, my name is {self.name} and I am {self.age} years old.")
```
In this example, we've defined a Person class with two attributes (name and age) and one method (greet). The `__init__` method is a special method that is called when an object is created from the class. It initializes the attributes of the object with the values passed as arguments.
To create an object from the Person class, you can call the class as if it were a function:
```python
person1 = Person("Alice", 25)
```
This creates a Person object with the name "Alice" and age 25. To access the attributes and methods of the object, you can use dot notation:
```python
print(person1.name) # "Alice"
person1.greet() # "Hello, my name is Alice and I am 25 years old."
```
For more detail and a brief explanation of Python OOP (Object-Oriented Programming), check our original article on Python OOP.
## Python Classes and Objects
In Python, a class is a blueprint or a template for creating objects. Objects are instances of a class and represent a specific entity that has its own properties (attributes) and methods. Here is an example of a class definition in Python:
```python
class Car:
    def __init__(self, make, model, year):
        self.make = make
        self.model = model
        self.year = year

    def start(self):
        print(f"The {self.year} {self.make} {self.model} is starting.")
```
In this example, the Car class has three attributes (make, model, and year) and one method (start). The `__init__` method is a special method that gets called when an object of the class is created. It sets the initial values of the object's attributes.
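To round out the example, here is how a Car object could be created and used; the snippet repeats the class definition so it runs on its own, and the make, model, and year values are just placeholders:

```python
class Car:
    def __init__(self, make, model, year):
        self.make = make
        self.model = model
        self.year = year

    def start(self):
        print(f"The {self.year} {self.make} {self.model} is starting.")

# Create an instance of the Car class and call its method:
car1 = Car("Toyota", "Corolla", 2020)
car1.start()  # "The 2020 Toyota Corolla is starting."
```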
Overall, Python classes and objects provide a powerful way to encapsulate data and behavior into reusable and modular entities. By creating classes and objects, you can write code that is easier to read, maintain, and modify. For more detail and a brief explanation check our original article on- [Python Classes and Objects](https://ml-concepts.com/wiki/Python_Classes_and_Objects) | ml_82 |
1,365,277 | Why use the Next.js Image component, and the CLS and LCP concepts | The HTML IMG tag is an old acquaintance, a senior citizen, but what is this... | 0 | 2023-02-14T19:46:36 | https://dev.to/andpeicunha/pra-que-serve-a-tag-no-nextjs-e-o-conceito-cls-c3e | nextjs, react, html, tag | The HTML IMG tag is an old acquaintance, a senior citizen, but what is this `<Image>` thing from Next.js?
Believe me, my friend: if you work on the front end you should care about this now, because your SEO is tied to this tag.
---
#What is it for, and why use it?
> _Let's understand why Vercel created this extended HTML tag and what the advantages of using it in your projects are._
First, you need to understand the main metrics Google uses to analyze and rank your site. This was announced in July 2021, when [Google made its core metrics clear](https://web.dev/vitals/).

> _The `<Image>` tag is directly related to the CLS and LCP metrics, but what are they?_
**LCP** _(Largest Contentful Paint)_ is basically the page's loading time: how long it takes for all the elements to be on screen so the user can read and view them.
**CLS** _(Cumulative Layout Shift)_ is the metric that identifies **abrupt layout shifts** on the site while it loads.
You have probably lived through this experience. For example: you open a site, a certain item loads on the page, but when you go to click it, another element suddenly appears and shifts the layout, pushing the item you wanted down and making you click something else.
This hurts the user experience, which is why Google measures it and uses it for ranking.
---
Well, now that we understand what CLS and LCP are, we can get back to the `<Image>` tag itself and see how it works and how it helps your SEO.
This tag is built on 3 principles:
### 1. Optimization
Next.js uses _lazy loading_, fetching images only when they are visible on screen. It also generates several optimized versions of an image, automatically choosing the best one to display based on the **device size** and **screen resolution**.
### 2. Stability
It prevents automatic layout shifts while images are loading, which is the key concept behind a good CLS score.
### 3. Faster Loads
It improves the user experience by serving images faster, reducing the amount of data transferred and improving loading speed. This directly affects LCP.
---
#How to use it in your projects
First, import the Image component:
```
import Image from 'next/image';
```
This can be done in two ways. In the example below, I imported the image and reference it inside the `<Image>` component:
```
import Pexels1 from "../public/pexels-1.jpg";
```
And wherever I would use the `<img>` tag, I replace it with `<Image>`:
```
<Image
width={600}
height={400}
src={Pexels1} //calling the imported image
alt="image description - also important for SEO"
className="rounded-lg"
/>
```
That simple!
You can also put the image path directly in the component, as in this example:
```
<Image
width={600}
height={400}
src="../public/pexels-1.jpg" // referencing the image path directly
alt="image description - also important for SEO"
className="rounded-lg"
/>
```
But remember, you need to provide the **width and height** props: Next.js uses them to calculate the image size for each screen and to create the optimized versions of the image.
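As a side note, when `src` points to an image on an external host, Next.js also requires that host to be allow-listed in `next.config.js`. A minimal sketch, assuming Next.js 12.3 or newer (the hostname below is just an example):

```javascript
// next.config.js
module.exports = {
  images: {
    remotePatterns: [
      // allow the Image component to optimize images served from this host
      { protocol: 'https', hostname: 'images.pexels.com' },
    ],
  },
};
```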
#Comparison
Now for the comparison, and this is the fun part.
On the same page, I loaded the same file through the `<Image>` component and through the `<img>` tag.
This first image is the jpg loaded with the `<img>` tag. It weighs 2.7 MB and took 45ms to load, but remember that I am running on localhost, which is why it was still fast.

---
Now look at the same image loaded through the `<Image>` component.
Next.js automatically converted it to the WebP format, and the image is now 115 KB and took 1ms to load.

<!--
---
**_Important_**
When you use external images, Next.js has no access to the image before render, for example when we are consuming an API. In this case **Optimization** and **Fast Loads** may be affected, but at least **Visual Stability** will be preserved, which is already an important ranking factor, so pay attention to _Width_ and _Height_ ✌
-->
| andpeicunha |
1,365,301 | Software made by the nation's own talent | Background: Technology is now increasingly advanced and the use of software keeps growing, but... | 0 | 2023-02-14T14:21:27 | https://dev.to/verlinof/software-buatan-anak-bangsa-3l93 | programming, magangsrdkomatik2023 |

**Background**
Technology has now become increasingly advanced and the use of software keeps growing, yet we often use software made by other countries, and Indonesian-built software is still rarely used at scale. Because of that, perhaps we can start advancing our country in technology and software development so that it can compete with the others.
**Goals and Benefits**
The advance of today's technology brings a breath of fresh air to many parties, such as app and web builders, programmers, and existing startups. We too can build applications that many people use, and a large user base will also help us financially and open many new jobs in our country. Our software might even compete with the software giants that became known and widely used first. The more advanced a country's technology sector is, the more it indirectly advances other sectors, because nowadays almost every job needs technology behind it to get the work done. The benefits of a growing software industry are plentiful, such as advancing the economy, opening new job opportunities, and much more.

**Limitations**
There are no real limits to developing our own software, because in truth the ideas and solutions come from ourselves: deciding what software we will build that can answer many people's problems and help many people. | verlinof |
1,365,393 | WhatsApp Bot Reminder: a WhatsApp bot integrated with Google Calendar. | WhatsApp's new feature that lets us send messages to ourselves is widely used... | 0 | 2023-02-15T05:29:38 | https://dev.to/heavenaulianisa/whatsapp-bot-reminder-whatsapp-bot-yang-terintegrasi-dengan-google-calendar-ckd | magangsrdkomatik2023, mobile, idea, programming | WhatsApp's new feature that lets us send messages to ourselves is widely used for writing to-do lists of activities we want to do, noting the deadlines of tasks, and even jotting down friends' birthdays. WhatsApp is the choice here because users open this app all the time. By keeping notes there, our schedule of activities becomes more structured and easier to manage, so we know which priorities must be carried out first.
However, with notes written this way, it is quite hard to remember the deadline details of each activity, because there is no reminder feature there. The manual workaround is to open and re-read the messages we wrote earlier, and of course this is inefficient and time-consuming.
With technology becoming ever more sophisticated, we now know what is called software development. From the problem above and by applying software development, I had an idea to build a "WhatsApp Bot Reminder". Here is a brief overview of the "WhatsApp Bot Reminder":

The way the "WhatsApp Bot Reminder" works is that the user writes the name of an activity and its date, then sends it to the WhatsApp chatbot. The chatbot receives the message, processes it, interprets it, and extracts its data. Because the chatbot is integrated with Google Calendar, the extracted data is pushed to Google Calendar, so when the date and time we wrote down arrive, Google Calendar will alert us through a notification on our phone.
For example, on the chatbot we built, we can write a message following a predefined format. Say the format is:
/set event title | event date and time
Example:
/set Deadline Laprak Pemrograman | 16/2/2023 23:59
So, on February 16, 2023 at 23:59, the user will get a notification from Google Calendar.
With this reminder from Google Calendar, the chance of missing an activity or being late finishing something becomes much smaller, and the to-do list can play out the way the user expects.
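The message-parsing step described above can be sketched in Python; this is only a minimal illustration, where the function name is hypothetical and the `/set` format mirrors the example (the actual Google Calendar call is omitted):

```python
from datetime import datetime

def parse_set_command(message: str):
    """Parse '/set <title> | <day/month/year hour:minute>' into (title, datetime)."""
    if not message.startswith("/set "):
        raise ValueError("expected a message starting with '/set '")
    title, sep, when = message[len("/set "):].partition("|")
    if not sep:
        raise ValueError("expected 'title | date time'")
    # e.g. "16/2/2023 23:59" -> datetime(2023, 2, 16, 23, 59)
    start = datetime.strptime(when.strip(), "%d/%m/%Y %H:%M")
    return title.strip(), start

title, start = parse_set_command("/set Deadline Laprak Pemrograman | 16/2/2023 23:59")
```

The `(title, start)` pair is what the bot would then hand to the Google Calendar integration.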
However, this WhatsApp Bot Reminder has limitations, including:
1. The phone used must always be connected to the internet.
2. The software is limited to WhatsApp and Google Calendar only. It does not yet support other messaging apps such as Telegram and LINE, nor other calendars such as Apple Calendar and Microsoft Calendar.
| heavenaulianisa |
1,365,678 | Interop 2023, Chrome 110, Lighthouse 10, Edge 110, Polypane 13, Safari TP 163, and more | Front End News #090 | NOTE: This is issue #090 of my newsletter, which went live on Monday, February 13. If you find this... | 9,151 | 2023-02-14T18:38:31 | https://frontendnexus.com/news/090/ | newsletter, frontendnews, news | > **NOTE:** This is issue #090 of my newsletter, which went live on Monday, February 13. If you find this information useful and interesting and you want to receive future issues as they are published, ahead of everyone else, I invite you to join the subscriber list at [frontendnexus.com](https://frontendnexus.com/).
***
The web industry is getting ready for Interop 2023. In browser news, we go over the changes that took place in January. Chrome 110 has been released and you can test your website performance using Lighthouse 10. We round up this section with Edge 110, Polypane 13, and Safari Technology Preview 163.
On the release radar, we have Eleventy v2, Redwood JS 4, several Electron and Node updates, and more. Next, there are a dozen new resources to make your coding easier. Last, but not least, I am (re)introducing my next project, the Developer Creator Club.
***
## Interop 2023
Interop 2022 made good progress in improving the web platform. The main players have pledged to continue the effort in 2023. This time there will be no less than 26 focus areas. They cover areas such as CSS (Container Queries, CSS `:has()`, Color spaces and functions, masking, and more), Web Apps (ergonomics, Offscreen Canvas, Web Codecs, and more), Compatibility, and investigations into mobile platforms and accessibility APIs.
- [Interop 2023 Dashboard](https://wpt.fyi/interop-2023)
- Chrome: [Interop 2023: continuing to improve the web for developers](https://web.dev/interop-2023/)
- Microsoft: [Microsoft Edge and Interop 2023](https://blogs.windows.com/msedgedev/2023/02/01/microsoft-edge-and-interop-2023/)
- Mozilla: [Announcing Interop 2023](https://hacks.mozilla.org/2023/02/announcing-interop-2023/)
- WebKit: [Pushing Interop Forward in 2023](https://webkit.org/blog/13706/interop-2023/)
- Bocoup: [Interop 2023](https://bocoup.com/blog/interop-2023)
- Igalia: [Igalia and Interop 2023](https://www.igalia.com/news/2023/interop2023.html)
***
## Browser news
Rachel Andrew continues her monthly roundup of changes to the web platform. Firefox 109, Chrome 109, Edge 109, and Safari 16.3 rolled out to users. They brought new features, such as support for MathML, the `scrollend` event, or the Content-Security-Policy (CSP) `prefetch-src` directive.
- [New to the web platform in January](https://web.dev/web-platform-01-2023/)
### Chrome
Chrome 110 was released on February 1. This update implements the `:picture-in-picture` CSS pseudo-class, the `launch_handler` manifest member for web apps, `credentialless` iframes, and more.
- [New in Chrome 110](https://developer.chrome.com/blog/new-in-chrome-110/)
- [What's New In DevTools (Chrome 110)](https://developer.chrome.com/blog/new-in-devtools-110/)
Another important release from the Chrome team is Lighthouse 10. This update is already available via PageSpeed Insights or the command line through npm. You can already use it in Chrome Canary and will arrive in the stable channel in the Chrome 112 release.
- [What's new in Lighthouse 10](https://developer.chrome.com/blog/lighthouse-10-0/)
### Edge
Version 110 of Microsoft Edge brings new Immersive Reader policies, enables synchronization for users signed in using Azure Active Directory, and a new way to send files and notes across all your devices with Drop.
- [Edge 110 Release Notes](https://learn.microsoft.com/en-us/deployedge/microsoft-edge-relnote-stable-channel#version-1100158741-february-9-2023)
### Polypane 13
Polypane, the browser for ambitious web developers, is back in browser news with the update to version 13. The release runs on Chromium 110, supports regular Chrome extensions, and brings a host of features, fixes, and improvements. There are already two patches out (13.0.1 and 13.0.2), which makes this tool even more versatile.
- [Polypane 13: CSS Nesting, extension support in beta, search by selector and Chromium 110](https://polypane.app/blog/polypane-13-css-nesting-extension-support-in-beta-search-by-selector-and-chromium-110/)
### WebKit
Safari users can now try the new features included in Safari Technology Preview 163. This release enables Masonry layout by default and implements a huge list of fixes and improvements across most facets of the web platform.
- [Release Notes for Safari Technology Preview 163](https://webkit.org/blog/13839/release-notes-for-safari-technology-preview-163/)
***
## Software updates and releases
- **[Docusaurus 2.3.0](https://github.com/facebook/docusaurus/releases/tag/v2.3.0)** - Easy to maintain open source documentation websites
- **[Electron v22.2.0](https://github.com/electron/electron/releases/tag/v22.2.0)**, **[Electron v23.0.0](https://github.com/electron/electron/releases/tag/v23.0.0)**, **[Electron v24.0.0-alpha](https://github.com/electron/electron/releases/tag/v24.0.0-alpha.1)** - Build cross-platform desktop apps with JavaScript, HTML, and CSS
- **[Eleventy v2.0.0](https://www.11ty.dev/blog/eleventy-v2/)** - a simpler static site generator
- **[Node v19.6.0 (Current)](https://nodejs.org/en/blog/release/v19.6.0/)**, **[Node v18.14.0 (LTS)](https://nodejs.org/en/blog/release/v18.14.0/)**, **[February 2023 Security Releases](https://nodejs.org/en/blog/vulnerability/february-2023-security-releases/)** - an asynchronous event-driven JavaScript runtime
- **[Redwood v4.0.0](https://community.redwoodjs.com/t/redwood-v4-0-0-is-now-available/4538)** - The App Framework for Startups
- **[TestCafe v2.3.0](https://github.com/DevExpress/testcafe/releases/tag/v2.3.0)** - A Node.js tool to automate end-to-end web testing
***
## Front End Resources
- **[All Things AI](https://allthingsai.com/)** - A collection of Artificial Intelligence tools & services
- **[`clamp()` Calculator](https://chrisburnell.com/clamp-calculator/)** - A tool for calculating viewport-based clamped values
- **[CodeImage](https://codeimage.dev/)** - A tool to beautify your code screenshots
- **[Colord](https://colord.omgovich.ru/)** - A tool for high-performance color manipulations and conversions
- **[ColorMagic](https://colormagic.app/)** - A color palette generator with AI
- **[Easing Gradients](https://larsenwork.com/easing-gradients/#editor)** - Supercharge your gradients with non-linear color mix and custom color spaces.
- **[Gradicol](https://gradicol.vercel.app/)** - Handpicked collection of linear gradients with premium website templates.
- **[IMG Quest](https://img.quest/)** - An open-source API to generate Open Graph images
- **[Shoelace](https://shoelace.style/)** - A forward-thinking library of web components.
- **[SVG Gobbler](https://www.svggobbler.com/)** - Download svg icons, logos, and vector content from any site
- **[Theme Toggles](https://toggles.dev/)** - a collection of animated toggles for switching between light and dark modes
- **[Website Metadata](https://websitemetadata.com/meta-tags-generator)** - Generate a complete list of HTML meta tags for your website.
There's more where that came from. Explore the rest of the [Front End Resource collection](https://frontendnexus.com/resources/).
***
## Presenting (again) the Developer Creator Club
Back in issue #088, I mentioned that I'm changing the frequency of this newsletter to make more time for another project. It is not a complete surprise, as I presented it for the first time in issue #53, almost a year ago. From now on, however, I will be taking a more active approach.
[The Developer Creator Club](https://creatorclub.dev/) is a digital garden dedicated to helping developers create content and products. Currently, it includes a curated set of articles focused on this topic, a collection of products I've tried myself (and that I can recommend), and a section dedicated to creator stories. This is where I'm interviewing other developers and getting their stories out there to inspire other people.
Therefore, if you are willing to share some of your experience, or if you have a valuable story to tell, I would love to have a chat with you.
- [The Developer Creator Club](https://creatorclub.dev/)
***
## Wrapping things up
Ukraine is still suffering from the Russian invasion - if you are looking for ways to help, please check Smashing Magazine's article [We All Are Ukraine 🇺🇦](https://www.smashingmagazine.com/2022/02/we-all-are-ukraine/) or get in touch with your trusted charity.
If you enjoyed this newsletter, there are a couple of ways to support it:
- 📢 [share the link to this issue on social media](https://frontendnexus.com/news/090/)
- ❤️ [follow this newsletter on Twitter](https://twitter.com/frontendnexus)
- ☕ [buy me a coffee](https://ko-fi.com/adriansandu)
Each of these helps me out, and I would appreciate your consideration.
That's all I have for this issue. Have a great and productive week, keep yourselves safe, spend as much time as possible with your loved ones, and I will see you again next time! | adriansandu |
1,365,736 | Networking on AWS - Part 3 | In the previous parts, we discussed the basics of networking on AWS and how to set up and configure a... | 0 | 2023-02-14T19:50:50 | https://dev.to/vanhoangkha_2k/networking-on-aws-part-3-1a4b | In the previous parts, we discussed the basics of networking on AWS and how to set up and configure a VPC on AWS. In this part, we will take a closer look at advanced networking concepts on AWS.
VPC Peering
VPC peering allows you to connect two VPCs, allowing instances in one VPC to communicate with instances in the other VPC. This can be useful if you have multiple VPCs that need to communicate with each other but are not part of the same network.
VPC peering is secure, and the traffic between the two VPCs is encrypted. You can also use VPC peering to connect VPCs across different regions.
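One practical prerequisite worth checking before you peer two VPCs is that their IPv4 CIDR blocks must not overlap, or the peering connection cannot be established. As a quick sanity check, here is a small Python sketch using the standard `ipaddress` module; the CIDR values are made-up examples:

```python
import ipaddress

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """VPC peering requires the two VPCs' IPv4 CIDR blocks not to overlap."""
    return not ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True: distinct ranges, safe to peer
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))  # False: the /24 sits inside the /16
```

If the check fails, you would need to re-address one of the VPCs before peering can carry traffic.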
Transit Gateway
Transit Gateway is a service that allows you to connect multiple VPCs and on-premises networks. It acts as a hub that connects all your networks, allowing you to manage traffic between them centrally.
Transit Gateway is highly available, and you can scale it to support up to 5,000 VPCs. You can also use it to connect VPCs across different regions.
VPC Endpoints
VPC endpoints allow you to connect to AWS services without the need for an internet gateway or NAT gateway. This improves security by reducing the number of entry points to your VPC.
VPC endpoints are available for many AWS services, including S3, DynamoDB, and Lambda. When you create a VPC endpoint, you can specify which service it connects to and which VPCs can access it.
Elastic Load Balancing
Elastic Load Balancing is a service that distributes incoming traffic across multiple instances in your VPC. This improves performance and availability by balancing the traffic load across multiple instances.
Elastic Load Balancing is available in three flavors: Application Load Balancer, Network Load Balancer, and Classic Load Balancer. Each load balancer type is optimized for specific use cases.
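To build intuition for what a load balancer does, here is a minimal Python sketch of round-robin distribution, the simplest possible strategy; the instance names are placeholders, not real AWS resources:

```python
from itertools import cycle

instances = ["i-app-1", "i-app-2", "i-app-3"]  # placeholder instance names
rr = cycle(instances)  # round-robin: hand out targets in a fixed rotation

for n in range(6):
    print(f"req-{n} -> {next(rr)}")
```

Real load balancers layer health checks, connection draining, and (for the Application Load Balancer) content-based routing on top of this basic idea.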
Conclusion
In conclusion, AWS provides advanced networking capabilities that allow you to create highly available and secure architectures for your applications. VPC peering, Transit Gateway, VPC endpoints, and Elastic Load Balancing are just a few of the many features available on AWS.
By leveraging these features, you can create complex and highly available network architectures that meet the needs of your applications. It is essential to have a solid understanding of the basics of networking on AWS and to keep up-to-date with the latest features and best practices to ensure that your deployments are secure, reliable, and performant. | vanhoangkha_2k | |
1,365,743 | SceneDelegate | In iOS, the SceneDelegate is a class that is responsible for managing the scenes in your app. A scene... | 0 | 2023-02-14T20:03:19 | https://dev.to/arkilis/scenedelegate-6h4 | ios, swift, swiftui | In iOS, the SceneDelegate is a class that is responsible for managing the scenes in your app. A scene is a discrete unit of your app's user interface, such as a window or tab. The SceneDelegate class provides methods that are called at various points in the lifecycle of a scene, such as when a scene is about to be displayed, or when a scene is about to be destroyed.
The SceneDelegate class is typically used to manage the state of a scene and to respond to events that affect the scene. For example, the SceneDelegate class might be used to manage the size and position of a window, or to respond to changes in the device's orientation.
In general, the SceneDelegate class is an optional part of the iOS app architecture, and you do not need to use it in your app unless you need to manage multiple scenes. If you only have one scene in your app, you can use the AppDelegate class to manage the lifecycle of your app instead.
Remove the SceneDelegate
There might be reasons you don't want the SceneDelegate class. To remove it from your iOS app, you can follow these steps:
1. Open your project in Xcode, navigate to the SceneDelegate.swift file in the Project Navigator, and delete it.
2. In the Project Navigator, navigate to the AppDelegate.swift file.
3. In the AppDelegate class, remove the two scene-session lifecycle methods:
```swift
func application(_ application: UIApplication, configurationForConnecting connectingSceneSession: UISceneSession, options: UIScene.ConnectionOptions) -> UISceneConfiguration
func application(_ application: UIApplication, didDiscardSceneSessions sceneSessions: Set<UISceneSession>)
```
4. In the AppDelegate class, add a `var window: UIWindow?` property so the app delegate owns the app's window again. (The scene lifecycle methods such as `scene(_:willConnectTo:options:)` and `sceneDidBecomeActive(_:)` disappear along with the deleted SceneDelegate file.)
5. In the Info.plist file, remove the `UIApplicationSceneManifest` ("Application Scene Manifest") key and its associated dictionary.
After completing these steps, the SceneDelegate class will be removed from your app, and the AppDelegate class will be the only class responsible for managing the lifecycle of your app. However, note that removing the SceneDelegate class will also remove the ability to manage multiple scenes in your app, so you should only do this if you do not need this functionality.
For more details: https://needone.app/scenedelegate-in-swift/
| arkilis |
1,365,748 | ChatGPT $18 trial credit | ChatGPT offers $18 free trial credit, which is about to be exhausted soon. !ask Write me a HTML page... | 0 | 2023-02-14T20:13:04 | https://dev.to/doctorlai/chatgpt-18-trial-credit-23dk | chatgpt, blog | [ChatGPT](https://helloacm.com/how-chatgpt-impacts-ugc-user-generate-content/) offers an $18 free trial credit, and mine is about to be exhausted soon.
!ask Write me a HTML page that shows a heart in animation using Javascript to my wife as a Valentine's gift. Could you make sure it can be viewed nicely on the mobile?
```html
<!DOCTYPE html>
<html>
<head>
<title>Valentine's Gift for My Wife</title>
<style>
body {
background-color: #FFC0CB;
}
#heart {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
width: 200px;
height: 200px;
background-color: #FF1493;
border-radius: 50%;
animation: heart-beat 1s infinite;
}
@keyframes heart-beat {
0% {
transform: scale(1);
}
50% {
transform: scale(1.2);
}
100% {
transform: scale(1);
}
}
@media (max-width: 600px) {
#heart {
width: 100px;
height: 100px;
}
}
</style>
</head>
<body>
<div id="heart"></div>
</body>
</html>
```
Well, not a heart, [but a circle](https://zhihua-lai.com/v2023.html)
ChatGPT Plus is not out in the UK yet - I am on the waiting list, though.

<hr/>
### [Steem](https://steemyy.com) to the Moon!
You can swap the STEEM/SBD to USDT (TRC-20) via [Steem2USDT](https://steemyy.com/steem2usdt.php) | doctorlai |
1,365,807 | Hola mundo! | I'm new here and just want to build community and make friends. | 0 | 2023-02-14T21:06:50 | https://dev.to/jussef/hola-mundo-5482 | I'm new here and just want to build community and make friends. | jussef |
1,365,882 | Day 1.1, Here I Come! | Here's what I learned today: Block off time each week that I can dedicate myself to connecting with... | 0 | 2023-02-14T23:10:12 | https://dev.to/yjj/day-11-here-i-come-4a7d | Here's what I learned today:
- Block off time each week to dedicate myself to connecting with the developer community. Do 1-2 activities during that time.
- Google/search in SNS a topic from the course. Learn more about it.
- Read a blog post.
- Document my learning
| yjj | |
1,366,153 | My First Blog | I'm absolutely excited to be joining this community. Looking forward to learning a lot of things. | 0 | 2023-02-15T05:14:34 | https://dev.to/samaresh96/my-first-blog-29pi | I'm absolutely excited to be joining this community. Looking forward to learning a lot of things. | samaresh96 |
1,366,250 | How To Install Ubuntu Server on Virtual Machine | cout << "Calzy Akmal - 2101569"; First, open up the link down below and select the button to... | 0 | 2023-02-15T07:14:28 | https://dev.to/calzkmal/how-to-install-ubuntu-server-on-virtual-machine-347o | `cout << "Calzy Akmal - 2101569";`
First, open the link below and click the button to download the Ubuntu Server .iso file to your computer.

Once the download is finished, I assume you have already installed VirtualBox (if you haven't, see this guide on how to install it: https://dev.to/calzkmal/how-to-install-ubuntu-os-on-virtual-machine-364i).
Now open VirtualBox to proceed to the next step.

Click the 'New' button to create a new Virtual Machine (VM) on your desktop.

Fill in the required fields to complete the initial setup. Give your VM a name, choose where you want to store the VM on your machine, and select where the .iso file is located. After that, press Next to proceed.

Fill in the 'Username' and 'Password' fields according to your own preferences, and change the 'Hostname' and 'Domain name' fields if you want to. After that, press Next to proceed.

After that, allocate the memory the machine needs to run on your desktop. The recommended amount is around 8192 MB for 'Base Memory' and 4 CPUs for 'Processors'.

Lastly, allocate the amount of storage for the server to operate on the VM. Around 25 GB of storage is enough for the machine to work.

There you go! Once you have finished all the required installation steps, the last thing to do is review the summary of the settings you configured earlier. Press the 'Finish' button if everything looks good.

Congratulations! You have successfully installed Ubuntu Server on your VM. Now fire it up and start using it.
| calzkmal | |
1,366,251 | HTML accessibility: designing web pages that are accessible to people with disabilities. | In the current digital era, it's critical to make sure that everyone, including those with... | 0 | 2023-02-15T06:57:13 | https://dev.to/radnicks44/html-accessibility-designing-web-pages-that-are-accessible-to-people-with-disabilities-2nj8 | webdev, beginners, devops, typescript |
In the current digital era, it's critical to make sure that everyone, including those with disabilities, has access to internet content. We know the process of creating web pages that are accessible to those with impairments using HTML as HTML accessibility. This article will discuss the significance of HTML accessibility, some typical accessibility problems, and the best practices for creating accessible web pages.
**Why is HTML accessibility important?**
There are millions of people with disabilities around the globe, and many of them use assistive technologies to access online material. For instance, a deaf person may depend on closed captions to understand video content, while a blind person may use a screen reader to browse a website. People with disabilities may find it difficult or impossible to access the content of a web page that has not been developed with accessibility in mind.
Additionally, it's crucial to develop accessible web pages for legal as well as ethical reasons. There are regulations requiring websites to be accessible to individuals with impairments in many nations, including the United States. For instance, websites must be accessible to individuals with disabilities under the Americans with Disabilities Act (ADA).
**Common Accessibility Issues**
There are many different kinds of disabilities, and each comes with its own specific accessibility issues. The following are a few of the most prevalent problems that website designers need to be aware of:
1. Visual impairments: People who are blind or have low vision may have trouble reading small print, distinguishing colors, or navigating complicated layouts.
2. Hearing impairments: People with hearing impairments may rely on closed captions or transcripts to understand audio content.
3. Motor impairments: People with motor impairments may find it challenging to access a website using a mouse or keyboard.
4. Cognitive impairments: People with cognitive disabilities may have trouble navigating complex layouts or understanding complex language.
**How to Build a Website That Is Accessible (A Complete Guide)**
Creating accessible websites can be challenging, but there are numerous best practices that web designers can follow to make sure their material is accessible to people with disabilities. We list some of the most important ones below:
1. Ensure That Your Webpage Allows Keyboard Navigation
Keyboard navigation is one of the cornerstones of an accessible website: users must be able to navigate your site without a mouse, because many assistive technologies rely solely on the keyboard.
As a result, you should make sure users can browse and navigate your site using just a keyboard. This includes visiting pages, selecting links, and so on. You can check this yourself by visiting the front end of your website and moving through the page with the Tab key alone. If you can't reach every interactive element this way, you probably have some work to do.
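A related pitfall you can catch mechanically is a positive `tabindex` value, which overrides the natural tab order and tends to confuse keyboard users. Here is a rough Python sketch of such a check; it is a simplification, and a full audit tool such as axe or Lighthouse covers far more:

```python
from html.parser import HTMLParser

class TabindexAudit(HTMLParser):
    """Flag elements with a positive tabindex, which hijack the tab order."""
    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            # Only numeric, strictly positive tabindex values are flagged.
            if name == "tabindex" and value and value.lstrip("-").isdigit() and int(value) > 0:
                self.warnings.append(f"<{tag} tabindex={value}>")

audit = TabindexAudit()
audit.feed('<a href="/" tabindex="3">Home</a><button>OK</button>')
print(audit.warnings)  # ['<a tabindex=3>']
```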
2. Use Colors with High Contrast
Low color contrast can make text difficult to read for some people. Because of this, we advise selecting colors with a high contrast ratio, such as black and white or black and yellow. The color contrast on your website should allow every item on the page to be distinguished from the others. Text, for instance, needs to stand out rather than blend into the background. There are a few online resources you can use to improve visual accessibility; for example, Contrast Checker can be helpful when selecting the color scheme for your website.
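If you want to check a pairing yourself, the contrast ratio has a precise definition in WCAG 2.x based on sRGB relative luminance. A small Python sketch of that formula; the sample colors are arbitrary:

```python
def contrast_ratio(rgb1, rgb2):
    """WCAG 2.x contrast ratio between two sRGB colors, from 1:1 up to 21:1."""
    def linear(c):
        # Linearize one 0-255 sRGB channel.
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def luminance(rgb):
        r, g, b = (linear(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    hi, lo = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0, the maximum
print(round(contrast_ratio((118, 118, 118), (255, 255, 255)), 2))  # 4.54, just past the AA bar
```

WCAG AA asks for at least 4.5:1 for normal-size text (3:1 for large text).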
3. Offer alternative text for images
Make sure to include alternative text (alt text) that describes any photos you use on your website. By doing this, people who use assistive technology such as screen readers and cannot see the image will still be able to understand the information on the page.
You can add alternative text to images in WordPress through your media library. Including alt text for all images on a web page makes the content more accessible to people with visual impairments.
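A check like "every image carries an alt attribute" is easy to automate. A minimal Python sketch using the standard-library HTML parser; a real audit would also judge whether the alt text is meaningful:

```python
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    """List <img> tags that carry no alt attribute at all.
    Purely decorative images should still carry an empty alt=""."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "<no src>"))

audit = AltTextAudit()
audit.feed('<img src="logo.png" alt="Company logo"><img src="chart.png">')
print(audit.missing)  # ['chart.png']
```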
4. Organize and structure content using heading hierarchies.
You can make your information easier to read by dividing it into smaller chunks. Structuring the information on your pages with headings and lists therefore helps improve web accessibility. Clear headings make your pages easier for screen readers to read and understand. They also make it easier for users of assistive technologies to browse your page's contents and help with in-page navigation. WordPress advises following a predetermined heading structure: one H1 per page (usually the title), with H2s and H3s for subsections.
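The "one H1, no skipped levels" rule can also be checked programmatically. A rough Python sketch, assuming plain h1-h6 tags:

```python
from html.parser import HTMLParser

class _Headings(HTMLParser):
    """Collect heading levels in document order."""
    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        if len(tag) == 2 and tag[0] == "h" and tag[1] in "123456":
            self.levels.append(int(tag[1]))

def audit_headings(html):
    """Return outline problems: not exactly one <h1>, or a skipped level."""
    parser = _Headings()
    parser.feed(html)
    problems = []
    if parser.levels.count(1) != 1:
        problems.append("expected exactly one <h1>")
    for prev, cur in zip(parser.levels, parser.levels[1:]):
        if cur > prev + 1:  # e.g. jumping straight from h1 to h4
            problems.append(f"skipped from h{prev} to h{cur}")
    return problems

print(audit_headings("<h1>Title</h1><h2>Intro</h2><h3>Detail</h3>"))  # []
print(audit_headings("<h1>Title</h1><h4>Detail</h4>"))  # ['skipped from h1 to h4']
```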
5. Add transcripts and captions to videos
You should offer captions or subtitles for any videos you post on your website so that people who are hard of hearing or deaf can still enjoy the information. With closed captions and text transcripts, users can experience your material without relying solely on its audio or visual components.
With WordPress 5.6, you can use the Web Video Text Tracks (WebVTT) format to add captions and subtitles to WordPress videos. Simply add a video block to your page, then choose the Text tracks button from the horizontal navigation menu to access it.
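WebVTT itself is a simple text format: a `WEBVTT` header followed by timed cues. A small Python sketch that renders cues in that shape; the timestamps and caption text here are invented:

```python
def to_webvtt(cues):
    """Render (start_seconds, end_seconds, text) tuples as a minimal WebVTT track."""
    def ts(seconds):
        # WebVTT timestamps look like HH:MM:SS.mmm
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines += [f"{ts(start)} --> {ts(end)}", text, ""]
    return "\n".join(lines)

print(to_webvtt([(0.0, 2.5, "Hello and welcome!"),
                 (2.5, 5.0, "Today we talk about captions.")]))
```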
**Conclusion**
HTML accessibility is a crucial aspect of website design that ensures online material is accessible to people with disabilities. By following best practices for creating accessible web pages, web designers can help ensure that their content is accessible to everyone, regardless of ability.
| radnicks44 |
1,366,268 | Merge multiple kubeconfig files | We use kubeconfig files to organize information about clusters, users, namespaces, and authentication... | 0 | 2023-02-15T07:24:04 | https://dev.to/akyriako/merge-multiple-kubeconfig-files-20gb | kubernetes, kubectx, kubectl, kubeconfig | We use `kubeconfig` files to organize information about clusters, users, namespaces, and authentication mechanisms. `kubectl` command-line tool itself, uses `kubeconfig` files to source the information it needs in order to connect and communicate with the API server of a cluster.
By default, `kubectl` looks for a file named `config` that lives under the `$HOME/.kube` directory. You can keep multiple cluster entries in that file, or specify additional kubeconfig files by setting the `KUBECONFIG` environment variable or by passing the `--kubeconfig` flag.
When you have multiple `kubeconfig` files and want to merge them into one, instead of juggling separate files and switching among them with the `--kubeconfig` flag, I personally get a headache because I can never remember how to merge those files from the command line. But there is a simpler way, so please don't waste brain cells trying to remember complicated bash commands.
As we mentioned before, the **easy way** is to set the `KUBECONFIG` environment variable. You can specify multiple config files there, separated by the colon symbol (**:**), and `kubectl` will merge them automatically for you.
```
export KUBECONFIG=~/.kube/config:~/.rancher/local:~/.kube/kubeconfig.json
```
> Special treat: You can mix and match YAML and JSON files!
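A detail worth knowing: when the merged files define an entry with the same name, the Kubernetes docs describe a first-file-wins rule, so the file listed earlier in `KUBECONFIG` takes precedence. A toy Python model of that behavior; the context names and cluster values are invented:

```python
def merge_kubeconfigs(*configs):
    """First-file-wins merge of the named lists in kubeconfig files,
    mimicking how kubectl combines the files listed in KUBECONFIG."""
    merged = {"clusters": [], "contexts": [], "users": []}
    for section in merged:
        seen = set()
        for cfg in configs:
            for entry in cfg.get(section, []):
                if entry["name"] not in seen:  # an earlier file already won
                    seen.add(entry["name"])
                    merged[section].append(entry)
    return merged

home = {"contexts": [{"name": "dev", "cluster": "home-dev"}]}
work = {"contexts": [{"name": "dev", "cluster": "work-dev"},
                     {"name": "prod", "cluster": "work-prod"}]}

merged = merge_kubeconfigs(home, work)
print([c["name"] for c in merged["contexts"]])  # ['dev', 'prod']
print(merged["contexts"][0]["cluster"])         # home-dev (the first file won)
```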
If you now want to see the merged configuration that `kubectl` is working with, just issue the command:
```
kubectl config view
```
and if you want to export this configuration for future use as a single file, you can do so by running:
```
kubectl config view --flatten > my-config.yaml
```
and then you can replace your `~/.kube/config` file with the file above for a permanent effect.
Don’t over-complicate stuff, don’t try to memorize stuff.
> Photo by Growtika Developer Marketing Agency on Unsplash
| akyriako |
1,391,021 | 30-Day Plan to build a Side Project -For Beginners | Devs, It's the first Monday of the month 🗓️ If you start today, in the next 4 weeks, you can have a... | 0 | 2023-03-06T21:19:47 | https://dev.to/zubairanwarkhan/30-day-plan-to-build-a-side-project-for-beginners-48o4 | 
Devs, It's the first Monday of the month 🗓️
If you start today, in the next 4 weeks, you can have a working and live side project 🚀
Here is a 30 Day step by step process. 🧵
I have created a detailed Google Sheet of the 30-day plan. Check [Link](https://saaswisdom.com/resources) to access it
Below is a very trimmed-down version of the plan, just to give you a high-level understanding and get you motivated and excited.
You can refer to the Google Sheet mentioned above, which has the detailed plan and tracking (more on that later), but for now, keep reading
Remember you are building in public. Use #buildinpublic in your tweets
Tweet every day and post on LinkedIn about your progress
Keep announcing the wins as well as failures.
Let's begin 🚀
--
Week 1 - Planning 📋
Monday: Brainstorming ideas (Light Activity)
Tuesday: Research ideas
Wednesday: Research existing tools
Thursday: Finalise the idea
Friday: (Get serious) Create product Roadmap & Milestones
Saturday: Continue with the roadmap
Sunday: Create next week's content
Week 2 - Getting Started 🛴
Monday: Take a Break
Tuesday: Setup Dev env, tools & frameworks
Wednesday: Work on the DB & structure
Thursday: Continue with the previous task
Friday: (Get serious) Start with the core logic & functionality
Saturday: Continue with the previous
Sunday: Create next week's content
Week 3 - Build 🛠
Monday: Break
Tuesday: Build core functionalities
Wednesday: Build core functionalities
Thursday: Build core functionalities
Friday: (Get serious) Complete the core functionalities
Saturday: Work on supplementary features
Sunday: Finish supplementary features & content
Week 4 - Ship 🚀
Monday: Take a break
Tuesday: Finish the basic UI and test it
Wednesday: Deploy for beta
Thursday: Allow your connections to test it
Friday: Fix bugs and test
Saturday: Fix bugs and test
Sunday: Release
The next voyage begins once you release your product.
More on that in upcoming posts, but for now...
Enjoy the process as much as you can 🧨
Do whatever, but don't deviate from this plan.
Remember, in 4 weeks you can do something that can have the potential to change your life
Check the link to access the Google Sheet
[Link](https://saaswisdom.com/resources)
| zubairanwarkhan | |
1,366,324 | Installing Ubuntu Server in VirtualBox Manager | For an operating systems course assignment Download the Ubuntu Server ISO file from the official... | 0 | 2023-02-15T09:12:48 | https://dev.to/talim/instalasi-ubuntu-server-di-virtual-box-manager-1fbi | linux, ubuntu, devops, server | ## For an operating systems course assignment
1. Download the Ubuntu Server ISO file from the official [Ubuntu](https://ubuntu.com/download/server) website

2. Download the [Virtual Box Manager](https://www.virtualbox.org/wiki/Downloads) application

3. Open VirtualBox and click "New" to create a new virtual machine.

4. On the "Name and Operating System" screen, enter a name for the virtual machine, select "Linux" as the operating system type, and "Ubuntu (64-bit)" as the version.

5. On the "Base Memory" screen, allocate the amount of RAM you want to give the virtual machine. It is recommended to allocate at least 1 GB of RAM. (The screenshot allocates 2 GB of RAM.)

6. On the "File location and size" screen, enter a name for the virtual hard disk file, choose a location to store it, and allocate the amount of hard disk space you want to give the virtual machine. It is recommended to allocate at least 10 GB of hard disk space. (The screenshot allocates 20 GB.)

7. On the Summary screen, click "Finish" to create the virtual machine.

8. In VirtualBox Manager, select the virtual machine you just created and click "Settings".

9. In the "Settings" window, click "Storage" and then "Empty" under "Controller: IDE".
10. Click the "CD/DVD" icon and then "Choose Virtual Optical Disk File".

11. Browse to the Ubuntu Server ISO file you downloaded in step 1 and select it.
12. Click "OK" to close the "Settings" window.

13. Start the virtual machine by clicking the "Start" button in VirtualBox Manager.
14. Follow the Ubuntu Server installation prompts to complete the installation.

15. Here is the final view in VirtualBox Manager after the Ubuntu Server installation is complete.
 | talim |
1,366,339 | Introducing LT Debug: A faster, efficient, and simple debugging Chrome extension | We leverage Debugging tools, otherwise known as debuggers, to identify if there is any coding error... | 0 | 2023-02-15T08:53:03 | https://www.lambdatest.com/blog/introducing-lt-debug/ | debugging, testing, chrome, extensions |
We leverage debugging tools, otherwise known as debuggers, to identify coding errors at different development stages of the [Software Testing Life Cycle (STLC)](https://www.lambdatest.com/blog/software-testing-life-cycle/?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=blog). We can use them to check and reproduce the conditions under which a bug was found, and then take a deeper look at the program state at a particular moment to identify the cause.
Debugging plays an important role in ensuring that software developers, engineers, and testers can fix every error before the software is released to users. This process is complementary to testing, where you learn how an error occurred in the program.
With all these factors in mind - drum roll - LambdaTest is back with a brand new extension called [LT Debug](https://www.lambdatest.com/lt-debug?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage) to make the lives of testers easy and fuss-free while debugging.
Following the success of our [responsive checker](https://www.lambdatest.com/responsive-checker?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage) tools like [LT Browser](https://www.lambdatest.com/lt-browser?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage), we are excited to reveal yet another tool that could make developers and testers dive into the abyss of debugging and stay afloat in testing.
## The story behind LT Debug
LambdaTest is a product company in its own right. We understand the pain of testers better than Michael Scott (much better!). We have been in the field of [software testing](https://www.lambdatest.com/blog/the-golden-age-of-software-testing/?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=blog) and debugging for the past 5+ years. Our product team has kept their eyes, ears, and hearts open to the testing community, and they have listened to the requests of many awesome testers asking for a debugging Chrome extension.
The problem statement of testers worldwide is more or less the same when we talk about debugging. One of the most common [challenges faced by testers](https://www.lambdatest.com/blog/16-major-challanges-faced-by-testers-while-testing-a-web-application/?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=blog) is the feeling that debugging equals long working hours and complex processes. It's not always a cakewalk to open the browser console and inject a test script, and paying a visit to the source code to change it wouldn't help much either.
Now, here is something to think about: is it easier to write syntax and code, or to write syntax and fill out a form?
The second option, obviously, right? That’s what made our team hop into brainstorming over many cups of coffee, take baby steps in product development, and finally, LT Debug is live!
This free [developer tool](https://www.lambdatest.com/developer-tools?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=blog) is simple to use, with nine amazing features that web developers and testers rely on for day-to-day debugging.
The coolest part is that it's available in **Dark Mode**, following your system settings, so that even your smallest needs are taken care of. Now, get back to work without any strain. Just click the sun and moon icon at the top right corner to switch modes as you prefer.

***Get started with this complete [Selenium](https://www.lambdatest.com/selenium?utm_source=devto&utm_medium=organic&utm_campaign=feb_15kj&utm_term=kj&utm_content=webpage) automation testing tutorial. Learn what Selenium is, its architecture, advantages and more for automated cross browser testing.***
## How to install LT Debug?
Installation of LT Debug is as simple as spelling your name. Here is our simple step-by-step installation process:
**Step 1:** Go to the [LT Debug Chrome Extension](https://www.lambdatest.com/lt-debug?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage) page.

**Step 2:** Click **Add to Chrome**. You will be redirected to the Chrome web store.

**Step 3:** Click **Add to Chrome**. A pop-up will appear asking if you want to add "LT Debug" to your browser. Click on **Add extension**.

**Step 4:** All done! Now, you can choose the feature you want to use and start debugging.

Here is a gist of how to debug mobile browsers with our developer tools:
{% youtube SVpr_V3nwLI %}
Subscribe to the [LambdaTest YouTube channel](https://www.youtube.com/c/LambdaTest?sub_confirmation=1) to learn more about cross [browser compatibility](https://www.lambdatest.com/feature?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage), [real time testing](https://www.youtube.com/watch?v=zKspUbraT-c&t=1s?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage), and [responsiveness testing](https://www.lambdatest.com/responsive-test-online?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage).
***Run your [Playwright](https://www.lambdatest.com/playwright?utm_source=devto&utm_medium=organic&utm_campaign=feb_15kj&utm_term=kj&utm_content=webpage) test scripts instantly on 50+ browser and OS combinations using the LambdaTest cloud.***
## Top features of LT Debug
So far, we have nine features for our most loved tribe of testers and developers. Let’s take you through a tour:
## Add/Modify Headers
With this feature, you can add, remove or modify the header, be it a request header or a response header. This way, you can easily test a header for a website request.
How to do it?
**Step 1:** Click on **Modify Headers**. Provide the needed values to modify the request header. Input the URL value and click **Save**.

**Step 2:** You can directly notice that the changes have been implemented on the request header value by clicking on the three dotted lines at the right corner of the screen of your Chrome browser. Go to **More tools -> Developer tools-> Network-> Headers** and click on the website source you need to test.

## Block Requests
LT Debug offers a feature to block HTTP requests based on your specific URL filter conditions. With this feature, you can easily filter URL requests as and when you like.
**Step 1:** Go to Block Requests. Type the exact URL or URL with a specific word. E.g: For lambdatest.com, you can just provide lambdatest. That would suffice. Click on **Save**.

**Step 2:** Now, the URL will be blocked when you try to visit the website.

## Throttle Response
With this feature, you can efficiently perform [network throttling](https://www.lambdatest.com/blog/test-mobile-websites-on-different-network-conditions/?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=blog) to control network speed for every network request. Choose the URL of your choice when you want to reduce the speed or function at normal speed. You can also control if there is any millisecond delay.
Easily emulate network speed based on the use case without diving into Chrome element inspector. Not every user would have access to 4G or 5G. You can also try checking your website performance in a slow 3G network to understand how it performs in such geographical areas.
Here is how to try it:
**Step 1:** Open the LT Debug Chrome Extension tab. Click on **Throttle Response**.

**Step 2:** Provide your network type and URL. For example, you can try slow 3G.

**Step 3:** You can check the performance under Slow 3G by visiting the website.
***Run your [Jest](https://www.lambdatest.com/jest?utm_source=devto&utm_medium=organic&utm_campaign=feb_15kj&utm_term=kj&utm_content=webpage) automation tests in massive parallel across multiple browser and OS combinations with LambdaTest.***
## Add/Remove Query Param
With this feature, you can simply change and modify the URL query parameters. The function of URL parameters is to read and organize the key, along with the value pairs present on the particular web page. This can simplify your debugging experience.
**Step 1:** Try it out by adding a query param with the Param Name "Par 1" and the Param Value "Val 1" to URLs containing the word “lambdatest”. This will redirect you to the needed page whenever you visit a URL with the value “lambdatest” in it.

**Step 2:** Visit lambdatest.com to witness the changes reflected on the query parameter, right from the header.

## Redirect Requests
Use the **redirect request** tool when you want to configure a URL to redirect them to your preferred web URL.
**Step 1:** Provide the URL value that needs to be changed to the URL value where you need to redirect it.

**Step 2:** You can see that when you try to visit lambdatest.com, you are redirected to lambdatest.com/blog.
***Perform browser [automation testing](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=feb_15kj&utm_term=kj&utm_content=webpage) on the most powerful cloud infrastructure. Leverage LambdaTest automation testing for faster, reliable and scalable experience on cloud.***
## Change User Agent
Are you in need of [cross browser testing](https://www.lambdatest.com/?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage)? You can easily switch between different user-agent strings in a faster manner. Simulate, imitate and spoof different browsers, devices and search engine spiders according to your choice.
**Step 1:** For instance, let’s test our website on the Chrome browser on Windows 10. You can also check for the user agent by **More tools -> Developer tools-> Network-> Headers**. Scroll down to find the user agent at the end.
This is the user agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36 [ip:37.163.61.154]

**Step 2:** Let’s say you want to [test on Internet Explorer](https://www.lambdatest.com/test-on-internet-explorer-browsers?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage) 8 on Windows XP.
Here is the user agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0).
All you need to do is provide the **user agent string** of the browser and OS in the form.

**Step 3:** Now, perform the same mentioned in Step 1 to check if the user agent is reflected. Bingo! It has indeed been reflected on the page. Now you can perform cross browser testing just by entering the user agent string. No need to code or write a test script!

***Try an online [Selenium Testing](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=feb_15kj&utm_term=kj&utm_content=webpage) Grid to run your browser automation testing scripts. Our cloud infrastructure has 3000+ desktop & mobile environments. Try for free.***
## Insert Scripts (CSS/JS)
Easily simulate the web page experience as soon as you inject the CSS or JS script on your own in the console.
**Step 1:** Do you want to test your favorite color as the background color of your website? Let’s provide the CSS code for the same in the form:
body {background-color:blue}

**Step 2:** Whiz-a-whiz! The background color has changed successfully.

## Allow CORS
If you want to perform cross-domain Ajax requests in websites and web apps faster, all you need to do is add the (Access-Control-Allow-Origin: *) rule to your response header. For example, you can easily bypass CORS on lambdatest.com when you turn it on while accessing the resources.
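The `Access-Control-Allow-Origin` value is just a response header that tells the browser which origins may read the response. The sketch below is a heavily simplified illustration of the origin check a browser effectively performs (the real CORS algorithm also covers methods, headers, and credentials; the function name is hypothetical):

```typescript
// Simplified origin check: "*" allows any origin, otherwise the header must
// match the requesting origin exactly. A missing header denies access.
const corsAllowsOrigin = (
  allowOriginHeader: string | undefined,
  requestOrigin: string
): boolean => allowOriginHeader === "*" || allowOriginHeader === requestOrigin;

console.log(corsAllowsOrigin("*", "https://app.example.com"));                       // true
console.log(corsAllowsOrigin("https://app.example.com", "https://app.example.com")); // true
console.log(corsAllowsOrigin(undefined, "https://app.example.com"));                 // false
```

Turning on the Allow CORS feature effectively injects that wildcard header so the browser stops blocking cross-origin reads during debugging.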
**Step 1:** All you need to do is provide the URL value.

**Step 2:** Click on the three dotted lines at the right corner of the screen of your Chrome browser. Go to **More tools -> Developer tools-> Network-> Headers** to check if the rule has been implemented on the response header.

## Content Security Policy
This feature allows you to remove the content security policy header on any website/web page of your choice.
**Step 1:** Click on the three dotted lines at the right corner of the screen of your Chrome browser. Go to **More tools -> Developer tools-> Console**. You can see that there is a content blocker due to security policy violations.

**Step 2:** Click on **Content Security Policy**. Provide the URL value where you need to remove the content blocker.

**Step 3:** Now, you can again visit the **Console** tab to check if the blocker has been removed. In our case, it’s a yes!

***Run [Appium](https://www.lambdatest.com/appium-mobile-testing?utm_source=devto&utm_medium=organic&utm_campaign=feb_15kj&utm_term=kj&utm_content=webpage) mobile testing of native and web apps. Improve your app quality with instant access to real devices on LambdaTest. Register now for free.***
## Why should you use LT Debug?
Even though there are many debugging tools in the market, the ones built with the user in mind emerge as the winners. We believe we are one of them!
Here are the top reasons to trust our LT Debug browser extension:
* It’s free of cost forever.
* All you need to do is fill out a form and complete your debugging.
* You get a link to access 100+ extensions, exclusively for developers and testers, by developers and testers.
Here is how you do it:
**Step 1:** Click on **More Tools.**

**Step 2:** You will be redirected to our [free online tools](https://www.lambdatest.com/free-online-tools?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage) page. Pick a tool of your choice.

**Step 3:** You set the rules and break them! Go to **Manage Rules** to view and modify the rules.

## Is there any other way to debug using LambdaTest apart from LT Debug?
Yes, of course! Follow these simple steps to debug your website using “Real Time Test”.
**Step 1:** Sign up and log in to your LambdaTest account. Go to **Real Time Testing** from the left navigation menu. Choose the test configuration from a list of browsers, devices, versions, OS, screen resolution, and so on.

**Step 2:** Provide the URL of your choice.

**Step 3:** Right-click your mouse pointer and choose **Inspect.**

**Step 4:** After that, you can inspect the website as per your needs.

## Can I use LT Debug on a browser other than Chrome?
Right now, we support only the Chrome extension. But don’t worry, we have got you covered through our [online browser testing](https://www.lambdatest.com/online-browser-testing?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage) feature.
**Step 1:** Login to your LambdaTest account. Go to **Real Time Testing** from the left navigation menu. Choose the test configuration from a list of browsers, devices, versions, OS, screen resolution, and so on.

**Step 2:** Provide the URL of your choice. Click **START**.

**Step 3:** Go to the **LT Debug page** on the **Chrome web store**. Copy the URL.

**Step 4:** Now, click the Chrome Extension icon and add the URL link. All set! Now you can use our real time [live testing](https://www.lambdatest.com/live-testing?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage) platform to add and use the LT Debug Chrome extension from any browser of your own, be it Safari or Opera.

## Conclusion
Debugging will no longer be an arduous task for web developers and testers. It’s going to be simpler, fun-filled, and flexible when you use LT Debug to fast-track your work. You can also depend on our 100+ extensions if you want to make your testing life seamless. Depend on our [online browser farm](https://www.lambdatest.com/online-browser-farm?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage) to perform [Selenium automation tests](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=webpage) and cross browser tests on 3000+ browsers, OS, and devices.
Visit our learning hub for more insights on testing with our [automation testing tutorial](https://www.lambdatest.com/learning-hub/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=learning_hub) and [mobile app testing](https://www.lambdatest.com/learning-hub/mobile-app-testing?utm_source=devto&utm_medium=organic&utm_campaign=jfeb_15kj&utm_term=kj&utm_content=learning_hub).
Happy testing!
| aamritaangappa |
1,366,427 | Creating a Custom Solana Connect Wallet UI with React and Chakra UI | If you have worked with the Solana Wallet Adapter before, you will know that it is very easy to set... | 0 | 2023-02-15T09:44:00 | https://blog.anishde.dev/creating-a-custom-solana-connect-wallet-ui-with-react-and-chakra-ui | web3, react | If you have worked with the [Solana Wallet Adapter](https://github.com/solana-labs/wallet-adapter) before, you will know that it is very easy to set up a Connect Wallet button with a decent modal.
However, customization is pretty limited. We can only add some custom CSS, changing the styles, but that is about it. In this article, we go over the following:
1. Understanding how the `@solana/wallet-adapter-react-ui` package works
2. Using the `@solana/wallet-adapter-react` package to create a custom Connect Wallet page
You can check out the deployed version of what we are going to build [here](https://solana-react-wallet-adapter-custom-ui-example.vercel.app/).
Let's get started!
## How does `@solana/wallet-adapter-react-ui` work?
If you go through the [source code for the React UI package](https://github.com/solana-labs/wallet-adapter/tree/master/packages/ui/react-ui/src), then you will notice that it makes use of the `useWallet` hook from the `@solana/wallet-adapter-react` package. It also has the required components (buttons, modals, providers) and styling to be an easy-to-use library to implement a connect wallet feature.
### The `useWallet` hook
The `useWallet` hook returns functions like `select`, `connect`, and `disconnect` to select a wallet, connect to the selected wallet, and disconnect from the connected wallet respectively. It also returns some variables - `wallets` which is a list of all wallets with their availability status, `wallet` which is the currently selected wallet, `publicKey`, `connecting`, `connected`, and `disconnected` telling us about the current connection status.
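As an illustration of how the status flags fit together, you could collapse them into a single label. This helper is purely hypothetical (it is not part of the adapter's API); it only shows the relationship between `connecting` and `connected`:

```typescript
// Hypothetical helper, not part of @solana/wallet-adapter-react.
type WalletFlags = { connecting: boolean; connected: boolean };

const connectionStatus = ({
  connecting,
  connected,
}: WalletFlags): "connecting" | "connected" | "disconnected" =>
  connecting ? "connecting" : connected ? "connected" : "disconnected";

console.log(connectionStatus({ connecting: true, connected: false }));  // "connecting"
console.log(connectionStatus({ connecting: false, connected: true }));  // "connected"
console.log(connectionStatus({ connecting: false, connected: false })); // "disconnected"
```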
## Using `@solana/wallet-adapter-react` to create a custom Connect Wallet UI
Let us start by creating a new [Next.js](https://nextjs.org/) project -
```bash
npx create-next-app solana-react-wallet-adapter-custom-ui-example --ts
```
It will ask you for some prompts, you can accept the defaults for this guide.

Now open up the project in your favorite IDE and open up the terminal. Then enter the following command to install the required dependencies -
```bash
npm i @solana/web3.js @solana/wallet-adapter-base @solana/wallet-adapter-react @solana/wallet-adapter-wallets
```
For this guide, I am going to be using [Chakra UI](https://chakra-ui.com/) for styling. To install the required dependencies for Chakra UI, run the following command -
```bash
npm i @chakra-ui/react @emotion/react @emotion/styled framer-motion
```
Now replace `_app.tsx` with the following -
```typescript
import "@/styles/globals.css";
import type { AppProps } from "next/app";
import { ChakraProvider, extendTheme } from "@chakra-ui/react";
import {
ConnectionProvider,
WalletProvider,
} from "@solana/wallet-adapter-react";
import { useMemo } from "react";
import {
GlowWalletAdapter,
PhantomWalletAdapter,
SolflareWalletAdapter,
MathWalletAdapter,
} from "@solana/wallet-adapter-wallets";
import { clusterApiUrl } from "@solana/web3.js";
const theme = extendTheme({
config: {
initialColorMode: "dark",
},
});
export default function App({ Component, pageProps }: AppProps) {
const wallets = useMemo(
() => [
new PhantomWalletAdapter(),
new SolflareWalletAdapter(),
new GlowWalletAdapter(),
new MathWalletAdapter(),
],
[]
);
const endpoint = useMemo(() => clusterApiUrl("mainnet-beta"), []);
return (
<ConnectionProvider endpoint={endpoint}>
<ChakraProvider theme={theme}>
<WalletProvider wallets={wallets} autoConnect>
<Component {...pageProps} />
</WalletProvider>
</ChakraProvider>
</ConnectionProvider>
);
}
```
We are setting up the `ConnectionProvider` and `WalletProvider` from `@solana/wallet-adapter-react` with the default Solana mainnet RPC URL and 4 wallets - Phantom, Solflare, Glow and Math Wallet (this is to demonstrate how the `wallets` list includes the wallet availability status).
Now replace `index.tsx` with the following -
```typescript
import { Container, Heading, VStack } from "@chakra-ui/react";
import { useWallet } from "@solana/wallet-adapter-react";
export default function Home() {
const { wallets } = useWallet();
console.log(wallets);
return (
<Container as="main" mt={32} maxW="3xl">
<VStack w="full" gap={8}>
<Heading textAlign="center">
Solana React Wallet Adapter Custom UI demo with Chakra UI
</Heading>
</VStack>
</Container>
);
}
```
Run `npm run dev` and go to [http://localhost:3000/](http://localhost:3000/) on your browser. Open up devtools and look at the console -

I have Backpack, Glow, Phantom, and Solflare installed, so they show up with a `readyState` of `Installed`. Note that Backpack, Glow, and Phantom implement the [Solana Wallet Adapter Standard](https://github.com/solana-labs/wallet-standard) and hence show up with `StandardWalletAdapter` in the `adapter` field. As I don't have Math Wallet, the `readyState` for Math Wallet is `NotDetected`.
Now make a `components` directory under `src` and make a `Wallets.tsx` file under that with the following code -
```typescript
import { VStack, Button, Image, Text } from "@chakra-ui/react";
import { useWallet } from "@solana/wallet-adapter-react";
const Wallets = () => {
const { select, wallets, publicKey, disconnect } = useWallet();
return !publicKey ? (
<VStack gap={4}>
{wallets.filter((wallet) => wallet.readyState === "Installed").length >
0 ? (
wallets
.filter((wallet) => wallet.readyState === "Installed")
.map((wallet) => (
<Button
key={wallet.adapter.name}
onClick={() => select(wallet.adapter.name)}
w="64"
size="lg"
fontSize="md"
leftIcon={
<Image
src={wallet.adapter.icon}
alt={wallet.adapter.name}
h={6}
w={6}
/>
}
>
{wallet.adapter.name}
</Button>
))
) : (
<Text>No wallet found. Please download a supported Solana wallet</Text>
)}
</VStack>
) : (
<VStack gap={4}>
<Text>{publicKey.toBase58()}</Text>
<Button onClick={disconnect}>disconnect wallet</Button>
</VStack>
);
};
export default Wallets;
```
Now, replace `index.tsx` with the following -
```typescript
import { Heading, VStack } from "@chakra-ui/react";
import dynamic from "next/dynamic";
const Wallets = dynamic(() => import("../components/Wallets"), { ssr: false });
export default function IndexPage() {
return (
<VStack gap={8} mt={16}>
<Heading>Solana Custom Wallet UI example (Chakra UI)</Heading>
<Wallets />
</VStack>
);
}
```
Notice that we have to import the `Wallets` component dynamically using [`next/dynamic`](https://nextjs.org/docs/advanced-features/dynamic-import) or else you will be faced with a React hydration error.
The webpage should look like this now -

Note that it shows the wallets I have installed. You may have other wallets installed so it will show a different list. Notice how Math Wallet is not shown although I included the adapter. This is because we are filtering the `wallets` list making sure that `readyState` is `Installed`.
We can connect to a wallet simply by using the `select` method from `useWallet`. Note that `select` calls `connect` automatically. If a wallet is selected but disconnected, we can call `connect` to connect to the currently selected wallet. The selected wallet state is persisted in local storage, so the selection survives page reloads and wallet locks; calling `disconnect` removes it.
Now if I click on any of these buttons, the wallet will pop up and ask me to connect. After connecting, the page should look like this -

Clicking "disconnect wallet" simply calls `disconnect` and the wallet is disconnected and the wallet state is removed from local storage.
## Conclusion
This guide went through how `@solana/wallet-adapter-react-ui` works and then showed how to create a Connect Wallet feature in your React application using just `@solana/wallet-adapter-react` and Chakra UI.
Check out the [GitHub Repository](https://github.com/AnishDe12020/solana-react-wallet-adapter-custom-ui-example) if you want to take a look at the final code for this guide. You can also check out the deployed version - [https://solana-react-wallet-adapter-custom-ui-example.vercel.app/](https://solana-react-wallet-adapter-custom-ui-example.vercel.app/)
Now you can try to extend this by creating a modal or a multi-button with a nice dropdown to disconnect or change the wallet just like how the official React UI package does it, but with your styles of course. | anishde12020 |
1,386,069 | Coupa Testing: All You Need to Know | Real-time operation monitoring is a challenge for supply chain organizations, and the resulting... | 0 | 2023-03-03T05:27:37 | https://www.reverbtimemag.com/blogs_on/coupa-testing-all-you-need-to-know | coupa, testing | 
Real-time operation monitoring is a challenge for supply chain organizations, and the resulting unsustainable expenses are a common result. To boost the effectiveness of the supply chain, many have put in place a solid set of procedures. However, they rely on various tools and frameworks in considerable measure. Businesses require a cutting-edge cloud-based business management tool like Coupa that can give them insights into their everyday operations.
Coupa has a wide range of features and advantages, but in order for it to fulfill the necessary security and quality standards, it is necessary to perform comprehensive Coupa testing.
**What is Coupa and how does it work**?
Coupa is a cloud-based spend management platform. Organizations can manage all transactions related to payments, supply chain management, and procurement with the help of Coupa. Organizations may use Coupa to increase resilience, increase insight into spending, and turn every obstacle into a competitive advantage. Utilizing Coupa inventory software can reduce unnecessary costs by automating fulfillment and managing and tracking inventory. By fully automating contracts and converting paper contracts to digital ones through the use of Coupa contract lifecycle management technologies, you can also eliminate paper from your life. With all these benefits, Coupa testing becomes an inevitable part of your business.
**What are the benefits of Coupa**?
Adopting Coupa can be beneficial for any supply chain firm trying to gain real-time insight into its everyday operations.
- Coupa is the ideal option for businesses thanks to its executive dashboards, cost management features, and real-time benchmarking.
- Coupa assists in overcoming issues with paper-based reporting by integrating the element of cost management and administration into a single user interface.
- Additionally, it enables enterprises to aggregate data from various places and systems. The platform offers a single source of truth because data is no longer dispersed among several files or formats.
- Automated reports shorten the time it takes to make judgments while also guaranteeing their timeliness and correctness.
- Because of Coupa's seamless integration with ERP or accounting systems and simple UI/UX facilitate the organization's supply chain. End-to-end visibility into budgets and expenses is made possible through this integration. As a result, the organization's total purchasing and expenditure operations as well as the complete supply chain ecosystem will be improved.
**Coupa test automation tool: Why do you need it**?
In complex and highly configured software like Coupa, automated testing helps improve quality while saving time and cost. These systems hold huge volumes of data, and testing routine procedures manually requires a significant amount of resources. Cloud updates, globalization, and integration with other systems are some of the factors that make automated Coupa testing a basic requirement for businesses.
To achieve successful Coupa testing, selecting the right tool is paramount. The pointers below will guide enterprises in choosing an automation tool that suits their requirements.
- The tool you select should be flexible enough to perform tests without requiring coding knowledge, providing a no-code/low-code interface where manual or functional testers can set up and write test cases from day one.
- The tool you choose should support the reusability of the tests and easy script maintenance in order to save time and effort.
- Tool should support features like record/playback where scripts can be created easily by performing manual actions.
- The tool you select should have a pre-built accelerator for packaged applications and end-to-end test cases that can help accelerate the automation setup.
- The tool should be compatible with ERP integrations with third party applications, mobile, web, API, UI, mainframe, and other ERPs.
- The tool should have features like change management and impact analysis to be able to find the impacted test cases on every update or new release.
- The tool should support multiple channels of accessing your ERP users, working across all web, mobile, desktop apps, and other emerging technologies.
Operational simplification and cost optimization might result from automated Coupa testing. However, you need to step up your Coupa test automation tool to overcome challenges with suppliers, expenses, and data accuracy. By doing this, you may spend less money while getting the most out of every dollar, directly affecting the bottom line. | rohitbhandari102 |
1,386,074 | LLaMA - Meta's new AI can run on a PC | Meta releases LLaMA, a state-of-the-art foundational large single-gpu language model | 0 | 2023-03-03T05:51:49 | https://dev.to/codingmoney/llama-metas-new-ai-can-run-on-a-pc-18ob | ai, machinelearning, nlp, chatgpt | {% embed https://www.youtube.com/watch?v=atMU1qo2Pok %}
Meta releases LLaMA, a state-of-the-art foundational large single-gpu language model | codingmoney |
1,386,199 | Get the host address for WSL2 | If you need to do something like accessing the services on the WSL2 host, you need to get the address by yourself. | 0 | 2023-03-03T08:54:27 | https://dev.to/socrateslee/get-the-host-address-for-wsl2-2b2 | ---
title: Get the host address for WSL2
published: true
description: If you need to do something like accessing the services on the WSL2 host, you need to get the address by yourself.
tags:
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2023-03-01 09:43 +0000
---
For WSL2, the IP address of a running distro is not static, nor is the subnet WSL2 uses fixed. If you need to do something like accessing services on the WSL2 host, you need to get the address yourself.
Get the IP address of the currently running distro:
```
ip -4 -br route get 8.8.8.8 | head -n 1|awk '{print $7}'
```
Get the ip address the host(the gateway):
```
ip -4 -br route get 8.8.8.8 | head -n 1|awk '{print $3}'
```
And get the netmask:
```
ifconfig eth0 |grep netmask|awk '{print $4}'
```
| socrateslee | |
1,386,345 | Improvements to Planet Perl and Perlanet | This is a story of one of those nice incidents where something starts off simple, then spirals out of... | 0 | 2023-03-03T10:24:30 | https://perlhacks.com/2023/03/improvements-to-planet-perl-and-perlanet/ | programming, docker, perlanet, planetperl | ---
title: Improvements to Planet Perl and Perlanet
published: true
date: 2023-03-03 10:16:30 UTC
tags: Programming,docker,perlanet,planetperl
canonical_url: https://perlhacks.com/2023/03/improvements-to-planet-perl-and-perlanet/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/euw4rpgg4ws6jfcm4iyo.jpg
---
This is a story of one of those nice incidents where something starts off simple, then spirals out of control for a while but, in the end, everyone wins.
On Reddit, a few days ago, someone asked [‘Is there a “Planet Perl” with an RSS feed?’](https://www.reddit.com/r/perl/comments/119hu00/perl_rss_feeds/) and a few people replied, pointing out the existence of Planet Perl (which is the first Google result for [“Planet Perl”](https://www.google.com/search?q=planet+perl)). I’m obviously not marketing that site very well as every time I mention it, I get people (pleasantly) surprised that it exists.
On this occasion, it was [Elvin Aslanov](https://www.reddit.com/user/rwp0/) who seemed to discover my site for the first time. And, very soon afterwards, he started sending [pull requests](https://github.com/davorg/planetperl/pulls) to add feeds to the site. As a result, we now have three more feeds that are being pulled into the site.
- [Perl on Medium](https://medium.com/tag/perl). I’m slightly embarrassed that I hadn’t thought of this myself. I did, after all, once try to start [a Perl publication on Medium](https://medium.com/cultured-perl). I think I must have decided that there are better sites for technical blogging and blanked it from consideration. Medium’s not the busiest of places for Perl bloggers, but there are a few posts there and they’re mostly from people who are outside of the echo chamber – so getting more eyes on their posts is a good idea.
- [Perl questions on Stack Overflow](https://stackoverflow.com/feeds/tag/perl). Another one that would have been obvious if I had thought for a second. I’ve been answering questions on SO for years. It’s a good way to get more perspective on how Perl is being used across the industry. Unfortunately, the feed only includes the titles of the posts – you’ll need to click the link to actually see the question.
- [Perl commits on GitHub](https://github.com/Perl/perl5/commits/blead.atom). I’m interested in hearing how useful people think this is. I worry slightly that there will be times when the number of commits will overwhelm the other feeds. But maybe that’s a good idea. Perhaps it’s good for more people to see just how busy the Perl 5 Porters are. I’m a bit annoyed that the feed puts everything in a fixed-width font, but not (yet) annoyed enough to do anything about it.
You might know that Planet Perl is driven by [Perlanet](https://metacpan.org/pod/Perlanet). So adding new feeds is just a case of adding a few lines to [a configuration file](https://github.com/davorg/planetperl/blob/master/perlanetrc). And looking at the pull requests I got from Elvin, showed a potential problem in the way the configuration was laid out. Each feed has three lines of YAML configuration. There’s a title for the feed, a URL for a web page that displays the content of the feed and the URL for the feed itself. They’re called “title”, “web” and “url”. And it’s that last name that’s slightly problematic – it’s just not clear enough. Elvin got “web” and “url” muddled up in one of his PRs and, when I pointed that out to him, he suggested that renaming “url” to “feed” would make things much clearer.
I agreed, and the next day I hacked away for a while before releasing [version 3.0.0 of Perlanet](https://metacpan.org/release/DAVECROSS/Perlanet-v3.0.0/view/lib/Perlanet.pm). In this version, the “url” key is renamed to “feed”. It still accepts the old name (so older config files will still work) but you’ll get a warning if you try to use a config name in the old config.
I didn’t stop there. Last year, I wrote [a blog post about producing a docker image that already had Perlanet installed](https://dev.to/davorg/building-a-perlanet-container-43cm) – so that it was quicker to rebuild my various planets every few hours. Since then I’ve been rebuilding [that image](https://hub.docker.com/repository/docker/davorg/perl-perlanet/general) every time I updated Perlanet. But it’s been rather a manual process. And because I’m old and decrepit, I can never remember the steps I go through to rebuild it, tag it correctly and push it to the Docker Hub. This means it always takes far longer than it’s supposed to. So this time, I wrote [a script to do that for me](https://github.com/davorg/perl-perlanet-docker/blob/main/build). And because I now have the kind of mind set that sees GitHub Workflows everywhere I look, I wrote [a Workflow definition that builds and publishes the image](https://github.com/davorg/perl-perlanet-docker/blob/main/.github/workflows/publish_image.yml) any time the Dockerfile changes. I guess the next step will be to write an action that automatically updates the Dockerfile (thereby triggering the rebuild) each time I release a new version of Perlanet.
But that’s a problem for another day. For now, I’m happy with the improvements I’ve made to Planet Perl, Perlanet and the Perlanet Docker infrastructure.
The post [Improvements to Planet Perl and Perlanet](https://perlhacks.com/2023/03/improvements-to-planet-perl-and-perlanet/) appeared first on [Perl Hacks](https://perlhacks.com). | davorg |
1,386,360 | How to Build a Fully Responsive Sign Up Form Using TailwindCSS | A community is a social unit with commonality such as place, norms, religion, values, customs, or... | 0 | 2023-03-04T07:30:00 | https://medium.com/@mbianoubradon/how-to-build-a-fully-responsive-sign-up-form-using-tailwindcss-8500e1306c8c | webdev, html, tailwindcss |
A community is a social unit with commonality such as place, norms, religion, values, customs, or identity. There are many different types of communities, ranging from religious to social, political, and even tech communities.
Being part of a community as a tech person or aspiring tech individual is very important, as it helps you grow and opens your mind to a million different possibilities for solving a particular problem.
Personally, I am part of many different communities; one of them is icodethis, a platform which really helped me improve my UI design and sharpen my coding skills. You can give it a try, and trust me, you won’t regret it.
At many instances, we have in one way or another been required to sign up for something before being able to fully use it, whether it is an offline service, an online course, or even a newsletter. We have all signed up for something at least once in our lives.
In this tutorial, we are going to build this awesome component. A sign up form can take so many different forms, so I challenge you to look for other designs, replicate one, and share it with us in the comment section.
For the time being, let’s use this one and get started.

## Understanding the Task
It’s important to divide your work or design into different parts. I always do that; it helps me break it down into smaller components that I can easily build, then join together to form the bigger component. (_I call it Divide and Conquer_ 😉)

Essentially, we have 2 main parts; for easy referencing, we will call them Before OR and After OR (everything before and after the “Or” divider).
## Structure of Code
I generally have the same structure when it comes to designing components. I believe they somehow have the same root. 😃
This is how it goes
```html
<body>
<!-- First Layer -->
<div>
<!-- Second Layer -->
<div>
<!--
Here Goes the content
-->
</div>
</div>
</body>
```
## Before OR
As earlier agreed, we will reference the different parts as Before OR and After OR.
Now Let’s have a look at the first part.
```html
<!-- Second Layer -->
<div class="bg-white rounded-xl text-center px-14 sm:px-20 py-4 ">
<!-- Form Header -->
<div class="my-5 w-full [&>*]:mx-auto">
<h2 class="text-2xl w-4/5 font-semibold mb-3">Join the community</h2>
<p class="text-sm text-center w-[97%]">Take your art to the next level. Get it
seen by millions of people
</p>
</div>
<!-- Join with Facebook -->
<div class="bg-[#4968ad] text-white text-sm font-semibold shadow-sm shadow-[#4968ad] py-2 rounded cursor-pointer hover:scale-105"><h2>Join with Facebook</h2></div>
.
.
.
</div>
```
Let’s understand the above code;
- `bg` stands for background; we gave a white background to the container holding everything, gave it a border radius (`rounded-xl`), and added some padding-inline (`px`) and padding-block (`py`)
- For the form header, we gave it some basic properties: margin-inline (`mx`) of auto, full width (`w-full`) and margin-block (`my`)
- For the join button, we gave it a background color, shadow, white text and a hover effect of scale.
## After OR
Now let’s build the different pieces of the different components after **OR**
```html
<!-- Second Layer -->
<div class="bg-white rounded-xl text-center px-14 sm:px-20 py-4 ">
.
.
.
.
.
.
.
<!-- Input Fields -->
<div class="grid grid-cols-2 [&>*]:border [&>*]:border-slate-300 [&>*]:mb-3 gap-2 [&>*]:rounded [&>*]:text-slate-600 [&>*]:w-full [&>*:focus]:ring-slate-900">
<input type="text" placeholder="First name" class="col-span-2 sm:col-span-1">
<input type="text" placeholder="Last name" class="col-span-2 sm:col-span-1">
<input type="email" placeholder="Email" class="col-span-2">
<input type="password" placeholder="Password" class="col-span-2">
</div>
<!-- Footer-->
<div>
<div class="bg-[#1e2c4b] text-white text-sm font-semibold py-2 rounded text-center cursor-pointer shadow-sm shadow-[#1e2c4b] border border-[#1e2c4b] hover:bg-white hover:text-[#1e2c4b]"><h2>Create New Account</h2></div>
<div class="[&>span]:text-[#4968ad] text-[0.7rem] my-5 [&>span]:cursor-pointer [&>span:hover]:underline w-[90%] mx-auto">
By joining you agree to our <span>Terms of Service</span>
and <span>Privacy Policy</span>
</div>
</div>
</div>
```
I guess this is pretty clear; we
- Separate the input fields from the others: we create a grid with two columns (`grid grid-cols-2`), add a border to each child (`[&>*]:border [&>*]:border-slate-300 [&>*]:mb-3`) and also give each child a focus effect (`[&>*:focus]:ring-slate-900`).
- The second sub-part here (the footer) has 2 parts: a call-to-action button with the following properties `bg-[#1e2c4b] text-white text-sm font-semibold py-2 rounded text-center cursor-pointer shadow-sm shadow-[#1e2c4b] border border-[#1e2c4b] hover:bg-white hover:text-[#1e2c4b]`. This kinda looks like jargon, but let’s break it down: we apply a _dark blue background color, which changes to white on hover_; we give a _white color to the text, which changes to dark blue on hover_; and of course, we have a _border and shadow of the same color_. Hope it makes sense. 🤓
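If you’re wondering what those `[&>*]` arbitrary variants amount to, they roughly compile to child-combinator rules in plain CSS. A conceptual sketch (the `.fields` class name is purely illustrative):

```css
/* roughly: [&>*]:border [&>*]:border-slate-300 [&>*]:mb-3 [&>*]:rounded */
.fields > * {
  border: 1px solid #cbd5e1; /* border-slate-300 */
  margin-bottom: 0.75rem;    /* mb-3 */
  border-radius: 0.25rem;    /* rounded */
}
```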
And that should be it!
But wait a minute! We did Before OR ✅ and After OR ✅, that’s true. But what about OR itself? 😮
Let’s get that done too... ✍️
```html
<!-- Second Layer -->
<div class="bg-white rounded-xl text-center px-14 sm:px-20 py-4 ">
.
.
.
.
.
.
.
<!-- OR Divider -->
<p class="text-slate-300 my-3 font-semibold relative before:content-[''] before:absolute before:w-[35%] before:bg-slate-300 before:h-0.5 before:left-4 before:rounded-full before:top-1/2
after:content-[''] after:absolute after:w-[35%] after:bg-slate-300 after:h-0.5 after:right-4 after:rounded-full after:top-1/2">
Or
</p>
.
.
.
.
.
</div>
```
- We have a normal Or text.
- As for the horizontal lines at the left and the right of the **Or** text, we are using the pseudo-elements **::before** for the line at the left and **::after** for the line at the right.
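In plain CSS, the same divider technique looks roughly like this (a hypothetical `.divider` class; the values mirror the Tailwind utilities above):

```css
.divider { position: relative; }
.divider::before,
.divider::after {
  content: '';
  position: absolute;
  top: 50%;
  width: 35%;
  height: 2px;              /* h-0.5 */
  background: #cbd5e1;      /* bg-slate-300 */
  border-radius: 9999px;    /* rounded-full */
}
.divider::before { left: 1rem; }  /* left-4 */
.divider::after { right: 1rem; }  /* right-4 */
```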
Looks amazing, doesn’t it? 😎

## Conclusion
We just built a simple and responsive Sign Up Form component without opening our CSS file 😃, thanks to Tailwind CSS.
Many employers will need such components added to their websites, and right now you should be proud that you are one of those few who know how to build it in less than 5 minutes, and you can do that without even leaving your HTML document 😎.
You can have a Live preview on [Codepen](https://codepen.io/mbianou-bradon/pen/bGjLVNo) or find the code on [Github](https://github.com/mbianou-bradon/icodethis-daily-ui-challenge/tree/main/public/January%202023/Sign%20Up%20Form)
Remember our challenge? Find a new Sign Up Form design, code it, and share it with us in the comment section.
If you have any worries or suggestions, don’t hesitate to leave them in the comment section! 😊
See ya ! 👋 | mbianoubradon |
1,386,368 | How To Hash Password with PostgreSQL Function in Nodejs | Introduction Storing user passwords in plain text is a major security vulnerability. In... | 0 | 2023-03-03T11:12:09 | https://dev.to/ndohjapan/how-to-hash-password-with-postgresql-function-in-nodejs-3571 | postgres, node, express, database | ## Introduction
Storing user passwords in plain text is a major security vulnerability. In order to protect user data, it is important to hash passwords before storing them in a database. In this tutorial, we will be discussing how to hash passwords with PostgreSQL in a Node.js application by creating a PostgreSQL function and adding it to a migration file.
## Creating the Users Table and Hashing Function in Migration
To create the table and the function in a migration file, you need to create a new migration file with a name like XXXXXXXXXXXXXX_create_users_table.js where XXXXXXXXXXXXXX is a timestamp, then add the following code to the file:
```js
exports.up = async function(knex) {
await knex.schema.createTable("users", table => {
table.string("username").notNullable();
table.string("first_name").notNullable();
table.string("last_name").notNullable();
table.string("password").notNullable();
table.string("email").notNullable();
});
await knex.raw(`CREATE EXTENSION IF NOT EXISTS pgcrypto`); // crypt() and gen_salt() come from pgcrypto
await knex.raw(`
CREATE OR REPLACE FUNCTION hash_password(password VARCHAR(255))
RETURNS VARCHAR(255) AS $$
BEGIN
RETURN crypt(password, gen_salt('bf'));
END;
$$ LANGUAGE plpgsql;
`);
};
exports.down = async function(knex) {
await knex.schema.dropTable("users");
await knex.raw(`DROP FUNCTION IF EXISTS hash_password(VARCHAR)`);
};
```
This migration file exports two functions, up and down. The up function is used to create the table and the function when you run your migrations. It creates the "users" table with the fields you specified and creates the hash_password function that uses the Blowfish algorithm to hash the password. The down function is used to rollback the changes made by the up function. it drops the table and the function.
To run this migration file, you can use knex's migration API (the knex CLI works too) and run the following:
```js
knex.migrate.latest()
```
## Creating the Signup API
Now, we can create a Node.js API called "Signup" that will be used to insert values into the "users" table. This API will use the PostgreSQL function we created to hash the password before it is stored in the table. Here is the code for the Signup API:
```js
const express = require('express');
const { Client } = require('pg');
const app = express();
app.use(express.json());
const client = new Client();
client.connect(); // top-level await isn't valid in a CommonJS module, so connect without awaiting
app.post('/signup', async (req, res) => {
const { username, first_name, last_name, password, email } = req.body;
const query = 'INSERT INTO users (username, first_name, last_name, password, email) VALUES ($1, $2, $3, hash_password($4), $5)';
const values = [username, first_name, last_name, password, email];
await client.query(query, values);
res.json({ message: 'User created successfully' });
});
app.listen(3000, () => {
console.log('Server started on port 3000');
});
```
In this example, the password is passed as a parameter to the hash_password function and the hashed password is returned and stored in the password field of the user table.
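On the login side, you can later verify a submitted password by letting crypt re-hash it with the salt embedded in the stored value (passing the stored hash as the salt does exactly that). A sketch of such a query, assuming the pgcrypto extension is enabled:

```sql
-- Returns the user row only if the submitted password matches the stored hash
SELECT * FROM users
WHERE email = $1
  AND password = crypt($2, password);
```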
## Conclusion
In this tutorial, we have discussed how to hash passwords with PostgreSQL in a Node.js application by creating a PostgreSQL function. By using this function to hash the password before storing it in the "users" table, we can ensure that user data is protected in the event of a database breach.
| ndohjapan |
1,386,463 | 10 Best Continuous Integration Tools In 2023 | Technology is growing exponentially and to be in the game, organisations have no choice but to be... | 0 | 2023-03-03T12:30:39 | https://dev.to/pcloudy_ssts/10-best-continuous-integration-tools-in-2023-2j9f | Technology is growing exponentially and to be in the game, organisations have no choice but to be technologically enabled. Talking about ‘technology’ basically means creating solutions that are ‘faster, ‘convenient’ and ‘qualitative’. To keep up with the highly demanding technological dynamics, not only human resources need to be equipped with the contemporaneous developments of this industry but there is also a dire need of highly standardized processes in order to deliver the top-class results. That’s when the need of [DevOps](https://www.pcloudy.com/mobile-and-web-devops/) emerges. Right from the planning through delivery, the idea of introducing DevOps is to maintain the quality streak by a systematic collaboration of development and automation across the continuous delivery and continuous Integration. To make it simpler, there must be a convenient way to tackle the complicated scenarios without delays and for on time delivery. Hence, the introduction of Continuous integration tools makes it easier for the developers to streamline the development processes.
The Continuous Integration methodology enables developers to get immediate reporting whenever a defect is identified in the code so that corrective action can be taken right away. It is an important part of DevOps that is used to integrate the various DevOps stages. The testing process is also automated, and results are instantly reported to the user. There are innumerable Continuous Integration tools available in the market, each providing access to different unique features. These have open-source as well as paid versions; depending upon the need of the user, the most suitable can be selected. Although all Continuous Integration tools are designed to perform the same basic functions, choosing the best-suited CI tool becomes important in the long run. Depending upon factors like features, cost, ease of use, etc., more than one tool can also be chosen to meet varied needs, rather than a single solution. Comparing the best Continuous Integration tools available in the market, below is a list of the 10 best and most widely used Continuous Integration tools which must not be ignored in 2023.
## Continuous Integration Tools
### 1. Jenkins
Jenkins is a well-known and the most common Continuous Integration tool available today. Based on various comparisons, [Jenkins](https://jenkins.io/) tops the list. Jenkins is an open-source Continuous Integration server application that allows developers to build, automate and test any software project at a faster pace. It was originally part of the Hudson project started by Kohsuke Kawaguchi in 2004, but it was later released under the name Jenkins in 2011. The tool has evolved over the years and has become the most reliable software delivery [automation tool](https://www.pcloudy.com/rapid-automation-testing/). The source code is in Java, with a few Groovy, Ruby and Antlr files. It has almost 1400 plugins to support the automation of development tasks. Jenkins supports the entire software development life cycle: building, testing, documenting and deploying. Jenkins ships as a WAR file that allows easy installation; it needs to be dropped into a JEE container, and the setup can be run easily from there.
**Key Features:**
1. It is an open-source Continuous Integration server
2. It is written in Java and comes with thousands of plugins that help in the build, automation and deployment of any software project
3. It can be installed easily on any operating system
4. User-friendly interface that is easy to configure, with easy upgrades
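To give a flavour of Jenkins automation, here is a minimal declarative Jenkinsfile sketch (the stage names and shell commands are purely illustrative):

```groovy
pipeline {
  agent any
  stages {
    stage('Build') {
      steps { sh 'npm install' }
    }
    stage('Test') {
      steps { sh 'npm test' }
    }
  }
}
```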
### 2. Buddy
Buddy is a web-based, self-hosted Continuous Integration (CI) and Continuous Delivery (CD) tool, also known as Buddy Works. Buddy is a serious advancement as one of the trusted [CI CD tools](https://www.pcloudy.com/blogs/using-jenkins-as-your-go-to-ci-cd-tool/). It has an extremely friendly user interface and is also the simplest tool to use for web developers, designers and quality assurance teams. Git developers can use this tool for building, testing and deploying websites and applications using code from GitHub, Bitbucket and GitLab.
**Key Features:**
1. Steps for launching containers, automating deployment, and setting up monitoring can be easily customised
2. Build, Ship and Deploy as an inbuilt stack feature
3. Can be deployed to any hosting and cloud service provider
4. Supports Grunt, Gulp, MongoDB, and MySQL
5. Real-time reports on progress, logs and history can be monitored
6. Docker-based builds and tests
### 3. TeamCity
TeamCity, first released in 2006, is a Continuous Integration tool developed by JetBrains. It runs in a Java environment and is used to build and deploy different projects. It supports integration with many cloud technologies like Microsoft Azure, VMware and Amazon.
**Key Features:**
1. It is a free-of-cost Continuous Integration tool
2. Supports platforms like Java, .NET and Ruby
3. Allows easy integration with IDEs like Eclipse, IntelliJ IDEA and Visual Studio
4. Allows code coverage and inspection, performs duplicate checks, and creates history reports of any changes made
5. It supports running multiple builds and tests under different platforms and environments
### 4. Bamboo CI
Bamboo is another Continuous Integration (CI) and Continuous Deployment (CD) tool, developed by Atlassian. It is written in Java and supports other languages and technologies like CodeDeploy, Docker, Maven, Git, SVN, Mercurial, Ant, AWS, etc. The tool performs automatic builds, testing and deployments. Automation thus saves time and allows developers some extra time to focus on the strategic aspects of the product.
**Key Features:**
1. Bamboo can build, test and deploy multiple projects simultaneously, and in case of any build failure it provides analysis and failure reports
2. The current status of builds and of the server can be monitored with the help of the REST API provided by Bamboo
3. Bamboo supports [testing tools](https://www.pcloudy.com/) like PHPUnit, JUnit and Selenium
4. It is compatible with JIRA and Bitbucket
5. Bamboo is related to other Atlassian products like JIRA, Confluence and Clover, allowing the developers and the other team members to be on the same page
6. It can also import data from Jenkins
### 5. GitLab CI
GitLab CI is a complete code management platform with multiple mini tools, each performing a different set of functions across the complete SDLC. It is owned by GitLab Inc. and was created by engineers Dmitriy Zaporozhets and Valery Sizov. It provides important analysis on code reviews, bug management and CI/CD in a single web-based repository, which also makes it a highly demanded CI/CD tool. GitLab CI is written in Ruby and Go, and its core functionality is released under an open-source MIT license, keeping the rest of the functions under a proprietary license.
**Key Features:**
1. It is directly integrated with the GitLab workflow
2. Presents all critical information on the code's progress in a single dashboard
3. Free for the Community Edition; the Enterprise version is a paid one
4. Build scripts are plain command-line scripts, allowing you to program them in any language
5. APIs are provided to allow better product integrations
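GitLab CI is configured through a `.gitlab-ci.yml` file committed to the repository. A minimal illustrative sketch (the job name, image and commands are assumptions):

```yaml
stages:
  - test

unit-tests:
  stage: test
  image: node:18
  script:
    - npm ci
    - npm test
```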
### 6. Circle CI
Circle CI is one of the best Continuous Integration and Delivery tools available in the market. CircleCI provides a great platform for build and test automation along with a comprehensive deployment process. It can be integrated with GitHub, GitHub Enterprise and Bitbucket to create builds. It also supports on-cloud Continuous Integration. Because of its strong features and efficient performance in this space, it is highly recommended by experts.
**Key Features:**
1. It easily integrates with Bitbucket, GitHub, and GitHub Enterprise
2. It allows branch-focused deployment
3. It performs easy bug cleanup, runs tests quickly and is highly customizable
4. Easily integrates with AWS, Google Cloud and other services
5. Build tools like Maven and Gradle can be easily integrated
### 7. Codeship
The Codeship Continuous Integration tool was acquired by CloudBees. It is praised by its users for its combination of build and deployment features. It is efficient, simple and deploys directly from GitHub and Bitbucket. Its integration and delivery features are combined in such a way that deployment becomes more reliable as soon as the code is automatically tested.
**Key Features:**
1. It offers a very supportive environment when it comes to compatibility with different technologies and languages, and deployment in the environments of your choice
2. It has very fast and strong developer support and is very easy to use
3. It also supports third-party tool integrations very well
4. It requires a single sign-up for GitHub, Bitbucket and GitLab
5. Allows simple configuration file management, easy monitoring and scale-up as per the need
### 8. CruiseControl
CruiseControl is a Java-based Continuous Integration platform. It is popular for supporting various source controls, email notifications and build technologies with the help of plugins. It is written in Java and has .NET (CCNet) and Ruby (CruiseControl.rb) versions as well.
**Key Features:**
1. Supplies builders for Ant, NAnt, Maven, Phing, Rake, and Xcode
2. It is an open-source framework
3. Allows custom build loops for build cycles
4. Its web interface provides a visual status of the builds
5. Provides JSP reporting for managing build results
### 9. BuildBot
Buildbot is a software development Continuous Integration platform that allows automatic compilation and testing in order to validate any changes that occur in a project. It is written in Python. Originally created by Brian Warner, it is now maintained by the developer Dustin Mitchell. It is popular for performing complex automation of Development Life Cycle processes and for application deployment. It is among the tools that allow distribution and execution of programs in parallel on different platforms.
**Key Features:**
1. It is an open-source Continuous Integration platform
2. Automates complex builds and application deployment, and manages complicated software releases
3. Allows time estimation of build completion, as it provides real-time insight into the build progress
4. Written in Python, with Python and Twisted required on the host
5. Supports distributed, parallel execution across multiple platforms and provides extensive status reporting
### 10. GoCD
The GoCD Continuous Integration server is owned by ThoughtWorks. It streamlines the build, automation and deployment of complex build cycles. Its top USP is its plugin system: you can use existing plugins or design custom plugins for any requirement in the CI/CD process. It follows a business continuity concept, under which multiple servers can be set up in order to keep data readily available in an emergency. It is compatible with Windows, OSX, AWS AMIs, Docker, Debian/APT, RPM/YUM, and Zip. It can run tests in multiple languages and provides robust reports and insights.
**Key Features:**
1. It is an open-source Continuous Integration server
2. It allows the deployment of any preferred version of an application
3. It easily configures dependencies based on the last report and allows on-demand deployments
4. There are numerous plugins available, and they can also be customised as per the requirement
5. It reuses pipeline configuration, keeping the configuration organised with the help of its template system
6. The entire workflow can be tracked and watched with a good tracking and feedback system, allowing the developer to track changes from commit through deployment in a single place
## Conclusion
The above list of the best [Continuous Integration tools](https://www.pcloudy.com/10-best-continuous-integration-tools/) describes each of the ten tools in detail and covers the best of them along with their main features. This information is insightful for those who still haven't thought of adopting these automation tools to build and deploy the various aspects of software development projects. Continuous Integration, delivery and deployment are very critical and complex parts of software delivery. They need to be handled with care in order to fetch great results. Choosing the right tool for your business would certainly help handle this responsibility well. It is not about choosing one best tool; multiple tools can also be selected based on the requirements of the project. As CI/CD continues to grow and evolve, it leaves innovators with more chances to explore creating the best versions of such tools.
1,386,488 | Ansible : Deploying a Node.js App on EC2 Using Ansible | Deploying a Node.js App on EC2 Using Ansible Ansible is a popular open-source tool used... | 0 | 2023-03-03T13:14:40 | https://dev.to/arunbingari/ansible-deploying-a-nodejs-app-on-ec2-using-ansible-2o97 | devops, ansible, node, aws | ## Deploying a Node.js App on EC2 Using Ansible
Ansible is a popular open-source tool used for automation, configuration management, and orchestration of IT infrastructure. It is widely used by system administrators, DevOps engineers, and developers to simplify the deployment process of various applications on different environments. In this blog post, we will explore how to use Ansible to deploy a Node.js application on an EC2 instance.
**Requirements**
1. An AWS account with an EC2 instance launched
2. Ansible installed on your local machine
3. Basic knowledge of Ansible
Let's dive into the steps involved in deploying a Node.js app on EC2 using Ansible.
**Step 1: Setting Up the Inventory File**
The inventory file contains the list of servers or hosts that Ansible will manage. In our case, we need to specify the IP address of our EC2 instance. Here is an example of how our inventory file should look like:
```ini
[webservers]
1.1.1.1 ansible_ssh_private_key_file=~/.ssh/id_rsa ansible_user=root
```
This file defines a single host, with the IP address 1.1.1.1. We also specify the SSH private key file location and the user to connect to the host.
**Step 2: Writing the Ansible Playbook**
The Ansible playbook contains the instructions for Ansible to execute on the remote hosts. In our case, we need to perform three tasks: install Node.js and npm, create a new Linux user, and deploy the Node.js application. Here's how our playbook should look like:
```yaml
---
- name: Install node and npm # A name to identify the playbook
hosts: 1.1.1.1 # The target host to execute the tasks on
tasks: # List of tasks to be performed
- name: Update apt repo and cache
apt: update_cache=yes force_apt_get=yes cache_valid_time=3600 # Update apt repository and cache
- name: Install nodejs and npm
apt: # Install Node.js and NPM
pkg:
- nodejs
- npm
- name: Create new linux user
hosts: 1.1.1.1 # The target host to execute the tasks on
tasks: # List of tasks to be performed
- name: Create linux user
user: # Create a new Linux user
name: arun
comment: arun admin
group: admin
- name: Deploy nodejs app
hosts: 1.1.1.1 # The target host to execute the tasks on
become: True # Switch to the root user for executing tasks
become_user: arun # Set the user as "arun" to perform tasks
tasks: # List of tasks to be performed
- name: unpack the nodejs file
unarchive: # Unpack the Node.js app
src:
dest: /home/arun
- name: Install dependencies
npm: # Install app dependencies
path: /home/arun/packages
- name: Start the application
command: # Start the Node.js app
chdir: /home/arun/packages/app
cmd: node server
async: 1000 # Run the command asynchronously
poll: 0 # Do not wait for the command to finish
- name: Ensure app is running
shell: ps aux | grep node # Check if the app is running
register: app_status # Register the output of the command as a variable
- debug: msg={{app_status.stdout_lines}} # Print the output of the previous task for debugging purposes
```
## In the above playbook, we define three plays.
**Play 1: Install node and npm**
This play installs the Node.js runtime and the Node Package Manager (npm) on the remote host. It uses the `apt` module to update the apt repository and cache, and then install the `nodejs` and `npm` packages.
**Play 2: Create new Linux user**
This play creates a new user on the remote host. It uses the `user` module to create a new user with the name "arun", a comment "arun admin", and assigns the user to the "admin" group.
**Play 3: Deploy nodejs app**
This play deploys the Node.js application on the remote host. It first switches to the user "arun" using the `become_user `and `become `parameters to execute subsequent tasks as that user. It then proceeds to perform the following tasks:
- Unpack the nodejs file using the `unarchive` module.
- Install dependencies using the `npm` module.
- Start the application using the `command `module, which executes the command to start the Node.js server and sets the working directory to the application directory.
- Check if the application is running using the `shell` module to run a command that lists all processes with the name "node".
- Debug the output of the previous task using the `debug `module to print the output of the previous command.
Each play is executed sequentially, with each play building on the previous one to ultimately deploy the Node.js application on the remote host.
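One caveat with the final check: `ps aux | grep node` will also match the grep process itself in its output. A more robust alternative (a sketch, not part of the original playbook) is to wait for the app's port, assuming the server listens on port 3000:

```yaml
- name: Wait for the app to start listening
  wait_for:
    port: 3000
    timeout: 30
```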
**Step 3: Running the Ansible Playbook**
To run the Ansible playbook, execute the following command:
```bash
ansible-playbook -i inventory playbook.yml
```
This command will execute the playbook.
**Conclusion**
_This project demonstrates how Ansible can be used to automate the deployment of a Node.js application on an EC2 instance. With this playbook, you can easily and quickly deploy your Node.js application on a new server without any manual configuration._ | arunbingari |
1,386,685 | Mastering CSS Border Style: A Comprehensive Guide | CSS Borders refers to the line outside the padding and inside the margin of the CSS Box Model. This... | 0 | 2023-03-03T15:24:45 | https://www.lambdatest.com/blog/css-borders/ | css, webdev, beginners |
CSS Borders refers to the line outside the **padding** and inside the **margin** of the CSS Box Model. This line wraps around the padding and content of every HTML element, both semantic and non-semantic.
Border lines don't have to be solid straight lines. We can apply different styles to change their look and feel, as you will see as we learn more about them in this blog on CSS Borders.
Understanding how to use CSS Borders can improve our work with web elements. In this blog on CSS Borders, you will learn how to utilize CSS to generate and customize borders.
## Introduction to CSS Borders
Since borders separate between padding and margin, CSS Borders can be used as a good color contrast between a web element and the web page’s background. For instance, if the background color of a web page is light in value, we can simply apply a dark or slightly darker color to a border of a particular web element on the web page. This will help provide a good visual contrast.
A good example of web elements we can apply borders to are buttons, cards, form fields, etc. The image below shows us how we can work with borders.

A quick look at the CSS Box Model should give us an idea of where the border can be found and what it looks like.

From the above illustration, you will notice that the **border** sits on top of the **padding** and, at the same time, below the **margin**.
As you’ll soon notice, the CSS **border** property is a shorthand meant to declare the main border properties in a single declaration, without having to write out each sub-property.
For clarity purposes, we’ll learn how to use each property associated with the CSS **border** property, which includes:
* border-width
* border-color
* border-style
* border-radius
* border-image
Sub-properties are very useful for targeting individual properties and can improve how you style your web elements; when you understand how it works.
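For example, with a hypothetical `.card` class, the shorthand and its longhand equivalents look like this:

```css
/* Shorthand: width, style and color in a single declaration */
.card { border: 2px dashed teal; }

/* Longhand equivalent, using the individual sub-properties */
.card {
  border-width: 2px;
  border-style: dashed;
  border-color: teal;
}
```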
The CSS **border** property should not be confused with the CSS **outline** property, THEY ARE NOT THE SAME THING.

## Border-style property
This property is used to set the style or decoration of the border of a web element. This property takes in the predefined values of the following:
* **Solid**: sets a single straight line around a web element.
* **Dashed**: sets a series of dashes (short hyphenated straight lines) around a web element.
* **Dotted**: sets a series of rounded dots around a web element.
* **Hidden**: hides the border.
* **Double**: sets two parallel straight lines around a web element.
* **Groove**: sets a carved-out 3D appearance. This is the opposite of the ridge.
* **Ridge**: sets an extrude in 3D appearance. This is the opposite of the groove.
* **Inset**: sets an embedded 3D appearance. This is the opposite of the outset.
* **Outset**: sets an embossed 3D appearance. This is the opposite of the inset.
* **None**: removes the border. Behaves just like the hidden value.
It’s important to note that without the border-style property defined, other CSS Border properties, such as ***border-width*** and ***border-color***, will not take effect (***border-radius*** is an exception, as we’ll see later). So border-style should always be defined.
For a better understanding, see the illustration below.

Now let’s see how we can apply this using code.
Type and run the code below.
**HTML:**
```
<body>
<main>
<div class="box">
<section class="b dotted">border with dotted</section>
<section class="b solid">border with solid</section>
<section class="b dashed">border with dashed</section>
<section class="b hidden">border with hidden</section>
<section class="b doubled">border with double</section>
<section class="b groove">border with groove</section>
<section class="b ridge">border with ridge</section>
<section class="b inset">border with inset</section>
<section class="b outset">border with outset</section>
<section class="b none">border with none</section>
</div>
</main>
</body>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
width: 300px;
margin: 20px;
}
.dotted {
border-style: dotted;
border-top-width: 2px;
border-color: black;
}
.solid {
border-style: solid;
border-left-width: 2px;
}
.dashed{
border-style: dashed;
border-right-width: 2px;
}
.hidden {
border-style: hidden;
border-right-width: 9px;
}
.doubled {
border-style: double;
border-right-width: 2px;
}
.groove {
border-style: groove;
border-right-width: 2px;
}
.ridge {
border-style: ridge;
border-right-width: 2px;
}
.inset {
border-style: inset;
border-right-width: 2px;
}
.outset {
border-style: outset;
border-right-width: 2px;
}
.none {
border-style: none;
border-right-width: 2px;
}
```
**Browser Output:**

***Check this out: Need a great solution for cross browser testing on Safari? Forget about emulators or simulators — use real online browsers. Try LambdaTest to test on [safari browser online](https://www.lambdatest.com/test-on-safari-browsers?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage)!***
## Sub properties of border style
The CSS **border-style** property is a shorthand for the following properties below.
* **border-top-style**: sets the line style of the web element’s top border.
* **border-bottom-style**: sets the line style of the web element’s bottom border.
* **border-left-style**: sets the line style of the web element’s left border.
* **border-right-style**: sets the line style of the web element’s right border.
When we use the CSS **border-style** property, any value assigned automatically targets the four sides of the web element’s border: **top**, **bottom**, **left**, and **right**.
So when we need to style one side of the web element, we can use the sub-properties designed to handle this task.
Type and run the code below.
**HTML:**
```
<main>
<div class="box">
<section class="b top">border-top-style</section>
<section class="b bottom">border-bottom-style</section>
<section class="b left">border-left-style</section>
<section class="b right">border-right-style</section>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
width: 300px;
margin: 20px;
}
.top {
border-top-style: solid;
}
.bottom{
border-bottom-style: dashed;
}
.left {
border-left-style: dotted;
}
.right {
border-right-style: double;
}
```
**Browser Output:**

Test your CSS Borders across 3000+ real browsers and OS. [Try LambdaTest Now!](https://accounts.lambdatest.com/register?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=register)
***Check this out: Test your native, hybrid, and web apps across all legacy and latest mobile operating systems on the most powerful [online emulator Android](https://www.lambdatest.com/android-emulator-online?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage).***
## Border-width property
This property is used to set the thickness of the border. To do this, we can use either **keyword values** or **length values** with the CSS **border-width** property.
* **Keyword Values**: thin, medium, thick (these are predefined values). These values are fixed.
* **Length Values**: px, rem, em, pt (these are CSS length units). These values are not fixed, so we can decide exactly how thick or thin we want the border to appear. Note that percentage (**%**) values are not valid for **border-width**; if one is used, the browser ignores the declaration.
Type and run the code below.
**HTML:**
```
<main>
<div class="box">
<section class="b thin">A border with thin line</section>
<section class="b medium">A border with medium line</section>
<section class="b thick">A border with thick line</section>
<section class="b px">A border using the CSS px unit</section>
<section class="b rem">A border using the CSS rem unit</section>
<section class="b percentage">A border using the CSS em unit</section>
<section class="b no-style">border style is not defined</section>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
width: 300px;
margin: 20px;
}
.thin {
border-style: solid;
border-width: thin;
}
.medium {
border-style: solid;
border-width: medium;
}
.thick {
border-style: solid;
border-width: thick;
}
.px {
border-style: solid;
border-width: 7px;
}
.rem {
border-style: solid;
border-width: 0.07rem;
}
.percentage {
/* percentage values are not valid for border-width; a length unit is used instead */
border-style: solid;
border-width: 0.3em;
}
.no-style {
border-width: 7px;
}
```
**Browser Output:**

From the browser output, you’ll notice the fourth element has the largest border width on the webpage compared to the others. This is because we assigned it a **border-width** of **7px**, but we can make it even larger if we want, as long as we use CSS length units (such as px, rem, em, etc.)
***Check this out: Test your native, hybrid, and web apps across all legacy and latest mobile operating systems on the most powerful [Android online emulator](https://www.lambdatest.com/android-emulator-online?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage).***
## Sub properties of border width
The **px** unit is absolute: a **border-width** of **7px** stays at **7px** even if the user increases or decreases the browser font size. Units like **rem** and **em**, on the other hand, are relative to the root and parent element font sizes, so they scale to fit the adjusted size. This is great for accessibility and responsive design.
The keyword values (thin, medium, and thick) only give us a fixed, predefined **width**.
As you may have noticed, the last element does not display a border. This is because we did not assign a CSS **border-style** to it; **border-style** is required for a border to appear on any element.
The CSS border-width property is a shorthand for the following properties below.
* **border-top-width:** sets the line width of the web element’s top border.
* **border-bottom-width:** sets the line width of the web element’s bottom border.
* **border-left-width:** sets the line width of the web element’s left border.
* **border-right-width:** sets the line width of the web element’s right border.
As we’ve seen before with the CSS **border-style** property, any value assigned to CSS **border-width** automatically targets the four sides of the web element’s border: **top**, **bottom**, **left**, and **right**.
So to target each side individually, the border-width sub-properties should be used.
Type and run the code below.
**HTML:**
```
<main>
<div class="box">
<section class="b top">A border top width</section>
<section class="b bottom">A border bottom width</section>
<section class="b left">A border left width</section>
<section class="b right">A border right width</section>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
width: 300px;
margin: 20px;
}
.top {
border-style: solid;
border-top-width: 9px;
}
.bottom {
border-style: solid;
border-bottom-width: 9px;
}
.left {
border-style: solid;
border-left-width: 9px;
}
.right {
border-style: solid;
border-right-width: 9px;
}
```
**Browser Output:**

From the browser output, we assigned a **width** of **9px** to the **top**, **bottom**, **left**, and **right** border of each element, respectively.
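Besides the sub-properties, the **border-width** shorthand follows the same one-to-four value pattern as **border-style**, so different sides can get different widths in one declaration:

```
/* top 1px, right 2px, bottom 3px, left 4px (clockwise from the top) */
.mixed {
border-style: solid;
border-width: 1px 2px 3px 4px;
}
/* top & bottom thin, left & right thick */
.pair {
border-style: solid;
border-width: thin thick;
}
```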
## Border-color property
This property is used to set the color of the element’s border.
Type and run the code below.
**HTML:**
```
<main>
<div class="box">
<section class="b green">A border with green color</section>
<section class="b red">A border with red color</section>
<section class="b rgb1">A border with rgb color value</section>
<section class="b rgb2">A border with rgb color value</section>
<section class="b hexcode1">
A border with hexcode color value
</section>
<section class="b hexcode2">
A border with hexcode color value
</section>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
width: 300px;
margin: 20px;
}
.green {
border-style: solid;
border-width: 9px;
border-color: green;
}
.red {
border-style: solid;
border-width: 9px;
border-color: red;
}
.rgb1 {
border-style: solid;
border-width: 9px;
border-color: rgb(32, 113, 128);
}
.rgb2 {
border-style: solid;
border-width: 9px;
border-color: rgb(231, 248, 76);
}
.hexcode1 {
border-style: solid;
border-width: 9px;
border-color: #1301b3;
}
.hexcode2 {
border-style: solid;
border-width: 9px;
border-color: #0d0b25;
}
```
**Browser Output:**

From the browser output, we applied colors to the borders of the elements on the web page using color keywords, rgb, and hex code values.
***Check this out: Are you using [Playwright](https://www.lambdatest.com/playwright-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage) for automation testing? Run your Playwright test scripts instantly on 50+ browser/OS combinations using the LambdaTest cloud. ***
## Sub properties of border color
The CSS **border-color** property is a shorthand for the following properties below.
* **border-top-color:** sets the color of the web element’s top border.
* **border-bottom-color:** sets the color of the web element’s bottom border.
* **border-left-color:** sets the color of the web element’s left border.
* **border-right-color:** sets the color of the web element’s right border.
As with the other border properties, any value assigned to the CSS **border-color** property automatically targets the four sides of the web element’s border: **top**, **bottom**, **left**, and **right**.
We use these sub-properties when there is a need to target individual borders and apply different colors.
For better understanding, let’s see this in action.
Type and run the code below.
**HTML:**
```
<main>
<div class="box">
<section class="b top">A border top color</section>
<section class="b bottom">A border bottom color</section>
<section class="b left">A border left color</section>
<section class="b right">A border right color</section>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
width: 300px;
margin: 20px;
}
.top {
border-style: solid;
border-width: 9px;
border-top-color: green;
}
.bottom {
border-style: solid;
border-width: 9px;
border-bottom-color: red;
}
.left {
border-style: solid;
border-width: 9px;
border-left-color: rgb(32, 113, 128);
}
.right {
border-style: solid;
border-width: 9px;
border-right-color: rgb(231, 248, 76);
}
```
**Browser Output:**

From the browser output, we created four separate section elements and applied a CSS **border-style** of **solid**, a **border-width** of **9px**, and a different color to each element’s border.
The **border-top-color**, **border-bottom-color**, **border-left-color**, and **border-right-color** properties are applied to a different side of each web element, respectively.
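As with **border-style** and **border-width**, the **border-color** shorthand also accepts one to four values, so each side can be colored in a single declaration instead of four separate sub-properties:

```
/* top green, right red, bottom blue, left orange (clockwise from the top) */
.multi {
border-style: solid;
border-width: 9px;
border-color: green red blue orange;
}
```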
## Border-radius property
This property is used to apply rounded corners to an element’s outer edge or border.
Unlike the other CSS **border** properties, the CSS **border-radius** property takes effect with or without an element border: you don’t need to define a CSS **border-style** property for it to work.
The code below shows us how this works.
Type and run the code below.
**HTML:**
```
<main>
<div class="box">
<section class="b square">A square with a border-radius</section>
<section class="b circle">A circle with a border-radius</section>
<section class="b rectangle">A rectangle with a border-radius</section>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
width: 300px;
margin: 20px;
}
.square {
width: 150px;
height: 150px;
color: white;
background-color: skyblue;
border-radius: 12px;
}
.circle {
width: 150px;
height: 150px;
color: white;
background-color: coral;
border-radius: 50%;
}
.rectangle {
width: 400px;
height: 100px;
color: white;
background-color: darkmagenta;
border-radius: 0px 20px;
border: 9px solid #000;
}
```
**Browser Output:**

The CSS **border-radius** property behaves differently from the other border properties: it rounds the four corners of the element even when the element’s **border** is set to **hidden** or **none**.
From the browser’s output, you’ll notice that the three elements have different shapes. The first element is a square with a **width** and **height** of **150px** and a CSS **border-radius** of **12px**, which produces a rounded-corner effect.
The second element is a circle. It is set to the same **width** and **height** of **150px**, but this time with a CSS **border-radius** of **50%**, which creates a circular effect.
The third element is a rectangle with a **width** of **400px** and a **height** of **100px**. Its CSS **border-radius** is set to **0px** on the top-left and bottom-right corners and **20px** on the top-right and bottom-left corners.
Also, we introduced a **border** with a **width** of **9px**, a **style** of **solid**, and a **color** of **black**. Notice how this is done in one line of code; we’ll explain how this works in an upcoming section.
***Check this out: A comprehensive [end to end Testing](https://www.lambdatest.com/learning-hub/end-to-end-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=learning_hub) tutorial that covers what E2E Testing is, its importance, benefits, and how to perform it with real-time examples.***
## Sub properties of border radius
As mentioned earlier, the CSS **border-radius** property is a **shorthand** for the following properties:
* **border-top-left-radius:** sets the radius of the web element’s top-left border.
* **border-top-right-radius:** sets the radius of the web element’s top-right border.
* **border-bottom-right-radius:** sets the radius of the web element’s bottom-right border.
* **border-bottom-left-radius:** sets the radius of the web element’s bottom-left border.
Type and run the code below.
**HTML:**
```
<main>
<div class="box">
<section class="b top-left">
A border radius for top-left border
</section>
<section class="b top-right">
A border radius for top-right border
</section>
<section class="b bottom-right">
A border radius for bottom-right border
</section>
<section class="b bottom-left">
A border radius for bottom-left border
</section>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
width: 300px;
margin: 20px;
}
.top-left {
width: 400px;
height: 50px;
color: white;
background-color: rgb(0, 157, 219);
border-top-left-radius: 12px;
}
.top-right {
width: 400px;
height: 50px;
color: white;
background-color: rgb(15, 81, 107);
border-top-right-radius: 12px;
}
.bottom-right {
width: 400px;
height: 50px;
color: white;
background-color: rgb(15, 19, 252);
border-bottom-right-radius: 12px;
}
.bottom-left {
width: 400px;
height: 50px;
color: white;
background-color: rgb(3, 4, 85);
border-bottom-left-radius: 12px;
}
```
**Browser Output:**

Applying CSS border-radius is very popular in web design and development. Understanding and using this property effectively will improve your design elements.
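The **border-radius** shorthand also accepts up to four values (in the order top-left, top-right, bottom-right, bottom-left), plus a slash syntax for elliptical corners:

```
/* top-left 10px, top-right 20px, bottom-right 30px, bottom-left 40px */
.card { border-radius: 10px 20px 30px 40px; }
/* elliptical corners: 40px horizontal radius / 20px vertical radius */
.pill { border-radius: 40px / 20px; }
```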
## Border-image property
This property is used to set an image as the border of an element in place of a borderline.
This property can be used to declare all the border image sub-properties in one line of code.
**Syntax:**
```
border-image: url("/folder/image.jpg") 16 / 40px / 10px stretch;
```
Just like most of the border properties, the CSS **border-image** property is a shorthand for the following properties:
* **border-image-source**
* **border-image-slice**
* **border-image-width**
* **border-image-outset**
* **border-image-repeat**
We will use the following images from Unsplash; you can download them here: Image1, Image2, and Image3.
***Check this out: Are you using Playwright for automation testing? Run your [Playwright test](https://www.lambdatest.com/playwright-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage) scripts instantly on 50+ browser/OS combinations using the LambdaTest cloud.***
## border-image-source
Used to set the image path to be used.
Type and run the code below.
**HTML:**
```
<div class="b two"></div>
```
**CSS:**
```
.two {
width: 100px;
height: 100px;
background-color: lime;
border-image-source: url("/flower2.jpg");
border-image-width: 40px;
border-style: solid;
border-color: transparent;
}
```
**Browser Output:**

From the code sample above, we use the CSS **border-image-source** property to apply an image to the element’s border. We also apply a CSS **border-image-width** of **40px**; we’ll explain how this works shortly.
***Check this out: This [Playwright automation](https://www.lambdatest.com/blog/playwright-framework/?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=blog) tutorial will guide you through the setup of the Playwright framework, which will enable you to write end-to-end tests for your future projects.***
## border-image-width
Used to set the width of the border image. It accepts up to four values (lengths, numbers, or percentages), applied to the top, right, bottom, and left sides.
Type and run the code below.
**HTML:**
```
<main>
<div class="box">
<div class="b one"></div>
<div class="b two"></div>
<div class="b three"></div>
<div class="b four"></div>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
margin: 10px;
}
.one {
width: 100px;
height: 100px;
background-color: red;
border-image: url("/flower.jpg") 90 repeat stretch;
border-image-width: 10px;
border-style: dotted;
border-color: transparent;
}
.two {
width: 100px;
height: 100px;
background-color: lime;
border-image: url("/flower2.jpg") 40% round ;
border-image-width: 30px 10px;
border-style: solid;
border-color: transparent;
}
.three {
width: 100px;
height: 100px;
background-color: cyan;
border-image: url("/flower3.jpg") 90 repeat stretch;
border-image-width: 50px;
border-style: double;
border-color: transparent;
}
.four {
width: 100px;
height: 100px;
background-color: cyan;
border-image: url("/flower3.jpg") 90 repeat stretch;
border-width: 50px;
border-style: double;
border-color: transparent;
}
```
**Browser Output:**

From the code sample above, we created four div tags with a **width** and **height** of **100px**. We applied a border image to each div tag. These images are set to different widths with the CSS **border-image-width** property, which affects the structure of the elements, as you can see from the browser output.
The first element takes a **border-image-width** of **10px**, which gives the image a **10px** width around the element. The second element takes **30px** for the **top** and **bottom** and **10px** for the **left** and **right**. For the third and fourth elements, we applied the same styles to both but different width properties: the third element takes a CSS **border-image-width** property, while the fourth takes a CSS **border-width** property. This illustrates the difference between the two properties, so we don’t mistake one for the other.
The block arrow shows us how the CSS **border-image-width** affects element structure compared to CSS **border-width**.
## border-image-slice
Used to slice the image used as the border image into nine regions (four corners, four edges, and the middle) that are placed around the element’s border. It takes numbers, percentages (**%**), or the **fill** keyword as values.
Type and run the code below.
**HTML:**
```
<main>
<div class="box">
<div class="b one"></div>
<div class="b two"></div>
<div class="b three"></div>
<div class="b four"></div>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
margin: 10px;
}
.one {
width: 100px;
height: 100px;
background-color: red;
border-image-source: url("/flower.jpg");
border-image-width: 10px;
border-style: dotted;
border-color: transparent;
}
.two {
width: 100px;
height: 100px;
background-color: lime;
border-image-source: url("/flower2.jpg");
border-image-width: 40px;
border-style: solid;
border-color: transparent;
}
.three {
width: 100px;
height: 100px;
background-color: cyan;
border-image-source: url("/flower3.jpg");
border-image-width: 50px;
border-style: double;
border-color: transparent;
}
.four {
width: 100px;
height: 100px;
background-color: cyan;
border-image-source: url("/flower3.jpg");
border-width: 50px;
border-style: solid;
border-color: transparent;
}
```
**Browser Output:**

From the browser output above, this is how the **border-image** appears without the CSS border-image-slice property.
Now let’s look at how the **border-image** appears when we assign the CSS **border-image-slice** property.
Type and run the code below.
**CSS:**
```
.one {
border-image-slice: 5;
}
.two {
border-image-slice: 20%;
}
.three {
border-image-slice: fill;
}
.four {
border-image-slice: 50;
}
```
**Browser Output:**

Apart from the third element, which is assigned a value of **fill**, all the other elements have changed from the cross-like shape they had in the previous example.
***Check this out: Test your native, hybrid, and web apps across all legacy and latest mobile operating systems on the most powerful [online Android emulator](https://www.lambdatest.com/android-emulator-online?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage).***
## border-image-repeat
This is used to set how the image is distributed along the element’s border. Depending on the keyword value assigned, the image fills the space along each edge by stretching, rounding, repeating, or spacing.
For better understanding, type and run the code below.
**HTML:**
```
<main>
<div class="box">
<div class="b one"></div>
<div class="b two"></div>
<div class="b three"></div>
<div class="b four"></div>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
margin: 10px;
}
.one {
width: 100px;
height: 100px;
background-color: red;
border-image-source: url("/flower.jpg");
border-image-slice: 5;
border-image-repeat: space;
border-image-width: 10px;
border-style: dotted;
border-color: transparent;
}
.two {
width: 100px;
height: 100px;
background-color: lime;
border-image-source: url("/flower2.jpg");
border-image-slice: 20%;
border-image-repeat: stretch;
border-image-width: 40px;
border-style: solid;
border-color: transparent;
}
.three {
width: 100px;
height: 100px;
background-color: cyan;
border-image-source: url("/flower3.jpg");
border-image-width: 20px;
border-image-repeat: round;
border-image-slice: 2;
border-style: double;
border-color: transparent;
}
.four {
width: 100px;
height: 100px;
background-color: cyan;
border-image-source: url("/flower3.jpg");
border-image-slice: 20;
border-image-repeat: repeat;
border-width: 50px;
border-style: solid;
border-color: transparent;
}
```
**Browser Output:**

**Notice** from the browser output the different distribution of the image across each element’s border. The first element has a value of **space**, which separates the image into squares with spaces between them. The second element takes a value of **stretch**, which stretches the image across the element’s border. The third element takes a value of **round**, which repeats the image a whole number of times, rescaling it to fit, while the fourth element takes a value of **repeat**, which tiles the image in a square-like manner across the border.
We can also assign more than one value to the CSS **border-image-repeat** property in one line of code.
Try this example below.
**HTML:**
```
<div class="b four"></div>
```
**CSS:**
```
.four {
width: 100px;
height: 100px;
background-color: cyan;
border-image-source: url("/flower3.jpg");
border-image-slice: 20;
border-image-repeat: repeat space;
border-width: 50px;
border-style: solid;
border-color: transparent;
}
```
**Browser Output:**

From the browser output, we added another value of **space** to the fourth element, which resulted in spaces being added on the element’s **left** and **right** sides, while the top and bottom retain the initial value of **repeat**.
## border-image-outset
This is used to set the distance between the border-image and the element border box.
Type and run the code below.
**HTML:**
```
<main>
<div class="box">
<div class="b one"></div>
<div class="b two"></div>
<div class="b three"></div>
<div class="b four"></div>
</div>
</main>
```
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
margin: 10px;
}
.one {
width: 100px;
height: 100px;
background-color: red;
border-image-source: url("/flower.jpg");
border-image-slice: 15;
border-image-repeat: space;
border-image-outset: 10px;
border-image-width: 10px;
border-style: dotted;
border-color: transparent;
}
.two {
width: 100px;
height: 100px;
background-color: lime;
border-image-source: url("/flower2.jpg");
border-image-slice: 20%;
border-image-repeat: stretch;
border-image-outset: 0 1rem;
border-image-width: 40px;
border-style: solid;
border-color: transparent;
}
.three {
width: 100px;
height: 100px;
background-color: cyan;
border-image-source: url("/flower3.jpg");
border-image-width: 20px;
border-image-repeat: round;
border-image-outset: 4;
border-image-slice: 2;
border-style: double;
border-color: transparent;
}
.four {
width: 100px;
height: 100px;
background-color: cyan;
border-image-source: url("/flower3.jpg");
border-image-slice: 20;
border-image-repeat: repeat;
border-image-outset: 10px;
border-width: 50px;
border-style: dotted;
border-color: transparent;
}
```
**Browser Output:**

From the browser output, the first element gives us a perfect example of how the CSS **border-image-outset** property works, as you can see how the four square boxes shoot out from the border box.
## Border (Shorthand property)
This sets all the border sub-properties in one line of code.
The example below shows us how this works. By creating a simple Click Me button project, we can see how we can assign multiple values in one line of code using the border shorthand property.
Type and run the code below.
**HTML:**
```
<main>
<div class="b box">
<button>Click Me</button>
</div>
</main>
```
**Browser Output:**

From the browser output, you’ll notice the thin line surrounding the Click Me text; this is the border line. By default, the HTML button tag has it turned on. So we will set it to **double** and add other border sub-properties to make it look nice.
**CSS:**
```
.b {
background-color: rgb(240, 181, 181);
margin: 10px;
}
.box button{
background-color: blue;
color: white;
padding: 1rem 2rem;
border: 0.5rem double #aff;
border-radius: 1em;
box-shadow: 8px 8px 8px 0 rgba(0,0,0, 0.4);
}
```
**Browser Output:**

From the browser output, we added a **border-width** of **0.5rem**, a **border-style** of **double**, and a **border-color** of hex **#aff** in a single **border** declaration, plus a **border-radius** of **1em**. We also added a box shadow to emphasize the border.
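For clarity, the one-line shorthand used above is equivalent to writing the three longhand declarations separately:

```
/* shorthand */
.box button { border: 0.5rem double #aff; }
/* equivalent longhand */
.box button {
border-width: 0.5rem;
border-style: double;
border-color: #aff;
}
```

Any value omitted from the shorthand falls back to its initial value, so `border: double` alone would reset the width to **medium** and the color to the element’s current text color.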
***Check this out: Test your apps on the cloud using our [android Emulator for iOS](https://www.lambdatest.com/mobile-emulator-for-app-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage). Ditch your in-house infrastructure, and gain access to 3000+ browsers in real time.***
## Testing CSS Borders
So far, we’ve talked about borders and border sub-properties, their uses, and how we can implement them. In this section, we will see how we can test CSS Borders.
CSS Borders are essential to web design, as they help define a website’s layout and visual appearance. Testing CSS Borders is important to ensure they appear as intended, function correctly, and prevent user experience issues. In other words, testing CSS Borders ensures visual consistency, accessibility, layout and positioning, and user interaction.
Testing your CSS Borders with [cloud testing](https://www.lambdatest.com/blog/cloud-testing-tutorial/?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=blog) platforms like LambdaTest can help you ensure that your web pages display correctly across multiple browsers and devices, improving user experience and enhancing the overall quality of your website or application.
LambdaTest is a cloud-based testing platform that allows you to test your website or application on a wide range of browsers and devices, including [real device cloud](https://www.lambdatest.com/real-device-cloud?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage), [Android Emulators](https://www.lambdatest.com/android-emulator-online?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage), and [iOS Simulators](https://www.lambdatest.com/ios-simulator-online?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage).
With the LambdaTest Real Time Testing feature, you can quickly and easily identify any issues related to your CSS Borders and address them before they impact the user experience. This saves you time and effort in [manually testing](https://www.lambdatest.com/learning-hub/manual-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=learning_hub) your website on multiple devices and browsers and ensures that your web pages look consistent and professional across all platforms.
{% youtube hJ-eP8TcGuY %}
Subscribe to the [LambdaTest YouTube Channel](https://www.youtube.com/c/LambdaTest?sub_confirmation=1) and stay updated with the latest tutorials around [Selenium testing](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage), [Cypress testing](https://www.lambdatest.com/cypress-testing?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage), and more.
***Check this out: Online mobile [emulator online](https://www.lambdatest.com/mobile-emulator-online?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=webpage) from LambdaTest allows you to seamlessly test your mobile applications, websites, and web apps on mobile browsers and mobile devices.***
## How to test CSS Borders on the cloud?
The code of our created button has already been uploaded on CodePen.
To get started with **Real Time Testing** with LambdaTest, follow these steps.
1. [Sign up for free](https://accounts.lambdatest.com/register?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=register) and log in to your LambdaTest account.
2. A dialog box should pop up. Click **Realtime Testing**.

3. This should open up the LambdaTest Real Time Browser Testing Dashboard, where you can start testing.

* Choose the browser you would like to test on (in our case, we’ll go with **Chrome**).
* Select the version, and we’ll choose version **103**.
* Select the OS you like to test on. In this case, we’ll choose **Windows 10**.
* Select a screen resolution. In this case, we’ll go with **1280 x 720**.
* In the provided field, paste the website URL you’d like to run the test on. In this case, we’ll paste the URL from the CodePen we created earlier.

4. Click on the **START** button.

You should see a loading screen; wait to let it load.

This will open the Real Time Browser Viewer hosted on the cloud running a real operating system.

This is where you can perform real time testing. The console lets you change resolutions, debug your website, and share screenshots with your teammates.
***Check this out: Automated [Functional Testing](https://www.lambdatest.com/blog/automated-functional-testing-what-it-is-how-it-helps/?utm_source=devto&utm_medium=organic&utm_campaign=mar02_sd&utm_term=sd&utm_content=blog) helps ensure that your web app works as intended. Learn more about functional tests and how automating them can give you a faster release cycle!***
## Summary
You have learned **how to create and style borders using CSS**. We covered the following border properties.
**Border-width, Border-styles, Border-colors, Border-radius, Border-image**, and their shorthand **border** properties.
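As a tiny recap of how those properties fit together, here is an illustrative rule (the selector and values below are made up for this example):

```css
.fancy-button {
  border-width: 2px;
  border-style: dashed;
  border-color: tomato;
  border-radius: 8px;
}

/* or the shorthand: width, style, and color in one declaration */
.fancy-button {
  border: 2px dashed tomato;
  border-radius: 8px; /* radius is not part of the border shorthand */
}
```

Note that `border-radius` always has to be declared separately, since the `border` shorthand only covers width, style, and color.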
Thanks for taking the time to read this blog on CSS Borders to completion. Feel free to ask questions. I’ll gladly reply. You can find me on Twitter and other social media @ocxigin.
#Cheers
| alexanie |
1,387,533 | AWS Cloud Quest: Decoupling Applications Study Note | This is the DIY challenge of the Decoupling Applications in AWS Cloud Quest. DIY... | 21,167 | 2023-03-03T23:38:38 | https://dev.to/arc/aws-cloud-quest-decoupling-applications-23mo | aws | This is the DIY challenge of the Decoupling Applications in AWS Cloud Quest.

### DIY Steps:
1. Repeat steps 6-15 to create another SQS queue and subscribe it.
| arc |
1,387,684 | 9 Reasons you need Continuous Monitoring in your company | Continuous monitoring is a step towards the end of the DevOps process. It informs all relevant teams... | 0 | 2023-03-06T02:31:00 | https://dev.to/pragyanatvade/9-reasons-you-need-continuous-monitoring-in-your-company-2170 | devops, continuousmonitoring, devjournal, programming | Continuous monitoring is a step towards the end of the DevOps process. It informs all relevant teams about the errors encountered during the production deployment.
## 9 Reasons you need continuous monitoring in your company
1. It detects any network or server problems.
2. It determines the root cause of any issues.
3. It maintains the security and availability of the service.
4. It monitors and troubleshoots server performance issues.
5. It allows us to plan for infrastructure upgrades before outdated systems cause failures.
6. It can respond to issues at the first sign of a problem.
7. It can be used to automatically fix problems when they are detected.
8. It ensures IT infrastructure outages have a minimal effect on your organization’s bottom line.
9. It can monitor your entire infrastructure and business processes.

Thanks for reading this.
If you have an idea and want to build your product around it, schedule a call with [me](https://calendly.com/pragyanatvade).
If you want to learn more in DevOps and Backend space, [follow me](https://dev.to/pragyanatvade).
If you want to connect, reach out to me on [Twitter](https://twitter.com/pragyanatvade) and [LinkedIn](https://www.linkedin.com/in/pragyanatvade/). | pragyanatvade |
1,387,695 | Python Programming: Essential Tips for Beginners, Intermediates, and Experts - Part 1 | This is going to be my first post in Dev.to In this post we are basically going to cover 15 Python... | 0 | 2023-03-04T06:20:54 | https://dev.to/debakarroy/python-programming-essential-tips-for-beginners-intermediates-and-experts-part-1-1kc7 | python, beginners, tips | This is going to be my first post in Dev.to
In this post, we are going to cover 15 Python tips: 5 beginner, 5 intermediate, and 5 advanced. You can try out the code from this post [**here**](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py).
---
### 👶 5 Beginner Python Tips for Efficient Coding
#### 👉 Tip 1: Use List Comprehensions
List comprehensions are a concise and Pythonic way to create lists. Instead of using a for loop and appending to a list, you can use a list comprehension to create a list in a single line of code. For example:
```python
# Using a for loop
squares = []
for i in range(1, 6):
squares.append(i ** 2)
print(squares) # Output: [1, 4, 9, 16, 25]
# Using a list comprehension
squares = [i ** 2 for i in range(1, 6)]
print(squares) # Output: [1, 4, 9, 16, 25]
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
✅ Use List Comprehensions When:
- You want to create a new list based on an existing list.
- You want to apply a function or operation to each element in a list.
- You want to filter a list based on a condition.
❌ Avoid Using List Comprehensions When:
- The expression in the comprehension is too complex or hard to read.
- The comprehension involves nested loops or conditions that are hard to understand.
- The resulting list would be very large and memory-intensive.
---
#### 👉 Tip 2: Use Namedtuples for Readability
Namedtuples are a subclass of tuples that allow you to give names to each field. This can make your code more readable and self-documenting. For example:
```python
from collections import namedtuple
# Defining a namedtuple
Person = namedtuple('Person', ['name', 'age', 'gender'])
# Creating an instance of the namedtuple
person = Person(name='John', age=30, gender='male')
# Accessing the fields of the namedtuple
print(person.name) # Output: John
print(person.age) # Output: 30
print(person.gender) # Output: male
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
✅ Use `namedtuple` When:
- You have a simple object that consists of a fixed set of attributes or fields.
- You want to avoid defining a full-blown class with methods and inheritance.
- You want to create objects that are immutable and hashable.
- You want to use less memory than a regular object.
❌ Avoid Using `namedtuple` When:
- You need to define methods or properties for your objects.
- You need to customize behavior based on attributes or fields.
- You need to use inheritance or mixins.
- You need to create complex objects with many interdependent fields.
---
#### 👉 Tip 3: Use Context Managers
Context managers are a way to manage resources, such as files, database connections, or network connections, in a safe and efficient way. The `with` statement in Python is how you use a context manager. For example:
```python
# Using a context manager to read a file
with open('file.txt', 'r') as f:
data = f.read()
print(data)
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
✅ Use Context Managers When:
- You need to manage resources that need to be cleaned up when you are done with them, such as files or network connections.
- You want to ensure that cleanup code is always executed, even if there are errors or exceptions.
- You want to encapsulate setup and teardown code in a reusable way.
❌ Avoid Using Context Managers When:
- You need to manage resources that are very lightweight or do not need to be cleaned up, such as integers or small strings.
- You need more fine-grained control over the setup and teardown process, such as calling multiple setup or teardown functions at different times.
- The cleanup process is very complex or requires a lot of custom code.
---
#### 👉 Tip 4: Use the Zip Function
The zip function can be used to combine multiple lists into a single list of tuples. This can be useful when iterating over multiple lists in parallel. For example:
```python
# Combining two lists using zip
names = ['John', 'Alice', 'Bob']
ages = [30, 25, 35]
for name, age in zip(names, ages):
print(f'{name} is {age} years old.')
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
✅ Use `zip()` When:
- You need to combine multiple iterables element-wise, and the iterables are of the same length.
- You want to iterate over multiple iterables simultaneously, and perform some operation on the corresponding elements.
- You want to convert multiple iterables into a single iterable that can be easily passed as an argument to a function.
❌ Avoid Using `zip()` When:
- The iterables are of different lengths, and you do not want to truncate or pad the shorter iterables.
- You need to modify or update the original iterables, rather than just iterating over them simultaneously.
- You need to perform more complex operations on the elements of the iterables, such as filtering, mapping, or reducing.
---
#### 👉 Tip 5: Use F-Strings for String Formatting
F-strings are a concise and Pythonic way to format strings. You can use curly braces {} to insert variables into a string, and you can also include expressions inside the curly braces. For example:
```python
name = 'John'
age = 30
print(f'My name is {name} and I am {age} years old.') # Output: My name is John and I am 30 years old.
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
✅ Use `f-strings` When:
- You need to format strings with expressions, variables, or literals.
- You want to embed Python expressions directly into the string, rather than concatenating them with the string using +.
- You want to simplify the syntax for formatting strings, and make the code more readable and maintainable.
- You are using Python 3.6 or later, which introduced f-strings as a new feature.
❌ Avoid Using `f-strings` When:
- You need to format strings with complex expressions or multiple variables, and the code becomes hard to read or understand.
- You need to support older versions of Python that do not support f-strings.
- You need lazy or deferred formatting (for example in logging calls), since an f-string is evaluated eagerly even if the result is never used.
---
### 🧑💻 5 Intermediate Python Tips for Efficient Coding
#### 👉 Tip 1: Use Decorators for Code Reuse
Decorators are a way to modify the behavior of a function without changing its source code. This can be useful for adding functionality to a function or for reusing code across multiple functions. For example:
```python
# Defining a decorator
def my_decorator(func):
def wrapper(*args, **kwargs):
print('Before function')
result = func(*args, **kwargs)
print('After function')
return result
return wrapper
# Using the decorator
@my_decorator
def my_function():
print('Inside function')
my_function() # Output: Before function \n Inside function \n After function
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
✅ Use Decorators When:
- You need to add behavior or functionality to existing functions or classes without modifying their code directly.
- You have a common behavior or functionality that you want to apply to multiple functions or classes.
- You want to separate the concerns of a function or class, and extract cross-cutting concerns into separate decorators.
- You want to simplify the implementation of a function or class by using decorators to handle boilerplate code, such as error handling, logging, or authentication.
❌ Avoid Using Decorators When:
- You have only a few functions or classes, and the behavior or functionality is specific to each of them.
- You have a simple behavior or functionality that can be implemented directly in the code, without requiring a decorator.
- You have a behavior or functionality that is tightly coupled to the implementation of a function or class, and modifying it using a decorator would introduce unnecessary complexity or duplication.
---
#### 👉 Tip 2: Use Generators for Memory Efficiency
Generators are a way to create iterators in a memory-efficient way. Instead of creating a list of all values, generators create values on the fly as needed. For example:
```python
# Creating a generator function
def squares(n):
for i in range(1, n+1):
yield i ** 2
# Using the generator
for square in squares(5):
print(square) # Output: 1 4 9 16 25
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
✅ Use Generators When:
- You are working with large data sets that cannot fit in memory
- You need to process the data in a sequential manner, one value at a time.
- You want to avoid loading the entire data set into memory at once, to reduce memory usage and improve performance.
- You want to iterate over a sequence of values, but don't need random access to them.
- You want to generate an infinite sequence of values, such as a Fibonacci sequence or a stream of sensor data.
❌ Avoid Using Generators When:
- You need random access to the values in the sequence, or need to iterate over the values multiple times.
- You need to modify the sequence of values, or need to filter, sort, or transform the values in a non-sequential manner.
- You have a small data set that can fit in memory, and generating the values all at once would not cause memory issues.
- You are working with non-sequential data, such as images or audio files, that require random access to different parts of the data.
---
#### 👉 Tip 3: Use Enumerations for Readability
Enumerations are a way to define named constants in Python. This can make your code more readable and self-documenting. For example:
```python
from enum import Enum
# Defining an enumeration
class Color(Enum):
RED = 1
GREEN = 2
BLUE = 3
# Using the enumeration
print(Color.RED) # Output: Color.RED
print(Color.RED.value) # Output: 1
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
✅ Use `Enum` When:
- You need a fixed set of constants with names that are easier to read and understand than raw integers or strings.
- You want to prevent errors from using incorrect values, by providing a set of valid options.
- You want to create a custom data type that can be used throughout your code, with its own methods and attributes.
❌ Avoid Using `Enum` When:
- You only need to define a few constants that are easily represented by integers or strings.
- You need to store more than just a name and a value for each constant, such as additional data or behavior.
- You need to create a set of related constants that can be used in different contexts, or that may change over time.
---
#### 👉 Tip 4: Use Map and Filter for Data Manipulation
The map and filter functions can be used to transform and filter data in a concise and efficient way. For example:
```python
# Using map to transform data
numbers = [1, 2, 3, 4, 5]
squares = map(lambda x: x ** 2, numbers)
print(list(squares)) # Output: [1, 4, 9, 16, 25]
# Using filter to filter data
numbers = [1, 2, 3, 4, 5]
evens = filter(lambda x: x % 2 == 0, numbers)
print(list(evens)) # Output: [2, 4]
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
✅ Use `map()` When:
- You need to apply the same function to every element of an iterable object, and transform the elements into a new iterable.
- You want to avoid writing a for loop to iterate over the elements of the iterable, and apply the function to each element.
- You want to create a new iterable with the transformed elements, without modifying the original iterable.
✅ Use `filter()` When:
- You need to filter the elements of an iterable object based on a certain condition or criteria.
- You want to avoid writing a for loop to iterate over the elements of the iterable, and filter the elements based on a condition.
- You want to create a new iterable with the filtered elements, without modifying the original iterable.
❌ Avoid Using `map()` and `filter()` When:
- The function you want to apply to the elements is complex or requires multiple arguments.
- The condition you want to use for filtering is complex or requires multiple conditions or criteria.
- The iterable object is very large or requires significant memory, as using map() and filter() can create new objects that require additional memory.
---
#### 👉 Tip 5: Use Sets for Unique Values
Sets are a way to store unique values in Python. This can be useful for removing duplicates or for performing set operations such as union, intersection, and difference. For example:
```python
# Creating a set
numbers = {1, 2, 3, 2, 1}
print(numbers) # Output: {1, 2, 3}
# Performing set operations
set1 = {1, 2, 3}
set2 = {3, 4, 5}
print(set1.union(set2)) # Output: {1, 2, 3, 4, 5}
print(set1.intersection(set2)) # Output: {3}
print(set1.difference(set2)) # Output: {1, 2}
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
✅ Use Set When:
- You need to store a collection of unique elements and need to check for membership or intersection with other sets.
- You need to remove duplicates from a list or other iterable object, as sets automatically remove duplicates.
- You need to perform set operations such as union, intersection, and difference.
❌ Avoid Using Set When:
- You need to maintain the order of the elements, as sets are unordered.
- You need to access elements by index or position, as sets do not support indexing.
- You have duplicate elements that are important to the data or analysis, as sets automatically remove duplicates.
---
### 🚀 5 Expert Python Tips for Efficient Coding
#### 👉 Tip 1: Use Decorators for Performance Optimization
Decorators can be used for more than just code reuse. They can also be used for performance optimization by caching the results of a function. This can be useful for expensive calculations or for functions that are called frequently. For example:
```python
# Defining a memoization decorator
def memoize(func):
cache = {}
def wrapper(*args):
if args in cache:
return cache[args]
result = func(*args)
cache[args] = result
return result
return wrapper
# Using the decorator
@memoize
def fibonacci(n):
if n in (0, 1):
return n
return fibonacci(n - 1) + fibonacci(n - 2)
print(fibonacci(30)) # Output: 832040
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
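For this classic use case you usually don't need to write the decorator yourself: the standard library ships the same idea as `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result (equivalent to functools.cache in 3.9+)
def fibonacci(n: int) -> int:
    if n in (0, 1):
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(30))  # Output: 832040
```

Hand-rolling `memoize` is still worth understanding, though, because it shows how a decorator can carry state (the cache dictionary) between calls.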
---
#### 👉 Tip 2: Use Asynchronous Programming for Concurrency
Asynchronous programming is a way to write concurrent code that can handle multiple tasks at once. It can improve the performance of I/O-bound tasks such as network requests and file operations. For example:
```python
import asyncio
import aiohttp
async def fetch(session, url):
async with session.get(url) as response:
return await response.text()
async def main():
async with aiohttp.ClientSession() as session:
tasks = [
asyncio.create_task(
fetch(session, f"https://jsonplaceholder.typicode.com/todos/{i + 1}")
)
for i in range(10)
]
responses = await asyncio.gather(*tasks)
print(responses)
asyncio.run(main())
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
---
#### 👉 Tip 3: Use Function Annotations to Improve Readability
Function annotations can be used to provide additional information about function arguments and return values, improving the readability of your code. For example:
```python
def add(x: int, y: int) -> int:
return x + y
```
More complex example:
```python
from typing import Callable, TypeVar, Union
T = TypeVar('T')
def apply(func: Callable[[int, int], int], x: int, y: int) -> int:
return func(x, y)
def get_first_item(items: list[T]) -> T:
return items[0]
def greet(name: Union[str, None] = None) -> str:
if name is None:
return 'Hello, World!'
else:
return f'Hello, {name}!'
def repeat(func: Callable[[T], T], n: int, x: T) -> T:
for i in range(n):
x = func(x)
return x
def double(x: int) -> int:
return x * 2
def add(x: int, y: int) -> int:
return x + y
numbers = [1, 2, 3, 4, 5]
result = apply(add, 3, 4)
first_number = get_first_item(numbers)
greeting = greet()
repeated_number = repeat(double, 3, 2)
print(result) # Output: 7
print(first_number) # Output: 1
print(greeting) # Output: 'Hello, World!'
print(repeated_number) # Output: 16
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
---
#### 👉 Tip 4: Use collections.defaultdict for Default Values
If you need to initialize a dictionary with default values, you can use the collections.defaultdict class. For example:
```python
from collections import defaultdict
d = defaultdict(int)
d['a'] += 1
print(d['a']) # prints 1
print(d['b']) # prints 0 (default value for int)
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
---
#### 👉 Tip 5: Use contextlib.suppress to Suppress Exceptions
If you need to suppress a specific exception, you can use the contextlib.suppress context manager. For example:
```python
import contextlib
with contextlib.suppress(FileNotFoundError):
    with open('file.txt') as f:
        # do something with f
        pass
```
[🏃♂️💻 Run it yourself](https://replit.com/@DebakarRoy/Python-Tips-Part-1#main.py)
| debakarroy |
1,387,858 | [Flutter] Solved Android app closing when native back button pressed. | You can solve it like this 👇

```dart
Future<bool> _onWillPop() async {
  final isFirstRouteInCurrentTab =
      !await _navigatorKeys[_currentTab].currentState.maybePop();
  if (isFirstRouteInCurrentTab) {
    if (_currentTab != TabItem.home) {
      _onSelect(TabItem.home);
      return false;
    }
  }
  return isFirstRouteInCurrentTab;
}
```
🥳
| kaedeee |
1,387,998 | Database Replication | The term database replication means sharing the data to make sure it stays consistent between... | 20,359 | 2023-03-04T11:48:15 | https://pragyasapkota.medium.com/database-replication-e13299b3346d | distributedsystems, architecture, systems, beginners | The term database replication means sharing the data to make sure it stays consistent between redundant sources like multiple databases. This helps improve reliability, fault tolerance, and accessibility.
There are two kinds of replication for the database. Let’s look at them individually.
## Master-Slave Replication
In master-slave replication, a single master serves both read and write operations, while one or more slaves replicate the master's writes but serve only reads. Each slave can, in turn, replicate to further slaves in a tree-like manner. If the master fails, writes become unavailable, but the system can keep serving reads from the slaves until one of them is promoted to be the new master.
### Advantages of master-slave replication
- The entire database has multiple backups.
- We don’t need to disturb the master for the read operation, but we can just use the slave.
- Slaves are flexible — can be made offline and synced back at any time.
### Disadvantages of master-slave replication
- More hardware is required.
- If the master fails, downtime is created, and the loss of data is possible.
- The complexity is increased.
- All the writes are required in the master.
- As the number of slaves grows, so does the amount of replication work, which causes replication lag.

## Master-Master Replication
In master-master replication, both nodes act as masters, each capable of serving read and write operations. If either master fails, the other keeps operations running.
### Advantages of master-master replication
- Both masters are accessible.
- Simple and automatic to use.
- Quick Failover
- The write operations are synced along both nodes.
### Disadvantages of master-master replication
- Comparatively hard to configure and deploy.
- Synchronization increases latency in the system, and if it is skipped, consistency is lost.
- Conflict resolution is required.
## Synchronous and Asynchronous replication
With synchronous replication, data is saved to the primary and the replicated storage at the same time, so they are always synchronized. Asynchronous replication copies the data to the replica later, after the primary storage has it. These updates run on fixed schedules, which saves cost as well.
**_I hope this article was helpful to you._**
**_Please don’t forget to follow me!!!_**
**_Any kind of feedback or comment is welcome!!!_**
**_Thank you for your time and support!!!!_**
**_Keep Reading!! Keep Learning!!!_** | pragyasapkota |
1,388,023 | Check file size in Ubuntu | Introduction Linux is one of the most popular operating systems in the world. It is... | 0 | 2023-03-04T13:02:19 | https://dev.to/hemanlinux/check-file-size-in-ubuntu-51pj | ubuntu |
# Introduction
Linux is one of the most popular operating systems in the world. It is Unix-like, and it is also open-source. Quite a big percentage of developers use Linux because it can be customized in so many ways.
What's cool about Linux is its command line, every hacker's paradise. There are a ton of commands.
There is probably a command for almost anything that you want to do. For example what if you wanted to get the size of a directory? Well luckily for you there is a command for that. In this post, we are going to talk about that command and see what it can offer.
Reference: [3 ways to get the size of a file in Ubuntu](https://www.howtouseubuntu.com/filesystem/how-to-check-file-size-in-ubuntu-command/)
# The `du` command
This command lets the user get a quick view of the **disk usage**. The best way to use it is by giving it the directory you want to see the size of. It should look a little something like this:
```
du directory_name
// output
2314 directory_name
```
This will give you the size of all the files and at the very end, it will give you the size of the directory itself. You could point out the full path, or you could just give the name of the directory you want to see if you are already on the same path.
But we could make this even easier just by adding 2 **flags**. Flags change a command's behavior. For the `du` command we can add the flags `-s` and `-h`.
- `-s` stands for **summarize** and it will show you only the total size of the directory, without all those files popping up on your screen.
- `-h` stands for **human-readable** and it will convert the size so that you can read it more easily. Running the command without the `-h` flag prints the size in raw blocks rather than a friendly unit of measurement.
So now to get the best of this command, you should run it like this with the `-s` and `-h` flags:
```
du -sh directory_name
// output
44.5M directory_name
```
And if you wanted to see all of the directories sizes, you could just run this:
```
du -sh ./*
// output
12.2M dir1
2.5M dir2
55M dir3
```
Another thing you can do is use a pipe (`|`) and sort the output by size, which makes it even easier to compare directories. What `|` does is take the output of the command before it and feed it into the input of the command after it. So to sort the directories by size, just run the following command:
```
du -sh ./* | sort -h
// output
2.5M dir2
12.2M dir1
55M dir3
```
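One caveat: `du` reports *disk usage*, which is not always the same as file size. For a single file you can cross-check it with `ls` and `stat` (the file name below is just an example):

```bash
truncate -s 1M demo.bin    # create a 1 MiB file (sparse, so it may use no disk blocks)

ls -lh demo.bin            # shows a human-readable size column: 1.0M
stat -c %s demo.bin        # exact size in bytes: 1048576
du -h demo.bin             # disk usage, which can be 0 for a sparse file
```

This difference is why a directory full of sparse or compressed files can report a much smaller `du` total than the sum of its file sizes.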
# Conclusion
This is a pretty useful command that I think everybody should know, just because of how short and easy it is. I hope that this post has helped you and I wish you happy coding.
[Follow us on howtouselinux website to get the latest Linux updates, expert insights, and practical guides.](https://www.howtouselinux.com/subscribe)
| hemanlinux |
1,388,170 | A Beginner's Guide to Curl: Part 2 - Options | If you want to teach yourself how to use curl, the best resource is the manual. Type the following... | 22,101 | 2023-03-04T16:00:31 | https://dev.to/jvon1904/a-beginners-guide-to-curl-part-2-2dko | zsh, bash, http | If you want to teach yourself how to use curl, the best resource is the manual.
Type the following command in your terminal.
```bash
$ man curl
```
The `man` command is used to read the manual for other command line utilities (like curl). To exit the manual, just type `q`.
You should see a brief synopsis of what curl is and what it does. As you can see, there are a lot more protocols than just `http`. For this guide we will only be using `http` since that's what the internet is built off of.
If you scroll down in the manual, you'll see a section titled "OUTPUT". By default, curl will print the response to your STDOUT (standard output or terminal screen).
For example, exit the manual by typing `q`, and type the following in your terminal.
```bash
$ curl http://www.example.com
```
You should see the HTML response printed below to your STDOUT. That's the default response when using curl. Now let's change that by adding an `option`.
Our first option will be the `-o` flag or its longer full version `--output`. Whenever you pass an option to curl, you can specify it with a short name (with one dash) or a full name (with two dashes).
The `-o` or `--output` option takes one argument, the name of the file you want to write to. This can be an existing file, or one that you haven't created yet.
Let's try it out! Type the following command. Notice we are changing our URL now.
```bash
$ curl --output three_sentences.txt http://metaphorpsum.com/sentences/3
```
Two things just happened. First, curl created the file you specified in your current directory. Second, curl wrote the response from the sever to that file.
If you edit the file, you'll see the response is there. Another way to see the contents is to `cat` the file, you'll see the contents of the file printed to your STDOUT.
```bash
$ cat three_sentences.txt
```
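Since curl speaks more protocols than just HTTP, you can even try `--output` offline against a local file using the `file://` protocol; the file names here are just examples:

```bash
# make a local file to fetch
echo "hello from a local file" > source.txt

# fetch it through curl's file:// protocol, writing the response to copy.txt
curl -s -o copy.txt "file://$PWD/source.txt"

cat copy.txt
```

The `-s` flag silences curl's progress output; everything else works exactly as it does over HTTP.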
If you were to type the same command without the `--output` option and file name, what do you think would happen?
In the next article, we'll learn how to see more details contained in the request and response. [A Beginner's Guide to Curl: Part 3](https://dev.to/jvon1904/a-beginners-guide-to-curl-part-3-1lbc)
By the way, here are some other URLs you can try.
* `http://metaphorpsum.com/sentences/10`
* `http://metaphorpsum.com/paragraphs/3`
* `http://metaphorpsum.com/paragraphs/3/2`
| jvon1904 |
1,388,222 | Beginner's Guide to rtx (mise) | Learn how to install and use node and python for local development | 0 | 2023-03-04T19:37:48 | https://dev.to/jdxcode/beginners-guide-to-rtx-ac4 | python, node, asdf, rtx | ---
title: Beginner's Guide to rtx (mise)
published: true
description: Learn how to install and use node and python for local development
tags: python,node,asdf,rtx
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2023-03-04 17:07 +0000
---
_NOTE: rtx has been renamed to mise. This post is still relevant but anytime you see "rtx" just replace it with "mise"_
[rtx](https://github.com/jdxcode/rtx) is a tool that manages installations of programming language runtimes and other tools for local development.
If you are using pyenv, nvm, or asdf, you'll have a better experience with rtx. It's faster, easier to use, and generally has more features than any of those.
It's useful if you want to install a specific version of node or python or if you want to use different versions in different projects.
This guide will cover the 2 most commonly used languages in rtx: node and python, however you can still use this guide for other languages. Just change `node` and `python` out for `java` or `ruby` or whatever.
## Demo
{% embed https://asciinema.org/a/564654 %}
## Background
Before we talk about rtx, lets cover some background. It's important to understand these in case something isn't working as expected.
### `~/.bashrc` and `~/.zshrc`
These are bash/zsh scripts that are loaded every time you start a new terminal session. We "activate" rtx here so that it is enabled and can modify environment variables when changing directories.
Not sure which one you're using? Try just running a command that doesn't exist and it should tell you:
```
$ some_invalid_command
bash: some_invalid_command: command not found
```
### Environment Variables
Environment variables are a bunch of strings that exist in your shell and get passed to every command that is run. We can use them in any language, in node we use them like this:
```sh-session
$ export MY_VAR=testing-123
$ node -e "console.log('MY_VAR: ', process.env.MY_VAR)"
MY_VAR: testing-123
```
These are often used to change behavior of the commands we run. In essence, what rtx does is 2 things: manage installation of tools and manage their environment variables.
The latter is mostly invisible to you, but it's important to understand what it is doing, especially with one key environment variable: PATH.
### PATH environment variable
PATH is a special environment variable. It is fundamental to how your shell works.
However, it also is just an environment variable like any other. We can display it the same way we did earlier with node:
```sh-session
$ node -e "console.log('PATH: ', process.env.PATH)"
```
Or more commonly we can use `echo` (this is simplified, my real one has a lot more directories):
```sh-session
$ echo $PATH
/Users/jdx/bin:/opt/homebrew/bin:/usr/bin:/bin
```
What is special about PATH is how bash/zsh use it. Any time you run a command like `node` or `python`, but also `rm` or `mkdir` and most other things, it needs to "find" where that command exists before it can run it.
This happens behind the scenes, but we can get the "real path" of these tools with `which`:
```sh-session
$ which rm
/bin/rm
$ which node
/opt/homebrew/bin/node
```
If we look at the PATH I had above, we can see that `/bin` and `/opt/homebrew/bin` are included. bash/zsh will look at each directory in PATH (split on ":"), and check if there is an `rm` or `node` inside of each one. The first time it finds one, it returns it.
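That lookup can be sketched in a few lines of shell (a simplification — the real shell also handles builtins, aliases, and caches results):

```sh
# Roughly what bash/zsh do to resolve a command name:
cmd=ls
found=""
# Split PATH on ":" and take the first directory containing an executable "ls"
IFS=':'
for dir in $PATH; do
  if [ -x "$dir/$cmd" ]; then
    found="$dir/$cmd"
    break
  fi
done
echo "$found"
```

This should print the same path as `which ls` on most setups.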
The way rtx works is it modifies PATH to include the paths to the expected runtimes, so once it is activated, you might see it include a directory like:
```
/Users/jdx/.local/share/rtx/installs/nodejs/18.0.0/bin
```
That directory will contain `node` and `npm` binaries which will override anything that might be on the "system" (meaning something like `/opt/homebrew/bin/node`).
## Installing rtx
See the [rtx documentation](https://rtx.jdx.dev) for instructions on how to install. For macOS, I suggest installing with Homebrew:
```
brew install rtx
rtx --version
```
Alternatively, you can also just download rtx as a single file with curl, then make it executable:
```
curl https://rtx.jdx.dev/rtx-latest-macos-arm64 > ~/bin/rtx
chmod +x ~/bin/rtx
rtx --version
```
> ⚠️ **Warning**
>
> Replace "macos" and "arm64" with "linux" or "x64" if using a different OS or architecture.
> Also, this assumes that `~/bin` is on PATH which won't be the case by default. Add `export PATH="$HOME/bin:$PATH"` to your `~/.bashrc` or `~/.zshrc` if it isn't already included. `rtx -v` will fail otherwise because it can't find `rtx` if it's just in some random directory not in PATH.
## Activating rtx
In order for rtx to work it should* be activated. To do this, modify your `~/.bashrc` or `~/.zshrc` and add `eval "$(rtx activate bash)"` or `eval "$(rtx activate zsh)"`. You generally want to put this near the bottom of that file because otherwise `rtx` might not be on PATH if it's added earlier in the file.
In other words, you might have the following in your `~/.bashrc`:
```
export PATH="$HOME/bin:$PATH"
eval "$(rtx activate bash)"
```
If you swap those around it won't work since `rtx` won't be included in PATH.
Once you modify this file, you'll need to run `source ~/.bashrc` or `source ~/.zshrc` in order for the changes to take effect (or just open a new terminal window).
You can verify it is working by running `rtx doctor`. You should see a message saying "No problems found". If it shows an error, follow the instructions shown.
```
$ rtx doctor
rtx version:
1.21.2 macos-arm64 (built 2023-03-04)
shell:
/opt/homebrew/bin/fish
fish, version 3.6.0
No problems found
```
_(*This isn't strictly true, you can use rtx with [shims](#shims) or with `rtx exec`, but generally this is the way you'll want to use rtx.)_
## Installing Node.js
Now that rtx is installed and activated we can install node with it, to do this simply run the following:
```
rtx install nodejs@18
```
At this point node is installed, but not yet on PATH so we can't just call `node` and run it. To use it we can tell rtx to execute it directly:
```
rtx exec nodejs@18 -- npm install
rtx exec nodejs@18 -- node ./app.js
```
In these commands we're calling rtx, telling it to use `nodejs@18`, then we say `--` which tells rtx to stop listening to arguments and everything after that will be executed as a new command in the environment that rtx created. (If that isn't obvious just keep going, it'll likely make more sense later.)
Also note that `nodejs@18` sets up both the `node` and `npm` binaries so that `npm install` and `node ...` both use the same version of node from rtx.
This is fine for ad-hoc testing or one-off tasks, but it's a lot to type. What we want to do now is make nodejs@18 the default version that will be used when running `node` without prefixing it with `rtx exec nodejs@18 --`.
### Make nodejs@18 the global default
To make it the default, run the following:
```
rtx use --global nodejs@18
```
What this does is modify the global config (as of this writing that defaults to `~/.tool-versions`, but it will eventually be `~/.config/rtx/config.toml`).
You can also edit this file manually. It looks like this for `~/.tool-versions`:
```
nodejs 18
```
Or this for `~/.config/rtx/config.toml` (you can use this today if you want, it's just that `rtx use --global` won't _write_ to this by default currently):
```
[tools]
nodejs = '18'
```
Now node is on PATH and we can just run the following:
```
npm install
node ./app.js
```
Let's do a bit more digging to see what is actually going on here. If we run `echo $PATH` we can see that our PATH has a new entry in it (simplified output):
```
$ echo $PATH
/Users/jdx/.local/share/rtx/installs/nodejs/18.14.2/bin:/opt/homebrew/bin:/usr/bin:/bin
```
If we look inside that "rtx/installs/nodejs" directory we see the following:
```
ls /Users/jdx/.local/share/rtx/installs/nodejs/18.14.2/bin
corepack node npm npx
```
These are the commands that node provides. Because this is first in our PATH, these are what bash/zsh will execute when we run them.
Here are some other rtx commands with example output that can be helpful to see what is going on:
* show the location of an rtx bin
```
$ rtx which node
/Users/jdx/.local/share/rtx/installs/nodejs/18.14.2/bin/node
```
* get the current version this bin points to
```
$ rtx which node --version
18.14.2
```
* show the directory the current version of nodejs is installed to
```
$ rtx where nodejs
/Users/jdx/.local/share/rtx/installs/nodejs/18.14.2
```
### Make nodejs@18 the local default
Let's say we had a project where we wanted to use version 16.x of node instead. We can use rtx to have `node` and `npm` point to that version instead when in that directory:
```
cd ~/src/myproj
rtx use nodejs@16
```
Note that we don't have to install nodejs@16 ahead of time. rtx will prompt to install it if it is not installed (see the [demo](https://asciinema.org/a/564654) in this article to see how that looks).
Now if we're in ~/src/myproj, then rtx will make `node` point to v16 and if we're anywhere else it will use v18.
## Upgrading node versions
If you want to use a new major version of node, then set it with `rtx use --global nodejs@20` or `rtx use nodejs@20`. You can also use `nodejs@lts` for LTS version or `nodejs@latest` for the latest version.
If you just want to update to a new minor or patch version (`18.1.0` or `18.0.1`, for example), then all you need to do is run `rtx install` so long as `nodejs 18` is what is in `~/.tool-versions`.
You can remove old versions no longer referenced with `rtx prune`.
## Installing Python
Now let's look at installing python which is a bit more complex than node and has some unique quirks.
### Make python@latest the global default
We can default python to the latest version by using the following:
```
rtx use --global python@latest
python --version
pip --version
```
This will create many new bins we can call like `python`, `python3`, `python3.11`, `pip`, `pip3`, and `pip3.11`.
### Make python@3.11 the local default
We can also set local versions just like with node:
```
rtx use python@3.11
python --version
pip --version
```
### Multiple python versions
With python we can use multiple versions:
```
rtx use --global python@3.11 python@3.10
python --version
python3.10 --version
```
This will make `python` and `python3` use v3.11 and we can use v3.10 by running `python3.10` or `pip3.10`.
### Managing virtualenv with rtx
We can have rtx automatically setup a virtualenv (virtualenvs themselves are out of scope for this article) by using the following `.rtx.toml`:
```
[tools]
python = {version='3.10', virtualenv='.venv'}
```
Whenever inside of this directory, the `.venv` virtualenv will be created if it does not exist and rtx will automatically activate the virtualenv.
## Arbitrary environment variables with rtx
One of rtx's most loved features is the ability to set arbitrary env vars in different directories. This replaces what people commonly use dotenv and direnv for.
To do this, you need to use `.rtx.toml` instead of `.tool-versions` since the latter does not support this syntax. Just create this file in any directory you want the environment variables to take effect:
```toml
[env]
NODE_ENVIRONMENT = "production"
S3_BUCKET = "my_s3_bucket"
AWS_ACCESS_KEY_ID = "..."
AWS_SECRET_ACCESS_KEY = "..."
```
As long as rtx is activated, these environment variables will be setup whenever inside of that directory.
## Shims
Shims are an optional feature that can be used in some cases in rtx where things don't work as expected. See the rtx docs for more on how shims work.
If you want to integrate rtx with your IDE you'll likely want to use shims for that (but I recommend keeping `rtx activate` for usage in the terminal unless you have a unique setup where this doesn't work well).
## Comparisons to other tools
Let's examine other ways to install node/python and see how they compare:
### Node/Python Official .pkg
For macOS, if you go the official node and python websites they'll offer a .pkg installer to install node and python. I don't recommend these for a few reasons:
* No ability to update. If you want to get the latest version, you need to go to the website and reinstall. That should ideally be a single CLI command.
* No ability to use multiple versions. If you have multiple projects that need different versions you won't be able to use this method.
* Can conflict with PATH. If you have node or python installed by some other means, this can either override that install or not work because it is being overridden. Using rtx makes it easy to control when languages should and should not override the system versions.
### How is rtx different than asdf?
rtx is very similar to asdf and behaves as a drop-in replacement for almost any use-case. See the docs for more details on this. rtx can be thought of as a clone of asdf with extra features.
Under the hood, rtx uses asdf plugins, so the logic for actually installing node and python is the same for both asdf and rtx. That's true today at least; rtx may diverge and fork asdf plugins to enable rtx-specific behavior if needed.
However it is much faster, has better UX, and it has extra features like the ability to modify arbitrary env vars (`FOO=bar`), or manage Python virtualenvs.
asdf is burdened by being written in bash which is very slow and greatly limits their capabilities. asdf can never be as fast or feature-complete as rtx just because it's written in bash. They would need a ground-up rewrite: which is rtx.
So far as I'm aware, there should be no use-case where asdf is a better fit than rtx.
### How is rtx different than Homebrew?
Tools like Homebrew are great (it's how I install most of my tools on macOS), but it won't have the latest version of tools when they come out since there is a manual process to update them.
Homebrew also does not let you use different versions in different directories (at least, not without manually modifying PATH). It doesn't let you install a specific version (e.g.: nodejs-18.0.0 instead of nodejs-18.0.1). It has some support for different major versions in some languages like: `brew install nodejs@18`.
Use homebrew if you just want to use the latest version of the tool and don't need different versions in different directories. rtx requires a bit more yak-shaving that isn't necessary for a lot of tools.
### How is rtx different than apt-get/yum/dnf?
Most Linux distros strive to maintain compatibility and do not allow new software into existing distributions. Because of this, they do not even include node since the versions change too fast and they go end-of-life when the distribution is still under LTS.
Python is included but it's often old or very old.
If python is just some system dependency you don't interact with much this is fine and ideal for compatibility reasons. However if you're writing python yourself, you'll want to use a much more modern version.
Multiple versions are also not terribly well supported by these mechanisms. It can be done, but not in a seamless way where you just change directory and magically have the right language versions in different directories.
You can keep the python installed by your Linux packages, but don't develop python projects using that version.
### How is rtx different than nvm/pyenv?
These are the most well known tools for switching between node and python versions. They function well, but they're very slow.
These also only work for a single language where rtx can be used to work with any language. It's only one tool to setup and if you want to pick up a different language you don't need to figure out a new tool. | jdxcode |
1,388,480 | What Does Passwordless Actually Mean? | Passwords have been around for a long time, and while they are easy to understand, they do come with... | 22,058 | 2023-03-07T15:40:00 | https://dev.to/propelauth/what-does-passwordless-actually-mean-cd4 | webdev, beginners, security, cybersecurity | Passwords have been around for a long time, and while they are easy to understand, they do come with some drawbacks. They are essential for keeping our online accounts secure, but they can also be a hassle to remember and manage. Luckily, there’s a way to log in that eliminates the need for passwords altogether: passwordless authentication. But what does that actually mean?
#### What is Passwordless Authentication?
As you might have guessed, passwordless authentication is a way to log into a website or app without using a traditional password. Instead, you use a different method to prove your identity, such as using your fingerprint, face recognition, a security token, or a one-time code or link sent to your phone or email.
#### How Does Passwordless Authentication Work?
The process of passwordless authentication varies depending on the method used, but the general idea is the same. Instead of entering a password, you provide some alternative form of verification to prove your identity. Most commonly, you’ll see passwordless authentication in the form of a “magic link.” There are two general ways magic links are implemented: the “Click the link in the email we just sent you” method, or the “we just sent you a code, enter it here” method.

There are other forms of passwordless authentication other than magic links. For example, with fingerprint authentication, you place your finger on a sensor to be recognized. With a security token, you would use a physical device that generates a one-time code.
#### Why Use Passwordless Authentication?
As for the drawbacks of traditional password-only authentication, you probably already know them. People forget passwords, leading to password reset flows where users could potentially churn. People re-use passwords, meaning if one site is compromised, all their accounts can be compromised. Some sites impose bad password requirements that don’t do much to protect their users and can both drive users away and force them to pick less secure passwords.

With passwordless authentication, a lot of these concerns go away. Passwords that are compromised don’t affect services that don’t use them. Many passwordless methods use devices like mobile phones to create incredibly easy flows, meaning less churn through the product or service. Not to mention passwordless can be faster and easier, making it even more convenient.
#### What Are the Drawbacks of Passwordless Authentication?
While passwordless authentication is more secure than traditional passwords, it’s not without its downsides. For example, biometric information can be stolen or spoofed, and one-time codes can be intercepted if they are sent via insecure channels. Another concern could be with magic links or one time passcodes landing in spam or promotions sections of an end user’s email, making for a bad user experience logging into your product.
Passwordless authentication is an innovative and secure way to log into websites and apps without passwords. By eliminating the need for passwords, it makes logging in faster, easier, and more secure. While it’s not without its downsides, passwordless authentication is a promising technology that has the potential to improve the security and accessibility of our online accounts. | propelauthblog |
1,388,525 | Planejamento[0] <04:05>/23 | Let's start by defining which technologies I'm going to study | 0 | 2023-03-05T01:21:23 | https://dev.to/devmedeiros/planejamento0-23-29m7 | study, programming, organization, planning | ## Let's Start by Defining Which Technologies I'm Going to Study
- I need to study Python, more specifically SKLearn, numpy and other libraries, to use and implement at work, as well as AWS Cloud, and study a bit about the DevOps field. {A+}
- I also need to look into C# and .NET; Microsoft has excellent tutorials on this, I need to organize myself to go through them -> reason: I joined a project that uses a lot of C# and .NET. {A}
- Keep studying Java as well, but with less intensity, just pick up a small project every 1-2 weeks. {C}
- Another topic to study is Docker; even though I don't have much use for it right now, it's good to pick up the basics. {C}
- Electron is something I've always wanted to learn, so I'll put it on the list, but it's not a priority, not even close {E}
- React is something I need to learn, not _now_ and not that fast, but I need to; almost every project will use it. {B+}
### Priority Order
{A+} > {A} > {B+} > {B} > {C} > {D} > {E}
### Technology Summary
Python AI => {A+}
C#|.NET => {A}
React => {B+}
Java => {C}
Docker => {C}
Electron => {E}
## Which Applications or IDEs Do I Plan to Learn?
- Visual Studio -> By far the highest priority, an incredible IDE with absurd power for C# and everything else. {A+}
- Vim -> No priority, just take a quick look. {E}
- Notepad++ -> It would be nice to learn, it stands out. {B}
| devmedeiros |
1,388,526 | Http Client API in Java: The basics | Overview The new HttpClient API was introduced in Java 11. It is a replacement for the old... | 22,450 | 2023-03-05T12:18:13 | https://dev.to/noelopez/http-client-api-in-java-26e | java, http, beginners | ## Overview
The new HttpClient API was introduced in Java 11. It is a replacement for the old HttpURLConnection API, which was cumbersome and not well suited to modern HTTP usage. This new API uses the builder pattern and a fluent API to create the objects required to communicate over the network. It also provides the features below:
1. Support for HTTP2 protocol.
2. SSL encription.
3. Synchronous and Asynchronous communication models.
4. Support for HTTP methods.
5. Authentication mechanism (Basic).
6. Cookies.
The API contains three main classes:
- HttpClient is used to send multiple requests and receive the responses over the network.
- HttpRequest is an immutable class representing an HTTP request to be sent. It can be configured for a specific HTTP method and a body can be attached if needed.
- HttpResponse represents a response coming from the web server. It is returned by the HttpClient when a request is submitted. If the call is asynchronous it returns a CompletableFuture.
The steps are straightforward. First, an instance of HttpClient is created, then the HTTP request to be dispatched. Finally, the request is passed to the HttpClient send methods and a response object is returned (or CompletableFuture if the call is asynchronous).
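As a minimal sketch of those steps, we can build a request with the fluent API without sending it (the URL below is just the placeholder used throughout this article):

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class HttpClientSketch {
    public static void main(String[] args) {
        // Step 2 of the flow: build an immutable request object.
        // Nothing is sent over the network until it is passed to a client.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/v1/customers"))
                .timeout(Duration.ofSeconds(2))
                .header("Content-Type", "application/json")
                .GET()
                .build();

        System.out.println(request.method());        // prints GET
        System.out.println(request.uri().getPath()); // prints /api/v1/customers
    }
}
```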
## Use Cases In Action
Without any further delay, let's take a look at some examples:
For this demo, a SpringBoot REST application will be exposing an endpoint (located in http://localhost:8080/api/v1/customers)
that allows listing/adding/updating/deleting customers. Customer is just an immutable POJO class with a few members. With the help of the HttpClient API, we will perform CRUD operations while interacting with the service.
**1. Get List of customers**
The first scenario is to get a list of all customers. This is just a GET request to the customers resource URL.
```java
HttpClient client = HttpClient
.newBuilder()
.connectTimeout(Duration.ofMillis(500))
.build();
```
Note that connection will time out if it is not established in half a second. Next the http request object.
```java
HttpRequest request = HttpRequest
.newBuilder()
.uri(URI.create("http://localhost:8080/api/v1/customers"))
.header("Content-Type", "application/json")
.GET()
.build();
```
Now the communication can be done synchronous, that is, execution is blocked until the response is received.
```java
HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.printf("Status %s \n", response.statusCode());
System.out.printf("Response %s \n", response.body());
```
The BodyHandlers class contains convenient methods to convert the response body data into java objects like a String.
Program Output
```console
Status 200
Response [
{"id":1,"name":"Joe Smith","email":"joe.smith@gmail.com","dateOfBirth":"2008-01-01"},
{"id":2,"name":"Robert Moody","email":"robert.moody@gmail.com","dateOfBirth":"1985-06-21"},
{"id":3,"name":"Jennifer Dolan","email":"jennifer.dolan@gmail.com","dateOfBirth":"1966-11-11"},
{"id":4,"name":"Christopher Farrel","email":"christopher.farrel@gmail.com","dateOfBirth":"1970-04-15"},
{"id":5,"name":"Caroline Red","email":"caroline.red@gmail.com","dateOfBirth":"1992-03-05"}
]
```
We could send the same request asynchronously by invoking the sendAsync method. This call is non-blocking and it will
return immediately with a CompletableFuture.
```java
CompletableFuture<HttpResponse<String>> responseFuture = client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
responseFuture
.thenApply(HttpResponse::body)
.thenApply(String::toUpperCase)
.thenAccept(System.out::println)
.join();
```
In the above pipeline, the body is extracted from the response, uppercased and printed.
Program Output
```console
[
{"ID":1,"NAME":"JOE SMITH","EMAIL":"JOE.SMITH@GMAIL.COM","DATEOFBIRTH":"2008-01-01"},
{"ID":2,"NAME":"ROBERT MOODY","EMAIL":"ROBERT.MOODY@GMAIL.COM","DATEOFBIRTH":"1985-06-21"},
{"ID":3,"NAME":"JENNIFER DOLAN","EMAIL":"JENNIFER.DOLAN@GMAIL.COM","DATEOFBIRTH":"1966-11-11"},
{"ID":4,"NAME":"CHRISTOPHER FARREL","EMAIL":"CHRISTOPHER.FARREL@GMAIL.COM","DATEOFBIRTH":"1970-04-15"},
{"ID":5,"NAME":"CAROLINE RED","EMAIL":"CAROLINE.RED@GMAIL.COM","DATEOFBIRTH":"1992-03-05"}
]
```
---
**2. Create a new Customer**
POST method will be used to create a new customer. The body must be populated with the customer data in JSON format. The BodyPublishers class provides handy methods to convert from java objects into a flow of data for sending as a request body.
```java
HttpRequest request = HttpRequest
.newBuilder()
.uri(URI.create("http://localhost:8080/api/v1/customers"))
.header("Content-Type", "application/json")
.POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Sonia Lamar\",\"email\":\"sonia.lamar@mail.com\",\"dateOfBirth\":\"1998-07-29\"}"))
.build();
var response = client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.printf("Status %s \n", response.statusCode());
System.out.printf("Location %s \n", response.headers().map().get("location"));
```
Program Output
```console
Status 201
Location [http://localhost:8080/api/v1/customers/6]
```
---
**3. Update an existing Customer**
PUT method will be used to replace an existing customer entirely. That means all fields will be changed except the id. For partial updates, like updating only the email, the PATCH method is more appropriate.
```java
HttpRequest request = HttpRequest
.newBuilder()
.uri(URI.create("http://localhost:8080/api/v1/customers/4"))
.header("Content-Type", "application/json")
.PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"Victor Martin\",\"email\":\"victor.martin@mail.com\",\"dateOfBirth\":\"1977-04-15\"}"))
.build();
var response = client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.printf("Status %s \n", response.statusCode());
System.out.printf("Body %s \n", response.body());
```
Program Output
```console
Status 200
Body {"id":4,"name":"Victor Martin","email":"victor.martin@mail.com","dateOfBirth":"1977-04-15"}
```
---
**4. Delete a Customer**
The final scenario is to delete the customer whose id is 3.
```java
var request = HttpRequest
.newBuilder()
.uri(URI.create("http://localhost:8080/api/v1/customers/3"))
.header("Content-Type", "application/json")
.DELETE()
.build();
var response = client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.printf("Status %s \n", response.statusCode());
```
Program Output
```console
Status 204
```
---
## Conclusion
We learned how to use the Java HttpClient API to consume a REST web service. We performed CRUD operations using the appropriate HTTP methods and inspected the responses to verify status, headers and body.
Follow up on the second part of this series and check out how to authenticate to access secured resources. Click [here](https://dev.to/noelopez/http-client-api-in-java-part-2-75e) to learn more!
Check the code on [GitHub](https://github.com/NoeLopezSan/java-http-client)
| noelopez |
1,388,597 | Process Scheduling in Linux | cron is a process scheduler for Linux. The crontab is a list of commands that you want to run on a... | 0 | 2023-03-05T04:12:03 | https://dev.to/waji97/process-scheduling-in-linux-262k | linux, bash, beginners | > **cron** is a process scheduler for Linux. The **`crontab`** is a list of commands that you want to run on a regular schedule, and also the name of the command used to manage that list.
>
<aside>
👉 `cron` is a time-based job scheduler in Unix-like operating systems. Users can schedule jobs (commands or scripts) to run automatically at a specified time and date.
`crontab` is the program used to install, deinstall or list the tables used to drive the cron daemon. Each user can have their own crontab, and though these are files in /var, they are not intended to be edited directly.
</aside>
<aside>
💡 **Format**
`MIN HOUR DOM MON DOW CMD`
```
Field Description Allowed Value
MIN Minute field 0 to 59
HOUR Hour field 0 to 23
DOM Day of Month 1-31
MON Month field 1-12
DOW Day Of Week 0-6
CMD Command Any command to be executed
```
`*` means all values — e.g. `*` in field 3: every day of the month
`-` specifies a range — e.g. `1-12` in field 4: January through December
`,` specifies multiple values — e.g. `10,15` in field 2: at 10:00 and at 15:00
`/` specifies a step interval — e.g. `*/10` in field 1: every 10 minutes
</aside>
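Combining these fields, a few hypothetical example entries (the script paths are placeholders, not from this article's examples):

```
*/10 * * * * /root/script/check.sh    # every 10 minutes
30 2 * * * /root/script/backup.sh     # at 02:30 every day
0 9 * * 1-5 /root/script/report.sh    # at 09:00, Monday through Friday
```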
- Mainly used on a Linux system for handling periodic, recurring work (i.e., scheduled jobs).
- Cron is the process-scheduling daemon; it performs the specified jobs at the specified times.
- Crontab refers to the file that defines the list of scheduled jobs to be run by cron (the cron table).
- Each user can have their own separate cron jobs.
- Log records for cron jobs are stored in /var/log/cron.
```bash
rpm -qa | grep cronie
```
```bash
ps -ef | grep cron
```
```bash
ls -l /var/log/cron
```
```bash
ls -l /var/spool/cron
```
```bash
ls -d /etc/cron
```
---
## Setting up cron
- `crontab -l` : list the scheduled jobs
- `crontab -e` : edit the scheduled jobs
- `crontab -r` : delete the scheduled jobs
- `crontab -u [UserName]` : view or edit a specific user's scheduled jobs
<aside>
💡 We can schedule a task or a process using the `crontab -e` command within the vi editor. Also, only the ‘root’ user can use the `crontab -u [UserName]` command.
</aside>
We can also see how the `crontab` file looks like:
```bash
➜ ~ cat /etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
```
## Examples
### Example1:
- Making an empty directory with a script file in it. Providing ‘execute’ permission to the file as well.
```bash
➜ ~ mkdir ./script
➜ ~ cd ./script
➜ script vi ./cron_test1.sh
#!/bin/bash
echo "Cron Test" > /root/cron.txt
➜ script chmod +x ./cron_test1.sh
```
- Setting up the schedule for the script to run and confirming it.
```bash
➜ script crontab -e
0 19 * * * /root/script/cron_test1.sh
➜ script crontab -l
0 19 * * * /root/script/cron_test1.sh
```
- After the script runs at 19:00
```bash
➜ script ls -l /root
total 12
-rw-r--r-- 1 root root 0 Jan 12 15:12 풀이
-rw-------. 1 root root 1483 Jan 10 11:41 anaconda-ks.cfg
-rw-r--r-- 1 root root 10 Jan 26 23:24 cron.txt
# Checking the logs for cron
➜ script tail /var/log/cron
```
### Example2:
- Creating a new script that saves logs as `tar` backup file and deletes it after 10 days.
```bash
➜ script vi ./cron_test2.sh
#!/bin/bash
DATE=$(date +%Y-%m-%d)
BACKUP_DIR="/backup"
tar -cvzpf $BACKUP_DIR/test-$DATE.tar.gz /var/log
find $BACKUP_DIR/* -mtime +10 -exec rm {} \;
```
- Giving the file executable permissions and also adding new crontab settings.
```bash
➜ script chmod +x ./cron_test2.sh
➜ script crontab -e
crontab: installing new crontab
0 20 * * * /root/script/cron_test2.sh
```
- After the script runs at 20:00
```bash
➜ script ls -l /backup
test-2023-01-26.tar.gz
# Checking the logs for cron
➜ script tail /var/log/cron
```
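To see the cleanup line from the script in isolation, we can simulate old backups in a throwaway directory instead of `/backup` (using GNU `touch -d` to fake an old modification time):

```bash
# Recreate the cleanup logic with a temporary directory:
BACKUP_DIR=$(mktemp -d)
touch "$BACKUP_DIR/new.tar.gz"                  # fresh file
touch -d "12 days ago" "$BACKUP_DIR/old.tar.gz" # simulate an old backup

# Same expression as in cron_test2.sh: delete anything older than 10 days
find "$BACKUP_DIR"/* -mtime +10 -exec rm {} \;

ls "$BACKUP_DIR"   # only new.tar.gz should remain
```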
### Deletion, Backup and Control
- Before deleting our crontab settings,
```bash
➜ ~ crontab -l > /root/cron_back.txt
➜ ~ cat /root/cron_back.txt
0 19 * * * /root/script/cron_test1.sh
0 20 * * * /root/script/cron_test2.sh
```
- Deleting the crontab settings
```bash
➜ ~ crontab -r
➜ ~ crontab -l
no crontab for root
```
- We can easily restore our crontab settings using the backup .txt file that we created.
```bash
➜ ~ crontab /root/cron_back.txt
➜ ~ crontab -l
0 19 * * * /root/script/cron_test1.sh
0 20 * * * /root/script/cron_test2.sh
```
<aside>
👉 Just a note that we cannot delete a specific `crontab` entry this way; `crontab -r` removes the whole table
</aside>
- We can ‘control’ what users can set up `crontab` configs
```bash
➜ ~ vi /etc/cron.deny
itbank
# If we try to use the following command as 'itbank'
➜ itbank ~ crontab -e
You (itbank) are not allowed to use this program (crontab)
See crontab(1) for more information
``` | waji97 |
1,388,645 | Dependency Injection and Different ways to inject it using .NET Core API | In this blog, we are going to discuss dependency injection and its usage and benefits. Also, discuss... | 0 | 2023-03-05T05:15:45 | https://dev.to/jaydeep007/dependency-injection-and-different-ways-to-inject-it-using-net-core-api-5570 | csharp, dotnet, dotnetcore, asp | In this blog, we are going to discuss dependency injection and its usage and benefits. Also, discuss different ways to implement dependency injection.
## Prerequisites:
Basic understanding of C# Programming Language.
Understanding of Object-Oriented Programming.
Basic Understanding of .NET Core.
## The purpose behind the dependency injection design pattern:
- In simple words, dependency means the object depends on another object to do some work.
- Using Dependency Injection we are able to write loosely coupled classes and because of that our current functionality of one class does not directly depend on another class because of that we can easily maintain, change and unit test code properly.
- It also takes care of the open-closed principle using abstraction and interface we are able to make some future changes easily without modifying existing functionality.
## Dependency Injection:
- Dependency Injection means supplying the object that a class depends on from outside, instead of having the class create it itself.
- Dependency Injection uses Inversion of Control: objects are created outside the class and provided to it in different ways, for example through the service container that .NET Core provides.
**Now let's look at the problems we face without Dependency Injection.**
Suppose we have a class car, and that depends on the BMW class.
```
public class Car
{
private BMW _bmw;
public Car()
{
_bmw = new BMW();
}
}
public class BMW
{
public BMW()
{
}
}
```
In this example, the Car class depends directly on the BMW class; because they are tightly coupled, Car has to create the BMW object itself.
Now suppose that in the future we want to add a parameter to the BMW class constructor, such as the model name, as shown in the example below.
```
public class Car
{
private BMW _bmw;
public Car()
{
_bmw = new BMW("BMW X1");
}
}
public class BMW
{
public BMW(string ModelName)
{
}
}
```
So, in this case, adding the model name parameter to BMW forces us to change the Car class as well: its constructor must be updated to pass the new value. This is not good practice in software development; it creates a hard dependency, and when the application grows large it becomes hard to maintain.
These are the challenges we face when implementing functionality without Dependency Injection.
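A common way to remove this hard dependency is constructor injection against an abstraction. Here is a sketch of the idea (the `ICarEngine` interface is illustrative, not part of the original example):
```
public interface ICarEngine
{
}

public class BMW : ICarEngine
{
    // Changing this constructor no longer forces changes in Car
    public BMW(string modelName)
    {
    }
}

public class Car
{
    private readonly ICarEngine _engine;

    // The dependency is supplied from outside (e.g., by the IoC container)
    public Car(ICarEngine engine)
    {
        _engine = engine;
    }
}
```
Now Car only knows about the abstraction, and any engine implementation can be swapped in without touching the Car class.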
**Uses of Dependency Injection in .NET Core:**
.NET Core provides a built-in IoC container that takes care of the following things:
- Registration of services (a type mapped to an implementing class) in the container, so that we can inject the services wherever we need them.
- Resolution of a class's dependencies, including the dependencies of those dependencies.
- Management of the lifetime of the objects it creates.
**Ways to register the lifetime of services in the Startup class:**
**Scoped:**
- A single instance is created per scope (in a web application, per HTTP request).
- The same instance is shared within that request, and a new one is created for the next request.
**Singleton:**
- A single instance is created for the whole lifetime of the application.
- That same instance is shared across all requests throughout the application.
**Transient:**
- A new instance is created every time the service is requested; it is never shared.
- It suits small, lightweight, stateless services.
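In code, these lifetimes map to three registration methods. A sketch (the `IMyService`/`MyService` pair is illustrative, and in practice you pick exactly one lifetime per service):
```
public void ConfigureServices(IServiceCollection services)
{
    // Scoped: one instance per HTTP request
    services.AddScoped<IMyService, MyService>();

    // Singleton: one instance for the whole application lifetime
    services.AddSingleton<IMyService, MyService>();

    // Transient: a new instance every time the service is resolved
    services.AddTransient<IMyService, MyService>();
}
```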
**Now we are going to create a .NET Core Product application using Dependency Injection, step by step.**

Then configure the project

Provide additional information like .NET Version and other configuration

Then, install the following NuGet Packages which we need for Swagger, SQL Server, Migration, and Database Update.
```
Microsoft.EntityFrameworkCore
Microsoft.EntityFrameworkCore.Design
Microsoft.EntityFrameworkCore.SqlServer
Microsoft.EntityFrameworkCore.Tools
Swashbuckle.AspNetCore
```

Now, Create the ProductDetail Class inside the Model Folder
```
using System.ComponentModel.DataAnnotations;
namespace Product.Model
{
public class ProductDetail
{
[Key]
public int Id { get; set; }
public string Name { get; set; }
public int Cost { get; set; }
public int NoOfStock { get; set; }
}
}
```
Next, create the DBContextClass for the database-related operations of Entity Framework
```
using Microsoft.EntityFrameworkCore;
using Product.Model;
namespace Product.Data
{
public class DBContextClass : DbContext
{
public DBContextClass(DbContextOptions<DBContextClass> options) : base(options)
{
}
public DbSet<ProductDetail> Products { get; set; }
}
}
```
Add the SQL Server database connection string in the appsettings.json file
```
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft": "Warning",
"Microsoft.Hosting.Lifetime": "Information"
}
},
"AllowedHosts": "*",
"ConnectionStrings": {
"DefaultConnection": "Data Source=ServerName;Initial Catalog=ProductData;User Id=Test;Password=database@123;"
}
}
```
Create the IProductService interface and the ProductService class inside the Service folder for data manipulation using Dependency Injection
```
using Product.Model;
using System.Collections.Generic;
namespace Product.Service
{
public interface IProductService
{
ProductDetail AddProduct(ProductDetail product);
List<ProductDetail> GetProducts();
void UpdateProduct(ProductDetail product);
void DeleteProduct(int Id);
ProductDetail GetProduct(int Id);
}
}
```
Now, create ProductService Class
```
using Product.Data;
using Product.Model;
using System.Collections.Generic;
using System.Linq;
namespace Product.Service
{
public class ProductService : IProductService
{
private readonly DBContextClass _context;
public ProductService(DBContextClass context)
{
_context = context;
}
public ProductDetail AddProduct(ProductDetail product)
{
_context.Products.Add(product);
_context.SaveChanges();
return product;
}
public void DeleteProduct(int Id)
{
var product = _context.Products.FirstOrDefault(x => x.Id == Id);
if (product != null)
{
_context.Remove(product);
_context.SaveChanges();
}
}
public ProductDetail GetProduct(int Id)
{
return _context.Products.FirstOrDefault(x => x.Id == Id);
}
public List<ProductDetail> GetProducts()
{
return _context.Products.OrderBy(a => a.Name).ToList();
}
public void UpdateProduct(ProductDetail product)
{
_context.Products.Update(product);
_context.SaveChanges();
}
}
}
```
After this, create the ProductController. Inside it, you can see that we inject the product service through the constructor, so the controller is not tightly coupled to a concrete implementation. This makes it easy to extend the Product functionality without modifying existing code.
```
using Microsoft.AspNetCore.Mvc;
using Product.Model;
using Product.Service;
using System.Collections.Generic;
namespace Product.Controllers
{
[Route("api/[controller]")]
[ApiController]
public class ProductController : ControllerBase
{
private readonly IProductService productService;
public ProductController(IProductService _productService)
{
productService = _productService;
}
[HttpGet]
[Route("api/Product/GetProducts")]
public IEnumerable<ProductDetail> GetProducts()
{
return productService.GetProducts();
}
[HttpPost]
[Route("api/Product/AddProduct")]
public IActionResult AddProduct(ProductDetail product)
{
productService.AddProduct(product);
return Ok();
}
[HttpPut]
[Route("api/Product/UpdateProduct")]
public IActionResult UpdateProduct(ProductDetail product)
{
productService.UpdateProduct(product);
return Ok();
}
[HttpDelete]
[Route("api/Product/DeleteProduct")]
public IActionResult DeleteProduct(int id)
{
var existingProduct = productService.GetProduct(id);
if (existingProduct != null)
{
productService.DeleteProduct(existingProduct.Id);
return Ok();
}
return NotFound($"Product Not Found with ID : {id}");
}
[HttpGet]
[Route("GetProduct")]
public ProductDetail GetProduct(int id)
{
return productService.GetProduct(id);
}
}
}
```
Finally, register the service inside the ConfigureServices method, add the database provider, and configure the database connection string that we put in the app settings.
```
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.OpenApi.Models;
using Product.Data;
using Product.Service;
namespace Product
{
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
//DBContext Configuration
services.AddDbContext<DBContextClass>(options =>
options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
//Register the ProductService for DI purpose
services.AddScoped<IProductService, ProductService>();
//enable swagger
services.AddSwaggerGen(c =>
{
c.SwaggerDoc("v1", new OpenApiInfo { Title = "Product", Version = "v1" });
});
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseSwagger();
app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "Product v1"));
}
app.UseHttpsRedirection();
app.UseRouting();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
}
}
}
```
Our project structure looks as shown in the image below

Finally, we are going to create the database and migrations using Entity Framework. For that, open the Package Manager Console in Visual Studio and enter the following commands one by one:
> Add-Migration "FirstMigration"
> Update-Database
Now, run the Product Application and open the swagger dashboard and use the endpoints.

There are also the following different ways to resolve the dependency without constructor injection:
Method 1:
```
[HttpGet]
[Route("api/Product/GetProducts")]
public IEnumerable<ProductDetail> GetProducts()
{
//method 1
var services = this.HttpContext.RequestServices;
var productService = (IProductService)services.GetService(typeof(IProductService));
return productService.GetProducts();
}
```
Method 2:
```
[HttpPost]
[Route("api/Product/AddProduct")]
public IActionResult AddProduct(ProductDetail product)
{
//Method 2
var productService =
(IProductService)this.HttpContext.RequestServices.GetService(typeof(IProductService));
productService.AddProduct(product);
return Ok();
}
```
Method 3:
```
//Method 3
public IActionResult UpdateProduct([FromServices] IProductService productService,
ProductDetail product)
{
productService.UpdateProduct(product);
return Ok();
}
```
So, this is all about Dependency Injection. I hope this helped you understand it.
**Happy Coding!** | jaydeep007 |
1,388,769 | React - Preventing Event Propagation from Parent Elements, Event Bubbling, Capturing, and Propagation | This topic is not directly related to React, but understanding Event Bubbling and Capturing in HTML... | 0 | 2023-03-05T08:37:15 | https://dev.to/nakzyu/react-preventing-event-propagation-from-parent-elements-event-bubbling-capturing-and-propagation-4npp | react, javascript, beginners, webdev | This topic is not directly related to React, but understanding Event Bubbling and Capturing in HTML and JS can be applied to React as well.
Suppose there is a component with a div tag as the parent and a button tag as the child rendered in the DOM, as shown below:
```jsx
const Example = () => (
<div
onClick={(event) => {
console.log("div clicked");
}}
>
<button
onClick={(event) => {
console.log("button clicked");
}}
></button>
</div>
);
```
### Event Capturing and Bubbling
When the button is clicked, JS handles event propagation as follows:

#### 1 Starting from the top-level `Document`, it looks for the target of the `onClick` event one step at a time.
This process of finding the `target` is called `Event Capturing`.
#### 2 After finding the `target`, it goes back up to the top-level `Document`.
* This process of going back up is called `Event Bubbling`.
* While going back up, if an element along the path (the `button`, `div`, `body`, or `html`) has a handler registered for the event, that handler is triggered, in order from the innermost element outward.
* Since the `button` has a handler for the `onClick` event, `"button clicked"` is printed to the console.
* Going up, the parent of the `button`, which is the `div`, also has a handler for the `onClick` event, so `"div clicked"` is printed to the console.
* There is no further output because `body` and `html` do not have any registered handler for onClick.
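React can also listen during the capturing phase by appending `Capture` to the event name. A sketch (my addition) to make the two phases visible:
```jsx
// onClickCapture runs during the capturing phase (on the way down),
// before any bubbling-phase onClick handlers run.
const Example = () => (
  <div
    onClickCapture={() => console.log("div capture")}
    onClick={() => console.log("div clicked")}
  >
    <button onClick={() => console.log("button clicked")}></button>
  </div>
);
// Clicking the button logs: "div capture", "button clicked", "div clicked"
```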
### How to Prevent Console Output from the Div? - Stopping `Event Propagation`
Handlers fire during the `Event Bubbling Phase`. If you stop this bubbling, you can prevent handlers further up the tree from being executed. This is called stopping `Event Propagation`.
You can stop propagation by calling `event.stopPropagation()`.
```jsx
const Example = () => (
<div
onClick={(event) => {
console.log("div clicked");
}}
>
<button
onClick={(event) => {
event.stopPropagation();
console.log("button clicked");
}}
></button>
</div>
);
```
If you stop propagation of the `button`'s event like in the above example, the event will no longer bubble up, and the `div`'s onClick handler will not be triggered.
1,388,836 | Vanilla JS for selecting a local text file and reading its content | Recently I am working a small side project that aims to build a mechanical 4-keys macropad with the... | 0 | 2023-03-05T09:28:20 | https://dev.to/tobychui/vanilla-js-for-selecting-a-local-text-file-and-reading-its-content-31oh | javascript, webdev |

Recently I have been working on a small side project that aims to build a mechanical 4-key macropad at the lowest cost possible. To make it more user friendly, I need to develop a website that lets users load their previous config file (JSON) and allows them to continue their "graphical programming". I searched online for a JavaScript-only solution that does the trick, but it took me some time to reach what I needed.
To save others' time (and to make sure I won't forget this in the future), here is how you let a user select a text file, upload it using a file selection dialog, and read its contents in Vanilla JS:
```javascript
let input = document.createElement('input');
input.type = 'file';
input.multiple = true;
input.onchange = e => {
let files = e.target.files;
for (var i = 0; i < files.length; i++){
let read = new FileReader();
read.readAsBinaryString(files[i]);
read.onloadend = function(){
//Content of the file selected
console.log(read.result);
}
}
}
input.click();
```
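A side note: `readAsBinaryString` is deprecated, so for a JSON config file `read.readAsText(files[i])` is the usual modern choice. Once `read.result` is available, the text still needs to be parsed; here is a small helper for that (the `parseConfig` name is my own, not part of the snippet above):
```javascript
// Parse the loaded config text, failing gracefully on invalid JSON
function parseConfig(text) {
  try {
    return JSON.parse(text);
  } catch (err) {
    console.error("Invalid config file:", err.message);
    return null;
  }
}

// Inside onloadend you would call: const config = parseConfig(read.result);
console.log(parseConfig('{"keys": 4}')); // → { keys: 4 }
console.log(parseConfig("not json"));    // → null
```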
Cheers! | tobychui |
1,388,902 | Some thoughts on Rust and Elm | Last week, my friend Joe asked me what's the best way to learn a programming language. I replied:... | 0 | 2023-03-05T12:00:23 | https://tonghe.xyz/2023/q1/rust-elm/ | Last week, my friend Joe asked me what's the best way to learn a programming language.
I replied: Go through the basics as quickly as possible. Then begin building things with it. This is how I learn Golang.
I had been curious about Rust for a while. And a coworker talked about how Rust was his favorite language with a lot of exuberance. I thought to myself, maybe I could do the same with Rust.
I was intimidated by “[the Rust book](https://doc.rust-lang.org/book/)”. It's a huge book of I don't know how many pages. And I never got past hello world and the curious exclamation mark after `println`.
That's when I discovered [Tour of Rust](https://tourofrust.com/) after some random Googling. Tour of Rust is a pleasant introduction to the basic syntax and concepts of Rust. With a compiler on the right side of the page, it adds some nice interactivity to the tutorial. It makes it easier to understand when you can try and change things and see how it fails.
So far my feeling is a few concepts (borrowing and referencing) in Rust are quite clever. I don't understand under what circumstances lifetime needs to be managed manually. Then it feels like a few things are added as an afterthought, which leads to a certain degree of internal incongruity in its syntax.
Yet despite the tribal sentiment among Go developers and Rust developers, these languages feel more similar than different.
Curiously, after searching Rust on YouTube, the algorithm god recommended a million Elm talks with one of them promising [to give you happiness](https://www.youtube.com/watch?v=kuOCx0QeQ5c).
Incidentally, I read a blog post from the Warp team about [building a UI in Rust](https://www.warp.dev/blog/why-is-building-a-ui-in-rust-so-hard).
In a gist, it was difficult to build a UI in Rust if they followed the [widget tree](https://docs.flutter.dev/development/ui/widgets-intro) approach of Flutter. Because multilayers of object inheritance do not feel natural or intuitive in Rust.
Although they used a close but different approach, the Model-View-Update approach used in the [Elm architecture](https://guide.elm-lang.org/architecture/) was an inspiration for the Warp team.
To use an inaccurate analogy in React terms, `Model` is like `Redux`, it manages all the states in your entire app. `View` manages the virtual DOM that renders the page as the states change. `Update` makes changes to the state upon receiving messages (events) when the user interacts with your app. Pretty neat.
What's charming about Elm is its easier, way easier, linting and debugging that work out of the box. You don't need to configure anything; it just works. This is so much better than React.
But you do need to learn a new language and get used to the functional programming way of thinking.
Bottom line: Rust and Elm are both fun. Don't know if or when they will be useful at work. | t0nghe | |
1,388,916 | 5 best practices for AWS Lambda function design standards | Here are a few best practices to incorporate into your Lambda function design standards. 1.... | 0 | 2023-05-21T08:00:44 | https://dev.to/harithzainudin/5-best-practices-for-aws-lambda-function-design-standards-1564 | javascript, aws, lambda | Here are a few best practices to incorporate into your Lambda function design standards.
## 1. Store and reference dependencies locally
If your code retrieves any externalized configuration or dependencies, make sure they are stored and referenced locally after initial execution. For example, if your function retrieves information from an external source like a relational database or AWS Systems Manager Parameter Store, it should be kept outside of the function handler. By doing so, the lookup occurs when the function is initially run. Subsequent warm invocations will not need to perform the lookup.


Looking at handler.js, on line number 3 we define the global variable. Then, on line number 6, we check whether the value already exists; if it does not, we fetch it and assign it to the variable `value`. With this, on the next invocation of the Lambda, if the Lambda is still warm, we do not need to fetch the same parameter value again.
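Since the code in the screenshot may be hard to read, here is a runnable sketch of the same pattern; `fetchParameterFromStore` is a stand-in for the real lookup (for example, a Parameter Store call):
```javascript
// Counts how many times the "slow" lookup actually runs
let fetchCount = 0;
function fetchParameterFromStore() {
  fetchCount += 1; // pretend this is a slow network call
  return "secret-value";
}

let value = null; // lives outside the handler, survives warm starts

function handler(event) {
  if (value === null) {
    value = fetchParameterFromStore(); // only on cold start
  }
  return value;
}

handler({});
handler({});
console.log(fetchCount); // → 1: the lookup ran once, not once per invocation
```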
## 2. Limit re-initialization of variables
You should also limit the re-initialization of variables or objects on every invocation. Any declarations in your Lambda function code (outside the handler code) remain initialized when a function is invoked.
To improve performance, limit the re-initialization of variables or objects in an AWS Lambda function. When a Lambda function is called, any code outside the handler function is only run once during the function's startup or cold start. Variable declarations and object initialization are included.
Re-initializing variables or objects on each invocation means the code is doing redundant work that could have been performed only once during startup. This can result in longer execution times and decreased performance of your Lambda function.
Limiting variable or object re-initialization ensures that these activities are only executed once during the cold start. Subsequent Lambda function invocations (warm invocations) can then reuse the initialized variables.
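The same idea in a minimal, runnable form: do the expensive one-time setup at module scope, not inside the handler. The regex compile here stands in for any costly initialization (parsing config, building lookup tables, and so on):
```javascript
// Counts how many times the expensive setup actually runs
let initCount = 0;
function buildMatcher() {
  initCount += 1;
  return /^order-\d+$/;
}

const matcher = buildMatcher(); // runs once per cold start

function handler(event) {
  // Re-declaring `matcher` here would redo the work on every call
  return matcher.test(event.id);
}

handler({ id: "order-42" });
handler({ id: "order-43" });
console.log(initCount); // → 1
```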
## 3. Check and reuse existing connections
Add logic to your code to check whether a connection already exists before creating one. If one exists, just reuse it.

In this example, the database client is stored in a global variable dbClient. During the initial invocation (cold start), the code checks to see if dbClient is null. If it is null, a new database client is generated with the auxiliary function createDatabaseClient. If dbClient is not null, it indicates that a client already exists, and the function reuses existing client.
Following invocations of the Lambda function (warm invocations) might save the overhead of generating a new database connection and client setup, which can be time-consuming. Reusing the client aids in the performance and efficiency of the Lambda function.
Note: To ensure best practises for maintaining database connections, you must manage error scenarios, connection pooling, and proper client closure in a real-world scenario. The supplied example is simplified for demonstrative purposes.
## 4. Use /tmp space as transient cache
Add code to check whether the local cache has the data that you stored previously. Each execution context provides additional disk space in the /tmp directory that remains in a reused environment.

The code uses fs.existsSync to check if the cache file exists in the /tmp directory. If the file exists, it reads the data from the cache file using fs.readFileSync and parses it as JSON for further processing.
If the cache file does not exist, it retrieves the data from an external source, processes it, and then stores it in the cache file located in the /tmp directory using fs.writeFileSync.
By utilizing the /tmp space as a transient cache in JavaScript, you can reduce the need to repeatedly retrieve the same data from an external source, thereby improving the performance and reducing the execution time of your Lambda function.
## 5. Check that background processes have completed
And finally, make sure any background processes (or callbacks, in the case of Node.js) are complete before the code exits. Background processes or callbacks initiated by the function that don't complete when the function ends will resume if you get a warm start.
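A minimal sketch of this in Node.js: track the background promises and await them before returning, so nothing is left half-finished when the execution environment freezes:
```javascript
let sideEffectDone = false;

// Stands in for any fire-and-forget work (sending a metric, a log flush, ...)
function sendMetricInBackground() {
  return new Promise((resolve) =>
    setTimeout(() => {
      sideEffectDone = true;
      resolve();
    }, 10)
  );
}

async function handler(event) {
  const pending = [];
  pending.push(sendMetricInBackground()); // fire...
  await Promise.all(pending);             // ...but don't forget to wait
  return "done";
}
```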
## Summary
By utilising the best practices above, you will be able to reduce latency, optimize cost, improve performance, and reduce the load on the external resources of your AWS Lambda.
To read more, you can go to [AWS Lambda Documentation](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html) for more best practices.
Stay hungry, stay foolish, and keep learning!
Thank you for reading :D
----------
Psstt pstt :p
Do consider giving this article some love ❤️ and following me! Why not, right? It's FREE~
I would really appreciate it 👨🏻💻
Will be posting more on things related to AWS, JavaScript, Serverless and more!
Cover image by <a href="https://unsplash.com/@jdiegoph?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Diego PH</a> on <a href="https://unsplash.com/photos/fIq0tET6llw?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
| harithzainudin |
1,388,937 | Stack Creation through Step Function Workflow Execution | “ I have to create a stack automatically so I got the way to do it using step function workflow... | 0 | 2023-03-05T12:37:26 | https://dev.to/aws-builders/stack-creation-through-step-function-workflow-execution-3p7h | awsstepfunction, s3, cloudformation, iam | “ I have to create a stack automatically so I got the way to do it using step function workflow execution. It's very easy to set up using api parameters and cost will be based on the services as s3 and step function.”
AWS Step Functions is a serverless orchestration service that lets you combine AWS Lambda functions and other AWS services to build business-critical applications. Through Step Functions graphical console, you see your application’s workflow as a series of event-driven steps. Step Functions is based on state machines and tasks. A state machine is a workflow. A task is a state in a workflow that represents a single unit of work that another AWS service performs. Each step in a workflow is a state.
In this post, you will get to know how to create a stack through a Step Functions workflow execution. Here I have created an S3 bucket and a step function, and added the required permissions to the step function's role.
#Prerequisites
You’ll need an Amazon Simple Storage Service (S3) bucket for this post. [Getting started with Amazon Simple Storage Service](https://aws.amazon.com/s3/getting-started/) provides instructions on how to create a bucket. For this blog, I assume that I have an S3 bucket created.
#Architecture Overview

The architecture diagram shows the overall deployment architecture and data flow across AWS Step Functions, an IAM role, CloudFormation, and S3.
#Solution overview
The blog post consists of the following phases:
1. Creation of an AWS Step Function with a workflow that uses the CloudFormation CreateStack action
2. Output as CloudFormation stack and S3 bucket creation
I have an S3 bucket with the required template file, as below →


##Phase 1: Creation of an AWS Step Function with a Workflow Using the CloudFormation CreateStack Action
1. Open the AWS Step Functions console and create a state machine with default parameters. Design a workflow that uses the CloudFormation CreateStack action to create the stack from its template. Also add the IAM permissions required for the step function to execute successfully. The API parameters contain the CloudFormation template URL (the S3 object URL of the template stored in S3) and the Capabilities parameter.
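The resulting state machine definition looks roughly like this in Amazon States Language (a sketch: the stack name, bucket, and template file are placeholders for your own values):
```json
{
  "StartAt": "CreateStack",
  "States": {
    "CreateStack": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:cloudformation:createStack",
      "Parameters": {
        "StackName": "my-demo-stack",
        "TemplateURL": "https://my-template-bucket.s3.amazonaws.com/template.yaml",
        "Capabilities": ["CAPABILITY_NAMED_IAM"]
      },
      "End": true
    }
  }
}
```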




















##Phase 2: Output as CloudFormation Stack and S3 Bucket Creation






#Clean-up
Delete S3 Bucket, Step Function, Iam Role and Cloudformation Stack.
#Pricing
I review the pricing and estimated cost of this example.
Cost of S3 = $0.91
Cost of Step Functions = $0 for first 4,000 state transitions = $0.0
Total Cost = $0.91
#Summary
In this post, I showed “stack creation through step function workflow execution”.
To get started with AWS Step Functions, open the [AWS Step Functions console](https://us-east-1.console.aws.amazon.com/states/home?region=us-east-1#/homepage). To learn more, read the [AWS Step Functions documentation](https://docs.aws.amazon.com/step-functions/?id=docs_gateway).
Thanks for reading!
Connect with me: [Linkedin](https://www.linkedin.com/in/gargee-bhatnagar-6b7223114)
 | bhatnagargargee |
1,389,015 | CLI Genie: Revolutionizing Command Line Workflow with Native Language Input and OpenAI's GPT-3 API | project - https://github.com/JM-Lab/cli-genie CLI Genie CLI Genie is a tool that assists... | 0 | 2023-03-05T14:46:21 | https://dev.to/jmlab/how-cli-genie-uses-openais-gpt-3-api-to-help-users-write-cli-commands-in-their-native-language-4fib | gpt3, cli, chatgpt, openai | project - https://github.com/JM-Lab/cli-genie

# CLI Genie
CLI Genie is a tool that assists users in writing CLI commands using their native language through OpenAI's GPT-3 API.
To put it simply, CLI Genie helps users who are not comfortable with writing commands in English to do so using their preferred language. By utilizing OpenAI's GPT-3 API, CLI Genie can provide accurate and relevant commands based on the user's request. It is important to note, however, that as with any software that uses machine learning or AI, there may be limitations and potential errors.
The CLI Genie is powered by OpenAI's advanced language model called gpt-3.5-turbo. It performs similarly to text-davinci-003 but is 10% cheaper per token.
## The Key Features
1. Support for native language input
* User input in their native language is understood and processed.
2. OS and version awareness
* Appropriate CLI commands are suggested based on the user's operating system and version.
3. Similar command recommendations
* If no appropriate command is found, similar commands with the same intent are recommended.
## Installation
### Requirements
- git
- Java 11 or higher
- OpenAI API key
### Linux and Mac
```
git clone https://github.com/JM-Lab/cli-genie.git
cd cli-genie
./gradlew install
sudo cp bin/cg /usr/local/bin
export OPENAI_API_KEY=[your key]
```
### Windows
```
git clone https://github.com/JM-Lab/cli-genie.git
cd cli-genie
.\gradlew.bat install
copy bin\cg.bat C:\Windows\System32
set OPENAI_API_KEY=[your key]
```
#### OpenAI API key
To use CLI Genie, you need to obtain the OPENAI_API_KEY from https://platform.openai.com and set it as an environment variable before the first run. The key will be stored in [USER HOME]/.cg/openai-api-key and used for subsequent runs.
## Usage
You can run CLI Genie by using the `cg` command (short for CLI Genie). Instructions can be entered in the user's mother tongue.
The usage of the cg command is as follows:
```
cg [instructions in mother tongue]
```
## Examples
Here's examples of the usage of CLI Genie in various languages
### Korean
```
cg test.txt 파일에서 "abc"를 "cba"로 바꿔주세요
```
### English
```
cg replace the letters "abc" with "cba" in the file test.txt
```
### Mandarin Chinese
```
cg 在test.txt文件中用"cba"替换"abc"
```
### Hindi
```
cg test.txt फ़ाइल में "abc" को "cba" से बदलें
```
### Spanish
```
cg reemplazar las letras "abc" por "cba" en el archivo test.txt
```
### Arabic
```
cg استبدل الحروف "abc" بـ "cba" في الملف test.txt
```
### Bengali
```
cg test.txt ফাইলের "abc" অক্ষরগুলি "cba" দিয়ে পরিবর্তন করুন
```
### French
```
cg remplacer les lettres "abc" par "cba" dans le fichier test.txt
```
### Russian
```
cg заменить буквы "abc" на "cba" в файле test.txt
```
### Portuguese
```
cg substituir as letras "abc" por "cba" no arquivo test.txt
```
### Urdu
```
cg test.txt فائل میں حروف "abc" کو "cba" سے تبدیل کریں
```
### Japanese
```
cg test.txt ファイル内の文字列 "abc" を "cba" に置き換える
```
### Vietnamese
```
cg thay thế các chữ cái "abc" bằng "cba" trong tệp test.txt
``` | jmlab |
1,389,022 | python: unit test with mock functions from different modules | I recently started learning python 3 and unit test with pytest and unittest. As I struggle to figure... | 0 | 2023-03-05T15:14:14 | https://dev.to/kenakamu/python-unit-test-with-mock-functions-from-different-modules-cla | python, testing, mock | I recently started learning python 3 and unit test with ``pytest`` and ``unittest``.
As I struggled to figure out how to mock in several situations, I am taking notes here so that anyone with the same issues may find this useful someday.
## Structures and code
Before writing tests, this is my folder and file structure.
```shell
src/
├── my.py
├── my_modules/
│ ├── __init__.py
│ └── util.py
└── tests/
├── __init__.py
├── test_my.py
└── test_unit.py
```
my.py
```python
from my_modules.util import util
def main():
return util('my input')
if __name__ == '__main__':
main()
```
util.py
```python
from datetime import datetime
def util(input: str) -> str:
input = add_time(input)
return f"util: {input}"
def add_time(input: str) -> str:
return f"{datetime.now()}: {input}"
```
## Add a unit test for util function
For the ``util`` method, I need to mock ``add_time``, which comes from the same file. I found several ways to achieve this, and this is one of them.
test_unit.py
```python
from unittest.mock import patch, Mock
from my_modules.util import util
@patch('my_modules.util.add_time', Mock(return_value='dummy'))
def test_util():
expected = 'util: dummy'
input = 'input'
input = util(input)
assert expected == input
```
I used the ``unittest.mock.patch`` function as a decorator and specified the desired return value as part of the ``Mock`` object. The namespace for ``add_time`` is the same as for the ``util`` function, as they are in the same module.
## Add a unit test for add_time function
To unit test the ``add_time`` function, I need to mock the ``datetime.now()`` function. I again use ``unittest.mock.patch``. This time, I need to create the ``Mock`` with a bit more code, as I need to mock a function rather than a simple return_value or an attribute.
```python
from datetime import datetime
from unittest.mock import patch, Mock
from my_modules.util import add_time
@patch('my_modules.util.datetime', Mock(**{"now.return_value": datetime(2023, 1, 1)}))
def test_add_time():
expected = f'{datetime(2023, 1, 1)}: input'
input = 'input'
input = add_time(input)
assert expected == input
```
I can pass a dictionary that contains attribute and method information to the ``Mock`` object. As I mock the ``now`` function, I use ``"now.return_value":<some date>``.
If it's an attribute, I can simply pass it to the ``Mock`` like ``Mock(attribute_name=<value>)`` or as part of the dictionary like ``{"attribute":<value>}``
The module name is a bit interesting, as I need to specify ``my_modules.util.datetime``. The reason is that as soon as ``datetime`` is imported into ``util.py``, it becomes part of that module's namespace, which was quite confusing to me.
I can show another sample of this in the next test.
## Add a unit test for main function
To test the ``main`` function in ``my.py``, I just need to mock the ``util`` function. Let's do it.
test_my.py
```python
from my import main
from unittest.mock import patch, Mock

@patch("my.util", Mock(return_value="dummy"))
def test_main():
    result = main()
    assert result == 'dummy'
```
Even though the ``util`` function comes from the ``my_modules`` module, at test time it lives in the ``my.util`` namespace, as I previously explained.
I specified the ``Mock`` object in the decorator, but that way I cannot check whether the ``util`` function is actually called with the expected parameters. So, let's receive the mock inside the test function instead.
```python
@patch("my.util")
def test_main_util_called_with_expected_parameter(util_mock):
util_mock.return_value = 'dummy'
result = main()
assert result =='dummy'
util_mock.assert_any_call('my input')
```
This time, I use the decorator without passing a ``Mock`` object. Instead, I receive it as the ``util_mock`` argument and then set its ``return_value``.
As I have access to the mock, I can then assert that the method was called with the expected arguments.
Lastly, I also learnt that I can use a ``with`` statement to achieve the same.
```python
def test_main_util_called_with_expected_parameter_with():
    with patch("my.util") as util_mock:
        util_mock.return_value = 'dummy'
        result = main()
        assert result == 'dummy'
        util_mock.assert_any_call('my input')
```
By doing this, I don't need the decorator.
All the tests run successfully!

## Summary
I actually don't know yet which is the best way to mock functions. I believe each approach has pros and cons, so if any of you have suggestions or opinions, please let me know in the comments! Thanks. | kenakamu |
1,389,143 | Setting up authentication on Nginx | When using Nginx, you may want to restrict access to certain URLs or paths. To... | 0 | 2023-03-13T14:50:00 | https://dev.to/mxglt/mettre-en-place-lauthentification-sur-nginx-1cjj | sre, devops, webdev | When using Nginx, you may want to restrict access to certain URLs or paths. To do that, you can set up authentication, and that's what we are going to look at today.
---
## The htpasswd file
To let users log in, Nginx needs an `htpasswd` file containing the list of user names and their passwords in the following format:
```
user1:password1
user2:password2
...
```
User names are stored in plain text, but the passwords must be stored as hashes (by default `htpasswd` produces MD5/apr1 hashes; bcrypt is available through its `-B` option).
The easiest way to generate this file and hash the passwords is to use the `htpasswd` utility.
### Installing htpasswd
This utility ships with the `apache2-utils` package, which you can install with the following commands:
```bash
sudo apt-get update
sudo apt-get install apache2-utils
```
### Using htpasswd
The command has the following format:
```bash
htpasswd [option] [file path] [username]
```
The useful option to know is `-c`, which creates the file.
Then all you have to do is define each user/password pair, and you're done!
*Example*
```bash
# Create the file and add the user toto
htpasswd -c /etc/nginx/.htpasswd toto
# Add another user
htpasswd /etc/nginx/.htpasswd titi
```
---
## Configuring Nginx
In your Nginx configuration, you just need to add `auth_basic` & `auth_basic_user_file` as in the following example, and your Nginx is ready!
```
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
root /usr/share/nginx/html;
index index.html index.htm;
server_name localhost;
location / {
try_files $uri $uri/ =404;
auth_basic "Restricted Content";
auth_basic_user_file /etc/nginx/.htpasswd;
}
}
```
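If only part of the site should be protected, the same two directives can be scoped to a specific `location` block instead of the whole server (the `/admin` path here is just an illustrative choice):

```
server {
    listen 80;
    root /usr/share/nginx/html;

    # Public content: no authentication required
    location / {
        try_files $uri $uri/ =404;
    }

    # Only this path prompts for credentials
    location /admin {
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}
```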
---
I hope this helps! 🍺
---
<center><h3>Want to support me?</h3></center>
<a href="https://www.buymeacoffee.com/mxguilbert7" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy Me A Coffee" height="41" width="174"></a> | mxglt |
1,389,157 | Symfony Station Communiqué — 03 March 2023. A look at Symfony, Drupal, PHP, Cybersecurity, and Fediverse news! | This communiqué originally appeared on Symfony Station, your source for cutting-edge Symfony, PHP,... | 0 | 2023-03-05T18:17:20 | https://www.symfonystation.com/Symfony-Station-Communique-03-March-2023 | symfony, drupal, php, cybersecurity | This communiqué [originally appeared on Symfony Station](https://www.symfonystation.com/Symfony-Station-Communique-03-March-2023), your source for cutting-edge Symfony, PHP, and Cybersecurity news.
Welcome to this week's Symfony Station Communiqué. It's your review of the essential news in the Symfony and PHP development communities focusing on protecting democracy. We also cover the cybersecurity world and the Fediverse.
Please take your time and enjoy the items most relevant and valuable to you. There are a plethora of Drupal and PHP items.
Thanks to Javier Eguiluz and Symfony for sharing [our latest communiqué](https://www.symfonystation.com/Symfony-Station-Communique-24-February-2023) in their [Week of Symfony](https://symfony.com/blog/a-week-of-symfony-843-20-26-february-2023).
**My opinions will be in bold.**
---
A significant proportion of the content we curate is on Medium. I highly recommend investing in a membership to access all the articles you want to read. It's a small investment that can boost your career. As you may have noticed, non-members can only access a limited number of articles per month.
**[Become a member here](https://medium.com/@mobileatom/membership)**! The compensation we receive from your use of this link helps pay for our weekly communiqué.
---

## Symfony
As always, we will start with the official news from Symfony. Highlight -> “This week, the upcoming Symfony 6.3 version experienced an intense development activity to finish many new features such as: adding a remember me option for JSON logins, allowing to trim parameters in XML config files, introducing a new Exclude attribute, allowing to define the batch size in Messenger component and allowing to extend the Autowire attribute.“
[A Week of Symfony #843 (20-26 February 2023)](https://symfony.com/blog/a-week-of-symfony-843-20-26-february-2023?utm_source=Symfony+Blog+Feed&utm_medium=feed)
Symfony announced:
[SymfonyOnline June 2023 - Submit you paper until March 6th!](https://symfony.com/blog/symfonyonline-june-2023-submit-you-paper-until-march-6th)
Blackfire has:
[Blackfire, a complete observability solution](https://blog.blackfire.io/blackfire-a-complete-observability-solution.html)
---
## Featured Item

The brilliant writer, Cory Doctorow explains why large corporate platforms suck:
“online platform businesses have a distinctly more abusive and sinister character. To a one, they follow the “[enshittification](https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys)” pattern: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves.
Why are digital businesses more prone to this conduct than their brick-and-mortar cousins? One answer is *tech exceptionalism*: namely, that tech founders are evil wizards, uniquely evil and uniquely brilliant and thus able to pull off breathtakingly wicked acts of sorcery that keep us all in their thrall.
There’s another, simpler explanation for the enshittification of platform economics. Rather than trusting the self-serving narratives of the Prodigal Techbros who claim to have superhuman powers but promise that they have stopped using them for evil, we can adopt a more plausible worldview: that tech barons are ordinary mediocrities, no better and no worse than the monopolists that preceded them, and any differences come down to affordances in technology and regulation, not an especial wicked brilliance.”
### [Twiddler](https://doctorow.medium.com/twiddler-1b5c9690cce6)
---
### This Week
Anton Lytvynov has three good items:
[Symfony 6 and PHP 8: A Promising Future for Web Application Development](https://antonlytvynov.medium.com/symfony-6-and-php-8-a-promising-future-for-web-application-development-a1b8bae82fa)
[Creating Custom Web Applications with Symfony 6 and Php 8: A Step-by-Step Guide](https://antonlytvynov.medium.com/creating-custom-web-applications-with-symfony-6-and-php-8-a-step-by-step-guide-e911eee0fa06)
[Symfony vs. Other Web Frameworks: A Comprehensive Comparison](https://antonlytvynov.medium.com/symfony-vs-other-web-frameworks-a-comprehensive-comparison-73f8343b8417)
**There’s not much new here information-wise, but some exciting graphics make viewing worthwhile.**
Filip Horvat shows us how to:
[Set up framework for testing security in API Platform (Symfony)](https://medium.com/@fico7489/set-up-framework-for-testing-security-in-api-platform-symfony-67e87657c9a1)
### eCommerce
And Webkul shows us how to:
[Generate Symfony route in PrestaShop module](https://webkul.com/blog/generate-symfony-route-in-prestashop-module/)
Magecom explores:
[Profiling and Debugging for Magento and What to Consider When Choosing](https://dev.to/magecomcompany/profiling-and-debugging-for-magento-and-what-to-consider-when-choosing-4976)
Lemberg Solutions looks at:
[Drupal Commerce + SAP Integration: Solutions and Benefits](https://lembergsolutions.com/blog/drupal-commerce-sap-integration-solutions-and-benefits)
### CMSs
bitExpert announces:
[Sulu Security.txt Bundle 0.1.0 released!](https://blog.bitexpert.de/blog/sulu_securitytxt_0.1.0)
André Laugks shows us how to:
[Use XInclude to organize Content elements in Page templates in Sulu](https://dev.to/elevado/use-xinclude-to-organize-content-elements-in-page-templates-in-sulu-3lpd)
**Hurray, Sulu CMS pieces.**
Opensource examines:
[3 myths about open source CMS platforms](https://opensource.com/article/23/3/open-source-cms-myths)
Specbee shows us how to:
[Migrate to Drupal 9 (or 10) Without Losing Your Hard-Earned SEO Ranking](https://www.specbee.com/blogs/migrate-to-drupal-9-or-10-without-losing-seo-ranking)
And ComputerMinds shares a series on updating a Drupal Site:
[Drupal 10 upgrade: Defining the project scope](https://www.computerminds.co.uk/articles/drupal-10-upgrade-defining-project-scope)
[Drupal 10 upgrade: File to media](https://www.computerminds.co.uk/articles/drupal-10-upgrade-file-media)
Speaking of versions, CTI Digital looks at:
[Drupal Through The Years: The Evolution of Drupal](https://www.ctidigital.com/blog/drupal-through-the-years-the-evolution-of-drupal)
And:
[How Drupal Has Evolved to Make Content Editors Lives Easier](https://www.ctidigital.com/blog/how-drupal-has-evolved-to-make-content-editors-lives-easier)
ADCI Solutions reviews the official Drupal frontend theme.
[Olivero Drupal Theme](https://www.adcisolutions.com/knowledge/olivero-drupal-theme)
Adam Vertsson explores:
[7 security modules for Drupal that you cannot live without](https://www.adamevertsson.se/en/articles/7-security-modules-drupal-you-cannot-live-without)
Drupal Journal shows us how to:
[Create custom Middlewares in Drupal](https://drupaljournal.com/article/drupal/create-custom-middlewares-drupal)
And Marouene shows us how to:
[Enforce Drupal Conventional commits using GrumPHP](https://medium.com/@mh.marouan/enforce-drupal-conventional-commits-using-grumphp-373fa8a275c7)
Matt Glaman wants us to:
[Check out the "Drupal at your fingertips" developer reference guide](https://mglaman.dev/blog/check-out-drupal-your-fingertips-developer-reference-guide)
**Very helpful.**
Evolving Web shares:
[Hands-On With Drupal 10: Discover the Best Modules Through Project Browser](https://evolvingweb.com/blog/hands-drupal-10-discover-best-modules-through-project-browser?utm_source=feed)
Axelerant looks at:
[How Acquia DXP Is Empowering Businesses To Design Digital Experiences](https://www.axelerant.com/blog/acquias-dxp-to-design-digital-experiences)
Speaking of Acquia, its and Drupal's founder, Dries Buytaert, opines on:
[Artificial Intelligence, the future of Content Management and the Web](https://dri.es/artificial-intelligence-the-future-of-content-management-and-the-web)
**He’s much more optimistic than I am.**
Lullabot examines:
[Orientation and Wayfinding: A Quick Overview](https://www.lullabot.com/articles/orientation-and-wayfinding-quick-overview)
Palantir shares:
[Yang's DrupalEasy Fellowship Experience: Taking a chance on a career change](https://www.palantir.net/blog/yangs-drupaleasy-fellowship-experience-taking-chance-career-change)
### Previous Weeks
And this one I missed in February:
[Tessa's DrupalEasy Fellowship Experience: Collaborative, supportive, and agile](https://www.palantir.net/blog/tessas-drupaleasy-fellowship-experience-collaborative-supportive-and-agile)
Drupal Journal also has:
[Printing a certificate by using the pdf_api in Drupal](https://drupaljournal.com/article/drupal/printing-certificate-using-pdfapi-drupal)
[Dynamic routes and route_callbacks in Drupal](https://drupaljournal.com/article/drupal/dynamic-routes-and-routecallbacks-drupal)
Specbee explores:
[Taming JavaScript in Drupal (Includes FAQs)](https://www.specbee.com/blogs/taming-javascript-in-drupal)
ADCI Solutions also reviews the official Drupal backend theme.
[Claro Admin Theme](https://www.adcisolutions.com/knowledge/claro-admin-theme)
Jolicode examines:
[How TaggedLocator Can Help You Design a Better Symfony Application](https://jolicode.com/blog/how-taggedlocator-can-help-you-design-better-symfony-application)

## PHP
### This Week
The PHP Foundation’s latest roundup is out.
[PHP Core Roundup #10](https://opencollective.com/phpfoundation/updates/php-core-roundup-10)
Tomasz Dobrowolski has two articles for us:
[9 Essential PHPStorm Shortcuts That Will Skyrocket Your Productivity as a PHP Developer](https://tomdob1.medium.com/9-essential-phpstorm-shortcuts-that-will-skyrocket-your-productivity-as-a-php-developer-4c82f3e09942)
[7 Concepts Every PHP Developer Must Understand To Succeed](https://levelup.gitconnected.com/7-concepts-every-php-developer-must-understand-to-succeed-e51ea83bd7b7)
**Outstanding stuff here.**
George looks at:
[Clean Architecture with PHP](https://medium.com/unil-ci-software-engineering/clean-architecture-with-php-22de915a6c50)
**Also excellent.**
Muhammad Noman Rauf shares a:
[Guide to Managing Dependencies with Composer](https://medium.com/@noman.rauf90/a-guide-to-managing-php-dependencies-with-ease-f8751fb5c46e)
Florian Bauer shares the cogent point that PHP needs better marketing.
[Why PHP should be renamed to HypeScript](https://medium.com/@florian_4237/why-php-should-be-renamed-to-hypescript-5baa55992cf1)
Farhan Tanvir is back with another:
[7 Useful PHP Libraries You Should Use in Your Next Project](https://medium.com/geekculture/7-useful-php-libraries-you-should-use-in-your-next-project-d728c29f1078)
Tom Smykowski has:
[PHP 8.3 Coming Soon: json_validate()](https://tomaszs2.medium.com/php-8-3-coming-soon-json-validate-a5fa498d65c6)
Samuel Fontebasso looks at:
[PHP+Nginx with Docker in production](https://blog.fontebasso.com.br/php-nginx-with-docker-in-production-a94b2b903666)
Nacho Colomina Torregrosa explores:
[Using GitHub actions to execute your PHP tests after every push](https://dev.to/icolomina/using-github-actions-to-execute-your-php-tests-after-every-push-2lpp)
Hamid Ghorashi explores:
[Using DTOs for Safe Data Transport in PHP Projects](https://medium.com/@h.ghorashi/using-dtos-for-safe-data-transport-in-php-projects-ff9a547203ea)
php[architect] examines:
[Serializing Data In PHP](https://www.phparch.com/2023/02/serializing-data-in-php/)
And on a related note, Veshraj Ghimire explores a:
[Deserialization Disaster in PHP](https://medium.com/pentesternepal/deserilaization-disaster-in-php-8f35f4013c91)
The Tech Cat opines on:
[PHP and Artificial Intelligence: A Match Made in Heaven](https://medium.com/the-ai-tech-corner/php-and-artificial-intelligence-a-match-made-in-heaven-6ce9ee7ca126)
Alin Pintilie looks at:
[PHP generators or how to simplify iterators](https://medium.com/@alin.pintilie/php-generators-or-how-to-simplify-iterators-17c499f6d087)
Vlad Reshetilo examines:
[The difference between PHP-Imagick and PHP-GD](https://medium.com/@vlreshet/the-difference-between-php-imagick-and-php-gd-19b84dc064b4)
Camilo Herrera looks at:
[Querying Whois information with PHP](https://medium.com/winkhosting/querying-whois-information-with-php-f686baee8c7)
Lukasz explores:
[Asynchronous PHP](https://lbacik.medium.com/asynchronous-php-1d94af9e0f19)
**This is an interesting article, as he is not discussing threads.**
Localheinz examines organizing tests code:
[Systems under test](https://localheinz.com/articles/2023/03/03/organizing-test-code-in-php/)
Charles Sprayberry shares:
[Introducing database-test-case](https://www.cspray.io/blog/introducing-database-test-case/)
### Previous Weeks
Stefan Priebsch explores:
[Domain-Driven Design with PHP](https://thephp.cc/articles/domain-driven-design-with-php)
Joe Tannenbaum examines:
[Creating a Loading Spinner in the Terminal Using PHP](https://blog.joe.codes/creating-a-loading-spinner-in-the-terminal-using-php)

## Other
[Please visit our Support Ukraine page](https://www.symfonystation.com/Support-Ukraine) to learn how you can help kick Russia out of Ukraine (eventually).
### The cyber response to Russia’s War Crimes and other douchebaggery
This week marked the one-year anniversary of Russia’s reign of terror in Ukraine. So, there were a lot of good articles published.
TechCrunch reports:
[A year on from Russia’s invasion, Ukrainian startups show astounding resilience](https://techcrunch.com/2023/02/24/a-year-on-from-russias-invasion-ukrainian-startups-show-astounding-resilience/)
[Hacker group defaces Russian websites to display the Kremlin on fire](https://techcrunch.com/2023/02/24/hacker-group-defaces-russian-websites-to-display-the-kremlin-on-fire/)
Ampere News reports:
[A Ukrainian cyber defender looks back after 1 year in the trenches: "This is war and crazy tragedy"](https://www.youtube.com/watch?v=QrZzq66_o7s)
The Next Web reports:
[Ukraine’s year of war exposes changing roles for cyber weapons](https://thenextweb.com/news/cybersecurity-in-ukraine-war-one-year-anniversary-russia-invasion)
DarkReading reports:
[How the Ukraine War Opened a Fault Line in Cybercrime, Possibly Forever](https://www.darkreading.com/analytics/ukraine-war-fault-line-cybercrime-forever)
The Washington Post reports:
[Impact of Ukraine-Russia war: Cybersecurity has improved for all](https://www.washingtonpost.com/technology/2023/02/25/ukraine-war-cyber-security/)
CEPA reports:
[Sanctions Against Russia Are More Effective Than Skeptics Suggest](https://cepa.org/article/sanctions-against-russia-are-more-effective-than-skeptics-suggest/)
### The Evil Empire Strikes Back
Grid reports:
[One year later, Russia’s misinformation war is still getting help from western companies.](https://www.grid.news/story/global/2023/02/24/one-year-later-russias-misinformation-war-is-still-getting-help-from-western-companies/)
Axios reports:
[Russian cybercrime is starting to rebound after war disruption](https://www.axios.com/2023/02/24/russian-cybercrime-rebound-ukraine)
HackRead reports:
[News Corp: Hackers sat undetected on its network for 2 years](https://www.hackread.com/news-corp-breach-hackers-undetected/)
**He He He. They are about as competent (aka shit) at cybersecurity as they are at “journalism”.**
The Register reports:
[Russian hacktivists DDoS hospitals, with pathetic results](https://www.theregister.com/2023/02/28/anonymous_sudan_ddos_hospitals/)
Ars Technica reports:
[Russia fines Wikipedia for publishing facts instead of Kremlin war propaganda](https://arstechnica.com/tech-policy/2023/02/russia-fines-wikipedia-for-publishing-facts-instead-of-kremlin-war-propaganda/)
Bleeping Computer reports:
[Russia bans private messaging apps owned by foreign entities](https://www.bleepingcomputer.com/news/security/russia-bans-private-messaging-apps-owned-by-foreign-entities/)
The Guardian reports:
[China spends billions on pro-Russia disinformation, US special envoy says](https://www.theguardian.com/world/2023/feb/28/china-spends-billions-on-pro-russia-disinformation-us-special-envoy-says)
Bleeping Computer reports:
[Chinese hackers use new custom backdoor to evade detection](https://www.bleepingcomputer.com/news/security/chinese-hackers-use-new-custom-backdoor-to-evade-detection/)
### Cybersecurity/Privacy
And:
[White House releases new U.S. national cybersecurity strategy](https://www.bleepingcomputer.com/news/security/white-house-releases-new-us-national-cybersecurity-strategy/)
Hackernoon reports:
[Network Detection and Response: the Future of Cybersecurity](https://hackernoon.com/network-detection-and-response-the-future-of-cybersecurity)
Portswigger reports:
[NIST plots biggest ever reform of Cybersecurity Framework](https://portswigger.net/daily-swig/nist-plots-biggest-ever-reform-of-cybersecurity-framework)
[CSF 2.0 blueprint offered up for public review](https://portswigger.net/daily-swig/nist-plots-biggest-ever-reform-of-cybersecurity-framework)
NBC News reports:
[U.S. Marshals Service suffers 'major' security breach that compromises sensitive information, senior law enforcement officials say](https://www.nbcnews.com/politics/politics-news/major-us-marshals-service-hack-compromises-sensitive-info-rcna72581)
GovTech reports:
[Feds Push Local Election Officials to Boost Security Ahead of 2024](https://www.govtech.com/elections/feds-push-local-election-officials-to-boost-security-ahead-of-2024)
Darkreading reports:
[What GoDaddy's Years-Long Breach Means for Millions of Clients](https://www.darkreading.com/risk/what-godaddy-years-long-breach-means-millions-clients)
### More
The Markup asks:
[Section 230 Is a Load-Bearing Wall—Is It Coming Down?](https://themarkup.org/hello-world/2023/02/25/section-230-is-a-load-bearing-wall-is-it-coming-down)
Jens Oliver Meiert maintains this handy tool.
[HTML Elements Index](https://meiert.com/en/indices/html-elements/)
Jennifer Wjertzoch shows us:
[How to Make a Fully Accessible CSS-Only Carousel](https://levelup.gitconnected.com/how-to-make-a-fully-accessible-css-only-carousel-40e8bd62032b)
**No JS required. For more like this check out CSSUI in our website’s footer.**
Spicyweb points out the obvious:
[The Great Gaslighting of the JavaScript Era](https://www.spicyweb.dev/the-great-gaslighting-of-the-js-age/)
**JS frontend frameworks are bullshit.**
KD Nuggets shares:
[SQL Query Optimization Techniques](https://www.kdnuggets.com/2023/03/sql-query-optimization-techniques.html)
Mafiree explores the:
[Significance of using Invisible Primary key (GIPK) with MySQL 8.0](https://www.mafiree.com/blogs.php?blog_details=NDE=)
Kinsta shows us how to:
[Remove Docker Images, Volumes, and Containers in Seconds](https://kinsta.com/blog/docker-remove-images/)
### Fediverse
Andy Piper has:
[Thoughts on Dev Rel in the post-Twitter era](https://dev.to/andypiper/thoughts-on-dev-rel-in-the-post-twitter-era-2k8a)
There is big news this week from Flipboard.
Via TechCrunch:
[Flipboard joins the Fediverse with a Mastodon integration and community, plans for ActivityPub](https://techcrunch.com/2023/02/28/flipboard-joins-the-fediverse-with-a-mastodon-integration-and-community-plans-for-activitypub/)
Engadget looks at what drove the decision.
[Flipboard is leaning into Mastodon — and away from Twitter](https://www.engadget.com/flipboard-is-leaning-into-mastodon-and-away-from-twitter-160036103.html)
As does the Fediverse Report.
[Flipboard joins the Fediverse](https://fediversereport.com/flipboard-joins-the-fediverse/)
Here’s the official announcement.
[The Future of Flipboard Is Federated](https://about.flipboard.com/inside-flipboard/flipboard-mastodon-federated/)
**So, if you are on Flipboard but not Mastodon, now is the time to join.**
And follow us [on Flipboard](https://flipboard.com/@mobileatom/symfony-for-the-devil-allupr6jz) to get an idea of what will be in next week’s communiqué.
The Verge reports:
[Mozilla thinks Mastodon could be the next HTTP](https://www.theverge.com/2023/3/1/23620904/mozilla-thinks-mastodon-could-be-the-next-http)
Also, this week, Medium started allowing signups to [their Mastodon instance](https://blog.medium.com/medium-embraces-mastodon-19dcb873eb11). If you are also interested in content production, marketing, strategy, and related fields, you can follow me at [@mobileatom@me.dm](https://me.dm/@mobileatom).
And don’t forget that Tumblr and Flickr are joining in the future.
## CTAs
- That’s it for this week. Please share this communiqué.
- Also, be sure to [join our newsletter list at the bottom of our site’s pages](https://www.symfonystation.com/contact). Joining gets you each week's communiqué in your inbox (a day early).
- Follow us [on Flipboard](https://flipboard.com/@mobileatom/symfony-for-the-devil-allupr6jz) or at [@symfonystation@phpc.social](https://phpc.social/web/@symfonystation) on Mastodon for daily coverage. Consider joining the [@phpc.social](https://phpc.social/web/home) instance. If this communique is a little overwhelming, you can get a condensed weekly news highlight post on [Friendica](https://friendica.me/profile/friendofsymfony).
Do you own or work for an organization that would be interested in our promotion opportunities? Or supporting our journalistic efforts? If so, please get in touch with us. We’re in our infancy, so it’s extra economical. 😉
More importantly, if you are a Ukrainian company with coding-related products, we can offer free promotion on [our Support Ukraine page](https://www.symfonystation.com/Support-Ukraine). Or, if you know of one, get in touch.
Keep coding Symfonistas!
**[Visit our Communiqué Library](https://www.symfonystation.com/Communiqu%C3%A9s)**
You can find a vast array of curated evergreen content.
## Author

### Reuben Walker
Founder
Symfony Station
| reubenwalker64 |
1,389,248 | CheatGPT - The Ultimate Cheat Engine Powered by OpenAI | Are you tired of studying for hours and still not being able to ace your tests? Look no further,... | 0 | 2023-03-05T19:59:51 | https://dev.to/epavanello/cheatgpt-the-ultimate-cheat-engine-powered-by-openai-1l85 | chatgpt, ai, svelte, saas | Are you tired of studying for hours and still not being able to ace your tests? Look no further, because **CheatGPT** is here to save the day!
**CheatGPT** is a SaaS project that uses the power of AI to help you cheat your way to success. Simply submit your question, whether it's in text or image form, and CheatGPT will provide you with the perfect answer. No more wasting time studying or worrying about failing, CheatGPT has got you covered.
But wait, you may be thinking, isn't cheating unethical? We hear you, and that's why CheatGPT is all in good fun. We believe that if the technology exists, why not have a little fun with it? Plus, think of all the time you'll save!
The project is being developed using the latest technologies, including [SvelteKit](https://kit.svelte.dev/), [Supabase](https://supabase.com/), [Vercel](http://vercel.com/) and [OpenAI](https://openai.com/). The source code is available on [GitHub](https://github.com/epavanello/cheatgpt.app) for anyone to contribute to and improve. We welcome all developers to join us in creating the ultimate cheat engine powered by AI.
If you're interested in staying up to date on the development of **CheatGPT**, make sure to subscribe to our newsletter. You'll be the first to know when the service is officially available and ready for use.
So what are you waiting for? Join us in creating the ultimate cheat engine and let's have some fun with AI!
⭐ [Github Repo](https://github.com/epavanello/cheatgpt.app)
🐦 [Twitter account](https://twitter.com/e_pavanello)
💻 [Website](http://cheatgpt.app/) | epavanello |
1,389,577 | NodeList vs Array | When we use the same class for multiple HTML elements, we can access all of these nodes... | 0 | 2023-03-06T04:35:47 | https://dev.to/cesar_ramez/nodelist-vs-array-87 | javascript, webdev, programming, frontend | When we use the same class for multiple HTML elements, we can access all of these nodes through the `document.querySelectorAll('.class-name')` method. This gives us a data type called ***NodeList*** which, even though it displays all the elements matching the class passed to `querySelectorAll` like an array, is not the same as the "traditional" array we have always known…
This difference comes down to the fact that although a ***NodeList*** can be iterated over with `forEach`, it is not compatible with all the methods you could use on a regular array. For example, with a ***NodeList*** we could not use methods like `reduce`, `map`, `filter`, `some`, etc.
Whenever we use `querySelectorAll` to iterate over several nodes that share a class, we can convert the ***NodeList*** into a traditional array using the spread operator offered by ES6. In fact, whenever you work with a ***NodeList***, it is advisable to convert it into a traditional array, because the JS engines in browsers, mainly Chrome's V8, are optimized to work with arrays.
Accessing multiple nodes with `querySelectorAll`:
```js
const nodeList = document.querySelectorAll('div')
```
Sample result:
```js
> NodeList(5) [div.container, div.input-group, div.form, div.invalid, div.feedback]
```
Converting a NodeList into a traditional array:
```js
const nodeListAsArray = [...nodeList]
```
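Outside the browser there is no real NodeList to play with, but the conversion can be sketched with an array-like object standing in for one (the `fakeNodeList` values below are invented for illustration):

```javascript
// An array-like object: indexed entries plus a length, much like a NodeList.
const fakeNodeList = { 0: 'container', 1: 'input-group', 2: 'feedback', length: 3 };

// Array.from works on any array-like; a real NodeList is also iterable,
// so in the browser both [...nodeList] and Array.from(nodeList) work.
const asArray = Array.from(fakeNodeList);

// Now every array method is available:
const filtered = asArray.filter(name => name.includes('-'));
console.log(filtered); // [ 'input-group' ]
```

In the browser, `Array.from(document.querySelectorAll('div'))` achieves the same result as the spread version shown above.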
---
If you liked this post or have any comments, let me know on [Twitter](https://twitter.com/ramez_cesar). 😀
| cesar_ramez |
1,389,632 | Difference between git rebase and git pull | git pull and git rebase are both Git commands that are used to integrate changes from one branch into... | 0 | 2023-03-06T05:43:06 | https://dev.to/atultrp/difference-between-git-rebase-and-git-pull-f40 | git, pull, programming, beginners | `git pull` and `git rebase` are both Git commands that are used to integrate changes from one branch into another. However, they work differently and have different effects on your Git repository's history.
`git pull` is used to update your local branch with changes from a remote branch. It combines the `git fetch` command (which downloads the changes from the remote repository) with the `git merge` command (which integrates the changes into your local branch). This means that when you use `git pull`, you are effectively creating a new merge commit in your local branch, which records the fact that you merged the remote changes.
`git rebase`, on the other hand, is used to integrate changes from one branch into another by moving the entire branch to a new base commit. In other words, instead of creating a new merge commit, `git rebase` replays the changes from one branch on top of another branch, creating a linear history without any merge commits.
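The difference is easy to see on a throwaway repository. This sketch (paths, names, and commit messages are all made up for illustration) builds a tiny "remote" and a clone, then pulls with rebase and shows that no merge commit is created:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A "remote" repository with one base commit
git init -q upstream && cd upstream
git config user.email demo@example.com && git config user.name demo
echo a > file.txt && git add file.txt && git commit -qm "base"
cd ..

# A clone that will fall behind the remote
git clone -q upstream local
git -C local config user.email demo@example.com
git -C local config user.name demo

# The remote moves forward...
(cd upstream && echo b >> file.txt && git commit -qam "remote change")

# ...while the clone commits its own work
cd local
echo c > other.txt && git add other.txt && git commit -qm "local change"

# A plain `git pull` would merge and record a merge commit;
# `git pull --rebase` replays "local change" on top of "remote change":
git pull --rebase -q
git log --oneline           # three commits, linear
git log --merges --oneline  # prints nothing: no merge commit exists
```

With a plain `git pull` in the last step, `git log --merges` would instead show one merge commit.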
### To summarize:
`git pull` combines changes from a remote branch with your local branch, creating a merge commit.
`git rebase` integrates changes from one branch into another by replaying the changes on top of another branch, creating a linear history.
So, if you want to keep a clean and linear Git history, git rebase is generally preferred over git pull. However, if you are working on a team and collaborating with others, you may need to use git pull to keep your local branch in sync with the remote branch. | atultrp |
1,389,771 | An alternative to docker for php development? | Hi everyone! I start a opensource project, created for PHP and Web engineers using MacOS systems for... | 0 | 2023-03-06T08:49:00 | https://dev.to/xpf0000/an-alternative-to-docker-for-php-development-3g48 | webdev, opensource, docker, php | Hi everyone!
I started an open-source project, created for PHP and web engineers using macOS systems for development, to provide a simpler and more useful tool for managing the local server environment.
I myself now use this tool for PHP development, as a replacement for MAMP Pro.
I'm not sure if this tool is useful for other people, but it seems that people talk more about Docker.
I'll post the project address below; I hope it helps.
github: [https://github.com/xpf0000/PhpWebStudy](https://github.com/xpf0000/PhpWebStudy)
 | xpf0000 |
1,389,782 | Minesweeper | I created my first full game in python and it's minesweeper! This project was assigned by... | 0 | 2023-03-06T09:13:28 | https://dev.to/jehrl/minesweeper-397b | beginners, python | I created my first full game in python and it's minesweeper!

This project was assigned by Codecademy as the final project in my Learn Python course.
I built it using only the skills learned in the course.
It takes player input, and every round it prints the current visual grid for the player.
Github:
[https://github.com/jehrl/minesweeper](https://github.com/jehrl/minesweeper)
| jehrl |
1,389,829 | Constraints | Constraint clauses specify constraints that new or updated rows must satisfy for an INSERT or UPDATE... | 0 | 2023-03-06T09:57:58 | https://dev.to/llxq2023/constraints-1ik | opengauss | Constraint clauses specify constraints that new or updated rows must satisfy for an INSERT or UPDATE operation to succeed. If there is any data behavior that violates the constraints, the behavior is terminated by the constraints.
Constraints can be specified when a table is created (by executing the CREATE TABLE statement) or after a table is created (by executing the ALTER TABLE statement).
Constraints can be column-level or table-level. Column-level constraints apply only to columns, and table-level constraints apply to the entire table.
The common constraints of openGauss are as follows:
· NOT NULL: specifies that a column cannot store **NULL** values.
· UNIQUE: ensures that the value of a column is unique.
· PRIMARY KEY: functions as the combination of NOT NULL and UNIQUE and ensures that a column (or the combination of two or more columns) has a unique identifier to help quickly locate a specific record in a table.
· FOREIGN KEY: ensures the referential integrity for data in one table to match values in another table.
· CHECK: ensures that values in a column meet specified conditions.
**NOT NULL**
If no constraint is specified during table creation, the default value is **NULL**, indicating that **NULL** values can be inserted into columns. If you do not want a column to be set to **NULL**, you need to define the **NOT NULL** constraint on the column to specify that **NULL** values are not allowed in the column. When you insert data, if the column contains **NULL**, an error is reported and the data fails to be inserted.
**NULL** does not mean that there is no data. It indicates unknown data.
For example, create the **staff** table that contains five columns. The **NAME** and **ID** columns cannot be set to **NULL**.

Insert data into the **staff** table. When a **NULL** value is inserted into the **ID** column, the database returns an error.

**UNIQUE**
The UNIQUE constraint specifies that a group of one or more columns of a table can contain only unique values.
For the UNIQUE constraint, **NULL** is not considered equal.
For example, create the **staff1** table that contains five columns, where **AGE** is set to **UNIQUE**. Therefore, you cannot add two records with the same age.

Insert data into the **staff1** table. When two identical data records are inserted into the **AGE** column, the database returns an error.

**PRIMARY KEY**
PRIMARY KEY is the unique identifier of each record in a data table. It specifies that a column or multiple columns in a table can contain only unique (non-duplicate) and non-**NULL** values.
PRIMARY KEY is the combination of NOT NULL and UNIQUE. Only one primary key can be specified for a table.
For example, create the **staff2** table where **ID** indicates the primary key.

**FOREIGN KEY**
The FOREIGN KEY constraint specifies that the value of a column (or a group of columns) must match the value in a row of another table. Generally, the FOREIGN KEY constraint in one table points to the UNIQUE KEY constraint in another table. That is, the referential integrity between two related tables is maintained.
For example, create the **staff3** table that contains five columns.

Create the **DEPARTMENT** table and add three columns. The **EMP_ID** column is the foreign key, referencing the **ID** column of the **staff3** table.

**CHECK**
The CHECK constraint specifies an expression producing a Boolean result where the INSERT or UPDATE operation of new or updated rows can succeed only when the expression result is **TRUE** or **UNKNOWN**; otherwise, an error is thrown and the database is not altered.
A CHECK constraint specified as a column constraint should reference only that column's value, while an expression in a table constraint can reference multiple columns. **<> NULL** and **!= NULL** are invalid in an expression; change them to **IS NOT NULL**.
For example, create the **staff4** table and add a CHECK constraint to the **SALARY** column to ensure that the inserted value is greater than **0**.

Insert data into the **staff4** table. When the inserted value of the **SALARY** column is not greater than 0, the database reports an error.
 | llxq2023 |
1,394,354 | Best online Dsa Course | Having trouble finding the best Course for dsa? Data structure courses will teach you the material,... | 0 | 2023-03-09T12:15:04 | https://dev.to/hrishii07/best-online-dsa-course-38cc | webdev, javascript, beginners | Having trouble finding the best Course for dsa? Data structure courses will teach you the material, push you to apply what you've learned, and encourage you to overcome challenges.
In addition to the best DSA Course Online, Skillslash also offers job placement assistance. Skillslash has made a name for itself as a leading provider of training in data structures, full-stack development, and business analytics, in addition to being the premier data science institute in the country.
Let's begin with the importance of studying [data structures and algorithms](https://skillslash.com/best-dsa-course). Then we'll know why it's a good idea to take a best online Dsa course from Skillslash.
Every business that focuses on making a product needs employees who can think critically and come up with workable solutions. This is essential because the challenges faced daily by such firms tend to be fairly huge and complex, and employers seek out workers who can do the required duties with the least amount of time and input from the company.
Knowledge of Data Structures and Algorithms is indicative of one's ability to find effective solutions to challenging problems. As a result, these companies are constantly on the lookout for qualified Data structure and algorithms professionals to fill unfilled jobs within their organizations. The first round of an organization's online testing process will most likely consist of questions about different types of data structures. When selecting either best dsa course online or best dsa course offline, it is important to make sure that the course meets your specific demands.
So, knowing you've taken the best online dsa course will set you up for success.
Skillslash data structures and algorithms course Highlights:
• Instructors from MAANG: Mastering theoretical ideas is made easier with the assistance of industry specialists from MAANG enterprises.
• Customize your course: Create a plan of study that addresses your specific goals and interests with the guidance of a counselor
• Live Interactive Sessions: Instead of studying with prerecorded videos, you will study with live sessions and get your questions answered in real-time.
• Practical Experience: Gain experience with cutting-edge AI firms, get System Design Certification, and improve your job prospects.
The fee of our system design course comes to INR 65,000 (plus GST).
For what it's worth, here's why Skillslash provides the best course for data structures and algorithms:
• Live interactive classes
• Lifetime video recordings
• Industry experts as your mentors
• 100% Job Guarantee commitment
By questioning established methods of education, we hope to ensure your future success. We are widely considered to have the best data structures and algorithms course in Python.
Furthermore, we also offer a c++ DSA Course and a java DSA Course for professionals to facilitate a smooth transition. With Skillslash, you can be confident that your future in this industry is bright if you decide to make the switch. Feel free to contact our support staff if you have any questions about our data structures course.
| hrishii07 |
1,390,613 | Catch up on JavaScript 2: Easepick, magic-regexp, Token CSS, and more | Easepick Adding a date picker to an application is always a pain. Recently I found this... | 21,994 | 2023-03-06T14:26:59 | https://marcin.codes/posts/catch-up-on-javascript-2:-easepick-magic-regexp-token-css-and-more/ | webdev, javascript, beginners, react | 
## [Easepick](https://github.com/easepick/easepick/)
Adding a date picker to an application is always a pain. Recently I found this library and its React wrapper — it's amazing: easy to use and customize, and it has an excellent design out of the box.

## [React-Transition-State](https://github.com/szhsin/react-transition-state)
react-transition-group is a well-known library for adding enter/exit transitions to components. Today's library aims to give you similar functionality. It's tiny and gives you granular control over the transition state.

## [🦄 magic-regexp](https://github.com/danielroe/magic-regexp)
Create regexp in a more functional style. magic-regexp supports grouping and has excellent typing that lets you preview the final expression.

## [Token CSS](https://github.com/tokencss/tokencss)
Generate tokens based on the config you define and then use them in your CSS. These tokens will then be picked up by PostCSS and replaced with the defined values. Stay in sync with your project's defined design.

## [Bling](https://github.com/TanStack/bling/)
Create RPC functions from inside your client-side code.

Want to read more about RPC? I made a short [post about RPC and what the fuss is about it](https://dev.to/marcin_codes/what-the-fuss-is-about-rpc-1j5m-temp-slug-1924101). Covered [Solid Start](https://start.solidjs.com/getting-started/what-is-solidstart)’s RPC, [Qwik](https://qwik.builder.io/)’s version, and what they are doing.
Please react to this post or leave a comment and let me know what you think. Check [my Twitter](https://twitter.com/marcin_codes) where I post these libraries daily.
## Stay up to date with my writing
You can stay up to date by following me in one of these ways:
- Follow me on [Medium](https://medium.com/@marcin-codes)
- Follow me on [Twitter](https://twitter.com/marcin_codes)
- Follow me on [Dev.to](https://dev.to/marcin_codes)
- Follow me on [Mastodon](https://mas.to/@marcin_codes)
- Add [my blog](https://marcin.page/) to the [RSS feed](https://marcin.page/feed.xml) reader | marcin_codes |
1,390,622 | How to handle custom error handling in express js | what is error handler an error handler is a function that is called when an error occurs... | 0 | 2023-03-06T14:50:24 | https://dev.to/krishnacyber/how-to-handle-custom-error-handling-in-express-js-1gpo | javascript, express, programming, tutorial | ## what is error handler
an error handler is a function that is called when an error occurs in an application. In the context of web development with Node.js and Express, an error handler is a middleware function that is used to handle errors that occur during the request-response cycle.
## purpose of error handler
The purpose of an error handler is to ensure that errors are properly handled and that the user is presented with an appropriate error message or response. An error handler can also be used to log error details, debug the application, and prevent the application from crashing.
## error handling in express
In Express, error handlers are defined as middleware functions that have an extra parameter for the err object. If an error occurs in any middleware function or route handler, Express will automatically call the error handler with the err object as the first parameter. The error handler can then inspect the err object and take appropriate action based on the error type and context.
Haha 😂😂 still a bit confused? Ok 👌 Don't worry, I'm here to help you 👉.
Ok, let's dive into the example and the procedure.
## How to write Custom Error Handler Middleware in Express.js using JavaScript🚀
Boom 💥
1. ### Create Custom ErrorHandler middleware

Don't worry here's a code
```js
// ErrorHandler.js
const ErrorHandler = (err, req, res, next) => {
console.log("Middleware Error Handling");
const errStatus = err.statusCode || 500;
const errMsg = err.message || 'Something went wrong';
res.status(errStatus).json({
success: false,
status: errStatus,
message: errMsg,
stack: process.env.NODE_ENV === 'development' ? err.stack : {}
})
}
module.exports = ErrorHandler;
```
NOTE: `err.stack` shows exactly which file the error comes from and where. This is only needed in development mode to debug your code; it becomes dangerous when your project structure is exposed in production. Always set `NODE_ENV` to `development` during development.
2. ### Attach Custom Error Handler as The Last Middleware to Use
Note that the error handling middleware function should always be the last middleware function registered in the app, so that it can catch any errors thrown by other middleware functions or route handlers.
```js
const express = require('express');
// initialize the server instance
const app = express();
const ErrorHandler = require('./middlewares/ErrorHandler');
// routes and other middlewares go here
// error handler registered as the last middleware
app.use(ErrorHandler);
```
3. ### How to Call the ErrorHandler Then 😕
To call an error handler in a custom Express application, you can define a middleware function that passes an error object to the next middleware function using the next() function.
Here's an example:
```js
// Define an error handling middleware function
function errorHandler(err, req, res, next) {
res.status(err.statusCode || 500);
res.json({
error: {
message: err.message,
},
});
}
```
Use the middlewares as follows:
```js
// Register middleware functions in the Express app
const express = require("express");
const app = express();
app.use(myMiddleware);
app.use(errorHandler);
```
In this example, the `myMiddleware` function (not shown above) raises a custom error by creating a new instance of a `MyCustomError` class and passing it to the next middleware function using the `next()` function.
The errorHandler function is then called because the error was passed to the next middleware function with the next() function. The errorHandler function inspects the error object, sets the HTTP status code and returns a JSON response with an error message.
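The `myMiddleware` function referenced above isn't defined in the original snippets; a minimal sketch (the class and middleware names here are assumptions, not Express APIs) could look like this:

```javascript
// Hypothetical custom error type carrying an HTTP status code.
class MyCustomError extends Error {
  constructor(message, statusCode = 400) {
    super(message);
    this.name = "MyCustomError";
    this.statusCode = statusCode;
  }
}

// An Express middleware is just a (req, res, next) function,
// so no framework import is needed to define it.
function myMiddleware(req, res, next) {
  // Passing the error to next() hands it to the error-handling
  // middleware registered last in the app.
  next(new MyCustomError("Invalid request payload", 422));
}
```

Because a middleware is just a plain function, you can exercise it without starting a server by calling it with stub `req`/`res` objects and a spy `next`.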
## Why should I use a custom error handler in a web application?
Improved error messages: By defining your own error classes and error messages, you can provide more specific and informative error messages to your users, which can help them understand and resolve errors more easily.
Consistent error handling: By using a custom error handler, you can ensure that errors are handled consistently across all parts of your application. This can help to avoid confusion and improve the overall user experience.
Error logging: By defining a custom error handler, you can log error details to a file or a logging service, which can help you to identify and fix errors more quickly.
Prevent application crashes: By properly handling errors in your application, you can prevent your application from crashing or behaving unpredictably. This can help to ensure that your application remains stable and reliable, even in the face of unexpected errors.
| krishnacyber |
1,390,683 | Meme Monday 😝 | Meme Monday! Today's cover image comes from last week's thread. DEV is an inclusive space! Humor in... | 0 | 2023-03-06T15:35:23 | https://dev.to/ben/meme-monday-193h | watercooler, discuss, jokes | **Meme Monday!**
Today's cover image comes from [last week's thread](https://dev.to/ben/meme-monday-59gk).
DEV is an inclusive space! Humor in poor taste will be downvoted by mods. | ben |
1,390,840 | My dev experience at The Collab Lab | As a self-taught developer, there are 2 things that I always missed during my learning journey:... | 0 | 2023-03-06T17:31:30 | https://www.cristina-padilla.com/collablab.html | webdev, beginners, programming, productivity | As a self-taught developer, there are 2 things that I always missed during my learning journey: building projects in a team and getting feedback. I thought that both things could help me improve my coding skills much faster so I started to look for any kind of experience or community that could provide me that. And one day I luckily came across [The Collab Lab](https://the-collab-lab.codes/).
The Collab Lab is a place to collaborate in a group project remotely with other early-career developers. It was a great opportunity for me as I did not only have the chance to gain practical experience on a real project but I could also get immediate feedback and support from professional developers.

I was extremely excited when I got offered a spot! It would be my first and closest experience to a real developer job: I would be working for 8 weeks on a project within a team of 3 other developers and the support and guidance of 3 mentors.

## What does a week look like at the Collab Lab?
Our main goal was to **build a smart shopping list app**. The tasks were split into **13 issues**. We started solving the functionality issues and finished with the design/styling part.
Every week I was paired with a colleague to work on an issue. This was a great experience as I had the chance to do my first **pair programming sessions** with 3 different collabies for 8 weeks.
After solving the assigned issue, we did not only have to submit a PR (pull request) but also do a **code review** of our collabies task and receive a code review from them and our mentors. At the end of the week we had an online sync all together to **demo** our tasks, go through a **learning module** and do a **retrospective**.

## What did I learn?
In terms of coding skills, I did not only learn a lot about **ReactJS** (I had previous experience but it helped me refresh some concepts again and learn new ones), but also about other technologies I had no experience with, like **Firebase** (collections, [local storage](https://www.cristina-padilla.com/localstorage.html), etc.), **Tailwind CSS and Git**.

The Collab Lab experience especially helped me improve my Git skills. As I had previously worked alone on my own projects, I only knew the [basic git commands](https://www.cristina-padilla.com/gitcommands.html) needed to push my changes to GitHub. This time I had to use all the necessary git commands for working on a team project: creating a new branch, pulling, pushing, merging, etc. And because mistakes happen along the way and are part of the learning journey, I faced situations where I had to learn commands I never thought I would use, like [git rebase](https://www.cristina-padilla.com/gitrebase.html) and [git rebase interactive](https://www.cristina-padilla.com/gitrebaseinteractive.html).

Regarding non-coding skills, The Collab Lab was an amazing experience that also helped me grow my **communication skills**. On the one hand, writing was very important (submit well explained [PRs](https://www.cristina-padilla.com/codereview.html), asking the right questions or suggesting code ideas on the [code reviews](https://www.cristina-padilla.com/codereview.html), writing understandable commit descriptions, etc.) On the other hand, oral communication was equally important (sharing ideas during [pair programming](https://www.cristina-padilla.com/pairprogramming.html) sessions, doing weekly demos explaining your code and problem-solving approach, etc.)

What I found great of the whole experience was **the combination of technical and soft skills**. In both situations I often felt challenged and had to [step out of my comfort zone](https://www.cristina-padilla.com/comfortzone.html).
There were **tough tasks, frustrating and self-doubting moments** where my collabies and I felt we wouldn't manage to find a solution, but this is always part of the learning path. Instead of panicking, taking breaks and not being afraid to ask for help were very useful for beating [imposter syndrome](https://www.cristina-padilla.com/imposter.html). Our mentors were a big support in this sense.

What would The Collab Lab be without its amazing mentors? They were not only our 8 weeks support, but they also shared their expertise with very insightful **learning modules** (about technical writing, [accessibility](https://www.cristina-padilla.com/accessibility.html), pair programming, code reviews, devs communication, etc.) and **retrospectives**.

Retrospectives were a very important team exercise, which consisted of reviewing performance of the previous weeks. This allowed us not only to share feedback on how to improve the collaboration workflow but also to praise the whole team. For example, in the beginning pairing sessions worked quite well although there was still room for improvement (deliver a task faster, ask questions earlier, write better PRs, solve merging conflicts, etc.).
Nevertheless, every feedback was a new opportunity to learn. For example, there was one occasion during the last two weeks, where everybody was a bit tired and sick, so I decided to jump in and prepare a Wireframe proposing some design ideas to make the whole process a bit faster and easier. I also split the design issue into smaller and more digestible issues so that we could split the tasks between us. This helped finish the project faster.
After 8 weeks, my team and I built the amazing [Shoppr. app](https://tcl-49-smart-shopping-list.web.app/) and wrote a recap about the work process and our experiences. Check it out [here](https://dev.to/the-collab-lab/the-collab-lab-tcl-49-recap-429d).

## Next steps: The Career Lab
The project was over so what's next? The Collab Lab also offers an optional 2 week program that helps rock the interview process. I decided to participate and had the opportunity to:
- get great tips to update [my Linkedin profile](https://www.linkedin.com/in/cristina-padilla-plasencia/) and get it reviewed by a mentor.
- attend a Q&A session with a technical recruiter.
- watch two mentors practising a team fit and a technical interview.
- work on a take-home-assignment and present it at a technical mock interview.
- practise a job fit mock interview.
After this amazing experience and as a self-taught developer, I can only recommend anyone to participate in The Collab Lab or any similar group project. Transitioning into a new career is tough and when you do it alone, it can be very scary. Learning to code can be a more gratifying, fun and less frustrating experience when you have others alongside you. The Collab Lab can also help you find out what you are good at or you like the most (frontend development, accessibility, technical writing, UX-UI design, project management, etc.) So what are you waiting for?
| crispitipina |
1,390,908 | Using Chakra in applications built with React | When we create an application, whether mobile or web, we think about what needs to go in, which... | 0 | 2023-03-06T18:22:20 | https://dev.to/altencirsilvajr/o-uso-de-chakra-em-aplicacoes-feita-com-react-1d3b | react, beginners, programming | When we create an application, whether mobile or web, we think about what to include, which language to use, API consumption, styling, among other things. Unfortunately, many of the visual options we would like to use can end up burdening our system with excessive requests and making it slow. It would be great if there were a library aimed at building user interfaces where fluidity is paramount. With that in mind, a team of developers created Chakra UI. Let's see what it is and how it works.
`Chakra UI` is a component-based library. It is made up of basic building blocks that can help you build the front end of your web application.
Have you ever spent more time than necessary worrying about how to center a `<div>` element or trying to place a button at an exact spot on the screen, making you "forget" about the logic and construction (back-end) side of the system? It happens to everyone, which is why Chakra UI is such an excellent tool: besides being reusable, it has modules ready for immediate use. I believe you've already grasped the idea of Chakra in theory. We'll still see a real example of this library in use, but first, let's install it:
**Installing Chakra UI**
Inside a project directory, let's install it via npm:
```
npm install @chakra-ui/react @emotion/react@^11 @emotion/styled@^11 framer-motion@^4
```
(You may have noticed that the installation also pulls in Emotion and Framer Motion: they are responsible for Chakra's style management and animations, and they work together.)
There is also an installation option for yarn:
```
yarn add @chakra-ui/react @emotion/react@^11 @emotion/styled@^11 framer-motion@^4
```
When bootstrapping Chakra, we need to add the `ChakraProvider`:
```
import React from "react"
// 1. import the `ChakraProvider` component
import { ChakraProvider } from "@chakra-ui/react"
function App({ Component }) {
// 2. wrap the app's return value with the provider
return (
<ChakraProvider>
<Component />
</ChakraProvider>
)
}
```
Below we can see an example of using buttons with Chakra:
```
import { Button } from "@chakra-ui/react";
function App() {
return (
<div>
<Button colorScheme="blue">Click here</Button>
</div>
);
}
export default App;
```
In the previous example, the button can be customized and is colored blue.
Chakra UI has many other components that can be used in the same way. Some examples include:
**Chakra UI components:**
- Box: a flexible and responsive box component
- Text: a text component with various styles
- Input: a data entry component
- Select: an option selection component
- Checkbox: a checkbox selection component
- Radio: a radio button selection component
- Slider: a slider control component
All of the components mentioned can be used to create a pleasantly styled environment. For example, for the CSS `margin-top` property you would write `<Text mt={8}>`. This sets a top margin on the selected element (in the default theme, `8` maps to `2rem`, i.e. 32px). Its styling approach is reminiscent of Tailwind CSS; if you are familiar with that library, it will be a great help in understanding Chakra.
**Responsiveness with Chakra UI:**
When we work with multiple interfaces, for different screens and different targets, whether mobile or web, we always run into the annoying problem of a lack of responsiveness across environments. What works well on a phone screen can go VERY wrong on a computer screen. Fortunately, Chakra can handle this. For example:
```
import { Box } from "@chakra-ui/react";
function App() {
return (
<Box display={{ base: "none", md: "block" }}>
<p>This paragraph is only displayed on medium or larger screens</p>
</Box>
);
}
```
As stated, the code above only displays the text on certain screen types. Of course, here it was left to the framework's built-in breakpoints, but you can also set the size directly:
```
import { Box } from "@chakra-ui/react";
function MyComponent() {
return (
<Box
bg="blue.500"
color="white"
p={4}
width={{ base: "100%", md: "50%", lg: "25%" }}
>
This component has a width of 100% on small screens, 50% on medium screens, and 25% on large screens.
</Box>
);
}
```
And so that this isn't just a series of code examples with no real picture of the library in use, how about seeing a Chakra example as an image?
```
import * as React from "react";
import { Box, Center, Image, Flex, Badge, Text } from "@chakra-ui/react";
import { MdStar } from "react-icons/md";
export default function Example() {
return (
<Center h="100vh">
<Box p="5" maxW="320px" borderWidth="1px">
<Image borderRadius="md" src="https://bit.ly/2k1H1t6" />
<Flex align="baseline" mt={2}>
<Badge colorScheme="pink">Plus</Badge>
<Text
ml={2}
textTransform="uppercase"
fontSize="sm"
fontWeight="bold"
color="pink.800"
>
Verified • Cape Town
</Text>
</Flex>
<Text mt={2} fontSize="xl" fontWeight="semibold" lineHeight="short">
Modern, Chic Penthouse with Mountain, City & Sea Views
</Text>
<Text mt={2}>$119/night</Text>
<Flex mt={2} align="center">
<Box as={MdStar} color="orange.400" />
<Text ml={1} fontSize="sm">
<b>4.84</b> (190)
</Text>
</Flex>
</Box>
</Center>
);
}
```
The code above produces the image below:

Chakra UI was created to take the development process to another level. It is very flexible, the documentation is great, and there are many pre-built templates to help you speed up the process of creating your interface.
Thanks for reading this far!!!!
| altencirsilvajr |
1,391,205 | Conectadas: Mercado Livre and Reprograma offer a 100% free online technology course for young women | Conectadas is a program created in partnership between Mercado Livre and Reprograma with the goal... | 0 | 2023-03-07T01:09:43 | https://guiadeti.com.br/conectadas-mercado-livre-reprograma-curso-gratuito/ | bolsas, cursogratuito, inclusão, treinamento | ---
title: Conectadas: Mercado Livre and Reprograma offer a 100% free online technology course for young women
published: true
date: 2023-03-06 23:55:30 UTC
tags: Bolsas,CursoGratuito,Inclusão,Treinamento
canonical_url: https://guiadeti.com.br/conectadas-mercado-livre-reprograma-curso-gratuito/
---

Conectadas is a program created in partnership between Mercado Livre and Reprograma with the goal of providing online courses exclusively for teenage girls.
This program is a great opportunity for young women to gain professional skills and prepare for the job market, with classes taught by qualified and experienced professionals.
## Contents
<nav><ul>
<li><a href="#o-programa-conectadas">The Conectadas Program</a></li>
<li><a href="#o-reprograma">Reprograma</a></li>
<li><a href="#o-mercado-livre">Mercado Livre</a></li>
<li><a href="#requisitos">Requirements</a></li>
<li><a href="#inscricoes">Applications</a></li>
<li><a href="#compartilhe">Share!</a></li>
</ul></nav>
## The Conectadas Program
Conectadas is a Mercado Livre program in partnership with {reprograma} that offers free online [technology](https://guiadeti.com.br/guia-tags/cursos-de-tecnologia/) courses exclusively for teenage girls. The goal is to encourage female participation in the [technology](https://guiadeti.com.br/guia-tags/cursos-de-tecnologia/) market, a field that has historically been dominated by men.
The course is fully online, allowing participants the flexibility to study around their own schedules. Participants will have the opportunity to learn valuable technical skills and build a network of contacts in the technology sector.
The program offers a 54-hour immersion in technology and consists of 16 online sessions covering a variety of topics, including digital transformation, economy 4.0, sustainability, user experience design, data analysis, project and digital business development, marketing, digital communication, and an introduction to [HTML](https://guiadeti.com.br/guia-tags/cursos-de-html/).
During the program, participants receive mentoring from women already working in technology, from Mercado Livre and Reprograma, enabling valuable connections. In 2023, the Conectadas program plans to reach 1,300 girls in six countries; in Brazil, there will be 150 spots in the first semester alone.
Among this year's new features, some scholarships will be offered for the Back-end [Python](https://guiadeti.com.br/guia-tags/cursos-de-python/) training program developed by Reprograma.
## Reprograma
Reprograma is an amazing initiative that seeks to empower women in technology, offering courses and mentoring to help them develop skills and enter the job market.
With a highly qualified and committed team, Reprograma has already helped many women reach prominent positions at technology companies across the country.
If you want to learn to code, develop your creative potential, and be part of a community of inspiring women, Reprograma is the right place for you! Come be part of this transformation and prepare for a successful career in technology.
## Mercado Livre
Mercado Livre is one of the largest e-commerce companies in Latin America, present in several countries and offering its users a safe, reliable platform to buy and sell products quickly and easily.
The company is also committed to supporting inclusion and diversity initiatives, such as Conectadas in partnership with Reprograma, which offers free online technology courses to young women.
With a forward-looking vision focused on technological development, Mercado Livre seeks to contribute to a world that is more connected and accessible to everyone.
## Requirements
The program prioritizes girls from public schools or on scholarships at private schools, and aims for at least 50% participation of Black, mixed-race, and Indigenous young women. The program also encourages the participation of young trans women and travestis.
- Be a person who identifies with the female gender, including cisgender women, trans women, and travestis;
- Be between 16 and 18 years old;
- Be fluent in Portuguese;
- Currently reside in Brazil.
## Applications
[Apply now](https://reprograma.typeform.com/to/hGYntgRN) to take part in this amazing program!
## Share!
Did you enjoy this content about the Conectadas program? Share it with all your friends!
The post [Conectadas: Mercado Livre and Reprograma offer a free, 100% online technology course for young women](https://guiadeti.com.br/conectadas-mercado-livre-reprograma-curso-gratuito/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,391,402 | python: use multiple patch decorators to mock functions | I wrote how to mock in the previous article. This time, I mock multiple functions in the test to see... | 0 | 2023-03-07T05:43:06 | https://dev.to/kenakamu/python-use-multiple-patch-decorators-to-mock-functions-15i3 | python, testing, mock | I wrote how to mock in the [previous article](https://dev.to/kenakamu/python-unit-test-with-mock-functions-from-different-modules-cla). This time, I mock multiple functions in the test to see how I can handle them.
## Structure and code
This is almost the same as before; I just added one more function in util.py.
```shell
src/
├── my.py
├── my_modules/
│ ├── __init__.py
│ └── util.py
└── tests/
├── __init__.py
├── test_my.py
└── test_unit.py
```
my.py
```python
from my_modules.util import util, get_data


def main():
    data = get_data()
    return util(data)


if __name__ == '__main__':
    main()
```
util.py
```python
from datetime import datetime


def util(input: str) -> str:
    input = add_time(input)
    return f"util: {input}"


def add_time(input: str) -> str:
    return f"{datetime.now()}: {input}"


def get_data() -> str:
    return "Return some data"
```
## Add a unit test for main function
To test the ``main`` function in ``my.py``, I need to mock both the ``util`` and ``get_data`` functions. Let's do it.
First, I added another ``patch`` decorator and specified its ``return_value``. The test body itself doesn't change.
```python
from unittest.mock import Mock, patch

from my import main


@patch("my.util", Mock(return_value="dummy"))
@patch("my.get_data", Mock(return_value="some data"))
def test_main():
    result = main()
    assert result == 'dummy'
```
Let's receive the mock objects as the arguments.
```python
@patch("my.util")
@patch("my.get_data")
def test_main_util_called_with_expected_parameter(get_data_mock, util_mock):
    get_data_mock.return_value = 'some data'
    util_mock.return_value = 'dummy'

    result = main()

    assert result == 'dummy'
    util_mock.assert_any_call('some data')
```
The interesting part is the order of the arguments. As you can see, the mocks are passed in bottom-up order of the ``patch`` decorators: the decorator closest to the function is applied first, so its mock becomes the first argument. I first thought I would receive the mocks in the same order as the decorators, but I was wrong.
Next, let's try the ``with`` statement.
```python
def test_main_util_called_with_expected_parameter_with():
    with patch("my.util") as util_mock:
        util_mock.return_value = 'dummy'
        with patch("my.get_data") as get_data_mock:
            get_data_mock.return_value = 'some data'

            result = main()

            assert result == 'dummy'
            util_mock.assert_any_call('some data')
```
I am not 100% sure whether this is the best way to implement it, but it works as expected.
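Instead of nesting, multiple patches can also be combined in a single ``with`` statement. Here is a self-contained sketch: it patches the standard library's ``json`` module rather than the article's ``my`` module, so it runs on its own.

```python
from unittest.mock import patch

import json


def load_greeting():
    return json.dumps({"msg": "hi"})


def test_single_with():
    # Python lets you open several patches in one `with` statement,
    # which avoids one level of indentation per patch.
    with patch("json.dumps") as dumps_mock, patch("json.loads") as loads_mock:
        dumps_mock.return_value = "dummy"
        assert load_greeting() == "dummy"
        dumps_mock.assert_called_once_with({"msg": "hi"})
        assert not loads_mock.called  # json.loads was never used


test_single_with()
```

Either style behaves the same; the single ``with`` form is just flatter to read.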
I just added one more example. This time, I specify the Mock object in the first decorator only. In this case, I can receive just one mock object as an argument.
```python
@patch("my.util", Mock(return_value="dummy"))
@patch("my.get_data")
def test_main_get_data_called(get_data_mock):
    get_data_mock.return_value = 'some data'

    result = main()

    assert result == 'dummy'
    assert get_data_mock.called
```
Here I check whether the ``get_data`` function was called.
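As a side note, ``Mock`` also offers stricter assertion helpers than the plain ``called`` flag. Another self-contained sketch, again patching the standard library's ``json`` module instead of the article's ``my`` module:

```python
from unittest.mock import patch

import json


def serialize(payload):
    return json.dumps(payload, sort_keys=True)


@patch("json.dumps")
def test_serialize_assertions(dumps_mock):
    dumps_mock.return_value = "{}"
    serialize({"b": 2, "a": 1})
    # `.called` only says "called at least once"; these are stricter:
    dumps_mock.assert_called_once()
    dumps_mock.assert_called_once_with({"b": 2, "a": 1}, sort_keys=True)


test_serialize_assertions()
```

The stricter helpers fail loudly when the function was called twice or with unexpected arguments, which usually makes test failures easier to diagnose.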
## Summary
Now I understand the order in which mock objects are passed as arguments. It's a bit confusing at first, but once you understand it, it's very useful.
| kenakamu |
1,391,467 | Carbonmade vs Squarespace vs Authory | Carbonmade insists that "templates are boring.” Squarespace is known for its gorgeous,... | 0 | 2023-03-07T09:07:06 | https://authory.com/blog/carbonmade-vs-squarespace-vs-authory/ | portfolio | ---
title: Carbonmade vs Squarespace vs Authory
published: true
date: 2023-02-08 20:12:36 UTC
tags: Portfolio
canonical_url: https://authory.com/blog/carbonmade-vs-squarespace-vs-authory/
---

Carbonmade insists that "templates are boring.”
Squarespace is known for its gorgeous, attention-gripping templates.
Authory imports and backs up all your content automatically, no matter where it has been published online.
So, which portfolio website builder do you pick among the three above?
When looking for a portfolio builder, I, too, compared Carbonmade vs Squarespace vs Authory. It took me a while, but I finally managed to distill these tools down to their primary offerings and make a decision based strictly on my requirements.
I'm quite proud to say that I'm very happy with my final choice. In light of this small success (not too small considering the tool I chose gave me a portfolio that got me well-paying clients in just three weeks), I wanted to share my own experience and impression while doing the "Carbonmade vs Squarespace vs Authory" dance. Perhaps it'll help you choose and create that perfect portfolio as well.
📖 _**What you’ll get out of this article**:_
• _The primary features I wanted in my perfect portfolio builder_
• _Exploration of each tool in some detail_
• _Why I chose one tool over the other two_
## What I wanted in my perfect online portfolio builder
When choosing a professional portfolio builder, the following features were non-negotiable for me:
- User-friendly with an intuitive UI that let me set up a professional portfolio within minutes. No coding skills required, ever.
- Allows uploading and showcasing of content in multiple formats — text, audio, and video.
- Customizable enough to achieve the aesthetic appearance I wanted without too much work. I'm a freelance writer; I don't need award-winning designs on my banner, but I do want prospective clients to think my portfolio looks sharp, minimal, and navigable.
- Some form of analytics that offers me some insight into how my published content is performing on the internet in terms of readership, traffic, engagement, etc.
- Affordable for a newbie freelancer like me (I'd been writing full-time until October 2022). I couldn't afford anything more than $10 a month.
## Carbonmade vs Squarespace vs Authory: A Comparison
## Carbonmade

_Carbonmade_
As I mentioned before, Carbonmade doesn't really do templates. Instead, they offer customizable layouts in which every element is drag-and-drop. You can customize everything — portfolio grid, navigation, colors, fonts — all with just a few clicks. You also get unlimited uploads of all file types with no file size limit.
Uniquely, starting a free trial does not require a credit card. You only pay when you go live with your portfolio.
### Primary features of Carbonmade
- Completely customizable layout blocks with no pre-set designs. Unlimited design possibilities based on what you can imagine.
- Unlimited galleries, be they gallery grids, gallery sliders, etc. Galleries support all image formats, videos, and PDFs.
- All galleries are optimized for mobile screens.
- Animators & videographers can crop and/or loop video within the tool itself.
- All videos get full HD support with high-speed loading times.
- Easy access to dozens of typefaces.
- Custom domain offered in all plans.
- Analytics data on how many people have visited your portfolio or which of your projects is most popular.
### Carbonmade's pricing plans
**Beginner Plan**: $9/month
**Pro Plan**: $12/month
**Unlimited Plan**: $22/month
### Portfolios created with Carbonmade

_Aran Quinn's illustration & animation portfolio_
[Aran Quinn](https://aranquinn.com/) is an award-winning Irish Animation Director based in New York.

_Allen Laseter's design portfolio_
[Allen Laseter](https://allenlaseter.carbonmade.com/) is an animator, designer, and director based out of Tennessee.
## Squarespace

_Squarespace_
A fixture on most "best website builder" lists, Squarespace is the king of templates. Like all other niches, it also offers templates for creating your own portfolio.
You also get almost every feature necessary for site creators — search engine optimization, marketing tools, eCommerce capabilities, a free custom domain on all yearly plans, and the like.
* * *
🖱️ Authory is, amongst other things, a portfolio builder & content backup service, used by thousands of top professionals worldwide.
[Get started for free now.](https://authory.com/signup/create-account?utm_source=blog&utm_medium=content&utm_campaign=midpage&utm_content=blog_mid_page)
* * *
### Primary features of Squarespace
- A wide range of uniquely designed templates tailored for different professions.
- All templates are customizable and equipped with responsive design. They look great on different devices with different screen sizes/resolutions.
- Templates come with specific sections you often see on websites — Contact, About Me, Blog, Products, Portfolio, and the like. Less work for you when setting up.
- Allows and accepts multiple content types and page elements: text, photos, videos, audio, galleries, products, newsletter sign-ups, appointments, calendars, tour dates, reservations, menus, forms, maps, links, files, social links, buttons, quotes, custom code, charts, etc.
- Allows password protection, whether for a single page or the entire site.
- Provides in-built SEO for your entire portfolio.
- Doesn't require plugins to expand website functionality.
- Create an eCommerce section to sell your work with a few clicks.
- A drag-and-drop editor for easy customization and setup of your portfolio.
- Multiple extensions for integrating third-party tools that manage marketing, finance, sales, products, inventory, shipping, etc.
### Squarespace's pricing plans
**Personal Plan:** $23/month
**Business Plan:** $33/month
**Commerce Basic:** $36/month
**Commerce Advanced:** $65/month
### Portfolios created with Squarespace

_Caylon Hackwith’s design portfolio_
[Caylon Hackwith](https://caylonhackwith.com/) is a photographer, videographer, and art director

_Mike Perry’s art portfolio_
[Mike Perry](https://mike-perry-studio.squarespace.com/) is an Emmy award-winning artist.
## Authory

_Authory_
While other website builders have robust, market-ready features, they didn't give me what Authory did (and what I wanted the most): automatically importing ALL bylined content from ANY digital source.
You enter the source website, give the tool about 48 hours, and come back to find all your bylined pieces in one place. You can import from an unlimited number of sources. The tool won't just import all existing content from each source, but all pieces published in the future.
That means, you never have to copy-paste/manually upload any of your bylined pieces to your portfolio ever again. Of course, there is the option to manually add non-bylined content, if you so require.
Additionally, all imported content is permanently backed up. So, not only does Authory build your portfolio for you by bringing all your content into one place, but it also saves that content forever. And again, the entire process is automatic. All you do is enter the correct website URLs.
### Primary features of Authory
- All bylined content is imported automatically from any digital source (websites, social media, podcast platforms, YouTube, etc.). This applies to both existing and future content published on the specific source.
- All imported content is permanently and automatically backed up. Irrespective of the status of the original link, you'll always have a copy of your content on Authory. There’s no steep learning curve because Authory makes your portfolio **for** you.
- All backups are in the content's original format (text/media) rather than just screenshots.
- All imported content is downloadable as high-res PDFs or as HTML files at any time.
- Effective customization options that don't require much work, but always end up making your portfolio look sharp and future-forward.
- Email notifications alert you whenever one of your pieces is imported by Authory.
- In-built search engine optimization and responsive design implementation for every portfolio.
- Robust Analytics implementation that offers you real numbers on how your content performs across the web and popular social media sites every 30 days.
- Create newsletters with a couple of clicks. The tool will send your newly published content to subscribers automatically.
- Widgets to display your portfolio on other sites, such as your personal website (if you have one).
### Portfolios created with Authory

_David Pogue's thought leadership portfolio_
[David Pogue](https://authory.com/DavidPogue) is a 6x Emmy award-winning writer, correspondent, and podcast host.

_Melissa Kalt's thought leadership portfolio_
[Melissa Kalt](https://authory.com/MelissaKaltMD) is an award-winning physician, mentor, and writer.
### Authory's price
$8/month
## Why Authory won this battle
While Carbonmade and Squarespace provide powerful features that would be non-negotiable for individuals in different professional situations, they did nothing to reduce my effort in building an online portfolio.
For both of these tools, I'd have to look for the content I want to showcase manually, and copy-paste/upload them by hand. There's also no assurance on storage/backup (unless you get the plugin/pay for a more expensive plan), so I'd also have to organize these pieces on my device or the cloud. Additionally, the content will be lost if the source link dies for some reason since I don't have a copy of any kind.
In light of these drawbacks, the advantages these tools offered didn't make sense. A visual editor or the promise of unlimited pages didn't do much to compensate for the labor I'd have to invest in putting together my portfolio.
Don't get me wrong. These tools are ideal for individuals with different requirements or ones in different domains. UX designers would probably appreciate Carbonmade's customizable layouts for their UX portfolio. Similarly, perhaps graphic designers want Squarespace's unique template to reflect their own artistic/aesthetic inclinations.
However, for a freelance writer like myself, the best website builder is the one that collated all my content, saved it, and expected me to put in about 3 to 5 minutes of work to receive a fully finished portfolio. There wasn't any need to go through video tutorials to set it up — though those are available for folks who need it.
Of course, it doesn't hurt that Authory is noticeably cheaper than the other tools in this list.
On average, using Authory saves me about five and a half hours every month — time I'd otherwise spend bothering clients for publication dates, copy-pasting links once my articles have been published, and scrambling to find relevant older articles in my six-year-long writing repertoire.
Additionally, I looked at Authory's user base and decided it was populated by people who have already achieved the success I am working towards. Take David Pogue (mentioned above) with his 6 Emmy awards, [Steven Levy, Editor at Large, WIRED](https://authory.com/StevenLevy), or [Brian Fung, a Technology Reporter at CNN](https://authory.com/BrianFung). These individuals have the careers I want, and I'm not ashamed to say that I'm following their lead in choosing Authory over all other website builders.
However, you should not take my word for it or even the word of the stalwarts I mentioned. Just [get started with Authory for free](https://authory.com/signup/create-account?utm_source=blog&utm_medium=content&utm_campaign=portfolio&utm_content=carbonmade_vs_squarespace) and gauge if you're getting what you want out of this premium [portfolio builder](https://authory.com/blog/best-portfolio-builder). | protimauthory |
1,391,486 | Service Discovery | Service discovery is a process of detecting services within network clusters. It works on Service... | 20,359 | 2023-03-07T06:42:49 | https://pragyasapkota.medium.com/service-discovery-8184d05bdc0e | systems, microservices, architecture, beginners | Service discovery is a process of detecting services within network [clusters](https://dev.to/pragyasapkota/clustering-how-much-does-it-differ-from-load-balancing-3i1e). It works on Service Discovery Protocol (SDP) — a networking standard for detecting network service by identifying resources. Usually, we see services invoke each other via language-level methods or procedure calls in the [monolithic](https://dev.to/pragyasapkota/monoliths-2n2m) application. But modern [microservices](https://dev.to/pragyasapkota/microservices-21oe) have virtualized or containerized environments where there are instances of a service and their locations that change dynamically. Also, we have mechanisms that enable clients of the service to make requests to the dynamically changing set of temporary service instances.
Some of the common examples of service discovery tools are etcd, Consul, Apache Thrift, Apache Zookeeper, etc.
## Implementations of service discovery
### Client-side Discovery
With client-side service discovery, the client determines the network location of another service by querying a service registry that holds the network locations of all service instances.

### Server-side Discovery
With server-side discovery, an intermediate element such as a load balancer sits in front of the services: the client sends its request to the load balancer, which forwards it to an available service instance.

## What is Service Registry?
A service registry is a database of the network locations of service instances, which clients query when they need to reach a service. Note that a service registry must be highly available and kept up to date at all times.
## Service Registration
There are a few ways of getting service information into the registry, a process known as service registration.
### Self-Registration
With self-registration, a service instance registers and de-registers itself in the service registry. It may also send periodic [heartbeat requests](https://dev.to/pragyasapkota/heartbeat-messaging-what-is-it-18a6) to keep its registration alive.
### Third-Party Registration
With third-party registration, a separate registrar keeps track of changes to running instances by polling the deployment environment or subscribing to events. It registers new service instances in the registry, and terminated service instances are de-registered.
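To make the two registration styles concrete, here is a minimal, hypothetical TTL-based registry sketch. With self-registration the instance itself would call ``register``/``heartbeat``; with third-party registration a separate registrar process would call them on the instance's behalf:

```python
import time


class ServiceRegistry:
    """Toy registry: an instance stays listed only while it heartbeats."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._last_seen = {}  # (service, address) -> last heartbeat time

    def register(self, service, address, now=None):
        self._last_seen[(service, address)] = time.time() if now is None else now

    # A heartbeat is just a refreshed registration.
    heartbeat = register

    def deregister(self, service, address):
        self._last_seen.pop((service, address), None)

    def live_instances(self, service, now=None):
        now = time.time() if now is None else now
        return sorted(addr for (svc, addr), seen in self._last_seen.items()
                      if svc == service and now - seen <= self.ttl)
```

An instance that stops heartbeating simply ages out after the TTL, which is how registries like Consul handle crashed instances without an explicit de-registration call.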
## Service Mesh
When communication is [distributed](https://dev.to/pragyasapkota/distributed-system-the-definition-nkh), service-to-service communication is necessary, but routing it across all the application clusters within and across the system becomes complex. A service mesh solves this by managing and securing the communication between individual services, and it also makes that communication observable. SDP provides the standard for detecting services within the mesh. Common examples of service meshes are [Istio](https://istio.io/latest/about/service-mesh/) and [Envoy](https://www.envoyproxy.io/).
**_I hope this article was helpful to you._**
**_Please don’t forget to follow me!!!_**
**_Any kind of feedback or comment is welcome!!!_**
**_Thank you for your time and support!!!!_**
**_Keep Reading!! Keep Learning!!!_** | pragyasapkota |
1,391,498 | Top 15 Tech Websites to Bookmark Now to Stay Updated | In the fast-paced world of technology, staying updated is crucial for everyone, whether you are a... | 0 | 2023-03-07T07:03:32 | https://dev.to/tigereye_zr/top-15-tech-websites-to-bookmark-now-to-stay-updated-40k7 | technews, technologytrends, websiterecommendations, stayupdated | In the fast-paced world of technology, staying updated is crucial for everyone, whether you are a tech enthusiast, a student, or a professional. To keep yourself up-to-date with the latest technological advancements and news, you need to bookmark some reliable websites that provide relevant information.
In this article, we have compiled a list of the top 15 tech websites that you should bookmark now to stay updated.
### TechCrunch
[TechCrunch](https://techcrunch.com/) is one of the most popular tech websites that provides news and analysis on technology startups, products, and trends. It covers a wide range of topics, including AI, cybersecurity, blockchain, and more. TechCrunch has a team of experienced writers and editors who deliver high-quality content regularly.
### Wired
[Wired](https://www.wired.com/) is a website that covers news and analysis on technology, science, and culture. It features articles on a wide range of topics, including smartphones, laptops, gaming, and more. Wired has a team of experienced writers and editors who deliver engaging and informative content on a regular basis.
### CNET
[CNET](https://www.cnet.com/) is a widely popular website that provides reviews, news, and analysis on a variety of tech products and services. It also covers other topics like culture, entertainment, and science. CNET has a large team of editors and writers who produce high-quality content on a regular basis.
### Gizmodo
[Gizmodo](https://gizmodo.com/) is a website that covers news and analysis on technology, science, and culture. It features articles on a wide range of topics, including smartphones, laptops, gaming, and more. Gizmodo's team of writers and editors consistently produce engaging and informative content.
### Engadget
[Engadget](https://www.engadget.com/) is a website that provides news, reviews, and analysis on a wide range of technology-related topics, including smartphones, laptops, gaming, and more. It has a large team of experienced writers and editors who deliver high-quality content on a regular basis.
### Mashable
[Mashable](https://mashable.com/) is a website that covers news and analysis on technology, culture, and entertainment. It features articles on a wide range of topics, including social media, gadgets, apps, and more. Mashable has a team of experienced writers who produce engaging and informative content regularly.
### TechRadar
[TechRadar](https://www.techradar.com/) is a website that provides news, reviews, and analysis on a variety of tech products and services, including smartphones, laptops, and software. It has a team of experienced writers and editors who consistently produce high-quality content.
### TechTalkiz
[TechTalkiz](https://techtalkiz.com/) is a website that covers a wide range of technology topics, including software, hardware, gadgets, and more. It provides news, reviews, and analysis on the latest technology trends, and features articles written by experienced writers and industry experts. TechTalkiz also includes a forum where users can discuss technology-related topics and get help with their tech-related issues.
### The Verge
[The Verge](https://www.theverge.com/) is a website that covers news and analysis on technology, science, and culture. It features articles on a wide range of topics, including smartphones, laptops, gaming, and more. The Verge has a team of experienced writers and editors who deliver engaging and informative content on a regular basis.
### ZDNet
[ZDNet](https://www.zdnet.com/) is a website that provides news, analysis, and reviews on technology, cybersecurity, and business. It features articles on a wide range of topics, including cloud computing, software, and networking. ZDNet has a team of experienced writers and editors who produce high-quality content regularly.
### Digital Trends
[Digital Trends](https://www.digitaltrends.com/) is a website that covers news, reviews, and analysis on a variety of tech products and services, including smartphones, laptops, and smart home devices. It has a team of experienced writers and editors who consistently deliver engaging and informative content.
### Tom's Guide
[Tom's Guide](https://www.tomsguide.com/) is a website that provides reviews, buying guides, and news on a variety of tech products and services, including smartphones, laptops, and gaming. It has a team of experienced writers and editors who produce high-quality content regularly.
### PCWorld
[PCWorld](https://www.pcworld.com/) is a website that provides news, reviews, and analysis on a variety of tech products and services, including laptops, desktops, and software. It has a team of experienced writers and editors who deliver engaging and informative content on a regular basis.
### Android Central
[Android Central](https://www.androidcentral.com/) is a website that provides news, reviews, and analysis on all things related to Android devices, including smartphones, tablets, and wearables. It has a team of experienced writers and editors who consistently produce high-quality content.
### MacRumors
[MacRumors](https://www.macrumors.com/) is a website that provides news, reviews, and analysis on all things related to Apple products, including Macs, iPhones, and iPads. It has a team of experienced writers and editors who consistently deliver engaging and informative content.
## Conclusion
In conclusion, staying updated on the latest tech trends is crucial in today's fast-paced world. With the constant [advancements in technology](https://dev.to/brettclawson75/why-it-is-important-to-stay-up-to-date-with-technology-20hh), it's important to have reliable sources that can provide accurate information and insights.
The 15 websites listed above are some of the best resources for staying up-to-date on the latest technology news and trends.
Whether you're a tech enthusiast or just want to stay informed, these websites offer a wealth of information and analysis that can help you make informed decisions about the technology you use.
From hardware and software reviews to in-depth analysis of industry trends, these websites have got you covered. So, be sure to bookmark these sites and stay informed on the latest in tech!
| tigereye_zr |
1,391,514 | Write a program to implement triangle questions in java. | Requirements and Specifications Write a program that inputs 3 numbers, and determines if these... | 0 | 2023-03-07T07:26:59 | https://dev.to/mick_jenifer006/write-a-program-to-implement-triangle-questions-in-java-2fe1 | java, javahomeworkhelp, programmingassignmenthelp, javaassignmenthelp |
Requirements and Specifications
Write a program that inputs 3 numbers and determines if these numbers are sides of a triangle. If a value <= 0 is entered, ignore that data and exit the program.
To be a triangle, the sum of any 2 sides of a triangle must be greater than the 3rd side. All the numbers input will be integers.
http://www.mathwarehouse.com/geometry/triangles/triangle-inequality-theorem-rule-explained.php
If it is a valid triangle, determine the area (Heron's formula) and the perimeter of the triangle (sum of all sides), and whether it is scalene (no sides equal), isosceles (2 sides equal), or equilateral (all sides equal),
and whether it's a right triangle (a^2 + b^2 = c^2; remember you did not necessarily input the largest side last)
Heron's formula
http://www.mathopenref.com/heronsformula.html
you must have at least 2 methods in addition to main
you may not use Math.max or Math.min or anything other than what we covered in chapters 1,2,3, 4, 5.
Name your project LastNameProject4 so that mine would be LichtenthalProject4
If you name your file incorrectly, or use anything we did not cover in the specified chapters your grade will be 0
sample runs
Source Code
```java
import java.util.Scanner;
public class LastNameProject4 {
// Entry point of the program
public static void main(String[] args) {
Scanner in = new Scanner(System.in);
// Get the 3 sides
System.out.print("Enter side 1: ");
int side1 = in.nextInt();
System.out.print("Enter side 2: ");
int side2 = in.nextInt();
System.out.print("Enter side 3: ");
int side3 = in.nextInt();
// Validate the triangle sides
if (side1 <= 0 || side2 <= 0 || side3 <= 0) {
System.out.println("Invalid data entered.");
return;
}
// Check if it's a triangle, the sum of two sides should
// be greater than the other side
if (side1 + side2 > side3 && side2 + side3 > side1 && side1 + side3 > side2) {
// Calculate the perimeter
double perimeter = side1 + side2 + side3;
System.out.println("Perimeter: " + perimeter);
// Calculate the area of the triangle
double semiPerimeter = perimeter / 2.0;
double area = Math.sqrt(semiPerimeter
* (semiPerimeter - side1)
* (semiPerimeter - side2)
* (semiPerimeter - side3));
System.out.println("Area: " + area);
            // Check the kind of triangle (equilateral must be tested first,
            // otherwise the isosceles branch would catch it)
            if (side1 == side2 && side2 == side3) {
                System.out.println("3 sides equal-equilateral");
            } else if (side1 == side2 || side1 == side3 || side2 == side3) {
                System.out.println("2 sides equal-isosceles");
            } else {
                System.out.println("No sides equal-scalene");
            }
// Check if right triangle
if (side1 * side1 + side2 * side2 == side3 * side3
|| side2 * side2 + side3 * side3 == side1 * side1
|| side3 * side3 + side1 * side1 == side2 * side2) {
System.out.println("It is a right triangle");
} else {
System.out.println("It is not a right triangle");
}
} else {
// Not a triangle
System.out.println("Not sides of a triangle");
}
}
}
```
If you need assistance with such an assignment or project, visit the [Java Homework Help](https://www.programminghomeworkhelp.com/java-assignment/) website
| mick_jenifer006 |
1,391,612 | openGauss Adding or Deleting a Standby Node | Availability This feature is available since openGauss 2.0.0. ... | 0 | 2023-03-07T09:04:30 | https://dev.to/liyang0608/opengauss-adding-or-deleting-a-standby-node-2p12 | ## Availability
This feature is available since openGauss 2.0.0.
## Introduction
Standby nodes can be added and deleted.
## Benefits
If the read pressure of the primary node is high or you want to improve the disaster recovery capability of the database, you need to add a standby node. If some standby nodes in a cluster are faulty and cannot be recovered within a short period of time, you can delete the faulty nodes to ensure that the cluster is running properly.
## Description
openGauss can be scaled out from a single node or one primary and multiple standbys to one primary and eight standbys. Cascaded standby nodes can be added. Standby nodes can be added when a faulty standby node exists in the cluster. One primary and multiple standbys can be scaled in to a single node. A faulty standby node can be deleted.
Standby nodes can be added or deleted online without affecting the primary node.
## Enhancements
None.
## Constraints
For adding a standby node:
- Ensure that the openGauss image package exists on the primary node.
- Ensure that the same users and user groups as those on the primary node have been created on the new standby node.
- Ensure that the mutual trust of user root and the database management user has been established between the existing database nodes and the new nodes.
- Ensure that the XML file has been properly configured and information about the standby node to be scaled has been added to the installed database configuration file.
- Ensure that only user root is authorized to run the scale-out command.
- Do not run the gs_dropnode command on the primary node to delete other standby nodes at the same time.
- Ensure that the environment variables of the primary node have been imported before the scale-out command is run.
- Ensure that the operating system of the new standby node is the same as that of the primary node.
- Do not perform a primary/standby switchover or failover on other standby nodes at the same time.
For deleting a standby node:
- Delete the standby node only on the primary node.
- Do not perform a primary/standby switchover or failover on other standby nodes at the same time.
- Do not run the gs_expansion command on the primary node for scale-out at the same time.
- Do not run the gs_dropnode command twice at the same time.
- Before deletion, ensure that the database management user trust relationship has been established between the primary and standby nodes.
- Run this command as a database administrator.
- Before running commands, run the source command to import environment variables of the primary node.
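As an illustrative sketch of the scale-out and scale-in commands mentioned above, invocations might look like the following. The flags shown are recalled from the openGauss tool reference and should be treated as assumptions; verify them against your version's documentation before use:

```console
# Scale out: add the standby nodes described in the XML config (run as root on the primary)
gs_expansion -U omm -G dbgrp -X /opt/software/clusterconfig.xml -h 192.168.0.12

# Scale in: delete a (possibly faulty) standby node (run as the database admin on the primary)
gs_dropnode -U omm -G dbgrp -h 192.168.0.12
```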
## Dependencies
None. | liyang0608 | |
1,391,709 | Scribe Chrome Extension: The Ultimate Tool for Documenting Your Processes | In today’s fast-paced business world, effective process documentation is essential for streamlining... | 0 | 2023-03-07T10:25:42 | https://dev.to/esedev/scribe-chrome-extension-the-ultimate-tool-for-documenting-your-processes-4kif | scribe, documentprocess, chromeextension, productivity | In today’s fast-paced business world, effective process documentation is essential for streamlining workflows and increasing productivity. The Scribe Chrome Extension is the ultimate tool for documenting your processes, making it easier than ever to capture and organize key information and collaborate with team members. With its advanced features, including automatic recording of user activity, manual note-taking and tagging, audio and video recording, and integration with project management tools, the Scribe Chrome Extension is the perfect solution for any individual or team looking to improve their productivity and achieve their goals more efficiently.
**What is Scribe?**
Scribe is a productivity tool that is available as a Chrome extension. It is designed to help users capture and organize information in a structured and efficient manner. With Scribe, users can create workspaces to organize their projects, documents to capture information, and tags to categorize and search for their content. The extension also includes a rich text editor that allows users to format text, add images, and embed links to external resources. Scribe can be used for a variety of use cases, such as process documentation, project management, and knowledge management. It is a versatile tool that can help users streamline their workflow and increase productivity. _Scribe was founded in 2019_
Scribe generates a step-by-step guide from your cursor clicks and keystrokes. The browser extension is free, and you can also use Scribe on your desktop by upgrading to the premium version, which includes additional features like customizing screenshots, branded guides, etc.
**Key features of the Scribe Chrome Extension.**
The Scribe Chrome Extension offers a range of powerful features for documenting your processes, including:
**Automatic recording of user activity:** Scribe records every action you take on your computer, including website visits, mouse clicks, and keyboard strokes, making it easy to capture and analyze your workflow.
**Manual note-taking and tagging:** You can also manually take notes and tag them with keywords and labels, making it easy to find and organize your information.
**Audio and video recording:** Scribe allows you to record audio and video notes, enabling you to capture spoken information, presentations, and other content that might be difficult to capture in writing.
**Integration with project management tools:** Scribe integrates with popular project management tools like Asana and Trello, allowing you to create tasks and track your progress directly from the extension.
**Collaboration features:** Scribe makes it easy to collaborate with your team members, allowing you to share notes and recordings, assign tasks, and leave feedback and comments.
**Centralised information:** Scribe creates a centralised repository of information, making it easy to access and share knowledge across your organisation.
**Privacy and security:** Scribe takes privacy and security seriously, encrypting all data and giving you control over who has access to your information.
**Use cases of Scribe**
1. Scribe can be used to document processes, workflows, and procedures in a clear and organized way. This can help teams optimize their processes and improve collaboration.
2. Scribe can be used to capture and organize knowledge within an organization. This can include best practices, procedures, and company policies.
3. Scribe can be used to document project plans, tasks, and milestones. This can help teams stay on track and ensure that projects are completed on time and within budget.
4. Scribe can be used to document customer support procedures, such as troubleshooting steps and best practices. This can help support teams resolve issues more efficiently and effectively.
5. Scribe can be used to create training materials and documentation for new hires. This can help new employees get up to speed more quickly and become productive members of the team.
**How to use the Scribe Chrome Extension for documenting your processes**

1. Install the Scribe Chrome Extension from the Chrome Web Store.
2. Click on the Scribe icon in your browser toolbar to open the extension.
3. Click on ‘Start recording’ in the extension dropdown.
4. Click on the blinking red button in the corner of your screen and select ‘Complete Recording’
5. You can create a new workspace for your project and give it a descriptive name.
6. Add a new document to your workspace by clicking on the “New Document” button.
7. Give your document a title that reflects its content.
8. Use the rich text editor to add text, images, and links to your document.
9. Use the “Tags” feature to organize your documents by topic or category.
10. Use the “Export” feature to download your document as a PDF or text file.
11. Use the “Share” feature to collaborate with team members or share your document with others.
12. Use the “Search” feature to quickly find information within your workspace.
In conclusion, the Scribe Chrome Extension is a powerful tool for documenting your processes. With its intuitive interface and robust feature set, it makes it easy to capture and organize information in a way that is accessible and actionable. Whether you are a business owner, project manager, or team leader, Scribe can help you streamline your processes, improve collaboration, and boost productivity. By using Scribe to document your processes, you can ensure that your team is aligned, your workflows are optimized, and your projects are successful. So if you’re looking for a tool that can help you take your process documentation to the next level, look no further than the Scribe Chrome Extension.
_**Reference:**_
https://scribehow.com/page/Getting_Started_With_Scribe__eR6WdPtoTOuhiSZ3EcGLmw
| esedev |
1,391,841 | File Privacy | 3DES is one of the most prominent forms of encryption and has a symmetric-key block cipher algorithm.... | 0 | 2023-03-07T13:10:18 | https://dev.to/vengito/file-privacy-k9d | security, privacy, encryption | 3DES is one of the most prominent forms of encryption and has a symmetric-key block cipher algorithm. File Privacy uses Triple DES (3DES or TDES).
You encrypt your files using the key file. The key file is created within the program. First you select the file you want to encrypt and then the folder where the output file will be saved.
File Privacy has the following features:
- File decryption/encryption (3DES)
- Create random key hex string and save as file
- Hex string based decryption/encryption
Usage is very easy, and a manual is also included in the application.
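File Privacy itself is closed source, but to make the 3DES idea concrete, here is a small illustrative sketch using the JDK's built-in `DESede` cipher. This is my own example, not File Privacy's code; ECB mode is used only to keep it short and is not recommended in practice:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class TripleDesDemo {
    // Run 3DES (DESede) in the given mode (Cipher.ENCRYPT_MODE or Cipher.DECRYPT_MODE).
    public static byte[] tripleDes(int mode, SecretKey key, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("DESede/ECB/PKCS5Padding");
        cipher.init(mode, key);
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        // Generate a random 3DES key, analogous to File Privacy's key file.
        SecretKey key = KeyGenerator.getInstance("DESede").generateKey();

        byte[] plain = "my secret file contents".getBytes(StandardCharsets.UTF_8);
        byte[] encrypted = tripleDes(Cipher.ENCRYPT_MODE, key, plain);
        byte[] decrypted = tripleDes(Cipher.DECRYPT_MODE, key, encrypted);

        System.out.println(Arrays.equals(plain, decrypted)); // prints "true"
    }
}
```

A real tool would persist the key bytes to a key file (as File Privacy does) and use an authenticated mode rather than ECB.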
{% embed https://www.youtube.com/watch?v=yJlumXN_Cx4 %}
[Get](https://www.microsoft.com/store/apps/9nr3f6t9g53z) | vengito |
1,391,908 | Conor McGregor's Net Worth: How He Built His Fortune | Articlesworlds.com is a platform where everyone can read or search for any kind of article and also... | 0 | 2023-03-07T13:36:07 | https://dev.to/articlesworlds/conor-mcgregors-net-worth-how-he-built-his-fortune-3o3m | [Articlesworlds.com](https://www.articlesworlds.com/) is a platform where everyone can read or search for any kind of article and also can share their ideas with us in the comment section. Today our topic is [Conor McGregor's Net Worth](https://www.articlesworlds.com/2023/03/conor-mcgregors-net-worth-how-he-built.html).
**Read About ⇒ [Real Estate Investment Trust UK (REIT): Everything You Need to Know to Start Investing Today](https://www.articlesworlds.com/2023/03/real-estate-investment-trust-uk-reit.html)**
[Conor McGregor](https://www.articlesworlds.com/2023/03/conor-mcgregors-net-worth-how-he-built.html) is a name synonymous with the sport of mixed martial arts (MMA). He has become one of the most popular and successful athletes of the modern era, amassing a fortune that most people can only dream of. But how did he build his fortune? In this article, we will take a closer look at Conor McGregor's net worth and the key factors that have contributed to his financial success.
## Introduction of Conor McGregor's
[Conor McGregor](https://www.articlesworlds.com/2023/03/conor-mcgregors-net-worth-how-he-built.html) was born in Dublin, Ireland, in 1988. He started his combat sports journey at a young age, training in boxing and then transitioning to MMA in his early twenties.
McGregor made his professional debut in 2008 and quickly rose to prominence with his unique fighting style and trash-talking ability. He signed with the Ultimate Fighting Championship (UFC) in 2013 and became the first fighter in the promotion's history to hold titles in two weight classes simultaneously.
## Early Life and Career
McGregor grew up in Dublin, where he developed a passion for combat sports. He started boxing at the age of 12 and quickly became proficient in the sport. McGregor eventually transitioned to MMA, where he found even more success. He made his professional debut in 2008 and quickly built a reputation as a talented fighter with a charismatic personality.
## UFC Career
McGregor signed with the UFC in 2013 and quickly made an impact. He won his first fight in the promotion with a first-round knockout, and then went on to win his next four fights. McGregor's trash-talking ability and unique fighting style quickly made him a fan favorite, and he became one of the UFC's biggest draws. [Read More](https://www.articlesworlds.com/2023/03/conor-mcgregors-net-worth-how-he-built.html). | articlesworlds | |
1,392,265 | React Custom Hooks: Reusable and Efficient Stateful Logic | Hooks in react are built-in functions introduced in React version 16.8. React hooks allow the use of... | 0 | 2023-03-07T20:07:39 | https://dev.to/judeebekes67/react-custom-hooks-reusable-and-efficient-stateful-logic-4nja | webdev, react, beginners, javascript | Hooks in React are built-in functions introduced in React version 16.8. React hooks allow the use of React library features such as lifecycle methods, state, and context in functional components without having to rewrite them as classes.
**What are custom hooks?**
Custom hooks in React are a powerful and flexible feature that enables you to encapsulate and reuse stateful logic across multiple components. They are a way to extract logic from a component and encapsulate it in a reusable function that can be called from multiple components.
**How to Create Custom Hooks?**
Custom hooks follow the same simple naming convention as built-in hooks, which is to prefix the name with "use" so that React knows it is a hook. For example, a hook that fetches data from an API might be named useApiData or useFetch.
Custom hooks are built on top of the existing React Hooks API; that is, for a function to count as a hook, it has to make use of at least one of the built-in hooks like useState, useEffect, or useRef. These built-in hooks allow you to manage state, perform side effects, and manipulate the DOM within functional components.
To create a custom hook, you start by defining a function that encapsulates the logic to be reused. This function can use any of the built-in React hooks or other custom hooks that you have created.
For example, let's say we want to create a custom hook that fetches data from an API and handles loading and error states. Here's what that might look like:
```
import { useState, useEffect } from 'react';
export function useApiData(url) {
const [data, setData] = useState(null);
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState(null);
const fetchData = async () => {
setIsLoading(true);
setError(null);
try {
const response = await fetch(url);
const json = await response.json();
setData(json);
} catch (error) {
setError(error);
}
setIsLoading(false);
}
useEffect(() => {
fetchData();
}, [url]);
return { data, isLoading, error };
}
```
In this example, the useApiData function takes a URL as its argument and returns an object containing the fetched data, a boolean indicating whether the data is currently being loaded, and any error that occurred while fetching the data. Custom hooks can return any value, but it's a standard convention to return values in an array or an object as in the case of the example above.
To use this custom hook in a component, we simply call it like any other hook:
```
import React from 'react';
import { useApiData } from './useApiData';
function MyComponent() {
const { data, isLoading, error } = useApiData('https://api.example.com/data');
  let content = '';

  if (isLoading) {
    content = <div>Loading...</div>;
  }

  if (error) {
    content = <div>Error: {error.message}</div>;
  }

  if (data !== null) {
    content = <pre>{JSON.stringify(data, null, 2)}</pre>;
  }
return (
<div>
{content}
</div>
);
}
```
In this example, we import the useApiData function from the separate file where it was created and call it with the URL we want to fetch. We then use the values returned by the hook to render the component, handling loading and error states as appropriate.
One of the key benefits of custom hooks is that they allow you to encapsulate complex logic and make it easy to reuse across multiple components. This can help to reduce code duplication and improve the overall maintainability of your code.
In summary, React custom hooks are a powerful tool that supports the extraction and reuse of stateful logic across multiple components. They are built on top of the existing React Hooks API and follow a simple naming convention. Custom hooks can encapsulate complex logic and help to reduce code duplication, making your code more maintainable and easier to work with.
For any questions, send me an email: [Jude Ebeke](mailto:judeebekes67@gmail.com) | judeebekes67 |
1,392,562 | Obtain data in openGauss via ODBC (example) | // This example demonstrates how to obtain data in openGauss through ODBC. // DBtest.c (compile... | 0 | 2023-03-08T03:33:59 | https://dev.to/490583523leo/obtain-data-in-opengauss-via-odbc-example-1hkp | ```
// This example demonstrates how to obtain data in openGauss through ODBC.
// DBtest.c (compile with: libodbc.so)
#include <stdlib.h>
#include <stdio.h>
#include <sqlext.h>
#ifdef WIN32
#include <windows.h>
#endif
SQLHENV V_OD_Env; // Handle ODBC environment
SQLHSTMT V_OD_hstmt; // Handle statement
SQLHDBC V_OD_hdbc; // Handle connection
char typename[100];
SQLINTEGER value = 100;
SQLINTEGER V_OD_erg,V_OD_buffer,V_OD_err,V_OD_id;
int main(int argc,char *argv[])
{
// 1. Application environment handle
V_OD_erg = SQLAllocHandle(SQL_HANDLE_ENV,SQL_NULL_HANDLE,&V_OD_Env);
if ((V_OD_erg != SQL_SUCCESS) && (V_OD_erg != SQL_SUCCESS_WITH_INFO))
{
printf("Error AllocHandle\n");
exit(0);
}
// 2. Set environment attributes (version information)
SQLSetEnvAttr(V_OD_Env, SQL_ATTR_ODBC_VERSION, (void*)SQL_OV_ODBC3, 0);
// 3. Apply for a connection handle
V_OD_erg = SQLAllocHandle(SQL_HANDLE_DBC, V_OD_Env, &V_OD_hdbc);
if ((V_OD_erg != SQL_SUCCESS) && (V_OD_erg != SQL_SUCCESS_WITH_INFO))
{
SQLFreeHandle(SQL_HANDLE_ENV, V_OD_Env);
exit(0);
}
// 4. Set connection properties
SQLSetConnectAttr(V_OD_hdbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_ON, 0);
// 5. Connect to the data source, where "userName" and "password" represent the user name and password for connecting to the database, please modify it according to the actual situation.
// If the user name and password have been configured in the odbc.ini file, then this can be left blank (""); but this is not recommended, because once the odbc.ini permission is not well managed, the database user password will be leaked.
V_OD_erg = SQLConnect(V_OD_hdbc, (SQLCHAR*) "gaussdb", SQL_NTS,
(SQLCHAR*) "userName", SQL_NTS, (SQLCHAR*) "password", SQL_NTS);
if ((V_OD_erg != SQL_SUCCESS) && (V_OD_erg != SQL_SUCCESS_WITH_INFO))
{
printf("Error SQLConnect %d\n",V_OD_erg);
SQLFreeHandle(SQL_HANDLE_ENV, V_OD_Env);
exit(0);
}
printf("Connected !\n");
// 6. Apply for a statement handle
SQLAllocHandle(SQL_HANDLE_STMT, V_OD_hdbc, &V_OD_hstmt);
// 7. Set statement attributes (the handle must be allocated before it is used)
SQLSetStmtAttr(V_OD_hstmt,SQL_ATTR_QUERY_TIMEOUT,(SQLPOINTER *)3,0);
// 8. Execute the SQL statement directly.
SQLExecDirect(V_OD_hstmt,"drop table IF EXISTS customer_t1",SQL_NTS);
SQLExecDirect(V_OD_hstmt,"CREATE TABLE customer_t1(c_customer_sk INTEGER, c_customer_name VARCHAR(32));",SQL_NTS);
SQLExecDirect(V_OD_hstmt,"insert into customer_t1 values(25,'li')",SQL_NTS);
// 9. Prepare to execute
SQLPrepare(V_OD_hstmt,"insert into customer_t1 values(?)",SQL_NTS);
// 10. Bind parameters
SQLBindParameter(V_OD_hstmt,1,SQL_PARAM_INPUT,SQL_C_SLONG,SQL_INTEGER,0,0,
&value,0,NULL);
// 11. Execute the prepared statement
SQLExecute(V_OD_hstmt);
SQLExecDirect(V_OD_hstmt,"select c_customer_sk from customer_t1",SQL_NTS);
// 12. Get the attribute of a column in the result set
SQLColAttribute(V_OD_hstmt,1,SQL_DESC_TYPE,typename,100,NULL,NULL);
printf("SQLColAttribute %s\n",typename);
// 13. Bind the result set
SQLBindCol(V_OD_hstmt,1,SQL_C_SLONG, (SQLPOINTER)&V_OD_buffer,150,
(SQLLEN *)&V_OD_err);
// 14. Get the data in the result set through SQLFetch
V_OD_erg=SQLFetch(V_OD_hstmt);
// 15. Get and return the data through SQLGetData.
while(V_OD_erg != SQL_NO_DATA)
{
SQLGetData(V_OD_hstmt,1,SQL_C_SLONG,(SQLPOINTER)&V_OD_id,0,NULL);
printf("SQLGetData ----ID = %d\n",V_OD_id);
V_OD_erg=SQLFetch(V_OD_hstmt);
};
printf(" Done !\n");
// 16. Disconnect the data source and release the handle resource
SQLFreeHandle(SQL_HANDLE_STMT,V_OD_hstmt);
SQLDisconnect(V_OD_hdbc);
SQLFreeHandle(SQL_HANDLE_DBC,V_OD_hdbc);
SQLFreeHandle(SQL_HANDLE_ENV, V_OD_Env);
return(0);
}
```
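The header comment says to compile against `libodbc.so`; on a typical Linux system with the unixODBC development packages installed, building and running the example might look like the following (an assumption about your setup, not part of the original example):

```console
gcc DBtest.c -o DBtest -lodbc
./DBtest
```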
| 490583523leo | |
1,392,775 | Today's Fun Joke For Developers - Daily Developer Jokes | Check out today's daily developer joke! (a project by Fred Adams at xtrp.io) | 4,070 | 2023-03-08T08:00:03 | https://dev.to/dailydeveloperjokes/todays-fun-joke-for-developers-daily-developer-jokes-2kkd | jokes, dailydeveloperjokes | ---
title: "Today's Fun Joke For Developers - Daily Developer Jokes"
description: "Check out today's daily developer joke! (a project by Fred Adams at xtrp.io)"
series: "Daily Developer Jokes"
published: true
tags: #jokes, #dailydeveloperjokes
---
Hi there! Here's today's Daily Developer Joke. We hope you enjoy it; it's a good one.

---
For more jokes, and to submit your own joke to get featured, check out the [Daily Developer Jokes Website](https://dailydeveloperjokes.github.io/). We're also open sourced, so feel free to view [our GitHub Profile](https://github.com/dailydeveloperjokes).
### Leave this post a ❤️ if you liked today's joke, and stay tuned for tomorrow's joke too!
_This joke comes from [Dad-Jokes GitHub Repo by Wes Bos](https://github.com/wesbos/dad-jokes) (thank you!), whose owner has given me permission to use this joke with credit._
<!--
Joke text:
___Q:___ Where does the pirate stash all of their digital treasures?
___A:___ RAR
-->
| dailydeveloperjokes |
1,392,787 | Tiny Container Images With Distroless Containers | When deploying applications using containers, it's usually a goal to minimize the size of the... | 0 | 2023-03-08T08:18:10 | https://blog.jonstodle.com/tiny-container-images-with-distroless-containers/ | docker, rust, go, node | ---
title: Tiny Container Images With Distroless Containers
published: true
date: 2021-10-16 00:00:00 UTC
tags: Docker,Rust,Go,nodejs
canonical_url: https://blog.jonstodle.com/tiny-container-images-with-distroless-containers/
---
When deploying applications using containers, it's usually a goal to minimize the size of the container image.
Achieving a small container size varies a lot between different languages and technologies. There is no _single_ solution that will work for everyone, as usual, but there’s a pretty interesting alternative if you’re deploying Rust, D, Go, Java or node.js applications.
Google maintains a repo of what they call _distroless containers_. These are container base images that contain as little as possible. They’ve stripped away pretty much everything until you’re basically left with _libc_ and not much more. There isn’t even a shell included!
Another nice benefit of the distroless images, apart from the size consideration, is the security. With so few tools and executables inside the container it increases the difficulty of being able to do much if you manage to get access to it. Less code, means less bugs and vulnerabilities.
To start using one is pretty simple. Build your application and copy the final binary into the image based on one of the distroless images:
```
FROM golang:1.13-buster as build
WORKDIR /src
COPY . /src
RUN go get -d -v ./...
RUN go build -o /go/bin/app
FROM gcr.io/distroless/base-debian10
COPY --from=build /go/bin/app /
CMD ["/app"]
```
It’s important to note that you have to use the exec form, e.g. `CMD ["/app"]`, when defining a `CMD` or `ENTRYPOINT` command in the Dockerfile. If you use the shell form, e.g. `CMD "/app"`, Docker will prepend the command with a shell, which will not work, as there’s no shell inside the distroless container.
And that’s it!
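As another sketch, here is the same pattern for a node.js app. The image tags below are assumptions (check the distroless repo for the currently supported ones), and per the distroless docs the node image already uses `node` as its entrypoint, so `CMD` only names the script:

```dockerfile
FROM node:16-buster as build
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .

FROM gcr.io/distroless/nodejs:16
COPY --from=build /app /app
WORKDIR /app
CMD ["server.js"]
```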
Check out their [repo on Github](https://github.com/GoogleContainerTools/distroless) for more information.
* * *
Happy coding! | jonstodle |
1,392,872 | Ecommerce, where should I start? | Hello, I am a Junior Developer with 1 yr of experience, mostly FrontEnd and a bit of BackEnd. And I... | 0 | 2023-03-08T09:50:59 | https://dev.to/green8888elephant/ecommerce-where-should-i-start-4pb0 | webdev, ecommerc, nextjs, aws | Hello, I am a Junior Developer with 1 yr of experience, mostly FrontEnd and a bit of BackEnd. And I was always wondering, how to make my own ecommerce website, to sell some products there.
My knowledge:
ReactJS/NextJS 4/5
TailwindCSS 4/5
ExpressJS/tRPC 2.5/5
PostgeSQL 3/5
Docker Deployment with CI/CD from GitHub 3.5/5
Stripe 0/5
I checked out some repos on GitHub with NextJS + business logic (add to cart, etc.).
And some of them could be a really good starting point. But for sure I will need to add a lot to it, to create a product from that.
There are a lot of CMS or headless CMS options like Strapi/Medusa etc.
Or should I just find a CMS and do whatever I need with it?
I think the best and easiest way to work with payments is Stripe.
For deployment I'm thinking AWS: an S3 bucket for product images, a storage service for the DB, and EC2 for the server itself.
If you have any suggestion, will be glad to hear from you! | green8888elephant |
1,425,771 | A Beginner's Guide to Amazon S3 Permissions and Access Control | Sign in to the AWS Management Console at [https://console.aws.amazon.com] using your AWS account... | 0 | 2023-04-04T13:57:11 | https://dev.to/beauty/a-beginners-guide-to-amazon-s3-permissions-and-access-control-4kn3 | + Sign in to the AWS Management Console at [https://console.aws.amazon.com] using your AWS account credentials.
+ Navigate to the S3 Dashboard by selecting S3 from the list of services.

+ Click the "Create Bucket" button.

+ In the "Bucket Name and Region" section, enter a name for your bucket. **Bucket name must be globally unique and must not contain spaces or uppercase letters, so you may need to choose a different name if the name you select is already taken.**
+ Choose the region where you want your bucket to be located. Selecting a region that's closest to your users can help minimize latency and improve performance.

+ Configure your bucket options. Here, you can choose settings such as versioning, encryption, and access control. You can also add tags to your bucket to help organize it.
+ Click the "Create Bucket" button to create your bucket.

**Congratulations!** You've now created an S3 bucket in AWS. You can use your bucket to store objects such as files, images, and videos. You can also configure your bucket to host static websites or serve as a backend for your applications.
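If you prefer the command line, the console steps above can be sketched with the AWS CLI as well (the bucket name and region below are placeholders):

```console
# Create a bucket (names must be globally unique)
aws s3 mb s3://my-example-bucket-12345 --region us-east-1

# Upload an object to it
aws s3 cp ./picture.png s3://my-example-bucket-12345/picture.png
```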
## <u>Now making whatever you uploaded (e.g picture) public</u>
+ Select the bucket that contains the object you want to make public.
+ Find the object in the list and click on it to open its details page.
+ Click the "Permissions" tab in the object details page, and under the "Edit public access" section, click the "Edit" button, then save your changes.


+ Under the "Permissions" tab, in the "Edit Object Ownership" section, select the "ACLs enabled" option.

+ Under the "Access for other AWS accounts" section, click the "Edit" button.
+ Select "Grant public read access to this object(s)" and click "Save changes".


+ To test if the object is now public and readable, copy the URL of the object again and paste it into a web browser. The object should now be visible.
**Note:** Making an object public means that anyone with the object's URL can view it. Be careful when making objects public and ensure that you only make objects public that you want to be accessible to anyone. Also, keep in mind that public objects may be subject to web crawlers and search engines, and may be cached in different locations around the world.
## <u>Terminating an S3 bucket</u>
Terminating an S3 bucket means deleting it completely. This is useful if you no longer need the bucket, or if you want to start fresh with a new bucket. However, it's important to note that once a bucket is deleted, all objects stored within the bucket are permanently deleted and cannot be recovered.
+ Select the bucket you want to terminate.
+ Click the "Delete bucket" button.
+ In the confirmation dialog box, type the name of the bucket to confirm that you want to delete it.
+ Click "Confirm" to delete the bucket.
After confirming the deletion, AWS will immediately begin the process of deleting the bucket and all of its contents. It may take some time to complete, depending on the size of the bucket and the number of objects it contains.
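The CLI equivalent of these deletion steps looks like the following (placeholder bucket name; `--force` first deletes every object in the bucket, so double-check before running it):

```console
aws s3 rb s3://my-example-bucket-12345 --force
```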
It's important to note that terminating a bucket will not only delete all of the objects stored within the bucket, but will also delete any associated metadata, permissions, and access control policies. Be sure to carefully review the contents of the bucket before terminating it, and make sure that you have backups of any data that you need to keep. | beauty | |
1,393,041 | How to Use a Sass/SCSS with Expo SDK v48 and TypeScript | If you're a mobile app developer using Expo, you're probably already familiar with how easy it is to... | 0 | 2023-03-08T14:12:54 | https://dev.to/gihanrangana/how-to-use-a-sassscss-with-expo-sdk-v48-and-typescript-3nef | expo, saas, scss, reactnative | If you're a mobile app developer using Expo, you're probably already familiar with how easy it is to build a high-quality app with minimal setup. However, if you're used to using Sass/SCSS in your web development workflow, you might be wondering how to incorporate it into your Expo project. In this tutorial, we'll walk you through how to set up Sass/SCSS in your Expo SDK v48 project, using TypeScript.
### Step 1: Create Project with expo v48
The first step is to create an Expo project with the latest Expo SDK v48.
```
npx create-expo-app my-app --template typescript
cd my-app
```
### Step 2: Install Dependencies
Install the dependencies needed for Sass/SCSS to work with Expo SDK v48 and TypeScript. Run the following command in your terminal:
```
npm install babel-plugin-react-native-classname-to-style babel-plugin-react-native-platform-specific-extensions node-sass react-native-sass-transformer
```
If you want to generate `.d.ts` files for your `.scss` files, install `react-native-typed-sass-transformer` instead of `react-native-sass-transformer`.
### Step 3: Update package.json
In this step we have to update `package.json` to use `react-native` from a GitHub fork. This is required to be able to use the `className` prop on React Native elements. Update `react-native` as follows:
```json
"dependencies": {
"react-native": "gihanrangana/react-native#v0.71.3"
}
```
### Step 4: Configure the babel.config.js
Now we need to update the `babel.config.js` file as follows:
```js
//babel.config.js
module.exports = function (api) {
api.cache(true);
return {
presets: ['babel-preset-expo'],
plugins: [
"react-native-classname-to-style",
[
"react-native-platform-specific-extensions",
{ extensions: ["scss", "sass"] },
],
],
};
};
```
### Step 5: Configure metro.config.js
In this step we configure the `metro.config.js` to transform styles from `.scss` or `.sass` files.
First, create a `transformer.js` file at the project root as follows:
```js
//transformer.js
const upstreamTransformer = require("metro-react-native-babel-transformer");
const sassTransformer = require("react-native-sass-transformer");
const theme = (process.cwd() + "/src/styles/Global.scss").replace(/\\/g, "/");
module.exports.transform = function ({ src, filename, options }) {
if (filename.endsWith(".scss") || filename.endsWith(".sass")) {
var opts = Object.assign(options, {
sassOptions: {
functions: {
"rem($px)": px => {
px.setValue(px.getValue() / 16);
px.setUnit("rem");
return px;
}
}
}
});
src = `@import "${theme}"; \n\n ` + src;
return sassTransformer.transform({ src, filename, options: opts });
} else {
return upstreamTransformer.transform({ src, filename, options });
}
};
```
This file will transform all the SCSS files you import in your project. There is one nice bonus: because the transformer prepends an import of `src/styles/Global.scss` to every file, any Sass variables you declare in `Global.scss` can be used anywhere in your project without importing them.
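For reference, the `App.scss` example later in this post uses `$bg` and `$light0`, so your `Global.scss` must define them; a minimal sketch (the color values here are arbitrary placeholders):

```scss
// src/styles/Global.scss
$bg: #1e1e2e;
$light0: #f5f5f5;
```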
Now you have to run this command to create `metro.config.js` file:
```
npx expo customize metro.config.js
```
This command will create a file called `metro.config.js`; after that, update the file as follows:
```js
// metro.config.js
const { getDefaultConfig } = require('expo/metro-config');
const config = getDefaultConfig(__dirname);
const { resolver: { sourceExts } } = config;
config.transformer.babelTransformerPath = require.resolve("./transformer.js");
config.resolver.sourceExts = [...sourceExts, "scss", "sass"];
module.exports = config;
```
OK, great! You are now done with the configuration. Let's test it!
> Create a file called `index.js` at the root of the project with this content:
```js
//index.js
import { registerRootComponent } from "expo";
import App from "./src/App";
registerRootComponent(App)
```
> Update package.json as follows:
```json
"main": "index.js",
```
> Create a directory called `src` and move `App.tsx` into it.
> Create `App.scss` inside the `src` directory:
```scss
/*App.scss*/
.container {
flex: 1;
justify-content: center;
align-items: center;
background-color: $bg;
padding: 10px;
}
.text {
color: $light0;
font-size: rem(24px);
text-align: center;
}
```
> Update `App.tsx` as follows:
```tsx
// App.tsx
import { StatusBar } from 'expo-status-bar';
import { Text, View } from 'react-native';
import styles from './App.scss';
export default function App() {
return (
<View style={styles.container}>
<Text style={styles.text}>This is an app using SCSS to style the components</Text>
<StatusBar style="auto" />
</View>
);
}
```
> Now run the app with `npm start`, you will get output like this

See this repo for a better understanding of this configuration:
[expo-sdk-48-sass-transformer-ts](https://github.com/gihanrangana/expo-sdk-48-sass-transformer-ts)
## Conclusion
Using Sass/SCSS with Expo SDK v48 and TypeScript can enhance your mobile app development workflow by allowing you to write more modular and maintainable stylesheets. With just a few simple steps, you can set up Babel and Metro to handle Sass/SCSS files and include them in your TypeScript project. From there, you can write Sass/SCSS code and import it into your TypeScript files. Overall, incorporating Sass/SCSS into your Expo project makes your styling more efficient, organized and scalable, and it also lets you share the same stylesheets between a mobile app and a web app.
| gihanrangana |
1,393,213 | Auto-updating hosts file using cronjob | In this post I will show you how you can easily set your hosts file in "/etc/hosts" to automatically... | 0 | 2023-03-08T15:46:31 | https://dev.to/spignelon/auto-updating-hosts-file-using-cronjob-2fdc | cron, cronjob, linux, hosts | In this post I will show you how you can easily set your hosts file in "/etc/hosts" to automatically update with the help of cron.
<!--more-->
## What is cron?
*cron is the time-based job scheduler in Unix-like computer operating systems. cron enables users to schedule jobs (commands or shell scripts) to run periodically at certain times, dates or intervals. It is commonly used to automate system maintenance or administration.*
## Installing Cron on Your System
You can use the package manager of the distro of your choice to install cron.
For Ubuntu/Debian based distros:
```sudo apt install cron```
For Arch Linux:
```sudo pacman -S cronie```
For Fedora:
```sudo dnf install cronie```
## Enabling the cron daemon to run on boot
For Arch Linux and Fedora:
```sudo systemctl enable cronie```
For Ubuntu/Debian based distros:
```sudo systemctl enable cron```
## Crontab format
The basic format for a crontab is:
```minute hour day_of_month month day_of_week command```
* minute: values can be from 0 to 59.
* hour: values can be from 0 to 23.
* day_of_month: values can be from 1 to 31.
* month: values can be from 1 to 12.
* day_of_week: values can be from 0 to 6, with 0 denoting Sunday.
Now if we want to run a command every Monday at 5:00pm we use:
```0 17 * * 1 command``` (17 is 5 PM in 24-hour clock format).
You can use [cron.help](https://cron.help/) for more help to understand the syntax.
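To make the field semantics concrete, here is a deliberately minimal sketch of how a cron expression is matched against a point in time. It only understands plain numbers and `*` (real cron also supports ranges, lists and steps), and `cronMatches` is just a name chosen for this example:

```typescript
// Minimal sketch of cron-field matching: supports only plain numbers and "*".
// A real cron implementation also handles ranges (1-5), lists (1,3) and steps (*/2).
function cronMatches(expr: string, date: Date): boolean {
  const [min, hour, dom, month, dow] = expr.trim().split(/\s+/);
  const fields: Array<[string, number]> = [
    [min, date.getMinutes()],
    [hour, date.getHours()],
    [dom, date.getDate()],
    [month, date.getMonth() + 1], // cron months are 1-12
    [dow, date.getDay()],         // 0 = Sunday, 1 = Monday, ...
  ];
  return fields.every(([field, value]) => field === "*" || Number(field) === value);
}

console.log(cronMatches("0 17 * * 1", new Date(2023, 2, 6, 17, 0))); // true (Monday 17:00)
console.log(cronMatches("0 17 * * 1", new Date(2023, 2, 7, 17, 0))); // false (Tuesday)
```

Monday, March 6, 2023 at 17:00 matches `0 17 * * 1`, while the same time on Tuesday does not.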
## Setting up your cronjob using crontab
We will use sudo for this, since editing the "/etc/hosts" file requires root privileges.
```sudo crontab -e```
I personally use [StevenBlack](https://github.com/StevenBlack/hosts)'s hosts file to block advertisements and malicious domains so I would use:
```0 17 * * 1 curl https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts > /etc/hosts```
The above command will fetch the updated hosts file every Monday at 5:00pm using curl from the above URL and overwrite the current "/etc/hosts" file.
After adding the above command, save and exit crontab editor.
---
This blog was originally written on June 18, 2022.
URL: https://paper.wf/spignelon/auto-updating-hosts-file-using-cronjob | spignelon |
1,393,240 | ¿Qué son los Server-Sent Events (SSE) y cómo funcionan? | En una aplicación web tradicional, la actualización de información en tiempo real puede ser un... | 0 | 2023-03-08T16:33:51 | https://dev.to/andersonsinaluisa/que-son-los-server-sent-events-sse-y-como-funcionan-fk4 |

In a traditional web application, updating information in real time can be a problem, since it requires constant communication between the client and the server. In these cases, techniques such as polling or long-polling are typically used, in which the client sends requests to the server at regular intervals to fetch updated information.
However, these techniques can be inefficient in situations that require a constant flow of updates, since they can generate excessive network traffic and increase resource consumption on the server.
## How can these problems be solved?
To solve these problems, more efficient alternatives such as WebSockets and Server-Sent Events (SSE) have been developed, which allow information to be updated constantly in real time without making unnecessary requests to the server (WebSockets additionally allow bidirectional communication between client and server).
Server-Sent Events (SSE) is a web technology that allows servers to push data to clients asynchronously, in real time and without additional requests to the server. This is achieved through a long-lived HTTP connection between the client and the server, over which the server sends events to the client whenever the data is updated.
The SSE connection is established over the HTTP protocol, using a GET request with a specific header indicating that the client expects a response in the event-stream format. The server sends the events to the client as a continuous stream, separated by blank lines and optionally preceded by a unique identifier and other metadata fields.
The client receives the events in real time and can process them to update the information on the page without refreshing it. In addition, SSE provides error handling and the ability to retry the connection in case of failure.
A simple example of how an event stream looks in the browser:
```
var source = new EventSource('/stream');
source.onmessage = function(event) {
console.log(event.data);
};
```
In this example, an EventSource instance is created to connect to the server and receive the events. The server sends the events through the "/stream" route, and the client receives them via the "onmessage" handler, which logs the event data to the browser console.
## Advantages and disadvantages of Server-Sent Events
Server-Sent Events (SSE), WebSockets and polling are all technologies that enable real-time updates in web applications, but they differ in their advantages and disadvantages.
The main advantages of SSE are:
- **HTTP compatibility**: SSE uses the HTTP protocol, which means there is no need to open a new communication channel, as with WebSockets, and it can be used with standard web servers.
- **Easy implementation**: Implementing SSE is relatively simple and does not require advanced programming knowledge.
- **Limited resource consumption**: SSE is less demanding on server and client resources than other technologies, such as WebSockets or polling, which can be beneficial in applications with large traffic volumes.
- **Connection retry**: In case of connection failures, SSE allows the connection to be resumed automatically, which improves the reliability of the technology.
However, SSE also has some disadvantages, among them:
- **No bidirectional communication**: SSE only allows one-way communication, from the server to the client, which can be a problem in applications that require bidirectional communication, such as online games.
- **Limits on the amount and size of data**: SSE can be limited in the amount and size of the data that can be sent in each event, which may restrict its usefulness in applications with large volumes of information.
As for when SSE is recommended: it can be a good choice for applications that need real-time information updates but do not require constant bidirectional communication and do not have large volumes of information to send, for example news applications, social media updates or real-time event tracking.
## Implementing Server-Sent Events in a web application
Here is an example implementation of Server-Sent Events (SSE) using PHP and JavaScript:
On the server side (PHP):
First, the SSE HTTP response must be enabled on the server by configuring the appropriate headers. To do so, send a response with the following headers:
```
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use Symfony\Component\HttpFoundation\StreamedResponse;
class StreamController extends Controller
{
    public function stream()
    {
        $response = new StreamedResponse();
        $response->headers->set('Content-Type', 'text/event-stream');
        $response->headers->set('Cache-Control', 'no-cache');
        $response->headers->set('Connection', 'keep-alive');
        $response->setCallback(function () {
            while (true) {
                // Add here the logic that fetches the data you want to send
                $data = [
                    'mensaje' => 'This is an example message',
                    'fecha' => date('Y-m-d H:i:s')
                ];
                // Create the Server-Sent Event and send the data
                echo "event: nombre_del_evento\n";
                echo "data: " . json_encode($data) . "\n\n";
                ob_flush();
                flush();
                // Wait 10 seconds before sending the next event
                sleep(10);
            }
        });
        return $response;
    }
}
```
Next, the events must be sent from the server. This is done by writing them into the body of the HTTP response:
```
echo "event: nombre_del_evento\n";
echo "data: " . $datos_del_evento . "\n\n";
```
Here, "nombre_del_evento" is an arbitrary name given to the event, and "$datos_del_evento" is the data sent with the event.
A blank-line terminator ("\n\n") must be sent after each event so that the browser can process it correctly.
On the client side (JavaScript):
An EventSource object must be created in JavaScript to connect to the server and receive the events.
```
const source = new EventSource('url_del_servidor');
```
Next, event handlers must be defined to process the events received from the server.
```
source.addEventListener('nombre_del_evento', function(evento) {
  console.log(evento.data); // process the data of the received event
});
```
It is possible to define handlers for different events from the server.
```
source.addEventListener('otro_nombre_de_evento', function(evento) {
  console.log(evento.data); // process the data of the received event
});
```
In summary, sending events from the server through SSE consists of configuring the headers that enable the SSE HTTP response, writing the events into the response body and sending a blank line after each event. On the client side, an EventSource object is created to connect to the server, and event handlers are defined to process the events received from it.
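The wire format described above (an optional `event:` line, a `data:` line, and a blank line terminating the frame) can be captured in a small serializer. This is only a sketch; `formatSseEvent` is not part of any SSE API:

```typescript
// Serializes one Server-Sent Event frame: an optional "event:" line,
// a "data:" line, and the mandatory blank line that terminates the frame.
function formatSseEvent(data: string, eventName?: string): string {
  const lines: string[] = [];
  if (eventName) lines.push(`event: ${eventName}`);
  lines.push(`data: ${data}`);
  return lines.join("\n") + "\n\n";
}

console.log(formatSseEvent('{"msg":"hello"}', "update"));
// event: update
// data: {"msg":"hello"}
```

Any SSE server, whatever the language, ultimately writes strings of exactly this shape into the response body.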
## Use cases for Server-Sent Events
Server-Sent Events (SSE) is a technology that allows events to be sent from the server to the client asynchronously and in real time. Some common use cases of SSE:
- **Real-time notifications**: Real-time notifications are a common use case for SSE. For example, a messaging application can use SSE to send notifications to users in real time when they receive a new message.
- **Social media feed updates**: Social networks can use SSE to update users' feeds in real time. For example, when a user follows another person, SSE can be used to automatically update that user's feed with the most recent posts of the person they follow.
- **Real-time event monitoring**: Companies can use SSE to monitor events in real time, such as server performance or user behavior in an application, and receive alerts in real time when an important event occurs.
- **Real-time information updates**: Applications can use SSE to update information in the browser in real time without reloading the page. For example, a price-tracking application can use SSE to update product prices in real time.
SSE is a better fit in these cases than alternatives such as polling or WebSockets for the following reasons:
- SSE uses a long-lived HTTP connection, which reduces network overhead compared to polling, where the client must send frequent requests to the server.
- SSE is a simpler solution than WebSockets and does not require implementing a custom protocol.
- SSE is supported by most modern browsers, which makes it a more accessible option than WebSockets.
In general, SSE is a suitable solution for use cases that require real-time updates but do not need complex bidirectional communication between the server and the client.
It is worth highlighting that SSE is a simpler and more accessible option than WebSockets, which makes it ideal for applications that do not require complex bidirectional communication. In addition, SSE offers a more efficient solution than polling, since it reduces network overhead by keeping a single long-lived HTTP connection. Overall, SSE is a valuable tool for keeping users informed in real time about important events in an application.
References
- "Server-Sent Events" on MDN Web Docs: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events
- "HTML5 Server-Sent Events" on W3Schools: https://www.w3schools.com/html/html5_serversentevents.asp
- "How to Use Server-Sent Events (SSE) in Your Node.js App" on SitePoint: https://www.sitepoint.com/server-sent-events-nodejs/
- "Server-Sent Events (SSE) with PHP" on Tania Rascia's blog: https://www.taniarascia.com/server-sent-events-php/
- "Server-Sent Events (SSE) with Express" on Tania Rascia's blog: https://www.taniarascia.com/server-sent-events-with-node-js/
- "Real-time Notifications with Server-Sent Events" on Scotch.io: https://scotch.io/tutorials/real-time-notifications-with-server-sent-events#toc-part-1-what-are-server-sent-events-sse- | andersonsinaluisa |
1,393,306 | CSS advanced | I will start my portfolio. I'm thinking on creating a personal page for my cats as a project. ,-. ... | 0 | 2023-03-08T17:21:52 | https://dev.to/neto1895/css-advanced-37b9 | I will start my portfolio. I'm thinking on creating a personal page for my cats as a project.
```
               ,-.       _,---._ __  / \
              /  )    .-'       `./ /   \
             (  (   ,'            `/    /|
              \  `-"             \'\   / |
               `.              ,  \ \ /  |
                /`.          ,'-`----Y   |
               (            ;        |   '
               |  ,-.    ,-'         |  /
               |  | (   |        hjw | /
               )  |  \  `.___________|/
               `--'   `--'
```
| neto1895 |
1,393,347 | Indifikators | C++ indificator: indifikatorlar bu noyob isimlar yani "ism son yosh kun hafta" ya'ni "name num ege... | 0 | 2023-03-08T18:31:05 | https://dev.to/nuriddin152/indifikators-3enj | cpp, programming, beginners | C++ indificator:
Identifiers are unique names, for example "ism son yosh kun hafta",
that is, "name num age day week".

| nuriddin152 |
1,393,492 | Neptune Intro | Hello Everyone, This is my first post 😊 I'm excited to become a part of this community. I am in IT... | 0 | 2023-03-08T19:55:41 | https://dev.to/neptune/introduction-4n07 | neptune | Hello Everyone,
This is my first post 😊 I'm excited to become a part of this community.
I have been in the IT industry for 3+ years as a QA Engineer, mostly working on automation testing with Selenium and REST Assured.
-Neptune | neptune |
1,393,584 | Beyond Basics: Building Scalable TypeScript Applications with Chain of Responsibility Design Pattern | Chain of Responsibility (CoR) is a behavioral design pattern that passes a request between a chain of... | 0 | 2023-03-08T21:49:24 | https://samuelkollat.hashnode.dev/beyond-basics-building-scalable-typescript-applications-with-chain-of-responsibility-design-pattern | typescript, designpatterns, architecture, programming | Chain of Responsibility (CoR) is a behavioral design pattern that passes a request between a chain of objects. In this pattern, each object in the chain can either handle the request or pass it on to the next object in the chain.
Today we will explore how to implement this design pattern in TypeScript.
Using CoR in TypeScript can simplify complex request processing, especially when the chain of objects needs to be changed or extended dynamically. The main idea behind this pattern is to decouple the sender of a request from its receiver by allowing more than one object to handle the request.
## Implementing Chain of Responsibility
Let's start with a simple example to understand how the CoR pattern works in TypeScript. Let's say we have a series of objects that need to process a request for a certain document type. In this case, we can create a chain of objects that will handle the request, as shown below:
```typescript
interface DocumentHandler {
setNext(handler: DocumentHandler): DocumentHandler;
handle(documentType: string): string;
}
```
Now we will create new classes named `PDFHandler` and `TextHandler` that implement `DocumentHandler` interface:
```typescript
class PDFHandler implements DocumentHandler {
private nextHandler?: DocumentHandler;
setNext(handler: DocumentHandler): DocumentHandler {
this.nextHandler = handler;
return handler;
}
handle(documentType: string): string {
if (documentType === 'pdf') {
return `Handling PDF Document`;
} else if (this.nextHandler) {
return this.nextHandler.handle(documentType);
} else {
return `Cannot Handle Document Type: ${documentType}`;
}
}
}
class TextHandler implements DocumentHandler {
private nextHandler?: DocumentHandler;
setNext(handler: DocumentHandler): DocumentHandler {
this.nextHandler = handler;
return handler;
}
handle(documentType: string): string {
if (documentType === 'text') {
return `Handling Text Document`;
} else if (this.nextHandler) {
return this.nextHandler.handle(documentType);
} else {
return `Cannot Handle Document Type: ${documentType}`;
}
}
}
```
Each class can handle a specific type of document. If a class cannot handle the request, it passes it to the next object in the chain. We can create a chain of objects by calling the `setNext()` method on each object and passing in the next object in the chain:
```typescript
const pdfHandler = new PDFHandler();
const textHandler = new TextHandler();
pdfHandler.setNext(textHandler);
console.log(pdfHandler.handle('pdf')); // Output: Handling PDF Document
console.log(pdfHandler.handle('text')); // Output: Handling Text Document
console.log(pdfHandler.handle('excel')); // Output: Cannot Handle Document Type: excel
```
You can find the complete example in this [CodeSandbox](https://codesandbox.io/p/sandbox/beyoid-basics-typescript-cor-samuel-kollat-r4nf1c?file=%2Fsrc%2Findex.ts).
We can see from the output that the request for each document type is handled by the appropriate object in the chain. If the request cannot be handled by any of the objects in the chain, the last object in the chain returns a message indicating that it cannot handle the request.
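One thing worth noting about the example above: both handlers duplicate the `setNext()` and forwarding logic. A common refinement, sketched below (it is not part of the original example), is to pull that plumbing into an abstract base class so that concrete handlers only decide whether they can process the request:

```typescript
interface DocumentHandler {
  setNext(handler: DocumentHandler): DocumentHandler;
  handle(documentType: string): string;
}

// Abstract base class that owns the chaining logic; concrete handlers
// only implement canHandle() for their own document type.
abstract class BaseDocumentHandler implements DocumentHandler {
  private nextHandler?: DocumentHandler;

  setNext(handler: DocumentHandler): DocumentHandler {
    this.nextHandler = handler;
    return handler;
  }

  handle(documentType: string): string {
    if (this.canHandle(documentType)) {
      return `Handling ${documentType} Document`;
    }
    if (this.nextHandler) {
      return this.nextHandler.handle(documentType);
    }
    return `Cannot Handle Document Type: ${documentType}`;
  }

  protected abstract canHandle(documentType: string): boolean;
}

class PdfHandler extends BaseDocumentHandler {
  protected canHandle(documentType: string): boolean {
    return documentType === "pdf";
  }
}

class TextDocHandler extends BaseDocumentHandler {
  protected canHandle(documentType: string): boolean {
    return documentType === "text";
  }
}

const chain = new PdfHandler();
chain.setNext(new TextDocHandler());
console.log(chain.handle("pdf"));   // Handling pdf Document
console.log(chain.handle("excel")); // Cannot Handle Document Type: excel
```

Each new document type now only needs a `canHandle()` implementation; the traversal logic lives in one place.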
## Beyond Basics
Now let's look at a real-world example of using the CoR pattern in a web application that processes payments. The payment processing system can have different payment methods, such as credit cards, PayPal, and bank transfers. Each payment method can have different authentication and validation requirements.
To implement this system using the CoR pattern, we can create a chain of objects representing payment methods. Let's start again with the interface and some types:
```typescript
export type PaymentMethod = "creditcard" | "paypal" | "banktransfer";
export type Payment = {
method: PaymentMethod;
amount: number;
cardNumber: string;
cvv: string;
expirationDate: string;
};
interface PaymentHandler {
setNext(handler: PaymentHandler): PaymentHandler;
handlePayment(payment: Payment): Promise<boolean>;
}
```
Each object that implements the `PaymentHandler` interface can authenticate and validate the corresponding payment method. If an object cannot handle the request, it passes the request to the next object in the chain:
```typescript
class CreditCardHandler implements PaymentHandler {
private nextHandler?: PaymentHandler;
setNext(handler: PaymentHandler): PaymentHandler {
this.nextHandler = handler;
return handler;
}
async handlePayment(payment: Payment): Promise<boolean> {
if (payment.method === 'creditcard') {
console.log('Processing Credit Card Payment...');
return true;
} else if (this.nextHandler) {
return this.nextHandler.handlePayment(payment);
} else {
return false;
}
}
}
class PayPalHandler implements PaymentHandler {
private nextHandler?: PaymentHandler;
setNext(handler: PaymentHandler): PaymentHandler {
this.nextHandler = handler;
return handler;
}
async handlePayment(payment: Payment): Promise<boolean> {
if (payment.method === 'paypal') {
console.log('Processing PayPal Payment...');
return true;
} else if (this.nextHandler) {
return this.nextHandler.handlePayment(payment);
} else {
return false;
}
}
}
class BankTransferHandler implements PaymentHandler {
private nextHandler?: PaymentHandler;
setNext(handler: PaymentHandler): PaymentHandler {
this.nextHandler = handler;
return handler;
}
async handlePayment(payment: Payment): Promise<boolean> {
if (payment.method === 'banktransfer') {
console.log('Processing Bank Transfer Payment...');
return true;
} else if (this.nextHandler) {
return this.nextHandler.handlePayment(payment);
} else {
return false;
}
}
}
```
In this example, three classes implement the `PaymentHandler` interface. Each class represents a payment method and handles the authentication and validation for that payment method. We create a chain of objects by calling the `setNext()` method on each object and passing in the next object in the chain:
```typescript
const creditCardHandler = new CreditCardHandler();
const payPalHandler = new PayPalHandler();
const bankTransferHandler = new BankTransferHandler();
creditCardHandler.setNext(payPalHandler).setNext(bankTransferHandler);
const payment: Payment = {
method: 'creditcard',
amount: 100,
cardNumber: '1234567890',
cvv: '111',
expirationDate: '01/24'
};
const isPaymentSuccessful = await creditCardHandler.handlePayment(payment);
console.log('Payment Successful:', isPaymentSuccessful);
```
You can find the complete example in this [CodeSandbox](https://codesandbox.io/p/sandbox/beyond-basics-typescript-cor-samuel-kollat-02-kqlocv?file=%2Fsrc%2Findex.ts).
We can see from the output that the payment request is handled by the appropriate object in the chain based on the payment method. If the payment method cannot be handled by any of the objects in the chain, the last object in the chain returns `false`.
### Middleware in web servers
One great example of the CoR pattern in practice is the handling of HTTP requests through middleware in web servers. Let's take the example of Express.js, a very popular Node.js framework. In Express.js, [middleware functions](https://expressjs.com/en/guide/using-middleware.html) have access to the request object, the response object, and the next middleware function in the application's request-response cycle. Middleware functions can perform tasks such as parsing the request body, handling authentication, performing validation, and many more.
When a request is made to an application, the middleware functions are executed in the order they are defined. Each middleware function can perform operations on the request and/or response objects and pass control to the next middleware function in the chain using the `next()` function. If a middleware function doesn't call `next()`, the request-response cycle is terminated, and no further middleware functions are executed.
Middleware functions can be added to or removed from the chain at any point, allowing developers to modify the behavior of their applications easily.
Let's look at an example:
```typescript
const express = require('express');
const app = express();
// 1. Middleware for parsing a request
app.use((req, res, next) => {
console.log('Parsing...');
next();
});
// 2. Middleware for authenticating a request
app.use((req, res, next) => {
console.log('Authenticating...');
next();
});
// 3. Route handler middleware for sending a response
app.get('/', (req, res) => {
res.send('Sending response...');
});
app.listen(3000, () => {
console.log('Server listening...');
});
```
In this example, two middleware functions are defined using the `app.use()` and one using `app.get()` function. The first middleware function logs a message to the console and calls `next()` to pass control to the next middleware function. The second middleware function does the same. Finally, a route handler sends a response back to the client. When a request is made to the server, the middleware functions are executed in the order they are defined.
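Under the hood this is the Chain of Responsibility again: each middleware either ends the request-response cycle or hands control to the next one. A stripped-down sketch of such a dispatcher (not Express's actual implementation; the names here are illustrative):

```typescript
type Req = { url: string; log: string[] };
type Middleware = (req: Req, next: () => void) => void;

// Minimal Express-style dispatcher: each middleware receives the request and
// a next() callback; not calling next() short-circuits the rest of the chain.
function dispatch(middlewares: Middleware[], req: Req): void {
  let index = 0;
  const next = (): void => {
    const mw = middlewares[index++];
    if (mw) mw(req, next);
  };
  next();
}

const request: Req = { url: "/", log: [] };
dispatch(
  [
    (r, next) => { r.log.push("parsing"); next(); },
    (r, next) => { r.log.push("auth"); next(); },
    (r) => { r.log.push("response"); },      // no next(): chain ends here
    (r) => { r.log.push("never reached"); },
  ],
  request
);
console.log(request.log); // ["parsing", "auth", "response"]
```

The fourth function never runs because the third one does not call `next()`, which is exactly the short-circuit behavior described above.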
### Benefits
Benefits of using the CoR pattern:
1. **Encapsulation**: The CoR pattern promotes encapsulation by separating the code that creates a request from the code that handles the request. This makes the code more modular and easier to maintain.
2. **Flexibility**: The pattern allows you to easily add, remove or reorder the chain of handlers without affecting other parts of the code. This makes it easier to modify the behavior of an application or add new features.
3. **Extensibility**: The pattern makes adding new handlers to the chain easy, allowing you to extend the functionality of an application without modifying existing code.
4. **Decoupling**: The pattern reduces the dependencies between classes, which makes the code more loosely coupled and easier to test.
### Drawbacks
Here are some aspects of the CoR pattern that should be taken into account when evaluating if it fits your application needs:
1. **Performance**: The CoR pattern can introduce performance overhead if the chain of handlers is long or if the handlers are complex.
2. **Difficulty in debugging**: The pattern can make it harder to trace the flow of a request through the application, especially if the chain of handlers is long.
3. **Complexity**: The pattern can make the code more complex and harder to understand, especially if there are many different types of requests and handlers.
The Chain of Responsibility pattern is a useful pattern for handling requests in a flexible and extensible way. It allows multiple objects to handle requests by chaining them together, and it provides a way to add or remove objects from the chain without affecting the other objects. However, it is important to be aware of both the benefits and drawbacks that may arise when using this pattern.
Photo by <a href="https://unsplash.com/@flyd2069?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">FLY:D</a> on <a href="https://unsplash.com/photos/ZNOxwCEj5mw?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a>
| samuelkollat |