id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,909,402 | Urgent Appeal: Help Nabil's Family Rebuild Their Lives in Gaza | Hello, my name is Nabil, a 33-year-old computer engineer from Gaza. My family and I have been... | 0 | 2024-07-02T21:25:09 | https://dev.to/nabil_zaqout_53c29fc15be6/urgent-appeal-help-nabils-family-rebuild-their-lives-in-gaza-dma | gaza, python, webdev, programming | Hello, my name is Nabil, a 33-year-old computer engineer from Gaza. My family and I have been deeply affected by the recent war in Gaza. We are struggling to rebuild our lives and urgently need your support. 🏠💔 ▶ https://gofund.me/73675284
Any contribution, no matter how small, can make a huge difference. Please consider donating and sharing our story with others. 💖
Thank you from the bottom of my heart for your kindness and generosity. 🙏✨
Donate Link ▶ https://gofund.me/b7d52f2a
| nabil_zaqout_53c29fc15be6 |
1,909,250 | Automating User Management on Linux using Bash Script | Efficient user management is important for maintaining security and productivity. Manual management... | 0 | 2024-07-02T21:10:30 | https://dev.to/danielfavour/automating-user-management-on-linux-using-bash-script-3o9l |
Efficient user management is important for maintaining security and productivity. Manual management of users and groups can be time-consuming, especially in larger organizations where administrators need to handle multiple accounts and permissions. Automating these tasks not only saves time but also reduces the risk of human error.
This guide discusses a practical approach to automating user management on a Linux machine using a Bash script. You will learn how to create users, assign groups, generate secure passwords for users, and log actions using a single bash script.
## Overview of the Bash Script Functionality
Below is an overview of the tasks the Bash script will automate for efficient user management:
- **Read User Data:** The script will read a text file containing employee usernames and their corresponding group names.
- **Create Users and Groups:** It will create users and groups as specified in the text file.
- **Set Up Home Directories:** The script will set up home directories for each user with appropriate permissions and ownership.
- **Generate Secure Passwords:** It will generate random, secure passwords for the users.
- **Log Actions:** All actions performed by the script will be logged to the `/var/log/user_management.log` file.
- **Store Passwords Securely:** Generated passwords will be securely stored in the `/var/secure/user_passwords.txt` file.
- **Error Handling:** The script will include error handling to manage scenarios such as existing users and groups.
### Prerequisites
To get started with this tutorial, you must have the following:
- A Linux machine with administrative privileges.
- Basic knowledge of Linux commands.
- A text editor of choice (vim, nano, etc)
## Setting Up the User Data File
The first step is to create a text file containing the username for each employee and the groups to be assigned to each of them.
In your terminal, create a `user_passwords.txt` file:
```bash
touch user_passwords.txt
```
Paste the below content into the file
```
john;qa
jane;dev,manager
robert;marketing
emily;design,research
michael;devops
olivia;design,research
william;support
sophia;content,marketing
daniel;devops,sre
ava;dev,qa
```
The above is a list of usernames for the employees and their respective group(s).
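As an alternative to `touch` plus manual pasting, the same file can be written in one step with a quoted heredoc. The sketch below creates a shortened two-user version of the file, just to illustrate the technique:

```bash
# Sketch: create the input file in one step with a quoted heredoc
# (only two of the sample users are shown here).
cat > user_passwords.txt <<'EOF'
john;qa
jane;dev,manager
EOF
grep -c ';' user_passwords.txt   # prints 2: one line per user
```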
## Writing the Bash script
To start creating the Bash script, follow these steps in your terminal:
### Create the Script file
Open your terminal and run the following command to create an empty file named `create_users.sh`:
```bash
touch create_users.sh
```
Use your preferred text editor to open the `create_users.sh` file and begin writing the script:
```bash
nano create_users.sh
```
(If you are using a different editor, replace nano with its command)
### Add the Shebang Line
At the top of the `create_users.sh` file, include the [shebang](https://linuxhandbook.com/shebang/).
```
#!/bin/bash
```
This line specifies the interpreter that will be used to execute the script. In this case,` #!/bin/bash` indicates that the script should be run using the Bash shell.
### Check Root Privileges
Creating users and groups typically requires administrative privileges because it involves modifying system files and configurations. After the shebang line, add the below configuration to ensure that the Bash script is executed with root privileges:
```bash
if [[ $(id -u) -ne 0 ]]; then
echo "This script must be run as root."
exit 1
fi
```
This checks if the script is running with root privileges. If the current user ID `($(id -u))` does not equal `(-ne) 0`, then the condition is true (indicating the script is not running as root), and the code within the `then ... fi` block will execute accordingly. In this case, it will output `"This script must be run as root."` to the terminal, and then the script will exit with a status of `1`. This exit status is a signal to the operating system and any other processes that the script encountered an error and did not complete successfully.
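The effect of `exit 1` can be observed from the caller's side, where the special variable `$?` holds the last command's exit status. A quick sketch:

```bash
# Sketch: a subshell that fails the root check and exits with status 1;
# the caller reads that status from $?.
bash -c 'echo "This script must be run as root."; exit 1' || echo "exit status: $?"
```

A zero status means success; any non-zero value signals failure to the calling process.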
### Check that the Input File is Passed as an Argument
In Bash scripting, "arguments" are the values or parameters provided to a script or command when it is invoked from the command line. For example, if you have a Bash script named `process_file.sh` and you want to read an input file, `data.txt` provided as an argument, in the terminal, you will execute it with:
```
./process_file.sh data.txt
```
Inside your Bash script (`process_file.sh`), you can access this argument using special variables like `$1`, `$2`, etc. `$1` specifically refers to the first argument passed (`data.txt` in this case). Once the script captures the argument (`$1`), it can use it in various ways. For instance, it might open and read the file specified (`data.txt`), process its contents, or perform any other operation the script is designed to do.
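A quick way to see these variables in action, without creating a separate script, is `set --`, which replaces the current shell's positional parameters. The sketch below simulates invoking a script with `data.txt` as its only argument:

```bash
# Sketch: `set --` simulates the arguments a script would receive if
# invoked as ./process_file.sh data.txt
set -- data.txt
echo "first argument: $1"   # prints: first argument: data.txt
echo "argument count: $#"   # prints: argument count: 1
```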
In this case, the `create_users.sh` script needs to read the user data file, `user_passwords.txt` containing the usernames and groups of the employees so it can perform certain actions.
Paste the below configuration in your script:
```bash
if [[ $# -ne 1 ]]; then
echo "Usage: $0 <input-file>"
exit 1
fi
```
- **`if [[ ... ]]; then ... fi`:** This is a conditional statement in Bash. The code inside the `then ... fi` block will execute only if the condition within the double square brackets `[[ ... ]]` evaluates to true.
- **`if [[ $# -ne 1 ]]; then`:** This checks if the number of arguments ($#) passed to the script is not equal (-ne) to 1.
- **`echo "Usage: $0 <input-file>"`:** If the condition is true (meaning the wrong number of arguments were provided), this line prints a helpful message to the terminal explaining how the script should be used.
- **`Usage`:** A standard keyword indicating the start of usage instructions.
- **`$0`:** This is a special variable that holds the name of the script itself. It is automatically replaced with the actual name of the script when it runs (for example, "create_users.sh").
- **`<input-file>`:** This placeholder communicates to the user that they need to provide the name of the input file (for example, "user_passwords.txt") as the argument when running the script.
- **`exit 1`:** This terminates the script with an exit status of 1. An exit status of 1 signals that the script encountered an error and did not complete successfully.
### Assign Variables
The next step is to assign variables for essential paths and files.
```bash
INPUT_FILE=$1
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"
```
- **`INPUT_FILE=$1`:** This line assigns the first command-line argument (the `user_passwords.txt` file) to the variable `INPUT_FILE`. This makes it easier to reference the filename throughout the script.
- **`LOG_FILE="/var/log/user_management.log"`:** This sets the variable LOG_FILE to the path where the script will write its log messages. This log will help track the actions performed by the script.
- **`PASSWORD_FILE="/var/secure/user_passwords.txt"`:** This sets the variable `PASSWORD_FILE` to the path where the generated passwords for the users will be stored securely.
### Log Messages
Logging messages is a common practice in scripting and software development as it records what happens at each step in the script.
Add the below configuration to log messages to the file referenced by `$LOG_FILE`:
```bash
# Function to log messages
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a $LOG_FILE
}
```
- **`log_message()`:** This function is a reusable piece of code designed to create formatted log entries and write them to a specified log file.
- **`echo "$(date '+%Y-%m-%d %H:%M:%S') - $1"`:** `date '+%Y-%m-%d %H:%M:%S'` generates the current date and time in the format "YYYY-MM-DD HH:MM:SS", and `$1` is the message passed to the function as its first argument. The command combines the timestamp and the message, separated by a hyphen (-), creating the formatted log entry.
- **`tee -a $LOG_FILE`:** The `tee` command reads from standard input (in this case, the output of the echo command) and writes it to both standard output (the terminal) and to one or more files. The `-a` option tells `tee` to append to the file ($LOG_FILE) instead of overwriting it. This ensures that previous log entries are preserved.
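To try the function without touching `/var/log`, you can point `LOG_FILE` at a temporary file (a sketch; `mktemp` stands in for the real log path):

```bash
# Sketch: exercise log_message against a temporary file so no root
# privileges are needed.
LOG_FILE=$(mktemp)
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
}
log_message "Created user: jane" >/dev/null   # tee still appends to the file
log_message "Set password for user: jane" >/dev/null
grep -c ' - ' "$LOG_FILE"   # prints 2: both entries were appended
rm -f "$LOG_FILE"
```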
### Ensure the `/var/secure` Directory Exists
Before proceeding with user creation, it's imperative to establish a secure environment for storing the generated passwords. The passwords will be stored in the `/var/secure` directory, so it is necessary to check if this directory exists and configure it with appropriate permissions to limit access to authorized users only.
```bash
if [[ ! -d "/var/secure" ]]; then
mkdir -p /var/secure
chown root:root /var/secure
chmod 700 /var/secure
fi
```
- **`if [[ ! -d "/var/secure" ]]; then`:** This condition checks if the `/var/secure` directory exists. The `-d` flag checks if the path is a directory, and the `!` negates the result, meaning the code within the `then` block will only execute if the directory does not exist.
- **`mkdir -p /var/secure`:** This line creates the `/var/secure` directory if it does not exist. The `-p` option ensures that any necessary parent directories are also created.
- **`chown root:root /var/secure`:** This changes the owner of the `/var/secure` directory to the root user and the root group. This is a security best practice, as sensitive data like passwords should only be accessible to the system administrator.
- **`chmod 700 /var/secure`:** This restricts the directory so that only the owner (root) can read, write, and enter it; users in the root group and all other users get no access at all. Note that a directory needs the execute bit in order to be entered, which is why `700` is used here rather than `600`.
- **`fi`:** It is used to close an `if` statement and indicates the end of the block of code that should be executed conditionally based on the evaluation of the if statement.
### Generate the User Passwords
User passwords should be unique and secure. `/dev/urandom` ensures your passwords are both random and secure. It is a special file in Unix-like systems that provides a constant stream of high-quality random data, making it difficult to predict or replicate the generated passwords.
Paste the below configuration in the script:
```bash
generate_password() {
tr -dc 'A-Za-z0-9!@#$%^&*()_+=[]{}|;:<>,.?/~-' </dev/urandom | head -c 16
}
```
- **`generate_password() {`:** This defines the start of a function named `generate_password`.
- **`tr -dc`**: This command deletes all characters from the input that are not in the specified set. `tr` stands for "translate" and is used to delete or replace characters; the `-d` option specifies that characters should be deleted, and the `-c` option complements the set of characters. This means that instead of deleting the characters specified in the set, it will delete all characters that are _not_ in the set.
- **`'A-Za-z0-9!@#$%^&*()_+=[]{}|;:<>,.?/~-'`:** This is the set of characters allowed in the password, including uppercase letters (A-Z), lowercase letters (a-z), digits (0-9), and various special characters. The hyphen is placed last so that `tr` treats it as a literal character rather than as part of a range like `A-Z`.
- **`</dev/urandom | `:** `/dev/urandom` is a special file in Unix-like operating systems that provides random data. `<` redirects the contents of `/dev/urandom` as input to the command on the left, and the output is passed through the pipe `|` to the next command.
- **`head -c 16`**: `head` normally outputs the first part of its input; the `-c 16` option limits the output to the first 16 bytes, yielding a 16-character password.
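A quick sanity check of the function (a sketch; the hyphen is kept at the end of the character set so `tr` treats it literally):

```bash
# Sketch: call generate_password and confirm the result is 16 characters.
generate_password() {
tr -dc 'A-Za-z0-9!@#$%^&*()_+=[]{}|;:<>,.?/~-' </dev/urandom | head -c 16
}
pw=$(generate_password)
echo "password length: ${#pw}"   # prints: password length: 16
```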
### Process the Input File
The `user_passwords.txt` file that was passed in can now be used to carry out the user management tasks. The script will read the usernames and groups from the input file, create the users and their personal groups, add them to their respective groups, set up their home directories, and generate and store their passwords. To execute these tasks efficiently, keep them within a while loop so that each line of user data is processed sequentially.
To create the while loop, use:
```bash
# Read the input file line by line
while IFS=';' read -r username groups; do
```
- **`while ... do`:** This starts a loop that continues to read lines from the input until there are no more lines to read.
- **`IFS=';'`:** This sets the internal field separator (IFS) to a semicolon. It tells the read command to use semicolons as the delimiter for splitting input lines into fields.
- **`read -r`:** Reads the input line into the variables `username` and `groups`; the `-r` flag prevents backslashes from being interpreted as escape characters.
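To see how the `;` split behaves on a single sample line (a sketch using one of the entries from the input file):

```bash
# Sketch: IFS=';' splits the line into the username and groups fields.
printf 'jane;dev,manager\n' | while IFS=';' read -r username groups; do
echo "username: $username"   # prints: username: jane
echo "groups: $groups"       # prints: groups: dev,manager
done
```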
In this loop, there will be several iterations that will be carried out:
**Iteration 1: Trim any leading/trailing whitespace**
```bash
# Trim any leading/trailing whitespace
username=$(echo "$username" | tr -d '[:space:]')
groups=$(echo "$groups" | tr -d '[:space:]')
```
- **`username=$(echo "$username" | tr -d '[:space:]')`:** This line removes all whitespace characters from the beginning and end of the username value.
- **`groups=$(echo "$groups" | tr -d '[:space:]')`:** This line does the exact same thing as the first line, but for the groups variable. It removes any leading or trailing whitespace from the list of groups associated with the user.
Next, log the username and associated groups read from the input file, to confirm that the script is reading the `user_passwords.txt` file correctly:
```bash
# Debug: Log the username and groups read from the file
log_message "Read line: username='$username', groups='$groups'"
```
This is useful for troubleshooting.
**Iteration 2: Check if usernames or groups are empty**
```bash
if [[ -z "$username" || -z "$groups" ]]; then
log_message "Error: Username or groups missing in line: $username"
continue
fi
```
- **`[[ -z "$username" || -z "$groups" ]]`:** `[[ ... ]]` is a conditional expression in Bash for testing. `-z "$username"` checks if the variable username is empty (has zero length). `-z "$groups"` checks if the variable groups is empty (has zero length). `||` is the logical OR operator, which means the condition is true if either $username or $groups (or both) are empty.
- **`log_message "Error: Username or groups missing in line: $username"`:** If either `$username` or `$groups` is empty, this command writes an error message to the `$LOG_FILE`.
- **`continue`:** If the condition is true (i.e., either $username or $groups is empty), `continue` skips the rest of the current iteration of the loop. The script then moves on to the next iteration to process the next line from the input file.
**Iteration 3: Check if the user already exists, otherwise create the user's personal group**
```bash
# Check if the user already exists
if id "$username" &>/dev/null; then
log_message "User $username already exists, skipping."
else
# Create the user's personal group
if ! getent group "$username" >/dev/null; then
groupadd "$username"
log_message "Created group: $username"
fi
```
- **`if id "$username" &>/dev/null; then:`** This `if` statement checks if the user `$username` exists by querying the user database. `&>/dev/null` redirects both stdout (standard output) and stderr (standard error) to `/dev/null`, discarding any output. If the user exists, the condition is true (id command succeeds), and the script proceeds inside the if block.
- **`log_message "User $username already exists, skipping."`:** If the user already exists (id command succeeds), this message is logged and the user creation process is skipped for this user.
- **`else`:** If the user does not exist (the id command fails), the script proceeds with user and group creation.
- **`if ! getent group "$username" >/dev/null; then`:** This command checks if a group with the username already exists and proceeds only if it doesn't.
- **`groupadd "$username"`:** Creates a new group with the same name as the username. This group will serve as the user's primary group, providing some basic permissions and ownership settings.
- **`log_message "Created group: $username"`:** Calls the `log_message function` to record the action of creating the group.
The message passed to the function includes the name of the group that was created.
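The exit-status test that drives this branch can be tried on its own. The sketch below assumes a typical Linux system where `root` exists; `no_such_user_xyz` is a made-up name:

```bash
# Sketch: id succeeds (exit 0) for an existing user and fails otherwise.
if id root >/dev/null 2>&1; then
echo "User root already exists, skipping."
fi
if ! id no_such_user_xyz >/dev/null 2>&1; then
echo "User no_such_user_xyz does not exist, creating..."
fi
```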
**Iteration 4: Create the users with their personal group**
To create users with their personal groups, add:
```bash
# Create the user with the personal group
useradd -m -g "$username" "$username"
log_message "Created user: $username"
```
- **`useradd -m -g "$username" "$username"`:** This creates a new user account with the specified username, creates their home directory, and assigns the user to their personal group.
- **`log_message "Created user: $username"`:** Calls the logging function and logs the username of the newly created account.
**Iteration 5: Assign passwords to users**
For the users to have access, each should be assigned their own password.
```bash
# Generate a random password and set it for the user
password=$(generate_password)
echo "$username:$password" | chpasswd
log_message "Set password for user: $username"
```
- **`password=$(generate_password)`:** Calls the `generate_password` function to generate a password, and stores the output within a variable named `password`.
- **`echo "$username:$password" | chpasswd`:** This takes the generated password, formats it correctly, and passes it to the chpasswd command to set the password for the user.
- **`log_message "Set password for user: $username"`:** The function logs the action taken, indicating that the password has been set for the user.
**Iteration 6: Add the user to additional groups**
```bash
IFS=',' read -ra group_array <<< "$groups"
for group in "${group_array[@]}"; do
if ! getent group "$group" &>/dev/null; then
groupadd "$group"
log_message "Created group: $group"
fi
usermod -aG "$group" "$username"
log_message "Added user $username to group: $group"
done
```
- **`IFS=',' read -ra group_array <<< "$groups"`:** `IFS=','` sets the Internal Field Separator (IFS) to comma (,). This means that when the read command reads `$groups`, it will split it into multiple parts using comma as the delimiter. `read -ra group_array <<< "$groups"` reads the content of $groups into an array group_array, splitting it based on the comma delimiter (','), `-r` prevents backslashes from being interpreted as escape characters and `-a group_array` assigns the result to the array variable group_array.
- **`for group in "${group_array[@]}"; do`:** Iterates over each element (group) in the group_array array.
- **`if ! getent group "$group" &>/dev/null; then`:** `getent group "$group"` checks if the group $group exists in the system, `!` negates the result, meaning if the group does not exist (! getent ...), the condition becomes true. `&>/dev/null` redirects both stdout and stderr to /dev/null, discarding any output. If the group does not exist, it proceeds with group creation.
- **`groupadd "$group"`:** Creates the group $group if it does not already exist.
- **`log_message "Created group: $group"`:** Logs a message indicating that the group $group was successfully created.
- **`usermod -aG "$group" "$username"`:** Adds the user $username to the group $group. `-aG "$group": -a` appends the user to the group without removing them from other groups `-G` specifies a list of supplementary groups.
- **`log_message "Added user $username to group: $group"`:** Logs a message indicating that the user $username was successfully added to the group $group.
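The array splitting can be observed in isolation (a sketch using one of the sample group lists):

```bash
# Sketch: IFS=',' read -ra turns "dev,manager" into a two-element array.
groups='dev,manager'
IFS=',' read -ra group_array <<< "$groups"
for group in "${group_array[@]}"; do
echo "supplementary group: $group"
done
echo "group count: ${#group_array[@]}"   # prints: group count: 2
```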
**Iteration 7: Create and set the home directory permissions**
Users should have access to their individual home directories to perform certain actions. Add the below configuration in the script:
```bash
mkdir -p "/home/$username"
chown -R "$username:$username" "/home/$username"
chmod 755 "/home/$username"
```
- **`mkdir -p "/home/$username"`:** This creates the home directory for each new user, where $username is the username of the user being processed.
- **`chown -R "$username:$username" "/home/$username"`:** This changes the ownership of the user's home directory and all files and subdirectories within it.
- **`chmod 755 "/home/$username"`:** This sets the permissions for the newly created user's home directory. `755` means the owner ($username) has read, write, and execute permissions (rwx). Users in the same group as the owner and other users have read and execute permissions (r-x).
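To see what mode `755` looks like on a directory without touching `/home`, a throwaway directory works (a sketch):

```bash
# Sketch: chmod 755 on a temporary directory yields drwxr-xr-x.
dir=$(mktemp -d)
chmod 755 "$dir"
ls -ld "$dir" | cut -c1-10   # prints: drwxr-xr-x
rmdir "$dir"
```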
**Iteration 8: Store the username and password securely**
```bash
# Store the username and password securely
echo "$username:$password" >> "$PASSWORD_FILE"
chmod 600 "$PASSWORD_FILE"
log_message "Password for $username stored in $PASSWORD_FILE."
fi
```
- **`echo "$username:$password" >> "$PASSWORD_FILE"`:** This line stores the username and its corresponding generated password in a file designated for storing passwords securely.
- **`chmod 600 "$PASSWORD_FILE"`:** This command restricts access to the password file to ensure it remains secure and confidential.
- **`log_message "Password for $username stored in $PASSWORD_FILE."`:** Logs a message indicating that the password for $username has been stored in the password file ($PASSWORD_FILE).
- `fi`: The `fi` keyword marks the end of the entire `if...else` conditional block, which started at iteration 3. It marks the end of the code block to execute if the condition is true.
### End the Loop
End the loop using:
```bash
done < "$INPUT_FILE"
```
The expression `done < "$INPUT_FILE"` signifies the end of the while loop and feeds it the input file captured earlier by the `INPUT_FILE=$1` assignment, so each line of user data is read in turn.
The complete script is available in [this GitHub repository](https://github.com/FavourDaniel/user-management-in-linux).
## Run the Script
To execute the script directly, without invoking `bash` explicitly in your terminal, make the script executable:
```bash
chmod +x create_users.sh
```
Once the script is executable, run it from the directory where it resides:
```bash
./create_users.sh user_passwords.txt
```
## Verify the Script Executed Tasks Successfully
Several checks should be carried out to ensure that the script executed all the user management tasks successfully.
To verify user existence, run:
```bash
id <username>
```
To verify that the user password file was successfully created in the /var/secure/ directory, run:
```bash
cat /var/secure/user_passwords.txt
```
To verify group existence, run:
```bash
getent group <username>
```
To verify the log file was created and all actions were logged correctly, without any errors or unexpected behavior, run:
```bash
cat /var/log/user_management.log
```
To verify that each user has a personal group with the same name as their username, run:
```bash
getent passwd <username>
getent group <username>
```
## Conclusion
This article highlights the automation of user management tasks using Bash scripts. It explores how scripting enhances efficiency in creating users, managing groups, setting permissions, and securing passwords on Linux systems.
---
This article is my stage 1 task at HNG internship program. HNG is an internship that helps people improve their tech skills. To learn more about the HNG internship, visit their [website](https://hng.tech/internship). If you are looking to hire talented Developers and Designers from the internship, visit [HNG hire](https://hng.tech/hire).
| danielfavour | |
1,909,380 | Finite-state machine example in JavaScript | Finite-state machine example in JavaScript | 0 | 2024-07-02T21:08:16 | https://dev.to/artem/finite-state-machine-example-in-javascript-2npm | javascript, patterns, example | ---
title: Finite-state machine example in JavaScript
published: true
description: Finite-state machine example in JavaScript
tags: #javascript #patterns #example
cover_image: https://images.unsplash.com/photo-1574087631700-abf928509b80?q=80&w=1888&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D
# Use a ratio of 100:42 for best results.
# published_at: 2024-07-02 19:05 +0000
---
## What is a Finite-state machine?
### Context
An FSM is one of the classes of automata studied in automata theory:

**The finite state machine (FSM)** is a software design pattern where a given model transitions to other behavioral states through external input.
### Example using if else
Let's say we have a simple task where we check, for example, a traffic light and perform actions depending on the current state.
```js
function trafficLightAction(color) {
if (color === 'green') {
console.log('Go');
} else if (color === 'yellow') {
console.log('Slow down');
} else if (color === 'red') {
console.log('Stop');
} else {
console.log('Invalid color');
}
}
// Function call examples
trafficLightAction('green'); // Return: Go
trafficLightAction('yellow'); // Return: Slow down
trafficLightAction('red'); // Return: Stop
trafficLightAction('blue'); // Return: Invalid color
```
### Example with using Finite-state machine (FSM)
Now let's implement the same functionality using a state machine. A state machine will be an object where each key (state) is associated with a specific action.
```js
const trafficLightFSM = {
green: () => console.log('Go'),
yellow: () => console.log('Slow down'),
red: () => console.log('Stop'),
invalid: () => console.log('Invalid color'),
};
function trafficLightActionFSM(color) {
const action = trafficLightFSM[color] || trafficLightFSM['invalid'];
action();
}
// Function call examples
trafficLightActionFSM('green'); // Return: Go
trafficLightActionFSM('yellow'); // Return: Slow down
trafficLightActionFSM('red'); // Return: Stop
trafficLightActionFSM('blue'); // Return: Invalid color
```
Now, our traffic light works well.
Disclaimer:
Several levels of additional tests would not hurt here, and perhaps another programming language ;)

| artem |
1,886,353 | RESTful APIs | Topic: "Building RESTful APIs with Node.js and Express" Description: How to design and build... | 27,559 | 2024-07-02T20:55:00 | https://dev.to/suhaspalani/restful-apis-4m6p | restapi, backend, api, javascript | - *Topic*: "Building RESTful APIs with Node.js and Express"
- *Description*: How to design and build RESTful APIs using Node.js and Express.
#### Content:
#### 1. Introduction to RESTful APIs
- **What is REST**: Explain REST (Representational State Transfer) and its principles.
- **HTTP Methods**: Discuss common HTTP methods: GET, POST, PUT, DELETE, PATCH.
#### 2. Setting Up the Project
- **Project Initialization**:
```bash
mkdir myapi
cd myapi
npm init -y
npm install express body-parser
```
- **Project Structure**:
```
myapi/
├── node_modules/
├── package.json
├── index.js
```
#### 3. Creating a Simple API
- **Basic Express Setup**:
```javascript
const express = require('express');
const bodyParser = require('body-parser');
const app = express();
const port = 3000;
app.use(bodyParser.json());
app.listen(port, () => {
console.log(`API running at http://localhost:${port}`);
});
```
#### 4. Defining Routes
- **Sample Routes**:
```javascript
let books = [];
app.get('/books', (req, res) => {
res.json(books);
});
app.post('/books', (req, res) => {
const book = req.body;
books.push(book);
res.status(201).json(book);
});
app.get('/books/:id', (req, res) => {
const book = books.find(b => b.id === parseInt(req.params.id));
if (book) {
res.json(book);
} else {
res.status(404).send('Book not found');
}
});
app.put('/books/:id', (req, res) => {
const index = books.findIndex(b => b.id === parseInt(req.params.id));
if (index !== -1) {
books[index] = req.body;
res.json(books[index]);
} else {
res.status(404).send('Book not found');
}
});
app.delete('/books/:id', (req, res) => {
books = books.filter(b => b.id !== parseInt(req.params.id));
res.status(204).send();
});
```
#### 5. Testing the API
- **Using Postman or Curl**: Demonstrate how to use Postman or Curl to test API endpoints.
- **Example Tests**:
- GET /books
- POST /books with JSON body `{"id": 1, "title": "1984", "author": "George Orwell"}`
- GET /books/1
- PUT /books/1 with JSON body `{"id": 1, "title": "Animal Farm", "author": "George Orwell"}`
- DELETE /books/1
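Assuming the server from the sections above is running locally on port 3000, those tests can be run from a terminal with `curl` (a sketch; the URLs mirror the routes defined earlier):

```bash
# Sketch: exercising the book endpoints with curl; assumes the Express
# server is already listening on http://localhost:3000.
curl http://localhost:3000/books
curl -X POST http://localhost:3000/books \
  -H 'Content-Type: application/json' \
  -d '{"id": 1, "title": "1984", "author": "George Orwell"}'
curl http://localhost:3000/books/1
curl -X PUT http://localhost:3000/books/1 \
  -H 'Content-Type: application/json' \
  -d '{"id": 1, "title": "Animal Farm", "author": "George Orwell"}'
curl -X DELETE http://localhost:3000/books/1
```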
| suhaspalani |
1,909,378 | Why I'm Excited for the HNG Internship | I'm Joel, a computer science student at Accra Technical University, Ghana. As an aspiring backend... | 0 | 2024-07-02T21:08:01 | https://dev.to/joel_legend_7fc19067f4b11/why-im-excited-for-the-hng-internship-30b7 | I'm Joel, a computer science student at Accra Technical University, Ghana. As an aspiring backend developer, I haven't encountered any significant backend problems yet. I'm eager to dive in and learn through hands-on experience, and the HNG Internship offers the perfect opportunity to do just that.
I am very excited about starting the https://hng.tech/internship. This opportunity will help me gain the technical skills I need and prepare me for a successful career in backend development. I’m looking forward to tackling real challenges, learning from experienced mentors, and contributing to meaningful projects. | joel_legend_7fc19067f4b11 | |
1,909,376 | Exploratory Testing: User Experience Improvements for Scrape Any Website | Introduction: Welcome to my detailed bug report for Scrape Any Website (SAW), version 1.1.15.0 for... | 0 | 2024-07-02T21:04:39 | https://dev.to/codereaper0/bug-hunting-on-saw-35lb | bug, scrapping, programming | **Introduction:** Welcome to my detailed bug report for Scrape Any Website (SAW), version 1.1.15.0 for x86 architecture. In this post, I'll share the bugs and usability issues I encountered during my exploratory testing of this data extraction tool. This exercise is part of my ongoing efforts to improve web scraping tools and ensure they meet user needs effectively
**Application Name and Version:** Scrape Any Website, Version 1.1.15.0
**Test Environment:**
Device: Dell Latitude E7270
Operating System: Windows 11
SAW Version: 1.1.15.0
**Links to get the application**
Website - [SAW](https://scrapeanyweb.site/)
Windows Store - [Download Page](https://apps.microsoft.com/detail/9mzxn37vw0s2)
**Testing Methodology:**
A multifaceted testing approach was employed, focusing on:
**Functional Testing:** Evaluating core functionalities like website scraping, data extraction, and output formatting.
**Usability Testing:** Assessing the application's ease of use, intuitiveness of the interface, and overall user-friendliness.
**Performance Testing:** Observing general responsiveness and scraping speeds during exploration.
**Edge Case Testing:** Pushing the boundaries of expected use scenarios to uncover potential bugs or limitations.
**Findings:**
My testing efforts revealed several areas for improvement in SAW. A detailed report outlining these findings, including specific bug reports is available in the attached spreadsheet: Full Report Excel [Link](https://docs.google.com/spreadsheets/d/1vmXSYHIZO-Eepjt9ylzefQrsQVww0iTyJ0LIz5TcLYw/edit?usp=sharing)
**Conclusion:** The exploratory testing of Scrape Any Website revealed several bugs and usability issues. Addressing these issues will significantly improve the user experience and reliability of the application. | codereaper0 |
1,909,375 | Creating Users and Groups on Linux with a Bash Script | Overview This bash script automates the process of creating multiple users and groups on a... | 0 | 2024-07-02T21:01:12 | https://dev.to/seundavid_dev/creating-users-and-groups-on-linux-with-a-bash-script-3fm9 | ## Overview
This bash script automates the process of creating multiple users and groups on a Linux system. It's designed to streamline the onboarding process for new employees or system users. The script reads user information from an input file, creates users with their respective groups, sets random passwords, and logs all actions.
## Features
- Creates users and their personal groups
- Adds users to additional specified groups
- Generates random passwords for each user
- Logs all actions for auditing purposes
- Stores generated passwords securely
- Handles existing users and groups
- Provides detailed error checking and reporting
## Prerequisites
- Linux environment (tested on Ubuntu)
- Root or sudo access
- Bash shell
## Installation
1. Clone this repository `https://github.com/Seundavid18/HNG11-Stage-1-Task` or download the `create_users.sh` script.
2. Make the script executable:
```
chmod +x create_users.sh
```
## Usage
1. Create an input file (e.g., `users.txt`) with the following format:
```
username; group1,group2,group3
```
Each line represents a user. The username and groups are separated by a semicolon (;). Multiple groups are separated by commas (,).
2. Run the script with root privileges:
```
sudo ./create_users.sh users.txt
```
Replace `users.txt` with the path to your input file.
## Input File sample
```
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
```
- Each line represents a user
- Username and groups are separated by a semicolon (;)
- Multiple groups are separated by commas (,)
- Whitespace around separators is ignored
## Output
- Users are created with their home directories
- Each user is assigned to their specified groups
- Random passwords are generated for each user
- All actions are logged in `/var/log/user_management.log`
- Passwords are stored in `/var/secure/user_passwords.csv`
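The exact internals of `create_users.sh` live in the repository, but the password step can be sketched roughly like this (the variable names, character set, and local file path here are assumptions for illustration, not the script's actual code):

```shell
#!/bin/bash
# Sketch: generate a 10-character random password and record it
# in a csv-style file with owner-only permissions.
password=$(openssl rand -base64 32 | tr -dc 'A-Za-z0-9' | head -c 10)

password_file="./user_passwords.csv"   # the real script uses /var/secure/user_passwords.csv
echo "demo_user,$password" >> "$password_file"
chmod 600 "$password_file"             # owner-only read/write
```

The `chmod 600` is what keeps generated passwords readable only by the account that ran the script.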
## Troubleshooting
- Ensure you have root privileges when running the script
- Check the log file at `/var/log/user_management.log` for detailed information about each action and any errors
- Verify that the input file is formatted correctly
## Conclusion
By automating user and group management with a Bash script, you can efficiently handle multiple users, ensure security through proper permissions and password management, and maintain an audit log of all actions. This script provides a solid foundation for user management in a Linux environment.
## HNG11 Internship
Visit https://hng.tech/internship | https://hng.tech/hire for more information about HNG and its internship opportunities. | seundavid_dev
1,909,374 | shadcn-ui/ui codebase analysis: How does shadcn-ui CLI work? — Part 2.4 | I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the... | 0 | 2024-07-02T20:59:35 | https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-does-shadcn-ui-cli-work-part-24-5a0 | javascript, opensource, nextjs, shadcnui | I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the shadcn-ui/ui CLI.
In parts 2.0 to 2.3, we looked at the different functions involved in [checking whether there is an existing config.](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L74C3-L78C4)
Let’s move on to the next line of code

After checking for an existing config, the next step in the getProjectConfig function is to determine the project type.
The project type describes the kind of project in which you are trying to install shadcn-ui components via the init command.
```js
const projectType = await getProjectType(cwd)
```
[getProjectType](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L111) is imported from [ui/packages/cli/src/utils/get-project-info.ts](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L111) and this function has some checks to identify if your Next.js project uses app or pages router and if it has src folder.
```js
export async function getProjectType(cwd: string): Promise<ProjectType | null> {
  const files = await fg.glob("**/*", {
    cwd,
    deep: 3,
    ignore: PROJECT_SHARED_IGNORE,
  })

  const isNextProject = files.find((file) => file.startsWith("next.config."))
  if (!isNextProject) {
    return null
  }

  const isUsingSrcDir = await fs.pathExists(path.resolve(cwd, "src"))
  const isUsingAppDir = await fs.pathExists(
    path.resolve(cwd, `${isUsingSrcDir ? "src/" : ""}app`)
  )

  if (isUsingAppDir) {
    return isUsingSrcDir ? "next-app-src" : "next-app"
  }

  return isUsingSrcDir ? "next-pages-src" : "next-pages"
}
```
This code is pretty self-explanatory except for the fg.glob
```js
const files = await fg.glob("**/*", {
  cwd,
  deep: 3,
  ignore: PROJECT_SHARED_IGNORE,
})
```
Check out the fast-glob docs about the [deep property](https://www.npmjs.com/package/fast-glob#deep).

You might be wondering what’s PROJECT\_SHARED\_IGNORE.
Well, PROJECT\_SHARED\_IGNORE is an [array initiated at the top of file](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L24).

Check out the docs for [ignore property.](https://www.npmjs.com/package/fast-glob#ignore)

Conclusion:
-----------
As I moved on to the next line of code in getProjectConfig in shadcn-ui CLI source code, I found a function named getProjectType. This function’s purpose is to find out if your Next.js project uses app or pages router and whether it has src folder.
This code is pretty self-explanatory:
```js
export async function getProjectType(cwd: string): Promise<ProjectType | null> {
  const files = await fg.glob("**/*", {
    cwd,
    deep: 3,
    ignore: PROJECT_SHARED_IGNORE,
  })

  const isNextProject = files.find((file) => file.startsWith("next.config."))
  if (!isNextProject) {
    return null
  }

  const isUsingSrcDir = await fs.pathExists(path.resolve(cwd, "src"))
  const isUsingAppDir = await fs.pathExists(
    path.resolve(cwd, `${isUsingSrcDir ? "src/" : ""}app`)
  )

  if (isUsingAppDir) {
    return isUsingSrcDir ? "next-app-src" : "next-app"
  }

  return isUsingSrcDir ? "next-pages-src" : "next-pages"
}
```
Except for the way files are accessed. fg.glob is set to depth level of 3 and ignores certain folders such as node\_modules, dist, build, public and .next.
From the code snippet above, there’s one of 5 values expected as a project type: null || “next-app-src” || “next-app” || “next-pages-src” || “next-pages”.
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/)
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
[Build shadcn-ui/ui from scratch](https://tthroo.com/)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L111](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L111)
2. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L24](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/get-project-info.ts#L24) | ramunarasinga |
1,909,370 | A Product-Engineering Understanding | The Correct Way To Write Javascript Is To Write Bad Javascript As Such Everyone is good at writing... | 0 | 2024-07-02T20:48:34 | https://dev.to/theholyspirit/a-product-engineering-understanding-332k | product, engineering, leadership | The Correct Way To Write Javascript Is To Write Bad Javascript
As Such
Everyone is good at writing Javascript.
## Human Software
I've been in a lot of software houses, and an interesting reflection I make is that the engineering units tend to have a runtime like the software they write.
**The Runtimes of Note Are:**
Interpreted
Compiled
Javascript
Human Software Is The Humans' Software.
## Javascript Is A Shitshow
Javascript Is A Wild Ecosystem
Regarding Javascript Applications, The Written Software Varies Wildly From Software House To Software House.
JavaScript Is A Shitshow
Because Javascript Is The Language Of the Human User Interface
And Humans Are Shitshows
The Runtime Of Javascript Only Software Houses Is The Unfiltered Human Software Of The World.
Working Javascript Always Answers The Question "How Is It Possible For It To..." A Crafty Scripter Can Get It Done. If Future Readability Is Not A Concern, Crafty Scripts May Be The Call.
## Further Thoughts:
I Recommend Consistently Applying The Empathetic Product Story For Readability. Empathetic Product Story For Readability Technique States To Start With Perfect Product Definition. Given PPD, A Vocabulary Is Extracted.
Perfect Product Understanding Allows Owners (OwnerGroup1 and Owner Group 2) To Have Shared Context While Referring to The Unique Effect. The Software Written Under Empathetic Product Story For Readability Originates From Product Understanding And, Correctly Positioned, Flows To The Engineering And Other Organizational Units.
PPU Answers "How Is It Possible For Me To".
Given The Product Understanding, An Engineer, Rather Than A Scripter, Answers "How Is It Possible For An Owner To". | theholyspirit |
1,909,345 | Ask the Expert - Operators | Since short-circuit operators are, in some cases, more efficient than their regular equivalents... | 0 | 2024-07-02T20:48:14 | https://dev.to/devsjavagirls/pergunte-ao-especialista-operadores-ep1 | java | Since short-circuit operators are, in some cases, more efficient than their regular equivalents, why does Java offer the regular AND and OR operators at all?
In some cases, you may want both operands of an AND or OR operation to be evaluated because of the side effects they produce.

In the first if statement, i is incremented regardless of whether the condition succeeds.
With the short-circuit operator, i is not incremented when the first operand is false.
The lesson: when you expect the right-hand operand of an AND or OR operation to be evaluated, use the non-short-circuit versions of these operations in Java.
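The idea in the screenshot can be reproduced in a few lines (my own reconstruction, since the original code is only shown as an image):

```java
public class SideEffectDemo {
    // With the regular &, the right-hand side always runs, so i is
    // incremented even though the first operand is false.
    static int regularAnd(int i) {
        boolean b = (false & (++i > 0));
        return i; // i was incremented: the side effect happened
    }

    // With the short-circuit &&, the right-hand side is skipped
    // when the first operand is false, so i is left unchanged.
    static int shortCircuitAnd(int i) {
        boolean b = (false && (++i > 0));
        return i; // i unchanged: the side effect was skipped
    }

    public static void main(String[] args) {
        System.out.println(regularAnd(0));      // prints 1
        System.out.println(shortCircuitAnd(0)); // prints 0
    }
}
```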
| devsjavagirls |
1,909,247 | The Assignment Operator | The assignment operator is the single equal sign, =. It works similarly in Java and other... | 0 | 2024-07-02T20:47:57 | https://dev.to/devsjavagirls/operador-de-atribuicao-113f | java | - The assignment operator is the single equal sign, =.
- It works similarly in Java and other programming languages.
General form: var = expression;
- The type of var must be compatible with the type of expression.
- It allows the creation of a chain of assignments.
Example:
int x, y, z;
x = y = z = 100; -> assigns 100 to x, y, and z.
The = operator yields the value of the expression on its right-hand side.
The value of z = 100 is 100, which is assigned to y and then to x.
Chained assignment makes it easy to set several variables to a common value.
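The chain described above can be checked directly (a minimal sketch):

```java
public class ChainDemo {
    // Performs the chained assignment and returns the three values.
    static int[] chain() {
        int x, y, z;
        x = y = z = 100; // z = 100 yields 100, assigned to y, then to x
        return new int[] { x, y, z };
    }

    public static void main(String[] args) {
        int[] v = chain();
        System.out.println(v[0] + " " + v[1] + " " + v[2]); // prints 100 100 100
    }
}
```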
**Shorthand assignments**
- Java provides shorthand assignment operators to simplify certain assignment statements.
Example:
x = x + 10;
can be written as x += 10;.
The += operator assigns to x the value of x plus 10.
Another example:
x = x - 100;
is equivalent to x -= 100;.
Both statements assign to x the value of x minus 100.
- Shorthand assignment works for all binary operators in Java (operations between two operands).
- General form of shorthand assignment:
var op = expression;
Arithmetic and logical shorthand assignment operators:

- Compound assignment operators combine an operation with an assignment.
- They are formally called compound assignment operators.
- Advantages: they are more compact than their non-shorthand equivalents.
Examples:
1. += (Addition and Assignment): adds the right-hand value to the left-hand value and assigns the result to the left-hand variable.

2. -= (Subtraction and Assignment): subtracts the right-hand value from the left-hand value and assigns the result to the left-hand variable.

3. *= (Multiplication and Assignment): multiplies the left-hand value by the right-hand value and assigns the result to the left-hand variable.

4. /= (Division and Assignment): divides the left-hand value by the right-hand value and assigns the result to the left-hand variable.

5. %= (Modulus and Assignment): computes the remainder of dividing the left-hand value by the right-hand value and assigns the result to the left-hand variable.

6. &= (AND and Assignment): performs a bitwise AND between the left-hand value and the right-hand value and assigns the result to the left-hand variable.

7. |= (OR and Assignment): performs a bitwise OR between the left-hand value and the right-hand value and assigns the result to the left-hand variable.

8. ^= (XOR and Assignment): performs a bitwise XOR between the left-hand value and the right-hand value and assigns the result to the left-hand variable.

| devsjavagirls |
1,909,236 | Relational and Logical Operators | A relational operator refers to the relationships that values can have with one... | 0 | 2024-07-02T20:47:14 | https://dev.to/devsjavagirls/operadores-relacionais-e-logicos-19me | java | - A relational operator refers to the relationships that values can have with one another.
- A logical operator refers to the ways the values true and false can be connected.
- Relational operators produce true or false results.
- Relational operators frequently work together with logical operators.
Relational operators:

Logical operators:

- The result of the relational and logical operators is a boolean value.
- In Java, all objects can be compared for equality or inequality using == and !=.
- The comparison operators <, >, <=, and >= can only be applied to types with an ordering relationship.
- Relational operators can be applied to numeric types and to the char type.
- Values of type boolean can only be compared for equality or inequality.
- In Java, true > false has no meaning.
- The operands of the logical operators must be of type boolean, and they produce a boolean result.
- The logical operators &, |, ^, and ! support the AND, OR, XOR, and NOT operations.

- As the table shows, the result of an exclusive OR operation is true when exactly one, and only one, operand is true.

The program's output is shown below:
i < j
i <= j
i != j
!(b1 & b2) is true
b1 | b2 is true
b1 ^ b2 is true
**Short-circuit logical operators**
- Java provides short-circuit versions of the logical AND and OR operators for more efficient code.
- In an AND operation, if the first operand is false, the result is false regardless of the second operand.
- In an OR operation, if the first operand is true, the result is true regardless of the second operand.
- In these cases, there is no need to evaluate the second operand, which saves time and produces more efficient code.
- The short-circuit AND operator is && and the short-circuit OR operator is ||.
- Their regular equivalents are & and |.
- The regular version always evaluates both operands, while the short-circuit version evaluates the second operand only when necessary.
Usage example: determining whether d is a factor of n, using && to avoid a division-by-zero error in a modulus operation.

The if statement first checks whether d equals zero to prevent a division by zero.
If d is zero, the short-circuit AND operator stops evaluation and the modulus operation is not performed.
In the first test, d equals 2 and the modulus operation is executed.
In the second test, d is zero, so the modulus operation is skipped, avoiding the division by zero.
- The regular AND operator evaluates both operands, leading to a runtime error from the division by zero.
- The short-circuit operators are formally called conditional-and and conditional-or in Java, but the term "short-circuit" is more widely used.
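The factor check described above might look like this (my reconstruction of the code shown in the image):

```java
public class ShortCircuitDemo {
    // true when d is a nonzero factor of n; the d != 0 check on the
    // left of && guards the modulus against division by zero.
    static boolean isFactor(int n, int d) {
        return d != 0 && (n % d) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isFactor(10, 2)); // prints true
        System.out.println(isFactor(10, 0)); // prints false, no exception
    }
}
```

Swapping `&&` for the regular `&` in `isFactor` would evaluate `n % d` even when d is zero and throw an ArithmeticException at runtime.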
| devsjavagirls |
1,909,368 | DO NOT use ASP.NET Identity for Minimal API Endpoints! | Context: Since this morning, I’ve been trying to add custom fields to my Identity... | 0 | 2024-07-02T20:44:02 | https://dev.to/ganatrajay2000/do-not-use-aspnet-identity-for-minimal-api-endpoints-48p0 | csharp, dotnet, beginners, learning | ### Context:
Since this morning, I’ve been trying to add custom fields to my Identity authentication in my ASP.NET blogging project. I needed to add basic fields like First Name and Last Name. While I managed to add these columns to my database table, they didn’t populate when using the register API. After spending over five hours researching and finding nothing helpful for MVC or Minimal APIs, I stumbled upon [Andrew Lock's article](https://andrewlock.net/should-you-use-the-dotnet-8-identity-api-endpoints/). For more details, you can go through it!
<br />
### My Recommendation
Honestly, save yourself the headache. Skip ASP.NET Identity for Minimal API Endpoints if you need custom user data fields. Consider using custom JWT authentication. You'll avoid a lot of unnecessary hassle. | ganatrajay2000
1,909,367 | Tail Recursion | A tail recursive method is efficient for reducing stack size. A recursive method is said to be tail... | 0 | 2024-07-02T20:41:56 | https://dev.to/paulike/tail-recursion-37o0 | java, programming, learning, beginners | A tail recursive method is efficient for reducing stack size. A recursive method is said to be _tail recursive_ if there are no pending operations to be performed on return from a recursive call, as illustrated in Figure below (a). However, method **B** in Figure below (b) is not tail recursive because there are pending operations after a method call is returned.

For example, the recursive **isPalindrome** method (lines 9–16) in [RecursivePalindrome.java](https://dev.to/paulike/recursive-helper-methods-4fpd) is tail recursive because there are no pending operations after recursively invoking **isPalindrome** in line 15. However, the recursive **factorial** method (lines 17–22) in [ComputeFactorial.java](https://dev.to/paulike/case-study-computing-factorials-3jc2) is not tail recursive, because there is a pending operation, namely multiplication, to be performed on return from each recursive call.
Tail recursion is desirable: because the method ends when the last recursive call ends, there is no need to store the intermediate calls in the stack. Compilers can optimize tail recursion to reduce stack size.
A nontail-recursive method can often be converted to a tail-recursive method by using auxiliary parameters. These parameters are used to contain the result. The idea is to incorporate the pending operations into the auxiliary parameters in such a way that the recursive call no longer has a pending operation. You can define a new auxiliary recursive method with the auxiliary parameters. This method may overload the original method with the same name but a different signature. For example, the **factorial** method in [ComputeFactorial.java](https://dev.to/paulike/case-study-computing-factorials-3jc2) is written in a tail-recursive way in the code below.

The first **factorial** method (line 5) simply invokes the second auxiliary method (line 6). The second method contains an auxiliary parameter result that stores the **result** for the factorial of **n**. This method is invoked recursively in line 14. There is no pending operation after a call is returned. The final result is returned in line 12, which is also the return value from invoking **factorial(n, 1)** in line 6. | paulike |
1,909,363 | User(s) Creation Automation with Bash | Hey there! So here is a synopsis of my stage 1 task at the HNG DevOps Internship ... | 0 | 2024-07-02T20:29:28 | https://dev.to/0xugochukwu/users-creation-automation-with-bash-461m | devops, bash | Hey there!
So here is a synopsis of my stage 1 task at the [HNG DevOps Internship](https://hng.tech/internship)
### Task:
Your company has employed many new developers. As a SysOps engineer, write a bash script called `create_users.sh` that reads a text file containing the employee’s usernames and group names, where each line is formatted as `user;groups`.
The script should create users and groups as specified, set up home directories with appropriate permissions and ownership, generate random passwords for the users, and log all actions to `/var/log/user_management.log`. Additionally, store the generated passwords securely in `/var/secure/user_passwords.txt`.
Ensure error handling for scenarios like existing users and provide clear documentation and comments within the script.
**Sample Input**
```
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
```
### Solution
The programme I wrote solves the problem by following these procedures;
- First, we read the `input file` using a function that adds the users to a global variable called `users` and the groups to another variable called `groups`. It does this simultaneously allowing the index of each user in `users` to match their corresponding groups in `groups`.
I also ensure the user has entered a valid input file before running this.
Here's the code that does all of this:
```bash
declare -a users
declare -a groups
function read_input() {
local file="$1"
while IFS= read -r line; do
user=$(echo "$line" | cut -d';' -f1)
groups_data=$(echo "$line" | cut -d';' -f2 | tr -d '[:space:]')
users+=("$user")
groups+=("$groups_data")
done < "$file"
}
if [[ $# -ne 1 ]]; then
echo "Usage: $0 <input_file>"
exit 1
fi
input_file="$1"
echo "Reading input file: $input_file"
read_input "$input_file"
```
- Next, I go on to create the required files and their directories if they don't already exist using this code:
```bash
log_file="/var/log/user_management.log"
password_file="/var/secure/user_passwords.txt"
if [ ! -f "$log_file" ]; then
mkdir -p /var/log
touch "$log_file"
fi
if [ ! -f "$password_file" ]; then
mkdir -p /var/secure
touch "$password_file"
fi
```
- Then the main event, at this point I have a list of the users in `users`, a list of their corresponding groups in `groups` and all the files I will need to store valuable information such as logs and the passwords of the users I created.
Now, I use a `for` loop to iterate over each user and their corresponding groups with an index.
Remember we created the `users` and `groups` arrays simultaneously by looping over each line in the file; this means the user at `index 0` in `users` needs to be added to the groups at `index 0` in `groups`, so our `for` loop will look like this:
```bash
for (( i = 0; i < ${#users[@]}; i++ )); do
user="${users[$i]}"
user_groups="${groups[$i]}"
```
So `user` is the user we are working on and `user_groups` are the groups we are adding them to.
Next, we check if the `user` exists, if they do we just continue with the next iteration; else, we create them with this code:
```bash
if id "$user" &>/dev/null; then
echo "User $user already exists, Skipped" | tee -a "$log_file"
else
# Create user
useradd -m -s /bin/bash "$user"
if [[ $? -ne 0 ]]; then
echo "Failed to create user $user" | tee -a "$log_file"
exit 1
fi
echo "User $user created" | tee -a "$log_file"
```
Then we set a password for them by using openssl to generate 50 random bytes encoded as base64, after which we filter the result to keep only letters, numbers and a few special characters. Finally, we slice off the first 10 characters, and these serve as the user's password.
We go ahead to store the users password in `/var/secure/user_passwords.txt`.
These are done using the code below:
```bash
# Set password
password=$(openssl rand -base64 50 | tr -dc 'A-Za-z0-9!?%=' | head -c 10)
echo "$user:$password" | chpasswd
if [[ $? -ne 0 ]]; then
echo "Failed to set password for $user" | tee -a "$log_file"
exit 1
fi
echo "Password for $user set" | tee -a "$log_file"
echo "$user:$password" >> "$password_file"
```
Finally, we add the user to their groups.
We do this by first checking whether a personal group was created for the user--most Linux distros do this by default on user creation for security reasons. If it wasn't, we create a personal group for the user and add the user to it.
See code below:
```bash
# Create personal group if linux distro didn't create it by default
if grep -q "^$user:" /etc/group; then
echo "Personal group $user already exists" | tee -a "$log_file"
else
echo "Personal group $user does not exist, creating $user" | tee -a "$log_file"
groupadd "$user"
if [[ $? -ne 0 ]]; then
echo "Failed to create personal group $user" | tee -a "$log_file"
exit 1
fi
fi
# Add user to personal group
usermod -aG "$user" "$user"
if [[ $? -ne 0 ]]; then
echo "Failed to add $user to $user group" | tee -a "$log_file"
exit 1
fi
echo "Added $user to $user group" | tee -a "$log_file"
```
After that, we split the previously created variable `user_groups` by `,` and iterate through each element, creating the group if it doesn't already exist and adding the user to it. Then we finally close our initial loop.
We do this using this code:
```bash
# Add user to other groups
for group in $(echo "$user_groups" | tr ',' '\n'); do
if grep -q "^$group:" /etc/group; then
echo "Group $group already exists" | tee -a "$log_file"
else
echo "Group $group does not exist, creating $group" | tee -a "$log_file"
groupadd "$group"
if [[ $? -ne 0 ]]; then
echo "Failed to create group $group" | tee -a "$log_file"
exit 1
fi
fi
usermod -aG "$group" "$user"
if [[ $? -ne 0 ]]; then
echo "Failed to add $user to $group group" | tee -a "$log_file"
exit 1
fi
echo "Added $user to $group group" | tee -a "$log_file"
done
fi
done
```
And just like that, all the new employees now have user profiles!
You can also reuse this script for new employees. Exciting right?
You will also notice how I appropriately log each event in the log file and gracefully handle failures in each command so even if we run into unexpected problems in our execution we not only end the programme gracefully we also have a log for further investigation.
That's it for now, but be sure to tune in for the Stage 2 task 😉
I am also open to constructive criticism so be sure to leave a comment.
Also, you can learn more about HNG [here](https://hng.tech/)
| 0xugochukwu |
1,909,364 | Recursion vs. Iteration | Recursion is an alternative form of program control. It is essentially repetition without a loop.... | 0 | 2024-07-02T20:28:24 | https://dev.to/paulike/recursion-vs-iteration-df | java, programming, learning, beginners | Recursion is an alternative form of program control. It is essentially repetition without a loop. When you use loops, you specify a loop body. The repetition of the loop body is controlled by the loop control structure. In recursion, the method itself is called repeatedly. A selection statement must be used to control whether to call the method recursively or not.
Recursion bears substantial overhead. Each time the program calls a method, the system must allocate memory for all of the method’s local variables and parameters. This can consume considerable memory and requires extra time to manage the memory.
Any problem that can be solved recursively can be solved nonrecursively with iterations. Recursion has some negative aspects: it uses up too much time and too much memory. Why, then, should you use it? In some cases, using recursion enables you to specify a clear, simple solution for an inherently recursive problem that would otherwise be difficult to obtain.
Examples are the directory-size problem, the Tower of Hanoi problem, and the fractal problem, which are rather difficult to solve without using recursion.
The decision whether to use recursion or iteration should be based on the nature of, and your understanding of, the problem you are trying to solve. The rule of thumb is to use whichever approach can best develop an intuitive solution that naturally mirrors the problem. If an iterative solution is obvious, use it. It will generally be more efficient than the recursive option.
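As a concrete comparison (my own example, not from the original text), the same factorial can be written both ways; the iterative version uses constant stack space, while the recursive one grows the call stack with n:

```java
public class FactorialBothWays {
    // Recursive form: one stack frame per call, so very large n
    // risks a StackOverflowError.
    static long recursive(int n) {
        return n == 0 ? 1 : n * recursive(n - 1);
    }

    // Iterative form: a single loop, no extra stack frames.
    static long iterative(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(recursive(10)); // prints 3628800
        System.out.println(iterative(10)); // prints 3628800
    }
}
```

Both produce identical results; the difference that matters here is only in time and memory behavior.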
Recursive programs can run out of memory, causing a **StackOverflowError**. If you are concerned about your program’s performance, avoid using recursion, because it takes more time and consumes more memory than iteration. In general, recursion can be used to solve the inherent recursive problems such as Tower of Hanoi, recursive
directories, and Sierpinski triangles. | paulike |
1,909,362 | Case Study: Fractals | Using recursion is ideal for displaying fractals, because fractals are inherently recursive. A... | 0 | 2024-07-02T20:25:41 | https://dev.to/paulike/case-study-fractals-70m | java, programming, learning, beginners | Using recursion is ideal for displaying fractals, because fractals are inherently recursive. A _fractal_ is a geometrical figure, but unlike triangles, circles, and rectangles, fractals can be divided into parts, each of which is a reduced-size copy of the whole. There are many interesting examples of fractals. This section introduces a simple fractal, the _Sierpinski_ triangle, named after a famous Polish mathematician.
A Sierpinski triangle is created as follows:
1. Begin with an equilateral triangle, which is considered to be a Sierpinski fractal of order (or level) **0**, as shown in Figure below (a).
2. Connect the midpoints of the sides of the triangle of order **0** to create a Sierpinski triangle of order **1** (Figure below (b)).
3. Leave the center triangle intact. Connect the midpoints of the sides of the three other triangles to create a Sierpinski triangle of order **2** (Figure below (c)).
4. You can repeat the same process recursively to create a Sierpinski triangle of order **3**, **4**, . . . , and so on (Figure below (d)).

The problem is inherently recursive. How do you develop a recursive solution for it? Consider the base case when the order is **0**. It is easy to draw a Sierpinski triangle of order **0**. How do you draw a Sierpinski triangle of order **1**? The problem can be reduced to drawing three Sierpinski triangles of order **0**. How do you draw a Sierpinski triangle of order **2**? The problem can be reduced to drawing three Sierpinski triangles of order **1**, so the problem of drawing a Sierpinski triangle of order **n** can be reduced to drawing three Sierpinski triangles of order **n - 1**.
The code below gives a program that displays a Sierpinski triangle of any order, as shown in Figure above. You can enter an order in a text field to display a Sierpinski triangle of the specified order.
```java
package application;

import javafx.application.Application;
import javafx.geometry.Point2D;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.TextField;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.HBox;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Polygon;
import javafx.stage.Stage;

public class SierpinskiTriangle extends Application {
  @Override // Override the start method in the Application class
  public void start(Stage primaryStage) {
    SierpinskiTrianglePane trianglePane = new SierpinskiTrianglePane();
    TextField tfOrder = new TextField();
    tfOrder.setOnAction(
        e -> trianglePane.setOrder(Integer.parseInt(tfOrder.getText())));
    tfOrder.setPrefColumnCount(4);
    tfOrder.setAlignment(Pos.BOTTOM_RIGHT);

    // Pane to hold label, text field, and a button
    HBox hBox = new HBox(10);
    hBox.getChildren().addAll(new Label("Enter an order: "), tfOrder);
    hBox.setAlignment(Pos.CENTER);

    BorderPane borderPane = new BorderPane();
    borderPane.setCenter(trianglePane);
    borderPane.setBottom(hBox);

    // Create a scene and place it in the stage
    Scene scene = new Scene(borderPane, 200, 210);
    primaryStage.setTitle("SierpinskiTriangle"); // Set the stage title
    primaryStage.setScene(scene); // Place the scene in the stage
    primaryStage.show(); // Display the stage

    scene.widthProperty().addListener(ov -> trianglePane.paint());
    scene.heightProperty().addListener(ov -> trianglePane.paint());
  }

  public static void main(String[] args) {
    Application.launch(args);
  }

  /** Pane for displaying triangles */
  static class SierpinskiTrianglePane extends Pane {
    private int order = 0;

    /** Set a new order */
    public void setOrder(int order) {
      this.order = order;
      paint();
    }

    SierpinskiTrianglePane() {
    }

    protected void paint() {
      // Select three points in proportion to the pane size
      Point2D p1 = new Point2D(getWidth() / 2, 10);
      Point2D p2 = new Point2D(10, getHeight() - 10);
      Point2D p3 = new Point2D(getWidth() - 10, getHeight() - 10);

      this.getChildren().clear(); // Clear the pane before redisplay
      displayTriangles(order, p1, p2, p3);
    }

    private void displayTriangles(int order, Point2D p1,
        Point2D p2, Point2D p3) {
      if (order == 0) {
        // Draw a triangle to connect three points
        Polygon triangle = new Polygon();
        triangle.getPoints().addAll(p1.getX(), p1.getY(),
            p2.getX(), p2.getY(), p3.getX(), p3.getY());
        triangle.setStroke(Color.BLACK);
        triangle.setFill(Color.WHITE);
        this.getChildren().add(triangle);
      }
      else {
        // Get the midpoint on each edge in the triangle
        Point2D p12 = p1.midpoint(p2);
        Point2D p23 = p2.midpoint(p3);
        Point2D p31 = p3.midpoint(p1);

        // Recursively display three triangles
        displayTriangles(order - 1, p1, p12, p31);
        displayTriangles(order - 1, p12, p2, p23);
        displayTriangles(order - 1, p31, p23, p3);
      }
    }
  }
}
```
The initial triangle has three points set in proportion to the pane size (lines 62–64). If **order == 0**, the **displayTriangles(order, p1, p2, p3)** method displays a triangle that connects the three points **p1**, **p2**, and **p3** in lines 74–79, as shown in Figure below (a). Otherwise, it performs the following tasks:
1. Obtain the midpoint between **p1** and **p2** (line 83), the midpoint between **p2** and **p3** (line 84), and the midpoint between **p3** and **p1** (line 85), as shown in Figure below (b).
2. Recursively invoke **displayTriangles** with a reduced order to display three smaller Sierpinski triangles (lines 88–90). Note that each small Sierpinski triangle is structurally identical to the original big Sierpinski triangle except that the order of a small triangle is one less, as shown in Figure below (b).

A Sierpinski triangle is displayed in a **SierpinskiTrianglePane**. The **order** property in the inner class **SierpinskiTrianglePane** specifies the order for the Sierpinski triangle. The **Point2D** class represents a point with _x_- and _y_-coordinates. Invoking **p1.midpoint(p2)** returns a new **Point2D** object that is the midpoint between **p1** and **p2** (lines 83–85). | paulike |
1,909,307 | Using OCI Bucket for Terraform/OpenTofu remote state backend | Store Terraform state files in Oracle Cloud Infrastructure (OCI) Object Storage by configuring an... | 0 | 2024-07-02T20:19:03 | https://dev.to/farisdurrani/using-oci-bucket-for-terraformopentofu-remote-state-backend-n15 | oraclecloud, cloud, terraform | _Store Terraform state files in Oracle Cloud Infrastructure (OCI) Object Storage by configuring an S3-compatible backend._
A Terraform backend defines where Terraform stores its state data files. Without a backend, the state file lives locally on a single machine, which makes it hard for teammates to work from the same cloud state and forces sensitive information to be stored locally.
This page describes how to configure an S3-compatible backend on OCI Object Storage Bucket by adding the backend block to your configuration.
## A simple example
### Assumptions
- A Terraform/OpenTofu version >= 1.7
### 1. Install Terraform/OpenTofu
Follow the official installation page to install the Terraform or OpenTofu CLI on your machine:
- [Terraform](https://developer.hashicorp.com/terraform/install)
- [OpenTofu](https://opentofu.org/docs/intro/install)
All instructions in this doc will use the `terraform` CLI and otherwise refer to Terraform. Simply swap `terraform` with `tofu` if you prefer to use OpenTofu as all instructions and file contents are otherwise similar.
### 2. Configure the OCI Provider profile
To deploy OCI resources, you need access to manage the resources from your machine. This can be achieved using an API Key. To complete this step, see [Setting up the OCI Configuration File using API Keys](https://dev.to/farisdurrani/setting-up-the-oci-configuration-file-using-api-keys-96c).
### 3. Create your OCI Customer Secret Key
Create a Customer Secret Key on your OCI console. This key enables Terraform to write to the bucket.
Head to **Profile picture** > **My profile** > **Customer secret keys** > **Generate secret key**
Give any display name you desire.

### 4. Add your Customer Secret Key to the AWS credentials file
i) Create or go to the file `~/.aws/credentials`
ii) Add the secret **Generated key** and **Access key** in the file under a profile name.
In this example, we use `default` as the profile name.
```txt
[default]
aws_access_key_id=68ce92f58a480b5cc17205467816a53b662f167a
aws_secret_access_key=1swn+e6GIyRz4tcEO42b95im7EBVO8rM5WM9apTs+fQ=
```

### 5. Create your Terraform files
We'll create a folder with these files to create one VCN in a specified compartment:
```txt
📦terraform-test
┣ 📜main.tf
┣ 📜provider.tf
┗ 📜terraform.tf
```
The `terraform.tf` file will:
- tell Terraform to use the `oci` provider
- ensure the Terraform version is >= 1.7
- use the S3-compatible OCI bucket backend to store the state
**Important**
Make sure to update:
- the `bucket` attribute to reflect the name of your bucket
- the `endpoints` attribute to use your region and object storage namespace (found in Profile > Tenancy Details)
- the `profile` attribute. We use `"default"` as set in the previous step. Optionally for better configuration, use [Partial Configuration](https://developer.hashicorp.com/terraform/language/settings/backends/configuration#partial-configuration)
```hcl
# terraform.tf
terraform {
required_providers {
oci = {
source = "oracle/oci"
version = ">= 6.0.0"
}
}
required_version = ">=1.7"
backend "s3" {
bucket = "bucket01"
key = "terraform.tfstate"
region = "us-ashburn-1"
endpoints = { s3 = "https://idjqfqrpn5uq.compat.objectstorage.us-ashburn-1.oci.customer-oci.com" }
profile = "default"
skip_region_validation = true
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
skip_s3_checksum = true
use_path_style = true
}
}
```
The `provider.tf` file sets the OCI profile you are using; `DEFAULT` is the default profile.
```hcl
# provider.tf
provider "oci" {
config_file_profile = "DEFAULT"
}
```
The `main.tf` file creates one simple VCN in the compartment you specify. Make sure to edit the `compartment_id`.
```hcl
# main.tf
resource "oci_core_vcn" "test_vcn" {
#Required
compartment_id = "ocid1.compartment.oc1..aaaaaaaaivk7ay7yourcompartmentocidpdx3rb37g55uguzga"
#Optional
cidr_blocks = ["10.5.0.0/16"]
display_name = "vcn-test-01"
}
```
### 6. Deploy
Let us initialize and apply the plan:
```sh
terraform init
terraform apply
```
If all goes well, we see a success message:

And of course, the created VCN:

The Terraform tfstate file in the bucket:

## References
- [Oracle: Using Object Storage for State Files](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/terraformUsingObjectStore.htm)
## Safe harbor statement
_The information provided on this channel/article/story is solely intended for informational purposes and cannot be used as a part of any contractual agreement. The content does not guarantee the delivery of any material, code, or functionality, and should not be the sole basis for making purchasing decisions. The postings on this site are my own and do not necessarily reflect the views or work of Oracle or Mythics, LLC._
_This work is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0)._ | farisdurrani |
1,909,347 | What is the difference between git commit -m and git commit -am? | With git commit -m you only commit changes which are added using git add command. With git commit... | 0 | 2024-07-02T20:13:19 | https://dev.to/mbshehzad/what-is-the-difference-between-git-commit-m-and-git-commit-am-1151 | git, node | With **git commit -m** you only commit changes which are added using git add command.
With **git commit -am** you commit ALL changes to files Git already tracks, including modifications that were not staged with git add. Note that new (untracked) files are still excluded and must first be added with git add. | mbshehzad |
1,908,392 | Creating a Flask App: My issues as a beginner and how I fixed them. | Hi everyone, Eworitse Egbejule here. This is my first blog post and I want to share my experience... | 0 | 2024-07-02T20:11:22 | https://dev.to/ebiscott/creating-a-flask-app-my-issues-as-a-beginner-and-how-i-learnt-to-fix-them-254j | webdev, beginners, python, learning | Hi everyone, **Eworitse Egbejule** here. This is my first blog post and I want to share my experience with creating a flask app; the issues I had while making it and how I fixed them.
Flask is a lightweight and flexible microframework used for building web apps in Python with minimal setup. Here's a sample code of a simple flask web app:

My app is called Microblog. It's a simple Flask blog app with its frontend hand-written in HTML and CSS.
I followed these steps to build my app:
#### Step 1: Design the Frontend
* I started by creating the HTML and CSS code for a simple blog interface.
* The blog lets users record notes in the form of a diary.
* The entries are then presented back to the user in a clean, user-friendly format.
#### Step 2: Develop the Backend
* I followed a tutorial to build the Flask backend. Flask is perfect for this due to its simplicity and flexibility.
* This stage involved setting up routes, handling user inputs, and managing the data.
#### Step 3: Handle Backend Hosting Issues
* The main challenge came with hosting the site. Initially, I attempted to host it on [Render](https://render.com).
* I connected it to my GitHub repo [here](https://github.com/EbiScott/practice-blog), but encountered multiple errors.
* The errors were frustrating, and a friend suggested switching to [Vercel](https://vercel.com). Even on Vercel, I faced troubleshooting issues.
## Steps I took to fix my deployment issues:
* Checked the `app.py` file:
I verified that my Flask app was correctly instantiated as `app`.
```python
from flask import Flask
app = Flask(__name__)
if __name__ == '__main__':
app.run()
```
* Updated the gunicorn command:
I modified the start command in my deployment settings to:
``` bash
gunicorn app:app
```
* Ensured the correct directory structure:
I checked to make sure that `app.py` is in the root directory of the project.
* Created a `vercel.json` configuration file:
```json
{
"version": 2,
"builds": [
{
"src": "app.py",
"use": "@vercel/python"
}
],
"routes": [
{
"src": "/(.*)",
"dest": "app.py"
}
]
}
```
* Push to GitHub and Link to Vercel:
After fixing the issues I pushed my code to GitHub. Then I linked my GitHub repository to Vercel for automatic deployment.
### Deployment: Making It Live
I deployed the application by pushing my code to GitHub and linking the repository to Vercel for automatic deployment. You can check out the live app at [The Microblog](https://the-microblog.vercel.app/).

## HNG 11 Backend Internship
The process of building this microblog application strengthened my interest and love for backend development. It taught me valuable lessons in routing, CRUD operations, and deployment. Now, I'm excited to bring this passion and these skills to the HNG Internship.
The HNG Internship provides a great opportunity to learn from experienced professionals, work on practical projects, and connect with other programmers. I'm very much interested in the intensive learning and opportunities provided by the internship. If you're interested and want to find out more about the program, please check out [HNG Internship](https://hng.tech/internship) and [HNG Hire](https://hng.tech/hire).
## Conclusion
This project was a significant milestone in my backend development journey. I'm really looking forward to the challenges and learning opportunities that the HNG Internship will bring. If you're also an aspiring developer like me, I highly recommend checking out the HNG Internship to take your skills to the next level. | ebiscott |
1,909,346 | Case Study: Tower of Hanoi | The Tower of Hanoi problem is a classic problem that can be solved easily using recursion, but it is... | 0 | 2024-07-02T20:07:33 | https://dev.to/paulike/case-study-tower-of-hanoi-18j6 | java, programming, learning, beginners | The Tower of Hanoi problem is a classic problem that can be solved easily using recursion, but it is difficult to solve otherwise. The problem involves moving a specified number of disks of distinct sizes from one tower to another while observing the following rules:
- There are n disks labeled 1, 2, 3, . . . , n and three towers labeled A, B, and C.
- No disk can be on top of a smaller disk at any time.
- All the disks are initially placed on tower A.
- Only one disk can be moved at a time, and it must be the smallest disk on a tower.
The objective of the problem is to move all the disks from A to B with the assistance of C. For example, if you have three disks, the steps to move all of the disks from A to B are shown in Figure below.

In the case of three disks, you can find the solution manually. For a larger number of disks, however—even for four—the problem is quite complex. Fortunately, the problem has an inherently recursive nature, which leads to a straightforward recursive solution.
The base case for the problem is **n = 1**. If **n == 1**, you could simply move the disk from A to B. When **n > 1**, you could split the original problem into the following three subproblems and solve them sequentially.
1. Move the first **n - 1** disks from A to C recursively with the assistance of tower B, as shown in Step 1 in Figure below.
2. Move disk **n** from A to B, as shown in Step 2 in Figure below.
3. Move **n - 1** disks from C to B recursively with the assistance of tower A, as shown in Step 3 in Figure below.

The following method moves n disks from the **fromTower** to the **toTower** with the assistance of the **auxTower**:
`void moveDisks(int n, char fromTower, char toTower, char auxTower)`
The algorithm for the method can be described as:
```
if (n == 1) // Stopping condition
  Move disk 1 from the fromTower to the toTower;
else {
  moveDisks(n - 1, fromTower, auxTower, toTower);
  Move disk n from the fromTower to the toTower;
  moveDisks(n - 1, auxTower, toTower, fromTower);
}
```
The code below gives a program that prompts the user to enter the number of disks and invokes the recursive method **moveDisks** to display the solution for moving the disks.

This problem is inherently recursive. Using recursion makes it possible to find a natural, simple solution. It would be difficult to solve the problem without using recursion.
Consider tracing the program for **n = 3**. The successive recursive calls are shown in Figure below. As you can see, writing the program is easier than tracing the recursive calls. The system uses stacks to manage the calls behind the scenes. To some extent, recursion provides a level of abstraction that hides iterations and other details from the user.
 | paulike |
1,909,344 | à mon égard: From Bug Squasher to Code Conqueror | Hey everyone, it's conradgabe here, your friendly neighborhood backend developer! Today, I want to... | 0 | 2024-07-02T20:05:34 | https://dev.to/conradgabe/a-mon-egard-from-bug-squasher-to-code-conqueror-889 | Hey everyone, it's conradgabe here, your friendly neighborhood backend developer! Today, I want to take you on a thrilling adventure – not into the jungle, but into the heart of a particularly nasty backend bug. Buckle up, because we're about to dissect the problem, step-by-step, and emerge victorious (with a newfound appreciation for clean code!).
Now, before we delve into the code caverns, let me tell you a little secret: a crucial part of being a backend developer is selling yourself. It's not just about writing elegant code (though that's definitely important!); it's about showing potential employers how you can solve their problems and navigate even the trickiest situations. That's why I'm incredibly excited to announce that I'm embarking on a new journey with the HNG Internship program (https://hng.tech/)!
But first, let's conquer that bug!
The Case of the Disappearing Data
Imagine this: you're working on a critical backend system, and suddenly, data starts vanishing like a magician's act. Users report missing information, and panic starts to set in. That's exactly what happened to me recently. Data was being written to the database, but then... poof! Gone without a trace.
Step 1: Embrace the Detective Mindset
The first step in any debugging adventure is to gather clues. I meticulously looked through logs, analyzed code execution flow, and even set up breakpoints to pinpoint exactly where the data was disappearing. It was like being a digital Sherlock Holmes, except with a keyboard instead of a magnifying glass.
Step 2: Interrogate the Suspects (The Code)
After gathering my evidence, it was time to confront the prime suspects – the code itself. Line by line, I went through the data handling process, looking for any logical flaws or race conditions that might be causing the issue. The culprit turned out to be a sneaky bug in a recently implemented caching mechanism. It was caching not just the data itself, but also the fact that it shouldn't be there anymore!
Step 3: Fix the Glitch and Celebrate the Win
With the culprit identified, fixing the bug was a breeze. A minor code modification later, and our missing data reappeared like a rabbit from a hat. The relief was exhilarating, and it served as a powerful reminder of the satisfaction that comes from solving a complex problem.
Why HNG Internship?
Now, let's talk about the HNG Internship. As a developer who thrives on challenges and enjoys collaborating with a talented community, this program is a perfect fit. I'm particularly drawn to the focus on building real-world projects (https://hng.tech/internship) – the kind that have a tangible impact on people's lives. This internship will be a fantastic opportunity to hone my skills, learn from experienced mentors, and contribute to something meaningful.
I can't wait to embark on this new adventure with HNG and share my experiences with all of you. Stay tuned for more coding conquests, debugging triumphs, and insights from the HNG trenches!
conradgabe out!
| conradgabe | |
1,909,343 | Which LLM to Choose: 12 key aspects to consider when building AI solutions | Overview of the Leading LLMs The leaderboard below presents a high-level comparison of... | 0 | 2024-07-02T20:04:51 | https://mindsdb.com/blog/which-llm-to-choose-12-key-aspects-to-consider-when-building-ai-solutions | ## Overview of the Leading LLMs
The leaderboard below presents a high-level comparison of leading large language models (LLMs) from various providers such as OpenAI, Google, Anthropic, Cohere, Meta, Mistral AI, and Databricks. Models are evaluated based on key factors including performance (price, quality, and speed), context window length, and licensing. Models are rated on a star-based system for price, quality, and speed to help quickly identify the ideal model based on these key factors. Later in this post, we’ll dive deeper into each of these categories as well as other aspects to consider when building applications with LLMs.
A more detailed version of this leaderboard can be found [here](https://docs.google.com/spreadsheets/d/1x3dqVxb59tkOSJEIyRIIzDDDMbCdT_zMDPOrY95ROB8/edit?gid=1507205688#gid=1507205688).
## Navigating the LLM Revolution
The rise of large language models (LLMs) has revolutionized natural language processing, enabling companies to seamlessly apply AI to a broad range of tasks. This analysis explores leading LLMs, examining their capabilities, applications, and performance as well as key considerations to make when deciding which model to use. Our focus includes not only OpenAI models but also notable contenders like Anthropic, Meta, Google, and more.
LLMs have evolved from traditional NLP models designed for specific tasks to versatile tools capable of handling a wide range of applications. Models like OpenAI's ChatGPT have shifted the paradigm, excelling in various tasks without the need for specialized training. By connecting these models to internal data, businesses can seamlessly integrate AI, outperforming traditional NLP models in most major tasks.
Over the past year, the adoption of LLM technologies has surged, with startups and major corporations developing their own foundation models. Giants like OpenAI, Google, and Meta lead the charge, while newcomers like Mistral AI and Databricks are also making significant strides. This blog aims to guide you through the complex process of choosing the right model for your needs.
## Understanding LLM Benchmarks
When selecting a model, the instinct might be to choose the “best” one. However, picking an LLM can be more complicated than it seems. Standard benchmarks rank models based on their performance on test sets, which can cover general knowledge or be domain-specific (e.g., coding, multilingual tasks). While benchmarks are useful, they have limitations:
- **Data Leakage:** Often, test data leaks into training datasets, causing models to memorize answers. This skews leaderboard results so they do not accurately reflect real-world performance.
- **Errors:** Some leaderboards have fundamental flaws and errors, so their rankings should be taken with a grain of salt.
- **Real World Performance:** Benchmark performance might not correlate directly with domain-specific tasks, especially if your use case differs from the benchmark scenarios.
There are many different benchmarks all with different pros and cons, and it’s generally a good practice to look at model performance across different benchmarks. Some of the popular benchmarks to consider include the following:
- **MMLU (Massive Multitask Language Understanding):** A benchmark that evaluates the performance of large language models across 57 diverse subjects at varying difficulty levels using multiple-choice questions.
- **Chatbot Arena:** A web application where users chat with multiple models and vote on the best output. This benchmark considers response quality, speed, and conciseness.
- **MT-Bench (Multi-Turn Benchmark):** A benchmark that measures how well large language models handle multi-turn conversations across a variety of task categories such as writing, reasoning, and question answering.
- **HumanEval:** A benchmark that assesses the code generation capabilities of large language models by evaluating their ability to generate correct and functional code based on given programming problems.
We reference [Artificial Analysis](https://artificialanalysis.ai/) to pull model benchmarks and recommend checking them out for more information. For easier reference, in our comparison chart, we rank models based on price and their MMLU score out of 3 stars using quartile rankings. Note that these benchmarks are not perfect representations of model performance. Recently, Scale AI launched a [private leaderboard](https://scale.com/leaderboard) to provide better model evaluations, which we also recommend exploring. While leaderboards are useful guides, remember to consider other factors like cost, speed, privacy, and specific features when choosing an LLM.
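For illustration, a quartile-based star rating like the one used in the comparison chart can be computed as follows (the model names and scores below are invented for this sketch, not actual benchmark results):

```python
from statistics import quantiles

def star_ratings(scores):
    """Map each score to 1-3 stars by quartile: bottom quartile = 1 star,
    middle two quartiles = 2 stars, top quartile = 3 stars."""
    q1, q2, q3 = quantiles(scores.values(), n=4)  # quartile cut points
    return {name: 3 if s >= q3 else (1 if s <= q1 else 2)
            for name, s in scores.items()}

# Hypothetical MMLU-style scores (illustrative only)
mmlu = {"model-a": 86.4, "model-b": 79.0, "model-c": 70.0, "model-d": 63.7,
        "model-e": 75.2, "model-f": 81.9, "model-g": 56.1, "model-h": 68.4}
print(star_ratings(mmlu))
```

The same scheme can be applied per axis (price, quality, speed) to build a compact leaderboard view.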
## Why Experiment with Different Models
Benchmarks and performance aren’t the only things to consider when picking which model to use. Choosing the right LLM involves considering various factors such as cost, speed, and performance. For example, if developing a local application where models are meant to run on device, using a large model would be very slow and almost unusable. While most leaderboards place OpenAI’s GPT-4 at the top based on standard benchmarks, it may not always be the best choice for every use case. Swapping in different open-source models and augmenting them with external data using techniques like RAG (retrieval-augmented generation) can bridge performance gaps and reduce costs while also offering other benefits in terms of speed and specialized capabilities. For instance, some models are faster (beneficial for real-time applications) and cheaper (useful for processing large volumes of text).
**5 Key Aspects for Picking an LLM:**
1. **Performance:** Models are typically evaluated on standard benchmarks and ranked based on scores. Depending on the use case, consider benchmarks like MMLU for general knowledge and HumanEval for code-related tasks.
2. **Cost:** LLMs are usually hosted by a company and priced per token. Costs can vary significantly; for example, smaller open-source models like Llama-3-8b ($0.20/1M tokens) cost significantly less than GPT-4 ($30/1M tokens). While cheaper models may not perform as well, they can be sufficient for some tasks. Exploring different models across various price ranges can help identify the best model for your budget.
3. **Output Speed:** Depending on the use case, speed can be crucial. In real-time applications like voice apps, slow responses can impact user experience, whereas speed may be less critical for tasks like processing meeting transcripts overnight. Output speed is measured in two ways: the latency or time to first token (TTFT) and overall tokens per second (throughput). A quick TTFT allows for streaming responses, making the application feel faster, while throughput is crucial for tasks requiring the full text to be generated quickly.
4. **Privacy Features:** Proprietary models from companies like OpenAI, Google, and Anthropic are accessed via API, requiring data to be shared with the hosting company. For some applications, this is acceptable, but in others, using an open-source model that can be hosted locally might be better, ensuring data remains on-premises.
5. **Specific Capabilities:** Some models are specialized for tasks like code generation, tool use, function calling, or processing multiple modalities (e.g., images, audio). Using these specialized models can be more cost-effective and result in better performance compared to larger, more expensive general models. Some examples include Code Llama and Cohere’s retrieval models.
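The two output-speed measurements from point 3 can be computed from response timestamps. Here is a sketch with simulated token arrival times (no real API calls; the function name is invented for this illustration):

```python
def speed_metrics(request_time, token_times):
    """Compute time-to-first-token (latency) and tokens/second (throughput)
    from the request timestamp and the arrival time of each streamed token."""
    ttft = token_times[0] - request_time          # how long until streaming starts
    total = token_times[-1] - request_time        # full generation time
    throughput = len(token_times) / total         # overall tokens per second
    return ttft, throughput

# Simulated stream: first token after 0.5 s, then one token every 0.02 s
times = [0.5 + 0.02 * i for i in range(100)]
ttft, tps = speed_metrics(0.0, times)
print(f"TTFT: {ttft:.2f}s, throughput: {tps:.1f} tokens/s")
```

A model with a low TTFT feels responsive even if its overall throughput is modest, which is why the two metrics should be weighed separately per use case.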
## Which model to use
Choosing the right model is often the first step in building a robust application. LLMs are large pretrained models with extensive general knowledge and reasoning capabilities, suitable for a wide range of tasks. However, they have limitations, including a fixed knowledge cutoff date.
To keep models up-to-date, they often need external data, which can be integrated using tools like search APIs or techniques like retrieval-augmented generation (RAG). Some tasks, like sentiment analysis, classification, or translation, can be handled by models without additional data, especially when prompted with a few examples. However, tasks requiring specific or private data, such as chatbots referencing internal documents, need supplementary data to function correctly.
When an application relies heavily on a model's internal knowledge, proprietary models like GPT-4, Gemini, and Opus tend to outperform smaller open-source models like Llama or Mistral. This is because larger models possess more extensive general knowledge and superior reasoning abilities. However, the performance gap narrows when model outputs are augmented with external data and techniques like few-shot prompting.
## General Guidance for Picking a Model
- **Start with High-Performing Models:** Begin prototyping and developing your LLM application with top-performing models like OpenAI’s GPT-4, Google’s Gemini, or Claude’s Opus to ensure high output quality.
- **Iterate and Optimize:** After establishing a baseline with a high-performing model, explore swapping in different models based on specific use cases and cost considerations. Techniques such as supplementing with few-shot examples or connecting to external tools can help match the performance of larger models.
- **Evaluate Trade-offs:** Consider other benefits beyond performance, such as speed. In some cases, a slight drop in performance might be acceptable if it results in significant improvements in generation speed or cost savings.
By experimenting with different models and techniques, you can optimize both the cost and performance of your LLM applications, potentially achieving better results than sticking with a single model.
## 7 Key Considerations for Building an LLM Application
LLMs are excellent for quick demos since they provide a great out-of-the-box experience with their internal knowledge. However, creating a robust and reliable application involves more than just the model. Building an end-to-end AI application includes several key components:
1. **Data Connectors:** LLM applications often need to connect models with data from various sources like databases, APIs, and cloud storage. Tools like MindsDB simplify this process by integrating multiple data sources into a single platform, making data management and utilization more efficient.
2. **Data Preprocessing:** Preparing and cleaning data ensures quality inputs for the model. LLMs perform best with structured data, so preprocessing raw data is crucial for improving model accuracy and efficiency.
3. **Embedding Models:** Embedding models encode data into dense vector representations, capturing semantic meaning and aiding in tasks like similarity search and classification. High-quality embeddings enhance the model’s ability to understand and process data effectively.
4. **Vector Databases:** Vector databases store and query embeddings efficiently, enabling fast similarity searches and handling large volumes of high-dimensional data. They are crucial for applications requiring real-time responses and high scalability.
5. **RAG Pipelines:** RAG pipelines enhance LLM responses by integrating external data. Relevant documents or data are retrieved and used to augment the model’s output, providing more accurate and up-to-date responses. Setting up a RAG pipeline involves many steps, from retrieving the right documents (whether with traditional keyword search or semantic similarity) to reranking and preprocessing them with models, before you have a working pipeline.
6. **Prompt Engineering/Management:** Effective prompts guide the model to produce specific responses. Prompt engineering involves designing and managing prompts to be contextually relevant and optimized for performance, significantly enhancing the model’s output accuracy and relevance.
7. **Observability and Evaluation:** Monitoring and evaluating model performance is crucial for reliability. Observability tools track metrics like response time and accuracy, while evaluation tools assess outputs against benchmarks. These tools help in detecting issues and making data-driven improvements.
Testing and building robust pipelines that integrate these components is crucial for creating production-grade LLM applications. MindsDB can help by providing a streamlined way to connect and preprocess data, making the process more efficient and reliable.
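To make the retrieve-then-augment idea concrete, here is a toy sketch in which naive keyword overlap stands in for the embedding model and vector database a real pipeline would use:

```python
def retrieve(query, documents, k=1):
    """Rank documents by naive keyword overlap with the query
    (a real pipeline would use embeddings and a vector database)."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def augment_prompt(query, documents):
    """Prepend the retrieved context to the question before sending it to the model."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
]
prompt = augment_prompt("What is the capital of France?", docs)
```

Swapping the overlap scorer for an embedding similarity search changes nothing downstream: the augmented prompt is still just context plus question.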
## Introducing “Minds” - pre-packaged AI systems
[Minds](https://start.mdb.ai/) are AI systems with built-in expertise designed to help AI agents accomplish tasks. These plug-and-play systems need little setup and are designed to be consumed in a seamless manner just like traditional LLMs with an OpenAI compatible API.
Minds abstract away the complexities of building an AI application and bundle all of the components into a “Mind” that creates an agent to seamlessly accomplish the task it was created for. Our first Mind - the Database-Mind - is designed to directly interact with data using natural language. To use it, all you need to do is pass in a database schema and ask questions in natural language; the Mind handles the rest and just returns answers.
Database-Mind is the first of many to come. You can check it out for free [here](https://docs.mdb.ai/docs/database-mind).
## Deployment Options: Self Hosting vs Serverless
When deploying LLMs, you have several options, each with its own advantages and considerations. Here's a comparison between self-hosting and serverless deployment, along with insights on using inference providers.
**Self Hosting**
Self-hosting LLMs provides greater control over the environment and ensures that all data remains on-premises, which can be crucial for maintaining privacy and security. This approach is particularly beneficial for applications that handle sensitive information and cannot afford to share data with third parties. However, self-hosting requires significant upfront investment in infrastructure and technical expertise to manage and maintain the systems. While this can lead to lower costs for high-volume usage in the long run, the initial setup and ongoing management can be complex and resource-intensive.
**Serverless Deployment**
Serverless deployment offers the advantage of scalability and reduced maintenance overhead. This option is ideal for applications that need to scale quickly and efficiently without the need for significant infrastructure investment. With serverless deployment, you can focus on developing your application while the service provider handles the infrastructure, scaling, and maintenance. This model is particularly useful for variable workloads, where the demand can fluctuate, as it allows for automatic scaling to meet the demand without manual intervention.
**Inference Providers**
Inference providers like Anyscale, Fireworks AI, and Together AI offer services that simplify the deployment and management of LLMs. These providers offer several advantages:
- **Ease of Integration:** Inference providers offer standardized APIs, making it simple to integrate LLMs into your applications.
- **Scalability:** They provide auto-scaling capabilities to handle varying workloads efficiently.
- **Cost Efficiency:** By hosting open-source models, these providers can offer serverless endpoints at a lower cost compared to proprietary models, enabling you to swap out expensive models for cheaper alternatives without sacrificing performance.
- **Advanced Features:** Many inference providers offer additional features such as model fine-tuning and assistance in deploying custom instances, allowing you to tailor the models to your specific needs.
- **Monitoring and Optimization:** These services include tools for monitoring and optimizing model performance, helping to ensure reliability and efficiency.
Inference providers can significantly reduce the complexity of deploying and scaling LLMs, making it easier for businesses to leverage the power of these models without the need for extensive infrastructure and technical expertise.
## Wrapping up
Navigating the landscape of large language models (LLMs) requires a careful balance of multiple factors such as performance, cost, speed, privacy, and specific capabilities. The evolution of LLMs from task-specific NLP models to versatile, general-purpose tools has revolutionized natural language processing and broadened the range of applications where these models can excel.
While benchmarks provide valuable insights into model performance, they should be considered alongside real-world application needs and constraints. It's crucial to experiment with different models and techniques, such as retrieval-augmented generation (RAG) and prompt engineering, to find the optimal balance for your specific use case.
Understanding the deployment options is also vital. Whether you choose to self-host or use serverless deployment, each approach comes with its own set of benefits and trade-offs. Inference providers can simplify the deployment process, offering scalable and cost-effective solutions that integrate seamlessly into your application infrastructure.
In conclusion, the right LLM for your application depends on a thorough evaluation of your requirements and constraints. By staying informed about the capabilities and limitations of different models, and by leveraging the right tools and techniques, you can harness the full potential of LLMs to drive innovation and efficiency in your projects. The landscape of LLMs is rapidly evolving, and staying adaptable and open to experimentation will be key to success in this dynamic field.
| mindsdbteam | |
1,909,342 | Did you know? | 1.the world's first programmer was a woman. 2.99% of programmers are men and only 1% are women. 3.90%... | 0 | 2024-07-02T20:04:36 | https://dev.to/abdurahmon_mansurov/did-you-know-4ocj |  | 1. The world's first programmer was a woman.
2. 99% of programmers are men and only 1% are women.
3. 90% of programmers are under 45 years old.
4. Half of programmers come to the office in sandals or slippers. That is why they are called "Guys in sandals".
5. Although there are 8,500 programming languages in the world, programmers use only one in 10 of them. | abdurahmon_mansurov |
1,909,341 | 🛡️ Key cybersecurity threats in 2024: What should businesses know? | 🔒 Cybersecurity is paramount in today's digital landscape. With new threats emerging daily, it's... | 0 | 2024-07-02T20:00:09 | https://dev.to/namik_ahmedov/key-cybersecurity-threats-in-2024-what-should-businesses-know-38j1 | cybersecurity, security | 🔒 Cybersecurity is paramount in today's digital landscape. With new threats emerging daily, it's crucial to stay informed. Some key threats include:
1️⃣ **Phishing** - attackers posing as trusted sources.
2️⃣ **Ransomware** - extortion programs holding data hostage.
3️⃣ **DDoS Attacks** - overwhelming servers with traffic.
🛡️ To protect yourself, start with regular updates and robust passwords. Enhance security with multi-layered measures, educate employees on cybersecurity basics, and utilize modern detection systems to mitigate risks. What security practices does your company follow? Share your insights! #Cybersecurity #DataProtection #TechSecurity | namik_ahmedov |
1,909,340 | What is the difference between null and undefined? | undefined means a variable has been declared but has not yet been assigned a value, whereas null is... | 0 | 2024-07-02T19:56:22 | https://dev.to/abdurahmon_mansurov/what-is-the-difference-between-null-and-undefined-5hg7 | `undefined` means a variable has been declared but has not yet been assigned a value, whereas `null` is an assignment value, meaning that a variable has been declared and explicitly given the value `null`. | abdurahmon_mansurov |
1,909,339 | Case Study: Finding the Directory Size | Recursive methods are efficient for solving problems with recursive structures. The preceding... | 0 | 2024-07-02T19:54:54 | https://dev.to/paulike/case-study-finding-the-directory-size-2946 | java, programming, learning, beginners | Recursive methods are efficient for solving problems with recursive structures. The preceding examples can easily be solved without using recursion. This section presents a problem that is difficult to solve without using recursion. The problem is to find the size of a directory. The size of a directory is the sum of the sizes of all files in the directory. A directory d may contain subdirectories. Suppose a directory contains files f1, f2, ... , fm and subdirectories d1, d2, ... , dn, as shown in Figure below.

The size of the directory can be defined recursively as follows:
`size(d) = size(f1) + size(f2) + ... + size(fm) + size(d1) + size(d2) + ... + size(dn)`
The **File** class can be used to represent a file or a directory and obtain the properties for files and directories. Two methods in the **File** class are useful for this problem:
- The **length()** method returns the size of a file.
- The **listFiles()** method returns an array of **File** objects under a directory.
The code below gives a program that prompts the user to enter a directory or a file and displays its size.

If the **file** object represents a directory (line 20), each subitem (file or subdirectory) in the directory is recursively invoked to obtain its size (line 23). If the **file** object represents a file (line 26), the file size is obtained and added to the total size (line 27).
What happens if an incorrect or a nonexistent directory is entered? The program will detect that it is not a directory and invoke **file.length()** (line 27), which returns **0**. Thus, in this case, the **getSize** method will return **0**.
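Reconstructed from the description above (the quoted line numbers refer to the book's listing, not this sketch), the core recursive method looks roughly like this:

```java
import java.io.File;

public class DirectorySize {
    /** Returns the total size, in bytes, of a file or directory. */
    public static long getSize(File file) {
        long size = 0;
        if (file.isDirectory()) {
            File[] files = file.listFiles(); // all files and subdirectories
            for (int i = 0; files != null && i < files.length; i++) {
                size += getSize(files[i]); // recursive call on each subitem
            }
        } else { // base case: a single file (or a nonexistent path, where length() is 0)
            size += file.length();
        }
        return size;
    }

    public static void main(String[] args) {
        System.out.println(getSize(new File(".")) + " bytes");
    }
}
```

The prompt-and-print wrapper from the program is omitted here; only the recursion itself is shown.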
To avoid mistakes, it is a good practice to test all cases. For example, you should test the program for an input of file, an empty directory, a nonexistent directory, and a nonexistent file. | paulike |
1,909,337 | Recursive Binary Search | Binary search was introduced in Searching Arrays. For binary search to work, the elements in the... | 0 | 2024-07-02T19:44:24 | https://dev.to/paulike/recursive-binary-search-i83 | java, programming, learning, beginners | Binary search was introduced in [Searching Arrays](https://dev.to/paulike/searching-arrays-4fb7). For binary search to work, the elements in the array must be in increasing order. The binary search first compares the key with the element in the middle of the array. Consider the following three cases:
- Case 1: If the key is less than the middle element, recursively search for the key in the first half of the array.
- Case 2: If the key is equal to the middle element, the search ends with a match.
- Case 3: If the key is greater than the middle element, recursively search for the key in the second half of the array.
Case 1 and Case 3 reduce the search to a smaller list. Case 2 is a base case when there is a match. Another base case is that the search is exhausted without a match. The code below gives a clear, simple solution for the binary search problem using recursion.

The first method finds a key in the whole list. The second method finds a key in the list with index from **low** to **high**.
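Reconstructed from that description, the pair of methods might look like this (this sketch simply returns -1 for a missing key; the book's listing may encode the insertion point instead):

```java
public class RecursiveBinarySearch {
    /** Finds a key in the whole list. */
    public static int binarySearch(int[] list, int key) {
        return binarySearch(list, key, 0, list.length - 1);
    }

    /** Finds a key in list[low..high]. */
    public static int binarySearch(int[] list, int key, int low, int high) {
        if (low > high) // base case: search exhausted without a match
            return -1;
        int mid = (low + high) / 2;
        if (key < list[mid])
            return binarySearch(list, key, low, mid - 1);  // Case 1: first half
        else if (key == list[mid])
            return mid;                                    // Case 2: match
        else
            return binarySearch(list, key, mid + 1, high); // Case 3: second half
    }

    public static void main(String[] args) {
        int[] list = {2, 4, 7, 10, 11, 45, 50, 59, 60, 66};
        System.out.println(binarySearch(list, 11)); // index 4
        System.out.println(binarySearch(list, 12)); // -1: not present
    }
}
```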
The first **binarySearch** method passes the initial array with **low = 0** and **high = list.length - 1** to the second **binarySearch** method. The second method is invoked recursively to find the key in an ever-shrinking subarray. | paulike |
1,909,333 | Recursive Selection Sort | Selection sort was introduced in Sorting Arrays. Recall that it finds the smallest element in the... | 0 | 2024-07-02T19:37:20 | https://dev.to/paulike/recursive-selection-sort-2jkl | java, programming, learning, beginners | Selection sort was introduced in [Sorting Arrays](https://dev.to/paulike/sorting-arrays-25n4). Recall that it finds the smallest element in the list and swaps it with the first element. It then finds the smallest element remaining and swaps it with the first element in the remaining list, and so on until the remaining list contains only a single element. The problem can be divided into two subproblems:
- Find the smallest element in the list and swap it with the first element.
- Ignore the first element and sort the remaining smaller list recursively.
The base case is that the list contains only one element. The code below gives the recursive sort method.

Two overloaded **sort** methods are defined. The first method, **sort(double[] list)**, sorts an array in **list[0..list.length - 1]** and the second method, **sort(double[] list, int low, int high)**, sorts an array in **list[low..high]**. The second method can be invoked recursively to sort an ever-shrinking subarray. | paulike |
1,909,332 | Recursive Helper Methods | Sometimes you can find a solution to the original problem by defining a recursive function to a... | 0 | 2024-07-02T19:36:33 | https://dev.to/paulike/recursive-helper-methods-4fpd | java, programming, learning, beginners | Sometimes you can find a solution to the original problem by defining a recursive function to a problem similar to the original problem. This new method is called a recursive helper method. The original problem can be solved by invoking the recursive helper method.
The recursive **isPalindrome** method in [RecursivePalindromeUsingSubstring.java](https://dev.to/paulike/problem-solving-using-recursion-539e) is not efficient, because it creates a new string for every recursive call. To avoid creating new strings, you can use the low and high indices to indicate the range of the substring. These two indices must be passed to the recursive method. Since the original method is **isPalindrome(String s)**, you have to create the new method **isPalindrome(String s, int low, int high)** to accept additional information on the string, as shown in the code below.

Two overloaded **isPalindrome** methods are defined. The first, **isPalindrome(String s)**, checks whether a string is a palindrome, and the second, **isPalindrome(String s, int low, int high)**, checks whether a substring **s(low..high)** is a palindrome. The first method passes the string **s** with **low = 0** and **high = s.length() – 1** to the second method. The second method can be invoked recursively to check a palindrome in an ever-shrinking substring. It is a common design technique in recursive programming to define a second method that receives additional parameters. Such a method is known as a _recursive helper method_.
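A sketch of the two overloaded methods described (reconstructed from the text, not copied from the book's listing):

```java
public class RecursivePalindrome {
    /** Checks whether the whole string is a palindrome. */
    public static boolean isPalindrome(String s) {
        return isPalindrome(s, 0, s.length() - 1);
    }

    /** Recursive helper: checks whether the substring s(low..high) is a palindrome. */
    public static boolean isPalindrome(String s, int low, int high) {
        if (high <= low) // base case: zero or one character left
            return true;
        if (s.charAt(low) != s.charAt(high)) // base case: mismatch found
            return false;
        return isPalindrome(s, low + 1, high - 1); // shrink the range, no new string
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome("moon")); // false
        System.out.println(isPalindrome("noon")); // true
    }
}
```

Because only the indices change between calls, no new substring objects are created, which is exactly the inefficiency the helper method removes.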
Helper methods are very useful in designing recursive solutions for problems involving strings and arrays. The sections that follow give two more examples. | paulike |
1,909,331 | My Story of VC1 | Now LET'S HOPE IT GOES LIVENow LET'S HOPE IT GOES LIVENow LET'S HOPE IT GOES LIVENow LET'S HOPE IT... | 0 | 2024-07-02T19:34:07 | https://dev.to/ishaan_singhal_f3b6b687f3/my-story-of-vc1-1lfn | Now LET'S HOPE IT GOES LIVENow LET'S HOPE IT GOES LIVENow LET'S HOPE IT GOES LIVENow LET'S HOPE IT GOES LIVENow LET'S HOPE IT GOES LIVENow LET'S HOPE IT GOES LIVE | ishaan_singhal_f3b6b687f3 | |
1,909,329 | Automation with Bash | Enhancing Efficiency with Automation and Bash Scripting: A Practical Guide In today's... | 0 | 2024-07-02T19:32:14 | https://dev.to/olavic/automation-with-bash-772 | devops, cloudcomputing, automation, webdev | ### Enhancing Efficiency with Automation and Bash Scripting: A Practical Guide
In today's fast-paced IT environments, automation is a game-changer. It saves time, reduces human error, and ensures consistency across systems. One of the most powerful tools for automation is Bash scripting, which allows administrators to automate routine tasks in Unix-based systems. In this article, we'll explore the power of Bash scripting through a practical example: automating user and group management.
#### What is Bash Scripting?
Bash (Bourne Again SHell) is a command processor that runs in a text window where users can type commands. Bash scripting involves writing a series of commands in a file, allowing them to be executed sequentially. This is incredibly useful for automating repetitive tasks, such as managing users, performing system maintenance, or deploying software.
#### Why Automate User and Group Management?
Managing users and groups is a common task for system administrators, especially in large organizations where new employees join frequently. Automating this process ensures that all users are created with the correct permissions and groups, home directories are set up properly, and passwords are securely generated and stored.
#### Example Project: Automating User Creation with Bash
Let’s dive into a practical example where we automate the process of user and group creation using a Bash script.
##### Project Requirements
- Create a script named `create_users.sh`.
- Read usernames and groups from a text file where each line is formatted as `user;groups`.
- Create users and assign them to their respective groups.
- Set up home directories with appropriate permissions.
- Generate random passwords for users and store them securely.
- Log all actions to `/var/log/user_management.log`.
##### Script Breakdown
Here’s a detailed look at the script:
1. **Ensure the Script is Run as Root**:
The script checks if it is being run as root, as creating users and modifying system files requires root privileges.
```bash
if [ "$(id -u)" -ne 0 ]; then
echo "This script must be run as root" 1>&2
exit 1
fi
```
2. **Check for Input File**:
Validates that an input file is provided as an argument to the script.
```bash
if [ -z "$1" ]; then
echo "Usage: $0 <name-of-text-file>"
exit 1
fi
INPUT_FILE=$1
```
3. **Initialize Log and Password Files**:
Defines log and password files, creates the `/var/secure` directory if it doesn't exist, and sets appropriate permissions.
```bash
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"
mkdir -p /var/secure
chmod 700 /var/secure
```
4. **Password Generation Function**:
Defines a function to generate a random 12-character password.
```bash
generate_password() {
tr -dc A-Za-z0-9 </dev/urandom | head -c 12
}
```
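Run on its own, the function produces a 12-character alphanumeric string on each call; for example:

```bash
generate_password() {
  tr -dc A-Za-z0-9 </dev/urandom | head -c 12
}

pw=$(generate_password)
echo "Generated password: $pw"   # always 12 alphanumeric characters
```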
5. **Process the Input File**:
Reads each line from the input file, extracts the username and groups, and processes them.
```bash
while IFS=';' read -r username groups; do
username=$(echo "$username" | xargs)
groups=$(echo "$groups" | xargs)
if [ -z "$username" ]; then
continue
fi
if id "$username" &>/dev/null; then
echo "User $username already exists. Skipping..." | tee -a "$LOG_FILE"
continue
fi
useradd -m "$username" | tee -a "$LOG_FILE"
usermod -g "$username" "$username" | tee -a "$LOG_FILE"
if [ -n "$groups" ]; then
IFS=',' read -ra ADDR <<< "$groups"
for group in "${ADDR[@]}"; do
group=$(echo "$group" | xargs)
if ! getent group "$group" >/dev/null; then
groupadd "$group" | tee -a "$LOG_FILE"
fi
usermod -aG "$group" "$username" | tee -a "$LOG_FILE"
done
fi
password=$(generate_password)
echo "$username:$password" | chpasswd | tee -a "$LOG_FILE"
echo "$username,$password" >> "$PASSWORD_FILE"
chown -R "$username:$username" "/home/$username" | tee -a "$LOG_FILE"
chmod 700 "/home/$username" | tee -a "$LOG_FILE"
echo "Created user $username with groups $groups" | tee -a "$LOG_FILE"
done < "$INPUT_FILE"
```
6. **Secure the Password File**:
Ensures that the password file has restricted permissions so only the file owner can read it.
```bash
chmod 600 "$PASSWORD_FILE"
echo "User creation process completed." | tee -a "$LOG_FILE"
```
##### Running the Script
1. **Create the Script File**:
Create the script file using a text editor.
```bash
sudo nano create_users.sh
```
2. **Make the Script Executable**:
Change the script's permissions to make it executable.
```bash
sudo chmod +x create_users.sh
```
3. **Create the Input File**:
Create the input file with usernames and groups using a text editor.
```bash
sudo nano users.txt
```
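Following the `user;groups` format described earlier, the file's contents might look like this (the usernames here are made up):

```
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
```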
4. **Run the Script**:
Execute the script with the input file as an argument.
```bash
sudo ./create_users.sh users.txt
```
5. **Verify the Output**:
Check the log and password files to verify the actions performed by the script.
```bash
sudo cat /var/log/user_management.log
sudo cat /var/secure/user_passwords.csv
```
#### Benefits of Automation
1. **Efficiency**: Automating repetitive tasks saves time, allowing administrators to focus on more critical issues.
2. **Consistency**: Automated scripts ensure that tasks are performed the same way every time, reducing the risk of errors.
3. **Security**: Automating password generation and secure storage minimizes the risk of weak or exposed passwords.

### Conclusion
Bash scripting is a powerful tool for automating system administration tasks. By automating user and group management, administrators can ensure that new employees are onboarded quickly and securely. This example demonstrates how a simple script can significantly enhance operational efficiency and security.
For more information about automation and learning opportunities, check out the [HNG Internship](https://hng.tech/internship) and [HNG Hire](https://hng.tech/hire) programs.
Also check out my GitHub page for more projects:
https://github.com/OlavicDev/User-and-Group-Management-with-Bash.git | olavic |
1,909,328 | Text | Bei der Erstellung meiner Hausarbeit war ich oft überfordert von der Menge an Arbeit. Zum Glück fand... | 0 | 2024-07-02T19:31:57 | https://dev.to/faweqss/text-1k8m | Bei der Erstellung meiner Hausarbeit war ich oft überfordert von der Menge an Arbeit. Zum Glück fand ich [https://wirschreiben.ch/hausarbeit/](https://wirschreiben.ch/hausarbeit/). Der Service bot mir nicht nur Hilfe beim Schreiben, sondern auch wertvolle Unterstützung bei der Strukturierung und Formulierung meiner Thesen. Diese individuelle Betreuung half mir, meine Arbeit auf ein höheres Niveau zu bringen und pünktlich abzugeben. Ich kann diesen Service jedem empfehlen, der Unterstützung bei seiner Hausarbeit benötigt. | faweqss | |
1,909,327 | 13 Top Skool Alternatives for Making Money Online with Your Community | In today's digital age, the demand for versatile online learning and community-building platforms has... | 0 | 2024-07-02T19:30:27 | https://dev.to/lonare/13-top-skool-alternatives-for-making-money-online-with-your-community-4h7g | javascript, webdev, react, python | In today's digital age, the demand for versatile online learning and community-building platforms has never been higher.
Whether you're an educator, entrepreneur, or content creator, finding the right platform to host courses, engage communities, or manage business operations is crucial.
Skool, a prominent player in the online education space, faces tough competition from a range of alternatives, each offering unique features and pricing models to suit diverse needs.
From comprehensive course creation tools to robust community engagement features, we explore 13 alternatives to Skool, summarising their pricing structures and highlighting standout features.
Whether you're looking for an affordable entry point with a free plan or seeking advanced capabilities for scaling your business, this guide will help you navigate the landscape of online platforms, ensuring you find the perfect fit for your educational or business ventures.
Let's delve into these alternatives, starting with Odd Circles, a promising newcomer offering a compelling free plan and affordable premium options, and proceeding through a spectrum of platforms catering to varying needs and budgets.

## [Odd Circles](https://www.oddcircles.com)
Pricing: Free plan for life; premium plan at $72 per year.
Opinion: Odd Circles offers a compelling free plan and affordable premium options, making it a standout for beginners and cost-conscious users.
## [Teachable](https://teachable.com/)
Pricing: Free plan; paid plans start at $8.75/month.
Opinion: Teachable is renowned for its robust analytics and user-friendly interface, making it a top choice for serious course creators.
## [Thinkific](https://www.thinkific.com/)
Pricing: Free plan; paid plans start at $49/month.
Opinion: Thinkific excels in customizable course templates and marketing tools, ideal for those looking to grow their online course business.
## Podia
Pricing: Free plan; paid plans start at $9/month.
Opinion: Podia offers comprehensive features like email marketing and affiliate management, making it a great all-in-one solution for creators.
## Mighty Networks
Pricing: Plans start at $119/month.
Opinion: Mighty Networks is unparalleled in community engagement tools, perfect for creators prioritizing a strong community around their content.
## Kajabi
Pricing: Plans start at $69/month.
Opinion: Kajabi provides extensive features for course creation and marketing automation, though it comes at a higher price point.
## Circle.so
Pricing: Plans start at $49/month.
Opinion: Circle.so is focused on community building with customizable forums and integration options, ideal for fostering engagement.
## GoHighLevel
Pricing: Plans start at $97/month.
Opinion: GoHighLevel offers a comprehensive suite including CRM, marketing automation, and appointment scheduling, tailored for business needs.
## Udemy
Pricing: Revenue-sharing model.
Opinion: Udemy boasts a vast library of courses and global reach, making it a powerhouse for reaching a broad audience.
## Slack
Pricing: Free plan; paid plans start at $8.75 per user per month.
Opinion: Slack excels in team communication with its versatile channels and integrations, suitable for collaborative environments.
## Discord
Pricing: Free plan; Nitro subscription for enhanced features.
Opinion: Discord offers robust communication tools like voice and video chat, though primarily designed for gaming, it's adaptable for educational use.
## Bettermode
Pricing: Plans start at $24/month.
Opinion: Bettermode provides a personalised learning experience with one-on-one tutoring and flexible scheduling, albeit at a higher cost.
## NAS.io
Pricing: 7.2% per transaction.
Opinion: NAS.io offers transaction-based pricing, suitable for those looking to monetize content directly through user interactions.
These platforms each cater to different needs, from course creation and community building to communication and transaction-based services, providing ample choices depending on specific requirements and budgets.
A complete comparison of all the factors has been done in this article if you want to check it out: [13 Top Skool Alternatives for Making Money Online with Your Community](https://medium.com/@lonare/13-top-skool-alternatives-for-making-money-online-with-your-community-06dc3f3e4695) | lonare |
1,909,326 | Technical Article: Explaining the create_users.sh Script | In this article, we will walk you through the script for submission of the Stage 1, explaining each... | 0 | 2024-07-02T19:30:02 | https://dev.to/orunsolu/technical-article-explaining-the-createuserssh-script-4opm |  | In this article, we will walk you through the script for submission of the [HNG Internship](https://hng.tech/internship) Stage 1, explaining each step and the reasoning behind it.
Managing users in a growing organization can be a daunting task, especially when it involves setting up accounts, assigning groups, creating home directories, and ensuring secure password handling. To streamline this process, we developed a bash script called _create_users.sh_.
## Script Overview
The _create_users.sh_ script reads a text file containing employee usernames and group names, creates users and groups, sets up home directories with appropriate permissions, generates random passwords, and logs all actions.
**Key Features**
1. **Reading Input File:** The script takes a single argument – the name of the text file containing user information. Each line in the file is formatted as `user;groups`, where `groups` are optional and separated by commas.
2. **Logging Actions**: All actions performed by the script are logged to `/var/log/user_management.log` to ensure transparency and ease of debugging.
3. **Secure Password Handling**: Random passwords are generated for each user and stored securely in `/var/secure/user_passwords.txt`, with permissions set so that only the file owner can read it.
**Detailed Breakdown**
1. **Check Input File**: The script first checks if the input file is provided as an argument. If not, it displays usage instructions and exits.
```bash
if [ $# -ne 1 ]; then
echo "Usage: $0 <name-of-text-file>"
exit 1
fi
```
2. **Initialize Log and Password Files**: The script ensures that the log directory and files exist. It also sets appropriate permissions for the password file.
```bash
mkdir -p /var/log
touch $LOG_FILE
mkdir -p /var/secure
touch $PASSWORD_FILE
chmod 600 $PASSWORD_FILE
```
3. **Log Function**: A helper function to log actions with timestamps.
```bash
log_action() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $LOG_FILE
}
```
4. **Generate Password Function**: This function generates a random 12-character password using `/dev/urandom`.
```bash
generate_password() {
tr -dc A-Za-z0-9 </dev/urandom | head -c 12 ; echo ''
}
```
5. **Processing Users**: The script reads the input file line by line, processes each user, and performs the following actions:
- Checks if the user already exists.
- Creates a new user with a home directory.
- Creates a personal group for the user.
- Assigns the user to additional groups.
- Sets permissions for the home directory.
- Generates and sets a random password.
- Logs the actions performed.
```bash
while IFS=";" read -r username groups; do
# Processing steps here
done < "$INPUT_FILE"
```
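The `IFS=";"` split in the loop header can be illustrated on a single sample line (the data below is hypothetical):

```bash
line="charlie;sudo,docker"

# Same splitting the loop performs: everything before the first ';'
# lands in $username, the remainder in $groups.
IFS=';' read -r username groups <<EOF
$line
EOF

echo "user=$username groups=$groups"   # user=charlie groups=sudo,docker
```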
By automating the user management process, `create_users.sh` ensures consistency, security, and efficiency. This script can significantly reduce the administrative burden on system administrators and help maintain a secure and well-organized user environment.
For more information on how to streamline your technical processes and to learn about opportunities to work with talented developers, visit the [HNG Internship website](https://hng.tech/internship) and explore [premium services](https://hng.tech/premium). | orunsolu | |
1,909,324 | Automation: Onboard New Engineers on Linux with Best Practice Bash/Shell Scripting. | Synopsis Your role is SysOps or SysAdmin Engineer, you are tasked with onboarding new... | 0 | 2024-07-02T19:28:27 | https://dev.to/wandexdev/automation-onboard-new-engineers-on-linux-with-best-practice-bashshell-scripting-121o | bash, ubuntu, automation, linux | ## Synopsis
Your role is **SysOps or SysAdmin Engineer**, and you are tasked with onboarding new engineers on most of the company's Linux servers. Users, groups, and home directories will be created. Access permissions for each user, following the rule of least privilege, should be observed. It would be inefficient to do this manually, given the number of servers and new engineers to be onboarded.
> *I have created a script that meets the basic requirements and some more.*
>
>*It puts measures in place for errors while running the script, creates secure files to store user lists and passwords, creates files to debug and log processes, and finally sends notifications on both the terminal and Slack, all while following best practices.*
## Essentials:
- An Ubuntu server
- Basic Git Knowledge
- Basic Linux Knowledge
- A terminal user with sudo privileges.
## Procedures:
I will walk you through the logic flow of the script; let's take it in sections.
### 1. Initiation Setup Section
```shell
#!/bin/bash
set -e
# Define log and secure password files
LOG_FILE="/var/log/user_management.log"
SECURE_PASSWORD_FILE="/var/secure/user_passwords.txt"
SECURE_PASSWORD_CSV="/var/secure/user_passwords.csv"
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/WandesDummySlack/webhook/url"
# Define custom exit codes
E_INVALID_INPUT=10
E_USER_CREATION_FAILED=20
E_GROUP_CREATION_FAILED=30
E_ADD_USER_TO_GROUP_FAILED=40
# Define resource limits
ulimit -t 60 # CPU time limit in seconds
ulimit -v 1000000 # Virtual memory limit in kilobytes
sudo mkdir -p /var/log
sudo mkdir -p /var/secure
sudo touch "$LOG_FILE"
sudo touch "$SECURE_PASSWORD_FILE"
sudo touch "$SECURE_PASSWORD_CSV"
sudo chmod 600 "$SECURE_PASSWORD_FILE"
sudo chmod 600 "$SECURE_PASSWORD_CSV"
```
Valid bash scripts begin with `#!/bin/bash`; it's called a **shebang** or **hashbang**, and it tells the OS which interpreter to use. `set -e` is a personal default line of mine for exiting the script the moment an error occurs. The next lines are **variables** storing the paths of the intended files and the **Slack webhook URL for notifications** once a user has been onboarded successfully: one file logs every action, and two store user passwords. The lines after that define **exit codes**, **timeouts**, and **resource limits** for easier debugging, prevention of hanging runs, and resource optimization.
The last set of lines in this section create the files at the absolute paths defined above and secure them with proper permissions. The `600` in `chmod 600` means that only the owner has read and write rights on the file; everyone else has no access.
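You can see the effect of `chmod 600` on a scratch file with this illustrative snippet (not part of the script; it uses GNU `stat`, as found on Ubuntu):

```shell
# Create a temp file, lock it down, and read back its permission bits
tmp=$(mktemp)
chmod 600 "$tmp"
stat -c '%a' "$tmp"   # prints: 600
rm -f "$tmp"
```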
### 2. Functions Section
```shell
# Function to log messages to the log file
log() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | sudo tee -a "$LOG_FILE" > /dev/null
}
# Function to send notifications to Slack
send_slack_notification() {
local message=$1
curl -X POST -H 'Content-type: application/json' --data "{\"text\":\"${message}\"}" "$SLACK_WEBHOOK_URL"
}
# Function to generate a random password of length 10
generate_random_password() {
< /dev/urandom tr -dc A-Za-z0-9 | head -c10
}
# Function to validate input format for usernames and groups
validate_input() {
local username=$1
local groups=$2
if [[ -z "$username" || -z "$groups" ]]; then
log "Error: Invalid input. Usernames and groups are required."
send_slack_notification "Invalid input provided. Usernames and groups are required."
exit $E_INVALID_INPUT
fi
}
```
Function definitions are placed at the beginning to ensure they are available when called later in the script. As the comment above each function describes, these functions do different things. The first appends a timestamped message to the log file; the second uses the stored message and webhook variables to build the JSON body of the `curl` request it makes to the **Slack Channel**; the third generates a 10-character password from random A–Z, a–z, and 0–9 characters piped out of `/dev/urandom`; and the fourth uses an `if`/`fi` statement to validate the username and group names from the text file input, logging an error if validation fails.
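One caveat with the notification function: interpolating `$message` directly into the JSON string breaks if the message ever contains double quotes. A small hypothetical hardening (not in the original script) escapes them first; note this covers only quotes, and a tool like `jq` would handle full JSON escaping if it is available:

```shell
# Escape embedded double quotes before building the Slack payload
message='User "alice" onboarded'
escaped=${message//\"/\\\"}
payload=$(printf '{"text":"%s"}' "$escaped")
echo "$payload"   # prints: {"text":"User \"alice\" onboarded"}
```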
```shell
# Function to create a user and set up their home directory
create_user() {
local username=$1
local password=$2
# Check if the user already exists
if id "$username" &>/dev/null; then
log "User $username already exists. Skipping user creation."
else
log "Creating user $username."
# Attempt to create the user with a timeout
timeout 10 sudo useradd -m -s /bin/bash "$username" || {
log "Failed to create user $username."
send_slack_notification "Failed to create user $username."
exit $E_USER_CREATION_FAILED
}
# Set the user's password and home directory permissions
echo "$username:$password" | sudo chpasswd
sudo chmod 700 "/home/$username"
sudo chown "$username:$username" "/home/$username"
log "User $username created successfully with password $password."
echo "$username:$password" | sudo tee -a "$SECURE_PASSWORD_FILE" > /dev/null
echo "$username,$password" | sudo tee -a "$SECURE_PASSWORD_CSV" > /dev/null
fi
}
# Function to create a group
create_group() {
local groupname=$1
# Check if the group already exists
if getent group "$groupname" &>/dev/null; then
log "Group $groupname already exists."
else
log "Creating group $groupname."
# Attempt to create the group with a timeout
timeout 10 sudo groupadd "$groupname" || {
log "Failed to create group $groupname."
send_slack_notification "Failed to create group $groupname."
exit $E_GROUP_CREATION_FAILED
}
log "Group $groupname created successfully."
fi
}
```
The two functions above for creating users and groups use `if`, `else`, `fi` statements to process their logic. The first takes the validated username from the previous function and **verifies whether the user already exists** before executing the add-user command with a **timeout** to prevent hangs; it then sets the user's password by piping to `chpasswd` and applies `chmod 700` to the home directory for restrictive access. The second is quite similar, except that a group is being created this time rather than a user. Both functions also have log lines to record errors.
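Both sections above also call an `add_user_to_group` function whose definition lives in the full repository. A minimal sketch, assuming the same timeout-and-log pattern and `usermod -aG` for appending a user to a group, might look like this (hypothetical; check the repo for the real version):

```shell
# Hypothetical sketch of add_user_to_group, following the script's conventions
add_user_to_group() {
    local username=$1
    local groupname=$2
    # Skip if the user is already a member of the group
    if id -nG "$username" 2>/dev/null | grep -qw "$groupname"; then
        log "User $username is already in group $groupname."
    else
        timeout 10 sudo usermod -aG "$groupname" "$username" || {
            log "Failed to add user $username to group $groupname."
            send_slack_notification "Failed to add user $username to group $groupname."
            exit $E_ADD_USER_TO_GROUP_FAILED
        }
        log "Added user $username to group $groupname."
    fi
}
```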
### 3. Onboarding Section
```shell
onboard_user() {
local username=$1
local groups=$2
# Validate the input format
validate_input "$username" "$groups"
# Generate a random password for the user
local password=$(generate_random_password)
# Create the user with the generated password
create_user "$username" "$password"
# Create a personal group for the user
create_group "$username"
# Add the user to their personal group
add_user_to_group "$username" "$username"
# Process and add the user to the specified groups
IFS=',' read -ra group_array <<< "$groups"
for group in "${group_array[@]}"; do
group=$(echo "$group" | xargs) # Trim whitespace from group name
create_group "$group"
add_user_to_group "$username" "$group"
done
# Notify terminal that user has been successfully onboarded
echo "User $username has been successfully onboarded with groups: $groups"
}
```
As the comments explain, this function ties together the previous functions, acting as the orchestrator. It calls the input validation function, the password generation function, the user creation function, the personal group creation function, and the add-user-to-group function in sequence. It then splits the groups string into an array using a comma delimiter (`IFS=','`), loops through each group, and **trims any surrounding whitespace** before calling the group functions to do their duty.
The last line of this section sends an **output message to the terminal** each time a user has been successfully onboarded; in the full script, a Slack message also goes to the defined **Slack Channel** for the same purpose.
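The splitting-and-trimming step can be seen in isolation with a small standalone example (the group values are hypothetical):

```shell
# Split a comma-separated group list and trim whitespace around each entry
groups="dev, qa ,prod"
IFS=',' read -ra group_array <<< "$groups"
for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs)   # xargs strips surrounding spaces
    echo "[$group]"                  # prints [dev], [qa], [prod] in turn
done
```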
### 4. Script Execution Section
```shell
# Check if the script argument is provided
if [[ $# -ne 1 ]]; then
echo "Usage: $0 <users_file>"
exit 10 # Invalid input exit code
fi
# Read from the provided text file
users_file="$1"
# Read from the input file and process each line
while IFS=';' read -r username groups; do
# Remove leading and trailing whitespaces
username=$(echo "$username" | xargs)
groups=$(echo "$groups" | xargs)
onboard_user "$username" "$groups"
done < "$users_file"
```
Now that all functions and necessary configurations are defined and ready, this last part checks whether the script argument (the input file) is provided; if not, it exits with a usage message. It then reads the file line by line, trims leading and trailing whitespace from each username and group list, and hands them to `onboard_user`. In summary, this section executes all previously prepared functions on the input data.
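The argument check can be exercised on its own; here is a small hypothetical stand-in that mirrors it:

```shell
# Mimic the script's argument check: exactly one file argument is required
check_usage() {
    if [ "$#" -ne 1 ]; then
        echo "Usage: script <users_file>"
        return 10
    fi
    echo "Got file: $1"
}
check_usage users.txt                  # prints: Got file: users.txt
check_usage || echo "exit code: $?"    # prints the usage line, then: exit code: 10
```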
### 5. Usage:
- Save the entire script as a whole with the name `create_users.sh` or whatever name you like. The whole script can be found at my [github repository](https://github.com/wandexdev/Linux-User-Management-Automation-with-Bash-Scripting).
- Assemble the input file (the script's argument), formatted this way: on each line, the username and its groups are separated by a semicolon, and the groups themselves are separated by commas.
```txt
alice; admin,dev,qa
bob; prod
carol; test,dev,prod
tunde; pilot,prod,test,dev
tade; pilot,dev
```
- Save the file as `text.txt` or your preferred name.
- On the Ubuntu terminal, run `chmod +x create_users.sh` to ensure the file is executable.
- Run script with `sudo` via `sudo bash create_users.sh text.txt`
What we eventually get are success messages like those shown in Figure 1 below:
<figure>
<img
src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/86trbixmrn049ltgbe8z.PNG"
alt="Terminal window displaying a green success message"
>
<figcaption>Figure 1: Success Messages Displayed on Terminal</figcaption>
</figure>
### 6. Conclusion
In conclusion, this blog post has covered the automation of Linux user management for new staff. By following these steps and leveraging bash scripting, you can automate onboarding effectively and save a great deal of administrative time. Now it's your turn! Try out these techniques and share your experiences in the comments below. Additionally, if you have any questions, feel free to leave a comment or reach out to me directly on [Twitter](https://twitter.com/wandexdev).
Lastly, I would love to thank the team at [HNG internship](https://hng.tech/internship) for doing an awesome job of imparting knowledge. The [team](https://hng.tech/hire) is a home of talents. Kudos | wandexdev
1,909,323 | If in a Crowdsourced Data Annotation Pipeline, a GPT-4 | If in a Crowdsourced Data Annotation Pipeline, a GPT-4 | 0 | 2024-07-02T19:28:18 | https://aimodels.fyi/papers/arxiv/if-crowdsourced-data-annotation-pipeline-gpt-4 | machinelearning, ai, beginners, datascience | *This is a Plain English Papers summary of a research paper called [If in a Crowdsourced Data Annotation Pipeline, a GPT-4](https://aimodels.fyi/papers/arxiv/if-crowdsourced-data-annotation-pipeline-gpt-4). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- This paper examines the performance of a large language model (GPT-4) compared to crowdsourced human annotators in a data annotation pipeline.
- The researchers investigate whether GPT-4 can replace human annotators in certain tasks, or if a hybrid approach combining human and machine annotations is more effective.
- The study analyzes the quality, speed, and cost of annotations produced by GPT-4 and crowdsourced workers across different annotation tasks.
## Plain English Explanation
The paper looks at how well a powerful AI language model called GPT-4 can do at annotating data, compared to having human workers do the task. Annotation means adding labels or descriptions to data, like saying what's in an image or summarizing the key points of a document.
The researchers wanted to see if GPT-4 could potentially replace human workers for some annotation tasks, or if a mix of human and machine annotations might work better. They compared the quality, speed, and cost of the annotations made by GPT-4 versus crowdsourced human workers across different types of annotation jobs.
The findings could help companies and researchers figure out the best way to get data annotated efficiently, whether that's using AI, people, or a combination of both.
## Technical Explanation
The paper examines the performance of the GPT-4 large language model compared to crowdsourced human annotators in a data annotation pipeline. The researchers investigate whether GPT-4 can replace human annotators in certain tasks, or if a hybrid approach combining human and machine annotations is more effective.
The study analyzes the quality, speed, and cost of annotations produced by GPT-4 and crowdsourced workers across different annotation tasks, including those examined in [GPT is not an annotator: the necessity of human annotation](https://aimodels.fyi/papers/arxiv/gpt-is-not-annotator-necessity-human-annotation). The results suggest that GPT-4 can achieve high-quality annotations in some cases, but human annotators still outperform it in other tasks, such as those explored in [AnnoLLM](https://aimodels.fyi/papers/arxiv/annollm-making-large-language-models-to-be).
The paper also explores the potential of using a hybrid approach, where GPT-4 and human annotators work together, as described in [How can I get it right?](https://aimodels.fyi/papers/arxiv/how-can-i-get-it-right-using). This could leverage the strengths of both approaches and lead to more efficient and accurate data annotation pipelines.
## Critical Analysis
The paper provides valuable insights into the capabilities and limitations of using a large language model like GPT-4 for data annotation tasks. However, as noted in [Hidden flaws behind expert-level accuracy of GPT](https://aimodels.fyi/papers/arxiv/hidden-flaws-behind-expert-level-accuracy-gpt), there may be hidden flaws or biases in the model's performance that are not fully addressed in this study.
Additionally, the paper does not delve into the potential challenges of integrating GPT-4 into real-world annotation workflows, such as the need for a [structured knowledge base](https://aimodels.fyi/papers/arxiv/use-structured-knowledge-base-enhances-metadata-curation) to enhance the model's understanding of the task and domain-specific knowledge.
Further research could explore the long-term implications of relying on large language models for critical data annotation tasks, and investigate ways to ensure the reliability and fairness of these systems.
## Conclusion
This paper provides a valuable exploration of the potential for using a powerful AI model like GPT-4 to assist with data annotation tasks, either by replacing human annotators or working in a hybrid approach. The findings suggest that GPT-4 can achieve high-quality annotations in some cases, but human annotators still outperform it in other tasks.
The insights from this study could help organizations and researchers optimize their data annotation pipelines, balancing the strengths of human and machine-based approaches to achieve more efficient and accurate results. As large language models continue to advance, this area of research will likely grow in importance, with implications for a wide range of applications that rely on high-quality annotated data.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,909,322 | Repairing Catastrophic-Neglect in Text-to-Image Diffusion Models via Attention-Guided Feature Enhancement | Repairing Catastrophic-Neglect in Text-to-Image Diffusion Models via Attention-Guided Feature Enhancement | 0 | 2024-07-02T19:27:44 | https://aimodels.fyi/papers/arxiv/repairing-catastrophic-neglect-text-to-image-diffusion | machinelearning, ai, beginners, datascience | *This is a Plain English Papers summary of a research paper called [Repairing Catastrophic-Neglect in Text-to-Image Diffusion Models via Attention-Guided Feature Enhancement](https://aimodels.fyi/papers/arxiv/repairing-catastrophic-neglect-text-to-image-diffusion). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- This paper explores techniques to address the issue of "catastrophic-neglect" in text-to-image diffusion models, which occurs when the model struggles to generate images that accurately reflect the provided text.
- The researchers propose an "Attention-Guided Feature Enhancement" (AGFE) approach to improve the alignment between the generated images and the input text.
- The AGFE method aims to enhance the relevant visual features in the generated images based on the attention mechanism, helping the model better capture the semantic relationships between the text and the desired image.
## Plain English Explanation
The paper focuses on a problem in text-to-image AI models, where the generated images may not always accurately reflect the provided text. This issue is known as "catastrophic-neglect," and it can be frustrating for users who expect the AI to generate images that closely match their textual descriptions.
To address this problem, the researchers developed a new technique called "Attention-Guided Feature Enhancement" (AGFE). The key idea behind AGFE is to use the attention mechanism, which helps the AI model understand the relationships between different parts of the text, to enhance the relevant visual features in the generated images.
Imagine you ask the AI to generate an image of a "cute, fluffy dog." The attention mechanism would identify the important elements of the text, like "cute," "fluffy," and "dog," and use that information to ensure the generated image has the appropriate visual characteristics, such as a soft, furry appearance and a canine shape. This helps the AI model create images that are more closely aligned with the textual description, reducing the issue of "catastrophic-neglect."
By improving the relationship between the text and the generated images, the AGFE approach can make text-to-image AI models more useful and user-friendly, allowing them to better translate our ideas and descriptions into visual representations.
## Technical Explanation
The paper introduces an "Attention-Guided Feature Enhancement" (AGFE) approach to address the problem of "catastrophic-neglect" in text-to-image diffusion models. [Catastrophic-neglect is a well-known issue in these models, where the generated images may fail to accurately reflect the provided text description.](https://aimodels.fyi/papers/arxiv/towards-better-text-to-image-generation-alignment)
The core idea behind AGFE is to leverage the attention mechanism, which helps the model understand the semantic relationships between different parts of the text, to enhance the relevant visual features in the generated images. [This builds on previous work on using attention to improve text-to-image generation and personalization.](https://aimodels.fyi/papers/arxiv/attention-calibration-disentangled-text-to-image-personalization)
The AGFE method works by first extracting visual features from the generated image and text features from the input text. It then uses the attention weights to identify the most relevant text features and selectively enhances the corresponding visual features. [This "object-attribute binding" approach has been explored in other text-to-image generation research.](https://aimodels.fyi/papers/arxiv/object-attribute-binding-text-to-image-generation)
The researchers evaluate the AGFE approach on several text-to-image benchmarks and find that it outperforms existing methods in terms of improving the alignment between the generated images and the input text descriptions. [The work builds on a growing body of research on enhancing text-to-image generation capabilities.](https://aimodels.fyi/papers/arxiv/enhancing-text-to-image-editing-via-hybrid)
## Critical Analysis
The paper presents a promising approach to addressing the "catastrophic-neglect" issue in text-to-image diffusion models. However, the researchers acknowledge that their method has some limitations:
1. The AGFE approach is designed to work with specific text-to-image diffusion models and may not be easily transferable to other architectures or modalities.
2. The performance improvements, while significant, are still limited, and there is room for further refinement and optimization of the attention-guided feature enhancement.
3. The researchers only evaluate the method on standard benchmarks and do not explore real-world applications or user studies to assess the practical impact of their approach.
Additionally, it would be valuable to see further exploration of the underlying mechanisms and biases in text-to-image models that contribute to the "catastrophic-neglect" problem. [Understanding these issues more deeply could lead to more robust and generalizable solutions.](https://aimodels.fyi/papers/arxiv/not-just-pretty-pictures-toward-interventional-data)
Overall, the AGFE approach represents an important step forward in improving the alignment between text and generated images, but continued research and innovation will be necessary to fully address the challenges in this rapidly evolving field.
## Conclusion
This paper presents a novel "Attention-Guided Feature Enhancement" (AGFE) technique to address the issue of "catastrophic-neglect" in text-to-image diffusion models. The AGFE method leverages the attention mechanism to selectively enhance the relevant visual features in the generated images, improving the alignment between the text descriptions and the resulting images.
The researchers demonstrate the effectiveness of AGFE through experiments on several text-to-image benchmarks, showing that it outperforms existing approaches. While the method has some limitations, it represents an important advancement in the ongoing efforts to enhance the capabilities and real-world applicability of text-to-image generation systems.
As the field of text-to-image AI continues to evolve, the AGFE approach and similar attention-guided techniques may play a crucial role in developing more robust and user-friendly models that can reliably translate our ideas and descriptions into compelling visual representations.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,909,321 | Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text | Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text | 0 | 2024-07-02T19:27:09 | https://aimodels.fyi/papers/arxiv/spotting-llms-binoculars-zero-shot-detection-machine | machinelearning, ai, beginners, datascience | *This is a Plain English Papers summary of a research paper called [Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text](https://aimodels.fyi/papers/arxiv/spotting-llms-binoculars-zero-shot-detection-machine). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- The paper proposes a novel approach called "Binoculars" for detecting text generated by modern large language models (LLMs) with high accuracy.
- The method relies on contrasting the output of two closely related LLMs, rather than requiring training data or model-specific modifications.
- Binoculars achieves state-of-the-art performance in spotting machine-generated text from a range of LLMs, including ChatGPT, without being trained on any ChatGPT data.
## Plain English Explanation
The paper explores the challenge of distinguishing text written by humans from text generated by powerful AI language models, known as [large language models (LLMs)](https://aimodels.fyi/papers/arxiv/survey-llm-generated-text-detection-necessity-methods). While this may seem difficult, as both humans and LLMs can produce complex and varied text, the researchers have developed a clever approach called "Binoculars" that can accurately identify machine-generated text.
The key insight behind Binoculars is that by comparing the output of two closely related LLMs, it's possible to detect subtle differences that reveal whether the text was generated by a human or a machine. This approach is more effective than [methods that require training data](https://aimodels.fyi/papers/arxiv/few-shot-detection-machine-generated-text-using) or are specific to particular LLMs, like [ChatGPT](https://aimodels.fyi/papers/arxiv/mage-machine-generated-text-detection-wild) or [GPT-3](https://aimodels.fyi/papers/arxiv/who-wrote-this-key-to-zero-shot).
Binoculars works by performing a simple calculation using the outputs of two pre-trained LLMs, without needing any additional training. This makes it a versatile and efficient tool for detecting machine-generated text from a wide range of modern LLMs, including ChatGPT, with high accuracy.
## Technical Explanation
The paper introduces a novel approach called "Binoculars" for detecting text generated by modern large language models (LLMs). Unlike previous methods that require training data or are specific to particular LLMs, Binoculars achieves state-of-the-art performance in identifying machine-generated text by leveraging the differences between the outputs of two closely related pre-trained LLMs.
The core idea behind Binoculars is that while both humans and LLMs can exhibit a wide range of complex behaviors, there are subtle differences in the way they generate text that can be captured by contrasting the outputs of two similar LLMs. The researchers developed a scoring mechanism that quantifies these differences, allowing Binoculars to accurately distinguish human-written and machine-generated text without any model-specific modifications or training data.
The paper presents a comprehensive evaluation of Binoculars across a variety of text sources and scenarios. The results show that Binoculars can detect over 90% of samples generated by ChatGPT and other LLMs at a false positive rate of only 0.01%, despite not being trained on any ChatGPT data. This impressive performance highlights the power of the Binoculars approach in tackling the challenging problem of [detecting machine-generated text in the wild](https://aimodels.fyi/papers/arxiv/mage-machine-generated-text-detection-wild).
## Critical Analysis
The paper presents a promising approach to the important problem of [detecting text generated by large language models](https://aimodels.fyi/papers/arxiv/survey-llm-generated-text-detection-necessity-methods), but it's worth considering some potential caveats and areas for further research.
One limitation of the Binoculars method is that it relies on the availability of two closely related pre-trained LLMs. While the researchers demonstrate its effectiveness across a range of models, there may be situations where such LLM pairs are not readily available, which could limit the method's practical applicability.
Additionally, the paper does not explore the robustness of Binoculars to adversarial attacks or attempts by LLM developers to evade detection. As the field of [machine-generated text detection](https://aimodels.fyi/papers/arxiv/few-shot-detection-machine-generated-text-using) continues to evolve, it will be important to investigate the long-term resilience of such detection techniques.
Furthermore, the paper does not delve into the potential societal implications of widespread machine-generated text detection capabilities. As [AI-generated content](https://aimodels.fyi/papers/arxiv/who-wrote-this-key-to-zero-shot) becomes more prevalent, it will be crucial to consider the ethical and privacy considerations surrounding the use of such detection tools.
Overall, the Binoculars approach represents a significant advancement in the field of LLM-generated text detection, but further research and thoughtful discussion on the broader implications will be essential as these technologies continue to evolve.
## Conclusion
The paper presents a novel approach called Binoculars that can accurately detect text generated by modern large language models (LLMs) without requiring any training data or model-specific modifications. By leveraging the differences between the outputs of two closely related pre-trained LLMs, Binoculars achieves state-of-the-art performance in identifying machine-generated text, including from advanced models like ChatGPT.
This innovative detection method has important implications for content moderation, online safety, and the responsible development of LLM technologies. As the use of AI-generated text continues to grow, tools like Binoculars will play a crucial role in helping to maintain the integrity and authenticity of online discourse. The paper's comprehensive evaluation and the method's versatility across a range of LLMs make it a promising contribution to the ongoing efforts to [detect machine-generated text in the wild](https://aimodels.fyi/papers/arxiv/mage-machine-generated-text-detection-wild).
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,909,319 | Bayesian Regression Markets | Bayesian Regression Markets | 0 | 2024-07-02T19:26:01 | https://aimodels.fyi/papers/arxiv/bayesian-regression-markets | machinelearning, ai, beginners, datascience | *This is a Plain English Papers summary of a research paper called [Bayesian Regression Markets](https://aimodels.fyi/papers/arxiv/bayesian-regression-markets). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- Machine learning tasks are highly sensitive to the quality of input data
- Acquiring relevant datasets can be challenging, especially when held privately by competitors
- The paper proposes a regression market to provide a monetary incentive for data sharing
## Plain English Explanation
When training machine learning models, the quality of the data used as input is very important. However, companies or individuals may be reluctant to share their valuable data, especially if they are competitors in the same market. To address this, the researchers developed a "regression market" - a system that provides financial incentives for people to share their data.
The regression market uses a [Bayesian framework](https://aimodels.fyi/papers/arxiv/analytical-results-uncertainty-propagation-through-trained-machine) to handle a wide range of regression tasks, where the goal is to predict a numerical output based on input data. This allows the market to be more flexible and useful in different applications.
The researchers thoroughly analyzed the properties of this regression market and found that it can help mitigate the financial risks that agents (data owners) face, which is an issue with some previous proposals in the literature.
## Technical Explanation
The paper focuses on supervised learning for regression tasks, where the goal is to predict a numerical output based on input data. The researchers developed a "regression market" mechanism that uses a Bayesian framework, allowing it to handle a more general class of regression problems.
The key elements of the proposed system include:
- A market structure that provides monetary incentives for data owners to share their data
- A Bayesian approach that can accommodate various types of regression tasks, beyond just linear regression
- An analysis of the market properties, including how it can reduce financial risks for the participating agents compared to previous proposals
The researchers thoroughly explored the theoretical properties of this regression market, demonstrating how it can be a useful tool for facilitating data sharing, even in cases where data owners may be reluctant to collaborate due to competitive concerns.
## Critical Analysis
The paper presents a well-designed regression market mechanism that addresses some of the limitations of previous proposals in this area. By adopting a Bayesian framework, the system can handle a broader range of regression tasks, which is a strength.
However, the paper does not discuss potential practical challenges in implementing such a market in real-world scenarios. For example, it would be important to consider issues related to data privacy, security, and the incentives of different stakeholders to participate.
Additionally, the paper focuses on the theoretical properties of the market, but does not provide empirical evaluation or case studies demonstrating the practical efficacy of the approach. [Evaluating the performance of such a system in real-world applications](https://aimodels.fyi/papers/arxiv/what-teaches-robots-to-walk-teaches-them) would be an important next step to assess its feasibility and potential impact.
## Conclusion
This paper introduces a promising regression market mechanism that aims to incentivize data sharing, even in competitive settings. By adopting a Bayesian framework, the system can handle a broader range of regression tasks, which is a key advantage.
The theoretical analysis of the market properties is thorough and demonstrates how the proposed approach can mitigate financial risks for participating agents. However, further research is needed to address practical implementation challenges and empirically evaluate the system's performance in real-world applications.
Overall, this research contributes to the ongoing efforts to [facilitate data sharing and collaboration](https://aimodels.fyi/papers/arxiv/uplift-modeling-under-limited-supervision) in machine learning, which is crucial for developing robust and effective models, especially in domains where data is scarce or held privately.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,909,318 | Ctrl-V: Higher Fidelity Video Generation with Bounding-Box Controlled Object Motion | Ctrl-V: Higher Fidelity Video Generation with Bounding-Box Controlled Object Motion | 0 | 2024-07-02T19:25:26 | https://aimodels.fyi/papers/arxiv/ctrl-v-higher-fidelity-video-generation-bounding | machinelearning, ai, beginners, datascience | *This is a Plain English Papers summary of a research paper called [Ctrl-V: Higher Fidelity Video Generation with Bounding-Box Controlled Object Motion](https://aimodels.fyi/papers/arxiv/ctrl-v-higher-fidelity-video-generation-bounding). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- This paper introduces "Ctrl-V," a novel approach for generating higher-fidelity video content with precise control over the motion of objects within the video.
- The key innovation is the use of bounding boxes to define and control the movement of specific objects, enabling more fine-grained and realistic video generation.
- The authors demonstrate the effectiveness of Ctrl-V through extensive experiments and comparisons to state-of-the-art video generation methods.
## Plain English Explanation
The researchers have developed a new technique called "Ctrl-V" that allows for the generation of more realistic and customizable video content. The core idea behind Ctrl-V is using bounding boxes to precisely control the movement of specific objects within the video.
Typically, generating high-quality video is a challenging task, as it requires accurately modeling the complex dynamics and interactions of multiple elements. Ctrl-V addresses this by giving the user the ability to define the motion of particular objects using bounding boxes. This provides a higher level of control and enables the generation of videos that closely match the desired object movements.
For example, if you wanted to create a video of a car driving down a street, you could use Ctrl-V to draw bounding boxes around the car and specify how it should move - the speed, trajectory, and other details. The system would then generate a video that faithfully depicts the car's motion, resulting in a more realistic and customizable output.
By leveraging this bounding box-based control, the Ctrl-V approach can produce videos with greater fidelity and more precise object movements compared to other state-of-the-art video generation techniques. This could have applications in areas like visual effects, video game development, and even autonomous vehicle simulation.
## Technical Explanation
The key technical innovation in the Ctrl-V paper is the use of bounding boxes to guide and control the motion of objects within the generated video. This builds upon recent advancements in [diffusion-based video generation](https://aimodels.fyi/papers/arxiv/trailblazer-trajectory-control-diffusion-based-video-generation) and [multi-video generation](https://aimodels.fyi/papers/arxiv/collaborative-video-diffusion-consistent-multi-video-generation).
The authors leverage a [novel bounding box regression method](https://aimodels.fyi/papers/arxiv/novel-bounding-box-regression-method-single-object) to precisely define the spatial extent and movement of objects in the video. This is combined with a [camera-aware video generation](https://aimodels.fyi/papers/arxiv/camvig-camera-aware-image-to-video-generation) approach to ensure the generated content matches the specified camera perspective.
The overall Ctrl-V architecture consists of several key components:
1. **Bounding Box Encoder**: Encodes the user-specified bounding box information into a latent representation.
2. **Motion Diffusion**: A diffusion-based model that generates the object's motion trajectory based on the bounding box input.
3. **Video Synthesis**: A separate network that generates the final video frames, conditioned on the object motion and other scene context.
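Taken together, the data flow through these three components might be composed as in the sketch below. Every name here is a hypothetical stand-in; the paper's actual networks are not reproduced:

```python
def ctrl_v_pipeline(bounding_boxes, scene_context, encode, diffuse_motion, synthesize):
    """Compose the three Ctrl-V stages listed above.

    The three callables are hypothetical stand-ins for the paper's
    networks; only the data flow between stages is illustrated.
    """
    z = encode(bounding_boxes)                    # 1. bounding-box encoder
    trajectory = diffuse_motion(z)                # 2. motion diffusion
    return synthesize(trajectory, scene_context)  # 3. video synthesis
```

The point of the decomposition is that object motion is decided before any pixels are rendered, which is what gives the user bounding-box-level control over the final video.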
The authors demonstrate the effectiveness of Ctrl-V through extensive experiments, comparing it to state-of-the-art video generation methods like [MotionClone](https://aimodels.fyi/papers/arxiv/motionclone-training-free-motion-cloning-controllable-video). The results show that Ctrl-V can produce higher-fidelity videos with more precise control over object motion.
## Critical Analysis
The Ctrl-V paper presents a compelling approach for improving the quality and control of video generation, but there are a few potential caveats to consider:
1. **Scalability**: While Ctrl-V excels at controlling the motion of individual objects, it may face challenges when scaling to videos with multiple, interacting objects. Extending the bounding box-based control to more complex scenes could require significant additional research and engineering.
2. **Training Data**: The performance of Ctrl-V, like many deep learning-based methods, is likely dependent on the quality and diversity of the training data used. Ensuring the system can generalize to a wide range of real-world video scenarios may require careful curation of the training dataset.
3. **Computational Complexity**: The authors do not provide detailed information about the computational requirements of Ctrl-V. Generating high-fidelity video in real-time may require significant computational resources, which could limit its practical deployment in some applications.
4. **Ethical Considerations**: As with any powerful video generation technology, there are potential ethical concerns around the misuse of Ctrl-V, such as the creation of misleading or deceptive content. The research community should continue to explore ways to mitigate these risks.
## Conclusion
The Ctrl-V paper introduces a novel approach for generating higher-fidelity video content with precise control over the motion of objects within the video. By leveraging bounding boxes to guide the object movements, Ctrl-V demonstrates significant improvements in video quality and realism compared to state-of-the-art methods.
This research has the potential to impact a wide range of applications, from visual effects and video game development to autonomous vehicle simulation and beyond. As the field of video generation continues to advance, techniques like Ctrl-V will likely play an increasingly important role in enabling more realistic and customizable video content.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,909,317 | Problem Solving Using Recursion | If you think recursively, you can solve many problems using recursion. The preceding sections... | 0 | 2024-07-02T19:23:43 | https://dev.to/paulike/problem-solving-using-recursion-539e | java, programming, learning, beginners | If you think recursively, you can solve many problems using recursion. The preceding sections presented two classic recursion examples. All recursive methods have the following characteristics:
- The method is implemented using an **if-else** or a **switch** statement that leads to different cases.
- One or more base cases (the simplest case) are used to stop recursion.
- Every recursive call reduces the original problem, bringing it increasingly closer to a base case until it becomes that case.
In general, to solve a problem using recursion, you break it into subproblems. Each subproblem is the same as the original problem but smaller in size. You can apply the same approach to each subproblem to solve it recursively.
Recursion is everywhere. It is fun to _think recursively_. Consider drinking coffee. You may describe the procedure recursively as follows:
```java
public static void drinkCoffee(Cup cup) {
  if (!cup.isEmpty()) {
    cup.takeOneSip(); // Take one sip
    drinkCoffee(cup);
  }
}
```
Assume **cup** is an object for a cup of coffee with the instance methods **isEmpty()** and **takeOneSip()**. You can break the problem into two subproblems: one is to drink one sip of coffee and the other is to drink the rest of the coffee in the cup. The second problem is the same as the original problem but smaller in size. The base case for the problem is when the cup is empty.
Consider the problem of printing a message **n** times. You can break the problem into two subproblems: one is to print the message one time and the other is to print it **n - 1** times. The second problem is the same as the original problem but it is smaller in size. The base case for the problem is **n == 0**. You can solve this problem using recursion as follows:
```java
public static void nPrintln(String message, int times) {
  if (times >= 1) {
    System.out.println(message);
    nPrintln(message, times - 1);
  } // The base case is times == 0
}
```
Note that the **fib** method in the preceding section returns a value to its caller, but the **drinkCoffee** and **nPrintln** methods are **void** and they do not return a value.
If you _think recursively_, you can use recursion to solve many of the problems presented in earlier chapters of this book. Consider the palindrome problem in [Palindrome.java](https://dev.to/paulike/case-studies-on-loops-27l1). Recall that a string is a palindrome if it reads the same from the left and from the right. For example, “mom” and “dad” are palindromes, but “uncle” and “aunt” are not. The problem of checking whether a string is a palindrome can be divided into two subproblems:
- Check whether the first character and the last character of the string are equal.
- Ignore the two end characters and check whether the rest of the substring is a palindrome.
The second subproblem is the same as the original problem but smaller in size. There are two base cases: (1) the two end characters are not the same, and (2) the string size is **0** or **1**. In case 1, the string is not a palindrome; in case 2, the string is a palindrome. The recursive method for this problem can be implemented as shown in the code below.

The **substring** call creates a new string that is the same as the original string except without the first and last characters. Checking whether a string is a palindrome is equivalent to checking whether the substring is a palindrome if the two end characters in the original string are the same. | paulike |
1,909,316 | Cutting through buggy adversarial example defenses: fixing 1 line of code breaks Sabre | Cutting through buggy adversarial example defenses: fixing 1 line of code breaks Sabre | 0 | 2024-07-02T19:23:09 | https://aimodels.fyi/papers/arxiv/cutting-through-buggy-adversarial-example-defenses-fixing | machinelearning, ai, beginners, datascience | *This is a Plain English Papers summary of a research paper called [Cutting through buggy adversarial example defenses: fixing 1 line of code breaks Sabre](https://aimodels.fyi/papers/arxiv/cutting-through-buggy-adversarial-example-defenses-fixing). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- The paper reveals significant flaws in the evaluation of the Sabre defense against adversarial examples, which was accepted at IEEE S&P 2024.
- The authors find clear signs of gradient masking in the original evaluation, which they trace back to a bug in the evaluation code.
- By fixing a single line of code, the authors are able to reduce Sabre's robust accuracy to 0%.
- In response, the authors modify the defense and introduce a new component, but this fix also contains a bug that reduces robust accuracy below baseline levels.
## Plain English Explanation
The researchers examined a defense mechanism called Sabre, which was designed to protect machine learning models from [adversarial examples](https://aimodels.fyi/papers/arxiv/certified-adversarial-robustness-machine-learning-based-malware). Adversarial examples are small, carefully crafted perturbations to an input that can fool a model into making incorrect predictions.
The researchers found significant problems with how the Sabre defense was evaluated in the original paper. They discovered that the evaluation was flawed, exhibiting a phenomenon called [gradient masking](https://aimodels.fyi/papers/arxiv/from-attack-to-defense-insights-into-deep). Gradient masking occurs when a defense mechanism inadvertently hides important information that attackers need to find effective adversarial examples.
The researchers traced the gradient masking to a bug in the original evaluation code. By fixing just a single line of code, they were able to reduce Sabre's robust accuracy (its ability to withstand adversarial attacks) to 0%. This means the defense was not nearly as effective as the original paper claimed.
In response to this finding, the Sabre authors modified their defense and added a new component. However, the researchers found that this modified defense also contained a bug. By fixing one more line of code, they were able to reduce the robust accuracy of the updated Sabre defense to even lower than the baseline level (the performance without any defense).
## Technical Explanation
The researchers conducted a thorough [evaluation of the Sabre defense](https://aimodels.fyi/papers/arxiv/attackbench-evaluating-gradient-based-attacks-adversarial-examples) using a variety of attack methods. They found that the original evaluation suffered from [gradient masking](https://aimodels.fyi/papers/arxiv/from-attack-to-defense-insights-into-deep), a phenomenon where the defense mechanism inadvertently hides important information that attackers need to find effective adversarial examples.
By investigating the evaluation code, the researchers discovered a bug that was causing the gradient masking. They were able to fix this bug by modifying a single line of code, which then reduced Sabre's robust accuracy to 0%.
In response, the Sabre authors modified their defense and introduced a new component that was not described in the original paper. However, the researchers found that this modified defense also contained a bug. By fixing one more line of code, they were able to reduce the robust accuracy of the updated Sabre defense to below baseline levels.
## Critical Analysis
The researchers' findings raise significant concerns about the validity of the original Sabre paper. The discovery of bugs in both the evaluation and the modified defense suggests that the Sabre authors may have overlooked important details in their work.
While the researchers were able to identify and fix the bugs, it is concerning that such fundamental issues were present in a defense mechanism that was accepted at a prestigious conference like IEEE S&P. This raises questions about the rigor of the review process and the ability of the research community to thoroughly vet defensive techniques against adversarial attacks.
The researchers' work also highlights the importance of [robust evaluation](https://aimodels.fyi/papers/arxiv/towards-robust-domain-generation-algorithm-classification) and the need for researchers to carefully examine their own work and the work of others. The [discovery of these bugs](https://aimodels.fyi/papers/arxiv/novel-approach-to-guard-from-adversarial-attacks) suggests that the field of adversarial machine learning may still have significant room for improvement.
## Conclusion
The researchers' analysis of the Sabre defense reveals serious flaws in the original evaluation and the modified defense. Their findings suggest that the Sabre defense may not be as effective as the original paper claimed and that the research community needs to be more diligent in vetting defensive techniques against adversarial attacks. This work highlights the importance of robust evaluation and the need for researchers to carefully examine their own work and the work of others in this rapidly evolving field.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,909,315 | Navigating Flatiron Bootcamp School: Phase 1 Challenges and Lessons Learned | Enrolling in Flatiron Bootcamp School was an exciting yet daunting step toward my dream of becoming a... | 0 | 2024-07-02T19:22:08 | https://dev.to/edwin_olivares_cf4f814058/navigating-flatiron-bootcamp-school-phase-1-challenges-and-lessons-learned-1836 | Enrolling in Flatiron Bootcamp School was an exciting yet daunting step toward my dream of becoming a skilled software developer. As I embarked on Phase 1, I anticipated a challenging yet rewarding journey. However, the reality of the experience brought several unexpected challenges, particularly in managing my time effectively. With only a month left to demonstrate my progress, I want to share my journey through Phase 1, the obstacles I faced, and the strategies I developed to overcome them.
## The Allure and Reality of Phase 1
Phase 1 of Flatiron Bootcamp School is designed to build a solid foundation in programming fundamentals. It covers essential topics such as JavaScript basics, connecting to JSON servers, and understanding CSS and the Document Object Model (DOM). The structured curriculum, experienced instructors, and supportive community promised an immersive learning experience. However, the intensity of the program quickly became apparent. Balancing daily life, lessons, coding exercises, and personal commitments required effective time management, a skill I initially underestimated.
The most significant challenge I faced during Phase 1 was managing my time efficiently. The bootcamp left too much room for procrastination. Despite my best intentions, I found myself struggling to keep up with the curriculum. At the outset, I underestimated the sheer volume of material covered each week. The daily lectures and coding exercises demanded consistent effort and focus. Initially, I allocated insufficient time for review and practice, believing I could catch up later. This miscalculation quickly snowballed, leaving me overwhelmed and behind schedule.
Procrastination became a significant barrier. The temptation to delay studying or completing assignments in favor of less demanding activities was ever-present. Additionally, distractions from family passings, social media and household chores further eroded my productivity. These habits hindered my progress and created a cycle of stress and guilt. Without a clear plan, I often found myself aimlessly switching between tasks. Important assignments and concepts were neglected, while less critical activities consumed valuable time. My lack of prioritization resulted in incomplete assignments and a shallow understanding of crucial topics.
## Tackling JavaScript, JSON Servers, CSS, and the DOM
Understanding JavaScript fundamentals, connecting to JSON servers, and mastering CSS and the DOM were integral parts of Phase 1. These topics presented their own unique challenges and required significant dedication to grasp fully. JavaScript, being the backbone of web development, was a crucial aspect of the curriculum. Learning the intricacies of variables, functions, loops, and event handling was daunting at first. The syntax and logic of JavaScript felt foreign, and it took time to become comfortable writing and debugging code.
Connecting to JSON servers and fetching data dynamically added another layer of complexity. Understanding APIs, HTTP requests, and asynchronous programming required a steep learning curve. Initial attempts often led to errors and frustration, but persistence paid off as I gradually learned to handle data fetching and manipulation effectively. CSS and the DOM posed their own set of challenges. Styling web pages to achieve a visually appealing design was more than I anticipated. Learning about selectors, properties, and responsive design principles demanded careful attention to detail. Additionally, manipulating the DOM to create interactive web applications required a deep understanding of how elements are structured and modified dynamically.
Recognizing the need for change, I dove into several strategies to improve my time management and overall performance in Phase 1. These adjustments not only helped me but also instilled habits that will benefit me throughout the bootcamp and beyond. I began by creating a detailed daily schedule, breaking down tasks into manageable chunks. Each day included designated periods for lectures, exercises, and review sessions. I also set aside time for breaks and leisure activities to prevent burnout. This structured approach provided a clear roadmap and reduced the likelihood of procrastination.
Setting achievable daily and weekly goals proved crucial. I identified key concepts and assignments that needed immediate attention and focused on completing them first. This prioritization ensured that I tackled the most critical tasks without feeling overwhelmed. Gradually, I built momentum and regained confidence in my abilities. To combat distractions, I created a dedicated study space free from interruptions. I also used productivity tools and apps to block distracting websites during study sessions. By eliminating external disruptions, I was able to maintain concentration and complete tasks more efficiently.
The Pomodoro Technique, which involves working in focused intervals (typically 25 minutes) followed by short breaks, proved highly effective. This method enhanced my productivity and prevented burnout by balancing intense work periods with relaxation. It also made long study sessions feel more manageable. From here on, I will engage with the Flatiron community, participating in study groups and seeking help from instructors and peers.
With only a month left to demonstrate my progress, I am proud of the strides I have made in overcoming my time management challenges and mastering the core concepts of Phase 1. Understanding JavaScript, connecting to JSON servers, and mastering CSS and the DOM were significant milestones in my journey. The structured schedule, realistic goal-setting, and proactive approach to minimizing distractions have significantly improved my performance. While there is still much to learn and achieve, I am confident in my ability to navigate the remaining phases of the bootcamp successfully.
| edwin_olivares_cf4f814058 | |
1,909,314 | σ-GPTs: A New Approach to Autoregressive Models | σ-GPTs: A New Approach to Autoregressive Models | 0 | 2024-07-02T19:22:01 | https://aimodels.fyi/papers/arxiv/sigma-gpts-new-approach-to-autoregressive-models | machinelearning, ai, beginners, datascience | *This is a Plain English Papers summary of a research paper called [σ-GPTs: A New Approach to Autoregressive Models](https://aimodels.fyi/papers/arxiv/sigma-gpts-new-approach-to-autoregressive-models). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- Introduces a new approach to autoregressive models called σ-GPTs (Sigma-GPTs)
- Proposes a novel sampling method that can generate diverse samples while maintaining high quality
- Demonstrates improved performance on language modeling and text generation tasks compared to traditional autoregressive models
## Plain English Explanation
The paper introduces a new type of language model called σ-GPTs (Sigma-GPTs), which take a different approach to autoregressive modeling compared to traditional models like GPT. Autoregressive models work by predicting the next token in a sequence based on the previous tokens. However, this can lead to issues like lack of diversity in the generated samples.
The key innovation in σ-GPTs is a new sampling method that aims to address this problem. Instead of simply selecting the most likely next token, σ-GPTs consider a range of potential tokens and use a technique called "rejection sampling" to select one that balances quality and diversity. This allows the model to generate more varied and unexpected outputs while still maintaining high quality.
The paper demonstrates that σ-GPTs outperform traditional autoregressive models on language modeling and text generation benchmarks, producing more diverse and engaging samples. This could have applications in areas like creative writing, dialog systems, and open-ended text generation.
## Technical Explanation
The paper proposes a new approach to autoregressive modeling called σ-GPTs (Sigma-GPTs). Autoregressive models like [GPT](https://aimodels.fyi/papers/arxiv/foundational-gpt-model-meg) work by predicting the next token in a sequence based on the previous tokens. However, this can lead to issues like lack of diversity in the generated samples, as the model often simply selects the most likely next token.
To address this, the authors introduce a novel sampling method for σ-GPTs. Instead of just taking the top-k most likely tokens, σ-GPTs consider a wider range of potential tokens and use a technique called "rejection sampling" to select one. This involves evaluating each candidate token based on both its likelihood and a diversity score, and then probabilistically accepting or rejecting it. This allows the model to generate more varied and unexpected outputs while still maintaining high quality.
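The acceptance step described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the diversity score and the way it is mixed with the likelihood are placeholder assumptions.

```python
import math
import random

def rejection_sample_token(candidates, diversity_score, alpha=0.5):
    """Select one token from (token, log_prob) pairs via rejection sampling.

    Each proposal is accepted with a probability that mixes the model's
    likelihood with a diversity score in [0, 1]. Both `diversity_score`
    and the mixing rule are illustrative placeholders, not the paper's
    actual definitions.
    """
    while True:
        token, log_prob = random.choice(candidates)  # propose a candidate
        p_accept = (1 - alpha) * math.exp(log_prob) + alpha * diversity_score(token)
        if random.random() < min(1.0, p_accept):     # probabilistic accept/reject
            return token
```

In practice, the candidate set would come from the model's output distribution at each decoding step, and the diversity score from the sampling history, so that likely-but-repetitive tokens are sometimes rejected in favor of less common ones.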
The paper evaluates σ-GPTs on language modeling and text generation tasks, showing that they outperform traditional autoregressive models. The authors also conduct ablation studies to analyze the impact of different components of the approach, such as the diversity score and the rejection sampling process.
## Critical Analysis
The paper presents a promising new approach to autoregressive modeling that addresses important limitations of existing techniques. The authors' use of rejection sampling to balance quality and diversity is an elegant solution to the lack of diversity issue that often plagues autoregressive models.
However, the paper does not delve deeply into the potential downsides or limitations of the σ-GPT approach. For example, the rejection sampling process may introduce additional computational overhead, which could be a concern for real-time applications. The authors also do not explore how the method might perform on more open-ended or creative text generation tasks, where the balance between quality and diversity may be even more crucial.
Additionally, the paper does not address potential biases or unintended behaviors that could arise from the σ-GPT approach. As with any powerful language model, there is a risk of the model perpetuating or amplifying societal biases, producing harmful or offensive content, or being used for malicious purposes like [text generation attacks](https://aimodels.fyi/papers/arxiv/curse-recursion-training-generated-data-makes-models). Further research is needed to understand the safety and robustness of σ-GPTs in real-world applications.
## Conclusion
The σ-GPT approach presented in this paper represents a meaningful step forward in autoregressive modeling, addressing a key limitation of traditional methods. By incorporating a novel sampling technique that balances quality and diversity, the authors have demonstrated impressive results on language modeling and text generation tasks.
This research could have significant implications for a wide range of applications that rely on generative language models, from creative writing and dialog systems to [personalized recommendation engines](https://aimodels.fyi/papers/arxiv/recgpt-generative-personalized-prompts-sequential-recommendation-via) and [stock market prediction](https://aimodels.fyi/papers/arxiv/stockgpt-genai-model-stock-prediction-trading). As the field of [generative AI](https://aimodels.fyi/papers/arxiv/generative-ai-based-text-generation-methods-using) continues to advance, approaches like σ-GPTs may play a key role in unlocking the full potential of these powerful language models.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,909,313 | .NET History | .NET is a free and open-source managed software framework for the Windows, Linux and macOS operating systems... | 0 | 2024-07-02T19:21:03 | https://dev.to/xojimurodov/net-tarixi-2n2a |

.NET is a free and open-source managed software framework for the Windows, Linux, and macOS operating systems. [4] The project is developed primarily by Microsoft employees through the .NET Foundation and is released under the MIT license.
In the late 1990s, Microsoft began developing a managed-code runtime and a programming language (C#), introduced as part of its ".NET platform"; the core runtime and software libraries make up the .NET Framework.
Shortly after the C# language was announced at the 2000 Professional Developers Conference and preview versions of its software became available, Microsoft began standardization work through ECMA under the name Common Language Infrastructure. At the same time, the company continued to develop and support its own implementation as proprietary software.
- .NET Core 1.0 was released on June 27, 2016, together with Microsoft Visual Studio 2015 Update 3, which enables .NET Core development.
- .NET Core 1.0.4 and .NET Core 1.1.1 were released on March 7, 2017, together with .NET Core Tools 1.0 and Visual Studio 2017.
- .NET Core 2.0 was released on August 14, 2017, together with Visual Studio 2017 15.3, ASP.NET Core 2.0, and Entity Framework Core 2.0.
- .NET Core 2.1 was released on May 30, 2018.
- .NET Core 2.2 was released on December 4, 2018.
- .NET Core 3 was released on September 23, 2019; it adds support for Windows desktop application development and significantly improves the performance of the entire core library.
- In November 2021, Microsoft released .NET 6.0.
- In November 2022, it released .NET 7.0.
- In November 2023, it released .NET 8.0.
| xojimurodov | |
1,909,311 | Bytes Are All You Need: Transformers Operating Directly On File Bytes | Bytes Are All You Need: Transformers Operating Directly On File Bytes | 0 | 2024-07-02T19:20:52 | https://aimodels.fyi/papers/arxiv/bytes-are-all-you-need-transformers-operating | machinelearning, ai, beginners, datascience | *This is a Plain English Papers summary of a research paper called [Bytes Are All You Need: Transformers Operating Directly On File Bytes](https://aimodels.fyi/papers/arxiv/bytes-are-all-you-need-transformers-operating). If you like these kinds of analysis, you should subscribe to the [AImodels.fyi newsletter](https://aimodels.substack.com) or follow me on [Twitter](https://twitter.com/mikeyoung44).*
## Overview
- The paper investigates a novel approach to deep learning that can operate directly on file bytes, without the need for modality-specific preprocessing.
- The proposed model, called ByteFormer, achieves significantly higher accuracy on ImageNet classification compared to previous models of similar size.
- The same ByteFormer architecture can also perform audio classification and joint classification of images and audio without any modality-specific changes.
## Plain English Explanation
Typically, deep learning models for tasks like image classification first need to convert the raw image data into a specific format that the model can understand, like a tensor of RGB values. This preprocessing step is designed specifically for the image modality and can be a bottleneck.
Instead, the researchers in this paper developed a model called [ByteFormer](https://aimodels.fyi/papers/arxiv/optimizing-byte-level-representation-end-to-end) that can operate directly on the raw file bytes, without any modality-specific preprocessing. This allows the model to be used with various data types, like images and audio, without the need for custom handling.
On the ImageNet image classification benchmark, ByteFormer achieved a 5% higher top-1 accuracy compared to previous models of similar size, like [DeiT](https://aimodels.fyi/papers/arxiv/image-is-worth-more-than-16x16-patches). The researchers also showed that ByteFormer can be used for audio classification on the Speech Commands V2 dataset, achieving comparable accuracy to the state-of-the-art.
Furthermore, the ByteFormer model was able to handle joint classification of both images and audio together, without any explicit knowledge of the input modality. This demonstrates the model's ability to learn modality-independent representations.
## Technical Explanation
The key innovation in the ByteFormer model is its ability to perform classification directly on the raw file bytes, without the need for any modality-specific preprocessing or decoding. This is achieved through the use of a [Transformer-based](https://aimodels.fyi/papers/arxiv/spacebyte-towards-deleting-tokenization-from-large-language) architecture that can learn to extract relevant features from the byte-level representation.
The researchers demonstrate the effectiveness of this approach by achieving a 5% improvement in top-1 accuracy on the ImageNet classification benchmark compared to the DeiT model, while using an order of magnitude fewer parameters. This suggests that the ByteFormer model is able to learn more efficient and generalizable representations from the raw data.
Additionally, the researchers show that the same ByteFormer architecture can be applied to audio classification on the Speech Commands V2 dataset, achieving comparable accuracy to the state-of-the-art. This highlights the model's ability to learn modality-independent representations that can be applied across different data types.
The researchers also explore the use of ByteFormer for joint classification of images and audio, demonstrating the model's capability to handle multimodal data without any explicit knowledge of the input modality. This is an important capability for real-world applications where data may come from a variety of sources.
## Critical Analysis
One potential limitation of the ByteFormer approach is that it may be less sample-efficient compared to models that rely on modality-specific preprocessing. The ability to operate directly on raw data could come at the cost of requiring more training data to learn the necessary features.
Additionally, the paper does not provide a detailed analysis of the interpretability or explainability of the ByteFormer model. As the model operates directly on byte-level representations, it may be more challenging to understand the internal workings and the reasoning behind its decisions.
Further research could explore ways to improve the sample efficiency of the ByteFormer model, potentially by incorporating modality-specific inductive biases or transfer learning techniques. Investigations into the interpretability of the model's representations and decision-making processes could also shed light on its strengths and limitations.
## Conclusion
The [ByteFormer](https://aimodels.fyi/papers/arxiv/optimizing-byte-level-representation-end-to-end) model presented in this paper represents a significant step towards more flexible and generalizable deep learning systems. By performing classification directly on raw file bytes, the model can operate on a variety of data modalities without the need for custom preprocessing.
The demonstrated improvements in ImageNet classification accuracy and the model's ability to handle audio and multimodal data suggest that this approach has the potential to unlock new possibilities in a wide range of applications, from [robust latent representation tuning](https://aimodels.fyi/papers/arxiv/robust-latent-representation-tuning-image-text-classification) for image-text classification to [audio classifier performance tuning](https://aimodels.fyi/papers/arxiv/tuning-analysis-audio-classifier-performance-clinical-settings) in clinical settings. As deep learning continues to evolve, techniques like ByteFormer may pave the way for more flexible and powerful models that can adapt to diverse data sources and tasks.
**If you enjoyed this summary, consider subscribing to the [AImodels.fyi newsletter](https://aimodels.substack.com) or following me on [Twitter](https://twitter.com/mikeyoung44) for more AI and machine learning content.** | mikeyoung44 |
1,909,308 | Learn Generative AI from Real Product Development Experience | Learn Generative AI from Real Product Development Experience. Yes, you heard that right!... | 0 | 2024-07-02T19:16:19 | https://dev.to/aws-builders/learn-generative-ai-from-real-product-development-experince-1eci | aws, ai, genai, llm | **Learn Generative AI from Real Product Development Experience. Yes, you heard that right!** https://www.meetup.com/meetup-group-komabzoz/events/301904960/
Join us every 2nd Thursday of the month to learn Generative AI from actual product development experience. I will share my insights and experiences working on two Generative AI products on the AWS platform using Anthropic, Vector Database, and several other services.
The first product is scheduled to go live by the end of July 2024. This series of hands-on workshops will cover:
- Generative AI
- Prompt Engineering
- Vector Database
- Building Software with Generative AI
- Generative AI and the Future of Work
Come and join me in this jaw-dropping hands-on session on Generative AI. Each month, you'll learn something new and get closer to developing your own product or starting an exciting career in the rapidly growing field of Generative AI.
Don't miss out on this incredible opportunity to dive into the world of Generative AI!
 | ameet |
1,909,305 | Case Study: Computing Fibonacci Numbers | In some cases, recursion enables you to create an intuitive, straightforward, simple solution to a... | 0 | 2024-07-02T19:11:18 | https://dev.to/paulike/case-study-computing-fibonacci-numbers-47ac | java, programming, learning, beginners | In some cases, recursion enables you to create an intuitive, straightforward, simple solution to a problem. The **factorial** method in the preceding section could easily be rewritten without using recursion. In this section, we show an example for creating an intuitive solution to a problem using recursion. Consider the well-known Fibonacci-series problem:

The Fibonacci series begins with **0** and **1**, and each subsequent number is the sum of the preceding two. The series can be recursively defined as:
```
fib(0) = 0;
fib(1) = 1;
fib(index) = fib(index - 2) + fib(index - 1); index >= 2
```
The Fibonacci series was named for Leonardo Fibonacci, a medieval mathematician, who originated it to model the growth of the rabbit population. It can be applied in numeric optimization and in various other areas.
How do you find **fib(index)** for a given **index**? It is easy to find **fib(2)**, because you know **fib(0)** and **fib(1)**. Assuming that you know **fib(index - 2)** and **fib(index - 1)**, you can obtain **fib(index)** immediately. Thus, the problem of computing **fib(index)** is reduced to computing **fib(index - 2)** and **fib(index - 1)**. When doing so, you apply the idea recursively until **index** is reduced to **0** or **1**.
The base case is **index = 0** or **index = 1**. If you call the method with **index = 0** or **index = 1**, it immediately returns the result. If you call the method with **index >= 2**, it divides the problem into two subproblems for computing **fib(index - 1)** and **fib(index - 2)** using recursive calls. The recursive algorithm for computing **fib(index)** can be simply described as follows:
```
if (index == 0)
  return 0;
else if (index == 1)
  return 1;
else
  return fib(index - 1) + fib(index - 2);
```
The code below gives a complete program that prompts the user to enter an index and computes the Fibonacci number for that index.

The program does not show the considerable amount of work done behind the scenes by the computer. Figure below, however, shows the successive recursive calls for evaluating **fib(4)**. The original method, **fib(4)**, makes two recursive calls, **fib(3)** and **fib(2)**, and then returns **fib(3) + fib(2)**. But in what order are these methods called? In Java, operands are evaluated from left to right, so **fib(2)** is called after **fib(3)** is completely evaluated. The labels in Figure below show the order in which the methods are called.

As shown in Figure above, there are many duplicated recursive calls. For instance, **fib(2)** is called twice, **fib(1)** three times, and **fib(0)** twice. In general, computing **fib(index)** requires roughly twice as many recursive calls as does computing **fib(index - 1)**. As you try larger index values, the number of calls substantially increases, as shown in Table below.

The recursive implementation of the **fib** method is very simple and straightforward, but it isn’t efficient, since it requires more time and memory to run recursive methods. Though it is not practical, the recursive **fib** method is a good example of how to write recursive methods. | paulike |
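The cost of those duplicated calls comes from recomputing the same values over and over; a cached variant (not part of the chapter's listing — the class name and the use of `HashMap` are our own illustration) computes each value only once:

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
  // Cache of already-computed Fibonacci values.
  private static final Map<Long, Long> cache = new HashMap<>();

  public static long fib(long index) {
    if (index == 0) return 0;
    if (index == 1) return 1;
    Long cached = cache.get(index);
    if (cached != null) return cached;     // reuse instead of recomputing
    long result = fib(index - 1) + fib(index - 2);
    cache.put(index, result);
    return result;
  }
}
```

With the cache, computing fib(index) makes a number of recursive calls proportional to index, instead of the roughly doubling call counts shown in the table above.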
1,909,304 | Simple thoughts | If you read an article that interests you and you enjoy it, or if you watch a video, consider leaving... | 0 | 2024-07-02T19:09:09 | https://dev.to/douglasmakey/simple-thoughts-4i4i | If you read an article that interests you and you enjoy it, or if you watch a video, consider leaving a message or a reaction. I don't consider myself a content creator, but I genuinely enjoy writing my articles and sharing them with others. For me, it's incredibly rewarding if just one person finds my articles helpful or at least interesting to read.
When someone leaves a comment or reacts, it makes all the hours I spend preparing the content feel worthwhile. I know you don't have to do it—it's not mandatory—but next time, think about spending a few seconds to make the hours others spend creating content worth it. ❤️
I'm not the only one who feels this way. I sometimes think my content might not be too interesting, but there's a lot of amazing content out there. So, next time, consider the effort and dedication people put into creating great content. Your engagement will make them happy! | douglasmakey | |
1,909,096 | Automating Linux User Creation with Bash Script | In today's fast-paced technology environment, efficiency and automation are key. Automating tasks... | 0 | 2024-07-02T19:08:05 | https://dev.to/oluwatosin_dorcas_63db390/automating-user-creation-with-bash-script-4ae1 | In today's fast-paced technology environment, efficiency and automation are key. Automating tasks with a Bash script can save a significant amount of time and reduce errors. In this technical report, we will walk through the process of creating a Bash script to automate user and group creation, setting up home directories, and managing permissions and passwords.
**Project Overview**
Your company has recently hired several new developers, and you need to create user accounts and groups for them. To streamline this process, we will write a Bash script called create_users.sh.
This script will;
1. Read a text file containing usernames and group names,
2. Create users and groups as specified,
3. Set up home directories,
4. Generate random passwords, and
5. Log all actions to /var/log/user_management.log and store the generated passwords securely in /var/secure/user_passwords.txt.
We can create the Bash script called "create_users.sh" with this command;

**Implementation steps**
Let's walk through the script step-by-step to understand its functionality.
1. **Checking root privileges;**
This line specifies that the script should be executed with the Bash shell.

The script checks if it is being run as root. If not, it prompts the user to run the script with root privileges and exits.

2. **Checking for User Data File**;
The script checks if the filename (user-data-file) is provided as an argument. If not, it displays the correct usage and exits.

3. **Initializing Variables and Creating Directories**;
The script creates the necessary directories and sets appropriate permissions to ensure security.
Here, the 'user_data_file' variable stores the filename provided as an argument, while 'log_file' and 'password_file' store the paths for logging actions and storing passwords.

4. **Generating Random Passwords**;
A function to generate random passwords using openssl.

5. **Reading User Data File and Creating Users**;
The script reads the user data file line by line. For each line, it:
- Trims any leading or trailing whitespaces from the username and groups.
- Checks if the user already exists. If so, it logs the information and moves to the next user.
- Creates the user and assigns them a personal group.

6. **Adding Users to Additional Groups**;
If additional groups are specified, the script adds the user to these groups, creating the groups if they do not exist.

7. **Setting Home Directory Permissions**;
The script sets appropriate permissions for the user's home directory.

8. **Generating and Storing Passwords**;
It generates a random password, sets it for the user, and stores it in the password file.

9. **Logging Actions**;
Finally, the script logs all actions and completes the user creation process.

**Running the script**;
1. **Create the txt file containing the users and the groups;**
The user accounts' structure is contained in this text file. Save and close the file.
Every line in the file identifies a user along with the groups (such as "admin" or "finance") to which they are assigned. The semicolon divides the username from its groups. users.txt has the structure:
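For illustration, a file in that format might look like this (the usernames and group names here are made-up examples, not taken from the article):

```
alice; sudo,dev
bob; dev,www-data
charlie; sudo
```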
2. **Ensure the script is executable**;

3. **Run script**;

**Verify the results**
1. Check the log file for actions performed;

2. Verify the user passwords file;

3. Ensure the new users and groups are created correctly;

**Conclusion**
This script automates the creation of users and groups, ensuring a streamlined onboarding process. This article covers a stage-two task in the DevOps track of the HNG Internship. For more information about the HNG Internship and how it can benefit your organization, visit [HNG Internship](https://hng.tech/internship) and [HNG Hire](https://hng.tech/hire).
By using this tutorial, you can make your organization's user management procedure more efficient and ensure that new developers are onboarded promptly.
Happy Scripting.
| oluwatosin_dorcas_63db390 | |
1,909,302 | AWS S3 Event Triggering | *🚀 Excited to share my latest project leveraging AWS to automate event-driven workflows! 🌐 * 🔧... | 0 | 2024-07-02T19:05:22 | https://dev.to/sukuru_naga_sai_srinivasu/aws-s3-event-triggering-3m5n | aws, lambda, serverless, shellscripting |

**🚀 Excited to share my latest project leveraging AWS to automate event-driven workflows! 🌐**
**🔧 Project Overview:**
I've developed a robust automation script using AWS services to handle events in an S3 bucket seamlessly. This project showcases the power of serverless computing and event-driven architecture.
**📋 Key Features:**
1) AWS Lambda: Dynamically processes events triggered by S3 object creations.
2) SNS Notifications: Sends instant notifications via email on new object uploads.
3) IAM Role Management: Securely manages permissions for Lambda function execution.

**📦 Deployment Steps:**
1) AWS Setup: Configured IAM roles and policies for seamless integration.
2) S3 Bucket Configuration: Created a dedicated bucket to store and trigger events.
3) Lambda Function Deployment: Packaged and deployed Python-based Lambda functions.
4) SNS Integration: Integrated SNS for real-time email notifications on S3 events.
**🛠️ Technologies Used:**
1) AWS CLI
2) Bash Scripting
**🔍 Outcome:**
1) Streamlined workflows, enhanced operational efficiency, and improved real-time notifications for critical events.

**📈 Benefits:**
1) Scalability: Scales effortlessly with increased workload demands.
2) Reliability: Ensures reliable event processing with minimal latency.
3) Cost Efficiency: Optimizes costs with pay-per-use serverless computing.

| sukuru_naga_sai_srinivasu |
1,909,300 | REACTJS vs TYPESCRIPT - A COMPARISON | INTRODUCTION ReactJS and TypeScript are two powerful technologies, used separately or together for... | 0 | 2024-07-02T19:03:50 | https://dev.to/orionthehunter/reactjs-vs-typescript-comparing-two-frontend-technologies-54h2 | webdev, react, typescript, beginners | **INTRODUCTION**
ReactJS and TypeScript are two powerful technologies, used separately or together for optimal and scalable web applications. In this article, I'll talk about ReactJs and TypeScript, and also compare both technologies, so you can choose the right one for your project.
**REACTJS**
ReactJS is a JavaScript library, developed by Facebook that enables developers create dynamic and reusable user interface components capable of managing their own state.
**REACTJS FEATURES**
* React Components: With React, you can create reusable components.
* Data Binding: React uses one-way data binding, which makes operations fast and predictable.
* Virtual DOM: React uses a virtual DOM, which minimizes the number of direct manipulations of the real DOM.
* JSX: React uses JSX, a syntax extension for JavaScript, to define the structure and appearance of components.
**TYPESCRIPT**
TypeScript is a superset of JavaScript that compiles to plain JavaScript. Developed by Microsoft, TypeScript offers advanced tooling and better error checking.
**TYPESCRIPT FEATURES**
* Static Typing: This helps in catching errors in the compiling stage thereby reducing runtime errors.
* Type Inference: TypeScript automatically infers types, reducing the amount of manual type annotations required.
* Compatibility: Works smoothly with JavaScript libraries and frameworks.
* Enhanced Tooling: TypeScript offers enhanced tooling, which includes autocompletion, refactoring, and navigation.
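A tiny sketch of what static typing and inference buy you in practice (the names here are illustrative, not from any particular project):

```typescript
// An interface describes the shape of a props object.
interface GreetingProps {
  name: string;
  excited?: boolean; // optional property
}

function formatGreeting(props: GreetingProps): string {
  // Type inference: `suffix` is inferred as string, no annotation needed.
  const suffix = props.excited ? "!" : ".";
  return `Hello, ${props.name}${suffix}`;
}

// formatGreeting({ name: 42 }); // compile-time error: number is not assignable to string
console.log(formatGreeting({ name: "Ada", excited: true })); // prints "Hello, Ada!"
```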
**COMPARING REACTJS AND TYPESCRIPT**
Quickly, let us compare these two Frontend technologies:
In terms of runtime performance, React's virtual DOM ensures high performance for interactive UIs, while TypeScript's static typing improves code quality, which in turn benefits runtime reliability and overall application performance.
In terms of community, React has a larger community and rich ecosystem, with countless libraries and tools available (e.g., Redux, MobX). TypeScript is also widely used across many projects and is supported by major frameworks and libraries. You can use TypeScript together with ReactJS.
For flexibility, React developers can choose libraries and tools according to their needs. TypeScript is a superset of JavaScript with optional static typing, which provides safety and efficient tooling benefits without sacrificing flexibility.
**CONCLUSION**
In conclusion, ReactJS and TypeScript are widely used technologies; they can be used separately on individual projects, but together they are a powerful combination for building modern, maintainable, and scalable web applications. React's performance and flexibility make it suitable for large applications, while TypeScript's static typing and efficient tooling improve code quality and maintainability.
**HNG INTERNSHIP AND REACTJS**
As I begin my journey with the HNG Internship, I am excited to go deep into ReactJS. The internship provides a very good opportunity to work on real-world projects, collaborate with experienced mentors and meet new supportive interns. Using ReactJS in the HNG internship program will expose me to advanced concepts and the best practices.
I am excited about the chance to contribute to impactful projects, gain hands-on experience, and connect with potential employers through the HNG platform. I look forward to exploring new roles in frontend development.
If you are eager to learn coding, I recommend checking out the HNG Internship! it’s a great platform to kickstart your career in tech! Use any of the links below:
(https://hng.tech/internship)
(https://hng.tech/premium)
| orionthehunter |
1,908,149 | Hosting a Static Website on Azure Blob Storage | Azure storage stores all types of data. It is mainly designed for storage where we can easily store... | 0 | 2024-07-02T19:02:20 | https://dev.to/tracyee_/hosting-a-static-website-on-azure-blob-storage-jbi | azurestorage, cloudcomputing, staticwebsite | Azure storage stores all types of data. It is mainly designed for storage where we can easily store both structured and unstructured data. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data. In Microsoft Azure, you can easily upload your static website and share the link anywhere and anyone can see your website.
In Azure, you can enable static website hosting for free, but you will pay for the underlying storage account. Azure encrypts stored data. In this article, all the steps are shown one by one with screenshots. Follow the steps to host your static website.
**Log in to Azure**
- Click on Storage Accounts
- Click on "Create"


**Create a Storage Account**
In the Basics tab, under Project details, make sure the correct subscription is selected, then select _StorageRG_ from the resource group list, or create a new resource group if you don't want to use an existing one.
- Storage account name - Choose a unique name
- Region -_Select_ a region
- Performance - _Select_ Standard
- Redundancy - _Select_ Geo Redundant Storage
- _Click_ the next button or configure the other sections if required, otherwise leave as default.
- _Click_ on Review + Create

**Validation**
Check whether the storage account is configured properly. If it is, the Create option will appear; otherwise, you need to review all the configurations again. Then _click_ the **Create Button**.

**Deployment**

- Go to Resources

**Navigate to Static Website**
- On Data Management dropdown
- Click on Static Website

**Configure the Static Website**
- Enable Static Website
- Enter index document name
- Enter error document
- Save

- You will now see that Azure has created two endpoint links (primary and secondary).
- Copy the primary endpoint.
- Azure also created a storage container named **$web** to host the static website; _click_ on it.

**Navigate to Containers**
- On Data storage dropdown
- _Click_ container
- _Click_ **$web**

**Uploading Files from the PC**
- click on Upload.
- Navigate to where the website folder is located on the computer.
- Drag and Drop the files from the location to the provided box.


**Testing on a browser**
- Once your files are uploaded successfully, paste the primary endpoint into your browser.

Thank you for sticking with this post and going through the steps, I hope you were able to use this guide to setup a static website on Azure Storage. | tracyee_ |
1,909,299 | Recreating Apple's iPhone 15 Pro Website: A Modern Web Development Showcase | In my latest project, I embarked on an ambitious journey to recreate Apple’s iPhone 15 Pro website... | 0 | 2024-07-02T19:01:17 | https://dev.to/syedahmedullah14/recreating-apples-iphone-15-pro-website-a-modern-web-development-showcase-3jlo | webdev, javascript, react, programming | In my latest project, I embarked on an ambitious journey to recreate Apple’s iPhone 15 Pro website using a cutting-edge tech stack. Leveraging the power of React.js, Three.js, GSAP, and more, I aimed not only to replicate the sleek, interactive experience of the original site but also to push the boundaries of modern web development. Here’s a detailed look at how I brought this vision to life.
## Tech Stack Overview
### React.js:
As the foundation for building a responsive and dynamic user interface.
### Three.js:
For intricate 3D model rendering, allowing users to explore the
iPhone 15 Pro from various angles and in multiple colors and sizes.
### React Three Fiber:
Integrated seamlessly with Three.js, ensuring smooth interaction and performance within the React framework.
### React Three Drei:
A supportive library that facilitated complex Three.js functionalities, enhancing development efficiency.
### GSAP (Greensock Animation Platform):
Used extensively for creating beautiful, subtle animations that elevate user engagement and usability.
### Vite:
Chosen for its speed in development and building, optimizing the workflow throughout the project.
### Tailwind CSS:
Employed for its utility-first approach to styling, ensuring a consistent and visually appealing design across devices.
## Key Features Implemented
- **Smooth Animations with GSAP:** Enhanced the user experience with seamless transitions and animations, ensuring a delightful browsing experience.
- **3D Model Rendering:** Implemented using Three.js, allowing users to interactively view the iPhone 15 Pro in different colors and sizes, mimicking the original site's visual appeal.
- **Custom Video Carousel:** Developed using GSAP, providing an engaging way to showcase videos related to the iPhone 15 Pro, adding dynamic content to the site.
- **Fully Responsive Design:** Ensured that the website remains accessible and visually appealing across a wide range of devices and screen sizes, enhancing usability and accessibility.




## Project Setup and Deployment
To explore and experience the project yourself, follow these steps:
1. Clone the repository:
```
git clone https://github.com/syedahmedullah14/Apple-website.git
cd Apple-website
```
2. Install dependencies:
`npm install`
3. Start the development server:
`npm run dev`
4. Build for production:
`npm run build`
## Conclusion
This project stands as a testament to the power of combining modern web technologies to create a visually stunning and highly functional website. Through the integration of React.js, Three.js, GSAP, and other tools, I successfully recreated Apple’s iPhone 15 Pro website while showcasing my skills in frontend development.
Special thanks to Adrian and JavaScript Mastery for their invaluable guidance throughout this process, which greatly contributed to the project’s success.
🔗[Live Site](https://apple-website-livid.vercel.app/)
💻[GitHub Repository](https://github.com/syedahmedullah14/Apple-website)
Explore the live site to immerse yourself in the interactive experience of the iPhone 15 Pro!
| syedahmedullah14 |
1,909,298 | Call to Action: 8 Convincing CTA Design Tips | Find out what your CTA should be to maximize conversion rates in your digital solution. CTA, in the... | 0 | 2024-07-02T18:59:31 | https://dev.to/agunwachidiebelecalistus/call-to-action-8convincing-cta-designtips-c2f | webdev, design, learning, beginners | _Find out what your CTA should be to maximize conversion rates in your digital solution._
CTA, in the context of web development, refers to elements of a web page that encourage users to take a specific action. These can be buttons, links, or other interface elements that encourage visitors to interact with the website, such as registering, subscribing to a newsletter, downloading a file, making a purchase, etc. Without the right CTA design, your business risks being ignored by a large portion of your audience. Therefore, in our article, we will tell you how to create a call action that will interest users and fulfill its intended purpose.
**How to Create a Catchy CTA**
Let's look at eight tips that will allow you to create a CTA that can attract the attention of your website visitors and increase your conversion rate.
**1. Understanding interest and motivation**
Understanding the interests and motivations of your target audience is the basis for creating an effective call to action. Research what's important to your users, what problems they want to solve, and what their goals are. Knowing these aspects allows you to create a CTA that directly addresses the needs and desires of users.
**2. Prioritizing good UX**
Prioritizing a good user experience (UX) means that the CTA should be intuitive and easily accessible. Your goal is to make interaction with the website as convenient and enjoyable as possible. Consider the navigation so that the user can easily find the information he needs and take the action you encourage him to take.
**3. Optimizing UI design**
Optimizing your UI design is important to make your CTA stand out from the rest of your content. Use contrasting colors, prominent fonts, and ample space around the CTA to grab the user's attention. A well-designed UI helps users notice and take the suggested action faster.
**4. Strategic content and CTA placement**
Strategic placement of content and CTAs plays a key role in their effectiveness. Place calls to action in visible and logical places, such as at the beginning and end of the page, in the middle of long texts, or next to important information. This helps guide the user along the path you have intended for them.
**5. Designing effective CTAs**
Creating effective CTAs involves balancing attractive design with functionality. The button or element should be visible, but not overly intrusive. Use clear, familiar shapes that leave the user in no doubt about their purpose.
**6. Emphasizing the user's goal**
Focusing on the user's goal helps create a CTA that is perceived as useful and relevant. Focus on how your product or service can solve a user's problem or improve their life. The call to action should show what exactly the user will receive as a result of completing the proposed step.
**7. Crafting compelling text**
There is an art to creating compelling CTA copy. The text should be short, clear, and motivating to action. Use strong verbs and language that convey a sense of urgency or benefit. Examples: "Buy now", "Get a discount", and "Learn more".
**8. Appealing to emotions**
Emotional impact is a powerful tool in a marketer's arsenal. Craft your call to action to appeal to the user's emotions: fear of missing the opportunity, desire to benefit, or joy of owning something new. Emotionally charged CTAs can significantly increase conversion rates.
**The Job Ain't Done Yet**
Once you've created a CTA using all the tips above, the job is only partly done: one important step remains, which is making sure the call to action actually serves its purpose. To do this, conduct A/B testing. This way you can understand how changes to button design or placement affect your conversion rate. Experiment with these attributes until you get the maximum effect, because even the smallest modifications can give you better user engagement.
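To make that evaluation concrete, here is a minimal sketch (in Python, with made-up visit and click counts) of comparing two CTA variants by conversion rate using a two-proportion z-test. The numbers, function names, and the 5% significance threshold are illustrative assumptions, not Flowmapp functionality.

```python
import math

def conversion_rate(clicks, visits):
    """Share of visitors who completed the CTA action."""
    return clicks / visits

def two_proportion_z(clicks_a, visits_a, clicks_b, visits_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a = clicks_a / visits_a
    p_b = clicks_b / visits_b
    p_pool = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    return (p_b - p_a) / se

# Hypothetical A/B test: variant B uses a contrasting button color.
z = two_proportion_z(clicks_a=120, visits_a=4000, clicks_b=165, visits_b=4000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 5% level
```

If |z| stays below the threshold, keep iterating on design or placement rather than declaring a winner.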
**Simplify Your Job With Advanced UX/ UI Tools**
To simplify all stages of creating a CTA as much as possible, you can use convenient tools from Flowmapp.
At your disposal is the Wireframe Creator in which you can design the structure of your website in the smallest detail and see the full picture of what the call to action will look like. Plus, it's a great way to make changes to it without much effort.
You can also use Flowmapp's User Flow tool to build a conversion funnel from the moment visitors land on your website until they interact with your CTA. At the same time, you can track what obstacles stand in visitors' way and remove them so that users can complete the target action as quickly and effortlessly as possible.
CTA plays a crucial role in converting a regular website visitor into an active user or customer. Without clear and compelling calls to action, visitors can become lost on your site, unsure of what steps to take. An effective CTA guides the user, making their journey through the site intuitive and focused. If you want to simplify the work associated with creating a CTA, you can use completely free UX/UI design tools from Flowmapp.
| agunwachidiebelecalistus |
1,909,297 | Visual Content Modeling: A Comprehensive Guide | Content modeling is a crucial aspect of designing and developing content-rich applications, websites,... | 0 | 2024-07-02T18:59:03 | https://www.builder.io/m/explainers/content-modeling | cms, developer, webdev, programming |
Content modeling is a crucial aspect of designing and developing content-rich applications, websites, and digital experiences. It involves defining the structure, relationships, and attributes of different types of content within a system. By creating a content model, developers can ensure that content is organized, consistent, and easy to manage throughout its lifecycle.
## **Understanding the basics of content modeling**
At its core, [content modeling](https://www.builder.io/c/docs/models-intro) is about understanding the nature of your content and how it relates to other pieces of information within your system. It requires analyzing your content types, their characteristics, and how they fit together to create a cohesive user experience.

To get started with content modeling, first, ask yourself a few questions:
- What types of content do I have (for example, articles, products, events)?
- What are the key attributes of each content type (for example, title, description, author)?
- How do different content types relate to each other (for example, an article belongs to a category, such as a basketball article in a Sports category)?
<!-- -->
Answering these questions will help you build a clear picture of your content ecosystem and its structure.
## **Benefits of content modeling**
Investing time in content modeling offers several key benefits:
- **Consistency**: A well-defined content model ensures content is structured consistently across your application, making it easier to manage and maintain over time.
- **Flexibility**: By separating content from presentation, a content model allows you to reuse and repurpose content across different channels and devices without duplication.
- **Scalability**: A robust content model can accommodate growth and changes in your content strategy, enabling you to add new content types or attributes as needed.
- **Collaboration**: Content modeling promotes collaboration between developers, designers, and content creators by providing a shared understanding of how content should be structured and presented.
<!-- -->
## **Implementing a content modeling process**
To create a content model, follow these steps:
1. **Identify content types**: Start by listing all the different types of content within your system, such as articles, products, or user profiles.
2. **Define attributes**: For each content type, determine the key attributes or fields that describe its characteristics. For example, an article might have a title, body, author, and publication date.
3. **Establish relationships**: Consider how different content types relate to each other. Are there hierarchical relationships (for example, a product belongs to a category) or associative relationships (for example, an article relates to multiple tags)?
4. **Create a visual representation**: Use diagrams, entity-relationship models, or other visual tools to map out your content model and make it easier to understand and communicate with others.
5. **Iterate and refine**: As you implement your content model, be prepared to iterate and make adjustments based on feedback and real-world usage.
<!-- -->
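The first three steps above can be sketched in code. The following fragment (in Python, with illustrative type and field names not tied to any particular CMS) models an Article content type with its attributes, a hierarchical relationship to a Category, and an associative relationship to tags:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Category:           # step 1: a content type
    name: str
    description: str = ""

@dataclass
class Article:            # step 2: attributes of the type
    title: str
    body: str
    author: str
    published: date
    category: Category    # step 3: hierarchical relationship
    tags: list[str] = field(default_factory=list)  # associative relationship

sports = Category(name="Sports")
post = Article(
    title="Season opener recap",
    body="...",
    author="A. Writer",
    published=date(2024, 7, 2),
    category=sports,
    tags=["basketball"],
)
print(post.category.name)  # → Sports
```

Expressing the model this way makes steps 4 and 5 easier too: the types are a visual map of the content, and adding or renaming a field is a one-line change.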
<br>

## **Content modeling best practices**
To ensure your content model is effective and maintainable, consider these best practices:
- **Keep it simple**: Avoid over-complicating your content model with unnecessary types or attributes. Start with the essentials and add complexity only when needed.
- **Use clear naming conventions**: Choose descriptive names for your content types and attributes to make them easy to understand and work with.
- **Consider reusability**: Design your content types and attributes with reusability in mind so that content can be easily repurposed across different contexts.
- **Plan for extensibility**: Build flexibility into your content model to accommodate future growth and changes in your content strategy.
- **Document your model**: Create clear documentation that explains your content model, its purpose, and how developers and content creators should use it.
<!-- -->
## **Integrating content modeling with headless content management systems**
Content modeling is important when working with a [headless CMS](https://www.builder.io/m/knowledge-center/headless-cms-visual-guide), where content is decoupled from presentation and delivered through APIs. Defining a clear content model upfront ensures your content is structured consistently across different channels and devices.
When integrating content modeling with a headless CMS, consider these tips:
- Align your content model with your CMS: Ensure that your content model can be easily implemented within your chosen headless CMS, taking into account any limitations or best practices.
- Use content types and fields: Most headless CMS platforms provide a way to define content types and fields that align with your content model. Use these features to enforce consistency and validation.
- Leverage relationships and references: Many headless CMS platforms support relationships and references between content types, allowing you to create more complex content structures.
- Test and iterate: As you begin populating your headless CMS with content, test how well your content model works in practice and be prepared to make adjustments as needed.
<!-- -->
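As an illustration of the "content delivered through APIs" idea, the sketch below validates the kind of JSON payload a headless CMS might return for an article entry against the content model's required fields. The payload shape, endpoint, and field names are hypothetical, not any specific vendor's API.

```python
REQUIRED_FIELDS = {"title", "body", "author"}  # from the content model

def validate_entry(entry: dict) -> list[str]:
    """Return the required fields missing from a CMS entry, sorted."""
    return sorted(REQUIRED_FIELDS - entry.get("fields", {}).keys())

# Hypothetical API response for GET /content/articles/42
payload = {
    "id": "42",
    "type": "article",
    "fields": {"title": "Hello", "body": "…", "author": "A. Writer"},
    "refs": {"category": {"type": "category", "id": "7"}},  # relationship
}

missing = validate_entry(payload)
print("valid" if not missing else f"missing: {missing}")  # → valid
```

A small validation layer like this catches drift between the CMS configuration and the content model before it reaches a rendering channel.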

## **Pitfalls of a traditional approach**
Traditional headless CMSes have long relied on structured content modeling, which benefits data organization and consistency. However, this approach has several limitations:
1. Lack of flexibility: Rigid content structures often fail to accommodate diverse content needs and evolving design requirements.
2. Increased engineering tickets: Content changes frequently require developer intervention, leading to bottlenecks and slower time-to-market.
3. Limited visual control: Content creators often struggle to achieve desired layouts and designs within the constraints of predefined structures.
4. Scalability issues: As content needs grow and diversify, maintaining and updating numerous content types becomes increasingly complex.
<!-- -->
These pitfalls have led to a growing demand for a more adaptable and user-friendly approach to content modeling that balances structure with creativity.
## **Visual content modeling: A new approach**
Visual content modeling combines structured content modeling with the flexibility of component-based, drag-and-drop page building. It allows certain fields and content areas to be "open-ended," where creators can use pre-built components to build layouts visually. Developers can register specific components and design tokens in the code, ensuring that the [visual editing](https://www.builder.io/m/knowledge-center/visual-editing) experience uses only approved elements and styles.
Here's how it typically works:
1. Developers define a base content model with core structured fields, just like they would in traditional content modeling.
2. Developers create reusable components and design tokens.
3. Content creators can freely build by dragging and dropping pre-approved components onto a visual editor without the need for developer resources.
4. The platform serializes the visual layout and content into a format that can be stored and delivered via [API](https://www.builder.io/c/docs/api-intro).
<!-- -->
The result is content that has the benefits of structured data where needed but allows for pixel-perfect designs and creative flexibility where wanted. This approach reduces dependency on development tickets for content changes while retaining developer control over the content model, design system, and available components.
Think of your components as content models, each allowing them to be reused like Lego blocks. This approach enables your team to streamline the process from idea to production with extreme efficiency.
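A rough sketch of that workflow: developers register approved components, the editor serializes a drag-and-drop layout as data, and a renderer walks it, rejecting anything outside the registry. The component names and serialization shape here are illustrative (in Python), not Builder's actual format.

```python
# Registry of developer-approved components (step 2 above)
REGISTRY = {
    "Hero":   lambda props: f"<h1>{props['heading']}</h1>",
    "Button": lambda props: f"<button>{props['label']}</button>",
}

# What a visual editor might serialize after drag-and-drop (steps 3-4)
layout = [
    {"component": "Hero",   "props": {"heading": "Summer Sale"}},
    {"component": "Button", "props": {"label": "Shop now"}},
]

def render(layout):
    html = []
    for node in layout:
        if node["component"] not in REGISTRY:  # only approved elements
            raise ValueError(f"unknown component: {node['component']}")
        html.append(REGISTRY[node["component"]](node["props"]))
    return "\n".join(html)

print(render(layout))
```

Because the renderer only accepts registered components, content creators get creative freedom while developers keep control of the design system.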

## **When to use visual content modeling**
Consider visual content modeling when:
- You need some structured content but also flexibility for marketing teams to build new layouts.
- You want to empower marketing with drag-and-drop creation while retaining developer control.
- You're building content that needs a high degree of creative flexibility.
- You want to enable content and design iteration without a full development cycle.
- You want to leverage AI for rapid, on-brand content generation.
<!-- -->
It might be excessive when:
- Your content is highly structured and uniform.
- You only have a few stakeholders making frequent content changes.
- Your project is mostly read-only without constant content updates.
<!-- -->
The choice depends on your project and your team's needs. It can deliver the flexibility and agility required for projects with frequently changing design needs while maintaining developer control over core design elements.
## **Leveraging AI with visual content modeling**

Visual content modeling can be enhanced with [generative AI for rapid content creation.](https://www.builder.io/c/docs/ai#generating-content) Here's how it works:
1. The content creator provides a brief set of requirements.
2. The AI system, trained on specific content models and design systems, generates a complete visual layout.
3. The generated content can be reviewed, edited, and published without writing code.
<!-- -->
This AI-assisted approach allows for faster creation and iteration of on-brand content, adhering to predefined design systems and component libraries. It's particularly powerful when combined with A/B testing and personalization, enabling the quick creation of multiple page variants for different audiences or goals.
<video src="https://cdn.builder.io/o/assets%2FYJIGb4i01jvw0SRdL5Bt%2F0b008da5cf774ce5afe9f0b6d8094a88%2Fcompressed?apiKey=YJIGb4i01jvw0SRdL5Bt&token=0b008da5cf774ce5afe9f0b6d8094a88&alt=media&optimized=true" width="320" height="240" controls></video>
## **Real-world content modeling example**
To better understand how content modeling works in practice, let's explore a real-world example:
### **E-commerce Product Catalog**

Imagine you're building an e-commerce website that sells clothing. Your content model might include the following content types:
- **Product**: Represents an individual clothing item with attributes like name, description, price, size, color, and category.
- **Category**: Represents a grouping of related products, with attributes like name and description.
- **Brand**: Represents the manufacturer or brand of a product, with attributes like name and logo.
<!-- -->
Relationships between these content types might include:
- A product belongs to a category
- A product is associated with a brand
<!-- -->
By structuring your content this way, you can easily display products by category, filter by brand, and maintain consistent product information across your site.
```tsx
// Types for the product catalog content model (Category and Brand are
// filled in here so the snippet is self-contained)
interface Category {
  id: string;
  name: string;
}

interface Brand {
  id: string;
  name: string;
  logo: string;
}

interface Product {
  id: string;
  name: string;
  price: number;
  category: Category; // a product belongs to a category
  brand: Brand; // a product is associated with a brand
}

// Link is assumed to come from a routing library such as next/link
const Page = ({ products }: { products: Product[] }) => (
  <ul>
    {products.map(({ id, name }) => (
      <li key={id}>
        <Link href={`/products/${id}`}>{name}</Link>
      </li>
    ))}
  </ul>
);
```
## **Implementing content modeling in your workflow**
Now that you understand the basics of content modeling and have seen a real-world example, how can you start incorporating it into your own development workflow? Here are some tips:
- **Start early**: Consider your content model early in the project lifecycle, ideally during the planning and design phase. This will help ensure that your content structure aligns with your overall project goals and user needs.
- **Collaborate with stakeholders**: Involve content creators, designers, and other stakeholders in the content modeling process to get their input and ensure everyone is on the same page.
- **Iterate and refine**: Don't expect to get your content model perfect on the first try. Be prepared to iterate and adjust as you work with real content and user feedback.
- **Document and communicate**: Clearly document and share your content model with all relevant team members to ensure consistent understanding and implementation.
- **Use the right tools**: Choose tools that support your content modeling approach and make it easy to implement and maintain your content structure over time.
<!-- -->
By following these tips and making content modeling a core part of your development process, you can create more structured, flexible, and maintainable content-driven applications.
**Tip:** Create a [data model](https://www.builder.io/c/docs/models-data#creating-data-models)
<video src="https://cdn.builder.io/o/assets%2FYJIGb4i01jvw0SRdL5Bt%2F2dff78c988ff4830bdaf631628a1dcf9%2Fcompressed?apiKey=YJIGb4i01jvw0SRdL5Bt&token=2dff78c988ff4830bdaf631628a1dcf9&alt=media&optimized=true" width="320" height="240" controls></video>
## **Frequently Asked Questions (FAQ)**
### **What are content models?**
A content model serves as a blueprint for creating, managing, and presenting content across various channels and devices. It helps ensure consistency, flexibility, and scalability in content-driven projects.
### **How often should I update my content model?**
Your content model should be updated as your content and requirements change. This ongoing process requires regular evaluation and iteration to ensure that your model continues to serve your users and your business effectively.
## **Conclusion**
Content modeling is an essential skill for developers working on content-rich projects. By taking the time to understand, define, and structure your content upfront, you can create a solid foundation for building scalable and adaptable digital experiences. Whether you're working on a website, mobile app, or other content-driven application, investing in content modeling will pay off in the long run.
Builder's Visual Development Platform is revolutionizing front-end programming, offering AI-powered design-to-code, a visual editor, and an enterprise CMS. We drive unprecedented efficiency from idea to production, respecting your design systems and code preferences. By tackling the industry's hardest problems, we're changing how digital experiences are built, enabling teams to create, iterate, and optimize with unmatched speed and quality. With [Visual Copilot](https://www.builder.io/m/design-to-code), you can convert Figma designs into clean code with one click. Incrementally adopt Builder or rebuild your front end from scratch using the tech stack of your choice.
| stahlwalker |
1,909,296 | Shadow Testing: Ensuring Seamless Software Deployment | Introduction In the realm of software development and deployment, ensuring the reliability and... | 0 | 2024-07-02T18:56:12 | https://dev.to/keploy/shadow-testing-ensuring-seamless-software-deployment-3n5d | webdev, javascript, programming, tutorial |

## Introduction
In the realm of software development and deployment, ensuring the reliability and performance of new code changes before they reach production is paramount. [Shadow testing](https://keploy.io/blog/community/the-game-of-shadow-testing-the-core-of-test-generation), a technique often employed in the continuous integration/continuous deployment (CI/CD) pipeline, plays a crucial role in achieving this goal. This article delves into the concept of shadow testing, its benefits, implementation strategies, and best practices.
## What is Shadow Testing?
Shadow testing, also known as mirroring or live testing, involves running the new code version alongside the existing production code in a non-intrusive manner. The primary objective is to compare the behavior and performance of the new version against the current one without impacting the end-users. This technique allows developers to identify potential issues, validate performance improvements, and ensure compatibility with existing systems before fully deploying the new code.
## How Shadow Testing Works
In a shadow testing setup, incoming production traffic is duplicated and routed to both the existing production environment and the shadow environment running the new code version. The responses from both environments are then compared to identify any discrepancies. This process helps in validating that the new code behaves as expected under real-world conditions.
## Benefits of Shadow Testing
1. Risk Mitigation: Shadow testing minimizes the risk associated with deploying new code by identifying issues in a controlled environment before they affect end-users.
2. Real-World Validation: By using actual production traffic, shadow testing provides a realistic assessment of the new code's behavior and performance.
3. Performance Comparison: It allows for a direct comparison of performance metrics between the old and new code versions, helping to identify any performance regressions or improvements.
4. Seamless User Experience: Since shadow testing is non-intrusive, end-users continue to interact with the production environment without any interruptions or degradation in service.
5. Early Detection of Bugs: Running the new code in parallel with production helps in catching bugs and issues that might not be evident in a staging or testing environment.
## Implementing Shadow Testing
1. Set Up the Shadow Environment: Create a shadow environment that mirrors the production environment as closely as possible. This includes replicating the infrastructure, databases, and configurations.
2. Traffic Duplication: Implement a mechanism to duplicate incoming production traffic and route it to both the production and shadow environments. This can be achieved using load balancers, traffic mirroring tools, or custom routing logic.
3. Response Comparison: Capture and compare the responses from both environments to identify any discrepancies. This can be done using automated scripts or specialized comparison tools.
4. Monitoring and Logging: Implement comprehensive monitoring and logging to capture metrics, errors, and performance data from both environments. This information is crucial for identifying and diagnosing issues.
5. Analysis and Reporting: Analyze the collected data to identify any deviations in behavior or performance. Generate detailed reports to provide insights and recommendations for further action.
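In miniature, the duplicate-and-compare loop from the steps above looks like this. The two handler functions stand in for the production and shadow deployments (in a real setup they would be HTTP calls behind a traffic-mirroring layer), and the request and response fields are illustrative:

```python
def production_handler(request: dict) -> dict:
    """Current production code path."""
    return {"status": 200, "total": request["qty"] * 5}

def shadow_handler(request: dict) -> dict:
    """New code version under test — note the off-by-one regression."""
    return {"status": 200, "total": request["qty"] * 5 + 1}

def shadow_test(requests):
    """Send each request to both versions; report any discrepancies."""
    discrepancies = []
    for req in requests:
        prod, shadow = production_handler(req), shadow_handler(req)
        if prod != shadow:
            discrepancies.append({"request": req, "prod": prod, "shadow": shadow})
    return discrepancies

issues = shadow_test([{"qty": 1}, {"qty": 3}])
print(f"{len(issues)} discrepancies found")  # users only ever saw prod's answer
```

The key property is visible in the last line: the regression is caught and logged while end-users continue to receive only the production response.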
## Best Practices for Shadow Testing
1. Ensure Environment Parity: The shadow environment should closely match the production environment to provide accurate and reliable test results. Any differences in infrastructure, configurations, or data can lead to misleading conclusions.
2. Automate Traffic Duplication: Use automated tools and scripts to duplicate and route traffic, ensuring consistency and reliability in the shadow testing process.
3. Isolate the Shadow Environment: Ensure that the shadow environment is isolated from the production environment to prevent any unintended interactions or data corruption.
4. Focus on Key Metrics: Identify and monitor key performance indicators (KPIs) and metrics that are critical to your application's performance and reliability. This includes response times, error rates, and resource utilization.
5. Gradual Rollout: Consider gradually increasing the amount of traffic routed to the shadow environment to identify and address issues incrementally.
6. Iterate and Improve: Regularly review and refine your shadow testing processes based on feedback and insights gained from previous tests. This continuous improvement approach helps in enhancing the effectiveness of shadow testing over time.
7. Collaborate with Stakeholders: Involve key stakeholders, including developers, testers, and operations teams, in the shadow testing process. Collaboration ensures that all perspectives are considered and that potential issues are addressed comprehensively.
## Challenges and Considerations
1. Resource Intensive: Shadow testing can be resource-intensive, requiring duplicate infrastructure and additional monitoring tools. Organizations need to weigh the costs against the benefits.
2. Data Privacy and Security: Ensure that sensitive data is handled securely during shadow testing to prevent any breaches or privacy violations.
3. False Positives: Differences in non-critical aspects between the production and shadow environments can lead to false positives. It's important to distinguish between significant issues and minor discrepancies.
4. Complexity in Setup: Setting up and maintaining a shadow testing environment can be complex, especially for large and intricate systems. Proper planning and coordination are essential.
## Conclusion
Shadow testing is a powerful technique that enables organizations to validate new code changes under real-world conditions without impacting end-users. By providing a realistic assessment of the new code's behavior and performance, shadow testing helps in mitigating risks, improving reliability, and ensuring a seamless user experience. While it can be resource-intensive and complex to implement, the benefits of early bug detection, performance validation, and risk mitigation make shadow testing a valuable addition to the software deployment process.
| keploy |
1,888,107 | 30 days of AWS - Part 3: AWS Well-Architected Framework | Definition To put it simply, the AWS well-architected framework is a collection of best... | 27,709 | 2024-07-02T18:56:01 | https://dev.to/achenchi/30-days-of-aws-part-3-aws-well-architected-framework-8c0 | beginners, aws, wellarchitectedframework, learning |
## Definition
To put it simply, the AWS well-architected framework is a **collection of best practices and guidelines** for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud.
It is built upon 6 pillars. Namely:
- Security
- Cost optimization
- Operational excellence
- Reliability
- Efficiency
- Sustainability
Acronym to remember it by: S-C-O-R-E-S
## Operational Excellence
**Focus** - Run and monitor systems to deliver business value. Continually improve and support processes and procedures.
**Key Topics**
- Automating changes
- Responding to events
- Defining standards to maintain daily operations
### Design Principles
- **Perform operations as code**- Define the entire workload as code and update it with code.
- **Make frequent, small, reversible changes**- Design workloads that can be updated regularly. Make provision for reversible changes in small increments.
- **Refine operations procedures frequently**- Look for opportunities to improve operations procedures.
- **Anticipate failure**- Identify potential failure sources so they can be removed or mitigated.
- **Learn from all operational failures**-Drive improvement through lessons learnt from all operational events and failures.
## Security
**Focus**- Protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
**Key topics**
- Protecting confidentiality and integrity of data
- Identifying and managing who can do what
- Protecting systems
- Establishing controls to detect security events
### Design Principles
- **Implement a strong identity foundation**- Make use of the principle of least privilege. Enforce separation of duties with appropriate authorization. Centralize privilege management. Reduce or eliminate the use of long-term credentials.
- **Enable traceability**- Monitor, alert, and audit actions and changes to your environment in real time. Integrate logs and metrics to automatically take action.
- **Apply security at all layers**- Apply defense in depth and apply security controls to all layers of your architecture.
- **Automate security best practices**- Automate security mechanisms to improve your ability to securely scale more rapidly and cost-effectively.
- **Protect data in transit and at rest**- Classify your data into sensitivity levels and use mechanisms such as tokenization, encryption, and access control.
- **Keep people away from data**- Create mechanisms and tools to reduce or eliminate direct data access.
- **Prepare for security events**- Run incident response management simulations and use automation tools to increase your detection, investigation, and recovery speed.
## Reliability Pillar
**Focus**- Ensure a workload performs its intended functionality correctly and consistently when it's expected to.
**Key topics**
- Recovery planning
- Handling change
- Designing distributed systems
### Design principles
- **Stop guessing capacity**- Monitor demand and system usage, and automate the addition or removal of resources.
- **Manage change in automation**- Use automation to make changes to infrastructure.
- **Scale horizontally to increase aggregate workload availability**- Replace one large resource with multiple smaller resources and distribute requests across these resources.
- **Automatically recover from failure**- Monitor systems for key performance indicators and configure your systems to trigger an automated recovery in case of a breach.
- **Test recovery procedures**- Test how your systems fail and validate your recovery procedures.
## Performance Efficiency pillar
**Focus**- Use IT and computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes.
**Key topics**
- Selecting the right resource types and sizes based on workload requirements
- Monitoring performance
- Making informed decisions to maintain efficiency as business needs evolve.
### Design Principles
- **Go global in minutes**- Deploy systems in multiple regions to reduce latency and enhance customer experience at minimal cost.
- **Experiment more often**- Perform comparative testing of different types of service configurations.
- **Use serverless architectures**- Serverless architectures remove the operational burden of running and maintaining servers.
- **Democratize advanced technologies**- Consume technologies as a service. This enables teams to focus on product development instead of resource provisioning and management.
- **Consider mechanical sympathy**- Use the technology approach that aligns best to what you are trying to achieve.
## Cost optimization pillar
**Focus**- Avoid unnecessary costs
**Key topics**
- Understanding and controlling where money is being spent
- Selecting the most appropriate and right number of resource types
- Analysing spending over time
- Scaling to meet business needs without overspending
### Design principles
- **Implement cloud financial management**- Build capability through knowledge building, programs, resources, and processes to become a cost-efficient organization.
- **Adopt a consumption model**- Pay only for the computing resources that you require.
- **Measure overall efficiency**- Measure the business output of the workload and costs that are associated with delivering it. Use this measure to know the gains that you make from increasing output and reducing costs.
- **Stop spending money on undifferentiated heavy lifting**- Focus on your customers and business projects instead of the IT infrastructure such as racking, stacking, and powering services.
- **Analyse and attribute spending**-
| achenchi |
1,909,295 | Case Study: Computing Factorials | A recursive method is one that invokes itself. Many mathematical functions are defined using... | 0 | 2024-07-02T18:55:51 | https://dev.to/paulike/case-study-computing-factorials-3jc2 | java, programming, learning, beginners |
A recursive method is one that invokes itself. Many mathematical functions are defined using recursion. Let's begin with a simple example. The factorial of a number **n** can be recursively defined as follows:
```
0! = 1;
n! = n × (n - 1)!; n > 0
```
How do you find **n!** for a given **n**? To find **1!** is easy, because you know that **0!** is **1**, and **1!** is **1 × 0!**. Assuming that you know **(n - 1)!**, you can obtain **n!** immediately by using **n × (n - 1)!**. Thus, the problem of computing **n!** is reduced to computing **(n - 1)!**. When computing **(n - 1)!**, you can apply the same idea recursively until **n** is reduced to **0**.
Let **factorial(n)** be the method for computing **n!**. If you call the method with **n = 0**, it immediately returns the result. The method knows how to solve the simplest case, which is referred to as the base case or the _stopping condition_. If you call the method with **n > 0**, it reduces the problem into a subproblem for computing the factorial of **n - 1**. The _subproblem_ is essentially the same as the original problem, but it is simpler or smaller. Because the subproblem has the same property as the original problem, you can call the method with a different argument, which is referred to as a _recursive call_.
The recursive algorithm for computing **factorial(n)** can be simply described as follows:
```java
if (n == 0)
  return 1;
else
  return n * factorial(n - 1);
```
A recursive call can result in many more recursive calls, because the method keeps on dividing a subproblem into new subproblems. For a recursive method to terminate, the problem must eventually be reduced to a stopping case, at which point the method returns a result to its caller. The caller then performs a computation and returns the result to its own caller. This process continues until the result is passed back to the original caller. The original problem can now be solved by multiplying n by the result of factorial(n - 1).
The code below gives a complete program that prompts the user to enter a nonnegative integer and displays the factorial for the number.

The **factorial** method (lines 17–22) is essentially a direct translation of the recursive mathematical definition for the factorial into Java code. The call to **factorial** is recursive because it calls itself. The parameter passed to **factorial** is decremented until it reaches the base case of **0**.
You see how to write a recursive method. How does recursion work behind the scenes? Figure below illustrates the execution of the recursive calls, starting with **n = 4**.

The use of stack space for recursive calls is shown in Figure below.

It is simpler and more efficient to implement the **factorial** method using a loop. However, we use the recursive **factorial** method here to demonstrate the concept of recursion. Later in this chapter, we will present some problems that are inherently recursive and are difficult to solve without using recursion.
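For comparison, here are the two approaches side by side (sketched in Python rather than the chapter's Java, purely for brevity); the loop avoids the per-call stack frames illustrated in the figures above:

```python
def factorial_recursive(n: int) -> int:
    """Direct translation of the recursive definition: base case, then n * (n-1)!."""
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_iterative(n: int) -> int:
    """Loop version: accumulate 2 * 3 * ... * n in a single frame."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_recursive(4), factorial_iterative(4))  # → 24 24
```

Both compute the same values; the iterative form simply trades the elegance of the mathematical definition for constant stack usage.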
If recursion does not reduce the problem in a manner that allows it to eventually converge into the base case or a base case is not specified, infinite recursion can occur. For example, suppose you mistakenly write the **factorial** method as follows:
```java
public static long factorial(int n) {
  return n * factorial(n - 1);
}
```
The method runs infinitely and causes a **StackOverflowError**.
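The same mistake can be reproduced in Python, where the runtime raises a `RecursionError` (Python's analogue of Java's `StackOverflowError`); restoring the base case fixes it. This is an illustrative sketch, not code from the chapter:

```python
def bad_factorial(n):
    # missing base case: every call spawns another call, so it never stops
    return n * bad_factorial(n - 1)

def safe_factorial(n):
    if n < 0:
        raise ValueError("n must be nonnegative")
    if n == 0:  # base case reached: the recursion stops here
        return 1
    return n * safe_factorial(n - 1)

overflowed = False
try:
    bad_factorial(3)
except RecursionError:  # Python's analogue of StackOverflowError
    overflowed = True
```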
The example discussed in this section shows a recursive method that invokes itself. This is known as _direct recursion_. It is also possible to create _indirect recursion_. This occurs when method **A** invokes method **B**, which in turn invokes method **A**. There can even be several more methods involved in the recursion. For example, method **A** invokes method **B**, which invokes method **C**, which invokes method **A**. | paulike |
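A classic sketch of the indirect recursion just described (illustrative, in Python): `is_even` plays the role of method A and `is_odd` the role of method B, each delegating to the other until a base case stops the chain:

```python
def is_even(n: int) -> bool:
    # "method A": delegates to is_odd ...
    return True if n == 0 else is_odd(n - 1)

def is_odd(n: int) -> bool:
    # "method B": ... which calls is_even again, giving indirect recursion
    return False if n == 0 else is_even(n - 1)
```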
1,909,294 | CLR | The Common Language Runtime (CLR) manages the execution of programs built on .NET. The compiler ... | 0 | 2024-07-02T18:54:44 | https://dev.to/xojimurodov/clr-345a | The Common Language Runtime (CLR) manages the execution of programs built on .NET. The compiler translates the compiled code into machine code (0s and 1s).
Services provided by the CLR include memory management, error handling, security, and more.

The CLR is the core component of .NET. It manages code and helps simplify the development process by providing a variety of services. Essentially, it is responsible for managing the execution of .NET programs regardless of which .NET programming language is used. Code that runs under the Common Language Runtime is called managed code. In other words, we can say that the CLR provides a managed runtime environment for .NET.
| xojimurodov | |
1,909,293 | Top DevOps Services to Elevate Your Business Performance | Top DevOps Services to Elevate Your Business Performance The business world is a race... | 0 | 2024-07-02T18:54:34 | https://dev.to/marufhossain/top-devops-services-to-elevate-your-business-performance-2ehg | ## Top DevOps Services to Elevate Your Business Performance
The business world is a race track these days. Customers expect new features and updates lightning fast, and businesses need to adapt quickly to stay ahead. That's where DevOps comes in! It's a powerful approach that breaks down walls between development and operations teams, allowing for faster software delivery and higher quality. But implementing DevOps can be tricky. That's where professional DevOps services come to the rescue! These services can help organizations of all sizes unlock the full potential of DevOps and achieve significant business growth.
**Core DevOps Services: Building a Strong Foundation**
Let's dive into some key DevOps services that can supercharge your business performance:
* **Continuous Integration and Delivery (CI/CD):** Imagine a smooth, automated pipeline for your software development. That's CI/CD in action! It automates tasks like building, testing, and deploying code changes frequently. This means fewer errors, faster releases, and catching problems early. With CI/CD, you can get new features and updates to your customers faster, giving you a competitive edge in the market.
* **Infrastructure as Code (IaC):** Think of infrastructure (servers, networks) as the building blocks of your applications. Traditionally, setting up infrastructure could be a manual and error-prone process. IaC changes the game by treating infrastructure like code. This allows you to automate provisioning and management, ensuring consistency, scalability, and fewer human errors. With IaC, you can easily spin up new environments for testing or development, reducing bottlenecks and speeding up your development process.
* **Containerization and Orchestration:** Imagine tiny, self-contained packages that hold your application and all its dependencies. These are containers! Containerization allows you to package your application once and run it consistently on any server. But managing many containers can get complex. That's where container orchestration comes in. Container orchestration tools like Kubernetes automate the deployment, scaling, and management of containerized applications at scale. This allows you to focus on building great features, not infrastructure headaches.
* **Monitoring and Logging:** Just like a car needs regular checkups, your applications need monitoring. DevOps services provide tools for continuous monitoring of your applications' performance and health. These tools can identify potential issues before they become problems, allowing for faster resolution and a smoother user experience. Additionally, log management tools capture and analyze application logs, helping you troubleshoot issues and understand how your applications are behaving.
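As a toy illustration of the CI/CD idea described above (stages run in order, and a failed gate stops the pipeline before deploy), consider this sketch; it is not tied to any particular CI product and the stage names are made up:

```python
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    """Run CI/CD-style stages in order; stop at the first failure."""
    log = []
    for name, stage in stages:
        ok = stage()
        log.append(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            break  # later stages (e.g. deploy) never run after a failed gate
    return log

log = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),   # a failing test gate
    ("deploy", lambda: True),  # never reached
])
```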
**Advanced DevOps Services: Taking Your Business to the Next Level**
As your DevOps journey progresses, you can explore advanced services for even greater agility:
* **Security as Code (SecOps):** Security shouldn't be an afterthought. SecOps integrates security practices throughout the entire software development lifecycle. This means baking security checks into your CI/CD pipeline and proactively identifying and addressing vulnerabilities. With SecOps, you can release secure applications faster and reduce the risk of breaches, protecting your business and your customers' data.
* **Cloud-Native Development:** The cloud offers a flexible and scalable platform for building applications. Cloud-native development focuses on building applications specifically for the cloud, taking advantage of its unique features. This allows you to develop applications that are more agile, scalable, and cost-efficient. By embracing cloud-native development, you can leverage the power of the cloud to accelerate your business innovation and stay ahead of the curve.
* **Performance Optimization:** Even the most amazing features fall flat if your application is slow. DevOps services can help you identify performance bottlenecks and optimize your application for a smooth user experience. This might involve optimizing code, database queries, or server configurations. By keeping your applications running smoothly, you can ensure happy and engaged users who keep coming back for more.
* **DevOps Consulting:** Need expert guidance on your DevOps journey? A [DevOps consulting](https://www.clickittech.com/devops-consulting/?utm_source=backlinks&utm_medium=referral) firm can assess your current development practices, identify areas for improvement, and develop a customized DevOps strategy. They can also help you implement DevOps tools and train your team on best practices. With DevOps consulting, you can gain valuable expertise and accelerate your DevOps adoption process, getting the most out of this powerful approach.
**Choosing the Right DevOps Services for You**
The best DevOps services for your business depend on your specific goals, infrastructure, and skill sets. Here are some tips for choosing the right services:
* **Identify your pain points:** What are the biggest bottlenecks slowing down your development process?
* **Evaluate your team's skills:** Do you have the in-house expertise to implement DevOps effectively?
* **Consider your budget:** Different DevOps services come with varying costs.
**The Benefits of Investing in DevOps Services**
By leveraging DevOps services, you can unlock a range of benefits for your business:
* **Faster Time to Market:** Get new features and updates to customers quicker, giving you a competitive edge.
* **Improved Software Quality:** Reduce errors and deliver more reliable software, leading to happier customers and fewer headaches for your support team.
* **Increased Innovation and Agility:** Respond faster to changing market demands and experiment with new features more easily. This allows you to stay ahead of the competition and delight your customers with cutting-edge solutions.
* **Reduced Costs and Improved Efficiency:** Automate tasks and free up resources for more strategic work. This can lead to significant cost savings and a more efficient development process.
* **Enhanced Scalability and Security:** DevOps practices promote building applications that are scalable and can easily handle increased user traffic. Additionally, by integrating security throughout the development lifecycle, you can build more secure applications and protect your business from cyber threats.
**Real-World Examples: The Power of DevOps in Action**
Many companies across various industries have seen significant improvements by adopting DevOps services. For example, a retail company used DevOps to automate its infrastructure provisioning, reducing the time it took to set up new development environments from weeks to hours. This allowed them to experiment with new features and applications much faster, ultimately leading to a significant increase in online sales. In another example, a financial services company used DevOps to streamline its software delivery process, enabling them to release new features and bug fixes every week instead of every quarter. This improved customer satisfaction and helped them stay competitive in a rapidly evolving market.
**Conclusion: Embrace DevOps Services for a Brighter Future**
DevOps is more than just a set of tools; it's a cultural shift that can revolutionize the way your business develops and delivers software. By leveraging professional DevOps services, you can gain the expertise and guidance you need to unlock the full potential of DevOps and achieve significant business growth. Don't get left behind! Start exploring DevOps services today and watch your business soar to new heights!
| marufhossain | |
1,909,292 | Laws of UX 📜 | Day 13: Laws of UX 📜 👋 Hello, Dev Community! I'm Prince Chouhan, a B.Tech CSE student with a... | 0 | 2024-07-02T18:53:29 | https://dev.to/prince_chouhan/laws-of-ux-opn | ui, uidesign, ux, uxdesign | Day 13: Laws of UX 📜
👋 Hello, Dev Community!
I'm Prince Chouhan, a B.Tech CSE student with a passion for UI/UX design. Today, I'm exploring the important Laws of UX.
🗓️ Day 13 Topic: Laws of UX
📚 Today's Learning Highlights:
Concept Overview:
UX laws are principles guiding the design and development of user-centered digital products, websites, and interfaces. They aim to create products that are usable, efficient, effective, and satisfying for users.

Key Takeaways:
1️⃣ Hick's Law:
🔸 Definition: Decision time increases with the number of choices.
🔸 Application: Reduce options for faster decision-making.
🔸 Example: Limit navigation menu options.
2️⃣ Fitts' Law:
🔸 Definition: Time to move to a target depends on distance and size.
🔸 Application: Make important targets large and easy to reach.
🔸 Example: Large, easily accessible buttons.
3️⃣ Jakob's Law:
🔸 Definition: Users expect your site to work like others they know.
🔸 Application: Use familiar design patterns.
🔸 Example: Main navigation at the top or left side.
4️⃣ Gestalt Principles:
🔸 Definition: Principles describing how humans perceive visual information.
🔸 Key Principles: Proximity, Similarity, Continuity, Closure, Figure-Ground, Common Region.
🔸 Example: Group related elements by proximity.
5️⃣ Von Restorff Effect (Isolation Effect):
🔸 Definition: Items that stand out are more likely to be remembered.
🔸 Application: Use contrast and color to highlight important elements.
🔸 Example: Design CTA buttons with contrasting colors.
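Hick's and Fitts' laws both have simple, commonly cited quantitative forms: Hick's law T = a + b·log2(n + 1) and Fitts' law T = a + b·log2(2D/W), where a and b are empirically fitted constants. A small illustrative sketch (the parameter values are placeholders, not measured data):

```python
import math

def hicks_time(n_choices: int, a: float = 0.0, b: float = 1.0) -> float:
    """Hick's law: decision time grows with the log of the number of choices."""
    return a + b * math.log2(n_choices + 1)

def fitts_time(distance: float, width: float, a: float = 0.0, b: float = 1.0) -> float:
    """Fitts' law: movement time grows with distance and shrinks with target width."""
    return a + b * math.log2(2 * distance / width)
```

The trend matches the design advice: fewer menu options lower the Hick's-law time, and a wider button lowers the Fitts'-law time.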

Challenges:
🔸 Implementing multiple UX laws simultaneously can be complex.
🔸 Balancing between simplicity and functionality.
Solution:
🔹 Prioritize UX laws based on the project requirements and user needs.
🔹 Test and iterate to find the best balance.
Practical Application:
1. Reduce Choices: Limit the number of options in navigation menus.
2. Optimize Targets: Make frequently used buttons large and accessible.
3. Use Familiar Patterns: Design consistent with popular sites.
4. Apply Gestalt Principles: Group related elements and create visual hierarchy.
5. Highlight Key Elements: Use contrast and color to make important items stand out.
📢 Community Engagement:
How do you apply these UX laws in your design work? Share your insights!
💬 Quote of the Day:
"Good design is obvious. Great design is transparent." - Joe Sparano
🎉 Module Completion:
I've successfully completed Module 2 - UI Design Principles, covering:
- Layout
- Visual Hierarchy
- Visual Noise
- Iconography
- Typography
- Contrast
- Color Palette
- Spacing
- Grids
- Consistency
- Laws of UX
🔜 Next Module: Figma Academy
I will now explore Module 3, learning about Figma tools, techniques for creating and editing designs, and effective collaboration.
Thank you for following my UI/UX design journey! Stay tuned for more updates.
#UIUXDesign #DesignThinking #UserExperience #UIDesign #UXDesign #DesignPrinciples #WebDesign #GraphicDesign #InteractionDesign #DigitalDesign #ui #ux #figma
| prince_chouhan |
1,909,290 | Building a Modern Website with React, Tailwind CSS, and Vite | In today's fast-paced web development landscape, creating a modern, responsive, and visually... | 0 | 2024-07-02T18:49:39 | https://dev.to/syedahmedullah14/building-a-modern-website-with-react-tailwind-css-and-vite-4o21 | webdev, javascript, programming, react | In today's fast-paced web development landscape, creating a modern, responsive, and visually appealing website requires a combination of the right tools and best practices. Recently, I embarked on a journey to build such a website using React.js, Tailwind CSS, and Vite. The result is a stunning, feature-rich site that I am excited to share with you.
## Why Vite, React.js, and Tailwind CSS?
### Vite
Vite is a modern build tool that offers lightning-fast development times. Unlike traditional bundlers, Vite leverages native ES modules and provides instant server start, fast HMR, and optimized builds.
### React.js
React.js needs no introduction. It’s a powerful JavaScript library for building user interfaces, offering component-based architecture and a vast ecosystem that makes development efficient and scalable.
### Tailwind CSS
Tailwind CSS is a utility-first CSS framework that allows for rapid styling directly within your markup. It offers a highly customizable and responsive design system that scales effortlessly.
## Key Features of the Project
### Beautiful Sections
The website features several well-designed sections, including:
### Hero:
A captivating introduction with a call-to-action.
### Services:
Detailed information about the services offered.
### Features:
Highlights of key features and benefits.
### How to Use:
A guide on how to utilize the website or service.
### Roadmap:
Future plans and updates.
### Pricing:
Clear and concise pricing information.
### Footer and Header:
Consistent navigation and information across the site.
### Parallax Animations








To enhance user engagement, the site incorporates parallax animations triggered by mouse movements and scrolling. These animations add depth and interactivity, making the browsing experience more dynamic.
### Complex UI Geometry
Utilizing Tailwind CSS, the site showcases intricate shapes such as circular feature displays, grid lines, and side lines. These elements add a modern and sophisticated touch to the design.
### Latest UI Trends
The design incorporates modern UI trends, including bento grids, which organize content in an aesthetically pleasing and accessible manner.
### Cool Gradients
Stylish gradients are used to enhance the visual appeal of cards, buttons, and other UI elements. Tailwind CSS makes it easy to apply and customize these gradients.
### Responsive Design
Ensuring that the website functions seamlessly across all devices is crucial. The responsive design guarantees that users have a consistent and optimized experience, whether on a desktop, tablet, or smartphone.
### Emphasis on Code Architecture and Reusability
Throughout the development process, a strong emphasis was placed on code architecture and reusability. Components were designed to be modular and reusable, making future updates and maintenance more manageable.
### Quick Start
Follow these steps to set up the project locally on your machine.
### Prerequisites
Make sure you have the following installed on your machine:
- Git
- Node.js
- npm (Node Package Manager)
### Cloning the Repository
```
git clone https://github.com/syedahmedullah14/brainwave.git
cd brainwave
```
### Installation
Install the project dependencies using npm:
`npm install`
### Running the Project
Start the development server:
`npm run dev`
### Live Demo
Check out the live version of the project [here](https://jaser-brainwave.netlify.app/).
### A Special Thanks
I would like to extend my heartfelt gratitude to Adrian for his invaluable guidance throughout this project. His insights and advice were instrumental in bringing this website to life.
## Conclusion
Building this modern website with Vite, React.js, and Tailwind CSS has been an enriching experience. The combination of these tools allows for a fast, efficient, and enjoyable development process, resulting in a high-quality, visually appealing, and responsive website. I hope this journey inspires you to explore these technologies and create amazing projects of your own.
Feel free to check out the project on [GitHub](https://github.com/syedahmedullah14/brainwave) and leave your thoughts!
| syedahmedullah14 |
1,909,287 | I lost $93 while testing the newly released Open AI vision | Introduction Hey everyone! It's Saad Fazal here, and today I want to talk about something... | 0 | 2024-07-02T18:38:17 | https://dev.to/mrsaadfazal/i-lost-93-while-testing-the-newly-released-open-ai-vision-1k20 | github, gitlab, cybersecurity | ## Introduction
Hey everyone! It's Saad Fazal here, and today I want to talk about something that I've been noticing more and more on GitHub: the alarming lack of security awareness among some developers. As much as I love the collaborative spirit of open-source, it's crucial that we all take security seriously.
I was messing around on GitHub, just doing some casual searches, and guess what I found? Yep, OpenAI API keys scattered around in public repos like confetti at a New Year's party. If you're thinking, "Oh no, not me!"—think again. Here's the search query I used:
```
(path:*.xml OR path:*.json OR path:*.properties OR path:*.sql OR path:*.txt OR path:*.log OR path:*.tmp OR path:*.backup OR path:*.bak OR path:*.enc OR path:*.yml OR path:*.yaml OR path:*.toml OR path:*.ini OR path:*.config OR path:*.conf OR path:*.cfg OR path:*.env OR path:*.envrc OR path:*.prod OR path:*.secret OR path:*.private OR path:*.key) AND (access_key OR secret_key OR access_token OR api_key OR apikey OR api_secret OR apiSecret OR app_secret OR application_key OR app_key OR appkey OR auth_token OR authsecret) AND ("sk-" AND (openai OR gpt))
```

## Why This is a Big Deal
### Financial Risks
Exposing your API keys is like leaving your wallet on the sidewalk. Sure, someone might just ignore it, but chances are, someone’s going to pick it up and go on a spending spree with your hard-earned cash. And trust me, those OpenAI bills can rack up fast!
#### My Funny Mishap with OpenAI Vision
So, I was once testing the newly released OpenAI Vision using the API, and in a classic "whoops" moment, I accidentally put my Python code in a loop. It kept taking screenshots of my desktop and sending POST requests to the OpenAI Vision API. Within just 5 minutes, I was charged $93. Talk about an expensive lesson in debugging!
### Security Breaches
Leaving your keys out in the open can lead to unauthorized access to your systems. It’s not just about the money—you could be giving hackers the keys to your kingdom. They can wreak havoc, steal data, or worse.
### Professional Reputation
Imagine a potential employer or client stumbling upon your exposed keys. Awkward, right? It doesn’t exactly scream “I’m a responsible developer.” Keeping your credentials secure is a must for maintaining your professional image.
## Steps to Secure Your API Keys
### Use Environment Variables
Store your keys in environment variables instead of hardcoding them in your files. This keeps them out of your source code and reduces the risk of accidental exposure.
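A minimal sketch of the idea in Python (the variable name `OPENAI_API_KEY` and the helper function are illustrative):

```python
import os

def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Read a secret from the environment instead of hardcoding it in source."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; export it in your shell first")
    return key

# Demonstration only: in real use the key comes from your shell or a .env loader,
# never from the source file itself.
os.environ["OPENAI_API_KEY"] = "sk-example"
print(load_api_key()[:3])  # -> sk-
```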
### Git Ignore
Make sure your `.gitignore` file is properly configured to exclude sensitive files like `.env`. This prevents them from being committed to your repository.
### Secrets Management
Use secrets management tools provided by cloud providers or services like AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault. These tools help you manage and access your secrets securely.
### Regular Audits
Regularly audit your repositories for accidental exposures. Use tools like TruffleHog, GitGuardian, or similar to scan your codebase for sensitive information.
### Private Repos Aren't Safe Either
Just because a repository is private doesn't mean it's safe to store your credentials there. If your account gets compromised, so do all your private repos. Treat them with the same level of security as you would a public repo.
## Conclusion
Let's all take a moment to reflect on our security practices. It's easy to overlook these details, but the implications can be severe. By taking proactive steps, we can protect our projects, our finances, and our reputations.
I hope this blog post helps raise awareness about the importance of security on GitHub. Let's work together to make our projects safer and more secure. If you have any thoughts or additional tips, feel free to share them!
Stay secure, stay vigilant, and happy coding! | mrsaadfazal |
1,909,286 | Recursion | Recursion is a technique that leads to elegant solutions to problems that are difficult to program... | 0 | 2024-07-02T18:38:15 | https://dev.to/paulike/recursion-d1a | java, programming, learning, beginners | Recursion is a technique that leads to elegant solutions to problems that are difficult to program using simple loops. Suppose you want to find all the files under a directory that contain a particular word. How do you solve this problem? There are several ways to do so. An intuitive and effective solution is to use recursion by searching the files in the subdirectories recursively.
H-trees, depicted in Figure below, are used in a very large-scale integration (VLSI) design as a clock distribution network for routing timing signals to all parts of a chip with equal propagation delays. How do you write a program to display H-trees? A good approach is to use recursion.

To use recursion is to program using _recursive methods_—that is, to use methods that invoke themselves. Recursion is a useful programming technique. In some cases, it enables you to develop a natural, straightforward, simple solution to an otherwise difficult problem. | paulike |
1,909,285 | Apps Script: the JS environment for the Google Ecosystem | Contents Discovering Apps Script Learning with Apps Script Exploring the... | 0 | 2024-07-02T18:36:15 | https://dev.to/fabianoraiser/apps-script-o-ambiente-js-para-o-ecossistema-google-2ihb | webdev, javascript, beginners, programming | ## Contents
* [Discovering Apps Script](#descobrindo)
* [Learning with Apps Script](#aprendendo)
* [Exploring the Features](#recursos)
* [Challenges and Improvements](#desafios)
* [Final Thoughts](#conclusao)
You have probably worked, or still work, with tools from the Google Ecosystem. Your Drive is likely full of Docs files, Sheets and more Sheets holding data, the Forms you collected that data with, .ppt presentations with charts of that data, and so on...
Google Apps Script is a powerful scripting platform that lets you build custom extensions and automations on top of Google products. It is ideal for anyone who works within the Google Ecosystem and wants flexible solutions that integrate with the company's other products.
### Discovering Apps Script <a name="descobrindo"></a>
In one of my recent projects, I had the opportunity to work with large volumes of data spread across many files. Looking to streamline the process, I searched for tools that would make manipulating that data easier. As a front-end developer, I considered building a landing page to automate the operations. However, when I looked into Google's APIs, I found that a credit card was required for identity verification and for billing overages.
In a public-sector environment with complex bureaucracy, that was not a viable option. That is when I found Google Apps Script, the perfect alternative for my needs.
### Learning with Apps Script <a name="aprendendo"></a>

Even though the platform's documentation and tutorials, as well as the Google Workspace Developers channel, are in English, I found my way around and mastered the tool. The learning curve was challenging, with plenty of back and forth.
Another thing that bothered me was the lack of an autocomplete feature in the code editor, similar to Emmet in VS Code. That limitation made writing code slower and more laborious, and at times stressful.
### Exploring the Features of Apps Script <a name="recursos"></a>

Apps Script has several advantages that make it a valuable tool for developers:
* **Native integration with Google APIs:** Provides easy access to, and manipulation of, data from other Google products such as Drive, Sheets, and Gmail.
* **Native JavaScript-based environment:** Familiar and easy to learn for anyone who already knows JavaScript, as I did.
* **Custom extensions and automations:** Lets you automate repetitive tasks and build solutions tailored to your needs, such as checking for duplicated data or insertions.
* **Support for genAI and chatbots:** Opens up new possibilities for innovative solutions, such as chatbots and interactive interfaces, which I still intend to study further.
### Challenges and Improvements <a name="desafios"></a>

Despite its strengths, Apps Script still has some shortcomings that could be improved and that, in my view, should be addressed immediately:
* **Lack of tutorials in Portuguese:** Makes learning harder for those who don't speak English, which I see as a crucial point, since many Google users lack that knowledge.
* **Documentation with unlocalized terms:** Can cause confusion and hinder understanding; fortunately I already had experience and caught the errors.
* **Code editor without autocomplete:** Makes writing code slower and the workflow tedious.
### Final Thoughts <a name="conclusao"></a>
Google Apps Script is a powerful and versatile tool for anyone working with the Google Ecosystem. Despite some challenges that still need to be overcome, such as the lack of tutorials in Portuguese and the incomplete documentation, Apps Script has great potential for automating tasks, building innovative solutions, and streamlining work with Google products. | fabianoraiser |
1,909,284 | Building a Robust Web Hosting Solution with AWS Cloud | Overview Utilizing Amazon Web Services (AWS) can offer unmatched scalability, security,... | 0 | 2024-07-02T18:34:50 | https://dev.to/oloko0201/building-a-robust-web-hosting-solution-with-aws-cloud-1jf8 | ## Overview
Utilizing Amazon Web Services (AWS) offers unmatched scalability, security, and performance in the ever-changing web hosting landscape. This article discusses how to combine several AWS services into a web hosting environment that keeps your applications reliable, secure, and highly available.
## Key Components of Our Architecture
### DNS Services with Amazon Route 53
Amazon Route 53 is your go-to for domain registration, DNS routing, and health checks. It simplifies domain management and ensures your users are directed to the nearest and healthiest endpoints.
### Edge Caching with Amazon CloudFront
Amazon CloudFront, a content delivery network (CDN), caches your content at edge locations worldwide. This reduces latency and improves the user experience by serving content quickly.
### Edge Security with AWS WAF
AWS WAF (Web Application Firewall) protects your applications from common web exploits and attacks. By setting custom rules, you can filter out malicious traffic and safeguard your content.
### Load Balancing with Elastic Load Balancing (ELB)
Elastic Load Balancing distributes incoming traffic across multiple EC2 instances in different Availability Zones, ensuring no single instance is overwhelmed. This enhances fault tolerance and availability.
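Conceptually, the balancer cycles incoming requests across the registered instances. A toy round-robin model in Python (illustrative only; a real ELB also factors in health checks, connection counts, and Availability Zones):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of how a load balancer spreads requests across instances."""
    def __init__(self, instances):
        self._pool = cycle(instances)  # endlessly iterate over the instance list

    def route(self):
        """Return the instance that should handle the next request."""
        return next(self._pool)

# Hypothetical instance IDs spread across two Availability Zones:
lb = RoundRobinBalancer(["i-az1-a", "i-az1-b", "i-az2-a"])
assignments = [lb.route() for _ in range(6)]
```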
### DDoS Protection with AWS Shield
AWS Shield provides automatic protection against DDoS attacks, ensuring your infrastructure remains available even during large-scale attacks.
### Firewalls with Security Groups
Security Groups act as virtual firewalls for your EC2 instances, controlling inbound and outbound traffic at the instance level. This provides a robust security layer to protect your applications.
### Caching with Amazon ElastiCache
Amazon ElastiCache improves your application's performance by caching frequently accessed data. Using Redis or Memcached, it reduces the load on your databases and speeds up data retrieval.
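The usual pattern with ElastiCache is cache-aside: check the cache first, fall back to the database on a miss, then populate the cache so the next read is fast. A toy sketch with a dict standing in for Redis/Memcached (illustrative, not the ElastiCache API):

```python
class CacheAside:
    """Toy cache-aside pattern: cache hit avoids the database entirely."""
    def __init__(self, loader):
        self._cache = {}
        self._loader = loader  # e.g. a function that runs an RDS query
        self.db_hits = 0       # count how often we had to go to the database

    def get(self, key):
        if key not in self._cache:
            self.db_hits += 1                    # cache miss: load from the DB
            self._cache[key] = self._loader(key)  # ... and populate the cache
        return self._cache[key]

# Hypothetical loader standing in for a database query:
store = CacheAside(loader=lambda k: f"row-for-{k}")
store.get("user:1")
store.get("user:1")  # second read is served from the cache
```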
### Managed Database with Amazon RDS
Amazon RDS offers managed relational databases with high availability and scalability. It supports multiple database engines and automatically handles backups, patching, and failover.
### Static Storage and Backups with Amazon S3
Amazon S3 provides scalable object storage for static assets and backups. It's ideal for storing images, videos, and backups with high durability and availability.
## Detailed Architecture Overview
1. DNS Resolution and Edge Services
Amazon Route 53: Start by creating a hosted zone for your domain. Route 53’s routing policies like simple, weighted, or geolocation routing ensure traffic is efficiently directed. Set up health checks to monitor endpoint health.
Amazon CloudFront: Create a CloudFront distribution that points to your ELB as the origin. This setup ensures low latency and high-speed content delivery. Enable HTTPS for secure communication.
AWS WAF: Define rules to filter out malicious traffic. Use managed rule groups or custom rules for specific needs, such as protection against SQL injection or cross-site scripting (XSS).
2. Load Balancing and Auto Scaling
Elastic Load Balancer (ELB): Set up an Application Load Balancer (ALB) for HTTP/HTTPS traffic, and a Network Load Balancer (NLB) if you need to handle TCP traffic at scale. Enable cross-zone load balancing for even distribution.
AWS Shield: AWS Shield Standard is automatically included to protect against DDoS attacks. For more comprehensive protection, consider AWS Shield Advanced.
3. Compute Layer
Amazon EC2 Instances: Deploy your application on EC2 instances across multiple Availability Zones for redundancy.
Auto Scaling Groups: Ensure your application scales automatically based on demand by setting up Auto Scaling groups.
4. Security
Security Groups: Configure inbound and outbound rules to allow traffic from trusted sources. Follow the principle of least privilege by only opening necessary ports.
Network ACLs: Use Network ACLs for an additional layer of security at the subnet level.
5. Caching Layer
Amazon ElastiCache: Set up Redis or Memcached clusters to cache frequently accessed data. Enable replication and automatic failover for high availability.
6. Database Layer
Amazon RDS: Choose a database engine like MySQL, PostgreSQL, or others supported by RDS. Enable Multi-AZ deployment for automatic failover and create read replicas for read-heavy workloads.
7. Storage and Backup
Amazon S3: Create S3 buckets for static assets and backups. Use lifecycle policies to transition objects to cheaper storage classes or delete them after a certain period. Enable versioning and cross-region replication for added durability.
## Implementing the Architecture
The following figure provides another look at that classic web application architecture and how it can leverage the AWS Cloud computing infrastructure.

Here's a high-level overview of the steps to set up this architecture:
1. Set Up Route 53 for DNS Management:
```
aws route53 create-hosted-zone --name example.com --caller-reference unique-string
```
2. Create a CloudFront Distribution:
```
{
"CallerReference": "unique-string",
"Aliases": {
"Quantity": 1,
"Items": ["example.com"]
},
"DefaultRootObject": "index.html",
"Origins": {
"Quantity": 1,
"Items": [
{
"Id": "origin1",
"DomainName": "my-load-balancer-1234567890.us-west-2.elb.amazonaws.com",
"CustomOriginConfig": {
"HTTPPort": 80,
"HTTPSPort": 443,
"OriginProtocolPolicy": "http-only"
}
}
]
},
"DefaultCacheBehavior": {
"TargetOriginId": "origin1",
"ViewerProtocolPolicy": "redirect-to-https",
"AllowedMethods": {
"Quantity": 7,
"Items": ["HEAD", "GET", "POST", "PUT", "PATCH", "OPTIONS", "DELETE"],
"CachedMethods": {
"Quantity": 2,
"Items": ["HEAD", "GET"]
}
},
"Compress": true,
"ForwardedValues": {
"QueryString": false,
"Cookies": {
"Forward": "none"
}
},
"MinTTL": 0,
"DefaultTTL": 86400,
"MaxTTL": 31536000
},
"Enabled": true
}
```
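Once the distribution exists, the domain created in step 1 is typically pointed at it with a Route 53 alias record. A hypothetical change batch for this (the distribution domain name here is illustrative; `Z2FDTNDATAQYW2` is the fixed hosted zone ID CloudFront uses for alias targets) could look like:
```
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```
This would be applied with `aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://alias.json`, using the hosted zone ID returned in step 1.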
3. Configure AWS WAF:
```
aws wafv2 create-web-acl --name myWebACL --scope CLOUDFRONT --default-action Allow={} --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=myWebACL --region us-east-1
```
4. Set Up the Application Load Balancer:
```
aws elbv2 create-load-balancer --name my-alb --subnets subnet-12345678 subnet-87654321
aws elbv2 create-target-group --name my-targets --protocol HTTP --port 80 --vpc-id vpc-12345678
aws elbv2 register-targets --target-group-arn arn:aws:elasticloadbalancing:region:123456789012:targetgroup/my-targets/abcdefg --targets Id=i-12345678 Id=i-87654321
```
5. Deploy EC2 Instances with Auto Scaling:
```
aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg --launch-configuration-name my-lc --min-size 1 --max-size 10 --desired-capacity 2 --vpc-zone-identifier "subnet-12345678,subnet-87654321"
```
6. Configure ElastiCache:
```
aws elasticache create-cache-cluster --cache-cluster-id my-cluster --engine redis --cache-node-type cache.m4.large --num-cache-nodes 1 --preferred-availability-zone us-west-2a
```
7. Set Up RDS:
```
aws rds create-db-instance --db-instance-identifier mydbinstance --db-instance-class db.m4.large --engine mysql --master-username admin --master-user-password password --allocated-storage 20 --multi-az
```
8. Create an S3 Bucket:
```
aws s3 mb s3://my-bucket
```
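The lifecycle policies mentioned in the storage section can be expressed as a JSON configuration. A hypothetical rule (the prefix, day counts, and storage classes are illustrative) that moves backups to cheaper storage and eventually expires them might look like:
```
{
  "Rules": [
    {
      "ID": "archive-then-expire",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```
Saved as `lifecycle.json`, this can be attached to the bucket with `aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json`.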
In summary
By combining these AWS services, you can build a web hosting environment that is highly available, secure, and scalable. This architecture keeps your application running efficiently even under heavy load and protects it from common threats. Start deploying these components now and take your web hosting on AWS to the next level.
*Author: oloko0201*

---

**A Deep Dive into Two-Dimensional Arrays: Techniques and Use Cases** (published 2024-07-02, tags: programming, beginners, learning, machinelearning)
https://dev.to/m__mdy__m/a-deep-dive-into-two-dimensional-arrays-techniques-and-use-cases-44f3
A two-dimensional array is essentially an array of arrays, providing a way to store data in a matrix-like structure. This concept extends the idea of a one-dimensional array, where data is stored in a linear order, to two dimensions, allowing data to be organized in rows and columns. Two-dimensional arrays are particularly useful for representing data that naturally forms a grid, such as digital images or game boards.
#### One-Dimensional vs. Two-Dimensional Arrays
**One-Dimensional Array:**
A one-dimensional array is a list of elements stored in a single row. Each element in this list can be accessed using a single index. For example:
```java
int[] myArray = {0, 1, 2, 3};
```
Here, `myArray` is a simple array containing four integers. Each element is accessed using its index, starting from 0. To access the third element (which has the value 2), you use:
```java
myArray[2]; // Accesses the third element in the array
```
**Two-Dimensional Array:**
A two-dimensional array, on the other hand, is an array of arrays. Each element in a two-dimensional array is itself an array that can be accessed using two indices: one for the row and one for the column. For example:
```java
int[][] myArray = {
{0, 1, 2, 3},
{3, 2, 1, 0},
{3, 5, 6, 1},
{3, 8, 3, 4}
};
```
This array can be visualized as a grid:
```
0 1 2 3
3 2 1 0
3 5 6 1
3 8 3 4
```
To access the element in the third row and second column (which has the value 5), you use:
```java
myArray[2][1]; // Accesses the element in the third row and second column
```
#### Initializing Two-Dimensional Arrays
Two-dimensional arrays can be initialized in various ways. One common method is to use nested loops to assign values to each element:
```java
int rows = 4;
int cols = 4;
int[][] myArray = new int[rows][cols];
for (int i = 0; i < rows; i++) {
for (int j = 0; j < cols; j++) {
myArray[i][j] = 0;
}
}
```
This creates a 4x4 array and initializes all elements to 0. Another way to initialize a two-dimensional array is to directly specify the values, as shown in the previous example.
#### Using Two-Dimensional Arrays in Practice
Two-dimensional arrays are extremely useful in applications where data needs to be represented in a grid. For example, a grayscale image can be stored as a two-dimensional array, where each element represents the intensity of a pixel:
```java
int[][] image = {
{236, 189, 189, 0},
{236, 80, 189, 189},
{236, 0, 189, 80},
{236, 189, 189, 80}
};
```
Here, each number represents the brightness of a pixel, with 0 being black and 255 being white.
#### Iterating Through Two-Dimensional Arrays
To iterate through every element of a two-dimensional array, nested loops are used. This allows you to access and manipulate each element by its row and column indices:
```java
int rows = 10;
int cols = 10;
int[][] myArray = new int[rows][cols];
for (int i = 0; i < rows; i++) {
for (int j = 0; j < cols; j++) {
myArray[i][j] = i + j;
}
}
```
This example initializes each element to the sum of its row and column indices.
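Outside the Processing sketches used elsewhere in this article, the same nested-loop traversal can be shown as a small, self-contained plain-Java program. The class and method names below are our own, not from the original article:

```java
public class RowSums {
    // Returns an array whose i-th entry is the sum of row i of the matrix.
    static int[] rowSums(int[][] matrix) {
        int[] sums = new int[matrix.length];
        for (int i = 0; i < matrix.length; i++) {
            for (int j = 0; j < matrix[i].length; j++) {
                sums[i] += matrix[i][j];
            }
        }
        return sums;
    }

    public static void main(String[] args) {
        int[][] grid = {
            {0, 1, 2, 3},
            {3, 2, 1, 0},
            {3, 5, 6, 1}
        };
        // Prints 6, 6 and 15, one per line
        for (int s : rowSums(grid)) {
            System.out.println(s);
        }
    }
}
```

The outer loop walks the rows and the inner loop walks the columns of each row, so ragged (unevenly sized) rows are handled correctly by reading `matrix[i].length` per row.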
#### Practical Example: Drawing a Grayscale Image
A program can be written to create and display a grayscale image using a two-dimensional array. Each pixel's brightness is determined by the value stored in the array:
```java
size(200, 200);
int cols = width;
int rows = height;
int[][] myArray = new int[cols][rows];
// Initialize the array with random grayscale values
for (int i = 0; i < cols; i++) {
for (int j = 0; j < rows; j++) {
myArray[i][j] = int(random(255));
}
}
// Draw the image
for (int i = 0; i < cols; i++) {
for (int j = 0; j < rows; j++) {
stroke(myArray[i][j]);
point(i, j);
}
}
```
In this code, a 200x200 pixel canvas is created, and each pixel is assigned a random grayscale value. The second pair of nested loops then sets the color of each pixel accordingly.
#### Storing Objects in a Two-Dimensional Array
Two-dimensional arrays can also store objects, making them useful for creating grids of objects in visual programs. For example, consider a grid of `Cell` objects, where each cell's brightness oscillates over time:
```java
Cell[][] grid;
int cols = 10;
int rows = 10;
void setup() {
size(200, 200);
grid = new Cell[cols][rows];
for (int i = 0; i < cols; i++) {
for (int j = 0; j < rows; j++) {
grid[i][j] = new Cell(i*20, j*20, 20, 20, i + j);
}
}
}
void draw() {
background(0);
for (int i = 0; i < cols; i++) {
for (int j = 0; j < rows; j++) {
grid[i][j].oscillate();
grid[i][j].display();
}
}
}
class Cell {
float x, y, w, h, angle;
Cell(float tempX, float tempY, float tempW, float tempH, float tempAngle) {
x = tempX;
y = tempY;
w = tempW;
h = tempH;
angle = tempAngle;
}
void oscillate() {
angle += 0.02;
}
void display() {
stroke(255);
fill(127 + 127 * sin(angle));
rect(x, y, w, h);
}
}
```
This code creates a 10x10 grid of `Cell` objects. Each cell oscillates in brightness over time, creating a dynamic visual effect.
### Summary
Two-dimensional arrays are a powerful data structure that allow for the representation and manipulation of data in a matrix format. They extend the concept of one-dimensional arrays by adding an extra dimension, making them ideal for applications involving grids or matrices, such as images or game boards. Through the use of nested loops, elements in a two-dimensional array can be efficiently accessed and modified, enabling complex operations and visual representations.
#### Benefits of Two-Dimensional Arrays:
1. **Natural Grid Representation**:
- Perfect for data that is naturally organized in a grid, like images, game boards, or spreadsheets.
2. **Efficient Access and Modification**:
- Accessing and modifying elements using row and column indices is straightforward and efficient.
3. **Versatile Applications**:
- Useful in various fields, from computer graphics and digital image processing to mathematical computations and simulations.
#### Potential Pitfalls:
1. **Index Out of Bound Errors**:
- Care must be taken to avoid accessing elements outside the defined array boundaries, which can cause runtime errors.
2. **Memory Usage**:
- Two-dimensional arrays can consume significant memory, especially for large datasets, so it's important to consider memory limitations.
3. **Performance Considerations**:
- Operations on large two-dimensional arrays can be computationally intensive, so performance optimizations may be necessary for real-time applications.
#### Alternatives:
While two-dimensional arrays are highly effective, other data structures may be better suited for certain tasks:
- **Lists of Lists**:
- In languages like Python, lists of lists can provide more flexibility and dynamic sizing.
- **Sparse Matrices**:
- For large grids with mostly empty values, sparse matrix representations can save memory.
- **Custom Data Structures**:
- Depending on the specific application, custom data structures tailored to the problem may offer better performance or usability.
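To make the sparse-matrix alternative above concrete, here is a tiny hand-rolled sketch in plain Java. This is illustrative only, not a library API: only non-zero cells are stored in a hash map, so a mostly empty grid costs memory proportional to its non-zero entries rather than to rows × columns:

```java
import java.util.HashMap;
import java.util.Map;

public class SparseMatrix {
    // Maps a flattened (row, col) key to its non-zero value; absent keys read as 0.
    private final Map<Long, Integer> cells = new HashMap<>();
    private final int cols;

    SparseMatrix(int cols) { this.cols = cols; }

    private long key(int r, int c) { return (long) r * cols + c; }

    void set(int r, int c, int v) {
        if (v == 0) cells.remove(key(r, c));   // keep the map free of zeros
        else cells.put(key(r, c), v);
    }

    int get(int r, int c) {
        return cells.getOrDefault(key(r, c), 0);
    }
}
```

Flattening the two indices into a single `long` key avoids allocating a key object per cell; for very large or concurrent grids a real application would likely reach for a dedicated sparse-matrix library instead.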
### Conclusion
Understanding and utilizing two-dimensional arrays can significantly enhance your ability to handle complex data structures and develop more sophisticated programs. Whether you're working on simple grid-based games or complex image processing tasks, mastering two-dimensional arrays is an essential skill for any programmer.
## Deepen Your Algorithmic Journey: A World of Discovery Awaits
Excited to delve deeper into the world of non-linear array addressing and beyond? My GitHub repository, **[Algorithms & Data Structures](https://github.com/m-mdy-m/algorithms-data-structures)**, offers a treasure trove of algorithms and data structures for you to explore.
**Experiment, Practice, and Master:**
* **Dive into:** A diverse collection of algorithms and data structures awaits your exploration, providing ample opportunity to practice, solidify your knowledge, and refine your understanding.
* **Continuous Growth:** While some sections are actively under development as part of my ongoing learning journey (estimated completion: 2-3 years), the repository is constantly expanding with new content.
**Let's Build a Community of Learners:**
The quest for knowledge doesn't end with exploration! I actively encourage feedback and collaboration. Encountered a challenge? Have a suggestion for improvement? Eager to discuss algorithms and performance optimization? Reach out and let's connect!
* **Join the Conversation:**
* **Twitter:** [@m__mdy__m](https://twitter.com/m__mdy__m)
* **Telegram:** **Join my channel here: [https://t.me/medishn](https://t.me/medishn)** (Note: This is the preferred channel for the most up-to-date discussions)
* **GitHub:** [m-mdy-m](https://github.com/m-mdy-m)
**Together, let's build a vibrant learning community where we can share knowledge and push the boundaries of our understanding.**

*Author: m__mdy__m*
---

**Top 7 Featured DEV Posts of the Week** (published 2024-07-02, tag: top7)
https://dev.to/devteam/top-7-featured-dev-posts-of-the-week-2cfe

_Welcome to this week's Top 7, where the DEV editorial team handpicks their favorite posts from the previous week._
Congrats to all the authors that made it onto the list 👏
{% embed https://dev.to/samuelfaure/networking-is-easy-fun-and-probably-not-what-you-think-it-is-2ijc %}
@samuelfaure shares their experience getting to their dream job, and how to make networking part of daily life.
---
{% embed https://dev.to/neutrino2211/gecko-making-a-programming-language-is-hard-4g0a %}
@neutrino2211 reflects on the challenges and complexities they faced while creating a new programming language called Gecko.
---
{% embed https://dev.to/snyk/polyfill-supply-chain-attack-embeds-malware-in-javascript-cdn-assets-55d6 %}
@snyk_sec details a supply chain attack where malware was embedded in JavaScript CDN assets via a polyfill. We learn about JavaScript polyfills, security risks of polyfills hosted on CDNs, and more.
---
{% embed https://dev.to/kwnaidoo/i-got-hacked-and-blew-up-prod-43a3 %}
@kwnaidoo shares three ways they've taken down production, and how to avoid these mistakes.
---
{% embed https://dev.to/yelldutz/understanding-react-hooks-3e69 %}
@yelldutz explains how react hooks came to be (by way of wrapper hell), and breaks down the unique purposes and functions of three hooks.
---
{% embed https://dev.to/m_midas/http-468-keyboard-required-4een %}
@m_midas introduces HTTP 468, a humorous and fictional status code indicating that a keyboard is required.
---
{% embed https://dev.to/coderamrin/how-content-creation-helped-me-land-my-first-tech-job-4d8b %}
@coderamrin discusses how creating content played a pivotal role in securing their first tech job, and some ideas on how to get started if you're looking to do the same.
---
_And that's a wrap for this week's Top 7 roundup! 🎬 We hope you enjoyed this eclectic mix of insights, stories, and tips from our talented authors. Keep coding, keep learning, and stay tuned to DEV for more captivating content and [make sure you’re opted in to our Weekly Newsletter](https://dev.to/settings/notifications) 📩 for all the best articles, discussions, and updates._
*Author: thepracticaldev*

---

**Using a Bash script to Automate User Account Management in Linux** (published 2024-07-02)
https://dev.to/shirlyne_thiongo_e4e524b/using-a-bash-script-to-automate-user-account-management-in-linux-47el

It's critical for Sysops engineers to adopt the habit of effectively managing user accounts, particularly when onboarding new clients, users, or staff members. This post will walk you through the process of writing a Bash script that adds users, adds them to groups, generates passwords, and logs all of your activities automatically. This module is part of the HNG DevOps track. HNG is a great place to start working hands-on with DevOps and its related stacks; you can apply or learn more at https://hng.tech/internship. Now let's begin today's task.
**Overview**
Handling user accounts by hand can be laborious and error-prone. Automating this procedure saves time, ensures consistency, and reduces mistakes. Using a file containing a list of usernames and groups, we will develop a script named "create_users.sh" that creates users and groups, configures their home directories, generates random passwords, and logs all tasks, actions, and errors to /var/log/user_management.log.
**Prerequisites:**
Before we start, it is essential that you have the following; if not, please review previous modules to familiarize yourself with the ideas:
1. Basic familiarity with Linux commands
2. Administrative rights on the system
3. A text editor (such as vim or nano)
**Summary**
The script completes the following tasks:
1. Reads a list of users and groups from a file.
2. Creates users and places them in the designated groups.
3. Creates home directories and grants the necessary access.
4. Generates random passwords for users.
5. Logs every operation to the file /var/log/user_management.log.
6. Stores the generated passwords safely in /var/secure/user_passwords.csv.
**Script Breakdown**
Below is a breakdown of the complete script with explanations.
Bash shells are commonly found on Linux operating systems. In this article, we will be working primarily with Ubuntu, a Linux distribution. You can download and set up Ubuntu here: [Canonical Ubuntu](https://ubuntu.com/download).
Once you have your terminal open, run the following commands:

To open the created script for editing, input the following:

This will open the script in the nano text editor. Type the following:


- Save the script: Press Ctrl + O (the letter O, not zero) to write out (save) the file. Press Enter to confirm.
- Exit Nano: Press Ctrl + X to exit the editor
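Since the full script appears only as screenshots above, here is a minimal sketch of what such a create_users.sh could look like, assuming an input file with lines of the form `username; group1,group2` and the log and password file paths described in this article. The exact details may differ from the author's version:

```bash
#!/bin/bash
# Minimal sketch of create_users.sh (assumes input lines like "user; group1,group2")
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

log_action() {
    # Append a timestamped message to the log file
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}

generate_password() {
    # 12 random alphanumeric characters from /dev/urandom
    LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12
}

create_user() {
    local username="$1" groups="$2"
    if id "$username" &>/dev/null; then
        log_action "User $username already exists; skipping."
        return
    fi
    groupadd "$username" 2>/dev/null            # personal group named after the user
    if [ -n "$groups" ]; then
        useradd -m -g "$username" -G "$groups" "$username"
    else
        useradd -m -g "$username" "$username"
    fi
    local password
    password="$(generate_password)"
    echo "$username:$password" | chpasswd
    echo "$username,$password" >> "$PASSWORD_FILE"
    log_action "Created user $username with groups: $groups"
}

# Main section only runs when a user file is supplied (requires root privileges)
if [ "$#" -ge 1 ]; then
    mkdir -p /var/secure && chmod 700 /var/secure
    touch "$PASSWORD_FILE" && chmod 600 "$PASSWORD_FILE"
    while IFS=';' read -r username groups; do
        username="$(echo "$username" | xargs)"   # trim surrounding whitespace
        groups="$(echo "$groups" | tr -d ' ')"   # drop spaces in the group list
        [ -n "$username" ] && create_user "$username" "$groups"
    done < "$1"
fi
```

Restricting /var/secure to mode 700 and the password file to 600 keeps the generated passwords readable by root only, which is the point of storing them separately from the log.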
To populate the users list, enter the following command; this will also open the nano text editor:

On the terminal:

- Save the script: Press Ctrl + O (the letter O, not zero) to write out (save) the file. Press Enter to confirm.
- Exit Nano: Press Ctrl + X to exit the editor.
Type the following command to confirm that the users file is populated:

To execute the script, passing a text file containing usernames and groups as an argument, run the following:

To verify the script:
After running the script, verify the results:
1. Log File: Check
```
/var/log/user_management.log
```
for logged actions.
```
sudo cat /var/log/user_management.log
```
2. Password File: Check
```
/var/secure/user_passwords.csv
```
for generated passwords.
```
sudo cat /var/secure/user_passwords.csv
```
3. User and Group Existence: Verify that users and groups were created
```
id user1
```
```
id user2
```
```
id user3
```
In Conclusion:
The onboarding process for new users, staff, or accounts can be greatly streamlined by automating user account management with Bash scripts. By following the instructions in this post, you can build a robust script that creates users, assigns them to groups, and sets secure passwords, while logging actions for transparency and auditability. Remember that HNG made these challenges possible. You can also join their premium channel to receive perks and study in a very supportive setting; to sign up, visit https://hng.tech/premium. I'll see you in the upcoming module. Thank you.
*Author: shirlyne_thiongo_e4e524b*

---

**Building a Scalable Web App with AWS Elastic Beanstalk, DynamoDB, CloudFront, and Edge Location - with AWS Dashboard and EB CLI** (published 2024-07-02, tags: ebcli, aws, elasticbeanstalk, cloud)
https://dev.to/cansu_tekin_b017634d64dfd/building-a-scalable-web-app-with-aws-elastic-beanstalk-dynamodb-cloudfront-and-edge-location-with-aws-dashboard-and-eb-cli-1j43


In this real-world project, I was tasked with implementing an application capable of supporting a high volume of simultaneous users. This application was utilized during a large conference attended by over 10,000 people, both in-person and online, from around the globe. The event featured live broadcasts and the drawing of 10 vouchers for 3 Cloud certifications. At the peak moment, more than 10,000 audience members registered their emails to participate in the raffle.
We used AWS, Elastic Beanstalk services to deploy the web application, DynamoDB to store emails, and CloudFront to cache static and dynamic files in an Edge Location close to the user.
## Solution Architecture

**Part 1: Create a table in DynamoDB to store users’ email addresses and deploy the application using Elastic Beanstalk**
**Part 2: Create a CloudFront distribution**
**Part 3: Perform Load testing**
## Part 1: Create a table in DynamoDB to store users’ email addresses and deploy the application using Elastic Beanstalk
Create a table in DynamoDB to store users’ email addresses and deploy the application using Elastic Beanstalk, which will provision infrastructures such as EC2, Elastic Load Balancer, and Auto Scaling group.
1. **Create a table in DynamoDB to store users’ email addresses**
Amazon DynamoDB is a fully managed, high-performance, and highly scalable NoSQL database service designed to handle large-scale data loads and ensure low-latency responses.
Search for DynamoDB in the AWS console and create a table “users”. Leave anything else as default.
A partition key is a unique attribute in a DynamoDB table that determines the partition in which the data is stored. Each item in the table must have a unique value for the partition key.
Table name: users
Partition key: email, type String

**2. Create an Elastic Beanstalk Application**
Amazon Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
Many people will try accessing our application from a mobile or desktop to register. At this stage, the application needs to be robust and scalable to handle high traffic from many users. Elastic Beanstalk allows us to deploy and manage the web application in AWS Cloud without worrying about the infrastructure. It simplifies the process by provisioning and managing essential AWS resources like EC2 instances, Elastic Load Balancers, and Auto Scaling groups, ensuring the application remains responsive and available under varying loads. We will upload the application files, and Elastic Beanstalk will automatically manage capacity provisioning, load balancing, and scaling.
Elastic Beanstalk will use the provided application files to deploy the application. It is important to organize the application folders in a way that Elastic Beanstalk can understand. Check the [Elastic Beanstalk documentation](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/tutorials.html) before uploading the application files. Each application requires its own folder structure. This project’s folders and files were designed specifically for the Python application.

application.py:

```python
from flask import Flask, render_template, flash, redirect, url_for, session, request, logging
from wtforms import Form, StringField, TextAreaField, PasswordField, validators
from wtforms.validators import InputRequired, Email
import boto3
import os
from urllib.parse import quote as url_quote

application = Flask(__name__)

# dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
region = os.environ['AWS_REGION']
dynamodb = boto3.resource('dynamodb', region_name=region)


def put_user(email):
    table = dynamodb.Table('users')
    response = table.put_item(
        Item={
            'email': email
        }
    )
    return response


# Index
@application.route('/', methods=['GET', 'POST'])
def index():
    form = RegisterForm(request.form)
    if request.method == 'POST' and form.validate():
        user_resp = put_user(form.email.data)
        return render_template('obrigado.html')
    return render_template('index.html', form=form)


# Register Form Class
class RegisterForm(Form):
    email = StringField('Email', [InputRequired("Please enter your email.")])


if __name__ == '__main__':
    application.secret_key = 'secret123'
    application.run(debug=True)
```

requirements.txt:

```
boto3==1.21.8
botocore==1.24.8
Flask==2.0.3
passlib==1.7.2
WTForms==2.3.3
jsons==1.6.1
itsdangerous==2.1.0
Werkzeug==2.0.3
```
Proper organization of the application files helps Elastic Beanstalk deploy the application automatically. Structure your project, zip, and store the zip file in a S3 bucket or locally. Elastic Beanstalk will use it to deploy the application.
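The console steps below can also be driven from a terminal with the EB CLI mentioned in the title. A rough sketch, run from the project folder (the application and environment names come from this project; the region is an assumption), might look like:
```
# Initialize the EB application for this project folder
eb init tcb-conference --platform python-3.8 --region us-east-1
# Create the environment; Elastic Beanstalk provisions the ELB and Auto Scaling group
eb create tcb-conference-env
# Deploy the current code and open the app in a browser
eb deploy
eb open
```
`eb deploy` zips the project folder and uploads it for you, so the manual zip-and-upload step is only needed when deploying through the console.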
**3. Configure environment**
Search for Elastic Beanstalk in the AWS console and “**Create Application”.**
Application name: tcb-conference
Platform (runtime environment): Python 3.8
Application code: Upload the application file code
Presets: High availability

It launches an environment named tcb-conference-env with these [AWS resources:](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/GettingStarted.CreateApp.html)
* An Amazon Elastic Compute Cloud (Amazon EC2): An Amazon EC2 virtual machine configured to run web apps on the platform you choose.
* An Amazon EC2 security group: An Amazon EC2 security group configured to allow incoming traffic on port 80. This resource lets HTTP traffic from the load balancer to the EC2 instance running your web app.
* An Amazon Simple Storage Service (Amazon S3) bucket: A storage location for your source code, logs, and other artifacts that are created when you use Elastic Beanstalk.
* Amazon CloudWatch alarms: Two CloudWatch alarms that monitor the load on the instances in your environment and are triggered if the load is too high or too low. When an alarm is triggered, your Auto Scaling group scales up or down in response.
* An AWS CloudFormation stack: Elastic Beanstalk uses AWS CloudFormation to launch the resources in your environment and propagate configuration changes. The resources are defined in a template that you can view in the [AWS CloudFormation console.](https://console.aws.amazon.com/cloudformation)
* A domain name (autogenerated in our case): A domain name that routes to your web app in the form [subdomain.region.elasticbeanstalk.com](http://subdomain.region.elasticbeanstalk.com).

We do not need to specify the S3 bucket URL at this point. Once we move to the next step (from environment configuration to service access configuration), an S3 bucket will be automatically created, without waiting for the Elastic Beanstalk application creation process to complete. We can use this bucket to store our application code or create a different one. I will use this one.

After designing the folder structure for the Python application based on AWS Elastic Beanstalk documentation, the application files will be uploaded to S3 bucked. That location is used to run the application. Go to S3 bucked and upload the application files.


Update your Public S3 URL with the bucket URL.

At this point, all public access to this bucket is allowed by default. I will keep it as it is for now, but will tighten it in later projects.
**4. Configure service access**
We need to set up the necessary permissions and roles that allow Elastic Beanstalk to interact with other AWS services securely.
1. **Service Role**: Grants permissions to the Elastic Beanstalk service itself, allowing it to manage AWS resources on your behalf. This includes creating and managing EC2 instances, load balancers, and other resources necessary for running your application.
2. **EC2 Instance Profile:** Grants permissions to the EC2 instances running our application. It allows the instances to interact with other AWS services (e.g., S3, DynamoDB) on your behalf.

When you click View permission details you will see recommended permission by AWS. We will add them to our IAM roles as well.
**Service role permission recommendations:**
1. **AWSElasticBeanstalkEnhancedHealth** (this comes as default)
2. **AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy** (we will attach it)**:** This policy is for the AWS Elastic Beanstalk service role used to perform managed updates of Elastic Beanstalk environments. The policy grants broad permissions to create and manage resources across several AWS services including AutoScaling, EC2, ECS, Elastic Load Balancing, and CloudFormation.
**EC2 instance profile recommendations:**
1. **AWSElasticBeanstalkWebTier:** Provide the instances in your web server environment access to upload log files to Amazon S3.
2. **AWSElasticBeanstalkWorkerTier:** Provide the instances in your worker environment access to upload log files to Amazon S3, to use Amazon SQS to monitor your application’s job queue, to use Amazon DynamoDB to perform leader election, and to Amazon CloudWatch to publish metrics for health monitoring.
3. **AWSElasticBeanstalkMulticontainerDocker:** Provide the instances in your multicontainer Docker environment access to use the Amazon EC2 Container Service to manage container deployment tasks.
First, we need to create an IAM role and attach it to the Elastic Beanstalk application. You can select *Create and use a new service role* if you do not have an existing one, or you want AWS to create one for you. This will create an IAM role with necessary permissions but you may need to add additional permissions based on your application needs. I will create a new one for this project following the steps below:
1. **Create a Service Role**:
Go to IAM -> Roles -> Create role -> AWS service

When we specify the Use case as Elastic Beanstalk it comes with some [default policies.](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-roles-service.html)

Name: elastic-beanstalk-service-role
**AWSElasticBeanstalkEnhancedHealth:** AWS Elastic Beanstalk Service policy for Health Monitoring system.
**AWSElasticBeanstalkService:** AWS Elastic Beanstalk Service role policy grants permissions to create & manage resources (i.e.: AutoScaling, EC2, S3, CloudFormation, ELB, etc.) on your behalf. This policy comes with the following permissions by default:
* **AllowCloudformationOperationsOnElasticBeanstalkStacks**: Allows full CloudFormation operations.
* **AllowDeleteCloudwatchLogGroups**: Allows Elastic Beanstalk to clean up log groups when environments are deleted.
* **AllowECSTagResource**: Allows tagging of ECS resources
* **AllowS3OperationsOnElasticBeanstalkBuckets**: Allows full S3 operations on Elastic Beanstalk-specific buckets. Grants permissions to manage Elastic Beanstalk application versions, environment configurations, and logs stored in S3 buckets.
* **AllowLaunchTemplateRunInstances**: Enables Elastic Beanstalk to launch EC2 instances using predefined launch templates.
* **AllowOperations**: Allows Elastic Beanstalk to fully manage instances, security groups, load balancers, scaling policies, and other resources necessary for the application environment. Includes permissions for Auto Scaling, EC2, ECS, Elastic Load Balancing, IAM, CloudWatch, RDS, S3, SNS, SQS, and CodeBuild.

Most of the necessary policies are attached by default. Create a role with given permissions first. If we need any additional permission we need to attach it to this role.

We will attach the recommended policy we mentioned earlier: **AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy**

**AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy**: Includes permissions needed specifically for updating instances and other resources.
**AWSElasticBeanstalkService**: Includes a wider range of permissions necessary for creating, updating, and deleting various AWS resources managed by Elastic Beanstalk. This policy is on a deprecation path. It comes as a default for now.
**2. Create an EC2 Instance Profile**
We will follow similar steps to create an EC2 Instance Profile.

We need to attach specific policies to give additional permission to the EC2 instance that our Python application will run inside it. In our case, our application needs to write data and read data from the DynamoDB table. We need a specific policy for that which is *AmazonDynamoDBFullAccess.*

Name: elastic-beanstalk-ec2-service-role
After we added AWS recommended policies, the final role comes with the following policies:

**3. Create an SSH key:**
To access the EC2 instance via SSH connection we will create an SSH key.
EC2 -> Key pairs -> Create key pair

Download and store the key.
We are all set for service access configuration.

You can *Skip to review* and edit configuration there or you can go step by step until the end.
**5. Set up networking, database, and tags-[optional](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.managing.vpc.html)**
If you don’t configure a VPC, Elastic Beanstalk uses the default VPC. I will use a custom VPC I had before. You can use the default one. The configuration will be the same.

We will choose subnets in each AZ for the EC2 instances that run our application. To avoid exposing the application instances directly to the Internet, we can run them inside private subnets and place the load balancer in public subnets, so that the application remains publicly reachable through the load balancer. If we choose private subnets for our EC2 instances, the VPC must have a NAT gateway in a public subnet that the EC2 instances can use for outbound Internet access. I will not create a NAT gateway; I will run both the instances and the load balancer in public subnets for simplicity. To run them in the same public subnets, we will assign public IP addresses to the instances.

I only picked public subnets and activated public IP addresses for EC2 instances. I am not going to use RDS, I will move to the next configuration.
**6. Configure instance traffic and scaling *— optional***
We will set the Root volume type: General Purpose SSD — 8 GB

***Capacity:*** Min instances: 2, Max instances: 4, Instance type: t2.micro

The Elastic Beanstalk will automatically create a Load Balancer for our application, with a minimum of 2 EC2 instances running when we first launch our application, and a maximum of 4 if triggered.
Configure the trigger that lets the auto-scaling group know when to scale up and add more instances. If the load goes beyond 50% CPU utilization, it will add more instances (up to 4) to keep up with the workload.
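As a rough mental model, the rule we just described can be sketched as a tiny function (an illustration of the trigger's effect only, not how the auto-scaling group is actually implemented):

```python
def scale_decision(cpu_percent: float, current: int, minimum: int = 2, maximum: int = 4) -> int:
    """Return the new instance count for the configured trigger:
    scale out by 1 above 50% CPU, scale in by 1 below 40%, clamped to [minimum, maximum]."""
    if cpu_percent > 50:
        return min(current + 1, maximum)
    if cpu_percent < 40:
        return max(current - 1, minimum)
    return current

# e.g. at 80% CPU with 2 instances running, one more instance is added:
# scale_decision(80, 2) -> 3
```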
***Scaling triggers:***

***Load balancer network settings:*** We will use the same setting as we used for EC2 instances.

***Listeners:*** By default, the load balancer is configured with a standard web server on port 80. I will use default settings.

If you wish you can configure Elastic Load Balancer to capture logs with detailed information about requests sent to your Load Balancer. Those logs will be stored in Amazon S3. I will not enable it for this project.

**7. Configure updates, monitoring, and logging *— optional***
I will not touch settings here, if you want you can configure them based on your needs. It uses NGINX by default.


I will only set the AWS_REGION environment variable here. It will be passed to my application.
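On the application side, this environment property simply shows up as a regular OS environment variable. A minimal sketch of how the app might read it (the fallback default here is my own assumption, not the article's actual code):

```python
import os

def get_region(default: str = "us-east-1") -> str:
    """Read the AWS region passed in by Elastic Beanstalk as an environment property."""
    return os.environ.get("AWS_REGION", default)

# The app can then build its AWS clients against this region, e.g.
# boto3.resource("dynamodb", region_name=get_region())
```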

Submit to create the environment for the application run. It takes a few minutes.

***S3 Bucket:***

***EC2 instances:***

***Security Groups:***

Inbound-outbound rules of security groups:



***Load Balancer:***

The environment health turned to RED, I will look at the logs and figure it out before trying to run my application.

It did not work in my case. I had run this application with the same settings before without any problems, so it should work; I do not know whether AWS has updated anything in its services since my last run. I got the following error this time:
```
ERROR: ModuleNotFoundError: No module named 'application'
```
I applied the solutions below to solve it but I was not able to solve this error after too much debugging.
* AWS expects the Flask instance to be named *application*, while Gunicorn looks for *app.* I updated a line, `application = app = Flask(__name__)`, in my *application.py* file and then set WSGIPath to `application:application`.
I will use EB CLI after this point with the exact same settings. If you are also not able to run or debug by yourself you can move with me to use EB CLI.
Do not forget to destroy all the resources you created with Elastic Beanstalk: the *tcb-conference-env* environment and the *tcb-conference* application. Keep the DynamoDB table.
## Building a Scalable Web App with AWS Elastic Beanstalk, DynamoDB, and CloudFront — with EB CLI
## Part 1: Deploy the application using Elastic Beanstalk — with EB CLI
1. **Install EB CLI**
EB CLI is AWS Elastic Beanstalk Command Line Interface. First, we will install EB CLI. You can use this [aws-elastic-beanstalk-cli-setup](https://github.com/aws/aws-elastic-beanstalk-cli-setup?tab=readme-ov-file#macoslinux) to install based on your operating system. I will follow the instructions for MacOS:
```bash
pip install awsebcli

# Verify the EB CLI Installation
eb --version
```
**2. Configure the EB CLI**
Initialize an EB CLI project and select your region.
```bash
# Initialize
eb init
# or
eb init -p python-3.8 tcb-conference --region us-east-1
```

**3. Setup IAM Credentials**
We must provide our credentials first: an access key and a secret key that authenticate us (who we are) and authorize us (what permissions we have), allowing the EB CLI to access and manage AWS resources on our behalf. Let's go to the IAM console to create one. The secret access key is available for download only when you create it, so make sure you download it right away.
Go to IAM console -> Users -> Create User

I will grant AdministratorAccess, and I can update it later if I wish.

Go and create access key.
Use case: Command Line Interface (CLI)

When you run EB CLI it will ask for these credentials. You can also configure the AWS CLI using environment variables
```bash
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
export AWS_DEFAULT_REGION=us-east-1
```
We are ready to configure the EBS environment, similar to what we did on the AWS console before:

**4. Create an Elastic Beanstalk Environment**
We will use a YAML file for configuration. The file should be placed under `.ebextensions`. Elastic Beanstalk automatically detects and applies this config file when it is provided; otherwise, it uses default settings to create the environment.
```bash
mkdir .ebextensions
touch .ebextensions/environment_configuration.config
```


Our application files will be zipped and uploaded to the S3 bucket while creating the environment.
We will use the same configuration in our YAML file. This is actually easier and faster compared to using AWS Dashboard.
YAML config file:
```yaml
option_settings:
  aws:elasticbeanstalk:environment:
    ServiceRole: "aws-elasticbeanstalk-service-role"    # Set service role
  aws:autoscaling:launchconfiguration:
    InstanceType: t2.micro                              # Specify the instance type (adjust as needed)
    EC2KeyName: ebs-ssh-key                             # Set EC2 key pair
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role   # Set IAM instance profile
    RootVolumeType: gp2
    RootVolumeSize: "10"
    DisableIMDSv1: true                                 # Deactivate IMDSv1
  aws:autoscaling:asg:
    MaxSize: 4                                          # Maximum number of instances
    MinSize: 2                                          # Minimum number of instances
  aws:ec2:vpc:
    VPCId: "vpc-03f8678fb9c5d5ea1"                      # Set the VPC ID
    ELBScheme: public
    Subnets:
      - "subnet-0449c3e40202e7665"
      - "subnet-01e26ba1a707f5b13"
      - "subnet-06a672c7f4aea1795"
      - "subnet-0461ac2c4cea08257"
    ELBSubnets:
      - "subnet-0449c3e40202e7665"
      - "subnet-01e26ba1a707f5b13"
      - "subnet-06a672c7f4aea1795"
      - "subnet-0461ac2c4cea08257"
    AssociatePublicIpAddress: true                      # Enable public IP addresses for instances
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: "basic"                                 # Use basic health reporting
  aws:autoscaling:trigger:
    MeasureName: "CPUUtilization"                       # Use CPUUtilization as trigger measurement
    UpperThreshold: "50"                                # Upper threshold for CPU utilization
    LowerThreshold: "40"                                # Lower threshold for CPU utilization
    Unit: "Percent"
    Period: "1"
    UpperBreachScaleIncrement: "1"                      # Increase instance count by 1 on upper breach
    LowerBreachScaleIncrement: "-1"                     # Decrease instance count by 1 on lower breach
  aws:elasticbeanstalk:container:python:
    WSGIPath: "application:application"                 # Set WSGIPath to application:application
  aws:elasticbeanstalk:application:environment:
    AWS_REGION: "us-east-1"                             # Set AWS_REGION environment property
```
You can use [EBS documentation](https://docs.aws.amazon.com/pdfs/elasticbeanstalk/latest/dg/awseb-dg.pdf#command-options-general) here while preparing your YAML file.
Ready to create an EBS environment:
```bash
eb create tcb-conference-env --region us-east-1
```

We are now able to create our EBS environment without any problems at this time!
Health: OK

Click on the Domain name to open the application:

Our application files were zipped and uploaded to the S3 bucket. Remember we created an EBS application named *tcb-conference* at the beginning of the EB CLI initialization. Our files are zipped and placed in a directory named *tcb-conference.*
*S3 bucket:*

*EC2 instances (Minimum 2):*

*Load Balancer:*

*Auto Scaling Group:*

*CloudFormation:*

**5. Validate the Application**
The participants will need to enter their email address on the web page and the application will insert the email address into the DynamoDB.
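A rough sketch of what that insert might look like on the application side (the table name, item schema, and validation here are my assumptions, not the article's actual code; the boto3 call is shown commented out so the snippet stays runnable offline):

```python
import re

def build_user_item(email: str) -> dict:
    """Validate the email and shape it as a DynamoDB item (hypothetical 'users' schema)."""
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        raise ValueError(f"invalid email: {email!r}")
    return {"email": {"S": email}}

# In the real handler the item would then be written to the table, roughly:
# import boto3
# boto3.client("dynamodb", region_name="us-east-1").put_item(
#     TableName="users", Item=build_user_item(email))
```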
Try to register:


Oops! We are not able to register. Our frontend is working, but registration fails; the email will not be written to DynamoDB. Let's go and check our EC2 instance role.

We do not have permission to write data to DynamoDB. Grant your EC2 instances permission to write to the DynamoDB table by attaching the **AmazonDynamoDBFullAccess** policy to the EC2 instance role in IAM.

Try again:

Check the *users’* DynamoDB table:

Awesome!
## Part 2: Create a CloudFront Distribution
CloudFront is a Content Delivery Network (CDN). Static and dynamic files (CSS, HTML, etc.) coming from the application are cached at the edge location closest to the user, improving the application's performance and providing the lowest latency. When a user requests content, the request is routed to the nearest edge location.
Go to Console and search for CloudFront -> Create a CloudFront distribution
Our application origin is our Elastic Load Balancer.


* Cache policy: CachingOptimized
* Allowed HTTP methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE. We want to allow the POST method to insert the data inside DynamoDB.



* Once CloudFront distribution is created, a domain name to access our application is already associated with it. This will give the Route 53 DNS entry that we use to access the application throughout CloudFront.
* We could have a Custom SSL certificate — (optional), branded and customized domain name for our purpose associated with our CloudFront distribution and put the SSL certificate associated with that as well. We are going to use the default one created for us.

Copy the CloudFront domain name when it is ready. Let’s confirm if we can access the application using the CloudFront.
Here it is:

## Part 3: Perform Load testing
We will basically simulate what happens when many users access the EC2 instance at the same time and CPU utilization goes up.
We will induce load on the CPU. Copy the IP address from one of the EC2 instances. Open remote connectivity on your computer via SSH:

```bash
ssh -i "ebs-ssh-key.pem" ec2-user@ec2-52-201-156-146.compute-1.amazonaws.com
```

Install the stress tool to perform the load testing:
```bash
sudo amazon-linux-extras install epel -y
sudo yum install stress -y
stress -c 4
```
This command is going to bump up the CPU utilization inside the EC2 instance. Check whether the Elastic Beanstalk status turns to "Warning"; the stress command is pushing this instance's CPU utilization to 100%. We configured the trigger so that if CPU utilization goes higher than 50%, the auto-scaling group adds one new instance to keep up with the workload.

Open a new terminal, connect remotely with SSH again, and use the “top” command that shows CPU utilization on the operating system.

New instances will be added shortly to scale up. Every single user who goes through the load balancer will be redirected to one of these instances.


Stop the load by ending the stress command, then check that the process is no longer running:
```
Ctrl + C
ps -ef | grep stress
```
The stress command is no longer running, so the health status will change back to OK. Auto scaling will scale down and remove the 3rd instance that was added earlier, so we can save cost.
Once you finish exploring it, please remove the Elastic Beanstalk application, and Elastic Beanstalk environment, disable and delete the CloudFront distribution, and finally delete the DynamoDB users table.
CONGRATULATIONS!!!
**REFERENCES**
[Elastic Beanstalk service role](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-roles-service.html)
[AWS Elastic Beanstalk Developer Guide](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html)
[AWS - Deploying a Flask application to Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create-deploy-python-flask.html)
[Deploying a Flask Application to Elastic Beanstalk](https://testdriven.io/blog/flask-elastic-beanstalk/)
[Elastic Beanstalk-supported platforms](https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html#platforms-supported.python)
[No module named ‘application’ Error while deploying a simple web app to Elastic Beanstalk](https://stackoverflow.com/questions/62479386/no-module-named-application-error-while-deploying-simple-web-app-to-elastic-be)
[WSGI configuration for Django Deployment using EB CLI](https://repost.aws/questions/QUcm3GAgnESN2wCOtD-hqzsQ/wsgi-configuration-for-django-deployment-using-eb-cli)
[Create an application source bundle](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-sourcebundle.html)
[What is AWS Elastic Beanstalk?](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html)
[Deploy a sample web application using Elastic Beanstalk](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/GettingStarted.html)
[Install the EB CLI](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install.html)
[AWS security credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds.html)
[EBS configuration options for all environments](https://docs.aws.amazon.com/pdfs/elasticbeanstalk/latest/dg/awseb-dg.pdf#command-options-general)
| cansu_tekin_b017634d64dfd |
1,909,149 | What's the difference between Css & Scss | Thank God for style sheet languages like Css and Scss; front-end would have been as stylish as a... | 0 | 2024-07-02T18:02:58 | https://dev.to/peterbabs/whats-the-difference-between-css-scss-29b2 | css, scss, webdev, frontend | Thank God for style sheet languages like Css and Scss; front-end would have been as stylish as a potato in a tuxedo!
If utilized properly, they let us give users an appealing look and a memorable experience through the appearance of our web pages.
The nutshell of this article is this: **CSS**, primarily, is the **powerful** dude that helps us tell our website how it should look—colors, fonts, spacing, you name it. **SCSS** is like adding "**S**uper" in front of that "**P**ower".
Though, they've got a lot of similarities. Hence, we must clarify the differences.
## What are the differences between CSS and Scss?
**Cascading Style Sheet** or **CSS** is the standard language used to describe the look and formatting of a document written in HTML. It allows you and me as developers to control layout, colors, fonts, and overall design.
On the other hand, **Sassy Cascading Style Sheet** or **SCSS** is a superset of CSS. That means any valid CSS is also valid SCSS. In addition, SCSS introduces powerful features like variables, nesting, and mixins, which promote code readability, reusability, and maintainability. At the end of the day, it is compiled to plain CSS behind the scenes, since that is what the browser actually understands.
Think of CSS as customizing your video game character. You get to choose basic features like hair color and clothes. SCSS is what helps you take your character up a notch. What if you could save your favorite colors and outfits to reuse again without having to remember every detail? That's the SCSS variable feature for you. Note that variables are also available in vanilla CSS (as custom properties).
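For instance, a saved brand color might be reused like this (a small illustration; the variable names are made up):

```scss
$brand-color: #3490dc; // SCSS variable, compiled away at build time

.button {
  background-color: $brand-color;
}

// The vanilla CSS equivalent uses custom properties:
// :root { --brand-color: #3490dc; }
// .button { background-color: var(--brand-color); }
```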
With SCSS, you also get to **nest your styling rules**. Historically this could not be achieved in pure CSS (native CSS nesting has only recently started landing in browsers). Nesting lets you keep your code tidy by grouping related styles together, making it way easier to read.
Take a look:
Here's CSS, where the rules cannot be nested:
```
.parent{
padding: 1rem;
}
.child{
margin: auto;
}
```
Here's SCSS, allowing you to nest a CSS rule within its parent selector's rule block:
```
.parent{
padding: 1rem;
.child{
margin: auto;
}
}
```
Another cool feature that SCSS provides but pure CSS lacks is called "mixins". With SCSS mixins, you can define reusable chunks of CSS, reducing redundancy.
In the video game example I made earlier, mixins are like binding both a knee strike and a punch to a single command. As a result, your character will attack with both moves whenever you call that command.
How cool is that? 😊
Instead of having to repeat the same couple of CSS declarations multiple times, you can keep these declarations as a group and apply them all at once by referencing the mixin name.
Take a look:
The typical method of centering a block element with CSS.
```
.item1{
display: flex;
align-items: center;
justify-content: center;
}
.item2{
display: flex;
align-items: center;
justify-content: center;
}
```
A better approach with Scss
1. Create mixin.
```
@mixin centerItem {
display: flex;
align-items: center;
justify-content: center;
}
```
2. Apply (include) mixin where it's needed.
```
.item1{
  @include centerItem;
}

.item2{
  @include centerItem;
}
```
So, while CSS is the basic tool you start with to make your web pages look good, SCSS is like leveling up. It helps you manage everything better, especially when your project gets bigger and more complicated.
Was that helpful?
| peterbabs |
1,908,995 | My HNG journey. Stage Zero: How to Deploy a Static Webpage Using Nginx | Introduction Over the past few years, HNG internships have always been talked about with... | 0 | 2024-07-02T17:59:23 | https://dev.to/ravencodess/my-hng-journey-stage-zero-how-to-deploy-a-static-webpage-using-nginx-55ij | webdev, nginx, docker, html | ## Introduction
Over the past few years, [HNG](https://hng.tech/) internships have always been talked about with a certain level of fear and respect. "A grueling and unforgiving experience", some would say.
This year, I have decided to give the DevOps track a try. There are 10 stages of tasks to be covered, and I intend to document my entire journey.
### Requirements
The requirements are straightforward;
- The webpage must be hosted on a cloud-based virtual machine
- The webpage must be publicly accessible
- The webpage must utilize a web server like Nginx or Apache
- The webpage must include The intern's name, Slack username and email address
- The webpage must include a button that links back to the official HNG website
### Prerequisites
**Skills/Knowledge**: Basic understanding of Docker, Nginx, and Linux CLI.
**Tools/Software**: Docker, Nginx, AWS.
**Let's get started**
**Set Up Virtual Machine**
I am going to be using AWS EC2 for this step but feel free to use any cloud platform of your choice.
I'm opting for this EC2 specification because it's a simple static web server:
- Instance Name: `My Web Server`
- AMI: `Amazon Linux 2023 (Free Tier)`
- Instance Type: `t2 micro`
- Storage: `Default 8GB gp3`
- Networking: `Default VPC and subnet`
- Security Groups: `Allow HTTP, HTTPS access from anywhere, and SSH access from my IP`
Make sure to create a new key-pair and save it securely in case you want to SSH into the instance.




Expand the Advanced Details panel, scroll to the bottom to find User Data, and paste in this script.
```
#!/bin/bash
# Update software
yum update -y
# Install Docker
yum install docker -y
service docker start
usermod -aG docker ec2-user
# Install Git
yum install git -y
# Clone the repository
git clone https://github.com/Ravencodess/static-webpage-stage-0.git
cd static-webpage-stage-0
# Build the Docker image
docker build -t my-web-app .
# Run the container and map port 80 on the server to port 80 inside the container
docker run -d -p 80:80 my-web-app
```
This script pulls the application code from my public GitHub repo, launches the application in an Nginx-based Docker container, and maps port 80 on the server to port 80 inside the container. This allows us to access the static webpage in a browser by simply visiting the server's public IP address.
Let's take a closer look at the contents of the repository.

The HTML, CSS, and JS folders contain the necessary files, stylesheets and script needed to create a functional webpage and can be edited and configured anyhow you see fit.
I want to pay closer attention to the contents of the `Dockerfile` and `nginx.conf` file.
Dockerfile

The Dockerfile performs the following steps
- It uses the latest nginx image from Docker Hub as the base image for the application.
- It removes the default `nginx.conf` configuration file and replaces it with our modified config file.
- It then copies the HTML, CSS, and JavaScript files into the `/usr/share/nginx` folder, which Nginx uses to serve the web page.
- Finally, it documents that port 80 is exposed and starts Nginx when the container runs.
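Since the Dockerfile itself only appears as a screenshot above, here is a sketch of what those steps could look like (reconstructed from the description, not copied from the repo; paths and folder layout are assumptions):

```dockerfile
# Use the latest official Nginx image as the base
FROM nginx:latest

# Replace the default Nginx configuration with our modified one
COPY nginx.conf /etc/nginx/nginx.conf

# Copy the site files into the folder Nginx serves from
COPY html /usr/share/nginx/html
COPY css  /usr/share/nginx/css
COPY js   /usr/share/nginx/js

# Document the exposed port; the base image's CMD already starts Nginx
EXPOSE 80
```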
nginx.conf

This file sets up Nginx to listen on localhost port 80 and sets up redirection for all static assets the webpage would need.
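Based on that description, the config might look roughly like this (a sketch; the exact server and location blocks in the repo may differ):

```nginx
server {
    listen 80;
    server_name localhost;

    # The Dockerfile copies html/, css/ and js/ under this root,
    # so requests for /css/... and /js/... resolve automatically.
    root /usr/share/nginx;

    # Serve the page itself
    location / {
        index html/index.html;
    }
}
```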
Your instance should have launched successfully and passed status check by now, locate the public IP address or DNS, and paste it in a new browser tab.
Your webpage should be live.


Happy Launching 🚀 | ravencodess |
1,909,244 | Creating a Dynamic Blog with Flask, HTMX, TailwindCSS, and Authentication (Part 2) | Building a dynamic blog with Flask and HTMX can be both fun and rewarding. This guide will take you... | 0 | 2024-07-02T17:58:32 | https://devtoys.io/2024/07/02/creating-a-dynamic-blog-with-flask-htmx-tailwindcss-and-authentication-part-2/ | htmx, flask, tailwindcss, tutorial | ---
canonical_url: https://devtoys.io/2024/07/02/creating-a-dynamic-blog-with-flask-htmx-tailwindcss-and-authentication-part-2/
---
Building a dynamic blog with Flask and HTMX can be both fun and rewarding. This guide will take you through the entire process, focusing on making your blog interactive without the need for a complex single-page application (SPA) framework. By the end, you’ll have a fully functional blog where users can create, read, update, and delete posts seamlessly.
- 1️⃣ If you are brand new to HTMX, check out this article! -> [Comprehensive Guide to HTMX: Building Dynamic Web Applications with Ease](https://devtoys.io/2024/06/22/comprehensive-guide-to-htmx-building-dynamic-web-applications-with-ease/)
- 2️⃣ If you want to focus on barebones without any authentications or adding css framework, check out Part 1 of this tutorial for a simpler version of the app here -> [Building a Dynamic Blog with Flask and HTMX](https://devtoys.io/2024/06/30/building-a-dynamic-blog-with-flask-and-htmx/)
---
## What You’ll Need
- Basic knowledge of HTML, CSS, and JavaScript
- Basic understanding of Python and Flask (or your preferred backend framework)
- Python and pip installed on your machine
---
## 👽 TL:DR You can find the complete source code here-> [GitHub Repo 🔗 for Part 2 Tutorial](https://github.com/judescripts/flask-htmx/tree/961a5fb62838aa2e0866098d794700ff7629dfea/part-2)
---
## Step 1: Setting Up Your Environment
**1.1 Install Flask**
First things first, let’s set up our Flask environment. Open your terminal and create a virtual environment, then install Flask:
```bash
python -m venv venv
source venv/bin/activate # On Windows, use `venv\Scripts\activate`
pip install Flask Flask-SQLAlchemy Flask-Login Flask-Bcrypt
```
---
**1.2 Install TailwindCSS**
Next, let’s set up TailwindCSS. You’ll need Node.js and npm installed on your machine.
**Install TailwindCSS:**
```bash
npm install -D tailwindcss
npx tailwindcss init
```
Configure TailwindCSS by editing the tailwind.config.js file:
```javascript
module.exports = {
content: [
'./templates/**/*.html',
'./static/js/**/*.js',
],
theme: {
extend: {},
},
plugins: [],
}
```
**Create a TailwindCSS input file static/css/tailwind.css:**
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```
**Add a build script to your package.json:**
```json
"scripts": {
"build": "tailwindcss -i ./static/css/tailwind.css -o ./static/css/styles.css --watch"
}
```
**Run the build script to generate your CSS:**
```bash
npm run build
```
---
**1.3 Create the Project Structure**
Organize your project directory as follows:
```bash
blog_app/
├── static/
│ ├── css/
│ │ ├── tailwind.css
│ │ └── styles.css (generated by the Tailwind build)
│ └── js/
│ └── scripts.js
├── templates/
│ ├── base.html
│ ├── index.html
│ ├── post.html
│ ├── edit_post.html
│ ├── login.html
│ ├── register.html
│ └── post_snippet.html
├── app.py
└── models.py
```
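Note that `base.html` later loads `static/js/scripts.js`, but its contents are never listed in this tutorial. A minimal placeholder could be something like the following (the form-reset behavior and the `#posts` target check are my assumptions, not part of the original project):

```javascript
// Hypothetical contents for static/js/scripts.js:
// after HTMX swaps a newly created post into #posts, reset the create form.

function isPostCreation(detail) {
  // Pure helper so the check is testable outside the browser.
  return Boolean(detail && detail.target && detail.target.id === "posts");
}

// Only wire up DOM listeners when running in a browser.
if (typeof document !== "undefined") {
  document.addEventListener("htmx:afterSwap", (evt) => {
    if (isPostCreation(evt.detail)) {
      const form = document.querySelector("form[hx-post]");
      if (form) form.reset();
    }
  });
}
```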
---
## Step 2: Create the Flask Backend
**2.1 Define Models**
In models.py, define a simple data model for blog posts and user authentication using SQLAlchemy:
```python
from flask_sqlalchemy import SQLAlchemy
from flask_login import UserMixin
from flask_bcrypt import Bcrypt
db = SQLAlchemy()
bcrypt = Bcrypt()
class User(db.Model, UserMixin):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(150), unique=True, nullable=False)
email = db.Column(db.String(150), unique=True, nullable=False)
password = db.Column(db.String(150), nullable=False)
class Post(db.Model):
id = db.Column(db.Integer, primary_key=True)
title = db.Column(db.String(100), nullable=False)
content = db.Column(db.Text, nullable=False)
user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False)
user = db.relationship('User', backref='posts')
```
---
**2.2 Set Up Flask Application**
Next, set up your Flask application in app.py:
```python
from flask import Flask, render_template, request, redirect, url_for, flash
from flask_login import LoginManager, login_user, logout_user, login_required, current_user
from models import db, User, Post, bcrypt
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///blog.db'
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SECRET_KEY'] = 'your_secret_key'
db.init_app(app)
login_manager = LoginManager(app)
login_manager.login_view = 'login'
@login_manager.user_loader
def load_user(user_id):
return User.query.get(int(user_id))
with app.app_context():
db.create_all() # Create database tables
@app.route('/')
def index():
posts = Post.query.all()
return render_template('index.html', posts=posts)
@app.route('/post/<int:post_id>')
def post(post_id):
post = Post.query.get_or_404(post_id)
return render_template('post.html', post=post)
@app.route('/create', methods=['POST'])
@login_required
def create():
title = request.form['title']
content = request.form['content']
if not title or not content:
flash("Title and content cannot be empty", "danger")
return redirect(url_for('index'))
new_post = Post(title=title, content=content, user_id=current_user.id)
db.session.add(new_post)
db.session.commit()
return render_template('post_snippet.html', post=new_post)
@app.route('/edit/<int:post_id>', methods=['GET', 'POST'])
@login_required
def edit(post_id):
post = Post.query.get_or_404(post_id)
if request.method == 'POST':
if post.user_id != current_user.id:
flash("You are not authorized to edit this post", "danger")
return redirect(url_for('index'))
post.title = request.form['title']
post.content = request.form['content']
db.session.commit()
return redirect(url_for('post', post_id=post.id))
return render_template('edit_post.html', post=post)
@app.route('/delete/<int:post_id>', methods=['POST', 'DELETE'])
@login_required
def delete(post_id):
post = Post.query.get_or_404(post_id)
if post.user_id != current_user.id:
flash("You are not authorized to delete this post", "danger")
return redirect(url_for('index'))
db.session.delete(post)
db.session.commit()
return '<script>window.location.href = "{}";</script>'.format(url_for('index'))
@app.route('/login', methods=['GET', 'POST'])
def login():
if request.method == 'POST':
email = request.form['email']
password = request.form['password']
user = User.query.filter_by(email=email).first()
if user and bcrypt.check_password_hash(user.password, password):
login_user(user)
return redirect(url_for('index'))
flash('Invalid email or password', 'danger')
return render_template('login.html')
@app.route('/register', methods=['GET', 'POST'])
def register():
if request.method == 'POST':
username = request.form['username']
email = request.form['email']
password = request.form['password']
hashed_password = bcrypt.generate_password_hash(password).decode('utf-8')
new_user = User(username=username, email=email, password=hashed_password)
db.session.add(new_user)
db.session.commit()
flash('Account created successfully', 'success')
return redirect(url_for('login'))
return render_template('register.html')
@app.route('/logout')
@login_required
def logout():
logout_user()
return redirect(url_for('index'))
if __name__ == '__main__':
app.run(debug=True)
```
---
## Step 3: Create HTML Templates
**3.1 Base Template**
In templates/base.html, define the base HTML structure:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Blog App</title>
<link rel="stylesheet" href="{{ url_for('static', filename='css/styles.css') }}">
<script src="https://unpkg.com/htmx.org@2.0.0"></script>
<script src="{{ url_for('static', filename='js/scripts.js') }}" defer></script>
</head>
<body class="bg-gray-100">
<nav class="bg-gray-800 p-4 text-white">
<div class="container mx-auto">
<a href="{{ url_for('index') }}" class="mr-4">Home</a>
{% if current_user.is_authenticated %}
<a href="{{ url_for('logout') }}" class="mr-4">Logout</a>
{% else %}
<a href="{{ url_for('login') }}" class="mr-4">Login</a>
<a href="{{ url_for('register') }}">Register</a>
{% endif %}
</div>
</nav>
<div class="container mx-auto py-8">
{% with messages = get_flashed_messages(with_categories=true) %}
{% if messages %}
{% for category, message in messages %}
<div class="alert alert-{{ category }} mb-4 p-4 border-l-4 border-{{ category }}-400 bg-{{ category }}-100 text-{{ category }}-700">{{ message }}</div>
{% endfor %}
{% endif %}
{% endwith %}
{% block content %}{% endblock %}
</div>
</body>
</html>
```
---
**3.2 Index Template**
In templates/index.html, create the index page to list all posts:
```html
{% extends "base.html" %}
{% block content %}
<h1 class="text-3xl font-bold text-center mb-8">Blog Posts</h1>
{% if current_user.is_authenticated %}
<form hx-post="{{ url_for('create') }}" hx-target="#posts" hx-swap="beforeend" method="post" class="mb-8 p-4 bg-white shadow-md rounded">
<input type="text" name="title" placeholder="Title" required class="w-full p-2 mb-4 border border-gray-300 rounded">
<textarea name="content" placeholder="Content" required class="w-full p-2 mb-4 border border-gray-300 rounded"></textarea>
<button type="submit" class="bg-blue-500 text-white py-2 px-4 rounded hover:bg-blue-600">Create</button>
</form>
{% endif %}
<div id="posts" class="space-y-4">
{% for post in posts %}
{% include 'post_snippet.html' %}
{% endfor %}
</div>
{% endblock %}
```
---
**3.3 Post Template**
In templates/post.html, create the template for displaying a single post:
```html
{% extends "base.html" %}
{% block content %}
<div id="post-{{ post.id }}" class="post bg-white p-8 shadow-md rounded">
<h1 class="text-2xl font-bold mb-4">{{ post.title }}</h1>
<p class="mb-4">{{ post.content }}</p>
{% if current_user.is_authenticated and post.user_id == current_user.id %}
<div class="post-buttons flex space-x-4">
<a href="{{ url_for('edit', post_id=post.id) }}"
class="bg-blue-500 text-white py-2 px-4 rounded hover:bg-blue-600">Edit</a>
<form action="{{ url_for('delete', post_id=post.id) }}" hx-delete="{{ url_for('delete', post_id=post.id) }}"
hx-target="#post-{{ post.id }}" hx-swap="outerHTML" method="post"
class="delete-form inline-block">
<input type="hidden" name="_method" value="DELETE">
<button type="submit" class="bg-red-500 text-white py-2 px-4 rounded hover:bg-red-600">Delete</button>
</form>
</div>
{% endif %}
</div>
{% endblock %}
```
---
3.4 Post Snippet Template
In templates/post_snippet.html, create a snippet for individual posts to be used for dynamic updates:
```html
<div class="post bg-white p-4 shadow-md rounded" id="post-{{ post.id }}">
<h2 class="text-xl font-bold"><a href="{{ url_for('post', post_id=post.id) }}" class="hover:underline">{{ post.title }}</a></h2>
<p class="mb-4">{{ post.content }}</p>
{% if current_user.is_authenticated and post.user_id == current_user.id %}
    <div class="post-buttons flex space-x-4">
        <a href="{{ url_for('edit', post_id=post.id) }}" class="bg-blue-500 text-white py-2 px-4 rounded hover:bg-blue-600">Edit</a>
        <form action="{{ url_for('delete', post_id=post.id) }}" hx-delete="{{ url_for('delete', post_id=post.id) }}" hx-target="#post-{{ post.id }}" hx-swap="outerHTML" method="post" class="delete-form inline-block">
            <input type="hidden" name="_method" value="DELETE">
            <button type="submit" class="bg-red-500 text-white py-2 px-4 rounded hover:bg-red-600">Delete</button>
        </form>
    </div>
{% endif %}
</div>
```
---
**3.5 Edit Post Template**
In templates/edit_post.html, create the template for editing a post:
```html
{% extends "base.html" %}
{% block content %}
<h1 class="text-3xl font-bold text-center mb-8">Edit Post</h1>
<form method="post" class="mb-8 p-4 bg-white shadow-md rounded">
<input type="text" name="title" value="{{ post.title }}" required class="w-full p-2 mb-4 border border-gray-300 rounded">
<textarea name="content" required class="w-full p-2 mb-4 border border-gray-300 rounded">{{ post.content }}</textarea>
<button type="submit" class="bg-blue-500 text-white py-2 px-4 rounded hover:bg-blue-600">Save</button>
</form>
{% endblock %}
```
---
**3.6 Login Template**
In templates/login.html, create the template for user login:
```html
{% extends "base.html" %}
{% block content %}
<h1 class="text-3xl font-bold text-center mb-8">Login</h1>
<form method="post" class="mb-8 p-4 bg-white shadow-md rounded">
<input type="email" name="email" placeholder="Email" required class="w-full p-2 mb-4 border border-gray-300 rounded">
<input type="password" name="password" placeholder="Password" required class="w-full p-2 mb-4 border border-gray-300 rounded">
<button type="submit" class="bg-blue-500 text-white py-2 px-4 rounded hover:bg-blue-600">Login</button>
</form>
{% endblock %}
```
---
**3.7 Register Template**
In templates/register.html, create the template for user registration:
```html
{% extends "base.html" %}
{% block content %}
<h1 class="text-3xl font-bold text-center mb-8">Register</h1>
<form method="post" class="mb-8 p-4 bg-white shadow-md rounded">
<input type="text" name="username" placeholder="Username" required class="w-full p-2 mb-4 border border-gray-300 rounded">
<input type="email" name="email" placeholder="Email" required class="w-full p-2 mb-4 border border-gray-300 rounded">
<input type="password" name="password" placeholder="Password" required class="w-full p-2 mb-4 border border-gray-300 rounded">
<button type="submit" class="bg-blue-500 text-white py-2 px-4 rounded hover:bg-blue-600">Register</button>
</form>
{% endblock %}
```
---
## 🔥 Fired up to learn HTMX in more depth? This is a MUST read for leveling up. 🆙
## [Hypermedia Systems Kindle Edition](https://amzn.to/3XKDBeI)
---
## Step 4: Add Enhanced Debugging for HTMX
Create a simple JavaScript file (scripts.js) to handle HTMX events for better debugging:
```javascript
/* static/js/scripts.js */
document.addEventListener('htmx:afterRequest', (event) => {
  console.log('HTMX request completed:', event.detail);
});

document.addEventListener('htmx:error', (event) => {
  console.error('HTMX request error:', event.detail);
});
```
---
## Step 5: Testing Your Application
Now that you have set up the backend, created the HTML templates, and added HTMX for interactivity, it’s time to test your application. Make sure your Flask server is running by using the command:
```bash
flask --debug run
```
*Open your web browser and navigate to http://127.0.0.1:5000/. You should see your blog’s home page, where you can create, view, edit, and delete blog posts.*
---
**Create a Post**
- Enter a title and content in the form at the top of the page.
- Click the “Create” button. The new post should appear instantly on the page without a full page reload.
---
**View a Post**
- Click on the title of a post to view its full content on a separate page.
---
**Edit a Post**
- Click the “Edit” link next to a post.
- Modify the title or content and click “Save”. You should be redirected to the updated post’s page.
- Click the “Home” link at the top to return to the home page.
---
**Delete a Post**
- Click the “Delete” button next to a post. The post should be removed instantly without a full page reload.
---
## Conclusion
In this comprehensive tutorial, you have learned how to create a dynamic blog application using Flask, HTMX, and TailwindCSS. Here’s a quick recap of what we’ve covered:
- Setting up a Flask environment and project structure
- Creating and configuring a Flask application
- Defining models with SQLAlchemy
- Creating HTML templates for your blog
- Adding HTMX attributes for dynamic form submission and deletion
- Styling your application with TailwindCSS
- Adding user authentication and authorization
By following these steps, you can build modern web applications with enhanced interactivity without the need for complex single-page application frameworks. HTMX allows you to keep your workflow simple and productive while providing a smooth user experience.
---
## Further Reading and Resources
To deepen your understanding and keep up with the latest trends and best practices in web development, here are some resources you might find helpful:
- [HTMX Documentation](https://htmx.org/)
- [Flask Documentation](https://flask.palletsprojects.com/en/3.0.x/)
- [SQLAlchemy Documentation](https://docs.sqlalchemy.org/en/20/)
- [TailwindCSS Documentation](https://tailwindcss.com/docs)
Hope you enjoyed this more robust version of the app. The sky is the limit; keep improving it, and even turn it into a project template that could work for real-world projects. Happy coding!
---
## 🔥 If you enjoyed this article, please come join our hacker community at [DevToys.io](https://devtoys.io) and sign up for our newsletter to stay connected and keep up with the latest news, trends, and gadgets! 👽
| 3a5abi |
1,909,256 | Automating User and Group Management in Linux with Bash | Managing user accounts and permissions on a Linux system can be a daunting task, especially in a... | 0 | 2024-07-02T17:57:34 | https://dev.to/mary_12/automating-user-and-group-management-in-linux-with-bash-4i47 | linux, bashscripting, devops | Managing user accounts and permissions on a Linux system can be a daunting task, especially in a dynamic environment with frequent new hires. To simplify and automate this process, we have created the create_users.sh script. This Bash script reads user and group information from a file, creates users and groups, sets up home directories, generates random passwords, and logs all actions.
In this article, we'll walk you through how the script works and how to use it effectively. This guide is particularly useful for SysOps engineers looking to streamline user management in their systems.
For those interested in pursuing or learning more about technical roles and internships, the HNG Internship Program offers an excellent opportunity to enhance your skills and experience.
## Understanding the create_users.sh Script
**Key Features**
- Automated User Creation: Reads user information from a text file and creates users accordingly.
- Group Management: Ensures each user has a personal group with the same name and adds users to specified additional groups.
- Home Directory Setup: Creates and configures home directories with secure permissions.
- Password Generation: Generates random passwords for new users and stores them securely.
- Logging: Logs all actions and errors for easy monitoring and troubleshooting.
**Input File Format**
The input file should list users and their groups in the following format:
```
username; group1,group2,group3
```
1. Each line represents one user.
2. The username and groups are separated by a semicolon `;`.
3. Groups are optional and are separated by commas `,`.
Example Input:
```
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
```
In this example:
- light will be created with additional groups sudo, dev, and www-data.
- idimma will be added to the sudo group.
- mayowa will join the dev and www-data groups.
**Script Details**
Here is the complete create_users.sh script:
```
#!/bin/bash

# File paths
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"

# Ensure /var/secure directory exists
if ! mkdir -p /var/secure 2>/dev/null; then
    echo "Failed to create /var/secure directory. Permission denied."
    exit 1
fi
chmod 700 /var/secure

# Clear log and password files
> "$LOG_FILE" 2>/dev/null || { echo "Failed to create log file $LOG_FILE. Permission denied."; exit 1; }
> "$PASSWORD_FILE" 2>/dev/null || { echo "Failed to create password file $PASSWORD_FILE. Permission denied."; exit 1; }
chmod 600 "$PASSWORD_FILE"

# Function to generate a random password
generate_password() {
    tr -dc A-Za-z0-9 </dev/urandom | head -c 12
}

# Check if input file is provided
if [ -z "$1" ]; then
    echo "Usage: $0 <user_list_file>"
    exit 1
fi

# Read the input file line by line
while IFS=';' read -r username groups; do
    username=$(echo "$username" | xargs)
    groups=$(echo "$groups" | xargs)

    if id "$username" &>/dev/null; then
        echo "User $username already exists. Skipping..." | tee -a "$LOG_FILE"
        continue
    fi

    # Create personal group with the same name as the user
    if ! getent group "$username" &>/dev/null; then
        if ! groupadd "$username" 2>/dev/null; then
            echo "Failed to create group $username. Permission denied." | tee -a "$LOG_FILE"
            continue
        fi
        echo "Group $username created." | tee -a "$LOG_FILE"
    fi

    # Create the user with the personal group
    if ! useradd -m -g "$username" -s /bin/bash "$username" 2>/dev/null; then
        echo "Failed to create user $username. Permission denied." | tee -a "$LOG_FILE"
        continue
    fi
    echo "User $username created with home directory." | tee -a "$LOG_FILE"

    # Add user to additional groups
    IFS=',' read -ra ADDR <<< "$groups"
    for group in "${ADDR[@]}"; do
        group=$(echo "$group" | xargs)
        if ! getent group "$group" &>/dev/null; then
            if ! groupadd "$group" 2>/dev/null; then
                echo "Failed to create group $group. Permission denied." | tee -a "$LOG_FILE"
                continue
            fi
            echo "Group $group created." | tee -a "$LOG_FILE"
        fi
        if ! usermod -aG "$group" "$username" 2>/dev/null; then
            echo "Failed to add user $username to group $group. Permission denied." | tee -a "$LOG_FILE"
            continue
        fi
        echo "User $username added to group $group." | tee -a "$LOG_FILE"
    done

    # Set up home directory permissions
    chmod 700 "/home/$username"
    chown "$username:$username" "/home/$username"

    # Generate a random password and set it for the user
    password=$(generate_password)
    echo "$username:$password" | chpasswd 2>/dev/null || { echo "Failed to set password for user $username. Permission denied."; continue; }

    # Log the password securely
    echo "$username,$password" >> "$PASSWORD_FILE"
    echo "Password for user $username set." | tee -a "$LOG_FILE"
done < "$1"

echo "User creation process completed." | tee -a "$LOG_FILE"
```
**Step-by-Step Explanation**
**Initialization and Setup:**
The script starts by defining paths for the log and password files.
It ensures these files exist and sets secure permissions on the password file to make it readable only by the owner.
**Functions:**
1. generate_password: Generates a 12-character random password.
2. Logging output: Messages are piped through `tee -a`, so each one appears on both the console and in the log file.
3. Input Validation: The script checks if an input file is provided and verifies its existence and readability.
4. Processing Each User: The script reads each line of the input file, splits it into a username and groups, and trims any whitespace. It skips empty username entries and checks if the user already exists.
5. User and Group Creation: If the user does not exist, it creates a personal group with the same name. It creates the user with a home directory and assigns the personal group.
6. Home Directory Setup: The script sets secure permissions (700) on the user's home directory and changes ownership to the user.
7. Password Management: It generates a random password for each user.
8. Logging: All actions and errors are logged to /var/log/user_management.log.
**Using the Script**
To use the script, follow these steps:
- Save the Script: Save the provided script as create_users.sh.
- Make the Script Executable: `chmod +x create_users.sh`
- Prepare Your Input File: Create a text file with usernames and groups in the required format.
Example:
```
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
```
- Run the Script: `sudo ./create_users.sh input_file.txt`
**Conclusion**
The create_users.sh script provides an efficient and automated way to manage user accounts in Linux. It handles user and group creation, home directory setup, password generation, and logs all actions for easy monitoring and troubleshooting. This script is a valuable tool for SysOps engineers looking to streamline their user management processes.
## Learn about HNG internship
This article was written for Task 2 in the DevOps track of the HNG Internship. For those interested in further developing their skills and gaining real-world experience, consider exploring the HNG Internship Program at https://hng.tech/internship. This program offers a range of opportunities to work on challenging projects and collaborate with professionals in the field.
For organizations looking to hire top talent from the HNG Internship, you can find more information on https://hng.tech/hire. | mary_12 |
1,909,255 | Widget state synchronisation across tabs | Check out our latest blog on implementing widget state synchronization in the neetoChat widget! The... | 0 | 2024-07-02T17:57:10 | https://dev.to/tsudhishnair/widget-state-synchronisation-across-tabs-55p4 | webdev, javascript, blog, programming | Check out our latest blog on implementing widget state synchronization in the neetoChat widget!
The neetoChat widget is the end-user-facing companion of our neetoChat application. By embedding it on their website, neetoChat users can easily interact with their customers in real time.
Read more here: https://www.bigbinary.com/blog/widget-synchronisation
| tsudhishnair |
1,909,245 | Does flutter dart can make significant changes in mobile development? | A post by Aadarsh Kunwar | 0 | 2024-07-02T17:45:06 | https://dev.to/aadarshk7/does-flutter-dart-can-make-significant-changes-in-mobile-development-27bb | flutter, androiddevelopment, dart, android | aadarshk7 | |
1,909,252 | Dramacool | Watch Asian Dramas and Shows in HD (2024) | Watch dramacool latest Asian dramas and movies with English subtitles. Vast library of Korean,... | 0 | 2024-07-02T17:55:42 | https://dev.to/hitler/dramacool-watch-asian-dramas-and-shows-in-hd-2024-1k6g | Watch [dramacool](https://dramacool.org.za/) latest Asian dramas and movies with English subtitles. Vast library of Korean, Chinese, Japanese content. Lightning-fast updates.
[https://medium.com/@dramacoolgadha/dramacool-watch-asian-dramas-and-shows-in-hd-2024-f2559cda4334](https://medium.com/@dramacoolgadha/dramacool-watch-asian-dramas-and-shows-in-hd-2024-f2559cda4334)
[https://www.patreon.com/posts/107335428](https://www.patreon.com/posts/107335428) | hitler | |
1,909,251 | Supercharge Your Node.js App with Blazing Fast In-Memory Caching! | Introduction If you’re experiencing latency issues in your Node.js application,... | 0 | 2024-07-02T17:55:00 | https://dev.to/shu12388y/supercharge-your-nodejs-app-with-blazing-fast-in-memory-caching-3jec | webdev, javascript, node, database |
## Introduction
If you’re experiencing latency issues in your Node.js application, implementing in-memory caching can be a game-changer. This guide will walk you through how to set up in-memory caching using node-cache in an Express.js server to significantly improve your API performance.
## The Challenge
We recently saw a rapid increase in our backend project, reaching over 1000 active users in just 5 days. However, this spike led to latency issues, with API calls averaging around 200ms. Initially, we used Redis for caching, but latency remained between 180-200ms because our Redis instance and main server were both in the US-West region, while our users were primarily in India.
To address this, we implemented in-memory caching on the server, reducing latency by approximately 60%.
Here's how to implement in-memory caching using Node.js and Express.js:
**- Install the node-cache package**
`npm install node-cache`
**- Set up your cache**
```
const NodeCache = require('node-cache');
const myCache = new NodeCache({ stdTTL: 100, checkperiod: 120 });
// stdTTL is the default time-to-live for cache entries (in seconds)
// checkperiod is the interval for automatic cache cleaning (in seconds)
```
**- Implement it in the Express.js server**
```
const express = require('express');
const app = express();

// Example data fetching function
async function fetchDataFromDatabase() {
  // Simulate a database call
  return { data: 'This is some data from the database.' };
}

app.get('/data', async (req, res) => {
  const cacheKey = 'myDataKey';
  const cachedData = myCache.get(cacheKey);

  if (cachedData) {
    return res.json({ source: 'cache', data: cachedData });
  }

  const data = await fetchDataFromDatabase();
  myCache.set(cacheKey, data);
  res.json({ source: 'database', data });
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
```
**- Manage your cache**
```
// Get cache statistics
console.log(myCache.getStats());
// Manually delete a cache entry
myCache.del('myDataKey');
// Flush the entire cache
myCache.flushAll();
```
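Conceptually, node-cache is close to a `Map` of values paired with expiry timestamps. The class below is not node-cache's actual implementation — just a minimal sketch of the TTL idea, with an injectable clock (`now`) so expiry is easy to demonstrate without waiting in real time:

```javascript
// Minimal TTL cache sketch (illustrative only, not the node-cache source).
class TinyCache {
  constructor({ stdTTL = 100, now = () => Date.now() } = {}) {
    this.stdTTL = stdTTL; // default time-to-live in seconds
    this.now = now;       // injectable clock, handy for testing
    this.store = new Map();
  }
  set(key, value, ttl = this.stdTTL) {
    this.store.set(key, { value, expiresAt: this.now() + ttl * 1000 });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) { // lazy eviction on read
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
  flushAll() {
    this.store.clear();
  }
}

let fakeTime = 0;
const tiny = new TinyCache({ stdTTL: 100, now: () => fakeTime });
tiny.set("myDataKey", { data: "hello" });
console.log(tiny.get("myDataKey")); // { data: 'hello' }
fakeTime = 101 * 1000;              // jump past the TTL
console.log(tiny.get("myDataKey")); // undefined
```

The real library layers more on top — the `checkperiod`-driven background cleanup and the `getStats()` counters shown above, for instance — so prefer node-cache itself in production.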
| shu12388y |
1,908,675 | What are Proxies in JavaScript | In JavaScript, a proxy is an object that acts as an intermediary between a target object and any code... | 0 | 2024-07-02T17:52:55 | https://dev.to/waelhabbal/what-are-proxies-in-javascript-453f | javascript, proxies, webdev | In JavaScript, a proxy is an object that acts as an intermediary between a target object and any code that interacts with the target object. A proxy can be used to intercept and manipulate the interactions between the target object and the code that uses it. Proxies are often used to provide additional functionality, validation, or logging capabilities to objects.
**The Proxy API in JavaScript**

Two built-ins make up the proxy API in JavaScript:

1. **Proxy Constructor**: The `Proxy` constructor is used to create a new proxy object. It takes two arguments: the target object that the proxy will wrap and a handler object that defines the behavior of the proxy.

2. **Reflect**: The `Reflect` object is a built-in object whose static methods mirror the proxy traps (for example, `Reflect.get` and `Reflect.set`), making it easy to forward intercepted operations to the target with default behavior.
To create a proxy in JavaScript, you need to use the `Proxy` constructor. Here is an example:
```
const target = {
  get Foo() {
    return "Hello";
  }
};

const handler = {
  get: (target, prop) => {
    if (prop === "Foo") {
      return "Hello World!";
    } else {
      return Reflect.get(target, prop);
    }
  }
};

const proxy = new Proxy(target, handler);

console.log(proxy.Foo); // Output: "Hello World!"
console.log(proxy.Bar); // Output: undefined
```
In this example, we create a target object with a single property `Foo` that returns the string "Hello". We then create a handler object that intercepts property access on the target object. When the property `Foo` is accessed, the handler returns "Hello World!". If any other property is accessed, the handler simply calls `Reflect.get` to retrieve the value from the target object.
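Since the handler above leans on `Reflect.get` to fall back to normal behavior, it is worth seeing that pattern on its own. The quick sketch below (a hypothetical `tracked` object, not part of the scenarios that follow) forwards every intercepted operation to the matching `Reflect` method, so the proxy behaves exactly like the target apart from the extra logging:

```javascript
// Each trap delegates to the matching Reflect method, preserving defaults.
const person = { name: "Ada" };

const tracked = new Proxy(person, {
  get(obj, prop, receiver) {
    console.log(`read: ${String(prop)}`);
    return Reflect.get(obj, prop, receiver); // default property lookup
  },
  set(obj, prop, value, receiver) {
    console.log(`write: ${String(prop)} = ${value}`);
    return Reflect.set(obj, prop, value, receiver); // default assignment
  },
  has(obj, prop) {
    return Reflect.has(obj, prop); // default `in` behavior
  }
});

tracked.age = 36;              // logs: write: age = 36
console.log(tracked.name);     // logs: read: name, then prints Ada
console.log("age" in tracked); // true
```

Delegating to `Reflect` keeps traps from accidentally breaking getters, the prototype chain, or the `this` binding, which is why most handlers end with it.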
**Real-World Scenarios for Using Proxies**
Proxies can be used in a variety of real-world scenarios. Here are a few examples:
1. **Data Validation**: You can use a proxy to validate data before it is stored or sent over the network. For example, you could create a proxy for an API endpoint that validates user input before sending it to the server.
```
const API = {
  postUser(user) {
    // send user data to server (simulated)
    return "User created successfully";
  }
};

// The `apply` trap only works when the target is a function, so to
// intercept a method call we return a wrapper from the `get` trap.
const validatedAPI = new Proxy(API, {
  get: (target, prop) => {
    if (prop === "postUser") {
      return (user) => {
        // validate user data before forwarding to the real method
        if (!user.name || !user.email) {
          throw new Error("Invalid user data");
        }
        return target.postUser(user);
      };
    }
    return Reflect.get(target, prop);
  }
});

try {
  validatedAPI.postUser({ name: "", email: "" });
} catch (e) {
  console.log(e.message); // "Invalid user data"
}
console.log(validatedAPI.postUser({ name: "John", email: "john@example.com" })); // "User created successfully"
```
2. **Caching**: You can use a proxy to cache frequently accessed data to improve performance. For example, you could create a proxy for an API endpoint that caches responses for a certain amount of time.
```
const API = {
  getUser(id) {
    // fetch user data from server (simulated)
    return { id, name: `User ${id}` };
  }
};

// The cache must live outside the trap; otherwise it would be
// recreated on every property access and nothing would be reused.
const cache = {};

const cachedAPI = new Proxy(API, {
  get: (target, prop) => {
    if (prop === "getUser") {
      return (id) => {
        if (cache[id]) {
          return cache[id]; // cache hit
        }
        const result = target.getUser(id); // cache miss
        cache[id] = result;
        return result;
      };
    }
    return Reflect.get(target, prop);
  }
});

cachedAPI.getUser(1); // fetches user data from server
cachedAPI.getUser(1); // returns cached user data
```
3. **Logging**: You can use a proxy to log interactions with an object or function. For example, you could create a proxy for an API endpoint that logs requests and responses.
```
const API = {
  postUser(user) {
    // send user data to server (simulated)
    return "User created successfully";
  }
};

// Wrap every method access so calls are logged on the way in and out.
const loggedAPI = new Proxy(API, {
  get: (target, prop) => {
    if (typeof target[prop] !== "function") {
      return Reflect.get(target, prop);
    }
    return (...args) => {
      console.log(`Request: ${prop}`, args[0]);
      const result = target[prop](...args);
      console.log(`Response: ${result}`);
      return result;
    };
  }
});

loggedAPI.postUser({ name: "John", email: "john@example.com" });
// Output:
// Request: postUser { name: 'John', email: 'john@example.com' }
// Response: User created successfully
```
These are just a few examples of how you can use proxies in JavaScript. Proxies can be used in many other scenarios where you need to intercept and manipulate interactions with objects or functions.
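One trap not shown above is `apply`, which fires when the proxied target is itself a function (it does not intercept method calls on a plain object). A small sketch, using a hypothetical `add` function:

```javascript
// The apply trap fires when the proxy itself is invoked as a function.
function add(a, b) {
  return a + b;
}

const timedAdd = new Proxy(add, {
  apply(target, thisArg, args) {
    const start = Date.now();
    const result = Reflect.apply(target, thisArg, args); // call the real function
    console.log(`add(${args.join(", ")}) took ${Date.now() - start}ms`);
    return result;
  }
});

console.log(timedAdd(2, 3)); // 5
```

This pattern is handy for timing or instrumenting standalone functions without touching their implementation.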
**Conclusion**
Proxies are a powerful tool in JavaScript that can be used to provide additional functionality, validation, or logging capabilities to objects. By creating custom proxies using the `Proxy` constructor or using built-in proxies like `Reflect`, you can write more robust and maintainable code. I hope this post has provided a comprehensive introduction to proxies in JavaScript and has inspired you to explore their possibilities further!
References:
- [Proxy - JavaScript | MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy) | waelhabbal |
1,909,248 | Next.js SEO Best Practices | Next.js is a powerful and versatile framework for building React applications, but when it comes to... | 0 | 2024-07-02T17:50:28 | https://dev.to/zinotrust/nextjs-seo-best-practices-2me9 | Next.js is a powerful and versatile framework for building React applications, but when it comes to SEO, there are some best practices you should follow to ensure your site is search engine friendly.
Here are some Next.js SEO best practices to help you optimize your website for better search engine visibility:
1. **Optimize Meta Tags**:
Ensure each page has unique and descriptive title tags, meta descriptions, and meta keywords. Use relevant keywords to improve ranking.
2. **Create SEO-Friendly URLs**:
Use descriptive and keyword-rich URLs for each page to make it easier for search engines to understand the content.
3. **Optimize Images**:
Use descriptive file names and alt text for images to improve accessibility and provide context to search engines.
4. **Enable Server-Side Rendering**:
Next.js allows for server-side rendering, which can improve SEO by delivering fully rendered pages to search engines.
5. **Utilize Schema Markup**:
Implement structured data using JSON-LD to provide search engines with more information about your content.
6. **Mobile Optimization**:
Ensure your site is responsive and optimized for mobile devices, as mobile-friendliness is a key ranking factor.
7. **Page Speed**:
Optimize your site's performance to improve loading times, as faster websites tend to rank higher in search results.
8. **Internal Linking**:
Use internal links between related pages to help search engines discover and index content more effectively.
9. **Monitor and Analyze**:
Regularly monitor your site's performance using tools like Google Analytics to track SEO metrics and make data-driven decisions.
10. **Create Quality Content**:
Ultimately, creating high-quality and relevant content is key to successful SEO, so focus on providing value to your audience.
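As a concrete illustration of item 5, structured data is ultimately just a JSON-LD object embedded in a `<script type="application/ld+json">` tag. The helper below is a hedged sketch — the field values are placeholders, not a real page:

```javascript
// Build a JSON-LD "Article" object for schema markup (sketch only).
function articleJsonLd({ title, author, datePublished, url }) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: title,
    author: { "@type": "Person", name: author },
    datePublished,
    url
  };
}

const jsonLd = articleJsonLd({
  title: "Next.js SEO Best Practices",
  author: "Jane Doe",
  datePublished: "2024-07-02",
  url: "https://example.com/nextjs-seo"
});

// In a Next.js page you would render this inside a script tag, e.g.:
// <script type="application/ld+json"
//         dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }} />
console.log(JSON.stringify(jsonLd, null, 2));
```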
By following these Next.js SEO best practices, you can improve your website's visibility and attract more organic traffic from search engines. | zinotrust | |
1,909,246 | MongoDB or Firebase | A post by Aadarsh Kunwar | 0 | 2024-07-02T17:45:46 | https://dev.to/aadarshk7/mongodb-or-firebase-5d8i | firebase, mongodb | aadarshk7 | |
1,909,243 | **JAMstack: Building Websites Like a Mandalorian!**🤖 | Hello, Chiquis! 👋🏻 Get ready to build websites with JAMstack, the technology of the future. In the... | 0 | 2024-07-02T17:41:22 | https://dev.to/orlidev/jamstack-construyendo-sitios-web-como-un-mandaloriano-2nif | webdev, jamstack, beginners, tutorial | Hello, Chiquis! 👋🏻 Get ready to build websites with JAMstack, the technology of the future. In the cosmos of web development, where lines of code are stars and websites are planets, a powerful force emerges: JAMstack. 🦾 Just as a Mandalorian navigates the galaxy with unique skills and tools, JAMstack offers a set of technologies for building fast, secure, and scalable websites. Get ready to push your websites to the top of the search results and attract a crowd of visitors!

Imagine Din Djarin 🧔🏻 as a JAMstack web developer. His starship, the Razor Crest, represents the JAMstack website. The ship's engines, powerful and reliable, are like JAMstack's speed and performance. Din's Mandalorian armor, tough and protective, symbolizes JAMstack's security. And just as Din always keeps his tools at hand, a JAMstack developer has a set of powerful tools and technologies for building amazing websites.
In the world of web development, JAMstack 🛰️ is emerging as a powerful force, ready to transform the way we build and experience websites. Just as a Mandalorian navigates the galaxy with unique skills and tools, JAMstack offers a set of technologies for building fast, secure, and scalable websites.
What is JAMstack? 🛸
Imagine a website built with Lego blocks: each block represents a specific technology, such as HTML, CSS, or JavaScript. JAMstack uses these blocks to create static, pre-rendered websites delivered through a content delivery network (CDN). This means sites load instantly, without waiting for a server to process each request. It is a modern approach to building web applications and static sites, based on three main pillars:
+ JavaScript: We use JavaScript to add interactivity and dynamism to our static files. It handles user interaction in the browser.
+ APIs: Data is fetched through APIs, which is what keeps the application dynamic. APIs provide the data and functionality the website needs.
+ Markup: Content is served as static files (HTML, CSS, etc.), pre-rendered into static HTML.

Now, how does this relate to "The Mandalorian"? 🌌 Imagine that JAMstack is like the helmet of Mando, the faceless bounty hunter of the series:
+ JavaScript (J): Like Mando's helmet, JavaScript is the visible, active part of the website. It controls interactions and animations, like Mando's visor lighting up when it detects danger.
+ APIs (A): APIs are like Mando's contacts all over the galaxy. When he needs information (such as a target's location or the weather forecast on Tatooine), he reaches out to his contacts. In the same way, APIs supply the website with dynamic data, such as product information, comments, or weather forecasts.
+ Markup (M): Markup is like Mando's armor. It is pre-rendered into static HTML files before reaching the browser. This makes the site fast and secure, like the armor that protects Mando from blaster fire.
Why use JAMstack? 🌠
This web architecture approach is becoming more and more popular, like the growing fame of Mandalorians across the galaxy.
- Fast, secure websites: Imagine sites that load instantly and reliably, like a Mandalorian starship flying fast and safe. JAMstack sites are incredibly fast, loading in milliseconds, which improves the user experience and boosts SEO. Being static, they are also less vulnerable to cyberattacks, like Mandalorian armor protecting its owner.
- Easy scalability: JAMstack adapts seamlessly to higher traffic, ideal for sites with visitor spikes, like a Mandalorian army that keeps growing.
- Improved developer experience: JAMstack offers a simpler, more pleasant development environment, with tools and workflows that make developers' work easier, like Mandalorian training sharpening combat skills.

Advantages of JAMstack, with elements from "The Mandalorian" 👽
- Performance (Razor Crest): Imagine that the Razor Crest, Mando's ship, is like a JAMstack site. It is fast and agile, like Mando dodging blaster fire. The Razor Crest serves pre-rendered static content from a CDN, just like a JAMstack site, reducing latency and improving the user experience.
- Security (Beskar Armor): Mando's beskar armor is indestructible, just like security on a JAMstack site. By not depending on real-time databases, JAMstack sites are less vulnerable to attacks. Content updates also happen in a controlled way, like Mando repairing his armor.
- Scalability (Rebel Alliance): The Rebel Alliance is an example of scalability in JAMstack. You can add more CDN instances or backend services without hurting performance, just as the Alliance recruits more rebels to face the Empire.
- Easy maintenance (Baby Yoda): Baby Yoda is adorable and easy to care for, just like a JAMstack site. Working with static files keeps maintenance simple. There are no complicated servers to manage, like when Mando looks after Grogu.
- Vendor independence (Independent bounty hunters): Mando is an independent bounty hunter, and JAMstack sites are independent too. You are not tied to a specific vendor. You can swap backend services or CDNs without affecting functionality, like Mando choosing his own jobs.
- Better SEO (Tracking targets): Mando always finds his target, and so do JAMstack sites. Search engines prefer static, well-structured content, so JAMstack sites tend to rank better in search results, like Mando following clues to find his quarry.

Examples of JAMstack, with elements from "The Mandalorian" ☄️
- Paths (built with Gatsby): Imagine Paths is like the Razor Crest, Mando's ship. Like the Razor Crest, Paths is fast and agile. It serves pre-rendered static content from a CDN, reducing latency and improving the user experience.
- Your own JAMstack site (plain HTML, no JavaScript): This is like Mando's helmet. Sometimes a JAMstack site can be as simple as a static HTML file with no JavaScript. It does nothing dynamic, but it is still fast and secure.
- Other examples: Imagine other JAMstack sites as independent bounty hunters across the digital galaxy. Each has its own unique approach and uses tools like React, Vue, or Svelte for interactivity and dynamism.
Sitios web JAMstack ✨
+ The New York Times: Utiliza JAMstack para su sitio web principal, logrando una velocidad y rendimiento excepcionales.
+ Shopify: Implementa JAMstack para mejorar la experiencia de compra en su plataforma de comercio electrónico.
+ Snipcart: Una solución de comercio electrónico que demuestra el poder de JAMstack para el comercio minorista en línea.
+ Hulu: Ofrece miles de películas y series de televisión bajo demanda mediante una suscripción mensual.
+ Ticketmaster: Un sitio expansivo y multiusuario construido con Next.js.
Estos ejemplos abarcan diversas industrias y demuestran cómo JAMstack puede adaptarse a diferentes casos de uso. 🚀
Ahora, imagina que JAMstack es como The Mandalorian 👨🏻🚀 en su búsqueda de tesoros en la galaxia. Aquí tienes algunos ejemplos:
Baby Yoda API Fetch (JavaScript): 🗡️
```javascript
async function fetchBabyYoda() {
  try {
    const response = await fetch('https://api.babyyoda.com');
    const data = await response.json();
    console.log('Baby Yoda:', data.name);
  } catch (error) {
    console.error('Error fetching Baby Yoda:', error);
  }
}
```

Static Site Generation (SSG) with Gatsby (Markup): 💫 Gatsby builds static files at compile time, as if we were forging our beskar armor. We then serve them from a CDN.
```jsx
// src/pages/index.js
import React from 'react';

const HomePage = () => (
  <div>
    <h1>The Mandalorian JAMstack</h1>
    <p>This site is faster than the Millennium Falcon.</p>
  </div>
);

export default HomePage;
```
API Gateway (API): Imagine the Razor Crest (Mando's ship) communicating with different planets (APIs) to gather information about bounties and missions.
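To make the analogy concrete, here is a toy gateway in JavaScript. The routes and payloads are invented for the example — this is a sketch of the routing idea, not a real API:

```javascript
// A toy API gateway: routes a path to the matching backend handler,
// the way the Razor Crest contacts different planets for information.
const backends = {
  '/bounties': () => ({ target: 'Grogu', reward: 'beskar' }),
  '/missions': () => ({ planet: 'Nevarro' }),
};

function gateway(path) {
  const handler = backends[path];
  if (!handler) {
    return { status: 404 }; // unknown planet
  }
  return { status: 200, body: handler() };
}

console.log(gateway('/bounties').body.target); // Grogu
console.log(gateway('/hyperspace').status);    // 404
```

A real JAMstack site would make the same kind of call with `fetch` against a hosted gateway; the point is that each backend stays independent behind one entry point.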
With JAMstack, you can: 🔫
- Rank your websites at the top of search results: Because they are fast and secure, JAMstack helps improve your SEO and attract more organic visitors.
- Cut hosting costs: JAMstack websites are generally cheaper to host than traditional websites.
- Improve the user experience: JAMstack's speed and performance deliver an exceptional user experience.
In short, JAMstack is like building our own starship out of modular parts, making sure it is fast, secure, and efficient. May the Force (and JAMstack) be with you! 🚀
Conclusion 💥
JAMstack is transforming the web development landscape, offering a modern, efficient alternative to traditional methods. Just as a Mandalorian is always ready for any challenge, JAMstack provides the tools and technologies needed to build fast, secure, scalable websites. If you're looking to build the next great website, JAMstack is the way. May the Force be with you on your JAMstack journey!
🚀 Did you like it? Share your thoughts.
For the full article, visit: https://lnkd.in/ewtCN2Mn
https://lnkd.in/eAjM_Smy 👩💻 https://lnkd.in/eKvu-BHe
https://dev.to/orlidev Don't miss it!
References:
Images created with: Copilot (microsoft.com)
#PorUnMillonDeAmigos #LinkedIn #Hiring #DesarrolloDeSoftware #Programacion #Networking #Tecnologia #Empleo #JAMstack


| orlidev |
1,909,242 | HTML Lists tags in depth | HTML Lists: A Comprehensive Guide Lists are fundamental elements in HTML that help... | 0 | 2024-07-02T17:41:09 | https://dev.to/ridoy_hasan/html-lists-tags-in-depth-37ni | webdev, html, beginners, learning | ### HTML Lists: A Comprehensive Guide
Lists are fundamental elements in HTML that help organize content in a structured manner. HTML supports two main types of lists: ordered lists and unordered lists. In this article, we'll explore how to use these lists effectively, along with examples and best practices.
#### 1. Unordered Lists
Unordered lists are used to group a set of related items in no particular order. They are defined using the `<ul>` tag, and each item within the list is defined using the `<li>` tag.
**Example: Unordered List**
```html
<!DOCTYPE html>
<html>
<head>
<title>Unordered List Example</title>
</head>
<body>
<h1>My Favorite Fruits</h1>
<ul>
<li>Apple</li>
<li>Banana</li>
<li>Cherry</li>
</ul>
</body>
</html>
```
**Output:**
- Apple
- Banana
- Cherry
In this example, the unordered list contains three list items: Apple, Banana, and Cherry. The default style for unordered lists is a bullet point for each item.
#### 2. Ordered Lists
Ordered lists are used to group a set of related items in a specific order. They are defined using the `<ol>` tag, and each item within the list is defined using the `<li>` tag.
**Example: Ordered List**
```html
<!DOCTYPE html>
<html>
<head>
<title>Ordered List Example</title>
</head>
<body>
<h1>Steps to Make a Sandwich</h1>
<ol>
<li>Get two slices of bread</li>
<li>Spread butter on the bread</li>
<li>Add your favorite fillings</li>
<li>Put the slices together</li>
</ol>
</body>
</html>
```
**Output:**
1. Get two slices of bread
2. Spread butter on the bread
3. Add your favorite fillings
4. Put the slices together
In this example, the ordered list outlines the steps to make a sandwich, with each step numbered sequentially.
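If you need a list that starts somewhere other than 1, or that counts with letters, the `<ol>` element also accepts `start` and `type` attributes (not used in the example above):

```html
<ol type="a" start="3">
  <li>Add your favorite fillings</li>
  <li>Put the slices together</li>
</ol>
```

Here the items render as c. and d., because `type="a"` switches the markers to lowercase letters and `start="3"` begins counting at the third one.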
#### 3. Nested Lists
Lists can be nested inside each other to create a hierarchy of items. This is useful for representing complex structures.
**Example: Nested Lists**
```html
<!DOCTYPE html>
<html>
<head>
<title>Nested List Example</title>
</head>
<body>
<h1>My Daily Tasks</h1>
<ul>
<li>Morning
<ul>
<li>Exercise</li>
<li>Breakfast</li>
</ul>
</li>
<li>Afternoon
<ul>
<li>Work</li>
<li>Lunch</li>
</ul>
</li>
<li>Evening
<ul>
<li>Relax</li>
<li>Dinner</li>
</ul>
</li>
</ul>
</body>
</html>
```
**Output:**
- Morning
- Exercise
- Breakfast
- Afternoon
- Work
- Lunch
- Evening
- Relax
- Dinner
In this example, each main task (Morning, Afternoon, Evening) contains a nested list of subtasks.
#### 4. Description Lists
Description lists are used to define terms and their descriptions. They are defined using the `<dl>` tag, with each term wrapped in a `<dt>` tag and each description in a `<dd>` tag.
**Example: Description List**
```html
<!DOCTYPE html>
<html>
<head>
<title>Description List Example</title>
</head>
<body>
<h1>HTML Tags</h1>
<dl>
    <dt>&lt;ul&gt;</dt>
    <dd>Defines an unordered list.</dd>
    <dt>&lt;ol&gt;</dt>
    <dd>Defines an ordered list.</dd>
    <dt>&lt;li&gt;</dt>
    <dd>Defines a list item.</dd>
</dl>
</body>
</html>
```
**Output:**
- `<ul>`
Defines an unordered list.
- `<ol>`
Defines an ordered list.
- `<li>`
Defines a list item.
In this example, the description list explains the purpose of the `<ul>`, `<ol>`, and `<li>` tags.
#### Benefits of Using HTML Lists
- **Organization**: Lists provide a clear structure to your content, making it easier to follow.
- **Readability**: They break up text and make information easier to digest.
- **Accessibility**: Screen readers can easily navigate through lists, improving the user experience for visually impaired users.
### Conclusion
Understanding how to use HTML lists is essential for organizing and presenting content effectively. Whether you're using ordered lists for steps, unordered lists for items, nested lists for complex structures, or description lists for definitions, mastering these elements will enhance the readability and usability of your web pages.
Follow me on linkedin- https://www.linkedin.com/in/ridoy-hasan7 | ridoy_hasan |
1,909,241 | How to Add Typing Effects to Your React App with React Typical | Introduction Have you always wondered how to create a typing effect on your website? I too was... | 0 | 2024-07-02T17:40:09 | https://dev.to/code_duchess/how-to-add-typing-effects-to-your-react-app-with-react-typical-55o | webdev, javascript, beginners, react | **Introduction**
Have you always wondered how to create a typing effect on your website? I too was wondering. I have learned to implement a dynamic typewriting effect in React using the `react-typical` library. This effect can switch up your website design, especially in your hero section which plays a vital role in engaging your visitors.
**Prerequisites**
Before we begin, make sure you have basic knowledge of React and have Node installed in your system. You'll also need a React project set-up. If you don't have one yet, you can create it using Create React App. I will also be making use of [Tailwind CSS](https://tailwindcss.com/docs/installation) for styling.
## Step 1: Set Up Your React Project
If you don’t already have a React project, you can set one up quickly using Create React App:
Using **npx**
```
npx create-react-app my-app
```
or if you're familiar with yarn.
Using **yarn**
```
yarn create react-app my-app
```
After your React app has been installed, cd into your project using
```
cd my-app
```
## Step 2: Install react-typical
Using **npm**
```
npm install react-typical --save
```
Using **yarn**
```
yarn add react-typical
```
## Step 3: Install Tailwind CSS
Install tailwindcss via npm or yarn, and create your tailwind.config.js file.
Using **npm**
```
npm install -D tailwindcss
npx tailwindcss init
```
Using **yarn**
```
yarn add tailwindcss --dev
npx tailwindcss init
```
## Step 4: Configure your template paths
Add the paths to all of your template files in your tailwind.config.js file.
```
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: ["./src/**/*.{html,js}"],
  theme: {
    extend: {},
  },
  plugins: [],
}
```
This step configures Tailwind to scan your specified files (like HTML, JavaScript, or React files) for class names. By doing this, Tailwind can generate the necessary styles based on the classes used in those files, which helps in optimizing the CSS output, reducing file size, and ensuring that only the styles you actually use are included in the final build.
## Step 5: Add the Tailwind directives to your CSS
Add the @tailwind directives for each of Tailwind’s layers to your main CSS file.
```
@tailwind base;
@tailwind components;
@tailwind utilities;
```
This step integrates Tailwind CSS into your project. It connects Tailwind’s base, component, and utility styles to your CSS file, which is then linked to your `index.js`. This setup allows you to use Tailwind CSS classes flexibly throughout your code.
## Step 6: Develop a React Component for Typing Animation
In your `src` folder create a `component` folder which would be used in storing your components.
In your `component` folder, create a file and call it `TypingEffect.js`. Import `React` and `Typical` library for creating typing animations.
```
import React from 'react';
import Typical from 'react-typical';
```
Then, add the following code to define the TypingEffect component:
```
import React from 'react';
import Typical from 'react-typical';

const TypingEffect = () => {
  return (
    <h1 className="text-3xl lg:text-6xl">
      <Typical
        steps={[
          'Unlock the Future of Digital Artistry', 2000,
          'Unlock the Future of Digital Collectibles', 2000,
          'Unlock the Future of Digital Assets', 2000,
        ]}
        loop={1}
        wrapper="span"
      />
    </h1>
  );
};

export default TypingEffect;
```
The `Typical` component is used to create a typing animation. The `steps` prop defines the text to be typed and the duration (2000 milliseconds) each text stays before changing. The `loop` prop is set to 1, meaning the animation will play once. The `wrapper` prop wraps the animated text in a `<span>` element.
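If you have many phrases, you can build the `steps` array programmatically instead of writing each text/delay pair by hand. A small helper sketch, using the same phrases as the example:

```javascript
// Pair every phrase with a 2000 ms pause, producing the flat
// [text, delay, text, delay, ...] shape that react-typical expects.
const phrases = [
  'Unlock the Future of Digital Artistry',
  'Unlock the Future of Digital Collectibles',
  'Unlock the Future of Digital Assets',
];

const steps = phrases.flatMap((phrase) => [phrase, 2000]);

console.log(steps.length); // 6
```

You would then pass `steps={steps}` to `<Typical>` exactly as before.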
## Step 7: Import and Use the TypingEffect Component
Open the `App.js` file in the src directory and import the `TypingEffect` component. Then, use it within the `App` component to display the typing effect header.
```
import TypingEffect from './component/TypingEffect';

function App() {
  return (
    <div>
      <TypingEffect />
    </div>
  );
}

export default App;
```
## Step 8: Run Your Application
Finally, start your React application to see the typing effect in action:
Using **npm**
```
npm start
```
Using **yarn**
```
yarn start
```
This is how it looks on my browser.

## Step 9: Customize the Styles
To elevate my code further, I incorporated custom animations into `TypingEffect.js`. I've documented the entire process in a detailed, step-by-step tutorial on my YouTube channel. You can watch it here: [React Typical Tutorial](https://youtu.be/B6I4E4CwH3g). Check it out for a comprehensive guide!
```
import React from 'react'
import Typical from 'react-typical'
import HeroImg from '../assets/img.jpeg'

const TypingEffect = () => {
  return (
    <div className='bg-gray-900 h-screen flex justify-between items-center p-16'>
      <h1 className='text-6xl text-white w-1/2'>
        <Typical
          steps={[
            'Unlock the Future of Digital Artistry', 2000,
            'Unlock the Future of Digital Collectibles', 2000,
            'Unlock the Future of Digital Assets', 2000,
          ]}
          loop={1}
          wrapper='span'
        />
      </h1>
      <img src={HeroImg} alt='Hero' className='w-1/3 rounded-lg' />
    </div>
  )
}

export default TypingEffect
```

**Conclusion**
The React Typing effect can boost visibility and engagement on your website by dynamically displaying key messages. It adds an interactive element that captures user attention, making your content more memorable and appealing. Implementing this feature can enhance user experience, highlight important information, and create a modern, professional look for your site. By following the steps outlined, you can easily integrate a typing effect into your React projects, ensuring your web presence stands out. To access my repository, simply follow this link [github repo](https://github.com/Uhiene/React-typing-effect)
**Resources**
[Tailwind Doc Installation](https://tailwindcss.com/docs/installation)
[React Typical](https://www.npmjs.com/package/react-typical)
| code_duchess |
1,909,240 | 10 Examples of Cyber-Physical Systems | Cyber-physical systems (CPS) are revolutionizing how we live, work, and interact with the world.... | 0 | 2024-07-02T17:39:55 | https://claroty.com/blog/10-examples-of-cyber-physical-systems | Cyber-physical systems (CPS) are revolutionizing how we live, work, and interact with the world. These complex systems, unifying hardware, software, and networks, are at the heart of numerous critical industries and applications, from industrial and manufacturing to healthcare and the public sector. They enhance interconnectivity between devices and systems across sectors in order to optimize efficiency and enhance productivity.
With the recent growth of CPS in every sector and industry, it's important to understand what they are, how they impact your business, and how to protect them. In many cases, more connectivity means a greater attack surface or exposure possibilities. For example, Claroty's Team82 demonstrated the existence of the ["CPS Blind Spot"](https://claroty.com/resources/reports/understanding-the-riskiest-exposures-in-cyber-physical-systems). Through their extensive research they discovered that 38% of OT and IoMT devices which contain high-risk exposure factors do not contain critical vulnerabilities. This means that operating from a traditional vulnerability management approach creates a severe blind spot for organizations as to their true risk posture.
The first step to protecting your CPS is to understand what these complex assets are and how they operate. Today, we're exploring the potential and implications of CPS, diving deep into examples across various industries in order to better understand the foundational role of CPS and why they must be protected.
What Are Cyber-Physical Systems?
--------------------------------
Simply put, [cyber-physical systems](https://claroty.com/blog/cyber-physical-systems-security-is-the-new-ot-security) connect the physical and cyber world. They are engineered platforms that seamlessly integrate computation, control, networking and analytics with the physical environment and its users. They hold transformative potential, affecting a wide variety of applications from medical devices to energy systems.
CPS is found in several critical industries. Manufacturing, for example, leverages CPS to drive automation and precision. Similarly, the healthcare industry uses them in advanced medical equipment. These incredibly important assets are fundamental to the operations of these industries, [differing from IT devices](https://claroty.com/blog/it-and-ot-cybersecurity-key-differences) because they exist in the physical world and contribute to physical processes, from machinery in an assembly line to devices used in surgery.
Because they straddle the cyber and physical worlds, CPS must be protected differently than IT devices. While IT devices often receive frequent software updates and can withstand both active and passive queries, the cyber-physical systems typically contain devices that aren't updated as frequently and can be sensitive to active queries that don't use the proper protocols. Additionally, the consequences of cyber attacks on CPS can lead to physical damage, safety risks for operators, and serious disruption of business operations.
While the [Internet of Things (IoT)](https://claroty.com/blog/extended-internet-of-things-xiot-faq) is a recognized concept, it's important to differentiate IoT from CPS. IoT generally refers to a network of interconnected devices, sharing and acting on data. CPS, on the other hand, includes not only the interconnectedness of IoT but adds greater emphasis on close integration with physical processes, real-time responses, and advanced analytics.
10 Examples of Cyber-Physical Systems
-------------------------------------
For deeper insight on the capabilities and potential of CPS, here are some examples you may come across in a range of fields.
### Operational Technology
Operational Technology (OT) uses both hardware and software to change, monitor, or manage physical processes, devices, and events within an organization or environment.
### Industrial Internet of Things (IIoT)
Industrial Internet of Things (IIoT) is a network of interconnected devices designed to boost industrial efficiency and productivity. IIoT enhances industrial processes by leveraging real-time data analysis, predictive maintenance, quality control, and seamless supply chain management.
### Industrial Control Systems (ICS)
Embedded within virtually all industrial processes, [Industrial Control Systems (ICS)](https://claroty.com/blog/cybersecurity-dictionary-industrial-control-systems-ics-security) are a type of CPS that manages, commands, and regulates industrial operations. They contribute to operations running smoothly, safely, and effectively.
### Building Management Systems (BMS)
[Building Management Systems (BMS)](https://claroty.com/blog/the-high-stakes-of-securing-healthcare-building-management-systems) are designed to control, monitor, manage, and optimize various systems within a building, such as HVAC, electricity, security, and fire safety. As a type of CPS, BMS allow for energy-saving and cost-efficient building operations, and help preserve the safety, availability, and integrity of the operations and processes occurring within a facility.
### Smart Grids
Integrating information and communication with power infrastructure, smart grids are a prime example of CPS. Smart grids offer real-time monitoring, decision making, and energy distribution, which helps evolve the conventional power grid into an intelligent one using digital technology, sensors, and software.
### Smart Buildings
Smart buildings employ CPS to enhance comfort, energy efficiency, and security. By integrating sensors, control systems, and software, smart buildings manage lighting, ventilation, power consumption, and more. This optimizes resources and offers a more sustainable built environment.
### Robotics
From manufacturing lines to surgical procedures, robotics have transformed various industries. This form of CPS provides enhanced precision, increased productivity, and improved safety.
### Smart Transportation Systems
The transportation sector employs CPS for improving efficiency, safety, and sustainability. Transportation organizations rely heavily on this form of CPS for real-time traffic monitoring, route planning, autonomous vehicles, and more.
### Medical Devices
In healthcare, CPS has transformed patient care with medical devices, or the [Internet of Medical Things (IoMT)](https://claroty.com/blog/iomt-101-guide-to-the-internet-of-medical-things) that monitor patient vitals, dispense medication, or guide surgeries. These systems ensure a high degree of care and reliability, providing improved patient outcomes.
### Smart Manufacturing
Smart manufacturing is a form of CPS that provides enhanced efficiency and flexibility in production processes, with real-time optimization of manufacturing operations leading to enhanced productivity.
The Challenges of Managing Cyber-Physical Systems
-------------------------------------------------
As we've seen, CPS is bringing forth a new era of productivity and efficiency in several key industries. But at the same time, CPS also presents new challenges. These are the top issues to keep in mind.
### Require Different Protection Tools from IT Systems
Cybersecurity is not one-size-fits-all, and as we've outlined, there is a significant difference between IT and CPS. Utilizing cybersecurity tools meant for IT systems will not protect CPS. In some cases, these solutions could impair sensitive OT devices. CPS require their own protection tools that have been especially designed to handle considerations unique to CPS, including system fragility, unique architectures, proprietary protocols, and environmental and operational constraints.
### Interoperability
Interoperability between various systems and devices can present difficulties due to a lack of standardized protocols. As CPS continue to grow, the issues do as well, and organizations must strive to balance the benefits of improving productivity on one hand with reducing the cyber risk that comes from connectivity on the other.
### Growing Security Risks
With so many interconnected devices and the possibilities of potential exposures, CPS are an attractive target for cyber attacks. Security concerns are growing day by day, particularly because the stakes of CPS security can have far reaching implications in both the digital and physical world, resulting in damages or losses. Unfortunately, many CPS devices are not designed with security in mind, making it all the more difficult to secure them properly. This makes it that much more important to find the right solution to protect these devices.
### Outdated Approaches to Visibility
The traditional method of achieving asset visibility for OT devices within CPS has emphasized passive queries, due to the sensitive nature of OT devices. In reality, passive-only queries lack the depth necessary for total visibility within CPS.
### Scalability
Scalability presents another challenge. As an organization increases its CPS, handling the vast amounts of real-time data generated, and ensuring all systems are updated, secured and running optimally can become an increasingly complex task.
### Compliance
The regulatory landscape for CPS is continually evolving. Ensuring compliance with data protection regulations, safety standards, and industry-specific legislation is an ongoing issue that all organizations relying on CPS must address.
### Real-Time Data Processing and Actionable Insights
Any lag in real-time data processing can pose challenges within CPS, which typically require a continuous stream of data for constant output to maintain accurate, real-time insights. Similarly, using IT-centric tools can lead to an incomplete asset inventory that would otherwise be achievable with CPS-specific tools. Lacking a complete asset inventory can impact an organization's ability to take actionable steps towards threat detection, vulnerability management, network segmentation, and more.
How to Optimize Your Cyber-Physical Systems
-------------------------------------------
With these challenges in mind, effective management strategies for CPS are imperative. Consider these strategies to protect your CPS.
### 1\. Security
Security in CPS requires comprehensive solutions, including those that encompass physical and human factors. A thorough security strategy should include the following:
- Exposure management: Determine the impact exposures could have on business operations and build a programmatic approach to continuous threat exposure management that is specifically designed for CPS.
- Network protection: Without visibility into your network, it's difficult to identify what each connected device is and how it communicates. Taking steps like network segmentation, optimization, and policy compliance monitoring is key for protecting your entire network.
- Secure access: Traditional methods of remote access can be risky, making a secure access solution that provides privileged access and identity management imperative.
- Threat detection: Utilizing a CPS protection platform that detects both known and unknown threats is foundational to protecting the security of operational environments.
In addition to these key areas, some other measures to consider adopting are:
- Zero Trust Architecture: This approach does not assume that a device or user, whether inside or outside the network, is trustworthy without verification, significantly reducing the potential for unauthorized access.
- Intrusion Detection/Prevention Systems (IDPS): These systems identify and mitigate cyber threats before they infiltrate the network.
- Physical security: Measures like access control and surveillance systems also have a role to play since securing CPS isn't just about digital security. Protecting the physical interface of these systems is also critical.
### 2\. Performance and Reliability
Maintaining high performance and reliability of CPS involves continuous monitoring of your systems, including regular system health checks and routine maintenance. You can further enhance reliability through redundancy, in which critical components are duplicated to prevent total system failure in the event of a breakdown.
### 3\. Interoperability and Integration
Effectively managing CPS includes ensuring interoperability and integration across all your systems. Using standardized protocols and integrating with current workflows can significantly reduce complexity. This allows for simpler data exchange and shared functions across your systems.
### 4\. Optimization Techniques
There are many optimization techniques available to improve performance, efficiency, and longevity of your CPS. These strategies include system modeling, preventive maintenance, insights from AI optimization, and resource allocation to minimize energy consumption.
### 5\. Human-Centered Design
Finally, human interaction with CPS must be taken into account. One aspect in which this is key is secure access, which allows users to interact with CPS remotely to operate, maintain, and update CPS in various environments. Because remote access can introduce new security risks, it's imperative to adopt enhanced secure access security measures to protect your CPS.
Selecting a Cyber-Physical System Protection Platform
-----------------------------------------------------
To directly face the challenges presented above and fully leverage the potential of CPS, organizations require robust strategies to protect and secure every part of their network. The first step is to evaluate your CPS protection platform to understand whether it is capable of handling every aspect of CPS security your environment demands.
Some of the most important criteria to look for in the selection process for a robust CPS protection platform include:
- Industry expertise: Selecting a platform that displays industry expertise and a deep commitment to driving progress in the CPS protection sector is one indication of that platform's merits. Award winning products and research teams, working with manufacturers to disclose vulnerabilities, and equipping customers with the means to leverage stronger protection against threats make a significant impact on your CPS protection strategy.
- Deep visibility: It is only through multiple discovery methods that you can achieve deep visibility within all CPS devices connected to your network. This means choosing a platform that uses both active and passive discovery methods, including those that use unique or proprietary protocols, are air-gapped, or are otherwise unreachable through passive-only methods.
- Broad solution set: Limited use-cases can be a sign that a platform doesn't have the breadth of experience to address all your needs. Seek out a vendor with depth in their portfolio that supports all types of CPS across the XIoT, deployment needs, and network architectures. Your unique needs and environments should be supported by their offering.
- Enabling business outcomes: The right data elements are critical to achieving better business outcomes. By giving you the option of managing, monitoring, and controlling your CPS security solutions in one place, the right solution can help you streamline risk management, apply compensating controls, respond to threats, and manage your overall security posture.
- Deployment flexibility: Having the option to deploy cybersecurity products on-premises or in the cloud, with the option to function on user-supplied software, is essential. This can help cut costs that come with acquiring, maintaining, and updating hardware and gives you the flexibility to determine where and how to deploy the solution based on your unique requirements.
Claroty is an industry leader in CPS protection and trusted across industries to deliver unmatched visibility, protection, and threat detection. To learn more about Claroty's protection methods, speak with a member of [our team](https://claroty.com/request-a-demo). | yayabobi | |
1,909,238 | My Pen on CodePen | Check out this Pen I made! | 0 | 2024-07-02T17:39:29 | https://dev.to/rico_craselaofficial_2dd/my-pen-on-codepen-5a2f | codepen | Check out this Pen I made!
{% codepen https://codepen.io/Rico-Crasela-Official/pen/XWwvVEP %} | rico_craselaofficial_2dd |
1,909,237 | Overcoming an Unidentified Bug in our SpringBoot Application: My Inspiring Journey with the HNG Internship | Embarking on a journey with the HNG Internship program has been my career aspiration ever since I... | 0 | 2024-07-02T17:37:50 | https://dev.to/realest-techy-leidi/overcoming-an-unidentified-bug-in-our-springboot-application-my-inspiring-journey-with-the-hng-internship-1al7 | java, testing, backend, webdev |
Embarking on a journey with the HNG Internship program has been my career aspiration ever since I stumbled upon it; I believe the program represents a significant step towards honing my backend development skills. You can be a part of this awesome opportunity by registering through this link: <https://hng.tech/internship>. Recently, I encountered a particularly challenging problem that tested my abilities and pushed me to really think outside the box.
**The Challenge**
One of the most critical issues I faced involved an unidentified bug in our Spring Boot application that caused intermittent failures in the user registration process. The bug was elusive, not appearing consistently and leaving little trace in the logs, making it difficult to diagnose and resolve.
**Step-by-Step Approach to Overcoming this Challenge**
1. **Understanding the Problem**
This was the first step I adopted in handling this challenge. I tried to thoroughly understand the causes of this issue and gather as much information as possible about the failures. I utilized:
* User Reports: I collected detailed reports from users experiencing the issue, noting the specific circumstances under which the failures occurred.
* Log Analysis: I analyzed the logs for any patterns or anomalies that could provide clues, although the logs were sparse and inconsistent.
* Reproducing the Issue: I attempted to reproduce the issue in a controlled environment, running multiple tests under various conditions to trigger the bug.
2. **Identifying Potential Causes**

With the initial information gathered, I brainstormed potential causes for the intermittent failures, which I presumed might be due to:
* Concurrency Issues: Given the sporadic nature of the bug, I considered concurrency issues, such as race conditions or thread safety problems.
* Database Transactions: I reviewed the database transactions to ensure that there were no issues with data consistency or integrity.
* Third-Party Services: I examined interactions with third-party services, considering whether external dependencies could be the cause of the intermittent failures.
3. **Enhanced Logging and Monitoring**
To gather more data and pinpoint the issue, I implemented enhanced logging and monitoring using:
* Detailed Logging: I added detailed logging at various points in the registration process to capture more granular information about the application's state and behavior.
* Monitoring Tools: I set up monitoring tools like Prometheus and Grafana to track real-time metrics and visualize any patterns that emerged.
4. **Code Review and Debugging**
With enhanced logging in place, I conducted a thorough code review and debugging session.
* Code Review: I meticulously reviewed the code, looking for potential bugs, such as improper exception handling, uninitialized variables, or misconfigured dependencies.
* Debugging: Using a combination of IntelliJ IDEA's debugger and the new log data, I traced the execution flow to identify where the failures occurred.
5. **Fixing the Bug**
After a detailed analysis, I discovered that the issue stemmed from a misconfiguration in the Spring Boot application's dependency injection.
* Dependency Injection: The bug was caused by a race condition in the way certain beans were initialized. Specifically, a singleton bean was being accessed by multiple threads before it was fully initialized.
* Solution: I modified the bean scope and initialization logic to ensure proper synchronization. This involved using `@PostConstruct` to complete any necessary setup before the bean was accessed by other components. Below is a snippet of the correction I made in my code base;
```java
@Service
public class UserService {

    private final UserRepository userRepository;
    private final SomeDependency someDependency;

    @Autowired
    public UserService(UserRepository userRepository, SomeDependency someDependency) {
        this.userRepository = userRepository;
        this.someDependency = someDependency;
    }

    @PostConstruct
    public void init() {
        // Ensure that someDependency is fully initialized before use
        someDependency.initialize();
    }

    // Registration logic
}
```
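Stripped of the Spring specifics, the hazard and the eager-initialization fix can be reproduced in plain Java. The sketch below is illustrative (the class and method names are not from our code base): finishing setup before the object is shared, which is the role `@PostConstruct` plays above, closes the race window.

```java
// Minimal model of the bug: a dependency that must be initialized before use.
class Dependency {
    private volatile boolean ready = false;

    // Plays the role of the @PostConstruct step: complete setup before sharing.
    void initialize() {
        ready = true; // expensive setup would happen here
    }

    String greet() {
        if (!ready) {
            throw new IllegalStateException("used before initialization");
        }
        return "ok";
    }
}

class InitDemo {
    public static void main(String[] args) throws InterruptedException {
        Dependency dep = new Dependency();
        dep.initialize(); // eager init: no thread can observe a half-built object

        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> System.out.println(dep.greet()));
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
    }
}
```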
6. **Testing and Verification**
With the fix implemented, rigorous testing was essential to ensure the bug was resolved.
* Unit Tests: I created detailed unit tests to cover all edge cases and ensure the registration process was robust.
* Integration Tests: I performed integration tests to verify that the entire process worked correctly in a real-world scenario.
* User Testing: I deployed the fix to a staging environment and invited users to test the registration process, monitoring for any further issues.
7. **Deployment and Monitoring**
Deployment to our production environment required careful planning to minimize disruptions. I also set up monitoring to track the performance and stability of the registration process in real-time.
* Deployment Planning: I planned the deployment to occur during off-peak hours, ensuring minimal impact on users.
* Monitoring Setup: I configured monitoring tools to track registration success rates, error rates, and other relevant metrics, allowing for proactive issue detection and resolution.
**Reflections on Overcoming this Challenge**
Going through this backend challenge was undoubtedly demanding, but immensely rewarding. It not only deepened my technical expertise but also strengthened my problem-solving abilities and collaborative skills within a team setting. The experience reinforced my passion for backend development and my eagerness to continue learning and growing in this dynamic field.
**My journey and I**
I am that “tech-lady” who can almost never be caught without her eyes fixated on the screen. Yeah, that’s how much I enjoy coding and researching. Participating in the HNG Internship program is one of the goals I have smashed for this year, and counting. _So proud of myself... lol._
**Why the HNG Internship**
Being a part of the participants for the HNG Internship represents a pivotal opportunity to further expand my knowledge and skills under the mentorship of industry experts. The program’s focus on practical, hands-on experience aligns perfectly with my career goals of becoming a proficient backend developer. Moreover, the chance to work on real-world projects alongside talented peers promises to be a transformative learning experience.
In conclusion, resolving complex backend challenges is not just about writing code; it’s about understanding the problem deeply, designing elegant solutions, and continuously iterating towards improvement. The journey with the HNG Internship marks a new chapter in my career, filled with excitement, growth, and the promise of contributing meaningfully to the tech community. If you are looking to hire talented developers like myself, you can check out <https://hng.tech/hire> and thank me later. | realest-techy-leidi |
1,884,577 | Symfony 7 vs. .NET Core 8 - Controllers | Disclaimer This is a tutorial or a training course. Please don't expect a walk-through... | 0 | 2024-07-02T17:37:06 | https://dev.to/awons/symfony-7-vs-net-core-8-controllers-31o7 | symfony, dotnetcore, controllers | ## Disclaimer
This is not a tutorial or a training course. Please don't expect a walk-through tutorial showing how to use ASP.NET Core. It only compares similarities and differences between Symfony and ASP.NET Core. Symfony is taken as a reference point, so features that are only available in .NET Core may never make it into this post (unless relevant to the comparison).
Most of the concepts mentioned in this post will be discussed in more detail later. This should be treated as a comparison of the "Quick Start Guides."
## Intro
Both frameworks have a concept of controllers. A controller is a group of one or more actions put together, usually in the form of a class and its methods. While in both frameworks we can point routes at things other than controllers (like inlined endpoints in .NET or callbacks in Symfony), we will focus here on controllers only.
## Controllers
### Symfony
We've already seen in [Symfony 7 vs. .NET Core 8 - Web application; the basics](https://dev.to/awons/symfony-7-vs-net-core-8-web-application-the-basics-2ip8), that we can define a controller by attaching an attribute to it, like in the following example:
```php
class LuckyController
{
    #[Route('/lucky/number/{max}', name: 'app_lucky_number')]
    public function number(int $max): Response
    {
        $number = random_int(0, $max);

        return new Response(
            '<html><body>Lucky number: '.$number.'</body></html>'
        );
    }
}
```
The basic idea is to map a route (URL) to a specific method. This method returns a response object that is sent to the browser.
This is the most trivial example, but extending our controller from the `AbstractController` class gives us access to a few handy helper methods, which we will discuss later.
```php
class LuckyController extends AbstractController
{
}
```
### .NET Core
.NET Core is no different here. We can either have a method in a class that will serve as a target of a route or extend from a base class that will provide us with some additional helpers, similar to Symfony.
```c#
public class LuckyController
{
    [Route("/lucky/number/{max}", Name = "app_lucky_number")]
    public IResult Number(int max)
    {
        var random = new Random();
        var number = random.Next(max);
        var response = $"Lucky number: {number}";

        return TypedResults.Ok(response);
    }
}
```
We inherit from a base controller to access those helper methods.
```c#
public class LuckyExtendedController : Controller
{
    [Route("/lucky/number/{max}", Name = "app_lucky_number")]
    public IActionResult Number(int max)
    {
        var random = new Random();
        var number = random.Next(max);
        var html = $"<html><body>Lucky number: {number}</body></html>";

        return new ContentResult
        {
            Content = html,
            ContentType = "text/html"
        };
    }
}
```
On the surface, it looks very similar. But an astute reader probably noticed that the example not extending the base class returns an object implementing the `Microsoft.AspNetCore.Http.IResult`, and the one extending the base class returns an object implementing the `Microsoft.AspNetCore.Mvc.IActionResult`.
.NET Core distinguishes between two different approaches to building web applications:
* The MVC-based approach, which extensively uses classes from the `Microsoft.AspNetCore.Mvc` namespace.
* Minimal APIs, which use responses from the `Microsoft.AspNetCore.Http` namespace.
We can still mix and match, as in the former example, but we can also do things like this:
```c#
app.MapGet("/hello", () => TypedResults.Ok(new Message() { Text = "Hello World!" }));
```
The above example will return a `200` JSON response.
Building such minimal APIs is also possible in Symfony, but the difference is that Symfony does not distinguish between minimal APIs and MVC. Behind the scenes, the same `Symfony\Component\HttpFoundation\Response` object is always used.
Another difference is that in .NET Core, we can integrate with OpenAPI out of the box (it is part of the framework), while in Symfony, an API-based application with OpenAPI features is only available using a third-party tool—the [API Platform](https://api-platform.com/).
We will review building API-based applications later in this series.
### Redirecting
#### Symfony
We have two ways to redirect in Symfony:
1. Redirect to a route
```php
public function index(): RedirectResponse
{
    return $this->redirectToRoute('homepage', [], Response::HTTP_MOVED_PERMANENTLY);
}
```
2. Redirect to a specific URL
```php
public function index(): RedirectResponse
{
    return $this->redirect('http://symfony.com/doc');
}
```
We return a `RedirectResponse` containing either a hardcoded URL or a URL generated from the existing routing configuration.
#### .NET Core
The biggest difference is that in .NET Core, we achieve the same things using more methods than in Symfony. For example, the permanent redirect is an argument in Symfony and a separate method in .NET Core. But apart from that, we can do the same things easily.
```c#
[Route("/redirect-to-route")]
public IActionResult MyRedirectToRoute()
{
    return RedirectToRoute("app_lucky_number", new { max = 123 });
}

[Route("/redirect-to-route-permanent")]
public IActionResult MyRedirectToRoutePermanent()
{
    return RedirectToRoutePermanent("app_lucky_number", new { max = 123 });
}
```
```c#
[Route("/redirect-to-url")]
public IActionResult MyRedirect()
{
    return Redirect("https://example.com");
}

[Route("/redirect-to-url-permanent")]
public IActionResult MyRedirectPermanent()
{
    return RedirectPermanent("https://example.com");
}
```
### Rendering templates
#### Symfony
Symfony uses Twig as a templating engine, and the `AbstractController` can help render a template:
```php
return $this->render('lucky/number.html.twig', ['number' => $number]);
```
We pass a path to a template and variables that will be used there.
PS. We will get into more details regarding templating in one of the following posts.
#### .NET Core
It is not that different in .NET Core. While Symfony uses a third-party templating engine, Twig, .NET has [Razor](https://learn.microsoft.com/en-us/aspnet/core/razor-pages/?view=aspnetcore-8.0&tabs=visual-studio). The rendering looks very similar, with one difference: due to naming conventions, we don't need to specify the template we want to render (unless we want to break out of convention).
```c#
ViewData["number"] = number;
return View();
```
### Injecting services directly into an action
#### Symfony
While I wouldn't consider this a good practice, we can definitely do something like this:
```php
#[Route('/lucky/number/{max}')]
public function number(int $max, LoggerInterface $logger): Response
{
    $logger->info('We are logging!');
    // ...
}
```
Symfony will automatically inject a configured service into our action.
#### .NET Core
This is also possible here. The only difference is that we have to explicitly state we want to inject an argument from DI.
```c#
[Route("/lucky/number/{max}")]
public IActionResult NumberV3(int max, [FromServices] ILogger<LuckyExtendedController> logger)
{
    var random = new Random();
    var number = random.Next(max);

    logger.LogInformation("My lucky number is {number}", number);
    // ...
}
```
### Generating controllers
#### Symfony
We can generate a controller with a corresponding template using the command line.
```bash
php bin/console make:controller BrandNewController
created: src/Controller/BrandNewController.php
created: templates/brandnew/index.html.twig
```
#### .NET Core
We can do the same in .NET Core, with a small caveat: we need to execute two commands, one to generate the controller and one to generate the view (so the template).
```bash
dotnet new mvccontroller -n BrandNewController -o Controllers --namespace App.Controllers
```
```bash
dotnet new view -n Index -o Views/BrandNew
```
### Handling errors
In the context of an MVC application, errors that reach the browser will be represented with status codes corresponding to a specific error type.
#### Symfony
If we haven't found something and want to give back a 404 status code, we can use a helper method:
```php
throw $this->createNotFoundException('The product does not exist');
```
or explicitly throw an exception:
```php
throw new Symfony\Component\HttpKernel\Exception\NotFoundHttpException('The product does not exist');
```
There are a few similar exception classes that will take care of returning the proper status code with the response.
Anything not inherited from the base `HttpException` will result in status code 500.
By default, Symfony will not display technical details if we are in production mode. In dev mode, we will get many details that can be useful for debugging.
#### .NET Core
.NET Core is different in this regard.
We can still use helper functions to return responses with specific status codes, like:
```c#
return BadRequest();
```
or
```c#
return Conflict();
```
But throwing an exception always leads to 500. We have to write custom code to convert exceptions into errors with specific status codes.
This means if we are not extending from the base `Controller` class, we need a different solution. We can use the `TypedResults` class to return a response we want:
```c#
public class TestErrorNoInheritanceController
{
    [Route("/test-error/no-inherit")]
    public IResult Index()
    {
        return TypedResults.NotFound();
    }
}
```
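As a sketch of that custom code, a small inline middleware in `Program.cs` can translate a domain exception into a 404 before it bubbles up as a 500 (the `ProductNotFoundException` type here is hypothetical):

```c#
// Program.cs — register before the endpoints so it wraps them.
app.Use(async (context, next) =>
{
    try
    {
        await next();
    }
    catch (ProductNotFoundException exception) // hypothetical domain exception
    {
        context.Response.StatusCode = StatusCodes.Status404NotFound;
        await context.Response.WriteAsync(exception.Message);
    }
});
```

For anything beyond a sketch, the built-in `app.UseExceptionHandler()` pipeline is the more idiomatic home for this logic.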
### Accessing the current request
#### Symfony
In Symfony, this is as simple as defining the method argument as in the following example:
```php
public function index(Request $request): Response
{
    $page = $request->query->get('page', 1);
    // ...
}
```
#### .NET Core
Depending on whether we inherit from the base class or not, the implementation will differ.
If we inherit from the base controller, we have access to the `HttpContext` object:
```c#
public IActionResult Index()
{
    var body = HttpContext.Request.Body;
    return Ok(body.ToString());
}
```
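If we do not inherit from the base class, one common option is the `IHttpContextAccessor` service, which has to be registered first. A minimal sketch (the controller name is illustrative):

```c#
// Program.cs
builder.Services.AddHttpContextAccessor();

// A controller that does not extend the base class:
public class PlainController
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public PlainController(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    [Route("/plain")]
    public IResult Index()
    {
        var path = _httpContextAccessor.HttpContext?.Request.Path;
        return TypedResults.Ok($"Path: {path}");
    }
}
```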
### Mapping of a request
We have already touched on request mapping earlier in this series, but we will now compare some more Symfony examples.
#### Symfony
We can map query string parameters:
```php
public function dashboard(
    #[MapQueryParameter] string $firstName,
    #[MapQueryParameter] string $lastName,
    #[MapQueryParameter] int $age,
): Response
{
    // ...
}
```
Mapping can be combined with validation/filtering:
```php
public function dashboard(
    #[MapQueryParameter(filter: \FILTER_VALIDATE_REGEXP, options: ['regexp' => '/^\w+$/'])] string $firstName,
    #[MapQueryParameter] string $lastName,
    #[MapQueryParameter(filter: \FILTER_VALIDATE_INT)] int $age,
): Response
{
    // ...
}
```
We can even map to DTOs like this:
```php
class UserDto
{
    public function __construct(
        #[Assert\NotBlank]
        public string $firstName,
        #[Assert\NotBlank]
        public string $lastName,
        #[Assert\GreaterThan(18)]
        public int $age,
    ) {
    }
}
```
```php
public function dashboard(
    #[MapQueryString] UserDto $userDto
): Response
{
    // ...
}
```
#### .NET Core
We can also map from query string parameters, which is very similar to how it's done in Symfony.
```c#
public IActionResult Index([FromQuery] string firstName, [FromQuery] string lastName, [FromQuery] int age)
{
    return new ContentResult
    {
        ContentType = "text/html",
        Content = $"firstName: {firstName}; lastName: {lastName}; age: {age}"
    };
}
```
Validation is also possible, but as I've already mentioned, we need to check the state of the model ourselves and act accordingly.
```c#
public IActionResult IndexValidation(
    [FromQuery][StringLength(100, MinimumLength = 10, ErrorMessage = "First name must be between 10 and 100 characters")] string firstName,
    [FromQuery][RegularExpression(@"^[a-zA-Z''-'\s]{1,40}$", ErrorMessage = "Some characters are not allowed.")] string lastName,
    [FromQuery] int age)
{
    if (!ModelState.IsValid)
    {
        return ValidationProblem(ModelState);
    }

    return new ContentResult
    {
        ContentType = "text/html",
        Content = $"firstName: {firstName}; lastName: {lastName}; age: {age}"
    };
}
```
Last but not least, we can map an entire object; again, it is similar to how we would do it in Symfony.
```c#
public record UserDto(
    [StringLength(100, MinimumLength = 10)] string firstName,
    string lastName,
    int age
);
```
```c#
public IActionResult IndexModel([FromQuery] UserDto userDto)
{
    if (!ModelState.IsValid)
    {
        return ValidationProblem(ModelState);
    }

    return Json(userDto);
}
```
### Session
#### Symfony
The current session can be accessed from the current request.
```php
$request->getSession()
```
Symfony has an interesting concept of "flash messages." We can set a message in the current request, which will be available in the next one (and only the next one; it will be automatically deleted afterward).
```php
$this->addFlash(
    'notice',
    'Your changes were saved!'
);
// $this->addFlash() is equivalent to $request->getSession()->getFlashBag()->add()

return $this->redirectToRoute(/* ... */);
```
#### .NET Core
In .NET Core, the session needs to be enabled first:
```c#
// Program.cs
builder.Services.AddSession(options =>
{
    options.Cookie.Name = "MyCookieName";
    options.IdleTimeout = TimeSpan.FromSeconds(10);
    options.Cookie.IsEssential = true;
});

// ...
app.UseSession();
```
We also have a feature similar to Symfony's flash messages. It is called `TempData`. The end result is the same, but the usage is different.
We must define a property and tag it with the `[TempData]` attribute. Afterward, this data will remain in the session until it is read.
```c#
public class HomeController : Controller
{
    [TempData]
    public string? Message { get; set; }

    public IActionResult Index()
    {
        Message = "My flash message";
        return RedirectToRoute("my_controller");
    }
}
```
TempData can be passed in a cookie or stored in a session. It can be later retrieved either in a template or a controller. The easiest way to use it is to inherit from the base controller, where we get direct access to TempData.
```c#
public class MyController : Controller
{
    [Route("/my-controller", Name = "my_controller")]
    public IActionResult Index()
    {
        string message = TempData["Message"]?.ToString() ?? "";

        return new ContentResult
        {
            ContentType = "text/html",
            Content = message
        };
    }
}
```
We can also get the value directly in a template like this:
```c#
@page
@model IndexModel

@{
    if (TempData["Message"] != null)
    {
        <h3>Message: @TempData["Message"]</h3>
    }
}
```
The biggest differences in .NET are:
* A message is not always automatically removed. It must be read, and the return status code must be 200.
* We can use the `Peek` method to get the value, which will not mark the message for deletion.
* We can use the `Keep` method to keep the message anyway (after reading it).
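For illustration, the three behaviours side by side:

```c#
var peeked = TempData.Peek("Message"); // read without marking for deletion
var read = TempData["Message"];        // read and mark for deletion
TempData.Keep("Message");              // un-mark it, so it survives another request
```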
### Accessing configuration values
#### Symfony
Symfony has a concept of `parameters`. Those parameters can come from different sources and be used to configure services. Parameters are normally defined in a YAML configuration file like this:
```yaml
# config/services.yaml
parameters:
# the parameter name is an arbitrary string (the 'app.' prefix is recommended
# to better differentiate your parameters from Symfony parameters).
app.admin_email: 'something@example.com'
```
Those parameters can be accessed from within a controller like this:
```php
public function index(): Response
{
    $adminEmail = $this->getParameter('app.admin_email');
    // ...
}
```
#### .NET Core
In .NET Core, we can also configure our application using configuration values. These values can come from different sources (using providers) and even be hot-reloaded. Like in Symfony, we can access those values within our controller, but it is a bit more complicated. We need to inject the configuration service/object into the controller to access any values. This may seem strange, but normally, we configure our services based on the values (we will get to that later) and do not use config values directly.
```c#
public class MyController : Controller
{
    private readonly IConfiguration _configuration;

    public MyController(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    [Route("/my-controller")]
    public IActionResult Index()
    {
        var allowedHosts = _configuration.GetValue<string>("AllowedHosts");
        // ...
    }
}
```
### Returning responses
#### Symfony
We can return different types of responses using helper functions.
This could be a JSON response (Symfony will try to serialize whatever we pass as an argument).
```php
public function index(): JsonResponse
{
    // returns '{"username":"jane.doe"}' and sets the proper Content-Type header
    return $this->json(['username' => 'jane.doe']);

    // the shortcut defines three optional arguments
    // return $this->json($data, $status = 200, $headers = [], $context = []);
}
```
We can also stream a response if, for example, we want to return a file.
```php
public function download(): BinaryFileResponse
{
    // send the file contents and force the browser to download it
    return $this->file('/path/to/some_file.pdf');
}
```
We can also return an early-hints response (status code 103). However, this is only available when we use a SAPI like FrankenPHP; it will not work with the regular PHP SAPI.
```php
public function index(): Response
{
    $response = $this->sendEarlyHints([
        new Link(rel: 'preconnect', href: 'https://fonts.google.com'),
        (new Link(href: '/style.css'))->withAttribute('as', 'stylesheet'),
        (new Link(href: '/script.js'))->withAttribute('as', 'script'),
    ]);

    // prepare the contents of the response...

    return $this->render('homepage/index.html.twig', response: $response);
}
```
#### .NET Core
We can do the same thing in .NET Core.
Returning a JSON response:
```c#
public class JsonController : Controller
{
    [Route("/json")]
    public IActionResult Index()
    {
        return Json(new { key = 1, key_2 = "value" });
    }
}
```
In both frameworks, we can control the configuration of the JSON serializer (with the `context` array parameter in Symfony and the second argument to the `Json` method: `JsonSerializerSettings` instance).
The biggest difference is that controlling the status code is not as straightforward as in Symfony.
We can definitely do it by coding something specific, but this is not available out of the box.
I personally don't see this as a big deal. At the end of the day, why would we want to return a status other than 200 when we return a JSON-formatted response?
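That said, the base controller's `StatusCode(int, object)` helper gets close, since the resulting `ObjectResult` is serialized as JSON by default:

```c#
[Route("/json-created")]
public IActionResult CreatedJson()
{
    // ObjectResult content negotiation produces JSON by default
    return StatusCode(StatusCodes.Status201Created, new { key = 1 });
}
```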
Returning a file stream is similar to Symfony, though .NET operates directly on streams (as an alternative, we can use a byte array, but it won't be as efficient as using a stream).
```c#
public IActionResult Index()
{
    FileStream fileHandler = System.IO.File.OpenRead("appsettings.json");
    return File(fileHandler, "application/octet-stream");
}
```
Regarding early hints, I tried to figure out how to do it, but unfortunately, I gave up. It might be possible, but definitely not out of the box (such as by using a simple helper method as in Symfony).
## What's next?
In the next post, I will compare Twig to Razor (it will be high-level only; details would require a few posts, I guess).
Thanks for your time!
I'm looking forward to your comments. You can also find me on [LinkedIn](https://www.linkedin.com/in/aleksanderwons/), [X](https://x.com/AleksanderWons), or [Discord](https://discordapp.com/users/601775386233405470). | awons |
1,909,235 | Happy 21st Birthday Digital Samba! | It’s time to celebrate! Exactly 21 years ago today we visited a notary office in Barcelona to... | 0 | 2024-07-02T17:36:14 | https://dev.to/digitalsamba/happy-21st-birthday-digital-samba-2h1c | It’s time to celebrate! Exactly 21 years ago today we visited a notary office in Barcelona to officially incorporate Digital Samba. Over the last two decades a lot has changed, but what hasn’t changed is our enthusiasm for bringing people together in new and imaginative ways. We remain dedicated to providing a safe and secure environment for your online communications in the new era of artificial intelligence.
But before we reveal what we have in store for the future, let’s take a brief trip down memory lane and visit some of the highlights of the past 21 years.
## 21 years in a nutshell
- 2002 - Macromedia released the Flash Communication Server. Flash was already a hot topic in client-side development, but the introduction of a communication server opened the door to “quick and easy” online video applications. Intrigued, three of us got together, plugged in our bulky Logitech QuickCam Express webcams (remember those?) and got cracking.

- 2003 - Digital Samba is born! We delivered a full suite of video applications for a telemedicine customer this year. Armed with this experience, we won a contract with the Spanish department of transportation (Dirección General de Tráfico). The rest, as they say, is history.
- 2006 - After years of project-based work, we stepped into the product world and released the OnSync video conferencing platform. Over the next 10 years, we added a huge amount of features.
- 2016 - Flash is dead! Well, sort of. Not quite yet. But there’s a new kid on the block, and you can’t go into a sales call without hearing about WebRTC. We took a pragmatic approach and started investigating, without losing our heads in all the drama.
- 2019 - Flash is dead! For real this time. But we’ve been busy and bring our WebRTC product called Samba Live into action. Almost all the features of the Flash product were developed from scratch in the WebRTC product, but most importantly, the product was ready to be used at scale.
- 2020 - The pandemic hits hard, and luckily we are ready. We experience accelerated product usage and company growth, and we’re proud to say that our platform handled it with extreme finesse.
- 2022 - SaaS video conferencing products are commonplace. But if there is one thing we learned during the pandemic, it’s that people want embedded solutions. We pivot from a pure SaaS to an embedded VPaaS and release Digital Samba Embedded.
- 2024 - Embedded solutions are not built equally, and we strongly believe in the prebuilt approach. We have been working hard to bring all our SaaS features into our embedded product. You don’t have to use all the features, but they are there if you need them! And of course, 2024 is the year of AI - more on that later.
## Our users love us, and we love our users
A picture is worth a thousand words, and these pictures say it all.

Above all, our customers value our approachability. We care, and it shows. Of course, it doesn’t hurt that our platform is super stable, scales really well, is extremely secure, and delivers crisp audio & video.
## If you prebuild it, they will come
If you understand that movie reference, congratulations - you’re as old as us 😛
Since we last celebrated a birthday, we have rolled full steam ahead towards the belief that prebuilt is where we want to be. There are lots of VPaaS solutions out there - lots of competition. We aim to stand out by making embedded as easy as it can be (as easy as it should be). We give you all the tools that you need to make your embedded solution a success, no matter the use case. You want to embed a simple 1v1 video conference? We can do that. You want to embed a full video conferencing solution with all the bells and whistles? We can do that, too.
Hold on, prebuilt sacrifices flexibility, right? Correct, but 21 years of experience allows us to make solid decisions, and we provide an API that is powerful enough to dissuade even the pickiest developers from going down the strenuous development path with a low-level VPaaS provider.
## Privacy, privacy, privacy
Privacy first, second and third. We don’t play fast and loose with your data. We are one of the only European providers who don’t use any US subprocessors in our product. Your data is only processed in Europe, and only on European servers belonging to European companies. All our competitors are GDPR compliant, but if you read the fine print, you will find that most of them process your data on servers belonging to US mega-corporations (AWS, MS Azure, Google Cloud). If you interpret the privacy legislation literally, these data transfers are allowed, but it just doesn’t sit right with us.
When it comes to AI, we could easily take shortcuts and develop those features on US-based AI cloud platforms. But we’re sticklers when it comes to keeping your data safely within European borders, so we gladly do it the hard way and host all AI models ourselves. With us, your sensitive data doesn’t get bounced around from datacenter to datacenter like a pinball. You can trust that your data is processed only within Digital Samba infrastructure.
## 12 months worth of features
We may not be the biggest team in the world, but we get things done. We practise our own flavour of Agile, releasing updates in regular 2-week intervals. Want proof? Check out our release notes, and subscribe if you want to receive update notifications.
Here’s a taste of what we’ve been up to, starting with July 2023:
- July: Custom logo
- August: E2EE
- September: Room entry control, Custom CNAME, Custom HTML title
- October: Whiteboard (based on the awesome tldraw project)
- November: Google calendar integration
- December: Support for 500 users per session (up from 100)
- January-March: Support for 2000 users per session
- April: Breakout Rooms, Q&A, Private Chat
- May: AI Transcripts, AI Summaries, Image Presenting, In Session Component Control
- June: LTI integration, Polling
All feature development follows privacy by design and security by design principles. It’s not an afterthought - it’s baked in.
If you just glance at that list, you could be fooled into thinking that we’re once again developing a SaaS product. And that’s the thing, we kind of are, but not in the traditional sense. We are prebuilding SaaS features that can be embedded. All of these features are managed and configured via our API, and all of them can be controlled via our SDK.
With Digital Samba, you get a partner (not just a supplier or a vendor) with a unique perspective. We have lots of SaaS experience, and we know how end users think. We are also developers, and we know how to create tools for developers. This allows us to create an embedded platform that is both easy to use and offers you an insight to the data that you care about most. You can push data into our platform, and you can pull data from our platform into yours - all via the API. Take a look at some of the API calls to retrieve a session transcript, or a session summary, or Q&A. There’s more where that came from.
## The future is bright
AI is scary. AI is awesome. We cater to both sides of the coin. We won’t deny that we are extremely excited about AI and the opportunities it presents. At the same time, we believe in the ethical application of AI, and we will never send your data into a black box, especially when we don’t have any idea what that box does. Over the next 12 months, we will be exploring how this nascent technology can be applied to specific use cases in industries such as telehealth, education, sales support, and recruitment - to name but a few. We’re also keeping a close eye on the evolution of “on-device” processing capabilities that would allow AI models to run in the browser instead of on a server, thereby cutting out the middleman and keeping data processing local and private.
Perhaps less revolutionary, but still exciting, we will continue to push features into the platform to achieve full feature parity with our previous SaaS product. We’re close but not quite there yet. Expect to see the release of a full-fledged content library, integrated web application, picture-in-picture and multi-user screen sharing in the next months. Again, these are just tools. If your use case doesn’t need them, don’t activate them. Do please drop us a line if you want to see other features that aren’t currently on our roadmap.
## A big thank you to all
We would be nothing without you, so thank you for your trust! If there’s anything you’d like to see on our roadmap as we speed ahead towards our 22nd birthday, please [get in touch](https://www.digitalsamba.com/contact-us). We’re great listeners, and we love a good conversation. Find us on social media or subscribe to our newsletter.
## https://www.digitalsamba.com/blog/happy-21st-birthday-digital-samba
| digitalsamba | |
1,909,234 | HTML cool things | 1) Make a call or mail: Need a link to make a call or mail? a tag to the rescue Make a call Send an... | 0 | 2024-07-02T17:35:57 | https://dev.to/fida_jafri_b5296bd481b98d/html-cool-things-207j | 1) Make a call or mail:
Need a link to make a call or send an email? The `a` tag to the rescue:

```html
<a href="tel:43211234123">Make a call</a>
<a href="mailto:abc@gmail.com">Send an email</a>
```
2) Add a color picker:
Need a color picker for your webpage? No need for any libraries; it's just one line away.

```html
<input type="color"/>
```
3) Editable Content:

Make any content editable by adding the `contenteditable` attribute to the element.

```html
<p contenteditable="true">
  This is a paragraph
</p>
```
4) Mark tag:
Got something very important that you want to highlight? Here is the solution.

```html
<p>It is very <mark>Important</mark></p>
```
5) Time tag:
The `<time>` tag defines a specific time (or datetime).

```html
<time datetime="2022-01-18T14:00:00Z">January 18, 2022 2:00 PM</time>
```

The `datetime` attribute translates the date and time into a machine-readable format.
6) Progress bar:
Want to show the progress of anything? Use the `progress` tag.

```html
<progress value="75" max="100"></progress>
```
7) Disable option:
Use the `disabled` attribute to disable any of the options.

```html
<select>
  <option>Apple</option>
  <option>Orange</option>
  <option disabled>Mango</option>
</select>
```

You can see Mango is disabled.
8) Lazy Loading Image:
The `loading` attribute specifies whether a browser should load an image immediately or defer loading of off-screen images until, for example, the user scrolls near them.

```html
<img src="image.jpg" loading="lazy" alt="Some detail mentioned">
```
Hope you liked this post. If you have something to say, the comment section is for you. | fida_jafri_b5296bd481b98d |
1,909,232 | Fixing... err... I mean "implementing" my Amazon DynamoDB approach with Gen-AI | Second part of the series where I build a serverless app to track my Green Card priority date and implement storing the data in DynamoDB. | 27,940 | 2024-07-02T17:32:52 | https://community.aws/content/2ihOgUm8x9k15XZ6aBfvUTlwmNN/fixing-err-i-mean-implementing-my-amazon-dynamodb-approach-with-gen-ai | amazonqdeveloper, terraform, lambda, dynamodb | ---
title: "Fixing... err... I mean \"implementing\" my Amazon DynamoDB approach with Gen-AI"
description: Second part of the series where I build a serverless app to track my Green Card priority date and implement storing the data in DynamoDB.
tags: amazonqdeveloper, terraform, lambda, dynamodb
canonical_url: "https://community.aws/content/2ihOgUm8x9k15XZ6aBfvUTlwmNN/fixing-err-i-mean-implementing-my-amazon-dynamodb-approach-with-gen-ai"
series: Building a Serverless Web Scraper with Amazon Q
---
## Introduction
In the [previous article](https://dev.to/aws/building-a-serverless-web-scraper-with-a-little-lot-of-help-from-amazon-q-5886), I started building a serverless app to scrape the USCIS priority dates from their website. I was able to extract the dates from the website, but the schema for my DynamoDB table needed a change to store the data. Today, I'm going to tackle this using [Amazon Q Developer](https://aws.amazon.com/developer/generative-ai/amazon-q/?trk=01b23196-71a4-403b-909c-65ddc571caa0&sc_channel=el).
The issue I ran into was how I was storing my data. I had the primary key as `pk = f"{filing_type}#{category}"` and the secondary key as `sk = f"{country}"`. The data that I'm trying to store has a date for when the Visa Bulletin was published (`bulletin_date`), and each bulletin has two tables I'm interested in for the priority date (`filing_type` for `Final Date` and `Application Date`). Each of those tables has a row per visa category (`category`), and column for the country groupings. The date for each combination of country and category is what I'm trying to store. This will allow me to query the historic data specific to me, with `category='3rd'`, `filing_type='Final Date'`, and `country='All Chargeability Areas Except Those Listed'`.
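To make the target query concrete, here is a tiny standalone sketch of the record shape and the filter I want the table design to support (all values are illustrative, not real bulletin data):

```python
# Illustrative records in the flat shape the scraper produces
# (all dates and values here are made up for this sketch).
records = [
    {'filing_type': 'Final Date', 'category': '3rd',
     'country': 'All Chargeability Areas Except Those Listed',
     'bulletin_date': '2024-07-01', 'date': '2021-12-01'},
    {'filing_type': 'Final Date', 'category': '2nd',
     'country': 'All Chargeability Areas Except Those Listed',
     'bulletin_date': '2024-07-01', 'date': '2023-03-15'},
    {'filing_type': 'Application Date', 'category': '3rd',
     'country': 'India',
     'bulletin_date': '2024-07-01', 'date': '2012-06-08'},
]

def my_history(records, filing_type='Final Date', category='3rd',
               country='All Chargeability Areas Except Those Listed'):
    """The query the table design needs to support, in plain Python."""
    return [r for r in records
            if r['filing_type'] == filing_type
            and r['category'] == category
            and r['country'] == country]

print(len(my_history(records)))  # 1
```

Whatever key layout the table ends up with, it has to make this exact filter cheap to express as a DynamoDB query.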
## Table Schema
The first item on my list today is a table schema that stores my data without overwriting entries, as happened in my previous attempt. I [update the schema](#prompt-1) using the current data structure, [update the `store_data` function](#prompt-2) with the structure below (and also [implement handling of table request throttling](#prompt-3)), with a [quick detour](#prompt-4) to dig into `PROVISIONED` vs `PAY_PER_REQUEST` for my table access. Now we can update our Terraform with the new table definition:
```hcl
resource "aws_dynamodb_table" "visa_bulletin_data" {
  name         = "VisaBulletinData"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "pk"
  range_key    = "sk"

  attribute {
    name = "pk"
    type = "S"
  }

  attribute {
    name = "sk"
    type = "S"
  }
}
```
The `bulletin_date` will be used for the partition key (`pk`), with a composite sort key (`sk`) built from the `filing_type`, `country`, and `category` to group the data. Our `pk` and `sk` will be defined in our code as:
```python
pk = f"BULLETIN_DATE#{bulletin_date.strftime('%Y-%m')}"
sk = f"FILING_TYPE#{filing_type}#COUNTRY#{country}#CATEGORY#{category}"
```
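As a quick sanity check on the key format, the construction can be wrapped in a small helper (the helper name is mine, not part of the app):

```python
from datetime import datetime

def build_keys(bulletin_date, filing_type, country, category):
    """Hypothetical helper mirroring the pk/sk format above."""
    pk = f"BULLETIN_DATE#{bulletin_date.strftime('%Y-%m')}"
    sk = f"FILING_TYPE#{filing_type}#COUNTRY#{country}#CATEGORY#{category}"
    return pk, sk

pk, sk = build_keys(datetime(2024, 7, 1), 'Final Date',
                    'All Chargeability Areas Except Those Listed', '3rd')
print(pk)  # BULLETIN_DATE#2024-07
print(sk)  # FILING_TYPE#Final Date#COUNTRY#All Chargeability Areas Except Those Listed#CATEGORY#3rd
```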
I went back and forth between provisioned capacity and on-demand; I like the idea of the retry logic, but in the end I settled on a simpler approach. Once I have everything working, I will do another pass to improve the app's code. Combining the database write/retry logic with how I currently fetch the data would cause issues: in the `scrape_visa_bulletin` function, I loop over each bulletin to extract the data and add its URL to the `ProcessedURLs` table so I don't process it again, but if writing to the database fails, those pages won't be reprocessed even though their data was never saved. We now have the following for our `store_data` function:
```python
def store_data(data):
    try:
        print("Storing data")
        for item in data:
            filing_type = item['filing_type']
            country = item['country']
            category = item['category']
            bulletin_date = datetime.strptime(item['bulletin_date'], "%Y-%m-%d")
            date = datetime.strptime(item['date'], "%Y-%m-%d")

            pk = f"BULLETIN_DATE#{bulletin_date.strftime('%Y-%m')}"
            sk = f"FILING_TYPE#{filing_type}#COUNTRY#{country}#CATEGORY#{category}"

            table.put_item(
                Item={
                    'pk': pk,
                    'sk': sk,
                    'filing_type': filing_type,
                    'country': country,
                    'category': category,
                    'bulletin_date': bulletin_date.strftime("%Y-%m-%d"),
                    'date': date.strftime("%Y-%m-%d")
                }
            )
        print("Done storing data")
    except Exception as e:
        print(f"Unable to store the data, error: {e}")
```
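A note on the reprocessing concern above: the fix I have in mind is to mark a URL as processed only after its data is stored successfully. A tiny standalone sketch of that ordering (an in-memory set stands in for the `ProcessedURLs` table; the function name is hypothetical):

```python
processed = set()  # in-memory stand-in for the ProcessedURLs table

def scrape_once(url, processed, store_ok=True):
    """Mark a URL processed only after its data is stored successfully."""
    if url in processed:
        return "skipped"
    if not store_ok:           # simulate a failed database write
        return "failed"        # URL stays unmarked, so it will be retried
    processed.add(url)
    return "stored"

url = "https://example.com/visa-bulletin-2024-07"  # placeholder URL
print(scrape_once(url, processed, store_ok=False))  # failed
print(scrape_once(url, processed))                  # stored
print(scrape_once(url, processed))                  # skipped
```

With this ordering, a failed write leaves the URL eligible for the next run instead of being silently skipped forever.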
After running `terraform apply`, grabbing some fresh coffee, finishing it, and cleaning my keyboard, I can test the updated code.

> `TIL`: it takes time to change a DynamoDB table's capacity mode.
```bash
aws_dynamodb_table.visa_bulletin_data: Modifications complete after 17m41s [id=VisaBulletinData]
aws_lambda_function.visa_bulletin_scraper: Modifying... [id=visa-bulletin-scraper]
aws_lambda_function.visa_bulletin_scraper: Still modifying... [id=visa-bulletin-scraper, 10s elapsed]
aws_lambda_function.visa_bulletin_scraper: Still modifying... [id=visa-bulletin-scraper, 20s elapsed]
aws_lambda_function.visa_bulletin_scraper: Modifications complete after 25s [id=visa-bulletin-scraper]
Apply complete! Resources: 1 added, 3 changed, 1 destroyed.
(package) ➜ uscis-priority-date-tracker git:(main) ✗
```
I commented out the lines that check whether a URL was previously processed, and ran the app again. Woohoo!! I can see my data in my table! It looks like what I need, but now I need to update the `read_data` method to retrieve it.

## Reading the Data
[Updating `read_data`](#prompt-5) to use the new structure and [printing the output](#prompt-6) produces the following code (I'm using defaults for now), after I first [queried using the wrong `pk` and `sk`](#prompt-7):
```python
def read_data(filing_type='Final Date', category='3rd', country='All Chargeability Areas Except Those Listed'):
    pk_prefix = "BULLETIN_DATE#"
    sk_prefix = f"FILING_TYPE#{filing_type}#COUNTRY#{country}#CATEGORY#{category}"

    response = table.query(
        KeyConditionExpression=Key('pk').begins_with(pk_prefix) & Key('sk').begins_with(sk_prefix),
        ScanIndexForward=False
    )

    items = response['Items']

    # Sort the items by bulletin_date in descending order
    sorted_items = sorted(items, key=lambda x: x['bulletin_date'], reverse=True)

    # Print the header
    print(f"Retrieved the [{filing_type}] data for [{category}] for [{country}]:")

    # Print each item
    for item in sorted_items:
        bulletin_date = item['bulletin_date']
        date = item['date']
        print(f"Bulletin {bulletin_date}: {date}")

    return sorted_items
```
And 💥. ***It does not work!!!***
```bash
Traceback (most recent call last):
  File "/Users/cobusb/projects/uscis-priority-date-tracker/src/local_test.py", line 21, in <module>
    result = lambda_handler(mock_event, mock_context)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cobusb/projects/uscis-priority-date-tracker/src/handler.py", line 190, in lambda_handler
    eb3_data = read_data()
               ^^^^^^^^^^^
  File "/Users/cobusb/projects/uscis-priority-date-tracker/src/handler.py", line 123, in read_data
    response = table.query(
               ^^^^^^^^^^^^
  File "/Users/cobusb/projects/terraform-samples/us-visa-dates-checker/src/package/lib/python3.12/site-packages/boto3/resources/factory.py", line 581, in do_action
    response = action(self, *args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cobusb/projects/terraform-samples/us-visa-dates-checker/src/package/lib/python3.12/site-packages/boto3/resources/action.py", line 88, in __call__
    response = getattr(parent.meta.client, operation_name)(*args, **params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cobusb/projects/terraform-samples/us-visa-dates-checker/src/package/lib/python3.12/site-packages/botocore/client.py", line 565, in _api_call
    return self._make_api_call(operation_name, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/cobusb/projects/terraform-samples/us-visa-dates-checker/src/package/lib/python3.12/site-packages/botocore/client.py", line 1021, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the Query operation: Query key condition not supported
```
Turns out that the partition key [needs to be an "equality condition"](#prompt-8). I also notice that the files referenced are from a different directory than where I moved my code to. Initially this was going to be part of a larger repo with Terraform examples, but I decided to split it out. I *think* this is due to using `venv` from that folder, but as long as my code runs, I'm going to ignore it for now. Let's fix the database first.
## Redoing the Table Again
[After checking if a schema change](#prompt-9) would fix it, I update both the `store_data` and `read_data` methods to use the new schema:
```python
def store_data(data):
    try:
        for item in data:
            filing_type = item['filing_type']
            country = item['country']
            category = item['category']
            bulletin_date = datetime.strptime(item['bulletin_date'], "%Y-%m-%d")
            date = datetime.strptime(item['date'], "%Y-%m-%d")

            pk = f"FILING_TYPE#{filing_type}#CATEGORY#{category}#COUNTRY#{country}"
            sk = f"BULLETIN_DATE#{bulletin_date.strftime('%Y-%m-%d')}"

            table.put_item(
                Item={
                    'pk': pk,
                    'sk': sk,
                    'filing_type': filing_type,
                    'country': country,
                    'category': category,
                    'bulletin_date': bulletin_date.strftime("%Y-%m-%d"),
                    'date': date.strftime("%Y-%m-%d")
                }
            )
    except Exception as e:
        print(f"Unable to store the data, error: {e}")


def read_data(filing_type='Final Date', category='3rd', country='All Chargeability Areas Except Those Listed'):
    pk = f"FILING_TYPE#{filing_type}#CATEGORY#{category}#COUNTRY#{country}"

    response = table.query(
        KeyConditionExpression=Key('pk').eq(pk),
        ScanIndexForward=False  # Reverse the order to get the latest bulletin_date first
    )

    items = response['Items']

    # Sort the items by bulletin_date in descending order
    sorted_items = sorted(items, key=lambda x: x['sk'], reverse=True)

    # Print the header
    print(f"Retrieved the [{filing_type}] data for [{category}] for [{country}]:")

    # Print each item
    for item in sorted_items:
        bulletin_date = item['bulletin_date']
        date = item['date']
        print(f"Bulletin {bulletin_date}: {date}")

    return sorted_items
```
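One detail worth calling out: sorting on the raw `sk` string only works chronologically because the embedded dates are zero-padded ISO strings, so lexicographic order matches date order. A quick standalone check (sample keys are illustrative):

```python
# Sort keys in the new format; zero-padded ISO dates make
# string comparison equivalent to date comparison.
sks = [
    "BULLETIN_DATE#2024-07-01",
    "BULLETIN_DATE#2023-12-01",
    "BULLETIN_DATE#2024-06-01",
]

newest_first = sorted(sks, reverse=True)
print(newest_first[0])  # BULLETIN_DATE#2024-07-01
```

This would break with a date format like `M/D/YYYY`, which is a good reason to stick with `YYYY-MM-DD` in the keys.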
While I've used DynamoDB quite a bit in the past, I haven't had to migrate data as part of a schema change. I'm going to save that for another day; for now, I just want a [way to delete all the data](#prompt-10) with the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/dynamodb/#cli-aws-dynamodb). Looking at the list of commands, I don't see an equivalent to SQL's `TRUNCATE TABLE` statement, and it [doesn't look like one exists](#prompt-11). I'm now very curious what will happen to the data, so I'm going to [update the table definition in Terraform](#prompt-12) and YOLO it.
Oh.
The table schema isn't changing, only how I store the data.
💡!!!
Ok, this makes sense. We aren't changing the table schema, only the data we are storing in it. I feel like I need to generate one of those [300](https://www.imdb.com/title/tt0416449/) memes with "THIS IS NOSQL!!!!!". To clear out the "old" data, I can either manually delete the items in batches via the AWS Console, write some extra code, or drop and recreate the table - dropping the table since this is Sparta :P
After another `terraform apply`, the code works! Here are the first few lines of output:
```text
Retrieved the [Final Date] data for [3rd] for [All Chargeability Areas Except Those Listed]:
Bulletin 2024-07-01: 2021-12-01
Bulletin 2024-06-01: 2022-11-22
Bulletin 2024-05-01: 2022-11-22
Bulletin 2024-04-01: 2022-11-22
Bulletin 2024-03-01: 2022-09-08
Bulletin 2024-02-01: 2022-09-01
Bulletin 2024-01-01: 2022-08-01
Bulletin 2023-12-01: 2021-12-01
```
## What I Learnt
After the back and forth to get the table structure right, and then that last change to the `pk` and `sk` values that didn't touch the table schema at all, I think I'm starting to think about the data structures in the right way. And to repeat the recap from the first piece: the overall process of trial and error, patching different solutions together, now feels different. Instead of searching across multiple browser tabs, I rarely leave my IDE.
I've added a todo for myself to read up on how to handle migrating existing data with DynamoDB, think that will be very interesting. In the past, I've done many large SQL based data migrations, and I honestly have no idea how I would do it for this app other than code to handle it. Probably would need to have that in a function, call it from the main method, and do another deployment to delete it. Looks like I have my next article topic after this series is done.
While generative AI has helped speed up building applications, you do need to take into account that it is very literal. Looking at the response [when I asked it for the AWS CLI command to delete all items in a DynamoDB table](#prompt-10), it provided a technically accurate and feasible solution, just not one you would really use.
## Final Code
As with the previous piece, I've tagged this commit with `article-2` and pushed the tag.
> [Source code for this article.](https://github.com/build-on-aws/uscis-priority-date-tracker/tree/article-2)
## Conclusion
I've been adding the prompts at the bottom of each piece so far, and I'll keep doing that for the rest of this series, as I'm trying to simulate watching and listening to someone live code, but in written form. The intent is to share the thought process instead of just the final, polished solution. This will still be turned into a single start-to-finish walkthrough without all the back and forth, but for that, I first need to finish all the parts still waiting for me. Next up, I'll update the function so I can pass in the parameters that are currently hard-coded as defaults for `read_data`. At least, that is the plan, right after I double-check the message I got after pushing this code:
```text
remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
remote:
remote: GitHub found 1 vulnerability on build-on-aws/uscis-priority-date-tracker's default branch (1 moderate). To find out more, visit:
remote: https://github.com/build-on-aws/uscis-priority-date-tracker/security/dependabot/1
```
*Stay tuned for the next installment of this series...*
---
## Using Provisioned Capacity
Initially I used provisioned capacity for my DynamoDB table, along with logic to handle the retries for me, here is the code that was removed with this logic:
```hcl
resource "aws_dynamodb_table" "visa_bulletin_data" {
  name           = "VisaBulletinData"
  billing_mode   = "PROVISIONED"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "pk"
  range_key      = "sk"

  attribute {
    name = "pk"
    type = "S"
  }

  attribute {
    name = "sk"
    type = "S"
  }
}
```
```python
def store_data(data):
    try:
        print("Storing data")
        for item in data:
            filing_type = item['filing_type']
            country = item['country']
            category = item['category']
            bulletin_date = datetime.strptime(item['bulletin_date'], "%Y-%m-%d")
            date = datetime.strptime(item['date'], "%Y-%m-%d")

            pk = f"BULLETIN_DATE#{bulletin_date.strftime('%Y-%m')}"
            sk = f"FILING_TYPE#{filing_type}#COUNTRY#{country}#CATEGORY#{category}"

            max_retries = 5
            retry_delay = 1
            max_delay = 60

            for attempt in range(max_retries):
                try:
                    table.put_item(
                        Item={
                            'pk': pk,
                            'sk': sk,
                            'filing_type': filing_type,
                            'country': country,
                            'category': category,
                            'bulletin_date': bulletin_date.strftime("%Y-%m-%d"),
                            'date': date.strftime("%Y-%m-%d")
                        }
                    )
                    break  # Exit the retry loop if the write succeeds
                except table.meta.client.exceptions.ProvisionedThroughputExceededException as e:
                    if attempt == max_retries - 1:
                        raise e
                    else:
                        # Apply exponential backoff with jitter
                        delay = retry_delay * (2 ** attempt) + random.uniform(0, 1)
                        delay = min(delay, max_delay)
                        print(f"Write capacity exceeded, retrying in {delay} seconds...")
                        time.sleep(delay)
                except Exception as e:
                    print(f"Error writing item to DynamoDB: {e}")
                    break
        print("Done storing data")
    except Exception as e:
        print(f"Unable to store the data, error: {e}")
```
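The backoff formula in that retry loop can be checked on its own; a standalone sketch with the same constants (base delays listed without jitter so the growth is visible):

```python
import random

def backoff_delay(attempt, retry_delay=1, max_delay=60):
    """Exponential backoff with jitter, same formula as the retry loop above."""
    delay = retry_delay * (2 ** attempt) + random.uniform(0, 1)
    return min(delay, max_delay)

# Base delays (jitter excluded) double each attempt until capped at max_delay:
bases = [min(1 * (2 ** attempt), 60) for attempt in range(7)]
print(bases)  # [1, 2, 4, 8, 16, 32, 60]
```

The jitter term spreads simultaneous retries apart, and the cap keeps a late attempt from sleeping for minutes.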
---
## Prompts used in this article
### Prompt 1
Can you help me define the table structure for DynamoDB using Terraform, I'm trying to store the following data? My data consists of a collection of objects, each with a BulletinDate for the month/year the data is from, and then split into 2 categories for the filing type called "priority date" and "filing date". For each of these filings, I will have a collection of data defined by 2 dimensions: Country, and Category. So for a single BulletinDate, I would have a collection of data using Country and Category as the key, with PriorityDate as the value, and the collection needs to be grouped by FilingType for "priority date" and "filing date". Can you please show me how to create this table?
**Response:**
Sure, I can help you define the table structure for DynamoDB using Terraform. Based on your data structure, we can create a single table with a composite primary key consisting of a partition key and a sort key. [1]
Here's how you can define the table structure in your Terraform code:
```hcl
resource "aws_dynamodb_table" "visa_bulletin_data" {
  name           = "VisaBulletinData"
  billing_mode   = "PROVISIONED" # or "PAY_PER_REQUEST"
  read_capacity  = 5 # Set appropriate read/write capacity units for provisioned mode
  write_capacity = 5
  hash_key       = "pk" # Partition key
  range_key      = "sk" # Sort key

  attribute {
    name = "pk"
    type = "S" # String type
  }

  attribute {
    name = "sk"
    type = "S" # String type
  }

  # Additional settings, global secondary indexes, etc.
}
```
In this structure, we're using a composite primary key with a partition key (`pk`) and a sort key (`sk`). Both keys are defined as strings (`S`).
To store your data, you can use the following format for the primary key:
```python
pk = "BULLETIN_DATE#<BulletinDate>"
sk = "FILING_TYPE#<FilingType>#COUNTRY#<Country>#CATEGORY#<Category>"
```
For example, if you have a `BulletinDate` of "2023-05", a `FilingType` of "priority_date", a `Country` of "India", and a `Category` of "EB3", the primary key would be: [2]
```python
pk = "BULLETIN_DATE#2023-05"
sk = "FILING_TYPE#priority_date#COUNTRY#India#CATEGORY#EB3"
```
With this structure, you can store the `PriorityDate` value as an attribute in the DynamoDB item.
You can also consider creating a Global Secondary Index (GSI) on the `pk` and `sk` attributes to allow efficient querying based on different combinations of the primary key attributes.
Note: Make sure to replace the placeholders (`<BulletinDate>`, `<FilingType>`, `<Country>`, and `<Category>`) with your actual values when constructing the primary key.
Sources:
1. [Creating a table - Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.CreateTable.html)
2. [Create a table with a resource-based policy - Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/rbac-create-table.html)
---
### Prompt 2
Thanks, given that I have my source data added to a flat collection with the code below, can you please provide the python code to store this data into the table above? Here's how I store the data in my collection:
```python
employment_based_data.append({
'filing_type': filing_type,
'country': countries[cell_id - 1],
'category': category_value,
'bulletin_date': bulletin_date.strftime("%Y-%m-%d"),
'date': cell_date.strftime("%Y-%m-%d")
})
```
**Response:**
Sure, I can provide you with the Python code to store the data in the DynamoDB table using the structure we defined earlier. Here's an example: [1]
```python
import boto3
from datetime import datetime

# Initialize DynamoDB resource
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('VisaBulletinData')

def store_data(data):
    for item in data:
        filing_type = item['filing_type']
        country = item['country']
        category = item['category']
        bulletin_date = datetime.strptime(item['bulletin_date'], "%Y-%m-%d")
        date = datetime.strptime(item['date'], "%Y-%m-%d")

        pk = f"BULLETIN_DATE#{bulletin_date.strftime('%Y-%m')}"
        sk = f"FILING_TYPE#{filing_type}#COUNTRY#{country}#CATEGORY#{category}"

        table.put_item(
            Item={
                'pk': pk,
                'sk': sk,
                'filing_type': filing_type,
                'country': country,
                'category': category,
                'bulletin_date': bulletin_date.strftime("%Y-%m-%d"),
                'date': date.strftime("%Y-%m-%d")
            }
        )

# Assuming you have the employment_based_data list populated
store_data(employment_based_data)
```
Here's how the `store_data` function works:
1. We iterate over each item in the `employment_based_data` list.
2. For each item, we extract the `filing_type`, `country`, `category`, `bulletin_date`, and `date` values.
3. We construct the partition key (`pk`) using the `BULLETIN_DATE` prefix and the `bulletin_date` in the format `YYYY-MM`.
4. We construct the sort key (`sk`) using the `FILING_TYPE`, `filing_type`, `COUNTRY`, `country`, and `CATEGORY`, `category` values.
5. We use the `put_item` method of the DynamoDB table to store the item with the constructed primary key ( `pk` and `sk`) and the other attributes ( `filing_type`, `country`, `category`, `bulletin_date`, and `date`). [2]
Note that we're using the `datetime` module to convert the `bulletin_date` and `date` strings to `datetime` objects, and then back to strings in the desired format (`YYYY-MM` for `bulletin_date` and `YYYY-MM-DD` for `date`).
After running this code, your data should be stored in the DynamoDB table with the defined structure. [3]
Sources:
1. [Create a DynamoDB table - Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.CreateTable.html)
2. [Change the read and write capacity mode for a DynamoDB table | AWS re:Post](https://repost.aws/knowledge-center/dynamodb-change-read-write-capacity-mode)
3. [Exploring Amazon DynamoDB SDK clients | AWS Database Blog](https://aws.amazon.com/blogs/database/exploring-amazon-dynamodb-sdk-clients/)
---
### Prompt 3
Can you update this code to be able to handle exceeding the write capacity for my table with some kind of graceful backoff and retry logic?
**Response:**
Sure, I can update the code to handle exceeding the write capacity for your DynamoDB table by implementing a backoff and retry logic. Here's an example of how you can modify the `store_data` function to achieve this:
```python
import boto3
from datetime import datetime
import time
import random
import math

# Initialize DynamoDB resource
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('VisaBulletinData')

def store_data(data):
    for item in data:
        filing_type = item['filing_type']
        country = item['country']
        category = item['category']
        bulletin_date = datetime.strptime(item['bulletin_date'], "%Y-%m-%d")
        date = datetime.strptime(item['date'], "%Y-%m-%d")

        pk = f"BULLETIN_DATE#{bulletin_date.strftime('%Y-%m')}"
        sk = f"FILING_TYPE#{filing_type}#COUNTRY#{country}#CATEGORY#{category}"

        max_retries = 5  # Maximum number of retries
        retry_delay = 1  # Initial retry delay in seconds
        max_delay = 60   # Maximum retry delay in seconds

        for attempt in range(max_retries):
            try:
                table.put_item(
                    Item={
                        'pk': pk,
                        'sk': sk,
                        'filing_type': filing_type,
                        'country': country,
                        'category': category,
                        'bulletin_date': bulletin_date.strftime("%Y-%m-%d"),
                        'date': date.strftime("%Y-%m-%d")
                    }
                )
                break  # Exit the retry loop if the write succeeds
            except table.meta.client.exceptions.ProvisionedThroughputExceededException as e:
                if attempt == max_retries - 1:
                    raise e  # Raise the exception if the maximum number of retries is reached
                else:
                    # Apply exponential backoff with jitter
                    delay = retry_delay * (2 ** attempt) + random.uniform(0, 1)
                    delay = min(delay, max_delay)
                    print(f"Write capacity exceeded, retrying in {delay} seconds...")
                    time.sleep(delay)
            except Exception as e:
                print(f"Error writing item to DynamoDB: {e}")
                break

# Assuming you have the employment_based_data list populated
store_data(employment_based_data)
```
Here's what we've added to the `store_data` function:
1. We define the maximum number of retries (`max_retries`) and the initial retry delay (`retry_delay`) in seconds.
2. We also define the maximum retry delay (`max_delay`) in seconds to prevent excessive delays.
3. We wrap the `put_item` operation in a `for` loop that runs up to `max_retries` times.
4. If the `ProvisionedThroughputExceededException` is raised, indicating that the write capacity has been exceeded, we apply an exponential backoff with jitter before retrying the write operation.
5. The exponential backoff delay is calculated as `retry_delay * (2 ** attempt) + random.uniform(0, 1)`, where `attempt` is the current retry attempt. This introduces a random jitter to prevent multiple clients from retrying at the same time.
6. The calculated delay is capped at `max_delay` to prevent excessive delays.
7. If the maximum number of retries is reached, the `ProvisionedThroughputExceededException` is raised, allowing you to handle it as needed.
8. For any other exceptions, we print an error message and break out of the retry loop.
With this implementation, the code will retry writing to the DynamoDB table if the write capacity is exceeded, with an exponential backoff and jitter to avoid overwhelming the table with retries. If the maximum number of retries is reached, the exception is raised, allowing you to handle it appropriately (e.g., logging, retrying later, etc.).
Note that this implementation assumes that you're using the provisioned capacity mode for your DynamoDB table. If you're using the on-demand capacity mode, you don't need to handle the `ProvisionedThroughputExceededException` specifically, as DynamoDB will automatically handle the scaling for you.
---
### Prompt 4
Can you explain the difference between the provisioned capacity and on-demand for DynamoDB, and include how it would affect accessing my data?
**Response:**
Certainly! Amazon DynamoDB offers two capacity modes: provisioned capacity and on-demand. The choice between these modes affects the way you manage capacity and pay for the service. Here's an explanation of the differences and how they impact costs: [1]
**Provisioned Capacity Mode:**
* In this mode, you specify the number of read and write capacity units you expect your application to require.
* You are billed for the provisioned capacity, regardless of whether you consume it or not.
* Provisioned capacity is suitable for workloads with predictable or consistent traffic patterns, where you can forecast your capacity requirements. [2]
* You can enable auto scaling to automatically adjust the provisioned capacity based on your utilization, helping to optimize costs and performance.
* If your application exceeds the provisioned capacity, you may experience throttling, which can be mitigated by increasing the provisioned capacity or enabling auto scaling.
**On-Demand Capacity Mode:**
* In this mode, you don't need to provision any capacity upfront.
* You pay for the data reads and writes your application performs, based on the number of read and write request units consumed.
* On-demand capacity is ideal for workloads with unpredictable or [3]
Sources:
1. [On-demand capacity mode - Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/on-demand-capacity-mode.html)
---
### Prompt 5
Thanks, can you show me how to update the read_data function using this new table schema so that I can pass filing_type, category, and country to return a sorted list? It should use the date bulletin_date for the sort.
**Response:**
Sure, here's how you can update the `read_data` function to retrieve and sort the data based on the `filing_type`, `category`, and `country` parameters, using the `bulletin_date` for sorting:
```python
from boto3.dynamodb.conditions import Key

def read_data(filing_type='Final Date', category='3rd', country='All Chargeability Areas Except Those Listed'):
    pk = f"FILING_TYPE#{filing_type}#CATEGORY#{category}"
    sk_prefix = f"COUNTRY#{country}"

    response = table.query(
        KeyConditionExpression=Key('pk').eq(pk) & Key('sk').begins_with(sk_prefix),
        ScanIndexForward=False  # Reverse the order to get the latest bulletin_date first
    )

    items = response['Items']

    # Sort the items by bulletin_date in descending order
    sorted_items = sorted(items, key=lambda x: x['bulletin_date'], reverse=True)

    return sorted_items
```
Here's how the `read_data` function works:
1. We construct the partition key (`pk`) using the `FILING_TYPE`, `filing_type`, and `CATEGORY`, `category` values.
2. We construct the sort key prefix (`sk_prefix`) using the `COUNTRY` and `country` values.
3. We use the `query` method of the DynamoDB table to retrieve items where the partition key ( `pk`) matches the constructed value, and the sort key ( `sk`) begins with the constructed prefix ( `sk_prefix`).
4. We set the `ScanIndexForward` parameter to `False` to reverse the order of the results, so that the latest `bulletin_date` is returned first.
5. We store the retrieved items in the `items` variable.
6. We sort the `items` list in descending order based on the `bulletin_date` attribute using the `sorted` function and a lambda function as the `key`.
7. We return the sorted list of items.
To use the `read_data` function, you can call it with the desired `filing_type`, `category`, and `country` parameters:
```python
# Retrieve and sort data for 'Final Date', '3rd' category, and 'All Chargeability Areas Except Those Listed' country
sorted_data = read_data()
# Retrieve and sort data for 'Filing Date', 'EB2' category, and 'India' country
sorted_data = read_data(filing_type='Filing Date', category='EB2', country='India')
```
The `sorted_data` variable will contain a list of items sorted in descending order by `bulletin_date`, filtered by the specified `filing_type`, `category`, and `country`.
Note that the `query` operation in DynamoDB is efficient for retrieving data based on the primary key (partition key and sort key). If you need to filter the data based on other attributes, you may need to consider using a Global Secondary Index (GSI) or performing a scan operation, which can be less efficient for large datasets.
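> To make the `pk`/`sk` mechanics concrete, here's a quick pure-Python simulation (no AWS calls, made-up sample rows) of the partition-key equality plus sort-key prefix match that the `query` performs:

```python
# Illustrative only: simulates Key('pk').eq(...) & Key('sk').begins_with(...)
rows = [
    {"pk": "FILING_TYPE#Final Date#CATEGORY#3rd", "sk": "COUNTRY#India#BULLETIN_DATE#2024-07-01"},
    {"pk": "FILING_TYPE#Final Date#CATEGORY#3rd", "sk": "COUNTRY#China#BULLETIN_DATE#2024-07-01"},
    {"pk": "FILING_TYPE#Filing Date#CATEGORY#EB2", "sk": "COUNTRY#India#BULLETIN_DATE#2024-07-01"},
]
pk = "FILING_TYPE#Final Date#CATEGORY#3rd"
sk_prefix = "COUNTRY#India"
# Only rows whose pk matches exactly AND whose sk starts with the prefix survive
matches = [r for r in rows if r["pk"] == pk and r["sk"].startswith(sk_prefix)]
```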
---
### Prompt 6
Thanks, can you now show me how I would use the returned data to print out a line per entry, with the values from the input parameters? It should have the following first (with the [] around the values for the variables):
Retrieved the [Final Date] data for [EB3] for [All Chargeability Areas Except Those Listed]:
And then a line per entry with this format:
Bulletin 2024-07-01: 2021-11-01
> The code provided by this response isn't quite correct, it is using the previous table schema for the `pk` and `sk`, so I just copied the lines from `store_data` to replace them.
**Response:**
Certainly! Here's how you can modify the `read_data` function to print the desired output format:
```python
from boto3.dynamodb.conditions import Key
def read_data(filing_type='Final Date', category='3rd', country='All Chargeability Areas Except Those Listed'):
pk = f"FILING_TYPE#{filing_type}#CATEGORY#{category}"
sk_prefix = f"COUNTRY#{country}"
response = table.query(
KeyConditionExpression=Key('pk').eq(pk) & Key('sk').begins_with(sk_prefix),
ScanIndexForward=False # Reverse the order to get the latest bulletin_date first
)
items = response['Items']
# Sort the items by bulletin_date in descending order
sorted_items = sorted(items, key=lambda x: x['bulletin_date'], reverse=True)
# Print the header
print(f"Retrieved the [{filing_type}] data for [{category}] for [{country}]:")
# Print each item
for item in sorted_items:
bulletin_date = item['bulletin_date']
date = item['date']
print(f"Bulletin {bulletin_date}: {date}")
return sorted_items
```
In this updated version of the `read_data` function, we've added the following changes:
1. After sorting the items, we print the header line using the `print` function and an f-string. The header line will be in the format:
```text
Retrieved the [Final Date] data for [EB3] for [All Chargeability Areas Except Those Listed]:
```
2. We then iterate over the `sorted_items` list using a `for` loop.
3. For each item, we extract the `bulletin_date` and `date` values.
4. We print a line for each item using the `print` function and an f-string. The line will be in the format:
```text
Bulletin 2024-07-01: 2021-11-01
```
To use the updated `read_data` function, you can call it with the desired `filing_type`, `category`, and `country` parameters:
```python
# Retrieve and print data for 'Final Date', '3rd' category, and 'All Chargeability Areas Except Those Listed' country
read_data()
# Retrieve and print data for 'Filing Date', 'EB2' category, and 'India' country
read_data(filing_type='Filing Date', category='EB2', country='India')
```
The output will be printed to the console, with the header line followed by a line for each item in the sorted list, displaying the `bulletin_date` and `date` values.
Note that this implementation assumes that you have the necessary data in your DynamoDB table with the expected attributes ( `bulletin_date` and `date`). If the data structure or attribute names are different, you'll need to adjust the code accordingly.
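> A quick sanity check of the output format requested in the prompt — plain string formatting, no DynamoDB involved:

```python
# Reproduces the header and per-entry line formats described above
filing_type = "Final Date"
category = "EB3"
country = "All Chargeability Areas Except Those Listed"
header = f"Retrieved the [{filing_type}] data for [{category}] for [{country}]:"
entry = "Bulletin 2024-07-01: 2021-11-01"
print(header)
print(entry)
```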
---
### Prompt 7
This doesn't look correct, can you please recheck as earlier in the conversation we defined pk and sk for the table when storing data using this:
```python
pk = f"BULLETIN_DATE#{bulletin_date.strftime('%Y-%m')}"
sk = f"FILING_TYPE#{filing_type}#COUNTRY#{country}#CATEGORY#{category}"
```
**Response:**
You're absolutely right, my apologies. I should have used the correct primary key structure that we defined earlier. Here's the updated `read_data` function with the correct primary key construction:
```python
from boto3.dynamodb.conditions import Key
def read_data(filing_type='Final Date', category='3rd', country='All Chargeability Areas Except Those Listed'):
pk_prefix = "BULLETIN_DATE#"
sk_prefix = f"FILING_TYPE#{filing_type}#COUNTRY#{country}#CATEGORY#{category}"
response = table.query(
KeyConditionExpression=Key('pk').begins_with(pk_prefix) & Key('sk').begins_with(sk_prefix),
ScanIndexForward=False # Reverse the order to get the latest bulletin_date first
)
items = response['Items']
# Sort the items by bulletin_date in descending order
sorted_items = sorted(items, key=lambda x: x['bulletin_date'], reverse=True)
# Print the header
print(f"Retrieved the [{filing_type}] data for [{category}] for [{country}]:")
# Print each item
for item in sorted_items:
bulletin_date = item['bulletin_date']
date = item['date']
print(f"Bulletin {bulletin_date}: {date}")
return sorted_items
```
Here are the changes:
1. We define the `pk_prefix` as `"BULLETIN_DATE#"` to match the partition key structure we defined earlier.
2. We construct the `sk_prefix` using the `filing_type`, `country`, and `category` values, following the sort key structure we defined earlier.
3. In the `KeyConditionExpression` of the `query` method, we use the `begins_with` condition for both the partition key ( `pk`) and the sort key ( `sk`) to retrieve items where the `pk` begins with `"BULLETIN_DATE#"` and the `sk` begins with the constructed `sk_prefix`.
With these changes, the `read_data` function should now correctly retrieve and print the data based on the primary key structure we defined earlier.
Thank you for catching my mistake. Please let me know if this updated version looks correct.
---
### Prompt 8
When I use this code, it errors with:
```text
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the Query operation: Query key condition not supported
```
**Response:**
Ah, I see the issue now. The `ValidationException` is occurring because the `KeyConditionExpression` we're using is not supported for the `query` operation in DynamoDB. [1]
The `query` operation in DynamoDB requires the partition key to be an equality condition, and the sort key can either be an equality condition or a range condition (using comparison operators or the `begins_with` function).
To fix this issue, we need to modify the `read_data` function to use the partition key as an equality condition and the sort key as a range condition. Here's the updated code:
```python
from boto3.dynamodb.conditions import Key
def read_data(bulletin_date, filing_type='Final Date', category='3rd', country='All Chargeability Areas Except Those Listed'):
    pk = f"BULLETIN_DATE#{bulletin_date.strftime('%Y-%m')}"
sk_prefix = f"FILING_TYPE#{filing_type}#COUNTRY#{country}#CATEGORY#{category}"
response = table.query(
KeyConditionExpression=Key('pk').eq(pk) & Key('sk').begins_with(sk_prefix),
ScanIndexForward=False # Reverse the order to get the latest bulletin_date first
)
items = response['Items']
# Sort the items by bulletin_date in descending order
sorted_items = sorted(items, key=lambda x: x['bulletin_date'], reverse=True)
# Print the header
print(f"Retrieved the [{filing_type}] data for [{category}] for [{country}]:")
# Print each item
for item in sorted_items:
bulletin_date = item['bulletin_date']
date = item['date']
print(f"Bulletin {bulletin_date}: {date}")
return sorted_items
```
Here are the changes:
1. We construct the partition key ( `pk`) using the `BULLETIN_DATE` prefix and the `bulletin_date` in the format `YYYY-MM`.
2. In the `KeyConditionExpression` of the `query` method, we use the `eq` condition for the partition key (`pk`) to match the constructed `pk` value exactly.
3. For the sort key (`sk`), we use the `begins_with` condition with the constructed `sk_prefix`.
By using the `eq` condition for the partition key and the `begins_with` condition for the sort key, we should be able to query the data correctly without encountering the `ValidationException`.
Note that you'll need to pass the `bulletin_date` parameter to the `read_data` function, or you can use a default value if you want to retrieve the latest data by default.
Please let me know if this updated code works for you or if you have any further questions.
Sources:
1. [Key condition expressions for the Query operation - Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.KeyConditionExpressions.html)
---
### Prompt 9
Ok, given the requirement for an equality condition on the primary key, can you suggest what data we should use for the primary key given this constraint? I want to be able to retrieve the data for a specific filing_type, category, and country.
**Response:**
Given the requirement for an equality condition on the partition key, we can modify the primary key structure to accommodate retrieving data for a specific `filing_type`, `category`, and `country`.
Here's a suggested primary key structure:
```python
pk = "FILING_TYPE#<filing_type>#CATEGORY#<category>#COUNTRY#<country>"
sk = "BULLETIN_DATE#<bulletin_date>"
```
With this structure, the partition key ( `pk`) will be a composite key consisting of the `filing_type`, `category`, and `country` values, separated by a delimiter (`#`). The sort key ( `sk`) will be the `bulletin_date`.
Here's how you can update the `store_data` and `read_data` functions to use this new primary key structure:
```python
def store_data(data):
for item in data:
filing_type = item['filing_type']
country = item['country']
category = item['category']
bulletin_date = datetime.strptime(item['bulletin_date'], "%Y-%m-%d")
date = datetime.strptime(item['date'], "%Y-%m-%d")
pk = f"FILING_TYPE#{filing_type}#CATEGORY#{category}#COUNTRY#{country}"
sk = f"BULLETIN_DATE#{bulletin_date.strftime('%Y-%m-%d')}"
table.put_item(
Item={
'pk': pk,
'sk': sk,
'filing_type': filing_type,
'country': country,
'category': category,
'bulletin_date': bulletin_date.strftime("%Y-%m-%d"),
'date': date.strftime("%Y-%m-%d")
}
)
def read_data(filing_type, category, country):
pk = f"FILING_TYPE#{filing_type}#CATEGORY#{category}#COUNTRY#{country}"
response = table.query(
KeyConditionExpression=Key('pk').eq(pk),
ScanIndexForward=False # Reverse the order to get the latest bulletin_date first
)
items = response['Items']
# Sort the items by bulletin_date in descending order
sorted_items = sorted(items, key=lambda x: x['sk'], reverse=True)
# Print the header
print(f"Retrieved the [{filing_type}] data for [{category}] for [{country}]:")
# Print each item
    for item in sorted_items:
        bulletin_date = item['sk'].split('#')[1]  # extract the date from the sort key
        date = item['date']
        print(f"Bulletin {bulletin_date}: {date}")
return sorted_items
```
In the `store_data` function:
1. We construct the partition key (`pk`) from the `FILING_TYPE`, `CATEGORY`, and `COUNTRY` prefixes together with the `filing_type`, `category`, and `country` values.
2. We construct the sort key ( `sk`) using the `BULLETIN_DATE` prefix and the `bulletin_date` value in the format `YYYY-MM-DD`.
In the `read_data` function:
1. We construct the partition key ( `pk`) using the `filing_type`, `category`, and `country` values passed as arguments.
2. We use the `query` method with the `KeyConditionExpression` set to `Key('pk').eq(pk)` to retrieve items where the partition key matches the constructed `pk` value exactly.
3. We sort the retrieved items by the sort key ( `sk`) in descending order.
4. When printing the data, we extract the `bulletin_date` from the sort key ( `sk`) by splitting it on the `#` delimiter and taking the second part.
With this primary key structure, you can retrieve data for a specific `filing_type`, `category`, and `country` combination by passing those values to the `read_data` function:
```python
# Retrieve data for 'Final Date', 'EB3', and 'All Chargeability Areas Except Those Listed'
read_data('Final Date', 'EB3', 'All Chargeability Areas Except Those Listed')
# Retrieve data for 'Filing Date', 'EB2', and 'India'
read_data('Filing Date', 'EB2', 'India')
```
This approach ensures that you can query the data efficiently using the partition key equality condition, while still being able to retrieve data for specific combinations of `filing_type`, `category`, and `country`.
Please note that if you need to query the data based on different combinations of these attributes, you may need to consider creating additional Global Secondary Indexes (GSIs) to optimize those queries.
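> Side note: sorting on the sort-key string works here because the bulletin dates are zero-padded ISO strings, so lexicographic order equals chronological order. A tiny sketch:

```python
# Zero-padded ISO dates in the sort key sort chronologically as plain strings
sks = [
    "BULLETIN_DATE#2023-12-01",
    "BULLETIN_DATE#2024-07-01",
    "BULLETIN_DATE#2024-01-01",
]
latest_first = sorted(sks, reverse=True)  # newest bulletin first
```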
---
### Prompt 10
Thanks, what is the easiest way to delete all my data from my DynamoDB table since the schema has changed? If I can do it with the AWS CLI, can you show me the command please?
**Response:**
To delete all data from your DynamoDB table using the AWS CLI, you can use the `scan` operation to retrieve all items in the table, and then use the `batch-write-item` operation to delete them in batches. [1]
Here's the command to scan and retrieve all items from the table:
```bash
aws dynamodb scan --table-name VisaBulletinData --output json > items.json
```
This command will scan the `VisaBulletinData` table and store the retrieved items in a JSON file named `items.json`.
Next, you can use the following Python script to delete the items in batches using the `batch-write-item` operation:
```python
import json
import boto3
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('VisaBulletinData')
# Load the items from the JSON file
with open('items.json', 'r') as f:
items = json.load(f)['Items']
# Batch size for the write operation
batch_size = 25
# Delete items in batches
for i in range(0, len(items), batch_size):
    batch = items[i:i+batch_size]
    with table.batch_writer() as batch_writer:
        for item in batch:
            # The CLI scan output wraps each attribute in a type key, e.g. {'S': '...'}
            batch_writer.delete_item(
                Key={
                    'pk': item['pk']['S'],
                    'sk': item['sk']['S']
                }
            )
    print(f"Deleted batch {i//batch_size+1}")
print("All items deleted from the table.")
```
Here's how the script works:
1. It loads the items from the `items.json` file created by the `scan` operation.
2. It defines a `batch_size` for the `batch-write-item` operation. In this example, it's set to 25, but you can adjust it based on your requirements.
3. It iterates over the items in batches of `batch_size`.
4. For each batch, it uses the `batch_writer` context manager to perform the `batch-write-item` operation.
5. Inside the `batch_writer` context, it iterates over the items in the batch and calls the `delete_item` method with the primary key ( `pk` and `sk`) of each item.
6. After each batch is processed, it prints a message indicating the batch number.
7. Finally, it prints a message indicating that all items have been deleted from the table.
Make sure to replace `'VisaBulletinData'` with the name of your DynamoDB table.
Note: This approach assumes that your table doesn't have any Global Secondary Indexes (GSIs). If you have GSIs, you'll need to delete the data from the GSIs as well, or consider recreating the table if it's more convenient.
Sources:
1. [Use DeleteTable with an AWS SDK or CLI - Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/example_dynamodb_DeleteTable_section.html)
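> The slicing in the delete loop splits the item list into chunks of at most `batch_size`; here's that arithmetic in isolation:

```python
# 60 dummy items split into batches of up to 25
items = list(range(60))
batch_size = 25
batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
lengths = [len(b) for b in batches]  # last batch holds the remainder
```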
---
### Prompt 11
Does DynamoDB have the equivalent of SQL's "Truncate table"?
**Response:**
Yes, DynamoDB does have an equivalent of SQL's "TRUNCATE TABLE" operation. It's called the `DeleteTable` operation, which deletes the entire table and all its data. [1]
Here's how you can use the AWS CLI to delete (truncate) a DynamoDB table:
```bash
aws dynamodb delete-table --table-name VisaBulletinData
```
Replace `VisaBulletinData` with the name of your DynamoDB table.
This command will delete the entire table, including all its data and any Global Secondary Indexes (GSIs) associated with the table.
After running this command, the table will be in the `DELETING` state for a short period, and then it will be completely removed from DynamoDB.
If you want to recreate the table with the same name and structure after deleting it, you'll need to wait for the `DeleteTable` operation to complete before creating a new table with the same name.
Alternatively, you can use the AWS Management Console, AWS SDKs, or the DynamoDB API to perform the `DeleteTable` operation. [2]
It's important to note that the `DeleteTable` operation is an irreversible operation, and all data in the table will be permanently deleted. If you need to keep a backup of the data, you should export it before running the `DeleteTable` operation.
Sources:
1. [Removing a table - Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.RemoveTable.html)
2. [Deleting data from a table - Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.DeleteData.html)
---
### Prompt 12
Thanks, can you provide me with the updated Terraform to create this table please? Also change it to use on-demand pricing instead of provisioned.
> Interesting that `ttl` and `tags` suddenly pop up in this response, I'm going to strip that out as I don't need or want that.
**Response:**
Sure, here's the updated Terraform code to create the DynamoDB table with the new schema and using on-demand pricing: [1]
```hcl
resource "aws_dynamodb_table" "visa_bulletin_data" {
name = "VisaBulletinData"
billing_mode = "PAY_PER_REQUEST"
hash_key = "pk"
range_key = "sk"
attribute {
name = "pk"
type = "S"
}
attribute {
name = "sk"
type = "S"
}
ttl {
attribute_name = "TimeToExist"
enabled = false
}
tags = {
Name = "VisaBulletinData"
Environment = "production"
}
}
```
Here's what the code does:
1. It defines an `aws_dynamodb_table` resource with the name `visa_bulletin_data`.
2. The `name` attribute sets the name of the DynamoDB table to `VisaBulletinData`.
3. The `billing_mode` is set to `PAY_PER_REQUEST`, which enables on-demand pricing for the table. [2]
4. The `hash_key` and `range_key` attributes define the partition key ( `pk`) and sort key ( `sk`) for the table, respectively.
5. The `attribute` blocks define the attributes for the partition key ( `pk`) and sort key ( `sk`), both of which are strings ( `S`).
6. The `ttl` block is used to configure the Time to Live (TTL) feature for the table. In this case, it's disabled by setting `enabled` to `false`.
7. The `tags` block defines two tags for the DynamoDB table: `Name` and `Environment`.
After applying this Terraform code, it will create a new DynamoDB table named `VisaBulletinData` with the specified primary key structure ( `pk` and `sk`) and using on-demand pricing.
Note that you'll need to have the AWS provider configured in your Terraform code, and you'll need to run `terraform init` and `terraform apply` to create the DynamoDB table.
Also, keep in mind that with on-demand pricing, you'll be charged for the read and write requests made to the table, rather than provisioning read and write capacity units upfront. You can refer to the AWS DynamoDB pricing documentation for more details on the on-demand pricing model.
Sources:
1. [Amazon DynamoDB Pricing for On-Demand Capacity](https://aws.amazon.com/dynamodb/pricing/on-demand/)
*Author: cobusbernard*

---

# How to Send Emails with Email API: Practical Examples in Popular Languages and Frameworks

*Published 2024-07-02 · originally at [mailtrap.io](https://mailtrap.io/blog/api-send-email/)*

In this article, I'll show you how to send emails using an email API in various programming languages and frameworks.
I’ll also break down the differences between SMTP and email APIs, but if you’re already aware of them and your needs, feel free to skip ahead by clicking on some of the following links:
- [PHP](https://mailtrap.io/blog/api-send-email/#Send-emails-in-PHP)
- [Laravel](https://mailtrap.io/blog/api-send-email/#Send-emails-in-Laravel)
- [Python](https://mailtrap.io/blog/api-send-email/#Send-emails-in-Python)
- [Node.js](https://mailtrap.io/blog/api-send-email/#Send-emails-in-Nodejs)
_Note that as my API of choice, I’ll be using [Email API](https://mailtrap.io/email-sending/), a part of the Mailtrap Email Delivery Platform. However, the core principles in this article can be applied to any [email API provider](https://mailtrap.io/blog/best-email-api/)._
## Setting up Mailtrap Email API
Mailtrap Email API is based on [REST principles](https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm), which makes it a [REST, or RESTful, API](https://www.ibm.com/topics/rest-apis). These types of APIs offer greater flexibility and, as you'll notice, are easy to work with, since they use standard HTTP methods (e.g., GET, POST, PUT, DELETE) that most developers are familiar with.
Here’s what you need to do to get started with Mailtrap API and integrate it into your project:
- **Sign up for a [free Mailtrap account](https://mailtrap.io/signin)**: You can sign up using your Google, GitHub, or Office 365 accounts, or, if you prefer, simply use your email address.

- **Verify your domain**: Navigate to the “Sending Domains” tab and click “Add Domain.” Enter your domain name and confirm by clicking the “Add” button.

- **Update DNS records**: Mailtrap will provide and automatically authenticate specific DNS records (SPF, DKIM, DMARC) that should be added to your domain provider’s DNS settings.
_Tip: here’s a detailed [Getting Started Guide](https://help.mailtrap.io/article/110-get-started-mailtrap-email-sending) to help you out._ 👀
Once your domain is verified, you can integrate your application with Mailtrap’s Email API.
- **Access API credentials**: Go to the Sending Domains tab, select your verified domain, and open the “Integration” tab. Here, you’ll find credentials for both Transactional and Bulk streams.

- **Build your HTTP request**: Use the provided API credentials to configure an authenticated HTTP request in your preferred programming language or framework.
- Mailtrap also offers and maintains official SDKs for [PHP](https://github.com/railsware/mailtrap-php), [Python](https://github.com/railsware/mailtrap-python), [Ruby](https://github.com/railsware/mailtrap-ruby), [Node.js](https://github.com/railsware/mailtrap-nodejs), and [Elixir](https://github.com/railsware/mailtrap-elixir). I’ll show you how to use them in a jiffy.
- **Run your script**: Execute your email sending script. If everything is set up correctly, your email will land in the recipient’s inbox, and you will see it in the “Email Logs” section.

**Note**: Each domain has unique API tokens. To manage these, go to Settings → API Tokens and click “Add Token” if you need additional tokens. Additionally, Mailtrap’s API supports email templates, attachments, custom variables, and email categories. For detailed information, check the [API documentation](https://api-docs.mailtrap.io/).
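As a language-agnostic illustration of the "build your HTTP request" step above, here's a minimal Python sketch that assembles the pieces of a send request. The `build_send_request` helper is hypothetical; the endpoint and payload shape follow Mailtrap's API docs, but double-check them against the official reference before relying on this:

```python
import json

def build_send_request(api_token, sender, recipient, subject, text):
    # Hypothetical helper: assembles URL, headers, and JSON body for Mailtrap's send endpoint
    url = "https://send.api.mailtrap.io/api/send"
    headers = {
        "Authorization": f"Bearer {api_token}",
        "Content-Type": "application/json",
    }
    payload = {
        "from": {"email": sender},
        "to": [{"email": recipient}],
        "subject": subject,
        "text": text,
    }
    return url, headers, json.dumps(payload)

# To actually send (requires a valid token and a verified domain):
# import requests
# url, headers, body = build_send_request(...)
# requests.post(url, headers=headers, data=body)
```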
## Send emails in PHP
_For a comprehensive guide on sending emails in PHP, read the [full article here](https://mailtrap.io/blog/php-email-sending/)_
{% embed https://youtu.be/f8oxbmJY7Jw %}
Since we already covered the Mailtrap installation process, let’s start by integrating the [Mailtrap PHP SDK](https://github.com/railsware/mailtrap-php).
First, install Mailtrap PHP SDK using one of the following [Composer](https://getcomposer.org/) commands:
- **With Symfony HTTP client**:
`composer require railsware/mailtrap-php symfony/http-client nyholm/psr7`
- _With Guzzle HTTP client_:
`composer require railsware/mailtrap-php guzzlehttp/guzzle php-http/guzzle7-adapter`
**Note**: Mailtrap API Client uses [PSR-18 client](https://www.php-fig.org/psr/psr-18/) abstraction and is thus not hard-coupled to any library that sends HTTP messages. It gives you the flexibility to choose which HTTP client you want to use.
However, it’s recommended to use [Symfony](https://symfony.com/) for its fast performance, ease of use, flexibility, and strong community support.
Next, using the below code, you can send a plain text email using the Mailtrap PHP SDK:
```
<?php
use Mailtrap\Config;
use Mailtrap\EmailHeader\CategoryHeader;
use Mailtrap\EmailHeader\CustomVariableHeader;
use Mailtrap\Helper\ResponseHelper;
use Mailtrap\MailtrapClient;
use Symfony\Component\Mime\Address;
use Symfony\Component\Mime\Email;
use Symfony\Component\Mime\Header\UnstructuredHeader;
require __DIR__ . '/vendor/autoload.php';
// Your API token from here https://mailtrap.io/api-tokens
$apiKey = getenv('MAILTRAP_API_KEY');
$mailtrap = new MailtrapClient(new Config($apiKey));
$email = (new Email())
->from(new Address('example@your-domain-here.com', 'Mailtrap Test'))
->replyTo(new Address('reply@your-domain-here.com'))
->to(new Address('email@example.com', 'Jon')) // Single recipient
->priority(Email::PRIORITY_HIGH)
->subject('Best practices of building HTML emails')
    ->text('Hey! Learn the best practices of building HTML emails and play with ready-to-go templates. Mailtrap\'s Guide on How to Build HTML Email is live on our blog');
// Headers
$email->getHeaders()
->addTextHeader('X-Message-Source', 'domain.com')
->add(new UnstructuredHeader('X-Mailer', 'Mailtrap PHP Client')); // the same as addTextHeader
// Custom Variables
$email->getHeaders()
->add(new CustomVariableHeader('user_id', '45982'))
->add(new CustomVariableHeader('batch_id', 'PSJ-12'));
// Category (should be only one)
$email->getHeaders()
->add(new CategoryHeader('Integration Test'));
try {
$response = $mailtrap->sending()->emails()->send($email); // Email sending API (real)
var_dump(ResponseHelper::toArray($response)); // body (array)
} catch (Exception $e) {
echo 'Caught exception: ', $e->getMessage(), "\n";
}
```
This script initializes the Mailtrap client with your API key, which can be found in the Integration tab under the Sending Domains section.
It then creates a new email message and sets various properties, such as the `sender`, `recipient`, `subject`, and `content`. Additionally, it adds custom headers and variables.
Finally, the script sends the email via the `send()` method and outputs the response, or an error message if sending fails.
### HTML email
Sending HTML emails in PHP couldn’t be easier as all you need to do is add the `->html()` method.
To save you some time, I’ve prepared a code you can use to send an HTML email to multiple recipients and attach files to it:
```
<?php
use Mailtrap\Config;
use Mailtrap\EmailHeader\CategoryHeader;
use Mailtrap\EmailHeader\CustomVariableHeader;
use Mailtrap\Helper\ResponseHelper;
use Mailtrap\MailtrapClient;
use Symfony\Component\Mime\Address;
use Symfony\Component\Mime\Email;
use Symfony\Component\Mime\Header\UnstructuredHeader;
use Symfony\Component\Mime\Part\DataPart;
require __DIR__ . '/vendor/autoload.php';
// Your API token from here https://mailtrap.io/api-tokens
$apiKey = getenv('MAILTRAP_API_KEY');
$mailtrap = new MailtrapClient(new Config($apiKey));
$email = (new Email())
->from(new Address('example@your-domain-here.com', 'Mailtrap Test'))
->replyTo(new Address('reply@your-domain-here.com'))
->to(new Address('email@example.com', 'Jon'))
->priority(Email::PRIORITY_HIGH)
->cc('mailtrapqa@example.com')
->addCc('staging@example.com')
->bcc('mailtrapdev@example.com')
->subject('Best practices of building HTML emails')
    ->text('Hey! Learn the best practices of building HTML emails and play with ready-to-go templates. Mailtrap\'s Guide on How to Build HTML Email is live on our blog')
->html(
'<html>
<body>
          <p>Hey,<br>
          Learn the best practices of building HTML emails and play with ready-to-go templates.</p>
          <p><a href="https://mailtrap.io/blog/build-html-email/">Mailtrap\'s Guide on How to Build HTML Email</a> is live on our blog</p>
<img src="cid:logo">
</body>
</html>'
)
->embed(fopen('https://mailtrap.io/wp-content/uploads/2021/04/mailtrap-new-logo.svg', 'r'), 'logo', 'image/svg+xml');
// Add an attachment
$email->attachFromPath('/path/to/your/file.pdf', 'Filename.pdf', 'application/pdf');
// Headers
$email->getHeaders()
->addTextHeader('X-Message-Source', 'domain.com')
->add(new UnstructuredHeader('X-Mailer', 'Mailtrap PHP Client')); // the same as addTextHeader
// Custom Variables
$email->getHeaders()
->add(new CustomVariableHeader('user_id', '45982'))
->add(new CustomVariableHeader('batch_id', 'PSJ-12'));
// Category (should be only one)
$email->getHeaders()
->add(new CategoryHeader('Integration Test'));
try {
$response = $mailtrap->sending()->emails()->send($email); // Email sending API (real)
var_dump(ResponseHelper::toArray($response)); // body (array)
} catch (Exception $e) {
echo 'Caught exception: ', $e->getMessage(), "\n";
}
```
## Send emails in Laravel
To quickly integrate the Mailtrap email API into your Laravel application, you can use the [official PHP SDK](https://github.com/railsware/mailtrap-php) and docs for the [Laravel framework bridge](https://github.com/railsware/mailtrap-php/tree/main/src/Bridge/Laravel). The Mailtrap library is also fully compatible with Laravel 9.x and above.
_For more information, read our complete guide on [sending emails in Laravel](https://mailtrap.io/blog/send-email-in-laravel/) which covers different sending methods as well._
{% embed https://youtu.be/lsna1S8y1vg %}
Nonetheless, here’s an example of sending plain-text emails in Laravel;
- Install the Mailtrap PHP client and dependencies using composer:
`composer require railsware/mailtrap-php symfony/http-client nyholm/psr7`
- Next, add Mailtrap transport into your _**config/mail.php**_ file:
```
<?php
return [
/*
|--------------------------------------------------------------------------
| Mailer Configurations
|--------------------------------------------------------------------------
*/
'mailers' => [
// start mailtrap transport
'mailtrap' => [
'transport' => 'mailtrap'
],
// end mailtrap transport
]
];
```
- Now, to the Laravel **_.env_** file, add your Mailtrap credentials:
```
MAIL_MAILER="mailtrap"
MAILTRAP_HOST="send.api.mailtrap.io"
MAILTRAP_API_KEY="YOUR_API_KEY_HERE"
MAIL_FROM_ADDRESS="name@registered_domain.com"
```
- To send the email, create a _**mailable**_ class:
`php artisan make:mail WelcomeMail`
- Open the newly created _**WelcomeMail.php**_ file in the _**app/Mail**_ directory and configure it as follows:
```
<?php
namespace App\Mail;
use Illuminate\Bus\Queueable;
use Illuminate\Mail\Attachment;
use Illuminate\Mail\Mailable;
use Illuminate\Mail\Mailables\Address;
use Illuminate\Mail\Mailables\Content;
use Illuminate\Mail\Mailables\Envelope;
use Illuminate\Mail\Mailables\Headers;
use Illuminate\Queue\SerializesModels;
use Mailtrap\EmailHeader\CategoryHeader;
use Mailtrap\EmailHeader\CustomVariableHeader;
use Symfony\Component\Mime\Email;
use Symfony\Component\Mime\Header\UnstructuredHeader;
class WelcomeMail extends Mailable
{
use Queueable, SerializesModels;
private string $name;
/**
* Create a new message instance.
*/
public function __construct(string $name)
{
$this->name = $name;
}
/**
* Get the message envelope.
*/
public function envelope(): Envelope
{
return new Envelope(
from: new Address('jeffrey@example.com', 'Jeffrey Way'),
replyTo: [
new Address('taylor@example.com', 'Taylor Otwell'),
],
subject: 'Welcome Mail',
using: [
function (Email $email) {
// Headers
$email->getHeaders()
->addTextHeader('X-Message-Source', 'example.com')
->add(new UnstructuredHeader('X-Mailer', 'Mailtrap PHP Client'))
;
// Custom Variables
$email->getHeaders()
->add(new CustomVariableHeader('user_id', '45982'))
->add(new CustomVariableHeader('batch_id', 'PSJ-12'))
;
// Category (should be only one)
$email->getHeaders()
->add(new CategoryHeader('Integration Test'))
;
},
]
);
}
/**
* Get the message content definition.
*/
public function content(): Content
{
return new Content(
view: 'mail.welcome-email',
with: ['name' => $this->name],
);
}
/**
* Get the attachments for the message.
*
* @return array<int, \Illuminate\Mail\Mailables\Attachment>
*/
public function attachments(): array
{
return [
Attachment::fromPath('https://mailtrap.io/wp-content/uploads/2021/04/mailtrap-new-logo.svg')
->as('logo.svg')
->withMime('image/svg+xml'),
];
}
/**
* Get the message headers.
*/
public function headers(): Headers
{
return new Headers(
'custom-message-id@example.com',
['previous-message@example.com'],
[
'X-Custom-Header' => 'Custom Value',
],
);
}
}
```
- Create a Blade view template for your email in **_resources/views/mail/welcome-email.blade.php_** (the `mail.welcome-email` view referenced in the mailable's `content()` method):
```
<p>Hey {{$name}}!</p>
<br>
<p>Welcome to the party! You've just joined the coolest club in town.</p>
```
- To the _**routes/console.php**_ file, add the CLI router:
```
<?php
use App\Mail\WelcomeMail;
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Mail;
/*
|--------------------------------------------------------------------------
| Console Routes
|--------------------------------------------------------------------------
|
*/
Artisan::command('send-welcome-mail', function () {
Mail::to('testreceiver@gmail.com')->send(new WelcomeMail("Jon"));
// Also, you can use specific mailer if your default mailer is not "mailtrap" but you want to use it for welcome mails
// Mail::mailer('mailtrap')->to('testreceiver@gmail.com')->send(new WelcomeMail("Jon"));
})->purpose('Send welcome mail');
```
- Lastly, to send your email, call the CLI command:
`php artisan send-welcome-mail`
### HTML email
To send an HTML email via Laravel, all you have to do is add HTML code to the Blade file we previously created (e.g., **welcome-email.blade.php**).
For a basic HTML message, use the code below:
```
<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport"
content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<style>
p {
font-size: 12px;
}
.signature {
font-style: italic;
}
</style>
</head>
<body>
<div>
<p>Hey {{ $name }},</p>
<p>Can your Laravel app send emails yet? 😉 </p>
<p class="signature">Mailtrap</p>
</div>
</body>
</html>
```
_If you want to customize your HTML emails further, be sure to give our Laravel HTML article a read._
## Send emails in Python
Below I’ll show a basic example of how to send an email in Python. However, if you’d like to understand how to handle more complex email-sending scenarios in Python, I recommend reading our [dedicated article](https://mailtrap.io/blog/python-send-email) on this topic.
{% embed https://youtu.be/wDYADks8VBM %}
Similarly to other languages and frameworks, once the Mailtrap account is all set, I’ll use [Mailtrap’s official Python SDK](https://github.com/railsware/mailtrap-python) to streamline the process of sending emails via the API in Python.
Here’s how it works:
- Open your terminal and run the following command:
`pip install mailtrap`
- Send a plain-text email by running this script:
```
from mailtrap import Mail, Address, MailtrapClient
# Create a Mail object with basic details for a plain text email
mail = Mail(
# Specify the sender's email address and optional name
sender=Address(email="mailtrap@example.com", name="Mailtrap Test"),
# Specify one or more recipients; here we use a list with a single recipient
to=[Address(email="your@email.com", name="Your Name")],
# Subject of the email
subject="Simple Plain Text Email",
# The plain text content of the email
text="This is a plain text email sent using the Mailtrap SDK. Simple and straightforward.",
# Optional: categorize this email for easier sorting or management in the Mailtrap service
category="Test",
# Optional: Additional headers can be specified, but are not required for plain text emails
headers={"X-Example-Header": "HeaderValue"}
)
# Initialize the MailtrapClient with your API token
client = MailtrapClient(token="your-api-key")
# Send the email using the client's send method
client.send(mail)
print("Plain text email sent successfully.")
```
_Don’t forget to insert your Mailtrap credentials (e.g., verified sending domain and API key)._
### HTML email
To send an HTML email, simply adjust the script above by setting the `html` parameter of the `Mail` object to your HTML content. Like so:
```
from mailtrap import Mail, Address, MailtrapClient
# Create a Mail object for sending an HTML email
mail = Mail(
sender=Address(email="mailtrap@example.com", name="Mailtrap Test"),
to=[Address(email="recipient@email.com", name="Recipient Name")],
subject="Your HTML Email Subject Here",
text="This is a fallback text for email clients that don't render HTML",
html="""
<!DOCTYPE html>
<html>
<head>
<title>Email Title</title>
</head>
<body>
<h1>Hello, World!</h1>
<p>This is an <strong>HTML email</strong> sent from the Mailtrap Python SDK.</p>
<p>Here's a link: <a href="https://example.com">Visit Example.com</a></p>
</body>
</html>
""",
# You can categorize this email or add custom headers as needed
category="HTML Email",
headers={"X-Custom-Header": "Value"}
)
# Initialize the MailtrapClient with your API token
client = MailtrapClient(token="your-api-key")
# Send the email
client.send(mail)
print("HTML email sent successfully.")
```
## Send emails in Node.js
In this chapter, I’ll show you how to add email-sending functionality to your Node.js application with Mailtrap’s [official SDK](https://www.npmjs.com/package/mailtrap) that allows easy integration.
{% embed https://youtu.be/Wa9KDiB7C_I %}
The examples I’ll show you will be basic, but, if you’re feeling dev-savvy, feel free to check out our detailed article on [sending emails in Node.js](https://mailtrap.io/blog/send-emails-with-nodejs/).
Now, let’s start by installing Mailtrap Node.js package with either [npm](https://www.npmjs.com/) or [yarn](https://yarnpkg.com/):
```
npm install mailtrap
# or, if you are using yarn:
yarn add mailtrap
```
Then, send a plain text email by running the following code:
```
import { MailtrapClient } from "mailtrap"
/**
* For this example to work, you need to set up a sending domain,
* and obtain a token that is authorized to send from the domain.
*/
const TOKEN = "<YOUR-TOKEN-HERE>";
const SENDER_EMAIL = "<SENDER@YOURDOMAIN.COM>";
const RECIPIENT_EMAIL = "<RECIPIENT@EMAIL.COM>";
const client = new MailtrapClient({ token: TOKEN });
const sender = { name: "Mailtrap Test", email: SENDER_EMAIL };
client
.send({
from: sender,
to: [{ email: RECIPIENT_EMAIL }],
subject: "Hello from Mailtrap!",
text: "Welcome to Mailtrap Sending!",
})
.then(console.log)
.catch(console.error);
```
As Mailtrap’s Node.js package uses [ECMAScript (ES) modules](https://nodejs.org/api/esm.html), I suggest adding `"type": "module"` in your **package.json** file.
### HTML email
Use the following code snippet to send an HTML email:
```
import { MailtrapClient } from "mailtrap"
/**
* For this example to work, you need to set up a sending domain,
* and obtain a token that is authorized to send from the domain.
* @see https://help.mailtrap.io/article/69-sending-domain-setup
*/
const TOKEN = "<YOUR-TOKEN-HERE>";
const SENDER_EMAIL = "<SENDER@YOURDOMAIN.COM>";
const RECIPIENT_EMAIL = "<RECIPIENT@EMAIL.COM>";
const client = new MailtrapClient({ token: TOKEN });
client
.send({
category: "test",
custom_variables: {
hello: "world",
year: 2022,
anticipated: true,
},
from: { name: "Mailtrap Test", email: SENDER_EMAIL },
to: [{ email: RECIPIENT_EMAIL }],
subject: "Hello from Mailtrap!",
html: `
<!doctype html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body style="font-family: sans-serif;">
<div style="display: block; margin: auto; max-width: 600px;" class="main">
<h1 style="font-size: 18px; font-weight: bold; margin-top: 20px">Congrats for sending test email with Mailtrap!</h1>
<p>Inspect it using the tabs you see above and learn how this email can be improved.</p>
<p>Now send your email using our fake SMTP server and integration of your choice!</p>
<p>Good luck! Hope it works.</p>
</div>
<!-- Example of invalid for email html/css, will be detected by Mailtrap: -->
<style>
.main { background-color: white; }
a:hover { border-left-width: 1em; min-height: 2em; }
</style>
</body>
</html>
`,
})
.then(console.log)
.catch(console.error);
```
## Test emails and email sending on staging
This article has ~4,000 words and about 50% of that is code only. Considering this, you can easily imagine the not-so-small potential for errors while implementing the code into your app.
Hence, it’s of utmost importance to test your email-sending functionality. This is an industry-standard practice that ensures your code works properly, your emails are going where they’re supposed to while avoiding spam filters, and much more.
Consider it as tasting a dish you’ve just prepared. After putting in all the effort to make it, you wouldn’t want to serve an omelette that’s not salty at all, would you? 🧂
For this purpose, especially if you’ve used our email API for sending, I recommend using [Mailtrap Testing API](https://mailtrap.io/automated-email-testing/).
{% embed https://youtu.be/AveaJc6c3fI %}
Via Mailtrap Testing API, you can run the following commands:
- **Inbox** – Create new inboxes, reset credentials, receive messages, clean one or all messages in the inbox, mark messages as read, and manage users.
- **Project** – Create new projects, update/delete them, and manage their users.

- **Email content** – Inspect the raw HTML (you can also download it), text, and get detailed info about the HTML part, including a list of possible errors, along with message attachments.

- **Bcc and message headers** – Receive and test message headers.

- **Deliverability** – You get a spam report, domain blacklisting details, and a spam score; keeping the score under 5 helps you proactively avoid problems such as hitting spam filters and ending up in the junk folder.

With the Email Testing API, you can also [test your templates](https://mailtrap.io/blog/email-templates-testing-api/). Simply enable the sandbox, specify the inbox ID, receive the test template, and then send it through the API in the production environment.

Lastly, the Mailtrap Email Testing API is quite easy to integrate for testing, automation, and automated email sequences.
For example, here’s how you would test emails using Python and [Mailtrap Email Testing API](https://mailtrap.io/automated-email-testing/):
- Establish a connection to the API endpoint using `http.client.HTTPSConnection`.
- Define your email content (e.g., recipients, sender, subject, text content, attachments, etc.)
- Make a POST request to the API with your payload and necessary headers, including your API token for authorization. Here’s how it looks in practice:
```
import http.client
import json
def test_send_email():
conn = http.client.HTTPSConnection("sandbox.api.mailtrap.io")
payload = {
"to": [{"email": "john_doe@example.com", "name": "John Doe"}],
"cc": [{"email": "jane_doe@example.com", "name": "Jane Doe"}],
"bcc": [{"email": "james_doe@example.com", "name": "Jim Doe"}],
"from": {"email": "sales@example.com", "name": "Example Sales Team"},
"attachments": [
{
"content": "base64_encoded_content_here",
"filename": "index.html",
"type": "text/html",
"disposition": "attachment"
}
],
"custom_variables": {"user_id": "45982", "batch_id": "PSJ-12"},
"headers": {"X-Message-Source": "dev.mydomain.com"},
"subject": "Your Example Order Confirmation",
"text": "Congratulations on your order no. 1234",
"category": "API Test"
}
headers = {
'Content-Type': "application/json",
'Accept': "application/json",
'Api-Token': "your_api_token_here" # Replace with your real API token
}
# Convert the payload to a JSON string
json_payload = json.dumps(payload)
# Make the POST request
conn.request("POST", "/api/send/inbox_id", json_payload, headers) # Replace 'inbox_id' with your real inbox ID
# Get the response
response = conn.getresponse()
data = response.read()
print(data.decode("utf-8"))
if __name__ == "__main__":
test_send_email()
```
Of course, if you’re interested in more information, be sure to check the [API docs](https://api-docs.mailtrap.io/docs/mailtrap-api-docs/bcf61cdc1547e-send-email-including-templates) for details on API testing.
**[Test Emails with Mailtrap for Free](https://mailtrap.io/register/signup)**
## Wrapping up
Whether you’ve opted for sending emails with API or, perhaps, with SMTP, I hope you now have a better understanding of the intricacies involved in the email backend.
And one more thing: I heartily invite you to further explore our [blog](https://mailtrap.io/blog/), where you can learn how to send emails in various languages and frameworks and read articles such as:
- [Send Emails in Java: Guide with Code Examples](https://mailtrap.io/blog/sending-email-using-java/)
- [Nodemailer: Tutorial with Code Snippets](https://mailtrap.io/blog/sending-emails-with-nodemailer/)
- [How to Send Beautiful Mail Notifications in Laravel](https://mailtrap.io/blog/laravel-notification-tutorial/)
- [Flask Send Email: Tutorial with Code Snippets](https://mailtrap.io/blog/flask-email-sending/)
We appreciate that you chose this article to learn more about methods of [sending emails via API and SMTP](https://mailtrap.io/blog/api-send-email/). To read more articles on related topics, follow the Mailtrap Blog!
| idjuric660 | |
1,909,230 | Technical Article for Creating and Managing Users with a Bash Script | This article describes a Bash script, create_users.sh, designed to automate the creation of user... | 0 | 2024-07-02T17:25:43 | https://dev.to/marg4cf3553b4099/technical-article-for-creating-and-managing-users-with-a-bash-script-5gl7 | This article describes a [Bash script](https://github.com/MegCyber/HNGTask2.git), create_users.sh, designed to automate the creation of user accounts and group assignments based on a text file containing user data. This script streamlines the user onboarding process for system administrators, improving efficiency and reducing the risk of errors.
## Script Functionality:
The script takes a text file as input, where each line represents a user with the following format:
`username;groups`
`username`: The desired username for the new account.
`groups`: A comma-separated list of groups the user should belong to.
## Here's a breakdown of the script's functionality:
- **Reads User List:** The script iterates through each line of the provided text file.
- **Processes User and Groups:** It extracts the username and groups (if any) from each line.
- **Creates User and Primary Group:** The script creates a primary group with the same name as the username and adds the user to it.
- **Error Handling:** It checks if the user already exists. If so, it logs a warning message and skips to the next user.
- **Generates Random Password:** A secure, random password is generated for each user.
- **Creates Home Directory:** The script creates a home directory for the new user and sets appropriate ownership and permissions.
- **Sets User Password:** The script securely sets the generated password for the user account.
- **Adds User to Additional Groups:** If additional groups are specified in the input file, the script adds the user to those groups.
- **Logs Activity:** All actions (user creation, group assignment, etc.) are logged to a dedicated log file for audit purposes.
- **Stores Passwords Securely:** Passwords are not stored directly in the script or logs. Instead, they are saved in a separate file with restricted permissions (accessible only by the script user).
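Taken together, these steps reduce to a simple read loop over the input file. Below is a minimal, hedged sketch of just the parsing stage (the file name and sample data are illustrative, and the privileged account-creation calls are left out):

```
#!/usr/bin/env bash
# Write two sample "username;groups" lines, then parse them
input_file="users.txt"
printf '%s\n' "alice;sudo,docker" "bob;" > "$input_file"

while IFS=';' read -r username groups; do
  [ -z "$username" ] && continue   # skip blank lines
  # In the real script, user/group creation and logging happen here
  echo "user: $username, extra groups: ${groups:-none}"
done < "$input_file"
```

For the sample data this prints `user: alice, extra groups: sudo,docker` followed by `user: bob, extra groups: none`.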
## Technical Details:
The script utilizes various Linux commands to achieve its functionalities. Here's a brief overview of some key commands:
- `useradd`: Creates a new user account.
- `groupadd`: Creates a new group.
- `chpasswd`: Sets or modifies a user's password.
- `/dev/urandom`: Generates random data from the system's random number generator.
- `tr`: Transforms characters (used for filtering random data to alphanumeric characters).
- `usermod`: Modifies an existing user account.
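As a sketch of how these commands fit together for a single user (hedged: the `groupadd`/`useradd`/`usermod` steps are only echoed here, since they require root, and the 12-character password length is an assumption, not something the script mandates):

```
#!/usr/bin/env bash
# Generate a random 12-character alphanumeric password from /dev/urandom
gen_password() {
  tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12
}

username="alice"
groups="sudo,docker"
password="$(gen_password)"

# Privileged steps, echoed for illustration; run them as root in the real script
echo groupadd "$username"                    # dedicated primary group
echo useradd -m -g "$username" "$username"   # account plus home directory
echo usermod -aG "$groups" "$username"       # append supplementary groups
echo "(password for $username set via chpasswd, saved to a file with restricted permissions)"
```

Filtering `/dev/urandom` through `tr -dc 'A-Za-z0-9'` keeps only alphanumeric bytes, so the password is safe to pass to `chpasswd` without quoting surprises.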
## Benefits of Script Automation:
Using a script to automate user creation offers several benefits:
- **Efficiency**: It significantly reduces the time and effort required to create multiple user accounts.
- **Accuracy**: Automating the process minimizes the risk of typos or errors that can occur during manual user creation.
- **Consistency**: The script ensures that all user accounts are created with the same settings and permissions, promoting consistency across the system.
- **Auditability**: The logging mechanism provides a clear record of all user creation activities, useful for security purposes and troubleshooting.
## Conclusion
By following these steps, we can automate the user creation process, ensuring consistency and saving time. This approach can be easily extended or modified to fit specific requirements.
For more information on our internship program, visit [HNG Internship](https://hng.tech/internship). To start your journey to becoming a world-class developer today, check out [HNG](https://hng.tech/hire). | marg4cf3553b4099 |
1,909,221 | Buy Negative Google Reviews | https://dmhelpshop.com/product/buy-negative-google-reviews/ Buy Negative Google Reviews Negative... | 0 | 2024-07-02T17:22:29 | https://dev.to/bikacaj604/buy-negative-google-reviews-3pl7 | devops, productivity, aws, opensource | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-negative-google-reviews/\n\n\n\n\nBuy Negative Google Reviews\nNegative reviews on Google are detrimental critiques that expose customers’ unfavorable experiences with a business. These reviews can significantly damage a company’s reputation, presenting challenges in both attracting new customers and retaining current ones. If you are considering purchasing negative Google reviews from dmhelpshop.com, we encourage you to reconsider and instead focus on providing exceptional products and services to ensure positive feedback and sustainable success.\n\nWhy Buy Negative Google Reviews from dmhelpshop\nWe take pride in our fully qualified, hardworking, and experienced team, who are committed to providing quality and safe services that meet all your needs. Our professional team ensures that you can trust us completely, knowing that your satisfaction is our top priority. With us, you can rest assured that you’re in good hands.\n\nIs Buy Negative Google Reviews safe?\nAt dmhelpshop, we understand the concern many business persons have about the safety of purchasing Buy negative Google reviews. We are here to guide you through a process that sheds light on the importance of these reviews and how we ensure they appear realistic and safe for your business. Our team of qualified and experienced computer experts has successfully handled similar cases before, and we are committed to providing a solution tailored to your specific needs. 
Contact us today to learn more about how we can help your business thrive.\n\nBuy Google 5 Star Reviews\nReviews represent the opinions of experienced customers who have utilized services or purchased products from various online or offline markets. These reviews convey customer demands and opinions, and ratings are assigned based on the quality of the products or services and the overall user experience. Google serves as an excellent platform for customers to leave reviews since the majority of users engage with it organically. When you purchase Buy Google 5 Star Reviews, you have the potential to influence a large number of people either positively or negatively. Positive reviews can attract customers to purchase your products, while negative reviews can deter potential customers.\n\nIf you choose to Buy Google 5 Star Reviews, people will be more inclined to consider your products. However, it is important to recognize that reviews can have both positive and negative impacts on your business. Therefore, take the time to determine which type of reviews you wish to acquire. Our experience indicates that purchasing Buy Google 5 Star Reviews can engage and connect you with a wide audience. By purchasing positive reviews, you can enhance your business profile and attract online traffic. Additionally, it is advisable to seek reviews from reputable platforms, including social media, to maintain a positive flow. We are an experienced and reliable service provider, highly knowledgeable about the impacts of reviews. Hence, we recommend purchasing verified Google reviews and ensuring their stability and non-gropability.\n\nLet us now briefly examine the direct and indirect benefits of reviews:\nReviews have the power to enhance your business profile, influencing users at an affordable cost.\nTo attract customers, consider purchasing only positive reviews, while negative reviews can be acquired to undermine your competitors. 
Collect negative reports on your opponents and present them as evidence.\nIf you receive negative reviews, view them as an opportunity to understand user reactions, make improvements to your products and services, and keep up with current trends.\nBy earning the trust and loyalty of customers, you can control the market value of your products. Therefore, it is essential to buy online reviews, including Buy Google 5 Star Reviews.\nReviews serve as the captivating fragrance that entices previous customers to return repeatedly.\nPositive customer opinions expressed through reviews can help you expand your business globally and achieve profitability and credibility.\nWhen you purchase positive Buy Google 5 Star Reviews, they effectively communicate the history of your company or the quality of your individual products.\nReviews act as a collective voice representing potential customers, boosting your business to amazing heights.\nNow, let’s delve into a comprehensive understanding of reviews and how they function:\nGoogle, with its significant organic user base, stands out as the premier platform for customers to leave reviews. When you purchase Buy Google 5 Star Reviews , you have the power to positively influence a vast number of individuals. Reviews are essentially written submissions by users that provide detailed insights into a company, its products, services, and other relevant aspects based on their personal experiences. In today’s business landscape, it is crucial for every business owner to consider buying verified Buy Google 5 Star Reviews, both positive and negative, in order to reap various benefits.\n\nWhy are Google reviews considered the best tool to attract customers?\nGoogle, being the leading search engine and the largest source of potential and organic customers, is highly valued by business owners. Many business owners choose to purchase Google reviews to enhance their business profiles and also sell them to third parties. 
Without reviews, it is challenging to reach a large customer base globally or locally. Therefore, it is crucial to consider buying positive Buy Google 5 Star Reviews from reliable sources. When you invest in Buy Google 5 Star Reviews for your business, you can expect a significant influx of potential customers, as these reviews act as a pheromone, attracting audiences towards your products and services. Every business owner aims to maximize sales and attract a substantial customer base, and purchasing Buy Google 5 Star Reviews is a strategic move.\n\nAccording to online business analysts and economists, trust and affection are the essential factors that determine whether people will work with you or do business with you. However, there are additional crucial factors to consider, such as establishing effective communication systems, providing 24/7 customer support, and maintaining product quality to engage online audiences. If any of these rules are broken, it can lead to a negative impact on your business. Therefore, obtaining positive reviews is vital for the success of an online business\n\nWhat are the benefits of purchasing reviews online?\nIn today’s fast-paced world, the impact of new technologies and IT sectors is remarkable. Compared to the past, conducting business has become significantly easier, but it is also highly competitive. To reach a global customer base, businesses must increase their presence on social media platforms as they provide the easiest way to generate organic traffic. Numerous surveys have shown that the majority of online buyers carefully read customer opinions and reviews before making purchase decisions. In fact, the percentage of customers who rely on these reviews is close to 97%. Considering these statistics, it becomes evident why we recommend buying reviews online. 
In an increasingly rule-based world, it is essential to take effective steps to ensure a smooth online business journey.\n\nBuy Google 5 Star Reviews\nMany people purchase reviews online from various sources and witness unique progress. Reviews serve as powerful tools to instill customer trust, influence their decision-making, and bring positive vibes to your business. Making a single mistake in this regard can lead to a significant collapse of your business. Therefore, it is crucial to focus on improving product quality, quantity, communication networks, facilities, and providing the utmost support to your customers.\n\nReviews reflect customer demands, opinions, and ratings based on their experiences with your products or services. If you purchase Buy Google 5-star reviews, it will undoubtedly attract more people to consider your offerings. Google is the ideal platform for customers to leave reviews due to its extensive organic user involvement. Therefore, investing in Buy Google 5 Star Reviews can significantly influence a large number of people in a positive way.\n\nHow to generate google reviews on my business profile?\nFocus on delivering high-quality customer service in every interaction with your customers. By creating positive experiences for them, you increase the likelihood of receiving reviews. These reviews will not only help to build loyalty among your customers but also encourage them to spread the word about your exceptional service. It is crucial to strive to meet customer needs and exceed their expectations in order to elicit positive feedback. If you are interested in purchasing affordable Google reviews, we offer that service.\n\n\n\n\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | bikacaj604 |
1,909,215 | ByteDance’s RsPack: The Future of Web Bundling || Shahin Islam Arpon || a4arpon | ByteDance and JavaScript Web development is always changing, and ByteDance, the company... | 0 | 2024-07-02T17:21:18 | https://dev.to/a4arpon/bytedances-rspack-the-future-of-web-bundling-shahin-islam-arpon-a4arpon-op4 | webdev, javascript, tiktok, rust | ## ByteDance and JavaScript
Web development is always changing, and ByteDance, the company behind TikTok and CapCut, is leading the way with a new tool called RsPack. This tool promises to make developers' lives easier by solving common problems with bundling JavaScript. Here's what you need to know about RsPack.
## Why RsPack?
ByteDance relies heavily on JavaScript for many of its products. However, they ran into issues with their current bundling tool, Webpack, especially with Node.js compatibility. To solve these problems, ByteDance created RsPack, a new bundler that’s fast, efficient, and designed to work seamlessly with existing tools.
## What Makes RsPack Special?
### Speed and Performance
- **Fast Startup**: RsPack combines TypeScript and Rust with a parallelized architecture, which means it starts up quickly, making the development process smoother.
- **Lightning HMR**: It offers superior Hot Module Replacement (HMR) performance, which is crucial for large projects. This means changes you make in the code appear almost instantly in the browser.
### Compatibility
- **Webpack Compatible**: RsPack works with existing Webpack plugins and loaders, so developers can easily switch without having to redo their setup.
- **Module Federation**: It supports Module Federation, which helps in developing large-scale web applications by allowing multiple independent builds to work together.
### Production Optimizations
- **Integrated Optimizations**: RsPack has built-in optimizations like tree shaking and minification, which improve the performance and size of your final build without needing extra plugins.
- **Framework Agnostic**: It’s not tied to any specific frontend framework, so it can be used with any project.
## Real-World Impact
ByteDance created RsPack because their existing tools were slowing them down. Their production build times could take up to half an hour, and starting the development server often took several minutes. RsPack was designed to solve these issues with:
- **Quick Dev Mode Startups**: Developers often run `npm run dev` many times an hour. RsPack ensures these startups are fast, keeping productivity high.
- **Speedy Builds**: In Continuous Integration/Continuous Deployment (CI/CD) pipelines, every second counts. RsPack significantly reduces build times, speeding up the entire deployment process.
- **Flexible Configuration**: RsPack maintains the flexibility of Webpack, allowing developers to customize it to fit their specific needs without high migration costs.
## Conclusion
RsPack is ByteDance's answer to the limitations they faced with current bundling tools. It promises faster startup times, better performance, and easy integration with existing setups. As ByteDance prepares to release RsPack to the public, developers around the world are eager to try this new tool and experience its benefits firsthand.
Stay tuned for more updates on RsPack, and get ready to explore a new era of efficient and powerful web development!
## Author
Shahin Islam Arpon
[Facebook](https://www.facebook.com/a4arpon)
[Portfolio](https://a4arpon.me)
[GitHub](https://github.com/a4arpon)
[LinkedIn](https://www.linkedin.com/in/a4arpon) | a4arpon |
1,909,153 | How Malware Operates, What Its Types Are, and How to Protect Yourself | In today's digital world, cybersecurity becomes more and more important. Among the... | 0 | 2024-07-02T17:19:47 | https://dev.to/jaque_py/como-um-malware-opera-quais-sao-os-seus-tipos-e-como-se-proteger-26e8 | cybersecurity, malware, security, beginners | In today's digital world, cybersecurity becomes more and more important. Among the main threats, **malware** stands out for its ability to cause significant damage to devices, networks, and data. Understanding how malware works, its types, and the measures to protect against it is essential for staying safe online.
Mas, o que é um Malware?
**Malware**, abreviação de **software malicioso**, é um termo genérico que engloba diversos tipos de programas nocivos projetados para causar danos ou obter acesso não autorizado a sistemas computacionais, causando algum prejuízo. Esses softwares podem ser criados com diversas motivações, como roubo de dados, extorsão, espionagem ou simplesmente vandalismo.
Alguns dos tipos comuns:
**Vírus**: Assim como os nossos “desqueridos” da biologia, os virtuais se replicam automaticamente e se propagam por diferentes meios, infectando arquivos e sistemas. Primeiro ele se liga a um host, sempre que esse host é executado ele se espalha infectando outros arquivos ou programas. Podem modificar, corromper e até mesmo apagar seus dados, além de diminuir o desempenho do sistema ou espionar o usuário.
Não sei o que é um host - um host pode ser um servidor, pc, celular, notebook, tablet, smart TV… várias possibilidades. Se for possível conectar a internet ele pode ser um host. Eles conseguem armazenar dados, fornecer algum serviço que precise, enviar e receber mensagens, executar aplicativos e muito mais.
**Trojan Horse**: This type of malware disguises itself as something legitimate, even useful, and in doing so tricks the user into installing it or downloading one of its files, allowing the malware to carry out its malicious actions, such as data theft or installing other malware, without the person realizing they are doing something that will harm them.
**Worm**: Similar to a virus, a worm spreads on its own; however, it needs neither a host to replicate nor an unwitting human action. Because it propagates quickly, a worm can overload a network and slow it down, spy on users, capture screenshots, take down websites, and even encrypt data for extortion.
**Ransomware**: A type of malware that encrypts the victim's files and demands a ransom payment to release them; otherwise, the data may be lost forever. Ransomware causes severe financial losses and disrupts the operations of companies and users alike.
**Keyloggers**: Monitor the keyboard and record everything the user types, capturing passwords, credentials, and other confidential information.
**Adware**: Not always malicious, but it keeps displaying unwanted ads and pop-ups, becoming quite annoying, and because it is so intrusive it can open the door to worse malware.
**Spyware**: Designed to collect personal information without the user's knowledge. Like a keylogger, spyware can capture what the user types, as well as access browsing history and personal data, usually for marketing or espionage purposes.
**Rootkit**: Hides the presence of other malware on the system, making it harder to detect and remove.
Keep in mind these are not all of them; you can find more by searching online if you want to.
**How to protect yourself against malware**
- Keep an **up-to-date antivirus**; a good antivirus can detect and block malicious software, stopping threats before they do damage.
- Always keep your **software updated**; updates frequently contain security fixes that protect against vulnerabilities exploited by malware.
- There is something called **social engineering**, which exploits human error: it studies how people think and act and takes advantage of their lack of awareness. Look it up; it is genuinely useful to know about. Stay alert for fake emails, messages, and websites that try to trick you into revealing confidential information or downloading malware.
- Knowing that, another way to protect yourself is to **not click every link and attachment out there**; always keep a skeptical eye on things. It may sound silly, but two people I know recently fell for an online scam by trusting someone else's word. It can happen to anyone, so stay suspicious.
- On shopping sites, check for the little padlock at the top, next to the page URL, and read the site's address to make sure it is not a copy of the original (copies usually have something slightly different in the address); **be suspicious of anything unusual**.
- On WhatsApp or SMS, **be wary of easy prizes, surprises that seem too good, or absurd debts on one of your bank accounts**. Ask family members if they know anything about it, research the suspicious company online in several places, contact your bank directly, and DO NOT CLICK ANYTHING SENT TO YOU BEFORE CHECKING ALL OF THIS.
- Speaking of links, this also ties into **only downloading files from trusted sites**; do not browse suspicious or unknown sites and download whatever shows up. You can configure your browser not to download files automatically, or at least to ask you before downloading (depending on the browser); you can find this option on your own or with a quick online search.
- Be **careful with public Wi-Fi networks**. A VPN helps, but even so, do not use them for anything confidential, such as your bank account. There is also the risk of malicious people setting up a public Wi-Fi network just to steal your data. If possible, use your own internet connection.
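The advice above about reading a site's address carefully can even be automated. As a minimal sketch (the function name and the example domains are illustrative, not from any real service), here is how one might check in Python that a link's hostname really belongs to the domain you expect, which catches common lookalike tricks:

```python
from urllib.parse import urlparse

def hostname_matches(url: str, expected_domain: str) -> bool:
    """Return True only if the URL's hostname is the expected
    domain itself or a true subdomain of it (e.g. www.example.com)."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    # A lookalike such as "example.com.evil.net" fails both checks:
    # it is neither the domain nor a real subdomain of it.
    return host == expected or host.endswith("." + expected)

print(hostname_matches("https://www.example.com/login", "example.com"))      # True
print(hostname_matches("https://example.com.evil.net/login", "example.com")) # False
```

This is no substitute for the other habits in the list, but it illustrates why reading the whole address matters: the rightmost part of the hostname is what determines who actually controls the site.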
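Related to only downloading from trusted sites: many vendors publish a SHA-256 checksum next to their downloads, and comparing it against the file you actually received is a cheap way to confirm the download was not corrupted or tampered with. A minimal Python sketch (the file name in the comment is just an illustration):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks
    so even large downloads do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum the vendor publishes next to the download:
# if sha256_of("installer.exe") == published_checksum:
#     print("checksum matches")
```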
Unfortunately, online security may end up resting on you alone. Take care of what is yours and tell other people about these precautions. This kind of security is an ongoing commitment: keep an eye out for any strange behavior on your devices and try to report any suspicious activity you come across.
Sources/supporting material:
(Microsoft - What is malware?)
https://www.microsoft.com/pt-br/security/business/security-101/what-is-malware
(Kaspersky - What is social engineering?)
https://www.kaspersky.com.br/resource-center/definitions/what-is-social-engineering
(Google Support - Protect yourself against malware)
https://support.google.com/google-ads/answer/2375413?hl=pt-BR#zippy=
| jaque_py |
1,909,151 | “ A Complete Guide to Obtaining a Work Permit Visa for New Zealand: Step-by-Step” | Are you thinking about starting a new job and living and working in New Zealand? Renowned for its... | 0 | 2024-07-02T17:16:46 | https://dev.to/aman_kapoor_ea3f94d4f74a0/-a-complete-guide-to-obtaining-a-work-permit-visa-for-new-zealand-step-by-step-93g | skilled, worker, visa, techsavvyimmigration | Are you thinking about starting a new job and living and working in New Zealand?
Renowned for its stunning scenery and dynamic culture, New Zealand offers fantastic prospects for skilled workers across the globe.
The **[New Zealand Skilled Worker Visa](https://www.techsavvyimmigration.com/how-to-apply-for-a-new-zealand-skilled-worker-visa-a-step-by-step-guide/)** is your ticket to realizing your ambition of relocating to and working in this stunning nation.
If you’re interested in working in New Zealand, the Application Process Guide for the New Zealand Skilled Worker Visa is a thorough and educational resource.
We’ll guide you through the entire process of getting this visa in this post, so you can be sure you have the information and confidence you need to start this thrilling adventure.
**Understanding the Skilled Worker Visa for New Zealand**
For a limited time, skilled workers from outside are permitted to work in New Zealand with the New Zealand Skilled Worker Visa, also referred to as a **[Work Visa or Work Permit](https://www.techsavvyimmigration.com/how-to-apply-for-a-new-zealand-skilled-worker-visa-a-step-by-step-guide/)**.
This visa is intended to address the skills gap in the nation and increase the number of workers across a range of industries.
**Qualifications for Obtaining a Visa**
The **[New Zealand Skilled Worker Visa](https://www.techsavvyimmigration.com/how-to-apply-for-a-new-zealand-skilled-worker-visa-a-step-by-step-guide/)** has certain eligibility requirements that you must fulfill:
- Have the credentials and abilities that are sought after in New Zealand.
- Fulfill the standards for character and health established by the government of New Zealand.
- Hold an official work offer from an employer in New Zealand.
**Getting a Job in New Zealand**
It can be difficult to get work abroad, but it can be done with the appropriate strategy. These actions can assist you in locating employment in New Zealand:
Investigate the Job Market: Find out which industries in New Zealand are suffering from a lack of skilled workers by conducting in-depth research.
Make Contact with Recruitment Agencies: Make contact with respectable recruitment firms that focus on matching talented candidates with positions in New Zealand.
Use Online Job Portals: Look for relevant job openings by exploring New Zealand’s job boards and online job portals.
Utilize LinkedIn to network: Create a compelling profile and make connections with New Zealand industry experts.
Attend Expos and Job Fairs: Pay attention to career fairs and expos that focus on New Zealand opportunities.
**Getting the Visa Application Ready**
The next step after receiving a job offer is to get your visa application ready. This section will walk you through the process of obtaining the necessary paperwork, such as your passport, transcripts of education, letters of employment history, and the offer of employment from your potential employer.
Since any mistakes could delay the process, this step demands close attention to detail. What you should do is as follows:
Obtaining Documents: A seamless application procedure depends on obtaining all required documentation. Usually, these documents consist of:
Passport: Make sure your passport is valid for a minimum of six months after the date you plan to visit New Zealand.
Educational Certificates: Present documentation of your degree of study and any applicable certificates.
Work Experience Letters: Get verification of your prior employment from the letters you received there.
Job Offer Letter: Don’t forget to attach the letter of offer from your potential New Zealand company.
Police Clearance Certificate: Obtain a police clearance certificate from the authorities in your country of origin.
**Character and Health Requirements**
The character and health of visa applicants are a major priority for the government of New Zealand. The medical exam and character references needed for the visa application process will be covered in this section.
**Submitting the Visa Application**
It’s time to submit your application once you’ve meticulously prepared it and double-checked all the supporting materials. For this, you have two choices:
Online Application: Use the New Zealand Immigration website to submit your application. Usually, this is more convenient and faster.
Paper Application: If you’d like, you can apply on paper at the New Zealand embassy or consulate that is closest to you.
**Visa Processing Time**
The number of applications received and the specifics of each case may affect the length of time it takes to process a New Zealand Skilled Worker Visa.
You will learn about the normal processing period in this part, as well as why it is crucial to apply well in advance of the date you plan to go.
**How to Handle the Interview Process**
In certain circumstances, the application procedure for a visa may include an interview. This section will include advice on how to get ready for the interview as well as what to anticipate from it.
**Getting a Visa and Visiting New Zealand**
As soon as your visa application is accepted, you can start getting ready to travel to New Zealand. This section will help you plan your travel, make hotel and travel arrangements, and ensure a seamless adjustment to your new job and way of life in **[New Zealand](https://www.techsavvyimmigration.com/how-to-apply-for-a-new-zealand-skilled-worker-visa-a-step-by-step-guide/)**.
**Relocating to New Zealand**
Moving to a new nation may be exhilarating and difficult at the same time. This section will provide guidance on acclimating to your new surroundings, locating acceptable housing, and comprehending the local way of life.
**Examining Career Paths and Promotion**
It’s crucial to look at possible job progression prospects as you start your professional career in New Zealand. This section will showcase opportunities for career advancement and development around the nation.
**Getting Used to the New Society**
Embracing the rich and varied culture of New Zealand can improve your trip in general. This section will examine the nation’s cultural features and provide advice on adjusting to and assimilating into the community.
**Acquiring Knowledge of the Regional Language**
Even though English is the most common language in New Zealand, knowing a few Maori expressions can show respect and an understanding of the culture. This area aims to promote the acquisition and appreciation of languages. | aman_kapoor_ea3f94d4f74a0 |