# An In-Depth Look at Audio Classification Using CNNs and Transformers

- **id**: 1,910,384
- **collection_id**: 0
- **published**: 2024-07-03T15:55:05
- **canonical_url**: https://dev.to/aditi_baheti_f4a40487a091/an-in-depth-look-at-audio-classification-using-cnns-and-transformers-1981
- **tags**: transformers, deeplearning, ai, cnn
## Introduction

Audio classification is a fascinating area of machine learning that involves categorizing audio signals into predefined classes. In this blog, we will delve into the specifics of an audio classification project, exploring the architectures, methodologies, and results obtained from experimenting with Convolutional Neural Networks (CNNs) and Transformers.

## Dataset

The project utilized the **ESC-50 dataset**, a compilation of environmental audio clips categorized into 50 different classes. Specifically, the **ESC-10 subset** was used, narrowing the dataset to 10 categories for more focused experimentation.

## Architecture 1: Convolutional Neural Networks (CNNs)

### Initial Setup

The initial model setup for audio classification relied heavily on CNNs. These networks use convolutional layers to progressively extract features from the audio signals, increasing the output channel size from 16 to 64. Each convolutional layer is followed by a max-pooling layer to reduce the spatial dimensions and highlight the most critical features.

#### Original Model

The original model focused solely on feature extraction, without dropout, early stopping, or other regularization techniques. This led to a basic yet effective structure for learning the complex patterns in audio data.

#### Enhanced Model

To combat overfitting and improve generalization, several enhancements were made:

- **Dropout**: Introduced to randomly deactivate neurons during training, thereby preventing over-reliance on specific paths.
- **Early Stopping**: Implemented to halt training when validation performance plateaued, ensuring the model does not overfit to the training data.
- **Regularization**: Additional techniques were employed to further stabilize the training process and enhance generalization.

### Results

The use of k-fold cross-validation, with fold 1 reserved for validation, provided a comprehensive evaluation of the model's performance.
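The post does not include code, so here is a minimal, framework-agnostic sketch of the early-stopping enhancement described above. This is an illustration, not the project's actual code; the `patience` and `min_delta` parameters are hypothetical values.

```python
class EarlyStopping:
    """Stop training when the validation loss stops improving."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience    # epochs to wait after the last improvement
        self.min_delta = min_delta  # minimum change that counts as improvement
        self.best_loss = float("inf")
        self.counter = 0
        self.should_stop = False

    def step(self, val_loss):
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss  # improvement: remember it, reset counter
            self.counter = 0
        else:
            self.counter += 1          # no improvement this epoch
            if self.counter >= self.patience:
                self.should_stop = True
        return self.should_stop


# Simulated validation losses: improvement stalls after epoch 2.
stopper = EarlyStopping(patience=2)
losses = [1.0, 0.8, 0.79, 0.81, 0.82]
stopped_at = None
for epoch, loss in enumerate(losses):
    if stopper.step(loss):
        stopped_at = epoch
        break
print(stopped_at)  # prints 4: two epochs pass without improvement
```

In a real training loop, `step()` would be called once per epoch with the measured validation loss, and training would break out of the loop when it returns `True`.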
Key observations from hyperparameter tuning include:

- **Reduced Overfitting**: The enhanced model exhibited lower test losses and higher test accuracies, F1 scores, and ROC AUC values across all folds compared to the original model.

The following table summarizes the performance across different folds:

| Metric | Fold 2 (Original) | Fold 2 (Enhanced) | Fold 3 (Original) | Fold 3 (Enhanced) | Fold 4 (Original) | Fold 4 (Enhanced) | Fold 5 (Original) | Fold 5 (Enhanced) |
|----------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| Avg. Training Accuracy | 63.49% | 51.15% | 68.77% | 43.67% | 68.64% | 55.49% | 67.55% | 49.84% |
| Avg. Validation Accuracy | 34.25% | 38.42% | 39.17% | 35.00% | 38.54% | 40.64% | 38.44% | 43.97% |
| Test Loss | 7.7658 | 1.5196 | 4.4111 | 1.4217 | 4.1973 | 1.5789 | 4.4777 | 1.5499 |
| Test Accuracy | 30.42% | 48.47% | 42.08% | 45.97% | 40.56% | 43.47% | 45.69% | 42.92% |
| F1 Score | 0.26 | 0.47 | 0.40 | 0.45 | 0.41 | 0.42 | 0.44 | 0.39 |
| ROC AUC | 0.72 | 0.88 | 0.81 | 0.88 | 0.78 | 0.87 | 0.80 | 0.86 |

### Confusion Matrix and ROC Curve

The confusion matrix and ROC curve for the best-performing fold (Fold 2) highlight the classifier's ability to distinguish between most classes effectively. However, there are instances of misclassification, suggesting the need for further refinement of the model.

## Architecture 2: Transformers

Transformers, known for their success in natural language processing, were adapted for audio classification in this project. The core of this architecture involves:

- **Convolutional Layers**: Used initially to extract basic audio features such as tones and rhythms.
- **Transformer Blocks**: Employed to process these features using attention mechanisms, enabling the model to focus on different parts of the audio sequence dynamically.
- **Multi-Head Attention**: Utilized to attend to various representation subspaces simultaneously, enhancing the model's interpretive capabilities.
- **Positional Encodings**: Incorporated to retain the sequential order of audio data, allowing the model to use positional information effectively.

### Performance Metrics

The transformer model was evaluated with different numbers of attention heads (1, 2, and 4). Key observations include:

- **Two Heads Model**: This configuration outperformed the others in test accuracy and F1 score, suggesting an optimal balance between feature learning and generalization.
- **Four Heads Model**: Despite higher train accuracy, this model exhibited signs of overfitting, with less effective feature integration for classification.

The table below outlines the performance metrics for the different configurations:

| Number of Heads | Train Accuracy | Valid Accuracy | Test Accuracy | Train Loss | Valid Loss | Test Loss | F1 Score | ROC AUC |
|-----------------|----------------|----------------|---------------|------------|------------|-----------|----------|---------|
| 1 Head | 80.74% | 46.39% | 43.47% | 0.5412 | 2.5903 | 2.9106 | 0.41 | 0.82 |
| 2 Heads | 79.91% | 49.86% | 49.86% | 0.5778 | 2.4115 | 2.4757 | 0.47 | 0.86 |
| 4 Heads | 81.71% | 44.86% | 42.78% | 0.5759 | 2.6297 | 2.4895 | 0.40 | 0.84 |

### Enhanced Model with Transformers

The enhanced model employed additional techniques such as gradient clipping and the AdamW optimizer, coupled with a learning rate scheduler. This configuration significantly improved the model's stability and generalization.

- **Gradient Clipping**: Applied to prevent exploding gradients, ensuring stable training.
- **AdamW Optimizer**: Chosen for its decoupled weight decay regularization, enhancing performance on validation data.
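As a hedged illustration of the gradient-clipping idea (not the project's code), clipping by global norm rescales all gradients whenever their combined L2 norm exceeds a threshold, so update sizes stay bounded:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient values so their global L2 norm is at most max_norm."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm <= max_norm or total_norm == 0.0:
        return grads  # already within bounds, leave untouched
    scale = max_norm / total_norm
    return [g * scale for g in grads]

# A gradient vector with norm 5 (a 3-4-5 triangle), clipped to norm 1.
clipped = clip_by_global_norm([3.0, 4.0], max_norm=1.0)
print(clipped)  # approximately [0.6, 0.8]
```

Deep-learning frameworks provide this as a built-in (e.g. a clip-grad-norm utility called between the backward pass and the optimizer step); the sketch above only shows the arithmetic involved.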
The enhanced model demonstrated superior performance across several metrics:

| Metric | Enhanced Model |
|-----------------|----------------|
| Train Accuracy | 79.81% |
| Validation Accuracy | 55.00% |
| Test Accuracy | 58.19% |
| Train Loss | 0.6030 |
| Validation Loss | 1.5191 |
| Test Loss | 1.1435 |
| F1 Score | 0.56 |
| ROC AUC | 0.93 |

### Trainable Parameters

- **SoundClassifier**: Approximately 16.4 million trainable parameters.
- **AudioClassifierWithTransformer**: About 8.9 million trainable parameters.

## Conclusion

This project illustrates the potential of both CNNs and Transformers in audio classification tasks. While CNNs provide a solid foundation for feature extraction, Transformers offer advanced capabilities through attention mechanisms, enhancing the model's ability to interpret complex audio signals. By incorporating regularization techniques and advanced optimizers, the enhanced models achieved significant improvements in generalization and stability, highlighting the importance of these strategies in machine learning.

The results underscore the effectiveness of combining traditional convolutional methods with modern transformer architectures to tackle the challenges of audio classification, paving the way for further innovations in this exciting field.

---
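Trainable-parameter figures like those above can be estimated layer by layer. Below is a sketch of the arithmetic with illustrative layer shapes mirroring the post's 16 → 64 channel progression; the exact kernel sizes and input channels of the project are assumptions:

```python
def conv2d_params(in_ch, out_ch, k):
    """A k x k Conv2d layer: weights (out*in*k*k) plus one bias per output channel."""
    return out_ch * in_ch * k * k + out_ch

def linear_params(in_f, out_f):
    """A fully connected layer: weight matrix plus bias vector."""
    return out_f * in_f + out_f

# Illustrative convolutional stack: 1 input channel, 3x3 kernels, channels 16 -> 32 -> 64.
total = (
    conv2d_params(1, 16, 3)    # 1*16*9 + 16  = 160
    + conv2d_params(16, 32, 3) # 16*32*9 + 32 = 4,640
    + conv2d_params(32, 64, 3) # 32*64*9 + 64 = 18,496
)
print(total)  # prints 23296
```

In practice the classifier head (large `linear_params` terms over flattened feature maps) dominates such counts, which is consistent with the multi-million totals reported above.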
**author**: aditi_baheti_f4a40487a091
# Easy way to process waterfall functions

- **id**: 1,910,382
- **collection_id**: 0
- **published**: 2024-07-03T15:48:40
- **canonical_url**: https://dev.to/alfianriv/easy-way-to-process-waterfall-functions-g0p
Have you ever encountered a process that flows like a waterfall? Here is an example.

![Waterfall flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/843lhsee8voqibedzfgb.png)

Even though we only hit one endpoint, we have to process every step. That is easy enough, but what if step 2 fails and the whole process has to be repeated, after data has already been written to the database? You could simply delete it, but handling that by hand is tedious. The method below can make all of that easier, even if there are dozens of steps.

A disclaimer first: I implement this in NestJS, and I assume you are already comfortable using it.

We first prepare the base service, which contains the execute command that runs all the steps in order.

```
import { Injectable } from '@nestjs/common';
import { FALLBACK_STEP_METADATA_KEY } from 'src/decorators/fallback.decorator';
import { STEP_METADATA_KEY } from 'src/decorators/step.decorator';
import { uid } from 'uid';

@Injectable()
export class WaterfallService {
  private steps: { methodName: string; order: number }[] = [];
  private fallbacks: { methodName: string; order: number }[] = [];

  constructor() {
    this.collectSteps();
  }

  private collectSteps() {
    const methods = Object.getOwnPropertyNames(Object.getPrototypeOf(this));
    methods.forEach((methodName) => {
      const order = Reflect.getMetadata(STEP_METADATA_KEY, this, methodName);
      if (order !== undefined) {
        this.steps.push({ methodName, order });
      }
      const fallbackOrder = Reflect.getMetadata(
        FALLBACK_STEP_METADATA_KEY,
        this,
        methodName,
      );
      if (fallbackOrder !== undefined) {
        this.fallbacks.push({ methodName, order: fallbackOrder });
      }
    });
    this.steps.sort((a, b) => a.order - b.order);
    this.fallbacks.sort((a, b) => a.order - b.order);
  }

  async executeSteps(params?) {
    const eventId = uid(6);
    const executedSteps = [];
    let returnedData: any;
    try {
      for (const step of this.steps) {
        let paramPassed = params;
        if (step.order > 1) {
          // From the second step onward, pass the previous step's return value
          paramPassed = returnedData;
        }
        const result = await (this as any)[step.methodName](
          eventId,
          paramPassed,
        );
        returnedData = result;
        executedSteps.push(step);
      }
    } catch (error) {
      await this.executeFallbacks(executedSteps, eventId);
      throw error; // Re-throw the error after handling fallbacks
    }
  }

  private async executeFallbacks(executedSteps, eventId) {
    // Execute fallbacks in reverse order
    for (let i = executedSteps.length - 1; i >= 0; i--) {
      const step = executedSteps[i];
      const fallback = this.fallbacks.find((f) => f.order === step.order);
      if (fallback) {
        await (this as any)[fallback.methodName](eventId);
      }
    }
  }
}
```

Next, we create decorators so the execute function can run the steps in the order we want.

```
import 'reflect-metadata';

export const STEP_METADATA_KEY = 'step_order';

export function Step(order: number): MethodDecorator {
  return (target, propertyKey, descriptor) => {
    Reflect.defineMetadata(STEP_METADATA_KEY, order, target, propertyKey);
  };
}

export const FALLBACK_STEP_METADATA_KEY = 'fallback_step_order';

export function Rollback(order: number): MethodDecorator {
  return (target, propertyKey, descriptor) => {
    Reflect.defineMetadata(
      FALLBACK_STEP_METADATA_KEY,
      order,
      target,
      propertyKey,
    );
  };
}
```

Finally, we implement it in our service.
```
import { BadRequestException, Injectable } from '@nestjs/common';
import { WaterfallService } from './commons/waterfall/waterfall.service';
import { Step, Rollback } from './decorators/step.decorator';

@Injectable()
export class AppService extends WaterfallService {
  @Step(1)
  async logFirst(eventId) {
    console.log('Step 1 [eventId]:', eventId);
  }

  @Rollback(1)
  async fallbackFirst(eventId) {
    console.log('Rollback 1 [eventId]:', eventId);
  }

  @Step(2)
  async logSecond(eventId, data) {
    console.log('Step 2 [eventId]:', eventId);
  }

  @Rollback(2)
  async fallbackSecond(eventId) {
    console.log('Rollback 2 [eventId]:', eventId);
  }

  @Step(3)
  async logThird(eventId) {
    console.log('Step 3 [eventId]:', eventId);
  }

  @Rollback(3)
  async fallbackThird(eventId) {
    console.log('Rollback 3 [eventId]:', eventId);
  }

  async execute() {
    await this.executeSteps();
    return 'Step Executed';
  }
}
```

Now, in the controller, don't forget to call the execute function, like in this example:

```
import { Controller, Get } from '@nestjs/common';
import { AppService } from './app.service';

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Get()
  getHello() {
    return this.appService.execute();
  }
}
```

Then run NestJS. To test it, hit `[GET] http://localhost:3000`. The terminal log will look like this:

![Example log terminal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o3bzo865nqbpm2znxkle.png)

Easy, right? You just add as many step functions as you need. Now, what if step 3 throws an error and we want to roll back the previous steps that have already run? No problem: as long as there is a function with the `@Rollback(number)` decorator, it will run when an error occurs. Here's an example.
```
import { BadRequestException, Injectable } from '@nestjs/common';
import { WaterfallService } from './commons/waterfall/waterfall.service';
import { Step, Rollback } from './decorators/step.decorator';

@Injectable()
export class AppService extends WaterfallService {
  @Step(1)
  async logFirst(eventId) {
    console.log('Step 1 [eventId]:', eventId);
  }

  @Rollback(1)
  async fallbackFirst(eventId) {
    console.log('Rollback 1 [eventId]:', eventId);
  }

  @Step(2)
  async logSecond(eventId, data) {
    console.log('Step 2 [eventId]:', eventId);
  }

  @Rollback(2)
  async fallbackSecond(eventId) {
    console.log('Rollback 2 [eventId]:', eventId);
  }

  @Step(3)
  async logThird(eventId) {
    throw new BadRequestException('Something error in step 3');
  }

  @Rollback(3)
  async fallbackThird(eventId) {
    console.log('Rollback 3 [eventId]:', eventId);
  }

  async execute() {
    await this.executeSteps();
    return 'Step Executed';
  }
}
```

If we run it again, the results will look like this:

![Example log terminal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vqy54z3ot8kcx2zt9hte.png)

I intentionally pass an eventId to each step: if a step inserts data into the database, save the eventId alongside it so the data can be identified by the event that created it. That way, when a rollback needs to delete the data, there is no confusion about which rows belong to the failed run.

If you need data from one step to be passed to the next, just return it from the step function; the return value is handed to the next step as its second argument.
Here is the example:

```
import { BadRequestException, Injectable } from '@nestjs/common';
import { WaterfallService } from './commons/waterfall/waterfall.service';
import { Step, Rollback } from './decorators/step.decorator';

@Injectable()
export class AppService extends WaterfallService {
  @Step(1)
  async logFirst(eventId, data) {
    console.log('Step 1 [eventId]:', eventId);
    console.log('Step 1 [data]:', data);
    return {
      step: 1,
      message: 'this data from step 1',
    };
  }

  @Rollback(1)
  async fallbackFirst(eventId) {
    console.log('Rollback 1 [eventId]:', eventId);
  }

  @Step(2)
  async logSecond(eventId, data) {
    console.log('Step 2 [eventId]:', eventId);
    console.log('Step 2 [data]:', data);
    return {
      step: 2,
      message: 'this data from step 2',
    };
  }

  @Rollback(2)
  async fallbackSecond(eventId) {
    console.log('Rollback 2 [eventId]:', eventId);
  }

  @Step(3)
  async logThird(eventId) {
    throw new BadRequestException('Something error in step 3');
  }

  @Rollback(3)
  async fallbackThird(eventId) {
    console.log('Rollback 3 [eventId]:', eventId);
  }

  async execute() {
    const data = { step: 0, message: 'this data from initial function' };
    await this.executeSteps(data);
    return 'Step Executed';
  }
}
```

Then the terminal will look like this:

![Example log terminal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/evft6twtg2d4jtywpege.png)

I added an example repo [here](https://github.com/alfianriv/waterfall-flow-nestjs). That's my method for running waterfall functions. If you have questions or want to collaborate, feel free to contact me.
**author**: alfianriv
---

# Automate User Management on Linux with a Bash Script

- **id**: 1,910,381
- **collection_id**: 0
- **published**: 2024-07-03T15:48:39
- **canonical_url**: https://dev.to/nueldstark/automate-user-management-on-linux-with-a-bash-script-27h4
- **tags**: linux, devops, bash
## **Automate User Management on Linux with a Bash Script**

Managing users on a Linux system can be a tedious task, especially when dealing with multiple users and groups. Automation can greatly simplify this process. In this article, we'll walk through creating a bash script to automate user creation, group assignment, and password management, logging all actions for auditing purposes.

## **Overview**

Our script, `create_users.sh`, will:

1. Read a text file containing usernames and group names.
2. Create users and assign them to specified groups.
3. Ensure each user has a personal group.
4. Set up home directories with appropriate permissions.
5. Generate random passwords for users.
6. Log all actions to `/var/log/user_management.log`.
7. Securely store generated passwords in `/var/secure/user_passwords.csv`.

## **Prerequisites**

To follow along, you need:

- A Linux system with sudo privileges.
- `openssl` installed for generating random passwords.

## Step-by-Step Guide

### 1. Prepare the User and Group Text File

First, create a text file that contains the usernames and group names. Each line should follow the format `username; groups`, where groups are separated by commas.

#### Example `user_groups.txt`:

```plaintext
alice; sudo,developers,web
bob; admin,devops
charlie; www-data
dave; sudo
eve; dev,admin,www-data
```

### 2. Write the Bash Script

Let's break down the script into major steps for clarity.

#### Step 1: Setting Up Logging and Password Storage

We start by defining the log file and password file locations and ensuring they exist with the appropriate permissions.
```bash
#!/bin/bash

# Log file location
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Ensure the log file exists
touch "$LOG_FILE"
chmod 600 "$LOG_FILE"

# Ensure the password file exists and set appropriate permissions
mkdir -p /var/secure
touch "$PASSWORD_FILE"
chmod 600 "$PASSWORD_FILE"
```

#### Step 2: Helper Functions

Next, we define a function to generate random passwords and another to log messages.

```bash
# Function to generate random passwords
generate_password() {
  openssl rand -base64 12
}

# Log a message
log() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}
```

#### Step 3: Reading the Input File

We read the input file line by line and process each user and their groups.

```bash
# Check if the script is run with an argument
if [ $# -eq 0 ]; then
  echo "Usage: $0 <name-of-text-file>"
  exit 1
fi

# Read the file line by line
while IFS=';' read -r username groups; do
  # Remove leading/trailing whitespace from username and groups
  username=$(echo "$username" | xargs)
  groups=$(echo "$groups" | xargs)
```

#### Step 4: Creating Users and Groups

We create users, assign them to their personal groups, and handle group assignments.

```bash
  # Check if user already exists
  if id -u "$username" >/dev/null 2>&1; then
    log "User $username already exists. Skipping creation."
    continue
  fi

  # Create user with a home directory and personal group
  useradd -m -s /bin/bash "$username"
  log "User $username created."

  # Set up home directory permissions
  chmod 700 "/home/$username"
  chown "$username:$username" "/home/$username"
  log "Set up home directory for $username."

  # Generate random password and set it
  password=$(generate_password)
  echo "$username:$password" | chpasswd
  echo "$username,$password" >> "$PASSWORD_FILE"
  log "Password set for $username."

  # Add user to their own group
  usermod -aG "$username" "$username"
  log "User $username added to their personal group $username."
```

#### Step 5: Assigning Users to Additional Groups

We ensure users are added to any additional groups specified in the text file.

```bash
  # Assign user to additional groups
  if [ -n "$groups" ]; then
    IFS=',' read -r -a group_array <<< "$groups"
    for group in "${group_array[@]}"; do
      # Remove whitespace from group name
      group=$(echo "$group" | xargs)

      # Create group if it doesn't exist
      if ! getent group "$group" >/dev/null; then
        groupadd "$group"
        log "Group $group created."
      fi

      # Add user to group
      usermod -aG "$group" "$username"
      log "User $username added to group $group."
    done
  fi
done < "$1"
```

#### Step 6: Final Steps

Finally, we ensure the password file is only readable by the owner.

```bash
# Ensure password file is only readable by the owner
chmod 600 "$PASSWORD_FILE"
echo "User creation process completed. Check $LOG_FILE for details."
```

### 3. Make the Script Executable

Ensure the script is executable by running:

```bash
chmod +x create_users.sh
```

### 4. Run the Script

Run the script with your user and group file as an argument:

```bash
sudo bash create_users.sh user_groups.txt
```

### 5. Verify the Results

- **Log file**: Check `/var/log/user_management.log` for the log of all actions performed.

```bash
sudo cat /var/log/user_management.log
```

- **Password file**: Check `/var/secure/user_passwords.csv` to view the usernames and their generated passwords (only accessible by the root user).

```bash
sudo cat /var/secure/user_passwords.csv
```

### 6. Deleting Users

To delete a user along with their home directory and mail spool, use:

```bash
sudo userdel -r username
```

Replace `username` with the actual username you want to delete.

## Conclusion

This script automates the tedious task of user management on a Linux system, ensuring users are created with the correct groups and home directory permissions, and that passwords are securely managed. By logging actions and securely storing passwords, it also helps maintain a good audit trail and sound security practices.
Automation like this can save valuable time and reduce the risk of manual errors, especially in larger environments. Feel free to adapt this script to suit your specific requirements and extend its functionality as needed. Happy scripting!

---

Get more insight on scripting from the HNG internship: https://hng.tech/internship and https://hng.tech/hire
**author**: nueldstark
---

# Buy verified cash app account

- **id**: 1,910,378
- **collection_id**: 0
- **published**: 2024-07-03T15:46:29
- **canonical_url**: https://dev.to/siwoni5341/buy-verified-cash-app-account-44n
- **tags**: webdev, javascript, beginners, programming
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 ‪(980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n"
siwoni5341
1,910,377
Election of Student Council
Check out this Pen I made!
0
2024-07-03T15:46:11
https://dev.to/bidz/election-of-student-council-jnf
codepen
Check out this Pen I made! {% codepen https://codepen.io/bidz/pen/rNgXgwg %}
bidz
1,910,376
Apple launches its privacy-focused AI: a new paradigm for artificial intelligence
In a bold move that promises to redefine the artificial intelligence landscape, Apple has just...
0
2024-07-03T15:45:41
https://dev.to/wgbn/apple-lanza-su-ia-centrada-en-la-privacidad-un-nuevo-paradigma-para-la-inteligencia-artificial-4lc
ai, apple, policy, opinion
In a bold move that promises to redefine the artificial intelligence landscape, Apple has just announced a series of technological advances that place user privacy at the center of its AI innovations. With the launch of Apple Intelligence and Private Cloud Compute (PCC), the Cupertino company is not just releasing new products; it is setting a new standard for the entire tech industry. Or is it?

### Apple Intelligence: personal, powerful AI

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4kyrrpgqfti67f9097d.jpg)

Apple Intelligence is described as a "personal intelligence system" that integrates powerful generative models directly into the heart of the iPhone, iPad, and Mac. This approach represents a paradigm shift in how we think about AI on personal devices.

Unlike many existing AI solutions that rely heavily on cloud processing, Apple Intelligence harnesses the power of Apple's hardware to perform complex tasks directly on the device. This not only improves speed and responsiveness but also ensures an unprecedented level of privacy.

The features of Apple Intelligence are genuinely impressive:

- **Writing Tools**: the system can help users rewrite, proofread, and summarize text. Imagine having a personal writing assistant always at hand, able to refine your ideas or summarize long documents in seconds.
- **Image Playground**: this tool lets users create fun, playful images directly in apps such as Messages, Notes, and Keynote. It is like having a pocket design studio, ready to bring your visual ideas to life.
- **Genmoji**: the ability to create custom emoji for any situation opens up a new world of personal expression. Imagine being able to create an emoji that perfectly captures your mood or a unique situation.
- **Siri and App Intents integration**: Apple Intelligence enhances Siri's capabilities, making it more natural and personal. In addition, developers can take advantage of predefined, pre-trained App Intents to make their apps' actions more discoverable throughout the system.

These features are not mere increments; they are a complete reinvention of how we interact with our devices. AI is no longer an extra feature but an integral, ubiquitous part of the user experience.

### Private Cloud Compute: redefining security in the cloud

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6kvpve2ageywj6j3pqkm.jpg)

Private Cloud Compute (PCC) is without a doubt the most revolutionary aspect of Apple's announcement. This new cloud infrastructure is designed specifically for private AI processing, extending the strong security and privacy guarantees of Apple devices into the cloud.

PCC addresses several critical concerns that have long plagued cloud AI services:

- **Stateless computation**: PCC guarantees that users' personal data is used exclusively to fulfill the user's request and is not retained after processing. Once your request has been fulfilled, your data is completely erased from the system. There is no history and no trace, only the result you asked for.
- **Enforceable guarantees**: PCC's security and privacy guarantees are technically enforceable, not merely dependent on policies or promises. This is achieved through a combination of custom hardware and a highly secure operating system. PCC harnesses the power of Apple Silicon in custom servers, bringing iPhone security technologies such as the Secure Enclave and Secure Boot into the data center.
- **No privileged runtime access**: not even trusted Apple staff with site access can bypass PCC's privacy guarantees. This is a radical departure from traditional cloud systems, where administrators typically have broad access for troubleshooting.
- **Non-targetability**: an attacker cannot compromise a specific user's data without attempting a broad attack on the entire PCC system. This is achieved through a sophisticated "target diffusion" scheme that hides the origin of requests and distributes processing so that no individual node can be singled out for a targeted attack.
- **Verifiable transparency**: Apple is taking the unprecedented step of making the software images of every production PCC release available for security research. This allows independent researchers to verify Apple's security claims and help identify potential vulnerabilities.

The PCC architecture is a masterpiece of security engineering. It uses a combination of end-to-end encryption, secure hardware, and innovative software signing to create a cloud computing environment that is fundamentally private and secure.

### Implications for the future of AI

Apple's approach to AI has profound implications for the future of technology:

- **Privacy as the default**: Apple is setting a new standard in which privacy is not an add-on feature but a fundamental part of the AI architecture. This could force other companies to rethink their approaches to data privacy and security.
- **Trust and transparency**: by allowing security researchers to verify its claims, Apple is building trust in a way few tech companies have before. This could lead to a new era of transparency in the tech industry, with companies becoming more open about their security and privacy practices.
- **Edge computing vs. cloud computing**: Apple Intelligence shows that many AI tasks can be performed on-device, reducing dependence on cloud services. This not only improves privacy but can also yield faster, more responsive user experiences.
- **A challenge for competitors**: other tech companies will now face pressure to match or exceed Apple's privacy guarantees in their own AI services. This may accelerate innovation in privacy and security across the industry.
- **A shift in AI development**: Apple's approach could influence how AI models are developed and deployed. We may see a move toward smaller, more efficient models that can run on local devices instead of large cloud-based models.
- **Impact on regulation**: Apple's innovations could shape future AI regulation, setting new standards for what is technically possible in terms of AI privacy and security.

### Is this really the future?

With Apple Intelligence and Private Cloud Compute, Apple is not just launching new products; it is redefining expectations of what AI can and should be. By placing privacy and security at the center of its AI innovations, the company is issuing a challenge to the entire industry.

Apple's approach demonstrates that it is possible to have powerful, useful AI without sacrificing privacy. This could be a turning point in how we think about AI and personal data. It also raises important questions, however: will other companies follow suit? How will this affect the development of AI models that depend on large amounts of user data?

It remains to be seen how competitors will respond and whether consumers will value these privacy guarantees enough to influence their choice of products and services. One thing is certain: Apple has just raised the bar significantly for responsible, user-centered AI.

As we move toward a future increasingly dominated by AI, Apple's approach serves as a powerful reminder that technological innovation need not come at the expense of personal privacy. In fact, as Apple has shown, privacy can be a catalyst for innovation, leading to more creative, user-centered solutions.

Time will tell whether this approach becomes the new industry standard or remains Apple's differentiator. Either way, it is an exciting development that promises to shape the future of AI in ways that could hardly have been predicted.

#### References

[Introducing Apple's On-Device and Server Foundation Models](https://machinelearning.apple.com/research/introducing-apple-foundation-models)

[Private Cloud Compute: A new frontier for AI privacy in the cloud](https://security.apple.com/blog/private-cloud-compute/)
wgbn
1,910,375
Next.js vs Vue.js: In-depth Comparative Study
Introduction: Next.js, developed by Vercel, is a React framework renowned for its advanced...
0
2024-07-03T15:45:08
https://dev.to/dominion_olonilebi_9dd01d/nextjs-vs-vuejs-in-depth-comparative-study-1na
webdev, nextjs, vue, programming
**Introduction:** Next.js, developed by Vercel, is a React framework renowned for advanced capabilities such as server-side rendering and static site generation. Engineered to streamline the development of React applications, Next.js empowers developers to create optimized, scalable solutions with faster page loads and stronger SEO. Vue.js is a progressive JavaScript framework renowned for its simplicity and seamless integration with other libraries, offering reactive data binding that automatically updates the view when the underlying data changes.

**Purpose:** This article aims to guide developers, particularly those at the beginner to intermediate stage, in selecting the most suitable JavaScript framework for web development.

**Outline:**

- Introduction to Next.js
- Benefits/Advantages of Next.js
- Introduction to Vue.js
- Advantages of using Vue.js
- Comparative Analysis of Next.js and Vue.js
- Conclusion/Summary

**Introduction to Next.js:**

Next.js is an open-source web development framework built on React, renowned for its remarkable features and widespread adoption. Developed by Vercel, Next.js distinguishes itself with robust capabilities such as server-side rendering (SSR) and enhanced search engine optimization (SEO). The framework offers built-in routing, simplifying the creation of dynamic routes and the management of navigation within applications.

**Advantages of Next.js**

- **Server-Side Rendering (SSR):** Pages are rendered on the server, so content is already available for search engines to crawl and index, and users see a fully rendered page on the first load. This addresses the slow rendering and loading times associated with purely client-side rendering, improving both SEO and initial load performance.
- **Built-in CSS and JavaScript bundling:** Next.js takes care of bundling and optimizing your CSS and JavaScript code, streamlining the development process so developers can focus on building features rather than configuring complex setups.
- **Static Site Generation (SSG):** Pages are pre-rendered at build time, making them extremely fast to load. This is ideal for content that changes infrequently, such as blog posts or landing pages.
- **Data fetching:** Next.js offers several ways to fetch data, including `getStaticProps` for fetching data at build time and `getServerSideProps` for fetching data on each request. This flexibility lets you choose the most appropriate method for each page.

**Introduction to Vue.js:**

Vue.js is a progressive JavaScript framework tailored for crafting user interfaces. Designed for incremental adoption, Vue.js updates the UI automatically in response to data changes. Developers use Vue.js to build modern, interactive web applications with a focus on efficiency and scalability. Built on standard HTML, CSS, and JavaScript, Vue.js employs a declarative, component-based programming model for streamlined UI development. Its core library concentrates solely on the view layer, ensuring seamless integration with other libraries and existing projects; paired with modern tooling and complementary libraries, Vue.js also excels at powering sophisticated single-page applications.

**Advantages of Vue.js:**

- **Easy to learn:** Vue.js has a gentle learning curve; its straightforward syntax and clear documentation make it accessible to beginners.
- **Lightweight:** Vue.js's small footprint ensures fast performance and a responsive user experience.
- **Versatility:** Vue.js can be integrated into existing projects and used for many types of applications, from a small widget to a complex SPA.
- **Reactivity:** Vue.js employs a reactive system that tracks changes to your data. Whenever a data property is modified, Vue.js efficiently updates the corresponding parts of the UI, ensuring a consistent and responsive user experience.

**Comparing Next.js and Vue.js:**

- Server-side rendering in Next.js offers superior SEO and faster initial page loads compared to Vue's default client-side rendering approach.
- Vue's ecosystem, including Vuex for state management and Vue Router, presents a gentler learning curve for beginners than Next.js's configuration and file-based routing system.
- Next.js's built-in API routes allow back-end endpoints to live in the same project, whereas Vue typically requires a separate back-end service.
- Vue's flexibility in integrating with other libraries and tools makes it adaptable to diverse projects, while Next.js is optimized specifically for React-based applications.

At [HNG](https://hng.tech/internship), React.js is used prominently in our web development work. React is a JavaScript library known for building dynamic, interactive user interfaces (UIs); its component-based architecture enables reusable UI elements, making complex web applications easier to manage and maintain. React is particularly valued for its efficiency and versatility in building single-page applications (SPAs). Visit [HNG Internship](https://hng.tech/internship) to learn more.

**In Conclusion:** Choosing a JavaScript framework ultimately comes down to which one most efficiently fits your specific needs.
At [HNG Internship](https://hng.tech/internship), you will have the opportunity to expand your knowledge, engage in diverse projects, and enhance your technical skills. I trust this article has provided valuable insights to guide your decision towards choosing the most suitable framework for your projects.
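To make the build-time data fetching discussed above concrete, here is a minimal, framework-free sketch of the `getStaticProps` flow. `getStaticProps` and `revalidate` are real Next.js names, but everything else is an assumption for illustration: the hard-coded post list stands in for a real `fetch`, and `PostsPage` returns a plain string instead of JSX so the sketch runs without React. In a real app this would live in a `pages/` file and Next.js itself would invoke it.

```javascript
// Sketch of Next.js-style static generation (SSG). In a real app, Next.js
// calls getStaticProps at build time and passes the returned props to the page.
async function getStaticProps() {
  // Stand-in for fetching from a CMS or database at build time.
  const posts = [{ id: 1, title: "Hello" }];
  // `revalidate` opts into Incremental Static Regeneration (at most once per 60s).
  return { props: { posts }, revalidate: 60 };
}

// A real page component would return JSX; a string keeps this sketch
// runnable without React.
function PostsPage({ posts }) {
  return posts.map((p) => p.title).join(", ");
}

// Simulate the build step: fetch props, then render the page once.
getStaticProps().then(({ props }) => {
  console.log(PostsPage(props)); // prints "Hello"
});
```

Swapping `getStaticProps` for `getServerSideProps` moves the same fetch from build time to every request, which is exactly the trade-off described above.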
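Vue's reactivity, listed among its advantages above, can also be sketched in a few lines. This is not Vue's actual implementation (the real `@vue/reactivity` package does fine-grained dependency tracking); it is a bare Proxy-based stand-in, with `reactive` borrowing the Vue 3 API name and `onChange` as a hypothetical render hook, just to show how mutating state can refresh a view automatically.

```javascript
// Minimal sketch of Vue-style reactivity: a Proxy intercepts writes and
// re-runs a "render" callback, the way Vue re-renders components on change.
function reactive(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      onChange(); // Vue would patch the DOM here; we just rebuild a string
      return true;
    },
  });
}

let view = "";
const state = reactive({ count: 0 }, () => {
  view = "Count is " + state.count;
});

state.count = 1; // a plain assignment; the view updates automatically
console.log(view); // prints "Count is 1"
```

The point of the sketch is the ergonomics: application code mutates `state.count` like an ordinary object property, and the framework, not the developer, decides what to re-render.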
dominion_olonilebi_9dd01d
1,910,374
Make a digital clock in Mini Micro
Today let's do a little project in Mini Micro that makes an old-school digital clock. This will...
0
2024-07-03T15:44:52
https://dev.to/joestrout/make-a-digital-clock-in-mini-micro-3mpl
programming, miniscript, minimicro, animation
Today let's do a little project in Mini Micro that makes an old-school digital clock. This will illustrate pulling apart a sprite sheet, animation by drawing directly to a PixelDisplay (rather than using actual sprites), and the `dateTime` module to get the current time.

## The digits image

Among the built-in images in Mini Micro is [/sys/pics/digits.png](https://github.com/JoeStrout/minimicro-sysdisk/blob/master/sys/pics/digits.png). It looks like this:

![/sys/pics/digits.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n0jyg1i5z1cm4g02j27.png)

Against a white background, this is pretty confusing, isn't it? If you view it against a black background — for example, by doing `view "/sys/pics/digits.png"` in Mini Micro — it looks like this:

![/sys/pics/digits.png against a black background](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5rvr5m0qwsdf4jo2wdet.png)

OK, now we can see that it has the digits 0 through 9, plus the letters A-F (handy if we wanted to make a [hexadecimal](https://en.wikipedia.org/wiki/Hexadecimal) display). But why are there two apparently-identical rows of digits? The answer becomes clear if we view against a different background color, neither black nor white. [Try](https://miniscript.org/MiniMicro/#playnow) this:

```
gfx.clear color.blue
gfx.drawImage file.loadImage("/sys/pics/digits.png")
gfx.scale = 4
```

![/sys/pics/digits.png, scaled up against a blue background](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tahij78x4mmnma2b8jm2.png)

Aha! Now we can see why there are two rows. The top row has dark segments for the "off" bits of each digit, while in the bottom row, those areas are transparent. Which you use will depend on how you want it to look. For today's project, we'll use the top row, which will have the subtle but realistic dark segments. This will allow us to simply overdraw a new digit on top of an old one, without having to erase first (and it also looks cool, if the background color is not pure black). We'll also be using only the first ten digits; we will ignore A through F today.

## Pulling apart a sprite sheet

Even though we're not actually using sprites today, a big image that contains a bunch of smaller images is still called a "sprite sheet" (or sometimes an "atlas", but I prefer the former term). So let's begin by loading digits.png and then pulling out the ten digits we need, as we would with any other sprite sheet.

Launch Mini Micro, if you haven't already (or use the [web version](https://miniscript.org/MiniMicro/#playnow) if you're OK with not saving your work at the end). Use the `edit` command to launch the editor, and type in this code:

```
digitImage = file.loadImage("/sys/pics/digits.png")
dw = 10; dh = 18 // digit width and height
digits = []
for i in range(0,9)
	digits.push digitImage.getImage(i * dw, dh, dw, dh)
end for
```

This loads the image into a variable called `digitImage`, and then prepares a list (`digits`) with the individual digit images. Each of these is `dw` pixels wide, and `dh` pixels tall. We're getting the top row because the second parameter to `getImage` is `dh` rather than `0` — remember, coordinates in Mini Micro almost always count from the bottom up. So the bottom row starts at Y = 0, and the top row starts at Y = 18 (or `dh`).

Run this code, and then test it by entering `view digits[4]` on the command line. You should see a little digital 4 in the center of the screen.

## Some helper functions

Our goal is to draw a time string like "08:22:45". We'll draw this one character at a time. So, let's make two helper functions: one to draw a digit, and one to draw a colon. `edit` your program again, and add the following.

```
drawDigit = function(value=0, col=0)
	x = col * dw
	y = 0
	gfx.drawImage digits[value], x, y, dw, dh, 0, 0, dw, dh, gfx.color
end function

drawColon = function(col=0)
	x = col * dw
	y = 0
	gfx.fillEllipse x + dw/2 - 1, y+5, 2, 2
	gfx.fillEllipse x + dw/2 - 1, y+12, 2, 2
end function
```

Both of these start by calculating the X and Y position to draw at, given the column position (`col`) where we want the character to appear. `drawDigit` calls `gfx.drawImage` to do its work. If we didn't care about tinting the digits, we could just do:

```
gfx.drawImage digits[value], x, y
```

But we want to support drawing colored digits, so it can look like old-school red or green LED clocks. So we have to supply all nine parameters to [`drawImage`](https://miniscript.org/wiki/PixelDisplay.drawImage), just to get to the last one, which is the tint color. (MiniScript does not support passing arguments by name, but only by position.)

The `drawColon` function starts by computing X and Y as well, and then just calls `fillEllipse` twice, once for each dot of the colon. (Feel free to play around with the size and position of these dots later to make it your own!)

If you run this code, and then do `clear` to clear the screen, you should be able to enter `drawDigit` to make a little 0 appear at the bottom of the screen. Also try `drawColon 1` to make a colon appear next to it (in column 1).

## Drawing the current time

To get the current time, go back to the top of your code and insert:

```
import "dateTime"
```

This loads the `dateTime` module (found in /sys/lib), which gives us access to the current date and time. Then scroll to the bottom of your program, and add this function:

```
drawTime = function(time)
	if time == null then time = dateTime.now[-8:]
	for i in time.indexes
		if time[i] == ":" then
			drawColon i
		else
			drawDigit time[i].val, i
		end if
	end for
end function
```

This draws the given time string, or if you don't give it one, it gets the current time as the last 8 digits of `dateTime.now`. Then it loops over that string, calling the `drawColon` and `drawDigit` functions we defined before.

Run this, `clear` the screen, and then enter `drawTime`. You should see the current time appear in the lower-left corner. Almost done!

## The main program

Finally, with all these helper functions in hand, we are ready to create the main program. `edit` your code, and add this to the bottom:

```
clear
//gfx.scale = 8
gfx.scrollX = -150; gfx.scrollY = -350
gfx.color = color.red
while true
	drawTime
	wait
end while
```

This starts by clearing the screen, and configuring the main PixelDisplay (aka `gfx`) with a red color, and a scroll position. (Instead of scrolling the display we could have also changed the calculation of X and Y in our draw methods — feel free to experiment!) There is also a commented-out line, `gfx.scale = 8`. If you want to zoom in and have a closer look at your digits, feel free to uncomment that line.

After all that setup, the main loop is simple: it just loops forever, calling `drawTime` every second.

![Animated GIF of clock display](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rlvkdgkitacnnxl4nco3.gif)

When you run it, you should see a display like the above. (Press Control-C to break out of the program.)

## Taking it further

This was a fun little exercise in pulling apart a sprite sheet, making some drawing helpers, and getting the current time. But the same digits and drawing method could be used for lots of other purposes. Some ideas:

• Make a countdown timer for a bomb in a bomb-defusing game
• Make a doomsday clock showing days, hours, and minutes to the next election
• Show game score in a retro digital display
• Make a CPU emulator (like [CHIP-8](https://github.com/JoeStrout/minimicro-chip8)) with digital hexadecimal displays for its internal state

These are just a few ideas. Do you have any others? Questions about anything presented here? Post them in the comments below. Happy coding!
joestrout
1,910,373
The Ultimate Guide to Engagement Rings: Finding the Perfect Symbol of Love
Choosing an engagement ring is a significant milestone in a couple’s journey. This ring is not only a...
0
2024-07-03T15:44:29
https://dev.to/marcdevon/the-ultimate-guide-to-engagement-rings-finding-the-perfect-symbol-of-love-2h9h
Choosing an engagement ring is a significant milestone in a couple’s journey. This ring is not only a piece of jewelry but a symbol of commitment, love, and the start of a new chapter together. With an array of styles, gemstones, and settings to choose from, the process can be overwhelming. This guide will help you navigate the choices and find the perfect engagement ring that captures your unique love story.

1. Understanding the Four Cs of Diamonds

The Four Cs—Cut, Color, Clarity, and Carat—are the universal standards for grading diamonds. Understanding these factors is crucial in selecting the right diamond.

- Cut: The cut affects a diamond's brilliance. Well-cut diamonds reflect light beautifully, making them appear more brilliant and larger than their carat weight suggests. Popular cuts include round, princess, cushion, and emerald.
- Color: Diamonds are graded on a scale from D (colorless) to Z (light yellow or brown). Colorless diamonds are the most sought-after, but near-colorless diamonds (G-J) can offer great value with minimal color visible to the naked eye.
- Clarity: Clarity measures the presence of internal or external flaws, known as inclusions and blemishes. Diamonds with fewer inclusions are rarer and more valuable. The clarity scale ranges from Flawless (FL) to Included (I1, I2, I3).
- Carat: Carat weight measures the size of the diamond. While larger diamonds are rarer and more expensive, the cut, color, and clarity significantly impact a diamond's overall appearance and value.

2. Choosing the Right Setting

The setting of an engagement ring not only secures the diamond but also enhances its beauty. Here are some popular settings:

- Prong Setting: This classic setting uses small metal claws to hold the diamond securely while allowing maximum light to pass through, enhancing its brilliance.
- Bezel Setting: A metal rim encircles the diamond, providing a modern look and added protection, making it ideal for active lifestyles.
- Halo Setting: A circle of smaller diamonds surrounds the center stone, creating the illusion of a larger diamond and adding extra sparkle.
- Pavé Setting: Small diamonds are set closely together along the band, adding continuous sparkle and a luxurious feel.
- Channel Setting: Diamonds are set in a groove between two metal walls, providing a sleek and modern look while protecting the stones.

3. Selecting the Perfect Metal

The metal of the ring band can influence the overall look and durability of the ring. Popular choices include:

- Platinum: Known for its durability and natural white sheen, platinum is hypoallergenic and maintains its luster over time.
- White Gold: Rhodium-plated to enhance its shine, white gold offers a similar look to platinum at a lower price.
- Yellow Gold: Classic and timeless, yellow gold complements traditional and vintage-style rings.
- Rose Gold: With its warm and romantic hue, rose gold has become a popular choice for modern and vintage-inspired rings.

4. Exploring Alternative Gemstones

While diamonds are the traditional choice for [engagement rings in Toronto](https://www.serliandsiroan.com/shop-engagement-rings/), alternative gemstones are increasingly popular for their unique colors and meanings:

- Sapphires: Available in a range of colors, sapphires are a durable and vibrant alternative to diamonds.
- Emeralds: Known for their rich green color, emeralds symbolize rebirth and love. They are a softer stone, so they require careful handling.
- Rubies: Symbolizing passion and protection, rubies are a stunning and durable choice for engagement rings.
- Moissanite: A lab-created stone that closely resembles a diamond but offers more brilliance and fire, moissanite is an affordable and ethical alternative.

5. Personalizing Your Ring

Adding personal touches can make an engagement ring even more special:

- Custom Design: Work with a jeweler to create a custom-designed ring that reflects your partner's style and your unique love story.
- Engravings: Add a personal touch with an engraving inside the band, such as initials, a special date, or a meaningful phrase.
- Birthstones: Incorporate birthstones or other gemstones that hold personal significance into the design.

6. Setting a Budget

Establishing a budget early on can help narrow down options and make the selection process smoother. Consider the following:

- Prioritize Features: Decide which aspects of the ring are most important, such as diamond size, quality, or the type of setting.
- Consider Longevity: While staying within budget is important, remember that an engagement ring is a long-term investment in a symbol of your commitment.

7. Ethical and Sustainable Options

Ethical considerations are increasingly important to modern couples:

- Lab-Grown Diamonds: These diamonds are identical to natural diamonds in terms of physical and chemical properties but are more affordable and have a lower environmental impact.
- Conflict-Free Diamonds: Ensure your diamond is sourced from conflict-free areas and adheres to the Kimberley Process, which aims to prevent the flow of conflict diamonds.

Conclusion

Choosing an engagement ring is a deeply personal and significant decision. By understanding the Four Cs, exploring different settings and metals, considering alternative gemstones, and personalizing the ring to reflect your love story, you can find the perfect symbol of your commitment. Whether you opt for a classic diamond, a vibrant sapphire, or a custom-designed piece, the right engagement ring will be a cherished reminder of your love and the journey ahead.
marcdevon
1,910,372
What Are the Most Cost-Effective Ways to Try THC-A for the First Time
Consumers trying THC-A for the first time can find the best deals by focusing on online retailers...
0
2024-07-03T15:44:12
https://dev.to/rosojig/what-are-the-most-cost-effective-ways-to-try-thc-a-for-the-first-time-3b3h
Consumers trying THC-A for the first time can find the best deals by focusing on online retailers offering competitive prices. Comparing prices across different websites and looking for promotional discounts, first-time buyer coupons, and loyalty programs can further reduce costs. Additionally, reading customer reviews and checking for detailed product descriptions can ensure newbies get quality THC-A products at affordable prices. Online retailers are the most cost-effective way to find affordable THCA products because they have lower overhead costs than physical stores. Online stores also provide a broad selection, making comparing prices and finding the [best deals easier](http://google.com/). Additionally, online retailers frequently compete for customer loyalty by offering discounts, promotional codes, and loyalty programs, further reducing consumer costs.
rosojig
1,910,369
Apple Launches Its Privacy-Focused AI: A New Paradigm for Artificial Intelligence
In a bold move that promises to redefine the landscape of artificial intelligence, Apple has just...
0
2024-07-03T15:44:10
https://dev.to/wgbn/apple-launches-its-privacy-focused-ai-a-new-paradigm-for-artificial-intelligence-39g
ai, apple, policy, opinion
In a bold move that promises to redefine the landscape of artificial intelligence, Apple has just announced a series of technological advancements that put user privacy at the center of its AI innovations. With the launch of Apple Intelligence and Private Cloud Compute (PCC), the Cupertino company is not just launching new products, but setting a new standard for the entire technology industry. Or is it?

### Apple Intelligence: Personal and Powerful AI

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4kyrrpgqfti67f9097d.jpg)

Apple Intelligence is described as a “personal intelligence system” that integrates powerful generative models directly into the heart of iPhone, iPad and Mac. This approach represents a paradigm shift in how we think about AI on personal devices. Unlike many existing AI solutions that rely heavily on cloud processing, Apple Intelligence harnesses the power of Apple hardware to perform complex tasks directly on the device. This not only improves speed and responsiveness, but also ensures an unprecedented level of privacy.

Apple Intelligence features are truly impressive:

- **Writing Tools**: The system can help users rewrite, revise and summarize text. Imagine having a personal writing assistant always at hand, capable of refining your ideas or summarizing long documents in seconds.
- **Image Playground**: This tool allows users to create fun and playful images directly in apps like Messages, Notes and Keynote. It's like having a pocket-sized design studio, ready to bring your visual ideas to life.
- **Genmoji**: The ability to create custom emojis for any situation opens up a new world of personal expression. Imagine being able to create an emoji that perfectly captures your mood or a unique situation.
- **Integration with Siri and App Intents**: Apple Intelligence enhances Siri's capabilities, making it more natural and personal.
Additionally, developers can leverage predefined and pre-trained App Intents to make their app's actions more discoverable across the system.

These features are not just increments, but a complete reinvention of how we interact with our devices. AI is no longer an additional feature, but an integral and ubiquitous part of the user experience.

### Private Cloud Compute: Redefining Cloud Security

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6kvpve2ageywj6j3pqkm.jpg)

Private Cloud Compute (PCC) is, without a doubt, the most revolutionary aspect of Apple's announcement. This new cloud infrastructure is designed specifically for private AI processing, extending the robust security and privacy guarantees of Apple devices to the cloud. PCC addresses several critical concerns that have plagued cloud AI services:

- **Stateless Computing**: PCC ensures that users' personal data is used exclusively to fulfill the user's request and is not retained after processing. This means that once your request is fulfilled, your data is completely erased from the system. There is no history, no traces — just the result you requested.
- **Enforceable guarantees**: PCC's security and privacy guarantees are technically enforceable, not dependent only on policies or promises. This is achieved through a combination of custom hardware and a highly secure operating system. PCC leverages the power of Apple Silicon on custom servers, bringing iPhone security technologies like Secure Enclave and Secure Boot to the data center.
- **No Runtime Privileged Access**: Even Apple's own site reliability staff can't get around PCC's privacy guarantees. This is a radical change from traditional cloud systems, where administrators typically have broad access for troubleshooting purposes.
- **Non-targetability**: An attacker cannot compromise specific user data without attempting a broad attack on the entire PCC system.
This is achieved through a sophisticated “target diffusion” system that obscures the origin of requests and distributes processing so that no individual node can be targeted by a specific attack.

- **Verifiable Transparency**: Apple is taking the unprecedented step of making software images of every production build of PCC available for security research. This allows independent researchers to verify Apple's security claims and help identify potential vulnerabilities.

The PCC architecture is a masterpiece of security engineering. It uses a combination of end-to-end encryption, secure hardware, and innovative software signing to create a cloud computing environment that is fundamentally private and secure.

### Implications for the Future of AI

Apple's approach to AI has profound implications for the future of technology:

- **Privacy as Standard**: Apple is setting a new standard where privacy is not an additional feature, but a fundamental part of the AI architecture. This could force other companies to rethink their approaches to data privacy and security.
- **Trust and Transparency**: By allowing security researchers to verify its claims, Apple is building trust in a way few technology companies have done before. This could lead to a new era of transparency in the technology industry, where companies are more open about their security and privacy practices.
- **Edge Computing vs. Cloud Computing**: Apple Intelligence shows that many AI tasks can be performed on-device, reducing dependence on cloud services. This not only improves privacy, but can also lead to faster, more responsive user experiences.
- **Challenge for Competitors**: Other technology companies will now face pressure to match or exceed Apple's privacy guarantees in their own AI services. This can accelerate privacy and security innovation across the industry.
- **Change in AI Development**: Apple's approach could influence how AI models are developed and deployed.
There may be a move towards smaller, more efficient models that can run on local devices rather than large cloud-based models.

- **Impact on Regulation**: Apple's innovations could influence future AI regulations, setting new standards for what is technically possible in terms of AI privacy and security.

### Is this really the future?

With Apple Intelligence and Private Cloud Compute, Apple is not just launching new products, but redefining expectations for what AI can and should be. By placing privacy and security at the center of its AI innovations, the company is issuing a challenge to the entire industry. Apple's approach demonstrates that it is possible to have powerful and useful AI without sacrificing privacy. This could be a turning point in how we think about AI and personal data. However, it also raises important questions: Will other companies follow suit? How will this affect the development of AI models that rely on large amounts of user data? It remains to be seen how competitors will respond and whether consumers will value these privacy guarantees enough to influence their product and service choices.

One thing is certain: Apple has just significantly raised the bar for responsible, user-centric AI. As we move into a future increasingly dominated by AI, Apple's approach serves as a powerful reminder that technological innovation doesn't have to come at the expense of personal privacy. In fact, as Apple has demonstrated, privacy can be a catalyst for innovation, leading to more creative, user-centric solutions. Time will tell whether this approach becomes the new industry standard or remains Apple's differentiator. Either way, it's an exciting development that promises to shape the future of AI in ways that could scarcely have been predicted.
#### References

[Introducing Apple's On-Device and Server Foundation Models](https://machinelearning.apple.com/research/introducing-apple-foundation-models)

[Private Cloud Compute: A new frontier for AI privacy in the cloud](https://security.apple.com/blog/private-cloud-compute/)
wgbn
1,910,371
Startups: Lessons I Learned
Throughout my career, I've had the opportunity to work in several startups across different sectors....
0
2024-07-03T15:43:54
https://dev.to/douglaspujol/startups-lessons-i-learned-47fk
Throughout my career, I've had the opportunity to work in several startups across different sectors. In this brief article, I share some key points I learned from these experiences.

**Start Simple and Build Only What You Need**

The best way to start a new project is to begin simply and avoid unnecessary complexities. Projects naturally become more complex over time, so focus on basic functionalities and develop the code based on the real needs of your product. Avoid creating complex solutions prematurely, as this wastes time and resources. A good architecture should be flexible, cost-effective, and capable of adapting to changes.

**Don't Fall Victim to Hype**

One of the biggest challenges for companies today is the hasty adoption of modern technologies, whether it's a new framework, a new styling library, or an architecture popularized by an influencer. While exploring new technologies is exciting, do so cautiously. New technologies often come with bugs and uncovered use cases because they need time to mature. Be critical and aware of the trade-offs of each choice.

**Small, Talented Teams Outperform Large Mediocre Teams**

If you're a manager, value and retain your talents. Maintaining a small, highly skilled team is more efficient than having large, average teams. Many companies believe that more programmers lead to better software quality, which is a misconception. The synergy and efficiency of a small, competent team are unmatched and extremely valuable for project success.

**Separating the Wheat from the Chaff**

Know how to identify who is truly committed to helping and building something extraordinary with you, as opposed to those who are just comfortable. Value dedicated professionals who are willing to grow alongside the project.

**Take Care of Yourself and Your Family**

Programming is a passion that has transformed my life and the lives of many others. However, maintaining a balance between professional and personal life is essential for success. Include daily physical activities, such as going to the gym or practicing jiu-jitsu, and dedicate quality time to your family. A strong mind and a healthy body are crucial for achieving any goal.

**Culture Is Crucial**

The importance of organizational culture became clear to me after several years in the workforce. Culture shapes the future of the company. Invest in creating a positive work environment where values and goals are shared among all team members.

**Recognition Matters**

The most valuable recognition comes in monetary forms, development opportunities, and meaningful friendships. The best companies I've been a part of were those where I built lasting friendships. These are the standards I prioritize in every project I undertake.
douglaspujol
1,910,368
Chat Moderation with OpenAI
Moderate PubNub Chat using PubNub Functions and OpenAI's free moderation API
0
2024-07-03T15:42:12
https://dev.to/pubnub-fr/moderation-de-chat-avec-openai-1op3
Any application containing [in-app chat](https://www.pubnub.com/solutions/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) needs a way to regulate and moderate the messages that users can exchange. Since it is not possible to moderate all inappropriate content with human moderators, the moderation system must be automatic. Because users will frequently try to get around moderation, machine learning, generative AI, and large language models (LLMs) \[and GPT models such as GPT-3 and GPT-4\] are popular ways to moderate content. Moderation is a complex topic, and PubNub offers various solutions to cover all of our developers' use cases.

- [PubNub Functions](https://www.pubnub.com/docs/serverless/functions/overview?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) can intercept and modify messages before they reach their destination. You can apply custom logic within a Function, including calling an external REST API, which allows you to use any external service for message moderation. This approach is used in this article to integrate OpenAI.
- PubNub Functions offer [custom integrations](https://www.pubnub.com/integrations/?page=1&sortBy=Most%20recent&utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) that support content moderation and sentiment analysis, including [Lasso Moderation](https://www.pubnub.com/integrations/lasso-moderation/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr), [Tisane](https://www.pubnub.com/integrations/tisane-labs-nlp/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr), [a RegEx-based profanity filter](https://www.pubnub.com/integrations/chat-message-profanity-filter/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr), [Lexalytics](https://www.pubnub.com/integrations/lexalytics/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr), and [Community Sift](https://www.pubnub.com/integrations/communitysift/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr).
- PubNub's BizOps Workspace can [monitor and moderate conversations](https://www.pubnub.com/how-to/monitor-and-moderate-conversations-with-bizops-workspace/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr), including the ability to edit and delete messages.

Open AI's Moderation Endpoint
-----------------------------

This article looks at OpenAI's [moderation API](https://platform.openai.com/docs/guides/moderation/overview), a REST API that uses artificial intelligence (AI) to determine whether provided text contains potentially harmful terms. The goal of the API is to allow developers to filter out or remove harmful content and, at the time of writing, it is provided **free of charge**, but it only supports English.
The model behind the moderation API will classify the provided text as follows (taken from the [API documentation](https://platform.openai.com/docs/guides/moderation/overview)):

- **Hate:** content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Hateful content aimed at non-protected groups (e.g., chess players) falls under harassment.
- **Hate / Threatening:** Hateful content that also includes violence or serious harm towards the targeted group based on its race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
- **Harassment:** Content that expresses, incites, or promotes harassing language towards a target.
- **Harassment / Threatening:** Harassment content that also includes violence or serious harm towards a target.
- **Self-harm:** Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
- **Self-harm / Intent:** Content in which the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.
- **Self-harm / Instructions:** Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.
- **Sexual:** Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
- **Sexual / Minors:** Sexual content that includes an individual who is under 18 years old.
- **Violence:** Content that depicts death, violence, or physical injury.
- **Violence / Graphic:** Content that depicts death, violence, or physical injury in graphic detail.

The results are provided in a JSON structure as follows (again, taken from the API documentation):

```js
{
  "id": "modr-XXXXX",
  "model": "text-moderation-007",
  "results": [
    {
      "flagged": true,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": true,
        "violence": true
      },
      "category_scores": {
        // Out of scope for this article
      }
    }
  ]
}
```

Calling the Open AI Moderation API from PubNub
----------------------------------------------

**Integrating the moderation API into any PubNub application is easy using PubNub Functions** by following this step-by-step tutorial: Functions let you capture real-time events that occur on the PubNub platform, such as messages being sent and received; you can then write custom serverless code in those Functions to modify, reroute, augment, or filter messages as needed. You will need to use the 'Before Publish or Fire' event type; this type of Function is invoked _before_ the message is delivered and must finish executing before the message is released for delivery to its recipients.
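Downstream code, whether a Function or a client, typically only needs to inspect the `flagged` field of this response. A minimal sketch of that check (the `isFlagged` helper name is hypothetical, not part of the OpenAI or PubNub APIs):

```javascript
// Hypothetical helper: returns true if any result in an OpenAI
// moderation response was flagged as potentially harmful.
function isFlagged(moderationResponse) {
  if (!moderationResponse || !Array.isArray(moderationResponse.results)) {
    return false; // treat missing or malformed moderation data as not flagged
  }
  return moderationResponse.results.some(result => result.flagged === true);
}

// Example using the response shape shown above
const sample = {
  id: "modr-XXXXX",
  model: "text-moderation-007",
  results: [{ flagged: true, categories: { violence: true } }]
};
console.log(isFlagged(sample)); // true
console.log(isFlagged({ results: [{ flagged: false }] })); // false
```

Whether a flagged message should be dropped, redacted, or merely hidden behind a "reveal" control is an application-level policy decision; this article takes the hide-and-reveal approach.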
The PubNub [documentation](https://www.pubnub.com/docs/serverless/functions/overview#what-function-type-to-use?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) provides more context and detail, but in summary: 'Before Publish or Fire' is a synchronous call that can _alter a message or its payload_.

### Create the PubNub Function

1. Log in to the PubNub [Admin Portal](https://admin.pubnub.com?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) and select the app and keyset of the application you want to moderate.
2. Select 'Functions', located under the 'Build' tab.
3. Select '+ CREATE NEW MODULE' and give the module a name and description.
4. Select '+ CREATE NEW FUNCTION' and give the Function a name.
5. For the event type, select 'Before Publish or Fire'.
6. For the channel name, enter **\*** (this demo will use **\***, but your application might choose to specify only the channels you want to moderate here).

After creating the PubNub Function, you need to provide your Open AI API key as a secret.

1. Select 'MY SECRETS' and create a new key named 'OPENAI\_API\_KEY'.
2. [Generate an Open AI API key](https://platform.openai.com/account/api-keys) and make sure it has access to the moderation API.
3. Provide the generated API key to the PubNub Function secret you just created.

The body of the PubNub Function is as follows:

```js
const xhr = require('xhr');
const vault = require('vault');

export default request => {
  if (request.message && request.message.text) {
    let messageText = request.message.text
    return getOpenaiApiKey().then(apiKey => {
      return openAIModeration(messageText).then(aiResponse => {
        // Append the response to the message
        request.message.openAiModeration = aiResponse;
        // If the message was harmful, you might also choose to report the message here.
        return request.ok();
      })
    })
  }
  return request.ok();
};

let OPENAI_API_KEY = null;

function getOpenaiApiKey() {
  // Use cached key
  if (OPENAI_API_KEY) {
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  }
  // Fetch key from vault
  return vault.get("OPENAI_API_KEY").then(apikey => {
    OPENAI_API_KEY = apikey;
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  });
}

function openAIModeration(messageText) {
  const url = 'https://api.openai.com/v1/moderations';
  const http_options = {
    'method': 'POST',
    'headers': {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${OPENAI_API_KEY}`,
    },
    'body': JSON.stringify({ "input": messageText }),
    timeout: 9500,
    retries: 0
  };
  return xhr.fetch(url, http_options)
    .then((resp) => {
      const body = JSON.parse(resp.body);
      return body;
    })
    .catch((err) => {
      console.log(err);
      return "Open AI Timed out";
    });
}
```

The Function itself is quite simple. For every message received:

- Pass it to the Open AI moderation endpoint
- Append the returned moderation object as a new key on the (JSON) message object

**Save your Function and make sure your module is started.**

### Latency

The PubNub Function you just created will run synchronously every time a message is sent, and that message will not be delivered until the Function has finished executing. Since the Function contains a call to an external API, delivery latency will depend on how quickly the API call to Open AI returns, which is outside PubNub's control and could be quite high. There are several ways to mitigate any degradation of the user experience. Most deployments give the sender immediate feedback that the message has been sent, then rely on read receipts to indicate that the message has been delivered (or flagged).
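The optimistic-send pattern described above can be sketched as a small piece of client-side state handling (all names here are hypothetical, not part of the PubNub SDK):

```javascript
// Hypothetical sketch of the optimistic-send pattern: show the message as
// "sent" immediately, then upgrade its status when the (possibly
// moderation-delayed) delivery acknowledgement arrives.
function createOutbox() {
  const statuses = new Map(); // messageId -> "sent" | "delivered"
  return {
    send(messageId) {
      statuses.set(messageId, "sent"); // immediate local feedback
    },
    acknowledge(messageId) {
      statuses.set(messageId, "delivered"); // delivery receipt arrived
    },
    statusOf(messageId) {
      return statuses.get(messageId);
    }
  };
}

const outbox = createOutbox();
outbox.send("msg-1");
console.log(outbox.statusOf("msg-1")); // "sent"
outbox.acknowledge("msg-1");
console.log(outbox.statusOf("msg-1")); // "delivered"
```

This way, the sender's UI never blocks on the synchronous moderation call; only the receipt indicator reflects the added latency.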
### Updating the client application

Let's look at what would be needed to handle the moderation payload in your application using the [Chat demo](https://www.pubnub.com/demos/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr), a React application that uses the [PubNub Chat SDK](https://www.pubnub.com/docs/chat/overview?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) to showcase most of the features of a typical chat application.

Set up an attribute to track whether or not a potentially harmful message should be displayed:

```js
const [showHarmfulMessage, setShowHarmfulMessage] = useState(false)
```

And add some logic so that a potentially harmful message is not displayed by default, in this case in [message.tsx](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/90447262583c251c983f04f23ffb23adcbbd6d25/chat-sdk-demo-web/app/chat/ui-components/message.tsx):

```js
{(
  !message.content.openAiModeration ||
  !message.content.openAiModeration?.results[0].flagged ||
  showHarmfulMessage) && (message.content.text
)}
{
  !showHarmfulMessage && message.content.openAiModeration?.results[0].flagged &&
  <span>Message contains potentially harmful content
    <span className="text-blue-400 cursor-pointer" onClick={() => {setShowHarmfulMessage(true)}}>(Reveal)
    </span>
  </span>
}
```

![Chat Moderation with OpenAI - Image](https://www.pubnub.com/cdn/3prze68gbwl1/1qmBAFCDiDwwKdp7TLSCaw/516d3b1de50c22784996f1e43a65fdc7/Screenshot_2024-07-01_at_10.59.58.png "Chat Moderation with OpenAI - Image 01")

Note that these changes are not present in the **hosted** version of the [chat demo](https://www.pubnub.com/demos/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr), but the [ReadMe contains full instructions](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/main/README.md) for building and running it yourself from your own keyset.

Wrap-up
-------

And there you have it: a quick and easy (and free) way to add both moderation and sentiment analysis to your application using Open AI. To learn more about integrating Open AI with PubNub, check out these other resources:

- [OpenAI GPT API Integration with Functions](https://www.pubnub.com/blog/openai-gpt-api-integration-with-functions/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr)
- [Build a Chatbot with PubNub and ChatGPT](https://www.pubnub.com/blog/build-a-chatbot-with-pubnub-and-chatgpt-openai/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) (Adding a chatbot to our PubNub showcase)
- [Enhance a Geo App with PubNub & Chat GPT / OpenAI](https://www.pubnub.com/blog/enhance-geo-app-with-pubnub-and-openai-chatgpt/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr)

Feel free to reach out to the DevRel team at [devrel@pubnub.com](mailto:devrel@pubnub.com) or contact our [support](https://support.pubnub.com/hc/en-us?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) team for help with any aspect of your PubNub development.

How can PubNub help you?
========================

This article was originally published on [PubNub.com](https://www.pubnub.com/blog/chat-moderation-with-openai/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr)

Our platform helps developers build, deliver, and manage real-time interactivity for web apps, mobile apps, and IoT devices. The foundation of our platform is the industry's largest and most scalable real-time messaging network. With more than 15 points of presence worldwide, 800 million monthly active users, and 99.999% reliability, you'll never have to worry about outages, concurrency limits, or latency issues caused by traffic spikes.

Check out PubNub
----------------

Take the [Live Tour](https://www.pubnub.com/tour/introduction/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) to understand the essential concepts behind every PubNub-powered application in under 5 minutes.

Get set up
----------

Sign up for a [PubNub account](https://admin.pubnub.com/signup/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) for immediate, free access to PubNub keys.

Get started
-----------

The [PubNub documentation](https://www.pubnub.com/docs?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr) will get you up and running, whatever your use case or [SDK](https://www.pubnub.com/docs?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=fr).
pubnubdevrel
1,910,367
Weekly Watercooler Thread
I kicked this off as "Watercooler Wednesday", but going to change the title because we don't need to...
0
2024-07-03T15:41:37
https://dev.to/ben/weekly-watercooler-thread-110g
watercooler, discuss
I kicked this off as "Watercooler Wednesday", but going to change the title because we don't need to overdo it on the "day of week" theme. 😄 *** This is a general discussion thread about... Whatever. What's new in your life? Hobbies, interests, games, kids, parents, travel, career, whatever. Let's keep this chat light and positive and see if it can become a nice weekly check-in.
ben
1,910,364
Apple Launches Its Privacy-Focused AI: A New Paradigm for Artificial Intelligence
In a bold move that promises to redefine the artificial intelligence landscape, Apple has just...
0
2024-07-03T15:41:17
https://dev.to/wgbn/apple-lanca-sua-ia-com-foco-na-privacidade-um-novo-paradigma-para-a-inteligencia-artificial-5foh
ai, apple, opinion, privacy
In a bold move that promises to redefine the artificial intelligence landscape, Apple has just announced a series of technological advances that place user privacy at the center of its AI innovations. With the launch of Apple Intelligence and Private Cloud Compute (PCC), the Cupertino company is not just releasing new products; it is setting a new standard for the entire tech industry. Or is it?

### Apple Intelligence: Personal, Powerful AI

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4kyrrpgqfti67f9097d.jpg)

Apple Intelligence is described as a "personal intelligence system" that integrates powerful generative models directly into the heart of the iPhone, iPad, and Mac. This approach represents a paradigm shift in how we think about AI on personal devices. Unlike many existing AI solutions that rely heavily on cloud processing, Apple Intelligence leverages the power of Apple hardware to perform complex tasks directly on the device. This not only improves speed and responsiveness but also ensures an unprecedented level of privacy.

Apple Intelligence's features are genuinely impressive:

- **Writing Tools**: The system can help users rewrite, proofread, and summarize text. Imagine having a personal writing assistant always at hand, able to refine your ideas or condense long documents in seconds.
- **Image Playground**: This tool lets users create fun, playful images directly in apps such as Messages, Notes, and Keynote. It is like having a pocket design studio, ready to bring your visual ideas to life.
- **Genmoji**: The ability to create custom emoji for any situation opens a new world of personal expression. Imagine being able to create an emoji that perfectly captures your mood or a one-of-a-kind situation.
- **Siri and App Intents integration**: Apple Intelligence enhances Siri's capabilities, making it more natural and personal. Developers can also take advantage of predefined, pre-trained App Intents to make their apps' actions more discoverable across the system.

These features are not mere increments; they are a complete reinvention of how we interact with our devices. AI is no longer an add-on feature but an integral, ever-present part of the user experience.

### Private Cloud Compute: Redefining Cloud Security

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6kvpve2ageywj6j3pqkm.jpg)

Private Cloud Compute (PCC) is without doubt the most revolutionary aspect of Apple's announcement. This new cloud infrastructure was designed specifically for private AI processing, extending the robust security and privacy guarantees of Apple devices into the cloud. PCC addresses several critical concerns that have plagued cloud AI services:

- **Stateless computation**: PCC ensures that users' personal data is used exclusively to fulfill the user's request and is not retained after processing. Once your request has been served, your data is completely erased from the system. No history, no traces: only the result you asked for.
- **Enforceable guarantees**: PCC's security and privacy guarantees are technically enforceable, not dependent merely on policies or promises. This is achieved through a combination of custom hardware and a highly secure operating system. PCC uses the power of Apple Silicon in custom servers, bringing iPhone security technologies such as the Secure Enclave and Secure Boot to the data center.
- **No privileged runtime access**: Even Apple's site reliability staff cannot bypass PCC's privacy guarantees. This is a radical shift from traditional cloud systems, where administrators usually have broad access for troubleshooting.
- **Non-targetability**: An attacker cannot compromise a specific user's data without attempting a broad attack on the entire PCC system. This is achieved through a sophisticated "target diffusion" scheme that obscures the origin of requests and distributes processing so that no individual node can be singled out for attack.
- **Verifiable transparency**: Apple is taking an unprecedented step by making software images of every production PCC build available for security research. This lets independent researchers verify Apple's security claims and help identify potential vulnerabilities.

The PCC architecture is a masterpiece of security engineering. It combines end-to-end encryption, secure hardware, and innovative software design to create a cloud computing environment that is fundamentally private and secure.

### Implications for the Future of AI

Apple's approach to AI has profound implications for the future of technology:

- **Privacy by default**: Apple is setting a new standard in which privacy is not an add-on but a fundamental part of the AI architecture. This may force other companies to rethink their approaches to data privacy and security.
- **Trust and transparency**: By allowing security researchers to verify its claims, Apple is building trust in a way few tech companies have before. This could usher in a new era of transparency in the industry, with companies more open about their security and privacy practices.
- **Edge computing vs. cloud computing**: Apple Intelligence shows that many AI tasks can run on-device, reducing dependence on cloud services. This not only improves privacy but can also deliver faster, more responsive user experiences.
- **A challenge for competitors**: Other tech companies will now face pressure to match or exceed Apple's privacy guarantees in their own AI services, which could accelerate privacy and security innovation across the industry.
- **A shift in AI development**: Apple's approach may influence how AI models are developed and deployed, pushing toward smaller, more efficient models that run on local devices instead of large cloud-based models.
- **Impact on regulation**: Apple's innovations may shape future AI regulation by establishing new benchmarks for what is technically possible in AI privacy and security.

### Is This Really the Future?

With Apple Intelligence and Private Cloud Compute, Apple is not just launching new products; it is redefining expectations for what AI can and should be. By putting privacy and security at the center of its AI innovations, the company is issuing a challenge to the entire industry. Apple's approach demonstrates that powerful, useful AI is possible without sacrificing privacy. This could be a turning point in how we think about AI and personal data. It also raises important questions: Will other companies follow suit? How will this affect the development of AI models that depend on vast amounts of user data?

It remains to be seen how competitors will respond and whether consumers will value these privacy guarantees enough to influence their choice of products and services. One thing is certain: Apple has just raised the bar significantly for responsible, user-centered AI. As we move into a future increasingly dominated by AI, Apple's approach is a powerful reminder that technological innovation need not come at the expense of personal privacy. In fact, as Apple has shown, privacy can be a catalyst for innovation, leading to more creative, user-centered solutions. Time will tell whether this approach becomes the new industry standard or remains an Apple differentiator. Either way, it is an exciting development that promises to shape the future of AI in ways that could scarcely have been predicted.

#### References

[Introducing Apple's On-Device and Server Foundation Models](https://machinelearning.apple.com/research/introducing-apple-foundation-models?source=post_page-----bfee1aad0f27--------------------------------)

[Private Cloud Compute: A new frontier for AI privacy in the cloud](https://security.apple.com/blog/private-cloud-compute/?source=post_page-----bfee1aad0f27--------------------------------)
wgbn
1,910,363
Startups: Lessons I Have Learned
Over the course of my professional career, I have had the opportunity to work at several startups in...
0
2024-07-03T15:37:40
https://dev.to/douglaspujol/startups-licoes-que-aprendi-1j5f
Over the course of my professional career, I have had the opportunity to work at several startups across a range of industries. In this short article, I share some of the lessons I learned from those experiences.

**1. Start Simple and Build Only What You Need**

The best way to start a new project is to keep it simple and avoid unnecessary complexity. Projects naturally grow more complex over time, so focus on the core features and develop the code based on your product's real needs. Avoid building complex solutions prematurely, as that wastes time and resources. A good architecture should be flexible, economical, and able to adapt to change.

**2. Don't Fall for the Hype**

One of the biggest problems for companies today is the hasty adoption of trendy technologies, whether a new framework, a new styling library, or an architecture popularized by an influencer. Exploring new technologies is worthwhile, but do it with caution. New technologies frequently ship with bugs and uncovered use cases, since they need time to mature. Be critical and aware of the trade-offs of every choice.

**3. Small, Talented Teams Beat Large Mediocre Ones**

If you are a manager, value and retain your talent. Keeping a small, highly qualified team is more efficient than running large, average teams. Many companies believe that more programmers produce better-quality software, which is a misconception. The synergy and efficiency of a small, competent team are unmatched and extremely valuable to a project's success.

**4. Separate the Wheat from the Chaff**

Learn to identify who is genuinely committed to helping you build something extraordinary, as opposed to those who are merely coasting. Value dedicated professionals who are willing to grow with the project.

**5. Take Care of Yourself and Your Family**

Programming is a passion that has transformed my life and the lives of many people. Even so, maintaining a balance between professional and personal life is essential for success. Include daily physical activity, such as the gym or jiu-jitsu, and dedicate quality time to your family. A strong mind and a healthy body are fundamental to achieving any goal.

**6. Culture Is Crucial**

The importance of organizational culture only became clear to me after a few years of experience in the job market. Culture defines the company's future. Invest in creating a positive work environment where values and goals are shared by every member of the team.

**7. The Best Recognition**

The most valuable recognition comes in the form of money, development opportunities, and great friendships. The best companies I have worked for were the ones where I built lasting friendships. These are the parameters I consider important in every project I take part in.
douglaspujol
1,910,362
Chat Moderation with OpenAI
Moderate PubNub chats with PubNub Functions and OpenAI's free Moderation API
0
2024-07-03T15:37:11
https://dev.to/pubnub-de/chat-moderation-mit-openai-jho
Any application that includes [in-app chat](https://www.pubnub.com/solutions/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) needs a way to regulate and moderate the messages users can exchange. Since it is not feasible for human moderators to review all inappropriate content, the moderation system must be automatic. Because users will frequently try to evade moderation, machine learning, generative AI, and large language models (LLMs) [and GPT models such as GPT-3 and GPT-4] are popular approaches to content moderation. Moderation is a complex topic, and PubNub offers several solutions to cover all of our developers' use cases.

- [PubNub Functions](https://www.pubnub.com/docs/serverless/functions/overview?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) can intercept and modify messages before they reach their destination. You can apply custom logic within a Function, including calling an external REST API, so you can use any external service for message moderation. This article uses that approach to integrate with OpenAI.
- PubNub Functions offer [custom integrations](https://www.pubnub.com/integrations/?page=1&sortBy=Most%20recent&utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) that support content moderation and sentiment analysis, including [Lasso Moderation](https://www.pubnub.com/integrations/lasso-moderation/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de), [Tisane](https://www.pubnub.com/integrations/tisane-labs-nlp/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de), [a RegEx-based profanity filter](https://www.pubnub.com/integrations/chat-message-profanity-filter/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de), [Lexalytics](https://www.pubnub.com/integrations/lexalytics/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de), and [Community Sift](https://www.pubnub.com/integrations/communitysift/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de).
- PubNub's BizOps Workspace can [monitor and moderate conversations](https://www.pubnub.com/how-to/monitor-and-moderate-conversations-with-bizops-workspace/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de), including the ability to edit and delete messages.

The OpenAI Moderation Endpoint
------------------------------

This article looks at [OpenAI's Moderation API](https://platform.openai.com/docs/guides/moderation/overview), a REST API that uses artificial intelligence (AI) to determine whether the provided text contains potentially harmful terms. The API is intended to let developers filter or remove harmful content; at the time of writing it is provided **free of charge**, but it only supports English.

The model behind the Moderation API categorizes the provided text as follows (from the [API documentation](https://platform.openai.com/docs/guides/moderation/overview)):

- **Hate:** Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Hateful content aimed at non-protected groups (e.g., chess players) is considered harassment.
- **Hate/threatening:** Hateful content that also includes violence or serious harm toward the targeted group based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
- **Harassment:** Content that expresses, incites, or promotes harassing language toward any target.
- **Harassment/threatening:** Harassment content that also includes violence or serious harm toward any target.
- **Self-harm:** Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
- **Self-harm/intent:** Content in which the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.
- **Self-harm/instructions:** Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.
- **Sexual:** Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
- **Sexual/minors:** Sexual content that includes an individual who is under 18 years old.
- **Violence:** Content that depicts death, violence, or physical injury.
- **Violence/graphic:** Content that depicts death, violence, or physical injury in graphic detail.

The results are returned in a JSON structure like the following (again taken from the API documentation):

```js
{
  "id": "modr-XXXXX",
  "model": "text-moderation-007",
  "results": [
    {
      "flagged": true,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": true,
        "violence": true
      },
      "category_scores": {
        // Out of scope for this article
      }
    }
  ]
}
```

Calling the OpenAI Moderation API from PubNub
---------------------------------------------

**Integrating the Moderation API into any PubNub application is easy with PubNub Functions** if you follow this step-by-step guide. Functions let you capture real-time events on the PubNub platform, such as messages being sent and received; within these Functions you can write custom serverless code to modify, re-route, augment, or filter messages as needed. You must use the "Before Publish or Fire" event type; this Function type is invoked _before_ the message is delivered and must finish executing before the message is released for delivery to its recipients. The [PubNub documentation](https://www.pubnub.com/docs/serverless/functions/overview#what-function-type-to-use?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) provides more background and detail, but in summary: "Before Publish or Fire" is a synchronous call that can _modify a message or its payload_.

### Create the PubNub Function

1. Log in to the [PubNub Admin Portal](https://admin.pubnub.com?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) and select the application and keyset for the app you want to moderate.
2. Select 'Functions', found under the 'Build' tab.
3. Select '+ CREATE NEW MODULE' and give the module a name and description.
4. Select '+ CREATE NEW FUNCTION' and give the Function a name.
5. For the event type, select 'Before Publish or Fire'.
6. For the channel name, enter **\*** (this demo uses **\***, but your application can list only the channels you want to moderate here).

After creating the PubNub Function, you need to provide your OpenAI API key as a secret:

1. Select 'MY SECRETS' and create a new key named 'OPENAI\_API\_KEY'.
2. [Generate an OpenAI API key](https://platform.openai.com/account/api-keys) and make sure the key has access to the Moderation API.
3. Provide the generated API key to the PubNub Function secret you just created.

The body of the PubNub Function looks as follows:

```js
const xhr = require('xhr');
const vault = require('vault');

export default request => {
  if (request.message && request.message.text) {
    let messageText = request.message.text;
    return getOpenaiApiKey().then(apiKey => {
      return openAIModeration(messageText).then(aiResponse => {
        // Append the response to the message
        request.message.openAiModeration = aiResponse;
        // If the message was harmful, you might also choose to report the message here.
        return request.ok();
      });
    });
  }
  return request.ok();
};

let OPENAI_API_KEY = null;

function getOpenaiApiKey() {
  // Use cached key
  if (OPENAI_API_KEY) {
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  }
  // Fetch key from vault
  return vault.get("OPENAI_API_KEY").then(apikey => {
    OPENAI_API_KEY = apikey;
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  });
}

function openAIModeration(messageText) {
  const url = 'https://api.openai.com/v1/moderations';
  const http_options = {
    'method': 'POST',
    'headers': {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${OPENAI_API_KEY}`,
    },
    'body': JSON.stringify({ "input": messageText }),
    timeout: 9500,
    retries: 0
  };
  return xhr.fetch(url, http_options)
    .then((resp) => {
      const body = JSON.parse(resp.body);
      return body;
    })
    .catch((err) => {
      console.log(err);
      return "Open AI Timed out";
    });
}
```

The Function itself is quite simple. For each received message:

- Pass it to the OpenAI moderation function.
- Append the returned moderation object to the message (JSON) object as a new key.

**Save your Function and make sure your module is started.**

### Latency

The PubNub Function you just created runs synchronously every time a message is sent, and the message is not delivered until the Function finishes executing. Because the Function includes a call to an external API, the delivery latency depends on how quickly the OpenAI API call returns, which is outside PubNub's control and can be quite high. There are several ways to mitigate the impact on the user experience. Most implementations give the sender immediate feedback that the message has been sent and then rely on read receipts to indicate that the message was delivered (or reported).
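The Function above appends the full moderation object to each message, but client code often only checks the top-level `flagged` boolean. If you also want to tell users *why* a message was hidden, the per-category booleans can be collected with a small helper. This is a sketch, not part of the demo; the hypothetical `flaggedCategories` helper assumes only the response shape shown earlier in this article:

```js
// Return the names of the categories OpenAI flagged for a message,
// based on the moderation response shape shown above.
function flaggedCategories(moderation) {
  if (!moderation || !moderation.results || !moderation.results[0].flagged) {
    return [];
  }
  const categories = moderation.results[0].categories;
  // Keep only the category names whose boolean is true.
  return Object.keys(categories).filter(name => categories[name]);
}

// Example using a trimmed version of the sample response from the docs:
const sample = {
  results: [{
    flagged: true,
    categories: {
      "sexual": false,
      "hate": false,
      "harassment": false,
      "harassment/threatening": true,
      "violence": true
    }
  }]
};
console.log(flaggedCategories(sample)); // ["harassment/threatening", "violence"]
```

A list like this could feed the "(Reveal)" UI shown later, e.g. to display "hidden for: violence" instead of a generic warning.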
### Updating the Client Application

Let's look at what is needed to handle the moderation payload within your application, using the [Chat Demo](https://www.pubnub.com/demos/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de), a React application that uses the [PubNub Chat SDK](https://www.pubnub.com/docs/chat/overview?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) to demonstrate most of the features of a typical chat application.

Set up an attribute to track whether or not a potentially harmful message should be shown:

```js
const [showHarmfulMessage, setShowHarmfulMessage] = useState(false)
```

And add logic to hide a potentially harmful message by default, in this case in [message.tsx](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/90447262583c251c983f04f23ffb23adcbbd6d25/chat-sdk-demo-web/app/chat/ui-components/message.tsx):

```js
{(
  !message.content.openAiModeration ||
  !message.content.openAiModeration?.results[0].flagged ||
  showHarmfulMessage) && (message.content.text
)}
{
  !showHarmfulMessage && message.content.openAiModeration?.results[0].flagged &&
  <span>Message contains potentially harmful content
    <span className="text-blue-400 cursor-pointer" onClick={() => {setShowHarmfulMessage(true)}}>(Reveal)
    </span>
  </span>
}
```

![Chat Moderation with OpenAI - Image](https://www.pubnub.com/cdn/3prze68gbwl1/1qmBAFCDiDwwKdp7TLSCaw/516d3b1de50c22784996f1e43a65fdc7/Screenshot_2024-07-01_at_10.59.58.png "Chat Moderation with OpenAI - Image 01")

Note that these changes are not included in the **hosted** version of the [Chat Demo](https://www.pubnub.com/demos/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de), but the [ReadMe contains full instructions](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/main/README.md) for building it yourself and running it from your own keyset.

Wrapping Up
-----------

And there you have it: a quick and easy (and free) way to add both moderation and sentiment analysis to your application using OpenAI. To learn more about integrating OpenAI with PubNub, check out these other resources:

- [OpenAI GPT API Integration with Functions](https://www.pubnub.com/blog/openai-gpt-api-integration-with-functions/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de)
- [Build a Chatbot with PubNub and ChatGPT](https://www.pubnub.com/blog/build-a-chatbot-with-pubnub-and-chatgpt-openai/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) (adding a chatbot to our PubNub Showcase)
- [Enhance a Geo App with PubNub & ChatGPT / OpenAI](https://www.pubnub.com/blog/enhance-geo-app-with-pubnub-and-openai-chatgpt/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de)

Reach out to the DevRel team at [devrel@pubnub.com](mailto:devrel@pubnub.com) or contact our [support team](https://support.pubnub.com/hc/en-us?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) if you need help with any aspect of your PubNub development.

How Can PubNub Help You?
========================

This article was originally published on [PubNub.com](https://www.pubnub.com/blog/chat-moderation-with-openai/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de). Our platform helps developers build, deliver, and manage real-time interactivity for web applications, mobile applications, and IoT devices.

The foundation of our platform is the industry's largest and most scalable real-time edge messaging network. With over 15 points of presence worldwide supporting 800 million monthly active users and 99.999% reliability, you never have to worry about outages, concurrency limits, or latency issues caused by traffic spikes.

Experience PubNub
-----------------

Take the [Live Tour](https://www.pubnub.com/tour/introduction/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) to understand the essential concepts behind every PubNub-powered app in under 5 minutes.

Set Up
------

Sign up for a [PubNub account](https://admin.pubnub.com/signup/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) for immediate free access to PubNub keys.

Get Started
-----------

The [PubNub docs](https://www.pubnub.com/docs?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de) will get you up and running, whatever your use case or [SDK](https://www.pubnub.com/docs?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=de).
pubnubdevrel
1,910,296
Starting a new Django project with PostgreSQL and Docker
Initial project setup: $ mkdir client-management $ cd client-management $ python3 -m...
0
2024-07-03T15:36:37
https://dev.to/samuellubliner/starting-a-new-django-project-with-postgressql-and-docker-3hoj
django, python, docker, postgres
## Initial project setup:

```bash
$ mkdir client-management
$ cd client-management
$ python3 -m venv .venv
$ source .venv/bin/activate
(.venv) $ python3 -m pip install django~=5.0
(.venv) $ django-admin startproject django_project .
(.venv) $ python manage.py runserver
```

Visit http://127.0.0.1:8000/ to confirm the successful install and then quit the server.

```bash
(.venv) $ touch requirements.txt
(.venv) $ pip freeze > requirements.txt
```

## Docker

Use Docker to streamline local development with PostgreSQL.

- Read more about security before deployment: <https://docs.docker.com/engine/security/rootless/>

After installing Django, deactivate the virtual environment and set up Docker. View my `Dockerfile`, `docker-compose.yml`, and `.dockerignore` files by visiting my repository on [GitHub here](https://github.com/Samuel-Lubliner/client-management).

```bash
(.venv) $ deactivate
$ touch Dockerfile docker-compose.yml .dockerignore
$ docker-compose up
```

### About Docker

- Docker image: a read-only template with instructions for creating a container
- Supported Python images: <https://hub.docker.com/_/python/>
- `Dockerfile`: defines the custom image
- Docker container: a running instance of a Docker image
- `docker-compose.yml`: additional instructions for the container

### Docker flow:

1. Create a new virtual environment and install Django
2. Create a new Django project within it
3. Add a `Dockerfile` with custom image instructions
4. Add `.dockerignore`
5. Build the image
6. Add `docker-compose.yml`
7. Spin up containers with `docker-compose up`
8. Stop the container:
   - Press `Control + c`
   - Run `docker-compose down`

### Detached mode

- Runs containers in the background; useful for keeping a single command-line tab free. Run `docker-compose up -d`
- Error output won't always be visible in detached mode. See the current output by running `docker-compose logs`

### Docker vs. local commands

Preface traditional commands with `docker-compose exec [service]`. For example:

```bash
$ docker-compose exec web python manage.py migrate
$ docker-compose exec web python manage.py createsuperuser
```

## Psycopg

Psycopg is a database adapter. Start with the binary version for quick installation and update it if the project needs a performance boost. Learn more:

- <https://www.psycopg.org/psycopg3/docs/basic/install.html#binary-installation>
- <https://docs.djangoproject.com/en/5.0/ref/databases/>

### Install Psycopg

First, stop the running Docker container with `docker-compose down`. Docker replaces the virtual environment, and the Docker host replaces the local operating system. Since I am using Docker, I won't install Psycopg locally. Instead, I will just add the `psycopg[binary]` package to the bottom of `requirements.txt`.

## About `docker-compose.yml`

`docker-compose.yml` specifies two separate containers running within Docker:

1. `web` for the Django local server
2. `db` for the PostgreSQL database

Docker containers are ephemeral: information is lost when the container stops running. In `docker-compose.yml`, the `postgres_data` volume mount binds to the local computer so the database data persists.

## Configure PostgreSQL

- Configure the environment to use trust authentication for the database. For a database with many users, be more explicit with permissions.
- Update the `DATABASES` config in the `django_project/settings.py` file to use PostgreSQL.
- Build the new image and start the two containers in detached mode by running `docker-compose up -d --build`.
- Refresh the Django welcome page at <http://127.0.0.1:8000/> to confirm Django has successfully connected to PostgreSQL via Docker.

Remember to run `docker-compose down` to save computer resources when you are finished.

## Up next

Before migrating, I will add a custom user model.
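For reference, the `DATABASES` update described in the "Configure PostgreSQL" section usually looks like the sketch below. The credential values here are the Postgres image defaults and are assumptions, not necessarily what the linked repository uses; the `HOST` must match the `db` service name from `docker-compose.yml`:

```python
# django_project/settings.py (sketch): replace the default SQLite config
# so Django talks to the `db` service defined in docker-compose.yml.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "postgres",      # default database in the postgres image (assumed)
        "USER": "postgres",      # default user; trust auth skips the password check
        "PASSWORD": "postgres",
        "HOST": "db",            # service name from docker-compose.yml
        "PORT": 5432,
    }
}
```

With this in place, `docker-compose up -d --build` rebuilds the image and the welcome page should load against PostgreSQL.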
samuellubliner
1,910,361
Future of Artificial Intelligence
🚀 Embracing the Future of AI: Transforming Tomorrow with Innovation and...
0
2024-07-03T15:34:17
https://codexdindia.blogspot.com/2024/07/future-of-artificial-intelligence.html
ai, abotwrotethis, webdev, javascript
### 🚀 Embracing the Future of AI: Transforming Tomorrow with Innovation and Collaboration

> Know more: https://codexdindia.blogspot.com/2024/07/future-of-artificial-intelligence.html

Artificial Intelligence (AI) isn't just transforming industries—it's revolutionizing our world in ways we once only imagined in sci-fi! As we step into the 21st century, AI isn't just a buzzword; it's our co-pilot into the future, reshaping everything from healthcare to finance with its supercharged brainpower. Let's dive into how AI is not only changing the game but also how we can shape its evolution responsibly!

#### 🌟 Unleashing AI's Potential Across Industries

AI isn't just a tool; it's the secret sauce behind smarter healthcare systems diagnosing diseases faster and more accurately. Imagine personalized treatment plans tailored just for you, all thanks to AI crunching massive amounts of data! Meanwhile, in finance, AI's crunching numbers at lightning speed, predicting market trends, and even automating trades with precision that's reshaping the global economy.

#### 🌍 Shaping Economies and Opportunities

Speaking of economies, AI isn't just boosting productivity; it's creating new opportunities and shaking up traditional job markets. Sure, there's talk about automation, but with it comes a wave of innovation and growth that can't be ignored. It's about balancing efficiency with fairness, ensuring everyone has a shot at the jobs of tomorrow through education and retraining programs.

#### 🤖 Ethical AI: Navigating the Gray Areas

But hold up—AI isn't without its ethical dilemmas! We're talking about privacy concerns, biases in algorithms, and the impact of AI on our everyday lives. It's why we need smart regulations and ethical frameworks to steer AI in the right direction. After all, AI's power should benefit society as a whole, not just a select few!

#### 🎓 Education, Research, and Beyond

Education's getting an AI makeover too!
Think personalized learning experiences that adapt to your style and pace. And in research? AI's turbocharging scientific breakthroughs in medicine, climate science—you name it! Imagine a future where AI isn't just a partner in discovery but a catalyst for tackling humanity's biggest challenges.

#### 🌟 Challenges? Bring 'Em On!

Sure, AI's got hurdles to leap—like making algorithms more transparent and robust. But with advances in quantum computing and cross-disciplinary teamwork, we're gearing up to conquer these challenges and unlock even more mind-blowing possibilities!

#### 🤝 Human-AI Synergy: The Future of Collaboration

The real magic happens when humans and AI team up! It's about combining AI's analytical prowess with human creativity, empathy, and ethical judgment. Together, we can create a future where technology serves our highest ideals and fuels sustainable development for generations to come.

#### 🚀 Ready for Liftoff?

So, buckle up! The future of AI isn't just bright—it's dazzling! From transforming industries to redefining how we learn and innovate, AI's rewriting the rules of what's possible. But remember, it's not just about the tech; it's about how we shape it. Let's embrace AI's potential, tackle its challenges head-on, and build a future where innovation knows no bounds!

#### 🌈 Conclusion: The AI-Powered Tomorrow

As we navigate this AI-powered frontier, let's keep our eyes on the horizon—a future where AI isn't just a tool but a force for good, enhancing human potential and creating a more connected world. Together, we'll shape a tomorrow where the possibilities are limitless and the future is brighter than ever before!

### Join the Journey!

Ready to dive deeper into the world of AI? Stay tuned, keep exploring, and together, let's unlock the secrets of tomorrow's technology today! 🌟🤖
sh20raj
1,910,360
Chat Moderation Using OpenAI
Moderate PubNub chat using PubNub Functions and OpenAI's free moderation API
0
2024-07-03T15:32:10
https://dev.to/pubnub-ko/openaireul-sayonghan-caeting-jungjae-43dg
Any application that includes [in-app chat](https://www.pubnub.com/solutions/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) needs some way to regulate and moderate the messages its users can exchange. Since it is impossible for human moderators to review all inappropriate content, moderation systems need to be automatic. Because users often try to evade moderation, machine learning, generative AI, and large language models (LLMs) \[and GPT models such as GPT-3 and GPT-4\] are popular ways to moderate content.

Moderation is a complex topic, and PubNub offers a variety of solutions to meet every developer's use case:

- [PubNub Functions](https://www.pubnub.com/docs/serverless/functions/overview?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) can intercept and modify messages before they reach their destination. You can apply custom logic within a Function, including calls to external REST APIs, so any external service can be used for message moderation. This article uses that approach to integrate with OpenAI.
- PubNub Functions offer [custom integrations](https://www.pubnub.com/integrations/?page=1&sortBy=Most%20recent&utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) that support content moderation and sentiment analysis, including [Lasso Moderation](https://www.pubnub.com/integrations/lasso-moderation/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko), [Tisane](https://www.pubnub.com/integrations/tisane-labs-nlp/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko), a [RegEx-based profanity filter](https://www.pubnub.com/integrations/chat-message-profanity-filter/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko), [Lexalytics](https://www.pubnub.com/integrations/lexalytics/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko), and [Community Sift](https://www.pubnub.com/integrations/communitysift/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko).
- PubNub's BizOps Workspace lets you [monitor and moderate conversations](https://www.pubnub.com/how-to/monitor-and-moderate-conversations-with-bizops-workspace/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko), including editing and deleting messages.
The Open AI Moderation Endpoint
-------------------------------

This article looks at [OpenAI's Moderation API](https://platform.openai.com/docs/guides/moderation/overview), a REST API that uses artificial intelligence (AI) to determine whether provided text contains potentially harmful terms. The intent of this API is to allow developers to filter or remove harmful content; at the time of writing it only supports English, but it is provided **for free**.

The model behind the Moderation API classifies the provided text as follows (taken from the [API documentation](https://platform.openai.com/docs/guides/moderation/overview)):

- **hate:** Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Hateful content aimed at non-protected groups (e.g., chess players) is harassment.
- **hate/threatening:** Hateful content that also includes violence or serious harm towards the targeted group based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
- **harassment:** Content that expresses, incites, or promotes harassing language towards any target.
- **harassment/threatening:** Harassment content that also includes violence or serious harm towards any target.
- **self-harm:** Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
- **self-harm/intent:** Content where the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.
- **self-harm/instructions:** Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.
- **sexual:** Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
- **sexual/minors:** Sexual content that includes an individual who is under 18 years old.
- **violence:** Content that depicts death, violence, or physical injury.
- **violence/graphic:** Content that depicts death, violence, or physical injury in graphic detail.

Results are provided in a JSON structure as follows (again taken from the API documentation):

```js
{
  "id": "modr-XXXXX",
  "model": "text-moderation-007",
  "results": [
    {
      "flagged": true,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": true,
        "violence": true
      },
      "category_scores": {
        // Out of scope for this article
      }
    }
  ]
}
```

Calling the Open AI Moderation API from PubNub
----------------------------------------------

**You can easily integrate the Moderation API into any PubNub application using PubNub Functions** by following this step-by-step tutorial. Functions let you capture real-time events that happen on the PubNub platform, such as messages being sent and received; you can then write custom serverless code within those Functions to modify, re-route, augment, or filter messages as required. This Function type is invoked _before_ the message is delivered and must finish executing before the message is released for delivery to its recipients.
The PubNub [documentation](https://www.pubnub.com/docs/serverless/functions/overview#what-function-type-to-use?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) has more background and detail, but in summary, 'Before Publish or Fire' is a synchronous call that can _modify the message or its payload_.

### Create the PubNub Function

1. Log in to the PubNub [Admin Portal](https://admin.pubnub.com?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) and select the application and keyset of the app you want to moderate.
2. Select 'Functions', found under the 'Build' tab.
3. Select '+ CREATE NEW MODULE' and give the module a name and description.
4. Select '+ CREATE NEW FUNCTION' and give the Function a name.
5. Select 'Before Publish or Fire' as the event type.
6. Enter **\*** for the channel name (this demo uses **\***, but in your application you could specify only the channels you want to moderate here).

After creating the PubNub Function, you need to provide your Open AI API key as a secret:

1. Select 'MY SECRETS' and create a new key named 'OPENAI\_API\_KEY'.
2. [Generate an Open AI API key](https://platform.openai.com/account/api-keys) and make sure that key has access to the moderation API.
3. Provide the generated API key to the PubNub Function secret you just created.

The body of the PubNub Function is as follows:

```js
const xhr = require('xhr');
const vault = require('vault');

export default request => {
  if (request.message && request.message.text) {
    let messageText = request.message.text
    return getOpenaiApiKey().then(apiKey => {
      return openAIModeration(messageText).then(aiResponse => {
        // Append the response to the message
        request.message.openAiModeration = aiResponse;
        // If the message was harmful, you might also choose to report the message here.
        return request.ok();
      })
    })
  }
  return request.ok();
};

let OPENAI_API_KEY = null;

function getOpenaiApiKey() {
  // Use cached key
  if (OPENAI_API_KEY) {
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  }
  // Fetch key from vault
  return vault.get("OPENAI_API_KEY").then(apikey => {
    OPENAI_API_KEY = apikey;
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  });
}

function openAIModeration(messageText) {
  const url = 'https://api.openai.com/v1/moderations';
  const http_options = {
    'method': 'POST',
    'headers': {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${OPENAI_API_KEY}`,
    },
    'body': JSON.stringify({ "input": messageText }),
    timeout: 9500,
    retries: 0
  };
  return xhr.fetch(url, http_options)
    .then((resp) => {
      const body = JSON.parse(resp.body);
      return body;
    })
    .catch((err) => {
      console.log(err);
      return "Open AI Timed out";
    });
}
```

The Function itself is quite straightforward. For each message it receives, it:

- Passes the message to the Open AI moderation function.
- Adds the returned moderation object to the message (JSON) object as a new key.

**Save the Function and make sure the module is started.**

### Latency

The PubNub Function you just created runs synchronously every time a message is sent, and that message is not delivered until the Function finishes executing. Because the Function contains a call to an external API, delivery latency depends on how quickly the API call to Open AI returns; this is outside PubNub's control and can be quite high. There are several ways to mitigate any degradation in the user experience: most deployments give the sender immediate feedback that a message has been sent and then rely on read receipts to indicate that the message was delivered (or reported).

### Update the client application

To consider what is required to handle the moderation payload within an application, let's use the [Chat demo](https://www.pubnub.com/demos/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko), a React application that uses the [PubNub Chat SDK](https://www.pubnub.com/docs/chat/overview?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) to show most of the features of a typical chat app.
Set up an attribute that tracks whether a potentially harmful message should be shown:

```js
const [showHarmfulMessage, setShowHarmfulMessage] = useState(false)
```

Then add logic, in this case within [message.tsx](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/90447262583c251c983f04f23ffb23adcbbd6d25/chat-sdk-demo-web/app/chat/ui-components/message.tsx), so that potentially harmful messages are not shown by default:

```js
{(
  !message.content.openAiModeration ||
  !message.content.openAiModeration?.results[0].flagged ||
  showHarmfulMessage) && (message.content.text
)}
{
  !showHarmfulMessage && message.content.openAiModeration?.results[0].flagged &&
  <span>Message contains potentially harmful content
    <span className="text-blue-400 cursor-pointer" onClick={() => {setShowHarmfulMessage(true)}}>(Reveal)
    </span>
  </span>
}
```

![Chat Moderation with OpenAI - Image](https://www.pubnub.com/cdn/3prze68gbwl1/1qmBAFCDiDwwKdp7TLSCaw/516d3b1de50c22784996f1e43a65fdc7/Screenshot_2024-07-01_at_10.59.58.png "Chat Moderation with OpenAI - Image 01")

Note that these changes are not present in the **hosted** version of the [Chat demo](https://www.pubnub.com/demos/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko), but the user guide contains [full instructions](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/main/README.md) for building it and running it directly on your own keyset.

Wrapping Up
-----------

You now have a quick, easy (and free) way to add both moderation and sentiment analysis to your application using Open AI.
To learn more about integrating Open AI with PubNub, check out these other resources:

- [Integrating the OpenAI GPT API with Functions](https://www.pubnub.com/blog/openai-gpt-api-integration-with-functions/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko)
- [Build a chatbot with PubNub and ChatGPT](https://www.pubnub.com/blog/build-a-chatbot-with-pubnub-and-chatgpt-openai/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) (adding a chatbot to the PubNub showcase)
- [Enhance a geo app with PubNub & Chat GPT / OpenAI](https://www.pubnub.com/blog/enhance-geo-app-with-pubnub-and-openai-chatgpt/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko)

Feel free to reach out to the DevRel team at [devrel@pubnub.com](mailto:devrel@pubnub.com) or contact our [support team](https://support.pubnub.com/hc/en-us?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) for help with any aspect of your PubNub development.

How can PubNub help you?
========================

This article was originally published on [PubNub.com](https://www.pubnub.com/blog/chat-moderation-with-openai/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko). Our platform helps developers build, deliver, and manage real-time interactivity for web apps, mobile apps, and IoT devices.

The foundation of our platform is the industry's largest and most scalable real-time edge messaging network. With over 15 points of presence worldwide supporting 800 million monthly active users and 99.999% reliability, you'll never have to worry about outages, concurrency limits, or any latency issues caused by traffic spikes.

Experience PubNub
-----------------

Check out the [live tour](https://www.pubnub.com/tour/introduction/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) to understand the essential concepts behind every PubNub-powered app in under 5 minutes.

Set up
------

Sign up for a [PubNub account](https://admin.pubnub.com/signup/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) for immediate, free access to PubNub keys.

Get started
-----------

The [PubNub docs](https://www.pubnub.com/docs?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko) will get you up and running, regardless of your use case or [SDK](https://www.pubnub.com/docs?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ko).
pubnubdevrel
1,910,359
Crypt::OpenSSL::PKCS12 1.91 released to CPAN
a long overdue release
0
2024-07-03T15:30:24
https://dev.to/jonasbn/cryptopensslpkcs12-191-released-to-cpan-2in2
opensource, perl, release, openssl
---
title: Crypt::OpenSSL::PKCS12 1.91 released to CPAN
published: true
description: a long overdue release
tags: opensource, perl, release, openssl
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-07-03 15:10 +0000
---

@timlegge, a prolific Perl and open source contributor, is keeping me busy with PRs to the OpenSSL-related Perl distributions for which I am the current maintainer:

- [Crypt::OpenSSL::PKCS12](https://metacpan.org/pod/Crypt::OpenSSL::PKCS12)
- [Crypt::OpenSSL::X509](https://metacpan.org/pod/Crypt::OpenSSL::X509)

I have been a bit slow in getting the PRs reviewed and the releases shipped. I did, however, manage to get two trial releases shipped:

- `1.10`
- `1.11`

The last official release was 1.9, back in November 2021. The first trial demonstrated that the code was working, but CPAN testers reported some failing tests; luckily @timlegge recognized the issues and we got them addressed.

Perl's version number support is not always easy to work with: `1.9` can actually be interpreted as `1.90`, meaning that `1.10` and `1.11` are actually *lower* version numbers. Luckily these were only trials, so nothing was broken as such. The release following `1.9` (`1.90`) is therefore `1.91`, and as I am writing this we are gathering momentum for `1.92`.

The two distributions are not owned by me; I am just the maintainer. @timlegge and others are working on collecting OpenSSL-related Perl distributions in [a GitHub organisation](https://github.com/perl-openssl), something I would very much like to support, so currently I am working on selling the idea to the original author.
Unfortunately, recent events in open source like "xz" and "polyfill.js" do not make the environment feel as safe as it should be, so it may take some time before an actual transition happens. But I am still hopeful, and I believe that a stronger organisation around these distributions will make it easier to get them maintained and to ship releases more frequently.

## Change Log

## 1.91 Mon Jun 24 22:00:13 CEST 2024

Due to a mistake with the release numbering, the two trial releases should not have been `1.10` and `1.11`, but `1.91` and `1.92` respectively. So there will be a jump from 1.9 to 1.91, since 1.9 is equivalent to 1.90.

This release contains changes tested in the two previous trial releases. In addition, the following changes have been adopted:

- PR: [#47](https://github.com/dsully/perl-crypt-openssl-pkcs12/pull/47) from @timlegge, improving support for building with OpenSSL located in non-standard locations

## 1.11 (TRIAL) Wed Jun 5 20:21:54 CEST 2024

- Improved support for older versions of OpenSSL via PR [#46](https://github.com/dsully/perl-crypt-openssl-pkcs12/pull/46) from @timlegge. This should address reports of failing tests from CPAN testers; see also: [#45](https://github.com/dsully/perl-crypt-openssl-pkcs12/issues/45)
- Minor cleanup to repository files used for various tools

## 1.10 (TRIAL) Fri Apr 26 16:09:56 CEST 2024

- Improved support for OpenSSL 3.0 via RT: [42](https://github.com/dsully/perl-crypt-openssl-pkcs12/pull/42) from @timlegge
- Distribution tooling changed from `Module::Install` to `Dist::Zilla`
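The numbering pitfall above is easy to see if you compare the version strings numerically, which is effectively what Perl's plain decimal versions do. A quick illustration (sketched in Python rather than Perl, purely to show the ordering):

```python
# Decimal version numbers compare as numbers, so "1.10" is the number 1.1,
# which sorts *before* "1.9" -- and "1.9" is the same number as "1.90".
versions = ["1.9", "1.10", "1.11", "1.91"]

numeric_order = sorted(versions, key=float)
print(numeric_order)  # ['1.10', '1.11', '1.9', '1.91']

print(float("1.9") == float("1.90"))  # True: the two are the same version
```

This is why the trial releases had to jump from the mistakenly numbered `1.10`/`1.11` to `1.91`.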
jonasbn
1,910,357
Day 3 of 100 Days of Code
Wed, Jul 3, 2024 Little distraction with mom losing email access yesterday, took a bit of effort,...
0
2024-07-03T15:27:40
https://dev.to/jacobsternx/day-3-of-100-days-of-code-28p9
100daysofcode, beginners, webdev, javascript
Wed, Jul 3, 2024

A little distraction with mom losing email access yesterday; it took a bit of effort, but I was able to resolve it with the best possible solution.

Currently working on CSS Selectors in the CSS Fundamentals lesson, which I've seen before, so I want to see how far I can move today. My attainable aim this week is to complete Web Dev Foundations:

- Fundamentals of HTML (done)
- Fundamentals of CSS (in progress)
- Developing Websites Locally
- Deploying Websites
- Improved Styling with CSS
- Making a Website Responsive

Then, over the weekend, review and complete the two Web Dev Foundations assessments (first: question/answer, second: coding), this being a 100 Days of Code challenge, so coding weekends, haha.

Next week is going to be oh so much more fun(!), starting with JavaScript, which I've been anticipating for longer than I can begin to describe.
jacobsternx
1,910,356
Chat Moderation with OpenAI
Moderate PubNub chat using PubNub Functions and OpenAI's free moderation API
0
2024-07-03T15:27:08
https://dev.to/pubnub-jp/openainiyorutiyatutomoderesiyon-5f9o
Any application that includes [in-app chat](https://www.pubnub.com/solutions/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) needs some way to regulate and moderate the messages its users can exchange. Since it is impossible for human moderators to review all inappropriate content, moderation systems need to be automatic. Because users often try to evade moderation, machine learning, generative AI, and large language models (LLMs) \[and GPT models such as GPT-3 and GPT-4\] are popular ways to moderate content.

Moderation is a complex topic, and PubNub offers a variety of solutions to meet every developer's use case:

- [PubNub Functions](https://www.pubnub.com/docs/serverless/functions/overview?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) can intercept and modify messages before they reach their destination. You can apply custom logic within a Function, including calls to external REST APIs, so any external service can be used for message moderation. This article uses that approach to integrate with OpenAI.
- PubNub Functions offer [custom integrations](https://www.pubnub.com/integrations/?page=1&sortBy=Most%20recent&utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) that support content moderation and sentiment analysis, including [Lasso Moderation](https://www.pubnub.com/integrations/lasso-moderation/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja), [Tisane](https://www.pubnub.com/integrations/tisane-labs-nlp/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja), a [RegEx-based profanity filter](https://www.pubnub.com/integrations/chat-message-profanity-filter/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja), [Lexalytics](https://www.pubnub.com/integrations/lexalytics/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja), and [Community Sift](https://www.pubnub.com/integrations/communitysift/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja).
- PubNub's BizOps Workspace lets you [monitor and moderate conversations](https://www.pubnub.com/how-to/monitor-and-moderate-conversations-with-bizops-workspace/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja), including editing and deleting messages.

The Open AI Moderation Endpoint
-------------------------------

This article looks at [OpenAI's Moderation API](https://platform.openai.com/docs/guides/moderation/overview), a REST API that uses artificial intelligence (AI) to determine whether provided text contains harmful terms. The intent of this API is to allow developers to filter or remove harmful content; at the time of writing it only supports English, but it is provided **for free**.

The model behind the Moderation API classifies the provided text as follows (taken from the [API documentation](https://platform.openai.com/docs/guides/moderation/overview)):

- **hate:** Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Hateful content aimed at non-protected groups (e.g., chess players) is harassment.
- **hate/threatening:** Hateful content that also includes violence or serious harm towards the targeted group based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
- **harassment:** Content that expresses, incites, or promotes harassing language towards any target.
- **harassment/threatening:** Harassment content that also includes violence or serious harm towards any target.
- **self-harm:** Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
- **self-harm/intent:** Content where the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.
- **self-harm/instructions:** Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.
- **sexual:** Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
- **sexual/minors:** Sexual content that includes an individual who is under 18 years old.
- **violence:** Content that depicts death, violence, or physical injury.
- **violence/graphic:** Content that depicts death, violence, or physical injury in graphic detail.

Results are provided in a JSON structure as follows (again taken from the API documentation):

```js
{
  "id": "modr-XXXXX",
  "model": "text-moderation-007",
  "results": [
    {
      "flagged": true,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": true,
        "violence": true
      },
      "category_scores": {
        // Out of scope for this article
      }
    }
  ]
}
```

Calling the Open AI Moderation API from PubNub
----------------------------------------------

**You can easily integrate the Moderation API into any PubNub application using PubNub Functions** by following this step-by-step tutorial. Functions let you capture real-time events that happen on the PubNub platform, such as messages being sent and received; you can then write custom serverless code within those Functions to modify, re-route, augment, or filter messages as required. This Function type is invoked _before_ the message is delivered and must finish executing before the message is released for delivery to its recipients.
The PubNub [documentation](https://www.pubnub.com/docs/serverless/functions/overview#what-function-type-to-use?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) has more background and detail, but in summary, 'Before Publish or Fire' is a synchronous call that can _modify the message or its payload_.

### Create the PubNub Function

1. Log in to the PubNub [Admin Portal](https://admin.pubnub.com?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) and select the application and keyset of the app you want to moderate.
2. Select 'Functions', found under the 'Build' tab.
3. Select 'CREATE NEW MODULE' and give the module a name and description.
4. Select 'CREATE NEW FUNCTION' and give the Function a name.
5. Select 'Before Publish or Fire' as the event type.
6. Enter **\*** for the channel name (this demo uses **\***, but in your application you could specify only the channels you want to moderate here).

After creating the PubNub Function, you need to provide your Open AI API key as a secret:

1. Select 'MY SECRETS' and create a new key named 'OPENAI\_API\_KEY'.
2. [Generate an Open AI API key](https://platform.openai.com/account/api-keys) and make sure that key has access to the moderation API.
3. Provide the generated API key to the PubNub Function secret you just created.

The body of the PubNub Function is as follows:

```js
const xhr = require('xhr');
const vault = require('vault');

export default request => {
  if (request.message && request.message.text) {
    let messageText = request.message.text
    return getOpenaiApiKey().then(apiKey => {
      return openAIModeration(messageText).then(aiResponse => {
        // Append the response to the message
        request.message.openAiModeration = aiResponse;
        // If the message was harmful, you might also choose to report the message here.
        return request.ok();
      })
    })
  }
  return request.ok();
};

let OPENAI_API_KEY = null;

function getOpenaiApiKey() {
  // Use cached key
  if (OPENAI_API_KEY) {
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  }
  // Fetch key from vault
  return vault.get("OPENAI_API_KEY").then(apikey => {
    OPENAI_API_KEY = apikey;
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  });
}

function openAIModeration(messageText) {
  const url = 'https://api.openai.com/v1/moderations';
  const http_options = {
    'method': 'POST',
    'headers': {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${OPENAI_API_KEY}`,
    },
    'body': JSON.stringify({ "input": messageText }),
    timeout: 9500,
    retries: 0
  };
  return xhr.fetch(url, http_options)
    .then((resp) => {
      const body = JSON.parse(resp.body);
      return body;
    })
    .catch((err) => {
      console.log(err);
      return "Open AI Timed out";
    });
}
```

The Function itself is quite straightforward. For each message it receives, it:

- Passes the message to the Open AI moderation function.
- Adds the returned moderation object to the message (JSON) object as a new key.

**Save the Function and make sure the module is started.**

### Latency

The PubNub Function you just created runs synchronously every time a message is sent, and that message is not delivered until the Function finishes executing. Because the Function contains a call to an external API, delivery latency depends on how quickly the API call to Open AI returns. There are several ways to mitigate any degradation in the user experience: most deployments give the sender immediate feedback that a message has been sent and then rely on read receipts to indicate that the message was delivered (or reported).

### Update the client application

To consider what is required to handle the moderation payload within an application, let's use the [Chat demo](https://www.pubnub.com/demos/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja), a React application that uses the [PubNub Chat SDK](https://www.pubnub.com/docs/chat/overview?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) to show most of the features of a typical chat app.

Set up an attribute that tracks whether a potentially harmful message should be shown:

```js
const [showHarmfulMessage, setShowHarmfulMessage] = useState(false)
```

Then add logic, in this case within [message.tsx](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/90447262583c251c983f04f23ffb23adcbbd6d25/chat-sdk-demo-web/app/chat/ui-components/message.tsx), so that potentially harmful messages are not shown by default:

```js
{(
  !message.content.openAiModeration ||
  !message.content.openAiModeration?.results[0].flagged ||
  showHarmfulMessage) && (message.content.text
)}
{
  !showHarmfulMessage && message.content.openAiModeration?.results[0].flagged &&
  <span>Message contains potentially harmful content
    <span className="text-blue-400 cursor-pointer" onClick={() => {setShowHarmfulMessage(true)}}>(Reveal)
    </span>
  </span>
}
```

![Chat Moderation with OpenAI - Image](https://www.pubnub.com/cdn/3prze68gbwl1/1qmBAFCDiDwwKdp7TLSCaw/516d3b1de50c22784996f1e43a65fdc7/Screenshot_2024-07-01_at_10.59.58.png "Chat Moderation with OpenAI - Image 01")

Note that these changes are not present in the **hosted** version of the [Chat demo](https://www.pubnub.com/demos/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja), but the [ReadMe](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/main/README.md) contains [full instructions](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/main/README.md) for building the Chat demo and running it from your own keyset.

Wrapping Up
-----------

You now have a quick, easy (and free) way to add both moderation and sentiment analysis to your application using Open AI.

To learn more about integrating Open AI with PubNub, check out these other resources:

- [Integrating the OpenAI GPT API with Functions](https://www.pubnub.com/blog/openai-gpt-api-integration-with-functions/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja)
- [Build a chatbot with PubNub and ChatGPT](https://www.pubnub.com/blog/build-a-chatbot-with-pubnub-and-chatgpt-openai/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) (adding a chatbot to the PubNub showcase)
- [Enhance a geo app with PubNub & Chat GPT / OpenAI](https://www.pubnub.com/blog/enhance-geo-app-with-pubnub-and-openai-chatgpt/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja)

Feel free to reach out to the DevRel team at [devrel@pubnub.com](mailto:devrel@pubnub.com) or contact our [support team](https://support.pubnub.com/hc/en-us?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) for help with any aspect of your PubNub development.

How can PubNub help you?
========================

This article was originally published on [PubNub.com](https://www.pubnub.com/blog/chat-moderation-with-openai/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja). Our platform helps developers build, deliver, and manage real-time interactivity for web apps, mobile apps, and IoT devices.

The foundation of our platform is the industry's largest and most scalable real-time edge messaging network. With over 15 points of presence worldwide supporting 800 million monthly active users and 99.999% reliability, you'll never have to worry about outages, concurrency limits, or any latency issues caused by traffic spikes.

Experience PubNub
-----------------

Check out the [live tour](https://www.pubnub.com/tour/introduction/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) to understand the essential concepts behind every PubNub-powered app in under 5 minutes.

Set up
------

Sign up for a [PubNub account](https://admin.pubnub.com/signup/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) for immediate, free access to PubNub keys.

Get started
-----------

The [PubNub docs](https://www.pubnub.com/docs?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja) will get you up and running, regardless of your use case or [SDK](https://www.pubnub.com/docs?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=ja).
pubnubdevrel
1,910,353
Multiplayer in Rust using Renet and Bevy
Here at Renuo, we specialize in web technologies such as Ruby on Rails, React, Angular, and Spring....
0
2024-07-03T15:24:33
https://dev.to/cuddlybunion341/multiplayer-in-rust-using-renet-and-bevy-17p6
rust, gamedev
Here at Renuo, we specialize in web technologies such as Ruby on Rails, React, Angular, and Spring. One of our core company values is continuous learning: we love exploring new technologies even beyond our usual scope of expertise. Inspired by Michael's Unity Powerday, I decided to delve into how multiplayer games operate. As a team, we held a competition to implement FPS (First-Person Shooter) games using C# boilerplate. Initially, the sheer amount of boilerplate required felt overwhelming. Beyond that, I wanted to understand client/server data synchronization at a lower level of abstraction. My recent experiences with [Rust](https://rustlang.org/) and [Bevy](https://bevyengine.org/) convinced me to write this blog article to share my newfound learnings about game development.

## Why Choose Rust

### Advantages

Rust is a statically typed, memory-safe, multi-paradigm programming language that matches the performance of C. Due to its safety, concurrency features, and modern syntax, it has gained popularity among developers in recent years. Some notable software written in Rust includes:

* **Rapier3d**: A performant physics engine often used with ThreeJS.
* **Ripgrep**: A performant command-line search tool.
* **Alacritty**: A performant, minimalistic cross-platform terminal emulator.
* **Warp**: A performant, modern terminal IDE.
* **Tauri**: A performant and lightweight alternative to ElectronJS.
* **Amethyst**: A performant tiling window manager for MacOS.
* **Concordium Blockchain**: A performant and secure blockchain technology.

> I mean, there is no such thing as a perfect programming language. Rust is merely a statically typed, low-level, multi-paradigm, perfect programming language.

According to this tongue-in-cheek [YouTube interview](https://www.youtube.com/watch?v=TGfQu0bQTKc&t=95s) by [Programmers Are Also Human](https://www.youtube.com/@programmersarealsohuman5909), Rust is a perfect programming language.
![Ferris the Rustacean](https://user-images.githubusercontent.com/8974888/231858967-7c37bf1e-335b-4f5a-9760-da97be9f54bb.png)

## Picking a game engine

> There are currently 5 games written in Rust. And 50 game engines.
> Interview with a Senior Rust Developer - [2:52](https://www.youtube.com/watch?v=TGfQu0bQTKc&t=168s)

There are too many game engines available for Rust. An excellent resource is [Are We Game Yet](https://arewegameyet.rs/ecosystem/engines/). I also recommend [this article by GeeksforGeeks](https://www.geeksforgeeks.org/rust-game-engines/), which makes picking the optimal engine easier.

### The Difference Between Bevy and Other Engines

While big game engines like Godot, Unity, and Unreal Engine come with graphical editors, Bevy focuses on providing a simple yet powerful, multithreaded system to manage game state with minimal code.

## Understanding ECS

The [ECS](https://en.wikipedia.org/wiki/Entity_component_system) (Entity Component System) is a software pattern that emphasizes a modular design. It is commonly utilized in game and game engine development. This approach separates the data and behaviour of game entities into components, making it easier to manage and organize complex systems.

### Components of ECS

1. **Entities:** Unique identifiers of a group of components (a lightweight integer-based ID in Bevy).
2. **Components:** Modular data pieces that represent specific entity attributes (a struct that derives the `Component` macro in Bevy).
3. **Systems:** Logic that operates on entities and their components (an ordinary function in Bevy; shared state lives in structs that derive the `Resource` macro).

### Systems in Bevy

Systems in Bevy are functions that take various parameters such as queries, EventReaders, assets, and resources and apply logic to them. One powerful feature of Bevy systems is the Query interface. It allows you to fetch specific data for entities in your project. Note that the `single_mut()` function will panic if there is not exactly one matching entity.
Multiple queries are possible in a single system as long as their mutable access does not overlap. Below is an example where the `MyPlayer` component doesn't contain any data but is used to denote that the entity belongs to the client player.

```rust
pub fn update_player_movement_system(
    mut keyboard_events: EventReader<KeyboardInput>,
    mut query: Query<(&mut Transform, &MyPlayer)>,
) {
    let (mut transform, _) = query.single_mut();

    for event in keyboard_events.read() {
        let mut delta_position = Vec3::new(0.0, 0.0, 0.0);

        match event.key_code {
            KeyCode::KeyW => delta_position.z += 0.1,
            KeyCode::KeyS => delta_position.z -= 0.1,
            KeyCode::KeyA => delta_position.x -= 0.1,
            KeyCode::KeyD => delta_position.x += 0.1,
            _ => {}
        }

        let new_position = transform.translation + delta_position;
        transform.translation = new_position;
    }
}
```

The example above has a flaw: the player position is updated by a fixed step per event. Instead of a fixed-step update, consider scaling the movement by the time passed since the last frame. This ensures a consistent movement speed regardless of the frame rate.

## Picking Networking Libraries

After choosing Bevy as our game engine, we need to decide on networking libraries. Here are a few options:

* Matchbox
* Naia
* Renet
* Bootleg_networking
* Spicy_networking

I chose **Renet** because of its popularity and my good experiences with its boilerplate. Additionally, I included **Serde** for efficient binary message encoding.

## Sketching the scene

Before coding, let's sketch a simple scene:

- **Camera:** Renders the scene.
- **Plane:** Represents the floor.
- **Green Cube:** Represents the player.
- **Red Cubes:** Represent other players.

Attributes to synchronize:

* **Position:** `Vec3`

Input method:

- **Keyboard (WASD):** Used to translate the player.

### Handling Player inputs

There are three main ways to handle player inputs:

1. **Client-side:** The client handles inputs, moves the player, and sends the position to the server.
2. **Server-side:** The client sends input data to the server, and the server responds with the position.
3. **Hybrid:** The client handles inputs and shares them with the server, which then responds with position synchronization.

The client-side approach can reduce latency but is less secure. The server-side approach is more secure but adds server load. The hybrid approach offers a balance but is more complex.

## Planning

### Client

| Type | Name | Description |
| --------- | --------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| Component | PlayerEntity(ClientId) | Represents an enemy player entity. |
| Component | MyPlayer | Marks the current player entity. |
| Event | PlayerSpawnEvent(ClientId) | Emitted when a player joins. Adds a player object to the scene. |
| Event | PlayerDespawnEvent(ClientId) | Emitted when a player leaves. Removes a player object from the scene. |
| Event | PlayerMoveEvent(ClientId, Vec3) | Emitted by the player controller when a player moves. |
| Event | LobbySyncEvent(HashMap<ClientId, PlayerAttributes>) | Emitted when the client receives sync messages from the server. Updates other player positions using their ID and position. |
| System | send_message_system | Shares MyPlayer position data with the server. |
| System | receive_message_system | Processes messages received from the server. |
| System | update_player_movement_system | Updates player position from keyboard input. |
| System | setup_system | Sets up the scene with a camera, a ground plane, and a mesh for the current player. |
| System | handle_player_spawn_event_system | Adds enemy players to the scene once they join. |
| System | handle_lobby_sync_event_system | Updates enemy player positions and potentially spawns missed players into the scene. |

### Server

| Type | Name | Description |
| -------- | ------------------------------------------------ | ------------------------------------------------------------------------------------------------------------- |
| Resource | PlayerLobby(HashMap<ClientId, PlayerAttributes>) | Holds attributes of all players currently in the game. Used to synchronize these attributes with the clients. |
| System | send_message_system | Broadcasts player positions to keep enemy player positions in clients up-to-date. |
| System | receive_message_system | Updates player lobby position based on messages received from the RenetClient. |
| System | handle_events_system | Handles events such as ClientConnected and ClientDisconnected from the Bevy Renet plugin. |

## Deciding on a project structure

I separated the ECS components into specific modules to structure the Bevy project and used two entry points: one for the client and one for the server. Shared code, such as structures for Client-Server communication, can be placed in a global `lib` module.

```
src
├── client
│   ├── components.rs
│   ├── events.rs
│   ├── main.rs
│   ├── resources.rs
│   └── systems.rs
├── lib.rs
└── server
    ├── main.rs
    ├── resources.rs
    └── systems.rs
```

Defining multiple entry points is as simple as adding this to the `Cargo.toml` file:

```
[[bin]]
name = "server"
path = "src/server/main.rs"

[[bin]]
name = "client"
path = "src/client/main.rs"
```

Afterwards, the binaries can be run with the `--bin` argument:

```
cargo run --bin server
cargo run --bin client
```

## Setting up Boilerplate

To integrate `bevy_renet` into the Bevy project, I followed the [Bevy Renet documentation](https://github.com/lucaspoffo/renet/blob/master/bevy_renet/README.md). In my setup, I used these two default channels:

- **Unreliable:** Used for sending and receiving messages for player attribute synchronization (we don't care about every state change; we can pick the last one).
- **ReliableOrdered:** Used for sending and receiving messages for player actions such as joining and leaving.

## Synchronising player positions

Here's an example of sending player attributes from the client to the server:

```rust
pub fn send_message_system(mut client: ResMut<RenetClient>, query: Query<(&MyPlayer, &Transform)>) {
    let (_, transform) = query.single();

    let player_sync = PlayerAttributes {
        position: transform.translation.into(),
    };

    let message = bincode::serialize(&player_sync).unwrap();
    client.send_message(DefaultChannel::Unreliable, message);
}
```

Handling messages from the client on the server:

```rust
pub fn receive_message_system(mut server: ResMut<RenetServer>, mut player_lobby: ResMut<PlayerLobby>) {
    for client_id in server.clients_id() {
        let message = server.receive_message(client_id, DefaultChannel::Unreliable);

        if let Some(message) = message {
            let player: PlayerAttributes = bincode::deserialize(&message).unwrap();
            player_lobby.0.insert(client_id, player);
        }
    }
}
```

Sending attributes of all players back to the client:

```rust
pub fn send_message_system(mut server: ResMut<RenetServer>, player_lobby: Res<PlayerLobby>) {
    let channel = DefaultChannel::Unreliable;
    let lobby = player_lobby.0.clone();
    let event = multiplayer_demo::ServerMessage::LobbySync(lobby);
    let message = bincode::serialize(&event).unwrap();

    print_lobby(&player_lobby);
    server.broadcast_message(channel, message);
}
```

Synchronizing the client scene with the player attributes from the server:

```rust
pub fn handle_lobby_sync_event_system(
    mut spawn_events: EventWriter<PlayerSpawnEvent>,
    mut sync_events: EventReader<LobbySyncEvent>,
    mut query: Query<(&PlayerEntity, &mut Transform)>,
    my_client_id: Res<MyClientId>,
) {
    let event_option = sync_events.read().last();

    if event_option.is_none() {
        return;
    }

    let event = event_option.unwrap();

    for (client_id, player_sync) in event.0.iter() {
        if *client_id == my_client_id.0 {
            continue;
        }

        let mut found = false;

        for (player_entity, mut transform) in query.iter_mut() {
            if *client_id == player_entity.0 {
                let new_position = player_sync.position;
                transform.translation = new_position.into();
                found = true;
            }
        }

        if !found {
            info!("Spawning player {}: {:?}", client_id, player_sync.position);
            spawn_events.send(PlayerSpawnEvent(*client_id));
        }
    }
}
```

## Conclusion

The multiplayer demo project demonstrates the intricate planning and attention to detail needed to synchronize player attributes between the client and server. This showcases the complexity of creating a seamless multiplayer experience at a lower level. For more detailed code, visit the MIT-licensed [GitHub repository](https://github.com/CuddlyBunion341/bevy-multiplayer).
cuddlybunion341
1,910,352
The Ultimate Guide to Shoulder Bags in Pakistan: Style, Functionality, and Where to Buy
Shoulder bags are an essential accessory, offering both style and functionality. In Pakistan, the...
0
2024-07-03T15:23:33
https://dev.to/aodour-pk/the-ultimate-guide-to-shoulder-bags-in-pakistan-style-functionality-and-where-to-buy-1ka1
Shoulder bags are an essential accessory, offering both style and functionality. In Pakistan, the demand for shoulder bags has been rising due to their versatility and practicality. Whether you're a student, a professional, or a traveler, a good [shoulder bag](https://www.aodour.pk/shop/women-bags/shoulder-bags) can make a significant difference in your daily life. In this post, we'll explore the best shoulder bags available in Pakistan, what to look for when buying one, and where to find the best deals.
aodour-pk
1,910,322
Difference between JavaScript events and JavaScript ES6 event handlers
**HOW TO MAKE A JAVASCRIPT EVENT **To create an event with javascript, we can add attributes with the...
0
2024-07-03T14:52:13
https://dev.to/code_guruva_204d4e19ed643/difference-between-javascript-events-and-event-handlers-2704
**HOW TO MAKE A JAVASCRIPT EVENT**

To create an event with JavaScript, we can add attributes with the event names above to the HTML element that we want to give an event to. **JavaScript events** can be defined as something the user does in a website or browser. This can be a multitude of different things, such as clicking a button, closing the browser, hovering over an element in the browser, etc.

**onChange Event:**

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Document</title>
  <script>
    function msg(x) {
      document.bgColor = x;
    }
  </script>
</head>
<body>
  <select onchange="msg(this.value)">
    <option value="">Select Color</option>
    <option value="Red">Red</option>
    <option value="green">Green</option>
    <option value="blue">Blue</option>
  </select>
</body>
</html>
```

**onChange Event with DOM:**

```html
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
</head>
<body>
  <select id="selCourse" onchange="msg()">
    <option>React JS</option>
    <option>JavaScript</option>
    <option>Node JS</option>
    <option>Angular</option>
  </select>
  <script>
    function msg() {
      // alert("hi..")
      var txt = document.getElementById('selCourse').value;
      alert(txt);
    }
  </script>
</body>
</html>
```

**onSubmit Event:**

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <title>Document</title>
  <script>
    function msg() {
      alert("form submitted");
    }
  </script>
</head>
<body>
  <form onsubmit="msg()">
    <button> Submit </button>
  </form>
</body>
</html>
```

**onSubmit Event with DOM:**

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
</head>
<body>
  <form onsubmit="msg(event)" id="form">
    <input type="text" id="username">
    <input type="password" id="password">
    <button>submit</button>
  </form>
  <script>
    function msg(e) {
      e.preventDefault();
      alert("form submitted..");
      var username = document.getElementById('username').value;
      var pass = document.getElementById('password').value;
      console.log(username);
      console.log(pass);
    }
  </script>
</body>
</html>
```

**Adding an Event Listener**

The addEventListener() method attaches an event handler to an element without overwriting existing event handlers:

- You can add many event handlers to one element, including several of the same type (e.g., two "click" handlers).
- You can add event listeners to any DOM object, not only HTML elements (e.g., the window object).
- The addEventListener() method makes it easier to control how the event reacts to bubbling.
- The JavaScript is separated from the HTML markup, for better readability, and you can add event listeners even when you do not control the HTML markup.
- You can easily remove an event listener by using the removeEventListener() method.

The first parameter is the type of the event (like "click" or "mousedown", or any other HTML DOM event).
```html
<body>
  <button id="myBtn">Try it</button>
  <script>
    function myFunction() {
      alert("Hello World!");
    }
    document.getElementById("myBtn").addEventListener("click", myFunction);
  </script>
</body>
```

OR

```html
<script>
  document.getElementById("myBtn").addEventListener("click", function() {
    alert("Hello World!");
  });
</script>
```

**Alert the Date with addEventListener:**

```html
<body>
  <button id="dte">Date is..</button>
  <script>
    var today = new Date();
    document.getElementById("dte").addEventListener("click", function() {
      alert(today);
    });
  </script>
</body>
```

**onClick event with addEventListener:**

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <title>submit form</title>
</head>
<body>
  <button id="btn"> Submit </button>
  <script>
    var btn = document.getElementById("btn");
    btn.addEventListener('click', function () {
      alert("this is alert message");
    });
  </script>
</body>
</html>
```

**onClick and onChange events with addEventListener:**

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <title>submit form</title>
</head>
<body>
  <select id="change">
    <option>this is demo text</option>
    <option>this is demo two</option>
    <option>this is demo three</option>
    <option>this is demo four</option>
  </select>
  <button id="btn"> Submit </button>
  <script>
    // onclick handler
    var btn = document.getElementById("btn");
    btn.addEventListener('click', function () {
      alert("this is alert message");
    });

    // onchange handler
    var change = document.getElementById("change");
    change.addEventListener('change', function () {
      console.log(change.value);
    });
  </script>
</body>
</html>
```

**onSubmit with addEventListener:**

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <title>submit form</title>
</head>
<body>
  <form id="form">
    <input type="text" id="username" placeholder="username" required />
    <input type="email" id="email" placeholder="email" required />
    <button value="register"> Submit </button>
  </form>
  <script>
    var form = document.getElementById("form");
    form.addEventListener('submit', function (e) {
      e.preventDefault();
      var username = document.getElementById("username").value;
      console.log(username);
      var email = document.getElementById("email").value;
      console.log(email);
    });
  </script>
</body>
</html>
```
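As noted above, removeEventListener() detaches a handler that was previously attached with addEventListener(). To see how this bookkeeping behaves, here is a minimal, dependency-free model of an event target (a simplified illustration, not how browsers actually implement EventTarget):

```javascript
// A tiny stand-in for a DOM element's listener bookkeeping.
// Real EventTargets also handle capture, bubbling, duplicate
// detection, and much more; this only models the basics.
class TinyEventTarget {
  constructor() {
    this.listeners = {}; // event type -> array of handler functions
  }
  addEventListener(type, handler) {
    (this.listeners[type] = this.listeners[type] || []).push(handler);
  }
  removeEventListener(type, handler) {
    // Only removes the exact same function reference that was added.
    this.listeners[type] = (this.listeners[type] || []).filter(h => h !== handler);
  }
  dispatchEvent(type) {
    (this.listeners[type] || []).forEach(h => h());
  }
}

const btn = new TinyEventTarget();
const log = [];
const sayHi = () => log.push('hi');
const sayBye = () => log.push('bye');

// Several handlers can be attached for the same event type.
btn.addEventListener('click', sayHi);
btn.addEventListener('click', sayBye);
btn.dispatchEvent('click'); // log: ['hi', 'bye']

// Removal requires the same function reference that was added.
btn.removeEventListener('click', sayBye);
btn.dispatchEvent('click'); // log: ['hi', 'bye', 'hi']
console.log(log);
```

The key practical consequence: to remove a listener later, keep a reference to the named function you attached; handlers added as inline anonymous functions cannot be removed this way.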
code_guruva_204d4e19ed643
1,910,351
Automating Linux User Management with Bash Scripts
Introduction Automation is essential for improving operational efficiency and preserving system...
0
2024-07-03T15:22:23
https://dev.to/mubarak_ajibola_96a34686b/automating-user-management-in-linux-using-bash-scripting-m5c
**Introduction**

Automation is essential for improving operational efficiency and preserving system consistency in today's dynamic IT environments. This article examines a bash script meant to automate user administration on Linux systems. The script was created as part of the HNG Internship DevOps Stage 1 task and helps with password generation, user creation, group assignments, security logging, and permissions configuration.

**Script Overview**

To efficiently handle users, the bash script **`create_users.sh`** makes use of essential Linux commands and facilities. The script creates random passwords securely using OpenSSL, reads user and group data from an input file (**`users.txt`**), processes each entry to create users with their corresponding groups, configures home directories with the necessary permissions, and records all operations to **`/var/log/user_management.log`**. It also makes sure that generated passwords are stored securely in **`/var/secure/user_passwords.csv`**.

**Key Features and Functionality**

**Input File Processing:**

- The script parses **`users.txt`**, where each line specifies a username followed by semicolon-separated groups (e.g., `username; group1,group2`).

**User and Group Management:**

- Checks if each user and their primary group exists. If not, it creates them.
- Adds users to specified additional groups and creates those groups if they don't exist.

**Password Management:**

- Generates strong, random passwords for each user using OpenSSL.
- Sets the generated password securely and logs the event to provide an audit trail.

**Home Directory Setup:**

- Ensures each user has a home directory created with strict permissions (700) and ownership for security.

**Logging and Auditing:**

- All operations performed by the script are logged with timestamps in **`/var/log/user_management.log`**. This facilitates troubleshooting and auditing of user management activities.
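The `username; group1,group2` input format described above can be split with bash's `IFS` and `read`, which is exactly the mechanism the full script below uses. As an isolated, runnable sketch of just the parsing step:

```shell
#!/bin/bash
# Parse "username; group1,group2" lines, trimming whitespace with xargs.
# Isolated sketch of the parsing step used by create_users.sh.
while IFS=";" read -r username groups; do
  username=$(echo "$username" | xargs)
  groups=$(echo "$groups" | xargs)
  echo "user=$username"
  # Split the comma-separated group list into individual names.
  IFS=',' read -ra group_list <<< "$groups"
  for group in "${group_list[@]}"; do
    echo "  group=$(echo "$group" | xargs)"
  done
done <<'EOF'
light; sudo,dev,www-data
idimma; sudo
EOF
```

Note that `IFS=";" read` is a per-command assignment, so the shell's global `IFS` is left untouched between iterations.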
**Security Considerations:**

- Passwords are stored securely in **`/var/secure/user_passwords.csv`**, with permissions restricted (600) and ownership restricted to root. This ensures only authorized personnel can access password information.

**Script Implementation: `create_users.sh`**

**1. Script Initialization and Input Handling**

```
#!/bin/bash

# Ensure script is run with root privileges
if [[ $EUID -ne 0 ]]; then
  echo "This script must be run as root"
  exit 1
fi

# Check if the input file is provided as an argument
if [ $# -ne 1 ]; then
  echo "Usage: $0 <input_file>"
  exit 1
fi

INPUT_FILE=$1

# Check if the input file exists
if [ ! -f "$INPUT_FILE" ]; then
  echo "Input file not found!"
  exit 1
fi
```

**2. File and Directory Setup**

```
# Log file path
LOG_FILE="/var/log/user_management.log"

# Password file path
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Create the secure directory if it doesn't exist
mkdir -p /var/secure
chmod 700 /var/secure

# Create the log file if it doesn't exist and set permissions
touch "$LOG_FILE"
chmod 600 "$LOG_FILE"

# Create the password file if it doesn't exist and restrict access to it
touch "$PASSWORD_FILE"
chmod 600 "$PASSWORD_FILE"
```

**3. Logging Function**

```
# Function to log messages with timestamps
log_message() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}
```

**4. User and Group Management**

```
# Loop through each line in the input file
while IFS=";" read -r username groups; do
  # Remove leading and trailing whitespace
  username=$(echo "$username" | xargs)
  groups=$(echo "$groups" | xargs)

  # Create the user's primary group if it doesn't exist
  if ! getent group "$username" >/dev/null; then
    groupadd "$username"
    log_message "Group $username created."
  else
    log_message "Group $username already exists."
  fi

  # Create the user if it doesn't exist
  if ! id -u "$username" >/dev/null 2>&1; then
    useradd -m -g "$username" -s /bin/bash "$username"
    log_message "User $username created with home directory."
  else
    log_message "User $username already exists."
    continue
  fi

  # Add user to additional groups specified
  IFS=',' read -ra ADDR <<< "$groups"
  for group in "${ADDR[@]}"; do
    group=$(echo "$group" | xargs)
    if ! getent group "$group" >/dev/null; then
      groupadd "$group"
      log_message "Group $group created."
    fi
    usermod -aG "$group" "$username"
    log_message "User $username added to group $group."
  done

  # Generate a random password for the user
  password=$(openssl rand -base64 12)
  echo "$username:$password" | chpasswd
  echo "$username,$password" >> "$PASSWORD_FILE"  # Store password in CSV format
  log_message "Password for user $username set and stored."

  # Set permissions for the user's home directory
  chmod 700 /home/"$username"
  chown "$username":"$username" /home/"$username"
  log_message "Permissions for /home/$username set to 700 and ownership set to $username:$username."
done < "$INPUT_FILE"
```

**5. Conclusion**

```
echo "User creation process completed."
exit 0
```

**Create a Sample Input File: `users.txt`**

```
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
```

**Execution and Conclusion**

First, make the script executable:

```
chmod +x create_users.sh
```

Then run the script with root privileges:

```
sudo ./create_users.sh users.txt
```

This article offers a thorough analysis of the **`create_users.sh`** script, demonstrating its powerful automation capabilities for Linux user management chores. Organizations may improve security through standard operating procedures, expedite user provisioning, and keep thorough audit logs of all user management operations by putting this script into effect.

For more information on the HNG Internship and opportunities in tech, visit **[HNG Internship](https://hng.tech/internship)** and **[HNG Hire](https://hng.tech/hire)**.
mubarak_ajibola_96a34686b
1,910,350
Stop Using UUIDs in Your Database
How UUIDs can Destroy SQL Database Performance. One of the most common way to uniquely identify rows...
0
2024-07-03T15:22:20
https://dev.to/manojgohel/stop-using-uuids-in-your-database-ogj
webdev, javascript, typescript, programming
How UUIDs can destroy SQL database performance. One of the most common ways to uniquely identify rows in a database is by using **UUID fields**. This approach, however, comes with performance caveats that you must be aware of. In this article, we discuss **two performance issues** that may arise when using UUIDs as keys in your database tables. So without further ado… let's jump right in!

# What are UUIDs?

UUID stands for Universally Unique Identifier. There are many versions of UUID, but in this article we will consider the most popular one: **UUIDv4**. Here is what a UUIDv4 looks like ([source](https://planetscale.com/blog/the-problem-with-using-a-uuid-primary-key-in-mysql)):

`f47ac10b-58cc-4372-a567-0e02b2fc3479`

> NOTE: Each UUID has the digit **4** in the same position to denote the version.

# Problem 1 — Insert Performance

When a new record is inserted into a table, the **index** associated with the primary key must be updated to maintain optimal query performance. Indexes are constructed using the B+ tree data structure. If you want to learn more about how **indexes and B+ trees** work, I highly suggest watching this great video from Abdul Bari: [10.2 B Trees and B+ Trees. How they are useful in Databases](https://www.youtube.com/watch?v=aZjYr87r1b8).

> **TL;DR:** for every record insertion, the underlying B+ tree may need to be rebalanced to maintain optimal query performance.

**The rebalancing process becomes highly inefficient for UUIDv4.** This is because of the inherent randomness of UUIDs, which makes it harder to keep the tree balanced. As you scale, you will have **millions** of nodes to rebalance, which dramatically decreases insert performance when using UUID keys.

> NOTE: Other options such as UUIDv7 could be a better choice, since they have inherent ordering, which makes them easier to index.
# Problem 2 — Higher Storage

Let's compare the size of a UUID with an auto-incrementing integer key: auto-incrementing integers consume 32 bits per value, while UUIDs consume **128 bits** per value.

> This is 4x more per row.

Additionally, most people store UUIDs in human-readable form, which means a UUID could consume up to **688 bits** per value.

> This is approximately 20x more per row.

Let's evaluate how UUIDs can actually impact your storage by simulating a realistic database. We will adapt the tables used by [Josh Tried Coding](https://www.youtube.com/watch?v=wkqwyrcuPs0) in this example:

> This example uses a Neon PostgreSQL database.

* **Table 1** will contain 1 million rows with UUIDs.
* **Table 2** will contain 1 million rows with auto-incrementing integers.

Here are the results; let's break down each statistic one by one:

**Total table size:** When comparing both table sizes, the UUID table is approximately **2.3x** larger than the integer table!

**ID field size:** An individual UUID field requires **9.3x** more storage space than an equivalent integer field!

**ID column size:** When excluding other attributes in each table, there is a **3.5x** size difference between the UUID and integer columns!

# Conclusion

UUIDs are a great way to ensure uniqueness between records in a table. The problems described here only become significant at scale, so UUIDs will not cause noticeable performance degradation for most applications. Even so, it is important to understand the implications of using UUIDs in your tables and to design your database accordingly.
manojgohel
1,910,333
My Web Development Journey Day-3: Git & Github 🎨
In my third day of mission webdevelopment, I learned about Git and Git hub.To be honest I already had...
27,922
2024-07-03T15:19:32
https://dev.to/shemanto_sharkar/my-web-development-journey-day-3-git-github-5ffe
webdev, beginners, programming, git
On my third day of mission web development, I learned about Git and GitHub. To be honest, I already had an idea about GitHub and had already worked with it, but in this module I got to know how to make a branch and work with it.

Then I learned more about CSS gradients. I learned:

1) Adding Google Fonts in CSS
2) CSS gradients
3) Transitions
4) Flexbox
5) justify-content vs align-items

I will write a separate blog on interview questions about Git and GitHub.
shemanto_sharkar
1,910,331
Chat moderation with OpenAI
Moderating PubNub chat with PubNub Functions and OpenAI's free moderation API
0
2024-07-03T15:17:47
https://dev.to/pubnub-pl/moderacja-czatu-za-pomoca-openai-29o0
Any application that includes [in-app chat](https://www.pubnub.com/solutions/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) needs some way to regulate and moderate the messages its users can exchange. Since it is not feasible to moderate all inappropriate content with human moderators alone, the moderation system must be automated. Because users often try to circumvent moderation, machine learning, generative AI, and large language models (LLMs) [and GPT models such as GPT-3 and GPT-4] are popular ways of moderating content. Moderation is a complex topic, and PubNub offers a range of solutions to cover all of our developers' use cases.

- PubNub [Functions](https://www.pubnub.com/docs/serverless/functions/overview?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) can intercept and modify messages before they reach their destination. You can apply custom logic within a Function, including calling an external REST API, which allows you to use any external service to moderate messages. This is the approach used in this article to integrate with OpenAI.
- PubNub Functions offer [custom integrations](https://www.pubnub.com/integrations/?page=1&sortBy=Most%20recent&utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) that support content moderation and sentiment analysis, including [Lasso Moderation](https://www.pubnub.com/integrations/lasso-moderation/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl), [Tisane](https://www.pubnub.com/integrations/tisane-labs-nlp/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl), a [RegEx-based profanity filter](https://www.pubnub.com/integrations/chat-message-profanity-filter/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl), [Lexalytics](https://www.pubnub.com/integrations/lexalytics/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl), and [Community Sift](https://www.pubnub.com/integrations/communitysift/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl).
- PubNub's BizOps Workspace can [monitor and moderate conversations](https://www.pubnub.com/how-to/monitor-and-moderate-conversations-with-bizops-workspace/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl), including editing and deleting messages.

The OpenAI Moderation endpoint
---------------------------------

In this article, we will look at the [OpenAI Moderation API](https://platform.openai.com/docs/guides/moderation/overview), a REST API that uses artificial intelligence (AI) to determine whether the provided text contains potentially harmful terms. The API is intended to let developers filter out or remove harmful content, and at the time of writing it is provided **free of charge**, although it only supports English.
The model behind the Moderation API categorizes the provided text as follows (taken from the [API documentation](https://platform.openai.com/docs/guides/moderation/overview)):

- **Hate**: Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Hateful content aimed at non-protected groups (e.g., chess players) is harassment.
- **Hate/Threatening:** Hateful content that also includes violence or serious harm towards the targeted group based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
- **Harassment:** Content that expresses, incites, or promotes harassing language towards any target.
- **Harassment/Threatening:** Harassment content that also includes violence or serious harm towards any target.
- **Self-harm**: Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
- **Self-harm/Intent:** Content where the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.
- **Self-harm/Instructions:** Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.
- **Sexual:** Content meant to arouse sexual excitement, such as descriptions of sexual activity, or that promotes sexual services (excluding sex education and wellness).
- **Sexual/Minors:** Sexual content that includes an individual who is under 18 years old.
- **Violence:** Content that depicts death, violence, or physical injury.
- **Przemoc / Grafika:** Treści przedstawiające śmierć, przemoc lub obrażenia fizyczne w szczegółach graficznych. Wyniki są dostarczane w strukturze JSON w następujący sposób (ponownie zaczerpnięte z dokumentacji API): ```js { "id": "modr-XXXXX", "model": "text-moderation-007", "results": [ { "flagged": true, "categories": { "sexual": false, "hate": false, "harassment": false, "self-harm": false, "sexual/minors": false, "hate/threatening": false, "violence/graphic": false, "self-harm/intent": false, "self-harm/instructions": false, "harassment/threatening": true, "violence": true }, "category_scores": { // Out of scope for this article } } ] } ``` Wywoływanie Open AI Moderation API z PubNub ------------------------------------------- **Integracja interfejsu API Moderation z dowolną aplikacją PubNub jest łatwa przy użyciu funkcji PubNub**, postępując zgodnie z tym samouczkiem krok po kroku: Funkcje umożliwiają przechwytywanie zdarzeń w czasie rzeczywistym zachodzących na platformie PubNub, takich jak wysyłane i odbierane wiadomości; można następnie napisać niestandardowy kod bezserwerowy w ramach tych funkcji, aby modyfikować, przekierowywać, rozszerzać lub filtrować wiadomości w razie potrzeby. Będziesz musiał użyć typu zdarzenia "Before Publish or Fire"; ten typ funkcji zostanie wywołany _przed_ dostarczeniem wiadomości i musi zakończyć wykonywanie, zanim wiadomość zostanie zwolniona, aby mogła zostać dostarczona do odbiorców. [Dokumentacja](https://www.pubnub.com/docs/serverless/functions/overview#what-function-type-to-use?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) PubNub zawiera więcej informacji i szczegółów, ale w skrócie: "Before Publish or Fire" to wywołanie synchroniczne, które może _zmienić wiadomość lub jej ładunek_. ### Tworzenie funkcji PubNub 1. 
Log into the PubNub [admin portal](https://admin.pubnub.com?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) and select the application and keyset for the app you want to moderate.
2. Select 'Functions', which can be found under the 'Build' tab.
3. Select '+ CREATE NEW MODULE' and give the module a name and description.
4. Select '+ CREATE NEW FUNCTION' and give the function a name.
5. For the event type, select 'Before Publish or Fire'.
6. For the Channel name, enter **\*** (this demo will use **\***, but your application may choose to specify only the channels here that you want to moderate).

Having created the PubNub function, you need to provide your Open AI API key as a secret.

1. Select 'MY SECRETS' and create a new key named 'OPENAI\_API\_KEY'.
2. [Generate an Open AI API key](https://platform.openai.com/account/api-keys) and ensure that key has access to the moderation API.
3. Provide the generated API key to the PubNub function secret you just created.

The body of the PubNub function will look as follows:

```js
const xhr = require('xhr');
const vault = require('vault');

export default request => {
  if (request.message && request.message.text) {
    let messageText = request.message.text
    return getOpenaiApiKey().then(apiKey => {
      return openAIModeration(messageText).then(aiResponse => {
        // Append the response to the message
        request.message.openAiModeration = aiResponse;
        // If the message was harmful, you might also choose to report the message here.
        return request.ok();
      })
    })
  }
  return request.ok();
};

let OPENAI_API_KEY = null;

function getOpenaiApiKey() {
  // Use cached key
  if (OPENAI_API_KEY) {
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  }
  // Fetch key from vault
  return vault.get("OPENAI_API_KEY").then(apikey => {
    OPENAI_API_KEY = apikey;
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  });
}

function openAIModeration(messageText) {
  const url = 'https://api.openai.com/v1/moderations';
  const http_options = {
    'method': 'POST',
    'headers': {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${OPENAI_API_KEY}`,
    },
    'body': JSON.stringify({ "input": messageText }),
    timeout: 9500,
    retries: 0
  };
  return xhr.fetch(url, http_options)
    .then((resp) => {
      const body = JSON.parse(resp.body);
      return body;
    })
    .catch((err) => {
      console.log(err);
      return "Open AI Timed out";
    });
}
```

The function itself is quite straightforward. For each message received:

- Pass it to the Open AI moderation function
- Append the returned moderation object as a new key on the Message (JSON) object

**Save your function and make sure your module is started.**

### Latency

The PubNub function you have just created will be executed synchronously every time a message is sent, and that message will not be delivered until the function has finished executing. Since the function contains a call to an external API, the delivery latency will depend on how fast the API call to Open AI returns, which is outside of PubNub's control and could be quite high.

There are several ways to mitigate any degradation in the user experience. Most deployments provide immediate feedback to the sender that the message was sent and then rely on read receipts to indicate that the message is delivered (or reported).
### Update the Client Application

Let's consider what would be required to handle the moderation payload within your application using the [Chat Demo](https://www.pubnub.com/demos/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl), which is a React application that uses the [PubNub Chat SDK](https://www.pubnub.com/docs/chat/overview?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) to show most of the features of a typical chat app.

Set up an attribute to track whether or not a potentially harmful message should be displayed:

```js
const [showHarmfulMessage, setShowHarmfulMessage] = useState(false)
```

And add some logic to not show a potentially harmful message by default, in this case within [message.tsx](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/90447262583c251c983f04f23ffb23adcbbd6d25/chat-sdk-demo-web/app/chat/ui-components/message.tsx):

```js
{(
  !message.content.openAiModeration ||
  !message.content.openAiModeration?.results[0].flagged ||
  showHarmfulMessage) && (message.content.text
)}
{
  !showHarmfulMessage &&
  message.content.openAiModeration?.results[0].flagged &&
  <span>Message contains potentially harmful content
    <span className="text-blue-400 cursor-pointer"
      onClick={() => {setShowHarmfulMessage(true)}}>(Reveal)
    </span>
  </span>
}
```

![Chat Moderation with OpenAI - Image](https://www.pubnub.com/cdn/3prze68gbwl1/1qmBAFCDiDwwKdp7TLSCaw/516d3b1de50c22784996f1e43a65fdc7/Screenshot_2024-07-01_at_10.59.58.png "Chat Moderation with OpenAI - Image 01")

Note that these changes are not present on the **hosted** version of the [Chat Demo](https://www.pubnub.com/demos/chat/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl), but the [ReadMe contains full instructions](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/main/README.md) to build and run it yourself from your own keyset.
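The conditional rendering above packs the whole display decision into JSX. The same decision can be factored into a small pure helper, which is easier to read and to unit-test — a hypothetical refactor, not code from the demo:

```javascript
// Hypothetical helper mirroring the display logic above: show the message
// text unless the appended moderation payload flagged it and the user has
// not chosen to reveal it. `content` is assumed to have the same shape as
// `message.content` in the demo (optional `openAiModeration` key).
function shouldDisplayText(content, showHarmfulMessage) {
  const flagged = Boolean(
    content.openAiModeration &&
    content.openAiModeration.results &&
    content.openAiModeration.results[0] &&
    content.openAiModeration.results[0].flagged
  );
  return !flagged || Boolean(showHarmfulMessage);
}
```

The JSX would then reduce to `{shouldDisplayText(message.content, showHarmfulMessage) && message.content.text}`, with the "(Reveal)" affordance shown in the opposite case.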
Wrap up
-------

And there you have it, a quick and easy (and free) way to add both moderation and sentiment analysis to your application using Open AI.

To learn more about integrating Open AI with PubNub, check out these other resources:

- [OpenAI GPT API Integration with Functions](https://www.pubnub.com/blog/openai-gpt-api-integration-with-functions/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl)
- [Build a Chatbot with PubNub and ChatGPT](https://www.pubnub.com/blog/build-a-chatbot-with-pubnub-and-chatgpt-openai/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) (Adding a Chatbot to our PubNub showcase)
- [Enhance a Geo App with PubNub & Chat GPT / OpenAI](https://www.pubnub.com/blog/enhance-geo-app-with-pubnub-and-openai-chatgpt/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl)

Feel free to reach out to the DevRel team at [devrel@pubnub.com](mailto:devrel@pubnub.com) or contact our [Support](https://support.pubnub.com/hc/en-us?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) team for help with any aspect of your PubNub development.

How can PubNub help you?
========================

This article was originally published on [PubNub.com](https://www.pubnub.com/blog/chat-moderation-with-openai/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl)

Our platform helps developers build, deliver, and manage real-time interactivity for web apps, mobile apps, and IoT devices. The foundation of our platform is the industry's largest and most scalable real-time messaging network.

With over 15 points-of-presence worldwide supporting 800 million monthly active users, and 99.999% reliability, you'll never have to worry about outages, concurrency limits, or any latency issues caused by traffic spikes.

Experience PubNub
-----------------

Check out the [Live Tour](https://www.pubnub.com/tour/introduction/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) to understand the essential concepts behind every PubNub-powered app in less than 5 minutes.

Get Setup
---------

Sign up for a [PubNub account](https://admin.pubnub.com/signup/?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) for immediate and free access to PubNub keys.

Get Started
-----------

The [PubNub docs](https://www.pubnub.com/docs?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl) will get you up and running, regardless of your use case or [SDK](https://www.pubnub.com/docs?utm_source=devto&utm_medium=syndication&utm_campaign=off_domain&utm_content=pl).
pubnubdevrel
1,910,330
Chat Moderation with OpenAI
Moderate PubNub Chat using PubNub Functions and the free moderation api from OpenAI
0
2024-07-03T15:17:47
https://dev.to/pubnub/chat-moderation-with-openai-33ea
Any application containing [in-app chat](https://www.pubnub.com/solutions/chat/?) needs some way to regulate and moderate the messages that users can exchange. Since it is not feasible to moderate all inappropriate content with human moderators, the moderation system must be automatic, and since users will frequently try to circumvent moderation, machine learning, generative AI, and large language models (LLMs), including GPT models such as GPT-3 and GPT-4, are popular ways to moderate content.

Moderation is a complex topic, and PubNub offers various solutions to meet all of our developers' use cases.

- [PubNub Functions](https://www.pubnub.com/docs/serverless/functions/overview?) can intercept and modify messages before they reach their destination. You can apply custom logic within a Function, including calling an external REST API, allowing you to use any external service for message moderation. This approach is used in this article to integrate with OpenAI.
- PubNub Functions offer [custom integrations](https://www.pubnub.com/integrations/?page=1&sortBy=Most%20recent&) that support content moderation and sentiment analysis, including [Lasso Moderation](https://www.pubnub.com/integrations/lasso-moderation/?), [Tisane](https://www.pubnub.com/integrations/tisane-labs-nlp/?), [a RegEx-based profanity filter](https://www.pubnub.com/integrations/chat-message-profanity-filter/?), [Lexalytics](https://www.pubnub.com/integrations/lexalytics/?), and [Community Sift](https://www.pubnub.com/integrations/communitysift/?).
- PubNub's BizOps Workspace can [monitor and moderate conversations](https://www.pubnub.com/how-to/monitor-and-moderate-conversations-with-bizops-workspace/?), including the ability to edit and delete messages.
The Open AI Moderation Endpoint
-------------------------------

This article will look at [OpenAI's Moderation API](https://platform.openai.com/docs/guides/moderation/overview), a REST API that uses artificial intelligence (AI) to determine whether the provided text contains potentially harmful terms. The API's intention is to allow developers to filter or remove harmful content, and at the time of writing, it is provided **free of charge**, though it only supports English.

The model behind the Moderation API will categorize the provided text as follows (taken from the [API documentation](https://platform.openai.com/docs/guides/moderation/overview)):

- **Hate:** Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Hateful content aimed at non-protected groups (e.g., chess players) is harassment.
- **Hate / Threatening:** Hateful content that also includes violence or serious harm towards the targeted group based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
- **Harassment:** Content that expresses, incites, or promotes harassing language towards any target.
- **Harassment / Threatening:** Harassment content that also includes violence or serious harm towards any target.
- **Self-Harm:** Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.
- **Self-Harm / Intent:** Content where the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.
- **Self-Harm / Instructions:** Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.
- **Sexual:** Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).
- **Sexual / Minors:** Sexual content that includes an individual who is under 18 years old.
- **Violence:** Content that depicts death, violence, or physical injury.
- **Violence / Graphic:** Content that depicts death, violence, or physical injury in graphic detail.

Results are provided within a JSON structure as follows (again, taken from the API documentation):

```js
{
  "id": "modr-XXXXX",
  "model": "text-moderation-007",
  "results": [
    {
      "flagged": true,
      "categories": {
        "sexual": false,
        "hate": false,
        "harassment": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false,
        "self-harm/intent": false,
        "self-harm/instructions": false,
        "harassment/threatening": true,
        "violence": true
      },
      "category_scores": {
        // Out of scope for this article
      }
    }
  ]
}
```

Calling the Open AI Moderation API from PubNub
----------------------------------------------

**Integrating the Moderation API into any PubNub application is easy using PubNub Functions** by following this step-by-step tutorial:

Functions allow you to capture real-time events happening on the PubNub platform, such as messages being sent and received; you can then write custom serverless code within those functions to modify, re-route, augment, or filter messages as needed.

You will need to use the "Before Publish or Fire" event type; this function type will be invoked _before_ the message is delivered and must finish executing before the message is released to be delivered to its recipients. The PubNub [documentation](https://www.pubnub.com/docs/serverless/functions/overview#what-function-type-to-use?) provides more background and detail, but in summary: "Before Publish or Fire" is a synchronous call that can _alter a message or its payload_.

### Create the PubNub Function

1.
Log into the PubNub [admin portal](https://admin.pubnub.com?) and select the application and keyset for the app you want to moderate.
2. Select 'Functions', which can be found under the 'Build' tab.
3. Select '+ CREATE NEW MODULE' and give the module a name and description.
4. Select '+ CREATE NEW FUNCTION' and give the function a name.
5. For the event type, select 'Before Publish or Fire'.
6. For the Channel name, enter **\*** (this demo will use **\***, but your application may choose to specify only the channels here that you want to moderate).

Having created the PubNub function, you need to provide your Open AI API key as a secret.

1. Select 'MY SECRETS' and create a new key named 'OPENAI\_API\_KEY'.
2. [Generate an Open AI API key](https://platform.openai.com/account/api-keys) and ensure that key has access to the moderation API.
3. Provide the generated API key to the PubNub function secret you just created.

The body of the PubNub function will look as follows:

```js
const xhr = require('xhr');
const vault = require('vault');

export default request => {
  if (request.message && request.message.text) {
    let messageText = request.message.text
    return getOpenaiApiKey().then(apiKey => {
      return openAIModeration(messageText).then(aiResponse => {
        // Append the response to the message
        request.message.openAiModeration = aiResponse;
        // If the message was harmful, you might also choose to report the message here.
        return request.ok();
      })
    })
  }
  return request.ok();
};

let OPENAI_API_KEY = null;

function getOpenaiApiKey() {
  // Use cached key
  if (OPENAI_API_KEY) {
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  }
  // Fetch key from vault
  return vault.get("OPENAI_API_KEY").then(apikey => {
    OPENAI_API_KEY = apikey;
    return new Promise(resolve => resolve(OPENAI_API_KEY));
  });
}

function openAIModeration(messageText) {
  const url = 'https://api.openai.com/v1/moderations';
  const http_options = {
    'method': 'POST',
    'headers': {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${OPENAI_API_KEY}`,
    },
    'body': JSON.stringify({ "input": messageText }),
    timeout: 9500,
    retries: 0
  };
  return xhr.fetch(url, http_options)
    .then((resp) => {
      const body = JSON.parse(resp.body);
      return body;
    })
    .catch((err) => {
      console.log(err);
      return "Open AI Timed out";
    });
}
```

The function itself is quite straightforward. For each message received:

- Pass it to the Open AI moderation function
- Append the returned moderation object as a new key on the Message (JSON) object

**Save your function and make sure your module is started.**

### Latency

The PubNub function you have just created will be executed synchronously every time a message is sent, and that message will not be delivered until the function has finished executing. Since the function contains a call to an external API, the delivery latency will depend on how fast the API call to Open AI returns, which is outside of PubNub's control and could be quite high.

There are several ways to mitigate any degradation in the user experience. Most deployments provide immediate feedback to the sender that the message was sent and then rely on read receipts to indicate that the message is delivered (or reported).
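The function above only annotates messages and leaves the filtering decision to the client. A hypothetical variant (not part of the original tutorial) could instead refuse delivery when the moderation response comes back flagged. The predicate is worth isolating, because the function can also return a plain string on timeout, which must not be treated as flagged:

```javascript
// Hypothetical helper: decide whether a moderation response should block
// delivery, assuming the response shape shown earlier
// ({ results: [{ flagged: true|false }] }).
// A non-object response (e.g. the "Open AI Timed out" string returned by
// the catch branch above) is treated as not flagged, so a moderation
// outage fails open rather than silencing all chat traffic.
function isFlagged(aiResponse) {
  return Boolean(
    aiResponse &&
    Array.isArray(aiResponse.results) &&
    aiResponse.results[0] &&
    aiResponse.results[0].flagged
  );
}
```

Inside the Function handler, a blocking variant would then call something like `if (isFlagged(aiResponse)) { return request.abort(); }` instead of always returning `request.ok()` — whether to block outright or merely annotate (as this article does) is a product decision.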
### Update the Client Application

Let's consider what would be required to handle the moderation payload within your application using the [Chat Demo](https://www.pubnub.com/demos/chat/?), which is a React application that uses the [PubNub Chat SDK](https://www.pubnub.com/docs/chat/overview?) to show most of the features of a typical chat app.

Set up an attribute to track whether or not a potentially harmful message should be displayed:

```js
const [showHarmfulMessage, setShowHarmfulMessage] = useState(false)
```

And add some logic to not show a potentially harmful message by default, in this case within [message.tsx](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/90447262583c251c983f04f23ffb23adcbbd6d25/chat-sdk-demo-web/app/chat/ui-components/message.tsx):

```js
{(
  !message.content.openAiModeration ||
  !message.content.openAiModeration?.results[0].flagged ||
  showHarmfulMessage) && (message.content.text
)}
{
  !showHarmfulMessage &&
  message.content.openAiModeration?.results[0].flagged &&
  <span>Message contains potentially harmful content
    <span className="text-blue-400 cursor-pointer"
      onClick={() => {setShowHarmfulMessage(true)}}>(Reveal)
    </span>
  </span>
}
```

![Chat Moderation with OpenAI - Image](https://www.pubnub.com/cdn/3prze68gbwl1/1qmBAFCDiDwwKdp7TLSCaw/516d3b1de50c22784996f1e43a65fdc7/Screenshot_2024-07-01_at_10.59.58.png "Chat Moderation with OpenAI - Image 01")

Note that these changes are not present on the **hosted** version of the [Chat Demo](https://www.pubnub.com/demos/chat/?), but the [ReadMe contains full instructions](https://github.com/PubNubDevelopers/Chat-SDK-Demo-Web/blob/main/README.md) to build and run it yourself from your own keyset.

Wrap up
-------

And there you have it, a quick and easy (and free) way to add both moderation and sentiment analysis to your application using Open AI.
To learn more about integrating Open AI with PubNub, check out these other resources:

- [OpenAI GPT API Integration with Functions](https://www.pubnub.com/blog/openai-gpt-api-integration-with-functions/?)
- [Build a Chatbot with PubNub and ChatGPT](https://www.pubnub.com/blog/build-a-chatbot-with-pubnub-and-chatgpt-openai/?) (Adding a Chatbot to our PubNub showcase)
- [Enhance a Geo App with PubNub & Chat GPT / OpenAI](https://www.pubnub.com/blog/enhance-geo-app-with-pubnub-and-openai-chatgpt/?)

Feel free to reach out to the DevRel team at [devrel@pubnub.com](mailto:devrel@pubnub.com) or contact our [Support](https://support.pubnub.com/hc/en-us?) team for help with any aspect of your PubNub development.

How can PubNub help you?
========================

This article was originally published on [PubNub.com](https://www.pubnub.com/blog/chat-moderation-with-openai/?)

Our platform helps developers build, deliver, and manage real-time interactivity for web apps, mobile apps, and IoT devices. The foundation of our platform is the industry's largest and most scalable real-time edge messaging network.

With over 15 points-of-presence worldwide supporting 800 million monthly active users, and 99.999% reliability, you'll never have to worry about outages, concurrency limits, or any latency issues caused by traffic spikes.

Experience PubNub
-----------------

Check out the [Live Tour](https://www.pubnub.com/tour/introduction/?) to understand the essential concepts behind every PubNub-powered app in less than 5 minutes.

Get Setup
---------

Sign up for a [PubNub account](https://admin.pubnub.com/signup/?) for immediate access to PubNub keys for free.

Get Started
-----------

The [PubNub docs](https://www.pubnub.com/docs?) will get you up and running, regardless of your use case or [SDK](https://www.pubnub.com/docs?).
pubnubdevrel
1,909,836
A Guide to Reducing Risks in Generative AI
Generative AI services are changing industries by automating complicated operations, generating...
0
2024-07-03T15:17:21
https://dev.to/calsoftinc/a-guide-to-reducing-risks-in-generative-ai-3007
ai, automation, machinelearning, productivity
Generative AI services are changing industries by automating complicated operations, generating content, and enhancing decision-making processes. However, those advantages come with significant risks that must be managed. This guide describes practical measures for reducing those risks and ensuring the responsible use of generative AI.

## Understanding Generative AI Services

Generative AI services use machine learning algorithms to generate fresh content or data. These services include text generation, image creation, music composition, and more. While these capabilities are powerful, they can also be misused, which leads to ethical, security, and operational concerns. This makes it essential to identify the risks in Gen AI services.

## Identifying Risks in Generative AI

[**Generative AI services**](https://www.calsoftinc.com/generative-ai-services-solutions/) carry numerous risks, including producing misleading information, misuse of AI-generated outputs, and a loss of transparency in AI decision-making. For instance, generative AI can create realistic but false media content, such as deepfakes, that can spread misinformation. It is, however, possible to address and mitigate these risks.

## Mitigating Data Privacy Risks in Generative AI

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70ud4c3t56ynkkqrzg4g.png)

## 1. Rigorous testing and validation

Ensuring the reliability and safety of generative AI models is crucial. This is achieved with comprehensive testing and validation. Here are the key components:

**•Comprehensive Testing:** This involves subjecting the AI models to various scenarios and inputs to evaluate their performance. It is critical to understand how the AI operates under both normal and stressful conditions. This helps in identifying potential failures or biases in the AI's outputs.
**•Artificial Data Use:** Using artificial data allows testing the models without risking the privacy of real-world information. Synthetic data mimics real-world data patterns but doesn't include actual consumer information. This technique is especially useful for testing how models handle edge cases or unexpected inputs, thereby lowering the chance of harmful outputs when deployed in real-world settings.

**•Scenario-Based Testing:** Stress-testing the AI under different scenarios helps identify how the system handles unusual or extreme cases. For instance, a healthcare AI might be tested with various patient conditions to ensure it provides safe and accurate recommendations.

## 2. Incorporating transparency measures

Building trust and accountability requires implementing transparency measures. Here's how transparency can be implemented in different services:

•**Explainable AI Models:** Creating systems that can articulate their choices and results helps users understand how content was generated. This is especially crucial in fields where decisions can have a big impact, like law and medicine.

•**User-Friendly Explanations:** End users should be able to understand the explanations provided quickly. This entails staying away from technical jargon and presenting information in an understandable way.

**•Decision Traceability:** Ensuring that each decision can be traced back to the data and rules used aids in auditing and understanding the system's behaviour. This traceability is critical to sustaining accountability and confidence.

## 3. Implementing robust data governance

Data governance is a cornerstone of reliable and safe generative AI services. Effective data governance includes:

•**Data Curation:** To guarantee that training data for AI models appropriately reflects the real-world situations the AI will encounter, rigorous selection and management of the data are essential. This entails removing irrelevant or potentially skewed data.
•**Bias Mitigation:** It's essential to identify and resolve biases by routinely reviewing training protocols and data sources. This helps ensure AI models remain unbiased and do not reinforce pre-existing biases in the data.

•**Data Quality:** Ensuring that the data is correct, comprehensive, and up to date is known as data quality. High-quality data improves overall AI model performance and lowers the probability of producing biased or inaccurate findings.

•**Regular Audits:** Monitoring and auditing data on a regular basis allows problems to be discovered and dealt with before they become major concerns. Frequent audits verify compliance with ethical standards and data protection legislation.

## 4. Developing Ethical Guidelines and Standards

Ethical norms and standards are essential for directing the development and deployment of generative AI systems. Here's how they can be established:

•**Ethical Frameworks:** Developing comprehensive ethical frameworks is essential to address concerns such as consent, privacy, and the rights of individuals whose data may be used. These frameworks should be aligned with societal values and legal requirements.

•**Consent Management:** Ensuring that individuals have given explicit consent for their data to be used in training AI models is crucial. This involves transparent communication about how their data will be used and stored.

•**Privacy Protections:** Implementing robust privacy protections to safeguard individuals' data is necessary. This includes data anonymization techniques and stringent access controls to prevent unauthorized use of data.

•**Regulatory Compliance:** It is critical to stay current on new rules and regulations governing technology and data protection. Following these principles guarantees that systems are legal and ethical.

•**Ethical Audits:** Regular ethical audits of AI systems help in maintaining ethical standards over time.
These audits can identify potential ethical issues and provide recommendations for improvement.

Businesses may mitigate the risks associated with generative AI solutions and ensure their responsible usage by setting ethical norms, maintaining strong data governance, fully testing AI models, and applying transparency measures. These actions are crucial for creating reliable and powerful AI applications that benefit society and industry. Following best practices can overcome the risks in Gen AI services.

### Implementing Best Practices for Generative AI Services

To reduce risks in generative AI services, organizations should:

• **Conduct Risk Assessments:** Perform thorough risk assessments before deploying AI systems.

• **Engage Stakeholders:** Involve diverse stakeholders in the AI development process.

• **Keep Systems Updated:** Regularly update AI systems with the latest security patches and improvements.

• **Ensure Compliance:** Make sure AI systems comply with relevant regulations and standards.

## Conclusion

While generative AI offers enormous potential, it also carries significant risks. Businesses can benefit from generative AI while limiting those risks by following reliable procedures. Important measures include prioritizing data protection, eliminating bias, strengthening security, and addressing ethical concerns. Generative AI applications that are used ethically become more dependable and effective, benefiting both society and industry.

Calsoft, being a leading technology partner, offers expert advice and strong [**data security services**](https://www.calsoftinc.com/technology/security/) to ensure that generative AI tools are utilized safely and successfully. Businesses that engage with Calsoft can maximize the potential of generative AI while mitigating risks.
Reference Links: [1] Managing the risk of Generative AI- [Harvard Business Review](https://hbr.org/2023/06/managing-the-risks-of-generative-ai) [2] AI risk management framework- [NIST](https://www.nist.gov/itl/ai-risk-management-framework)
calsoftinc
1,910,329
Dive into Cybersecurity Mastery with Cyber Security Tutorials! 🔒
Comprehensive cybersecurity tutorials covering network mapping, packet analysis, vulnerability assessments, and ethical hacking techniques using Kali Linux.
27,801
2024-07-03T15:14:54
https://getvm.io/tutorials/cyber-security-tutorials
getvm, programming, freetutorial, technicaltutorials
Hey there, fellow cybersecurity enthusiasts! 👋 Are you looking to level up your skills and become a pro in the world of network security and ethical hacking? Well, I've got the perfect resource for you - the Cyber Security Tutorials! This comprehensive path, available at [https://labex.io/tutorials/category/cysec](https://labex.io/tutorials/category/cysec), is a treasure trove of knowledge and hands-on experience in the field of cybersecurity. Whether you're a beginner or an experienced professional, these tutorials have something for everyone. ## What's Included? 🔍 The Cyber Security Tutorials cover a wide range of topics, from network mapping and packet analysis to vulnerability assessments and ethical hacking techniques using the powerful Kali Linux. You'll learn how to: - Gain practical skills in network mapping with Nmap, packet analysis with Wireshark, and ethical hacking methodologies using Kali Linux. 🌐 - Explore host discovery, port scanning, vulnerability assessments, traffic capture, packet dissection, and a wide range of security tools. 🔍 - Develop hands-on expertise in network reconnaissance, web app testing, wireless hacking, exploitation, and post-exploitation techniques. 💻 - Equip yourself with essential InfoSec knowledge to identify risks, conduct ethical hacking assessments, and strengthen organizational security posture. 🛡️ ## Why You Should Check It Out 🤔 This comprehensive path is ideal for individuals interested in cybersecurity, network security, and ethical hacking. It provides you with the perfect blend of theoretical knowledge and practical skills, enabling you to develop a deep understanding of the subject matter and become a true cybersecurity pro. 💪 So, what are you waiting for? Head over to [https://labex.io/tutorials/category/cysec](https://labex.io/tutorials/category/cysec) and dive into the world of Cyber Security Tutorials. Get ready to level up your skills, stay ahead of the curve, and become a cybersecurity superhero! 
🚀 ## Supercharge Your Cybersecurity Journey with GetVM Playground 🚀 To truly master the concepts covered in the Cyber Security Tutorials, I highly recommend utilizing the powerful GetVM Playground. This Google Chrome browser extension provides an immersive online coding environment, allowing you to put your newfound knowledge into practice seamlessly. With the GetVM Playground, you can dive right into hands-on exercises and experiments without the hassle of setting up local development environments. The intuitive interface and real-time feedback make it the perfect companion to the Cyber Security Tutorials, enabling you to reinforce your understanding and develop practical skills in network mapping, packet analysis, vulnerability assessments, and ethical hacking techniques. The best part? You can access the GetVM Playground directly from the Cyber Security Tutorials page at [https://getvm.io/tutorials/cyber-security-tutorials](https://getvm.io/tutorials/cyber-security-tutorials). This seamless integration ensures that you can transition effortlessly between theory and practice, solidifying your cybersecurity expertise every step of the way. 🎉 So, what are you waiting for? Unlock the full potential of the Cyber Security Tutorials by exploring the GetVM Playground and taking your learning experience to new heights. Get ready to become a true cybersecurity master! 💻 --- ## Practice Now! - 🔗 Visit [Cyber Security Tutorials](https://labex.io/tutorials/category/cysec) original website - 🚀 Practice [Cyber Security Tutorials](https://getvm.io/tutorials/cyber-security-tutorials) on GetVM - 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore) Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio) ! 😄
getvm
1,907,862
Embracing Data: Documenting My Journey into Data Analytics
I embarked on a journey into the world of data a little over a year ago. My fascination with data and...
27,952
2024-07-03T15:14:39
https://dev.to/tmsn/embracing-data-documenting-my-journey-into-data-analytics-528k
data, datascience, beginners, career
I embarked on a journey into the world of data a little over a year ago. My fascination with data and AI systems has been a long-standing interest. I believe data is fundamental to everything and finds applications in every field.

Last year, I came across the [ACN Scholarships](https://www.africancoding.network/acn-youth) opportunity, which offered Google Certification scholarship programs. I applied and was selected for the Google Analytics Course. This course opened my eyes to the potential and practical applications of data analysis, as well as the skills needed to build a career around it.

Following the course, I immersed myself in various projects aimed at refining and expanding my skills in data analysis. I engaged with LeetCode, StrataScratch SQL and R exercises, and experimented with Tableau dashboards. However, like many journeys, mine encountered unexpected pauses and detours due to life's unpredictable nature.

Nevertheless, determined to reignite my passion for data, I have decided to launch this blog. I intend to document my ongoing progress, share valuable insights gained from my projects, and reflect on the continuous learning process. In the coming posts, I look forward to revisiting some of the projects I've completed, offering a behind-the-scenes look at the challenges, breakthroughs, and lessons learned along the way.

Through this platform, I hope to track my personal growth and inspire and connect with fellow enthusiasts and professionals in the field. Join me as I navigate through past achievements and set ambitious goals for the future. I'm eager to continue evolving within this dynamic field and to share this journey with you, each step of the way.

**Next post:** One particularly memorable aspect of my journey so far was working through my end-of-course case study, which applied our learnings to real-world scenarios. I look forward to sharing the tea in my next post!
tmsn
1,910,328
Secure Your AS400 Systems with Sennovate’s Managed Detection and Response (MDR) Service
In the cyber threat landscape, protecting important business systems such as AS400 is critical....
0
2024-07-03T15:14:28
https://dev.to/sennovate/secure-your-as400-systems-with-sennovates-managed-detection-and-response-mdr-service-43a8
cybersecurity, security, infosec, mdr
In the cyber threat landscape, protecting important business systems such as AS400 is critical. Sennovate provides a comprehensive Managed Detection and Response (MDR) service specifically built for AS400. This service assures that your AS400 systems are secure, resilient, and meet industry requirements. In this blog article, we will go over the specifics of how we integrate AS400 systems, collect and parse logs, and the unique value of protecting your legacy systems.

**Seamless Integration with AS400 Systems and Architecture**

For our customers integrating their AS400 systems, we deployed a specialized sensor that acts as a log collector within their infrastructure. This sensor is designed to efficiently pull logs from the IBM i Syslog Reporting Manager. Once collected, the logs are forwarded to our Security Operations Center (SOC) platform. Our SOC platform is equipped with advanced analytical tools and algorithms that allow our team of skilled analysts to monitor the detections being triggered in real-time.

**Log Ingestion Using IBM i Syslog Reporting Manager**: We utilize the IBM i Syslog Reporting Manager to ingest logs from your AS400 systems. This tool enables us to collect comprehensive log data efficiently and is part of IBM's toolkit.

**Log Parsing and Analysis**: Once the logs are ingested, our log collector parses the raw log data into key fields. This step ensures that we can effectively monitor and understand the activity within your AS400 environment.

**Detection Enablement**: Based on the parsed log data, we enable advanced detections to identify potential security threats. This proactive approach allows us to detect and respond to anomalies and suspicious activities in real-time.

This architecture ensures that any anomalies or potential security threats are promptly identified and addressed, providing comprehensive protection for your AS400 systems.
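To make the parsing step concrete, here is a simplified sketch of how a collector might split a classic BSD-style syslog line into key fields. This is purely illustrative: the field names, regular expression, and sample message are assumptions, not Sennovate's actual collector logic or IBM i Syslog Reporting Manager output.

```javascript
// Illustrative only: a minimal parser for a BSD-style syslog line of the form
//   <PRI>MMM dd HH:MM:SS host tag: message
// Facility and severity are derived from the PRI value per the syslog convention.
function parseSyslogLine(line) {
  const match = line.match(/^<(\d+)>(\w{3}\s+\d+\s[\d:]+)\s(\S+)\s([^:]+):\s(.*)$/);
  if (!match) return null; // not a recognizable syslog line

  const pri = Number(match[1]);
  return {
    facility: Math.floor(pri / 8),
    severity: pri % 8,
    timestamp: match[2],
    host: match[3],
    tag: match[4],
    message: match[5],
  };
}

// Hypothetical AS400 audit message, for demonstration.
const parsed = parseSyslogLine("<34>Jul  3 15:14:28 as400prod QAUDJRN: user QSECOFR signed on");
console.log(parsed.severity); // 34 % 8 = 2 (critical)
```

Once a line is broken into structured fields like these, detections (for example, alerting on repeated sign-on failures for a given `host`) become straightforward filters over the parsed records.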
**Ensuring Compliance and Comprehensive Dashboards**

Compliance with industry regulations is crucial for many businesses, and Sennovate's MDR service helps you stay compliant by offering detailed reporting and documentation, along with comprehensive dashboards for real-time insights. We ensure your AS400 systems adhere to relevant regulatory standards, such as GDPR, HIPAA, and PCI-DSS, and provide audit-ready reports with detailed logs and documentation. Additionally, we offer custom dashboards tailored to your specific needs, giving you real-time visibility into your security posture, key metrics, and alerts. We created more than 10 different dashboards, covering everything from user behavior to TCP and Telnet connections. One example of our customized dashboards is the main dashboard we created to aggregate all the data coming in from the AS400, providing an overview.

**The Sennovate Difference**

When you choose Sennovate for your AS400 security needs, you are teaming up with a trusted cybersecurity leader. Here's what makes us stand out:

- **Expertise and Experience**: Our team consists of seasoned professionals with deep expertise in AS400 systems and cybersecurity.
- **Innovative Solutions**: We leverage the latest technology and innovative approaches to deliver top-notch security services.
- **Customer-Centric Approach**: We prioritize your business needs, providing personalized solutions and exceptional customer service.
- **Proven Track Record**: Our clients trust us to protect their most critical assets, and we have a proven track record of success.

Securing your AS400 systems with Sennovate's MDR service is a strategic move that offers robust protection, compliance assurance, and peace of mind. Our seamless integration, 24/7 monitoring, advanced threat detection, rapid incident response, and comprehensive reporting ensure that your AS400 environment remains secure and resilient.
**Learn More About [Sennovate MDR-as-a-Service](https://sennovate.com/secure-your-as400-systems-with-sennovates-managed-detection-and-response-mdr-service/) for Legacy Systems**

Don't leave your organization's security to chance. Discover the unmatched protection Sennovate offers through our MDR-as-a-Service offering. We assist organizations like yours in assessing their security posture, identifying risks, and implementing robust security solutions aligned with industry best practices to mitigate those risks effectively. We provide comprehensive end-to-end Managed Detection and Response services, covering advisory, implementation, and 24×7 managed services.

To learn more about our solutions and services, visit [https://sennovate.com/](https://sennovate.com) or contact us at hello@sennovate.com.
sennovate
1,910,325
Newly arrived
New in here, have a great day everyone...
0
2024-07-03T15:12:21
https://dev.to/rhey0027/newly-arrived-1ebj
web, python, database, anything
New in here, have a great day everyone...
rhey0027
1,909,131
amber: writing bash scripts in amber instead. pt. 4: functions
a while ago, i blogged about uploading files to s3 using curl and provided the solution as two...
27,793
2024-07-03T15:04:42
https://dev.to/gbhorwood/amber-writing-bash-scripts-in-amber-instead-pt-4-functions-5ba0
linux, bash
a while ago, i blogged about [uploading files to s3 using curl](https://gbh.fruitbat.io/2024/04/22/uploading-to-s3-with-bash/) and provided the solution as [two functions written in bash](https://gist.github.com/gbhorwood/d861c7a21f2ab151046025137f6a65b1), and basically the only feedback was "wait. you can write _functions_ in bash?"

you can. but in reality, you probably don't want to. the syntax for defining (and calling) bash functions is horrible. there's a reason people generally don't use them.

writing and using functions in amber, by comparison, is a borderline delight of sanity. if you've ever written a function in php or python or javascript, amber's function syntax will feel familiar and, well, 'normal'.

![community disaster meme](https://gbh.fruitbat.io/wp-content/uploads/2024/07/meme_amber4.jpg "the community disaster meme")<figcaption>a look at the syntax of functions in bash</figcaption>

## a simple function to start

let's start with a basic hello-world calibre function:

```dart
fun hello() {
    echo "hello"
}

hello()
```

functions are defined with the keyword `fun` to keep things nice and terse, and have a body inside of braces. very comfortable, standard stuff. when we call this function, the string 'hello' gets printed to `STDOUT`.

## accepting arguments

arguments can be passed to a function as a comma-separated list. again, no surprises here.

```dart
fun personalized_hello(name) {
    echo "hello {name}"
}

personalized_hello("gbhorwood")
```

note here that we're using [string interpolation](https://en.wikipedia.org/wiki/String_interpolation) in our `echo` statement to output our variable.

## return statements

we can return a value from a function using the `return` statement, just as we would expect.
```dart
fun get_personalized_hello(name) {
    return "hello {name}"
}

echo get_personalized_hello("gbhorwood")
```

## a little bit of type safety

everybody loves some type safety in their programming language, and amber obliges us by accepting optional types for both arguments and return values.

```dart
fun sum(a: Num, b: Num): Num {
    return a + b
}
```

types are defined using the colon syntax. amber has five types:

* **`Text`**: strings, basically.
* **`Num`**: either integers or floats.
* **`Bool`**: the standard `true` or `false`
* **`Null`**: the nothing type. amber uses `Null` as the return type for functions that do not return values.
* **`[<some type>]`**: the array type. in amber, arrays cannot contain mixed types, so the type definition for an array includes the type of the array's elements. if we want to define an array of numbers, for instance, we would type it as `[Num]`.

**important note:** if we define one type in a function, we have to define _all_ the types. for instance, we cannot define just the argument type without also defining the return type. not setting a return type of `Null` here triggers a warning.

```dart
// this warns because there is no return type
fun say_my_name(name: Text) {
    echo name
}

say_my_name("gbhorwood")
// WARN Function has typed arguments but a generic return type
```

we would fix this warning by writing our function as:

```dart
// this works
fun say_my_name(name: Text): Null {
    echo name
}
```

likewise, if we define the type of one argument, we have to define them all:

```dart
// this errors because last_name has no type
fun say_my_name(first_name: [Text], last_name): Null {
    echo "hello"
}

say_my_name("grant", "horwood")
// ERROR Function 'say_my_name' has a mix of generic and typed arguments
```

and, of course, if we define a type, we have to obey it.
```dart
fun say_my_name(name: Text) {
    echo name
}

say_my_name(9)
// 1st argument 'name' of function 'say_my_name' expects type 'Text', but 'Num' was given
```

## throwing errors with `fail`

functions in amber can 'throw' an error by using the `fail` statement with an exit code. in this example, we want our function to fail if the user is not root.

```dart
fun root_only_function() {
    unsafe if($whoami$ != "root") {
        fail 1
    }
    echo "only root can do this"
}
```

in the [first installment](https://gbh.fruitbat.io/2024/06/18/amber-writing-bash-scripts-in-amber-instead-pt-1-commands-and-error-handling/), we covered handling bash errors using the `failed` block. we can 'catch' the errors 'thrown' by `fail` the same way.

```dart
root_only_function() failed {
    echo "failed condition"
}
```

likewise, we can also ignore the errors we `fail` from our functions by using `unsafe`.

```dart
unsafe root_only_function()
```

### trapping `failed` cases in our functions

we can also, of course, handle errors from commands by using the `failed` block _inside_ our functions.
this function, for example, attempts a shell command and, on failure, throws its own `fail`.

```dart
fun failing_touch() {
    silent $touch /etc/passwd$ failed {
        fail 1
    }
}

failing_touch() failed {
    echo "function failing_touch failed"
}
```

note that we applied `silent` to our shell command to suppress bash's output. we only want users to see _our_ error messages, not the shell's.

### pushing `failed` cases up to our function call

trapping an error and throwing an explicit `fail` is a bit clumsy. amber also allows us to automatically `fail` up to where our function is called by replacing the `failed` block in our function with `?`.

```dart
fun failing_touch() {
    silent $touch /etc/passwd$?
}

failing_touch() failed {
    echo "function failing_touch failed"
}
```

in this example, our function, when called, fails exactly the same way as it would if we'd called `$touch /etc/passwd$` directly. very handy.

## conclusion

this series has covered calling shell commands; handling errors; composing `if` statements and writing loops; using the convenience commands in the standard library; and writing functions.

is that all amber can do? no. but it is certainly enough for us to start using this language to do useful, meaningful things.

## a note about vim

writing code in vim is a joyful thing (or, at least, that's my opinion), but not having syntax highlighting in this modern day and age is intolerable, so i composed an [amber syntax file for vim](https://github.com/gbhorwood/amber.vim). i've never written a syntax file before and the effort there is clearly sophomoric, but it _does_ work.

> 🔎 this post was originally written in the [grant horwood technical blog](https://gbh.fruitbat.io/2024/07/03/amber-writing-bash-scripts-in-amber-instead-pt-4-functions/)
gbhorwood
1,910,323
How to Perform Data Validation in Node.js
Data validation is essential to avoid unexpected behavior, prevent errors, and improve security. It...
0
2024-07-03T15:02:41
https://blog.appsignal.com/2024/06/19/how-to-perform-data-validation-in-nodejs.html
node
Data validation is essential to avoid unexpected behavior, prevent errors, and improve security. It can be performed both on a web page — where data is entered — and on the server, where the data is processed. In this tutorial, we'll explore data validation in the Node.js backend. Then, you'll learn how to implement it in Express using the `express-validator` library. Get ready to become a Node.js data validation expert! ## What Is Data Validation? [Data validation](https://en.wikipedia.org/wiki/Data_validation) ensures that data (whether entered or provided) is correct, consistent, and useful for its intended purpose. This process is typically performed in frontend applications, such as when dealing with forms. Likewise, it is essential to validate data in the backend as well. In this case, data validation involves checking path and query parameters, as well as the body data sent to servers. This ensures that the data received by each API meets the specified criteria, preventing errors and vulnerabilities and ensuring the smooth functionality of your application. ## Main Benefits of Validating Incoming Requests in Node.js Validating incoming requests in Node.js offers several key benefits: - **Enhancing security**: Mitigate some threats, such as injection attacks and data breaches. Proper validation prevents attackers from exploiting vulnerabilities in your application by sending it malformed or malicious data. - **Improving reliability**: Ensure that only valid and sanitized data is processed and stored in the backend application. That enhances the overall integrity and reliability of the data, leading to a more robust and trustworthy server. - **Maintaining compliance**: Make sure that the data handled by the server adheres to specific data format requirements or meets internal coding standards. Now that you know what data validation is and why you should enforce it in your Node.js application, let's see how to do it in this step-by-step tutorial! 
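To make the idea concrete before reaching for a library, here is a minimal hand-rolled check of the kind a backend might run on incoming data. This is an illustrative sketch only: the payload fields mirror the user shape used later in this tutorial, and the naive e-mail regex is an assumption, not what `express-validator` uses internally.

```javascript
// A naive validator for a "create user" payload: returns a list of error messages.
// Purely to illustrate the concept; a library like express-validator handles
// these checks (plus sanitization and error reporting) far more robustly.
function validateNewUser(payload) {
  const errors = [];
  if (typeof payload.fullName !== "string" || payload.fullName.trim() === "") {
    errors.push("fullName must be a non-empty string");
  }
  if (typeof payload.email !== "string" || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(payload.email)) {
    errors.push("email must be a valid e-mail address");
  }
  if (!Number.isInteger(payload.age) || payload.age < 18) {
    errors.push("age must be an integer of at least 18");
  }
  return errors;
}

console.log(validateNewUser({ fullName: "Ada", email: "ada@example.com", age: 28 })); // []
console.log(validateNewUser({ fullName: " ", email: "nope", age: 12 }).length);       // 3
```

Writing checks like this by hand quickly becomes repetitive and error-prone, which is exactly the gap `express-validator` fills in the rest of this tutorial.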
## Prerequisites To follow this tutorial, you need a Node.js 18+ application with a few endpoints. For example, the following Express server is perfect: ```javascript const express = require("express"); // initialize an Express server const app = express(); app.use(express.json()); // an array to use as an in-memory database const users = [ { id: 1, email: "john.doe@example.com", fullName: "John Doe", age: 30 }, { id: 2, email: "jane.smith@example.com", fullName: "Jane Smith", age: 25 }, { id: 3, email: "bob.johnson@example.com", fullName: "Bob Johnson", age: 40 }, { id: 4, email: "alice.williams@example.com", fullName: "Alice Williams", age: 35, }, { id: 5, email: "mike.brown@example.com", fullName: "Mike Brown", age: 28 }, { id: 6, email: "sara.taylor@example.com", fullName: "Sara Taylor", age: 33 }, { id: 7, email: "chris.lee@example.com", fullName: "Chris Lee", age: 22 }, { id: 8, email: "emily.davis@example.com", fullName: "Emily Davis", age: 45 }, { id: 9, email: "alex.johnson@example.com", fullName: "Alex Johnson", age: 27, }, { id: 10, email: "lisa.wilson@example.com", fullName: "Lisa Wilson", age: 38, }, ]; // define three sample endpoints app.get("/api/v1/users/:userId", (req, res) => { const userId = req.params.userId; // find a user by id const user = users.find((user) => user.id == userId); if (!user) { res.status(404).send("User not found!"); } else { res.send({ user: user, }); } }); app.get("/api/v1/users", (req, res) => { // select all users by default let filteredUsers = users; const search = req.query.search; if (search !== undefined) { // filter users by fullName with a case-insensitive search filteredUsers = users.filter((user) => { return user.fullName.toLowerCase().includes(search.toLowerCase()); }); } res.send({ users: filteredUsers, }); }); app.post("/api/v1/users", (req, res) => { const newUser = req.body; const maxId = users.reduce((max, user) => (user.id > max ? 
user.id : max), 0); // add a new user with an auto-incremented id users.push({ id: maxId + 1, ...newUser, }); res.status(201).send(); }); // start the server locally on port 3000 const port = 3000; app.listen(port, () => { console.log(`Server listening at http://localhost:${port}`); }); ``` This defines a local variable named `users` as an in-memory database. Then, it initializes an Express application with the following three endpoints: - `GET /api/v1/users/:userId`: To retrieve a single user from the `users` array based on its id. - `GET /api/v1/users`: To get the list of users in the database. It accepts an optional `search` query parameter to filter users by their full name. - `POST /api/v1/users`: To add a new user to the `users` array. Next, you'll need a library to perform data validation in Node.js. With thousands of weekly downloads, [`express-validator`](https://github.com/express-validator/express-validator) is the most popular option. `express-validator` provides a set of Express middleware to validate and sanitize incoming data to server APIs. Behind the scenes, these middleware functions are powered by [`validator.js`](https://github.com/validatorjs/validator.js). If you are unfamiliar with this package, `validator.js` is the most widely used data validation library in the entire JavaScript ecosystem. What makes `express-validator` so successful is its rich set of features and intuitive syntax for validating Express endpoints. It also provides tools for determining whether a request is valid, functions for accessing sanitized data, and more. Add the [`express-validator` npm package](https://www.npmjs.com/package/express-validator) to your project's dependencies with: ```bash npm install express-validator ``` Perfect! You now have everything you need to perform data validation in an Express application. 
For a faster setup, clone the [GitHub repository supporting this guide](https://github.com/Tonel/nodejs-express-validator-demo): ```bash git clone https://github.com/Tonel/nodejs-express-validator-demo ``` You'll find the Express server above and further implementations in dedicated branches. `express-validator` supports two equivalent ways to implement data validation: 1. [**Validation Chain**](https://express-validator.github.io/docs/guides/validation-chain): Define the validation rules by calling one method after another through [method chaining](https://en.wikipedia.org/wiki/Method_chaining). 2. [**Schema Validation**](https://express-validator.github.io/docs/guides/schema-validation): Define the validation rules in an object-based schema to match against the incoming data. Let's dive into both! ## Data Validation in Node.js With Validation Chains Learn how to implement data validation through validation chains in `express-validator`. ### Understand Validator Chains In `express-validator`, validation chains always begin with one of the following middleware functions: - [`check()`](https://express-validator.github.io/docs/api/check/): Creates a validation chain for the selected fields in any of the `req.body`, `req.cookies`, `req.headers`, `req.query`, or `req.params` locations. If the specified fields are present in more than one location, the validation chain processes all instances of that field's value. - [`body()`](https://express-validator.github.io/docs/api/check/#body): Same as `check()`, but it only checks fields in `req.body`. - [`cookie()`](https://express-validator.github.io/docs/api/check/#cookie): Same as `check()`, but it only checks fields in `req.cookies`. - [`header()`](https://express-validator.github.io/docs/api/check/#header): Same as `check()`, but it only checks fields in `req.headers`. - [`param()`](https://express-validator.github.io/docs/api/check/#param): Same as `check()`, but it only checks fields in `req.params`. 
- [`query()`](https://express-validator.github.io/docs/api/check/#query): Same as `check()`, but it only checks fields in `req.query`. These middleware functions accept one or more field names to select from incoming data. They also provide some methods, which is possible because [JavaScript functions are actually first-class objects](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions). Their methods always return themselves, leading to the method chaining pattern. So, let's assume that you want to ensure that the `name` body parameter contains at least 4 characters when it is present. This is how you can specify that with an `express-validator` validator chain: ```javascript body("name").optional().isLength({ min: 4 }); ``` Each method chain will return a valid Express middleware function you can use for validation in a route handler. A single route handler can have one or more validation middleware, each referring to different data fields. [Check out the express-validator documentation for all validation chain methods](https://express-validator.github.io/docs/api/validation-chain/). Time to see validation chains in action! ### Validate Route Parameters Suppose you want to ensure that the `userId` parameter in `GET /api/v1/users/:userId` is an integer. This is what you may end up writing: ```javascript app.get("/api/v1/users/:userId", param("userId").isInt(), (req, res) => { // business logic... }); ``` While the validation chain defined above is correct, it's not enough, as `express-validator` doesn't report validation errors to users automatically. Why? Because it's better if developers always manually define how to handle invalid data! You can access the result object of data validation in an Express endpoint through the [`validationResult()`](https://express-validator.github.io/docs/api/validation-result/#validationresult) function. 
Import it along with the validation middleware function from `express-validator`: ```javascript const { check, body, // ... validationResult, } = require("express-validator"); ``` Then, you can define the validated route handler for `GET /api/v1/users/:userId` as below: ```javascript app.get("/api/v1/users/:userId", param("userId").isInt(), (req, res) => { // extract the data validation result const result = validationResult(req); if (result.isEmpty()) { const userId = req.params.userId; // find a user by id const user = users.find((user) => user.id == userId); if (!user) { res.status(404).send("User not found!"); } else { res.send({ user: user, }); } } else { res.status(400).send({ errors: result.array() }); } }); ``` When `userId` is not an integer, the endpoint will return a `400` response with validation error messages. In production, you should override that response with a generic message to avoid providing useful information to potential attackers. ### Validate Query Parameters In this case, you want to ensure that the `search` query parameter in the `GET /api/v1/users` endpoint is not empty or blank when it is present. This is the validation chain you need to define: ```javascript query("search").optional().trim().notEmpty(); ``` Note that the order of the method calls in the chain is important. If you switch `trim()` with `notEmpty()`, blank strings will pass the validation rule. `trim()` is a sanitizer method and it affects the values stored in `search`. If you need to access the sanitized data directly in the route handler, you can call the [`matchedData()`](https://express-validator.github.io/docs/api/matched-data) function. 
The validated and sanitized endpoint will now be specified as follows: ```javascript app.get( "/api/v1/users", query("search").optional().trim().notEmpty(), (req, res) => { // extract the data validation result const result = validationResult(req); if (result.isEmpty()) { // select all users by default let filteredUsers = users; // read the matched query data from "req" const data = matchedData(req); const search = data.search; if (search !== undefined) { // filter users by fullName with a case-insensitive search filteredUsers = users.filter((user) => { return user.fullName.toLowerCase().includes(search.toLowerCase()); }); } res.send({ users: filteredUsers, }); } else { res.status(400).send({ errors: result.array() }); } } ); ``` Notice that `req.query.search` will contain the original value of the `search` query parameter, but you want to use its sanitized value. ### Validate Body Data Now, suppose you want the body of `POST /api/v1/users` to follow these rules: - `fullName` must not be empty or blank. - `email` must be a valid email. If there's an invalid email, the validation error message should be “Not a valid e-mail address.” - `age` must be an integer greater than or equal to 18. This is how you can implement the desired data validation: ```javascript app.post( "/api/v1/users", body("fullName").trim().notEmpty(), body("email").isEmail().withMessage("Not a valid e-mail address"), body("age").isInt({ min: 18 }), (req, res) => { // extract the data validation result const result = validationResult(req); if (result.isEmpty()) { // read the matched body data from "req" const newUser = matchedData(req); const maxId = users.reduce( (max, user) => (user.id > max ? user.id : max), 0 ); // add a new user with an auto-incremented id users.push({ id: maxId + 1, ...newUser, }); res.status(201).send(); } else { res.status(400).send({ errors: result.array() }); } } ); ``` There are a couple of critical aspects to emphasize in this example. 
First, a single route handler can have multiple validation middlewares referring to the same data source. Second, the methods offered by the middleware functions not only specify how to validate the data, but also allow you to customize error messages and more. ### Test the Validated Endpoints Launch your Node.js application and verify that the data validation logic works as expected. Otherwise, check out the `chain-validation` branch from the [repository that supports this article](https://github.com/Tonel/nodejs-express-validator-demo): ```bash git checkout chain-validation ``` Install the project dependencies and launch an Express development server: ```bash npm install npm run start ``` Your Node.js application should now be listening locally on port 3000. Open your favorite HTTP client and try to make a `GET` request to `/api/v1/users/edw`: ![Make GET request](https://blog.appsignal.com/images/blog/2024-06/get-request.png) Since `edw` is not a number, you'll get a `400` response. Specifically, the error array generated by `express-validator` contains one or more error objects in the following format: ```javascript { "type": "field", "value": "edw", "msg": "Invalid value", "path": "userId", "location": "params" } ``` You can simulate another validation error by calling the `GET /api/v1/users` endpoint with a blank `search` query parameter: ![Call GET endpoint](https://blog.appsignal.com/images/blog/2024-06/call-get-endpoint.png) Again, trigger a validation error by calling the `POST /api/v1/users` API with an invalid body: ![Validation error](https://blog.appsignal.com/images/blog/2024-06/validation-error.png) If you call the three endpoints with the expected data instead, they will return a successful response as expected. Great, you just learned how to perform data validation in Node.js! All that remains is to explore the equivalent schema-based approach to data validation. 
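One pattern worth noting before moving on: the `validationResult()` boilerplate repeated in every handler above can be factored into a reusable middleware wrapper. The sketch below mimics that idea in plain JavaScript with simulated `req`/`res` objects so it stands alone; it is a conceptual stand-in, not express-validator's internals, and the check functions here are hypothetical replacements for the library's validation chains.

```javascript
// A simplified stand-in for the "run checks, reject on error" middleware pattern.
// Each check is a function (req) => error message string, or null when valid;
// in a real Express app, express-validator's chains play this role.
function validate(checks) {
  return (req, res, next) => {
    const errors = checks.map((check) => check(req)).filter((msg) => msg !== null);
    if (errors.length > 0) {
      return res.status(400).send({ errors });
    }
    next();
  };
}

// Hypothetical check mirroring param("userId").isInt() from the tutorial.
const requireIntUserId = (req) =>
  /^\d+$/.test(req.params.userId) ? null : "userId must be an integer";

const middleware = validate([requireIntUserId]);

// Simulated request/response objects to exercise the middleware.
let statusCode = null;
const fakeRes = {
  status(code) { statusCode = code; return this; },
  send() { return this; },
};
middleware({ params: { userId: "edw" } }, fakeRes, () => {});
console.log(statusCode); // 400
```

With this shape, route handlers only contain business logic, and the rejection behavior lives in one place instead of being copy-pasted into each endpoint.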
## Data Validation in Node.js with Schema Validation Let's now see how to define data validation through schema objects with `express-validator`. ### Understand Schema Validation In `express-validator`, [schemas](https://express-validator.github.io/docs/guides/schema-validation#what-are-schemas) are an object-based way of defining validation and/or sanitization rules on a request. While their syntax differs from validation chains, they offer exactly the same functionality. Under the hood, `express-validator` translates schemas into validation chain functions. Thus, you can choose between one syntax or the other, depending on your preference. The same Express application can have some endpoints validated with chains and others validated with schemas. Schemas are simple JavaScript objects whose keys represent the fields to validate. Schema values contain validation rules in the form of objects. Pass a schema object to the [`checkSchema()`](https://express-validator.github.io/docs/api/check-schema) function to get an Express validation middleware. For example, this is how you can use a schema to ensure that the `name` body parameter contains at least 4 characters when it is present: ```javascript checkSchema( { name: { optional: true, isLength: { options: { min: 4 } }, }, }, ["body"] ); ``` By default, `checkSchema()` behaves like `check()`. To specify which input data sources it should check, you can pass them in an array as the second parameter. In the example above, the validation schema object will only be applied to the body data. Sometimes, you may need to validate a body's inner field. [Check out the documentation to see what `express-validator` offers in field selection](https://express-validator.github.io/docs/guides/field-selection). 
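To build intuition for how a selector such as `addresses.*.postalCode` behaves, here is a simplified pure-JavaScript expansion of a dotted path with a wildcard. It illustrates the selection semantics only; it is not the library's implementation, and the sample `body` object is an assumption for demonstration.

```javascript
// Resolve a dotted path such as "addresses.*.postalCode" against an object,
// expanding "*" across array elements. Purely illustrative of field selection.
function selectFields(obj, path) {
  return path.split(".").reduce(
    (values, segment) =>
      values.flatMap((value) => {
        if (segment === "*") return Array.isArray(value) ? value : [];
        return typeof value === "object" && value !== null && segment in value
          ? [value[segment]]
          : [];
      }),
    [obj]
  );
}

const body = {
  addresses: [{ postalCode: "K1A 0B1" }, { postalCode: "90210" }],
};
console.log(selectFields(body, "addresses.*.postalCode")); // [ 'K1A 0B1', '90210' ]
```

Each value the wildcard expands to would then be run through the validators you attach to that field in the schema.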
### Validate Route, Query, and Body Data With Schemas Here's how you can specify the validation rules shown in the method chaining section above through schema validation: - `GET /api/v1/users/:userId`: ```javascript app.get( "/api/v1/users/:userId", checkSchema( { userId: { isInt: true }, }, ["params"] ), (req, res) => { // extract the data validation result const result = validationResult(req); if (result.isEmpty()) { const userId = req.params.userId; // find a user by id const user = users.find((user) => user.id == userId); if (!user) { res.status(404).send("User not found!"); } else { res.send({ user: user, }); } } else { res.status(400).send({ errors: result.array() }); } } ); ``` - `GET /api/v1/users`: ```javascript app.get( "/api/v1/users", checkSchema( { search: { optional: true, trim: true, notEmpty: true }, }, ["query"] ), (req, res) => { // extract the data validation result const result = validationResult(req); if (result.isEmpty()) { // select all users by default let filteredUsers = users; // read the matched query data from "req" const data = matchedData(req); const search = data.search; if (search !== undefined) { // filter users by fullName with a case-insensitive search filteredUsers = users.filter((user) => { return user.fullName.toLowerCase().includes(search.toLowerCase()); }); } res.send({ users: filteredUsers, }); } else { res.status(400).send({ errors: result.array() }); } } ); ``` Note that the order of the attributes in the schema object matters. Placing the `trim` attribute after `notEmpty` will result in a different validation rule. 
- `POST /api/v1/users`: ```javascript app.post( "/api/v1/users", checkSchema( { fullName: { trim: true, notEmpty: true, }, email: { errorMessage: "Not a valid e-mail address", isEmail: true, }, age: { isInt: { options: { min: 18 } }, }, }, ["body"] ), (req, res) => { // extract the data validation result const result = validationResult(req); if (result.isEmpty()) { // read the body data from the matched data const newUser = matchedData(req); const maxId = users.reduce( (max, user) => (user.id > max ? user.id : max), 0 ); // add a new user with an auto-incremented id users.push({ id: maxId + 1, ...newUser, }); res.status(201).send(); } else { res.status(400).send({ errors: result.array() }); } } ); ``` As you can see, not much changes from chain validation. The two approaches are completely equivalent. ### Test the Validated Endpoints Start your Express application and prepare to test your schema-based data validation. Otherwise, check out the `schema-validation` branch from the [repository supporting this article](https://github.com/Tonel/nodejs-express-validator-demo): ```bash git checkout schema-validation ``` Install the project's dependencies, start the local server, and repeat the API calls made in the method chaining validation section. You should get the exact same results! ## Wrapping Up: Protect Your Node.js Server From Unexpected Incoming Data In this post, we defined Node.js data validation and explored its benefits for a backend application. You now know: - The definition of data validation - Why you should check the data received by an endpoint before feeding it to business logic - How to implement data validation in Node.js with two different approaches: method chaining and schema validation Thanks for reading! **P.S. If you liked this post, [subscribe to our JavaScript Sorcery list](https://blog.appsignal.com/javascript-sorcery) for a monthly deep dive into more magical JavaScript tips and tricks.** **P.P.S. 
If you need an APM for your Node.js app, go and [check out the AppSignal APM for Node.js](https://www.appsignal.com/nodejs).**
antozanini
1,899,197
The Complete Guide to Serverless Apps II - Functions and Apps
In Part I we took a close look at the term “serverless” as it is used in cloud computing. We spoke...
27,854
2024-07-03T15:00:00
https://www.fermyon.com/serverless-guide/serverless-functions-and-serverless-apps
serverless, cloud, webassembly, cloudcomputing
In [Part I](https://dev.to/fermyon/the-complete-guide-to-serverless-apps-i-introduction-1ga4) we took a close look at the term “serverless” as it is used in cloud computing. We spoke about a serverless application - where you do not have to write a software server. Instead, you focus only on writing a request handler. Let’s now spend some time talking about this programming model: easily creating serverless functions and serverless applications. Your program is started when a request is received. The request object is passed into a function in your code. That function is expected to run to completion, possibly handing back a response. Once the function completes, the program exits. There are three characteristics of this sort of program: * It is short running, often running for only milliseconds. * It is triggered by an event or a request. * It is responsible merely for dealing with that request (often returning a response). ## Hello World 👋 For the sake of clarity, let’s look at a simple example of this kind of program. We will use the world’s most popular programming language, **JavaScript**, for this example. But the pattern is similar across languages. Also, we will write an example of a serverless function that handles an HTTP request. ```javascript const encoder = new TextEncoder() // Declare a function that handles a request (in this case, an HTTP request) export async function handleRequest(request) { // Send back an object that describes a response (in this case, an HTTP response) return { status: 200, body: encoder.encode("I'm a Serverless Function").buffer } } ``` There are three things to note about the example above: 1. We do not set up a server of any sort (we don't even import any libraries). 2. There is a function called `handleRequest()` that takes a `request` object. This function is called when an inbound HTTP request occurs. 3. The function returns a response. 
In this case, it's an HTTP response with a `200` response code (which means no error occurred) and the content that will be displayed in the web browser. Here is the same example in Python: ```python # Imports added for completeness, assuming the Spin Python SDK from spin_sdk import http from spin_sdk.http import Request, Response class IncomingHandler(http.IncomingHandler): def handle_request(self, request: Request) -> Response: return Response( 200, {"content-type": "text/plain"}, bytes("I'm a Serverless Function written in Python", "utf-8") ) ``` <img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExdmp2dzUwdG5nNXd5Y2hrYnZrazdiY2o4eWZqZHVmMXF4Z21iMWJjcyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/SZOojjqpIrY9AHz1VQ/giphy.gif"> We don't start a server, map ports, handle interrupts, declare SSL/TLS certificates, or anything like that. The serverless app platform does all that stuff on our behalf outside of our code. When a request comes in, this app is started, the `handleRequest` function is called, and then the app exits. And how fast is this? Different Serverless platforms have different speeds. With [Spin](https://github.com/fermyon/spin), the handler can be started in under a millisecond. That is why there is no reason to run a server. If we can start this fast, it's much more efficient (and much cheaper) to **not** be running idle servers. The above is an example of a serverless function. And when we package that up and send it off to a server, we have built a simple serverless app. ## More Definitions 😅 _"Wait, I'm confused! If this is a Serverless function, what are Functions as a Service? How does it differ from an Edge Function?"_ <img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExb2o2M2k0YXZldDh4YTkzZ2c3djd6dnBhaDUzY3QzY3U1cGZmcnJqNyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/3oKIPDNOFwZ0zi8nrq/giphy.gif"> These are valid questions, so let's clarify the two: #### Functions as a Service (FaaS) When AWS Lambda first hit the scene, cloud mavens were keen on collapsing all cloud service names into “as-a-Service”-isms. 
For example: * core infrastructure services like compute and networking became “Infrastructure-as-a-Service (IaaS)”. * serverless databases were called “DB-as-a-Service (DBaaS)” and so on. In such an environment, it is no surprise that the first wave of serverless app platforms was given the unattractive moniker “Function-as-a-Service”. Personally, I prefer using "Serverless functions" over FaaS, and here's why: * The term FaaS is opaque. If you don’t know what it means, there are not many clues embedded in the term itself. As with all the “aaS”es, one finds oneself mentally listing words that start with F for clarification. * The term itself refers to the service that runs. So what do you call an application that runs in a FaaS? A Function-as-a-Service Function? A Function-as-a-Service App? That just sounds confusing. * Lastly, in English "FaaS" can be verbally hard to distinguish from "PaaS". In contrast, an app run inside of a PaaS is usually called a server or a microservice. Thus, most people in the field refer to apps that run in a FaaS as serverless apps or serverless functions. > The most famous PaaS, [Heroku](https://www.heroku.com/), does not refer to itself as a PaaS, and for the same reason, we don’t use FaaS. Much of their documentation uses the term “cloud application platform.” #### Cloud Functions and Edge Functions The terms cloud functions and edge functions occasionally arise when talking about serverless applications. For example, [Netlify](https://docs.netlify.com/functions/overview/) uses these terms in its documentation. The distinction between these "cloud" and "edge" [serverless functions](definition-of-serverless-functions) does not concern the functions themselves but rather where the specific function is being executed. * A cloud function executes "in the cloud," which usually means at one of the main hyperscalers such as AWS, Azure, or Google Cloud. * An edge function executes on the edge, a concept we will cover more later. 
In a nutshell, "edge" refers to the proximity between the end user and the function which they are calling. The term edge also refers to the proximity of the function being executed and the data being processed. The ultimate goal is to obtain the most efficient round-trip between the user, the function and any data being processed. Providers like Vercel and Netlify must make this distinction because the APIs they provide for the functions that run in the cloud are different from the APIs they provide for the functions that they run in edge providers like CloudFlare. This is an implementation-specific API difference that bubbles up to the developer. Our view is that “edge functions” and “cloud functions” are varieties of serverless apps. Keep in mind, when we talk about cloud and edge computing later on that the term **edge** is niche and only relates to a subset of service providers. ## Conclusion 😌 Thanks for staying with us thus far! In this post, we saw what the code for a Serverless function looks like and what happens when it is triggered by an event. In the upcoming posts we'll deep-dive in the characteristics of a Serverless App. We'll look at execution time, CPU & Memory, Statelessness, and more. <img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExcGY4bXAzeGwzOTZlYjdnczY2cnR5bmFobmg4bzgwaTNnb3dudTJhaSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/Lgd5dyd6T0myHCsi2X/giphy.gif"> Let us know if you have come across any other terminology around Serverless Apps, and we'll try and compare and contrast it for you.
sohan26
1,910,070
VerifyVault Beta v0.3 Released
🔒✨ Exciting News! VerifyVault Beta v0.3 has RELEASED!✨🔒 🔑 Key Updates: Password Reminders to keep...
0
2024-07-03T14:59:34
https://dev.to/verifyvault/verifyvault-beta-v03-released-37dc
opensource, security, cybersecurity, github
🔒✨ Exciting News! VerifyVault Beta v0.3 has RELEASED!✨🔒 🔑 **Key Updates:** - **Password Reminders** to keep you on track 📝 - **Export Secret Keys** securely 🔐 - **Automatic Password Lock** for added security 🛡️ - **Export accounts via TXT Encrypted** for easy backups 📦 - Introducing **Clear All and Restore All buttons to the Recycle Bin** ♻️ - **Revamped automatic backup system** — never worry about losing data again! 🔄 _Take control of your digital security with VerifyVault._ 🚀 📂 **Repository:** https://github.com/VerifyVault ⬇️ **Direct Download:** https://github.com/VerifyVault/VerifyVault/releases/tag/Beta-v0.3
verifyvault
1,910,019
TypeScript 5.5: Exciting New Features
TypeScript has become increasingly popular among developers as a more structured alternative to...
0
2024-07-03T14:53:19
https://dev.to/enodi/typescript-55-new-features-27ef
typescript, frontend, backend, webdev
TypeScript has become increasingly popular among developers as a more structured alternative to JavaScript. It helps define types within your code, making it easier to catch errors like typos early on. Recently, TypeScript rolled out version 5.5, introducing several new features. In this article, we’ll take a closer look at four (4) of these new features and explain them in a simple, easy-to-understand manner. ### 1. **Inferred Type Predicates** One of the key improvements in TypeScript 5.5 is better type inference, especially with arrays and filtering. What does this mean? As your code progresses, the type of a variable can change. With Inferred Type Predicates, TypeScript now adjusts the type definitions accordingly. Let's look at an example: ``` const namesAndAges = ["Elijah", "Sophia", "Liam", "Isabella", "Mason", 23, 24]; const ages = namesAndAges.filter(age => typeof age === 'number'); ages.forEach((age) => console.log(age + 1)); ``` In TypeScript 5.0 (previous versions): ![Inferred Type in TypeScript 5.0](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zd6muhn4evx6nhp0904n.png) In this example, `namesAndAges` is an array of type `(string|number)[]`. We're filtering out non-numeric values, leaving us with an array of numbers (ages). However, TypeScript 5.0 still sees the array `ages` as `(string|number)[]`, causing an error when trying to add 1 to `age` due to potential string types. Previously, we might have needed to explicitly cast `age` to `number` like this: ``` ages.forEach((age) => console.log(age as number + 1)); ``` **TypeScript 5.5** simplifies this process: With TypeScript 5.5, TypeScript handles this type of inference more accurately, as shown below: ![Inferred Type in TypeScript 5.5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/214h60u4owl3qil1jsuy.png) This improvement makes type inference a lot better, making TypeScript more intuitive and effective in catching potential errors early. ### 2. 
Control Flow Narrowing for Constant Indexed Accesses Another significant enhancement in TypeScript 5.5 is better type narrowing for accessing object properties. **What does this mean?** Let's break it down with an example: ``` type ObjType = Record<string, number | string>; function downCase(obj: ObjType, key: string) { if (typeof obj[key] === "string") { console.log(obj[key].toLowerCase()); } } ``` In TypeScript 5.0(Previous Versions): ![Control Flow Narrowing in TypeScript 5.0](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9p40ngjv035x8px3eutd.png) In this example, the `downCase` function checks if the property's value is of type string, and if so, converts it to lowercase. However, the previous TypeScript versions can’t be sure if `obj[key]` is a number or a string, leading to an error when trying to use the `toLowerCase` method. To avoid this in previous versions, we could do something like this: ``` type ObjType = Record<string, number | string>; function downCase(obj: ObjType, key: string) { const value = obj[key]; if (typeof value === "string") { console.log(value.toLowerCase()); } } ``` Here, an intermediary variable `value` is used to help TypeScript understand the type. **With TypeScript 5.5**, this workaround is no longer needed. The new version automatically narrows the type correctly based on the condition `typeof obj[key] === "string"` as shown below: ![Control Flow Narrowing in TypeScript 5.5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/srdlrgl1r3zegfgnfhod.png) Because Typescript is aware that the property’s value is of type `string`, all string methods are available for use as shown below ![Control Flow Narrowing in TypeScript 5.5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ed4iexq05u75rbbo7m74.png) ### 3. Regular Expression Syntax Checking TypeScript 5.5 also brings a useful feature for developers working with regular expressions: syntax checking on regular expressions. 
**What does this mean?** Before this update, TypeScript would skip over regular expressions and not validate their syntax, potentially allowing errors to slip through unnoticed. With TypeScript 5.5, basic syntax checks are now performed on regular expressions. Let’s explain this with an example: In TypeScript 5.0(Previous Versions): ``` const regex = /hello(world/; ``` In the example above, the regular expression `/hello(world/` has an unclosed parenthesis `(`. Regular expressions require that every opening parenthesis has a corresponding closing parenthesis. In previous versions of TypeScript, this mistake would go unnoticed, and no error would be flagged. **No error was flagged in Typescript v5.0** ![Regular Expression in TypeScript 5.0](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rdjpkmzue7yb98spon82.png) With TypeScript 5.5, this regular expression will be flagged as an error because the syntax is incorrect due to the unmatched parenthesis. This improvement helps catch common mistakes in regular expressions early, making your code more reliable. An error was flagged in TypeScript 5.5 ![Regular Expression in Syntax Checking TypeScript 5.5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ex7c93u2w9973wxfefi.png) The fixed code in TypeScript 5.5 ![Regular Expression in TypeScript 5.5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hggkmn4k4941at7t2hxh.png) **Note**: TypeScript’s regular expression support is limited to regular expression literals. If you use `new RegExp` with a string literal, TypeScript will not check the provided string's syntax. You can read more [here](https://devblogs.microsoft.com/typescript/announcing-typescript-5-5/#regular-expression-syntax-checking) ### 4. Support for New ECMAScript Set Methods TypeScript 5.5 has added support for the new methods introduced to the `Set` object in JavaScript. 
These methods include `union`, `intersection`, `difference`, `symmetricDifference`, and more, expanding how you can work with Sets. **What does this mean?** These new `Set` methods allow for more powerful and concise operations on sets, such as combining or finding common elements between them. Let’s break this down with an example: In previous TypeScript versions: ``` let primaryColors = new Set(["red", "blue", "yellow"]); let secondaryColors = new Set(["green", "orange", "purple"]); // Trying to use the new Set methods would result in an error // The previous versions of TypeScript would not recognize these methods console.log(primaryColors.union(secondaryColors)); // Error: Property 'union' does not exist on type 'Set<string>' console.log(primaryColors.intersection(secondaryColors)); // Error: Property 'intersection' does not exist on type 'Set<string>' ``` ![No Support for New ECMAScript Set Methods in TypeScript 5.0](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0s2wv9pu3ip9y8vrtxbc.png) Before TypeScript 5.5, attempting to use these methods would result in errors because TypeScript didn’t recognize them. This was because these methods were newly introduced in the latest ECMAScript specification and weren’t yet supported by TypeScript. **With TypeScript 5.5**, these new methods are now fully supported. You can use them directly without TypeScript flagging any errors. 
In TypeScript 5.5: ``` let primaryColors = new Set(["red", "blue", "yellow"]); let secondaryColors = new Set(["green", "orange", "purple"]); console.log(primaryColors.union(secondaryColors)); // Combines both sets console.log(primaryColors.intersection(secondaryColors)); // Finds common elements ``` ![Support for New ECMAScript Set Methods in TypeScript 5.5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0yap3cv7a9qmmzlb3x4i.png) TypeScript 5.5 brings significant enhancements like improved type inference, support for new ECMAScript Set methods, and better regular expression syntax checking. Part 2 of this article will cover additional features introduced in version 5.5. In case you're wondering, I used TypeScript Playground to test and demonstrate these updates. Explore TypeScript 5.5 features yourself on the [TypeScript Playground](https://www.typescriptlang.org/play). If you're excited about TypeScript 5.5 like I am, drop a like or comment below to share your thoughts on these awesome new features! Stay tuned for more updates in Part 2! Until then, happy coding!!! :)
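As a runnable aside to the Set-methods section above: on runtimes that don't ship these methods natively yet, their core behavior can be approximated in plain JavaScript. This is a rough sketch of the semantics only, not the spec-compliant TC39 algorithm:

```javascript
// Rough approximations of the new Set methods' behavior (illustrative only).
const union = (a, b) => new Set([...a, ...b]);
const intersection = (a, b) => new Set([...a].filter((x) => b.has(x)));
const difference = (a, b) => new Set([...a].filter((x) => !b.has(x)));

const primaryColors = new Set(["red", "blue", "yellow"]);
const secondaryColors = new Set(["green", "blue"]);

console.log([...union(primaryColors, secondaryColors)]); // [ 'red', 'blue', 'yellow', 'green' ]
console.log([...intersection(primaryColors, secondaryColors)]); // [ 'blue' ]
console.log([...difference(primaryColors, secondaryColors)]); // [ 'red', 'yellow' ]
```

On Node.js 22+ and recent browsers, the native `Set.prototype.union` and friends can be used directly instead.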
enodi
1,910,321
Flood Escape launched on Arcadia
Our Flood Escape game was added to the Arcadia platform with all the levels already unlocked ,...
0
2024-07-03T14:53:11
https://enclavegames.com/blog/flood-escape-arcadia/
arcadia, floodescape, javascript, gamedev
--- title: Flood Escape launched on Arcadia published: true date: 2024-07-03 14:51:56 UTC tags: arcadia,floodescape,javascript,gamedev canonical_url: https://enclavegames.com/blog/flood-escape-arcadia/ cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/922db80jbpg2tg4ic7px.png --- Our [Flood Escape](https://enclavegames.com/games/flood-escape/) game was [added to the Arcadia platform](https://arcadia.fun/games/dab0c501-a915-4ac1-9227-8d585e2b173b/) with **all the levels already unlocked** , including the newly implemented [Badlucky one](https://medium.com/js13kgames/badlucky-escaped-0bb042146106). Given my [recent move to join OP Games](https://enclavegames.com/blog/op-guild/) I’m beginning to be more involved with various in-house projects - that include both the **OP Guild** and the **Arcadia** platform, which you might’ve already noticed with the [OP Guild × Arcadia collaboration](https://gamedevjs.com/competitions/op-guild-x-arcadia-upload-your-game-and-win-cash-prizes/). As part of that you can win **500 USDC** (from the total 1500 USDC pool) if you submit your game to Arcadia - the [deadline](https://gamedevjs.com/competitions/op-guild-x-arcadia-deadline-extended/) is July 15th. Doing so is as straightforward as possible, you don’t even need a wallet anymore as registering through email was enabled recently (with the help from Fortmatic). When you’re in, go to your [Game Creator page](https://arcadia.fun/game-creator/) and click **Create Game**. You can choose to host it yourself (we’re using [GitHub Pages](https://end3r.com/blog/2014/02/host-your-html5-games-on-github-pages/) for all our games), or upload it to Arcadia. ![Arcadia Game Creator](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pbxwi3gfgedv9lr3nojl.png) Enter all the basic info: title, url, description, screenshots. Hit **Submit** and wait for acceptance - that’s it! After your game is accepted you’re already in and the game is available on the platform. 
If you’d like to have it featured, just reach out and we’ll make sure it happens. Next step would be to add the **Tournament Mode** (like [Puckit! did](https://gamedevjs.com/competitions/op-guild-x-arcadia-deadline-extended/#puckit)), so folks can enjoy competitive play. It’s entirely optional, but then your game might end up being played during **Game Nights**, which would bring you extra revenue. I’m definitely planning to add that to [Flood Escape](https://arcadia.fun/games/dab0c501-a915-4ac1-9227-8d585e2b173b/) soon, so expect a blog post in the coming weeks with the details!
end3r
1,910,320
How to Promote Your Self-Made Laravel Package
So, you’ve poured your heart and soul into a shiny new Laravel package, and now it’s time to show it...
0
2024-07-03T14:50:05
https://dev.to/makowskid/how-to-promote-your-self-made-laravel-package-1ca9
webdev, laravel, php, programming
So, you’ve poured your heart and soul into a shiny new Laravel package, and now it’s time to show it off to the world. But how? Well, here are some top-notch strategies that’ll get your package the attention it deserves. ## Document Like a Pro Start by crafting a complete `README.md` file. Include every detail about your package, from installation instructions to usage examples. Don't forget to tag your GitHub repository with at least 20 relevant tags to ensure Packagist can link and distribute it properly. ## Submit to Laravel Package Directories - [LaraPackages](https://larapackages.com) - [Made with Laravel](https://madewithlaravel.com) - [Laravel Package](https://laravel-package.com) ## Leverage Laravel News Add your package to the [Laravel News Links section](https://laravel-news.com/links). This is a great way to reach a dedicated Laravel audience. ## Reach Out to Laravel Daily Send a submission to [Laravel Daily](https://laraveldaily.com/packages). They often feature new and exciting packages. ## Community Forums - Share on the [Laravel.io community forum](https://laravel.io) - Post on the [Laracasts Discuss forum](https://laracasts.com/discuss) ## Feature in Awesome Lists Submit your package to [Awesome Laravel](https://github.com/chiraggude/awesome-laravel) and other curated lists. This is like getting a Michelin star for your code. ## Discord and Slack - Join the [Laravel Discord community](https://discord.com/invite/laravel) and share your package. - Post on the #packages channel in [Larachat Slack](https://larachat.slack.com). ## Reddit Crossposting If your package has a specific niche, like an API wrapper for OpenAI, crosspost on relevant subreddits such as r/OpenAI and r/Laravel. ## Create a YouTube Tutorial Make a short video demonstrating your package. Show people how it solves a common problem. Visual aids can be incredibly compelling. ## Write About It Post on [Dev.to](https://dev.to), [Medium](https://medium.com), and your own blog. 
Articles can provide more in-depth insights into your package. ## Facebook Groups Share your package in Laravel user groups on Facebook. These communities are often very active and engaged. ## Laravel Tricks Post about your package on [Laravel Tricks](https://laravel-tricks.com/tricks). It’s another platform frequented by Laravel developers. ## Social Media Blitz Tweet, share, and shout about your package on all your social media platforms. ## Comment Marketing Google “best Laravel packages” and add thoughtful comments to relevant articles and blogs. Mention your package where appropriate. By following these steps, you'll not only boost visibility but also establish your package as a valuable resource in the Laravel community. Happy coding, and may your package become the next big thing! --- If you’ve made it this far, I invite you to check out the Laravel package for [SharpAPI.com](https://SharpAPI.com/) - the AI-powered workflow automation API. Whether it’s E-Commerce, Marketing, Content Management, HR Tech, or Travel, the [SharpAPI.com Laravel SDK Client](https://github.com/sharpapi/sharpapi-laravel-client) has got your back. Created by yours truly, it’s like having a tireless assistant who never drinks your coffee. Try it and watch your workflows practically run themselves!
makowskid
1,910,317
Essential Concepts | MongoDB | Part 1
Basics SQL vs NoSQL,Documents and Collections, Data Types SQL vs NoSQL SQL...
0
2024-07-03T14:48:34
https://dev.to/aakash_kumar/essential-concepts-mongodb-part-1-ca9
webdev, node, mongodb, database
## Basics ### SQL vs NoSQL, Documents and Collections, Data Types **SQL vs NoSQL** 1. SQL (Structured Query Language): Traditional relational databases like MySQL, PostgreSQL. They use tables to store data, and data is structured in rows and columns. Example: A table Users with columns id, name, email. 2. NoSQL (Not Only SQL): More flexible data models like document databases (MongoDB), key-value stores, wide-column stores, etc. Example: A collection Users where each user is a JSON-like document. ### Documents and Collections **Document:** A record in a NoSQL database, typically stored in a JSON-like format. **Example:** ``` { "id": 1, "name": "John Doe", "email": "john.doe@example.com" } ``` **Collection:** A group of documents, similar to a table in SQL. **Example:** A collection Users containing documents like the one above. ### Data Types **String:** "John Doe" **Number:** 25 **Boolean:** true **Array:** ["reading", "traveling"] **Object:** {"street": "123 Main St", "city": "Anytown"} ### Methods **insert()** **Example:** Insert a new user into the Users collection. ``` db.Users.insert({ name: "Alice", email: "alice@example.com" }); ``` **find()** **Example:** Find all users in the Users collection. ``` db.Users.find(); ``` **update()** **Example:** Update the email of a user with name "Alice". ``` db.Users.update({ name: "Alice" }, { $set: { email: "newalice@example.com" } }); ``` **deleteOne()** **Example:** Delete a user with name "Alice". ``` db.Users.deleteOne({ name: "Alice" }); ``` **bulkWrite()** **Example:** Perform multiple operations in a single call. ``` db.Users.bulkWrite([ { insertOne: { document: { name: "Bob", email: "bob@example.com" } } }, { updateOne: { filter: { name: "John Doe" }, update: { $set: { email: "newjohn@example.com" } } } }, { deleteOne: { filter: { name: "Alice" } } } ]); ``` ### Comparison Operators **$eq (Equal To)** **Example:** Find users with name "John Doe". 
``` db.Users.find({ name: { $eq: "John Doe" } }); ``` **$gt (Greater Than)** **Example:** Find users older than 25. ``` db.Users.find({ age: { $gt: 25 } }); ``` **$lt (Less Than)** **Example:** Find users younger than 25. ``` db.Users.find({ age: { $lt: 25 } }); ``` **$lte (Less Than or Equal To)** **Example:** Find users aged 25 or younger. ``` db.Users.find({ age: { $lte: 25 } }); ``` **$gte (Greater Than or Equal To)** **Example:** Find users aged 25 or older. ``` db.Users.find({ age: { $gte: 25 } }); ``` **$ne (Not Equal To)** **Example:** Find users not named "John Doe". ``` db.Users.find({ name: { $ne: "John Doe" } }); ``` ### Logical Operators **$and** **Example:** Find users named "John Doe" who are older than 25. ``` db.Users.find({ $and: [ { name: "John Doe" }, { age: { $gt: 25 } } ] }); ``` **$or** **Example:** Find users named "John Doe" or younger than 25. ``` db.Users.find({ $or: [ { name: "John Doe" }, { age: { $lt: 25 } } ] }); ``` **$not** **Example:** Find users not named "John Doe". ``` db.Users.find({ name: { $not: { $eq: "John Doe" } } }); ``` **$nor** **Example:** Find users neither named "John Doe" nor older than 25. ``` db.Users.find({ $nor: [ { name: "John Doe" }, { age: { $gt: 25 } } ] }); ``` ### Array Operators **$in** **Example:** Find users whose names are either "John Doe" or "Alice". ``` db.Users.find({ name: { $in: ["John Doe", "Alice"] } }); ``` **$nin** **Example:** Find users whose names are neither "John Doe" nor "Alice". ``` db.Users.find({ name: { $nin: ["John Doe", "Alice"] } }); ``` **$all** **Example:** Find users who have both "reading" and "traveling" in their hobbies. ``` db.Users.find({ hobbies: { $all: ["reading", "traveling"] } }); ``` **$elemMatch** **Example:** Find users who have an address in "New York". ``` db.Users.find({ addresses: { $elemMatch: { city: "New York" } } }); ``` **$size** **Example:** Find users who have exactly 2 hobbies. 
``` db.Users.find({ hobbies: { $size: 2 } }); ``` ### Element Operators **$exists** **Example:** Find users who have an email address. ``` db.Users.find({ email: { $exists: true } }); ``` **$type** **Example:** Find users whose age is a number. ``` db.Users.find({ age: { $type: "number" } }); ``` **$regex** **Example:** Find users whose email ends with "example.com". ``` db.Users.find({ email: { $regex: /example\.com$/ } }); ``` ### Projection Operators **$project** **Example:** Include only the name and email fields. ``` db.Users.find({}, { name: 1, email: 1 }); ``` **Inclusion and exclusion** **Example:** Exclude the age field. ``` db.Users.find({}, { age: 0 }); ``` **$slice** **Example:** Limit the array to the first 3 elements. ``` db.Users.find({}, { hobbies: { $slice: 3 } }); ``` ### Indexes **Single Field** **Example:** Create an index on the email field. ``` db.Users.createIndex({ email: 1 }); ``` **Compound** **Example:** Create a compound index on name and email. ``` db.Users.createIndex({ name: 1, email: 1 }); ``` **Text** **Example:** Create a text index on the description field. ``` db.Users.createIndex({ description: "text" }); ``` These concepts and examples provide an overview of basic MongoDB operations, query operators, projections, and indexes. If you need more details or have specific scenarios to explore, feel free to ask! Happy Coding 🧑‍💻 **Connect with Me 🙋🏻: [LinkedIn](https://www.linkedin.com/in/aakash-kumar-182a11262?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app)**
aakash_kumar
1,910,319
Technical Report: Automating User and Group Creation
Overview As a DevOps Engineer, automating the management of users and groups is crucial for...
0
2024-07-03T14:47:58
https://dev.to/vera_tee_9f6e7a1b6b500c42/technical-report-automating-user-and-group-creation-2g4
**Overview**

As a DevOps Engineer, automating the management of users and groups is crucial for maintaining a streamlined and secure environment. This report details the automation process for creating users and groups using a shell script. The script ensures consistent and efficient management of user accounts and group memberships across multiple servers.

**Objectives**

**Automate User Creation:** Automatically create user accounts with predefined settings.
**Automate Group Creation:** Create user groups and assign users to these groups.
**Ensure Security and Consistency:** Implement secure practices and maintain consistency in user and group configurations.
**Simplify User Management:** Reduce manual effort and errors associated with user management tasks.

**Prerequisites**

**Linux Server:** The script is designed for Linux-based systems.
**Administrative Privileges:** Root or sudo access is required to create users and groups.
**Shell Scripting Environment:** Bash shell is used for scripting.

**Script Details**

**1. User and Group Creation Script**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mghw6beuo90gbaj95f7.jpg)

**2. Script Explanation**

**Function: create_group**
- Checks if a group exists using getent group.
- If the group does not exist, it creates the group using groupadd.
- Logs the creation status.

**Function: create_user**
- Checks if a user exists using id.
- If the user does not exist, it creates the user with useradd, assigns a home directory, group, and default shell.
- Sets the user's password using chpasswd.
- Logs the creation status.

**Groups Array**
- Lists the groups to be created.
- Iterates over the array and calls create_group for each group.

**Users Array**
- Lists the users to be created in the format username:group:password.
- Iterates over the array, splits the string, and calls create_user for each user.

**3. Execution**

**Save Script:** Save the script as create_users_and_groups.sh.
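Based on the function descriptions above, the saved script might look roughly like the following sketch. This is a reconstruction, not the author's exact code: the group names, usernames, and passwords are illustrative placeholders, and a `DRY_RUN` switch is added so the privileged commands can be previewed without root.

```shell
#!/usr/bin/env bash
# Sketch of create_users_and_groups.sh based on the description above.
# DRY_RUN=1 (the default here) prints privileged commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"

run() {
  # Execute a command, or just echo it when in dry-run mode.
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

create_group() {
  local group="$1"
  if getent group "$group" >/dev/null; then
    echo "Group '$group' already exists"
  else
    run groupadd "$group"
    echo "Created group '$group'"
  fi
}

create_user() {
  local user="$1" group="$2" password="$3"
  if id "$user" >/dev/null 2>&1; then
    echo "User '$user' already exists"
  else
    # Create the user with a home directory, primary group, and default shell.
    run useradd -m -g "$group" -s /bin/bash "$user"
    echo "$user:$password" | run chpasswd
    echo "Created user '$user'"
  fi
}

# Placeholder data in the username:group:password format described above.
groups_list=(devops developers qa)
users_list=("alice:devops:ChangeMe123" "bob:developers:ChangeMe123")

for g in "${groups_list[@]}"; do create_group "$g"; done

for entry in "${users_list[@]}"; do
  IFS=':' read -r u g p <<< "$entry"
  create_user "$u" "$g" "$p"
done
```

Run `DRY_RUN=0 sudo ./create_users_and_groups.sh` to actually apply the changes.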
**Make Executable:** Ensure the script is executable:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p2g9we95i7nb7m3xe0kd.jpg)

**Run Script:** Execute the script with root or sudo privileges:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vpkr9u20utprz19hnwc6.jpg)

## Benefits

**Consistency:** Ensures that users and groups are created with the same settings across different systems.
**Efficiency:** Reduces the time and effort required to manually create users and groups.
**Security:** Minimizes human errors that could lead to security vulnerabilities.
**Scalability:** Easily scales to add more users and groups by modifying the arrays in the script.

## Conclusion

Automating the creation of users and groups is an essential task for DevOps Engineers to manage large-scale environments effectively. The provided shell script simplifies this process, ensuring consistent and secure user management. By incorporating this automation, organizations can enhance their operational efficiency and reduce the risk of errors in user and group management.

Thanks to [HNG Internship](https://hng.tech/premium) for the internship opportunity to work and learn from the best.
vera_tee_9f6e7a1b6b500c42
1,910,318
How to Create and Deploy a Custom Theme for VS Code
How to Create and Deploy a Custom Theme for VS Code Creating a custom theme for Visual...
0
2024-07-03T14:46:43
https://github.com/SH20RAJ/shade-vscode-theme
vscode, theme
# How to Create and Deploy a Custom Theme for VS Code Creating a custom theme for Visual Studio Code (VS Code) allows you to personalize your development environment and share your unique aesthetic with the community. This guide will walk you through the steps to create, package, and deploy a custom theme on the VS Code Marketplace. > Try My Created Theme :- [Shade Theme](https://marketplace.visualstudio.com/items?itemName=sh20raj.shade) > [![Image description](https://raw.githubusercontent.com/SH20RAJ/shade-vscode-theme/main/assets/logo.png)](https://marketplace.visualstudio.com/items?itemName=sh20raj.shade) ## Step 1: Set Up Your Development Environment Before you start, make sure you have the following tools installed: - [Node.js](https://nodejs.org/) - [Visual Studio Code](https://code.visualstudio.com/) You will also need to install Yeoman and the Visual Studio Code Extension Generator: ```bash npm install -g yo generator-code ``` ## Step 2: Generate the Theme Extension Use Yeoman to generate a new theme extension: ```bash yo code ``` Follow the prompts: - Select `New Color Theme`. - Enter a name for your theme. - Choose `Dark` or `Light` based on your preference. - Decide whether to start with an existing theme or start fresh. This will generate a new directory with the necessary files for your theme extension. ## Step 3: Customize Your Theme Navigate to the generated directory and open it in VS Code. The main file you'll be working with is `themes/YourThemeName-color-theme.json`. This JSON file defines the colors for various UI components and syntax highlighting. 
Here's a basic example of what the JSON structure looks like: ```json { "name": "YourThemeName", "type": "dark", "colors": { "editor.background": "#1E1E1E", "editor.foreground": "#D4D4D4", "activityBar.background": "#333333" }, "tokenColors": [ { "scope": "comment", "settings": { "foreground": "#6A9955" } }, { "scope": "keyword", "settings": { "foreground": "#569CD6" } } ] } ``` Customize the colors to match your desired theme. You can refer to the [VS Code Theme Color Reference](https://code.visualstudio.com/api/references/theme-color) for a list of customizable properties. > Ad. > {% youtube https://www.youtube.com/watch?v=dLC-iBkrKSc&ab_channel=ShadeTech %} ## Step 4: Test Your Theme To see your changes in action, press `F5` to open a new VS Code window with your theme applied. Make adjustments as needed by editing the `YourThemeName-color-theme.json` file and restarting the window. ## Step 5: Package Your Theme Once you're satisfied with your theme, it's time to package it for deployment. First, install the `vsce` tool, which is used to package and publish VS Code extensions: ```bash npm install -g vsce ``` In the root of your theme project, run the following command to create a `.vsix` file: ```bash vsce package ``` This command generates a `.vsix` file that you can use to install the theme locally or publish it to the Marketplace. ## Step 6: Publish Your Theme To publish your theme, you need to create a publisher account on the [Visual Studio Code Marketplace](https://marketplace.visualstudio.com/manage/publishers). ### Create a Publisher 1. Visit the [Publisher Management page](https://marketplace.visualstudio.com/manage/publishers). 2. Click on `New Publisher` and follow the instructions to create your publisher profile. ### Generate a Personal Access Token 1. Go to the [Azure DevOps page](https://dev.azure.com/). 2. Click on your profile picture and select `Security`. 3. Under `Personal access tokens`, click `New Token`. 4. 
Name your token, set the expiration date, and select `All accessible organizations`. 5. Click `Create` and copy the generated token. ### Publish the Theme Use the `vsce` tool to publish your theme. Run the following command, replacing `your-publisher-name` with your publisher name: ```bash vsce publish --pat <your_personal_access_token> ``` You can also automate this step by saving your publisher name and personal access token in the `package.json`: ```json "publisher": "your-publisher-name" ``` Then, run: ```bash vsce publish ``` Your theme should now be available on the [Visual Studio Code Marketplace](https://marketplace.visualstudio.com/vscode). ## Step 7: Update and Maintain Your Theme To update your theme, make changes to the `YourThemeName-color-theme.json` file, increment the version number in the `package.json` file, and run the `vsce publish` command again. --- {% github https://github.com/SH20RAJ/shade-vscode-theme %} By following these steps, you can create, customize, and share your very own VS Code theme. Happy theming!
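As a reference for Steps 6 and 7, the `publisher` field and the version number both live in the extension's `package.json`, alongside the theme registration. A minimal manifest might look like this (the name, label, and version values are illustrative):

```json
{
  "name": "shade",
  "displayName": "Shade Theme",
  "publisher": "your-publisher-name",
  "version": "1.0.0",
  "engines": { "vscode": "^1.80.0" },
  "categories": ["Themes"],
  "contributes": {
    "themes": [
      {
        "label": "Shade Dark",
        "uiTheme": "vs-dark",
        "path": "./themes/YourThemeName-color-theme.json"
      }
    ]
  }
}
```

Bump `version` before each `vsce publish`, or publishing will be rejected as a duplicate.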
sh20raj
1,910,316
Exploring and Enhancing Scrape Any Website (SAW)🪚: A Detailed Bug Report
Over a two-hour session, I meticulously explored the app, scrutinized various features, and...
0
2024-07-03T14:45:36
https://dev.to/jessica_aki_a64c068f9f828/exploring-and-enhancing-scrape-any-website-saw-a-detailed-bug-report-h4e
qa, testing
Over a two-hour session, I meticulously explored the app, scrutinized various features, and documented a series of issues. This deep dive into testing unveiled both the technical intricacies and the subtle nuances of user experience and design. Let’s dive into what I discovered and how we can make SAW even better!

### Explore the SAW App 🚀

If you’re interested in exploring the [Scrape Any Website](https://apps.microsoft.com/detail/9mzxn37vw0s2) app yourself, you can access it [here](https://apps.microsoft.com/detail/9mzxn37vw0s2). Your feedback and contributions are highly valuable and can help us improve the app further. Dive in and start your web scraping adventure with [Scrape Any Website](https://scrapeanyweb.site/)!

### Testing Approaches 🌟

#### Functional Testing 🛠️

For functional testing, I systematically navigated through each feature of the SAW app to ensure they worked as intended. This involved creating and editing scrape job names and URLs, testing the functionality of buttons and menus, and verifying the accuracy of data scraped. Each function was assessed to ensure it met the expected outcomes without errors.

#### Usability Testing 🖱️

Usability testing focused on the overall user experience. I evaluated the ease of use, clarity of instructions, and intuitiveness of the interface. Issues such as unreadable text, lack of prompts, and navigation difficulties were identified. The goal was to ensure that users can efficiently and effectively use the app without confusion or frustration.

#### Performance Testing 🚀

Performance testing was conducted to measure the app’s responsiveness and stability under various conditions. I tested the app’s behavior when scraping large numbers of URLs, observing load times, system resource usage, and the app’s ability to handle high-demand operations. Notable performance issues included slow loading times and Chromedriver crashes.
#### Security Testing 🔒

Security testing involved evaluating the app’s ability to bypass common web security measures, such as Cloudflare protection. This was done to ensure that the app can handle scenarios where websites have implemented security protocols to prevent scraping. [Shown Here](https://drive.google.com/file/d/1vL_lS79oPQ3C-h42-q0xX8qI0_fP3vsg/view?usp=drive_link)

### Testing Results 🚀

During the testing session, I uncovered several notable issues that affect the usability and functionality of the SAW app:

- **Unreadable Text Due to White Background**: A major issue was the unreadable text due to a white font on a white background when adding or editing scrape job names and URLs in the sidebar.
- **No Prompt to Add URL in Sidebar**: The absence of a prompt to add URLs in the sidebar led to confusion, as users were unclear on how to proceed.
- **Inability to Edit URLs and Scrape Job Names**: The inability to edit URLs and scrape job names on the homepage posed significant problems, requiring users to delete and re-enter information.
- **Browser Setting Change Issues**: Changing browser settings without proper prompts added to the confusion, as users were unaware of saved changes.
- **Default Folder Change Prompting for a File**: When attempting to change the default folder for file storage, the app incorrectly prompted for a file instead of a folder.
- **Handling 30X Errors**: The app's inability to handle 30X errors (such as redirects) left users without feedback when such errors occurred.
- **Chromedriver Crashes and Manual Closure of Drivers**: The system became overwhelmed when scraping large numbers of URLs, leading to crashes and requiring manual closure of each terminal and driver.

This one scared me, honestly. I set the app to use Chromedriver on a site with 300+ links just to test it out, and it started throwing terminals and opening small Chrome environments. At first it was OK, just 2 or 3 at a time. Then it opened 9 at the same time 💀. That's not just 9 Chrome environments; that's 9 Chrome environments and 9 terminals. I stopped it because it was disrupting my system, and I had to manually close the Chrome windows until I figured out I just needed to close the terminals.

### Suggested Improvements 💡

Based on the test results, here are my suggestions for improving the SAW app:

- **Optimize Text Display**: Ensure text is visible by using contrasting colors.
- **Add Prompts for Actions**: Include clear prompts for adding URLs and other actions. Users need these prompts for a smooth ride while using the app.
- **Enable Editing of Inputs**: Allow users to edit URLs and scrape job names directly. The scrape job name can be changed from the inside navbar, but the URLs can't be changed at all, so the user is forced to delete and rewrite instead of simply editing.
- **Improve Browser Setting Feedback**: Provide clear feedback when browser settings are changed. I noticed the update is only applied when the app is restarted.
- **Fix Folder Prompts**: Correct the prompt to ask for a folder instead of a file. If the user can't change the folder at all, it's better that the button isn't present; otherwise it should be a flexible and easy change for the user.
- **Handle Redirects Gracefully**: Implement handling for 30X errors and provide user feedback. The app gives feedback on 40X and 50X errors but not on 301 errors.
- **Enhance Performance**: Optimize Chromedriver usage to prevent crashes and automate driver closure. As much as finishing the scrape quickly is important, overloading the system by opening too many Chromedrivers won't help.

### Detailed Bug Report 📄

For a comprehensive view of all the issues identified during the testing session, including steps to reproduce and suggested fixes, please refer to the detailed bug report [here](https://docs.google.com/spreadsheets/d/1MJ6ao6_VGl0PIFvpJOlLC-agaR3AQ4QzkLasbYdP7t0/edit?usp=sharing).
This report provides an in-depth analysis of each issue and serves as a valuable resource for the development team to address and resolve the identified problems. ### Final Thoughts 🌐 Testing the SAW app was a rewarding experience that highlighted the importance of detailed testing and thorough documentation. By addressing these key issues, we can significantly improve the user experience and functionality of the app. I’m excited to see how SAW evolves and look forward to contributing to its ongoing improvement.✨
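The "Handle Redirects Gracefully" suggestion could start with something as small as classifying status codes before deciding what feedback to surface. The sketch below is illustrative only, not SAW's actual code:

```javascript
// Map an HTTP status code to a user-facing category so that 30X responses
// produce feedback instead of silence. Sketch only; SAW's internals are unknown.
function classifyStatus(code) {
  if (code >= 200 && code < 300) return "ok";
  if (code >= 300 && code < 400) return "redirect";     // e.g. 301, 302: follow or report
  if (code >= 400 && code < 500) return "client_error"; // SAW already reports these
  if (code >= 500 && code < 600) return "server_error"; // SAW already reports these
  return "unknown";
}

console.log(classifyStatus(301)); // → "redirect"
```

With a classifier like this, a scrape job can log "redirected to <new URL>" for 30X responses rather than failing silently.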
jessica_aki_a64c068f9f828
1,910,314
Hire PHP Developer: Unlocking the Full Potential of Your Web Development
In the rapidly evolving digital landscape, having a robust online presence is not just an advantage...
0
2024-07-03T14:44:27
https://dev.to/brucewilliam6004/hire-php-developer-unlocking-the-full-potential-of-your-web-development-34hn
webdev, programming, devops
In the rapidly evolving digital landscape, having a robust online presence is not just an advantage but a necessity. Businesses today need dynamic, scalable, and secure web solutions to stay competitive. This is where PHP developers come into play. [Hiring a PHP developer](https://einnovention.us/hire-php-developers) can transform your web projects, bringing versatility, efficiency, and innovation.

## Why Hire a PHP Developer?

PHP, or Hypertext Preprocessor, is a powerful scripting language widely used for web development. It is particularly favored for its ability to create dynamic and interactive web pages. Here are several compelling reasons to hire a PHP developer:

**1. Versatility and Compatibility:** PHP is compatible with various platforms, including Windows, Linux, and macOS, and supports databases such as MySQL, PostgreSQL, and Oracle.

**2. Open Source and Cost-Effective:** As an open-source language, PHP reduces development costs while offering extensive support through its large community of developers.

**3. Scalability and Flexibility:** PHP supports many frameworks, such as Laravel, Symfony, and CodeIgniter, enabling developers to create scalable and flexible web applications.

**4. Speed and Performance:** PHP ensures faster loading times and optimal performance, which is crucial for retaining users and enhancing the user experience.

## Essential Qualities to Look for in a PHP Developer

When hiring a PHP developer, it is vital to assess certain qualities to ensure they align with your project requirements and business goals:

## Technical Proficiency

A proficient PHP developer should possess strong technical skills, including:

**• Expertise in PHP and Related Technologies:** In-depth knowledge of PHP, HTML, CSS, JavaScript, and related frameworks and libraries.

**• Database Management:** Experience working with databases like MySQL and MongoDB and understanding SQL.
**• Version Control Systems:** Proficiency in using version control systems such as Git for code management and collaboration.

## Problem-Solving Skills

PHP developers should demonstrate excellent problem-solving abilities and be capable of diagnosing and resolving issues efficiently. This includes debugging code, optimizing performance, and ensuring security measures are in place.

## Communication and Collaboration

Effective communication is critical to successful project execution. A PHP developer should be able to articulate technical concepts clearly and collaborate seamlessly with other team members, including designers, project managers, and clients.

## Portfolio and Experience

Reviewing a developer’s portfolio provides insights into their practical experience and the types of projects they have handled. Look for:

**• Diverse Project Experience:** A range of projects showcasing versatility and capability in handling different web development challenges.

**• Client Testimonials:** Positive feedback from previous clients indicating reliability and professionalism.

## The Hiring Process: Steps to Secure the Best Talent

Hiring the right PHP developer involves a structured approach to ensure you select the best fit for your project. Here’s a step-by-step guide:

**1. Define Your Project Requirements**

Clearly outline your project scope, including the specific functionalities, features, and timelines. This will help you identify the skill set required and set realistic expectations.

**2. Source Potential Candidates**

Utilize various platforms to find potential PHP developers:

**• Job Portals and Freelance Platforms:** Websites like LinkedIn, Indeed, Upwork, and Freelancer are excellent sources for finding skilled developers.

**• Professional Networks:** Leverage your professional network to get recommendations and referrals.

**3. Conduct Thorough Interviews**

Evaluate candidates through a combination of technical assessments and interviews. Focus on:

**• Technical Skills Assessment:** Coding tests and practical assignments to assess their technical capabilities.

**• Behavioral Interviews:** Gauge their problem-solving approach, communication skills, and cultural fit.

**4. Review Portfolios and References**

Examine their previous work and speak to past clients or employers to verify their expertise and reliability.

**5. Negotiate and Finalize the Contract**

Once you’ve selected the right candidate, discuss and agree on terms, including project scope, timelines, payment structure, and confidentiality agreements.

## Benefits of Hiring a Dedicated PHP Developer

Engaging a dedicated PHP developer offers several advantages, ensuring your project’s success and delivering high-quality outcomes:

**• Focused Expertise:** Dedicated developers bring specialized knowledge and focused attention to your project, enhancing efficiency and innovation.

**• Consistency and Continuity:** The continuous involvement of a dedicated developer ensures consistency in coding standards and project continuity.

**• Adaptability:** Dedicated developers can quickly adapt to changing project requirements, ensuring timely delivery and flexibility.

## Conclusion

[Hiring a PHP developer](https://einnovention.us) is a strategic investment that can significantly elevate your web development projects. By understanding the essential qualities to look for, following a structured hiring process, and leveraging the benefits of dedicated expertise, you can ensure that your business achieves its digital goals efficiently and effectively.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ie49rbc963pab7lbeojt.png)

Contact us! Choose wisely, and watch your web presence transform into a dynamic, engaging, and high-performing platform.

Visit site: einnovention.us
Phone number: +1 (209) 737-0590
Address: 5901 Chase Rd, Ste 210 MI 48126S
brucewilliam6004
1,910,313
WordPress Development Company
Tekglide is a WordPress development company specialized in developing websites using basic WordPress...
0
2024-07-03T14:43:34
https://dev.to/tekglide/wordpress-development-company-3848
Tekglide is a [WordPress development company](https://tekglide.com/wordpress-development) specializing in building WordPress websites and tailoring them with custom solutions to deliver an effective end result. Our team comprises creative developers and designers who build the effective, easy-to-use websites your business needs. Services include custom theme development, plugin customization, and site maintenance, among others. What sets us apart from our competitors is our focus on original, superior products that are beautiful and well thought out, along with practical solutions that recognize how significant an online presence is for a company. Whether you are planning to redesign your current website or want a completely new one, Tekglide is ready to help. We look forward to hearing from you to discuss your project and how we can strengthen your online presence with great web design.
tekglide
1,910,311
Strategy and Tips for Migrating Large-Scale Applications to ReactJS
Migrating a large-scale application to ReactJS can seem daunting, but with the right strategy and...
0
2024-07-03T14:41:51
https://dev.to/imensosoftware/strategy-and-tips-for-migrating-large-scale-applications-to-reactjs-23jd
reactjsdevelopment, webdev, hirereactjsdeveloperrs, reactjsdevelopmentcompany
Migrating a large-scale application to ReactJS can seem daunting, but with the right strategy and expert guidance, the transition can be smooth and highly beneficial. In this comprehensive guide, we'll delve into effective strategies and tips for migrating your large-scale applications to ReactJS, ensuring optimal performance, maintainability, and scalability. Whether you’re considering React JS development services or looking to **[hire ReactJS developer](https://www.imensosoftware.com/developers/hire-reactjs-developers/)**, this article will provide the insights needed to make informed decisions. We will explore key aspects including planning the migration, breaking down the process, leveraging the right tools, and addressing common challenges. By the end, you’ll have a clear roadmap for successfully migrating your application to ReactJS. ## Planning Your Migration to ReactJS **Understanding the Current State of Your Application** Before diving into the migration, it's crucial to have a clear understanding of your current application. Conduct a thorough analysis of the existing architecture, technologies, and dependencies. Identify the core functionalities, critical modules, and any potential bottlenecks. This will help you create a detailed migration plan and set realistic expectations. **Setting Clear Objectives** Define what you aim to achieve with the migration. Are you looking to improve performance, enhance user experience, or simplify maintenance? Setting clear objectives will guide your decisions throughout the migration process and help measure the success of the project. **Engaging Stakeholders** Involve all relevant stakeholders from the beginning. This includes developers, project managers, and business stakeholders. Their input and support are essential for a smooth migration. Regular communication ensures everyone is on the same page and can address any concerns promptly. ## Breaking Down the Migration Process **Incremental vs. 
Big Bang Approach** One of the key decisions is choosing between an incremental migration and a big bang approach. - Incremental Migration: This involves gradually replacing parts of the application with ReactJS. It's less risky and allows for continuous deployment and testing. However, it may require maintaining compatibility between old and new systems during the transition. - Big Bang Approach: In this approach, the entire application is migrated to ReactJS in one go. It's faster but riskier and requires thorough testing before deployment. This method is suitable for smaller applications or when there's a clear window for a complete overhaul. **Creating a Component Library** Building a reusable component library can streamline the migration process. Identify common UI elements and functionalities that can be abstracted into reusable React components. This not only speeds up development but also ensures consistency across the application. **Implementing State Management** State management is a critical aspect of **[ReactJS applications](https://www.imensosoftware.com/blog/reactjs-and-ar-vr-exploring-the-possibilities-for-business-applications/)**. Choose the right state management solution based on your application's complexity. Popular options include Redux, MobX, and the Context API. Proper state management ensures that your application remains scalable and maintainable. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7hgffd8osagncpfr8wy.jpg) ## Leveraging the Right Tools **Code Analysis and Transformation Tools** Utilize code analysis and transformation tools to ease the migration. Tools like jscodeshift can automate parts of the code transformation, reducing manual effort and minimizing errors. These tools can also help identify deprecated patterns and suggest modern alternatives. **Testing Frameworks** Ensure robust testing practices during the migration. 
Use testing frameworks like Jest and React Testing Library to write comprehensive unit and integration tests. Automated testing helps catch issues early and ensures that new features do not break existing functionality. **Performance Optimization Tools** ReactJS offers various tools and techniques for optimizing performance. Use tools like React DevTools to analyze and optimize component rendering. Implement techniques like code splitting and lazy loading to improve initial load times and overall performance. ## Addressing Common Challenges **Handling Legacy Code** Migrating legacy code can be challenging. Prioritize critical modules and gradually refactor them to ReactJS. Maintain compatibility layers where necessary to ensure smooth integration. Consider rewriting particularly outdated or complex parts for better long-term maintainability. **Ensuring Consistent User Experience** Consistency in user experience is vital during the migration. Use feature flags to control the rollout of new features and gather user feedback. Gradually introduce changes to minimize disruptions and ensure a seamless transition for end-users. **Managing Team Dynamics** Migration projects can strain team dynamics. Ensure that your team is well-versed in ReactJS and has access to necessary resources and training. Foster a collaborative environment where team members can share knowledge and support each other throughout the process. ## Conclusion: Successfully Migrating to ReactJS Migrating a large-scale application to ReactJS is a complex but rewarding endeavor. By carefully planning the migration, breaking down the process into manageable steps, leveraging the right tools, and addressing common challenges, you can achieve a successful transition. 
Whether you’re partnering with a **[React JS application development company](https://www.imensosoftware.com/technologies/react-js-development-company/)** or looking to hire ReactJS developers, following these strategies and tips will ensure a smooth and efficient migration process. Embrace the power of ReactJS to enhance your application’s performance, maintainability, and scalability, setting the stage for future growth and innovation. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kme6gjzs11d0dsm8tlvy.png) By following these guidelines, you can navigate the complexities of migrating a large-scale application to ReactJS with confidence. For expert assistance, consider engaging React JS development services or hiring experienced ReactJS developers. Their expertise can provide invaluable support and ensure a successful migration, allowing you to reap the full benefits of ReactJS.
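As a closing illustration, the feature-flag rollout mentioned under "Ensuring Consistent User Experience" can be as simple as deterministic percentage bucketing. The sketch below is generic and not tied to any specific flag library; the flag name and percentage are illustrative:

```javascript
// Deterministically bucket each user into 0-99 so a given user always
// sees the same variant during a gradual migration to the new React UI.
const flags = {
  newReactCheckout: { enabled: true, rolloutPercent: 25 },
};

function bucket(userId) {
  // Simple stable hash of the user id into the range 0-99.
  let hash = 0;
  for (const ch of String(userId)) {
    hash = (hash * 31 + ch.charCodeAt(0)) % 100;
  }
  return hash;
}

function isEnabled(flagName, userId) {
  const flag = flags[flagName];
  if (!flag || !flag.enabled) return false;
  return bucket(userId) < flag.rolloutPercent;
}
```

Because the bucketing is stable, a user who saw the legacy UI yesterday sees it again today, while roughly 25% of users exercise the new React path and surface regressions early.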
imensosoftware
1,910,310
Hello world, I'm Mh Mitas
hello world I am learning web development console.log('Hello World') Enter fullscreen...
0
2024-07-03T14:39:59
https://dev.to/mhmitas/hello-world-38id
**hello world** 1. I am learning web development ```js console.log('Hello World') ``` <br> ## **Lorem ipsum** Lorem ipsum dolor sit amet consectetur adipisicing elit. Quibusdam eaque aperiam deserunt nostrum blanditiis sapiente beatae incidunt asperiores magni. Fugiat voluptatum facilis delectus! Vero deleniti quis, non iure voluptate nulla.
mhmitas
1,910,308
Mastering the Art of Event Management: Key Strategies for Success
In the dynamic world of modern business and social engagement, event management stands as a pivotal...
0
2024-07-03T14:38:49
https://dev.to/laurasmith/mastering-the-art-of-event-management-key-strategies-for-success-2fj5
eventmanagement, events, eventmanagementsoftware
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nls5n5vgj0kf32z41z6r.jpg)In the dynamic world of modern business and social engagement, event management stands as a pivotal skill. Whether organizing a corporate conference, a gala fundraiser, or a wedding celebration, the ability to orchestrate events with precision and flair can make or break their success. Here’s a comprehensive guide to mastering the art of event management. ## Understanding the Essence of Event Management Event management encompasses the strategic planning, organization, and execution of occasions ranging from small gatherings to large-scale conferences. At its core, **[successful event management](https://inevent.com/blog/tech-and-trends/event-management-tools-for-seamless-and-successful-events.html)** hinges on meticulous attention to detail, effective communication, and a deep understanding of client objectives and audience expectations. ### Key Components of Effective Event Management **Detailed Planning:** The foundation of any successful event is meticulous planning. This involves defining objectives, setting budgets, establishing timelines, and identifying key stakeholders. A comprehensive event plan serves as a roadmap, guiding organizers through each stage of preparation. **Clear Communication:** Effective communication is paramount in event management. It involves maintaining regular contact with clients, suppliers, and team members to ensure everyone is aligned on goals, responsibilities, and timelines. Clear communication fosters a collaborative environment essential for smooth event execution. **Vendor Coordination:** Events often rely on various vendors such as caterers, decorators, and audio-visual specialists. Coordinating these vendors requires careful negotiation, contract management, and oversight to ensure seamless integration of services on the event day. 
**Logistics Management:** Logistics encompass venue selection, transportation, accommodation (if required), and technical setup. Paying attention to logistical details ensures that attendees experience a smooth and enjoyable event without disruptions. **Risk Management:** Anticipating and mitigating risks is a critical aspect of event management. This includes preparing contingency plans for potential disruptions such as weather changes, technical failures, or unforeseen emergencies. ## Strategies for Success **Harnessing Technology:** Utilize **[event management software](https://inevent.com/blog/marketing/event-management-software-pricing.html)** for streamlined registration, attendee management, and feedback collection. These tools enhance efficiency and provide valuable data for post-event analysis. **Creating Memorable Experiences:** Beyond logistics, successful events leave a lasting impression on attendees. Incorporate creative elements such as interactive activities, engaging speakers, or themed décor to enhance guest experience and promote event objectives. **Post-Event Evaluation:** Conducting a thorough post-event evaluation is essential for continuous improvement. Solicit feedback from attendees and stakeholders to identify strengths, weaknesses, and areas for enhancement in future events. ### The Role of a Skilled Event Manager A skilled event manager possesses a unique blend of creativity, organization, and problem-solving abilities. They act as the central point of contact, orchestrating all elements of an event while remaining adaptable to changing circumstances. Their expertise lies in balancing logistical precision with creative innovation to deliver exceptional experiences that align with client expectations. #### Conclusion In conclusion, mastering the art of event management requires a combination of meticulous planning, effective communication, and a commitment to creating memorable experiences. 
By leveraging strategic thinking, attention to detail, and innovative solutions, event managers can navigate the complexities of organizing successful events that leave a lasting impact on attendees. Whether planning a corporate affair or a personal celebration, embracing these principles will pave the way for success in the dynamic field of event management.
laurasmith
1,910,304
The carbon footprint of email - Top facts and questions
Email has become a vital tool for communication in both personal and professional contexts. It is a...
0
2024-07-03T14:36:28
https://againstdata.com/blog/top-facts-and-questions-about-the-carbon-footprint-of-email
privacy, carbon, buildinpublic, theycoded
Email has become a vital tool for communication in both personal and professional contexts. It is a pervasive element of modern life, with billions of emails sent every day, and it is undoubtedly efficient and convenient. However, its environmental impact is often overlooked: every email sent, received, and stored requires energy, which adds to the global carbon footprint. This article examines several aspects of email's carbon footprint. ## How does an email have a carbon footprint? Every email we send requires electricity to display, and the network connection that transports it uses electricity as well. Every server along the way uses electricity to store the email momentarily before forwarding it on. ## Understanding the Carbon Footprint of Email A typical brief email exchanged between two laptops releases 0.3g of CO2e. If the email is transmitted from phone to phone, this figure drops to 0.2g CO2e, and if it is a spam email caught by a filter, it drops to 0.03g CO2e. Emissions rise noticeably with lengthier emails, however. > [**Reduce your CO2 footprint**](https://againstdata.com?utm_source=devto&utm_medium=article&utm_campaign=the-carbon-footprint-of-email-top-facts-and-questions-2277) by unsubscribing, deleting hundreds of emails and exercising your right to be forgotten with one-click. A long email that takes ten minutes to write and three minutes to read releases around 17g of CO2e, and an email with a picture or attachment may release up to 50g. The device's embodied carbon makes up most of these emissions; the networks and data centers used to send and store the email, along with the device's energy consumption, make smaller contributions. An individual's average email usage produces 3 to 40 kg of CO2e over the course of a year, the same as traveling 10 to 128 miles in a modest gasoline-powered vehicle. 
Even though emails individually contribute very little to global emissions, the 3.9 billion email users worldwide make this contribution substantial. Email contributes significantly to the overall carbon footprint of digital technology because of the cumulative effect of billions of emails exchanged every day. ![](https://againstdata.s3.eu-central-1.amazonaws.com/upload/kJ1BDArZCGuCyWdsFZpxBTQ3h0yi9zJt937HO3Mc.jpg) ### Top facts about the carbon footprint of email 1. **Global Impact of Email:** 2019 saw a stabilization of the world's CO2 emissions from energy usage at 33.2 gigatons, partly due to lower emissions from power generation in wealthy nations. This was made possible by boosting nuclear power, switching from coal to natural gas, and growing renewable energy. Email's influence on the environment also attracted attention in the meantime. An office worker receiving 126 emails a day can generate around 184 kg of CO2e per year, comparable to the emissions of some developing nations. While advancements in clean energy technology are essential, cutting out pointless emails, controlling spam, and improving data storage can also help lessen the total carbon footprint of digital communication. This emphasizes how crucial it is to address energy use as well as digital behaviors in order to achieve environmental sustainability. 2. **Efficiency Paradox:** Efficiency improvements in email technology have made it easier and cheaper to send and store messages. However, this convenience has led to a massive increase in the number of emails people send and keep. Even though each email uses a small amount of energy, the sheer volume of emails globally means a lot of energy is still used. This includes energy for sending emails, storing them in data centers, and keeping servers running. So, while emails themselves are efficient, the overall impact on the environment from all the energy used is significant. 3. 
**Data Centers' Energy Consumption:** Data centers require a lot of energy to run and cool servers. The need for data centers rises with the growth of digital services, like email, which raises energy consumption and increases carbon emissions, particularly if the data centers are fueled by fossil fuels. A major factor in this need is the use of email. Every email costs energy to transmit and store, especially those with big attachments, as stated before. The amount of energy required to process and store emails increases along with their volume. Approximately 1% of the world's power is used in data centers, and this percentage is growing. One of the biggest energy users is cooling systems. In order to lessen their influence on the environment, data centers must: - Boost energy efficiency by upgrading the hardware and cooling. - Make use of solar and other sustainable energy sources. - Optimize operations to cut unnecessary energy use. 4. **Corporate Responsibility:** Businesses may lessen their environmental effect considerably by addressing the carbon footprint of their digital operations. Controlling email usage is important since it affects data centers' energy usage. To reduce their digital carbon footprint, businesses may do the following: - Simplify Communication: Motivate staff members to send fewer emails and concentrate on using more effective ways to communicate. - Optimize Data Storage: Put in place procedures for routinely deleting superfluous information and outdated emails. - Adopt Renewable Energy: Make the switch to renewable energy sources for data centers and other digital infrastructure. 5. **Carbon Cost of Deleting Emails:** As the process of deleting emails requires energy, we now know that it has a minor carbon cost. Data must be sent from your device to the server, which requires energy, in order to delete an email. 
A single email deletion consumes very little energy and produces only a tiny amount of CO2, but those emissions mount up when millions of people delete emails. However, the initial carbon cost of deleting emails is often outweighed by the energy and carbon emissions saved by not holding them for an extended period of time. Since data centers require a lot of electricity to operate and store vast volumes of data, decreasing the volume of stored emails aids in reducing their total energy consumption and related carbon emissions. Accepting the small carbon cost of deleting emails is thus a tiny but essential step toward reducing the overall digital carbon footprint. 6. **Renewable Energy's Role:** Renewable energy sources like solar, wind, hydro, and geothermal power are crucial in reducing this carbon footprint. These sources produce electricity without emitting greenhouse gases, making them essential for powering data centers and digital infrastructure. Major companies like Google, Microsoft, and Amazon are transitioning their data centers to renewable energy, significantly cutting their carbon emissions. Key strategies include: - Powering Data Centers with Renewable Energy: Reducing emissions by using clean energy. - Improving Energy Efficiency: Utilizing advanced cooling systems and efficient practices. - Adopting Distributed Energy Solutions: Implementing localized renewable energy installations. - Corporate Sustainability Goals: Companies adopting renewable energy as part of their sustainability strategies. - Economic Viability: The decreasing cost of renewable energy makes it increasingly feasible. 7. **Global email traffic:** In 2021, over 306 billion emails were sent daily, a number expected to exceed 376 billion by 2025. The increasing volume of email traffic presents a challenge. Even with improved efficiency, the overall energy consumption continues to rise due to the growing number of emails. 
![](https://againstdata.s3.eu-central-1.amazonaws.com/upload/T68gk8n841tpYPrgWTc7LLkjfw79lGl9U4cQJgrU.jpg) ### Top questions about the carbon footprint of email 1. **How does email compare to other digital activities in terms of carbon emissions?** An hour of Facebook use generates roughly 2 grams of CO2, an hour of Instagram scrolling produces about 1.5 grams, and an hour of Netflix standard definition viewing produces about 55 grams. About one gram of CO2 is released by each participant during an hour-long Zoom meeting, and 20 grams are released during an hour of online gaming. In light of this, email uses less carbon than other internet activities like gaming and streaming videos. Therefore, you could say that the use of email is much healthier for the environment in comparison to other digital activities. 2. **What kind of data is required for assessing the carbon footprint of emails?** In order to evaluate the total environmental effect, this involves monitoring power use, estimating CO2 emissions from that energy use based on where the energy originates from (such as coal or renewable sources), and taking into account variables like the frequency, volume, and types of emails sent. 3. **Are there email providers that are better for the environment?** Indeed, there are email services that are better for the environment than others. They accomplish this by employing renewable energy sources, such as solar or wind power, and enhancing the energy efficiency of their data centers. Providers further lessen their environmental effect by buying renewable energy credits or taking part in carbon offset schemes. Selecting providers that openly provide information about their sustainability initiatives might encourage the adoption of greener digital practices. Here are some examples: - Google: Invests heavily in renewable energy to power their data centers and offices. - Microsoft: Committed to achieving carbon negative status by 2030 through renewable energy investments. 
- Apple: Powers many data centers with solar and wind energy, focusing on sustainability. - GreenGeeks: Offers eco-friendly hosting solutions with renewable energy credits. - ProtonMail: Operates data centers sustainably with a focus on energy efficiency. 4. **Can I calculate the carbon emissions from my personal email account?** This is a rough estimate on how to calculate your email carbon emissions: - Research Energy Use: Check if your email provider uses renewable energy and how efficiently their data centers operate. - Estimate Data Transmission: Calculate the average amount of data (including attachments) you send per email and how often you send emails. - Convert to Carbon Emissions: Suppose your email provider uses 0.6 kWh of electricity per email. If their emissions factor (CO2 per kWh) is 0.5 kg CO2/kWh: Carbon emissions per email = 0.6 kWh * 0.5 kg CO2/kWh = 0.3 kg CO2 per email. - Calculate Total Emissions: If you send 100 emails per month: Monthly carbon emissions = 100 emails * 0.3 kg CO2 per email = 30 kg CO2 per month. 5. **What actions can individuals take to help lower the environmental impact of using the internet and email?** - Cut Down on Email Volume: Reduce the number of pointless emails you send. - Employ Effective Communication: For brief conversations, choose voice or instant messaging. - Handle Subscriptions: Remove yourself from unsolicited newsletters and emails. - Limit Attachments: Don't attach huge files; instead, use file-sharing services. - Eliminate Superfluous Emails: Frequently purge and discard emails. One way to do this is to use an Inbox cleaner tool like [Against Data](https://app.againstdata.com/). - Turn on energy saving by putting electronics to sleep when not in use. - Select services from suppliers who use renewable energy to show your support for renewable energy. - Inform Others: Spread knowledge on digital carbon footprints. - Select Green Hosting: Look for hosting companies that use environmentally friendly methods. 
- Advocate for Change: Back laws that encourage the use of renewable energy sources and energy efficiency. ### Conclusion In summary, email is an essential tool in modern life, yet its widespread usage adds significantly to the world's carbon footprint. Understanding that impact means accounting for both the energy-intensive operations of data centers and the efficiency of email transmission. You can reduce the personal carbon footprint of your digital communication by choosing email providers that run on renewable energy and by cutting out pointless emails and attachments. By encouraging responsible digital behavior and energy conservation, these actions help create a more sustainable future.
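To make the arithmetic in question 4 above concrete, here is a minimal JavaScript sketch of the same estimate. The figures (0.6 kWh per email, a 0.5 kg CO2/kWh emissions factor, 100 emails per month) are the article's illustrative assumptions, not measured values:

```javascript
// Rough email carbon estimate, following the article's worked example.
// All inputs are illustrative assumptions, not measured values.

// kg of CO2e emitted for a single email
function emissionsPerEmail(kwhPerEmail, kgCo2PerKwh) {
  return kwhPerEmail * kgCo2PerKwh;
}

// kg of CO2e emitted for a month of email use
function monthlyEmissions(emailsPerMonth, kwhPerEmail, kgCo2PerKwh) {
  return emailsPerMonth * emissionsPerEmail(kwhPerEmail, kgCo2PerKwh);
}

const perEmail = emissionsPerEmail(0.6, 0.5);     // about 0.3 kg CO2e per email
const perMonth = monthlyEmissions(100, 0.6, 0.5); // about 30 kg CO2e per month
console.log(perEmail.toFixed(2), perMonth.toFixed(1));
```

Swapping in your own provider's emissions factor (if published) and your actual sending volume gives a personalized, if still rough, estimate.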
extrabright
1,910,305
The Various Length Units in CSS
CSS has a variety of length units. Broadly speaking, they fall into two categories: absolute length units and relative length units
0
2024-07-03T14:35:52
https://dev.to/robertg/cssli-mian-de-ge-chong-chang-du-54n8
CSS has a variety of length units. Broadly speaking, they fall into two categories: 1. Absolute length units 2. Relative length units
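As a quick illustration of the two categories (the class names and values below are arbitrary examples):

```css
/* Absolute length units: fixed sizes, independent of context */
.box-absolute {
  width: 200px;      /* pixels */
  margin: 1cm;       /* centimeters */
  border: 2pt solid; /* points */
}

/* Relative length units: sized relative to something else */
.box-relative {
  font-size: 1.2em; /* relative to the parent element's font size */
  padding: 2rem;    /* relative to the root element's font size */
  width: 50%;       /* relative to the containing block */
  height: 50vh;     /* relative to the viewport height */
}
```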
robertg
1,910,302
TYPO3 v13.2—Release Notes
The third sprint release of the TYPO3 v13 release cycle, version 13.2, offers a range of practical...
0
2024-07-03T14:30:38
https://dev.to/typo3/typo3-v132-release-notes-73h
typo3, releasenotes, webdev, programming
The third sprint release of the TYPO3 v13 release cycle, version 13.2, offers a range of practical improvements for editors and significant enhancements under the hood. NOTE: Article Source - typo3.org Aiming to enhance the experience for backend users who manage TYPO3 site content, the new version includes various improvements to the backend UI to simplify the work of editors. These enhancements are designed to make the interface more modern, intuitive, and feature-rich, ensuring editors can work efficiently. Most of the changes in TYPO3 v13.2 are technical. For example, groundwork has been laid for the integration of Content Blocks with a new Schema API. While Content Blocks are not yet fully integrated into the TYPO3 Core, progress is being made. More details on this can be found in André Kraus's article "Content Blocks on the Road Towards TYPO3 v13 — Report Q1/2024." The reference index has also received updates that will significantly speed up future operations. ## Key Changes in TYPO3 Version 13.2 **Backend Search Improvements** The backend search now allows users to find terms on pages, page content, database records, backend modules, and workspaces. This improvement helps users working with large TYPO3 installations. **Mass Editing of Selected Columns** Users can now update multiple records at once in the TYPO3 backend. The new "Edit columns" button presents only the currently active columns, streamlining the editing process. **Record List Download Presets** The data export modal window now supports presets, allowing users to download data with predefined fields. This feature simplifies regular data exports. **Form Listings** The Form Framework has been updated with sortable columns and a cleaner interface. Forms that are still in use are protected from deletion. **Schema API** The new Schema API, introduced in [TYPO3 v13.2](https://nitsantech.de/en/blog/typo3-v132), provides an object-based approach to work with TCA definitions. 
This API is a crucial foundation for the future integration of Content Blocks. **Self-contained Content Elements** Changes under the hood now allow for the integration of Content Blocks. TYPO3 loads rendering libraries early, making them available in the frontend. **Reference Index** The reference index (refindex) has been reworked, reducing the number of SQL queries needed to retrieve data and significantly boosting system performance. An updated refindex is essential whenever extensions or the TYPO3 Core are changed. **System Requirements, Support, and Maintenance** TYPO3 v13 requires PHP version 8.2, with security updates available until December 2025. Each sprint release will be supported until the next minor version is published, with long-term support for TYPO3 v13 LTS until 31 October 2027. **Download and Installation** Details about the release and installation instructions are available at get.typo3.org. Composer is recommended for setting up your TYPO3 environment. **Feature Freeze Ahead!** The next milestone is TYPO3 version 13.3, scheduled for 17 September 2024, which will mark the feature freeze for the v13 cycle. The focus will then shift to testing and refinement. Now is the best time to submit code contributions for TYPO3 v13 LTS.
typo3
1,910,301
Swagger + Node.js (Express) : A Step-by-Step Guide
Following the post on configuring Swagger for a SpringBoot project, today I will introduce you to a...
0
2024-07-03T14:30:19
https://dev.to/cuongnp/swagger-nodejs-express-a-step-by-step-guide-4ob
javascript, node, beginners, programming
Following the post on configuring Swagger for a SpringBoot project, today I will introduce you to a step-by-step guide to set up Swagger in a Node.js (Express) project. ## 1. Set Up Your Project First, create a new Node.js project if you don't have one already. ```bash mkdir swagger-demo cd swagger-demo npm init -y npm install express swagger-ui-express swagger-jsdoc ``` ## 2. Create Your Express Server Create an index.js file (or app.js, depending on your preference): ```bash touch index.js ``` ```javascript const express = require('express'); const swaggerUi = require('swagger-ui-express'); const swaggerJsDoc = require('swagger-jsdoc'); const app = express(); const port = process.env.PORT || 3000; // Swagger setup const swaggerOptions = { swaggerDefinition: { openapi: '3.0.0', info: { title: 'My API', version: '1.0.0', description: 'API documentation', }, servers: [ { url: 'http://localhost:3000', }, ], }, apis: ['./routes/*.js'], // files containing annotations as above }; const swaggerDocs = swaggerJsDoc(swaggerOptions); app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(swaggerDocs)); // Mount the routes defined in ./routes (created in the next step) app.use(require('./routes/hello')); // Sample route app.get('/api/hello', (req, res) => { res.send('Hello World!'); }); app.listen(port, () => { console.log(`Server is running on http://localhost:${port}`); }); ``` Note that the Swagger definition must use the `openapi` key so swagger-jsdoc recognizes the OpenAPI 3.0 format. ## 3. Document Your Routes Create a routes folder and add a hello.js file to it (or any route file): ```bash mkdir routes cd routes touch hello.js ``` Then, add the following code to `hello.js`, using an Express `Router` so the file can be mounted from `index.js`: ```javascript const express = require('express'); const router = express.Router(); /** * @swagger * /api/user: * get: * summary: Retrieve a list of users * responses: * 200: * description: A list of users * content: * application/json: * schema: * type: array * items: * type: object * properties: * id: * type: integer * example: 1 * name: * type: string * example: John Doe */ router.get('/api/user', (req, res) => { res.json([{ id: 1, name: 'John Doe' }]); }); module.exports = router; ``` ## 4. Run Your Server Start your server: ```bash node index.js ``` ## 5. 
Access Swagger UI Open your browser and navigate to http://localhost:3000/api-docs. You should see the Swagger UI with your documented API. ![Swagger-javascript](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qstxe6yfvs2h5agxenj9.png) Following these steps, you can configure Swagger for your JavaScript project, providing interactive API documentation for your development team and end-users. Thank you for reading! See you in the next post. [Swagger + SpringBoot Project](https://dev.to/cuongnp/supercharge-your-spring-boot-application-with-swagger-a-step-by-step-guide-to-interactive-api-documentation-18g7)
cuongnp
1,907,762
JS Builders Meetup – Learn the PubSub Design Pattern for JavaScript!
Hey Dev.to Community! 👋 On July 3rd we had our monthly virtual gathering for JavaScript enthusiasts...
0
2024-07-03T14:30:00
https://dev.to/buildwebcrumbs/join-us-at-js-builders-meetup-learn-the-pubsub-design-pattern-for-javascript-568o
javascript, meetup, webdev, programming
**Hey Dev.to Community! 👋** On July 3rd we had our monthly virtual gathering for JavaScript enthusiasts and web developers looking to expand their knowledge and connect with the community. **Guest Speaker:** Vitor Norton, Developer Advocate at Superviz **In this session, Vitor Norton taught us about the Publisher/Subscriber (PubSub) Design Pattern. This pattern is essential for projects involving real-time collaboration and is a common topic in technical job interviews.** {%embed https://youtube.com/live/U54Aj1c9WDo %} --- Thanks for watching!
pachicodes
1,909,261
Simplify your unit testing with generative AI
When I write code, I write tests. But I especially hate the process of getting my test framework...
0
2024-07-03T14:30:00
https://community.aws/content/2ihDE9A59SAbHwUqkLWhyl78Eax/simplify-your-unit-testing-with-generative-ai
aws, ai, productivity, testing
When I write code, I write tests. But I especially hate the process of getting my test framework setup in my project and writing those first few tests that will eventually guide the way for a more full-fledged test suite. Because of this, I have been experimenting with generative AI tools, like [Amazon Q Developer](https://aws.amazon.com/developer/generative-ai/amazon-q/), to support my testing efforts and to write and fix tests more quickly. I want to share with you today how I'm getting my test framework setup, creating those first few tests, and the back and forth prompts I'm using to be more productive. In the examples below, I have a small React app using Typescript, deployed with [AWS Amplify Gen 2](https://docs.amplify.aws/react/start/quickstart/). I like to get my test suite set up as I'm kicking off a project, so now is the time. Once my test suite is set up, I'll show you how to write a couple of tests and get a passing test suite. Let's get started! ## Setting up a test suite It's been a bit since I've worked with React and so I'm looking to learn more about which testing framework is common practice to use. I've used Jest in the past, so I expect that will show up in my research. I start by asking Amazon Q: `What are some options for test frameworks for testing this React app? Test framework needs to support Typescript, be able to use the React Testing Library for component testing, and support mocking.` ![Amazon Q prompt asking for test framework options](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zjszehv3xx05jhnz7cv4.png) I get four options here, with Jest being one of them. 
[Vitest](https://vitest.dev/) looks interesting and this project is using Vite, so I ask for more info: `Why would I choose Jest over Vitest?` ![Amazon Q prompt asking why Jest over Vitest](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uc4z57rz110f285oyxz6.png) I decide to go with Vitest and next I ask Amazon Q how to get my test suite set up: `What are the steps I need to take to set up this project to use Vitest? Include React Testing Library, support for typescript.` ![Amazon Q prompt asking what the steps are to setup Vitest in this project.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j126jv2ei0kbkpze7dpv.png) ![Steps 3-4 from Amazon Q prompt asking what the steps are to setup Vitest in this project.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ivwmlknx197hk38f04fr.png) I use the steps from the Amazon Q response as my guide to setting up the test suite. Before going through each step, I review them to make sure it's what I need, to make sure it's accurate enough to proceed. There were a couple of spots that gave me trouble and I'll point them out below. ### Step 1: Install dependencies I run the proposed command at the command line to install the Vitest, React Testing Library, and related dependencies: `npm install -D vitest @testing-library/react @testing-library/jest-dom @testing-library/user-event jsdom` ### Step 2: Configure Vitest I create a `vitest.config.ts` file in the root of my project and paste in the proposed code: ```typescript /// <reference types="vitest" /> /// <reference types="vite/client" /> import react from '@vitejs/plugin-react'; import { defineConfig } from 'vitest/config'; export default defineConfig({ plugins: [react()], test: { globals: true, environment: 'jsdom', setupFiles: ['./src/setupTests.ts'], }, }); ``` ### 3. 
Create a setup file I create the `setupTests.ts` file in the `src` directory as referenced in the previous step and add the proposed code: ```typescript // src/setupTests.ts import '@testing-library/jest-dom/extend-expect'; ``` ⚠️ Warning: For anyone scanning through this article for the code, the line above is actually incorrect and we fix it in the next section! ### 4. Update package.json Finally, I update the `package.json` file to add the following to the `scripts` block: ```json { "scripts": { "test": "vitest" } } ``` This will allow me to run `npm test` at the command line to run the tests and watch for changes. Everything so far has been smooth. Now it's time to write my first test. ## Writing some tests So far, the bulk of my work is in the `RecipeList` component that either lists a user's recipes or shows a "New recipe" button if they don't have any recipes yet. I know that I want the following five test cases: 1. display "Your Recipes" header text 2. display a button to create a new recipe 3. render a list of recipes when user has recipes 4. render a button to view each recipe when user has recipes 5. render a button to create a new recipe when user has no recipes I have two approaches to create tests with Amazon Q. I can start by asking Q to suggest an example test: `Give me an example vitest test for RecipeList. One test will test that RecipeList displays "Your Recipes" text. The second test will test that a button to create a "New Recipe" is displayed. Mock the useRecipeData hook to return a list of recipes from the default function.` ![Amazon Q prompt asking for an example test.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9mjll45utf8mrd97x9i2.png) Starting with this example, I clean this up. In the `useRecipeData` mock, I fix three issues: 1. remove the imports for React (unnecessary) and vitest (I'm using a global setup, so already imported; see step 2 in the previous section) 2. change the module path to `../hooks/useRecipeData` 3. 
swap `recipes` for `default` because the hook I'm mocking uses a default export rather than a named export. In both tests, I add a `<BrowserRouter>` wrapper around the `<RecipeList>` component. The test now looks like this: ```typescript // RecipeList.test.tsx import { render, screen } from '@testing-library/react'; import { describe, it, vi } from 'vitest'; import RecipeList from './RecipeList'; import { BrowserRouter } from 'react-router-dom'; vi.mock('../hooks/useRecipeData', () => ({ default: () => ({ recipes: [], }), })); describe('RecipeList', () => { it('should display "Your Recipes" text', () => { render( <BrowserRouter> <RecipeList /> </BrowserRouter> ); const yourRecipesText = screen.getByText('Your Recipes'); expect(yourRecipesText).toBeInTheDocument(); }); it('should display a button to create a new recipe', () => { render( <BrowserRouter> <RecipeList /> </BrowserRouter> ); const newRecipeButton = screen.getByRole('button', { name: /New Recipe/i }); expect(newRecipeButton).toBeInTheDocument(); }); }); ``` Now, it's time to check if the tests pass or fail! ## Run the test suite to get green In the terminal, I run `npm test` to run the tests and watch for changes. I immediately run into an error. I'm not sure what this means, so I ask Amazon Q for help: `What does this error message mean: ```Error: Missing "./extend-expect" specifier in "@testing-library/jest-dom" package``` when running ```npm test```?` Based on the response, I need to update our `setupTests.ts` file from: ```typescript // setupTests.ts import '@testing-library/jest-dom/extend-expect'; ``` To: ```typescript // setupTests.ts import '@testing-library/jest-dom'; ``` So that it uses the default export, rather than the removed `extend-expect` named export. ![Amazon Q prompt asking what a specific error message means.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9gzthu2n4wu7difch931.png) After saving the file, the tests are automatically rerun and this issue is resolved. 
However, the tests are not yet green and passing. The test `should display "Your Recipes" text` fails because `Your Recipes` should only be displayed when the user has no recipes and we set this up with an empty list. Let's fix that by adding some mock recipes. Because `vi.mock` is hoisted to the top of the file, I also need to make `mockRecipes` hoisted, by using `vi.hoisted`. I then move these to a `beforeEach` block: ```typescript beforeEach(() => { const mocks = vi.hoisted(() => { return { mockRecipes: [ { id: '1', title: 'Recipe 1', instructions: 'Instructions 1' }, { id: '2', title: 'Recipe 2', instructions: 'Instructions 2' }, ] } }) vi.mock('../hooks/useRecipeData', () => ({ default: vi.fn().mockReturnValue({ recipes: mocks.mockRecipes, }), })); }); ``` After saving the file, all of the tests are passing! My (very small) test suite is green! Next, I can write a few more tests to extend the test suite. ## Bonus: Add more tests Once I have the core of a test stubbed out, I can start typing out code and let Amazon Q's inline code prompting make suggestions for me. Another test I want to add is `render a list of recipes when user has recipes`. To use Amazon Q's inline code suggestions, I start typing out the test with the description of what I want. In the screenshot below, I start typing this line: ```typescript it('should render a list of recipes when user has recipes', async () => { ``` Amazon Q will provide a suggestion automatically (see the light grey suggestion in the screenshot, between lines 47-48) or I can use the [shortcuts](https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/actions-and-shortcuts.html) `Option+C` (Mac) or `Alt+C` (Windows). I like the suggestion, so I hit `Tab` to accept it. 
![Using Amazon Q inline code suggestion to write a test.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzmytp0wp6n9r0sc38ug.png)

My final test file looks like this:

```typescript
// RecipeList.test.tsx
import { render, screen, cleanup } from '@testing-library/react';
import RecipeList from './RecipeList';
import { BrowserRouter } from 'react-router-dom';

const mocks = vi.hoisted(() => {
  return {
    mockRecipes: [
      { id: '1', title: 'Recipe 1', instructions: 'Instructions 1' },
      { id: '2', title: 'Recipe 2', instructions: 'Instructions 2' },
    ]
  }
})

describe('RecipeList', () => {
  beforeEach(() => {
    cleanup();
    vi.mock('../hooks/useRecipeData', () => {
      return {
        default: vi.fn().mockReturnValue({ recipes: mocks.mockRecipes }),
      }
    })
  });

  it('should display "Your Recipes" text', () => {
    render(
      <BrowserRouter>
        <RecipeList />
      </BrowserRouter>
    );
    const yourRecipesText = screen.getByText('Your Recipes');
    expect(yourRecipesText).toBeInTheDocument();
  });

  it('should display a button to create a new recipe', () => {
    render(
      <BrowserRouter>
        <RecipeList />
      </BrowserRouter>
    );
    const newRecipeButton = screen.getByRole('button', { name: /New Recipe/i });
    expect(newRecipeButton).toBeInTheDocument();
  });

  it('should render a list of recipes when user has recipes', async () => {
    render(
      <BrowserRouter>
        <RecipeList />
      </BrowserRouter>
    );
    expect(await screen.findByText('Recipe 1')).toBeInTheDocument();
    expect(screen.getByText('Recipe 2')).toBeInTheDocument();
  });

  it('should render a button to view each recipe when user has recipes', async () => {
    render(
      <BrowserRouter>
        <RecipeList />
      </BrowserRouter>
    );
    const viewButtons = await screen.findAllByText('View');
    expect(viewButtons).toHaveLength(2);
  })

  it('should render a button to create a new recipe when user has no recipes', async () => {
    vi.doMock('../hooks/useRecipeData', () => {
      return {
        default: vi.fn().mockReturnValue({ recipes: [] }),
      }
    })
    render(
      <BrowserRouter>
        <RecipeList />
      </BrowserRouter>
    );
    expect(await screen.findByText('New Recipe')).toBeInTheDocument();
  });
});
```

## Wrapping up

That's it! The test suite is green. In this article, I showed you how to set up a Vitest test suite to be used with TypeScript and React Testing Library. We chose Vitest over Jest because the project uses Vite, but its API feels a lot like Jest's. We wrote some tests and then ran them from the command line using Vitest's test runner command that watches for changes to tests or code under test. We used Amazon Q Developer to support our work throughout, asking it questions, for example code, for help with errors, and used inline code suggestions.

Ready to try Amazon Q Developer in your testing flow? Check out how to get started in [VSCode](https://community.aws/content/2fVw1hN4VeTF3qtVSZHfQiQUS16/getting-started-with-amazon-q-developer-in-visual-studio-code) or [JetBrains IDEs](https://community.aws/content/2fXj10wxhGCExqPvnsJNTycaUcL/adding-amazon-q-developer-to-jetbrains-ides).

Have you been using an AI assistant to help you with testing? Drop a comment 💬 below to share what tool you're using and how it helps you.
jennapederson
1,910,299
The Role of Physics in Elevator Operation and Safety
Physics plays a crucial role in the operation and safety of elevators, ensuring that these systems...
0
2024-07-03T14:29:53
https://dev.to/liftcomplex/the-role-of-physics-in-elevator-operation-and-safety-3il9
Physics plays a crucial role in the operation and safety of elevators, ensuring that these systems function smoothly and securely. By understanding and applying the principles of physics, engineers can design elevators that are efficient, reliable, and safe for passengers.

**Gravity and Motion**

At the core of elevator operation is the principle of gravity. Elevators must counteract gravitational forces to lift and lower passengers between floors. This is achieved through a system of counterweights and pulleys, which balance the load and reduce the amount of energy required to move the elevator car. Newton's laws of motion come into play here, as engineers calculate the forces needed to accelerate and decelerate the car smoothly, providing a comfortable ride for passengers.

**Mechanical Advantage**

The use of pulleys and counterweights in elevators is an application of mechanical advantage, a concept in physics that allows a smaller force to lift a larger load. By carefully designing the pulley system, engineers can ensure that the elevator operates efficiently, using less energy and reducing wear on components. This not only enhances the performance of the elevator but also extends its lifespan.

**Energy Conversion**

Elevators convert electrical energy into mechanical energy through the use of motors. The efficiency of this energy conversion is critical to the operation of the elevator. Physics helps engineers design motors that are efficient and powerful enough to handle the demands of lifting and lowering the elevator car. Additionally, regenerative braking systems can capture some of the energy used in braking and convert it back into electrical energy, improving overall efficiency.

**Safety Mechanisms**

Safety is paramount in elevator design, and physics is at the heart of many safety mechanisms. One key safety feature is the governor, a device that regulates the speed of the elevator. If the elevator exceeds a safe speed, the governor activates the safety brakes, preventing the car from descending too quickly. The principles of friction and inertia are crucial here, as the brakes must generate enough friction to stop the car without causing sudden jolts that could harm passengers.

**Structural Integrity**

The structural integrity of the elevator shaft and car is another area where physics is essential. Engineers must ensure that the materials used can withstand the stresses and strains of daily operation. This involves calculations related to tension, compression, and shear forces. By understanding these physical principles, engineers can select materials and design structures that are both strong and lightweight.

**Vibration and Noise Reduction**

Elevators must also provide a quiet and smooth ride. Physics helps engineers design systems that minimize vibrations and noise. This involves studying the natural frequencies of the elevator components and using damping materials to absorb vibrations. By reducing these unwanted movements, engineers can enhance the passenger experience and prolong the life of the elevator.

In conclusion, physics is integral to every aspect of elevator operation and safety. From the basic principles of gravity and motion to the advanced design of safety mechanisms and structural components, physics ensures that elevators are efficient, reliable, and safe for everyday use. As technology advances, the role of physics in elevator design will continue to evolve, driving innovations that improve performance and safety.
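The counterweight argument above can be summarized with a simple force balance. The equations below are an illustrative, idealized model (1:1 roping, no friction), and all symbols are introduced here rather than taken from any particular elevator design:

```latex
% Idealized motor force for a roped car with a counterweight (no friction):
F_{\text{motor}} = \left( m_{\text{car}} + m_{\text{load}} - m_{\text{cw}} \right) g

% A common sizing rule places the counterweight at the car mass plus half
% the rated capacity, so the motor only ever lifts the imbalance:
m_{\text{cw}} = m_{\text{car}} + \tfrac{1}{2}\, m_{\text{capacity}}
```

With this sizing, at half load the two sides of the rope balance and the motor mainly overcomes friction and supplies acceleration, which is why counterweights reduce energy use so dramatically.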
liftcomplex
1,910,298
Understanding CSS Progress Bars
In the world of web development, having a visually appealing and user-friendly interface is...
0
2024-07-03T14:28:33
https://dev.to/code_passion/understanding-css-progress-bars-795
css, html, webdesign, tutorial
In the world of web development, having a visually appealing and user-friendly interface is essential. The progress bar is a vital aspect of reaching this goal. Progress bars not only give users a sense of readiness and feedback, but they also enhance the overall user experience. Although there are various ways to implement progress bars, CSS offers a flexible and adaptable approach. In this post, we'll look into CSS progress bars, including their capabilities, styling options, and recommended implementation methods.

**Structure of a Circular CSS Progress Bar**

At its core, a progress bar is a graphic representation of the completion status of a task or process. CSS allows developers to create progress bars using simple markup and styling techniques without relying on complicated JavaScript tools or frameworks. By employing CSS properties such as width, background-color, and border-radius, developers can adjust the appearance of progress bars to correspond with their design preferences and branding requirements.

**How a Circular Progress Bar Works**

A circular progress bar is a visual indicator used in user interfaces to display the status of an operation or process. Circular progress bars fill clockwise around a circle rather than horizontally from left to right.

[Read more examples of CSS progress bars](https://skillivo.in/understanding-css-progress-bars/)

**Amazing CSS Progress Bar Examples**

**1. 
CSS Progress Bar styling to create a rotating circle border effect**

**Output:**

[![CSS Progress Bar styling to create a rotating circle border effect](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zllzwfrstmualt26so2j.gif)](https://skillivo.in/understanding-css-progress-bars/)

Let's break it down step by step:

```
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Rotating Circle Border</title>
  <style>
    body {
      margin: 0 auto;
      width: 500px;
    }
    .circle {
      margin-top: 200px;
      width: 400px;
      height: 400px;
      border: 7px solid transparent;
      border-radius: 50%;
      border-top-color: #4caf50;
      animation: rotate 2s linear infinite;
    }
    @keyframes rotate {
      from { transform: rotate(0deg); }
      to { transform: rotate(360deg); }
    }
  </style>
</head>
<body>
  <div class="circle"></div>
</body>
</html>
```

`border` defines the border of the circle. It is initially set to a 7-pixel solid transparent border, which means the border starts out invisible (transparent) but has a width of 7 pixels. `border-radius` is set to 50%, which rounds the element into a circle. `border-top-color` sets the color of the top edge of the border; in this case it is set to #4caf50, a shade of green. [The animation property](https://skillivo.in/introduction-to-css-animation-1/) is used to run the rotation animation.

@keyframes rotate defines the rotate animation:

1. from specifies the starting state of the animation, where the circle is rotated 0deg (i.e., no rotation).
2. to specifies the ending state of the animation, where the circle is rotated 360deg (i.e., one full rotation).

**Explanation:**

1. When you load this HTML file in a web browser, it displays a webpage with a single circular element (div) that has a rotating border.
2. The border of the circle rotates continually around its center due to the rotate animation defined in the CSS.
3. The animation duration is set to 2 seconds, allowing the circle to complete a full rotation in that time.
4. The animation timing function is set to linear, ensuring constant rotation speed during the animation.
5. The infinite keyword ensures that the animation repeats indefinitely.

**Conclusion**

[CSS progress bars](https://skillivo.in/understanding-css-progress-bars/) are useful tools that help developers design visually appealing and functional user experiences. Developers can customise the appearance and behaviour of progress bars to suit their design preferences, improving the overall user experience. By following best practices for implementation, developers can guarantee that progress bars are accessible, performant, and smoothly incorporated into their web apps. With this guide, you'll be well-prepared to master CSS progress bars and take your web development projects to new heights.
code_passion
1,910,297
Error Handling (How to create Custom Error Handlers and Exceptions) - FastAPI Beyond CRUD (Part 16)
This video explores error handling in FastAPI, focusing on customizing exception raising and...
0
2024-07-03T14:26:36
https://dev.to/jod35/error-handling-how-to-create-custom-error-handlers-and-exceptions-fastapi-beyond-crud-part-16-7h9
fastapi, python, api, programming
This video explores error handling in FastAPI, focusing on customizing exception raising and tailoring error responses to meet our application's specific requirements. We cover creating custom exception classes and utilizing them effectively to manage errors and personalize their presentation according to the application's needs. {%youtube jYbNq6QAQNI%}
jod35
1,910,294
React-Hook-Form vs Formik: The Good, Bad, and Ugly
React-Hook-Form and Formik are the most popular libraries for handling forms in React applications....
0
2024-07-03T14:21:33
https://joyfill.io/react-hook-form-vs-formik-the-good-bad-and-ugly
react, forms, opensource, github
React-Hook-Form and Formik are the most popular libraries for handling forms in React applications. Both have their own sets of pros and cons, which can help in deciding which one to use based on the specific needs of your project.

> 💡 Note: Both of these React form libraries are open source, so be cognizant of that (a reason for the cons provided in this article). If you are a SaaS product looking for a **deep integration with more extensible form components that can fit within the tentacles of your application(s)** and that is well supported by more than just an OSS community, consider a more robust form library like [Joyfill](https://joyfill.io/developers).

Now, without further ado, let's dive into the pros and cons of React-Hook-Form and Formik.

## What is React-Hook-Form?

React-Hook-Form is a lightweight and performant library for managing form state and validation in React applications. It leverages React hooks to provide an intuitive API for handling form inputs, validations, and submissions.

### React-Hook-Form Pros:

1. **Performance**: React-Hook-Form is built with performance in mind. It minimizes re-renders by leveraging uncontrolled components and ref APIs, which can be especially beneficial for large forms.
2. **Minimal Boilerplate**: It requires less code and configuration compared to Formik. The API is simple and straightforward, making it easy to set up and use.
3. **Built-in Validation**: It offers built-in support for form validation using either custom functions or schema-based validation with libraries like Yup.
4. **Tiny Bundle Size**: React-Hook-Form has a smaller bundle size compared to Formik, which can be a crucial factor in performance-sensitive applications.
5. **Flexible and Extensible**: It provides flexibility in managing form state and validation. You can easily integrate it with other libraries and tools.

### React-Hook-Form Cons:

1. **Learning Curve**: While the API is simple, the concepts of uncontrolled components and refs might be less familiar to developers who are used to working with controlled components.
2. **Limited Built-in Features**: React-Hook-Form focuses on being lightweight and performant, which means it might lack some of the built-in features that Formik provides out-of-the-box, requiring additional custom code or third-party integrations.

## What is Formik?

Formik is a popular form library for React that simplifies form handling by managing form state, validation, and submission. It provides a declarative approach to building forms, reducing boilerplate code and improving code readability.

### Formik Pros:

1. **Rich Features**: Formik comes with a lot of built-in features such as field-level validation, nested objects, and arrays, making it very powerful for complex forms.
2. **Controlled Components**: Formik uses controlled components, which can be more intuitive for developers who are familiar with this pattern in React.
3. **Ease of Use**: It has a comprehensive and well-documented API, which makes it easy to set up and use for common form-related tasks.
4. **Community and Ecosystem**: Formik has a large community and a mature ecosystem with many examples, plugins, and integrations.

### Formik Cons:

1. **Performance**: Formik can lead to performance issues with large forms due to frequent re-renders, as it uses controlled components.
2. **Boilerplate Code**: It often requires more boilerplate code compared to React-Hook-Form, which can lead to larger and more complex codebases.
3. **Bundle Size**: Formik has a larger bundle size, which might be a consideration for performance-critical applications.

### React-Hook-Form and Formik Comparison Table

|  | React Hook Form | Formik |
| --- | --- | --- |
| Gzipped bundle size | 12.12KB | 44.34KB |
| Dependencies | 0 | 8 |
| GitHub stars | 40.2k | 33.7k |
| Active maintenance | Yes | No |
| Performance | Good | Good |
| Documentation | Good | Good |
| License | MIT | Apache 2.0 |
| NPM weekly downloads | 5.2 Million | 2.7 Million |
| Pricing | Free | Free |
| Community support | Good | Good |
| Open GitHub issues | 13 | 688 |
| Closed GitHub issues | 4,384 | 1,550 |

### The Good

The combination of simplified form management, performance optimizations, built-in validation, ease of integration, strong community support, flexibility, and enhanced developer experience makes React-Hook-Form and Formik highly desirable tools for handling forms in React applications. They help reduce boilerplate code, improve form handling efficiency, and allow developers to build robust and scalable forms with minimal effort.

### The Bad and Ugly

While React-Hook-Form and Formik offer many advantages for form handling in React applications, there are valid reasons why developers might choose to avoid them. These include concerns about the learning curve, bundle size, performance, unnecessary complexity for simple forms, dependency management, a preference for native React solutions, potential issues with library maintenance, specific project requirements, and integration challenges. Developers must weigh these considerations against the benefits to determine the best approach for their specific use case.

### React-Hook-Form and Formik Comparison Summary

- **Performance**: React-Hook-Form generally offers better performance due to its use of uncontrolled components and ref APIs. Formik's use of controlled components can lead to more frequent re-renders and potentially slower performance in large forms.
- **Ease of Use**: Formik might be easier to use initially due to its comprehensive documentation and built-in features. React-Hook-Form requires understanding some more advanced concepts but rewards you with less boilerplate and better performance.
- **Flexibility and Extensibility**: Both libraries are flexible, but React-Hook-Form's minimalistic approach can make it easier to integrate with other tools and libraries without added complexity.
- **Bundle Size**: React-Hook-Form has a smaller bundle size, making it a better choice for performance-sensitive applications.

Ultimately, the choice between React-Hook-Form and Formik depends on the specific needs of your project. If performance and minimal boilerplate are top priorities, React-Hook-Form is a great choice. If you need a feature-rich solution with robust validation and form handling out-of-the-box, Formik might be more suitable.
johnpagley
1,910,293
Step-by-Step Guide: Estimated Reading Time in Bear Blog
Learn how to display the estimated reading time for your Bear blog posts using a simple script. Adjust placement and reading speed easily.
0
2024-07-03T14:20:56
https://dev.to/yordiverkroost/step-by-step-guide-estimated-reading-time-in-bear-blog-3maf
bear, blog, reading, development
---
title: Step-by-Step Guide: Estimated Reading Time in Bear Blog
published: true
description: Learn how to display the estimated reading time for your Bear blog posts using a simple script. Adjust placement and reading speed easily.
tags: Bear, Blog, Reading, Development
cover_image: https://bear-images.sfo2.cdn.digitaloceanspaces.com/yordi-1720013282.webp
# Use a ratio of 100:42 for best results.
# published_at: 2024-07-03 14:19 +0000
---

I'll be very short for those who just clicked this title and want results. To show the estimated number of minutes it takes to read a blog post, add the following script to the footer directive of your Bear blog:

```javascript
<script src="https://cdn.jsdelivr.net/gh/froodooo/bear-plugins@0.0.26/bear/reading-time.js" />
```

This shows the estimated reading time just below the post date, using 255 words per minute as an average reading speed. The placement and reading speed can be changed by adding the following data attributes to the script:

- `data-before-child`: \[number\] (*defaults to 4*)
- `data-wpm`: \[number\] (*defaults to 255*)

For example, to put the estimated reading time above the post date and set an average reading speed of 200 words per minute, use the following script:

```javascript
<script src="https://cdn.jsdelivr.net/gh/froodooo/bear-plugins@0.0.26/bear/reading-time.js" data-before-child="3" data-wpm="200"/>
```

There you go, enjoy!

# The code

For those who are interested, this is the full script that is called in the custom footer directive of a Bear blog:

```javascript
if (document.querySelector("body").classList.contains("post")) {
  const readingTime = Math.ceil(
    document.querySelector("main").innerText.trim().split(/\s+/).length /
      parseInt(document.currentScript.getAttribute("data-wpm") ?? 255)
  );
  document
    .querySelector("main")
    .insertBefore(
      document.body.appendChild(
        Object.assign(document.createElement("p"), {
          className: "reading-time",
          innerHTML: `Reading time: ${readingTime} minute${readingTime > 1 ? "s" : ""}`
        })
      ),
      document.querySelector("main").childNodes[
        parseInt(document.currentScript.getAttribute("data-before-child") ?? 4)
      ]
    );
}
```

The first line makes sure that this code is only executed on blog post pages. Then, the estimated reading time in minutes is calculated by counting the words in the blog post (by splitting on the whitespace between words) and dividing that count by the words-per-minute value. The resulting value is rounded up. Finally, a new paragraph is inserted on the page as a child of the `main` element. A ternary operation makes sure that there is no "s" behind the word "minute" if the reading time is equal to 1 minute.
yordiverkroost
1,909,514
Linux User Creation Bash Script
The purpose of this script is to read a text file containing an employee’s usernames and group names,...
0
2024-07-03T14:19:04
https://dev.to/wanjiru/linux-user-creation-bash-script-5ff3
The purpose of this script is to read a text file containing employees' usernames and group names, where each line is formatted as `user;groups`. The script should create users and groups as specified, set up home directories with appropriate permissions and ownership, and generate random passwords for the users.

The first line in this script is called a shebang, which tells the OS which interpreter to use; in this case, the script will be interpreted and executed using the Bash shell.

```
#!/bin/bash
```

Some instances within the script require elevated permissions. To ensure that there are no errors when the script is executed, it is best to ensure that one is a root user when executing the script.

```
ROOT_UID=0

if [ "$UID" -ne "$ROOT_UID" ]; then
  echo "***** You must be the root user to run this script! *****"
  exit
fi
```

**Key Functions**

**1. create_directories()**

We first need to create two files, _/var/log/user_management.log_ and _/var/secure/user_passwords.csv_, along with the directories that hold them. The _/var/log/user_management.log_ file will be used to log all events that happen in our script and can be reviewed for troubleshooting. The _/var/secure/user_passwords.csv_ file will be used to store the created usernames and their passwords. This file is highly sensitive and should only be accessible to the owner. To achieve this, the permissions on the secure directory are set to 700. _chmod_ is used to set the appropriate permissions and _chown_ is used to set ownership.

```
log_dir="/var/log"
log_file="$log_dir/user_management.log"
secure_dir="/var/secure"
password_file="$secure_dir/user_passwords.csv"

# Function to create directories if they don't exist and assign the necessary permissions
create_directories() {
  # Create log directory if it doesn't exist
  if [ ! -d "$log_dir" ]; then
    sudo mkdir -p "$log_dir"
    sudo chmod 755 "$log_dir"
    sudo chown root:root "$log_dir"
  fi

  # Create secure directory if it doesn't exist
  if [ ! -d "$secure_dir" ]; then
    sudo mkdir -p "$secure_dir"
    sudo chmod 700 "$secure_dir"
    sudo chown root:root "$secure_dir"
  fi
}
```

**2. log()**

The log() function records script activities with timestamps (`date`) in the _/var/log/user_management.log_ file.

```
log() {
  local timestamp=$(date +"%Y-%m-%d %H:%M:%S")
  echo "$timestamp $1" >> "$log_file"
}
```

**3. generate_password()**

Before we can write a function to create a user, we first need to generate a random password for the newly created users.

```
generate_password() {
  # Set the desired length of the password
  local password_length=12

  # Generate the password
  local password="$(openssl rand -base64 12 | tr -d '/+' | head -c $password_length)"

  # Output the generated password
  echo "$password"
}
```

**File Handling**

The _process_user_file()_ function ensures the file exists and is readable before proceeding to create users and manage groups accordingly.

```
process_user_file() {
  local filename="$1"

  # Check if the file exists and is readable
  if [ ! -f "$filename" ]; then
    echo "****Error: File '$filename' not found or is not readable.****"
    log "Error: File '$filename' not found or is not readable."
    return 1
  fi
```

If the file is valid, a while loop reads the file line by line, splits each line into _username_ and _groups_, and then calls the function _create_user_ with the username and groups as arguments.

```
while IFS=';' read -r username groups; do
  if [[ ! -z "$username" && ! -z "$groups" ]]; then
    create_user "$username" "$groups"
  else
    echo "****Invalid format in line: '$username;$groups'****"
    log "Invalid format in line: '$username;$groups'"
  fi
done < "$filename"
```

**User Management**

Using the variables provided by the _process_user_file_ function, we can create a user and generate a random password for them using the _generate_password_ function. This command creates a user with a home directory _/home/$username_.

```
sudo useradd -m -p "$(openssl passwd -6 "$password")" "$username"

# Making the user the owner of the directory
sudo chown "$username:$username" "/home/$username"
```

By default, when a user is created in most Linux distributions, a group with the same name as the user's username is created; this group is usually the primary group of the user. However, to be on the safe side, we can check if the group already exists and, if not, create the group, add the user to it, and make it the user's primary group.

```
if ! grep -q "^$username:" /etc/group; then
  sudo groupadd "$username"

  # Adding the user to the group, which is the primary group
  sudo usermod -aG "$username" "$username"

  # Change the primary group of the user
  sudo usermod -g "$username" "$username"
fi
```

In this last segment, we add the users to the specified groups. The variable _groups_ is split into an array known as _group_list_, and a for loop iterates over each element in _group_list_.

```
# Function to add users to specified groups
add_to_groups() {
  local username="$1"
  local groups="$2"

  IFS=',' read -ra group_list <<< "$groups"
  for group in "${group_list[@]}"; do
    if grep -q "^$group:" /etc/group; then
      sudo usermod -aG "$group" "$username"
      log "User '$username' added to group '$group' successfully."
      echo "****User '$username' added to group '$group' successfully.****"
    else
      log "Group '$group' does not exist. Skipping addition of user '$username'."
      echo "****Group '$group' does not exist. Skipping addition of user '$username'.****"
    fi
  done
}
```

To make the script executable, use the chmod command in combination with the +x option:

```
chmod +x path/directory/script.sh
```

To execute the script, run:

```
./path/directory/script.sh text_file
```

You can view the full script at [Github](https://github.com/wanjirumurira/Linux-User-Creation-Bash-Script.git). This script was a task to be completed during my HNG internship.
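To see how the `user;groups` line format is split before any accounts are touched, here is a minimal, self-contained sketch of the parsing step on its own (no `useradd` calls; the sample file name `users.txt` and its contents are just examples):

```shell
#!/bin/sh
# Build a sample input file in the user;groups format described above.
printf 'alice;sudo,dev\nbob;www\n' > users.txt

# Split each line on ';' into the username and its comma-separated groups,
# then split the group list on ',' - exactly the shape of data the script
# hands to useradd/groupadd.
while IFS=';' read -r username groups; do
  echo "user: $username"
  for group in $(echo "$groups" | tr ',' ' '); do
    echo "  group: $group"
  done
done < users.txt
```

Running this prints each username followed by its indented group names, which makes it easy to verify an input file before pointing the real script at it.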
For those interested in practical learning and real-life scenarios, check out the [HNG internship program](https://hng.tech/internship). It's a great opportunity to gain hands-on experience! To maximize your internship experience, consider upgrading to their premium package at [HNG Premium](https://hng.tech/premium).
wanjiru
1,910,184
Automating User and Group Management with a Bash Script
I recently got accepted into the famous(https://hng.tech/hire). It is a very intense hands-on 8 week...
0
2024-07-03T14:17:31
https://dev.to/faruq2991/automating-user-and-group-management-with-a-bash-script-33fj
devops, linux, cloud, automation
> I recently got accepted into the famous(https://hng.tech/hire). It is a very intense hands-on 8 week program where your skills will be tested and legends are made. As a requirement for proceeding to the next stage, interns on every track are given tasks they are expected to execute, document and write an article to pass. this blog post is such. I would recommend anyone willing to upskill and gain employable skill to enroll into the program. (https://hng.tech/premium) https://hng.tech/internship > Hey there, fellow Sysadmin! We can all agree that managing users and groups is a bit difficult and time consuming. Trust me, I've been there. Picture this: it's 2 AM, you're knee-deep in energy drink cans, trying to add the 50th user to your system. Your eyes are crossing, and you're pretty sure you just gave someone access to critical codes by mistake. We've all been there, right? Well, chin up, because I'm about to introduce you to your new best friend: a Bash script that'll make user management a breeze! #### Script Breakdown ```#!/bin/bash declare -a users declare -a groups ``` The script starts with a shebang (`#!/bin/bash`) to specify the interpreter. It then declares two arrays, `users` and `groups`, to hold user and group data. 2. **Argument Check** ```bash if [[ $# -ne 1 ]]; then echo "Usage: $0 <input_file>" exit 1 fi input_file="$1" echo "Reading input file: $input_file" ``` The script checks and confirm that the script is run is run with exactly one argument (the input file). Otherwise, the exit code 1 is triggered which which indicates failure. The input file is then assigned to the variable `input_file`. 3. **Reading the Input File** ```bash function read_input() { local file="$1" if [[ ! -f "$file" ]]; then echo "File not found!" 
return 1 fi while IFS= read -r line; do user=$(echo "$line" | cut -d';' -f1) groups_list=$(echo "$line" | cut -d';' -f2 | tr -d '[:space:]') users+=("$user") groups+=("$groups_list") done < "$file" } read_input "$input_file" ``` The `read_input` function reads the input file line by line. Each line read is expected to contain a username and a comma-separated(delimiter) list of groups, separated by a semicolon. The function splits each line into a user and their groups, then adds these to the respective arrays. 4. **Verification of Data** ```bash echo "Users: ${users[@]}" echo "Groups: ${groups[@]}" ``` The script prints the `users` and `groups` arrays to verify the data read from the input file. 5. **Log and Password Files Setup** ```bash log_file="/var/log/user_management.log" password_file="/var/secure/user_passwords.txt" touch $log_file mkdir -p $(dirname $password_file) touch $password_file ``` The script sets up paths for a log file and a password file. It creates these files and their parent directories if they do not exist. 6. **User and Group Management** ```bash for (( i = 0; i < ${#users[@]}; i++ )); do user="${users[$i]}" user_groups="${groups[$i]}" if id "$user" &>/dev/null; then echo "User $user already exists, Skipped" | tee -a "$log_file" else # Create user useradd -m -s /bin/bash "$user" if [[ $? -ne 0 ]]; then echo "Failed to create user $user" | tee -a "$log_file" exit 1 fi echo "User $user created" | tee -a "$log_file" password=$(openssl rand -base64 50 | tr -dc 'A-Za-z0-9!?%=' | head -c 10) echo "$user:$password" | chpasswd if [[ $? -ne 0 ]]; then echo "Failed to set password for $user" | tee -a "$log_file" exit 1 fi echo "Password for $user set" | tee -a "$log_file" echo "$user:$password" >> "$password_file" if grep -q "^$user:" /etc/group; then echo "Personal group $user already exists" | tee -a "$log_file" else echo "Personal group $user does not exist, creating $user" | tee -a "$log_file" groupadd "$user" if [[ $? 
-ne 0 ]]; then echo "Failed to create personal group $user" | tee -a "$log_file" exit 1 fi fi usermod -aG "$user" "$user" if [[ $? -ne 0 ]]; then echo "Failed to add $user to $user group" | tee -a "$log_file" exit 1 fi echo "Added $user to $user group" | tee -a "$log_file" for group in $(echo "$user_groups" | tr ',' '\n'); do if grep -q "^$group:" /etc/group; then echo "Group $group already exists" | tee -a "$log_file" else echo "Group $group does not exist, creating $group" | tee -a "$log_file" groupadd "$group" if [[ $? -ne 0 ]]; then echo "Failed to create group $group" | tee -a "$log_file" exit 1 fi fi usermod -aG "$group" "$user" if [[ $? -ne 0 ]]; then echo "Failed to add $user to $group group" | tee -a "$log_file" exit 1 fi echo "Added $user to $group group" | tee -a "$log_file" done fi done ``` The main loop iterates over the `users` and `groups` arrays. For each user: - If the user already exists, it logs and skips to the next user. - If the user does not exist, it creates the user, sets a password, logs the details, and creates a personal group if it does not exist. - The user is added to their personal group and any additional groups specified. This script provides a comprehensive solution for automating the management of users and groups, ensuring consistent and secure setups based on predefined input data.
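For reference, an input file in the format the script expects might look like the sketch below (the usernames and groups are made up for illustration); the second half replays the same `cut`/`tr` split that `read_input` performs:

```shell
# Sample input file: one "user; group1,group2" entry per line.
cat > /tmp/users.txt <<'EOF'
alice; developers,admins
bob; developers
EOF

# The same parsing the script's read_input function performs:
while IFS= read -r line; do
  user=$(echo "$line" | cut -d';' -f1)
  groups_list=$(echo "$line" | cut -d';' -f2 | tr -d '[:space:]')
  echo "user=$user groups=$groups_list"
done < /tmp/users.txt
```

Running the full script as, say, `bash create_users.sh /tmp/users.txt` (the script filename here is assumed) would then create `alice` and `bob`, their personal groups, and the listed shared groups.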
faruq2991
1,910,291
Exploring the Newest Features in JavaScript ES2024
The JavaScript ecosystem continues to evolve rapidly, with each ECMAScript (ES) release introducing...
0
2024-07-03T14:17:10
https://dev.to/delia_code/exploring-the-newest-features-in-javascript-es2024-jie
webdev, javascript, programming, beginners
The JavaScript ecosystem continues to evolve rapidly, with each ECMAScript (ES) release introducing new features that enhance the language's capabilities and developer experience. ES2024 is no exception, bringing a host of exciting features that promise to improve code readability, performance, and overall efficiency. In this article, we'll explore the newest features in JavaScript ES2024 that are ready to be used, complete with practical examples and explanations to help you understand and apply them in your projects. ## 1. **Top-Level await** Top-level `await` simplifies the use of asynchronous operations at the module level without the need for an async function wrapper. ### Example: ```javascript // data.js const response = await fetch('https://api.example.com/data'); export const data = await response.json(); // main.js import { data } from './data.js'; console.log(data); ``` ### Status: **Stage 4** (Part of ES2022) ### Usage: Supported in modern browsers and Node.js. It can be used in module scripts to simplify asynchronous code at the top level. ## 2. **WeakRefs and FinalizationRegistry** WeakRefs and FinalizationRegistry provide new ways to manage memory in JavaScript by allowing references to objects that don't prevent those objects from being garbage-collected. ### Example: ```javascript let registry = new FinalizationRegistry((heldValue) => { console.log(`${heldValue} was collected`); }); let obj = {}; let ref = new WeakRef(obj); registry.register(obj, 'MyObject'); // Clear the reference to the object obj = null; // Simulate garbage collection setTimeout(() => { console.log(ref.deref()); // Output: undefined (if garbage collected) }, 1000); ``` ### Status: **Stage 4** (Part of ES2021) ### Usage: Supported in modern browsers and Node.js. It allows developers to manage memory more effectively by using weak references and registering finalization callbacks. ## 3. 
**Private Instance Methods and Accessors** Building on the class fields and private instance fields introduced in previous ECMAScript versions, ES2024 adds support for private instance methods and accessors. This feature allows developers to define private methods and getters/setters that are only accessible within the class. ### Example: ```javascript class Person { #name; constructor(name) { this.#name = name; } #greet() { return `Hello, ${this.#name}!`; } get greeting() { return this.#greet(); } } const person = new Person('Alice'); console.log(person.greeting); // Output: "Hello, Alice!" ``` ### Status: **Stage 4** (Part of ES2022) ### Usage: Available in some JavaScript engines like V8 (used by Chrome and Node.js). It can also be used with transpilers like Babel for broader compatibility. ## 4. **Temporal** The Temporal API is a new, modern API for working with dates and times in JavaScript. It addresses many of the shortcomings of the existing Date API, providing a more comprehensive and user-friendly way to handle date and time manipulation. ### Example: ```javascript const now = Temporal.Now.plainDateTimeISO(); console.log(now.toString()); // e.g. "2024-07-02T14:23:47.123456789" const today = Temporal.Now.plainDateISO(); const birthDate = Temporal.PlainDate.from('1990-01-01'); const age = today.since(birthDate, { largestUnit: 'years' }).years; console.log(age); // e.g. 34 ``` ### Status: **Stage 3** (Proposal) ### Usage: Not yet part of the official standard, but available in some experimental builds and polyfills. Keep an eye on its progress to integrate it into projects once it reaches stage 4. JavaScript ES2024 introduces a range of powerful new features that enhance the language's capabilities and improve developer productivity. From top-level `await` and private instance methods to the Temporal API and WeakRefs, these features make JavaScript more robust, expressive, and easier to use. By staying up-to-date with these advancements, you can write more efficient and maintainable code. Happy coding with ES2024! 
### Further Reading - [ECMAScript 2024 Specification](https://tc39.es/ecma262/) - [MDN Web Docs: JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript) - [Chrome DevTools Documentation](https://developers.google.com/web/tools/chrome-devtools) - [Temporal Proposal](https://github.com/tc39/proposal-temporal) - [WeakRefs and FinalizationRegistry](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/WeakRef)
delia_code
1,910,290
React Custom Hooks vs. Helper Functions - When To Use Both
When working as a developer it is fairly common to come across various technologies and use cases on...
0
2024-07-03T14:16:42
https://dev.to/andrewbaisden/react-custom-hooks-vs-helper-functions-when-to-use-both-2587
webdev, javascript, programming, react
When working as a developer it is fairly common to come across various technologies and use cases on a day-to-day basis. Two popular concepts are React Custom Hooks and Helper functions. The concept of Helper functions has been around for a very long time whereas React Custom Hooks are still fairly modern. Both concepts allow developers to abstract and reuse the code that they write in different ways although they both have slightly different use cases. Today we will take a look at the similarities between the two and conclude on when is the right time to use each of them. Let's start by taking a look at what React Custom Hooks are. ## What are React Custom Hooks? React Custom Hooks are JavaScript functions which give you the ability to reuse stateful logic throughout your React codebase. When using Custom Hooks we tap into the React Hooks API that lets us manage our state and its side effects in a way that follows the React functional component process. One of the unique characteristics of Custom Hooks is being able to leverage state management which means that we can access the React built-in hooks like `useState`, and `useEffect`. Another unique identifier is the fact that we have to follow the named conventions for React Custom Hooks as we must prefix the start of the hook with the word `use` followed by its name, for example, `useFetch`. When we use Custom Hooks they can access and change a component state, plus lifecycle methods as they are deeply interconnected with the logic for React components. 
We can see an example of what a React Custom Hook looks like in this code example: ```javascript import { useState, useEffect } from 'react'; export function useFetch(url) { const [data, setData] = useState([]); const [error, setError] = useState(null); const [isLoading, setIsLoading] = useState(true); useEffect(() => { const fetchData = async () => { try { const json = await fetch(url).then((r) => r.json()); setData(json); setIsLoading(false); } catch (error) { setError(error); setIsLoading(false); } }; fetchData(); }, [url]); return { data, error, isLoading }; } ``` This Custom Hook is called `useFetch` and has reusable logic for fetching data from an API. It can manage the loading and error states and can be imported into multiple components. Now that we have a fundamental understanding of React Custom Hooks let's see how they compare to Helper Functions. ## What are Helper Functions? Helper functions are essentially standalone functions which are used for doing different calculations or tasks. These types of functions can be used anywhere inside your applications as they are not part of the React component or state management system. Another key difference is that they can be used in numerous programming languages and are not tied to any ecosystem. They are a concept that can be used anywhere. Unlike React Custom Hooks, helper functions perform calculations and tasks which are relevant to the given input. They cannot interact with side effects or component states. They also do not need predefined naming conventions like `use` and should be named based on whatever task you have designated them for. 
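Because helper functions are framework-agnostic, they run anywhere plain JavaScript runs. A minimal sketch (this `capitalize` helper is made up for illustration and is not part of the app built later):

```javascript
// A stateless helper: capitalizes a single word. No React, no state,
// no side effects — just input in, output out.
function capitalize(word) {
  if (!word) return '';
  return word.charAt(0).toUpperCase() + word.slice(1);
}

console.log(capitalize('pikachu')); // "Pikachu"
```

Notice the name describes the task, with no `use` prefix, and the function can be unit-tested in isolation without rendering a component.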
Take a look at this helper function in this example here: ```javascript import dayjs from 'dayjs'; function formatDate(date) { return dayjs(date).format('MM/DD/YYYY'); } export default formatDate; ``` In this code snippet, we use the Javascript date library [Day.js](https://day.js.org/) to parse and format the date, which gives us a more powerful method for formatting our dates. Ok with our updated understanding of React Custom Hooks and helper functions it's now a great time to see how we can incorporate both of them into a simple application. In the next section, we are going to build an app that uses them both. ## Building an app that uses a Custom Hook and Helper Function The app we will be building is a simple Pokémon Pokédex application which you can see in this picture. ![Pokemon Pokedex App Screen](https://res.cloudinary.com/d74fh3kw/image/upload/v1719917912/pokemon-app_hzkkag.png) We get the Pokemon data and information from the [Pokémon API](https://pokeapi.co/) and then we use the data to build our application which is then styled using Tailwind CSS. With our explanation done it's time to start building our app. You can find the codebase online here on GitHub [https://github.com/andrewbaisden/pokemon-pokedex-app](https://github.com/andrewbaisden/pokemon-pokedex-app) The first thing we have to do is set up our project structure so find a location on your computer for the project like the desktop and then run these commands to create a Next.js project. > On the Next.js setup screen make sure that you select yes for Tailwind CSS and the App Router so that our projects have the same setup. In this project, we will be using JavaScript, the other default settings should be fine. ```shell npx create-next-app pokemon-pokedex-app cd pokemon-pokedex-app ``` We should now have a Next.js project and we should be inside of the `pokemon-pokedex-app` folder so the next step is to install the JavaScript packages which we need for this application. 
We have to install `dayjs` for calculating times and dates and `uuid` for creating unique IDs for our Pokémon. Install both packages with this command: ```shell npm i dayjs uuid ``` Now that our packages are installed next we are going to create all of the files and folders for our project. Run the command below to create our project architecture: ```shell cd src/app mkdir components hooks utils mkdir components/Header components/Pokemon components/PokemonDetails touch components/Header/Header.js components/Pokemon/Pokemon.js components/PokemonDetails/PokemonDetails.js touch hooks/usePokemon.js touch utils/dateUtils.js utils/fetchUtils.js cd ../.. ``` With this command we: - Create a components folder for our Header, Pokemon and PokemonDetails components - Create a hooks folder for our `usePokemon` hook which fetches data from the Pokemon API - Create a `utils` folder for our fetch and date utility functions Ok, in the next steps, we shall add the code to our files and then our project will be completed, so open the project in your code editor. Up first is our `next.config.mjs` file in our root folder. Replace all of the code in that file with this new code here: ```javascript /** @type {import('next').NextConfig} */ const nextConfig = { images: { remotePatterns: [ { protocol: 'https', hostname: 'raw.githubusercontent.com', }, ], }, }; export default nextConfig; ``` All we are doing in this file is adding an image pattern for GitHub so that we can access the Pokémon images in our application without getting an error. We have to define that route so that it is approved. 
Now let's do our `layout.js` file so replace all of the code with the code below: ```javascript import { Yanone_Kaffeesatz } from 'next/font/google'; import './globals.css'; const yanone = Yanone_Kaffeesatz({ subsets: ['latin'] }); export const metadata = { title: 'Create Next App', description: 'Generated by create next app', }; export default function RootLayout({ children }) { return ( <html lang="en"> <body className={yanone.className}>{children}</body> </html> ); } ``` The main change in this file is using the `Yanone_Kaffeesatz` Google font for our application which replaces the default `Inter` font. The `globals.css` file is next on our list we just have to do some code cleanup. Like before replace the code with this snippet: ```css @tailwind base; @tailwind components; @tailwind utilities; body { font-size: 20px; } ``` We cleaned up some of the code and made the default font size 20px for our application. That takes care of our initial configuration files we just need to add the code for our components, hooks, utils and main page and our application will be ready. Starting from the top let's do our `Header.js` component inside of `Header/Header.js`. Add this code to our file: ```javascript import { useState, useEffect } from 'react'; import { getLiveDateTime } from '../../utils/dateUtils'; export default function Header() { const [dateTime, setDateTime] = useState(getLiveDateTime()); useEffect(() => { const interval = setInterval(() => { setDateTime(getLiveDateTime()); }, 1000); return () => clearInterval(interval); }, []); return ( <> <header className="flex row justify-between items-center bg-slate-900 text-white p-4 rounded-lg"> <div> <h1 className="text-5xl uppercase">Pokémon</h1> </div> <div> <p>Date: {dateTime.date}</p> <p>Time: {dateTime.time}</p> </div> </header> </> ); } ``` This component essentially displays the title for our application which is Pokémon and it also shows the live date and time. 
This is accomplished by importing the `utils/dateUtils.js` helper function that uses the `dayjs` JavaScript library for calculating the time and date. The next file to work on will be the `Pokemon.js` file in the `Pokemon` folder. Here is the code for our file: ```javascript import { useState, useEffect } from 'react'; import usePokemon from '../../hooks/usePokemon'; import { fetchPokemon } from '../../utils/fetchUtils'; import PokemonDetails from '../PokemonDetails/PokemonDetails'; export default function Pokemon() { const { data, isLoading, error } = usePokemon( 'https://pokeapi.co/api/v2/pokemon' ); const [pokemonDetails, setPokemonDetails] = useState([]); useEffect(() => { const fetchPokemonDetails = async () => { if (data && data.results) { const details = await Promise.all( data.results.map(async (pokemon) => { const pokemonData = await fetchPokemon(pokemon.url); return pokemonData; }) ); setPokemonDetails(details); } }; fetchPokemonDetails(); }, [data]); if (isLoading) { return <div>Loading...</div>; } if (error) { return <div>Error: {error.message}</div>; } return ( <> <div className="flex row flex-wrap gap-4 justify-evenly"> <PokemonDetails pokemonDetails={pokemonDetails} /> </div> </> ); } ``` This is our main Pokémon component file which uses our `usePokemon.js` hook for fetching data from the Pokémon API. This works alongside our utility `fetchUtils.js` file for fetching data. We have error handling set up for fetching data and our state data is passed down into our `PokemonDetails.js` component that renders our user interface. Right, we should add the code for our `PokemonDetails.js` file now in the `PokemonDetails` folder. Put this code in the file: ```javascript import Image from 'next/image'; import { v4 as uuidv4 } from 'uuid'; export default function PokemonDetails({ pokemonDetails }) { return ( <> {pokemonDetails.map((pokemon) => ( <div key={pokemon.id} className={ pokemon.types[0].type.name === 'fire' ? 
'bg-orange-400' : pokemon.types[0].type.name === 'water' ? 'bg-blue-400' : pokemon.types[0].type.name === 'grass' ? 'bg-green-400' : pokemon.types[0].type.name === 'bug' ? 'bg-green-700' : pokemon.types[0].type.name === 'normal' ? 'bg-slate-400' : '' } > <div className="text-white p-4"> <div className="capitalize"> <h1 className="text-4xl">{pokemon.name}</h1> </div> <div className="flex row gap-2 mt-4 mb-4"> <div className="bg-indigo-500 shadow-lg shadow-indigo-500/50 p-2 rounded-lg text-sm"> Height: {pokemon.height} </div> <div className="bg-indigo-500 shadow-lg shadow-indigo-500/50 p-2 rounded-lg text-sm"> Weight: {pokemon.weight} </div> </div> <div className="bg-white text-black rounded-lg p-4"> {pokemon.stats.map((stat) => ( <div key={uuidv4()}> <div className="capitalize flex row items-center gap-2"> <table> <tr> <td width={110}>{stat.stat.name}</td> <td width={40}>{stat.base_stat}</td> <td width={40}> <div style={{ width: `${stat.base_stat}px`, height: '0.5rem', backgroundColor: `${ stat.base_stat <= 29 ? 'red' : stat.base_stat <= 60 ? 'yellow' : 'green' }`, }} ></div> </td> </tr> </table> </div> </div> ))} </div> <div> <Image priority alt={pokemon.name} height={300} width={300} src={pokemon.sprites.other.home.front_default} /> </div> </div> </div> ))} </> ); } ``` Pretty much all of the code in this file is used for creating the interface for our Pokémon application. The styling is done using Tailwind CSS. Just a few more files to do before the project is done. The next file to work on will be the `usePokemon.js` file in our `hooks` folder. 
Our file will need this code so add it now: ```javascript import { useState, useEffect } from 'react'; import { fetchPokemon } from '../utils/fetchUtils'; const usePokemon = (initialUrl) => { const [data, setData] = useState(null); const [isLoading, setIsLoading] = useState(true); const [error, setError] = useState(null); useEffect(() => { const fetchData = async () => { try { const pokemonData = await fetchPokemon(initialUrl); setData(pokemonData); } catch (error) { setError(error); } finally { setIsLoading(false); } }; fetchData(); }, [initialUrl]); return { data, isLoading, error }; }; export default usePokemon; ``` This custom hook is used for fetching data from an API and in our case it will be for the Pokémon API. Now we will complete our `dateUtils.js` file in the `utils` folder with this code: ```javascript import dayjs from 'dayjs'; export const getLiveDateTime = () => { const now = dayjs(); return { date: now.format('MMMM D, YYYY'), time: now.format('h:mm:ss A'), }; }; ``` With this utility file, we use the `dayjs` JavaScript library to calculate dates and times in any file it is imported into. Ok now for our second utility file, `fetchUtils.js` add this code to the file: ```javascript export const fetchPokemon = async (url) => { try { const response = await fetch(url); const data = await response.json(); console.log(data); return data; } catch (error) { console.error('Error fetching Pokemon:', error); throw error; } }; ``` This utility file works with our `usePokemon.js` hook to fetch data from an API. Finally, let's complete our project by replacing and adding the code to our main `page.js` file in the root folder. 
This is the code we need for this file: ```javascript 'use client'; import Header from './components/Header/Header'; import Pokemon from './components/Pokemon/Pokemon'; export default function PokemonList() { return ( <div className="p-5"> <Header /> <h1 className="text-4xl mt-4 mb-4">Pokédex</h1> <Pokemon /> </div> ); } ``` Our `page.js` file is the main entry point for all of our components and with this code, our project is now completed. Run your project using the usual Next.js run script as shown here and you should see the Pokémon Pokédex application in your browser: ```shell npm run dev ``` ## Conclusion Today we learned why it's important to know the differences between helper functions and React Custom Hooks if you want to develop code that is organised, clean, and manageable. Custom Hooks are recommended for reusing stateful logic in React, while helper functions work best for stateless, general-purpose jobs. You can enhance your codebase's modularity and reusability by making good judgements about when to use each.
andrewbaisden
1,910,289
Top 20 React.JS interview questions.
As a React developer, it is important to have a solid understanding of the framework's key concepts...
0
2024-07-03T14:16:13
https://dev.to/hasan048/top-20-reactjs-interview-questions-5df3
react, webdev, javascript, programming
As a React developer, it is important to have a solid understanding of the framework's key concepts and principles. With this in mind, I have put together a list of 20 important questions that every React developer should know, whether they are interviewing for a job or just looking to improve their skills. Before diving into the questions and answers, I suggest trying to answer each question on your own before looking at the answers provided. This will help you gauge your current level of understanding and identify areas that may need further improvement. **Let's get started! ** ## 01. What is React and what are its benefits? Ans: React is a JavaScript library for building user interfaces. It is used for building web applications because it allows developers to create reusable UI components and manage the state of the application in an efficient and organized way. ## 02. What is the virtual DOM and how does it work? Ans: The Virtual DOM (Document Object Model) is a representation of the actual DOM in the browser. It enables React to update only the specific parts of a web page that need to change, instead of rewriting the entire page, leading to increased performance. When a component's state or props change, React will first create a new version of the Virtual DOM that reflects the updated state or props. It then compares this new version with the previous version to determine what has changed. Once the changes have been identified, React will then update the actual DOM with the minimum number of operations necessary to bring it in line with the new version of the Virtual DOM. This process is known as "reconciliation". The use of a Virtual DOM allows for more efficient updates because it reduces the amount of direct manipulation of the actual DOM, which can be a slow and resource-intensive process. 
By only updating the parts that have actually changed, React can improve the performance of an application, especially on slow devices or when dealing with large amounts of data. ## 03. How does React handle updates and rendering? Ans: React handles updates and rendering through a virtual DOM and component-based architecture. When a component's state or props change, React creates a new version of the virtual DOM that reflects the updated state or props, then compares it with the previous version to determine what has changed. React updates the actual DOM with the minimum number of operations necessary to bring it in line with the new version of the virtual DOM, a process called "reconciliation". React also uses a component-based architecture where each component has its own state and render method. It re-renders only the components that have actually changed. It does this efficiently and quickly, which is why React is known for its performance. ## 04. Explain the concept of Components in React? Ans: A React component is a JavaScript function or class that returns a React element, which describes the UI for a piece of the application. Components can accept inputs called "props", and manage their own state. ## 05. What is JSX and why is it used in React? Ans: JSX is a syntax extension for JavaScript that allows embedding HTML-like syntax in JavaScript. It is used in React to describe the UI, and is transpiled to plain JavaScript by a build tool such as Babel. ## 06. What is the difference between state and props? Ans: State and props are both used to store data in a React component, but they serve different purposes and have different characteristics. Props (short for "properties") are a way to pass data from a parent component to a child component. They are read-only and cannot be modified by the child component. State, on the other hand, is an object that holds the data of a component that can change over time. 
It can be updated using the setState() method and is used to control the behavior and rendering of a component. ## 07. What is the difference between controlled and uncontrolled components in React? Ans: In React, controlled and uncontrolled components refer to the way that forms are handled. A controlled component is a component where the state of the form is controlled by React, and updates to the form's inputs are handled by event handlers. An uncontrolled component, on the other hand, relies on the default behavior of the browser to handle updates to the form's inputs. A controlled component is a component where the value of input fields is set by state and changes are managed by React's event handlers, this allows for better control over the form's behavior and validation, and it makes it easy to handle form submission. On the other hand, an uncontrolled component is a component where the value of the input fields is set by the default value attribute, and changes are managed by the browser's default behavior, this approach is less performant and it's harder to handle form submission and validation. ## 08. What is Redux and how does it work with React? Ans: Redux is a predictable state management library for JavaScript applications, often used with React. It provides a centralized store for the application's state, and uses pure functions called reducers to update the state in response to actions. In a React app, Redux is integrated with React via the react-redux library, which provides the connect function for connecting components to the Redux store and dispatching actions. The components can access the state from the store, and dispatch actions to update the state, via props provided by the connect function. ## 09. Can you explain the concept of Higher Order Components (HOC) in React? Ans: A Higher Order Component (HOC) in React is a function that takes a component and returns a new component with additional props. 
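Stripped of JSX, the wrapping idea can be sketched in plain JavaScript (the `withAuth` wrapper and `isLoggedIn` prop here are illustrative, and components are modeled as plain functions of props so the sketch runs in Node):

```javascript
// A minimal HOC sketch: takes a component and returns a new component
// that injects an extra prop before delegating to the original.
function withAuth(Component) {
  return function Wrapped(props) {
    const isLoggedIn = true; // real code would read auth state here
    return Component({ ...props, isLoggedIn });
  };
}

const Greeting = ({ name, isLoggedIn }) =>
  isLoggedIn ? `Hello, ${name}!` : 'Please log in';

const GuardedGreeting = withAuth(Greeting);
console.log(GuardedGreeting({ name: 'Alice' })); // "Hello, Alice!"
```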
HOCs are used to reuse logic across multiple components, such as adding a common behavior or styling. HOCs are used by wrapping a component within the HOC, which returns a new component with the added props. The original component is passed as an argument to the HOC, and receives the additional props via destructuring. HOCs are pure functions, meaning they do not modify the original component, but return a new, enhanced component. For example, an HOC could be used to add authentication behavior to a component, such as checking if a user is logged in before rendering the component. The HOC would handle the logic for checking if the user is logged in, and pass a prop indicating the login status to the wrapped component. HOCs are a powerful pattern in React, allowing for code reuse and abstraction, while keeping the components modular and easy to maintain. ## 10. What is the difference between server-side rendering and client-side rendering in React? Ans: Server-side rendering (SSR) and client-side rendering (CSR) are two different ways of rendering a React application. In SSR, the initial HTML is generated on the server, and then sent to the client, where it is hydrated into a full React app. This results in a faster initial load time, as the HTML is already present on the page, and can be indexed by search engines. In CSR, the initial HTML is a minimal, empty document, and the React app is built and rendered entirely on the client. The client makes API calls to fetch the data required to render the UI. This results in a slower initial load time, but a more responsive and dynamic experience, as all the rendering is done on the client. ## 11. What are React Hooks and how do they work? Ans: React Hooks are a feature in React that allow functional components to have state and other lifecycle methods without using class components. They make it easier to reuse state and logic across multiple components, making code more concise and easier to read. 
Hooks include useState for adding state and useEffect for performing side effects in response to changes in state or props. They make it easier to write reusable, maintainable code. ## 12. How does React handle state management? Ans: React handles state management through its state object and setState() method. The state object is a data structure that stores values that change within a component and can be updated using the setState() method. The state updates trigger a re-render of the component, allowing it to display updated values dynamically. React updates the state in an asynchronous and batched manner, ensuring that multiple setState() calls are merged into a single update for better performance. ## 13. How does the useEffect hook work in React? Ans: The useEffect hook in React allows developers to perform side effects such as data fetching, subscription, and setting up/cleaning up timers, in functional components. It runs after every render, including the first render, and after the render is committed to the screen. The useEffect hook takes two arguments - a function to run after the render and an array of dependencies that determines when the effect should be run. If the dependency array is absent, the effect runs after every render; if it is an empty array, the effect runs only once, after the first render. ## 14. Can you explain the concept of server-side rendering in React? Ans: Server-side rendering (SSR) in React is the process of rendering components on the server and sending fully rendered HTML to the browser. SSR improves the initial loading performance and SEO of a React app by providing a fully rendered HTML to the browser, reducing the amount of JavaScript that needs to be parsed and executed on the client, and improving the indexing of a web page by search engines. In SSR, the React components are rendered on the server and sent to the client as a fully formed HTML string, improving the initial load time and providing a more SEO-friendly web page. ## 15. 
How does React handle events and what are some common event handlers? Ans: React handles events through its event handling system, where event handlers are passed as props to the components. Event handlers are functions that are executed when a specific event occurs, such as a user clicking a button. Common event handlers in React include onClick, onChange, onSubmit, etc. The event handler receives an event object, which contains information about the event, such as the target element, the type of event, and any data associated with the event. React event handlers should be passed as props to the components, and the event handlers should be defined within the component or in a separate helper function. ## 16. Can you explain the concept of React context? Ans: React context is a way to share data between components without passing props down manually through every level of the component tree. The context is created with a provider and consumed by multiple components using the useContext hook. ## 17. How does React handle routing and what are some popular routing libraries for React? Ans: React handles routing by using React Router library, which provides routing capabilities to React applications. Some popular routing libraries for React include React Router, Reach Router, and Next.js. ## 18. What are some best practices for performance optimization in React? Ans: Best practices for performance optimization in React include using memoization, avoiding unnecessary re-renders, using lazy loading for components and images, and using the right data structures. ## 19. How does React handle testing and what are some popular testing frameworks for React? Ans: React handles testing using testing frameworks such as Jest, Mocha, and Enzyme. Jest is a popular testing framework for React applications, while Mocha and Enzyme are also widely used. ## 20. How do you handle asynchronous data loading in React? 
Ans: Asynchronous data loading in React can be handled using various methods such as the fetch API, Axios, or other network libraries. It can also be handled using the useState and useEffect hooks to trigger a state update when data is returned from the API call. It is important to handle loading and error states properly to provide a good user experience. In conclusion, this blog post covers the top 20 major questions that a React developer should know in 2023. The questions cover a wide range of topics from the basics of React, its benefits and architecture, to more advanced concepts such as JSX, state and props, controlled and uncontrolled components, Redux, Higher Order Components, and more. By trying to answer each question yourself before looking at the answers, you can gain a deeper understanding of the React framework and become a better React developer.
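The asynchronous data-loading pattern from question 20 can be sketched without React at all — the loading/data/error shape below is the same one you would keep in useState. This is a minimal, framework-free sketch; `fetchUser` and `loadUser` are hypothetical names standing in for a real API call and a component's state update:

```javascript
// Hypothetical stand-in for a real network call (e.g. fetch or Axios).
function fetchUser(id) {
  return Promise.resolve({ id, name: "Ada" });
}

// The same { loading, data, error } shape you would keep in useState.
async function loadUser(id) {
  const state = { loading: true, data: null, error: null };
  try {
    state.data = await fetchUser(id);
  } catch (err) {
    state.error = err; // surface the failure instead of swallowing it
  } finally {
    state.loading = false; // always clear the loading flag
  }
  return state;
}
```

In a component, the `try/catch/finally` branches would each trigger a state update, which is what makes the loading spinner and error message render correctly.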
hasan048
1,910,275
AWS Prefix Lists help simplify Networking
I’ve been working with a client that acquired another company, and has multiple office sites, data...
0
2024-07-03T14:15:56
https://dev.to/aws-builders/aws-prefix-lists-help-simplify-networking-1ki4
aws, networking, vpc, cloud
I’ve been working with a client that acquired another company, and has multiple office sites, data centres, and a fair number of private networks. As part of the acquisition, they’ve been working on integrating the systems of the parent and the acquired company, and one element of that has been simplifying the IP address management by removing overlaps between private networks, including AWS VPCs. As a result, I’ve been working on setting up some new VPCs, and connecting them into the corporate network. While setting this up I discovered that [customer-managed prefix lists on AWS](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-managed-prefix-lists.html) can really simplify some use cases. By way of example, imagine a networking situation like this: ![Example Network](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b6r7tp4f2qq8vny2trom.png) With multiple customer networks and multiple VPCs, you can quickly end up needing to add a lot of CIDR blocks to a lot of places, particularly if you only want to target specific ranges, rather than a broad block (like 10.0.0.0/8). Using specific examples, you might need to: * add several CIDR ranges to each VPC’s route table to route the traffic to the transit gateway * add several CIDR ranges to one or more ports in one or more security groups in each VPC in order to allow access from customer networks If the organization adds another site or another CIDR block to an existing site, you might have to go back and find all the places you added those CIDR blocks to, and add it again. This can quickly get tedious. 
This is particularly important if you’re making these changes by hand in the AWS console, either because you don’t have your infrastructure setup automated or perhaps because you’re making an exploratory change ahead of changing your infrastructure automation. But even if your infrastructure is fully automated with Terraform, CloudFormation, CDK or Pulumi, you might find that a prefix list makes a security group easier to read. For instance, this security group allows ICMP, HTTP and HTTPS from a prefix list: ![Security Group with Prefix List](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/90l2diysmtwvgmh6ix1b.png) And this one allows it from the CIDR ranges in the diagram above: ![Security Group with CIDRs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20qhas0ynjrxxk3oa6ur.png) I know which of these I find easier to visually inspect. So if you’re managing complicated networking on AWS and you haven’t taken a look at prefix lists, I hope I’ve convinced you that it’s time to take a look.
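The maintenance burden described above can be made concrete with a small sketch. The rule shapes and the prefix list ID below are simplified illustrations, not the real AWS API — the point is only the rule count:

```javascript
// Simplified, illustrative security-group rule shapes -- not the real AWS API.
const customerCidrs = ["192.168.8.0/22", "192.168.20.0/24", "10.20.0.0/16"];

// Without a prefix list: one rule per CIDR, per port, per security group.
function rulesFromCidrs(cidrs, ports) {
  return ports.flatMap((port) => cidrs.map((cidr) => ({ port, source: cidr })));
}

// With a prefix list: one rule per port; new CIDRs are added in one place.
function rulesFromPrefixList(prefixListId, ports) {
  return ports.map((port) => ({ port, source: prefixListId }));
}

const perCidr = rulesFromCidrs(customerCidrs, [80, 443]);
const perList = rulesFromPrefixList("pl-0123example", [80, 443]); // hypothetical ID
console.log(perCidr.length, perList.length); // 6 vs 2 rules
```

Adding a fourth customer CIDR grows the first approach by one rule per port per security group, while the second approach needs only a single prefix-list entry.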
geoffreywiseman
1,910,274
SRT Protocol - Secure Reliable Transport Protocol
The Secure Reliable Transport Protocol is a cutting edge technology. The SRT protocol is designed to...
0
2024-07-03T14:13:39
https://www.metered.ca/blog/srt-protocol-secure-reliable-transport-protocol/
webdev, javascript, devops, webrtc
The Secure Reliable Transport (SRT) protocol is a cutting-edge technology designed to enable secure and efficient data transmission over the internet. SRT was designed by Haivision and is an open-source protocol built primarily for real-time video and data delivery. High quality, low latency, and secure transmission are its key features; let us consider these and other key features below. ## **How does the SRT Protocol Work** Here are the key features of the SRT protocol. ### **1\. Security: How SRT ensures the security of data during transmission** #### **A. Encryption** * **AES Encryption:** Data piracy has become rampant in the streaming industry, and in a remote-working world businesses want to keep their meetings secure from competitors and attackers. To keep data secure, the SRT protocol has robust encryption mechanisms built in. SRT uses the Advanced Encryption Standard (AES), which is widely recognised as robust and efficient. SRT supports AES-128 and AES-256 encryption, providing a strong defence against unauthorized access so that no one can tamper with the data during transmission over the internet. * **End-to-End Encryption:** With the SRT protocol the data is encrypted end to end, meaning it stays encrypted from the sender all the way to the receiver. Even if the data is intercepted, it cannot be deciphered without the decryption key. #### **B.
Authentication** * **Token-Based Authentication:** The SRT protocol has a token-based authentication mechanism that verifies the identity of both the sender and the receiver of the transmission. **Process:** A unique token is generated and shared between the sender and the receiver. This token is then used to authenticate the devices before any data is shared between them. * **Digital Signatures:** The SRT protocol uses digital signatures to verify the authenticity of the data being transmitted. **Process:** The sender signs the data with the sender's private key, and the receiver verifies the data using the sender's public key. This guarantees the integrity and origin of the data (that it came from the original sender and not from some other third party). Verifying the digital signature ensures that the data has not been tampered with during transmission. ### **2\. Reliability: What mechanisms the SRT protocol uses to ensure reliable data transport** #### **A. Packet Loss Recovery** * **Retransmission Requests:** In the SRT protocol, the receiving client can request retransmission of lost or corrupted packets of data from the sender. When a client detects that a packet is corrupted or missing, it sends a NAK (Negative Acknowledgement) to the sending device, prompting retransmission of the data. **Process:** The receiver continuously monitors the incoming packets of data and the sequence of these packets.
If there is a gap in the sequence, the receiver sends a NAK request to the sender. * **Selective Acknowledgements (SACK):** When the receiving device sends a NAK, the sender retransmits only the packets that are missing, improving efficiency and reducing unnecessary data transmission. This is due to the selective acknowledgement mechanism built into the SRT protocol, which enables both devices to send and receive data effectively. **Process:** The receiver sends a SACK with sequence numbers that identify the successfully received packets and the missing packets. This allows the sender to resend only the missing packets, avoiding redundant data transfers between devices. #### **B. Error Correction** * **Forward Error Correction (FEC):** SRT also has a mechanism called Forward Error Correction. With FEC, the SRT protocol preemptively sends redundant data in the stream. This data can be used by the receiver to reconstruct missing or corrupted packets without the need for retransmission by the sender. **Process:** The sender sends parity packets along with the data stream based on the FEC algorithm. This allows the receiver to reconstruct the missing data using the parity packets if some data is lost in transmission. ### **3\. Low Latency: How does SRT achieve low-latency transmission** #### **A. Optimized Buffering** * **Dynamic Buffering:** SRT utilizes adaptive buffering strategies to minimize latency. The protocol dynamically adjusts the bitrate of the data stream in real time based on the current network bandwidth. This delivers smooth video under fluctuating bandwidth, for use cases such as mobile devices or unreliable networks. * **Minimal Buffering:** The SRT protocol keeps the buffer size as small as possible without compromising reliability, so SRT can achieve lower latency than traditional protocols. #### **B.
Real-Time Transport Protocol (RTP) Compatibility** * **RTP over SRT:** SRT can transport RTP streams. RTP is designed for real-time data transmission, and this compatibility allows SRT to leverage RTP's real-time capability, further reducing latency and enhancing the user experience for live streams. ### **4\. Network Adaptability: How SRT works in varying network conditions** #### **A. Adaptive Bitrate Streaming** * **Dynamic Bitrate Adjustment:** SRT adjusts the bitrate of video streams automatically according to network conditions. If there is ample bandwidth, SRT will stream high-quality video. If bandwidth decreases because of congestion, or the user is on variable mobile bandwidth, the SRT protocol delivers a smooth video stream at a slightly lower bitrate so that the video does not buffer. * **Congestion Control:** SRT has algorithms that monitor network traffic and adjust data transmission accordingly, so SRT tries to prevent network congestion and maintain a smooth data flow. #### **B. Resilience to Jitter and Packet Delay Variation** * **Jitter Buffer:** SRT incorporates a jitter buffer to compensate for packet delay variations (jitter) in the network. This buffer stores incoming packets and reorders them if they arrive out of sequence, ensuring smooth data delivery and a smooth video stream. * **Timestamp-Based Synchronization:** The SRT protocol uses timestamps to determine the sequence of arriving packets and to manage out-of-order delivery of data, so it can reconstruct the data correctly and in the right order.
#### **C. Error Resilience** * **Redundant Data Transmission:** In situations where data transmission is jittery and inconsistent, the SRT protocol transmits redundant data so that even if some packets are lost or corrupted, the video stream stays uninterrupted. * **Network Path Diversity:** The SRT protocol supports network path diversity, meaning it can use multiple internet connections and multiple network paths to reach the destination or client devices. This is helpful when one connection or path is slow or unreliable. ## **Comparison with Traditional Protocols: SRT vs RTMP, HTTP and WebRTC** #### **1\. SRT (Secure Reliable Transport)** SRT is a modern protocol designed for secure, reliable, low-latency video, audio, and data streaming over unreliable networks. It has many of the features required for modern data transmission, but requires skill because of the complexity involved in implementation. #### **2\. RTMP (Real-Time Messaging Protocol)** RTMP was designed by Adobe for streaming audio, video, and data over the internet. It is widely used for live streaming and on-demand media delivery. #### **3\. HTTP (Hypertext Transfer Protocol)** HTTP is a basic protocol that was developed for browsing the web. #### **4\. WebRTC (Web Real-Time Communication)** WebRTC is a collection of communication protocols and APIs that enable real-time communication and video and audio streaming between browsers and devices. ## **Advantages and disadvantages** | **Protocol** | **Advantages** | **Disadvantages** | | --- | --- | --- | | **SRT (Secure Reliable Transport)** | Data security with AES-128 and AES-256 encryption. | Difficult implementation and management.
| | | Token-based auth mechanism | Overhead from error correction and retransmission mechanisms | | | Packet loss recovery and smooth data transmission with NAK and SACK | | | | Dynamic buffering and low latency | | | | Adaptive bitrate streaming that adjusts to network conditions | | | | Resilient to network jitter and congestion | | | | Highly scalable | | | | Open source | | | RTMP (Real-Time Messaging Protocol) | Supports basic security mechanisms (RTMPS) | Outdated security measures | | | Suitable for live streaming | Less efficient packet-loss handling | | | Established protocol with wide CDN support | Less adaptability to varying network conditions compared to the SRT protocol | | HTTP (Hypertext Transfer Protocol) | Highly reliable, traditional data-transfer protocol | Not designed for low-latency, real-time streaming | | | Universally supported | Non-adaptive to changing network conditions | | | CDN support available | | | WebRTC (Web Real-Time Communication) | Built for video-calling applications | Requires TURN servers for large deployments | | | Secure protocol with built-in encryption using DTLS-SRTP | | | | Designed for low-latency, real-time video communication | | | | Adapts to varying network conditions | | ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7pbxpseug12njlhducla.png) ## [**Metered TURN servers**](https://www.metered.ca/stun-turn) 1. **API:** TURN server management with a powerful API. You can do things like add/remove credentials via the API, retrieve per-user / per-credential metrics via the API, enable/disable credentials via the API, and retrieve usage data by date via the API. 2. **Global Geo-Location targeting:** Automatically directs traffic to the nearest servers, for lowest possible latency and highest quality performance. Less than 50 ms latency anywhere around the world. 3.
**Servers in all the Regions of the world:** Toronto, Miami, San Francisco, Amsterdam, London, Frankfurt, Bangalore, Singapore, Sydney, Seoul, Dallas, New York. 4. **Low Latency:** less than 50 ms latency, anywhere across the world. 5. **Cost-Effective:** pay-as-you-go pricing with bandwidth and volume discounts available. 6. **Easy Administration:** Get usage logs, emails when accounts reach threshold limits, billing records and email and phone support. 7. **Standards Compliant:** Conforms to RFCs 5389, 5769, 5780, 5766, 6062, 6156, 5245, 5768, 6336, 6544, 5928 over UDP, TCP, TLS, and DTLS. 8. **Multi‑Tenancy:** Create multiple credentials and separate the usage by customer, or different apps. Get usage logs, billing records and threshold alerts. 9. **Enterprise Reliability:** 99.999% Uptime with SLA. 10. **Enterprise Scale:** With no limit on concurrent traffic or total traffic. Metered TURN Servers provide Enterprise Scalability. 11. **5 GB/mo Free:** Get 5 GB of free TURN server usage every month with the Free Plan. 12. Runs on ports 80 and 443. 13. Supports TURNS + SSL to allow connections through deep packet inspection firewalls. 14. Supports both TCP and UDP. 15. Free unlimited STUN.
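The sequence-gap detection that drives SRT's NAK-based retransmission (described under Packet Loss Recovery above) can be sketched in a few lines. This is a minimal illustration assuming simple integer sequence numbers, not the real SRT wire format:

```javascript
// Detect gaps in received packet sequence numbers and list the
// missing sequence numbers a receiver would NAK for retransmission.
function missingSequences(received) {
  const sorted = [...received].sort((a, b) => a - b);
  const missing = [];
  for (let i = 1; i < sorted.length; i++) {
    // Every number skipped between two consecutive received packets
    // is a lost packet the receiver should request again.
    for (let seq = sorted[i - 1] + 1; seq < sorted[i]; seq++) {
      missing.push(seq);
    }
  }
  return missing;
}

console.log(missingSequences([1, 2, 5, 6, 9])); // [3, 4, 7, 8]
```

A SACK-style report would carry the complement of this list (the sequence numbers that did arrive), letting the sender resend only what is missing.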
alakkadshaw
1,910,283
How to keep your Azure infrastructure highly available - Configuring data redundancy
Data redundancy is synonymous with keeping data replicas. It plays a key part in highly available...
27,951
2024-07-03T14:11:42
https://blog.q-bit.me/how-to-keep-your-azure-infrastructure-highly-available-configuring-data-redundancy/
azure, todayilearned, microsoft, cloud
Data redundancy is synonymous with keeping data replicas. It plays a key part in highly available infrastructure by ensuring data is not lost, even if it is accidentally deleted, corrupted, or encrypted by malware. This article focuses on redundancy in storage accounts. Resources this concept can be used on ------------------------------------- All services that are part of an Azure storage account can be replicated. However, some strategies might not be available for lower-tier Stock Keeping Units (SKUs). Here are the current [SKU types for storage accounts and their respective replication options](https://learn.microsoft.com/en-us/rest/api/storagerp/srp_sku_types). Types of redundancy ------------------- ### Locally redundant storage Locally redundant storage keeps your data on three drives in a single data center. It's an option for non-critical scenarios. The standard SKU can be used for all kinds of storage accounts and is available in most - if not all - regions. Premium LRS is available only for more specific types of storage, such as Page Blobs, Block Blobs, and File Shares. This SKU is optimized for low latency and fast IO operations. > ✅ Premium SKUs are generally only available for locally redundant storage and zone redundant storage, depending on the region they're located in. ![](https://blog.q-bit.me/content/images/2024/07/image-7.png) ### Zone redundant storage Zone redundant storage replicates your data over three [availability zones](https://blog.q-bit.me/how-to-keep-your-azure-apps-software-highly-available/#h2-2) in a single region. It's a compromise between data security and low latencies. In a full availability zone outage, your data remains safely stored in the two others. Its premium SKU may not be available in all regions. ![](https://blog.q-bit.me/content/images/2024/07/image-8.png) ### Geo-redundant storage If availability zones sound too risky, you might want to replicate your data worldwide.
Geo-redundant storage replicates your locally redundant data to a second region. It protects against greater local outages affecting all three availability zones from ZRS. Geo-redundant replication to the secondary region is handled asynchronously, and the replicated data is not accessible unless there is a failover in the primary region. That is, unless you configure **RA-GRS**. With Read-Access Geo-redundant storage, data in the secondary region becomes available for read-only access. This strategy is useful to provide low-latency access to replicated files, for example, if you have a branch office in the secondary region. When to use what replication strategy ------------------------------------- While there are only a handful of redundancy options, things can get complicated when combining them with account types. In the following, I'll name only a few useful combinations. Which strategy you use depends highly on your company's demands. ### Standard General PurposeV2 LRS In many cases, LRS with a standard General PurposeV2 account will already suit your needs. It keeps three copies of your data in a single data center and ensures 99.999999999% (eleven 9s) data durability. While not IO-optimized, it provides great latency in your chosen region and gives you plenty of options to store files and other data. Use cases include: * Provisioning of standard VHDs for your company, for example, to share files on a virtual drive * Storing data of a virtual machine or a set of VMs * Storing default OS image files as page blobs * Storing non-regulatory and non-mission-critical files where a loss will not result in huge legal fees ### Premium block blobs ZRS In the age of large language models, data is the new gold. Training the models requires a lot of it. It's also demanding in terms of computing power. Machine learning algorithms generally perform better with low latency. If you spend lots of time and money training models (or accessing them), this SKU will be your choice.
At the same time, ZRS or GRS will keep your models and training data safe. While the additional cost may seem over the top, eliminating the remaining probability of losing your model and its training data may be worth it. Generally, use cases here include * Secure provisioning of mission-critical, unstructured data of large sizes * Low-latency access to training data * Provisioning data for big-data analytics ### Standard General PurposeV2 RA-GZRS With the highest durability Azure offers (99.99999999999999%, sixteen 9s), GZRS with read access solves many problems at once. Besides keeping your data safe from what would have to be several natural disasters happening at once, it helps you reduce latency for associates in the secondary region. This is especially useful for regulatory or compliance information needed in both locations, which will be instantly available even during an outage. Use cases here might include * Provisioning of mission-critical legal, compliance, or government data * The most secure option for long-term storage of cold-access tier data
tqbit
1,910,287
Level Up Your GitHub Repo Config Game
Stop with Spreadsheets &amp; Start Automating! If you’re like many open source project maintainers,...
0
2024-07-03T14:11:32
https://dev.to/stacey_potter_3de75e600a1/level-up-your-github-repo-config-game-8lb
github, opensource, security, cloudnative
_Stop with Spreadsheets & Start Automating!_ If you’re like many open source project maintainers, your project might span tens or hundreds of GitHub repos, and your repo configuration may be wildly variable. How do you make sure that your repos always have a standard configuration in place, like a code of conduct, a security.md file, a license file, secret scanning, and Dependabot? It’s a lot for maintainers to remember and continuously monitor. Fortunately, you don’t have to — there are tools available to help. In this video, Stacklok engineer and Knative maintainer Evan Anderson covers the breadth of free and open source tools available to help you keep your GitHub repos (and Actions) consistently configured and secure for your end users. {% embed https://youtu.be/wdw69jix8Eo %} [Learn more about Minder](https://stacklok.com/minder) or [Get started with Minder today](cloud.stacklok.com)
stacey_potter_3de75e600a1
1,910,286
Scope in JavaScript with example
Scope in JavaScript refers to the visibility and accessibility of variables, functions, and objects...
0
2024-07-03T14:10:53
https://dev.to/hasan048/scop-in-javascript-with-example-2i4d
webdev, javascript, beginners, tutorial
Scope in JavaScript refers to the visibility and accessibility of variables, functions, and objects in different parts of your code. JavaScript has three main types of scope: 1. Global scope 2. Function scope 3. Block scope (introduced in ES6 with let and const) Here's an example demonstrating these scopes:

```javascript
// Global scope
let globalVar = "I'm global";

function exampleFunction() {
  // Function scope
  let functionVar = "I'm function-scoped";

  if (true) {
    // Block scope
    let blockVar = "I'm block-scoped";
    var functionScopedVar = "I'm function-scoped too";

    console.log(globalVar); // Accessible
    console.log(functionVar); // Accessible
    console.log(blockVar); // Accessible
    console.log(functionScopedVar); // Accessible
  }

  console.log(globalVar); // Accessible
  console.log(functionVar); // Accessible
  console.log(functionScopedVar); // Accessible
  // console.log(blockVar); // Error: blockVar is not defined
}

exampleFunction();

console.log(globalVar); // Accessible
// console.log(functionVar); // Error: functionVar is not defined
// console.log(blockVar); // Error: blockVar is not defined
// console.log(functionScopedVar); // Error: functionScopedVar is not defined
```
hasan048
1,910,285
Connect MongoDB database with Next JS App(the simplest way)
In Next.js, especially when deploying on serverless environments or edge networks, database...
0
2024-07-03T14:10:24
https://dev.to/saadnaeem/connect-mongodb-database-with-next-js-appthe-simplest-way-3km9
mongodbwithnext, mongodbnext, nextjs, mongodb
In Next.js, especially when deploying on serverless environments or edge networks, database connections are established on every request. This is due to the nature of serverless functions, which do not maintain state between requests and can be spun up or down as needed. Here are some key points to consider: **Serverless Functions:** Next.js uses serverless functions for API routes and server-side rendering (SSR). Each invocation of a serverless function is stateless and isolated, meaning it doesn't maintain a persistent connection to a database. Therefore, a new database connection must be established for each request.

```
import mongoose from "mongoose";

// Object to track connection state
type ConnectionObject = {
  isConnected?: number;
};

// Single connection object to be used across requests
const connection: ConnectionObject = {};

// Function to connect to the database
async function dbConnect(): Promise<void> {
  // Check if already connected to avoid redundant connections
  if (connection.isConnected) {
    console.log("Already Connected");
    return;
  }

  try {
    // Establish a new connection
    const db = await mongoose.connect(process.env.MONGO_URI || "", {
      // Use appropriate options here based on your setup
    });

    // Track the connection state
    connection.isConnected = db.connections[0].readyState;

    console.log("DB is connected");
  } catch (error) {
    console.log("DB connection error:", error);
    process.exit(1); // Exit the process if unable to connect
  }
}

export default dbConnect;
```

Thank you. See you later!
saadnaeem
1,910,284
how to create new permission groups, permission category
How to update and control permissions of an app and/or menu items using xml first of all we need...
0
2024-07-03T14:10:08
https://dev.to/jeevanizm/how-to-create-new-permission-groups-permission-category-7lh
odoo
How to update and control permissions of an app and/or menu items using XML: 1. First of all, we need to override `ir_module_category_data.xml` 2. Locate the menu items we need to hide/show based on permissions 3. Create the new permission groups, if required, under a new category Full code below:

```
<?xml version="1.0" encoding="utf-8"?>
<odoo>
  <data>
    <record model="ir.module.category" id="category_custompermission">
      <field name="name">The E-Cig Store Permissions</field>
      <field name="sequence">70</field>
      <field name="visible" eval="1" />
    </record>

    <record id="group_ecig_view_contacts_app" model="res.groups">
      <field name="name">View Contacts App</field>
      <field name="category_id" ref="custom_module_name.category_custompermission"/>
    </record>

    <record id="group_ecig_view_employees_app" model="res.groups">
      <field name="name">View Employees App</field>
      <field name="category_id" ref="custom_module_name.category_custompermission"/>
    </record>

    <record id="group_ecig_view_pos_app" model="res.groups">
      <field name="name">View POS Ribbon</field>
      <field name="category_id" ref="custom_module_name.category_custompermission"/>
    </record>

    <record id="contacts.menu_contacts" model="ir.ui.menu">
      <field name="groups_id" eval="[(5,0),(4, ref('custom_module_name.group_ecig_view_contacts_app'))]"/>
    </record>

    <record id="hr.menu_hr_root" model="ir.ui.menu">
      <field name="groups_id" eval="[(5,0),(4, ref('custom_module_name.group_ecig_view_employees_app'))]"/>
    </record>

    <record id="point_of_sale.pos_config_menu_catalog" model="ir.ui.menu">
      <field name="groups_id" eval="[(5,0),(4, ref('custom_module_name.group_ecig_view_pos_app'))]"/>
    </record>

    <record id="point_of_sale.menu_point_of_sale" model="ir.ui.menu">
      <field name="groups_id" eval="[(5,0),(4, ref('custom_module_name.group_ecig_view_pos_app'))]"/>
    </record>
  </data>
</odoo>
```
jeevanizm
1,910,281
Betvisa: Easy Betting
Betting has always been a popular pastime, combining the thrill of predicting outcomes with the...
0
2024-07-03T14:09:40
https://dev.to/betvisa/betvisa-ca-cuoc-de-dang-1ma8
gamedev
Cá cược luôn là một trò tiêu khiển phổ biến, kết hợp cảm giác hồi hộp khi dự đoán kết quả với khả năng nhận được phần thưởng sinh lợi. Tuy nhiên, việc điều hướng thế giới cá cược có thể gây choáng ngợp cho cả người mới chơi cũng như người đặt cược dày dạn kinh nghiệm. Đó là lúc betvisa xuất hiện, cách mạng hóa trải nghiệm cá cược bằng cách làm cho mọi người dễ dàng hơn, dễ tiếp cận hơn và thú vị hơn. Betvisa là gì? Betvisa là một nền tảng cá cược sáng tạo được thiết kế để đơn giản hóa quy trình cá cược. Cho dù bạn quan tâm đến cá cược thể thao, trò chơi sòng bạc hay các hình thức cá cược khác, [Betvisa](https://www.betvisa-bet.com/vi) đều cung cấp giao diện thân thiện với người dùng và nhiều tùy chọn để phục vụ cho mọi loại người đặt cược. Các tính năng chính của Betvisa Giao diện thân thiện với người dùng: Một trong những tính năng nổi bật của Betvisa là thiết kế trực quan. Nền tảng này dễ điều hướng, cho phép người dùng nhanh chóng tìm thấy các tùy chọn cá cược ưa thích của họ. Cho dù bạn đang truy cập nó trên máy tính để bàn hay thiết bị di động, Betvisa đều đảm bảo trải nghiệm liền mạch. Nhiều lựa chọn cá cược: Betvisa bao gồm rất nhiều thị trường cá cược. Từ các môn thể thao phổ biến như bóng đá, bóng rổ và quần vợt đến các thị trường thích hợp như thể thao điện tử và thể thao ảo, luôn có thứ gì đó dành cho tất cả mọi người. Ngoài ra, Betvisa còn cung cấp nhiều lựa chọn trò chơi sòng bạc, bao gồm máy đánh bạc, bài poker và trò chơi người chia bài trực tiếp. An toàn và đáng tin cậy: An toàn là điều tối quan trọng trong thế giới cá cược trực tuyến và Betvisa rất coi trọng điều này. Nền tảng này sử dụng các biện pháp bảo mật tiên tiến để bảo vệ thông tin cá nhân và tài chính của người dùng. Hơn nữa, Betvisa được cấp phép và quản lý đầy đủ, cung cấp một môi trường đáng tin cậy cho mọi hoạt động cá cược của bạn. Tỷ lệ cược cạnh tranh: Betvisa cam kết cung cấp tỷ lệ cược cạnh tranh trên tất cả các thị trường cá cược của mình. 
Điều này có nghĩa là người đặt cược có cơ hội tối đa hóa lợi nhuận tiềm năng của mình, khiến Betvisa trở thành lựa chọn ưa thích của nhiều người. Khuyến mãi và tiền thưởng: Để nâng cao trải nghiệm cá cược, Betvisa cung cấp nhiều chương trình khuyến mãi và tiền thưởng. Người dùng mới có thể tận dụng các phần thưởng chào mừng, trong khi những người đặt cược thường xuyên có thể hưởng lợi từ các chương trình khuyến mãi liên tục, phần thưởng dành cho khách hàng trung thành và ưu đãi đặc biệt. Hỗ trợ khách hàng: Betvisa tự hào về dịch vụ khách hàng tuyệt vời của mình. Một nhóm hỗ trợ tận tâm luôn sẵn sàng hỗ trợ mọi thắc mắc hoặc vấn đề. Cho dù bạn cần trợ giúp trong việc đặt cược hay giải quyết vấn đề kỹ thuật, bộ phận hỗ trợ khách hàng của Betvisa luôn sẵn sàng trợ giúp. Tại sao chọn Betvisa? Betvisa nổi bật trong thị trường cá cược đông đúc vì nhiều lý do: Dễ sử dụng: Thiết kế đơn giản của nền tảng đảm bảo rằng ngay cả những người mới bắt đầu cũng có thể bắt đầu cá cược mà không gặp bất kỳ rắc rối nào. Tùy chọn cá cược đa dạng: Với vô số thị trường cá cược và trò chơi sòng bạc, Betvisa đáp ứng mọi sở thích. An toàn và bảo mật: Cam kết bảo mật của Betvisa đảm bảo một môi trường cá cược an toàn. Tiền thưởng hấp dẫn: Các chương trình khuyến mãi và tiền thưởng hào phóng làm tăng thêm giá trị cho trải nghiệm cá cược. Hỗ trợ khách hàng đáng tin cậy: Dịch vụ khách hàng 24/7 mang đến sự yên tâm và giải quyết nhanh chóng mọi vấn đề. Bắt đầu với Betvisa Bắt đầu hành trình cá cược của bạn với Betvisa thật đơn giản: Đăng ký: Tạo một tài khoản trên trang web Betvisa. Quá trình đăng ký nhanh chóng và đơn giản. Gửi tiền: Sử dụng một trong các phương thức thanh toán an toàn để gửi tiền vào tài khoản Betvisa của bạn. Khám phá và đặt cược: Duyệt qua các thị trường cá cược hoặc trò chơi sòng bạc có sẵn và đặt cược một cách dễ dàng. Rút tiền thắng cược: Tận hưởng cảm giác hồi hộp khi chiến thắng và rút tiền kiếm được của bạn thông qua quy trình an toàn và thuận tiện. 
Conclusion: Betvisa makes betting easy, enjoyable, and secure. Whether you are a sports enthusiast looking to back your favorite team or a casino lover seeking the thrill of the games, Betvisa has something to offer. With its user-friendly interface, diverse betting options, competitive odds, and exceptional customer support, Betvisa is the go-to platform for all your betting needs. Join Betvisa today and experience how easy betting can be!
betvisa
1,910,279
Top 10 ES6 Features that Every Developer Should know
Top 10 ES6 Features that Every Developer Should know **1. let and const: **Block-scoped...
0
2024-07-03T14:08:59
https://dev.to/hasan048/top-10-es6-features-that-every-developer-should-know-epi
javascript, typescript, webdev, programming
## Top 10 ES6 Features that Every Developer Should know

**1. let and const:** Block-scoped variable declarations. `let` allows reassignment; `const` doesn't. Prevents hoisting issues and unintended global variables. Improves code predictability and encourages better practices for variable scope and mutability.

**2. Arrow Functions:** Concise syntax for function expressions. Lexically binds `this`, solving context issues in callbacks. Can't be used as constructors or as methods needing their own `this` binding. Simplifies code and reduces `this`-related errors.

**3. Template Literals:** Uses backticks for strings. Allows multi-line strings and interpolation with `${expression}`. Improves readability when constructing complex strings or HTML templates. Supports tagged templates for custom string processing.

**4. Destructuring Assignment:** Extracts values from arrays or object properties into distinct variables concisely. Useful for handling function returns, import statements, and complex data structures. Enhances code readability and reduces lines of code.

**5. Enhanced Object Literals:** Shorthand syntax for defining object methods and properties. Allows computed property names. Simplifies object creation, especially when property names match variable names. Makes object definitions more concise and readable.

**6. Default Parameters:** Allows setting default values for function parameters. Reduces the need for manual parameter checks. Improves function flexibility and readability. Default values are used when arguments are undefined.

**7. Rest and Spread Operators:** Rest (`...`) collects multiple elements into an array. Spread (`...`) expands arrays or objects. Useful in function arguments, array manipulation, and object composition. Simplifies working with arrays and function parameters.

**8. Promises:** Represents asynchronous operations. Provides a cleaner alternative to callbacks. Has three states: pending, fulfilled, or rejected. Includes methods like `then()` and `catch()` for handling outcomes. Improves asynchronous code structure and error handling.

**9. Classes:** Syntactical sugar over prototype-based inheritance. Provides familiar syntax for object-oriented programming. Includes constructors, methods, and inheritance. Doesn't change JavaScript's prototype-based nature but improves code organization and readability.

**10. Modules:** Allows code organization into separate files. Uses `import` and `export` statements. Supports default and named exports. Improves code maintainability, reusability, and helps manage dependencies. Replaces older module patterns like CommonJS.
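Many of these features compose naturally in everyday code. The snippet below is a small illustrative sketch (the `formatUser` and `mergeRoles` names are invented for this example) combining arrow functions, default parameters, destructuring, template literals, rest/spread, and a Promise:

```javascript
// Illustrative sketch only — formatUser and mergeRoles are invented names.

// Arrow function + destructuring with default values in the parameter list.
const formatUser = ({ name = "anonymous", roles = [] } = {}) => {
  const label = roles.length ? roles.join(", ") : "no roles";
  // Template literal interpolation instead of string concatenation.
  return `${name} (${label})`;
};

// Rest (...lists) collects arguments; spread (...) expands them.
// The Set removes duplicate role names.
const mergeRoles = (...lists) => [...new Set([].concat(...lists))];

const user = { name: "Ada", roles: mergeRoles(["admin"], ["admin", "dev"]) };
console.log(formatUser(user)); // → "Ada (admin, dev)"
console.log(formatUser());     // → "anonymous (no roles)"

// Promise: then()/catch() instead of nested callbacks.
Promise.resolve(user)
  .then((u) => console.log(`loaded ${u.name}`)) // → "loaded Ada"
  .catch((err) => console.error(err));
```

Running this with `node` prints all three lines; note how the empty call to `formatUser()` falls back entirely on default parameters.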
hasan048
1,910,277
Exploring the :has() Selector in CSS
CSS has progressed greatly over time, introducing a number of advanced selectors that improve the...
0
2024-07-03T14:08:07
https://dev.to/code_passion/exploring-the-has-selector-in-css-58gg
css, webdesign, html, tutorial
CSS has progressed greatly over time, introducing a number of advanced selectors that improve the ability to style web pages with precision and flexibility. One of the most recent additions to CSS selectors is the :has() pseudo-class. This blog will go over the details of the :has() selector, including its usage, benefits, and practical examples to help you use this powerful tool in your web development projects. **What is the :has() Selector?** [The :has() selector](https://skillivo.in/has-selector-in-css/) is a relational pseudo-class that lets you choose an element depending on the presence of a descendant or a more complicated relationship within its subtree. In simpler terms, it allows you to style a parent element if it contains specific child elements. **Syntax:** The basic syntax of the :has() selector is as follows: ``` element:has(selector) { /* CSS properties */ } ``` **Practical Examples of the :has() Selector:** **Theme Chooser using :has Selector** Output: ![Theme Chooser using :has Selector](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l0yyhz83oqk7ihav3tdy.gif) HTML: ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Theme Chooser</title> </head> <body> <header> <h1>Welcome to the Theme Chooser</h1> <div class="theme-selector"> <label for="theme-select">Choose Theme:</label> <select id="theme-select"> <option value="light">Light Mode</option> <option value="dark">Dark Mode</option> <option value="colorful">Colorful Mode</option> </select> </div> </header> <main> <p>This is an example of a theme chooser using the :has selector and a select tag.</p> </main> </body> </html> ``` CSS: ``` body { font-family: Arial, sans-serif; background-color: #fff; color: #000; transition: background-color 0.3s, color 0.3s; } /* Header styling */ header { display: flex; justify-content: space-between; align-items: center; padding: 20px; background-color: #f0f0f0; 
border-bottom: 1px solid #ddd; } /* Align the theme selector to the right */ .theme-selector { display: flex; align-items: center; margin-left: auto; } /* Space between label and select */ .theme-selector label { margin-right: 10px; } /* Light Mode (Default) */ body { --bg-color: #fff; --text-color: #000; --header-bg-color: #f0f0f0; --header-border-color: #ddd; } /* Dark Mode styles using the :has selector */ :has(#theme-select option:checked[value="dark"]) body { --bg-color: #333; --text-color: #fff; --header-bg-color: #444; --header-border-color: #555; } /* Colorful Mode styles using the :has selector */ :has(#theme-select option:checked[value="colorful"]) body { --bg-color: #ffefd5; --text-color: #333; --header-bg-color: #ffdab9; --header-border-color: #eea2ad; } /* Applying CSS variables */ body { background-color: var(--bg-color); color: var(--text-color); } header { background-color: var(--header-bg-color); border-bottom: 1px solid var(--header-border-color); } /* Style for the main content */ main { padding: 20px; } /* Select tag styling */ select { padding: 5px; font-size: 16px; border: 1px solid #ccc; border-radius: 4px; } ``` [Using variables in CSS](https://skillivo.in/css-variables-key-empowering-stylesheets/), especially custom properties (variables), has various benefits that make stylesheets easier to manage, maintain, and extend. Here is why variables are used in the provided CSS. Variables allow you to define a value once and reuse it throughout your stylesheet. **For instance:** ``` body { --bg-color: #fff; --text-color: #000; --header-bg-color: #f0f0f0; --header-border-color: #ddd; } ``` These variables can then be utilised wherever needed, ensuring consistency and saving you from repeating the same value several times. 
``` body { background-color: var(--bg-color); color: var(--text-color); } header { background-color: var(--header-bg-color); border-bottom: 1px solid var(--header-border-color); } ``` If you have to change a colour scheme or any other reusable property, you simply need to edit the variable value once, rather than hunting down each instance of the value in your CSS: ``` :has(#theme-select option:checked[value="dark"]) body { --bg-color: #333; --text-color: #fff; --header-bg-color: #444; --header-border-color: #555; } ``` The :has() pseudo-class is a powerful CSS selector that enables you to style an element based on the presence of a descendant element that meets a specific condition. In this context: ``` :has(#theme-select option:checked[value="dark"]) ``` This means the styles will be applied to the body element if there is a checked option element with the value "dark" within the #theme-select element. **option:checked[value="dark"] targets an option element within the select that:** 1. Is checked (selected). 2. Has a value attribute of "dark". This rule uses the :has() pseudo-class in combination with CSS variables to implement a dynamic theming system. [Read more examples of the :has() selector](https://skillivo.in/has-selector-in-css/) **Conclusion** [The :has() selector](https://skillivo.in/has-selector-in-css/) is an extremely useful addition to modern CSS, allowing developers to construct more dynamic and context-aware styles. Understanding and utilising this selector allows you to improve the interactivity and visual appeal of your web projects while keeping the code clear and maintainable.
code_passion
1,910,276
The Ultimate Guide to Workshop Manuals PDF for Your Vehicle
In the realm of vehicle maintenance and repair, having access to accurate and comprehensive...
0
2024-07-03T14:07:30
https://dev.to/dots52vc/the-ultimate-guide-to-workshop-manuals-pdf-for-your-vehicle-57cm
In the realm of vehicle maintenance and repair, having access to accurate and comprehensive information is indispensable. Workshop manuals in PDF format provide an excellent resource for vehicle owners, mechanics, and DIY enthusiasts. This guide will delve into the benefits of **[workshop manuals PDF](https://downloadworkshopmanuals.com/)**, how to find and download them, and tips on using them effectively. **Why Choose Workshop Manuals PDF?** 1. Convenience and Accessibility Workshop manuals in PDF format can be accessed on various devices such as smartphones, tablets, and computers. This flexibility allows you to reference the manual wherever you are, be it in your garage, at a friend’s place, or on the go. 2. Instant Access Downloading a workshop manual PDF provides immediate access to crucial information. This is especially valuable when you need to address urgent repair issues without waiting for a physical manual to be delivered. 3. Cost-Effective Digital workshop manuals are often more affordable than printed versions. Additionally, they eliminate shipping costs and can be stored electronically, saving physical storage space. 4. Search Functionality PDF workshop manuals typically include search functions that allow you to quickly locate specific information. This feature saves time compared to manually flipping through a printed manual. 5. Easy Updates Digital manuals can be updated more easily than printed versions, ensuring you have the most current information and procedures for your vehicle. **Key Features of Workshop Manuals PDF** 1. Detailed Diagrams Workshop manuals in PDF format come with detailed diagrams that illustrate the layout and components of various systems within your vehicle. These diagrams are essential for visualizing how parts fit together, aiding in both disassembly and reassembly. 2. Step-by-Step Instructions Each repair and maintenance procedure is broken down into clear, step-by-step instructions. 
This approach ensures that even individuals with limited mechanical knowledge can follow along and complete tasks successfully. 3. Technical Specifications Accurate technical specifications are crucial for performing repairs correctly. Workshop manuals provide detailed information such as torque settings, fluid capacities, and part numbers, ensuring precision in your work. 4. Troubleshooting Guides Workshop manuals often include troubleshooting sections to help identify and resolve common issues. These guides can save time and frustration by directing you to the most likely causes of problems. **How to Download Workshop Manuals PDF** 1. Manufacturer’s Website Many vehicle manufacturers offer digital versions of their workshop manuals on their official websites. These manuals are typically available for free or for a nominal fee, and they cover a wide range of models and years. 2. Specialty Websites Several websites specialize in offering downloadable workshop manuals for various vehicles. These sites provide both free and paid options. Ensure you choose a reputable source to guarantee the accuracy and completeness of the information. 3. Automotive Forums Online automotive forums can be a valuable resource for finding workshop manuals PDF. Members frequently share links to downloadable manuals for different vehicle models. However, it’s important to verify the credibility of these sources before downloading. 4. Libraries and Online Archives Some libraries and online archives offer access to digital workshop manuals. While this option may not be as immediate, it can provide access to hard-to-find manuals, especially for older or less common models. **Benefits of Using Workshop Manuals PDF** 1. Comprehensive Information Workshop manuals PDF provide detailed information about the vehicle’s systems, including the engine, transmission, brakes, and electrical components. 
This comprehensive coverage ensures that you have all the necessary information to perform repairs accurately. 2. Safety Workshop manuals emphasize safety procedures, ensuring that repairs are carried out correctly and safely. Following these guidelines reduces the risk of accidents and further damage to the vehicle. 3. Time Efficiency Having a workshop manual PDF on hand allows you to diagnose and fix problems quickly. This eliminates the need to spend hours searching for solutions online or waiting for a mechanic to become available. 4. Empowerment Using a workshop manual PDF empowers vehicle owners to take control of their vehicle’s maintenance and repair. This knowledge can lead to greater confidence and satisfaction in performing DIY repairs. **How to Use a Workshop Manual PDF Effectively** 1. Identify the Problem Before starting any repairs, use the troubleshooting section of the manual to accurately diagnose the issue. This ensures that you address the root cause of the problem. 2. Gather Tools and Supplies Make sure you have all the necessary tools and replacement parts before beginning any repair. The manual will list the required tools for each procedure, ensuring you are fully prepared. 3. Follow the Instructions Carefully read and follow the step-by-step instructions provided in the manual. Pay close attention to any warnings or safety precautions mentioned to avoid mistakes. 4. Take Your Time Rushing through repairs can lead to errors and further damage. Take your time to ensure that each step is completed correctly, and refer back to the manual frequently. 5. Keep the Manual Handy Keep the workshop manual PDF nearby while working on your vehicle. You may need to refer to it frequently throughout the repair process. **Types of Workshop Manuals PDF** 1. Factory Service Manuals Produced by vehicle manufacturers, these manuals offer the most detailed and accurate information, covering every aspect of the vehicle’s maintenance and repair. 
They are the go-to resource for professional mechanics and serious DIY enthusiasts. 2. Aftermarket Manuals Produced by third-party companies, these manuals may cover multiple vehicle models. While they might not be as detailed as factory service manuals, they are often more user-friendly and accessible. 3. Owner’s Manuals Provided with the purchase of a new vehicle, these manuals include basic maintenance and troubleshooting information. Although less detailed than workshop manuals, they are still valuable for routine maintenance tasks. 4. Repair Guides Often included with aftermarket parts, these guides provide specific instructions for installing and using the parts. They can be useful supplements to a workshop manual. **Conclusion** Workshop manuals PDF are a practical and efficient way to access essential information for vehicle maintenance and repair. These manuals provide comprehensive coverage, detailed instructions, and crucial safety guidelines. Whether you are a professional mechanic or a DIY enthusiast, having a workshop manual PDF at your fingertips can save you time, money, and frustration. By understanding how to effectively use these manuals, you can ensure that your vehicle remains in optimal condition and operates safely. Embrace the power of knowledge and take control of your vehicle’s maintenance with the help of workshop manuals PDF.
dots52vc
1,910,273
How does ChatGPT generate human-like text?
ChatGPT, developed by OpenAI, is a cutting-edge language model that has made a significant impact in...
0
2024-07-03T14:05:40
https://dev.to/hasan048/how-does-chatgpt-generate-human-like-text-3ljj
chatgpt, webdev, programming, cheetsheet
**ChatGPT, developed by OpenAI, is a cutting-edge language model that has made a significant impact in the field of natural language processing. It uses deep learning algorithms to generate human-like text based on the input it receives, making it an excellent tool for chatbots, content creation, and other applications that require natural language processing. In this post, we will explore the workings of ChatGPT to understand how it generates human-like text.** ## The Core of ChatGPT : The backbone of ChatGPT is a transformer-based neural network that has been trained on a massive amount of text data. This training allows the model to understand the patterns and relationships between words in a sentence and how they can be used to generate new text that is coherent and meaningful. The transformer-based architecture is a novel approach to machine learning that enables the model to learn and make predictions based on the context of the input. This makes it ideal for language models that need to generate text that is relevant to the context of a conversation. ## How ChatGPT Generates Text : ChatGPT uses an autoregressive language modeling approach to generate text. When you provide input to ChatGPT, the model first encodes the input into a vector representation. This representation is then used to generate a probability distribution over the next word in the sequence. The model selects the most likely next word and generates a new vector representation based on the new input. This process is repeated until the desired length of text has been generated. One of the key strengths of ChatGPT is its ability to handle context. The model has been trained to understand the context of a conversation and can generate text that is relevant to the current topic. This allows it to respond to questions and generate text that is relevant to the context of the conversation. 
This makes it an excellent tool for chatbots, as it can understand the user's intention and respond accordingly. ## Scalability and Fine-tuning : Another critical aspect of ChatGPT is its scalability. The model can be fine-tuned for specific use cases by training it on specific data sets. This allows it to generate text that is more tailored to the needs of the application. For example, if ChatGPT is being used in a customer service chatbot, it can be fine-tuned on data that is relevant to customer service queries to generate more accurate and relevant responses. This fine-tuning process can be done by using transfer learning, where the model is trained on a smaller data set, leveraging the knowledge it gained from its training on the larger data set. ## Real-world Applications : ChatGPT has a wide range of real-world applications, from content creation to customer service. It can be used to generate news articles, creative writing, and even poetry. In customer service, ChatGPT can be used as a chatbot to respond to customer queries, freeing up human agents to handle more complex issues. Additionally, ChatGPT can be used in language translation, as it has the ability to understand the context of a conversation and translate text accordingly. In summary, ChatGPT's ability to generate human-like text and understand context makes it a versatile tool with endless potential applications. Its deep learning algorithms and transformer-based architecture allow it to generate coherent and meaningful text, making it an exciting development in the field of natural language processing. Whether it's being used in customer service, content creation, or language translation, ChatGPT has the potential to revolutionize the way we interact with machines. ## Conclusion : In conclusion, this blog has explored the workings of ChatGPT, a cutting-edge language model developed by OpenAI. 
We have seen that the model is based on a transformer-based neural network that has been trained on massive amounts of text data, allowing it to generate human-like text based on the context of a conversation. Its scalability and fine-tuning capabilities make it a valuable tool for a wide range of applications, from customer service to content creation. With its ability to understand the context and generate coherent and meaningful text, ChatGPT has the potential to revolutionize the way we interact with machines and will play a crucial role in the development of AI-powered applications.
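To make the "predict the next word, append it, repeat" loop concrete, here is a deliberately tiny JavaScript sketch. This is not how ChatGPT actually works — the real model computes its probability distributions with a transformer over learned vector representations — and the `toyModel` lookup table below is invented purely for illustration:

```javascript
// Toy autoregressive generation — illustrative only. Each entry maps a
// context word to an (invented) probability distribution over next words.
const toyModel = {
  "<start>": { the: 0.6, a: 0.4 },
  the:       { cat: 0.5, dog: 0.5 },
  a:         { cat: 0.7, dog: 0.3 },
  cat:       { sat: 0.9, "<end>": 0.1 },
  dog:       { sat: 0.8, "<end>": 0.2 },
  sat:       { "<end>": 1.0 },
};

// Greedy decoding: repeatedly pick the most likely next token and feed it
// back in as the new context, until an end token or a length limit.
function generate(model, maxLen = 10) {
  const tokens = [];
  let current = "<start>";
  for (let i = 0; i < maxLen; i++) {
    const dist = model[current];
    if (!dist) break;
    // argmax over the distribution
    const next = Object.keys(dist).reduce((a, b) => (dist[a] >= dist[b] ? a : b));
    if (next === "<end>") break;
    tokens.push(next);
    current = next;
  }
  return tokens.join(" ");
}

console.log(generate(toyModel)); // → "the cat sat"
```

Real systems typically sample from the distribution rather than always taking the argmax, which is what produces varied rather than deterministic text.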
hasan048
1,903,102
MERRY-GO-ROUND : Carousel Component
In the rapidly changing world of social media, from Instagram to Facebook and LinkedIn, one feature...
0
2024-07-03T14:04:53
https://dev.to/madgan95/merry-go-round-carousel-component-59dn
webdev, javascript, beginners, programming
In the rapidly changing world of social media, from Instagram to Facebook and LinkedIn, one feature stands out for its ability to capture attention and convey a wealth of information in an engaging way: **"The Carousel"**. Carousels provide a dynamic way to present large amounts of content in a **cyclic** and visually appealing format, moving beyond the monotony of traditional bullet points. This feature allows you to showcase multiple pieces of content within a single post, creating an interactive and engaging experience for your audience. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vagjtwp6wt3t9jau31k5.jpg) ## Why Carousels? Using carousels to present your content offers several advantages: - Increased Engagement - High Space Efficiency - Visual Appeal - Organized Content ## Carousel Implementation: **Pre-requisites:** In this blog, I'm going to implement it using these technologies: 1) Nextjs 2) Tailwind CSS **1) Boilerplate code for the carousel:** ``` import { useState, useEffect, useRef } from "react"; const Carousel = ({ slides }) => { const [currentIndex, setCurrentIndex] = useState(0); const slideRef = useRef(null); const goToNextSlide = () => { setCurrentIndex((prevIndex) => (prevIndex + 1) % slides.length); }; const goToPreviousSlide = () => { setCurrentIndex((prevIndex) => prevIndex === 0 ? 
slides.length - 1 : prevIndex - 1 ); }; useEffect(() => { if (slideRef.current) { slideRef.current.style.transform = `translateX(-${currentIndex * 100}%)`; } }, [currentIndex]); return ( <div className="relative sm:w-3/4 w-full overflow-hidden"> <div ref={slideRef} className="flex transition-transform duration-500 ease-in-out" > {slides.map((slide, index) => ( <div key={index} className="min-w-full bg-white flex flex-col pb-2 rounded-lg shadow-lg"> {slide} </div> ))} </div> <button className="absolute top-1/2 transform -translate-y-1/2 left-4 bg-gray-800 text-white w-10 h-10 p-2 rounded-full" onClick={goToPreviousSlide} > &lt; </button> <button className="absolute top-1/2 transform -translate-y-1/2 right-4 bg-gray-800 text-white w-10 h-10 p-2 rounded-full" onClick={goToNextSlide} > &gt; </button> </div> ); }; export default Carousel; ``` ## Code Explanation: **1) Creating the Reference:** ``` const slideRef = useRef(null); ``` The 'ref' attribute in React is used to attach the reference to a specific DOM element. In this code, the reference is attached to the container <div> that holds the slides: ``` <div ref={slideRef} className="flex transition-transform duration-500 ease-in-out" > {slides.map((slide, index) => ( <div key={index} className="min-w-full bg-white flex flex-col pb-2 rounded-lg shadow-lg"> {slide} </div> ))} </div> ``` Here, 'slideRef' is assigned to the ref attribute of the div element. When this component is rendered, React sets slideRef.current to the corresponding DOM element. **2) Accessing the DOM Element:** The 'current' property of the reference object will now point to the actual DOM element. This allows you to interact directly with the DOM element in your component. 
``` useEffect(() => { if (slideRef.current) { slideRef.current.style.transform = `translateX(-${currentIndex * 100}%)`; } }, [currentIndex]); ``` 'slideRef.current.style.transform = translateX(-${currentIndex * 100}%)' - updates the transform style property of that DOM element to move it horizontally based on the current index. **3) Previous and Next Buttons:** Button: Previous ``` <button className="absolute top-1/2 transform -translate-y-1/2 left-4 bg-gray-800 text-white w-10 h-10 p-2 rounded-full" onClick={goToPreviousSlide} > &lt; </button> ``` Button: Next ``` <button className="absolute top-1/2 transform -translate-y-1/2 right-4 bg-gray-800 text-white w-10 h-10 p-2 rounded-full" onClick={goToNextSlide} > &gt; </button> ``` ----------------------------------------------------------------- Feel free to reach out if you have any questions or need further assistance. 😊📁✨
madgan95
1,909,084
Automating User Creation with Bash Script
As a SysOps engineer, managing users and groups in a growing development team can be a time-consuming...
0
2024-07-03T14:04:47
https://dev.to/vctcode/automating-user-creation-with-bash-script-1d75
automation, devops, cloud, linux
As a SysOps engineer, managing users and groups in a growing development team can be a time-consuming task. To streamline this process, automating user and group creation with a Bash script can save valuable time and reduce errors. This article walks through an enhanced Bash script that automates these tasks while ensuring secure handling of user passwords and detailed logging of all actions. The script reads a formatted text file and handles user and group creation, sets up home directories, generates random passwords, and logs everything securely. It also allows multiple groups to be assigned to a user at creation time. ## Requirements - A Linux machine (preferably Ubuntu) - Sudo privileges: administrator rights needed to run privileged commands. - A text file containing the users and groups, separated by a `;` ## Key Features 1. Reads Input File: The script reads a text file (`.txt`) where each line lists a user and their groups as `user;group1,group2,group3` 2. Creates Users and Groups: Each user gets a personal group and additional groups as specified. 3. Sets Up Home Directories: Sets up home directories with proper permissions. 4. Generates Random Passwords: Securely generates and stores random passwords. 5. Logs Actions: Logs all actions to `/var/log/user_management.log` and stores generated passwords securely in `/var/secure/user_passwords.txt`. You can specify your own custom paths. ## Let's Get this Done Create a file with the `.sh` extension. Here, the script is called `create_users.sh` ![Create user](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/476i42q9l7rkekwpepwh.png) The create_users.sh script is divided into several modules, each responsible for a specific task. Here's a detailed explanation of each module: ### 1. 
Initialization ``` #!/bin/bash # Script to create users and groups from a file, generate passwords, and log actions INPUT_FILE=$1 LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.txt" DATE=$(date "+%Y-%m-%d %H:%M:%S") ``` - `#!/bin/bash` (the **shebang**): Specifies that the script should be run with the Bash shell. - `INPUT_FILE=$1`: Takes the name of the input file as the first argument. - `LOG_FILE`, `PASSWORD_FILE`: Define the paths for logging and for storing passwords. - `DATE`: Captures the current date and time for logging purposes. ### 2. Setup Directories and Files ``` ## Ensure the log and password directories exist sudo mkdir -p /var/log sudo mkdir -p /var/secure sudo touch $LOG_FILE sudo touch $PASSWORD_FILE # Set permissions for the password file sudo chmod 600 $PASSWORD_FILE ``` - `mkdir -p`: Ensures that the directories for logs and secure files exist. - `touch`: Creates the log file and password file if they don't exist. - `chmod 600`: Makes the password file readable and writable only by its owner. ### 3. Logging Function ``` # Function to log actions log_action() { echo "$DATE - $1" | sudo tee -a $LOG_FILE > /dev/null } ``` - `log_action`: Defines a function that logs actions with timestamps; the `sudo tee -a $LOG_FILE` command appends each message to the log file. ### 4. Password Generation Function ``` # Function to generate a random password generate_password() { tr -dc A-Za-z0-9 </dev/urandom | head -c 12 ; echo '' } ``` - `generate_password`: Uses the `tr` command to generate a random 12-character password consisting of alphanumeric characters. ### 5. Input Validation ``` # Ensure input file is provided if [[ -z "$INPUT_FILE" ]]; then echo "Usage: $0 <input_file>" log_action "ERROR: No input file provided" exit 1 fi ``` - Validation: Checks if the input file is provided. 
If not, logs an error and exits the script. ### 6. Main Processing Loop ``` # Read the input file line by line while IFS=';' read -r username groups; do username=$(echo "$username" | xargs) groups=$(echo "$groups" | xargs) # Ignore empty lines if [[ -z "$username" ]]; then continue fi ``` - Reading Input: Reads the input file line by line, splitting each line into username and groups based on the ; delimiter. - Trimming Whitespace: Uses `xargs` to trim any leading or trailing whitespace from username and groups. - Ignore Empty Lines: Skips processing for empty lines. ### 7. User and Group Creation ``` # Check if user already exists if id "$username" &>/dev/null; then log_action "User $username already exists" continue fi # Create the user with their own personal group user_group=$username if ! getent group "$user_group" > /dev/null; then sudo groupadd "$user_group" log_action "Created group $user_group" fi sudo useradd -m -g "$user_group" -s /bin/bash "$username" log_action "Created user $username with group $user_group" ``` - User Existence Check: Uses the id command to check if the user already exists. If the user exists, it logs the action and skips to the next iteration. - Personal Group Creation: Checks if a group with the same name as the username exists. If not, it creates the group. - User Creation: Creates the user with their personal group, sets the shell to Bash, and creates a home directory. ### 8. Password Handling ``` # Generate a random password password=$(generate_password) echo "$username:$password" | sudo chpasswd log_action "Set password for user $username" # Store the password securely echo "$username,$password" | sudo tee -a $PASSWORD_FILE > /dev/null ``` - Password Generation: Generates a random password using the `generate_password` function. - Password Assignment: Sets the generated password for the user using `chpasswd`. - Password Storage: Appends the username and password to the password file in txt format. ### 9. 
Additional Group Handling ``` # Handle additional groups IFS=',' read -ra ADD_GROUPS <<< "$groups" for group in "${ADD_GROUPS[@]}"; do group=$(echo "$group" | xargs) if ! getent group "$group" > /dev/null; then sudo groupadd "$group" log_action "Created group $group" fi sudo usermod -aG "$group" "$username" log_action "Added user $username to group $group" done ``` - Parsing Groups: Splits the groups string into an array using `,` as the delimiter. - Group Existence Check: Checks if each group exists and creates it if necessary. - User Group Assignment: Adds the user to each additional group using `usermod -aG`. ### 10. Home Directory Permissions ``` # Set appropriate permissions and ownership for the home directory sudo chown -R "$username:$user_group" "/home/$username" sudo chmod 700 "/home/$username" log_action "Set permissions for home directory of user $username" done < "$INPUT_FILE" log_action "User creation process completed" ``` - Ownership and Permissions: Sets the ownership of the user's home directory to the user and their personal group. Sets directory permissions to 700 (owner can read, write, and execute; others have no permissions). ### Execute Merge the modules above into the script (in my case `create_users.sh`). Give the file execute permission using `sudo chmod +x create_users.sh`, and run the script as shown below ![Execute command format](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8j9lgaxhocva20aw3b8j.png) - **PS**: Specify the path and name of the custom `.txt` file you created on your machine, with its content in the format below ![file text sample](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rq1l5h03y7jnof303tm8.png) ## Conclusion By modularizing the script, each part of the process is handled in a clear and structured manner, making it easier to understand and maintain. This script efficiently automates user and group management tasks, ensuring secure handling of passwords and detailed logging of actions. 
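Before pointing the full script at a real system, the `user;group1,group2` line format can be sanity-checked with a standalone snippet that mirrors the main loop's field-splitting. No `sudo` is needed; the `/tmp/users_demo.txt` name and its contents are invented for the example, and the comma split is simplified with `tr` in place of the script's `read -ra` array:

```shell
# Standalone sanity check of the input format (no sudo needed).
# The file name and its contents are hypothetical examples.
printf 'alice;devs,admins\nbob;devs\n' > /tmp/users_demo.txt

while IFS=';' read -r username groups; do
  username=$(echo "$username" | xargs)
  groups=$(echo "$groups" | xargs)
  [ -z "$username" ] && continue
  # Simplified group split: tr instead of the script's `read -ra` array
  echo "user=$username groups=$(echo "$groups" | tr ',' ' ')"
done < /tmp/users_demo.txt
# Prints:
# user=alice groups=devs admins
# user=bob groups=devs
```

If the printed users and groups match what you expect, the same file can be passed to `create_users.sh`.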
For more resources and opportunities to enhance your technical skills, check out the [HNG Internship]( https://hng.tech/internship) and explore their [premium offerings](https://hng.tech/premium). The HNG Internship is a great platform to learn, grow, and network with industry professionals.
vctcode
1,910,272
A Technical Comparison of React and Vue.js
Frontend development is a dynamic field with a variety of technologies that help developers build...
0
2024-07-03T14:04:14
https://dev.to/samson_toju_4f5a205a6f7ea/a-technical-comparison-of-react-and-vuejs-11c
hngtech, hngtechpremium, webdev
Frontend development is a dynamic field with a variety of technologies that help developers build interactive, responsive, and efficient user interfaces. In the landscape of frontend development, React and Vue.js are at the forefront of the most popular frameworks/libraries for building user interfaces, with each offering unique features and advantages. This article provides a detailed analysis and comparison of these two popular frontend technologies, clearly stating their strengths and what makes them unique.

**Brief Introduction**

React is a JavaScript library developed by Facebook and initially released in 2013. Vue.js is a JavaScript framework developed by Evan You and initially released in 2014.

**Strengths and Unique Features**

**React**

- Component-Based Architecture: React applications are built using reusable components, which can be nested, managed, and handled independently. This promotes modularity and makes the development process more efficient.
- JSX (JavaScript XML): JSX allows developers to write HTML-like syntax within JavaScript. This makes the code more readable and easier to write. It provides a seamless way to blend logic and presentation, facilitating better code organization.
- Virtual DOM: React uses a virtual DOM (Document Object Model), which improves performance by updating only the necessary parts of the real DOM when data changes. This reduces DOM manipulation and speeds up rendering.
- Single-Page Applications (SPAs): React is ideal for SPAs that require dynamic content updates without full page reloads.
- Ecosystem and Community: React boasts a large and active community along with a vast ecosystem of libraries and tools. This support network ensures continuous improvement, extensive documentation, and readily available solutions to common development challenges.

**Vue.js**

- Ease of Learning: Vue.js is known for its user-friendly learning curve. Its syntax and structure are straightforward, making it accessible to developers of all levels.
- Flexibility: Vue.js is often praised for its simplicity and ease of integration. It's designed to be easily adoptable, allowing developers to add Vue.js to an existing project without restructuring the entire codebase.
- Single File Components (SFCs): Vue.js promotes the use of Single File Components, where templates, styles, and logic are encapsulated in a single file. This component-centred approach enhances maintainability and readability, and allows for better code organization.

Vue.js, in other words, shines in projects where simplicity and ease of integration are priorities. Its straightforward syntax and gradual adoption approach make it ideal for small to medium-sized applications.

In conclusion, both React and Vue.js are powerful frontend technologies with their own strengths and uniqueness. The choice between them should take cognisance of factors such as project scale, team familiarity, performance requirements, and ecosystem support. By understanding their key features and benefits, developers can make an informed decision to best meet their project goals and development needs.

I am eagerly looking forward to leveraging ReactJS during the HNG internship to build impactful projects and enhance my skills. The opportunity to work with a technology I am passionate about, coupled with the chance to learn from experienced mentors and collaborate with peers, is something I am very excited about. React's powerful features and supportive community make it an excellent choice for developing modern web applications, and I am enthusiastic about the journey ahead in the HNG internship.
samson_toju_4f5a205a6f7ea
1,910,264
Mastering Face Detection, Recognition, and Verification: An In-Depth Guide
Introduction In the modern digital era, facial analysis has emerged as a fundamental...
0
2024-07-03T14:04:06
https://dev.to/api4ai/mastering-face-detection-recognition-and-verification-an-in-depth-guide-3h1f
facedetection, facerecognition, api, ai
# Introduction

In the modern digital era, facial analysis has emerged as a fundamental technology, significantly contributing to enhanced security, optimized user experiences, and streamlined automation across multiple sectors. Its applications are extensive and ever-growing, ranging from unlocking smartphones to tagging friends in social media photos. Within this realm, face verification is particularly crucial, ensuring the accuracy and reliability of identity verification processes, such as matching photographs in passports and driver's licenses. As facial analysis technology advances, its significance in both personal and professional contexts continues to grow.

## Purpose of the Blog Post

The goal of this blog post is to offer an extensive tutorial on face verification utilizing the [Face Analysis API](https://api4.ai/apis/face-analysis) from API4AI. By harnessing this robust API, developers can effortlessly incorporate sophisticated facial analysis functionalities into their applications. Whether you're developing a security solution, a customer identification system, or any application requiring dependable face verification, this guide will provide you with the essential knowledge and resources to begin.

## Overview of the Tutorial

In this tutorial, we will guide you through the key steps for implementing face verification using the Face Analysis API from API4AI. We'll begin with a brief introduction to face detection, recognition, and verification, and explain the importance of face verification in various applications. Next, we will introduce API4AI, outlining its features and advantages for facial analysis tasks. Following this introduction, we will delve into the practical aspects of face verification. You will learn how to set up your environment, send requests to the API, and interpret the responses.
We will provide a detailed code example that demonstrates how to compare two faces, such as those in a passport and a driver's license, to verify their identity. Finally, we will explore experimenting with different individuals and poses to assess the robustness of the verification process. By the end of this tutorial, you will have a comprehensive understanding of how to implement face verification using the Face Analysis API and be well-prepared to incorporate this technology into your own projects.

# Understanding Face Detection, Recognition, and Verification

**Face Detection**

Face detection is the initial step in facial analysis, involving the identification and localization of human faces within images or video streams. This technology scans an image to find face-like structures and typically highlights them with bounding boxes. The main goal of face detection is to allow systems to distinguish and process faces separately from other objects or background elements.

**Applications of Face Detection:**

- **Security:** In surveillance systems, face detection aids in the real-time identification and monitoring of individuals, enhancing security protocols.
- **Photography:** Modern cameras utilize face detection to focus on faces, ensuring clear and well-composed portraits.
- **Human-Computer Interaction:** Devices such as smartphones and laptops use face detection to enable features like facial recognition for unlocking the device and interactive applications requiring face tracking.

**Face Recognition**

Face recognition goes beyond detection by identifying and differentiating individual faces within an image or video. This process involves examining facial features and comparing them to a database of known faces to ascertain the person's identity.
**Role and Applications of Face Recognition:**

- **Identifying and Tagging Individuals:** Social media platforms use face recognition to automatically tag people in photos, simplifying the organization and sharing of images.
- **Surveillance:** Law enforcement and security agencies employ face recognition to identify individuals of interest in crowds or public spaces.
- **Access Control:** Secure environments, such as offices or restricted areas, utilize face recognition to grant or deny access based on recognized faces.

**Face Verification**

Face verification is a specialized use of face recognition that involves comparing two facial images to determine if they belong to the same person. This task is essential in situations where confirming an individual's identity is required.

**Importance and Use Cases of Face Verification:**

- **Confirming Identity:** Face verification is frequently used in authentication systems to verify a person's claimed identity, such as in online banking or secure transactions.
- **Mobile Unlock Features:** Smartphones employ face verification to enable users to unlock their devices quickly and securely.
- **Document Verification:** A primary application of face verification is comparing photos from various identification documents. For example, verifying that the photos in a passport and a driver's license belong to the same person ensures the accuracy and authenticity of identity verification processes.

Face detection, recognition, and verification together form a robust framework for numerous applications, enhancing security, improving user experiences, and streamlining operations across various fields. Understanding these core concepts is crucial for effectively leveraging facial analysis technologies in any project.

# Why Face Verification is Necessary

**Security**

Face verification is essential for strengthening security systems, offering a dependable method for precise identification and verification of individuals.
Conventional security measures like passwords or PINs are vulnerable to compromise, but facial verification introduces an additional layer of protection that is challenging to circumvent. By ensuring that only authorized individuals can access secure areas, systems, or information, face verification greatly reduces the risk of unauthorized access and potential security breaches. This technology is extensively employed across various sectors, including airports, government facilities, and corporate offices, to uphold high-security standards.

**User Experience**

Face verification significantly enhances user interactions with technology by offering a smooth and intuitive way to engage with devices and applications. For example, smartphones and laptops employ face verification, enabling users to swiftly unlock their devices without the need to remember and input passwords, thereby improving convenience and satisfaction. Additionally, face verification can be used for personalized content delivery, customizing recommendations and services based on the recognized user. Another instance is the automated organization of photos in personal galleries or on social media platforms, where face verification aids in grouping photos of the same individual, simplifying media management for users.

**Automation and Efficiency**

In sectors such as banking, healthcare, and retail, face verification optimizes processes by automating identity verification tasks that would otherwise require manual effort. For instance, in banking, customers can conduct secure transactions or access their accounts remotely using facial verification, minimizing the need for physical presence and paperwork. In healthcare, face verification ensures accurate patient identification, making sure the correct medical records and treatments are provided. Retail businesses can leverage this technology for effortless customer check-ins and personalized shopping experiences.
By reducing the need for manual checks and increasing the speed and accuracy of identity verification, face verification significantly boosts overall operational efficiency.

**Ethical Considerations**

While face verification offers many advantages, it is vital to address the ethical issues associated with its use. Privacy concerns are significant, as this technology involves collecting and storing biometric data, which could be misused or accessed without authorization. Therefore, implementing stringent data protection measures and obtaining informed consent from users is essential. Transparency regarding how facial data is used and shared is also crucial. Another ethical concern is the potential bias in facial recognition algorithms, which can result in inaccuracies and discrimination against certain groups. To mitigate this, developers and organizations must aim to create fair and unbiased systems by using diverse training data and continuously monitoring and improving algorithm accuracy. Ensuring the responsible use of face verification technology allows its benefits to be realized without compromising individual rights and freedoms.

Face verification is a powerful tool that enhances security, improves user experience, and increases efficiency across various sectors. However, its deployment must be accompanied by careful consideration of ethical issues to ensure its responsible and fair use.

# Introduction to API4AI for Face Analysis

![API4AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ct1h6bw8uard7k7xrb4.png)

**About API4AI**

[API4AI](https://api4.ai/) is a state-of-the-art platform providing advanced artificial intelligence and machine learning solutions through a wide range of APIs. With a focus on image and video analysis, API4AI delivers powerful tools for tasks such as face detection, recognition, and verification.
The platform is designed for ease of use, allowing developers and businesses to effortlessly incorporate robust AI functionalities into their applications without requiring deep machine learning knowledge. The [Face Analysis API](https://api4.ai/apis/face-analysis) from API4AI stands out, offering a streamlined solution for various facial analysis tasks within a single, cohesive endpoint.

## Why Choose API4AI

Opting for API4AI for face detection, recognition, and verification offers several notable advantages:

- **Ease of Use**: API4AI is designed with simplicity at its core, making it accessible to developers of all skill levels. The platform provides clear documentation and straightforward API endpoints, enabling users to quickly integrate facial analysis capabilities into their applications. The onboarding process is seamless, with comprehensive guides and examples to assist you at every step.
- **Accuracy**: Accuracy is crucial for any facial analysis application, and API4AI excels in this aspect. The Face Analysis API is built on cutting-edge machine learning models that deliver high accuracy in detecting, recognizing, and verifying faces. This ensures that your applications can reliably identify and authenticate individuals, enhancing both security and user experience.
- **Integration Capabilities**: API4AI offers robust integration capabilities, facilitating the incorporation of facial analysis into a wide range of applications. Whether you are developing a mobile app, a web application, or an enterprise system, the API4AI platform supports various programming languages and frameworks. Additionally, the APIs are designed to be scalable, catering to the needs of both small projects and large-scale deployments.
- **Comprehensive Features**: The Face Analysis API from API4AI combines multiple facial analysis functions into a single solution.
This allows you to perform face detection, recognition, and verification without needing to switch between different APIs or manage multiple integrations. This all-in-one approach simplifies development and maintenance, enabling you to focus on building exceptional applications.

- **Support and Resources**: API4AI provides extensive support and resources to help you succeed. The platform offers detailed documentation, code examples, and tutorials to guide you through using the API. Additionally, a responsive support team is available to assist with any questions or issues, ensuring you can maximize the platform's capabilities.

By choosing API4AI for your facial analysis needs, you gain access to a powerful, accurate, and user-friendly toolset that can significantly enhance your applications. Whether you are working on a security system, a personalized user experience, or any other project requiring facial analysis, API4AI provides the tools and support you need to succeed.

## Face Verification Using the Face Analysis API

**Register for API4AI's Face Analysis API**

- **Visit the API4AI Website**: Go to the [API4AI website](https://api4.ai/apis/face-analysis) and choose the subscription plan that aligns best with your requirements.
- **Subscribe to Your Selected Plan on RapidAPI**: API4AI services are accessible via the RapidAPI platform. If you're unfamiliar with RapidAPI, refer to the blog post "[RapidAPI Hub: The Step-by-Step Guide to Subscribing and Starting with an API](https://api4.ai/blog/rapid-api-hub-the-step-by-step-guide-to-subscribing-and-starting-with-an-api)" for a detailed subscription tutorial.

## Overview of the API Documentation and Resources

API4AI offers extensive documentation and resources to assist developers in integrating the Face Analysis API into their applications. The documentation includes:

**API Documentation**: API4AI provides detailed documentation for all its APIs, including the Face Analysis API.
You can access this documentation by visiting the "Docs" section on the API4AI website or directly via this link. The documentation covers:

- **API Endpoints**: Descriptions of all available endpoints and their specific functions.
- **Request Formats**: Instructions on how to structure your API requests, including required headers, parameters, and supported input formats.
- **Response Formats**: Information on the structure of API responses, including examples of successful responses and error messages.
- **Code Samples**: Example code snippets in various programming languages to help you get started quickly.

**API Playground**: API4AI includes an interactive API playground where you can test API requests directly [in your browser](https://api4.ai/apis/face-analysis#demo-wrapper). This feature allows you to explore the API's capabilities and see real-time results without writing any code.

**Support**: API4AI offers various support options, including a dedicated support team. If you encounter any issues or have questions, you can reach out through the options listed in the Contacts section on the [documentation page](https://api4.ai/docs/face-analysis).

**Tutorials and Guides**: In addition to the documentation, API4AI provides tutorials and guides that cover common use cases and advanced features. These resources are designed to help you maximize the use of the Face Analysis API and integrate it seamlessly into your applications.

## Setting Up the Environment

Before starting, it is highly advisable to review the [Face Analysis API documentation](https://api4.ai/docs/face-analysis) and explore the provided [code examples](https://gitlab.com/api4ai/examples/face-analyzer). This preparation will give you a thorough understanding of the API's capabilities, how to format your requests, and what types of responses you can expect.
Familiarizing yourself with the documentation will provide insights into the different endpoints, request and response formats, and any specific parameters needed. The code examples offer practical guidance on implementing the API in various programming languages, helping you get started quickly and efficiently. Taking the time to review these resources will ensure a smoother integration process and enable you to fully leverage the Face Analysis API in your applications.

Additionally, you need to install the required packages, specifically `requests`, by running:

```bash
pip install requests
```

## Comparing the Faces

Face verification involves comparing two facial images to determine if they belong to the same person. You can send a straightforward request for face detection and embedding vector calculation following the API documentation. To obtain the embedding vector, simply add `embeddings=True` to the query parameters. The response, in JSON format, will include the face bounding box (`box`), face landmarks (`face-landmarks`), and the embedding vector (`face-embeddings`).

The next step is to calculate the similarity between the two images. To do this, follow these steps:

1. Calculate the L2-distance between the two embedding vectors.
2. Convert the L2-distance to similarity using the following equation:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1dz9xb7vd4g2dxpphw1f.png)

Where **a** is a constant L2-distance value representing a similarity threshold of 50%.

## Sending a Request to the API

To move forward, we need to understand how to send **requests** to the API. We utilize the `requests` library to make HTTP requests.

```python
import pathlib

import requests

with pathlib.Path('/path/to/image.jpg').open('rb') as f:
    res = requests.post('https://demo.api4ai.cloud/face-analyzer/v1/results',
                        params={'embeddings': 'True'},
                        files={'image': f.read()})
```

Be sure to include **embeddings=True** in the query parameters to retrieve the embedding vector.
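Before moving on, the distance-to-similarity equation above can be sanity-checked on its own. The sketch below assumes the constant **a** is 1.23, the value used later in this tutorial: identical embeddings (distance 0) should score 1.0, and a distance of exactly **a** should score 50%.

```python
import math


def similarity_from_distance(dist: float, a: float = 1.23) -> float:
    """Convert an embeddings L2-distance into a similarity score in [0, 1]."""
    return math.exp(dist ** 7 * math.log(0.5) / a ** 7)


# Identical embeddings -> similarity 1.0; distance equal to `a` -> ~0.5.
print(similarity_from_distance(0.0))
print(similarity_from_distance(1.23))
```

Note how steep the seventh power makes the curve: distances well below **a** barely reduce the similarity, while distances past **a** drive it toward zero very quickly.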
## Calculating the Similarity

The API response provides various details about face detection in **JSON** format. Since the response is a string, you need to convert it to a dictionary using the `json` module and then extract the embedding vector from it.

```python
import json

res_json = json.loads(res.text)
if res_json['results'][0]['status']['code'] == 'failure':
    raise RuntimeError(res_json['results'][0]['status']['message'])
embedding = res_json['results'][0]['entities'][0]['objects'][0]['entities'][2]['vector']
```

**Important Note**

When the client submits an image that cannot be processed for any reason, the service will still respond with a 200 status code and return a JSON object similar to a successful analysis. However, in such cases, **results[].status.code** will have the value 'failure' and **results[].status.message** will contain an explanation. Possible reasons for this issue include:

- Unsupported file MIME type
- Corrupted image
- The file passed as a URL is too large or not downloadable

Ensure that **results[].status.code** in the response JSON is not 'failure'.

The next step is to calculate the L2-distance and convert it to a similarity score using the provided formula.

```python
import math

# embedding1 and embedding2 come from two separate API responses.
dist = math.sqrt(sum([(i - j) ** 2 for i, j in zip(embedding1, embedding2)]))
a = 1.23
similarity = math.exp(dist ** 7 * math.log(0.5) / a ** 7)
```

A face similarity threshold enables us to define the minimum similarity percentage necessary to classify faces as matching:

```python
threshold = 0.8
if similarity >= threshold:
    print("It's the same person.")
else:
    print('There are different people on the images.')
```

You can modify the threshold parameter to fit your specific requirements. To reduce the number of false positives (i.e., incorrectly identifying two faces as the same person), increase the threshold. Conversely, lower the threshold if you want only clearly dissimilar faces to be classified as different people, which reduces false negatives.
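One caveat about the extraction snippet above: it hard-codes the index `[2]` for the embedding entity. Since the response also carries the bounding box and landmarks entities, a slightly more defensive sketch looks the entity up by its label instead. The mocked response and the `'kind'` field name below are illustrative assumptions, not the confirmed API schema — inspect a real response in your environment to verify the exact field names.

```python
# Mocked response mirroring the nesting used above; field names such as
# 'kind' are assumptions for illustration only.
mock_response = {
    'results': [{
        'status': {'code': 'success', 'message': ''},
        'entities': [{
            'objects': [{
                'entities': [
                    {'kind': 'box'},
                    {'kind': 'face-landmarks'},
                    {'kind': 'face-embeddings', 'vector': [0.1, 0.2, 0.3]},
                ],
            }],
        }],
    }],
}


def extract_embedding(res_json):
    """Return the embedding vector of the first detected face."""
    result = res_json['results'][0]
    if result['status']['code'] == 'failure':
        raise RuntimeError(result['status']['message'])
    face = result['entities'][0]['objects'][0]
    # Select the embedding entity by label rather than by position.
    for entity in face['entities']:
        if entity.get('kind') == 'face-embeddings':
            return entity['vector']
    raise KeyError('No embedding entity found in the response.')


print(extract_embedding(mock_response))
```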
## Script for Comparing Faces in Two Images

Now that we understand how to determine face similarity, we can create a script to check if the same person appears in two different images. This process involves several key steps: sending the images to the API, extracting the embedding vectors, calculating the L2-distance between the vectors, and converting this distance into a similarity score. By adjusting the similarity threshold, we can effectively distinguish between faces belonging to the same person and those that do not. This script will enable robust identity verification, enhance security measures, and support various applications requiring accurate facial comparisons.

```python
#! /usr/bin/env python3
"""Determine that the same person is in two photos."""
from __future__ import annotations

import argparse
import json
import math
from pathlib import Path

import requests
from requests.adapters import HTTPAdapter, Retry

API_URL = 'https://demo.api4ai.cloud'
ALLOWED_EXTENSIONS = ['.jpg', '.jpeg', '.png']


def parse_args():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser()
    parser.add_argument('image1', help='Path or URL to the first image.')
    parser.add_argument('image2', help='Path or URL to the second image.')
    return parser.parse_args()


def get_image_embedding_vector(img_path: str):
    """Get face embedding using Face Analysis API."""
    retry = Retry(total=4, backoff_factor=1,
                  status_forcelist=[429, 500, 502, 503, 504])
    session = requests.Session()
    session.mount('https://', HTTPAdapter(max_retries=retry))
    if '://' in img_path:
        res = session.post(API_URL + '/face-analyzer/v1/results',
                           params={'embeddings': 'True'},  # required parameter if you need to get embeddings
                           data={'url': str(img_path)})
    else:
        img_path = Path(img_path)
        if img_path.suffix not in ALLOWED_EXTENSIONS:
            raise NotImplementedError('Image path contains not supported extension.')
        with img_path.open('rb') as f:
            res = session.post(API_URL + '/face-analyzer/v1/results',
                               params={'embeddings': 'True'},  # required parameter if you need to get embeddings
                               files={'image': f.read()})
    res_json = json.loads(res.text)
    if 400 <= res.status_code <= 599:
        raise RuntimeError(f'API returned status {res.status_code}'
                           f' with text: {res_json["results"][0]["status"]["message"]}')
    if res_json['results'][0]['status']['code'] == 'failure':
        raise RuntimeError(res_json['results'][0]['status']['message'])
    return res_json['results'][0]['entities'][0]['objects'][0]['entities'][2]['vector']


def convert_to_percent(dist):
    """Convert embeddings L2-distance to similarity percent."""
    threshold_50 = 1.23
    return math.exp(dist ** 7 * math.log(0.5) / threshold_50 ** 7)


def main():
    """Entrypoint."""
    try:
        # Parse command line arguments.
        args = parse_args()
        # Get embeddings of two images.
        emb1 = get_image_embedding_vector(args.image1)
        emb2 = get_image_embedding_vector(args.image2)
        # Calculate similarity of faces in two images.
        dist = math.sqrt(sum([(i - j) ** 2 for i, j in zip(emb1, emb2)]))  # L2-distance
        similarity = convert_to_percent(dist)
        # The threshold at which faces are considered the same.
        threshold = 0.8
        print(f'Similarity is {similarity * 100:.1f}%.')
        if similarity >= threshold:
            print("It's the same person.")
        else:
            print('There are different people on the images.')
    except Exception as e:
        print(str(e))


if __name__ == '__main__':
    main()
```

We have also integrated command-line argument parsing into the script, enabling users to easily specify input images and parameters. Additionally, the request logic retries transient HTTP errors (429 and 5xx responses) and checks the API status fields explicitly, making calls to the Face Analysis API more resilient. With these enhancements, the script not only performs robust identity verification but also offers flexibility and reliability, making it suitable for a variety of applications requiring accurate facial comparisons and verification.
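Before testing against real photos, the comparison logic of the script can be exercised offline with mocked embeddings. The tiny vectors below are made up purely to show the arithmetic — real embedding vectors returned by the API are much longer.

```python
import math


def similarity(emb1, emb2, a=1.23):
    """L2-distance between two embeddings, converted to a similarity score."""
    dist = math.sqrt(sum((i - j) ** 2 for i, j in zip(emb1, emb2)))
    return math.exp(dist ** 7 * math.log(0.5) / a ** 7)


# Hypothetical embeddings, one per photo (for illustration only).
embeddings = {
    'photo_a1': [0.12, 0.80, 0.41],   # person A, first photo
    'photo_a2': [0.13, 0.79, 0.40],   # person A, second photo (very close)
    'photo_b':  [2.00, -1.50, 0.90],  # person B (far from both)
}

threshold = 0.8
for name1, name2 in [('photo_a1', 'photo_a2'), ('photo_a1', 'photo_b')]:
    s = similarity(embeddings[name1], embeddings[name2])
    verdict = 'same person' if s >= threshold else 'different people'
    print(f'{name1} vs {name2}: {s * 100:.1f}% -> {verdict}')
```

The first pair lands near 100% and the second near 0%, which mirrors the behavior you should see with real photos of the same and of different people.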
**Experimenting with Various Individuals**

To gain a deeper understanding of the capabilities and limitations of the Face Analysis API, let's experiment with photos of different people. This will allow you to observe how accurately the API can differentiate between distinct faces.

**Identical Individual**

Let's test this script using two photos of Jared Leto.

![Face Analysis](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ojsnaoitky0w6mk1reru.jpg)
![Face Analysis](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/epmt5l3g75d9i8g5h5bi.jpg)

Simply execute the script in the terminal by running:

```bash
python3 ./main.py 'https://storage.googleapis.com/api4ai-static/rapidapi/face_verification_tutorial/leto1.jpg' 'https://storage.googleapis.com/api4ai-static/rapidapi/face_verification_tutorial/leto2.jpg'
```

For version **v1.16.2**, we should expect the following output:

```bash
Similarity is 99.2%.
It's the same person.
```

**Distinct Individuals**

Now, let's compare several different actors: Jensen Ackles, Jared Padalecki, Dwayne Johnson, Kevin Hart, Scarlett Johansson, and Natalie Portman.

![Face Comparison](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fke96lgaw3rq3897wf3l.png)

As you can see, the similarity scores for the same individuals are close to 100 percent. In contrast, the similarity scores for different individuals are significantly lower. By adjusting the similarity threshold, you can fine-tune the criteria for determining whether faces belong to the same person or different people. This adjustment allows you to control the sensitivity of your face verification system, ensuring it accurately differentiates between individuals based on your specific requirements.

## Experimenting with Various Poses

Faces can look different depending on the angle and lighting, and extreme angles, such as profile views, present significant challenges for face comparison algorithms.
To test the robustness of the verification process, it is crucial to experiment with photos taken from various angles and under different lighting conditions. This comprehensive testing approach will help you gauge how well the API performs in diverse scenarios, including less-than-ideal conditions. By doing so, you can pinpoint potential weaknesses and adjust your system accordingly to enhance its accuracy and reliability. Additionally, this experimentation will provide insights into the API's strengths and limitations, enabling you to make informed decisions when implementing face verification in real-world applications.

![Face Comparison](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ltu4p8fsssubseanqx7.png)

# Conclusion

## Summary of Key Points

In this extensive tutorial, we explored the essential components of face detection, recognition, and verification, with a particular focus on face verification. We started by understanding the fundamental concepts and the significance of facial analysis across various fields. Next, we introduced the API4AI Face Analysis API, highlighting its features and benefits. We provided detailed instructions for setting up the environment, sending requests to the API, and implementing face verification with practical code examples. Additionally, we discussed experimenting with different faces and poses to assess the robustness of the verification process.

## Future Directions

The realm of face analysis technology is swiftly advancing, driven by continuous improvements in machine learning algorithms and computational capabilities. Upcoming updates from API4AI are expected to offer enhanced accuracy, quicker processing times, and additional features to manage more complex scenarios. We can also anticipate improved performance with extreme angles, varied lighting conditions, and occlusions, further boosting the reliability of face verification systems.
## Encouragement to Explore Further

We encourage you to delve deeper into the capabilities of the API4AI Face Analysis API beyond the examples provided in this tutorial. Experiment with various datasets, diverse environmental conditions, and additional API features to fully grasp its potential. By doing so, you can customize the technology to meet your specific needs and develop more robust and versatile applications.

[More Stories about AI, Cloud, APIs and Web](https://api4.ai/blog)
taranamurtuzova
1,910,270
AI's Impact on Future Business: Transformation Across Industries and for IT Professionals 🌐
Artificial Intelligence is already rewriting the rules of the business world, providing companies...
0
2024-07-03T14:01:40
https://dev.to/namik_ahmedov/ais-impact-on-future-business-transformation-across-industries-and-for-it-professionals-2nhp
ai, it, automation
Artificial Intelligence is already rewriting the rules of the business world, providing companies with new opportunities for innovation and growth. In the IT industry, understanding how AI is changing not only technological approaches but also business processes is crucial.

🔍 **Data Transparency and Business Analytics**: AI enables extracting valuable insights from large volumes of data, significantly improving the quality of business analysis and decision-making.

🤖 **Automation and Process Optimization**: AI-driven automation speeds up task execution, reduces costs, and enhances operational efficiency, which is particularly vital for IT professionals involved in developing and implementing new technologies.

🌐 **Personalized Customer Experience**: With AI, companies can offer more personalized services and products, enhancing user experience and strengthening customer loyalty.

🔒 **Cybersecurity and Data Protection**: AI plays a critical role in securing data, detecting threats, and preventing cyber-attacks, essential for maintaining customer trust and protecting corporate assets.

🔮 **Future Innovations and Research**: The development prospects of AI include new technologies and methods that will continue to transform business models and create new opportunities for IT professionals.

Artificial Intelligence is not just a tool but a key component for creating competitive advantages in the future business landscape. Understanding its potential and applying it in every aspect of work will enable companies and IT professionals to effectively adapt to the challenges of the new digital era.
namik_ahmedov
1,910,269
Git Cheatsheet that will make you a master in Git
Introduction to Git Git is a widely used version control system that allows developers to...
0
2024-07-03T14:01:11
https://dev.to/hasan048/git-cheatsheet-that-will-make-you-a-master-in-git-p0p
github, githubactions, cheetsheet, programming
## Introduction to Git

Git is a widely used version control system that allows developers to track changes and collaborate on projects. It has become an essential tool for managing code changes, whether working solo or in a team. However, mastering Git can be a challenge, especially for beginners who are not familiar with its commands and features.

In this Git cheatsheet, we will cover both the basic and advanced Git commands that every developer should know. From creating a repository to branching, merging, and beyond, this cheatsheet will serve as a handy reference guide for anyone looking to improve their Git skills and become a more proficient developer. Whether you are just starting with Git or looking to enhance your existing knowledge, this cheatsheet will help you make the most out of Git and optimize your workflow.

## Basic Git commands

## Initialization

To initialize a new Git repository in the current directory, run the following command:

```
git init
```

This creates a hidden `.git` directory in the current directory that tracks changes to your code.

## Cloning

To clone an existing Git repository to your local machine, run the following command:

```
git clone <repository URL>
```

This creates a new directory on your computer with a copy of the repository.

## Staging changes

Before you commit changes to your code, you need to stage them using the `git add` command. This tells Git which changes you want to include in your commit. To stage a file or directory, run the following command:

```
git add <file/directory>
```

You can also stage all changes in the current directory by running:

```
git add .
```

## Committing changes

To commit the changes in the staging area with a commit message, run the following command:

```
git commit -m "<commit message>"
```

The commit message should briefly describe the changes you made in the commit.
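The init → stage → commit cycle above can be strung together into a single throwaway session. This sketch assumes `git` is installed; the temporary directory, file name, and identity settings are made up purely for the demo:

```shell
# Minimal init -> add -> commit walkthrough in a scratch repo.
repo=$(mktemp -d)
cd "$repo"
git init -q                                  # create the hidden .git directory
git config user.email "demo@example.com"     # identity is required before committing
git config user.name "Demo User"
echo "hello" > notes.txt
git add notes.txt                            # stage the new file
git commit -q -m "Add notes.txt"             # record it with a message
commits=$(git rev-list --count HEAD)         # how many commits exist now
echo "commits: $commits"
```

After the commit, `git status` reports a clean working tree, which is an easy sanity check that staging and committing worked.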
## Checking status

To check the current status of your repository, run the following command:

```
git status
```

This will show you which files have been modified, which files are staged for commit, and which files are untracked.

## Viewing changes

To view the changes between the working directory and the staging area, run the following command:

```
git diff
```

To view the changes between the staging area and the last commit, run the following command:

```
git diff --cached
```

## Branching

Git allows you to create multiple branches of your code so that you can work on different features or fixes without affecting the main codebase. The default branch in Git is called `master`.

To create a new branch with the specified name, run the following command:

```
git branch <branch name>
```

To switch to the specified branch, run the following command:

```
git checkout <branch name>
```

You can also create and switch to a new branch in one command by running:

```
git checkout -b <branch name>
```

To merge the specified branch into the current branch, run the following command:

```
git merge <branch name>
```

## Pushing changes

To push changes to a remote repository, run the following command:

```
git push <remote> <branch>
```

`<remote>` is the name of the remote repository, and `<branch>` is the name of the branch you want to push.

## Pulling changes

To pull changes from a remote repository, run the following command:

```
git pull <remote> <branch>
```

`<remote>` is the name of the remote repository, and `<branch>` is the name of the branch you want to pull.

## Viewing history

To view the commit history, run the following command:

```
git log
```

This will show you a list of all the commits in the repository, along with the commit message, author, and date.

## Advanced Git commands

## Reverting changes

If you need to undo a commit, you can use the `git revert` command. This creates a new commit that undoes the changes made in the specified commit.
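The branch, checkout, and merge commands above fit together like this. A hedged sketch, assuming `git` is available; the branch name `feature` and file contents are invented for the example, and the default branch name is captured up front since it may be `master` or `main` depending on your Git configuration:

```shell
# Branch off, commit on the branch, then merge back.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"
echo base > file.txt
git add file.txt && git commit -q -m "base commit"
default=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on config
git checkout -q -b feature                 # create and switch in one step
echo extra >> file.txt
git commit -q -am "feature work"           # commit on the feature branch
git checkout -q "$default"                 # back to the default branch
git merge -q feature                       # fast-forward merge brings the work in
count=$(git rev-list --count HEAD)
```

Because no commits were made on the default branch in the meantime, the merge is a fast-forward and history stays linear.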
## Resetting changes

If you want to undo a commit and remove it from the commit history, you can use the `git reset` command. This removes the specified commit and all subsequent commits from the commit history.

There are three options for `git reset`: `--soft`, `--mixed`, and `--hard`. `--soft` only resets the commit history and leaves the changes in the staging area. `--mixed` resets the commit history and unstages the changes. `--hard` resets the commit history, unstages the changes, and discards all changes made after the specified commit.

To reset the last commit using `--soft`, run the following command:

```
git reset --soft HEAD~1
```

To reset the last commit using `--mixed`, run the following command:

```
git reset --mixed HEAD~1
```

To reset the last commit using `--hard`, run the following command:

```
git reset --hard HEAD~1
```

## Rebasing

If you want to apply your changes to a different branch, you can use the `git rebase` command. This command applies your changes on top of the specified branch. To rebase the current branch onto the specified branch, run the following command:

```
git rebase <branch name>
```

## Stashing

If you want to save changes without committing them, you can use the `git stash` command. This saves the changes in a stack of temporary commits, allowing you to switch to a different branch or work on something else. To stash your changes, run the following command:

```
git stash
```

To apply your changes again, run the following command:

```
git stash apply
```

## Cherry-picking

If you want to apply a specific commit from one branch to another, you can use the `git cherry-pick` command. This command applies the specified commit on top of the current branch. To cherry-pick the specified commit, run the following command:

```
git cherry-pick <commit hash>
```

## Git hooks

Git hooks are scripts that run automatically before or after specific Git commands, allowing you to customize the behavior of Git. Git comes with a set of built-in hooks, but you can also create your own custom hooks.
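Stashing is easiest to see end to end: park an uncommitted change, confirm the working tree reverted to the last commit, then bring the change back. A small sketch, assuming `git` is installed; the file name and contents are made up:

```shell
# Stash an uncommitted edit, then restore it.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"
echo v1 > app.txt
git add app.txt && git commit -q -m "v1"
echo v2 > app.txt                 # uncommitted modification
git stash push -q                 # shelve it; working tree goes back to v1
after_stash=$(cat app.txt)
git stash apply -q                # re-apply the shelved change
after_apply=$(cat app.txt)
```

This is the typical "I need to switch branches but I'm mid-edit" workflow: stash, switch, work, switch back, apply.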
To create a custom Git hook, navigate to the `.git/hooks` directory in your Git repository and create a new file with the name of the hook you want to create (e.g., `pre-commit`, `post-merge`). The file should be executable and contain the script you want to run.

## Git aliases

Git aliases are shortcuts for Git commands, allowing you to save time and type less. You can create your own custom aliases using the `git config` command. To create a new alias, run the following command:

```
git config --global alias.<alias name> '<command>'
```

`<alias name>` is the name of the alias you want to create, and `<command>` is the Git command you want to alias.

## Git workflows

Git workflows are strategies for using Git to manage code changes in a team. There are several popular Git workflows, including the centralized workflow, the feature branch workflow, and the Gitflow workflow.

The centralized workflow is a simple workflow that involves a single main branch, with all changes made directly to that branch.

The feature branch workflow involves creating a new branch for each feature or bug fix, and merging those branches back into the main branch when the changes are complete.

The Gitflow workflow is a more complex workflow that involves multiple branches, including a develop branch for ongoing development, a release branch for preparing releases, and feature branches for individual features.

## Conclusion

In conclusion, Git is a powerful tool for version control and managing code changes. It allows developers to collaborate on projects, track changes, and revert to previous versions when necessary. While the basic Git commands are essential to know, the advanced Git commands discussed in this cheat sheet can help you be more efficient and effective when working with Git.
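Aliases can be tried out safely at the repository level rather than with `--global`, so nothing outside the demo repo is touched. A sketch, assuming `git` is installed; the alias name `st` and file name are arbitrary choices for the example:

```shell
# Define a repo-local alias and use it.
repo=$(mktemp -d); cd "$repo"
git init -q
git config alias.st "status --short"   # 'git st' now expands to 'git status --short'
echo data > f.txt                      # an untracked file
out=$(git st)                          # short status marks untracked files with '??'
echo "$out"
```

Dropping `--global` stores the alias in the repo's own `.git/config`, which is handy for experimenting before promoting a shortcut to your global config.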
hasan048
1,908,147
Vercel v0 and the future of AI-powered UI generation
Written by Peter Aideloje✏️ Vercel v0 is at the cutting edge of UI development. It has caught the...
0
2024-07-03T14:00:02
https://blog.logrocket.com/vercel-v0-ai-powered-ui-generation
vercel, webdev
**Written by [Peter Aideloje](https://blog.logrocket.com/author/peteraideloje/)✏️**

Vercel v0 is at the cutting edge of UI development. It has caught the attention of so many developers with claims that it can make UI development less complicated and tedious. In this blog post, let's delve more into how v0 can be your key to faster and more efficient UI development.

## What is Vercel v0?

Vercel v0 is a revolutionary platform that utilizes AI to streamline UI creation for developers. Imagine simply describing your desired interface to v0 — buttons, layouts, and all — and having it generate the corresponding code. That's the magic v0 brings to the table.

## How does Vercel v0 work?

As a developer, all you have to do is provide a natural language description of your UI vision. Vercel v0 then uses its generative AI engine to output multiple code variations based on pre-built components from popular open source libraries like Tailwind CSS and shadcn/ui. Integrating closely with these established tools allows Vercel v0 to generate a UI that you can implement seamlessly in your existing projects.

## Building UIs with Vercel v0

Imagine you need a landing page for your new photography course. It used to be that you'd sketch out the layout by hand or use some design software. Now, with v0, you can describe your vision in plain English and watch it come to life as code.

After you describe your desired layout, v0 generates different code options for each element. Then, you can pick and choose the ones that best suit your needs. You have the option to customize and tweak these settings or even modify the code to give it a more personalized touch.

## Getting started with Vercel v0

Using Vercel v0 is straightforward and enjoyable. Since it integrates directly with Vercel’s platform, you just need to sign into your Vercel account to access it. Here’s a glimpse into the process.

### Describe your interface

Begin by describing the UI you want to build.
This could be as simple as typing ideas or uploading an image for reference. For instance, you might write a prompt like, "I need a landing page with a hero section featuring a headline, a sub-headline, a call-to-action button, and an image background.": ![Prompting Vercel V0 To Generate A Landing Page With A Hero Section Containing Various Components](https://blog.logrocket.com/wp-content/uploads/2024/06/img1-Prompt-v0-generate-landing-page-hero-section-various-components.png) Alternatively, upload a mockup or sketch of your desired layout. ### Generate options Vercel v0 uses AI to generate multiple UI options based on your description. It’s like having a virtual assistant who understands your vision and presents you with several prototypes. For example, it might generate three hero sections, each with unique styles and arrangements. You can review these options and select the one that resonates most with your project’s aesthetic: ![Three Generated Ui Options By V0 Based On Previous Prompt](https://blog.logrocket.com/wp-content/uploads/2024/06/img2-Three-generated-UI-options-v0.png) ### Edit and customize Select the components you like and customize them to fit your needs. Vercel v0 allows you to adjust any aspect of the generated code, ensuring the final product aligns perfectly with your vision. Let’s say you chose one of the hero options above, but wanted to tweak the button color or change the header font. You can make these changes easily using the v0 interface and see a real-time preview of how they impact the overall design. 
For example, you could prompt v0 to update the button like so: ![Prompting V0 To Update The Button Component](https://blog.logrocket.com/wp-content/uploads/2024/06/img3-Prompting-v0-update-button-component.png) Here’s the result: ![Resulting Updated Button Based On New Prompt](https://blog.logrocket.com/wp-content/uploads/2024/06/img4-Resulting-updated-button-component.png) ### Integrate and develop Once you’re satisfied with the UI, hit the **Code </>** button to open the panel containing the generated code: ![Button To Open Panel With Generated Code](https://blog.logrocket.com/wp-content/uploads/2024/06/img5-Code-button-open-panel-generated-code.png) Then, copy the generated code and integrate it into your project: ![Open Code Panel With Button To Copy Generated Code](https://blog.logrocket.com/wp-content/uploads/2024/06/img6-Opened-code-panel-copy-button.png) This seamless transition from concept to code significantly reduces development time. ## Demonstrating how to use Vercel v0 Now that we’ve seen a high-level overview of getting started with v0, let’s walk through an example process of incorporating v0 into your next project. ### Set up your Vercel account If you don’t have a Vercel account already, visit [the Vercel website](https://vercel.com/) and create a free account. Keep in mind that Vercel v0 utilizes a subscription-based billing model separate from Vercel team accounts. Each subscription tier offers a set number of credits, with the option to purchase additional credits on demand. Generating a UI with v0 costs 10 credits each time. 
Here's a [breakdown of the plans](https://v0.dev/pricing): * **Free plan ($0)**: Perfect for getting acquainted with v0 — Includes 200 credits per month * **Premium plan ($20/month)**: Ideal for freelancers and small teams — Includes 5000 credits per month * **Enterprise plan (custom)**: Best for larger teams — Provides the ability to customize your credits and access advanced Vercel and v0 features and services * **Additional credits**: If your needs exceed your plan's limit, you can purchase extra credits as needed The [official v0 docs](https://v0.dev/docs) also mention a limited-time introductory offer. If you have an existing Vercel Pro account, you'll receive a complimentary set of credits upon starting with v0\. These free credits expire 30 days after onboarding. ### Navigate to v0 and describe your first UI element Once logged in, navigate to the v0 section within the Vercel platform. v0's interface is intuitive and user-friendly, making it easy to get started. Explore the available features and familiarize yourself with the workflow. Then, it's time to put v0 to the test! Begin by describing a simple UI element in clear and concise language. The more specific you are, the better v0 will understand your vision. For instance, you might describe something like the following: ```plaintext E-Commerce Website Design Prompt Objective: Design a comprehensive e-commerce website that provides a seamless shopping experience, reflects the brand identity, and includes essential features to drive sales and customer engagement. 1\. Project Overview Website Name: [Your Website Name] Brand Identity: [Describe the brand, its mission, vision, and values] Target Audience: [Describe the primary and secondary target audiences, including demographics, preferences, and behaviors] 2\. Core Features Home Page: Engaging hero banner with high-quality images or videos. Clear call-to-action buttons (e.g., Shop Now, Learn More) Navigation menu with categories and subcategories. 
Product Pages: High-quality product images with zoom functionality. Add to cart and wishlist, e.t.c. Add this fruit image in different parts of the website ``` Keep in mind that specificity is key. The more clearly you describe exactly what you want, the more accurately v0 will be able to generate what you’re looking for on the first try. v0 also accepts image uploads as a reference! If you have a design mockup or inspiration image, upload it alongside your description for even more accurate code generation. ### Experiment with variations v0 will generate multiple code variations based on your description. This gives you the flexibility to choose the option that best suits your needs. Don't be afraid to experiment! You can mix and match elements from different variations to create your ideal UI component: ![Demo Landing Page With Three Generated Variations](https://blog.logrocket.com/wp-content/uploads/2024/06/img7-Demo-landing-page-three-generated-variations.png) While v0 generates high-quality code, you still have complete creative control. You can tailor the components to your project's unique requirements, including adjusting styles (e.g., fonts, colors), modifying layouts, or incorporating custom code for specific functionalities. ### Integrate v0’s generated code into your project Once you've identified the perfect code variation and customized it to your liking, simply copy the desired code and paste it into the appropriate location within your project's codebase. Note that this might involve integrating the code into your component library or framework-specific files: ![Demo Generating And Copying Vercel V0 Generated Code For Landing Page](https://blog.logrocket.com/wp-content/uploads/2024/06/img8-Demo-generating-copying-v0-code-landing-page.png) Make sure you integrate the components according to your project's specific architecture to optimize performance and functionality. 
## Practical tips and best practices for working with v0 Since v0 is a relatively new tool, it might be overwhelming to use at first. Here are a few helpful tips that might help: * **Start simple**: Begin with simple UI elements to get a feel for how Vercel v0 works. As you become more comfortable, you can move on to more complex layouts and components * **Be specific**: The more specific you are in your descriptions, the better the results. Clearly outline the elements, styles, and layouts you need * **Check out the Explore page**: If you don't want to start from scratch, v0’s Explore page showcases a variety of pre-built web app samples. These examples not only demonstrate the capabilities of v0, but can also inspire your own UI ideas * **Leverage community resources**: Engage with the Vercel community. Sharing tips, asking questions, and learning from other developers’ experiences can enhance your use of v0 Also, remember that Vercel v0 isn't an island. You can extend its functionality through plugins and integrations. Developers can even contribute specialized tools to further enhance v0's capabilities and tailor it to specific development needs. ## Vercel v0 use cases and limitations Vercel v0 is versatile and can be applied in various development scenarios, making it a valuable tool for modern-day web applications, ecommerce sites, marketing landing pages, and more. Here are some specific use cases: * **Rapid iteration**: Use Vercel v0 to quickly prototype different design ideas, allowing for faster iterations and more experimentation without the time-consuming process of manual coding. You can even gather user feedback on generated UI elements and use v0 to swiftly incorporate their suggestions * **A/B testing**: Create multiple versions of a landing page for A/B testing. 
Quickly iterate on different designs to determine which performs best, optimizing your campaign's effectiveness * **Performance analysis**: v0 prioritizes performance by providing tools to analyze the generated code and identify potential bottlenecks, which can inform your decisions about code structure and styling to ensure a smooth UX across different devices * **Personalization and interactivity**: Create variations on UI elements for different color themes, timely trends and events, user settings, and more, or build dynamic and interactive elements that enhance user interactions and the overall UX While Vercel v0 offers significant advantages, it still has some limitations you should be aware of: * **Advanced functionality**: For complex interactions that require intricate logic or real-time updates, Vercel v0's AI-generated code might fall short. Real user experiences or human expertise might be necessary to implement sophisticated features and ensure they function correctly * **Customization needs**: The ability to customize code generated by Vercel v0 can be both a pro and a con. On the limitations side, it can be inconvenient if you need to refine the generated code every time to meet specific requirements or integrate with existing systems * **Accessibility considerations**: AI-generated outputs might not fully address all accessibility standards. It’s important to perform manual checks and modify the code, especially if you or a teammate have experience with accessibility. * **Testing and validation**: While the generated code is high-quality, you might need to further test, validate, and refine your code to align with specific design guidelines, functionality requirements, or performance needs ## Comparing Vercel v0 vs. ChatGPT Vercel v0 excels at UI development through code generation, and as of this writing, there aren’t many other AI-powered tools for UI generation. 
However, let's explore how Vercel v0 compares to ChatGPT, which might be better suited to different creative tasks within your workflow. Here's a table comparing Vercel v0 and ChatGPT: <table> <thead> <tr> <th>Feature</th> <th>Vercel v0</th> <th>ChatGPT</th> </tr> </thead> <tbody> <tr> <td>Focus</td> <td>UI development (code generation)</td> <td>Generative text formats (various creative text formats)</td> </tr> <tr> <td>Input</td> <td>Natural language description of UI</td> <td>Text prompt</td> </tr> <tr> <td>Output</td> <td>Code snippets for UI components</td> <td>Text formats like poems, code, scripts, musical pieces, etc.</td> </tr> <tr> <td>Strengths</td> <td>Speeds up UI development, reduces manual coding</td> <td>Creative text generation, ability to adapt to different writing styles</td> </tr> <tr> <td>Weaknesses</td> <td>Limited to UI development, complex interactions might require manual coding</td> <td>Not specifically designed for UI development, may require more post-processing of outputs</td> </tr> <tr> <td>Customization options</td> <td>Extensive customization for UI components</td> <td>Limited to textual and code responses</td> </tr> <tr> <td>Pricing tiers</td> <td>Free, Pro ($20/month), Enterprise (custom pricing)</td> <td>Free, ChatGPT Plus ($20/month)</td> </tr> <tr> <td>Ideal users</td> <td>Developers, UI/UX designers</td> <td>Writers, content creators, anyone needing creative text formats</td> </tr> </tbody> </table> ## Conclusion Vercel v0 is a powerful tool for rapidly prototyping and building various web application elements. Its ability to generate code based on descriptions can significantly speed up your development workflow. Despite v0’s many strengths and capabilities, you may need to manually handle complex interactions, accessibility, and customizations to achieve a final, workable UI. However, combining v0’s generative AI with your own human expertise can help you more quickly build high-quality, functional, and accessible web apps. 
As AI continues to evolve, the future of UI development looks incredibly bright. Vercel v0 is just the beginning, paving the way for a future where AI and human creativity work together to build even more innovative and engaging user interfaces.

---

## Get set up with LogRocket's modern error tracking in minutes:

1. Visit https://logrocket.com/signup/ to get an app ID.
2. Install LogRocket via NPM or script tag. `LogRocket.init()` must be called client-side, not server-side.

NPM:

```bash
npm i --save logrocket
```

```javascript
import LogRocket from 'logrocket';
LogRocket.init('app/id');
```

Script Tag — add to your HTML:

```html
<script src="https://cdn.lr-ingest.com/LogRocket.min.js"></script>
<script>window.LogRocket && window.LogRocket.init('app/id');</script>
```

3. (Optional) Install plugins for deeper integrations with your stack:

* Redux middleware
* ngrx middleware
* Vuex plugin

[Get started now](https://lp.logrocket.com/blg/signup)
leemeganj
1,910,267
Top 10 React js interview questions.
As a React developer, it is important to have a solid understanding of the framework's key concepts...
0
2024-07-03T13:58:04
https://dev.to/hasan048/top-10-react-js-interview-questions-2l15
webdev, javascript, beginners, tutorial
As a React developer, it is important to have a solid understanding of the framework's key concepts and principles. With this in mind, I have put together a list of 10 important questions that every React developer should know, whether they are interviewing for a job or just looking to improve their skills.

Before diving into the questions and answers, I suggest trying to answer each question on your own before looking at the answers provided. This will help you gauge your current level of understanding and identify areas that may need further improvement. Let's get started!

**01. What is React and what are its benefits?**

Ans: React is a JavaScript library for building user interfaces. It is used for building web applications because it allows developers to create reusable UI components and manage the state of the application in an efficient and organized way.

**02. What is the virtual DOM and how does it work?**

Ans: The Virtual DOM (Document Object Model) is a representation of the actual DOM in the browser. It enables React to update only the specific parts of a web page that need to change, instead of rewriting the entire page, leading to increased performance.

When a component's state or props change, React will first create a new version of the Virtual DOM that reflects the updated state or props. It then compares this new version with the previous version to determine what has changed. Once the changes have been identified, React will then update the actual DOM with the minimum number of operations necessary to bring it in line with the new version of the Virtual DOM. This process is known as "reconciliation".

The use of a Virtual DOM allows for more efficient updates because it reduces the amount of direct manipulation of the actual DOM, which can be a slow and resource-intensive process.
By only updating the parts that have actually changed, React can improve the performance of an application, especially on slow devices or when dealing with large amounts of data.

**03. How does React handle updates and rendering?**

Ans: React handles updates and rendering through a virtual DOM and component-based architecture. When a component's state or props change, React creates a new version of the virtual DOM that reflects the updated state or props, then compares it with the previous version to determine what has changed. React updates the actual DOM with the minimum number of operations necessary to bring it in line with the new version of the virtual DOM, a process called "reconciliation".

React also uses a component-based architecture where each component has its own state and render method. It re-renders only the components that have actually changed. It does this efficiently and quickly, which is why React is known for its performance.

**04. What is the difference between state and props?**

Ans: State and props are both used to store data in a React component, but they serve different purposes and have different characteristics. Props (short for "properties") are a way to pass data from a parent component to a child component. They are read-only and cannot be modified by the child component. State, on the other hand, is an object that holds the data of a component that can change over time. It can be updated using the setState() method and is used to control the behavior and rendering of a component.

**05. Can you explain the concept of Higher Order Components (HOC) in React?**

Ans: A Higher Order Component (HOC) in React is a function that takes a component and returns a new component with additional props. HOCs are used to reuse logic across multiple components, such as adding a common behavior or styling. HOCs are used by wrapping a component within the HOC, which returns a new component with the added props.
The original component is passed as an argument to the HOC, and receives the additional props via destructuring. HOCs are pure functions, meaning they do not modify the original component, but return a new, enhanced component.

For example, an HOC could be used to add authentication behavior to a component, such as checking if a user is logged in before rendering the component. The HOC would handle the logic for checking if the user is logged in, and pass a prop indicating the login status to the wrapped component.

HOCs are a powerful pattern in React, allowing for code reuse and abstraction, while keeping the components modular and easy to maintain.

**06. What is the difference between server-side rendering and client-side rendering in React?**

Ans: Server-side rendering (SSR) and client-side rendering (CSR) are two different ways of rendering a React application.

In SSR, the initial HTML is generated on the server, and then sent to the client, where it is hydrated into a full React app. This results in a faster initial load time, as the HTML is already present on the page, and can be indexed by search engines.

In CSR, the initial HTML is a minimal, empty document, and the React app is built and rendered entirely on the client. The client makes API calls to fetch the data required to render the UI. This results in a slower initial load time, but a more responsive and dynamic experience, as all the rendering is done on the client.

**07. How does the useEffect hook work in React?**

Ans: The useEffect hook in React allows developers to perform side effects such as data fetching, subscriptions, and setting up/cleaning up timers, in functional components. It runs after the render is committed to the screen, including after the first render. The useEffect hook takes two arguments - a function to run after a render and an array of dependencies that determines when the effect should be run.
If the dependency array is absent, the effect runs after every render; if it is an empty array, the effect runs only once, after the initial render.

**08. How does React handle events and what are some common event handlers?**

Ans: React handles events through its event handling system, where event handlers are passed as props to the components. Event handlers are functions that are executed when a specific event occurs, such as a user clicking a button. Common event handlers in React include onClick, onChange, onSubmit, etc.

The event handler receives an event object, which contains information about the event, such as the target element, the type of event, and any data associated with the event. React event handlers should be passed as props to the components, and the event handlers should be defined within the component or in a separate helper function.

**09. What are some best practices for performance optimization in React?**

Ans: Best practices for performance optimization in React include using memoization, avoiding unnecessary re-renders, using lazy loading for components and images, and using the right data structures.

**10. How does React handle testing and what are some popular testing frameworks for React?**

Ans: React handles testing using testing frameworks such as Jest, Mocha, and Enzyme. Jest is a popular testing framework for React applications, while Mocha and Enzyme are also widely used.

In conclusion, understanding key concepts and principles of React is crucial for every React developer. This article provides answers to 10 important questions related to React including what is React, the virtual DOM, how React handles updates and rendering, the difference between state and props, Higher Order Components, server-side rendering and client-side rendering, and more. Understanding these topics will help developers to build efficient and effective web applications using React.
hasan048
1,910,266
Linux User Creation Bash Script
Introduction We can use a Bash script to automate the creation of users and groups, set up...
0
2024-07-03T13:55:00
https://dev.to/tennie/linux-user-creation-bash-script-llg
## Introduction We can use a Bash script to automate the creation of users and groups, set up home directories, generate random passwords, and log all actions. ## Script Overview The script we're going to discuss performs the following functions: Create Users and Groups: Reads a file containing usernames and group names, creates the users and groups if they do not exist, and assigns users to the specified groups. Setup Home Directories: Sets up home directories with appropriate permissions and ownership for each user. Generate Random Passwords: Generates random passwords for the users and stores them securely. Log Actions: Logs all actions to /var/log/user_management.log for auditing and troubleshooting. Store Passwords Securely: Stores the generated passwords in /var/secure/user_passwords.csv with restricted access. ## The Script Here is the complete Bash script: ``` #!/bin/bash LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Ensure /var/secure exists and has the correct permissions mkdir -p /var/secure chmod 700 /var/secure touch "$PASSWORD_FILE" chmod 600 "$PASSWORD_FILE" # Function to log messages log_message() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE" } # Function to generate random passwords generate_password() { local password_length=12 tr -dc A-Za-z0-9 </dev/urandom | head -c $password_length } # Function to add users, groups and set up home directories setup_user() { local username=$1 local groups=$2 # Create the user if ! id -u "$username" &>/dev/null; then password=$(generate_password) useradd -m -s /bin/bash "$username" echo "$username:$password" | chpasswd log_message "User $username created." # Store the username and password echo "$username,$password" >> "$PASSWORD_FILE" log_message "Password for $username stored." else log_message "User $username already exists." 
fi # Create groups and add user to groups IFS=',' read -ra group_array <<< "$groups" for group in "${group_array[@]}"; do if ! getent group "$group" &>/dev/null; then groupadd "$group" log_message "Group $group created." fi usermod -aG "$group" "$username" log_message "Added $username to $group." done # Set up the home directory local home_dir="/home/$username" chown "$username":"$username" "$home_dir" chmod 700 "$home_dir" log_message "Home directory for $username set up with appropriate permissions." } # Main script if [ $# -eq 0 ]; then log_message "Usage: $0 <input_file>" exit 1 fi input_file=$1 log_message "Starting user management script." # Read the input file and process each line while IFS=';' read -r username groups; do setup_user "$username" "$groups" done < "$input_file" log_message "User management script completed." ``` ### Logging and Password File Setup * The script ensures that the /var/secure directory exists and has the appropriate permissions. * It creates the password file /var/secure/user_passwords.csv and ensures only the owner can read it. ```bash mkdir -p /var/secure chmod 700 /var/secure touch "$PASSWORD_FILE" chmod 600 "$PASSWORD_FILE" ``` ### Message_Log The log_message function logs messages to /var/log/user_management.log with a timestamp. ```bash log_message() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE" } ``` ### password function The generate_password function generates a random password of a specified length (12 characters in this case). ```bash generate_password() { local password_length=12 tr -dc A-Za-z0-9 </dev/urandom | head -c $password_length } ``` ### User Setup Function The setup_user function creates users, adds them to groups, sets up home directories with appropriate permissions, and logs each action. It also generates and stores passwords securely. ```bash setup_user() { local username=$1 local groups=$2 # Create the user if ! 
id -u "$username" &>/dev/null; then password=$(generate_password) useradd -m -s /bin/bash "$username" echo "$username:$password" | chpasswd log_message "User $username created." # Store the username and password echo "$username,$password" >> "$PASSWORD_FILE" log_message "Password for $username stored." else log_message "User $username already exists." fi ``` ### Main Script The main part of the script takes an input file as an argument, reads it line by line, and processes each line to create users and groups, set up home directories, and log actions. ```bash if [ $# -eq 0 ]; then log_message "Usage: $0 <input_file>" exit 1 fi ``` ### This makes sure you run the script with an input_file, i.e. input.txt ```bash input_file=$1 log_message "Starting user management script." ``` ### Usage To use this script, save it to a file (e.g., user_management.sh), make it executable, and run it as a root user with the path to your input file as an argument: input.txt ```bash user1;group1,group2 user2;group3,group4 ``` on the Command Line (CMD) | Terminal ```bash chmod +x user_management.sh ./user_management.sh input.txt ``` Talents [HNG Internship](https://hng.tech/internship) [HNG Tech]( https://hng.tech/hire)
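The two-stage IFS split the script relies on (';' between username and group list in the while-read loop, ',' between individual groups in setup_user) can be seen in isolation with this standalone sketch; no users or groups are created:

```bash
#!/bin/bash
line="user1;group1,group2"

# First split: username vs. group list, on ';' (as in the while-read loop)
IFS=';' read -r username groups <<< "$line"

# Second split: individual groups, on ',' (as in setup_user)
IFS=',' read -ra group_array <<< "$groups"

echo "user: $username"
for g in "${group_array[@]}"; do
  echo "group: $g"
done
```

Running this prints the username followed by each group on its own line, which is exactly how each line of input.txt is decomposed.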
tennie
1,910,265
Make an Image drag and drop with CSS in React
React is a popular JavaScript library for building user interfaces, and its flexibility and...
0
2024-07-03T13:54:48
https://dev.to/hasan048/make-an-image-drag-and-drop-with-css-in-react-3md8
webdev, javascript, programming, tutorial
React is a popular JavaScript library for building user interfaces, and its flexibility and versatility make it a great choice for building interactive applications. In this tutorial, we will show you how to create a drag-and-drop feature for images using only CSS in React. **Step 1 — ** To start, let's set up a React project. You can use either Create React App or any other setup method that works best for you. Let's make a React application using create-react-app. npx create-react-app drag-and-drop Step 2 — Replace App.js and App.css with the below code. ``` App.js import './App.css'; function App() { return ( <div className="App"> <h2 className="heading">Select Image:</h2> <div className="image-area"> </div> </div> ); } export default App; App.css .App { text-align: center; width: 100vw; height: 100vh; } .heading { font-size: 32px; font-weight: 500; } Step 3 — Now create a new file and component ImageContainer.js in the src directory and take a div for the drag-and-drop container. ImageContainer.js import React from 'react'; const ImageContainer = () => { return ( <div className="image-container"> </div> ); }; export default ImageContainer; ``` Then make a CSS file ImageContainer.css in the src directory and add some styles in the image container. ``` ImageContainer.css .image-container { width: 60%; height: 90%; display: flex; align-items: center; justify-content: center; border: 2px dashed rgba(0, 0, 0, .3); } ``` Step 4 — Now we will take a div with an input field and input text title inside the .image-container class and add some style in the ImageContainer.css file. We will also add a state for the image URL and an onChange function to update the state. 
``` ImageContainer.js will be import React from 'react'; import './ImageContainer.css'; const ImageContainer = () => { const [url, setUrl] = React.useState(''); const onChange = (e) => { const files = e.target.files; files.length > 0 && setUrl(URL.createObjectURL(files[0])); }; return ( <div className="image-container"> <div className="upload-container"> <input type="file" className="input-file" accept=".png, .jpg, .jpeg" onChange={onChange} /> <p>Drag & Drop here</p> <p>or</p> <p>Click</p> </div> </div> ); }; export default ImageContainer; ``` ImageContainer.css will be ``` .image-container { width: 60%; height: 90%; display: flex; align-items: center; justify-content: center; border: 2px dashed rgba(0, 0, 0, .3); } .upload-container { position: relative; width: 100%; height: 100%; display: flex; flex-direction: column; align-items: center; justify-content: center; background-color: white; } .upload-container>p { font-size: 18px; margin: 4px; font-weight: 500; } .input-file { display: block; border: none; position: absolute; top: 0; left: 0; right: 0; bottom: 0; opacity: 0; } ``` Step 5 — Now we will preview the image file conditionally. If an image has been dropped, we will render the image; otherwise, we will render the drag-and-drop area. ``` ImageContainer.js will be import React from 'react'; import './ImageContainer.css'; const ImageContainer = () => { const [url, setUrl] = React.useState(''); const onChange = (e) => { const files = e.target.files; files.length > 0 && setUrl(URL.createObjectURL(files[0])); }; return ( <div className="image-container"> { url ? 
<img className='image-view' style={{ width: '100%', height: '100%' }} src={url} alt="" /> : <div className="upload-container"> <input type="file" className="input-file" accept=".png, .jpg, .jpeg" onChange={onChange} /> <p>Drag & Drop here</p> <p>or <span style={{ color: "blue" }} >Browse</span></p> </div> } </div> ); }; export default ImageContainer; ``` Step 6 — Now we will import the ImageContainer component in our App.js and run our application using the npm start command and have fun while coding. ``` App.js will be import './App.css'; import ImageContainer from './ImageContainer'; function App() { return ( <div className="App"> <h2 className="heading">Select Image:</h2> <div className="image-area"> <ImageContainer /> </div> </div> ); } export default App; ``` In this tutorial, we showed you how to create a drag-and-drop feature for images in React using only CSS: the trick is a transparent file input (opacity: 0) stretched over the drop area.
hasan048
1,910,262
Lazy Imports in Java?🙂‍↔️
I am an experienced Python user, and I love how everything is built-in. One feature I use daily is...
0
2024-07-03T13:51:14
https://dev.to/tanwanimohit/lazy-imports-in-java--3pal
java, coding, tutorial, learning
I am an experienced Python user, and I love how everything is built-in. One feature I use daily is lazy imports. Recently, I was working with Java and wanted to do something similar. Although I couldn't find any good articles on the topic, I managed to figure it out. Here is a sample implementation based on the bcrypt library use case. Before using lazy import, I would import the function at the top and use it later in the function. ``` import at.favre.lib.crypto.bcrypt.BCrypt;

 // Using it somewhere in the program String password = "Test@123"; String storedHashString = "$2RandomHashString"; boolean isValid = BCrypt.verifyer().verify(password.toCharArray(), storedHashString.toCharArray()).verified; ``` Above, you can see example code where I am using the Bcrypt library's functions to verify a password. However, I wanted to import Bcrypt lazily. Here is what I did: ``` // Directly wrote this where I wanted to use this function. // (needs: import java.lang.reflect.Method;) Class<?> bCryptClass = Class.forName("at.favre.lib.crypto.bcrypt.BCrypt"); // Load the class at runtime Method verifyerMethod = bCryptClass.getMethod("verifyer"); // Get the method from the class. Object verifyer = verifyerMethod.invoke(null); // Invoke the verifyer method and store the instance in the variable. Method verifyMethod = verifyer.getClass().getMethod("verify", char[].class, char[].class); Object result = verifyMethod.invoke(verifyer, password.toCharArray(), storedHashString.toCharArray()); // Finally invoke the verify method. Boolean ans = (Boolean) result.getClass().getField("verified").get(result); // get the verified field. ``` A few things to notice here: - I didn't import the Bcrypt library at the top. - It will be loaded at runtime. If the jar file is not present, it will throw a ClassNotFoundException. So, add a try-catch block if needed. - My goal was to achieve this because, in my case, the library might or might not exist in the environment. Let me know if you have a better way to do this. I hope you find it helpful. P.S. I'm a noob at Java, so don't quote me on this :P
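Building on the ClassNotFoundException note above, here is a self-contained sketch that wraps the reflection in a try-catch. The class and method names come from the bcrypt usage above; LazyImportDemo, verifyLazily, and the return-false fallback are my own illustrative choices, not part of the original:

```java
import java.lang.reflect.Method;

public class LazyImportDemo {
    // Verifies a password via reflection; returns false when the bcrypt jar
    // is not on the classpath (assumed fallback; you may prefer to rethrow
    // or disable the feature instead).
    static boolean verifyLazily(String password, String storedHash) {
        try {
            Class<?> bCryptClass = Class.forName("at.favre.lib.crypto.bcrypt.BCrypt");
            Object verifyer = bCryptClass.getMethod("verifyer").invoke(null);
            Method verify = verifyer.getClass().getMethod("verify", char[].class, char[].class);
            Object result = verify.invoke(verifyer, password.toCharArray(), storedHash.toCharArray());
            return (Boolean) result.getClass().getField("verified").get(result);
        } catch (ClassNotFoundException e) {
            // Jar is absent: degrade gracefully instead of crashing at class-load time.
            System.out.println("bcrypt jar not found, skipping verification");
            return false;
        } catch (ReflectiveOperationException e) {
            // The class was present but the API did not match what we expected.
            throw new IllegalStateException("Unexpected bcrypt API", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(verifyLazily("Test@123", "$2RandomHashString"));
    }
}
```

Because Class.forName only runs when verifyLazily is called, a missing jar is detected at that call site rather than when the class containing a top-level import is first loaded, which is the whole point of importing lazily.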
tanwanimohit
1,910,263
Highly Effective 7 Habits for Developers
As a software developer, success doesn't just come from luck or chance. It is the result of years of...
0
2024-07-03T13:50:53
https://dev.to/hasan048/highly-effective-7-habits-for-developers-2gkh
webdev, beginners, programming, tutorial
As a software developer, success doesn't just come from luck or chance. It is the result of years of hard work, continuous learning and development, and forming good habits. In the fast-paced world of technology, software developers must always be learning and adapting to keep up with the latest trends and advancements in their field. In this article, we will discuss 7 habits that can help you become a highly effective software developer. **01 Map out a timetable:** Just like in school, having a timetable is essential for software developers. It helps you keep track of your daily activities and make sure you're using your time efficiently. When you're learning a new programming language, it's important to have a schedule in place that outlines when you'll be working on it and for how long. This way, you can stay focused and avoid distractions, and make the most of your learning time. **02 Embrace mistakes and learn from experiences:** No one is perfect, and as a software developer, you will make mistakes. It's important to embrace these mistakes and use them as opportunities to learn and grow. When you make a mistake, take time to reflect on what went wrong and what you can do better next time. This way, you'll be able to avoid making the same mistake in the future and become a better developer. **03 Be consistent:** Consistency is key when it comes to software development. By setting aside time every day to work on your craft, you'll be able to make steady progress and become more skilled over time. Consistency also helps you identify areas that need improvement and gives you the time and motivation to work on them. **04 Find a mentor:** Having a mentor can be incredibly beneficial for software developers. A mentor can offer guidance and advice, and help you overcome challenges. They can provide you with a fresh perspective and share their experiences and insights, which can be valuable when working on complex projects. 
**05 Work on projects:** Learning by doing is one of the most effective ways to become a better software developer. By working on projects, you'll have the opportunity to put your skills to the test and gain real-world experience. It's important to choose projects that are aligned with your skill level and gradually increase the difficulty as you grow more comfortable. **06 Don't be a jack of all trades:** As a software developer, it's tempting to try and learn as many programming languages and technologies as possible. However, it's important to remember that being a jack of all trades won't necessarily make you a master of any. Instead, focus on mastering one area, and then move on to the next once you feel comfortable. This way, you'll be able to become a more specialized and in-demand developer. **07 Stay up to date with the latest advancements:** The world of technology is constantly changing, and software developers must keep up with the latest advancements in their field. Read articles, attend webinars and conferences, and follow industry leaders on social media to stay informed and up to date with the latest trends and advancements. In conclusion, forming good habits as a software developer can greatly enhance your career and lead to long-term success. By following these 7 habits, you'll be able to become a more effective, knowledgeable, and in-demand developer in no time.
hasan048
1,910,261
The Role of a Digital Marketing Agency in Today's Business Landscape
In the fast-paced world of modern business, staying competitive requires more than just having a...
0
2024-07-03T13:49:34
https://dev.to/alica_gc_017203c23e4289d1/the-role-of-a-digital-marketing-agency-in-todays-business-landscape-4o21
webdev, javascript, programming, tutorial
In the fast-paced world of modern business, staying competitive requires more than just having a great product or service. It demands effective outreach and engagement strategies that resonate with today's digital-savvy consumers. This is where digital marketing agencies play a crucial role, offering specialized services that help businesses navigate and thrive in the digital realm. What is a Digital Marketing Agency? A [digital marketing agency](https://duitservice.com/) is a specialized firm that leverages digital channels such as websites, social media, search engines, email, and mobile apps to promote and advertise products and services. Unlike traditional marketing agencies, which focused primarily on print and broadcast media, digital marketing agencies harness the power of technology and data to deliver targeted marketing campaigns. Services Offered by Digital Marketing Agencies Search Engine Optimization (SEO): SEO is the practice of optimizing a website to rank higher in search engine results pages (SERPs). Digital marketing agencies employ SEO strategies such as keyword research, content optimization, and link building to increase organic traffic and improve visibility. Pay-Per-Click Advertising (PPC): PPC campaigns involve placing ads on search engines and social media platforms, where advertisers pay a fee each time their ad is clicked. Digital marketing agencies manage PPC campaigns to drive immediate traffic and conversions, utilizing platforms like Google Ads and Facebook Ads. Social Media Marketing (SMM): SMM focuses on leveraging social media platforms such as Facebook, Instagram, Twitter, and LinkedIn to engage with audiences, build brand awareness, and drive website traffic. Digital marketing agencies create tailored social media strategies, manage content calendars, and analyze engagement metrics to optimize performance. 
Content Marketing: Content marketing involves creating and distributing valuable, relevant content to attract and retain a targeted audience. Digital marketing agencies develop content strategies, produce blog posts, videos, infographics, and other content formats to educate, entertain, and inspire potential customers. Email Marketing: Email marketing remains a powerful tool for nurturing leads and maintaining customer relationships. Digital marketing agencies design and execute email campaigns, segmenting audiences, crafting compelling messages, and analyzing campaign performance to achieve higher open and click-through rates. Website Design and Development: A well-designed website is essential for establishing credibility and converting visitors into customers. Digital marketing agencies offer website design and development services, ensuring websites are responsive, user-friendly, and optimized for search engines and conversions. The Benefits of Hiring a Digital Marketing Agency Expertise and Specialization: Digital marketing agencies bring expertise in various disciplines, including SEO, PPC, social media, and content marketing. They stay updated with industry trends, best practices, and algorithm changes, ensuring clients benefit from cutting-edge strategies. Cost-Effectiveness: Outsourcing digital marketing to an agency can be more cost-effective than maintaining an in-house team. Agencies provide scalable solutions, allowing businesses to allocate resources efficiently and achieve a higher return on investment (ROI). Access to Advanced Tools and Technologies: Digital marketing agencies have access to sophisticated tools and technologies for analytics, keyword research, campaign management, and performance tracking. This enables data-driven decision-making and optimization of marketing efforts in real-time. Measurable Results and Accountability: Unlike traditional marketing methods, digital marketing campaigns are highly measurable. 
Agencies provide detailed analytics and reports, allowing businesses to track campaign performance, measure ROI, and make informed adjustments to improve outcomes. Choosing the Right Digital Marketing Agency When selecting a digital marketing agency, businesses should consider the following factors: Experience and Track Record: Review the agency's portfolio and case studies to gauge their expertise in relevant industries and campaign success. Client Testimonials and Reviews: Seek feedback from past and current clients to assess satisfaction levels and the agency's ability to deliver results. Industry Reputation and Credentials: Verify the agency's reputation within the industry and any certifications or partnerships with leading platforms such as Google or HubSpot. Conclusion In conclusion, digital marketing agencies play a pivotal role in helping businesses navigate the complexities of the digital landscape and achieve their marketing objectives. By leveraging specialized expertise, innovative strategies, and advanced technologies, these agencies empower brands to connect with their target audiences effectively, drive growth, and stay ahead of the competition in today's competitive marketplace. As businesses continue to embrace digital transformation, partnering with a reputable digital marketing agency has become not just a competitive advantage but a necessity for sustained success.
alica_gc_017203c23e4289d1
1,910,259
Clean Code in React
Hello everyone, I'm Juan, and today I want to share with you some tips and tricks that I use to write...
0
2024-07-03T13:47:38
https://dev.to/juanemilio31323/clean-code-in-react-30cm
react, frontend, cleancode, webdev
Hello everyone, I'm Juan, and today I want to share with you some tips and tricks that I use to write more reusable code. For the past 2 weeks, I've been writing about how to develop projects faster and how it's irrelevant to spend too much time looking for the perfect code at the outset of the project. Although, you are not always going to be working on your own projects, and sometimes you won't be in a rush. Or maybe, just maybe, you are one of those accountable developers who, after spending time creating something amazing in just [48 hours](https://dev.to/juanemilio31323/48hs-is-all-you-need-39fa) and applying the [strategies](https://dev.to/juanemilio31323/how-i-build-projects-faster-my-stack-and-strategies-3hpg) that I shared, now you are at the point where you have to fix the mess that you've created, developing as fast as you could. The example project that I'm going to use is React, but you can apply most of these things practically anywhere you want. I'm going to be showing folder structures, mentioning the architecture that I'm following, and also providing code snippets with explanations and examples of multiple cases. ## What is going on with the industry? --- I've been lucky, and thanks to that I've worked at companies of different sizes and styles, some bigger, some smaller, but in all of them, I saw a pattern that repeats everywhere: the code is broken. No structure whatsoever—logic, styles, types, everything in the same place. That might not be a problem when you just have a bunch of files, but when your codebase is gigantic, it is really easy to make it impossible to grow or sustain over time. Right now, I'm working for a huge company with over 29,000 employees. You wouldn't expect a company like this to make the mistakes that I was describing, but it does. 
After talking to people who theoretically know how to lead a project and help sustain a readable and scalable codebase, I came to the conclusion that most of us lack the knowledge or interest, or both, when it comes to structuring the code. Hopefully, this post will help someone out there write better code and know how to sustain a project over time. ## The architecture that I use - let's scream ---- Hopefully, you've heard about this architecture before. It's called _Screaming Architecture_, and the name is no joke—it literally screams at you. When you enter the project, it perfectly summarizes what's going on without needing you to run the project. Let me show you: ![Basic folder structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tcfyw8ifnjcj0e7yy72t.png) It's worth mentioning that I made some modifications to the _Screaming Architecture_ to better suit my needs, and that is something you can do too. As always, there is no perfect solution, just solutions that may adapt better to your specific case. Feel free to keep the things that work best for you and discard the rest. But when you do it, be honest with yourself and ask the following question: "Am I really doing this because it is better or just because I don't want to learn something new?" ### Explaining the basic structure: In the _components_ folder, you'll create all the components that you need to reuse across your application. These are not specific components, and if you are going to put it there, it is because you are already using that component and you want to reuse it somewhere else. In the _interfaces_ folder, you'll define all your types. We are doing it this way because most of the time, you are sharing types across your entire application, so it doesn't make much sense to me to have the types in a deeper folder. 
![Basic folder structure interfaces](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9kc1yq7n4r3yfjlr3nol.png) Finally, in the _features_ folder, you'll have most of the code, following a recursive structure: ![Components structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/beze20ur6qyo9hjco7fc.png) As you can probably see here, we are creating a _Logic_ file because we don't want the logic of our application leaking into our visual components. Let me show you an example in _React_ to see how this works. The component that I'm going to show you is _Window_. It renders on the screen the Window's width of our user. You'll notice that we have almost no logic here, just a `<p>` tag that is returning the _width size_ of the screen. ```tsx import Logic from "./Logic"; const Window = () => { const { size } = Logic(); return <p>{size}</p>; }; export default Window; ``` And here's the logic: ```typescript import { useState, useEffect } from "react"; const Logic = () => { const [size, setWindowSize] = useState(0); useEffect(() => { // get the windows width setWindowSize(window.screen.width); }, []); return { size }; }; export default Logic; ``` We have successfully isolated our logic and our view. There will be people reading this and thinking that this is similar in some ways to the [Model View Controller](https://developer.mozilla.org/es/docs/Glossary/MVC) pattern, and they are correct. This is really helpful and most of the time will avoid having a file that is overwhelmingly huge to read. But this is not a silver bullet, and there will be cases where you'll have to take other approaches. So let's see other concepts and examples. ## Abstraction and Encapsulation When I started programming, I remember that one of the things that I hated the most was _OOP (Object-Oriented Programming)_. For some reason, I couldn't get it. Most of the time, I found myself asking why I would ever write something like that. 
At some point in my life, I realized that _OOP_ was amazing and really helpful. Concepts like _SOLID_ or _Abstraction_ were pillars of any _clean code_. So for that reason, before moving to some code snippets, I consider it appropriate to make a stop on some of those concepts. ### Abstraction It is by far one of the most important things on the priority list when it comes to writing the best code possible, and also most of the time, it's one of the most forgotten. I've seen people who write code that astonishingly solves the problem they are presented with, but when you start to take a deeper look and ask yourself (or them) how this is going to be reusable, you find no answer. Most of the time, that is due to a lack of understanding of the _Abstraction_ principle. Abstraction can be understood as the capacity to ignore the obvious and put the necessary attention on the more general aspects of the things that we are confronting (the big picture). If you are trying to describe a glass of water, you may say that: _"it is a piece of transparent material, containing water,"_ and indeed, that would be correct... on the small picture, but useless on the big picture. Instead, we can say: _"it is a piece of material, with some geometry, restrained to a structure able to contain liquids, with a color and transparency determined."_ The second description is able to generalize a lot better than the first one, giving us the opportunity to use the same description for many things. This is _Abstraction_, the skill to see the "big picture." Work on it, write code thinking about it, and you from the future and the people who will have to work with you will be grateful. Do you see any _Abstraction_ in the previous example? It has, but it might be hard to see in its current state. 
Let me make a little change and see if you get it: ```tsx import { useState, useEffect } from "react"; const useWindowSize = () => { const [size, setWindowSize] = useState(0); useEffect(() => { // get the windows width setWindowSize(window.screen.width); }, []); return { size }; }; export default useWindowSize; ``` Now instead of "Logic," we have _useWindowSize_. Separating the logic from the view gives us the opportunity to see that we have a piece of code that is reusable. Now that we have this custom hook that we can reuse and generalize a lot better, what do you think if we move it to the _utils/_ folder, so we can reuse it later? ### Encapsulation Would you give anyone entire access to your phone? Most likely you said no; otherwise, you are crazy since with that, any person would be able to enter your bank account and personal contacts list. We don't want that to happen, so we'll let **some** people **access** some part of our phone. We won't expose our entire phone to everybody. This, my dear friend, in an intricate and funny way, is _Encapsulation_. You could summarize it as: _"Some can see and touch it; some just can see it."_ Great, this is helpful in many ways for your code. It becomes handy, especially when you are working with too many people or when you have code that is going to be used by someone else, like in the case of my [React Library](https://www.npmjs.com/package/smart-layout). Now we understand what _Encapsulation_ is. Do you see it applied in any way in our _useWindowSize_? Yes! You got it. We are keeping _setWindowSize_ private, just making it accessible to the code inside of _useWindowSize_ and no one else. That's great because in that way, no one will be able to alter the window size besides the logic of our code, making it more secure and predictable. ## Finally - Code Snippets Sorry for the stop on _OOP_ concepts, but it's necessary. It's amazing how those two concepts can change your life. 
With the new knowledge and understanding that we now have, we'll be able to better appreciate the code that we are going to see. ### I haven't seen it anywhere else The code that I'm going to show you is something that I've been using maybe too much lately and, for some reason, no one is talking about it. I'll give you context. Have you ever found yourself using a perfect, reusable component, but each time you had to reimplement some specific logic because you needed to access it from a higher component? No? Maybe just me... Anyway, let's see the solution. ```tsx type fields = "text" | "image" | "number" interface ModalProps { show: boolean; fields: fields[] } const Modal = ({ show, fields }: ModalProps) => ( show ? <div className="modal"> {fields.map((field) => { // Modal code... })} </div> : null ) const ImageGallery = () => { const [show, setShow] = useState(false) const images = [...] return ( <> <Modal show={show} fields={["image", "text"]}/> {images.map((image, i) => ( <img key={i} src={image} onClick={() => setShow(true)}/> ))} </> ) } ``` Perfect, we have an _ImageGallery_ that is using our "beautiful" _Modal_. In this case, imagine that the modal is dynamically creating a form to upload an image with a description. Now let's say that I want to add another page similar to _ImageGallery_, maybe a _Comments_ section. I'll have to re-write the _show_ logic. Let's try to avoid it: ```tsx type fields = "text" | "image" | "number" interface ModalProps { fields: fields[] } const ModalHandler = () => { const [show, setShow] = useState(false) const Modal = ({ fields }: ModalProps) => ( show ? <div className="modal"> {fields.map((field) => { // Modal code... })} </div> : null ) return { Modal, show, setShow } } const ImageGallery = () => { const { show, setShow, Modal } = ModalHandler() const images = [...] 
return ( <> <Modal fields={["image", "text"]}/> {images.map((image) => { <img src={image} onClick={() => setShow(true)}/> })} </> ) } const Comments = () => { const { show, setShow, Modal } = ModalHandler() const comments = [...] return ( <> <Modal fields={["text"]}/> {comments.map((comment) => { <p onClick={() => setShow(true)}> {comment} </p> })} </> ) } ``` We did it! In this case, it is an overkill scenario because it is just a show and _setShow_ that may be arguably re-implemented in every case, but the important thing is the concept. With this approach, we can create more reusable components, and at least for me, it is more readable than the _composition pattern_. Speaking of which, why don't we take a look: ### Composition Pattern Let's create a grid to preview images: ```tsx const ImageGrid = ({ images }) => { const handleAnimations = useCallback(() => { ... }, []) useEffect(() => { handleAnimations() }, []) const complexGridLogic = () => { ... } return ( <div className="magic-grid"> {complexGridLogic()} {images.map((image, i) => { <div className="animated-container" id={`animate-${i}`}> <img src={image}/> </div> })} </div> ) } ``` Great, we have our ImageGrid (again, it is just a concept). It will mount images and then animate them. Also, it has a _"complexGridLogic"_. Now let's say that I want a _CommentsGrid_ with extremely similar behavior. What a problem. Let's see how the _Composition Pattern_ can save us: ```tsx const AnimatedContainer = ({ children, id }) => { const handleAnimations = useCallback(() => { ... }, []) useEffect(() => { handleAnimations() ... }, []) return <div className="animated-container" id={id}>{children}</div> } const MagicGrid = ({ children }) => { const complexGridLogic = () => { ... 
} return ( <div className="magic-grid"> {complexGridLogic()} {children} </div> ) } const ImageGallery = ({ images }) => { return ( <MagicGrid> {images.map((image) => ( <AnimatedContainer id={image}> <img src={image}/> </AnimatedContainer> ))} </MagicGrid> ) } const CommentSection = ({ comments }) => { return ( <MagicGrid> {comments.map((comment) => ( <AnimatedContainer id={comment}> <p>{comment}</p> </AnimatedContainer> ))} </MagicGrid> ) } ``` I see it and I love it. What a wonderful world it would be if everyone cared as much as you and I do about our code. Now we have 2 reusable components and 2 implementations that are perfectly reusable. This also gave us something called: ## Separation of concerns At the beginning of this post, I showed you the structure that I like to use for my projects. I think it is understandable and cleaner than many things out there, and a big part of that is thanks to the _Separation of Concerns_, a principle that I always try to follow. You can summarize it as: _don't take responsibility for someone else's actions_. It's that easy. The _button_ shouldn't have the logic of the _input_, obviously, and the _Grid_ shouldn't be in charge of rendering your images, but instead rendering any children. It's that easy, really, and for some reason, sometimes I find myself arguing about it with some random co-worker (this happened to me last week). ## Wrapping up I'm finishing this post, and I feel like there's something off with it, but I'm not sure what it is. I have the feeling that I've tried to cover so many aspects that I couldn't go much deeper on any of them. So if you are interested in me talking more in-depth about some of these aspects, let me know. ## Before you go I'm thinking of posting on [X](https://x.com/Juan31323). Would you follow me? Thank you in advance, and if you really enjoyed the post, would you help me pay my rent? 
[---------------------------------------------------------------------------] 0% of $400 <a href="https://www.buymeacoffee.com/juanemilio" rel='noopener' target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 250px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;" ></a>
juanemilio31323
1,910,258
Streamlining Linux User Management with a Bash Script
Managing user accounts and groups on a Linux system can often become cumbersome, especially in...
0
2024-07-03T13:46:30
https://dev.to/six-shot/streamlining-linux-user-management-with-a-bash-script-54pk
devops, linux
Managing user accounts and groups on a Linux system can often become cumbersome, especially in dynamic environments where frequent updates are necessary. Automating these tasks not only simplifies the process but also ensures consistency and efficiency. In this article, we'll explore how to implement a Bash script that automates user creation, group management, and password generation, while also logging all activities for auditing purposes.

## Script Overview

The Bash script `create_users.sh` is designed to read user and group information from a text file, create new users with specified groups, set up their home directories securely, generate random passwords, and maintain a detailed log of actions performed. This approach ensures systematic user management and enhances system security.

## Script Breakdown

Below is the complete `create_users.sh` script, followed by a detailed explanation of its key components:

```bash
#!/bin/bash

# Ensure the script is run with root privileges
if [ "$EUID" -ne 0 ]; then
  echo "Please run as root"
  exit 1
fi

# Log file path
LOG_FILE="/var/log/user_management.log"

# Password storage file path
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Create secure directory for passwords if it doesn't exist
mkdir -p /var/secure
chmod 700 /var/secure

# Function to create groups
create_groups() {
  local groups="$1"
  IFS=',' read -r -a group_array <<< "$groups"
  for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs)  # Remove leading/trailing whitespace
    if [ ! -z "$group" ]; then
      if ! getent group "$group" > /dev/null; then
        groupadd "$group"
        echo "Group '$group' created." | tee -a "$LOG_FILE"
      fi
    fi
  done
}

# Function to create user and group
create_user() {
  local username="$1"
  local groups="$2"

  # Create user group if it doesn't exist
  if ! getent group "$username" > /dev/null; then
    groupadd "$username"
    echo "Group '$username' created." | tee -a "$LOG_FILE"
  fi

  # Create the additional groups
  create_groups "$groups"

  # Create user with personal group and home directory if user doesn't exist
  if ! id "$username" > /dev/null 2>&1; then
    useradd -m -g "$username" -G "$groups" "$username"
    echo "User '$username' created with groups '$groups'." | tee -a "$LOG_FILE"

    # Set home directory permissions
    chmod 700 "/home/$username"
    chown "$username:$username" "/home/$username"

    # Generate random password
    password=$(openssl rand -base64 12)
    echo "$username:$password" | chpasswd
    echo "$username,$password" >> "$PASSWORD_FILE"
  else
    echo "User '$username' already exists." | tee -a "$LOG_FILE"
  fi
}

# Read the input file
input_file="$1"
if [ -z "$input_file" ]; then
  echo "Usage: $0 <name-of-text-file>"
  exit 1
fi

# Ensure the input file exists
if [ ! -f "$input_file" ]; then
  echo "File '$input_file' not found!"
  exit 1
fi

# Process each line of the input file
while IFS=';' read -r user groups; do
  user=$(echo "$user" | xargs)      # Remove leading/trailing whitespace
  groups=$(echo "$groups" | xargs)  # Remove leading/trailing whitespace
  if [ ! -z "$user" ]; then
    create_user "$user" "$groups"
  fi
done < "$input_file"

# Set permissions for password file
chmod 600 "$PASSWORD_FILE"

echo "User creation process completed." | tee -a "$LOG_FILE"
```

## Detailed Explanation

**Ensuring Root Privileges**

The script starts by checking if it is being run with root privileges, as creating users and modifying system files requires administrative rights.

```bash
if [ "$EUID" -ne 0 ]; then
  echo "Please run as root"
  exit 1
fi
```

**Setting Up Log and Password Files**

The script defines paths for the log file and the password storage file.
It then creates a secure directory for storing passwords and ensures it has the correct permissions.

```bash
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

mkdir -p /var/secure
chmod 700 /var/secure
```

**Function to Create Groups**

The `create_groups` function takes a comma-separated list of groups and creates each group if it does not already exist. It also logs the creation of each group.

```bash
create_groups() {
  local groups="$1"
  IFS=',' read -r -a group_array <<< "$groups"
  for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs)
    if [ ! -z "$group" ]; then
      if ! getent group "$group" > /dev/null; then
        groupadd "$group"
        echo "Group '$group' created." | tee -a "$LOG_FILE"
      fi
    fi
  done
}
```

**Function to Create Users and Groups**

The `create_user` function handles the creation of the user and their primary group, as well as any additional groups. It sets up the user's home directory, assigns appropriate permissions, and generates a random password for the user.

```bash
create_user() {
  local username="$1"
  local groups="$2"

  if ! getent group "$username" > /dev/null; then
    groupadd "$username"
    echo "Group '$username' created." | tee -a "$LOG_FILE"
  fi

  create_groups "$groups"

  if ! id "$username" > /dev/null 2>&1; then
    useradd -m -g "$username" -G "$groups" "$username"
    echo "User '$username' created with groups '$groups'." | tee -a "$LOG_FILE"

    chmod 700 "/home/$username"
    chown "$username:$username" "/home/$username"

    password=$(openssl rand -base64 12)
    echo "$username:$password" | chpasswd
    echo "$username,$password" >> "$PASSWORD_FILE"
  else
    echo "User '$username' already exists." | tee -a "$LOG_FILE"
  fi
}
```

**Processing the Input File**

The script reads the input file provided as a command-line argument. Each line of the file is expected to contain a username and a list of groups separated by a semicolon. The script processes each line, removing any leading or trailing whitespace, and calls the `create_user` function.
```bash
input_file="$1"
if [ -z "$input_file" ]; then
  echo "Usage: $0 <name-of-text-file>"
  exit 1
fi

if [ ! -f "$input_file" ]; then
  echo "File '$input_file' not found!"
  exit 1
fi

while IFS=';' read -r user groups; do
  user=$(echo "$user" | xargs)
  groups=$(echo "$groups" | xargs)
  if [ ! -z "$user" ]; then
    create_user "$user" "$groups"
  fi
done < "$input_file"
```

**Finalizing Permissions**

Finally, the script ensures that the password file has the correct permissions, making it readable only by the root user.

```bash
chmod 600 "$PASSWORD_FILE"

echo "User creation process completed." | tee -a "$LOG_FILE"
```

## Running the Script

To run the `create_users.sh` script, follow these steps:

1. **Create the Input File:** Prepare a text file with usernames and groups. Each line should contain a username followed by a semicolon and a comma-separated list of groups. For example:

```bash
ajayi;admins,developers
okhuomon;users,admins
```

2. **Make the Script Executable:** Ensure the script has executable permissions.

```bash
chmod +x create_users.sh
```

3. **Run the Script with Root Privileges:** Execute the script, passing the path to the input file as an argument.

```bash
sudo ./create_users.sh /path/to/input_file.txt
```

## Conclusion

This script offers a powerful solution for automating user and group management on Linux systems. It ensures thorough logging for auditing and securely stores generated passwords. By leveraging and customizing this script, you can streamline user management tasks, ensuring efficiency and consistency tailored to your specific requirements.

[Link to my GitHub repository](https://github.com/six-shot/HNG11DEVOPS/tree/main/stage1)

For more details on the HNG Internship and how to apply, visit [HNG Internship](https://hng.tech/internship) and embark on your journey of growth and innovation. And if you wish to expand your horizons with the HNG network, visit [HNG Premium](https://hng.tech/premium) and never walk alone!
I'd love to hear from you about your experiences with automating user and group management on Linux systems. Share your thoughts, challenges, or any tips you have for improving the script. Here are a few questions to get you started:

- What methods do you use to automate user management in your environment?
- Have you encountered any specific challenges with user creation and group management?
- Do you have any suggestions for enhancing the security aspects of user management scripts?

Feel free to leave your comments below. Your insights and experiences are valuable to me and the community!
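Bonus: if you'd like to sanity-check the `user;groups` parsing before running the full script against a real system, a small dry-run sketch (reusing the same `IFS` and `xargs` cleanup steps as `create_users.sh`, but only printing what would be acted on) looks like this:

```shell
#!/bin/bash
# Dry-run parser for the "username;group1,group2" input format.
# It applies the same cleanup as create_users.sh but creates nothing.
parse_line() {
  local line="$1"
  IFS=';' read -r user groups <<< "$line"
  user=$(echo "$user" | xargs)      # trim surrounding whitespace
  groups=$(echo "$groups" | xargs)  # trim surrounding whitespace
  echo "user=${user} groups=${groups}"
}

parse_line "  ajayi ; admins,developers "
parse_line "okhuomon;users,admins"
```

Running it prints the cleaned-up username and group list for each line, so you can spot formatting mistakes in your input file before any accounts are touched.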
six-shot
1,910,257
Revolutionize the Fitness Industry with Pro Athlete Connect: Seeking a Technical Cofounder
Hello DEV Community! I'm Sam, an early stage entrepreneur with a vision to transform the fitness...
0
2024-07-03T13:45:36
https://dev.to/proathleteconnect/revolutionize-the-fitness-industry-with-pro-athlete-connect-seeking-a-technical-cofounder-17hc
ai, cofounder, appdevelopment, healthtech
Hello DEV Community! I'm Sam, an early stage entrepreneur with a vision to transform the fitness industry. I am excited to introduce you to Pro Athlete Connect, an innovative app designed to bridge the gap between professional athletes, fitness enthusiasts, and the general public. Our mission is to provide personalized training, performance analytics, and a supportive community to help users reach their fitness goals. About Pro Athlete Connect Pro Athlete Connect is an all-in-one fitness app that offers: - Personalized Training Plans: Tailored workout programs created by professional coaches to meet the specific needs and goals of each user. - Performance Analytics: Advanced tracking and analytics to monitor progress and optimize training effectiveness. - Access to Professional Coaches: Direct connection to professional coaches from various sports for guidance, motivation, and real-time feedback. - Supportive Community: A platform where users can interact, share experiences, and motivate each other. - Comprehensive Resources: Wellness resources, nutritional guidance, and educational content to support overall health and fitness. Our Unique Value Proposition Unlike existing fitness apps, Pro Athlete Connect fills the market gaps by offering: - Direct access to professional coaches from multiple sports. - A highly personalized training experience using advanced AI and machine learning algorithms. - An inclusive platform that caters to both athletes and the general public. - A community-focused approach to foster motivation and consistency. What We're Looking For To bring Pro Athlete Connect to life, I am seeking a passionate and skilled Technical Cofounder. The ideal candidate will have: - Strong Technical Expertise: Proficiency in app development, AI integration, and UX/UI design. - Experience with Fitness Apps: Previous experience in developing or working on fitness-related apps is highly desirable. 
- Innovative Mindset: A creative problem-solver who can contribute to refining our app’s features and user experience. - Team Player: Excellent communication and collaboration skills to work effectively in a startup environment. - Passion for Fitness: A genuine interest in health and fitness, and a desire to make a positive impact in this space. Why Join Us? - Exciting Opportunity: Be part of an innovative startup with the potential to disrupt the fitness industry. - Collaborative Environment: Work closely with a dedicated founder and a network of fitness professionals. - Growth Potential: Contribute to a project with significant market potential and growth opportunities. - Equity Stake: Receive a substantial equity stake in the company and be a key player in its success. Let’s Connect! If you are excited about the prospect of revolutionizing the fitness industry and meet the criteria above, I would love to hear from you! Let’s discuss how we can work together to make Pro Athlete Connect a reality. Feel free to reach out to me directly here on DEV. Let’s connect and start this exciting journey together! Looking forward to connecting with like-minded individuals who are passionate about fitness and technology. Best regards, Sam|Founder of Pro Athlete Connect
proathleteconnect
1,910,256
Enhance Your Crop Yield With An HTP Sprayer Pump
Farming technology has advanced significantly, and one of the most valuable tools for modern...
0
2024-07-03T13:42:42
https://dev.to/mitra_agro21_d2593e19c339/enhance-your-crop-yield-with-an-htp-sprayer-pump-2c06
Farming technology has advanced significantly, and one of the most valuable tools for modern agriculture is the HTP sprayer pump. This equipment is crucial for plant protection spraying and for ensuring healthy, productive crops. An HTP pump is designed to deliver pesticides, herbicides, and fertilizers efficiently across your fields. This ensures every plant receives the right amount of treatment, promoting healthy growth and maximizing yield. For instance, a farmer in Iowa shared his experience of switching to an HTP sprayer pump and seeing a 25% increase in crop yield. The even distribution of treatments led to healthier plants and a more bountiful harvest.

A case study from a farm in California further highlights the benefits of using an HTP sprayer pump. The farm reported a 20% reduction in chemical usage while maintaining high crop quality. The precision of the HTP pump minimized waste and environmental impact, supporting sustainable farming practices. This is especially important as the agriculture industry moves towards eco-friendly solutions.

HTP sprayer pumps are also known for their versatility and ease of use. They can be adjusted to suit different types of crops and field conditions, making them a valuable investment for any farm. Many modern models are user-friendly and require minimal maintenance, saving farmers time and effort.

News reports have frequently covered the positive impacts of HTP sprayer pumps. For example, during a severe pest outbreak, farmers equipped with HTP sprayer pumps were able to protect their crops effectively. This underscores the importance of reliable equipment in maintaining food security.

In conclusion, an HTP sprayer pump is an essential tool for modern farming. It boosts efficiency, supports sustainable practices, and helps achieve higher crop yields. Investing in this technology can significantly improve your farming operations. https://issuu.com/mitra789666
mitra_agro21_d2593e19c339
1,910,254
"Introducing FoodBuddy: My First Go Project! 🚀🍽️ Looking for Feedback to Improve! #GoLang"
🚀 Beginner here! A few weeks into Go with no prior web development experience, I built FoodBuddy, a...
0
2024-07-03T13:41:29
https://dev.to/lijuthomas_/introducing-foodbuddy-my-first-go-project-looking-for-feedback-to-improve-golang-28g1
go, webdev, beginners
🚀 Beginner here! A few weeks into Go with no prior web development experience, I built FoodBuddy, a restaurant aggregator. 🍽️ Still working on it. Looking for constructive criticism to learn and improve. Roast away! 😅 Check it out: [FoodBuddy-API](https://github.com/liju-github/FoodBuddy-API) #golang
lijuthomas_
1,910,252
React 101
This article assumes familiarity with React coding, particularly in using states and other essential...
0
2024-07-03T13:36:40
https://dev.to/achal_tiwari/react-101-kei
webdev, react, beginners, tutorial
This article assumes familiarity with React coding, particularly in using states and other essential features.

## State

- _State_: _Component-specific memory_ in React used to track and manage dynamic data within a component.
- _Dynamic Interaction_: State allows components to change what’s displayed on the screen based on user interactions or other events.

### Common Use Cases

1. **Form Inputs**: State is used to remember and update the current value of an input field as a user types.
2. **Image Carousels**: State tracks the currently displayed image and updates it when the user clicks “next” or “previous”.
3. **Shopping Carts**: State manages the items in the shopping cart, adding new products when the user clicks “buy”.

React batches state updates. It updates the screen **after all the event handlers have run** and have called their `set` functions. This prevents multiple re-renders during a single event. In the rare case that you need to force React to update the screen earlier, use `flushSync`.

## Queuing a Series of State Updates

Setting a state variable will queue another render. But sometimes you might want to perform multiple operations on the value before queuing the next render.

```js
// This is a common interview question
import { useState } from 'react';

export default function Counter() {
  const [number, setNumber] = useState(0);

  return (
    <>
      {/* M-1 */}
      <h1>{number}</h1>
      <button onClick={() => {
        setNumber(number + 1);
        setNumber(number + 1);
        setNumber(number + 1);
      }}>+3</button>

      {/* M-2 */}
      <h1>{number}</h1>
      <button onClick={() => {
        setNumber(n => n + 1);
        setNumber(n => n + 1);
        setNumber(n => n + 1);
      }}>+3</button>
    </>
  )
}
```

I want you to think about how M-1 and M-2 will perform.

#### M-1 will only increment the number by 1

##### Why?
Each render’s state values are fixed, so the value of `number` inside the first render’s event handler is always 0, no matter how many times you call `setNumber(number + 1)` (each call evaluates to `setNumber(1)`).

- **React waits until all code in the event handlers has run before processing your state updates.** This is why the re-render only happens _after_ all these `setNumber()` calls.

#### But M-2 does increment the number by 3

##### Why?

Here, `n => n + 1` is called an **updater function**. When you pass it to a state setter:

1. React queues this function to be processed after all the other code in the event handler has run.
2. During the next render, React goes through the queue and gives you the final updated state.

This is the first part of my **React series**, covering some basics. Follow me for more upcoming parts! If you have any queries, drop a comment—I'll do my best to provide answers.
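P.S. If you want to convince yourself of the queue behavior outside React, it can be simulated in plain JavaScript. This is a simplified model of the idea (not React's actual implementation): each queued update is either a replacement value or an updater function, and processing folds them over the previous state.

```javascript
// Simplified model of how React processes a queue of state updates.
// A plain value replaces the pending state; an updater function
// receives the latest pending state and returns the next one.
function processQueue(initialState, queue) {
  let state = initialState;
  for (const update of queue) {
    state = typeof update === 'function' ? update(state) : update;
  }
  return state;
}

// M-1: number is 0 for the whole event handler,
// so all three calls queue the literal value 1.
console.log(processQueue(0, [1, 1, 1])); // 1

// M-2: each updater sees the latest pending state.
const inc = (n) => n + 1;
console.log(processQueue(0, [inc, inc, inc])); // 3
```

The model also explains mixed calls: `setNumber(n => n + 1)` followed by `setNumber(42)` ends at 42, because the replacement value discards whatever was pending.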
achal_tiwari