id | title | description | collection_id | published_timestamp | canonical_url | tag_list | body_markdown | user_username |
|---|---|---|---|---|---|---|---|---|
1,882,368 | My idea and submission for problem 1 on Leetcode(very detailed) | 1. the analysis of the problem Now we get some information from the question: A array... | 0 | 2024-06-09T22:06:35 | https://dev.to/hallowaw/my-idea-and-submission-for-problem-1-on-leetcodevery-detailed-4p57 | beginners, cpp, tutorial | ## **1. The analysis of the problem**

**We get the following information from the problem:**
An array named nums, which contains some integers;
An integer named target;
What we need to do: find two numbers in nums that add up to target, then return the indices of those two numbers;
**Example**: Input: nums = [2,7,11,15], target = 9
Output: [0,1]
When the array is nums = [2,7,11,15], we find that only 2 + 7 = 9, so we return the indices of "2" and "7", which are 0 and 1. So the output is [0,1].
**Precondition**: there is exactly one solution in the given array, so a situation like nums = [1,2,3,4] with target = 5 cannot occur, because the answer could be either 1+4 or 2+3. We also cannot use the same element twice,
as in this example:
Input: nums = [3,3], target = 6
Output: [0,1]
We cannot use the first "3" twice, so [0,0] is not a valid answer.
## 2. The method to find the two numbers
I guess most of us will first consider checking all possibilities.
So let us use the example of nums = [2,7,11,15], target = 9. First we compute 2+7, then 2+11, 2+15, 7+11, 7+15, 11+15, and check whether each sum equals the target 9. Once we find a match we can stop the calculation, because there is exactly one solution.
**So how can we do this in code?**
We can start with nums[0].
0 is an index into the array, and indices start from 0 in any array; nums[0] refers to the first element, which in the array nums is 2.
Then we add nums[0] to nums[1], which represents 2+7;
then nums[0] to nums[2], which represents 2+11; then nums[0]+nums[3], nums[1]+nums[2], nums[1]+nums[3], nums[2]+nums[3].
**So we can see the pattern:**
1. The first number runs from nums[0] to nums[length - 2] (we define length as the number of elements in the array).
2. The second number's index always starts 1 past the first number's index and runs to the end of the array.
So the key part of the code is:
```
int n = nums.size(); // define n as the length of the array nums
for (int i = 0; i < n - 1; i++) {
    for (int j = i + 1; j < n; j++) {
        if (nums[i] + nums[j] == target) {
            return {i, j};
        }
    }
}
```
Then we add this to the given template to put together a complete solution.
```
class Solution {
public:
    vector<int> twoSum(vector<int>& nums, int target) {
        // nums is the array in which we need to find the two elements.
        // A vector is similar to an array; we can ignore the difference for now.
        // First, store the length of the array in an integer.
        int n = nums.size();
        for (int i = 0; i < n - 1; i++) {
            for (int j = i + 1; j < n; j++) {
                if (nums[i] + nums[j] == target) {
                    return {i, j};
                }
            }
        }
        return {}; // no solution found
    }
};
```
| hallowaw |
1,882,391 | Building Android Automotive OS: A Beginner-Friendly Guide | Introduction Android Automotive OS is a version of Android tailored for in-vehicle use. It... | 0 | 2024-06-09T22:01:27 | https://dev.to/hpnightowl/building-android-automotive-os-a-beginner-friendly-guide-4f67 | aaos, android, androiddev, aosp |
## Introduction
Android Automotive OS is a version of Android tailored for in-vehicle use. It provides a seamless experience for drivers and passengers by integrating various automotive functions with Android applications. This guide will walk you through the process of building Android Automotive OS from scratch, covering all the necessary tools, setup, and steps required.
## Prerequisites
Before starting, ensure you have the following:
1. **Computer with Linux or macOS**: Building Android Automotive OS is most compatible with Linux-based systems or macOS.
2. **Adequate System Resources**: At least 16GB of RAM and 400GB of free disk space.
3. **Internet Connection**: To download necessary tools and dependencies.
## Tools and Software Required
1. **Java Development Kit (JDK)**: Java 8 or higher.
2. **Repo Tool**: To manage the Android source code.
3. **Git**: Version control system.
4. **AOSP (Android Open Source Project) Source Code**: Base source code for Android.
5. **Android Studio**: Latest stable version.
## Step-by-Step Guide
### 1. Set Up Your Environment
#### Install Java Development Kit (JDK)
First, install the JDK. Open a terminal and run:
```bash
sudo apt update
sudo apt install openjdk-8-jdk
```
#### Install Required Packages
For Ubuntu 18.04 or later, install the necessary packages:
```bash
sudo apt-get update
sudo apt-get install git-core gnupg flex bison build-essential zip curl zlib1g-dev libc6-dev-i386 x11proto-core-dev libx11-dev lib32z1-dev libgl1-mesa-dev libxml2-utils xsltproc unzip fontconfig
```
#### Install Git
Ensure Git is installed by running:
```bash
sudo apt install git
```
#### Install Repo Tool
Download the Repo tool and make it executable:
```bash
mkdir ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
```
Add Repo to your PATH:
```bash
export PATH=~/bin:$PATH
```
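Note that `export PATH=~/bin:$PATH` only affects the current shell session. To make the change persistent, you can append it to your shell profile (assuming bash):

```bash
# Persist the Repo tool's location for future shell sessions (bash assumed).
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
```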
### 2. Download the Android Source Code
Create a directory for your Android build:
```bash
mkdir ~/android-automotive
cd ~/android-automotive
```
Initialize the Repo with the Android source code:
```bash
repo init -u https://android.googlesource.com/platform/manifest -b android-13.0.0_r83
```
**Note**: You can use any branch or tag, such as `android-13.0.0_r83` or `master`, depending on the project you are building.
Synchronize the Repo to download the source code:
```bash
repo sync
```
### 3. Configure the Build
Set up the environment for the build:
```bash
source build/envsetup.sh
```
Choose a target:
```bash
lunch
```
Select an appropriate target, such as `aosp_car_x86_64-userdebug`.
### 4. Build the Android Automotive OS
Start the build process:
```bash
make -j$(nproc)
```
This process can take several hours depending on your system's performance.
### 5. Flash the Build to a Device or Emulator
Once the build is complete, you can flash it to an Android Automotive compatible device or run it on an emulator.
The command below will launch the emulator directly:
```bash
emulator
```
#### Flash to Device
Connect your device and run:
```bash
adb reboot bootloader
fastboot flashall -w
```
#### Run on Emulator
To create an AVD (Android Virtual Device) for Automotive:
1. Open Android Studio.
2. Go to AVD Manager.
3. Create a new AVD with an automotive system image.
4. Start the emulator.
## Conclusion
Building Android Automotive OS from scratch involves several steps, from setting up your environment to flashing the OS onto a device or emulator. By following this guide, you can get started with developing for the automotive platform and exploring its features.
For more detailed information, refer to the official [Android Automotive OS documentation](https://source.android.com/docs/automotive).
## Resources
- [Android Open Source Project](https://source.android.com/)
- [Android Developer Documentation](https://developer.android.com/docs)
- [Android Automotive OS](https://source.android.com/docs/automotive)
Feel free to leave comments or questions below, and I'll be happy to help you through the process! | hpnightowl |
1,854,417 | Dev: Data Engineer | A Data Engineer Developer is a professional who specializes in designing, building, and maintaining... | 27,373 | 2024-06-09T22:00:00 | https://dev.to/r4nd3l/dev-data-engineer-on1 | datascience, developer | A **Data Engineer Developer** is a professional who specializes in designing, building, and maintaining data pipelines and infrastructure to support the storage, processing, and analysis of large volumes of data. Here's a detailed description of the role:
1. **Data Infrastructure Design:**
- Data Engineer Developers design scalable and efficient data infrastructure architectures that meet the needs of data storage, processing, and analysis.
- They work with cloud platforms (such as AWS, Google Cloud Platform, or Azure) or on-premises data centers to set up distributed storage systems, data warehouses, and data lakes.
2. **Data Pipeline Development:**
- Data Engineer Developers build and maintain data pipelines to ingest, process, transform, and load (ETL) data from various sources into storage systems or analytics platforms.
- They use tools and frameworks such as Apache Kafka, Apache Spark, Apache Flink, Apache Airflow, or cloud-native services like AWS Glue or Google Dataflow for building scalable and reliable data pipelines.
3. **Data Modeling and Schema Design:**
- Data Engineer Developers design and implement data models and schemas that optimize data storage, retrieval, and query performance.
- They choose appropriate data formats (e.g., JSON, Parquet, Avro) and database technologies (e.g., relational databases, NoSQL databases) based on data requirements and access patterns.
4. **Data Integration and Data Quality:**
- Data Engineer Developers integrate data from multiple sources, including databases, APIs, streaming platforms, and external data providers, ensuring data consistency and integrity.
- They implement data quality checks, data validation rules, and data cleansing processes to identify and resolve data anomalies, errors, or missing values.
5. **Big Data Technologies:**
- Data Engineer Developers leverage big data technologies and frameworks to handle large-scale data processing and analytics tasks.
- They work with distributed computing platforms like Apache Hadoop, Apache Spark, Apache Hive, or cloud-based services such as Amazon EMR or Google Dataproc for processing and analyzing massive datasets.
6. **Real-time Data Processing:**
- Data Engineer Developers build real-time data processing systems to handle streaming data and event-driven architectures.
- They use technologies like Apache Kafka, Apache Flink, or Apache Pulsar for real-time event ingestion, processing, and analytics, enabling near-real-time insights and decision-making.
7. **Data Security and Compliance:**
- Data Engineer Developers implement data security measures and access controls to protect sensitive data and ensure compliance with data privacy regulations (e.g., GDPR, CCPA).
- They encrypt data at rest and in transit, manage user permissions and roles, and monitor data access and usage to prevent unauthorized access or data breaches.
8. **Monitoring and Performance Optimization:**
- Data Engineer Developers monitor data pipelines, storage systems, and processing jobs to detect performance bottlenecks, errors, or failures.
- They optimize data processing workflows, fine-tune database configurations, and scale infrastructure resources to improve system reliability, efficiency, and cost-effectiveness.
9. **Collaboration and Communication:**
- Data Engineer Developers collaborate with data scientists, data analysts, software engineers, and business stakeholders to understand data requirements, define data pipelines, and deliver data-driven solutions.
- They communicate technical concepts and design decisions effectively to non-technical audiences, aligning data engineering efforts with business objectives and priorities.
10. **Continuous Learning and Skill Development:**
- Data Engineer Developers stay updated on emerging technologies, tools, and best practices in data engineering, distributed systems, and cloud computing.
- They participate in training programs, online courses, and industry conferences to enhance their skills in data management, data processing, and data architecture.
In summary, a Data Engineer Developer plays a critical role in building and maintaining the data infrastructure and pipelines that enable organizations to unlock the value of their data assets, drive data-driven decision-making, and achieve business goals through insights and analytics. By combining expertise in data engineering, big data technologies, cloud computing, and data modeling, they empower businesses to extract actionable insights from complex and diverse datasets at scale. | r4nd3l |
1,882,388 | I'm not a designer, but ... | I am a developer, not a designer, but I have strong opinions on UI. I helped thousands of new... | 0 | 2024-06-09T21:46:55 | https://dev.to/lisacee/im-not-a-designer-but--lke | ui, modal | I am a developer, not a designer, but I have strong opinions on UI. I helped thousands of new computer users while working at a public library. I got a very informal introduction to the world of UI and UX, accessibility and expected behaviors. It was these interactions that led me to becoming a software engineer. I work with designers who have their own experiences and educations on UI. Sometimes I don’t love the designs, and while I often make their designs into a reality, I thought it would be fun to explore how I would have designed things instead.
Note: All of the UI images here were created with [Penpot](https://penpot.app/). These examples are simplified, and often quite clunky and far from pixel perfect, but hopefully they get the gist across.
<hr/>
## The modal
A designer I work with designs a lot of modals. I know people have a lot of opinions about modals, but that is a discussion for another post.
I recently worked on a new timeline feature for a SaaS application. A user may have multiple sites that they manage, and the timeline shows the user events in chronological order. Each timeline event has a "view more" button that opens a modal.
This is the original design. It features a heading with the event and a list of each site that attempted the event. There are both indeterminate and static progress bars, status badges (they're not buttons), a list of errors, and a vertical scroll when there are more than a few sites.

I immediately had thoughts on this modal. This is a massive modal with a lot of elements to interpret. The combination of movement, colors and the vertical scroll were visually overwhelming to me. A modal seemed like too small of a surface to display a potentially lengthy list of sites.
## My modal idea
Rather than providing the user with ALL of the details, I would opt to present them an overview and then link them to a page for more detail.

I would expect the "view site details" button to take me to a full page. In fact, the modal could be eliminated all together and the original user interaction could take them to this page directly.

There are a few things that I changed between the original modal list and this new page.
- I am only displaying a progress bar when the event is occurring. It is redundant to have both a complete progress bar and a status badge.
- Rather than display all of the errors, I moved them behind a toggle button. When the button is clicked, the error details expand.
- I added a line between each site to easily distinguish between sites.
Some other ideas I have for this page include adding a button to view the site (in case an update broke something) or to go to the main admin screen. I couldn't decide on a good place for those buttons, so I left them out of the example.
## Thoughts?
So, what do y'all think? Fellow devs, do you ever feel like a professional design isn't quite hitting the spot? Designers, do you wish the devs would stay in their lane?
Until recently, I created UI components and screens that matched the design without question. Lately, though, the handoff between design and dev has been much more of a discussion. This has worked well for both design and dev. We can see different perspectives from our own and discover challenges we unknowingly create for each other. We're all working towards the same goal so it makes sense to collaborate a little more along the way.
| lisacee |
1,851,500 | 🌐Extensões para produtividade de um dev no Navegador (Arc Browser) | Introdução Navegador 🌐 (Arc Browser) Extenções uBlock Origin WhatRuns AI... | 0 | 2024-06-09T21:40:07 | https://dev.to/neiesc/extensoes-para-produtividade-de-um-dev-no-navegador-arc-browser-3of | ## Introduction
Browser 🌐 (Arc Browser)


## Extensions
1. [uBlock Origin](https://chromewebstore.google.com/detail/ublock-origin/cjpalhdlnbpafiamejdnhcphjbkeiagm)

1. [WhatRuns](https://chromewebstore.google.com/detail/whatruns/cmkdbmfndkfgebldhnkbfhlneefdaaip)

1. [AI Grammar Checker & Paraphraser – LanguageTool](https://chromewebstore.google.com/detail/ai-grammar-checker-paraph/oldceeleldhonbafppcapldpdifcinji)

1. [Vytal](https://chromewebstore.google.com/detail/vytal-spoof-timezone-geol/ncbknoohfjmcfneopnfkapmkblaenokb)

1. [Fake Filler](https://chromewebstore.google.com/detail/fake-filler/bnjjngeaknajbdcgpfkgnonkmififhfo)

1. [Selenium IDE](https://chromewebstore.google.com/detail/selenium-ide/mooikfkahbdckldjjndioackbalphokd)

1. [VisBug](https://chromewebstore.google.com/detail/visbug/cdockenadnadldjbbgcallicgledbeoc)

1. [Save to Pocket](https://chromewebstore.google.com/detail/save-to-pocket/niloccemoadcdkdjlinkgdfekeahmflj)
 | neiesc | |
1,882,384 | Babylon.js Browser MMORPG - DevLog- Update #7 - Player combat abilities | Hello, Last few day i spent on refactoring server code and redesigning architecture on client side... | 0 | 2024-06-09T21:34:24 | https://dev.to/maiu/babylonjs-browser-mmorpg-devlog-update-7-player-combat-abilities-2g7i | babylonjs, indie, mmorpg, indiegamedev | Hello,
I spent the last few days refactoring the server code and redesigning the client-side architecture to be ECS-like, though to be honest most of it is an event-based approach.
Anyway, it feels much better than before, when I had reached a critical point of code complexity and further development wasn't possible. Right now adding new things is much easier. There's still a lot to improve and optimize, but I think it's possible to work with the current architecture without much hassle, and that was the main goal.
In the video I'm presenting a few new things:
- server info on the intro screen (refreshed every 5 s)
- added an HP component to the entities
- added the first melee attack
- added a system handling animations (in this case, attack)
The melee attack deals between 1 and 3 damage and is only possible when closer than 5 units. For now, when an entity's HP falls below 1 it resets to full; death isn't planned yet :D As you can see, the HP information is nicely refreshed in the target and player panels.
Next on the board is displaying validation information about skill range, wrong player facing, etc. After that, I think I'll work on tick synchronization between the client and server. It might be tricky because networking is handled by a worker thread, but we'll see.
Once the tick clock is synchronized, I'll be able to work on entity interpolation and client-side prediction, which will add smoothness to player and entity movement.
Hope you like it!
{% youtube Kz29E6ksTbA %} | maiu |
1,882,367 | Transactional Outbox: from idea to open-source | Hey there! Misha Merkushin here, Team Lead of the Ruby Platform team at Kuper Tech. We’re the crew... | 0 | 2024-06-09T21:27:45 | https://dev.to/bibendi/transactional-outbox-from-idea-to-open-source-34ia | ruby, kafka, sidekiq, opensource | Hey there! Misha Merkushin here, Team Lead of the Ruby Platform team at Kuper Tech. We’re the crew behind the internal libraries and microservice architecture improvements for everything Ruby. This article dives into the Transactional Outbox pattern and a tool we built and iteratively developed in-house that we've just recently released to the world. It tackles the challenge of ensuring reliable and consistent message delivery from your application, guaranteeing messages are sent only after a database transaction is successfully completed.

## The Quest for Reliable Delivery
Our architecture followed a classic pattern: a monolithic storefront for customers and a separate microservice backend to handle order fulfillment. Customers would browse the storefront, add items to their cart, and place orders. These orders were then sent to the backend via a REST API using asynchronous Sidekiq jobs.
While we steered clear of synchronous distributed transactions, we encountered an issue with order loss, inevitably leading to a poor customer experience. Sure, Sidekiq offers retry functionality to handle network hiccups, but that didn't address the root cause.
The problem lies in how the background job is initiated. If a job is queued before the database transaction commits, we risk sending incomplete data to the backend. There's no guarantee the transaction will succeed. Conversely, if we queue the job immediately after the transaction commits, the application process might unexpectedly terminate before the job is queued—exceeding memory limits and getting killed, for example. The result? A lost order and a frustrated customer.
## Enter the Outbox Pattern
The Transactional Outbox pattern is an architectural pattern used in distributed systems to ensure reliable message delivery. It works by persisting messages to a data store (typically an "outbox" table in the database) before they are eventually delivered to a message broker. This approach guarantees data integrity—either everything is committed, or the entire operation is rolled back if an error occurs.
Here's how it works:
1. **Order Creation and Persistence:** When the application receives an order creation request, it opens a database transaction. The order details are saved, and within the same transaction, a message for the other service is generated and persisted to a dedicated "outbox" table in the database. The transaction is then committed.
2. **Polling for New Messages:** A separate process periodically polls the outbox table for new entries.
3. **Message Processing and Delivery:** This process picks up messages from the outbox table and handles them. In our case, it means sending the messages to the message broker.
The outbox table usually includes the following fields:
- **primary key**
- **payload** (the message body)
- **statuses** to track the message's current state.
If there are network issues or the broker is unavailable, the process will retry, ensuring the message isn't lost.
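A minimal sketch of such a table in SQL might look like this; the table and column names are illustrative, and real implementations often add retry counters and delivery timestamps:

```sql
-- Illustrative outbox table; names and extra columns are assumptions.
CREATE TABLE order_outbox_items (
    id         BIGSERIAL PRIMARY KEY,
    payload    JSONB       NOT NULL,            -- serialized message body
    status     INTEGER     NOT NULL DEFAULT 0,  -- e.g. 0 = pending, 1 = published, 2 = failed
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- The polling process scans pending rows in insertion order.
CREATE INDEX idx_order_outbox_items_pending
    ON order_outbox_items (id) WHERE status = 0;
```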

This pattern guarantees **at-least-once** message delivery, meaning messages will be delivered at least once but might be delivered more than once, potentially leading to duplicates. This approach ensures reliability in unstable network environments where messages might get lost.
Here's a simplified code example:
```ruby
Order.transaction do
  order.complete!
  OrderOutboxItem.create!(order: order)
end
```
## When to Use the Outbox Pattern
Imagine a large distributed system with hundreds of services, all communicating with each other through messages. These messages trigger events and updates, keeping each service aligned and enabling them to execute their own business logic. The reliability requirements for message delivery between these services depend heavily on the specific business logic they govern. Let's illustrate this with a couple of examples.
**Example 1: Real-time Location Tracking (Low Reliability Requirements)**
A courier is delivering an order, and the system tracks their location to provide accurate delivery estimates to the customer. This generates a high volume of data as the courier's position constantly changes. However, losing a few location updates has minimal impact on the overall process.
**Example 2: Food Delivery Order Placement (High Reliability Requirements)**
A customer places an order for pizza delivery. The restaurant needs to receive the order promptly to start preparing the food. Here, timely and reliable messaging is critical:
- **Delayed order reception:** Leads to a dissatisfied customer who receives their pizza late.
- **Lost order message:** Results in the customer never receiving their order, costing the company both revenue and customer trust.
In this article, we'll focus on scenarios like the second example, where reliable data transmission is paramount, and message loss would severely disrupt business operations.
## Our Outbox Journey
At the company, we have a range of services written in Ruby. During development, we needed a simple yet scalable solution for reliable data transfer between these services. We went through several development stages, tackling new challenges and overcoming obstacles along the way. Starting with basic approaches, we gradually refined and enhanced our solution. Examining these stages will provide a deeper understanding of the Outbox pattern and help you avoid some pitfalls we encountered.
### Streaming Store Configurations
Stores are the heart of any marketplace. As a delivery service, we partner with a diverse range of stores, each with its own set of parameters. For instance:
- A store can be open or closed.
- A store might have specific operating hours.
Technically, this means changes to a store's settings in one microservice must be communicated to dozens of others. With thousands of stores in our ecosystem, settings are constantly being updated and fine-tuned. The traffic volume is low but consistent. Crucially, these setting changes must reach their destinations without any loss. A delay of up to 10 minutes is acceptable. This made the Outbox pattern a perfect fit for this task, ensuring reliable and secure data transmission.
#### Our First Foray
To test our concept, we opted for the path of least resistance: a Rake task triggered by a Cron job. The idea was simple: every minute, the task would gather data about recently placed orders and any orders that hadn't been sent previously (due to network errors, for example) and deliver them to the message broker queue.
```ruby
task :publish_store do
  StoreOutboxItem.pending.find_each(&:publish!)
end
```
While functional for a proof of concept, this approach had drawbacks:
- **Slow application boot times:** Processing large volumes of data during application startup led to increased boot times.
- **Lack of scalability:** The single-threaded nature of this solution limited its ability to handle growing data volumes efficiently.
#### A Step Towards Efficiency
Launching a new Ruby process for each task proved inefficient. Enter [Schked](https://github.com/bibendi/schked), a job scheduler that allowed us to schedule a recurring task to send messages to the broker every minute. Sidekiq would then handle these tasks.
```ruby
every "1m" do
StoreOutboxItemsPublishJob.enqueue
end
class StoreOutboxItemsPublishJob < ApplicationJob
def perform
StoreOutboxItem.pending.find_each(&:publish!)
end
end
```
This approach eliminated the overhead of environment initialization, making task execution nearly instantaneous. However, scalability remained a challenge. There was a risk of concurrent task execution, potentially disrupting order processing.
To mitigate this, we ran Sidekiq in single-threaded mode, ensuring tasks were executed sequentially. This solved the concurrency issue but limited our ability to leverage Sidekiq's full potential for parallel processing.
Since scalability remained unaddressed, it was time to turn to Kafka. A Kafka topic is analogous to a message queue. Topics are divided into partitions, enabling parallel processing and enhanced throughput. When sending a message, we can choose which partition it goes to, allowing us to group events related to a specific order within the same partition. By having a single consumer process messages from a single partition, we achieve parallel processing while maintaining order.
The idea was to parallelize event dispatching across different partitions using multiple threads. This required pre-calculating the partition when saving an outbox item.

Therefore, Schked schedules a Sidekiq job to send messages to a specific partition:
```ruby
every "1m" do
PARTITIONS_COUNT.times do |partition|
StoreOutboxItemsPublishJob.enqueue(partition)
end
end
```
This achieved parallel dispatching across different partitions.
#### Job Overlapping
As discussed, our current task handling mechanism suffers from potential job accumulation and overlap. This can overwhelm the system, preventing it from processing all events within the allotted time. To ensure smoother and more reliable task processing, we can leverage the popular `sidekiq-unique-jobs` gem. This gem prevents a new job from starting if a previous job with the same unique identifier is still running.
```ruby
class StoreOutboxItemsPublishJob < ApplicationJob
  sidekiq_options lock: :until_executed

  def perform(partition)
    StoreOutboxItem.pending.where(partition: partition).find_each(&:publish!)
  end
end
```
This approach offers several advantages:
- **Reliance on standard tools:** Simplifies support and deployment.
- **Good observability:** Built-in metrics provide valuable insights.
- **Scalability:** Increasing parallelism is as easy as adding more partitions to the Kafka topic and increasing the Sidekiq queue concurrency.
However, despite these benefits, the system remains complex, with multiple potential points of failure.
Everything ran relatively smoothly until one day... it broke. An experienced Ruby developer would quickly identify the weakest link 🙂. `sidekiq-unique-jobs` failed us. We faced a new challenge: reducing points of failure to pave the way for tackling larger-scale challenges.
### Migrating Order Processing from Async HTTP to Outbox
Our system comprised two main components: a storefront and a back-office system, actively exchanging order information at a rate of around 100 messages per second.
Historically, we used a combination of Sidekiq and HTTP for inter-system communication. This approach worked adequately with moderate data volumes. However, as the load increased, we encountered issues with message ordering, system scalability, and message loss. Moreover, the system lacked extensibility—adding new consumers required modifications to the monolithic storefront.
Recognizing the criticality of timely order information exchange, the Ruby Platform team decided to migrate the order synchronization mechanism to Kafka. It became evident that our existing Schked/Sidekiq/Uniq-jobs setup couldn't deliver the required reliability and performance.
We realized the limitations of our current solution and decided to implement a dedicated outbox daemon. This approach aimed to:
- **Reduce points of failure to one.**
- **Enable independent, replicable, and interchangeable daemons.**
This ensures that the failure of one daemon wouldn't impact the others.
The new daemon's concept was straightforward and effective:
- **Dedicated Process:** A separate process handles messages for all Kafka partitions in a multi-threaded manner.
- **Process Synchronization:** Process synchronization is achieved using Redis with the Red Lock algorithm, ensuring that only one process can process a specific partition at any given time, preventing data conflicts.
This new architecture uses a single Ruby process for the entire system, simplifying maintenance and management. Scalability is achieved by increasing the number of processes and threads, allowing the system to adapt to growing workloads.
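The per-partition locking can be sketched roughly as follows. `FakeRedis` is a tiny in-memory stand-in for a real Redis client's `SET key value NX EX ttl` call (with `redis-rb` this would be `redis.set(key, value, nx: true, ex: ttl)`); the key and method names are illustrative, and a production daemon would also refresh the lock's TTL while working:

```ruby
# In-memory stand-in for Redis, just enough to show the locking idea.
class FakeRedis
  def initialize
    @store = {}
  end

  # Mimics SET key value NX EX ttl: returns true only if the key was absent.
  def set(key, value, nx: false, ex: nil)
    return false if nx && @store.key?(key)
    @store[key] = value
    true
  end

  def del(key)
    @store.delete(key)
  end
end

# Hypothetical worker: only the process that wins the per-partition lock
# handles that partition, preserving message order within it.
def process_partition(redis, partition)
  lock_key = "outbox:lock:#{partition}"
  return :skipped unless redis.set(lock_key, Process.pid, nx: true, ex: 60)

  begin
    :processed # publish this partition's pending outbox items here
  ensure
    redis.del(lock_key)
  end
end
```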
#### A Safe Migration to Kafka
Our primary goal was to replace the existing Sidekiq+HTTP transport mechanism with Kafka while ensuring a seamless and stable transition. To achieve this, we decided to run both systems in parallel, gradually migrating from the old to the new. This required our outbox pattern implementation to support multiple transport mechanisms simultaneously.
Here's how we approached the migration:
1. **Outbox Integration:** We made minor adjustments to our existing Sidekiq+HTTP implementation to integrate with the Outbox pattern.
2. **Parallel Order Dispatch:** We started duplicating order dispatches through Kafka.
3. **Performance Comparison and Cutover:** By monitoring and comparing performance metrics (e.g., order processing speed), we confirmed the superiority of our new Kafka-based solution. Once confident, we safely decommissioned the old synchronization system, leaving the new Kafka transport as the sole mechanism.
This approach allowed us to build and fine-tune the new synchronization process without jeopardizing the existing system's stability. This evolutionary migration strategy, characterized by minimal risks and zero downtime, can be applied beyond transport protocol replacements. It proves valuable for breaking down monolithic applications into smaller, manageable services. This experience reinforces the effectiveness of gradual and controlled transitions when implementing changes within critical systems.
## Observability & Tracing
Our previous Sidekiq-based outbox implementation benefited from built-in metrics provided by tools like `yabeda-schked` and `yabeda-sidekiq`. These tools offered valuable insights into the system's health and performance with minimal effort. However, developing a custom outbox daemon from scratch meant we had to implement all observability features, including metrics and distributed tracing, ourselves.
After deploying the new daemon, it became clear that its functionality extended beyond simple message delivery. To maintain the same level of observability we had with Sidekiq, we needed to integrate a robust metrics collection and tracing system. This would allow us to monitor performance, identify bottlenecks, and respond quickly to any arising issues.
During the development and launch of the new daemon, we focused on the following key aspects:
- **Performance Metrics:** We implemented metrics tracking the number of processed messages, processing time, errors, and other crucial indicators to assess the system's efficiency.
- **Distributed Tracing:** To understand the flow of execution and interactions between different system components, we integrated distributed tracing. This allowed us to follow each request across our microservices, simplifying debugging and performance optimization.
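The kind of per-message instrumentation described above can be sketched as a small decorator. Everything below (the `METRICS` store, the handler, the metric names) is invented for illustration; the real daemon exports such counters through a metrics library rather than an in-process dict:

```python
import time
from collections import defaultdict

# Illustrative in-process metric store; a real daemon would export these
# to Prometheus or a similar backend.
METRICS = {
    "processed_total": defaultdict(int),
    "errors_total": defaultdict(int),
    "processing_seconds": defaultdict(list),
}

def instrumented(handler_name):
    """Record message count, error count, and processing time per handler."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            started = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            except Exception:
                METRICS["errors_total"][handler_name] += 1
                raise
            finally:
                METRICS["processed_total"][handler_name] += 1
                METRICS["processing_seconds"][handler_name].append(
                    time.perf_counter() - started
                )
        return wrapper
    return decorator

@instrumented("order_sync")
def handle(message):
    if message.get("broken"):
        raise ValueError("bad payload")
    return f"synced order {message['id']}"

print(handle({"id": 42}))  # synced order 42
try:
    handle({"id": 43, "broken": True})
except ValueError:
    pass
print(METRICS["processed_total"]["order_sync"])  # 2
print(METRICS["errors_total"]["order_sync"])     # 1
```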

While our new daemon delivered enhanced stability and scalability, it also demanded extra effort to ensure the same level of observability—a crucial aspect of maintaining a high-quality service.
#### Inbox Pattern
Let's shift our perspective to the message consumer, who faces similar challenges. For instance, Kafka doesn't inherently guarantee exactly-once message delivery semantics. To address this, we extend the concept of "Transactional Outbox" to "Transactional Inbox," mirroring the mechanism on the consumer side.
Here's how the Inbox Pattern works:
1. **Message Persistence:** Upon arrival, messages are saved to a dedicated "inbox" table in the database. When using Kafka, this is handled by an "inbox-kafka-consumer," which commits the processed message offsets within a transaction.
2. **Inbox Processing:** A separate inbox process fetches events from the database and triggers the corresponding business logic for each. This process resembles the outbox process, but instead of sending messages to a broker, it executes the business logic associated with the received message.
As in the Outbox pattern, events are polled from the database; the differences are that the consumer creates the records, and that processing an event means executing the business logic designed to handle the received message rather than publishing it to the broker.
This pattern effectively ensures exactly-once message delivery semantics. This is achieved by creating inbox records with the same unique identifier (UUID) used for outbox records and employing a unique index on this column in the database.
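The exactly-once guarantee boils down to that unique index on the event UUID. The schema below is a simplified illustration (SQLite for brevity), not the gem's actual table layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE inbox_items (
           uuid TEXT NOT NULL UNIQUE,  -- same UUID the producer wrote to its outbox
           payload TEXT NOT NULL,
           processed_at TIMESTAMP
       )"""
)

def consume(event_uuid, payload):
    """Persist an incoming message; a redelivered duplicate is silently skipped."""
    try:
        with conn:
            conn.execute(
                "INSERT INTO inbox_items (uuid, payload) VALUES (?, ?)",
                (event_uuid, payload),
            )
        return True   # first delivery: stored, will be processed exactly once
    except sqlite3.IntegrityError:
        return False  # duplicate delivery: the unique index rejects it

print(consume("uuid-1", '{"order_id": 1}'))  # True
print(consume("uuid-1", '{"order_id": 1}'))  # False (redelivery ignored)
```

Because the inbox record reuses the producer's outbox UUID, a message redelivered by Kafka hits the unique index instead of triggering the business logic twice.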
It's important to note that this approach is particularly valuable when strong guarantees of correct message processing by the consumer are required.
#### Scaling
After implementing the Inbox pattern, we encountered a new challenge—ensuring its scalability. It's important to highlight that our architecture conceptually separates message consumption from actual message processing.
Consuming messages from Kafka is generally fast, as it mainly involves persisting received data to the database. However, message processing, which includes executing business logic, is significantly more time-consuming.
This disparity in processing speeds allows us to scale consumption and processing independently. For instance, if the consumer's throughput is sufficient for recording all incoming messages but business logic processing lags behind, we can scale processing separately by increasing the number of threads responsible for executing the business logic.
Increasing the number of processing threads beyond the number of partitions is counterproductive: each partition can be processed by only one thread at a time, so the surplus threads sit idle waiting for data access. This limitation arises because our scaling approach relies on parallel processing of messages from different partitions.
Consequently, we encounter a limitation: message processing scalability is bound by the number of partitions in the Kafka topic. This can be problematic if message processing involves resource-intensive business logic and increasing Kafka partitions is challenging for various reasons.
#### Virtual Partitions
Previously, scaling our outbox/inbox daemon involved several cumbersome steps: increasing Kafka partitions, restarting message producers, and then restarting the daemon itself. This process was inefficient and required excessive manual intervention.
To address this, we introduced the concept of "buckets." Instead of using partition numbers, we now use bucket numbers when creating outbox/inbox records. The number of buckets is fixed, and a record's bucket is derived the same way a partition number would be: as the remainder of dividing the event key's hash by the number of buckets. However, multiple buckets can be mapped to a single Kafka partition.
This approach allows us to create a large number of buckets upfront while dynamically adjusting the number of Kafka partitions without restarting the entire system. Scaling the outbox/inbox daemon now simply requires a configuration change, significantly simplifying system operation.
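The bucket arithmetic is just two modulo operations. A sketch, where the function names and the choice of CRC32 as the hash are illustrative:

```python
from zlib import crc32

NUM_BUCKETS = 256  # fixed up front, deliberately generous

def bucket_for(event_key: str) -> int:
    """Bucket number stored on the outbox/inbox record at creation time."""
    return crc32(event_key.encode()) % NUM_BUCKETS

def partition_for(bucket: int, num_partitions: int) -> int:
    """Kafka partition a bucket maps to; num_partitions can change later
    without touching existing records or restarting producers."""
    return bucket % num_partitions

b = bucket_for("order-1042")
print(b, partition_for(b, 8), partition_for(b, 16))
```

Because `partition_for` is the only place the partition count appears, growing the Kafka topic from 8 to 16 partitions is a configuration change; the stored bucket numbers never change.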

Through these efforts, we successfully tackled the challenge of streaming completed orders with high reliability and minimal latency.
Our solution boasts several key improvements:
- **Outbox/Inbox Daemon:** This provides complete control over message processing and enables tailored optimizations.
- **Kafka Partition-Based Outbox Scaling:** We can dynamically adjust system throughput by modifying the number of partitions.
- **Robust Inbox Pattern Implementation:** This guarantees exactly-once message processing, crucial for data-sensitive business processes.
- **Independent Scaling with Virtual Partitions (Buckets):** This allows for system scalability without modifying Kafka configurations or restarting producers.
Ultimately, we achieved a flexible, scalable, and fault-tolerant message processing system capable of handling high loads and ensuring reliable delivery of critical data.
## Widespread Outbox Adoption
The Ruby Platform team's overarching goal was to create and standardize a universal message handling solution for all Ruby services at the company. A well-designed solution would free product teams from tackling technical complexities, allowing them to focus on product development and business logic implementation.
Our toolkit needed to be compatible with diverse Ruby services, adaptable to varying workloads, and equipped with comprehensive monitoring capabilities. From this perspective, our outbox-based solution had reached a certain maturity:
- **Successful Implementation:** Deployed across multiple Ruby services.
- **Proven Stability:** Demonstrated reliable performance in production environments.
- **Observability and Scalability:** Offered robust monitoring and scaling capabilities.
However, widespread adoption presented new challenges. Despite independent scaling, fine-tuning the daemon for specific workloads required calculating the relationship between several parameters:
- Number of Outbox partitions.
- Number of buckets.
- Number of daemon processes.
- Number of threads per process.
This configuration proved too complex for users, forcing them to delve into technical implementation details. Users often misconfigured the system, setting too few partitions or too many daemon processes, leading to performance degradation.
For wider adoption, we needed to simplify configuration and make the scaling process more intuitive and user-friendly.
## New Architecture
Let's recap how our daemon works: each thread sequentially polls the database for new messages within its assigned buckets. Once a message is fetched, the thread processes it, executing the associated business logic. Polling is generally quick, while message processing can take significantly longer.
Initially, our scaling unit for business logic was the thread itself, which handled both polling and processing. This approach led to an undesirable outcome: scaling up increased the intensity of both operations, even when unnecessary. For example, if business logic execution became a bottleneck, increasing the number of threads also amplified database polling frequency, even if the bottleneck wasn't related to message retrieval. This resulted in "overscaling."
Increasing the number of threads proportionally increased database queries. We constantly polled the database for new messages, unable to reduce the polling frequency without impacting processing speed.
Our new architecture addresses these issues by separating the daemon into two thread pools: a **polling pool** and a **processing pool**.
### Divide and Conquer
Our new architecture decouples polling and processing responsibilities by utilizing two distinct thread pools and employing Redis as an intermediary message queue.
**Polling Pool:**
- Threads in this pool are responsible for fetching new messages from the database for each partition.
- Upon discovering new messages, they are enqueued into Redis using the `LPUSH` command.
- To maintain data consistency and processing order, a lock is acquired on the partition during polling.
**Processing Pool:**
- Threads in this pool dequeue messages from Redis using the `BRPOP` command.
- Each thread processes messages from a specific bucket, acquiring a lock during processing to prevent concurrent access and preserve message order within the bucket.
Each daemon now comprises two thread pools: a polling pool and a processing pool. By default, we maintain a 2:1 ratio of processing threads to polling threads, but this ratio can be adjusted as needed.
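The two-pool shape can be sketched with standard threads. Here a `queue.Queue` stands in for the Redis list (`put` for `LPUSH`, `get` for `BRPOP`), and the per-bucket locking is reduced to a single lock, so treat this as a structural sketch rather than the daemon's actual concurrency model:

```python
import queue
import threading

# In-process stand-in for the Redis list (LPUSH by pollers, BRPOP by processors).
message_queue = queue.Queue()
results = []
results_lock = threading.Lock()
STOP = object()  # sentinel to shut processors down

def poller(partition, batch):
    """Polling pool: fetch new outbox rows for a partition, enqueue them."""
    for msg in batch:                       # stands in for a SELECT on the partition
        message_queue.put((partition, msg)) # LPUSH in the real daemon

def processor():
    """Processing pool: dequeue messages and run the business logic."""
    while True:
        item = message_queue.get()          # BRPOP in the real daemon
        if item is STOP:
            break
        partition, msg = item
        with results_lock:                  # real code locks per bucket, not globally
            results.append(f"p{partition}:{msg}")

# Default ratio from the article: 2 processing threads per polling thread.
processors = [threading.Thread(target=processor) for _ in range(2)]
for t in processors:
    t.start()

pollers = [
    threading.Thread(target=poller, args=(p, [f"order-{p}{i}" for i in range(3)]))
    for p in (0, 1)
]
for t in pollers:
    t.start()
for t in pollers:
    t.join()

for _ in processors:
    message_queue.put(STOP)
for t in processors:
    t.join()

print(sorted(results))
```

The queue in the middle is also what makes autoscaling observable: its length is exactly the signal a Kubernetes HPA would scale the processing side on.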
### Advantages of the New Architecture
Our revamped architecture delivers several key benefits:
- **Simplified Scaling:** Since polling is a lightweight operation, the polling pool can have significantly fewer threads than the processing pool. Scaling the system now involves increasing the number of daemon replicas, each with its own polling and processing pools.
- **Enhanced Performance:** Decoupling polling and processing enables more efficient resource utilization. Polling threads are not blocked during message processing, and processing threads don't idle waiting for new data.
- **Flexible Configuration:** The ratio of polling to processing threads can be easily adjusted based on workload characteristics and performance requirements.
- **Automated Scaling with Kubernetes HPA:** Utilizing Redis as a buffer between the polling and processing pools facilitates seamless autoscaling (HPA) in Kubernetes. The Redis queue size accurately reflects the processing pool's load. If the queue grows, indicating processing bottlenecks, HPA can automatically scale up daemon replicas. Conversely, if the queue is empty, HPA can scale down replicas to optimize resource consumption.
This new architecture delivers a more flexible, scalable, and easily configurable message processing system.

## Evolutionary Journey
This story exemplifies the iterative development of a tool, with each stage driven by new challenges and escalating demands for performance and reliability.
The key takeaway? Ruby can handle almost any task. Many open-source projects start as solutions to specific business problems and evolve into versatile, battle-tested tools used across numerous production systems.
This article explored the three stages of our Inbox/Outbox tool's development, crafted by the Ruby Platform team to address message-handling challenges. Each stage focused on enhancing reliability, scalability, and user-friendliness.
Our final solution, refined through real-world deployments in both large monolithic applications and distributed systems spanning dozens of services, has proven its stability and effectiveness. Confident in its capabilities, we've decided to share it with the community as an open-source project.
- [sbmt-outbox](https://github.com/Kuper-Tech/sbmt-outbox): Outbox/Inbox Ruby gem
- [sbmt-kafka_producer](https://github.com/Kuper-Tech/sbmt-kafka_producer): Outbox Kafka transport
- [sbmt-kafka_consumer](https://github.com/Kuper-Tech/sbmt-kafka_consumer): Inbox Kafka transport
- [example applications](https://github.com/Kuper-Tech/outbox-example-apps): Example microservice implementations
 | bibendi |
1,882,382 | What is Selenium? Why do we use Selenium for automation? | Selenium is an automation tool used for web application testing. It is a popular open source testing... | 0 | 2024-06-09T21:25:09 | https://dev.to/jayachandran/what-is-selenium-why-do-we-use-selenium-for-automation-3phh | Selenium is an automation tool used for web application testing. It is a popular open-source testing tool. Selenium enables testers to write automated tests in various programming languages to verify the functionality of web applications, and those tests can run on many different browsers and operating systems. Selenium is a great way to automate your web application testing.
**_Components of Selenium_**
Selenium is a collection that consists of three major components:
_Selenium IDE:_ Selenium IDE is a browser extension for Firefox and Chrome that enables testers to record automated tests and play them back. Selenium IDE is easy to use and lets you create and run automated tests quickly. It also includes a built-in debugger that enables you to troubleshoot your tests. To use Selenium IDE, you first need to install the Selenium IDE add-on for Firefox or Chrome. You can then open Selenium IDE by clicking on the "Selenium" icon in the Firefox or Chrome toolbar.
Once the Selenium IDE is open, then you can start recording your tests by clicking on the "Record" button. Selenium will then begin recording all of your actions as you perform them in the browser. To stop the test recording, click on the "Stop" button. You can then playback your tests by clicking on the "Play" button. Selenium will then replay all of the actions that you recorded.
_Selenium WebDriver:_ Selenium WebDriver is an open-source tool used for automating web browser interaction from a user's perspective. With it, you can write tests that simulate user interactions with a web application. It is available for many different programming languages, such as Java, C#, Python, and Ruby.
WebDriver provides a powerful and flexible test automation framework that enables you to create automated tests for your web applications easily. It also includes a number of convenient features, such as mechanisms for locating elements on a web page and capturing screenshots during your tests.
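A typical WebDriver test follows a navigate, locate, interact, assert flow. The sketch below shows that flow in Python; a stub stands in for a real driver so it runs without a browser, and the URL and element IDs are made up:

```python
# Stub mimicking the parts of the selenium-python driver API used here
# (driver.get, driver.find_element, element.send_keys/click), so the test
# flow runs without a real browser. With Selenium installed, you would
# instead do: from selenium import webdriver; driver = webdriver.Chrome().
class StubElement:
    def __init__(self, element_id):
        self.element_id = element_id
        self.value = ""

    def send_keys(self, value):
        self.value += value

    def click(self):
        # A real element would trigger the page's submit handler here.
        pass

class StubDriver:
    def __init__(self):
        self.current_url = None
        self._elements = {}

    def get(self, url):
        self.current_url = url

    def find_element(self, by, locator):
        return self._elements.setdefault(by + ":" + locator, StubElement(locator))

    def quit(self):
        pass

def login_test(driver):
    """The shape of a WebDriver test: navigate, locate, interact, assert."""
    driver.get("https://example.test/login")          # hypothetical app URL
    driver.find_element("id", "username").send_keys("alice")
    driver.find_element("id", "password").send_keys("s3cret")
    driver.find_element("id", "submit").click()
    assert driver.current_url.endswith("/login")
    driver.quit()
    return "ok"

print(login_test(StubDriver()))  # ok
```

Swap `StubDriver()` for a real `webdriver.Chrome()` instance and the same function becomes a live browser test.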
_Selenium Grid:_ The Selenium Grid distributes your tests across multiple machines or virtual machines (VMs). Selenium Grid enables you to test parallelly on various devices or VMs, allowing you to scale your test automation quickly. Selenium Grid is a crucial part of the overall Selenium testing suite and will enable you to execute your automated tests much faster. The Selenium Server is a part of Selenium Grid that can be installed locally on your device or hosted on a separate system/server.
**_Using Selenium for automation:_**
Testing is a vital part of the development cycle and is essential for ensuring the quality and stability of your applications. By performing application testing, you can find and fix bugs in your code before they have a chance to cause problems for your users. Additionally, application testing can help you verify that your application is working correctly on different browsers and operating systems.
Testing is performed in several ways, including manual, automated, and performance testing. Automated testing is a popular approach for performing application testing, as it enables you to test your applications quickly and efficiently. Selenium Testing is a popular tool for automated testing, as it allows you to write tests in various programming languages and run them on many different browsers and operating systems.
**_Benefits of Selenium Testing:_**
1. Efficient and accurate web application testing.
2. The ability to test your web application on multiple browsers and operating systems.
3. The ability to run more than one test at the same time.
With a Selenium Grid, you can significantly reduce the time it takes to test your web application, and you can ensure that your web application is fully functional before releasing it to users. Therefore, if you want to improve your web application testing, consider using Selenium Grid. It's one of the best ways to automate your web application testing.
Here are the important things to keep in mind while writing Selenium tests:
1. The goal of your Selenium tests is to find the bugs in your web application.
2. Your Selenium tests should be concise.
3. You should only use Selenium WebDriver once you are sure about how to use its tools and scripts.
Once you have written your Selenium tests, you can run them on different browsers and operating systems. To do this, you will need to use a Selenium Grid. A Selenium Grid is a server that enables you to run multiple Selenium tests simultaneously on different browsers and operating systems.
There are different types of Selenium tests that you can write. The most common types of selenium tests are
- Unit Tests
- Functional Tests
- Integration Tests
- Regression Tests
- End-to-End Tests
**_Unit Tests_**
Unit Tests are the simplest type of Selenium tests. A Unit Test verifies that a single unit of code behaves as expected.
When writing a Unit Test, you should first create a test case. A test case is a set of instructions that tests a particular feature or function of your code. To create a test case, you will need to:
- Define the expected outcome of the test.
- Verify that the code under test behaves as expected.
Once your test case is written, you can run it using WebDriver: execute the test script with your language's test runner, and WebDriver will launch the browser, navigate to the page under test, and perform the scripted actions. Your Unit Test will then run automatically, and you will be able to see whether the code under test behaved as expected.
**_Functional Test_**
Functional Tests are similar to Unit Tests, but they test the functionality of an entire web application. When writing a Functional Test, you should always keep in mind the following:
1. The goal of your Functional Test is to find bugs in all the functional components of your web application.
2. Your Functional Test should be concise but have to be written for each functional module of your app.
3. You should only use the WebDriver functional tests when all your components are passed from the development stage.
When writing a Functional Test, you will first need to identify the different areas of your web application that you want to test. Once you have selected these areas, you can create a test case for each one. Once your test cases are written, you can run them using WebDriver: execute the test scripts with your test runner, and WebDriver will drive the browser through each scenario. Your Functional Tests will then run automatically, and you will be able to see whether the code under test behaved as expected.
**_Integration Tests_**
Integration Tests are used to test the integration between different parts of your web application. When writing an Integration Test, you should always keep in mind the following:
- The goal of Integration Testing in Selenium is to find bugs in your web application that arise from integrating multiple components.
- Your Integration Test should be short, and it should verify that all individual components work properly when integrated.
**_Regression Tests_**
Usually, regression suites include a huge number of test cases, and it takes time and effort to execute them manually every time a code change is introduced. Hence, almost every organization looks to automate regression test cases to reduce the time and effort involved. Choosing the right automation framework or tool depends entirely on the application, the technology used, the testing requirements, and the skill sets needed for automation testing. Automating functional and regression test cases reduces the manual testing effort.
Automation completely depends on the framework that you choose to develop, and there is no such tool dedicated to performing only regression testing. The automation framework you select should be designed such that it supports regression testing effectively.
You can develop the regression suite for automation and keep adding new test scripts/test cases as and when required. Selenium Framework contains many reusable modules/functions that make it easy to maintain the existing code or add any new code.
**_End-to-End Tests_**
End-to-End Tests are used to test an entire web application from start to finish. When writing an end-to-end test, you should always keep in mind the following:
- The goal of your End-to-End Test is to find bugs in your web application that arise as a result of the correlation between multiple components.
- Your End-to-End Test should be concise, and it should verify that all individual components work properly when integrated.
Once your test cases are written, you can run them using WebDriver: execute the test scripts with your test runner, and WebDriver will drive the browser through the full user journey. Your End-to-End Test will then run automatically, and you will be able to see whether the code under test behaved as expected.
**Limitations of Selenium WebDriver**
Although Selenium is one of the best tools for automating your tests on multiple devices, it still has limitations. Some of them are mentioned below:
1. WebDriver cannot interact with flash or Java applets.
2. WebDriver is not capable of handling complex animations.
3. WebDriver cannot recognize text inside images.
4. WebDriver has some difficulty dealing with dynamically generated pages.
5. WebDriver can be difficult to use when testing web applications that use Ajax or ReactJS.
**_Advantages of Selenium Automation Testing_**
_Language Agnostic:_
Selenium WebDriver offers native bindings for JavaScript, Python, Java, C#, and Ruby, eliminating the need to learn a new programming language solely for testing. While Selenium has its own syntax, proficiency in one of these languages proves beneficial.
_Cross-Browser Compatibility:_
Selenium communicates with browsers through drivers and is adaptable to different browser versions. With the appropriate driver, Selenium seamlessly supports significant browsers like Chrome, Firefox, Safari, Edge, and Opera.
_Cross-Platform Compatibility:_
Extending its versatility, Selenium is cross-platform compatible, allowing test creation on one platform and execution on another. It effortlessly functions across Windows, Mac OS, and various Linux distributions.
_Community Support:_
As an open-source tool with a substantial history, Selenium boasts a strong community. This support extends beyond regular updates and upgrades to encompass comprehensive documentation and a wealth of learning resources.
_Integrations with Third Parties:_
Selenium excels in integrations, providing the flexibility to extend functionality through third-party plugins. Users can leverage existing plugins or create custom ones to enhance Selenium's capabilities.
_Parallel Test Execution:_
Selenium supports parallel test execution across multiple machines, facilitated by Selenium Grid. This feature enables users to conduct tests simultaneously on various browsers and platforms, centralizing the management of browser configurations.
**Drawbacks of Selenium Automation Testing**
- High Test Maintenance
- No Built-in Capabilities
- No Reliable Tech Support
- Learning Curve
| jayachandran | |
1,866,262 | HDMI 2.0 TV Backlight Kit | Welcome to the enchanting world of deerdance, where smart lighting takes center stage in... | 0 | 2024-05-27T07:52:55 | https://dev.to/deerdance_43156f231259b9d/hdmi-20-tv-backlight-kit-1j25 | Welcome to the enchanting world of deerdance, where smart lighting takes center stage in revolutionizing the way we illuminate our spaces. As a leading brand in intelligent lighting solutions, deerdance seamlessly combines state-of-the-art technology, breathtaking design, and unparalleled functionality to create an extraordinary lighting experience.
At deerdance, we understand that lighting is about more than just brightness—it's about setting the perfect ambiance, enhancing moods, and optimizing overall well-being. Our smart lighting systems are meticulously designed to cater to the diverse needs and preferences of our valued customers, transforming the way we interact with light. [Deerdance HDMI TV Backlight](https://www.deerdance.com/collections/tv-lights/products/deerdance-hdmi-tv-backlight)
| deerdance_43156f231259b9d | |
1,882,372 | Strategies for Debugging Immutable Code | As any seasoned developer can attest, debugging code can often feel like a never-ending battle... | 0 | 2024-06-09T21:10:54 | https://dev.to/cherrypick14/strategies-for-debugging-immutable-code-1a8b | go, immutablecode, debuggingtechniques, codequality | As any seasoned developer can attest, debugging code can often feel like a never-ending battle against an ever-multiplying army of bugs – a relentless game of whack-a-mole where squashing one issue only seems to spawn two more in its place. In the world of Go programming, immutability is a powerful ally that can make your code more reliable and easier to maintain, but even this helpful tool has its own unique challenges when it comes to debugging.
In Go, you'll find two types of data structures: **immutable (value) types** and **mutable (reference) types**. Value types, like **int, float64, bool, string,** and good ol' **structs** without any fancy reference types, are copied on assignment – a string can never be modified after creation, and a copied struct can't be changed through its original, much like a developer's love for a good cup of coffee. On the other hand, **reference types**, such as **slices, maps,** and **channels,** can be changed in place, even though their elements might be immutable or mutable depending on their types, making them as unpredictable as a rubber duck's mood.
Working with immutable data structures can simplify how you think about your program's state and reduce the risk of unintended changes causing bugs. However, even with these benefits, debugging immutable code can present its own unique challenges, like trying to navigate a maze while wearing a blindfold – frustrating, but not impossible with the right strategies.
Immutable data structures, by their very nature, cannot be modified once created, which can make it tricky to observe and manipulate data during the debugging process. It's like trying to catch a glimpse of a shooting star – blink, and you might miss it. Additionally, immutability can introduce new kinds of bugs, such as _accidentally creating new copies of data_ when you intended to modify existing ones, leaving you scratching your head like a confused monkey.
This article aims to equip you with practical strategies and techniques for effectively debugging immutable code in Go. Whether you're working with built-in immutable types or creating your own custom immutable data structures, these strategies will help you identify and resolve issues more efficiently, allowing you to fully harness the power of immutability in your Go projects.
**1. Leveraging Logging and Tracing**.
One of the most powerful tools for debugging immutable code is effective logging and tracing. By logging relevant information at strategic points in your code, you can gain valuable insights into the state of your program and the flow of data through your immutable data structures.
```
func processData(data []byte) (result []byte, err error) {
log.Printf("Processing data: %v", data)
// ... data processing logic ...
log.Printf("Result: %v", result)
return result, nil
}
```
In the example above, we log the input data and the final result, providing a clear trail of information that can aid in debugging. Additionally, you can employ more advanced logging techniques, such as structured logging or using third-party logging libraries like logrus or zap.
**2. Leverage Debuggers and Profilers**.
While immutable data structures can simplify debugging by reducing the number of potential sources of mutation, they can also make it more challenging to observe and manipulate data during the debugging process. Fortunately, Go's built-in debugger (dlv) and profiling tools can be invaluable allies in these situations.
```
func main() {
data := []byte("hello, world")
result, err := processData(data)
if err != nil {
log.Fatalf("Error processing data: %v", err)
}
fmt.Println(string(result))
}
```
By setting breakpoints and inspecting variables at different points in your code, you can gain insights into the state of your immutable data structures and identify potential issues. Additionally, profiling tools like pprof can help you detect performance bottlenecks, memory leaks, or other issues that may be related to your immutable data structures.
**3. Embrace Pure Functions and Immutable Transformations**.
Functional programming techniques, such as pure functions and immutable data transformations, can greatly simplify debugging by reducing side effects and making code more predictable. In Go, you can leverage these techniques to work with immutable data structures more effectively.
```
func transformData(data []byte) []byte {
    // Perform some transformation without mutating the input slice
    result := make([]byte, len(data))
    copy(result, data)
    // ... transform result in place ...
    return result
}

func processData(data []byte) ([]byte, error) {
    transformed := transformData(data)
    // ... further processing ...
    return transformed, nil
}
```
In the example above, `transformData` is a pure function: it takes a slice of bytes as input and returns a new, transformed slice without modifying the original data, so it consistently returns the same result for the same input and has no side effects. `processData`, by contrast, is less likely to be pure because of its error handling and the potential side effects of its additional processing. By separating your data transformations into pure functions, you can more easily reason about their behavior and identify potential issues.
**4. Leverage Third-Party Libraries and Tools**.
While Go's standard library provides some immutable data structures (e.g., _bytes.Buffer_, _strings.Builder_), there are also several third-party libraries and tools available that can aid in working with immutable data structures and debugging immutable code.
For example, the _immutable package_ (https://github.com/benbjohnson/immutable) provides a collection of immutable data structures, including _lists_, _maps_, and _sets_. Using these data structures can simplify your code and provide additional debugging benefits.
**Conclusion**
Debugging immutable code in Go requires a combination of effective _logging and tracing_, _leveraging debuggers_ and _profilers_, embracing _pure functions_ and _immutable transformations_, and taking advantage of _third-party libraries and tools_. By applying these strategies, you can unlock the full potential of immutability in Go, writing more robust and maintainable code while simplifying the debugging process.
Remember, immutability is a powerful tool for reducing bugs and improving code quality, and mastering the art of debugging immutable code can be a game-changer for your Go development workflow. While the strategies discussed in this article can greatly aid in debugging immutable code, it's essential to continually learn and adapt as new techniques and tools emerge.
| cherrypick14 |
1,882,371 | [Game of Purpose] Day 21 | Today I was travelling, so no progress. I even forgot to make an update that day, so I'm writing this... | 27,434 | 2024-06-09T21:06:52 | https://dev.to/humberd/game-of-purpose-day-21-3828 | gamedev | Today I was travelling, so no progress. I even forgot to make an update that day, so I'm writing this 1 day later. | humberd |
1,882,370 | A Voyage through Algorithms using Javascript - Recursion | What is Recursion? Recursion is a powerful and elegant technique that forms the backbone... | 0 | 2024-06-09T21:02:21 | https://www.sahinarslan.tech/posts/a-voyage-through-algorithms-using-javascript-recursion | webdev, javascript, computerscience, algorithms | ## What is Recursion?
Recursion is a powerful and elegant technique that forms the backbone of many algorithms. It allows a function to call itself repeatedly until a specific condition is met, enabling the solution of complex problems by breaking them down into simpler subproblems.
## Anatomy of a Recursive function
Every recursive function consists of two essential parts: the base condition and the recursive call.
**Base Condition:** The base condition is the foundation of a recursive function. It defines the scenario where the function can provide a direct solution without the need for further recursive calls, acting as a termination point to prevent infinite loops. The base condition represents the simplest form of the problem that can be solved immediately.
**Recursive Call:** The recursive call is where the function invokes itself with a modified version of the original problem. The function approaches the base condition by gradually reducing the problem into smaller, more manageable subproblems. The results of these recursive calls are then combined or processed to yield the final solution.
The cooperation between the base condition and the recursive call is what makes recursion a powerful tool. The base condition ensures that the recursion terminates, while the recursive call allows the function to tackle complex problems by breaking them down into simpler ones.
Here's the general structure of a recursive function:
```javascript
function recursiveFunction(parameters) {
if (baseCondition) {
// Base case: Return a direct solution
return baseResult;
} else {
// Recursive case: Modify parameters for the next recursive call
const modifiedParameters = ...;
// Invoke the recursive function with modified parameters
const recursiveResult = recursiveFunction(modifiedParameters);
// Process or combine the recursive result
const finalResult = ...;
return finalResult;
}
}
```
## Types of Recursion
Recursion can be classified into different types based on how the recursive calls are made. Let's explore each type with examples, use cases, pros, and cons.
### Direct Recursion
In direct recursion, a function calls itself directly within its own body.
**Example:**
```javascript
function factorial(n) {
if (n === 0) {
return 1;
}
return n * factorial(n - 1);
}
console.log(factorial(5)); // Output: 120
```
**Use cases:**
* Calculating factorials
* Traversing tree-like data structures (e.g., binary trees, file systems)
* Implementing recursive algorithms (e.g., Tower of Hanoi, Fibonacci sequence)
**Pros:**
* Simple and intuitive to understand and implement
* Suitable for problems with a clear recursive structure
**Cons:**
* Can lead to stack overflow if the recursion depth is too large
* May be less efficient compared to iterative solutions for certain problems
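To make one of the use cases above concrete, here is a minimal Tower of Hanoi sketch. The function name and the `from->to` move format are just for illustration; the point is the direct recursion, where `hanoi` calls itself twice on a smaller subproblem:

```javascript
// Move n disks from peg `from` to peg `to`, using `via` as the spare peg.
function hanoi(n, from, to, via, moves = []) {
  if (n === 0) {
    return moves; // base case: no disks left to move
  }
  hanoi(n - 1, from, via, to, moves); // move n-1 disks out of the way
  moves.push(`${from}->${to}`);       // move the largest remaining disk
  hanoi(n - 1, via, to, from, moves); // move the n-1 disks back on top
  return moves;
}

console.log(hanoi(3, 'A', 'C', 'B'));
// 7 moves in total, since n disks always take 2^n - 1 moves
```

Note how the exponential move count (2^n - 1) falls directly out of the two recursive calls per level, which is the same pattern that makes naive recursive Fibonacci expensive.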
### Indirect Recursion
Indirect recursion occurs when a function calls another function, which in turn calls the original function directly or indirectly.
**Example:**
```javascript
function isEven(n) {
if (n === 0) {
return true;
}
return isOdd(n - 1);
}
function isOdd(n) {
if (n === 0) {
return false;
}
return isEven(n - 1);
}
console.log(isEven(4)); // Output: true
console.log(isOdd(5)); // Output: true
```
**Use cases:**
* Implementing mutually recursive functions (e.g., even/odd, is_palindrome)
* Solving problems that can be divided into interrelated subproblems
**Pros:**
* Allows for modular and organized code structure due to each function having a specific role.
* Can make the logic more readable and maintainable due to each function handling a distinct part of the problem.
**Cons:**
* Even though the modularity and separation of concerns make each function readable at a glance, indirect recursion can still be harder to understand since it involves multiple functions calling each other. This can make the flow of execution less straightforward compared to direct recursion, where you only deal with a single function calling itself.
* It can result in deeper recursion depths since multiple functions are involved in the recursive process. This increases the risk of stack overflow if the recursion goes too deep, as each function call adds a new frame to the call stack.
### Tail Recursion
A recursive function is considered tail recursive if the recursive call is the last thing executed by the function. There is no need to keep a record of the previous state – in other words, there's no additional computation after the recursive call returns.
**Example:**
```javascript
function sumNumbers(n, accumulator = 0) {
if (n === 0) {
return accumulator;
}
return sumNumbers(n - 1, accumulator + n);
}
console.log(sumNumbers(5)); // Output: 15
```
**Use cases:**
* Optimizing recursive algorithms by simplifying the recursive structure to reduce the number of recursive calls
* Example: a tail-recursive version of the naive implementation of the Fibonacci sequence (more details in the Optimizing Recursive functions section).
* Tail call optimization (TCO) can also further enhance performance by optimizing these calls at the engine level. In Javascript, TCO is currently only supported by Safari's engine.
* Avoiding stack overflow in languages (or engines) that support tail call optimization
**Pros:**
* Can be optimized by the compiler or interpreter for better performance
* Reduces the risk of stack overflow in supported languages / engines
**Cons:**
* Not all programming languages support tail call optimization in the engine level
* May require additional parameters or helper functions to maintain the state
### Non-Tail Recursion
A recursive function is considered non-tail recursive if the recursive call is not the last thing executed by the function. After returning from the recursive call, there is something left to evaluate (notice the console log after the reversePrint function below).
**Example:**
```javascript
function reversePrint(n) {
if (n === 0) {
return;
}
reversePrint(n - 1);
console.log(n);
}
reversePrint(5);
// Output:
// 1
// 2
// 3
// 4
// 5
```
**Use cases:**
* Generating permutations or combinations
* Traversing and processing data structures in a post-order manner
**Pros:**
* Allows for post-processing or evaluation after the recursive call
* Suitable for problems that require backtracking or post-order traversal
**Cons:**
* Can lead to stack overflow if the recursion depth is too large
* May be less efficient compared to tail recursive or iterative solutions
## Infinite Recursion and Stack Overflow Errors
It is critical that the recursive calls eventually reach the base condition. Without a well-defined base condition, the function may continue calling itself indefinitely, leading to infinite recursion and a stack overflow error.
Consider the following example of a recursive function without a base condition:
```javascript
function recursiveGreeting(name) {
console.log(`Hi ${name}.`);
recursiveGreeting(name);
}
recursiveGreeting("Mark");
```
In this case, the `recursiveGreeting` function lacks a base condition. When invoked with the argument `"Mark"`, it will print `"Hi Mark."` and then call itself with the same argument. This process will repeat indefinitely, causing an infinite loop. The function will keep printing `"Hi Mark."` until the call stack reaches its limit, resulting in a stack overflow error:

However, stack overflow errors can also occur in functions with a base condition if the input data is too large. For example, calculating the factorial of a very large number using a simple recursive function without optimization can also lead to a stack overflow due to the excessive depth of recursive calls.
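To make this concrete, here is a small sketch. The exact depth limit and error message are engine-dependent; in V8-based environments (Node, Chrome), the overflow surfaces as a `RangeError`:

```javascript
function factorial(n) {
  if (n === 0) {
    return 1; // perfectly valid base case
  }
  return n * factorial(n - 1);
}

try {
  factorial(1_000_000); // ~1,000,000 nested calls, far beyond the stack limit
} catch (err) {
  console.log(err.message); // "Maximum call stack size exceeded" on V8
}
```

So a base condition guarantees termination only if the recursion depth needed to reach it fits within the available stack.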
This brings us to the next topic: to work effectively with recursive functions, it's essential to understand not only the anatomy of recursion but also how the call stack operates. The call stack is an "unseen" key element that must be considered when working with recursive problems.
## Understanding the Call Stack
The call stack is a fundamental data structure used by the Javascript engine (and similarly in many other programming languages) to manage function execution, evaluations, and the program's execution flow "under the hood".
When a script calls a function, Javascript creates a new execution context for that function and pushes it onto the call stack. This execution context includes information such as the function's parameters, local variables, and the location to return to when the function completes. If the function makes further function calls or evaluations, additional execution contexts are pushed onto the stack.
The call stack operates on the principle of "last in, first out" (LIFO), meaning that the last execution context pushed onto the stack is the first one to be popped off when it completes. Javascript continuously executes the code in the execution context at the top of the stack. When a function completes, its execution context is popped off the stack, and the program resumes execution from the previous execution context.
Let's consider a simple example to illustrate how the call stack behaves with non-nested function calls:

*Recorded gif from JS Visualizer 9000 tool: <https://www.jsv9000.app/>*
**Code:**
```javascript
function one() {
return 'one';
}
function two() {
return 'two';
}
function three() {
return 'three';
}
one();
two();
three();
```
Here's what happens with the call stack:
1. `one()` is called:
* `one` is pushed onto the call stack.
* `one` executes and returns the string `'one'`.
* `one` is popped off the call stack.
2. `two()` is called:
* `two` is pushed onto the call stack.
* `two` executes and returns the string `'two'`.
* `two` is popped off the call stack.
3. `three()` is called:
* `three` is pushed onto the call stack.
* `three` executes and returns the string `'three'`.
* `three` is popped off the call stack.
In this scenario, each function call is independent and completes before the next function is called. The call stack never holds more than one function at a time, as each function "comes and goes" in a sequential manner.
Now, let's examine how the call stack handles nested function calls:

*Recorded gif from JS Visualizer 9000 tool: <https://www.jsv9000.app/>*
**Code:**
```javascript
function one() {
return two();
}
function two() {
return three();
}
function three() {
return 'done!';
}
one();
```
Here's what happens with the call stack when we're dealing with nested calls:
1. `one()` is called:
* `one` is pushed onto the call stack.
* `one` calls `two()`.
2. `two()` is called by `one()`:
* `two` is pushed onto the call stack on top of `one`.
* `two` calls `three()`.
3. `three()` is called by `two()`:
* `three` is pushed onto the call stack on top of `two`.
* `three` executes and returns the string `'done!'`.
* `three` is popped off the call stack.
4. The return value from `three()` is used by `two()`:
* `two` continues execution, receives the string `'done!'`, and returns it.
* `two` is popped off the call stack.
5. The return value from `two()` is used by `one()`:
* `one` continues execution, receives the string `'done!'`, and returns it.
* `one` is popped off the call stack.
In this case, the call stack holds multiple functions simultaneously due to the nested calls to manage more complex execution flows.
### The Call Stack and Recursion:
Recursion relies heavily on the call stack to manage the multiple instances of the recursive function. Each recursive call creates a new frame on the call stack with its own execution context (variables and parameters).
Consider a simple example of a recursive function that counts down from a number:

*Recorded gif from JS Visualizer 9000 tool: <https://www.jsv9000.app/>*
**Code:**
```javascript
function recursiveCountdown(n) {
if (n === 0) {
return;
}
return recursiveCountdown(n - 1);
}
recursiveCountdown(4);
```
Here's what happens in the call stack when we're dealing with this recursive function:
1. `recursiveCountdown(4)` is called and pushed onto the stack.
2. Since `n` is not 0, it calls `recursiveCountdown(3)` and pushes it onto the stack.
3. This process continues, pushing `recursiveCountdown(2)`, `recursiveCountdown(1)`, and `recursiveCountdown(0)` onto the stack.
4. When `recursiveCountdown(0)` is called, the base case is met, and it returns `undefined` and pops `recursiveCountdown(0)` off the stack.
5. The return value (`undefined`) is passed up to `recursiveCountdown(1)`, which then returns and pops off the stack.
6. This process continues until `recursiveCountdown(4)` receives the return value from `recursiveCountdown(3)` and completes, ultimately clearing the stack.
As you can see, even a small operation can fill up the call stack quickly. It's therefore critical to be mindful of the call stack's size when working with recursion. If the recursion depth becomes too large, or if there is no base case to terminate the recursion, the call stack can overflow. This happens when the maximum stack size is exceeded and the program runs out of memory to store additional execution contexts.
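One way to get a feel for this limit is to probe it directly. This is a rough, engine-dependent sketch: the number it reports varies between browsers, Node versions, and even individual runs, since try/catch frames themselves consume stack space.

```javascript
// Recurse until the engine throws, then report how deep we got.
function maxDepth(depth = 0) {
  try {
    return maxDepth(depth + 1);
  } catch (err) {
    return depth; // the stack overflowed at roughly this depth
  }
}

console.log(maxDepth()); // typically on the order of tens of thousands on V8 defaults
```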
## Optimizing Recursive functions
Recursive functions can be powerful and expressive, but they can also suffer from performance issues if not optimized properly. One common problem that arises with recursive functions is redundant calculations, where the same subproblems are solved multiple times. This can lead to exponential time complexity and inefficient use of resources.
Whenever we discuss optimizing an algorithm, we are essentially focusing on improving its time and space complexity. This section assumes that you are at least somewhat familiar with Big O notation. If you need a quick refresher, I recommend starting with the article below and then returning here to continue:
[Comprehensive Big O Notation Guide in Plain English, using Javascript](https://www.sahinarslan.tech/posts/comprehensive-big-o-notation-guide-in-plain-english-using-javascript)
### Case study: Fibonacci Sequence Problem
To illustrate this, let's take a look at the famous Fibonacci sequence problem and explore how we can optimize its recursive solution. The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones. The sequence starts with 0 and 1, and goes on infinitely:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
#### Naive Recursive Implementation
A straightforward recursive implementation of the Fibonacci sequence can be written as follows:
```javascript
function fibonacci(n) {
if (n <= 1) {
return n;
}
return fibonacci(n - 1) + fibonacci(n - 2);
}
```
In this implementation, the base condition is when `n` is less than or equal to 1, in which case we simply return `n`. For any value of `n` greater than 1, we recursively call the `fibonacci` function with `n - 1` and `n - 2` and add their results to obtain the Fibonacci number at position `n`.
While this implementation is concise and easy to understand, it suffers from a major performance issue. Let's analyze the time complexity of this approach.
**Time Complexity:** O(2^n)
The time complexity of the naive recursive Fibonacci implementation can be represented by the recurrence relation:
```
T(n) = T(n-1) + T(n-2) + O(1)
```
This means that to calculate the Fibonacci number at position `n`, we need to calculate the Fibonacci numbers at positions `n-1` and `n-2`, and then perform a constant-time operation (addition).
The recursive calls form a binary tree-like structure, where each node represents a function call. The height of this tree is `n`, and the number of nodes in the tree grows exponentially with `n`. In fact, the time complexity of this naive recursive approach is exponential, approximately O(2^n). **In simpler words, each function call makes 2 more calls, those 2 calls make 4 more, and so on.**
To understand why, let's look at a small example. Consider the calculation of the 5th Fibonacci number:
```
fibonacci(5)
-> fibonacci(4) + fibonacci(3)
-> (fibonacci(3) + fibonacci(2)) + (fibonacci(2) + fibonacci(1))
-> ((fibonacci(2) + fibonacci(1)) + (fibonacci(1) + fibonacci(0))) +
((fibonacci(1) + fibonacci(0)) + fibonacci(1))
-> (((fibonacci(1) + fibonacci(0)) + fibonacci(1)) + (1 + 0)) +
((1 + 0) + 1)
```
As we can see, the number of function calls grows exponentially with each increasing value of `n`. Each function call leads to two more function calls, resulting in a binary tree structure. The number of nodes in this tree is approximately 2^n, leading to an exponential time complexity.
**Space Complexity:** O(n)
The space complexity of the naive recursive Fibonacci implementation is O(n). This is because the maximum depth of the recursive call stack is proportional to the input value `n`. Each recursive call adds a new frame to the call stack, consuming additional memory.
In the worst case, when `n` is large, the recursive calls will reach a depth of `n` before starting to return. This means that the call stack will contain `n` frames, each storing the local variables and function call information.
It's important to note that the space complexity is not exponential like the time complexity. While the time complexity grows exponentially with `n`, the space complexity grows linearly with `n` due to the linear growth of the call stack.
To summarize:
* The time complexity of the naive recursive Fibonacci implementation is O(2^n), which is exponential.
* The space complexity is O(n) due to the linear growth of the call stack with respect to the input value `n`.
Now let's explore different optimization techniques to improve the performance of the recursive Fibonacci function:
### 1 - Memoization
Memoization is a technique where we store the results of expensive function calls and return the cached result when the same inputs occur again. This optimization can significantly improve the performance of recursive functions by eliminating redundant calculations.
Here's an example of applying memoization to the Fibonacci function:
```javascript
function fibonacciMemo(n, memo = {}) {
if (n in memo) {
return memo[n];
}
if (n <= 1) {
return n;
}
memo[n] = fibonacciMemo(n - 1, memo) + fibonacciMemo(n - 2, memo);
return memo[n];
}
```
In this optimized version, we introduce an object `memo` to store the previously calculated Fibonacci numbers. Before making a recursive call, we check if the result for the current input `n` is already available in the `memo` object. If it is, we return the memoized result directly. Otherwise, we proceed with the recursive calls and store the result in the `memo` object before returning it.
**Time Complexity:** O(n)
With memoization, each Fibonacci number is calculated only once. The recursive calls are made only for numbers that have not been previously calculated and memoized. This reduces the time complexity from exponential to linear, as there are only n unique subproblems to solve.
**Space Complexity:** O(n)
The space complexity of the memoized solution is O(n) because the `memo` object stores the results of each subproblem. In the worst case, the `memo` object will contain all the Fibonacci numbers from 0 to n.
**Use cases:**
* When the recursive function performs redundant calculations
* When the recursive function has overlapping subproblems
**Pros:**
* Eliminates redundant calculations
* Significantly improves the time complexity (from exponential to linear)
**Cons:**
* Requires extra space to store the memoized results
* May not be suitable for all recursive problems
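To see the effect of memoization without timing anything, you can simply count calls. This instrumented sketch (the counter variables are just for illustration) contrasts the naive version with the memoized one for `n = 20`:

```javascript
let naiveCalls = 0;
function fibNaive(n) {
  naiveCalls++;
  if (n <= 1) return n;
  return fibNaive(n - 1) + fibNaive(n - 2);
}

let memoCalls = 0;
function fibMemo(n, memo = {}) {
  memoCalls++;
  if (n in memo) return memo[n]; // cached result: no further recursion
  if (n <= 1) return n;
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
}

fibNaive(20);
fibMemo(20);
console.log(naiveCalls, memoCalls); // 21891 vs. 39 for n = 20
```

The naive version's call count grows exponentially with `n`, while the memoized version's grows linearly, which matches the O(2^n) vs. O(n) analysis above.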
### 2 - Tail Call Optimization
Tail call optimization is a technique used by some programming languages / engines to optimize recursive functions. It allows the compiler or interpreter to reuse the same call stack frame for recursive calls, thereby avoiding the overhead of creating new stack frames.
To take advantage of tail call optimization, we need to rewrite the recursive function in a way that the recursive call is the last operation performed. Here's an example of the Fibonacci function optimized for tail calls:
```javascript
function fibonacciTailCall(n, a = 0, b = 1) {
if (n === 0) {
return a;
}
if (n === 1) {
return b;
}
return fibonacciTailCall(n - 1, b, a + b);
}
```
In this version, we introduce two additional parameters `a` and `b` to keep track of the previous two Fibonacci numbers. The recursive call is made with `n - 1`, and the values of `a` and `b` are updated accordingly. The base conditions handle the cases when `n` is 0 or 1.
**Time Complexity:** O(n)
The tail-recursive solution has a time complexity of O(n) because it makes n recursive calls, each taking constant time. This time complexity remains the same regardless of whether tail call optimization is supported or not.
**Space Complexity:** O(n) or O(1)
The space complexity of the tail-recursive solution depends on whether the Javascript engine supports tail call optimization or not.
* If tail call optimization is supported, the space complexity is reduced to O(1) because the recursive calls are optimized to reuse the same stack frame. This eliminates the need for additional memory to store the call stack.
* If tail call optimization is not supported, the space complexity remains O(n) because each recursive call will consume an additional stack frame, similar to the naive recursive approach.
It's worth noting that tail call optimization is not consistently available across all Javascript engines. As of today, TCO is only supported by Safari (in strict mode). This means that the actual space complexity of the tail-recursive solution may vary depending on the execution environment.
**Use cases:**
* When the recursive function can be transformed into a tail-recursive form
* When the programming language or specific implementation supports tail call optimization
**Pros:**
* Avoids stack overflow errors for deep recursions (if tail call optimization is supported)
* Optimizes memory usage by reusing stack frames (if tail call optimization is supported)
**Cons:**
* Not all Javascript engines support tail call optimization consistently
* May require additional parameters and manipulation of the recursive function
To keep behavior consistent across different Javascript environments, it's generally recommended to rely on other optimization techniques that have more predictable performance characteristics.
### 3 - Trampolining
Trampolining is a technique used to overcome stack overflow limitations in recursive functions. It involves restructuring the recursive function so that a separate trampoline function drives the recursive steps iteratively. This technique is particularly useful in Javascript, as it provides a way to manage deep recursion consistently across different engines and prevents the call stack from exceeding its limit.
Here's an example of applying trampolining to the Fibonacci function:
```javascript
function trampoline(fn) {
return function(...args) {
let result = fn(...args);
while (typeof result === 'function') {
result = result();
}
return result;
};
}
function fibonacciTrampoline(n, a = 0, b = 1) {
if (n === 0) {
return a;
}
if (n === 1) {
return b;
}
return () => fibonacciTrampoline(n - 1, b, a + b);
}
const trampolinedFibonacci = trampoline(fibonacciTrampoline);
trampolinedFibonacci(5); // Output: 5
```
In this variant, the `trampoline` function is defined separately. It takes a recursive function `fn` as an argument and returns a new function that handles the recursive calls iteratively. The `fibonacciTrampoline` function remains the same as before, returning a function that encapsulates the next recursive step.
The `trampoline` function repeatedly calls the returned function until a non-function value is returned, which represents the final result. Finally, we create a `trampolinedFibonacci` function by passing `fibonacciTrampoline` to the `trampoline` function.
**Time Complexity:** O(n)
The trampolined solution has a time complexity of O(n) because it performs n iterations in the `trampoline` function. Each iteration invokes the returned function, which represents the next step of the recursion.
**Space Complexity:** O(n)
The space complexity of the trampolined solution is O(n) because the closure returned by `fibonacciTrampoline` captures the current state of the computation. In the worst case, there will be n closures created, each capturing the values of `n`, `a`, and `b`.
**Use cases:**
* When the recursive function has a deep recursion depth
* When the programming language / engine does not support tail call optimization
**Pros:**
* Avoids stack overflow errors by breaking down the recursion into smaller steps
* Allows for deep recursions without exceeding the call stack limit
**Cons:**
* Requires additional code to handle the trampolining logic
* May have slightly slower performance compared to direct recursion
Take a look at the visual comparison below to see how trampolining effectively manages the call stack compared to the naive recursive approach:
#### Call stack visualization using the naive approach:

*Recorded gif from JS Visualizer 9000 tool: <https://www.jsv9000.app/>*
#### Call stack visualization using the trampoline optimization:

*Recorded gif from JS Visualizer 9000 tool: <https://www.jsv9000.app/>*
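The difference matters in practice: in an engine without TCO (e.g. Node), a plain tail-recursive version overflows the stack at large depths, while the trampolined version does not. A self-contained sketch (the functions are repeated here so it runs on its own; at this size the numeric result overflows to `Infinity`, which is fine for demonstrating depth):

```javascript
function trampoline(fn) {
  return function (...args) {
    let result = fn(...args);
    while (typeof result === 'function') {
      result = result(); // unwind one step at a time, keeping the stack flat
    }
    return result;
  };
}

function fibonacciTrampoline(n, a = 0, b = 1) {
  if (n === 0) return a;
  if (n === 1) return b;
  return () => fibonacciTrampoline(n - 1, b, a + b); // defer instead of recursing
}

const trampolinedFibonacci = trampoline(fibonacciTrampoline);

console.log(trampolinedFibonacci(10)); // 55
console.log(typeof trampolinedFibonacci(200_000)); // "number" (no stack overflow)
```

Calling the raw recursive version with `n = 200_000` in an engine without TCO would instead throw a stack overflow error.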
### 4 - Opting for an Iterative Approach
While recursive solutions can be elegant and expressive, sometimes an iterative approach can be more efficient in terms of time and space complexity. However, it's important to note that iterative variants of recursive functions can often be more complex and harder to understand compared to their recursive counterparts. Let's consider an iterative variant of the Fibonacci problem:
```javascript
function fibonacciIterative(n) {
if (n <= 1) {
return n;
}
let a = 0;
let b = 1;
for (let i = 2; i <= n; i++) {
const temp = b;
b = a + b;
a = temp;
}
return b;
}
```
In this iterative solution, we start with the base cases for `n <= 1`. Then, we initialize two variables `a` and `b` to keep track of the previous two Fibonacci numbers. We iterate from 2 to `n`, updating the values of `a` and `b` in each iteration. Finally, we return the value of `b`, which represents the Fibonacci number at position `n`.
**Time Complexity:** O(n)
The iterative solution has a time complexity of O(n) because it iterates from 2 to n, performing constant-time operations in each iteration. The number of iterations is directly proportional to the input value n.
**Space Complexity:** O(1)
The space complexity of the iterative solution is O(1) because it only uses a constant amount of additional memory to store the variables `a`, `b`, and `temp`. The space required does not depend on the input value n.
**Use cases:**
* When the recursive solution exceeds the maximum call stack size
* When the problem requires a large number of iterations
**Pros:**
* Often more efficient in terms of time and space complexity
* Avoids the overhead of function calls and stack management
**Cons:**
* Can be more complex and harder to understand compared to recursive solutions
* Requires explicit state management (e.g., variables to track previous values)
* May lack the elegance and expressiveness of recursive solutions
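As a quick sanity check, the iterative version can be compared against the recursive definition for small inputs. This sketch repeats both implementations so it runs on its own:

```javascript
function fibonacciRecursive(n) {
  if (n <= 1) return n;
  return fibonacciRecursive(n - 1) + fibonacciRecursive(n - 2);
}

function fibonacciIterative(n) {
  if (n <= 1) return n;
  let a = 0;
  let b = 1;
  for (let i = 2; i <= n; i++) {
    const temp = b;
    b = a + b;
    a = temp;
  }
  return b;
}

// The two implementations should agree on small inputs
for (let n = 0; n <= 15; n++) {
  console.assert(fibonacciIterative(n) === fibonacciRecursive(n));
}
console.log(fibonacciIterative(15)); // 610
```

This kind of cross-check is a cheap way to gain confidence when replacing an elegant recursive solution with a faster but less obvious iterative one.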
### Optimization summary
Just by looking at the time and space complexity of each optimization technique, we can see how they improve upon the naive recursive approach:
* Memoization reduces the time complexity from O(2^n) exponential to O(n) linear, but space complexity stays at O(n) linear to store the memoized results.
* Tail call optimization maintains the O(n) linear time complexity; if the language / engine supports TCO, it reduces the space complexity to O(1) constant by reusing stack frames, otherwise the space complexity stays at O(n) linear.
* Trampolining also maintains the O(n) linear time complexity but has a space complexity of O(n) linear due to the creation of closures.
* The iterative approach achieves optimization on both ends - O(n) linear time complexity and O(1) constant space complexity, making it the most efficient in terms of both time and space. But at this point, we no longer have a recursive function.
While the iterative solution is the most efficient among them, it comes at the cost of reduced readability and increased complexity.
Recursive solutions, on the other hand, often have a more natural and intuitive structure that aligns with the problem's definition. They can be easier to understand and reason about, especially for problems that have a clear recursive nature.
When deciding between a recursive or iterative approach, it's also important to consider not only the efficiency aspects but also the readability and maintainability of the code. In some cases, a recursive solution may be preferable due to its simplicity and expressiveness, even if it has slightly higher time or space complexity compared to an iterative approach.
The choice between recursion and iteration depends on the specific problem, the constraints of the system, and the balance between efficiency and code clarity. It's all about weighing the trade-offs and choosing the approach that best fits the given scenario while prioritizing code readability and maintainability.
I hope this article helped you to understand what a recursion is, how to work with it and how to optimize recursive functions depending on your use case. Thanks for reading! | humblecoder00 |
1,882,318 | World Bicycle Day CSS Art : Frontend Challenge (June Edition) | This is a submission for Frontend Challenge v24.04.17, CSS Art: June. Inspiration Today,... | 0 | 2024-06-09T20:51:05 | https://dev.to/israebenboujema/world-bicycle-day-css-art-frontend-challenge-june-edition-31oc | frontendchallenge, devchallenge, css | _This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._
## Inspiration
Today, we are highlighting World Bicycle Day. Cycling has been a favorite hobby for both of us since we were kids, and it holds a special place in our hearts. When we saw the challenge post from the dev community, we thought it would be a perfect opportunity to celebrate our love for cycling. June 3rd, World Bicycle Day, is a day to promote cycling, its health benefits, and its positive impact on the environment.
## Demo
Here is the CSS Art we created to celebrate World Bicycle Day:
{% codepen https://codepen.io/IsraeBenboujema/pen/dyEzgxR %}
## Journey
Our journey in creating this CSS Art started with reflecting on our childhood memories of cycling and the joy it brought us. We wanted to capture the essence of a beautiful day spent riding a bicycle in the park.
**Process**
- _Conceptualization:_ We brainstormed various elements that represent cycling and settled on a simple yet vibrant depiction of a bicycle against a scenic background.
- _Design:_ We sketched the design on paper, focusing on the bicycle's structure, wheels, and the surrounding environment.
- _Coding:_ Using CSS, we brought our sketch to life. We used various CSS properties such as border, border-radius, transform, and animation to create the bicycle and animate the wheels to give a sense of movement.
- _Optimization:_ We refined the code to ensure it was clean and efficient, and we tested it across different browsers to ensure compatibility.
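As a rough sketch of the wheel-animation idea described above (the class name and values here are illustrative, not the code from our actual pen):

```css
/* Sketch: a wheel drawn as a bordered circle, spun forever with a
   keyframe animation — the border-radius / transform / animation combo. */
.wheel {
  width: 80px;
  height: 80px;
  border: 6px solid #333;
  border-radius: 50%; /* turns the square into a circle */
  animation: spin 1.2s linear infinite;
}

@keyframes spin {
  to { transform: rotate(360deg); }
}
```

Adding a pseudo-element spoke inside `.wheel` makes the rotation visible, since a plain circle looks the same at every angle.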
**What We Learned**
- _CSS Techniques:_ We improved our skills in using CSS properties to create complex shapes and animations.
- _Problem-Solving:_ We encountered and overcame challenges in aligning the elements and making the animation smooth.
- _Teamwork:_ Working together allowed us to combine our strengths and ideas, resulting in a better final product.
**Proud Moments**
- Successfully animating the bicycle wheels to simulate motion.
- Creating a visually appealing scene that resonates with our personal experiences and the spirit of World Bicycle Day.
**Next Steps**
- _Explore More Animations:_ We plan to delve deeper into CSS animations and transitions to create more dynamic art pieces.
- _Interactive Elements:_ Adding interactivity to our CSS art to engage viewers more.
- _Share and Inspire:_ We hope to share our journey and work with others to inspire them to explore CSS art and celebrate their passions through coding.
**Team Members**
@cssdru
@israebenboujema
| israebenboujema |
1,882,366 | Discover Jaipur's Best Cafes! | Top Cafes in Jaipur -Cafe Romeo Juliet | Jaipur, the vibrant capital of Rajasthan, is renowned for its rich history, stunning architecture,... | 0 | 2024-06-09T20:50:51 | https://dev.to/caferomeojuliet/discover-jaipurs-best-cafes-top-cafes-in-jaipur-cafe-romeo-juliet-b1b | food, cafe |

Jaipur, the vibrant capital of Rajasthan, is renowned for its rich history, stunning architecture, and bustling bazaars. Amidst its regal charm, Jaipur also boasts a thriving café culture, offering a blend of traditional and modern culinary delights. Whether you are a local resident or a traveler seeking a cozy spot to relax, Jaipur’s cafes provide the perfect ambiance to unwind. In this guide, we explore some of the [top cafes in jaipur](https://caferomeojuliet.com/top-cafes-in-jaipur/) that promise an unforgettable experience.
## Café Palladio: A Slice of Italy in Jaipur
Nestled in the heart of the city, Café Palladio stands as a testament to elegance and charm. The café's décor is inspired by Sicilian aesthetics, featuring pastel hues, intricate frescoes, and vintage furniture that transport you straight to Italy.
Menu Highlights:
Pasta Fresca: Made with fresh, locally sourced ingredients, this dish captures the essence of authentic Italian cuisine.
Pizzas: Their wood-fired pizzas are a must-try, offering a perfect balance of crispy crust and rich toppings.
Gelato: End your meal with their creamy and flavorful gelato, available in a variety of classic and exotic flavors.
Café Palladio is not just about food; it's an experience that combines art, culture, and culinary excellence.
## Tapri Central: A Blend of Tradition and Modernity
Tapri Central is a beloved spot for both locals and tourists, offering a unique fusion of traditional Indian snacks with a contemporary twist. Situated near Central Park, this café provides a panoramic view of the city’s skyline.
Menu Highlights:
Chai Ki Laadli: Their signature masala chai served in traditional clay cups, delivering an authentic taste of India.
Vada Pav: A spicy potato filling encased in a soft bun, Tapri’s Vada Pav is a perfect blend of flavors and textures.
Chaat Platter: A medley of tangy and spicy street food items that are perfect for sharing.
The ambiance at Tapri Central is cozy and inviting, with rustic wooden furniture and quirky décor that adds to its charm.
## Anokhi Café: Organic and Fresh
For those who prioritize health and sustainability, Anokhi Café is the place to be. Located within the Anokhi Museum, this café offers a delightful range of organic and farm-fresh dishes.
Menu Highlights:
Quinoa Salad: Packed with nutrients and bursting with flavors, this salad is a favorite among health enthusiasts.
Carrot Cake: Moist, rich, and topped with a creamy frosting, their carrot cake is a crowd-pleaser.
Fresh Juices: Made from organic fruits and vegetables, these juices are refreshing and revitalizing.
Anokhi Café is committed to sustainability, using locally sourced ingredients and eco-friendly practices in their operations.
## Nibs Café & Chocolataria: A Chocolate Lover’s Paradise
If you have a sweet tooth, Nibs Café & Chocolataria is a must-visit. This café specializes in all things chocolate, offering an array of decadent desserts and beverages.
Menu Highlights:
Chocolate Fondue: Dip fresh fruits, marshmallows, and pastries into a pot of warm, melted chocolate for a delightful treat.
Belgian Waffles: Crispy on the outside and fluffy on the inside, these waffles are drizzled with rich Belgian chocolate.
Hot Chocolate: A cup of their velvety hot chocolate is perfect for those chilly evenings.
The ambiance at Nibs is whimsical and enchanting, with chocolate-themed décor that adds to the overall experience.
## Taruveda Bistro: A Fusion Delight
Taruveda Bistro offers a unique fusion of global cuisines, catering to diverse palates. The café is known for its eclectic menu and vibrant atmosphere.
Menu Highlights:
Sushi Rolls: Fresh and flavorful, their sushi rolls are a favorite among seafood lovers.
Tacos: These are packed with a variety of fillings, from spicy chicken to vegetarian options, ensuring there is something for everyone.
Smoothie Bowls: A healthy and delicious way to start your day, these bowls are loaded with fruits, nuts, and seeds.
Taruveda Bistro’s ambiance is modern and chic, with an outdoor seating area that’s perfect for a relaxing meal.
## Café LazyMojo: The Perfect Hangout Spot
With multiple locations across Jaipur, Café LazyMojo is known for its relaxed vibe and extensive menu. It’s the ideal spot to hang out with friends or enjoy a quiet meal by yourself.
Menu Highlights:
Burgers: Their juicy, flavorful burgers are a hit among patrons, offering both vegetarian and non-vegetarian options.
Pasta Alfredo: Creamy and comforting, this pasta dish is perfect for a hearty meal.
Mojo Special Pizza: A unique creation topped with an assortment of ingredients, this pizza is a must-try.
Café LazyMojo’s interiors are trendy and inviting, with comfortable seating and artistic décor that enhance the dining experience.
## The Wind View Café: Scenic Views and Delicious Food
Located near the iconic Hawa Mahal, The Wind View Café offers stunning views of the historic palace along with a delectable menu. This café is a favorite among tourists for its picturesque setting and delicious food.
Menu Highlights:
Paneer Tikka: Marinated and grilled to perfection, this dish is a favorite among vegetarians.
Chicken Wings: Spicy and succulent, these wings are perfect for a quick snack.
Mocktails: Refreshing and innovative, their mocktails are perfect for cooling down on a hot day.
The Wind View Café is the perfect spot to enjoy a meal while taking in the breathtaking views of Jaipur’s architectural marvels.
## Curious Life Coffee Roasters: For the Coffee Aficionados
For coffee lovers, Curious Life Coffee Roasters is a haven. This café is dedicated to serving the best coffee in town, sourced from the finest beans and roasted to perfection.
Menu Highlights:
Espresso: Rich and robust, their espresso is a must-try for any coffee enthusiast.
Cold Brew: Smooth and refreshing, this beverage is perfect for hot summer days.
Pastries: Pair your coffee with one of their freshly baked pastries for a delightful treat.
Curious Life Coffee Roasters offers a cozy and inviting atmosphere, making it the perfect spot to relax with a good book or catch up with friends.
## Jaipur Modern Kitchen: A Contemporary Culinary Experience
Jaipur Modern Kitchen combines contemporary design with a menu that celebrates modern cuisine. The café is known for its innovative dishes and chic ambiance.
Menu Highlights:
Avocado Toast: Topped with fresh ingredients and a drizzle of olive oil, this dish is a healthy and delicious choice.
Burrata Salad: Creamy burrata cheese paired with fresh greens and a tangy dressing makes for a perfect starter.
Gourmet Sandwiches: These sandwiches are packed with high-quality ingredients and bursting with flavor.
Jaipur Modern Kitchen’s sleek and stylish interiors make it a great spot for a sophisticated dining experience.
## Conclusion
Jaipur's café scene is as diverse and vibrant as the city itself. From traditional Indian snacks to international delicacies, these cafes offer something for everyone. Each café has its unique charm and specialties, ensuring that every visit is a delightful experience. So, the next time you find yourself in the Pink City, make sure to explore these top cafes and indulge in the culinary delights they have to offer.
| caferomeojuliet |
1,882,362 | How to get html from the server using javascript using fetch (hmpl-js module)? | You can use the hmpl-js package to load HTML from the server. It works on fetch, so it can help you... | 0 | 2024-06-09T20:41:02 | https://dev.to/antonmak1/how-to-get-html-from-the-server-using-javascript-using-fetch-1kih | webdev, javascript, programming, tutorial | You can use the hmpl-js package to load HTML from the server. It works on `fetch`, so it can help you avoid writing a bunch of code:
```html
<div id="wrapper"></div>
<script src="https://unpkg.com/hmpl-js"></script>
<script>
const templateFn = hmpl.compile(
`<div>
<button class="getHTML">Get HTML!</button>
<request src="/api/test" after="click:.getHTML"></request>
</div>`
);
const wrapper = document.getElementById("wrapper");
const elementObj = templateFn({
credentials: "same-origin",
get: (prop, value) => {
if (prop === "response") {
if (value) {
wrapper.appendChild(value);
}
}
},
});
</script>
```
or
```javascript
import { compile } from "hmpl-js";
const templateFn = compile(
`<div>
<button class="getHTML">Get HTML!</button>
<request src="/api/test" after="click:.getHTML"></request>
</div>`
);
const wrapper = document.getElementById("wrapper");
const elementObj = templateFn({
credentials: "same-origin",
get: (prop, value) => {
if (prop === "response") {
if (value) {
wrapper.appendChild(value);
}
}
},
});
```
The `get` function fires dynamically whenever a property of the object updates. | antonmak1 |
1,882,361 | Revolutionizing Technology Solutions in Pakistan | Welcome to CodeHuntSPK, a leading software solutions company based in Islamabad, Pakistan. Our... | 0 | 2024-06-09T20:35:14 | https://dev.to/hmzi67/revolutionizing-technology-solutions-in-pakistan-pc7 | webdev, cloudcomputing, android, web3 | Welcome to CodeHuntSPK, a leading software solutions company based in Islamabad, Pakistan. Our mission is to transform your digital ideas into reality through cutting-edge technology and innovative strategies. With over two years of experience and a portfolio of 12 completed projects, we specialize in:
- **Web Development**: Creating responsive and visually appealing websites using the latest technologies like WordPress and Bootstrap5.
- **App Development**: Offering custom solutions for Windows, Android, and iOS platforms.
- **UX/UI Design**: Enhancing user experiences with intuitive and engaging interfaces.
- **Graphics and Logo Design**: Crafting unique visual identities for your brand.
Our team of experts, including cloud engineers, SEO specialists, and skilled developers, is dedicated to providing comprehensive IT solutions tailored to your business needs.
## Why Choose CodeHuntSPK?
1. **Expert Team**: Our professionals bring a wealth of knowledge and expertise to every project.
2. **Customized Solutions**: We tailor our services to meet your specific requirements and goals.
3. **Innovative Approach**: We stay ahead of industry trends to deliver state-of-the-art solutions.
4. **Client-Centric**: Your satisfaction is our priority, and we work closely with you to ensure the best outcomes.
## Our Services
### Web Development
- **E-Commerce Solutions**: Building robust online stores.
- **Custom Websites**: Developing tailored web applications.
### App Development
- **Mobile Apps**: Creating user-friendly apps for Android and iOS.
- **Desktop Applications**: Developing powerful Windows applications.
### Design Services
- **UX/UI Design**: Crafting seamless user experiences.
- **Graphic Design**: Designing compelling visuals for your brand.
## Our Team
Meet our talented team members:
- **Hamza Waheed**: CEO and Cloud Engineer.
- **Muhammad Usama**: SEO Expert.
- **Muhammad Shazil**: Web Developer.
## Contact Us
Have a project in mind? Reach out to us at [CodeHuntSPK](https://codehuntspk.com/) and let's start building your digital future today.
---
For more information, visit [CodeHuntSPK](https://codehuntspk.com/). | hmzi67 |
1,882,360 | Solving Tailwind's "Unknown at rule @apply" | Chances are you are using VSCode, and chances are you're also using Tailwind in one of your projects.... | 0 | 2024-06-09T20:33:11 | https://www.oh-no.ooo/snippets/unknown-at-rule-apply-k-thx-bye | tailwindcss, webdev, vscode, css | Chances are you are using VSCode, and chances are you're also using Tailwind in one of your projects. Chances are that at my first *chances are...* you went immediately *nuh-uh* and moved on (and if you're a NeoVim user I can't help you anyway, you're already doomed and lost in the recommended plugins someone chose for you and you're probably still figuring out half of them) but if I got your attention with the first two assumptions, then this post might be relevant for you!
One issue I've been having ever since using Tailwind with SCSS modules has been that VSCode goes absolutely ballistic marking every `@apply` as an unknown CSS rule, __as the editor assumes it's a native CSS `at-rule`__. (If you are wondering what at-rules are, `@import` and `@media` might sound a bit more familiar :D)
`@apply` is a Tailwind directive that mimics the semantics of a native CSS at-rule, and this one specifically takes care of enabling Tailwind utility classes in a .scss file ... which by now we already know — *I know, we wouldn't be reading this article otherwise* — so jumping to the main thing, let me present my favourite solution from <a href="https://byby.dev/at-rule-tailwind" target="_blank">the solutions list that this article from byby.dev</a> proposes.
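For context, this is the kind of SCSS module that triggers the warning (the selector and utility classes below are invented for illustration):

```scss
/* Perfectly valid to Tailwind's build, yet VSCode's CSS language
   server underlines every @apply as an unknown at-rule. */
.card {
  @apply rounded-lg bg-white p-4 shadow;

  &:hover {
    @apply shadow-lg;
  }
}
```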
Open the `.vscode` folder in your project, create a `tailwind_directives.json` file and add the following lines. I took the liberty of expanding from the article, which was talking specifically about the `@tailwind` directive, and take it as a chance to put <a href="https://tailwindcss.com/docs/functions-and-directives" target="_blank">all the Tailwind specific directives I could find from Tailwind's documentation</a>:
```json
{
"version": 1.0,
"atDirectives": [
{
"name": "@tailwind",
"description": "Use the @tailwind directive to insert Tailwind's `base`, `components`, `utilities`, and `screens` styles into your CSS."
},
{
"name": "@apply",
"description": "Use @apply to inline any existing utility classes into your own custom CSS."
},
{
"name": "@screen",
"description": "The screen function allows you to create media queries that reference your breakpoints by name instead of duplicating their values in your own CSS. Apparently deprecated in favour of @media?!"
},
{
"name": "@layer",
"description": "Use the @layer directive to tell Tailwind which “bucket” a set of custom styles belong to. Valid layers are base, components, and utilities."
},
{
"name": "@config",
"description": "Use the @config directive to specify which config file Tailwind should use when compiling that CSS file. Do not put @config before your @import statements."
}
]
}
```
Then, head to `settings.json` and add these two lines:
```json
{
// ... other stuff
  "css.customData": [".vscode/tailwind_directives.json"],
  "scss.customData": [".vscode/tailwind_directives.json"]
}
```
And voilá! Done!
<hr />
...
Now now, you might be tempted to follow the first solution proposed by the article, which simply tells VSCode to ignore every at-rule it cannot recognise:
```json
{
// Warning: Sub-optimal approach
// Please follow the previous advice!
// ... other stuff
  "css.lint.unknownAtRules": "ignore",
  "scss.lint.unknownAtRules": "ignore"
}
```
But! __What about typos?__ We all make them, and I wouldn't trust myself writing things correctly any day :'D Better be more specific and let pass only the things we are aware of!
<br />
## Sources and inspiration
- <a href="https://byby.dev/at-rule-tailwind" target="_blank">How to fix the unknown at rule @tailwind warning</a> by <a href="https://byby.dev" target="_blank">byby.dev</a>
- <a href="https://tailwindcss.com/docs/functions-and-directives" target="_blank">Tailwind's Functions & Directives documentation</a>
- Cover: Tailwind logotype from <a href="https://tailwindcss.com/brand" target="_blank">Tailwind Official Brand page</a>, <a href="https://www.freepik.com/free-vector/3d-abstract-background-with-paper-cut-flower-shape_18695224.htm" target="_blank"> 3d abstract background with paper cut flower shape</a> by <a href="https://www.freepik.com/author/garrykillian" target="_blank">GarryKillian</a> via <a href="https://freepik.com" target="_blank">Freepik</a>
<hr />
Originally posted in <a href="https://oh-no.ooo">oh-no.ooo</a> (<a href="https://www.oh-no.ooo/snippets/unknown-at-rule-apply-k-thx-bye">Unknown at rule @apply... k thx bye!</a>), my personal website. | mahdava |
1,882,332 | Using interactive rebase in Git | About An interactive rebase in Git is a process that allows you to edit a sequence of... | 0 | 2024-06-09T20:20:38 | https://dev.to/lyumotech/using-interactive-rebase-in-git-361d | repository, github, versioncontrol, git | ## About
An interactive rebase in Git is a process that allows you to edit a sequence of commits. This is typically used to clean up commit history before sharing changes with other members of the team.
## Case study
Let's study an example of making a clear 3-commit history out of the initial 5 commits.
1. **Start an interactive rebase:** Start an interactive rebase for the last 5 commits. If your branch has 5 commits ahead of the base branch, you can use:
```
git rebase -i HEAD~5
```
2. **Interactive rebase editor:** This command will open your default text editor with a list of the last 5 commits, starting from the oldest commit to the newest. The lines will look something like this:
```
pick abc123 Commit message 1
pick def456 Commit message 2
pick ghi789 Commit message 3
pick jkl012 Commit message 4
pick mno345 Commit message 5
```
3. **Edit the commits:** Change the word "pick" to specify what you want to do with each commit. To squash commits into one, you can use the word "squash" or "s" for short. For example, if you want to combine commits 2, 3, and 4 into one, you would modify the list like this:
```
pick abc123 Commit message 1
pick def456 Commit message 2
s ghi789 Commit message 3
s jkl012 Commit message 4
pick mno345 Commit message 5
```
4. **Save and close the editor:** After modifying the list, save and close the editor. Git will start the rebase process.
5. **Edit commit messages:** Git will then reopen the editor to allow you to combine commit messages. You can edit the commit messages to reflect the new commits. For example, you might see something like this:
```
# This is a combination of 3 commits.
# The first commit's message is:
Commit message 2
# This is the 2nd commit message:
Commit message 3
# This is the 3rd commit message:
Commit message 4
```
You can edit this to create a new combined commit message:
```
Combined commit message for commits 2, 3, and 4
```
6. **Complete the rebase:** Save and close the editor. Git will complete the rebase process.
### Example
Assume your commits are:
1. `abc123` - Initial setup
2. `def456` - Added feature A
3. `ghi789` - Fixed bug in feature A
4. `jkl012` - Improved feature A
5. `mno345` - Added feature B
You want to have the following commits:
1. Initial setup
2. Feature A (including the fix and improvement)
3. Feature B
Your rebase edit list would look like this:
```
pick abc123 Initial setup
pick def456 Added feature A
s ghi789 Fixed bug in feature A
s jkl012 Improved feature A
pick mno345 Added feature B
```
And after saving the first list, you would combine the messages of the second, third, and fourth commits into something like:
```
Feature A (including the fix and improvement)
```
> The editor used for an interactive rebase in Git is determined by your Git configuration and environment settings. By default, Git uses the default editor configured for your system, such as `vi` or `nano` on many Unix-like systems, or Notepad on Windows.
By following these steps, you can adjust your commits into three sensible commits with the desired commit messages.
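The whole case study can also be scripted end-to-end. Below is a hedged sketch that drives the same rebase non-interactively by overriding `GIT_SEQUENCE_EDITOR` (the editor Git opens for the todo list). The repository, file name, and commit messages are made up for the demo, and GNU-style `sed -i` is assumed; `fixup` is used instead of `squash` so no commit-message editor opens:

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# A base commit to rebase onto, then the five commits from the case study.
git commit -q --allow-empty -m "base"
for msg in "Initial setup" "Added feature A" "Fixed bug in feature A" \
           "Improved feature A" "Added feature B"; do
  echo "$msg" >> notes.txt
  git add notes.txt
  git commit -q -m "$msg"
done

# Mark todo lines 3 and 4 ("Fixed bug..." and "Improved...") as fixups of
# line 2 ("Added feature A"); fixup keeps line 2's message as-is.
GIT_SEQUENCE_EDITOR='sed -i -e 3s/^pick/fixup/ -e 4s/^pick/fixup/' \
  git rebase -i HEAD~5

git log --oneline  # base plus the three tidied commits
```

The same trick works for any of the todo operations: edit the list with `sed` (or any script) instead of typing into the editor.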
## Other commands for interactive rebase
During an interactive rebase in Git, several operations can be performed to modify the commit history. Here’s a short list of the key operations along with their syntax and explanations:
1. **pick** (or `p`):
- **Syntax**: `pick <commit-hash> <commit-message>`
- **Explanation**: Use the commit as-is. This is the default operation and is used to keep a commit unchanged.
2. **reword** (or `r`):
- **Syntax**: `reword <commit-hash> <commit-message>`
- **Explanation**: Use the commit, but modify the commit message.
3. **edit** (or `e`):
- **Syntax**: `edit <commit-hash> <commit-message>`
- **Explanation**: Pause the rebase to allow amendments to the commit. This can be used to change the content of the commit or the commit message.
4. **squash** (or `s`):
- **Syntax**: `squash <commit-hash> <commit-message>`
- **Explanation**: Combine the commit with the previous commit, merging their changes and allowing you to edit the commit message.
5. **fixup** (or `f`):
- **Syntax**: `fixup <commit-hash> <commit-message>`
- **Explanation**: Similar to `squash`, but discard the commit message of the commit being combined. The commit message of the previous commit is used.
6. **exec** (or `x`):
- **Syntax**: `exec <command>`
- **Explanation**: Execute a shell command.
7. **drop** (or `d`):
- **Syntax**: `drop <commit-hash> <commit-message>`
- **Explanation**: Remove the commit entirely. This can be used to delete unnecessary or undesired commits from the history.
8. **break**:
- **Syntax**: `break`
- **Explanation**: Pause the rebase at this point to allow inspection or further manual intervention. This is useful for debugging the state of the repository.
And others.
### Usage example for some of the commands
Here's an example of what the rebase instructions file might look like when you initiate an interactive rebase:
```plaintext
pick a1b2c3d First commit message
reword b2c3d4e Second commit message
edit c3d4e5f Third commit message
squash d4e5f6g Fourth commit message
fixup e5f6g7h Fifth commit message
drop f6g7h8i Sixth commit message
```
Alternatively, use the first letter of each command:
```plaintext
p a1b2c3d First commit message
r b2c3d4e Second commit message
e c3d4e5f Third commit message
s d4e5f6g Fourth commit message
f e5f6g7h Fifth commit message
d f6g7h8i Sixth commit message
```
## References
1. [Git Tools - Rewriting History](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History)
2. [Vim Cheat Sheet](https://vimsheet.com/) | lyumotech |
1,882,323 | Why Python is a Great Language for Coding Games | Introduction Python is a great coding language for beginners. The syntax is easy to read and it makes... | 0 | 2024-06-09T20:18:11 | https://dev.to/walkera1/why-python-is-a-great-language-for-coding-games-3088 | Introduction
Python is a great coding language for beginners. The syntax is easy to read and it makes sense. Python is good for building beginner friendly games. The language is also useful in many more ways than just making games. Python is very popular so there are lots of resources out there to help you code and learn.
Why is Python good for coding games?
- Easy to learn
There are many reasons that work together to make Python easy to learn. It has good readability and syntax. Python uses indentation to signify code blocks. This means you have to move your code around and that makes it way easier to read at a glance.

Other languages signify code blocks with curly braces or keywords like begin and end, which requires you to search for those signifiers and remember where they are.
Python also uses its popularity to be easier to learn, by that I mean the wealth of resources and supportive communities around Python. Lots of people have already learned Python, that means most questions you have probably have their answers documented on the internet somewhere. There are lots of courses, tutorials, books, and communities around Python so if you want help or practice, there are plenty of means to do so.
Python itself has robust performance. It is easy to import modules into your codebase. That means your code does not have to contain all the data that it needs to work. Whatever modules you import get carried over and combined with your existing code base. Pythons robustness makes it able to handle lots of actions at once without lagging or slowing down. The popular game Mario Kart 8 Deluxe is a good example. It has sold 45 million copies world wide. Being a high action game with lots of collisions and player interactions at high speed, that shows just how much Python can handle.

- Python can also delegate the harder tasks to more specialized languages
Python itself does have some limits when it comes to rendering games. Of course there are technically no limits to any programming language. If you had a specially made computer to handle a fast paced 3D game in Python, the language itself would not stop you.
The issue arises when you want to make a game on a “normal” (consumer grade) computer or have the game run on lots of other people's computers. The machine Python is running on will start to slow down because Python is not the most optimized language when it comes to graphics rendering.
But that does not mean it is impossible to make Call Of Duty in Python. While Python itself cannot render graphics very well, it can delegate the more specialized tasks to more specialized coding languages. To do this you will need a cross-language framework. This will not be too difficult. You only need to learn how to use a framework built for the language you already know, which in turn calls into a language that is better at handling graphics. You can use a more complex language like C++ without even knowing it!

- How many devices can Python games run on?
Python has many useful libraries, frameworks, and even some services that let you give your Python games the ability to work on many different platforms. This includes Windows, MacOS, and Linux. But the one I am interested in is NumWorks. More specifically, the NumWorks calculator. That expensive graphing calculator is able to run Python code. I am sure this has many useful applications like calculating complex formulas or storing data to cheat on a test and so on. But what I am interested in is gaming on my calculator. Now I know this won’t be an amazing coding experience; I do have a personal computer (the best gaming experience). No, my main motivator here is memes!
There is a meme that I love, where people try to code the game DOOM onto a calculator. Not just a calculator but I saw someone even code an environment that you can walk around in on a digital pregnancy test! While I am not skilled enough to code a pregnancy test, I believe that putting video games on my calculator is within my reach! Normally this would be a very easy process, not even requiring coding knowledge, all one has to do is connect their calculator to the internet and download the desired game. But in this case no one has coded DOOM in Python, for the NumWorks calculator, yet! That is where I come in. Now I have never made a three dimensional game, or even played DOOM for that matter. But I believe with a little effort to code the game (and playing the game lol) I too can play DOOM on my calculator!
- What else can you do with Python?
Being an easy to learn language makes it popular for lots of fields
You may think there is a limit to the things Python can do for you, since it is so easy to learn and it's popular. But those traits have become so substantial that they contribute to the usefulness of Python, instead of hurting it. Since Python is so easy to learn that makes it a good choice for people who don’t want to spend a huge amount of time learning to code but still want the benefits of automation and data visualization (more than not). Web development, automation of simple tasks, education, and finance are just some of the most widely used applications of Python. Python is good at web development because the syntax is easy to read. That means someone will spend more time coding their website with the necessities and extra stuff, rather than trying to read and understand syntax. You can of course leverage Python's extensive libraries and frameworks here. To make it so that you don’t have to reinvent the wheel while making a website and think with more abstraction (think about the big picture versus small details) while making your website. Python excels at management of data and information. Python has lots of frameworks and libraries that help organize and represent data. This could be useful for finances, making it easier to input data and see it from a different angle. Or have an algorithm do all that for you. This can also help teachers represent complicated topics to students, hopefully making it easier to understand.

Conclusion
In conclusion, Python is a great coding language to learn to get your foot in the door as a programmer. Python is great for making games. Python can also be utilized in many different fields besides gaming.
Sources:
https://www.tutorialspoint.com/is-python-good-for-developing-games-why-or-why-not
https://www.reddit.com/r/learnpython/comments/12vbfev/how_complex_a_game_can_you_build_in_python/
http://programarcadegames.com/index.php?lang=en&chapter=introduction_to_graphics
https://forum.freecodecamp.org/t/why-should-i-learn-python-if-i-already-know-javascript/253988 | walkera1 | |
1,882,322 | TypeError: Object of type AgentChatResponse is not JSON serializable | Getting this error while using chat_history for agent chat in Llama Index. Need a quick resolution.... | 0 | 2024-06-09T20:12:52 | https://dev.to/sourav_mukherjee_te/typeerror-object-of-type-agentchatresponse-is-not-json-serializable-aj9 | openai, llm, llamaindex | Getting this error while using chat_history for agent chat in Llama Index. Need a quick resolution. Any help on this? You can get the code below-
```python
chat_history.append(ChatMessage(role=MessageRole.USER, content=user_message))
response = agent.chat(message=user_message, chat_history=chat_history)
chat_history.append(ChatMessage(role=MessageRole.ASSISTANT, content=response))
```
It is working fine for the 1st time query and giving the error from 2nd one. | sourav_mukherjee_te |
1,882,321 | How to work with .hmpl file extension in javascript? | In order to work with files with the .hmpl extension, you can install a webpack and loader for it... | 0 | 2024-06-09T20:12:21 | https://dev.to/antonmak1/how-to-work-with-hmpl-file-extension-in-javascript-28hf | webdev, javascript, programming, tutorial | In order to work with files with the `.hmpl` extension, you can install a [webpack](https://www.npmjs.com/package/webpack) and loader for it [hmpl-loader](https://www.npmjs.com/package/hmpl-loader). Since version `0.0.2`, the loader connection looks like this:
### webpack.config.js
```javascript
module.exports = {
module: {
rules: [
{
test: /\.hmpl$/i,
use: ["hmpl-loader"],
}
]
}
}
```
### main.hmpl
```html
<div><request src="/api/test"></request></div>
```
### main.js
```javascript
const templateFn = require("./main.hmpl");
const elementObj = templateFn();
``` | antonmak1 |
1,878,727 | Prisma ORM | Introduction Prisma ORM is an open-source ORM made up of 3 parts: Prisma Client, Prisma... | 0 | 2024-06-09T20:08:19 | https://dev.to/allyn/prisma-orm-kjh | ## Introduction
Prisma ORM is an open-source ORM made up of 3 parts: Prisma Client, Prisma Migrate, and Prisma Studio. Prisma Client is an auto-generated, type-safe database client for Node.js and TypeScript. Prisma Migrate is a migration system, and Prisma Studio is an interface to view and manipulate data in the database. As an ORM, Prisma ORM supports databases like MongoDB, PostgreSQL, MySQL, and [many more](https://www.prisma.io/docs/orm/reference/supported-databases). Upon installation of Prisma, you have access to the Prisma CLI, which is how you will mainly interact with your Prisma project. All Prisma projects start off with a schema before any of the main components are utilized, so let's start with the Prisma schema.
## Prisma Schema
We know that Prisma is not a database, but an ORM, which is a tool to "translate" the data in the database to the developer. We also know that Prisma supports many different databases, which vary in syntax and structure/architecture. This is where the Prisma schema comes in.
Your Prisma schema, made up of models, reflects your database schema and acts as a proxy database. These schemas are written inside a `schema.prisma` file and contain a connection to the database via `datasource` and a `generator` that ['indicates that you want to generate a Prisma Client'](https://www.prisma.io/docs/orm/overview/introduction/what-is-prisma#the-prisma-schema). Data models take up the most space in the Prisma schema because they represent the tables or collections in databases, and they act as the base of the Prisma Client. Prisma gives you 2 workflows to create these models: Prisma Migrate and introspection.
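As a sketch of the pieces described above, a minimal `schema.prisma` might look like the following; the `Album` model and the MySQL connection details are illustrative assumptions, not taken from the article:

```prisma
// Hypothetical example schema — datasource, generator, and one data model.
datasource db {
  provider = "mysql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model Album {
  id         Int    @id @default(autoincrement())
  albumName  String
  artistName String
}
```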
### Introspection
Introspection is defined as a ["program's ability to examine the type or properties at run time"](https://en.wikipedia.org/wiki/Type_introspection). In the context of Prisma, introspection is when the program reads your database schema and generates your Prisma schema from it. Introspection is commonly used for an initial version of the Prisma schema; however, it can be used repeatedly, mainly when Prisma Migrate is not being used. If there are any changes to either schema, you can use introspection to ensure that your database schema and Prisma schema stay congruent. The introspection workflow uses commands like `prisma db pull` and `prisma db push` that allow for said congruency in your project.
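Assuming the Prisma CLI is available (for example via `npx`), the round trip can be sketched with these two commands:

```bash
# Read the database schema and update schema.prisma to match (introspection)
npx prisma db pull

# Push the Prisma schema state to the database without creating a migration file
npx prisma db push
```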
The workflow for introspection will follow, [per Prisma's documentation](https://www.prisma.io/docs/orm/prisma-schema/introspection).

## Prisma Migrate
Prisma Migrate is used to synchronize your Prisma schema and database schema as either one evolves. One of the main uses of Prisma Migrate is to ensure updates to your Prisma schema are reflected in your database schema, and this is done by using the `migrate` commands from the Prisma CLI. The objective of the commands under the Prisma Migrate umbrella of the Prisma CLI is to apply and resolve migrations, or updates, to the Prisma schema. Also as a result of creating a migration, SQL migration files are generated which act as a documented history of changes to the schema.
Let's look at the `prisma migrate dev` command, which is one of the main `prisma migrate` commands.
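In a development workflow the command is typically run with a descriptive name for the generated migration; the name below is a hypothetical example:

```bash
npx prisma migrate dev --name rename-album-columns
```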
The `prisma migrate dev` command will rerun the existing migration history on a "shadow database" to detect schema drift, which is changes to or deletions of migration files, or changes made directly to the database schema. The shadow database is a second, temporary database used to detect issues in your migrations. A shadow database lives only as long as the `prisma migrate dev` command runs, meaning one is created every time the command is run and deleted once the command completes. Pending migrations are applied and thus generate a new migration file, documenting what changes were made to the database, plus warnings if applicable.
```
/*
Warnings:
- You are about to drop the column `artistId` on the `Album` table. All the data in the column will be lost.
- You are about to drop the column `name` on the `Album` table. All the data in the column will be lost.
- Added the required column `albumName` to the `Album` table without a default value. This is not possible if the table is not empty.
- Added the required column `artistName` to the `Album` table without a default value. This is not possible if the table is not empty.
*/
-- DropForeignKey
ALTER TABLE `Album` DROP FOREIGN KEY `Album_artistId_fkey`;
-- AlterTable
ALTER TABLE `Album` DROP COLUMN `artistId`,
DROP COLUMN `name`,
ADD COLUMN `albumName` VARCHAR(191) NOT NULL,
ADD COLUMN `artistName` VARCHAR(191) NOT NULL;
```
The code block above is an example of what a migration file consists of when you make changes to your schema.
Once you're content with the state of your database and ready to add records to your database, you can move on to using Prisma Client.
## Prisma Client
Prisma Client is your database client built on your Prisma schema and is how you make queries to your database. In order to use your Prisma Client, you must run the `prisma generate` command in your terminal. After you run `prisma generate`, you should see this in your terminal.
```
✔ Generated Prisma Client (v5.15.0) to ./node_modules/@prisma/client in 45ms
Start using Prisma Client in Node.js (See: https://pris.ly/d/client)
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
```
You will receive a block of code you can copy and paste into whichever files you will be making database queries from. This also gives a peek into how user-friendly Prisma is. The `prisma generate` command should also be rerun after schema changes so the client reflects them, since this client is how you'll interact with the data inside the database.
This will be put at the top of your file where you'll be making queries, with the rest of the import statements.
`server/index.ts`

```typescript
import { PrismaClient } from '@prisma/client';
const prisma = new PrismaClient();
```
Once you import your Prisma Client, you can now make queries with the `prisma` variable. The syntax for making queries is as follows:
```
prisma.<model>.<queryMethod>({})
```
According to Prisma's documentation, ["all Prisma Client queries return plain old JavaScript objects"](https://www.prisma.io/docs/orm/overview/introduction/what-is-prisma#accessing-your-database-with-prisma-client), but these queries can be made asynchronous with async/await or using Promise methods like `.then()` and `.catch()`.
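Since queries are just promise-returning calls, both styles can be sketched with a stand-in function; `findManyStub` below is a hypothetical placeholder for a call like `prisma.user.findMany()`, not part of the Prisma API, but like a real query it resolves to plain JavaScript objects:

```javascript
// Hypothetical stand-in for a Prisma Client query such as prisma.user.findMany().
// Like the real client, it resolves to plain JavaScript objects.
const findManyStub = () =>
  Promise.resolve([
    { id: 1, name: "Ada" },
    { id: 2, name: "Grace" },
  ]);

// async/await style
async function listNames() {
  const users = await findManyStub();
  console.log(users.map((u) => u.name).join(", "));
}
listNames();

// Promise style with .then()/.catch()
findManyStub()
  .then((users) => console.log(`${users.length} users`))
  .catch((err) => console.error(err));
```

With a generated client, the same shapes apply, for example `await prisma.user.findMany()` or `prisma.user.findMany().then(...)`.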
## Prisma Studio
Prisma Studio is a "graphical user interface for you to view and edit your database" in the browser. It's important to note that Prisma Studio is not open-source.
## Conclusion
Prisma ORM is a highly flexible and type-safe tool that makes interacting with your data in the database more transparent and streamlined for developers.
| allyn | |
1,882,056 | Exploring GitHub Copilot with Microsoft Visual Studio | Introduction Learn how to use GitHub Copilot in Microsoft Visual Studio 2022 with real... | 27,651 | 2024-06-09T20:07:55 | https://dev.to/karenpayneoregon/exploring-github-copilot-with-microsoft-visual-studio-4jfo | ai, productivity | ## Introduction
Learn how to use GitHub Copilot in Microsoft Visual Studio 2022 with real-world usage, from documenting code and improving code to explaining code and more. We will also talk about some features of Copilot in Microsoft Visual Studio Code.
:heavy_check_mark: If you do not have a Copilot subscription, [go here](https://github.com/settings/copilot) for a free trial.
### Ways to use GitHub Copilot
- Write pull request summaries (Copilot Enterprise feature only)
- Generate commit messages
- Fix code inline
- Generate documentation for your code
- Meaningful names matter (VS Code <kbd>F2</kbd>)
- Create unit test
- Assist with debugging code
- Explaining code
- Documenting code
The following table was copied from [this page](https://github.blog/2024-03-25-how-to-use-github-copilot-in-your-ide-tips-tricks-and-best-practices/) in raw format, then Copilot was asked to create a three-column markdown table :green_heart:
| Command | Description | Usage |
|---------|-------------|-------|
| /explain | Get code explanations | Open file with code or highlight code you want explained and type: /explain what is the fetchPrediction method? |
| /fix | Receive a proposed fix for the problems in the selected code | Highlight problematic code and type: /fix propose a fix for the problems in fetchAirports route |
| /tests | Generate unit tests for selected code | Open file with code or highlight code you want tests for and type: /tests |
| /help | Get help on using Copilot Chat | Type: /help what can you do? |
| /clear | Clear current conversation | Type: /clear |
| /doc | Add a documentation comment | Highlight code and type: /doc |
| /generate | Generate code to answer your question | Type: /generate code that validates a phone number |
| /optimize | Analyze and improve running time of the selected code | Highlight code and type: /optimize fetchPrediction method |
| /new | Scaffold code for a new workspace | Type: /new create a new django app |
| /simplify | Simplify the selected code | Highlight code and type: /simplify |
| /feedback | Provide feedback to the team | Type: /feedback |
## How GitHub Copilot handles data
As you explore adopting GitHub Copilot, you might wonder what data it collects, how that data is used, transferred, stored, and where its lifecycle ends. In this guide, we’ll describe GitHub Copilot’s data pipeline, and explain how your data is kept safe while being used to provide the most accurate code suggestions and responses possible.
Read [full article](https://resources.github.com/learn/pathways/copilot/essentials/how-github-copilot-handles-data/) at GitHub
## Documenting code
Developers here tend not to document code, which in some cases means that other developers may not understand the purpose of a class or method. Even when a method name is meaningful, the method should still be documented for several reasons: a code base may have help generated from its XML documentation, and documentation clarifies the usage and meaning of parameters and return types.
### Example 1
Here is a method which is easy to understand.
```csharp
public static class DateTimeExtensions
{
public static DateOnly FirstDateOfWeek(this DateTime sender, DayOfWeek startOfWeek = DayOfWeek.Sunday)
=> DateOnly.FromDateTime(sender.AddDays(-1 * (7 + (sender.DayOfWeek - startOfWeek)) % 7));
}
```
When using the above method this is what is shown with Intellisense.

Using /doc feature of GitHub Copilot
```csharp
public static class DateTimeExtensions
{
/// <summary>
/// Calculates the first date of the week based on a given start day of the week.
/// </summary>
/// <param name="sender">The DateTime object representing the current date.</param>
/// <param name="startOfWeek">The start day of the week (default is Sunday).</param>
/// <returns>The first date of the week.</returns>
public static DateOnly FirstDateOfWeek(this DateTime sender, DayOfWeek startOfWeek = DayOfWeek.Sunday)
=> DateOnly.FromDateTime(sender.AddDays(-1 * (7 + (sender.DayOfWeek - startOfWeek)) % 7));
}
```
Much better with proper documentation from Copilot.

### Example 2
A developer comes across the following method and has been asked to document it.
```csharp
public partial class Helpers
{
public static string NextValue(string sender, int incrementBy = 1)
{
string value = NumbersPattern().Match(sender).Value;
return sender[..^value.Length] + (long.Parse(value) + incrementBy)
.ToString().PadLeft(value.Length, '0');
}
[GeneratedRegex("[0-9]+$")]
private static partial Regex NumbersPattern();
}
```
To document the code.
1. Select the method
1. Press <kbd>ALT</kbd> + <kbd>/</kbd>
1. Type <kbd>/</kbd> which provides options, select **doc**
1. Press <kbd>ENTER</kbd>
Results
```csharp
/// <summary>
/// Generates the next value based on the given sender string and incrementBy value.
/// </summary>
/// <param name="sender">The sender string.</param>
/// <param name="incrementBy">The value to increment by (default is 1).</param>
/// <returns>The next value.</returns>
public static string NextValue(string sender, int incrementBy = 1)
{
string value = NumbersPattern().Match(sender).Value;
return sender[..^value.Length] + (long.Parse(value) + incrementBy)
.ToString().PadLeft(value.Length, '0');
}
```
Next, jumping ahead to explaining code, which will be covered later: a developer is curious how the above code works.
```csharp
string value = NumbersPattern().Match(sender).Value;
return sender[..^value.Length] + (long.Parse(value) + incrementBy)
.ToString().PadLeft(value.Length, '0');
```
1. Select the following body of code.
1. Press <kbd>ALT</kbd> + <kbd>/</kbd>
1. Press <kbd>/</kbd>, select **explain**
1. Press <kbd>ENTER</kbd>
The following is provided and at the bottom additional help.

The selected code is a part of the NextValue method in the Helpers class. Let's break it down step by step:
1. string value = NumbersPattern().Match(sender).Value;
• This line retrieves the numeric value from the sender string using a regular expression pattern defined in the NumbersPattern method.
• The NumbersPattern method is annotated with the [GeneratedRegex("[0-9]+$")] attribute, which indicates that it generates a regular expression pattern that matches one or more digits at the end of a string.
• The Match method is called on the result of NumbersPattern() to find the first occurrence of the pattern in the sender string.
• The Value property of the Match result is assigned to the value variable.
2. return sender[..^value.Length] + (long.Parse(value) + incrementBy).ToString().PadLeft(value.Length, '0');
- This line constructs and returns the next value based on the sender string and the incrementBy value.
- The sender[..^value.Length] expression is used to remove the numeric value from the end of the sender string. It uses the range operator .. to specify a range from the beginning of the string to the length of the sender string minus the length of the value string.
- (long.Parse(value) + incrementBy) parses the value string as a long integer and adds the incrementBy value to it.
- .ToString().PadLeft(value.Length, '0') converts the sum to a string and pads it with leading zeros to match the length of the original value string.
- The resulting string is concatenated with the modified sender string and returned as the next value.
In summary, the selected code extracts a numeric value from the sender string using a regular expression pattern, and then constructs the next value by incrementing the extracted value by the specified incrementBy value and padding it with leading zeros.
With both example code is document and in the last code has been broken down to understand code and in some cases assist if code needs modifications.
### Using Copilot to explain code
Above an example for C# was presented for explaining code, Copilot can also explain JavaScript.
A developer encounters the following code for a button click event, does not understand it, and would like an explanation.
```javascript
document.getElementById('getButton').addEventListener('click', function () {
const xhr = new XMLHttpRequest();
xhr.open("GET", "https://jsonplaceholder.typicode.com/users");
xhr.send();
xhr.responseType = "json";
xhr.onload = () => {
if (xhr.readyState === 4 && xhr.status === 200) {
const data = xhr.response;
document.getElementById("people-template").innerHTML = "";
console.table(data,
[
"id",
"name",
"email"
]);
$('#people-template').append(` ${data.reduce((updated, user) =>
updated.concat(`<li>${user.id} <strong>Name</strong> ${user.name.padEnd(35, '.')} ${user.email}</li>`), '')}`);
} else {
console.log(`Error: ${xhr.status}`);
}
};
});
```
Using `/explain` the following is provided.
The selected code is an event listener that triggers when the "Get" button is clicked. It performs an AJAX GET request to retrieve data from the "https://jsonplaceholder.typicode.com/users" URL. Let's break down the code step by step:
1. document.getElementById('getButton').addEventListener('click', function () { ... });: This line adds an event listener to the "Get" button. When the button is clicked, the function inside the event listener will be executed.
2. const xhr = new XMLHttpRequest();: This line creates a new XMLHttpRequest object. The XMLHttpRequest object is used to interact with servers and make HTTP requests.
3. xhr.open("GET", "https://jsonplaceholder.typicode.com/users");: This line initializes the request by specifying the HTTP method (GET) and the URL from which to retrieve data.
4. xhr.send();: This line sends the HTTP request to the server.
5. xhr.responseType = "json";: This line sets the expected response type to JSON. This tells the browser to automatically parse the response as JSON.
6. xhr.onload = () => { ... };: This line sets up a callback function to handle the response when it is received. The function will be executed when the request is complete and the response is available.
7. if (xhr.readyState === 4 && xhr.status === 200) { ... }: This line checks if the request is complete (readyState 4) and if the response status is 200 (indicating a successful request).
8. const data = xhr.response;: This line assigns the response data to the data variable.
9. document.getElementById("people-template").innerHTML = "";: This line clears the existing content of the "people-template" element.
10. console.table(data, ["id", "name", "email"]);: This line logs the data in a table format to the console, displaying only the "id", "name", and "email" properties.
11. $('#people-template').append( ... );: This line uses jQuery to append HTML content to the "people-template" element. It uses the data.reduce() method to iterate over the data array and generate a list item (<li>) for each user. The user's ID, name, and email are displayed in the list item.
12. console.log(Error: ${xhr.status});: If the request is not successful (status code other than 200), this line logs an error message to the console, including the status code.
In summary, when the "Get" button is clicked, the code sends an AJAX GET request to retrieve user data from a remote server. If the request is successful, the data is displayed in a table format and appended to the "people-template" element as a list. If there is an error, an error message is logged to the console.
What about CSS?
A great many developers in the world do not have a grasp of working with CSS. With that in mind, a developer wants to better understand the following rule.
```css
legend {
background: #0f506f;
border: solid 1px black;
-webkit-border-radius: 8px;
-moz-border-radius: 8px;
border-radius: 8px;
padding: 6px;
color: white;
}
```
Use `/explain` and unlike the prior examples done in Visual Studio 2022, this one is done in VS Code.
**Results from Copilot**

What about SQL?
A developer comes across the following and wants to understand the statement.
```sql
WITH CTE
AS (SELECT t2.id,
t2.TransDate AS [Date],
t2.Credit,
t2.Debit,
SUM(COALESCE(t1.credit, 0) - COALESCE(t1.debit, 0)) AS Balance
FROM dbo.Transactions t1
INNER JOIN dbo.Transactions t2
ON t1.TransDate <= t2.TransDate
GROUP BY t2.TransDate,
t2.Credit,
t2.Debit,
t2.id)
SELECT id, [Date], Credit, Debit
FROM CTE
ORDER BY CTE.[Date];
```
1. Select the statement
1. Press <kbd>ALT</kbd> + <kbd>/</kbd>
1. <kbd>ENTER</kbd>
1. `/explain`
1. Press <kbd>ENTER</kbd>
Copilot provides a detailed explanation along with a suggestion to modify the statement; in this case it is guessing, and the guess is wrong, but it may still open up an idea for the developer.

What about documenting the statement?
1. Select the statement
1. Press <kbd>ALT</kbd> + <kbd>/</kbd>
1. <kbd>ENTER</kbd>
1. `/doc`
1. Press <kbd>ENTER</kbd>
```sql
/*
The following query uses a common table expression (CTE) to calculate the balance of transactions over time.
It retrieves the transaction ID, date, credit amount, and debit amount from the Transactions table.
The CTE calculates the balance by summing the difference between the credit and debit amounts for each transaction.
The result is ordered by date in ascending order.
*/
WITH CTE
AS (SELECT t2.id,
t2.TransDate AS [Date],
t2.Credit,
t2.Debit,
SUM(COALESCE(t1.credit, 0) - COALESCE(t1.debit, 0)) AS Balance
FROM dbo.Transactions t1
INNER JOIN dbo.Transactions t2
ON t1.TransDate <= t2.TransDate
GROUP BY t2.TransDate,
t2.Credit,
t2.Debit,
t2.id)
SELECT id, [Date], Credit, Debit
FROM CTE
ORDER BY CTE.[Date];
```
## Ask Copilot to refactor your code
To start off, a general refactoring can be changing a method with a statement body to an expression-bodied member using the ReSharper extension for Visual Studio.
### Using statement body
```csharp
public string[] Items()
{
return new[] { ApplicationName, ApplicationVersion, ApplicationKey };
}
```
### Refactor to expression body
```csharp
public string[] Items() => new[] { ApplicationName, ApplicationVersion, ApplicationKey };
```
## Improve/refactor code with Copilot
Given the following method, used for decades to return a list of primary keys from a database table, a developer learns about [Dapper](https://github.com/DapperLib/Dapper) and wants to rewrite the code using Dapper.
```csharp
public List<int> CategoryIdentifiers()
{
var idList = new List<int>();
var selectStatement = "SELECT DISTINCT CategoryID FROM dbo.Categories ";
using (var cn = new SqlConnection() { ConnectionString = ConnectionString })
{
using (var cmd = new SqlCommand() { Connection = cn })
{
cmd.CommandText = selectStatement;
cn.Open();
var reader = cmd.ExecuteReader();
while (reader.Read())
{
idList.Add(reader.GetInt32(0));
}
}
}
return idList;
}
```
1. Select the above method
1. Press <kbd>ALT</kbd> + <kbd>/</kbd>
1. Enter `Refactor this method to use Dapper.`
1. <kbd>ENTER</kbd>
The following is produced.
```csharp
public List<int> CategoryIdentifiers()
{
var idList = new List<int>();
var selectStatement = "SELECT DISTINCT CategoryID FROM dbo.Categories";
using (var cn = new SqlConnection() { ConnectionString = ConnectionString })
{
cn.Open();
idList = cn.Query<int>(selectStatement).AsList();
}
return idList;
}
```
Copilot did a great job on the refactoring, and the result works for inexperienced developers, but an experienced developer can further modify the code.
Here the experienced developer understands that Dapper will open the connection for us, and this SQL statement does not really need to be formatted; but imagine a large, complex SQL statement with many joins, where formatting would make a difference.
```csharp
public List<int> CategoryIdentifiers()
{
using SqlConnection cn = new() { ConnectionString = ConnectionString };
return cn.Query<int>("""
SELECT DISTINCT
CategoryID
FROM
dbo.Categories
""").AsList();
}
```
Let's now ask Copilot to make the above code asynchronous.
Copilot produces the following yet appears to change the SQL from a raw string literal to a verbatim string.
```csharp
public async Task<List<int>> CategoryIdentifiersAsync()
{
await using SqlConnection cn = new() { ConnectionString = ConnectionString };
return (await cn.QueryAsync<int>(@"
SELECT DISTINCT
CategoryID
FROM
dbo.Categories
")).AsList();
}
```
ReSharper will provide a refactor recommendation and the developer accepts the recommendation.
```csharp
public async Task<List<int>> CategoryIdentifiersAsync()
{
await using SqlConnection cn = new() { ConnectionString = ConnectionString };
return (await cn.QueryAsync<int>("""
SELECT DISTINCT
CategoryID
FROM
dbo.Categories
""")).AsList();
}
```
### Ask Copilot to write a unit test
1. Select the above method
1. Press <kbd>ALT</kbd> + <kbd>/</kbd>
1. Enter `write a unit test`
1. <kbd>ENTER</kbd>
> **Note**
> We get a standard MS-Test but could specify which test framework to use.
```csharp
[TestClass]
public class DapperOperationsTests
{
[TestMethod]
public async Task CategoryIdentifiersAsync_ShouldReturnDistinctCategoryIDs()
{
// Arrange
var dapperOperations = new DapperOperations();
// Act
var result = await dapperOperations.CategoryIdentifiersAsync();
// Assert
Assert.IsNotNull(result);
CollectionAssert.AllItemsAreUnique(result);
}
}
```
### Ask Copilot to add error handling
Add error handling with SeriLog logging to this method
1. Select the above method
1. Press <kbd>ALT</kbd> + <kbd>/</kbd>
1. Enter `add error handling with SeriLog logging to this method`
1. <kbd>ENTER</kbd>
The results were not what the developer wanted.
```csharp
public async Task<List<int>> CategoryIdentifiersAsync()
{
try
{
await using SqlConnection cn = new() { ConnectionString = ConnectionString };
return (await cn.QueryAsync<int>(@"
SELECT DISTINCT
CategoryID
FROM
dbo.Categories
")).AsList();
}
catch (Exception ex)
{
Log.Error(ex, "An error occurred while executing CategoryIdentifiersAsync");
throw;
}
}
```
- We are back to a verbatim string; use ReSharper to change it back to a raw string literal.
```csharp
public async Task<List<int>> CategoryIdentifiersAsync()
{
try
{
await using SqlConnection cn = new() { ConnectionString = ConnectionString };
return (await cn.QueryAsync<int>("""
SELECT DISTINCT
CategoryID
FROM
dbo.Categories
""")).AsList();
}
catch (Exception ex)
{
Log.Error(ex, "An error occurred while executing CategoryIdentifiersAsync");
throw;
}
}
```
Let's now go on without Copilot, get rid of `throw`, and change the return type to a tuple, which can be [deconstructed](https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals/functional/deconstruct) by the caller.
```csharp
public async Task<(bool, List<int>)> CategoryIdentifiersAsync()
{
try
{
await using SqlConnection cn = new() { ConnectionString = ConnectionString };
return (true,(await cn.QueryAsync<int>("""
SELECT DISTINCT
CategoryID
FROM
dbo.Categories
""")).AsList());
}
catch (Exception ex)
{
Log.Error(ex, "An error occurred while executing CategoryIdentifiersAsync");
return (false, null);
}
}
```
### Unit test revision
Now that there has been a change to the return type, `ask Copilot to write a new unit test`.
```csharp
[TestClass]
public class DapperOperationsTests
{
[TestMethod]
public async Task CategoryIdentifiersAsync_ShouldReturnCategoryIDs()
{
// Arrange
DapperOperations dapperOperations = new DapperOperations();
// Act
(bool success, List<int> categoryIds) = await dapperOperations.CategoryIdentifiersAsync();
// Assert
Assert.IsTrue(success);
Assert.IsNotNull(categoryIds);
Assert.IsTrue(categoryIds.Count > 0);
}
}
```
### Unit test
In each example only one unit test was written; a developer should mirror each passing test with one or more tests for when things break.
## Asking Copilot to write new code
Copilot can write code. For instance, a developer needs a string extension method that splits a string on capital letters, for example FirstName to First Name. The developer finds the following extension method but thinks there may be a more efficient way that does not use regular expressions.
```csharp
public static class StringExtensions
{
private static readonly Regex CamelCaseRegex = new(@"([A-Z][a-z]+)");
/// <summary>
/// KarenPayne => Karen Payne
/// </summary>
[DebuggerStepThrough]
public static string SplitCamelCase(this string sender) =>
string.Join(" ", CamelCaseRegex.Matches(sender)
.Select(m => m.Value));
}
```
They ask several questions, but none of the results are sufficient; they then decide to ask for a version using `Span<char>`, and the following is provided.
```csharp
public static class StringExtensions
{
/// <summary>
/// Split text at each capital letter
/// </summary>
/// <param name="input">string to work on</param>
/// <returns>
/// <para>An empty string, if the input is null or empty.</para>
/// <para>Same as original if nothing affected</para>
/// <para>String split on each uppercase token</para>
/// <para>SSMS would become S S M S</para>
/// </returns>
[DebuggerStepThrough]
public static string SplitCamelCase(this string input)
{
if (string.IsNullOrEmpty(input))
{
return input;
}
Span<char> result = stackalloc char[input.Length * 2];
var resultIndex = 0;
for (var index = 0; index < input.Length; index++)
{
var currentChar = input[index];
if (index > 0 && char.IsUpper(currentChar))
{
result[resultIndex++] = ' ';
}
result[resultIndex++] = currentChar;
}
return result[..resultIndex].ToString();
}
}
```
> **Note**
> Of course, some developers favor shorter code, as in the first version written, yet shorter code is not always the best; if curious, run both through a benchmark such as BenchmarkDotNet.
## Interface and classes
The following is a very simple example for using AI to create interfaces and classes in C# using GitHub Copilot and JetBrains ReSharper.
> **Note**
> JetBrains ReSharper is a paid extension for Visual Studio. It is used here to move the generated code to proper folders and update the namespaces. Also, JetBrains has a paid AI service for which Karen has a ticket.
Here we want to create
- A generic interface
- Customer class to implement above interface
- Both of the above get created in Program.cs using `Copilot`.
:heavy_check_mark: The interface and class are then moved to the proper folders using the JetBrains ReSharper Visual Studio extension.
:heavy_check_mark: During the ReSharper action, Visual Studio kicks in and asks to update the namespace.
A console project is used for this experiment in Program.cs.
**Question 1**
Create a public crud interface named IBase under Program.cs
Copilot responds with the following.
```csharp
namespace InterfacesClassesExamples
{
public interface IBase<T> where T : class
{
T GetById(int id);
IEnumerable<T> GetAll();
void Add(T entity);
void Update(T entity);
void Delete(T entity);
}
internal partial class Program
{
static void Main(string[] args)
{
}
}
}
```
**Question 2**
Create a public class named Customer which implements IBase with the following properties, Id, FirstName, LastName, Email and BirthDate as DateOnly and implement INotifyPropertyChanged for each property.
Copilot responds with the following.
```csharp
namespace InterfacesClassesExamples
{
public interface IBase<T> where T : class
{
T GetById(int id);
IEnumerable<T> GetAll();
void Add(T entity);
void Update(T entity);
void Delete(T entity);
}
public class Customer : IBase<Customer>, INotifyPropertyChanged
{
private int _id;
private string _firstName;
private string _lastName;
private string _email;
private DateOnly _birthDate;
public int Id
{
get { return _id; }
set
{
if (_id != value)
{
_id = value;
OnPropertyChanged(nameof(Id));
}
}
}
public string FirstName
{
get { return _firstName; }
set
{
if (_firstName != value)
{
_firstName = value;
OnPropertyChanged(nameof(FirstName));
}
}
}
public string LastName
{
get { return _lastName; }
set
{
if (_lastName != value)
{
_lastName = value;
OnPropertyChanged(nameof(LastName));
}
}
}
public string Email
{
get { return _email; }
set
{
if (_email != value)
{
_email = value;
OnPropertyChanged(nameof(Email));
}
}
}
public DateOnly BirthDate
{
get { return _birthDate; }
set
{
if (_birthDate != value)
{
_birthDate = value;
OnPropertyChanged(nameof(BirthDate));
}
}
}
public Customer GetById(int id)
{
// Implementation of GetById method
throw new NotImplementedException();
}
public IEnumerable<Customer> GetAll()
{
// Implementation of GetAll method
throw new NotImplementedException();
}
public void Add(Customer entity)
{
// Implementation of Add method
throw new NotImplementedException();
}
public void Update(Customer entity)
{
// Implementation of Update method
throw new NotImplementedException();
}
public void Delete(Customer entity)
{
// Implementation of Delete method
throw new NotImplementedException();
}
public event PropertyChangedEventHandler PropertyChanged;
protected virtual void OnPropertyChanged(string propertyName)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
}
internal partial class Program
{
static void Main(string[] args)
{
}
}
}
```
Next steps: create Interfaces and Models folders, move IBase to the Interfaces folder and Customer to the Models folder, or copy IBase and Customer to the appropriate folders in another project. ReSharper can assist with moving IBase and Customer to their own files, and Visual Studio will assist with setting up proper namespacing.
Now suppose the above was done in the same console project but in a class outside of Program.cs the results vary.
In one case the following
```csharp
public class Customer : IBase<Customer>, INotifyPropertyChanged
```
is created as
```csharp
public class Customer : IBase<int>, INotifyPropertyChanged
```
In another test in a separate class the following question was posed to Copilot.
_Create a public crud generic interface named IBase with a constraint on T. create a public class named Customer which implements IBase with the following properties, Id, FirstName, LastName, Email and BirthDate as DateOnly and implement INotifyPropertyChanged for each property_
The response was missing the private fields for property change notification, IBase was not implemented, and there were duplicate property names.
```csharp
namespace InterfacesClassesExamples.Test
{
public interface IBase<T> where T : class
{
void Create(T entity);
T Read(int id);
void Update(T entity);
void Delete(int id);
}
public class Customer : IBase<Customer>, INotifyPropertyChanged
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public string Email { get; set; }
public DateOnly BirthDate { get; set; }
public event PropertyChangedEventHandler PropertyChanged;
protected virtual void OnPropertyChanged(string propertyName)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
// Implement INotifyPropertyChanged for each property
public string FirstName
{
get { return _firstName; }
set
{
if (_firstName != value)
{
_firstName = value;
OnPropertyChanged(nameof(FirstName));
}
}
}
// Implement INotifyPropertyChanged for each property
public string LastName
{
get { return _lastName; }
set
{
if (_lastName != value)
{
_lastName = value;
OnPropertyChanged(nameof(LastName));
}
}
}
// Implement INotifyPropertyChanged for each property
public string Email
{
get { return _email; }
set
{
if (_email != value)
{
_email = value;
OnPropertyChanged(nameof(Email));
}
}
}
// Implement INotifyPropertyChanged for each property
public DateOnly BirthDate
{
get { return _birthDate; }
set
{
if (_birthDate != value)
{
_birthDate = value;
OnPropertyChanged(nameof(BirthDate));
}
}
}
}
}
```
Next, still in the same class as above.
1. Create a public crud interface named IBase
1. Create a public class named Customer which implements IBase with the following properties, Id, FirstName, LastName, Email and BirthDate as DateOnly and implement INotifyPropertyChanged for each property.
The results came back with no property change notification code.
```csharp
namespace InterfacesClassesExamples.Test
{
public interface IBase<T> where T : class
{
T GetById(int id);
void Add(T entity);
void Update(T entity);
void Delete(T entity);
}
public class Customer : IBase<Customer>, INotifyPropertyChanged
{
public int Id { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public string Email { get; set; }
public DateOnly BirthDate { get; set; }
public event PropertyChangedEventHandler PropertyChanged;
protected virtual void OnPropertyChanged(string propertyName)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
public Customer GetById(int id)
{
// Implementation of GetById method
}
public void Add(Customer entity)
{
// Implementation of Add method
}
public void Update(Customer entity)
{
// Implementation of Update method
}
public void Delete(Customer entity)
{
// Implementation of Delete method
}
}
}
```
Next, the experiment was done in a new class project.
Copilot was asked the same questions as in the prior failures, and it failed again.
Instead of
```csharp
public class Customer : IBase<Customer>, INotifyPropertyChanged
```
Copilot produced.
```csharp
public class Customer : IBase, INotifyPropertyChanged
```
Then when ready to try again the following was suggested without property change notification.


Let's try the last question as follows.
_Create a public class named Customer which implements IBase with the following properties, Id, FirstName, LastName, Email and BirthDate as DateOnly and implement INotifyPropertyChanged for each property_
And we get the following. The next step is to move Customer to its own file.
```csharp
using System.ComponentModel;
namespace InterfacesClassesExamplesLibrary.Interfaces
{
public class Customer : IBase<Customer>, INotifyPropertyChanged
{
private int _id;
private string _firstName;
private string _lastName;
private string _email;
private DateOnly _birthDate;
public int Id
{
get { return _id; }
set
{
if (_id != value)
{
_id = value;
OnPropertyChanged(nameof(Id));
}
}
}
public string FirstName
{
get { return _firstName; }
set
{
if (_firstName != value)
{
_firstName = value;
OnPropertyChanged(nameof(FirstName));
}
}
}
public string LastName
{
get { return _lastName; }
set
{
if (_lastName != value)
{
_lastName = value;
OnPropertyChanged(nameof(LastName));
}
}
}
public string Email
{
get { return _email; }
set
{
if (_email != value)
{
_email = value;
OnPropertyChanged(nameof(Email));
}
}
}
public DateOnly BirthDate
{
get { return _birthDate; }
set
{
if (_birthDate != value)
{
_birthDate = value;
OnPropertyChanged(nameof(BirthDate));
}
}
}
public Customer GetById(int id)
{
// Implementation of GetById method
}
public void Insert(Customer entity)
{
// Implementation of Insert method
}
public void Update(Customer entity)
{
// Implementation of Update method
}
public void Delete(Customer entity)
{
// Implementation of Delete method
}
public event PropertyChangedEventHandler PropertyChanged;
protected virtual void OnPropertyChanged(string propertyName)
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
}
public interface IBase<T>
{
T GetById(int id);
void Insert(T entity);
void Update(T entity);
void Delete(T entity);
}
}
```
### Lessons learned for generating new code
- Where the question is asked determines how Copilot will respond, and how the question is asked also shapes Copilot’s response.
- Sometimes it is faster to just write the code by hand if the suggested solutions do not meet a developer’s requirements.

First, write out questions in a text file, then ask Copilot. If the response is not proper, go back, create a copy of the question in the text file, and rephrase it. Repeat until a good enough response is provided, or go old school and write the code yourself.
## Debugging
When GitHub Copilot is active and a runtime exception is raised, a developer can click Ask Copilot to get suggestions, which is what this project demonstrates. Once Copilot is asked, a window opens with recommendations and, in most cases, an explanation of why the exception was thrown.
The connection string is deliberately set up to point to a non-existent SQL-Server instance, and the default timeout has been lowered from the original 30 seconds down to two seconds.

> **Note**
> This project was created to show a simple example for SQL-Server computed columns and was put here to show the above.
## About (When the project has no issues)
Provides an interesting way to compute how old a person is in years.
Original code came from this Stackoverflow [post](https://stackoverflow.com/a/11942/5509738).
I took the code and created a simple Console project to demonstrate how to use it with a computed column in a SQL-Server database table using Dapper to read the data.
```sql
CREATE TABLE [dbo].[BirthDays](
[Id] [int] IDENTITY(1,1) NOT NULL,
[FirstName] [nvarchar](max) NULL,
[LastName] [nvarchar](max) NULL,
[BirthDate] [date] NULL,
[YearsOld] AS ((CONVERT([int],format(getdate(),'yyyyMMdd'))-CONVERT([int],format([BirthDate],'yyyyMMdd')))/(10000))
)
```
- Take birthdate and current date, format and convert to integer
- Subtract birthdate from current date
- Divide by 10,000 to get years old
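For readers who want to sanity-check the computed column outside of SQL-Server, the same arithmetic can be reproduced in a few lines of Python (a hedged sketch; the `years_old` helper is mine, not part of the project):

```python
from datetime import date

def years_old(birth_date: date, today: date) -> int:
    # Same trick as the SQL computed column: format both dates as
    # yyyyMMdd integers, subtract, then integer-divide by 10,000.
    return (int(today.strftime("%Y%m%d"))
            - int(birth_date.strftime("%Y%m%d"))) // 10_000

# Birthday on June 15 has not been reached by June 9, so still 33:
print(years_old(date(1990, 6, 15), date(2024, 6, 9)))  # -> 33
```

The integer division is what makes this work: when the birthday has not yet occurred this year, the month/day digits subtract to a value below 10,000, and the division simply drops it.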
## Source code
Do not expect the code provided to be stable as it was used for demonstrating what has been provided above.
{% cta https://github.com/karenpayneoregon/Generative-Artificial-Intelligence %} Source code {% endcta %}
## VS Code
Using GitHub Copilot works great here as well as Visual Studio.
The shortcut is <kbd>CTRL</kbd> + <kbd>I</kbd> rather than <kbd>ALT</kbd> + <kbd>/</kbd> in Visual Studio.
A great use is fixing code. In the screenshot below there are errors on each input because they are not associated with labels. Hover, and a menu appears; use the last menu item to see why this is an issue (a WCAG AA rule) and the second from the bottom to fix it.

Let's fix the issue with Copilot by associating each label above an input with the proper input and, for kicks, surrounding the labels and inputs with a fieldset.
Press <kbd>CTRL</kbd> + <kbd>I</kbd> and ask with
_wrap in fieldset and legend then associate inputs with labels_

## Copilot History
In VS Code, use the arrow keys to traverse history for the current session; Visual Studio does not have history available.
## Generate commit messages
Not sure what to write for a commit message? Click the pencil icon and Copilot will write the message for you.

## Unravel Your Commit History with GitHub Copilot
From Microsoft [here](https://learn.microsoft.com/en-us/visualstudio/releases/2022/release-notes#unravel-your-commit-history-with-github-copilot)
Git history can be daunting to shuffle through, but it's often the best way to learn about a code base or help identify the origin of a bug. We've added a GitHub Copilot powered explain feature to the Commit Details window to make it easier to understand the contents of each commit.
You'll need to have an active GitHub Copilot subscription and the GitHub Copilot Chat Extension installed. Double click on any commit to open the Commit Details pane in the Git Repository window. Then, click on the 'Explain Commit' sparkle pen icon to get a summary of the changes side by side with the code.
## Copilot options in Visual Studio
In the top right-hand corner of the IDE, click on GitHub Copilot, then Settings.

## Copilot options in Visual Studio Code
- From Settings, type copilot into the search input.
For commands, press <kbd>F1</kbd> and type copilot.

## Security
Use caution with what text is placed into your Copilot questions, as it may very well be seen by other users of Copilot. In one case, while writing documentation in a markdown file, Copilot suggested text that came from a closed system; the owners were informed that this happened. Imagine what might happen if customer information or internal server details were exposed and a bad actor got hold of them.
## Summary
Made it this far? Then, by applying what was written, the reader will have better expectations of Copilot. Copilot is a great addition for programmers at every level, but one should not rely solely on it; as seen above, not all responses will be correct. As Copilot matures, expect better responses and more features.
## Resources
- [What GitHub Copilot can do for your organization](https://resources.github.com/learn/pathways/copilot/essentials/what-github-copilot-can-do-for-your-organization/)
- [How to use GitHub Copilot: Prompts, tips, and use cases](https://github.blog/2023-06-20-how-to-write-better-prompts-for-github-copilot/)
- [GitHub Copilot Trust Center](https://resources.github.com/copilot-trust-center/)
- Microsoft learn: [Training modules](https://learn.microsoft.com/en-us/training/browse/?terms=github%20copilot)
- Research: [Quantifying GitHub Copilot’s impact on code quality](https://github.blog/2023-10-10-research-quantifying-github-copilots-impact-on-code-quality/)
- [Building Generative AI apps with .NET 8](https://devblogs.microsoft.com/dotnet/build-gen-ai-with-dotnet-8/)
- [AI for .NET developers](https://learn.microsoft.com/en-us/dotnet/ai/)
### Visual Studio specific
_Currently in preview_
- [GitHub Copilot is getting smarter](https://learn.microsoft.com/en-us/visualstudio/releases/2022/release-notes-preview#githubcopilotisgettingsmarter)
- [Naming things made easy](https://learn.microsoft.com/en-us/visualstudio/releases/2022/release-notes-preview#airenamesuggestions)
- [AI-generated breakpoint expressions](https://learn.microsoft.com/en-us/visualstudio/releases/2022/release-notes-preview#aigeneratedbreakpointscpp)
## Learning videos
{% embed https://www.youtube.com/watch?v=dhfTaSGYQ4o %} | karenpayneoregon |
1,881,903 | Python Basics 3: Operators | Operators are one of the most utilized elements of any programming language. We have to use operators... | 0 | 2024-06-09T20:06:14 | https://dev.to/coderanger08/python-basics-3-operators-2ee1 | python, programming, beginners, tutorial | Operators are one of the most utilized elements of any programming language. We have to use operators in Python for numerous cases. From mathematical calculations to building any project, operators are inevitable. As usual, operators have classifications.
**_Arithmetic Operators:_**
**1.Addition(+):** Add values on either side of the operator
```
x=2
y=5
print(x+y)
#output:7
```
**2.Subtraction(-):** Subtracts right hand operand from left operand
```
x=5
y=2
print(x-y)
>>> 3
```
**3.Multiplication(*):** Multiplies on either side of the operator
```
x=10
y=5
print(x*y)
>>> 50
```
**4.Division(/):** Divides left-hand operand by right-hand operand. The result of division is always a float value.
```
x= 20
y= 5
print(x/y)
>>> 4.0
```
**5.Floor Division(//):** It returns the floor value for both integer and floating-point arguments. Flooring effectively rounds a real number down to the nearest integer, so the result is the quotient with any fractional part discarded.
```
print(30//4)
>>> 7
```
**6.Modulus(%):** This operator is also known as the remainder operator. It returns the remainder of dividing the first number by the second number.
```
x=30
y= 4
print(30%4)
>>> 2
```
**7.Exponentiation(`**`):** It performs power calculations on its operands.
```
x=2
y= 3
print(x**y)
>>> 8
```
**round():** Rounds a number to the nearest possible integer, or to a given number of decimal places.
`round(number[, ndigits])`
- Takes a number as its first argument
- The square brackets mean `ndigits` is optional: it is the number of digits you want the round function to round to (e.g., round to 3 digits)
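A few quick examples (note that Python 3 rounds ties to the nearest even integer, so-called banker's rounding):

```python
print(round(3.14159))     # 3
print(round(3.14159, 3))  # 3.142
print(round(2.5))         # 2: the tie rounds to the even integer
print(round(3.5))         # 4
```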
**_Assignment Operator:_**
<u>1.=(Assign):</u> It assigns the value found in the right operand to the left operand.
`name= 123`
<u>2.+=(add and assign):</u> Adds the value found in the right operand to the value found in the left operand.
```
x = 5
x += 3  # same as x = x + 3
print(x)
>>> 8
```
<u>3.-=(subtract and assign):</u> Subtracts the value of the right operand from the value found in the left operand.
```
x = 10
x -= 3  # same as x = x - 3
print(x)
>>> 7
```
<u>4.*=(multiply and assign):</u> Multiplies the value of the right operand by the value of the left operand.
```
x = 5
x *= 3  # same as x = x * 3
print(x)
>>> 15
```
<u>5./=(Divide and assign):</u>
```
x = 25
x /= 5  # same as x = x / 5
print(x)
>>> 5.0
```
<u>6.%=(Modulus and assign):</u>
```
x = 30
x %= 4  # same as x = x % 4
print(x)
>>> 2
```
<u>7.**=(exponent and assign):</u>
```
x = 2
x **= 3  # same as x = x ** 3
print(x)
>>> 8
```
<u>8.//=(floor division and assign):</u>
```
x = 30
x //= 4  # same as x = x // 4
print(x)
>>> 7
```
**_Identity Operators:_**
Identity operators check whether two variables refer to the same object, not just whether their values are equal.
**i.is:** Returns True if both operands refer to the same object; otherwise, it returns False. (For small integers like the ones below, CPython caches the objects, which is why the comparison is True.)
```
x=10
y=10
print(x is y)
>>> True
```
**ii.is not:** Returns True if the two operands do not refer to the same object; otherwise, it returns False.
```
x=20
y=21
print(x is not y)
>>> True
```
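Note that `is` compares object identity rather than value equality; it only happens to return True above because CPython caches small integers. With mutable objects like lists, the difference is visible:

```python
a = [1, 2]
b = [1, 2]
print(a == b)  # True: same contents
print(a is b)  # False: two distinct list objects
print(a is a)  # True: the same object
```

In practice, use `==` to compare values and reserve `is` for identity checks such as `x is None`.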
**_Membership Operator:_**
**i.in:** Returns True if the first value is found in the second; otherwise, returns False.
```
x= ['apple', 'banana']
print('apple' in x)
>>> True
```
**ii.not in:** Returns True if the first value is not in the second. Otherwise, returns False.
```
x='Hello World'
print('h' not in x)
>>> True
```
**Logical Operators**
1.and
2.or
3.not
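A short illustration of all three:

```python
x = 5
print(x > 0 and x < 10)  # True: both conditions hold
print(x < 0 or x > 3)    # True: at least one condition holds
print(not x > 0)         # False: negation of a true condition
```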
**Comparison or Relational operators**
1.==(equal)
2.!=(not equal)
3.>(Greater than)
4.<(Less than)
5.>= (greater than or equal)
6.<=(Less than or equal)
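Each comparison operator returns a boolean:

```python
x = 5
y = 7
print(x == y)  # False
print(x != y)  # True
print(x < y)   # True
print(x > y)   # False
print(x >= 5)  # True
print(x <= 4)  # False
```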
| coderanger08 |
1,882,320 | Pixel perfect squarespace website | Pixel perfect Squarespace website. Links:... | 0 | 2024-06-09T20:03:03 | https://dev.to/deknows/pixel-perfect-squarespace-website-25hj | website, webdev, company, softwaredevelopment |

Pixel perfect Squarespace website.
Links: https://dribbble.com/shots/24322764-Pixel-Perfect-Squarespace-Website-Design
https://www.deknows.com/case-studies/squarespace/climatewrx-website-development-and-feature-enhancement
#website #design #uiux #html #css #javascript #wordpress #webdesign #squarespace #webflow | deknows |
1,882,319 | They Say Frontend is Easy | They say becoming a front-end dev is easy... Then, I have to show them the reality: ... | 0 | 2024-06-09T20:00:02 | https://dev.to/syedmuhammadaliraza/they-say-frontend-is-easy-agk | frontend, webdev, development, developers | ### They say becoming a front-end dev is easy...
Then, I have to show them the reality:
### Beginner Level
1. div
2. button
3. i
4. b
5. font-size: 16px
6. index.html
### Intermediate Level
7. script
8. img
9. form
10. iframe
11. $(function(){...});
### Advanced Level
12. useEffect
13. center a `<div>`
14. unit tests
15. responsive layouts
16. `<link preload/prefetch>`
17. linting
18. CSS cascade
19. float
### Expert Level
20. CORS
21. E2E tests
22. input validation
23. Hydration errors
24. font-size 14px safari zoom
25. filter views
26. @media print
27. img srcSet
28. pagination
29. NaN / undefined / null
### Master Level
30. infinite scroll
31. layout shift
32. calendar UI
33. cache busting
34. email CSS
35. CSS selector perf
36. event loop
37. OAuth2
38. user agents
39. WASM
40. this
### God Mode Difficulty
41. caching headers
42. "0 results found"
43. WebRTC
44. regex
45. SameSite cookie
46. closures
47. CJS / ESM
48. drag and drop UI
49. rewrite it in Rust
50. CSS full height on mobile
51. mobile dialogs
52. websockets
53. cookie banners
54. v8 stack traces
55. dates
56. localization
57. local first
58. rewrite it back in JavaScript
59. new Response(new ReadableStream(...))
60. microtask queue
61. typeof [1, 2, 3]; // 'object'
62. source maps
63. WYSIWYG editor
64. Samsung browser
65. 2023 IE support
66. time zones
### (Beyond God Mode)
67. T extends [any, ...any[]] ? (T extends [...any[], infer _] ? 0 : 1):1
## And if that's not enough here is more 😅.
1. Quantum Computing Integration
2. WebAssembly (Wasm) Optimization
3. Micro Frontends Architecture
4. WebGPU Programming
5. State Machines and Statecharts (XState)
6. Advanced Compiler Techniques
7. CRDTs (Conflict-Free Replicated Data Types)
8. Advanced Accessibility Techniques
9. Neural Network Integration in Frontend
10. Edge Computing with Service Workers
11. Blockchain-Based Web Applications
12. Advanced Static Site Generation (SSG)
13. Custom Rendering Pipelines
14. Virtual and Augmented Reality (VR/AR) Frontend Development
15. Multi-Threading in JavaScript
16. Custom Browser Engines
These topics push the boundaries of frontend development and require a deep understanding of both frontend and backend technologies.
Still easy? You tell me.
#frontend #dev #webdev #developer #software | syedmuhammadaliraza |
1,882,316 | Fixing the Draggable Element Background Issue in Chromium Browsers | When developing a chess game today, I stumbled upon a peculiar behavior in Chromium browsers while... | 0 | 2024-06-09T19:48:27 | https://www.jayganatra.com/blog/draggable-element-in-chromium-browsers | css, html, webdev, tutorial | When developing a chess game today, I stumbled upon a peculiar behavior in Chromium browsers while implementing draggable elements. If you've ever noticed that dragging an element causes it to inherit its parent’s background, you're not alone. This odd effect can be quite a nuisance, but fortunately, there are ways to fix it.
<p> </p>
### Understanding the Issue
The issue arises when a draggable element seems to take on the background of its parent element during the drag action. This can lead to unexpected and unwanted visual results, especially if the parent element's background is distinct or patterned.
To understand why this happens, let's delve into some technical insights:
- The HTML draggable attribute (draggable="true") seems to force the element to inherit the parent’s background.
- According to the HTML Living Standard, the drag data store default feedback is dependent on the user agent (browser). This means different browsers might handle draggable elements differently.
<br />
Here's a snippet from the HTML Living Standard that highlights this:
> "Update the drag data store default feedback as appropriate for the user agent. If the user is dragging the selection, then the selection would likely be the basis for this feedback; if the user is dragging an element, then that element's rendering would be used; if the drag began outside the user agent, then the platform conventions for determining the drag feedback should be used."
Because of this browser-dependent behavior, the default feedback during a drag action can vary, making it challenging to create a consistent user experience.
<p> </p>
### Fixing the Issue
Through some research and experimentation, I found two effective ways to fix this issue:
- Using position: relative and z-index: By setting the draggable element’s position to relative and applying a z-index, you can ensure it retains its own background.
```
[draggable] { position: relative; z-index: 1; }
```
- Using CSS Transforms: Applying a small transformation to the draggable element can also resolve the issue.
```
[draggable] { transform: translate(0,0); }
```
<p> </p>
### Why These Fixes Work
1. **position and z-index:** By setting the position to relative and giving it a z-index, you force the element to create a new stacking context. This prevents the draggable element from inheriting the parent’s background during the drag operation.
2. **CSS Transforms:** Using a small transformation disrupts the default rendering process enough to ensure that the draggable element maintains its own background. The translateX and translateY values can be minimal and should not visibly affect the element’s position.
<p> </p>
### Conclusion
Browser inconsistencies can be frustrating, especially when dealing with visual feedback during drag-and-drop operations. By understanding the underlying causes and applying these CSS fixes, you can ensure your draggable elements display correctly across different browsers.
Have you encountered any other weird browser behaviors? Share your experiences and solutions in the comments below!
<p> </p>
This post is also available on my portfolio blog, so be sure to visit there for more updates and insights.
Photo by <a href="https://unsplash.com/@redaquamedia?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Denny Müller</a> on <a href="https://unsplash.com/photos/logo-JySoEnr-eOg?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
| ganatrajay2000 |
1,882,315 | Documenting my pin collection with Segment Anything: Part 1 | As a hobby that spans across various cultures and ages, pin collecting allows enthusiasts like me to... | 27,656 | 2024-06-09T19:46:04 | https://blog.feregri.no/blog/documenting-my-pin-collection-with-segment-anything-part-1/ | imagesegmentation, python, objectdetection |
As a hobby that spans across various cultures and ages, [pin collecting](https://en.wikipedia.org/wiki/Pin_trading) allows enthusiasts like me to hold onto pieces of art, history, and personal milestones. Whether they're enamel pins from theme parks or vintage lapel pins, I believe each piece in a collection tells a unique story.
In this blog series, I'm excited to share my journey of documenting my extensive pin collection, which consists of gifts, purchases, and serendipitous finds from the streets.

## Version 1
With the help of ChatGPT I built a simple website that allows you to zoom into the whole canvas so that you can look at the pins in more detail, and while I liked the result ([you can view it here!](https://pins.feregri.no/v1/)), it is far from what I wanted to document my collection.
## My ideal collection display
My ideal solution is to create an interactive website where viewers can hover over each pin to see it highlighted, and click for a detailed view and additional information about the pin's background. Using the canvas image shown above, I embarked on a project to bring this vision to life, leveraging modern machine learning techniques.
## Enter Segment Anything
To extract the cutouts from the canvas I thought of using an image segmentation algorithm to extract the silhouettes of the pins. Now, the last time I tried to do something related object/edge detection, the model to go with was YOLO V2, with great surprise I discovered that advancements have led to YOLO V10!
However, intrigued by the capabilities of the latest models, I decided to experiment with Meta AI's Segment Anything Model (SAM), which was released with the promise of being a powerful image segmentation model, so that is what I tried.
## Installing SAM
I wanted to run everything locally, so I set out to install everything in my Mac M2, it was a bit tricky and involved a lot of trial and error, but here is what in the end worked for me:
### 1. Create a new Python environment
```bash
python -m venv .venv
```
The version I used to create the environment was `3.10.12`
### 2. Install `torch`
I found there is specific [Apple guidance](https://developer.apple.com/metal/pytorch/) on how to do this, given that on certain Macs it is possible to take advantage of the GPU:
```bash
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```
In the end, these are the versions I ended up having: `torch==2.3.1`, `torchvision==0.18.1` and `torchaudio==2.3.1`.
### 3. Fix NumPy
For some reason, I ran into an issue with `NumPy` and `torch`, [this StackOverflow answer](https://stackoverflow.com/questions/20518632/importerror-numpy-core-multiarray-failed-to-import/47433969#47433969) helped me solving it, by re-installing it with the following command:
```bash
pip install numpy -I
```
The final `numpy` version was `1.26.4`
### 4. Install Segment Anything
As far as I know, the only way to install the necessary code for SAM is through their GitHub repo:
```bash
pip install 'git+https://github.com/facebookresearch/segment-anything.git'
```
### 5. Download the SAM model
The models and the code for *Segment Anything* come separately, so to download a model to use:
```bash
wget -q https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth
```
There are different versions of models, but `sam_vit_b_01ec64` was my choice, mainly because as far as I know, it is the smallest.
### 6. Remaining tools
To develop and test the model, I used Jupyter, and to visualise the results of the image segmentation I used a package called `supervision`.
## Accessing the SAM model
In order to use the model, it is necessary to load it with Python. This is also where you configure where the model should run (GPU or CPU, for example). In the code below you will see me configuring the `vit_b` model; I also attempted to use MPS (Metal Performance Shaders), however I ran into an error and decided to run everything on the CPU:
```python
import torch
from segment_anything import sam_model_registry
# DEVICE = torch.device('mps' if torch.backends.mps.is_available() else 'cpu')
DEVICE = torch.device('cpu')
MODEL_TYPE = "vit_b"
CHECKPOINT_PATH='sam_vit_b_01ec64.pth'
sam = sam_model_registry[MODEL_TYPE](checkpoint=CHECKPOINT_PATH).to(device=DEVICE)
```
### Opening the image
```python
import cv2
IMAGE_PATH= 'pins@high.jpg'
image = cv2.imread(IMAGE_PATH)
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
```
The `cv2.cvtColor()` function converts the colour space of an image. In this case, it's converting the colour format from BGR to RGB. This is often done because while OpenCV uses BGR, most image applications and libraries use RGB. The converted image is stored in the variable `image_rgb`.
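As an aside, the BGR-to-RGB conversion is nothing more than reversing the channel axis; this pure-NumPy stand-in is illustrative only (`cv2.cvtColor` remains the idiomatic call):

```python
import numpy as np

bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255     # channel 0 holds blue in BGR order
rgb = bgr[..., ::-1]  # reverse the last axis: B,G,R -> R,G,B
print(rgb[0, 0])      # [  0   0 255]: blue is now the last channel
```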
## Generating Automated Masks
SAM has different methods of generating masks. The one I wanted to try initially is by far the easiest, because all you need to do is provide an image and have the model generate the masks for you; simply pass the `sam` variable (containing the model) to an instance of `SamAutomaticMaskGenerator`:
```python
from segment_anything import SamAutomaticMaskGenerator
mask_generator = SamAutomaticMaskGenerator(sam)
```
Then, generating the masks is as easy as calling the `generate` method of the automatic generator, passing in the RGB image:
```python
output_mask = mask_generator.generate(image_rgb)
```
SAM is indeed a very powerful model, much more powerful than what I need, at least out of the box. This is the result I get from running the entire image through SAM:

Upon running SAM, the results were not as expected. The model struggled to accurately detect all pins and sometimes misinterpreted parts of pins as separate entities.
I then decided to work on a smaller crop of the image; however, I got the same results:

If you are interested in how I managed to display the results, you can have a look at the function I wrote for this task:
```python
import numpy as np
import supervision as sv
def view_masks(source, masks):
""""
Display the source image, the segmented image and the binary mask
:param source: The source image in BGR format
:param masks: The result of the automatic mask generator call
"""
mask_annotator = sv.MaskAnnotator(color_lookup=sv.ColorLookup.INDEX)
detections = sv.Detections.from_sam(sam_result=masks)
dark = np.zeros_like(source)
annotated_image = mask_annotator.annotate(scene=source.copy(), detections=detections)
masked = mask_annotator.annotate(scene=dark, detections=detections)
sv.plot_images_grid(
images=[source, annotated_image, masked],
grid_size=(1, 3),
titles=['source image', 'segmented image', 'binary mask'])
```
## Conclusion
It may be possible to modify the behaviour of the `SamAutomaticMaskGenerator` via its arguments; however, when I modified some of these arguments I realised (admittedly without fully knowing what I was doing) that sometimes the kernel died on me. I suppose my laptop does not have enough memory to run some combinations.
While the initial attempts with SAM presented challenges, they provided valuable learning opportunities. In the next blog post, I will explore alternative methods and adjustments to enhance pin detection and achieve the interactivity I envision for my collection's display.
| feregri_no |
1,882,314 | Wie man django in Entwicklung Containerisiert - via docker compose | Kontext Des öfteren gibt es in der Entwicklung das Problem, dass es einen Fehler gibt, der... | 0 | 2024-06-09T19:45:51 | https://dev.to/rubenvoss/wie-man-django-in-entwicklung-containerisiert-via-docker-compose-3n0b | ## Kontext
Des öfteren gibt es in der Entwicklung das Problem, dass es einen Fehler gibt, der nur auf einem Gerät auftaucht.

Häufig liegt das daran, dass in den verschiedenen Entwicklungsumgebungen verschiedene Programmversionen und/oder Libraries installiert sind.
Entwickler A:
PostgreSQL 14.6
Python 3.7.11
Redis 7.4.2
Entwickler B:
PostgreSQL 14.6
Python 3.9.11
Redis 7.4.2
Wenn es jetzt einen Bug gibt, der in Python 3.7.11 Auftaucht, aber in Python 3.9.11 nicht - wird davon nur Entwickler A betroffen sein.
Normalerweise ist es jedoch so, dass es sehr lange dauert herauszufinden dass der Bug aus den Versionsunterschieden stammt. Es ist also nicht sofort ersichtlich, dass wegen der unterschiedlichen Versionen der Bug auftritt. Alle Versionen und Libraries bei allen Entwicklern immer gleich zu halten ist schwierig bis unmöglich / sinnlos.
Außerdem ist es oft auch so, dass immer wieder neue Entwickler technisch aufgestellt werden müssen. Das kann teilweise mehrere Tage Arbeit bedeuten, damit nur eine Person alles in einen 'funktionierenden' Zustand bekommt.
Hier kommt jetzt docker compose & Containerisierung ins Spiel. Um verschiedene Applikations- und Libraryversionen zu vermeiden, und ebenso den Technischen onboarding - Prozess zu vereinfachen ist hier einiges möglich. Mit etwas DevOps wissen kann man `docker-compose.yml` & `Dockerfile` Dateien aufsetzen und damit die Gesamte Applikation mit allen Versionen einstellen. Jetzt müssen nur noch die einzelnen Entwickler Docker auf ihrem PC/Mac haben, den Rest erledigt dann Docker.
## Dockerfile & .yml locations
It is best to place your `Dockerfile` and `docker-compose.yml` wherever it makes sense for your app. In my case, the `docker-compose.yml` lives at the repository level, and the Dockerfile at the Django project level. That way I can start my app directly when I enter the repository. It also means further parts of the application can be added at the repository level later, e.g. other web apps, databases, etc.
```
- meine_repository
  - docker-compose.yml
  - mein_django_projekt/
    - Dockerfile
    - manage.py
    - ...
    - venv/
  - .env
  - .gitignore
  - README.md
```
## Dockerfile
To start your Django app in a container, you need a Dockerfile. This is where you define the details of how your application behaves inside the container. Here is my Dockerfile for Django, with explanations:
```
# You specify your Python version with a colon and the full version number (two dots).
# This way you prevent possible problems with wrong versions
FROM python:3.11.9
# Three useful options for the Python runtime in the container.
# https://stackoverflow.com/questions/59812009/what-is-the-use-of-pythonunbuffered-in-docker-file
ENV PYTHONUNBUFFERED=1
# https://stackoverflow.com/questions/59732335/is-there-any-disadvantage-in-using-pythondontwritebytecode-in-docker
ENV PYTHONDONTWRITEBYTECODE=1
# https://stackoverflow.com/questions/45594707/what-is-pips-no-cache-dir-good-for
ENV PIP_NO_CACHE_DIR=1
# You can simply use /app as your 'main directory' inside the container.
# WORKDIR affects all subsequent Docker instructions, e.g.
# the COPY instruction on the next line. From here on, . means /app
WORKDIR /app
# This copies your requirements.txt to /app/requirements.txt
# For this, your requirements.txt must be located in your Django project directory
COPY requirements.txt .
# Now install the requirements.txt
RUN pip install -r requirements.txt
# Copy your entire Django project into the container
COPY . .
# Expose your port inside the Docker network
EXPOSE 8000
```
Now you can build & run your container image:
```
cd mein_django_projekt
docker build -t mein_container_image .
# Nothing should happen here
docker run mein_container_image
```
Why does nothing happen? Your container only runs very briefly because you have not yet specified the `python manage.py runserver` command! Let's do that now.
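As an aside, you could also bake a default command into the Dockerfile itself; a sketch of that alternative is below. Setting the command in `docker-compose.yml` instead (as we do next) keeps the image generic and makes the command easy to override.

```
# Alternative (not used here): a default command in the Dockerfile
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```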
## docker-compose.yml
We orchestrate your container from the `docker-compose.yml`. For now you only need this one service, but in the future you can add several more. You can structure your `docker-compose.yml` as follows:
```
services:
  meine_app:
    # Automatic restart in case your app crashes
    restart: always
    # The directory containing your Dockerfile
    build: ./mein_django_projekt
    # Expose port 8000 from inside the container to the outside
    ports:
      - "8000:8000"
    # A bind mount makes sure the code of your project inside
    # the container is updated just as it is on your disk
    volumes:
      - ./mein_django_projekt/:/app
    # Here you can specify your environment:
    env_file: .env
    # This command starts the server
    # In containers, start with 0.0.0.0!
    command: python manage.py runserver 0.0.0.0:8000
```
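The `env_file: .env` line assumes a `.env` file next to the `docker-compose.yml` (as in the directory layout above). A minimal sketch follows; the variable names are only examples, so use whatever your `settings.py` actually reads:

```
# .env, example values only
DEBUG=True
SECRET_KEY=change-me
DATABASE_URL=postgres://user:pass@db:5432/mydb
```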
Now you can start your app!
With `docker compose up` your container is built & started with the runserver command. The command must be run from the same directory as the `docker-compose.yml` file.
You can now reach your app at `localhost:8000`!
Happy coding!
Yours, Ruben
[My blog](rubenvoss.de) | rubenvoss |
1,882,313 | Hi, I'm Justin Goncalves | Hello All I'm Justin I am a 16yo Self Learning Developer from the United States I mostly... | 0 | 2024-06-09T19:40:42 | https://dev.to/justingonca/hi-im-justin-goncalves-3bc8 | ## Hello All
I'm Justin
I am a 16yo Self Learning Developer from the United States
I mostly post every week on Mondays, Wednesdays, and Fridays, as well as Saturdays and Sundays.
Well that's everything about me,
See you later,
Bye | justingonca | |
1,880,939 | Two Flags, One Country, Same Message | *Note: _This is my submission for this month's CSS Art challenge on Dev.to * Table of... | 27,663 | 2024-06-09T19:39:40 | https://dev.to/cbid2/two-flags-one-country-same-message-6d7 | frontendchallenge, css, devchallenge | **Note: _This is my submission for [this month's CSS Art challenge on Dev.to](https://dev.to/challenges/frontend-2024-05-29) **
**Table of content:**
- [What I built](#what-i-built)
- [Demo](#demo)
- [Struggles](#struggles)
- [My overall feelings about this challenge](#my-overall-feelings-about-this-challenge)
- [Footnotes](#footnotes)
<!-- headings -->
<a id="what-i-built"></a>
## What I built
When I think of June, Caribbean Heritage Month[^1] comes to mind. I noticed that Haiti is often underrepresented during this time, so for this CSS art challenge, I decided to draw two versions of the Haitian flag and describe its history to educate people about the country.
<a id="demo"></a>
## Demo
[](https://haitian-flag.vercel.app/)
View the source code here:
{% github https://github.com/CBID2/Haitian-flag %}
<a id="struggles"></a>
## Struggles
A major challenge I faced was working on the 1964 version of the Haitian flag. First, I duplicated the HTML code that I wrote for the present version:
```html
<div class="flag-container">
<div class="flag-color-one"></div>
<div class="coat-of-arms"></div>
<div class="flag-color-two"></div>
</div>
<p class="flag-slogan"> L'Union fait la force! </p>
```
Then, I tried giving the CSS a unique touch by only adding code for the coat of arms:
```css
/* code for first Haitian Flag */
.1964-flag-color-one {
background-color: black;
flex: 1;
}
```
Unfortunately the design didn't change:
{% codepen https://codepen.io/CB_ID2/pen/QWRgzgM %}
I was on the verge of giving up, but then I reminded myself: "Would Haiti have gained its reputation as the first Black republic if figures like Jean-Jacques Dessalines[^2] had given up?". Feeling a new surge of confidence, I started brainstorming more ideas. Then, a question popped into my head: "Why not try wrapping the code for both flags in their own container?". At first, I used the `<div>` element, but to make the code more accessible, I switched to the `<section>` element:
```html
<!-- Previous flag (1964) -->
<section class="flag-container flag-1964">
<section class="flag-1964-color-one"></section>
<section class="coat-of-arms-1964"></section>
<section class="flag-1964-color-two"></section>
</section>
```
Then, I placed its CSS in its own section:
```css
/* Code for 1964 flag */
.flag-1964 .flag-1964-color-one {
background-color: black;
flex: 1;
}
.flag-1964 .flag-1964-color-two {
background-color: #ce1126;
flex: 1;
}
```
After that, the design worked! :)

<figcaption> You tell them Quan! :) </figcaption>[^3]
<a id="my-overall-feelings-about-this-challenge"></a>
### My overall feelings about this challenge
This challenge was fun! :) I not only worked on my CSS skills, but also learned more about Haiti. If you want to see more of my coding adventures, follow me on Dev.to and check my other links below:
### Me
{% user cbid2 %}
{% cta https://twitter.com/CodesChrissy %} 🐦 Follow me on X(Twitter) {% endcta %}
{% cta https://chrissycodes.hashnode.dev %} 📝 Check out my other content on Hashnode {% endcta %}
{% cta https://www.linkedin.com/in/christinebelzie %} 🫱🏾🫲🏻 Connect with me on Linkedin {% endcta %}
<a id="footnotes"></a>
## Footnotes
[^1]: In the U.S., Caribbean-Americans celebrate and educate others about their culture, and highlight the impact the island had on the country. If you want to learn more about this holiday, check out [their website](https://caribbeanamericanmonth.com/).
[^2]: Jean-Jacques Dessalines was the first emperor of Haiti and the leader in the Haitian Revolution. To learn more about him, check out [his biography](https://www.britannica.com/biography/Jean-Jacques-Dessalines)
[^3]: This is a GIF called dynasty warriors success by [wifflegif](https://giphy.com/gifs/reaction-win-success-GS1VR900wmhJ6)
| cbid2 |
1,882,312 | Buy verified cash app account | .https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-09T19:39:26 | https://dev.to/futhalsope322/buy-verified-cash-app-account-4lcp | webdev, javascript, beginners, python | .https://dmhelpshop.com/product/buy-verified-cash-app-account/

Buy verified cash app account
Cash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.
Our commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.
Why dmhelpshop is the best place to buy USA cash app accounts?
It’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.
Clearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.
Our account verification process includes the submission of the following documents: [List of specific documents required for verification].
Genuine and activated email verified
Registered phone number (USA)
Selfie verified
SSN (social security number) verified
Driving license
BTC enable or not enable (BTC enable best)
100% replacement guaranteed
100% customer satisfaction
When it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.
Clearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.
Additionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.
How to use the Cash Card to make purchases?
To activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. How To Buy Verified Cash App Accounts.
After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.
Why we suggest to unchanged the Cash App account username?
To activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.
Alternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.
Selecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.
Buy verified cash app accounts quickly and easily for all your financial needs.
As the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. How To Buy Verified Cash App Accounts.
For entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.
When it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.
This article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.
Is it safe to buy Cash App Verified Accounts?
Cash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.
Unfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. How To Buy Verified Cash App Accounts.
Cash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.
Leveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.
Why you need to buy verified Cash App accounts personal or business?
The Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.
To address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.
If you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.
Improper payment practices can lead to potential issues with your employees, as they could report you to the government. However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.
A Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.
This accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.
How to verify Cash App accounts
To ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.
As part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. Buy verified cash app account.
How cash used for international transaction?
Experience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.
No matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.
Understanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.
As we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.
Offers and advantage to buy cash app accounts cheap?
With Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.
We deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.
Enhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.
Trustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.
How Customizable are the Payment Options on Cash App for Businesses?
Discover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.
Explore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.
Discover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.
Where To Buy Verified Cash App Accounts
When considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.
Equally important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.
The Importance Of Verified Cash App Accounts
In today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.
By acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.
When considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.
Equally important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.
Conclusion
Enhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.
Choose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.
Contact Us / 24 Hours Reply
Telegram:dmhelpshop
WhatsApp: +1 (980) 277-2786
Skype:dmhelpshop
Email:dmhelpshop@gmail.com
| futhalsope322 |
1,882,123 | Frontend Challenge: CSS is a Beach | This is a submission for [Frontend Challenge... | 0 | 2024-06-09T19:37:00 | https://dev.to/kelseyrh/frontend-challenge-css-is-a-beach-5911 | devchallenge, frontendchallenge, css, javascript |

_This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup: Beaches_
## What I Built
This list of best beaches is displayed ON a beach. As the page is scrolled the ocean washes away the previous item on the list and the next one appears as the ocean recedes.
_NOTE: this uses scroll-based animations that are currently only available in Chrome. If viewed on a browser that does not support the animations the page will display with a simple scrolling list, the same way it will display for users who have enabled the reduced motion accessibility option on their browser._
_NOTE 2: not optimized for mobile view._
## Demo
{% codepen https://codepen.io/KelseyHale/pen/KKLvzjO %}
## Journey
As soon as I saw that the challenge was to display a list, I knew I wanted to try something I'd been very curious about lately: scroll based animations. I immediately knew I wanted the ocean to wash away each list item as the page is scrolled, but it took a considerable amount of experimentation and research into scroll based animations to make it happen.
First, I examined an example [codepen](https://codepen.io/giana/pen/BabdgjB) I'd been intrigued by a few weeks ago by CodePen user [Giana](https://codepen.io/giana). I was impressed how the scrolling itself was basically invisible, and the transitions stole the show. I learned a lot from this example about stacking elements to allow them to shift seamlessly without a scroll. I used this idea to make the words appear and disappear under the ocean. Certainly easier ways of making this happen, but I wanted to explore CSS only solutions and expand my understanding of scroll animations.
Though I originally intended to make the text of the beach list look like it was "written in the sand", instead I ended up adding a color animation to that text to make it easier to detect the transition from one item on the list to the next. The color of the text shifts from green to blue over the course of scrolling the list.
Next I added the wave. Once the stacked elements were set up to allow the text fade, the wave was surprisingly easy to implement by animating a pseudo element on each `<li>` element. I used a clip path to make the top of the element look wave-like. Even though this part ended up being much less complicated than the main scroll animation, I am most proud of it. It is the thing I first envisioned when I began this challenge, and the fact that it works is amazing. Even though I would probably never use something like this in a production project, I really enjoyed playing with it, and it opened up so many possibilities of what scroll animations could do.
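As a rough illustration of that idea (the selectors, image, and polygon points here are made up, not the actual code from the pen), a pseudo element with a jagged `clip-path` along its top edge is enough to read as a wave:

```css
/* Illustrative sketch only, not the real selectors or values from the pen */
li::before {
  content: "";
  position: absolute;
  inset: 0;
  background: url("wave.jpg") center / cover;  /* placeholder image */
  /* a jagged top edge so the element reads as a breaking wave */
  clip-path: polygon(0 18%, 20% 8%, 45% 20%, 70% 6%, 100% 16%,
                     100% 100%, 0 100%);
}
```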
Once those two main elements were working I decided to add a beachy background image and an image to the pseudo element to actually look like a wave instead of using gradients. I love the way the images from Unsplash add an element of reality, while still being whimsical.
Finally, I knew I had to add some safeguards for browser support and accessibility concerns. I added all of the animations to `@supports` and `@media` queries to make sure that the page does not break if a user is using a browser that doesn't support scroll animations or if the user has selected the browser option to reduce motion.
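The guard pattern described above can be sketched like this (the class and keyframe names are invented for illustration):

```css
/* Run the scroll-driven animation only where the browser supports it
   and the user has not requested reduced motion */
@supports (animation-timeline: view()) {
  @media (prefers-reduced-motion: no-preference) {
    .beach-list li {
      animation: wash linear both;
      animation-timeline: view();
    }
  }
}
```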
## Improvements
- Because this way of scrolling the list uses full view height elements, I ended up having to adapt the top elements to be fixed position so that they would be visible. To improve the design I would like to explore making these visible initially and then scroll out of view, leaving the beach list to scroll and animate. This is still a challenge due to needing to have full view height list elements and also allowing scroll-snapping on the html element which ends up pushing other elements off the top of the page.
- A more responsive design would be nice as right now the background image either looks really great or really odd at various sizes of viewport.
- An indicator at the bottom of the list would be nice to have as well as the option to scroll to top.
- There are some accessibility considerations implemented for reduced motion, but there are always more to consider, especially making sure keyboard navigation works and possibly disables the animation.
1,882,310 | HIRE THE MOST POPULAR BITCOIN RECOVERY EXPERT ADWARE RECOVERY SPECIALIST | Website info: www.adwarerecoveryspecialist.expert The wild west of cryptocurrency investing, where... | 0 | 2024-06-09T19:35:45 | https://dev.to/cynthia_creech_efb48d686d/hire-the-most-popular-bitcoin-recovery-expert-adware-recovery-specialist-1jpl | Website info: www.adwarerecoveryspecialist.expert
In the wild west of cryptocurrency investing, where promises of quick riches often collide with the harsh reality of scams and swindlers, finding a reliable partner is akin to discovering a rare gem amidst a sea of rubble. My foray into this world began with bright hopes and fervent aspirations, only to be ensnared by the deceitful tendrils of a fraudulent platform. WhatsApp info: +1 (571) 541‑2918
However, my salvation came in the form of ADWARE RECOVERY SPECIALIST, a beacon of integrity and expertise in an otherwise murky landscape.My journey commenced with the enticing prospect of exponential gains, fueled by the persuasive words of a trusted friend and referrer. Blinded by the allure of quick riches, I heeded their counsel without conducting the due diligence that such ventures demand. Thus, I found myself entrusting a substantial sum of $220,000 in USDT to a platform whose promises of prosperity soon proved to be a mirage. Email info: Adwarerecoveryspecialist@auctioneer.net As time wore on, withdrawals became an exercise in futility, and my investments lay shackled within the confines of the platform. Faced with mounting doubts and fears, my friend and I resolved to uncover the truth behind our predicament. It was in this moment of desperation that we stumbled upon ADWARE RECOVERY SPECIALIST, a name whispered in hushed tones among those who had faced similar trials.
With trepidation mingled with hope, we reached out to ADWARE RECOVERY SPECIALIST, placing our faith in their ability to navigate the complexities of the crypto landscape. From the outset, their professionalism and determination set them apart, instilling in us a glimmer of optimism amidst the shadows of uncertainty.Armed with meticulous attention to detail and a tenacious spirit, the team at ADWARE RECOVERY SPECIALIST embarked on a mission to reclaim what was rightfully ours. Their expertise and dedication became evident as they navigated the intricate web of transactions and communications with the platform's finance team.In the span of a few days, our prayers were answered as ADWARE RECOVERY SPECIALIST orchestrated the retrieval of our initial deposits, restoring them to our wallets with a sense of triumph that words cannot adequately express. Beyond mere financial restitution, they restored our faith in the principles of integrity and justice that should underpin every facet of the crypto community.
What distinguishes ADWARE RECOVERY SPECIALIST is not merely their technical proficiency, but their unwavering commitment to their clients' well-being. Telegram info: @adwarerecoveryspecialist Throughout the ordeal, they remained steadfast allies, offering guidance and reassurance when doubts threatened to overwhelm us.
For anyone traversing the perilous terrain of crypto investments, I wholeheartedly endorse ADWARE RECOVERY SPECIALIST as a beacon of hope and resilience. In a landscape fraught with peril, they stand as guardians of integrity and champions of justice, ready to assist those in need with unwavering dedication and expertise. Trust in ADWARE RECOVERY SPECIALIST, and let them illuminate your path to redemption. | cynthia_creech_efb48d686d | |
1,882,280 | Building a command line photo tagger using Docker, .Net and ExifTool | In this post, I will cover what photo Geotagging is, my use of the concept over time and a brief... | 0 | 2024-06-09T19:33:38 | https://dev.to/syamaner/building-a-command-line-photo-tagger-using-docker-net-and-exiftool-1gc4 | geotagging, docker, dotnet, gps | In this post, I will cover what photo Geotagging is, my use of the concept over time and a brief discussion of a basic tool I put together for the travels over the past few weeks.
## Part 1 - Geotagging
Geotagging (adding location information as metadata to images) is quite fun, allowing us not only to revisit memories but also to see where they happened and even explore those places further in the future. It has been a transparent feature of mobile phones and devices, and a concept that has been ubiquitous for some time.
Over the years, I have used various cameras: some allowed native geotagging via accessories that link to Bluetooth GPS receivers, while others lacked such a feature, which meant using standalone GPS loggers with enough storage and battery life to be placed in a backpack or even on the camera strap. Depending on the approach, the workflow and pros/cons can be very different.
### Using mobile phones / devices with built in support
- Convenient / easy to use as it works out of the box.
- Improved accuracy, as the operating system can use appropriate algorithms to reject outliers and provide the best estimate of a position at a given time.
- Image quality of mobile phones has improved a lot but is still not comparable to a purpose-built camera.
### Using cameras with native GPS support
- Provide a seamless experience with potential impact on battery
- Photos tagged out of the box
- Able to use preferred systems / lens combinations and so on
- It is a niche market and there are still not many options (for example, a small mirrorless that also has built-in GPS or a connection port for external accessories)
- Depending on the setup, if the device was off, time to first fix can be slow, ending up with some images without GPS tags.
- I have used this approach for a few years around early 2010s utilising a receiver on the camera and an external GPS accessory as outlined in the following blog post:
- [Unleashed GPS Bluetooth Geotagging Solution for Nikon DSLRs - Terry White](https://terrywhite.com/unleashed-gps-bluetooth-geotagging-solution-for-nikon-dslrs/)
- [Unleashed D90 Instruction Manual](https://www.foolography.com/wp-content/uploads/instructions-en-d90.pdf)
### Using any camera and an offline GPS logger
- Specific hardware or even a mobile phone can be used as a GPS logger.
- During post-processing, images from the camera are matched against entries from the logs, and metadata is added to the images where there is a good match.
- Still need to be aware of battery implications on the loggers.
- GPS data can be noisy, and you might need to deal with outliers.
- Plenty of open-source applications allow tagging images using these logs.
- This has been my go-to approach overall, used successfully with cameras ranging from bridge format to mirrorless and pocket cameras.
- After a few years of hiatus, I am back to this approach, and it is the focus of this post.
[ExifTool by Phil Harvey](https://exiftool.org "ExifTool") is an open source, cross-platform tool to manage [exif data](https://www.canon.co.uk/pro/infobank/all-about-exif/) on images. Given that we do not want to corrupt our images when manipulating metadata, using a tried and tested tool means peace of mind, in contrast to building one ourselves or relying on obscure packages for this functionality.
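As an illustration of the kind of commands this project wraps, geotagging from a track log and stripping the tags again look roughly like this (paths are placeholders; see the ExifTool geotagging documentation for the full matching options):

```bash
# Write GPS tags by matching each photo's timestamp against track points in the log.
exiftool -geotag track.gpx /path/to/photos

# Strip all GPS metadata again, e.g. before sharing images publicly.
exiftool -gps:all= /path/to/photos
```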
### Mobile app companions
It is also possible to use companion apps for the cameras and tag photos that way. The ones I have come across have not been productive for my style, so I will not go into details. Here is some information about the [FujiFilm XApp](https://www.cined.com/fujifilm-xapp-review-finally-a-good-camera-companion-app/).
### Current workflow
My current workflow involves the following:
- [i-GotU GT600](https://www.filesaveas.com/igotu_gt600.html)
- This is the item on the left on the image below next to AAA battery for comparison.
- A pocket camera (Sony RX100 VII)
- A mobile phone to manage the logger (change settings, export logs, clear space)
- Google Drive to share the logs with my laptop
- Adobe Lightroom CC
- And the command line tool subject of this post
- I used to have an older version of the i-GotU, and the supplied desktop application also did a great job at geotagging photos as part of my previous workflows.

The two on the right are the ones I used around the late 2000s and early 2010s, and the one on the left is my current setup.
## Part 2 - Combining System.CommandLine, Docker and ExifTool for adding GPS tags to images in bulk
### System.CommandLine
[System.CommandLine](https://learn.microsoft.com/en-us/dotnet/standard/commandline/) is a handy library for building command line applications that can be packaged as usual CLI tools, or even as dotnet tools that can be installed and used like any other dotnet tool.
It is still in preview and the API can have breaking changes. Although parsing command line arguments is not a challenge and has been possible regardless of the framework used, the benefit of this approach is following certain conventions when building our application, letting us focus on the functionality instead of boilerplate / plumbing code.
For instance, with only the code below, we define the command name, what the arguments are, and whether they have defaults; at runtime, if this command is triggered, all the arguments are passed in without further effort from us. Using a simple Dependency Injection (DI) approach also allows registering these as hierarchical commands, where we can almost define our own DSL for interacting with the CLI. In our case this is: `photo-tool metadata gps add --arguments`. Given this is a photo tool, we could then add a new command that, say, resizes the images and outputs to another directory, and the execution could look like: `photo-tool manipulate scale --scale-factor xyzzy --source-directory abc --out-directory def`. For those familiar with Kubernetes, this is similar to how the `kubectl` CLI tool is organised. This idea originated from my ex-colleague Dom, so credit goes to him.
So in our case, the handler for removing tags from images looks like below:
```c#
public class RemoveGpsTagCommand : MetadataCommand
{
    private readonly IPhotoTagger _photoTagger;
    private readonly ILogger<RemoveGpsTagCommand> _logger;

    public RemoveGpsTagCommand(IPhotoTagger photoTagger, ILogger<RemoveGpsTagCommand> logger)
        : base("remove", "Removes location metadata from the images.")
    {
        _photoTagger = photoTagger;
        _logger = logger;

        // example usage: dotnet PhotoTool.Cli.dll metadata gps remove --image-directory "/where-images-are"
        AddOption(new Option<string>("--image-directory", "Location for the images to clean up."));

        // The default naming convention is to convert the option name to camel case and remove the leading dashes.
        // example: --image-directory -> imageDirectory
        Handler = CommandHandler.Create(async (string imageDirectory) =>
        {
            await TagImages(imageDirectory);
        });
    }

    private async Task<int> TagImages(string imageDirectory)
    {
        ....
    }
}
```
### Why Docker?
As discussed, a CLI tool can be used natively in a few ways:
- As an executable
- As a custom dotnet tool
- running from source with dotnet run
- Or with Docker
Given that in essence this is just a wrapper around ExifTool, the CLI only works if ExifTool is installed. So we can either ensure the tool is installed for our operating system / distribution, or we can build an image that contains our application as well as ExifTool and runs on any operating system / processor architecture such as arm64, amd64, arm v7, v8 and so on.
And this is the approach adopted here. The tool can be run using docker and mounting relevant directories as below:
```bash
# To add tags
docker run -it --rm -v /local-photo-directory:/app/import -v /local-logs-directory:/app/gps syamaner/photo-tool:1 metadata gps add --image-directory "/app/import" --gps-log-directory "/app/gps" --max-matching-seconds 200
# To remove tags
docker run -it --rm -v /local-photo-directory:/app/import syamaner/photo-tool:1 metadata gps remove --image-directory "/app/import"
```
The current version is linux/arm64 only, but the next step will be building with multi-architecture support.
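For the multi-architecture builds, Docker's Buildx can produce such images; a rough sketch (the builder name and platform list here are assumptions, not the project's actual build script):

```bash
# Create (once) and select a builder that can target multiple platforms.
docker buildx create --name multiarch --use

# Build for both amd64 and arm64 and push the multi-arch manifest in one go.
docker buildx build --platform linux/amd64,linux/arm64 \
  -t syamaner/photo-tool:1 --push .
```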
## What does it do?
As seen below in the screen capture from Lightroom, once we tag and import, the location of the photo can be seen on the map. Online tools such as Flickr also support this natively.

Similarly [Flickr map view](https://www.flickr.com/map/?fLat=25.03245&fLon=121.518544&zl=13&everyone_nearby=1&photo=5420786045) also displays ours and other public images in a given location.
## Next Steps
- Currently only the CSV format exported from the i-GotU device is supported.
- I had [GPX](https://wiki.openstreetmap.org/wiki/GPX) support as well but removed it, so next steps will involve:
  - Making CSV parsing configurable so mappings can be changed at runtime
  - Bringing back GPX support so that this more widely supported format is covered
- I have only tested with Sony .arw raw files and will add tests for other raw / compressed formats
- GPS logs are noisy, so depending on the use case (moving vehicle, walking around, mostly stationary), we might use different algorithms to eliminate outliers and extrapolate positions, so this will be a future topic.
  - This is something I implemented back at university around 2004: [Determining the locations visited by GPS users: a clustering approach](https://salford-repository.worktribe.com/output/1442237/determining-the-locations-visited-by-gps-users-a-clustering-approach)
  - The image below depicts the issue: even in seemingly ideal circumstances, there can be several outliers over a period of time without moving much.

## Links
- [Source Code](https://github.com/syamaner/photo-tool)
- [DockerHub](https://hub.docker.com/repository/docker/syamaner/photo-tool/general)
- [ExifTool by Phil Harvey](https://exiftool.org)
- [A nice article demonstrating a simple approach to System.Commandline Dependency Injection](https://endjin.com/blog/2020/09/simple-pattern-for-using-system-commandline-with-dependency-injection)
| syamaner |
1,882,301 | Adding Interactivity to JavaScript Importance of interactivity in web development | Importance of Interactivity in Web Development Interactivity in web development is just... | 0 | 2024-06-09T19:20:36 | https://dev.to/ellaokah/adding-interactivity-to-javascriptimportance-of-interactivity-in-web-development-4cho | webdev, javascript, tutorial | ## Importance of Interactivity in Web Development
Interactivity in web development is just like adding the fun stuff to a website that makes you want to stay and explore.
Interactivity is necessary in web development for several reasons:
#### Enhanced User Engagement:
It keeps users interested. You are clicking around a website and suddenly something cool happens when you hover over a button or fill out a form: that's interactivity. It's like a little game that keeps you engaged and makes you want to stay longer.
Features like forms, buttons, and adaptive content keep users interested, leading to longer sessions and higher interaction with the website.
#### Improved User Experience (UX)
Interactive features make websites more intuitive and user-friendly, allowing visitors to navigate smoothly and accomplish tasks efficiently.
For example, have you ever been on a website where everything just makes sense? Like, you click a button, and it takes you exactly where you want to go? That's because of interactive features that make websites easy to use, so you can find what you need quickly and get stuff done without any hassle.
#### Personalization
Interactivity enables websites to tailor content and experiences based on user actions and preferences.
Ever noticed how some websites seem to know exactly what you are looking for? That's because they use interactive tricks to customize your experience based on the things you do on the site. It's like having a website made just for you.
#### Communication and feedback
Interactivity allows users to share opinions and communicate with the website owner, creating a sense of community and involvement.
Leaving a comment on a website? That's interactivity. It's like having a conversation with the people who run the website, and they appreciate what you have to say.
#### Collection of Data
Help them help you
At times, websites ask you questions or collect information from you. Not because they are nosy, but because they want to make the website better for you. They use that data to discover what you like and what you don't, so they can give you more of the good stuff.
#### Brand Differentiation
When a website is engaging, with interactive features, it stands out from its competitors.
It leaves a lasting impression on visitors and strengthens the brand identity.
## Role of JavaScript in creating interactive elements on websites
[JavaScript](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/First_steps/What_is_JavaScript) is like the master builder behind the scenes of websites, making them come alive with cool interactive features. It lets developers add things like pop up messages, animated buttons, and forms that check if you've filled them out correctly. So, when you click a button and something cool happens, or when a website updates without having to refresh the page, that's all thanks to JavaScript! It's the magic ingredient that turns boring websites into exciting, interactive experiences that keep you engaged and coming back for more.
## Basics of JavaScript interactivity
Familiarizing yourself with the Document Object Model (DOM)

img source: www.ASAQENI.COM
The [Document Object Model (DOM)](https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model) is like a blueprint for web pages, showing how everything is structured. Imagine it as a tree, with the main HTML element as the trunk, and each part of the webpage (like text, images, or buttons) as branches. The DOM lets developers interact with and manipulate these elements using JavaScript, allowing them to change content, style, or behavior dynamically. It's like being able to rearrange furniture in a room without physically moving it! This manipulation is what makes websites dynamic and responsive to user actions. For example, when you click a button and something happens on the webpage, it's because JavaScript is using the DOM to make it happen. Understanding the DOM is essential for developers to create interactive and user-friendly websites, as it forms the foundation for building engaging web experiences.
#### Selecting and Manipulating Elements
Selecting and manipulating elements in a web page is like picking items in a room and moving them around. In web development, JavaScript allows you to do this with parts of the webpage, like text, images, or buttons.
- Selecting elements: Imagine you want to change the colour of a chair in your room. First, you need to point to the chair. Similarly, in a webpage, you use JavaScript to "point" to or select an element. This is done using methods like `getElementById` or `querySelector`.
- Manipulating elements: Once you've selected an element, you can change it. It's like deciding to move the chair to a different corner or paint it a new colour. In JavaScript, you can change the content, style, or even the position of the selected element. For example, you can update text, hide images, or change the background colour.
This ability to select and manipulate elements makes websites dynamic and interactive, enhancing the user experience.
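Here is a runnable sketch of "select, then manipulate". The tiny `document` stub below stands in for a real browser page so the snippet works anywhere; in a browser you would delete the stub and use the real global `document`.

```javascript
// Stand-in for the browser page: one element with id "title".
const elements = { title: { textContent: "Old heading", style: {} } };
const document = { getElementById: (id) => elements[id] ?? null };

// 1. Select ("point to") the element, like picking the chair in the room.
const title = document.getElementById("title");

// 2. Manipulate it: change its text and its colour.
title.textContent = "Welcome!";
title.style.color = "rebeccapurple";

console.log(title.textContent); // Welcome!
```

The same two-step pattern (select, then change) is all that most dynamic page updates boil down to.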
## Events handling
Event handling in web development is like setting up a reaction to something that happens on a webpage, such as a click, hover, or key press. When these actions occur, JavaScript can trigger specific responses, like showing a message, changing colors, or updating content.
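The pattern looks like this in code. `button` here is a small stand-in object that mimics `addEventListener` and a click, so the idea is visible outside a browser; real DOM elements behave the same way when a user actually clicks.

```javascript
// Minimal event target: stores handlers and can "fire" a click.
const button = {
  handlers: {},
  addEventListener(type, fn) {
    (this.handlers[type] ??= []).push(fn);
  },
  click() {
    (this.handlers.click ?? []).forEach((fn) => fn({ type: "click" }));
  },
};

let message = "";
button.addEventListener("click", () => {
  message = "Button was clicked!";
});

button.click(); // simulates the user clicking
console.log(message); // Button was clicked!
```

Register a reaction once, and the event triggers it every time it happens: that is event handling in a nutshell.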
## Common interactive features
Common interactive features on websites include:
1. Form Validation: Ensuring users fill out forms correctly before submission, providing immediate feedback on errors.
2. Dynamic Content: Updating parts of the web page without reloading, such as loading new articles or images.
3. Animations and Transitions: Adding movement and visual effects to elements like buttons and menus for a more engaging experience.
4. Modals and Pop-ups: Displaying overlay messages or forms to grab user attention or collect input.
5. Drop-down Menus: Allowing users to navigate through various options efficiently.
6. Carousels and Sliders: Enabling users to scroll through images or content horizontally.
7. Interactive Maps: Allowing users to zoom in, out, and click on locations for more information.
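To make the first feature concrete, here is a hedged validation sketch. The email rule is illustrative only, not a complete validator, and the messages are made up for the example.

```javascript
// Check a form value before submission and return immediate feedback.
function validateEmail(value) {
  const looksLikeEmail = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
  return looksLikeEmail
    ? { ok: true, message: "" }
    : { ok: false, message: "Please enter a valid email address." };
}

console.log(validateEmail("ada@example.com").ok); // true
console.log(validateEmail("not-an-email").ok);    // false
```

In a real page, you would call this from the form's `submit` handler and show `message` next to the field when `ok` is false.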
## Advanced techniques
1. AJAX (Asynchronous JavaScript and XML): This allows parts of a webpage to update without reloading the whole page. For instance, new comments can appear without refreshing.
2. Fetch API: Similar to AJAX, it lets the website get data from servers in the background. For example, when you scroll through a social media feed, new posts load seamlessly.
3. WebSockets: This enables real-time communication, allowing for instant updates. Think of live chat apps or online gaming where actions happen instantly.
4. Using Frameworks and Libraries: Tools like React or Vue.js help build complex interactive elements more easily and efficiently, providing pre-built components and structure.
5. Client-Side Storage: Techniques like localStorage and sessionStorage allow websites to store data directly in the browser, making it possible to remember user settings or keep data even after a page refresh.
These advanced techniques enhance user experience by making websites faster, more interactive, and capable of real-time updates.
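The "load data in the background" idea from the Fetch API point can be sketched like this. The fetch function is passed in as a parameter, so the demo below uses a stub instead of a real network call; in a browser you would pass the global `fetch` and a real URL (both the endpoint and the data shape here are made up).

```javascript
// Fetch JSON in the background and fail loudly on HTTP errors.
async function loadPosts(fetchFn, url) {
  const response = await fetchFn(url);
  if (!response.ok) throw new Error(`HTTP error ${response.status}`);
  return response.json(); // parsed JSON, e.g. an array of posts
}

// Stand-in for fetch that "responds" with two posts.
const fakeFetch = async () => ({
  ok: true,
  status: 200,
  json: async () => [{ id: 1, title: "Hello" }, { id: 2, title: "World" }],
});

loadPosts(fakeFetch, "/api/posts").then((posts) => console.log(posts.length)); // 2
```

This is the pattern behind feeds that grow as you scroll: the page stays put while new data arrives and gets rendered.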
## Best practices
Best practices in web development help make websites work well and be easy to use:
1. Write Clean Code: Keep your code organised and readable so others can understand and maintain it easily.
2. Optimise Performance: Make sure your website loads quickly by minimising file sizes and reducing unnecessary code.
3. Ensure Accessibility: Design your website so everyone, including people with disabilities, can use it. Use proper tags and labels.
4. Test Regularly: Check your website on different devices and browsers to make sure it works everywhere.
5. Keep It Secure: Protect your website from hackers by using strong passwords and keeping software updated.
Following these practices ensures your website is fast, easy to use, and secure.
## Conclusion
In conclusion, adding interactivity with JavaScript transforms static websites into engaging and dynamic experiences. By using JavaScript to handle events, manipulate elements, and update content in real-time, you create a more enjoyable and efficient user experience. Understanding and implementing these techniques ensures your website is not only functional but also interactive and fun to use, keeping visitors engaged and satisfied.
| ellaokah |
1,882,268 | THE PYTHON BLUEPRINT: A Beginners Guide to getting started | What is Python Language Python is a backend tool that is used by programmers and tech developers.... | 0 | 2024-06-09T19:18:22 | https://dev.to/davidbosah/the-python-blueprint-a-beginners-guide-to-getting-started-2chm | python, beginners, programming, developer | **What is Python Language**
Python is a backend tool used by programmers and tech developers. Its use cuts across different sectors of programming, including:
1. Web development.
2. Data service.
3. Artificial Intelligence.
4. Gaming.
**Merits of Python Language**
Firstly, Python code is generally shorter to write than in other languages like JavaScript, which reduces ambiguity.
Secondly, Python is a cross-platform language, meaning that Python code written on macOS can run on Windows without changing the code.
Thirdly, Python has a large community, so finding online colleagues and mentors is not a problem at all.
**Getting Started**
This is the real deal. The easiest way to get started with Python is as follows:
1. Download the Python installer for your operating system from the official Python website.
2. Choose a text editor like Notepad++, Sublime Text, or Atom, or an Integrated Development Environment (IDE) like Spyder or PyCharm.
3. Learn basic syntax: start with the basics such as numbers, variables, and strings.
4. Engage with tutorial resources online.
5. Join social media Python communities.
With this, you are on the right track to becoming a Python guru.
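To give the basic-syntax step a concrete shape, here is a tiny first program touching numbers, variables, and strings (the names and values are just examples):

```python
# First steps: numbers, variables, and strings.
name = "Ada"             # a string stored in a variable
year = 2024              # a number (an integer)

greeting = f"Hello, {name}! Welcome to Python in {year}."
print(greeting)          # Hello, Ada! Welcome to Python in 2024.
print(name.upper())      # ADA -- strings come with handy built-in methods
print(year + 1)          # 2025 -- arithmetic works as expected
```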
| davidbosah |
1,882,063 | Helix Editor: a matter of questionable existence | As a learning junior dev, I went from the depiction of soyest dev imaginable(python my beloved,... | 0 | 2024-06-09T19:11:24 | https://dev.to/yappaholic/helix-editor-a-matter-of-questionable-existence-5cho | vim, editors, helix, codenewbie | As a learning junior dev, I went from the depiction of soyest dev imaginable(python my beloved, windows and VS Code) to the true struggle and learning enjoyer(Neovim btw, Arch btw and Typescript(bearable)). So recently I landed my vim-motions on the helix editor to try and figure out: was it really necessary for helix to exist or should you try it out as a vimmer?
## Prelude
From the start we should talk about the origins of the Helix editor. Just as JavaScript came from Java, Helix came from Kakoune. What is Kakoune you ask? I have no idea, neither should you. All you should know is that Helix has a lot of similarities with vim and vim-motions, but then it takes a rather strange approach to solving the same problems. Let's take a look.
## Selection over action
You heard it right. No more double `y` to copy a line or double `d`(eez nuts) to delete one; Helix thinks otherwise. First you gotta select a line and then manipulate it. You think "Okay, just let me use visual block with `V`" and you're wrong again. No more visual blocks, they're a matter of the past; now you need to use `x` to select a line and `x` again to move one line down.
Okay, maybe I can start selecting at the start of the line and then use `$` to go to the end of the line, right? Right?

Sike! Helix thinks you are too lazy to learn vim bindings, so now movement is bound to the "go to" (or `g`) menu. To go to the end of the file, use `ge`, and to select a line and go up you need to use `vglk`, whereas in vim you can just do `Vk`. Do you feel the post-modernity yet? If not, then we go deeper.
## Plugin system
Just as the PlayStation 5 has no games, the Helix editor has no plugin system yet. They are going to implement it using a Lisp language, connecting it to Rust with the Steel plugin (did I mention Helix is written in Rust?), which is old and not post-modern at all. Forget about git with fugitive, forget about managing files with oil (also, at the moment there is no file traversal in Helix except the `:open` command and the fuzzy finder), and most importantly, no more file harpooning.
Instead, Helix comes with a lot of built-in stuff like a fuzzy finder, tree-sitter, autocompletion, a key bindings helper, diagnostics and LSP support when you install the correct language server. It is enough to make the editor work with code but not enough to make it a complete IDE-like editor, so to get the most out of it the modding community will have to do a lot of heavy lifting.
## Too much FZF
Working with Helix feels kind of okay. You get most of the stuff done, autocompletion is snappy, files open a bit faster, but then you notice: there is too much fuzzy finder! Literally, it does a lot of stuff. Diagnostics? FZF. File picker? FZF. Tree-sitter objects? FZF. Special names in the code? FZF. Hotel? FZF. Brazil? Mentioned.
Don't get me wrong, I think the fuzzy finder is awesome, but it has its own caveats, and in some cases it's better to use something else. For example, you can't really manipulate folders and files with FZF, and (as I understand it) when using FZF it goes through every folder no matter what you want to search and shows everything. That's when you want to search for something and get a screamer with all the `node_modules` files. Yuck!
## Configuration
Configs for Helix are written in the `.toml` format, which is still better than some other options available (talking about you, ĴSON). Some terminal emulators and the Bun runtime also use TOML for settings, so you can get comfortable with it, and the documentation covers enough settings to make the experience personal and comfortable (still no plugins, though).
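For reference, a minimal `config.toml` might look like the sketch below (option names come from the Helix documentation; the theme is just an example, not a recommendation):

```toml
theme = "catppuccin_mocha"

[editor]
line-number = "relative"  # relative line numbers, vim-style
mouse = false             # keyboard purists rejoice

[editor.cursor-shape]
insert = "bar"
normal = "block"
```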
## "Post-modern" editor

In the end, the Helix Editor is a sort of chimera assembled from scrapped vim-motions, LSP, Tree-sitter, FZF and some other popular vim plugins.
The main reason I don't like Helix isn't that there are no plugins, even though the version number is almost close to the one in emacs, but the motions. Vim-motions are designed with mnemonics and speed in mind. When vim's `g` is responsible for many movements it makes sense, because it is short for "go". So `G` is going to the end, `gg` is going to the start, `gf` is going to the file under the cursor (the only gf you'll have), `gd` is going to the declaration and so on. In Helix it is scattered everywhere and `g` is more like a leader key for other keys, so now you gotta make more hand movement and keystrokes. Just why? This is why sometimes it is better to stick with the old ways, even when they are old. And also text wrapping is awful, just going to leave it there, think for yourself at least once.
So those are my thoughts about the Helix Editor. Stay tuned to see how I test the Warp terminal to save you a couple of keystrokes, like a GigaChad. | yappaholic
1,882,298 | Tool to create quick mockup | I am a developer, not a designer, but I want to write about UI and I want to include example UI... | 0 | 2024-06-09T19:07:42 | https://dev.to/lisacee/tool-to-create-quick-mockup-1eb6 | discuss | I am a developer, not a designer, but I want to write about UI and I want to include example UI images. What is a tool that y'all use to quickly create a sharable or embeddable image to accompany your writing here on dev.to?
I don't want a tool so in-depth that I have to spend a lot of time figuring it out. It would be lovely if the tool had some generic components, buttons and placeholders, but I am also open to a Publisher-like tool. I am on a Mac and would prefer a free-ish tool.
For example, a designer at work made a very complex modal design that had multiple animations, interactions and a vertical scroll. I would like to write about the things I don't like about the design and how I would have designed it differently and why. Since it's based on proprietary details, I want a tool to recreate an image of it in a more generic way.
| lisacee |
1,882,285 | Set up an automated incident management response using AWS | Incident management Excited to share my latest AWS project focusing on automating an... | 0 | 2024-06-09T19:05:27 | https://dev.to/monica_escobar/set-up-an-automated-incident-management-response-using-aws-mp6 | aws, security, incident, automation | ## Incident management
Excited to share my latest AWS project, focused on automating incident response, following what I found to be a really engaging AWS-made workshop.
The project involves setting up a core configuration of three EC2 instances with their corresponding security groups and a VPC in us-east-1, deployed through a CloudFormation template.
To enhance security and ensure prompt incident response, a pipeline has been implemented. This pipeline starts with GuardDuty constantly monitoring the environment. When an anomaly is detected, an Amazon EventBridge Rule triggers a Lambda function.
The Lambda function plays a crucial role in securing the EC2 instances by restricting access to only ports 3389 and 22. Additionally, it takes an EC2 snapshot and stores it in an S3 bucket to prevent any data loss. The architecture is as follows:

To initiate the scenario and create the infrastructure for the automated incident response, I followed these steps to deploy the CloudFormation template:
**Deploy the CloudFormation template**
1. Review the CloudFormation Template
* Before deploying, you can review the template to understand its components and configurations.
```
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Security Automations Workshop template. Sets up VPC, EC2 instances, turns on GuardDuty and GuardDuty-Tester",
"Metadata": {
"AWS::CloudFormation::Interface": {
"ParameterGroups": [
{
"Label" : {"default": "Workshop Service Configuration"},
"Parameters": ["EnableGuardDuty"]
},
{
"Label" : {"default": "Workshop Parameters"},
"Parameters": ["LatestAMZNLinuxAMI", "LatestAMZNLinux2AMI", "LatestWindows2016AMI"]
}
],
"ParameterLabels": {
"EnableGuardDuty": {"default" : "Automatically enable GuardDuty?"}
}
}
},
"Parameters": {
"LatestAMZNLinuxAMI": {
"Description": "DO NOT CHANGE: The latest AMI ID for Amazon Linux",
"Type": "AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>",
"Default": "/aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-gp2"
},
"LatestAMZNLinux2AMI": {
"Description": "DO NOT CHANGE: The latest AMI ID for Amazon Linux2",
"Type": "AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>",
"Default": "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
},
"LatestWindows2016AMI": {
"Description": "DO NOT CHANGE: The latest AMI ID for Windows 2016",
"Type": "AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>",
"Default": "/aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base"
},
"EnableGuardDuty": {
"Description": "Choose Yes if GuardDuty is not yet enabled in the account and region this template will be deployed to, otherwise choose No.",
"Type": "String",
"AllowedValues": ["Yes-Enable GuardDuty", "No-GuardDuty is already enabled"],
"Default": "Yes-Enable GuardDuty"
}
},
"Mappings": {
"AWSRegionAMIMap": {
"ap-south-1": {"HVM64": "ami-b46f48db"},
"eu-west-3": {"HVM64": "ami-cae150b7"},
"eu-west-2": {"HVM64": "ami-c12dcda6"},
"eu-west-1": {"HVM64": "ami-9cbe9be5"},
"ap-northeast-3": {"HVM64": "ami-68c1cf15"},
"ap-northeast-2": {"HVM64": "ami-efaf0181"},
"ap-northeast-1": {"HVM64": "ami-28ddc154"},
"sa-east-1": {"HVM64": "ami-f09dcc9c"},
"ca-central-1": {"HVM64": "ami-2f39bf4b"},
"ap-southeast-1": {"HVM64": "ami-64260718"},
"ap-southeast-2": {"HVM64": "ami-60a26a02"},
"eu-central-1": {"HVM64": "ami-1b316af0"},
"us-east-1": {"HVM64": "ami-467ca739"},
"us-east-2": {"HVM64": "ami-976152f2"},
"us-west-1": {"HVM64": "ami-46e1f226"},
"us-west-2": {"HVM64": "ami-6b8cef13"}
}
},
"Conditions": {
"EnableGuardDuty": {"Fn::Equals": [{"Ref": "EnableGuardDuty"}, "Yes-Enable GuardDuty"]}
},
"Resources": {
"SSMInstanceRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"ssm.amazonaws.com",
"ec2.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
},
"Policies": [
{
"PolicyName": "S3andSSMAccess",
"PolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Action": [
"ssm:DescribeAssociation",
"ssm:GetDeployablePatchSnapshotForInstance",
"ssm:GetDocument",
"ssm:DescribeDocument",
"ssm:GetManifest",
"ssm:GetParameters",
"ssm:GetParameter",
"ssm:ListAssociations",
"ssm:ListInstanceAssociations",
"ssm:PutInventory",
"ssm:PutComplianceItems",
"ssm:PutConfigurePackageResult",
"ssm:UpdateAssociationStatus",
"ssm:UpdateInstanceAssociationStatus",
"ssm:UpdateInstanceInformation"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ssmmessages:CreateControlChannel",
"ssmmessages:CreateDataChannel",
"ssmmessages:OpenControlChannel",
"ssmmessages:OpenDataChannel"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2messages:AcknowledgeMessage",
"ec2messages:DeleteMessage",
"ec2messages:FailMessage",
"ec2messages:GetEndpoint",
"ec2messages:GetMessages",
"ec2messages:SendReply"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"cloudwatch:PutMetricData"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstanceStatus"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ds:CreateComputer",
"ds:DescribeDirectories"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:PutObject",
"s3:GetObject",
"s3:GetEncryptionConfiguration",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts",
"s3:ListBucket",
"s3:ListBucketMultipartUploads"
],
"Resource": "*"
},
{
"Sid": "S3ListBuckets",
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets"
],
"Resource": "arn:aws:s3:::*"
},
{
"Sid": "S3GetObjects",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:GetObject"
],
"Resource": {
"Fn::Join": [
"",
[
"arn:aws:s3:::",
"agentbucket-",
{
"Ref": "AWS::AccountId"
},
"/*"
]
]
}
}
]
}
}
],
"Path": "/"
}
},
"VPC": {
"Type": "AWS::EC2::VPC",
"Properties": {
"CidrBlock": "10.0.0.0/16",
"EnableDnsSupport": "true",
"EnableDnsHostnames": "true",
"Tags": [
{
"Key": "Application",
"Value": {
"Ref": "AWS::StackName"
}
},
{
"Key": "Name",
"Value": {
"Fn::Join": [
"-",
[
"VPC",
{
"Ref": "AWS::StackName"
}
]
]
}
}
]
}
},
"PublicSubnet": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"CidrBlock": "10.0.1.0/24",
"MapPublicIpOnLaunch": "true",
"AvailabilityZone": {
"Fn::Select": [
"0",
{
"Fn::GetAZs": {
"Ref": "AWS::Region"
}
}
]
},
"Tags": [
{
"Key": "Name",
"Value": {
"Fn::Join": [
"-",
[
"Pub1",
{
"Ref": "AWS::StackName"
}
]
]
}
}
]
}
},
"IGW": {
"Type": "AWS::EC2::InternetGateway",
"Properties": {
"Tags": [
{
"Key": "Application",
"Value": {
"Ref": "AWS::StackName"
}
}
]
}
},
"AttachGateway": {
"Type": "AWS::EC2::VPCGatewayAttachment",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"InternetGatewayId": {
"Ref": "IGW"
}
}
},
"PublicRouteTable": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": {
"Ref": "VPC"
},
"Tags": [
{
"Key": "Application",
"Value": {
"Ref": "AWS::StackName"
}
},
{
"Key": "Network",
"Value": "Public"
}
]
}
},
"PublicRoute": {
"Type": "AWS::EC2::Route",
"DependsOn": [
"AttachGateway"
],
"Properties": {
"RouteTableId": {
"Ref": "PublicRouteTable"
},
"DestinationCidrBlock": "0.0.0.0/0",
"GatewayId": {
"Ref": "IGW"
}
}
},
"PublicSubnetRouteAssociation": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Properties": {
"SubnetId": {
"Ref": "PublicSubnet"
},
"RouteTableId": {
"Ref": "PublicRouteTable"
}
}
},
"GDdetector": {
"Type": "AWS::GuardDuty::Detector",
"Condition": "EnableGuardDuty",
"Properties": {
"Enable": true,
"FindingPublishingFrequency": "FIFTEEN_MINUTES"
}
},
"GuardDutyTesterTemplate": {
"Type": "AWS::CloudFormation::Stack",
"Properties": {
"TemplateURL":{
"Fn::Join": [
"",
[
"https://sa-security-specialist-workshops-",
{
"Ref": "AWS::Region"
},
".s3.",
{
"Ref": "AWS::Region"
},
".amazonaws.com/security-hub-workshop/templates/guardduty-tester-template.json"
]
]
},
"Parameters": {
"InstanceSubnetId": {
"Ref": "PublicSubnet"
},
"DeployVPC": {
"Ref": "VPC"
},
"DeployVPCCidr": {
"Fn::GetAtt": [
"VPC",
"CidrBlock"
]
},
"LatestWindows2012R2AMI": "/aws/service/ami-windows-latest/EC2LaunchV2-Windows_Server-2016-English-Full-Base"
}
}
}
}
}
```
2. Deploy the Template
* Select a region; for instance, I will be using us-east-1 (N. Virginia).
3. Specify Stack Details
* Enter the following parameters:
* Stack name: AutomatedIncidentResponseWorkshop
* Enable GuardDuty: Yes
* After filling in the parameters, click Next.
4. Configure Stack Options
* Click Next again on the following page, leaving all options at their default values.
5. Acknowledge and Create Stack
* Scroll down to the bottom of the page, check the box acknowledging that the template will create IAM roles, and click Create stack.

**Setting up a security group**
1. Create Security Group
* Create a new security group named ForensicsSG.
* Remove all outbound rules and set the following inbound rules:
* RDP: Protocol TCP, Port 3389, Source (your IP), Description: RDP for IR team
* SSH: Protocol TCP, Port 22, Source (your IP), Description: SSH for IR team
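As a rough sketch, the same rules could also be set up with boto3 — `MY_IP` and the commented-out `create_security_group` call below are illustrative placeholders, not part of the workshop:

```python
# Sketch: the two inbound rules described above, expressed as boto3 IpPermissions.
# MY_IP is a placeholder -- replace it with your own address before use.
MY_IP = "203.0.113.10/32"

ip_permissions = [
    {"IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
     "IpRanges": [{"CidrIp": MY_IP, "Description": "RDP for IR team"}]},
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": MY_IP, "Description": "SSH for IR team"}]},
]

# With AWS credentials configured you would then run (not executed here):
# import boto3
# ec2 = boto3.client("ec2")
# sg = ec2.create_security_group(GroupName="ForensicsSG",
#                                Description="IR-team-only access", VpcId=vpc_id)
# ec2.authorize_security_group_ingress(GroupId=sg["GroupId"],
#                                      IpPermissions=ip_permissions)

print(len(ip_permissions))
```

Remember to also remove the default outbound rule, as the console steps above describe.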
**Creating and Attaching Policies**
1. Create a New IAM Policy
* Create a policy called ec2instance-containment-with-forensics-policy with the following JSON to deny termination of isolated instances:
```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Deny",
            "Action": [
                "ec2:TerminateInstances",
                "ec2:DeleteTags",
                "ec2:CreateTags"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/status": "isolated"
                }
            }
        }
    ]
}
```
2. Create the Execution role for the Lambda function:
* Create a role called ec2instance-containment-with-forensics-role with Lambda as a trusted entity in its trust relationships.
3. Create a User Group
* Create a group named ec2-users.

4. Attach Policies to the Group
* Attach the following policies to the ec2-users group:
* AmazonEC2FullAccess (AWS Managed Policy)
* ec2instance-containment-with-forensics-policy (the custom policy created above).
5. Create a New User
* Create an IAM user named testuser
* Add this user to the ec2-users group.

**Configuring the Lambda Function**
1. Create IAM Policy for Lambda
* Create an IAM policy and attach it to the IAM role that the Lambda function will assume for automated responses.
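The exact policy isn't reproduced in the workshop text; a minimal sketch covering just the API calls the function below makes (instance lookup, attribute changes, tagging, snapshots) plus CloudWatch logging might look like the following — the action list is an assumption, and `Resource` should be scoped down for real use:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:ModifyInstanceAttribute",
                "ec2:CreateTags",
                "ec2:CreateSnapshots",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
```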
2. Create the Lambda Function
* Develop and deploy the Lambda function that will handle automated incident responses. Change the timeout to 15 minutes and select ec2instance-containment-with-forensics-role as the execution role. Select Python as runtime.

3. Add the following environment variables:
* Key: ForensicsSG
* Value: sg-...(the ID of your Forensics SG)

4. Include the following code:
```
import boto3
import time
from datetime import date
from botocore.exceptions import ClientError
import os


def lambda_handler(event, context):
    # Copyright 2022 - Amazon Web Services
    # Permission is hereby granted, free of charge, to any person obtaining a copy of this
    # software and associated documentation files (the "Software"), to deal in the Software
    # without restriction, including without limitation the rights to use, copy, modify,
    # merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
    # permit persons to whom the Software is furnished to do so.
    # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
    # INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
    # PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
    # HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
    # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
    # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

    # print('## ENVIRONMENT VARIABLES')
    # print(os.environ)
    # print('## EVENT')
    # print(event)

    response = 'Error remediating the security finding.'
    try:
        # Gather Instance ID from the CloudWatch event
        instanceID = event['detail']['resource']['instanceDetails']['instanceId']
        print('## INSTANCE ID: %s' % (instanceID))

        # Get instance details
        client = boto3.client('ec2')
        ec2 = boto3.resource('ec2')
        instance = ec2.Instance(instanceID)
        instance_description = client.describe_instances(InstanceIds=[instanceID])
        print('## INSTANCE DESCRIPTION: %s' % (instance_description))

        # -------------------------------------------------------------------
        # Protect instance from termination
        # -------------------------------------------------------------------
        ec2.Instance(instanceID).modify_attribute(
            DisableApiTermination={'Value': True})
        ec2.Instance(instanceID).modify_attribute(
            InstanceInitiatedShutdownBehavior={'Value': 'stop'})

        # -------------------------------------------------------------------
        # Create tags to avoid accidental deletion of forensics evidence
        # -------------------------------------------------------------------
        ec2.create_tags(Resources=[instanceID],
                        Tags=[{'Key': 'status', 'Value': 'isolated'}])
        print('## INSTANCE TAGS: %s' % (instance.tags))

        # ------------------------------------
        # Isolate Instance
        # ------------------------------------
        print('quarantining instance -- %s, %s' % (instance.id, instance.instance_type))
        # Change the instance's security group to terminate existing connections
        # and allow only the Forensics Team's access
        instance.modify_attribute(Groups=[os.environ['ForensicsSG']])
        print('Instance ready for root cause analysis -- %s, %s'
              % (instance.id, instance.security_groups))

        # ------------------------------------
        # Create snapshots of EBS volumes
        # ------------------------------------
        description = ('Isolated Instance:' + instance.id + ' on account: '
                       + event['detail']['accountId'] + ' on '
                       + date.today().strftime("%Y-%m-%d %H:%M:%S"))
        SnapShotDetails = client.create_snapshots(
            Description=description,
            InstanceSpecification={
                'InstanceId': instanceID,
                'ExcludeBootVolume': False
            }
        )
        print('Snapshot Created -- %s' % (SnapShotDetails))
        response = 'Instance ' + instance.id + ' auto-remediated'
    except ClientError as e:
        print(e)
    return response
```
5. Test the Lambda Function
* Perform tests to ensure the Lambda function works as expected. In the Lambda console, configure a test event that mimics a GuardDuty finding — it must include `detail.resource.instanceDetails.instanceId` (the ID of a disposable test instance) and `detail.accountId` — then invoke the function and check its output.
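A minimal test event shaped like the GuardDuty finding the handler expects might look like this sketch — the instance ID and account ID below are placeholders, so substitute the ID of a real test instance:

```python
# Hypothetical minimal GuardDuty-style event for testing the handler.
# Both IDs below are placeholders.
sample_event = {
    "detail": {
        "accountId": "123456789012",
        "resource": {
            "instanceDetails": {"instanceId": "i-0abcd1234efgh5678"}
        }
    }
}

# The handler reads the instance ID from exactly this path:
instance_id = sample_event["detail"]["resource"]["instanceDetails"]["instanceId"]
print(instance_id)  # → i-0abcd1234efgh5678
```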
**IMPORTANT:** Verify the status after execution: check on the EC2 console the current status of the instance "BasicLinuxTarget". You will see that the security group has changed to the one we configured only for the IR team. You will also see that new snapshots have been created. You are now seeing our automated response live in action!
Before testing, the security group in the instance was:

After testing, note the change in security group. Our IR security group took over.

**You can check the GuardDuty dashboard as well to see the threats it detected, the Lambda logs, and the snapshots created.**



6. Create EventBridge Rule
* Create a rule in EventBridge that triggers the Lambda function based on findings from GuardDuty. On creation method, select custom pattern and use the following code:
```
{
"source": ["aws.guardduty"],
"detail": {
"type": ["UnauthorizedAccess:EC2/TorClient", "Backdoor:EC2/C&CActivity.B!DNS", "Trojan:EC2/DNSDataExfiltration", "CryptoCurrency:EC2/BitcoinTool.B", "CryptoCurrency:EC2/BitcoinTool.B!DNS"]
}
}
```
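To make the matching behaviour concrete, here is a toy re-implementation of how EventBridge evaluates this particular pattern (illustration only — the real service implements the full event-pattern grammar):

```python
# Toy matcher for the event pattern above: "source" must equal aws.guardduty
# and "detail.type" must be one of the listed GuardDuty finding types.
FINDING_TYPES = {
    "UnauthorizedAccess:EC2/TorClient",
    "Backdoor:EC2/C&CActivity.B!DNS",
    "Trojan:EC2/DNSDataExfiltration",
    "CryptoCurrency:EC2/BitcoinTool.B",
    "CryptoCurrency:EC2/BitcoinTool.B!DNS",
}

def matches(event: dict) -> bool:
    return (event.get("source") == "aws.guardduty"
            and event.get("detail", {}).get("type") in FINDING_TYPES)

print(matches({"source": "aws.guardduty",
               "detail": {"type": "Trojan:EC2/DNSDataExfiltration"}}))  # True
print(matches({"source": "aws.ec2",
               "detail": {"type": "Trojan:EC2/DNSDataExfiltration"}}))  # False
```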
As target, select the lambda you previously created. Click on create.
This is one way to manage incident responses automatically in the cloud.
**Personal learnings from this project:**
- To perform automated basic incident response tasks for containment and for gathering data to analyse cyber threats.
- To understand possible actions to take and how to prevent such threats from affecting a production environment.
**How could this be improved?**
As a reflective professional, I like to spend some time after finishing projects thinking how they can be further improved.
For future enhancements, I plan to integrate an SNS topic to notify the incident response team. This will enable manual checks in case of any damage and facilitate reverting to the original state when the situation is under control.
There is also the downside of the 15-minute maximum execution time for the Lambda function. If needed, some environments will probably benefit more from a different architecture, one that relies on Step Functions to avoid the time restriction.
Excited to continue optimising this project for enhanced incident response capabilities. Thanks for reading and happy deploying if you want to give this a go! | monica_escobar |
1,882,296 | about kelvin, a terminal password manager i'm building | overview kelvin is a password manager for the linux terminal, it generates passwords, and... | 0 | 2024-06-09T19:03:16 | https://dev.to/dompehbright/about-kelvin-a-terminal-password-manager-im-building-a1c | rust, security, linux | ### overview
kelvin is a password manager for the linux terminal. it generates passwords and can be used as a vault to save and secure them. kelvin creates your vault locally; the vault is encrypted and stored as a hidden directory.
in building kelvin, i depended on a skeleton of three structs, admin, deck and deck_data.
```rust
pub struct Admin {
pub username: String,
pub password: String,
}
pub struct Deck {
pub domain: String,
pub plaintext: String,
}
pub struct DeckData {
pub domain: String,
pub ciphertext: Vec<u8>,
pub admin_data: Admin,
pub rsa_public_key: String,
pub rsa_private_key: String,
}
```
### admin.rs
the admin struct has the username and password fields. implementations on the admin struct mainly deal with creating an administrator, validating the administrator, and serialization and deserialization.
### implementations on admin struct
`new()`:
- This is a constructor method for creating an Admin instance.
- It takes a name (username) and a password as arguments.
- It initializes the Admin struct with the provided username and password.
`hash_password()`:
- This method hashes the admin's password for security.
- It uses the bcrypt crate to hash the password with a default cost.
- The hashed password replaces the original password in the struct.
`verify_password()`:
- This method verifies a password against the hashed password stored in the Admin struct.
- It compares the provided password with the stored hashed password.
- Returns true if the provided password matches the stored hashed password, otherwise false.
`save_to_json()`:
- Serializes the Admin struct to JSON and saves it to a file.
- It converts the struct to a JSON string using serde_json.
- Constructs a filepath based on the admin's username and a predefined constant VAULT_PATH.
- Writes the JSON string representation of the admin data to the file.
- Calls `encrypt_directory()` to encrypt the directory containing the file.
`read_data_from_json()`:
- Reads admin data from a JSON file and deserializes it into an Admin struct.
- Constructs a filepath based on the admin's username and `VAULT_PATH`.
- Calls `decrypt_directory()` to decrypt the directory containing the file.
- Reads the contents of the file into a string.
- Deserializes the JSON string into an Admin struct.
- Returns the deserialized Admin struct.
`prompt_auth()`:
- Prompts for authentication by comparing provided username and password with stored admin credentials.
- Reads admin data from JSON using `read_data_from_json()`.
- Compares the provided username with the stored username and verifies the provided password.
- Returns true if both username and password match, otherwise false.
### deck.rs
the deck struct manages the creation of a deck — a single entry that has a domain and its password (plaintext).
### implementations on deck struct
`new()`:
- Constructor method for creating a new Deck instance.
- Takes a domain and plaintext as arguments.
- Initializes the Deck struct with the provided domain and plaintext.
`encrypt()`:
- Encrypts the plaintext data of the deck using RSA encryption.
- Calls `get_keys()` to obtain RSA keys and a random number generator.
- Converts the plaintext to bytes.
- Encrypts the plaintext using RSA public key encryption (PKCS#1 v1.5 padding).
- Returns a tuple containing the encrypted data and the RSA keys.
`read_data_from_json():`
- Reads deck data from a JSON file and deserializes it into a DeckData struct (not defined in this snippet).
- Constructs a filepath based on the deck's domain and a predefined constant `VAULT_PATH`.
- Calls `decrypt_directory()` to decrypt the directory containing the file.
- Reads the contents of the file into a string.
- Deserializes the JSON string into a `Vec<DeckData>`.
- Returns the first DeckData from the vector or an error if no data is found.
### deckdata.rs
the deckdata struct deals with serializing the deck to a json file and saving it.
### implementations on deckdata struct
`new()`:
- This is a constructor method for `DeckData`.
- It takes arguments to initialize the fields of `DeckData`, including Admin data, domain, ciphertext, RSA public key, and RSA private key.
- It converts RSA public and private keys to PKCS#1 PEM format and initializes the struct.
`serialize_struct()`:
- Serializes the struct to a JSON string.
- Uses the serde_json crate to convert the struct into a JSON string.
- Returns the JSON string representation of the struct.
`save_to_json()`:
- Serializes the struct to JSON and saves it to a file.
- Constructs a filepath based on the domain and a predefined constant `VAULT_PATH`.
- Writes the JSON string representation of the struct to the file.
- Calls `encrypt_directory()` (defined in data.rs) to encrypt the directory.
- Returns a Result indicating success or failure.
`read_data_from_json()`:
- Reads JSON data from a file, deserializes it into a `DeckData` struct.
- Constructs a filepath based on the domain and `VAULT_PATH`.
- Calls `decrypt_directory()` (defined in data.rs) to decrypt the directory.
- Reads the contents of the file into a string.
- Deserializes the JSON string into a `Vec<DeckData>`.
- Returns the first `DeckData` from the vector or an error if no data is found.
`decrypt()`:
- Decrypts the ciphertext using RSA private key.
- Parses the RSA public and private keys from their PEM representations.
- Decrypts the ciphertext using RSA with PKCS#1 v1.5 padding.
- Returns the decrypted data.
### helping scripts
aside the main skeleton, other scripts contain helping functions that help make kelvin work as a whole.
### `data.rs`
`check_file_exists(username: &str, directory_path: &str) -> bool`:
- Checks if a file with the given username exists in the directory_path.
- Iterates through the directory to find the file.
- Returns true if the file exists, otherwise false.
`read_user_data(username: &str, directory_path: &str) -> Option<Admin>`:
- Reads user data from a file specified by username and directory_path.
- If the file exists, reads its content, deserializes it into an Admin struct, and returns it.
- Returns None if the file doesn't exist or if there's an error in reading/deserializing.
`read_deck_data(domain: &str) -> Option<deckdata::DeckData>`:
- Reads deck data from a JSON file specified by domain and `VAULT_PATH`.
- If the file exists, reads its content, deserializes it into a DeckData struct, and returns it.
- Returns None if the file doesn't exist or if there's an error in reading/deserializing.
`encrypt_directory() -> std::io::Result<()>`:
- Encrypts the directory containing encrypted data.
- Archives the directory into a .tar.gz file.
- Encrypts the .tar.gz file using gpg.
- Deletes the original directory and the .tar.gz file.
- Returns a Result indicating success or failure.
`decrypt_directory() -> std::io::Result<()>`:
- Decrypts the directory containing encrypted data.
- Decrypts the .tar.gz.gpg file using gpg.
- Extracts the decrypted .tar.gz file.
- Deletes the encrypted .tar.gz.gpg file.
- Returns a Result indicating success or failure.
### `password.rs`
`generate_password(length: usize) -> String`:
- Generates a password from printable ASCII characters, including symbols, digits, uppercase, and lowercase letters.
- Randomly selects characters from the ASCII character set to create the password of the specified length.
- Shuffles the characters within the password to enhance randomness and security.
- Returns the generated password as a string.
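as an illustration of the idea (not kelvin's actual implementation, which presumably uses the `rand` crate), a std-only sketch with a simple clock-seeded LCG could look like:

```rust
// std-only sketch: pick `length` printable ASCII chars (33..=126) using a
// simple LCG seeded from the clock. illustration only -- use a CSPRNG
// (e.g. the `rand` crate with OsRng) for real passwords.
use std::time::{SystemTime, UNIX_EPOCH};

fn generate_password(length: usize) -> String {
    let charset: Vec<char> = (33u8..=126).map(|c| c as char).collect();
    let mut seed = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_nanos() as u64;
    (0..length)
        .map(|_| {
            // LCG step (constants from Knuth's MMIX generator)
            seed = seed
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            charset[((seed >> 33) as usize) % charset.len()]
        })
        .collect()
}

fn main() {
    let pw = generate_password(16);
    assert_eq!(pw.chars().count(), 16);
    println!("{}", pw);
}
```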
### `prompt.rs`
`prompt_deck()`:
- Prompts the user to enter a domain and its password.
- Reads input from the user for domain and password.
- Returns a tuple containing the entered domain and password.
`prompt_deck_open_sesame()`:
- Prompts the user to enter a domain.
- Reads input from the user for the domain.
- Returns the entered domain.
`prompt_logins()`:
- Prompts the user to enter an admin username and password.
- Reads input from the user for the admin username and password.
- Returns a tuple containing the entered admin username and password.
`initialize_vault()`:
- Checks if the vault directory exists, and if not, creates it.
- Initializes the vault directory for storing encrypted data.
- Returns a result indicating success or failure.
`clip(text: &str)`:
- Sets the system clipboard content to the provided text.
- Uses the clipboard crate to interact with the system clipboard.
- Pauses execution for 2 seconds to ensure the clipboard content is set before returning.
### kelvin as a whole
in `main.rs` is where kelvin works as a whole. clap is used as the argument parser, and kelvin demands commands to channel its operations.
commands include;
- `generate`
- `create-admin`
- `deck` which is used to add or create a deck
- `reset`
- `open-sesame` which is used to get a password to a domain in the vault
- `help`
### conclusion
this project appears to be laying the groundwork for a terminal-based password manager, or "vault," with a focus on secure storage and retrieval of user and domain credentials. working features include adding a deck, retrieving passwords, generating passwords and interacting with the system clipboard.
in the future, the project aims to evolve into a fully functional terminal vault with a user-friendly terminal interface. It will likely include features such as:
- Secure storage of user credentials and domain passwords.
- User authentication to access stored credentials.
- Terminal-based user interface for easy interaction.
- Daemonization to allow the vault to run persistently in the background.
- Command-line interface for starting, stopping, and managing the vault daemon.
overall, the project aims to provide a convenient and secure solution for managing passwords and sensitive data from the terminal, offering both ease of use and robust security features.
contribute to kelvin on [github](https://github.com/db-keli/kelvin)
text me on discord to join contributors [here](https://discordapp.com/users/1083741166492733500)
| dompehbright |
1,882,295 | New Developer | hello friends i am a new person here i wish see a new world special technology world | 0 | 2024-06-09T18:59:43 | https://dev.to/hakim_alabbasi_574021419d/hello-friends-i-am-a-new-person-here-i-wish-see-a-new-world-special-technology-world-49b9 | hello friends i am a new person here i wish see a new world special technology world | hakim_alabbasi_574021419d | |
1,882,294 | React hooks nobody told you before | TLDR 🔥 React.js is one of the most popular JavaScript libraries for building beautiful... | 0 | 2024-06-09T18:59:10 | https://dev.to/kumarkalyan/react-hooks-nobody-told-you-before-14 | webdev, reactnative, react, programming | ## TLDR :fire:
React.js is one of the most popular JavaScript libraries for building beautiful user interfaces and single-page applications, and the average salary of a React developer can reach $100K USD a year. React is used by some of the most popular tech companies around the world. The React ecosystem is vast and has a huge number of supporting libraries and frameworks, which give React additional powers to build full-stack, production-grade applications. The key features of React include component-driven architecture, the use of a virtual DOM, one-way data binding, and many more. However, in this particular article, I will focus on one of the most important features of React: hooks. I will explain the concept of hooks in detail.
- What are react hooks and their importance
- 38 react hooks you must know as a react developer along with code

## What are hooks in react and their importance?
Hooks in React allow developers to use state and other React features without writing classes. This helps React developers write clean and reusable components with fewer lines of code. Knowledge of React hooks will help you excel in React interviews, as it is one of the most important topics asked by interviewers in every React interview.
## The top 38 react hooks
## 1. useState
Manages local component state.
```js
import { useState } from 'react';
function Counter() {
const [count, setCount] = useState(0);
return (
<div>
<button onClick={() => setCount(count + 1)}>Increment</button>
<p>Count: {count}</p>
</div>
);
}
```
## 2. useEffect
Performs side effects in function components.
```js
import { useEffect, useState } from 'react';
function DataFetcher() {
const [data, setData] = useState(null);
useEffect(() => {
fetch('https://api.example.com/data')
.then(response => response.json())
.then(data => setData(data));
}, []);
return <div>Data: {data ? JSON.stringify(data) : 'Loading...'}</div>;
}
```
## 3. useContext
Consumes context in a component.
```js
import { useContext } from 'react';
import { ThemeContext } from './ThemeContext';
function ThemedButton() {
const theme = useContext(ThemeContext);
return <button style={{ background: theme.background }}>Click me</button>;
}
```
## 4. useReducer
Manages complex state logic.
```js
import { useReducer } from 'react';
const initialState = { count: 0 };
function reducer(state, action) {
switch (action.type) {
case 'increment':
return { count: state.count + 1 };
case 'decrement':
return { count: state.count - 1 };
default:
throw new Error();
}
}
function Counter() {
const [state, dispatch] = useReducer(reducer, initialState);
return (
<div>
<button onClick={() => dispatch({ type: 'decrement' })}>-</button>
<span>{state.count}</span>
<button onClick={() => dispatch({ type: 'increment' })}>+</button>
</div>
);
}
```
## 5. useCallback
Returns a memoized callback function.
```js
import { useCallback, useState } from 'react';
function CallbackComponent() {
const [count, setCount] = useState(0);
const increment = useCallback(() => {
setCount(count + 1);
}, [count]);
return <button onClick={increment}>Count: {count}</button>;
}
```
## 6. useMemo
Memoizes expensive calculations.
```js
import { useMemo, useState } from 'react';
function Fibonacci() {
const [num, setNum] = useState(1);
const fib = useMemo(() => {
const computeFib = (n) => (n <= 1 ? n : computeFib(n - 1) + computeFib(n - 2));
return computeFib(num);
}, [num]);
return (
<div>
<button onClick={() => setNum(num + 1)}>Next Fibonacci</button>
<p>Fibonacci of {num} is {fib}</p>
</div>
);
}
```
## 7. useRef
Accesses DOM elements or stores mutable values.
```js
import { useRef } from 'react';
function TextInputWithFocusButton() {
const inputEl = useRef(null);
const onButtonClick = () => {
inputEl.current.focus();
};
return (
<div>
<input ref={inputEl} type="text" />
<button onClick={onButtonClick}>Focus the input</button>
</div>
);
}
```
## 8. useImperativeHandle
Customizes the instance value exposed by a ref.
```js
import { forwardRef, useImperativeHandle, useRef } from 'react';
const FancyInput = forwardRef((props, ref) => {
const inputRef = useRef();
useImperativeHandle(ref, () => ({
focus: () => {
inputRef.current.focus();
},
}));
return <input ref={inputRef} />;
});
function App() {
const fancyInputRef = useRef();
return (
<div>
<FancyInput ref={fancyInputRef} />
<button onClick={() => fancyInputRef.current.focus()}>Focus input</button>
</div>
);
}
```
## 9. useLayoutEffect
Synchronizes with the DOM layout.
```js
import { useLayoutEffect, useRef, useState } from 'react';
function MeasureWidth() {
const ref = useRef();
const [width, setWidth] = useState(0);
useLayoutEffect(() => {
setWidth(ref.current.offsetWidth);
}, []);
return (
<div>
<div ref={ref} style={{ width: '50%' }}>
Resize the window to see the effect.
</div>
<p>Width: {width}px</p>
</div>
);
}
```
## 10. useDebugValue
Displays custom label in React DevTools.
```js
import { useDebugValue, useEffect, useState } from 'react';
function useFriendStatus(friendID) {
  const [isOnline, setIsOnline] = useState(null);
  useDebugValue(isOnline ? 'Online' : 'Offline');
  // Simulate an asynchronous operation once per friendID
  // (a bare setTimeout in the hook body would fire again on every render)
  useEffect(() => {
    const id = setTimeout(() => setIsOnline(Math.random() > 0.5), 1000);
    return () => clearTimeout(id);
  }, [friendID]);
  return isOnline;
}
function FriendStatus({ friendID }) {
const isOnline = useFriendStatus(friendID);
if (isOnline === null) {
return 'Loading...';
}
return isOnline ? 'Online' : 'Offline';
}
```
## 11. useFetch
Fetches data from an API.
```js
import { useEffect, useState } from 'react';
function useFetch(url) {
const [data, setData] = useState(null);
const [loading, setLoading] = useState(true);
useEffect(() => {
    fetch(url)
      .then(response => response.json())
      .then(data => {
        setData(data);
        setLoading(false);
      })
      .catch(() => setLoading(false)); // don't leave the UI stuck on "Loading..."
}, [url]);
return { data, loading };
}
function App() {
const { data, loading } = useFetch('https://jsonplaceholder.typicode.com/posts');
if (loading) {
return <p>Loading...</p>;
}
return (
<ul>
{data.map(post => (
<li key={post.id}>{post.title}</li>
))}
</ul>
);
}
```
## 12. useLocalStorage
Manages state with local storage.
```js
import { useState } from 'react';
function useLocalStorage(key, initialValue) {
const [storedValue, setStoredValue] = useState(() => {
try {
const item = window.localStorage.getItem(key);
return item ? JSON.parse(item) : initialValue;
} catch (error) {
console.error(error);
return initialValue;
}
});
const setValue = (value) => {
try {
const valueToStore = value instanceof Function ? value(storedValue) : value;
setStoredValue(valueToStore);
window.localStorage.setItem(key, JSON.stringify(valueToStore));
} catch (error) {
console.error(error);
}
};
return [storedValue, setValue];
}
function App() {
const [name, setName] = useLocalStorage('name', 'Bob');
return (
<div>
<input value={name} onChange={(e) => setName(e.target.value)} />
<p>Hello, {name}!</p>
</div>
);
}
```
## 13. useDebounce
Debounces a value over time.
```js
import { useEffect, useState } from 'react';
function useDebounce(value, delay) {
const [debouncedValue, setDebouncedValue] = useState(value);
useEffect(() => {
const handler = setTimeout(() => {
setDebouncedValue(value);
}, delay);
return () => {
clearTimeout(handler);
};
}, [value, delay]);
return debouncedValue;
}
function App() {
const [text, setText] = useState('');
const debouncedText = useDebounce(text, 500);
return (
<div>
<input value={text} onChange={(e) => setText(e.target.value)} />
<p>Debounced Value: {debouncedText}</p>
</div>
);
}
```
## 14. usePrevious
Stores the previous value of a variable.
```js
import { useEffect, useRef, useState } from 'react';
function usePrevious(value) {
const ref = useRef();
useEffect(() => {
ref.current = value;
}, [value]);
return ref.current;
}
function App() {
const [count, setCount] = useState(0);
const previousCount = usePrevious(count);
return (
<div>
<button onClick={() => setCount(count + 1)}>Count: {count}</button>
<p>Previous Count: {previousCount}</p>
</div>
);
}
```
## 15. useWindowSize
Tracks window size.
```js
import { useEffect, useState } from 'react';
function useWindowSize() {
const [size, setSize] = useState({ width: window.innerWidth, height: window.innerHeight });
useEffect(() => {
const handleResize = () => {
setSize({ width: window.innerWidth, height: window.innerHeight });
};
window.addEventListener('resize', handleResize);
return () => window.removeEventListener('resize', handleResize);
}, []);
return size;
}
function App() {
const { width, height } = useWindowSize();
return (
<div>
<p>Width: {width}px</p>
<p>Height: {height}px</p>
</div>
);
}
```
## 16. useHover
Detects if an element is hovered.
```js
import { useCallback, useState } from 'react';
function useHover() {
const [hovered, setHovered] = useState(false);
const onMouseOver = useCallback(() => setHovered(true), []);
const onMouseOut = useCallback(() => setHovered(false), []);
return { hovered, onMouseOver, onMouseOut };
}
function HoverComponent() {
const { hovered, onMouseOver, onMouseOut } = useHover();
return (
<div onMouseOver={onMouseOver} onMouseOut={onMouseOut}>
{hovered ? 'Hovering' : 'Not Hovering'}
</div>
);
}
```
## 17. useOnlineStatus
Tracks online status.
```js
import { useEffect, useState } from 'react';
function useOnlineStatus() {
const [isOnline, setIsOnline] = useState(navigator.onLine);
useEffect(() => {
const handleOnline = () => setIsOnline(true);
const handleOffline = () => setIsOnline(false);
window.addEventListener('online', handleOnline);
window.addEventListener('offline', handleOffline);
return () => {
window.removeEventListener('online', handleOnline);
window.removeEventListener('offline', handleOffline);
};
}, []);
return isOnline;
}
function App() {
const isOnline = useOnlineStatus();
return <div>{isOnline ? 'Online' : 'Offline'}</div>;
}
```
## 18. useEventListener
Attaches an event listener.
```js
import { useEffect, useRef } from 'react';
function useEventListener(eventName, handler, element = window) {
const savedHandler = useRef();
useEffect(() => {
savedHandler.current = handler;
}, [handler]);
useEffect(() => {
const eventListener = (event) => savedHandler.current(event);
element.addEventListener(eventName, eventListener);
return () => {
element.removeEventListener(eventName, eventListener);
};
}, [eventName, element]);
}
function App() {
useEventListener('click', () => alert('Window clicked!'));
return <div>Click anywhere!</div>;
}
```
## 19. useInterval
Sets up an interval with a dynamic delay.
```js
import { useEffect, useRef, useState } from 'react';
function useInterval(callback, delay) {
const savedCallback = useRef();
useEffect(() => {
savedCallback.current = callback;
}, [callback]);
useEffect(() => {
function tick() {
savedCallback.current();
}
if (delay !== null) {
const id = setInterval(tick, delay);
return () => clearInterval(id);
}
}, [delay]);
}
function Timer() {
const [count, setCount] = useState(0);
useInterval(() => setCount(count + 1), 1000);
return <div>Count: {count}</div>;
}
```
## 20. useTimeout
Sets up a timeout.
```js
import { useEffect, useRef, useState } from 'react';
function useTimeout(callback, delay) {
const savedCallback = useRef();
useEffect(() => {
savedCallback.current = callback;
}, [callback]);
useEffect(() => {
function tick() {
savedCallback.current();
}
if (delay !== null) {
const id = setTimeout(tick, delay);
return () => clearTimeout(id);
}
}, [delay]);
}
function App() {
const [visible, setVisible] = useState(true);
useTimeout(() => setVisible(false), 5000);
return <div>{visible ? 'Visible for 5 seconds' : 'Hidden'}</div>;
}
```
## 21. useOnClickOutside
Detects clicks outside a component.
```js
import { useEffect, useRef, useState } from 'react';
function useOnClickOutside(ref, handler) {
useEffect(() => {
const listener = (event) => {
if (!ref.current || ref.current.contains(event.target)) {
return;
}
handler(event);
};
document.addEventListener('mousedown', listener);
document.addEventListener('touchstart', listener);
return () => {
document.removeEventListener('mousedown', listener);
document.removeEventListener('touchstart', listener);
};
}, [ref, handler]);
}
function App() {
const ref = useRef();
const [isVisible, setIsVisible] = useState(true);
useOnClickOutside(ref, () => setIsVisible(false));
return (
<div>
<div ref={ref} style={{ display: isVisible ? 'block' : 'none' }}>
Click outside this box to hide it.
</div>
</div>
);
}
```
## 22. useClipboard
Handles clipboard operations.
```js
import { useState } from 'react';
function useClipboard() {
const [copied, setCopied] = useState(false);
const copy = (text) => {
navigator.clipboard.writeText(text).then(() => setCopied(true));
};
return { copied, copy };
}
function App() {
const { copied, copy } = useClipboard();
return (
<div>
<button onClick={() => copy('Hello, world!')}>
{copied ? 'Copied!' : 'Copy'}
</button>
</div>
);
}
```
## 23. useDarkMode
Manages dark mode preference.
```js
import { useEffect, useState } from 'react';
function useDarkMode() {
const [isDarkMode, setIsDarkMode] = useState(false);
useEffect(() => {
const darkMode = window.matchMedia('(prefers-color-scheme: dark)').matches;
setIsDarkMode(darkMode);
}, []);
return isDarkMode;
}
function App() {
const isDarkMode = useDarkMode();
return <div>{isDarkMode ? 'Dark Mode' : 'Light Mode'}</div>;
}
```
## 24. useToggle
Toggles between boolean values.
```js
import { useState } from 'react';
function useToggle(initialValue = false) {
const [value, setValue] = useState(initialValue);
  const toggle = () => setValue((v) => !v); // functional update avoids stale state
return [value, toggle];
}
function App() {
const [isToggled, toggle] = useToggle();
return (
<div>
<button onClick={toggle}>{isToggled ? 'On' : 'Off'}</button>
</div>
);
}
```
## 25. useTheme
Toggles between light and dark themes.
```js
import { useEffect, useState } from 'react';
function useTheme() {
const [theme, setTheme] = useState('light');
useEffect(() => {
document.body.className = theme;
}, [theme]);
const toggleTheme = () => {
setTheme((prevTheme) => (prevTheme === 'light' ? 'dark' : 'light'));
};
return { theme, toggleTheme };
}
function App() {
const { theme, toggleTheme } = useTheme();
return (
<div>
<p>Current Theme: {theme}</p>
<button onClick={toggleTheme}>Toggle Theme</button>
</div>
);
}
```
## 26. useMedia
Queries media properties.
```js
import { useEffect, useState } from 'react';
function useMedia(query) {
const [matches, setMatches] = useState(window.matchMedia(query).matches);
useEffect(() => {
const mediaQueryList = window.matchMedia(query);
const listener = (event) => setMatches(event.matches);
    mediaQueryList.addEventListener('change', listener); // addListener is deprecated
    return () => mediaQueryList.removeEventListener('change', listener);
}, [query]);
return matches;
}
function App() {
const isLargeScreen = useMedia('(min-width: 800px)');
return <div>{isLargeScreen ? 'Large Screen' : 'Small Screen'}</div>;
}
```
## 27. useLockBodyScroll
Locks the body scroll.
```js
import { useEffect } from 'react';
function useLockBodyScroll() {
useEffect(() => {
const originalOverflow = window.getComputedStyle(document.body).overflow;
document.body.style.overflow = 'hidden';
return () => (document.body.style.overflow = originalOverflow);
}, []);
}
function App() {
useLockBodyScroll();
return <div>Body scroll is locked</div>;
}
```
## 28. useKeyPress
Detects key press.
```js
import { useEffect, useState } from 'react';
function useKeyPress(targetKey) {
const [keyPressed, setKeyPressed] = useState(false);
useEffect(() => {
const downHandler = ({ key }) => {
if (key === targetKey) setKeyPressed(true);
};
const upHandler = ({ key }) => {
if (key === targetKey) setKeyPressed(false);
};
window.addEventListener('keydown', downHandler);
window.addEventListener('keyup', upHandler);
return () => {
window.removeEventListener('keydown', downHandler);
window.removeEventListener('keyup', upHandler);
};
}, [targetKey]);
return keyPressed;
}
function App() {
const aPressed = useKeyPress('a');
return <div>{aPressed ? 'A is pressed' : 'Press A'}</div>;
}
```
## 29. useDocumentTitle
Updates document title.
```js
import { useEffect } from 'react';
function useDocumentTitle(title) {
useEffect(() => {
document.title = title;
}, [title]);
}
function App() {
useDocumentTitle('Custom Title');
return <div>Check the document title</div>;
}
```
## 30. useHover
Handles hover state.
```js
import { useCallback, useState } from 'react';
function useHover() {
const [hovered, setHovered] = useState(false);
const onMouseOver = useCallback(() => setHovered(true), []);
const onMouseOut = useCallback(() => setHovered(false), []);
return { hovered, onMouseOver, onMouseOut };
}
function HoverComponent() {
const { hovered, onMouseOver, onMouseOut } = useHover();
return (
<div onMouseOver={onMouseOver} onMouseOut={onMouseOut}>
{hovered ? 'Hovering' : 'Not Hovering'}
</div>
);
}
```
## 31. useGeolocation
Retrieves geolocation.
```js
import { useEffect, useState } from 'react';
function useGeolocation() {
const [location, setLocation] = useState({});
useEffect(() => {
navigator.geolocation.getCurrentPosition(
(position) => setLocation(position.coords),
(error) => console.error(error)
);
}, []);
return location;
}
function App() {
const { latitude, longitude } = useGeolocation();
return (
<div>
<p>Latitude: {latitude}</p>
<p>Longitude: {longitude}</p>
</div>
);
}
```
## 32. useScrollPosition
Tracks scroll position.
```js
import { useEffect, useState } from 'react';
function useScrollPosition() {
const [position, setPosition] = useState({ x: 0, y: 0 });
useEffect(() => {
const handleScroll = () => {
setPosition({ x: window.scrollX, y: window.scrollY });
};
window.addEventListener('scroll', handleScroll);
return () => window.removeEventListener('scroll', handleScroll);
}, []);
return position;
}
function App() {
const { x, y } = useScrollPosition();
return (
<div>
<p>Scroll Position: {`x: ${x}, y: ${y}`}</p>
</div>
);
}
```
## 33. useUnmount
Runs a function when a component unmounts.
```js
import { useEffect } from 'react';
function useUnmount(callback) {
useEffect(() => {
return () => callback();
}, [callback]);
}
function App() {
useUnmount(() => {
console.log('Component will unmount');
});
return <div>Unmount me to see the console message.</div>;
}
```
## 34. useClickOutside
Detects clicks outside an element.
```js
import { useEffect, useRef } from 'react';
function useClickOutside(handler) {
const ref = useRef();
useEffect(() => {
const listener = (event) => {
if (!ref.current || ref.current.contains(event.target)) {
return;
}
handler(event);
};
document.addEventListener('mousedown', listener);
document.addEventListener('touchstart', listener);
return () => {
document.removeEventListener('mousedown', listener);
document.removeEventListener('touchstart', listener);
};
}, [handler]);
return ref;
}
function App() {
const handleClickOutside = () => {
console.log('Clicked outside!');
};
const ref = useClickOutside(handleClickOutside);
return (
<div ref={ref} style={{ padding: '50px', border: '1px solid black' }}>
Click outside me!
</div>
);
}
```
## 35. useDebouncedCallback
Debounces a callback function.
```js
import { useCallback, useRef, useState } from 'react';
function useDebouncedCallback(callback, delay) {
  // A ref (not state) holds the timeout id: state would be stale inside
  // the memoized callback and would trigger a re-render on every call.
  const timeoutRef = useRef(null);
  const debouncedCallback = useCallback((...args) => {
    if (timeoutRef.current) {
      clearTimeout(timeoutRef.current);
    }
    timeoutRef.current = setTimeout(() => {
      callback(...args);
    }, delay);
  }, [callback, delay]);
  return debouncedCallback;
}
function App() {
const [value, setValue] = useState('');
const handleChange = useDebouncedCallback((e) => {
setValue(e.target.value);
}, 500);
return (
<input type="text" onChange={handleChange} />
);
}
```
## 36. useThrottle
Throttles a value over time.
```js
import { useEffect, useRef, useState } from 'react';
function useThrottle(value, limit) {
  const [throttledValue, setThrottledValue] = useState(value);
  const lastRan = useRef(Date.now());
  useEffect(() => {
    // Unlike a debounce, a throttle lets the value through at most
    // once every `limit` ms even while it keeps changing.
    const handler = setTimeout(() => {
      if (Date.now() - lastRan.current >= limit) {
        setThrottledValue(value);
        lastRan.current = Date.now();
      }
    }, limit - (Date.now() - lastRan.current));
    return () => clearTimeout(handler);
  }, [value, limit]);
  return throttledValue;
}
function App() {
const [text, setText] = useState('');
const throttledText = useThrottle(text, 1000);
return (
<div>
<input value={text} onChange={(e) => setText(e.target.value)} />
<p>Throttled Value: {throttledText}</p>
</div>
);
}
```
## 37. useUpdateEffect
Runs an effect only on updates, not on mount.
```js
import { useEffect, useRef, useState } from 'react';
function useUpdateEffect(effect, dependencies) {
const isInitialMount = useRef(true);
useEffect(() => {
if (isInitialMount.current) {
isInitialMount.current = false;
} else {
effect();
}
}, dependencies);
}
function App() {
const [count, setCount] = useState(0);
useUpdateEffect(() => {
console.log('Component updated');
}, [count]);
return (
<div>
<button onClick={() => setCount(count + 1)}>Increment</button>
<p>Count: {count}</p>
</div>
);
}
```
## 38. useLocalStorage
Manages state in local storage.
```js
import { useEffect, useState } from 'react';
function useLocalStorage(key, initialValue) {
const [storedValue, setStoredValue] = useState(() => {
try {
const item = window.localStorage.getItem(key);
return item ? JSON.parse(item) : initialValue;
} catch (error) {
console.error(error);
return initialValue;
}
});
const setValue = (value) => {
try {
setStoredValue(value);
window.localStorage.setItem(key, JSON.stringify(value));
} catch (error) {
console.error(error);
}
};
return [storedValue, setValue];
}
function App() {
const [name, setName] = useLocalStorage('name', 'Guest');
return (
<div>
<input value={name} onChange={(e) => setName(e.target.value)} />
</div>
);
}
```

## conclusion
In this article you learned about the top 38 React hooks and their use cases. Make sure to use these hooks in your code, and stay tuned for the next article.
{% embed https://dev.to/kumarkalyan %} | kumarkalyan |
1,882,292 | Understanding React.memo: Optimizing Your React Applications | Introduction Have you ever noticed your React app slowing down as it grows? If yes, you... | 0 | 2024-06-09T18:58:55 | https://dev.to/ak_23/understanding-reactmemo-optimizing-your-react-applications-436a | react, programming, learning, webdev |
#### Introduction
Have you ever noticed your React app slowing down as it grows? If yes, you are not alone. React.memo is one of those handy tools that can help you optimize your React application, making it more efficient and faster. Let's dive into what React.memo is, how it works, and how you can use it to boost your app's performance.
#### What is React.memo?
React.memo is a higher-order component (HOC) provided by React. It's designed to improve the performance of your functional components by preventing unnecessary re-renders. It achieves this by memoizing the component, which means it remembers the last rendered output and skips rendering if the props haven't changed.
#### How Does React.memo Work?
React.memo works similarly to `React.PureComponent` but for functional components. When a component wrapped with React.memo receives the same props as its previous render, React skips rendering and uses the memoized result instead.
Here's a simple example to illustrate this:
```javascript
import React from 'react';
const MyComponent = React.memo(({ name }) => {
console.log('Rendering MyComponent');
return <div>Hello, {name}!</div>;
});
export default MyComponent;
```
In this example, `MyComponent` will only re-render if the `name` prop changes. If the `name` remains the same, React will reuse the previous render output, thus saving processing time and resources.
#### When Should You Use React.memo?
Using React.memo can be particularly beneficial in the following scenarios:
1. **Pure Components**: If your component renders the same output given the same props, it's a good candidate for React.memo.
2. **Performance Bottlenecks**: When you identify performance issues due to frequent re-renders, React.memo can help reduce unnecessary renders.
3. **Large Components**: Components that involve heavy computations or complex rendering logic can benefit from memoization.
#### Example: Optimizing a List Component
Let's consider a more practical example. Imagine you have a list component that renders a list of items. If the list is large, re-rendering the entire list whenever a parent component updates can be costly. Here's how you can optimize it with React.memo:
```javascript
import React from 'react';
const ListItem = React.memo(({ item }) => {
console.log('Rendering ListItem', item.id);
return <li>{item.name}</li>;
});
const List = ({ items }) => {
return (
<ul>
{items.map(item => (
<ListItem key={item.id} item={item} />
))}
</ul>
);
};
export default List;
```
In this example, `ListItem` will only re-render if its `item` prop changes, reducing the number of re-renders significantly.
#### Gotchas and Considerations
While React.memo can improve performance, it's important to use it judiciously. Here are some points to consider:
1. **Shallow Comparison**: React.memo performs a shallow comparison of props. If your props are complex objects, consider using custom comparison logic.
2. **Overhead**: Memoization adds some overhead. If your component updates frequently and the re-render cost is low, memoization might not be beneficial.
3. **Development vs. Production**: Ensure you measure performance improvements in both development and production environments, as the benefits might vary.
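For the shallow-comparison caveat in point 1, `React.memo` accepts an optional second argument: a function `(prevProps, nextProps) => boolean` that returns `true` when the render can be skipped. A hedged sketch — the `user` prop shape here is assumed purely for illustration:

```javascript
// Skip re-rendering unless the user's id changed.
// (Assumed prop shape: { user: { id, name } } — adapt to your component.)
function areUsersEqual(prevProps, nextProps) {
  return prevProps.user.id === nextProps.user.id;
}

// Usage with React.memo's second argument:
// const UserCard = React.memo(({ user }) => <div>{user.name}</div>, areUsersEqual);

console.log(areUsersEqual({ user: { id: 1 } }, { user: { id: 1 } })); // → true
console.log(areUsersEqual({ user: { id: 1 } }, { user: { id: 2 } })); // → false
```

Note that returning `true` means "props are equal, skip the render" — the inverse of `shouldComponentUpdate`.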
#### Conclusion
React.memo is a powerful tool for optimizing your React applications. By preventing unnecessary re-renders, it helps improve performance, especially in large and complex applications. However, it's important to use it wisely, considering the trade-offs and specific use cases.
Next time you face performance issues, remember to consider React.memo as part of your optimization strategy. Happy coding!
---
Feel free to reach out if you have any questions or need further clarifications. Let's make our React applications faster and more efficient together! | ak_23 |
1,882,293 | Benefits of Using AJAX | AJAX, which stands for Asynchronous JavaScript and XML, is a technique used in web development to... | 0 | 2024-06-09T18:57:42 | https://dev.to/infobijoy/explanation-of-the-example-and-benefits-of-using-ajax-3gjf | webdev, javascript, jquery, programming | AJAX, which stands for Asynchronous JavaScript and XML, is a technique used in web development to create dynamic and interactive web applications. It allows web pages to be updated asynchronously by exchanging small amounts of data with the server behind the scenes. This means that parts of a web page can be updated without reloading the entire page, resulting in a more seamless user experience.
Here are the key concepts and components of AJAX:
1. **Asynchronous**:
- AJAX allows web applications to send and receive data from a server asynchronously, meaning it can happen in the background without affecting the display and behavior of the existing page.
2. **JavaScript**:
- AJAX primarily uses JavaScript to make asynchronous requests to the server. JavaScript's `XMLHttpRequest` object (or the `fetch` API in modern browsers) is used to send and receive data.
3. **XML (or JSON)**:
- Originally, XML (Extensible Markup Language) was used to format the data being sent and received via AJAX. However, nowadays, JSON (JavaScript Object Notation) is more commonly used due to its simplicity and ease of use with JavaScript.
4. **How AJAX Works**:
- A user action triggers an event (like clicking a button).
- JavaScript creates an `XMLHttpRequest` object.
- The `XMLHttpRequest` object sends a request to a web server.
- The server processes the request and sends back a response.
- JavaScript processes the server response.
- The web page is updated with the new data, without reloading the entire page.
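The JSON format mentioned above maps directly onto JavaScript values through two built-in functions, which is a large part of why it displaced XML:

```javascript
// JSON round-trip: the string sent over HTTP vs. the object the client uses.
const payload = { message: 'Hello, AJAX!', count: 2 };
const wire = JSON.stringify(payload); // serialized for the network
const parsed = JSON.parse(wire);      // back to a usable object
console.log(wire);         // → {"message":"Hello, AJAX!","count":2}
console.log(parsed.count); // → 2
```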
### Example of AJAX Workflow:
1. **HTML Code**:
```html
<!DOCTYPE html>
<html>
<head>
<title>AJAX Example</title>
</head>
<body>
<h1>AJAX Example</h1>
<button type="button" onclick="loadData()">Click me to load data</button>
<div id="result"></div>
<script>
function loadData() {
const xhr = new XMLHttpRequest();
xhr.open('GET', 'data.json', true);
xhr.onreadystatechange = function () {
if (xhr.readyState === 4 && xhr.status === 200) {
document.getElementById('result').innerHTML = xhr.responseText;
}
};
xhr.send();
}
</script>
</body>
</html>
```
2. **data.json**:
```json
{
"message": "Hello, this is the data loaded via AJAX!"
}
```
### Explanation of the Example:
1. **HTML and Button**: The HTML contains a button that, when clicked, triggers the `loadData` function.
2. **JavaScript Function (`loadData`)**:
- An `XMLHttpRequest` object is created.
- The `open` method configures the request with the HTTP method and the URL of the data source.
- The `onreadystatechange` event handler processes the server's response. When the request is complete (`readyState === 4`) and successful (`status === 200`), the response text is displayed in the `result` div.
- The `send` method sends the request to the server.
3. **Server Response**: The `data.json` file contains the data to be fetched. Once the AJAX request is successful, the content of `data.json` is displayed within the `result` div.
### Benefits of Using AJAX:
- **Improved User Experience**: By updating only parts of the page, AJAX creates a smoother and more responsive experience.
- **Reduced Bandwidth Usage**: Only necessary data is exchanged between the client and server, which can be more efficient.
- **Seamless Updates**: Users can interact with the web application without interruptions or full-page reloads.
### Modern Alternatives:
While `XMLHttpRequest` is still widely used, modern web development often utilizes the `fetch` API, which provides a more powerful and flexible feature set for making HTTP requests.
### Example Using `fetch`:
```html
<script>
function loadData() {
fetch('data.json')
.then(response => response.json())
.then(data => {
document.getElementById('result').innerHTML = data.message;
})
.catch(error => console.error('Error fetching data:', error));
}
</script>
```
In this example, the `fetch` API simplifies the code and makes it more readable by using promises. | infobijoy |
1,882,291 | Benefits of Using jQuery And Downsides | jQuery is a fast, small, and feature-rich JavaScript library. It simplifies many tasks commonly... | 0 | 2024-06-09T18:54:38 | https://dev.to/infobijoy/benefits-of-using-jquery-and-downsides-2pln | webdev, javascript, jquery, programming | jQuery is a fast, small, and feature-rich JavaScript library. It simplifies many tasks commonly associated with JavaScript, such as HTML document traversal and manipulation, event handling, animation, and AJAX. jQuery is designed to make things like DOM manipulation and AJAX interactions simpler and more concise.
### Key Features of jQuery:
1. **DOM Manipulation**:
- jQuery makes it easy to select, traverse, and manipulate DOM elements.
```html
<!-- HTML -->
<div id="content">Hello, World!</div>
<button id="changeText">Change Text</button>
<!-- jQuery -->
<script>
$('#changeText').click(function() {
$('#content').text('Hello, jQuery!');
});
</script>
```
2. **Event Handling**:
- jQuery provides a simple way to attach event handlers to elements.
```html
<button id="clickMe">Click Me!</button>
<script>
$('#clickMe').on('click', function() {
alert('Button clicked!');
});
</script>
```
3. **AJAX**:
- jQuery simplifies AJAX calls with easy-to-use methods like `$.ajax()`, `$.get()`, and `$.post()`.
```html
<button id="loadData">Load Data</button>
<div id="result"></div>
<script>
$('#loadData').click(function() {
$.get('data.json', function(data) {
$('#result').html(data.message);
});
});
</script>
```
4. **Animations**:
- jQuery provides methods to create animations and effects like `.hide()`, `.show()`, `.fadeIn()`, `.fadeOut()`, `.slideUp()`, `.slideDown()`, etc.
```html
<button id="fadeOut">Fade Out</button>
<div id="box" style="width:100px;height:100px;background:red;"></div>
<script>
$('#fadeOut').click(function() {
$('#box').fadeOut();
});
</script>
```
5. **Simplified Syntax**:
- jQuery allows you to write less code to achieve the same functionality compared to vanilla JavaScript.
```html
<ul>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
<script>
$('li').css('color', 'red');
</script>
```
### Including jQuery:
To use jQuery in your web project, you can include it in your HTML file by linking to the jQuery CDN (Content Delivery Network) or by downloading and hosting the jQuery library locally.
#### Using CDN:
```html
<!DOCTYPE html>
<html>
<head>
<title>jQuery Example</title>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
</head>
<body>
<script>
// Your jQuery code goes here
$(document).ready(function() {
console.log("jQuery is ready!");
});
</script>
</body>
</html>
```
#### Hosting Locally:
1. Download the jQuery library from [jQuery's official website](https://jquery.com/download/).
2. Include the downloaded file in your project directory and link it in your HTML file.
```html
<!DOCTYPE html>
<html>
<head>
<title>jQuery Example</title>
<script src="path/to/your/jquery.min.js"></script>
</head>
<body>
<script>
// Your jQuery code goes here
$(document).ready(function() {
console.log("jQuery is ready!");
});
</script>
</body>
</html>
```
### Example Use Cases with jQuery:
1. **Form Validation**:
- Validate form inputs before submitting to the server.
```html
<form id="myForm">
<input type="text" id="name" required>
<input type="email" id="email" required>
<button type="submit">Submit</button>
</form>
<script>
$('#myForm').on('submit', function(event) {
if ($('#name').val() === '' || $('#email').val() === '') {
alert('All fields are required!');
event.preventDefault();
}
});
</script>
```
2. **Dynamic Content Loading**:
- Load content into a page without refreshing it.
```html
<button id="loadContent">Load Content</button>
<div id="content"></div>
<script>
$('#loadContent').click(function() {
$('#content').load('content.html');
});
</script>
```
3. **Tabbed Navigation**:
- Create a tabbed interface for better content organization.
```html
<div class="tabs">
<button class="tab-link" data-tab="tab1">Tab 1</button>
<button class="tab-link" data-tab="tab2">Tab 2</button>
</div>
<div id="tab1" class="tab-content">Content for Tab 1</div>
<div id="tab2" class="tab-content" style="display:none;">Content for Tab 2</div>
<script>
$('.tab-link').click(function() {
var tab = $(this).data('tab');
$('.tab-content').hide();
$('#' + tab).show();
});
</script>
```
### Benefits of Using jQuery:
- **Cross-browser Compatibility**: jQuery handles many of the cross-browser inconsistencies, ensuring your code works on various browsers.
- **Community and Plugins**: A large community and a vast array of plugins are available to extend jQuery’s functionality.
- **Simplified JavaScript**: jQuery makes writing JavaScript easier and faster with its concise and readable syntax.
### Downsides:
- **Performance**: Vanilla JavaScript can be faster for some tasks, especially in modern browsers with highly optimized JavaScript engines.
- **File Size**: Including jQuery adds extra kilobytes to your project, which can be significant for performance-sensitive applications.
- **Learning Curve**: While jQuery simplifies many tasks, it introduces its own syntax and methods that need to be learned.
Overall, jQuery remains a powerful and useful tool in web development, especially for quickly building robust and interactive web applications. However, with the advancement of JavaScript and the introduction of modern frameworks like React, Vue.js, and Angular, the need for jQuery has diminished in many modern projects. | infobijoy |
1,882,290 | 2024 Guide to Succeeding in Technical Interviews | Whether you are an aspiring software engineer or a professional seeking a career change, finding your... | 0 | 2024-06-09T18:54:13 | https://dev.to/abdullah-dev0/2024-guide-to-succeeding-in-technical-interviews-ek2 | webdev, interview, beginners, tutorial | Whether you are an aspiring software engineer or a professional seeking a career change, finding your dream job in the modern tech industry can be challenging. Receiving an interview call is a significant achievement. However, the real challenge begins after you pass the rigorous application phase and secure an interview slot; thorough preparation is essential to succeed.
From exploring various platforms and consulting different guides to gathering numerous resources, the process can be daunting. Nevertheless, with a proper plan and roadmap for the interview, you can manage it effectively. To address this need, we have designed a specialized boot camp to train and guide you on how to excel in technical interviews.
In this article, we will break down and explain the 12-week boot camp to help you prepare efficiently and confidently for your next big interview. Let's get started!
## **What to Expect in a Tech Interview**
---
Technical interviews can differ significantly based on the company, the role, and the level of the position you are applying for. However, there are several common types of questions you can anticipate:
- **Technical Skill Assessment**: These questions evaluate your proficiency in specific technical skills relevant to the job, such as programming languages (e.g., Python, Java, C++), frameworks (e.g., React, Angular), databases (e.g., MySQL, MongoDB), and tools (e.g., Git, Docker).
- **Coding Challenges**: You will likely be asked to solve one or more coding problems during the interview. These can range from algorithmic puzzles (e.g., sorting, searching, recursion, dynamic programming) to system design challenges (e.g., designing a scalable web application). The objective is to assess your problem-solving skills, coding ability, and familiarity with computer science fundamentals.
- **System Design Questions**: Especially for senior-level positions, you may be asked to design a complex system (e.g., a URL shortening service, a chat application). These questions test your ability to architect scalable, efficient, and reliable systems. You will need to discuss your design choices, considering factors such as scalability, load balancing, database schemas, and APIs.
- **Behavioral Questions**: These questions evaluate how you have handled situations in the past, with a focus on teamwork, conflict resolution, leadership, and problem-solving. Examples include, "Tell me about a time when you faced a challenging bug and how you resolved it," or "Describe a project where you took the lead."
- **Technical Knowledge and Theory**: You will be asked questions to test your understanding of computer science basics, such as data structures, algorithms, operating systems, networking, and databases. This might include explaining how sorting algorithms work or discussing the principles behind RESTful services.
> In summary, preparing for these types of questions involves reviewing your technical fundamentals, practicing coding problems, studying system design principles, reflecting on your past experiences, and understanding the company's products, technology stack, and culture.
## **How to Efficiently Prepare for a Tech Interview in 12 Weeks**
---
Tech interviews demand a solid understanding of foundational concepts and technical skills. To succeed, you must study effectively and develop the necessary skills without wasting time.
The two primary areas to focus on for tech interviews are coding and system design questions.
## **Here is the complete weekly breakdown of your interview preparation:**
| Week | Focus Area | Topics |
|----------|----------------|------------------------------------------------------------------------------------------|
| Week 1 | Coding | Introduction to Data Structures, Arrays, Matrix |
| | System Design | Introduction to System Design, Load Balancing, API Gateways |
| Week 2 | Coding | Stack, Queue, Linked List |
| | System Design | Distributed Systems, DNS, Caching |
| Week 3 | Coding | Tree, HashTable, HashSet |
| | System Design | CDN, Data Partitioning, Proxy Server |
| Week 4 | Coding | Heap, Graph, Trie |
| | System Design | Replication, CAP & PACELC Theorems, Databases, Indexes |
| Week 5 | Coding | Recursion |
| | System Design | Bloom Filters, Long-Polling, WebSockets, Quorum, Heartbeat |
| Week 6 | Coding | Two Pointers Pattern, Fast & Slow Pointers Pattern |
| | System Design | Checksum, Leader & Follower, Messaging System |
| Week 7 | Coding | Sliding Window Pattern, Merge Intervals Pattern, Cyclic Sort Pattern |
| | System Design | System Design Interview, Master Template, URL Shortening, Pastebin |
| Week 8 | Coding | In-place Reversal of a LinkedList, Monotonic Stack, Tree BFS, Tree DFS |
| | System Design | Dropbox, Facebook Messenger, Twitter, Netflix |
| Week 9 | Coding | Island Pattern, Two Heaps Pattern, Subsets Pattern |
| | System Design | Typeahead Suggestion, API Rate Limiter, Twitter Search, Web Crawler |
| Week 10 | Coding | Modified Binary Search, Bitwise XOR, Top 'K' Elements |
| | System Design | Facebook Newsfeed, Yelp, Uber, Ticketmaster |
| Week 11 | Coding | K-way Merge, Backtracking, Topological Sort, Multi-threaded |
| | System Design | Key-Value Store, Mock Interview |
| Week 12 | Coding | Dynamic Programming, 0/1 Knapsack, Fibonacci Numbers, Longest Common Substring |
| | System Design | Distributed Messaging System |
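To give a concrete taste of one coding topic from the plan, here is a sketch of the Two Pointers pattern (Week 6) applied to the classic pair-with-target-sum problem on a sorted array. The function name and the exact problem statement are illustrative, not taken from any specific curriculum:

```javascript
// Two Pointers pattern: find two values in a SORTED array that sum to a target.
// One pointer starts at each end; move whichever pointer brings the sum
// closer to the target. Runs in O(n) time with O(1) extra space.
function pairWithSum(sorted, target) {
  let left = 0;
  let right = sorted.length - 1;
  while (left < right) {
    const sum = sorted[left] + sorted[right];
    if (sum === target) return [sorted[left], sorted[right]];
    if (sum < target) left++;  // need a bigger sum
    else right--;              // need a smaller sum
  }
  return null; // no pair found
}

console.log(pairWithSum([1, 2, 4, 7, 11], 11)); // [ 4, 7 ]
```

In an interview, being able to explain *why* the pointer movement never skips a valid pair is as important as producing the code itself.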
| abdullah-dev0 |
1,882,257 | Modular Monolith: A disruptive guide to architecting your React app | Intro This writing was inspired by the article: “How to Structure Vue Projects”. It... | 0 | 2024-06-09T18:52:19 | https://dev.to/artiumws/modular-monolith-a-disruptive-guide-to-architecting-your-react-app-2gji | webdev, javascript, programming, react | ## Intro
This writing was inspired by the article: “[How to Structure Vue Projects](https://dev.to/alexanderop/how-to-structure-vue-projects-20i4)”.
It explains various front-end application structures. It’s really insightful and can be applied to other front-end frameworks.
A year ago, my team and I started a new project from scratch. The discussion about its structure and stack became crucial. We decided on a modular monolithic architecture.
I will detail what motivated our choices and hope these insights will inspire you for your future projects.
## Context
Through my experience, I have worked on several projects, each with its own specificities. They can be classified into these categories:
### Monolithic
A monolithic architecture is an all-in-one project containing all application functionalities.
- **Advantages:** Easy to onboard new team members, maintain, update dependencies, and add new features. Deployment is straightforward and depends on one CI/CD pipeline.
- **Disadvantages:** As the engineering team grows, it leads to more conflict management. Deployment can bottleneck when two teams try to release simultaneously. There is no clear ownership of features.
In short, it doesn’t scale well.
### Micro-front end
A micro-front end architecture is a split application where each feature lives in a separate project, owned by a dedicated team.
- **Advantages:** Each team can add new features without conflict or deployment bottlenecks. Each micro-front end has its own CI/CD pipeline, allowing specific actions for each project.
- **Disadvantages:** Harder to onboard new team members and update dependencies. It requires more configuration and maintenance, often necessitating a dedicated DevOps team.
In short, it scales well — until it doesn’t.
What happens when you need to scale down?
In 2023, team reductions due to layoffs led to shared ownership and less-maintained parts, usually pipelines and configurations, which hurt the developer experience and slowed down the entire process.
I wouldn't advise any company smaller than FAANG size to go for it.
With that said, let’s introduce the architecture we decided to use.
## Challenges:
- Enabling multiple teams to work on the project without conflicts
- Maintaining the project by one or several teams
- Having a single project that is easy to maintain
- Ensuring low complexity for easy onboarding
- Scalable architecture that can scale both up and down
## Modular monolith
A modular monolith is an all-in-one project where each feature lives in a separate shell.
In other words, each feature has its own folder and does not share any components, state, or logic with other features. If shared logic, state, or components are needed, they are stored at the root of the project.

This architecture is flexible and can be adapted to your stack and needs.
For example, you can add `layouts` if you need layouts for your pages or `containers` if you are working with Redux.
## Architecture details
I will detail the choices made for our stack and the conventions implemented to address the challenges.
## Conventions
- **Avoid the trend tech syndrome:**
New JavaScript libraries are released frequently, making it hard to stay focused on a stack. It might seem like there's always a library that solves an issue in a more elegant way.
Don’t fall into this trap. Stick to the stack you have defined.
- **Avoid nesting in components folder (The atomic design trap):**
It's common for people to store components related to a parent component within its folder, leading to a deeply nested structure that is hard for newcomers to navigate.
Prefer a flat structure within your components folder.

- **Put Logic in Hooks:**
Start by writing the logic within your component. When the logic takes up too much space, move it to a hook.
This approach is subjective and may vary for each individual.
- **Place testable logic in utils:**
Separate logic that requires native hooks (useState…) from the logic that doesn't. Move the latter to a utils file if it needs to be tested.
More details about this are in the testing section.
- **Maximise the utility of third-party dependencies:**
Avoid adding a dependency that can handle multiple things if you only use one of its features.
For example, using RxJS to handle simple HTTP requests when the Fetch API can do it just as well.
## Stacks
### Shared state (`stores` in structure diagram)
Shared state management is always a highly opinionated topic. There are different preferences based on various factors:
- **Redux:** Preferred by those who like a stricter pattern, a strong community, and solid debugging tools.
- **MobX:** Preferred by those who prefer more freedom in implementation and a fine-grained updating system to improve optimisation.
- **New Trends (Jotai or Zustand):** Preferred by those who enjoy new trends because they are hook-like and easy to implement.
Our choice was **React Context**.
Some may argue that React Context, if not memoized, will re-render all children, and the syntax can be cumbersome.
They would be right.
However, it is native to React, has a low learning curve, and in version 19, the syntax will be simplified.
The Virtual DOM (VDOM) is performant, and re-rendering all children rarely causes issues. If it does, just memoize it properly. React has announced their new React-Compiler (see [React-Compiler: When react becomes Svelte](https://dev.to/artiumws/react-compiler-when-react-becomes-svelte-5969)), which will optimise during build time.
### Styles
**CSS-in-JS (styled-components)**
Should I really explain why it's a bad idea?
- **Size of the documentation:** Why do we need a migration guide just to deal with CSS?

- **JS handling CSS:** Why would JavaScript be better at handling CSS than CSS itself?
- **Component instead of class:** Why do we need to create a component to apply border styles?

I don't recommend using it in a project.
**Tailwind**
Inline styling done with elegance
- **Locality of behaviour:** Styles are directly within the component
- **Easy syntax:** Simple and intuitive
- **Optimised:** Minification and network compression
I highly recommend Tailwind for any project.
However, newcomers may face a small learning curve to memorise the classes. To onboard people quickly and easily, we decided to go with the following:
**Inline-style**
The easiest way to handle CSS
- **Locality of behaviour:** Styles are directly within the component
- **Super easy:** Straightforward and quick to implement
If we needed selectors or animations, we used:
**CSS modules**
Namespaced CSS
- **Local to your component:** Scopes class names to your component
- **Native CSS syntax:** Uses regular CSS syntax
### Tests
**Unit test (components)**
I strongly believe that unit testing components is a waste of time and energy.
Components will be tested manually with or without unit tests. Testing things that can be easily spotted by eye is meaningless.
The only case where I’ve seen a benefit is when working on a UI library that serves several projects.
**Unit test (logic)**
As previously mentioned, if you need to test logic, extract it from a custom hook and place it in a utils file. Separate the logic from native hooks (e.g., `useState`, `useEffect`) and test the logic itself.
Example:
If a custom hook formats data before storing it in a state, extract the format data logic into a utils file and test it there.
Testing a function is easier than testing a hook.
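To make that convention concrete, here is a hedged sketch (all names are invented for illustration, not taken from the project): the formatting logic lives in a plain utils function with no React imports, so a test is just a function call.

```javascript
// utils/formatUsers.js (illustrative): pure data formatting, no React.
// Takes raw API records and returns display-ready objects.
function formatUsers(rawUsers) {
  return rawUsers
    .filter((user) => user.active) // drop inactive records
    .map((user) => ({
      id: user.id,
      displayName: `${user.firstName} ${user.lastName}`.trim(),
    }));
}

// The hook then stays thin, only wiring the pure function to state:
//   const [users, setUsers] = useState([]);
//   useEffect(() => {
//     fetchUsers().then((raw) => setUsers(formatUsers(raw)));
//   }, []);

console.log(formatUsers([
  { id: 1, firstName: 'Ada', lastName: 'Lovelace', active: true },
  { id: 2, firstName: 'Bob', lastName: 'Smith', active: false },
]));
// [ { id: 1, displayName: 'Ada Lovelace' } ]
```

Testing `formatUsers` needs no renderer or hook harness, which is exactly the point of the convention.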
**End-to-end (E2E) testing**
I strongly believe that unit testing components is a waste of time and energy **because** of E2E testing.
Testing a user scenario covers more than just components. It checks if all features of your application are well-implemented together.
E2E testing ensures that critical user paths are working properly: logging in, buying, liking, sharing, etc., depending on your application's business model.
These tests are directly linked to the value of your product, which is more crucial than making sure a FAQ button dispatches an event properly.
## Conclusion
Each project has its own specificities, and each team has its familiarities with stacks. This architecture and stack won’t fit every project.
_The best stack is the one you know_
An expert in JavaScript will build a better app in JavaScript than in Rust, even if Rust is known for its performance.
We were a team of five (designer, product manager, engineering manager, and engineers) working on this project. It was intended as an internal tool for our company, and it works great.
The workflow is really smooth, and other teams can implement features without difficulties.
The success of choosing an architecture for your project relies on:
- What do you want to achieve with it?
- Who will work on it?
- Who will maintain it?
- Are all members of the team aligned with it?
---
I hope you enjoyed this article.
If so, don’t hesitate to keep in contact with me:
{% cta https://x.com/ArtiumWs %} Stay in touch on X (Twitter) {% endcta %}
If not, please feel free to add your critiques in the comments.
---
## Sources
What is locality of behaviour:
https://htmx.org/essays/locality-of-behaviour/
What is modular monolith:
https://www.milanjovanovic.tech/blog/what-is-a-modular-monolith
How to structure a vue project:
https://dev.to/alexanderop/how-to-structure-vue-projects-20i4
Documentation about react-hooks-testing-library:
https://github.com/testing-library/react-hooks-testing-library
Documentation about styled components:
https://styled-components.com/
Documentation about atomic design:
https://atomicdesign.bradfrost.com/chapter-2/
How tailwind optimise for production:
https://tailwindcss.com/docs/optimizing-for-production
What is css modules:
https://github.com/css-modules/css-modules | artiumws |
1,882,288 | Optimizing Web Performance: Tips and Techniques | Why Web Performance Matters: Web development is crucial because it shapes the way we... | 0 | 2024-06-09T18:47:39 | https://dev.to/ellaokah/optimizing-web-performance-tips-and-techniques-1h60 | webdev, javascript, performance, css |
## Why Web Performance Matters:
[Web development](https://en.wikipedia.org/wiki/Web_development#:~:text=Web%20development%20is%20the%20work,businesses%2C%20and%20social%20network%20services.) is crucial because it shapes the way we experience the internet.
Websites and web applications are essential for businesses, organizations, and individuals to connect with their audiences. A well-developed website ensures that information is easily accessible, engaging, and user-friendly. It also plays a significant role in a company’s online presence, affecting its reputation and reach.
For businesses, a strong web presence can lead to increased visibility, customer engagement, and sales. Websites serve as a platform for marketing, communication, and e-commerce. They also offer essential services like customer support and information dissemination.
Furthermore, good web development ensures sites are fast, secure, and responsive, meaning they work well on all devices, including smartphones and tablets. This enhances user experience and satisfaction, making it more likely for visitors to return. In summary, web development is vital for effective digital communication, business success, and providing a seamless user experience.
### Key areas of optimization
- [Minimize HTTP Requests](https://sematext.com/glossary/http-requests/#:~:text=An%20HTTP%20request%20is%20made,to%20access%20the%20server%20resources.): Each element on a web page, like images, scripts, and stylesheets, requires a separate request to the server. Combining files and reducing the number of elements can speed up load times.
- Image Optimization: Large images can slow down a website. Using modern image formats like WebP, compressing images, and using the correct size for each device ensures faster loading without sacrificing quality.
- [Lazy Loading](https://www.cloudflare.com/learning/performance/what-is-lazy-loading/): This technique loads images and videos only when they are about to appear on the screen. It helps speed up the initial load time, especially for content-heavy pages.
- Minification/Minimization: Removing unnecessary characters from code files (like spaces and comments) reduces their size, making the website load faster.
- Efficient Font Loading: Fonts can be heavy. Using only the necessary font styles and optimizing their delivery ensures they load quickly without blocking other content.
- Caching: Storing parts of a website on the user’s device means that returning visitors don’t need to load everything from scratch, speeding up their experience.
- [Content Delivery Network (CDN)](https://www.inmotionhosting.com/blog/what-is-a-content-delivery-network/?mktgp=t&utm_medium=cpc&utm_source=google&utm_term=&utm_campaign=Dedi+2024+Performance+Max+Jun&hsa_acc=8307215010&hsa_cam=21114932322&hsa_grp=&hsa_ad=&hsa_src=x&hsa_tgt=&hsa_kw=&hsa_mt=&hsa_net=adwords&hsa_ver=3&gad_source=1&gclid=Cj0KCQjwpZWzBhC0ARIsACvjWROQP4MMBo0IPutsjTJ_p1WmguE74Gv_xheLEvK8BsRl5KFnSR1bvsMaAhuYEALw_wcB): A CDN distributes website content across multiple servers around the world. This reduces the distance between the user and the server, resulting in faster load times.
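The check at the heart of lazy loading, "is this element close enough to the viewport to start loading it?", boils down to a small pure function. This sketch is illustrative only; in practice you would use `IntersectionObserver` or the built-in `loading="lazy"` attribute on images and iframes:

```javascript
// Core question behind lazy loading: is this element within the visible
// viewport plus a pre-load margin, so we should start fetching it now?
function shouldStartLoading(elementTop, scrollY, viewportHeight, margin = 200) {
  return elementTop <= scrollY + viewportHeight + margin;
}

// An image 3000px down the page, 800px-tall viewport, user still at the top:
console.log(shouldStartLoading(3000, 0, 800));    // false: not yet
// After the user scrolls 2100px, the image enters the 200px pre-load margin:
console.log(shouldStartLoading(3000, 2100, 800)); // true: start loading
```

The margin exists so the asset starts downloading slightly before it scrolls into view, avoiding a visible pop-in.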
## Asset Management
Asset management in web development refers to the efficient handling of all the files and resources that make up a website, such as images, CSS, JavaScript, and fonts. Proper asset management ensures that a website loads quickly and runs smoothly, which is crucial for keeping visitors engaged.
1. Minification: This process involves removing unnecessary characters from code files (like spaces, comments, and line breaks). This makes the files smaller, so they load faster. For example, minified CSS and JavaScript files are quicker for browsers to download and process.
2. Compression: Compressing files using tools like Gzip or Brotli reduces their size without losing any information. Smaller files mean faster download times for users.
3. Efficient Font Loading: Fonts can be bulky and slow down a website. Using only the necessary font styles and sizes, and ensuring they load in a way that doesn’t block other content, can significantly improve load times.
4. Image Optimization: Large images can slow down a site. By compressing images, using modern formats like WebP, and serving the right image sizes for different devices, you ensure images load quickly without sacrificing quality.
5. Content Delivery Network (CDN): A CDN distributes your website’s assets across multiple servers worldwide. When a user visits your site, the CDN delivers assets from the server closest to them, reducing load times and improving performance.
Good asset management helps create a fast, efficient, and user-friendly website, keeping visitors happy and engaged.
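To make minification concrete, here is a deliberately naive JavaScript sketch of the idea. Real minifiers such as Terser or cssnano are syntax-aware and do far more, like renaming identifiers and dropping dead code:

```javascript
// Toy "minifier": strips // line comments, collapses whitespace runs,
// and trims the result. Illustration only, not production-safe
// (it would, for example, mangle a "//" inside a string literal).
function naiveMinify(source) {
  return source
    .split('\n')
    .map((line) => line.replace(/\/\/.*$/, '')) // drop line comments
    .join(' ')
    .replace(/\s+/g, ' ')                       // collapse whitespace
    .trim();
}

const input = `
  const x = 1;   // seed value
  const y = x + 1;
`;
console.log(naiveMinify(input)); // "const x = 1; const y = x + 1;"
```

Every byte removed this way is a byte the browser never has to download or parse.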
## Strategies for Caching
Caching strategies in web development are like saving shortcuts on your computer to access files faster. When you visit a website, your browser saves some parts of it, so when you come back later, it doesn't have to load everything from scratch. This makes the website load faster and saves time.
Websites also use servers in different locations to store copies of their files. When you visit a website, these copies are delivered to you from the nearest server, cutting down on the time it takes for information to reach you. This is like getting your order delivered from a nearby store instead of one far away.
Overall, caching strategies help websites load quicker and give you a smoother experience, just like how saving shortcuts and using nearby servers make things faster and more convenient in your everyday life.
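The cache-first lookup described above can be sketched in a few lines of JavaScript. This is an illustrative toy: real browser caching is driven by HTTP headers such as `Cache-Control` and by service workers, but the shape of the logic is the same:

```javascript
// Minimal cache-aside helper: check the cache first, fall back to the
// expensive source only on a miss, and remember the answer for next time.
function makeCachedFetcher(fetchFromSource) {
  const cache = new Map();
  let misses = 0;
  return {
    get(key) {
      if (!cache.has(key)) {   // cache miss: go to the "server"
        misses++;
        cache.set(key, fetchFromSource(key));
      }
      return cache.get(key);   // cache hit: instant answer
    },
    misses: () => misses,
  };
}

const fetcher = makeCachedFetcher((url) => `response for ${url}`);
fetcher.get('/home'); // miss: fetched from the source
fetcher.get('/home'); // hit: served from the cache
console.log(fetcher.misses()); // 1
```

The second visit to `/home` costs nothing, which is exactly why returning visitors experience faster page loads.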
## JavaScript and CSS Optimization
JavaScript and CSS are like the superpowers behind how websites look and behave. JavaScript makes things move and interact, while CSS styles everything to look just right.
Optimizing JavaScript and CSS is about making them work faster and more efficiently, like superheroes with streamlined costumes. For JavaScript, this means writing code that does the job with fewer steps, like using shortcuts to save time. It's like finding the quickest route to your destination.
In CSS, optimization involves simplifying styles and reducing unnecessary code. Imagine organizing your wardrobe by color and type of clothing to find what you need faster.
When JavaScript and CSS are optimized, websites load quicker and run smoother. It's like having superheroes who can leap into action faster, making your online experience more enjoyable and seamless.
## Monitoring
Monitoring in web development is like having a watchful eye over a website's health and performance. It involves keeping track of how the site is doing, checking for any issues, and making sure everything is running smoothly. It's like regularly checking the temperature, pulse, and vital signs of a patient to ensure they're healthy. Monitoring tools help developers spot problems quickly, like detecting a fever before it turns into something serious. By monitoring regularly, developers can catch and fix issues early, ensuring the website stays in top shape and provides a seamless experience for users.
## Conclusion
Optimizing web performance is like giving your website a turbo boost. By following tips like minimizing file sizes, using efficient coding techniques, and implementing caching, you can make your site load faster and run smoother. This ensures a better experience for visitors, keeping them happy and engaged.
| ellaokah |
1,882,287 | Searching Castles Using Go, MongoDB, Github Actions And Web Scraping | It’s Been A While! It’s been a while since last one! TL;DR: work, life, study… you know... | 0 | 2024-06-09T18:42:28 | https://www.buarki.com/blog/find-castles | go, mongodb, webscraping, opendata |
## It’s Been A While!
It’s been a while since last one! TL;DR: work, life, study… you know :)
## A Project Of Open Data Using Go
In March 2024, I created a small project to experiment with using **Server-Sent Events (SSE)** in a Go web server to continuously send data to a frontend client. It wasn’t anything particularly fancy, but it was still pretty cool :)
The project involved a small server written in Go that served a minimalist frontend client created with raw HTML, vanilla JavaScript, and Tailwind CSS. Additionally, it provided an endpoint where the client could open an SSE connection. The basic goal was for the frontend to have a button that, once pressed, would trigger a server-side search to collect data about castles. As the castles were found, they would be sent from the server to the frontend in real-time. I focused on castles from the United Kingdom and Portugal, and the project worked nicely as you can see below:

The code of such minimalist project can be found [here](https://github.com/buarki/find-castles/blob/10d0f8604011f0a98939b3fc4d70ccca4db6f401/cmd/standalone/main.go) and you can follow the README instructions to run it on you local machine.
A few days ago, I revisited this project and decided to expand it to include more countries. However, after several hours of searching, I couldn’t find an official consolidated dataset of castles in Europe. I did find a few datasets focused on specific countries, but none that were comprehensive. Therefore, for the sake of having fun with Go and because I have a passion for history, I started the project **Find Castles**. The **goal of this project is to create a comprehensive dataset of castles by collecting data from available sources, cleaning it, preparing it, and making it available via an API**.
## Why Go Really Shines For This Project?
Goroutines and channels! The biggest part of this project's code will be navigating through websites, collecting and processing data and, in the end, saving it in the database. By using Go we leverage the ease that the language offers us to implement these complex operations while keeping the maximum possible amount of hair :)
## How It Works So Far?
So far I have implemented data collectors for only 3 countries: **Ireland, Portugal and the United Kingdom**, because good data sources for these countries were not hard to find.
The current implementation basically has two main stages: inspecting websites for the links containing castle data, and the data extraction itself. This process is the same for all countries, so an interface was introduced to establish a stable API for current and future enrichers:
```go
type Enricher interface {
CollectCastlesToEnrich(ctx context.Context) ([]castle.Model, error)
EnrichCastle(ctx context.Context, c castle.Model) (castle.Model, error)
}
```
If you want to see the implementation of at least one, [here you can find the enricher for Ireland](https://github.com/buarki/find-castles/blob/10d0f8604011f0a98939b3fc4d70ccca4db6f401/enricher/ireland.go).
Once we have enrichers able to scrape and extract data from proper sources, we can actually collect data using the **executor package**. This package manages the execution of enrichers by leveraging goroutines and channels, distributing the workload among the available CPUs.
The executor's current definition and constructor can be seen below:
```go
type EnchimentExecutor struct {
enrichers map[castle.Country]enricher.Enricher
cpus int
}
func New(
cpusToUse int,
httpClient *http.Client,
enrichers map[castle.Country]enricher.Enricher) *EnchimentExecutor {
cpus := cpusToUse
availableCPUs := runtime.NumCPU()
if cpusToUse > availableCPUs {
cpus = availableCPUs
}
return &EnchimentExecutor{
cpus: cpus,
enrichers: enrichers,
}
}
```
The execution process is basically a **data pipeline** in which the first stage looks for castles to be enriched, the next stage extracts data from the given sources, and the last one persists it in the DB.
The first stage spawns goroutines to find the castles; as castles are found, they are pushed into a channel. We then merge those channels into a single one to be consumed by the next stage:
```go
func (ex *EnchimentExecutor) collectCastles(ctx context.Context) (<-chan castle.Model, <-chan error) {
var collectingChan []<-chan castle.Model
var errChan []<-chan error
for _, enricher := range ex.enrichers {
castlesChan, castlesErrChan := ex.toChanel(ctx, enricher)
collectingChan = append(collectingChan, castlesChan)
errChan = append(errChan, castlesErrChan)
}
return fanin.Merge(ctx, collectingChan...), fanin.Merge(ctx, errChan...)
}
func (ex *EnchimentExecutor) toChanel(ctx context.Context, e enricher.Enricher) (<-chan castle.Model, <-chan error) {
castlesToEnrich := make(chan castle.Model)
errChan := make(chan error)
go func() {
defer close(castlesToEnrich)
defer close(errChan)
englandCastles, err := e.CollectCastlesToEnrich(ctx)
if err != nil {
errChan <- err
}
for _, c := range englandCastles {
castlesToEnrich <- c
}
}()
return castlesToEnrich, errChan
}
```
The second stage spawns a group of goroutines listening to the output channel of the previous stage; as castles arrive, each goroutine extracts data by scraping the castle's HTML page. As extraction finishes, the enriched castles are pushed into another channel.
```go
func (ex *EnchimentExecutor) extractData(ctx context.Context, castlesToEnrich <-chan castle.Model) (chan castle.Model, chan error) {
enrichedCastles := make(chan castle.Model)
errChan := make(chan error)
go func() {
defer close(enrichedCastles)
defer close(errChan)
for {
select {
case <-ctx.Done():
return
case castleToEnrich, ok := <-castlesToEnrich:
if ok {
enricher := ex.enrichers[castleToEnrich.Country]
enrichedCastle, err := enricher.EnrichCastle(ctx, castleToEnrich)
if err != nil {
errChan <- err
} else {
enrichedCastles <- enrichedCastle
}
} else {
return
}
}
}
}()
return enrichedCastles, errChan
}
```
And the main executor's function that ties it all together is the one below:
```go
func (ex *EnchimentExecutor) Enrich(ctx context.Context) (<-chan castle.Model, <-chan error) {
castlesToEnrich, errChan := ex.collectCastles(ctx)
enrichedCastlesBuf := []<-chan castle.Model{}
castlesEnrichmentErr := []<-chan error{errChan}
for i := 0; i < ex.cpus; i++ {
receivedEnrichedCastlesChan, enrichErrs := ex.extractData(ctx, castlesToEnrich)
enrichedCastlesBuf = append(enrichedCastlesBuf, receivedEnrichedCastlesChan)
castlesEnrichmentErr = append(castlesEnrichmentErr, enrichErrs)
}
enrichedCastles := fanin.Merge(ctx, enrichedCastlesBuf...)
enrichmentErrs := fanin.Merge(ctx, castlesEnrichmentErr...)
return enrichedCastles, enrichmentErrs
}
```
The full current implementation of the executor can be found [here](https://github.com/buarki/find-castles/blob/10d0f8604011f0a98939b3fc4d70ccca4db6f401/executor/executor.go).
The last stage just consumes the channel of enriched castles and saves them in bulk into MongoDB:
```go
castlesChan, errChan := castlesEnricher.Enrich(ctx)
var buffer []castle.Model
for {
select {
case castle, ok := <-castlesChan:
if !ok {
if len(buffer) > 0 {
if err := db.SaveCastles(ctx, collection, buffer); err != nil {
log.Fatal(err)
}
}
return
}
buffer = append(buffer, castle)
if len(buffer) >= bufferSize {
if err := db.SaveCastles(ctx, collection, buffer); err != nil {
log.Fatal(err)
}
buffer = buffer[:0]
}
case err := <-errChan:
if err != nil {
log.Printf("error enriching castles: %v", err)
}
}
}
```
You can find the current version of main.go [here](https://github.com/buarki/find-castles/blob/10d0f8604011f0a98939b3fc4d70ccca4db6f401/cmd/enricher/main.go). This process runs periodically as a scheduled job [created with GitHub Actions](https://github.com/buarki/find-castles/blob/10d0f8604011f0a98939b3fc4d70ccca4db6f401/.github/workflows/collect-and-enrich-castles.yml).
## Next Steps
This project has a considerable roadmap ahead; below you can find the next steps.
**1. Implement recursive crawling**: to add more enrichers, it must be possible to crawl a website recursively, because some sites have huge lists of castles spread across paginated pages.
**2. Support for multiple enrichment website sources for the same country**: the project must also support multiple sources for the same country, since, from what I have seen, several useful sources can exist for a single country.
**3. Develop an official website**: in the meantime, an official website must be built to make the collected data available and to show progress. The site is in progress and you can already visit it here. Due to my lack of design skills it is ugly as hell, but stay tuned and we'll get over it :)
**4. Integrate machine learning for filling data gaps**: machine learning will certainly help a lot, especially in complementing data that is hard to find via the regular enrichers: by prompting models for hard-to-find data, we can efficiently fill in gaps and enrich the dataset.
**Contributions Are Welcome!**
This project is open source and all **collaborations are more than welcome**! Whether you’re interested in backend development, frontend design, or any other aspect of the project, your input is valuable.
If you find anything you want to contribute — especially with frontend :) — just open an issue on the [repository](https://github.com/buarki/find-castles) and ask for code review.
This article was originally posted on my personal site: https://www.buarki.com/blog/find-castles
| buarki |
1,882,284 | Cypress Testing Framework | Cypress is a testing framework for Javascript that allows you to easily create tests for your web... | 0 | 2024-06-09T18:40:35 | https://dev.to/uhrinuh/cypress-testing-framework-32jh | Cypress is a testing framework for JavaScript that allows you to easily create tests for your web application. You're able to test the app directly in the browser, debug issues directly in the browser, and eliminate flaky tests by interacting with your application the same way users do, so you can discover bugs before users do.
In this blog, I will discuss how to create tests in your application using Cypress!
**Step 1: Download Cypress and Add npm Script**
You're going to need to download Cypress using _npm install cypress --save-dev_ and then you can add the script to your package.json file by adding _"cy:open": "cypress open"_.
**Step 2: Open Cypress**
Then, you can open Cypress from the project root using _npx cypress open_ or _npm run cy:open_ and this will open the Cypress Launchpad. If it's your first time using Cypress, you will see a page like so

**Step 3: Decide between E2E testing or Component testing**
This is where you will decide on whether to use End to End testing or Component testing.

_E2E Testing_
- Great at verifying your app runs as intended from the front end to the back end
- Good for making sure your entire app is functioning as a cohesive whole
- Testing this way helps ensure your tests and the user's experience are the same
_Component Testing_
- Tests individual components, not the app as a whole
- Focus on testing only the component's functionality and not worry about testing the component as part of the larger app
- Just because all component tests pass, it does not mean the app is functioning as a whole
The documentation recommends using E2E if you are unsure of which type you want and you can always choose a different type later.
**Step 4: Quick Configuration**
This will show all of the configuration files being added to your project.

**Step 5: Choosing a Browser**
You will be presented with a list of compatible browsers Cypress found on your system and you can choose what works best for you. I will be using Chrome.

**Step 6: Add a Test File**
You are going to click Create new spec

and this will prompt you to choose the path for your new spec. You can just accept the default path name. Then, you will see that the spec was successfully added and you can exit out of that dialog. Now that spec will be displayed in your E2E specs and you can click on it to see Cypress launch it.
**Step 7: Write a test**
Depending on if your test passes or fails, Cypress will update like so

or like so

The tests reload in real time so once you create a test, you should see it pass or fail in real time. Tests in Cypress use describe and it from Mocha and expect from Chai, which are popular frameworks that many people have used before, making them very accessible and easy to use. They also have an ESLint plugin for projects that use Cypress: https://github.com/cypress-io/eslint-plugin-cypress
**Step 8: How to Write a Good Test**
To write a good test, you should cover 3 things: set up the application state, take an action, and make an assertion about the resulting application state. First put the app into a specific state, then take some action in the app that causes it to change, and finally check the resulting app state. A rule of thumb with writing tests is to write clear test descriptions, avoid unnecessary repetition in tests, and to structure tests in a readable and maintainable way.
**Step 9: Debugging**
With Cypress, the test code runs in the same loop as your app so you have access to the code running on the page, as well as the things the browser makes available to you. Cypress makes debugging easy by using the .debug() method like so

Then, you're able to use the Developer Tools to get a view of what is going on

You can also use .pause() to run the test command by command.
And that's it! The best way to learn is by doing and the Cypress documentation is a really good and easy tool to use to put testing into action! Happy testing!
**Sources**
https://www.cypress.io/
https://docs.cypress.io/guides/overview/why-cypress
https://docs.cypress.io/guides/getting-started/opening-the-app
https://docs.cypress.io/guides/core-concepts/testing-types#What-is-E2E-Testing
https://docs.cypress.io/guides/end-to-end-testing/writing-your-first-end-to-end-test
https://docs.cypress.io/guides/guides/debugging | uhrinuh | |
1,882,282 | JavaScript Latest Version: What's New? | JavaScript's latest update, ECMAScript 2023 (ES14), released in June 2023, brings several exciting... | 0 | 2024-06-09T18:37:27 | https://dev.to/azeem_shafeeq/javascript-latest-version-whats-new-532m | javascript, webdev, programming, tutorial | JavaScript's latest update, ECMAScript 2023 (ES14), released in June 2023, brings several exciting features to enhance coding efficiency and capabilities. Here's a quick overview of what's new:
- Well-Formed Unicode Strings: Ensures better handling of global text standards.
- Atomics.waitAsync: Improves communication in programs sharing memory.
- RegExp v Flag and String Properties: Enhances pattern finding in texts.
- Top-Level Await: Simplifies asynchronous coding without needing an async function wrapper.
- Pipeline Operator: Streamlines chaining commands for readability.
- Records and Tuples: Introduces immutable data structures.
- Decorators: Offers a way to annotate and modify classes and methods.
- Pattern Matching: Provides a robust method for data interrogation.
- Temporal: Aims to solve long-standing issues with dates and times in JavaScript.
- Ergonomic Brand Checks: Simplifies the type checking of custom data.
- Realms API: Allows creating isolated code environments.
These features aim to make JavaScript more intuitive, secure, and powerful for developers. However, several of them (the pipeline operator, records and tuples, decorators, pattern matching, Temporal, and the Realms API) are still TC39 proposals rather than finalized parts of ES2023, and not all features are fully supported across all browsers yet, so checking compatibility is crucial. The article further discusses practical applications, compatibility and migration strategies, and gives a glimpse into the future of JavaScript and what's expected in its next version.
**Understanding ECMAScript**
- ECMAScript is the official name for the set of rules that JavaScript follows
- Every new version adds new features or makes changes
- A group called Ecma TC39 decides what gets added
- They release a new version every year to keep things fresh
- Web browsers try to support the new version as soon as it's out
**Overview of Major Changes**
The latest version, ECMAScript 2023 (ES14), includes some cool updates:
- New ways to work with arrays, like findLast() and findLastIndex()
- Methods to sort and reverse arrays, called toSorted() and toReversed()
- Better ways to use numbers in coding
- A new feature for finding patterns in text, called RegExp match indices API
- A way to explain errors better with Error cause extension
These changes make it easier to do a lot of common tasks when coding.
**Key Features**
**Well-Formed Unicode Strings**
The idea here is to make sure JavaScript can handle text in any language without messing up. Now, there's a way to check if text is written correctly according to global standards. This means:
- No more mix-ups with special characters that don’t match up right.
- Making sure paired characters are always used together correctly.
This helps avoid mistakes when working with text from around the world, making things more predictable when you're coding.
_**Key Takeaways**_
The latest update to JavaScript, known as ECMAScript 2023 or ES14, brings in some handy tools for developers:
- Top-level await makes it easier to deal with tasks that wait on something else, like loading data, by allowing await to be used on its own.
- New array methods such as findLast() and findLastIndex() are like the reverse of methods we already have, helping to look through data backwards.
- Atomics API helps keep data safe when multiple tasks are happening at once, avoiding mix-ups.
- RegExp improvements make it simpler to find and work with specific pieces of text.
- Records and tuples are types of data that can't be changed, which helps keep things consistent.
These changes make JavaScript more reliable, easier to use, and ready for new challenges like working with data in parallel.
_**The Future of JavaScript**_
JavaScript is always getting small but important updates every year. This keeps the language growing in a good way. Looking ahead, we see more focus on making code easier to read and write with things like pattern matching, decorators, and the pipeline operator. There’s also a push to let developers customize how JavaScript works for them. The Temporal API is all about dealing with dates and times better, and the Realms API is for keeping different parts of a program separate and safe. There’s also ongoing work to make JavaScript work better for people all around the world. JavaScript is being used in more places than just websites, like functions that run in the cloud. The language is becoming more flexible and easy for developers to use, no matter where they are coding. The next updates are looking to make JavaScript even more powerful for creating all kinds of applications.
_**Related Questions**_
What's new in JavaScript 2024?
JavaScript keeps getting better to help web developers do their jobs easier. Looking forward to ES2024, we can expect some cool updates like:
- Realms API: This feature lets you keep different parts of your JavaScript code separate, which is great for security and organization.
- Immutable data structures: This will introduce things that can't be changed once they're created, like arrays and objects. This helps prevent mistakes.
- Advanced pattern matching: This makes it easier to sift through data and find exactly what you need without a lot of hassle.
- Decorator syntax: This adds special notes to classes and their parts to say how they should work, making code easier to manage.
These updates are all about making code clearer, faster, and safer.
**What is the newest version of JavaScript?**
The latest update is ECMAScript 2023 (ES14), which came out in June 2023. It brings in new features like top-level await, handy array methods such as findLast(), ways to safely share data between threads, and better ways to work with text.
ECMAScript is the official name for JavaScript's rules, and it gets updated every year.
**What is new in JS?**
Here are some of the latest additions to JavaScript:
- Top-level await - Makes waiting for things like data loads simpler.
- New array methods (findLast() and findLastIndex()) help search arrays from the end.
- Atomics API for safely sharing data between threads.
- Improved text searching with regular expressions.
- Unchangeable data structures (records and tuples).
- Temporal API for easier handling of dates and times.
These features improve how we handle waiting for things, searching through data, and managing dates and times.
**What are the new features of JavaScript es8?**
ES8, also known as ECMAScript 2017, introduced several helpful features:
- Object.entries() and Object.values() make it easier to work with objects.
- Object.getOwnPropertyDescriptors() lets you see detailed info about object properties.
- Async/await syntax for simpler asynchronous code.
- Shared memory and atomics for working with data across multiple threads.
- padStart() and padEnd() for adding padding to strings.
_ES8 focused on making asynchronous code easier to write and improving how we work with objects and strings._
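A few of these ES8 features in action, as a small runnable sketch:

```javascript
const user = { name: "Ada", role: "admin" };

// Object.entries() / Object.values() turn objects into easy-to-iterate arrays
console.log(Object.entries(user)); // [["name", "Ada"], ["role", "admin"]]
console.log(Object.values(user)); // ["Ada", "admin"]

// padStart() / padEnd() add padding to strings
console.log("7".padStart(3, "0")); // "007"
console.log("7".padEnd(3, "-")); // "7--"

// async/await makes asynchronous code read like synchronous code
async function getAnswer() {
  const value = await Promise.resolve(42);
  return value;
}
getAnswer().then((v) => console.log(v)); // 42
```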
| azeem_shafeeq |
1,882,281 | Download Output – FX Bundle for (Windows) | Looking to enhance your music production experience on your Windows platform? Look no further than... | 0 | 2024-06-09T18:36:14 | https://dev.to/plugins_forest05_90df5565/download-output-fx-bundle-for-windows-387j | musicproduction, windows, fxbundle, downloadoutput | Looking to enhance your music production experience on your Windows platform? Look no further than the Download [Output – FX Bundle (Windows)](https://pluginsforest.com/product/output-fx-bundle-windows/), a comprehensive suite designed to elevate your sound game to unprecedented heights. Packed with cutting-edge features and a plethora of dynamic tools, this bundle is your ultimate solution for crafting immersive and professional-grade audio productions.
Elevate Your Sound Experience
Experience a revolution in audio manipulation with the Download Output – FX Bundle. This versatile collection of effects plugins empowers you to transform ordinary sounds into extraordinary sonic masterpieces. Whether you're a seasoned producer or just starting out, these tools provide endless possibilities for experimentation and creativity.
Seamless Integration
Seamlessly integrate the FX Bundle into your workflow with its user-friendly interface and intuitive controls. With compatibility across major digital audio workstations (DAWs), including Ableton Live, FL Studio, and Logic Pro, you can unleash your creativity without constraints.
Dive Into a World of Possibilities
Unlock a world of sonic exploration with an extensive array of effects, including reverbs, delays, distortions, and more. Experiment with unique soundscapes, create captivating textures, and add depth and dimension to your tracks with ease.
Unparalleled Versatility
From subtle enhancements to bold transformations, the FX Bundle offers unparalleled versatility to suit any musical style or genre. Whether you're producing electronic dance music, cinematic soundtracks, or ambient compositions, these plugins are your secret weapon for crafting signature sounds that stand out from the crowd.
Stay Ahead of the Curve
Stay ahead of the competition and keep your productions fresh and innovative with regular updates and new releases. With our commitment to excellence and continuous development, you'll always have access to the latest tools and technologies to fuel your creativity.
Join the Community
Connect with like-minded producers and share your passion for music production in our vibrant online community. Gain valuable insights, collaborate on projects, and stay inspired as you embark on your musical journey.
Take Your Sound to the Next Level
Elevate your sound experience and take your productions to new heights with the Download Output – FX Bundle (Windows). Whether you're a professional producer, hobbyist, or enthusiast, this bundle is your ticket to unlocking the full potential of your creativity.
#DownloadOutput #FXBundle #Windows #MusicProduction #AudioEffects #SoundDesign #DigitalAudioWorkstation #CreativeTools #SonicExploration #ProducerCommunity #Innovation #Versatility #SignatureSounds #Inspiration #ProfessionalGrade #ImmersiveExperience #ContinuousDevelopment #StayCreative #ElevateYourSound #UnleashCreativity
Transform your sonic landscape and revolutionize your music production process with the Download [Output – FX Bundle (Windows)](https://pluginsforest.com/product/output-fx-bundle-windows/). Download now and experience the future of audio innovation! | plugins_forest05_90df5565 |
1,882,277 | HackerRank SQL Preparation: Weather Observation Station 1(MySQL) | Problem Statement: Query a list of CITY and STATE from the STATION table. Link: HackerRank - Weather... | 0 | 2024-06-09T18:31:33 | https://dev.to/christianpaez/hackerrank-sql-preparation-weather-observation-station-1mysql-1gke | sql, writeups, hackerrank, mysql |
**Problem Statement:**
Query a list of CITY and STATE from the **STATION** table.
**Link:** [HackerRank - Weather Observation Station 1](https://www.hackerrank.com/challenges/weather-observation-station-1/problem)
**Solution:**
```sql
SELECT CITY, STATE FROM STATION;
```
**Explanation:**
- `SELECT CITY, STATE`: This part of the query specifies that you want to retrieve the `CITY` and `STATE` columns from the **STATION** table.
- `FROM STATION`: Indicates that you are selecting data from the **STATION** table. | christianpaez |
1,882,276 | Handling AWS WAF Capacity Limits with Terraform | When managing AWS WAF resources using Terraform, one common issue is exceeding the Web ACL Capacity... | 0 | 2024-06-09T18:28:02 | https://dev.to/mukulsharma/handling-aws-waf-capacity-limits-with-terraform-1741 | terraform, aws, webacl, waf | When managing AWS WAF resources using Terraform, one common issue is exceeding the Web ACL Capacity Units (WCUs) limit, which can lead to deployment failures. This blog post will guide you through understanding WCUs, checking capacity using AWS CLI, and integrating this process into your Terraform workflow to ensure smooth deployments.
#### Understanding WCUs in AWS WAF
AWS WAF uses Web ACL Capacity Units (WCUs) to measure the resources required to run rules, rule groups, and Web ACLs. Each rule type has a different capacity requirement based on its complexity and processing needs. The capacity limits help AWS manage the performance and cost of running WAF rules efficiently.
- **Simple Rules**: Require fewer WCUs (e.g., size constraint rules).
- **Complex Rules**: Require more WCUs (e.g., regex pattern sets).
For detailed WCU calculations, refer to the [AWS WAF documentation](https://docs.aws.amazon.com/waf/latest/developerguide/aws-waf-capacity-units.html).
#### Common Error
When you exceed the WCU limits, you may encounter the following error:
```
WAFInvalidParameterException: Error reason: You exceeded the capacity limit for a rule group or web ACL.
```
#### Checking WCU Requirements with AWS CLI
Terraform lacks built-in functionality to check WCUs before applying configurations. However, AWS CLI provides a `check-capacity` command to verify the WCU requirements. Here’s how you can use it:
1. **Define WAF Rules in JSON**:
Create a JSON file (`rules.json`) with the rules you want to include:
```json
{
"Rules": [
{
"Name": "Rule1",
"Priority": 1,
"Statement": {
"ByteMatchStatement": {
"SearchString": "BadBot",
"FieldToMatch": {
"UriPath": {}
},
"TextTransformations": [
{
"Priority": 0,
"Type": "NONE"
}
]
}
},
"Action": {
"Block": {}
},
"VisibilityConfig": {
"SampledRequestsEnabled": true,
"CloudWatchMetricsEnabled": true,
"MetricName": "Rule1Metric"
}
}
]
}
```
2. **Run the `check-capacity` Command**:
Use the AWS CLI to check the capacity:
```bash
aws wafv2 check-capacity --scope REGIONAL --rules file://rules.json
```
The output will look like this:
```json
{
"Capacity": 15
}
```
This command returns the total WCU requirement for your specified rules. Ensure it doesn't exceed 5,000 WCUs for a single Web ACL.
#### Integrating Capacity Check with Terraform
To avoid deployment failures due to WCU limits, integrate the capacity check into your Terraform workflow using a shell script:
1. **Create a Shell Script**:
Write a script (`check_capacity.sh`) to perform the capacity check:
```bash
#!/bin/bash
CAPACITY=$(aws wafv2 check-capacity --scope REGIONAL --rules file://rules.json | jq '.Capacity')
if [ "$CAPACITY" -gt 5000 ]; then
echo "Error: Capacity units exceeded. Current capacity: $CAPACITY"
exit 1
else
echo "Capacity check passed. Current capacity: $CAPACITY"
     exit 0
fi
```
2. **Integrate with Terraform**:
Modify your Terraform workflow to include the script:
```bash
   # The script exits non-zero when the capacity limit is exceeded, so we
   # gate terraform apply on its exit status (an exported variable would
   # not survive the child shell that runs the script).
   if ./check_capacity.sh; then
     terraform apply
   else
     echo "Terraform apply aborted due to capacity issues."
   fi
```
By incorporating this script, you ensure that your Terraform deployments only proceed if the WCU requirements are within limits, preventing potential failures and downtime.
### Conclusion
Handling WCU limits is crucial for successful AWS WAF deployments using Terraform. By leveraging the `check-capacity` command in AWS CLI and integrating it into your Terraform workflow, you can proactively manage capacity and avoid deployment issues. This approach ensures a smoother, more reliable infrastructure management process, especially until Terraform gains similar built-in functionality.
For further reading, check the official [AWS WAF documentation on WCUs](https://docs.aws.amazon.com/waf/latest/developerguide/aws-waf-capacity-units.html) and the [AWS CLI `check-capacity` command reference](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/wafv2/check-capacity.html). | mukulsharma |
1,882,275 | Gold Bangles Design | The Timeless Elegance of Gold Bangles Design Gold bangles have long been a symbol of elegance,... | 0 | 2024-06-09T18:27:26 | https://dev.to/parakkatjewels/gold-bangles-design-45fa | The Timeless Elegance of Gold Bangles Design
Gold bangles have long been a symbol of elegance, tradition, and timeless beauty. Their captivating designs not only adorn the wrists but also tell a story of cultural heritage, personal style, and exquisite craftsmanship. Whether you’re a jewelry enthusiast or someone looking to add a touch of sophistication to your collection, exploring the diverse world of [gold bangles design](https://parakkatjewels.com/collections/24-carat-heavy-gold-plated-bangles?usf_sort=bestselling) can be both inspiring and delightful.
A Glimpse into Tradition
For centuries, gold bangles have been an integral part of various cultures, especially in South Asia and the Middle East. These bangles are often passed down through generations, symbolizing family legacy and tradition. Traditional gold bangles are known for their elaborate patterns, detailed filigree work, and the inclusion of precious stones. These designs often feature motifs inspired by nature, mythology, and ancient art, showcasing the artisan's skill and creativity.
Modern Interpretations
As fashion evolves, so do gold bangle designs. Contemporary designers have taken traditional elements and blended them with modern aesthetics to create pieces that appeal to the modern woman. Sleek, minimalist designs have become increasingly popular, characterized by clean lines and subtle elegance. These bangles are perfect for everyday wear, adding a touch of sophistication without being overly ornate.
Fusion of Styles
One of the most exciting trends in gold bangle design is the fusion of traditional and modern styles. Designers are experimenting with unconventional shapes, textures, and finishes, creating bangles that are both unique and versatile. This fusion approach allows for a broader appeal, catering to those who appreciate the charm of classic designs and those who prefer contemporary fashion.
Customization and Personalization
In today's market, customization is key. Many jewelers offer personalized gold bangle designs, allowing customers to choose specific motifs, engravings, or even incorporate birthstones. This trend of personalization adds sentimental value to the jewelry, making it a cherished piece for years to come.
Choosing the Perfect Gold Bangle
When selecting a gold bangle, consider the following factors:
1. **Design and Style**: Choose a design that reflects your personal style and complements your wardrobe. Whether you prefer intricate traditional patterns or sleek modern designs, there’s a gold bangle for every taste.
2. **Quality and Craftsmanship**: Pay attention to the craftsmanship. High-quality gold bangles should have a smooth finish, precise detailing, and sturdy construction.
3. **Comfort and Fit**: Ensure the bangle fits comfortably on your wrist. It should be easy to wear and remove, without being too tight or too loose.
4. **Occasion**: Consider the occasion for which you’re purchasing the bangle. Heavier, more ornate designs are perfect for weddings and festive events, while simpler designs are ideal for everyday wear.
Conclusion
Gold bangles are more than just pieces of jewelry; they are expressions of art, culture, and personal style. From traditional to contemporary, the world of gold bangle design offers something for everyone. Whether you’re adding to your collection or buying your first piece, the timeless elegance of gold bangles will always make a statement. Explore different designs, find what resonates with you, and let these beautiful pieces enhance your style and grace. | parakkatjewels | |
1,882,274 | HackerRank SQL Preparation: Japanese Cities' Names(MySQL) | Problem Statement: Query the names of all Japanese cities in the CITY table. The COUNTRYCODE for... | 0 | 2024-06-09T18:24:42 | https://dev.to/christianpaez/hackerrank-sql-preparation-japanese-cities-namesmysql-5ap1 | sql, mysql, writeups, hackerrank |
**Problem Statement:**
Query the names of all Japanese cities in the **CITY** table. The `COUNTRYCODE` for Japan is `JPN`.
**Link:** [HackerRank - Japanese Cities Name](https://www.hackerrank.com/challenges/japanese-cities-name/problem)
**Solution:**
```sql
SELECT NAME FROM CITY WHERE COUNTRYCODE = 'JPN';
```
**Explanation:**
- `SELECT NAME`: This part of the query specifies that you want to retrieve the `NAME` column from the **CITY** table.
- `FROM CITY`: Indicates that you are selecting data from the **CITY** table.
- `WHERE COUNTRYCODE = 'JPN'`: This condition filters the rows to include only those cities where the `COUNTRYCODE` is 'JPN' (Japan). | christianpaez |
1,882,273 | Learn the Linked List Data Structure by Building a Blockchain in JavaScript | Overview Building a blockchain from scratch is an excellent way to understand both the... | 0 | 2024-06-09T18:24:05 | https://dev.to/hasan_py/learn-the-linked-list-data-structure-by-building-a-blockchain-in-javascript-1gcb | javascript, dsa, beginners, programming | ### Overview
Building a blockchain from scratch is an excellent way to understand both the intricacies of blockchain technology and fundamental data structures like linked lists. In this article, we’ll explore how to create a simple blockchain using JavaScript, highlighting how the blockchain resembles a singly linked list.
### Now the question is what is a Blockchain?
A blockchain is a decentralized digital ledger. What does that mean? You can think of it as a database that records transactions. And what is a transaction here? You can say it's data. That means a blockchain stores data across many computers in a way that ensures security, transparency, and immutability, meaning the data can't be manipulated. Each block in a blockchain contains a cryptographic hash of the previous block, a timestamp, and transaction data. Still don't understand what a blockchain is? [Learn more](https://www.investopedia.com/terms/b/blockchain.asp)
### How is it related to Linked Lists?
A linked list is a linear data structure where each element, known as a node, points to the next element in the sequence. In a singly linked list, each node contains data and a reference (or pointer) to the next node in the list.
In blockchain, each block is like a node in a linked list:
- Data: Contains transaction details.
- Pointer: Holds the hash of the previous block.
This creates a chain of blocks, where each block points to its previous block, forming a linked list.
### Let's build a Blockchain in JavaScript
You need the latest version of `nodejs` on your PC, so download it first.
#### Step 1: Program setup
Now create a file named `blockchain.js`. Our code will go here.
We need a cryptographic library called `crypto-js`. This library is for generating hashes. So what is a hash? You feed data into an algorithm and it produces a fixed-length string of characters. The result looks like a long random string, so nobody can tell what the original data was just by looking at it. Unlike encryption, hashing is one-way: you cannot get the original data back from the hash. Instead, you verify data by recomputing the hash from the same input and checking that it matches, and that is exactly how a blockchain detects tampering.
Now open the terminal and install it by this command:
```bash
npm install crypto-js
```
#### Step 2: Setting Up the Block Class
First, we need a Block class that will represent each block in our blockchain.
```js
const SHA256 = require("crypto-js/sha256");
class Block {
constructor(index, timestamp, data, previousHash = "") {
this.index = index;
this.timestamp = timestamp;
this.data = data;
this.previousHash = previousHash;
this.hash = this.calculateHash();
}
calculateHash() {
return SHA256(
this.index +
this.previousHash +
this.timestamp +
JSON.stringify(this.data)
).toString();
}
}
```
- `SHA-256`: We are using this algorithm
- Block Class: Each block has an index, timestamp, data, the hash of the previous block, and its own hash. You can say it's like a node of a linked list.
- `calculateHash()`: Generates a SHA-256 hash based on the block’s properties.
#### Step 3: Creating the Blockchain Class
Next, we need a Blockchain class to manage the chain of blocks.
```js
class Blockchain {
constructor() {
this.chain = [this.createGenesisBlock()];
}
createGenesisBlock() {
return new Block(0, "10/06/2024", "Genesis Block", "0");
}
getLatestBlock() {
return this.chain[this.chain.length - 1];
}
addBlock(newBlock) {
newBlock.previousHash = this.getLatestBlock().hash;
newBlock.hash = newBlock.calculateHash();
this.chain.push(newBlock);
}
isChainValid() {
for (let i = 1; i < this.chain.length; i++) {
const currentBlock = this.chain[i];
const previousBlock = this.chain[i - 1];
if (currentBlock.hash !== currentBlock.calculateHash()) {
return false;
}
if (currentBlock.previousHash !== previousBlock.hash) {
return false;
}
}
return true;
}
}
```
- Blockchain Class: Manages the chain of blocks means the list.
- `createGenesisBlock()`: Creates the first block in the blockchain.
- `getLatestBlock()`: Retrieves the latest block in the chain.
- `addBlock(newBlock)`: Adds a new block to the chain, setting its previous hash to the hash of the current latest block, then recalculates its own hash.
- `isChainValid()`: Validates the integrity of the blockchain by checking hashes.
Now, what does validation mean here? Suppose we have a database and we want to add some data to it; what do we do first? We prepare and validate the data. It's the same situation here: we must be able to verify the blocks in the blockchain. The `isChainValid()` function does exactly that, checking the integrity of the whole chain.
#### Step 4: Adding and Validating Blocks
In this step, we will create an instance of our Blockchain class, add some blocks to it, and then validate the chain to ensure its integrity.
Let's break down the code and explain each part in detail.
- Creating a Blockchain Instance
First, we create a new instance of our Blockchain class. This instance will start with a single block, the genesis block, which is automatically created by the constructor of the Blockchain class.
```js
let myBlockchain = new Blockchain();
```
- Adding Blocks
Next, we add new blocks to our blockchain. Each block contains an index, a timestamp, and some data. When a new block is added, its previousHash property is set to the hash of the latest block in the chain, and its own hash is calculated using the calculateHash method.
```js
myBlockchain.addBlock(new Block(1, "11/06/2024", { amount: 4 }));
myBlockchain.addBlock(new Block(2, "12/06/2024", { amount: 8 }));
```
- Validating the Blockchain
After adding blocks, we can validate the entire blockchain to ensure its integrity. The isChainValid method checks if all the blocks in the chain are valid by comparing their hashes and the previousHash values.
```js
console.log("Blockchain valid?", myBlockchain.isChainValid());
```
- Displaying the Blockchain
Finally, we can display the blockchain by converting it to a JSON string and logging it to the console.
```js
console.log(JSON.stringify(myBlockchain, null, 4));
```
Now open the terminal and run the program:
```bash
node blockchain.js
```
This will output a neatly formatted representation of the entire blockchain, showing all the blocks and their properties.
### Now let's understand how this blockchain relates to linked lists
This blockchain can be thought of as a specialized type of linked list. Both data structures store a sequence of elements (blocks in blockchain, nodes in linked list) where each element contains a reference to the next (and in some cases, the previous) element. Let's explore how the concepts align and then illustrate this with some examples.
### Similarities Between Blockchain and Linked Lists
#### Sequential Storage:
- Linked List: Each node points to the next node in the sequence.
- Blockchain: Each block points to the next block through the previousHash property, forming a chain.
#### References:
- Linked List: Nodes have pointers or references to the next node.
- Blockchain: Blocks have a previousHash that references the hash of the previous block.
#### Addition:
- Linked List: New nodes are added by adjusting the pointers of the existing and new nodes.
- Blockchain: New blocks are added by setting the previousHash of the new block to the hash of the latest block, then appending it to the chain.
#### Traversal:
- Linked List: You can traverse from the head (or any starting node) to the end by following the next pointers.
- Blockchain: You can traverse from the genesis block to the latest block by following the previousHash references.
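The traversal parallel can be made concrete with a short sketch. The block objects below are simplified stand-ins (the `hash` values are placeholders, not real SHA-256 digests): we walk from the latest block back to the genesis block by following `previousHash` references, just as we would follow `next` pointers in a linked list, only in the reverse direction.

```javascript
// Minimal stand-in blocks: only the fields needed to show traversal.
const chain = [
  { index: 0, previousHash: "0",    hash: "aaa1" }, // genesis block
  { index: 1, previousHash: "aaa1", hash: "bbb2" },
  { index: 2, previousHash: "bbb2", hash: "ccc3" },
];

// Walk from the latest block back to genesis by following previousHash,
// exactly like following `next` pointers in a linked list (just reversed).
function traverseBackwards(chain) {
  const byHash = new Map(chain.map((b) => [b.hash, b]));
  const visited = [];
  let current = chain[chain.length - 1];
  while (current) {
    visited.push(current.index);
    current = byHash.get(current.previousHash); // undefined at the genesis block
  }
  return visited;
}

console.log(traverseBackwards(chain)); // [ 2, 1, 0 ]
```

The `Map` lookup plays the role that a direct object reference plays in a linked list node.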
### Differences Between Blockchain and Linked Lists
#### Immutability:
- Linked List: Nodes can be changed freely.
- Blockchain: Once a block is added, it is immutable; any change to a block would invalidate the entire chain from that block onwards.
#### Data Security:
- Linked List: No inherent security features.
- Blockchain: Each block contains a cryptographic hash that ensures data integrity and security.
#### Consensus Mechanisms:
- Linked List: No need for consensus; nodes are managed centrally.
- Blockchain: Often part of a decentralized system requiring consensus mechanisms (like Proof of Work or Proof of Stake) to validate new blocks.
#### Example: Linked List Implementation
Let's see a simple linked list implementation in JavaScript:
```js
class Node {
constructor(data) {
this.data = data;
this.next = null;
}
}
class LinkedList {
constructor() {
this.head = null;
}
add(data) {
const newNode = new Node(data);
if (!this.head) {
this.head = newNode;
} else {
let current = this.head;
while (current.next) {
current = current.next;
}
current.next = newNode;
}
}
print() {
let current = this.head;
while (current) {
console.log(current.data);
current = current.next;
}
}
}
// Usage
let list = new LinkedList();
list.add(1);
list.add(2);
list.add(3);
list.print();
```
This structure consists of a sequence of elements called nodes. Each node contains two parts:
- Data: The value stored in the node.
- Next: A reference (or pointer) to the next node in the sequence.
Here’s a simple visual representation of a linked list:
```
[Data|Next] -> [Data|Next] -> [Data|Next] -> null
```
In this structure, the first node is called the head, and the last node points to null, indicating the end of the list. Linked lists are dynamic, meaning they can grow and shrink in size as elements are added or removed.
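The `LinkedList` above only grows. To show the "shrink" half of being dynamic, here is a variant with a `remove` method (an addition for illustration, not part of the original class) that unlinks a node by re-pointing the previous node's `next` reference around it:

```javascript
class Node {
  constructor(data) {
    this.data = data;
    this.next = null;
  }
}

class LinkedList {
  constructor() {
    this.head = null;
  }
  add(data) {
    const newNode = new Node(data);
    if (!this.head) {
      this.head = newNode;
      return;
    }
    let current = this.head;
    while (current.next) current = current.next;
    current.next = newNode;
  }
  // Remove the first node holding `data` by re-pointing the previous
  // node's `next` around it — this is the "shrink" operation.
  remove(data) {
    if (!this.head) return false;
    if (this.head.data === data) {
      this.head = this.head.next;
      return true;
    }
    let current = this.head;
    while (current.next && current.next.data !== data) current = current.next;
    if (!current.next) return false;
    current.next = current.next.next;
    return true;
  }
  toArray() {
    const out = [];
    for (let n = this.head; n; n = n.next) out.push(n.data);
    return out;
  }
}

// Usage
let list = new LinkedList();
list.add(1);
list.add(2);
list.add(3);
list.remove(2);
console.log(list.toArray()); // [ 1, 3 ]
```

Notice that a blockchain deliberately has no equivalent of `remove`: once a block is linked in, taking it out (or changing it) would break every hash reference after it.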
Our blockchain:
```js
const crypto = require("crypto-js");
class Block {
constructor(index, timestamp, data, previousHash = "") {
this.index = index;
this.timestamp = timestamp;
this.data = data;
this.previousHash = previousHash;
this.hash = this.calculateHash();
}
calculateHash() {
return crypto
.SHA256(
this.index +
this.previousHash +
this.timestamp +
JSON.stringify(this.data)
)
.toString();
}
}
class Blockchain {
constructor() {
this.chain = [this.createGenesisBlock()];
}
createGenesisBlock() {
return new Block(0, "10/06/2024", "Genesis Block", "0");
}
getLatestBlock() {
return this.chain[this.chain.length - 1];
}
addBlock(newBlock) {
newBlock.previousHash = this.getLatestBlock().hash;
newBlock.hash = newBlock.calculateHash();
this.chain.push(newBlock);
}
isChainValid() {
for (let i = 1; i < this.chain.length; i++) {
const currentBlock = this.chain[i];
const previousBlock = this.chain[i - 1];
if (currentBlock.hash !== currentBlock.calculateHash()) {
return false;
}
if (currentBlock.previousHash !== previousBlock.hash) {
return false;
}
}
return true;
}
}
// Usage
let myBlockchain = new Blockchain();
myBlockchain.addBlock(new Block(1, "11/06/2024", { amount: 4 }));
myBlockchain.addBlock(new Block(2, "12/06/2024", { amount: 8 }));
console.log(JSON.stringify(myBlockchain, null, 4));
```
### How this blockchain behaves as a linked list:
- Genesis Block: The genesis block is analogous to the head node of a linked list. It’s the first block (node) and does not reference any previous block.
- Adding Blocks: Similar to adding nodes to a linked list, new blocks are linked to the chain by referencing the hash of the previous block (node).
- Traversal and Validation: Just like traversing a linked list to perform operations, we traverse the blockchain to validate the integrity of the chain.
### Conclusion
By building a blockchain from scratch, we can see how its underlying structure is similar to a linked list. Each block in the blockchain contains a reference to the previous block, forming a chain of blocks similar to nodes in a linked list. However, the blockchain adds layers of security and immutability through cryptographic hashing, making it a robust structure for maintaining an immutable ledger of transactions. Many other everyday structures are built on the same linked-node idea as well.
| hasan_py |
1,882,270 | Building a Plant Diagnosis Assistant using Lyzr SDK | In the age of technology, even plant care is getting an upgrade. Imagine having a personal plant... | 0 | 2024-06-09T18:19:01 | https://dev.to/akshay007/building-a-plant-diagnosis-assistant-using-lyzr-sdk-2i21 | ai, plants, programming, python | In the age of technology, even plant care is getting an upgrade. Imagine having a **personal plant expert** at your fingertips, ready to diagnose and remedy any issues your green friends may be facing. In this post, we’ll explore how this AI-powered app is transforming the way we care for our plants, powered by Lyzr SDK.

The **Plant Diagnosis Assistant** is a web application designed to simplify the process of diagnosing and treating plant health issues. Whether you’re dealing with yellowing leaves, wilting stems, or mysterious spots, our app is here to help. Simply input your plant’s symptoms or concerns, and let our AI-powered assistant do the rest. From accurate diagnoses to tailored care instructions, we’ve got you covered every step of the way.
At the heart of Your Plant Health Companion is **Lyzr SDK**, a powerful toolkit for building AI-powered applications. Our app leverages Lyzr SDK’s advanced AI models to analyze user input, generate detailed diagnoses, and provide actionable recommendations. With Lyzr SDK, we’re able to deliver a seamless user experience, complete with personalized guidance tailored to each user’s specific needs.
**Why use Lyzr SDK’s?**
With **Lyzr SDKs**, crafting your own **GenAI** application is a breeze, requiring only a few lines of code to get up and running swiftly.
[Checkout the Lyzr SDK’s](https://docs.lyzr.ai/homepage)
**Let's get started!**
Create an **app.py** file
```
import streamlit as st
from lyzr_automata.ai_models.openai import OpenAIModel
from lyzr_automata import Agent, Task
from lyzr_automata.pipelines.linear_sync_pipeline import LinearSyncPipeline
from PIL import Image
from lyzr_automata.tasks.task_literals import InputType, OutputType
import os
# Set the OpenAI API key
os.environ["OPENAI_API_KEY"] = st.secrets["apikey"]
st.markdown(
"""
<style>
.app-header { visibility: hidden; }
.css-18e3th9 { padding-top: 0; padding-bottom: 0; }
.css-1d391kg { padding-top: 1rem; padding-right: 1rem; padding-bottom: 1rem; padding-left: 1rem; }
</style>
""",
unsafe_allow_html=True,
)
image = Image.open("./logo/lyzr-logo.png")
st.image(image, width=150)
# App title and introduction
st.title("Plant Diagnosis Assistant")
st.markdown("Welcome to your Plant Health Companion, your go-to resource for diagnosing and remedying plant issues powered by Lyzr Automata. What can I assist you with today? ")
input = st.text_input("Please enter your problems or concerns", placeholder="Type here")
```
In this snippet, we import the necessary libraries and set up the **Streamlit interface** for our app. We also load the Lyzr logo and display it on the app interface, providing users with a visual indication of the app’s association with Lyzr Automata.
```
open_ai_text_completion_model = OpenAIModel(
api_key=st.secrets["apikey"],
parameters={
"model": "gpt-4-turbo-preview",
"temperature": 0.2,
"max_tokens": 1500,
},
)
```
Here, we initialize the **OpenAI model** using Lyzr SDK, configuring parameters such as the model version, temperature, and maximum tokens for text generation.
```
def generation(input):
generator_agent = Agent(
role=" Expert PLANT DIAGNOSIS ASSISTANT ",
prompt_persona=f"Your task is to CAREFULLY ANALYZE the user's input regarding their plant's type, visible symptoms, and any other relevant details they provide. You MUST use your EXPERTISE to pinpoint possible issues and offer GUIDANCE on both IMMEDIATE and LONG-TERM care for their plant.")
prompt = f"""
# Instructions...
"""
generator_agent_task = Task(
name="Generation",
model=open_ai_text_completion_model,
agent=generator_agent,
instructions=prompt,
default_input=input,
output_type=OutputType.TEXT,
input_type=InputType.TEXT,
    ).execute()
    # Return the generated diagnosis text so the app can display it
    return generator_agent_task
```
This function, `generation`, represents the core functionality of our app. It takes user input, creates a persona for the AI agent, defines task instructions, and executes the task using **Lyzr SDK**. The generated output, containing diagnoses and care instructions, is then returned to be displayed to the user.
With **Plant Diagnosis Assistant**, caring for your plants has never been easier. Powered by Lyzr SDK, our app combines the latest advancements in AI technology with expert knowledge to deliver personalized plant care guidance right to your fingertips. Say goodbye to guesswork and hello to healthier, happier plants!
**App link**: https://plantdiagnosis-lyzr.streamlit.app/
**Source Code**: https://github.com/isakshay007/Plant_Diagnosis
The **Plant Diagnosis Assistant** is powered by the Lyzr Automata Agent, utilizing the capabilities of OpenAI’s GPT-4 Turbo. For any inquiries or issues, please contact Lyzr. You can learn more about Lyzr and their offerings through the following links:
**Website**: [Lyzr.ai](https://www.lyzr.ai/)
**Book a Demo**: [Book a Demo](https://www.lyzr.ai/book-demo/)
**Discord**: [Join our Discord community](https://discord.com/invite/nm7zSyEFA2)
**Slack**: [Join our Slack channel](https://anybodycanai.slack.com/join/shared_invite/zt-2a7fr38f7-_QDOY1W1WSlSiYNAEncLGw#/shared-invite/email) | akshay007 |
1,882,269 | Beaches frontend | This is a submission for [Frontend Challenge... | 0 | 2024-06-09T18:16:32 | https://dev.to/abelotegbola/beaches-frontend-2o51 | devchallenge, frontendchallenge, css, javascript | _This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup: Beaches_
## What I Built
Using the HTML provided, custom features were added to give a better user experience. Some of these features are:
1. Custom font using Google Fonts - Inter
2. Theme toggle between light and dark mode
3. Images inserted with JavaScript's `insertBefore`.
## Demo
Check out the live demo here:
https://screeching-steel-present-friends-production.pipeops.app/
The GitHub code repo:
https://github.com/abel-otegbola/Frontend-Beaches
| abelotegbola |
1,882,267 | How to fix the "Execution of scripts is disabled on this system" error | As developers, we cannot restrict ourselves only to the knowledge of our respective... | 0 | 2024-06-09T18:14:42 | https://dev.to/devaugusto/como-solucionar-o-erro-execucao-de-scripts-desabilitada-neste-sistema-execution-scripts-is-disabled-on-this-system-error-25ch | webdev, tutorial, productivity | As developers, we cannot limit ourselves only to the knowledge of our own programming areas; we also need to understand how the operating system we are using works and how to fix errors effectively, without losing too much time.
The "Execution of scripts is disabled on this system" error is shown because of a **lack of permission to run a script on Windows**, and it usually appears after a system reinstall or a change to the machine's execution policies.
### Solution
To start fixing the error, open Windows PowerShell as administrator and identify the execution policy currently enabled on your machine by running:
```powershell
Get-ExecutionPolicy
```
Your device should report the system default: `Default` or `Restricted`. Whichever option appears, you will need to change it to the `RemoteSigned` or even the `Unrestricted` execution policy.
The `RemoteSigned` execution policy allows local scripts to run without necessarily requiring a digital signature, but it demands a digital signature for scripts downloaded from the internet.
The `Unrestricted` execution policy, in turn, allows all scripts to run without any restriction. As a rule, it is less secure and not recommended unless its use is absolutely necessary.
In short, prefer the `RemoteSigned` execution policy in most cases, then apply the command:
```powershell
Set-ExecutionPolicy RemoteSigned
```
After that, accept the policy-change prompt and the problem will be solved.
### But… wait!
In some more specific cases, this change of execution policy may return an error similar to this:
`Windows PowerShell updated your execution policy successfully, but the setting is overridden by a policy defined at a more specific scope. Due to the override, your shell will retain its current effective execution policy of Restricted.`
This can happen for several reasons. In my case, running the `Get-ExecutionPolicy -List` command showed that, in my current setup, the execution policy for the "CurrentUser" scope was still `Restricted`.

So I needed to change the script execution policy within the "CurrentUser" scope.
I applied this command to change the policy in my current scope (CurrentUser), where the mode was still `Restricted`.
Note how the "anatomy" of the command works:
Syntax: `Set-ExecutionPolicy (-Scope parameter to choose the scope) (name of the scope that is restricted) (the execution policy you chose)`
My result was this:
```powershell
Set-ExecutionPolicy -Scope CurrentUser RemoteSigned
```
⚠️ Keep in mind that all the operations in this article were performed with PowerShell running as Administrator, which is required for the policy change to work.

Done! ✅ I sincerely hope this article helped you solve the issue. Thank you for reading this far.
Check out Cherry Code -> https://cherrycode.com.br
| devaugusto |
1,882,266 | vue mess detector | I started to put together a static code analysis tool for detecting code smells and best practice... | 0 | 2024-06-09T18:12:34 | https://dev.to/rrd/vue-mess-detector-34cp | vue, nuxt | I started to put together a static code analysis tool for detecting code smells and best practice violations in Vue.js and Nuxt.js projects.
A VS Code extension is coming soon.
[https://github.com/rrd108/vue-mess-detector](https://github.com/rrd108/vue-mess-detector)
| rrd |
1,882,265 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-09T18:11:34 | https://dev.to/pippapaperkle/buy-verified-cash-app-account-48n0 | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | pippapaperkle |
1,882,253 | Day 13 of my progress as a vue dev | About today Today was one of the good day, I learned a few new programming approaches and a lot more... | 0 | 2024-06-09T17:34:53 | https://dev.to/zain725342/day-13-of-my-progress-as-a-vue-dev-2d37 | webdev, vue, typescript, tailwindcss | **About today**
Today was one of the good days. I learned a few new programming approaches and a lot more about how I want to give direction to this journey I'm on. Basically, this journey is all about being disciplined, learning more, polishing my craft, and achieving something along the way to showcase as proof of my progress. So far, it has been fun, bumpy, and somewhat satisfying, but not perfect. I have been learning something new each day and applying it to see what sticks and what feels most natural to me, and I think I have it figured out up to 80%.
**What's next?**
I will keep on doing what I'm doing. I'm going to dive deep into Vue and try to take on a more complex project next that will put my learning to the test, and I'll upload it to my Git for the public to help me out in improving it, if they want.
**Improvements required**
I just got started on this journey and my experience is relatively short, but what I have in mind is to keep going at it for a long time, till it becomes effortless, starts to feel natural, and I start seeing some results from it.
Wish me luck! | zain725342 |
1,882,264 | My First Postmortem: Outage Incident on Web Application | Postmortem: Outage Incident on Web Application Issue Summary: Duration: 4th June 2024, 08:00 AM —... | 0 | 2024-06-09T18:11:23 | https://dev.to/jacques00077/my-first-postmortem-outage-incident-on-web-application-3aj9 | Postmortem: Outage Incident on Web Application
Issue Summary: Duration: 4th June 2024, 08:00 AM — 5th June 2024, 1:00 PM (GMT) Impact: The web application experienced intermittent downtime, resulting in slow response times and partial service disruption. Approximately 35% of users were affected during this period.
Timeline:
4th June 2024, 08:15 AM (GMT): The issue was detected when monitoring alerts indicated a significant increase in response time.
The engineering team immediately started investigating the issue, suspecting a potential database problem.
Misleadingly, the investigation initially focused on the database cluster due to a recent deployment that involved schema changes.
The incident was escalated to the database administration team to assess the potential impact of the schema changes on the cluster’s performance.
Further investigation revealed no abnormalities within the database cluster, prompting the team to explore other areas of the system.
4th June 2024, 10:30 PM (GMT): The root cause was identified as an overloaded cache layer, leading to increased latency and intermittent failures.
The incident was escalated to the infrastructure team for immediate resolution.
5th June 2024, 1:00 PM (GMT): The incident was resolved, and the web application’s performance returned to normal.
Root Cause and Resolution: The root cause of the issue was an overloaded cache layer. The increased load on the system caused the cache to evict frequently accessed data, resulting in higher latency and intermittent failures. The cache’s eviction policy was not adequately configured to handle the sudden surge in traffic.
To resolve the issue, the infrastructure team adjusted the cache configuration by increasing its capacity and optimizing the eviction policy. Additionally, they implemented a monitoring system to provide early warnings when the cache utilization reaches critical levels. These measures aimed to prevent similar cache overload situations in the future.
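The eviction behavior at the center of this incident can be made concrete with a minimal least-recently-used (LRU) cache sketch — illustrative only, not the production cache:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: when capacity is exceeded, the entry that was
    accessed longest ago is evicted first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

With a capacity sized for normal traffic, a sudden surge forces exactly the churn described above: hot entries get evicted before they are re-read, so requests fall through to the slower backing store and latency climbs.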
Corrective and Preventative Measures: To improve the overall system stability, several actions will be taken:
Optimize cache eviction policies: Review and fine-tune the cache eviction policies based on usage patterns and anticipated traffic fluctuations.
Scale cache infrastructure: Evaluate the current cache infrastructure and determine if additional resources or distributed caching solutions are required to handle peak loads.
Enhance monitoring and alerts: Implement comprehensive monitoring across the entire web stack, including cache utilization, response times, and database performance, to promptly identify any anomalies.
Load testing and capacity planning: Perform regular load testing to simulate various traffic scenarios and ensure the system can handle increased loads without degrading performance.
Improve incident response process: Refine the escalation path and clearly define roles and responsibilities for incident response, ensuring efficient collaboration among teams during critical situations.
Tasks to address the issue:
Patch cache eviction policies: Adjust the cache eviction policies to prioritize frequently accessed data while considering memory constraints.
Evaluate cache infrastructure: Assess the current cache infrastructure’s capacity and explore options for scaling or introducing distributed caching.
Implement comprehensive monitoring: Deploy a monitoring solution that covers cache utilization, response times, and database performance, with appropriate alerts.
Conduct load testing: Develop and execute load testing scenarios to validate the system’s performance under varying traffic conditions.
Review and update incident response procedures: Enhance the incident response process to ensure swift identification, investigation, and resolution of future incidents.
By implementing these corrective and preventative measures, we aim to enhance the reliability and performance of our web application, reducing the likelihood and impact of similar incidents in the future. | jacques00077 | |
1,882,185 | Integrating Google Translate API with Yii2 | In this post, I'll be sharing how to integrate the Google Translate API with Yii2. As developers, we... | 27,655 | 2024-06-09T17:57:40 | https://dev.to/toru/integrating-google-translate-api-with-yii2-c7o | yii2, php, googlecloud, programming | In this post, I'll be sharing how to integrate the Google Translate API with Yii2. As developers, we often encounter challenges when building multilingual sites with translation capabilities. Whether we're manually adding them or relying on a CMS, the process can be incredibly time-consuming and tedious.
<img width="100%" style="width:100%" src="https://media.giphy.com/media/QsY8yp5q4atcQ/giphy.gif">
Although it is not perfect, the Google Translate API has a high accuracy level, comprehensive language support, and is constantly updated to ensure reliability. Hopefully, by the end of this guide, you'll be able to incorporate Google Translate into your Yii2 application.
## Adding Google Translate API Configuration to Yii2
Before we start translating text, we need to configure the Google Translate API in our Yii2 application. Follow these steps to set up the API configuration:
### Step 1: Obtain Your API Key
If you haven't already, refer to the previous post on setting up a Google Cloud Project and enabling the Google Translate API. Once you have your API key, proceed with the following steps.
### Step 2: Configure Yii2 to Use the API Key
Open your Yii2 configuration file (common/config/main.php) and add the API key to the params section:
```
<?php
return [
'params' => [
'google' => [
'translate' => [
'api_key' => 'YOUR_API_KEY_HERE', // Replace with your actual API key
'enabled' => true,
],
],
],
// Other configurations
];
```
## Using the API to Translate Text
Now that we've configured our application to use the Google Translate API, let's create a function that uses the API to translate text. We'll create a new component to encapsulate this functionality.
### Step 1: Create the Translate Component
Create a new file components/GoogleTranslate.php and add the following code:
```
<?php
namespace common\components;
use Yii;
use yii\base\Component;
class GoogleTranslate extends Component {
/**
* Translates text using the Google Translate API.
*
* @param string $text The text to be translated.
* @param string $targetLanguage The language to translate the text into.
* @param string $sourceLanguage The language of the text to be translated (default: 'en').
* @return string The translated text.
* @throws \Exception If Google Translate API is not enabled or if an error occurs during translation.
*/
public function translate($text, $targetLanguage, $sourceLanguage = 'en') {
// Check if Google Translate API is enabled
if (!Yii::$app->params['google']['translate']['enabled']) {
throw new \Exception("Google Translate is not enabled.");
}
// Get the API key from Yii2 application parameters
$apiKey = Yii::$app->params['google']['translate']['api_key'];
// Construct the API request URL
$url = "https://translation.googleapis.com/language/translate/v2?key={$apiKey}";
// Prepare data for the API request
$data = [
'q' => $text,
'source' => $sourceLanguage,
'target' => $targetLanguage,
'format' => 'text',
];
// Initialize a cURL session
$ch = curl_init();
// Set cURL options for the API request
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($data));
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Execute the cURL request and get the response
$response = curl_exec($ch);
// Close the cURL session
curl_close($ch);
// Process the API response
return $this->handleResponse($response);
}
/**
* Handles the response received from the Google Translate API.
*
* @param string $response The API response in JSON format.
* @return string The translated text.
* @throws \Exception If an error occurs while processing the response.
*/
private function handleResponse($response) {
// Decode the JSON response into an associative array
$response = json_decode($response, true);
// Check if decoding was successful
if ($response === null) {
throw new \Exception("Failed to decode JSON response.");
}
// Check if the response contains an error message
if (isset($response['error'])) {
// Extract the error message
$errorMessage = $response['error']['message'];
// Throw an exception indicating a Google Translate API error
throw new \Exception("Google Translate API error: {$errorMessage}");
}
// Return the translated text extracted from the response data
return $response['data']['translations'][0]['translatedText'];
}
}
```
This component defines a translate method that sends a translation request to the Google Translate API and a handleResponse method that processes the API response.
### Step 2: Register the Component
Open your Yii2 configuration file (common/config/main.php) and register the GoogleTranslate component:
```
<?php
return [
'components' => [
'googleTranslate' => [
'class' => 'common\components\GoogleTranslate',
],
],
// Other configurations
];
```
Yii2 follows the dependency injection design pattern, where components can be injected into other classes and components when needed. So by doing this, we enable Yii2 to automatically inject an instance of `GoogleTranslate` into classes that require the translation functionality.
## Handling API Responses and Errors
Handling API responses and errors is crucial to ensure a smooth user experience and to debug issues effectively. Let's look at how the GoogleTranslate component handles responses and errors.
### Response Handling
The handleResponse method decodes the JSON response and checks for errors. If the response contains a translation, it returns the translated text. If there is an error, it throws an exception with a detailed error message.
### Error Handling
Here are a few common scenarios and how to handle them:
Invalid API Key: Ensure your API key is correct and has the necessary permissions. If the API key is invalid, Google will return an error which the handleResponse method will catch and throw as an exception.
API Quota Exceeded: Google Translate API has usage limits. If you exceed these limits, you'll receive an error response. Consider implementing retry logic or monitoring usage to prevent exceeding quotas.
Network Issues: If there's a network issue, curl_exec might fail. Ensure you handle such cases gracefully, possibly with retries or alternative actions.
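The retry logic suggested above is language-agnostic; here is a minimal exponential-backoff sketch (shown in Python for brevity — the same structure ports directly to PHP; `request_fn` is a hypothetical zero-argument callable wrapping the translate API call):

```python
import random
import time

def with_retries(request_fn, max_attempts=4, base_delay=0.5):
    """Call request_fn, retrying on failure with exponential backoff plus jitter.

    request_fn: any zero-argument callable that raises on failure
    (e.g. a wrapper around the translate API request).
    """
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: propagate the error to the caller
            # Backoff doubles each attempt (0.5s, 1s, 2s, ...) with small jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

The jitter spreads out retries from many clients so they do not all hit the API quota at the same instant.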
## Example Usage
Let's see how to use the GoogleTranslate component in a controller to translate text:
```
<?php
namespace frontend\controllers;
use Yii;
use yii\web\Controller;
class TranslateController extends Controller {
public function actionIndex() {
try {
$text = 'Hello, world!';
$targetLanguage = 'es'; // Spanish
$translatedText = Yii::$app->googleTranslate->translate($text, $targetLanguage);
return $this->render('index', ['translatedText' => $translatedText]);
} catch (\Exception $e) {
Yii::error($e->getMessage(), __METHOD__);
return $this->render('error', ['message' => $e->getMessage()]);
}
}
}
```
In the actionIndex method, we use the googleTranslate component to translate "Hello, world!" into Spanish. If an error occurs, it is caught, logged, and an error message is displayed.
## Conclusion
By following this guide, you can add automated translation capabilities to your Yii2 application. This enhances your application's ability to support multiple languages and reach a broader audience.

I have plans to make this into a mini series; next, we'll explore how to create and run translation jobs in Yii2. This will allow for asynchronous and scalable translation processing.
Resources:
[Google PHP Client Library Documentation](https://cloud.google.com/translate/docs/reference/libraries/v2/php)
[Yii2 Internationalization Documentation](https://www.yiiframework.com/doc/guide/2.0/en/tutorial-i18n)
[Dependency Injection Container](https://www.yiiframework.com/doc/guide/2.0/en/concept-di-container) | ifrah |
1,882,234 | Take me to the beach: Frontend | This is a submission for [Frontend Challenge... | 0 | 2024-06-09T17:54:37 | https://dev.to/gloria_gyemfa/take-me-to-the-beach-frontend-1oij | devchallenge, frontendchallenge, css, javascript | _This is a submission for [Frontend Challenge v24.04.17]((https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup: Beaches_
## What I Built
Welcome to our curated list of the best beaches in the world. Hover over the gallery of beaches for detailed information about each beach and where they are located
<!-- Tell us what you built and what you were looking to achieve. -->
## Demo
Link to the website {% embed https://dev-challenge-beaches.vercel.app %}
A link to the code on Github {% embed https://github.com/Gyemfa-Gloria-Abedi/dev-challenge-beaches.git %}
<!-- Show us your project! You can directly embed an editor into this post (see the FAQ section from the challenge page) or you can share an image of your project and share a public link to the code. -->
## Journey
I began by understanding the challenge and generating ideas on how to approach it.
Since it is obligatory not to manipulate the HTML structure, I used JavaScript to manipulate the DOM and CSS to add images and achieve a flip effect on hovering over an image, reaching the desired outcome. I also leveraged AI for assistive development.
All images are from Google Images.
I finally deployed the site using Vercel.
This project was a valuable learning experience in creating dynamic, interactive web content using JavaScript and CSS. I'm particularly proud of the adaptive approach and the smooth animations achieved. Moving forward, I hope to enhance the solution further by optimizing performance, ensuring improved accessibility, and expanding functionality.
Feel free to comment; all reviews are welcome.
<!-- Thanks for participating! --> | gloria_gyemfa |
1,882,262 | Can you beat the 'Maze Of Monsters'? | Hello Dev.To community. This is my first post and it's the final step to my Codecademy training class... | 0 | 2024-06-09T17:48:06 | https://dev.to/bryson_noblesbemomusi/can-you-beat-the-maze-of-monsters-23ce | terminal, python, codene, discuss | Hello Dev.To community. This is my first post and it's the final step to my Codecademy training class in Python. My final project was to create a game/tool using Python code that could be run in the Terminal of your computer. The idea that I ended up going with was the "Maze Of Monsters" where you have to complete 5 stages in the maze by defeating the Monster at each stage. There are 3 races and 3 roles to choose from that provide different stats, which can be upgraded as you progress through the maze.
I have a "ReadMe" file on the Git repo that explains how the game mechanics work, which is helpful to understand before journeying into the maze. Once you understand the game, you can download the Python file and run it in your terminal. I'm not super confident that the game is very balanced, but I did my best to make it challenging within the limitations of the framework I created.
There are also several ways that the code could be refactored to be cleaner and/or more efficient. However, because this was my first attempt at creating a fully functioning game, I was desperate to get it working regardless of how I got there. When I end up coding another game in the future, I'm hoping it will be much more efficient with the lessons I've learned through this project.
Would love to know some tips about making the code more efficient and how I could use Python _classes_ to remove some redundant code. Appreciate any help on this!
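On the Python _classes_ question: a common refactor is to fold the per-race/per-role stat handling into a single `Character` class, so the upgrade logic lives in one place instead of being repeated for every race/role combination. A minimal sketch with hypothetical stat tables (the real game's races, roles, and numbers differ):

```python
# Hypothetical base stats per race and per-role bonuses; the actual
# "Maze Of Monsters" values would replace these.
RACE_STATS = {
    "elf":   {"hp": 80,  "atk": 12},
    "orc":   {"hp": 120, "atk": 10},
    "human": {"hp": 100, "atk": 11},
}
ROLE_BONUS = {
    "mage":    {"atk": 5},
    "warrior": {"hp": 20},
    "rogue":   {"atk": 3, "hp": 5},
}

class Character:
    """One class covers every race/role combination, removing duplicated branches."""

    def __init__(self, race, role):
        # Copy base stats, then layer on the role bonuses.
        self.stats = dict(RACE_STATS[race])
        for stat, bonus in ROLE_BONUS[role].items():
            self.stats[stat] = self.stats.get(stat, 0) + bonus

    def upgrade(self, stat, amount):
        # Single upgrade path shared by all races and roles.
        self.stats[stat] += amount
```

With this shape, adding a new race or role is a one-line dictionary entry rather than another copy of the combat/upgrade code.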
[GitHub Repo Link](https://github.com/brysonnobles/cc_python_project/blob/main/MoM.py) | bryson_noblesbemomusi |
1,882,261 | why ReLU activation is preffered over sigmoid activation in hidden layers? | A post by 3129 Abhinav Kumar | 0 | 2024-06-09T17:46:32 | https://dev.to/abhinavkumar/why-relu-activation-is-preffered-over-sigmoid-activation-in-hidden-layers-3038 | machinelearning, deeplearning | abhinavkumar | |
1,882,260 | Deploying a Web Socket Application on Kubernetes | What is a WebSocket WebSocket is a computer communications protocol, providing a... | 0 | 2024-06-09T17:44:53 | https://dev.to/adilansari/deploying-a-web-socket-application-on-kubernetes-2h33 | websocket, kubernetes, devops, tutorial | ## What is a WebSocket
WebSocket is a computer communications protocol, providing a simultaneous two-way communication channel over a single Transmission Control Protocol (TCP) connection. [[Wikipedia]](https://en.wikipedia.org/wiki/WebSocket)
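Under the hood, that single TCP connection begins life as an HTTP request with an `Upgrade: websocket` header. The server proves it understood the handshake by hashing the client's `Sec-WebSocket-Key` together with a GUID fixed by RFC 6455 and echoing the result back as `Sec-WebSocket-Accept` — a computation small enough to sketch:

```python
import base64
import hashlib

# GUID fixed by RFC 6455 for the WebSocket opening handshake
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header value for a client key."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Example key taken from RFC 6455
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

This matters for Kubernetes because every proxy between client and pod must pass the `Upgrade`/`Connection` headers through untouched, or the handshake never completes.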
Deploying a WebSocket application on Kubernetes can seem daunting, but this guide will simplify the process for you.
## Prerequisites
1. A running kubernetes cluster.
2. A WebSocket app to be deployed.
3. Docker to containerize that application.
You can get a sample websocket app from this github repo [adilansari488/websocket-sample-app](https://github.com/adilansari488/websocket-sample-app).
Now let's create kubernetes manifest files.
## Manifest files
Deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: ws-app-deployment
namespace: default
labels:
app: ws-app
spec:
replicas: 1
selector:
matchLabels:
app: ws-app
template:
metadata:
labels:
app: ws-app
spec:
containers:
- name: ws-app
image: ws-app:latest
imagePullPolicy: Always
securityContext:
privileged: true
ports:
- containerPort: 8819
resources:
limits:
cpu: 1000m
memory: 1000Mi
requests:
cpu: 100m
memory: 100Mi
```
Service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
name: ws-app
namespace: dev
spec:
selector:
app: ws-app
ports:
- name: ws-app
port: 8819
targetPort: 8819
```
In the above manifest files, I have declared port 8819 for my WebSocket app, but it can be any port on which your app runs.
## Deploying to kubernetes
- Now build a docker image of your application or use [this repository](https://github.com/adilansari488/websocket-sample-app.git) or directly use this docker image [adilansari488/websocket-app](https://hub.docker.com/r/adilansari488/websocket-app) to test.
- Now apply *deployment.yaml* and *service.yaml* to your k8s cluster and get the IP of your service.


- Now update the service IP in your client.py and test the connection.


## Conclusion
Congratulations! You’ve successfully deployed your first websocket application. If you found this helpful, please like, share, and follow [Adil Ansari](https://linkedin.com/adilansari488) for more valuable content.
| adilansari |
1,882,259 | How to Get Udemy Premium Account Cookies Unlocking Udemy Premium Courses for Free | Learn how to get Udemy Premium account cookies and unlock premium courses for free. Follow our... | 0 | 2024-06-09T17:44:36 | https://dev.to/muhammadsuheer/how-to-get-udemy-premium-account-cookies-unlocking-udemy-premium-courses-for-free-28l | webdev, javascript, programming, tutorial | Learn how to get Udemy Premium account cookies and unlock premium courses for free. Follow our step-by-step guide to using Udemy Premium cookies effectively.
## Introduction
In the digital age, access to quality education has become easier and more affordable. Online learning platforms like Udemy offer a plethora of courses across various domains. However, premium courses often come with a price tag that not everyone can afford. This guide explores how you can unlock Udemy premium courses for free using Udemy premium cookies. By understanding the process, legal considerations, and best practices, you can maximize your learning without breaking the bank.
## Understanding Udemy
### What is Udemy?
Udemy is an online learning platform designed to help people learn new skills or improve existing ones. It offers over 155,000 courses in various fields, including technology, business, personal development, and more. These courses are created by experts and are accessible to anyone with an internet connection.
### Types of Courses Offered on Udemy
Udemy's courses range from beginner to advanced levels, covering topics like programming, marketing, graphic design, and personal growth. Each course includes video lectures, downloadable resources, and certificates of completion, making it a comprehensive learning tool.
### Benefits of Udemy Premium Courses
Premium courses on Udemy provide in-depth knowledge and advanced training in specific areas. They often include exclusive content, direct access to instructors, and additional resources not available in free courses. This makes premium courses highly valuable for serious learners.
## Udemy Premium Cookies Explained
### What Are Udemy Premium Cookies?
Udemy premium cookies are small pieces of data stored on your browser that can give you access to premium content on Udemy without paying for it. These cookies are typically shared by users who have premium accounts, allowing others to bypass the payment process.
### How Do Udemy Premium Cookies Work?
When you import these cookies into your browser, they authenticate your session as if you are a premium user. This gives you temporary access to premium courses and content. The process involves copying the cookie data from a trusted source and pasting it into your browser's cookie storage.
### Why Use Udemy Premium Cookies?
#### Cost Savings
One of the primary reasons people use Udemy premium cookies is to save money. Premium courses can be expensive, and not everyone can afford them. Using cookies allows you to access high-quality content without financial strain.
#### Access to High-Quality Content
Premium courses often provide more detailed and structured content compared to free courses. By using premium cookies, you can access this superior content and enhance your learning experience.
#### Enhancing Learning Opportunities
For students and professionals looking to advance their careers, premium courses can be a game-changer. Using cookies to access these courses can open up new opportunities for learning and growth.
## Finding Reliable Udemy Premium Cookies
### Trusted Sources for Udemy Premium Cookies
To avoid scams and malware, it's crucial to find reliable sources for Udemy premium cookies. Websites and forums with positive reviews and active user communities are generally safer options. Always be cautious and verify the credibility of the source.
### Avoiding Scams and Malware
There are numerous fake websites and scams promising free Udemy premium cookies. These can lead to malware infections or even identity theft. Ensure your antivirus software is up-to-date and only download cookies from trusted sources.
## Step-by-Step Guide to Using Udemy Premium Cookies
### Step 1: Copying the Udemy Premium Cookies
1. Click on the link given below to copy the cookies [https://controlc.com/37bb134e](https://controlc.com/37bb134e)
```
Password for the cookies: "muhammadsuheer"
```
### Step 2: Install the Cookie-Editor Extension
1. Open your web browser (e.g., Google Chrome, Mozilla Firefox).
2. Search for a Cookie Editor extension.
3. Click the "Add to Chrome" or "Add to Firefox" button to install the extension.

### Step 3: Pin the Cookie Editor Extension
Once the extension is installed, its symbol will appear in your browser's toolbar (often in the top-right corner). Pin the extension and open [Udemy](https://www.udemy.com/).
### Step 4: Delete Default Udemy Cookies
A cookie can be deleted by selecting it from the list and then looking for an "All Delete" or trash can icon. The specified cookie will be deleted when you click on it.

### Step 5: Add New Cookies
1. Look for an option like the "Import Cookie" button in the Cookie Editor window to add a new cookie.
2. Paste the cookie you copied from the trusted source mentioned earlier and click the import icon.
3. Refresh the page (Press `CTRL + R`).

You Will be logged into the Udemy Premium Account.
## Troubleshooting Common Issues
### Common Problems and Solutions
If you encounter issues such as cookies not working or getting logged out frequently, try clearing your browser cache and cookies before re-importing. This can resolve most common problems.
### What to Do If Cookies Stop Working
If the cookies stop working, it's likely they have expired or been revoked.
## Alternatives to Using Udemy Premium Cookies
### Free Courses and Discounts on Udemy
Udemy frequently offers discounts and free courses, especially during sales and promotions. Keeping an eye on these offers can help you access premium content legally and affordably.
### Other Online Learning Platforms
Consider exploring other online learning platforms such as Coursera, edX, and Khan Academy. These platforms also offer high-quality courses, many of which are free or have financial aid options.
## Enhancing Your Learning Experience on Udemy
### Best Practices for Online Learning
To make the most of your online learning experience, set a regular study schedule, take notes, and participate in course discussions. Engaging with the material actively will improve your retention and understanding.
### Utilizing Udemy's Features for Effective Learning
Udemy offers various features like quizzes, assignments, and forums. Use these tools to test your knowledge, get feedback, and connect with other learners and instructors.
### Success Stories
Testimonials from Users Who Benefited from Udemy Premium Cookies: Many users have reported significant benefits from accessing premium courses using cookies. These include career advancements, skill enhancements, and personal growth. While these stories are encouraging, it's important to weigh the risks and ethical considerations.
## FAQs
**What are Udemy Premium Cookies?**
Udemy premium cookies are data files that allow you to access Udemy's premium courses for free by tricking the platform into thinking you are a premium user.
**Are Udemy Premium Cookies Safe?**
Using Udemy premium cookies carries risks, including potential malware and account bans. Always use trusted sources and proceed with caution.
**How Can I Find Working Udemy Premium Cookies?**
Reliable sources for Udemy premium cookies can be found on reputable forums and websites. Ensure you verify the credibility of the source before using the cookies.
**What are the Risks of Using Udemy Premium Cookies?**
Using Udemy premium cookies can violate Udemy's terms of service, leading to potential account bans or legal issues. Proceed with caution and be aware of the risks.
**Are There Any Alternatives to Udemy Premium Cookies?**
Yes, alternatives include waiting for discounts on Udemy, exploring other online learning platforms, and utilizing free resources available on the internet.
## Conclusion
Unlocking Udemy premium courses using premium cookies can provide significant benefits, including access to high-quality content and cost savings. However, it's essential to consider the legal and ethical implications, as well as the potential risks. By following the steps outlined in this guide and using reliable sources, you can enhance your learning experience on Udemy. Always remember to use such methods responsibly.
If you have any questions or comments about this guide, please feel free to leave a comment below. Happy learning!
| muhammadsuheer |
1,882,258 | Best Beaches in the World! | This is a submission for [Frontend Challenge... | 0 | 2024-06-09T17:44:15 | https://dev.to/monik2002/best-beaches-in-the-world-12k9 | devchallenge, frontendchallenge, css, javascript | _This is a submission for [Frontend Challenge v24.04.17]((https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup: Beaches_

## What I Built
<!-- Tell us what you built and what you were looking to achieve. -->
I built an interactive webpage showcasing the best beaches in the world. The page includes a slider with beautiful images of each beach, descriptions, and a button to view the beach's location on a map. My goal was to create an engaging and visually appealing way to explore stunning beach destinations, providing users with both visual and informational content.
## Demo
<!-- Show us your project! You can directly embed an editor into this post (see the FAQ section from the challenge page) or you can share an image of your project and share a public link to the code. -->
[Github-Repo](https://github.com/Monik2002/beaches_JS_Challenge)
<img width="800" height="600" style="width:100%; max-width:800px;" src="https://res.cloudinary.com/dmmuehdc8/image/upload/v1717956879/js_beaches_recording_x2cvb2.gif">
Here's the code:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Best Beaches in the World</title>
<link href="https://fonts.googleapis.com/css2?family=Roboto:wght@400;700&display=swap" rel="stylesheet">
<style>
body {
font-family: 'Roboto', sans-serif;
margin: 0;
padding: 0;
background: #f0f8ff;
color: #333;
}
header {
background: #48cae4;
color: #fff;
padding: 20px 0;
text-align: center;
box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
}
main {
max-width: 1200px;
margin: 20px auto;
padding: 0 20px;
}
h1, h2 {
margin: 0 0 20px;
}
h1 {
font-size: 2.5em;
}
h2 {
color: #0077b6;
font-size: 2em;
}
section {
margin-bottom: 40px;
}
ul {
list-style: none;
padding: 0;
}
li {
background: #ffffff;
margin: 20px 0;
padding: 20px;
border-radius: 10px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
transition: transform 0.3s;
}
li:hover {
transform: translateY(-10px);
}
h3 {
margin: 0 0 10px;
color: #023e8a;
}
p {
margin: 0;
}
.slider {
position: relative;
max-width: 100%;
margin: auto;
overflow: hidden;
border-radius: 10px;
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
}
.slides {
display: flex;
transition: transform 0.5s ease-in-out;
will-change: transform;
}
.slides img {
width: 100%;
border-radius: 10px;
pointer-events: none;
user-select: none;
}
.prev, .next {
position: absolute;
top: 50%;
transform: translateY(-50%);
background-color: rgba(0, 0, 0, 0.5);
color: #fff;
border: none;
padding: 10px;
cursor: pointer;
border-radius: 5px;
}
.prev {
left: 10px;
}
.next {
right: 10px;
}
.dots {
text-align: center;
margin-top: 10px;
}
.dot {
cursor: pointer;
height: 15px;
width: 15px;
margin: 0 2px;
background-color: #bbb;
border-radius: 50%;
display: inline-block;
transition: background-color 0.6s ease;
}
.active, .dot:hover {
background-color: #717171;
}
button {
background-color: #48cae4;
border: none;
color: white;
padding: 10px 20px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 16px;
margin-top: 10px;
cursor: pointer;
border-radius: 5px;
}
button:hover {
background-color: #0077b6;
}
.modal {
display: none;
position: fixed;
z-index: 1;
left: 0;
top: 0;
width: 100%;
height: 100%;
overflow: auto;
background-color: rgb(0, 0, 0);
background-color: rgba(0, 0, 0, 0.4);
padding-top: 60px;
}
.modal-content {
background-color: #fefefe;
margin: 5% auto;
padding: 20px;
border: 1px solid #888;
width: 80%;
max-width: 600px;
border-radius: 10px;
}
.close {
color: #aaa;
float: right;
font-size: 28px;
font-weight: bold;
}
.close:hover,
.close:focus {
color: black;
text-decoration: none;
cursor: pointer;
}
#map {
height: 400px;
width: 100%;
border-radius: 10px;
}
</style>
</head>
<body>
<header>
<h1>Best Beaches in the World</h1>
</header>
<main>
<section>
<h2>Take me to the beach!</h2>
<p>Welcome to our curated list of the best beaches in the world. Whether you're looking for serene white sands, crystal-clear waters, or breathtaking scenery, these beaches offer a little something for everyone. Explore our top picks and discover the beauty that awaits you.</p>
</section>
<section>
<h2>Top Beaches</h2>
<div class="slider" id="slider">
<div class="slides" id="slides">
<!-- Images will be added here by JavaScript -->
</div>
<button class="prev" id="prevBtn">❮</button>
<button class="next" id="nextBtn">❯</button>
</div>
<div class="dots" id="dots"></div>
<ul id="beachList">
<!-- Beach details will be added here by JavaScript -->
</ul>
</section>
</main>
<div id="myModal" class="modal">
<div class="modal-content">
<span class="close">×</span>
<div id="map"></div>
</div>
</div>
<script>
const accessKey = 'YOUR_ACCESS_KEY';
const beaches = [
{
name: 'Whitehaven Beach, Australia',
description: 'Located on Whitsunday Island, Whitehaven Beach is famous for its stunning white silica sand and turquoise waters. It\'s a perfect spot for swimming, sunbathing, and enjoying the natural beauty of the Great Barrier Reef.',
searchTerm: 'Whitehaven Beach',
lat: -20.282,
lng: 149.037
},
{
name: 'Grace Bay, Turks and Caicos',
description: 'Grace Bay is known for its calm, clear waters and powdery white sand. This beach is ideal for snorkeling, diving, and enjoying luxury resorts that line its shore.',
searchTerm: 'Grace Bay',
lat: 21.795,
lng: -72.172
},
{
name: 'Baia do Sancho, Brazil',
description: 'Baia do Sancho, located on Fernando de Noronha island, offers stunning cliffs, vibrant marine life, and crystal-clear waters, making it a paradise for divers and nature lovers.',
searchTerm: 'Baia do Sancho',
lat: -3.857,
lng: -32.416
},
{
name: 'Navagio Beach, Greece',
description: 'Also known as Shipwreck Beach, Navagio Beach is famous for the rusting shipwreck that rests on its sands. Accessible only by boat, this secluded cove is surrounded by towering cliffs and azure waters.',
searchTerm: 'Navagio Beach',
lat: 37.859,
lng: 20.624
},
{
name: 'Playa Paraiso, Mexico',
description: 'Playa Paraiso, located in Tulum, offers pristine white sands and turquoise waters against the backdrop of ancient Mayan ruins. It\'s a perfect blend of history and natural beauty.',
searchTerm: 'Playa Paraiso',
lat: 20.216,
lng: -87.431
},
{
name: 'Anse Source d\'Argent, Seychelles',
description: 'Anse Source d\'Argent is renowned for its unique granite boulders, shallow clear waters, and soft white sand. This beach is perfect for photography, snorkeling, and relaxation.',
searchTerm: 'Anse Source d\'Argent',
lat: -4.373,
lng: 55.826
},
{
name: 'Seven Mile Beach, Cayman Islands',
description: 'Stretching for seven miles, this beach offers soft coral sand, clear waters, and numerous activities such as snorkeling, paddleboarding, and beachcombing.',
searchTerm: 'Seven Mile Beach',
lat: 19.313,
lng: -81.213
}
];
let currentSlide = 0;
async function fetchImageUrl(searchTerm) {
const response = await fetch(`https://api.unsplash.com/search/photos?page=1&query=${encodeURIComponent(searchTerm)}&client_id=${accessKey}`);
const data = await response.json();
if (data.results && data.results.length > 0) {
return `${data.results[0].urls.raw}&w=1600`;
} else {
return 'https://via.placeholder.com/1600x900?text=Image+Not+Available';
}
}
async function showSlides() {
const slidesContainer = document.getElementById('slides');
slidesContainer.innerHTML = '';
const dotsContainer = document.getElementById('dots');
dotsContainer.innerHTML = '';
const beach = beaches[currentSlide];
const slide = document.createElement('div');
slide.classList.add('mySlides');
const img = document.createElement('img');
img.src = await fetchImageUrl(beach.searchTerm);
slide.appendChild(img);
slidesContainer.appendChild(slide);
beaches.forEach((_, index) => {
const dot = document.createElement('span');
dot.classList.add('dot');
dot.setAttribute('onclick', `currentSlide = ${index}; showSlides();`);
if (index === currentSlide) {
dot.classList.add('active');
}
dotsContainer.appendChild(dot);
});
const beachList = document.getElementById('beachList');
beachList.innerHTML = '';
const li = document.createElement('li');
li.innerHTML = `
<h3>${beach.name}</h3>
<p>${beach.description}</p>
<button onclick="openMap(${currentSlide})">View on Map</button>
`;
beachList.appendChild(li);
}
function changeSlide(n) {
currentSlide = (currentSlide + n + beaches.length) % beaches.length;
showSlides();
}
function openMap(index) {
const beach = beaches[index];
const modal = document.getElementById('myModal');
const modalContent = document.querySelector('.modal-content');
modalContent.innerHTML = `
<span class="close" onclick="closeMap()">×</span>
<div id="map"></div>
`;
modal.style.display = 'block';
initMap(beach.lat, beach.lng);
}
function closeMap() {
const modal = document.getElementById('myModal');
modal.style.display = 'none';
}
async function initMap(lat, lng) {
const { Map } = await google.maps.importLibrary("maps");
const mapOptions = {
center: { lat: parseFloat(lat), lng: parseFloat(lng) },
zoom: 12
};
const map = new Map(document.getElementById('map'), mapOptions);
const marker = new google.maps.Marker({
position: { lat: parseFloat(lat), lng: parseFloat(lng) },
map: map,
title: 'Beach Location'
});
}
window.onload = function() {
showSlides();
};
document.getElementById('prevBtn').addEventListener('click', () => {
changeSlide(-1);
document.getElementById('slider').scrollIntoView({ behavior: 'smooth' });
});
document.getElementById('nextBtn').addEventListener('click', () => {
changeSlide(1);
document.getElementById('slider').scrollIntoView({ behavior: 'smooth' });
});
</script>
<!-- initMap(lat, lng) is called on demand from openMap(), so no load-time callback is needed -->
<script async
src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&loading=async"></script>
</body>
</html>
```
## Journey
The process of building this project involved several key steps:
- **Design and Layout:** I focused on creating a clean, responsive design that highlights the beauty of each beach. The use of Google Fonts and CSS styling helped achieve a modern look.
- **Image Fetching with Unsplash API:** I learned how to fetch high-quality images dynamically from the Unsplash API based on search terms, ensuring that each beach image is relevant and visually appealing.
- **Slider Functionality:** Implementing the slider required careful handling of image loading, transitions, and navigation controls to provide a smooth user experience.
- **Interactive Map:** Integrating the Google Maps API allowed users to view the exact location of each beach, adding an interactive element to the site.
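The slider navigation described above boils down to modular index arithmetic, mirroring the expression used in `changeSlide()`. A minimal standalone sketch (the helper name `wrapIndex` is mine, not part of the project):

```javascript
// Wrap an index around a fixed-length list, moving by `step` slides.
// Mirrors (currentSlide + n + beaches.length) % beaches.length from changeSlide().
function wrapIndex(current, step, length) {
  // Adding `length` before taking the remainder keeps the result
  // non-negative even when `step` is negative (JS `%` can return negatives).
  return (current + step + length) % length;
}

console.log(wrapIndex(0, -1, 7)); // → 6 (going "prev" from the first slide)
console.log(wrapIndex(6, 1, 7));  // → 0 (going "next" from the last slide)
```

This is why clicking "prev" on the first slide jumps to the last one instead of producing a negative index.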
Future Plans:
- **User Reviews:** Integrate a section for user reviews and ratings to provide more insights and personal experiences.
- **Weather Information:** Include real-time weather information for each beach to help users plan their visits better.
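As a sketch of how the weather idea could work, here is one possible approach using the free Open-Meteo forecast API. This is an assumption on my part: the project does not specify a weather provider, and the response shape shown is based on Open-Meteo's documented `current_weather` payload.

```javascript
// Build the request URL for a beach's coordinates (Open-Meteo needs no API key).
function weatherUrl(lat, lng) {
  return `https://api.open-meteo.com/v1/forecast?latitude=${lat}&longitude=${lng}&current_weather=true`;
}

// Fetch the current conditions for one beach.
async function fetchBeachWeather(lat, lng) {
  const res = await fetch(weatherUrl(lat, lng));
  if (!res.ok) throw new Error(`Weather request failed: ${res.status}`);
  const data = await res.json();
  return data.current_weather; // e.g. { temperature, windspeed, weathercode, ... }
}
```

Each beach object in the page already carries `lat`/`lng`, so the result could be rendered next to the description in `showSlides()`.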
| monik2002 |
1,882,256 | Buy Verified Paxful Account | https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are... | 0 | 2024-06-09T17:39:59 | https://dev.to/unapapere865/buy-verified-paxful-account-ppg | tutorial, react, python, ai | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-paxful-account/\n\n\n\n\nBuy Verified Paxful Account\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, Buy verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to Buy Verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with. Buy Verified Paxful Account.\n\nBuy US verified paxful account from the best place dmhelpshop\nWhy we declared this website as the best place to buy US verified paxful account? Because, our company is established for providing the all account services in the USA (our main target) and even in the whole world. With this in mind we create paxful account and customize our accounts as professional with the real documents. Buy Verified Paxful Account.\n\nIf you want to buy US verified paxful account you should have to contact fast with us. 
Because our accounts are-\n\nEmail verified\nPhone number verified\nSelfie and KYC verified\nSSN (social security no.) verified\nTax ID and passport verified\nSometimes driving license verified\nMasterCard attached and verified\nUsed only genuine and real documents\n100% access of the account\nAll documents provided for customer security\nWhat is Verified Paxful Account?\nIn today’s expanding landscape of online transactions, ensuring security and reliability has become paramount. Given this context, Paxful has quickly risen as a prominent peer-to-peer Bitcoin marketplace, catering to individuals and businesses seeking trusted platforms for cryptocurrency trading.\n\nIn light of the prevalent digital scams and frauds, it is only natural for people to exercise caution when partaking in online transactions. As a result, the concept of a verified account has gained immense significance, serving as a critical feature for numerous online platforms. Paxful recognizes this need and provides a safe haven for users, streamlining their cryptocurrency buying and selling experience.\n\nFor individuals and businesses alike, Buy verified Paxful account emerges as an appealing choice, offering a secure and reliable environment in the ever-expanding world of digital transactions. Buy Verified Paxful Account.\n\nVerified Paxful Accounts are essential for establishing credibility and trust among users who want to transact securely on the platform. They serve as evidence that a user is a reliable seller or buyer, verifying their legitimacy.\n\nBut what constitutes a verified account, and how can one obtain this status on Paxful? In this exploration of verified Paxful accounts, we will unravel the significance they hold, why they are crucial, and shed light on the process behind their activation, providing a comprehensive understanding of how they function. 
Buy verified Paxful account.\n\n \n\nWhy should to Buy Verified Paxful Account?\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence. Buy Verified Paxful Account.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.\n\n \n\nWhat is a Paxful Account\nPaxful and various other platforms consistently release updates that not only address security vulnerabilities but also enhance usability by introducing new features. Buy Verified Paxful Account.\n\nIn line with this, our old accounts have recently undergone upgrades, ensuring that if you purchase an old buy Verified Paxful account from dmhelpshop.com, you will gain access to an account with an impressive history and advanced features. This ensures a seamless and enhanced experience for all users, making it a worthwhile option for everyone.\n\n \n\nIs it safe to buy Paxful Verified Accounts?\nBuying on Paxful is a secure choice for everyone. However, the level of trust amplifies when purchasing from Paxful verified accounts. 
These accounts belong to sellers who have undergone rigorous scrutiny by Paxful. Buy verified Paxful account, you are automatically designated as a verified account. Hence, purchasing from a Paxful verified account ensures a high level of credibility and utmost reliability. Buy Verified Paxful Account.\n\nPAXFUL, a widely known peer-to-peer cryptocurrency trading platform, has gained significant popularity as a go-to website for purchasing Bitcoin and other cryptocurrencies. It is important to note, however, that while Paxful may not be the most secure option available, its reputation is considerably less problematic compared to many other marketplaces. Buy Verified Paxful Account.\n\nThis brings us to the question: is it safe to purchase Paxful Verified Accounts? Top Paxful reviews offer mixed opinions, suggesting that caution should be exercised. Therefore, users are advised to conduct thorough research and consider all aspects before proceeding with any transactions on Paxful.\n\n \n\nHow Do I Get 100% Real Verified Paxful Accoun?\nPaxful, a renowned peer-to-peer cryptocurrency marketplace, offers users the opportunity to conveniently buy and sell a wide range of cryptocurrencies. Given its growing popularity, both individuals and businesses are seeking to establish verified accounts on this platform.\n\nHowever, the process of creating a verified Paxful account can be intimidating, particularly considering the escalating prevalence of online scams and fraudulent practices. This verification procedure necessitates users to furnish personal information and vital documents, posing potential risks if not conducted meticulously.\n\nIn this comprehensive guide, we will delve into the necessary steps to create a legitimate and verified Paxful account. 
Our discussion will revolve around the verification process and provide valuable tips to safely navigate through it.\n\nMoreover, we will emphasize the utmost importance of maintaining the security of personal information when creating a verified account. Furthermore, we will shed light on common pitfalls to steer clear of, such as using counterfeit documents or attempting to bypass the verification process.\n\nWhether you are new to Paxful or an experienced user, this engaging paragraph aims to equip everyone with the knowledge they need to establish a secure and authentic presence on the platform.\n\nBenefits Of Verified Paxful Accounts\nVerified Paxful accounts offer numerous advantages compared to regular Paxful accounts. One notable advantage is that verified accounts contribute to building trust within the community.\n\nVerification, although a rigorous process, is essential for peer-to-peer transactions. This is why all Paxful accounts undergo verification after registration. When customers within the community possess confidence and trust, they can conveniently and securely exchange cash for Bitcoin or Ethereum instantly. Buy Verified Paxful Account.\n\nPaxful accounts, trusted and verified by sellers globally, serve as a testament to their unwavering commitment towards their business or passion, ensuring exceptional customer service at all times. Headquartered in Africa, Paxful holds the distinction of being the world’s pioneering peer-to-peer bitcoin marketplace. Spearheaded by its founder, Ray Youssef, Paxful continues to lead the way in revolutionizing the digital exchange landscape.\n\nPaxful has emerged as a favored platform for digital currency trading, catering to a diverse audience. One of Paxful’s key features is its direct peer-to-peer trading system, eliminating the need for intermediaries or cryptocurrency exchanges. 
By leveraging Paxful’s escrow system, users can trade securely and confidently.\n\nWhat sets Paxful apart is its commitment to identity verification, ensuring a trustworthy environment for buyers and sellers alike. With these user-centric qualities, Paxful has successfully established itself as a leading platform for hassle-free digital currency transactions, appealing to a wide range of individuals seeking a reliable and convenient trading experience. Buy Verified Paxful Account.\n\n \n\nHow paxful ensure risk-free transaction and trading?\nEngage in safe online financial activities by prioritizing verified accounts to reduce the risk of fraud. Platforms like Paxfu implement stringent identity and address verification measures to protect users from scammers and ensure credibility.\n\nWith verified accounts, users can trade with confidence, knowing they are interacting with legitimate individuals or entities. By fostering trust through verified accounts, Paxful strengthens the integrity of its ecosystem, making it a secure space for financial transactions for all users. Buy Verified Paxful Account.\n\nExperience seamless transactions by obtaining a verified Paxful account. Verification signals a user’s dedication to the platform’s guidelines, leading to the prestigious badge of trust. This trust not only expedites trades but also reduces transaction scrutiny. Additionally, verified users unlock exclusive features enhancing efficiency on Paxful. Elevate your trading experience with Verified Paxful Accounts today.\n\nIn the ever-changing realm of online trading and transactions, selecting a platform with minimal fees is paramount for optimizing returns. This choice not only enhances your financial capabilities but also facilitates more frequent trading while safeguarding gains. Buy Verified Paxful Account.\n\nExamining the details of fee configurations reveals Paxful as a frontrunner in cost-effectiveness. 
Acquire a verified level-3 USA Paxful account from usasmmonline.com for a secure transaction experience. Invest in verified Paxful accounts to take advantage of a leading platform in the online trading landscape.\n\n \n\nHow Old Paxful ensures a lot of Advantages?\n\nExplore the boundless opportunities that Verified Paxful accounts present for businesses looking to venture into the digital currency realm, as companies globally witness heightened profits and expansion. These success stories underline the myriad advantages of Paxful’s user-friendly interface, minimal fees, and robust trading tools, demonstrating its relevance across various sectors.\n\nBusinesses benefit from efficient transaction processing and cost-effective solutions, making Paxful a significant player in facilitating financial operations. Acquire a USA Paxful account effortlessly at a competitive rate from usasmmonline.com and unlock access to a world of possibilities. Buy Verified Paxful Account.\n\nExperience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.\n\nPaxful’s verified accounts not only offer reliability within the trading community but also serve as a testament to the platform’s ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today. Buy Verified Paxful Account.\n\n \n\nWhy paxful keep the security measures at the top priority?\nIn today’s digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. 
It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.\n\nSafeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all. Buy Verified Paxful Account.\n\nConclusion\nInvesting in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin. Buy Verified Paxful Account.\n\nThe initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.\n\nIn conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller’s history and reviews before making any transactions.\n\nMoreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions. Buy Verified Paxful Account.\n\n \n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n" | unapapere865 |
1,882,254 | CSS CANVAS | https://github.com/Shivaya007/css-canva.git Inspiration My growing enthusiasm for web... | 0 | 2024-06-09T17:38:55 | https://dev.to/176_shivaya_gupta_516b185/css-canvas-30fi | frontendchallenge, devchallenge, css | ERROR: type should be string, got "\nhttps://github.com/Shivaya007/css-canva.git\n\n## Inspiration\nMy growing enthusiasm for web designing fuels my desire to dive deeper into this field. This passion motivates me to embrace new challenges, continually expand my knowledge, and refine my skills. I am excited to explore innovative design techniques, learn cutting-edge technologies, and create engaging, user-friendly websites. This challenge represents an opportunity for personal and professional growth, driving me to push my limits and achieve excellence in web design\n\n## Demo \n\n\n\n" | 176_shivaya_gupta_516b185 |
1,882,252 | Exploring The Pieces for Developers AI App - My Initial Thoughts | These days we are so fortunate to live in a time where we have access to dozens of applications which... | 0 | 2024-06-09T17:34:17 | https://dev.to/andrewbaisden/exploring-the-pieces-for-developers-ai-app-my-initial-thoughts-cc5 | productivity, writing, ai, webdev | These days we are so fortunate to live in a time where we have access to dozens of applications which can dramatically boost our productivity. I use quite a few popular productivity tools because I follow the Getting Things Done (GTD) workflow which is essentially a personal productivity philosophy that reinvents how you approach your life and work.
Today I want to introduce you to one of the hottest new AI-enabled productivity tools around, called Pieces. It has completely transformed my productivity workflow, and I rate it as highly as other productivity tools like Notion and Obsidian, which I am already using daily. I am getting so much use out of it that I added it to the dock on my Mac right next to Visual Studio Code 😄
I suppose you are wondering what exactly Pieces is.
## What is Pieces?
The Pieces team say that:
> "Pieces is your AI-enabled productivity tool designed to supercharge developer efficiency. Unify your entire toolchain with an on-device copilot that helps you capture, enrich, and reuse useful materials, streamline collaboration, and solve complex problems through a contextual _**understanding of your workflow."**_
Basically, this means that you can use the Pieces desktop application to save code snippets and use its AI features for various tasks, which I'm sure you are already familiar with if you use ChatGPT a lot. You get access to Large Language Models (LLMs) like OpenAI, Gemini and others, both in the cloud and locally on your machine.
You can download the [Pieces application](https://pieces.app/?utm_source=youtube&utm_medium=cpc&utm_campaign=andrew-partner-twitter) from their website and see just how well it can improve your workflow.
I was quite fortunate to discover this application. For a while, I had been seeing mentions of it on X (formerly Twitter), but I thought it was related to GitHub Copilot, when it is actually so much more. [Arindam Majumder](https://x.com/Arindam_1729) reached out to me on X, and this led to some coffee chats with [Rosie Levy](https://x.com/Rosie_at_Pieces) and [Tsavo van den Berg](https://x.com/KnottTsavo), who work for Pieces. I played around with the app, and my first impression was that it felt like a product that could be built into a future version of an operating system one day; that's how useful it is.
Being able to switch between multiple applications while Pieces uses context to understand what you're working on and offers AI assistance along the way can feel like real magic. It's almost like having your own personal Cortana, Jarvis, Lucy, or Siri with superpowers, or whatever future AI avatar you can imagine from a movie or real life. This is only the beginning, and I fully expect it to get better over time, because AI tools like ChatGPT have only been around since 2022.
Anyway, today I will go through a high-level overview of what my first impressions were when I started to use the desktop application. I will go through the 7 main application features one by one which are:
- Saved Materials
- Captured Context
- Copilot Chats
- Workflow Activity
- Global Search
- Snippet Discovery
- Updates & Upcoming
Alright, let's start with Saved Materials.
## Saved Materials
You can save code snippets by adding them to the application or by using one of the browser, IDE or other extensions. It's possible to copy, edit and manage the code snippets in different ways. It's even possible to use the inbuilt copilot to start a conversation about a code snippet, which is very cool and a great feature.

## Captured Context
The Captured Context section gathers relevant information about your work. This can include websites you have visited, people you have interacted with, related files, and relevant tags. You could think of it as a smart AI bookmarker that compiles information you might find useful based on what you are doing.

## Copilot Chats
This part of the application will be the most familiar to most people. If you have used an AI tool like ChatGPT, Copilot, Gemini, Claude or any of the others, then you already know what to expect: you use the prompt to ask the AI questions, and it responds. The Pieces desktop app is unique, though, because you can choose between different LLMs. As you can see, here I am using GPT-4; however, the latest GPT-4o is also available to use.

Here you can see the Pieces Copilot LLM selection screen.

## Workflow Activity
The workflow activity does exactly as you would expect it to do. It tracks everything that you have been doing and shows you a timeline of the events. You can go through the files and locate the activity within the Pieces application.

## Global Search
When using the global search you can search for anything inside of the application that you have worked on. This includes chats, snippets and many other things.

## Snippet Discovery
In this section of the Pieces application, it becomes possible to add entire codebases to our snippet database. All you have to do is select a codebase and then Pieces will import the relevant files. You then get the option to select which files you want to import.

## Updates & Upcoming
From this section, we can see the upcoming roadmap for the Pieces application. There are links to the discussions on GitHub, so you can take part and follow the progress of the latest feature requests. If a feature is already available, you can just follow the link and it will take you to that feature inside the Pieces application.

## Final Thoughts
We have just barely scratched the surface here; the Pieces application is capable of doing so much, and it is only going to get better. I am enjoying my experience so far, and this has become one of my must-have tools as a developer. I can see the Pieces application getting mentioned in the same conversations as Visual Studio Code, Notion and Obsidian.
As of writing, there are plugins for Visual Studio Code and Obsidian, and in the future there could be one for Notion as well. I posted some videos on X (formerly Twitter) that show the [Visual Studio Code plugin](https://x.com/andrewbaisden/status/1792945919572267280) in action, as well as the [Live Context](https://x.com/andrewbaisden/status/1796251951505498258) feature. The Live Context feature essentially gives you real-time contextual information across all applications on your operating system, so it can see what you are writing in documents, what you are browsing on the web, the code you are writing, emails, and much more.
It's worth mentioning that the application is privacy-oriented, so all of your data and private information should stay safe. You can turn off the Live Context mode if you want to. | andrewbaisden |
1,882,251 | Improving Developer Productivity With Pieces | Introduction Hello everyone! Recently, I got introduced to a developer tool from a Twitter... | 0 | 2024-06-09T17:32:56 | https://dev.to/olibhiaghosh/improving-developer-productivity-with-pieces-if3 | tutorial, ai, productivity, development | ## Introduction
Hello everyone! Recently, I got introduced to a developer tool in a Twitter Space, where I learned about its superb features and decided to use it. Today I'll be sharing it, and yeah, you guessed it right: it's **Pieces for Developers.**
I was working on a project named Makaut Buddy, and that's where I tried Pieces; it gave my productivity a 10x boost. It saved me time and gave me a smooth workflow. Keeping track of my work also became very easy with Pieces.
In this blog, I will be sharing how it helped me and how it can help you and increase your productivity.
Now the obvious questions that might come to your mind are: What is Pieces? Why Pieces? How has it helped me? And how can it help you too?
Let's answer them in the following sections.
## What is Pieces **for Developers**?
Pieces is an AI-enabled productivity tool built to boost developers' productivity. It's your AI companion.
To get a better idea of what Pieces does, check out this video.
{% embed https://youtu.be/aP8u95RTCGE?si=xBIwvzfuiOU-qfZL %}
## Why Pieces?
Now let's jump to the next question, "Why Pieces?", and also the obvious one, "Why am I even suggesting this?".
Previously, while working on projects, I used ChatGPT or Gemini as my AI tool. But the problems I faced were the continuous switching between tabs, not being able to manage code snippets in one place, and copy-pasting code from an IDE like VS Code into ChatGPT. It was a real headache for me.
That's when I decided to try out Pieces, and after using it for a few weeks I realized how it increased my productivity, solved all of the problems above, and provided me with a smooth workflow.
That is why I thought of sharing it with you all.
## How can it help you?
Now let me give you an overview of my experience of how Pieces can help you, along with its use cases and pros/cons; detailed notes on how it helped me follow in later sections of this blog.
Let's start with some of its **use cases**:
* It helps in generating code from plain-language instructions, searching for any code with context, generating code from screenshots, and much more.
* It can access your entire PC, including recent web searches, files, folders, and open tabs for a broader understanding of your work.
* Managing code snippets: organizing, saving, editing, and sharing code snippets in a much easier way.
* You can use Live Context to read the contents of an open tab even when a huge number of tabs are open, so you don't have to hunt for one tab among all of them. (It works only when Live Context is on and the Workstream Pattern Engine is enabled.)
* Live Context also keeps track of everything you've focused on while the WPE is running. And the amazing part is that even if you close a tab, you will still be able to access its information via Live Context.
Also, here are some of the pros/cons I discovered while using Pieces.
**Pros:**
* It can access your entire PC, including recent web searches, files, folders, and open tabs for a broader understanding of your work.
* It helps you to manage code snippets in an easier way.
* Tailors suggestions to your project's codebase and language.
* Can be run offline.
* It picks up the context from where you left off, thus potentially saving your time.
* It provides the workflow activity.
* It enhances privacy and security by storing everything on your local system.
* It also improves the code quality by providing additional links and information.
* You can also use Local LLMs through Pieces.
**Cons:**
* Functionality is a bit slow when taking context from files or folders.
* It would be better if there were also a cloud version with a request limit, so that not everyone needs to have Pieces OS on their system.
* Sometimes it throws an error while generating a response.
This was all about my discoveries on how Pieces can help you. Now let me share where Pieces actually helped me and how I used it.
## How Pieces helped me
I generally use Pieces while working on my projects and writing blogs.
In the following sections, I'll be sharing how I used different features of Pieces and how they helped me.
Let's start with my favourite one, the VS Code extension.
### **Visual Studio Code Plugin**
When I was working on my project in VS Code, I had trouble understanding a part of the code, so I decided to use the Pieces VS Code extension.
I selected the code and asked Pieces to explain it to me. It explained the whole thing, which saved my time and solved my problem.
Here's the screenshot of what I did.

The best part about the VS Code extension is that it has access to the VS Code workspace, so you can ask any question about any file or folder provided as context by giving the file name or its relative path. It also saves code snippets with a title as well as a description.

But one thing I didn't like was that when I accessed the saved snippets from a particular project folder in VS Code, it showed me all the saved snippets from every folder. It would be better if it could show only the snippets for that particular folder.
That was all about my thoughts while exploring this feature.
Now, let's jump into the feature I found the most helpful, the desktop app.
### **Desktop App**
My college gave me a screenshot of some code that I needed to run on my machine. So instead of typing the code out manually, I decided to use Pieces.
I used the screenshot-to-code generator feature in Pieces Copilot, which extracted the code from the screenshot and simplified my work. I just copied the code and ran it on my machine.

I explored the desktop app further as it caught my attention, and found several super-helpful use cases, which are listed below.

* You can use files and folders as context when asking any question about code, so you don't have to copy-paste huge blocks of code to get an answer.
* You can also save code snippets in a single place, i.e. the desktop app, and refer back to them later.
* The best feature I liked is the Screenshot to code converter as I mentioned earlier.
* You can use GPT-4o, and that's really awesome. You can also switch between various LLMs.

* **Pieces Desktop App** can access your entire PC, including recent web searches, files, folders, and open tabs for a broader understanding of your work.
* It also picks up context from where you left off, potentially saving time.
* Pieces helps you share code snippets using shareable links that come attached with context, important info, and related links.

And these were the awesome features the desktop app provides.
The only con I found here was the slow response from the Live Context feature; it takes a lot of time to grab the context and then provide the answer.
And that was all about it. Now let's look into the most fascinating feature I found, Live Context.
### **Live Context**
I was looking for a tweet of mine celebrating a follower milestone, so instead of searching for the Twitter tab with the post among the huge number of tabs I had opened earlier, I decided to use Live Context (my Live Context was already on and the Workstream Pattern Engine was enabled).
It found the post for me using the keyword "followers", which spared me the manual search; this was really fascinating.

It helps you deal with frequent tab switching. Using it, you can get information from any open tab without scrolling through and hunting among the numerous open tabs.
Moreover, when I was working on one of my projects in VS Code and couldn't understand it, I thought of referring to Live Context (of course, Live Context and the WPE were enabled).
I asked it to explain the code open in my VS Code, and it did that for me. This saved me from copying the code and then asking the Copilot. Isn't it amazing?
Here’s a screenshot of what I did.

And you know what, the best part is that it stores all the context and data locally on your device, making it more secure for you.
But sometimes it does not give the desired result; it can take some extra time to capture the context, so the response arrives late or reflects earlier context.
That was all about Live context.
And while I was exploring all this, I got to know about the web extension, so now let me share that with you too.
### **Web Extension**
While browsing, I come across a lot of code on various websites, and while working with that code I needed to switch between my tabs very often, which was a real headache for me.
The Pieces web extension gave me the option to save all the snippets in one place so I could refer to them later. This helped because I no longer needed to switch between tabs frequently; it saved all the snippets to the Pieces desktop app.

It also discovers all the possible snippets on a page and suggests saving them if required. This reduces your load, as it has already discovered all the code snippets on the website.

It also gives you access to the Copilot chat, with which you can ask questions without leaving the browser.
And that was all about the features and how they helped me.
## Setting Up
Now that you know all the amazing features it provides, all that remains is learning how to set it up and use it. For more information on how to set it up on your local system, check the link below.
[Check out this link for Set Up](https://docs.pieces.app/installation-getting-started/what-am-i-installing)
## Conclusion
In my opinion, the Pieces Suite is a pack of powerful tools that boosts developers' productivity, enriches learning, manages code snippets, and much more. Its web extension, VS Code extension, and Live Context make it superbly easy to keep track of work and maintain a consistent workflow.
In a nutshell, it's a great developer tool that can increase one's productivity 10x.
If you found this blog useful and insightful, share it and comment your views on it. Do follow for more such content. Also, connect with me on:
Email: [olibhia0712@gmail.com](http://olibhiag@gmail.com)
Socials: [Twitter](https://twitter.com/OlibhiaGhosh), [LinkedIn](https://www.linkedin.com/in/olibhiaghosh/) and [GitHub](https://github.com/OlibhiaGhosh)
Thanks for giving it a read !!
 | olibhiaghosh |
1,882,250 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ __ Buy verified cash app account Cash... | 0 | 2024-06-09T17:28:21 | https://dev.to/unapapere865/buy-verified-cash-app-account-5egi | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n__\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n" | unapapere865 |
1,882,249 | How to make snow skin mooncakes with glutinous rice flour at home | Ingredients for making snow skin mooncakes with glutinous rice flour Ingredients for making snow skin mooncakes with glutinous... | 0 | 2024-06-09T17:26:56 | https://dev.to/blinkbakery/cach-lam-banh-deo-trung-thu-bang-bot-nep-tai-nha-26nn | 1. Ingredients for snow skin mooncakes (bánh dẻo) made with glutinous rice flour
Ingredients for making snow skin mooncakes with glutinous rice flour
Ingredients for the mooncake sugar syrup
300g white sugar (refined sugar)
5ml lime juice
300ml water
Ingredients for the crust
392ml mooncake sugar syrup
12ml cooking oil
6ml pomelo blossom water
200g northern-style snow skin flour (roasted glutinous rice flour)
180g peeled split mung beans
80g sugar
70-80ml coconut oil or cooking oil
9g all-purpose flour
270ml water
2. Steps to make snow skin mooncakes with glutinous rice flour
Step 1: Cook the sugar syrup
Cook the sugar syrup
Bring 300ml of water to a boil, then add 300g of sugar and lower the heat. Add 5ml of lime juice and simmer for another 15 minutes, then turn off the heat and strain through a sieve. Let it cool before using it to make the cakes.
Step 2: Cook the mung bean filling
Wash the mung beans, soak them until they swell, then drain. Cook them with 80g of sugar and 200ml of water.
Cook the mung bean filling
Blend the cooked mung beans with the cooking water until completely smooth, strain through a sieve for a fine texture, then pour into a pan. Add half of the cooking oil (80ml) and stir continuously over low heat. Keep stirring while adding the remaining oil.
When the bean mixture thickens, dissolve 9g of all-purpose flour in 70ml of water, pour it gradually into the pan and stir evenly. Cook over low heat until the filling is dry and pliable, no longer runny, and does not stick to the pan.
Once the filling is ready, turn off the heat and let it cool slightly. Then divide the filling into equal portions, roll them into balls, and wrap them in plastic wrap so they do not dry out while you make the cakes.
Step 3: Prepare the crust
Prepare the crust
Mix the sugar syrup and pomelo blossom water together, then gradually add the snow skin flour while stirring quickly. Next, dust the dough with a thin layer of flour and knead evenly until it is pliable and smooth. Then wrap it in plastic wrap and let it rest for 30 minutes.
Finally, take the dough out, knead it once more, and divide it into equal portions.
Step 4: Shape and mold the cakes
Shape and mold the cakes
Step 5: The finished product
Traditional snow skin mooncakes pair a sweet, chewy crust with a rich, nutty mung bean filling; served with a cup of hot tea, what could be better?
| blinkbakery | |
1,882,248 | Cách làm bánh dẻo trung thu bằng bột nếp tại nhà | Nguyên Liệu làm bánh dẻo trung thu bằng bột nếp Nguyên liệu làm bánh dẻo trung thu bằng bột... | 0 | 2024-06-09T17:26:55 | https://dev.to/blinkbakery/cach-lam-banh-deo-trung-thu-bang-bot-nep-tai-nha-4f50 | 1. Nguyên Liệu làm bánh dẻo trung thu bằng bột nếp
Nguyên liệu làm bánh dẻo trung thu bằng bột nếp
Nguyên liệu làm bánh dẻo trung thu bằng bột nếp
Nguyên liệu làm nước đường bánh dẻo
300g đường trắng (đường tinh luyện)
5ml nước cốt chanh
300ml nước
Nguyên liệu làm vỏ bánh
392ml nước đường bánh dẻo
12ml dầu ăn
6ml nước hoa bưởi
200g bột bánh dẻo Bắc (bột nếp rang)
180g đậu xanh cà vỏ
80g đường
70-80ml dầu dừa hoặc dầu ăn
9g bột mì đa dụng
270 ml nước
2. Các bước làm bánh dẻo trung thu bằng bột nếp
Bước1: Nấu nước đường bánh dẻo
Nấu nước đường bánh dẻo
Nấu nước đường bánh dẻo
Đun sôi 300ml nước, sau đó cho vào 300gr đường rồi hạ lửa nhỏ, thêm 5ml nước cốt chanh và đun thêm 15 phút, tắt bếp và lọc qua rây. Để nguội sau đó có thể đem làm bánh.
Bước2: Sên nhân đậu xanh
Đậu xanh bạn rửa sạch, ngâm cho nở, rồi vớt ra để ráo. Sau đó đun chín với 80g đường và 200 ml nước.
Sên nhân đậu xanh
Sên nhân đậu xanh
Đem phần đậu xanh đã chín xay với nước cho thật nhuyễn, lọc qua rây cho mịn rồi cho vào chảo sên. Thêm ½ lượng dầu ăn (80ml) vào khuấy liên tục trên lửa nhỏ. Tiếp tục khuấy và thêm phần dầu ăn còn lại vào.
Khi hỗn hợp đậu chuyển đặc hơn, bạn hòa tan hỗn hợp 9g bột mì với 70 ml nước rồi cho từ từ vào chảo đậu và khuấy đều tay. Sên đậu ở lửa nhỏ cho đến khi nhân khô, dẻo, không bị chảy và không dính chảo là đạt.
Đợi nhân đạt thì tắt bếp và để hơi nguội, Sau đó bạn chia nhân thành các phần bằng nhau, vo tròn nhân và bọc lại bằng màng bọc thực phẩm để tránh nhân bị khô khi làm bánh.
Bước3: Chuẩn bị vỏ bánh
Chuẩn bị vỏ bánh
Chuẩn bị vỏ bánh
Trộn đều hỗn hợp nước đường, nước hoa bưởi, rồi thêm từ từ bột bánh dẻo và khuấy nhanh tay. Tiếp đó, xoa 1 lớp bột mỏng lên bột và nhào đều tay cho đến khi bột dẻo và mịn là đạt. Sau đó, bạn dùng màng bọc thực phẩm bao lại và ủ trong vòng 30 phút.
Cuối cùng, bạn lấy bột ra nhào lại một lần nữa rồi chia bột thành các phần bằng nhau.
Bước4: Tạo hình và đóng bánh
Tạo hình và đóng bánh
Tạo hình và đóng bánh
Bước5: Thành phẩm
Bánh dẻo truyền thống với phần nhân dẻo ngọt hòa quyện cùng nhân đậu xanh bùi bùi, dùng kèm một tách trà nóng thì còn gì bằng.
| blinkbakery | |
1,882,245 | Emulate User Activity with Bots | In our sales events platform, Auctibles, sellers have the exciting opportunity to create various... | 0 | 2024-06-09T17:26:03 | https://dev.to/kornatzky/emulate-user-activity-with-bots-5hdd | laravel, ecommerce, bots, php | In our sales events platform, [Auctibles](https://auctibles.com), sellers have the exciting opportunity to create various events. These range from a sale event at a fixed price to live auctions, where buyers bid the price up, providing a dynamic and engaging experience.
# The Technology
We use the Laravel framework in the context of the TALL stack: Tailwind, Alpine.js, Laravel, and Livewire.
# Why Emulation?
For a demo environment, it's crucial that we accurately emulate buyers' activity for a seller. This includes sending buy orders and chat messages. The purpose of the emulation is to allow the seller to get a feeling of how the event with multiple buyers will be observed from the event control screens. This includes changing the quantities of items still available, dynamic price changes, and chat messages from buyers.
# Ordinary Buyers
The buy screen is a Livewire component. Clicking the BUY button sends a broadcast event: `LiveBuy`, `LiveBid`, ...
Similarly, buyers send chat messages via a `ChatMessage` broadcast event.
# Bots for Emulation
We emulate buyers' activity via a bot. The bot is a ReactPHP server that periodically sends the same broadcast events that ordinary buyers send.
The server receives a `set_timer` GET call at the event's start and a `stop_timer` GET call at its end.
```php
use Psr\Http\Message\ServerRequestInterface;
use React\EventLoop\Factory;
use React\Http\Message\Response;
use React\Http\Server;
use React\Socket\SocketServer;

$loop = Factory::create();

$server = new Server(function (ServerRequestInterface $request) use ($loop) {
    $path = $request->getUri()->getPath();
    $method = $request->getMethod();

    if ($path == '/set_timer' && $method === 'GET') {
        $params = $request->getQueryParams();
        $event_id = $params['event_id'] ?? null;
        // start emulation
        return Response::plaintext('timer set')->withStatus(Response::STATUS_OK);
    } else if ($path == '/stop_timer' && $method === 'GET') {
        $params = $request->getQueryParams();
        $event_id = $params['event_id'] ?? null;
        // stop emulation
        return Response::plaintext('timer stopped')->withStatus(Response::STATUS_OK);
    }

    // ReactPHP expects every request to produce a response
    return Response::plaintext('not found')->withStatus(Response::STATUS_NOT_FOUND);
});

$socket = new SocketServer('0.0.0.0:' . config('app.BOT_SERVER_PORT'));
$server->listen($socket);

$loop->run();
```
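The bot can also be driven by hand for testing with plain GET requests; a minimal sketch, assuming for illustration that `BOT_SERVER_PORT` is configured as 8080 and an event with id 42 exists:

```shell
# Start emulation for event 42 (port and event id are hypothetical)
curl "http://localhost:8080/set_timer?event_id=42"

# Stop emulation for the same event
curl "http://localhost:8080/stop_timer?event_id=42"
```

This mirrors exactly what the Laravel component does below.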
# Laravel to ReactPHP
The ReactPHP server receives these calls from the component where the seller controls the event; the component sends them with Laravel's `Http` client:
```php
$url = config('app.BOT_SERVER_URL') . ':' . config('app.BOT_SERVER_PORT') . '/set_timer';
$data = [
    'event_id' => $this->event->id,
];
$response = Http::withQueryParameters($data)->get($url);
```
# Starting the Bot
To start the emulation, we set a periodic timer:

```php
$timer = $loop->addPeriodicTimer($this->period_in_seconds, function () use ($event_id) {
    $this->send_bid($event_id);
    $this->send_message($event_id);
});
```
We store the timer in an array:
```php
$this->timers[$event_id] = $timer;
```
# Stopping the Bot
```php
$loop->cancelTimer($this->timers[$event_id]);
unset($this->timers[$event_id]);
```
| kornatzky |
1,882,244 | How to Make Snow Skin Mooncakes with Glutinous Rice Flour at Home | Making snow skin mooncakes with glutinous rice flour is surely a topic many home bakers care about. Every Mid-Autumn... | 0 | 2024-06-09T17:21:50 | https://dev.to/blinkbakery/cach-lam-banh-deo-trung-thu-bang-bot-nep-tai-nha-38hh | Making snow skin mooncakes (bánh dẻo) with glutinous rice flour is surely a topic many home bakers care about. At Mid-Autumn Festival, alongside the familiar baked mooncake, one cannot leave out the snow skin mooncake, with its wide variety of fillings and preparations: the traditional snow skin mooncake and the chilled snow skin mooncake, a version with a cooked bean paste filling. Join BlinkBakery in the kitchen to make delicious, simple snow skin mooncakes with glutinous rice flour for the whole family to enjoy this coming Mid-Autumn Festival.
Contents
1. Ingredients for snow skin mooncakes made with glutinous rice flour
2. Steps to make snow skin mooncakes with glutinous rice flour
Step 1: Cook the sugar syrup
Step 2: Cook the mung bean filling
Step 3: Prepare the skin
Step 4: Shape and mold the cakes
Step 5: The finished product
3. Conclusion
1. Ingredients for snow skin mooncakes made with glutinous rice flour
Ingredients for the sugar syrup
300 g white sugar (refined sugar)
5 ml lime juice
300 ml water
Ingredients for the skin
392 ml sugar syrup
12 ml cooking oil
6 ml pomelo flower extract
200 g Northern-style mooncake flour (roasted glutinous rice flour)
Ingredients for the mung bean filling
180 g peeled split mung beans
80 g sugar
70-80 ml coconut oil or cooking oil
9 g all-purpose flour
270 ml water
2. Steps to make snow skin mooncakes with glutinous rice flour
Step 1: Cook the sugar syrup
Boil 300 ml of water, then add 300 g of sugar and lower the heat. Add 5 ml of lime juice and simmer for another 15 minutes, then turn off the stove and strain through a sieve. Let the syrup cool before using it for the cakes.
Step 2: Cook the mung bean filling
Rinse the mung beans, soak them until they swell, then drain. Cook them until soft with 80 g of sugar and 200 ml of water.
Blend the cooked mung beans with the cooking water until very smooth, strain through a sieve, then transfer to a pan. Add half of the cooking oil (80 ml) and stir continuously over low heat. Keep stirring and add the remaining oil.
When the bean mixture thickens, dissolve 9 g of flour in 70 ml of water, pour it slowly into the pan, and stir steadily. Cook the filling over low heat until it is dry, pliable, no longer runny, and no longer sticks to the pan.
Once the filling is ready, turn off the stove and let it cool slightly. Then divide it into equal portions, roll each into a ball, and cover with plastic wrap so the filling does not dry out while you assemble the cakes.
Step 3: Prepare the skin
Mix the sugar syrup and pomelo flower extract, then gradually add the mooncake flour while stirring quickly. Next, dust a thin layer of flour over the dough and knead until it is supple and smooth. Wrap it in plastic wrap and let it rest for 30 minutes.
Finally, take the dough out, knead it once more, and divide it into equal portions.
Step 4: Shape and mold the cakes
Step 5: The finished product
Traditional snow skin mooncakes, with a sweet chewy skin and a rich, nutty mung bean filling, are best enjoyed with a hot cup of tea.
3. Conclusion
Snow skin mooncakes made with glutinous rice flour are a delicious and meaningful treat, perfect as a gift for the Mid-Autumn Festival. We hope that with the detailed guide above, you can make fragrant, beautiful cakes to enjoy with your family. In addition, BlinkBakery also offers other high-quality mooncake products for its customers.
| blinkbakery | |
1,882,241 | Time for A Change | So here I am at the beginning of a new journey. I am making this post today to make my introduction... | 0 | 2024-06-09T17:21:04 | https://dev.to/reina_simms/time-for-a-change-2mnb | So here I am at the beginning of a new journey. I am making this post today to make my introduction and to talk about something I learned.
I was a teacher until about two months ago. I taught for more than 15 years in total and did a lot of work in special education. I won't go on too much about it at this point; however, I finally made the tough decision to leave teaching once and for all.
Teaching has been my entire life (outside of raising my own family) for the last 15 or so years. I have tried before to make the decision to leave, but I was always too scared to leave what I knew and was comfortable with. Even after I completed my Master's in Digital Innovation in 2021, I still didn't feel confident enough to leave the career, and to be honest I am still not 100% sure that I can do something else. With that being said, here is what I have learned in the past month or so:
As a teacher, I have been led to believe that I can only be a teacher and that this is a career I should stay in no matter what. Even as I drove home after my last day of teaching, thoughts of what my new career could entail crossed my mind, and before I knew it I was back to thinking about teaching/education-related jobs. This is not to say that the skills I will gain in the near future could not lend themselves back to an educational position; however, my goal is a new career. I have learned that being "a teacher" is so ingrained in me that even when trying to transition out, it is a difficult concept to get my head around.
I have many teacher friends, and when I gave notice that I would not return to teaching, a surprising number of my colleagues also expressed that they would like to leave. But they all had the same feelings/thoughts that I had, such as: what else could they do, how else could they do it, and they have been teaching for so long that leaving would be very difficult.
I learned that teachers have been taught that teaching is all we can do, and that leads so many of us to stay even after we are ready to move on. I learned that I have to know that this line of thinking is in fact not true, and that I can transition out of that career and be successful in a new profession.
So, with all that being said, I am on a journey to learn about software development, UX/UI, and much more, and I hope I can have the confidence to make a successful career out of it in the end.
If you have some advice to share with me as I get started, I would love to hear it! | reina_simms | |
1,882,233 | etcd: The Vital Key-Value Store Powering Kubernetes | etcd is an open-source, distributed key-value store that serves as the backbone for storing and... | 0 | 2024-06-09T17:16:53 | https://dev.to/nuwan_weerasinhge_d93fd5b/etcd-the-vital-key-value-store-powering-kubernetes-10n9 |

etcd is an open-source, distributed key-value store that serves as the backbone for storing and managing critical data in Kubernetes clusters. It acts as a highly available and consistent repository for all the configuration information that governs the state and behavior of your containerized applications.
**Core Functionalities of etcd in Kubernetes**
* **Configuration Storage:** etcd holds the configuration data for your Kubernetes cluster, including:
- Pod definitions (specifying container images, resources, and deployment configurations)
- Service definitions (exposing deployments as services within the cluster and externally)
- Namespaces (logically grouping resources for better organization)
- Network policies (controlling how pods communicate with each other and external networks)
- Cluster roles and bindings (defining access control for users and service accounts)
* **State Management:** etcd tracks the current state of the cluster in real-time, reflecting the status of deployments, pods, services, and other resources. This enables Kubernetes to maintain consistency and make informed decisions about scheduling and scaling containerized workloads.
* **Service Discovery:** etcd facilitates service discovery within the cluster. Services register themselves with etcd, allowing pods to find and interact with them using DNS names or service endpoints. This simplifies communication between containerized applications.
* **Coordination:** etcd plays a crucial role in coordinating activities across different components of the Kubernetes control plane. It ensures that the API server, scheduler, and controllers have access to the latest cluster state and can work together seamlessly.
**Key Features of etcd**
* **Distributed Storage:** etcd replicates data across multiple nodes (typically an odd number for leader election) to ensure high availability and fault tolerance. Even if one node fails, the cluster remains operational with the remaining nodes keeping the data consistent.
* **Leader-Based Consensus:** etcd employs the Raft consensus algorithm to maintain consistency across the distributed storage. A leader node coordinates updates, while follower nodes replicate the data to guarantee consistency and prevent data loss.
* **Watch Functionality:** Kubernetes utilizes etcd's watch functionality to monitor changes in the key-value store. This allows the Kubernetes control plane to react to events like pod creation or deletion, service updates, and more, enabling dynamic scaling and automated cluster management.
* **Secure Communication:** etcd supports secure communication between nodes and clients using TLS client certificates, safeguarding sensitive cluster data.
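These key-value and watch semantics can be tried directly with the `etcdctl` v3 command-line client; a minimal sketch, assuming a reachable etcd endpoint at the default `localhost:2379` (the key names are illustrative):

```shell
# Store and read back a key (etcd v3 API)
etcdctl put /config/app/replicas 3
etcdctl get /config/app/replicas

# List all keys under a prefix, much as Kubernetes reads its resource keys
etcdctl get /config --prefix --keys-only

# Watch a key: this blocks and prints every subsequent change,
# the same mechanism the Kubernetes control plane builds on
etcdctl watch /config/app/replicas
```

In a kubeadm cluster you would additionally pass the TLS certificate flags (`--cacert`, `--cert`, `--key`) to authenticate to the secured etcd members.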
**Benefits of Using etcd in Kubernetes**
* **High Availability:** With its distributed architecture, etcd offers exceptional durability and fault tolerance. Even in the event of node failures, the cluster remains operational and data is preserved.
* **Scalability:** etcd can be easily scaled horizontally by adding more nodes to the cluster. This caters to growing workloads and ensures the key-value store can handle increasing data storage and access demands.
* **Consistency:** The Raft consensus algorithm guarantees data consistency across all nodes in the etcd cluster. This eliminates potential conflicts and ensures that all components within the Kubernetes control plane have a consistent view of the cluster state.
* **Simplified Management:** etcd offers a straightforward API for storing and retrieving data, making it easy for Kubernetes to interact with it for managing cluster operations.
**Deployment Considerations**
* **Clustering:** etcd is typically deployed as a multi-node cluster for high availability. It's recommended to use an odd number of nodes to avoid split-brain scenarios during leader election.
* **Security:** Secure your etcd cluster by enabling TLS client certificate authentication and restricting access to authorized clients only.
* **Monitoring:** Monitor etcd cluster health to ensure consistent operation and identify potential issues early on. Metrics like leader election times, follower lag, and storage usage can be valuable indicators.
**In Conclusion**
etcd is an indispensable component of Kubernetes, providing a robust and reliable foundation for storing and managing cluster-wide data. Its distributed architecture, high availability, and consistency features are essential for ensuring the smooth operation and scalability of containerized applications deployed in Kubernetes environments. By understanding how etcd works and its significance, you can effectively configure and manage your Kubernetes clusters for optimal performance and resilience. | nuwan_weerasinhge_d93fd5b | |
1,882,232 | Secure Shell (SSH): Accessing Remote Machines Securely | SSH, or Secure Shell, is a fundamental tool for securely connecting to and managing remote computer... | 0 | 2024-06-09T17:15:31 | https://dev.to/nuwan_weerasinhge_d93fd5b/secure-shell-ssh-accessing-remote-machines-securely-518i |
SSH, or Secure Shell, is a fundamental tool for securely connecting to and managing remote computer systems. It enables you to log in to a remote machine, execute commands, transfer files, and manage resources as if you were sitting directly in front of it. This article delves into the world of SSH, explaining its functionalities, usage with examples, and key security aspects.
### Understanding SSH
SSH establishes a secure encrypted channel between your local machine (client) and a remote machine (server) running an SSH server daemon. This encrypted tunnel ensures that all data exchanged during the session, including your login credentials and commands, remains confidential and protected from prying eyes on potentially unsecured networks.
### Using SSH: Basic Steps
Here's a breakdown of the typical SSH workflow:
1. **Prerequisites:**
- Ensure both your local machine and the remote server have SSH installed and running.
- For Linux and macOS, SSH is typically pre-installed. On Windows, you might need to install an SSH client like PuTTY.
2. **Initiating the Connection:**
- Open a terminal window on your local machine.
- Type the following command, replacing `<username>` with your username on the remote server and `<remote_server>` with the server's hostname or IP address:
```
ssh <username>@<remote_server>
```
- Press Enter.
3. **Authentication:**
- The first time you connect to a server, you'll be prompted to verify the server's fingerprint (a unique identifier). This ensures you're connecting to the intended server and not a malicious imposter.
- Type "yes" and press Enter to proceed.
- You'll then be prompted for your password on the remote server. Enter your password securely (characters won't be displayed while typing) and press Enter.
4. **Remote Access:**
- If authentication is successful, you'll be granted access to the remote server's command line. You can now execute commands on the server as that user.
5. **Exiting the Session:**
- To terminate the SSH session and return to your local machine, type the following command and press Enter:
```
exit
```
### Example: Connecting to a Remote Server
Let's consider a scenario where you want to connect to a remote server named "server1" using your username "alice". Here's the corresponding SSH command:
```
ssh alice@server1
```
Once you enter your password and authenticate successfully, you'll have a secure shell session established with "server1". You can then manage the server by issuing commands directly on its terminal.
### Beyond Basic Usage: Additional SSH Features
SSH offers a plethora of functionalities beyond basic connections. Here are some noteworthy features:
* **Specifying SSH Port:** The default SSH port is 22. You can specify a different port number during connection by adding `-p <port_number>` after the server address in the SSH command.
* **Secure File Transfer:** Commands like SCP (Secure Copy) and SFTP (SSH File Transfer Protocol) leverage SSH for secure file transfer between your local machine and the remote server.
* **Public Key Authentication:** This method eliminates the need to enter a password every time. You can configure SSH to use a public-private key pair for authentication, enhancing security and convenience.
* **Port Forwarding:** SSH allows forwarding ports on your local machine to ports on the remote server, enabling access to remote services through your local machine.
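As an illustrative sketch of the last two features (hostnames, usernames, and ports are placeholders reused from the earlier example):

```shell
# Generate an ed25519 key pair; the private key never leaves your machine
ssh-keygen -t ed25519 -C "alice@laptop"

# Install the public key into ~/.ssh/authorized_keys on the remote server
ssh-copy-id alice@server1

# Subsequent logins authenticate with the key instead of a password
ssh alice@server1

# Forward local port 8080 to port 80 on server1, so that
# http://localhost:8080 reaches the remote web service through the tunnel
ssh -L 8080:localhost:80 alice@server1
```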
### Security Considerations with SSH
While SSH is a secure protocol, here are some security practices to remember:
* **Maintain strong passwords:** Use complex and unique passwords for your remote server accounts.
* **Enable Public Key Authentication:** Public key authentication offers a more secure alternative to password-based logins.
* **Keep SSH software up-to-date:** Ensure both your local SSH client and the remote server's SSH daemon are updated with the latest security patches.
* **Restrict SSH access:** Limit SSH access to authorized users and consider implementing additional security measures like firewalls for further protection.
By understanding these concepts and implementing best practices, SSH can become a powerful tool for securely managing your remote machines and infrastructure. | nuwan_weerasinhge_d93fd5b | |
1,882,176 | Create Floating Label Input Using Tailwind CSS | Hi guys, I wanna show you about floating label input of course using tailwind css😎 Here's the... | 0 | 2024-06-09T17:15:20 | https://dev.to/driannaird/create-floating-label-input-using-tailwind-css-3409 | webdev, tailwindcss, frontend, css | **Hi guys, I wanna show you about floating label input of course using tailwind css😎**
Here's the result

You wanna try?, lets go.
_1. We assume that you already have a tailwind project._
_2. Create a structure HTML like below._
```HTML
<div class="relative">
<input id="email" type="email" placeholder="" />
<label for="email">Email</label>
</div>
```
_3. Add some tailwind style._
```HTML
<div class="relative">
<input id="email" type="email" placeholder="" class="block border border-zinc-500 px-4 pb-1 pt-6" />
<label for="email" class="absolute top-4">Email</label>
</div>
```
result

- We make the input element's display `block` and the label element's position `absolute`; of course, you need a `relative` position on the parent element.
- Add some styles to the input, like a border (it depends on your creativity🔥).
- Give space for the placeholder made from the label (the extra top padding).
- Then adjust the label position so it looks like the result above.
_4. Move and Scale the label._
```HTML
<label for="email" class="absolute top-4 scale-75 -translate-y-3 origin-[0] left-4">Email</label>
```
to look like this.

- More about scale [Scale Tailwind CSS](https://tailwindcss.com/docs/scale)
- The translate utilities move the element along the `x` or `y` axis.
- The `origin-[0]` utility specifies the origin for the element's transformations.
_5. Creates a label effect from the displayed placeholder input and if the input is focused using peer._
```HTML
<div class="relative">
<input id="email" type="email" class="peer block border border-zinc-500 px-4 pb-1 pt-6" placeholder="" />
<label for="email" class="absolute left-4 top-4 origin-[0] -translate-y-3 scale-75 peer-placeholder-shown:translate-y-0 peer-placeholder-shown:scale-100 peer-focus:-translate-y-3 peer-focus:scale-75">Email</label>
</div>
```
- When you need to style an element based on the state of a sibling element, mark the sibling with the `peer` class and use `peer-*` modifiers like `peer-invalid` to style the target element. In the example above, the label transforms both while the input's placeholder is shown and while the input is focused.
_6. Last step, transition effect._
```HTML
<label for="email" class="... duration-150">Email</label>
```
- Just add a duration utility and it will make the transformation effect smoother.
Congratulations, it's finished!

Full code
```HTML
<div class="relative">
<input id="email" type="email" class="peer block border border-zinc-500 px-4 pb-1 pt-6" placeholder="" />
<label for="email" class="absolute left-4 top-4 origin-[0] -translate-y-3 scale-75 duration-150 peer-placeholder-shown:translate-y-0 peer-placeholder-shown:scale-100 peer-focus:-translate-y-3 peer-focus:scale-75">Email</label>
</div>
```
Follow my instagram: [@driannaird](https://www.instagram.com/driannaird/)
And give the fire🔥🔥🔥🔥🔥
Thank you.
| driannaird |
1,882,231 | Tourney Cap | Our surgical caps are made from soft, breathable fabric that ensures comfort during long surgeries.... | 0 | 2024-06-09T17:14:48 | https://dev.to/nelson_murdock_92092a944a/tourney-cap-1n5f | surgical, medical | Our [surgical caps](https://tourneycap.com/product-category/surgical-caps/) are made from soft, breathable fabric that ensures comfort during long surgeries. They come in various colors and patterns, allowing medical professionals to express their personalities while maintaining a professional appearance. The caps are designed to fit securely, preventing hair from falling into the face and ensuring a hygienic working environment. They are easy to wash and maintain, making them a practical choice for daily use. | nelson_murdock_92092a944a |
1,882,230 | Frontend Challenge | Best Beaches in the World! | This is a submission for Frontend Challenge v24.04.17, Glam Up My Markup : Beaches 🌴🏖️ Welcome to My... | 0 | 2024-06-09T17:13:22 | https://dev.to/hikolakita/frontend-challenge-best-beaches-in-the-world-ddg | frontendchallenge, devchallenge, css | _This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup : Beaches_
**🌴🏖️ Welcome to My June CSS Frontend Challenge Submission! 🌊🌅**
## Inspiration 🤔
**Today**, I am highlighting a serene and visually appealing web page dedicated to showcasing the best beaches in the world. The design captures the tranquility and beauty of beach destinations, aiming to evoke a sense of wanderlust and relaxation. The warm colors and tropical imagery set the perfect mood for exploring _dreamy beach_ getaways. 🌞🌴
## Demo 🌐
Here's a preview of my submission:

👉 You can view and interact with the full demo [here](https://hikolakita.github.io/Frontend-Challenge-Top-Beaches-/).
👉 You can also preview the full code on this [repository](https://github.com/Hikolakita/Frontend-Challenge-Top-Beaches-).
## Journey 🚀
Creating this webpage was an exciting journey that allowed me to blend aesthetics with functionality. I focused on achieving a visually pleasing gradient background to simulate a sunset, complemented by palm tree silhouettes to enhance the tropical theme. 🌇🌴 The typography was chosen to be welcoming and easy to read, emphasizing the headline and call-to-action button.
## What I Learned 📚
- CSS Gradients: Crafting beautiful, smooth transitions that mimic natural light.
- Layering Images with Transparency: Adding depth and dimension to the visuals.
- Creating a Cohesive Color Scheme: Ensuring all elements harmonize to enhance the user experience.
I am particularly proud of how the background and text elements harmonize to create an inviting and immersive experience.
## What's Next? 🔮
- Responsive Design: Refining my skills to ensure the webpage looks great on all devices.
- Animations: Experimenting with subtle movements to bring the design to life.
Thank you for taking the time to check out my project! I hope it inspired a bit of wanderlust in you. 🌍✨
> "To travel is to take a journey into yourself." – Danny Kaye
Feel free to leave feedback or suggestions in the comments. Until next time, happy coding! 💻🌟
Edit : Well, I think it's responsive now :)
If it's not for you, just let me know what's your screen size and I'll fix it ;) | hikolakita |
1,882,229 | DOM Guardians: Exploring JavaScript’s MutationObserver API | Welcome to the world of JavaScript Observers! If you've ever needed a way to keep an eye on changes... | 0 | 2024-06-09T17:11:14 | https://dev.to/mini2809/dom-guardians-exploring-javascripts-mutationobserver-api-29hd | javascript, mutationobserverapi, webdev | Welcome to the world of JavaScript Observers! If you've ever needed a way to keep an eye on changes in your DOM and react dynamically, then observers are your best friends. Let's break it down.
**What Exactly Are Observers?**
As the name suggests, an observer watches for changes or events to occur and then springs into action with a callback function.
**The MutationObserver API:**
The MutationObserver API is your go-to tool for detecting changes in the DOM tree of your browser. Here’s how you can set it up:
```
function cb(mutations, observer) {
  // react to the array of MutationRecord objects
}

// Create a MutationObserver object and pass the callback defined
const observer = new MutationObserver(cb);
observer.observe(targetNode, options);
```
**Configuring Your Observer: The Options**
To tell your MutationObserver what to watch for, you provide it with a set of options:
- childList: Boolean
- attributes: Boolean
- characterData: Boolean
- subtree: Boolean
- attributeFilter: []
- attributeOldValue: Boolean
- characterDataOldValue: Boolean
**Breaking Down the Boolean Functions:**
1. `childList`: Monitors the addition or removal of child nodes. When this happens, the mutation observer triggers an event of type `childList`.
2. `attributes`: Watches for changes to the attributes of a node. Any style change? Boom! The observer triggers an event of type `attributes`.
3. `characterData`: Keeps an eye on changes to the character data of text nodes, triggering an event of type `characterData`.

These three properties are essential for setting up observers on a target node.

4. `subtree`: Observes changes in any part of the subtree of a particular node. While `childList` monitors only direct children, `subtree` goes all the way down the rabbit hole to nested children.
5. `attributeOldValue` and `characterDataOldValue`: These properties store the old values, allowing you to compare the changes and deduce what's different.
6. `attributeFilter`: Specify which attributes to monitor changes for, using an array field. This is especially useful when you want to zero in on specific attributes.
```
function cb(mutations) {
mutations.forEach(mutation => {
console.log(mutation);
});
}
const observer = new MutationObserver(cb);
const options = {
childList: true,
attributes: true,
characterData: true,
subtree: true,
attributeFilter: ['class', 'style'],
  attributeOldValue: true,
characterDataOldValue: true
};
const targetNode = document.getElementById('target');
observer.observe(targetNode, options);
// Remember to disconnect when done
// observer.disconnect();
```
**Note**: Always use observer.disconnect() when you’re done observing to prevent memory leaks.
With this setup, your JavaScript code is now armed with a vigilant observer, ready to react to any changes in your DOM.
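One practical tip: since the callback receives an array of plain `MutationRecord` objects, the logic that inspects them can be kept framework-free and unit-tested on its own. Here's a hypothetical helper (not part of the setup above, just an illustration) that tallies records by their `type`:

```javascript
// Hypothetical helper: counts MutationRecord entries by their `type` field
// so the observer callback itself can stay tiny. It only reads plain
// properties, so it works on any record-shaped object, even in tests.
function summarizeMutations(mutations) {
  const summary = { childList: 0, attributes: 0, characterData: 0 };
  for (const record of mutations) {
    if (record.type in summary) {
      summary[record.type] += 1;
    }
  }
  return summary;
}

// Inside the observer callback you could then do:
// const observer = new MutationObserver((mutations) => {
//   console.log(summarizeMutations(mutations));
// });
```

Keeping this kind of logic pure makes it easy to verify without spinning up a DOM at all.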
**Stay Tuned: More Observers to Come**
But wait, there’s more! The world of observers doesn’t end here. There are different types of observers that we'll cover in upcoming blogs, each with its unique superpowers. From IntersectionObserver to ResizeObserver, we’ll dive into how these tools can make your web applications even more dynamic and responsive.
Stay tuned and keep observing!
Also for more detailed examples and code snippets, check out my [github repository](https://github.com/mini2809/Observers). Happy coding.
If you have any questions, thoughts, or suggestions, feel free to leave a comment below.Also Have you used MutationObservers in your projects? Share your experiences and tips in the comments! | mini2809 |
1,882,228 | Performance optimization with useMemo | One of those things that you don't know enough, and then when you know somewhat you tend to use a bit... | 0 | 2024-06-09T17:08:15 | https://www.oh-no.ooo/articles/performance-optimization-with-usememo | react, webdev, performance, javascript | One of those things that you don't know enough, and then when you know somewhat you tend to use a bit too much is React beloved and behated <a href="https://react.dev/reference/react/useMemo" target="_blank">useMemo hook</a>. In my quest of trying to understand more the features of React that I use on a regular basis, I find myself in need to write more about this hook, clarifying what is it, when to use it and when not.
So, let's get started and unravel this madness!
## What is useMemo?
Introduced in React 16.8, the useMemo hook is a built-in feature designed to memoize values and prevent unnecessary recalculations and re-renders, meaning that instead of calculating things on the fly, it will first check if that computation has already done and take the value already stored.
... Wait wait wait! __memoization what?!__ What's that?!
<a href="https://www.freecodecamp.org/news/author/gercocca/" target="_blank">Germán Cocca</a> makes a great job at explaining this concept <a href="https://www.freecodecamp.org/news/memoization-in-javascript-and-react/" target="_blank">in his article about Memoization in Javascript and React</a>, and I will not try to rewrite it when he's already told it better than I ever can!
<blockquote>
<p>In programming, <strong>memoization is an optimization technique</strong> that makes applications more efficient and hence faster. It does this by storing computation results in cache, and retrieving that same information from the cache the next time it's needed instead of computing it again.</p>
<br />
<p>In simpler words, <mark><strong>it consists of storing in cache the output of a function</strong>, and making the function check if each required computation is in the cache before computing it.</mark></p>
<br />
<div class="text-right">— <a href="https://www.freecodecamp.org/news/author/gercocca/" target="_blank">Germán Cocca</a> in <a href="https://www.freecodecamp.org/news/memoization-in-javascript-and-react/" target="_blank">What is Memoization? How and When to Memoize in JavaScript and React</a></div>
</blockquote>
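To make the idea concrete, here's a minimal, framework-free sketch of memoization in plain JavaScript. This is only an illustration of the general concept, not how React implements useMemo internally (useMemo caches a single value per hook and invalidates it via the dependency array):

```javascript
// Minimal memoization sketch: cache one result per stringified argument list.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args)); // compute once...
    }
    return cache.get(key); // ...then serve from cache
  };
}

let calls = 0;
const slowDouble = (n) => {
  calls += 1; // track how often the real computation runs
  return n * 2;
};
const fastDouble = memoize(slowDouble);

fastDouble(21); // computed: slowDouble runs
fastDouble(21); // served from cache: slowDouble is NOT called again
```

Same output, but the underlying function only ran once.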
So far so good. Yes?! __Nope__.
I have heard that *useMemo should not always be used* and this explanation makes me think instead that it always sounds like a good idea, so let's continue digging up a bit more more, but let's go the long route and let's see first how to implement it.
## How to use useMemo
The basic syntax for useMemo is
```javascript
const memoizedValue = useMemo(() =>
computeExpensiveValue(a, b), [a, b]);
```
This line of code contains the function used to calculate the value and the dependency array. For example, consider a component that filters a list of items based on a search term:
```javascript
import React, { useMemo } from 'react';
const FilteredList = ({ items, searchTerm }) => {
const filteredItems = useMemo(() => {
return items.filter(item =>
item.toLowerCase()
.includes(searchTerm.toLowerCase())
);
}, [items, searchTerm]);
return (
<ul>
{filteredItems.map(item => (
<li key={item}>{item}</li>
))}
</ul>
);
};
```
In this example, we use useMemo to memoize the filtered list of items and avoid recalculating the filtered list every time the component renders. *Still sounds pretty good and still like I should actually be using it more than I actually do.* 🤔
Browsing various examples online, as you can see from the first snippet of code in this article, however, I find the emerging written pattern of using useMemo *only for expensive calculations*, *computed expensive value*, *expensive operations*, and I finally start understanding that there is something particularly in mind for which useMemo is recommended.
But what is an expensive calculation in JavaScript?
## Expensive calculations
Expensive calculations are __tasks that require a significant amount of processing time and resources__: they might be time-consuming, memory-intensive, or computationally complex. What seems to fit in this category is:
- __Manipulating large strings__, such as parsing and transforming text
- __Transforming large datasets through iteration__, while performing multiple calculations on the data
- __Sorting or filtering large arrays__
- __Fetching data through APIs/databases__
- __Processing and manipulating big files__
- __Rendering a top component__ with a significant number of nested components and elements
- __Recursive algorithms__, advanced statistical calculations, matrices, weird formulas etc. etc. etc.
<strong>Note:</strong>
<a href="https://react.dev/reference/react/useMemo" target="_blank">React useHook official documentation</a> has a section <a href="https://react.dev/reference/react/useMemo#how-to-tell-if-a-calculation-is-expensive" target="_blank">How to tell if a calculation is expensive?</a> where it teaches us to use the console by adopting <code>console.time()</code> and <code>console.timeEnd()</code> to better determine the cost of our operations.
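As a rough sketch of that technique (the array here is just a stand-in for your own data, and the label is arbitrary):

```javascript
// Wrap the suspect calculation in console.time / console.timeEnd and check
// the logged duration in your browser console.
const items = Array.from({ length: 100_000 }, (_, i) => i);

console.time('filter items');
const visible = items.filter((n) => n % 2 === 0);
console.timeEnd('filter items'); // logs something like "filter items: <duration>ms"
```

If the logged time adds up to a significant amount (the React docs use roughly 1ms as a rule of thumb), the calculation may be worth memoizing.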
Not all these operations are inherently worthy of useMemo though! There are at least a few things to consider.
<br /><br />
### 1. How often is the operation performed?
If the operation is performed frequently or triggered by state/props changes, and the result doesn't change as often, memoizing the result can prevent redundant recalculations, therefore improving overall performance.
For example, if you have a big array of strings that needs to be formatted in a particular way, and you might want to list all of them in a nice HTML list, using useMemo might help avoiding slow renders in the UI.
<br />
### 2. How reusable is the result of useMemo?
The result is used within the component or passed down as props to child components; if the result is used multiple times, useMemo can indeed help avoid unnecessary recalculation.
Think of a large file that you want to use in different sections of your app; it's definitely worth contemplating memoizing it.
<br />
### 3. Is it worth the memory consumption?
As the memoized values are stored in memory until the component unmounts or the dependencies change, memoizing large datasets will definitely increase memory consumption; __if the fetching from caching is faster than the recomputation of the calculations, perhaps useMemo might have a valid purpose__.
If you're fetching a big list and your interface freezes, perhaps yes, it's time to think about useMemo.
<hr/>
In all of the above cases, and even more critical than for `useEffect`, __the dependencies of the memoized function need to be properly evaluated__ to make sure that useMemo recalculates the memoized value only when necessary, as performance might otherwise ironically drop.
In his article <a href="https://maxrozen.com/understanding-when-use-usememo" target="_blank">Understand when to use useMemo</a>, <a href="https://twitter.com/RozenMD" target="_blank">Max Rozen</a> puts it up in simple words:
<br />
<blockquote>
<p><mark>Don't use useMemo until you notice parts of your app are frustratingly slow.</mark> <strong>Premature optimisation is the root of all evil</strong>, and throwing useMemo everywhere is premature optimisation.</p>
— <a href="https://twitter.com/RozenMD" target="_blank">Max Rozen</a> in <a href="https://maxrozen.com/understanding-when-use-usememo" target="_blank">Understand when to use useMemo</a>
</blockquote>
<br />
## "Premature optimization is the root of all evil"
*This paragraph might be a tad too long and expanding on a topic that is not quite the purpose of this article; feel free to move to the next header if you already have a grasp about premature optimization.*
Calling premature optimization *the root of all evil* seems to be something rather common, and <a href="https://effectiviology.com/itamar-shatz/" target="_blank">Dr. Itamar Shatz</a> in his article about <a href="https://effectiviology.com/premature-optimization/" target="_blank">Premature Optimization: Why It’s the “Root of All Evil” and How to Avoid It</a> says that<br /><br />
<blockquote>
<p><strong>Premature optimization</strong> involves trying to <mark><strong>improve something</strong> — especially with the goal of perfecting it — <strong>when it’s too early to do so</strong>.</mark></p>
— <a href="https://effectiviology.com/itamar-shatz/" target="_blank">Dr. Itamar Shatz</a> in <a href="https://effectiviology.com/premature-optimization/" target="_blank">Premature Optimization: Why It’s the “Root of All Evil” and How to Avoid It</a>
</blockquote>
<br />
His article also shows that the famous *Premature optimization is the root of all evil* sentence is a quote from the computer scientist <a href="https://profiles.stanford.edu/donald-knuth" target="_blank">Donald Knuth</a>, from his article <a href="https://dl.acm.org/doi/10.1145/356635.356640" target="_blank">Structured Programming with go to Statements</a>. The article is a reminder to developers not to spend too much time or resources optimizing parts of a program before they need to and especially before they have a working version.
According to Donald Knuth, premature optimization leads to wasted effort, as you might end up optimizing parts of the code that don't have a significant impact on the overall performance, or even optimizing code that eventually gets removed or significantly changed.
His recommendations are to __first build a functional system__, __then identify any performance issues__, and __finally optimize critical parts that actually need optimization__, without forgetting that readability and maintainability are often more important than minor efficiency gains, especially in early development stages.
Dr. Itamar Shatz lists out the <a href="https://effectiviology.com/premature-optimization/#Dangers_of_premature_optimization" target="_blank">dangers of premature optimization</a> in a few yet impactful points, which I recommend reading from his article.
<hr />
I have now some ideas on where useMemo could come at hand as well as potential dangers of premature optimization, but I still feel like the constraints for which it's recommended are still a bit unable to give a clear line. Perhaps it'd be beneficial to go the other route and analyze when _not_ to use useMemo.
<br />
## When not to use useMemo
By what we've figured out before, if an operation is relatively simple, doesn't consume much processing time, and doesn't depend on frequently changing data, __using useMemo will add unnecessary overhead without significant performance gains__.
Therefore
- if a component **doesn't have any expensive calculations** and **doesn't render frequently**
- or the expensive operation is **isolated to a specific part of the component that doesn't impact the overall rendering performance**
- or the expensive operation varies significantly between the instances where the component is used
- and we're trying to accommodate all sorts of devices that might not have a lot of memory available
- and **the performance improvement is not noticeable by end-users**
... __we might find ourselves perhaps not in need of useMemo__.
We also need to consider that adding useMemo to memoize some calculations can make writing unit tests more complicated, and while it's not a reason to avoid useMemo, it's definitely worth to weigh the benefits of memoization also against the ease of testing and maintaining the code.
<br />
## Example implementations
From all this reading and writing, I can deduct that there are plausible usages where you expect to process a lot of data, that won't change as often, such as:
```javascript
// Plausible usage of useMemo
import React, { useMemo } from 'react';
const BigProcessedList = ({ data }) => {
const processedData = useMemo(() => {
return data.map((item) => {
// Simulating an expensive computation
for (let i = 0; i < 1e6; i++) {}
return item * 2;
});
}, [data]);
return (
<ol>
{processedData.map((item, index) => (
<li key={`${index}__${item}`}>{item}</li>
))}
</ol>
);
};
```
Another plausible example would be:
```javascript
// Plausible good use of useMemo
import React, { useMemo } from 'react';
const EmployeeList = ({ employees, filter }) => {
const filteredEmployees = useMemo(() => {
return employees.filter(emp => emp.department === filter);
}, [employees, filter]);
return (
<ul>
{filteredEmployees.map((employee) => (
<li key={employee.id}>{employee.name}</li>
))}
</ul>
);
};
```
... And potentially, there could be so many bad/unrecommended implementations where there's no big computation needed and therefore useMemo is bringing no real benefit to the end user:
```javascript
// Unoptimal use of useMemo
import React, { useMemo } from 'react';
const CombinedStringComponent = ({ propA, propB }) => {
const combinedProps = useMemo(() => {
return `${propA} ${propB}`;
}, [propA, propB]);
return (
<div>{combinedProps}</div>
);
};
```
Ultimately, the trend with useMemo seems to be that as soon as people get to know more about it, they want to implement it, whereas __it'd make more sense to leave it for only when the app clearly shows benefits from implementing it__. It sounds like pretty good news to me, as it means that most likely I don't have to change anything on my website, since I don't seem to see anything particularly slow... *yay!* 😂
I've truly enjoyed writing about this and I can't help but strongly recommend reading the sources/resources attached to the article; if you feel like I've miswritten something (which can very well be the case, as my best corrections often come days after I wrote something), don't hesitate to mention that in the comments below!
But before I leave...
<br />
## A brief note about useMemo vs. useCallback
You'll also hear this one too! And while `useCallback` is gonna be a topic for another day, let's just iron this one last thing out before we call it a day!
__useCallback__ is similar to useMemo, but __instead of memoizing the result of a function, it memoizes the actual function itself__, which is particularly useful when you have a component that receives a function as a prop and you want to prevent unnecessary re-rendering of child components that depend on that function. This means that if the dependencies of useCallback don't change, __the same function instance is used, avoiding unnecessary re-renders of child components that depend on it__.
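The identity problem useCallback solves can be seen in plain JavaScript: recreating a function produces a new reference every time, even when its behavior is identical. (The React-specific part, caching the instance across renders, is omitted here; this just illustrates why identity matters.)

```javascript
// Each call to makeHandler returns a brand-new function object. In React,
// a component body re-runs on every render, so an inline handler gets a new
// identity each time, which defeats referential-equality checks in children.
const makeHandler = (value) => () => value * 2;

const first = makeHandler(10);
const second = makeHandler(10);

first === second;     // false: same behavior, different identity
first() === second(); // true: both return 20
```

That's why a child comparing props by reference would re-render: the handler is "new" every time, even though it does the same thing.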
Now out! Bye! :D
<br />
## Sources and inspiration
- <a href="https://react.dev/reference/react/useMemo" target="_blank">Official React useHook documentation</a>
- <a href="https://chat.openai.com/" target="_blank">ChatGPT</a> prompt: `Can you help me write a good article on the What, Why and How of using useMemo hook?`
- <a href="https://www.w3schools.com/react/react_usememo.asp" target="_blank">W3School documentation on useMemo</a>
- <a href="https://www.freecodecamp.org/news/memoization-in-javascript-and-react" target="_blank">What is Memoization? How and When to Memoize in JavaScript and React</a> by <a href="https://www.freecodecamp.org/news/author/gercocca/" target="_blank">Germán Cocca</a> from <a href="https://www.freecodecamp.org/" target="_blank">freeCodeCamp</a>
- <a href="https://maxrozen.com/understanding-when-use-usememo" target="_blank">Understand when to use useMemo</a> by <a href="https://twitter.com/RozenMD" target="_blank">Max Rozen</a>
- <a href="https://effectiviology.com/premature-optimization/" target="_blank">Premature Optimization: Why It’s the “Root of All Evil” and How to Avoid It</a> by <a href="https://effectiviology.com/itamar-shatz/" target="_blank">Dr. Itamar Shatz</a>
- Cover: <a href="https://www.freepik.com/free-vector/gradient-liquid-abstract-background_13346239.htm" target="_blank">Vector Liquid Background</a> by <a href="https://www.freepik.com/author/pikisuperstar" target="_blank">pikistar</a> via <a href="https://freepik.com" target="_blank">Freepik</a>, <a href="https://www.freepik.com/free-psd/3d-illustration-people-working-marketing_42937458.htm" target="_blank">People working marketing</a> by <a href="https://freepik.com" target="_blank">Freepik</a>
<hr />
Originally posted in <a href="https://oh-no.ooo">oh-no.ooo</a> (<a href="https://www.oh-no.ooo/articles/performance-optimization-with-usememo">Performance optimization with useMemo</a>), my personal website.
| mahdava |
1,819,859 | HOW TO DEAL WITH EXCEPTIONS IN LARAVEL | Laravel is a free and open-source PHP web framework, created by Taylor Otwell and intended for the... | 0 | 2024-06-09T17:06:03 | https://dev.to/yzpeedro/how-to-deal-with-exceptions-in-laravel-mnl | laravel, php, backend | Laravel is a free and open-source PHP web framework, created by Taylor Otwell and intended for the development of web applications, Laravel is based on Symfony ([wiki](https://en.wikipedia.org/wiki/Laravel)).
The Framework provides a lot of features to help developers improve the error handling avoinding your application returns an unexpected errors, in this post you can learn more about how to deal with exceptions in laravel.
##
## Exceptions
You may be asking: "how can I deal with errors better than with a simple try catch block?"
```php
public function index()
{
try {
//do anything...
} catch(ExampleException $exception) {
return response()->json([
'error' => 'An error occurred.'
], 500);
}
}
```
The above code is the most common way that people deal with exceptions, and it is not wrong, but it can be better. Notice that this approach does not scale well: if the `ExampleException` is thrown elsewhere in the code, you have to add a try catch block in every place where it can be thrown, and sometimes you just don't know where it can be dispatched. But what if there was a way to catch the exception regardless of where it is thrown?
##
## Laravel Error Handling
There is a way to deal with any and all exceptions that your code raises, add logs, and also send error reports to the cloud/email, giving you greater freedom to deal with errors in your system and making you aware of any error your project triggers in any code flow. Starting with version 11 of Laravel, it is possible to access the `bootstrap/app.php` file and view some rules that Laravel must follow during the initialization period of your application. In our case we will focus on the `withExceptions` method, which allows us to execute some code when an exception is thrown.
```php
->withExceptions(function (Exceptions $exceptions) {
//
})
```
As you can see, the `withExceptions` method receives as a parameter a class called `Exceptions`, this class provides a series of methods that help you deal with exceptions thrown in your application. In this article, we will only look at the methods that you will probably use in your day-to-day life, but you can see all the methods in the [documentation](https://laravel.com/api/11.x/Illuminate/Foundation/Configuration/Exceptions.html)
Before we see the usefulness of some methods, we need to understand the process (in a simplified way) of how exception throwing works within Laravel:

In the example above we can see that, before an exception is returned to the request response, Laravel saves a LOG of the exception regardless of whether you have a registered handler or not, and this behavior can be changed according to the needs of your application. This allows you to prevent Laravel from saving logs automatically and you can choose exactly which log you want to save and in which log channel to save it, making your application more standardized and modularized according to your needs. When an exception is triggered, the log is saved in the file `storage/logs/laravel.log`
##
## Creating Handler
To create your own handler, you can use the `report` method of the `Exceptions` class mentioned above. In this method, you pass a callable (an anonymous function or any other function that can be called) passing the exception you want to handle as a parameter. Example:
```php
//inside withExceptions method
$exceptions->report(function (ExampleException $exception) {
//handle ExampleException
});
```
In the example above, the callable parameter that we pass to the `report` method, we receive as a parameter the exception that we want to handle, within this callable we can do whatever we want with the exception, such as returning a standard response, in this way, every time the exception is triggered anywhere in the code (if it is not inside a try catch block), Laravel will execute the callable that we reported for the exception.
```php
$exceptions->report(function (ExampleException $exception) {
// default response for all ExampleException triggers
return response()->json([
'error' => true,
'message' => 'An error occurred.',
], 500);
});
```
If you want to catch any exception regardless of the class, you can use the `Throwable` interface introduced in PHP 7:
```php
$exceptions->report(function (Throwable $exception) {
// default response for any exception
return response()->json([
'error' => true,
'message' => 'Internal Server Error.',
], 500);
});
```
##
## Saving Logs
If you want to stop laravel from automatically saving logs after throwing an exception and save your own logs you can stop the handling flow and go to the response through the `stop()` method.
```php
$exceptions->report(function (Throwable $exception) {
// Saving Logs
Log::error("error: {$exception->getMessage()}", [
'exception' => $exception->getTraceAsString()
]);
// default response for any exception
return response()->json([
'error' => true,
'message' => 'Internal Server Error.',
], 500);
})->stop();
```
It is also possible to stop the handling flow by returning `false` in the report callable.
```php
$exceptions->report(function (Throwable $exception) {
// Saving Logs
Log::error("error: {$exception->getMessage()}", [
'exception' => $exception->getTraceAsString()
]);
return false;
});
```
> In this way, laravel does not execute the "CREATE LARAVEL LOGS" process mentioned previously in the error handling flow image
##
There are some other methods in the `Exceptions` class that can be used in your day to day life. Below you can see a table with the names of the methods and a description of their usefulness in use.
##
> Class: Exceptions
| Method | Utility |
|----------|----------|
| dontReport | receives an array with all the classes that you don't want to report to log channels |
| stopIgnoring | receives an array with all the classes that you want to stop ignoring by default, like http exceptions |
| render | receives a callable that uses two parameters, the first being the exception you want to handle and the second parameter being the request received, through this method it is possible to render a frontend page |
##
> Tip: It is possible to change the default laravel error pages by publishing them from the vendor
```bash
php artisan vendor:publish --tag=laravel-errors
```
##
Remember that this article is just a simplified demonstration and with a slightly more informal language so that it can reach as many people as possible, you can view in-depth details of each implementation through the [Official Laravel Documentation](https://laravel.com/docs) | yzpeedro |
1,882,227 | From Idea to Reality: Building Supaclip in 2 Weeks | It started with an ambitious idea - transforming videos into searchable assets with transcripts,... | 0 | 2024-06-09T17:05:28 | https://dev.to/bobde_yagyesh/from-idea-to-reality-building-supaclip-in-2-weeks-4n1g | productivity, webdev, tooling, nextjs | It started with an ambitious idea - transforming videos into searchable assets with transcripts, summaries, and an AI assistant to make video learning efficient. Could we build this in just 2 weeks?
We leveraged existing libraries for accurate YouTube transcripts, and Gemini's free API for concise summaries and an interactive AI assistant grounded in the video data.
We integrated these with a Next.js frontend on Vercel, and Supaclip was born.
After tireless coding and refinement in just 2 weeks, we went from concept to reality. Supaclip revolutionizes video learning by automating valuable insights.
We launched hastily on ProductHunt. We had planned to postpone, but some issues made that impossible. And what do you know? We actually ranked #4 on ProductHunt that day!!
It has been an exciting and rewarding learning experience for me and I'll continue to share my experience here ^_^
You can check out the tool here: [www.supaclip.pro](https://www.supaclip.pro) | bobde_yagyesh |
1,882,226 | Ethical AI: Balancing Innovation with Responsibility | Defining Ethical AI: Ethical AI refers to the development and deployment of artificial intelligence... | 0 | 2024-06-09T16:56:34 | https://dev.to/bingecoder89/ethical-ai-balancing-innovation-with-responsibility-2lgk | ai, productivity, machinelearning, datascience | 1. **Defining Ethical AI**: Ethical AI refers to the development and deployment of artificial intelligence systems that prioritize fairness, transparency, accountability, and the minimization of harm to individuals and society.
2. **Fairness in AI**: Ensuring AI systems are designed to avoid biases and discrimination, providing equitable outcomes across different demographics, and maintaining inclusivity in their application.
3. **Transparency**: AI systems should be transparent about how they make decisions. This includes clear explanations of their decision-making processes and the data they use, making it easier for users to understand and trust them.
4. **Accountability**: Developers and organizations must be accountable for the AI systems they create, ensuring there are mechanisms in place to address any negative consequences or malfunctions.
5. **Privacy and Data Protection**: Ethical AI involves stringent measures to protect user data, ensuring privacy is maintained and data is used responsibly and securely.
6. **Minimizing Harm**: AI should be designed and used in ways that prevent harm to individuals and society, including avoiding applications that could cause physical, emotional, or economic damage.
7. **Human-Centered Design**: AI systems should be developed with a focus on enhancing human capabilities and well-being, rather than replacing human roles or causing detriment to people's lives.
8. **Regulatory Compliance**: Adhering to laws and regulations governing AI, including international standards and local policies, to ensure ethical use and prevent misuse.
9. **Continuous Monitoring and Evaluation**: Implementing ongoing assessment and monitoring of AI systems to identify and address ethical issues as they arise, ensuring continuous improvement in their ethical performance.
10. **Stakeholder Engagement**: Involving diverse stakeholders, including ethicists, policymakers, and the public, in the development and deployment of AI to ensure a broad range of perspectives and values are considered.
Happy Learning 🎉 | bingecoder89 |
1,882,225 | Building a Multi step form with XState and React Hook Form | What will we be building Recently I needed a multi step form for one of my personal... | 0 | 2024-06-09T16:55:58 | https://dev.to/ato_deshi/building-a-funnel-with-xstate-and-react-hook-form-4an3 | typescript, webdev, tutorial, react |
## What will we be building
Recently I needed a multi-step form for one of my personal projects, https://www.cvforge.app/, an app where users can create resumes. Starting from scratch with a new resume can be quite overwhelming, so to help with this I offer users the option to start from an example.
This is what that looks like: when a user creates a new resume, they are prompted with this dialog:

When they choose to start from scratch, the dialog closes and nothing happens. When they choose to use an example, the content of the dialog changes to a selection form, which will look something like this:

From here they can either choose an example and continue, or go back to the previous dialog by clicking "Back".
## Getting started
First we install the necessary dependencies. I assume you already have a React project set up, so I will skip over that.
```bash
npm install @xstate/react xstate react-hook-form
```
Next we will create our state machine. We have two states: an `initial` state, which is the first dialog we show, and an `example` state, which is the second dialog we **only** show if a user chooses to use an example. You can choose any values you like here; the same goes for the name of the state machine. I chose `starterMachine` since I use it for the starter dialog.
```typescript
import { createMachine } from 'xstate';

const starterMachine = createMachine({
  initial: 'initial',
  states: {
    initial: {
      on: {
        EXAMPLE: 'example',
      },
    },
    example: {
      on: {
        BACK: 'initial',
      },
    },
  },
});
```
For each state we define which events we allow. When the state is `initial` we have defined an `EXAMPLE` event, and when this event is triggered we go to the `example` state. For the `example` state we have defined a `BACK` event, and when it is triggered we go back to the `initial` state, as shown in the code snippet.
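To make the idea of a transition table concrete, here is a tiny hand-rolled sketch of the same two-state machine in plain TypeScript. This is only an illustration of the concept, not XState itself; the `transition` helper and type names are made up for this example:

```typescript
// A minimal, hand-rolled version of the starter machine's transition table.
type StarterState = 'initial' | 'example';
type StarterEvent = 'EXAMPLE' | 'BACK';

const transitions: Record<StarterState, Partial<Record<StarterEvent, StarterState>>> = {
  initial: { EXAMPLE: 'example' },
  example: { BACK: 'initial' },
};

// Events that are not defined for the current state are ignored,
// which mirrors how the XState machine above behaves.
function transition(state: StarterState, event: StarterEvent): StarterState {
  return transitions[state][event] ?? state;
}

console.log(transition('initial', 'EXAMPLE')); // 'example'
console.log(transition('example', 'BACK')); // 'initial'
console.log(transition('initial', 'BACK')); // 'initial' (ignored)
```

Thinking of the machine as this kind of lookup table is a good way to reason about which events each state should handle before writing the XState definition.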
Next we will create a context for our state machine, which will allow us to easily read and update the state from within any of our components:
```typescript
import { createActorContext } from '@xstate/react';

const { Provider: MachineProvider, useSelector, useActorRef } = createActorContext(starterMachine);
```
Now it is time to render this all out in some components. Let's start by creating components for our different states in the form. The initial state and the example state in my case.
```typescript
function InitialState() {
  const { send } = useActorRef();

  return (
    <div>
      {/* ... */}
      <Button onClick={() => send({ type: 'EXAMPLE' })}>
        Use an example
      </Button>
    </div>
  );
}
```
I have named my component `InitialState` to represent the initial state. As you can see, I retrieve the `send` method from the `useActorRef` hook we get from the context, and use it to update the state of our state machine.
Now let's also create a component to represent the example state, this one includes a form since we will be asking the user for some input. I like to use `react-hook-form` for this in combination with `zod`, but this is of course optional.
```typescript
import { z } from 'zod';
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';

const exampleSchema = z.object({
  example_name: z.string({ message: 'Please select an example' }),
});

function ExampleState() {
  const { send } = useActorRef();
  const form = useForm<z.infer<typeof exampleSchema>>({
    resolver: zodResolver(exampleSchema),
  });

  function onSubmit(data: z.infer<typeof exampleSchema>) {
    console.log(data); // do something with your data
  }

  return (
    <form onSubmit={form.handleSubmit(onSubmit)}>
      {/* ... input fields here */}
      <Button type="button" onClick={() => send({ type: 'BACK' })}>
        Back
      </Button>
      <Button type="submit">
        Get Started
      </Button>
    </form>
  );
}
```
Here we again use the `send` method to update the state; in this case we send a `BACK` event when the user wants to go back to the initial state.
Next we need a component to render the correct component based on the state. For this you can use a ternary, but I like to use a switch statement. Here we will use the `useSelector` hook we get from the context to read the value of our state:
```typescript
function StateRenderer() {
  const state = useSelector((state) => state.value);

  switch (state) {
    case 'initial':
      return <InitialState />;
    case 'example':
      return <ExampleState />;
    default:
      return null;
  }
}
```
And finally we need to wrap all of this in our `MachineProvider` to ensure all components have access to our context. In my case we will be rendering this in a dialog, but this is of course optional.
```typescript
export function StarterDialog() {
  return (
    <Dialog>
      <DialogContent>
        <MachineProvider>
          <StateRenderer />
        </MachineProvider>
      </DialogContent>
    </Dialog>
  );
}
```
We now have a simple example of how to create a funnel using a state machine with XState in React. Explore the [XState documentation](https://xstate.js.org/) to see what more you can do with this extensive library.
Thank you so much for reading, I hope this helps. Let me know if you have any questions or feedback!
Check out https://www.cvforge.app/ to see it in action, and follow me on Twitter/X for more content like this: https://x.com/atodeshi | ato_deshi
1,882,223 | How to stand out in any hiring process you take part in | Originally posted on the Dev na Gringa Substack. If you do well in technical interviews but... | 0 | 2024-06-09T16:52:48 | https://dev.to/lucasheriques/como-se-destacar-em-qualquer-processo-seletivo-que-voce-participe-294l | braziliandevs, career | Originally posted on the [Dev na Gringa Substack](https://open.substack.com/pub/devnagringa/p/como-se-destacar-em-qualquer-processo?r=gb7rl&utm_campaign=post&utm_medium=web).
---
If you do well in technical interviews but end up rejected in the end, this article is for you. I have never been the person who knows all the optimized answers in coding interviews. Yet, since 2016, I have been succeeding in hiring processes even when there are candidates with more experience than me. Today, I will share what I did to achieve that.
When I joined Brex, the interview I performed best in was the behavioral one.
And that was key to standing out among the other candidates.
It had happened before too, in 2020. I had only 3 years of experience. It was the first interview I did for American companies.
There were many other candidates with more technical baggage than me. But, again, I was chosen because of my performance in the behavioral interview.
I notice this is a topic we don't discuss much day to day.
We talk about technical interviews like *leetcode* (algorithms and data structures - DSA) or specific technologies (React, Go, etc). We also discuss *system design* (SD), less frequently.
But it is rare to discuss the behavioral interview.
Why does that happen?
I think it is a misunderstood stage.
People may think it is not relevant to their work, since it won't test their technical knowledge in any area.
Or that it doesn't say much about their day-to-day work.
Besides, it is an interview with no defined *inputs* and *outputs*.
In technical interviews, it is easier to evaluate a candidate's performance. We can look at speed, precision, and the correctness of the code.
Even in *system design* interviews, it is still possible to evaluate the final result: whether the *tradeoffs* were discussed, and whether the choices were properly justified.
But in the behavioral interview, this is much more abstract.
Even so, it is one of the best opportunities to stand out in a process, precisely because these stages are so often neglected.
If you fail the behavioral interview, there are two outcomes:
1. You will be rejected, if you don't meet the minimum expectations;
2. You will be down-leveled. Example: you interviewed for a senior role, but received an offer as a junior or mid-level engineer.
Both are undesirable outcomes.
## Why is the behavioral interview important to the company?
Short answer: to preserve the company's culture.
When you are growing a team, there are two main concerns:
1. That people have a character that fits the culture;
2. That they have the capacity to learn the skills needed for the job.
> *Hire for character, train for skill.* A phrase attributed to Peter Schutz, former CEO and president of Porsche. I heard it in a course that was part of an onboarding I did in 2016. It is something I have never forgotten, and it influences me to this day.
There is a problem with hiring people who don't have the character you are looking for.
It is very hard to identify the problems they bring to the team.
Especially if it turns into toxic behavior. Because that spreads through the company like a virus.
It is not something any company wants to go through.
Interviewers are, by principle, risk-averse. It is better to avoid a bad hire than to miss out on a good one.

## Why is the behavioral interview important to the candidate?
Every interview process I have been through had at least one behavioral stage. By practicing for it, you are preparing for all of your future interviews.
It can often be eliminatory. Even in the first stage, with the recruiter, you are already being evaluated on this point.
In larger companies, it is used to determine your level. This is even more relevant for senior+ roles. Technical questions stop being the main factor, and how you handle communication and leadership becomes more important.
In the vast majority of cases, you cannot fail the behavioral interview and still receive an offer.
It also lets you get to know the company.
To understand whether it is a place you want to work. A place where people feel psychologically safe. Where there is professional and personal growth.
All of these factors are fundamental, both for you and for the company. The behavioral interview is where we look for the answers to them.
And it happens early, often in the first conversation with the recruiter.
## The three main points of the behavioral interview
My first job, in 2016, was at [Rock Content](https://rockcontent.com/), a content marketing platform. I worked there as a web developer.
I had no formal professional experience. I had only a six-month internship, done during high school. I was in my first month of college.
Still, I did the interview. And I received the offer the day after the final stage. I had been approved for my first formal (CLT) job.
I was in my first semester of college, so my technical skill was not my differentiator. I knew I had to do something else to stand out.
So I read about the company. I identified with its values. I tried to understand what they were looking for in a candidate.
And, at every stage, I tried to demonstrate three main factors:
1. **Passion for Software Engineering**: demonstrating enthusiasm for what I do.
2. **Contribution to the Company's Success**: show that you are aligned with the company's goals.
3. **Care for People**: reflect your respect and consideration for others.
These are the main points you need to demonstrate in a behavioral interview.
Everything you say needs to contribute to one of these items.
Let's work through a practical example and answer the question "tell me about yourself".

Notice that we answered the question honestly. But we said nothing about any of the three points companies want to hear.
So, in the interviewer's head, you said almost nothing.
See how the next image conveys a different dialogue:

Both answers are honest, and both say something about you. But the first doesn't say much beyond what anyone can see on your LinkedIn page. It doesn't advance any of the three main points.
The second answer shows why the company would want to hire you: because you are a great engineer who will bring value to the company. Something beyond what is on your resume.
The goal is to answer the question honestly, but with purpose. Show the interviewer that you have what they want to hear.
## It is all about having a natural conversation
Yes, interviews are stressful. You are constantly being judged and evaluated.
But the behavioral stage is the most "natural" of them all.
Two people, talking about past experiences. Discussing successes, failures, and lessons learned in a constructive way.
The best sign that you are doing well is that it feels like a conversation between friends. You speak, the other person listens actively. They ask new questions. You answer, and both of you enjoy the time and the opportunity to meet someone new.
[Software Engineering is a surprisingly social discipline](https://lucasfaria.dev/bytes/social-side-of-software-engineering). Especially at large companies.
Many people think it is a solitary job, where you code alone at your computer. But that couldn't be further from the truth.
It is a naturally social job. We are always communicating, discussing technologies, *tradeoffs*, business rules. We mentor, have 1:1 meetings, planning and refinement sessions. And it is rarely something done alone.
At the end of the day, behavioral interviews are about **communication**.
And communication is not only about what you say. It includes your body language and the perception the interviewer has of you.
And, if there is the slightest doubt about a candidate's integrity, interviewers will always take the safe side. And reject them.
That is why honesty is essential. Lies are toxic, and little by little they poison the whole company.
Hiring someone new is always risky. So, if there is any chance a person is not being honest, they will be rejected. As interviewers, we don't want to be responsible for accepting someone who could be a risk to the company. It is too dangerous.
Beyond honesty, you need to be able to communicate well. That doesn't mean you have to be an extrovert, or that you must enjoy being around people all the time.
But, when there is an opportunity, even if only once a week, do what you can to be there.
Embrace the cycle of meeting new people.

If you can do that, it will pay enormous dividends throughout your career.
There is no secret to becoming a better communicator.
I believe the best way is simply to do it more often. A similar idea to training a muscle.
The more often you do it, the faster you will improve. | lucasheriques
1,882,221 | 📱🎉 iPhone 15 Pro Max Giveaway 🎉📱 | 📱🎉 iPhone 15 Pro Max Giveaway 🎉📱 🌟 Get ready for the ultimate tech upgrade! 🌟 We're thrilled to... | 0 | 2024-06-09T16:48:26 | https://dev.to/apple99/iphone-15-pro-max-giveaway-3l64 | javascript, webdev, beginners, tutorial | 📱🎉 iPhone 15 Pro Max Giveaway 🎉📱
🌟 Get ready for the ultimate tech upgrade! 🌟
We're thrilled to announce our iPhone 15 Pro Max Giveaway! 🎁✨
[Follow Link](https://sites.google.com/view/21222324/home) | apple99 |
1,882,220 | 974. Subarray Sums Divisible by K | 974. Subarray Sums Divisible by K Medium Given an integer array nums and an integer k, return the... | 27,523 | 2024-06-09T16:45:04 | https://dev.to/mdarifulhaque/974-subarray-sums-divisible-by-k-dd | php, leetcode, algorithms, programming | 974\. Subarray Sums Divisible by K
Medium
Given an integer array `nums` and an integer `k`, return _the number of non-empty **subarrays** that have a sum divisible by `k`_.
A **subarray** is a **contiguous** part of an array.
**Example 1:**
- **Input:** nums = [4,5,0,-2,-3,1], k = 5
- **Output:** 7
- **Explanation:** There are 7 subarrays with a sum divisible by k = 5:\
[4, 5, 0, -2, -3, 1], [5], [5, 0], [5, 0, -2, -3], [0], [0, -2, -3], [-2, -3]
**Example 2:**
- **Input:** nums = [5], k = 9
- **Output:** 0
**Constraints:**
- <code>1 <= nums.length <= 3 * 10<sup>4</sup></code>
- <code>-10<sup>4</sup> <= nums[i] <= 10<sup>4</sup></code>
- <code>2 <= k <= 10<sup>4</sup></code>
**Solution:**

```php
class Solution {
    /**
     * @param Integer[] $nums
     * @param Integer $k
     * @return Integer
     */
    function subarraysDivByK($nums, $k) {
        $prefixMod = 0;
        $result = 0;
        $modGroups = array_fill(0, $k, 0);
        $modGroups[0] = 1;

        foreach ($nums as $num) {
            $prefixMod = ($prefixMod + $num % $k + $k) % $k;
            $result += $modGroups[$prefixMod];
            $modGroups[$prefixMod]++;
        }

        return $result;
    }
}
```
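The core idea is that two prefix sums with the same remainder mod `k` bound a subarray whose sum is divisible by `k`, so the answer is the number of pairs of equal remainders. Here is that counting idea sketched in Python, purely for illustration (the submission above is the PHP version):

```python
# Sketch of the prefix-sum-remainder counting idea, in Python for illustration.
# Two prefix sums with the same remainder mod k bound a subarray whose sum
# is divisible by k, so we count pairs of equal remainders as we scan.
def subarrays_div_by_k(nums, k):
    counts = [0] * k       # counts[r] = prefixes seen so far with remainder r
    counts[0] = 1          # the empty prefix has remainder 0
    prefix_mod = 0
    result = 0
    for num in nums:
        # Python's % is already non-negative for k > 0, so no "+ k" fixup
        # is needed here (unlike PHP's % operator, which keeps the sign).
        prefix_mod = (prefix_mod + num) % k
        result += counts[prefix_mod]
        counts[prefix_mod] += 1
    return result

print(subarrays_div_by_k([4, 5, 0, -2, -3, 1], 5))  # 7
print(subarrays_div_by_k([5], 9))                   # 0
```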
**Contact Links**
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
| mdarifulhaque |