id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,896,525 | Exploring Blockchain Technology Beyond Cryptocurrencies | Introduction Blockchain technology has made groundbreaking changes in the world of... | 0 | 2024-06-22T00:31:59 | https://dev.to/kartikmehta8/exploring-blockchain-technology-beyond-cryptocurrencies-2k48 | javascript, beginners, programming, tutorial | ## Introduction
Blockchain technology has made groundbreaking changes in the world of cryptocurrencies. However, its potential extends far beyond that. This revolutionary technology has the ability to transform various industries and bring significant benefits to businesses. In this article, we will explore the advantages and disadvantages of blockchain technology, its unique features, and its potential applications beyond cryptocurrencies.
## Advantages of Blockchain Technology
One of the biggest advantages of blockchain technology is its decentralized nature. This means that there is no single point of control, making it highly secure and resistant to fraud or hacking. It also eliminates the need for intermediaries, reducing transaction costs and increasing efficiency. Another benefit is its immutability, as once data is recorded on the blockchain, it cannot be altered. This ensures transparency and trust in the shared data.
## Disadvantages of Blockchain Technology
Although blockchain technology has numerous advantages, it also comes with its own set of challenges. One of the major concerns is scalability, as the current infrastructure has limitations in handling a large number of transactions. Another disadvantage is the high energy consumption required for mining and verifying transactions, which has raised environmental concerns.
## Key Features of Blockchain Technology
Blockchain technology is known for its highly secure and immutable records. Its use of cryptographic techniques ensures the integrity of data and its distributed ledger system allows for transparent and efficient record-keeping. It also enables the execution of smart contracts, automatically executing terms and conditions in a secure and transparent manner.
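To make the "cryptographically secured, immutable records" idea concrete, here is a toy sketch (illustrative only, not a real blockchain): each block's hash covers the previous block's hash, so editing any record invalidates every block after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash({"data": data, "prev_hash": prev_hash})
    return block

def is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        # Recompute each block's hash and check the link to its predecessor
        expected = block_hash({"data": block["data"], "prev_hash": block["prev_hash"]})
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("alice pays bob", chain[-1]["hash"]))
chain.append(make_block("bob pays carol", chain[-1]["hash"]))

print(is_valid(chain))         # True
chain[1]["data"] = "tampered"  # rewrite history...
print(is_valid(chain))         # ...and validation fails: False
```

Real blockchains add consensus, proof-of-work or proof-of-stake, and distribution across many nodes on top of this hash-linking, but the tamper-evidence shown here is the core of the immutability claim.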
### Smart Contracts Example
```solidity
pragma solidity ^0.5.0;

contract SimpleContract {
    uint public value; // "public" auto-generates a getter for reading the value back

    function setValue(uint _value) public {
        value = _value;
    }
}
```
This example illustrates a basic smart contract in Solidity, the primary programming language for Ethereum-based contracts. The contract lets users set a value through `setValue` and read it back via the getter that Solidity auto-generates for the public `value` variable — a minimal demonstration of on-chain state that any participant can update and verify.
## Potential Applications Beyond Cryptocurrencies
Blockchain technology has the potential to disrupt various industries such as supply chain management, healthcare, and voting systems. It can streamline processes, increase transparency, and reduce costs in supply chain management. In healthcare, it can improve data security and accuracy, and in voting systems, it can eliminate fraud and ensure fair and transparent elections.
## Conclusion
In conclusion, blockchain technology goes beyond cryptocurrencies and has the potential to transform various industries in a positive way. Its unique features of decentralization, immutability, and security make it a powerful tool for businesses. However, it also comes with its own set of challenges that need to be addressed. With proper developments and regulations, blockchain technology can bring significant benefits and change the way we do business. | kartikmehta8 |
1,896,524 | How to create and connect to a Linux VM on Azure using a Public Key. | Table of content. Introduction Log In Step 1: Create VM Step 2: Add all Basic Parameters Step 3:... | 0 | 2024-06-22T00:22:48 | https://dev.to/phillip_ajifowobaje_68724/how-to-create-and-connect-to-a-linux-vm-on-azure-using-a-public-key-5fm1 | Table of contents.
- Introduction
- Log In
- Step 1: Create VM
- Step 2: Add all Basic Parameters
- Step 3: Review + Create VM
- Step 4: Machine Validation Passed
- Step 5: Create for VM deployment
- Step 6: Connect Linux VM Via SSH
**INTRODUCTION**: A Linux virtual machine is a virtual machine powered by the Linux operating system. It comes in different flavors, ranging from Ubuntu and Red Hat to Oracle Linux and a host of others. For the purpose of this write-up, we will work with an Ubuntu Linux virtual machine and connect to it via SSH using a public key.
The process starts by logging into the Azure Portal to begin creating your Ubuntu Linux virtual machine.
**STEP 1: CREATE VM**
Click on **Create virtual machine** upon accessing the Azure portal, as shown in the diagram below.
**STEP 2: BASIC PARAMETERS**
The next step is to add all the **Basics** parameters to your virtual machine. See the steps below:
a. Create a resource group name **(RGLinux)**, as highlighted in the diagram below.
b. Give the virtual machine a name **(LinuxVM)**
c. Pick your region **(West US 2)**
d. Select your availability option **(No infrastructure redundancy required)**
e. Select the security type (Standard for this machine)
f. From the Image tab, select your Linux server type **(Ubuntu Server)**
The dropdown arrows indicate the dialog boxes you can open to show the various options in the Instance details area.
g. Select the Size tab to pick the size (CPU and memory) for the Linux virtual machine. There is a dropdown to pick what is required for the VM. This comes with monthly billing based on the selected size.
h. Authentication type is where you select the preferred authentication protocol: either **SSH public key**, as highlighted below, or password. In the case of this Ubuntu Linux machine, we are using an SSH public key. Note: a key pair will be generated, and you will use the downloaded private key to connect to the virtual machine in your chosen terminal. Select **HTTP** to allow web access when connecting to your Linux virtual machine.
i. In the Username tab, a username needs to be created, or you can use the default **azureuser**, as highlighted in the image below.
j. You can click **Next: Disks** to proceed to the Disks tab and make the required selections for your Linux virtual machine. This can also be done after your machine has been deployed and validated.
**STEP 3: REVIEW AND CREATE**
Finally, in the Basics tab, click **Review + create** to validate your Linux VM. After this command, your virtual machine will go through a validation process.


**STEP 4. MACHINE VALIDATION**
After Review + create, if your virtual machine has no errors, you will see **Validation passed**, as shown in the diagram below.
Note: your machine will show the hourly cost against your subscription once validation has passed.

**STEP 5. CREATE FOR VM DEPLOYMENT**
Click the **Create** button for the final deployment of your Ubuntu Linux virtual machine. Here you will find the details of your Linux virtual machine before you connect to it via your preferred terminal.

**STEP 6. CONNECT VIRTUAL MACHINE**
The virtual machine can be connected to via SSH using the SSH key pair generated during its creation. The public IP address assigned when the Linux virtual machine was created is also required to connect to the VM through your preferred terminal. See the diagram below:

| phillip_ajifowobaje_68724 | |
1,896,523 | GIF to JPG: Transitioning Between Image Formats | What Are the Differences Between GIF and JPG? GIF (Graphics Interchange Format) and JPG... | 0 | 2024-06-22T00:19:23 | https://dev.to/msmith99994/gif-to-jpg-transitioning-between-image-formats-2e28 | ## What Are the Differences Between GIF and JPG?
GIF (Graphics Interchange Format) and JPG (or JPEG - Joint Photographic Experts Group) are two of the most widely used image formats, each serving different purposes and possessing distinct characteristics.
### GIF
**- Compression:** GIF uses lossless compression, which means no image data is lost, preserving the quality. However, it is limited to a palette of 256 colors, which can restrict its use for detailed images.
**- Animation:** GIF supports animations, allowing multiple frames within a single file, making it ideal for simple animated graphics.
**- Transparency:** GIF supports binary transparency, meaning a pixel can be fully transparent or fully opaque.
**- File Size:** Generally small, especially for simple graphics with limited colors.
### JPG
**- Compression:** JPG uses lossy compression, which reduces file size by discarding some image data. This can result in a loss of quality, especially at higher compression levels.
**- Color Depth:** Supports 24-bit color, displaying millions of colors, making it ideal for photographs and detailed images.
**- File Size:** Generally smaller due to lossy compression, which is beneficial for web use.
**- Transparency:** Does not support transparency.
## Where Are They Used?
### GIF
**- Web Graphics:** Ideal for simple graphics, icons, and logos with limited colors.
**- Animations:** Widely used for simple animations and short looping clips on websites and social media.
**- Emojis and Stickers:** Used in messaging apps for animated emojis and stickers.
### JPG
**- Digital Photography:** Standard format for digital cameras and smartphones due to its balance of quality and file size.
**- Web Design:** Widely used for photographs and complex images on websites because of its quick loading times.
**- Social Media:** Preferred for sharing images on social platforms due to its universal support and small file size.
**- Email and Document Sharing:** Frequently used in emails and documents for easy viewing and sharing.
## Benefits and Drawbacks
### GIF
**Benefits:**
**- Small File Size:** Effective for simple graphics with limited colors.
**- Animation Support:** Allows for simple animations within a single file.
**- Wide Compatibility:** Supported by almost all browsers and devices.
**Drawbacks:**
**- Limited Color Range:** Restricted to 256 colors, which is insufficient for detailed images.
**- Binary Transparency:** Does not support varying levels of transparency.
**- No Advanced Features:** Lacks support for complex color profiles and transparency levels.
### JPG
**Benefits:**
**- Small File Size:** Effective lossy compression reduces file sizes significantly.
**- Wide Compatibility:** Supported by almost all devices, browsers, and software.
**- High Color Depth:** Capable of displaying millions of colors, ideal for photographs.
**- Adjustable Quality:** Compression levels can be adjusted to balance quality and file size.
**Drawbacks:**
**- Lossy Compression:** Quality degrades with higher compression levels and repeated edits.
**- No Transparency:** Does not support transparent backgrounds.
**- Limited Editing Capability:** Cumulative compression losses make it less ideal for extensive editing.
## How to Convert GIF to JPG
Converting [GIF to JPG](https://cloudinary.com/tools/gif-to-jpg) can be beneficial when you need smaller file sizes and do not require transparency or animation. Here are several methods to convert GIF images to JPG:
- Using Online Tools
- Using Image Editing Software
- Command Line Tools
- Programming Libraries
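As a sketch of the programming-library route, the widely used Pillow library (`pip install pillow`) can do the conversion in a few lines. The filenames here are illustrative, and the example generates its own tiny GIF so it is self-contained; in practice you would open an existing file instead:

```python
from PIL import Image

# Create a tiny sample GIF so the example is self-contained
Image.new("RGB", (8, 8), (200, 30, 30)).save("sample.gif")

# JPEG supports neither animation nor transparency, so Pillow reads the
# first frame and we flatten to RGB before saving
with Image.open("sample.gif") as gif:
    gif.convert("RGB").save("sample.jpg", "JPEG", quality=90)
```

On the command line, ImageMagick does the same with `convert input.gif[0] output.jpg` (or `magick` in version 7), where `[0]` selects the first frame of an animated GIF.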
## The Bottom Line
GIF and JPG are both essential image formats, each suited for different purposes. GIF is excellent for simple graphics and animations with limited colors, while JPG excels in delivering high-quality images with a broader color range and smaller file sizes. Understanding the differences between GIF and JPG, and knowing how to convert between them, allows you to choose the best format for your specific needs. Whether you need the animation capabilities of GIF or the efficient, compact storage of JPG, mastering these formats ensures you can handle any digital image requirement effectively.
| msmith99994 | |
1,895,155 | Discover the Heart of Ethical Software Development: Principles, Practices, and Real-World Examples | As software developers, we hold the keys to powerful tools that shape our world. Whether it's a... | 27,798 | 2024-06-22T00:19:09 | https://dev.to/andresordazrs/discover-the-heart-of-ethical-software-development-principles-practices-and-real-world-examples-2eb | ethicaldevelopment, privacyandsecurity, softwareengineering, techethics | As software developers, we hold the keys to powerful tools that shape our world. Whether it's a mobile app simplifying daily tasks or a complex system managing vast amounts of data, the software we create impacts lives in significant ways. But with this power comes a great responsibility: to ensure that our creations are ethical and beneficial to society. In this article, we'll delve into the core principles of ethical software development, explore its importance, and examine real-world examples that underscore why ethics in technology matter more than ever.
### What is Ethical Software Development?
Ethical software development is about creating software that adheres to moral principles and societal norms. It's about ensuring that our software solutions do not cause harm and actively promote the well-being of users and society. This means making thoughtful decisions throughout the development lifecycle, from initial design to deployment, that prioritize values like privacy, transparency, fairness, and accountability.
### Why is Ethical Software Development Crucial?
The importance of ethical software development cannot be overstated. With software becoming an integral part of our daily lives, the potential for misuse and harm has grown. Here are some real-world examples that highlight the consequences of unethical practices:
[**1. Cambridge Analytica Scandal:**](https://www.bbc.com/news/topics/c81zyn0888lt)
- In 2018, it was uncovered that Cambridge Analytica had illegally harvested personal data from millions of Facebook users without their consent, using it for political advertising. This scandal brought to light significant privacy violations and a lack of transparency in data handling practices.
[**2. Uber's Greyball Tool:**](https://www.nytimes.com/2017/03/03/technology/uber-greyball-program-evade-authorities.html)
- Uber used a tool called Greyball to evade law enforcement in cities where its service was not yet approved. By collecting and using data to identify and avoid regulatory officials, Uber demonstrated an unethical manipulation of data.
[**3. Apple's iPhone Throttling:**](https://www.smartphoneperformancesettlement.com/)
- Apple admitted to slowing down older iPhone models through software updates, ostensibly to prevent unexpected shutdowns. However, the lack of transparency led many to believe it was a tactic to encourage new purchases, sparking public outrage and legal challenges.
These incidents illustrate how unethical software practices can lead to significant harm, legal repercussions, and erosion of user trust. They underscore the urgent need for ethical considerations in every aspect of software development.
### Legal and Ethical Implications
While ethical software development is guided by moral principles, it is also reinforced by legal frameworks. Regulations such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States set stringent requirements for data protection, transparency, and user consent. Non-compliance can result in severe penalties and damage to a company's reputation.
However, ethical software development goes beyond legal compliance. It involves a proactive approach to identifying and addressing potential ethical dilemmas, ensuring that the software we create respects users' rights and promotes their best interests. This proactive stance helps prevent harm and builds a foundation of trust and integrity in the tech industry.
### Core Ethical Principles
1. **Privacy:** Protecting user data is a fundamental ethical obligation. Implement robust data protection measures and ensure that users' personal information is handled with the utmost care and confidentiality.
2. **Transparency:** Communicate clearly and honestly about how software works and how user data is used. Inform users about data collection practices, the purpose of data usage, and who has access to their information.
3. **Fairness and Non-Discrimination:** Ensure that software is free from biases that can lead to discrimination. Algorithms and decision-making processes should be fair and equitable, providing equal opportunities for all users.
4. **Security:** Implement strong security measures to protect software from malicious attacks. This includes regular security audits, encryption, and secure coding practices to safeguard user data and maintain system integrity.
5. **Accountability:** Take responsibility for the software created. Be transparent about errors, fix issues promptly, and ensure that any harm caused by the software is addressed and rectified.
Ethical software development is not just a technical challenge but a moral imperative. As developers, we have the power to shape the digital world in ways that can either harm or benefit society. By adhering to ethical principles and proactively addressing potential ethical issues, we can build software that respects users' rights, fosters trust, and contributes positively to the world. This introduction sets the stage for a deeper exploration of ethical software development in subsequent articles, where we will delve into practical steps, case studies, and the challenges of maintaining ethical standards in a rapidly evolving tech landscape.
> _"Technology is best when it brings people together."_
**– Matt Mullenweg**
#### References:
[1. General Data Protection Regulation (GDPR).](https://gdpr.eu/)
[2. California Consumer Privacy Act (CCPA).](https://oag.ca.gov/privacy/ccpa)
[3. IEEE Code of Ethics.](https://www.ieee.org/about/corporate/governance/p7-8.html)
[4. ACM Code of Ethics and Professional Conduct.](https://www.acm.org/code-of-ethics)
[5. Principles for Digital Development.](https://digitalprinciples.org/)
| andresordazrs |
1,896,522 | How To Create And Connect To A Linux Virtual Machine Using A Public Key | To do this effectively, you must follow the below steps; 1. Sign up to Azure portal 2. Click on... | 0 | 2024-06-22T00:17:21 | https://dev.to/romanus_onyekwere/how-to-create-and-connect-to-a-linux-virtual-machine-using-a-public-key-53dc | virtual, microsoft, system, technology | To do this effectively, you must follow the below steps;
**1. Sign up to Azure portal**

**2. Click on Virtual Machine**

**3. Click on Create**

**4. On the dropdown, click Azure virtual machine**

**5.** Under the **Project details**, make sure the right subscription is selected and then choose to **Create new** resource group and enter LinuxVm

**6.** Under **Instance details**, enter LinuxVm for the **Virtual machine name** and choose Ubuntu Server 20.04 LTS - x64 Gen2 (free services eligible) for your **Image**. Choose No infrastructure redundancy required for **Availability options**, select Standard for **Security type**, and leave the other defaults.

**7.** Under **Administrative account**, select **Password**
**8.** In Username, enter Moderntech
**9.** Input your Password

**10.** In the **Inbound port rules** section, choose **Allow selected ports** for Public inbound ports

**11.** Review + Create
This allows validation to take place

**12.** Create.
This completes the deployment

**13.** Click on Resource

**14.** Connect to the VM via PowerShell
| romanus_onyekwere |
1,896,518 | A Pause in the Grind: Reflecting on the Journey of My Heart | Woke up at 5 am today and my boyfriend was still asleep, so I decided to take a vlog of me starting... | 0 | 2024-06-22T00:04:46 | https://dev.to/generosiie/a-pause-in-the-grind-reflecting-on-the-journey-of-my-heart-1h6o | Woke up at 5 am today and my boyfriend was still asleep, so I decided to take a vlog of me starting my day.
Made my coffee and sat at my PC. "Work na ko mahal!" ("Off to work, my love!") I started, "Soooo, this is a day in the life of a..." and I got cut off. A moment of silence as years of hard work flashed before my eyes. That's when I realized—three more months, and it's freestyle—no more manual to follow.
I looked back on my JHS yearbook. Saw that I once wanted to become a softeng. I had that mindset before I entered senhigh.

Took STEM because I thought I could self-learn programming, as I couldn't abandon my love for science. A few days in STEM, I almost aced the post-test, and so on, but my heart was telling me something else. I shouldn't be here, I want to pursue programming, so I shifted to ICT Animation. I was so hesitant at first because I swore I couldn't draw. Back in JHS, if there was an option to choose between poster and slogan, I'd always go with slogan making.
But during my stay in ICT Animation, I learned how to draw—wow, amazing. I learned from there that you can work hard at a skill to be good at it. That time, I didn't just acquire a skill but also a mindset.

My love for programming blossomed during my senhigh. I loved Java soooo much. I was able to understand every corner of it. And before I graduated from senhigh, my friends and I created a user-admin price comparison website for retail stores for our research study. Damn. It was another "work hard to be good at it" milestone for me.

I couldn't forget that two-week grind of self-learning PHP and coding. I wasn't confident enough in my HTML and CSS skills back then, but I did it with them. We all did it. We just dived in and did what we needed to do. It was a peak for me.
Looking back, my first lines of code were HTML in Notepad++. Six years ago, in grade 9, I just knew how to change the background color and italicize text using inline styling.
I entered college with, i say, good programming skills. I learned Python and also enrolled myself in a full-stack developer boot camp. I invited my friends, but they thought it was risky. I took the risk because I knew I needed to push myself to code more, so I entered the boot camp. One week in, I learned so much already. I was doing the boot camp during my first year in college. I had five classes to attend and the boot camp. I had no good sleep. A student during the day and grinding coding at night. I barely had sleep. But around my second week in the boot camp, I woke up with a stinging pain in my left abdomen, high fever, and nausea. I couldn't bear the pain, so I was rushed to the hospital. We found out I had a kidney infection. It was quarantine, so I was confined at home, in my room. I still attended the boot camp even though I was on an IV drip.

I forced my body to face my computer and do my modules in the boot camp because I didn't want to be delayed in the training. Everything in the training was scheduled. If I missed a module, it would pile up, making it harder for me to catch up. But then, my body got weaker, and the stinging pain got stronger. That's when I learned that, yes, I can work hard for something I want to achieve, but I must not ignore my health too. So then, I wrote a letter to my boot camp teacher that I quit and would be back the next batch. It was the best decision because my body healed. The kidney stone was gone, and I had a clearer mindset now. And I saw my fire of passion, what my heart is telling me.
From my second year of college to the third year, I forced myself to finish my data structures and algorithms certification which I have been crawling for pakking 4 years (but actually took me 70 days).

This strengthened my knowledge of backend programming. It was good timing because I finished it before I entered another boot camp at Accenture. I knew I promised to go back to V88, but Accenture was right in front of me, and it was the exact plan I was going for for my internship. Even though I hesitated to take it because my friends wouldn't, I still took it because it aligned with my plan.
Two months of training at Accenture boosted my confidence in my skills. I'm somewhat finished with the boot camp and an offer came to me. I'm not coding now just to practice, but I'm now coding for others to benefit from what I can do.
Apparently, I've received two job offers, but in my heart, I don't want to risk these jobs for the next 2 years. I learned to love 3D modeling from our thesis, and I once loved Java, but my heart says something else. I'm currently freelancing 2 web development projects on my way out of college, and even though it makes me cry and say I hate programming, at the end of the day, it works out, and I say, "I swear I love programming."
| generosiie | |
1,896,459 | What You Need To Know About EF Core Bulk Updates | When you're dealing with thousands or even millions of records, efficiency is king. That's where EF... | 0 | 2024-06-24T10:18:22 | https://www.milanjovanovic.tech/blog/what-you-need-to-know-about-ef-core-bulk-updates | efcore, dotnet, csharp, aspnetcore | ---
title: What You Need To Know About EF Core Bulk Updates
published: true
date: 2024-06-22 00:00:00 UTC
tags: efcore,dotnet,csharp,aspnetcore
canonical_url: https://www.milanjovanovic.tech/blog/what-you-need-to-know-about-ef-core-bulk-updates
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ef12yf3vc5ok6wp6zh1m.png
---
When you're dealing with thousands or even millions of records, efficiency is king. That's where [**EF Core bulk update**](https://www.milanjovanovic.tech/blog/how-to-use-the-new-bulk-update-feature-in-ef-core-7) capabilities come into play.
EF Core 7 introduced two powerful new methods, `ExecuteUpdate` and `ExecuteDelete`. They're designed to simplify bulk updates in your database. Both methods have their respective async overloads - `ExecuteUpdateAsync` and `ExecuteDeleteAsync`. [EF bulk updates](https://learn.microsoft.com/en-us/ef/core/saving/execute-insert-update-delete) offer significant performance advantages over traditional approaches.
However, there's an **important caveat**: these bulk operations bypass the EF Core **Change Tracker**. This disconnect can lead to unexpected behavior if you're not aware of it.
In this week's issue, we'll dive into the details of bulk updates in EF Core.
## Understanding the EF Core ChangeTracker
When you load entities from the database with EF Core, the `ChangeTracker` starts tracking them. As you update properties, delete entities, or add new ones, the `ChangeTracker` records these changes.
```csharp
using (var context = new AppDbContext())
{
// Load a product
var product = context.Products.FirstOrDefault(p => p.Id == 1);
product.Price = 99.99; // Modify a property
// At this point, the ChangeTracker knows that 'product' has been modified
// Add a new product
var newProduct = new Product { Name = "New Gadget", Price = 129.99 };
context.Products.Add(newProduct);
// Delete a product
context.Products.Remove(product);
context.SaveChanges(); // Persist all changes to the database
}
```
When you call `SaveChanges`, EF Core uses the `ChangeTracker` to determine which SQL commands to execute. This ensures that the database is perfectly synchronized with your modifications. The `ChangeTracker` acts as a bridge between your in-memory object model and your database.
If you're already familiar with how EF Core works, this serves mostly as a reminder.
## Bulk Updates and the ChangeTracker Disconnect
Now, let's focus on how [**bulk updates in EF Core**](https://www.milanjovanovic.tech/blog/how-to-use-the-new-bulk-update-feature-in-ef-core-7) interact with the `ChangeTracker` - or rather, how they don't interact with it. This design decision might seem counterintuitive, but there's a solid reason behind it: **performance**.
By directly executing SQL statements against the database, EF Core eliminates the overhead of tracking individual entity modifications.
```csharp
using (var context = new AppDbContext())
{
// Increase price of all electronics by 10%
context.Products
.Where(p => p.Category == "Electronics")
.ExecuteUpdate(
s => s.SetProperty(p => p.Price, p => p.Price * 1.10));
// In-memory Product instances with Category == "Electronics"
// will STILL have their old price
}
```
In this example, we're increasing the price of all products in the `Electronics` category by 10%. The `ExecuteUpdate` method efficiently translates the operation into a single SQL `UPDATE` statement.
```sql
UPDATE [p]
SET [p].[Price] = [p].[Price] * 1.10
FROM [Products] as [p];
```
However, if you inspect the `Product` instances that EF Core has already loaded into memory, you'll find that their `Price` properties haven't changed. This might seem surprising if you aren't aware of how bulk updates interact with the change tracker.
Everything we discussed up to this point also applies to the `ExecuteDelete` method.
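For completeness, here is a minimal sketch of `ExecuteDelete` in the same style (the `IsDiscontinued` property is invented for illustration):

```csharp
using (var context = new AppDbContext())
{
    // Delete all discontinued products with a single SQL DELETE statement,
    // without loading the entities into memory first
    context.Products
        .Where(p => p.IsDiscontinued)
        .ExecuteDelete();

    // As with ExecuteUpdate, any already-tracked Product instances are
    // unaffected in memory: the ChangeTracker is bypassed entirely
}
```

EF Core translates this into roughly `DELETE FROM [Products] WHERE [IsDiscontinued] = 1` (the exact SQL shape varies by provider).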
[**EF Core interceptors**](https://www.milanjovanovic.tech/blog/how-to-use-ef-core-interceptors) do not trigger for `ExecuteUpdate` and `ExecuteDelete` operations. If you need to track or modify bulk update operations, you can create database triggers that fire whenever a relevant table is updated or deleted. This allows you to log details and perform additional actions.
## The Problem: Maintaining Consistency
If `ExecuteUpdate` completes successfully, the changes are directly committed to the database. This is because bulk operations bypass the `ChangeTracker` and don't participate in the usual transaction managed by `SaveChanges`.
If `SaveChanges` subsequently fails due to an error (e.g., validation error, database constraint violation, connection issue), you'll be in an inconsistent state. The changes made by `ExecuteUpdate` are already persisted. Any changes made "in memory" are lost.
The most reliable way to ensure consistency is to wrap both `ExecuteUpdate` and the operations that lead to `SaveChanges` in a transaction:
```csharp
using (var context = new AppDbContext())
using (var transaction = context.Database.BeginTransaction())
{
try
{
context.Products
.Where(p => p.Category == "Electronics")
.ExecuteUpdate(
s => s.SetProperty(p => p.Price, p => p.Price * 1.10));
// ... other operations that modify entities
context.SaveChanges();
transaction.Commit();
}
catch (Exception ex)
{
// You could also let the transaction go out of scope.
// This would automatically rollback any changes.
transaction.Rollback();
// Proceed to handle the exception...
}
}
```
If `SaveChanges` fails, the transaction will be rolled back, reverting the changes made by both `ExecuteUpdate` and any other operations within the transaction. This keeps your database in a consistent state.
## Summary
EF Core bulk update features, `ExecuteUpdate` and `ExecuteDelete`, are invaluable tools for optimizing performance. By bypassing the `ChangeTracker` and executing raw SQL directly, they deliver significant speed improvements compared to traditional methods.
However, it's crucial to be mindful of the potential pitfalls associated with this approach. The disconnect between in-memory entities and the database state can lead to unexpected results if not handled correctly.
My rule of thumb is to create an explicit [**database transaction**](https://www.milanjovanovic.tech/blog/working-with-transactions-in-ef-core) when I want to make additional entity changes. We can be confident that all the changes will persist in the database or none of them will.
I hope this was helpful, and I'll see you next week.
**P.S.** Get the [source code](https://github.com/m-jovanovic/ef-bulk-updates) and try out the examples from this issue.
* * *
**P.S. Whenever you're ready, there are 3 ways I can help you:**
1. [**Modular Monolith Architecture (NEW):**](https://www.milanjovanovic.tech/modular-monolith-architecture?utm_source=dev.to&utm_medium=website&utm_campaign=cross-posting) Join 650+ engineers in this in-depth course that will transform the way you build modern systems. You will learn the best practices for applying the Modular Monolith architecture in a real-world scenario.
2. [**Pragmatic Clean Architecture:**](https://www.milanjovanovic.tech/pragmatic-clean-architecture?utm_source=dev.to&utm_medium=website&utm_campaign=cross-posting) Join 2,800+ students in this comprehensive course that will teach you the system I use to ship production-ready applications using Clean Architecture. Learn how to apply the best practices of modern software architecture.
3. [**Patreon Community:**](https://www.patreon.com/milanjovanovic) Join a community of 1,050+ engineers and software architects. You will also unlock access to the source code I use in my YouTube videos, early access to future videos, and exclusive discounts for my courses. | milanjovanovictech |
1,896,521 | Highlights from Day 1 of CascadiaJS 2024 | Introduction CascadiaJS 2024 began with a series of insightful talks that delved into... | 0 | 2024-06-21T23:59:59 | https://dev.to/agagag/unveiling-the-future-highlights-from-day-1-of-cascadiajs-2024-2kib | javascript, ai, rag, vectordatabase | ## Introduction
CascadiaJS 2024 began with a series of insightful talks that delved into various aspects of the JavaScript ecosystem. From AI-generated components to the latest in web animations, Day 1 was packed with valuable information for developers. Here’s a summary of the key presentations:
---
## AI-Generated React Server Components
**Speaker: Tejas Kumar**
**[Talk Information](https://cascadiajs.com/2024/talks/ai-generated-react-server-components)**
- **Focus on AI Types**: Rule-based, Predictive, and Generative AI.
- **Challenges of Generative AI**: Issues such as hallucinations, finite context length, and knowledge cutoffs.
- **Retrieval Augmented Generation (RAG)**: Combining data retrieval with AI-generated text to enhance the user experience.
- **Live Coding Demo**: Demonstrated practical implementation of AI-generated React components.
- **Key Insight**: "RAG enables more accurate and contextually relevant AI outputs."
Kumar's insights into AI-generated components set the stage for the next talk, which explored the benefits of using smaller, local language models.
---
## Building Useful Apps with Small, Local LLMs
**Speaker: Jacob Lee**
**[Talk Information](https://cascadiajs.com/2024/talks/building-useful-apps-with-small-local-llms)**
- **Portability and Reliability**: Advantages of smaller models over larger ones.
- **Evolution of LLMs**: Timeline from 2017 to 2024, highlighting rapid advancements.
- **LangGraph.js**: A tool for building with smaller LLMs, offering customization and streaming support.
- **Real-World Examples**: Applications built using LangGraph.js and LangChain modules.
- **Key Insight**: "Focus on productionization and architecting well, as the only constant is change."
Lee's emphasis on efficient design and architecture flowed into the next session on creating delightful user experiences.
---
## Delightful Design
**Speaker: John Pham**
**[Talk Information](https://cascadiajs.com/2024/talks/delightful-design)**
- **Design Principles**: Used by companies like Vercel, Linear, and Raycast.
- **Live Coding Session**: Tips on improving readability through whitespace, contrast, and proper element usage.
- **Accessibility**: Importance of ARIA labels and user-friendly hit sizes.
- **Key Insight**: "Delightful design is about more than aesthetics; it's about creating an inclusive and intuitive experience."
Pham's practical design tips seamlessly transitioned into a discussion on the importance of accessibility in web development.
---
## Embedding Accessibility
**Speaker: Aaron Gustafson**
**[Talk Information](https://cascadiajs.com/2024/talks/embedding-accessibility)**
- **Diverse Teams**: Create better products by filling knowledge gaps.
- **Holistic Accessibility**: Embedding accessibility from the planning phase through to deployment.
- **Market Opportunity**: Accessibility can expand market reach and reduce costs.
- **Key Insight**: "Accessibility is not a one-time effort but an ongoing journey."
Gustafson's talk underscored the ongoing nature of accessibility, setting the stage for a technical discussion on improving the performance of GIFs.
---
## GIFs Are Forever, Let's Make Them Better
**Speaker: Tyler Sticka**
**[Talk Information](https://cascadiajs.com/2024/talks/gifs-are-forever-lets-make-them-better)**
- **Modern Image Formats**: Use WebP and AVIF for higher quality and smaller file sizes.
- **Alt Text for Accessibility**: Ensuring all images and GIFs are accessible.
- **User Motion Preferences**: Providing static alternatives to respect user preferences.
- **Playback Controls**: Enhancing user control over GIF playback.
- **Key Insight**: "Modernizing GIF usage can significantly improve performance and accessibility."
Sticka's focus on modernizing GIFs led into a creative session on using JavaScript for personal projects.
---
## Jammin' with JavaScript
**Speaker: Herve Aniglo**
**[Talk Information](https://cascadiajs.com/2024/talks/jammin-with-javascript)**
- **Building a Playlist**: Step-by-step guide to creating a music playlist with JavaScript.
- **HTML and CSS**: Fundamental elements and styling for the web player.
- **JavaScript Functionality**: Adding interactivity to the playlist.
- **Key Insight**: "JavaScript can bring your personal projects to life in engaging and fun ways."
Aniglo's session on JavaScript provided a practical segue into a historical overview of JavaScript and serverless technologies.
---
## Lost and Found: A Decade of Modern JS and the Rise of Serverless Fullstack
**Speaker: Brian Leroux**
**[Talk Information](https://cascadiajs.com/2024/talks/lost-and-found-a-decade-of-modern-js-and-the-rise-of-serverless-fullstack)**
- **JavaScript Evolution**: Key milestones from 1995 to the present.
- **Serverless Technologies**: Development and impact on modern backend solutions.
- **Focus on Core Business**: Outsource non-essential functions to improve efficiency.
- **Key Insight**: "Serverless is not a specific technology but a superpower that allows focusing on core business values."
Leroux's historical insights set the stage for addressing modern security challenges with AI.
---
## Navigating AI Security
**Speaker: Logan Gore**
**[Talk Information](https://www.linkedin.com/in/logan-gore/)**
- **Security Goals**: Securing applications and protecting users from abuse.
- **AI-Powered Threats**: Increased sophistication and scalability of phishing and social engineering attacks.
- **Mitigation Strategies**: Phishing-resistant authentication, user education, and advanced bot detection.
- **Key Insight**: "With the rise of AI, attackers have new incentives and better tooling, making robust security measures essential."
Gore's focus on AI security led to a discussion on optimizing React performance without traditional tools.
---
## React Without DevTools
**Speaker: Aiden Bai**
**[Talk Information](https://cascadiajs.com/2024/talks/react-without-devtools)**
- **Performance Optimization**: Challenges with current methods like DevTools and useMemo().
- **Million Lint**: A tool for code optimization and linting.
- **Decoupling Performance Data**: Separating performance data from code for better insights.
- **Key Insight**: "Automated tools like Million Lint can streamline React performance optimization."
Bai's technical insights on React performance set the stage for a deep dive into web animations.
---
## Return to Web Animation Wonderland
**Speaker: Rachel Lee Nabors**
**[Talk Information](https://cascadiajs.com/2024/talks/return-to-web-animation-wonderland)**
- **Web Animations API**: Using JavaScript-based animations to replace CSS animations.
- **Scroll-Driven and View-Driven Animations**: Leveraging the latest web technologies for interactive content.
- **Practical Examples**: Implementing advanced animations using modern web APIs.
- **Key Insight**: "Web animations have evolved, offering more tools and techniques for creating dynamic user experiences."
Nabors' exploration of web animations transitioned into a session on integrating AI with existing development tools.
---
## Your AI Needs an Assistant
**Speaker: Josh Goldberg**
**[Talk Information](https://cascadiajs.com/2024/talks/your-ai-needs-an-assistant)**
- **Gartner Hype Cycle**: Understanding the stages of AI adoption.
- **Combining AI with Dev Tools**: Using tools like Sourcegraph and LangChain for better development cycles.
- **Dynamic and Static Analysis**: Enhancing AI outputs with thorough analysis and testing.
- **Key Insight**: "AI is a tool, not a feature; combining it with existing dev tools can create more productive cycles."
Goldberg's session concluded Day 1 with a focus on integrating AI to enhance productivity and development.
---
## Conclusion
Day 1 of CascadiaJS 2024 highlighted the rapid advancements in AI, the importance of accessibility, and the evolving landscape of JavaScript and serverless technologies. The talks provided valuable insights and practical tools for developers to enhance their skills and build better applications. | agagag |
1,896,519 | How do I prepare for low level system design? | Preparing for low-level system design interviews requires a structured approach to mastering... | 0 | 2024-06-21T23:51:41 | https://dev.to/muhammad_salem/how-do-i-prepare-for-low-level-system-design-114g | Preparing for low-level system design interviews requires a structured approach to mastering object-oriented design (OOD) and software design principles. Here is a comprehensive guide to help you prepare effectively:
### 1. Understand the Basics of OOD
- **Object-Oriented Principles**: Learn the four main principles of OOD:
- **Encapsulation**: Keeping the data (attributes) and the code (methods) that manipulates the data together.
- **Abstraction**: Hiding complex implementation details and showing only the necessary features of an object.
- **Inheritance**: Mechanism to create a new class using the properties and methods of an existing class.
- **Polymorphism**: Ability of different classes to be treated as instances of the same class through inheritance.
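These four principles can be seen together in one small Java sketch (the `Shape`, `Circle`, and `Square` classes are invented for illustration):

```java
// The four OOD principles in one tiny sketch: Shape encapsulates its
// state, Circle and Square inherit from it, area() is the abstraction,
// and the polymorphic call picks the right implementation at runtime.
abstract class Shape {
    private final String name; // encapsulation: state hidden behind methods

    Shape(String name) { this.name = name; }

    String name() { return name; }

    abstract double area();    // abstraction: the "what" without the "how"
}

class Circle extends Shape {   // inheritance
    private final double r;

    Circle(double r) { super("circle"); this.r = r; }

    double area() { return Math.PI * r * r; }
}

class Square extends Shape {
    private final double side;

    Square(double side) { super("square"); this.side = side; }

    double area() { return side * side; }
}
```

Calling `area()` through a `Shape` reference dispatches to the subclass implementation, which is polymorphism in action.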
### 2. Learn SOLID Principles
- **Single Responsibility Principle (SRP)**: A class should have one and only one reason to change.
- **Open/Closed Principle (OCP)**: Classes should be open for extension but closed for modification.
- **Liskov Substitution Principle (LSP)**: Objects of a superclass should be replaceable with objects of a subclass without affecting the functionality.
- **Interface Segregation Principle (ISP)**: Many client-specific interfaces are better than one general-purpose interface.
- **Dependency Inversion Principle (DIP)**: Depend on abstractions, not on concretions.
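As a quick illustration of the Dependency Inversion Principle, here is a hedged Java sketch: the high-level service depends on an abstraction, not on a concrete implementation (the `Notifier`, `EmailNotifier`, and `ReportService` names are invented for illustration):

```java
// DIP: ReportService depends on the Notifier abstraction, so the
// concrete delivery mechanism (email, SMS, ...) can be swapped freely.
interface Notifier {
    String send(String message);
}

class EmailNotifier implements Notifier {
    public String send(String message) {
        return "EMAIL: " + message; // stand-in for real email delivery
    }
}

class ReportService {
    private final Notifier notifier; // injected abstraction

    ReportService(Notifier notifier) {
        this.notifier = notifier;
    }

    String publish(String report) {
        return notifier.send(report);
    }
}
```

Swapping `EmailNotifier` for another `Notifier` requires no change to `ReportService`, which is the point of the principle.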
### 3. Study Common Design Patterns
- **Creational Patterns**: Singleton, Factory, Abstract Factory.
- **Structural Patterns**: Adapter, Facade, Decorator.
- **Behavioral Patterns**: Strategy, Observer.
- **Resources**: "Head First Design Patterns" by Eric Freeman and Elisabeth Robson.
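As a tiny example from the behavioral group, here is a hedged Java sketch of the Observer pattern (the `EventBus` and `LoggingObserver` names are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Observer pattern: observers register with a subject and are
// notified whenever it publishes an event.
interface Observer {
    void update(String event);
}

class EventBus {
    private final List<Observer> observers = new ArrayList<>();

    void subscribe(Observer o) { observers.add(o); }

    void publish(String event) {
        for (Observer o : observers) o.update(event);
    }
}

class LoggingObserver implements Observer {
    final List<String> seen = new ArrayList<>(); // records received events

    public void update(String event) { seen.add(event); }
}
```

The subject knows nothing about its observers beyond the `Observer` interface, which keeps the two sides loosely coupled.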
### 4. Practice Common System Design Problems
- **Design a Parking Lot**
- **Design a Library Management System**
- **Design an Online Bookstore**
- **Design a Social Media Platform**
- **Design a Chess Game**
### 5. Brush Up on UML Diagrams
- **Class Diagrams**: Represent classes and their relationships.
- **Sequence Diagrams**: Show how objects interact in a particular sequence.
- **Use Case Diagrams**: Illustrate system functionalities and user interactions.
- **Activity Diagrams**: Represent workflows of stepwise activities.
### 6. Implement Small Projects
- Create small projects that require careful design:
- **Inventory Management System**
- **Employee Management System**
- **E-commerce Application**
- Focus on how you structure your classes, relationships, and interactions between components.
### 7. Code Reviews and Refactoring
- Regularly review and refactor your code.
- Learn to identify code smells and apply refactoring techniques.
- Resources: "Refactoring: Improving the Design of Existing Code" by Martin Fowler.
### 8. Mock Interviews and Practice
- Participate in mock interviews with peers or mentors.
- Use platforms like Pramp, Interviewing.io, or LeetCode Discuss to practice system design problems.
- Record your solutions and get feedback to improve.
### 9. Study Real-World Systems
- Analyze the design of popular open-source projects on GitHub.
- Read engineering blogs and case studies from tech companies.
### 10. Review and Revise
- Regularly review your notes and solutions.
- Stay updated with new design patterns and architectural styles.
### Sample Problem Walkthrough: Design a Parking Lot
#### Step 1: Gather Requirements
- Different types of parking spots: regular, compact, handicapped.
- Payment methods and fee calculation.
- Entry and exit points.
- Tracking available spots.
#### Step 2: Identify Main Components
- **ParkingLot**: Manages parking spots, entry, and exit.
- **ParkingSpot**: Represents a single parking spot.
- **Vehicle**: Base class for vehicles.
- **Ticket**: Manages parking tickets.
- **Payment**: Handles payment processing.
#### Step 3: Define Classes and Relationships
```java
class ParkingLot {
List<ParkingSpot> spots;
Map<String, Ticket> activeTickets;
Ticket parkVehicle(Vehicle vehicle);
void exitVehicle(Ticket ticket);
}
class ParkingSpot {
String spotId;
boolean isOccupied;
Vehicle currentVehicle;
SpotType spotType; // Enum for regular, compact, handicapped
boolean assignVehicle(Vehicle vehicle);
void removeVehicle();
}
class Vehicle {
String licensePlate;
VehicleType vehicleType; // Enum for car, truck, bike
}
class Ticket {
String ticketId;
Date entryTime;
Date exitTime;
Vehicle vehicle;
double calculateFee();
}
class Payment {
String ticketId;
double amount;
boolean processPayment();
}
```
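The skeleton above intentionally omits method bodies. As a minimal runnable sketch of one piece — fee calculation on `Ticket` — here is one possible implementation, assuming a flat hourly rate and round-up billing (both the rate and the rounding policy are invented for illustration):

```java
import java.time.Duration;
import java.time.LocalDateTime;

// Minimal Ticket with a flat hourly fee; a real system would vary the
// rate by spot type and vehicle type.
class Ticket {
    final LocalDateTime entryTime;
    final LocalDateTime exitTime;
    static final double HOURLY_RATE = 2.0; // assumed flat rate

    Ticket(LocalDateTime entryTime, LocalDateTime exitTime) {
        this.entryTime = entryTime;
        this.exitTime = exitTime;
    }

    double calculateFee() {
        long minutes = Duration.between(entryTime, exitTime).toMinutes();
        long hours = (minutes + 59) / 60; // bill any partial hour as a full hour
        return hours * HOURLY_RATE;
    }
}
```

Isolating the fee policy in one method makes it easy to unit-test and to replace later (e.g., with tiered pricing) without touching the rest of the design.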
#### Step 4: Design UML Diagram
- Create class and sequence diagrams to visualize the interaction and design.
#### Step 5: Implement and Test
- Write unit tests for each class.
- Simulate parking, exiting, and payment scenarios.
By following this structured approach and continually practicing, you can effectively prepare for low-level system design interviews and improve your ability to design robust software systems.
To get the most value from practicing low-level design problems, especially with a resource like "Grokking the Object-Oriented Design Interview," you should follow a methodical approach. Here are tips and tricks for effective low-level design practices:
### 1. Understand the Problem Thoroughly
- **Clarify Requirements**: Before diving into the solution, ensure you understand the problem statement and clarify any ambiguities.
- **Identify Key Features**: Break down the problem into essential features and functionalities.
### 2. Break Down the Design
- **Modular Design**: Divide the problem into smaller, manageable modules or components.
- **Single Responsibility Principle**: Ensure each class or module has a single responsibility or purpose.
### 3. Use Design Patterns Wisely
- **Appropriate Patterns**: Identify and apply suitable design patterns for each component. For example, use the Singleton pattern for managing a single instance of a class.
- **Pattern Combinations**: Sometimes, combining multiple patterns can lead to an optimal solution.
### 4. Think Through Edge Cases
- **Robust Design**: Consider edge cases and how your design will handle them. This includes error handling, boundary conditions, and performance considerations.
- **Scalability and Extensibility**: Design with scalability in mind, allowing for easy extension or modification.
### 5. Visualize with UML
- **Class Diagrams**: Create class diagrams to visualize the relationships between different components.
- **Sequence Diagrams**: Use sequence diagrams to understand object interactions over time.
- **Use Case Diagrams**: Map out user interactions and system functionalities.
### 6. Implement and Test
- **Write Clean Code**: Follow best practices for clean and maintainable code. Use meaningful names, consistent formatting, and document your code.
- **Unit Testing**: Write unit tests for each class and component. Test both typical and edge cases.
- **Refactor**: Continually improve your design and code. Refactor for better performance, readability, and maintainability.
### 7. Review Solutions Critically
- **Analyze Sample Solutions**: Study the solutions provided in your resource. Understand why certain design choices were made and how they address the problem requirements.
- **Compare and Contrast**: Compare your solution with the sample. Identify areas of improvement or alternative approaches.
- **Feedback Loop**: Seek feedback from peers, mentors, or online forums. Use constructive criticism to refine your design skills.
### 8. Practice Regularly and Variedly
- **Diverse Problems**: Practice a variety of design problems. Each problem will help you understand different aspects of OOD.
- **Consistency**: Set a regular practice schedule. Consistent practice will reinforce concepts and improve your skills.
- **Time Yourself**: Simulate interview conditions by timing your design process. This helps improve your efficiency and ability to think under pressure.
### 9. Document Your Process
- **Design Journal**: Keep a journal of your design problems, solutions, and learnings. Document the steps you took, the design patterns used, and any challenges faced.
- **Reflect and Iterate**: Periodically review your journal to reflect on your progress. Identify patterns in your mistakes and areas where you can improve.
### 10. Build Projects
- **Real-World Applications**: Apply your design skills to small projects or contribute to open-source projects. This gives practical experience and reinforces your learning.
- **Incremental Complexity**: Start with simple projects and gradually take on more complex systems. This builds confidence and competence.
### Sample Approach Using Grokking OOD Problems
1. **Read and Analyze**: Carefully read the problem statement and break it down into core requirements.
2. **Plan**: Sketch a high-level design and decide on the classes, interfaces, and relationships.
3. **Detail Design**: Flesh out the details of each component, considering design principles and patterns.
4. **Code**: Implement the design in code, adhering to clean coding standards.
5. **Test**: Write tests to ensure your implementation meets the requirements and handles edge cases.
6. **Review**: Compare your solution with the provided solution, noting differences and improvements.
7. **Refine**: Refactor your design and code based on insights gained from the review.
### Example Problem Walkthrough: Design a Library Management System
#### Step 1: Clarify Requirements
- **Core Features**: Book catalog, member management, borrowing and returning books, fee calculation.
- **Key Entities**: Books, Members, Librarians, Borrowing Records.
#### Step 2: Identify Classes and Responsibilities
- **Book**: Title, Author, ISBN, Status (available, borrowed).
- **Member**: Member ID, Name, Contact Details, Borrowing Limit.
- **Librarian**: Manage books, Assist members.
- **BorrowingRecord**: Book, Member, Borrow Date, Return Date, Fee.
#### Step 3: Apply Design Patterns
- **Factory Pattern**: For creating instances of Books, Members.
- **Observer Pattern**: To notify members of due dates.
- **Singleton Pattern**: For a single instance of Library.
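A minimal sketch of the Singleton idea for the `Library` could look like this (using the lazy holder idiom; a production version might prefer an enum, and the empty `Library` body is a placeholder):

```java
// Singleton: one shared Library instance, created lazily and
// thread-safely on first use via the holder idiom.
class Library {
    private Library() {} // private constructor prevents outside instantiation

    private static class Holder {
        static final Library INSTANCE = new Library();
    }

    static Library getInstance() {
        return Holder.INSTANCE;
    }
}
```

Every caller of `Library.getInstance()` receives the same object, which is exactly the guarantee the pattern provides.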
#### Step 4: Create UML Diagrams
- Draw class and sequence diagrams to visualize the structure and interactions.
#### Step 5: Implement and Test
- Code the classes and write unit tests to validate the functionality.
#### Step 6: Review and Refine
- Compare with the solution from Grokking OOD, identify improvements, and refine your design.
By following these steps and continually practicing, you will develop strong low-level design skills and be well-prepared for your interviews.
Here's a step-by-step thought process for converting system requirements into an object-oriented design (OOD):
**1. Analyze the Requirements:**
* **Identify Nouns and Verbs:** Start by meticulously examining the system requirements document. Look for nouns that represent entities or data and verbs that describe actions or functionalities. These often map to classes and their methods in your OOD.
* **Categorize Requirements:** Group similar requirements together. This helps identify objects with common functionalities and potential relationships between them.
**2. Identify Objects and Classes:**
* **From Nouns to Classes:** Look at the identified nouns and consider:
* Do they represent real-world entities with attributes and behavior? (e.g., User, Product)
* Do they hold data that needs protection and controlled access? (e.g., Account)
* **Nouns as Collaborators:** Some nouns might not be central entities but rather collaborators. For instance, "Order" might collaborate with "Product" and "Customer" classes.
**3. Define Class Attributes and Responsibilities:**
* **Identify Attributes:** For each class, define the data it holds. These become the class attributes. Consider what information is essential for the class to function effectively.
* **Define Methods:** Think about the actions the class can perform based on the verbs associated with it in the requirements. These translate to methods within the class.
**4. Establish Relationships Between Objects:**
* **Identify Interactions:** Review the requirements again to see how objects interact with each other. Look for associations, dependencies, or message passing between them.
* **Relationships:** These interactions can translate to different relationships between classes like:
* **Has-a:** One object contains another as a part (e.g., Car has-a Engine)
* **Uses-a:** One object utilizes the services of another (e.g., Order uses-a Product)
* **Is-a (Inheritance):** One object inherits properties and behavior from another (e.g., Manager is-a Employee)
**5. Refine and Iterate:**
* **Review and Refine:** Once you have a preliminary class structure and relationships, go back to the requirements and ensure your design addresses them effectively.
* **Consider Trade-offs:** There might not be a single perfect solution. Think about potential trade-offs between simplicity, efficiency, and future maintainability.
* **Iterate and Improve:** Be prepared to refine your design as you go. This is an iterative process, and you might need to adjust class structures or relationships based on your analysis.
**Additional Tips:**
* **Use UML Diagrams (Optional):** Consider using UML class diagrams to visually represent your object model. It can improve clarity and communication of your design.
* **Start High-Level, Refine Gradually:** Don't get bogged down in intricate details initially. Start with a high-level overview of classes and relationships, then progressively add details as you refine.
* **Focus on Reusability:** Think about how your design can be reused or extended in the future. This promotes maintainability and reduces the need for major rework as requirements evolve.
By following these steps, you can systematically translate system requirements into a well-structured object-oriented design that effectively addresses the needs of the system.
Let's imagine we're designing a system for a simple online library. Here's how I, as a professional software engineer, would approach converting the requirements into an object-oriented design:
**1. Analyze the Requirements:**
* **Requirements Snippet:** "The system shall allow users to search for books by title or author. Users can borrow books and return them. The system keeps track of loaned books and due dates."
* **Identified Nouns and Verbs:**
* Nouns: User, Book, Loan
* Verbs: Search, Borrow, Return, Track
**2. Identify Objects and Classes:**
* **Class Candidates:** Based on the nouns, we have strong candidates for classes like User, Book, and Loan.
**3. Define Class Attributes and Responsibilities:**
* **User Class:**
* Attributes: Name, ID (unique identifier)
* Methods: SearchBook(title, author) - to search for books
* **Book Class:**
* Attributes: Title, Author, ISBN (unique identifier)
* Methods: GetLoanStatus() - to check if the book is loaned out
* **Loan Class:**
* Attributes: UserBorrowing (User object), BookBorrowed (Book Object), DueDate
* Methods: MarkAsReturned() - to update loan status upon return
**4. Establish Relationships Between Objects:**
* **Interactions:** Users search for Books, Users borrow Books, Loans track borrowed Books and their due dates.
* **Relationships:**
* User - Borrows -> Loan (One user can have many loans)
* Book - BelongsTo -> Loan (One book can be in one loan at a time)
* Loan - Contains -> User, Book (A loan links a User and a Book)
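The classes and relationships above can be sketched as a minimal runnable Java snippet (field visibility and due-date handling are simplified for illustration):

```java
import java.time.LocalDate;

class Book {
    final String title, author, isbn;
    boolean loanedOut = false;

    Book(String title, String author, String isbn) {
        this.title = title; this.author = author; this.isbn = isbn;
    }

    boolean getLoanStatus() { return loanedOut; }
}

class User {
    final String name, id;

    User(String name, String id) { this.name = name; this.id = id; }
}

class Loan {
    final User userBorrowing;   // Loan - Contains -> User
    final Book bookBorrowed;    // Loan - Contains -> Book
    final LocalDate dueDate;
    boolean returned = false;

    Loan(User user, Book book, LocalDate dueDate) {
        this.userBorrowing = user;
        this.bookBorrowed = book;
        this.dueDate = dueDate;
        book.loanedOut = true;  // borrowing marks the book unavailable
    }

    void markAsReturned() {
        returned = true;
        bookBorrowed.loanedOut = false;
    }
}
```

Note how the "one book can be in one loan at a time" rule is modeled by the `loanedOut` flag; a fuller design would enforce it in a `Library` class rather than in the `Loan` constructor.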
**5. Refine and Iterate:**
* **Reviewing the Design:** This initial design seems to capture the core functionalities. We can represent the relationships using a UML class diagram for better visualization.
* **Additional Considerations:** We might consider adding attributes like "genre" to the Book class or implementing a shopping cart functionality using a separate class. These can be incorporated as the design evolves.
This is a simplified example, but it demonstrates the thought process involved in converting requirements into an object-oriented design. As a professional software engineer, I would continuously refine this design based on more detailed requirements, considering factors like scalability, error handling, and potential future needs of the library system. | muhammad_salem | |
1,896,517 | Project Stage-3: Reflections and Final Thoughts on Implementing AFMV in GCC | As the summer project for SPO600 comes to a close, it’s time to reflect on the journey of... | 0 | 2024-06-21T23:50:08 | https://dev.to/yuktimulani/project-stage-3-reflections-and-final-thoughts-on-implementing-afmv-in-gcc-45gp | As the summer project for SPO600 comes to a close, it’s time to reflect on the journey of implementing Automatic Function Multi-Versioning (AFMV) for AArch64 systems in GCC. This final blog post summarizes the accomplishments, challenges, and learnings from this project, and provides insights into the future work that could enhance this feature further.
**Project Overview and Contributions**
Over the course of this project, my primary focus was on extending GCC to support AFMV, particularly with features like SVE2 for AArch64 processors. AFMV aims to provide performance optimizations by automatically creating function clones optimized for different processor features, enabling the software to dynamically choose the most efficient version at runtime.
**Key Contributions:**
- Enhanced the GCC’s `multiple_target.cc` file to recognize and handle SVE2 attributes effectively.
- Integrated advanced GCC flags to support the SVE2 instruction set.
- Addressed and resolved critical errors related to attribute recognition in GCC.
**Code Location and Integration:**
- Repository Branch: The changes were committed to the `2024-S-FMV-automatic-function-cloning` branch in the class Git repository.
- File Location: The specific changes were made in `gcc/gcc/multiple_target.cc`.
**Integration with Other Branches:**
- The `2024-S-FMV-automatic-function-cloning` branch was regularly rebased with the main development branch to ensure compatibility and to integrate the latest updates from other team members working on related tasks.
- Collaboration with other contributors was key to integrating AFMV with various diagnostic and function pruning tools being developed concurrently.
**What Works and Achievements**
- Successful SVE2 Integration: By using advanced GCC flags, I was able to enable SVE2 support, which allows for greater optimization potential in function cloning and selection.
- Improved Attribute Handling: The modifications made to `multiple_target.cc` ensure that attributes like target("sve2") are now valid and properly handled by the compiler.
- Enhanced Performance Diagnostics: The project also contributed to improved diagnostic outputs that help developers understand the effects of function multi-versioning and optimize their code more effectively.
**Limitations and What Needs Improvement:**
- Limited Testing Scope: Although the project saw significant progress, the testing was limited to specific cases and hardware. More extensive testing across diverse AArch64 systems is needed.
- Error Handling and Debugging: While some critical errors were resolved, the handling of complex attribute combinations still requires refinement. Future work should focus on improving the robustness of error handling.
- Documentation and Usability: Although I updated the documentation to some extent, it needs to be more comprehensive to assist new users and developers in understanding and utilizing the AFMV feature effectively.
**What Was Not Tested:**
- Cross-Platform Performance: The testing primarily focused on AArch64, and the performance and functionality on other platforms like x86_64 need to be evaluated.
- Complex Use Cases: The project didn’t extensively cover complex real-world applications that could benefit from AFMV, such as those requiring intricate multi-core or SIMD optimizations.
**Reflections on the Project and Course**
This project has been an incredible learning experience, providing deep insights into the workings of GCC and the intricacies of compiler design. Here are some key takeaways:
- Understanding Compiler Internals: The hands-on experience with GCC’s codebase was invaluable. It enhanced my understanding of how compilers manage attributes, perform optimizations, and generate machine-specific code.
- Challenges of Multi-Versioning: Implementing AFMV highlighted the complexities involved in function cloning and optimization, especially when dealing with diverse processor features and instruction sets.
- Collaborative Development: Working in a team environment and integrating code from multiple branches taught me the importance of version control, code reviews, and effective communication.
- Balancing Performance and Compatibility: The project underscored the delicate balance between optimizing for the latest hardware features and maintaining compatibility with older systems.
**Next Steps:**
Moving forward, I plan to continue refining the AFMV implementation, focusing on:
- Extending testing to cover more platforms and use cases.
- Improving documentation to facilitate broader adoption and usage.
- Collaborating with the GCC community to integrate these changes into the main GCC branch.
This project has been a rewarding journey, filled with both challenges and achievements. I am grateful for the opportunity to contribute to such an important tool in the open-source community and look forward to continuing this work in the future.
For those interested, the code and changes are available in the `2024-S-FMV-automatic-function-cloning` branch in the class Git repository.
Until next time, happy coding 🚀!! | yuktimulani | |
1,896,516 | Any opinions on this new AI powered mutation testing tool? | I read a Medium article about this software testing tool that uses AI. I’m curious to know if someone... | 0 | 2024-06-21T23:44:02 | https://dev.to/devlevy/any-opinions-on-this-new-ai-powered-mutation-testing-tool-lho | opensource, discuss, webdev, python | I read a [Medium](https://medium.com/codeintegrity-engineering/transforming-qa-mutahunter-and-the-power-of-llm-enhanced-mutation-testing-18c1ea19add8) article about this software testing [tool](https://github.com/codeintegrity-ai/mutahunter) that uses AI. I’m curious to know if someone with deep experience in software testing would use a tool like this. | devlevy |
1,896,515 | I just created a simple & stupid notepad (Looking for advice) | Salam (Peace) everyone! I've recently started learning C# coding. After mastering the basics and... | 0 | 2024-06-21T23:39:22 | https://dev.to/dzt/i-just-created-a-simple-stupid-notepad-looking-for-advice-2gnh | csharp, help, programming, beginners | Salam (Peace) everyone! I've recently started learning C# coding. After mastering the basics and console apps, I've moved on to GUI using WinForms to gain more experience and develop my skills in C#. I hope someone here can assist me in developing this notepad application. I'm facing several issues, particularly with the find & replace feature. I've tried numerous times, but unfortunately, it's not working for me :( Your help would be greatly appreciated. Thank you!
GitHub: https://github.com/DZ-T/Notepad
I just coded a new version with better functions. I faced a problem with find & replace, so I deleted that code until I can find a solution. Thanks again. Let's make this Notepad less "stupid" and more awesome together!
| dzt |
1,896,514 | GIF to PNG: Transitioning Between Image Formats | What Are the Differences Between GIF and PNG? GIF (Graphics Interchange Format) and PNG... | 0 | 2024-06-21T23:33:15 | https://dev.to/msmith99994/gif-to-png-transitioning-between-image-formats-239 | ## What Are the Differences Between GIF and PNG?
GIF (Graphics Interchange Format) and PNG (Portable Network Graphics) are two widely-used image formats, each with unique characteristics that cater to different needs. Understanding these differences helps in deciding when and why to use each format, and how to convert between them.
### GIF
**- Compression:** GIF uses lossless compression, ensuring no loss of image quality. However, it is limited to a palette of 256 colors, which can restrict its use for detailed images.
**- Animation:** GIF supports animations, allowing multiple frames within a single file, making it ideal for simple animated graphics.
**- Transparency:** GIF supports binary transparency, meaning a pixel can be fully transparent or fully opaque.
**- File Size:** Generally small, especially for simple graphics with limited colors.
### PNG
**- Compression:** PNG uses lossless compression, preserving all image data without loss of quality. It supports a broader color range than GIF.
**- Color Depth:** PNG supports 24-bit color and 8-bit transparency (alpha channel), allowing for millions of colors and varying levels of transparency.
**- Animation:** PNG does not natively support animation, although the APNG (Animated PNG) extension can handle animations.
**- Transparency:** Supports advanced transparency with varying levels of opacity, making it suitable for complex images requiring clear backgrounds or overlays.
**- File Size:** Larger than GIF for the same image due to higher color depth and quality.
## Where Are They Used?
### GIF
**- Web Graphics:** Ideal for simple graphics, icons, and logos with limited colors.
**- Animations:** Widely used for simple animations and short looping clips on websites and social media.
**- Emojis and Stickers:** Used in messaging apps for animated emojis and stickers.
### PNG
**- Web Graphics:** Preferred for logos, icons, and images requiring high quality and transparency.
**- Digital Art:** Used for images with sharp edges, text, and transparent elements.
**- Screenshots:** Commonly used for screenshots to capture exact screen details without quality loss.
**- Print Media:** Employed where high quality and lossless compression are necessary.
## Advantages and Disadvantages
### GIF
**Advantages:**
**- Small File Size:** Effective for simple graphics with limited colors.
**- Animation Support:** Allows for simple animations within a single file.
**- Wide Compatibility:** Supported by almost all browsers and devices.
**Disadvantages:**
**- Limited Color Range:** Restricted to 256 colors, which is insufficient for detailed images.
**- Binary Transparency:** Does not support varying levels of transparency.
**- No Advanced Features:** Lacks support for complex color profiles and transparency levels.
### PNG
**Advantages:**
**- Lossless Compression:** Maintains original image quality without any loss.
**- Wide Color Range:** Supports millions of colors, suitable for detailed images.
**- Advanced Transparency:** Allows for varying levels of opacity, making it ideal for complex images.
**- Ideal for Editing:** No quality loss through multiple edits and saves.
**Disadvantages:**
**- Larger File Sizes:** Can be larger than GIF files due to higher color depth and quality.
**- No Native Animation Support:** Does not support animations natively.
**- Browser Compatibility:** While widely supported, PNG files can be less efficient for large images on older systems.
## How to Convert GIF to PNG
Converting [GIF to PNG](https://cloudinary.com/tools/gif-to-png) can be beneficial when you need higher quality, a broader color range, or advanced transparency. Here are several methods to convert GIF images to PNG:
### Conversion Methods
**- Online Tools:** Websites like Convertio and Online-Convert allow you to upload GIF files and download the converted PNG files.
**- Image Editing Software:** Software like Adobe Photoshop and GIMP support both GIF and PNG formats. Open your GIF file and save it as PNG.
**- Command-Line Tools:** Command-line tools like ImageMagick can perform the conversion in a single command.
**- Programming Libraries:** Programming libraries such as Python's Pillow can be used to automate the conversion process in applications.
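As an illustration of the library approach, here is a minimal Pillow sketch (the `gif_to_png` function name is ours, not part of Pillow) that converts a GIF's first frame to PNG, mapping GIF's binary palette transparency onto PNG's alpha channel:

```python
from PIL import Image

def gif_to_png(gif_path, png_path):
    # Open the GIF; for animated GIFs this reads the first frame
    with Image.open(gif_path) as im:
        # Converting to RGBA turns palette-based transparency
        # into a full alpha channel before saving as PNG
        im.convert("RGBA").save(png_path, format="PNG")
```

For an animated GIF you would instead iterate over the frames (Pillow's `ImageSequence.Iterator`) and save each one as its own PNG, since a PNG file holds a single image.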
## Final Words
GIF and PNG are both crucial image formats with distinct advantages and use cases. GIF is excellent for simple graphics and animations with limited colors, while PNG excels in delivering high-quality images with advanced transparency and a broader color range. Understanding the differences between GIF and PNG, and knowing how to convert between them, allows you to choose the best format for your specific needs. Whether you need the animation capabilities of GIF or the superior quality and transparency of PNG, mastering these formats ensures you can handle any digital image requirement effectively. | msmith99994 | |
1,896,513 | How to create a resource group in Azure Portal | What is a Resource group?: A resource is a logical container for housing our resources in... | 0 | 2024-06-21T23:32:45 | https://dev.to/realcloudprojects/how-to-create-a-resource-group-in-azure-portal-2h8j | webdev, beginners, tutorial, productivity | **What is a Resource group?**: A resource group is a logical container for housing our resources in Azure.
**Here are the steps to create one in Azure**
Step 1: Search for "resource groups" in the "Search resources, services and docs" field and press Enter

Step 2: Click on the + Create button

Step 3: On the Basics tab, fill in the project details, which comprise the subscription and the resource group name

| realcloudprojects |
1,896,512 | Rush Mommy | RushMommy WordPress hosting provides a streamlined experience with features like rapid load speeds,... | 0 | 2024-06-21T23:31:28 | https://dev.to/sharedhosting/rush-mommy-16gj | RushMommy **[WordPress hosting](https://rushmommy.com/wordpress-hosting/)** provides a streamlined experience with features like rapid load speeds, robust security protocols, and easy site management tools. Their hosting solutions cater to both beginners and advanced users, offering one-click WordPress installations, automatic updates, and comprehensive customer support. Designed to handle high-traffic websites and online stores, RushMommy ensures reliability and scalability, making it a top choice for those looking to enhance their digital footprint. | sharedhosting |
1,896,509 | Cooking up high-quality website templates this weekend! 🧑🍳 | As a developer who's passionate about simplifying processes and saving time, I'm excited to share a... | 0 | 2024-06-21T23:21:46 | https://dev.to/darkinventor/cooking-up-high-quality-website-templates-this-weekend-9cp | webdev, javascript, softwareengineering, productivity | As a developer who's passionate about simplifying processes and saving time, I'm excited to share a game-changing project I've been working on: **free, open-source website templates**.
> These templates are designed to streamline web development, making it easier and faster for developers of all skill levels to create stunning, functional websites.
## Why Website Templates are a Game-Changer?
Website templates offer numerous benefits that can dramatically improve your development workflow:
```
1. Save 100+ hours of work
2. Pre-built advanced animations
3. Easy configuration and customization
4. One-click download and setup
5. 5-minute content updates
6. Seamless Vercel deployment
```
These features combine to create a powerful toolkit that allows developers to focus on customization and content rather than starting from scratch every time.
## Tech Stack:
To ensure these templates are modern, efficient, and highly customizable, I'm using a robust tech stack:
```
- React
- Next.js
- Magic UI
- Shadcn UI
- Tailwind CSS
- Framer Motion
```
This combination of technologies provides a solid foundation for creating responsive, performant, and visually appealing websites.
## What's Included in Each Template?
Each template comes packed with essential sections to kickstart your project:
```
✅ Header Section
✅ Hero Section
✅ Social Proof Section
✅ Pricing Section
✅ Call To Action Section
✅ Footer Section
✅ Mobile Responsive Navbar
```
These components cover the core elements needed for most websites, allowing you to quickly set up a professional-looking site.
## Progress and Future Plans:
I've just completed the first template and am aiming to create two more this week. The best part? All of these templates will be free and open source, allowing the entire development community to benefit from and contribute to this project.
## Demo Preview:
Here's a sneak peek at one of the templates:

If you're interested in seeing the full demo video, you can check it out on my LinkedIn post [here](https://www.linkedin.com/posts/kathan-mehta-software-dev_webdevelopment-opensource-react-activity-7210055333027745792-l-N9?utm_source=share&utm_medium=member_desktop).
## Stay Tuned!
I'm thrilled about the potential impact these templates can have on the web development community. By providing free, high-quality, and easily customizable templates, we can collectively save countless hours of development time and lower the barrier to entry for creating professional websites.
Stay tuned for the official launch! I'll be sharing more updates and resources as the project progresses. If you're as excited about this as I am, spread the word to your fellow developers.
Let's revolutionize web development together, one template at a time! 🚀 | darkinventor |
1,896,503 | Project Stage-3:Leveraging Advanced GCC Flags. | Hello there!! As our journey towards implementing Automatic Function Multi-Versioning (AFMV) for... | 0 | 2024-06-21T23:18:33 | https://dev.to/yuktimulani/project-stage-3leveraging-advanced-gcc-flags-5e2l | Hello there!!
As our journey towards implementing Automatic Function Multi-Versioning (AFMV) for AArch64 systems continues, I’ve encountered some intriguing challenges and solutions regarding the support of SVE2 (Scalable Vector Extension version 2) in GCC. This blog post explores how advanced GCC flags can be leveraged to overcome these hurdles, enabling a smoother path towards a fully functional AFMV setup.
## Exploring Advanced GCC Flags:
GCC (GNU Compiler Collection) is known for its robust support for various architectures and instruction sets, including SVE2. However, enabling these features sometimes requires explicit configuration through advanced flags. Here’s a breakdown of the steps I followed to integrate these features:
**1. Identifying the Required Architecture Support:**
- The error indicated that SVE2 support wasn't automatically enabled, which led me to investigate the necessary architecture flags. SVE2 is an optional architecture extension that GCC only enables when asked for explicitly, so I needed to ensure the compiler was configured to support it.
**2. Adding the Appropriate Flags:**
- By consulting the GCC documentation and community forums, I identified the necessary flags to enable SVE2 support. The critical flags were -march and -mcpu, which dictate the target architecture and CPU:
```
gcc hello.c -o hello -O3 -march=armv8.5-a+sve2 -mcpu=neoverse-n2
```
Here’s a brief explanation of the flags:
- `-march=armv8.5-a+sve2`: Specifies the target architecture, including the SVE2 extension.
- `-mcpu=neoverse-n2`: Selects an SVE2-capable CPU so that code generation and tuning align with the required architecture.
**3. Integrating the Flags into the Build Process:**
By incorporating these flags into my build commands, I ensured that the compiler was correctly configured to recognize and support the SVE2 instructions. This involved updating my Makefile and build scripts to include these flags.
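To give a concrete sketch of that change (the file and target names here are illustrative, not the project's actual build scripts), the flags can be centralized in a Makefile so every compile picks them up:

```
# Illustrative Makefile fragment -- architecture flags kept in one place
CC     = gcc
CFLAGS = -O3 -march=armv8.5-a+sve2

hello: hello.c
	$(CC) $(CFLAGS) hello.c -o hello
```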
**4. Validating the Configuration:**
After adding the flags, I recompiled my code and verified that the target("sve2") attribute was now recognized and valid. Here’s a snippet of the updated hello.c:
```
#include <stdio.h>
__attribute__((target("sve2")))
void my_function() {
printf("This function utilizes SVE2 instructions.\n");
}
int main() {
my_function();
return 0;
}
```
## Insights and Learnings:
This experience underscored the importance of understanding and leveraging advanced GCC flags to achieve the desired functionality. It was a reminder that while GCC is incredibly powerful, it often requires explicit configuration to support specialized features like SVE2.
In summary, the addition of advanced GCC flags was crucial in enabling SVE2 support, thereby allowing us to move forward with the AFMV implementation. This journey has been instrumental in deepening my understanding of compiler configurations and their impact on project outcomes.
Stay tuned for the next post in this series, where I’ll delve into the performance optimizations made possible through AFMV and SVE2!
Until next time, happy coding 🚀!! | yuktimulani | |
1,896,502 | form backend service fabform.io | FabForm.io is a service designed to handle form submissions for static sites and web applications... | 0 | 2024-06-21T23:18:04 | https://dev.to/irishgeoff22/form-backend-service-fabformio-45ip | FabForm.io is a service designed to handle form submissions for static sites and web applications without needing a custom backend. Here’s a step-by-step guide on how to use FabForm.io:
### 1. Sign Up and Set Up Your Account
1. **Sign Up**: Go to the FabForm.io website and create an account.
2. **Create a Project**: Once you have an account, create a new project. This project will represent the application or site that you want to collect form submissions for.
### 2. Create a Form
1. **Form ID**: Every form you create will have a unique form ID provided by FabForm.io. You will use this ID to link your HTML form to FabForm.io.
2. **Form Endpoint**: The form action should be set to the FabForm.io endpoint, which is typically something like `https://fabform.io/f/[your_form_id]/submit`.
### 3. Integrate the Form in Your HTML
1. **HTML Form**: Add an HTML form to your webpage with the `action` attribute pointing to FabForm.io and the `method` set to `POST`.
2. **Form Fields**: Include the necessary input fields in your form. FabForm.io will collect and store all the fields you define in your form.
Here’s an example of how your HTML form might look:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Contact Form</title>
</head>
<body>
<form action="https://fabform.io/f/your_form_id/submit" method="POST">
<label for="name">Name:</label>
<input type="text" id="name" name="name" required>
<label for="email">Email:</label>
<input type="email" id="email" name="email" required>
<label for="message">Message:</label>
<textarea id="message" name="message" required></textarea>
<button type="submit">Submit</button>
</form>
</body>
</html>
```
Replace `your_form_id` with the actual form ID provided by FabForm.io.
### 4. Test the Form
1. **Submit the Form**: Open your webpage with the form and try submitting it to ensure it works correctly.
2. **Check Submissions**: Go to your FabForm.io dashboard and check if the form submissions are being recorded.
### 5. Advanced Features (Optional)
FabForm.io often provides additional features such as:
- **Email Notifications**: Set up email notifications to receive an email whenever someone submits a form.
- **Webhook Integrations**: Integrate with other services using webhooks to trigger actions on form submission.
- **Data Export**: Export form submission data as CSV or other formats.
To configure these features, go to your FabForm.io dashboard and navigate to the settings for your form.
### 6. Security Considerations
- **Spam Protection**: Use hidden fields (honeypot technique) or reCAPTCHA to protect your forms from spam submissions.
- **Data Privacy**: Ensure that the data collected through forms is handled according to privacy regulations applicable to your region (e.g., GDPR).
Here is an example of adding a hidden field for spam protection:
```html
<form action="https://fabform.io/f/your_form_id/submit" method="POST">
<label for="name">Name:</label>
<input type="text" id="name" name="name" required>
<label for="email">Email:</label>
<input type="email" id="email" name="email" required>
<label for="message">Message:</label>
<textarea id="message" name="message" required></textarea>
<!-- Honeypot field for spam protection -->
<input type="text" name="website" style="display:none">
<button type="submit">Submit</button>
</form>
```
### 7. Monitor and Maintain
- **Monitor Submissions**: Regularly check your FabForm.io dashboard for new submissions.
- **Update Forms**: Update your forms as needed based on user feedback or changing requirements.
By following these steps, you can effectively use FabForm.io to manage form submissions for your static site or web application without the need for a custom backend.
Check out the [form backend](https://fabform.io) service FabForm today | irishgeoff22 |
1,896,393 | How to Add Nodemon to your TS files | The dev package nodemon has been a great help providing server-side hot-reloading as we code in our... | 0 | 2024-06-21T23:13:03 | https://dev.to/finalgirl321/how-to-add-nodemon-to-your-ts-files-4736 | The dev package nodemon has been a great help providing server-side hot-reloading as we code in our JavaScript, JSON, and other files while developing in the Node.js environment. To include the benefits of nodemon in your TypeScript projects so that the _unbuilt_ ts file hot-reloads as you go takes a bit more configuration, but isn't difficult.
Start by installing nodemon and ts-node as dev tools:
```
npm i -D nodemon ts-node
```
In the root of your project (on the same level as your package.json), make a config file called
```
nodemon.json
```
In this file, paste the following code:
```
{
"watch": ["src"],
"ext": "ts",
"exec": "ts-node ./src/index.ts"
}
```
The extensions I am watching with nodemon are simply the .ts files located in the src folder. The package ts-node will be used to execute the file called index.ts in my src folder. Update per your needs.
Put the following script in your package.json with your other scripts:
```
"nodemon": "nodemon --watch 'src/**/*.ts' --exec ts-node src/index.ts",
```
You will now be able to start your project using the script you just added:
```
npm run nodemon
```
and as you code in the index.ts, the hot-reloading will update your server.
That's it for now! | finalgirl321 | |
1,896,499 | Project Stage-3: Overcoming Compiler Attribute Challenges | Welcome back folks!! In our quest to implement Automatic Function Multi-Versioning (AFMV) for... | 0 | 2024-06-21T23:05:11 | https://dev.to/yuktimulani/project-stage-3-overcoming-compiler-attribute-challenges-3le3 | Welcome back folks!! In our quest to implement Automatic Function Multi-Versioning (AFMV) for AArch64, one of the critical challenges has been dealing with compiler attribute errors. The target("sve2") attribute error was a particular stumbling block. Here’s how I tackled it.
## Initial Error Message:
```
error: pragma or attribute ‘target("sve2")’ is not valid
```
## Steps to Resolve:
**1. GCC Version and Compatibility Check:**
- Verified that my GCC version supports SVE2 with `gcc --version`.
- Updated GCC to the latest version to ensure full support for the attribute.
**2. Correct Usage of Attributes:**
- Adjusted the context in which the `target` attribute was applied, ensuring it was used correctly in the function definition.
**3. Compilation Flags:**
- Added `-march=armv8.5-a+sve2` to the GCC command to explicitly enable SVE2 support.
**4. Code Adjustment:**
```
#include <stdio.h>
__attribute__((target("sve2")))
void my_function() {
printf("This function uses SVE2 instructions.\n");
}
int main() {
my_function();
return 0;
}
```
With these adjustments, the attribute error was resolved, allowing me to continue with the AFMV implementation. This process underscored the importance of aligning compiler settings with the specific architectural requirements.
Until next time , happy coding 🚀!! | yuktimulani | |
1,894,509 | 4 useState Mistakes You Should Avoid in React🚫 | Introduction React.js has become a cornerstone of modern web development, with its unique... | 0 | 2024-06-21T23:00:00 | https://dev.to/safdarali/4-usestate-mistakes-you-should-avoid-in-react-1ol0 | technology, javascript, react, programming | ## Introduction
React.js has become a cornerstone of modern web development, with its unique approach to managing state within components. One common hook, useState, is fundamental but often misused. Understanding and avoiding these common mistakes is crucial for both beginners and experienced developers aiming to create efficient and bug-free applications.
This blog will dive into four critical mistakes to avoid when using useState in React. Let's enhance our React skills together!
Before diving in, explore more in-depth articles on web development at my personal website: https://safdarali.vercel.app/

## Mistake 1: Forgetting to Consider the Previous State 😨
When working with React’s useState hook, a common mistake is not taking into account the most recent state when updating it. This oversight can lead to unexpected behaviors, particularly when you're dealing with rapid or multiple state updates.
## ❌ Understanding the Issue
Let’s imagine you’re building a counter in React. Your goal is to increase the count each time a button is clicked. A straightforward approach might be to simply add 1 to the current state value. However, this can be problematic.
```
import React, { useState } from 'react';
const CounterComponent = () => {
const [counter, setCounter] = useState(0);
const incrementCounter = () => {
setCounter(counter + 1); // Might not always work as expected
};
return (
<div>
<p>Counter: {counter}</p>
<button onClick={incrementCounter}>Increment</button>
</div>
);
};
export default CounterComponent;
```
In the above code, incrementCounter updates the counter based on its current value. This seems straightforward but can lead to issues. React might batch multiple setCounter calls together, or other state updates might interfere, resulting in the counter not being updated correctly every time.
## ✅ The Correction:
To avoid this issue, use the functional form of the setCounter method. This version takes a function as its argument, which React calls with the most recent state value. This ensures that you're always working with the latest value of the state.
```
import React, { useState } from 'react';
const CounterComponent = () => {
const [counter, setCounter] = useState(0);
const incrementCounter = () => {
setCounter(prevCounter => prevCounter + 1); // Correctly updates based on the most recent state
};
return (
<div>
<p>Counter: {counter}</p>
<button onClick={incrementCounter}>Increment</button>
</div>
);
};
export default CounterComponent;
```
In this corrected code, incrementCounter uses a function to update the state. This function receives the most recent state (prevCounter) and returns the updated state. This approach is much more reliable, especially when updates happen rapidly or multiple times in a row.
If you are interested in React JS real time training, Please reach out to me for more details.
## Mistake 2: Neglecting State Immutability 🧊
## ❌ Understanding the Issue
In React, state should be treated as immutable. A common mistake is directly mutating the state, especially with complex data structures like objects and arrays.
Consider this faulty approach with a stateful object:
```
import React, { useState } from 'react';
const ProfileComponent = () => {
const [profile, setProfile] = useState({ name: 'John', age: 30 });
const updateAge = () => {
profile.age = 31; // Directly mutating the state
setProfile(profile);
};
return (
<div>
<p>Name: {profile.name}</p>
<p>Age: {profile.age}</p>
<button onClick={updateAge}>Update Age</button>
</div>
);
};
export default ProfileComponent;
```
This code incorrectly mutates the profile object directly. Such mutations don't trigger re-renders and lead to unpredictable behaviors.
## ✅ The Correction:
Always create a new object or array when updating state to maintain immutability. Use the spread operator for this purpose.
```
import React, { useState } from 'react';
const ProfileComponent = () => {
const [profile, setProfile] = useState({ name: 'John', age: 30 });
const updateAge = () => {
setProfile({...profile, age: 31}); // Correctly updating the state
};
return (
<div>
<p>Name: {profile.name}</p>
<p>Age: {profile.age}</p>
<button onClick={updateAge}>Update Age</button>
</div>
);
};
export default ProfileComponent;
```
In the corrected code, updateAge uses the spread operator to create a new profile object with the updated age, preserving state immutability.
## Mistake 3: Misunderstanding Asynchronous Updates ⏳
## ❌ Understanding the Issue
React’s state updates via useState are asynchronous. This often leads to confusion, especially when multiple state updates are made in quick succession. Developers might expect the state to change immediately after a setState call, but in reality, React batches these updates for performance reasons.
Let’s look at a common scenario where this misunderstanding can cause problems
```
import React, { useState } from 'react';
const AsyncCounterComponent = () => {
const [count, setCount] = useState(0);
const incrementCount = () => {
setCount(count + 1);
setCount(count + 1);
// Developer expects count to be incremented twice
};
return (
<div>
<p>Count: {count}</p>
<button onClick={incrementCount}>Increment Count</button>
</div>
);
};
export default AsyncCounterComponent;
```
In this example, the developer intends to increment the count twice. However, due to the asynchronous nature of state updates, both setCount calls are based on the same initial state, resulting in the count being incremented only once.
## ✅ The Correction:
To handle asynchronous updates correctly, use the functional update form of setCount. This ensures that each update is based on the most recent state.
```
import React, { useState, useEffect } from 'react';
const AsyncCounterComponent = () => {
const [count, setCount] = useState(0);
const incrementCount = () => {
setCount(prevCount => prevCount + 1);
setCount(prevCount => prevCount + 1);
// Now each update correctly depends on the most recent state
};
// Optional: Use useEffect to see the updated state
useEffect(() => {
console.log(count); // 2
}, [count]);
return (
<div>
<p>Count: {count}</p>
<button onClick={incrementCount}>Increment Count</button>
</div>
);
};
export default AsyncCounterComponent;
```
In the above code, each call to setCount uses the most recent value of the state, ensuring accurate and sequential updates. This approach is crucial for operations that depend on the current state, especially when multiple state updates occur in quick succession.
## Mistake 4: Misusing State for Derived Data 📊
## ❌ Understanding the Issue
A frequent error is using state for data that can be derived from existing state or props. This redundant state can lead to complex and error-prone code.
For example:
```
import React, { useState } from 'react';
const GreetingComponent = ({ name }) => {
const [greeting, setGreeting] = useState(`Hello, ${name}`);
return (
<div>{greeting}</div>
);
};
export default GreetingComponent;
```
Here, greeting state is unnecessary as it can be derived directly from name.
## ✅ The Correction:
Instead of using state, derive data directly from existing state or props.
```
import React from 'react';
const GreetingComponent = ({ name }) => {
const greeting = `Hello, ${name}`; // Directly derived from props
return (
<div>{greeting}</div>
);
};
export default GreetingComponent;
```
In the corrected code, greeting is computed directly from the name prop, simplifying the component and avoiding unnecessary state management.
## Conclusion 🚀
Effectively using the useState hook in React is crucial for building reliable and efficient applications. By understanding and avoiding common mistakes—like neglecting the previous state, mismanaging state immutability, overlooking asynchronous updates, and avoiding redundant state for derived data—you can ensure smoother and more predictable component behavior. Keep these insights in mind to enhance your React development journey and create more robust applications.
That's all for today.
And also, share your favourite web dev resources to help the beginners here!
Connect with me:@ [LinkedIn ](https://www.linkedin.com/in/safdarali25/)and checkout my [Portfolio](https://safdarali.vercel.app/).
Explore my [YouTube ](https://www.youtube.com/@safdarali_?sub_confirmation=1)Channel! If you find it useful.
Please give my [GitHub ](https://github.com/Safdar-Ali-India) Projects a star ⭐️
Happy Coding! 🚀
Thanks for 23592! 🤗 | safdarali |
1,896,497 | Firebase Authentication: Are you truly secure? | INTRODUCTION Verizon’s 2023 Data Breach Investigations Report (DBIR) reported that 49% of... | 0 | 2024-06-21T22:53:28 | https://dev.to/oyegoke/firebase-authentication-are-you-truly-secure-1bo4 | webdev, javascript, beginners, react | ## **INTRODUCTION**
Verizon’s 2023 Data Breach Investigations Report (DBIR) reported that 49% of security breaches involved compromised credentials. This highlights the critical importance of implementing a robust website authentication system on the developer's part.
Firebase authentication is a comprehensive authentication service provided by Firebase. It simplifies implementing authentication in web and mobile applications by offering various sign-in options and handling the backend for developers.
## **OVERVIEW OF FIREBASE AUTHENTICATION**
Firebase is a Backend-as-a-Service (BaaS) application development platform by Google. Firebase provides tools and services to help developers build, improve, and grow their apps. These include App hosting, Authentication, Cloud Storage, Realtime Database, Cloud Firestore, Cloud Functions, Analytics, and Crashlytics.
The scope of this article does not exceed the authentication service of Firebase. In detail, we will explore the process of implementing the Email/Password authentication and the Google sign-in method.
## **SETTING UP FIREBASE AUTHENTICATION.**
To implement Firebase Authentication, you start by creating a Firebase project. Open the [Firebase console page](https://console.firebase.google.com), and click the Add Project option. Follow the subsequent prompts to create your Firebase project.

The next step is to add Firebase to your app, which could be an iOS, Android, or web app—our focus in this article is on the web app. You'll be prompted to create an app name. Upon creation, Firebase generates a configuration code unique to your project, which allows you to link your web app to the Firebase project.

Note that this article features implementing Firebase Auth in a web application developed using Reactjs, so this tutorial is tailored towards using Firebase with a React application. It is necessary to install Firebase as a dependency in our project by running the command:
`npm install firebase`
Then, create a JS file where you initialize the Firebase SDK in your app.
To access the Firebase Authentication service in our project, we import the getAuth function, which sets up Firebase Authentication and allows us to manage sign-ups, sign-ins, and authentication state, as shown in the code snippet below:
```javascript
import {initializeApp} from 'firebase/app';
import {getAuth} from 'firebase/auth'
const firebaseConfig = {
apiKey: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
authDomain: "xxxxxxxxxxxxxxxxxxxxxxx.com",
projectId: "auth-71d4f",
storageBucket: "auth-71d4f.appspot.com",
messagingSenderId: "420633845077",
appId: "1:420633845077:web:e610a208e4fa3aa48de3dc"
};
const app = initializeApp(firebaseConfig);
export const auth = getAuth(app)
```
## **INTEGRATING FIREBASE AUTHENTICATION**
### **CREATING USER ACCOUNT WITH EMAIL AND PASSWORD**
Firebase Authentication allows users to create accounts using their email and password. Note that this sign-in method must be enabled in the Firebase console before it can be used.
The `createUserWithEmailAndPassword` function from the Firebase Auth module is used to create user accounts. This function receives three parameters: auth (the instance of the Firebase Auth service), email, and password.
```js
import { createUserWithEmailAndPassword } from 'firebase/auth';

// React state to hold the user input
const [registerEmail, setRegisterEmail] = useState("");
const [registerPassword, setRegisterPassword] = useState("");

// Function to create the user account
const register = async () => {
  try {
    const user = await createUserWithEmailAndPassword(auth, registerEmail, registerPassword);
    console.log(user);
  } catch (error) {
    console.log(error.message);
  }
};
```
## **SIGN IN WITH EMAIL AND PASSWORD**
This process is similar to creating a user account, as we explored earlier, but it uses a different function: signing in uses the `signInWithEmailAndPassword` function from the Firebase Auth module. Like `createUserWithEmailAndPassword`, it accepts three parameters: auth, email, and password.
Here’s an example implementation:
```js
import { signInWithEmailAndPassword } from 'firebase/auth';

// loginEmail and loginPassword are React state values,
// maintained the same way as in the register example
const login = async () => {
  try {
    const user = await signInWithEmailAndPassword(auth, loginEmail, loginPassword);
    console.log(user);
  } catch (error) {
    console.log(error.message);
  }
};
```
## **GOOGLE SIGN-IN**
Firebase Authentication allows users to create accounts using their Google accounts. To implement this, the Google sign-in option must be enabled in your Firebase project console. Signing in with Google uses the `signInWithPopup` function from the Firebase Auth module. This function takes two parameters: the auth instance and a provider instance, which is created using the `GoogleAuthProvider` constructor imported from the Firebase Auth module.
```js
import { GoogleAuthProvider, signInWithPopup } from 'firebase/auth';

const provider = new GoogleAuthProvider();

const signInWithGoogle = () => {
  signInWithPopup(auth, provider);
};
```
## **MONITORING AUTHENTICATION STATE**
Firebase Authentication allows you to monitor the user's authentication state. To implement this, use the onAuthStateChanged function from the Firebase Auth module. This function takes two parameters: the authentication instance and a callback function that executes whenever the user's authentication state changes. The callback function receives the user object if the user is signed in, or null if the user is signed out.
```js
import { onAuthStateChanged } from 'firebase/auth';
import { useEffect, useState } from 'react';

const [user, setUser] = useState(null);

// Subscribe once on mount and clean up the listener on unmount
useEffect(() => {
  const unsubscribe = onAuthStateChanged(auth, (currentUser) => {
    setUser(currentUser);
  });
  return unsubscribe;
}, []);
```
## **LOGGING OUT**
Firebase Authentication allows users to log out of their accounts easily. To implement this, use the `signOut` function from the Firebase Auth module. This function takes one parameter: the authentication instance. Once called, it signs the user out of their account.
```js
import { signOut } from 'firebase/auth';

const logout = async () => {
  await signOut(auth);
};
```
## **BENEFITS OF FIREBASE AUTHENTICATION**
- Easy to develop
- Multiple Sign-In options
- Secure Authentication
## **CONCLUSION**
In this article, we explored the world of Firebase Authentication, stressing its importance in our web applications to provide seamless user authentication. We also explored how to implement user authentications with email and password credentials and the Google sign-in option. We wrapped this article up by looking into the benefits of using Firebase Authentication in your project.
| oyegoke |
1,896,498 | Crear un proyecto nuevo con Eslint, Stylelint, CommitLint y Husky | En este artículo vamos a ver cómo configurar tu proyecto utilizando herramientas de revisión de... | 0 | 2024-06-21T22:47:33 | https://dev.to/altaskur/crear-un-proyecto-nuevo-con-eslint-stylelint-commitlint-y-husky-k6j | webdev, programming, spanish, tutorial | In this article we'll look at how to configure your project with code-review tooling, and we'll modify Git so that we can run these tools before pushing our code to the repository, making sure we keep rules, structure, and configuration that make our code consistent and clean.
- [Getting started](#getting-started)
- [Airbnb formatter](#airbnb-formatter)
- [StyleLint and CommitLint](#stylelint-and-commitlint)
- [Husky](#husky)
- [Package.json](#packagejson)
- [Husky integration](#husky-integration)
- [Bonus Track: Branch checks](#bonus-track-branch-checks)
- [Example repository](#example-repository)
- [Wrapping up](#wrapping-up)
---
## Getting started
To begin, I'll focus on Angular, but you can easily adapt this setup to the most popular frameworks such as React, Vite, and Astro.
In this article we'll use the latest Angular version available at the time of writing, which is 18. Start by creating the project with:
```sh
ng new proyectoArticulo
```
---
Make sure to replace "proyectoArticulo" with whatever name you'd like to use. Follow the Angular CLI prompts. Explaining how the Angular CLI works is beyond the scope of this article, so here is the configuration I chose for the project:
- I used SCSS, and we won't be using SSR.
Once the project has been created, install the ESLint integration for Angular using the @angular-eslint/schematics schematic. This is a tool for the Angular CLI, so run:
```sh
ng add @angular-eslint/schematics
```
---
This way, running ng lint will invoke ESLint directly.
With the Angular schematics installed, let's move on to installing ESLint. Prettier is a popular code formatter to pair with ESLint but, although ESLint's formatting can be customized, I personally prefer using ESLint with the Airbnb formatting rules. This is entirely a personal preference, and it's what we'll use in this article.
## Airbnb formatter
To install ESLint together with the Airbnb rules, we'll need a list of packages:
```sh
npm install eslint-config-airbnb-base eslint-config-airbnb-typescript eslint-plugin-simple-import-sort --save-dev
```
---
As you can see, we save them as dev dependencies, since we only need them during development and there's no point adding them to the production build.
What are all these dependencies? We've added the Airbnb ruleset for ESLint, its TypeScript variant, and a final package that sorts imports.
Once that's done, you'll see that a .eslintrc.json file has been created, which is where ESLint stores its configuration.
If you use "_" to declare values that won't be used, I recommend adding the following inside overrides >> rules:
```json
"@typescript-eslint/no-unused-vars": [
"error",
{ "argsIgnorePattern": "^_", "varsIgnorePattern": "^_" }
]
```
---
Now let's configure the simple-import-sort plugin we installed earlier and, in addition, reconcile the Airbnb rules with the Angular convention a bit by disabling default imports. As in the previous step, this goes inside rules:
```json
"simple-import-sort/imports": "error",
"simple-import-sort/exports": "error",
"import/prefer-default-export": "off"
```
---
Now we tell ESLint to use the Airbnb packages we downloaded. Inside extends, add:
```json
"airbnb-base",
"airbnb-typescript/base"
```
---
With ESLint installed, we'll tell it to ignore the node_modules and dist folders to avoid problems both in third-party libraries and in the dist bundle. To do so, create a .eslintignore file and add these two folders, just as you would with .gitignore, leaving it like this:
```sh
dist
node_modules
```
---
## StyleLint and CommitLint
Once we have that, let's install StyleLint and CommitLint:
```sh
npm install --save-dev stylelint stylelint-config-standard-scss
npm install @commitlint/cli @commitlint/config-conventional --save-dev
```
---
You'll see that a StyleLint configuration file has been added, ESLint-style, but not one for CommitLint, so let's create it:
Create a ``.commitlintrc.json`` file and put the following inside:
```json
{
"extends": ["@commitlint/config-conventional"]
}
```
---
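With `config-conventional` in place, commit messages must follow the Conventional Commits shape, `type(scope): subject`. For example (illustrative messages, not from the project):
```text
feat(auth): add Google sign-in button
fix: prevent crash when the session token expires
chore(deps): bump eslint to the latest major
```
A message like `added stuff` would be rejected once the `commit-msg` hook is wired up.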
## Husky
Now let's install the tool that lets us use Git hooks to verify that all our code passes the established rules before it reaches the repository. For that, run:
```sh
npx husky-init
```
---
The command will ask to install the Husky dependency, so say yes. Once installed, the project root will contain a .husky folder with several files. The one we care about is ``pre-commit``. If you open it, you'll see something like this:
```sh
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npm test
```
---
As you can see, this runs our project's npm test command before each commit; if the tests fail, the commit is blocked. Now we just need to add the code formatters.
## Package.json
Now let's create our own commands so Husky can run them cleanly. Head to the ``package.json`` file and add the following inside the scripts property:
```json
"lint:fix": "ng lint --fix",
"lint:scss": "stylelint src/**/*.scss"
```
---
Our linters are now ready both to check the files and to enforce those checks when pushing to the repository.
## Husky integration
Let's add these commands to Husky. For ESLint it's quite simple: we just need to open the pre-commit file in the ``.husky`` directory and add:
```sh
npm run lint
npm run lint:scss
npm run test
```
---
You'll end up with something like this in ``.husky/pre-commit``:
```sh
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npm run lint
npm run lint:scss
```
---
You might be wondering what happened to CommitLint. For that one, we need to add it through Husky's command-line utility, so run:
```bash
npx husky add .husky/commit-msg 'npx --no --commitlint --edit ${1}'
```
---
Why does it go in commit-msg and not in pre-commit like everything before?
The ``commit-msg`` hook focuses on validating the commit message: it runs after the commit message has been entered, but before the commit is finalized, and it also gives us access to the commit's name and message.
## Bonus Track: Branch checks
Often you won't want changes pushed straight to your repository, preferring them to be approved through a pull request. For this, we can write our own script that checks which branch we're on and whether or not we can commit.
Inside the .husky folder, add a ``prevent-commit.sh`` file with the following shell code:
```sh
#!/bin/bash
branch_name=$(git symbolic-ref --short HEAD)

if [[ "$branch_name" != feature* ]] && [[ "$branch_name" != test* ]]; then
  echo "Commits are only allowed on feature and test branches. Current branch: $branch_name"
  exit 1
fi
```
---
As you can see, the if statement only allows commits when the branch name starts with feature or test, following the base structure of conventional commits. Feel free to extend or change it to suit your needs.
Now, to run our .sh, add it to the ``commit-msg`` hook with the following line:
``bash .husky/prevent-commit.sh``
## Example repository
Here's a link to an empty GitHub project with the configuration you've just seen:
[AngularConventionalCommitsTemplate](https://github.com/altaskur/AngularConventionalCommitsTemplate)
## Wrapping up
With this article you have a complete, advanced setup of linters and Git hooks in an Angular repository.
I hope it helps and that you can apply these configurations in your own projects. If you have any questions or suggestions, feel free to leave a comment.
What about you? Do you use linters in your projects? Which tools do you use? Let me know in the comments!
| altaskur |
1,896,494 | Acessibilidade = Usabilidade | Um rant rápido considerando acessibilidade | 0 | 2024-06-21T22:43:53 | https://dev.to/andrewmat/acessibilidade-usabilidade-362c | braziliandevs, frontend, a11y | ---
title: Accessibility = Usability
published: true
description: Um rant rápido considerando acessibilidade
tags: braziliandevs, frontend, a11y
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-21 22:10 +0000
---
We frontend devs shouldn't think of accessibility as something separate from usability. Usability is a term for how easily users can achieve what they want with a product, and design has to consider the user profile in order to improve usability.
One example is keyboard shortcuts like `Ctrl C` / `Ctrl V`, which make the task easier for a portion of users. At the same time, it's important to leave alternatives for users unaware of the shortcut, which is why you usually find a "copy" and a "paste" in the context menu (right mouse button).
Accessibility is the same thing as usability, only applied to specific user profiles, usually people with disabilities. An inaccessible field is a critical usability bug with an impact equal to (or greater than) a hard-to-use button or slow loading.
A few small things we can do to improve in our day-to-day frontend work:
* Learn more about accessibility. Many people learn about "alt" and "semantic tags" and stop there. Learn about WCAG, WAI-ARIA, and media features, and spread what you learn.
* Don't reinvent the wheel. HTML has a lot built in, and where it can't manage on its own, use headless component libraries; there are many alternatives, but I personally like Radix.
* Fight for the user. When implementing something new, always consider adapting it to a range of different user needs as part of the implementation.
* Test what you've built; don't stay in theory. Use screen readers, or other accessibility tools, to confirm that what you did is correct.
| andrewmat |
1,896,496 | Project Stage-3: Digging Deeper | Hi Peeps!! Welcome back. Without wasting any time lets continue from the last blog. The error... | 0 | 2024-06-21T22:43:26 | https://dev.to/yuktimulani/project-stage-3-digging-deeper-mcg | Hi Peeps!! Welcome back. Without wasting any time, let's continue from the [last blog](https://dev.to/yuktimulani/project-stage-3-error-analysis-3lm4).
The error message related to the `target("sve2")` attribute has been a key focus in the latest phase of our AFMV implementation. The message was clear but presented a challenge:
```
error: pragma or attribute ‘target("sve2")’ is not valid
```
To address this, I first ensured that my GCC version supports SVE2 instructions. Using the command `gcc --version`, I verified the version and compatibility. Here's what I found:
- GCC Version Check: Confirmed that my GCC version was up-to-date and should theoretically support SVE2.
Next, I explored the usage and context of the target attribute. Here’s a snippet from my hello.c file:
```c
#include <stdio.h>

__attribute__((target_clones("sve2")))
void my_function() {
    printf("This function uses SVE2 instructions.\n");
}

int main() {
    my_function();
    return 0;
}
```
Despite the correct syntax, the error persisted. This led me to check whether specific flags were required to enable SVE2 support. The solution involved adding `-march=armv8.5-a+sve2` to the GCC compilation command:
```sh
gcc hello.c -o hello -O3 -march=armv8.5-a+sve2
```
This resolved the issue, confirming that the attribute was now valid under the specified architecture. The journey continues, but this was a significant step towards a functioning AFMV implementation.
Until next time, happy coding 🚀!! | yuktimulani | |
1,896,490 | Los 6 lenguajes de programación de ciencia de datos que hay que aprender en 2024 | Hoy en día, el campo de la ciencia de datos está experimentando un gran crecimiento. Existe una... | 0 | 2024-06-21T22:43:01 | https://dev.to/pirisaurio32/los-6-lenguajes-de-programacion-de-ciencia-de-datos-que-hay-que-aprender-en-2024-1ln9 | These days, the field of data science is experiencing tremendous growth. There is demand for people who can extract insight from data, especially since the amount of data keeps growing at an exponential rate. In data science, professionals use programming languages to collect, analyze, and visually present data. If you aspire to a career in this field, knowing these programming languages will undoubtedly give you an edge over other professionals.
In this guide we'll present an overview of the six programming languages data scientists should prioritize learning in 2024. We'll dig into each language's purpose and strengths, as well as its advantages and drawbacks. Let's get started.
**1. Python**
First on the list is Python. Considered the go-to language for general-purpose data science, Python is widely used in the field. This high-level, interpreted programming language lets data scientists develop and prototype applications quickly.
Key capabilities
Some of the key things Python is used for in data science:
- Data wrangling and cleaning
- Exploratory data analysis
- Statistical analysis and machine learning
- Data visualization
- Building data pipelines and workflows
- Web scraping

**Pros**
- Very easy to read, write, and learn; ideal for beginners
- Extensive libraries and frameworks for data tasks (NumPy, Pandas, TensorFlow)
- Large community of data professionals
- Interactive coding environment via Jupyter notebooks
- Highly flexible; can integrate with other languages like R

**Cons**
- Being interpreter-based, it can be slower for very compute-intensive work
- Handling big data and large datasets can be memory-hungry
- Not inherently designed for multithreaded computation

As you can see, Python provides an excellent foundation for all kinds of data science work. Its versatility and ease of use make it our #1 recommendation for beginners.
**2. R**
Originally created for statistical computing, R has become one of the leading programming languages for data science. Widely used for machine learning and statistical modeling, it offers a broad selection of advanced tools.
Key capabilities
R's main strengths are:
- Statistical analysis and graphical visualizations
- Excellent predictive analytics and modeling tools
- Data management
- Machine learning with robust libraries
- Flexible IDE for interactive coding

**Pros**
- Open source with thousands of community-built packages
- Leading environment for statistical exploration
- Great for rapid prototyping
- Advanced data visualization features
- Highly extensible with code integration

**Cons**
- Steep learning curve for beginners
- Limited use outside of statistics/data analysis
- Basic programming features require more code
- Big data processing is resource-intensive

For budding data scientists, R's advanced analytical capabilities make it extremely valuable. Although the learning curve is steeper than Python's, the time invested in learning R pays dividends in modeling mastery.
**3. SQL**
SQL (Structured Query Language) has become a fundamental tool across many areas of data science. As a specialized language for accessing and manipulating databases, it gives users immense power to gather and organize data.
Key capabilities
Some key uses of SQL are:
- Creating and managing databases
- Writing complex queries to extract raw data
- Filtering, sorting, joining, and aggregating data
- Analyzing quantitative information from databases
- Backing up/moving data

**Pros**
- Declarative language that's easy to write and read
- Platform-independent standard for every type of database
- Lets users access vast datasets
- Critical language for leveraging big data
- Great for streamlining data analysis workflows

**Cons**
- Requires an existing database to query against
- Often needs to be combined with other languages for analysis
- Advanced operations can get complicated
- Doesn't work well for iterative or code-driven processes

SQL gives data experts the keys to access the mountains of data locked away in databases. Mastering SQL alongside a data manipulation language like Python or R will be a big boost to an analyst's capabilities.
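To illustrate the kind of filtering and aggregation described above, here is a hypothetical query (the `orders` table and its columns are made up for the example):

```sql
-- Revenue per customer for 2024, largest first,
-- keeping only customers above a spending threshold
SELECT customer_id,
       COUNT(*)   AS n_orders,
       SUM(total) AS revenue
FROM orders
WHERE created_at >= '2024-01-01'
GROUP BY customer_id
HAVING SUM(total) > 1000
ORDER BY revenue DESC;
```

A few lines of declarative SQL replace what would be a loop, a dictionary, and a sort in a general-purpose language.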
**4. Java**
Java, one of the most widely used programming languages worldwide across every area of software engineering, also plays a prominent role in data science. Java offers solid support for large-scale data processing through the Hadoop and Spark frameworks.
Key capabilities
Some of the ways Java is used for data science:
- Building scalable distributed applications and systems
- Parallel batch data processing frameworks like Apache Spark
- Backbone infrastructure such as Hadoop
- Real-time data streaming with tools like Kafka
- General-purpose machine learning tasks

**Pros**
- Statically typed, efficient, fast-running code
- Plenty of libraries and packages available
- Robust enough for developing large, complex programs
- Good integration with big data and ML frameworks
- Runs on any platform with a JVM available

**Cons**
- Not optimized for data tasks the way R and Python are
- More verbose language; everything needs coding
- Lacks an interactive REPL environment
- Steeper learning curve than other languages

Java may not be the best fit for everyday data manipulation and analysis. However, for architects designing mammoth data pipelines and workflows, Java mastery is extremely advantageous.
**5. JavaScript**
Perhaps surprisingly, JavaScript has become a prominent force in the field of data science in recent years. The ubiquitous scripting language has some interesting applications in the field.
Key capabilities
Some JavaScript use cases in data science are:
- Interactive data visualization with D3.js
- Building web-based data dashboards and reports
- Using Node.js for ETL programming needs
- Integrating frontend interfaces with R and Python
- Exploratory data analysis
Pros
**- Lenguaje muy fácil de aprender para programadores principiantes
- Se integra perfectamente en interfaces web y aplicaciones
- Gran comunidad y amplio material didáctico disponible
- Ligero en cuanto a necesidades de dependencias
- El tiempo de ejecución está disponible universalmente en todas las plataformas**
Contras
**- No está diseñado específicamente para las necesidades de manipulación de datos
- Falta de herramientas robustas en comparación con Python y R
- Necesita combinar otros idiomas para tareas más avanzadas
- En general, se utiliza menos en la industria**
Aunque quizá no se encuentre en la misma categoría de pesos pesados que Python y R para la ciencia de datos, JavaScript sigue siendo una utilidad increíblemente útil. Para los interesados en crear interfaces y visualizaciones de datos personalizadas, los conocimientos de JavaScript son inestimables.
**6. C/C++**
For programmers who want to maximize performance and efficiency, C and C++ remain the gold standard. These languages form the foundation on which many data analysis frameworks and infrastructures are built, providing the speed that high-volume data platforms need.
Key capabilities
Some examples of how C/C++ are leveraged:
- Building the underlying distributed data processing engines
- High-performance computing needs
- Complex algorithms and quantitative models
- Developing statistical libraries used by higher-level languages
- General systems programming tasks

**Pros**
- Extremely fast executable code, optimized for the hardware
- Gives programmers lower-level memory control
- Static typing for greater reliability
- Available everywhere as a systems language
- Compatible with a wide range of hardware

**Cons**
- Very complex languages that are hard to master
- Manual memory management leads to bugs
- Limited built-in support for data analysis features
- Lacks the interactivity of languages like Python

For most everyday analysis and modeling, C/C++ are overkill. However, their computational performance remains essential for developing cutting-edge algorithms, simulations, and the infrastructure foundations on which other, simpler languages are built.
Mientras repasamos algunos de los lenguajes de programación más utilizados en la ciencia de datos actual, quizá te preguntes: ¿cuál es mejor aprender primero? La elección de la lengua inicial depende de sus intereses específicos y de la base de la que disponga. He aquí algunas consideraciones clave que pueden ayudarle a tomar una decisión:
Experiencia previa en programación- Si eres nuevo en el mundo de la programación, Python es el programa más adecuado para principiantes.
Para aquellos con algunos conocimientos previos, ampliar esa base suele ser el camino más fácil.
**Área de interés**
Aquellos interesados más en estadística, modelado predictivo puede que desee abordar R antes. Si desea hacer visualizaciones personalizadas, JavaScript es un buen punto de partida. Las arquitecturas e infraestructuras de big data se prestan mejor a Java.
**Estilo de aprendizaje**
Los cuadernos interactivos en Python y R permiten iterar rápidamente durante el aprendizaje. Los lenguajes estructurados como Java favorecen los objetivos concretos de los proyectos para impulsar el progreso.
**Objetivos futuros**
Las perspectivas de empleo y las necesidades específicas de cada dominio pueden dictar ciertos idiomas requeridos. Los puestos de ingeniería de datos y en la nube se inclinan por Java, por ejemplo, mientras que los analistas tienden a utilizar más Python y R.
Lo mejor de todos estos lenguajes es que pueden trabajar juntos a la hora de crear soluciones de datos sólidas. No sientas que necesitas dominar uno antes de tocar el siguiente. La diversidad de idiomas le hará mucho más capaz como profesional de los datos.
| pirisaurio32 | |
1,896,492 | [Game of Purpose] Day 34 | Today I made my drone drop a granade and explode on impact. A problem I currently face is that... | 27,434 | 2024-06-21T22:31:03 | https://dev.to/humberd/game-of-purpose-day-34-47j4 | gamedev | Today I made my drone drop a grenade that explodes on impact.
{% embed https://youtu.be/n-Vs0QikzeE %}
A problem I currently face is that when the grenade is attached via a PhysicsConstraint, it contributes to the drone's mass, and the drone slowly sinks because there isn't enough thrust. I tried adding the grenade's mass to the up/down thrust calculations, but it didn't work. :/
Another problem is that there's no easy way to disable a PhysicsConstraint connection. You cannot simply unbind the connection between the grenade and the drone; you have to swap the grenade for something else. I tried that, but the replacement needs to have physics enabled, and I don't want it contributing to the mass even if it's invisible.
The solution I came up with is unlocking the "Linear Z Limit". It means the invisible rope connecting the grenade to the drone unwinds infinitely. By default, all the linear axes are locked.

| humberd |
1,896,492 | Networking and Sockets: Syn and Accept queue | In my previous article, we discussed endianness, its importance, and how to work with it.... | 27,728 | 2024-06-21T22:31:03 | https://www.kungfudev.com/blog/2024/06/14/network-sockets-syn-and-accept-queue | linux, rust, networking, socket |
In my previous [article](https://www.kungfudev.com/blog/2024/06/14/network-sockets-endianness), we discussed endianness, its importance, and how to work with it. Understanding endianness is crucial for dealing with data at the byte level, especially in network programming. We examined several examples that highlighted how endianness affects data interpretation.
One key area we focused on was the relationship between endianness and `TCP stream sockets`. We explained why the `bind` syscall expects certain information for the INET family to be in a specific endianness, known as network byte order. This ensures that data is correctly interpreted across different systems, which might use different native endianness.
One of the next interesting topics after binding the socket is putting it into listening mode with the `listen` syscall. In this article, we are going to dive a little deeper into this next phase.
## Listen: Handshake and SYN/Accept Queues
Recall that TCP is a connection-oriented protocol. This means the sender and receiver need to establish a connection based on agreed parameters through a `three-way handshake`. The server must be listening for connection requests from clients before a connection can be established, and this is done using the `listen` syscall.
We won't delve into the details of the three-way handshake here, as many excellent articles cover this topic in depth. Additionally, there are related topics like `TFO (TCP Fast Open)` and `syncookies` that modify its behavior. Instead, we'll provide a brief overview. If you're interested, we can explore these topics in greater detail in future articles.
The three-way handshake involves the following steps:
1. The client sends a SYN (synchronize) packet to the server, indicating a desire to establish a connection.
2. The server responds with a SYN-ACK (synchronize-acknowledge) packet, indicating a willingness to establish a connection.
3. The client replies with an ACK (acknowledge) packet, confirming it has received the SYN-ACK message, and the connection is established.

Do you remember in the first article when we mentioned that the `listen` operation involves the kernel creating two queues for the socket? The kernel uses these queues to store information related to the state of the connection.
The kernel creates these two **queues**:
- **SYN Queue**: Its size is determined by a system-wide setting. Although referred to as a queue, it is actually a hash table.
- **Accept Queue**: Its size is specified by the application. We will discuss this further later. It functions as a FIFO (First In, First Out) queue for established connections.
As shown in the previous diagram, these queues play a crucial role in the process. When the server receives a `SYN` request from the client, the kernel stores the connection in the `SYN queue`. After the server receives an `ACK` from the client, the kernel moves the connection from the `SYN queue` to the `accept queue`. Finally, when the server makes the `accept` system call, the connection is taken out of the accept queue.
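Using only the standard library (rather than the raw `nix` syscalls from the earlier examples), the whole hand-off can be sketched in a few lines — note that the client's handshake completes before the server ever calls `accept`:

```rust
use std::net::{SocketAddr, TcpListener, TcpStream};

// Demonstrates the queue hand-off: the client's connection is fully
// established (handshake done) before the server ever calls accept().
fn demo() -> std::io::Result<SocketAddr> {
    // TcpListener::bind performs socket + bind + listen; the kernel now
    // maintains the SYN queue and the accept queue for this socket.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // The three-way handshake completes here; the connection then sits
    // in the accept queue, waiting for the server.
    let _client = TcpStream::connect(addr)?;

    // accept() pops the established connection off the accept queue.
    let (_conn, peer) = listener.accept()?;
    Ok(peer)
}

fn main() {
    let peer = demo().expect("loopback connection failed");
    println!("accepted connection from {peer}");
}
```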
### Accept Queue Size
As mentioned, these queues have size limits, which act as thresholds for managing incoming connections. If the size limit of the queues is exceeded, the kernel may discard the incoming connection request or return an RST (Reset) packet to the client, indicating that the connection cannot be established.
If you recall our previous examples where we created a socket server, you'll notice that we specified a backlog value and passed it to the `listen` function. This backlog value determines the length of the queue for completely established sockets waiting to be accepted, known as the accept queue.
```rust
// Listen for incoming connections
let backlog = Backlog::new(1).expect("Failed to create backlog");
listen(&socket_fd, backlog).expect("Failed to listen for connections");
```
If we inspect the `Backlog::new` function, we will uncover some interesting details. Here's the function definition:
```rust
...
pub const MAXCONN: Self = Self(libc::SOMAXCONN);
...
/// Create a `Backlog`, an `EINVAL` will be returned if `val` is invalid.
pub fn new<I: Into<i32>>(val: I) -> Result<Self> {
cfg_if! {
if #[cfg(any(target_os = "linux", target_os = "freebsd"))] {
const MIN: i32 = -1;
} else {
const MIN: i32 = 0;
}
}
let val = val.into();
if !(MIN..Self::MAXCONN.0).contains(&val) {
return Err(Errno::EINVAL);
}
Ok(Self(val))
}
```
We can notice a few things:
- On Linux and FreeBSD systems, the minimum acceptable value is `-1`, while on other systems, it is `0`. We will explore the behavior associated with these values later.
- The value should not exceed `MAXCONN`, **which is the default maximum number of established connections that can be queued**. Unlike in `C`, where a value greater than `MAXCONN` is silently capped to that limit, in Rust's `libc` crate, exceeding `MAXCONN` results in an error.
- Interestingly, in the `libc` crate, `SOMAXCONN` is hardcoded to `4096`. Therefore, even if we modify the kernel's `somaxconn` setting, the crate's constant won't change, and we won't be able to pass a greater value to `listen` through this API.
### The `SOMAXCONN` Parameter
The `net.core.somaxconn` kernel parameter sets the maximum number of established connections that can be queued for a socket. This parameter helps prevent a flood of connection requests from overwhelming the system. The default value for `net.core.somaxconn` is defined by the `SOMAXCONN` constant [here](https://github.com/torvalds/linux/blob/2dde18cd1d8fac735875f2e4987f11817cc0bc2c/include/linux/socket.h#L297).
You can view and modify this value using the `sysctl` command:
```bash
$ sudo sysctl net.core.somaxconn
net.core.somaxconn = 4096
$ sudo sysctl -w net.core.somaxconn=8000
net.core.somaxconn = 8000
```
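If you prefer to read the value programmatically, it is also exposed through procfs. A small Rust sketch (Linux-only path; on other systems it simply reports that procfs is unavailable):

```rust
use std::fs;

// Reads net.core.somaxconn straight from procfs (Linux-specific path).
fn read_somaxconn() -> Option<i32> {
    fs::read_to_string("/proc/sys/net/core/somaxconn")
        .ok()?
        .trim()
        .parse()
        .ok()
}

fn main() {
    match read_somaxconn() {
        Some(v) => println!("net.core.somaxconn = {v}"),
        None => println!("procfs not available on this system"),
    }
}
```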
### Analyzing the Accept Queue with an Example
If we run the example `listen-not-accept`, which sets the backlog to `1` without calling `accept`, and then use `telnet` to connect to it, we can use the `ss` command to inspect the queue for that socket.
```bash
# server
$ cargo run --bin listen-not-accept
Socket file descriptor: 3
# telnet
$ telnet 127.0.0.1 6797
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
```
Using `ss`:
```bash
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 1 1 127.0.0.1:6797 0.0.0.0:*
```
- **Recv-Q**: The size of the current accept queue. This indicates the number of completed connections waiting for the application to call `accept()`. Since we have one connection from `telnet` and the application is not calling `accept`, we see a value of 1.
- **Send-Q**: The maximum length of the accept queue, which corresponds to the backlog size we set. In this case, it is 1.
### Testing Backlog Behavior on Linux
We are going to execute the following code to dynamically test the backlog configuration:
```rust
use std::os::fd::AsRawFd;
use std::str::FromStr;

use nix::sys::socket::{
    bind, connect, listen, socket, AddressFamily, Backlog, SockFlag, SockProtocol, SockType,
    SockaddrIn,
};

fn main() {
    println!("Testing backlog...");
    for backlog_size in -1..5 {
        let server_socket = socket(
            AddressFamily::Inet,
            SockType::Stream,
            SockFlag::empty(),
            SockProtocol::Tcp,
        )
        .expect("Failed to create server socket");
        let server_address =
            SockaddrIn::from_str("127.0.0.1:8080").expect("...");
        bind(server_socket.as_raw_fd(), &server_address).expect("...");
        listen(&server_socket, Backlog::new(backlog_size).unwrap()).expect("...");
        let mut successful_connections = vec![];
        let mut attempts = 0;
        loop {
            let client_socket = socket(
                AddressFamily::Inet,
                SockType::Stream,
                SockFlag::empty(),
                SockProtocol::Tcp,
            )
            .expect("...");
            attempts += 1;
            match connect(client_socket.as_raw_fd(), &server_address) {
                Ok(_) => successful_connections.push(client_socket),
                Err(_) => break,
            }
        }
        println!(
            "Backlog {} successful connections: {} - attempts: {}",
            backlog_size,
            successful_connections.len(),
            attempts
        );
    }
}
```
The key functions of the code are:
- It iterates over a range of backlog values, creating a server socket and putting it in listen mode with each specific backlog value.
- It then enters a loop, attempting to connect client sockets to the server socket. If a connection is successful, it adds the client socket to the `successful_connections` vector. If it fails, it breaks the loop.
This allows us to determine the exact number of successful connections for each backlog configuration.
When we run the program, we will see output similar to the following. Note that this process may take a few minutes:
```bash
$ cargo run --bin test-backlog
Backlog -1 successful connections: 4097 - attempts: 4098
Backlog 0 successful connections: 1 - attempts: 2
Backlog 1 successful connections: 2 - attempts: 3
Backlog 2 successful connections: 3 - attempts: 4
Backlog 3 successful connections: 4 - attempts: 5
Backlog 4 successful connections: 5 - attempts: 6
```
With the above output, you can notice two things:
- When `listen` receives the minimum acceptable value `-1`, it uses the default value of `SOMAXCONN`; we already know that passing a value greater than `SOMAXCONN` returns an error.
- The other "weird" thing you may notice is that the number of connections is always one more than the limit set by the backlog. For example, with a backlog of 3 there are 4 connections.
The last result is due to the behavior of this [function](https://github.com/torvalds/linux/blob/2dde18cd1d8fac735875f2e4987f11817cc0bc2c/include/net/sock.h#L1033) in Linux. The `sk_ack_backlog` represents the current number of acknowledged connections, while the `sk_max_ack_backlog` represents our backlog value. For example, when the backlog is set to 3 and the fourth connection arrives, the queue already holds 3 entries; the check `sk_ack_backlog > sk_max_ack_backlog` (`3 > 3`) evaluates to false, so the queue is not considered full and one more connection is allowed.
```c
static inline bool sk_acceptq_is_full(const struct sock *sk)
{
return READ_ONCE(sk->sk_ack_backlog) > READ_ONCE(sk->sk_max_ack_backlog);
}
```
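The off-by-one follows directly from the strict `>` comparison. A tiny Rust model of the same check makes it explicit:

```rust
// Model of the kernel's accept-queue fullness check — note the strict '>'.
fn acceptq_is_full(ack_backlog: u32, max_ack_backlog: u32) -> bool {
    ack_backlog > max_ack_backlog
}

fn main() {
    let backlog = 3;
    // With 3 connections already queued, the queue is NOT considered full,
    // so a 4th connection is still admitted: backlog + 1 in total.
    assert!(!acceptq_is_full(3, backlog));
    assert!(acceptq_is_full(4, backlog));
    println!("backlog {backlog} admits {} connections", backlog + 1);
}
```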
### Handling a Full Accept Queue

As we can see from our program's logs, the client successfully connects one more time than the specified backlog value; for example, if the backlog is 4, 5 connections succeed. On the final, failing attempt, the client and server still go through the handshake process: the client sends the `SYN`, the server responds with `SYN-ACK`, and the client sends an `ACK` back, thinking the connection is established.
Unfortunately, if the server's `accept queue` is full, it cannot move the connection from the `SYN queue` to the `accept queue`. The server's behavior when the accept queue overflows is primarily determined by the `net.ipv4.tcp_abort_on_overflow` option.
Normally, this value is set to `0`, meaning that when there is an overflow, the server will discard the client's `ACK` packet from the third handshake, acting as if it never received it. The server will then retransmit the `SYN-ACK` packet. The number of retransmissions the server will attempt is determined by the `net.ipv4.tcp_synack_retries` option.
> We can reduce or increase the `tcp_synack_retries` value. By default, this value is set to 5 on Linux.
In our final attempt, when the retransmission limit is reached, the connection is removed from the `SYN queue`, and the client receives an error. Again, this is an oversimplified overview of the process. The actual process is much more complex and involves additional steps and scenarios. However, for our purposes, we can keep things simpler.
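Since each retransmission roughly doubles the wait, we can estimate how long a half-open entry survives in the SYN queue. A back-of-the-envelope Rust sketch (assuming an initial retransmission timeout of 1 second, the typical Linux default):

```rust
// Rough estimate of how long a half-open connection lingers in the SYN
// queue: SYN-ACK retransmissions back off exponentially (1s, 2s, 4s, ...).
fn synack_retry_window_secs(initial_timeout: u64, retries: u32) -> u64 {
    (0..=retries).map(|i| initial_timeout << i).sum()
}

fn main() {
    // Default net.ipv4.tcp_synack_retries = 5 on Linux.
    println!(
        "~{}s until the half-open entry is dropped",
        synack_retry_window_secs(1, 5) // 1 + 2 + 4 + 8 + 16 + 32 = 63
    );
}
```

With the defaults, that works out to roughly a minute before the kernel gives up on the half-open connection.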
By running the command `netstat -s | grep "overflowed"`, we can check for occurrences of socket listen queue overflows. For example, in my case, the output indicates that **"433 times the listen queue of a socket overflowed,"** highlighting instances where the system's accept queue was unable to handle incoming connection requests. This can lead to potential connection issues.
```bash
$ netstat -s | grep "overflowed"
433 times the listen queue of a socket overflowed
```
### Testing Backlog Behavior on Mac
If we execute the same program on a Mac, we will observe some differences:
- As mentioned, for other systems, the minimum acceptable value is `0`. Using this value gives us the default setting.
- The default value for `SOMAXCONN` is `128`, which can be retrieved using the command `sysctl kern.ipc.somaxconn`.
- Lastly, on a Mac, we don't encounter the issue of allowing `Backlog + 1` connections. Instead, we accept the exact number of connections specified in the backlog.
```bash
$ cargo run --bin test-backlog
Backlog 0 successful connections: 128 - attempts: 129
Backlog 1 successful connections: 1 - attempts: 2
Backlog 2 successful connections: 2 - attempts: 3
Backlog 3 successful connections: 3 - attempts: 4
Backlog 4 successful connections: 4 - attempts: 5
```
### SYN Queue Size Considerations
As the `listen` man page states, the maximum length of the queue for incomplete sockets `(SYN queue)` can be set using the `tcp_max_syn_backlog` parameter. However, when `syncookies` are enabled, there is no logical maximum length, and this setting is ignored. Additionally, other factors influence the queue's size. As mentioned earlier: `its size is determined by a system-wide setting`.
You can find the code for this example and future ones in this [repo](https://github.com/douglasmakey/socket_net).
## To Conclude
In this article, we explored the role of TCP queues in connection management, focusing on the `SYN queue` and the `accept queue`. We tested various backlog configurations and examined how different settings impact connection handling. Additionally, we discussed system parameters like `tcp_max_syn_backlog` and `net.ipv4.tcp_abort_on_overflow`, and their effects on queue behavior.
Understanding these concepts is crucial for optimizing server performance and managing high traffic scenarios. By running practical tests and using system commands, we gained insights into how to handle connection limits effectively.
As we move forward, keeping these fundamentals in mind will help ensure robust and efficient network communication.
Thank you for reading along. This blog is a part of my learning journey and your feedback is highly valued. There's more to explore and share regarding socket and network, so stay tuned for upcoming posts. Your insights and experiences are welcome as we learn and grow together in this domain. **Happy coding!** | douglasmakey |
1,896,491 | Project Stage-3: Error Analysis | Hi folks!!! Welcome to my series of Summer Project blogs. As part of the SPO600 2024 Summer Project,... | 0 | 2024-06-21T22:29:50 | https://dev.to/yuktimulani/project-stage-3-error-analysis-3lm4 | Hi folks!!! Welcome to my series of Summer Project blogs. As part of the SPO600 2024 Summer Project, my objective in Stage 3 is to resolve Automatic Function Multi-Versioning (AFMV) error messages within the GNU Compiler Collection (GCC). AFMV is a cutting-edge feature designed to optimize function calls across different micro-architectural levels, making software more adaptable and efficient on various hardware platforms.
The goal of this stage is to ensure that the AFMV implementation in GCC functions seamlessly by identifying, analyzing, and resolving error messages that arise during its use. This post will introduce the scope of Stage 3, highlight the challenges of dealing with AFMV error messages, and outline the approach for resolving these issues.
## Error Messages Context:
Automatic Function Multi-Versioning (AFMV) allows for the creation of multiple versions of a function, optimized for different hardware features, without requiring changes to the source code. This capability significantly enhances software performance by leveraging the latest hardware optimizations automatically.
However, implementing AFMV comes with its own set of challenges. One major issue is the occurrence of error messages that can be cryptic and difficult to understand, making it challenging to troubleshoot and fix problems. These errors can arise from various sources, such as syntax issues, misconfigurations, or conflicts with other compiler features.
The error message that is troubling us looks like this:

## Initial Challenges:
The main challenge in this stage is the complexity of the AFMV feature and the intricacies of the GCC codebase. Error messages in GCC can be quite specific and may involve multiple components of the compiler. Understanding the context in which these errors occur is crucial for effective resolution.
Additionally, some error messages may not directly point to the root cause of the issue, requiring a thorough investigation and sometimes even modifying the compiler's internal mechanisms to achieve a fix. This stage is not just about fixing bugs but also understanding how AFMV integrates into the broader GCC architecture.
## Goals for Stage 3:
The primary goals for this stage are:
1. **Identify Error Messages**: Categorize and document the various AFMV-related error messages encountered during compilation.
2. **Analyze Root Causes**: Perform a deep dive into the causes of these errors, identifying any common patterns or recurring issues.
3. **Resolve Issues**: Implement fixes for these errors, ensuring that AFMV functions correctly and efficiently within GCC.
4. **Document Solutions**: Provide detailed documentation of the errors and their resolutions to aid future development and troubleshooting efforts.
5. **Integrate and Test**: Ensure that the solutions are integrated into the main codebase and thoroughly tested for robustness.
## Summary:
In the following blog posts, I will delve deeper into the specifics of AFMV errors, the strategies used to resolve them, and the testing processes involved. This journey is not just about solving problems but also about enhancing our understanding of GCC and contributing to a more efficient and flexible compiler.
Stay tuned as we explore the fascinating world of AFMV and GCC error handling in more detail. | yuktimulani | |
1,896,480 | Interceptors nestjs | How do I use Interceptors in nestjs? | 0 | 2024-06-21T21:56:19 | https://dev.to/azuli_jerson_86d70f94325d/interceptors-nestjs-2m7e | How do I use Interceptors in nestjs? | azuli_jerson_86d70f94325d | |
1,895,955 | Revolutionizing Resume Screening: How ATS-B-Gone Can Keep The Buzzwords Away | Revolutionizing Resume Screening: Beyond ATS Limitations With so much happening across... | 0 | 2024-06-21T22:27:32 | https://dev.to/brandonbaz/revolutionizing-resume-screening-how-ats-b-gone-can-keep-the-buzzwords-away-1pg2 | ### Revolutionizing Resume Screening: Beyond ATS Limitations
With so much happening across nearly all industries, I’ve noticed a trend: highly skilled professionals are being approached by managers or recruiters for positions they didn’t even know they’d applied for because their resumes got tossed by ATS systems. Keywords, ATS format, one page—why would companies that value culture, passion, and other intangibles want someone with 15 years of experience to cram it all onto one page?
From what I see, they actually don’t. It’s just an unfortunate time constraint, hence the reliance on ATS scanners.
**The Problem with ATS**
Personally, I’ve always thought ATS systems were inherently flawed. In school, you fail a research paper if you have too many words matching exactly, but you fail to get past an ATS scanner if you don’t have enough. The only answer up to now has been to tailor your resume to each position, ending up with 532 versions of yourself and a new cloud storage subscription. Then you forget which version to send, ending up with 1/532 of you.
**An Innovative Solution**
There’s a better way. I’m using a “mini-service” I created using Natural Language Processing (NLP) and Natural Language Understanding (NLU) in my workflow to parse the content of articles in my feed. After too many “I hate DevOps” articles got saved as relevant, I realized I needed a smarter system. All I learned was that a lot of people don’t like DevOps—not what I had in mind.
This same concept can be applied to resumes. By drawing out the sentiment and context, we can find the measurable impacts and categorize them beyond just keywords. This also opens up opportunities to make those interest sections and awards measurable, aligning them with the company’s culture and goals for a more accurate representation of who to pass to those amazing people in recruiting.
**The Result**
The result? Time saved across the board in a way that’s actually impactful and not frustrating. I have a prototype that shows a lot of promise—if you close your eyes and don’t look at the unstyled dashboard, of course.
Will it catch on? I don’t know. But if everyone only tried things they were sure would succeed, I wouldn’t be writing this, considering nobody would be applying to jobs in the first place.
**Showcasing the Tool**
Imagine a world where your resume isn’t just a static document but a dynamic representation of your professional journey. My mini-service leverages the power of NLP and NLU to sift through the noise and highlight what truly matters—your achievements, your skills, and how they align with a company’s ethos.
This tool doesn't stop at the resume. It can analyze cover letters, LinkedIn profiles, and even your portfolio projects to provide a holistic view of your professional persona. It’s like having a personal branding assistant that ensures you’re always putting your best foot forward.
For recruiters, this means no more sifting through piles of resumes that all look the same. Instead, they get a curated list of candidates whose values and skills match their company’s needs. It’s akin to dating—finding the perfect match without the awkward first date.
So, while the dashboard might currently look like it was designed by someone who thinks Comic Sans is a good idea, the potential is enormous. With a bit of polish and a lot of enthusiasm, this tool could revolutionize the way we think about hiring.
And who knows? Maybe one day, we’ll look back at ATS systems the same way we look at dial-up internet—an interesting relic of a bygone era. | brandonbaz | |
1,896,477 | My First Scrimba Project | Technical writing has been my chosen profession for the last three years. My interest started seven... | 0 | 2024-06-21T22:13:29 | https://dev.to/eryn_bing_d82c364c570aff3/my-first-scrimba-project-1kn7 | Technical writing has been my chosen profession for the last three years. My interest started seven years ago when I was a senior in high school. After researching writing careers in the bls.gov, the only thing I knew about technical writing was that it included writing manuals. Despite my skint knowledge in the field, I decided my profession and modeled my higher education to enhance the skills I would need to be an effective communicator.
---
I did not know about the variety in the types of writing I would produce or the variety of tools. I mistakenly thought that everything could be done with word processors such as Microsoft Word and Google Docs. To date, I've used the following tools to help me write effective, clear, and concise communication:
- Grant research software
- Snag It (picture editing software)
- Jira (project management tracking tool)
- Confluence (knowledge base site)
- Adobe Acrobat (document builder)
- Adobe Illustrator (document builder)
- Datayze.com (ease of reading editor)
I am so grateful for my professional and educational experiences that have allowed me to explore these tools and many others. My understanding of technical writing has deepened and I am excited for what the future of my career holds.
For more specific information on how I used these tools, please see the link below for my internship presentation.
```
{
youtubeChannel: 'https://www.youtube.com/watch?v=M2w5YT5vMus'
}
```

| eryn_bing_d82c364c570aff3 | |
1,896,487 | Maiu Online - Browser MMORPG #indiegamedev #babylonjs Ep22 - Map editor | Hey Currently I’m working on collisions detection. For now I’m using SAT 2D and later in future maybe... | 0 | 2024-06-21T22:13:11 | https://dev.to/maiu/maiu-online-browser-mmorpg-indiegamedev-babylonjs-ep22-map-editor-3f3f | babylonjs, indiegamedev, mmorpg, devlog | Hey
Currently I'm working on collision detection. For now I'm using 2D SAT, and later I may change it to something different (maybe a navmesh, we'll see…). To make creating the map and collision data a little more automatic, I prepared a very simple map editor, which prints the whole map config (collision data, etc.) to the output in JSON format.
Collision detection still has a few bugs, mostly related to calculating the collision response; when I fix them, I'll share the next video.
{% youtube Cd32A2cxyVI %} | maiu |
1,896,482 | Trouble in implementing pagination in django rest framework | https://stackoverflow.com/questions/78654425/trouble-in-implementing-pagination-in-django-rest-framew... | 0 | 2024-06-21T21:58:58 | https://dev.to/abhaylearns/trouble-in-implementing-pagination-in-django-rest-framework-46m | https://stackoverflow.com/questions/78654425/trouble-in-implementing-pagination-in-django-rest-framework | abhaylearns | |
1,896,481 | My Flow and Productivity has Improved with the Simplicity of Neovim | I don't think it's a surprise if you've been following along with me lately that I've pivoted my... | 0 | 2024-06-21T21:56:46 | https://www.binaryheap.com/productivity-and-flow-improved-with-neovim/ | programming, development | I don't think it's a surprise if you've been following along with me lately that I've pivoted my daily programming setup to Neovim. What might surprise you is that I started my career working on HP-UX and remoting into servers because the compilers and toolchains only existed on those servers I was building for. When working through a terminal, you've got to leverage a terminal editor or do something with x11 and that was just super clunky. Enter my first experience with Vi. I loved the motions, the simplicity, and the ubiquity of it. But those are things that have been talked about in great detail. What I want to explore in this article is my experience in moving to Neovim.
## Why the Change
I've been working with code going back into the mid-90s across multiple operating systems and too many languages to keep track of. The question that many people have asked me is why would you "go back" to a terminal-based workflow. Almost as if to say it's a step backward. With tools like VSCode and Jetbrains (insert any of their editors), why jump into something that is a community port of something so old? Before I jump into the process of converting, here are my top 3 reasons.
1. Simplification of my tooling. I need a Terminal Emulator and Neovim. With that setup, I can work on Rust, Go, TypeScript, HTML, YAML, and any language or markup that I encounter in my day-to-day job. No VSCode for this and Jetbrains for that. No switching between keybindings and the mouse. Now, I do have many plugins that enhance my Neovim experience, but at its core, my setup is 2 things.
2. Improved flow and productivity. By staying in my terminal to work with Git, code, compilation, filesystems, and other pieces of my chain, I keep myself from having to jump out to another tool. The way my brain works, every time I jump, I lose flow. I lose my place. When I find my flow, I'm much more productive.
3. Stripped-down experience. In a world where code documentation, auto-completion, and AI-code generation are taking over, I'm going back to just crafting things with hand tools. Sure, I have automation in my Neovim setup, but for me, when I have to hit the docs to read code, and then type that code in, it sticks in my brain better. This might not apply to you, but I was gifted with a great memory that is close to photographic. My memory further cements when I see, and then type, write, or speak. By passing on some of the more fancy automation, I find myself learning more. I might not write as much code, but the code I write feels better crafted. No judgment here whatsoever, I'm simply stating how coding with Neovim makes me feel.
## What was My Experience Like?
With all of the above as the foundation for my move, where did I start? Honestly, not quite at the bottom but pretty close to it with TJ DeVries' [KickStart](https://github.com/nvim-lua/kickstart.nvim) project. I went down the path of wanting to understand exactly how my setup is working and only add in the plugins that I wanted. Looking back, I just didn't have the time to fully understand exactly what this meant. However, the act of failing with KickStart did give me some solid background in how Neovim, Lua, and Lazy (the plugin manager) work.
I honestly was at a breaking point a few weeks into my conversion and reached out to my friend [Rakesh](https://awsmantra.com/) that I was ready to give up. As much as I enjoyed the [Neovim Motions](https://www.barbarianmeetscoding.com/boost-your-coding-fu-with-vscode-and-vim/moving-blazingly-fast-with-the-core-vim-motions/), I just couldn't live with the discombobulated experience that I was getting. He rightly recommended that I give it another try, but this time with a prebuilt configuration. The Neovim world calls them "distros".
My first attempt at a distro was [NvChad](https://nvchad.com/). NvChad is well-liked, polished, and a really good place to start. I know as of this writing, Rakesh is still flying high with NvChad and enjoys it very much. Something about it felt too proprietary though. Custom loaders, dealing with packages in certain ways, and that sort of thing. I wanted something prebuilt but felt more like KickStart in that plugin adds and configurations felt more Neovim "native".
This leads me to where I am now with using [LazyVim](https://www.lazyvim.org/). Landing in LazyVim was exactly what I needed to start building my developer flow and productivity. It has some nice defaults, but the extensions and adding of plugins feel closer to what my journey started within KickStart. I'd like to spend the balance of the article walking through my workflow and favorite plugins.
## The Terminal
I don't want to skip over some foundational parts that are key to my development workflow and productivity. Neovim is the editor, but it starts with my terminal emulator AND terminal multiplexer.
I was a heavy and long-term user of iTerm2 coming into this change. I figured it would serve me just fine. And it did until it didn't. I noticed it starting to get a little bogged down as I was running tmux and now Neovim. More on tmux in a minute.
I tried Kitty. There was success but ultimately font rendering just felt off. I then moved over to Alacritty. Loved it, but found the configuration to be a little strange. So on the prompting of some other friends ([AJ](https://www.binaryheap.com/wp-content/uploads/2024/06/which-key.png) and [Darko](https://x.com/darkosubotica)), Wezterm is where I landed. It honestly feels like a blend of all 3 of the previous ones I listed, yet still super snappy.
## Multiplex or Multiverse?
I said multiplexer didn't I? [tmux](https://github.com/tmux/tmux/wiki) to be exact. Another game-changer for me. The beauty of using tmux is that I can create sessions, panes, and windows that can then be moved, split, detached, and everything in between. I also have Neovim shortcuts built in so that I can easily move with `hjkl` which if you know Neovim, that's life.

With panes, I can split my terminal however I want, navigate between, hide, zoom, and dispatch right from the keyboard. Super powerful.
And if I want to have multiple windows going, I can switch with a keystroke that cycles through previous and next, by window number, or I get a selection screen.

And as I mentioned, I can navigate with Neovim keybindings.
With tmux and Wezterm, I'm in a position to get my editor fired up.
## Neovim Tour and Plugins
This article could get pretty lengthy if I went all through my setup, configuration, and plugin usage. So the plan is, that I'm [sharing](https://github.com/benbpyle/dot-files) my dot-files and will touch on a few of the pieces I use or love the most.
What I enjoy the most about using Neovim is that my fingers are glued to the keyboard. It's getting to the point that I'm not even having to think about which key pairs do what. I can't overstate how much that improves my flow and productivity. I don't want this to be a big Neovim vs VSCode vs IntelliJ debate as I know they all support Neovim bindings, but having specific keys for specific tasks that don't conflict with my Mac is so freeing.
So let's get into a tour, starting with the [Outline](https://github.com/hedyhli/outline.nvim) plugin.
### Outline

What I like about Outline is that I have a nice heads-up view of my file. I can see functions, structs, fields, properties, or whatever your language calls them. There's nothing magical about the plugin, but it does a great job of doing just what it says. Acting as an Outline. I have mine bound to the simple `<leader>o`. In Neovim, the `leader` key is a special key that you map to kick off commands. For me, I have `leader == space`.
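With LazyVim, wiring this up is a tiny plugin spec. A sketch of roughly what sits in my config (the `<leader>o` key and empty options are my own choices, not plugin defaults):

```lua
-- lua/plugins/outline.lua — minimal LazyVim spec for outline.nvim
return {
  "hedyhli/outline.nvim",
  cmd = "Outline",
  keys = {
    { "<leader>o", "<cmd>Outline<CR>", desc = "Toggle code outline" },
  },
  opts = {}, -- defaults are fine to start with
}
```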
### Trouble
In a similar spirit to Outline, there is a plugin called [Trouble](https://github.com/folke/trouble.nvim). This was created and maintained by the creator of LazyVim as well. Think of Trouble as having two functions for me.
Function one is to interact with the Language Server Protocol (LSP) in a way that yields diagnostics. Those error messages, warnings, and other items that show up. Trouble makes them available in a single window. A much broader topic is what is an LSP. Think of it as the brains behind many of the coding actions you find useful like symbol lookup, reference finding, and filling in your favorite struct of class.
The next thing that Trouble does is provide a view into all of the places a particular symbol is used. This is determined by my cursor position and looks like the image below.

With code organization covered, let's move into some functionality.
### Telescope
I don't think many Neovim users could live without [Telescope](https://github.com/nvim-telescope/telescope.nvim). Maintained by TJ DeVries, this is a fuzzy find, LSP integrator, and so many other things. I use it constantly to find open buffers, grep my codebase, look through Git logs, and pull up references. The image below shows how I'm using it to find Workspace Symbols.

I could spend an entire article on just Telescope as it's something I could not live without. The best I can compare it to is the finder in IntelliJ that greps symbols, types, and other items. Only this can search through so much more.
### Code Completion and Types
Sometimes it's helpful to have code completion and an easy view of the types that you are working with. Coming from an IDE, these were things that I enjoyed and while I'm not super reliant on them, I still do use them.
Like many using Neovim, I'm leveraging the [Nvim-Cmp](https://github.com/hrsh7th/nvim-cmp) plugin. With this plugin, I get the snippets, code completion, and documentation on functions that I'm used to that help me out when my brain slows.

And while code completion is nice, sometimes I just want to see inlay type hints next to my variables. And with the latest version of Neovim, that's possible.

And if I hit `<leader>uh`, they disappear.

So many options.
### Git, Testing, and Debugging
The last three parts of my setup that I want to dive into speak specifically to functions that are integral to my build process.
1. Git
2. Unit Tests
3. Debugging
#### Git
Using tmux, I could just pivot into a shell whenever I want to work with Git, and that would be fine. But instead I'm using the Neovim plugin for [LazyGit](https://github.com/kdheepak/lazygit.nvim), which takes advantage of the [LazyGit UI](https://github.com/jesseduffield/lazygit).

What I like about using LazyGit is that I can stay in my editor, and use my normal keybindings to navigate the buffer just like I do in every other buffer I work with. This whole journey wasn't about feature for feature, but how I could increase my flow and productivity. And staying in the terminal does that for me.
#### Unit Tests
What developer flow is complete without a unit test runner? For that work, I rely on [Neotest](https://github.com/nvim-neotest/neotest). Neotest launches a Neovim buffer that sits on the side of my terminal. I don't have to pop up the summary. I can trigger Neotest in the background, get some notifications, and then move on. It also feels just like the other buffers I've mentioned above that can slide in and out as I need them.

#### Debugging
The final piece of the experience for me was "Could I use a debugger in Neovim?". This was a big thing for me because I use a lot of [Rust](https://www.binaryheap.com/serverless-rust-developer-experience/) and Golang, and having a debugger available is critical. The Debug Adapter Protocol (DAP) can plug into popular debuggers like LLDB or GDB, which can then be managed by a plugin called [DAP UI](https://github.com/rcarriga/nvim-dap-ui).
The UI is exactly what you'd think it would be. Symbols, threads, watches, breakpoints, and then the common Continue, Step Over, Step Out that you would be accustomed to. The below shows how I'm using it to debug a Rust Web API.

## Wrap Up Thoughts
I feel like I've written too little, but my editor is showing me that I'm at the normal length of the articles I've been producing lately. I could keep going, go back up, and dive deeper into the plugins, but I'm going to stop here. The point of this was to introduce my current setup, why I chose it, and what I'm doing with it. I am not using any other editor for my coding or debugging tasks. I still use VSCode to write my blog, because I like the Markdown preview mode and the Grammarly feature. I am toying with using LaTeX and Neovim and looking into a Markdown plugin, but I bounce around so much while writing that my hand reaches for the mouse in natural ways. Maybe I'll switch in the future, but I'm not sure.
My closing thought though is that in a world that is looking for more instant gratification, more code, more output, and using AI to bounce prompts and thoughts off of, I like the feeling that I can read and write my code on purpose without distractions. Generated code is a distraction to me. I've said it before, but the act of learning is why I like coding, not the act of producing. Sure, I love to finish things. But I love coding because it's an art to me. There is science for sure, but I like writing code the way I like writing books and articles. I'm not in a rush to complete it and move on to the next thing. I often romanticize the work I do. It's just who I am.
And I'd be remiss if I didn't include links to my font and color scheme in case anyone is looking to make the switch.
* Font - [JetBrains Mono Nerd Font](https://www.programmingfonts.org/#jetbrainsmono) -- I can't get away from JetBrains!
* Colors - The soothing pastels for [Catppuccin](https://github.com/catppuccin/nvim)
And the last thing, if you ever get lost, [Which-Key](https://github.com/folke/which-key.nvim) is always there to help!

Thanks for reading and happy building! | benbpyle |
1,896,478 | 💾 Database Management Systems (DBMS) Explained | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-21T21:52:45 | https://dev.to/aviralgarg05/database-management-systems-dbms-explained-44ia | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
DBMS 💾 is software that manages and organizes data in databases 📚. It allows users to store, retrieve, update, and delete data efficiently 🗃️. With features like data security 🔒 and backup 📦, DBMS ensures data integrity and accessibility for applications. | aviralgarg05 |
1,896,475 | How make Auth Nestjs | Could anyone help me with implementing Auth JWT in a nestjs application? | 0 | 2024-06-21T21:48:48 | https://dev.to/azuli_jerson_86d70f94325d/how-make-auth-nestjs-3d7f | Could anyone help me with implementing Auth JWT in a nestjs application? | azuli_jerson_86d70f94325d | |
1,896,474 | Day 976 : Manifest | liner notes: Professional : Not a bad day. Took some training. Applied for a Visa, again. haha... | 0 | 2024-06-21T21:44:11 | https://dev.to/dwane/day-976-manifest-llo | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : Not a bad day. Took some training. Applied for a Visa, again. haha Responded to some community questions. Worked on refactoring a feature in an app to use a new SDK.
- Personal : Last night, I went through tracks and put together a playlist for the radio show. I set up the social media posts for the projects I picked up on Bandcamp. Created a starter project to upgrade a previous side project. Ended the night watching the new episode of "The Boys".

Going to finish putting together the radio show along with looking up social media accounts for the artists. I keep saying it but I need to finish the logo for my side project. Also, need to print out some mailing labels to ship some packages tomorrow. Maybe watch some episodes of "Demon Slayer" to end the night. So, radio show on Saturday (https://kNOwBETTERHIPHOP.com) and Sunday (https://untilit.works). Trying to manifest a productive weekend. haha
Have a great night and weekend!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube 6AgxwjgI4E0 %} | dwane |
1,896,473 | MysterySkulls Crud Nestjs | I'm having difficulty creating a crud in nestjs and I need help, I bought a course and it turned out... | 0 | 2024-06-21T21:44:06 | https://dev.to/azuli_jerson_86d70f94325d/mysteryskulls-crud-nestjs-2km4 | nestjs, help | I'm having difficulty creating a crud in nestjs and I need help, I bought a course and it turned out that the guy was a crook and I was scammed so I would only like some help if it's not too problematic
I would like to receive a tip or assistance in creating a crud | azuli_jerson_86d70f94325d |
1,896,471 | What do a bug screen and a firewall have in common? | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-21T21:38:40 | https://dev.to/yowise/what-do-a-bug-screen-and-a-firewall-have-in-common-olg | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
A firewall is like a bug screen, essential to defend your space against vermin intruders when you open the window. To save yourself from hosting a banquet of malware creepy-crawlies, secure your place. Use a firewall.
## Additional Context
The post is meant to emphasize the importance of fortifying the security posture in our lives.
| yowise |
1,896,470 | Preparing for AWS Cloud Practitioner CLF-C02: A Beginner’s Guide to Getting Started | Introduction The AWS Cloud Practitioner CLF-C02 exam is a foundational certification... | 0 | 2024-06-21T21:35:29 | https://dev.to/rahul_chandra/preparing-for-aws-cloud-practitioner-clf-c02-a-beginners-guide-to-getting-started-370o | aws, cloud, productivity, learning | ## **Introduction**
The AWS Cloud Practitioner CLF-C02 exam is a foundational certification offered by Amazon Web Services (AWS), designed for individuals who want to demonstrate a basic understanding of the AWS Cloud. As someone who has just embarked on this journey, I’m excited to share my initial steps, study plans, and strategies to help others who are also starting their preparation.
## **Understanding the AWS Cloud Practitioner CLF-C02 Exam**
The AWS Cloud Practitioner CLF-C02 exam consists of multiple-choice and multiple-response questions. It covers four key domains:
1. **Cloud Concepts**
2. **Security and Compliance**
3. **Technology**
4. **Billing and Pricing**
Knowing the structure and format of the exam is essential for creating an effective study plan. Each domain has a specific weight, with Cloud Concepts and Security being heavily emphasized.
## **My Initial Motivation and Study Schedule**
**Initial Motivation:** My motivation to take the AWS Cloud Practitioner exam stemmed from a desire to understand cloud computing better and enhance my career prospects. AWS certifications are highly respected in the industry, making the AWS Cloud Practitioner a great starting point.
**Setting Up a Study Schedule:** To start my preparation, I created a study schedule that allows me to balance work, personal life, and study time. Allocating specific hours each day for study helps maintain consistency. My schedule includes reading, watching videos, and hands-on practice.
## **Essential Study Resources**
**AWS Certified Cloud Practitioner Exam Guide:** I began by reviewing the official exam guide, which outlines the domains, objectives, and the percentage of questions in each domain. This guide serves as my roadmap for preparation.
**AWS Training and Certification Portal:** AWS offers free digital training courses that cover all the exam topics. These courses are an excellent way to start my preparation.
**AWS Whitepapers and FAQs:** AWS whitepapers provide in-depth knowledge about AWS services, best practices, and architecture. The FAQs on the AWS website help clarify many of my doubts.
## **Online Learning Platforms**
**A Cloud Guru:** A Cloud Guru offers a comprehensive course for the AWS Cloud Practitioner exam. The course includes video lectures, quizzes, and hands-on labs.
**Udemy:** Udemy has several high-quality courses for the AWS Cloud Practitioner exam. I found the practice exams on Udemy particularly helpful for gauging my readiness.
**Coursera:** Coursera offers courses in collaboration with top universities. The AWS Fundamentals specialization is a great resource for understanding AWS services.
## **Study Groups and Forums**
**AWS Discussion Forums:** Engaging with the AWS community through discussion forums helps me get insights and tips from others who are also preparing for the exam.
**Reddit and Other Community Platforms:** Platforms like Reddit have active communities where you can ask questions, share resources, and get support from fellow learners.
## **Practice Exams and Mock Tests**
**Importance of Practice Exams:** Although I have just started, I plan to take practice exams as I progress. Practice exams are crucial for understanding the exam format and for time management.
**Recommended Platforms for Practice Tests:** I intend to use practice tests from A Cloud Guru, Udemy, and Whizlabs. These platforms offer high-quality practice questions that closely mimic the actual exam.
## **Hands-On Experience**
**Using the AWS Free Tier:** The AWS Free Tier allows me to experiment with various AWS services without incurring costs. This hands-on experience is invaluable for understanding how AWS services work.
**Practical Labs and Exercises:** Websites like Qwiklabs and AWS Skill Builder offer practical labs that provide real-world scenarios for using AWS services.
## **Time Management Strategies**
**Balancing Study Time with Other Responsibilities:** Effective time management is key to balancing study with work and personal life. I dedicate specific time slots each day to study, ensuring a consistent routine.
**Effective Study Techniques and Tips:** Techniques such as active recall, spaced repetition, and summarizing information in my own words help reinforce my learning.
## **Staying Updated with AWS Services**
**Following AWS Blogs and Announcements:** AWS frequently updates its services and introduces new features. Following the AWS News Blog and other official channels keeps me informed about the latest developments.
**Subscribing to AWS Newsletters:** AWS newsletters provide updates on new services, feature releases, and upcoming events. These newsletters are a valuable resource for staying current.
## **Tips for Exam Day**
**What to Bring and What to Expect:** On exam day, I plan to ensure I have a valid ID and arrive at the test centre early. Familiarizing myself with the exam rules and procedures beforehand will help manage any last-minute nerves.
**Managing Exam Day Nerves:** Staying calm and confident is crucial. Practicing deep breathing techniques and taking short breaks if needed during the exam can help manage stress.
## **Post-Exam Steps**
**What to Do After Passing the Exam:** Once I pass the exam, I'll celebrate my achievement! Updating my resume and LinkedIn profile with my new certification is a must. Exploring other AWS certifications will be the next step in my learning journey.
**Exploring Further AWS Certifications:**
The AWS Cloud Practitioner certification is a stepping stone to more advanced certifications like AWS Solutions Architect Associate, AWS Developer Associate, and more.
## **Common Challenges and How to Overcome Them**
**Dealing with Difficult Topics:** Some topics may be more challenging than others. I plan to tackle difficult topics by breaking them down into smaller, manageable parts and revisiting them regularly.
**Staying Motivated Throughout the Preparation:** Staying motivated can be challenging, especially with a busy schedule. Setting small, achievable goals and rewarding myself for meeting them helps maintain motivation.
## **Additional Resources**
**Books and eBooks:** Books like "AWS Certified Cloud Practitioner Study Guide" by Ben Piper and David Clinton provide detailed explanations and practice questions.
**YouTube Channels and Video Tutorials:** YouTube channels like FreeCodeCamp and AWS Tutorials offer free video tutorials that cover exam topics in detail.
## **Conclusion**
Starting my preparation for the AWS Cloud Practitioner CLF-C02 exam has been an exciting journey. By following a structured study plan, utilizing various resources, and gaining hands-on experience, I am confident that I will be well-prepared for the exam. Remember, the journey to certification is a valuable learning experience that enhances your understanding of cloud computing and AWS services. Good luck!
## **FAQ Section**
1. **What is the AWS Cloud Practitioner CLF-C02 exam?**
- The AWS Cloud Practitioner CLF-C02 exam is a foundational certification that validates your understanding of the AWS Cloud.
2. **What are the key domains covered in the CLF-C02 exam?**
- The exam covers Cloud Concepts, Security and Compliance, Technology, and Billing and Pricing.
3. **What study resources are recommended for the AWS Cloud Practitioner exam?**
- Recommended resources include the AWS Certified Cloud Practitioner Exam Guide, AWS Training and Certification Portal, and AWS whitepapers.
4. **How important is hands-on experience for the AWS Cloud Practitioner exam?**
- Hands-on experience is crucial as it helps you understand how AWS services work in real-world scenarios.
5. **What should I expect on the day of the AWS Cloud Practitioner exam?**
- Expect to go through a check-in process, follow exam rules, and manage your time effectively during the multiple-choice and multiple-response questions.
| rahul_chandra |
1,896,469 | React Router | React Router DOM | What is React Router? Traditional multi-page web apps have multiple view files for... | 0 | 2024-06-21T21:33:00 | https://dev.to/geetika_bajpai_a654bfd1e0/react-router-react-router-dom-1blo | ## What is React Router?
Traditional multi-page web apps have multiple view files for rendering different views, while modern Single Page Applications (SPAs) use component-based views. This necessitates switching components based on the URL, which is handled via routing. Although not all development requirements in React need a third-party library, routing is complex and requires a pre-developed solution for efficient app development.
React Router is the most popular and fully-featured routing library for React SPAs. It is lightweight, has an easy-to-learn API, and offers well-written documentation, making it a go-to choice for implementing routing in React apps. Now maintained by Remix, React Router benefits from active development and support.
Unlike traditional websites where routing requests are sent to the server, React Router allows SPAs to handle all routing on the frontend, eliminating the need for additional server requests for new pages. Routing enables seamless navigation between different parts of an application when a user enters a URL or interacts with elements like links or buttons, thus playing a crucial role in building responsive and user-friendly web applications.
## Use Case of React Router
To create a React application using create-react-app, open your preferred command line and type:

The command will create a React application called router-tutorial. Once the application has been successfully created, switch to the app directory using `cd router-tutorial` in your terminal.
To use React Router in your application, you need to install it.

run the command below:
## Create ReactApp
Once your project is set up and React Router is installed as a dependency, open `src/index.js` in your text editor. Import `BrowserRouter` from `react-router-dom` near the top of your file and wrap your app in a `<BrowserRouter>`.

Wrapping `<App />` with `<BrowserRouter>` makes React Router available throughout the application, so any component that needs routing can use it.
- BrowserRouter: For React Router to work, it has to be aware of and in control of your application's location. The `<BrowserRouter>` component makes that possible when you wrap the entire application within it, enabling the use of routes anywhere inside it.
## Rendering Routes
Now that we have successfully set up React Router, to implement routing in our application, we have to render a route (that is, all the pages or components we want to navigate in our browser). To render our routes, we will set up a route for every component in our application.
Routes: Whenever we have more than one route, we wrap them in Routes. When the application's location changes, Routes looks through all of its child Route elements to find the best match and renders that branch of the UI.
Route: A Route maps a URL path to a component. It renders its component whenever a user navigates to the matching path.

This code sets up a basic React application with routing using React Router. It defines two functional components, Home and About, which display simple messages. The BrowserRouter component enables routing, and the Routes and Route components define the paths for the home and about pages. When a user navigates to the root path '/', the Home component is displayed, and when they navigate to '/about', the About component is displayed. This setup provides the structure for a single-page application (SPA) where navigation between different parts of the app is handled on the client side without full page reloads.
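As a rough mental model of what `Routes` is doing (this is an illustrative sketch in plain JavaScript, not React Router's actual implementation), you can picture it as a lookup from the current location to the element that should be rendered:

```javascript
// Toy model of <Routes>: pick the child route whose path matches the
// current location and "render" (here, just return) its element.
// Illustrative sketch only; React Router's real matching is richer.
const routes = [
  { path: '/', element: 'Home Page' },
  { path: '/about', element: 'About Page' },
];

function renderRoutes(routes, location) {
  const match = routes.find((route) => route.path === location);
  return match ? match.element : 'Not Found';
}

console.log(renderRoutes(routes, '/'));      // Home Page
console.log(renderRoutes(routes, '/about')); // About Page
```

In the real app, navigating to `/` or `/about` triggers the same kind of lookup on the client side, with no request to a server for a new page.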
## Multilevel routes

## Another method to achieve nested routes

<u>Nested Routes:</u>
Parent Route (Account): The Route with path 'account' does not have an element prop, indicating it is used to group child routes. This route serves as a container for the nested routes.
Child Routes: Nested routes under 'account':
'/account/profile' renders the Profile component.
'/account/settings' renders the Settings component.
## Dynamic Route
An e-commerce site uses dynamic routing to handle different categories and specific products. As with user-based content sites like blogs or social media, each route has a common structure but displays different content based on the URL parameters. This approach helps in organizing and displaying content efficiently, making the site easy to navigate and manage. This is what's known as dynamic pages and routes.

The useParams hook is used to access the URL parameters. In this case, it captures the userName parameter from the URL.
It defines a route with a dynamic segment (:userName), captures the parameter using the useParams hook, and uses it within a component to display personalized content. This approach is useful for scenarios where the content or component behavior needs to change based on URL parameters, such as user profiles, product details, or other similar dynamic pages.
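Under the hood, a dynamic segment such as `:userName` or `:postId` is a pattern matched against the URL, with the variable part extracted into the params object that `useParams` returns. A simplified plain-JavaScript sketch of that matching idea (not React Router's real matcher, which also handles ranking, wildcards, and more):

```javascript
// Match a route pattern like '/post/:postId' against a concrete URL
// and extract the dynamic segments into a params object.
// Illustrative sketch only.
function matchPath(pattern, pathname) {
  const patternParts = pattern.split('/').filter(Boolean);
  const pathParts = pathname.split('/').filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;

  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      // Dynamic segment: capture the URL value under the param name.
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // static segment mismatch
    }
  }
  return params;
}

console.log(matchPath('/post/:postId', '/post/42'));        // { postId: '42' }
console.log(matchPath('/user/:userName', '/user/geetika')); // { userName: 'geetika' }
console.log(matchPath('/post/:postId', '/about'));          // null
```

This is why the same PostPage component can serve every post: the component stays the same, and only the captured param changes.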
Now let's go ahead and add in our API call.


<u>Home Component</u>
- State Management: Uses the useState hook to store the list of posts.
- Data Fetching: The useEffect hook fetches data from jsonplaceholder.typicode.com/posts when the component mounts and sets the posts state.
- Rendering Posts: Maps over the posts array to create a list of links (NavLink) that navigate to a detailed view of each post. The to prop of NavLink uses a dynamic URL (/post/${post.id}).
<u>PostPage Component</u>
- useParams: Retrieves the postId from the URL.
- State Management: Uses the useState hook to store the fetched post data.
- Data Fetching: The useEffect hook fetches data for the specific post ID (params.postId) whenever params.postId changes.
- Conditional Rendering: Displays a loading message until the data is fetched and then displays the post’s title and body.
<u>Summary</u>
- Home Component: Fetches and displays a list of posts. Each post is a link that navigates to a detailed view of the post.
- PostPage Component: Fetches and displays details of a specific post based on the postId URL parameter.
- React Router: Handles navigation between the home page and the post detail page.
| geetika_bajpai_a654bfd1e0 | |
1,896,468 | ZenCortex Reviews (Hearing Health Support) Is Zen Cortex Recommended By Experts? | ❤️👉 Click here to buy ZenCortex from official website and get a VIP discount👈 OFFER====>... | 0 | 2024-06-21T21:32:00 | https://dev.to/zencortenow/zencortex-reviews-hearing-health-support-is-zen-cortex-recommended-by-experts-1fof | zencortex | ❤️👉 Click here to buy ZenCortex from official website and get a VIP discount👈
OFFER====> https://mwebgraceful.com/9133/246/10/
ZenCortex is a progressive dietary enhancement carefully created to upgrade hearing wellbeing and mental capability normally. With north of 20 plant-based fixings, this exceptional equation intends to safeguard and support your ears through a mix of cell reinforcements, natural concentrates, and fundamental supplements.
Facebook Page> https://www.facebook.com/Zen.Cortex.Buy/
Facebook Page> https://www.facebook.com/Zen.Cortex.Australia.Price/
Facebook Page >https://www.facebook.com/ProstadineAustralia/
Facebook Page> https://www.facebook.com/TryFitspressoAu/
Facebook Page > https://www.facebook.com/FitSpressoWeightLossPills/
Facebook Page> https://www.facebook.com/SugarDefenderBloodAustralia/
Facebook Page>https://www.facebook.com/TryPuraviveAu/
Facebook Page> https://www.facebook.com/ThePuraviveNewZealand/
Facebook Page> https://www.facebook.com/TryPuraviveNigeria/
Facebook Page> https://www.facebook.com/TryPuraviveSouthAfrica/
Facebook Page> https://www.facebook.com/PuraviveCa/
Facebook Page > https://www.facebook.com/ProstadineUnitedKingdom/
Facebook Page > https://www.facebook.com/GET.Sight.Care.Australia/
Facebook Page> https://www.facebook.com/SightCareCa/
Facebook Page> https://www.facebook.com/SightCareNZ/
Facebook Page > https://www.facebook.com/SeroLeanUS/
Facebook Page > https://www.facebook.com/people/SeroLean/61552272659040/
Facebook Page >https://www.facebook.com/NeotonicsIr/
Facebook Page > https://www.facebook.com/JungleBeastProAu/
Facebook Page > https://www.facebook.com/FitSpressoWeightLossPills/
Facebook Page > https://www.facebook.com/projunglebeast/
Facebook Page> https://www.facebook.com/PowerAizen/
Facebook Page >> https://www.facebook.com/TryAquaGuard5/
Facebook Page >> https://www.facebook.com/EyeFortin/
Facebook Page > https://www.facebook.com/Metilean/
Facebook Page> https://www.facebook.com/WarmoolUnitedKingdom/
Facebook Page > https://www.facebook.com/CortexiAu/
Facebook Page > https://www.facebook.com/CortexiJonathanMiller/
https://lvmpd-portal.dynamics365portals.us/forums/support-forum/2f0bc45b-362f-ef11-a296-001dd80621d3
https://lvmpd-portal.dynamics365portals.us/forums/support-forum/5acf4a00-372f-ef11-a296-001dd80621d3
https://lvmpd-portal.dynamics365portals.us/forums/support-forum/77452540-382f-ef11-a296-001dd80621d3
https://lvmpd-portal.dynamics365portals.us/forums/general-discussion/1b84ecc6-382f-ef11-a296-001dd80621d3#d5137b10-392f-ef11-840a-001dd80c0611
https://lvmpd-portal.dynamics365portals.us/forums/general-discussion/cf225a0e-3a2f-ef11-a296-001dd80621d3
https://lvmpd-portal.dynamics365portals.us/forums/general-discussion/d716b9c8-3a2f-ef11-a296-001dd80621d3
https://lvmpd-portal.dynamics365portals.us/forums/general-discussion/3f7fcbaa-3b2f-ef11-a296-001dd80621d3
https://lvmpd-portal.dynamics365portals.us/forums/general-discussion/48ab4857-3c2f-ef11-a296-001dd80621d3
https://lvmpd-portal.dynamics365portals.us/forums/general-discussion/bb0f73c5-3c2f-ef11-a296-001dd80621d3
https://lvmpd-portal.dynamics365portals.us/forums/general-discussion/1fa6c177-3d2f-ef11-a296-001dd80621d3
https://groups.google.com/g/zencortex-safe-results/c/Xynca6I1R-U
https://groups.google.com/g/zencortex-safe-results/c/27ko7f0Qms8
https://groups.google.com/g/zencortex-safe-results/c/oRKchE2P9iI
https://zencortex-purchase-now.jimdosite.com/
https://www.scoop.it/topic/zencortex-reviews-by-zencortecost/p/4153830018/2024/06/20/zencortex-wow-zcx-39
https://medium.com/@zencortecost/2zencortex-reviews-serious-zencortex-update-uncovering-the-truth-before-buy-zcx-49-a16fd27e1bd7
https://sites.google.com/view/zencortex-honest/home
https://groups.google.com/g/zencortex-safe-results/c/8XJCuD-chn0
https://groups.google.com/g/zencortex-safe-results/c/mat1x8gmRQk
https://groups.google.com/a/tensorflow.org/g/tflite/c/7SJ9HQTpW6Q | zencortenow |
1,896,467 | ZenCortex Reviews (Hearing Health Support) Is Zen Cortex Recommended By Experts? | ❤️👉 Click here to buy ZenCortex from official website and get a VIP discount👈 In a world loaded up... | 0 | 2024-06-21T21:29:14 | https://dev.to/zencortenow/zencortex-reviews-hearing-health-support-is-zen-cortex-recommended-by-experts-26c0 | zencortex | ❤️👉 Click here to buy ZenCortex from official website and get a VIP discount👈
In a world loaded up with commotion and interruptions, tracking down a characteristic answer for help sound hearing and smartness resembles finding an unlikely treasure. ZenCortex offers an exceptional equation intended to support your hearing skills utilizing a mix of cautiously chosen fixings. Plunge into this extensive survey to reveal the enchanted behind ZenCortex and why it very well may be the way to rejuvenating your hear-able wellbeing.
OFFER====> https://mwebgraceful.com/9133/246/10/
| zencortenow |
1,896,465 | Average Churn Rate for Subscription Services in 2024 | This Blog was Originally Posted to Churnfree Blog The average churn rate for subscription services... | 0 | 2024-06-21T21:22:29 | https://churnfree.com/blog/average-churn-rate-for-subscription-services/ | churnrate, churnfree, churnreduction, saaschurn | This Blog was Originally Posted to [Churnfree Blog](https://churnfree.com/blog/average-churn-rate-for-subscription-services/)
The average churn rate for subscription services varies from industry to industry. Understanding the average churn rate for subscription services is essential for any business to grow in 2024.
Reaching the average churn rate for subscription services can be tricky, but you can reduce subscription churn with some simple churn retention strategies. The churn rate is the percentage of customers who discontinue their subscriptions within a specific time frame. It’s often measured in terms of monthly recurring revenue (MRR) and annual recurring revenue (ARR), key financial metrics for subscription-based businesses.
The average annual churn rate for subscription companies typically ranges between 5-7%. A monthly average churn rate for subscription services is around 4%. However, these figures can vary depending on industry and market conditions. You can compare these average [churn rate benchmarks](https://churnfree.com/blog/b2b-saas-churn-rate-benchmarks/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) against your own gauge to see where your business stands.
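Both of the metrics mentioned above are straightforward to compute once you pick a time frame. Here is a minimal sketch in JavaScript; the function names and figures are illustrative, not part of any Churnfree API:

```javascript
// Customer churn rate for a period = customers lost during the period
// divided by customers at the start of the period, as a percentage.
function customerChurnRate(customersAtStart, customersLost) {
  if (customersAtStart <= 0) throw new Error("customersAtStart must be positive");
  return (customersLost / customersAtStart) * 100;
}

// The same idea applied to revenue: MRR lost to cancellations and
// downgrades divided by MRR at the start of the month.
function mrrChurnRate(mrrAtStart, mrrLost) {
  if (mrrAtStart <= 0) throw new Error("mrrAtStart must be positive");
  return (mrrLost / mrrAtStart) * 100;
}

// 1,000 subscribers at the start of the month, 40 cancel: 4% customer churn.
console.log(customerChurnRate(1000, 40));
// $50,000 MRR at the start of the month, $2,500 lost: 5% revenue churn.
console.log(mrrChurnRate(50000, 2500));
```

Tracking both numbers matters because losing a few large accounts can leave customer churn low while revenue churn spikes.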
**Exploring Average Churn Rate for Subscription Services by Industry**
Different industries experience varying levels of churn, influenced by factors like customer engagement, pricing strategies, and market saturation. Let’s explore average churn rate for subscription services in different industries:

**SaaS Subscription Services**
According to Recurly, the average churn rate for SaaS is 3.36% for voluntary churn. SaaS is a B2B service and, therefore, has lower churn rates. The SaaS sector shows significant variance in [SaaS churn rate](https://churnfree.com/blog/b2b-saas-churn-rate-benchmarks/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution), with B2B platforms typically experiencing lower churn rates, around 3.5% to 4.67%, compared to B2C platforms. This difference is often due to the critical nature of B2B services and the long-term contracts commonly associated with them.
**Related Read:** [What is a good churn rate for SaaS?](https://churnfree.com/blog/b2b-saas-churn-rate-benchmarks/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution)
**Consumer-Oriented Services**
Direct-to-consumer (DTC) businesses usually have higher average subscription churn rates. Industries such as Digital Media, Entertainment, Consumer Goods, and Retail report an average [customer churn rate](https://churnfree.com/blog/a-look-at-customer-churn-rate/) of around 6.5%. This is relatively high compared to SaaS churn rates, which average about 3.8%.
**Energy/Utilities**
In the Energy/Utilities sector, the average churn rate for subscription services stands at approximately 11%. This figure underscores the importance of businesses in this sector focusing on customer retention strategies, especially considering the competitive nature of the energy market.
**IT Services**
The IT Services industry shows a comparatively moderate churn rate of around 12%. This can be attributed to the indispensable nature of IT services for businesses, which often leads to longer contract durations and higher customer retention.
**Computer Software**
The Computer Software industry experiences an average churn rate of about 14%. Despite the essential role of software in modern business operations, the competitive market and rapid technological advancements contribute to this churn rate.
**Professional Services**
Professional Services face a higher churn rate, averaging 27%. This sector’s high rate can be linked to the need for personalized service delivery and the intense competition among firms offering these services.
**Clothing Subscription Box**
Clothing subscription boxes, like other fashion subscription services, often see high churn rates. The average clothing subscription service has a churn rate of around 10.54% per month due to fluctuating consumer interests and a competitive market.
**News Subscriptions**
News subscriptions face challenges in maintaining subscribers, especially in an era where free content is readily accessible. Factors like content quality, pricing, and the proliferation of alternative news sources widely influence the churn rate in this niche.
**E-commerce Subscriptions**
As reported by e-commerce platforms, including Shopify, the average churn rate of e-commerce subscriptions is around 5%.
**Streaming Services**
The average churn rates for streaming services like Netflix, Amazon Prime, Disney, etc., are reported to be high. In the US, streaming services had a churn rate of around 37% for the second half of 2022. The churn rate was significantly higher with Generation Z and millennials than with boomers and Gen X.
**Factors Influencing Subscription Churn**
Pricing is one of the main [causes of churn](https://churnfree.com/blog/analyze-customer-churn-causes/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution). According to a survey, approximately 71% of customers unsubscribed because of high pricing or an increase in pricing. Beyond that, many sign-ups never turn into upgrades due to hidden charges, such as additional fees for premium features or unexpected price increases after a trial period, which cause customers to churn. This can be mitigated by offering transparent pricing, for example with a pricing calculator, and by monitoring and responding to churn.
**Monitoring and Responding to Churn**
It is essential for subscription businesses to monitor their churn rates closely and understand the underlying reasons. Even slight increases in churn can indicate potential issues that need immediate attention to retain customers and sustain growth. By utilizing a [churn prediction software](https://churnfree.com/blog/churn-prediction-software/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution), you can confidently predict churn and implement effective strategies to avoid high churn rates, ensuring the security and stability of your business.
The average subscription churn rate can serve as a critical metric for the health of a subscription-based business. Keeping this rate at or below industry benchmarks is essential for the long-term sustainability of a business.
By delving into these industry-specific churn rates, businesses can gain a comprehensive understanding of the factors that drive customer turnover. This will help you in building strong [customer retention strategies](https://churnfree.com/blog/customer-retention-strategies/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution), thereby reducing customer churn and achieving [net negative churn](https://churnfree.com/blog/net-negative-churn/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution).

In case you feel like this lady after looking at the average churn rates by industry, don’t worry: [Churnfree can help](https://churnfree.com/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution) you reduce churn by 40%. Just sign up and find out for yourself.
If you’d like to learn more about churn rate and various tricks to [reduce customer churn](https://churnfree.com/blog/how-to-reduce-customer-churn/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution), follow [Churnfree Blog](https://churnfree.blog/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution).
**FAQs**
**What is the typical churn rate for subscription-based businesses in 2024?**
The typical churn rate for subscription services generally falls between 6-8%. However, businesses can strive to reduce their churn rate below this average. Understanding why customers cancel their subscriptions, and knowing how to calculate your company’s churn rate, is crucial for making that improvement.
**What are typical churn rates for digital subscriptions?**
Digital subscription companies generally see an average annual churn rate of 5-7%. A monthly churn rate of 4% is often considered adequate. However, these rates can vary depending on the specific market and industry, so it is beneficial to look at industry benchmarks to evaluate your business’s performance.
**How is the retention rate for a subscription service determined?**
The retention rate of a subscription service is determined by the percentage of users who continue to use the service over a given period, such as weekly or monthly. This rate is a critical metric for assessing [customer loyalty](https://churnfree.com/blog/customer-loyalty/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_distribution). Additionally, Monthly Recurring Revenue (MRR) retention, which tracks the stability of revenue from recurring subscriptions over time, is another critical metric to consider. | churnfree |
1,896,394 | Security news weekly round-up - 21st June 2024 | Weekly review of top security news between June 14, 2024, and June 21, 2024 | 6,540 | 2024-06-21T21:16:14 | https://dev.to/ziizium/security-news-weekly-round-up-21st-june-2024-34j0 | security | ---
title: Security news weekly round-up - 21st June 2024
published: true
description: Weekly review of top security news between June 14, 2024, and June 21, 2024
tags: security
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/0jupjut8w3h9mjwm8m57.jpg
series: Security news weekly round-up
---
## __Introduction__
Today's review is about malware and a vulnerability. Unlike previous editions, this one is short because we have just 3 articles. All are worthy of your reading time, and you can relate to them (like every article that we have ever covered in the years gone by).
With that out of the way, let's begin.
<hr/>
## [New Linux malware is controlled through emojis sent from Discord](https://www.bleepingcomputer.com/news/security/new-linux-malware-is-controlled-through-emojis-sent-from-discord/)
Emojis? Who would have thought that? Well, here we are, and this shows that humans are creative. But, in this case, not for a good reason. This malware is linked to a threat actor tracked under the alias UTA0137, and the article claims this threat actor has espionage in mind.
The takeaway from this article is that threat actors are always finding a way to infect your computer system. What's more, the nature of this malware can allow it to bypass some security solutions as stated in the excerpt below.
> The malware is similar to many other backdoors/botnets used in different attacks, allowing threat actors to execute commands, take screenshots, steal files, deploy additional payloads, and search for files.
>
> However, its use of Discord and emojis as a command and control (C2) platform makes the malware stand out from others and could allow it to bypass security software that looks for text-based commands.
## [Fake Google Chrome errors trick you into running malicious PowerShell scripts](https://www.bleepingcomputer.com/news/security/fake-google-chrome-errors-trick-you-into-running-malicious-powershell-scripts/)
When it comes to internet security, the one thing that can stop some attacks is education and awareness. Together, they can prevent this type of attack. So, be nice to your non-technical family and friends and let them know the difference between a legitimate browser update and a fake one.
To aid you in this, you can start with the following excerpt:
> ...the threat actors exploit their targets' lack of awareness about the risks of executing PowerShell commands on their systems.
>
> They also take advantage of Windows' inability to detect and block the malicious actions initiated by the pasted code.
## [Researchers Uncover UEFI Vulnerability Affecting Multiple Intel CPUs](https://thehackernews.com/2024/06/researchers-uncover-uefi-vulnerability.html)
The good news about this vulnerability is that it's now patched. Nonetheless, it got me thinking of [Meltdown and Spectre](https://meltdownattack.com/). Read the article and check if it affects you.
Here is an excerpt to get you started:
> Tracked as CVE-2024-0762 (CVSS score: 7.5), the "UEFIcanhazbufferoverflow" vulnerability has been described as a case of a buffer overflow stemming from the use of an unsafe variable in the Trusted Platform Module (TPM) configuration that could result in the execution of malicious code.
## __Credits__
Cover photo by [Debby Hudson on Unsplash](https://unsplash.com/@hudsoncrafted).
<hr>
That's it for this week, and I'll see you next time. | ziizium |
1,883,868 | Tips for collaborating with a new project codebase | Working on a new project codebase can be both exciting and challenging. Whether you're an experienced... | 0 | 2024-06-21T21:08:18 | https://dev.to/wdp/tips-for-collaborating-with-a-new-project-codebase-4kpf | codereview, softwaredevelopment, programmingtips |
Working on a new project codebase can be both exciting and challenging. Whether you're an experienced developer or a newcomer, understanding how to navigate and contribute to an unfamiliar codebase is crucial for efficient and effective collaboration.
In this article, we provide tips to help you get started, what to do during the feature development process or fix, and how to conclude it on a high note using the [Web Dev Path codebase](https://github.com/Web-Dev-Path/web-dev-path) as a reference.
---
## Before you start coding
### 1. Familiarize yourself with the project structure
Understanding the project structure is your first step:
- **Read the README:** The README file usually contains essential information about the project, including setup instructions, dependencies, and a high-level overview of the codebase. It's the first document you should read to get an overall understanding of the project. Here's an example of our project's [README file](https://github.com/Web-Dev-Path/web-dev-path/blob/main/README.md).
- **Explore the file structure:** Spend some time browsing through the directories and files to understand how the project is organized. Here's why it's important to look at these common folders:
- `src`: This folder typically contains the application's source code. Understanding the organization of the source code helps you locate where to implement new features or find existing functionality.
- `components`: This folder usually holds reusable UI components. Familiarizing yourself with these components can save time by allowing you to reuse existing ones instead of creating new ones from scratch.
- `utils`: Utility functions and helpers are often found here. These functions can simplify your development process by providing pre-built functionalities for common tasks.
- `styles`: This folder often contains styles and theming information. Understanding the styling conventions used in the project ensures that your additions remain consistent with the existing design.
### 2. Review documentation and comments
Documentation and code comments are invaluable resources:
- **API documentation:** If the project interacts with external APIs, read the API documentation to understand how data is fetched and handled. For example, in our project, you can look at the dev.to API documentation to understand how [we fetch blog post data](https://github.com/Web-Dev-Path/web-dev-path/blob/main/pages/blog/index.js)
- **Code comments:** Look for comments within the code. They often provide context and explanations for complex logic and decisions made by the original developers.
### 3. Set up the development environment
Ensure your development environment matches the project's requirements:
- **Follow setup instructions:** Carefully follow any setup instructions provided in the README or other documentation. For example, our project requires you to clone the repository and run yarn install to install dependencies.
- **Install dependencies:** Run the commands to install project dependencies, typically using package managers like npm or yarn. In our case, you might need to install specific versions of React, `styled-components`, and other libraries listed in the package.json file.
### 4. Understand the styling and component libraries
Consistency in styling and components is vital for maintaining a cohesive look and feel:
- **Use existing components:** Before creating new components, check if there are existing ones you can reuse. This helps maintain consistency and reduces redundant code, which eases the project's maintenance.
- **Refer to style guides:** Adhere to any style guides or theming conventions used in the project. This might include using pre-defined themes or adhering to specific design patterns. For example, in our project, we must refer to the themeConfig.js for predefined styles and themes.
### 5. Study sample data and utilities
Projects often come with sample data and utility functions:
- **Sample data:** Look for sample data that can help you understand the structure and format of the data the project handles. For instance, check the blogContent.js file in the utils folder, such as blog post data.
- **Utility functions:** Familiarize yourself with utility functions that can simplify your work. These might include functions for data manipulation, API calls, or common UI operations. Explore the utils folder to see the helper functions you can leverage in your development process.
---
## During the coding process
### 6. Communicate and collaborate
Effective communication is key to successful collaboration:
- **Ask questions:** Don't hesitate to ask questions if something is unclear. Engaging with your team can provide insights and help you avoid common pitfalls. If your questions require more detailed explanations, consider reaching out on Slack. PR conversation threads can become confusing, and a small meeting or a video can often be more efficient than a long comment thread.
- **Request reviews early and use DRAFT Pull Requests:** If you're unsure about your approach, request a review from a more experienced team member. Early feedback can save time and guide you in the right direction. When doing so, create a [Draft PR](https://github.com/Web-Dev-Path/web-dev-path/wiki/Creating-a-Pull-Request#about-draft-pull-requests). Draft PRs are meant for work in progress and are not ready for a full review. You can ask for feedback from one or two team members while keeping the PR as a draft. Only add all the remaining reviewers when your PR is ready for a complete review.
### 7. Follow best practices for code contribution
Adhering to best practices ensures your contributions are well-received:
- **Write clean code:** Follow coding standards and write clean, readable code. This means using meaningful variable names, writing modular code, and avoiding unnecessary complexity. Here’s an example of clean code:
```javascript
// Bad Example
function a(arr) {
let r = [];
for (let i = 0; i < arr.length; i++) {
r.push(arr[i] * 2);
}
return r;
}
```
```javascript
// Good Example
function doubleArrayElements(elements) {
return elements.map(element => element * 2);
}
```
- **Commit regularly:** Make frequent commits with clear, descriptive messages. This helps track your progress and makes it easier to review changes. For example:
```shell
git commit -m "Add function to double array elements"
git commit -m "Fix issue with array doubling logic"
git commit -m "Refactor doubleArrayElements function for readability"
```
- **Test thoroughly:** Ensure your code is well-tested. Write unit tests if applicable, and manually test your changes to confirm they work as expected. For front-end web features, it’s crucial to test in multiple browsers (e.g., Chrome, Firefox, Safari) and on different screen sizes (e.g., desktop, tablet, mobile) to ensure compatibility and responsiveness.
---
## After finishing your PR
### 8. Learn from code reviews
Code reviews are an excellent opportunity for learning and growth:
- **Embrace feedback:** View feedback as a chance to improve your skills. Constructive criticism can help you become a better developer. It's important not to take any feedback personally. When a suggestion is unclear, reach out to the developer and try to understand their perspective.
- **Review others' code:** Participating in code reviews for others can give you insights into different coding styles and best practices. When reviewing others' work, try to provide links and examples whenever possible. This makes your feedback more actionable and easier to understand.
### 9. Document your changes
Proper documentation helps others understand your work:
- **Update documentation:** Ensure that any relevant documentation, including the README and CHANGELOG, is updated to reflect your changes.
- **Write clear pull request (PR) descriptions:** Provide a clear and concise description of your pull request, including what changes were made, why they were necessary, and how to test them. It's a good practice to have a [PR template](https://github.com/Web-Dev-Path/web-dev-path/blob/main/.github/pull_request_template.md) per project so each member has an itinerary to follow when submitting one.
---
## Conclusion
Diving into a new codebase can feel overwhelming, but it becomes manageable and even enjoyable when you follow a methodical approach.
Start by understanding the project structure and reading through the documentation. Then, ensure your development environment is properly set up and familiarize yourself with the existing components and styles.
Effective communication and collaboration are key, so don't hesitate to ask questions and seek early feedback. Follow best practices for writing and testing your code and embrace the learning opportunities from code reviews. Proper documentation and clear PR descriptions help ensure the team understands and appreciates your contributions. Remember, the goal is always to work smarter, not harder.
| marianacaldas |
1,893,279 | Basics of Web Dev + Hosting on Github Pages - Day 0/? | This would be a very rough draft or more like pointed-down stuff of whatever I... | 27,813 | 2024-06-21T21:07:03 | https://dev.to/theshakeabhi/re-learning-the-basics-of-web-day-0-1o6o | webdev, beginners, htmlcssjs | This will be a very rough draft, more like jotted-down notes of whatever I will be learning. I have been a web developer for the last 3 years, but I still feel like I am lagging in some areas.
So the plan is to read the entire MDN [Guides Section](https://developer.mozilla.org/en-US/docs/Learn), blog every day about whatever I learn from memory, and keep updating the blog and a GitHub repo.
## Day 0: Web Development
I am currently reading MDN Docs and starting from the basics.
- `npx http-server /path/to/project -o -p 9999`
This caught my attention because of how easy it is to serve a project locally
- Interesting that the google search engine treats `-` as a word separator but doesn't do the same for `_`. So use `-` for file names and other variables
- Anatomy of an HTML Element:

- [Quirk modes](https://developer.mozilla.org/en-US/docs/Web/HTML/Quirks_Mode_and_Standards_Mode). Something that I had not heard of before.
- Comments in HTML go between `<!-- {content} -->`
- Don't use heading elements to make text bigger or bold, because they are used for **accessibility** and other reasons such as **SEO**. Try to create a meaningful sequence of headings on your pages, **without skipping levels**.
- Anatomy of a CSS ruleset:

- Understanding of [different selectors](https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/CSS_basics#different_types_of_selectors) becomes very useful while using JS and jQuery
- Anything in CSS between /* and */ is a CSS comment.
- CSS layout:

- `margin: 0 auto;` When you set two values on a property like margin or padding, the first value affects the element's top and bottom side (setting it to 0 in this case); the second value affects the left and right side.
- `text-shadow` [details](https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/CSS_basics#positioning_and_styling_the_main_page_title)
- JavaScript was invented by Brendan Eich. JS is a powerful programming language that can add interactivity to a website. It can be used to create games, animated 2D and 3D graphics, comprehensive database-driven apps, and much more!
- `<script>` tag which loads the file, should be put just before closing of the `<body>`, because:
- **Performance**: Placing `<script>` tags at the bottom of the HTML document ensures that the HTML and CSS content of the page is loaded and rendered first. This can improve the perceived loading time of the page because users see the content before the JavaScript is loaded and executed.
- **DOM Readiness**: When a script is loaded at the bottom of the page, it ensures that the DOM (Document Object Model) is fully constructed before the script runs. This is important for scripts that need to manipulate or interact with elements on the page, as it ensures that all elements are available to the script.
- **Dependency Resolution**: If your script depends on any libraries or other scripts, placing it at the end ensures that any dependencies loaded in the same manner are available by the time your script runs.
- The reason the instructions (above) place the `<script>` element near the bottom of the HTML file is that the browser reads code in the order it appears in the file. If the JavaScript loads first and it is supposed to affect the HTML that hasn't loaded yet, there could be problems. Placing JavaScript near the bottom of an HTML page is one way to accommodate this dependency.
- Objects can be anything. Almost everything in JavaScript is an object and can be stored in a variable. Keep this in mind as you learn.
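The script-placement point above can be sketched in plain HTML; the `scripts/main.js` path follows the MDN tutorial layout and is only illustrative. A modern alternative is the `defer` attribute, which lets the script live in the `<head>` while still running only after the document is parsed:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Script placement</title>
    <!-- Alternative: <script src="scripts/main.js" defer></script> -->
  </head>
  <body>
    <h1>Hello, web</h1>
    <!-- Loaded last, so the DOM above is fully built when this runs -->
    <script src="scripts/main.js"></script>
  </body>
</html>
```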
I was getting my hands also dirty by making the webpage they were asking me to make.
NOTE: Completed from [Getting started with web](https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web) till [Publishing the website](https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/Publishing_your_website)
Here is the link to the repo and the hosted link:
**Github**: https://github.com/theshakeabhi/basics-of-web/tree/main/day-0/test-site
**Hosted Link**: https://theshakeabhi.github.io/basics-of-web/
| theshakeabhi |
1,896,365 | Disable Effects of a Controller On its Pods in Kubernetes | A controller is the name for several different elements in Kubernetes. One of these elements... | 0 | 2024-06-21T20:57:34 | https://dev.to/umairk/disable-effects-of-a-controller-on-its-pods-in-kubernetes-64a | kubernetes, devops, tutorial, docker | A controller is the name for several different elements in Kubernetes. One of these elements designates the global name of any resource responsible, among other things, for the creation and lifecycle management of the pods.
A typical controller is a Deployment, which is used to deploy pods in several replicas but is also responsible for destroying or recreating them as needed. The other three most used types of controllers are StatefulSet, DaemonSet, and Job. When working with controllers, we may want to prevent them from making changes to pods temporarily. So, how do we do it?
## The cleanest solution (STS only)
In the case of a StatefulSet (STS), there is a significant difference compared to a Deployment: the presence of a rolling update strategy at the root of spec. Here is the relevant part of the API with its default values:
```yaml
spec:
updateStrategy:
rollingUpdate:
partition: 0
type: RollingUpdate
```
What we are interested in is _spec.updateStrategy.rollingUpdate.partition_. This option is only available if _spec.updateStrategy.type_ is set to RollingUpdate. It allows you to specify, in a StatefulSet, the pod ordinal above which modifications made to the controller will be propagated. In other words, with the default value of 0, all pods will be updated when the StatefulSet changes. If set to 1, then all pods will be updated except the first pod, whose name ends with -0.
This allows for canary deployment at a lower cost, such as modifying the image of only 30% of the pods to gradually introduce an update. The command to patch a StatefulSet is as follows:
```shell
sts=postgres
part=2
kubectl patch statefulset $sts -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":'$part'}}}}'
```
In the case of a canary deployment, we could first modify the image of a container in the pod template, then deploy the modifications gradually by regularly patching the controller and lowering the partition value step by step.
Note that this will not prevent a pod from restarting if it is deleted; it only affects modifications to the pod template. For handling pod restarts, continue reading.
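As a sketch, a canary step on a ten-replica StatefulSet might pin the partition so that only the three highest-ordinal pods receive the new pod template (the name and numbers here are illustrative):

```yaml
# Excerpt of a StatefulSet manifest: with 10 replicas and partition: 7,
# only pods with an ordinal >= 7 (postgres-7, postgres-8, postgres-9)
# pick up changes made to the pod template.
spec:
  replicas: 10
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 7
```

Lowering partition step by step (7, then 4, then 0) then rolls the change out to the rest of the fleet.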
Without a controller, there won't be any deletion or recreation of pods against your will. However, if you delete a controller, the pods created by it will disappear. To prevent this, you can use an option with the kubectl delete command that prevents a controller from eliminating its child pods. This option is --cascade=false (since it is true by default); in recent kubectl versions, the equivalent spelling is --cascade=orphan.
```shell
kubectl delete sts postgresql --cascade=false
```
When deleting a controller this way, Kubernetes still removes the ownerReferences objects from all child pods, but does nothing else. You can then, for example, delete a pod and mount its data volume in a new pod for a special operation (such as an advanced backup). Using the backup example, this method can also be used to turn off the pods of a StatefulSet one by one and take the time to perform a cold backup of the data on the storage side.
However, you should remember to recreate the deleted controller if you want to keep the pods alive after the operation. Here is a bash script that provides the basics of using this method properly:
{% gist https://gist.github.com/Umair-khurshid/28a0b5cf1f6ad149fa32885fd41ada54 %}
This script ensures that the controller is recreated as is once the operation is completed or if the script encounters a problem, or if the user kills it prematurely (except in the case of a kill -9 of course).
| umairk |
1,889,031 | Swimming Like a Fish (Bite-size Article) | Introduction Many fish swim continuously throughout their lives. Especially, fish like... | 0 | 2024-06-21T20:56:35 | https://dev.to/koshirok096/swimming-like-a-fish-bite-size-article-29bm | life, mentalhealth | # Introduction
Many fish swim continuously throughout their lives. In particular, fish like sharks and tuna take in oxygen by swimming, maintaining the activities necessary for survival. Fish with this habit survive by always moving forward.
So, <u>do we need to keep swimming continuously in our lives like fish?</u> Of course, when we talk about "continuously swimming," it doesn't just mean physical movement. It refers to constantly moving forward in the direction we should take in life.
For example, dedicating ourselves to work for career advancement or achieving self-growth through studying can be seen as steps forward in life. One reason migratory fish keep swimming is the pursuit of possibilities. They constantly search for new waters and new food sources, always moving forward. The sight of migratory fish freely swimming in the vast ocean symbolizes infinite possibilities. They seek new adventures and explore unknown worlds, expanding their limits. This attitude also applies to us pursuing our own possibilities.

# The "Lying Flat" Movement in China
However, continuously "swimming" isn't always a good thing. Constantly moving forward without rest can lead to mental and physical exhaustion. "Moving forward" often means engaging in intense competition and battles. Just as fish swim against the waves, we face difficulties as we strive to progress.
Recently, in China, there has been an increase in young people who distance themselves from excessive competition and pressure, prioritizing time with themselves and their families over a life driven by work. These individuals are part of a movement known as the **"lying flat" movement (躺平主義)**. In recent years, rapid economic growth in China has intensified competition, causing many young people to feel immense pressure. In urban areas, high living costs, demanding educational environments, fierce job competition, and long working hours have led to burnout and mental fatigue. As a result, some young people choose to live life at their own pace, distancing themselves from excessive competition and embracing a lifestyle of "lying flat."
Specifically, this lifestyle involves earning just enough to get by (some even live off savings or family support without working at all), not buying homes or cars, avoiding romantic relationships, marriage, and having children, and maintaining low levels of consumption.

Some people view this as a form of "giving up", criticizing it as a destructive and lazy way of life. However, this approach of distancing oneself from excessive pressure and competition, and pursuing a simple, low-stress lifestyle, can preserve mental health, create time to focus on what truly matters, and allow individuals to concentrate on <u>what is genuinely important to them as human beings</u>.
On the other hand, there are concerns about the impact of this lifestyle on overall societal productivity and economic growth. If the "lying flat" lifestyle becomes widespread, it could lead to a shortage of labor and economic stagnation. Additionally, spending one's youth without working hard could result in missed opportunities for personal growth and future career advancement, posing a risk of irreparable consequences. Furthermore, choosing this lifestyle could lead to lower economic stability and increase future uncertainties.

# It's All Up to YOU
There are two main reasons I wrote this article. First, it’s a bit personal, but over the past few years, I’ve been under immense pressure while tackling various tasks, trying to push myself forward. In doing so, I began to feel like a fish that must keep swimming out of instinct, <u>as if I had no choice in the matter</u>. It felt like I was moving not out of my own will, but as a biological imperative.
Second, when I learned about China’s “lying flat” movement through the news media, it struck me and made me wonder whether there’s something I’m missing in my current way of living.
<u>I don’t believe that our lives need to be spent continuously swimming like fish</u>. Some migratory fish, such as tuna, may die if they stop moving because they can’t breathe, but <u>we have the ability to stop and rest</u>, just like the young people in China’s “lying flat” movement.
However... <u>I also don’t think we should completely stop "swimming" like fish</u>. Because swimming is fun, isn't it? At least for me, swimming is fun. So, as long as I can find enjoyment in "swimming", I think it’s better to keep going until I’m completely exhausted. I plan to enjoy swimming while taking occasional breaks.

# Conclusion
I usually write articles about tech topics, work techniques, and productivity, but today I decided to try something a bit experimental. Sorry if it turned out a bit disjointed!
In our lives, constantly moving forward is important, but it’s not the only way to live. It’s also important to move at your own pace, taking breaks when needed. While enjoying the act of swimming like a fish, don’t forget to stop and rest occasionally to find your own balance.
Thank you for reading, take care!
| koshirok096 |
1,896,392 | AWS Polling - What is it and why you should do it | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-21T20:56:14 | https://dev.to/onesoltechnologies/aws-polling-what-is-it-and-why-you-should-do-it-4cob | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
In [AWS](https://aws.amazon.com/) Lambda polling, we find the optimal configuration for your AWS Lambda function. This configuration refers to the memory size and its relation to runtime and cost. Plotting memory vs. cost on a graph gives us a minimum point, which is the optimal cost.

## Additional Context
When I first started using [AWS](https://aws.amazon.com/), I'd often end up spending a lot more than I expected, but since the projects were small and the small differences in run-time did not affect the business function much, I used to overlook it. It was only when we started our first major project that we learned about [polling](https://docs.aws.amazon.com/lambda/latest/operatorguide/profile-functions.html) in [AWS Lambda](https://aws.amazon.com/), and it has been a lifesaver. On a recently completed project, we are projected to save about $500 over its lifetime thanks to polling. These are the small things that we have now learned to pay more attention to at our [Consultancy](https://onesol.dorik.io/)!
| onesoltechnologies |
1,896,389 | Crafting Effective UI Layouts: Day 4 of My UI/UX Design Journey | Day 4: Mastering Layout in UI Design 👋 Hello, Dev Community! I'm Prince Chouhan, a B.Tech CSE... | 0 | 2024-06-21T20:52:25 | https://dev.to/prince_chouhan/crafting-effective-ui-layouts-day-4-of-my-uiux-design-journey-4i87 | ui, uidesign, ux, uxdesign | Day 4: Mastering Layout in UI Design
👋 Hello, Dev Community!
I'm Prince Chouhan, a B.Tech CSE student with a passion for UI/UX design. Today, I'm excited to share my learnings on the principles of creating effective layouts in UI design.
---
🗓️ Day 4 Topic: Layout Principles in UI Design
---
📚 Today's Learning Highlights:

1. Concept Overview:
   A layout is the arrangement of visual elements on a user interface, organizing content to guide users and enhance their experience. Effective layouts make interfaces intuitive and visually appealing.

2. Key Takeaways:
   - Order: Arrange elements logically to guide users through the interface.
   - Hierarchy: Establish visual priority to direct attention to important elements.
   - Balance: Distribute elements evenly to create visual stability.

3. Key Considerations for Creating a Good Layout:
   - Balance: Achieve stability through symmetrical (equal weight) or asymmetrical (visual weight) distribution.
   - Hierarchy:
     - Size: Larger elements catch attention first.
     - Color: Bold colors highlight important elements.
     - Placement: Position key elements in prominent areas.
   - Proximity: Group related elements together to indicate their connection. Separate unrelated elements to avoid confusion.
   - White Space (Negative Space): Use empty space strategically to enhance readability and focus. Prevent clutter and give the design room to breathe.
   - Alignment: Align elements to create order and cohesion. Use grids and guides to maintain consistency across the interface.

4. Common Design Layouts:
   - Grid Layout: Uses rows and columns to structure content. Ensures alignment and consistency.
   - Single Column Layout: Simplifies navigation by presenting content in a single, vertical flow. Ideal for mobile interfaces.
   - Split-Screen Layout: Divides the screen into two distinct areas. Useful for showcasing dual content or comparisons.
   - Z-Pattern Layout: Guides the user's eye in a Z-shaped path. Effective for designs with a clear start and end point.
   - F-Pattern Layout: Directs the user's eye in an F-shaped pattern. Commonly used for text-heavy content.
---
🚀 Future Learning Goals:
Next, I'll explore visual hierarchy, consistency, and simplicity in UI design.
---
📢 Community Engagement:
- What are your favorite layout techniques?
- Any examples of effective layouts that you find inspiring?
---
💬 Quote of the Day:
_"Good design is obvious. Great design is transparent." – Joe Sparano_
---
Thank you for reading! Stay tuned for more updates as I continue my journey in UI/UX design.

#UIUXDesign #LearningJourney #DesignThinking #PrinceChouhan
| prince_chouhan |
1,891,804 | A practical summary of React State variables & Props! | There are so many videos and documentation out there to help you learn the concept and usage of State... | 0 | 2024-06-21T20:49:37 | https://dev.to/atenajoon/react-state-variables-vs-props-21o7 | reactjsdevelopment, state, props, react | There are so many videos and documentation out there to help you learn the concept and usage of _State_ and _Props_ with code examples. This article is not about teaching you these concepts from scratch or providing you with code examples. The main point of this blog is to briefly give you a practical summary of **what they are** and **when to use them**. So, if you are completely new to these concepts, I would suggest you first watch some basic tutorials about them to get the most out of this summary.
## State
### What is State?
The state variable is a plain JavaScript object used by React that holds all the information about the component's current situation. State variables should be treated as immutable and are private to that component.
### When do we need a state for a component?
- Whenever you need to change/update part of the UI
- Whenever the user is supposed to interact with the UI
- Whenever you need to persist local variables between renders
### Using state correctly:
- Treat state as immutable. The component that owns a piece of state should be the one that modifies it. Do not modify the state directly; instead, use **setState()**.
- State updates may be asynchronous. So, if you need the values of _this.props_ and _this.state_ to calculate the next state value, instead of passing an object directly to _setState()_, pass a callback function that receives the previous _state_ and _props_ as arguments.
- State updates are merged.
- The data flows down.
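To make the asynchronous-updates point above concrete, here is a framework-free sketch of why the callback (updater) form matters. The `MiniComponent` class is a made-up stand-in that imitates React's update batching; it is not React's actual implementation:

```javascript
// A made-up stand-in that mimics how React batches state updates.
class MiniComponent {
  constructor(state) {
    this.state = state;
    this.queue = [];
  }
  setState(update) {
    // Updates are queued, not applied immediately — just like React batching.
    this.queue.push(update);
  }
  flush() {
    for (const update of this.queue) {
      // The object form was computed against stale state at call time;
      // the function form receives the latest pending state here.
      const patch = typeof update === 'function' ? update(this.state) : update;
      this.state = { ...this.state, ...patch };
    }
    this.queue = [];
  }
}

const a = new MiniComponent({ count: 0 });
a.setState({ count: a.state.count + 1 }); // both object literals read count = 0
a.setState({ count: a.state.count + 1 });
a.flush();
console.log(a.state.count); // 1 — the second update clobbered the first

const b = new MiniComponent({ count: 0 });
b.setState((state) => ({ count: state.count + 1 }));
b.setState((state) => ({ count: state.count + 1 }));
b.flush();
console.log(b.state.count); // 2 — each updater sees the latest state
```

In real React, `this.state` read inside an event handler plays the role of the stale snapshot; the updater form sidesteps the problem in the same way.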
### How does setState() work?
This method tells React that we are updating the state. Then, React checks which parts of the state are changed to bring the DOM in sync with the virtual DOM based on those changes.
Note: **props** is basically a plain JavaScript object that includes all the attributes that we set in the parent component. Every React component has the _props property_. It is used to pass data from a parent component to a child. It is _“read-only”_ and cannot be modified; if that data needs to change, it should be stored in state and updated with setState.
## State vs props
#### Ownership and Control:
**State:** Internal data, owned and controlled by the component itself.
**Props:** External data, owned and controlled by the parent component.
#### Conceptual Understanding:
**State:** Can be thought of as the component's memory, holding data over time across multiple re-renders.
**Props:** Like function parameters, used to pass data from parent to child components.
#### Mutability and Effects:
**State:** Can be updated by the component itself, triggering a re-render.
**Props:** Read-only, cannot be modified by the receiving component. Receiving new props also causes a re-render.
#### Usage:
**State:** Used to make components interactive.
**Props:** Used to configure child components from the parent component.
**Note:** Whenever a piece of state is updated, the component that owns the state, and all children components that receive the piece of state as props will be re-rendered to keep in sync. | atenajoon |
1,896,388 | shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 5 | In this article, I discuss how Blocks page is built on ui.shadcn.com. Blocks page has a lot of... | 0 | 2024-06-21T20:44:22 | https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-is-blocks-page-built-part-5-54o3 | javascript, nextjs, opensource, shadcnui | In this article, I discuss how [Blocks page](https://ui.shadcn.com/blocks) is built on [ui.shadcn.com](http://ui.shadcn.com). [Blocks page](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/blocks/page.tsx) has a lot of utilities used, hence I broke down this Blocks page analysis into 5 parts.
1. [shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 1](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-1-ac4472388f0a)
2. [shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 2](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-2-7714c8f36a43)
3. [shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 3](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-3-991c423b2ea3)
4. [shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 4](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-4-57c5eaa4916e)
5. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 5
This is final part 5 where I will discuss the following:
1. getBlock function
2. BlockPreview component
3. BlockDisplay
getBlock function
-----------------
[getBlock function](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27) uses the same functions, such as readFile, createTempSourceFile, and project.createSourceFile. I explained these in [great detail in part 4](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-4-57c5eaa4916e).
To summarise, project.createSourceFile is an API provided by ts-morph to perform TypeScript manipulations, such as removing a variable from a file, by accessing TypeScript's AST. This can simplify refactoring, since it saves a lot of time on repetitive tasks such as renaming a property or function across a large codebase.
```js
// source: https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27
export async function getBlock(
  name: string,
  style: Style["name"] = DEFAULT_BLOCKS_STYLE
) {
  const entry = Index[style][name]
  const content = await _getBlockContent(name, style)

  const chunks = await Promise.all(
    entry.chunks?.map(async (chunk: BlockChunk) => {
      const code = await readFile(chunk.file)
      const tempFile = await createTempSourceFile(`${chunk.name}.tsx`)
      const sourceFile = project.createSourceFile(tempFile, code, {
        scriptKind: ScriptKind.TSX,
      })

      sourceFile
        .getDescendantsOfKind(SyntaxKind.JsxOpeningElement)
        .filter((node) => {
          return node.getAttribute("x-chunk") !== undefined
        })
        ?.map((component) => {
          component
            .getAttribute("x-chunk")
            ?.asKind(SyntaxKind.JsxAttribute)
            ?.remove()
        })

      return {
        ...chunk,
        code: sourceFile
          .getText()
          .replaceAll(`@/registry/${style}/`, "@/components/"),
      }
    })
  )

  return blockSchema.parse({
    style,
    highlightedCode: content.code ? await highlightCode(content.code) : "",
    ...entry,
    ...content,
    chunks,
    type: "components:block",
  })
}
```
This function removes nodes that have the "x-chunk" attribute. To my surprise, there are some block examples that do contain this attribute, as shown in the image below

The getBlock function then returns the object below to a variable in the BlockDisplay component.
```js
return blockSchema.parse({
style,
highlightedCode: content.code ? await highlightCode(content.code) : "",
...entry,
...content,
chunks,
type: "components:block",
})
```
BlockDisplay Component
----------------------
Now that we fully understand what happens behind the scenes when you call getBlock, let's look at [BlockDisplay](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/block-display.tsx#L5): as the code below shows, it uses Promise.all and waits until it gets all the blocks.
```js
export async function BlockDisplay({ name }: { name: string }) {
const blocks = await Promise.all(
styles.map(async (style) => {
const block = await getBlock(name, style.name)
const hasLiftMode = block?.chunks ? block?.chunks?.length > 0 : false
// Cannot (and don't need to) pass to the client.
delete block?.component
delete block?.chunks
return {
...block,
hasLiftMode,
}
})
)
if (!blocks?.length) {
return null
}
return blocks.map((block) => (
    <BlockPreview key={`${block.style}-${block.name}`} block={block} />
))
}
```
and then BlockPreview is used to show a block example.
BlockPreview component
----------------------
The code below is taken from the BlockPreview component:
```js
"use client"

import * as React from "react"
import { ImperativePanelHandle } from "react-resizable-panels"

import { cn } from "@/lib/utils"
import { useConfig } from "@/hooks/use-config"
import { useLiftMode } from "@/hooks/use-lift-mode"
import { BlockToolbar } from "@/components/block-toolbar"
import { Icons } from "@/components/icons"
import {
  ResizableHandle,
  ResizablePanel,
  ResizablePanelGroup,
} from "@/registry/new-york/ui/resizable"
import { Tabs, TabsContent } from "@/registry/new-york/ui/tabs"
import { Block } from "@/registry/schema"

export function BlockPreview({
  block,
}: {
  block: Block & { hasLiftMode: boolean }
}) {
  const [config] = useConfig()
  const { isLiftMode } = useLiftMode(block.name)
  const [isLoading, setIsLoading] = React.useState(true)
  const ref = React.useRef<ImperativePanelHandle>(null)

  if (config.style !== block.style) {
    return null
  }

  return (
    <Tabs
      id={block.name}
      defaultValue="preview"
      className="relative grid w-full scroll-m-20 gap-4"
      style={
        {
          "--container-height": block.container?.height,
        } as React.CSSProperties
      }
    >
      <BlockToolbar block={block} resizablePanelRef={ref} />
      <TabsContent
        value="preview"
        className="relative after:absolute after:inset-0 after:right-3 after:z-0 after:rounded-lg after:bg-muted"
      >
        <ResizablePanelGroup direction="horizontal" className="relative z-10">
          <ResizablePanel
            ref={ref}
            className={cn(
              "relative rounded-lg border bg-background",
              isLiftMode ? "border-border/50" : "border-border"
            )}
            defaultSize={100}
            minSize={30}
          >
            {isLoading ? (
              <div className="absolute inset-0 z-10 flex h-[--container-height] w-full items-center justify-center gap-2 text-sm text-muted-foreground">
                <Icons.spinner className="h-4 w-4 animate-spin" />
                Loading...
              </div>
            ) : null}
            <iframe
              src={`/blocks/${block.style}/${block.name}`}
              height={block.container?.height}
              className="chunk-mode relative z-20 w-full bg-background"
              onLoad={() => {
                setIsLoading(false)
              }}
            />
          </ResizablePanel>
          <ResizableHandle
            className={cn(
              "relative hidden w-3 bg-transparent p-0 after:absolute after:right-0 after:top-1/2 after:h-8 after:w-[6px] after:-translate-y-1/2 after:translate-x-[-1px] after:rounded-full after:bg-border after:transition-all after:hover:h-10 sm:block",
              isLiftMode && "invisible"
            )}
          />
          <ResizablePanel defaultSize={0} minSize={0} />
        </ResizablePanelGroup>
      </TabsContent>
      <TabsContent value="code">
        <div
          data-rehype-pretty-code-fragment
          dangerouslySetInnerHTML={{ __html: block.highlightedCode }}
          className="w-full overflow-hidden rounded-md [&_pre]:my-0 [&_pre]:h-[--container-height] [&_pre]:overflow-auto [&_pre]:whitespace-break-spaces [&_pre]:p-6 [&_pre]:font-mono [&_pre]:text-sm [&_pre]:leading-relaxed"
        />
      </TabsContent>
    </Tabs>
  )
}
```
As you can see, blocks are rendered using an iframe. An [example URL](https://ui.shadcn.com/blocks/default/authentication-04) is provided in the screenshot below.

Similarly, you can load other blocks by visiting their relevant URL.
Conclusion:
-----------
In these 5 parts, I studied the code used in building the Blocks page found on [ui.shadcn.com/blocks](http://ui.shadcn.com/blocks).
On this side of the codebase, I have seen some advanced TypeScript patterns, such as using Records and parsing objects with zod to make sure they meet a set schema standard. My favourite was using ts-morph to perform variable-removal operations on code picked from a file via the AST API (that sounds cool, lol), just so the code presented to the “client” component receives what's needed and nothing more.
Frankly speaking, it was not easy to read and understand this code. In my next adventure, I will use this momentum to understand how shadcn-ui/ui's CLI package is built and write articles about it. It will be interesting to find out what happens under the hood when you type `npx shadcn-ui add button`, for example.
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/)
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/apps/www/components/block-display.tsx#L5](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/block-display.tsx#L5)
2. [https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27) | ramunarasinga |
1,896,387 | Jenkins Integration with MSBuild and IIS | These days I have been planning to implement CI (Continuous Integration) in my projects.... | 0 | 2024-06-21T20:41:31 | https://dev.to/re-al-/integracion-jenkins-con-msbuild-e-iis-4212 | cicd, msbuild, iis, jenkins | These days I have been planning to implement CI (Continuous Integration) in my projects. For that, I turned to Jenkins.
### Jenkins
[Jenkins](https://jenkins.io/) allows us to schedule the deployment of our applications in an easy and intuitive way.
You will see that installing it is very simple. [Here](https://jenkins.io/doc/book/getting-started/installing/) are the instructions for installing it on any OS. In my case, I am going to use it on Windows Server.
By default, Jenkins listens on port 8080, but we can change it to listen on any other port.
This change is made in the **"jenkins.xml"** file located in the root directory of the Jenkins installation.
In my case, I will use port 82. The configuration should look like this:
```xml
<arguments>-Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar "%BASE%\jenkins.war" --httpPort=82 --webroot="%BASE%\war"</arguments>
```
For the configuration to take effect, I had to restart the service through the Windows Services Manager.
Once everything is ready, install the plugins recommended by Jenkins, and then install the following plugins through "Manage Jenkins" -> "Manage Plugins":
* Bitbucket Build Status Notifier Plugin
* Bitbucket Plugin
* MSBuild Plugin
* MSTest plugin

Now let's configure MSBuild. This tool will allow us to build .NET projects.
### Configuring MSBuild and Git
To set these up, we first need the following tools installed on the deployment server (i.e., where Jenkins is):
* [MsBuild](https://www.microsoft.com/es-ar/download/details.aspx?id=48159)
* [Git](https://git-scm.com/)
Once these utilities are installed, go to "Manage Jenkins" -> "Global Tool Configuration" to set the PATH of each executable:


We can create several MSBuild instances, according to their version. In my case I am using version 14, which corresponds to Visual Studio 2015.
### Creating Jobs
Once everything is configured, we can create our first job by clicking in the left sidebar.
Give the job a name (I prefer to use the name of my .NET solution), choose the **"Freestyle project"** option, and press the **"OK"** button:

Now let's configure the project.
In my case, I am going to choose a Git repository on Bitbucket as the source code origin:

When configuring a Bitbucket repository, Jenkins will ask me to create my Git access credentials.
I can also configure (if I have the Bitbucket plugin) a build to run every time a push is made to the repository.

If we check this option, we must configure a **WebHook** in the Bitbucket repository pointing to our deployment server:

The WebHook URL in Bitbucket must be:
```
http://USUARIO_JENKINS:PASSSWORD_JENKINS@SERVIDOR_JENKINS:PUERTO_JENKINS/bitbucket-hook/
```
Now, going back to the job configuration in Jenkins, we get to the most important part:
In the **Build** block we can add several actions:
#### 1. Restore the Solution's NuGet Packages
Before starting to deploy our projects, we need to restore the NuGet packages used in the solution.
Although there are Jenkins plugins that help with this, it is better to perform the task through an **"Execute Windows batch command"** step.
Obviously, this means we need the **"Nuget.exe"** executable installed on our deployment server.
```
"C:\Program Files\Nuget\"nuget.exe restore ARCHIVO_DE_SOLUCION.sln
```
#### 2. Run MSBuild for the Whole Solution
To build the whole solution, add a new step and choose the **"Build a VisualStudio project or solution using MsBuild"** option. Then, configure the step as follows:

En el campo "MSBuild File" se debe especificar el archivo que representa la solución (.sln).
#### 3. Publicar el Proyecto principal
Para este paso agregamos una opción **"Build a VisualStudio project or solution using MsBuild"** y especificamos el Path al proyecto principal de nuestra solucion:

We also set the following arguments:
```
/T:Build;Package /p:Configuration=DEBUG /p:OutputPath="C:\JenkinsBuilds\NOMBRE_PROYECTO_PRINCIPAL" /p:DeployIisAppPath="/Default Web Site/NOMBRE_SITIO" /p:VisualStudioVersion=14.0
```
#### 4. Deploy to the IIS Server
Finally, add another **"Execute Windows batch command"** step and specify the following command:
```
xcopy "C:\JenkinsBuilds\NOMBRE_SOLUCION\_PublishedWebsites\NOMBRE_PROYECTO_PRINCIPAL" /O /X /E /H /K /Y /d "C:\inetpub\wwwroot\NOMBRE_SITIO_EN_IIS\"
```
This command will copy the files recursively, but only those that have changed recently.
#### 5. Build NOW and Check the Build-Time Trend
Once everything is configured, we can tell **Jenkins** to run the build and check the statistics:
 | re-al- |
1,896,386 | Blockchain in Banking: Revolutionizing the Financial Sector | Introduction Blockchain technology, originally devised for Bitcoin, has evolved into... | 27,673 | 2024-06-21T20:40:51 | https://dev.to/rapidinnovation/blockchain-in-banking-revolutionizing-the-financial-sector-54j6 | ## Introduction
Blockchain technology, originally devised for Bitcoin, has evolved into a
decentralized digital ledger that records transactions securely and
transparently. This technology is transforming data management across various
sectors, including banking.
## What is Blockchain?
Blockchain is a distributed ledger shared among nodes in a network. It ensures
data integrity and security without the need for a central authority, making
it ideal for applications beyond cryptocurrencies.
## How Does Blockchain Technology Work?
Blockchain operates on decentralization, where control is distributed across a
network of nodes. Transactions are grouped into blocks, validated by miners,
and secured through consensus mechanisms like Proof of Work and Proof of
Stake.
## Types of Blockchain Deployments in Banking
Blockchain in banking can be deployed as public, private, or consortium
blockchains, each offering unique benefits and challenges in terms of
transparency, security, and scalability.
## Top 7 Ways Banks Benefit From Blockchain Technology
Blockchain enhances security, improves transparency, increases efficiency,
reduces costs, improves asset traceability, facilitates payments, and drives
innovation in financial products.
## Challenges of Implementing Blockchain in Banking
Key challenges include regulatory uncertainties, scalability issues, and
integration with legacy systems. Addressing these challenges is crucial for
widespread adoption.
## Future of Blockchain in Banking
The future looks promising with evolving regulations, technological
advancements, and increasing adoption. Blockchain is set to redefine trust,
transparency, and efficiency in banking.
## Real-World Examples of Blockchain in Banking
Notable examples include JPMorgan Chase's JPM Coin and HSBC's blockchain-based
letter of credit, showcasing the practical benefits of blockchain in enhancing
transaction speed and security.
## Why Choose Rapid Innovation for Blockchain Implementation and Development
Rapid Innovation offers expertise in AI and blockchain, customized solutions,
and a proven track record with industry leaders, making it the ideal partner
for blockchain development.
## Conclusion
The integration of advanced technologies like AI, blockchain, and big data
analytics is revolutionizing banking operations, enhancing efficiency,
security, and customer experience.
📣📣 Drive innovation with intelligent AI and secure blockchain technology!
Check out how we can help your business grow!
[Blockchain Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)

[AI Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)
## URLs
* <http://www.rapidinnovation.io/post/top-7-ways-banks-benefit-from-blockchain-tech>
## Hashtags
#BlockchainTechnology
#BankingInnovation
#Decentralization
#FinancialSecurity
#SmartContracts
| rapidinnovation | |
1,896,063 | What is a Ledger and Why Floating Points Are Not Recommended? | Portugue Version What is Ledger Series What is a Ledger and why you need to learn about... | 0 | 2024-06-21T20:37:29 | https://dev.to/woovi/what-is-a-ledger-and-why-floating-points-are-not-recommended-1f4l | webdev, javascript, programming, tutorial | [Portuguese Version](https://daniloab.substack.com/p/o-que-e-um-ledger-e-por-que-pontos)
## What is Ledger Series
1. [What is a Ledger and why you need to learn about it?](https://dev.to/woovi/what-is-ledger-and-why-does-it-need-idempotence-18n9)
2. [What is Ledger and why does it need Idempotence?](https://dev.to/woovi/what-is-ledger-and-why-does-it-need-idempotence-18n9)
3. [What is a Ledger and Why Floating Points Are Not Recommended?](https://dev.to/woovi/what-is-a-ledger-and-why-floating-points-are-not-recommended-1f4l)
In the realm of financial transactions and ledgers, accuracy is paramount. However, using floating point numbers to represent monetary values can lead to significant issues due to their inherent imprecision. This is why it's crucial to understand the problems with floats and consider alternative approaches.
## Why Floating Points Are Problematic
Floating point numbers are a common way to represent real numbers in computing. However, they are not suitable for precise financial calculations due to rounding errors. Here are a few examples illustrating the issue:
```ts
console.log(0.1 + 0.2); // Expected: 0.3, Actual: 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // Expected: true, Actual: false
```
These small inaccuracies can accumulate over many transactions, leading to significant errors in financial records.
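To see the drift accumulate, compare a float running total against an integer-cents total over a thousand small deposits (the scenario is invented for illustration):

```javascript
// Summing one thousand $0.10 charges: a float running total drifts away
// from exactly $100.00, while integer cents stay exact.
let floatTotal = 0; // dollars as a float
let centsTotal = 0; // the same amounts in integer cents

for (let i = 0; i < 1000; i++) {
  floatTotal += 0.1;
  centsTotal += 10;
}

console.log(floatTotal === 100); // false — the sum is slightly off
console.log(floatTotal); // very close to 100, but not exactly 100
console.log(centsTotal === 100 * 100); // true — exactly 10,000 cents
```

Each individual error is around 10^-17, but a ledger that reconciles balances with strict equality checks will still flag this as a mismatch.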
## Using Cents or Decimal128
To avoid these pitfalls, it's recommended to represent monetary values in the smallest units (like cents) or use a data type designed for high precision, such as Decimal128 in MongoDB. This ensures that all calculations are exact, maintaining the integrity of financial data.
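In practice this means converting dollar input to cents once, at the boundary, with explicit rounding. The helper names below (`toCents`, `fromCents`) are my own for illustration, not part of any library:

```javascript
// Convert a dollar amount (possibly a float from user input) to integer cents.
// Math.round guards against artifacts like 19.99 * 100 === 1998.9999999999998.
function toCents(dollars) {
  return Math.round(dollars * 100);
}

// Format integer cents back into a dollar string only at display time.
function fromCents(cents) {
  return (cents / 100).toFixed(2);
}

console.log(toCents(19.99)); // 1999, not 1998
console.log(toCents(0.1) + toCents(0.2)); // 30 — exact, unlike 0.1 + 0.2
console.log(fromCents(30)); // "0.30"
```

All arithmetic between those two boundaries (sums, balances, comparisons) happens on integers, so it is exact.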
## Updating the Example to Use Cents
We'll update our previous example to store amounts in cents instead of dollars, ensuring precision in our calculations.
Define the structure of the ledger in MongoDB using cents:
```ts
{
_id: ObjectId("60c72b2f9b1d8e4d2f507d3a"),
date: ISODate("2023-06-13T12:00:00Z"),
description: "Deposit",
amount: 100000, // Amount in cents
balance: 100000, // Balance in cents
transactionId: "abc123"
}
```
Function to add a new entry to the ledger and calculate the balance using cents:
```ts
const { MongoClient } = require('mongodb');
async function addTransaction(description, amount, transactionId) {
const url = 'mongodb://localhost:27017';
const client = new MongoClient(url);
try {
await client.connect();
const database = client.db('finance');
const ledger = database.collection('ledger');
const existingTransaction = await ledger.findOne({ transactionId: transactionId });
if (existingTransaction) {
console.log('Transaction already exists:', existingTransaction);
return;
}
const lastEntry = await ledger.find().sort({ date: -1 }).limit(1).toArray();
const lastBalance = lastEntry.length > 0 ? lastEntry[0].balance : 0;
const newBalance = lastBalance + amount;
const newEntry = {
date: new Date(),
description: description,
amount,
balance: newBalance,
transactionId: transactionId
};
await ledger.insertOne(newEntry);
console.log('Transaction successfully added:', newEntry);
} finally {
await client.close();
}
}
addTransaction('Deposit', 50000, 'unique-transaction-id-001'); // Amount in cents
```
Of course, if you are making a change like this one in production, it will need a database migration. With Mongo, this is easier. You can check how we handle this in our blog post [Zero Downtime Database Migrations at Woovi](https://dev.to/woovi/zero-downtime-database-migrations-at-woovi-cc7)
The goal here is to show what you can learn from common mistakes.
## Woovi APIs Use Cents
Woovi, a modern financial services platform, utilizes cents in their APIs to ensure precision and avoid the pitfalls of floating-point arithmetic. By doing so, they maintain the accuracy and reliability of financial transactions, which is crucial for both their operations and their customers' trust.
## Conclusion
Implementing idempotency in your ledger system is crucial for maintaining accurate and reliable financial records. By ensuring that each transaction is only recorded once, you can prevent duplicate entries and maintain the integrity of your data.
As we have seen, idempotency is not just a technical detail but a fundamental principle that helps build robust and fault-tolerant systems. Similarly, using cents instead of floating points ensures precise financial calculations. In our next blog post, we will explore more advanced topics in ledger management and how to handle other challenges such as concurrency and eventual consistency.
Stay tuned for more insights into building reliable financial systems!
---
Visit us [Woovi](https://woovi.com/)!
---
Follow me on [Twitter](https://x.com/daniloab_)
If you like and want to support my work, become my [Patreon](https://www.patreon.com/daniloab)
Want to boost your career? Start now with my mentorship through the link
https://mentor.daniloassis.dev
See more at https://linktr.ee/daniloab
Photo by <a href="https://unsplash.com/pt-br/@iam_aspencer?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Andrew Spencer</a> on <a href="https://unsplash.com/pt-br/fotografias/silhueta-no-homem-flutuante-eY7ioRbk2sY?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
| daniloab |
1,896,385 | Documenting your ASP.Net project and displaying it as just another form | Now I will show how you can make your ASP.Net WebForms project generate its own... | 0 | 2024-06-21T20:37:15 | https://dev.to/re-al-/documentar-tu-proyecto-aspnet-y-mostrarlo-como-un-formulario-mas-1na1 | csharp, aspnet, documentation | Now I will show how you can make your ASP.Net WebForms project regenerate its own documentation on every build from the source code comments, and also display it as just another form.
### Getting started
For this we will use two resources:
* [VsXMd](https://github.com/lijunle) de [lijunle](https://github.com/lijunle)
* [CommonMark.Net](https://github.com/Knagis/CommonMark.NET) de [Knagis](https://github.com/Knagis)
We can get both projects from their respective GitHub repositories or through NuGet.
For our purposes, we will work with the source code of VsXMd and with the CommonMark.Net NuGet package.
### Generating the XML documentation
As we well know, Visual Studio can generate our project's documentation as an XML file.
The documentation is generated from the comments placed on the methods, properties, and attributes of the classes in our project.
If you don't know how to generate this documentation, I suggest you take a look [here](https://msdn.microsoft.com/es-es/library/x4sa0ak0(v=vs.100).aspx)
### Generating the documentation in Markdown
The first thing we have to do is open the **VsXMd** project separately.
Once it is open, we can compile it and generate its executables, as shown in the figure:

We need the executable (and all its dependencies) for one reason: we are going to use it in a post-build script in the project we want to document.
Also, the reason I worked with the source code (and not the NuGet package) is that with the source code I am free to choose what appears in my documentation. For example, I am not interested in including the names and descriptions of all the controls.
To make this change, I modified the method **private static IEnumerable<IUnit> ToUnits(XElement docElement)** in the **Converter.cs** class of the **Vsxmd** project so that it looks like this:
```csharp
private static IEnumerable<IUnit> ToUnits(XElement docElement)
{
// assembly unit
var assemblyUnit = new AssemblyUnit(docElement.Element("assembly"));
// member units
var memberUnits = docElement
.Element("members")
.Elements("member")
.Select(element => new MemberUnit(element))
.Where(member => member.Kind != MemberKind.NotSupported && member.Kind != MemberKind.Constants)
.GroupBy(unit => unit.TypeName)
.Select(MemberUnit.ComplementType)
.SelectMany(group => group)
.OrderBy(member => member, MemberUnit.Comparer);
// table of contents
var tableOfContents = new TableOfContents(memberUnits);
return new IUnit[] { tableOfContents }
.Concat(new[] { assemblyUnit })
.Concat(memberUnits);
}
```
If you look at the **Where** clause (and compare it with the one originally in lijunle's project), you will see that I am excluding elements of type **Constants**.
Before:
```csharp
.Where(member => member.Kind != MemberKind.NotSupported)
```
After:
```csharp
.Where(member => member.Kind != MemberKind.NotSupported && member.Kind != MemberKind.Constants)
```
Once this change is made, I compile the project again to get the new executable file.
### Creating the post-build script in our project
Now we go to **our** project to add the post-build scripts.
First we must copy the executable and the DLLs from the **VsXMd** project into our solution's folder, so that it looks more or less like this:

Once these files are copied, we must go to our **project's** properties and open the **Build Events** tab:

In the **"Post-build event command line"** box we put:
```
"$(SolutionDir)VsXMd"\Vsxmd.exe "$(ProjectDir)App_Data\XmlDocument.xml" "$(ProjectDir)App_Data\XmlDocument.md"
```
In my case, I changed the path (and name) of the XML file generated by Visual Studio, placing it in the **App_Data** folder.
Now we build our project and verify that the Markdown file exists at the specified path. Every time we build our project, the XML documentation and its corresponding MD file will be generated/updated.
### Creating the form to display the documentation as HTML from the Markdown
Finally, we can make that Markdown file serve as the source for one of our forms. This is where we use the **CommonMark** package to interpret *Markdown* text and render it as *HTML*.
In this sample project I worked with WebForms and Bootstrap. Inside a new WebForm (aspx file) I create the following code block:
```html
<div class="row">
<div class="col-lg-12 col-md-12">
<div runat="server" id="divDocs"></div>
</div>
</div>
```
In the code-behind, in the **Page_Load** event we have:
```csharp
protected void Page_Load(object sender, EventArgs e)
{
if (!Page.IsPostBack)
{
//The Markdown file
var strMarkDown = Server.MapPath("~/App_Data/XmlDocument.md");
string text = File.ReadAllText(strMarkDown);
divDocs.InnerHtml = CommonMarkConverter.Convert(text);
}
}
```
And done!!! This would be the result:


If we want to fine-tune things further, we can modify the **VsXMd** project to write the texts in Spanish, or to change what we want displayed.
I hope you find it useful. | re-al- |
1,896,337 | HOW TO CREATE AND CONNECT TO A LINUX VM USING A PUBLIC KEY | In this blog, you will be taught how to create a Linux Virtual Machine running an Ubuntu image, how... | 0 | 2024-06-21T20:35:22 | https://dev.to/presh1/how-to-create-and-connect-to-a-linux-vm-using-a-public-key-2h1h | signintoazure, createavirtualmachine, connecttothevirtualmachine | In this blog, you will be taught how to create a Linux Virtual Machine running an Ubuntu image, how to connect to the virtual machine over the SSH protocol using a public key, and how to go the extra mile of installing a web server, connecting to the VM using the public key, and lastly browsing the web server.
**SIGN IN TO AZURE PORTAL**
To do anything on the Azure portal, the first thing is to log in to the portal (www.portal.azure.com) with your sign-in details (username and password). You may need to register and sign up if you are a new user.
**CREATE A VIRTUAL MACHINE ON AZURE PORTAL**
After signing in, the next thing to do is to create the VM. The steps are detailed below with images and annotations.
1. Enter virtual machines in the search bar, or select "Create a Resource", or select "Resource Group". Any of the 3 will guide your steps.

2. Select Virtual Machine

3. In the Virtual machines page, select Create and then Virtual machine. The Create a virtual machine page opens.

4. In the Basics tab, under Project details, make sure the correct subscription is selected and then choose to Create new resource group or select an existing one. I entered PRENNIEX for sample purposes. Type in your preferred name for the VM (PreshlinuxVM for me), then pick the other options as in the image.

5. Let the security type be standard and choose **Ubuntu Server 22.04 LTS - Gen2** for your Image. This is one of the differences between a Windows VM and a Linux VM. Leave the other defaults. Size availability and pricing are dependent on your region and subscription. If you experience any error at any stage or during validation and it's referencing the Region, please opt for another region.

6. Choose the memory size of your VM

7. Under Administrator account, select **SSH public key**. This basically indicates that you want to connect to your VM using public keys. In Username enter your desired username (**PreshlinuxVM** for me). For SSH public key source, leave the default of **Generate new key pair**, then pick the other options as in the image.

8. Under Inbound port rules > Public inbound ports, choose Allow selected ports and then select SSH (22) and HTTP (80) from the drop-down.

Leave the remaining defaults and then select the Review + create button at the bottom of the page.
9. When the Generate new key pair window opens, select Download private key and create resource. Your key file will be downloaded as PreshlinuxVM.pem. Make sure you know where the .pem file was downloaded; you will need the path to it in the next step.

10. When the deployment is finished, select Go to resource.

11. On the page for your new VM, select the public IP address and copy it to your clipboard.

**CONNECT TO THE VIRTUAL MACHINE**
1. Go to windows, search for Powershell and run as administrator

2. At your prompt, open an SSH connection to your virtual machine. Replace the IP address with the one from your VM, and use the complete path to the .pem file (for this sample, the path is **_c:/user/prenniex/Downloads/PreshlinuxVM.pem_**). This is the path where the key file was downloaded. For this sample, the command is as below:
_**ssh -i c:/user/prenniex/Downloads/PreshlinuxVM.pem PreshlinuxVM@40.84.55.12**_
**INSTALL A WEBSERVER**
To see your VM in action, install the NGINX web server. From your SSH session, first update your package sources and then install the latest NGINX package with the commands below
_**sudo apt-get -y update**_
_**sudo apt-get -y install nginx**_

**BROWSE THE WEBSERVER**
Use a web browser of your choice to view the default NGINX welcome page. Type the public IP address of the VM as the web address. The public IP address can be found on the VM overview page or as part of the SSH connection string you used earlier.

**DO NOT FORGET TO CLEAN UP RESOURCES BY DELETING RESOURCE GROUPS AND THE RESOURCES THAT ARE NOT NEEDED.**
GOOD LUCK! | presh1 |
1,895,799 | #TestInPublic: Charty App | Update These issues are in the process of being addressed where appropriate. See this... | 0 | 2024-06-21T20:34:15 | https://dev.to/ashleygraf_/testinpublic-charty-app-151a | testing, startup | 
## Update
These issues are in the process of being addressed where appropriate.
See <a href="https://x.com/excentiodev/status/1804351579404878294">this Twitter post</a> for the first set.
## Introduction
Hi! I'm Ashley. I'm a software tester who used to work in marketing and communications so I keep swapping hats back and forth. I'm recently back on the job hunt after a 6 month sabbatical which I was very lucky to be able to take. But it's a very different market to the last two that I have experienced as a QE, so I'm trying new things. I've been more than a little bit inspired by the indie hacker movement, so I want to help y'all out while I'm interviewing.
Every fortnight, I'm going to choose one early-stage project to #testinpublic with a real-world application to keep my testing skills sharp. You can nominate your product, or suggest a friend's, and I'll take a look over a couple days, and send you my thoughts.
For my first week, I'm looking at <a href="https://chartyapp.com">Charty App</a>, an app where you can create animated charts. This one is just about to come out of alpha, so I expect it to be usable, but not polished. Did anyone say "Minimum Lovable Product"?
# Executive Summary
- 2-day functional testing exercise
- Tested key features: user account, payment, one type of chart, import, export
- Most issues were around user experience and error handling
# Summary
Most issues I found were in two categories: user experience, and error handling.
It came down to two questions:
- How do you avoid overwhelming users when you want to give them all the possible options?
- How do you handle errors for minimum negative impact on customer experience?
With user experience, there are some issues I’d like to call out.
- The main issue is that there was premature optimisation of categories, with settings for several aspects scattered across several categories. It added a lot of scrolling and short-term memory recollection to do what I wanted.
- I was also expecting some categories and settings to have different names, as it was what I had come to expect from using other charting software.
- The next, is that many settings were missing default values and/or units of measurement. Give the user a reference point!
- The last, is that some subcategory titles were missing the title CSS styling, and they got a bit lost in that big list. For styling, with this many options available, it becomes really important to be able to separate feature from feature aspects at first glance
I’d rank the importance of the error-handling issues in the below order
- 500 errors that led to crashes and therefore lost work
- Silent failures that would lead to confusion and lost time
- Enabling inputs outside of the allowed range
# Approach
When I start looking at a new piece of software, I always try and gain as much understanding of the product as possible by wandering around it for a while by myself without any deliberate provision of context from the creators. Then I'll dig in when it gets interesting. This means that I don't have any explanations of how it works to make my experience easier, and I use it much closer to how an actual new user would.
When I take this approach, I deliberately DON'T ask questions, just like most users wouldn't, and I see if I get disoriented or confused or overwhelmed, and so on. My context is instead similar products, and UX design patterns. These create unwritten expectations.
For this exercise, I'm going to limit my testing to a small part of the app. I chose The User Account, Line Charts, Import, Export, and Payment.
I have not prioritised (what I feel may be) the key issues, because while a QA may and should engage in bug advocacy they are not the ultimate decision maker. I have chosen instead key features to test, and thrown everything that I think could be of concern into an organised pile. Occasionally I will put my marketing hat back on and make some suggestions. I will always point out when that is.
This is also not exhaustive, as I set a time limit of 2 days to test. It also cannot be exhaustive. There are always bugs, just hopefully as few as possible.
For the sake of brevity, I will only be functional testing, and only on my home device. My observations will only cover possible defects, concerns, and questions.
I tested with the latest version of Chrome on Windows 10.
# General Observations
- I felt like the settings options were a bit scattered and the naming and labels a little inconsistent. Some titles are missing title CSS styling as well, and some settings have on-hover tooltips while others don't. I'll provide examples for each further down.
- There is no unit of measurement provided for some of the different settings. Is it px? I would assume it's px.

- I was able to enter numbers above the maximum set in the slider for most features using sliders to set values. This should not be possible.


### Naming consistency
- Placement dropdown naming was inconsistent. For Common Text Properties it used "top, bottom". For Ticks it used "start, middle, end". But these both referred to label positions.


- Box Radius is titled Angle for Background Plate Border and Box Radius for Watermark Box. Unless I am misunderstanding something.


- There are some Points settings in Graph Settings and other Points settings in Points Labels. Personally, I found it annoying going to two different places to set all the Points settings.
- Animations settings are in three places
-- Animation Timings
-- Graph Animation
-- Particles, which appear to be celebratory animations (unless you chose the sad or angry face emoji)
- Most Ticks settings are in Ticks, but to set Tick label size, I need to go to Common Text Properties.

- 500s could be handled better. When they happen, the whole application crashes and I see the start of the stack-trace. I will not provide screenshots of that here.
- There seem to be two ways to add and remove Grid Lines
Ticks -> Horizontal Ticks
Ticks -> Vertical Ticks

Grid -> Enable Grid X
Grid -> Enable Grid Y

However, the Grid Line settings over-rule the Ticks Line settings. This could get confusing for the user (and it's more code that the developer has to maintain), especially as the default setting for the Starter Line Chart has the Grids turned off in Grid.
_Marketing hat on_
The Animation options sound very mathematical and I wasn't sure what the difference was between most of them without trying them out! Perhaps a section on your features page with animations demonstrating all the options.
- There are a LOT of options. The user can basically customise everything! There are celebratory graphics for social media, and slick animations for everything. For that reason, I would consider:
- some renaming of some categories.
- some rearranging of some categories.
- clustering the animations options together.
- subcategories
- I personally was unsure about how to add x and y labels for a while!
- ALL settings are Graph Settings. That category is probably a high candidate for renaming.
How to get there:
1) on-hover tooltips explaining what each category means (just temporarily; in the long term the new short titles should be enough)
2) running analytics to see which categories and settings are most often used and least often used for a more effective clustering.
3) talking to customers to understand why
Perhaps: Least often used - at the bottom and accordion shut by default. Most often used - At the top and accordion shut by default.
_Marketing hat off_
# ~The User Account~ (Log In, User Profile)
- I believe it's a standard of web experience that the user should never lose their work. I would expect that if you click "Log In" from a page where you are editing a project, it would ask you if you want to save the project currently in progress as a JSON file (as there is not currently an option to save projects in progress on the user profile). This does not happen. Additionally, I was unable to use the Back button to rescue my work.
- I can't close the User Profile modal. I have to click Open Editor to close it indirectly.

# Payment
- I think I was given a crypto wallet to use for DePlan when I signed up for an account but I can't be sure as it was labeled "Your UID" in my User Profile. I don't know what it's for.
- Not necessarily a bug but why is it not an option to pay for Pay-As-You-Go Access from the User Profile page? Probably something about DePlan I do not understand.
- Why is the dollar sign on the left for the Pay-As-You-Go option, and on the right for the Monthly Premium option? It's not clear what currency you pay in for either option. I think it's USD.

# File/Import
- If the user attempts to import non-compliant JSON it silently fails.
- If the user attempts to import the project without the project settings it silently fails.
- If the user removes a setting from the JSON they are still able to import it. The relevant setting just returns to the default.
- If the user brute forces a setting to be higher than the maximum or lower than the minimum, unless it's Animation Framerate, it is allowed. It can still be successfully uploaded.
- It was unclear until I tried it that Open referred to importing project files.
_Marketing hat on_
- The purpose of File -> Open is a little unclear until you look at the icon. Then it's obvious that it means 'Import'. I would maybe change the titles like
File -> Project
Open -> Import
Save -> Save.
_Marketing hat off_
# Export
- Why is there a + icon for "Export Image" but not "Export Animation"?

# Play button
- No on-hover tooltip. Interesting that it's not using the start/stop video icons.
# The Starter Line Chart
## Animation Timings
- What is the maximum animation framerate? It's not clear but it appears to be 300 because when I try a number higher than that, it changes to 300.
## Background Properties
- Plate Border is missing some title CSS styling
## Watermark
- Watermark means something else to me. I was expecting the Free version to have the ChartyApp logo watermark in the bottom right-hand corner, but it didn't.
_Marketing hat on_
- The watermark on the premium version seems more like somewhere to put the customer's branding. I'd suggest, if I may, an option to upload an image instead of adding text.
_Marketing hat off_
- At the default X and Y offset, the Watermark is falling off the chart for characters like "q, j, p, g" when they are lower case for fonts with both lower and upper case, like Raleway and Montserrat.

Changing the Box Padding should probably push the box up so all the text remains in the chart, but it stays in the same place horizontally.

# Common Text Properties
- There was no character limit for the Graph Header.
- The on-hover tooltip for the Enable Graph Header slider is not correct.

- If you hid the Graph Header, the chart did not increase in size to fill up the space no longer used by it.

- There was a very wide range of options for the Ticks Label Text Size. It doesn't currently detect collisions or set a limit when one happens, so the labels can overlap.

## Particles
- There is some sort of bug with Particles -> Graphics -> Custom Emojis at the moment generating a 500 error, and when I tried to select it, the whole application crashes and I see the start of the stacktrace.
- Emoji Set 1 -> 9 didn't really mean anything to me as I don't know what's in the set before I try it. They are not descriptive.
## Points Labels
- It is interesting that the default is to not leave a space between the point value and point unit of measurement. It does give the user more options.

- Non-numeric characters are removed for the Y-axis when adding the Label Suffix to it. For the X-axis, the NaN error appears.


## Graph Animation
- The dropdown options for Animation Type should be Capital Case to match the rest.
## Graph Settings
- The sliders are missing on-hover tooltips.
- Line Curve is missing a sub heading, and the dropdown settings are still in camelCase.

## Legends
- The labels appear to have multi-lingual capabilities. That's good. I tried Cyrillic.
## Ticks
- Horizontal Ticks is missing the title CSS styling.

- I found it a little confusing initially without a reference to x-axis and y-axis.
## Editor
- The x-axis values allow non-numeric characters. I suppose so that the user can do the time series any direction they want?
## Simple Editor
For 'Import Your Data', when the CSV has a header, the title "'Group' Values Column (can be empty)" implies it can be empty, but no NULL option was provided. I had to select a column as the Group values column.
## Raw Data
- It would be good to have some kind of written alert when the JSON file is no longer JSON compliant.
- If you remove the x-axis value on Raw Data for one of the Groups, it disappears for all on the chart.
- If you deleted the name of the group in the Raw Data editor, that group would disappear from the chart. It wouldn't just remove the name from the label, as you might expect.
- Once the Raw Data JSON was no longer JSON compliant, the chart "froze" in its last compliant state (and red underlines would appear on the editor) but there was no other indicator to the user that the data was not compliant.

# Conclusion
I'd like to give Naz a shout-out for some amazing work! And thank you for being a good sport about my publishing my findings of your alpha-stage software on the internet! | ashleygraf_ |
1,896,384 | OpenLDAP with CSharp | At my company we have started working with OpenLDAP, and this means changing all the... | 0 | 2024-06-21T20:33:45 | https://dev.to/re-al-/openldap-con-csharp-3h8n | csharp, openldap | At my company we have started working with OpenLDAP, and this means switching all the authentication methods of the systems we have developed over to this protocol.
At first it seemed difficult, but it wasn't. Everything became easier with the help of a few Stack Overflow articles.
In the end I was able to put together a **helper** class that allows me to access the LDAP entries:
```csharp
public class LDAPHelper
{
private readonly LdapConnection ldapConnection;
private readonly string searchBaseDN;
private readonly int pageSize;
public LDAPHelper(
string searchBaseDN,
string hostName,
int portNumber,
AuthType authType,
string connectionAccountName,
string connectionAccountPassword,
int pageSize)
{
var ldapDirectoryIdentifier = new LdapDirectoryIdentifier(
hostName,
portNumber,
true,
false);
var networkCredential = new NetworkCredential(
connectionAccountName,
connectionAccountPassword);
ldapConnection = new LdapConnection(
ldapDirectoryIdentifier,
networkCredential)
{ AuthType = authType };
ldapConnection.SessionOptions.ProtocolVersion = 3;
this.searchBaseDN = searchBaseDN;
this.pageSize = pageSize;
}
public IEnumerable<SearchResultEntryCollection> PagedSearch(
string searchFilter,
string[] attributesToLoad)
{
var searchRequest = new SearchRequest
(searchBaseDN,
searchFilter,
SearchScope.Subtree,
attributesToLoad);
var searchOptions = new SearchOptionsControl(SearchOption.DomainScope);
searchRequest.Controls.Add(searchOptions);
var pageResultRequestControl = new PageResultRequestControl(pageSize);
searchRequest.Controls.Add(pageResultRequestControl);
while (true)
{
var searchResponse = (SearchResponse)ldapConnection.SendRequest(searchRequest);
var pageResponse = (PageResultResponseControl)searchResponse.Controls[0];
yield return searchResponse.Entries;
if (pageResponse.Cookie.Length == 0)
break;
pageResultRequestControl.Cookie = pageResponse.Cookie;
}
}
}
```
With this class, querying the LDAP entries was simple:
```csharp
static void Main(string[] args)
{
try
{
var baseOfSearch = "dc=integrate,dc=com,dc=bo";
var ldapHost = "192.168.0.101";
var ldapPort = 389;
var connectAsDN = "cn=admin,dc=integrate,dc=com,dc=bo";
var pageSize = 1000;
var secureString = "CONTRASEÑA_ADMIN_LDAP";
var openLDAPHelper = new LDAPHelper(
baseOfSearch,
ldapHost,
ldapPort,
AuthType.Basic,
connectAsDN,
secureString,
pageSize);
var searchFilter = "objectclass=posixAccount";
//var searchFilter = "uid=rvera";
var attributesToLoad = new[] { "sn","uid","cn","userPassword" };
var pagedSearchResults = openLDAPHelper.PagedSearch(
searchFilter,
attributesToLoad);
foreach (var searchResultEntryCollection in pagedSearchResults)
foreach (SearchResultEntry searchResultEntry in searchResultEntryCollection)
{
Console.WriteLine(searchResultEntry.Attributes["uid"][0] + ": " +
searchResultEntry.Attributes["cn"][0]);
Console.WriteLine(searchResultEntry.Attributes["userPassword"][0]);
Console.WriteLine(".......");
}
}
catch (Exception exp)
{
Console.WriteLine(exp.Message);
Console.WriteLine(exp.StackTrace);
}
Console.WriteLine("Presione una tecla para terminar...");
Console.Read();
}
```
You can get the sample [here](https://bitbucket.org/re_al_/real.test.openldap) | re-al- |
1,896,383 | An application to download images from a web page | Some time ago I needed to download the images from a web page. This would be a simple... | 0 | 2024-06-21T20:32:15 | https://dev.to/re-al-/aplicacion-para-descargar-imagenes-de-una-pagina-web-35k0 | csharp, projects | Some time ago I needed to download the images from a web page. This would have been a simple task if there hadn't been more than 50 images on that page. So I decided to build a desktop application to do that work for me.
You can find the application and the source code [here](https://bitbucket.org/re_al_/real.downloadwebimages).
Let me know how it goes. | re-al- |
1,896,382 | Displaying a PDF file hosted on an FTP server with C# | We have an FTP repository where several offices in different locations store PDF files that... | 0 | 2024-06-21T20:30:59 | https://dev.to/re-al-/mostrar-archivo-pdf-alojado-en-un-servidor-ftp-con-c-27f3 | csharp, ftp, pdf | We have an FTP repository where several offices in different locations store the PDF files they generate with information about their work. Basically, everyone can access the FTP server and consult the uploaded documents.
However, a new requirement called for reviewing and grading each document. For that, the web system in use needed to display the PDF files straight from the FTP server (without downloading them all to the web server).
After some searching I found partial solutions, but in the end I arrived at this block of code that let me display the required file:
```csharp
FileInfo objFile = new FileInfo(filename);
FtpWebRequest request = (FtpWebRequest)WebRequest.Create(new Uri("ftp://" + ftpServerIP + "/" + filename));
request.Credentials = new NetworkCredential(Ftp_Login_Name, Ftp_Login_Password);
byte[] bytes;
// Download the file from the FTP server into memory.
using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (var memstream = new MemoryStream())
{
    responseStream.CopyTo(memstream);
    bytes = memstream.ToArray();
}
// Send the PDF to the browser for inline display.
Response.Clear();
Response.ClearHeaders();
Response.ClearContent();
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Disposition", "inline; filename=" + objFile.Name);
Response.BinaryWrite(bytes);
Response.End();
```
| re-al- |
1,896,381 | ClosedXML and the use of templates | To make the task of creating reports or documents in XLS or XLSX format easier, we have the package... | 0 | 2024-06-21T20:29:26 | https://dev.to/re-al-/closedxml-y-el-uso-de-plantillas-m06 | netframework, corenet, closedxml, excel | To make the task of creating reports or documents in XLS or XLSX format easier, we have the ClosedXML package. It also supports the use of templates.
The first thing we have to do is install the NuGet package in our project:
```
PM> Install-Package ClosedXML
```
Then, the block of code needed to produce an Excel file from a template is:
```csharp
var strTitulo = "Titulo del reporte";
var dtReporte = ObtenerDatosReporte(); //Function that returns a DataTable
//Define the template and open it with the ClosedXML library
var template = Server.MapPath("~/doc_templates/Reporte.xlsx");
using (var wb = new XLWorkbook(template))
{
//Set some values in the document
wb.Worksheets.Worksheet(1).Cell(5, 1).Value = strTitulo;
//We can insert a DataTable
wb.Worksheets.Worksheet(1).Cell(9, 1).InsertTable(dtReporte);
//Apply filters and formatting to the table
wb.Worksheets.Worksheet(1).Table("Table1").ShowAutoFilter = true;
wb.Worksheets.Worksheet(1).Table("Table1").Style.Alignment.Vertical =
XLAlignmentVerticalValues.Center;
wb.Worksheets.Worksheet(1).Columns(2, 2 + dtReporte.Columns.Count).AdjustToContents();
//Cap column widths at 60
foreach (var column in wb.Worksheets.Worksheet(1).Columns())
if (column.Width > 60)
{
column.Width = 60;
column.Style.Alignment.WrapText = true;
}
wb.Style.Alignment.Horizontal = XLAlignmentHorizontalValues.Center;
wb.Style.Font.Bold = true;
//Send the file to the client
Response.Clear();
Response.Buffer = true;
Response.Charset = "";
Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
Response.AddHeader("content-disposition", "attachment;filename=\"" + strTitulo + ".xlsx\"");
using (var myMemoryStream = new MemoryStream())
{
wb.SaveAs(myMemoryStream);
myMemoryStream.WriteTo(Response.OutputStream);
Response.Flush();
Response.End();
}
}
```
To access the project documentation, you can visit its [repository](https://github.com/closedxml/closedxml) | re-al- |
1,896,380 | Generate random hexadecimal colors with Postgres | Today I needed to fetch several dates from a table in PostgreSQL and display each... | 0 | 2024-06-21T20:25:18 | https://dev.to/re-al-/generar-colores-hexadecimales-aleatorios-con-postgres-1ihd | sql, postgres, hex | Today I needed to fetch several dates from a table in PostgreSQL and display each one with a different color in a web system.
My first option was to look for a PHP function that would let me generate random hexadecimal colors, but since I have more experience with PL/pgSQL, I decided to start there.
First things first: we all know a hexadecimal color is the representation of 3 numbers in the 0-255 range converted to base 16. So that is exactly where I started: generating a random number in the 0 to 255 range.
```sql
SELECT trunc(random() * 255)::INTEGER
```
Now we need a function that converts a number from base 10 to base 16. For that, we need the following function:
```sql
CREATE OR REPLACE FUNCTION b10_b16(digits bigint, min_width integer DEFAULT 0)
RETURNS character varying AS
$BODY$
DECLARE
chars char[];
ret varchar;
val bigint;
BEGIN
chars:=ARRAY['0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'];
val := digits;
ret := '';
IF val < 0 THEN
val := val * -1;
END IF;
WHILE val != 0 LOOP
ret := chars[(val % 16)+1] || ret;
val := val / 16;
END LOOP;
IF min_width > 0 AND char_length(ret) < min_width THEN
ret := lpad(ret, min_width, '0');
END IF;
RETURN ret;
END;
$BODY$
LANGUAGE plpgsql IMMUTABLE
COST 100;
```
This function is used like so:
```sql
SELECT b10_b16(45);
--Returns 2D
SELECT b10_b16(11);
--Returns B
SELECT b10_b16(11,2);
--Returns 0B
```
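For readers who prefer to prototype outside the database, the same digit-by-digit conversion can be sketched in JavaScript (the function and variable names here are mine, purely illustrative):

```javascript
// Illustrative JavaScript port of the b10_b16 function above.
function b10b16(digits, minWidth = 0) {
  const chars = "0123456789ABCDEF";
  let val = Math.abs(digits); // same effect as the val * -1 branch
  let ret = "";
  while (val !== 0) {
    ret = chars[val % 16] + ret; // take the lowest base-16 digit
    val = Math.floor(val / 16);  // drop it and continue
  }
  if (minWidth > 0 && ret.length < minWidth) {
    ret = ret.padStart(minWidth, "0"); // same as lpad(ret, min_width, '0')
  }
  return ret;
}

// Random color, mirroring the final SELECT:
const color = "#" + Array.from({ length: 3 }, () =>
  b10b16(Math.trunc(Math.random() * 255), 2)
).join("");
```

As a side note, Postgres also ships a built-in `to_hex(integer)` function, which combined with `lpad` can replace `b10_b16` (it returns lowercase hex).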
With all this we have the building blocks to generate a random color:
```sql
SELECT '#' ||
b10_b16(trunc(random() * 255)::INTEGER,2) ||
b10_b16(trunc(random() * 255)::INTEGER,2) ||
b10_b16(trunc(random() * 255)::INTEGER,2) AS color
--Returns something like: #69A51D
```
And that's it!!!
We now have a function that generates hexadecimal colors RANDOMLY!!! | re-al- |
1,896,378 | Modernizing Legacy Applications with Ballerina | https://www.meetup.com/austin-developer-community/events/301626607/?utm_medium=referral&utm_campa... | 0 | 2024-06-21T20:19:15 | https://dev.to/harsha_thirimanna_39edfd6/modernizing-legacy-applications-with-ballerina-1hei | https://www.meetup.com/austin-developer-community/events/301626607/?utm_medium=referral&utm_campaign=share-btn_savedevents_share_modal&utm_source=linkedin
 | harsha_thirimanna_39edfd6 | |
1,896,377 | Todo App in 3 hours | Creating a Todo app with Refine and Supabase This article will cover the technical aspects... | 0 | 2024-06-21T20:14:42 | https://dev.to/stefan_hodoroaba/todo-app-in-3-hours-4f33 | refine, react, supabase | ## Creating a Todo app with Refine and Supabase
This article will cover the technical aspects of how I made a Todo app in a few hours using Refine and Supabase. I tried to take a few detours from the official way of doing things to showcase a few possible ways one can achieve the same result.
All the code is available at https://github.com/TheEmi/TodoRefine
You can see an instance of the app running at https://todo-app-rosy-pi-29.vercel.app/
## Getting started
To begin, we will head to [refine.new](https://refine.new), a quick tool for creating a Refine project with the standard integrations already written. Select Vite as the React platform, Ant Design as the UI framework, and Supabase for both backend and authentication. This will create a template that we can download and open to get a head start on our project.
Next, we will go to [supabase.com](https://supabase.com) and create an account and a project.
To connect our Supabase instance we will store our SUPABASE_URL and SUPABASE_ANON_KEY in a .env file to avoid publishing this on GitHub.
Here is what the .env in the root of the directory looks like:
```
VITE_SUPABASE_URL=https://vlkudqvakedjdcnakadcnejj.supabase.co
VITE_SUPABASE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6InZsa3VkcXZxdWd6b189dnNjKJsdn6ImFub24iLCJpYXQiOjE3MTg0NTk1OTQsImV4cCI6MjAzNDAzNTU5NH0.7IRAWj13jrkiMCB_sUBEWJCd44XrC0SvZX-aHCO7ogw
```
Next, we can use the environment variables in the ./utility/supabaseClient.ts:
```ts
const SUPABASE_URL = import.meta.env.VITE_SUPABASE_URL;
const SUPABASE_KEY = import.meta.env.VITE_SUPABASE_KEY;
```
## What do you want the project to do?
Our configuration is done and now we need to have a clear vision of what our project will accomplish so we can model our database and build the views we need.
I needed a Todo app to track daily activities and allow me to check tasks I've done. There are hundreds of other apps but the advantage is that I can customize it however I see fit. That being said I want 3 main sections:
1. Daily Todo
2. Calendar with all days
3. Default items
Here is a bit more detail:
1. When we first log in I want to see today's checklist of activities I need to do. This list should be customizable and should contain a checkbox for when an activity is finished.
2. A calendar view to navigate between days and plan activities ahead. Clicking a day will go to the same view we have at 1.
3. Default items that will be allocated to each day created. This should be an editable list containing all recurring habits.
## Database modeling
Now that we have a clear view of what we need we can create tables and assign fields to them in Supabase.
I will create 2 tables:
1. todo: To store the activities of all days and all users.
```sql
create table
public.todo (
id bigint generated by default as identity,
created_at timestamp with time zone not null default now(),
user_id uuid null default auth.uid (),
todo json null default '{}'::json,
day date null,
constraint todo_pkey primary key (id)
) tablespace pg_default;
```
2. default_todo: To store default habits (one per user).
```sql
create table
public.default_todo (
id bigint generated by default as identity,
created_at timestamp with time zone not null default now(),
user_id uuid null default auth.uid (),
defaults json null default '{}'::json,
constraint default_todo_pkey primary key (id)
) tablespace pg_default;
```
Now we can create a default_todo row when a user registers by creating a database function and trigger on auth.users:
```sql
CREATE
OR REPLACE FUNCTION insert_default_todo () RETURNS TRIGGER AS $$
begin insert into public.default_todo (user_id, defaults)
values (NEW.id, '{"items":[]}');
return NEW;
end;
$$ LANGUAGE plpgsql security definer;
CREATE TRIGGER insert_default_todo_trigger
after insert on auth.users
for each row execute procedure public.insert_default_todo();
```
Don't forget about policies! This is where user data filtering will happen. We can set RLS (Row level security) to only allow **insert** for authenticated users:
```sql
alter policy "Enable insert for authenticated users only"
on "public"."todo"
to authenticated
with check (
true
);
```
And data access for **select** and **update** only where `user_id` is the same as the authenticated user id (`auth.uid()`)
```sql
alter policy "Enable select for users based on user_id"
on "public"."todo"
to public
using (
(( SELECT auth.uid() AS uid) = user_id)
);
```
The last thing we will do on Supabase is create a database function to showcase them. In the SQL Editor we can write a function like this:
```sql
CREATE
OR REPLACE FUNCTION get_todo_by_day (current_day DATE ) RETURNS TABLE (
id BIGINT,
todo JSON,
DAY DATE
) AS $$
BEGIN
RETURN QUERY
SELECT t.id, t.todo, t.day
FROM todo as t
WHERE t.day = current_day;
END;
$$ LANGUAGE plpgsql;
```
This function takes a DATE as a parameter to return the row for that day. This is replaceable by a select where(day = parameter_day) but we want to experiment with RPC functions.
## Refine
We can now start on the frontend by first defining our Refine component in the App.tsx file. This is what tells Refine where to access its data. For us, we will simply use it to define the pages we want to see in the layout and implement a custom data query. Note the data, auth, router, and notification provider are also defined here.
```html
<Refine
dataProvider={dataProvider(supabaseClient)}
authProvider={authProvider}
routerProvider={routerBindings}
notificationProvider={useNotificationProvider}
resources={[
{
name: "today",
list: "/today",
meta: {
label: "Today"
}
},
{
name: "todo",
list: "/todo",
meta: {
label: "Calendar"
}
},
{
name: "default_todo",
list: "/default_todo",
meta: {
label: "Default Todo"
}
},
]}
options={{
syncWithLocation: true,
warnWhenUnsavedChanges: true,
useNewQueryKeys: true,
projectId: "8BijMs-4mgC2j-TUz0Rv",
title: { text: "Todo App", icon: <CheckSquareOutlined/> },
}}
>
```
Another change we need in the App.tsx file is to define the components for our routes.
```html
<Route path="/today">
<Route index element={<TodayTodo />} />
</Route>
<Route path="/todo">
<Route index element={<TodoCalendar />} />
</Route>
<Route path="/default_todo">
<Route index element={<Default_todo />} />
</Route>
```
We can see the pages in the Sider of our Layout and accessing them will load our page components. Now let's move on to actually writing the code for each page!
### Default Todo items
To fetch the data we can call `useList({ resource: "default_todo" })` and access the first element knowing each user will only have one default_todo row.
This page needs to display current items, with the possibility of adding and removing items. Luckily, Ant Design has the Form.List component that implements this. We just need to combine it with whatever style we want to achieve. I chose to use a Table instead of mapping over all fields of the Form. Here is a basic hierarchy to better understand this:
```html
<Form>
<Form.List>
{(fields, operator) =>{
return (<>
<Table dataSource={fields}>
............ Column configuration ..............
</Table>
<Button onClick={() => operator.add()} />
</>
);
}
}
</Form.List>
</Form>
```

### Today's todo
We will first begin by returning the current day from the todo table by using the database function we created.
```js
supabaseClient
.rpc("get_todo_by_day", { current_day: dayjs().format("YYYY-MM-DD") })
.then((data) => {});
```
If this returns an empty array, we fetch the default_todo items for the current user, create an entry in the todo table, and set the form to display those items as well.
```js
supabaseClient.from("default_todo").select("*").then((data) => {
if(data.data){
mutateCreate({
resource: "todo",
values: {
todo: {items: data.data[0].defaults.items},
day: dayjs().format("YYYY-MM-DD") },
});
form.setFieldsValue({ defaults: data.data[0].defaults.items });
}
});
```

### Calendar view
We will begin by adding a Calendar component from Ant Design and define the onSelect and cellRender properties.
```html
<Calendar onSelect={handleSelect} cellRender={cellRender} />
```
cellRender will filter the data for any given calendar cell and return a list with the items on that day.
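The per-cell lookup inside `cellRender` can be sketched roughly like this (the row shape follows the `todo` table above; the sample data and function name are illustrative, not the article's actual code):

```javascript
// Hypothetical sketch of the per-cell lookup used by cellRender.
const rows = [
  { day: "2024-06-21", todo: { items: [{ name: "gym", checked: false }] } },
  { day: "2024-06-22", todo: { items: [{ name: "read", checked: true }] } },
];

// Return the todo items stored for one calendar day (YYYY-MM-DD).
function itemsForDay(rows, day) {
  const row = rows.find((r) => r.day === day);
  return row ? row.todo.items : [];
}
```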
handleSelect will just set the selected date to pass it to a child component:
```html
<SpecificTodo day={selectedDay} returnToCalendar={callbackReturn} />
```
This component is a copy of today's todo component, with the option to return to the calendar via a callback function.

With this we have a functional Todo app with authentication, user-specific data filtering, and infinite opportunities for the future, in just 3 hours of configuring and coding.
Hope this helps anyone who wanted to take Refine or Supabase for a spin! | stefan_hodoroaba |
1,896,376 | 6 Captivating Linux Command Line Tutorials from LabEx 🐧 | The article is about a captivating collection of six Linux command line tutorials from the LabEx platform. It introduces readers to a wide range of topics, including mastering Linux processes, streamlining text manipulation with the `nl` command, conquering virtual battlefields through directory mastery, unveiling secrets with text display, showcasing system elegance with Neofetch, and unlocking the power of shell variables. Each tutorial is presented with a compelling narrative and a link to the corresponding LabEx lesson, inviting readers to embark on a journey of Linux command line exploration and skill development. | 27,674 | 2024-06-21T20:13:08 | https://dev.to/labex/6-captivating-linux-command-line-tutorials-from-labex-ec8 | linux, coding, programming, tutorial |
Welcome to an exciting journey through the realm of Linux command line mastery! LabEx, the premier platform for hands-on technical education, has curated a collection of six captivating tutorials that will empower you to navigate the Linux ecosystem with confidence and finesse.
## Unravel the Mysteries of Linux Processes 🕰️
Dive into the ancient Empire of Linutopia, where the wise Emperor Forkius Maximus reigns supreme. Explore the mystical prowess of signal handlers and Process Oracles as you learn to master the art of Linux process waiting in our [Linux Process Waiting](https://labex.io/labs/271433) tutorial.
## Streamline Your Text Manipulation with the `nl` Command 📄
Discover the power of the `nl` command in Linux, and learn how to efficiently number lines in your text files with our [Linux nl Command: Line Numbering](https://labex.io/labs/210988) tutorial.
## Conquer the Virtual Battlefield with Directory Mastery 🗂️
As an elite Virtual Reality Military Strategist, you'll navigate through a myriad of data, directories, and software tools in our [Linux Directory Displaying](https://labex.io/labs/271365) tutorial, ensuring a streamlined flow of operations within your digital command center.
## Unveil the Secrets of the Enchanted Forest with Text Display 📖
Step into the role of the Forest's Arcane Hunter and harness the power of Linux commands to unveil the secrets hidden in plain sight, as you explore our [Linux Text Display](https://labex.io/labs/271273) tutorial.
## Showcase Your System's Elegance with Neofetch 🖥️
Learn how to use Neofetch, a command-line tool that displays aesthetic information about your system and its configuration, in our [Display OS Info Stylishly with Neofetch](https://labex.io/labs/299825) tutorial.
## Unlock the Power of Shell Variables 💻
Delve into the world of shell variables and discover how to store and manipulate data in shell scripts, as you work through our [Working with Shell Variables](https://labex.io/labs/153894) tutorial.
Embark on this captivating journey through the Linux command line, and unlock the full potential of your system with the help of these expertly crafted tutorials from LabEx. Happy learning! 🎉
---
## Want to learn more?
- 🌳 Learn the latest [Linux Skill Trees](https://labex.io/skilltrees/linux)
- 📖 Read More [Linux Tutorials](https://labex.io/tutorials/category/linux)
- 🚀 Practice thousands of programming labs on [LabEx](https://labex.io)
Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄 | labby |
1,896,375 | CVPR Edition: Voxel51 Filtered Views Newsletter - June 21, 2024 | This week's CVPR conference was AWESOME! Here's a quick spotlight on the papers we found insightful... | 0 | 2024-06-21T20:11:30 | https://voxel51.com/blog/voxel51-filtered-views-newsletter-june-21-2024/ | computervision, machinelearning, ai, datascience | This week's CVPR conference was AWESOME! Here's a quick spotlight on the papers we found insightful at this year's show.
## 📙 Good Reads by Jacob Marks
### 🔥 CVPR 2024 Paper Spotlight: CoDeF 🔥
Recent progress in video editing/translation has been driven by techniques like Tune-A-Video and FateZero, which utilize text-to-image generative models.
Because a generative model (with inherent randomness) is applied to each frame in input videos, these methods are susceptible to breaks in temporal consistency.
Content Deformation Fields (CoDeF) overcome this challenge by representing any video with a flattened canonical image, which captures the textures in the video, and a deformation field, which describes how each frame in the video is deformed relative to the canonical image. This allows for image algorithms like image translation to be “lifted” to the video domain, applying the algorithm to the canonical image and propagating the effect to each frame using the deformation field.
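In symbols (notation mine, not necessarily the paper's): with canonical image $\mathcal{C}$ and per-frame deformation field $\mathcal{D}_t$, each frame is reconstructed as

$$I_t(\mathbf{x}) \approx \mathcal{C}\big(\mathcal{D}_t(\mathbf{x})\big),$$

so an image algorithm applied once to $\mathcal{C}$ propagates to every frame $I_t$ through $\mathcal{D}_t$.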
Through lifting image translation algorithms, CoDeF achieves unprecedented cross-frame consistency in video-to-video translation. CoDeF can also be applied for point-based tracking (even with non-rigid entities like water), segmentation-based tracking, and video super-resolution!
- [Arxiv](https://arxiv.org/abs/2308.07926)
- [Project page](https://qiuyu96.github.io/CoDeF/)
- [GitHub Repo](https://github.com/qiuyu96/CoDeF)
### 🔥 CVPR 2024 Paper Spotlight: Depth Anything 🔥
How do you estimate depth using just a single image? Technically, calculating 3D characteristics of objects like depth requires comparing images from multiple perspectives — humans, for instance, perceive depth by merging images from two eyes.
Computer vision applications, however, are often constrained to a single camera. In these scenarios, deep learning models are used to estimate depth from one vantage point. Convolutional neural networks (CNNs) and, more recently, transformers and diffusion models employed for this task typically need to be trained on highly specific data.
Depth Anything revolutionizes relative and absolute depth estimation. Like Meta AI’s Segment Anything, Depth Anything is trained on an enormous quantity and diversity of data — 62 million images, giving the model unparalleled generality and robustness for zero-shot depth estimation, as well as state-of-the-art fine-tuned performance on datasets like NYUv2 and KITTI. (the video shows raw footage, MiDaS – previous best, and Depth Anything)
The model uses a Dense Prediction Transformer (DPT) architecture and is already integrated into [Hugging Face](https://www.linkedin.com/company/huggingface/)‘s Transformers library and FiftyOne!
- [Arxiv](https://arxiv.org/abs/2401.10891)
- [Project page](https://depth-anything.github.io/)
- [GitHub](https://github.com/LiheYoung/Depth-Anything)
- [Depth Anything Transformers Docs](https://huggingface.co/docs/transformers/model_doc/depth_anything)
- [Monocular Depth Estimation Tutorial](https://medium.com/towards-data-science/how-to-estimate-depth-from-a-single-image-7f421d86b22d)
- [Depth Anything FiftyOne Integration](https://docs.voxel51.com/tutorials/monocular_depth_estimation.html#Hugging-Face-Transformers-Integration)
### 🔥 CVPR 2024 Paper Spotlight: YOLO-World 🔥
Over the past few years, object detection has been cleanly divided into two camps.
Real-time closed-vocabulary detection:
Single-stage detection models like those from the You-Only-Look-Once (YOLO) family made it possible to detect objects from a pre-set list of classes in mere milliseconds on GPUs.
Open-vocabulary object detection:
Transformer-based models like Grounding DINO and Owl-ViT brought open-world knowledge to detection tasks, giving you the power to detect objects from arbitrary text prompts, at the expense of speed.
YOLO-World bridges this gap! YOLO-World uses a YOLO backbone for rapid detection and introduces semantic information via a CLIP text encoder. The two are connected through a new lightweight module called a Re-parameterizable Vision-Language Path Aggregation Network.
What you get is a family of strong zero-shot detection models that can process up to 74 images per second! YOLO-World is already integrated into Ultralytics (along with YOLOv5, YOLOv8, and YOLOv9), and FiftyOne!
- [Arxiv](https://arxiv.org/abs/2401.17270)
- [Project page](https://www.yoloworld.cc/)
- [GitHub](https://github.com/AILab-CVC/YOLO-World?tab=readme-ov-file)
- [YOLO-World Ultralytics Docs](https://docs.ultralytics.com/models/yolo-world/)
- [YOLO-World FiftyOne Docs](https://docs.voxel51.com/integrations/ultralytics.html#open-vocabulary-detection)
### 🔥 CVPR 2024 Paper Spotlight: DeepCache 🔥
Diffusion models dominate the discourse regarding visual genAI these days — Stable Diffusion, Midjourney, DALL-E3, and Sora are just a few of the diffusion-based models that produce breathtakingly stunning visuals.
If you’ve ever tried to run a diffusion model locally, you’ve probably seen for yourself how these models can be pretty slow. This is because diffusion models iteratively try to denoise an image (or other state), meaning that many sequential forward passes through the model must be made.
DeepCache accelerates diffusion model inference by up to 10x with minimal quality drop-off. The technique is training-free and works by leveraging the fact that high-level features are fairly consistent throughout the diffusion denoising process. By caching these once, this computation can be saved in subsequent steps.
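The caching pattern can be caricatured in a few lines; this toy has nothing to do with the actual U-Net and every name is invented, it only illustrates the control flow of "recompute the expensive part every N steps, reuse it in between":

```javascript
// Toy illustration of the DeepCache idea: "deep" features change slowly
// across denoising steps, so recompute them only every `interval` steps
// and reuse the cached value in between.
function makeDenoiser(deepFeatures, shallowStep, interval) {
  let cache = null;
  return function step(x, t) {
    if (cache === null || t % interval === 0) {
      cache = deepFeatures(x); // full (expensive) forward pass
    }
    return shallowStep(x, cache); // cheap pass reusing cached features
  };
}

// Tiny driver with a call counter standing in for the expensive pass.
let deepCalls = 0;
const denoise = makeDenoiser(
  (x) => { deepCalls += 1; return x * 2; }, // pretend "deep" features
  (x, f) => x + f,                          // pretend "shallow" update
  3                                         // recompute every 3 steps
);
let x = 1;
for (let t = 0; t < 6; t += 1) x = denoise(x, t);
// the expensive pass ran only at t = 0 and t = 3 (2 of 6 steps)
```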
- [Arxiv](https://arxiv.org/abs/2312.00858)
- [Project page](https://horseee.github.io/Diffusion_DeepCache/)
- [GitHub](https://github.com/horseee/DeepCache?tab=readme-ov-file)
- [DeepCache Diffusers Docs](https://huggingface.co/docs/diffusers/main/en/optimization/deepcache)
### 🔥 CVPR 2024 Paper Spotlight: PhysGaussian 🔥
I’m a sucker for some physics-based machine learning, and this new approach from researchers at [UCLA](https://www.linkedin.com/company/ucla/), [Zhejiang University](https://www.linkedin.com/company/zhejiang-university/), and the [University of Utah](https://www.linkedin.com/company/university-of-utah/) is pretty insane.
3D Gaussian splatting is a rasterization technique that generates realistic new views of a scene from a set of photos or an input video. It has rapidly risen to prominence because it is simple, trains relatively quickly, and can synthesize novel views in real time.
However, to simulate dynamics (which involves motion synthesis), views generated by Gaussian splatting had to be converted into meshes before physical simulation and final rendering could be performed.
PhysGaussian cuts through these intermediate steps by embedding physical concepts like stress, plasticity, and elasticity into the model itself. At a high level, the model leverages the deep relationships between physical behavior and visual appearance, following Nvidia’s “what you see is what you simulate” (WS2) approach.
Very excited to see where this line of work goes!
- [Arxiv](https://arxiv.org/abs/2311.12198)
- [Project page](https://xpandora.github.io/PhysGaussian/)
## 🎙️ Good Listens - All About CVPR!

LINK - [YouTube](https://youtu.be/dnWuyrfsHKM?si=-C8yOD8AyRSGA6c5)

LINK - [YouTube](https://youtu.be/AgXodz7fOHo?si=PSpZUbeuWbz4RRRd)

LINK - [YouTube](https://youtu.be/jZg-vsG8alU?si=8-vSSMJVzcpXL3u6)

LINK - [YouTube](https://youtu.be/-Pu6uifjABU?si=MvVQSjBJsXOidbB3)
## 🗓️. Upcoming Events
Check out these upcoming AI, machine learning and computer vision events! [View the full calendar and register for an event](https://voxel51.com/computer-vision-events/).

| jguerrero-voxel51 |
1,896,351 | 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 𝐃𝐢𝐠𝐢𝐭𝐚𝐥 𝐌𝐚𝐫𝐤𝐞𝐭𝐩𝐥𝐚𝐜𝐞 𝐓𝐮𝐫𝐧𝐬 𝟐𝟐! | I think everybody knows I have joined the community of authors offering website designs and... | 0 | 2024-06-21T19:18:11 | https://dev.to/hasnaindev1/-1l10 | tmbday22, website, webdev, wordpress |

I think everybody knows I have joined the community of authors offering website designs and creations in the [𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 𝐝𝐢𝐠𝐢𝐭𝐚𝐥 𝐦𝐚𝐫𝐤𝐞𝐭𝐩𝐥𝐚𝐜𝐞](https://rebrand.ly/templates-marketplace). This year, the company turns 22! And I couldn't have missed the occasion to share my greetings with everyone involved in this amazing project. I congratulate everyone standing behind it and wish them another year of success and prosperity! Besides, I want to greet fellow authors and wish them plenty of inspiration to bring their creative ideas to life! 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 breaks all stereotypes and proves that no matter how old your business is, as long as you keep up with the trends and follow quality standards, you will always be in demand.
𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 𝐃𝐢𝐠𝐢𝐭𝐚𝐥 𝐌𝐚𝐫𝐤𝐞𝐭𝐩𝐥𝐚𝐜𝐞 - 𝐇𝐢𝐬𝐭𝐨𝐫𝐢𝐜𝐚𝐥 𝐇𝐢𝐠𝐡𝐥𝐢𝐠𝐡𝐭𝐬
𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 was officially launched in 2002. However, their story began much earlier than that. In 1998, the company's founder joined forces with several like-minded web designers to create custom websites for their clients. As the number of clients and their requests grew, the small web design studio shifted to a new approach to work. Rather than developing custom projects, they started creating reusable website templates. The approach proved successful, and the company completely shifted to the new business model.

22 years is a long time for any digital company. During its history, 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 has seen rises and falls, impressive successes, and lawsuits. They have also switched from selling web design assets created by their in-house developers to a marketplace that everyone can join to offer their creative works for download. If you are a web developer wondering how to make passive income online, you should become an author at the 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 digital marketplace.
𝐔𝐥𝐭𝐢𝐦𝐚𝐭𝐞 𝐑𝐞𝐩𝐨𝐬𝐢𝐭𝐨𝐫𝐲 𝐨𝐟 𝐖𝐞𝐛𝐬𝐢𝐭𝐞 𝐃𝐞𝐬𝐢𝐠𝐧 𝐚𝐧𝐝 𝐂𝐫𝐞𝐚𝐭𝐢𝐨𝐧
I joined the 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 Digital Marketplace as an author in 2022. Since then, I have published 63 products there and made 133 sales. My creative works include 𝐖𝐨𝐫𝐝𝐏𝐫𝐞𝐬𝐬 and 𝐖𝐨𝐨𝐂𝐨𝐦𝐦𝐞𝐫𝐜𝐞 themes, Shopify themes, and UI elements.

However, 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 is more than a repository of WordPress designs. This is where you can find everything needed for your website design and creation. The inventory features impressive templates compatible with the latest versions of popular CMS and e-commerce platforms. It also offers stunning graphics, audio and video assets, plugins, and expert web development services. All products published in the marketplace undergo a thorough review before they become available for download.
𝐌𝐨𝐫𝐞 𝐈𝐬 𝐘𝐞𝐭 𝐭𝐨 𝐂𝐨𝐦𝐞!
𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫, I want to express my admiration for your 22 years of success and prosperity. Your journey has been inspiring, and I'm proud to be a part of it. You've provided a platform for creative developers like me to showcase our work and helped us find new clients, expand our reach, and earn passive income. Here's to many more years of your success! As a token of my appreciation for your support and trust, I'm thrilled to offer my precious readers and followers an exclusive opportunity. You can now download my works from the [𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 Author Store](https://rebrand.ly/author-store) at a special 7% discount. Just use the promo code 𝐇𝐚𝐬𝐧𝐚𝐢𝐧𝐃𝐞𝐯𝐞𝐥𝐨𝐩𝐞𝐫. But remember, this offer is time-sensitive and valid only until July 31, 2024. So, don't miss out on this great deal!
𝐄𝐱𝐩𝐥𝐨𝐫𝐞 𝐓𝐞𝐦𝐩𝐥𝐚𝐭𝐞𝐌𝐨𝐧𝐬𝐭𝐞𝐫 𝐌𝐚𝐫𝐤𝐞𝐭𝐩𝐥𝐚𝐜𝐞: https://rebrand.ly/templates-marketplace
𝐄𝐱𝐩𝐥𝐨𝐫𝐞 𝐌𝐲 𝐀𝐮𝐭𝐡𝐨𝐫 𝐒𝐭𝐨𝐫𝐞: https://rebrand.ly/author-store | hasnaindev1 |
1,896,372 | What is the difference between Library and Framework | While the terms "framework" and "library" are usually used interchangeably in software development,... | 0 | 2024-06-21T20:04:21 | https://dev.to/chintamani_pala/what-is-the-difference-between-library-and-framework-1b8g | webdev, react, angular, javascriptlibraries | While the terms "**framework**" and "**library**" are usually used interchangeably in software development, they relate to two concepts that serve different purposes, with distinctly different implications for their usage in a project. Here are the major differences:

## Framework
- **Inversion of Control:**
Definition: A framework dictates the structure and flow of an application. It calls your code, rather than the reverse.
Example: When using a framework, you follow its conventions and integrate your code into the framework's lifecycle. If you were writing a web application with Django or Ruby on Rails, you would define classes for models, views, and controllers the way the framework prescribes, and the framework would be responsible for how all of the pieces interact.
- **Structure and Convention:**
Definition: A framework provides predefined structures for your application, together with best practices, design patterns, and conventions.
Example: The Angular framework prescribes how to structure your code into modules, components, and services. This keeps the codebase uniform and makes the application more scalable.
- **Fully Functional:**
Definition: Frameworks often ship a wide range of built-in functionality covering many aspects of application development, from database handling to user authentication.
Example: The Spring Framework for Java provides built-in features for web development, security, and database interaction, so you rarely need to wire in many additional libraries yourself.

## Library

- **You Stay in Control:**
Definition: A library is a collection of functions or classes that you call at will to perform specific tasks. You are in control of when and how to use it.
Example: When you use a library, you write the main flow of your application and call library functions as needed. With Lodash for JavaScript, for instance, you call utility functions inline in your code.
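The inversion-of-control contrast can be sketched in a few lines of JavaScript (every name below is made up purely for illustration):

```javascript
// Library style: your code owns the flow and calls helpers when it wants.
function titleCase(s) {
  return s.replace(/\b\w/g, (c) => c.toUpperCase());
}
const fromLibrary = titleCase("hello world"); // you decide when this runs

// Framework style (toy): you register a handler and the "framework"
// decides when to invoke it — inversion of control.
const routes = {};
function register(path, handler) { routes[path] = handler; }
function dispatch(path) { return routes[path](); } // the framework calls you
register("/greet", () => "Hello from the handler");
const fromFramework = dispatch("/greet");
```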
- **Focused Functionality:**
Definition: Most libraries focus on a narrow range of functionality, providing tools that help you accomplish specific tasks.
Example: React, a JavaScript library for building user interfaces, is concerned only with rendering UI components. It does not enforce an application architecture or handle other concerns like routing or state management, unless you add other libraries such as React Router or Redux.
- **Loose Coupling :**
Definition: Libraries are used independently of the application's overall structure and are therefore loosely coupled with it, so you can combine multiple libraries as needed.
Example: A Node.js application could use Express for routing, Mongoose for MongoDB, and Passport for authentication. Each has a specific role, and none enforces a strict structure on the application.
## Summary
Framework: It guides the overall structure and flow, enforces conventions, and includes broad, comprehensive functionality. You plug your code into the framework.
Library: It provides certain functionality, called directly from your code, which enables more control over the structure and flow of applications. | chintamani_pala |
1,851,942 | Worker Pool Design Pattern Explanation | This entry is about a design pattern of which, at the time, I found little to none information in... | 0 | 2024-06-19T15:48:10 | https://coffeebytes.dev/en/worker-pool-design-pattern-explanation/ | designpatterns, go, algorithms, performance | ---
title: Worker Pool Design Pattern Explanation
published: true
date: 2024-06-21 20:00:00 UTC
tags: designpatterns,Go,algorithms,performance
canonical_url: https://coffeebytes.dev/en/worker-pool-design-pattern-explanation/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yygshkjmn5at4co4nui7.jpg
---
This entry is about a design pattern on which, at the time, I found little to no information in Spanish.
Imagine that you have a number of concurrent tasks that you want to perform, either crawling many websites, or perhaps processing information from each of the pixels of an image or anything else you can think of.
The simplistic option is to create a series of workers and use them concurrently, something like this pseudocode:
``` python
for job in jobs:
async process_concurrent_job()
```
This may look pretty good, at first, but it has multiple disadvantages; first, you will be creating workers without control, which can increase your program’s memory usage incredibly fast; second, you are constantly creating and destroying workers, which can be costly for your program.

_If there is no worker limit, workers will continue to be created to match the tasks_
It would be best to keep memory usage constant and avoid creating and destroying workers frequently. For this, the worker pool pattern works perfectly.
Worker pool is a [design pattern](https://coffeebytes.dev/en/design-patterns-in-software/) that comes to make up for these shortcomings.
There are developers who have used this pattern to [handle a million requests per minute in Go.](http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang)
## How does the worker pool design pattern work?
Let's start with a queue of tasks to run; these can be fixed or created dynamically. Then, instead of constantly creating and destroying multiple workers ([goroutines in the case of go](https://coffeebytes.dev/en/go-introduction-to-goroutines-and-concurrency/)), we create a **fixed number of workers** and put them in a cycle, in which they will be constantly listening for information from the queue (through a [channel in the case of languages like Go](https://coffeebytes.dev/en/go-use-of-channels-to-communicate-goroutines/)).
This way we will keep our memory management much more stable and predictable, in addition to limiting the impact of constant creation and destruction of workers.
Finally, we can optionally save the results of these tasks in a queue from which they can be read later.

### Job queue
The tasks or jobs that we want to be executed by the workers will go to a task queue or job queue. A normal queue, like any other. This can be fixed, or created on the fly by means of user interactions or other systems.

### The pool
The pool initializes and hosts the number of workers we configure; generally you will set this through a configuration file, environment variables, or other means. Each of these workers will take a task, run it, and when it becomes available again it will fetch another task from the job queue, repeating the cycle.

### The worker
The worker is in charge of executing the tasks and, as I mentioned, it will keep listening for new tasks or jobs, either permanently or until some limit we set is reached, such as the job queue running out, a given number of tasks being executed, or any other condition we declare.

The fixed number of workers will ensure that, during the entire program execution, there will be a maximum number of tasks running, which will limit the memory impact of our concurrent tasks.
### The results queue
Optionally, we can send the result of each task executed by a worker to a second queue; the result queue, which we can process later.

This design pattern is very useful when huge amounts of tasks have to be processed and we do not want to overload the system. And, as you can guess, it is quite popular and useful in programming languages that make heavy use of concurrency, such as [the Go programming language](https://coffeebytes.dev/en/go-programming-language-introduction-to-variables-and-data-types/). | zeedu_dev |
1,896,371 | The "Works on My Machine" Curse: Slaying the Productivity Dragon in Local Development | Introduction Ever spent hours coding something that works perfectly on your machine, only... | 0 | 2024-06-21T19:59:34 | https://dev.to/ssadasivuni/the-works-on-my-machine-curse-slaying-the-productivity-dragon-in-local-development-276 | developerlife, developerproductivity, devrel, cloudnative | ## Introduction
Ever spent hours coding something that works perfectly on your machine, only to see it mysteriously fail for others? The "Works on My Machine" (WOMMM) phenomenon strikes again!
We've all been there. Different operating systems, software versions, and local configurations can turn a seemingly perfect solution into a frustrating puzzle.
## Summary
**The WOMMM Productivity Drain**
WOMMM isn't just annoying, it's a productivity killer. It leads to:
- **Debugging black holes:** Time wasted chasing ghosts, trying to fix issues specific to your local setup.
- **Version control chaos:** Code that works for you breaks for others, causing merge conflicts and delays.
- **Onboarding roadblocks:** New team members struggle to get started if their environment doesn't match yours.
- **Production pandemonium:** Bugs that go undetected locally cause major headaches when deployed.
**Taming the Beast**
Don't fret over the "Works on My Machine" monster! We can achieve developer satisfaction by embracing:
- Version control to track code changes
- Standardization to use the same versions of libraries and frameworks
- Containerization to allow consistent environments that everyone can use
- Automated testing to catch bugs early before they become major issues
- Collaboration and open communication to reduce onboarding and support challenges
## The Takeaway
By taming the beast, we can create a unified development experience, boosting productivity and ensuring smooth code deployment. Remember, happy developers lead to better code, and a consistent development environment paves the way for success.
---
Let's keep the conversation going! Share your tips and WOMMM horror stories in the comments below. | ssadasivuni |
1,896,370 | Looking for a tech cofounder (golfer desired) | Hi guys, I am looking for a technical co-founder for my startup birdiefit.com. If you are or if you... | 0 | 2024-06-21T19:58:31 | https://dev.to/luka_karaula_daca8e314562/looking-for-a-techcofounder-golfer-desired-51h1 | partner, cofounder, startup | Hi guys, I am looking for a technical co-founder for my startup birdiefit.com. If you are or if you know a developer who is also a golfer, comment or DM me.
The app already has paying customers. The ideal person will take care of the technical side of things, making sure the code is neat, new features are implemented, and existing features working properly. We already have a few customers wanting to join, conditioned upon the release of new features.
Knowledge of golf is desired as we will all be in touch with our ideal customer personas - golfers.
The main feature of the app is the ability to create practice programs for months ahead of time in just a few clicks. The library consists of 700+ golf drills.
Cheers,
Luka Karaula
Reach out at info(at)birdiefit.com
#startup #cofounder #golf #business #connection | luka_karaula_daca8e314562 |
1,896,366 | The MEVN Stack: A Modern Web Development Powerhouse | Hey Dev.to community! 🌟 Let's dive into one of the hottest tech stacks of 2024: the MEVN stack.... | 0 | 2024-06-21T19:47:09 | https://dev.to/matin_mollapur/the-mevn-stack-a-modern-web-development-powerhouse-34ji | webdev, javascript, beginners, programming | Hey Dev.to community! 🌟
Let's dive into one of the hottest tech stacks of 2024: the MEVN stack. Whether you're new to web development or looking to expand your skill set, this stack has got you covered with everything from front-end to back-end, all using the mighty JavaScript.
## What is the MEVN Stack?
The MEVN stack stands for MongoDB, Express.js, Vue.js, and Node.js. This combo brings together a powerful and flexible set of tools for building dynamic web applications. Here's a quick breakdown of each component:
- **MongoDB**: A NoSQL database that's perfect for handling large amounts of unstructured data. It's schema-less, which means you can store documents in a flexible, JSON-like format.
- **Express.js**: A lightweight framework for building web applications in Node.js. It simplifies the server-side code, making it easy to handle routes, requests, and responses.
- **Vue.js**: A progressive JavaScript framework for building user interfaces. Vue.js is known for its simplicity and flexibility, making it a popular choice for both beginners and seasoned developers.
- **Node.js**: A runtime environment that lets you run JavaScript on the server side. Node.js is fast and efficient, perfect for building scalable network applications.
## Why Choose the MEVN Stack?
### 1. **Full-Stack JavaScript**
One of the biggest advantages of the MEVN stack is that it allows you to use JavaScript for both front-end and back-end development. This uniformity simplifies the development process and makes it easier to manage your codebase.
### 2. **Scalability**
Thanks to Node.js, your server-side applications can handle a large number of simultaneous connections with high performance. MongoDB's flexible schema design also makes it easy to scale your database as your application grows.
### 3. **Ease of Learning**
Each component of the MEVN stack is well-documented and has a strong community behind it. Vue.js, in particular, is celebrated for its gentle learning curve and excellent documentation, making it an ideal choice for developers who are new to modern front-end frameworks.
### 4. **Rich Ecosystem**
The MEVN stack benefits from a rich ecosystem of tools and libraries. Whether you need state management with Vuex, server-side rendering with Nuxt.js, or API management with Express middleware, there's a tool to fit your needs.
## Getting Started with MEVN
Here's a simple guide to set up a basic MEVN application:
### 1. **Set Up Your Environment**
First, make sure you have Node.js and npm installed. You can download them from [nodejs.org](https://nodejs.org).
### 2. **Initialize Your Project**
Create a new directory for your project and navigate into it:
```bash
mkdir mevn-app
cd mevn-app
npm init -y
```
### 3. **Install Dependencies**
Install the necessary packages for your application:
```bash
npm install express mongoose
npm install -g @vue/cli
vue create client
```
### 4. **Create Your Server**
Set up a basic Express server in a `server.js` file:
```javascript
const express = require('express');
const mongoose = require('mongoose');
const app = express();
mongoose.connect('mongodb://localhost:27017/mevn-app', { useNewUrlParser: true, useUnifiedTopology: true });
app.get('/', (req, res) => {
res.send('Hello MEVN Stack!');
});
app.listen(3000, () => {
console.log('Server is running on port 3000');
});
```
### 5. **Build Your Front-End**
Navigate into the `client` directory and start the Vue.js development server:
```bash
cd client
npm run serve
```
You can now start building your front-end components with Vue.js.
### 6. **Connect Front-End and Back-End**
Configure your Vue.js application to make API requests to your Express server. You can use Axios for this:
```bash
npm install axios
```
In your Vue component, you can now fetch data from your server:
```javascript
<template>
<div>
<h1>{{ message }}</h1>
</div>
</template>
<script>
import axios from 'axios';
export default {
data() {
return {
message: ''
};
},
mounted() {
axios.get('http://localhost:3000/')
.then(response => {
this.message = response.data;
});
}
};
</script>
```
## Conclusion
The MEVN stack is a fantastic choice for modern web development. It offers a streamlined development experience with JavaScript running on both the client and server sides, making it easier to build, manage, and scale your applications.
Happy coding! 🚀
Feel free to share your thoughts, questions, or your own MEVN stack projects in the comments below. Let's learn and grow together!
| matin_mollapur |
1,896,363 | 2024 and Beyond: The Evolving Role of Scriptless Test Automation in Agile Development | Test automation has become crucial to modern software development and testing lifecycles. With the... | 0 | 2024-06-21T19:42:44 | https://dev.to/sophie_wilson0412/2024-and-beyond-the-evolving-role-of-scriptless-test-automation-in-agile-development-2e6l | codeless, automation, selfhealing, scriptless | Test automation has become crucial to modern software development and testing lifecycles. With rapidly evolving end-user expectations for speed and scale, it is not feasible for organizations to continue with their traditional software testing processes. Most organizations have adopted Agile and DevOps methodologies to overcome the challenges of the traditional waterfall software development life cycle. As we enter 2024, the importance of scriptless test automation in Agile development continues to increase at an unprecedented rate. Many organizations are investing in a scriptless test automation platform [goinsta.ai] to enhance efficiency and accelerate the development lifecycle. This unique technique not only addresses the issues of traditional test automation but also aligns fully with Agile development principles.
## The Rise of Scriptless Test Automation in Agile Development
Agile methods have become the cornerstone of modern software development, focusing on collaboration, adaptability, and customer satisfaction. The rapid pace of Agile development offers distinct hurdles for standard test automation methods. Writing and maintaining lengthy code scripts can be time-consuming, limiting an Agile team's ability to deliver rapidly and iteratively. This is where the scriptless test automation tools come into play.
The scriptless test automation platform [goinsta.ai], which uses artificial intelligence to speed up the testing process, is a pioneer in this sector. This platform goes beyond conventional scriptless solutions, offering a powerful, user-friendly solution for Agile teams.
## Advantages of Scriptless Test Automation
Scriptless test automation is a versatile approach that eliminates manual coding while automating the heavy lifting behind the scenes. Instead of writing actual code, testers merely identify the process steps, which the framework then translates into test cases.
## Increased Collaboration
By reducing the requirement for considerable programming skills, scriptless test automation promotes collaboration between developers and testers. With codeless automated testing methods, testers may contribute actively to the testing process, ensuring that testing is in sync with development efforts.
According to the World Quality Report 2021–2022 by Capgemini, an automation-first approach to software quality delivery will be the norm across all quality analysis activities. Within the next two years, businesses will have either implemented or plan to implement scriptless test automation. It emphasizes the rapid rise in the use of scriptless testing solutions.
## Enhanced Efficiency
The efficiency improvements achieved by scriptless test automation are significant. Testers may quickly design, edit, and execute tests, drastically lowering the time spent on script maintenance. This increased productivity is especially important in agile development, where rapid iterations and continuous testing are required.
A study by Forrester indicates that organizations adopting scriptless test automation can enjoy significant time savings in test design and execution. This time efficiency can seamlessly align with the Agile principle of delivering working software frequently.
## Broader Test Coverage
Scriptless test automation systems or no code testing tools often have capabilities such as data-driven testing and reusable components. This allows testers to achieve more extensive test coverage, ensuring that essential functionalities are appropriately evaluated and contributing to overall product quality.
According to a study by TechWell, companies have experienced a significant increase in test productivity after implementing scriptless test automation. This increased productivity is invaluable for agile teams striving to deliver high-quality software solutions within tight timelines.
## Significant Cost Reduction
Traditional test automation scripts can be fragile, necessitating ongoing maintenance to adapt to changes in the application under test. No-code testing methods decrease the script maintenance load, allowing teams to focus on testing activities rather than script upkeep.
The World Quality Report also highlights that organizations using codeless automation testing tools can experience a noticeable reduction in test maintenance costs. With all these exciting benefits, scriptless test automation can easily transform today’s software development practices, ensuring safety and a shorter time-to-market.
## Enhanced Test Maintenance
Traditional test automation scripts often face difficulties when applications change, requiring human script revisions. This issue can be efficiently solved with self-healing test automation tools. The tool's capacity to autonomously detect and fix faults that may develop due to changes in the application's user interface or functionality is referred to as self-healing.
## The Future: AI Test Automation Tools and Low Code Test Automation
The future of scriptless test automation is connected with emerging technologies such as artificial intelligence (AI) and low code test automation. AI test automation solutions, such as those integrated into the scriptless test automation platform [goinsta.ai], add an additional level of intelligence and adaptability to the testing process. These tools can autonomously find and address errors, paving the way for self-healing test automation.
Self-healing test automation is a game changer in Agile development, and its significance will rise in the future. The testing process becomes more resilient to changes via self-healing test automation solutions, reducing the need for manual intervention and script maintenance. It seamlessly aligns with Agile's focus on continuous integration and delivery.
## Conclusion
Scriptless test automation has emerged as a game changer in the fast-paced world of agile development. The data stated throughout this article highlights the increasing popularity and usefulness of scriptless testing solutions. The scriptless test automation platform [goinsta.ai] takes this innovation a step further by integrating artificial intelligence to improve the intelligence and adaptability of the testing process. As organizations seek faster releases, better quality, and greater collaboration between development and testing teams, the scriptless test automation journey, augmented by AI and low-code solutions, is a worthwhile endeavor.
| sophie_wilson0412 |
1,896,362 | The RSA Algorithm | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-21T19:39:50 | https://dev.to/achilles_68b2e35472911b34/the-rsa-algorithm-23ed | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
The RSA algorithm encrypts with a public key and decrypts with a private key. The former is based on a product of two primes; these primes generate the latter. Large key size makes factorization hard for unauthorized decryption, guaranteeing security.
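With toy primes (far too small to be secure; real keys use primes hundreds of digits long, which is what makes factoring infeasible), the mechanics can be sketched in a few lines:

```javascript
// Toy RSA with tiny primes, for illustration only.
const p = 61n, q = 53n;          // two primes (kept private)
const n = p * q;                 // modulus, part of the public key
const phi = (p - 1n) * (q - 1n); // Euler's totient of n
const e = 17n;                   // public exponent, coprime with phi

// Modular exponentiation: base^exp mod m
function modPow(base, exp, m) {
  let result = 1n;
  base %= m;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % m;
    base = (base * base) % m;
    exp >>= 1n;
  }
  return result;
}

// Private exponent d: the modular inverse of e mod phi (found by brute
// force here; real implementations use the extended Euclidean algorithm).
let d = 1n;
while ((d * e) % phi !== 1n) d++;

const message = 65n;
const ciphertext = modPow(message, e, n);   // encrypt with the public key
const decrypted = modPow(ciphertext, d, n); // decrypt with the private key
console.log(decrypted === message); // true
```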
| achilles_68b2e35472911b34 |
1,896,359 | The Ultimate Guide to Essential SEO Elements | Introduction Navigating the intricate world of SEO can be daunting, but mastering its... | 0 | 2024-06-21T19:32:52 | https://dev.to/gohil1401/the-ultimate-guide-to-essential-seo-elements-4gcn | webdev, beginners, tutorial, seo |
## Introduction
Navigating the intricate world of SEO can be daunting, but mastering its essential elements is key to enhancing your website's visibility and performance. This comprehensive guide explores critical SEO components, from Sitemap.XML and robots.txt to SSL certificates and structured data, providing actionable insights to help you achieve optimal search engine rankings.
## Sitemap.XML: The Blueprint for Search Engines
An XML sitemap is a specially formatted document listing all the pages on a website, created for search engines.
**Purpose**
- **Search Engine Crawling**: Facilitates efficient crawling by search engines like Google and Bing.
- **Indexing**: Ensures all pages, including those difficult to find through internal links, are indexed.
- **Updates**: Alerts search engines about updates or new pages on your site.
**Best Practices**
- Include all crucial pages.
- Keep the sitemap updated.
- Submit the sitemap to search engines through their webmaster tools.
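A minimal sitemap.xml looks like this (the URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-06-01</lastmod>
  </url>
</urlset>
```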
## Sitemap.HTML: Enhancing User Experience
An HTML sitemap is a webpage listing all the site's pages in a hierarchical manner.
**Purpose**
- **User Navigation**: Assists visitors in navigating the website and locating content swiftly.
- **SEO Benefit**: Improves the internal linking structure, aiding search engine optimization.
**Best Practices**
- Ensure it is easily accessible from the homepage.
- Keep it user-friendly and organized.
## Robots.txt File: Guiding Search Engines
A text file in the root directory of a website instructing web robots (typically search engine crawlers) on which pages or files they can or cannot request from your site.
**Purpose**
- **Control Crawling**: Prevents search engines from crawling and indexing specific parts of your website.
- **Server Load Management**: Reduces server load by restricting crawler access to certain files or directories.
**Best Practices**
- Use the file to block sensitive or duplicate content.
- Regularly review and update the file as your site evolves.
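For example, a simple robots.txt might look like this (the paths are placeholders):

```
User-agent: *
Disallow: /admin/
Disallow: /tmp/

Sitemap: https://www.example.com/sitemap.xml
```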
## Page Load Time: Speed Matters
The duration it takes for a webpage to fully display its content.
**Purpose**
- **User Experience**: Faster load times enhance user experience, reducing bounce rates and increasing engagement.
- **SEO**: Search engines consider page speed a ranking factor, so faster sites can achieve higher search rankings.
**Best Practices**
- Optimize images by compressing them.
- Minimize and defer JavaScript and CSS files.
- Use browser caching and a content delivery network (CDN).
## Optimization of JS & CSS: Streamlining Code
The process of minimizing JavaScript (JS) and Cascading Style Sheets (CSS) files to reduce their size and load time.
**Purpose**
- **Performance**: Enhances website speed by reducing file sizes and load times.
- **SEO**: Faster sites often rank higher in search engine results.
**Best Practices**
- Minify JS and CSS files.
- Combine multiple files into one.
- Use asynchronous loading for JavaScript.
## SSL Certificate: Securing Your Site
A digital certificate providing authentication for a website and enabling an encrypted connection.
**Purpose**
- **Security**: Encrypts data transferred between the user and the server, safeguarding sensitive information.
- **Trust**: Users see a padlock icon in their browser, which builds trust.
- **SEO**: Search engines favor sites with SSL certificates, potentially improving rankings.
**Best Practices**
- Always use HTTPS instead of HTTP.
- Regularly renew your SSL certificate before it expires.
## Canonical Tag: Preventing Duplicate Content
An HTML tag used to prevent duplicate content issues by specifying the "canonical" or preferred version of a webpage.
**Purpose**
- **SEO**: Helps search engines understand which version of a page to index and rank.
- **Avoids Penalties**: Prevents duplicate content penalties by consolidating link equity to a single URL.
**Best Practices**
- Use canonical tags on all pages, especially if multiple URLs have similar content.
- Ensure the canonical URL points to the most authoritative version of the page.
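In HTML, the canonical tag is a single line in the page's `<head>` (the URL is a placeholder):

```html
<link rel="canonical" href="https://www.example.com/preferred-page/" />
```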
## Redirection (404, 301, 302): Managing URL Changes
**Purpose**
- **404**: Informs users and search engines that the page no longer exists.
- **301**: Transfers link equity from the old URL to the new one, maintaining SEO value.
- **302**: Temporarily redirects users while preserving the original URL for future use.
**Best Practices**
- Use 301 redirects for permanently moved pages.
- Use 302 redirects for temporary changes.
- Customize 404 error pages to guide users to other relevant content.
## W3C Validation: Ensuring Code Quality
The process of checking your website's HTML and CSS code against the standards set by the World Wide Web Consortium (W3C).
**Purpose**
- **Compliance**: Ensures your site adheres to web standards, improving compatibility across different browsers and devices.
- **Accessibility**: Helps make your site more accessible to users with disabilities.
**Best Practices**
- Regularly validate your code using W3C validation tools.
- Fix any validation errors to ensure compliance.
## Open Graph Tag: Enhancing Social Sharing
Metadata added to HTML to control how web pages are represented on social media platforms.
**Purpose**
- **Social Sharing**: Enhances the appearance of links shared on social media, leading to better engagement.
- **Control**: Allows you to specify titles, descriptions, and images for social media shares.
**Best Practices**
- Include Open Graph tags on all pages you want to be shared.
- Test how your pages appear using social media sharing tools.
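A typical set of Open Graph tags in the page's `<head>` (all values are placeholders):

```html
<meta property="og:title" content="Page title as it should appear when shared" />
<meta property="og:description" content="A short description for the share card" />
<meta property="og:image" content="https://www.example.com/share-image.png" />
<meta property="og:url" content="https://www.example.com/page/" />
```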
## Structured Data: Improving Search Visibility
A standardized format (often using JSON-LD) for providing information about a page and its content, making it easier for search engines to understand.
**Purpose**
- **Rich Snippets**: Enables rich snippets and other enhanced search results features.
- **SEO**: Improves visibility and click-through rate in search engine results.
**Best Practices**
- Use structured data to mark up key content like products, reviews, articles, and events.
- Test your structured data with Google's Structured Data Testing Tool.
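A minimal JSON-LD block for an article (all values are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "The Ultimate Guide to Essential SEO Elements",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2024-06-21"
}
</script>
```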
## Conclusion
Mastering these SEO essentials—Sitemap.XML, HTML sitemaps, robots.txt files, page load time optimization, JS & CSS optimization, SSL certificates, canonical tags, redirection management, W3C validation, Open Graph tags, and structured data—can significantly boost your website's performance and search engine rankings. By adhering to best practices, you can provide a superior user experience and gain a competitive edge in the digital landscape. | gohil1401 |
1,896,358 | Understanding JavaScript Promises | Introduction JavaScript Promises are a powerful way to handle asynchronous operations, allowing you... | 0 | 2024-06-21T19:26:37 | https://dev.to/just_ritik/understanding-javascript-promises-2eib | webdev, javascript, beginners, programming | **Introduction**
JavaScript Promises are a powerful way to handle asynchronous operations, allowing you to write cleaner and more manageable code. In this post, we'll dive deep into Promises and explore how they can improve your code.
**Understanding Promises**
A Promise in JavaScript represents the eventual completion (or failure) of an asynchronous operation and its resulting value.
**How you can create Promises**
You can create a new Promise using the `Promise` constructor, which takes a function with two parameters: `resolve` and `reject`.

**Promise Methods**
Promises have several methods to handle the result of the asynchronous operation.
**`.then()`**
The .then() method is used to handle the resolved value.

**`.catch()`**
The .catch() method is used to handle errors or rejected promises.

**`.finally()`**
The .finally() method is executed regardless of the promise's outcome.

**Promises Chaining**
Promises can be chained to perform multiple asynchronous operations in sequence.

**Handling Errors**
Proper error handling in Promises is crucial for robust code.

**Conclusion**
JavaScript Promises provide a powerful way to handle asynchronous operations. By understanding and utilizing Promises, you can write cleaner, more efficient, and more readable code.
Happy Coding (:
Follow for more "_" | just_ritik |
1,896,338 | How RAG with txtai works | txtai is an all-in-one embeddings database for semantic search, LLM orchestration and language... | 11,018 | 2024-06-21T18:42:49 | https://neuml.hashnode.dev/how-rag-with-txtai-works | ai, llm, rag, vectordatabase | [](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/63_How_RAG_with_txtai_works.ipynb)
[txtai](https://github.com/neuml/txtai) is an all-in-one embeddings database for semantic search, LLM orchestration and language model workflows.
Large Language Models (LLMs) have captured the public's attention with their impressive capabilities. The Generative AI era has reached a fever pitch with some predicting the coming rise of superintelligence.
LLMs are far from perfect though and we're still a ways away from true AI. The biggest challenge is with hallucinations. Hallucination is the term for when an LLM generates output that is factually incorrect. The alarming part is that, on a cursory glance, it actually sounds like factual content. The default behavior of LLMs is to produce plausible answers even when no plausible answer exists. LLMs are not great at saying I don't know.
Retrieval Augmented Generation (RAG) helps reduce the risk of hallucinations by limiting the context in which a LLM can generate answers. This is typically done with a search query that hydrates a prompt with a relevant context. RAG has been one of the most practical use cases of the Generative AI era.
txtai has multiple ways to run RAG pipelines, as follows.
- Embeddings instance and LLM. Run the embeddings search and plug the search results into an LLM prompt.
- RAG (aka Extractor) pipeline which automatically adds a search context to LLM prompts.
- RAG FastAPI service with YAML
This article will cover all these methods and shows how RAG with txtai works.
# Install dependencies
Install `txtai` and all dependencies.
```
pip install txtai[api,pipeline] autoawq
```
# Components of a RAG pipeline
Before using txtai's RAG pipeline, we'll show how each of the underlying components work together. In this example, we'll load the [txtai Wikipedia embeddings database](https://huggingface.co/NeuML/txtai-wikipedia) and a LLM. From there, we'll run a RAG process.
```python
from txtai import Embeddings, LLM
# Load Wikipedia Embeddings database
embeddings = Embeddings()
embeddings.load(provider="huggingface-hub", container="neuml/txtai-wikipedia")
# Create LLM
llm = LLM("TheBloke/Mistral-7B-OpenOrca-AWQ")
```
Next, we'll create a prompt template to use for the RAG pipeline. The prompt has a placeholder for the question and context.
```python
# Prompt template
prompt = """<|im_start|>system
You are a friendly assistant. You answer questions from users.<|im_end|>
<|im_start|>user
Answer the following question using only the context below. Only include information
specifically discussed.
question: {question}
context: {context} <|im_end|>
<|im_start|>assistant
"""
```
After that, we'll generate the context using an embeddings (aka vector) query. This query finds the top 3 most similar matches to the question **"How do you make beer 🍺?"**
```python
question = "How do you make beer?"
# Generate context
context = "\n".join([x["text"] for x in embeddings.search(question)])
print(context)
```
```
Brewing is the production of beer by steeping a starch source (commonly cereal grains, the most popular of which is barley) in water and fermenting the resulting sweet liquid with yeast. It may be done in a brewery by a commercial brewer, at home by a homebrewer, or communally. Brewing has taken place since around the 6th millennium BC, and archaeological evidence suggests that emerging civilizations, including ancient Egypt, China, and Mesopotamia, brewed beer. Since the nineteenth century the brewing industry has been part of most western economies.
Beer is produced through steeping a sugar source (commonly Malted cereal grains) in water and then fermenting with yeast. Brewing has taken place since around the 6th millennium BC, and archeological evidence suggests that this technique was used in ancient Egypt. Descriptions of various beer recipes can be found in Sumerian writings, some of the oldest known writing of any sort. Brewing is done in a brewery by a brewer, and the brewing industry is part of most western economies. In 19th century Britain, technological discoveries and improvements such as Burtonisation and the Burton Union system significantly changed beer brewing.
Craft beer is a beer that has been made by craft breweries, which typically produce smaller amounts of beer, than larger "macro" breweries, and are often independently owned. Such breweries are generally perceived and marketed as emphasising enthusiasm, new flavours, and varied brewing techniques.
```
Now we'll take the question and context and put that into the prompt.
```python
print(llm(prompt.format(question=question, context=context)))
```
```
To make beer, you need to steep a starch source, such as malted cereal grains (commonly barley), in water. This process creates a sweet liquid called wort. Then, yeast is added to the wort, which ferments the liquid and produces alcohol and carbon dioxide. The beer is then aged, filtered, and packaged for consumption. This process has been used since around the 6th millennium BC and has been a part of most western economies since the 19th century.
```
Looking at the generated answer, we can see it's based on the context above. The LLM generates a paragraph of text using the context as input. While this same answer could be directly asked of the LLM, this helps ensure the answer is based on known factual data.
# The RAG Pipeline
txtai has a RAG pipeline that makes this even easier. The logic to generate the context and join it with the prompt is built in. Let's try that.
```python
from txtai import RAG
# Create RAG pipeline using existing components. LLM parameter can also be a model path.
rag = RAG(embeddings, llm, template=prompt)
```
Let's ask a question similar to the last one. This time we'll ask **"How do you make wine🍷?"**
```python
print(rag("How do you make wine?", maxlength=2048)["answer"])
```
```
To make wine, follow these steps:
1. Select the fruit: Choose high-quality grapes or other fruit for wine production.
2. Fermentation: Introduce yeast to the fruit, which will consume the sugar present in the juice and convert it into ethanol and carbon dioxide.
3. Monitor temperature and oxygen levels: Control the temperature and speed of fermentation, as well as the levels of oxygen present in the must at the start of fermentation.
4. Primary fermentation: This stage lasts from 5 to 14 days, during which the yeast consumes the sugar and produces alcohol and carbon dioxide.
5. Secondary fermentation (optional): If desired, allow the wine to undergo a secondary fermentation, which can last another 5 to 10 days.
6. Fermentation location: Choose the appropriate fermentation vessel, such as stainless steel tanks, open wooden vats, wine barrels, or wine bottles for sparkling wines.
7. Bottle and age the wine: Transfer the finished wine into bottles and allow it to age, if desired, to develop flavors and complexity.
Remember that wine can be made from various fruits, but grapes are most commonly used, and the term "wine" generally refers to grape wine when used without a qualifier.
```
# RAG API Endpoint
Did you know that txtai has a built-in framework for automatically generating FastAPI services? This can be done with a YAML configuration file.
```yaml
# config.yml
# Load Wikipedia Embeddings index
cloud:
provider: huggingface-hub
container: neuml/txtai-wikipedia
# RAG pipeline configuration
rag:
path: TheBloke/Mistral-7B-OpenOrca-AWQ
output: flatten
template: |
<|im_start|>system
You are a friendly assistant. You answer questions from users.<|im_end|>
<|im_start|>user
Answer the following question using only the context below. Only include information
specifically discussed.
question: {question}
context: {context} <|im_end|>
<|im_start|>assistant
```
Note how the same prompt template and models are set. This time instead of doing that with Python, it's done with a YAML configuration file 🔥
Now let's start the API service using this configuration.
```
CONFIG=config.yml nohup uvicorn "txtai.api:app" &> api.log &
sleep 90
```
Now let's run a RAG query using the API service. Keeping with the theme, we'll ask **"How do you make whisky 🥃?"**
```
curl "http://localhost:8000/rag?query=how+do+you+make+whisky&maxlength=2048"
```
```
To make whisky, follow these steps:
1. Choose the grains: Select the grains you want to use for your whisky, such as barley, corn, rye, or wheat.
2. Malt the grains (optional): If using barley, malt the grains by soaking them in water and allowing them to germinate. This process releases enzymes that help break down starches into fermentable sugars.
3. Mill the grains: Grind the grains to create a coarse flour, which will be mixed with water to create a mash.
4. Create the mash: Combine the milled grains with hot water in a large vessel, and let it sit for several hours to allow fermentation to occur. The mash should have a temperature of around 65°C (149°F) to encourage the growth of yeast.
5. Add yeast: Once the mash has cooled to around 30°C (86°F), add yeast to the mixture. The yeast will ferment the sugars in the mash, producing alcohol.
6. Fermentation: Allow the mixture to ferment for several days, during which the yeast will consume the sugars and produce alcohol and carbon dioxide.
7. Distillation: Transfer the fermented liquid, called "wash" to a copper still. Heat the wash in the still, and the alcohol will vaporize and rise through the still's neck. The vapors are then condensed back into a liquid form, creating a high-proof spirit.
8. Maturation: Transfer the distilled spirit to wooden casks, typically made of charred white oak. The spirit will mature in the casks for a specified period, usually ranging from 3 to 25 years. During this time, the wood imparts flavors and color to the whisky.
9. Bottling: Once the whisky has reached the desired maturity, it is bottled and ready for consumption.
```
And as before, we get an answer bound by the search context provided to the LLM. This time it comes from an API service vs a direct Python method.
# RAG API Service with Docker
txtai builds Docker images with each release. There are also Docker files available to help configure API services.
The Dockerfile below builds an API service using the same config.yml.
```dockerfile
# Set base image
ARG BASE_IMAGE=neuml/txtai-gpu
FROM $BASE_IMAGE
# Copy configuration
COPY config.yml .
# Install latest version of txtai from GitHub
RUN \
apt-get update && \
apt-get -y --no-install-recommends install git && \
rm -rf /var/lib/apt/lists && \
python -m pip install git+https://github.com/neuml/txtai
# Run local API instance to cache models in container
RUN python -c "from txtai.api import API; API('config.yml')"
# Start server and listen on all interfaces
ENV CONFIG "config.yml"
ENTRYPOINT ["uvicorn", "--host", "0.0.0.0", "txtai.api:app"]
```
The following commands build and start a Docker API service.
```bash
docker build -t txtai-wikipedia --build-arg BASE_IMAGE=neuml/txtai-gpu .
docker run -d --gpus=all -it -p 8000:8000 txtai-wikipedia
```
This creates the same API service, only this time it's running in Docker. RAG queries can be run the same way.
```bash
curl "http://localhost:8000/rag?query=how+do+you+make+whisky&maxlength=2048"
```
# Wrapping up
This article covered the various ways to run retrieval augmented generation (RAG) with txtai. We hope you find txtai is one of the easiest and most flexible ways to get up and running fast!
| davidmezzetti |
1,896,357 | Crafting Clarity: A Journey into the World of Clean Code Code Needs Care!🤯 | Introduction ✍🏻 Do you spend countless hours fixing bugs in your code? Have you wondered why it takes... | 0 | 2024-06-21T19:26:18 | https://dev.to/rowan_ibrahim/crafting-clarity-a-journey-into-the-world-of-clean-codecode-needs-care-32mi | cleancode, cleancoding, writing, career | **Introduction ✍🏻**
Do you spend countless hours fixing bugs in your code? Have you wondered why it takes so long?
Everyone can write code that computers understand. But, not everyone can write code that humans can easily read.
As software developers, understanding how to write clean code is essential. Let’s dive into the world of Clean Code.
**Table of Contents**
What is clean code?
What causes bad code?
Why is clean code important?
Comparison Between Bad Code and Clean Code.
How Can We Determine if Our Code is Clean?
Principles of Clean Code.
Conclusion.
**What is clean code?** 🤔
Writing clean code is a fundamental skill for every software engineer. Clean code is code that is well-organized and easy to read, understand, and maintain. Writing clean code allows others to read it without wasting time deciphering it. Robert Cecil Martin, who has also gained popularity as Uncle Bob, made the term popular with his 2008 book “Clean Code: A Handbook of Agile Software Craftsmanship.”
In other words: if three people with different experience levels can all quickly understand the same code, it’s clean code.
**What causes bad code?**
Consider this scenario: there’s a new project, and your manager gives you a tight deadline. You start coding, promising yourself you’ll improve it later. Under pressure from your manager and the customer, you deliver the product quickly, but it’s poorly written. And you will never return to improve it.
This is where LeBlanc’s law comes in:
“Later equals never.”
That means if you tell yourself you’ll improve it later, you never will.
**Why is clean code important?**
Let’s revisit a story from the “Clean Code” book to understand the importance of clean code. A company wanted to release a product quickly, resulting in poorly written code. When they tried to add more features, they found it almost impossible to make changes: every change broke existing functionality, and the company lost all its money.
Clean code ensures that companies can add features quickly and without errors.
When we need to add features, or when someone else contributes to the project, badly written code takes a long time to read and understand before anyone can fix old bugs or build new features.
Writing code is the easy part; reading and debugging it is what takes a long time.
Keeping code clean will give you the most value from your software, and the right tooling can help you get there. Clean code makes debugging more efficient and unit testing easier.

**A comparison between clean and badly written code.**
The compiler will compile any code, clean or not. But not all code is readable. Let’s take a look.
You don’t need to understand what this code does. Just look at it first:
Bad code

Clean version

Even without understanding the code, do you see the difference? Formatting is what makes code easy to read and comfortable on the eye.
Let’s see another example: a login page containing an image and a text field.

This is the same page, but with every part extracted into a small function.

Here, we make functions with clear names. They make the code shorter. If we want to fix any part, we can go right to it.
You may not realize the importance of clean code until you revisit your own code and catch yourself saying: “What a horrible thing I wrote.” 🤯
**How Can We Determine if Our Code is Clean?**
Writing clean code is an art and a way of thinking. It’s a craft that comes from knowledge and practice: knowledge means learning the principles and patterns, and practice means applying them in your own code.
There is a Boy Scout rule that says:
“Leave the campground cleaner than you found it.”
That means we should always review our code and enhance it. Every time we read it, we will find something we can make cleaner.
In the book “The Art of Clean Code,” the author provides some questions we should ask ourselves to help us improve our code.
- Can you consolidate files and modules to reduce interdependencies?
- Can you divide large, complicated files into simpler ones?
- Can you generalize code into libraries to simplify the main application?
- Can you use existing libraries to reduce the amount of code?
- Can you use caching to avoid recomputing results?
- Are there more suitable algorithms for your tasks?
- Can you remove premature optimizations that don’t improve performance?
- Would another programming language be more suitable for your problem?
**Principles of Clean Code.**
Like a beautiful painting, clean code needs well-chosen colors and composition. It follows several principles. We will discuss each in later articles.
- Meaningful and descriptive names.
- Proper Indentation.
- Short functions that do one thing.
- The DRY (Don’t Repeat Yourself) Principle.
- Avoid code duplication.
- Establish code writing standards.
These principles are just a starting point; there are many more to explore and apply.
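As a small illustration of several of these principles at once (in Python, with hypothetical names and made-up business rules), compare a messy function with a cleaner version that uses meaningful names and short functions that each do one thing:

```python
# Messy: vague names, one function doing everything
def proc(d):
    t = 0
    for i in d:
        if i["q"] > 0:
            t += i["p"] * i["q"]
    if t > 100:
        t = t - t * 0.1
    return t

# Clean: descriptive names, each function has a single responsibility
DISCOUNT_THRESHOLD = 100
DISCOUNT_RATE = 0.1

def line_total(item):
    """Price of a single order line."""
    return item["price"] * item["quantity"]

def order_subtotal(items):
    """Sum of all lines with a positive quantity."""
    return sum(line_total(item) for item in items if item["quantity"] > 0)

def apply_discount(subtotal):
    """Apply the bulk discount when the subtotal qualifies."""
    if subtotal > DISCOUNT_THRESHOLD:
        return subtotal * (1 - DISCOUNT_RATE)
    return subtotal

def order_total(items):
    return apply_discount(order_subtotal(items))
```

Both versions compute the same result, but in the clean version the magic numbers have names, and if the discount rule ever changes, we can go straight to `apply_discount` without reading anything else.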
**Conclusion :**
“Coders spend the vast majority of their time reading old code to write new code. If the old code is easy to read, this will speed up the process considerably.”
Writing messy code reduces productivity, while clean code makes reading it productive and even enjoyable. Clean code makes it easier to reduce errors and to add features without fear of breaking another function.
By following these principles, you will grow into a professional developer. And always remember: there is no limit to how clean your code can be; you can make it better every time.

**That’s it! Start now and begin writing clean code! Connect with me on LinkedIn and GitHub for more articles and insights. If you have questions, contact me.**
**https://linktr.ee/rowan_ibrahim**
Remember, “Later Equals Never.” Start writing clean code today! 😉 | rowan_ibrahim |
1,896,356 | Comparing CI/CD Tools | This article was originally published on the Shipyard Blog Continuous integration (CI) pipelines,... | 0 | 2024-06-21T19:24:56 | https://shipyard.build/blog/cicd-tools/ | cicd, devops, githubactions, testing | *<a href="https://shipyard.build/blog/cicd-tools/" target="_blank">This article was originally published on the Shipyard Blog</a>*
---
<a href="https://www.redhat.com/en/topics/devops/what-is-ci-cd#continuous-integration" target="_blank">Continuous integration (CI) pipelines</a>, when used as intended, are something you’re triggering and running multiple times per day. Your org’s pipelines can contain hundreds (if not thousands) of lines of code. It won’t come as a surprise that you have a lot at stake when choosing a CI provider — migrating CI providers is expensive and all around *messy.* With the right research, you can get it right the first time.
## What should I look for in a CI tool?
Every tool in this article is obviously great in its own regard (after all, that’s why they’re featured!), but picking the right one for your org takes a little more than just spinning the wheel and choosing whichever one it lands on. Here are a few things that could make or break your decision.
### Hosting options
This is arguably the most important consideration when it comes to choosing a tool. You’ll typically see tools fall into three main hosting models: self-hosted, cloud-hosted, and hybrid. If you’re starting from scratch, cloud-hosted runners are usually managed by your CI provider and require dramatically less maintenance. Hybrid or self-hosted are best for orgs who have a bit more time and effort to invest into this, because when done right, these options will be more cost-effective *and* secure. They also enable you to get a closer approximation of your infrastructure than a managed (cloud) option.
### Source code management provider
The major source code management (SCM) providers all offer their own integrated CI platforms. There’s definitely a benefit here: using the same interface for source control and continuous integration not only makes logical sense (especially since CI events should occur on particular pushes/code changes) but also reduces context switching during the inner loop, where maintaining flow really matters.
## Our Favorite CI Tools
Every engineering team has a strong opinion and an obvious favorite when it comes to CI tools. We studied the features, developer experience, and use cases for the most appreciated tools in the industry. This is intended to serve as a high-level guide for narrowing down your CI provider search — the best next step is reading the docs and testing out each offering’s free tier.
### CircleCI

<a href="https://circleci.com/docs/" target="_blank">CircleCI</a> is a flexible, SCM-agnostic CI provider. In 2024, it’s hugely popular for enterprise orgs for its flexible hosting/compute options and high-quality integrations. CircleCI is considerably stable and reliable. Performance-wise, it’s on par with GitHub Actions and GitLab CI. Users have a huge selection when it comes to pipeline modules, called “orbs”, and have means to create and maintain their own.
**Who it’s for:** Enterprises that want a well-supported, customizable CI solution decoupled from their SCM.
**Hosting model:** CircleCI offers self and cloud-hosted runners. It uses usage-based billing.
**Pipeline format:** Workflows are YAML-based, and users can publish and maintain their own orbs or choose from 3,500 existing orbs.
### GitHub Actions

As far as CI tools go, <a href="https://docs.github.com/en/actions" target="_blank">GitHub Actions</a> is the easiest to get started with, especially if you’re already using GitHub. The GitHub Actions developer experience is exceptional: there are countless existing sample workflows, and thousands of official and unofficial Actions that you can piece together to build your workflow(s). It’s entirely accessible from your repository’s dashboard, so you can track CI runs right after making major code changes. GitHub also makes it easy to write, host, publish, and maintain your own custom Actions.
**Who it’s for:** Teams who use GitHub for source control and want an easy solution with a wide selection of pre-written Actions.
**Hosting model:** GitHub Actions uses hybrid and cloud-hosted runners and is billed by usage.
**Pipeline format:** Workflows are written in YAML, using reusable Actions.
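For a sense of the format, here is a minimal sketch of a GitHub Actions workflow (repository contents and job names are illustrative, not taken from any particular project):

```yaml
# .github/workflows/ci.yml -- illustrative sketch
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4          # reusable Action from the marketplace
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
```

The `uses:` lines are the reusable Actions mentioned above; everything else is plain YAML describing when the workflow triggers and what each job runs.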
### Jenkins

<a href="https://www.jenkins.io/doc/" target="_blank">Jenkins</a> has been the industry-standard in CI for over a decade. It’s open source, and categorized as an “automation server”, which suits it to use cases beyond CI alone. Jenkins takes more setup and configuration than most alternatives, but since it’s been around for so long, it has an impressively strong community and external knowledge base to solve any and all intricacies. Unfortunately, with so many great CI providers emerging, Jenkins is falling out of favor, and orgs are migrating off of it.
**Who it’s for:** Large enterprise teams with in-house Jenkins experts.
**Hosting model:** Jenkins is typically self-hosted, but there are third-party providers out there for Jenkins cloud hosting.
**Pipeline format:** Workflows are defined with Apache Groovy in a `Jenkinsfile`. Jenkins offers over 1,900 community plugins.
### Buildkite

<a href="https://buildkite.com/docs" target="_blank">Buildkite</a> is the CI provider that the Ubers, Doordashes, and Pinterests of the world use to push, build, and test code. It was initially designed as a cloud-agnostic, self-hosted tool for companies at scale. With its hybrid model, your organization will host the agent, and you’ll have access to a cloud-based dashboard. A standout feature of Buildkite is its dynamic pipelines, which enable you to generate custom pipeline specs conditionally during your pipeline’s execution.
**Who it’s for:** Enterprises who have outgrown CircleCI, GitHub Actions, and GitLab CI and want a CI provider that can scale with them.
**Hosting model:** Agents are self-hosted with no usage limits. Dashboard lives on the Buildkite cloud. Billing is per-seat.
**Pipeline format:** Traditional YAML workflows. Support for dynamic pipelines.
### GitLab CI

<a href="https://docs.gitlab.com/ee/ci/" target="_blank">GitLab CI</a> is a widely-used CI provider for GitLab-based orgs. It’s generally considered more feature-rich than alternatives. Interface-wise, it’s similar to GitHub Actions — you’ll be configuring and viewing your CI runs from your repository dashboard. GitLab CI is known for being highly-customizable for more complex CI tasks, but still has thorough documentation, so onboarding shouldn’t be a huge lift. You can create and maintain reusable modules for your pipelines by creating a “component project” on GitLab, or use existing components from the CI/CD Catalog.
**Who it’s for:** Orgs who are in the GitLab ecosystem who want a full-featured CI tool that they can use for the most complex of pipelines.
**Hosting model:** GitLab CI offers self-hosted and cloud-hosted runners, with usage-based billing.
**Pipeline format:** YAML workflows with public or custom components.
### Harness

<a href="https://developer.harness.io/docs/continuous-integration" target="_blank">Harness</a> is a complete CI/CD platform -- it goes above and beyond the standard CI toolkit, offering detailed deployment metrics, visual pipeline editors, governance tools, and test insights. It's designed for larger teams who want a plug-and-play, self-service solution, which in turn means it runs somewhat costlier than many alternatives. Harness is notably fast at building. It has a wide tooling ecosystem, with a recently-introduced SCM, and even an IDP solution.
**Who it’s for:** Larger, enterprise org who are ready to buy into a CI solution that benefits from the rest of the Harness ecosystem.
**Hosting model:** Harness offers managed, cloud-hosted runners, and some plans with self-hosted options. It is billed per seat.
**Pipeline format:** Workflows can be edited in the Harness Pipeline Editor visually or in YAML.
## Conclusion
When you look at the CI tool landscape, you’ll notice one thing pretty quickly: most offerings are rather similar to each other. Pricing and performance are two metrics that tend to drive decision making, however it might be more important to consider a few other things, such as the size of your engineering org, the hosting model, user support offerings, and integrations with your SCM/other tools.
*Need to unblock your CI with testing environments? Good news: <a href="https://shipyard.build" target="_blank">Shipyard</a> integrates with all CI providers. <a href="https://shipyard.build/contact" target="_blank">Talk to us today</a>.* | shipyard |
1,896,355 | Unlock Creativity with MonsterONE! | Hey there! 👋 As a seasoned web developer, I'm excited to share my experience with MonsterONE - the... | 0 | 2024-06-21T19:22:09 | https://dev.to/hasnaindev1/unlock-creativity-with-monsterone-4ong | website, webcomponents, themes, webdev |
Hey there! 👋 As a seasoned web developer, I'm excited to share my experience with **MonsterONE** - the ultimate subscription service for all things web development! With **MonsterONE**, I've unlocked a treasure trove of digital assets, from **WordPress themes** to graphic designs and audio/video resources.
The variety and quality of assets available are unmatched, and they've saved me countless hours of design and development work. Whether I'm designing marketing materials, or producing engaging videos, **MonsterONE** has everything I need - and then some!
What's more, **MonsterONE** offers flexible subscription plans tailored to your needs. Plus, you can enjoy an exclusive 10% OFF on any **MonsterONE** plan using the promo code [**HasnainDeveloper**] at checkout!.
So, if you're serious about web development or marketing content creation, I highly recommend giving **MonsterONE** a try.
🔗 Explore **MonsterONE** now: https://monsterone.com/pricing/?discount=HasnainDeveloper | hasnaindev1 |
1,896,353 | Regulatory Compliance Challenges for US Financial Institutions in the UAE and the Middle East | The Middle East is one of the fastest regions when it comes to economy and technology. The US... | 0 | 2024-06-21T19:20:11 | https://dev.to/muhammad_78f9e1864a36d0a4/regulatory-compliance-challenges-for-us-financial-institutions-in-the-uae-and-the-middle-east-2oip | regulatorycompliance, usa, uae, fis | > The Middle East is one of the fastest regions when it comes to economy and technology. The US companies and investors are curious to grab every opportunity in the hindsight. There is a huge market of real estate on the hand, the financial sector is booming at an accelerated pace on the other hand. However, alongside the promise of profit come significant regulatory compliance challenges that must be navigated with caution and precision.
**Complex Regulatory Environment**
The UAE and the wider Middle East region boast a unique and intricate regulatory framework that differs substantially from that of the United States. While the UAE offers a business-friendly environment with favorable tax policies and incentives for foreign investors, its regulatory landscape can be complex and multifaceted.
One of the primary challenges for US financial institutions operating in the UAE is compliance with local laws and regulations, which often diverge from those in the US. These regulations cover a broad spectrum, including anti-money laundering (AML) and counter-terrorism financing (CTF) laws, data protection regulations, foreign ownership restrictions, and Sharia-compliant banking principles.
**Anti-Money Laundering and Counter-Terrorism Financing**
AML and CTF compliance remain paramount concerns for financial institutions worldwide, and the UAE is no exception. US banks operating in the region must adhere to stringent AML and CTF regulations set forth by the UAE Central Bank and other relevant regulatory bodies.
Ensuring compliance with these regulations requires robust internal controls, comprehensive due diligence procedures, and ongoing monitoring of transactions. US financial institutions must also stay abreast of the UAE's evolving regulatory landscape and adapt their compliance measures accordingly to mitigate the risk of financial crime.
**Data Protection and Privacy**
In an era of heightened concerns surrounding data protection and privacy, US financial institutions operating in the UAE must navigate the intricacies of local data protection laws. The UAE's data protection framework, governed primarily by the Federal Decree-Law No. 45 of 2021 on the Protection of Personal Data (PDPL), imposes strict requirements on the collection, processing, and storage of personal data.
Compliance with the PDPL necessitates the implementation of robust data protection measures, including encryption, access controls, and data breach response protocols. US financial institutions must also ensure that their data processing activities align with the principles of transparency, accountability, and consent outlined in the PDPL.
**Foreign Ownership Restrictions and Sharia Compliance**
In addition to regulatory compliance challenges, US financial institutions operating in the UAE must navigate foreign ownership restrictions and adhere to Sharia-compliant banking principles. While the UAE permits foreign ownership in certain sectors through the establishment of local branches or joint ventures, ownership limitations may apply in sensitive industries such as banking and finance.
Moreover, Sharia-compliant banking practices, which prohibit interest-based transactions and adhere to Islamic principles of finance, present additional considerations for US financial institutions seeking to operate in the UAE. Ensuring compliance with Sharia principles requires specialized expertise and a thorough understanding of Islamic finance principles.
**Conclusion**
As US financial institutions continue to expand their presence in the UAE and the broader Middle East region, regulatory compliance will remain a critical challenge. Navigating the complex regulatory landscape requires a strategic approach, with an emphasis on comprehensive risk assessment, robust compliance frameworks, and ongoing monitoring of regulatory developments.
| muhammad_78f9e1864a36d0a4 |
1,896,352 | Smart Contracts Evolution: Enhancing Efficiency in DApp Development | In the ever-evolving landscape of blockchain engineering, smart contracts are a testament to the... | 0 | 2024-06-21T19:19:39 | https://dev.to/sophie_wilson0412/smart-contracts-evolution-enhancing-efficiency-in-dapp-development-a5a | blockchain, development, engineering, technology | In the ever-evolving landscape of blockchain engineering, smart contracts are a testament to the continuous pursuit of smart efficiency, security, and innovation in decentralized application (DApp) development. Smart contracts have gone through a remarkable journey, from their conceptualization by Nick Szabo to their real-time applications on platforms such as Ethereum, reshaping the world of digital agreements. As we go deeper into the intricate tapestry of their progress, it becomes evident that these self-executing contracts have transcended their conceptual origins and contributed to a paradigm shift in how we think about, build, and interact with decentralized apps.
## The Genesis of Smart Contracts
To understand the evolution of smart contracts, it’s necessary to revisit their origins. The term was first coined in the late 1990s by computer scientist and cryptographer Nick Szabo. He envisioned self-executing contracts with the terms of the agreement directly written into code, effectively eliminating the need for intermediaries in traditional contract enforcement.
## Ethereum's Contribution
The turning point took place with the 2015 debut of Ethereum, which introduced smart contracts as self-executing scripts on its blockchain. By utilizing the Ethereum Virtual Machine (EVM), a runtime environment for smart contracts, developers could design decentralized applications. This signalled a paradigm shift in blockchain engineering services, allowing developers to create trustless applications with automated, tamper-proof contract execution.
## Limitations and Challenges
DApp development and blockchain development services have gained considerable popularity, but some limits and issues associated with early smart contracts have become apparent. Scalability concerns, expensive gas fees, and a lack of interoperability hampered DApp adoption. Ethereum, a pioneer in smart contract implementation, faced congestion and scalability problems, resulting in delays and rising costs.
## Evolving Standards: ERC-20 to ERC-721
To address these issues, Ethereum adopted standardized token contracts, the most noteworthy of which was the ERC-20 standard. It defines a common set of rules for fungible tokens, allowing straightforward integration with diverse services and exchanges. Later, the ERC-721 standard introduced non-fungible tokens (NFTs), which revolutionized the digital asset environment by enabling the creation and exchange of indivisible, unique tokens. This growth widened the scope of DApps, giving rise to platforms for decentralized finance (DeFi) and digital art.
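The conceptual difference between the two standards — interchangeable balances versus uniquely owned token IDs — can be sketched in plain JavaScript. This is an illustrative in-memory model only, not the actual Solidity interfaces:

```javascript
// Minimal sketch contrasting ERC-20-style (fungible) and
// ERC-721-style (non-fungible) bookkeeping. Class and method
// names are illustrative, not the real standards' ABIs.
class FungibleToken {
  constructor() { this.balances = new Map() } // owner -> amount
  mint(owner, amount) {
    this.balances.set(owner, (this.balances.get(owner) ?? 0) + amount)
  }
  transfer(from, to, amount) {
    if ((this.balances.get(from) ?? 0) < amount) throw new Error("insufficient balance")
    this.balances.set(from, this.balances.get(from) - amount)
    this.balances.set(to, (this.balances.get(to) ?? 0) + amount)
  }
}

class NonFungibleToken {
  constructor() { this.owners = new Map() } // tokenId -> owner
  mint(owner, tokenId) {
    if (this.owners.has(tokenId)) throw new Error("token already minted")
    this.owners.set(tokenId, owner)
  }
  transfer(from, to, tokenId) {
    if (this.owners.get(tokenId) !== from) throw new Error("not the owner")
    this.owners.set(tokenId, to)
  }
}

const gold = new FungibleToken()
gold.mint("alice", 100)
gold.transfer("alice", "bob", 40) // any 40 units are interchangeable

const art = new NonFungibleToken()
art.mint("alice", "artwork-1")
art.transfer("alice", "bob", "artwork-1") // this specific token moves
```

The key contrast: the fungible ledger tracks an amount per owner, while the non-fungible ledger tracks an owner per token ID.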
## Smart Contract Languages: From Solidity to Move
The programming languages used for developing smart contracts evolved as well. The early landscape was dominated by Solidity, Ethereum's native language. Other blockchain projects, like Libra (now Diem), offered Move, a programming language created exclusively for smart contracts. Move aimed to enhance security by removing specific types of vulnerabilities and facilitating more predictable results in decentralized applications.
## Interoperability and Cross-Chain Solutions
Polkadot and Cosmos emerged in response to the need for interoperability between different blockchains. These platforms support efficient communication and data transfer between chains, allowing developers to take advantage of the unique features of multiple networks. Cross-chain solutions address scalability concerns and improve overall DApp development efficiency by allowing developers to select the most appropriate blockchain for their specific needs.
## Smart Contract Auditing and Security
Growing complexity and security issues are a major concern for leading blockchain development services. High-profile smart contract attacks and vulnerabilities have highlighted the need for meticulous auditing. Smart contract auditing firms have emerged to assess code security, find flaws, and ensure the resilience of decentralized applications. This focus on security has become a critical component of DApp development, building user trust and promoting mainstream adoption.
## The Rise of Layer 2 Solutions
Layer 2 solutions alleviate the scalability issues affecting applications built on distributed ledger technology. These solutions run on top of existing blockchains, offloading many transactions to reduce congestion and gas costs. Layer 2 approaches such as optimistic rollups, zk-rollups, and sidechains show promise in improving the scalability and efficiency of DApps.
## Conclusion
Smart contract development services have played an essential role in influencing the landscape of decentralized application development. Smart contracts have overcome many obstacles and limits, from their conception by Nick Szabo to their implementation on Ethereum and subsequent developments. Standards, various programming languages, interoperability solutions, security measures, and Layer 2 scaling solutions have all improved the efficiency, security, and versatility of DApp development and overall blockchain engineering.
| sophie_wilson0412 |
1,896,349 | Speed Up Your React App: A Guide to Lazy Loading 🚀 | Enhancing Performance with Asynchronous Component Loading 🔻 In the world of modern web... | 0 | 2024-06-21T19:13:07 | https://dev.to/alisamirali/speed-up-your-react-app-a-guide-to-lazy-loading-24pm | react, javascript, frontend, performance | ## Enhancing Performance with Asynchronous Component Loading 🔻
---
In the world of modern web development, optimizing the performance of web applications is a critical consideration.
One effective technique for improving React applications' load times and overall performance is lazy loading.
React Lazy Loading is a concept that involves loading components asynchronously, only when they are needed.
This article delves into the fundamentals of React lazy loading, its benefits, and how to implement it in your projects.
---
## 📌 What is React Lazy Loading?
Lazy loading, in the context of web development, refers to delaying loading resources until they are needed.
In React, this means deferring the loading of components until they are required for rendering.
This approach can significantly reduce the initial load time of an application, especially if it contains many components or large bundles of code.
---
## 📌 Benefits of React Lazy Loading
**1. Improved Initial Load Time:**
The application's load time is reduced by loading only the components necessary for the initial render. This leads to a faster first contentful paint (FCP), enhancing the user experience.
**2. Reduced Bandwidth Usage:**
Since components are loaded on-demand, users do not have to download unnecessary code upfront. This is particularly beneficial for users with limited bandwidth or on mobile networks.
**3. Enhanced Performance on Subsequent Navigations:**
Once a component is loaded, it is cached by the browser. This means subsequent navigations or interactions requiring the same component will be faster.
**4. Scalability:**
Lazy loading allows applications to scale more efficiently. As an application grows and more components are added, lazy loading helps manage and maintain performance.
---
## 📌 Implementing Lazy Loading in React
React provides a built-in function called `React.lazy()` that enables lazy loading of components. Along with `React.Suspense`, it allows you to display a fallback while the lazy-loaded component is being fetched.
_Here’s a step-by-step guide to implementing lazy loading in a React application:_
**1. Creating a Lazy-Loaded Component**
```jsx
const UserProfile = React.lazy(() => import('./UserProfile'));
```
This statement dynamically imports the `UserProfile` component only when it is needed.
**2. Using React.Suspense**
`React.Suspense` is used to wrap the lazy-loaded component and provide a fallback UI (like a loading spinner) while the component is being loaded:
```jsx
import React, { Suspense } from 'react';
const UserProfile = React.lazy(() => import('./UserProfile'));
function App() {
return (
<div>
<h1>Welcome to My App</h1>
<Suspense fallback={<div>Loading...</div>}>
<UserProfile />
</Suspense>
</div>
);
}
export default App;
```
In this example, the `UserProfile` component will be loaded asynchronously, and the text `"Loading..."` will be displayed until the component is ready.
**3. Error Handling**
Lazy loading can sometimes fail, for instance, due to network issues. React’s error boundaries can be used to handle such scenarios gracefully:
```jsx
import React, { Suspense, lazy } from 'react';
const UserProfile = lazy(() => import('./UserProfile'));
class ErrorBoundary extends React.Component {
constructor(props) {
super(props);
this.state = { hasError: false };
}
static getDerivedStateFromError(error) {
return { hasError: true };
}
componentDidCatch(error, errorInfo) {
console.error('Error loading component:', error, errorInfo);
}
render() {
if (this.state.hasError) {
return <div>Something went wrong. Please try again later.</div>;
}
return this.props.children;
}
}
function App() {
return (
<div>
<h1>Welcome to My App</h1>
<ErrorBoundary>
<Suspense fallback={<div>Loading...</div>}>
<UserProfile />
</Suspense>
</ErrorBoundary>
</div>
);
}
export default App;
```
Here, if the `UserProfile` component fails to load, the `ErrorBoundary` component will catch the error and display a fallback message.
---
## Conclusion ✅
React lazy loading is a powerful technique to optimize the performance of your applications by loading components only when they are needed.
This not only improves the initial load time but also enhances the overall user experience by reducing bandwidth usage and improving subsequent navigation performance.
By leveraging `React.lazy()` and `React.Suspense`, developers can easily implement lazy loading and build more efficient, scalable React applications.
---
**_Happy Coding!_** 🔥
**[LinkedIn](https://www.linkedin.com/in/dev-alisamir)**
**[X (Twitter)](https://twitter.com/dev_alisamir)**
**[Telegram](https://t.me/the_developer_guide)**
**[YouTube](https://www.youtube.com/@DevGuideAcademy)**
**[Discord](https://discord.gg/s37uutmxT2)**
**[Facebook](https://www.facebook.com/alisamir.dev)**
**[Instagram](https://www.instagram.com/alisamir.dev)** | alisamirali |
1,896,347 | Concurrency and State Management Techniques in Elixir with FSM | Introduction Elixir is a functional language that runs on the BEAM virtual machine, the same as... | 0 | 2024-06-21T19:09:00 | https://dev.to/zoedsoupe/tecnicas-de-concorrencia-e-gerenciamento-de-estado-em-elixir-com-fsm-3f83 | architecture, elixir, programming | ## Introduction
Elixir is a functional language that runs on the BEAM virtual machine, the same one used by Erlang, famous for its concurrency and fault-tolerance capabilities. One of the powerful patterns that can be implemented in Elixir is the Finite State Machine (FSM), combined with asynchronous processing. This article explores how these patterns can benefit Elixir programs, providing control over the flow of events and improving the efficiency and robustness of systems.
## Finite State Machines (FSM)
A Finite State Machine is a computational model consisting of a finite number of states, transitions between those states, and actions that can be triggered by events. In an FSM system, an object can be in only one state at a time, and it can change state in response to an event. Each transition can be associated with a specific action.
### Benefits of FSM
1. **Organization and Clarity**: An FSM helps organize and clarify the behavior of complex systems, making the flow of events more predictable and easier to understand.
2. **Flow Control**: With an FSM, state flow control is explicit and clearly defined, allowing transitions only when specific conditions are met.
3. **Ease of Maintenance**: An FSM encapsulates state logic in a single place, making it easier to maintain and update the system without introducing bugs.
### FSM Example
Consider a simple payment authorization system with the following states: `pending`, `authorized`, `denied`, and `failed`.
1. **Initial State**: The payment starts in the `pending` state.
2. **Transition to `authorized`**: When an authorization request is approved, the state changes to `authorized`.
3. **Transition to `denied`**: If the request is denied, the state changes to `denied`.
4. **Transition to `failed`**: If there is a processing error, the state changes to `failed`.
A simplified FSM flow could be represented like this:
```elixir
defmodule PaymentFSM do
@moduledoc """
Defines the FSM for payment states.
"""
use Fsmx.Fsm,
transitions: %{
"pending" => ["authorized", "denied", "failed"],
"authorized" => [],
"denied" => [],
"failed" => []
}
end
```
> Here we use the `Fsmx` library, but it is possible to use plain functions or even Ecto to define an FSM.
In this example, a payment can move from `pending` to `authorized`, `denied`, or `failed`. Once it is in `authorized`, `denied`, or `failed`, it cannot transition to any other state.
## Asynchronous Processing
Asynchronous processing allows operations to run in parallel or in a non-blocking fashion, increasing the system's efficiency and responsiveness. In Elixir, this is achieved using lightweight processes managed by the BEAM virtual machine.
### Benefits of Asynchronous Processing
1. **Scalability**: Asynchronous processing allows multiple tasks to run simultaneously, increasing the system's scalability.
2. **Responsiveness**: Asynchronous systems can keep responding to new requests while other operations are in progress.
3. **Fault Tolerance**: Elixir's supervision architecture allows processes to fail and be restarted without affecting the rest of the system.
### Asynchronous Processing Example
Imagine a system that processes payment events. Each event can trigger several operations, such as balance checks, transaction logging, and so on. To avoid blocking the processing of new events, we can handle them asynchronously.
```elixir
defmodule PaymentProcessor do
@moduledoc """
Processes payment events asynchronously.
"""
use GenServer
def start_link(_opts) do
GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
end
def process_event(event) do
GenServer.cast(__MODULE__, {:process, event})
end
@impl true
def init(:ok) do
{:ok, []}
end
@impl true
def handle_cast({:process, event}, deadletter) do
case handle_event(event) do
:ok -> {:noreply, deadletter}
{:error, _reason} -> {:noreply, [event | deadletter]}
end
end
  # Depending on complexity, there could be
  # different handler modules per event type.
  defp handle_event(event) do
    # Logic to process the payment event
    # Side effects
    # Observability
    IO.puts("Processing event: #{inspect(event)}")
  end
end
```
In this example, each event is dispatched to the `GenServer` with a non-blocking `cast`, so the system can keep receiving and enqueueing new events without waiting for previous events to finish processing.
## Optimistic Locking
Optimistic locking is a technique for handling concurrency in distributed systems. Unlike pessimistic locking, which holds resources until a transaction completes, optimistic locking assumes collisions are rare and allows multiple transactions to proceed simultaneously, checking for conflicts only at commit time.
### Benefits of Optimistic Locking
1. **Better Performance**: Reduces the amount of locking, allowing greater parallelism.
2. **Fewer Deadlocks**: Minimizes the chance of deadlocks, since resources are not held for long periods.
3. **Greater Scalability**: Facilitates horizontal scaling, essential for large distributed systems.
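The mechanism behind optimistic locking — read a version, then only write if that version is still current — can be sketched independently of any database. A hedged JavaScript illustration of the idea:

```javascript
// Optimistic concurrency sketch: each record carries a version number,
// and an update only succeeds if the version read earlier is still current.
const store = new Map() // id -> { value, version }

function read(id) {
  return { ...store.get(id) } // snapshot, including the version seen
}

function update(id, newValue, expectedVersion) {
  const record = store.get(id)
  if (record.version !== expectedVersion) {
    // Another writer got there first; the caller must re-read and retry.
    return { ok: false, reason: "stale: record changed since it was read" }
  }
  store.set(id, { value: newValue, version: record.version + 1 })
  return { ok: true }
}

store.set(1, { value: "pending", version: 0 })

// Two concurrent readers take snapshots of the same record.
const snapshotA = read(1)
const snapshotB = read(1)

update(1, "authorized", snapshotA.version) // succeeds, bumps version to 1
const second = update(1, "denied", snapshotB.version) // fails: stale snapshot
```

The second write is rejected because its snapshot's version is no longer current — the same behavior a database enforces with a version (or `lock_version`) column.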
### Optimistic Locking Example
Let's add optimistic locking logic to our payment system. The idea is to check and update the payment's state only if it has not been modified by another process.
```elixir
defmodule Payments do
alias PaymentSystem.Repo
alias PaymentSystem.Payments.Payment
  def update_payment_with_lock(id, attrs) do
    Repo.transaction(fn ->
      payment = Repo.get!(Payment, id)

      updated_payment =
        payment
        |> Payment.changeset(attrs)
        # Requires a :lock_version integer column on the table; Ecto raises
        # Ecto.StaleEntryError if another process updated the row first.
        |> Ecto.Changeset.optimistic_lock(:lock_version)

      case Repo.update(updated_payment) do
        {:ok, payment} -> payment
        {:error, changeset} -> Repo.rollback(changeset)
      end
    end)
  end
end
```
In this example, we use a transaction to ensure the payment is updated only if it has not been modified by another process since it was last read.
## Integrating FSM, Optimistic Locking, and Asynchronous Processing
Combining FSM, optimistic locking, and asynchronous processing in Elixir can lead to highly efficient and robust systems. The FSM controls the state flow, ensuring that transitions are ordered and predictable. Optimistic locking allows concurrent transactions to be managed efficiently, while asynchronous processing ensures that long-running operations do not block the system.
### Complete Example
Let's combine the concepts discussed above into a complete example:
```elixir
defmodule PaymentFSM do
  use Fsmx.Fsm,
    transitions: %{
      "pending" => ["authorized", "denied", "failed"],
      "authorized" => [],
      "denied" => [],
      "failed" => []
    }

  def authorize_payment(payment), do: Fsmx.transition(payment, "authorized")

  def fail_payment(payment), do: Fsmx.transition(payment, "failed")
end

defmodule PaymentProcessor do
  alias PaymentSystem.{Payments, PaymentFSM}

  def process_payment_async(payment) do
    Task.start(fn -> process_payment(payment) end)
  end

  defp process_payment(payment) do
    with {:ok, payment} <- PaymentFSM.authorize_payment(payment),
         {:ok, payment} <- Payments.update_payment_with_lock(payment.id, %{status: "authorized"}) do
      IO.puts("Payment processed successfully: #{inspect(payment)}")
    else
      {:error, reason} ->
        PaymentFSM.fail_payment(payment)
        IO.puts("Failed to process payment: #{inspect(payment)}. Reason: #{inspect(reason)}")
    end
  end
end
```
In this example, the FSM manages the payment's state, optimistic locking ensures state updates happen safely under concurrency, and event processing is done asynchronously, keeping the system responsive and efficient.
## Conclusion
Using Finite State Machines (FSM) and internal asynchronous processing can bring numerous benefits to Elixir programs, from better organization and flow control to greater resilience and scalability. By combining these patterns, it is possible to build robust systems capable of handling complexity and high load effectively.
| zoedsoupe |
1,896,346 | Optimizing Web Performance: Lazy Loading Images and Components | Optimizing web performance is crucial for providing a superior user experience, improving SEO, and... | 0 | 2024-06-21T19:08:10 | https://dev.to/henriqueschroeder/optimizing-web-performance-lazy-loading-images-and-components-noe | webdev, javascript, react, nextjs |
Optimizing web performance is crucial for providing a superior user experience, improving SEO, and increasing conversion rates. For intermediate and experienced developers, lazy loading images and components is an advanced technique that can make a significant difference in web application performance. Let's explore the more technical aspects of this practice and how to implement it efficiently using native JavaScript, `lazysizes`, `React.lazy`, Dynamic Imports, and `next/script`.
### Advanced Benefits of Lazy Loading
1. **Improvement of Largest Contentful Paint (LCP)**:
- LCP is one of Google's key Core Web Vitals indicators. By deferring the loading of non-critical images and components, the rendering time of the largest visible element in the initial viewport is significantly reduced.
2. **Reduction of Resource Consumption**:
- Lazy loading reduces the load on the server and network by avoiding unnecessary requests, improving overall system efficiency and allowing resources to be allocated more effectively.
3. **Enhancement of Responsiveness**:
- Applications that use lazy loading are more responsive, especially on mobile devices, providing a smoother experience for end users.
### Implementation of Lazy Loading for Images with Native JavaScript
**Complete Example:**
1. **HTML:**
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Lazy Loading Example</title>
<style>
.lazy {
opacity: 0;
transition: opacity 0.3s;
}
.lazy-loaded {
opacity: 1;
}
</style>
</head>
<body>
<h1>Lazy Loading Example</h1>
<img data-src="image1.jpg" alt="Image 1 Description" class="lazy">
<img data-src="image2.jpg" alt="Image 2 Description" class="lazy">
<img data-src="image3.jpg" alt="Image 3 Description" class="lazy">
<script src="lazyload.js"></script>
</body>
</html>
```
2. **JavaScript (lazyload.js):**
```javascript
document.addEventListener("DOMContentLoaded", function() {
const lazyImages = document.querySelectorAll('img.lazy');
const lazyLoad = (image) => {
image.src = image.dataset.src;
image.onload = () => {
image.classList.remove('lazy');
image.classList.add('lazy-loaded');
};
};
if ("IntersectionObserver" in window) {
const observer = new IntersectionObserver((entries, observer) => {
entries.forEach(entry => {
if (entry.isIntersecting) {
lazyLoad(entry.target);
observer.unobserve(entry.target);
}
});
});
lazyImages.forEach(img => {
observer.observe(img);
});
} else {
      // Fallback for browsers that do not support IntersectionObserver.
      // Note: in production you would throttle this handler so it does not
      // run on every single scroll event.
      const lazyLoadFallback = () => {
        lazyImages.forEach(img => {
          if (img.getBoundingClientRect().top < window.innerHeight && img.getBoundingClientRect().bottom > 0 && getComputedStyle(img).display !== "none") {
            lazyLoad(img);
          }
        });
      };
      window.addEventListener("scroll", lazyLoadFallback);
      window.addEventListener("resize", lazyLoadFallback);
      window.addEventListener("orientationchange", lazyLoadFallback);
});
```
### Implementation of Lazy Loading for Images with `lazysizes`
**HTML:**
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Lazy Loading with lazysizes</title>
<style>
.lazyload {
opacity: 0;
transition: opacity 0.3s;
}
.lazyloaded {
opacity: 1;
}
</style>
</head>
<body>
<h1>Lazy Loading with lazysizes</h1>
<img data-src="image1.jpg" alt="Image 1 Description" class="lazyload">
<img data-src="image2.jpg" alt="Image 2 Description" class="lazyload">
<img data-src="image3.jpg" alt="Image 3 Description" class="lazyload">
<script src="https://cdnjs.cloudflare.com/ajax/libs/lazysizes/5.3.2/lazysizes.min.js" async></script>
</body>
</html>
```
### Implementation of Lazy Loading for Components with React.lazy and Suspense
**React:**
```javascript
import React, { Suspense, lazy } from 'react';
const LazyComponent = lazy(() => import('./LazyComponent'));
function App() {
return (
<Suspense fallback={<div>Loading...</div>}>
<LazyComponent />
</Suspense>
);
}
export default App;
```
### Implementation of Lazy Loading for Components with Dynamic Imports in Next.js
**Next.js:**
```javascript
import dynamic from 'next/dynamic';
const DynamicComponent = dynamic(() => import('../components/DynamicComponent'), {
loading: () => <p>Loading...</p>,
ssr: false
});
export default function Home() {
return <DynamicComponent />;
}
```
### Additional Optimizations with `next/script`
**Next.js:**
```javascript
import Script from 'next/script';
function MyApp() {
return (
<>
<Script src="https://example.com/somescript.js" strategy="lazyOnload" />
<Component />
</>
);
}
```
### Final Considerations
Implementing lazy loading with native JavaScript, `lazysizes`, `React.lazy`, Dynamic Imports, and `next/script` is a powerful approach to improving the performance of your web applications. These techniques allow precise control over when and how resources are loaded, providing a faster and more efficient user experience.
When applying these advanced techniques, it's important to monitor performance impacts using tools like Lighthouse, Google Analytics, and Core Web Vitals reports. Continuous analysis and iterative optimization will ensure that your applications deliver the best possible experience.
### References
1. [Google Developers: Native Lazy Loading](https://developers.google.com/web/updates/2019/04/native-lazy-loading)
2. [MDN Web Docs: Intersection Observer API](https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API)
3. [Web.dev: Lazy Loading](https://web.dev/lazy-loading/)
4. [Google Developers: Optimize LCP](https://web.dev/optimize-lcp/)
5. [Lazysizes GitHub Repository](https://github.com/aFarkas/lazysizes)
6. [React Documentation: Code Splitting](https://reactjs.org/docs/code-splitting.html)
7. [Next.js Documentation: Dynamic Import](https://nextjs.org/docs/advanced-features/dynamic-import)
---
Translated post with the help of AI | henriqueschroeder |
1,896,345 | 10 secret rules of Development every Programmer should know | Every profession has its quirks and unspoken rules, and development is no exception. From the... | 27,390 | 2024-06-21T19:07:16 | https://dev.to/buildwebcrumbs/10-secret-rules-of-development-every-programmer-should-know-2kj1 | jokes, watercooler | Every profession has its quirks and unspoken rules, and development is no exception.
From the inexplicable faith in turning it off and on again (hey, it often works!) to the mysterious art of centering a div (this doesn't work as often🥲), here are ten secrets that every programmer will nod knowingly about.
### 1. **If It Works, Don't Touch It**
There's code, and then there's code you don't touch because the last time you did, it took three days to fix.
### 2. **The Console Log is Your Best Friend**
Who needs a debugger when you have `console.log()`? It’s the Swiss Army knife of debugging.
### 3. **Comment Your Code As If Your Job Depends on it**
Self-explanatory, and yet, future you will still wonder what the past you was thinking.
### 4. **Google Is the Co-Developer of Every Project**
Stack Overflow might as well be on your payroll, considering how much it contributes.
### 5. **Not All Heroes Wear Capes, Some Just Write Documentation**
Bless those souls who actually document their code. They save lives.
### 6. **Some Bugs Could Have Been a Feature**
Sometimes the line between a bug and a feature is just a matter of timing and presentation.
### 7. **'It Works on My Machine' Should Be an Official Development Stage**
The first stage of grief in debugging is denial.
### 8. **Every Developer Has a Personal Repository of Snippets They Don’t Share**
These are the family jewels, handed down from one generation of projects to another.
### 9. **The More Urgent the Deadline, the More Likely You Are to Break the Build**
Murphy’s Law is especially potent in the development environment.
### 10. **The Number of Browser Tabs You Have Open Is Directly Proportional to How Close the Project Deadline Is**
By the end of a project, you could probably scroll through tabs faster than you can scroll through your code.
## Community is the bigger secret
Just like the secret rules shared in this article, the best tricks in development often come from the collective wisdom and contributions of the community.
If you enjoyed uncovering these secrets, consider contributing to the magic by starring our project, Webcrumbs, on GitHub. Your support helps us continue to unveil hidden gems and build tools that empower developers like you.
[Star Webcrumbs on GitHub ⭐️](https://github.com/webcrumbs-community/webcrumbs)
Thank you for reading,
Pachi 💚 | pachicodes |
1,896,323 | Proxy Pattern: A Practical Guide to Smarter Object Handling | The Proxy Pattern is an object-oriented programming concept that acts as a “substitute” or... | 0 | 2024-06-21T18:17:36 | https://dev.to/robertoumbelino/proxy-pattern-a-practical-guide-to-smarter-object-handling-4mpl | The Proxy Pattern is an object-oriented programming concept that acts as a “substitute” or “representative” for another object. This pattern is very useful when we need to control access to an object or add extra functionalities without directly modifying its code. Basically, the Proxy acts as an intermediary between the client (who uses the object) and the real object, allowing the Proxy to manage this access in various ways.
#### ❓ **Why Use the Proxy Pattern?**
Imagine you have an object that performs a heavy or time-consuming task, like loading data from a server. It might not be efficient or necessary to do this every time the object is accessed. With a Proxy, you can implement “lazy loading,” loading the data only when it’s really needed. Another common application is implementing security, where the Proxy can check permissions before allowing access to sensitive methods of the real object.
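The lazy-loading idea mentioned above can be demonstrated with a small sketch: a Proxy that defers building an expensive object until the first property access. Here, `createHeavyResource` is a hypothetical stand-in for a costly constructor:

```javascript
// Lazy-initialization sketch: the expensive object is only built on first access.
let buildCount = 0

function createHeavyResource() {
  buildCount++ // pretend this is slow: parsing files, opening connections, etc.
  return { data: "loaded", query: (q) => `result for ${q}` }
}

function lazy(factory) {
  let instance = null
  return new Proxy({}, {
    get(_target, prop) {
      if (instance === null) instance = factory() // built on first use only
      return instance[prop]
    },
  })
}

const resource = lazy(createHeavyResource)
// Nothing has been built yet: buildCount is still 0 here.
const answer = resource.query("users") // first access triggers construction
resource.data // later accesses reuse the same instance
```

Until the client actually touches `resource`, no construction cost is paid — the Proxy stands in for the real object, which is exactly the pattern's intent.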
#### ⚙️ **How Does the Proxy Work?**
The Proxy Pattern involves three main components:
1. **Real Object:** The object that contains the main logic or data.
2. **Proxy:** The intermediary that controls access to the real object.
3. **Client:** The entity that interacts with the Proxy instead of directly with the real object.
When the client requests something from the Proxy, it decides whether to forward the request to the real object, handle the request itself, or even deny access.
Let’s look at two practical examples to understand this better.
#### 💾 **Example 1: Caching with Proxy**
First, we’ll define an object that simulates a database. Then, we’ll create a Proxy to cache the results of queries to this database.
##### 🏗️ Step 1: Defining the Database Object
Let’s create an object that simulates a database with user data.
```jsx
const database = {
users: {
1: { id: 1, name: "Alice" },
2: { id: 2, name: "Bob" },
3: { id: 3, name: "Charlie" },
},
getUser(id) {
console.log(`Fetching user with id ${id} from database...`)
return this.users[id]
},
}
```
##### 🛠️ Step 2: Creating the Proxy for Caching
Now, we’ll create a Proxy that caches the results of database queries.
```jsx
const cacheHandler = {
cache: {},
get(target, prop, receiver) {
if (prop === "getUser") {
return (id) => {
if (!this.cache[id]) {
this.cache[id] = target.getUser(id)
return this.cache[id]
}
console.log(`Fetching user with id ${id} from cache...`)
return this.cache[id]
}
}
return Reflect.get(target, prop, receiver)
},
}
const dbProxy = new Proxy(database, cacheHandler)
```
##### 🔄 Step 3: Using the Proxy
Now, let’s use the Proxy to access the database data and see if the caching works correctly.
```jsx
console.log(dbProxy.getUser(1)) // Fetching user with id 1 from database...
console.log(dbProxy.getUser(1)) // Fetching user with id 1 from cache...
console.log(dbProxy.getUser(2)) // Fetching user with id 2 from database...
console.log(dbProxy.getUser(2)) // Fetching user with id 2 from cache...
console.log(dbProxy.getUser(3)) // Fetching user with id 3 from database...
console.log(dbProxy.getUser(3)) // Fetching user with id 3 from cache...
```
##### 📊 Result
When you run this code, you’ll see that the first query for a specific user accesses the (simulated) database, and subsequent queries for the same user access the cache, avoiding the need to query the database again. Here’s the expected output:
```plaintext
Fetching user with id 1 from database...
{ id: 1, name: 'Alice' }
Fetching user with id 1 from cache...
{ id: 1, name: 'Alice' }
Fetching user with id 2 from database...
{ id: 2, name: 'Bob' }
Fetching user with id 2 from cache...
{ id: 2, name: 'Bob' }
Fetching user with id 3 from database...
{ id: 3, name: 'Charlie' }
Fetching user with id 3 from cache...
{ id: 3, name: 'Charlie' }
```
#### 📝 **Example 2: Form Validator with Proxy**
Now, let’s create a Proxy to validate form data before it gets set on the object.
```jsx
const form = {
name: "",
email: "",
password: "",
}
const validator = {
set(target, prop, value) {
if (prop === "email" && !isValidEmail(value)) {
throw new Error("Invalid email address")
}
if (prop === "password" && !isValidPassword(value)) {
throw new Error("Password must contain at least 8 characters")
}
target[prop] = value
return true
},
}
const formProxy = new Proxy(form, validator)
function isValidEmail(email) {
return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)
}
function isValidPassword(password) {
return password.length >= 8
}
formProxy.name = "John"
formProxy.email = "john.doe@example.com" // passes validation
formProxy.email = "not-an-email" // throws an Error: "Invalid email address"
formProxy.password = "1234" // throws an Error: "Password must contain at least 8 characters"
```
Here, the Proxy intercepts the value assignments to the form fields and validates them before allowing the update. If validation fails, an error is thrown, preventing invalid values from being set.
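Access control, mentioned earlier as another common Proxy use case, follows the same interception idea. A hedged sketch with illustrative property names:

```javascript
// Access-control sketch: the Proxy denies reads of sensitive properties
// unless the caller's role allows it. Property and role names are illustrative.
function secure(target, role) {
  const restricted = new Set(["salary", "ssn"])
  return new Proxy(target, {
    get(obj, prop, receiver) {
      if (restricted.has(prop) && role !== "admin") {
        throw new Error(`Access to "${String(prop)}" denied`)
      }
      return Reflect.get(obj, prop, receiver)
    },
  })
}

const employee = { name: "Alice", salary: 90000 }

const adminView = secure(employee, "admin")
const guestView = secure(employee, "guest")

adminView.salary // allowed for the admin role
guestView.name // non-restricted properties are always readable
// guestView.salary // would throw: Access to "salary" denied
```

Clients interact only with the secured view, never the raw object, so the permission check cannot be bypassed accidentally.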
### ✅ **Conclusion**
The Proxy Pattern is a powerful tool for adding a layer of control and functionality over an object without directly modifying its code. With it, we can implement caching, lazy loading, input validation, access control, and much more in an organized and efficient way. | robertoumbelino | |
1,896,343 | 🚀 A Practical Guide to Setting Up Zsh, Oh My Zsh, asdf, and Spaceship Prompt with Zinit for Your Development Environment | Introduction 🌟 Improve your development environment with this guide on how to install and... | 0 | 2024-06-21T19:03:22 | https://dev.to/girordo/um-guia-pratico-para-configurar-zsh-oh-my-zsh-asdf-e-spaceship-prompt-com-zinit-para-seu-ambiente-de-desenvolvimento-13ld | shell, zsh, terminal, tutorial | ### **Introduction** 🌟
Improve your development environment with this guide on how to install and configure **Zsh**, **Oh My Zsh**, **asdf**, and the **Spaceship Prompt** theme. We will also use **Zinit** for additional plugin management. Let's get started!
### 🛠️ **Step 1: Installing Zsh**
**Zsh** is a robust shell that provides a powerful command-line experience. Here's how to install it:
#### 🐧 **For Linux (Ubuntu/Debian):**
```bash
sudo apt update
sudo apt install zsh
chsh -s $(which zsh)
```
#### 🍎 **Para macOS:**
O Zsh já vem pré-instalado. Para configurá-lo como seu shell padrão:
```bash
chsh -s /bin/zsh
```
#### 🪟 **Para Windows:**
Use o **WSL (Windows Subsystem for Linux)** ou **Git Bash**. Para WSL:
1. Instale uma distribuição WSL (por exemplo, Ubuntu) na Microsoft Store.
2. Instale o Zsh como faria no Ubuntu.
**Verifique a instalação**:
```bash
zsh --version
```
### ⚙️ **Passo 2: Configurando o Oh My Zsh**
O **Oh My Zsh** simplifica a configuração do Zsh com temas e plugins.
**Instale o Oh My Zsh**:
```bash
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```
Este script configurará o Oh My Zsh e mudará seu shell padrão para o Zsh.
**Configure o Oh My Zsh**:
1. Abra seu arquivo `.zshrc`:
```bash
nano ~/.zshrc
```
2. **Habilite os plugins**:
```bash
plugins=(git asdf)
```
3. **Recarregue sua configuração**:
```bash
source ~/.zshrc
```
### 🔄 **Step 3: Installing and Configuring Zinit**
**Zinit** is a plugin manager for Zsh, offering fast and flexible plugin management.
**Install Zinit**:
1. Add the Zinit installation snippet to your `.zshrc`:
```bash
cat << 'EOF' >> ~/.zshrc
### Added by the Zinit installer
if [[ ! -f $HOME/.local/share/zinit/zinit.git/zinit.zsh ]]; then
print -P "%F{33} %F{220}Installing %F{33}ZDHARMA-CONTINUUM%F{220} Initiative Plugin Manager (%F{33}zdharma-continuum/zinit%F{220})…%f"
command mkdir -p "$HOME/.local/share/zinit" && command chmod g-rwX "$HOME/.local/share/zinit"
command git clone https://github.com/zdharma-continuum/zinit "$HOME/.local/share/zinit/zinit.git" && \
print -P "%F{33} %F{34}Installation successful.%f%b" || \
print -P "%F{160} The clone has failed.%f%b"
fi
source "$HOME/.local/share/zinit/zinit.git/zinit.zsh"
autoload -Uz _zinit
(( ${+_comps} )) && _comps[zinit]=_zinit
# Load a few important annexes, without Turbo
zinit light-mode for \
zdharma-continuum/zinit-annex-as-monitor \
zdharma-continuum/zinit-annex-bin-gem-node \
zdharma-continuum/zinit-annex-patch-dl \
zdharma-continuum/zinit-annex-rust
### End of Zinit's installer chunk
EOF
```
2. **Source your `.zshrc`**:
```bash
source ~/.zshrc
```
### 🔌 **Step 4: Installing Additional Plugins with Zinit**
Use Zinit to install additional plugins and enrich your Zsh experience.
**Install plugins using Zinit**:
1. Open your `.zshrc` file and add the following Zinit plugin commands:
```bash
nano ~/.zshrc
```
2. Add these lines to install and load additional plugins:
```bash
zinit light zdharma-continuum/fast-syntax-highlighting
zinit light zsh-users/zsh-autosuggestions
zinit light zsh-users/zsh-completions
```
3. **Save and reload** your `.zshrc`:
```bash
source ~/.zshrc
```
### 📦 **Step 5: Installing and Configuring asdf**
**asdf** is a versatile version manager for multiple languages.
**Install asdf**:
1. Clone the asdf repository:
```bash
git clone https://github.com/asdf-vm/asdf.git ~/.asdf --branch v0.11.3
```
2. Add asdf to your `.zshrc`:
```bash
echo -e '\n. $HOME/.asdf/asdf.sh' >> ~/.zshrc
echo -e '\n. $HOME/.asdf/completions/asdf.bash' >> ~/.zshrc
source ~/.zshrc
```
**Install asdf plugins**:
1. **Add the Node.js plugin**:
```bash
asdf plugin-add nodejs https://github.com/asdf-vm/asdf-nodejs.git
```
2. **Install a specific version**:
```bash
asdf install nodejs 16.13.0
asdf global nodejs 16.13.0
```
3. **Add the Python plugin**:
```bash
asdf plugin-add python https://github.com/danhper/asdf-python.git
```
4. **Install a specific version**:
```bash
asdf install python 3.9.7
asdf global python 3.9.7
```
**Managing project-specific versions**:
Create a `.tool-versions` file in your project directory:
```bash
nodejs 14.17.6
python 3.8.10
```
Run `asdf install` in the project directory to use these versions locally.
### 🚀 **Step 6: Setting Up the Spaceship Prompt Theme**
The **Spaceship Prompt** theme offers a sleek, informative prompt for Zsh.
**Install Spaceship Prompt**:
1. **Clone the Spaceship repository**:
```bash
git clone https://github.com/spaceship-prompt/spaceship-prompt.git "$ZSH_CUSTOM/themes/spaceship-prompt" --depth=1
```
2. **Create a symbolic link**:
```bash
ln -s "$ZSH_CUSTOM/themes/spaceship-prompt/spaceship.zsh-theme" "$ZSH_CUSTOM/themes/spaceship.zsh-theme"
```
3. **Set the theme in your `.zshrc`**:
```bash
ZSH_THEME="spaceship"
```
**Configure Spaceship Prompt**:
1. Create a `.spaceshiprc.zsh` configuration file:
```bash
nano ~/.spaceshiprc.zsh
```
2. Add the following configuration:
```zsh
SPACESHIP_USER_SHOW=always
SPACESHIP_PROMPT_ADD_NEWLINE=false
SPACESHIP_CHAR_SYMBOL="λ"
SPACESHIP_CHAR_SUFFIX=" "
SPACESHIP_PROMPT_ORDER=(
user # Username section
dir # Current directory section
host # Hostname section
git # Git section (git_branch + git_status)
package # Package version
node # Node.js section
bun # Bun section
elixir # Elixir section
erlang # Erlang section
rust # Rust section
docker # Docker section
docker_compose # Docker Compose section
terraform # Terraform section
exec_time # Execution time
line_sep # Line break
jobs # Background jobs indicator
exit_code # Exit code section
char # Prompt character
)
```
3. **Source your configuration** in `.zshrc`:
```bash
echo "source ~/.spaceshiprc.zsh" >> ~/.zshrc
source ~/.zshrc
```
### 🔠 **Step 7: Adding a Nerd Font for Spaceship Prompt**
To enhance your terminal's appearance with extra icons and glyphs, install a **Nerd Font**.
**Install a Nerd Font**:
1. Go to the [Nerd Fonts GitHub repository](https://github.com/ryanoasis/nerd-fonts) and download a font of your choice, such as **Hack** or **Roboto Mono**.
2. Follow the installation instructions for your operating system.
**Configure your terminal emulator**:
1. Open your terminal emulator's preferences/settings.
2. Select the installed Nerd Font (e.g., Hack Nerd Font or Roboto Mono Nerd Font) as the font for your terminal.
**Update your Zsh configuration**:
1. Open the `.zshrc` file:
```bash
nano ~/.zshrc
```
2. Update `SPACESHIP_CHAR_SYMBOL` to use a Nerd Font icon (e.g., `` for Hack Nerd Font or `` for Roboto Mono Nerd Font):
```bash
SPACESHIP_CHAR_SYMBOL="" # Use the appropriate Nerd Font symbol here
```
3. **Save and close** the file.
**Source your configuration**:
```bash
source ~/.zshrc
```
### 🔧 **Step 8: Other Useful Zsh Settings**
**Customize command history**:
Store more history and share it across sessions:
```bash
HISTSIZE=10000
SAVEHIST=10000
setopt share_history
```
**Enable auto-correction**:
Enable corrections for common typos:
```bash
setopt correct
```
### 🎉 **Conclusion**
You have now set up a robust, visually appealing development environment with **Zsh**, **Oh My Zsh**, **asdf**, and the **Spaceship Prompt** theme, using **Zinit** for additional plugins. This setup will improve your workflow and make it easier to manage multiple projects. Happy coding!
---
**Further Reading** 📚:
- [Oh My Zsh Plugins](https://github.com/ohmyzsh/ohmyzsh/wiki/Plugins)
- [asdf Documentation](https://asdf-vm.com/)
- [Spaceship Prompt on GitHub](https://github.com/spaceship-prompt/spaceship-prompt)
- [Zinit Documentation](https://zdharma-continuum.github.io/zinit/wiki/)
- [Zsh User Guide](http://zsh.sourceforge.net/Doc/Release/)
---
<p align="center"><em>This article was created and customized with the help of ChatGPT.</em> 🤖💡</p> | girordo |
1,896,342 | Recursion: Computer Science Challenge | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-21T19:03:05 | https://dev.to/karim_abdallah/recursion-computer-science-challenge-b6p | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
**Recursion**: A function that calls itself! Solves problems by breaking them into smaller, similar problems. Powerful for complex tasks but requires careful design to avoid infinite loops.
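A tiny illustration of the idea described in the explainer (not part of the 256-character limit):

```javascript
// factorial(n) calls itself on a smaller, similar problem (n - 1)
// until it reaches the base case — which prevents the infinite
// loop the explainer warns about.
function factorial(n) {
  if (n <= 1) return 1; // base case
  return n * factorial(n - 1);
}

console.log(factorial(5)); // 120
```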
## Additional Context
Resources: [About Recursion](https://en.wikipedia.org/wiki/Recursion_(computer_science))
| karim_abdallah |
1,896,461 | ReAPI Client: A Comprehensive Guide to My React API Request Builder | Introduction Welcome to my blog! I'm Dipankar Paul, a frontend developer with a passion... | 27,831 | 2024-06-23T18:04:33 | https://iamdipankarpaul.hashnode.dev/reapi-client-a-comprehensive-guide-to-my-react-api-request-builder | react, javascript, projects, api | ---
title: ReAPI Client: A Comprehensive Guide to My React API Request Builder
published: true
date: 2024-06-21 18:54:17 UTC
tags: React,JavaScript,projects,APIs
series: Projects
canonical_url: https://iamdipankarpaul.hashnode.dev/reapi-client-a-comprehensive-guide-to-my-react-api-request-builder
---
## Introduction
Welcome to my blog! I'm Dipankar Paul, a frontend developer with a passion for creating efficient and user-friendly web applications. In this article, I will take you through one of my projects, the **ReAPI Client**. This React-based API request builder simplifies the process of creating, sending, and managing API requests. Whether you're a developer who frequently interacts with APIs or someone looking for a tool to streamline your workflow, the ReAPI Client has a lot to offer.
In this comprehensive blog post, I'll cover the following:
- Inspiration
- Project Overview
- Key Features
- Technical Details
- Step-by-Step Guide with Screenshots
- Challenges and Solutions
- Future Improvements
- Conclusion
## Inspiration
The idea for building the ReAPI Client came from my experience using the Thunder Client VS Code extension. I was impressed by its simplicity and efficiency in managing API requests directly within the code editor. This inspired me to develop the ReAPI Client.
## Project Overview
The ReAPI Client is designed to be a powerful yet user-friendly tool for developers. Its main goal is to simplify the testing and debugging of API requests. The app supports various HTTP methods, provides real-time and full-screen responses, maintains a history of requests, and allows users to copy and use code snippets. Additionally, users can easily reuse any request from the history, making the ReAPI Client a versatile tool for any developer's toolkit.
## Key Features
Let's dive into the key features of the ReAPI Client:
1. **API Request Builder**: Supports GET, POST, PUT, PATCH, and DELETE requests.
2. **Real-time Response**: Displays responses in real-time and in full-screen mode.
3. **Request History**: Keeps a history of all API requests made.
4. **Code Snippets**: Allows users to copy code snippets for API requests.
5. **Reuse Requests**: Users can easily reuse any request from the history.
## Technical Details
The ReAPI Client is built using a modern tech stack:
- **ReactJS**: For the frontend, providing a robust and flexible framework.
- **Mantine**: For UI components and hooks, offering a customizable and responsive design.
- **Zustand**: For state management, ensuring efficient and modular state handling.
- **Axios**: For managing API requests, ensuring smooth data flow throughout the app.
The app’s architecture is designed to be modular and maintainable, with a focus on state management and a seamless user experience.
## Step-by-Step Guide with Screenshots
Let's walk through the ReAPI Client step by step, highlighting its features and functionality with screenshots.
### 1. User Interface
Upon launching the ReAPI Client, you are greeted with a clean and intuitive user interface.

### 2. Creating an API Request
To create a new API request, select the request type (GET, POST, PUT, PATCH, DELETE), input the endpoint URL, and add any necessary params, headers or body content.


### 3. Sending the Request
Once you have filled in the request details, click the "Send" button. The app will process the request and display the response in real-time, in a full-screen mode. This feature allows you to view the status and data of the response immediately.

### 4. Managing Request History
All your API requests are saved in the request history. You can access this history from the main menu. The request history allows you to view and manage past requests, making it easy to keep track of your testing and debugging activities. To reuse a request from the history, just select the request. This feature allows you to resend or modify past requests easily, saving you significant time and effort.

### 5. Using Code Snippets
One of the standout features of the ReAPI Client is the ability to copy code snippets for API requests. This feature simplifies the process of reusing API calls in your development environment. Simply click the "Copy" button next to the request to get the code snippet.

## Challenges and Solutions
During the development of the ReAPI Client, I encountered several challenges. Here are some of the key challenges and how I addressed them:
### 1. Handling Asynchronous Requests
- **Challenge**: Ensuring the UI remained responsive while handling asynchronous API requests.
- **Solution**: I used React hooks and Axios to manage asynchronous requests smoothly. By leveraging React's useEffect and useState hooks, I was able to keep the UI responsive and update it in real-time as requests were processed.
### 2. Efficient State Management
- **Challenge**: Managing complex state efficiently.
- **Solution**: Zustand was the perfect choice for state management. Its lightweight and flexible approach allowed me to manage the app's state efficiently without adding unnecessary complexity. Zustand's API is straightforward, making it easy to implement and maintain.
### 3. UI/UX Design
- **Challenge**: Creating an intuitive and user-friendly interface.
- **Solution**: Mantine provided a set of customizable UI components that made designing the interface straightforward. Its responsive design capabilities ensured that the app looked great on all devices. I focused on creating a clean and simple layout, with easily accessible features and a consistent design language.
## Future Improvements
While the ReAPI Client is already a powerful tool, there are several areas for future improvement:
### Planned Features
1. **User Authentication**: Adding user authentication to provide personalized experiences and secure access to saved requests.
2. **Enhanced Error Handling**: Improving error handling to provide more detailed and helpful error messages, making debugging easier.
### Potential Improvements
1. **Performance Optimization**: Optimizing performance for handling large datasets and complex requests more efficiently.
2. **Customizable UI**: Adding more options for customizing the UI to suit individual preferences and workflows.
## Source Code and Live Link
For those interested in exploring the code or trying out the ReAPI Client, you can find the source code on GitHub and the live version hosted online. Check them out through the links below:
- **Source Code**: [GitHub Repository](https://github.com/dipankarpaul2k/ReAPI_Client)
- **Live Demo**: [Live Version](https://reapi-client.vercel.app/)
Feel free to explore the repository, clone the project, and contribute if you'd like. The live demo allows you to experience the ReAPI Client in action and see its features firsthand.
## Conclusion
In conclusion, the ReAPI Client is a powerful and user-friendly tool for developers, offering a range of features to simplify API testing and debugging. With its real-time responses, request history, code snippets, and the ability to reuse requests, it streamlines the entire process, making it an essential tool for any developer.
I hope this blog post has given you a comprehensive overview of the ReAPI Client. If you have any questions or feedback, I’d love to hear from you. Feel free to comment.
Thank you for reading, and happy coding!
| dipankarpaul |
1,896,341 | My First Blog | Welcome to my first blog post! If you're here, you're probably pondering one of two questions:... | 0 | 2024-06-21T18:51:19 | https://dev.to/shrishti_srivastava_/my-first-blog-3e9k | webdev, javascript, beginners, programming | **Welcome to my first blog post!**
If you're here, you're probably pondering one of two questions: "Which domain should I go into?" or "How do I really delve into web development?" Today, I’ll tackle both!
**Which Domain to Choose?**
You chose B.Tech to land a job in the IT sector, or maybe at your dream company like MAANG or FAANG. Each company requires specific skills, which are usually aligned with your interests. Tech and coding are vast oceans, and you might feel lost if you're navigating through something uninteresting to you. Boredom is your enemy, so it's crucial to find your passion.
Try different domains for at least a month each. It's not a waste of time if it helps you discover what truly excites you. Dive into areas like Artificial Intelligence, Machine Learning, Web Development, Cybersecurity, and more. Find your niche and embrace it. Remember, it's about YOUR TECH journey!
**How to Get into Web Development?**
If you're here, you're probably as excited about web development as I am. Whether you're a total newbie or looking to brush up on the basics, you're in the right place. Let's dive into the essentials of web development and unravel the mysteries of HTML, CSS, and JavaScript. Ready to turn your ideas into interactive websites? Let's get started!
**HTML: The Skeleton of the Web**
- Learn the basics of HTML and how it structures your content.
- Quick tip: Always ensure your HTML is clean and semantic.
**CSS: Adding Style to Your Site**
- Discover how CSS makes your websites look fabulous.
- From colors to layouts: understanding selectors, properties, and values.
- Quick tip: Use Flexbox and Grid for modern, responsive designs.
**JavaScript: Bringing Your Site to Life**
- Understand the role of JavaScript in making your websites interactive.
- Basics you need to know: variables, functions, and events.
- Quick tip: Practice DOM manipulation to dynamically update your content.
**Putting It All Together**
- Build a simple project combining HTML, CSS, and JavaScript.
- Step-by-step guide to creating your first interactive webpage.
- Quick tip: Use debugging tools effectively.
**Conclusion:**
Remember, every coding journey starts with a single line of code. Stay curious, keep experimenting, and don't be afraid to make mistakes. Happy coding, and see you in the next post where we'll dive deeper into advanced tech skills!
Hope you like my ideas... Thank you! | shrishti_srivastava_ |
1,896,340 | Discover Exceptional Orthodontic Care in Austin | Are you searching for the perfect orthodontist in Austin to transform your smile? Look no further... | 0 | 2024-06-21T18:46:30 | https://dev.to/harry_feddrick_/discover-exceptional-orthodontic-care-in-austin-16k6 | healthydebate, dentist, austin, orthodontist | Are you searching for the perfect orthodontist in Austin to transform your smile? Look no further than the vibrant community of orthodontic specialists right here in our city. Austin is not only known for its music scene and vibrant culture but also for its commitment to top-notch dental care, including orthodontics.
## Why Choose an Orthodontist in Austin?
Austin boasts a diverse array of orthodontic practices, each offering unique specialties and a commitment to patient care. Whether you're considering braces, Invisalign, or other orthodontic treatments, you'll find experts who utilize cutting-edge technology and personalized treatment plans to achieve exceptional results.
## Personalized Care and Expertise
One of the standout features of [orthodontists](https://celebratedentalaustin.com/services/orthodontist/) in Austin is their dedication to personalized care. Each patient receives a customized treatment plan tailored to their specific needs and goals. Whether you're an adult looking to enhance your smile discreetly with clear aligners or a teenager embarking on a journey with traditional braces, Austin orthodontists prioritize your comfort and satisfaction throughout the process.
## State-of-the-Art Technology
Advancements in dental technology have revolutionized orthodontic treatment, and Austin orthodontists are at the forefront of these innovations. From digital impressions and 3D imaging to advanced treatment techniques, such as accelerated orthodontics, these specialists use state-of-the-art tools to ensure precise diagnostics and efficient treatment outcomes.
## A Welcoming Environment
Beyond expertise and technology, Austin orthodontists foster a welcoming environment for patients of all ages. Whether you're visiting for a consultation or undergoing treatment, you'll find that their offices are designed to make you feel comfortable and relaxed. This commitment to creating a positive experience reflects Austin's renowned hospitality and community spirit.
## Choosing Your Austin Orthodontist
When selecting an orthodontist in [Austin](https://celebratedentalaustin.com/), consider factors such as their experience, credentials, patient reviews, and the range of services they offer. Many practices offer complimentary consultations, allowing you to meet the team, discuss treatment options, and determine if they're the right fit for your orthodontic needs.
## Get Started Today!
Ready to embark on your journey to a healthier, more confident smile? Explore the diverse range of orthodontists in Austin and take the first step towards achieving the smile you've always wanted. Whether you're new to the city or a longtime resident, Austin's orthodontic community is here to support you every step of the way.
Begin your search today and discover why Austin is a hub for exceptional orthodontic care. Your smile transformation awaits!
---
This guest post highlights the strengths of orthodontic care in Austin, emphasizing personalized treatment, advanced technology, and a welcoming atmosphere for patients. | harry_feddrick_ |
1,896,336 | Big O Notation | A mathematical notation describing the upper limit of an algorithm's runtime or space requirements in... | 0 | 2024-06-21T18:41:01 | https://dev.to/pirisaurio32/big-o-notation-40bb | devchallenge, cschallenge, computerscience, beginners | A mathematical notation describing the upper limit of an algorithm's runtime or space requirements in the worst-case scenario. It's crucial for comparing efficiency, helping developers optimize code and understand performance as input size grows. | pirisaurio32 |
1,896,283 | DAY3 -> Scaling Databases (Replication) | In my previous blogs, we talked about vertical scaling. In this one, we will talk about horizontal... | 0 | 2024-06-21T18:39:43 | https://dev.to/taniskannpurna/day3-scaling-databases-replication-30pd | systemdesign, system, softwaredevelopment, softwareengineering | - In my previous blogs, we talked about vertical scaling. In this one, we will talk about horizontal scaling.
**HORIZONTAL SCALING**
- For most databases, the read-to-write ratio is roughly 90:10.
- The most basic form of scaling is to have separate databases: one (or more) just for reading and one for writing.
- Through the API, we send read requests to the replica DB(s) and write requests to the master DB.
It would look something like the diagram below. (There can be multiple replica DBs.)

- Data consistency becomes a big concern, as we don't want the data in the replica DBs to differ from the master DB. So it's very important to replicate the data correctly.
There are two main ways to replicate:
1. SYNCHRONOUS
- Whenever there is a write/update operation, the application or the database updates the replica DB(s) at the same time. This results in:
 - Higher response time
 - Slower writes
 - Zero replication lag
 - Very strong consistency

2. ASYNCHRONOUS
- We keep a log of all write operations on the master DB, and at certain intervals we replay those logs on the replica DBs. This results in:
 - Lower response time
 - Faster writes, since we don't write to multiple DBs at once; some replication lag
 - Eventual consistency

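The read/write split described above is often implemented as a thin routing layer in application code. Here is a minimal sketch of that idea — the class and method names (`RoutingDB`, `FakeNode`, `execute`) are illustrative, not from any specific library, and the read/write detection is a deliberately rough heuristic:

```javascript
// A stand-in for a real database connection, so the sketch is runnable.
class FakeNode {
  constructor(name) {
    this.name = name;
    this.queries = [];
  }
  run(query) {
    this.queries.push(query);
    return `${this.name} handled: ${query}`;
  }
}

// Routes writes to the master and reads to a randomly chosen replica.
class RoutingDB {
  constructor(master, replicas) {
    this.master = master;
    this.replicas = replicas;
  }
  execute(query) {
    // Rough heuristic: anything that isn't a SELECT is treated as a write.
    const isRead = query.trim().toUpperCase().startsWith("SELECT");
    const target = isRead
      ? this.replicas[Math.floor(Math.random() * this.replicas.length)]
      : this.master;
    return target.run(query);
  }
}

const master = new FakeNode("master");
const replicas = [new FakeNode("replica-1"), new FakeNode("replica-2")];
const db = new RoutingDB(master, replicas);

console.log(db.execute("INSERT INTO users VALUES (1, 'a')")); // goes to master
console.log(db.execute("SELECT * FROM users")); // goes to one of the replicas
```

With asynchronous replication on top of this, a read routed to a replica right after a write may briefly return stale data — exactly the eventual-consistency trade-off described above.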
| taniskannpurna |
1,896,335 | DEV's Wanted! | Hey everyone. For a long time I've had the idea of an MIT-licensed app dedicated to home... | 0 | 2024-06-21T18:38:28 | https://dev.to/jsfreu/devs-wanted-138k | opensource, mit, development | Hey everyone. For a long time I've had the idea of an MIT-licensed app dedicated to home-based business owners. Yesterday I put together an overview of all the different integrations my ideal platform would need. I have little to no experience with PHP development, but rather more with HTML and CSS for now. I was wondering if any devs here, frontend or backend, would be interested in joining the development of an All-In-One Digital Marketing Suite SaaS app on a voluntary basis. The software will be licensed under MIT. More info can be found on my self-hosted Gitea server. I just installed the server and will upload the first master branch tomorrow morning, which already has a working Laravel 11 Jetstream Livewire Filament stack. Here's the repo address; let me know if you wish to join such a project: https://git.onlinemarketingsociety.eu/toms/TheOnlineMoneySystem/src/branch/main/README.md
Best John
| jsfreu |
1,896,334 | Are you burning your money? | Recently I've become very conscious of my spending. Partly because more and more services... | 0 | 2024-06-21T18:33:41 | https://dev.to/sein_digital/are-you-burning-your-money-2m6f | productivity, discuss, community, startup | Recently I've become very conscious of my spending. Partly because more and more services are adopting a subscription-based model, and partly because I've become increasingly aware of how hard industry giants try to milk their customers for as much as possible. I've grown quite resentful of those practices, and the recent news about Adobe's ToS and anti-consumer practices only pushed that resentment further.
On top of that, I tend to forget about subscriptions I signed up for as trials and never cancelled. Sometimes I just want to see which recurring payments are automatically drawn from my account, and where I'd like to cut costs.
Because of that, I initially created a script that looks through my financial report and recognizes those payments automatically.
You can find the source code here: [https://github.com/sudo-sein/subscription-finder](https://github.com/sudo-sein/subscription-finder)
In the end, though, I thought I would go a bit further and create a service that generates such reports via a web app. So, if you want, check it out!
- [Subscription Finder](https://subscriptionfinder.xyz/)
If the application gains enough traction and interest, I might extend its functionality.
| sein_digital |
1,891,801 | What is JSON (JavaScript Object Notation) and how do we use it? | It's quite common for folks just starting out in programming to feel a bit lost among so many... | 0 | 2024-06-21T18:33:39 | https://dev.to/henriqueleme/o-que-e-json-javasript-object-notation-e-como-usamos--2b1d | beginners, braziliandevs, json, programming | It's quite common for folks just starting out in programming to feel a bit lost among so many **acronyms** in the first few weeks. Today, we'll go over something that is very common in the day-to-day of our profession and that is really easy to get the hang of, trust me!! In this article, we'll explain what the acronym JSON stands for, how it can be used, what its structure looks like, and much more! Let's go!
## Table of Contents
- [What is JSON, anyway?](#what-is-json-anyway)
- [Didn't a way to transport data like this already exist?](#didnt-a-way-to-transport-data-like-this-already-exist)
- [What is the difference between JSON and XML?](#what-is-the-difference-between-json-and-xml)
- [Is XML still worth using?](#is-xml-still-worth-using)
- [Advantages of JSON](#advantages-of-json)
  * [JSON Syntax](#json-syntax)
  * [JSON Data Types](#json-data-types)
- [A practical example with Cars](#a-practical-example-with-cars)
  * [Working with JSON in JavaScript](#working-with-json-in-javascript)
  * [Working with JSON in Python](#working-with-json-in-python)
  * [Working with JSON in PHP](#working-with-json-in-php)
  * [Working with JSON in Java](#working-with-json-in-java)
## What is JSON, anyway?
**JSON**, or "JavaScript Object Notation", is a basic format for transporting data, created in the early 2000s. But reading the name, you might ask yourself:
> "If it has JavaScript in the name, then I can only use it with JavaScript, right?" - You, 2024
Not exactly. Because it is a lightweight and simple format, it is easy to read for other languages and, most importantly, for people.
> "Well, if it's a lightweight and simple way to transport data, then I'll use it as a database." - You, 2024
I wouldn't recommend it...
Why? Well, although JSON is great for transporting data thanks to its simplicity and readability, using it as a database may not be the best idea. JSON files are not designed to handle large amounts of data efficiently. They lack the indexing, querying, and transaction features of SQL databases (such as MySQL and PostgreSQL) or NoSQL databases (such as MongoDB and ScyllaDB). Using JSON as a database can lead to performance problems, data integrity issues, and challenges in managing concurrent access to the data.
## Didn't a way to transport data like this already exist?
Of course it did. The main language used was XML (Extensible Markup Language), designed to be a flexible and customizable markup language. Although powerful and highly extensible, it can be quite verbose, which hurts its readability. Every piece of data in XML is surrounded by tags, which can increase file size and complexity, especially in larger documents or data sets.
## What is the difference between JSON and XML?
JSON and XML are both data serialization formats. JSON is more concise, which makes it easier to read and faster to parse, and it maps well onto the data structures of modern programming languages. XML, on the other hand, is more verbose, with explicit opening and closing tags, and supports a more complex hierarchical structure suited to applications that require detailed metadata. While JSON is preferred for Web APIs due to its efficiency, XML is preferred in environments that require extensive document markup, such as enterprise applications.
Take a look at this XML example for comparison:
```xml
<livraria>
<livro>
<titulo>Aprendendo XML</titulo>
<autor>Erik T. Ray</autor>
<preco>29.99</preco>
</livro>
<livro>
<titulo>JSON para Iniciantes</titulo>
<autor>iCode Academy</autor>
<preco>39.95</preco>
</livro>
</livraria>
```
And here is the same example, now in JSON format:
```json
{
"livraria": {
"livros": [
{
"titulo": "Aprendendo XML",
"autor": "Erik T. Ray",
"preco": 29.99
},
{
"titulo": "JSON para Iniciantes",
"autor": "iCode Academy",
"preco": 39.95
}
]
}
}
```
## Is XML still worth using?
Well, it depends on your goal, but in my opinion, it's always worth taking a look at something you don't know, even if you don't plan to use it, just to get an idea of what it is and how it's used. You might run into a problem that XML can solve one day, who knows.
## Advantages of JSON
So, as you've already seen, the main reason to use JSON is its readability, but let's list its main advantages:
- **Easy to Read**: JSON has a clear, straightforward structure.
- **Easy to Parse**: Its simple syntax makes it easy to convert.
- **Compact**: JSON tends to be lighter, saving space and bandwidth.
- **Universal**: Widely supported by many programming languages and platforms, and used by big companies such as Google and Twitter.
### JSON Syntax
JSON syntax is straightforward and has no big secrets, following a few basic rules:
1. **Data in name/value pairs**: Each piece of data in JSON is represented as a key (or name) and value pair, separated by a colon.
2. **Data separated by commas**: Multiple key-value pairs inside an object are separated by commas.
3. **Objects are wrapped in curly braces**: An object in JSON is surrounded by curly braces `{}`.
4. **Arrays are wrapped in square brackets**: An array in JSON is surrounded by square brackets `[]`.
But it's easier to see than to read, so here are some examples for you, along with the definition of each type:
### JSON Data Types
JSON supports the following data types:
- **String**: A sequence of characters, wrapped in double quotes.
`"nome": "João Silva"`
- **Number**: Numeric values, either integers or floating point.
`"idade": 30`
- **Object**: A collection of key-value pairs, wrapped in curly braces.
`"endereço": { "rua": "Rua Principal", "cidade": "Cidade Exemplo" }`
- **Array**: An ordered list of values, wrapped in square brackets.
`"cursos": ["Matemática", "Ciências", "História"]`
- **Boolean**: True or false values.
`"éEstudante": false`
- **Null**: Represents a null value.
`"nomeDoMeio": null`
And no, you cannot write comments in a JSON file :D
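To see all of these types side by side, here is a minimal sketch in JavaScript (the field names simply mirror the snippets above) that parses a single JSON string containing every type:

```javascript
// A small sketch: one JSON string exercising every JSON data type.
const json = `{
  "nome": "João Silva",
  "idade": 30,
  "endereço": { "rua": "Rua Principal", "cidade": "Cidade Exemplo" },
  "cursos": ["Matemática", "Ciências", "História"],
  "éEstudante": false,
  "nomeDoMeio": null
}`;

const pessoa = JSON.parse(json);
console.log(typeof pessoa.nome);           // "string"
console.log(typeof pessoa.idade);          // "number"
console.log(typeof pessoa["endereço"]);    // "object"
console.log(Array.isArray(pessoa.cursos)); // true
console.log(pessoa["éEstudante"]);         // false
console.log(pessoa.nomeDoMeio);            // null
```

Note that `typeof null` is also `"object"` in JavaScript, a well-known quirk of the language rather than of JSON itself.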
## A practical example with Cars
Let's say you want to keep records of cars and their details. Here is an example of how those records could be organized in JSON:
```json
{
"carros": [
{
"marca": "Toyota",
"modelo": "Corolla",
"ano": 2020,
"características": {
"cor": "Azul",
"transmissão": "Automática"
}
},
{
"marca": "Toyota",
"modelo": "Corolla",
"ano": 2021,
"características": {
"cor": "Vermelha",
"transmissão": "Automática"
}
}
]
}
```
If you want to add more cars, just add more objects to the cars array in the JSON structure.
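As a sketch of what that looks like in JavaScript (using an in-memory string instead of a file, to keep it self-contained), adding a car is just a parse, a push, and a stringify:

```javascript
// Minimal sketch: parse the JSON, append a new car, serialize it back.
const json = `{
  "carros": [
    {
      "marca": "Toyota",
      "modelo": "Corolla",
      "ano": 2020,
      "características": { "cor": "Azul", "transmissão": "Automática" }
    }
  ]
}`;

const data = JSON.parse(json);

// Add another car object to the "carros" array.
data.carros.push({
  marca: "Toyota",
  modelo: "Corolla",
  ano: 2021,
  características: { cor: "Vermelha", transmissão: "Automática" }
});

// Serialize back to a JSON string (2-space indentation for readability).
const updated = JSON.stringify(data, null, 2);
console.log(data.carros.length); // 2
```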
And this can easily be done by parsing the JSON in a language of your choice and then manipulating it however you like. Here are some examples using the JSON shown earlier to give you a better idea of how to read and parse a JSON file:
### Working with JSON in JavaScript
```javascript
const fs = require('fs').promises
const jsonPath = 'C:/docs/example/example.json'
const readJsonFile = async () => {
  // Read the JSON content, making sure it is read as a string
  const jsonContent = await fs.readFile(jsonPath, 'utf-8')
  // Parse the JSON into a JavaScript object
  const data = JSON.parse(jsonContent)
  console.log(data.carros[0])
  // Output: { marca: "Toyota", modelo: "Corolla", ano: 2020, características: { cor: "Azul", transmissão: "Automática" } }
  console.log(data.carros[1])
  // Output: { marca: "Toyota", modelo: "Corolla", ano: 2021, características: { cor: "Vermelha", transmissão: "Automática" } }
}
readJsonFile()
```
### Working with JSON in Python
```python
import json

# Read the JSON file
with open('C:/docs/example/example.json', 'r', encoding='utf-8') as jsonFile:
    # Parse the JSON content
    jsonContent = json.load(jsonFile)
    print(jsonContent['carros'][0])
    # Output: {'marca': 'Toyota', 'modelo': 'Corolla', 'ano': 2020, 'características': {'cor': 'Azul', 'transmissão': 'Automática'}}
    print(jsonContent['carros'][1])
    # Output: {'marca': 'Toyota', 'modelo': 'Corolla', 'ano': 2021, 'características': {'cor': 'Vermelha', 'transmissão': 'Automática'}}
```
### Working with JSON in PHP
```php
<?php
$jsonPath = 'C:/docs/example/example.json';
// Read the JSON content
$contents = file_get_contents($jsonPath);
// Decode the JSON content into a PHP object
$jsonContent = json_decode($contents);
print_r($jsonContent->carros[1]);
// Output: stdClass Object
// (
//     [marca] => Toyota
//     [modelo] => Corolla
//     [ano] => 2021
//     [características] => stdClass Object
//         (
//             [cor] => Vermelha
//             [transmissão] => Automática
//         )
// )
?>
```
### Working with JSON in Java
```java
package org.example;

import org.json.JSONArray;
import org.json.JSONObject;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Main {
    public static void main(String[] args) throws IOException {
        String jsonFilePath = "C:/docs/example/example.json";
        // Read the file content and convert it into a string
        String jsonContent = Files.readString(Paths.get(jsonFilePath), StandardCharsets.UTF_8);
        // Parse the string content into a JSON Object
        JSONObject jsonExample = new JSONObject(jsonContent);
        JSONArray carros = jsonExample.getJSONArray("carros");
        System.out.println(carros.get(0));
        // Output: {"características":{"transmissão":"Automática","cor":"Azul"},"ano":2020,"modelo":"Corolla","marca":"Toyota"}
        System.out.println(carros.get(1));
        // Output: {"características":{"transmissão":"Automática","cor":"Vermelha"},"ano":2021,"modelo":"Corolla","marca":"Toyota"}
    }
}
```
PS: The Java example uses a [json library](https://mvnrepository.com/artifact/org.json/json). If you are going to try it, make sure to include it in your dependencies.
## Conclusion
Understanding JSON can seem complicated at first, but it is actually very simple once you get the hang of it! We went over what JSON is, how it is used, and why it is so useful.
From understanding its syntax to seeing it in action in different programming languages, you are now ready to take your first steps and start using JSON in your projects.
If you still have any questions, feel free to ask me directly. I hope you enjoyed the article - don't forget to like it and share it with that friend who is still struggling to understand JSON.
Feedback on the article is always welcome so I can improve the next ones. Thanks for reading, stay healthy, and drink water! — henriqueleme