| id (int64, 5–1.93M) | title (string, 0–128 chars) | description (string, 0–25.5k chars) | collection_id (int64, 0–28.1k) | published_timestamp (timestamp[s]) | canonical_url (string, 14–581 chars) | tag_list (string, 0–120 chars) | body_markdown (string, 0–716k chars) | user_username (string, 2–30 chars) |
|---|---|---|---|---|---|---|---|---|
1,879,627 | 🚀 Completed my C++ syllabus and course! 🎉 Excited to share this milestone with everyone. #CodeComplete #CPP #AchievementUnlocked | A post by Rishabh | 0 | 2024-06-06T20:17:55 | https://dev.to/rishabh_devios/completed-my-c-syllabus-and-course-excited-to-share-this-milestone-with-everyone-codecomplete-cpp-achievementunlocked-4m7c | | | rishabh_devios |
1,879,626 | Buy Verified Paxful Account | https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are... | 0 | 2024-06-06T20:09:20 | https://dev.to/betam34174/buy-verified-paxful-account-9k9 | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-paxful-account/\n\n\nBuy Verified Paxful Account\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, Buy verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to Buy Verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with. Buy Verified Paxful Account.\n\nBuy US verified paxful account from the best place dmhelpshop\nWhy we declared this website as the best place to buy US verified paxful account? Because, our company is established for providing the all account services in the USA (our main target) and even in the whole world. With this in mind we create paxful account and customize our accounts as professional with the real documents. Buy Verified Paxful Account.\n\nIf you want to buy US verified paxful account you should have to contact fast with us. 
Because our accounts are-\n\nEmail verified\nPhone number verified\nSelfie and KYC verified\nSSN (social security no.) verified\nTax ID and passport verified\nSometimes driving license verified\nMasterCard attached and verified\nUsed only genuine and real documents\n100% access of the account\nAll documents provided for customer security\nWhat is Verified Paxful Account?\nIn today’s expanding landscape of online transactions, ensuring security and reliability has become paramount. Given this context, Paxful has quickly risen as a prominent peer-to-peer Bitcoin marketplace, catering to individuals and businesses seeking trusted platforms for cryptocurrency trading.\n\nIn light of the prevalent digital scams and frauds, it is only natural for people to exercise caution when partaking in online transactions. As a result, the concept of a verified account has gained immense significance, serving as a critical feature for numerous online platforms. Paxful recognizes this need and provides a safe haven for users, streamlining their cryptocurrency buying and selling experience.\n\nFor individuals and businesses alike, Buy verified Paxful account emerges as an appealing choice, offering a secure and reliable environment in the ever-expanding world of digital transactions. Buy Verified Paxful Account.\n\nVerified Paxful Accounts are essential for establishing credibility and trust among users who want to transact securely on the platform. They serve as evidence that a user is a reliable seller or buyer, verifying their legitimacy.\n\nBut what constitutes a verified account, and how can one obtain this status on Paxful? In this exploration of verified Paxful accounts, we will unravel the significance they hold, why they are crucial, and shed light on the process behind their activation, providing a comprehensive understanding of how they function. 
Buy verified Paxful account.\n\n \n\nWhy should to Buy Verified Paxful Account?\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence. Buy Verified Paxful Account.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.\n\n \n\nWhat is a Paxful Account\nPaxful and various other platforms consistently release updates that not only address security vulnerabilities but also enhance usability by introducing new features. Buy Verified Paxful Account.\n\nIn line with this, our old accounts have recently undergone upgrades, ensuring that if you purchase an old buy Verified Paxful account from dmhelpshop.com, you will gain access to an account with an impressive history and advanced features. This ensures a seamless and enhanced experience for all users, making it a worthwhile option for everyone.\n\n \n\nIs it safe to buy Paxful Verified Accounts?\nBuying on Paxful is a secure choice for everyone. However, the level of trust amplifies when purchasing from Paxful verified accounts. 
These accounts belong to sellers who have undergone rigorous scrutiny by Paxful. Buy verified Paxful account, you are automatically designated as a verified account. Hence, purchasing from a Paxful verified account ensures a high level of credibility and utmost reliability. Buy Verified Paxful Account.\n\nPAXFUL, a widely known peer-to-peer cryptocurrency trading platform, has gained significant popularity as a go-to website for purchasing Bitcoin and other cryptocurrencies. It is important to note, however, that while Paxful may not be the most secure option available, its reputation is considerably less problematic compared to many other marketplaces. Buy Verified Paxful Account.\n\nThis brings us to the question: is it safe to purchase Paxful Verified Accounts? Top Paxful reviews offer mixed opinions, suggesting that caution should be exercised. Therefore, users are advised to conduct thorough research and consider all aspects before proceeding with any transactions on Paxful.\n\n \n\nHow Do I Get 100% Real Verified Paxful Accoun?\nPaxful, a renowned peer-to-peer cryptocurrency marketplace, offers users the opportunity to conveniently buy and sell a wide range of cryptocurrencies. Given its growing popularity, both individuals and businesses are seeking to establish verified accounts on this platform.\n\nHowever, the process of creating a verified Paxful account can be intimidating, particularly considering the escalating prevalence of online scams and fraudulent practices. This verification procedure necessitates users to furnish personal information and vital documents, posing potential risks if not conducted meticulously.\n\nIn this comprehensive guide, we will delve into the necessary steps to create a legitimate and verified Paxful account. 
Our discussion will revolve around the verification process and provide valuable tips to safely navigate through it.\n\nMoreover, we will emphasize the utmost importance of maintaining the security of personal information when creating a verified account. Furthermore, we will shed light on common pitfalls to steer clear of, such as using counterfeit documents or attempting to bypass the verification process.\n\nWhether you are new to Paxful or an experienced user, this engaging paragraph aims to equip everyone with the knowledge they need to establish a secure and authentic presence on the platform.\n\nBenefits Of Verified Paxful Accounts\nVerified Paxful accounts offer numerous advantages compared to regular Paxful accounts. One notable advantage is that verified accounts contribute to building trust within the community.\n\nVerification, although a rigorous process, is essential for peer-to-peer transactions. This is why all Paxful accounts undergo verification after registration. When customers within the community possess confidence and trust, they can conveniently and securely exchange cash for Bitcoin or Ethereum instantly. Buy Verified Paxful Account.\n\nPaxful accounts, trusted and verified by sellers globally, serve as a testament to their unwavering commitment towards their business or passion, ensuring exceptional customer service at all times. Headquartered in Africa, Paxful holds the distinction of being the world’s pioneering peer-to-peer bitcoin marketplace. Spearheaded by its founder, Ray Youssef, Paxful continues to lead the way in revolutionizing the digital exchange landscape.\n\nPaxful has emerged as a favored platform for digital currency trading, catering to a diverse audience. One of Paxful’s key features is its direct peer-to-peer trading system, eliminating the need for intermediaries or cryptocurrency exchanges. 
By leveraging Paxful’s escrow system, users can trade securely and confidently.\n\nWhat sets Paxful apart is its commitment to identity verification, ensuring a trustworthy environment for buyers and sellers alike. With these user-centric qualities, Paxful has successfully established itself as a leading platform for hassle-free digital currency transactions, appealing to a wide range of individuals seeking a reliable and convenient trading experience. Buy Verified Paxful Account.\n\n \n\nHow paxful ensure risk-free transaction and trading?\nEngage in safe online financial activities by prioritizing verified accounts to reduce the risk of fraud. Platforms like Paxfu implement stringent identity and address verification measures to protect users from scammers and ensure credibility.\n\nWith verified accounts, users can trade with confidence, knowing they are interacting with legitimate individuals or entities. By fostering trust through verified accounts, Paxful strengthens the integrity of its ecosystem, making it a secure space for financial transactions for all users. Buy Verified Paxful Account.\n\nExperience seamless transactions by obtaining a verified Paxful account. Verification signals a user’s dedication to the platform’s guidelines, leading to the prestigious badge of trust. This trust not only expedites trades but also reduces transaction scrutiny. Additionally, verified users unlock exclusive features enhancing efficiency on Paxful. Elevate your trading experience with Verified Paxful Accounts today.\n\nIn the ever-changing realm of online trading and transactions, selecting a platform with minimal fees is paramount for optimizing returns. This choice not only enhances your financial capabilities but also facilitates more frequent trading while safeguarding gains. Buy Verified Paxful Account.\n\nExamining the details of fee configurations reveals Paxful as a frontrunner in cost-effectiveness. 
Acquire a verified level-3 USA Paxful account from usasmmonline.com for a secure transaction experience. Invest in verified Paxful accounts to take advantage of a leading platform in the online trading landscape.\n\n \n\nHow Old Paxful ensures a lot of Advantages?\n\nExplore the boundless opportunities that Verified Paxful accounts present for businesses looking to venture into the digital currency realm, as companies globally witness heightened profits and expansion. These success stories underline the myriad advantages of Paxful’s user-friendly interface, minimal fees, and robust trading tools, demonstrating its relevance across various sectors.\n\nBusinesses benefit from efficient transaction processing and cost-effective solutions, making Paxful a significant player in facilitating financial operations. Acquire a USA Paxful account effortlessly at a competitive rate from usasmmonline.com and unlock access to a world of possibilities. Buy Verified Paxful Account.\n\nExperience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.\n\nPaxful’s verified accounts not only offer reliability within the trading community but also serve as a testament to the platform’s ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today. Buy Verified Paxful Account.\n\n \n\nWhy paxful keep the security measures at the top priority?\nIn today’s digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. 
It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.\n\nSafeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all. Buy Verified Paxful Account.\n\nConclusion\nInvesting in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin. Buy Verified Paxful Account.\n\nThe initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.\n\nIn conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller’s history and reviews before making any transactions.\n\nMoreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions. Buy Verified Paxful Account.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n " | betam34174 |
1,879,625 | Core Architectural components of Azure | Introduction Azure is a cloud computing platform powered by Microsoft that helps to build business... | 0 | 2024-06-06T20:08:45 | https://dev.to/ayospecie/core-architectural-components-of-azure-1bi5 | | *Introduction*
*Azure* is a cloud computing platform powered by Microsoft that helps to build business solutions to meet organizational needs. Azure enables businesses and developers to build, deploy, and manage applications and services through Microsoft’s global network of data centers.
It offers a wide range of cloud services, including computing power, storage, databases, machine learning, networking, analytics, and more.
Organizations can leverage Azure services to innovate, enhance efficiency, and scale according to their budgets.
**Core Architectures**
The architecture of Microsoft Azure includes components such as compute, storage, networking, databases, identity and access management, security, and monitoring.
The architecture of Microsoft Azure comprises data centers housing physical servers, virtualized hardware, and a complex network infrastructure, facilitating the deployment and operation of cloud-based applications and services.
Azure architecture adopts a distributed, scalable, and elastic approach, allowing for on-demand resource allocation and dynamic scaling, unlike traditional IT infrastructures that often rely on fixed hardware configurations.
With Microsoft Azure, hardware instructions are emulated by mapping them to software instructions; in this way, virtualized hardware can perform the same functions as “real” hardware through software. The architecture of Microsoft Azure runs on a huge collection of servers and networking hardware, which hosts a complex set of applications that control the operation and configuration of the software on those servers.
| ayospecie |
1,879,624 | FastAPI Beyond CRUD Part 7 - Create a User Authentication Model (Database Migrations With Alembic) | In this video, we create the user authentication model using SQLModel but we approach this by using... | 0 | 2024-06-06T20:07:21 | https://dev.to/jod35/fastapi-beyond-crud-part-7-create-a-user-authentication-model-database-migrations-with-alembic-2l4d | fastapi, api, python, programming | In this video, we create the user authentication model using SQLModel but we approach this by using Alembic. This video demonstrates the steps to set up database migrations with Alembic when working with Async SQLModel.
{%youtube jPTJ0i1JM3I%} | jod35 |
1,878,844 | > Dynamic SVG in Vue with Vite | Let me show you a super useful implementation of Vite's Glob Imports: creating a wrapper component... | 0 | 2024-06-06T20:06:31 | https://dev.to/aronsantha/-dynamic-svg-in-vue-with-vite-40jl | codecapsule, vite, vue | Let me show you a super useful implementation of Vite's [Glob Imports](https://vitejs.dev/guide/features#glob-import): creating a wrapper component for displaying SVGs.
---
This Vue 3 component below imports all files that
- are located in `/src/assets/svg` folder, and
- have `.svg` extension
```vue
// BaseSvg.vue
<template>
<div v-html="svg" v-if="svg"></div>
</template>
<script lang="ts" setup>
import { computed } from "vue";
const props = defineProps<{
name: string;
}>();
const modules = import.meta.glob("/src/assets/svg/*.svg", {
query: "?raw",
import: "default",
eager: true,
});
const svg = computed(() => {
return modules["/src/assets/svg/" + (props.name ?? "") + ".svg"] ?? null;
});
</script>
```
Here's what we need to do when we add a new SVG to the project:
1. Drop the file in the /svg folder
2. Import the BaseSvg component and use it in the template
3. Pass the filename we want to display, as the `name` prop.
```
// script
import BaseSvg from "@/components/BaseSvg.vue";
// template
<BaseSvg name="heart-icon" class="w-12 text-slate-500" />
```
---
💡**TIP**: Set each svg file's `width` and `height` to "100%", and `fill` (or `stroke`) to "currentColor". This will help with adding styles to the component. Eg.:
```
// heart-icon.svg
<svg width="100%" height="100%" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2 9.1371C2 14 6.01943 16.5914 8.96173 18.9109C10 19.7294 11 20.5 12 20.5C13 20.5 14 19.7294 15.0383 18.9109C17.9806 16.5914 22 14 22 9.1371C22 4.27416 16.4998 0.825464 12 5.50063C7.50016 0.825464 2 4.27416 2 9.1371Z" fill="currentColor"/>
</svg>
```
---
So, there you have it. This wrapper component comes really handy in projects that use many custom icons. It helps DRYing up the code and keeping maintenance at a minimum.
Good luck, and have fun using it!
| aronsantha |
1,879,623 | Three Levels of Scraping Data: From Basic to Advanced to Pro | Introduction When I was working on Tarwiiga AdGen at Tarwiiga, I needed to finetune an LLM... | 0 | 2024-06-06T20:05:36 | https://dev.to/maelghrib/three-levels-of-scrapping-data-from-basic-to-advanced-to-pro-2i6p | python, machinelearning | ## Introduction
When I was working on Tarwiiga AdGen at [Tarwiiga](https://tarwiiga.com), I needed to fine-tune an LLM for Google Ads generation, but no suitable dataset existed, so I had to create one from scratch. The tool took an input of two or three words and produced JSON output, so we needed a dataset containing many different inputs paired with JSON outputs. Getting the inputs was the hard part: I tried making the LLM suggest them, but its suggestions contained lots of duplicates, so I decided to scrape online data instead. I started with regular programming, then improved the approach with AI, and finally found another killer method that let me scrape millions of data points in no time. Here are those three levels of scraping data, for anyone who may be interested.

## Basic Level: Scraping with regular programming
At the basic level, I used regular Python scripts with Selenium and BeautifulSoup: Selenium simulates user behavior in the browser, while BeautifulSoup (with Requests) parses the HTML and extracts text. This worked for basic cases where the data was public or sat on the first page, but as soon as a site required login or had an endless feed to scroll through, it became a nightmare, and I couldn't manage to fix all of the bugs.
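As a minimal sketch of this basic level (the HTML snippet, class names, and fields here are illustrative, not from the actual project), a Requests + BeautifulSoup extraction looks like this:

```python
from bs4 import BeautifulSoup

# Illustrative HTML; in practice this would come from requests.get(url).text,
# or from Selenium's driver.page_source after simulating clicks and scrolling.
html = """
<div class="post">
  <h2 class="title">Summer sale on running shoes</h2>
  <p class="snippet">Lightweight trainers for daily runs.</p>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
record = {
    "title": soup.select_one(".post .title").get_text(strip=True),
    "snippet": soup.select_one(".post .snippet").get_text(strip=True),
}
print(record)
```

This works fine for static, public pages; it is exactly the login walls and endless feeds mentioned above that make this level painful.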
## Advanced Level: Scraping with AI
At the advanced level, I incorporated a new way of scraping, although it didn't solve the problems of the basic level. Instead of manually writing Python code that parses HTML and extracts data into JSON, I shifted that task to an LLM: give it the HTML and tell it to extract the content you want as JSON. Looping through a list of HTML documents gives you a list of JSON objects that you can save to a CSV file or a database. This approach raised new problems of its own, though: the LLM's limited token length, and the fact that it sometimes won't give accurate results.
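The idea can be sketched as follows. The prompt wording is my own, and `call_llm` is a placeholder for whichever LLM client you use; the demo wires in a fake one so the sketch runs without an API key:

```python
import json

def build_extraction_prompt(html: str, fields: list[str]) -> str:
    # Ask the model to reply with nothing but a JSON object.
    return (
        "Extract the following fields from the HTML below and reply with a "
        f"single JSON object containing exactly these keys: {', '.join(fields)}.\n\n"
        f"HTML:\n{html}"
    )

def extract_with_llm(html: str, fields: list[str], call_llm) -> dict:
    # call_llm: any function that takes a prompt string and returns the reply text.
    reply = call_llm(build_extraction_prompt(html, fields))
    return json.loads(reply)  # raises if the model's reply isn't valid JSON

# Fake LLM for demonstration purposes.
fake_llm = lambda prompt: '{"title": "Summer sale", "price": "$49"}'
data = extract_with_llm("<h1>Summer sale</h1><b>$49</b>", ["title", "price"], fake_llm)
print(data)
```

Looping this over a list of HTML documents yields the list of JSON objects to save; the token-length limit shows up as soon as a page's HTML no longer fits in the prompt.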
## Pro Level: Scraping with the request URL
At the pro level, you don't need any of the above. You can simply take the request URL, especially for an endless feed, and get the JSON response directly. Open the browser's inspector, go to the Network tab, refresh the page, then scroll and watch the requests until you find the one that fetches the data. Take that URL and make a request to it yourself, and it returns the response as JSON. This way was very powerful for me: I managed to scrape millions of data points in no time, and I faced none of the problems of handling endless feeds or logins, nor did I need to depend on AI and its context-length limit.
Here is a detailed guide on how to get data from the Twitter (X) feed using the request URL.
First, open [https://x.com](https://x.com), then inspect, go to the Network tab, refresh, and start scrolling until you find the HomeTimeline request. Click on it, and it will show you all the details of the request and response.

Then go to the Preview tab to see a preview of the response

And here is the payload

Take the request URL and the payload and put them in Postman

Put any headers that come with the request and then click send and you will get the response

You can click on the code icon in Postman to get the code that makes this request in your language. Here is the code in Python.

And that is it! In this way, you could get millions of data points as I did on other specific websites for our use cases.
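The last step, turning the JSON response into dataset rows, is just walking the nested structure. The response shape below is purely illustrative (the real HomeTimeline payload is much more deeply nested and changes over time):

```python
# Illustrative response; not Twitter's actual schema.
response = {
    "data": {
        "home": {
            "entries": [
                {"content": {"user": "alice", "text": "First post"}},
                {"content": {"user": "bob", "text": "Second post"}},
            ]
        }
    }
}

rows = [
    {"user": entry["content"]["user"], "text": entry["content"]["text"]}
    for entry in response["data"]["home"]["entries"]
]
print(rows)  # ready to write to CSV or a database
```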
## Conclusion
I hope this article was helpful to you, please don't hesitate to ask me any questions and you can reach me on [Twitter (X)](https://twitter.com/maelghrib) and [Linkedin](https://www.linkedin.com/in/maelghrib) | maelghrib |
1,879,621 | Shadcn-ui codebase analysis: examples-nav.tsx explained | I wanted to find out how the below example nav is developed on ui.shadcn.com, so I looked at its... | 0 | 2024-06-06T20:03:47 | https://dev.to/ramunarasinga/shadcn-ui-codebase-analysis-examples-navtsx-explained-14j7 | javascript, nextjs, shadcnui, opensource | I wanted to find out how the below example nav is developed on [ui.shadcn.com](http://ui.shadcn.com), so I looked at its [source code](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/%28app%29/layout.tsx). Because shadcn-ui is built using app router, the files I was interested in were [page.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/%28app%29/page.tsx) and [examples-nav.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/examples-nav.tsx#L55)

In this article, we will cover the following:
1. Where is the code related to the examples-nav?
2. Examples Nav code snippet.
> Want to learn how to build shadcn-ui/ui from scratch? Check out [build-from-scratch](https://github.com/Ramu-Narasinga/build-from-scratch) and give it a star if you like it. Solve challenges to build shadcn-ui/ui from scratch. Stuck or need help? A [solution is available](https://tthroo.com/build-from-scratch).
### Where is the code related to the examples-nav?
ExamplesNav is used in [page.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/%28app%29/page.tsx#L43) as shown below

[examples-nav.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/examples-nav.tsx#L55) has the code below.

examples is an array containing the code below:
```js
const examples = [
{
name: "Mail",
href: "/examples/mail",
code: "https://github.com/shadcn/ui/tree/main/apps/www/app/(app)/examples/mail",
},
{
name: "Dashboard",
href: "/examples/dashboard",
code: "https://github.com/shadcn/ui/tree/main/apps/www/app/(app)/examples/dashboard",
},
{
name: "Cards",
href: "/examples/cards",
code: "https://github.com/shadcn/ui/tree/main/apps/www/app/(app)/examples/cards",
},
{
name: "Tasks",
href: "/examples/tasks",
code: "https://github.com/shadcn/ui/tree/main/apps/www/app/(app)/examples/tasks",
},
{
name: "Playground",
href: "/examples/playground",
code: "https://github.com/shadcn/ui/tree/main/apps/www/app/(app)/examples/playground",
},
{
name: "Forms",
href: "/examples/forms",
code: "https://github.com/shadcn/ui/tree/main/apps/www/app/(app)/examples/forms",
},
{
name: "Music",
href: "/examples/music",
code: "https://github.com/shadcn/ui/tree/main/apps/www/app/(app)/examples/music",
},
{
name: "Authentication",
href: "/examples/authentication",
code: "https://github.com/shadcn/ui/tree/main/apps/www/app/(app)/examples/authentication",
},
]
```
### Conclusion:
The ExamplesNav component renders the examples nav element on [ui.shadcn.com](http://ui.shadcn.com). It uses the `examples` array to render the nav tab elements.
### About me:
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
### References:
1. [https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/page.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/page.tsx)
2. [https://github.com/shadcn-ui/ui/blob/main/apps/www/components/examples-nav.tsx#L55](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/examples-nav.tsx#L55)
3. [https://github.com/shadcn-ui/ui/blob/main/apps/www/components/page-header.tsx#L5](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/page-header.tsx#L5) | ramunarasinga |
1,879,620 | What is Open Source Software (OSS)? | What is open source software? Open source software (OSS) refers to software that is... | 0 | 2024-06-06T20:00:31 | https://dev.to/isttiiak/what-is-open-source-software-oss-1fpg | linux, opensource | ## What is open source software?
Open source software (OSS) refers to software that is distributed with its source code made available to the public. This allows anyone to view, modify, and distribute the software.
Sharing of software has gone on since the beginnings of the computer age. In fact, not sharing software was the exception, and not the rule. The concepts of open source software (OSS) long predate the use of the term.
This software is open to all; importantly, it is **free**. In English, the word **free** has two meanings:
* Free as in free speech, freedom to distribute
* Free as in no cost, or as is often said, like "Free beer"
Use of OSS means source code is made available with a license which provides rights to examine, modify and redistribute, without restriction on the user's identity or purpose.
## Why use open source software?
The purpose of OSS is collaborative development, which enables software projects to build better software. When progress is shared, not everyone has to solve the same problems and make the same mistakes. Thus, progress can be made much faster and costs can be reduced.
Having more eyeballs viewing code and more groups testing it leads to stronger and more secure code, as well. It is often hard for competitors to get used to the idea of sharing, and grasping that the benefits can be greater than the costs. But experience has proved this to be true over and over again.
Competitors can share the internal plumbing that everyone needs while competing on user-facing interfaces, so that end users still see plenty of product differentiation and have varying experiences.
## Successful OSS Projects:
* **Linux Kernel:** It is the basis of almost all of the world’s computing infrastructure, from the most powerful supercomputers to the largest number of mobile devices, based on Android, built on a Linux kernel.
* **Git:** Git is a distributed version control system that is used worldwide for an astounding number of collaborative products. It is also the basis of GitHub, which hosts more than one hundred million open source project repositories; GitLab, another easily available host, handles quite a few projects as well.
* **Apache:** Work on the Apache HTTP Server began in 1995. Today it is the most widely used web server with roughly one third of the market share (another open source project, nginx, has almost as many users).
* **Programming Languages:** Many computing languages are developed using open source methods. Examples include Python, Perl, Ruby, and Rust.
* **GNU:** The GNU Project has provided many essential ingredients for virtually all modern computer technologies, under various versions of the GPL (General Public License). Some of the most prominent products emanating from the GNU Project are GCC, GDB, GLIBC, BASH, and COREUTILS.
| isttiiak |
1,879,619 | "npm run build" is not working for nodejs application. | every time I am hitting "npm run build" it showing like starting building but not starting. this is... | 0 | 2024-06-06T19:55:46 | https://dev.to/developerdruva/npm-run-build-is-not-working-for-nodejs-application-37k1 | help | Every time I run "npm run build", it shows that the build is starting, but it never actually starts. This happens multiple times, and then it stops. | developerdruva |
1,879,618 | Mobile Game Testing: Essential Test Cases to Ship It Smooth | The competition in the mobile gaming industry is fierce, and developing a game that will go viral is... | 0 | 2024-06-06T19:55:23 | https://dev.to/konst_/mobile-game-testing-essential-test-cases-to-ship-it-smooth-802 | testing, mobile, gamedev | The competition in the mobile gaming industry is fierce, and developing a game that will go viral is a challenge in itself. Should you even bother making mobile games in 2024? Well, the market is projected to hit a whopping [$99 billion](https://www.statista.com/outlook/dmo/digital-media/video-games/mobile-games/worldwide) globally in 2024, reaching nearly $119 billion by 2027, with almost 2 billion users.
Another thing that can make it worthy of your investment is [prioritizing quality over quantity](https://www.businessofapps.com/insights/mobile-gaming-trends-for-2024/), a prominent trend in 2024. High user ratings not only prompt others to try out your game but also can influence app store algorithms, giving it more visibility.
Good quality is only possible with thorough testing. But what exactly is meant by “thorough”? Device fragmentation, frequent OS updates, different network conditions, multiplayer features, and localization – all of these require attention. In this post, I’ll share time-proven test cases to help you release well-polished mobile games that users will play on repeat.
**Functional Test Cases**
Don’t want your users to rage quit? Then do some functional testing to make sure each component of your game operates as it should:
- Verify if no issues appear during game installation, launch, shutdown, and uninstallation
- Test user registration and login process
- Look for UI bugs like misaligned elements, glitches and artifacts, unreadable text, and areas prone to misclicks
- Test your gameplay mechanics, such as controls, physics, AI behavior, and combat mechanics
- Ensure that difficulty levels are appropriate and there are no exploits
- Test everything audio-related: audio settings, sound effects and background music, and audio synchronization
- Run the game in different network conditions (Wi-Fi, 3G, 4G, 5G, no network)
- In the multiplayer mode, check the matchmaking process and lobby chat functionality
- Test in-app purchases, ensuring the purchased items are delivered and correctly reflected in the player’s inventory
- Check if users can subscribe to paid services and whether the subscription grants the advertised benefits
- Test refund processes and error handling in transactions
- Check if the game can handle unexpected errors, such as connectivity issues, device limitations, and internal errors, gracefully
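On the development side, the "graceful" handling testers look for often comes from a retry-with-backoff pattern around network calls: transient failures are retried silently before any error is surfaced to the player. A minimal sketch of the idea (the flaky fetch below is a hypothetical stand-in for a game's network request):

```javascript
// Retry a flaky async operation with exponential backoff before surfacing the error.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

async function withRetry(operation, { attempts = 3, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      // Wait 100ms, 200ms, 400ms... between attempts
      await sleep(baseDelayMs * 2 ** attempt);
    }
  }
  // All attempts failed: show a friendly error and keep progress saved
  throw lastError;
}

// Hypothetical flaky call: fails twice (e.g., network timeout), then succeeds.
let calls = 0;
const flakyFetch = async () => {
  calls++;
  if (calls < 3) throw new Error('network timeout');
  return { score: 1200 };
};

withRetry(flakyFetch).then(result => console.log(result)); // { score: 1200 }
```

A tester can provoke this path deliberately (airplane mode mid-request, throttled connection) and check that the game recovers without crashing or losing progress.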

_Example of a functional bug in [Cross The Ages: TCG](https://bugcrawl.qawerk.com/weekly-bug-crawl/bugs-found-in-cross-the-ages-tcg-for-ios/). Agreeing to Terms of Service results in an error_
**Compatibility Test Cases**
Compatibility testing may feel like a grind, but that’s what will help you address the device fragmentation challenge. You want to provide a seamless experience for as many players as possible:
- Test how the game works on the target operating systems
- Keep track of operating system updates and perform regression testing to verify compatibility with new OS versions
- Verify that your game supports gestures specific to the OS, like swipe on Android or 3D Touch on iOS
- Check whether your game drains the battery when it’s running in the background
- Test the game on multiple popular devices with varying screen sizes, resolutions, and aspect ratios
- Check your game compatibility with external game controllers, such as MFi controllers
- Test the game’s interaction with other installed applications and third-party services

_Example of a compatibility bug in [Elemental Raiders](https://bugcrawl.qawerk.com/weekly-bug-crawl/bugs-found-in-butter-royale-for-ios/): On iPhone 12, the search screen is cropped and impossible to scroll_
**Performance Test Cases**
Don’t want to lag behind your competitors? Provide lag-free gaming. Performance testing will help you achieve a visually stunning and responsive game. Here’s what needs to be done:
- Check if the install and launch times are adequate
- Monitor the frame rate during gameplay, documenting lagging or dropped frames
- Test how quickly the game drains the device’s battery
- Check if the amount of memory the game uses exceeds device limitations
- Monitor the workload on the device’s processor and graphics card and watch for overheating
- Perform initial load testing to determine the game’s baseline performance
- Identify the point where the game’s performance starts to degrade by gradually increasing the load
- Push the game to its limits with high user loads or intense gameplay scenarios to identify its breaking point
- Test if your game crashes or loses progress in case of interruptions, such as incoming calls, text messages, alarms, or switching apps
- If applicable, verify if the game functions properly offline and transitions seamlessly between online and offline modes

_Example of a performance issue in [Couple Up!](https://qawerk.com/case-studies/couple-up/): Most API requests take longer than 3 seconds to respond_
**Security Test Cases**
Hackers taking someone else’s loot can spoil the impression of your game and undermine players’ trust. Level up your game’s security by running these test cases:
- Verify that only authenticated users can access the game
- Check account lockout mechanisms and password recovery processes
- Test if authentication mechanisms like 2FA are strong enough to prevent unauthorized access
- Check what permissions the game requests and whether they are justified
- Check if the user can revoke granted permissions at any time
- Check if the game complies with data privacy regulations like GDPR, COPPA, CCPA, depending on your target audience
**Usability Test Cases**
Every mobile game developer wants players coming back. Usability testing helps assess how easy it is for players to go from noob to ninja and understand what to tweak to make it more fun and rewarding:
- Test the general usability, looking into things like clarity and brevity of instructions, content abundance and variety, and overall impression
- Check the ease and consistency of navigation, ensuring the menu structure is clear, and controls are always used in the same way
- Test the game balance, ensuring it’s challenging enough to be fun, but at the same time, gives players a fair chance of success
- Check if the game provides players with a sense of purpose and direction
- Evaluate how satisfying or frustrating the level progression is
**Localization Test Cases**
Developers who want their games to resonate with a worldwide audience localize them. Some do it better than others, and one of the reasons why is the extent of localization testing they perform. Here are must-know test cases to guarantee an immersive experience regardless of the player’s language:
- Check if all in-game text elements, including menus, buttons, tutorials, dialogues, and error messages, are translated into the target languages
- Ensure all audio content is translated as well
- Verify if translations are contextually appropriate and convey the intended meaning and humor
- Document typos, awkward line breaks, and texts overflowing their containers
- Check if special characters, accents, and diacritics are displayed correctly and do not cause crashes
- Confirm that the font used supports all required characters in the target languages
- Watch for cultural references, jokes, or idioms that might not translate well or be offensive in the target culture
- Verify that date and time formats, currency, and measurement units (meters, feet, kilometers, miles) are adapted to the target country’s standards
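When verifying these formats, the expected per-locale values can be looked up with JavaScript's built-in `Intl` APIs, which encode each locale's standard conventions; a quick sketch:

```javascript
// Quick reference for expected locale-specific formats using the built-in Intl APIs.
// Handy when checking that in-game dates and currencies match the target
// country's conventions.

const date = new Date(Date.UTC(2024, 5, 6)); // June 6, 2024

// Date formats differ: en-US uses M/D/YYYY, de-DE uses D.M.YYYY
const usDate = new Intl.DateTimeFormat('en-US', { timeZone: 'UTC' }).format(date);
const deDate = new Intl.DateTimeFormat('de-DE', { timeZone: 'UTC' }).format(date);

// Currency formats differ in symbol placement and separators
const usPrice = new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' }).format(9.99);
const dePrice = new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' }).format(9.99);

console.log(usDate);  // "6/6/2024"
console.log(deDate);  // "6.6.2024"
console.log(usPrice); // "$9.99"
console.log(dePrice); // decimal comma, trailing € symbol
```

Comparing the game's rendered strings against these reference values makes format bugs easy to spot and document.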
**Bonus Content**
For those of you who’d like to dig deeper, I created a more extensive guide, so make sure to check this [mobile game testing checklist](https://qawerk.com/blog/mobile-game-testing-detailed-qa-checklist/). It also contains real-life examples of common bugs in mobile games that we at QAwerk encounter quite often. Don’t nerf the fun! Use these test cases to identify elements that can unintentionally frustrate your players.
**Final Thoughts**
The mobile gaming market isn’t just massive; it’s hungry for quality. In 2024, gamers crave polished experiences, and those can be achieved with ongoing professional testing. Checking if the game runs on your friend’s phone won’t cut it. While this list of test cases will equip you with a solid foundation for testing, remember, there’s no one-size-fits-all approach. The key is to craft a mobile game testing strategy unique to your game’s genre, target audience, and market trends.
| konst_ |
1,879,612 | Callbacks vs Promises vs Async/Await Concept in JavaScript | JavaScript (JS) provides several ways to handle asynchronous operations, which are crucial for tasks... | 0 | 2024-06-06T19:53:51 | https://dev.to/ayas_tech_2b0560ee159e661/callbacks-vs-promises-vs-asyncawait-concept-in-javascript-hp6 | JavaScript (JS) provides several ways to handle asynchronous operations, which are crucial for tasks like fetching data from an API, reading files, or performing time-consuming computations without blocking the main thread. Let's explore the three main approaches: Callbacks, Promises, and Async/Await.
**What is a Callback ?**
A callback is a function passed into another function as an argument and executed after some operation has been completed. Callbacks are one of the oldest ways to handle asynchronous operations in JavaScript.
```
function fetchData(callback) {
  setTimeout(() => {
    const data = { user: 'John Doe' };
    callback(null, data);
  }, 1000);
}

fetchData((error, data) => {
  if (error) {
    console.error(error);
  } else {
    console.log(data);
  }
});
```
In this example, fetchData takes a callback function as an argument. After a delay of 1 second, the callback function is executed with the data.
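A drawback appears when several asynchronous steps depend on each other: each step nests another callback, producing the deeply indented "callback hell" that promises were designed to avoid. A minimal sketch (the step functions here are hypothetical placeholders):

```javascript
// Hypothetical sequential steps, each taking a Node-style (error, result) callback.
// The nesting grows with every dependent step - the so-called "pyramid of doom".
function getUser(id, callback) {
  setTimeout(() => callback(null, { id, name: 'John Doe' }), 100);
}

function getOrders(user, callback) {
  setTimeout(() => callback(null, [{ orderId: 1, user: user.id }]), 100);
}

function getInvoice(order, callback) {
  setTimeout(() => callback(null, { invoiceFor: order.orderId }), 100);
}

getUser(42, (err, user) => {
  if (err) return console.error(err);
  getOrders(user, (err, orders) => {
    if (err) return console.error(err);
    getInvoice(orders[0], (err, invoice) => {
      if (err) return console.error(err);
      console.log(invoice); // { invoiceFor: 1 }
    });
  });
});
```

Note also that every level repeats its own error check, which is exactly the duplication promises eliminate with a single `.catch`.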
**What is a Promise ?**
A promise is an object representing the eventual completion or failure of an asynchronous operation. It provides a cleaner way to handle asynchronous code, avoiding deeply nested callbacks.
```
function fetchData() {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      const data = { user: 'John Doe' };
      resolve(data);
    }, 1000);
  });
}

fetchData()
  .then(data => {
    console.log(data);
  })
  .catch(error => {
    console.error(error);
  });
```
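Because each `.then` returns a new promise, dependent operations chain flatly instead of nesting, and `Promise.all` runs independent operations concurrently. A small sketch (the `delay` helper is an illustrative stand-in for real async calls):

```javascript
// Chaining: each .then returns a promise, so dependent steps stay flat.
const delay = (ms, value) => new Promise(resolve => setTimeout(() => resolve(value), ms));

delay(100, { user: 'John Doe' })
  .then(user => delay(100, { ...user, posts: 3 })) // next step receives previous result
  .then(profile => console.log(profile))           // { user: 'John Doe', posts: 3 }
  .catch(error => console.error(error));           // one catch covers the whole chain

// Promise.all: independent operations run concurrently; resolves when all finish.
Promise.all([delay(100, 'settings'), delay(150, 'friends')])
  .then(([settings, friends]) => console.log(settings, friends))
  .catch(error => console.error(error));
```

A single `.catch` at the end of a chain replaces the per-level error checks that callbacks require.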
**What are Async/Await ?**
Async and await are syntactic sugar built on top of promises. They provide a more straightforward way to work with asynchronous code, making it easier to read and write.
```
function fetchData() {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      const data = { user: 'John Doe' };
      resolve(data);
    }, 1000);
  });
}

async function fetchUser() {
  try {
    const data = await fetchData();
    console.log(data);
  } catch (error) {
    console.error(error);
  }
}

fetchUser();
```
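One thing to watch with `await`: awaiting independent operations one after another serializes them. Combining `async/await` with `Promise.all` keeps the readable syntax while running them concurrently; a sketch with an illustrative `delay` placeholder for real async calls:

```javascript
const delay = (ms, value) => new Promise(resolve => setTimeout(() => resolve(value), ms));

async function loadDashboard() {
  // Sequential: the second await only starts after the first resolves (~200ms total)
  const user = await delay(100, { user: 'John Doe' });
  const settings = await delay(100, { theme: 'dark' });

  // Concurrent: both timers run at once (~100ms total)
  const [posts, friends] = await Promise.all([
    delay(100, ['post1']),
    delay(100, ['friend1']),
  ]);

  return { user, settings, posts, friends };
}

loadDashboard().then(result => console.log(result));
```

Use sequential `await` only when a step genuinely depends on the previous result.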
**Conclusion**
Choosing between callbacks, promises, and async/await depends on your specific use case and the complexity of your asynchronous code. For simple tasks, callbacks may suffice. For more complex scenarios involving multiple asynchronous operations, promises or async/await offer better readability and maintainability.
| ayas_tech_2b0560ee159e661 | |
1,879,617 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-06T19:53:48 | https://dev.to/betam34174/buy-verified-cash-app-account-4025 | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | betam34174 |
1,879,616 | BDD Testing in .NET8 | Introduction In this tutorial you will understand what is Behavior-Driven Development... | 0 | 2024-06-06T19:52:40 | https://dev.to/vinicius_estevam/teste-bdd-em-net8-4ech | ledscommunity, bdd, testing, aspnet |
## Introduction
In this tutorial you will learn what Behavior-Driven Development (BDD) is and what its benefits are.
### Behavior-Driven Development (BDD)
Behavior-Driven Development (BDD) is an agile methodology that enhances collaboration among all project participants, regardless of technical knowledge. It builds on Test-Driven Development (TDD) by using natural language to describe test scenarios, making them easy to understand for everyone. BDD aims to improve communication, reduce misunderstandings, and ensure the software meets real user needs.
### Benefits Of BDD
- Clear overview of the project.
- Improves understanding of the parties involved.
- Allows automation of tests and documents.
- Contributes to efficient development.
### How Does BDD Work?
**Scenario definition (features files):**
The team defines, in natural language, how the system should behave in each scenario, such as the CRUD operations for a system entity. Both success and error cases must be taken into account, as well as what the system should return to the user for each action.
> BDD syntax uses 3 keywords:
> **Given:** Describes the initial context of the scenario.
> **When:** Describes the action that triggers the scenario.
> **Then:** Describes the expected result of the action.
**Scenario automation (steps files):**
After the scenarios are defined, tests can be automated based on the feature files. These tests validate the defined scenarios and are run continuously throughout the project's development to identify bugs quickly.
---
## Tools
- C#
- .NET8
- Visual Studio 2022
---
## Configuration
To implement BDD tests in .NET 8 we will use the **_Xunit.Gherkin.Quick_** library, available at this [link](https://github.com/ttutisani/Xunit.Gherkin.Quick).
Within your API's solution, right-click, select the Add New Project option, and choose the xUnit Test Project template. At the end you should see a project with this structure:

To install **_Xunit.Gherkin.Quick_**, run these commands in a terminal or add the packages via the NuGet Package Manager in Visual Studio.
```cmd
dotnet add package Xunit.Gherkin.Quick
dotnet add package Microsoft.AspNetCore.Mvc.Testing
dotnet add package Newtonsoft.Json
```

------
Then create two folders within this project, named Features and Steps.

---
## Test Implementation
Let's start with the feature file for the member entity. Create a file called _**MemberFeature.feature**_ inside the Features folder, then define the scenarios; in this tutorial we will cover an error case and a success case for creation and update.
```gherkin
Feature: Member CRUD

  Background:
    Given I have access to the member API

  Scenario Outline: Create a new member
    When I send a POST request to /Member with the following member details: "<Name>", "<Email>", "<Identifier>", "<BrazilianDocument>", "<BrazilianZipCode>", "<Phone>"
    Then the API response should be: "<StatusCode>"

    Examples:
      | Name     | Email                | Identifier  | BrazilianDocument | BrazilianZipCode | Phone       | StatusCode |
      | ""       | john.doe@example.com | 0000bsi0000 | 12345678900       | 12345678         | 27900000000 | 400        |
      | John Doe | john.doe@example.com | 0000bsi0000 | 12345678900       | 12345678         | 27900000000 | 200        |

  Scenario Outline: Update an existing member
    When I send a PUT request to /Member/"<MemberId>" with the following member details: "<Name>", "<Email>", "<Identifier>", "<BrazilianDocument>", "<BrazilianZipCode>", "<Phone>"
    Then the API response should be: "<StatusCode>"

    Examples:
      | MemberId                             | Name             | Email                 | Identifier  | BrazilianDocument | BrazilianZipCode | Phone       | StatusCode |
      | 8db060d9-ca6f-4320-a972-c687d75accce | ""               | john.doe@example.com  | 0000bsi0000 | 12345678900       | 12345678         | 27900000000 | 400        |
      | 8db060d9-ca6f-4320-a972-c687d75accce | John Doe Updated | john.doe2@example.com | 1111bsi1111 | 12345678911       | 12345679         | 27912341234 | 200        |

  Scenario Outline: Retrieve an existing member
    When I send a GET request to /Member/"<MemberId>"
    Then the API response should be: "<StatusCode>"

    Examples:
      | MemberId                             | StatusCode |
      | 28368835-8f04-4357-81b5-31d666314020 | 200        |

  Scenario Outline: Delete an existing member
    When I send a DELETE request to /Member/"<MemberId>"
    Then the API response should be: "<StatusCode>"

    Examples:
      | MemberId                             | StatusCode |
      | 52ee8b49-9758-499a-b35e-93b61e10827a | 200        |
```
Now let's implement the step definitions based on the feature file we just created. Create a file called **_MemberSteps.cs_** inside the Steps folder.
```c#
using Gherkin.Ast;
using Application.DTOs.Request;
using Application.DTOs.Response;
using Test.Shared;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;
using Xunit;
using Xunit.Gherkin.Quick;

namespace SlaveOneBack.Test.Steps
{
    [FeatureFile("../../../Features/MemberFeature.feature")]
    public class MemberSteps : Xunit.Gherkin.Quick.Feature
    {
        private const string BASE_URL = "https://localhost:3000/api/Member/";
        private readonly HttpClient _client;
        private HttpResponseMessage _response;
        private ApiDataProvider _provider;

        public MemberSteps()
        {
            _client = new HttpClient();
            _provider = new ApiDataProvider();
        }

        #region Check if API is running
        [Given("I have access to the member API")]
        public async Task IHaveAccessAPI()
        {
            var response = await _client.GetAsync(BASE_URL);
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
        #endregion

        #region Post Request
        [When(@"I send a POST request to /Member with the following member details: ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)""")]
        public async Task WhenISendAPostRequest(string name, string email, string identifier, string document, string zipCode, string phone)
        {
            var member = new MemberRequestDTO();
            member.Name = name;
            member.Email = email;
            member.Identifier = identifier;
            member.BrazilianDocument = document;
            member.BrazilianZipCode = zipCode;
            member.Phone = phone;

            var content = new StringContent(JsonSerializer.Serialize(member), Encoding.UTF8, "application/json");
            var response = await _client.PostAsync(BASE_URL, content);
            _response = response;
        }
        #endregion

        #region Put Request
        [When(@"I send a PUT request to /Member/""(.+)"" with the following member details: ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)""")]
        public async Task WhenISendAPutRequest(string memberId, string name, string email, string identifier, string document, string zipCode, string phone)
        {
            MemberResponseDTO member = await _provider.GetEntityById<MemberResponseDTO>("Member", memberId);
            member.Name = name;
            member.Email = email;
            member.Identifier = identifier;
            member.BrazilianDocument = document;
            member.BrazilianZipCode = zipCode;
            member.Phone = phone;

            var content = new StringContent(JsonSerializer.Serialize(member), Encoding.UTF8, "application/json");
            var response = await _client.PutAsync(BASE_URL + memberId, content);
            _response = response;
        }
        #endregion

        #region Retrieve Request
        [When(@"I send a GET request to /Member/""(.+)""")]
        public async Task WhenISendAGetRequest(string memberId)
        {
            var response = await _client.GetAsync(BASE_URL + memberId);
            _response = response;
        }
        #endregion

        #region Delete Request
        [When(@"I send a DELETE request to /Member/""(.+)""")]
        public async Task WhenISendADeleteRequest(string memberId)
        {
            var response = await _client.DeleteAsync(BASE_URL + memberId);
            _response = response;
        }
        #endregion

        #region Check API Response
        [Then(@"the API response should be: ""(.+)""")]
        public async Task ThenApiResponse(string statusCode)
        {
            Assert.Equal(Convert.ToInt32(statusCode), (int)_response.StatusCode);
        }
        #endregion
    }
}
```
### So how does this file work?
**Feature File Association**
The MemberSteps class is associated with the **_MemberFeature.feature_** file using the **_[FeatureFile]_** attribute:
```c#
[FeatureFile("../../../Features/MemberFeature.feature")]
public class MemberSteps : Xunit.Gherkin.Quick.Feature
{
```
**Fields and Constructor**
The class has fields holding an **_HttpClient_** instance, the base URL for the API, an **_ApiDataProvider_** instance, and an **_HttpResponseMessage_** (`_response`) that stores the API response from each request:
```c#
private const string BASE_URL = "https://localhost:3000/api/Member/";
private readonly HttpClient _client;
private HttpResponseMessage _response;
private ApiDataProvider _provider;

public MemberSteps()
{
    _client = new HttpClient();
    _provider = new ApiDataProvider();
}
```
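The `ApiDataProvider` type comes from the shared `Test.Shared` project and isn't shown in this article. A minimal, hypothetical sketch — assuming it simply issues a GET to `/api/{entity}/{id}` and deserializes the JSON body — could look like this:

```c#
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Hypothetical sketch of the Test.Shared ApiDataProvider helper —
// the real implementation is not shown in the article.
public class ApiDataProvider
{
    private const string BASE_URL = "https://localhost:3000/api/";
    private readonly HttpClient _client = new HttpClient();

    // Fetch /api/{entity}/{id} and deserialize the JSON body into T
    public async Task<T> GetEntityById<T>(string entity, string id)
    {
        var json = await _client.GetStringAsync($"{BASE_URL}{entity}/{id}");
        return JsonSerializer.Deserialize<T>(json,
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true })!;
    }
}
```

This matches how it is used in the PUT step: `_provider.GetEntityById<MemberResponseDTO>("Member", memberId)`.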
**Verifying API Availability**
This step ensures the API is available:
```c#
[Given("I have access to the member API")]
public async Task IHaveAccessAPI()
{
    var response = await _client.GetAsync(BASE_URL);
    Assert.Equal(HttpStatusCode.OK, response.StatusCode);
}
```
**Verifying API Response Codes**
This step checks that the API response codes match expected values:
```c#
[Then(@"the API response should be: ""(.+)""")]
public async Task ThenApiResponse(string statusCode)
{
    Assert.Equal(Convert.ToInt32(statusCode), (int)_response.StatusCode);
}
```
**Creating a New Member**
This step sends a POST request to create a new member and stores the response:
```c#
[When(@"I send a POST request to /Member with the following member details: ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)""")]
public async Task WhenISendAPostRequest(string name, string email, string identifier, string document, string zipCode, string phone)
{
    var member = new MemberRequestDTO();
    member.Name = name;
    member.Email = email;
    member.Identifier = identifier;
    member.BrazilianDocument = document;
    member.BrazilianZipCode = zipCode;
    member.Phone = phone;

    var content = new StringContent(JsonSerializer.Serialize(member), Encoding.UTF8, "application/json");
    var response = await _client.PostAsync(BASE_URL, content);
    _response = response;
}
```
**Retrieving an Existing Member**
This step sends a GET request to retrieve a member by ID and records the response status code:
```c#
[When(@"I send a GET request to /Member/""(.+)""")]
public async Task WhenISendAGetRequest(string memberId)
{
    var response = await _client.GetAsync(BASE_URL + memberId);
    _response = response;
}
```
**Updating an Existing Member**
This step sends a PUT request to update a member's details and stores the response:
```c#
[When(@"I send a PUT request to /Member/""(.+)"" with the following member details: ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)"", ""(.+)""")]
public async Task WhenISendAPutRequest(string memberId, string name, string email, string identifier, string document, string zipCode, string phone)
{
    MemberResponseDTO member = await _provider.GetEntityById<MemberResponseDTO>("Member", memberId);
    member.Name = name;
    member.Email = email;
    member.Identifier = identifier;
    member.BrazilianDocument = document;
    member.BrazilianZipCode = zipCode;
    member.Phone = phone;

    var content = new StringContent(JsonSerializer.Serialize(member), Encoding.UTF8, "application/json");
    var response = await _client.PutAsync(BASE_URL + memberId, content);
    _response = response;
}
```
**Deleting a Member**
This step sends a DELETE request to delete a member by ID and records the response status code:
```c#
[When(@"I send a DELETE request to /Member/""(.+)""")]
public async Task WhenISendADeleteRequest(string memberId)
{
    var response = await _client.DeleteAsync(BASE_URL + memberId);
    _response = response;
}
```
---
## Run Test
You can run the tests from the Visual Studio IDE or from the command line.

```cmd
dotnet test
```
I chose to run the tests via the IDE, which displays a visual report of the results; if you run them from the terminal, the report is printed there instead.
---
## Conclusion
Working with BDD (Behavior-Driven Development) at LEDS has been an incredible experience. Writing test scenarios right after the requirements documentation helps reduce uncertainty and decrease the error rate during the development phase. Furthermore, test automation means that with each new change or implementation of new functionality, it is possible to validate those changes efficiently. This ensures greater security and confidence in continuous development.
Another significant benefit is that everyone involved in the project, regardless of their area of expertise, is fully aware of what to expect from each new version of the project. BDD scenarios are written in natural language, which makes communication clear and accessible to everyone, making it easier to understand what is being developed and tested. This approach promotes more integrated and effective collaboration between teams, resulting in a higher-quality final product.
| vinicius_estevam |
1,879,593 | Streamlining Angular Deployment with GitHub Actions, GitHub Container registry , Docker, and Nginx | In modern web development, setting up an efficient CI/CD pipeline for deploying your Angular... | 0 | 2024-06-06T19:50:35 | https://dev.to/aixart/streamlining-angular-deployment-with-github-actions-github-container-registry-docker-and-nginx-27bo | devops, docker, webdev, cicd | In modern web development, setting up an efficient CI/CD pipeline for deploying your Angular applications can significantly enhance your workflow. This blog post guides you through setting up a GitHub Actions pipeline to build and deploy an Angular application using Docker, Nginx, and ensuring basic security group configurations in EC2.
## Prerequisites
- A GitHub repository containing your Angular application.
- Docker installed on your local machine.
- GitHub Container Registry (GHCR) for storing Docker images.
- An EC2 instance or any server with Docker and Nginx installed.
- Proper security group settings on your EC2 instance.
## Step 1: Modify Dockerfile to Accept Build Arguments
First, we need to update our Dockerfile to accept build arguments that specify the Angular environment. This ensures that our build process can dynamically adjust based on the environment.
```dockerfile
# Build stage: use the official Node.js image as a base
FROM node:14 AS build

# Create and set the working directory
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the application files
COPY . .

# Set the environment variable to increase the build memory limit
ENV NODE_OPTIONS=--max_old_space_size=4096

# Build the Angular application based on the provided environment
ARG ANGULAR_ENV=production
RUN npm run build -- --configuration=$ANGULAR_ENV

# Runtime stage: serve the compiled build with Nginx so the container
# actually listens on port 80 (npm start would run the dev server instead)
FROM nginx:alpine
# NOTE: depending on your angular.json outputPath, the build may land in
# dist/<project-name>; adjust the COPY path accordingly
COPY --from=build /app/dist /usr/share/nginx/html

# Expose the port the app runs on
EXPOSE 80

# Command to run the application
CMD ["nginx", "-g", "daemon off;"]
```
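The `--configuration` value passed in through `ANGULAR_ENV` must match a configuration defined in `angular.json`. A trimmed, illustrative fragment — the project name `my-app` and the staging environment file are assumptions, not part of the original article — looks like this:

```json
{
  "projects": {
    "my-app": {
      "architect": {
        "build": {
          "configurations": {
            "production": {
              "fileReplacements": [
                { "replace": "src/environments/environment.ts", "with": "src/environments/environment.prod.ts" }
              ]
            },
            "staging": {
              "fileReplacements": [
                { "replace": "src/environments/environment.ts", "with": "src/environments/environment.staging.ts" }
              ]
            }
          }
        }
      }
    }
  }
}
```

With such configurations in place, the same Dockerfile can produce different builds, e.g. `docker build --build-arg ANGULAR_ENV=staging -t my-app:staging .`.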
## Step 2: Set Up GitHub Actions Workflow
Next, we'll configure a GitHub Actions workflow to handle the build and deployment process.
```yaml
name: Build and Deploy Angular

on:
  push:
    branches:
      - main # Change this to your main branch name

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write

    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }} # Use GitHub token secret

      - name: Extract Git Commit Hash
        id: vars
        run: echo "COMMIT_HASH=$(git rev-parse --short HEAD)" >> $GITHUB_ENV

      - name: Build Docker Image
        run: |
          docker build --build-arg ANGULAR_ENV=production -t ghcr.io/yourusername/angular-docker:${{ env.COMMIT_HASH }} .

      - name: Push Docker Image to GitHub Container Registry
        run: |
          docker push ghcr.io/yourusername/angular-docker:${{ env.COMMIT_HASH }}

      - name: SSH and Deploy to Server
        env:
          PRIVATE_KEY: ${{ secrets.PRIVATE_KEY }} # This is the private key secret for SSH
          COMMIT_HASH: ${{ env.COMMIT_HASH }}
        run: |
          echo "$PRIVATE_KEY" > private_key.pem
          chmod 600 private_key.pem
          ssh -o StrictHostKeyChecking=no -i private_key.pem ubuntu@your-ec2-instance <<EOF
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker pull ghcr.io/yourusername/angular-docker:${COMMIT_HASH}
          docker stop angular-app || true
          docker rm angular-app || true
          docker run -d --name angular-app -p 8080:80 ghcr.io/yourusername/angular-docker:${COMMIT_HASH}
          sudo systemctl restart nginx
          docker system prune -a -f --volumes
          EOF
          rm -f private_key.pem
```
## Step 3: Nginx Configuration
Update your Nginx configuration to correctly proxy requests to the Docker container.
```nginx
server {
    if ($host = yourdomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name yourdomain.com;
    return 404;
}

server {
    listen 443 ssl http2;
    server_name yourdomain.com;

    location / {
        proxy_pass http://localhost:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem; # managed by Certbot
}
```
## Step 4: EC2 Security Group Configuration
Ensure your EC2 instance's security group allows inbound traffic on ports 80 and 443.
1. **Log in to AWS Management Console**.
2. **Navigate to EC2 Dashboard**.
3. **Select Security Groups** under "Network & Security".
4. **Find and Select Your Security Group** associated with your EC2 instance.
5. **Edit Inbound Rules** to allow traffic on ports 80 (HTTP) and 443 (HTTPS):
- **HTTP** (port 80):
- Type: HTTP
- Protocol: TCP
- Port Range: 80
- Source: 0.0.0.0/0
- **HTTPS** (port 443):
- Type: HTTPS
- Protocol: TCP
- Port Range: 443
- Source: 0.0.0.0/0
6. **Log in to ghcr.io** on the server so Docker can pull the images from your GitHub Container Registry:
```
export CR_PAT=YOUR_TOKEN
echo $CR_PAT | docker login ghcr.io -u USERNAME --password-stdin
```
## Summary
This setup ensures that:
- Each deployment uses a Docker image tagged with the specific Git commit hash.
- Unused Docker images are removed from your server to free up space.
- Your server runs the latest Docker image without accumulating old images.
| aixart |
1,879,615 | DIGITAL ASSET FRAUD EXPERT KNOWN AS GEARHEAD ENGINEERS | Growing up on a farm instilled in me a strong work ethic and a deep appreciation for the simple, yet... | 0 | 2024-06-06T19:47:29 | https://dev.to/eva_cole_9a88b92c94a52f90/digital-asset-fraud-expert-known-as-gearhead-engineers-1if2 | Growing up on a farm instilled in me a strong work ethic and a deep appreciation for the simple, yet demanding, rhythms of rural life. However, my aspirations always lay beyond the fields and barns of my upbringing. I yearned for something different, something that blended the new-age economy with the values I held dear. That path led me unexpectedly into the world of cryptocurrencies. I began my journey with an intense period of self-education. As a farmer's son, resources were not handed to me; I had to cultivate them myself. This drive to learn and adapt was pivotal when I chose to skip college, a decision that weighed heavily on my parents but felt right to me. Instead, I meticulously built and maintained a perfect credit score, which eventually allowed me to secure a $50,000 loan from the bank to start my own business. Rather than pursue a traditional route, I invested the entire sum into Bitcoin. This was in the early days of crypto's surge, a time both ripe with opportunity and fraught with risk.Remarkably, within just six months, my investment blossomed into $200,000. It felt like a vindication of my risky strategy and a testament to the potential of cryptocurrencies. Yet, as often happens in the volatile world of digital finance, my initial success caught the attention of less scrupulous parties. I was scammed out of $20,000, a severe blow that not only threatened my financial stability but also my confidence in the venture I had embarked upon. The situation worsened when I realized that the rest of my funds were also at risk.In my search for solutions, I stumbled upon GearHead Engineers experts. 
Skeptical yet desperate, I reached out to them, hoping that they could help secure what was left of my digital assets. The team at GearHead Engineers was incredibly professional and deeply understanding of the nuances of cryptocurrency security. They guided me through the process of securing my accounts and recovering what could be salvaged from the scam. Thanks to GearHead Engineers, not only was I able to secure my existing wallets, but I also implemented enhanced security measures to protect against future attacks. The experience was sobering but ultimately educational, reinforcing the importance of security in digital transactions and the need for vigilance in an increasingly connected world. This ordeal has taught me more than I could have learned in any classroom. It tested my resolve, challenged my understanding of digital finance, and forced me to confront the realities of engaging in high-risk investments. With my accounts now secure, I am cautiously optimistic about the future. My journey from a farmer’s son to a crypto investor has been unconventional but is a testament to the power of resilience and the importance of adapting to new opportunities while safeguarding one's investments.
| eva_cole_9a88b92c94a52f90 | |
1,879,614 | A Deep Dive into Three.js: Exploring the Beauty of 3D on the Web 🌐 | Hey there, fellow developers! Today, we're diving into the fascinating world of Three.js, a... | 0 | 2024-06-06T19:45:24 | https://dev.to/mohith/a-deep-dive-into-threejs-exploring-the-beauty-of-3d-on-the-web-5812 | threejs, nextjs, new, 3d | Hey there, fellow developers! Today, we're diving into the fascinating world of Three.js, a JavaScript library that makes creating 3D graphics in the web browser a breeze. Whether you're a seasoned coder or just getting started, Three.js has something for everyone. Let's unpack its history, usage, benefits, and best practices. Ready? Let's go! 🚀
**The Story Behind Three.js 🕰️**
Three.js was born out of the vision of Ricardo Cabello, also known as Mr. doob, back in 2010. The idea was to simplify the creation of 3D graphics on the web. Before Three.js, developers had to write complex WebGL code, which wasn't exactly user-friendly. Three.js abstracted away much of that complexity, making it accessible to a broader audience. Fast forward to today, and Three.js is a cornerstone of web-based 3D graphics, widely adopted in interactive websites, games, and VR experiences.
**Why Use Three.js? 🤔**
Three.js is incredibly useful for several reasons:
1. Ease of Use: It abstracts the complexities of WebGL, providing an intuitive API.
2. Flexibility: It supports various rendering backends like WebGL, SVG, and CSS3D.
3. Community and Resources: A large community means plenty of tutorials, examples, and forums to help you out.
4. Performance: Efficiently handles complex 3D scenes and animations.
5. Integration: Easily integrates with other web technologies and frameworks.
**Future Scope of Three.js 🌟**
The future of Three.js is bright! As web technologies evolve, Three.js continues to innovate, pushing the boundaries of what's possible in web-based 3D graphics. From augmented reality (AR) to virtual reality (VR) applications, the possibilities are endless. Imagine immersive online shopping experiences, interactive educational tools, and sophisticated games—all powered by Three.js.
**What Can You Create with Three.js? 🎨**
The sky's the limit! Here are a few ideas:
- Interactive Websites: Add a new dimension to your site with 3D models and animations.
- Games: Create engaging browser-based games.
- Data Visualization: Turn complex data into interactive 3D visualizations.
- VR/AR Experiences: Build immersive virtual and augmented reality applications.
**How Three.js Enhances Websites 🌐**
Using Three.js, you can make your websites more engaging and interactive. 3D elements can grab users' attention and provide a more immersive experience. Whether it's a spinning logo, a 3D product showcase, or an interactive background, Three.js can help you stand out.
**Handling Model Loading 🚀**
Loading 3D models efficiently is crucial to ensure a smooth user experience. Here’s how to do it right:
**Optimizing Models 🛠️**
- Reduce Complexity: Simplify your models in 3D modeling software before exporting them. This reduces the number of polygons and can significantly improve performance.
- Use Proper Formats: Formats like glTF or GLB are optimized for web use. They support features like mesh compression, which helps reduce file size without losing quality.
- Texture Optimization: Use compressed textures and reduce their resolution where possible. Textures often take up a significant portion of the model's size.
**Lazy Loading 📦**
- Load On-Demand: Load models only when they are needed. For example, load models when they come into the camera's view or when a user interacts with a specific part of the page.
- Use Placeholders: Display low-poly versions or simple placeholders while the full models load in the background.
**Compression 🚀**
- Model Compression: Use tools like Draco compression to compress 3D models. Three.js supports decoding Draco-compressed meshes directly.
- Texture Compression: Use texture compression techniques like Basis Universal to reduce the size of your textures.
Example of Loading a Model:

**Best Practices and What to Avoid 🚫**
**Best Practices 🌟**
- Keep It Simple: Start with simple models and scenes to avoid overwhelming the rendering engine.
- Optimize Performance: Use techniques like Level of Detail (LOD) to adjust the complexity of models based on their distance from the camera. Implement frustum culling to only render objects visible in the camera’s view.
- Stay Updated: Regularly update your Three.js library to benefit from the latest features, performance improvements, and bug fixes.
- Use Efficient Materials: Choose materials that are optimized for performance. For example, use MeshBasicMaterial instead of MeshStandardMaterial when you don’t need advanced lighting and shading effects.
- Debugging Tools: Utilize tools like the Three.js Inspector for Chrome to debug and optimize your scenes.
**What to Avoid 🚫**
- Overloading the Scene: Avoid adding too many objects or highly detailed models that can degrade performance. Use instanced rendering for repetitive objects.
- Ignoring Compatibility: Ensure your 3D content works across different devices and browsers. Test on both high-end and low-end devices to ensure a consistent experience.
- Neglecting User Experience: Always consider the impact of 3D elements on the overall user experience. Ensure that your 3D content enhances the user journey rather than hindering it.
**Useful Repositories and Resources 📚**
**Official Three.js Repository**
- [Three.js](https://github.com/mrdoob/three.js): The official repository for Three.js, containing the core library and extensive documentation.
**Examples and Tutorials**
- [Three.js Examples](https://github.com/mrdoob/three.js/tree/master/examples): A collection of examples showcasing various features of Three.js. This is part of the official repository but provides a direct link to the examples.
- Three.js Journey: Bruno Simon's advanced Three.js course repository, filled with examples and lessons.
**Model Loading and Optimization**
- [GLTF Loader Example](https://github.com/mrdoob/three.js/blob/master/examples/jsm/loaders/GLTFLoader.js): An example of how to use the GLTFLoader to load 3D models.
- [Draco Compression](https://github.com/mrdoob/three.js/blob/master/examples/jsm/loaders/DRACOLoader.js): An example of how to use Draco compression with Three.js.
**Community Projects and Tools**
- Three.js Inspector: A powerful tool to debug and inspect Three.js scenes.
- [Three.js Boilerplate](https://github.com/Sean-Bradley/Three.js-TypeScript-Boilerplate): A boilerplate project to get started with Three.js and TypeScript quickly.
- [Three.js Starter](https://github.com/designcourse/threejs-webpack-starter): A starter template for Three.js projects using Webpack.
**Advanced Examples and Utilities**
- [Three.js React](https://github.com/pmndrs/react-three-fiber): React Three Fiber is a React renderer for Three.js, making it easier to integrate Three.js with React.
- [Three.js Path Tracing](https://github.com/erichlof/THREE.js-PathTracing-Renderer): An advanced path tracing renderer using Three.js for realistic rendering.
These repositories provide a wealth of resources, examples, and tools to help you master Three.js and create stunning 3D web applications.
**Conclusion 🎉**
Three.js opens up a world of possibilities for web developers, making it easier than ever to create stunning 3D graphics. By following best practices and leveraging the wealth of resources available, you can bring your web projects to life in ways you never imagined. So, what are you waiting for? Dive into Three.js and start creating your 3D masterpieces today! 🌟
| mohith |
1,879,613 | Looking to hire talented Blockchain developers for building Crypto Betting Platform | Currently, we are working on crypto betting website development that is called Reject Rumble. We just... | 0 | 2024-06-06T19:44:17 | https://dev.to/twentyfour7/looking-to-hire-talented-blockchain-developers-for-building-crypto-betting-platform-50kd | web3 | Currently, we are working on crypto betting website development that is called Reject Rumble.
We just started front-end UI development, but our previous blockchain developer left our team, so we are looking for developers who have experience in blockchain development.
The current platform supports only the Solana network and offers only a Dice game.
Therefore, we are planning to extend this platform to accept all cryptocurrencies and to add more betting games.
For this project, you need to have proven experience in React, TypeScript, Node.js and Web3.js.
If you are interested, we can discuss more details.
Thanks and regards. | twentyfour7 |
1,879,611 | Async/Await keeps order in JavaScript; | JavaScript is a single-threaded synchronous language, meaning it goes through the statements in the... | 0 | 2024-06-06T19:41:57 | https://dev.to/atenajoon/asyncawait-keeps-order-in-javascript-2k4p | asynchronous, javascript, api, react | JavaScript is a single-threaded synchronous language, meaning it goes through the statements in the order they are written and processes one at a time.
```
console.log("cook!");
console.log("eat!");
console.log("clean!");
// cook!
// eat!
// clean!
```
If before the _cook!_ operation there was an asynchronous operation like fetching the recipe from an API, the program might not wait for it to complete before moving on to the _cook!_, _eat!_, and _clean!_ operations. This can result in out-of-order execution, like so:
```
fetchRecipeFromAPI(); // Assume this is an asynchronous function
console.log("cook!");
console.log("eat!");
console.log("clean!");
// Output might be:
// cook!
// eat!
// clean!
// (sometime later) Recipe fetched!
```
Well, we definitely need to get the recipe first, and then cook based on it!
To ensure the steps are executed in order, even if one of them is asynchronous, we can use **async** and **await** keywords. These little friends can help us write asynchronous code that appears synchronous, making it easier to understand and maintain.
There are 2 steps to use them:
1. Define the asynchronous function with the **async** keyword before it.
2. Use the **await** keyword inside your asynchronous function to pause the execution until the operation is completed.
In the following example:
- **fetchRecipeFromAPI** is an asynchronous function simulating a recipe fetch operation with a delay.
- **cook** is an asynchronous function that waits for fetchRecipeFromAPI to complete before continuing.
- **main** is an asynchronous function that ensures cook, eat!, and clean! happen in order.
```
async function fetchRecipeFromAPI() {
  return new Promise((resolve) => {
    setTimeout(() => {
      console.log("Recipe fetched!");
      resolve("recipe");
    }, 2000); // Simulate an API call with a 2-second delay
  });
}

async function cook(recipe) {
  console.log("start cooking with", recipe);
}

async function main() {
  const recipe = await fetchRecipeFromAPI(); // Wait for the recipe to be fetched
  await cook(recipe); // Wait for the cooking to complete
  console.log("eat!");
  console.log("clean!");
}

main();

// Output:
// Recipe fetched!
// start cooking with recipe
// eat!
// clean!
```
By using **async** and **await**, you ensure that the asynchronous _cook_ function completes before moving on to _eat!_ and _clean!_. This maintains the logical sequence of operations.
**Note:** If you read the return value of an async function without awaiting it, you always get _Promise { &lt;pending&gt; }_, because JavaScript has no way of knowing what the result will be until that asynchronous work finishes.
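You can see this directly in a small runnable sketch (not from the original post): calling an async function without **await** hands you the Promise object itself, not the resolved value.

```javascript
async function getNumber() {
  await new Promise((resolve) => setTimeout(resolve, 100)); // simulate async work
  return 42; // async functions always wrap the return value in a Promise
}

const result = getNumber(); // no await — we only get the Promise back
console.log(result instanceof Promise); // true
console.log(result); // Promise { <pending> } — the value isn't known yet

getNumber().then((value) => {
  console.log(value); // 42 — available only once the Promise resolves
});
```

Only `await` (or `.then`) gives you access to the eventual value.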
You can find the actual code in this [SandBox](https://codesandbox.io/p/sandbox/get-advice-app-react-7d2k9f?file=%2Fsrc%2FTest.js) | atenajoon |
1,879,610 | Looking to hire Blockchain Developer for Crypto Betting Platform Development | Currently, we are working on crypto betting website development that is called Reject Rumble. We just... | 0 | 2024-06-06T19:41:07 | https://dev.to/twentyfour7/looking-to-hire-blockchain-developer-for-crypto-betting-platform-development-38do | career, blockchain, web3, node | Currently, we are working on crypto betting website development that is called Reject Rumble.
We just started front-end UI development, but our previous blockchain developer left our team, so we are looking for developers who have experience in blockchain development.
The current platform supports only the Solana network and offers only a Dice game.
Therefore, we are planning to extend this platform to accept all cryptocurrencies and to add more betting games.
For this project, you need to have proven experience in React, TypeScript, Node.js and Web3.js.
If you are interested, we can discuss more details.
Thanks and regards. | twentyfour7 |
1,879,609 | Leading 2D Animation Agency in New York, USA - Expert Animations | Discover the leading 2D Animation Agency in New York, USA, with Web Craft Pros. Our team of expert... | 0 | 2024-06-06T19:40:56 | https://dev.to/alexa_johns_af71ca060fd0d/leading-2d-animation-agency-in-new-york-usa-expert-animations-4dl2 | webdev | Discover the leading [2D Animation Agency](https://webcraftpros.com/video-animation) in New York, USA, with Web Craft Pros. Our team of expert animators is dedicated to transforming your ideas into visually stunning and engaging animations. Specializing in explainer videos, promotional content, and animated storytelling, we provide high-quality 2D animations that captivate your audience and enhance your brand's identity. At Web Craft Pros, we blend creativity with advanced animation techniques to deliver exceptional results tailored to your unique needs. Our comprehensive services ensure your message is effectively communicated and leaves a lasting impact. Partner with us to experience the expertise and innovation that sets us apart in the industry. Elevate your brand with the premier 2D animation agency in New York – Web Craft Pros, where expert animations bring your vision to life. | alexa_johns_af71ca060fd0d |
1,879,605 | Multitenant Considerations In Azure | What is Multitenancy? A multitenant solution serves multiple distint customers or tenants... | 0 | 2024-06-06T19:37:04 | https://dev.to/chethankumblekar/multitenant-considerations-in-azure-bbn | webdev, multitenancy, azure, cloud | ### What is Multitenancy?
A multitenant solution serves multiple distinct customers, or tenants; these might be individual organizations or groups of users. _Examples include B2B solutions (like accounting software), B2C solutions (such as music streaming), and enterprise-wide platforms (like shared Kubernetes clusters)._
A multitenant solution is most often considered by those building SaaS products, which mainly target businesses or consumers.
### Design Considerations for a Multitenant Solution
#### Tenant Isolation
One of the biggest considerations in the design of a multitenant architecture is the level of isolation that each tenant needs. Isolation can mean different things:
* Having a single shared infrastructure, with separate instances of your application and separate databases for each tenant.
* Sharing some common resources, but keeping other resources separate for each tenant.
* Keeping data on a separate physical infrastructure. In the cloud, this configuration might require separate Azure resources for each tenant. It could even mean deploying a separate physical infrastructure by using dedicated hosts.

#### Tenancy Models
##### 1. Automated single-tenant deployments
In an automated single-tenant deployment model, you deploy a dedicated set of infrastructure for each tenant.

* Teams that use this model typically use infrastructure as code (IaC) to repeat the infrastructure creation and deployment for every customer, automating the process.
* A key benefit of this approach is that data for each tenant is isolated, which reduces the risk of accidental leakage.
* Cost efficiency is low, because you don't share infrastructure among your tenants: if a single tenant requires a certain infrastructure cost, 100 tenants probably require 100 times that cost.
##### 2. Fully multitenant deployments
In this approach, unlike single-tenant deployments, all components are shared: there is only one set of infrastructure to deploy and maintain.

* Operating this model is less expensive, because components are shared across tenants. Even if you deploy higher tiers or SKUs of resources, the overall deployment cost is usually still lower than the cost of single-tenant resources.
* It carries the risk that a memory leak or downtime affects all tenants at once.
##### 3. Vertically partitioned deployments
This approach combines single-tenant and multitenant deployments. For example, you might have most of your customers' data and application tiers on multitenant infrastructures, but deploy single-tenant infrastructures for customers who require higher performance or data isolation.
* Deploy multiple instances of your solution geographically, and map each tenant to a specific deployment. This approach is particularly effective when you have tenants in different geographies.

* Since you're still sharing infrastructure, you can gain some of the cost benefits of using shared multitenant deployments.
* However, your codebase will probably need to be designed to support both multitenant and single-tenant deployments.
##### 4. Horizontally partitioned deployments
In a horizontal deployment, you have some shared components but maintain other components with single-tenant deployments. For example, you could build a single application tier and then deploy individual databases for each tenant.

* Horizontally partitioned deployments can help you mitigate a noisy neighbor problem, if you identify that most of the load on your system is caused by specific components that you can deploy separately for each tenant.
* With a horizontally partitioned deployment, you still need to consider the automated deployment and management of your components, especially the components used by a single tenant.
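To make the horizontally partitioned model concrete, the shared application tier can resolve a per-tenant database at request time from a tenant-to-connection-string catalog. A minimal JavaScript sketch (the tenant names and connection strings are hypothetical, not from the article):

```js
// Hypothetical tenant catalog: one shared application tier,
// one database per tenant (the horizontally partitioned piece).
const tenantDatabases = {
  contoso: "Server=sql-contoso;Database=app;",
  fabrikam: "Server=sql-fabrikam;Database=app;",
};

// Resolve the connection string for the tenant making the request.
// Unknown tenants are rejected instead of falling through to shared data.
function getConnectionString(tenantId) {
  const conn = tenantDatabases[tenantId];
  if (!conn) {
    throw new Error(`Unknown tenant: ${tenantId}`);
  }
  return conn;
}

console.log(getConnectionString("contoso")); // "Server=sql-contoso;Database=app;"
```

Keeping the lookup in one place also gives you a single spot to add the noisy-neighbor mitigations mentioned above, such as routing a heavy tenant to dedicated resources.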
Ref: https://learn.microsoft.com/en-us/azure/architecture/guide/multitenant/approaches/overview
| chethankumblekar |
1,879,606 | Ojas: The Science Behind How Peel Therapy Works to Transform Your Skin | In the quest for brilliant, energetic skin, strip treatment has emerged as a powerful treatment,... | 0 | 2024-06-06T19:32:09 | https://dev.to/ojasskincare9/ojas-the-science-behind-how-peel-therapy-works-to-transform-your-skin-3afj | In the quest for brilliant, energetic skin, strip treatment has emerged as a powerful treatment, venerated for its capacity to restore and change the skin. At Ojas, we have faith in tackling the force of science to upgrade magnificence, and strip treatment is a great representation of this cooperative energy. This article digs into the science behind [peel therapy](https://www.daposieyewear.com/), making sense of how it attempts to revive your skin and give groundbreaking outcomes.

**Understanding Peel Therapy**
Peel therapy, often referred to as chemical peels, involves applying a solution to the skin that causes controlled exfoliation. This process removes the outer layers of dead skin cells, promoting the regeneration of fresh, healthy skin. Chemical peels vary in strength and formulation, ranging from superficial peels that target the epidermis to deeper peels that reach the dermis.
**The Science of Skin Renewal**
The skin is a dynamic organ, constantly renewing itself through the shedding of dead skin cells and the production of new ones. This natural process can be affected by factors such as age, sun exposure, and environmental damage, leading to uneven skin tone, fine lines, and a dull complexion. Peel therapy accelerates this renewal process by stimulating the skin's natural healing mechanisms.
**Types of Chemical Peels**

**Superficial Peels:**
Alpha Hydroxy Acids (AHAs): Derived from soil-derived compounds, AHAs, inclusive of lactic and glycolic corrosives, are mild and appropriate for addressing small imperfections, enhancing the surface of the skin, and adding a fantastic sheen.
Beta Hydroxy Acids (BHAs): The most well-known BHA is salicylic corrosive; that’s first-rate for inflammatory pores and skin because it dissolves oil and can surely enter pores to clean them.
**Medium Peels:**
Trichloroacetic Corrosive (TCA): TCA strips infiltrate further into the skin, tending to direct kinks, sun harm, and pigmentation issues. They animate collagen creation, upgrading skin immovability and flexibility.
**Deep Peels:**
Phenol Strips: The most grounded kind of strip, phenol strips are utilized for serious skin issues, for example, profound kinks and broad sun harm. They provide emotional outcomes yet require longer recuperation times.
**How Does Peel Therapy Work?**

**Exfoliation:**
The substance arrangement applied during a strip makes the connections between dead skin cells separate, permitting these phones to bog off. This peeling system uncovers fresher, smoother skin underneath.
**Cell Recovery:**
By eliminating the external layer of dead skin cells, strip treatment animates the fundamental skin to quickly deliver new cells. This expanded cell turnover assists with further developing the skin surface and tone.
**Collagen Creation:**
More profound strips, for example, TCA and phenol strips, infiltrate the skin’s surface to invigorate collagen creation. Collagen is a significant protein that gives construction and versatility to the skin, decreasing the presence of scarce differences and kinks.
**Pigmentation Adjustment:**
Synthetic strips can successfully target and lessen hyperpigmentation, including age spots, melasma, and post-provocative hyperpigmentation. The stripping system assists with a night-out complexion by advancing the evacuation of pigmented cells.
**Benefits of [Peel Therapy](https://www.daposieyewear.com/)**

**Improved skin texture:**
Strip treatment smooths unpleasant skin, lessens the presence of scarcely discernible differences and leaves the skin feeling delicate and revived.
**Even skin tone:**
Chemical peels aid in the attainment of a more even and glowing complexion by treating pigmentation problems and sun damage.
**Acne treatment:**
BHAs and other superficial peels work especially well at clearing clogged pores and lowering inflammation in the skin prone to acne.
**Diminished Visibility of Scars:**
By encouraging skin regeneration and the synthesis of collagen, medium and deep peels can reduce the visibility of acne scars and other small scars.
**Post-Peel Care**
Post-peel care is essential to guarantee the best outcomes and avoid problems. The skin is more sensitive and needs to be handled gently after a peel. Important guidelines for peel treatment include:
Sun Protection: To protect recently exposed skin, stay out of direct sunlight and apply a broad-spectrum sunscreen with a high SPF.
Moisturization: Use a light, non-comedogenic moisturizer to keep the skin moisturized.
Stay Free of Harsh Products: Until the skin has healed completely, avoid using retinoids, abrasive scrubs, or other strong skincare products.
**In summary**
At Ojas, we support therapies that integrate holistic well-being with scientific efficacy. This strategy is best represented by [peel therapy](https://www.daposieyewear.com/), which provides a dermatologist-proven way to revitalize and change the skin. Knowing the workings of [peel therapy](https://www.daposieyewear.com/) can help you embrace this potent procedure and make wise choices for beautiful, youthful skin.
| ojasskincare9 | |
1,879,604 | Insta Saver Pro APK Download: Unlock Instagram Media Downloads | What Is Insta Saver Pro? Insta Saver Pro is a powerful Android app that allows you to download... | 0 | 2024-06-06T19:27:11 | https://dev.to/insta_pro_fc70da4c3d09930/insta-saver-pro-apk-download-unlock-instagram-media-downloads-30a | instagram, seo, entertainment, apk | What Is Insta Saver Pro?
Insta Saver Pro is a powerful Android app that allows you to download Instagram photos, videos, stories, reels, and IGTV content directly to your phone. Whether you want to save your favorite posts or share them with friends, Insta Saver Pro has got you covered!
Key Features:
- **Download Instagram Media**: Save photos and videos from Instagram to your device.
- **Repost with Captions and Hashtags**: Boost your followers by sharing content with proper attribution.
- **Offline Viewing**: View downloaded media even without an internet connection.
How to Get [Insta Saver Pro APK](https://instaprosapks.com/top-follow-apk/):
1. **Download the APK File**: Visit Insta Saver Pro APK and grab the latest version (Version 1.0 as of January 15, 2024).
2. **Install the APK**: After downloading, tap the file to install. If prompted, allow installation from unknown sources in your device settings.
Why Choose Insta Saver Pro?
- **No Ads**: Enjoy an ad-free experience while saving Instagram content.
- **Easy to Use**: Insta Saver Pro’s user-friendly interface makes downloading a breeze.
- **Stay Connected**: Access your saved media offline anytime, anywhere.
Unlock the full potential of Instagram with Insta Saver Pro! 📸🔥
Remember to verify the authenticity of any [APK ](https://instaprosapks.com/top-follow-apk/)files you download. Happy saving! | insta_pro_fc70da4c3d09930 |
1,879,462 | Four Fundamental JS Array Methods to Memorize | There are four basic array methods that every beginner should learn to start manipulating arrays.... | 0 | 2024-06-06T19:21:37 | https://dev.to/miguel_c/four-fundamental-js-array-methods-to-memorize-24k | javascript, coding, softwaredevelopment, newbie | There are four basic array methods that every beginner should learn to start manipulating arrays. These methods are `push()`, `pop()`, `shift()`, and `unshift()`. In this guide, I will take you through each method so that you can start manipulating arrays effectively!
## Method `push()`
The `push()` method adds an item to the end of an array.
**Example**
```js
const colorsArray = ["blue", "red", "green"]
colorsArray.push("purple")
console.log(colorsArray)
// Output: ["blue", "red", "green", "purple"]
```

---
## Method `pop()`
The `pop()` method removes the last item in an array.
**Example**
```js
const colorsArray = ["blue", "red", "green"]
colorsArray.pop()
console.log(colorsArray)
// Output: ["blue", "red"]
```

---
## Method `unshift()`
The `unshift()` method adds an item to the beginning of the array.
**Example**
```js
const colorsArray = ["blue", "red", "green"]
colorsArray.unshift("purple")
console.log(colorsArray)
// Output: ["purple", "blue", "red", "green"]
```

---
## Method `shift()`
The `shift()` method removes the first item in an array.
**Example**
```js
const colorsArray = ["blue", "red", "green"]
colorsArray.shift()
console.log(colorsArray)
// Output: ["red", "green"]
```

---
These same methods will work just as well with objects in an array.
**Example**
```js
const people = [
{
name: "Bob",
age: 23
},
{
name:"Joe",
age: 55
}
]
people.push({ name: "Blake", age: 32 }, { name: "Alex", age: 79 });
console.log(people);
// Output: [
// { name: "Bob", age: 23 },
// { name: "Joe", age: 55 },
// { name: "Blake", age: 32 },
// { name: "Alex", age: 79 }
// ]
```

---
## Conclusion
These are the four basic array methods that every beginner should learn to start manipulating arrays. Memorizing these methods is essential and will give you a better foundation as you begin your journey in development.
- `push()`: Adds an item at the end of an array.
- `pop()`: Removes the last item from an array.
- `unshift()`: Adds an item at the beginning of an array.
- `shift()`: Removes the first item from an array.
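Taken together, these four methods are enough to model the two classic linear collections: a stack (`push()` + `pop()`) and a queue (`push()` + `shift()`). A quick sketch:

```js
// Stack: last in, first out. push() adds to the end, pop() removes from the end.
const stack = [];
stack.push("a");
stack.push("b");
stack.push("c");
console.log(stack.pop()); // "c", the most recently added item leaves first

// Queue: first in, first out. push() adds to the end, shift() removes from the front.
const queue = [];
queue.push("first");
queue.push("second");
queue.push("third");
console.log(queue.shift()); // "first", the oldest item leaves first

// All four methods also return useful values:
// push()/unshift() return the new length; pop()/shift() return the removed item.
const colors = ["blue", "red"];
console.log(colors.unshift("purple")); // 3 (new length)
console.log(colors.shift());           // "purple"
```

One thing to keep in mind: `shift()` and `unshift()` have to reindex every remaining element, so on very large arrays they are slower than `push()` and `pop()`, which only touch the end.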
| miguel_c |
1,879,603 | Spotify MOD APK v8.9.6.458: Download (No Ads/Premium) 2024 | In the digital age, online music streaming has revolutionized how we listen to music. Among the... | 0 | 2024-06-06T19:21:18 | https://dev.to/saadseo_e90a28931a110dbae/spotify-mod-apk-v896458-download-no-adspremium-2024-3bf9 | spotify, seo, music, learning | In the digital age, online music streaming has revolutionized how we listen to music. Among the plethora of options available, Spotify MOD APK stands out as a premier choice for music enthusiasts. Let’s dive into what makes it special:
"Unlock Spotify Premium features for free with Spotify++ IPA! Compatible with iOS 15 to iOS 17.5, this modified version provides ad-free listening, unlimited skips, high-quality audio streaming, and offline listening. No jailbreak required—customize your Spotify experience today!
What Is [Spotify MOD APK](https://www.spotifyclub.com/2024/03/05/spotify-vs-soundcloud/)?
Spotify MOD APK is a modified version of the official Spotify app that comes with premium features enabled, eliminating the need for a paid subscription. With this version, you can enjoy:
- **Ad-Free Music Streaming**: Say goodbye to interruptions and enjoy uninterrupted listening without any ads.
- **Premium Sound Quality**: Experience music like never before with superior sound quality of up to 320 kbps.
- **Unlimited Music Skips**: Skip songs as many times as you want, bypassing the limitations of the free version.
- **No Root Required**: Enjoy hassle-free usage without compromising your device’s security.
- **Unlimited Offline Downloads**: Download your favorite tracks and listen offline anytime, anywhere.
Subscription Plans
Spotify offers various subscription plans:
- **Individual Plan**:
  - Ad-free listening
  - Unlimited offline downloads
  - Price: ₹119.00 monthly or ₹1189.00 yearly
- **Duo Plan** (for two users):
  - Same features as the Individual plan
  - Price: ₹149.00 per month
- **Family Plan** (up to 6 users):
  - HD quality music
  - Unlimited skips
  - Ad-free experience
  - Price: ₹179.00 per month
- **Student Plan**:
  - Special pricing for students at ₹59.00 per month
How to Get [Spotify](https://www.spotifyclub.com/2024/03/05/spotify-vs-soundcloud/) MOD APK v8.9.6.458?
Download the Spotify MOD APK v8.9.6.458 from trusted sources and unlock premium features for free. Enjoy your favorite music without any limitations!
| saadseo_e90a28931a110dbae |
1,879,602 | Launch of the Edudu App for iOS and Android | Release of Edudu It is with great pleasure that I announce the launch of the Edudu App for... | 0 | 2024-06-06T19:20:11 | https://dev.to/marciofrayze/release-of-edudu-app-1529 | app, ios, android, teachers | ## Release of Edudu
It is with great pleasure that I announce the launch of the Edudu App for [iOS](https://apps.apple.com/br/app/edudu/id6477551944) and [Android](https://play.google.com/store/apps/details?id=tech.segunda.edudu)!
In this article, I present the Edudu app and discuss the technologies I chose to develop this project.
## What is Edudu?
If you are a teacher looking for a more efficient way to manage your classes, students, and attendance records, Edudu is the app you have been waiting for!
### Key Features
- **Class, Student, and Lesson Registration**: Create and manage multiple classes, keeping all important information organized in one place. Quickly and easily register your students.
- **Class Diary**: Record all your lessons and keep track of your students' attendance in a simple and effective way.
- **Cloud Storage**: All data is securely stored in the cloud, ensuring easy access and protection against data loss.
- **Intuitive and Easy-to-Use Interface**: Edudu was developed with ease of use in mind, providing an intuitive experience for users.
## Technologies
To implement this product, I chose:
- **Dart/Flutter**: I chose the [Dart language](https://dart.dev) and the [Flutter framework](https://flutter.dev) because I already have experience with these technologies and was looking for a solution that would allow the creation of an app for both iOS and Android without needing to create two separate codes.
- **[Firebase Authentication](https://firebase.google.com/docs/auth?hl=en)**: Everyone with a smartphone has a Google or Apple account. To facilitate the login process, I use Firebase Authentication for user authentication.
- **[Cloud Firestore](https://firebase.google.com/docs/firestore?hl=en)**: I wanted the data to be maintained in the cloud (not just locally on the user's device). The Firestore database was an almost natural choice. It integrates easily with Firebase Authentication, is secure, fast, and requires minimal maintenance.
- **[Firebase Crashlytics](https://firebase.google.com/products/crashlytics?hl=en)**: Every app needs a mechanism for monitoring failures. Crashlytics fulfills this function very well.
The main challenge that led me to choose these technologies was the fact that I was developing this app alone. I was looking for solutions that would allow the construction of a product with reduced effort but with a strong foundation for long-term evolution. Flutter + Firebase proved to be a very good combination for this scenario!
In the coming months, I intend to write some articles sharing more about this journey.
## Where can you find it
The app is free and available on the [Apple Store](https://apps.apple.com/br/app/edudu/id6477551944) and [Google Play](https://play.google.com/store/apps/details?id=tech.segunda.edudu). | marciofrayze |
1,879,597 | The correct way to use the BEM naming convention | The BEM (Block Element Modifier) naming convention defines the standard we should use to name our CSS classes; in this article you will learn how to use the BEM naming convention correctly | 0 | 2024-06-06T19:17:00 | https://codigoaoponto.com/blog/a-maneira-correta-de-utilizar-a-nomenclatura-bem | css, frontend, webdev, tutorial | ---
title: 'The correct way to use the BEM naming convention'
description: 'The BEM (Block Element Modifier) naming convention defines the standard we should use to name our CSS classes; in this article you will learn how to use the BEM naming convention correctly'
author: 'Thiago Nunes Batista'
---
Having development standards is essential for writing organized, well-structured code, especially when working with other programmers on a project: each programmer has their own particular way of coding, structuring code, and naming classes and variables, and without a standard the project's code tends to become very messy, since different people will be writing code in different ways.
The BEM (Block Element Modifier) naming convention, or methodology, defines the standard we should use to name our CSS classes, splitting the layout's classes into three parts: blocks, elements, and modifiers.
This way, if every programmer on a project knows and uses this standard, the CSS classes in the codebase will always follow the same format, making the code much more organized.
In this article you will learn how to use the BEM naming convention correctly and also get to know the main mistakes programmers make when using BEM, so you can avoid them.
```html
<div class="block">
<div class="block__element"></div>
<div class="block__element">
<div class="block__another-element"></div>
</div>
  <div class="block__element block__element--modifier"></div>
</div>
```
## What is a Block?
It is like the container around your code. A block can contain several elements, and its name should be very meaningful, since the block will be the "parent element" of the various "child elements" inside it.
📖 Syntax:
```html
<body>
<div class="block">...</div>
...
<div class="second-block">
...
<div class="third-block"></div>
...
</div>
...
</body>
```
✍🏻 Example:
```html
<body>
<div class="gallery">...</div>
<div class="card">...</div>
<div class="accordion"></div>
</body>
```
## What is an Element?
It is the inner part of the block: pieces of code that are tied to the block and whose meaning is bound to it.
To name an element, do it as follows: `block__element`, that is, use the block's class name, then append **2 underscore characters**, and finally add the element's class name.
📖 Syntax:
```html
<body>
<div class="block">
<div class="block__element"></div>
</div>
</body>
```
✍🏻 Example:
```html
<body>
<div class="media-gallery">
<div class="media-gallery__tabs">...</div>
<div class="media-gallery__posts">...</div>
<div class="media-gallery__pagination">...</div>
</div>
<div class="card">
<div class="card__header">...</div>
<div class="card__content">...</div>
<div class="card__footer">...</div>
</div>
</body>
```
## What is a Modifier?
Its name is self-explanatory: it is responsible for modifying the styles of blocks or elements, representing different styling variations of blocks and elements, such as: `active`, `disabled`, `open`, `error`, and `small`.
To name a block modifier, do it as follows: `block--modifier`, that is, use the block's class name followed by **2 hyphen characters**, and then add the modifier's name.
Creating an element modifier follows the same "recipe": `block__element--modifier`; basically, the element's class is used, plus **2 hyphen characters**, with the modifier's name at the end of the class.
📖 Syntax:
```html
<body>
<div class="block--modifier">
<div class="block__element--other-modifier"></div>
</div>
</body>
```
✍🏻 Example:
```html
<body>
<div class="alert alert--error">...</div>
<div class="media-gallery">
<div class="media-gallery__tabs--disabled">...</div>
<div class="media-gallery__posts--empty">...</div>
</div>
<div class="card card--large">
<div class="card__header">...</div>
<div class="card__content card__content--open">...</div>
<div class="card__footer">...</div>
</div>
</body>
```
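When building class strings dynamically, for example inside a component, a small helper can keep the `block__element--modifier` convention consistent instead of hand-typing it. A minimal sketch (the helper and its name are my own, not part of any BEM specification):

```js
// Builds a BEM class string: the base class plus one extra
// class per modifier, following block__element--modifier.
function bem(block, element, modifiers = []) {
  // block__element when an element is given, otherwise just the block
  const base = element ? `${block}__${element}` : block;
  // every modifier produces an extra base--modifier class
  const mods = modifiers.map((mod) => `${base}--${mod}`);
  return [base, ...mods].join(" ");
}

console.log(bem("card"));                      // "card"
console.log(bem("card", "content", ["open"])); // "card__content card__content--open"
console.log(bem("alert", null, ["error"]));    // "alert alert--error"
```

Note that the last call reproduces the `alert alert--error` pattern shown in the examples above: the base block class is always kept alongside its modifier class.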
## The main mistakes when using the BEM naming convention
Now that you have learned the principles of the BEM naming convention, the best way to absorb all this content is to practice, since "practice makes perfect". However, during this practice many questions may come up, and mistakes in using BEM can happen.
Below are the main mistakes that I made myself and that I also saw other programmers make when using BEM; I hope this helps you on your learning journey. 🙋🏻♂️
### Can I have elements inside elements?
No, elements must reference a block, not other elements.
❌ Incorrect
```html
<div class="block">
<div class="block__element">
<div class="block__element__other-element">...</div>
</div>
</div>
```
---
✅ Correct
```html
<body>
<div class="block">
<div class="block__element">
<div class="block__other-element">...</div>
</div>
</div>
</body>
```
### Can I have blocks inside blocks?
Yes. This usually makes sense when an element has become too large or complex; in that case, you can turn that element into a block, which does not need to have any relationship with its "parent block".
✅ Correct
```html
<body>
<div class="block">
...
<div class="second-block">
<div class="second-block__element">..</div>
<div class="second-block__second-element">...</div>
<div class="second-block__third-element">...</div>
</div>
...
</div>
</body>
```
### Can I also apply class modifiers to blocks?
Yes, you can use modifiers on your block classes; however, the base block class must also be present on the HTML tag.
✅ Correct
```html
<div class="block block--modifier">...</div>
```
---
❌ Incorrect
```html
<div class="block--modifier">...</div>
```
#### Conclusion
The BEM naming convention is one of the most widely used methodologies for standardizing CSS classes. I hope I was able to teach you the correct way to use it and to convey the importance of adopting it.
| thiagonunesbatista |
1,879,537 | Setup and Use Sitecore CMP Connector. Part 1 | That sure is a nice Sitecore Content Hub you've got there. It is a good bet that you would like to... | 0 | 2024-06-06T19:16:22 | https://dev.to/mrpipedream/setup-and-use-sitecore-cmp-connector-1f7f | sitecore, cmp, xmcloud | That sure is a nice Sitecore Content Hub you've got there. It is a good bet that you would like to use some of that content on XM Cloud. So let's do that.
This might get a bit long. So, briefly, here is what I am going to be describing how to do:
- Setup an Action and Trigger inside of Content Hub
- Setup the CMP connector inside of your XM Cloud application
- Setup data mappings for the content hub items.
- List some issues I had along the way and how they were overcome.
Assumptions:
- You have access to create Triggers and Actions in Content Hub.
- The content item you need to sync has already been created. I will show off what the sample one looks like, but I will not go into its creation.
- You have access to add variables to your Sitecore XM Cloud project.
- You have already set up an OAuth connection string in Content Hub that is used to connect to the DAM. We will reuse this for the CMP connection, but they can be different if you have different security requirements.
## Content Hub Actions and Triggers
---
### Create an action
Actions are the way we communicate between the two systems and triggers are ways to cause the action to fire.
The action we are going to want to set up is a [service bus action](https://doc.sitecore.com/xp/en/developers/connect-for-ch/50/connect-for-content-hub/create-a-sitecore-content-hub-action.html). There are two types supported.
- <u>Azure Service Bus</u> - An Azure Service Bus set up and maintained by you. Some of this setup can be useful for any other downstream application that might want to use the data being emitted by Content Hub.
- <u>M Azure Service Bus</u> - A service bus topic setup and maintained by Sitecore.
For this we are going to use the 'M Azure Service Bus', because I don't want to have to set up and maintain anything myself. When you add a new action, select the type "M Azure Service Bus" and choose "Topic" as the Destination Type.

Make sure to copy the values that are generated in "Hub In" and "Hub Out". You will be using these later to set up the connection in XM Cloud.

Now we have a way to send the data, we need a way to trigger the information exchange to happen.
---
### Create a trigger
We now need to tell Content Hub to actually send the message. In this instance we are publishing a simple news article which has already been defined as a content item. The news article we are syncing between systems has a title and a rich text body.

It is also worth noting right now that this is also where we can see all the related taxonomy for content items in the system.

[Setting up the trigger](https://doc.sitecore.com/xp/en/developers/connect-for-ch/50/connect-for-content-hub/create-a-trigger.html) is fairly simple. We define basic information about the trigger. Here I want to make sure I check all the objectives, since I might be mutating the object later, and set the execution type to "in background" so it does not stall any of the processing.

The conditions can become quite specific. In this example I want to sync the object when a user has fully published the news item. The content item of "M.Content" has the information about the publish state, and there is a separate publish flag that was added on the news item itself. I need to make sure both of these conditions are matched before I perform my action.

Now all we have to do is tie in the action we created earlier and we are all set to start sending out information.

## Setup the CMP connection on XM Cloud
In your Sitecore XM Cloud deploy portal you will need to set up a few values to get your CMP functionality working.
- **SITECORE_AppSettings_cmpEnabled_define** - Enables the CMP connector in the system. (yes or no)
- **Sitecore_ConnectionStrings_CMP_dot_ContentHub** - Connection string to Content Hub. You can reuse your DAM OAuth connection string here.
- **Sitecore_ConnectionStrings_CMP_dot_ServiceBusSubscription** - Name of the Azure Service Bus subscription. <u>More on this later</u>.
- **Sitecore_ConnectionStrings_CMP_dot_ServiceBusEntityPathOut** - HUB IN connection string from the service bus action that was set up.
- **Sitecore_ConnectionStrings_CMP_dot_ServiceBusEntityPathIn** - HUB OUT connection string from the service bus action that was set up.

This will cause the CMP items to start showing up inside of Sitecore.
Before:

After:

Now your application should be setup to receive information from Content Hub.
---
## Some problems we ran into
### The value for Sitecore_ConnectionStrings_CMP_dot_ServiceBusSubscription
It was not immediately apparent what the value of this was meant to be. If you are not familiar with [how a topic works on Azure service bus](https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-queues-topics-subscriptions#topics-and-subscriptions) it is basically a way to allow multiple "receivers" to handle the messages posted to a topic.
- It was not the name of the action in Content Hub.
- It was not derived from the action name.
After opening a ticket with Sitecore, we did find out ours was named '**<u>hub_out_subscription</u>**'. You can try this if you are having issues with your application receiving information; you might have to open a support request to get the name of yours, though.
If you are setting up your own service bus, this will not be a problem.
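To make the topic/subscription relationship concrete, here is a toy model in plain JavaScript (deliberately not the Azure SDK): every subscription on a topic receives its own copy of each message, which is why the connector must be told the exact subscription name to read from.

```js
// Toy model of a service bus topic. Each subscription receives its own
// copy of every message published to the topic; this mirrors the Azure
// behavior conceptually, but it is not the Azure SDK.
class Topic {
  constructor() {
    this.subscriptions = new Map();
  }
  createSubscription(name) {
    this.subscriptions.set(name, []);
  }
  publish(message) {
    // Fan out: every subscription gets its own copy of the message.
    for (const queue of this.subscriptions.values()) {
      queue.push(message);
    }
  }
  receive(subscriptionName) {
    // Each receiver drains only its own subscription's queue.
    return this.subscriptions.get(subscriptionName).shift();
  }
}

const topic = new Topic();
topic.createSubscription("hub_out_subscription"); // read by XM Cloud
topic.createSubscription("audit");                // hypothetical second consumer

topic.publish({ entity: "news-item", action: "published" });

// Both consumers see the same event independently.
console.log(topic.receive("hub_out_subscription"));
console.log(topic.receive("audit"));
```

This is also why reading from the wrong subscription name fails silently: the messages are sitting in a different queue that nobody is draining.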
### Authentication / SSO
Working to get this up and going, we also ran into an issue with authentication. Our Content Hub was originally set up to **ONLY** accept connections over SSO. This was causing the OAuth connection between Content Hub and XM Cloud to fail. We needed to go into Content Hub settings and set 'EnableBasicAuthentication' to true.

Setting up data mappings is coming in Part 2.
| mrpipedream |
1,879,596 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-06T19:14:49 | https://dev.to/jeremyhunt96529/buy-verified-cash-app-account-52g1 | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n" | jeremyhunt96529 |
1,879,595 | Configure Eslint, Prettier and show eslint warning into running console vite react typescript project | Creating a React application with Vite has become the preferred way over using create-react-app due... | 0 | 2024-06-06T19:14:23 | https://dev.to/khalid7487/configure-eslint-prettier-and-show-eslint-warning-into-running-console-vite-react-typescript-project-pk5 | eslint, prettier, gorupimport, showeslintwarningintoconsole | Creating a React application with Vite has become the preferred way over using create-react-app due to its faster and simpler development environment. Although Vite projects offer ESLint support by default, the default setup is very minimal and not very helpful on its own.
In this guide I will walk through configuring ESLint with Prettier, grouping imports, and showing ESLint warnings in the console for a React + TypeScript project created with Vite.
Assuming you already have your React Vite project set up (using npm create vite@latest), the first step is to install the required dev dependencies:
## Setting up ESLint and Prettier with yarn

`yarn add -D eslint-import-resolver-typescript eslint-plugin-import eslint-plugin-prettier eslint-plugin-react-refresh prettier typescript-eslint`
## Showing ESLint and TypeScript errors and warnings in the console

To surface ESLint and TypeScript errors and warnings in the running console, we need to add one more library:
`yarn add vite-plugin-checker`
Now it's time to update the **tsconfig.json** file:
```
{
"compilerOptions": {
"target": "ES2020",
"useDefineForClassFields": true,
"lib": ["ES2020", "DOM", "DOM.Iterable"],
"module": "ESNext",
"skipLibCheck": true,
"baseUrl": "src",
"paths": {
"*": ["*"]
},
"moduleResolution": "bundler",
"allowImportingTsExtensions": true,
"resolveJsonModule": true,
"isolatedModules": true,
"noEmit": true,
"jsx": "react-jsx",
"strict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noFallthroughCasesInSwitch": true
},
"include": ["src", "vite.config.ts"],
"exclude": [
"node_modules",
"./node_modules",
"./node_modules/*",
"./node_modules/@types/node/index.d.ts"
],
"references": [{"path": "./tsconfig.node.json"}]
}
```
Note: Here I set `baseUrl` to `src`, and `paths` accepts whatever aliases I have set in `vite.config.ts`. For example, I have a `component` folder and an `@core` folder, and I want imports from each to be grouped together — this configuration makes that possible.
Enough talk, let's configure **vite.config.ts**:
```
import {defineConfig} from 'vite'
import react from '@vitejs/plugin-react'
import checker from 'vite-plugin-checker'
export default defineConfig({
plugins: [
react(),
checker({
typescript: true,
eslint: {
lintCommand: 'eslint "./src/**/*.{ts,tsx}"',
},
}),
],
resolve: {
alias: {
images: '/src/images',
component: '/src/component',
'@core': '/src/@core',
},
},
})
```
Now we need to update the **.eslintrc.cjs** file:
```
module.exports = {
root: true,
env: {browser: true, es2020: true},
extends: [
'eslint:recommended',
'plugin:@typescript-eslint/recommended',
'plugin:react-hooks/recommended',
],
parserOptions: {
project: './tsconfig.json',
tsconfigRootDir: __dirname,
},
ignorePatterns: ['dist', '.eslintrc.cjs'],
parser: '@typescript-eslint/parser',
plugins: ['react-refresh', 'import', 'prettier', 'react', '@typescript-eslint'],
rules: {
'react-refresh/only-export-components': ['warn', {allowConstantExport: true}],
'no-console': 'warn',
'arrow-body-style': ['warn', 'as-needed'],
'no-empty-function': 'error',
quotes: ['warn', 'single', {avoidEscape: true}],
'prefer-const': 'off',
'no-dupe-keys': 'warn',
'react/react-in-jsx-scope': ['off'],
'no-duplicate-imports': ['warn'],
'@typescript-eslint/no-unused-vars': ['warn'],
'@typescript-eslint/no-explicit-any': ['error'],
'valid-typeof': ['error', {requireStringLiterals: true}],
'prettier/prettier': ['warn'],
'import/order': [
'warn',
{
groups: ['builtin', 'external', 'internal', ['parent', 'sibling', 'index']],
'newlines-between': 'always',
pathGroups: [
{
pattern: '@core/**',
group: 'internal',
position: 'after',
},
],
pathGroupsExcludedImportTypes: ['internal'],
},
],
'import/no-named-as-default-member': ['off'],
'import/no-anonymous-default-export': [
'error',
{
allowArray: false,
allowArrowFunction: false,
allowAnonymousClass: false,
allowAnonymousFunction: false,
allowCallExpression: true,
allowNew: false,
allowLiteral: false,
allowObject: false,
},
],
},
settings: {
react: {
version: 'detect',
},
'import/resolver': {
typescript: {
// @alwaysTryTypes always try to resolve types under `<root>@types`
// directory even it doesn't contain any source code, like `@types/unist`
alwaysTryTypes: true,
project: './tsconfig.json',
extensions: ['.ts', '.tsx'],
},
},
},
}
```
We need to update the **.prettierrc.cjs** file as well:
```
module.exports = {
semi: false,
singleQuote: true,
jsxSingleQuote: true,
bracketSpacing: false,
bracketSameLine: false,
arrowParens: 'avoid',
useTabs: false,
tabWidth: 2,
printWidth: 100,
trailingComma: 'all',
}
```
## Import grouping

The `import/order` ESLint rule below is responsible for grouping imports:
```
'import/order': [
  'warn',
  {
    groups: ['builtin', 'external', 'internal', ['parent', 'sibling', 'index']],
    'newlines-between': 'always',
    pathGroups: [
      {
        pattern: '@core/**',
        group: 'internal',
        position: 'after',
      },
    ],
    pathGroupsExcludedImportTypes: ['internal'],
  },
],
```
Here we specify which groups we want and in what order. If your imports are not grouped correctly, ESLint will show a warning in your console.
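With the `pathGroups` entry above and the aliases defined in `vite.config.ts`, a correctly ordered import block looks roughly like this (the module names are only illustrative):

```
import path from 'path' // builtin

import React from 'react' // external

import Button from 'component/Button' // internal (resolved via the src baseUrl)

import {useAuth} from '@core/hooks' // internal, placed after via pathGroups

import Sibling from './Sibling' // parent / sibling / index
```

ESLint will warn when imports are out of this order or when the blank line between groups is missing.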
The full reference is available on GitHub: **https://github.com/khalid7487/study/tree/master/React/vites-config-eslint**
| khalid7487 |
1,879,594 | From Cat-Eyes to Aviators: A Nostalgic Journey Through Timeless Eyewear Trends | In thе vast rеalm of fashion, cеrtain accessories stand thе tеst of timе, transcending fleeting... | 0 | 2024-06-06T19:13:01 | https://dev.to/daposieyewear/from-cat-eyes-to-aviators-a-nostalgic-journey-through-timeless-eyewear-trends-5d6a | In thе vast rеalm of fashion, cеrtain accessories stand thе tеst of timе, transcending fleeting trends and leaving an indelible mark on stylе history. Among thеsе, eyewear has emerged as a transformative еlеmеnt that not only еnhancеs onе’s vision but also adds a touch of sophistication and flair to pеrsonal stylе. Ovеr thе yеars, various [eyewear trends](https://www.daposieyewear.com/) have risen and fallen, but some have endured, bеcoming iconic symbols of bygonе еras and contеmporary chic. In this nostalgic journеy, we explore the evolution of eyewear, from thе sultry allurе of cat-еyе glassеs to thе advеnturous spirit еmbodiеd by aviators.
The 1950s ushered in a revolution in eyewear fashion with thе introduction of cat-еyе glassеs. Exemplifying feminine mystique, thеsе frames with upswept corners captured thе еssеncе of post-war glamor. The еlongatеd shapе, reminiscent of a feline gaze, added a touch of intrigue to thе wearer’s countеnancе. Popularized by iconic figures likе Marilyn Monroе and Audrеy Hеpburn, cat-еyе glassеs bеcamе synonymous with timеlеss еlеgancе and a bold fashion statеmеnt. Evеn in thе 21st cеntury, designers continue to rеinvеnt this classic silhouеttе, demonstrating its enduring appeal across generations.
As thе swinging ’60s rollеd in, thе eyewear landscape underwent a radical shift. Round frames bеcаmе thе epitome of counterculture cool, еmbracеd by thе likеs of John Lеnnon and Janis Joplin. Thеsе glassеs, charactеrizеd by thеir circular lеnsеs, reflected the free-spiritеd ethos of thе еra. From wirе-rimmеd to bold acеtatе dеsigns, round frames became a symbol of rebellion and individualism, challеnging thе convеntional norms of fashion. Thе еnduring popularity of round glassеs pеrsists, transcеnding еras and finding a placе in thе closеts of modеrn-day trеndsеttеrs.
Thе ’70s brought about a lovе affair with ovеrsizеd framеs, taking eyewear into a realm of bold proportions. From largе squarе framеs to chunky aviators, this еra marked a departure from thе delicate dеsigns of thе past. Thе disco еra saw thе risе of ovеrsizеd sunglassеs as a glamorous accеssory, shielding thе еyеs of icons like Jackie O from thе paparazzi’s flashbulbs. This еra’s influence echoes in the current fashion scеnе, as ovеrsizеd framеs continuе to dominatе runways and street stylе alike, offеring a nod to thе bold aеsthеtics of thе ‘70s.
As thе ’80s dawnеd, eyewear embraced a futuristic vision with the advent of visor-likе sunglassеs and gеomеtric shapеs. The era of excess was epitomized by oversized, angular frames that exuded an air of opulence and extravagance. Think Madonna in her “Material Girl” phase, sporting rectangular frames that complеmеntеd thе boldness of powеr suits and shouldеr pads. Whilе thе ’80s fashion landscape was often characterized by extremes, thе geometric eyewear of thе era laid the foundation for thе experimental spirit that defines contemporary [eyewear trends](https://www.daposieyewear.com/).
Thе ’90s witnеssеd a rеturn to minimalism, with wirе-framе glassеs gaining popularity. Inspired by the grunge movement, oval and rectangular wire frames became a staplе for thosе sееking a laid-back, еffortlеssly cool aеsthеtic. Popularizеd by musicians likе Kurt Cobain, thеsе glassеs еmbodiеd a nonchalant attitudе that rеsonatеd with the youth culture of thе tіmе. The ’90s marked a departure from thе flamboyance of the previous decade, bringing forth a more understated approach to eyewear that continues to influеncе current trends.
As wе еntеrеd thе nеw millеnnium, [eyewear trends](https://www.daposieyewear.com/) became increasingly eclectic, drawing inspiration from various dеcadеs. Howеvеr, two distinct stylеs emerged as timeless favorites — wayfarers and aviators. Thе wayfarеr, with its squarе shapе and thick framеs, gainеd popularity as a unisеx and vеrsatilе option, favored by celebrities and fashion enthusiasts alike. Mеanwhilе, aviators, with their teardrop-shapеd lеnsеs and metal framеs, maintained their status as a symbol of adventure and rebellious stylе, having bееn an intеgral part of military and aviation history.
In thе prеsеnt day, [eyewear trends](https://www.daposieyewear.com/) reflect a fusion of the past and thе contеmporary. Cat-еyе glassеs, round framеs, ovеrsizеd sunglassеs, and geometric shapes continue to coexist, providing a divеrsе array of options for individuals to еxprеss thеir uniquе stylе. With advancеmеnts in matеrials and tеchnology, eyewear designers are pushing boundaries, creating innovative designs that blend nostalgia with modеrn aesthetics. Thе allurе of timеlеss [eyewear trends](https://www.daposieyewear.com/) lies in their ability to transcend thе limitations of time, allowing wеarеrs to connеct with thе rich history of fashion whilе making a statеmеnt that is еntirеly thеir own.
In conclusion, thе journey through timеlеss [eyewear trends](https://www.daposieyewear.com/) is a tеstamеnt to thе cyclical naturе of fashion. From thе alluring cat-еyе glassеs of thе ’50s to thе advеnturous spirit of aviators, еach decade has left an indelible mark on thе world of eyewear. As wе navigatе thе еvеr-changing landscapе of fashion, thеsе iconic stylеs serve as a reminder that certain elements are destined to endure, becoming not just accessories but cultural artifacts that wеavе thе fabric of our sartorial history. Whether you opt for the vintage charm of cat-еyеs or thе aviator’s bold allurе, eyewear rеmains a powerful medium through which personal stylе is cеlеbratеd and timeless trends arе pеrpеtuatеd.
| daposieyewear | |
1,879,592 | Must-Have Helper Functions for Every JavaScript Project | JavaScript is a versatile and powerful language, but like any programming language, it can benefit... | 0 | 2024-06-06T19:07:26 | https://dev.to/raksbisht/must-have-helper-functions-for-every-javascript-project-9bk | helperfunctions, utilityfunctions, codingtips, tutorial | JavaScript is a versatile and powerful language, but like any programming language, it can benefit greatly from the use of helper functions. Helper functions are small, reusable pieces of code that perform common tasks. By incorporating these into your projects, you can simplify your code, improve readability, and reduce the likelihood of errors. Below is an in-depth look at some essential helper functions that can be invaluable for any JavaScript project.
## 1\. Type Checking
Understanding the type of data you are working with is crucial for avoiding errors and ensuring the correct functioning of your code. JavaScript's `typeof` operator is useful, but it has limitations (e.g., it returns "object" for both arrays and null). The following functions provide more precise type checking:
### isArray
```
function isArray(value) {
return Array.isArray(value);
}
```
### isObject
```
function isObject(value) {
return value !== null && typeof value === 'object' && !Array.isArray(value);
}
```
### isFunction
```
function isFunction(value) {
return typeof value === 'function';
}
```
### isNull
```
function isNull(value) {
return value === null;
}
```
### isUndefined
```
function isUndefined(value) {
return typeof value === 'undefined';
}
```
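A quick sanity check of how these behave compared with plain `typeof` (the definitions are repeated here so the snippet runs on its own):

```
// Type-checking helpers from above, combined into one runnable snippet.
function isArray(value) {
  return Array.isArray(value);
}

function isObject(value) {
  return value !== null && typeof value === 'object' && !Array.isArray(value);
}

function isFunction(value) {
  return typeof value === 'function';
}

function isNull(value) {
  return value === null;
}

function isUndefined(value) {
  return typeof value === 'undefined';
}

// typeof alone reports "object" for arrays and null; these helpers distinguish them.
console.log(isArray([1, 2, 3]));   // true
console.log(isObject([1, 2, 3]));  // false – arrays are excluded
console.log(isObject(null));       // false – null is excluded
console.log(isNull(null));         // true
console.log(isFunction(() => {})); // true
console.log(isUndefined(void 0));  // true
```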
## 2\. Data Manipulation
Manipulating arrays and objects is a common task in JavaScript. Here are some helper functions to streamline these operations:
### Deep Clone
Creates a deep copy of an object or array, ensuring that nested structures are also duplicated. Note that this JSON-based approach drops functions and `undefined` values and turns `Date` objects into strings; for those cases, consider `structuredClone` or a dedicated library.
```
function deepClone(obj) {
return JSON.parse(JSON.stringify(obj));
}
```
### Merge Objects
Merges two objects, combining their properties. In case of a conflict, properties from the second object overwrite those from the first.
```
function mergeObjects(obj1, obj2) {
return {...obj1, ...obj2};
}
```
### Array Remove
Removes a specific item from an array.
```
function arrayRemove(arr, value) {
return arr.filter(item => item !== value);
}
```
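Here is a small, self-contained usage sketch of the three helpers above (definitions repeated so the snippet runs standalone):

```
function deepClone(obj) {
  return JSON.parse(JSON.stringify(obj));
}

function mergeObjects(obj1, obj2) {
  return {...obj1, ...obj2};
}

function arrayRemove(arr, value) {
  return arr.filter(item => item !== value);
}

const original = {user: {name: 'Ada'}, tags: ['js']};
const copy = deepClone(original);
copy.user.name = 'Grace';
console.log(original.user.name); // 'Ada' – nested data was duplicated, not shared

console.log(mergeObjects({a: 1, b: 1}, {b: 2})); // {a: 1, b: 2} – obj2 wins on conflict
console.log(arrayRemove([1, 2, 3, 2], 2));       // [1, 3] – removes every matching item
```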
## 3\. String Manipulation
String operations are another common requirement. Here are some useful string manipulation helpers:
### Capitalize
Capitalizes the first letter of a string.
```
function capitalize(str) {
return str.charAt(0).toUpperCase() + str.slice(1);
}
```
### Camel Case
Converts a string to camel case.
```
function toCamelCase(str) {
return str.replace(/[-_](.)/g, (_, char) => char.toUpperCase());
}
```
### Kebab Case
Converts a string to kebab case.
```
function toKebabCase(str) {
return str.replace(/[A-Z]/g, char => '-' + char.toLowerCase()).replace(/^-/, '');
}
```
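The string helpers in action (definitions repeated so the snippet runs on its own):

```
function capitalize(str) {
  return str.charAt(0).toUpperCase() + str.slice(1);
}

function toCamelCase(str) {
  return str.replace(/[-_](.)/g, (_, char) => char.toUpperCase());
}

function toKebabCase(str) {
  return str.replace(/[A-Z]/g, char => '-' + char.toLowerCase()).replace(/^-/, '');
}

console.log(capitalize('hello world'));  // 'Hello world'
console.log(toCamelCase('my-var_name')); // 'myVarName'
console.log(toKebabCase('myVarName'));   // 'my-var-name'
```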
## 4\. Asynchronous Helpers
Working with asynchronous code is a fundamental part of modern JavaScript. These helpers can simplify dealing with asynchronous operations:
### Sleep
Pauses execution for a specified amount of time.
```
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
```
### Retry
Retries an asynchronous function a specified number of times before failing.
```
async function retry(fn, retries = 3) {
for (let i = 0; i < retries; i++) {
try {
return await fn();
} catch (err) {
if (i === retries - 1) throw err;
}
}
}
```
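To see `sleep` and `retry` working together, here is a runnable sketch using a hypothetical `flakyFetch` that fails twice before succeeding (definitions repeated so the snippet is self-contained):

```
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function retry(fn, retries = 3) {
  for (let i = 0; i < retries; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === retries - 1) throw err;
    }
  }
}

// A flaky operation that fails twice before succeeding.
let attempts = 0;
async function flakyFetch() {
  attempts++;
  if (attempts < 3) throw new Error('temporary failure');
  return 'payload';
}

(async () => {
  await sleep(10);                              // brief pause before starting
  const result = await retry(flakyFetch, 5);
  console.log(result, attempts);                // logs 'payload' after 3 attempts
})();
```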
## 5\. Utility Functions
General-purpose utilities that can be handy in various situations:
### Debounce
Prevents a function from being called too frequently.
```
function debounce(fn, delay) {
let timeout;
return function(...args) {
clearTimeout(timeout);
timeout = setTimeout(() => fn.apply(this, args), delay);
};
}
```
### Throttle
Ensures a function is not called more often than a specified rate.
```
function throttle(fn, limit) {
let inThrottle;
return function(...args) {
if (!inThrottle) {
fn.apply(this, args);
inThrottle = true;
setTimeout(() => inThrottle = false, limit);
}
};
}
```
### Unique ID
Generates a unique identifier.
```
function uniqueId(prefix = '') {
return prefix + Math.random().toString(36).slice(2, 11);
}
```
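`uniqueId` in action. Note these ids are random, not cryptographically unique — fine for list keys or DOM ids, not for security tokens:

```
function uniqueId(prefix = '') {
  return prefix + Math.random().toString(36).slice(2, 11);
}

const idA = uniqueId('user_');
const idB = uniqueId('user_');
console.log(idA.startsWith('user_')); // true
console.log(idA === idB);             // false – each call yields a fresh id
```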
## Conclusion
Incorporating these helper functions into your JavaScript projects can greatly enhance your productivity and code quality. They provide reusable, efficient solutions to common programming tasks, allowing you to focus on the unique aspects of your application. Whether you are dealing with type checking, data manipulation, string operations, asynchronous code, or general utilities, these functions will help streamline your development process and make your code more robust and maintainable. | raksbisht |
1,879,590 | Angular vs React js Prop Passing | Prop Passing in React Parent Component: import React from 'react'; import... | 0 | 2024-06-06T19:00:53 | https://dev.to/syedmuhammadaliraza/angular-vs-react-js-prop-passing-1jaj | angular, react, javascript, coding | ## Prop Passing in React
1. **Parent Component:**
```jsx
import React from 'react';
import ChildComponent from './ChildComponent';
class ParentComponent extends React.Component {
render() {
const message = "Hello from Parent!";
return (
<div>
<h1>Parent Component</h1>
<ChildComponent message={message} />
</div>
);
}
}
export default ParentComponent;
```
2. **Child Component:**
```jsx
import React from 'react';
class ChildComponent extends React.Component {
render() {
return (
<div>
<h2>Child Component</h2>
<p>{this.props.message}</p>
</div>
);
}
}
export default ChildComponent;
```
### Advantages of Prop Passing in React
- **Simplicity:** Prop passing in React is straightforward and easy to understand.
- **Unidirectional Data Flow:** This makes it easier to track data changes and debug issues.
### Challenges of Prop Passing in React
**Prop Drilling:** In larger applications, passing props through many layers of components can become cumbersome.
## Angular: Prop Passing with `@Input`
### Prop Passing in Angular
1. **Parent Component (parent.component.ts):**
```typescript
import { Component } from '@angular/core';
@Component({
selector: 'app-parent',
template: `
<h1>Parent Component</h1>
<app-child [message]="message"></app-child>
`
})
export class ParentComponent {
message: string = "Hello from Parent!";
}
```
2. **Child Component (child.component.ts):**
```typescript
import { Component, Input } from '@angular/core';
@Component({
selector: 'app-child',
template: `
<h2>Child Component</h2>
<p>{{ message }}</p>
`
})
export class ChildComponent {
  @Input() message!: string;
}
```
In Angular, the parent component binds the `message` property to the `app-child` component using Angular's property binding syntax `[message]="message"`. The child component uses the `@Input` decorator to declare that it expects a `message` input.
### Advantages of Prop Passing in Angular
- **Two-Way Data Binding:** Angular allows two-way data binding, making it easier to keep the UI and data model in sync.
### Challenges of Prop Passing in Angular
**Performance:** Two-way data binding can introduce performance overhead in larger applications.
### Conclusion
Both React and Angular offer powerful methods for prop passing, each with its unique advantages and challenges. React's simplicity and unidirectional data flow make it a great choice for smaller to medium-sized applications, while Angular's rich feature set and dependency injection system provide a robust solution for larger, more complex applications.
If you have any query Than message me on Linkedin [Syed Muhammad Ali Raza](https://www.linkedin.com/in/syed-muhammad-ali-raza/)
| syedmuhammadaliraza |
1,879,586 | A Deep Dive into Next.js 14: Solving Common, Intermediate, and Advanced Issues 🚀 | Next.js 14 is a powerful framework that brings numerous enhancements and features. However, like any... | 0 | 2024-06-06T18:48:05 | https://dev.to/mohith/a-deep-dive-into-nextjs-14-solving-common-intermediate-and-advanced-issues-hja | nextjs, javascript, node, programming | Next.js 14 is a powerful framework that brings numerous enhancements and features. However, like any tool, it comes with its own set of challenges. Let's explore some common, intermediate, and advanced problems you might encounter, along with practical solutions. 🌟
**Common Problems 🛠️**
**1. Build Failures Due to Webpack Errors 🔧**
When running `next build`, you might face errors such as `Cannot read property 'asString' of undefined`. This often occurs if Webpack 5 isn't enabled.
Solution:
Enable Webpack 5 in your next.config.js:
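The original post showed this config as a screenshot. As a sketch: in older Next.js releases (v10) Webpack 5 was opted into via a `future` flag, while from Next.js 11–12 onward Webpack 5 is the default and no flag is needed:

```javascript
// next.config.js – historical opt-in shape used by older Next.js releases;
// on Next.js 12+ Webpack 5 is already the default, so this is usually unnecessary.
module.exports = {
  future: {
    webpack5: true,
  },
};
```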

**2. API and Slug-Related Errors 🌐**
Errors like `getStaticPaths is required for dynamic SSG pages and is missing for '/page/[slug]'` are common when using dynamic routes without properly defining `getStaticPaths`.
Solution:
Define getStaticPaths in your page component:
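The original post showed this as a screenshot; here is a minimal sketch of what such a page file might contain (the slug values are placeholders, and in a real `pages/page/[slug].js` file the function would be `export`ed):

```javascript
// Sketch of pages/page/[slug].js – slugs below are placeholder values
async function getStaticPaths() {
  return {
    // Pre-render these slugs at build time
    paths: [
      { params: { slug: 'first-post' } },
      { params: { slug: 'second-post' } },
    ],
    // Any other slug renders a 404
    fallback: false,
  };
}
```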

**3. CORS Errors 🚫**
When handling API requests, CORS issues can arise, preventing your application from making cross-origin requests.
Solution:
Add CORS headers to your API responses:
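The original screenshot is missing here; a minimal sketch of an API route handler that sets CORS headers (the wildcard origin is a placeholder — restrict it in production):

```javascript
// Sketch of a pages/api route handler adding CORS headers
function handler(req, res) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Methods', 'GET,POST,OPTIONS');
  res.setHeader('Access-Control-Allow-Headers', 'Content-Type');

  // Answer preflight requests immediately
  if (req.method === 'OPTIONS') {
    res.status(204).end();
    return;
  }
  res.status(200).json({ ok: true });
}
```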

**Intermediate Problems 🔄**
**1. Hot Module Replacement (HMR) Issues 🔥**
HMR might fail to preserve the component state, leading to full reloads.
Solution:
Ensure all components are function components with hooks, and avoid anonymous components:

**2. Updating next.config.js ⚙️**
Changes to next.config.js are not propagated automatically.
Solution:
Restart the server after making changes:

**3. Handling Environment Variables 🌍**
Updates to .env files are not loaded dynamically.
Solution:
Restart your server to apply new environment variable values:

**Advanced Problems 🚀**
**1. Using Server and Client Components Together 🌐**
Mixing Server and Client components can be tricky, especially with context providers.
Solution:
Separate Client components and use them correctly within Server components:

**2. Optimizing Data Fetching 📡**
Efficient data fetching is crucial for performance. Misusing Server and Client components for data fetching can lead to inefficiencies.
Solution:
Fetch data directly within Server components when possible:

**3. Complex State Management 🧠**
Managing state across Server and Client components can be challenging.
Solution:
Use React Query or SWR for data fetching and caching:

**Terminal Errors and Solutions 🖥️**
**1. Incorrect Project Setup 🔨**
If Next.js isn't working, it could be due to an incorrect project setup.
Solution:
Ensure your package.json includes the necessary scripts and dependencies:

Verify your directory structure includes a pages directory for routing.
**2. Compatibility Issues with Node.js Version ⚙️**
Next.js requires a specific Node.js version to function correctly.
Solution:
Check your Node.js version and ensure it's compatible:

**3. Missing Environment Variables 🛠️**
Next.js relies on configuration files and environment variables.
Solution:
Double-check your .env and next.config.js files for the correct settings:

**4. Issues with Plugins or Dependencies 🔌**
Third-party plugins or dependencies can cause conflicts.
Solution:
Review and update your plugins and dependencies to ensure compatibility with Next.js 14.
**Conclusion 🏁**
Next.js 14 brings powerful features and improvements, but understanding and overcoming common, intermediate, and advanced issues is key to leveraging its full potential. By addressing these challenges with practical solutions, you can build robust and efficient applications with Next.js 14.
Happy coding! 👨‍💻👩‍💻
| mohith |
1,879,585 | How to use a custom font on Excalidraw.com | Introduction Excalidraw is a great whiteboarding tool, useful for technical diagrams and... | 0 | 2024-06-06T18:44:52 | https://dev.to/dawidcodes/how-to-use-a-custom-font-on-excalidrawcom-4jl4 | webdev, javascript, productivity, css | ### Introduction
Excalidraw is a great whiteboarding tool, useful for technical diagrams and wireframes.
### Problem
The default handwritten font looks quite unprofessional and can be difficult to read.
Additionally, there is no official way to add custom fonts.
### Solution
The solution is to run a custom script on every page load to override the default fonts.
#### 1. Download the "scripty" chrome extension
You can download the extension [here](https://chromewebstore.google.com/detail/scripty-javascript-inject/milkbiaeapddfnpenedfgbfdacpbcbam?hl=en)
This will allow you to inject custom js on specific web pages.
#### 2. Configure automatic script injection
Once you have scripty installed, create a new script and configure it to run automatically when the URL contains https://app.excalidraw.com/

#### 3. Add custom font script
Write a script to override one of the Excalidraw default fonts with your chosen font
```js
(() => {
console.log("Custom font script started");
// Add the font-face definition
const style = document.createElement("style");
style.textContent = `
@font-face {
font-family: 'Helvetica';
font-display: swap;
src: url(https://fonts.gstatic.com/s/shantellsans/v9/FeVhS0pCoLIo-lcdY7kjvNoQg2xkycTqsuA6bi9pTt8YiT-NXidjb_ee-maigL6R8nKVh8BbE1mv4wwMMm1lebY.woff2) format('woff2');
}
`;
document.head.appendChild(style);
console.log("Custom font added");
})();
```
At the time of writing, the default fonts are:

- Handdrawn = 'Virgil'
- Normal = 'Helvetica'
- Code = 'source-code-pro'
#### 4. Bonus tip: using google fonts
Find your font in Google Fonts, click on embed code, and under the web tab select the @import option.

The problem here is that Google Fonts' default import URL will set the font name to the real name of the font.
We want to manually alter the name of the font so that it overrides the fonts in Excalidraw.
To achieve this, you can navigate to the font url in the browser eg visit [https://fonts.googleapis.com/css2?family=Shantell+Sans:ital,wght@0,300..800;1,300..800&display=swap](https://fonts.googleapis.com/css2?family=Shantell+Sans:ital,wght@0,300..800;1,300..800&display=swap)
From here you can copy the entire CSS, paste it into VS Code, and use find/replace to change all the font names.
For example, to replace the default handdrawn font (Virgil) with a custom handdrawn font (Shantell Sans), you can use the link above, copy all the CSS and change the name from 'Shantell Sans' to 'Virgil' to trick Excalidraw into replacing the fonts.
Now add this code into the script provided above and you're ready to go.
Here is the script for a fully working example: https://pastebin.com/raw/1LULgXuX
### Conclusion
This article provides you with everything you need to implement a custom font in Excalidraw.
I hope that someone finds this useful
| dawidcodes |
1,879,584 | Unleash the Power of System Design: Essential for Every Software Engineer! 💻🚀 | In the fast-paced world of software engineering, one skill reigns supreme: system design. From... | 0 | 2024-06-06T18:44:34 | https://dev.to/raksbisht/unleash-the-power-of-system-design-essential-for-every-software-engineer-52a | systemdesign, softwareengineering, scalability, tutorial | In the fast-paced world of software engineering, one skill reigns supreme: system design. From turbocharging scalability to fine-tuning performance, mastering system design isn’t just beneficial — it’s absolutely crucial! Let’s explore why system design is a game-changer and how it can supercharge your career as a software engineer.
## Why System Design Rocks:
### 1\. Scalability 📈:
As your app blossoms and users flock in, scalability becomes your best friend. System design arms you with the tools to architect systems that grow seamlessly with user demand. From load balancing to distributed systems, you’ll learn how to keep your app running smoothly no matter the traffic!
### 2\. Performance Optimization ⚡:
In today’s fast-paced digital world, speed is king! System design teaches you the art of optimizing performance, ensuring your app delivers a lightning-fast user experience. Dive into resource management, latency reduction, and clever caching strategies to keep your app blazing ahead of the competition!
### 3\. Reliability and Resilience 🛡️:
When the going gets tough, a robust system design keeps your app standing tall. Learn how to build resilient systems that shrug off failures and unexpected hiccups. With redundancy, fault tolerance, and disaster recovery in your arsenal, downtime becomes a thing of the past!
### 4\. Maintainability and Extensibility 🛠️:
Well-designed systems are a joy to maintain and extend. System design principles like modularity and abstraction empower you to craft codebases that are clean, organized, and ready for whatever the future holds. Say goodbye to spaghetti code and hello to smooth sailing!
## How System Design Elevates Your Career:
### 1\. Career Blastoff 🚀:
Mastering system design opens doors to a galaxy of career opportunities. Whether you’re eyeing tech startups or big-league enterprises, strong system design skills make you a sought-after star in the tech universe. Level up your career and watch your earning potential soar!
### 2\. Problem-Solving Superpowers 💡:
System design challenges you to flex your mental muscles and tackle complex problems head-on. By grappling with scalability, performance, and reliability issues, you’ll sharpen your problem-solving skills and emerge as a true tech superhero. Innovation awaits!
### 3\. Collaboration Magic ✨:
Effective system design is a team sport. Collaborate with stakeholders, product wizards, and fellow engineers to craft solutions that dazzle and delight. Strengthen your communication skills, build stronger teams, and watch your projects take flight!
In a nutshell, system design isn’t just a skill — it’s your secret weapon for success in the tech galaxy! Whether you’re a seasoned space explorer or a bright-eyed newcomer, investing in system design is a journey worth taking. So buckle up, blast off, and let’s explore the infinite possibilities of system design together! | raksbisht |
1,879,583 | Desfazer o último commit no Git | Para desfazer o último commit no Git, você pode usar um dos seguintes comandos, dependendo da... | 0 | 2024-06-06T18:42:01 | https://dev.to/lucasvalhos/desfazer-o-ultimo-commit-no-git-38e0 | Para desfazer o último commit no Git, você pode usar um dos seguintes comandos, dependendo da situação:
1 - **Undo the last commit, keeping the changes in your working directory**:
```bash
git reset --soft HEAD~1
```
This command undoes the last commit but keeps the changes in your working directory. In other words, the contents of the undone commit are still available to be committed again.
2 - **Undo the last commit, discarding the changes**:
```bash
git reset --hard HEAD~1
```
This command undoes the last commit and also discards all the changes made in that commit. The changes will no longer be in your working directory.
3 - **Undo the last commit without touching your working directory or the staging area**:
```bash
git reset --mixed HEAD~1
```
This command undoes the last commit and keeps the changes in your working directory, but removes them from the staging area.
4 - **Undo the last commit and create a new, corrected commit**:
If you want to undo the last commit, make some changes, and then create a new commit, you can do it in two steps:
- First, undo the commit, keeping the changes in your working directory:
```bash
git reset --soft HEAD~1
```
- Make the necessary changes and then create a new commit:
```bash
git add .
git commit -m "New commit message"
```
5 - **Revert a specific commit**:
If you have already pushed the commit to a remote repository, or you want to undo a specific commit, you can use the `revert` command:
```bash
git revert <commit_hash>
```
This creates a new commit that undoes the changes from the specified commit, without rewriting the commit history.
### Practical Example
To undo the last commit while keeping the changes in your working directory:
```bash
git reset --soft HEAD~1
```
To undo the last commit and discard it completely:
```bash
git reset --hard HEAD~1
```
Choose the method that best fits your needs, keeping in mind that using `--hard` is destructive and cannot easily be undone. | lucasvalhos |
1,879,580 | How Much Does a Website Cost In Ireland? 2024 Price Guide | How Much Does a Website Cost In Ireland? 2024 Price Guide In a world where businesses live or die by... | 0 | 2024-06-06T18:38:59 | https://dev.to/affordablewebsites/how-much-does-a-website-cost-in-ireland-2024-price-guide-4m0p | webdev, development | How Much Does a Website Cost In Ireland? 2024 Price Guide
In a world where businesses live or die by their online presence, having a well-designed website is non-negotiable. As an Irish [web design agency](https://www.affordablewebsites.ie/) with over 18 years of experience, we understand the vital role a website plays in a company’s success. However, we also recognise that budget constraints can make the process daunting. That’s why we’re here to demystify website design pricing in Ireland and help you get maximum value without breaking the bank.
Understanding Website Costs for Irish Businesses
When it comes to website design prices in Ireland, there’s no one-size-fits-all solution. The cost varies based on several factors, including the website type, features, and the service provider you choose. To give you a rough idea:
Small Business Website (Up to 12 Pages)
For a small business website with up to 12 pages, you can expect to pay between €395 to €800 in Ireland. This typically includes a brochure-style site with essential features like contact forms, a blog, and pages showcasing your products/services. Our comprehensive [small business website package](https://www.affordablewebsites.ie/small-business-website-package/) costs €495 + VAT.
eCommerce Website
If you’re launching an online store to sell products or services, the costs range from €595 to €2,950 for an [eCommerce website](https://www.affordablewebsites.ie/web-design-services/ecommerce-website-design-dublin/). This includes features like a secure shopping cart, payment gateways, product catalogues, and seamless shopping experiences across devices.
Custom Website Solutions
For businesses with unique requirements or complex functionalities, we offer fully customised website solutions tailored to your specific needs. The cost for these projects can vary significantly based on the scope of work involved.
How Much Does a Website Cost In Ireland - eCommerce web design price for businesses in Ireland
Understanding Website Types and Their Costs
Before diving into pricing, it’s crucial to understand the different types of websites and their purposes. This will help you determine the right solution for your business needs and budget. Here’s a deeper dive into common website types and their unique benefits:
Business Service Provider Portfolio Site
Purpose: To inform prospective customers about your business and entice them to work with you.
Key Features: Homepage, Services Page, About Page, FAQ, Testimonials, Contact, Booking/Scheduling Tool integration.
Benefits: Establishes credibility and showcases expertise, making it easier for potential clients to learn about your services and get in touch.
Blog Website
Purpose: To educate or inform your audience and position yourself as a trusted authority.
Key Features: Article pages, Article catalogues, Categorisation/tag features, Search function, Bio/About Page.
Benefits: Drives organic traffic through valuable content, helps build a loyal audience, and enhances your site’s SEO.
Event Website
Purpose: To plan events and consolidate helpful information for attendees.
Key Features: Signup functionality, Booking/Scheduling tool, Secure checkout, Event info page.
Benefits: Streamlines event management, provides all necessary information in one place, and facilitates easy attendee registration and payment.
Landing Page
Purpose: To drive customers to a single, specific action, usually as part of a marketing campaign.
Key Features: Booking/Scheduling tool, Sign-up function, Call-to-action buttons.
Benefits: Highly targeted, designed for conversions, and ideal for capturing leads or promoting specific offers.
eCommerce Website
Purpose: To sell and promote products online.
Key Features: Product Gallery, Homepage, About Page, Secure checkout, Category pages, Custom menu bar, Contact Page, User dashboard, Live chat.
Benefits: Opens a 24/7 sales channel, reaches a global audience, and provides detailed insights into customer behaviour and sales trends.
How much does SEO cost in Ireland - SEO and [digital marketing](https://www.affordablewebsites.ie/seo-dublin/) strategy written on a whiteboard
Factors That Influence Website Design Pricing
The cost of a website boils down to the amount of work involved. Specifically, the size and complexity of your site are the two main factors determining the price. An attractive 5-page brochure website with basic customisation will cost significantly less than a highly customised 20-page site with all the bells and whistles.
Here are some key factors that impact the cost of a website:
Brand Positioning: Before starting the design process, get crystal clear on your positioning and how you will target your ideal audience.
Branding: Do you have an established brand visual identity, including a logo, colour palette, and fonts?
Copywriting: Have you defined your brand tone of voice? Will you write the copy yourself or hire a professional?
Number of Pages: Most websites have core pages like Homepage, About, Services, and Contact. Additional pages like FAQs, Testimonials, and Blogs will increase the cost.
Functionalities and Integrations: Features like eCommerce, booking engines, scheduling tools, live chat, or custom plugins add complexity and cost.
Visual Media: Custom illustrations, branded graphics, product photography, headshots, or stock images impact the budget.
Lead Generation: Using a lead magnet to build an email list requires additional pages like a thank you page and email sequences.
Online Shop: For eCommerce businesses, the number of products, images, and variables like sizes/colours affect the cost.
Content Migration: Transferring blogs from a previous site can be time-consuming and increase the cost.
URL Mapping: If moving from another site, replicating old URLs or setting up redirects is necessary.
SEO: Keyword research and optimisation for search engines should be considered from the start.
Coding: Adding tracking codes like Google Analytics or Facebook Pixel may require additional support.
Language Versions: Translating the site into multiple languages expands the scope of work.
While some of these factors may not apply to every business, it’s crucial to identify your website’s specific needs before beginning the design or development process. Each element can significantly impact the overall cost.
Web Design Agency Ireland - Affordable Websites Team working our Dublin office
Maximising Value with Affordable Website Design
At Affordable Websites, we believe that quality website design shouldn’t come at a premium. Our commitment is to provide unparalleled value and build lasting relationships with our clients. Here’s how we help you maximise your investment:
Transparent Pricing and No Hidden Costs
We understand that budgeting is crucial for businesses, which is why we offer transparent pricing with no hidden costs. Our website design packages start at just €395, and we’ll work with you to find the perfect solution within your budget.
Future-Proof Solutions
We build websites that are not only visually stunning but also future-proof. Our web designs are responsive, optimised for search engines, and built on robust platforms like WordPress, ensuring your website remains relevant and competitive for years to come.
Unparalleled Customer Support
Our relationship doesn’t end with the website launch. We provide ongoing customer support, [website maintenance](https://www.affordablewebsites.ie/website-maintenance-ireland/), and regular updates to ensure your website remains secure, fast, and up-to-date with the latest industry standards.
How much does a website cost in Ireland 2024 - Affordable Website Design Packages
Web Design Packages Tailored for Irish Businesses
Website Redesign Package Cost – From €395
Looking to revamp your online presence? Our complete website redesign package starts at just €395. It’s not just a facelift; it’s a game-changer for your business, creating a remarkable first impression that resonates with your target audience.
Small Business Website Package Cost – €495
Is it time to boost your small business online? Our €495 small business website design package is more than just a web design service; it’s a digital partnership elevating your online presence to convert visitors into customers.
eCommerce Website Package Cost – From €695
Ready to unlock the power of online shopping? Boost your sales with our eCommerce website package starting at €695. This package provides you with a secure, user-friendly WordPress online store, open 24/7, to showcase and sell your products or services.
Business Growth Website Package Cost – €1,495
Grow your brand, customer base and website traffic fast with our digital marketing package. Our business growth web design & SEO package includes a custom website with six months SEO service to improve your business’s visibility, drive more traffic to your website and customers your way.
Cheap SEO Packages Ireland - How much does SEO cost in Ireland
Affordable SEO Services for Sustainable Growth
SEO Packages from €175/Month
Boost your search visibility and attract more customers with our cost-effective SEO packages starting from €175/month. Our tailored strategies leverage in-depth keyword research to skyrocket your local SEO and nationwide rankings through strategic on-page and off-page optimisation.
Transparent Reporting and Data-Driven Decisions
We don’t just set it and forget it. Our transparent monthly SEO reports keep you in the loop, allowing you to track your performance, monitor your ROI, and make data-driven decisions for your business’s sustainable growth.
How Much Does a Website Cost In Ireland Summary
In summary, affordable high-quality website design in Ireland is possible with the right approach and partner. By understanding your needs, prioritising essential features, and investing wisely, you can get a fantastic, future-proof website that positions your business for success without breaking the bank.
Get Started with Affordable Websites Today
For over 18 years, Affordable Websites' commitment has been to provide unparalleled value and build lasting relationships with our clients in Ireland. Whether you need affordable web design, eCommerce solutions, website maintenance or SEO services, we've got you covered.
Don’t let budget constraints or web design costs in Ireland hold you back from establishing a robust online presence. Contact us today to discuss your website project, and let’s create a website that not only looks fantastic but also delivers real business results. | affordablewebsites |
1,879,578 | My first Pull, Commit, and Push with Git! | "Little things can have a big impact on our daily life. Playing with Git is very exciting. As a... | 0 | 2024-06-06T18:36:43 | https://dev.to/manish_dev/my-first-pull-commit-and-push-with-git-38j5 | git, github | **"Little things can have a big impact on our daily life. Playing with Git is very exciting. As a beginner, solving minor issues provides great motivation. This is how I push code into GitHub."**
- Run
```
git init
```
in the terminal. This will initialize the folder/repository that you have on your local computer system.
- Run
```
git add .
```
in the terminal. This will track any changes made to the folder on your system, since the last commit. As this is the first time you are committing the contents of the folder, it will add everything.
- Run
```
git commit -m
```
"insert Message here". This will prepare the added/tracked changes to the folder on your system for pushing to Github. Here, insert Message here can be replaced with any relevant commit message of your choice.
- Run
```
git remote add origin https://github.com/manishwin/tic-tac-toe.git
```
in the terminal. Here, manishwin and tic-tac-toe will be replaced by the values provided in the copied link. This will connect the existing folder on your local computer system to the newly created GitHub repository.
- Run
```
git remote -v
```
. This lists your configured remotes and their fetch/push URLs, so you can verify that `origin` points to your new GitHub repository.
- Run
```
git push origin master
```
. Note that the last word in the command master, is not a fixed entry when running git push. It can be replaced with any relevant “branch_name”. | manish_dev |
1,868,117 | ScoutSuite | ScoutSuite is a really nice security tool to audit your cloud solutions. I have used it on the AWS... | 0 | 2024-06-06T18:35:11 | https://dev.to/stefanalfbo/scoutsuite-2l1n | 100daystooffload, tooling, security, cloud | [ScoutSuite](https://github.com/nccgroup/ScoutSuite) is a really nice security tool to audit your cloud solutions.
I have used it on the AWS cloud and it instantly gave me some things to inspect further and it was easy to get started with. However the tool also support other cloud providers as Azure, GCP and more.
The project is based on Python and can be installed like this.
```terminal
virtualenv -p python3 venv
source venv/bin/activate
pip install scoutsuite
```
I recommend to use the [custom policy](https://github.com/nccgroup/ScoutSuite/blob/develop/doc/aws-minimal-permission-policy.json) provided by their wiki page when running against AWS. With that you will give the tool minimal privileges.
Set up a new profile in the aws credential file that is using the policy above when [authenticating](https://github.com/nccgroup/ScoutSuite/wiki/Amazon-Web-Services#authentication) against AWS, call the profile, `scoutprofile`.
```text
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = thesecretkey
[scoutprofile]
aws_access_key_id = AKIA...
aws_secret_access_key = anothersecretkey
```
Now we can use the command below to start the application.
```terminal
$ scout aws --profile scoutprofile
```
This will start to query the AWS API to find out as much as possible about your AWS environment. When done, it will create a nice web page with a report on all the findings.
I really recommend you to try it out, I had valuable feedback on my first try and the investment to get it running was quite low.
There are of course many tools out there to try out, if you want to explore more, then this [curated list](https://github.com/4ndersonLin/awesome-cloud-security) is a great resource.
Happy auditing!
| stefanalfbo |
1,879,574 | HTTP Status Codes: Your Guide to Web Communication and Error Handling 🌐 | HTTP status codes are like the silent messengers of the web, communicating vital information between... | 0 | 2024-06-06T18:32:16 | https://dev.to/raksbisht/http-status-codes-your-guide-to-web-communication-and-error-handling-1cej | http, statuscodes, webdev, tutorial | HTTP status codes are like the silent messengers of the web, communicating vital information between servers and clients. From indicating successful transactions to warning of errors, these codes play a crucial role in ensuring smooth communication across the internet. Let's dive into the world of HTTP status codes, decoding their meanings and understanding their significance in web development.
### Understanding HTTP Status Codes
**HTTP**, or **Hypertext Transfer Protocol**, is the foundation of data communication on the World Wide Web. When a client, such as a web browser, sends a request to a server, the server responds with an HTTP status code to indicate the outcome of the request. These status codes are three-digit numbers grouped into five categories, each conveying a different type of information.
### The Categories of HTTP Status Codes
1. **1xx - Informational** ℹ️: These status codes indicate that the server has received the request and is processing it. They are primarily used for informational purposes and rarely encountered in practice.
2. **2xx - Success** ✅: These status codes indicate that the request was successfully received, understood, and processed by the server. The most common 2xx status code is 200, which signifies that the request was successful.
3. **3xx - Redirection** 🔄: These status codes indicate that further action is needed to complete the request. They are often used for redirecting clients to a different URL or resource.
4. **4xx - Client Errors** ❌: These status codes indicate that the client's request contains errors or cannot be fulfilled by the server. The most well-known 4xx status code is 404, indicating that the requested resource was not found on the server.
5. **5xx - Server Errors** 🚨: These status codes indicate that the server encountered an error while processing the request. They are typically caused by issues on the server-side and indicate a temporary or permanent failure.
### Commonly Encountered HTTP Status Codes
1. **200 OK** ✅: Indicates that the request was successful, and the server has returned the requested resource.
2. **404 Not Found** ❌: Indicates that the requested resource could not be found on the server.
3. **500 Internal Server Error** 🚨: Indicates that the server encountered an unexpected condition that prevented it from fulfilling the request.
4. **301 Moved Permanently** 🏠: Indicates that the requested resource has been permanently moved to a new location.
5. **403 Forbidden** 🚫: Indicates that the server understood the request but refuses to authorize it.
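The five categories above can be captured in a tiny helper function, handy when branching on a response. This is a sketch, not tied to any particular HTTP client, and `statusCategory` is a name chosen for this example:

```javascript
// Map an HTTP status code to the category it belongs to,
// following the five classes described above.
function statusCategory(code) {
  if (code >= 100 && code < 200) return "Informational";
  if (code >= 200 && code < 300) return "Success";
  if (code >= 300 && code < 400) return "Redirection";
  if (code >= 400 && code < 500) return "Client Error";
  if (code >= 500 && code < 600) return "Server Error";
  return "Unknown";
}

console.log(statusCategory(404)); // "Client Error"
```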
### Importance of HTTP Status Codes in Web Development
HTTP status codes play a crucial role in web development for several reasons:
* They provide valuable feedback to users, informing them about the outcome of their requests.
* They help developers diagnose and troubleshoot issues by indicating where and why errors occurred.
* They facilitate proper handling of requests and responses, ensuring smooth communication between clients and servers.
### Conclusion
HTTP status codes are the language of the web, providing vital information about the outcome of requests and responses. Understanding these codes is essential for both developers and users, as they enable effective communication and troubleshooting in web development. By familiarizing themselves with HTTP status codes, developers can build more robust and reliable web applications, while users can navigate the web with greater confidence and clarity. | raksbisht |
1,879,573 | Generating unique IDs in Salesforce with no chance of collision | To guarantee the generation of unique IDs in Salesforce with no chance of collision, you can use... | 0 | 2024-06-06T18:31:50 | https://dev.to/lucasvalhos/geracao-de-ids-unicos-no-salesforce-sem-chance-de-colisao-4jpm | To guarantee the generation of unique IDs in Salesforce with no chance of collision, you can use a few effective approaches. Below, I describe some of them:
### 1. **Use Salesforce's standard API (UUID)**
Salesforce lets you generate UUID-style identifiers (Universally Unique Identifiers) in Apex, which practically eliminates the chance of collision. The UUID is an international standard for creating unique identifiers. Note that the snippet below actually produces a random long via `Crypto.getRandomLong()`, a practical stand-in rather than a true RFC 4122 UUID.
```apex
String uniqueId = String.valueOf(Math.abs(Crypto.getRandomLong()));
```
### 2. **Combine unique fields**
You can create a unique ID by combining several fields that, together, are guaranteed to be unique. For example, concatenate the record ID with the current date and time, a unique user code, or other exclusive fields.
```apex
String uniqueId = 'ID-' + System.currentTimeMillis() + '-' + UserInfo.getUserId();
```
### 3. **Use Auto Number sequences**
Create an "Auto Number" field that increments automatically for each new record. This guarantees uniqueness by definition.
- **Usage example:**
  - Configure an "Auto Number" field on the desired object.
  - In the "Display Format" setting, define the format as "UNQ-{000000}" so that numbers are generated as `UNQ-000001`, `UNQ-000002`, etc.
### 4. **Combine date and time with an increment**
Combine the current date and time with an increment to ensure uniqueness.
```apex
Datetime dt = Datetime.now();
String uniqueId = dt.format('yyyyMMddHHmmssSSS') + '-' + UserInfo.getUserId();
```
### 5. **Use external keys**
If you are integrating with another system, you can use external keys that already guarantee uniqueness in the source system. This can be useful, for example, in integrations with ERP, CRM, or other systems that already guarantee unique keys.
### Complete example in Apex:
Here is a complete example using several techniques to guarantee uniqueness:
```apex
public with sharing class UniqueIdGenerator {
    public static String generateUniqueId() {
        // Generate a random UUID-style value
        String uuid = String.valueOf(Math.abs(Crypto.getRandomLong()));
        // Combine it with the current date and time for extra safety
        Datetime dt = Datetime.now();
        String timestamp = dt.format('yyyyMMddHHmmssSSS');
        // Combine it with the user ID for additional uniqueness
        String userId = UserInfo.getUserId();
        // Build the final unique ID
        String uniqueId = uuid + '-' + timestamp + '-' + userId;
        return uniqueId;
    }
}
```
This example uses a random value, the current date and time, and the user ID to create a unique ID. Combining these elements drastically reduces the chance of collision.
Choosing the right approach depends on your specific needs and the usage context in Salesforce. Combined, these techniques ensure a unique ID for each record without worrying about collisions. | lucasvalhos |
1,879,572 | Is JWT Safe When Anyone Can Decode Plain Text Claims | If I get a JWT and can decode the payload, how is it secure? Why couldn't I just grab the token out... | 0 | 2024-06-06T18:30:17 | https://dev.to/jacktt/is-jwt-safe-when-anyone-can-decode-plain-text-claims-2j7o | If I get a JWT and can decode the payload, how is it secure? Why couldn't I just grab the token out of the header, decode and change the user information in the payload, and then encode it again to access any person's account? - My friend asked me this question today.
The short answer is NO: you can decode the token to see the payload, but you cannot edit it!
Let's see how it works!
## How does JWT work?
JWT consists of 3 parts: Header, Payload, and Signature.
```javascript
const token = base64urlEncoding(header) + '.' + base64urlEncoding(payload) + '.' + base64urlEncoding(signature)
```
It will look like that:
```
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
```
**Header**
The header identifies which algorithm is used to generate the signature.
For example, decoded header of the above JWT will look like:
```json
{
"alg": "HS256",
"typ": "JWT"
}
```
It indicates that this JWT uses HS256 (HMAC with SHA-256) algorithm to generate the signature.
**Payload**
Contains a set of claims. The JWT specification defines seven Registered Claim Names, which are the standard fields commonly included in tokens, such as exp (Expiration Time), iat (Issued At), etc.
Custom claims are usually also included, depending on the purpose of the token such as email, user_id, role,…
The above JWT's Payload is:
```json
{
"sub": "1234567890",
"name": "John Doe",
"iat": 1516239022
}
```
**Signature**
Securely validates the token. The signature is calculated by encoding the header and payload using Base64url encoding (RFC 4648) and concatenating the two with a period separator. That string is then run through the cryptographic algorithm specified in the header, in this case HMAC-SHA256. Base64url encoding is similar to base64, but uses different non-alphanumeric characters and omits padding.
## Answer the questions
### Can anyone decode to see the payload?
Yes, they can! 3 parts of the token are basically Base64url encoding, so everyone can decode easily. You can try it at [jwt.io](https://jwt.io).

### Can anyone edit the payload?
Sure, anyone can decode the token to get the JSON payload, edit it, then encode it again to get a new token with the edited payload value.
But this token is not valid anymore, because the signature is invalid.
Scroll up to read "<u>how the signature is generated</u>" again.
"_It's calculated by encoding the header and payload using…_". So when you edit the payload and then send the token to the server, the server will recalculate the signature, compare it, and know that your signature is invalid!
So why don't I generate the corresponding signature?
No, you cannot! Read "_how the signature is generated_" again: "_That string is then run through the cryptographic algorithm specified in the header_".
In this case, the header indicates that the signature is generated using HMAC-SHA256, so if you want to generate a valid signature, you must have the HMAC-SHA256 **secret key** that was used to generate this signature in the first place.
### Can anyone validate it's a valid JWT from a server?
It depends on the algorithm used in the JWT.
The most common algorithm used to sign the JWT is as follows:
- **HMAC** stands for Hash-based Message Authentication Code, and it is a symmetric algorithm that uses a hash function and a secret key to generate a signature.
- **RSA** stands for Rivest-Shamir-Adleman, and it is an asymmetric algorithm that uses a key pair: the private key generates the signature and the public key verifies it.
So:
- With HMAC, only users who hold the secret key can validate the JWT.
- With RSA, anyone can validate the JWT using the public key that is published somewhere. | jacktt | |
1,879,571 | Impacts of changing a Lookup field to Master-Detail in Salesforce | Changing a Lookup relationship field to Master-Detail in Salesforce can bring... | 0 | 2024-06-06T18:29:26 | https://dev.to/lucasvalhos/impactos-ao-alterar-um-campo-lookup-para-master-detail-no-salesforce-l3k | Changing a Lookup relationship field to Master-Detail in Salesforce can bring several challenges. Here are the main points to consider:
### 1. **Impact on existing data**
- **Referential integrity**: The Lookup field must be required before converting it to Master-Detail. All existing records need to have the Lookup field populated.
- **Data conversion**: Make sure all existing data is compatible with the new relationship. This may require cleaning or updating the data before the conversion.
### 2. **Permissions and security**
- **Access control**: Master-Detail inherits security permissions from the master object. Check that user permissions on the objects involved are configured correctly.
- **Record sharing**: Converting to Master-Detail can change how records are shared, since child records inherit the sharing settings of the parent record.
### 3. **Data modeling**
- **Dependencies**: Child records are highly dependent on parent records in a Master-Detail relationship. Deleting a parent record automatically deletes all of its child records.
- **Relationship count**: Salesforce limits the number of Master-Detail relationships on an object. Check that you are not exceeding those limits.
### 4. **Reports and dashboards**
- **Custom reports**: Master-Detail allows reports that aggregate child-record data up to parent records, which can affect existing reports.
- **Roll-up summaries**: Master-Detail supports roll-up summary fields, which can be useful but require reconfiguring reports and dashboards.
### 5. **Automation and processes**
- **Triggers and workflows**: Check whether there are triggers, workflows, or processes that depend on the Lookup field and adjust them to support the new Master-Detail relationship.
- **Process Builder and Flow**: Flows and Process Builder processes that use the Lookup field may need adjustments after the conversion.
### 6. **Customizations and code**
- **Apex code**: Review and update any Apex code that interacts with the Lookup field to ensure it keeps working correctly after the conversion.
- **Visualforce pages and Lightning components**: Check and adjust custom UI components that use the Lookup field.
### 7. **Testing**
- **Regression testing**: Run regression tests to ensure all existing functionality still works correctly after the change.
- **Sandbox environment**: Make all changes first in a sandbox environment to validate the impacts and required adjustments before deploying to production.
### 8. **Documentation and training**
- **Updated documentation**: Update technical and user documentation to reflect the new data structure.
- **User training**: Train end users on the changes and how they affect their daily tasks.
### Summary
Changing a Lookup field to Master-Detail in Salesforce is a complex task that requires careful planning and extensive validation. By considering the points above, you can minimize risks and ensure a smooth transition. | lucasvalhos |
1,879,570 | Matthew Danchak's Proven Tips for Achieving Optimal Mental Health | Keeping your mental health in check can be tough in today's hectic world. With all the pressures from... | 0 | 2024-06-06T18:24:32 | https://dev.to/matthewdanchak/matthew-danchaks-proven-tips-for-achieving-optimal-mental-health-1ofp | matthewdanchak, healthyrelationships, healthyliving, mentalclarity | Keeping your mental health in check can be tough in today's hectic world. With all the pressures from work, relationships, and the constant digital noise, it's crucial to have some solid strategies to stay mentally healthy. [Matthew Danchak](https://www.f6s.com/member/matthew-danchak), a well-known mental health expert, shares some practical tips to help you achieve and maintain good mental health. Here are his proven strategies that can really make a difference in your life.
## Prioritize Self-Care
Self-care isn't just a trendy phrase; it's a must. Danchak emphasizes taking time for yourself. This can be as simple as enjoying a quiet cup of coffee in the morning or scheduling a regular massage. The key is to make self-care a regular part of your routine. When you put your needs first, you're better able to handle stress and life's challenges.
## Stay Physically Active
Exercise is a great way to boost your mental health. Matthew Danchak says you don't have to become a marathon runner to see the benefits. Even a daily 30-minute walk can significantly improve your mood. Exercise releases endorphins, which are natural mood lifters. It also helps reduce anxiety and depression, making it essential for mental well-being.
## Foster Strong Relationships
Human connection is crucial for mental health. Danchak suggests building strong relationships with family and friends to create a support network. Make time for meaningful conversations and activities that strengthen your bonds. Remember, it's not about the number of relationships but the quality. A few close, supportive friends can make a huge difference.
## Practice Mindfulness and Meditation
Mindfulness and meditation are effective ways to reduce stress and improve mental clarity. Danchak recommends setting aside time each day to practice mindfulness, whether through meditation, deep breathing exercises, or just being present in the moment. These practices help calm the mind and reduce mental clutter that can lead to anxiety.
## Set Realistic Goals
Setting and achieving goals gives a sense of purpose and accomplishment. However, Danchak warns against setting goals that are too ambitious, as this can lead to frustration and burnout. Instead, break your goals into smaller, manageable steps. Celebrate your progress along the way, no matter how small. This approach keeps you motivated and positively impacts your mental health.
## Maintain a Healthy Diet
What you eat affects how you feel. Danchak highlights the importance of a balanced diet rich in fruits, vegetables, lean proteins, and whole grains. Avoid too much sugar and processed foods, which can lead to energy crashes and mood swings. Staying hydrated is also crucial for mental clarity and overall well-being.
## Limit Screen Time
In our digital age, it's easy to get lost in screens, from smartphones to computers to TVs. Danchak advises setting limits on screen time to prevent digital overload. Designate tech-free times, such as during meals or before bedtime, to unwind and connect with the real world. This practice helps reduce stress and improve sleep quality.
## Seek Professional Help When Needed
There's no shame in seeking help. Danchak stresses the importance of reaching out to mental health professionals when you need support. Therapists and counselors can provide guidance and strategies tailored to your unique situation. Remember that asking for help demonstrates strength, not weakness.
## Cultivate a Positive Mindset
Your mental health is greatly influenced by your mindset. Matthew Danchak encourages cultivating positivity by practicing gratitude and focusing on the good in your life. This doesn't mean ignoring problems, but rather, viewing challenges as opportunities for growth. Positive thinking can improve your resilience and overall mental well-being.
## Conclusion
Achieving optimal mental health is an ongoing journey that requires effort and dedication. By incorporating [Matthew Danchak](https://medium.com/@matthewdanchak861/about)'s proven tips into your daily life, you can build a strong foundation for mental wellness. Prioritize self-care, stay active, nurture relationships, and don't hesitate to seek help when needed. Remember, mental health is just as important as physical health, and taking steps to maintain it is crucial for living a fulfilling life.
| matthewdanchak |
1,879,569 | AI CSS Animations | I released my first software in the design space about a month ago, a free AI CSS animation... | 0 | 2024-06-06T18:21:38 | https://dev.to/max_prehoda_9cb09ea7c8d07/ai-css-animations-1e1f | webdev, css, design, ai | I released my first software in the design space about a month ago, a free AI CSS animation generator.
As a dev/designer, I was frustrated with the annoying & tedious process of writing keyframe animations. The lack of good tools available led me to build my own solution.
After a month of intense development, it’s ready! Now, I'm reaching out to the Dev community for feedback and beta testers to help refine things further :)
If you're interested in making some slick animations for your site, I'd love for you to try it out and share your thoughts! Looking for harsh criticisms here, don’t hold back!
[Aicssanimations.com](https://Aicssanimations.com)
| max_prehoda_9cb09ea7c8d07 |
1,875,534 | Database Migrations : Flyway for Spring Boot projects | Like Liquibase, Flyway can be used in 2 main manners for database migrations in Spring Boot projects,... | 27,623 | 2024-06-06T18:18:50 | https://dev.to/aharmaz/database-migrations-flyway-for-spring-boot-projects-2coi | Like Liquibase, Flyway can be used in 2 main manners for database migrations in Spring Boot projects, either on application startup or in an independent way.
In this post we will see how to configure Flyway and use it in the context of a Spring Boot project for both development and release phases, You can find examples in the repository available at : [Github Repository](https://github.com/Aharmaz/flyway-demo)
**Fundamental Concepts of Flyway**
*Migration and Migration Script*:
A migration is the smallest countable unit of change that Flyway can perform and register against a target database, and it can involve one or more operations located in a file called a migration script.
A migration in Flyway is equivalent to a changeset in Liquibase; however, a migration script in Flyway is intended to host only a single migration, whereas a migration script in Liquibase can host multiple changesets.
In Flyway, executing a migration is equivalent to executing the migration script hosting that migration.
*flyway_schema_history Table*:
This is the table Flyway creates on a database and uses to keep track of which migrations have already been executed against that target database, so that it does not run the same migration multiple times and knows which migrations it still needs to run at any point in time (those that haven't been registered in the history table).
Each migration is identified by the filepath of the migration file where it is located. When Flyway runs a migration, it calculates a checksum for the content of that migration and stores it in the flyway_schema_history table, so it can detect if an applied migration is changed afterwards.
If Flyway notices that a migration has been modified after it was applied, by comparing the checksums, it will throw an error during the migration process.
**Common Configuration**
This consists of creating a folder in the resources directory of the project and populating it with the migration files; in the example I have named it db/migrations:
- V1__creating_persons_table.sql
- V2__inserting_rows_into_persons_table.sql
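As a sketch of what those two scripts might contain (the `persons` table layout here is assumed for illustration, not taken from the example repository):

```sql
-- V1__creating_persons_table.sql
CREATE TABLE persons (
    id   SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

-- V2__inserting_rows_into_persons_table.sql
INSERT INTO persons (name) VALUES ('Alice'), ('Bob');
```

Note the double underscore after the version prefix: Flyway's default naming pattern for versioned migrations is `V<version>__<description>.sql`.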
**Configuring Flyway to run migrations on application startup**
This kind of behavior is commonly used in the development phase, when a developer needs to update the state of the database on their local machine. Spring Boot offers some auto-configurations for launching the Flyway migration when the application is started.
For those auto-configurations to work, we need to add some information to the Spring Boot configuration file for local environments (application-local.yml), such as the location of the database, the connection username and password, and the location of the migration files within the classpath of the application:
```
spring:
datasource:
url: jdbc:postgresql://localhost:5432/flyway_demo
username: postgres
password: changemeinproduction
driver-class-name: org.postgresql.Driver
jpa:
hibernate:
ddl-auto: none
flyway:
locations: classpath:db/migrations
```
Then we should add the Flyway dependency to the pom.xml file of the project (or the build.gradle file if you are using Gradle):
```
<dependency>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-core</artifactId>
</dependency>
```
Now, when starting the application, Spring Boot will notice the presence of the Flyway dependency on the classpath of the app and will use the information in the configuration file to trigger the auto-configuration responsible for launching the migration process automatically. Here is an example of what you could see in the logs when the migration process is started:
```
2024-06-06T19:01:09.407+01:00 INFO 3229 --- [ main] org.flywaydb.core.FlywayExecutor : Database: jdbc:postgresql://localhost:5432/flyway_demo (PostgreSQL 16.0)
2024-06-06T19:01:09.428+01:00 WARN 3229 --- [ main] o.f.c.internal.database.base.Database : Flyway upgrade recommended: PostgreSQL 16.0 is newer than this version of Flyway and support has not been tested. The latest supported version of PostgreSQL is 15.
2024-06-06T19:01:09.450+01:00 INFO 3229 --- [ main] o.f.c.i.s.JdbcTableSchemaHistory : Schema history table "public"."flyway_schema_history" does not exist yet
2024-06-06T19:01:09.453+01:00 INFO 3229 --- [ main] o.f.core.internal.command.DbValidate : Successfully validated 2 migrations (execution time 00:00.015s)
2024-06-06T19:01:09.486+01:00 INFO 3229 --- [ main] o.f.c.i.s.JdbcTableSchemaHistory : Creating Schema History table "public"."flyway_schema_history" ...
2024-06-06T19:01:09.538+01:00 INFO 3229 --- [ main] o.f.core.internal.command.DbMigrate : Current version of schema "public": << Empty Schema >>
2024-06-06T19:01:09.547+01:00 INFO 3229 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "public" to version "1 - creating persons table"
2024-06-06T19:01:09.606+01:00 INFO 3229 --- [ main] o.f.core.internal.command.DbMigrate : Migrating schema "public" to version "2 - inserting rows into persons table"
2024-06-06T19:01:09.647+01:00 INFO 3229 --- [ main] o.f.core.internal.command.DbMigrate : Successfully applied 2 migrations to schema "public", now at version v2 (execution time 00:00.042s)
```
**Configuring Flyway to run migrations independently from running the application**
This behavior is used when a new release of the software using a database is ready to be deployed and a number of migrations must be applied to that database.
Maven provides a plugin for running Flyway migrations. This plugin needs to know the database location, the connection username and password, and the location of the migration files in the project folder. This can be configured by adding a file named flyway.conf to the resources folder of the Spring Boot project, and then referencing the location of that file in the pom.xml when declaring the Flyway Maven plugin:
```
flyway.user=postgres
flyway.password=changemeinproduction
flyway.schemas=demo_flyway
flyway.url=jdbc:postgresql://localhost:5432/demo_flyway
flyway.locations=src/main/resources/db/migrations
```
```
<build>
<plugins>
<plugin>
<groupId>org.flywaydb</groupId>
<artifactId>flyway-maven-plugin</artifactId>
<version>6.5.7</version>
<configuration>
<configFiles>
<configFile>
src/main/resources/flyway.conf
</configFile>
</configFiles>
</configuration>
</plugin>
</plugins>
</build>
```
If we want to launch the migration process using the Flyway Maven plugin, we must set the property spring.flyway.enabled to false so that no migration is executed when the application is started.
Finally, we run the migrations using the following command:
```
./mvnw flyway:migrate
```
and we will be able to see the following logs indicating that the migrations have been executed successfully:
```
[INFO] Scanning for projects...
[INFO]
[INFO] ---------------------< ma.demo.flyway:flyway-demo >---------------------
[INFO] Building flyway-demo 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- flyway-maven-plugin:6.5.7:migrate (default-cli) @ flyway-demo ---
[INFO] Flyway Community Edition 6.5.7 by Redgate
[INFO] Database: jdbc:postgresql://localhost:5432/flyway_demo (PostgreSQL 16.0)
[WARNING] Flyway upgrade recommended: PostgreSQL 16.0 is newer than this version of Flyway and support has not been tested. The latest supported version of PostgreSQL is 12.
[INFO] Creating schema "flyway_demo" ...
[INFO] Creating Schema History table "flyway_demo"."flyway_schema_history" ...
[INFO] Current version of schema "flyway_demo": null
[INFO] Migrating schema "flyway_demo" to version 1 - creating persons table
[INFO] Migrating schema "flyway_demo" to version 2 - inserting rows into persons table
[INFO] Successfully applied 2 migrations to schema "flyway_demo" (execution time 00:00.066s)
[WARNING] Flyway upgrade recommended: PostgreSQL 16.0 is newer than this version of Flyway and support has not been tested. The latest supported version of PostgreSQL is 12.
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 0.797 s
[INFO] Finished at: 2024-06-06T19:04:59+01:00
[INFO] ------------------------------------------------------------------------
```
**Conclusion**
Flyway simplifies database migrations by ensuring that changes are applied consistently. In the context of Spring Boot, it can be integrated easily to launch the migration process at application startup, which helps developers focus on building new features without worrying about the state of their databases.
| aharmaz | |
1,879,562 | Express.js: Unleashing the Power of Node.js for Web Development 🚀🔥 | Are you ready to revolutionize your web development projects with Express.js? In this article, we’ll... | 0 | 2024-06-06T18:16:13 | https://dev.to/raksbisht/expressjs-unleashing-the-power-of-nodejs-for-web-development-59aj | node, webdev, javascript, tutorial | Are you ready to revolutionize your web development projects with Express.js? In this article, we’ll embark on a journey into the world of Express.js — a fast, unopinionated, and minimalist web framework for Node.js. Whether you’re a seasoned developer or just starting your coding adventure, Express.js offers a robust toolkit for building scalable and feature-rich web applications. Let’s dive in!
## What is Express.js?
Express.js is a web application framework for Node.js, designed to simplify the process of building web applications and APIs. It provides a range of features for handling HTTP requests, routing, middleware, and more, allowing developers to focus on writing clean and efficient code.
## Getting Started with Express.js
To kick things off, let’s install Express.js and create a simple web server:
```
// Install Express.js
npm install express
```
```
// Create a basic Express server
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('Hello, Express!');
});
app.listen(3000, () => {
console.log('Server is running on port 3000');
});
```
## Key Features of Express.js
Express.js comes packed with a plethora of features to streamline web development. Here are some highlights:
1. **Routing:** Define routes to handle different HTTP requests and URL paths, making it easy to organize and structure your application.
2. **Middleware:** Utilize middleware functions to perform tasks such as logging, authentication, error handling, and request processing.
3. **Template Engines:** Integrate template engines like EJS, Pug, or Handlebars to dynamically generate HTML content and render views.
4. **Static File Serving:** Serve static files such as images, CSS, and client-side JavaScript files with built-in middleware like express.static.
5. **RESTful API Development:** Build RESTful APIs effortlessly using Express.js, with support for handling JSON data, request validation, and response formatting.
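As a quick illustration of the middleware feature above, here is a minimal logging middleware, shown as a standalone function so it can run without a server (`requestLogger` is a name chosen for this sketch):

```javascript
// A middleware is just a function with the (req, res, next) signature.
// It can inspect or modify the request, or end the response; calling
// next() passes control to the next middleware or route handler.
function requestLogger(req, res, next) {
  console.log(`${req.method} ${req.url}`);
  next();
}

// In a real app you would register it before your routes:
// app.use(requestLogger);
```

Because middleware functions are plain functions, they are easy to compose and to unit-test in isolation.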
## Example: Creating a RESTful API with Express.js
Let’s create a simple RESTful API for managing tasks using Express.js:
```
const express = require('express');
const app = express();
app.use(express.json());
let tasks = [];
// GET all tasks
app.get('/tasks', (req, res) => {
res.json(tasks);
});
// GET a specific task by ID
app.get('/tasks/:id', (req, res) => {
const taskId = req.params.id;
const task = tasks.find(task => task.id === taskId);
if (!task) {
return res.status(404).json({ error: 'Task not found' });
}
res.json(task);
});
// POST a new task
app.post('/tasks', (req, res) => {
const task = req.body;
tasks.push(task);
res.status(201).json(task);
});
// PUT (update) an existing task by ID
app.put('/tasks/:id', (req, res) => {
const taskId = req.params.id;
const taskIndex = tasks.findIndex(task => task.id === taskId);
if (taskIndex === -1) {
return res.status(404).json({ error: 'Task not found' });
}
tasks[taskIndex] = { ...req.body, id: taskId };
res.json(tasks[taskIndex]);
});
// DELETE a task by ID
app.delete('/tasks/:id', (req, res) => {
const taskId = req.params.id;
const taskIndex = tasks.findIndex(task => task.id === taskId);
if (taskIndex === -1) {
return res.status(404).json({ error: 'Task not found' });
}
tasks.splice(taskIndex, 1);
res.sendStatus(204);
});
app.listen(3000, () => {
console.log('Server is running on port 3000');
});
```
## Embracing Express.js for Web Development
Express.js continues to be a popular choice for developers due to its flexibility, performance, and vibrant ecosystem of middleware and plugins. Whether you’re building a simple blog, a sophisticated e-commerce platform, or a real-time chat application, Express.js empowers you to bring your ideas to life with ease.
## Conclusion
Express.js is not just a framework — it’s a catalyst for innovation and creativity in web development. With its intuitive API, extensive documentation, and supportive community, Express.js remains at the forefront of Node.js web development.
Are you ready to harness the power of Express.js for your next project? Share your thoughts and experiences in the comments below! Let’s embark on an exhilarating journey of web development together. 💻✨
Feel free to share this article with fellow developers, aspiring coders, or anyone eager to explore the exciting world of Express.js! Together, we can unlock new possibilities and redefine the future of web development. 🌐🚀
| raksbisht |
1,879,559 | Discover Cost Savings on LTL Shipping from Canada to the USA | Are you business owner or logistics manager tasked with shipping products from Canada to the United... | 0 | 2024-06-06T18:13:31 | https://dev.to/john_54bb96585e9bc23d0901/discover-cost-savings-on-ltl-shipping-from-canada-to-the-usa-3jgh | ltlcrossborder, shippingcanadatousa, shippingsoftware | Are you business owner or logistics manager tasked with shipping products from Canada to the United States? If so, you're likely aware of the intricacies and expenses involved in cross-border shipping. The good news is, there's a solution that can simplify your process and significantly cut costs: LTL (Less-Than-Truckload) shipping. Even better news? https://www.freightcom.com/cross-border-shipping is here to help you navigate the complexities and secure the best rates available.
**Why Choose LTL Shipping?**

LTL shipping is the optimal choice for businesses that don’t need a full truckload for their shipments. By consolidating your freight with other shipments, LTL offers a cost-effective, efficient way to transport goods across the border. Here are a few reasons why LTL is a smart choice:
- **Cost Savings:** By sharing the transportation cost with other shippers, you only pay for the portion of the truck your freight occupies.
- **Flexibility:** LTL shipping allows for more frequent shipments, reducing the need for large, less frequent orders.
- **Reduced Risk:** With professional handling and less individual cargo movement, LTL shipping can reduce the risk of damage.
**Advantage**
Navigating the complexities of LTL shipping can be daunting, but Freightcom makes it easy. Here’s why Freightcom stands out as your best partner for shipping from Canada to the USA:
- **Competitive Rates:** Our network of carriers ensures you always get the best possible rates. We negotiate significant discounts with the trusted carriers we work with, which are then passed on to you.
- **Simplified Process:** Our user-friendly platform allows you to compare carrier rates, book shipments, and track deliveries in real time, all in one place.
- **Expert Support:** Our team of logistics professionals is available for any questions or concerns that may arise during the shipping process, from start to finish, so don't hesitate to ask them anything!
- **Custom Solutions:** We provide tailored services designed around what works best for each individual customer, no matter how large or small their shipment needs may be.
**Real-World Savings**
Consider an example: a small business based in Toronto needs to send several pallets of goods across the border to the United States. Without LTL, they would either have to manage many smaller shipments, which could cost more in time and money than they're worth, or book a full truckload that might not be necessary for everything being shipped. By using our less-than-truckload service, they can group these loads together into a single shipment, lowering overall charges while also speeding up delivery times.
**Contact Freightcom Today**
If you want to grow your business but find cross-border shipping too complicated, Freightcom's LTL (less-than-truckload) service can help. It lets you move goods from Canada into the United States quickly without overspending along the way, and whether you're a large company or a small operation, no job is too hard.
https://www.freightcom.com/contact-us
Working with Freightcom to get pricing information for LTL shipping from Canada to the USA helped me save a lot on my shipping expenses. Their team of experts is always ready to provide the details you need and show you how to make your shipments faster while reducing costs.
| john_54bb96585e9bc23d0901 |
1,879,558 | What is a Wrapper in Salesforce? | In Salesforce, a "Wrapper" is a programming design pattern used to group a set of... | 0 | 2024-06-06T18:11:48 | https://dev.to/lucasvalhos/o-que-e-um-wrapper-no-salesforce-15ec | In Salesforce, a "Wrapper" is a programming design pattern used to group a set of different data or objects into a single logical unit. This pattern is especially useful when you want to manipulate and display a set of related information cohesively in a user interface. Let's look at the Wrapper concept and its uses in more detail:
### Definition
A Wrapper is a custom Apex class that contains other variables and objects, usually encapsulating several related properties and methods. It lets you treat a set of related information as a single unit, making it easier to handle complex data in visual components such as Visualforce pages or Lightning components.
### Common Uses of Wrappers
1. **Grouping Data**:
   - Wrappers are often used to group data from different sources. For example, you might have a Wrapper class that groups account and contact information so they can be displayed together.
2. **Handling Complex Interfaces**:
   - In Visualforce pages or Lightning components, Wrappers are useful for managing lists of records with extra functionality, such as multi-select, per-row actions, or grouped information.
3. **Passing Data to Methods**:
   - Wrappers make it easier to pass complex data between methods by encapsulating multiple parameters in a single data structure.
### Example of Using a Wrapper
Here is a basic example of how you can define and use a Wrapper in Apex:
```apex
public class AccountContactWrapper {
    public Account account { get; set; }
    public List<Contact> contacts { get; set; }
    public Boolean isSelected { get; set; }

    public AccountContactWrapper(Account account, List<Contact> contacts) {
        this.account = account;
        this.contacts = contacts;
        this.isSelected = false; // Defaults to not selected
    }
}
```
### Example of Use in a Visualforce Page
Suppose you want to display a list of accounts with their contacts on a Visualforce page, allowing the user to select multiple accounts:
```apex
public with sharing class AccountController {
    public List<AccountContactWrapper> accountWrapperList { get; set; }

    public AccountController() {
        accountWrapperList = new List<AccountContactWrapper>();
        // Use a parent-child subquery instead of running SOQL inside the loop
        // (SOQL in a loop quickly hits Salesforce governor limits)
        List<Account> accounts = [SELECT Id, Name, (SELECT Id, Name FROM Contacts) FROM Account LIMIT 10];
        for (Account acc : accounts) {
            accountWrapperList.add(new AccountContactWrapper(acc, acc.Contacts));
        }
    }

    public void processSelectedAccounts() {
        for (AccountContactWrapper wrapper : accountWrapperList) {
            if (wrapper.isSelected) {
                // Process the selected account
            }
        }
    }
}
```
And the corresponding Visualforce page could be:
```html
<apex:page controller="AccountController">
    <apex:form>
        <apex:pageBlock title="Accounts and Contacts">
            <apex:pageBlockTable value="{!accountWrapperList}" var="wrapper">
                <apex:column>
                    <apex:inputCheckbox value="{!wrapper.isSelected}"/>
                </apex:column>
                <apex:column value="{!wrapper.account.Name}" headerValue="Account Name"/>
                <apex:column>
                    <apex:repeat value="{!wrapper.contacts}" var="contact">
                        {!contact.Name}<br/>
                    </apex:repeat>
                </apex:column>
            </apex:pageBlockTable>
            <apex:commandButton value="Process Selected" action="{!processSelectedAccounts}"/>
        </apex:pageBlock>
    </apex:form>
</apex:page>
```
### Benefits of Using Wrappers
- **Organization**: Makes it easier to organize and manipulate complex data in clear, logical structures.
- **Reuse**: Encapsulates common logic in a single class, making the code easier to reuse and maintain.
- **Flexibility**: Lets you manipulate and display data in different ways without changing the underlying structure of the Salesforce objects.
In short, Wrappers are a powerful tool in a Salesforce developer's arsenal, enabling the creation of rich, interactive user interfaces as well as efficient handling of complex data. | lucasvalhos | 
1,879,555 | Responsive Image Carousel ( Animation ) | Check out this Pen I made! | 0 | 2024-06-06T18:10:10 | https://dev.to/lindoh_mdunge_1e3362af487/responsive-image-carousel-animation--5bdl | codepen | Check out this Pen I made!
{% codepen https://codepen.io/lindoh-Mdunge/pen/BabMbqG %} | lindoh_mdunge_1e3362af487 |
1,879,553 | app web | post demo | 0 | 2024-06-06T18:09:16 | https://dev.to/byron_loarte_d700d5b9fa29/app-web-8ea | webdev, appweb | post demo | byron_loarte_d700d5b9fa29 |
1,879,551 | 🚫 Common Pitfalls in Node.js Development: What Not to Do 🛑 | As developers, we’re often focused on what we should be doing to write efficient, scalable, and... | 0 | 2024-06-06T18:08:18 | https://dev.to/raksbisht/common-pitfalls-in-nodejs-development-what-not-to-do-85c | node, development, webdev, tutorial | As developers, we’re often focused on what we should be doing to write efficient, scalable, and maintainable Node.js code. But equally important is understanding what we should avoid doing to prevent common pitfalls and ensure the stability and security of our applications. In this article, we’ll explore some of the most crucial mistakes to steer clear of in Node.js development.
## 🚫 Blocking the Event Loop
One of the fundamental principles of Node.js is its non-blocking, event-driven architecture. Blocking the event loop with synchronous operations can severely degrade performance and responsiveness. Instead, utilize asynchronous patterns and leverage Node.js’s built-in features like Promises, async/await, and callbacks to handle I/O operations efficiently.
```javascript
// ❌ Blocking the Event Loop
const fs = require('fs');
const data = fs.readFileSync('file.txt'); // Blocking operation
// ✅ Asynchronous Approach
fs.readFile('file.txt', (err, data) => {
if (err) throw err;
console.log(data);
});
```
## 🚫 Neglecting Error Handling
Node.js applications are prone to errors, whether they’re network failures, file system errors, or unexpected inputs. Neglecting proper error handling can lead to crashes and vulnerabilities in your application. Always handle errors gracefully by using try-catch blocks, error-first callbacks, or utilizing frameworks like Express’s error middleware.
```javascript
// ❌ Neglecting Error Handling
fs.readFile('file.txt', (data) => {
console.log(data); // Oops! No error handling
});
// ✅ Handling Errors Gracefully
fs.readFile('file.txt', (err, data) => {
if (err) {
console.error('Error reading file:', err);
return;
}
console.log(data);
});
```
## 🚫 Overlooking Security Best Practices
Security should be a top priority in any Node.js application. Failing to follow security best practices, such as input validation, proper authentication, and authorization mechanisms, can leave your application vulnerable to attacks like injection and cross-site scripting (XSS). Always sanitize user inputs, use parameterized queries for database operations, and implement secure authentication mechanisms like JWT.
```javascript
// ❌ Vulnerable to SQL Injection
const query = `SELECT * FROM users WHERE username = '${username}' AND password = '${password}'`;
// ✅ Parameterized Query
const query = 'SELECT * FROM users WHERE username = ? AND password = ?';
connection.query(query, [username, password], (err, results) => {
  // Handle results
});
```
## 🚫 Ignoring Performance Optimization
Node.js is known for its scalability and performance, but ignoring optimization can lead to sluggish and inefficient applications. Avoid unnecessary computations, optimize database queries, and utilize caching mechanisms to improve response times and reduce resource consumption.
```javascript
// ❌ Inefficient Code
const result = array.filter(item => expensiveOperation(item));
// ✅ Optimized Code
const result = array.filter(item => cache[item] || (cache[item] = expensiveOperation(item)));
```
## 🚫 Skipping Testing and Continuous Integration
Testing is essential for ensuring the reliability and robustness of your Node.js applications. Skipping unit tests, integration tests, and end-to-end tests can result in undetected bugs and regressions. Integrate testing into your development workflow using frameworks like Mocha, Jest, and Supertest, and automate the testing process with continuous integration tools like Jenkins or Travis CI.
```javascript
// ❌ Skipping Tests
if (result !== expected) {
console.error('Test failed!');
}
// ✅ Writing Tests
test('should return true if value is equal to 10', () => {
expect(func(10)).toBe(true);
});
```
## 🚫 Reinventing the Wheel
Node.js has a vast ecosystem of libraries and frameworks that can streamline development tasks and solve common problems. Reinventing functionality that already exists in well-established packages can lead to unnecessary complexity and maintenance overhead. Always explore existing solutions and choose the right tools for the job.
```javascript
// ❌ Reinventing HTTP Server
const http = require('http');
const server = http.createServer((req, res) => {
res.end('Hello World!');
});
// ✅ Using Express.js
const express = require('express');
const app = express();
app.get('/', (req, res) => {
res.send('Hello World!');
});
```
## Conclusion
By avoiding these common pitfalls, you can enhance the stability, security, and performance of your Node.js applications. Remember to stay updated with best practices, leverage the power of the Node.js ecosystem, and prioritize continuous improvement in your development workflow. Happy coding! 🎉✨
What other common mistakes have you encountered in Node.js development? Share your experiences in the comments below! 🚀 | raksbisht |
1,879,548 | Node.js Performance Optimization: Unleashing the Full Potential of Your Applications | Introduction: Node.js has revolutionized the world of server-side development with its non-blocking,... | 0 | 2024-06-06T17:59:50 | https://dev.to/raksbisht/nodejs-performance-optimization-unleashing-the-full-potential-of-your-applications-4njm | node, performanceoptimization, eventloop, tutorial | **Introduction:** Node.js has revolutionized the world of server-side development with its non-blocking, event-driven architecture. However, achieving optimal performance in Node.js applications requires more than just writing efficient code — it requires a deep understanding of its underlying principles and techniques for optimization. In this article, I’ll share insights and strategies for maximizing the performance of your Node.js applications, helping you unlock their full potential.
## Main Points:
1. **Understanding Event Loop:** Explain the event loop in Node.js and its role in handling I/O operations asynchronously, emphasizing its impact on application performance.
2. **Identifying Performance Bottlenecks:** Discuss common performance bottlenecks in Node.js applications, such as CPU-bound and I/O-bound operations, and how to identify them using profiling tools.
3. **Code Optimization Techniques:** Offer tips and best practices for optimizing Node.js code, including minimizing blocking operations, caching frequently accessed data, and using asynchronous patterns effectively.
4. **Scaling Strategies:** Explore strategies for horizontally and vertically scaling Node.js applications to handle increased traffic and workload, including load balancing, clustering, and vertical scaling techniques.
5. **Memory Management:** Discuss memory management in Node.js, including techniques for reducing memory consumption, detecting memory leaks, and optimizing garbage collection.
6. **Performance Monitoring and Tuning:** Introduce tools and techniques for monitoring the performance of Node.js applications in real-time, identifying performance issues, and fine-tuning system parameters for optimal performance.
7. **Security Considerations:** Address the importance of security in performance optimization, including best practices for securing Node.js applications against common vulnerabilities and attacks.
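To make the code-optimization point concrete, here is a minimal, illustrative sketch (the `slowSquare` function and `cache` are assumptions for demonstration, not part of any specific application) of caching the result of an expensive computation so repeated calls skip the blocking work:

```javascript
// Minimal memoization sketch: cache results of an expensive,
// CPU-bound function so repeated calls return instantly.
const cache = new Map();

function slowSquare(n) {
  // Simulate expensive CPU-bound work (illustrative busy loop).
  for (let i = 0; i < 1e6; i++) {}
  return n * n;
}

function memoizedSquare(n) {
  if (!cache.has(n)) {
    cache.set(n, slowSquare(n)); // compute once, reuse afterwards
  }
  return cache.get(n);
}

console.log(memoizedSquare(4)); // 16 (computed)
console.log(memoizedSquare(4)); // 16 (served from the cache)
```

The same idea applies to database results or rendered templates: pay the cost once, then serve subsequent requests from memory.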
**Conclusion:** By implementing the strategies and techniques discussed in this article, you can optimize the performance of your Node.js applications and deliver faster, more responsive experiences to your users. With a solid understanding of Node.js performance optimization, you can unleash the full potential of your applications and stay ahead in today’s competitive landscape.
**Call to Action:** Ready to take your Node.js applications to the next level? Start by incorporating the performance optimization strategies discussed in this article into your development workflow, and join the conversation on LinkedIn to share your experiences and learn from others. Together, we can build faster, more scalable Node.js applications and drive innovation in the world of software development.
| raksbisht |
1,879,545 | The Power of Presentation: 12 Ways Pre-Roll Packaging Boosts Your Brand | It might not seem important, but pre-roll packaging, those little cases that hold those... | 0 | 2024-06-06T17:58:19 | https://dev.to/jamrio/the-power-of-presentation-12-ways-pre-roll-packaging-boosts-your-brand-1cp4 | prerollpackaging, customboxes, packaging, boxes | It might not seem important, but pre-roll packaging, those little cases that hold those single-serving weed smokes, is important. But they play a surprisingly significant role for companies in the weed game. Using **_[pre roll packing](https://elixirpackaging.com/pre-roll-packaging/)_** correctly can be a great way to market your business by improving your brand's image, bringing in new customers, and keeping your product safe. How to do it:
## 1. Building Brand Recognition:
The pre-roll box is like a small sign for your business. It's the first thing a customer sees and touches. Color schemes, patterns, and standout images all help build a strong brand personality that customers will remember. The more they see your brand on packages, the more familiar they become with and trust it.
## 2. Standing Out From the Crowd:
The cannabis market is crowded, so finding ways to make your product stand out is important. Distinctive pre-roll packaging can do just that. When designing yours, consider what makes your brand unique. Are you known for an organic product? Your packaging should use earthy colors and natural patterns. Do you focus on a specific type of customer, like high-end smokers? Choose sleek, classy patterns with shiny highlights.

## 3. Telling Your Story:
You can tell people about your business on pre roll packaging. Say what your company stands for and what makes your product unique in this spot. You could use short, catchy words or even QR codes that take people to your website or social media pages, where they can find out more.
## 4. Transparency is Key:
Being open is very important, especially in the weed business. People want to know what they're getting. Put clear information on your pre-roll package about the strain type, the amount of THC and CBD, and anything else important. Customers will trust you more, and it will help them make intelligent decisions.
## 5. Tamper Evident Packaging:
Safety and protection are very important. Pre-roll packaging must be made so it cannot be tampered with, ensuring your product reaches the customer exactly as you intended. Features like child-resistant locks, tamper-evident seals, or reflective stickers can be part of this.
## 6. Freshness First:
Cannabis pre-rolls can dry out and lose their potency if they aren't stored properly. Good pre-roll packaging is made of materials that help the rolls stay fresh. Choose products with airtight covers or resealable locks. Some companies even add humidity control packets so the pre-rolls stay at the right moisture level.
## 7. Portion Control:
With pre-rolls, you can control how much you eat and make it easier to use. That should show in the package. Ensure the packages are the right size for a single pre-roll so that customers only use a little by accident. This also helps with making a budget and smartly using things.
## 8. Sustainability Matters:
Customers care about the earth more and more. Choosing pre-roll packaging made from hemp or recycled materials shows that your company cares about the environment. Even better, you can use packaging that breaks down naturally or can be composted.
## 9. Durability is Key:
The **_[dispensary pre roll packaging](https://elixirpackaging.com/)_** has to be strong enough not to break or crush during shipping and handling. This keeps your product safe and ensures it looks its best when it reaches the customer. You should use strong cardboard or plastic packages that can be sealed well.
## 10. Convenience Counts:
Think about what your customers will use the package for. Is it easy to open? Is it easy to carry in a bag or pocket? These little things can significantly affect how the customer feels about the experience.
## 11. Upselling Opportunities:
You can do more with luxury pre roll packaging than store your goods. You could use some of the room to advertise other items in your line, like sweets or powders. You can also include coupons or information about a reward program to get people to come back.
## 12. Compliance is Crucial:
The rules about how to custom pre roll boxes weed can differ based on where you live. Make sure that the packing for your pre-rolls follows all laws and regulations. For example, child-proof lids, clear labels, and necessary warning signs are all part of this.
## Last words
In the end, pre-roll packaging does more than hold your goods. It can also tell people about your business, make it stand out on shelves, and earn their [trust](https://dev.to/). Style, usefulness, and ecology can make pre-roll packing a powerful marketing tool that raises your brand's profile and keeps people coming back for more.
### Pre-Roll Packaging FAQs
**1. What are the most important things to think about when picking out pre-roll packaging?**
Branding, security, and usefulness are the three most important things to think about. Your name should come through in your package, which should be visually appealing. It should also be strong enough to keep your item safe while you store and move it. The last thing is that it should be simple for people to open and use.
**2. Does pre-roll packaging have to be eco-friendly?**
Sustainability is becoming increasingly important to customers. Choosing green packaging made from hemp, recycled materials, or even biodegradable materials shows that you care about the planet and helps you attract customers who feel the same way.
**3. What are some ways I can sell with pre-roll packaging?**
Pre-roll packaging is a great way to promote your business. Include your name and company colors in clear, appealing designs. You could also add QR codes or short messages that link to your website or social media. You can even put ads for other items in your line on the package.
| jamrio |
1,878,857 | Running our First Docker Image | Introduction In the recent years, technological improvements across all industries have... | 27,622 | 2024-06-06T17:55:10 | https://dev.to/kalkwst/running-our-first-docker-image-4gpm | beginners, docker, devops, tutorial | ## Introduction
In recent years, technological improvements across all industries have dramatically increased the rate at which software products are demanded and consumed. Trends like agile development and continuous integration have further increased this demand. As a result, many organizations have opted to switch to cloud infrastructure.
Cloud infrastructure provides hosted virtualization, network and storage solution that can be used on a pay-as-you-go basis. These providers allow any organization (or individual) to sign up and gain access to infrastructure that would otherwise require a significant investment in space and equipment to build on-site or in a data center. Cloud providers such as Amazon Web Services and Microsoft Azure offer simple APIs that enable the development and provisioning of massive fleets of **virtual machines** almost immediately.
Deploying infrastructure to the cloud solved many of the issues faced by organizations using traditional solutions, but it also created new problems related to managing the cost of running these services at scale. In fact, these cloud costs were so significant that a whole new discipline had to be born.
VMs revolutionized infrastructure procurement by leveraging hypervisor technology to create smaller servers on top of larger hardware. The downside of virtualization was, however, how resource-intensive it is to run a VM. VMs themselves look, act and feel like real bare metal hardware, because hypervisors like Zen, KVM, or VMWare allocate resources to boot and manage an entire operating system image. The resources dedicated to VMs also make them large and quite difficult to manage. Also moving VMs between an on-prem hypervisor and the cloud, can potentially mean moving hundreds of gigabytes of data per VM.
To provide a greater degree of automation and optimize their cloud presence, companies find themselves moving towards containerization and microservices as a solution. Containers run software services within isolated sections of the kernel of the host operating system, a technique known as **process-level isolation**. This means that instead of running an entire operating system kernel per process to provide isolation, containers can share the kernel of the host OS to run multiple applications. This is accomplished though Linux kernel features known as **control groups** (or **cgroups**) and **namespace isolation**. With these features, a user can potentially run hundreds of containers that run individual application instances on a single VM.
This is in stark contrast to a traditional VM architecture. In general, when deploying a VM, the intention is to utilize that machine for running a single server or a small set of services. This results in an inefficient utilization of CPU resources that could be better allocated to additional tasks or serving more requests. One possible solution for this problem would be installing multiple services on a single VM. However, this can lead to significant confusion when trying to determine which machine is running which service. It also places the responsibility of hosting multiple software services and backend dependencies into a single OS.
A containerized microservice approach solves these problems by allowing the container runtime to schedule and run containers on the host OS. The container runtime does not care what application is running inside the container, but rather that a container exists and can be downloaded and executed on the host OS. It doesn't matter if the application running inside the container is a simple Python script, a Kibana installation or a legacy Cobol application. As long as the container is in a standard format, the container runtime will download the image and run the software within it.
Throughout this series, we will discuss the Docker container runtime and learn the basics of running containers both locally and at scale. In this post we are going to discuss the basics of running containers using the **docker run** command.
## Docker Engine
The **Docker Engine** is the interface that provides access to the process isolation features of the Linux kernel. Since only Linux exposes the features that allow containers to run, Windows and macOS hosts leverage a Linux VM in the background to make container execution possible. For Windows and macOS users, Docker provides the [Docker Desktop]([Docker Desktop: The #1 Containerization Tool for Developers | Docker](https://www.docker.com/products/docker-desktop/)) suite, that deploys and runs this VM in the background for us.
The Docker Engine also provides built-in features to build and test container images from source files known as **Dockerfiles**. When container images are built, they can be pushed to container **image registries**. An **image registry** is a repository of container images from which Docker hosts can download and execute container images.
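For example, fetching an image from the default registry (Docker Hub) into the local cache is a single command. These commands are illustrative and assume Docker is installed and able to reach the registry:

```shell
docker pull hello-world      # download the image from Docker Hub into the local cache
docker images hello-world    # confirm the image is now cached locally
```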
When a container is started, Docker will, by default, download the container image, store it in its local container cache, and finally execute the container's **entrypoint** directive. The **entrypoint** directive is the command that will start the primary process of the application. When this process stops, the container will also stop running.
Depending on the application running inside the container, the **entrypoint** directive might be a long-running server daemon that is available all the time, or could be a short lived script that will naturally stop when the execution is completed. Furthermore, many containers execute **entrypoint** scripts that complete a series of setup steps before starting the primary process.
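As a minimal sketch (this Dockerfile is illustrative, not taken from any published image), an entrypoint is declared like this; the container runs only as long as the `echo` process does, which in this case is an instant:

```dockerfile
# The ENTRYPOINT names the container's primary process.
# When this process exits, the container stops.
FROM alpine:3.19
ENTRYPOINT ["echo", "Hello from the entrypoint"]
```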
## Running Docker Containers
The lifecycle of a container is defined by the state of the container and the running processes within it. A container can be in a running or stopped state depending on the actions of the operator, the container orchestrator, or the state of the application running inside the container it self. For example, you can manually start or stop a container using the **docker start** or **docker stop** commands. Docker itself might also automatically restart or stop a container if it detects that the container entered an unhealthy state. Moreover, if the primary application running inside the container fails or stops, the container will also stop.
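The manual side of that lifecycle looks like this (illustrative commands; the container name `web` and the `nginx` image are assumptions, and a running Docker daemon is required):

```shell
docker run -d --name web nginx    # create and start a detached container named "web"
docker stop web                   # stop it (SIGTERM, then SIGKILL after a grace period)
docker start web                  # restart the same, previously stopped container
docker ps --filter name=web       # verify that it is running again
```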
In the following exercise we will see how to use the **docker run**, **docker ps** and **docker images** commands to start and view the status of a simple container.
### Running the Hello-World Container
Docker has published a **hello-world** container that is extremely small in size and simple to execute. This container demonstrates the nature of containers running a single process with a finite lifespan.
In this exercise, we will use the **docker run** command to start the **hello-world** container and the **docker ps** command to view the status of the container after it has finished execution. This will provide a basic overview of running containers locally.
Before you begin, make sure that your **Docker Desktop** instance is running if you are using Windows or macOS.
Enter the **docker run** command in a Bash terminal or PowerShell window. This instructs Docker to run a container called **hello-world**:
```powershell
docker run hello-world
```
The shell should return an output similar to the following:
```
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:8e3114318a995a1ee497790535e7b88365222a21771ae7e53687ad76563e8e76
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working
correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64)
3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal. To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit: https://docs.docker.com/get-started/
```
Let's see what we just did. We instructed Docker to run the container **hello-world**. So, Docker will first look in its local container cache for a container by that same name. If it doesn't find one, like in our case, it will look to a container registry on the internet, to try and find such an image. By default, docker will query Docker Hub for a published container image by that name.
As you can see from the logs, it was able to find a container called **library/hello-world** and begin the process of pulling in the container image layer by layer. We will take a closer look into container images and layers in a latter post of this series, *Getting started with Dockerfiles*.
Once the image has fully downloaded, Docker runs the image, which displays the **Hello from Docker** output. Since the main process of this image is to simply display that output, the container then stops itself and ceases to run after it displays the output.
---
Type the **docker ps** command to see what containers are running on your system:
```powershell
docker ps
```
This will return an output similar to the following:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
The output of the **docker ps** command is empty because it only shows currently running containers by default. This is similar to the Linux **ps** command, which only shows the running processes.
---
Use the **docker ps -a** command to display all the containers even the stopped ones:
```powershell
docker ps -a
```
In the output returned, you should see the **hello-world** instance:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0561352787ff hello-world "/hello" 4 minutes ago Exited (0) 4 minutes ago pedantic_mirzakhani
```
As you can see, Docker gave the container a unique ID. It also displays the **IMAGE** that was run, the **COMMAND** within that image that was executed, when the container was **CREATED**, and the **STATUS** of the process running that container, as well as a unique human-readable name. This particular container was created approximately 4 minutes ago and executed the program **/hello**. You can tell that the program completed successfully since it resulted in an **Exited (0)** status code.
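Once you are comfortable reading that table, you will often want just the container IDs (for example, to feed them into **docker rm**); **docker ps** supports this directly with **docker ps -aq**. As a rough sketch of what that extraction does, here is the same idea applied with **awk** to a hard-coded sample of the output above (the sample text is illustrative, not live Docker output):

```shell
# A hard-coded sample of `docker ps -a` output (illustrative only)
sample='CONTAINER ID   IMAGE         COMMAND    CREATED         STATUS                     PORTS   NAMES
0561352787ff   hello-world   "/hello"   4 minutes ago   Exited (0) 4 minutes ago           pedantic_mirzakhani'

# Skip the header row and print the first column: the container IDs
ids=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }')
echo "$ids"   # prints: 0561352787ff
```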
---
You can also query your system to see what container images Docker cached locally. Execute the **docker images** command to view the local cache:
```powershell
docker images
```
The returned output should display the locally cached container images:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest d2c94e258dcb 13 months ago 13.3kB
```
The only image cached so far is the **hello-world** container image. This image is running the **latest** version, which was created 13 months ago, and has a size of 13.3 kilobytes. From that output, you know that this Docker image is incredibly slim and that developers haven't published a code change for this image in 13 months. This output can be very helpful for troubleshooting differences between software versions in a real world scenario.
Since we simply told Docker to run the **hello-world** container without specifying a version, Docker pulled the latest version by default. You can request a specific version by appending a tag to your **docker run** command. For example, if the **hello-world** container image had a version **2.1**, you could run that version with the **docker run hello-world:2.1** command.
---
One of the benefits of containerization is the ability to easily run multiple instances of a software application: each time you execute the **docker run** command, a new container instance is created. To see how Docker handles multiple container instances, let's run the same **docker run** command again to create another instance of the **hello-world** container:
```powershell
docker run hello-world
```
You should see the following output:
```
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
```
Notice that, this time, Docker did not have to download the container image from Docker Hub again. This is because we now have that container image cached locally. Docker was able to directly run the container and display the output to the screen.
---
If we run the **docker ps -a** command again:
```powershell
docker ps -a
```
In the output, you should see that the second instance of this container image has completed its execution and entered a stopped state, as indicated by **Exited (0)** in the **STATUS** column of the output:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2aa43b348c33 hello-world "/hello" 4 hours ago Exited (0) 4 hours ago elegant_cray
0561352787ff hello-world "/hello" 6 hours ago Exited (0) 6 hours ago pedantic_mirzakhani
```
We now have a second instance of this container showing in the output. Each time you execute the **docker run** command, Docker will create a new instance of that container with its attributes and data. You can run as many instances of a container as your system resources can handle.
---
Check the base image again by executing the **docker images** command once more:
```powershell
docker images
```
The returned output will show the single base image that Docker created two running instances from:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest d2c94e258dcb 13 months ago 13.3kB
```
## Summary
In this post, we discussed the fundamentals of containerization, the benefits of running apps in containers, and the basic Docker lifecycle commands for managing container instances. We also discussed that containers serve as a universal software deployment package that truly can be built once and run anywhere.
Because we are running Docker locally, we can know for certain that the same container images running in our local environment can be deployed in production and run with confidence. | kalkwst |
1,879,543 | Créer une application en ligne de commande avec Rust | Introduction Dans cet article, nous allons construire ensemble une application en ligne de... | 0 | 2024-06-06T17:52:04 | https://damiencosset.dev/fr/posts/creer-application-ligne-commande-rust/ | french, rust, learning |
## Introduction
In this article, we are going to build a very simple command-line application with Rust. The goal of the program is quite basic: the user will give us a directory name, and we will create a new directory with that name. Pretty easy, right? Let's go!
## Setup
If you haven't installed Rust yet, you can check out the <a href="https://www.rust-lang.org/fr/learn/get-started" target="_blank">official documentation</a>; there is a lot of useful information in there!
Let's start by creating our new Rust project. To do so, we can run the following command:
`cargo new new_dir_cmd`
This will create a new directory called *new_dir_cmd*. Of course, you can replace that name with whatever you want.
If we take a look inside our new directory, we can see this:

If we run `cargo run` right away inside this directory, we end up with our good old "Hello World!".

Great, we are ready to code!
## Creating a new directory
First step: we are going to create a new directory. We will improve our program little by little.
Let's put the following code in our *main.rs*:
```rust
use std::fs;

fn main() -> std::io::Result<()> {
    fs::create_dir_all("/Users/damiencosset/Desktop/rust/new_dir_cmd/awesome_new_dir")?;
    Ok(())
}
```
We use the *fs* module, which comes from the *std* crate. Crates, in the context of Rust, are similar to packages in other languages. If you want more information about crates, you can check out the <a href="https://doc.rust-lang.org/book/ch07-01-packages-and-crates.html" target="_blank">documentation</a>.
We use the *create_dir_all* function exported by the module. It takes a path as an argument. For now, I am giving it a hard-coded path.
Finally, this function returns a *std::io::Result* type, so our main function has to return that type as well. That is why we add *-> std::io::Result<()>*. After creating the new directory, we simply return the *Ok* variant, which indicates that everything went fine and no error was encountered.
*Note*: The *?* character is used in Rust for error propagation. If we encounter an error while creating our directory, we return an *Err(error)*, *Err* being the second variant that *Result* can take, alongside *Ok*.
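To see the *?* operator in isolation, here is a small standalone sketch (the function name is made up for the example; it is not part of our program):

```rust
use std::num::ParseIntError;

// `?` returns early with the Err variant if `parse` fails;
// otherwise it unwraps the Ok value and execution continues.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?;
    Ok(n * 2)
}

fn main() {
    println!("{:?}", parse_and_double("21"));  // Ok(42)
    println!("{:?}", parse_and_double("abc")); // an Err, because parsing fails
}
```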
Let's run this code and see what happens:

It works! We just created our first new directory with Rust!
## Adding arguments to our command line
Obviously, this program is not very useful yet. The goal is for the user to be able to provide a directory name via the command line. Let's set that up.
### Adding the clap crate
To read arguments from our command line, we are going to use the *clap* crate. To see which crates have been added to our project, we can open the *Cargo.toml* file and look at what comes after:
```
[dependencies]
```
Any crates that have been added will show up here.
Let's add *clap* with the following command:
`cargo add clap --features derive`
The *features* flag allows us to use optional features of the crate. Here, we want to use the feature called *derive*. For more information about this crate, you can check out its <a href="https://docs.rs/clap/latest/clap/" target="_blank">documentation</a>. And if we take a look at our *Cargo.toml*, we now have a new line:
```
[dependencies]
clap = { version = "4.5.4", features = ["derive"] }
```
### Defining our arguments
Let's replace our existing code with this:
```rust
use clap::Parser;
//use std::fs;

#[derive(Parser)]
struct Cli {
    directory_name: String,
}

fn main() -> std::io::Result<()> {
    let args = Cli::parse();
    println!("{}", args.directory_name);
    // fs::create_dir_all("/Users/damiencosset/Desktop/rust/new_dir_cmd/awesome_new_dir")?;
    Ok(())
}
```
I have commented out the directory creation for the moment.
`#[derive(Parser)]` indicates that we are using a custom macro that implements the *Parser* trait. Put simply, a *trait* defines a set of methods (or functions). Here, we implement the set of methods (or functions) defined by *Parser* on our *Cli* struct. As a result, our *Cli* struct is able to call the implemented methods, such as *parse()*.
With this code, and the power of *clap*, we can read the arguments from our command line. Let's run our application again and give it an argument:

Perfect! We managed to print our argument. We can now read our directory name from the command line.
### Getting the current directory
Now, we have a slight problem. We are not going to provide the entire directory path on the command line; we will only give the name of the directory we want to create. So, our program needs to know where to create it. To keep things simple, we will take the directory we are currently in and append to it the directory name we just read from the command line.
We need to:
* get the path of the current directory
* append the directory name to that path
To get the path of the current directory, we can add these two lines of code:
```rust
// Import
use std::env;
// Inside the main function
let mut path = env::current_dir()?;
```
Note that we use the *mut* keyword. It means that the variable we just defined is mutable. Rust variables are immutable by default. By adding the *mut* keyword, we can modify our variable to append to it the name of the directory we want to create.
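As a small standalone sketch of what *mut* buys us here (the helper name below is made up; it is not part of our CLI):

```rust
use std::path::PathBuf;

// Appending a segment to a PathBuf requires the binding to be mutable.
fn build_target_path(base: &str, dir_name: &str) -> PathBuf {
    let mut path = PathBuf::from(base);
    path.push(dir_name);
    path
}

fn main() {
    let target = build_target_path("/tmp", "awesome_new_dir");
    println!("{}", target.display()); // /tmp/awesome_new_dir
}
```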
Combining all of this, our complete program looks like this:
```rust
use clap::Parser;
use std::env;
use std::fs;

#[derive(Parser)]
struct Cli {
    directory_name: String,
}

fn main() -> std::io::Result<()> {
    let args = Cli::parse();
    let mut path = env::current_dir()?;
    path.push(args.directory_name);
    fs::create_dir_all(path)?;
    Ok(())
}
```
Using *push()*, we can append our directory name to the path of the current directory. Let's see if everything works!

Woohoo! It works!
And just like that, we have created a very simple program with Rust that lets us create new directories.
I hope you found this interesting!
Have fun ❤
| damcosset |
1,879,542 | Node.js vs. PHP: Key Differences for Web Development | In the realm of web development, choosing the right technology stack is crucial. Node.js and PHP are... | 0 | 2024-06-06T17:50:54 | https://dev.to/raksbisht/nodejs-vs-php-key-differences-for-web-development-hml | node, php, nodejsvsphp, tutorial | In the realm of web development, choosing the right technology stack is crucial. Node.js and PHP are two popular backend technologies, each with its unique strengths and use cases. Understanding their differences can help developers make informed decisions for their projects. Let’s dive into the key differences between Node.js and PHP. 🚀
## 🧩 Core Differences
### 1. Language and Runtime
* **Node.js:** Node.js is a runtime environment that allows JavaScript to be executed on the server side. It uses the V8 engine, which is also used by Google Chrome, to run JavaScript code outside the browser.
* **PHP:** PHP (Hypertext Preprocessor) is a server-side scripting language designed specifically for web development. It is embedded within HTML and is executed on the server.
### 2. Concurrency Model
* **Node.js:** Uses an event-driven, non-blocking I/O model. This makes it highly efficient for handling multiple requests simultaneously, ideal for real-time applications.
* **PHP:** Traditionally uses a blocking I/O model, where each request is handled by a separate process or thread. This can be less efficient compared to Node.js for handling numerous concurrent connections.
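To make the contrast concrete, here is a minimal illustrative Node.js sketch (not taken from either runtime's documentation): scheduling asynchronous work does not block the statements that follow it.

```javascript
// Minimal sketch of Node's non-blocking model: scheduling async work
// does not block the code that follows it.
const order = [];

order.push("request received");

// Simulate non-blocking I/O (e.g., a database query) with a timer callback
setTimeout(() => {
  order.push("I/O finished");
}, 0);

// This runs immediately; the callback above fires later,
// once the current call stack is empty.
order.push("handler returned");

console.log(order.join(" -> ")); // request received -> handler returned
```

In a traditional blocking model, the equivalent of "handler returned" would only run after the I/O had completed.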
## ⚙️ Performance and Scalability
### 1. Node.js
* **Performance:** Node.js excels in performance for I/O-bound tasks due to its non-blocking architecture. It handles multiple connections with high throughput, making it suitable for real-time applications like chat servers and live updates.
* **Scalability:** Node.js is inherently scalable due to its event-driven nature. It can handle a large number of concurrent connections without significant performance degradation.
### 2. PHP
* **Performance:** PHP is generally efficient for CPU-bound tasks and synchronous operations. However, its traditional blocking I/O model can become a bottleneck for I/O-heavy applications.
* **Scalability:** PHP can be scaled using traditional methods such as load balancing and horizontal scaling, but it may require more resources compared to Node.js to achieve similar levels of concurrency.
## 🛠️ Development Environment
### 1. Node.js
* **Package Management:** Node.js uses npm (Node Package Manager), which is one of the largest ecosystems of open-source libraries. This allows developers to easily manage and install dependencies.
* **Flexibility:** Node.js offers flexibility in choosing architectures and design patterns, making it suitable for microservices and serverless architectures.
### 2. PHP
* **Package Management:** PHP uses Composer as its package manager, which is robust but not as extensive as npm.
* **Simplicity:** PHP’s simplicity and ease of use make it an excellent choice for small to medium-sized projects and content management systems (CMS) like WordPress.
## 🔧 Use Cases
### 1. Node.js
* **Real-Time Applications:** Perfect for chat applications, online gaming, and collaborative tools due to its non-blocking, event-driven nature.
* **API Services:** Ideal for building RESTful APIs and microservices.
### 2. PHP
* **Content Management Systems:** Dominates in CMS development with platforms like WordPress, Joomla, and Drupal.
* **E-commerce:** Widely used in e-commerce platforms like Magento and WooCommerce.
## 🤝 Community and Ecosystem
### 1. Node.js
* **Community:** A rapidly growing and vibrant community with a strong focus on modern web development practices.
* **Ecosystem:** Rich ecosystem with a plethora of libraries and frameworks such as Express.js, Koa.js, and NestJS.
### 2. PHP
* **Community:** A large and mature community with extensive documentation and support resources.
* **Ecosystem:** Well-established ecosystem with frameworks like Laravel, Symfony, and CodeIgniter.
## 🌟 Conclusion
Both Node.js and PHP have their strengths and are suited to different types of projects. Node.js is a great choice for real-time, high-concurrency applications, while PHP shines in traditional web applications and content management systems. The choice between Node.js and PHP ultimately depends on the specific needs of your project, your team’s expertise, and your scalability requirements.
**Happy coding! 🚀**
| raksbisht |
1,879,069 | Detailed description on core Azure architectural components | This article focuses on understanding the core architectural components of Azure which can be... | 0 | 2024-06-06T17:45:58 | https://dev.to/ikay/detailed-description-on-core-azure-architectural-components-4o09 | cloudcomputing, azure, regions, architecture | This article focuses on understanding the core architectural components of Azure which can be classified into two main groupings:
1. _Physical infrastructure._
2. _Management infrastructure._
Make sure you understand how regions and availability zones compare, as well as the Azure Resource Manager model. This is the way all resources are organized and deployed in Azure.

## Physical Infrastructure
The physical infrastructure for Azure starts with datacenters.
**Data Center**
They’re facilities with servers arranged in racks, with dedicated power, cooling, and networking infrastructure. Data centers are grouped into Azure Regions or Azure Availability Zones that are designed to help you achieve resiliency and reliability for your business-critical workloads.
**Regions**
A region is a geographical area on the planet that contains at least one, but potentially multiple, datacenters that are nearby and networked together with a low-latency network. When you deploy a resource in Azure, you'll often need to choose the region where you want your resource deployed; the region is the location on the planet where your services are hosted.
Here’s a view of all the available regions

**Availability Zones**
The availability zone is made up of one or more datacenters equipped with independent power, cooling, and networking. An availability zone is set up to be an isolation boundary. If one zone goes down, the other continues working.

It comes with two service offerings:
_Zonal services_: resources are pinned to a specific availability zone, and you can deploy to multiple zones as required to make services highly available, e.g. VMs, disks.
_Zone-redundant services_: data is automatically replicated across multiple availability zones for backup, e.g. storage, SQL.
To ensure resiliency, a minimum of three separate availability zones are present in all availability-zone-enabled regions. However, not all Azure regions currently support availability zones. And even with the additional resiliency that availability zones provide, it's possible that an event could be so large that it impacts multiple availability zones in a single region. To provide even further resilience, Azure has region pairs.
**Region pairs**
Most Azure regions are paired with another region within the same geography (such as US, Europe, or Asia) at least 300 miles away. This helps reduce the likelihood of interruptions caused by events such as natural disasters, civil unrest, power outages, or physical network outages that affect an entire region. For example, if a region in a pair were affected by a natural disaster, services would automatically fail over to the other region in its pair. Examples of region pairs in Azure are West US paired with East US, and South-East Asia paired with East Asia.

**Sovereign regions**
Sovereign regions are instances of Azure that are isolated from the main instance of Azure. They are generally used for compliance or legal purposes. Azure sovereign regions include:
- Government region: US DoD Central, US Gov Virginia, US Gov Iowa, etc.
- Partnered region: China East, China North, etc.
## Azure Management Infrastructure
The management infrastructure includes Azure resources, resource groups, subscriptions, accounts, and management groups. Let's understand them by their hierarchical arrangement.
**Resources**
A resource is the basic building block of Azure: anything you create, provision, or deploy, such as virtual machines (VMs), virtual networks, or Cosmos DB instances, is a resource. Users are billed for these resources based on their usage.
**Resource Groups**
Resource groups are simply logical groupings of resources. They can be organized by type of services, project definition, or organization requirement. Each resource must be part of only one resource group. Resources in the resource group can reside in different locations.

Resources can be moved between resource groups. Any action applied to a resource group applies to all the resources within it: if you delete a resource group, all of its resources are deleted, and if you grant or deny access to a resource group, you grant or deny access to all the resources within it. Resource groups can't be nested.
**Azure Subscriptions**
In Azure, subscriptions are a unit of management, billing, and scale. To create and use Azure services, you need an Azure subscription, which is linked to an Azure account, an identity in Azure Active Directory (Azure AD). After you've created an Azure account, you're free to create additional subscriptions. For example, a company might use a single Azure account for its business and separate subscriptions for the development, marketing, and sales departments.
**Management Groups**
A company dealing with multiple applications, multiple development teams, and multiple geographies may have many subscriptions and need a way to efficiently manage access, policies, and compliance across them. Azure management groups provide a level of scope above subscriptions: you organize subscriptions into containers called management groups and apply governance conditions to the management groups.

_In conclusion, Azure's architectural components form a robust and flexible foundation for building and deploying a wide range of cloud-based solutions. Whether you're developing a simple web application or a complex enterprise infrastructure, Azure provides the tools and services you need to succeed in the cloud. By leveraging the power of Azure's core components, you can unlock new opportunities for innovation and growth in the digital age._
Thanks for reading till the end. Please feel free to provide any question and feedback.
| ikay |
1,879,540 | 🚀 Understanding the YAGNI Principle in Software Development | The YAGNI principle, short for "You Aren’t Gonna Need It," is a fundamental concept in software... | 0 | 2024-06-06T17:40:47 | https://dev.to/raksbisht/understanding-the-yagni-principle-in-software-development-1dno | yagni, systemdesignprinciples, designprinciples, tutorial | The YAGNI principle, short for "You Aren’t Gonna Need It," is a fundamental concept in software development that encourages developers to focus on the present requirements rather than speculating about future needs. By adhering to YAGNI, teams can avoid unnecessary complexity and keep their codebases clean and maintainable. This article will delve into the YAGNI principle, its importance, and practical examples to illustrate its application.
##What is the YAGNI Principle? 🤔
The YAGNI principle is one of the core tenets of Agile development, emphasizing that developers should not add functionality until it is absolutely necessary. The idea is to:
* Avoid Premature Optimization: Focus on current requirements rather than trying to predict future needs.
* Reduce Waste: Save time and resources by not implementing features that may never be used.
* Maintain Simplicity: Keep the codebase simple and easier to manage.
## Why is YAGNI Important? 🏆
* Efficiency: By only implementing what is needed, developers can deliver value more quickly and efficiently.
* Flexibility: A simpler codebase is more adaptable to change, making it easier to respond to evolving requirements.
* Reduced Complexity: Avoiding unnecessary features keeps the codebase lean, reducing the risk of bugs and making it easier to understand and maintain.
## Practical Examples of YAGNI 📚
Let’s explore some examples to see how the YAGNI principle can be applied in real-world scenarios.
### Example 1: Future-Proofing a Function 🔢
Scenario: You are tasked with writing a function to calculate the sum of two numbers. You anticipate that future requirements might need the function to handle more complex mathematical operations like multiplication or division.
**Non-YAGNI Approach:**
```
function calculate(a, b, operation = 'sum') {
  if (operation === 'sum') {
    return a + b;
  } else if (operation === 'multiply') {
    return a * b;
  } else if (operation === 'divide') {
    if (b !== 0) {
      return a / b;
    } else {
      return 'Error: Division by zero';
    }
  } else {
    return 'Error: Unsupported operation';
  }
}
```
**YAGNI Approach:**
```
function calculateSum(a, b) {
  return a + b;
}
```
**Analysis:**
* The non-YAGNI approach adds complexity by including operations that are not currently required.
* The YAGNI approach focuses solely on the current requirement, making the function simpler and easier to understand.
### Example 2: Overengineering a Class 👨💻
Scenario: You need to create a User class with basic attributes like name and email. You think the system might eventually require features like user roles, permissions, and profile pictures.
**Non-YAGNI Approach:**
```
class User {
  constructor(name, email, role = 'user', permissions = [], profilePicture = null) {
    this.name = name;
    this.email = email;
    this.role = role;
    this.permissions = permissions;
    this.profilePicture = profilePicture;
  }
  setRole(role) {
    this.role = role;
  }
  addPermission(permission) {
    this.permissions.push(permission);
  }
  setProfilePicture(profilePicture) {
    this.profilePicture = profilePicture;
  }
}
```
**YAGNI Approach:**
```
class User {
  constructor(name, email) {
    this.name = name;
    this.email = email;
  }
}
```
**Analysis:**
* The non-YAGNI approach complicates the class with features that are not currently needed.
* The YAGNI approach keeps the class focused on the current requirements, making it simpler and easier to extend later if necessary.
### Example 3: Anticipating Future Database Fields 🗄️
Scenario: You are designing a database schema for a blog application that currently requires storing posts with a title and content. You speculate that in the future, you might need to store tags, categories, and comments.
**Non-YAGNI Approach:**
```
CREATE TABLE posts (
  id INT PRIMARY KEY,
  title VARCHAR(255),
  content TEXT,
  tags VARCHAR(255),
  categories VARCHAR(255),
  comments TEXT
);
```
**YAGNI Approach:**
```
CREATE TABLE posts (
  id INT PRIMARY KEY,
  title VARCHAR(255),
  content TEXT
);
```
**Analysis:**
* The non-YAGNI approach adds unnecessary fields that are not needed at the moment.
* The YAGNI approach includes only the essential fields, simplifying the schema and making future changes easier.
## How to Implement YAGNI in Your Workflow 💼
1. Focus on Immediate Requirements: Always start by addressing the current needs of the project.
2. Iterative Development: Use Agile practices like iterations or sprints to implement features incrementally.
3. Refactor Regularly: Regularly review and refactor the code to ensure it remains clean and aligned with current requirements.
4. Code Reviews: Encourage code reviews to catch instances of overengineering and unnecessary features.
5. Minimal Viable Product (MVP): Develop the simplest version of the product that delivers value, and iterate based on feedback.
## Conclusion 🎯
The YAGNI principle is a powerful guideline in software development, promoting simplicity, efficiency, and adaptability. By focusing on the present requirements and avoiding the temptation to anticipate future needs, developers can create more maintainable and robust systems. Remember, you aren’t gonna need it—until you do.
Feel free to share your thoughts or experiences with YAGNI in the comments. Happy coding! 👩💻👨💻
#SoftwareDevelopment #Agile #YAGNI #CleanCode #CodingPrinciples #Efficiency #TechTips #DeveloperLife #CodeQuality #Programming | raksbisht |
1,879,539 | Optimizing Matplotlib Performance: Handling Memory Leaks Efficiently | Introduction Memory management is a crucial aspect when dealing with large datasets and... | 0 | 2024-06-06T17:38:33 | https://dev.to/siddhantkcode/optimizing-matplotlib-performance-handling-memory-leaks-efficiently-5cj2 | matplotlib, datascience, performance, programming | ## Introduction
Memory management is a crucial aspect when dealing with large datasets and intensive plotting operations in Python. `matplotlib`, a popular plotting library, can sometimes exhibit memory leaks if not used correctly. This post discusses effective strategies to prevent memory leaks in `matplotlib.pyplot`, particularly focusing on the proper use of `plt.clf()` and `plt.close()`.
## Understanding the Problem
When creating numerous plots in a loop, improper handling of figure clearing and closing can lead to memory not being released, ultimately causing an `OutOfMemory` error. This issue is particularly prominent when plotting large datasets multiple times.
Consider the following example where memory leak issues can occur:
```python
import matplotlib.pyplot as plt
import numpy as np
import psutil

mem_ary = []

# Plot 10 times
for i in range(10):
    x = np.arange(1e7)
    y = np.arange(1e7)
    plt.plot(x, y)

    # ===================================================
    # Uncomment exactly one of the following patterns:
    # ===================================================
    # Pattern 1
    # plt.clf()

    # Pattern 2
    # plt.clf()
    # plt.close()

    # Pattern 3
    # plt.close()

    # Pattern 4
    # plt.close()
    # plt.clf()
    # ===================================================

    mem = psutil.virtual_memory().used / 1e9
    mem = round(mem, 1)
    mem_ary.append(mem)
```
## Experimental Setup
To understand how each method affects memory usage, we plotted graphs with large memory sizes 10 times, recording memory usage at the end of each plot. This experiment was conducted under four different patterns:
1. `plt.clf()`
2. `plt.clf() → plt.close()`
3. `plt.close()`
4. `plt.close() → plt.clf()`
Each pattern was tested by restarting the kernel to ensure a consistent memory usage baseline.
## Results and Conclusions
The memory usage for each pattern is visualized as follows:

### Key Observations:
- **Pattern 1 (`plt.clf()`)**: Memory usage alternates, resembling a mountain-like shape, which indicates incomplete memory clearance.
- **Pattern 2 (`plt.clf() → plt.close()`)**: Memory usage remains flat, demonstrating effective memory clearance.
- **Pattern 3 (`plt.close()`)**: Memory usage increases linearly, indicating a memory leak.
- **Pattern 4 (`plt.close() → plt.clf()`)**: Memory usage increases similarly to Pattern 3, also showing a memory leak.
### Effective Solution
The combination of `plt.clf()` followed by `plt.close()` (Pattern 2) proved to be the most effective in preventing memory leaks. This pattern ensures that all allocated memory is properly freed after each plot.
### Incorrect Order
Reversing the order (`plt.close() → plt.clf()`) did not release memory effectively. Closing the figure before clearing it prevents the clearing process from freeing up the allocated memory, leading to a leak.
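One way to make the correct order hard to get wrong is to wrap it in a small context manager. The sketch below is my own suggestion, not part of the original experiment; it takes the plotting module as a parameter, so the cleanup logic can even be exercised without matplotlib installed:

```python
from contextlib import contextmanager

@contextmanager
def managed_figure(plt):
    """Yield a fresh figure and always run clf() then close() afterwards."""
    fig = plt.figure()
    try:
        yield fig
    finally:
        # The effective order: clear the figure first, then close it
        plt.clf()
        plt.close(fig)

# Usage (assuming matplotlib is available):
#   import matplotlib.pyplot as plt
#   with managed_figure(plt) as fig:
#       plt.plot(x, y)
#       plt.savefig('plot.png')
```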
## Practical Implementation
Here’s a practical implementation to prevent memory leaks using multiprocessing:
```python
from multiprocessing import Pool
import matplotlib.pyplot as plt
import numpy as np
import psutil

# Plotting method
def plot(args):
    x, y = args
    plt.plot(x, y)
    plt.tight_layout()
    plt.savefig('plot.png')
    plt.clf()
    plt.close()

# Plot values
x = np.arange(1e7)
y = np.arange(1e7)

# Create a process pool and perform plotting
p = Pool(1)
p.map(plot, [(x, y)])
p.close()

# Verify memory release
for i in range(10):
    x = np.arange(1e7)
    y = np.arange(1e7)
    p = Pool(1)
    p.map(plot, [(x, y)])
    p.close()
    mem = psutil.virtual_memory().free / 1e9
    print(i, f'Memory free: {mem} [GB]')
```
## Summary
Proper memory management is critical when working with `matplotlib` for intensive plotting tasks. The combination of `plt.clf()` and `plt.close()` effectively prevents memory leaks, ensuring that memory is properly released after each plot. This method is particularly useful when handling large datasets and generating numerous plots in a loop.
By following these guidelines, you can prevent memory leaks and ensure efficient use of resources in your Python plotting applications.
---
For more tips and insights on security and log analysis, follow me on Twitter [@Siddhant_K_code](https://x.com/Siddhant_K_code) and stay updated with the latest & detailed tech content like this.
---

# DevTools guide for web developers

*Published 2024-06-06 · https://10xdev.codeparrot.ai/devtools-guide-for-web-developers · tags: webdev, debugging, chromedevtools*

> Learn how to use Chrome DevTools to inspect, debug, and optimize your web applications like a pro.
# Introduction to Chrome DevTools
Chrome DevTools is a set of web developer tools built directly into the Google Chrome browser. It allows you to inspect and debug your code, optimize performance, and understand how your application is working. To open DevTools, you can right-click on any webpage and select "Inspect" or use the keyboard shortcut `Ctrl+Shift+I` (Windows/Linux) or `Cmd+Option+I` (Mac).
## Key Features of Chrome DevTools
### 1. **Elements Panel**
The Elements panel is where you can inspect and modify the HTML and CSS of a webpage. It allows you to see the structure of the webpage and make real-time changes to see how they affect the layout and styling.
#### Changing an Element's Style
1. Open the Elements panel.
2. Select an element by clicking on it in the DOM tree. You can change the text for fun.
3. In the Styles panel, observe and modify the CSS properties.

You can change the styles and content and see the effect immediately.
### 2. **Console Panel**
The Console panel is a powerful tool for debugging JavaScript. You can log messages, run JavaScript on the fly, and see errors and warnings.
You can do logs for debugging in your react app and see the logs in the console.
For more information, check out the blog [Debugging beyond console.log() in JavaScript](https://dev.to/codeparrot/debugging-beyond-consolelog-in-javascript-32g6).
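Beyond plain `console.log`, the Console supports structured and grouped output that can make debugging faster. A few standard `console` methods you can paste straight into the panel:

```javascript
const users = [
  { name: "Ada", role: "admin" },
  { name: "Linus", role: "user" },
];

console.table(users); // renders the array as an interactive table

console.group("Auth flow"); // indents subsequent logs under a collapsible group
console.log("token refreshed");
console.warn("session expires soon");
console.groupEnd();

console.time("parse"); // measures elapsed time between time() and timeEnd()
JSON.parse(JSON.stringify(users));
console.timeEnd("parse");
```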
### 3. **Sources Panel**
The Sources panel allows you to view and debug your JavaScript code. It's less useful if you are using a framework like React, Angular, or Vue, but it's very handy when working with vanilla JavaScript.
You can check out [React Developer Tools](https://chrome.google.com/webstore/detail/react-developer-tools/fmkadmapgofadopljbjfkapdkoienihi) for debugging React applications.

### 4. **Network Panel**
The Network panel is essential for understanding network activity and performance. You can see all the network requests made by your webpage, including XHR and Fetch requests, and inspect the details of each request.
#### Analyzing a Network Request
1. Open the Network panel.
2. Reload the page to capture the network requests.
3. Click on any request to see details such as headers, response, and timing.

#### Features of the Network Panel
1. **Filters**: Use filters to view specific types of requests, such as documents, scripts, stylesheets, images, and XHR. This helps in isolating specific types of network traffic.
2. **Timing**: The timing breakdown shows how long each request took, broken down into DNS lookup, connection, request, response, and download phases. This can help identify bottlenecks.
3. **Initiator**: See what caused a request to be made (e.g., a script or an HTML element).
4. **Throttling**: Simulate slower network conditions to test how your application performs on various network speeds (e.g., 3G, 4G).
5. **Preserve Log**: Keep network logs after a page reload, which is useful for debugging issues that occur during page navigation.
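The timing phases described in point 2 are also exposed to scripts via the browser's Resource Timing API, so you can automate the same breakdown the Network panel shows. A small sketch (the helper name is ours; values are in milliseconds):

```javascript
// Summarize the main timing phases for resource entries, mirroring the
// Network panel's timing breakdown.
function summarizeResourceTimings(entries) {
  return entries.map((entry) => ({
    name: entry.name,
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    connect: entry.connectEnd - entry.connectStart,
    ttfb: entry.responseStart - entry.requestStart, // time to first byte
    download: entry.responseEnd - entry.responseStart,
  }));
}

// In a browser console:
// console.table(summarizeResourceTimings(performance.getEntriesByType("resource")));
```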
### 5. **Performance Panel**
The Performance panel helps you analyze your webpage's runtime performance. You can record performance profiles to identify bottlenecks and optimize your code.
#### Recording a Performance Profile
1. Open the Performance panel.
2. Click the "Record" button and perform the actions you want to analyze.
3. Click "Stop" to finish recording.

The Performance panel will show you a detailed breakdown of the recorded activities, including scripting, rendering, and painting.
#### Features of the Performance Panel
1. **Flame Chart**: The flame chart shows a visual representation of the call stack over time. The wider the bar, the longer it took. This helps in identifying performance bottlenecks.
2. **Summary**: The summary provides an overview of the main categories of activities (e.g., scripting, rendering, painting) and their respective times.
3. **FPS Chart**: Frames per second (FPS) chart shows how smoothly your page is running. Lower FPS can indicate performance issues.
4. **Heaviest Stack**: Identifies the call stack that took the most time, helping you pinpoint the most significant performance issues.
5. **Memory Usage**: Monitor memory usage to detect potential memory leaks or excessive memory consumption.
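You can also make your own code show up in the Performance panel's Timings track with the User Timing API (`performance.mark` / `performance.measure`); a minimal example:

```javascript
// Wrap a section of work in marks, then measure the span between them.
performance.mark("render-start");
let total = 0;
for (let i = 0; i < 1e6; i++) total += i; // stand-in for expensive work
performance.mark("render-end");

// measure() records a named span between the two marks and returns the entry
const measure = performance.measure("render", "render-start", "render-end");
console.log(`render took ${measure.duration.toFixed(2)} ms`);
```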
### 6. **Application Panel**
The Application panel gives you insight into your webpage's storage. You can inspect cookies, local storage, session storage, and IndexedDB.
#### Viewing Cookies
1. Open the Application panel.
2. Expand the "Cookies" section.
3. Select your domain to see all the cookies stored for that webpage.

### 7. **Security Panel**
The Security panel provides information about the security of your webpage. It shows you the security state of your connection, including HTTPS and certificate details.
#### Inspecting the Security State
1. Open the Security panel.
2. Look at the summary to see if the connection is secure.
3. Click on the details to inspect the certificate.

## Extending Chrome DevTools with Extensions
Chrome DevTools can be further enhanced with extensions that provide additional functionality. Here are some popular extensions that can be added to your DevTools toolkit:
### 1. **React Developer Tools**
If you're working with React, the React Developer Tools extension is a must-have. It allows you to inspect React component hierarchies, view props and state, and understand the structure of your React applications.
#### Inspecting React Components
1. Install the [React Developer Tools](https://chrome.google.com/webstore/detail/react-developer-tools/fmkadmapgofadopljbjfkapdkoienihi) extension.
2. Open DevTools and navigate to the "Components" tab.
3. Select a component to view its props, state, and context.

Note: It won't work if you are using a production build of React. You can use the development build to make it work.
### 2. **Redux DevTools**
Redux DevTools provides powerful tools for inspecting and debugging Redux state changes. You can view the state tree, action history, and time-travel through state changes.
#### Viewing Redux State
1. Install the [Redux DevTools](https://chrome.google.com/webstore/detail/redux-devtools/lmhkpmbekcpmknklioeibfkpmmfibljd) extension.
2. Open DevTools and navigate to the "Redux" tab.
3. Explore the state tree and action history.

### 3. **React Profiler**
The React Profiler extension helps you analyze the performance of your React applications. You can record and visualize the rendering behavior of your components to identify performance bottlenecks.
#### Profiling React Components
Make sure React Developer Tools is installed.
1. Open DevTools and navigate to the "Profiler" tab.
2. Click the "Record" button to start profiling.
3. Interact with your React application to trigger re-renders.
4. Click the "Stop" button to end the recording session.


For more information, check out the blog [Optimize React Components with the React Profiler](https://dev.to/codeparrot/optimize-react-components-with-the-react-profiler-4184).
Note: React Profiler can only be used in development mode.
### 4. **Lighthouse**
Lighthouse is an open-source, automated tool for improving the quality of web pages. It provides audits for performance, accessibility, SEO, and more.
#### Running a Lighthouse Audit
1. Open DevTools and navigate to the "Lighthouse" tab.
2. Click "Generate report" to run an audit.
3. Review the audit results and follow the recommendations to improve your webpage.
## Some more tips
1. **Use Keyboard Shortcuts**: Learn and use keyboard shortcuts to speed up your workflow. For example, `Ctrl+Shift+J` (Windows/Linux) or `Cmd+Option+J` (Mac) to open the Console panel directly.
2. **Leverage Workspaces**: Workspaces allow you to save changes made in DevTools directly to your local files. Set up a workspace to streamline your development process.
3. **Use Device Mode for Responsive Design**: Test your webpage on different screen sizes and resolutions using the Device Mode in the Elements panel.
4. **Throttle Network Conditions**: Simulate different network speeds to test how your webpage performs under various conditions. This can help you optimize for slower connections.
## Conclusion
By now, you should be equipped with the knowledge to inspect, debug, and optimize your web applications like a pro. Whether you're tweaking CSS on the fly, profiling JavaScript performance, or diving into network requests, Chrome DevTools has got your back. Remember to explore the various panels, experiment with different features, and stay curious about how you can improve your development workflow.
For more detailed information and tutorials, be sure to check out the official [Chrome DevTools documentation](https://developer.chrome.com/docs/devtools/).
*Author: mvaja13*
---

# Disable the right-click context menu in JavaScript?

*Published 2024-06-06 · https://dev.to/manojkumar20/disable-the-right-click-context-menu-in-javascript-gin · tags: webdev, javascript, programming, tutorial*

The context menu, accessible by right-clicking on a webpage element, is a handy tool for users. It offers quick actions like copying text, saving images, or inspecting the page code. But have you ever wondered if it's possible to disable this menu?
Yeah, we can disable the context menu! Let's see how we can implement that on our webpages.
## Disable for Whole Page
```JavaScript
window.addEventListener("contextmenu", e => e.preventDefault());
```
## Disable using oncontextmenu
```HTML
<body oncontextmenu="return false;">
<div>
Page Content
</div>
</body>
```
## Disable for a specific element

```JavaScript
const disableClick = document.getElementById("component");
disableClick.addEventListener("contextmenu", e => e.preventDefault());
```
## Limitations and Considerations
- **Ineffective Protection:** Disabling the context menu doesn't truly safeguard your content. Savvy users can still access the source code or copy images using browser developer tools or extensions.
- **User Workarounds:** Tech-savvy users can bypass these methods, rendering them ineffective.
- **Browser Safeguards:** Modern browsers often have built-in features that prevent websites from completely disabling the context menu.
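Given these limitations, if you still want the behavior, scoping it (as in the element example above) keeps the rest of the page usable. A guarded sketch that blocks the menu only on images — the helper name is ours, and the `typeof document` check simply makes the snippet safe to load outside a browser:

```javascript
// Decide whether to block the menu for a given element
function shouldBlockContextMenu(target) {
  // Block only on images; all other elements keep the default menu
  return Boolean(target && target.tagName === "IMG");
}

// Attach the listener only in a browser environment
if (typeof document !== "undefined") {
  document.addEventListener("contextmenu", (e) => {
    if (shouldBlockContextMenu(e.target)) e.preventDefault();
  });
}
```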
---

# The Power of Code Reviews: A Junior Developer’s Perspective

*Published 2024-06-06 · https://dev.to/raksbisht/the-power-of-code-reviews-a-junior-developers-perspective-5c87 · tags: codereview, beginners, productivity, tutorial*

Entering the world of software development as a junior can be both exciting and overwhelming. One practice that significantly aids in this journey is the code review process. Let’s explore how code reviews help junior developers grow and how they feel about this invaluable practice.
## 1. Learning from the Best 📚
Code reviews offer a unique opportunity for juniors to learn from their more experienced colleagues. When seniors review code, they often provide insights into best practices, efficient coding techniques, and potential pitfalls to avoid. This feedback helps juniors improve their skills and write better code.
## 2. Catching Mistakes Early 🐛
No one writes perfect code on the first try. Code reviews help catch bugs and errors early in the development process. For juniors, this means they can learn to identify and fix issues before they become bigger problems, improving the overall quality of the project.
## 3. Building Confidence 💪
Receiving constructive feedback and seeing how their code improves over time boosts a junior developer’s confidence. It reassures them that they are on the right track and that they are capable of growing and excelling in their role.
## 4. Encouraging Collaboration 🤝
Code reviews foster a culture of collaboration. Juniors get to interact with seniors, ask questions, and engage in meaningful discussions about the code. This collaborative environment makes the team stronger and more cohesive.
## 5. Feeling Supported 😊
For many juniors, the thought of their code being scrutinized can be intimidating. However, when the feedback is given in a supportive and constructive manner, it can be incredibly motivating. Knowing that their team is invested in their growth helps juniors feel valued and supported.
## A Junior’s Perspective 🧑💻
As a junior developer, here’s how I feel about code reviews:
1. Grateful for the Learning Opportunity: I appreciate the detailed feedback and the chance to learn from my mistakes. It’s like having a personal mentor guiding me.
2. Motivated to Improve: Seeing my code get better with each review motivates me to keep improving and learning.
3. Part of the Team: Code reviews make me feel like an integral part of the team. I know my contributions matter and that my growth is important to my colleagues.
4. Confident in My Skills: With each review, I gain more confidence in my coding abilities. I know I’m progressing and becoming a better developer.
## Conclusion 🌟
Code reviews are more than just a process; they are a crucial part of a junior developer’s growth. They provide learning opportunities, build confidence, encourage collaboration, and foster a supportive environment. For juniors, code reviews are a stepping stone to becoming skilled, confident, and integral members of their development teams.
Embrace code reviews, learn from them, and watch yourself grow into a seasoned developer! 🚀
Feel free to share your experiences or thoughts on code reviews in the comments. Let’s continue the conversation and support each other in our development journeys!
### #CodeReview #JuniorDeveloper #SoftwareDevelopment #LearnToCode #DevCommunity #TechGrowth #CareerDevelopment #Collaboration #CodingLife #TechSupport
---

# Leveraging Wasp for full-stack development

*Published 2024-06-06 · https://blog.logrocket.com/leveraging-wasp-full-stack-development · tags: wasp, webdev*

**Written by [Wisdom Ekpotu](https://blog.logrocket.com/author/wisdomekpotu/)✏️**
[Wasp](https://wasp-lang.dev/), or Web Application Specification, is a declarative full-stack framework introduced in 2020 with the primary goal of addressing the complexities inherent in modern web development. Traditionally, developers had to be concerned about setting up authentication, managing databases, client-to-server connections, etc., but repetitive tasks take time away from actually developing the software.
Wasp takes care of this so that developers can write less code and focus more on the business logic of their applications. Developers can define their application structure and functionalities within a simple configuration file (`main.wasp`) using a domain-specific language (DSL) that Wasp understands.
Wasp then generates the boilerplate code for the frontend (React), backend (Node.js), and data access layer (Prisma). This reduces the cognitive load, minimizes the risk of inconsistencies, and promotes better maintainability of the codebase.
In this article, we will explore how Wasp simplifies full-stack development by building a demo application to demonstrate Wasps’s features, including setting up authentication and database handling.
## Wasp's architecture
At the heart of Wasp's architecture lies the Wasp Compiler (built with Haskell), which is responsible for processing the Wasp domain-specific language (DSL) and generating the full source code for your web application in the respective stacks.
Here is a diagrammatic representation of how it works: 
### Why you should use Wasp for your project
* **Ships quickly**: With Wasp, the time from an idea to a fully deployed production-ready web app is greatly reduced
* **No vendor lock-in**: You can deploy your Wasp app anywhere to any platform of your choice and have full control over the code
* **Less boilerplate code**: Wasp comes with less boilerplate code, which makes it easy to maintain, understand, and upgrade
### Key features of Wasp
* **Full-stack authentication:** One of the standout features of Wasp is its robust, out-of-the-box authentication system, which includes [pre-built login and signup UI](https://wasp-lang.dev/docs/auth/ui) components for quick integration
* **Emails:** There is built-in support for sending emails directly from your app using your favorite email providers such as [SMTP](https://wasp-lang.dev/docs/advanced/email#using-the-smtp-provider), [Mailgun](https://wasp-lang.dev/docs/advanced/email#using-the-mailgun-provider), or [SendGrid](https://wasp-lang.dev/docs/advanced/email#using-the-sendgrid-provider)
* **Typesafe RPC layer**: Wasp provides a client-server layer that brings together your data models and all server logic closer to your client
* **Type safety**: Wasp also offers full-stack type safety in TypeScript with auto-generated types that span the whole application stack
### Wasp vs. Next.js/Nuxt.js/Gatsby
You might be asking, is Wasp not just another frontend framework? Yes, but it easily separates itself from the rest. Unlike Next.js, Nuxt.js, and Gatsby, which mainly focus on frontend development, Wasp is truly full-stack. It comes repacked with all your frontend and backend/database needs taken care of so that you don't have to integrate them separately.
Additionally, Wasp is being developed to be framework agnostic, so there should be more support for other frameworks and tools in the future.
## Building a full-stack application with Wasp
In this section, you will build a [Google Keep](https://keep.google.com/)-like app to demonstrate how a basic CRUD application can be built with Wasp.
To get the most from this tutorial, you‘ll need the following:
* Familiarity with JavaScript, React, and Node.js
* Basic knowledge of Tailwind CSS, a React component library
* Any IDE (I recommend [Visual Studio Code](https://code.visualstudio.com/))
* Node.js >= v18 installed on your local machine (visit the [official Node.js page for instructions](https://nodejs.org/en/?ref=horizon-documentation))
* A modern terminal shell such as [zsh](https://opensource.com/article/19/9/getting-started-zsh), [iTerm2](https://iterm2.com/) with [oh-my-zsh](https://ohmyz.sh/) for Mac, or [Hyper](https://hyper.is/) for Windows
* A browser such as Chrome, Microsoft Edge, Brave, or Firefox
* An active [Fly.io](http://Fly.io) account
* [Docker installed](https://www.docker.com/get-started)
You can find the code files for this tutorial on [GitHub](https://github.com/wisdomekpotu/wasp-fullstack).
### Installing Wasp
You'll need to install the Wasp onto your local machine to get started. For Linux/macOS, open your terminal and run the command below:
```bash
curl -sSL https://get.wasp-lang.dev/installer.sh | sh
```
You might encounter an error when trying to install Wasp on the newer Apple silicon MacBooks because the Wasp binary is built for x86 and not for arm64 (Apple silicon). To proceed, you will have to install [Rosetta](https://support.apple.com/en-us/102527). Rosetta helps Apple silicon Macs run apps built for Intel Macs.
To quickly mitigate this issue, run the following command:
```bash
softwareupdate --install-rosetta
```
If you plan to work with Wasp on a Windows PC, it is recommended to do so with [Windows Subsystem for Linux](https://wasp-lang.dev/blog/2023/11/21/guide-windows-development-wasp-wsl) (WSL):  Note that the installation may take a while. For context, it took approximately 35 minutes to complete on my M1 MacBook.
### Starting a Wasp project
Run the following command to start your new Wasp project:
```bash
wasp new
```

```bash
cd <your-project-name>
wasp start
```
Now your Wasp project should be running on [localhost:3000](http://localhost:3000/):  This is what the folder structure should look like:
```
├── .wasp
├── public
├── src
│ ├── Main.css
│ ├── MainPage.jsx
│ ├── vite-env.d.ts
│ └── waspLogo.png
├── .gitignore
├── .waspignore
├── .wasproot
├── main.wasp
├── package.json
├── package-lock.json
├── tsconfig.json
├── vite.config.ts
```
## Setting up Tailwind
To add Tailwind CSS to your Wasp project, follow these steps. First, create a `tailwind.config.cjs` file in the root directory, then add the following code to it:
```javascript
// ./tailwind.config.cjs
const { resolveProjectPath } = require('wasp/dev');
/** @type {import('tailwindcss').Config} */
module.exports = {
content: [resolveProjectPath('./src/**/*.{js,jsx,ts,tsx}')],
theme: {
extend: {},
},
plugins: [],
};
```
Next, create a `postcss.config.cjs` file with the code below:
```javascript
module.exports = {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
};
```
Next, update the `src/Main.css` file to include the Tailwind directives:
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```
Make sure you use the `.cjs` file extension and not `.js` so that the files can be detected by Wasp. Also, to make sure that your changes get picked up by Wasp, consider restarting Wasp using the `wasp start` command.
Now your Wasp project is all set to use Tailwind CSS.
## Architecting our database
To set up the schema for your database, you have to make use of [Wasp Entities](https://wasp-lang.dev/docs/data-model/entities). Entities are how you define where data gets stored in your database. This is usually done using the Prisma Schema Language (PSL) and is defined between the `{=psl psl=}` tags.
In your `main.wasp` file, add the following code:
```wasp
// ...
entity Note {=psl
id Int @id @default(autoincrement())
title String
description String
psl=}
```
This code defines a Prisma schema for a `Note` entity with three fields: `id`, `title`, and `description`. The `id` field is the primary key and will be automatically generated.
Next up, you have to update the schema to include this newly added entity. To do this, run the code below:
```bash
wasp db migrate-dev
```
 Don't forget to stop Wasp from running before running this command. Also, anytime you make changes to your entity definition, you have to run the command so that it syncs.
Now let's take a look at our database. In your terminal, run the following code:
```bash
wasp db studio
```
 Click on the **Note** model:  This is where the records of our database will be stored.
## Interacting with the database
Wasp offers two main types of operations when interacting with entities: queries and actions. Queries allow you to request data from the database, while actions allow you to create, modify, and delete data.
### Querying the database
First, you have to declare your query in the `main.wasp` file like so:
```wasp
...
query getNotes {
fn: import { getNotes } from "@src/queries",
entities: [Note]
}
```
After declaring the query, create a file called `queries.js` in the `./src` directory and add the following code:
```javascript
export const getNotes = async (args, context) => {
return context.entities.Note.findMany({
orderBy: { id: 'asc' },
});
};
```
This code exports a function called `getNotes`, which fetches a list of notes from our database in ascending order.
## Connecting to the frontend
### Building the UI
The UI will be divided into two components: `AddNote.jsx` and `Notes.jsx`.
This is the code for the `AddNote.jsx` component:
```javascript
export default function AddNote() {
return (
<form className='max-w-xl mt-20 mx-auto' onSubmit={handleSubmit}>
<div className='w-full px-3'>
<input
type='text'
name='title'
placeholder='Enter note title'
className='focus:shadow-soft-primary-outline text-sm leading-5.6 ease-soft block w-full appearance-none rounded border border-solid border-gray-300 bg-white bg-clip-padding px-3 py-2 font-normal text-gray-700 outline-none transition-all placeholder:text-gray-500 focus:border-fuchsia-300 focus:outline-none'
></input>
<br />
<textarea
rows='4'
name='description'
placeholder='Enter note message'
className='resize-none appearance-none block w-full bg-gray-200 text-gray-700 border border-gray-200 rounded py-3 px-4 mb-3 leading-tight focus:outline-none focus:bg-white focus:border-gray-500'
></textarea>
</div>
<div className='flex justify-between w-full px-3'>
<div className='md:flex md:items-center'></div>
<button
className='shadow bg-indigo-600 hover:bg-indigo-400 focus:shadow-outline focus:outline-none text-white font-bold py-2 px-6 rounded'
type='submit'
>
Add Note
</button>
</div>
</form>
);
}
```
The following is the code for the `Notes.jsx` component:
```javascript
import { getNotes, useQuery} from 'wasp/client/operations';
export default function Notes() {
const { data: notes, isLoading, error } = useQuery(getNotes);
return (
<div className='container px-0 py-0 mx-auto'>
{notes && <Note notes={notes} />}
{isLoading && 'Loading...'}
{error && 'Error: ' + error}
</div>
);
}
export function Note({ notes }) {
if (!notes?.length) return <div>No Note Found</div>;
return (
<div className='flex flex-wrap -m-4 py-6'>
{notes.map((note, idx) => (
<div className='p-4 md:w-1/3' note={note} key={idx}>
<div className='h-full border-2 border-gray-200 border-opacity-60 rounded-lg overflow-hidden'>
<div className='p-6'>
<h1 className='title-font text-lg font-medium text-gray-900 mb-3'>
{note.title}
</h1>
<p className='leading-relaxed mb-3'>{note.description}</p>
<div className='flex items-center flex-wrap '>
<a
className='text-indigo-500 inline-flex items-center md:mb-2 lg:mb-0'
onClick={() => removeNote(note.id)}
>
Delete note
</a>
</div>
</div>
</div>
</div>
))}
</div>
);
}
```
In this code, we invoked our query using the `useQuery` Hook to fetch any note available in our database. The `Note` component takes a `notes` prop and maps over the array of notes to render each note as a card with a title, description, and delete button.
You will see that it shows `No Note Found`. This is because the database is empty, as we are yet to add a record to it: 
## Adding notes to a database
As stated earlier, this would be accomplished using actions. Just like queries, we have to declare an action first.
Modify the `main.wasp` file as so:
```wasp
...
action createNote {
fn: import { createNote } from "@src/actions",
entities: [Note]
}
```
Now create a `actions.js` file in the `./src` directory with the code below:
```javascript
export const createNote = async (args, context) => {
return context.entities.Note.create({
data: { title: args.title, description: args.description },
});
};
```
This function `createNote` takes in two arguments: `args` and `context`, and creates a new note in the database with the given title and description, which are extracted from the `args` object.
In your `AddNote.jsx` component, add the following code:
```javascript
import { createNote } from 'wasp/client/operations';
export default function AddNote() {
const handleSubmit = async (event) => {
event.preventDefault();
try {
const target = event.target;
const title = target.title.value;
const description = target.description.value;
target.reset();
await createNote({ title, description });
} catch (err) {
window.alert('Error: ' + err.message);
}
};
...
```
Here, we import the `createNote` action (operation) and set up a function to submit the titles and descriptions obtained from the inputs to our database while resetting the input fields to empty.
## Deleting notes
As usual, we have to declare the delete action in the `main.wasp` file:
```wasp
action deleteNote {
fn: import { deleteNote } from "@src/actions",
entities: [Note]
}
```
Then, update our actions file with this code:
```javascript
...
export const deleteNote = async (args, context) => {
return context.entities.Note.deleteMany({ where: { id: args.id } });
};
```
Next up, modify the `Notes.jsx` with the code below:
```javascript
...
export function Note({ notes }) {
if (!notes?.length) return <div>No Note Found</div>;
const removeNote = (id) => {
if (!window.confirm('Are you sure?')) return;
try {
// Call the `deleteNote` operation with this note's ID as its argument
deleteNote({ id })
.then(() => console.log(`Deleted note ${id}`))
.catch((err) => {
throw new Error('Error deleting note: ' + err);
});
} catch (error) {
alert(error.message);
}
};
...
```
Essentially, we created a `removeNote` function to handle deleting notes by their IDs. And that's it! Go ahead and test it out.
## Adding authentication
Now that your app is fully functional, let's add user authentication to allow users to sign in to create notes and to show only the notes belonging to the logged-in user.
Begin by creating a `User` entity in the `main.wasp` file:
```wasp
// ...
entity User {=psl
id Int @id @default(autoincrement())
psl=}
```
Still in your `main.wasp` file, add the auth configuration. In our case, we would make use of sign-in by `usernameAndPassword`:
```wasp
app wasptutorial {
wasp: {
version: "^0.13.1"
},
title: "wasptutorial",
auth: {
userEntity: User,
methods: {
// Enable username and password auth.
usernameAndPassword: {}
},
onAuthFailedRedirectTo: "/login"
}
}
```
Run `wasp db migrate-dev` to sync these changes.
### Add client login/signup routes
Next, you need to create routes for both sign-in and login. Modify the `main.wasp` file with the code below:
```wasp
// main.wasp
...
route SignupRoute { path: "/signup", to: SignupPage }
page SignupPage {
component: import { SignupPage } from "@src/SignupPage"
}
route LoginRoute { path: "/login", to: LoginPage }
page LoginPage {
component: import { LoginPage } from "@src/LoginPage"
}
```
In the `./src` directory, create `LoginPage.jsx` and `SignupPage.jsx` files:
In `LoginPage.jsx`, add this code:
```javascript
import { Link } from 'react-router-dom'
import { LoginForm } from 'wasp/client/auth'
export const LoginPage = () => {
return (
<div style={{ maxWidth: '400px', margin: '0 auto' }}>
<LoginForm />
<br />
<span>
I don't have an account yet (<Link to="/signup">go to signup</Link>).
</span>
</div>
)
}
```
In `SignupPage.jsx`, add this code:
```javascript
import { Link } from 'react-router-dom'
import { SignupForm } from 'wasp/client/auth'
export const SignupPage = () => {
return (
<div style={{ maxWidth: '400px', margin: '0 auto' }}>
<SignupForm />
<br />
<span>
I already have an account (<Link to="/login">go to login</Link>).
</span>
</div>
)
}
```
### Protecting the main page
Because you do not want unauthorized users to have access to the main page, you will need to restrict it. To do so, modify the `main.wasp` file as follows:
```wasp
// ...
page MainPage {
authRequired: true,
component: import { MainPage } from "@src/MainPage"
}
```
Setting `authRequired` to `true` will make sure that all unauthenticated users will be redirected to the login page we just created.
Now, try accessing the main page at [localhost:3000](http://localhost:3000). You should be redirected to `/login`:  Because we do not have a user account in our database at this time, you’ll have to sign up to create one.
### Mapping users to notes
At this point, you might notice that all logged-in users are seeing the same notes. To address this, you should ensure that each user can only view notes that they have created.
In your `main.wasp` file, modify the `User` and `Note` entities as follows:
```wasp
// ...
entity User {=psl
  id    Int    @id @default(autoincrement())
  notes Note[]
psl=}

entity Note {=psl
  id          Int     @id @default(autoincrement())
  title       String
  description String
  user        User?   @relation(fields: [userId], references: [id])
  userId      Int?
psl=}
```
Here, we defined a `one-to-many` relationship between the users and notes to match each user to their notes. Don't forget to run `wasp db migrate-dev` for these changes to be reflected.
### Checking for authentication
Go to your `queries.js` file and modify the code to forbid non-logged-in users and only request notes belonging to individual logged-in users:
```javascript
import { HttpError } from 'wasp/server';
export const getNotes = async (args, context) => {
  if (!context.user) {
    throw new HttpError(401);
  }

  return context.entities.Note.findMany({
    where: { user: { id: context.user.id } },
    orderBy: { id: 'asc' },
  });
};
```
We also want only logged users to be able to create notes. Modify `actions.js` like so:
```javascript
import { HttpError } from 'wasp/server';
export const createNote = async (args, context) => {
  if (!context.user) {
    throw new HttpError(401);
  }

  return context.entities.Note.create({
    data: {
      title: args.title,
      description: args.description,
      user: { connect: { id: context.user.id } },
    },
  });
};
...
```
### Adding the logout button
The `Logout` button will be in the header component. Create a `Header.jsx` file with the code below:
```javascript
export default function Header() {
  return (
    <header className=' body-font'>
      <div className='container mx-auto flex flex-wrap p-5 flex-col md:flex-row items-center'>
        <a className='flex title-font font-medium items-center text-gray-900 mb-4 md:mb-0'>
          <svg
            xmlns='http://www.w3.org/2000/svg'
            fill='none'
            stroke='currentColor'
            strokeLinecap='round'
            strokeLinejoin='round'
            strokeWidth='2'
            className='w-10 h-10 text-white p-2 bg-indigo-500 rounded-full'
            viewBox='0 0 24 24'
          >
            <path d='M12 2L2 7l10 5 10-5-10-5zM2 17l10 5 10-5M2 12l10 5 10-5'></path>
          </svg>
          <span className='ml-3 text-xl'>Noteblocks</span>
        </a>
        <nav className='md:mr-auto md:ml-4 md:py-1 md:pl-4 md:border-l md:border-gray-400 flex flex-wrap items-center text-base justify-center'>
          <a className='mr-5 hover:text-gray-900'>Home</a>
        </nav>
        <button
          className='inline-flex text-white items-center bg-indigo-600 hover:bg-indigo-400 border-0 py-1 px-3 focus:outline-none rounded text-base mt-4 md:mt-0'
          onClick={logout}
        >
          Log Out
          <svg
            fill='none'
            stroke='currentColor'
            strokeLinecap='round'
            strokeLinejoin='round'
            strokeWidth='2'
            className='w-4 h-4 ml-1'
            viewBox='0 0 24 24'
          >
            <path d='M5 12h14M12 5l7 7-7 7'></path>
          </svg>
        </button>
      </div>
    </header>
  );
}
```
At the top, include this import:
```javascript
import { logout } from 'wasp/client/auth';
...
```
Now, head to the `MainPage.jsx` file and import the `Header` component. And just like that, we have the logout functionality. Don’t forget to test out the logout functionality on the app.
## Deploying Wasp to Fly.io
With the Wasp CLI, you can deploy the React frontend, Node.js backend (server), and PostgreSQL database generated by the Wasp compiler to [Fly.io](https://fly.io/) with a single command.
Before you can deploy to Fly.io, you should install `flyctl` on your machine. Find the version for your operating system in the [flyctl documentation](https://fly.io/docs/hands-on/install-flyctl/) and install it. Note that all plans on Fly.io require you to add your card information, or deployment will not work.
### Switching databases
Until now, we have been working with the default SQLite database, which is not supported in production. For production, we have to switch to using PostgreSQL.
Go to your `main.wasp` file and add the following code:
```wasp
app MyApp {
  title: "My app",
  // ...
  db: {
    system: PostgreSQL,
    // ...
  }
}
```
At this point, we don't need the SQLite DB migrations and we can get rid of them by running these commands:
```sh
rm -r migrations/
wasp clean
```
To run the PostgreSQL DB, ensure Docker is running. Next, create a `.env.server` file in the root directory and add your database URL, which can be retrieved by starting the database with the `wasp start db` command.
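For illustration, the resulting file usually looks something like this (the URL below is a placeholder; use the exact URL that `wasp start db` prints for you):

```env
# .env.server -- placeholder value; copy the URL that `wasp start db` prints
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/myapp
```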
Once added, re-run `wasp start db` and your database will be up and running. While the database is running, open another terminal and run `wasp db migrate-dev` to sync these changes. Once all that is set, run the following command:
```sh
wasp deploy fly launch wasp-logrocket-tutorial-app mia
```
Congratulations! Your app is now fully deployed.
## Conclusion
The Wasp framework offers a great solution that can potentially make building for the web much easier. As the Wasp framework continues to evolve and gain popularity, it will likely be adopted by teams of all sizes seeking an efficient and robust full-stack development experience. | leemeganj |
1,879,330 | Git: The perfect gitflow | Introduction When we start to work with git, it's hard to understand how do you should... | 27,621 | 2024-06-06T17:20:16 | https://dev.to/henriqueleite42/git-the-perfect-gitflow-5ddn | git, beginners, tutorial, programming | 
## Introduction
When we start to work with git, it's hard to understand how you should work with branches correctly to avoid conflicts, keep a nice and organized history, and end up with a project that is easy to deploy and roll back if necessary.
In this article, I'll explain the best _gitflow_ that exists and why each of its parts is there.
## The core concepts
To work with gitflow, we must understand 2 core concepts: branches and merging strategies.
There are 3 types of branches:
- The main "branches"
- The feature branches
- The task branches
And 3 types of merge strategies:
- Merge
- Squash
- Rebase
## Branches
You can think of Git as a Tree: it has its trunk and its branches.
They are "copies" of the main system, so you can make all the changes you want without altering the main content. You can revert these changes, apply them to other branches, and do a lot of cool stuff with them.
### The main "branches"
I recommend having only one main branch on your repository, not separate `develop` and `production` branches. Why? Because branches are **BRANCHES**! Ramifications of something that can (but shouldn't) follow different paths from the original, not something to save a persistent state and never change again.
I am 100% against using branches for `production` or release versions (branches like `v1.0.0` and similar); we have many other ways (correct ways!) to do it, and we should avoid this workaround.
Git has support for [release tags](https://dev.to/neshaz/a-tutorial-for-tagging-releases-in-git-147e), which are pretty good to name your releases and know when something goes to production. They are kinda editable too, in case you need it, but it's always best to have immutable releases and release a fix if needed.
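As a quick, hedged illustration (the tag name `v1.0.0` is just an example), here is how a release tag works, demonstrated in a throwaway repository so the commands are safe to run as-is:

```shell
set -e
# work in a throwaway repository so nothing real is touched
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "release-ready state"

# an annotated tag records the tagger, the date, and a message for the release
git tag -a v1.0.0 -m "Release 1.0.0"
git tag --list
# in a real project you would then publish it with: git push origin v1.0.0
```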
You can also use [Releases](https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository) in GitHub, which are very helpful for having a zip file with all your code, and which also have lots of integrations and automations.
So, to summarize:
- I recommend the use of **ONE** main branch, and being an old timer, I call it `master`, but you can call it `main` if you prefer.
- In this article, I'll be using GitHub Releases, but if you use other remote repository, you can understand it as git tags.
### The feature branches
When we work in a [True Agile](https://dev.to/henriqueleite42/the-right-development-flow-better-than-agile-871) environment, we have features planned that must be implemented, like a feature to allow the user to create an account. In Scrum, these are called "User Stories".
At the beginning of every cycle (or "sprint" if you use Scrum), a new branch is created from the main branch following the pattern:
```c
// Pattern
feature/<ID in lowercase>/<description in lowercase>
// Example
feature/ant-1/create-account
```
This branch is created from the main branch and will group all the changes of this feature. After everything that the feature needs is merged and tested, the branch will be merged back to the main branch.
### The task branches
Task branches are short-lived branches that apply specific changes needed to build that feature, like small pieces of that feature.
They can be understood as "Jira Tickets" or "Issues".
They are created from the FEATURE branches and merged on them.
```c
// Pattern
task/<ID in lowercase>/<description in lowercase>
// Example
task/ant-2/password-adapter
```
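As a runnable illustration (the IDs and names are the examples from above), here is how the two branch types relate, demonstrated in a throwaway repository:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"

# the feature branch is cut from the main branch at the start of the cycle
git checkout -q -b feature/ant-1/create-account
# each task branch is cut from (and later merged back into) the feature branch
git checkout -q -b task/ant-2/password-adapter
git branch --list
```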
### Other branches and their prefixes
As you could have noticed in the previous topics, feature and task branches have different prefixes, but follow the same patterns.
Every other branch that isn't the main branch will have some kind of prefix, and the existent prefixes are:
```sh
# Same level as feature branches
hotfix
feature
refact
# Same level as task branches
task
fix
refact
chore
cicd
```
You already know `feature` and `task`, so let's get to know the other ones.
`hotfix`: Created to fix something urgently, usually things broken in the previous release that need to be fixed in a new release because rollback is not an option.
`refact`: Changing a feature that already exists in the system, without affecting its behavior. The `refact` branch in the same level as a `feature` branch is used in order to refact a big portion of your system, a core feature, like user account creation, and not to refact a for loop.
`fix`: A bugfix that will follow the right flow of development, will be tested before being merged, and will be merged together with everything else in the feature (which may take a while).
`refact`: The `refact` branch in the same level as a `task` branch is used in order to refact small portions of your code, affecting its behavior or not. These branches must be part of a feature, and can change things like your internal API to send messages to topics, improve the performance of a for loop, etc.
`chore`: Updates to a comment, a development script, the README, and other things that don't affect the code.
`cicd`: Updates in the pipelines, configurations for deploying.
## Merge strategies
Here I'll give a quick summary of the topic, but you can learn more in the [official documentation](https://git-scm.com/docs/git-merge).
The thing that we will focus on is: **squashing before merging**:
- Every task should have 1 commit when merging to a feature, so you **MUST ALWAYS** squash your commits before merging.
- Every feature should have 1 commit when merging to the main branch, so you **MUST ALWAYS** squash your commits before merging.
GitHub automatically squashes your commits for you if you configure it right:

But you can also do it with the CLI, see [the docs](https://git-scm.com/docs/git-merge#Documentation/git-merge.txt---squash).
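To make the squash step concrete, here is a self-contained sketch you can run in a throwaway repository (the branch, file, and commit names are illustrative):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'base' > app.txt
git add app.txt
git commit -q -m "init"
main=$(git symbolic-ref --short HEAD)

# a task branch with several work-in-progress commits
git checkout -q -b task/ant-2/password-adapter
echo 'hash password' >> app.txt
git commit -qam "wip: hash"
echo 'verify password' >> app.txt
git commit -qam "wip: verify"

git checkout -q "$main"
# --squash stages the branch's combined changes as ONE pending commit
git merge --squash task/ant-2/password-adapter
git commit -qm "task(adapters): add password adapter"
git log --oneline
```

After this, the main branch history shows a single commit for the whole task, which is exactly the "1 commit per task" rule above.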
## Extra tips
### Commit message pattern
I use [A simplified version of Clean Arch](https://dev.to/henriqueleite42/a-simplified-version-of-clean-arch-10i5) for my projects. From this I get the **layers**, and the pattern that I use for the commit messages is:
```c
// Pattern
<prefix>(<scope in lowercase>): <short description in lowercase>
// Examples
task(adapters): add foo adapter
fix(usecases): fix user creation
chore(docs): add details to readme
cicd(deploy): change deploy method
```
`hotfix`, `feature`, `refact`, `task`, `fix` use the [layers](https://henriqueleite42.hashnode.dev/a-simplified-version-of-clean-arch#heading-the-core-concepts) as the **scope**.
On the web frontend, you can use the names of the folders under `src` as the **scope**.
`chore` can use `docs`, `comments` or `naming` as the **scope**.
`cicd` can use `deploy`, `review` or `tests` as the **scope**.
Commits that fix PR comments don't need a special pattern since they will be squashed; you can use just `fix pr comments` or some message along those lines.
### Small teams
If you are working in a small team, there's no need for [feature branches](#the-feature-branches); you can work directly with [task branches](#the-task-branches), which will decrease the complexity of your workflow.
## Advantages of this GitFlow
### A nice and clean commit history
In your main branch, you will be able to see all the features that your software has and how it evolved.
In your PR history, you will be able to see all the feature branches (this history is even better if you use the right labels to filter your PRs!), and all the branches and changes related to them.
### Standard
Everyone works in the same way because all of you have a standard to follow. It's easier to recognize patterns and understand the way that all teams work with git.
## GitFlow using [the best gitconfig](https://dev.to/henriqueleite42/git-config-5e35)
When using [the best gitconfig](https://dev.to/henriqueleite42/git-config-5e35), it's very, very easy to work with git:
```sh
# At the start of the cycle
git ckm
git cb feature/ant-1/create-account
# To start a task
git ck feature/ant-1/create-account
git pl
git cb task/ant-2/password-adapter
# When task finished
git acips task\(adapters\): add password adapter
# If you need to do any adjusts on your commit (even after the PR is created) you can run
git acaps task\(adapters\): add password adapter
# If there are comments on your PR and you need to fix them, create another commit
git acips fix pr comments
# Create the PR and it's done!
# To create another task, just start from "To start a task"
```
## Conclusion
Thanks for reading, I think that this article has everything that you need to know to be able to work with Git. If you have any thoughts, please feel free to share them with us in the comments! 😄 | henriqueleite42 |
1,879,473 | What are Shopify admin UI extensions? | Shopify admin UI extensions are an easy way to customize how merchants can interact with the Orders,... | 0 | 2024-06-06T17:19:24 | https://gadget.dev/blog/shopify-admin-ui-extensions-what-they-are-and-how-to-build-with-them | shopify, webdev, liquid, javascript | > Shopify admin UI extensions are an easy way to customize how merchants can interact with the Orders, Products, and Customers pages.
By now, you’ve probably heard about Shopify’s decision to move away from Scripts and Liquid customizations, and towards [Shopify Functions](https://gadget.dev/blog/understanding-shopify-functions-part-1) and extensions. A few extension types have been released in the past year, with checkout UI extensions being the most notable, as Shopify announced that they would replace checkout.liquid [when it’s deprecated](https://gadget.dev/blog/navigating-the-deprecation-of-shopifys-checkout-liquid-what-you-need-to-know-and-how-to-plan-ahead). Because extensions make it easy to read and request data stored in Shopify and surface it to merchants, they’ve been extended to other areas of Shopify as well – including the admin.
The Shopify admin is where merchants spend most of their time, so it only makes sense that this would be a place that needs an extra layer of customization. With admin UI extensions, merchants can tailor their Shopify admin to work for their specific store needs. For developers and agencies building custom apps, this is a big deal, so let’s talk about what admin UI extensions are and how to use them.
## What are admin UI extensions?
Shopify admin UI extensions were first announced during Summer Editions 2023, in an effort to standardize how developers and agencies customize the Shopify admin UI, and create the best possible experiences for merchants. Admin extensions give developers the option to add custom blocks and actions to the existing Orders, Products, and Customers pages. This means that app developers can add custom UIs and actions to the existing Shopify admin portal instead of designing and building out additional workflows and pages in a custom-embedded app.
Currently, there are two types of admin UI extensions you can build with: action extensions and block extensions, which can be created using the Shopify CLI 3.48 or later.

_An example of how a merchant would navigate to an admin extension in Shopify_
## Admin action extensions
If you’re looking to add new ways to let merchants quickly customize orders, products, or customers, admin action extensions are a great place to start — and the good news is that these are [generally available](https://www.shopify.com/ca/partners/blog/admin-action-extensions-ga), which means you can start using them in your public and custom apps right away.
Shopify has released a suite of [action-based components](https://shopify.dev/docs/api/admin-extensions/unstable/components) developers can embed in the Shopify admin, including buttons, forms, date pickers, and a variety of other field types. These are all used to make changes to an individual resource, for example logging complaints that have come in for certain products. Action extensions will appear in the More Actions section in the top right of a resource details page, for example, a specific order or product. They can also appear on the overview pages for products or orders for bulk selection actions. From there, merchants can make changes which will instantly be reflected on the order, product, or customer resource.
> **Tip:** We recommend communicating where merchants can access admin extensions in some of your app onboarding materials!
## Admin block extensions
Block extensions, on the other hand, are used to add additional context and information to the admin, making your apps more integrated with Shopify. Rather than having to navigate to your app interface, merchants will be able to see and interact with the information from your app directly in the admin. Block extensions give merchants an inline card on the product/order/customer page that can be used to display or change custom data unique to your app. Instead of navigating a separate embedded admin app, all relevant data can be viewed by merchants on a single page.
Bonus: You can call your action extensions from within block extensions!
## Extending custom apps with admin UI extensions
If you’re working with merchants on custom apps and internal tools, admin UI extensions are an easy way to integrate the added functionality they’re looking for into an interface they’re already familiar with. Although block extensions are still in developer preview, you can start building with them so that once they become generally available, you’re able to quickly deploy them to any merchant clients. In the meantime, admin action extensions are available and ready to be added to your apps to extend the order, product, or customer resources directly in the admin portal.
Many of your existing custom apps may benefit from being connected more directly into the Shopify admin. If the tools you’ve built for your clients regularly require modifying things like products or orders, check to see if any of the new components could help streamline those workflows. Your merchant clients will thank you for saving them from the tedious task of flipping between your app interface and the admin portal.
If you’re looking for a place to get started with Shopify admin UI extensions, you can follow along with our build of a VIP customer tagger. We use the Shopify CLI and Gadget to combine customer segment template extensions with admin UI extensions in just 10 minutes.
{% embed https://www.youtube.com/watch?v=uYXWUYQy6yo %}
If you have any questions, feedback, or just want to connect with a community of other Shopify developers, stop by our [Discord](https://discord.com/invite/tY4ZjWdMcB) and say hi.
| gadget |
1,879,472 | Introduction to Node.js for Beginners | What is Node.js? Node.js is a special tool that lets us use JavaScript, a popular... | 0 | 2024-06-06T17:19:15 | https://dev.to/raksbisht/introduction-to-nodejs-for-beginners-518o | node, beginners, javascript, tutorial | ### What is Node.js?
Node.js is a special tool that lets us use JavaScript, a popular programming language, to build different kinds of software. Usually, JavaScript is used to make websites interactive, but with Node.js, we can use it to create all kinds of applications, including games, web servers, and even robots!
### Why Learn Node.js?
1. Popularity: Many big companies like Netflix, LinkedIn, and Walmart use Node.js.
2. Versatility: You can build many types of applications with it.
3. Community Support: There are lots of tutorials, libraries, and tools to help you learn and build with Node.js.
### Basic Concepts
1. JavaScript: The language you’ll be using. It’s like learning the alphabet before writing a story.
2. Node: The environment that lets JavaScript run outside the browser. It’s like having a kitchen where you can cook anything you want, not just cookies.
### Setting Up Node.js
1. Download and Install: Go to the [Node.js website](https://nodejs.org/) and download the version that matches your computer. Follow the installation instructions.
2. Check Installation: Open your command prompt or terminal (a tool to type commands) and type node -v. If you see a version number, Node.js is installed correctly!
### Writing Your First Program
Let's start with a simple program that says "Hello, World!".
1. Create a New File: Open a text editor (like Notepad or VS Code) and create a new file called hello.js.
2. Write the Code: Type the following code into your file:
```
console.log("Hello, World!");
```
3\. Run the Program: Open your command prompt or terminal, navigate to the folder where you saved hello.js, and type node hello.js. You should see Hello, World! printed on the screen!
### Building a Simple Web Server
Now, let's build a simple web server that sends a message to your web browser.
1. Create a New File: Name it server.js.
2. Write the Code: Type the following code:
```
const http = require('http');
const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello, World!\n');
});

server.listen(3000, '127.0.0.1', () => {
  console.log('Server running at http://127.0.0.1:3000/');
});
```
3\. Run the Server: In your command prompt or terminal, navigate to the folder with server.js and type node server.js. You should see a message saying the server is running.
4\. Visit the Server: Open your web browser and go to http://127.0.0.1:3000. You should see Hello, World!.
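As a next step, the same `req`/`res` pattern lets you respond differently depending on the URL. Here is a small, hedged sketch; the `route` helper and the `/about` path are our own additions, not part of the original tutorial. Keeping the logic in a plain function makes it easy to test without starting a server:

```javascript
// Decide the response body from the requested URL.
// These paths and messages are illustrative examples.
function route(url) {
  if (url === '/') return 'Hello, World!\n';
  if (url === '/about') return 'This is the about page.\n';
  return 'Not Found\n';
}

console.log(route('/'));
console.log(route('/missing'));
```

Inside the server from step 2, you could then replace the `res.end('Hello, World!\n')` line with `res.end(route(req.url))` so different pages return different text.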
### Learning More
* Online Tutorials: Websites like W3Schools and Codecademy offer beginner-friendly tutorials.
* Books: "Node.js for Kids" by Nick Morgan is a great book for young learners.
* Practice: The best way to learn is by doing. Try building small projects and gradually increase their complexity.
### Conclusion
Node.js is a powerful tool that lets you create amazing applications using JavaScript. By learning Node.js, you're opening the door to endless possibilities in the world of programming. So, keep practicing, have fun, and happy coding!
| raksbisht |
1,879,471 | Startup vs. Corporates: Which Path Will Make You a Millionaire? | What's the probability of working at a startup and becoming a millionaire? 🤔 The harsh reality is... | 0 | 2024-06-06T17:15:58 | https://dev.to/iwooky/startup-vs-corporates-which-path-will-make-you-a-millionaire-16hj | career, productivity, startup, programming | What's the probability of working at a startup and becoming a millionaire? 🤔
The harsh reality is that most startups fail, and even successful ones rarely lead to life-changing wealth for employees. Do you prefer the stability of a big company or the high-risk, high-reward gamble of startup life?
👉 **Read more** in my latest article exploring [the startup vs. big company trade-off](https://iwooky.substack.com/p/startup-vs-corporates)
[](https://iwooky.substack.com/p/startup-vs-corporates)
| iwooky |
1,879,470 | Data Mesh: An Executive Guide to Modern Data Architecture in Manufacturing | In the evolving landscape of data management, traditional monolithic architectures are increasingly... | 0 | 2024-06-06T17:15:44 | https://dev.to/edtbl76/data-mesh-an-executive-guide-to-modern-data-architecture-in-manufacturing-36bb | datamesh, data, datascience, dataengineering | In the evolving landscape of data management, traditional monolithic architectures are increasingly being challenged by new paradigms designed to handle the complexities of modern data ecosystems. One such paradigm gaining significant traction is the concept of Data Mesh. Introduced by Zhamak Dehghani, Data Mesh represents a shift from centralized to decentralized data management, emphasizing domain-oriented ownership and a self-serve data infrastructure.
This comprehensive guide delves deep into the principles, architecture, and implementation of Data Mesh. We will explore its benefits, challenges, and critical role in enabling scalable, efficient, and democratized data management in large organizations.
## What is Data Mesh?
### Definition and Core Concepts
Data Mesh is a revolutionary approach in data architecture that shifts the focus from centralized to decentralized data ownership and management. This paradigm decentralizes data ownership and management to domain-specific teams, empowering them to treat data as a product. Each domain team is responsible for producing, maintaining, and improving its data products, ensuring they are high-quality, discoverable, and usable by others within the organization.
The concept of Data Mesh contrasts sharply with traditional monolithic data architectures, where a centralized data team manages and governs all data for the entire organization. This centralized approach often leads to bottlenecks, scalability issues, and slower time-to-market for data-driven solutions. Data Mesh addresses these challenges by distributing data responsibilities, which enhances agility and scalability, enabling organizations to respond more quickly to changing business needs.
Moreover, Data Mesh promotes a self-serve data infrastructure that provides domain teams with the tools and platforms to create, manage, and consume data products autonomously. This infrastructure includes data storage, processing, governance, and access management capabilities, facilitating a more efficient and effective data management ecosystem. By embedding data ownership within domain teams, Data Mesh fosters a culture of accountability, continuous improvement, and innovation (Dehghani, 2020).
### Historical Context
Data Mesh emerged in response to the growing challenges of managing large-scale data in a centralized manner. Historically, data architectures have evolved from siloed databases and data warehouses to more integrated data lakes. While these architectures offered improvements in data accessibility and integration, they also brought challenges such as data silos, bottlenecks, and governance issues (Stonebraker, 2018).
Traditional data warehouses centralized data management but often struggled with scalability and agility, making them less suitable for modern enterprises' diverse and dynamic needs (Kimball & Ross, 2013). Data lakes, on the other hand, offered more flexibility and scalability but often lacked proper governance and data quality management, leading to the so-called "data swamp" problem (Gartner, 2017).
Data Mesh addresses these issues by decentralizing data ownership, aligning it more closely with business domains, and leveraging modern infrastructure and governance practices (Dehghani, 2020).
## Principles of Data Mesh
### Domain-Oriented Decentralization
At the heart of Data Mesh is the principle of domain-oriented decentralization. This principle advocates distributing data ownership and responsibility to domain teams closest to the data's source and use cases. By aligning data with business domains, organizations can achieve better data quality, relevance, and agility (Dehghani, 2020).
### Data as a Product
Data Mesh treats data as a product, emphasizing product thinking in data management. Domain teams are responsible for producing, maintaining, and enhancing their data products, ensuring they are high-quality, discoverable, and usable by other teams. This approach fosters a culture of accountability and continuous improvement (Dehghani, 2020).
### Self-Serve Data Infrastructure
To support decentralized data ownership, Data Mesh promotes a self-serve data infrastructure. This infrastructure provides domain teams with the tools and platforms to autonomously create, manage, and consume data products. It includes capabilities for data storage, processing, governance, and access management (Dehghani, 2020).
### Federated Computational Governance
Federated computational governance is a critical aspect of Data Mesh, ensuring that data policies, standards, and practices are consistently applied across the organization. This governance model balances centralized oversight with domain autonomy, enabling scalable and efficient data management (Dehghani, 2020).
## Benefits of Data Mesh
### Scalability
Data Mesh offers significant scalability benefits by decentralizing data ownership and management. As organizations grow, they can scale their data architecture more effectively by distributing the workload across domain teams rather than relying on a central team to manage everything (Dehghani, 2020).
### Flexibility
With domain-oriented decentralization, Data Mesh provides greater flexibility in handling diverse data needs. Each domain team can tailor their data products to meet specific requirements, enabling faster and more relevant data solutions (Dehghani, 2020).
### Enhanced Data Quality
Data Mesh emphasizes high-quality, reliable, and usable data by treating data as a product. Domain teams are incentivized to maintain and improve their data products, leading to better overall data quality across the organization (Dehghani, 2020).
### Improved Time-to-Market
Data Mesh accelerates time-to-market for data-driven solutions by empowering domain teams to work independently and efficiently. This autonomy reduces dependencies and bottlenecks, allowing faster development and deployment of data products (Dehghani, 2020).
## Challenges of Data Mesh
### Organizational Resistance
One of the primary challenges of implementing Data Mesh is organizational resistance. Shifting from a centralized to a decentralized model requires significant cultural and structural changes, which can be met with resistance from stakeholders accustomed to traditional approaches (Dehghani, 2020).
### Technical Complexity
Data Mesh introduces technical complexity, particularly in designing and implementing a self-serve data infrastructure and federated governance. Organizations must invest in modern data platforms and tools and have the technical expertise to manage this complexity (Dehghani, 2020).
### Governance Issues
While federated governance offers scalability benefits, it also poses challenges in ensuring consistent policy and standard application. Organizations must balance centralized oversight and domain autonomy to avoid fragmentation and inconsistency (Dehghani, 2020).
## Addressing the Challenges of Data Mesh
### Overcoming Organizational Resistance
**Example: Spotify**
[Spotify](https://open.spotify.com/) encountered organizational resistance when transitioning to a Data Mesh architecture. To address this, they initiated a comprehensive change management strategy that included stakeholder engagement sessions, clear communication of the benefits, and incremental implementation. By demonstrating quick wins and involving stakeholders in decision-making, Spotify successfully garnered support and reduced resistance to change.
**Strategies:**
1. **Stakeholder Engagement:** Regularly involve key stakeholders in planning and decision-making.
2. **Incremental Implementation:** Start with pilot projects to demonstrate value before scaling up.
3. **Clear Communication:** Articulate the benefits of Data Mesh clearly and continuously to all levels of the organization.
### Managing Technical Complexity
**Example: Zalando**
[Zalando](https://zalando.com/), an online fashion retailer, addressed the technical complexities of Data Mesh by investing in a robust technology stack that included modern data platforms and tools like Kafka for data streaming, Kubernetes for container orchestration, and dbt for data transformations. By leveraging these tools, Zalando was able to manage the complexities and ensure smooth implementation.
**Strategies:**
1. **Invest in Modern Tools:** Utilize tools like Kafka, Kubernetes, and dbt to effectively handle data streaming, container orchestration, and data transformations.
2. **Technical Training:** Provide comprehensive training for teams to build technical skills.
3. **Collaborative Approach:** Encourage cross-functional collaboration between data engineers, data scientists, and domain experts.
### Ensuring Effective Governance
**Example: Intuit**
[Intuit](https://www.intuit.com/) implemented a federated governance model to ensure consistent application of data policies across the organization. They established a central governance team responsible for defining overarching policies and standards, while domain teams were given the autonomy to implement these policies in a way that aligned with their specific needs. This balanced approach allowed Intuit to maintain consistency without stifling innovation.
**Strategies:**
1. **Centralized Oversight with Domain Autonomy:** Combine centralized policy setting with domain-specific implementation.
2. **Regular Audits:** Conduct regular audits to ensure compliance with governance standards.
3. **Continuous Improvement:** Update governance policies and practices based on feedback and changing requirements.
## Architecture of Data Mesh
### Domain Data Products
Domain data products are the fundamental building blocks of a data mesh architecture. Each domain team is responsible for creating, maintaining, and managing its data products, designed to be high-quality, discoverable, and reusable across the organization.
**Example:**
A manufacturing company's domain data products might include Production, Supply Chain, and Quality Control Data. The Production Data team could create data products that monitor and optimize the manufacturing process, including metrics like equipment performance and production rates. The Supply Chain Data team could manage data products that track inventory levels, supplier performance, and logistics. The Quality Control Data team could focus on data products that ensure product quality by monitoring defect rates and compliance with standards. Each domain team ensures that their data products meet quality and usability standards required by the organization, enhancing overall operational efficiency and decision-making.
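To make the idea concrete, a data product's contract can be sketched in a few lines of Python. All names here (`DataProduct`, `quality_sla`, the catalog helpers) are hypothetical illustrations, not the API of any particular platform:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DataProduct:
    """A domain-owned data product with the metadata that makes it
    discoverable and trustworthy across the organization."""
    name: str
    owner_team: str          # the domain team accountable for it
    description: str
    schema: Dict[str, str]   # column name -> type
    quality_sla: float = 0.99            # e.g. fraction of rows passing checks
    tags: List[str] = field(default_factory=list)

# A simple in-memory catalog makes products discoverable by tag.
CATALOG: List[DataProduct] = []

def register(product: DataProduct) -> None:
    CATALOG.append(product)

def discover(tag: str) -> List[DataProduct]:
    return [p for p in CATALOG if tag in p.tags]

register(DataProduct(
    name="equipment_performance",
    owner_team="production",
    description="Hourly metrics on machine uptime and throughput.",
    schema={"machine_id": "str", "hour": "timestamp", "uptime_pct": "float"},
    tags=["production", "oee"],
))

print([p.name for p in discover("production")])  # -> ['equipment_performance']
```

In practice the catalog role is played by platforms such as Dataplex rather than an in-memory list; the point is that ownership, schema, and quality expectations travel with the product.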
### Data Infrastructure as a Platform
The self-serve data infrastructure in Data Mesh provides domain teams with the necessary tools and platforms to manage their data products. This infrastructure includes data storage, processing, governance, and access management capabilities, enabling domain teams to work autonomously.
**Example:**
In the manufacturing company, a self-serve data infrastructure might include Google Cloud's Dataplex for unified data management, Apache Airflow for workflow orchestration, and dbt for data transformations. This infrastructure allows the Production Data team to automate data collection and processing from various sensors and machines, the Supply Chain Data team to integrate data from different suppliers and logistics providers, and the Quality Control Data team to streamline data analysis for defect detection and quality assurance. The self-serve infrastructure empowers domain teams to handle their data independently, improving efficiency and innovation.
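Since no real orchestrator is assumed here, a minimal sketch in plain Python can stand in for the extract-transform-load flow that tools like Airflow and dbt would run in production. The sensor values and the 85-degree threshold are invented for illustration:

```python
# Stand-in for a self-serve pipeline a domain team might register.
# In production this role is played by an orchestrator such as Airflow
# and transformation tooling such as dbt; plain functions stand in here.

def extract_sensor_readings():
    # Stand-in for pulling raw readings from factory sensors.
    return [{"machine_id": "M1", "temp_c": 71.2},
            {"machine_id": "M2", "temp_c": 88.9}]

def transform(readings, max_temp=85.0):
    # Flag machines running hot, the kind of rule a dbt model might encode.
    return [dict(r, over_temp=r["temp_c"] > max_temp) for r in readings]

def load(rows, sink):
    sink.extend(rows)

def run_pipeline(sink):
    load(transform(extract_sensor_readings()), sink)
    return sink

warehouse = []
run_pipeline(warehouse)
print(sum(r["over_temp"] for r in warehouse))  # -> 1 machine over temperature
```

The value of the self-serve platform is that the domain team writes only the domain logic (the `transform` rule); extraction, scheduling, and loading are provided as shared infrastructure.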
### Federated Governance
Federated governance in Data Mesh involves a combination of centralized and decentralized governance practices. Centralized governance provides overarching policies and standards, while domain teams have the autonomy to implement these policies in a way that aligns with their specific needs and contexts.
**Example:**
In the manufacturing company, a central governance team could set data privacy and security standards applicable across all domains. The Production Data team might tailor these standards to ensure sensitive production data is securely stored and accessed only by authorized personnel. The Supply Chain Data team could implement data sharing agreements with suppliers, ensuring compliance with central privacy policies. The Quality Control Data team might develop specific protocols for handling and reporting quality data, adhering to central security guidelines. This federated approach ensures consistent governance while allowing flexibility for domain-specific requirements.
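The federated pattern of central defaults tightened by domains can be sketched as follows; the policy names, values, and the rule that a domain may shorten but never extend retention are illustrative assumptions:

```python
# Sketch of federated governance: the central team sets baseline policies,
# and each domain may tighten them for its own context. All names and
# values here are illustrative.
CENTRAL_POLICY = {"encryption_required": True, "max_retention_days": 365}

DOMAIN_OVERRIDES = {
    "production":   {"max_retention_days": 90},  # stricter retention
    "supply_chain": {},                          # accepts the defaults
}

def effective_policy(domain):
    """Central defaults, tightened (never loosened) by the domain."""
    policy = dict(CENTRAL_POLICY)
    for key, value in DOMAIN_OVERRIDES.get(domain, {}).items():
        if key == "max_retention_days" and value > policy[key]:
            raise ValueError(f"{domain} may not relax {key}")
        policy[key] = value
    return policy

print(effective_policy("production"))
# -> {'encryption_required': True, 'max_retention_days': 90}
```

Centralized oversight lives in `CENTRAL_POLICY` and the guard inside `effective_policy`; domain autonomy lives in the overrides.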
## Implementation Strategies
### Organizational Change Management
Successful implementation of Data Mesh requires effective organizational change management. This involves securing buy-in from stakeholders, aligning data strategies with business objectives, and fostering a culture of collaboration and accountability.
**Example:**
A manufacturing company could start by aligning its data strategy with business goals such as optimizing supply chain operations and improving product quality. They might secure executive sponsorship and engage employees through workshops and training sessions to foster a collaborative culture. For instance, the company could pilot Data Mesh in the Production Data domain, demonstrating quick wins like improved production efficiency and reduced downtime. These successes would build momentum and support broader implementation across other domains, such as Supply Chain and Quality Control.
### Technology Stack
Choosing the right technology stack is crucial for implementing Data Mesh. Organizations must invest in modern data platforms and tools supporting decentralized data management, self-serve infrastructure, and federated governance.
**Example:**
A manufacturing company might leverage a combination of Kafka for real-time data streaming, [Kubernetes](https://kubernetes.io/) for container orchestration, and dbt for data transformations. They could use [Dataplex](https://cloud.google.com/dataplex) for unified data management and security across domains. This technology stack would enable the Production Data team to monitor and analyze production metrics in real-time, the Supply Chain Data team to manage and optimize logistics and inventory, and the Quality Control Data team to ensure product compliance and quality. By investing in these tools, the company can effectively support the decentralized data management and governance principles of Data Mesh.
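As a toy illustration of the streaming piece of such a stack, an in-memory deque stands in for a Kafka topic (no live cluster is assumed), showing the produce and consume pattern for production metrics; the machine names and the 100-units threshold are made up:

```python
from collections import deque

# A deque stands in for a Kafka topic so the produce/consume pattern
# can be shown self-contained; a real stack would use a Kafka client.
topic_production_metrics = deque()

def produce(topic, event):
    topic.append(event)

def consume(topic, handler):
    # Drain the topic, handing each event to the domain's handler.
    while topic:
        handler(topic.popleft())

alerts = []
def alert_on_low_rate(event):
    # Domain rule: flag machines producing under 100 units per hour.
    if event["units_per_hour"] < 100:
        alerts.append(event["machine_id"])

produce(topic_production_metrics, {"machine_id": "M1", "units_per_hour": 140})
produce(topic_production_metrics, {"machine_id": "M2", "units_per_hour": 80})
consume(topic_production_metrics, alert_on_low_rate)
print(alerts)  # -> ['M2']
```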
### Data Product Development
Developing high-quality data products is central to Data Mesh's success. Domain teams must have the skills and tools to design, implement, and maintain their data products. These skills include understanding data modeling, data quality management, and data integration techniques.
**Example:**
A manufacturing company might train its domain teams in data modeling and quality management. The Production Data team could develop data products that monitor equipment performance and predict maintenance needs. The Supply Chain Data team might create data products that provide insights into supplier performance and inventory optimization. The Quality Control Data team could design data products that track defect rates and compliance with standards. These data products would be used across the organization to drive business decisions, improve operational efficiency, and ensure product quality.
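At its core, a Quality Control data product of the kind described might compute defect rates and flag non-compliant batches. The 2% threshold and the batch figures below are invented for illustration:

```python
# Sketch of a Quality Control data product: compute defect rates per batch
# and flag batches breaching a compliance threshold.

def defect_rate(inspected, defective):
    return defective / inspected if inspected else 0.0

def flag_batches(batches, threshold=0.02):
    # Return the IDs of batches whose defect rate exceeds the threshold.
    return [b["batch_id"] for b in batches
            if defect_rate(b["inspected"], b["defective"]) > threshold]

batches = [
    {"batch_id": "B-100", "inspected": 500, "defective": 4},   # 0.8%
    {"batch_id": "B-101", "inspected": 400, "defective": 12},  # 3.0%
]
print(flag_batches(batches))  # -> ['B-101']
```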
### Governance Framework
A robust governance framework is essential for maintaining consistency and compliance in a Data Mesh. This framework should outline the roles and responsibilities of central and domain governance bodies, define data policies and standards, and establish processes for monitoring and enforcing compliance.
**Example:**
A manufacturing company could establish a governance framework with a central data governance board and domain-specific governance committees. The central board would set overarching data policies and standards, such as data privacy, security, and quality. Domain committees, such as those for Production, Supply Chain, and Quality Control Data, would implement these policies within their domains, tailoring them to specific operational needs. Regular audits and feedback loops ensure compliance and continuous improvement of governance practices.
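The audit step can be sketched as a simple compliance check comparing the central board's required policies against what each domain committee has implemented; the domain and policy names are illustrative:

```python
# Sketch of a regular governance audit: verify every domain has implemented
# the policies the central board requires.

REQUIRED = {"data_privacy", "data_security", "data_quality"}

IMPLEMENTED = {
    "production":      {"data_privacy", "data_security", "data_quality"},
    "supply_chain":    {"data_privacy", "data_security"},
    "quality_control": {"data_privacy", "data_security", "data_quality"},
}

def audit(required, implemented):
    """Return {domain: missing_policies} for non-compliant domains only."""
    return {d: sorted(required - p) for d, p in implemented.items()
            if required - p}

print(audit(REQUIRED, IMPLEMENTED))  # -> {'supply_chain': ['data_quality']}
```

The output feeds the feedback loop: non-compliant domains get a concrete list of gaps to close before the next audit cycle.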
## Case Studies
### Data Mesh at Netflix
[Netflix](https://www.netflix.com/) implemented a Data Mesh to address the challenges of scaling its data architecture. By decentralizing data ownership to domain teams, Netflix was able to improve data quality and accelerate time-to-market for data-driven solutions. The self-serve data infrastructure enabled teams to work independently, reducing dependencies and bottlenecks.
### Data Mesh at Zalando
[Zalando](https://zalando.com/), a leading online fashion retailer, adopted Data Mesh to manage its vast and diverse data landscape better. The decentralized approach allowed Zalando to align data management more closely with its business domains, improving data relevance and usability. The federated governance model ensured consistent application of data policies across the organization.
### Data Mesh at Intuit
[Intuit](https://www.intuit.com/) leveraged Data Mesh to enhance its data-driven decision-making capabilities. By treating data as a product and decentralizing data ownership, Intuit empowered its domain teams to create high-quality, discoverable, and reusable data products. The self-serve data infrastructure provided the tools and platforms for autonomous data management, significantly improving data quality and time to market.
### Data Mesh at ThoughtWorks
[ThoughtWorks](https://www.thoughtworks.com/en-us), a global technology consultancy, has been a pioneer in adopting Data Mesh principles. They implemented a Data Mesh architecture to effectively manage their internal data and client projects. ThoughtWorks improved data quality and accelerated project delivery timelines by decentralizing data ownership to domain-specific teams and promoting a self-serve data infrastructure. The federated governance model ensured consistent data policies and standards across the organization, enabling scalable and efficient data management.
## Sensible Defaults
### Aligning Business and Data Strategies
Aligning business and data strategies is critical for the success of Data Mesh. Organizations should ensure that their data initiatives support and drive business objectives and that data teams work closely with business stakeholders to understand their needs and priorities.
**Example:**
A manufacturing company might align its data strategy with goals such as optimizing supply chain operations and improving product quality. By doing so, the data initiatives directly support business objectives and drive tangible outcomes. For instance, the Supply Chain Data team could focus on data products that provide real-time insights into inventory levels and supplier performance, directly impacting operational efficiency and reducing costs.
### Building a Cross-Functional Team
Building a cross-functional team is essential for implementing and maintaining a Data Mesh. This team should include members with diverse skills and expertise, including data engineering, data governance, data product management, and business analysis. Collaboration and communication across functions are vital to achieving the goals of Data Mesh.
**Example:**
A manufacturing company might assemble a cross-functional team comprising data engineers, data scientists, data governance experts, and business analysts to develop and manage data products that improve production efficiency and quality control. This team could work together to create a data product that monitors equipment performance, predicts maintenance needs, and ensures product quality. By leveraging their diverse skills and expertise, the team can develop comprehensive data solutions that address key business challenges.
### Continuous Improvement
Continuous improvement is a fundamental principle of Data Mesh. Organizations should regularly review and refine their data products, infrastructure, and governance practices to meet evolving business needs and industry standards. This includes investing in ongoing training and development for data teams.
**Example:**
A manufacturing company might establish a continuous improvement program that includes regular reviews of data products, feedback loops with users, and ongoing training for data teams. For example, the Quality Control Data team could regularly review defect data and update their data products to include new metrics and insights. By continuously improving their data products and practices, the company can ensure they meet changing requirements and maintain high data quality.
## Future Trends and Developments
### Integration with AI and Machine Learning
Integrating Data Mesh with AI and machine learning (ML) is an emerging trend that promises to significantly enhance data-driven decision-making. By leveraging AI and ML capabilities, organizations can automate data quality management, predictive analytics, and anomaly detection, further improving the efficiency and effectiveness of their data products. For instance, a manufacturing company implementing Data Mesh can enhance its ML capabilities by decentralizing the data used for predictive maintenance. Domain teams managing equipment data can autonomously create high-quality data products that feed into ML models predicting machinery failures. Teams can deploy these models closer to the data source to enable real-time predictions and create more accurate maintenance schedules. Additionally, AI can automate the data quality checks, ensuring that the data used in ML models is consistently reliable (Gartner, 2023).
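As a minimal stand-in for the predictive models described (real deployments would train ML models on historical equipment data), a z-score check over vibration readings illustrates the anomaly-flagging idea; the readings and threshold are invented:

```python
import statistics

# Illustrative anomaly check of the kind predictive maintenance starts
# from: flag readings far from a machine's baseline. A trained ML model
# would replace this rule in production; a z-score keeps it simple.

def anomalies(readings, z_threshold=3.0):
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [x for x in readings if abs(x - mean) / stdev > z_threshold]

vibration = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 2.4]  # last reading spikes
print(anomalies(vibration, z_threshold=2.0))  # -> [2.4]
```

Because the domain team owns both the data product and the check, the flagged reading can trigger a maintenance ticket without waiting on a central analytics team.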
### Evolution of Data Mesh Tools
As Data Mesh gains traction, specialized tools and platforms are evolving to support its principles and practices. These tools will enhance data product development capabilities, self-serve infrastructure, and federated governance, making it easier for organizations to implement and maintain Data Mesh. [SolidProject](https://solidproject.org/), for example, provides tools for creating decentralized data pods that allow users to own and control their data. This aligns with Data Mesh principles by enabling domain-specific data ownership and promoting data privacy and security. Solid's framework allows for interoperability between different data systems while maintaining user control over data, which is crucial for the distributed nature of Data Mesh architectures (SolidProject, 2024).
### Expanding Use Cases
The use cases for Data Mesh are expanding beyond traditional data management and analytics. Organizations are increasingly exploring its applications in IoT, real-time data processing, and decentralized data ecosystems. These new use cases highlight the versatility and scalability of Data Mesh as a modern data architecture. For instance, a smart city initiative might use Data Mesh to manage data from various sources, such as traffic sensors, public transportation systems, and environmental monitors. The city can more effectively manage and utilize this diverse data landscape by decentralizing data ownership to respective departments. For example, the transportation department can create data products related to traffic patterns, which can be used in real-time to optimize traffic flow and reduce congestion.
## Conclusion
Data Mesh represents a paradigm shift in data architecture, offering a scalable, flexible, and efficient approach to managing data in modern organizations. Data Mesh addresses the challenges of traditional monolithic data architectures by decentralizing data ownership, treating data as a product, and promoting self-serve infrastructure and federated governance. While it introduces certain complexities and requires significant organizational change, the benefits of improved data quality, scalability, and time-to-market make it a compelling choice for large-scale data management.
## References
- Dehghani, Z. (2020). How to Move Beyond a Monolithic Data Lake to a Distributed Data Mesh. *Martin Fowler*. Retrieved from [martinfowler.com](https://martinfowler.com/articles/data-monolith-to-mesh.html)
- Fishtown Analytics. (2020). *dbt (data build tool)*. Retrieved from [getdbt.com](https://www.getdbt.com/)
- Fishtown Analytics. (2024). *dbt Mesh*. Retrieved from [getdbt.com](https://www.getdbt.com/product/dbt-mesh)
- Gartner. (2017). The Data Lake Fallacy: All Water and No Substance. Retrieved from [gartner.com](https://www.gartner.com/en/newsroom/press-releases/2017-12-06-gartner-says-nearly-half-of-data-lake-initiatives-will-fail)
- Gartner. (2023). Predicts 2023: Data and Analytics Strategies. Retrieved from [gartner.com](https://www.gartner.com/doc/research/predicts-2023-data-analytics-strategies)
- Google Cloud. (2021). *Dataplex*. Retrieved from [cloud.google.com/dataplex](https://cloud.google.com/dataplex)
- Hoffman, K. (2018). The Netflix Tech Blog. *Medium*. Retrieved from [netflixtechblog.com](https://netflixtechblog.com)
- Kimball, R., & Ross, M. (2013). The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling. Wiley.
- SolidProject. (2024). Solid: Your Data, Your Way. Retrieved from [solidproject.org](https://solidproject.org/)
- Stonebraker, M. (2018). The Case for Polystores. *Communications of the ACM*, 61(7), 60-67.
- Vogels, W. (2019). Continuous Innovation at Zalando with Data Mesh. *All Things Distributed*. Retrieved from [allthingsdistributed.com](https://www.allthingsdistributed.com/2019/12/data-mesh-at-zalando.html)
# Utilising Your Mind for a Better Programming Experience

*davidbosah, dev.to, 2024-06-06*

**Brief Overview of how Psychology and Tech Intertwine**
The mind is an infinite space of thoughts and illusions, both true and untrue. The truth is that, whether we like it or not, the different faces you see every day possess completely different thought processes, temperament structures and adaptability patterns.

All these parameters are part of what describes the complexity of the minds we possess. For reasons beyond comprehension, even a set of twins can have completely different tastes in fashion, food and partners.

The fact that we differ in these little things shows how much we differ in larger pursuits like programming. You may be a front-end developer currently practicing your JavaScript and feel that your struggles mean you are daft in tech, not knowing that they arise because you are copying the patterns your mentor uses without realising that those patterns are particular to his or her mind. Whether we like it or not, the development of any skill is subjective just as much as it is objective.
**Defining your mind to yourself**
Now that you have understood that our mental disparities show in the way we go about different tasks, the next thing is to understand how your own mind works.

To achieve this you need thorough introspection. You, not I, have lived your life, so you need to ask yourself certain questions; before we start getting answers, here are some guidelines to help.

Think of one crucial activity you have learnt over the years: it could be swimming, arithmetic, cycling, a board game like chess, or a video game. Single it out in your mind and answer the following questions about your experience learning it:
1. Did I learn this in a very strict environment?
2. Did I prefer when the learning atmosphere was very relaxed and jovial?
3. Whenever the activity involved collaboration with a fellow learner, did I prefer it?
Don't write the answers to these questions anywhere; just store them in your head.

The idea is simply to understand the type of work pattern that suits you. Be conscious of the style of learning you are used to and utilise it in your programming journey.
**Your Mind and Your Tech Development**
The way your mind works is beginning to make sense to you; now it's time to understand how to put it to use in your work.

Whether it's C++, JavaScript or HTML, you definitely have people you look up to in the field, mentors who guide you directly or indirectly. While this is good, you need to give yourself space to express your own personal "self" in tech.

Even if your programming mentor extols the need to spend long, unbroken hours working on projects, you need to know that if short stretches with intermittent breaks work for you, you should confidently go ahead with that, because at the end of the day, what improves efficiency is what works for you.
Take note of the following areas to ensure you implement what works for you:

1. Your pattern of productive work while coding: long hours, or short hours with breaks in between.
2. The type of music that makes your workflow more efficient, or no music at all if that works better.
3. The type of mood you need for better productivity: super happy, calm or very serious.
4. The type of environment you need to be productive: very quiet, slightly quiet, turbulent, or a mix of calm and chaos.
Experience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.
Paxful’s verified accounts not only offer reliability within the trading community but also serve as a testament to the platform’s ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today. Buy Verified Paxful Account.
Why paxful keep the security measures at the top priority?
In today’s digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.
Safeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all. Buy Verified Paxful Account.
Conclusion
Investing in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin. Buy Verified Paxful Account.
https://dmhelpshop.com/product/buy-verified-paxful-account/
The initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.
In conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller’s history and reviews before making any transactions.
Moreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions. Buy Verified Paxful Account.
Contact Us / 24 Hours Reply
Telegram:dmhelpshop
WhatsApp: +1 (980) 277-2786
Skype:dmhelpshop
Email:dmhelpshop@gmail.com
| whitemartin001 | |
1,879,467 | Pay Attention to Method Names in Minitest::Unit | **TL,DR: *don’t define any methods with names name, message, time, pass in Minitest::Unit test... | 0 | 2024-06-06T17:07:34 | https://jetthoughts.com/blog/pay-attention-method-names-in-minitestunit-testing-ruby/ | testing, ruby, minitest, rails | 
**TL;DR:** *don’t define any methods named name, message, time, or pass in Minitest::Unit test cases unless you really want to override those of Minitest::TestCase.*
When writing Minitest::Unit tests, it’s convenient to use test case’s instance methods as kind of RSpec’s lazy-evaluated let-blocks or various helper methods.
```ruby
class TrialPeriodVideoDownloadTest < ActiveSupport::TestCase
def user
users(:trial_period_bob)
end
def video
videos(:fullhd_video_private)
end
test 'trial user can download private video' do
travel_to user.registered_at
assert VideoPolicy.new(user, video).download?
travel_back
end
test 'trial user cannot download private video after trial ended' do
travel_to user.registered_at + 32.days
refute VideoPolicy.new(user, video).download?
travel_back
end
end
```
*Note: I am extending ActiveSupport::TestCase for this nice test "verify something" DSL and for other handy features provided by ActiveSupport::Testing, but the problem concerns Minitest::TestCase, which is being inherited behind the scenes.*
In this case, when only one user and video are involved in the test, it seems reasonable to name the methods user and video accordingly, and not trial_user, user_bob and fullhd_private_video to distinct them from other tested objects.
One thing to keep in mind when using such general names for methods is that Minitest::TestCase has its own instance methods. Consider this case, quite similar by its spirit to the previous:
```ruby
class MessageDeliveryTest < ActiveSupport::TestCase
def message
messages(:draft_from_bob_to_alice)
end
test 'changes status to sent after sending' do
MessageDelivery::Send.new(message).perform
assert message.reload.status?
end
test 'changes status to delivered after delivering' do
MessageDelivery::Deliver.new(message).perform
assert_equal 'delivered', message.reload.status
end
end
```
Surprisingly, one of these tests will fail with a mysterious error message: ArgumentError: wrong number of arguments (2 for 0), with the lines def message and assert_equal 'delivered', message.reload.status in the backtrace.
It becomes clear after looking on Minitest::Assertions#assert_equal code:
```ruby
def assert_equal exp, act, msg = nil
msg = message(msg, E) { diff exp, act }
assert exp == act, msg
end
```
Apparently, we unintentionally overloaded an instance method message, used internally by the test case for displaying customized failure messages.
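The clash can be reproduced without Minitest at all. Here is a dependency-free sketch of the same mechanism; the class and method bodies below are illustrative stand-ins, not Minitest's actual internals:

```ruby
# A framework base class that calls its own two-argument `message` helper
# internally, like Minitest::Assertions does.
class FrameworkTestCase
  def message(msg = nil, ending = '.')
    "#{msg}#{ending}"
  end

  def assert_equal(exp, act)
    # like Minitest, build a failure message via the `message` helper
    message("Expected #{exp.inspect}, got #{act.inspect}", '!') unless exp == act
    exp == act
  end
end

class MessageDeliveryTest < FrameworkTestCase
  # our innocent zero-arity helper shadows the framework method
  def message
    :draft_from_bob_to_alice
  end
end

def shadowed_assertion_error
  MessageDeliveryTest.new.assert_equal('delivered', 'draft')
  nil
rescue ArgumentError => e
  e.message
end

puts shadowed_assertion_error
```

The failing assertion calls the zero-arity subclass `message` with two arguments, raising the same "wrong number of arguments" error the article describes.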
Fortunately, there are not many methods with such general names that could cause confusion. From a brief scan of Minitest::TestCase.instance_methods, I can list only four names which people might want (but shouldn't, for the reason explained above) to use for helper methods in their test cases: time, message, pass (I can imagine an app having a Pass model for passports), and name. There are many other methods, but their names are too specific. If somebody overloads assert_in_epsilon, we can assume they know what they're doing, but overloading message, name or time may be unintentional and can lead to unexpected results.
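Instead of memorising the list, you can check the current reserved surface yourself with a few lines of introspection:

```ruby
require "minitest"
require "minitest/test"

# Which of the "dangerously general" names already exist on Minitest's
# test-case class? Checking dynamically guards against version drift.
def reserved_helper_names
  %i[name message time pass].select { |m| Minitest::Test.method_defined?(m) }
end

puts reserved_helper_names.inspect
```

Running this against your installed Minitest version shows exactly which helper names would shadow framework internals.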
It’s fair to note that Minitest::Spec warns its users when they try to use these reserved names for let-blocks. Try to define things like:
```ruby
let(:name) { 'Bob' }
```
–and your test won’t load:
*ArgumentError: let ‘name’ cannot override a method in Minitest::Spec. Please use another name.*
Minitest::Unit on the other hand is more straightforward and lets you do whatever you want.
| jetthoughts_61 |
1,879,466 | A Comprehensive Guide to the llm-chain Rust crate | LLM orchestration is an important part of creating AI powered applications. Particularly in business... | 0 | 2024-06-06T17:06:37 | https://www.shuttle.rs/blog/2024/06/06/llm-chain-langchain-rust | rust, ai, machinelearning, tutorial | LLM orchestration is an important part of creating AI powered applications. Particularly in business use cases, AI agents and RAG pipelines are commonly utilised for refined LLM responses. Although Langchain is currently the most popular LLM orchestration suite, there are also similar crates in Rust that we can use. In this article, we’ll be diving into one of them - `llm-chain`.
## What is llm-chain?
`llm-chain` is a collection of crates that describes itself as “the ultimate toolbox” for working with Large Language Models (LLMs) to create AI-powered applications and other tooling.
You can find the crate’s GitHub repository [here.](https://github.com/sobelio/llm-chain)
### Comparison to other LLM orchestration crates
If you’ve looked around for different crates, you probably noticed that there are a few crates for LLM orchestration in Rust:
- `llm-chain` (this one!)
- `langchain-rust`
- `anchor-chain`
In comparison to the others, `llm-chain` is somewhat macro-heavy. However, it is also the most developed in terms of extra data-processing utilities. If you're looking for an all-in-one package, `llm-chain` is the crate that's most likely to get you over the line. They also have [their own docs.](https://docs.llm-chain.xyz/docs/introduction)
## Getting Started
### Pre-requisites
Before you add `llm-chain` to your project, make sure you have access to the prompting model you want to use. For example, if you want to use OpenAI, make sure you have an OpenAI API key (set as `OPENAI_API_KEY` in environment variables).
### Setup
To get started, all you need to do is to add the crate to your project (as well as Tokio for async):
```bash
cargo add llm-chain
cargo add tokio -F full
```
Next, we'll want to add a provider for whatever method of model prompting you want to use. Here we'll add the OpenAI integration by adding the crate to our application:
```bash
cargo add llm-chain-openai
```
Note that a full list of integrations can be found [here](https://github.com/sobelio/llm-chain/tree/main/crates), split by package.
## Basic usage
### Prompting
To get started with `llm-chain`, we can use their basic example as a way to quickly get something working. In this code snippet, we will:
- Initialise an executor using `executor!()`.
- Use `prompt!()` with the system message and prompt to store both in a struct that will be used when the prompt (or chain) is run.
- Run the prompt and return the results, using a reference to the executor.
- Print the results.
```rust
use std::error::Error;
use llm_chain::{executor, parameters, prompt};
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let exec = executor!()?;
let res = prompt!(
"You are a robot assistant for making personalized greetings",
"Make a personalized greeting for Joe"
)
.run(&parameters!(), &exec)
.await?;
println!("{}", res);
Ok(())
}
```
Running this prompt should yield a result that looks like this:
```bash
Assistant: Hello Joe! I hope you're having a fantastic day filled with joy and success. Remember to keep shining bright and making a positive impact wherever you go. Have a great day!
```
The default model for the `llm_chain_openai` executor is `gpt-3.5-turbo`. The executor parameters can be defined in the macro - you can also find more about this [here](https://docs.rs/llm-chain/latest/llm_chain/macro.executor.html).
### Using Templates
However, if we want to move onto more advanced pipelines the easiest way for us to do this would be to use a prompt template with parameters. You can see below that much like in the previous code snippet, we generate an executor and return the results. However, instead of using `prompt!()` by itself we use it in `Step::for_prompt_template` - which you can find more about [here](https://docs.rs/llm-chain/latest/llm_chain/step/struct.Step.html).
```rust
use std::error::Error;
use llm_chain::{executor, parameters, prompt};
use llm_chain::step::Step;
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let exec = executor!()?;
let step = Step::for_prompt_template(prompt!(
"You are a bot for making personalised greetings",
"Make a personalized greeting tweet for {{text}}"
));
let step_results = step.run(&parameters!("Emil"), &exec).await?;
println!("step_results: {step_results}");
let immediate_results = step_results.to_immediate().await?.as_content();
println!("immediate results: {immediate_results}");
Ok(())
}
```
The results of the output should look like this:
```bash
step_results: Assistant: "Hey @Emil! Wishing you a fantastic day filled with joy, success, and lots of smiles! Keep shining bright and making a positive impact in the world. Cheers to you! 🌟 #YouGotThis"
immediate results: Assistant: "Hey @Emil! Wishing you a fantastic day filled with joy, success, and lots of smiles! Keep shining bright and making a positive impact in the world. Cheers to you! 🌟 #YouGotThis"
```
## Chaining prompts
Of course, one of the main reasons why we're using Langchain (or Langchain-like libraries) in the first place is to be able to orchestrate our LLM usage. The `llm-chain` Rust crate assists us with this by letting us create chains of LLM prompts using the `Chain` struct.
There are three types of chains that we can use with `llm-chain`:
- Sequential chains, which apply steps sequentially
- Map-reduce chains, which use a "map" step to apply to each chunk from a loaded file and then reduce the text. This is quite useful for text summarization.
- Conversational chains, which keep track of the conversation history and manage context. Conversational chains are great for chatbot applications, multi-step interactions and other places where context is essential.
### Sequential chaining
The easiest to use type of chaining is sequential chaining, which simply pipes the output from each step into the next step. When creating our steps, we will use the `Chain` struct instead of creating each step individually:
```rust
use llm_chain::step::Step;
use llm_chain::chains::sequential::Chain;
use llm_chain::prompt;
// Create a chain of steps with two prompts
let first_step = Step::for_prompt_template(
prompt!("You are a bot for making personalized greetings", "Make personalized birthday e-mail to the whole company for {{name}} who has their birthday on {{date}}. Include their name")
);
// Second step: summarize the email into a tweet. Importantly, the text parameter is the result of the previous prompt.
let second_step = Step::for_prompt_template(
prompt!( "You are an assistant for managing social media accounts for a company", "Summarize this email into a tweet to be sent by the company, use emojis if you can. \n--\n{{text}}")
);
let chain: Chain = Chain::new( vec![first_step, second_step] );
```
Next, we'll then use the `parameters!` macro to inject parameters into the prompt pipeline:
```rust
use std::error::Error;
use llm_chain::parameters;
use llm_chain::traits::Executor as ExecutorTrait;
use llm_chain_openai::chatgpt::Executor;

// (continues from the chain built in the previous snippet)
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
// Create a new ChatGPT executor with the default settings
let exec = Executor::new()?;
// Run the chain with the provided parameters
let res = chain
.run(
// Create a Parameters object with key-value pairs for the placeholders
parameters!("name" => "Emil", "date" => "February 30th 2023"),
&exec,
)
.await?;
// Print the result to the console
println!("{}", res.to_immediate().await?.as_content());
Ok(())
}
```
Running the code should yield a result that looks like this:
```bash
Assistant: 🎉🎂 Join us in celebrating Emil's birthday on February 30th! 🎈🎁 Emil, your dedication and hard work are truly commendable. Wishing you happiness and success on your special day! 🥳🎉 #HappyBirthdayEmil #TeamAppreciation 🎂
```
### Map-reduce chains
Map-reduce chains typically consist of two steps:
- A "Map" step that takes a document and applies an LLM chain to it, treating the output as a new document
- The new documents are then passed to a new chain that combines the separate documents to get a single output.
At the end of a Map-Reduce chain, the output can be taken for further processing by sending it to another prompting model (for instance) or as part of a sequential pipeline.
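Before wiring this up with `llm-chain`, it may help to see the bare data flow. Here is a dependency-free sketch where a `summarize` closure stands in for the model call; none of the names below are `llm-chain` API:

```rust
// Plain-Rust sketch of the map-reduce data flow; `summarize` is a
// stand-in for an LLM call, purely illustrative.
fn map_reduce(chunks: &[&str]) -> String {
    // "Map" step: apply the summarizer to every chunk independently,
    // producing one new document per chunk.
    let summarize = |doc: &str| format!("summary({doc})");
    let mapped: Vec<String> = chunks.iter().copied().map(summarize).collect();
    // "Reduce" step: combine the per-chunk outputs into a single document.
    mapped.join("\n")
}

fn main() {
    let sections = ["intro section", "results section"];
    println!("{}", map_reduce(&sections));
}
```

In the real chain, both steps are prompts executed by the model; only the fan-out/fan-in shape shown here is the same.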
To use this pattern, we need to create a prompt template:
```rust
// note that we import Chain from a different module here!
use llm_chain::chains::map_reduce::Chain;
// Create the "map" step to summarize an article into bullet points
let map_prompt = Step::for_prompt_template(prompt!(
"You are a bot for summarizing wikipedia articles, you are terse and focus on accuracy",
"Summarize this article into bullet points:\\n{{text}}"
));
// Create the "reduce" step to combine multiple summaries into one
let reduce_prompt = Step::for_prompt_template(prompt!(
"You are a diligent bot that summarizes text",
"Please combine the articles below into one summary as bullet points:\\n{{text}}"
));
// Create a map-reduce chain with the map and reduce steps
let chain = Chain::new(map_prompt, reduce_prompt);
```
Next, we need to take some text from a file and add it as a parameter - the `{{text}}` parameter in the Map prompt will automatically take in the file content:
```rust
// Load the content of the article to be summarized
let article = include_str!("article_to_summarize.md");
// Create a vector with the Parameters object containing the text of the article
let docs = vec![parameters!(article)];
let exec = executor!()?;
// Run the chain with the provided documents and an empty Parameters object for the "reduce" step
// Note that there are multiple modules with a Chain struct;
// this one (chains::map_reduce) takes a map step and a reduce step
let res = chain.run(docs, Parameters::new(), &exec).await?;
// Print the result to the console
println!("{}", res.to_immediate().await?.as_content());
```
Note here that because we have **two** steps, `chain.run()` takes two inputs, in order: a vector of documents for the "map" step and a `Parameters` object for the "reduce" step. This means we pass the article content to the first prompt, but no extra parameters to the second.
### Conversational Chains
Of course, the last chain we need to talk about is conversational chains. In a nutshell, conversational chains allow you to load context from memory by using saved chat history. In situations where the platform or model cannot access saved chat history, you might store the response and then use it as extra context in the next message.
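Conceptually, the chain just replays an ever-growing message history with each new prompt. A minimal dependency-free sketch of that bookkeeping (the `send` reply is faked; a real `llm-chain` conversation would submit the whole history to the model):

```rust
// Minimal sketch of context accumulation in a conversational chain.
// `send` fakes the model reply; only the history handling is the point.
struct Conversation {
    history: Vec<(String, String)>, // (role, content) pairs
}

impl Conversation {
    fn new(system: &str) -> Self {
        Self { history: vec![("system".to_string(), system.to_string())] }
    }

    fn send(&mut self, user_msg: &str) -> String {
        self.history.push(("user".to_string(), user_msg.to_string()));
        // The growing history is what gives later prompts their context.
        let reply = format!("reply with {} messages of context", self.history.len());
        self.history.push(("assistant".to_string(), reply.clone()));
        reply
    }
}

fn main() {
    let mut chat = Conversation::new("You are a robot assistant for personalized greetings.");
    println!("{}", chat.send("Make a greeting for Joe."));
    println!("{}", chat.send("Remind me who we just greeted."));
}
```

Each call sees strictly more context than the last, which is why step 4 in the example below can recall Joe, Jane and Alice.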
To use conversational chains, like before, we need to create a `Chain` (now imported from the conversation module) and define the steps for it:
```rust
use llm_chain::{
chains::conversation::Chain, executor, output::Output, parameters, prompt, step::Step,
};
let exec = executor!()?;
let mut chain = Chain::new(
prompt!(system: "You are a robot assistant for making personalized greetings."),
)?;
// Define the conversation steps.
let step1 = Step::for_prompt_template(prompt!(user: "Make a personalized greeting for Joe."));
let step2 =
Step::for_prompt_template(prompt!(user: "Now, create a personalized greeting for Jane."));
let step3 = Step::for_prompt_template(
prompt!(user: "Finally, create a personalized greeting for Alice."),
);
let step4 = Step::for_prompt_template(prompt!(user: "Remind me who did we just greet."));
```
Next, we will individually send each prompt to the `Chain` in turn, printing out the response from each one. Note that at step 4, we should receive an answer that includes the names of the previous three people we just made a personalized greeting for (Joe, Jane and Alice).
```rust
// Execute the conversation steps.
let res1 = chain.send_message(step1, &parameters!(), &exec).await?;
println!("Step 1: {}", res1.to_immediate().await?.primary_textual_output().unwrap());
let res2 = chain.send_message(step2, &parameters!(), &exec).await?;
println!("Step 2: {}", res2.to_immediate().await?.primary_textual_output().unwrap());
let res3 = chain.send_message(step3, &parameters!(), &exec).await?;
println!("Step 3: {}", res3.to_immediate().await?.primary_textual_output().unwrap());
let res4 = chain.send_message(step4, &parameters!(), &exec).await?;
println!("Step 4: {}", res4.to_immediate().await?.primary_textual_output().unwrap());
```
Running this should get an output that looks something like this:
```bash
Step 1: Hello, Joe! I hope you are having a fantastic day filled with positivity and joy. Keep shining bright and making a difference in the world with your unique presence. Wishing you continued success and happiness in all that you do!
Step 2: Hello, Jane! Sending you warm greetings and positive vibes today. May your day be as wonderful and vibrant as you are. Remember to keep being your amazing self and always believe in the incredible things you are capable of achieving. Wishing you endless happiness and success in all your endeavors!
Step 3: Hello, Alice! I hope this message finds you well and thriving. You are such a remarkable individual with a heart full of kindness and a spirit full of strength. Keep inspiring those around you with your grace and resilience. May your day be filled with love, laughter, and countless blessings. Stay amazing, Alice!
Step 4: We just created personalized greetings for Joe, Jane, and Alice.
```
## Using embeddings with llm-chain
In terms of using embeddings with `llm-chain`, it provides a helper method for using Qdrant as a vector store. It abstracts over the `qdrant_client` crate, providing an easy way to embed documents and carry out similarity search. Note that the `Qdrant` struct will assume your collection(s) that you want to use have already been created!
### Basic usage
While we can use `qdrant_client` to manually create our own embeddings, `llm-chain` also has an integration for easy access. We will be required to create our own client through `qdrant_client` - which we can then use with the `Qdrant` struct to embed documents and run similarity searches.
First, let's define a couple of passages that we want to insert into our Qdrant collection:
```rust
const DOC_DOG_DEF: &str = r#"The dog (Canis familiaris[4][5] or Canis lupus familiaris[5]) is a domesticated descendant of the wolf. Also called the domestic dog, it is derived from the extinct Pleistocene wolf,[6][7] and the modern wolf is the dog's nearest living relative.[8] Dogs were the first species to be domesticated[9][8] by hunter-gatherers over 15,000 years ago[7] before the development of agriculture.[1] Due to their long association with humans, dogs have expanded to a large number of domestic individuals[10] and gained the ability to thrive on a starch-rich diet that would be inadequate for other canids.[11]
The dog has been selectively bred over millennia for various behaviors, sensory capabilities, and physical attributes.[12] Dog breeds vary widely in shape, size, and color. They perform many roles for humans, such as hunting, herding, pulling loads, protection, assisting police and the military, companionship, therapy, and aiding disabled people. Over the millennia, dogs became uniquely adapted to human behavior, and the human–canine bond has been a topic of frequent study.[13] This influence on human society has given them the sobriquet of "man's best friend"."#;
const DOC_WOODSTOCK_SOUND: &str = r#"Sound for the concert was engineered by sound engineer Bill Hanley. "It worked very well", he says of the event. "I built special speaker columns on the hills and had 16 loudspeaker arrays in a square platform going up to the hill on 70-foot [21 m] towers. We set it up for 150,000 to 200,000 people. Of course, 500,000 showed up."[48] ALTEC designed marine plywood cabinets that weighed half a ton apiece and stood 6 feet (1.8 m) tall, almost 4 feet (1.2 m) deep, and 3 feet (0.91 m) wide. Each of these enclosures carried four 15-inch (380 mm) JBL D140 loudspeakers. The tweeters consisted of 4×2-Cell & 2×10-Cell Altec Horns. Behind the stage were three transformers providing 2,000 amperes of current to power the amplification setup.[49][page needed] For many years this system was collectively referred to as the Woodstock Bins.[50] The live performances were captured on two 8-track Scully recorders in a tractor trailer back stage by Edwin Kramer and Lee Osbourne on 1-inch Scotch recording tape at 15 ips, then mixed at the Record Plant studio in New York.[51]"#;
```
The `Qdrant` struct will automatically assume you have your collection set up and have a `QdrantClient` that already exists, along with the collection name. We'll pass these as arguments into a new function that does the following:
- Create embeddings using `llm-chain-openai`
- Insert the embeddings into Qdrant
- Conduct a similarity search using the prompt
Firstly, we’ll want to define a method for creating our `Qdrant` struct so that we can re-use it later on:
```rust
use llm_chain_openai::embeddings::Embeddings;
use llm_chain_qdrant::Qdrant;
use llm_chain::schema::EmptyMetadata;
use qdrant_client::prelude::{QdrantClient, QdrantClientConfig};
// note that the URL must connect to port 6334 - qdrant_client uses GRPC!
// feel free to replace this with the URL of your own Qdrant instance
async fn create_qdrant_client(url: String) -> QdrantClient {
let mut config = QdrantClientConfig::from_url(&url);
// this part is only required if you're running Qdrant on the cloud
// if running locally, no api key is required
config.api_key = std::env::var("QDRANT_API_KEY").ok();
// QdrantClient::new returns a Result, so unwrap for brevity here
QdrantClient::new(Some(config)).unwrap()
}
async fn create_qdrant_struct(qdrant_client: QdrantClient, collection_name: String) -> Qdrant {
let embeddings = Embeddings::default();
// Storing documents
Qdrant::new(
qdrant_client,
collection_name,
embeddings,
None,
None,
None,
)
}
```
Next, we can use the `Qdrant` struct to carry out a similarity search! We’ll add our documents to our collection, then conduct a similarity search and print out the stored documents:
```rust
use llm_chain::schema::Document;

async fn similarity_search_qdrant(qdrant: Qdrant) -> Result<(), Box<dyn Error>> {
// embed and upsert the documents into Qdrant
let doc_ids = qdrant
.add_documents(
vec![
DOC_DOG_DEF.to_owned(),
DOC_WOODSTOCK_SOUND.to_owned(),
]
.into_iter()
.map(Document::new)
.collect(),
)
.await?;
println!("Documents stored under IDs: {:?}", doc_ids);
// conduct similarity search and find similar vectors
let response = qdrant
.similarity_search(
"Sound engineering is involved with concerts and music events".to_string(),
1,
)
.await?;
// print out the stored documents with payload, embeddings etc
println!("Retrieved stored documents: {:?}", response);
}
```
After this, we can then send it into a `Chain` or whatever else we need.
### Usage within a prompt template
In isolation, the Qdrant struct is not particularly helpful and mainly provides convenience methods for embedding things. However, we can also add it as part of a `ToolCollection` which lets the pipeline know that it is able to use embeddings.
```rust
// reuse the helpers defined above; a local Qdrant URL is assumed here
let client = create_qdrant_client("http://localhost:6334".to_string()).await;
let qdrant = create_qdrant_struct(client,
    "mycollection".to_string()).await;
let exec = executor!().unwrap();
let mut tool_collection = ToolCollection::<Multitool>::new();
tool_collection.add_tool(
QdrantTool::new(
qdrant,
"factual information and trivia",
"facts, news, knowledgebuild_local_qdrant().await; or trivia",
)
.into(),
);
let task = "Tell me something about dogs";
let prompt = ChatMessageCollection::new()
.with_system(StringTemplate::tera(
"You are an automated agent for performing tasks. Your output must always be YAML.",
))
.with_user(StringTemplate::combine(vec![
tool_collection.to_prompt_template().unwrap(),
StringTemplate::tera("Please perform the following task: {{task}}."),
]));
let res = Step::for_prompt_template(prompt.into())
    .run(&parameters!("task" => task), &exec)
    .await
    .unwrap();
println!("{}", res.to_immediate().await.unwrap().as_content());
```
## Processing data using llm-chain
While `llm-chain` provides tooling for creating LLM pipelines, another important part of Langchain and libraries like it is being able to process and transform data. Prompts (and prompt engineering) are important to get right. However, if we're also feeding data into our pipeline, we'll want to make it as easy as possible to find the most relevant context.
Below are a couple of useful use cases that you may want to check out.
### Scraping search results
`llm-chain` provides a convenience struct for scraping Google Results using the `GoogleSerper` struct - using the Serper.dev service.
```rust
use llm_chain::tools::{tools::GoogleSerper, Tool};
#[tokio::main(flavor = "current_thread")]
async fn main() {
let serper_api_key = std::env::var("SERPER_API_KEY").unwrap();
let serper = GoogleSerper::new(serper_api_key);
let result = serper
.invoke_typed(&"Who was the inventor of Catan?".into())
.await
.unwrap();
println!("Best answer from Google Serper: {}", result.result);
}
```
As well as this, there is also support for Bing Search API which provides 1000 free searches a month - you can find more about the pricing [here](https://www.microsoft.com/en-us/bing/apis/pricing). Below is a code snippet of how you can use the API:
```rust
use llm_chain::tools::{tools::BingSearch, Tool};
#[tokio::main(flavor = "current_thread")]
async fn main() {
let bing_api_key = std::env::var("BING_API_KEY").unwrap();
let bing = BingSearch::new(bing_api_key);
let result = bing
.invoke_typed(&"Who was the inventor of Catan?".into())
.await
.unwrap();
println!("Best answer from bing: {}", result.result);
}
```
Should you need to switch between the two, both are quite easy to use.
### Extracting labelled text
`llm-chain` also has some convenience methods for extracting labelled text. If you have a string of bullet points for instance, you can use `extract_labeled_text()` to be able to extract the text.
```rust
use llm_chain::parsing::extract_labeled_text;
fn main() {
let text = r"
- Title: The Matrix
- Actor: Keanu Reeves
- Director: The Wachowskis
";
let result = extract_labeled_text(text);
println!("{:?}", result);
}
```
Running this code should result in an output that looks like this:
```bash
[("Title", "The Matrix"), ("Actor", "Keanu Reeves"), ("Director", "The Wachowskis")]
```
You can find out more about the `parsing` module for `llm-chain` [here](https://docs.rs/llm-chain/latest/llm_chain/parsing/index.html) as well as [some of the examples](https://github.com/sobelio/llm-chain/tree/main/crates/llm-chain/examples).
## Conclusion
Thanks for reading! With the power of `llm-chain`, you can easily leverage AI for your applications.
Read more:
- [Building a RAG agent workflow](https://shuttle.rs/blog/2024/05/23/building-agentic-rag-rust-qdrant)
- [Using Huggingface with Rust](https://shuttle.rs/blog/2024/05/01/using-huggingface-rust)
- [Building an Axum web server to use with llm-chain](https://shuttle.rs/blog/2024/03/13/simple-web-server-rust)
| shuttle_dev |
1,879,465 | Optez pour le Meilleur Abonnement IPTV en France : Plus de 12 000 Chaînes en SD, HD, Full HD et 4K | Découvrez l'Expérience Ultime avec Notre Abonnement IPTV Vous cherchez le meilleur abonnement IPTV en... | 0 | 2024-06-06T17:03:36 | https://dev.to/france-iptv/optez-pour-le-meilleur-abonnement-iptv-en-france-plus-de-12-000-chaines-en-sd-hd-full-hd-et-4k-4n | **Découvrez l'Expérience Ultime avec Notre Abonnement IPTV**
Are you looking for the best IPTV subscription in France? Look no further! Our IPTV service offers an impressive selection of more than 12,000 channels in SD, HD, Full HD and even 4K quality. Whether you are a fan of movies, series, sport or documentaries, our IPTV subscription includes a variety of genres to meet all your needs.

A Rich and Diverse VOD Library
In addition to our live channels, our IPTV subscription includes a vast video-on-demand (VOD) library. Enjoy a wide choice of recent movies, timeless classics, popular series and captivating documentaries. Our VOD service is designed to give you an unrivalled entertainment experience, accessible at any time.
Why Choose Our [iptv en ligne](https://www.supremiptv.com/) Service?
Superior Quality: Watch your favourite programs in SD, HD, Full HD and 4K for an exceptional viewing experience.
Channel Diversity: More than 12,000 channels covering every genre, to satisfy every taste.
Impressive VOD: A rich and varied collection of movies, series and documentaries.
Dedicated Customer Service: A team at your service to offer you the best possible support.
Easy Subscription: Sign up in a few clicks and start enjoying your IPTV subscription immediately.
Subscribe Now with the Best IPTV Service
Don't miss the chance to enjoy an exceptional IPTV experience in France. With our IPTV subscription, you benefit not only from access to a vast range of channels and VOD content, but also from a reliable, high-quality service.
Join our community of satisfied customers and discover why we are the number-one choice for IPTV in France. Subscribe now and transform the way you watch television!
For more information and to subscribe, visit our website and let it guide you. Your ultimate entertainment experience is only a few clicks away! | france-iptv |
1,878,948 | Authentication in monorepo(NextJs, Astro) with Lucia and MongoDB | This guide will walk you through setting up a simple authentication in a monorepo environment. It... | 0 | 2024-06-06T17:00:00 | https://medium.com/p/6d525291724e | webdev, tutorial, security, react | This guide will walk you through setting up simple authentication in a monorepo environment. It covers the common scenario in which multiple applications (e.g. a landing page and a web app) built with different frameworks need to share the same authentication mechanism.
1. Create a monorepo mockup (with turborepo)
2. Create a shared package to work with MongoDB database (with mongoose)
3. Create a shared package to manage auth across monorepo (with lucia-auth)
4. Set up user validation in Astro.js
5. Set up user validation in Next.js
For all NPM packages, I explicitly specified the latest versions at the time of writing (instead of `@latest`) so this guide can be reproduced in the future. It is still recommended to use the `@latest` version of each package, since newer releases should be more secure and stable.
---
## Project overview
- `mysite.com` – landing page built with Astro
Publicly available
Provides login/signup page
Redirects authenticated users to `app.mysite.com`
- `app.mysite.com` – web application built with NextJs (app Router)
Available only for authenticated users
Provides sign-out feature
Redirects unauthenticated users to `mysite.com`
### Stack
- Astro js
- Next.js (app router)
- Lucia-auth
- Mongoose
- TurboRepo
- npm
- dotenv
### Source code
[GitHub - skorphil/monorepo-auth](https://github.com/skorphil/monorepo-auth)
---
## Prerequisites
- MongoDB Atlas (a free account will do)
---
## Part 1. Create monorepo mockup
For simplicity, the Turborepo (with Next.js) and Astro starter packages will be used.
### Monorepo structure

- `db-utils` - provides simple db methods to work with MongoDB: `createUser()`, `getUser()`. These methods are used by `auth-utils`.
- `auth-utils` - provides methods to create users and user sessions. Used by `web` and `landing`
- `web` - web application, accessible only to authenticated users. Provides a log-out function
- `landing` - public landing page. Provides signup and login forms. Inaccessible to authenticated users
### Install Turborepo
Install *Turborepo* starter package:
```shell
npx create-turbo@1.13.3
# ? Where would you like to create your turborepo? ./monorepo-auth
# ? Which package manager do you want to use? npm workspaces
```
### Create landing page (@monorepo-auth/landing)
Install *Astro* starter package inside `{monorepo}/apps/landing`
```shell
npm create astro@4.8.0
# Where should we create your new project? ./apps/landing
# How would you like to start your new project? Include sample files
# Do you plan to write TypeScript? Yes
# How strict should TypeScript be? Strict
# Install dependencies? Yes
# Initialize a new git repository? No
```
Rename the package to maintain consistency:
```diff
// apps/landing/package.json
- "name": "monorepo-auth-apps-landing",
+ "name": "@monorepo-auth/landing",
```
### Create web app (@monorepo-auth/web)
The Next.js starter package is already created by Turborepo, so just rename it:
```diff
// apps/web/package.json
- "name": "web",
+ "name": "@monorepo-auth/web",
```
Delete the `{monorepo}/apps/docs` package, so there are only two packages left in the `apps` directory:
```
# Monorepo structure so far
monorepo-auth/
└── apps/
├── web # @monorepo-auth/web
└── landing # @monorepo-auth/landing
```
Do a test run with `npm run dev` to make sure everything works as expected. In my case `landing` runs at `localhost:4321` and `web` runs at `localhost:3000`.
If everything is working, it's time to set up authentication.
---
## Part 2. Create database utilities (@monorepo-auth/db-utils)
Database methods are usually shared among multiple packages inside a project, which is why it is better to create them in a separate package. Only a few methods are needed for now: a `createUser()` method for the sign-up form and `getUser()` for the login form. The `lucia` MongoDB adapter also needs a `dbConnect()` method.
Create a `db-utils` package. I created it in `{monorepo}/packages`
```shell
mkdir packages/db-utils && touch packages/db-utils/package.json && touch packages/db-utils/.env
```
Get the connection string (URI) for your MongoDB Atlas cluster: [Connection Strings - MongoDB Manual v7.0](https://www.mongodb.com/docs/manual/reference/connection-string/)
Add the URI to the created `.env` file:
```
# monorepo-auth/packages/db-utils/.env
MONGO_URI="mongodb_uri_here"
```
Set up Turborepo to use the created `.env`. I used `dotenv-cli` to make a global `.env` file accessible to all packages. Install it in the *monorepo root*:
```shell
npm install dotenv-cli@7.4.2
```
Add `globalDotEnv` to `turbo.json` config:
```diff
// monorepo-auth/turbo.json
{
"$schema": "https://turbo.build/schema.json",
"globalDependencies": ["**/.env.*local"],
+ "globalDotEnv": [".env"],
```
Edit the global `package.json` to run `turbo` with `dotenv`:
```diff
// monorepo-auth/package.json
"scripts": {
"build": "turbo build",
+ "dev": "dotenv -- turbo dev",
```
Continue creating `db-utils`. Edit its `package.json`:
```json
// monorepo-auth/packages/db-utils/package.json
{
"name": "@monorepo-auth/db-utils",
"type": "module",
"exports": "./index.js",
"version": "0.0.1"
}
```
Install necessary packages to `@monorepo-auth/db-utils`
```shell
npm install mongoose@8.4.0 @lucia-auth/adapter-mongodb@1.0.3 --workspace="@monorepo-auth/db-utils"
```
### Create `dbConnect()` method
The `dbConnect()` method is used to connect to the MongoDB database specified by `MONGO_URI`.
```js
// monorepo-auth/packages/db-utils/lib/dbConnect.js
import { connect } from "mongoose";
export async function dbConnect() {
  await connect(process.env.MONGO_URI);
  console.debug("Database connected");
}
```
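Note that `dbConnect()` may be called more than once (e.g. by `createUser()` and by the adapter). Mongoose tolerates repeated `connect()` calls, but the general pattern of memoizing a connection promise is worth knowing. Here is a generic sketch in plain JavaScript (the names are illustrative and it is not tied to mongoose):

```javascript
// Generic memoized-connection helper (a sketch; names are illustrative).
// Calling ensureConnected() many times triggers connect() only once,
// which is useful in dev servers that hot-reload modules.
function makeConnector(connect) {
  let pending = null;
  return function ensureConnected() {
    if (!pending) {
      pending = connect().catch((err) => {
        pending = null; // allow a retry after a failed attempt
        throw err;
      });
    }
    return pending;
  };
}

// Demo with a fake connect() that counts invocations.
let calls = 0;
const ensure = makeConnector(async () => {
  calls++;
  return "connected";
});

Promise.all([ensure(), ensure(), ensure()]).then(() => {
  console.log(calls); // 1
});
```

All callers share the same in-flight promise, so the underlying connection is established exactly once.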
### Create `User` and `Session` models.
I followed the recommendations from the Lucia docs and expanded `userSchema` to include `username` and `password_hash` along with `_id`:
```js
// monorepo-auth/packages/db-utils/models/user.model.js
import { Schema, model, models } from "mongoose";
const userSchema = new Schema(
{
_id: {
type: String,
required: true,
},
username: {
type: String,
required: true,
},
password_hash: {
type: String,
required: true,
},
},
{ _id: false } // the default MongoDB _id is replaced by a custom _id, generated from entropy as the Lucia docs suggest
);
export default models.User || model("User", userSchema);
```
```js
// monorepo-auth/packages/db-utils/models/session.model.js
import { Schema, model, models } from "mongoose";
const sessionSchema = new Schema(
{
_id: {
type: String,
required: true,
},
user_id: {
type: String,
required: true,
},
expires_at: {
type: Date,
required: true,
},
},
{ _id: false }
);
export default models.Session || model("Session", sessionSchema);
```
### Create `createUser()` and `getUser()` methods.
```js
// monorepo-auth/packages/db-utils/lib/createUser.js
import { dbConnect } from "./dbConnect";
import User from "../models/user.model";
export async function createUser(userData) {
  const user = new User(userData); // a constructor is not awaited
  await dbConnect();
  await user.save();
  console.debug("User saved to db");
}
```
```js
// monorepo-auth/packages/db-utils/lib/getUser.js
import User from "../models/user.model";
export async function getUser(userData) {
const user = await User.findOne(userData, {
_id: 1,
password_hash: 1,
username: 1,
});
return user ?? false;
}
```
### Create Lucia `adapter`
```js
// monorepo-auth/packages/db-utils/lib/adapter.js
import { dbConnect } from "./dbConnect";
import { MongodbAdapter } from "@lucia-auth/adapter-mongodb";
import mongoose from "mongoose";
await dbConnect();
export const adapter = new MongodbAdapter(
mongoose.connection.collection("sessions"),
mongoose.connection.collection("users")
);
```
### Create interface for `db-utils`
To export the created methods, create `index.js` in the root of the `db-utils` package:
```js
// monorepo-auth/packages/db-utils/index.js
import { dbConnect } from "./lib/dbConnect";
import { createUser } from "./lib/createUser";
import { getUser } from "./lib/getUser";
import { adapter } from "./lib/adapter";
export { createUser, adapter, dbConnect, getUser };
```
The `db-utils` package is ready and can be used by `auth-utils`.
```
# db-utils package structure
db-utils/
├── lib/
│ ├── dbConnect.js
│ ├── createUser.js
│ └── getUser.js
├── models/
│ ├── session.model.js
│ └── user.model.js
├── package.json
└── index.js
```
---
## Part 3. Setup Lucia-auth (@monorepo-auth/auth-utils)
Since both apps will use auth, it is better to define auth methods in a separate package.
Create an `auth-utils` package. I created it in `{monorepo}/packages`:
```shell
mkdir packages/auth-utils && touch packages/auth-utils/package.json && touch packages/auth-utils/tsconfig.json
```
Edit the created `package.json` and `tsconfig.json`:
```json
// monorepo-auth/packages/auth-utils/package.json
{
"name": "@monorepo-auth/auth-utils",
"type": "module",
"exports": "./index.js",
"version": "0.0.1"
}
```
```json
// monorepo-auth/packages/auth-utils/tsconfig.json
{
"compilerOptions": {
"noImplicitAny": false, // i specified this to allow imports of undeclared js modules (db-utils)
"module": "ESNext",
"target": "ESNext",
"moduleResolution":"Bundler"
}
}
```
Install necessary packages to `@monorepo-auth/auth-utils`
```shell
npm install lucia@3.2.0 --workspace="@monorepo-auth/auth-utils"
```
### Create `lucia` module
I followed the Lucia docs here, with some decomposition.
```ts
// monorepo-auth/packages/auth-utils/auth.ts
import { adapter } from "@monorepo-auth/db-utils";
import { Lucia } from "lucia";
export const lucia = new Lucia(adapter, {
sessionCookie: {
attributes: {
secure: /* import.meta.env.PROD */ false,
},
},
getUserAttributes: (attributes) => {
return {
username: attributes.username,
};
},
});
declare module "lucia" {
interface Register {
Lucia: typeof lucia;
DatabaseUserAttributes: DatabaseUserAttributes;
}
}
interface DatabaseUserAttributes {
username: string;
}
```
### Create auth-utils interface
Besides `lucia` itself, re-export `verifyRequestOrigin` from the `lucia` package, because the Astro middleware will need it.
```js
// monorepo-auth/packages/auth-utils/index.ts
export { lucia } from "./auth";
export { verifyRequestOrigin } from "lucia";
```
The `auth-utils` package is ready, and it is time to implement auth in the `web` and `landing` packages.
```
# auth-utils package structure
auth-utils/
├── tsconfig.json
├── package.json
├── index.ts
└── auth.ts
```
---
## Part 4. Implement auth in @monorepo-auth/landing
### Create middleware
The Astro middleware uses *lucia* to manage user sessions. It sets `session` and `user` on `context.locals`, making them accessible to other parts of the app.
```ts
// monorepo-auth/landing/src/middleware.ts
import { lucia, verifyRequestOrigin } from "@monorepo-auth/auth-utils";
import { defineMiddleware } from "astro:middleware";
export const onRequest = defineMiddleware(async (context, next) => {
if (context.request.method !== "GET") {
const originHeader = context.request.headers.get("Origin");
const hostHeader = context.request.headers.get("Host");
if (
!originHeader ||
!hostHeader ||
!verifyRequestOrigin(originHeader, [hostHeader])
) {
return new Response(null, {
status: 403,
});
}
}
const sessionId = context.cookies.get(lucia.sessionCookieName)?.value ?? null;
if (!sessionId) {
context.locals.user = null;
context.locals.session = null;
return next();
}
const { session, user } = await lucia.validateSession(sessionId);
if (session && session.fresh) {
const sessionCookie = lucia.createSessionCookie(session.id);
context.cookies.set(
sessionCookie.name,
sessionCookie.value,
sessionCookie.attributes
);
}
if (!session) {
const sessionCookie = lucia.createBlankSessionCookie();
context.cookies.set(
sessionCookie.name,
sessionCookie.value,
sessionCookie.attributes
);
}
context.locals.session = session;
context.locals.user = user;
return next();
});
```
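The CSRF check at the top of the middleware compares the `Origin` header against the `Host` header. As a rough illustration of what `verifyRequestOrigin` does (this is a simplified sketch, not lucia's actual implementation):

```javascript
// Simplified origin check: allow the request only when the Origin header's
// host matches one of the allowed hosts.
function isAllowedOrigin(originHeader, allowedHosts) {
  if (!originHeader) return false;
  try {
    const originHost = new URL(originHeader).host; // includes the port
    return allowedHosts.includes(originHost);
  } catch {
    return false; // malformed Origin header
  }
}

console.log(isAllowedOrigin("http://localhost:4321", ["localhost:4321"])); // true
console.log(isAllowedOrigin("https://evil.example", ["localhost:4321"])); // false
console.log(isAllowedOrigin(null, ["localhost:4321"])); // false
```

Blocking non-GET requests whose origin does not match the host is a cheap first line of defence against cross-site request forgery.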
Declare `session` and `user` types
```ts
// monorepo-auth/landing/src/env.d.ts
/// <reference types="astro/client" />
declare namespace App {
interface Locals {
session: import("lucia").Session | null;
user: import("lucia").User | null;
}
}
```
Lucia only works in Astro's server mode, so edit `astro.config.mjs`:
```js
// monorepo-auth/landing/astro.config.mjs
import { defineConfig } from "astro/config";
import node from "@astrojs/node";
export default defineConfig({
output: "server",
adapter: node({
mode: "standalone",
}),
});
```
Enabling server mode requires installing the `@astrojs/node` adapter:
```
npm install @astrojs/node@8.2.5 --workspace="@monorepo-auth/landing"
```
### Create signup form and API
To keep things simple I strictly followed the *lucia* docs, creating the login and signup pages in the *landing* package. However, to achieve a more modular and flexible architecture, they could live in a separate auth package with the respective redirects.
The API route and signup form are copies from the lucia docs, but they import the shared `db-utils` and `auth-utils` packages. Note that `lucia` and `@node-rs/argon2` must also be installed in the `landing` workspace (e.g. `npm install lucia@3.2.0 @node-rs/argon2 --workspace="@monorepo-auth/landing"`):
```ts
// monorepo-auth/landing/src/pages/api/signup.ts
import { lucia } from "@monorepo-auth/auth-utils";
import { createUser } from "@monorepo-auth/db-utils";
import { hash } from "@node-rs/argon2";
import { generateIdFromEntropySize } from "lucia";
import type { APIContext } from "astro";
export async function POST(context: APIContext): Promise<Response> {
const formData = await context.request.formData();
const username = formData.get("username");
// username must be between 3 ~ 31 characters, and only consist of lowercase letters, 0-9, -, and _
// keep in mind some databases (e.g. mysql) are case insensitive
if (
typeof username !== "string" ||
username.length < 3 ||
username.length > 31 ||
!/^[a-z0-9_-]+$/.test(username)
) {
return new Response("Invalid username", {
status: 400,
});
}
const password = formData.get("password");
if (
typeof password !== "string" ||
password.length < 6 ||
password.length > 255
) {
return new Response("Invalid password", {
status: 400,
});
}
const userId = generateIdFromEntropySize(10); // 16 characters long
const passwordHash = await hash(password, {
// recommended minimum parameters
memoryCost: 19456,
timeCost: 2,
outputLen: 32,
parallelism: 1,
});
// TODO: check if username is already used
await createUser({
_id: userId,
username: username,
password_hash: passwordHash,
});
const session = await lucia.createSession(userId, {});
const sessionCookie = lucia.createSessionCookie(session.id);
context.cookies.set(
sessionCookie.name,
sessionCookie.value,
sessionCookie.attributes
);
return context.redirect("/");
}
```
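The inline username and password checks can be factored into small helpers that are easy to unit-test. A sketch mirroring the rules above (the function names are illustrative):

```javascript
// Username: 3-31 chars, lowercase letters, digits, "-" and "_" only.
function isValidUsername(value) {
  return (
    typeof value === "string" &&
    value.length >= 3 &&
    value.length <= 31 &&
    /^[a-z0-9_-]+$/.test(value)
  );
}

// Password: 6-255 chars (length only; strength checks are out of scope here).
function isValidPassword(value) {
  return typeof value === "string" && value.length >= 6 && value.length <= 255;
}

console.log(isValidUsername("alice_01")); // true
console.log(isValidUsername("Al"));       // false (too short, and uppercase)
console.log(isValidPassword("hunter2"));  // true
```

Keeping validation in pure functions means the same rules can be reused by the login route and any client-side form checks.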
Create signup form:
```html
<!--monorepo-auth/landing/src/pages/signup.astro-->
<html lang="en">
<body>
<h1>Signup Page</h1>
<form method="post" action="/api/signup">
<label for="username">Username</label>
<input id="username" name="username" />
<label for="password">Password</label>
<input id="password" name="password" />
<button>Continue</button>
</form>
</body>
</html>
```
Add a signup form link to `index.astro` to simplify navigation. I deleted the original content of `index.astro` to keep it simple:
```diff
// monorepo-auth/landing/src/pages/index.astro
<Layout title="Welcome to Astro.">
<main>
<h1>Landing page</h1>
+ <a href="/signup">Signup</a>
</main>
</Layout>
```
To check that the sign-up feature is working:
1. Launch project `npm run dev`
2. Create new user on `http://localhost:4321/signup`
In MongoDB Atlas there should be a new user in the `users` collection, as well as a corresponding session in the `sessions` collection.

In the browser there should be an `auth_session` cookie:

### Create login form and API
```ts
// monorepo-auth/landing/src/pages/api/login.ts
import { lucia } from "@monorepo-auth/auth-utils";
import { getUser } from "@monorepo-auth/db-utils";
import { verify } from "@node-rs/argon2";
import type { APIContext } from "astro";
export async function POST(context: APIContext): Promise<Response> {
const formData = await context.request.formData();
const username = formData.get("username");
if (
typeof username !== "string" ||
username.length < 3 ||
username.length > 31 ||
!/^[a-z0-9_-]+$/.test(username)
) {
return new Response("Invalid username", {
status: 400,
});
}
const password = formData.get("password");
if (
typeof password !== "string" ||
password.length < 6 ||
password.length > 255
) {
return new Response("Invalid password", {
status: 400,
});
}
const existingUser = await getUser({ username: username });
if (!existingUser) {
// NOTE:
// Returning immediately allows malicious actors to figure out valid usernames from response times,
// allowing them to only focus on guessing passwords in brute-force attacks.
// As a preventive measure, you may want to hash passwords even for invalid usernames.
// However, valid usernames can already be revealed with the signup page among other methods.
// It will also be much more resource intensive.
// Since protecting against this is non-trivial,
// it is crucial your implementation is protected against brute-force attacks with login throttling etc.
// If usernames are public, you may outright tell the user that the username is invalid.
return new Response("Incorrect username or password", {
status: 400,
});
}
const validPassword = await verify(existingUser.password_hash, password, {
memoryCost: 19456,
timeCost: 2,
outputLen: 32,
parallelism: 1,
});
if (!validPassword) {
return new Response("Incorrect username or password", {
status: 400,
});
}
const session = await lucia.createSession(existingUser._id, {});
const sessionCookie = lucia.createSessionCookie(session.id);
context.cookies.set(
sessionCookie.name,
sessionCookie.value,
sessionCookie.attributes
);
return context.redirect("/");
}
```
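The long NOTE in the handler above is about timing side channels. The same concern applies whenever secrets are compared directly: a naive comparison exits at the first mismatch, leaking information through response time. A minimal illustrative sketch of a constant-time comparison (real code should compare fixed-length hashes, for instance with Node's `crypto.timingSafeEqual`):

```javascript
// Constant-time string comparison: XOR every character so the work done
// does not depend on where the first mismatch occurs.
function secretsEqual(a, b) {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a.charCodeAt(i) ^ b.charCodeAt(i);
  }
  return diff === 0;
}

console.log(secretsEqual("s3cret", "s3cret")); // true
console.log(secretsEqual("s3cret", "s3creX")); // false
```

In this tutorial the argon2 `verify()` call already handles this internally; the sketch only illustrates the idea behind the warning.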
```html
<!--monorepo-auth/landing/src/pages/login.astro-->
<html lang="en">
<body>
<h1>Login Page</h1>
<form method="post" action="/api/login">
<label for="username">Username</label>
<input id="username" name="username" />
<label for="password">Password</label>
<input id="password" name="password" />
<button>Continue</button>
</form>
</body>
</html>
```
Add a login form link to `index.astro`:
```diff
// landing/src/pages/index.astro
<Layout title="Welcome to Astro.">
<main>
<h1>Landing page</h1>
<a href="/signup">Signup</a>
+ <a href="/login">Login</a>
</main>
</Layout>
```
### Redirect authenticated user to web app
For convenience, create environment variables in the root `.env` file with the URLs on which the apps run. In my case:
```diff
# monorepo-auth/packages/db-utils/.env
MONGO_URI="mongodb_uri_here"
+ WEB_URL="http://localhost:3000"
+ LANDING_URL="http://localhost:4321"
```
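Since `process.env` values are typed `string | undefined`, a tiny helper can fail fast on a missing variable instead of silently redirecting to `undefined`. A sketch (the helper name is illustrative):

```javascript
// Read a required environment variable or throw with a clear message.
function requireEnv(name, env = process.env) {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

console.log(requireEnv("WEB_URL", { WEB_URL: "http://localhost:3000" })); // http://localhost:3000
```

Calling such a helper once at startup surfaces configuration mistakes immediately rather than at redirect time.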
Since the middleware sets `user` on `context.locals`, it can be checked in Astro pages within the frontmatter:
```js
---
const user = Astro.locals.user;
if (user) {
return Astro.redirect(process.env.WEB_URL);
}
---
```
Now, if the user is authenticated, they will be redirected to `web`.
---
## Part 5. Implement auth in @monorepo-auth/web
The last part of this guide covers setting up the `web` package to redirect unauthenticated users to the landing page and to provide a log-out feature.
### Validate users in server components
Create a `validateRequest()` function in `auth.ts`. It is a copy from the Lucia documentation with a different `lucia` import.
```ts
// web/utils/auth.ts
import { cookies } from "next/headers";
import { cache } from "react";
import { lucia } from "@monorepo-auth/auth-utils"; // lucia instance from shared auth-utils
import type { Session, User } from "lucia";
export const validateRequest = cache(
async (): Promise<
{ user: User; session: Session } | { user: null; session: null }
> => {
const sessionId = cookies().get(lucia.sessionCookieName)?.value ?? null;
if (!sessionId) {
return {
user: null,
session: null,
};
}
const result = await lucia.validateSession(sessionId);
// next.js throws when you attempt to set cookie when rendering page
try {
if (result.session && result.session.fresh) {
const sessionCookie = lucia.createSessionCookie(result.session.id);
cookies().set(
sessionCookie.name,
sessionCookie.value,
sessionCookie.attributes
);
}
if (!result.session) {
const sessionCookie = lucia.createBlankSessionCookie();
cookies().set(
sessionCookie.name,
sessionCookie.value,
sessionCookie.attributes
);
}
} catch {}
return result;
}
);
```
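Conceptually, `validateSession` boils down to looking up the session id, checking expiry, and occasionally extending it (that is what the `fresh` flag signals). An in-memory sketch of that idea (illustrative only, not lucia's implementation; the TTL value is an assumption):

```javascript
// In-memory sketch of session validation with sliding expiration.
const SESSION_TTL_MS = 1000 * 60 * 60 * 24 * 30; // 30 days (illustrative)

const sessions = new Map(); // sessionId -> { userId, expiresAt }

function createSession(sessionId, userId, now = Date.now()) {
  sessions.set(sessionId, { userId, expiresAt: now + SESSION_TTL_MS });
}

function validateSession(sessionId, now = Date.now()) {
  const session = sessions.get(sessionId);
  if (!session) return null;
  if (session.expiresAt <= now) {
    sessions.delete(sessionId); // expired: clean up
    return null;
  }
  // "Fresh": extend expiry once more than half the TTL has elapsed.
  if (session.expiresAt - now < SESSION_TTL_MS / 2) {
    session.expiresAt = now + SESSION_TTL_MS;
  }
  return session;
}

createSession("abc", "user_1");
console.log(validateSession("abc") !== null); // true
console.log(validateSession("missing"));      // null
```

Lucia does the same lookup against the `sessions` collection and asks you to rewrite the cookie whenever the expiry was extended.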
`validateRequest()` can be used in server components to check whether a user is authenticated. Setting up validation in client components requires setting up an API or context, which is not covered in this guide.
Add a redirect to the landing page for unauthenticated users:
```tsx
// monorepo/web/app/page.tsx
import { validateRequest } from "../utils/auth";
import { redirect } from "next/navigation";
export default async function ProtectedPage() {
const { user } = await validateRequest();
if (!user) {
return redirect(process.env.LANDING_URL as string);
}
return (
<>
<h1>Web-app</h1>
<h2>Hi, {user.username}!</h2>
</>
);
}
```
### Create logout button in Next.js
Since authenticated users don't have access to the landing page (it redirects them to `web`), the logout feature should be implemented in the `web` package:
```tsx
// monorepo/web/app/page.tsx
import { validateRequest } from "../utils/auth";
import { lucia } from "@monorepo-auth/auth-utils";
import { cookies } from "next/headers";
import { redirect } from "next/navigation";
import type { ActionResult } from "next/dist/server/app-render/types";
export default async function ProtectedPage() {
const { user } = await validateRequest();
if (!user) {
return redirect(process.env.LANDING_URL as string);
}
return (
<>
<h1>Web-app</h1>
<h2>Hi, {user.username}!</h2>
<form action={logout}>
<button>Sign out</button>
</form>
</>
);
}
async function logout(): Promise<ActionResult> {
"use server";
const { session } = await validateRequest();
if (!session) {
return {
error: "Unauthorized",
};
}
await lucia.invalidateSession(session.id);
const sessionCookie = lucia.createBlankSessionCookie();
cookies().set(
sessionCookie.name,
sessionCookie.value,
sessionCookie.attributes
);
return redirect(process.env.LANDING_URL as string);
}
```
## Outcome
- Both packages in the monorepo can access the user session and check whether the user is authenticated.
- `db-utils` and `auth-utils` can be used by other packages that might be added to monorepo in the future.
- project source code: [GitHub - skorphil/monorepo-auth](https://github.com/skorphil/monorepo-auth)
## Further reading:
- [Lucia documentation](https://lucia-auth.com)
- [Building Your Application: Authentication | Next.js](https://nextjs.org/docs/pages/building-your-application/authentication#authentication)
- [Authentication | Astro Docs](https://docs.astro.build/en/guides/authentication/#lucia)
- [The Copenhagen Book](https://thecopenhagenbook.com)
- [Mongoose v8.4.1: Getting Started](https://mongoosejs.com/docs/)
Happy coding!
Feedback is appreciated. | skorphil |
1,878,964 | Tune In to "Talking Tech with Techielass" – Your Weekly Azure Update in Under 5 Minutes! | Want to stay on top of the latest Azure news without spending too much time? Talking Tech with... | 0 | 2024-06-06T17:00:00 | https://dev.to/techielass/tune-in-to-talking-tech-with-techielass-your-weekly-azure-update-in-under-5-minutes-43b9 | azure | 
Want to stay on top of the latest Azure news without spending too much time? **_Talking Tech with Techielass_** is your go-to podcast! Each week, I deliver the newest updates from Azure in a short, under 5-minute episode.
Whether you're commuting, working, or simply need a quick tech update, my podcast ensures you're always in the know about Azure’s dynamic world. No unnecessary details—just straight information to keep you informed and efficient.
Catch **_Talking Tech with Techielass_** on your preferred podcast platform:
- [**Amazon Music**](https://music.amazon.co.uk/podcasts/39b95751-3f87-4593-a3a4-3c51654cd1ed/tech-talk-with-techielass?ref=techielass.com)
- [**Spotify**](https://open.spotify.com/show/34QAAvMLSYN09eOAu3j2b0?ref=techielass.com)
- [**Apple Podcasts**](https://podcasts.apple.com/us/podcast/sarah-leans-weekly-update-podcast/id1506712194?uo=4&ref=techielass.com)
- [**YouTube**](https://www.youtube.com/playlist?list=PLbjernQTVXitqaDOIVD_e5qjmSWErpRA1&ref=techielass.com)
- [**Overcast**](https://overcast.fm/itunes1506712194/sarah-leans-weekly-update-podcast?ref=techielass.com)
- [**Castbox**](https://castbox.fm/channel/id2752244?ref=techielass.com)
Prefer an [RSS feed](https://anchor.fm/s/1b36d81c/podcast/rss?ref=techielass.com)? We've got that too!
Subscribe now and never miss an episode. Stay informed, stay ahead, and let’s dive into tech!
[**Listen Now and Subscribe!**](https://podfollow.com/talking-tech-with-techielass?ref=techielass.com) | techielass |
1,879,463 | 846. Hand of Straights | 846. Hand of Straights Medium Alice has some number of cards and she wants to rearrange the cards... | 27,523 | 2024-06-06T16:56:44 | https://dev.to/mdarifulhaque/846-hand-of-straights-pg1 | php, leetcode, algorithms, programming | 846\. Hand of Straights
Medium
Alice has some number of cards and she wants to rearrange the cards into groups so that each group is of size `groupSize` and consists of `groupSize` consecutive cards.
Given an integer array `hand` where `hand[i]` is the value written on the <code>i<sup>th</sup></code> card and an integer `groupSize`, return `true` if she can rearrange the cards, or `false` otherwise.
**Example 1:**
- **Input:** hand = [1,2,3,6,2,3,4,7,8], groupSize = 3
- **Output:** true
- **Explanation:** Alice's hand can be rearranged as [1,2,3],[2,3,4],[6,7,8]
**Example 2:**
- **Input:** hand = [1,2,3,4,5], groupSize = 4
- **Output:** false
- **Explanation:** Alice's hand can not be rearranged into groups of 4.
**Example 3:**
- **Input:** hand = [2,1], groupSize = 2
- **Output:** true
- **Explanation:** Alice's hand can be rearranged as [1,2].
**Constraints:**
- <code>1 <= hand.length <= 10<sup>4</sup></code>
- <code>0 <= hand[i] <= 10<sup>9</sup></code>
- <code>1 <= groupSize <= hand.length</code>
**Note:** This question is the same as 1296: https://leetcode.com/problems/divide-array-in-sets-of-k-consecutive-numbers/
**Solution:** Count each card's frequency, then scan the sorted hand and greedily consume `groupSize` consecutive values starting from the smallest remaining card (O(n log n) due to the sort).
```
class Solution {
    /**
     * @param Integer[] $hand
     * @param Integer $groupSize
     * @return Boolean
     */
    function isNStraightHand($hand, $groupSize) {
        // The cards must divide evenly into groups.
        if (count($hand) % $groupSize !== 0) {
            return false;
        }
        $cardCounts = array();
        foreach ($hand as $card) {
            if (!isset($cardCounts[$card])) {
                $cardCounts[$card] = 1;
            } else {
                $cardCounts[$card]++;
            }
        }
        sort($hand);
        foreach ($hand as $card) {
            if (isset($cardCounts[$card])) {
                // Greedily consume groupSize consecutive values starting at $card.
                for ($nextCard = $card; $nextCard < $card + $groupSize; $nextCard++) {
                    if (!isset($cardCounts[$nextCard])) {
                        return false;
                    }
                    if (--$cardCounts[$nextCard] == 0) {
                        unset($cardCounts[$nextCard]);
                    }
                }
            }
        }
        return true;
    }
}
```
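The same greedy idea translated to JavaScript, for readers outside PHP (this mirrors the solution above, including the early exit when the hand size is not divisible by `groupSize`):

```javascript
// Greedy check: repeatedly take the smallest remaining card and try to
// consume groupSize consecutive values starting from it.
function isNStraightHand(hand, groupSize) {
  if (hand.length % groupSize !== 0) return false;
  const counts = new Map();
  for (const card of hand) counts.set(card, (counts.get(card) ?? 0) + 1);
  const sorted = [...hand].sort((a, b) => a - b);
  for (const card of sorted) {
    if (!counts.has(card)) continue; // already consumed by an earlier group
    for (let next = card; next < card + groupSize; next++) {
      const left = counts.get(next);
      if (!left) return false; // gap: no consecutive card available
      if (left === 1) counts.delete(next);
      else counts.set(next, left - 1);
    }
  }
  return true;
}

console.log(isNStraightHand([1, 2, 3, 6, 2, 3, 4, 7, 8], 3)); // true
console.log(isNStraightHand([1, 2, 3, 4, 5], 4));             // false
```

The `Map` plays the role of PHP's associative array, and deleting exhausted keys keeps the "smallest remaining card" lookups correct.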
**Contact Links**
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
| mdarifulhaque |
1,879,461 | Directory Templates: How to Pick the Best One for Your Project | Creating a directory website can be a time consuming task, but using the right directory templates... | 0 | 2024-06-06T16:55:16 | https://dev.to/karakhanyans/directory-templates-how-to-pick-the-best-one-for-your-project-46oa | laravel, directories, php, webdev | Creating a directory website can be a time consuming task, but using the right directory templates can simplify the process significantly. This guide will help you dig deeper into Larafast Directories Boilerplate, by discussing the key factors to consider and the benefits of using customizable templates.
> Check [Larafast Directories](https://larafast.com/projects/directories) for your next Directory Project
## Why Directory Templates Matter
### The Role of Templates in Directory Websites
Directory templates provide the structure for a future directory website, along with the functionality such a site requires: categories, product listings, and an admin panel to manage everything. Some directories require subscription or one-time payments from users to list their product, so having a pre-built payment system is also crucial.
### Benefits of Using Templates
**Time Efficiency:** Templates save time by providing pre-designed layouts and features, allowing you to focus on content and customization rather than starting from scratch.
**Cost-Effective:** Using a template can be more cost-effective than hiring a designer to create a custom site from the ground up.
**Consistency:** Templates ensure a consistent look and feel across your website, which is essential for maintaining a professional appearance.
**Functionality:** Many templates come with built-in features tailored for directory sites, such as search functionality, listing management, and user review systems.
**Customizability:** High-quality templates offer customization options that allow you to tailor the site to your specific needs without extensive coding knowledge.
## Factors to Consider When Choosing a Template
### Customization Options
When selecting a directory template, it's crucial to consider the level of customization it offers. A good template should allow you to adjust colors, fonts, layouts, and other design elements to match your brand identity. Look for templates that are flexible both in design customization and at the code level.
### Ease of Use
The ease of use of a template can significantly impact your development process. Choose a template that is user-friendly and comes with comprehensive documentation and support. A well-documented template will save you time and frustration, making it easier to implement and modify according to your needs.
### SEO Optimization
SEO optimization is vital for any directory website to ensure that your listings appear in search engine results. Select a template that is built with SEO best practices in mind, such as fast loading times, mobile responsiveness, and clean code. Templates that include SEO tools or plugins can further enhance your site's visibility.
## Why choose Larafast Directories?
### Larafast Directories: Features and Benefits
Larafast Directories is a powerful and flexible directory template designed to help you create professional directory websites with ease. It offers a range of features tailored specifically for directory sites, making it an ideal choice for developers and businesses alike.
**Features:**
- Product Listing management
- Pre-defined popular Categories
- Search across categories and products
- SEO optimization for Categories and Products
- Automatic Sitemap generation to index pages on Google fast
- Admin panel to manage categories, products, users, etc.
- User Authentication and Dashboard
- Payment System to manage paid listings
- Blog
- 30+ Themes for every taste
- Documentation and Support
**Benefits:**
- Enhances user experience with a clean, intuitive interface
- Saves time with pre-built components and layouts
- Boosts SEO efforts with optimized code and structure
By selecting a template that meets your specific needs and goals, you can streamline the development process and create a site that not only looks great but also performs well. Make an informed choice and set the foundation for a successful directory project with the right template.
[Get Larafast Directories](https://larafast.com/projects/directories) | karakhanyans |
1,879,460 | Buy verified cash app account | Buy verified cash app account Cash app has emerged as a dominant force in the realm of mobile banking... | 0 | 2024-06-06T16:51:12 | https://dev.to/whitemartin001/buy-verified-cash-app-account-3odn | Buy verified cash app account
Cash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.
Our commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.
Why is dmhelpshop the best place to buy USA cash app accounts?
It’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.
Clearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.
Our account verification process includes the submission of the following documents: [List of specific documents required for verification].
Genuine and activated email verified
Registered phone number (USA)
Selfie verified
SSN (social security number) verified
Driving license
BTC enable or not enable (BTC enable best)
100% replacement guaranteed
100% customer satisfaction
When it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.
Clearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.
Additionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.
How to use the Cash Card to make purchases?
To activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. How To Buy Verified Cash App Accounts.
After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.
Why do we suggest keeping the Cash App account username unchanged?
Selecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.
Buy verified cash app accounts quickly and easily for all your financial needs.
As the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. How To Buy Verified Cash App Accounts.
For entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.
When it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.
This article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.
Is it safe to buy Cash App Verified Accounts?
Cash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.
Unfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. How To Buy Verified Cash App Accounts.
Cash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.
Leveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.
https://dmhelpshop.com/product/buy-verified-cash-app-account/
Why you need to buy verified Cash App accounts personal or business?
The Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.
To address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.
If you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.
Improper payment practices can lead to potential issues with your employees, as they could report you to the government. However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.
A Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.
This accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.
How to verify Cash App accounts
To ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.
As part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. Buy verified cash app account.
How is Cash App used for international transactions?
Experience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.
No matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.
Understanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.
As we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.
Offers and advantage to buy cash app accounts cheap?
With Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.
We deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.
Enhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.
Trustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.
How Customizable are the Payment Options on Cash App for Businesses?
Discover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.
Explore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.
Discover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.
Where To Buy Verified Cash App Accounts
When considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.
Equally important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.
The Importance Of Verified Cash App Accounts
In today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.
By acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.
Conclusion
Enhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.
Choose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.
Contact Us / 24 Hours Reply
Telegram:dmhelpshop
WhatsApp: +1 (980) 277-2786
Skype:dmhelpshop
Email:dmhelpshop@gmail.com | whitemartin001 | |
1,879,459 | 🏴☠️ ** Navigating the sea of software development with Domain-Driven Design and One Piece ** 🏴☠️ | Hi Chiquis! 👋🏻 Ahoy, developers and One Piece fans! 🗡️Today we're going to embark on a... | 0 | 2024-06-06T16:47:44 | https://dev.to/orlidev/-navegando-por-el-mar-del-desarrollo-de-software-con-domain-driven-design-y-one-piece--413f | webdev, tutorial, design, architecture | Hi Chiquis! 👋🏻 Ahoy, developers and One Piece fans! 🗡️ Today we're embarking on an adventure to discover how Domain-Driven Design (DDD) is the treasure map of software development, and how it resembles sailing the seas alongside the Straw Hat crew.

Imagine you're sailing the vast ocean of software development. Your ship is your project and your crew are the members of your team. On this voyage, your map is Domain-Driven Design (DDD), ⛓️ a methodology that centers software design on the problem domain. Just as in the One Piece comic series, every member of your crew plays a crucial role in the hunt for the treasure: successful software.
In the context of Domain-Driven Design (DDD), ☠️ a software development methodology focused on building a domain model that reflects the complexities and rules of the business, we can draw some interesting analogies with the characters of One Piece.
🍜 One Piece: the adventure of domain-driven software
Sail the seas of software development alongside Luffy and his crew, where Domain-Driven Design (DDD) takes the role of compass and guide on your journey toward a solid, scalable architecture.
🏮 What is Domain-Driven Design?
Imagine One Piece as a vast world full of islands, each one representing a unique domain within your application. DDD helps you navigate this archipelago, drawing a detailed map of each domain and establishing the relationships between them.
How does DDD work? 💡
Just like the Straw Hat pirates, each domain has its own language, its own rules, and its own goals. DDD lets you understand each domain in depth, identifying its key elements, behaviors, and interactions.

The Grand Line of Development: Domain-Driven Design 💥
Just as the Grand Line is known as the most dangerous and exciting place in the world of One Piece, software development has its own Grand Line: Domain-Driven Design. DDD is an approach that focuses on the core business problem, prioritizing the domain's logic and rules over the technology.
- For example, Monkey D. Luffy, the main protagonist, could represent the Core Domain, since he is the center of the story and the one who drives the plot forward. His ability to adapt and overcome challenges mirrors the flexibility the core domain model needs in DDD in order to evolve with the requirements of the business.
- Nami, the navigator, could be seen as the Domain Expert, since she has specialized knowledge of navigation and cartography that is crucial for the crew. In DDD, domain experts are the people with deep knowledge of the business area who help model the software so that it faithfully reflects reality.
- Sanji, the chef, could be compared to the infrastructure in DDD, since he provides the support (in this case, food) the crew needs to continue the voyage. Similarly, the infrastructure in DDD provides the technical support the domain model needs to work properly.
These are just a few possible analogies and, of course, every One Piece character has unique traits that could map to different aspects of DDD. The series is known for its rich cast of characters, and each one could have its counterpart in a well-designed domain model.
🌊 Sailing the sea of the problem domain
Just as the Straw Hat Pirates explore the Grand Line in search of the One Piece, your team explores the problem domain to understand it fully. In DDD, this process is called domain modeling. It's like charting your own map of the Grand Line.

🏝️ Islands of Bounded Context
In DDD, the system is divided into bounded contexts, each with its own model. These are like the different islands in One Piece, each with its own culture and rules. For example, the island of Skypiea with its flying inhabitants might represent one bounded context, while Water 7, famous for its shipwrights, might represent another.
🦜 Ubiquitous Language: the pirates' tongue
In DDD, the Ubiquitous Language is a common language used by every member of the team. It's like the pirates' language in One Piece. Everyone, from Luffy to Zoro, understands terms like "nakama" (crewmate) or "haki" (a latent power every living being possesses).
📜 Domain Events: the great battles
Domain events in DDD are significant actions that occur in the domain. They are like the great battles in One Piece, such as the Battle of Marineford. These events can change the course of your software, just as battles can change the course of history in One Piece.
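As a minimal sketch (the names and the tiny dispatcher are illustrative, not taken from any specific framework), a domain event can be modeled as a plain object that is published to interested handlers:

```javascript
// Minimal domain-event dispatcher (illustrative, framework-free sketch).
const handlers = {};

function subscribe(eventName, handler) {
  (handlers[eventName] = handlers[eventName] || []).push(handler);
}

function publish(event) {
  (handlers[event.name] || []).forEach((handle) => handle(event));
}

// A significant action in the domain becomes an explicit, named event object.
const log = [];
subscribe("BattleWon", (e) => log.push(`Course changed at ${e.place}`));
publish({ name: "BattleWon", place: "Marineford" });
console.log(log[0]); // "Course changed at Marineford"
```

Making the event an explicit object keeps the part of the code that raises it decoupled from the parts that react to it.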
🏴☠️ Development teams: the Straw Hat crew
Finally, your development team is like the Straw Hat crew. Each member has a unique and vital role. Just as Luffy could never reach the One Piece without his crew, a software project cannot succeed without an effective development team.
🏴☠️ Layered architecture: the structure of the ship
In DDD, layered architecture is a way of organizing code into different layers by responsibility. This is similar to how a pirate ship in One Piece is organized into different levels, each with its own function: the captain's cabin, the main deck, the cannons, and so on.
🌊 Entities and value objects: the crew members and their treasures
Entities in DDD are objects with a unique identity, while value objects are defined by their attributes. In One Piece, the crew members would be the entities, each with their own unique identity. The value objects could be the treasures they find, defined by their intrinsic value.
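The difference shows up in how equality is defined. A hypothetical sketch in JavaScript (class and field names are made up for illustration):

```javascript
// Entity: identity matters — two crew members with the same name are still different.
class CrewMember {
  constructor(id, name) { this.id = id; this.name = name; }
  equals(other) { return other instanceof CrewMember && other.id === this.id; }
}

// Value object: only the attributes matter — two equal treasures are interchangeable.
class Treasure {
  constructor(kind, value) { this.kind = kind; this.value = value; }
  equals(other) { return other.kind === this.kind && other.value === this.value; }
}

const zoro1 = new CrewMember(1, "Zoro");
const zoro2 = new CrewMember(2, "Zoro");
console.log(zoro1.equals(zoro2)); // false — same attributes, different identity

const gold1 = new Treasure("gold", 100);
const gold2 = new Treasure("gold", 100);
console.log(gold1.equals(gold2)); // true — defined purely by attributes
```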
🗺️ Aggregates: the pirate alliances
Aggregates in DDD are groups of entities and value objects treated as a single unit. In One Piece, this could be a pirate alliance, where several crews work together toward a common goal.
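The key point is that outside code talks to the aggregate through one root object, which enforces the invariants of the whole unit. A small illustrative sketch (names are hypothetical):

```javascript
// Aggregate root: the Crew. Outside code goes through it,
// never through the individual members directly.
class Crew {
  constructor(name) {
    this.name = name;
    this.members = [];
  }
  recruit(memberName) {
    // Invariants of the whole unit are enforced here, in one place.
    if (this.members.includes(memberName)) {
      throw new Error(`${memberName} is already aboard`);
    }
    this.members.push(memberName);
  }
}

const strawHats = new Crew("Straw Hats");
strawHats.recruit("Luffy");
strawHats.recruit("Zoro");
console.log(strawHats.members.length); // 2
```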
⚓ Repositories: the ship's hold
Repositories in DDD are mechanisms for storing, retrieving, and searching for objects in your domain. In One Piece, this could be a pirate ship's hold, where supplies and treasures are kept.
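An in-memory version makes the idea concrete — store, retrieve by identity, and search by a condition. (This is only a sketch; a real repository would typically wrap a database.)

```javascript
// In-memory repository sketch: store, retrieve, and search domain objects.
class ShipHold {
  constructor() { this.items = new Map(); }
  store(id, item) { this.items.set(id, item); }
  findById(id) { return this.items.get(id); }
  findAll(predicate) { return [...this.items.values()].filter(predicate); }
}

const hold = new ShipHold();
hold.store("t1", { kind: "gold", value: 100 });
hold.store("t2", { kind: "map", value: 5 });
console.log(hold.findById("t1").kind);                 // "gold"
console.log(hold.findAll((i) => i.value > 10).length); // 1
```

Because the rest of the code only sees `store`/`findById`/`findAll`, the storage mechanism can be swapped without touching the domain model.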
📜 Specifications: the rules of the sea
Specifications in DDD are rules or conditions an object must satisfy. In One Piece, these could be the rules of the sea that pirates must follow, such as not attacking civilians or respecting the flag of parley.
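The specification pattern expresses each rule as an object with an `isSatisfiedBy` check, so rules can be named and combined. A hedged sketch (rule names and the `action` shape are invented for the example):

```javascript
// Each rule is a named object with an isSatisfiedBy predicate.
const noAttackingCivilians = {
  isSatisfiedBy: (action) => action.target !== "civilian",
};
const respectsParley = {
  isSatisfiedBy: (action) => !(action.flag === "parley" && action.type === "attack"),
};

// Specifications compose: the combined rule holds only if both parts hold.
function and(a, b) {
  return { isSatisfiedBy: (x) => a.isSatisfiedBy(x) && b.isSatisfiedBy(x) };
}

const rulesOfTheSea = and(noAttackingCivilians, respectsParley);
console.log(rulesOfTheSea.isSatisfiedBy({ type: "attack", target: "marine", flag: "none" }));   // true
console.log(rulesOfTheSea.isSatisfiedBy({ type: "attack", target: "civilian", flag: "none" })); // false
```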

The One Piece characters as roles in DDD 💨
- Monkey D. Luffy: the visionary leader. The captain of the Straw Hats represents the leader of the development team, who defines the project's overall vision and guides the crew toward success. Luffy doesn't get distracted by technical details, just as DDD focuses on the heart of the problem.
- Zoro: the sword master. The crew's swordsman symbolizes the domain experts. They have deep knowledge of the domain's rules and behaviors and can navigate its complexities with mastery.
- Nami: the navigator. The ship's navigator represents the software architects. They design the system's architecture, making sure each domain integrates efficiently and scalably.
- Usopp: the ingenious inventor. The team's inventor symbolizes the developers. They turn ideas into working code, applying the right design patterns and programming techniques for each domain.
- Sanji: the meticulous chef. The crew's chef represents the testers. They guarantee code quality, running rigorous tests to find and fix errors in each domain.
The Crew and the Development Team: the Straw Hat crew is a diverse team working together toward a common goal. In DDD, it is crucial that the development team collaborates closely with the domain experts to build software that faithfully reflects the needs of the business.
Facing the Marines: challenges and solutions 🌬️
Just as the crew faces the Marines and rival pirates, DDD teams face challenges such as domain complexity and the need to keep the software adaptable and scalable. But with a good understanding of the domain and effective collaboration, these challenges can be overcome.

All Aboard! ⚔️
So, just as Luffy and his crew set out in search of the One Piece, developers who adopt DDD set out to build software that is a true treasure for its users.
- Roronoa Zoro and the Subdomains: the three-sword swordsman represents the subdomains in DDD. Each sword can be seen as a different subdomain working in harmony to support the core domain. Zoro is an expert with each one, just as a DDD developer must be with each subdomain of the software.
- Nami and the Ubiquitous Language: the navigator is like the ubiquitous language in DDD. She uses maps and the weather to guide the crew, just as DDD uses a common language to make sure every member of the development team and the domain experts are on the same page.
- Usopp and Design Patterns: the sniper uses his wits and tools to solve complex problems. In DDD, design patterns are the tools developers use to solve complex business-logic problems and keep the code quality high.
- Sanji and Anti-Corruption Layers: the cook protects the integrity of his kitchen just as anti-corruption layers in DDD protect the integrity of the domain model. These layers keep bad practices and external changes from affecting the core of the domain.
Benefits of Domain-Driven Design 🧭
Like the One Piece itself, which brings unimaginable riches, DDD offers multiple benefits for software development:
- A deeper understanding of the domain: DDD lets you understand each domain in depth, which leads to better design and better implementation.
- More effective communication: DDD establishes a common language among team members, making collaboration and problem-solving easier.
- A more flexible and scalable architecture: DDD lets you build a modular, decoupled architecture, which makes it easier to evolve the system as needs grow.
- More maintainable code: DDD promotes clean, well-documented code, which makes it easier to understand and maintain in the long run.
So there you have it: an adventure across the sea of software development with DDD and One Piece. I hope this post entertained you and gave you a clear view of how Domain-Driven Design can be as exciting and essential as an adventure in the world of One Piece. May your software development be as grand as the hunt for the world's greatest treasure! 🏴☠️⚔️💻
Conclusion ⛵
Like the adventure of One Piece, software development with Domain-Driven Design is an exciting journey full of challenges and rewards. With the right compass and a talented crew, you can sail the seas of software development and build solid, scalable, maintainable applications. Overall, the goal of DDD is to find a common language between business experts and software developers in order to build a solution that fits the needs of the business.
Remember: DDD is not a magic recipe but a philosophy that guides you toward better software development. Adapt its principles to your specific project and enjoy the journey. Until next time, nakama! 🏴☠️
🚀 Did you like it? Share your thoughts.
Full article: https://lnkd.in/ewtCN2Mn
https://lnkd.in/eAjM_Smy 👩💻 https://lnkd.in/eKvu-BHe
https://dev.to/orlidev Don't miss it!
References:
Images created with: Copilot (microsoft.com)
#PorUnMillonDeAmigos #LinkedIn #Hiring #SoftwareDevelopment #Programming #Networking #Technology #Jobs #DDD

 | orlidev |
1,879,458 | Why is JavaScript not Interpreted? | Hey there! Imagine you're building a LEGO set. You have the instructions and all the pieces, but you... | 27,558 | 2024-06-06T16:47:19 | https://dev.to/imabhinavdev/why-is-javascript-not-interpreted-4okm | webdev, javascript, beginners, programming | Hey there! Imagine you're building a LEGO set. You have the instructions and all the pieces, but you need to follow the steps one by one to create your awesome LEGO castle. JavaScript, a programming language used to make websites interactive and fun, works a bit like that too. But instead of you putting the LEGO pieces together, your computer does the job of building the castle by reading instructions called code.
In this blog, we'll explore how JavaScript works, focusing on two key concepts: being interpreted and Just-in-Time (JIT) compilation. We'll break it down into simple steps, just like how you follow LEGO instructions, so even a 10-year-old can understand it!
## What is JavaScript?
First things first, let's understand what JavaScript is. JavaScript is a language that allows developers to create interactive effects inside web browsers. Whenever you click a button, see an animation, or fill out a form on a website, JavaScript is likely behind it making things happen.
### JavaScript is Everywhere
Here are a few examples of where you might encounter JavaScript in your daily life:
1. **Online Games**: Many online games are built using JavaScript.
2. **Interactive Websites**: Buttons, forms, slideshows, and more.
3. **Apps**: Some mobile and web apps use JavaScript to function.
## Interpreted Language: What Does It Mean?
JavaScript is often called an "interpreted" language. But what does that mean?
### The Interpreter: Your Computer's Helper
Think of the interpreter like a translator who helps you understand instructions in a language you don't know. When you write JavaScript code, the interpreter translates it into actions that your computer can understand and execute right away.
### Step-by-Step Execution
When you write JavaScript code, it looks something like this:
```javascript
console.log("Hello, World!");
```
Here's what happens:
1. **You Write the Code**: You type out the code in a text editor.
2. **Interpreter Reads the Code**: The interpreter reads your code line by line.
3. **Actions are Performed**: For each line, the interpreter performs the actions specified. In this case, it prints "Hello, World!" on the screen.
### Why Interpreted?
The main advantage of being interpreted is that it allows for quick and easy execution of code. You don't have to wait for the entire program to be compiled before running it. It just runs immediately!
## Just-in-Time (JIT) Compilation: Making JavaScript Faster
While being an interpreted language is great for quick execution, it can sometimes be a bit slow. That's where Just-in-Time (JIT) compilation comes in to save the day!
### What is JIT Compilation?
JIT compilation is like a superhero for JavaScript. It combines the best of both worlds: the speed of compiled languages and the flexibility of interpreted languages.
### How JIT Works
Here's a simple way to understand JIT compilation:
1. **Interpreting Code**: At first, JavaScript runs using the interpreter, reading and executing code line by line.
2. **Finding Hotspots**: The JIT compiler looks for parts of the code that are run frequently, called "hotspots."
3. **Compiling Hotspots**: The JIT compiler translates these hotspots into faster machine code.
4. **Executing Faster Code**: The next time those parts of the code are run, the computer uses the faster machine code, making the program run quicker.
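The hotspot-counting idea can be sketched in a few lines of code. The sketch below uses Python only because it is compact; real JIT compilers inside JavaScript engines emit machine code rather than swapping functions around. The names `HOT_THRESHOLD` and `jit_like` are invented for this illustration:

```python
# Toy illustration of hotspot detection: count how often code runs,
# then switch to a precomputed "fast path" once it is hot.
# Real JITs compile to machine code instead of swapping functions.

HOT_THRESHOLD = 3  # how many runs before we treat the code as "hot"

def jit_like(slow_fn, compile_fast_fn):
    state = {"count": 0, "fast": None}

    def wrapper(x):
        state["count"] += 1
        if state["fast"] is None and state["count"] >= HOT_THRESHOLD:
            state["fast"] = compile_fast_fn()  # "compile" the hot function
        active = state["fast"] if state["fast"] else slow_fn
        return active(x)

    return wrapper

def slow_double(x):
    return x + x  # imagine this being interpreted step by step

double = jit_like(slow_double, lambda: (lambda x: x * 2))
results = [double(n) for n in range(5)]  # later calls use the "compiled" version
print(results)
```

The first couple of calls take the slow path; once the threshold is crossed, every later call goes through the cached fast version.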
### An Example with LEGO
Imagine you're building the same LEGO castle over and over. The first few times, you follow the instructions step by step (interpreted). But then you realize that you keep building the same parts, like the walls and towers, repeatedly. So, you decide to memorize those steps (JIT compilation). Now, when you build the castle again, you can assemble those parts much faster because you know them by heart.
## Why JIT is Important
JIT compilation makes JavaScript much faster and more efficient. This is especially important for complex web applications like games and interactive websites where speed is crucial.
## Putting It All Together: Interpreted + JIT Compilation
JavaScript uses both interpretation and JIT compilation to give you the best of both worlds: quick start-up times and fast execution. This combination is what makes JavaScript so powerful and widely used.
## Real-World Example: An Interactive Web Page
Let's look at a simple example to see how JavaScript works in the real world.
### Example: Interactive Button
We'll create a button on a web page that changes color when clicked. Here's the HTML and JavaScript code:
```html
<!DOCTYPE html>
<html>
<head>
<title>Interactive Button</title>
<style>
#myButton {
padding: 10px 20px;
background-color: blue;
color: white;
border: none;
cursor: pointer;
}
</style>
</head>
<body>
<button id="myButton">Click Me!</button>
<script>
// JavaScript code to change the button color
document.getElementById('myButton').addEventListener('click', function() {
this.style.backgroundColor = 'green';
});
</script>
</body>
</html>
```
### How It Works
1. **HTML**: Defines the structure of the web page with a button.
2. **CSS**: Styles the button to look nice.
3. **JavaScript**: Adds an event listener to the button. When the button is clicked, the background color changes to green.
### Step-by-Step Execution
1. **Loading the Page**: When you open the web page, the browser reads the HTML and CSS first.
2. **Executing JavaScript**: The browser then reads and executes the JavaScript code.
3. **Interpreted Execution**: The JavaScript interpreter runs the code line by line.
4. **JIT Compilation**: If you click the button multiple times, the JIT compiler may optimize the code to change the color faster.
## Conclusion
JavaScript is a powerful and versatile language that powers much of the web. Understanding how it works, including being interpreted and Just-in-Time (JIT) compiled, can help you appreciate the magic behind the scenes. By breaking down these concepts into simple steps, we hope you now have a better grasp of how JavaScript makes your favorite websites interactive and fun.
Remember, just like building a LEGO set, learning JavaScript step by step can be both fun and rewarding. Happy coding! | imabhinavdev |
1,879,456 | Python Bytecode: A Beginner’s Guide | Python bytecode is like a secret language that Python uses behind the scenes. When you write your... | 0 | 2024-06-06T16:47:18 | https://emminex.medium.com/python-bytecode-a-beginners-guide-39ea038d2c7c | beginners, python, bytecode, compilation | [Python](https://www.python.org/) bytecode is like a secret language that Python uses behind the scenes. When you write your Python code, it doesn’t run directly. Instead, Python translates your code into bytecode, a set of instructions that the Python interpreter can understand and execute.
You may be asking why beginners should care about bytecode. Well, understanding bytecode helps you peek under the hood of Python and see how your code works. This knowledge can help you write better, more efficient programs. Even if you don’t see bytecode directly, it’s a crucial part of making Python run smoothly.
In this guide, we’ll unravel the mystery of Python bytecode and show you why it matters.
## What is Python Bytecode?
Python bytecode is like a middleman between your Python code and your computer’s hardware. When you write Python code and run it, the interpreter first translates your code into bytecode.
This bytecode is a lower-level representation of your code, but it’s still not something that your computer’s processor can understand directly.
That’s where the [Python Virtual Machine](https://leanpub.com/insidethepythonvirtualmachine/read) (PVM) comes in. The PVM is like a special engine that’s designed to run bytecode. It reads the bytecode instructions one by one and carries them out, making your Python program come to life.
### Benefits of Bytecode
Bytecode has a couple of benefits to you, the user. Let’s have a look at a couple of them:
- **Portability**: Bytecode isn’t tied to any specific computer architecture, so the same bytecode can run on different types of machines.
- **Efficiency:** Running bytecode skips re-parsing and re-compiling your source code. Python saves the bytecode in `.pyc` files, which are like cached versions of your code. The next time you run the same program, Python can skip the compilation step and load the bytecode directly, making your program start up faster.
Therefore, you can think of bytecode as a bridge between your Python code and the inner workings of your computer. It’s a crucial part of the Python interpreter’s job, helping your code run smoothly and efficiently.
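You can watch this caching happen with the standard-library `py_compile` module, which writes the same `.pyc` file the interpreter would create on first import (the module name `greeting.py` here is just an example):

```python
import importlib.util
import os
import py_compile
import tempfile

# Write a tiny module to disk, then byte-compile it explicitly.
folder = tempfile.mkdtemp()
src = os.path.join(folder, "greeting.py")
with open(src, "w") as f:
    f.write("MESSAGE = 'Hello, World!'\n")

# py_compile writes __pycache__/greeting.cpython-XY.pyc and returns its path.
pyc_path = py_compile.compile(src)

# importlib can tell us where that cached file is expected to live.
expected = importlib.util.cache_from_source(src)

print(pyc_path)
print(os.path.exists(pyc_path))  # the bytecode cache now exists on disk
```

The path returned by `py_compile.compile()` matches `importlib.util.cache_from_source()`, which is exactly where the interpreter looks for cached bytecode on import.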
## The Compilation Process
When you write Python code, it starts as a simple text file with a `.py` extension. But your computer doesn’t exactly understand this text directly. That’s where the compilation process comes in.
Now, let’s explore how compilation works:
1. **Source Code:** You write your Python program in a plain text file, like `my_program.py`.
2. **Compilation:** When you run your program, the Python interpreter gets to work. It reads your source code and translates it into bytecode, a lower-level representation of your code that’s more efficient for the computer to handle. This bytecode gets saved in a separate file with a `.pyc` extension (e.g., `my_program.pyc`).
3. **Execution**: Now that the bytecode is ready, the Python Virtual Machine (PVM) steps in. The PVM is like a special engine that understands bytecode. It reads the bytecode instructions one by one and executes them.
In a nutshell, the compilation process converts your human-readable code into something your computer can understand and execute more efficiently.
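Python exposes step 2 directly: the built-in `compile()` function turns source text into a code object whose `co_code` attribute holds the raw bytecode, and `exec()` hands that object to the PVM (step 3):

```python
# Compile a string of source code into a code object without running it.
source = 'print("Hello, World!")'
code_obj = compile(source, "<example>", "exec")

print(type(code_obj).__name__)  # a 'code' object: the compiled form
print(type(code_obj.co_code))   # bytes: the raw bytecode instructions

exec(code_obj)                  # the PVM executes the bytecode
```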
## Viewing Python Bytecode
Python provides a powerful tool called the [`dis` module](https://docs.python.org/3/library/dis.html) (short for “disassembler”) to unveil the bytecode behind your code. This module lets you disassemble Python functions or even entire scripts, revealing the low-level instructions that the Python interpreter executes.
### Using `dis.dis()`
Let’s start with a simple function:
```python
>>> def greet(name):
...     return f"Hello, {name}!"
...
```
To see the bytecode for this function, we use the `dis.dis()` function:
```python
>>> import dis
>>> dis.dis(greet)
```
Output:
```text
  1           0 RESUME                   0

  2           2 LOAD_CONST               1 ('Hello, ')
              4 LOAD_FAST                0 (name)
              6 FORMAT_VALUE             0
              8 LOAD_CONST               2 ('!')
             10 BUILD_STRING             3
             12 RETURN_VALUE
```
Now, let’s break down what these instructions mean:
- `RESUME 0`: Marks the start of bytecode execution (an internal instruction introduced in Python 3.11).
- `LOAD_CONST 1 ('Hello, ')`: Loads the string `'Hello, '` onto the stack.
- `LOAD_FAST 0 (name)`: Loads the local variable `name` onto the stack.
- `FORMAT_VALUE 0`: Formats the value `name`.
- `LOAD_CONST 2('!')`: Loads the string `'!'` onto the stack.
- `BUILD_STRING 3`: Combines the three top stack values (`’Hello, ‘`, formatted `name`, `'!'`) into one string.
- `RETURN_VALUE`: Returns the combined string from the stack.
This sequence shows how Python builds and returns the final formatted string in the `greet` function.
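The names and constants that these instructions refer to live on the function's code object, which you can inspect directly (the exact contents of `co_consts` vary slightly between Python versions):

```python
def greet(name):
    return f"Hello, {name}!"

code = greet.__code__
print(code.co_varnames)  # local variable names, e.g. ('name',)
print(code.co_consts)    # constants that LOAD_CONST instructions index into

result = greet("World")
print(result)
```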
### Disassembling a Script
You can also disassemble an entire script. Let’s consider a simple example:
```python
# File: example.py

def add(a, b):
    return a + b

def main():
    result = add(3, 4)
    print(f"The result is {result}")

if __name__ == "__main__":
    main()
```
Now, in a separate script, you can disassemble it as follows:
```python
import dis
import example
dis.dis(example.add)
dis.dis(example.main)
```
You’ll get the bytecode for both functions, revealing the underlying instructions for each step.
## Common Bytecode Instructions
Here are some of the most common bytecode instructions you’ll encounter, along with explanations and examples:
- `LOAD_CONST`: loads a constant value (like a number, string, or `None`) onto the top of the stack.
For example, `LOAD_CONST 1 ('Hello, ')` loads the string “Hello, “ onto the stack.
- `LOAD_FAST`: loads the value of a local variable onto the stack.
Example: `LOAD_FAST 0 (x)` loads the value of the local variable `x`.
- `STORE_FAST`: takes the value on the top of the stack and stores it in a local variable.
For example, `STORE_FAST 1 (y)` stores the top stack value into the variable `y`.
- `BINARY_ADD`: takes the top two values from the stack, adds them together, and pushes the result back onto the stack.
For example, In the sequence `LOAD_FAST 0 (x)`, `LOAD_CONST 1 (5)`, `BINARY_ADD`, the values of `x` and 5 are added, and the result is placed on the stack.
- `POP_TOP`: removes the top value from the stack, effectively discarding it.
- `RETURN_VALUE`: returns the topmost stack value, effectively ending the function’s execution.
- `JUMP_IF_FALSE_OR_POP`: if the value at the top of the stack is false, this instruction jumps to a specified instruction. Otherwise, it pops the value from the stack.
- `JUMP_ABSOLUTE`: jumps to a specific instruction, regardless of any condition.
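Besides printing a disassembly, the `dis` module lets you walk instructions programmatically with `dis.get_instructions()`. This is handy for checking which opcodes a function actually uses (exact opcode names vary across Python versions; for example, `BINARY_ADD` became `BINARY_OP` in 3.11):

```python
import dis

def add_five(x):
    return x + 5

# Collect the opcode name of every instruction in the function.
opnames = [instruction.opname for instruction in dis.get_instructions(add_five)]
print(opnames)
```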
### Bytecode Examples for Basic Python Constructs
Let’s see how these instructions are used in basic Python constructs:
**Conditional (If-Else)**
```python
def check_positive(x):
    if x > 0:
        return "Positive"
    else:
        return "Non-positive"
```
Bytecode:
```text
  2           0 LOAD_FAST                0 (x)
              2 LOAD_CONST               1 (0)
              4 COMPARE_OP               4 (>)
              6 POP_JUMP_IF_FALSE       12

  3           8 LOAD_CONST               2 ('Positive')
             10 RETURN_VALUE

  5     >>   12 LOAD_CONST               3 ('Non-positive')
             14 RETURN_VALUE
```
In the bytecode above:
- `LOAD_FAST 0 (x)`: Loads the variable `x` onto the stack.
- `LOAD_CONST 1 (0)`: Loads the constant `0` onto the stack.
- `COMPARE_OP 4 (>)`: Compares the top two stack values (`x > 0`).
- `POP_JUMP_IF_FALSE 12`: Jumps to offset 12 (marked `>>`) if the comparison is false.
- `LOAD_CONST 2 ('Positive')`: Loads the string `'Positive'` onto the stack if `x > 0`.
- `RETURN_VALUE`: Returns the value on the stack.
- `LOAD_CONST 3 ('Non-positive')`: Loads the string `'Non-positive'` onto the stack if `x <= 0`.
**Loops (For Loop)**
```python
def sum_list(numbers):
    total = 0
    for num in numbers:
        total += num
    return total
```
Bytecode:
```text
  2           0 LOAD_CONST               1 (0)
              2 STORE_FAST               1 (total)

  3           4 LOAD_FAST                0 (numbers)
              6 GET_ITER
        >>    8 FOR_ITER                12 (to 22)
             10 STORE_FAST               2 (num)

  4          12 LOAD_FAST                1 (total)
             14 LOAD_FAST                2 (num)
             16 INPLACE_ADD
             18 STORE_FAST               1 (total)
             20 JUMP_ABSOLUTE            8

  5     >>   22 LOAD_FAST                1 (total)
             24 RETURN_VALUE
```
Now, let’s explore what’s happening in the bytecode:
1. `LOAD_CONST 1 (0)`: Loads the constant `0` onto the stack to initialize `total`.
2. `STORE_FAST 1 (total)`: Stores `0` in the variable `total`.
3. `LOAD_FAST 0 (numbers)`: Loads the variable `numbers` onto the stack.
4. `GET_ITER`: Gets an iterator for `numbers`.
5. `FOR_ITER 12 (to 22)`: Iterates over `numbers`, jumping to instruction 22 when done.
6. `STORE_FAST 2 (num)`: Stores the current item in the variable `num`.
7. `LOAD_FAST 1 (total)`: Loads `total` onto the stack.
8. `LOAD_FAST 2 (num)`: Loads `num` onto the stack.
9. `INPLACE_ADD`: Adds `total` and `num` (in-place).
10. `STORE_FAST 1 (total)`: Stores the result back in `total`.
11. `JUMP_ABSOLUTE 8`: Jumps back to the start of the loop.
12. `LOAD_FAST 1 (total)`: Loads `total` onto the stack.
13. `RETURN_VALUE`: Returns `total`.
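To see how such stack-based instructions actually execute, here is a toy stack machine that mimics the PVM on a hand-written instruction list. This is a simplified model for illustration, not CPython's real evaluation loop:

```python
# A tiny stack machine mirroring LOAD_CONST / BINARY_ADD / RETURN_VALUE.
def run(instructions):
    stack = []
    for opname, arg in instructions:
        if opname == "LOAD_CONST":
            stack.append(arg)  # push a constant onto the stack
        elif opname == "BINARY_ADD":
            right = stack.pop()
            left = stack.pop()
            stack.append(left + right)  # pop two values, push their sum
        elif opname == "RETURN_VALUE":
            return stack.pop()  # pop the final result and return it

# Hand-written "bytecode" equivalent to evaluating 3 + 4.
program = [
    ("LOAD_CONST", 3),
    ("LOAD_CONST", 4),
    ("BINARY_ADD", None),
    ("RETURN_VALUE", None),
]

answer = run(program)
print(answer)  # 7
```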
Understanding these common instructions and how they are used in different Python constructs can significantly enhance your ability to analyze bytecode and gain deeper insights into the inner workings of Python.
## Conclusion
Python bytecode is the hidden language that makes your Python program run. It’s a lower-level representation of your code that the Python interpreter understands and executes. Bytecode is generated from your source code through a compilation process and stored in `.pyc` files for faster execution in future runs.
You can use the `dis` module to view and analyze bytecode, gaining insights into how Python translates your code into instructions.
By understanding common bytecode instructions and their role in basic Python constructs like loops and conditionals, you can optimize your code for better performance.
---
*Thanks for reading! If you found this article helpful (which I bet you did 😉), got a question or spotted an error/typo... do well to leave your feedback in the comment section.*
*And if you’re feeling generous (which I hope you are 🙂) or want to encourage me, you can put a smile on my face by getting me a cup (or thousand cups) of coffee below. :)*
* [***Buy Me a Coffee***](https://www.buymeacoffee.com/emminex23)
*Also, feel free to connect with me via* [***LinkedIn***](https://www.linkedin.com/in/emmanueloyibo2394/)*.* | emminex |
1,879,457 | Steps to Upgrade WordPress PHP Version Quickly | It's imperative for WordPress users to keep their sites updated, so making sure your site is running... | 0 | 2024-06-06T16:46:17 | https://dev.to/ayeshamehta/steps-to-upgrade-wordpress-php-version-quickly-h22 | wordpress, wordpressdevelopmentservices | It's imperative for WordPress users to keep their sites updated, so making sure your site is running on the most recent PHP version is an essential component of this upkeep. PHP upgrades enhance performance, ensure compatibility with the newest functionality and plugins, and improve security.
At first, navigating this process may seem overwhelming. However, be at ease! You can handle the WordPress PHP upgrade in just ten simple steps by following our full instructions. Now let's get going!

## So Why Should You Update PHP?
Keeping your website secure and performing at its best calls for upgrading your PHP version. Here are the primary reasons to consider an update:
**Performance & Speed:** Newer PHP versions are more efficient, which results in faster page load times. PHP processes data to display your website. Your site's speed can be greatly increased with an update!
**Security:** Older versions of PHP that are no longer supported may contain unpatched vulnerabilities. Keeping your PHP version updated enhances the security of your website, because attackers often take advantage of out-of-date PHP versions.
**New Features:** New type declarations, improved functionality, and other enhancements have been brought about by major PHP updates. You can benefit from these developments if you stay up to date.
**WordPress Plugin Development:** If you employ specialised [WordPress development services](https://www.saffiretech.com/wordpress-development-services) to build a customised plugin, developers make sure it is compatible and works at maximum effectiveness with the latest release of PHP.
## How to Upgrade WordPress PHP Version
The previous stages covered a variety of ways to find out what version of PHP you are running. Let's now examine the process of updating your PHP version. What actions are you required to take?
Make a complete backup of your website, including any files, databases, and customised configurations, before proceeding. By taking this precaution, you may be sure that, in the event that the upgrade goes wrong, you can restore your website. As an alternative, to protect your live site, set up a staging area and test the WordPress PHP update on a site clone.
**Verify Compatibility:** Make sure the PHP version you're targeting is compatible with your custom code and plugins. Some older plugins may not operate properly with more recent versions of PHP. To prevent potential problems, update or replace any incompatible components.
## Conclusion
In conclusion, keeping your website safe and operating at peak efficiency requires updating your PHP version. You may fortify your defences and safeguard sensitive data by updating your site from older PHP versions, which can expose it to online dangers.
Internal enhancements brought about by new PHP versions lead to quicker loading speeds and more efficient use of resources. By doing this, compatibility problems are avoided and your website is guaranteed to function properly.
| ayeshamehta |
1,879,455 | Great Style | Great Style Barbershop NYC is the place to go for anyone on the Upper East Side looking for a... | 0 | 2024-06-06T16:42:40 | https://dev.to/ogremagi24/great-style-547f | Great Style Barbershop NYC is the place to go for anyone on the Upper East Side looking for a top-notch haircut. The barbers here are true professionals, always taking their time to ensure you leave looking sharp. The shop itself is modern, clean, and has a really comfortable vibe. If you’re in need of a reliable [barbershop upper east side](https://greatstylebarbershopnyc.com/), this is definitely where you want to be.
| ogremagi24 | |
1,879,375 | Adding stream_async() to Phoenix LiveView | In this article, you will learn: how to implement in LiveView asynchronous assign for streams how... | 0 | 2024-06-06T16:42:15 | https://dev.to/utopos/adding-streamasync-to-phoenix-liveview-2kii | elixir, phoenix, liveview | In this article, you will learn:
- how to implement in LiveView asynchronous assign for `streams`
- how to use the `async_result` function component (UI) for `streams` in Heex templates
- how to use meta programming to auto generate boilerplate for asynchronous stream handling - `stream_async()` macro
- simply add `stream_async` via [Hex package](https://hex.pm/packages/live_stream_async)
## New asynchronous operations in LiveView
A new release of LiveView library — v 0.20.0 — introduced built-in functions for [asynchronous work](https://hexdocs.pm/phoenix_live_view/0.20.14/Phoenix.LiveView.html#module-async-operations). It's a perfect solution to deliver a snappy user experience by delegating some time-consuming tasks (ex. fetching from external services) to background jobs without blocking the UI or event handlers.
- `assign_sync/3` - a straight forward way to load the results asynchronously from these background tasks into socket assigns.
- `start_async/4` - a more granular control over async task result handling.
- `<.async_result ...>` - component to handle the asynchronous operation state on the UI side in HEEx templates (for success, loading, and errors).
## Missing companion: `stream_async()`
Sometimes you may need to work asynchronously with [`streams`](https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#stream/4).
Streaming results allows working with large collections without keeping them in server memory. Asynchronous loading for streams, however, is not available out of the box yet.
Below, I will demonstrate how to manually implement an asynchronous `stream` assign on the LiveView socket and then wrap the whole boilerplate into a reusable `stream_async()` macro.
## Asynchronous streaming in LiveView
In the following example, LiveView loads a list of hotels in a specified location. To load the data, we will use the `Hotels.fetch!(location)` function. Here's the code:
```elixir
@hotels :hotels
def mount(%{"location" => location}, _, socket) do
{:ok,
socket
|> assign(@hotels, AsyncResult.loading())
|> start_async(@hotels, fn -> Hotels.fetch!(location) end)
}
end
def handle_async(@hotels, {:ok, hotels}, socket) do
{:noreply,
socket
|> assign(@hotels, AsyncResult.ok(@hotels))
|> stream(@hotels, hotels, reset: true)
}
end
def handle_async(@hotels, {:exit, reason}, socket) do
{:noreply,
update(@hotels, fn async_result -> AsyncResult.failed(async_result, {:exit, reason}) end)
}
end
```
The core of the solution in the presented code revolves around using a pair of assigns:
- `socket.assigns.hotels`: instance of `%AsyncResult{ }` struct handling the async result.
* (line: `assign(@hotels, AsyncResult.loading())`)
- `socket.assigns.streams.hotels` - `stream` for target large collection
* (line: `stream(@hotels, hotels, reset: true)`).
I use two different maps in the socket to store the necessary data (`assigns` and the nested map `assigns.streams`). Note that I use the same key in both: `:hotels` (the reason will become clear in a moment). Using a combination of a direct assign and a stream assign solves the following challenge: how to store the async loading state for a stream that is not yet populated.
According to the documentation, a stream assign must contain only collections and nothing else. We cannot temporarily store any other type of data (here: the loading state), as doing so produces errors. Thus, we need an additional assign to indicate whether the stream content is ready to render, or whether there was an error and the stream was never assigned.
It might look tempting to drop the `%AsyncResult{}` assign and simply stream an empty collection while we fetch "the real data" asynchronously. The role of the empty collection would be to signal the "loading state" for the duration of the async operation. **Personally, I don't find this approach right**. An empty stream may indicate to the UI and the user that there is no data matching their request (in the example: there are no hotels in the indicated location). Furthermore, we would lose the ability to differentiate between the "loading" and "failed" states of the fetch, and with it the opportunity to provide a meaningful error message to the user.
### Anatomy of the solution
Let's dig deeper into the proposed code:
- `start_async/4` function - [asynchronous task](https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#start_async/4) wrapper. Used to get result that will be later streamed.
- `handle_async` - 2x callbacks on the LiveView process to deal with `start_async/4` task results:
- `{:ok, results}` - success; collection to stream available in `results` variable,
- `{:exit, reeason}`- failure.
- `Phoenix.LiveView.AsyncResult` - LiveView [struct](https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.AsyncResult.html) to track state of an async assign.
- Set the state and assign results on the struct via functions: ok(), loading() and failed()
- Read the state by accessing three boolean state fields : `:ok?` (success), `:loading` and `:failed`.
- Read the results of the async operation be accessing `:result` field.
### Accessing async stream in Heex
When the async collection loading is successful, our socket structure will look like this:
```elixir
%{
  Stream: #Phoenix.LiveView.Socket<
    id: "phx-F9YIUoGp9R8towwB",
    [...]
    assigns: %{
      hotels: %Phoenix.LiveView.AsyncResult{
        ok?: true,
        loading: nil,
        failed: nil,
        result: :hotels
      },
      streams: %{
        hotels: %Phoenix.LiveView.LiveStream{
          name: :hotels,
          [...]
```
Pay attention that we use the same atom (`:hotels`) as:
- key to access `socket.assigns` referring to the async state data structure (`%AsyncResult{}`),
- result value of the `AsyncResult.result` field,
- key to access `socket.assigns.streams` where the stream for the large collection eventually ends up.
Although it may look confusing at first glance, you will soon see that this helps us produce highly reusable code in LiveView's `render()` to access the stream:
```heex
def render(assigns) do
  ~H"""
  <.async_result :let={stream_key} assign={@hotels}>
    <:loading>Loading hotels...</:loading>
    <:failed :let={_failure}>There was an error loading the hotels. Please try again later.</:failed>
    <ul id="hotels_stream" phx-update="stream">
      <li :for={{id, hotel} <- @streams[stream_key]} id={id}>
        <%= hotel.name %>
      </li>
    </ul>
  </.async_result>
  """
end
```
- LiveView built-in `<.async_result ...>` component is designed to work with the `%AsyncResult{}` structs.
- `%AsyncStruct{}` is passed via "`assign={}`" attribute of the component.
- `<.async_result ...>` component @inner_block receives `stream_key` which is used to fetch the correct stream from the `socket.assigns.streams` (accessed by `@streams` in the code). The `stream_key` becomes available in the @inner_block via `:let={}` attribute.
## Use `stream_async()` function
As we observed, working with streams asynchronously by hand adds repetitive boilerplate to our LiveView. For every collection that we would like to stream, we need to explicitly add `handle_async()` callbacks.
The solution proposed in this article can be easily wrapped in a macro that will auto-generate all the necessary handling callbacks behind the scenes.
The `stream_async/4` macro can be used as follows:
```elixir
use LiveStreamAsync

def mount(%{"location" => location}, _, socket) do
  {:ok,
   socket
   |> stream_async(:hotels, fn -> Hotels.fetch!(location) end)}
end
```
The `stream_async/4` macro supports an `opts` keyword list. The options are passed through to the `start_async/4` and `stream/4` functions. To learn more about these options, check the official LiveView documentation (in the reference section at the end of the article).
We render the results in the HEEx template the same way as before (repeated here):
```heex
def render(assigns) do
  ~H"""
  <.async_result :let={stream_key} assign={@hotels}>
    <:loading>Loading hotels...</:loading>
    <:failed :let={_failure}>There was an error loading the hotels. Please try again later.</:failed>
    <ul id="hotels_stream" phx-update="stream">
      <li :for={{id, hotel} <- @streams[stream_key]} id={id}>
        <%= hotel.name %>
      </li>
    </ul>
  </.async_result>
  """
end
```
### Macro code
Just add the following macro to your project:
```elixir
defmodule LiveStreamAsync do
  alias Phoenix.LiveView.AsyncResult

  defmacro __using__(_opts) do
    quote do
      import unquote(__MODULE__)
      @before_compile unquote(__MODULE__)
      Module.register_attribute(__MODULE__, :async_streams, accumulate: true)
    end
  end

  defmacro __before_compile__(_env) do
    streams = Module.get_attribute(__CALLER__.module, :async_streams)

    for {stream_id, opts} <- streams do
      quote bind_quoted: [stream: stream_id, opts: opts] do
        def handle_async(stream, {:ok, results}, socket) do
          socket =
            socket
            |> assign(stream, AsyncResult.ok(stream))
            |> stream(stream, results, unquote(opts))

          {:noreply, socket}
        end

        def handle_async(stream, {:exit, reason}, socket) do
          {:noreply,
           update(socket, stream, fn async_result ->
             AsyncResult.failed(async_result, {:exit, reason})
           end)}
        end
      end
    end
  end

  defmacro stream_async(socket, key, func, opts \\ []) do
    Module.put_attribute(__CALLER__.module, :async_streams, {key, opts})

    quote bind_quoted: [socket: socket, key: key, func: func, opts: opts] do
      socket
      |> assign(key, AsyncResult.loading())
      |> start_async(key, func, opts)
    end
  end
end
```
## Hex package `live_stream_async`
For ease, I packaged this macro on [hex.pm](https://hex.pm/packages/live_stream_async), so you can easily add as your project dependency in the `mix.exs`:
```elixir
defp deps do
[
{:live_stream_async, "~> 0.1.0", runtime: false}
  ]
end
```
Use it the same way as described in the "Use `stream_async()` function" section of this article.
Enjoy!
## Summary
In this article, we learned:
- new functions in LiveView v 0.20.0 to work with asynchronous tasks
- fetching big collection for streaming require using low level `start_async/4` function combined with `handle_async()` callbacks
- combining `%AsyncResult{}` struct with async streaming allows to control the state of loading big collections in the UI
- import `stream_async()` functionality to your project with [hex.pm](https://hex.pm/packages/live_stream_async).
## References
- [Phoenix LiveView Async Operations](https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#module-async-operations)
- [Phoenix.LiveView.AsyncResult](https://hexdocs.pm/phoenix_live_view/Phoenix.Component.html#async_result/1)
- [Phoenix LiveView.start_async/4](https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#start_async/4)
- [Phoenix.LiveView.stream/4](https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#stream/4) | utopos |
1,879,454 | Chic Templates for Fashion Presentations | In the world of fashion, presentation is everything. Whether you’re showcasing a new collection,... | 0 | 2024-06-06T16:38:04 | https://dev.to/fashionpresentation/chic-templates-for-fashion-presentations-3dme |

In the world of fashion, presentation is everything. Whether you’re showcasing a new collection, pitching a design concept, or creating a mood board, the way you present your ideas can make or break their impact. This is where fashion presentation templates come into play. These templates provide a framework for presenting your ideas in a visually appealing and coherent manner, but their effectiveness hinges on the understandability of the content they contain.
The Significance of Clarity in [Fashion Presentation Templates](https://simplified.com/ai-presentation-maker/fashion)
Fashion presentation templates serve as a canvas for expressing creativity and vision. However, if the content within these templates is not easily understandable, the purpose and impact of the presentation can be lost. Readers who can't understand your content well won't keep reading for long, and if the articles in your blog are boring or confusing, they won't attract people to your website or encourage potential customers to try your business.
To ensure that your fashion presentation templates effectively convey your message, it's crucial to make the content easily comprehensible. This involves using clear, direct sentences that are easy to follow and understand; aim for sentences of 15 words or fewer. Also keep an eye on word length: short words average only 4.5 characters, well under the suggested 5.0 limit. Don't use a longer word when a shorter word will do, and you don't get extra points for using pulchritudinous instead of pretty.
Enhancing Understandability in Fashion Presentations
To optimize the understandability of fashion presentation templates, it's essential to consider various factors. Transition words play a crucial role in connecting ideas and ensuring a coherent flow of information. These transitions help readers follow the flow of ideas and understand the relationships between different elements within the presentation.
Moreover, the use of visual aids, such as statistical tables, images, and short videos, can significantly enhance the understandability of fashion presentations. These elements break your page into chunks and make it easier to read. The best thing about a website is that it offers an easy way to direct readers to more information: supplementary material helps readers find out more about what they are already reading. So, supplement your page with hyperlinks and other external sources. This will not only help your website rank but will also keep the reader engaged.
In conclusion, the understandability of content within fashion presentation templates is paramount for effectively conveying ideas and concepts within the fashion industry. By prioritizing clarity, coherence, and the strategic use of visual and textual elements, fashion presentations can truly shine and make a lasting impact. | fashionpresentation | |
1,879,453 | Using Azure Bicep to deploy MS Graph resources | In infrastructure as Code, we focus on deploying and configuring Azure resources. Sometimes you must... | 0 | 2024-06-06T16:36:47 | https://dev.to/omiossec/using-azure-bicep-to-deploy-ms-graph-resources-gh | azure, iac, bicep, microsoftgraph | In infrastructure as Code, we focus on deploying and configuring Azure resources. Sometimes you must define Entra ID (AzureAD) objects such as group, application, and Service principal. Until now the only way was to have two systems, one in IaC for Azure Resource and one with script (PowerShell or Azure CLI) to manage Entra ID objects.
Bicep comes with a solution to deploy MS Graph resources and Azure resources using the same code; Bicep templates support for Microsoft Graph.
Be careful, Microsoft Graph Bicep is currently in preview.
First, we need to define a scenario, you need to Create a Service Principal, deploy a VM, and give the reader role to the service principal.
You need to configure Bicep to use Microsoft Graph. In a new folder, create a main.bicep file and a bicepconfig.json file. In the latter, type:
```json
{
"experimentalFeaturesEnabled": {
"extensibility": true
}
}
```
This will allow the Bicep engine to use this preview feature.
In the main.bicep file, the first thing you need to do is write `provider microsoftGraph`. It instructs Bicep that the Microsoft Graph types should be included. If you forget to do that, you will not be able to deploy any MS Graph resource, and if you use Visual Studio Code, every MS Graph type will be shown as unknown.
Here is the Bicep code to deploy the service principal and a VM, and to assign the SP the Reader role on the VM.
```
provider microsoftGraph
var entraIDRole = 'f2ef992c-3afb-46b9-b7cf-a126ee74c451'
resource resourceApp 'Microsoft.Graph/applications@v1.0' = {
uniqueName: 'ExampleResourceApp'
displayName: 'Example Resource Application'
appRoles: [
{
id: entraIDRole
allowedMemberTypes: [ 'User', 'Application' ]
description: 'Read access to resource app data'
displayName: 'ResourceAppData.Read.All'
value: 'ResourceAppData.Read.All'
isEnabled: true
}
]
}
resource resourceSp 'Microsoft.Graph/servicePrincipals@v1.0' = {
appId: resourceApp.appId
}
param adminUsername string = 'demoadmin'
@allowed([
'sshPublicKey'
'password'
])
param authenticationType string = 'password'
param location string = resourceGroup().location
@secure()
param adminPasswordOrKey string
var vmName = 'demoLinuxVM'
var ubuntuOSVersion = 'Ubuntu-2204'
var vmSize = 'Standard_D2s_v3'
var virtualNetworkName = 'vNet'
var subnetName = 'Subnet'
var securityType = 'TrustedLaunch'
var imageReference = {
'Ubuntu-2204': {
publisher: 'Canonical'
offer: '0001-com-ubuntu-server-jammy'
sku: '22_04-lts-gen2'
version: 'latest'
}
}
var publicIPAddressName = '${vmName}PublicIP'
var networkInterfaceName = '${vmName}NetInt'
var osDiskType = 'Standard_LRS'
var subnetAddressPrefix = '10.1.0.0/24'
var addressPrefix = '10.1.0.0/16'
var linuxConfiguration = {
disablePasswordAuthentication: true
ssh: {
publicKeys: [
{
path: '/home/${adminUsername}/.ssh/authorized_keys'
keyData: adminPasswordOrKey
}
]
}
}
var securityProfileJson = {
uefiSettings: {
secureBootEnabled: true
vTpmEnabled: true
}
securityType: securityType
}
var extensionName = 'GuestAttestation'
var extensionPublisher = 'Microsoft.Azure.Security.LinuxAttestation'
var extensionVersion = '1.0'
var maaTenantName = 'GuestAttestation'
var maaEndpoint = substring('emptystring', 0, 0)
resource networkInterface 'Microsoft.Network/networkInterfaces@2023-09-01' = {
name: networkInterfaceName
location: location
properties: {
ipConfigurations: [
{
name: 'ipconfig1'
properties: {
subnet: {
id: virtualNetwork.properties.subnets[0].id
}
privateIPAllocationMethod: 'Dynamic'
publicIPAddress: {
id: publicIPAddress.id
}
}
}
]
}
}
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2023-09-01' = {
name: virtualNetworkName
location: location
properties: {
addressSpace: {
addressPrefixes: [
addressPrefix
]
}
subnets: [
{
name: subnetName
properties: {
addressPrefix: subnetAddressPrefix
privateEndpointNetworkPolicies: 'Enabled'
privateLinkServiceNetworkPolicies: 'Enabled'
}
}
]
}
}
resource publicIPAddress 'Microsoft.Network/publicIPAddresses@2023-09-01' = {
name: publicIPAddressName
location: location
sku: {
name: 'Basic'
}
properties: {
publicIPAllocationMethod: 'Dynamic'
publicIPAddressVersion: 'IPv4'
idleTimeoutInMinutes: 4
}
}
resource vm 'Microsoft.Compute/virtualMachines@2023-09-01' = {
name: vmName
location: location
properties: {
hardwareProfile: {
vmSize: vmSize
}
storageProfile: {
osDisk: {
createOption: 'FromImage'
managedDisk: {
storageAccountType: osDiskType
}
}
imageReference: imageReference['Ubuntu-2204']
}
networkProfile: {
networkInterfaces: [
{
id: networkInterface.id
}
]
}
osProfile: {
computerName: vmName
adminUsername: adminUsername
adminPassword: adminPasswordOrKey
linuxConfiguration: ((authenticationType == 'password') ? null : linuxConfiguration)
}
securityProfile: (securityType == 'TrustedLaunch') ? securityProfileJson : null
}
}
resource vmExtension 'Microsoft.Compute/virtualMachines/extensions@2023-09-01' = if (securityType == 'TrustedLaunch' && securityProfileJson.uefiSettings.secureBootEnabled && securityProfileJson.uefiSettings.vTpmEnabled) {
parent: vm
name: extensionName
location: location
properties: {
publisher: extensionPublisher
type: extensionName
typeHandlerVersion: extensionVersion
autoUpgradeMinorVersion: true
enableAutomaticUpgrade: true
settings: {
AttestationConfig: {
MaaSettings: {
maaEndpoint: maaEndpoint
maaTenantName: maaTenantName
}
}
}
}
}
resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
name: guid('roleAssignment')
scope: vm
properties: {
principalId: resourceSp.id
roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions/', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')
}
}
```
The code creates an enterprise application and its SP, deploys the VM with virtual NIC and virtual network resources, and then assigns the Reader role to the service principal, scoped to the VM.
To deploy the SP and the VM, use the New-AzResourceGroupDeployment cmdlet like any other resource group deployment:
```powershell
New-AzResourceGroupDeployment -ResourceGroupName bicep-azgraph -TemplateFile ./Bicep/main.bicep
```
PowerShell will ask for a password and deploy the SP and the VM.
Bicep support for Microsoft Graph can be used to deploy:
- Applications and Service Principals
- Federated Identity
- Entra ID Groups
- OAuth Permission grant
- App role assignments (but this feature does not work with personal accounts)
The support of Microsoft Graph is in preview and not suitable for production. You can check the project's GitHub page [here](https://github.com/microsoftgraph/msgraph-bicep-types).
You can find the code presented here on [GitHub](https://github.com/omiossec/bicep-MsGraph-demo)
| omiossec |
1,879,415 | SwiftUI and Navigation | I’m currently developing a multiplatform app and am having issues with both the tab control and... | 0 | 2024-06-06T16:36:39 | https://dev.to/simplykyra/swiftui-and-navigation-1p2i | discuss, swiftui, swift, ios | I’m currently developing a multiplatform app and am having issues with both the tab control and sidebar on the iPhone and iPad devices. Ideally I want the user to be able to switch between the two if they prefer one over the other... and the switching currently works great but I've noticed minor issues with both formats.
Specifically, my app can go up to four levels, but in this sample I have three. The left sidebar or tabs has four choices, the middle has its own choices, and the right has a text box showing what was clicked on. With the sidebar option I notice the right screen doesn't change when you click on the sidebar (though the center does). Also, the sidebar clicks print out text (with the onChange modifier) but don't register in the view.

I was going to lock it down to just tab control for now so the app is easily usable, but noticed that if you click to the third level and then switch tabs, the back button on the phone changes (from saying Back) to reflect the newly selected tab, while the content shows the old view.

Using tab control on the iPad, the middle sidebar (now on the left side) updates when the tab is changed, but again, like on the phone, the right pane shows the old content.

Just curious if anyone knows of a good multiplatform SwiftUI tutorial that would help me get this working. In my actual app the right view sometimes has two levels with a back button.
Either way hope your day is going good! | simplykyra |
1,879,452 | Core Architectural Components of Azure | This blog will examine the core architectural components of Azure, listed below. AZURE... | 0 | 2024-06-06T16:35:59 | https://dev.to/gbemisola/core-architectural-component-of-azure-40b0 | This blog will examine the core architectural components of Azure, listed below:
1. Azure Regions
   - Paired Regions
2. Azure Availability Zones
3. Resource Groups in Azure
4. Azure Resource Manager
**AZURE REGIONS**
An Azure region is a set of data centers deployed within a latency-defined perimeter and connected via a dedicated regional low-latency network. There are currently 42 regions available around the world, with another 12 additional Azure regions planned for the future.
PAIRED REGIONS
Microsoft operates Azure data centers all over the world in many different locations (otherwise referred to as geographies). A geography in Azure refers to an area where at least one Azure region resides. An Azure region refers to an area within a geography that contains one or more data centers.
AZURE AVAILABILITY ZONES
An availability zone is an Azure offering used to protect applications and data from data center failures. Each availability zone is a unique physical location within an Azure region, and each zone is supported by one or more data centers equipped with their own independent power, cooling, and networking infrastructure.
It is worthy of note that:
1. Each availability zone within an Azure region is comprised of a fault domain and an update domain.
2. When three or more virtual machines are deployed to different zones in an Azure region, those virtual machines are effectively distributed across three fault domains.
3. The virtual machines are also distributed across update domains, which ensures that virtual machines in different zones are not updated at the same time.
Resiliency is achieved through the existence of at least three separate availability zones in each enabled region.
Because availability zones are physically separate within each region, applications and data are inherently protected from data center failures. With zone-redundant services replicating apps and data across availability zones, there is no single point of failure to deal with.
Azure offers a 99.99% uptime SLA for virtual machines deployed across availability zones.
RESOURCE GROUPS IN AZURE
1. They are logical containers.
2. They hold related Azure resources that are part of a larger solution.
3. They also host the resources that need to be managed as part of a group.
4. The administrator has to decide, based on their needs, how to allocate resources to resource groups in Azure.
Since the resources within a resource group usually share a similar lifecycle, it is important to determine the lifecycle of the resources you intend to place in a single resource group.
AZURE RESOURCE MANAGER
It is a tool that lets you work with the underlying resources that can be part of a solution. With Resource Manager you can deploy, update, and delete the resources that form a solution in a single coordinated operation.
Resource Manager also allows tools to streamline deployment.
Resource Manager provides a consistent management layer for Azure resources.
It does more than the scope of this blog can cover.
Thank you
| gbemisola | |
1,879,338 | The Best Baby Swings on IdealBebe.ro | Finding the perfect swing for your baby can be a challenge, especially when you want to... | 0 | 2024-06-06T14:54:20 | https://dev.to/idealbebe/cele-mai-bune-leagane-pentru-bebelusi-de-pe-idealbebero-4247 | Finding the perfect swing for your baby can be a challenge, especially when you want to ensure comfort, safety, and fun for your little one. At IdealBebe.ro, we have carefully selected the best baby swings, offering you quality solutions that meet all of these requirements. In this article, we will explore the benefits of [baby swings](https://idealbebe.ro/), which features to look for, and how to choose the perfect swing for your child.
1. Benefits of Baby Swings
[Baby swings](https://idealbebe.ro/) are essential for your little one's development and comfort. They offer a range of benefits, from calming the baby to stimulating cognitive and motor development.
Main benefits:
Calming the baby: The gentle, rhythmic motion of the swing helps calm and relax the baby, mimicking the sensation of being rocked in the mother's arms.
Sensory stimulation: [Swings](https://idealbebe.ro/) are often equipped with toys and sounds that stimulate the baby's senses and develop cognitive abilities.
Restful sleep: Many babies fall asleep more easily in a swing, thanks to the soothing motion that helps them relax.
2. Features to Look for in a Baby Swing
When choosing a swing for your baby, it is important to consider a few essential features to make sure you are making the best choice. At IdealBebe.ro, we offer a varied range of swings that meet all of these criteria.
Essential features:
Safety: Make sure the swing is stable and has a secure harness system. Five-point safety belts are ideal for preventing accidental falls.
Comfort: Choose a swing with soft, comfortable materials that will not irritate the baby's delicate skin.
Adjustability: Swings with multiple swinging speeds and adjustable positions offer more comfort and versatility.
Functionality: Opt for swings with extra features, such as calming melodies, vibrations, or detachable toys, to keep the baby occupied and happy.
3. Types of Baby Swings Available at IdealBebe.ro
At IdealBebe.ro, we have a diverse selection of [baby swings](https://idealbebe.ro/), each with its own advantages and features. Here are some of the most popular types of swings you can find on our site:
Electric swings:
Automatic motion: These swings offer automatic motion, with adjustable speeds and timer functions to help you set the duration and intensity of the swinging.
Sounds and melodies: Many electric swings are equipped with calming sounds and melodies that help relax the baby.
Portable swings:
Easy to transport: These swings are ideal for parents who travel frequently or who want to be able to move the swing from one room to another.
Compact design: Portable swings are lightweight and foldable, saving space and being easy to store.
Traditional swings:
Classic design: These swings offer a simple and effective design, perfect for parents who prefer traditional solutions.
Manual motion: Allows parents to control the swing's motion to adapt to the baby's specific needs.
4. How to Choose the Perfect Swing for Your Baby
Choosing the perfect swing depends on your needs and preferences, as well as your baby's personality and age. At IdealBebe.ro, we are here to help you make the best choice.
Steps to choosing the ideal swing:
Identify your needs: Think about what exactly you need from a swing: portability, extra features, a specific design, etc.
Read reviews: Check other parents' reviews and recommendations to learn about their experiences with different swing models.
Test the product: If you have the opportunity, try the swing before purchasing it to make sure it meets your expectations and is comfortable for the baby.
Why Choose IdealBebe.ro?
At IdealBebe.ro, we are dedicated to offering the best products for children, carefully selected to meet parents' needs and ensure babies' comfort and safety. Here are a few reasons why you should choose baby swings from us:
Guaranteed quality: All our products are of the highest quality and comply with international safety standards.
Variety: We offer a wide selection of swings for all preferences and needs.
Competitive prices: You benefit from special offers and attractive discounts, without compromising on quality.
Expert advice: Our team of experts is always available to help you make the best choice for your baby.
Discover the best baby swings on IdealBebe.ro now and give your little one the comfort and safety they deserve. IdealBebe.ro is your trusted partner on this important journey!
| idealbebe | |
1,879,451 | Best way to create a Markdown Resume | The best place to maintain Resume is Github. The best way to edit Resume is Markdown with... | 0 | 2024-06-06T16:35:54 | https://dev.to/garymeng/best-way-to-create-a-markdown-resume-1c37 | resume, markdown, github | The best place to maintain Resume is Github.
The best way to edit Resume is Markdown with CSS.
[Resumis](https://resumis.com/) is a website for you to manage your Resumes.
- Create a raw **Resume**
- Create a Resume from a free template
- Edit a Resume
- Save a Resume to your **own Github** repository
- Delete a Resume
- Customize your Resume theme with CSS
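As a sketch of what a Markdown resume managed this way might look like (the name, sections, and details below are illustrative, not a Resumis template):

```markdown
# Jane Doe
**Software Engineer** · jane.doe@example.com · github.com/janedoe

## Experience
- **Acme Corp**, Backend Developer (2021-2024)
  - Built and maintained REST APIs in Go

## Skills
Go, Python, SQL, Docker
```

Because it is plain text, a resume like this diffs cleanly in a Git repository, and a CSS theme can restyle it without touching the content.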
| garymeng |
1,879,450 | I kill you | give my 1000 euros | 0 | 2024-06-06T16:28:59 | https://dev.to/muneeb_ijaz_298e2f279e27a/i-kill-you-1pdk | give my 1000 euros | muneeb_ijaz_298e2f279e27a | |
1,879,374 | Core Azure Architectural Components | Table of contents 1. Introduction 2. Azure Regions 3. Azure Availability... | 0 | 2024-06-06T16:24:48 | https://dev.to/emeka_moses_c752f2bdde061/core-azure-architectural-componets-5875 | azure, architectural, componets, cloudcomputing | ## Table of contents
1. Introduction
2. Azure Regions
3. Azure Availability Zones
4. Resource Groups
5. Azure Resource Manager (ARM)
6. Conclusion
## Introduction
In this module, we'll be introduced to the core architectural components of Azure. We'll be looking at Azure regions, availability zones, resource groups, and Azure Resource Manager (ARM).
| emeka_moses_c752f2bdde061 |
1,879,448 | How a beginner should start coding 2024 | How a Beginner should start coding 2024 Technology is changing the way we live. At the... | 0 | 2024-06-06T16:24:05 | https://www.gigo.dev/articles/how-a-beginner-should-start-coding-2024?viewport=desktop | coding, programming, career, learning | # How a Beginner should start coding 2024

Technology is changing the way we live. At the time of publishing, over 92% of all jobs require digital skills. This digital transformation has led to a correspondingly increased demand for software developers.
In this article, we’ll discuss how you can achieve higher paying jobs and better benefits by learning to code from scratch.
## What is programming?
Programming is the process of writing instructions for computers or machines. We call these instructions code. Code enables computers to perform specific tasks, solve problems, or create desired outputs.
You may have heard terms like Python and JavaScript before. These are different programming languages. Like human languages, such as English and Portuguese, every programming language has unique rules and guidelines that it follows.
These rules, known as syntax, are used to write the code, allowing programmers to communicate with the computer effectively across a broad spectrum of applications.
## Why Learn Programming?
Software developers are in demand and highly compensated. Computer programming is a hard skill; once you've obtained it, you'll have access to a wide range of good-paying roles offered by employers who need somebody with your knowledge base.
Beyond professional opportunity, knowing how to program is essentially a super power. Think of it as having the ability to bring your ideas to life. The only thing limiting your creations are your imagination and willingness to commit time.
The greatest benefit, though perhaps more subtle, is a new critical thinking blueprint. When you learn to program, you learn to design solutions to your specific problems, crafting genuinely creative mental frameworks for more impactful, efficient problem solving.
## Steps to landing your first programming job in 2024
1. Pick a Project
2. Build Programming Fundamentals
3. Learn Programming Basics
4. Work on Projects
5. Apply for a Job
### 1. Pick a Project
The best way to learn to code is to learn by doing. This comes down to identifying something that you want to build and working towards it on a daily basis, incrementally stepping to the final version gaining proficiency at each step along the way.
Pick something you care about that will keep you motivated. This is a far superior method to just stepping through the language-specific "books", also known as language documentation. Instead of reciting an encyclopedia of syntax definitions, you understand concepts in the context in which they are applicable to accomplishing a task.
When it comes time to search for a job, you may have no previous programming employment history, but you can point towards a repository of projects you’ve either completed or contributed to.
### **2. Build Programming Fundamentals**
Most programmers worth their salt know multiple languages. There are common principles that every language uses and while sometimes they are called different names, the fundamentals remain the same. Learning the fundamentals will help you immensely as you move onto your project.
As an example, here are some programming fundamentals beginners need to know to start coding:
Syntax — The rules that define the structure of the language. The syntax tells you exactly which words and symbols you need to use when you write your code, based on its language. Because computers don't think, you have to be very specific when writing code. At some point, every programmer has sat at the keyboard trying to figure out why their code wasn't working, only to realize they were missing something simple, like a semicolon (i.e., a simple syntax error).
Tools — There are a variety of tools that make programming easier. For example, one of the core programming tools is an integrated development environment (IDE), that checks your syntax for errors, organizes your files, and autocompletes lines of code for you. Another fantastic example of a modern tool that programmers use is generative AI such as ChatGPT, Mistral or MetaAI.
Learning to take advantage of these tools and using them to their full potential will reduce the impact of the inevitable bumps in your coding journey.
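The semicolon anecdote above is easy to make concrete. In Python, the equivalent stumbling block is a missing colon; the sketch below (a generic illustration, not tied to any particular course) shows valid syntax that would break with a single character removed:

```python
# The colon at the end of the `def` line and the indented body are
# both required by Python's syntax. Deleting the colon produces a
# SyntaxError before the program even runs, much like a missing
# semicolon does in C-style languages.
def greet(name):
    return f"Hello, {name}!"

print(greet("world"))  # prints: Hello, world!
```

An IDE would underline the broken line immediately, which is exactly why such tools are worth learning early.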
### **3. Learn Programming Basics**
If you have a specific reason for learning to code, you may already know which language you want to start with. For instance, if you want to be a game developer, C++ & JavaScript are probably your best choices. If you want to learn to program but haven't determined your first language, there are several beginner-friendly languages that serve as good starting points. Python is one of the easiest languages for beginners to pick up.
### **4. Work on Projects**
From here it all becomes a numbers game, how many projects can you get under your belt.
**How To Choose Programming Projects**
**Start Simple**
It’s best to start with simple projects. If you want to get into game design, you may be tempted to start trying to create the next massive online role-playing game. However, you’ll be better off creating something simple, such as a basic top down shooter with one level.
A static portfolio website is a simple option that can grow with you. You can show off what you know and add more complicated projects as you master more skills.
**Create Something Useful**
Think about the “sticking points” of your day. Do you run into the same problems or regularly have to do the same repetitive activities? Create something that will solve the problem or automate the work for you. Your community is great for ideas, too. Do the people in your clubs or organizations complain about the same issues often? See if you can brainstorm a solution. Real-world problem solving is a great addition to your portfolio.
### 10 Project ideas for beginners:
1. **To-Do List App**: A simple mobile app that allows users to create and manage their to-do lists, with features like adding, editing, and deleting tasks.
2. **Quiz Game**: A web-based quiz game that asks users a series of questions and keeps track of their scores, with a leaderboard to display top scores.
3. **Weather Dashboard**: A web-based weather dashboard that displays current weather conditions and forecasts for a given location, using APIs and data visualization.
4. **Personal Finance Tracker**: A simple desktop app that helps users track their expenses, income, and budget, with features like categorization and data visualization.
5. **Hangman Game**: A command-line based game of hangman, where users can guess letters and words, with a simple AI-powered opponent.
6. **Simple Chatbot**: A basic chatbot that responds to user input with pre-defined responses, using natural language processing and machine learning algorithms.
7. **Image Gallery**: A web-based image gallery that allows users to upload and display their images, with features like filtering and sorting.
8. **Rock, Paper, Scissors**: A command-line based game of rock, paper, scissors, where users can play against the computer, with a simple AI-powered opponent.
9. **Simple Calculator**: A desktop app that provides a simple calculator with basic arithmetic operations, like addition, subtraction, multiplication, and division.
10. **Guessing Game**: A command-line based game where users have to guess a random number, with hints and feedback provided by the game.
### 10 Project ideas for intermediates:
1. **EcoLife**: A mobile app that helps users track and reduce their carbon footprint by monitoring their daily activities, such as energy consumption, water usage, and waste generation.
2. **ChatGenie**: A conversational AI chatbot that can understand and respond to user queries in a human-like manner, using natural language processing and machine learning algorithms.
3. **CodeCracker**: A web-based platform that generates coding challenges and puzzles for users to solve, with a gamified leaderboard and rewards system to encourage participation and skill-building.
4. **SmartHome Automation**: A home automation system that uses IoT sensors and machine learning algorithms to optimize energy consumption, security, and convenience in a user’s home.
5. **MedMind**: A medical diagnosis AI system that uses machine learning and natural language processing to analyze patient symptoms and provide accurate diagnoses and treatment recommendations.
6. **GameOn**: A social gaming platform that allows users to create and share their own games, using a drag-and-drop game development engine and a community-driven marketplace.
7. **EduPal**: A personalized learning platform that uses AI-powered adaptive learning to create customized lesson plans and educational content for students of all ages and skill levels.
8. **CitySim**: A urban planning and simulation platform that uses machine learning and data analytics to optimize city infrastructure, traffic flow, and resource allocation.
9. **FoodForge**: A recipe generation and meal planning platform that uses machine learning and natural language processing to create customized recipes based on user preferences and dietary needs.
10. **SoundScout**: A music discovery and recommendation platform that uses machine learning and audio analysis to identify and recommend new music based on a user’s listening habits and preferences.
### **5. Apply for a Job**
It all comes down to this. You’ve grasped the fundamentals, you’ve built out a handful of projects — some you’re passionate about and some you’re indifferent to. Its time to start the application process.
Find jobs that fit your experience, particularly that match the languages you’ve been working in. Remote jobs are extremely competitive right now, your best bet is to find an entry level in person role with a modest salary offer.
Keep putting experience under your belt and with consistent performance you’ll naturally grow your skill to a level that's compatible with higher paying, remote roles.
**Find a platform to guide you**
Take all of these concepts prior to job application and put them together. Where can I find a program that will guide me step by step thru programming fundamentals and project completion?
GIGO Dev offers project based learning to beginners looking to learn code and transition to a new job.
Build your coding skills with a guided path and repository of projects supported by your own personal AI tutor, Code Teacher.
Remember, there are tons of resources and communities out there to support you in your coding journey, just search for something that fits your interests and remember that the harder you work, the luckier you get.
[GIGO Discord](https://discord.gg/learnprogramming)
[GIGO Twitter](https://twitter.com/gigo_dev)
[GIGO Reddit](https://www.reddit.com/r/gigodev/)
[GIGO GitHub](https://github.com/Gage-Technologies/gigo.dev)
Find [this article on Medium](https://medium.com/@gigo_dev/how-a-beginner-should-start-coding-2024-e09961680d8e)
Find this article on our site: https://www.gigo.dev/articles/how-a-beginner-should-start-coding-2024?viewport=desktop | gigo_dev |
1,879,447 | Acrylic Polymer Market Dynamics: Drivers, Restraints, and Opportunities | According to the new market research report “Acrylic Polymer Market for Cleaning Application by... | 0 | 2024-06-06T16:23:26 | https://dev.to/aryanbo91040102/acrylic-polymer-market-dynamics-drivers-restraints-and-opportunities-4048 | news | According to the new market research report “Acrylic Polymer Market for Cleaning Application by Type(Water-borne & Solvent-borne), Application(Laundry & Detergent, Dish Washing, Industrial & Institutional, Hard Surface Cleaning) & Region(APAC, North America, Europe, RoW) — Global Forecast to 2026″, published by MarketsandMarkets™, the market is projected to grow from USD 580 million in 2021 to USD 709 million by 2026, at a CAGR of 4.1%. The report offers a detailed analysis of the top winning strategies, evolving market trends, acrylic polymer market size and estimations, value chain, key investment pockets, drivers & opportunities, competitive landscape and regional landscape.

With the increasing population, increasing per-capita income, changing lifestyle, and increasing usage of washing machines across the globe, the demand for laundry detergent is growing, which is subsequently driving the acrylic polymer market for cleaning application. Moreover, increasing demand for liquid dish washing products in hotels, restaurants and food retails, and household applications further supports the growth of the acrylic polymer market.
Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=247258813
Browse in-depth TOC on “Acrylic Polymer Market”
129 — Tables
41 — Figures
144 — Pages
**View Detailed Table of Content Here: [https://www.marketsandmarkets.com/Market-Reports/acrylic-polymer-market-247258813.html](https://www.marketsandmarkets.com/Market-Reports/acrylic-polymer-market-247258813.html)**
Due to the COVID-19 pandemic, there is growing awareness about hygiene and cleanliness in public places, and governments have issued guidelines to take the utmost precaution to avoid the spread of the virus. Thus, the demand for cleaning products in industrial and institutional places has increased significantly across the globe, which in turn supports the growth of the acrylic polymer market for cleaning application. However, due to fluctuating crude oil prices and increasing raw material prices across the globe, the market growth may be restricted in the upcoming years.
Water-borne is the largest segment by type in the acrylic polymer market for cleaning application.
Based on type, the water-borne acrylic polymer segment is estimated to account for the larger share of the overall market. The major factors driving this segment are high solubility, good dispersion in cleaning products, and increasing demand for sustainable products. It also helps improve the performance and efficacy of cleaning products, owing to which it is prevalently used in Europe and North America. However, water-borne acrylic polymers cost more than solvent-borne ones.
Laundry & Detergent is the largest segment by application in the acrylic polymer market for cleaning application.
The laundry & detergent segment is estimated to account for the largest share of the overall acrylic polymer market for cleaning application in 2020, closely followed by the dish washing segment. The same trends noted above — rising population, growing per-capita income, changing lifestyles, and wider washing machine adoption — are driving laundry detergent demand, while growing use of liquid dish washing products in hotels, restaurants, food retail, and households further supports the growth of both segments.
**Request Sample Pages: [https://www.marketsandmarkets.com/requestsampleNew.asp?id=247258813](https://www.marketsandmarkets.com/requestsampleNew.asp?id=247258813)**
North America is estimated to be the largest market for acrylic polymer market for cleaning application.
North America accounted for the largest share of the acrylic polymer market for cleaning application in 2020, followed by Europe. In Europe and North America, stringent regulations and increasing demand for sustainable laundry & detergents and other cleaning products have supported the growth of the acrylic polymer market for cleaning application in the regions.
**Acrylic Polymer Market Key Players**
The leading players in the acrylic polymer market for cleaning application are Dow Inc. (US), BASF SE (Germany), Toagosei Co., Ltd. (Japan), Sumitomo Seika Chemicals Co., Ltd. (Japan), Arkema (France), Nippon Shokubai Co. Ltd. (Japan), Ashland Global Holdings, Inc. (US), and others.
**Dow Inc.:** Dow Inc. serves as the holding company for The Dow Chemical Company (TDCC) and its consolidated subsidiaries. Dow Inc. operates through three main business segments: packaging & specialty plastics; industrial intermediates & infrastructure; and performance materials & coatings. Under the performance materials & coatings segment, the company offers acrylic polymer for cleaning application under the trademarks DURAPLUS, ACUSOL, and RHOPLEX.
**Get 10% Free Customization on this Report: [https://www.marketsandmarkets.com/requestCustomizationNew.asp?id=247258813](https://www.marketsandmarkets.com/requestCustomizationNew.asp?id=247258813)**
**BASF SE:** BASF is among the leading chemical companies in the world. The company operates through various business segments, namely, Chemicals, Materials, Industrial Solutions, Surface Technologies, Nutrition & Care, and Agricultural Solutions. The company has six Verbund sites, 241 additional production sites in more than 90 countries, and three global research divisions in Europe, Asia Pacific, and North America. It offers acrylic acid, acrylic monomers, and acrylic polymer products under the petrochemical sub-segment, a part of the chemical division, with an acrylic acid production capacity of 1,510 kilotons. The company offers products under the "Sokalan" brand name.
**Toagosei Co., Ltd.:** Toagosei is engaged in the manufacture and sale of chemical and industrial products. The company operates through the following segments: Commodity Chemicals, Polymer & Oligomer, Adhesive Material, Performance Chemicals, Plastics, and Others. It sells acrylic polymer, acrylic monomers, and other products under the Polymer & Oligomer segment. | aryanbo91040102 |
1,879,446 | Technology helps me with my geopolitics studies. | You know that feeling of being lost on a map, not quite knowing where one country begins and another... | 0 | 2024-06-06T16:23:22 | https://dev.to/outofyourcomfortzone/a-tecnologia-me-ajuda-nos-estudos-sobre-geopolitica-3pa0 | You know that feeling of being lost on a map, not quite knowing where one country begins and another ends? Well, I've been in that boat too. But let me tell you something: technology is here to save the nation, literally, in the case of geopolitics.
First of all, **Google Earth**. My friend, if you haven't used it to study geopolitics yet, you're missing out! With it, you can travel the world without leaving your seat, explore borders, understand territorial conflicts, and visualize that mountain everyone wants because it's strategic. And best of all: everything with a precision you have to see to believe.
Now think about **social media**. You think it's just for posting food photos? Not at all! Twitter, for example, is a sea of real-time information. By following the right profiles, you can keep up with breaking news about any crisis, revolution, or international agreement. As a bonus, there are even some memes to lighten things up between one serious story and the next.
And then there are the **news apps**, right? Flipboard, Feedly, and even Google News. They let you personalize your reading, choosing the topics that interest you most. That way, you can put together a newspaper of your own, focused on the regions of the world and the geopolitical themes you want to understand better. It becomes much easier to stay up to date without having to hunt down story after story. Besides that, there are many accurate and very informative **[news sites about geopolitics](https://atlasreport.com.br/as-81-melhores-e-mais-confiaveis-fontes-para-noticias-e-analises-de-geopolitica/)**.
Another thing that helps me a lot is **podcasts and YouTube videos**. There are plenty of good people explaining geopolitics in an easy-to-understand way. That boring bus ride? Turn it into a course on the rise of China or the challenges of the European Union. That way, you keep getting sharper without even noticing.
And, of course, we can't forget online courses. Professors from renowned universities, interactive maps, quizzes: everything there, within a click's reach.
In short, if you have a smartphone, tablet, or computer and an internet connection, you have a global classroom in the palm of your hand. Technology takes geopolitics out of dusty books and brings it into our everyday reality in a practical and even fun way. And, between us, who doesn't like learning while fiddling with their phone, right? | outofyourcomfortzone | |
1,879,444 | How to create a storage account using Azure | Create and deploy a resource group to hold all your project resources. Learn more about resource... | 0 | 2024-06-06T16:21:12 | https://dev.to/emmyfx1/how-to-create-a-storage-account-using-azure-1nmp | Create and deploy a resource group to hold all your project resources. Learn more about resource groups.
In the Azure portal, search for and select Resource groups.

Select + Create.

After the page displays:
Give your resource group a name.
Select a region. Use this region throughout the project.
Select Review and create to validate the resource group.

Select Create to deploy the resource group.

Create and deploy a storage account to support testing and training.
After validation:
In the Azure portal, search for and select Storage accounts.
Select + Create.

On the Basics tab, select your Resource group.
Provide a Storage account name. The storage account name must be unique in Azure.
Set the Performance to Standard.
Select Review, and then Create.


Wait for the storage account to deploy and then Go to resource.

The data in this storage account doesn't require high availability or durability, so the lowest-cost storage solution is desired.
In your storage account, in the Data management section, select the Redundancy blade.

Select Locally-redundant storage (LRS) in the Redundancy drop-down.
Be sure to Save your changes.

Refresh the page and notice that the content now exists only in the primary location.
The storage account should only accept requests from secure connections.
In the Settings section, select the Configuration blade.
Ensure Secure transfer required is Enabled.
Developers would like the storage account to use at least TLS version 1.2. Learn more about transport layer security (TLS).
In the Settings section, select the Configuration blade.
Ensure the Minimal TLS version is set to Version 1.2.
Until the storage is needed again, disable requests to the storage account.

In the Settings section, select the Configuration blade.

Ensure Allow storage account key access is Disabled.
Be sure to Save your changes.
Ensure the storage account allows public access from all networks.
In the Security + networking section, select the Networking blade.
Ensure Public network access is set to Enabled from all networks.
Be sure to Save your changes.
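For reference, the portal settings above all map onto properties of the storage account resource in Azure's Resource Manager schema, so the same configuration can be expressed as a single request body. The sketch below only builds that body as plain data, with no SDK calls; the property names follow the ARM schema to the best of my knowledge and should be double-checked against the current API version.

```python
def storage_account_body(location: str) -> dict:
    """Build the JSON body for creating/updating a storage account via ARM,
    mirroring the portal choices made in this walkthrough."""
    return {
        "location": location,
        "sku": {"name": "Standard_LRS"},            # locally-redundant storage
        "kind": "StorageV2",
        "properties": {
            "supportsHttpsTrafficOnly": True,       # secure transfer required: Enabled
            "minimumTlsVersion": "TLS1_2",          # minimal TLS version 1.2
            "allowSharedKeyAccess": False,          # storage account key access: Disabled
            "publicNetworkAccess": "Enabled",       # allow access from all networks
        },
    }

body = storage_account_body("eastus")
print(body["sku"]["name"])  # Standard_LRS
```

The same body could be sent with any HTTP client or an Azure SDK; building it as plain data first makes the settings easy to review and diff.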

| emmyfx1 | |
1,875,632 | Create Phi-3 Chatbot with 20 Lines of Code (Runs Without Wifi) 🚀 🤖 | Made in collaboration with Andrew Dang The Power of Local Chatbots Chatbots are fun and... | 0 | 2024-06-06T16:21:10 | https://dev.to/llmware/create-phi-3-chatbot-with-20-lines-of-code-runs-without-wifi-2d7e | python, ai, beginners, programming | Made in collaboration with Andrew Dang
## The Power of Local Chatbots
Chatbots are fun and powerful tools. They excel at organization, summarization, and conversation. The versatility of chatbots makes platforms like ChatGPT popular. However, these platforms train their chatbots on user data, which comes at the cost of user privacy. Hence the appeal of local large language models (LLMs). Users can deploy local LLMs without the need for an internet connection and on consumer hardware. Stick around to learn how to deploy a local chatbot in just 20 lines of code!
***
## Let's get started 🔥
This video will introduce you to the capabilities of the Chatbot and provide an overview of how it is built.
{% youtube gzzEVK8p3VM %}
***
## Framework 🖼️
#### LLMWare
For our new readers, LLMWARE is a comprehensive, open-source framework that provides a unified platform for application patterns based on LLMs, including Retrieval Augmented Generation (RAG).
#### Streamlit
Streamlit is an open-source Python library that allows developers to create interactive web applications quickly and easily. It's designed for machine learning and data science projects, enabling users to visualize data, run models, and share results in a user-friendly interface.
Please run `pip install llmware` and `pip install streamlit` in the command line to download these packages.
***
## Importing Libraries and Configurations 📚
```python
import streamlit as st
from llmware.models import ModelCatalog
from llmware.gguf_configs import GGUFConfigs
GGUFConfigs().set_config("max_output_tokens", 500)
```
**Streamlit(`st`)**: Used for creating web applications. It provides functions to display content and handle user interactions.
**ModelCatalog**: A component of `llmware` that manages selecting the desired model, loading the model, and configuring the model using the parameters in the Model Loading section below.
**GGUFConfigs**: A component of `llmware` that handles global (applies to all currently loaded models) configurations for models, such as output token limits. You may find the full list of global configurations [here](https://github.com/llmware-ai/llmware/blob/main/llmware/gguf_configs.py) in the variable `_conf_libs` under the `GGUFConfigs` class.
***
## Model Loading 🪫🔋
```python
model = ModelCatalog().load_model(model_name, temperature=0.3, sample=True, max_output=450)
```
**Model Name**: The name of the model you want to load from ModelCatalog.
**Temperature**: This controls the randomness of the output. Valid values range between 0 and 1, where lower values make the model more deterministic, and higher values make the model more random and creative.
**Sample**: Determines if the output is generated deterministically or probabilistically. False generates deterministic output. True generates probabilistic output.
**Max Output**: Specifies the max length of the generated output (ex. if this is set to 100, there will be at most 100 words in the output).
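To build intuition for the `temperature` parameter, here is a stdlib-only sketch of temperature-scaled softmax sampling. This is illustrative only; `llmware` handles this internally, and the logit values below are made up.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by the temperature before softmax: a low temperature
    sharpens the distribution (more deterministic), a high one flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                      # fake scores for three tokens
cool = softmax_with_temperature(logits, 0.3)  # peaked: top token dominates
warm = softmax_with_temperature(logits, 1.5)  # flatter: more variety
print(round(cool[0], 3), round(warm[0], 3))
```

This is why `temperature=0.3` in the chatbot keeps responses focused while `sample=True` still allows some variation.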
***
## Session State Management 🧮
Manages chat history within the session state of Streamlit, ensuring that previous messages are preserved across reruns of the app.
#### Ensure there is a list called "messages" in session state
```python
if "messages" not in st.session_state:
    st.session_state.messages = []
```
#### Display Chat History
For each stored message, `role` is either "user" or "assistant" and determines which chat bubble renders the content.
```python
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])
```
#### Prompt the User and Display Responses
If the user enters text, display their text
```python
prompt = st.chat_input("Say something")
if prompt:
    with st.chat_message("user"):
        st.markdown(prompt)
```
#### Generate and Display Responses
```python
with st.chat_message("assistant"):
    bot_response = st.write_stream(model.stream(prompt))
```
#### Update Session State
Updates the session state by appending the new user input and the assistant's response to the chat history.
```python
st.session_state.messages.append({"role": "user", "content": prompt})
st.session_state.messages.append({"role": "assistant", "content": bot_response})
```
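The same append pattern can be exercised outside Streamlit with a plain dict standing in for `st.session_state`. This is a stdlib-only sketch; `record_turn` is a hypothetical helper, not part of the app.

```python
session_state = {}  # stand-in for st.session_state

def record_turn(state, user_text, bot_text):
    """Append one user/assistant exchange to the chat history."""
    state.setdefault("messages", [])
    state["messages"].append({"role": "user", "content": user_text})
    state["messages"].append({"role": "assistant", "content": bot_text})

record_turn(session_state, "hi", "hello!")
record_turn(session_state, "how are you?", "doing well")
print(len(session_state["messages"]))  # 4
```

Because the history lives in session state rather than a local variable, it survives Streamlit's rerun of the script on every user interaction.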
***
## Main Block ⚙️
This is the typical Python idiom ensuring that the script runs only when it is executed as the main program, not when imported as a module.
Initializes the application with a predefined list of model names and starts the chat application with the first model in the list.
This script is a straightforward example of integrating AI models into a web application for interactive purposes, showcasing the use of Streamlit for UI and `llmware` for backend model handling.
```python
if __name__ == "__main__":

    chat_models = ["phi-3-gguf",
                   "llama-2-7b-chat-gguf",
                   "llama-3-instruct-bartowski-gguf",
                   "openhermes-mistral-7b-gguf",
                   "zephyr-7b-gguf",
                   "tiny-llama-chat-gguf"]

    model_name = chat_models[0]

    simple_chat_ui_app(model_name)
```
**Model List**: This list contains the identifiers of various chat models available through the `llmware` library. These models are pre-trained and configured for generating conversational responses.
**Purpose**: At runtime, one of these models is selected to power the chat interface. This allows for flexibility in choosing different models based on their capabilities or performance characteristics.
***
## Fully Integrated Code 📄
**To run, go to the command line and enter:** `streamlit run "path/to/gguf_streaming_chatbot.py"`
```python
import streamlit as st
from llmware.models import ModelCatalog
from llmware.gguf_configs import GGUFConfigs

GGUFConfigs().set_config("max_output_tokens", 500)


def simple_chat_ui_app(model_name):

    st.title(f"Simple Chat with {model_name}")

    model = ModelCatalog().load_model(model_name, temperature=0.3, sample=True, max_output=450)

    # initialize the chat history on first run
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # replay prior messages so the conversation survives reruns
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    # accept new input and stream the model's response
    prompt = st.chat_input("Say something")
    if prompt:

        with st.chat_message("user"):
            st.markdown(prompt)

        with st.chat_message("assistant"):
            bot_response = st.write_stream(model.stream(prompt))

        st.session_state.messages.append({"role": "user", "content": prompt})
        st.session_state.messages.append({"role": "assistant", "content": bot_response})

    return 0


if __name__ == "__main__":

    chat_models = ["phi-3-gguf",
                   "llama-2-7b-chat-gguf",
                   "llama-3-instruct-bartowski-gguf",
                   "openhermes-mistral-7b-gguf",
                   "zephyr-7b-gguf",
                   "tiny-llama-chat-gguf"]

    model_name = chat_models[0]

    simple_chat_ui_app(model_name)
```
You may also find the fully integrated code on our Github repo [here](https://github.com/llmware-ai/llmware/blob/main/examples/UI/gguf_streaming_chatbot.py)
***
## Conclusion 🔒
As we have demonstrated, deploying a local chatbot can be achieved with just 20 lines of code. This simplicity, combined with the enhanced privacy and control, makes local LLMs an attractive option for users seeking powerful and secure chatbot solutions.
Thank you for exploring this topic with us. We hope you are now equipped with the knowledge to deploy your own local chatbot and harness its full potential.
Please check out our Github and leave a star! https://github.com/llmware-ai/llmware
Follow us on Discord here: https://discord.gg/MgRaZz2VAB | will_taner |
1,879,419 | Woocommerce Web Design: Elevate Your E-Commerce Business | In the competitive world of e-commerce, having a robust and visually appealing online store is... | 0 | 2024-06-06T16:19:14 | https://dev.to/ken0098/woocommerce-web-design-elevate-your-e-commerce-business-1e64 | webdesign, woocommerce, programming | In the competitive world of e-commerce, having a robust and visually appealing online store is crucial for success. This is where [**Woocommerce web design**](https://woocommercewebdesigner.co.uk/) comes into play. As a powerful and flexible platform, Woocommerce provides businesses with the tools they need to create stunning and functional online stores. Whether you're a small business owner or a large enterprise, investing in professional Woocommerce web design can significantly impact your online presence and sales.
## What is Woocommerce?

Woocommerce is an open-source e-commerce plugin for WordPress, designed to make it easy for businesses to set up and manage an online store. It offers a wide range of features, including product management, secure payment gateways, and customizable themes. With Woocommerce, you have complete control over your website's look and functionality, allowing you to create a unique and engaging shopping experience for your customers.

## Benefits of Professional Woocommerce Web Design

1. **Customizability:** One of the standout features of Woocommerce is its high level of customizability. Professional web designers can leverage this flexibility to create a website that perfectly aligns with your brand identity. From choosing the right themes to customizing layouts and color schemes, every aspect of your store can be tailored to reflect your brand.
2. **User Experience:** A well-designed website enhances the user experience, making it easier for customers to navigate, find products, and complete purchases. Professional designers focus on creating intuitive interfaces, optimizing load times, and ensuring that the site is responsive across all devices.
3. **SEO Optimization:** Professional Woocommerce web design includes implementing SEO best practices to improve your site's visibility on search engines. This involves optimizing meta tags, creating SEO-friendly URLs, and ensuring fast page load times. Higher search engine rankings can drive more organic traffic to your store.
4. **Security:** Security is a top priority in e-commerce. Professional web designers ensure that your Woocommerce store is secure by integrating SSL certificates, implementing secure payment gateways, and regularly updating plugins and themes to protect against vulnerabilities.
5. **Scalability:** As your business grows, your website needs to scale with it. Woocommerce is designed to handle growth, and professional designers can build a scalable infrastructure that accommodates increased traffic and product listings without compromising performance.

## Key Features of Woocommerce Web Design

- **Custom Themes:** Choose from a wide range of customizable themes to create a unique look for your store. Professional designers can further tweak these themes to match your branding perfectly.
- **Product Management:** Easily add, edit, and manage your products. Woocommerce supports various product types, including physical goods, digital downloads, and subscriptions.
- **Secure Payment Gateways:** Integrate multiple payment gateways to provide your customers with secure and convenient payment options. Woocommerce supports major providers like PayPal, Stripe, and Square.
- **Shipping Options:** Configure shipping methods and rates based on your needs. Woocommerce allows you to offer flat rate, free shipping, and real-time shipping calculations.
- **Marketing Tools:** Utilize built-in marketing tools such as discount codes, coupons, and email marketing integration to promote your products and drive sales.
- **Analytics and Reporting:** Access detailed reports on sales, customers, and stock levels to make informed business decisions. Integration with Google Analytics provides deeper insights into user behavior.

## Why Hire a Professional Woocommerce Web Designer?

While Woocommerce offers a user-friendly interface, designing a professional and high-performing e-commerce site requires expertise. Here are a few reasons why hiring a professional Woocommerce web designer is a wise investment:

1. **Expertise:** Professional designers have the skills and experience to create a visually appealing and functional website. They stay updated with the latest trends and technologies to ensure your site remains competitive.
2. **Time-Saving:** Designing a website can be time-consuming. By hiring a professional, you can focus on running your business while the experts handle the design and development.
3. **Optimization:** Professionals know how to optimize your site for speed, SEO, and user experience. This leads to better performance and higher conversion rates.
4. **Support and Maintenance:** A professional designer offers ongoing support and maintenance to keep your site running smoothly. They can handle updates, security checks, and any issues that arise.

## Conclusion

Investing in professional Woocommerce web design is essential for any e-commerce business looking to succeed in today's competitive market. With its customizable features, focus on user experience, and robust security, Woocommerce provides a solid foundation for your online store. By hiring a professional web designer, you can ensure that your website stands out, attracts more customers, and drives sales. Whether you're starting from scratch or looking to revamp your existing site, a professionally designed Woocommerce store is a key asset for your business's growth and success. | ken0098 |