| column | dtype | min | max |
|---|---|---|---|
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
1,880,602
Day 11 of my progress as a vue dev
About today Let's start from past few days. I made a lot of new changes in my approach of work and my...
0
2024-06-07T16:24:55
https://dev.to/zain725342/day-11-of-my-progress-as-a-vue-dev-2h26
webdev, vue, typescript, tailwindcss
**About today**

Let's start with the past few days. I made a lot of changes to my approach to work and my daily schedule, and I have been experimenting with some things. I missed posting here for the past two days and regret breaking my streak. However, I feel that doing something doesn't make a lot of sense if you're not doing it right, so I took the time to reconfigure my whole work process. I made a fresh schedule, which I have roughly followed for the past two days, and I feel confident that I can continue it for the next 48 days and stay consistent throughout, sort of like a 50-day challenge. Coming back to Vue, I finished the functional implementation of the tree structure in my DSA visualizer project, and for now I'm dealing with a basic visual approach.

**What's next?**

I'm going to focus more on the quality of the work I do in a day rather than the number of hours spent, and I will make sure that when I sit down to work I give it my 100%, at least for the next 48 days, to see how I like the grind and whether it becomes part of my habits. On the project side, I still need to polish the visuals, because they are not all the way there.

**Improvements required**

I need to stay focused on achieving this, because I'm tired of getting lazy and giving up over and over again; I need this to work in order to make a positive change in my life. I also need my project to stand out, so I will push it further in efficiency and visualization than most of the usual DSA projects out there. Wish me luck!
zain725342
1,880,601
How to Speak to a Human When You're Used to Talking to a Machine
For many developers, daily interactions often involve more time spent with code and virtual...
27,390
2024-06-07T16:24:51
https://dev.to/buildwebcrumbs/how-to-speak-to-a-human-when-youre-used-to-talking-to-a-machine-4p79
For many developers, daily interactions often involve more time spent with code and virtual interfaces than with other humans. Whether it's issuing commands in a terminal or negotiating logic with an AI, switching back to human conversations can feel a bit... well, less straightforward. Let's explore some tongue-in-cheek tips for those moments when you need to turn off the code-switching and switch on the human interaction.

_Note: AI-generated cover._

---

## 1. Preparing for Human Interaction

**Re-activate Your Greeting Protocols**

Remember, human interaction protocols differ significantly from your usual git commit messages. Start with a "Hello" or "How's it going?"—simple scripts that can initiate most human interaction sequences.

**Syntax Matters**

Your compiler might not care about politeness, but humans do. Integrating 'please' and 'thank you' into your verbal commands can dramatically improve compilation success rates in human dialogues.

---

## 2. Syntax and Semantics in Human Conversation

**Use Expressive Subroutines**

While your IDE doesn't need you to emote, humans often require facial expressions as part of communication protocols. Practice activating your smile function when making requests or expressing thanks.

**Manage Your Debugging Verbosity**

In code, you debug with logs; in life, it's called asking questions. If you don't understand a human output, try "Can you help me understand what you mean?" instead of "Error: Output not logical."

**Decoding Human Sarcasm**

Unlike the clear logic of Python or JavaScript, human language can be encoded with sarcasm—essentially comments that don't execute literally. If someone remarks, "Great job on fixing that bug," right after a crash, parse this as possibly sarcastic.

---

## 3. Error Handling in Real Life

**Handle Misunderstandings Gracefully**

Instead of reverting to a previous save point, human interactions require real-time error handling. Acknowledge the bug (misunderstanding) and deploy a patch by clarifying your statement.

**Reboot Conversations if Necessary**

When a program crashes, you reboot it. If a conversation goes south, don't be afraid to suggest starting over. Humans appreciate resets too, especially when accompanied by a genuine smile (see expressive subroutines).

---

## 4. Advanced Human Interaction Features

**Emulate Emojis with Facial Expressions**

Emojis enhance text communication by adding emotional context—your facial expressions and body language do this in the physical world. Practice your real-life 'emojis' to become more proficient in human sessions.

**Update Your Small Talk Algorithms**

Small talk functions as the UI of human interaction; it's what loads while deeper connections are buffering. Enhance your interface by learning to engage in light, casual dialogue about common modules like 'weather' or 'current TV series'.

---

## Are you ready to communicate?

Navigating from structured code to the fluid dynamics of human conversation can be challenging, but think of it as switching from backend to frontend development. You're just optimizing for a different kind of user experience—one that requires empathy, patience, and a lot of dynamic variables. Happy coding and conversing!

**Thanks for reading,**

Pachi 💚
pachicodes
1,880,573
Azure components of Azure
Even if you've been using Azure for a while, you might not be fully aware of what the essential...
0
2024-06-07T16:13:17
https://dev.to/arizetbest/azure-components-of-azure-2hpd
Even if you've been using Azure for a while, you might not be fully aware of its essential elements. In this article, we'll examine what I think are the three essential elements of Azure's internal architecture:

1. Regions
2. Availability Zones
3. Datacenters

Let's examine each one in greater depth.

**1. Regions**

Azure Regions are geographical locations that house several datacenters. Every Region is designed to give consumers the best possible security and performance. Most of the time, a single Region contains several datacenters connected to one another by a low-latency network so that they can communicate. Here are some examples of Azure Regions:

1. UK South
2. East Asia
3. Australia East
4. Brazil South
5. France Central

You can find the full list of Azure Regions on the official website, [here](https://azure.microsoft.com/en-us/explore/global-infrastructure/geographies/#geographies). Almost all services inside Azure require you to specify the Region in which you want the resource deployed. So far, we know that a Region is simply a group of datacenters in the same area.

**2. Availability Zones**

An Azure Availability Zone is a collection of datacenters, smaller than an Azure Region. As a result, an Azure Region can alternatively be thought of as a collection of one or more Availability Zones. The isolation of Availability Zones is what gives them their power: the other Zones in the Region will keep operating normally even if one goes down. This keeps an Azure Region "available", as the name implies, even in the event that several datacenters experience technical difficulties.

**3. Datacenters**

At the very core sit the datacenters. They can be thought of as enormous buildings, each housing servers with their own network, power, and cooling systems. Azure is a worldwide provider, so these big buildings, or datacenters, are located all over the world. Datacenters are organized into Azure Regions to make them more secure and easier for developers to work with.

**The main conclusions of this piece**

A datacenter is made up of several servers. A collection of datacenters forms an Azure Availability Zone. An Azure Region is formed from a collection of Availability Zones. Thank you for reading, and please do check out my profile for more blogs on Azure and AWS services.
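The containment relationship above (servers inside datacenters, datacenters inside Availability Zones, Zones inside a Region) can be sketched as a toy data model. This is not an Azure API; the region, zone, and datacenter names below are invented for illustration:

```javascript
// Toy model of the containment described above. All names are invented.
const region = {
  name: "UK South",
  availabilityZones: [
    { name: "Zone 1", datacenters: ["dc-1a", "dc-1b"], up: true },
    { name: "Zone 2", datacenters: ["dc-2a"], up: true },
    { name: "Zone 3", datacenters: ["dc-3a"], up: true },
  ],
};

// A Region stays "available" as long as at least one Zone is still up,
// even if every datacenter in another Zone fails.
const regionAvailable = (r) => r.availabilityZones.some((z) => z.up);

region.availabilityZones[0].up = false; // one Zone goes down
console.log(regionAvailable(region));   // true: the Region is still available
```

The `some` check mirrors the guarantee described above: a single Zone outage does not take the Region down.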
arizetbest
1,865,541
Rendering "glitches" in reactive programming
Rendering glitches are a bit like the reactive world's version of the race condition where some...
0
2024-06-07T16:10:53
https://dev.to/dariomannu/rendering-glitches-in-reactive-programming-5a3k
webdev, javascript, functionalreactiveprogramming
Rendering glitches are a bit like the reactive world's version of a race condition, where some derived state causes the rendering of invalid data before all of its dependencies have finished updating.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hucoe3wfmu47jgb5ugsw.png)

In this scenario, we have A, the main state: a counter that emits a sequence of numbers. Next we have B, a derived state that simply emits A + 1. Then we have C, another derived state, derived from both. Now, if A emits a value before B, the relationship `a < b` won't hold and a false value will be emitted. If B emits before A, everything will always be OK.

How do we make sure this won't happen?

Depending on which reactive paradigm is being used, programs may or may not be exposed to these scenarios. Basic reactive patterns like "Signals" or "Hooks" can be particularly vulnerable (in fact, the first time I heard about rendering glitches myself was when I was reading about Signals). The result can be an ever-increasing number of unnecessary re-renders, causing poor performance, laptops getting hot, etc.

Solutions exist in the form of both tactical and strategic approaches. Tactical means "deal with it and make sure it doesn't happen in your code", although some efforts are being made to address these issues by the very supporters of the Signal pattern. Another approach, more strategic, is relying on a programming paradigm that's simply immune to these situations.

Take RxJS, for example, the most well-established FRP library for JavaScript. The way you address issues like the above is with dedicated control-flow operators, through which the above becomes a non-problem.

```javascript
const A = interval(1000); // emits 0, 1, 2, ... every second

const B = A.pipe(
  map(a => a + 1) // re-emits 1, 2, 3, ...
);

const C = zip(A, B).pipe(
  map(([a, b]) => a < b) // emits true | false
);
```

`zip` is one of the most basic RxJS flow-control operators. It simply waits for both sources to emit before moving forward. This way, a rendering glitch can simply never happen.

There are some UI frameworks like [Cycle](https://github.com/cyclejs/cyclejs) and libraries like [Rimmel](https://github.com/hellomenu/rimmel) that take full advantage of RxJS, in which its operators are first-class citizens, so your components never really have to deal with rendering glitches.

N.B.: we talk about "glitches" when the flow is beyond our control, as in the case of many Signal implementations. With Observables and RxJS, flow control is always in your hands. If you still experience inconsistencies using RxJS and Observable streams, those would most likely fall into the simpler "application bug" category rather than qualify as a "glitch".

### Circular Dependencies

There also appears to be an interesting case with circular dependencies. When derived state is connected in loops, resolving the rendering-glitch problem is particularly challenging; again, this is not a problem in a flow-controlled reactive solution using Observables.
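For readers without RxJS at hand, the same scenario can be reproduced in a dependency-free sketch. The `demo` function below is a deliberately naive eager-propagation model standing in for a glitchy Signal implementation (all names are invented for illustration), while `zipped` mimics the guarantee `zip` provides: C only evaluates once it has a fresh `(a, b)` pair.

```javascript
// Naive eager propagation: subscribers run in registration order, so C
// (which reads both A and B) runs before B's updater and sees a stale B.
function demo() {
  let a = 0;
  let b = a + 1;
  const observed = [];
  const subscribers = [
    () => observed.push(a < b), // C: derived from A and B
    () => { b = a + 1; },       // B: recomputed too late
  ];
  for (let next = 1; next <= 3; next++) {
    a = next;                   // A emits a new value
    subscribers.forEach((fn) => fn());
  }
  return observed;              // glitch: a < b never holds when C reads it
}

// zip-style pairing: B is derived from the same emission of A before
// C evaluates, so C never sees a mismatched pair.
function zipped() {
  const observed = [];
  for (let a = 1; a <= 3; a++) {
    const b = a + 1;
    observed.push(a < b);
  }
  return observed;
}

console.log(demo());   // [ false, false, false ]
console.log(zipped()); // [ true, true, true ]
```

The only difference between the two is *when* C is allowed to read its dependencies, which is exactly what flow-control operators like `zip` manage for you.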
dariomannu
1,880,594
Learn Programming From Scratch for Free with Rocketseat
Get started in programming and open the doors to a career full of opportunities in a market of...
0
2024-06-23T13:51:03
https://guiadeti.com.br/aprenda-programacao-do-zero-gratuito-rocketseat/
eventos, carreira, carreiradeti, cursosgratuitos
---
title: Learn Programming From Scratch for Free with Rocketseat
published: true
date: 2024-06-07 16:08:17 UTC
tags: Eventos,carreira,carreiradeti,cursosgratuitos
canonical_url: https://guiadeti.com.br/aprenda-programacao-do-zero-gratuito-rocketseat/
---

Get started in programming and open the doors to a career full of opportunities in a high-demand market with Rocketseat. The free online event is ideal for beginners who want to take their first steps in the field. With no prior experience required, participants will have the chance to attend three live classes, where they will learn hands-on how to build their first website, understand the essential skills of a programmer, and discover the path to starting a promising career in programming. The event also offers the opportunity to take part in prize draws. Don't miss the chance to transform your future with Rocketseat, an event designed for those starting from zero and looking for a way into the competitive technology market!

## Journey from Zero to Programming

Rocketseat is organizing a free online event, perfect for those beginning their journey into the world of programming and seeking a career in a high-demand market with remote work opportunities and attractive salaries.

![](https://guiadeti.com.br/wp-content/uploads/2024/06/image-21.png)
_Image from the event page_

Ideal for beginners, this event will offer a practical and interactive learning platform.

### Event Details and Offerings

Participants will have the opportunity to attend three live classes on June 13, 17, and 19. In these classes, they will learn to build their first website in a simple, uncomplicated way, understand the fundamental skills a programmer needs, and be guided through the first steps toward working in programming.
### Course Content

#### Class 01 – 06/13, Thursday • 7 PM

Starting from zero in programming: in this warm-up session, you will learn to build your first website, even with no prior knowledge, in an uncomplicated way, and take your first hands-on steps in code.

#### Class 02 – 06/17, Monday • 7 PM

How to start in the profession + prize draw: in this session you will learn the step-by-step process for building a career in the field, learn more about the market, and find out what you need to do to become one of the most sought-after professionals in it.

#### Class 03 – 06/19, Thursday • 7 PM

Face to face with a recruiter: in this session you will learn how to build an attractive résumé and which requirements the market looks for in a professional, from someone who knows the subject.

### Prizes

Registered participants will have the chance to win valuable prizes to boost their learning. Up for grabs are a laptop for programming and a 12-month scholarship for Rocketseat's Full-Stack program. To enter, participants must be registered for the event, join the exclusive WhatsApp group, and watch the June 17 session live, where detailed instructions on how to enter the draw will be provided. Note that prizes will only be shipped to addresses in Brazil.

### Engagement and Exclusivity

The sessions are designed to be interactive and live, offering an exclusive experience. Note that recordings of the sessions will not remain available for long, encouraging active participation during the live broadcasts. This event is a good opportunity for those with little or no programming knowledge who want to take the first step toward a promising and lucrative career in technology.
## Programming Computer programming is a discipline that began in the 1940s and 1950s with the invention of the first programmable computers. Originally, programming was done in machine language, a set of binary instructions understood directly by the computer's hardware. This quickly evolved into high-level programming languages, such as FORTRAN and COBOL in the 1950s, which allowed a more readable and accessible approach to the software development process. ### Programming Languages: Popular and Emerging #### Most Used Languages Today, languages such as JavaScript, Python, and Java dominate the software development world thanks to their flexibility, robust support communities, and broad applicability. JavaScript is essential for front-end web development, while Python is celebrated for its simplicity and effectiveness in data science, machine learning, and automation tasks. Java remains a popular choice for enterprise application development. #### Less Used Languages On the other hand, languages such as Perl and Ruby have seen their popularity decline. Perl was widely used for scripting and web programming, but it has been supplanted by more modern and versatile languages. Ruby, although still popular for web development through the Ruby on Rails framework, has been losing ground to newer languages that offer better performance and scalability. ### Tools and Technologies in Programming Development tools have evolved alongside programming languages. IDEs (Integrated Development Environments) such as Eclipse, Visual Studio, and IntelliJ IDEA offer extensive support for programming in multiple languages, with features such as debugging, version control, and continuous integration. Platforms such as GitHub and GitLab facilitate code versioning and collaboration, both essential for modern software development. 
### Careers in Programming A career in programming offers many opportunities. Developers can specialize in areas such as web development, mobile apps, embedded systems, artificial intelligence, and more. The growing adoption of practices such as DevOps and Agile emphasizes the need for programmers who not only write code but also understand operations and team collaboration. ## Rocketseat Rocketseat is a technology education platform that specializes in programming and in web and mobile development. Founded in Brazil, Rocketseat's mission is to prepare developers for the job market through a practical, up-to-date teaching method aligned with the demands of the technology industry. ### Courses and Bootcamps Offered Rocketseat is known for its intensive, results-focused methodology. The courses cover a wide range of essential technologies, such as Node.js, ReactJS, React Native, and more. Through these courses, students learn not only the theory behind the tools but also how to apply them in practical projects, ensuring they gain real experience they can take directly into their careers. ### Networking and Collaboration One of Rocketseat's great differentiators is its engaged community. The platform encourages a culture of collaboration and continuous learning, where students can interact, solve problems together, and share their experiences. ## Sign up now for the Rocketseat event and take your programming skills to the next level! [Registration for the From Zero to Programming journey](https://www.rocketseat.com.br/eventos/do-zero-a-programacao) must be completed on the Rocketseat website. ## Share and invite others to the Rocketseat event – transform your career in technology! Did you like this free event for programming beginners? Then share it with everyone! 
The post [Learn Programming From Scratch for Free With Rocketseat](https://guiadeti.com.br/aprenda-programacao-do-zero-gratuito-rocketseat/) appeared first on [Guia de TI](https://guiadeti.com.br).
guiadeti
1,880,567
Buy Verified Payoneer Account
Buy Verified Payoneer Account Buy Verified Payoneer Account Having a reliable platform and procedure...
0
2024-06-07T16:07:47
https://dev.to/works_usa_6ee34a6e505ce13/buy-verified-payoneer-account-1428
payoneer, payoneeraccount, verifiedpayoneer
Buy Verified Payoneer Account **[Buy Verified Payoneer Account](https://usa5stareviews.com/product/buy-verified-payoneer-account/)** Having a reliable platform and procedure for international transactions is very important. In that case, Payoneer accounts are a simple and reliable medium. Citizens of more than 200 countries handle their daily transactions very well with their Payoneer accounts. Also, every day, more than 5 million users use Payoneer to transfer and receive money worldwide. A Payoneer account user can also transfer and receive money on several global marketplaces, including Google, Amazon, Adobe, eBay, Airbnb, Rakuten, Upwork, and Fiverr. So if you are a freelancer or a businessman, you need to [buy a fully verified Payoneer account](https://usa5stareviews.com/product/buy-verified-payoneer-account/) with 100% secure documents. Buy a Verified Payoneer Account with all of the features If you buy a verified Payoneer account from the usa5stareviews agency, they will give you many features that other agencies don't offer for transacting with proper and trusted institutions. Purchasing a verified Payoneer account from us will provide 100% valid documents with the account. ➤ Guaranteed 100% Customer Satisfaction ➤ Verified Payoneer Accounts (Non-Drop) ➤ Active and Verified Payoneer Accounts ➤ Highly Affordable Prices ➤ Top-Quality Service ➤ 100% Money-Back Guarantee ➤ 24/7 Customer Support ➤ Extra Bonuses with Every Service ➤ Advance Payment Required for Purchase Know about Payoneer before buying a Payoneer account Payoneer is an easy way to make worldwide transactions. Payoneer was first launched in 2005 in New York, and then gradually spread to people all over the world. Currently, there are more than five million regular Payoneer users. However, the real secret to Payoneer's popularity is its safety and convenience. Anyone can manage this account easily with Payoneer's mobile and desktop apps. Also, transactions through Payoneer cost users less money. 
Other international transaction platforms charge much more. So, if you want to simplify your regular transactions, you can purchase a verified account from us. How does Payoneer work in 2024? Payoneer primarily facilitates the exchange of digital currency. You can process payments and receipts through Payoneer using various methods. Additionally, Payoneer facilitates easy transactions between individuals within the country and those abroad. Payoneer supports several currencies, including the US dollar, British pound, euro, Australian dollar, Canadian dollar, Singapore dollar, Hong Kong dollar, UAE dirham, Chinese yuan, and Japanese yen. A basic Payoneer account operates through an internet connection. Easily connect your Payoneer account with freelance marketplaces Payoneer accounts can be easily connected to freelancing marketplaces, and money transactions can be done very easily. Upwork and Fiverr are two of the most popular freelance marketplaces today. You can effortlessly link your Payoneer account to these global marketplaces. Follow the processes mentioned below. Connect your Fiverr account with your Payoneer account. If you want to add your Payoneer account to your Fiverr account, you need to log in to Fiverr first. Click on the settings option to go to the payment and billing option. There are various options, like PayPal, Payoneer, and Binance. Since you need to add a Payoneer account, you must click on the corresponding option. Then Fiverr will ask you for some Payoneer account information, such as an e-mail address and password. Your Payoneer account will be linked to your Fiverr account only if you provide the required details. Then you can safely transact. Connect your Upwork account with your Payoneer account. Adding a Payoneer account to Upwork is similar to Fiverr. First, you need to log in to your account and go to your profile settings. Then click on the payment receipt option from there, and you will find the option to add an account. 
You want to link the Payoneer account with the Upwork account, so just provide the correct information. Upwork will inform you by email. Then you can freely make transactions from your Payoneer account. Note: We also provide [Verified Cash app account](https://usa5stareviews.com/product/buy-verified-cash-app-account/). You can easily buy these products from us. Payoneer is important for every freelancer Buy Verified Payoneer Account A Payoneer account will make your freelancing life a lot easier. Since freelancing is an online-based job, we perform this work from our homes for both domestic and international clients. So freelancers need a trusted and secure platform to transact dollars. After the release of Payoneer, almost all freelancers have chosen this platform as the right means of transaction. So having a Payoneer account plays a very important role for a freelancer. Also, many people cannot open a verified Payoneer account due to a lack of correct information. Obtaining payment from a client then presents numerous challenges, and clients often fail to pay you. So if you are a freelancer, definitely buy a verified Payoneer account. If you take a Payoneer account from us, we can give you a fully verified account with 100% correct documents. The benefits of using a Payoneer account Using a Payoneer account in this digital age will give you many benefits and advantages. Some of the benefits are: A simple and fast transaction Payoneer is ahead of other dollar transaction platforms. This is because Payoneer can deliver money very quickly. It is also very easy to send money through Payoneer. As a result, most people use Payoneer accounts. Payoneer account Security and Support It is the only transaction platform in the world that provides excellent support to its users. They also ensure the safety of users' accounts. If at any time a Payoneer account is disturbed or hacked, contact the support team members, and they will provide a quick and easy solution. 
As a Payoneer user, I can say that using a Payoneer account has given me many benefits that are very difficult to get from other platforms. A verified Payoneer account Transaction Limit A verified Payoneer account can make transactions ranging from 0.5 cents to USD 10,000. This may change in the future. If you need to make a larger transaction, you can contact Payoneer support members, and they will advise you on the best way. Which agency will provide 100% verified Payoneer accounts? Since Payoneer accounts are an easy medium for all users, everyone wants to buy a verified account. But currently, many third-party agencies deceive customers by offering accounts for less money, and most people lose their money when they fall into this fraud trap. There are very few agencies that can provide a Payoneer account with accurate and 100% complete documentation. Usa5stareviews is one of them. They have been providing value to trusting customers for a long time. Why should you choose the USA5StarReviews agency? You can buy your desired verified Payoneer account from the USA5StarReviews agency. The USA5StarReviews agency has been working reliably for a long time and has provided verified Payoneer accounts with 100% unique information to 200+ customers. Buy a fully verified account instead of buying a normal account from other third-party agencies, and make your transactions and freelancing work easier. A simple process to create a Payoneer account and verification Step-by-Step Process to Create a Payoneer Account If you would like to create a Payoneer account, first you have to visit the Payoneer website. After that, sign up and register, select the type of account you want (Freelancer, Online Seller, Affiliate, etc.), and enter personal information, like your username, correct email address, and date of birth. Create a password and security question for account security. Contact Details: Provide your contact information, such as your address and phone number. 
Security Details: Enter identification information, such as your national ID, passport, or driver's license. Bank Account Information: Provide the details of the bank account where you want to transfer your earnings. Mind you, you have to provide all valid information. Account Verification Process Email Verification: After submitting your application, Payoneer will send a verification email to your registered email address. Click on the verification link in the email to confirm your email address. Identity Verification: After logging in to your Payoneer account, go to the settings option and click verification; Payoneer will request your government ID, so upload a copy of your passport, national ID card, or driver's license. Frequently asked questions for buying a verified Payoneer account I do not have a bank account. Can I still use Payoneer? Yes, you can sign up for Payoneer without linking a bank account at first. But if you want to make a transaction, you will need to link a bank account. Can I send money from Payoneer without being checked? No, you need to go through the verification process before you can move money from your Payoneer account to another account. How can I tell if my Payoneer account is verified? After logging in to your Payoneer account, go to "Settings" and click on "Verification Center"; there you'll be able to check. How do I get Payoneer to accept my account? To get your Payoneer account approved, you need to send in all the necessary information, like your ID, phone number, and bank information. Then they will approve your verification. How do I verify the address of my Payoneer account? You will need to upload a bill or bank statement that has your name and address on it to the "Verification Center." Payoneer asks for proof of name. How long does that take? Verification can take a few business days, but in general it does not take long. 
Conclusion We sell a [fully verified Payoneer account](https://usa5stareviews.com/product/buy-verified-payoneer-account/). If you are a freelancer or a businessman, a Payoneer account will make your day easier. You can transfer money to anyone in a short time. Regardless of the source of your Payoneer account, ensure it contains accurate and high-quality information. To buy a verified Payoneer account from us, click here. If you need more information: Skype: usa5stareviews live:.cid.5b766f5303a04ec8 Telegram: @usa5stareview WhatsApp: *************** Email: usa5stareviews@gmail.com Web: https://usa5stareviews.com/product/buy-verified-payoneer-account/
works_usa_6ee34a6e505ce13
1,880,566
Need help with GPO
I have developed a lab with one domain controller and a domain joined machine. Now I have implemented...
0
2024-06-07T15:57:00
https://dev.to/kalyan333/need-help-with-gpo-3ch2
computerscience
I have built a lab with one domain controller and one domain-joined machine. I have implemented a Group Policy for folder redirection that redirects the Desktop, Downloads, and Documents folders to an SMB share on the DC. The GPO works for the Desktop folder but not for Downloads and Documents. I have run the gpupdate /force command multiple times and even rebooted the machine quite a few times, yet redirection applies only to Desktop and not to the other folders. What should I do to troubleshoot this?
kalyan333
1,880,565
What are Gas Fees on Blockchain?
In the blockchain technology, "gas fees" are a fundamental element. They function similarly to fuel...
0
2024-06-07T15:56:47
https://dev.to/metla/what-are-gas-fees-on-blockchain-444o
cryptocurrency, blockchain, ethereum, web3
In blockchain technology, "gas fees" are a fundamental element. They function similarly to fuel for a vehicle, necessary for processing transactions and executing smart contracts on platforms like Ethereum. ### Understanding Gas Fees **[Gas fees](https://metla.com/blog/what-are-gas-fees)** are payments users make to cover the computational energy needed to process and validate transactions on a blockchain. Whether transferring cryptocurrencies or executing intricate contracts, each transaction demands computational effort, priced in gas units. ### The Necessity of Gas Fees **Network Security:** Gas fees motivate validators (miners or stakers) to utilize their computational power to secure and maintain the blockchain network. **Prevent Network Spam:** By imposing a cost on transactions, gas fees discourage users from flooding the network with excessive or malicious transactions. ### How Gas Fees Function Blockchain transactions require specific computational work, measured in gas units. The total gas cost of a transaction is influenced by two main factors: - **Gas Limit:** The maximum amount of gas a user is willing to spend on a transaction. - **Gas Price:** The amount of ether a user is prepared to pay per gas unit, typically measured in "gwei." ### Calculating Gas Fees The formula to determine the total gas fee is straightforward: **_Total Gas Fee = (Base Fee + Priority Fee) × Gas Used_** - **Base Fee:** A compulsory fee that adjusts dynamically with network congestion to ensure smooth operation. - **Priority Fee:** An optional tip users can offer to validators to prioritize their transactions. ### Factors Influencing Gas Fees - **Network Traffic:** Similar to rush hour traffic, higher blockchain activity raises gas prices. - **Transaction Complexity:** More complex transactions need more computational power, thus consuming more gas. - **Blockchain Protocol Design:** Different blockchains have distinct fee structures affecting gas fee applications. 
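The calculation above can be sketched in a few lines of Python. This is only an illustrative calculation — the fee values are made up, while 21,000 is the gas consumed by a plain ETH transfer:

```python
# 1 ETH = 1,000,000,000 gwei; gas prices are quoted in gwei
GWEI_PER_ETH = 1_000_000_000

def total_gas_fee_eth(base_fee_gwei: float, priority_fee_gwei: float, gas_used: int) -> float:
    """Total Gas Fee = (Base Fee + Priority Fee) x Gas Used, converted to ETH."""
    total_gwei = (base_fee_gwei + priority_fee_gwei) * gas_used
    return total_gwei / GWEI_PER_ETH

# A simple ETH transfer uses 21,000 gas; the fee values here are examples
fee = total_gas_fee_eth(base_fee_gwei=20, priority_fee_gwei=2, gas_used=21_000)
print(fee)  # 0.000462 (ETH)
```

Wallets typically perform this conversion for you, displaying the estimated fee in both gwei and ETH before you confirm a transaction.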
### Managing Gas Fees Users can implement several strategies to control their gas fee expenses: - **Transact During Off-Peak Hours:** Conducting transactions during periods of low network activity can reduce fees. - **Optimize Gas Settings:** Advanced users can adjust gas price and limit settings to balance cost and transaction speed. ### Case Study: Ethereum Gas Fees Ethereum, a leading blockchain platform, handles not only cryptocurrency transfers but also a variety of decentralized applications and smart contracts. This versatility leads to variable gas fees based on transaction complexity and volume. ### Comparing Ethereum with Polygon Ethereum’s gas fees are generally higher than those on Polygon, a Layer 2 scaling solution. Polygon boosts transaction speed and affordability by processing transactions off Ethereum’s main chain, reducing load and cost on Ethereum. ### The Challenge of High Gas Fees While necessary for network functionality, high gas fees present certain obstacles: - **Cost Barrier:** High fees can deter average users, especially during peak periods. - **Limited Accessibility:** High gas fees can hinder broader adoption of blockchain technology. ### Innovations to Reduce Gas Fees The blockchain community continuously explores innovations like Layer 2 scaling solutions to lower high gas fees. These technologies shift transactions off the main blockchain, reducing costs and enhancing scalability. ### Conclusion Gas fees are essential to blockchain architecture, compensating validators and securing the network. Despite the challenges they pose, ongoing innovations promise to lower these costs and improve user experiences across blockchain platforms. Learn more about blockchain and Crypto - Metla Blog {% embed https://metla.com/blog %}
ipratikmali
1,879,561
API Security: Key Protocols
API security is a crucial component of the development process, due to the sensitive nature of the...
0
2024-06-07T15:49:14
https://dev.to/apidna/api-security-key-protocols-19fl
api, security, webdev, programming
API security is a crucial component of the development process, due to the sensitive nature of the information transferred between applications. Personal information such as contact details, home address, date of birth, passwords, and bank details are all at risk without sufficient API security protocols in place. In one of our [previous articles](https://apidna.ai/understanding-the-fundamentals-of-api-security/), we focused on the fundamentals of API security which will provide you with the necessary context for some of the key terms we will discuss throughout this article. Here we go through some of the essential protocols required for robust API security. However, all of the protocols we discuss today are completely useless without providing the employees of your organisation with sufficient cyber security training. We recommend free online courses such as [this one](https://www.ihasco.co.uk/courses/detail/cyber-security-awareness-training) to get started, but there are many alternative options out there that may be more suited to your organisation’s needs. API security protocols are structured frameworks designed to protect the data and communication processes of APIs. They cover areas such as authentication, authorization, encryption, and integrity of data transferred through APIs. Before we further discuss API security protocols, we’re happy to announce the launch of our new API integration platform. At APIDNA we have been working tirelessly to produce a truly industry changing piece of software, utilising autonomous agents to simplify API integrations. So [click here](https://apidna.ai/) if you’re interested in trying out our platform today! ## OAuth 2.0 For API Security OAuth (which stands for Open Authorization) is the industry standard for online authorization. It is commonly used as a way to grant websites or applications limited access to user information without exposing passwords. 
OAuth 2.0 works by issuing access and refresh tokens to authorise and authenticate API requests: - **Access Tokens:** These are short-lived tokens that grant access to resources. They are typically valid for a limited time, ensuring security. - **Refresh Tokens:** These tokens are long-lived and used to obtain new access tokens once the old ones expire, providing continued access without re-authentication. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ijmng4un66dlw5gxv8my.jpg) Authorization grant types and flows are mechanisms defined by OAuth 2.0 to handle the process of obtaining access tokens. The grant types and flows are designed to cater to different scenarios, providing flexibility and security in various use cases: - **Authorization Code Flow:** Used for server-side applications. It involves redirecting the user to the authorization server to obtain an authorization code, which is then exchanged for an access token. - **Implicit Flow:** Simplified for client-side applications where the access token is issued directly. - **Resource Owner Password Credentials Flow:** Used when the user trusts the client and can share their credentials directly. - **Client Credentials Flow:** Suitable for machine-to-machine communication where no user is involved. ## JSON Web Tokens (JWT) JSON Web Tokens (JWT) are a compact and self-contained way of transmitting information between parties as a JSON object. They are commonly used for secure information exchange and authentication mechanisms in web applications. A JWT consists of three parts: - **Header:** The header typically consists of two parts: the type of token (JWT) and the signing algorithm being used, such as HMAC SHA256 or RSA. - **Payload:** The payload contains the claims, which are statements about an entity (typically, the user) and additional metadata. There are three types of claims: registered, public, and private. 
- **Signature:** To create the signature part, you have to take the encoded header, the encoded payload, a secret, the algorithm specified in the header, and sign that. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/csagarlzlkqw3k4baedp.jpg) JWTs are signed to ensure the token hasn’t been altered, maintaining integrity. If the data changes, the signature will no longer match, and the token will be invalid. JWTs can be used to authenticate users by verifying their identity. Once a user is logged in, every subsequent request will include the JWT. Since JWTs can contain user roles and permissions, they are useful for controlling access to resources based on user privileges. JWTs are self-contained, meaning all the information required to verify the token is included within it. This eliminates the need to store session information on the server. Due to their JSON format, JWTs are compact and can be easily transmitted via URLs, POST parameters, or inside HTTP headers. JWTs are language-agnostic, making them interoperable and suitable for various programming environments and frameworks. ## Transport Layer Security (TLS) Transport Layer Security (TLS) is a cryptographic protocol designed to provide secure communication over a computer network. TLS evolved from Secure Sockets Layer (SSL) and is the more secure and efficient version used today. It encrypts the data transmitted between clients and servers, ensuring that any intercepted data remains unreadable to unauthorised parties. TLS also ensures data integrity by making sure that the data sent and received is not tampered with during transmission, protecting against data corruption and man-in-the-middle attacks. It provides a mechanism to authenticate the server (and optionally the client) using certificates, ensuring that the parties involved in the communication are who they claim to be. 
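The header.payload.signature structure of a JWT described above can be made concrete with a minimal HS256 signing sketch using only the Python standard library (the secret and claims are invented for illustration — real applications should use a maintained library such as PyJWT):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url encoding with the trailing '=' padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    # The signature covers the encoded header and payload, so any tampering invalidates it
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

token = make_jwt({"sub": "user-123", "role": "admin"}, b"demo-secret")
print(token.count("."))  # 2 -> the token has exactly three dot-separated parts
```

Note that signing proves integrity, not secrecy: the payload is only base64url-encoded, so anyone holding the token can read the claims.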
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/32wzyjd2mjsnkb3heyxr.jpg) Let’s break down how TLS works, step-by-step: - **Negotiation:** The client and server agree on the version of TLS to use and select mutually supported cipher suites. - **Authentication:** The server presents its digital certificate to the client, proving its identity. The client may also provide a certificate to authenticate itself to the server. - **Key Exchange:** The client and server use asymmetric encryption to securely exchange keys, which are then used to establish a symmetric session key. - **Session Establishment:** Once the keys are exchanged, a secure session is established, and further communication is encrypted using symmetric encryption methods. ## API Key Management API keys are unique identifiers used to authenticate requests to an API and ensure that the requests are coming from authorised clients. Management of these API keys is essential for securing APIs by controlling access and preventing unauthorised use. Proper management ensures that only authorised clients can access the API, thereby protecting sensitive data and services. API keys allow for detailed monitoring and tracking of API usage. This enables organisations to analyse usage patterns, detect anomalies, and identify potential security threats in real time. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uw5s5xuhcltbqa1cw1pp.jpg) Here’s how API key management works: - **Key Generation:** API keys are generated using a secure algorithm. Each key is unique and should be unpredictable to prevent unauthorised generation. - **Key Distribution:** Once generated, API keys are securely distributed to authorised clients. This can be done through a secure channel to ensure that the keys are not intercepted during transmission. - **Usage:** API keys are included in the requests sent to the API. 
The server validates the key before processing the request, ensuring that it is coming from an authorised source. - **Revocation:** If an API key is compromised or no longer needed, it can be revoked. This means the key is invalidated, and any request using that key will be denied access to the API. - **Rotation:** Regularly rotating API keys is a best practice to enhance security. This involves generating new keys and updating clients to use the new keys while invalidating the old ones. ## Rate Limiting and Throttling Rate limiting restricts the number of API requests a user or client can make within a specified timeframe. This helps prevent malicious activities like DDoS attacks and brute force attempts. Throttling ensures that resources are distributed fairly among users, preventing a single user from monopolising the API’s capacity. By controlling the flow of incoming requests, rate limiting and throttling help maintain the stability and performance of backend systems, preventing crashes and slowdowns. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uuaa85e46v4b7ya2fxha.jpg) Here’s how rate limiting works: - **Fixed Window:** In a fixed window rate limiting approach, the number of requests allowed is capped within a fixed time interval, such as 1000 requests per hour. Once the limit is reached, any additional requests within that interval are rejected until the next interval begins. - **Sliding Window:** The sliding window method is more dynamic, allowing a set number of requests within a rolling time window. This approach provides a more balanced distribution of requests over time. - **Token Bucket:** The token bucket algorithm allows for bursts of requests up to a certain limit while maintaining an average rate. Tokens are added to the bucket at a steady rate, and each request consumes a token. If the bucket is empty, the request is denied or queued. 
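The token bucket variant just described is compact enough to sketch directly (the rate and capacity values are arbitrary examples):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests while sustaining `rate` requests/sec."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill at a steady rate, never exceeding the bucket's capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # each request consumes one token
            return True
        return False                  # bucket empty: deny or queue the request

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
print(results)  # the initial burst is allowed; later requests are throttled
```

Because refill depends only on elapsed time, no background timer is needed; real API gateways usually keep this per-client state in a shared store such as Redis rather than in process memory.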
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qctv6j2r7693mqpiwt9y.jpg) Then here’s how throttling works: - **Request Queuing:** When the number of requests exceeds the allowed rate, additional requests are placed in a queue. They are processed once the rate falls below the threshold, ensuring the backend system isn’t overwhelmed. - **Response Delay:** Throttling can also involve delaying responses to requests that exceed the rate limit, gradually serving them to maintain system stability. ## Conclusion Securing APIs is crucial for robust and reliable applications. Key protocols such as OAuth 2.0, JSON Web Tokens (JWT), Transport Layer Security (TLS), API Key Management, and Rate Limiting and Throttling play essential roles in optimising API security. These protocols ensure secure authentication, data encryption, access control, and fair usage, protecting APIs from various threats. To enhance your understanding and implementation of these protocols, explore the further reading resources provided below and improve your API security strategies. ## Further Reading [Best practices for REST API security: Authentication and authorization – StackOverflow](https://stackoverflow.blog/2021/10/06/best-practices-for-authentication-and-authorization-for-rest-apis/) [What is OAuth 2.0? – Okta](https://auth0.com/intro-to-iam/what-is-oauth-2) [Introduction to JSON Web Tokens – Okta](https://jwt.io/introduction) [What is TLS (Transport Layer Security)? – CloudFlare](https://www.cloudflare.com/en-gb/learning/ssl/transport-layer-security-tls/) [Creating and managing API keys – Google](https://cloud.google.com/api-keys/docs/create-manage-api-keys) [API Rate Limiting vs. API Throttling: How Are They Different? – NordicAPIs](https://nordicapis.com/api-rate-limiting-vs-api-throttling-how-are-they-different/)
itsrorymurphy
1,880,562
How to get started with Shopify theme development
In this guide, we’ll walk you through how to get started with Shopify theme development. ...
0
2024-06-07T15:48:42
https://gadget.dev/blog/setting-up-a-local-dev-environment-for-building-shopify-themes
shopify, webdev, beginners, tutorial
In this guide, we’ll walk you through how to get started with Shopify theme development. ## Requirements Before you get started, you need the following: - The URL of a store you want to work on. We recommend signing up for a Shopify Partner account and creating a new development store when you first get started. - Permission to work on themes for the store you are working on. If you are working on a merchant’s store, you need to make sure that you have a collaborator or staff account that has been granted the **Manage themes** or **Themes** permission. If you’re working on a Partner account and a development store, you don’t need to worry about this step at this time. Anytime a buyer is browsing through products or services offered by a Shopify merchant, they are interacting with the store’s theme. Themes power Shopify storefronts, and unique and attractive themes help drive higher traffic and conversion rates. If you’re looking to start building your own themes, you have two options. You can either build custom themes for merchants, or build themes that are listed for sale on [Shopify’s Theme Store](https://themes.shopify.com/themes). In either case, it’s easy to get started, as setting up a local development environment for building Shopify themes takes less than 10 minutes and we’re here to walk you through the steps. {% embed https://www.youtube.com/watch?v=MY-KzAebqqc %} ## Shopify CLI or Theme Kit? Shopify has two CLI tools that can be used to develop themes: the Shopify CLI and the Theme Kit. We recommend using the Shopify CLI, as this is Shopify’s newest tool for building themes and apps. They’ve released a number of improvements to it in recent months, and Shopify recommends using it for [Online Store 2.0](https://www.shopify.com/ca/partners/blog/shopify-online-store) theme development. For vintage themes, Theme Kit can be installed and used instead. 
Shopify has a [guide for getting set up with the CLI](https://shopify.dev/docs/storefronts/themes/getting-started/create). This guide will cover those steps, and sprinkle in some extra useful info about building themes using the CLI. If you want to use Theme Kit to work on a vintage theme, Shopify has a [separate getting started guide](https://shopify.dev/docs/storefronts/themes/tools/theme-kit/getting-started). ## What about theme app extensions? Theme app extensions are a way to inject small bits of custom functionality into an existing theme. For example, if you want to add a [storefront chatbot](https://www.youtube.com/watch?v=Y_ivi2cW9bk) or a product recommendation quiz to a theme, you could do so using a theme app extension. Theme app extensions are also built using the Shopify CLI, but are released as part of a public or custom Shopify app instead of as part of a theme. Merchants can then use Shopify’s theme editor to place the theme app extension somewhere in the storefront. This guide will not cover theme app extensions. If you are looking to get started building storefront apps using theme app extensions, you can check out Gadget’s [product quiz tutorial](https://docs.gadget.dev/guides/tutorials/product-recommendation-quiz-app). ## Step 0: Installing dependencies and the Shopify CLI Some [required dependencies](https://shopify.dev/docs/storefronts/themes/getting-started/create#installation-requirements-for-shopify-cli) need to be installed before you get started with the Shopify CLI. Shopify lists all dependencies on their site: Ruby, Node, and git are required for all operating systems. Things are dead simple on a Mac: two Homebrew commands will install the Shopify CLI as well as any of the required dependencies.

```
brew tap shopify/shopify
brew install shopify-cli
```

Windows and Linux developers have some additional required dependencies before the Shopify CLI can be installed globally using `npm`.
```
npm install -g @shopify/cli @shopify/theme
```

Once the Shopify CLI is installed, you can verify that it was installed correctly by running `shopify` in the terminal, which prints out the current version as well as the available topics and commands. Notice that one of the topics is `theme`; this is what you will be using to do theme development. If you want to see all the available theme commands, you can do this using `shopify theme`. Once the CLI is installed, you can start by cloning an existing theme. ## Step 1: Cloning a theme Now you need to pull down some theme code to your local machine. Navigate to the directory you want to clone your theme in and run `shopify theme init`. You will be prompted to name your new theme, and then Shopify’s Dawn theme will be cloned into the folder. If you want to clone a different theme, you can use a `--clone-url` flag to point to a different repository. For example, `shopify theme init --clone-url https://github.com/Shopify/dawn.git` (this also clones Dawn, but you get the idea 😉). Now that you have a base theme, you can start modifying the template! ## Step 2: Start building With your theme code at the ready, you can start your development server and make some changes to the custom theme. To do this, run `shopify theme dev --store <my-store-name>` and your local theme code will be synced to the passed-in store. Once the sync is complete, you can preview your theme at `http://127.0.0.1:9202`. If you encounter a sign-in form when you click this link, you’ll need to grab the password from the store admin. In the admin, click on **Online Store**, then **Preferences**. The password will be on the preferences page. Copy and paste it into the login form, and you should see a preview of your theme on the store! Hot module reloading works out of the box, so any CSS or section changes will be visible immediately. Note that this preview is only available in Chrome, according to Shopify’s docs!
I find that making small changes in `sections/image-banner.liquid` is a great way to start testing changes when working on the default Dawn theme. This will make changes that are reflected on the theme’s home page. > **Note:** If you stop your development server and want to restart it on the same store, you can just run `shopify theme dev` without the `--store` flag. There are some other neat [commands](https://shopify.dev/docs/api/shopify-cli/theme) included in the CLI that help with theme development. One notable example: running `shopify theme check` runs [Theme Check](https://shopify.dev/docs/storefronts/themes/tools/theme-check/configuration), which is a linter for the Liquid and JSON files in your theme. Theme Check will identify any errors in your theme, such as syntax errors and unused variables and snippets, and even point out potential performance issues. Running Theme Check to catch these errors before pushing or publishing your theme is probably a good idea! When building themes in your local editor, it is also handy to install extensions to help write Liquid. If you are working in VSCode, [Shopify has a Liquid extension](https://marketplace.visualstudio.com/items?itemName=Shopify.theme-check-vscode) that helps with syntax highlighting, formatting, and autocompletion. Bonus: this extension also works for theme app extensions. ## Step 2.5: Learn Liquid This can’t be covered in a couple of paragraphs in a blog post. In fact, it can’t be covered by a single standalone blog post. At least, a blog post of reasonable length. Shopify’s themes are built with [Liquid](https://shopify.dev/docs/api/liquid), which is their templating language. Shopify has plenty of Liquid documentation, and there are entire courses, including one [offered by Shopify](https://academy.shopify.com/introduction-to-shopify-liquid), that focus on teaching developers how to build themes. Because of this, I’m not going to cover any Liquid details here.
But if you are getting into theme development, you will want to make sure you have a decent grasp of Liquid and how it can be used to build themes. ## Step 3: Pushing and publishing themes Once you have made some changes to your theme, you can **push** it up to your store. You can also use the CLI to **publish** the theme so that it is the live version that buyers will see when they visit the store. To push your theme to a store, you can run the command `shopify theme push --unpublished`. Using the `--unpublished` flag will create a new theme for the store. You will be prompted to give your theme a name, and this name will appear in the admin as the name of the new theme. You only need to include `--unpublished` the first time you push a new theme. Any additional changes can be pushed without the flag included. Once your theme is pushed, you can see it available in the store admin under **Online Store → Themes**. Merchants can customize the theme using the available theme options by clicking on the **Customize** button. Clicking on the ellipsis (...) button and then selecting **Edit code** allows you to see all of your theme’s code files, which is where you can make unique customizations for a store, if desired. When a theme is published, it becomes the live theme that buyers will see when they visit the store. There are two ways to publish a theme: clicking the **Publish** button in the admin or running `shopify theme publish` in the CLI. Running the command in the CLI prompts you to select the theme you want to publish, which makes it easy to quickly change the theme that’s live on the store. This was a quick, end-to-end overview of setting up a Shopify theme development environment. If you have any questions about setting up your environment (or building custom Shopify apps) join [Gadget’s developer Discord](https://ggt.link/discord) and ask.
gadget
1,880,560
Connection Pooling in Spring Boot
connection pooling is a method specifically designed to manage and reuse database connections,...
0
2024-06-07T15:46:59
https://dev.to/lakmal_asela_8be4eb30d9db/connection-pooling-in-spring-boot-4k21
springboot, database, connectionpooling
Connection pooling is a method designed to manage and reuse database connections, improving the performance of applications that interact with databases. A database connection acts as a bridge between a Java application and a database server. Establishing and terminating these connections is expensive, which makes efficient management crucial for optimal application performance. **Connection pooling** involves maintaining a set of pre-established database connections that can be reused. This reduces the overhead of opening a new connection for each database operation, improving application efficiency and concurrency control. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ia6olrnk8h38eocmel17.png) Without connection pooling, every database interaction would require establishing a new connection and terminating it afterward. This repetitive work adds overhead and can drain resources. Connection pooling resolves these issues by reusing existing connections, resulting in improved efficiency, concurrency, and overall system stability. Spring Boot ships with HikariCP as its default connection pool implementation. HikariCP can be configured in your application.properties file:

```
# HikariCP settings
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=5
spring.datasource.hikari.idle-timeout=600000
spring.datasource.hikari.connection-timeout=30000
spring.datasource.hikari.max-lifetime=1800000
spring.datasource.hikari.leak-detection-threshold=2000
spring.datasource.hikari.pool-name=MyHikariCP
```

**Summary** Connection pooling is a vital mechanism for enhancing the performance and scalability of applications that interact with databases. By maintaining a pool of reusable database connections, connection pooling minimizes the overhead associated with establishing and tearing down connections for each database operation.
This approach not only optimizes resource utilization but also improves application responsiveness and stability.
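HikariCP itself is a Java library, but the core mechanic it implements is language-agnostic. As a minimal illustration (with a trivial stand-in class instead of a real database connection, so this is a concept sketch rather than anything Spring-specific), a pool pre-opens a fixed number of connections, hands them out on demand, and takes them back for reuse:

```python
import queue

class FakeConnection:
    """Stand-in for a real database connection (expensive to create)."""
    opened = 0

    def __init__(self):
        FakeConnection.opened += 1

class ConnectionPool:
    def __init__(self, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):                    # pre-establish connections up front
            self._pool.put(FakeConnection())

    def acquire(self, timeout=None):
        # Blocks if every connection is currently in use -- this is the
        # concurrency control a pool gives you for free.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)                     # return the connection for reuse

pool = ConnectionPool(size=3)
for _ in range(100):                             # 100 "queries" reuse the same 3 connections
    conn = pool.acquire()
    pool.release(conn)
print(FakeConnection.opened)  # 3 -- no new connections were opened after startup
```

The settings in the properties file above map onto the same ideas: `maximum-pool-size` is the queue size here, and `connection-timeout` corresponds to how long `acquire` will block before giving up.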
lakmal_asela_8be4eb30d9db
1,880,556
Unlocking Psychology with Large Language Models: Receptiviti Augmented Generation
As an increasing number of Receptiviti's customers are working with Large Language Models (LLMs),...
0
2024-06-07T15:44:27
https://dev.to/receptiviti/unlocking-psychology-with-large-language-models-receptiviti-augmented-generation-6lo
llm, rag, psychology, nlp
As an increasing number of Receptiviti's customers are working with Large Language Models (LLMs), we've been seeing rising interest in integrating our technology to enhance LLMs' understanding of their users' psychology. Recognizing that an understanding of human psychology can improve Large Language Model (LLM) responses, reduce hallucinations, and increase overall LLM safety, we've prepared an overview explaining how LLM developers can leverage Receptiviti in an API-based retrieval system. **Large Language Models fall short in comprehending the nuances of psychology** While Large Language Models have an impressive capacity for language generation and understanding, their comprehension of human psychology remains simplistic and far more limited than their language skills would lead you to believe. While LLMs can imitate empathy by responding to explicit emotional cues within a prompt, they can't pick up on important psychological and cognitive signals in human language when cues are less obvious. The absence of genuine psychological understanding means that LLM responses may lack the depth and nuance that true psychological insight would provide. This limitation hinders the effectiveness of LLMs in scenarios that require even moderately advanced psychological comprehension, such as sensitive customer support interactions, professional development and coaching, therapeutic interactions, or counselling. LLMs’ lack of psychological insight can lead them to overgeneralize, resulting in generic, boilerplate responses that aren’t sensitive to psychological cues in the person’s writing. At best, that leads to unengaged, canned-sounding responses. At worst, a user may receive an incorrect or biased response that causes them serious harm. Retrieval Augmented Generation (RAG) has emerged as an approach for enhancing an LLM by integrating external knowledge sources to surpass what is possible with its training data.
RAG involves combining a generative model, capable of creating new content like GPT, with a real-time retrieval model that selects and incorporates relevant information from a predefined set of documents or knowledge sources, enabling the generation process to be guided by external context for improved relevance and coherence. This approach enhances the output by leveraging existing knowledge and context, effectively marrying the benefits of both generative and retrieval-based models in natural language processing tasks. If you'd like to delve into RAGs in more detail, [here's an excellent overview from Stackoverflow](https://stackoverflow.blog/2023/10/18/retrieval-augmented-generation-keeping-llms-relevant-and-current/). The Receptiviti-powered RAG process can unfold as follows: First, a query or context can be presented to the retrieval model, which in this case is [Receptiviti’s language-based psychology insights API](https://www.receptiviti.com/api). The API scores the content on one or more of 200 psychological, cognitive, and emotional dimensions, which are pre-selected according to the specific use case. The resulting scores can be used to refine the initial query with additional psychological, cognitive, and emotional information about the user. These insights can then be fed into a generative model, which uses the psychological context to create highly relevant output that is tailored to the psychology of the user. This iterative interplay between retrieval and generation can enhance the quality and relevance of the final output, offering a more synergistic approach that combines the common knowledge and conversational skills of generative models with the grounded and validated psychological insights provided by the Receptiviti API. LLMs convincingly imitate empathy when prompts contain overt emotion. 
Generating a contextually relevant and emotionally appropriate response depends on the LLM's ability to not only comprehend the literal meaning of what it's being asked but also to understand contextual nuances embedded in the language. In emotionally charged scenarios, understanding how a question is asked becomes critical. The LLM must recognize and empathize with the user's emotion and integrate mimicry to craft a response that not only ensures factual accuracy but also mirrors an appropriate emotional tone. Consider a user who is expressing frustration in an emotionally charged scenario: **User Input:** `"I feel overwhelmed by work, and nothing seems to be going right. It's just one of those days."` In this example, the non-Receptiviti powered LLM recognizes the overt emotional tone and distressing situation apparent from the user's statement, acknowledges the feelings of overwhelm, and responds with empathy and support. The language used reflects an understanding of the overt emotion expressed within the prompt, going beyond providing fact-based information to address the user's emotional state through the emotional language contained in the prompt: **LLM Response:** `"I'm truly sorry to hear that you're feeling overwhelmed. It's completely normal to have challenging days. Remember, it's okay to take a step back, maybe take a short break to recharge. You're not alone in feeling this way. If there's anything specific on your mind or if you just need someone to talk to, I'm here for support."` **Mimicry is not psychological understanding** The inherent limitations of LLMs become apparent in scenarios where subtle emotional or psychological cues require a nuanced interpretation. In applications that require basic responses to prompts that contain overt emotions, superficially empathetic LLM responses may be sufficient. 
However, when emotional or psychological signals are less obvious, and when a more robust understanding of user psychology is needed, LLMs are often unable to make these inferences, even when their training data includes abundant information on the psychological phenomena in question. For instance, despite the abundance of literature on narcissism included in the training data of most LLMs, these systems’ knowledge of how narcissism manifests in language is often based on folk psychology or commonly held incorrect beliefs and associations in the training set. In test results, GPT typically equates narcissism with the use of high rates of first-person singular pronouns. However, most psychological research indicates that narcissists are no more likely to use first-person singular pronouns than people who are not narcissists. In fact, research shows that narcissists typically use higher rates of second-person pronouns and swear words, and use less language associated with anxiety, fear, tentativeness, and sensory processes. High rates of first-person singular pronouns, on the other hand, are positively correlated with very different psychological phenomena like depressive symptoms and neuroticism. Such gaps in understanding often lead LLMs to hallucinate, which can negatively impact the user experience or lead to undesirable or potentially dangerous results. **Infusing psychological understanding into Large Language Models with Receptiviti-powered Retrieval Augmented Generation** Clearly, human psychology is shaped by factors that are challenging to encapsulate in an LLM training dataset. A more sophisticated and accurate comprehension of psychological phenomena requires the LLM to decipher implicit cues that infer the psychological and cognitive processes embedded in prompt language, rather than mimicking obvious cues. 
Retrieval Augmented Generation (RAG) can help by augmenting an LLM with external knowledge sources, equipping the model with capabilities that exceed its initial training data. A RAG that utilizes Receptiviti’s language-based psychology insights API can further enhance an LLM’s ability to infer a wide variety of psychological insights about a user from implicit cues contained within their prompt language, and in doing so, enable the LLM to generate responses that consider a user’s psychology and cognitive processes. **Assessing an individual’s thinking style from their Large Language Model prompts** For example, a [Receptiviti API](https://www.receptiviti.com/api)-powered RAG can offer an LLM an enhanced understanding of how users process information and solve problems, thereby helping LLMs to adapt their responses to align with users’ mental frameworks and problem-solving strategies. Consider how an LLM’s response to the question “what is an LLM?” can be tailored based on a Receptiviti-enabled RAG’s quantification of [how analytical the user’s thinking style is](https://www.receptiviti.com/congnitive-measures): ![Tailor LLM responses to how analytical user is](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tnax5me72uqs9hdyxz8z.png) Users whose language the RAG deems to be low on the Receptiviti “analytical” dimension are likely to benefit from responses that are concise, engaging, and use illustrative examples to aid comprehension: **Receptiviti-enabled RAG response tailored to a less analytical user (Analytical score of 38.9):** `“An LLM is short for Large Language Model. It's like a smart computer program that understands and generates text. You throw words at it, and it gives you back more words that make sense. People use it for things like making chatbots or writing articles without having a person do the writing.
It's basically a program that talks and writes like a person.” ` Users whose language the RAG deems to be high on the Receptiviti “analytical” dimension would result in detailed responses supported with more facts and data: **Receptiviti-enabled RAG response tailored to a highly analytical user (Analytical score of 92.1):** `“An LLM, or Large Language Model, is a sophisticated artificial intelligence system designed for natural language processing tasks. These models, exemplified by architectures like GPT-4, are built on deep learning techniques and trained on extensive datasets containing diverse language patterns. LLMs excel in understanding context, semantics, and linguistic nuances. They consist of multiple layers of neural networks, allowing them to capture intricate language structures and generate human-like text. LLMs find applications in various fields, such as language translation, text completion, question answering, and content creation, showcasing their ability to comprehend and generate coherent and contextually relevant responses.”` **Assessing personality from users' Large Language Model prompts** A Receptiviti API-powered RAG could also be used to [assess a user’s personality](https://www.receptiviti.com/big-five) based on the language contained in their prompts. These quantitative scores can be used for a wide variety of purposes including informing market research with personality insights, the development of customer personas, or to develop a comprehensive and dynamic understanding of the user such that future responses are tailored to the nuances of their unique personality: ![LLM builds customer personality profiles](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7s0t14ggl9va39wdujxy.png) For the purposes of improving LLM-based digital health or coaching platforms, a similar approach could be used to track how changes in a user’s personality manifest over time. 
Big Five personality facets such as Stress-Prone or Anxiety-Prone can be used to quantify how stressed or anxious an individual is at a point in time or can be used to track changes in a user’s stress or anxiety levels over time. The resulting scores can be used to better understand how the individual responds to therapeutic interventions and to create far more empathetic and contextually suitable responses. **Comprehensive understanding of user psychology, personality, and emotions with Receptiviti-powered Retrieval Augmented Generation** The degree to which a user is stressed, anxious, or has an analytical thinking style are just a few of the hundreds of psychological phenomena that could be inferred with a Receptiviti API-powered RAG. Prompt language can be analyzed to understand a user’s emotions, levels of anxiety, empathy, their leadership style, and to conduct a comprehensive assessment of their personality using frameworks like the [Big Five](https://www.receptiviti.com/big-five) and [DISC](https://www.receptiviti.com/disc). Regardless of the psychological phenomena being analyzed, a [Receptiviti API](https://www.receptiviti.com/api)-powered RAG can generate quantitative scores that can be used to benchmark and compare individuals or to conduct quantitative longitudinal comparisons to understand how an individual changes over time. LLMs are designed for natural language understanding and generation, and while they excel at processing and generating human-like text based on the input they receive, generating quantitative insights into phenomena like psychology, personality, and emotion requires integrating external knowledge that has been specifically designed for the complexities of language-based psychological analysis, like Receptiviti. Augmenting Large Language Models (LLMs) with a comprehensive understanding of human psychology brings several advantages: - It creates interactions that are more meaningful and impactful. 
These interactions are finely tuned to the unique psychological makeup of each user. - It reduces the likelihood of LLMs generating responses that might be biased or lack sensitivity. - It equips LLMs with the capability to track and understand a user’s psychological changes over time. - From a user experience perspective, an empirical understanding of users' cognitive and psychological processes enhances the overall user experience, and makes interactions with LLMs more intuitive, adaptive, and far more impactful. [Get started with a Receptiviti API account](https://www.receptiviti.com).
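The retrieval-then-generation flow behind the analytical-thinking example above can be sketched in a few lines. Everything here is hypothetical for illustration: the real Receptiviti scoring is an HTTP API call, so `get_analytical_score` below is a fake stand-in, and the threshold and style hints are invented, not the product's actual behavior:

```python
# Hypothetical sketch of a score-driven RAG step: score the prompt on one
# psychological dimension, then steer the generative model's style with it.

def get_analytical_score(text):
    # Stand-in for a call to a language-psychology API: here we just use the
    # share of long words as a fake "analytical" score on a 0-100 scale.
    words = text.split()
    long_words = [w for w in words if len(w) > 6]
    return 100 * len(long_words) / max(1, len(words))

def build_prompt(user_message, threshold=50):
    score = get_analytical_score(user_message)
    style = ("Answer with technical depth, data, and precise terminology."
             if score >= threshold
             else "Answer concisely with a friendly tone and a simple analogy.")
    # The style hint is prepended so the downstream generative model
    # tailors its response to the inferred thinking style.
    return f"[style: {style}]\nUser: {user_message}"

print(build_prompt("what is an LLM?"))
```

In a real integration the score would come back from the retrieval model alongside the other selected dimensions, and the augmented prompt would then be passed to the generative model.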
receptiviti
1,880,553
FastAPI Beyond CRUD Part 8 - User Account Creation (Email & Password Auth)
In this video, we employ the user authentication model developed in the previous segment to enable...
0
2024-06-07T15:35:09
https://dev.to/jod35/fastapi-beyond-crud-part-8-user-account-creation-email-password-auth-47p5
fastapi, python, api, programming
In this video, we employ the user authentication model developed in the previous segment to enable users to create accounts within our application. Our process starts with establishing the API endpoints and incorporating the required database fields. Subsequently, we implement the logic in a service file and integrate it into our API endpoints. {%youtube iIwKWNGWtyk%}
jod35
1,880,552
Game for PYTHON
import random import time class Player: def __init__(self, name): self.name = name ...
0
2024-06-07T15:34:16
https://dev.to/praket_singh_1c341a50266b/game-for-python-png
```
import random
import time


class Player:
    def __init__(self, name):
        self.name = name
        self.health = 100
        self.gold = 5
        self.berries = 0
        self.attack = 10
        self.defense = 5
        self.potions = 3
        self.weak_spell = 1
        self.special_ability = None
        self.stamina = 120
        self.active_quest = None
        self.job = None
        self.salary = 0
        self.required_attributes = {}
        self.has_attempted_run = False
        self.upgrade_points = 0
        self.attacks = {
            "Roundhouse kick": {"damage": 10, "stamina_cost": 10},
            "Heavy Strike": {"damage": 20, "stamina_cost": 25},
            "Power Shot": {"damage": 30, "stamina_cost": 35}
        }

    def change_job(self, company):
        if all(getattr(self, attr, 0) >= req for attr, req in company.required_attributes.items()):
            self.gold += company.salary
            self.job = company.name
            print(f"Congratulations! You have been hired by {company.name} "
                  f"and received a salary of {company.salary} gold!")
        else:
            print("You do not meet the requirements to join this company.")

    def undertake_quest(self, quest):
        if self.active_quest:
            print("You are already on a quest. Complete it before taking another.")
            return
        self.active_quest = quest
        print(f"You undertook the quest: {quest.name}")

    def display_companies(self, companies):
        print("Available Companies:")
        for idx, company in enumerate(companies, start=1):
            print(f"{idx}. {company.name} - Salary: {company.salary}, "
                  f"Required Attributes: {company.required_attributes}")

    def complete_quest(self):
        if self.active_quest and not self.active_quest.completed:
            self.gold += self.active_quest.reward_gold
            self.upgrade_points += self.active_quest.reward_upgrade_points
            self.active_quest.completed = True
            print(f"Congratulations! You completed the quest: {self.active_quest.name}")
            print(f"You received {self.active_quest.reward_gold} gold and "
                  f"{self.active_quest.reward_upgrade_points} upgrade points!")
            self.active_quest = None
        else:
            print("No active quest to complete.")

    def reset_run_attempt(self):
        self.has_attempted_run = False

    def attempt_run(self):
        if not self.has_attempted_run:
            self.has_attempted_run = True
            return True
        print("You have already attempted to run during this battle.")
        self.has_attempted_run = False

    def display_stats(self):
        print(f"Player: {self.name}")
        print(f"Health: {self.health}")
        print(f"Gold: {self.gold}")
        print(f"Berries: {self.berries}")
        print(f"Attack: {self.attack}")
        print(f"Defense: {self.defense}")
        print(f"Potions: {self.potions}")
        print(f"Stamina: {self.stamina}")
        print(f"Upgrade Points: {self.upgrade_points}")
        if self.special_ability:
            print(f"Special Ability: {self.special_ability.name}")
        print("\nAttacks:")
        for attack, details in self.attacks.items():
            if details.get("unlocked", True):
                print(f"{attack}: damage-{details['damage']}, stamina cost-{details['stamina_cost']}")

    def attack_enemy(self, enemy, attack_name):
        if attack_name in self.attacks:
            if self.stamina >= self.attacks[attack_name]["stamina_cost"]:
                damage = self.attacks[attack_name]["damage"]
                if self.special_ability:
                    damage += self.special_ability.activate()
                enemy.take_damage(max(0, damage - enemy.defense))
                print(f"You use {attack_name} and deal "
                      f"{max(0, damage - enemy.defense)} damage to the {enemy.name}!")
                self.stamina -= self.attacks[attack_name]["stamina_cost"]
            else:
                print("Not enough stamina to perform this attack!")
        else:
            print("Invalid attack choice. Please try again.")

    def simple_attack(self, enemy):
        damage = max(0, self.attack - enemy.defense)
        enemy.health -= damage
        print(f"You attack the enemy and deal {damage} damage!")

    def take_damage(self, damage):
        self.health -= damage

    def use_potion(self):
        if self.potions > 0:
            self.health += 20
            self.potions -= 1
            print("You use a potion and restore 20 health.")
        else:
            print("You don't have any potions left!")


class Enemy:
    def __init__(self, name, health, attack, defense, gold_drop, is_god=False, key_fragment_drop=False):
        self.name = name
        self.health = health
        self.attack = attack
        self.defense = defense
        self.gold_drop = gold_drop
        self.is_god = is_god
        self.key_fragment_drop = key_fragment_drop

    def drop_items(self, player):
        if self.key_fragment_drop:
            player.defeat_enemy()

    def drop_fruit(self):
        if self.is_god:
            return random.choice(["Gomu Gomu no Mi", "Hie Hie no Mi", "Mera Mera no Mi"])
        return None

    def attack_player(self, player):
        damage = max(0, self.attack - player.defense)
        player.take_damage(damage)
        print(f"The {self.name} attacks you and deals {damage} damage!")

    def take_damage(self, damage):
        self.health -= damage


class BossEnemy(Enemy):
    def __init__(self, name, health, attack, defense, gold_drop, is_god=False):
        super().__init__(name, health, attack, defense, gold_drop, is_god)


def boss_battle(player, friend, boss_enemy):
    print(f"A formidable foe, {boss_enemy.name}, appears!")
    while player.health > 0 and boss_enemy.health > 0:
        print("\nPlayer's Turn:")
        player.display_stats()
        print(f"{boss_enemy.name}'s Health: {boss_enemy.health}")
        print("1. Basic punch")
        print("Attacks:")
        for idx, attack_name in enumerate(player.attacks.keys(), start=2):
            print(f"{idx}. {attack_name}")
        print("5. Use Potion")
        print("6. Run")
        if player.special_ability:
            print("7. Use Special Ability")
        choice = input("Enter your choice number: ")
        if choice.isdigit() and 1 <= int(choice) <= 7:
            choice = int(choice)
            attack_name = None
            if choice == 1:
                player.simple_attack(boss_enemy)
            elif choice == 2:
                attack_name = "Roundhouse kick"
            elif choice == 3:
                attack_name = "Heavy Strike"
            elif choice == 4:
                attack_name = "Power Shot"
            elif choice == 5:
                player.use_potion()
            elif choice == 6:
                print("You attempt to run away!")
                if random.random() < 0.55:
                    print("You successfully escaped from the battle!")
                    player.reset_run_attempt()
                    return True
                print("You couldn't escape!")
                print(f"{boss_enemy.name}'s Turn:")
                boss_enemy.attack_player(player)
                continue
            elif choice == 7 and player.special_ability:
                attack_name = "Special Ability"
            if attack_name:
                player.attack_enemy(boss_enemy, attack_name)
        else:
            print("Invalid choice. Please try again.")
            continue
        if boss_enemy.health <= 0:
            print(f"Congratulations! You defeated {boss_enemy.name}!")
            print(f"You received {boss_enemy.gold_drop} gold!")
            break
        print(f"\n{boss_enemy.name}'s Turn:")
        boss_enemy.attack_player(player)
        if player.health <= 0:
            print("You have been defeated! Game over.")
            break


class SpecialAbility:
    def __init__(self, name, description, damage, cooldown):
        self.name = name
        self.description = description
        self.damage = damage
        self.cooldown = cooldown
        self.last_used_time = 0

    def activate(self):
        current_time = time.time()
        if current_time - self.last_used_time >= self.cooldown:
            print(f"You activate {self.name}: {self.description}")
            self.last_used_time = current_time
            return self.damage
        print(f"The ability {self.name} is still on cooldown!")
        return 0


class Quest:
    def __init__(self, name, description, reward_gold, reward_upgrade_points):
        self.name = name
        self.description = description
        self.reward_gold = reward_gold
        self.reward_upgrade_points = reward_upgrade_points
        self.completed = False

    def display_details(self):
        print(f"Quest: {self.name}")
        print(f"Description: {self.description}")
        print(f"Reward: {self.reward_gold} gold, {self.reward_upgrade_points} upgrade points")
        print("Status: Completed" if self.completed else "Status: In Progress")


class BossKeyQuest:
    def __init__(self):
        self.key_fragments_needed = 3
        self.key_fragments_collected = 0

    def start_quest(self):
        print("You've received a quest to obtain the boss key!")
        print("Defeat enemies to collect their key fragments.")
        print(f"You need to collect {self.key_fragments_needed} key fragments to unlock the boss chamber.")

    def check_progress(self):
        print(f"You have collected {self.key_fragments_collected} out of "
              f"{self.key_fragments_needed} key fragments.")

    def collect_key_fragment(self):
        self.key_fragments_collected += 1
        print("You collected a key fragment!")

    def complete_quest(self):
        if self.key_fragments_collected >= self.key_fragments_needed:
            print("Congratulations!
You have collected all key fragments and unlocked the boss chamber.") return True else: print("You still need to collect more key fragments to unlock the boss chamber.") return False class Company: def __init__(self, name, salary, required_attributes): self.name = name self.salary = salary self.required_attributes = required_attributes class NPC: def __init__(self, name, dialogue): self.name = name self.dialogue = dialogue def interact(self): print(f"{self.name}: {self.dialogue}") class Shop: def __init__(self): self.items = { "Health Potion": {"price": 20, "effect": "restore health"}, "Old set of armor and sword": {"price": 40, "effect": "increse attack and defense by 2" }, "Sword": {"price": 50, "effect": "increase attack by 4"}, "Armor": {"price": 50, "effect": "increase defense by 4"}, "Elixir of Life": {"price": 100, "effect": "health buff of 50 instantly"}, "Aluminum Sword": {"price": 57, "effect": "increase attack by 5"}, "Chainmail Armor": {"price": 57, "effect": "increase defense by 5"}, "Iron Sword": {"price": 65, "effect": "increase attack by 10"}, "Iron Armor": {"price": 65, "effect": "increase defense by 10"}, "Gold Sword": {"price": 80, "effect": "increase attack by 15"}, "Gold Armor": {"price": 80, "effect": "increase defense by 15"}, "Wizard's Staff": {"price": 100, "effect": "increase attack by 20"}, "Heavenly Knight's Armor": {"price": 110, "effect": "increase defense by 25"}, "Ring of Vigor": {"price": 60, "effect": "increase stamina by 20"} } self.mythical_items = { "Conqueror's Sword": {"price": 20, "effect": "massive increase in attack"}, "Dragon Armor": {"price": 20, "effect": "massive increase in defense"}, "Eternal Life": {"price": 21, "effect": "massive increase in health"}, "Ring of valor": {"price": 25, "effect": "increase in stamina, health"}, "The eternal dagger": {"price": 29, "effect":"increase in attack by 29" } } self.special_abilities = { "Gomu Gomu no Mi": SpecialAbility("Gomu Gomu no Mi", "stretching ability", 15, 60), "Hie Hie 
no Mi": SpecialAbility("Hie Hie no Mi", "ice manipulation", 20, 90), "Mera Mera no Mi": SpecialAbility("Mera Mera no Mi", "fire manipulation", 25, 120) } self.exchange_rate = 10 def display_items(self): print("Welcome to the shop! Here are the available items:") print("Regular Items(gold):") for item, stats in self.items.items(): print(f"{item}: Price: {stats['price']} Gold, Effect: {stats['effect']}") print("\nMythical Items(berries):") for item, stats in self.mythical_items.items(): print(f"{item}: Price: {stats['price']} Berries, Effect: {stats['effect']}") print("\nSpecial Abilities(can't be bought):") for ability, stats in self.special_abilities.items(): print(f"{ability}: Description: {stats.description}, Damage: {stats.damage}, Cooldown: {stats.cooldown} seconds") def buy_item(self, player, item_name): if item_name in self.items: item_price = self.items[item_name]["price"] if player.gold >= item_price: if item_name == "Health Potion": player.potions += 1 elif item_name == "Old set of armor and sword": player.attack += 2 player.defense += 2 elif item_name == "Sword": player.attack += 4 elif item_name == "Armor": player.defense += 4 elif item_name == "Elixir of Life": player.health += 50 elif item_name == "Aluminum Sword": player.attack += 5 elif item_name == "chainmail armor": player.defense += 5 elif item_name == "Iron Sword": player.attack += 10 elif item_name == "Iron Armor": player.defense += 10 elif item_name == "Gold Sword": player.attack += 15 elif item_name == "Gold armor": player.defense += 15 elif item_name == "wizard's staff": player.attack += 20 elif item_name == "Heavenly knight's armor": player.defense += 25 elif item_name == "Ring of Vigor": player.stamina += 20 player.gold -= item_price print(f"You bought {item_name} for {item_price} gold!") else: print("You don't have enough gold to buy this item!") elif item_name in self.mythical_items: item_price = self.mythical_items[item_name]["price"] if player.berries >= item_price: if item_name == 
"Conqueror's Sword": player.attack += 27 elif item_name == "Dragon Armor": player.defense += 29 elif item_name == "Eternal Life": player.health += 75 elif item_name == "Ring of valor": player.stamina += 50 player.health += 25 elif item_name == "The eternal dagger": player.attack += 29 player.berries -= item_price print(f"You bought {item_name} for {item_price} berries!") else: print("You don't have enough berries to buy this item!") else: print("Item not available in the shop!") class Friend: def __init__(self, name): self.name = name self.health = 100 self.attack = 15 self.defense = 8 self.upgrade_points_cost = 5 # Define cost of upgrading friend self.upgrade_level = 0 def display_stats(self): print(f"{self.name}'s Stats:") print(f"Health: {self.health}") print(f"Attack: {self.attack}") print(f"Defense: {self.defense}") print(f"Upgrade Level: {self.upgrade_level}") def upgrade(self, player): if player.upgrade_points >= self.upgrade_points_cost: player.upgrade_points -= self.upgrade_points_cost self.attack += 5 # Example upgrade: Increase attack by 5 self.defense += 3 # Example upgrade: Increase defense by 2 self.health += 10 # Example upgrade: Increase health by 10 self.upgrade_level += 1 print(f"{self.name} has been upgraded!") else: print("Insufficient upgrade points.") def battle(player, friend, enemy, enemies): print(f"A wild {enemy.name} appears!") print(f"{player.name}'s stats:") player.display_stats() print(f"{friend.name}'s stats:") friend.display_stats() num_enemies = random.randint(1, 2) if num_enemies == 2 : enemy.health *= 1.4 total_gold_drop = enemy.gold_drop * num_enemies enemy.attack *= num_enemies enemy.defense *= num_enemies print(f"{enemy.name}'s stats:") print(f"Health: {enemy.health}") print(f"Attack: {enemy.attack}") print(f"Defense: {enemy.defense}") while player.health > 0 and enemy.health > 0: print("\nPlayer's Turn:") print("1. Basic punch") print("Attacks:") for idx, attack_name in enumerate(player.attacks.keys(), start=2): print(f"{idx}. 
{attack_name}") print("Others:") print("5. Use Potion") print("6. Run") if player.special_ability: print("7. Use Special Ability") choice = input("Enter your choice number: ") if choice.isdigit() and 1 <= int(choice) <= 7: choice = int(choice) if choice == 1: player.simple_attack(enemy) elif choice in range(2, 5): attack_name = list(player.attacks.keys())[choice - 2] player.attack_enemy(enemy, attack_name) elif choice == 5: player.use_potion() elif choice == 6: print("You attempt to run away!") if random.random() < 0.4991: print("You successfully escaped from the battle!") return False # Return False when player escapes else: print("You couldn't escape!") player.attempt_run() print(f"{enemy.name}'s Turn:") enemy.attack_player(player) continue elif choice == 7 and player.special_ability: player.attack_enemy(enemy, "Special Ability") else: print("Invalid choice. Please try again.") continue if enemy.health <= 0: print(f"You defeated the {enemy.name}!") player.gold += enemy.gold_drop print(f"You received {enemy.gold_drop} gold!") fruit = enemy.drop_fruit() if fruit: print(f"The {enemy.name} dropped a {fruit}!") if fruit in player.special_abilities: print("You feel a strange power emanating from the fruit!") print(f"It seems you acquired the ability: {fruit}") player.special_ability = player.special_abilities[fruit] return True print(f"{enemy.name}'s Turn:") enemy.attack_player(player) print(f"{player.name}'s stats:") player.display_stats() print(f"{friend.name}'s stats:") friend.display_stats() print(f"{enemy.name}'s stats:") print(f"Health: {enemy.health}") print(f"Attack: {enemy.attack}") print(f"Defense: {enemy.defense}") else: print("Invalid choice. Please enter a number between 1 and 7.") if player.health <= 0: print("You have been defeated! Game over.") return False return True def dungeon(player, friend, dungeon_enemies): print("Welcome to the dungeon!") print("You are not alone here. 
Your friend is with you.") while player.health > 0: if len(dungeon_enemies) == 0: print("Congratulations! You defeated all enemies in the dungeon!") break print("\nCurrent enemies in the dungeon:") for idx, enemy in enumerate(dungeon_enemies): print(f"{idx + 1}. {enemy.name}") print("\nWhat would you like to do?") print("1. Battle an enemy") print("2. Shop") print("3. quest") print("4. upgrade friend") print("5. Quit") choice = input("Enter your choice number: ") if choice == "1": enemy_choice = random.choice(dungeon_enemies) if battle(player, friend, enemy_choice): dungeon_enemies.remove(enemy_choice) elif choice == "2": shop.display_items() item_choice = input("Enter the item you want to buy or 'exit' to leave the shop: ") if item_choice.lower() != 'exit': shop.buy_item(player, item_choice) elif choice == "3": print("Available Quests:") for idx, quest in enumerate(quests, start=1): quest.display_details() quest_choice = input("Enter the number of the quest you want to undertake or 'exit' to go back: ") if quest_choice.isdigit() and 1 <= int(quest_choice) <= len(quests): player.undertake_quest(quests[int(quest_choice) - 1]) if hasattr(player, 'undertake_quest') else print( "Player cannot undertake quests.") elif quest_choice.lower() == 'exit': continue else: print("Invalid quest choice. Please try again.") elif choice == "4": friend.upgrade(player) elif choice == "5": print("Exiting the dungeon...") break else: print("Invalid choice. 
Please try again.") def main(): player_name = input("Enter your name: ") player = Player(player_name) friend = Friend("Ally") shop = Shop() enemies = [ Enemy("Demon", 50, 8, 2, 20, False, False), Enemy("Orc", 70, 12, 5, 30), Enemy("Dragon", 100, 20, 10, 50), Enemy("Giants", 80, 17, 9, 40), Enemy("Undead Insects", 61, 10, 3, 25), Enemy("Blood hyena", 75, 16, 7, 42), Enemy("Sea Kings", 90, 21, 9, 55, False, True), Enemy("Sea Kings (Enraged)", 110, 25, 13, 63), Enemy("Sea Beast", 120, 30, 15, 70), Enemy("Jin-Woo (HUMAN, HUNTER)", 125, 35, 20, 75, False, True), Enemy("Cerberus", 132, 40, 25, 86), Enemy("GOD", 140, 45, 25, 100, True), Enemy("GOD (Enraged)", 155, 50, 30, 110, True,True) ] dark_lord = BossEnemy("Dark Lord", 200, 45, 26, 181) dungeon_enemies = [ Enemy("Skeleton", 60, 10, 10, 40), Enemy("Goblin", 100, 24, 16, 60), Enemy("Witch", 115, 42, 24, 110), Enemy("warewolfs", 140, 60, 37, 140), Enemy("Dark Knight", 145, 80, 50, 172), Enemy("Demon Lord", 150, 90, 55, 200, True), Enemy("Ancient Dragon", 200, 100, 60, 220) ] quests = [ Quest("Defeat the Sea beeast", "Defeat the notorious Sea beast .", 50, 1), Quest("defeat the GOD", "Defeat the rebel GOD.", 100, 2), Quest("defeat demon lord", "defeat the demon lord and free the captured spirits .", 150, 3) ] npc_1 = NPC("Villager", "Thank you for helping our village. We are forever grateful.") npc_2 = NPC("Blacksmith","Looking for some new gear? I have the finest weapons and armor in the land. 
Just visit the shop") companies = [ Company("new hunter guild", 45, {"attack": 10, "defense": 5}), Company("Hunter Association", 100, {"attack": 50, "defense": 30}), Company("Magic Guild", 120, {"attack": 40, "stamina": 20, "defense": 50}), Company("Mercenaries Guild", 80, {"attack": 60, "stamina": 70}), Company("Explorers Guild", 90, {"defense": 40, "stamina": 60}), ] boss_quest = BossKeyQuest() boss_quest.start_quest() print("Welcome to the RPG game!") print("Your grandpa just passed away and you inheritd 5 gold") print("new hunter guild offerd you a contract sign it in companies") print("You have been hired by a hunter organization.") print("Your goal is to defeat all enemies and become the ultimate warrior.") print("1 berry = 10 gold") print("Fruits can only be dropped by GOD.") print("A random number of enemies from one to two can spawn") while player.health > 0: if len(enemies) == 0: print("Congratulations! You defeated all enemies and opened a portal to a dungeon!") print("You enter the portal and find yourself in a dark dungeon...") print("But don't worry, you have a friend to help you in this journey!") dungeon(player, friend, dungeon_enemies) friend.upgrade(player) # Upgrade friend when entering the dungeon break print("\nCurrent enemies:") for idx, enemy in enumerate(enemies): print(f"{idx + 1}. {enemy.name}") print("\nWhat would you like to do?") print("1. Battle an enemy") print("2. Visit the shop") print("3. Upgrade friend") print("4. Stats") print("5. View quest") print("6. Interact with NPC") print("7. JOB(companies)") print("8. special quest") print("9. Credits") print("10. 
Quit") choice = input("Enter your choice number: ") if choice == "1": if enemies: # Check if there are remaining enemies enemy_choice = random.choice(enemies) if battle(player, friend, enemy_choice, enemies): enemies.remove(enemy_choice) elif choice == "2": shop.display_items() item_choice = input("Enter the item you want to buy or 'exit' to leave the shop: ") if item_choice.lower() != 'exit': shop.buy_item(player, item_choice) elif choice == "3": friend.upgrade(player) elif choice == "4": player.display_stats() friend.display_stats() elif choice == "6": print("NPCs available for interaction:") print("1. Villager") print("2. Blacksmith") npc_choice = input("Enter the number of the NPC you want to interact with: ") if npc_choice == "1": npc_1.interact() elif npc_choice == "2": npc_2.interact() else: print("Invalid NPC choice.") elif choice == "5": if player.active_quest: print("You are already on a quest. Complete it before taking another.") else: print("Available Quests:") for idx, quest in enumerate(quests, start=1): quest.display_details() quest_choice = input("Enter the number of the quest you want to undertake or 'exit' to go back: ") if quest_choice.isdigit() and 1 <= int(quest_choice) <= len(quests): player.undertake_quest(quests[int(quest_choice) - 1]) elif quest_choice.lower() == 'exit': continue else: print("Invalid quest choice. Please try again.") elif choice == "7": print("Available Companies:") for idx, company in enumerate(companies, start=1): print(f"{idx}. {company.name} - Salary: {company.salary} gold, Required Attributes: {company.required_attributes}") company_choice = input("Enter the number of the company you want to apply to or 'exit' to go back: ") if company_choice.isdigit() and 1 <= int(company_choice) <= len(companies): player.change_job(companies[int(company_choice) - 1]) elif company_choice.lower() == 'exit': continue else: print("Invalid company choice. 
Please try again.") elif choice == "8": boss_quest.check_progress() op = input("type EXIT to go back to home- ") if op == 'EXIT': continue elif choice == "9": print("Game by - Praket singh bhadauria") print("inspiration - Solo leveling, One piece") op = input("type EXIT to go back to home- ") if op == 'EXIT': continue elif choice == "10": print("Thanks for playing!") break else: print("Invalid choice. Please try again.") if boss_quest.complete_quest(): # Generate the boss enemy and start the boss battle boss_enemy = BossEnemy("Dark Lord", 300, 50, 30, 1000) BossEnemy.boss_battle(player, friend, boss_enemy) break if __name__ == "__main__": main() ```
praket_singh_1c341a50266b
1,880,469
Leetcode Day3 Linked List
Leetcode No.203 Remove Linked List Elements Given the head of a linked list and an integer...
0
2024-06-07T15:33:20
https://dev.to/flame_chan_llll/leetcode-day3-linked-list-2n6e
##Leetcode No.203 Remove Linked List Elements Given the head of a linked list and an integer val, remove all the nodes of the linked list that has Node.val == val, and return the new head. ``` /** * Definition for singly-linked list. * public class ListNode { * int val; * ListNode next; * ListNode() {} * ListNode(int val) { this.val = val; } * ListNode(int val, ListNode next) { this.val = val; this.next = next; } * } */ ``` It is a normal Linked list question but i suffer some problems because of my bad coding ability. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b0ucnvyznw3qi9i0pran.png) But I suffer from some problem: - the initial previous node and current node - work for both head node and other node ### Error Code 1 ``` while(head!=null){ if(head.val == val){ head = head.next; } } ListNode preNode = head; ListNode curNode = head; while(curNode != null){ System.out.println("find"); if(curNode.val == val){ preNode.next = curNode.next; }else{ preNode = curNode; } curNode = curNode.next; } return head; ``` - `while(head!=null)` wrong condition ### Error Code 2 ``` public ListNode removeElements(ListNode head, int val) { ListNode preNode = head; ListNode curNode = head; while(curNode != null){ if(curNode.val == val){ // head is different from other element because it is an entry of the Linked list if(curNode == head){ head = head.next; curNode = head; preNode = head; }else{ // if the node is normal node we just do the remove preNode.next = curNode.next; } }else{ preNode = curNode; } curNode = curNode.next; } return head; } ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qimded1dwx4h67dr3es5.png) ### By modifying above problem, we achieve the right code: ``` public ListNode removeElements(ListNode head, int val) { while((head != null) && (head.val == val)){ head = head.next; } ListNode preNode = head; ListNode curNode = head; while(curNode!=null){ if(curNode.val == val){ preNode.next = curNode.next; 
}else{ preNode = curNode; } curNode = curNode.next; } return head; } ``` - Be Careful about `while((head != null) && (head.val == val))`, I use only `while(head.val == val)` and cause a null pointer exception so that be careful about it. ##Leetcode No.707 Design Linked List Design your implementation of the linked list. You can choose to use a singly or doubly linked list. A node in a singly linked list should have two attributes: val and next. val is the value of the current node, and next is a pointer/reference to the next node. If you want to use the doubly linked list, you will need one more attribute prev to indicate the previous node in the linked list. Assume all nodes in the linked list are 0-indexed. Implement the MyLinkedList class: MyLinkedList() Initializes the MyLinkedList object. int get(int index) Get the value of the indexth node in the linked list. If the index is invalid, return -1. void addAtHead(int val) Add a node of value val before the first element of the linked list. After the insertion, the new node will be the first node of the linked list. void addAtTail(int val) Append a node of value val as the last element of the linked list. void addAtIndex(int index, int val) Add a node of value val before the indexth node in the linked list. If index equals the length of the linked list, the node will be appended to the end of the linked list. If index is greater than the length, the node will not be inserted. void deleteAtIndex(int index) Delete the indexth node in the linked list, if the index is valid. 
### Work for wrong code ``` class MyLinkedList { public Node head; public Node tail; public int size; public class Node{ public int val; public Node next; public Node(){ } public Node(int val){ this.val = val; } public Node(int val, Node next){ this.val = val; this.next = next; } } public MyLinkedList() { size = 0; head = null; tail = null; } public int get(int index) { int count = 0; Node cur = head; while(cur != null){ if(count == index){ return cur.val; } cur = cur.next; count ++; } return -1; } public void addAtHead(int val) { if(head==null){ head = new Node(val); // if head == null means this list is empty tail = head; } else{ Node temp = head; head = new Node(val); head.next = temp; } size++; } public void addAtTail(int val) { if(tail==null){ tail = new Node(val); head = tail; } else{ Node temp = tail; temp.next = new Node(val); tail = temp.next; } size++; } public void addAtIndex(int index, int val) { if(index > size){ return; } if(index == 0){ addAtHead(val); } if(index == size){ addAtTail(val); } else{ int count = 0; Node pre = head; Node cur = head; while(cur != null){ if(count == index){ Node insert = new Node(val); pre.next = insert; // important insert.next = cur; size++; return; } else{ pre = cur; cur = cur.next; count ++; } } } } public void deleteAtIndex(int index) { if(index > size){ return; } if(index == 0){ if(head!=null){ head = head.next; if(head == null){ tail = null; } size--; return; } }else{ int count = 0; Node pre = head; Node cur = head; while(cur != null){ if(count == index){ pre.next = cur.next; if(pre.next == null){ tail = pre; } size--; return; } pre = cur; cur = cur.next; count++; } } } } /** * Your MyLinkedList object will be instantiated and called as such: * MyLinkedList obj = new MyLinkedList(); * int param_1 = obj.get(index); * obj.addAtHead(val); * obj.addAtTail(val); * obj.addAtIndex(index,val); * obj.deleteAtIndex(index); */ ``` ### Debugging... 
- I might have written some redundant code in the loops. - Because I keep a size variable, once the index has been validated against size we are guaranteed to reach that index while walking the list, so some of the extra checks inside the loops are unnecessary. - So I have to change this part? - But I am struggling with it because I am not sure where the mistake happened. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xmue87bbtwxhq8vy315z.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xv6zeys2ulsybc6l2pn0.png)
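One technique that removes the head-node special case behind both "Error Code" attempts above is a dummy (sentinel) node placed in front of the head. The article's solutions are in Java; the sketch below shows the same idea in Python, so the names and structure are an illustration of the technique rather than a drop-in fix:

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def remove_elements(head, val):
    # The dummy node sits in front of the real head, so deleting the head
    # is no longer a special case: it is just "the node after dummy".
    dummy = ListNode(next=head)
    prev = dummy
    cur = head
    while cur is not None:
        if cur.val == val:
            prev.next = cur.next  # unlink cur; prev stays where it is
        else:
            prev = cur
        cur = cur.next
    return dummy.next  # the new head, even if the old head was removed
```

With the sentinel there is no separate `while head != None and head.val == val` loop and no `curNode == head` branch, which were exactly the two spots that caused trouble above.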
flame_chan_llll
1,862,880
Storage-first pattern using Serverless
Sometimes we have sensitive requests which we need to prepeare our systems to avoid losing it....
0
2024-06-07T15:30:48
https://dev.to/skaznowiecki/storage-first-pattern-using-serverless-m6
serverless, aws, lambda, s3
Sometimes we have **sensitive requests** for which we need to prepare our systems so that we don't lose them. Losing these requests usually means that the company/product **loses money** or introduces issues into the product/application. ## Sensitive requests - Payment requests - Delivery requests - Webhooks with external information ## Non-sensitive requests - List of users - List of user-roles - Endpoints for internal systems The main difference between **sensitive** and **non-sensitive** requests is that sensitive requests generally cannot be re-sent, or re-sending them is more difficult than for non-sensitive requests. The idea behind the storage-first pattern is basically to **store** the request information **first** (in highly available storage) and **process** it **later**. ## Why is serverless useful in these cases? Using serverless technologies like S3, SQS, Kinesis, and DynamoDB gives you high availability without needing to pay for provisioning. With serverless storage you only pay per use. We're going to explain different use cases and patterns here, and talk a bit about pricing. ## Use cases Here are three kinds of implementations of the storage-first pattern: ### SQS-based Queueing ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/59v9zouzy8pbszpb7ptc.png) **Ideal for**: Payment processing, order fulfillment, webhook handling. **How it works**: - The request is immediately placed in an SQS queue. - A separate process (Lambda function, ECS task, etc.) continuously polls the queue, processes messages, and performs the necessary actions. **Benefits** - Decouples request ingestion and processing. - Ensures high availability and fault tolerance. - Scales automatically to handle spikes in requests. 
- Retry mechanism **Pricing example** Processing per 1 M of requests - SQS : $0.4 - API-GATEWAY (http): $1.00 - **Total : $1.4*** * This is an estimate of the pricing because we're not taking care of Data Transfer ### DynamoDB ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/256hov1sbx5kgmkm8mi9.png) **Ideal for**: Storing and retrieving sensitive request data quickly (e.g., payment details, delivery addresses) **How it works** - Critical request data is stored in DynamoDB with a unique identifier. - Subsequent processes can look up and update data in DynamoDB with low latency using DynamoDB streams. **Benefits** - Fast and reliable data access. - Serverless, scales automatically. - Supports fine-grained access control. **Pricing example** Processing per 1 M of requests - DynamoDB (write) : $1.25 - API-GATEWAY (http): $1.00 - **Total : $2.25*** * This is an estimate of the pricing because we're not taking care of Data Transfer and Storage ### S3 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1g75e2zhagx7xveukhkt.png) **Ideal for**: Storing raw request data for auditing, compliance, and analysis. **How it works:** - Requests are stored as objects in S3. - S3 triggers events (e.g., object creation) to notify other services for further processing or analysis. **Benefits** - Durable and scalable storage. - Cost-effective for long-term storage. - Integrates with other AWS services. **Pricing example** Processing per 1 M of requests - S3 put(standard) : $5 - API-GATEWAY (http): $1.00 - **Total : $6*** * This is an estimate of the pricing because we're not taking care of Data Transfer and Storage If you want to go deeper on AWS storage first pattern I highly recommend the following [link](https://aws.amazon.com/blogs/compute/building-storage-first-applications-with-http-apis-service-integrations/)
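The store-first/process-later split described above is simple enough to sketch end to end. The following is a minimal in-memory Python simulation of the SQS-based variant; a real implementation would put boto3's `send_message`/`receive_message` behind API Gateway with a Lambda poller, so the function names and payload shape here are illustrative only:

```python
import json
import queue

# Stand-in for the durable store (SQS/DynamoDB/S3). With real AWS, ingest()
# would be an API Gateway -> SQS integration or a boto3 send_message call.
request_store = queue.Queue()

def ingest(request: dict) -> dict:
    """Store first: persist the raw request and acknowledge immediately."""
    request_store.put(json.dumps(request))
    return {"status": 202, "body": "accepted"}  # accepted, not yet processed

def process_pending(handler) -> int:
    """Process later: a poller (Lambda/ECS in the article) drains the store."""
    processed = 0
    while not request_store.empty():
        payload = json.loads(request_store.get())
        handler(payload)
        processed += 1
    return processed
```

The point of the pattern shows up in `ingest()`: it returns immediately even if the downstream handler is slow or unavailable, which is exactly the decoupling and fault tolerance listed under "Benefits" above.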
skaznowiecki
1,880,550
Creating Win32 Intunewin packages
Hi, I would like to share a GUI based Powershell script that will help create Intunewin files for...
0
2024-06-07T15:29:18
https://dev.to/rinkturksma/creating-win32-intunewin-packages-gki
Hi, I would like to share a GUI-based PowerShell script that helps create Intunewin files for Microsoft Intune. https://github.com/rink-turksma/IntunePrepTool ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1h6kgpno11t0r2tahe0m.png) #Intune #powershell Greetings
rinkturksma
1,880,549
How can we secure the Spring Boot Actuator endpoints?
Securing Spring Boot Actuator endpoints is crucial to protect sensitive information and operations...
27,974
2024-06-07T15:27:41
https://dev.to/codegreen/how-can-we-secure-the-spring-boot-actuator-endpoints-1hm7
java, springboot, microservices
Securing Spring Boot Actuator endpoints is crucial to protect sensitive information and operations exposed by these endpoints from unauthorized access. Here's how we can achieve that: 1. **Using Spring Security:** Spring Security is a powerful authentication and authorization framework that can be integrated with Spring Boot to secure Actuator endpoints. 2. **Authentication:** Implement authentication mechanisms such as HTTP Basic Authentication, OAuth2, JWT, or form-based authentication to verify the identity of users accessing the endpoints. 3. **Authorization:** Define access control rules to restrict access to Actuator endpoints based on user roles or permissions. You can specify who can access which endpoints and what actions they are allowed to perform. 4. **Actuator Endpoint Configuration:** Customize Actuator endpoint configurations to enable or disable specific endpoints based on security requirements. This helps in controlling the exposure of sensitive information. Example: Let's secure the Actuator endpoints using Spring Security with HTTP Basic Authentication: 1. **Add Spring Security Dependency:** Include the `spring-boot-starter-security` dependency in your project's configuration file. 2. **Configure Security:** Customize the security configuration to enable HTTP Basic Authentication and define access rules for Actuator endpoints. Here's a sample security configuration: ```java @Configuration @EnableWebSecurity public class SecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers("/actuator/**").authenticated() .anyRequest().permitAll() .and() .httpBasic(); } } ``` In this configuration, we've restricted access to all Actuator endpoints (under `/actuator/**`) to authenticated users only, while allowing unrestricted access to other endpoints. HTTP Basic Authentication is used to prompt users for credentials. 
With this setup, users need to authenticate themselves to access Actuator endpoints, ensuring that sensitive information is protected from unauthorized access.
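Authentication alone still leaves every enabled endpoint reachable by any authenticated user. A complementary and commonly used control is to limit which Actuator endpoints are exposed over HTTP at all, via `application.properties`. This is a sketch: the endpoint names shown are standard Spring Boot ones, and you should trim the list to what your service actually needs.

```properties
# Expose only the Actuator endpoints you actually need over HTTP;
# everything else stays unavailable on the web management interface.
management.endpoints.web.exposure.include=health,info,metrics

# Avoid wildcards in production, since they expose sensitive endpoints
# such as /actuator/env and /actuator/heapdump:
# management.endpoints.web.exposure.include=*
```

Combining exposure limits with the Spring Security rules above gives defense in depth: an endpoint that is not exposed cannot be reached even if the authentication layer is misconfigured.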
manishthakurani
1,880,548
JetThoughts Is On GoodFirms Radar for Delivering Effortless Ruby on Rails Development Services to Business of Various Niches
**JetThoughts LLC** is an excellent firm offering various services that can give the business an...
0
2024-06-07T15:27:34
https://jetthoughts.com/blog/jetthoughts-on-goodfirms-radar-for-delivering-effortless-ruby-rails-development-services-business-of-various-niches/
ruby, rails, service
![](https://raw.githubusercontent.com/jetthoughts/jetthoughts.github.io/master/static/assets/img/blog/jetthoughts-on-goodfirms-radar-for-delivering-effortless-ruby-rails-development-services-business-of-various-niches/file_0.jpeg)

[**JetThoughts LLC**](https://www.goodfirms.co/company/jetthoughts-llc) is an excellent firm offering various services that can give a business an edge in the competitive market. The firm is known for providing Ruby on Rails development services to different clients.

## **Company Introduction**

JetThoughts is a renowned Ruby on Rails development firm delivering solutions to start-ups and small to mid-sized enterprises. Founded in 2008, the firm is located in the United States with another office in Ukraine. JetThoughts consists of motivated and learned professionals who have years of experience and expertise in various fields, and they utilize this experience to deliver the right solutions for their clients. With more than a decade of experience in this field, the team goes the extra mile to provide services to clients belonging to different niches.

## **GoodFirms Research Process**

[**GoodFirms**](https://www.goodfirms.co/) is a research and review platform that brings companies and clients together on the same platform. Its researchers evaluate companies on the three parameters of Quality, Reliability, and Ability. After being evaluated on these parameters, companies are registered on the GoodFirms website. GoodFirms evaluated JetThoughts on the same parameters and found that the firm has been continuously delivering quality services. The detailed service list is mentioned below:

## **Ruby on Rails Development Services**

JetThoughts guarantees progression and business growth by offering Ruby on Rails development services. The professional team goes beyond expectations to ensure the clients receive the best services from them.
Keeping in mind the clients’ business mission and goals, the team of professionals uses agile methodologies to build a framework that transforms businesses smoothly. Plus, by catering to clients from different domains, the company has achieved massive potential in the industry. The learned team has a clear understanding of all the latest tools and takes accountability for everything. By keeping a long-term partnership with every client, they follow best practices to offer a suitable solution. To keep the process smooth and ensure optimum benefits, the firm adopts a systematic approach to delivering services. Thus, for delivering budgeted solutions, GoodFirms spotlights JetThoughts as one of the top [**Ruby on Rails development companies**](https://www.goodfirms.co/directory/frameworks/top-software-development-companies/ruby-on-rails).

## **Web Development**

A website is crucial for a business that wants to gain visibility and attract large traffic to the brand. To help clients improve their brand visibility with an innovative website design, JetThoughts leverages its cutting-edge technology and knowledge. The firm takes care that the brand identity is visible through its offering. Thus, the developers infuse all possible creative designs to help brands communicate with their target audience. Since the firm operates on a monthly subscription, clients can rest assured about the project’s budget. Before signing the project contract with clients, JetThoughts offers a 2-week trial period to check how the process is going. Regardless of the business type and size, JetThoughts puts all its effort into developing a user-friendly website. All these qualities have put JetThoughts on GoodFirms’ radar as one of the top [**web development companies in Ukraine**](https://www.goodfirms.co/directory/country/top-website-development-companies/ua).
## **Software Development**

Software development plays a vital role in helping businesses shine online. The company offers expert software development services, allowing businesses to gain an extra edge in the market. By using the latest tools and understanding the newest trends, JetThoughts lends a helping hand to firms by offering result-oriented software development solutions. Driven by passion and commitment, the firm also allows clients to talk directly with the developers and the team and clarify doubts. After partnering with JetThoughts, clients can witness progress at each step of the way because the team maintains a relationship of transparency and trust with them. By following best practices and efficient interaction, the clients receive the best possible solutions. In addition, the team takes proper time and effort to understand clients’ vision, which makes it easy to offer solutions based on their needs. Due to their consistent effort and integrity in work, JetThoughts shines at GoodFirms as one of the top [**custom software development companies**](https://www.goodfirms.co/directory/languages/top-software-development-companies) on the website.

## **About GoodFirms**

Washington, D.C.-based GoodFirms is an innovative B2B research and reviews company that extensively combs the market to find top Ruby on Rails, web development, and software development firms that offer the best services to their customers. GoodFirms’ extensive research processes help rank the companies, boost their online reputation, and help service seekers pick the right technology partner that meets their business needs.

[***Anna Stark***](https://www.linkedin.com/in/anna-stark-558484192/) is a staff writer with [***GoodFirms***](https://www.goodfirms.co/) — a Washington, D.C.-based B2B research company that bridges the gap between service seekers and service providers.
Anna’s current role includes gauging companies’ performances and their key attributes and putting them into words. She firmly believes in the magic of words and is always on the lookout for new strategies and ideas that companies have embraced, then helps shine a light on those to make their identities brighter.

> If you enjoyed this story, we recommend reading our [**latest tech stories**](https://jtway.co/latest) and [**trending tech stories**](https://jtway.co/trending).
jetthoughts_61
1,880,547
Discover a Treasure Trove of Free Stock PNG Images at PNGsHub.com
In today’s digital age, visuals play a crucial role in capturing attention and conveying messages...
0
2024-06-07T15:26:59
https://dev.to/luis_jaky_435b02d407064ec/discover-a-treasure-trove-of-free-stock-png-images-at-pngshubcom-5ph
In today’s digital age, visuals play a crucial role in capturing attention and conveying messages effectively. Whether you’re a graphic designer, content creator, or simply someone looking to enhance your projects, having access to a diverse collection of high-quality images is essential. That’s where [PNGsHub.com](https://PNGsHub.com) comes into play. [PNGsHub.com](https://PNGsHub.com) is a treasure trove of free stock PNG images, offering a vast selection of visuals to suit every need and preference. From stunning landscapes to intricate illustrations, the platform boasts an extensive library curated with care by a team of passionate individuals. One of the standout features of PNGsHub.com is its user-friendly interface, making it easy for users to navigate and find exactly what they’re looking for. The website is thoughtfully organized into categories, ranging from nature and animals to technology and abstract designs, ensuring that there’s something for everyone. What sets PNGsHub.com apart from other image resources is its commitment to quality. Each image available on the platform is carefully vetted to meet the highest standards, ensuring that users have access to only the best visuals for their projects. Whether you’re creating a website, designing marketing materials, or sprucing up your social media posts, you can trust that the images you find on PNGsHub.com will make a lasting impression. But perhaps the best part about PNGsHub.com is that all of its images are completely free to use. Whether you’re working on a personal project or representing a brand, you can download and use any image from the platform without worrying about licensing fees or usage restrictions. This makes PNGsHub.com a valuable resource for creators of all kinds, offering endless possibilities for creativity without breaking the bank. 
In addition to its vast collection of PNG images, PNGsHub.com also offers helpful features such as search filters and sorting options, making it easy to find the perfect image for your needs. Whether you’re looking for a specific color scheme, resolution, or subject matter, you can quickly narrow down your options and find exactly what you’re looking for. So, whether you’re a seasoned designer or just starting out, PNGsHub.com is your go-to destination for stunning stock PNG images. With its extensive collection, user-friendly interface, and commitment to quality, it’s never been easier to find the perfect visuals for your projects. Visit [PNGsHub.com](https://PNGsHub.com) today and unleash your creativity!
luis_jaky_435b02d407064ec
1,878,235
Checking object existence in large AWS S3 buckets using Python and PySpark (plus some grep comparison)
Introduction In my recent project, I encountered a need to check if data from 3rd party...
0
2024-06-07T15:26:32
https://dev.to/bgorski/checking-object-existence-in-large-aws-s3-buckets-using-python-and-pyspark-plus-some-grep-comparison-1j0o
aws, python, pyspark, programming
## Introduction

In my recent project, I encountered a need to check whether data from a 3rd-party database corresponds with the documents in an S3 bucket. While this might seem like a straightforward task, the dataset was massive - up to 10 million objects in a single bucket. Traditional iteration over the object list, or requesting a _head_ for every searched file, would take forever. I took some interesting steps, using Python and PySpark to search through potentially large datasets efficiently. Here's a detailed breakdown of my process.

## Listing S3 Bucket Contents and Saving Directory Names

The first step was to list the contents of the S3 bucket and save the names of the subdirectories to a text file. For this, I utilized the **Boto3** library in Python, which is a powerful interface to interact with Amazon Web Services (AWS). Here's a snippet of the code used to accomplish this task:

```python
import os
import sys

import boto3

__doc__ = """
Usage: python get_objects.py <bucket_name> <output_file> [prefix]
Example: python get_objects.py my_bucket objects.txt prefix
"""

if __name__ == "__main__":
    if len(sys.argv) < 3:
        print(__doc__)
        sys.exit(1)

    bucket_name = sys.argv[1]
    output_file = sys.argv[2]
    prefix = sys.argv[3] if len(sys.argv) > 3 else ""

    s3 = boto3.client("s3")

    try:
        os.remove(output_file)
    except OSError:
        pass

    continuation_token = None
    while True:
        if continuation_token:
            response = s3.list_objects_v2(
                Bucket=bucket_name,
                Prefix=prefix,
                ContinuationToken=continuation_token,
                Delimiter='/',
            )
        else:
            response = s3.list_objects_v2(Bucket=bucket_name, Prefix=prefix, Delimiter='/')

        objects = response.get("CommonPrefixes", [])
        continuation_token = response.get("NextContinuationToken")

        with open(output_file, "a") as f:
            for obj in objects:
                f.write(obj["Prefix"] + "\n")

        print(f"{len(objects)} objects in {bucket_name} are listed in {output_file}")

        if not continuation_token:
            break
```

Let's break it down.

1.
`os.remove(output_file)` - every run should start with an empty file, so we remove the output file if it exists.
2. `s3.list_objects_v2(Bucket=bucket_name, Prefix=prefix, ContinuationToken=continuation_token, Delimiter='/')` gets up to 1,000 objects from the bucket `bucket_name` under the "sub-directory" `prefix`, using the delimiter `/` to retrieve only directories. As we are not interested in checking the number of files in directories, we don't need to check non-directory entries. `ContinuationToken` is used for retrieving all the elements in the loop, as the maximum number of objects retrieved in a single call is 1,000.
3. As the listed "directory" objects have no content, we need to save the value of the `Prefix` property of each object.

## Using PySpark to Search for the Selected Directory

With the directory names saved in a text file, the next step was to leverage PySpark for efficient searching. PySpark's DataFrame API provides a powerful way to handle large datasets. Here's an example of how I used PySpark to search for a selected directory:

```python
import os
import sys
import time

from pyspark.sql import SparkSession


def load_input_data(spark):
    if os.path.exists("out.parquet"):
        df = spark.read.load("out.parquet")
        print(f"Loaded {df.count()} records from file out.parquet")
    else:
        df = spark.read.text("out.txt")
        print(f"Loaded {df.count()} records from file out.txt")
        df.write.save("out.parquet", format="parquet")
        print("Saved dataframe to out.parquet")
    return df


def find_entry(id, df):
    return df.filter(df.value == id).count() > 0


def find_in_s3(id_to_find):
    spark = SparkSession.builder.appName("S3Find").getOrCreate()
    df = load_input_data(spark)
    print("Loading input data finished after %s seconds ---" % (time.time() - start_time))
    found = find_entry(id_to_find, df)
    print(f"Found entry {id_to_find}" if found else f"Entry {id_to_find} not found")
    spark.stop()


if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit(1)
    id_to_find = sys.argv[1]
    start_time = time.time()
    find_in_s3(id_to_find)
    print("--- %s seconds ---" % (time.time() - start_time))
```

`load_input_data` looks for previously saved Parquet data or, if it is not available, loads the selected txt file, stores it in a DataFrame, and saves the Parquet data for subsequent runs.

`find_entry` uses a filter on the DataFrame to check whether the selected element exists.

`find_in_s3` creates a new Spark session, loads the DataFrame, and performs the search.

In the main method, I added simple execution time measurement.

## Benchmarks

Using a simple test generator, I created 3 collections (100k, 1M, and 10M elements) of random UUIDs, plus test cases with 10, 100, and 1000 items with some randomly added suffixes (for non-existence searches). The results shocked me a bit - it was blazing fast (time in seconds):

![Spark computing time](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l2xmi1lgtht6jctehjzk.png)

![Spark computing time plot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/snjrgqr4rfnod2hn42xr.png)

As we can see, search time correlated almost exactly with the number of searched elements, not with the collection size.

## Grep

The internet said:

> Grep is always faster for searching in files.
OK, let's try. Here is test code that checks computing time for grep and Spark on the same datasets and test UUIDs:

```python
import subprocess
import sys
import time

from pyspark.sql import SparkSession


def find_entry(id, df):
    return df.filter(df.value == id).count() > 0


def spark_find(test_file):
    start_time = time.time()
    spark = SparkSession.builder.appName("S3Find").getOrCreate()
    df = spark.read.text("test_output")
    with open(test_file, "r") as f:
        for line in f:
            id_to_find = line.rstrip()
            find_entry(id_to_find, df)
    spark.stop()
    print("SPARK: --- %s seconds ---" % (time.time() - start_time))


def grep_find(test_file):
    start_time = time.time()
    with open(test_file, "r") as f:
        for line in f:
            id_to_find = line.rstrip()
            subprocess.call(['/usr/bin/grep', '-q', id_to_find, 'test_output'])
    print("GREP: --- %s seconds ---" % (time.time() - start_time))


if __name__ == "__main__":
    numOfRecords = str(sys.argv[1])
    spark_find("test_outputtest" + numOfRecords)
    grep_find("test_outputtest" + numOfRecords)
```

And the results for grep:

![Grep computing time](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u2kjhgsmqddwcpa1bqtk.png)

![Grep computing time plot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aoxbuf7z1mwszu9r4lcr.png)

For relatively small collections grep was faster, but when it comes to millions of records, the Spark solution overtakes good ol' grep. A direct comparison of Spark vs grep:

![Spark vs grep efficiency comparison](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wr7e4o1jrq4xlikf9u2f.png)

## Summary

To understand why PySpark outperformed grep, let's delve into the differences:

Grep:
- Traditional command-line utility for searching plain-text data.
- Efficient for small to medium-sized text files.
- Performance drops significantly with larger datasets due to linear search.

PySpark:
- Distributed computing framework, ideal for large-scale data processing.
- Uses in-memory computations, which speed up the search process.
- Capable of handling much larger datasets efficiently.

In summary, while grep is a fantastic tool for quick searches on smaller datasets, PySpark shines when dealing with larger datasets, offering significant performance improvements due to its distributed nature.

By leveraging Python and PySpark, I was able to efficiently determine the existence of directories in an S3 bucket, saving time and computational resources. This method showcases the power of modern data processing tools and their application in real-world scenarios.

All the mentioned code can be found in my [repo](https://github.com/Reggis/s3_bucket_search). Feel free to share your thoughts or ask questions in the comments below!

_Note: The performance results may vary based on the specific configuration and resources of your environment._
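As an aside not covered in the original article, the ContinuationToken loop from the listing script can be exercised without AWS access by stubbing the paging behaviour. The fake client below is purely hypothetical; it only mimics the token contract of `list_objects_v2` (at most one page of results per call) to show why the loop must keep requesting until no token is returned:

```python
# Illustrative only: a fake paginated listing client that mimics the
# ContinuationToken contract of S3's list_objects_v2.
def fake_list_objects(prefixes, page_size, continuation_token=None):
    start = continuation_token or 0
    page = prefixes[start:start + page_size]
    response = {"CommonPrefixes": [{"Prefix": p} for p in page]}
    if start + page_size < len(prefixes):
        # More results remain: hand back a token for the next page
        response["NextContinuationToken"] = start + page_size
    return response


def list_all(prefixes, page_size):
    collected, token = [], None
    while True:  # same loop shape as the boto3 script above
        response = fake_list_objects(prefixes, page_size, token)
        collected += [obj["Prefix"] for obj in response["CommonPrefixes"]]
        token = response.get("NextContinuationToken")
        if not token:
            break
    return collected


all_prefixes = [f"dir{i}/" for i in range(2500)]
assert list_all(all_prefixes, 1000) == all_prefixes  # 3 pages: 1000 + 1000 + 500
```

The same loop shape works against the real boto3 client, since it only relies on `CommonPrefixes` and the presence or absence of `NextContinuationToken`.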
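One more hedged observation, beyond what the article benchmarks: when the prefix list fits comfortably in memory on a single machine, a plain Python set may be the simplest baseline for this exact-membership check - O(n) to build the index once, then O(1) per lookup. A minimal sketch (the names are illustrative, not from the linked repo):

```python
# Minimal in-memory baseline for exact-match existence checks.
def build_index(lines):
    # One pass over the listing file; strips trailing newlines
    return {line.rstrip("\n") for line in lines}


def exists(index, key):
    # Set membership is O(1) on average
    return key in index


index = build_index(["alpha/\n", "beta/\n", "gamma/\n"])
assert exists(index, "beta/")
assert not exists(index, "delta/")
```

This obviously does not scale past available RAM the way Spark does, but for mid-sized listings it avoids both JVM startup and repeated file scans.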
bgorski
1,880,495
Effects in Ember
Originally from How to make an effect in Ember? @trusktr asks: What’s the equivalent of Solid.js...
0
2024-06-07T15:17:00
https://dev.to/nullvoxpopuli/effects-in-ember-15n9
ember, effect, javascript
---
title: Effects in Ember
published: true
description:
tags:
  - ember
  - effect
  - javascript
# cover_image: https://images.unsplash.com/photo-1531574595918-cb77c99fe5e2?q=80&w=1959&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D
# Use a ratio of 100:42 for best results.
published_at: 2024-06-07 15:17 +0000
---

Originally from [How to make an effect in Ember?](https://discuss.emberjs.com/t/how-to-make-an-effect-in-ember/20520?u=nullvoxpopuli)

@trusktr asks:

> What’s the equivalent of Solid.js `createEffect()` (or React `useEffect()`, Meteor `Tracker.autorun()`, MobX `autorun()`, Vue `watchEffect()`, Svelte `$effect()`, Angular `effect()`) in Ember.js?

This is certainly shocking to folks new to Ember, but Ember deliberately doesn't have an effect primitive by default. Now, as a _framework_ author, the concept does _sort of_ exist (at a high level) -- but I'll circle back around to this in a smidge.

In your Solid demo, if you want to log function calls, you'd do:

```js
const a = () => 1;
const b = () => 2;
const c = () => 3;

<template>
  {{log (a) (b) (c)}}
</template>
```

Some notes on function invocation syntax, if needed:

- [Glimmer Tutorial: Transforming Data](https://tutorial.glimdown.com/1-introduction/3-transforming-data)
- https://cheatsheet.glimmer.nullvoxpopuli.com/docs/templates#template__notation

We use templates as the sole entrypoint to reactivity, whereas Solid's reactivity is more general-purpose. With templates, and being DOM-focused ([for now](https://github.com/emberjs/ember.js/issues/20648)), we can ask ourselves:

> "If the user can't see the data rendered, does the data need to exist?"

Now, your demo (with logging) is definitely effect-y.
And if you _had no other way_ (like the situation was somehow impossible to model in a derived-data way), you can do this:

```js
function myLog() {
  console.log(a(), b(), c());
}

<template>
  {{ (myLog) }}
</template>
```

This would auto-track, so as the consumed tracked data accessed from each of `a`, `b`, and `c` changed, `myLog` would get to re-run. However, this has a caveat: data may not be _set_ within `myLog`, else an infinite render loop would occur. This is covered here: [Glimmer Tutorial: _synchronizing state_](https://tutorial.glimdown.com/2-reactivity/10-synchronizing-external-state) (to the console in this case).

There is a way around the above caveat (not being able to set during render) by making `myLog` invoke an async IIFE and waiting a tiny bit (i.e., setting slightly after render):

```js
// now we're passing in the args directly so that they
// are tracked (all args are auto-tracked in all
// function / helper / component / modifier execution
// coming from the template)
function myLog(...args) {
  (async function run() {
    await 0;

    // causes a change in a's data
    // and because we awaited, we don't infinite loop
    setA();

    // prints a pre-setA, because a was passed in
    console.log(...args);
  })();

  // return nothing, render nothing
  // (we have no result to show the user)
}

<template>
  {{myLog (a) (b) (c)}}
</template>
```

This is nuanced, and is why I made this tiny abstraction a whole thing over here: https://reactive.nullvoxpopuli.com/functions/sync.sync.html (it's 95% documentation, 5% code :sweat_smile:)

------

So coming back to:

> "We deliberately don't have effects"

Because of a couple of current facts about our ecosystem:

- we want derived data to be preferred, because it is the most efficient way to have your render state settle
- calling a function from a template can only happen after the template is rendered, so doing so causes a _second render_ (I believe this is true in React as well)
- there _is_ a need to synchronize external state, and that has been part
of the exploration of _Resources_, and `Sync`:
  - [Starbeam Docs on `Sync`](https://newdocs-rho.vercel.app/docs/universal/fundamentals/sync.html)
  - [Starbeam Docs on `Resource`s](https://www.starbeamjs.com/guides/fundamentals/resources.html)
  - The current Ember implementation does not have `sync` capabilities: [ember-resources](https://github.com/NullVoxPopuli/ember-resources/tree/main/docs) (due to limitations of the private APIs implementing reactivity; ember-resources is public-API only)
  - [Tutorial Chapters on Resources](https://tutorial.glimdown.com/2-reactivity/5-resources)
- we think that effects are _overused_ and a huge footgun (for app devs), so by documenting a story more around synchronizing external state, we can continue to guide devs into a pit of success

Note: Starbeam is where we're extracting our reactivity primitives, and we are planning to swap to Starbeam entirely at some point once we work out some composition ergonomics for serial Resources (the coloring problem).

Hope this helps! If anything is unclear or if you have more questions, let me know!
nullvoxpopuli
1,880,841
Top 5 Best Backend Bootcamps and Why (2024)
Breaking into backend engineering is a career journey of no return. It is one of the most rewarding...
0
2024-06-10T10:27:04
https://medium.com/backenders-club/top-5-best-backend-bootcamps-and-why-2024-6dd1b491b8ff
tutorial, waterloo, beginners, bootcamp
---
title: Top 5 Best Backend Bootcamps and Why (2024)
published: true
date: 2024-06-07 15:26:22 UTC
tags: tutorial,Waterloo,beginners,bootcamp
canonical_url: https://medium.com/backenders-club/top-5-best-backend-bootcamps-and-why-2024-6dd1b491b8ff
---

![](https://cdn-images-1.medium.com/max/1024/0*qRcJ2auCJack0eIy.png) Breaking into backend engineering is a career journey of no return. It is one of the most rewarding journeys for anyone looking to switch careers. Here are the rewards that come with [becoming a great backend engineer](https://newsletter.masteringbackend.com/p/becoming-a-great-backend-engineer). However, breaking into [backend engineering](https://masteringbackend.com/hubs/backend-engineering/basics-of-backend-engineering) requires choosing the right partners and mentors to help you scale up your career. Therefore, choosing the right coding bootcamp is crucial for anyone looking to break into backend development or enhance their skills. With numerous options, finding a coding bootcamp offering a comprehensive curriculum, experienced instructors, and strong career support is essential. Here are the top 5 best backend and coding bootcamps and why they stand out. ### Masteringbackend Academy Masteringbackend is a brand known for creating exclusive resources on backend engineering and providing next-level training for backend engineers only. Its mission is to turn backend engineers into great backend engineers. ### Why It’s Great? 1. **Backend Engineers Only:** Masteringbackend Academy takes away the pain of choosing a niche and focuses only on Backend Engineering. Focusing on backend engineering gives us the liberty to dig deep into core backend engineering concepts and teach you what matters. 2. **Project-Based Learning:** Regarding learning backend engineering, the best approach is a practical and project-based learning approach.
The academy is known for its project and practical-based teaching method, where boot campers build real-world projects and market them to real users. 3. **Expert-Craft Curriculum:** It leverages experiences from industry experts and years of working as a backend engineer at different companies and levels to craft the curriculum and proven teaching systems, helping transform backend engineers into great and highly sought-after professionals. 4. **Personalized Mentorship:** The academy understands the danger of teaching crowds. So, it splits students into small units supervised by one instructor and mentor called “MB Peers.” Each peer is a group of 10–12 students who belong and learn backend engineering together. At the end of the bootcamp, each peer will demo a real-world product in the market. This grouping makes learning fun, interactive, and personalized. 5. **Access to Backend Engineering Tools:** Students have free access to numerous backend engineering tools and thousands of pre-recorded backend engineering courses to enhance their learning. These tools range from interview practice playgrounds to real-world backend engineering project playgrounds and the MB platforms for thousands of courses, quizzes, tasks, and exercises. ### Notable Features 1. 4–6 weeks of intensive backend engineering training 2. Comprehensive career services, including resume workshops, interview preparation, and job placement. 3. Capstone projects and career coaching 4. A thriving community of helpful students and graduates from top companies [Masteringbackend Academy](https://academy.masteringbackend.com?utm_source=masteringbackend&utm_campaign=top%205%20best%20backend%20bootcamps&utm_medium=blog) is the best option if you’re looking to specialize in Backend Engineering. They have the specialty and years of experience producing backend engineers. Don’t waste one more day. 
Just do it ### Hack Reactor Hack Reactor is a transformational software engineering boot camp that makes tech education accessible. It offers online training in software development and AI tools. ### Why It’s Great: - **Rigorous Curriculum:** Hack Reactor is renowned for its challenging and thorough curriculum. It covers key backend technologies, including Node.js, databases, and API development, ensuring students acquire in-depth knowledge and hands-on experience. - **Industry Connections:** Hack Reactor has established strong ties with top tech companies, facilitating excellent job placement opportunities for graduates. Their hiring network includes industry giants like Google, Facebook, and LinkedIn. - **Real-World Projects:** The bootcamp emphasizes practical learning through real-world projects. Students work on complex problems and develop solutions that mirror actual work scenarios, preparing them for the demands of backend engineering roles. ### Notable Features: - 12-week intensive program. - Strong alumni network. - Comprehensive career services, including resume workshops and interview preparation. ### App Academy App Academy started in 2012 and has consistently been regarded as one of the best software engineering coding bootcamps in the world by students and employers. Its coding bootcamps are designed to help people with little to no coding experience become high-earning software engineers. ### Why It’s Great: - **Deferred Tuition:** App Academy offers a unique deferred tuition model. Students pay tuition only after securing a job, reducing the financial burden and aligning the boot camp’s incentives with the student’s success. - **Immersive Experience:** The bootcamp provides an immersive learning environment, focusing heavily on front and backend development. This comprehensive approach ensures that graduates are well-rounded full-stack developers with strong backend capabilities. 
- **High Job Placement Rate:** App Academy boasts one of the highest job placement rates in the industry. Graduates often land roles at prestigious tech firms, thanks to the bootcamp’s rigorous training and extensive hiring network. ### Notable Features: - 16-week immersive program. - In-depth focus on Ruby on Rails, SQL, and JavaScript. - Extensive support with job search and career transitions. ### Springboard Springboard is your online learning platform with experts in your corner. Its platform gives you the flexibility you want and the human support you need, where you can learn self-paced or with 1:1 mentorship. ### Why It’s Great: - **Flexible Learning:** Springboard offers a flexible, self-paced learning model, making it ideal for individuals who must balance other commitments such as work or family. The bootcamp is designed to be completed in six to nine months, depending on the student’s pace. - **1:1 Mentorship:** Each student is paired with a personal mentor who provides guidance, feedback, and support throughout the course. This personalized mentorship ensures students stay on track and tackle challenges effectively. - **Comprehensive Curriculum:** Springboard’s curriculum covers essential backend technologies, including Node.js, Python, SQL, and RESTful APIs. Students also learn about DevOps and cloud computing, equipping them with a broad skill set. ### Notable Features: - Flexible, self-paced program. - Job guarantee with a money-back policy. - Capstone projects and career coaching. ### General Assembly General Assembly helps you reach a tech career in an in-demand field like software engineering, data, or user experience design. It helps you land a new career with next-level tech training. ### Why It’s Great: - **Diverse Course Offerings:** General Assembly provides a variety of tech-related courses, including a specialized focus on backend development. This diversity allows students to tailor their learning experience to their career goals. 
- **Strong Community:** Students benefit from a robust alumni network and a supportive community of peers and instructors. This network can be invaluable for job placements and professional growth. - **Career Services:** General Assembly offers comprehensive career services, including resume building, interview preparation, and networking events. These services help graduates transition smoothly into the tech industry. ### Notable Features: - Full-time and part-time programs are available. - Focus on JavaScript, Ruby on Rails, and APIs. - Hands-on projects and real-world applications. ### Thinkful Thinkful helps students launch their careers without paying tuition fees until hired. It offers 6 months of professional career coaching and live personal mentorship from industry experts. ### Why It’s Great: - **Personalized Mentorship:** Thinkful pairs each student with an industry expert mentor. This mentorship provides personalized feedback and support, enhancing the learning experience. - **Job Guarantee:** Thinkful offers a job guarantee, promising a refund if graduates do not secure a job within six months of completing the program. This demonstrates the bootcamp’s confidence in its training. - **Project-Based Learning:** Thinkful emphasizes project-based learning, allowing students to build a portfolio of real-world projects. This approach solidifies their skills and makes them more attractive to potential employers. ### Notable Features: - Flexible learning options, including part-time and full-time programs. - Focus on Python, Node.js, and SQL. - Dedicated career services and job placement support. ### Conclusion Choosing the right backend bootcamp is a critical step in launching or advancing your tech career. These bootcamps — Masteringbackend, Hack Reactor, App Academy, Springboard, General Assembly, and Thinkful — offer unique strengths that cater to different learning styles and career goals.
Whether you prefer a rigorous, immersive experience or a flexible, mentor-driven program, these bootcamps provide the training and support needed to excel in backend development. Consider your personal needs, career objectives, and learning preferences when selecting the best bootcamp for you, and you’ll be well on your way to a successful career in tech. _Originally published at_ [_https://masteringbackend.com_](https://masteringbackend.com/posts/top-5-best-backend-bootcamps-and-why) _on June 7, 2024._ * * *
kaperskyguru
1,880,545
GetBlock Releases Crypto Faucets for Web3 Developers on EVM
Along with its free and paid tools for Web3 developers, from explorers to blockchain nodes,...
0
2024-06-07T15:23:42
https://dev.to/getblockapi/getblock-releases-crypto-faucets-for-web3-developers-on-evm-4e63
getblocknews, cryptofaucets, testnet, eth
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xs574s1fw7xwcmvabcxs.jpg) Along with its free and paid tools for Web3 developers, from explorers to blockchain nodes, GetBlock launches several EVM testnet faucets to satisfy the growing need for developers to test their applications. ## GetBlock deploys free crypto faucets for blockchain developers On June 3, GetBlock, a pioneering force in Web3 infrastructure, announced the launch of its [crypto faucets](https://getblock.io/faucet/?utm_source=external&utm_medium=article&utm_campaign=devto_getblock-faucets), a groundbreaking tool designed to empower developers across the globe. They can obtain test tokens, which have no value and can be used to test decentralized applications (dApps) without risks. It’s a step forward in the company's mission to enhance the Ethereum Virtual Machine (EVM) developer ecosystem. The latest release includes the addition of Ethereum Sepolia and Polygon Amoy, joining Arbitrum, Optimism, Base, Scroll, Blast, Linea, and zkSync Sepolia testnets, which cater specifically to L2-focused developers. GetBlock’s CEO, Arseniy Voitenko, says: _We are thrilled to introduce our crypto faucets to the global Web3 community. This release is a testament to our commitment to providing user-friendly and secure tools for blockchain enthusiasts. We’re sure they will introduce GetBlock to a new generation of developers, enabling them to build secure and reliable decentralized applications._ Sepolia and Amoy testnets are cutting-edge testing networks that have replaced Ethereum Goerli and Polygon Mumbai respectively. They provide a risk-free environment for developers to experiment with smart contracts and dApps without the risk of losing funds. GetBlock's faucets dashboard is designed to simplify the development process, making it easier for developers to focus on building innovative applications. 
## Testnet faucets are here and ready to use All faucets are available for free for all users registered with GetBlock. Everyone can claim free test ETH or MATIC from the faucets, making them accessible to developers of all levels. The platform offers multiple sign-up options, including email, Google Account, and MetaMask wallet. For paid tariff plans, GetBlock accepts both fiat and crypto payments, connecting the benefits of Web3 with fiat currency support. To prevent spam and other misuse, the faucets have some limitations. All users can get 0.02 test ETH every day for free. To start, developers must already have at least 0.001 test ETH in their wallet. These limitations are introduced to ensure that all developers get test ETH as soon as possible while maintaining system stability. ## Contributing to the community of Web3 creators With its crypto faucets, GetBlock continues to empower the global Web3 ecosystem, helping blockchain developers and enthusiasts realize their goals. Using test ETH, all EVM developers can stress-test dApps on major smart contract platforms like Polygon and Ethereum, dominant “optimistic” L2s such as Arbitrum and Optimism, and cutting-edge ZK-powered platforms including Scroll, Linea, Base, and zkSync. In 2024, GetBlock released free-to-use block explorers for Ethereum, Binance Smart Chain, NEAR Protocol, Harmony, and more. They allow users to navigate through the blocks, transactions, accounts, and validators of various blockchains. Additionally, in May 2024, GetBlock launched a Marketplace, a free-to-use service where users can find useful and innovative Web3 applications, enhancing their experience and improving efficiency. Furthermore, GetBlock supports Ophir Institute, a blockchain education initiative that teaches the basics of Web3 development. Its students build and test their dApps using GetBlock’s nodes. 
GetBlock's free account allows 40,000 blockchain requests per day, which is insufficient for large-scale commercial dApps but enough for testing and experimentation with blockchain methods. Combined with the faucets, this lets newcomers build test applications and upgrade to a paid plan as their experience and application size grow. It provides access to 50+ blockchains with 99.99% uptime, including testnets and mainnets, and developers can choose various API interfaces, such as gRPC, WebSockets, or JSON RPCs. [GetBlock](https://getblock.io/?utm_source=external&utm_medium=article&utm_campaign=devto_getblock-faucets) is dedicated to Web3 development support and evolves with the ecosystem, adding new networks and optimizing their performance.
getblockapi
1,880,544
Windsor Insulation
At Windsor Insulation, LLC, we are dedicated to providing top-quality insulation services to homes...
0
2024-06-07T15:23:08
https://dev.to/windsorinsulationllc/windsor-insulation-llc-1m4k
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2wib3q0txvpihsj31bxl.png) At Windsor Insulation, LLC, we are dedicated to providing top-quality insulation services to homes and businesses in Littlestown, PA, and the surrounding areas. Our team of experienced [attic insulation contractors in Littlestown](https://www.google.com/maps?cid=4916554106093773819) is committed to ensuring that your insulation is installed to our high standards, using the latest techniques and materials. **Our Services:** · Insulation contractor · Blown-in wall system · Cellulose insulation · Can foam insulation · Radiant Heat Barrier · Fiberglass insulation · Mineral wool insulation · Rigid Foam Board **Visit Our Profiles:** [Windsor Insulation, LLC | Facebook](https://www.facebook.com/windsorinsulation/) [Windsor Insulation, LLC | Instagram](https://www.instagram.com/windsorinsulation) [Windsor Insulation, LLC | X](https://twitter.com/windsorinsulate) [Windsor Insulation, LLC | YouTube](https://www.youtube.com/@windsorinsulationco) [Windsor Insulation, LLC | TikTok](http://www.tiktok.com/@windsorinsulation) <u>_**Contact Us** [Windsor Insulation, LLC](https://windsorinsulation.co/) [Littlestown, PA](https://www.google.com/maps/place/Windsor+Insulation,+LLC/@39.8582455,-76.8661281,9z/data=!3m1!4b1!4m6!3m5!1s0x4649f6acde3e0079:0x443b1be914b523fb!8m2!3d39.8582455!4d-76.8661281!16s%2Fg%2F11kj2gsymr?entry=ttu) [(717) 688-4370](https://tinyurl.com/insulation-company-near-me)_</u>
windsorinsulationllc
1,880,543
What are Spring Boot Actuator endpoints?
Spring Boot Actuator endpoints provide a set of built-in HTTP endpoints that expose useful...
0
2024-06-07T15:20:55
https://dev.to/codegreen/what-are-spring-boot-actuator-endpoints-3ff5
java, microservices, backend, springboot
Spring Boot Actuator endpoints provide a set of built-in HTTP endpoints that expose useful information and operations about the running Spring Boot application. These endpoints are invaluable for monitoring, managing, and troubleshooting the application in production. Here are some commonly used Spring Boot Actuator endpoints: * **/actuator/health:** Provides information about the health of the application, including status, details, and any additional health checks. * **/actuator/info:** Displays custom application information, such as version, description, and any other details configured by the developer. * **/actuator/metrics:** Exposes various metrics about the application, such as memory usage, garbage collection statistics, HTTP request statistics, etc. * **/actuator/env:** Allows inspecting environment properties of the application, including configuration properties and system environment variables (read-only in Spring Boot 2.x). * **/actuator/beans:** Provides a list of all beans in the application context along with their dependencies. * **/actuator/httptrace:** Displays trace information for recent HTTP requests handled by the application (this endpoint was `/actuator/trace` in Spring Boot 1.x; since Spring Boot 2.2 it also requires an `HttpTraceRepository` bean to be present). Example: To enable Spring Boot Actuator endpoints, we simply need to include the `spring-boot-starter-actuator` dependency in our project's configuration: <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> Once Actuator is included, we can access the endpoints via HTTP requests. For example, to check the health of our application, we can send a GET request to `/actuator/health`. These endpoints offer valuable insights into the runtime behavior of our Spring Boot application, allowing us to monitor its health, performance, and various other aspects, ultimately facilitating efficient management and troubleshooting.
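Note that adding the starter does not web-expose everything: in Spring Boot 2.x most endpoints are enabled but only a small subset (essentially `health`, and historically `info`) is reachable over HTTP by default. A hedged configuration sketch for opting in; the exact endpoint list below is just an example, not a recommendation:

```properties
# application.properties
# Expose only the endpoints actually needed; "*" would expose all of them,
# which is convenient in development but should be locked down in production.
management.endpoints.web.exposure.include=health,info,metrics,env
# Show full health details only to authorized users
management.endpoint.health.show-details=when-authorized
```

With this in place, `GET /actuator/metrics` and friends become available alongside `/actuator/health`.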
manishthakurani
1,880,536
This Week In React #188 : React 19 RC0, Data Fetching, Framer Motion, Compiler, Astro, Zod, Remix, Docusaurus, React-Query...
Hey, This is Benedikt this week, filling in for Sébastien who’s taking a newsletter break. React 19...
18,494
2024-06-07T15:17:14
https://thisweekinreact.com/newsletter/188
react, reactnative
--- series: This Week In React canonical_url: https://thisweekinreact.com/newsletter/188 --- Hey, This is [Benedikt](https://twitter.com/bndkt) this week, filling in for Sébastien who’s taking a newsletter break. React 19 is nearing its final release, with the first RC published this week. Apart from this, there’s not a lot of release activity in the React and React Native communities. So this week, I decided to feature two great pieces of content as headlines in both sections. A seminal 10k-word-long article about Data Fetching Patterns in Single-Page Applications by Juntao Qiu, and a great video overview of all the different ways to build React Native apps with Expo by Simon Grimm. I wanted to use this opportunity to let you know that both Sébastien and I put a lot of effort into finding the best content for the React and React Native sections, but I’m sure there is still great stuff that sometimes goes unnoticed. That’s why we’re always open to suggestions, so feel free to let us know when you put something out there! --- 💡 Subscribe to the [official newsletter](https://thisweekinreact.com?utm_source=dev_crosspost) to receive an email every week! [![banner](https://thisweekinreact.com/img/TWIR_POST.png)](https://thisweekinreact.com?utm_source=dev_crosspost) --- ## 💸 Sponsor [![Build your product skills](https://thisweekinreact.com/emails/issues/183/posthog.jpg)](https://newsletter.posthog.com/?utm_source=twir&utm_campaign=twir) **[Build your product skills](https://newsletter.posthog.com/?utm_source=twir&utm_campaign=twir)** [**Product for Engineers**](https://newsletter.posthog.com/?utm_source=twir&utm_campaign=twir) is PostHog’s newsletter dedicated to helping engineers improve their product skills. Learn what questions to ask users, how to build new features users love, and the path to product market fit. 
[**Subscribe for free**](https://newsletter.posthog.com/?utm_source=twir&utm_campaign=twir) to get curated advice on building great products, lessons (and mistakes) from building PostHog, and deep dives into the strategies of top startups. --- ## ⚛️ React 📜 **[Data Fetching Patterns in Single-Page Applications](https://martinfowler.com/articles/data-fetch-spa.html)** This is a long read (~10k words), but such a great article! The title doesn’t do it justice, because in passing, this covers so much more than just data fetching. Basic React concepts, Suspense, async state handling, code splitting, prefetching, and parallel data fetching to avoid waterfalls. There has been a lot of discussion on X about web app performance in the last few weeks (Rails vs. React, MS Edge decision, …) - this is a great article to freshen up the basics of this topic to build a great user experience! - 💸 [Meilisearch - Build search as you type for React](https://www.meilisearch.com/with/react?utm_campaign=sponsoring&utm_source=thisweekinreact) - 👀 [React Router PR - defineRoute API](https://github.com/remix-run/react-router/pull/11596) - 🐦 [shadcn/ui is working on adding charts](https://x.com/shadcn/status/1795920488797188525) - 🐦 [Remix working inside a Cloudflare Durable Object 🤯](https://x.com/threepointone/status/1796223745243119890) - 🐦 [React 19.0.0-rc.0](https://x.com/acdlite/status/1797668537349328923): Andrew Clark says “This is the exact build we'll release as 19.0, unless an issue is reported that requires a breaking change.” - 🐦 [Upcoming Spline (3D rendering) and NextJS RSC integration](https://x.com/rauchg/status/1798021886657650711) - 📜 [I tried React Compiler today, and guess what …](https://www.developerway.com/posts/i-tried-react-compiler): A super interesting and detailed article about React Compiler and real-world use cases. 
Seems like we’re not at the point yet where we can just ignore everything about memoization because the Compiler takes care of it, and apart from this conclusion the article is also a great reminder of how memoization works exactly. - 📜 [Do you still need Framer Motion?](https://motion.dev/blog/do-you-still-need-framer-motion): The creator of the popular React animation library explains you might not need his library anymore thanks to 5 great new/upcoming CSS features. - 📜 [Sneaky React Memory Leaks II: Closures Vs. React Query](https://schiener.io/2024-05-29/react-query-leaks): This follows up on last week's great article. This time it shows how React Query might be affected as well by memory leaks, and how to fix them. Using custom hooks looks like a good general solution to this problem. - 📜 [An even faster Microsoft Edge](https://blogs.windows.com/msedgedev/2024/05/28/an-even-faster-microsoft-edge/): Microsoft was using React for some elements of the UI in its Edge browser and is now abandoning this approach. See the video linked below for more context. - 📜 [Speeding up the JavaScript ecosystem - Server Side JSX](https://marvinh.dev/blog/speeding-up-javascript-ecosystem-part-9/): The JSX runtime of React can lead to many object allocations. Marvin (Preact/Deno) shows it's possible to precompile JSX to improve server rendering of JSX and reduce GC. Maybe React could use that as well? - 📜 [Type-safe module mocking in Storybook](https://storybook.js.org/blog/type-safe-module-mocking/): Very elegant way to leverage subpath imports for module mocking. - 📜 [Multiple action handlers with Zod in Remix](https://www.jacobparis.com/content/zod-actions-remix): If you target one action endpoint from multiple components, this article shows you how to combine the different Zod schemas via discriminated union. 
- 📜 [Embracing Slow Networks: Improving User Experience](https://www.epicweb.dev/embracing-slow-networks-improving-user-experience) - 📜 [Cleaner components with useSuspenseQuery](https://swizec.com/blog/cleaner-components-with-use-suspense-query/) & [Why useSuspenseQuery works](https://swizec.com/blog/why-use-suspense-query-works/) - 📜 [Explore new hooks coming up in React 19](https://blog.saeloun.com/2024/06/05/new-hooks-in-react-19/) - 📦 [React 19.0.0-rc.0](https://www.npmjs.com/package/react/v/19.0.0-rc.0): Andrew Clark said "This is the exact build we'll release as 19.0, unless an issue is reported that requires a breaking change." - 📦 [Docusaurus 3.4](https://docusaurus.io/blog/releases/3.4): A new way to centrally manage tags (great for consistency), an experimental hash router (great for offline browsing without a web server), and an option to use namespaces for local storage (useful if you’re deploying multiple instances of Docusaurus on the same domain). - 🎙 [This Month in React, May 2024 - Updates from React Conf](https://www.reactiflux.com/transcripts/tmir-2024-05) - 🎥 [Jack Herrington - Beware of "auto" Mode in the NextJS App Router](https://www.youtube.com/watch?v=bU3Px8bHjPA) - 🎥 [Theo - Microsoft Regrets Using React (For Edge)](https://www.youtube.com/watch?v=Bm8Ga07BOnI) - 🎥 [Stephanie Dietz - How to Use React's `useOptimistic` Hook](https://www.youtube.com/watch?v=ChF9-P4Bjs4) - 🎥 [UI Engineering - Why 95% of Modals should be Local (in React)](https://www.youtube.com/watch?v=iMoAviG_ZUE) - 🎥 [Aaron Francis - Laravel vs React](https://www.youtube.com/watch?v=gRtv-BVkwA4) - 🎥 [Theo - How React Query Won](https://www.youtube.com/watch?v=xIflplz925Y) --- ## 💸 Sponsor [![WorkOS: enterprise-grade auth in minutes](https://thisweekinreact.com/emails/issues/188/workOS.jpg)](https://workos.com/?utm_source=thisweekinreact&utm_medium=newsletter&utm_campaign=q22024) **[WorkOS: enterprise-grade auth in 
minutes](https://workos.com/?utm_source=thisweekinreact&utm_medium=newsletter&utm_campaign=q22024)** 🔐 WorkOS supports a complete User Management solution along with **SSO, SCIM, RBAC, & FGA**. 🗂️ Unlike other auth providers that rely on user-centric models, WorkOS is designed for B2B SaaS with an **org modeling approach**. 🏗️ The APIs are **flexible, easy-to-use, and modular**. Pick and choose what you need and integrate in minutes. ✨ User Management is **free up to 1 million MAUs** and includes bot protection, impersonation, MFA, & more. 🤝 WorkOS is trusted by hundreds of leading startups like **Perplexity**, **Vercel**, & **Webflow**. Future-proof your auth stack with [WorkOS](https://workos.com/?utm_source=thisweekinreact&utm_medium=newsletter&utm_campaign=q22024) 🚀 --- ## 📱 React-Native ![Simon Grimm: Build React Native with Expo](https://thisweekinreact.com/emails/issues/188/rn188.jpg) 🎥 **[Every Way to Build your React Native App with Expo](https://www.youtube.com/watch?v=cs-zgHjt5RQ)** Many roads lead to Rome, and it seems there are almost as many ways to build an Expo app. This gives us developers a lot of power and freedom, but can also sometimes lead to confusion - not just for beginners! Thankfully, Simon made a great video comparing the different approaches via Expo Go, Expo Prebuild, Xcode, Android Studio, and EAS. This is also a great opportunity to congratulate Expo on reaching [30k stars on GitHub!](https://x.com/expo/status/1796262781584654492) - 💸 [React Native Mastery - The only course you need to Master React Native & Expo](https://notjust.dev/react-native-mastery?utm_source=twir&utm_medium=email) - 🐦 [The most powerful demo AI app, featuring RSC via Expo Router](https://x.com/Baconbrix/status/1795804445738963283): We’ve talked about Evan Bacon’s RSC demo before. This is a great thread on X summing up the promise and including a nice comparison video on how great mobile UI can get especially in the context of LLM chat apps. 
- 📜 [React Native Security Guide](https://ospfranco.com/react-native-security-guide/): Oscar picks up a topic that doesn’t get talked about enough. This is a good intro, covering secrets (TLDR: Don’t store them on a device) and encryption of user data (including leveraging biometrics). - 📜 [Seamless Transitions: From Native to React Native](https://commerce.nearform.com/blog/2024/seamless-transitions-from-native-to-react-native): Some impulses for brownfield development: Because RN creates truly native apps, introducing RN to an existing app or migrating to RN can be made seamless for the end users. - 📦 [React Native 0.74.2](https://github.com/facebook/react-native/releases/tag/v0.74.2): Patch release, bumping some dependencies (typescript-eslint, CLI) and fixing bugs. - 📦 [React Native Reanimated 3.12.0](https://github.com/software-mansion/react-native-reanimated/releases/tag/3.12.0): Introduces a new hook useComposedEventHandler which simplifies merging of multiple event handlers. - 🎙️ [RNR 298 - Create Expo Stack, NativeWind Ul, & Plugin Pro with Dan Stepanov](https://reactnativeradio.com/episodes/rnr-298-create-expo-stack-nativewind-ul-plugin-pro-with-dan-stepanov) - 🎥 [Rocket Ship 42 - React Native is alive! 
App.js Recap with Áron Berezkin](https://www.youtube.com/watch?v=GeUfrojkoo0) - 🎥 [Expo - How to start your first Expo project in 2 minutes](https://www.youtube.com/watch?v=yOUAEfDuI44) --- ## 🔀 Other - 🐦 [Find memory issues using Chrome Dev Tools](https://x.com/addyosmani/status/1796527052306419820) - 👀 [Vitest Browser Mode](https://github.com/vitest-dev/vitest/discussions/5828) - 📜 [10 modern Node.js runtime features to start using in 2024](https://snyk.io/blog/10-modern-node-js-runtime-features/) - 📜 [Live types in a TypeScript monorepo - stop writing dead TypeScript](https://colinhacks.com/essays/live-types-typescript-monorepo) - 📜 [How To Use Corepack](https://www.totaltypescript.com/how-to-use-corepack) - 📜 [Promises From The Ground Up](https://www.joshwcomeau.com/javascript/promises/) - 📜 [Zero - Rethinking local-first, general-purpose sync engine for the web](https://zerosync.dev/) - 📦 [Turborepo 2.0 - New terminal UI, watch mode, all-new docs](https://turbo.build/blog/turbo-2-0) - 📦 [Biome 1.8 - CSS support, new reports, CLI improvements...](https://x.com/biomejs/status/1797963672004043040) - 📦 [Deno 1.44 - Private npm registries, improved Node.js compat, and performance boosts](https://deno.com/blog/v1.44) --- ## 🤭 Fun [![Thank you JavaScript](https://thisweekinreact.com/emails/issues/188/meme.jpg)](https://x.com/fireship_dev/status/1796526654183059925) See ya! 👋
sebastienlorber
1,880,534
How does Spring Boot Application achieve auto configuration internally, explain the use of @EnableAutoConfiguration?
Spring Boot's auto-configuration feature is one of its key strengths, enabling developers to rapidly...
0
2024-06-07T15:11:54
https://dev.to/codegreen/how-does-spring-boot-application-achieve-auto-configuration-internally-explain-the-use-of-enableautoconfiguration-1p4
java, springboot
Spring Boot's auto-configuration feature is one of its key strengths, enabling developers to rapidly build and deploy applications with minimal configuration. Let me explain how it achieves this and the role of `@EnableAutoConfiguration`: Internally, Spring Boot auto-configuration works by scanning the classpath for specific libraries and dependencies commonly used in Spring applications. It then automatically configures beans and components based on the presence of these libraries and the current environment. The `@EnableAutoConfiguration` annotation plays a central role in this process. When we include this annotation in our application, Spring Boot automatically configures the beans required to set up a functional application context. Here's an example to illustrate how `@EnableAutoConfiguration` works: @SpringBootApplication public class MyApplication { public static void main(String[] args) { SpringApplication.run(MyApplication.class, args); } } In this example, the application class is annotated with `@SpringBootApplication`, which is itself a composite of `@SpringBootConfiguration`, `@EnableAutoConfiguration`, and `@ComponentScan`, so auto-configuration is already in effect and `@EnableAutoConfiguration` does not need to be declared separately. When the application starts, Spring Boot scans the classpath and automatically configures beans for common components like data sources, JPA, security, etc., based on the libraries detected in the classpath. For instance, if Spring Boot detects Hibernate on the classpath, it automatically configures a `DataSource`, `EntityManagerFactory`, and `TransactionManager` for JPA persistence. Similarly, if Spring Security is present, it configures security-related beans like `AuthenticationManager` and `FilterChainProxy`. By leveraging `@EnableAutoConfiguration`, Spring Boot simplifies application setup and eliminates the need for manual configuration, allowing developers to focus more on writing business logic rather than boilerplate configuration.
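Under the hood, the auto-configuration classes themselves are ordinary `@Configuration` classes that Spring Boot discovers from a registration file shipped inside each starter jar. A hedged sketch of how a library opts in (the class name `com.example.autoconfig.MyLibraryAutoConfiguration` is hypothetical):

```properties
# META-INF/spring.factories (Spring Boot 2.6 and earlier)
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.example.autoconfig.MyLibraryAutoConfiguration

# From Spring Boot 2.7 onward, the same class name is instead listed,
# one per line, in:
# META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports
```

Every registered class is still guarded by `@Conditional...` annotations, which is why simply having (or not having) a library on the classpath is enough to switch the corresponding configuration on or off.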
manishthakurani
1,880,533
Unlock Savings with Protyre Discount Code
If you’re looking to maintain your vehicle without breaking the bank, Protyre is the place to go....
0
2024-06-07T15:11:11
https://dev.to/harrybrook22/unlock-savings-with-protyre-discount-code-ni6
react
- If you’re looking to maintain your vehicle without breaking the bank, Protyre is the place to go. Known for its exceptional service and extensive range of tyres, Protyre is a go-to for many motorists in the UK. To make your experience even better, you can take advantage of a [Protyre discount code](https://www.voucherpro.co.uk/protyre.co.uk), ensuring you get top-quality products and services at unbeatable prices. Why Choose Protyre? Protyre stands out in the automotive service industry for several reasons: Wide Selection: Protyre offers a vast array of tyres from leading brands. Whether you need tyres for a car, van, or motorcycle, Protyre has you covered. Professional Service: Their team of experts ensures your vehicle is in safe hands. From tyre fitting to MOT testing, you can rely on Protyre for all your automotive needs. Convenient Locations: With numerous centres across the UK, Protyre makes it easy to find a service centre near you. Comprehensive Services: Besides tyres, Protyre provides a range of services including wheel alignment, brake checks, and vehicle servicing. How to Use a Protyre Discount Code Using a Protyre discount code is straightforward and can save you a significant amount of money. Here’s how to do it: Find a Valid Code: Search for a current Protyre discount code online. Websites specializing in coupons and deals are great resources. Select Your Products or Services: Visit the Protyre website and choose the tyres or services you need. Add them to your cart. Apply the Code: At checkout, you’ll see a field to enter your discount code. Paste your Protyre discount code here and click apply. Enjoy the Savings: The discount will be deducted from your total, and you can proceed with your purchase at a reduced price. Benefits of Using a Protyre Discount Code Cost-Effective Maintenance: Keeping your vehicle in top condition can be expensive. A discount code helps alleviate some of these costs. 
Access to Premium Products: With savings from a discount code, you can opt for higher-quality tyres or services that might have been outside your budget. Frequent Promotions: Protyre often runs promotions and discounts, so keep an eye out for the latest codes to maximize your savings. Where to Find Protyre Discount Codes Discount codes can be found on various platforms: Coupon Websites: Websites like VoucherCodes, Groupon, and RetailMeNot regularly feature Protyre discount codes. Protyre’s Official Website: Occasionally, Protyre offers exclusive discounts directly on their site or through their newsletter. Social Media: Follow Protyre on social media platforms for announcements of special promotions and discount codes. Tips for Maximizing Savings Combine Offers: Sometimes, you can use a Protyre discount code alongside other promotions or sales for even greater savings. Check Validity: Ensure the discount code you have is valid and not expired to avoid any issues at checkout. Regular Maintenance: Regular checks and maintenance can prevent costly repairs. Use discount codes for routine services to keep costs down. Conclusion Protyre is a trusted name in the automotive industry, offering a wide range of services and products to keep your vehicle running smoothly. By using a Protyre discount code, you can enjoy these top-notch services at a fraction of the cost. Whether you need new tyres, an MOT test, or other vehicle services, don’t forget to search for a discount code to make your experience more affordable. Keep your vehicle in prime condition and your wallet happy with the power of a Protyre discount code!
harrybrook22
1,880,532
Intermediate Python Projects
Intermediate Python Projects Python is an incredibly versatile programming language that...
0
2024-06-07T15:10:03
https://dev.to/romulogatto/intermediate-python-projects-26ng
# Intermediate Python Projects Python is an incredibly versatile programming language that can be used for a wide range of projects. If you're feeling confident in your Python skills and looking to take on new challenges, here are some intermediate Python projects to consider. These projects will not only help you further solidify your understanding of Python concepts but also allow you to create practical applications. ## 1. Build a Weather App **Description:** A weather app allows users to get real-time weather updates based on their location or any specified location. It fetches data from a third-party API and presents it in a user-friendly format. **Skills Learned:** - Working with APIs - Parsing JSON data - User input handling - GUI development (optional) To build this project, start by selecting a weather API of your choice such as OpenWeatherMap or WeatherBit API. Use the `requests` library in Python to make HTTP requests and retrieve the weather data as JSON. Parse the JSON response to extract relevant information like temperature, humidity, and wind speed. If you're interested in GUI development, libraries like Tkinter or PyQt can help you design an interactive user interface for your app. ## 2. Implement Data Structures **Description:** Creating different data structures from scratch can improve your understanding of how they work internally and when best to use them. **Skills Learned:** - Algorithmic thinking - Understanding complex data structures - Efficiency analysis Choose one or more popular data structures such as linked lists, binary trees, stacks, queues, or heaps - then implement them in Python using classes and methods. Enhance these implementations with common operations like insertion/deletion/searching/sorting as appropriate for each data structure. By implementing these algorithms yourself rather than relying on pre-built modules, you'll gain invaluable insights into how they function under the hood. ## 3. 
Develop a Web Scraper **Description:** A web scraper extracts information from websites by navigating through the HTML structure of web pages. **Skills Learned:** - Web scraping techniques - Parsing HTML using libraries like BeautifulSoup - Regular expressions for data extraction Choose a website that interests you and use Python's `requests` library to fetch its content. Then, use a library like BeautifulSoup to parse the HTML and identify specific elements or patterns you want to scrape. You can extract text, images, links, tables, or any other data present on the webpage. Remember to check if your chosen website allows web scraping and respect their terms of service. ## 4. Build an Image Processing Application **Description:** An image processing application allows users to perform various operations on images such as cropping, resizing, rotating, applying filters/effects, or even creating collages. **Skills Learned:** - Working with image files in Python (PIL/Pillow) - Image manipulation algorithms - User interface design (optional) Start by installing the Pillow library in Python for working with image files. Explore different built-in functions like opening/closing images and applying different transformations. Implement functionality such as cropping or resizing based on user input. To take it further and enhance your skills in GUI development, consider using libraries like Tkinter or Pygame for building an interactive user interface where users can apply filters/effects by clicking buttons or manipulating sliders. ## 5. Create a Chatbot **Description:** A chatbot is an artificial intelligence software capable of having conversations with human users. **Skills Learned:** - Natural Language Processing (NLP) basics
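To make the data-structures project above concrete, here is a minimal singly linked list written from scratch using only the standard library. The class and method names are illustrative choices, not prescribed by the article:

```python
class Node:
    """One element of the list, holding a value and a link to the next node."""

    def __init__(self, value):
        self.value = value
        self.next = None


class LinkedList:
    """A minimal singly linked list with O(1) front insertion."""

    def __init__(self):
        self.head = None

    def insert_front(self, value):
        node = Node(value)
        node.next = self.head
        self.head = node

    def search(self, value):
        # Walk the chain until the value is found or the list ends: O(n).
        current = self.head
        while current is not None:
            if current.value == value:
                return True
            current = current.next
        return False

    def delete(self, value):
        # Remove the first node holding `value`; return True if one was found.
        previous, current = None, self.head
        while current is not None:
            if current.value == value:
                if previous is None:
                    self.head = current.next
                else:
                    previous.next = current.next
                return True
            previous, current = current, current.next
        return False

    def to_list(self):
        # Collect values head-to-tail into a plain Python list.
        values, current = [], self.head
        while current is not None:
            values.append(current.value)
            current = current.next
        return values
```

Implementing the traversal and pointer rewiring yourself, instead of reaching for `list` or `collections.deque`, is exactly where the learning happens: you see why front insertion is O(1) while search and delete are O(n).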
romulogatto
1,880,531
what are conditional annotations in Spring Boot Application?
Conditional annotations in Spring Boot allow us to conditionally apply certain configurations or...
0
2024-06-07T15:08:42
https://dev.to/codegreen/what-are-conditional-annotations-in-spring-boot-application-1m1k
java, springboot, microservices, backend
Conditional annotations in Spring Boot allow us to conditionally apply certain configurations or beans based on the evaluation of specified conditions at runtime. These annotations help in creating flexible and customizable application setups. Here's how conditional annotations work and how they are used in a Spring Boot application:

* **@ConditionalOnProperty:** This annotation allows us to conditionally enable a bean or configuration based on the value of a specified property in the application's configuration files.
* **@ConditionalOnClass:** It enables a bean or configuration if specified classes are present in the classpath.
* **@ConditionalOnMissingBean:** This annotation allows us to define a bean only if another bean of the same type is not already registered in the application context.
* **@ConditionalOnExpression:** It enables a bean or configuration based on the evaluation of a SpEL expression.
* **@ConditionalOnWebApplication:** It enables a bean or configuration if the application is running in a web environment.
* **@Conditional:** This annotation provides a general-purpose mechanism to define custom conditions for bean registration.

Example: Suppose we want to configure a bean only if a specific property is present in our application's configuration file. We can use `@ConditionalOnProperty` for this purpose:

```java
@Configuration
public class MyConfiguration {

    @Bean
    @ConditionalOnProperty(name = "myapp.feature.enabled", havingValue = "true")
    public MyBean myBean() {
        return new MyBean();
    }
}
```

In this example, the `MyBean` bean will only be registered if the property `myapp.feature.enabled` is set to `true` in the application's properties file. Otherwise, it will not be created.

Conditional annotations provide a powerful mechanism to customize the behavior of our Spring Boot application based on various conditions, making our application more adaptable and efficient.
manishthakurani
1,880,530
What are the best practices for versioning Spring Boot REST APIs?
Versioning Spring Boot REST APIs is crucial for maintaining compatibility, managing changes, and...
27,974
2024-06-07T15:04:37
https://dev.to/codegreen/what-are-the-best-practices-for-versioning-spring-boot-rest-apis-3a05
java, springboot, backend, microservices
Versioning Spring Boot REST APIs is crucial for maintaining compatibility, managing changes, and ensuring smooth communication between clients and servers. Here are some best practices:

1. **URI Versioning:** Include the version number in the URI path. This approach provides clear visibility and allows clients to specify the version they want to use.
2. **Accept Header Versioning:** Use the `Accept` header to specify the API version. This approach is cleaner and doesn't clutter the URI with version information.
3. **Stable URIs:** Once an API version is released, maintain its URI structure to ensure backward compatibility. Avoid changing URIs frequently to prevent breaking existing clients.
4. **Document API Changes:** Document API changes, additions, and deprecations in the API documentation. Clearly communicate version updates to clients to facilitate migration.
5. **API Versioning Strategy:** Choose a versioning strategy that aligns with your project's requirements. Common strategies include using numerical versions (e.g., v1, v2), date-based versions, or semantic versioning.
6. **Deprecation Policy:** Define a deprecation policy for older API versions. Provide sufficient notice to clients before deprecating a version and sunset deprecated versions gracefully.

Example: Let's say we have a Spring Boot REST API for managing user profiles, and we want to version it. Here's how we can implement URI versioning:

```java
// Version 1 of the user profile API
@RestController
@RequestMapping("/api/v1/users")
public class UserProfileControllerV1 {
    // API endpoints for user profile management
}

// Version 2 of the user profile API
@RestController
@RequestMapping("/api/v2/users")
public class UserProfileControllerV2 {
    // Updated API endpoints with new features or changes
}
```

In this example, we've created separate controller classes for each API version, and we include the version number (`v1` and `v2`) in the URI path. Clients can choose which version to use based on their requirements.

By following these best practices, we can effectively manage and evolve our Spring Boot REST APIs while ensuring compatibility and ease of use for clients.
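The Accept-header strategy from point 2 boils down to parsing a version out of a vendor media type before routing. Here is a plain-Java sketch of that parsing, outside of Spring; the vendor type `application/vnd.myapp.v2+json` is an illustrative assumption, not something the article prescribes:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract the requested API version from an Accept header such as
// "application/vnd.myapp.v2+json" ("myapp" is a made-up vendor name).
class AcceptHeaderVersion {
    private static final Pattern VERSION = Pattern.compile("vnd\\.myapp\\.v(\\d+)\\+json");

    // Returns the requested version, or the given default when none is present.
    static int parse(String acceptHeader, int defaultVersion) {
        if (acceptHeader == null) return defaultVersion;
        Matcher m = VERSION.matcher(acceptHeader);
        return m.find() ? Integer.parseInt(m.group(1)) : defaultVersion;
    }

    public static void main(String[] args) {
        System.out.println(parse("application/vnd.myapp.v2+json", 1)); // 2
        System.out.println(parse("application/json", 1));              // 1
    }
}
```

In a real Spring application you would more likely let Spring do this routing for you via the `produces` attribute of `@RequestMapping` rather than parsing headers by hand.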
manishthakurani
1,877,255
Dynamically pairing tokio spawns
Hi everyone! While learning Rust I stumbled upon a problem not covered by popular online tutorials....
0
2024-06-07T15:01:56
https://dev.to/bbkr/dynamically-pairing-tokio-spawns-3nd4
rust, tokio
Hi everyone!

While learning Rust I stumbled upon a problem not covered by popular online tutorials. When they talk about `tokio` and `MPSC` (multi-producer single-consumer) channels they usually connect spawned threads in some fixed way. However, in my project I have to dynamically match asynchronous producers and consumers in various configurations. So let me share a useful pattern I've discovered on my Rust journey.

Let's say we have a restaurant:

```
$ cargo init restaurant
    Creating binary (application) package
$ cd restaurant
$ cargo add tokio --features=full
    Updating crates.io index
      Adding tokio v1.38.0 to dependencies
...
```

As a manager we can assign different cooking stands to asynchronously prepare different types of food (don't worry about undefined values for now):

```rust
async fn cooking_stand (food: char) {
    loop {
        somewhere.send(food.clone()).await;
    }
}
```

Food should be delivered to the tables awaiting it.

```rust
async fn table (number: u8) {
    loop {
        let food = somehow.recv().await;
        println!("Got {} at table {}", food, number);
    }
}
```

Now we can organize our restaurant:

```rust
#[tokio::main]
async fn main () {

    // cooking stands
    tokio::spawn(cooking_stand('🥗')); // salad
    tokio::spawn(cooking_stand('🍔')); // burger
    ...

    // tables for guests
    tokio::spawn(table(1));
    tokio::spawn(table(2));
    ...

    // keep our restaurant open for 1s
    sleep(Duration::from_millis(1000)).await;
}
```

## Problem

For simplicity let's assume we accept orders through an application. So the restaurant manager (main thread) knows, for example, that table `1` is waiting for 🥗 and table `3` is waiting for 🍔. But how do we actually fulfill those orders?

## Naive approach

```
cooking_stand -> 🥗🥗🥗🥗🥗 ->            -> table 1
cooking_stand -> 🍕🍕🍕🍕🍕 ->  manager  -> table 2
cooking_stand -> 🍔🍔🍔🍔🍔 ->            -> table 3
```

If we force the manager to do the job, he can wait for the 🥗 cooking stand to prepare a salad and then pass it to table `1`. Then wait for the 🍔 cooking stand to prepare a burger and carry it to table `3`.
This is an obviously flawed design:

- Cooking stands produce food whether it is needed or not.
- If a cooking stand is slow then the manager will be stuck waiting for food to be prepared.
- The manager should not do the heavy lifting because it affects his responsiveness.

## We need waiters

Fortunately tokio gives us the perfect tool for the job - [oneshot channels](https://docs.rs/tokio/latest/tokio/sync/oneshot/index.html). Those channels are designed and optimized to pass a single value one time.

```rust
let (waiter_tx, waiter_rx) = oneshot::channel::<char>();
```

To make a waiter deliver 🥗 to table `1`, first we need to modify our cooking stands:

```rust
async fn cooking_stand (
    product: char,
    mut waiters: tokio::sync::mpsc::Receiver<oneshot::Sender<char>>
) {
    while let Some(waiter) = waiters.recv().await {
        waiter.send(product.clone());
    }
}
```

Where `tokio::sync::mpsc::Receiver<oneshot::Sender<char>>` is a queue of waiters. Yes, you read it right. You **can** send oneshot channels through other channels. When a waiter arrives at a cooking stand, the stand prepares food and hands it to the waiter to be delivered to a table.

Let's do the same for tables, but they should get the receiving half of the specific waiter who will bring food to them:

```rust
async fn table (
    number: u8,
    mut waiters: tokio::sync::mpsc::Receiver<oneshot::Receiver<char>>
) {
    while let Some(waiter) = waiters.recv().await {
        let food = waiter.await.unwrap();
        println!("Got {} at table {}", food, number);
    }
}
```

When a waiter is assigned to a table, the customer waits for this waiter to deliver the food produced by the cooking stand.

And to complete the puzzle, let's modify our `main` function. The manager, instead of doing the heavy lifting himself, can hire waiters and assign them to pairs of cooking stands and tables to fulfill food orders.
```rust
#[tokio::main]
async fn main () {

    // used by manager to send waiters to cooking stands
    let (stand_salad_tx, stand_salad_rx) = mpsc::channel::<oneshot::Sender<char>>(100);
    let (stand_pizza_tx, stand_pizza_rx) = mpsc::channel::<oneshot::Sender<char>>(100);
    let (stand_burger_tx, stand_burger_rx) = mpsc::channel::<oneshot::Sender<char>>(100);

    // set up cooking stands
    tokio::spawn(cooking_stand('🥗', stand_salad_rx));
    tokio::spawn(cooking_stand('🍕', stand_pizza_rx));
    tokio::spawn(cooking_stand('🍔', stand_burger_rx));

    // used by manager to send waiters to tables
    let mut tables: Vec<tokio::sync::mpsc::Sender<oneshot::Receiver<char>>> = Vec::new();

    // set up tables
    for number in 1..=4 {
        let (table_tx, table_rx) = mpsc::channel::<oneshot::Receiver<char>>(100);
        tables.push(table_tx);
        tokio::spawn(table(number, table_rx));
    }
}
```

Let's check if it works by adding the following code at the end of our `main`:

```rust
// create waiter
let (waiter_tx, waiter_rx) = oneshot::channel::<char>();

// send him for food to the salad stand
stand_salad_tx.send(waiter_tx).await;

// send him to deliver food to table `1`
tables.get(0).unwrap().send(waiter_rx).await;

// manager can go back to doing his stuff

// keep our restaurant open for 1s
sleep(Duration::from_millis(1000)).await;
```

When run, it produces the following output:

```
Got 🥗 at table 1
```

Yay!

## Conclusions

This pattern of sending the two halves of oneshot channels through regular channels to tokio spawns can be used to implement all kinds of traffic control: passing messages with a given ratio, with throttling, etc.

- **Is it efficient?** Very! I was surprised how well oneshot channels are optimized. A single core of my Ryzen 6800U processor was able to create over `5_000_000` oneshot channels and send them to corresponding spawns per second. That's crazy fast.
- **How to scale it?** There will be situations when the manager encounters an overfilled channel (in tokio all channels are bounded) and will not be able to immediately send a oneshot channel. In those cases you may, for example, increase the number of producers/consumers by issuing more spawns, like adding another burger stand and sending oneshots to the stands in round-robin order. Everything depends on what your spawns are actually doing.
- **What about error handling?** You must keep the oneshot channel behavior in mind: _If the Receiver is closed before receiving a message which has already been sent, the message will remain in the channel until the receiver is dropped, at which point the message will be dropped immediately._ So even if the two halves of a oneshot channel were sent to their corresponding spawns, it still does not mean its purpose will be fulfilled. Error handling in this case depends on which scenario you implement and how you need to react to delivery issues.

## Thanks for reading

This is my first Rust post and I'm still discovering the language's features. If you think something could or should be implemented better, then let me know in the comments.
bbkr
1,880,528
What are the steps you would take to identify and address performance issues in Spring Boot?
When encountering performance issues in a Spring Boot application under high load, it's essential to...
0
2024-06-07T15:00:53
https://dev.to/codegreen/what-are-the-steps-you-would-take-to-identify-and-address-the-performance-issues-in-spring-boot-47af
java, springboot, microservices, backend
When encountering performance issues in a Spring Boot application under high load, it's essential to diagnose and address the root causes effectively. Here's a structured approach to tackle the problem:

1. **Performance Monitoring:** Utilize tools like Spring Boot Actuator and monitoring solutions such as Prometheus and Grafana to collect and analyze performance metrics in real-time. This helps identify bottlenecks and areas of concern.
2. **Profiling:** Use profilers like YourKit or VisualVM to analyze CPU and memory usage, thread contention, and hotspots within the application. This provides insights into which parts of the code are consuming the most resources.
3. **Database Optimization:** Optimize database queries by analyzing query execution plans, adding appropriate indexes, and avoiding N+1 query problems. Tools like Hibernate statistics and database monitoring tools can aid in identifying inefficient queries.
4. **Caching:** Introduce caching for frequently accessed data using Spring's caching abstraction or third-party caching solutions like Redis or Memcached. Caching can significantly reduce database load and response times.
5. **Concurrency Management:** Evaluate and optimize thread pool configurations, asynchronous processing, and parallelization to handle concurrent requests efficiently. Be cautious of thread contention and synchronization issues.
6. **Code Optimization:** Review and optimize critical sections of code, identify and eliminate unnecessary object creation, and refactor performance-critical components. Techniques like lazy loading and batch processing can also help improve performance.
7. **External Service Calls:** Analyze and optimize external service calls by reducing latency, implementing retries with backoff strategies, and employing circuit breakers to handle failures gracefully.
8. **Load Testing:** Conduct comprehensive load testing using tools like Apache JMeter or Gatling to simulate high traffic scenarios and identify performance bottlenecks under different load conditions.

Example: Let's say we're experiencing slow response times in our Spring Boot application due to inefficient database queries. We can use Spring Boot Actuator to monitor database-related metrics and tools like Hibernate statistics to analyze query performance. Upon identifying the problematic queries, we optimize them by adding appropriate indexes and refactoring the code to reduce unnecessary database round-trips.

By following these steps and continuously monitoring and optimizing our application, we can effectively address performance issues and ensure optimal performance even under high load.
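To make the caching point (step 4) concrete, here is a minimal plain-Java sketch of memoizing an expensive lookup. It is not Spring's caching abstraction, just the underlying idea that repeated reads should skip the database; the `queryDatabase` stand-in is a made-up placeholder for a slow query:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: cache the result of a slow lookup so repeated requests
// for the same key hit memory instead of the database.
class QueryCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    final AtomicInteger dbCalls = new AtomicInteger();

    // Stand-in for a slow database query (hypothetical).
    private String queryDatabase(String id) {
        dbCalls.incrementAndGet();
        return "user-" + id;
    }

    String fetch(String id) {
        // computeIfAbsent runs the loader at most once per key
        return cache.computeIfAbsent(id, this::queryDatabase);
    }

    public static void main(String[] args) {
        QueryCache qc = new QueryCache();
        qc.fetch("42");
        qc.fetch("42");
        System.out.println("db calls: " + qc.dbCalls.get()); // 1
    }
}
```

With Spring's caching abstraction, the same effect comes from annotating the lookup method with `@Cacheable` and letting the framework manage the cache store.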
manishthakurani
1,880,527
Classic To Lightning Migration: 5 Crucial Considerations For A Smooth Transition
A standard change many companies are making is the switch from Salesforce Classic to Lightning, a...
0
2024-06-07T14:59:52
https://www.opkey.com/blog/salesforce-classic-to-lightning-migration
classic, lightning, migration
A standard change many companies are making is the switch from Salesforce Classic to Lightning, a more contemporary and user-friendly interface. Although there are many advantages to this shift, there are drawbacks as well that need cautious preparation and implementation. This post discusses five essential things to consider before starting a [Classic to Lightning migration](https://www.opkey.com/blog/salesforce-classic-to-lightning-migration).

**Key Takeaways**

- Moving from Salesforce Classic to Lightning is a big change that requires careful planning and preparation to avoid problems and disruptions to your business.
- Before starting the switch, it's important to fully review your current Salesforce setup, customizations, and integrations to understand how the change will impact things.
- User training is a key part of a successful migration. People may be hesitant about the new Lightning interface at first, so good training programs are needed to help them adjust and see the benefits.
- Automated testing tools like Opkey can make the transition smoother by allowing you to easily create and maintain tests across your Salesforce apps, even as things change.
- With proper planning, execution, and readiness to handle any issues that come up, organizations can successfully move from Salesforce Classic to Lightning and gain the advantages of the more modern interface.

**1. Comprehensive Assessment and Planning**

Companies must comprehensively evaluate the current system, modifications, and integrations before starting a conversion process. In addition to the technical issues, this assessment should consider any potential effects the shift may have on business processes and user workflows. Ignoring this critical phase can result in unanticipated issues, hold-ups, and possible operations disruptions. Involving stakeholders from other departments, such as IT, business units, and end users, can aid in locating possible obstacles and formulating mitigation plans.

**2. User Training and Change Management**

For end users, the switch from Classic to Lightning signifies a big adjustment, so proper change management is essential to the migration process. Individuals used to the Classic interface can at first be reluctant or uneasy about the new Lightning experience. Organizations should prioritize thorough user training programs that acquaint people with the new interface and emphasize its advantages to lessen these difficulties. Allowing users enough time to adjust and continuing to provide help will facilitate a more seamless transition and increase user acceptance.

**3. Data Migration and Integrity**

Since data is an organization's lifeblood, it is crucial to protect its integrity when migrating. Complete data mapping, validation, and cleansing must be done to find and fix any flaws or inconsistencies that might be present in the Classic environment. Creating solid data migration plans is also crucial to reduce the possibility of data loss or corruption during the transfer. Enforcing stringent testing protocols and utilizing automated technologies can facilitate data migration while preserving data integrity and quality.

**4. Customization and Integration Compatibility**

Many companies have put a lot of work into integrating with other systems or modifying their Classic environments to satisfy specific business needs. It is imperative to guarantee the compatibility and functionality of these customizations and integrations during the Lightning conversion. Maintaining business continuity and preventing interruptions to crucial processes requires conducting thorough compatibility assessments, making the required modifications, or looking for other solutions.

**5. Performance and Scalability Considerations**

Switching to Lightning offers a chance to improve scalability and system performance optimization. Evaluating the migration's possible effects on system resources, such as memory, processor speed, and network bandwidth, is crucial.
Inadequate attention to performance and scalability issues can lead to slower load times, a less-than-ideal user experience, and decreased system responsiveness. Load testing, putting caching measures into place, and refining data retrieval methods can all assist in minimizing possible performance bottlenecks and guarantee a fluid and responsive Lightning experience.

**Conclusion**

The Classic to Lightning migration process calls for meticulous preparation, skillful execution, and a dedication to anticipating and resolving issues before they arise. As businesses migrate from Salesforce Classic to Lightning, automated testing with [Opkey](https://www.opkey.com/) ensures a smooth transition. Opkey's no-code test automation simplifies building and maintaining robust test suites across Salesforce applications. With Opkey's AI-assisted self-healing capabilities, tests heal automatically when applications change, eliminating flaky tests. With Opkey's comprehensive test management, migration becomes a smooth process, as it rigorously validates Lightning applications through thousands of real-world test scenarios. Companies can accelerate migration and go live faster while reducing testing effort and costs with Opkey's intelligent test automation.
rohitbhandari102
1,880,526
Cashapp $750 Gift Card Now USA
You've received a $750 Cashapp Rewards 2024, 𝐂𝐥𝐢𝐜𝐤 𝐓𝐡𝐞 𝐋𝐢𝐧𝐤 𝐁𝐞𝐥𝐨𝐰 𝐀𝐧𝐝 𝐂𝐥𝐚𝐢𝐦 𝐘𝐨𝐮𝐫 Cashapp Rewards...
0
2024-06-07T14:58:12
https://dev.to/cashapp6548145/cashapp-750-gift-card-now-usa-djm
cashapp, makemoneyonline, usanews, usajobs
You've received a $750 Cashapp Rewards 2024, 𝐂𝐥𝐢𝐜𝐤 𝐓𝐡𝐞 𝐋𝐢𝐧𝐤 𝐁𝐞𝐥𝐨𝐰 𝐀𝐧𝐝 𝐂𝐥𝐚𝐢𝐦 𝐘𝐨𝐮𝐫 Cashapp Rewards 𝐐𝐮𝐢𝐜𝐤𝐥𝐲🎁 lect your Cashapp Gift $750 Voucher per person🍔 🎉If you interested click here🍟 👇🍔👇Link>>>> https://tinyurl.com/fwys22px ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5t48379ixrr013c0ed7j.png)
cashapp6548145
1,890,193
How to Enable IIS 6 Scripting Tools in Windows 11?
IIS 6 Scripting Tools in Windows 11 provide backward compatibility for scripting management tasks...
0
2024-06-16T10:40:26
https://winsides.com/enable-iis-6-scripting-tools-in-windows-11/
windowsfeatures, enableiisscriptingto, howtoenableinwindows, windows11
---
title: How to Enable IIS 6 Scripting Tools in Windows 11?
published: true
cover_image: https://winsides.com/wp-content/uploads/2024/06/IIS-Scripting-Tool-in-Windows-11.jpg
date: 2024-06-07 14:52:37 UTC
tags: WindowsFeatures,EnableIISScriptingTo,HowtoEnableinWindows,windows11
canonical_url: https://winsides.com/enable-iis-6-scripting-tools-in-windows-11/
---

**IIS 6 Scripting Tools in Windows 11** provide **backward compatibility** for scripting management tasks using IIS 6.0 scripts. These tools are handy for administrators and developers who need to manage IIS servers using **legacy scripts and command-line tools** originally designed for **IIS 6.0**. In this article, we will check out how to enable IIS 6 Scripting Tools in Windows 11.

- Open the **Run command box** using the shortcut <kbd>Win Key + R</kbd>.
- Enter `optionalfeatures` in the Run command box.

![Optional Features](https://winsides.com/wp-content/uploads/2024/05/Optional-Features-1.jpg "Enable IIS 6 Scripting Tools in Windows 11 103") _Optional Features_

- The **Windows Features** dialog box will open now.
- Locate the **Internet Information Services (IIS)** node and expand it.

![Expand Internet Information Services](https://winsides.com/wp-content/uploads/2024/06/Expand-Internet-Information-Services.jpg "Enable IIS 6 Scripting Tools in Windows 11 104") _Expand Internet Information Services_

- Expand **Web Management Tools**.

![Web Management Tools](https://winsides.com/wp-content/uploads/2024/06/Expand-Web-Management-Tools.jpg "Enable IIS 6 Scripting Tools in Windows 11 105") _Web Management Tools_

- Then, you can find the **IIS 6 Management Compatibility** node. Expand it.

![IIS 6 Management Compatibility](https://winsides.com/wp-content/uploads/2024/06/IIS-6-Management-Compatibility.jpg "Enable IIS 6 Scripting Tools in Windows 11 106") _IIS 6 Management Compatibility_

- Now you can find **IIS 6 Scripting Tools**. Click the checkbox next to it to enable it. Finally, click **OK**.
![Enable IIS 6 Scripting Tools](https://winsides.com/wp-content/uploads/2024/06/Enable-IIS-6-Scripting-Tools.jpg "Enable IIS 6 Scripting Tools in Windows 11 107") _Enable IIS 6 Scripting Tools_

- Click **Restart** if you are prompted to restart, or click **Close** accordingly. However, it is recommended to restart right away so that the changes take effect.

![Close](https://winsides.com/wp-content/uploads/2024/01/Close.jpg "Enable IIS 6 Scripting Tools in Windows 11 110") _Close_

- **IIS 6 Scripting Tools** is now enabled on your Windows 11 PC. Enabling IIS 6 Scripting Tools will automatically enable IIS 6 WMI Compatibility, IIS Metabase, and IIS 6 configuration compatibility.

## What the IIS 6 Scripting Tools include

- **Administrative Scripts:** These command-line scripts allow administrators to manage various aspects of IIS 6.0, such as creating, configuring, and **managing websites**, applications, and **virtual directories**.
- **WMI Compatibility:** The tools ensure compatibility with **Windows Management Instrumentation (WMI)** scripts that interact with IIS 6.0, enabling automated management and monitoring of IIS configurations.
- **Legacy Support:** They provide support for **legacy applications** and scripts that were created for **IIS 6.0**, ensuring they can still be used in a modern IIS environment on Windows 11.

**Tutorial Source**: [https://winsides.com/enable-iis-6-scripting-tools-in-windows-11/](https://winsides.com/enable-iis-6-scripting-tools-in-windows-11/)
vigneshwaran_vijayakumar
1,880,524
Local and offline AI code assistant for VS Code with Ollama and Sourcegraph
I recently learned that Sourcegraph's AI coding assistant Cody can be used offline by connecting it...
0
2024-06-07T14:51:33
https://thor.bio/blog/offline-ai-code-assistant
ollama, ai, cody, sourcegraph
I [recently learned](https://sourcegraph.com/blog/local-code-completion-with-ollama-and-cody) that [Sourcegraph's](https://sourcegraph.com/) AI coding assistant [Cody](https://sourcegraph.com/cody) can be used offline by connecting it to a locally running [Ollama](https://www.ollama.com/) server.

Now, unfortunately my little old MacBook Air doesn't have enough VRAM to run Mistral's 22B [Codestral](https://mistral.ai/news/codestral/) model, but fear not, I found that the Llama 3 8B model works quite well in powering both code completion and code chat workloads!

Let's have a look at how we can set this up with VS Code for absolute offline / in-flight coding bliss:

## Install Ollama and pull Llama 3 8B

1. Install [Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#ollama).
2. Run `ollama pull llama3:8b`.
3. Once the download has completed, run `ollama serve` to start the Ollama server.

## Configure Sourcegraph Cody in VS Code

1. Install the [Sourcegraph Cody](https://marketplace.visualstudio.com/items?itemName=sourcegraph.sourcegraph) VS Code extension.
2. Add the following to your VS Code settings:

```json
{
  //...
  // Cody autocomplete configuration:
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    "url": "http://127.0.0.1:11434",
    "model": "llama3:8b"
  },
  // Enable Ollama for Cody Chat:
  "cody.experimental.ollamaChat": true,
  // optional but useful to see detailed logs in the OUTPUT tab
  // (make sure to select "Cody by Sourcegraph" from the dropdown)
  "cody.debug.verbose": true
  //...
}
```

## Start Cody and enjoy your local offline AI code assistant

That's it. As long as Ollama is running in the background, you should now have a fully functional offline AI code assistant for VS Code with Cody. This setup allows you to use both code completion and code chat features without relying on any external services or an internet connection. In fact, most of this last paragraph was written by Llama 3 8B itself.
For Cody Chat, make sure to select the llama3:8b `Experimental` option from the dropdown and you're good to go! Happy Cod(y)ing \o/
thorwebdev
1,880,523
Unable to create Jira issue in Postman
I'm facing an issue. Can anyone help me solve it?
0
2024-06-07T14:51:07
https://dev.to/rana_durgi_9e46da09122304/unable-to-create-jira-issue-in-postman-4ked
I'm facing an issue. Can anyone help me solve it?

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/atijbhntg1h4iu7d41jg.jpg)
rana_durgi_9e46da09122304
1,880,522
zerofeesprocessingny
Zero Fees Processing is tailored for businesses aiming to optimize financial efficiency while meeting...
0
2024-06-07T14:50:52
https://dev.to/zerofeesprocessingny/zerofeesprocessingny-20n9
financial
Zero Fees Processing is tailored for businesses aiming to optimize financial efficiency while meeting customer expectations for straightforward and adaptable payment options. Our service spans various industries, eliminating processing fees and enhancing overall transaction clarity. For businesses looking to streamline their financial operations, [**Zero Fees Processing**](https://www.zerofeesprocessingny.com/ ) is an excellent choice. **Contact:** 02126181916 **Address:** 14 Wall Street Manhattan, New York 10005 New York City, NY 10005 United States
zerofeesprocessingny
1,880,521
Life
A post by Marc-André Méthot -Meloche
0
2024-06-07T14:50:47
https://dev.to/meloche92/life-2cih
meloche92
1,880,520
We just migrated from Google Analytics to PostHog
In our projects we were using Google Analytics to track user details What is Google Analytics...
0
2024-06-07T14:50:46
https://dev.to/shu12388y/we-just-migrate-from-google-analysis-to-posthog-a6d
webdev, javascript, google, posthog
In our projects we were using Google Analytics to track user details.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p6bdb8bj0fecwv92wtlz.png)

What is Google Analytics? Google Analytics is a powerful web analytics service offered by Google that tracks and reports website traffic. It was launched in November 2005 after Google acquired Urchin Software Corporation. Over the years, Google Analytics has become one of the most widely used web analytics services on the internet.

In our company, we used Google Analytics for every project to analyze user behavior and track the number of active users on our websites and applications. Everything was working fine, but we wanted to explore different tools to test their compatibility and see if they could offer additional features for our applications. That's when we discovered PostHog.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/llbzlpaqq15e5wzrdm11.png)

PostHog is an open-source product analytics platform designed to provide detailed insights into user behavior and product usage. Launched in early 2020, PostHog aims to offer an alternative to traditional analytics tools by focusing on privacy, flexibility, and the ability to self-host. This makes it particularly attractive to companies that prioritize data ownership and customization.

We have been using PostHog for a week, and here are some of its features:

1. Event Tracking: PostHog allows us to track custom events to understand user interactions with our product. This helps us gain deeper insights into user behavior.
2. Session Recording: With session recording, we can visualize user interactions and identify usability issues. This feature is invaluable for improving user experience.
3. Feature Flags: PostHog provides feature flags that enable us to test and gradually roll out new features. This helps us control which users see specific features and conduct A/B testing.
4. User Paths: The user paths feature visualizes how users navigate through our product. This helps us identify common drop-off points and optimize user flows.
5. Retention Analysis: PostHog's retention analysis allows us to understand how well our product retains users over time. We can identify patterns and factors that influence user retention.
6. Self-Hosting: PostHog offers the option to self-host, giving us complete control over our data and ensuring compliance with privacy regulations. This is a significant advantage for data security and privacy.

We are excited about the possibilities that PostHog offers and look forward to leveraging these features to enhance our product and user experience. We are now in the testing phase of PostHog, and so far, we are impressed with its capabilities. If our testing continues to show positive results, we plan to stick with PostHog for the long term.
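As an illustration of what event tracking (feature 1) boils down to, a custom analytics event is just a name plus a properties object. This is a hand-rolled sketch of that shape, not the actual posthog-js or Google Analytics API; the event and property names are made up:

```javascript
// Sketch: build the payload a product-analytics event typically carries.
function buildEvent(name, properties = {}) {
  if (typeof name !== "string" || name.length === 0) {
    throw new Error("event name must be a non-empty string");
  }
  return {
    event: name,
    properties,
    timestamp: new Date().toISOString(), // when the event happened
  };
}

const e = buildEvent("signup_completed", { plan: "free" });
console.log(e.event, e.properties.plan);
```

In a real integration, an analytics SDK constructs a payload like this for you and batches it to the collection endpoint.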
shu12388y
1,880,516
Elevate Your Web Projects: Mastering JavaScript Accessibility🚀🚀
Creating an accessible web experience ensures that your website is usable by as many people as...
0
2024-06-07T14:48:11
https://dev.to/dharamgfx/elevate-your-web-projects-mastering-javascript-accessibility-40oo
webdev, javascript, beginners, programming
Creating an accessible web experience ensures that your website is usable by as many people as possible, including those with disabilities. This guide dives deep into various aspects of accessibility, providing practical examples to enhance your understanding and implementation.

## What is Accessibility?

**Definition and Importance:**

- Accessibility refers to the design of products, devices, services, or environments for people with disabilities.
- Ensures inclusivity, allowing everyone, regardless of their abilities, to interact with and benefit from the web.

**Key Benefits:**

- Improves user experience for all users.
- Enhances SEO and broadens audience reach.
- May be legally required in some jurisdictions.

## HTML: A Good Basis for Accessibility

### HTML and Accessibility

**Proper HTML Structure:**

- Use semantic HTML to provide meaningful context to assistive technologies.
- Correct use of elements like `<header>`, `<nav>`, `<main>`, `<article>`, and `<footer>`.

**Example:**

```html
<header>
  <h1>Welcome to My Accessible Site</h1>
</header>
<nav>
  <ul>
    <li><a href="#home">Home</a></li>
    <li><a href="#about">About</a></li>
    <li><a href="#contact">Contact</a></li>
  </ul>
</nav>
```

### Good Semantics

**Descriptive Tags:**

- Use appropriate tags like `<button>`, `<input>`, `<label>`, `<form>`, and `<fieldset>` for form elements.
- Ensure elements are used according to their intended purpose.

**Example:**

```html
<form>
  <fieldset>
    <legend>Personal Information</legend>
    <label for="name">Name:</label>
    <input type="text" id="name" name="name">
  </fieldset>
</form>
```

### Accessible Data Tables

**Structure and Attributes:**

- Use `<table>`, `<th>`, `<tr>`, and `<td>` appropriately.
- Utilize `scope` and `headers` attributes for better navigation.
**Example:** ```html <table> <caption>Monthly Sales</caption> <thead> <tr> <th scope="col">Month</th> <th scope="col">Sales</th> </tr> </thead> <tbody> <tr> <td>January</td> <td>$1000</td> </tr> </tbody> </table> ``` ### Text Alternatives **Images and Media:** - Provide `alt` attributes for images. - Use `<figure>` and `<figcaption>` for complex images. **Example:** ```html <img src="logo.png" alt="Company Logo"> <figure> <img src="chart.png" alt="Sales Chart"> <figcaption>Sales chart for the year 2023.</figcaption> </figure> ``` ### More on Links **Clear and Descriptive Links:** - Use descriptive anchor text. - Avoid vague terms like "click here." **Example:** ```html <a href="report.pdf">Download the annual report</a> ``` ### Test Your Skills! **Practical Exercise:** - Create a small webpage with a form, a data table, and several links. - Ensure all elements are accessible and validate using tools like WAVE or Axe. ### Summary **Recap of HTML Basics:** - Emphasize the importance of semantic HTML, text alternatives, and descriptive links for accessibility. - Highlight how proper HTML forms the foundation of an accessible website. ## CSS and JavaScript Accessibility Best Practices ### CSS Best Practices **Visual Focus Indicators:** - Ensure focus is visible using CSS. - Avoid using `outline: none;` unless replacing with a custom style. **Example:** ```css button:focus { outline: 2px solid blue; } ``` ### JavaScript Best Practices **Interactive Elements:** - Ensure all interactive elements are keyboard accessible. - Use ARIA roles and properties appropriately. **Example:** ```javascript document.querySelector('button').addEventListener('click', function() { alert('Button clicked!'); }); ``` ## WAI-ARIA Basics **Introduction to ARIA:** - WAI-ARIA (Web Accessibility Initiative - Accessible Rich Internet Applications) defines ways to make web content more accessible. - Use ARIA roles, states, and properties to enhance accessibility. 
**Example:** ```html <div role="alert" aria-live="assertive"> This is an important alert message. </div> ``` ## Accessible Multimedia **Captions and Transcripts:** - Provide captions for videos and transcripts for audio content. - Use `<track>` for captions in video elements. **Example:** ```html <video controls> <source src="video.mp4" type="video/mp4"> <track src="captions_en.vtt" kind="captions" srclang="en" label="English"> </video> ``` ## Mobile Accessibility **Responsive Design:** - Ensure the website is responsive and usable on various devices. - Test for touch screen interactions and voice commands. **Example:** ```html <meta name="viewport" content="width=device-width, initial-scale=1.0"> ``` ## Assessment: Accessibility Troubleshooting **Evaluate and Improve:** - Use tools like Lighthouse, WAVE, or Axe to identify accessibility issues. - Regularly test and update your website to maintain accessibility standards. **Example Tools:** - [WAVE](https://wave.webaim.org/) - [Axe](https://www.deque.com/axe/) By integrating these practices, you'll create a more inclusive, user-friendly web experience. Happy coding!
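The tools listed above (WAVE, Axe) automate checks like flagging images that lack text alternatives. Here's a toy sketch of that single check in plain JavaScript — a regex-based illustration only, not a real accessibility checker (real tools parse the DOM rather than regexing HTML strings):

```javascript
// Toy audit: find <img> tags in an HTML string that lack an alt attribute.
// Purely illustrative — tools like Axe inspect the live DOM instead.
function findImagesMissingAlt(html) {
  const imgs = html.match(/<img\b[^>]*>/gi) || [];
  return imgs.filter((tag) => !/\balt\s*=/i.test(tag));
}

const sample = '<img src="logo.png" alt="Company Logo"><img src="chart.png">';
console.log(findImagesMissingAlt(sample)); // → [ '<img src="chart.png">' ]
```

Running automated checks like this is a complement to, not a substitute for, manual testing with a keyboard and a screen reader.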
dharamgfx
1,880,515
G
A post by EDILSON DOS SANTOS
0
2024-06-07T14:46:48
https://dev.to/edilson_rastafare_3b757b/g-2i5e
edilson_rastafare_3b757b
1,880,514
Unveiling the Future: A Look at Top Tech Trends on the Rise
The pace of technological innovation is relentless, constantly reshaping our world. From the way we...
0
2024-06-07T14:45:40
https://dev.to/dishan_maduranga_0adcb598/unveiling-the-future-a-look-at-top-tech-trends-on-the-rise-1k3o
blogging, futuretech, technology, ai
The pace of technological innovation is relentless, constantly reshaping our world. From the way we work to how we interact and entertain ourselves, new advancements are emerging that hold the potential to be truly transformative. Today, we'll delve into some of the most exciting upcoming technologies that are poised to make a significant impact in the years to come. **1. Artificial Intelligence: Beyond Automation** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8mi3onfaz9kd3ssbjgm.png) Artificial intelligence (AI) is no longer science fiction. It's a reality that's seeped into our everyday lives, quietly powering features like facial recognition on smartphones and recommendation algorithms on streaming services. But AI's potential goes far beyond automation. We can expect to see AI playing a much more prominent role in various sectors: Self-driving cars will fundamentally change transportation, making roads safer and commutes more efficient. Personalized medicine will leverage AI to analyze vast amounts of medical data, leading to more accurate diagnoses and targeted treatments. Generative AI, a type of AI that can create realistic and creative content, will transform industries like design, marketing, and entertainment. **2. Quantum Computing: Unlocking Untapped Potential** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ukswd76fsgkwx0208v8t.png) While traditional computers rely on bits (0s and 1s), quantum computers utilize qubits, which can exist in multiple states simultaneously. This unlocks immense computational power, allowing us to tackle problems that are currently impossible for classical computers. The applications of quantum computing are vast, with potential breakthroughs in: Drug discovery: Simulating complex molecular interactions to design life-saving medications. Materials science: Engineering new materials with unprecedented properties for various applications. 
Financial modeling: Making complex financial predictions with unmatched accuracy. **3. Merging Realities: VR, AR, and the Metaverse** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2a8z98zb10g4pdohm6s3.png) Virtual Reality (VR) and Augmented Reality (AR) are dissolving the boundaries between the physical and digital realms. VR creates immersive experiences that can be used for training, entertainment, and even therapy. AR overlays digital information onto the real world, providing real-time data and enhancing our surroundings. These technologies are converging to form the concept of the metaverse: A persistent virtual world accessed through VR headsets and other devices, where users can work, socialize, and shop in entirely new ways. The metaverse has the potential to revolutionize how we interact with technology and each other. **4. Biotechnology: Redefining Life Sciences** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eyvezewf30q6d2ba61v9.png) Biotechnology is rapidly evolving, leading to breakthroughs that are transforming medicine, agriculture, and even our understanding of life itself. Here are a few exciting advancements to watch: Lab-grown meat offers a sustainable alternative to traditional meat production, with the potential to reduce environmental impact. Gene editing techniques like CRISPR are opening doors for precise manipulation of genes, paving the way for potential cures for genetic diseases. **5. Sustainable Solutions: Powering a Greener Future** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/377mfkaafw3c1o7i66zs.png) As we grapple with climate change, the need for sustainable technologies is more critical than ever. 
These innovations are crucial for reducing our dependence on fossil fuels and mitigating our environmental impact: Renewable energy sources like solar and wind power are becoming increasingly affordable and efficient, paving the way for a cleaner energy future. Energy-efficient technologies are constantly being developed, helping us to reduce our overall energy consumption and lessen our carbon footprint.
dishan_maduranga_0adcb598
1,870,050
Navigating the gRPC Galaxy: A Different view into Efficient 'api to api' Communication
Overview: RPC means Remote Procedure Call. It's a client-server communication mechanism,...
0
2024-06-07T14:44:33
https://dev.to/nirmalkumar/navigating-the-grpc-galaxy-a-different-view-into-efficient-api-to-api-communication-3nb1
microservices, grpc, domaindrivendesign, webdev
###### Overview: * RPC means Remote Procedure Call. It's a client-server communication mechanism: the client locally calls a procedure that actually runs on a remote server. * gRPC is Google's open-source version of RPC, a simple service definition framework. * When we work with a microservices architecture, some scenarios require multiple API calls, which introduces high latency. A gRPC service, or in other words a contract-first API, can help with this problem by leveraging the HTTP/2.0 protocol and a binary serialization technique called protocol buffers. * Protocol buffers (protobuf) are a language- and platform-agnostic mechanism for serializing data. Protobuf serializes data into a compact format optimized for network transmission over HTTP/2.0. ###### gRPC vs REST: * REST is text based and uses HTTP/1.1, whereas gRPC is protobuf based and uses HTTP/2.0. * REST uses JSON/XML, whereas gRPC uses protocol buffers for data serialization. * REST follows a request/response pattern, whereas gRPC follows an encode/decode and streaming pattern. * REST is resource-oriented design, since we communicate using HTTP verbs; gRPC is service-oriented design, where the client knows all types and procedures locally before making a call. ###### When to choose gRPC: * If we have multiple teams specialized in different programming languages who aim to build a large number of interconnected services based on domain-driven design, we can design a microservice architecture that leverages the gRPC framework **where needed**. NOTE: REST is more established, popular, and simple than gRPC, and it is still the predominant choice almost everywhere. * If we need to stream or transfer huge amounts of data between client and server. * As gRPC is strongly typed, choose it for use-cases where client and server are tightly coupled and are maintained or enhanced together by the same team. * Choose gRPC where performance is critical and non-negotiable. ###### Templates to build gRPC applications: * gRPC supports a wide range of languages:
C++, C#, Dart, Go, Java, Kotlin, Node.js, Objective-C, PHP, Python, Ruby, and Swift. [Sample](https://learn.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-8.0&tabs=visual-studio) ###### PROS: * Better performance, low latency, high throughput. * Ideal for low-bandwidth networks. * Efficient for internal service-to-service communication. * Strongly typed, with built-in support to validate every request. * Native protobuf compiler to generate code for any supported language based on the '.proto' file. * Built-in load balancing and selective message compression capabilities. Note: I am yet to explore these advanced options. ###### CONS: * Browser support has not fully evolved yet, so we can't fully leverage browser-to-server communication. * Not widely adopted yet. * Binary serialization makes it tedious to debug during transport. ###### How gRPC works: gRPC Client: * The client calls a local procedure, which is a gRPC client stub generated from the '.proto' file. * The gRPC runtime binary-encodes the request and transports it to the server over HTTP/2.0. gRPC Server: * Once the gRPC runtime on the server receives the request, it binary-decodes it and extracts the request via the server stub (also generated from the '.proto' file). * The gRPC service processes the request and returns the response. Note: When the response goes back to the client, the encode/decode steps happen in reverse between server and client. ###### Reference [official gRPC link](https://grpc.io/)
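The compactness of binary serialization mentioned above can be illustrated with a toy comparison. The sketch below is *not* real protobuf — it hand-encodes one fixed message shape to show how a binary layout avoids the overhead of field names and punctuation that JSON carries:

```javascript
// Toy binary encoding of { id, name } — NOT protobuf, purely illustrative.
// Layout: 4 bytes for the id, 1 length byte, then the UTF-8 bytes of the name.
function encodeBinary(msg) {
  const nameBytes = new TextEncoder().encode(msg.name);
  const buf = new Uint8Array(4 + 1 + nameBytes.length);
  new DataView(buf.buffer).setUint32(0, msg.id); // fixed-width id
  buf[4] = nameBytes.length;                     // length-prefixed string
  buf.set(nameBytes, 5);
  return buf;
}

const msg = { id: 42, name: 'Ada' };
const binarySize = encodeBinary(msg).length;                           // 8 bytes
const jsonSize = new TextEncoder().encode(JSON.stringify(msg)).length; // 22 bytes
console.log(binarySize, jsonSize);
```

Real protobuf uses varint field tags and wire types rather than this fixed layout, but the size difference it achieves comes from the same idea: the schema lives in the '.proto' file, so field names never travel on the wire.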
nirmalkumar
1,880,513
Save Time Building Headless / Jamstack Sites
Ever get tired of rebuilding your content models over and over again? I know I do. Every time I built...
0
2024-06-07T14:41:17
https://dev.to/alangleeson/save-time-building-headless-jamstack-sites-1f5a
webdev, productivity, development, developer
Ever get tired of rebuilding your content models over and over again? I know I do. Every time I built a new website I had to rebuild a button, a hero, a navigation link, etc., because they're pretty much all the same, and it made setting up a new headless Jamstack website a bit tedious! But not anymore 🎉 Introducing the My Library feature in Contento! We built this to solve this very problem, and now you can save your content models to your library to use in other sites! You can also look at previous sites you've built and import content types from them into your new sites. It's saved me so much time... and monotony 😁. Check out our My Library feature to see if it could help you speed up your headless Jamstack builds. {% embed https://www.contento.io/features/my-library %} #headless #jamstack #contentmodelling #content #marketingwebsite
alangleeson
1,880,512
Elevate Your Web Forms: Mastering JavaScript Form Submission🚀
Why Use JavaScript to Submit Form Data? Advantages of JavaScript Form...
0
2024-06-07T14:41:03
https://dev.to/dharamgfx/elevate-your-web-forms-mastering-javascript-form-submission-5d0c
webdev, javascript, beginners, api
## Why Use JavaScript to Submit Form Data? ### Advantages of JavaScript Form Submission 1. **Real-Time Validation**: JavaScript allows you to validate form data on the client-side before sending it to the server. This enhances user experience by providing immediate feedback. 2. **Asynchronous Requests**: Using JavaScript, particularly with AJAX, enables asynchronous form submissions. This means the web page doesn't need to reload, providing a smoother user experience. 3. **Dynamic User Interfaces**: JavaScript can dynamically adjust the form's content based on user interactions, such as displaying additional fields or changing input types based on previous selections. 4. **Enhanced Control**: With JavaScript, you have more control over the form submission process, allowing for complex behaviors such as conditional submissions or data aggregation from multiple sources. #### Example ```html <form id="exampleForm"> <input type="text" name="username" required> <input type="password" name="password" required> <button type="submit">Submit</button> </form> <script> document.getElementById('exampleForm').addEventListener('submit', function(event) { event.preventDefault(); // Custom validation and submission logic here }); </script> ``` ## The Problem with JavaScript Form Submission ### Common Issues and Pitfalls 1. **Browser Compatibility**: Older browsers may not fully support modern JavaScript features, leading to inconsistent behavior. 2. **Security Risks**: Client-side validation can be bypassed by malicious users. Always validate data on the server side as well. 3. **Complexity**: Managing state and handling errors in JavaScript can become complex, especially as the form logic grows. 
#### Example ```html <form id="problematicForm"> <input type="text" name="email" required> <button type="submit">Submit</button> </form> <script> document.getElementById('problematicForm').addEventListener('submit', function(event) { event.preventDefault(); // Assuming validation is done here, but server-side validation is also needed }); </script> ``` ## Building a FormData Object Manually ### Step-by-Step Guide 1. **Create a FormData Instance**: This object allows you to construct key/value pairs representing form fields and their values. 2. **Append Data**: You can manually append data to the FormData object using the `.append()` method. 3. **Send Data via Fetch or XMLHttpRequest**: Use modern JavaScript methods to send the FormData object asynchronously. #### Example ```html <form id="manualForm"> <input type="text" name="firstName" required> <input type="text" name="lastName" required> <button type="submit">Submit</button> </form> <script> document.getElementById('manualForm').addEventListener('submit', function(event) { event.preventDefault(); let formData = new FormData(); formData.append('firstName', document.querySelector('[name="firstName"]').value); formData.append('lastName', document.querySelector('[name="lastName"]').value); fetch('/submit', { method: 'POST', body: formData }).then(response => response.json()) .then(data => console.log(data)) .catch(error => console.error('Error:', error)); }); </script> ``` ## Associating a FormData Object and a `<form>` ### Simplifying FormData Usage 1. **FormData Constructor**: Pass the form element directly to the FormData constructor to automatically populate the object with form values. 2. **Dynamic Data Handling**: Easily modify or add additional data to the FormData object before submission. 
#### Example ```html <form id="associatedForm"> <input type="text" name="email" required> <button type="submit">Submit</button> </form> <script> document.getElementById('associatedForm').addEventListener('submit', function(event) { event.preventDefault(); let formData = new FormData(this); // 'this' refers to the form element // You can append additional data if needed formData.append('extraData', 'extraValue'); fetch('/submit', { method: 'POST', body: formData }).then(response => response.json()) .then(data => console.log(data)) .catch(error => console.error('Error:', error)); }); </script> ``` ## Handling File Uploads with FormData ### Uploading Files Seamlessly 1. **File Input**: Include file input fields in your form. 2. **FormData Handling**: The FormData object automatically handles file data. 3. **Server-Side Processing**: Ensure your server can handle multipart form data. #### Example ```html <form id="fileUploadForm"> <input type="file" name="file" required> <button type="submit">Upload</button> </form> <script> document.getElementById('fileUploadForm').addEventListener('submit', function(event) { event.preventDefault(); let formData = new FormData(this); fetch('/upload', { method: 'POST', body: formData }).then(response => response.json()) .then(data => console.log(data)) .catch(error => console.error('Error:', error)); }); </script> ``` ## Handling Form Submission Response ### Managing Server Responses 1. **Success Handling**: Process successful responses, such as displaying a message or redirecting the user. 2. **Error Handling**: Handle errors gracefully, providing feedback to the user. 
#### Example ```html <form id="responseHandlingForm"> <input type="text" name="username" required> <button type="submit">Submit</button> </form> <script> document.getElementById('responseHandlingForm').addEventListener('submit', function(event) { event.preventDefault(); let formData = new FormData(this); fetch('/submit', { method: 'POST', body: formData }).then(response => { if (!response.ok) { throw new Error('Network response was not ok'); } return response.json(); }).then(data => { console.log('Success:', data); alert('Form submitted successfully!'); }).catch(error => { console.error('Error:', error); alert('There was a problem with your submission.'); }); }); </script> ``` ## Enhancing Security in JavaScript Form Submission ### Protecting User Data 1. **HTTPS**: Always use HTTPS to encrypt data during transmission. 2. **Server-Side Validation**: Never rely solely on client-side validation; validate data on the server. 3. **Token-Based Authentication**: Use CSRF tokens to protect against cross-site request forgery attacks. #### Example ```html <form id="secureForm"> <input type="text" name="username" required> <input type="hidden" name="csrf_token" value="your_csrf_token"> <button type="submit">Submit</button> </form> <script> document.getElementById('secureForm').addEventListener('submit', function(event) { event.preventDefault(); let formData = new FormData(this); fetch('/secure-submit', { method: 'POST', headers: { 'X-CSRF-Token': formData.get('csrf_token') }, body: formData }).then(response => response.json()) .then(data => console.log(data)) .catch(error => console.error('Error:', error)); }); </script> ``` By mastering these techniques, you can enhance the functionality, usability, and security of your web forms using JavaScript. Happy coding!
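One related option worth knowing alongside the `FormData` examples above: for forms with no file inputs, you can also submit `application/x-www-form-urlencoded` data built with `URLSearchParams`, which `fetch` accepts directly as a request body (the `/submit` endpoint below is a placeholder):

```javascript
// Build a urlencoded body — an alternative to multipart FormData for
// simple key/value forms. When given a URLSearchParams body, fetch()
// sets the Content-Type header to application/x-www-form-urlencoded.
const params = new URLSearchParams();
params.append('firstName', 'Ada');
params.append('lastName', 'Lovelace');

console.log(params.toString()); // firstName=Ada&lastName=Lovelace

// Usage (assumes a '/submit' endpoint exists):
// fetch('/submit', { method: 'POST', body: params });
```

Urlencoded bodies are smaller and simpler to parse server-side than multipart, so they're a reasonable default when no files are involved.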
dharamgfx
1,880,511
Can you explain the caching mechanism available in Spring Boot?
Sure thing! In Spring Boot, we have a powerful caching mechanism that helps optimize application...
0
2024-06-07T14:39:42
https://dev.to/codegreen/can-you-explain-the-caching-mechanism-available-in-spring-boot-4ea9
java, backend, microservices, springboot
Sure thing! In Spring Boot, we have a powerful caching mechanism that helps optimize application performance by storing frequently accessed data in memory. Let me break it down for you. Firstly, to enable caching in our Spring Boot application, we need to include the necessary dependency in our `pom.xml` or `build.gradle` file: ```xml <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-cache</artifactId> </dependency> ``` Next, we need to enable caching in our application. We do this by using the `@EnableCaching` annotation. This annotation is typically added to the main application class: ```java @SpringBootApplication @EnableCaching public class MyApplication { public static void main(String[] args) { SpringApplication.run(MyApplication.class, args); } } ``` Once caching is enabled, we can start using caching annotations such as `@Cacheable` to mark methods that should be cached. Let's consider an example: ```java @Service public class UserService { @Autowired private UserRepository userRepository; @Cacheable(value = "users", key = "#userId") public User getUserById(Long userId) { // This method will only be executed if the data is not cached return userRepository.findById(userId).orElse(null); } } ``` In this example, the `@Cacheable` annotation indicates that the result of the `getUserById` method should be cached. The `value` attribute specifies the name of the cache, and the `key` attribute specifies the unique key under which the cached data is stored. Here, we're caching user data under the `"users"` cache, with the user ID as the key. With these annotations in place, Spring Boot will automatically handle caching for us. Subsequent calls to `getUserById` with the same user ID will be served from the cache, improving the performance of our application.
manishthakurani
1,880,387
Create your own content management system with Remix and Xata
In this post, you'll create a content CMS using Xata, Remix, Novel, LiteLLM, and Vercel. You'll learn...
0
2024-06-07T14:37:44
https://xata.io/blog/content-management-system-remix-xata
ai, database, tutorial
In this post, you'll create a content CMS using Xata, Remix, Novel, LiteLLM, and Vercel. You'll learn how to: - Set up Xata - Create a schema with different column types - Handle forms in Remix using Form Actions - Implement Client-Side Image Uploads - Use an AI-powered WYSIWYG Editor - Implement content-wide search - Create dynamic content routes with Remix ## Before you begin ### Prerequisites You'll need the following: - A [Xata](https://xata.io/) account - [Node.js 18](https://nodejs.org/en/blog/announcements/v18-release-announce) or later - An [OpenAI](https://platform.openai.com) account - A [Vercel](https://vercel.com) account ### Tech Stack | Technology | Description | | --------------------------------------------- | -------------------------------------------------------------------------------- | | [Xata](https://xata.io) | Serverless database platform for scalable, real-time applications. | | [Remix](https://remix.run) | Framework for building full-stack web applications with a focus on Web Standards. | | [litellm](https://github.com/BerriAI/litellm) | Call all LLM APIs using the OpenAI format. | | [Novel](https://novel.sh) | A Notion-style WYSIWYG editor with AI-powered autocompletion. | | [TailwindCSS](https://tailwindcss.com/) | CSS framework for building custom designs. | | [Vercel](https://vercel.com) | A cloud platform for deploying and scaling web applications. | ## Setting up a Xata Database After you've created a Xata account and are logged in, create a database. ![Create a database](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/esaibyr2zuh9zn2hrnmd.png) The next step is to create a table, in this instance `uploads`, that contains all the uploaded images. ![Create uploads table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9o7t9n9cuwwit9t0x8h.png) Great, now click on **Schema** in the left sidebar and create one more table, `content`. You can do this by clicking **Add a table**.
The tables will contain user content and user-uploaded photographs. With that completed, you will see the schema as below. ![View uploads and content schema](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/69a7erzy6k9d6tagnm0f.png) Let’s move on to adding relevant columns in the tables you've just created. ## Creating the Schema In the `uploads` table, you want to store all the images only (and no other attributes) so that you can create references to the same image object again, if needed. Proceed with adding the column named `photo`. This column is responsible for storing the `file` type objects. In our case, the `file` type object is for images, but you can use this for storing any kind of blob (e.g. PDF, fonts, etc.) that’s sized up to 1 GB. First, click **+ Add column** and select **File**. ![Add a column](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k7erl1ids1eyhz2kw20o.png) Set the column name to `photo` and, to make files public (so that they can be shown to users when they visit the image gallery), check the **Make files public by default** option. ![Make files public by default](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/816p851prujdnvkiyzcq.png) In the `content` table, we want to store attributes such as the content’s unique slug (the path of the URL where the content will be displayed), its title, the author name, the author’s image with its dimensions, and the content’s OG image with its dimensions. Proceed with adding the column named `slug`. It is responsible for maintaining the uniqueness of each piece of content that gets created. Click **+ Add a column**, select the `String` type, and enter the column name as `slug`. To associate a slug with only a single piece of content, check the `Unique` attribute to make sure that duplicate entries do not get inserted.
![Add slug column](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sxmwf3skyyn5y366mjcy.png) In similar fashion, create `title`, `author_name`, `author_image_url`, `og_image_url`, `author_image_w`, `author_image_h`, `og_image_w`, `og_image_h` as `String` type (but not `Unique`). Great, you can also store the user `content` as the `Text` type. While `String` is a great default type, storing more than 2048 characters requires you to switch to the `Text` type. Read more about the limits in [Xata Column limits](https://xata.io/docs/rest-api/limits#column-limits). Lovely! With all that done, the final schema will look like this 👇🏻 ![Add more CMS columns](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t1a8uyqla3sb07w9p7j2.png) ## Setting up the project Clone the app repository to follow along with this tutorial by running the following command: ```bash git clone https://github.com/rishi-raj-jain/remix-wysiwyg-litellm-xata-vercel cd remix-wysiwyg-litellm-xata-vercel npm install ``` ## Configure Xata with Remix To seamlessly use Xata with Remix, install the Xata CLI globally: ```bash npm install @xata.io/cli -g ``` Then, authorize the Xata CLI so it is associated with the logged-in account: ```bash xata auth login ``` ![Create new API key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3py0duhg6qb4gvtme4yv.png) Great! Now, initialize your project locally with the Xata CLI command. In this command, you will need to use the database URL for the database that you just created. You can copy the URL from the Settings page of the database. ```bash xata init --db https://Rishi-Raj-Jain-s-workspace-80514q.us-east-1.xata.sh/db/remix-wysiwyg-litellm-xata-vercel ``` Answer some quick one-time questions from the CLI to integrate with Remix.
![Xata CLI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jv1mgyhybu1gpankmrap.png) ## Implementing form actions in Remix With Remix, [Route Actions](https://reactrouter.com/en/main/route/action) are the way to process form `POST` request(s). Here’s how we’ve enabled form actions to process the form submissions and insert records into the Xata database. ```tsx import { Form } from '@remix-run/react' import { ActionFunctionArgs, json, redirect } from '@remix-run/node' export async function action({ request }: ActionFunctionArgs) { // Get the form data const body = await request.formData() } export default function Index() { return ( <Form navigate={false} method="post" className="mt-8 flex flex-col"> {% my form elements %} </Form> ) } ``` This allows you to colocate the serverless backend and frontend flow for a given page in Remix. Say, you accept a form submission containing the title, slug, and the content’s HTML, process it on the server, and sync it with your Xata serverless database. Here’s how you’d do all of that in a single Remix route (`app/routes/_index.tsx`). 
```tsx // app/routes/_index.tsx import { Editor } from 'novel'; import { Form } from '@remix-run/react'; import { getXataClient } from '@/xata.server'; import Upload from '@/components/Utility/Upload'; import { ActionFunctionArgs, json, redirect } from '@remix-run/node'; export async function action({ request }: ActionFunctionArgs) { // Import the Xata Client created by the Xata CLI in app/xata.server.ts const xata = getXataClient(); // Get the form data const body = await request.formData(); const slug = body.get('slug') as string; const title = body.get('title') as string; const content = body.get('content-html') as string; // Sync the attributes to the content table in Xata await xata.db.content.create({ slug, title, content }); } export default function Index() { return ( <Form navigate={false} method="post"> <span>New Article</span> <span>Title</span> <input required autoComplete="off" id="title" name="title" placeholder="Title" /> <span>Content</span> <input required id="content-html" name="content-html" /> <span>Slug</span> <input required autoComplete="off" id="slug" name="slug" placeholder="Slug" /> <button type="submit">Publish &rarr;</button> </Form> ); } ``` ## Handling Client-Side Image Uploads with Xata To let users add their own custom OG image with the content, we use Xata [Upload URLs](https://xata.io/docs/sdk/file-attachments#upload-urls) to handle image uploads on the client side. There are 2 steps to make a successful client-side image upload with Xata and Remix: 1. Create a record with empty photo `base64Content` and obtain the photo’s **uploadUrl**.
```tsx // app/routes/api_.image.upload.tsx import { json } from '@remix-run/node'; import { getXataClient } from '@/xata.server'; export async function loader() { const xata = getXataClient(); // Use the Xata client to create a new 'photo' record with an empty base64 content const result = await xata.db.uploads.create({ photo: { base64Content: '' } }, ['photo.uploadUrl']); return json({ uploadUrl: result?.photo?.uploadUrl }); } ``` 2. Make a client-side `PUT` request to the **uploadUrl** with the image’s buffer as the body. ```tsx // app/components/Utility/Upload.tsx const uploadFile = (e: ChangeEvent<HTMLInputElement>) => { // Get the reference to the file uploaded const file = e.target.files?.[0]; if (!file) return; const reader = new FileReader(); reader.onload = async (event) => { // Load the file buffer const fileData = event.target?.result; if (fileData) { // Create blob from the file data with the relevant file's type const body = new Blob([fileData], { type: file.type }); // Make a fetch call to get the uploadUrl fetch('/api/image/upload') .then((res) => res.json()) .then((res) => { // Use the uploadUrl to upload the buffer fetch(res.uploadUrl, { body, method: 'PUT' }); }); } }; // Read the user uploaded file as buffer reader.readAsArrayBuffer(file); }; ``` ## Using an AI-powered WYSIWYG Editor To make writing content easier, users need a reliable and user-friendly AI-powered WYSIWYG editor. We’re using Novel, a Notion-style WYSIWYG editor providing a seamless experience with intuitive features and a real-time preview of the content being written. To get the content being written as HTML, we use Novel’s `onUpdate` callback and set the HTML string to an input inside the form element.
```tsx
// app/routes/_index.tsx
import { Editor } from 'novel';
import { Form } from '@remix-run/react';

export default function Index() {
  return (
    <Form navigate={false} method="post">
      <span>Content</span>
      <input required id="content-html" name="content-html" />
      <Editor
        defaultValue={{}}
        storageKey="novel__editor"
        onUpdate={(e) => {
          if (!e) return;
          const tmp = e.getHTML();
          const htmlSelector = document.getElementById('content-html');
          if (tmp && htmlSelector) htmlSelector.setAttribute('value', tmp);
        }}
      />
      <button type="submit">Publish &rarr;</button>
    </Form>
  );
}
```

## Implementing autocompletion using LiteLLM

Under the hood, Novel makes a POST request to `/api/generate` expecting a stream of tokens from the OpenAI API. Let’s see how we’ve customised the endpoint to get the flexibility of using any AI API provider with LiteLLM. With LiteLLM, you can call 100+ LLMs with the same OpenAI-like input and output. To implement autocompletion with streaming, we use the `completion` method with the `stream` flag set to `true` and then return the response obtained as a [ReadableStream](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream).

```tsx
// app/routes/api_.generate.tsx
import { completion } from 'litellm';
import { ActionFunctionArgs } from '@remix-run/node';

export async function action({ request }: ActionFunctionArgs) {
  const encoder = new TextEncoder();
  const { prompt } = await request.json();
  const response = await completion({
    n: 1,
    top_p: 1,
    stream: true,
    temperature: 0.7,
    presence_penalty: 0,
    model: 'gpt-3.5-turbo',
    messages: [
      {
        role: 'system',
        content:
          'You are an AI writing assistant that continues existing text based on context from prior text. ' +
          'Give more weight/priority to the later characters than the beginning ones. ' +
          'Limit your response to no more than 200 characters, but make sure to construct complete sentences.'
        // we're disabling markdown for now until we can figure out a way to stream markdown text with proper formatting: https://github.com/steven-tey/novel/discussions/7
        // "Use Markdown formatting when appropriate.",
      },
      { role: 'user', content: prompt }
    ]
  });
  // Create a streaming response
  const customReadable = new ReadableStream({
    async start(controller) {
      for await (const part of response) {
        try {
          const tmp = part.choices[0]?.delta?.content;
          if (tmp) controller.enqueue(encoder.encode(tmp));
        } catch (e) {
          console.log(e);
        }
      }
      controller.close();
    }
  });
  // Return the stream response and keep the connection alive
  return new Response(customReadable, {
    // Set the headers for Server-Sent Events (SSE)
    headers: {
      Connection: 'keep-alive',
      'Content-Encoding': 'none',
      'Cache-Control': 'no-cache, no-transform',
      'Content-Type': 'text/event-stream; charset=utf-8'
    }
  });
}
```

## **Implementing Content Wide Search with Xata Search**

To let users search through the entire collection of the content, we use Remix Route Actions with Xata Search to retrieve relevant records from the database. With Xata Search, you can choose the tables to **search through**, in this instance, `content`, and set the targets to **search on**, in this instance, `title`, `slug`, `content` and `author_name`.
```tsx
// app/routes/_index.tsx
export async function action({ request }: ActionFunctionArgs) {
  const body = await request.formData();
  const search = body.get('search') as string;
  // If the 'search' parameter is missing, redirect to '/content'
  if (!search) return redirect('/content');
  const xata = getXataClient();
  // Use the Xata client to perform a search across specified tables with fuzziness
  const { records } = await xata.search.all(search, {
    tables: [
      {
        table: 'content',
        target: ['content', 'title', 'slug', 'author_name']
      }
    ],
    fuzziness: 2
  });
  // Extract the 'record' property from each search result containing the content
  const result = records.map((i) => i.record);
  return json({ search, result });
}
```

## Creating Dynamic Routes in Remix

To create a page dynamically for each piece of content, we're going to use Remix Dynamic Routes and Route Loaders. Creating a page with **$** in it, in this instance, content\_.**$id**.tsx, specifies a dynamic route where, e.g. for `/content/a`, `/content/b` or `/content/anything`, the last segment of the URL is captured into the **id** param. With a Remix Loader and Xata Records, we dynamically query the database to give us the content pertaining to a particular id. Once obtained, we process and return the content as an HTML string. Finally, we use the loader data to prototype the UI with best practices such as lazy loading non-critical images.
```tsx
// app/routes/content_.$id.tsx
import { getXataClient } from '@/xata.server';
import Image from '@/components/Utility/Image';
import { useLoaderData } from '@remix-run/react';
import { unescapeHTML } from '@/lib/util.server';
import { getTransformedImage } from '@/lib/ast.server';
import { LoaderFunctionArgs, redirect } from '@remix-run/node';

export async function loader({ params }: LoaderFunctionArgs) {
  if (!params.id) return redirect('/404');
  const xata = getXataClient();
  // Use the Xata client to fetch content from the 'content' table based on the 'slug'
  const content = await xata.db.content.filter({ slug: params.id }).getFirst();
  if (content) {
    const output = await getTransformedImage(content);
    return { ...content, content: unescapeHTML(output) };
  }
  // If content is not found, redirect to '/404'
  return redirect('/404');
}

export default function Pic() {
  const content = useLoaderData<typeof loader>();
  return (
    <div>
      <span>{content.title}</span>
      <div>
        <Image
          alt={content.author_name}
          url={content.author_image_url}
          width={content.author_image_w}
          height={content.author_image_h}
        />
        <div>
          <span>{content.author_name}</span>
        </div>
      </div>
      <Image
        loading="eager"
        alt={content.title}
        url={content.og_image_url}
        width={content.og_image_w}
        height={content.og_image_h}
      />
      {content.content && <div dangerouslySetInnerHTML={{ __html: content.content }} />}
    </div>
  );
}
```

## Deploy to Vercel

The repository is now ready to deploy to Vercel. Use the following steps to deploy: 👇🏻

- Start by creating a GitHub repository containing your app's code.
- Then, navigate to the Vercel Dashboard and create a **New Project**.
- Link the new project to the GitHub repository you just created.
- In **Settings**, update the _Environment Variables_ to match those in your local `.env` file.
- Deploy! 🚀

## More Information

For more detailed insights, explore the references cited in this post.
| Resource | Link |
| --------------------- | ----------------------------------------------------------------------- |
| GitHub Repo | https://github.com/rishi-raj-jain/remix-wysiwyg-litellm-xata-vercel |
| Remix with Xata | https://xata.io/docs/getting-started/remix |
| Xata File Attachments | https://xata.io/docs/sdk/file-attachments#upload-a-file-using-file-apis |
| Remix Route Actions | https://remix.run/docs/en/main/discussion/data-flow#route-action |
| Remix Route Loaders | https://remix.run/docs/en/main/discussion/data-flow#route-loader |

## What’s next?

We'd love to hear from you if you have any feedback on this tutorial, would like to know more about Xata, or if you'd like to contribute a community blog or tutorial. Reach out to us on [Discord](https://discord.com/invite/kvAcQKh7vm) or join us on [X | Twitter](https://twitter.com/xata).

Happy building 🦋
cezz
1,880,510
What is TypeScript and what are its advantages and disadvantages?
TypeScript is an extension of JavaScript, offering static typing to write straightforward JavaScript...
0
2024-06-07T14:33:11
https://dev.to/afnan_ahmed/what-is-typescript-and-what-are-its-advantages-and-disadvantages-3m7h
webdev, javascript, react, typescript
TypeScript is an extension of JavaScript, offering static typing to write straightforward JavaScript code. It provides optional static typing, classes, and interfaces, along with improved code organization and object-oriented programming techniques. TypeScript also offers better development tool support and extends the language with additional features beyond standard decorators. Additionally, it can be converted to plain JavaScript when needed.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b97sgli1lz4fjs2mj892.png)

TypeScript is described as a strict superset of JavaScript, adding optional static typing and class-based object-oriented programming aligned with ECMAScript 6 standard proposals. It compiles down to plain JavaScript without requiring any runtime library support.

Advantages of TypeScript:

1. TypeScript highlights errors at compile time, while JavaScript does so at runtime.
2. TypeScript offers the benefits of optional static typing, allowing types to be added to variables, functions, properties, etc.
3. TypeScript supports both strict and static typing, which can help document functions, clarify usage, and reduce cognitive overhead.
4. TypeScript is compatible with any browser or JavaScript engine.
5. It provides excellent tooling with features like IntelliSense, offering dynamic hints as code is written.
6. TypeScript aids in code organization and supports interfaces.
7. It offers better documentation for APIs, which helps in reducing bugs during development.
8. TypeScript's editor modules provide a seamless IDE developer experience.

Disadvantages of TypeScript:

1. TypeScript can take a long time to compile code.
2. It does not support abstract classes.
3. When using third-party libraries, there may be a need for definition files, which are not always available.
4. The quality of type definition files can vary.
5. TypeScript code still needs to be compiled to JavaScript whenever it needs to run in a browser.
6. Finding TypeScript developers may be challenging, as some developers may not be interested in learning TypeScript, preferring to stick with JavaScript.
7. Teams transitioning from JavaScript to TypeScript may take time to become productive, with full proficiency in TypeScript taking even longer.
8. TypeScript lacks some features compared to JavaScript, such as Higher Order Functions, Composition, and Generics with Higher Kinds.
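As a quick illustration of the optional static typing described above, here is a small, self-contained snippet (the `User` interface and `greet` function are made up for this example):

```typescript
// With type annotations, mistakes like a missing field or a wrong argument
// type are rejected by the compiler before the code ever runs.
interface User {
  name: string;
  age: number;
}

function greet(user: User): string {
  return `Hello, ${user.name} (${user.age})`;
}

const ada: User = { name: "Ada", age: 36 };
console.log(greet(ada)); // Hello, Ada (36)

// Both of these would fail at compile time, not at runtime:
// greet({ name: "Bob" });   // error: property 'age' is missing
// greet("not a user");      // error: string is not assignable to User
```

Once compiled, the output is plain JavaScript with the type annotations erased, which is why TypeScript runs anywhere JavaScript does.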
afnan_ahmed
1,880,508
How would you handle inter service communication in a micro-service architecture using Spring Boot
In a microservice architecture using Spring Boot, inter-service communication is typically achieved...
0
2024-06-07T14:28:57
https://dev.to/codegreen/how-would-you-handle-inter-service-communication-in-a-micro-service-architecture-using-spring-boot-kfi
java, backend, springboot, microservices
In a microservice architecture using Spring Boot, inter-service communication is typically achieved through RESTful APIs. Here's how:

Let's illustrate with two microservices: **UserService** and **OrderService**.

1. **Define APIs:** Each microservice exposes APIs to perform various operations.
2. **Call APIs from other services:** Microservices interact by making HTTP requests to endpoints exposed by other services.
3. **Service Discovery (optional):** Service discovery tools like Eureka or Consul can be used to dynamically locate and call other services.

Example:

**UserService** defines APIs to manage users:

```java
@RestController
public class UserController {

    @Autowired
    private UserRepository userRepository;

    @GetMapping("/users/{userId}")
    public ResponseEntity<User> getUser(@PathVariable("userId") Long userId) {
        User user = userRepository.findById(userId).orElse(null);
        return ResponseEntity.ok(user);
    }

    @PostMapping("/users")
    public ResponseEntity<User> createUser(@RequestBody User user) {
        User savedUser = userRepository.save(user);
        return ResponseEntity.status(HttpStatus.CREATED).body(savedUser);
    }

    // Other CRUD endpoints...
}
```

**OrderService** consumes UserService's API to retrieve user data:

```java
@Service
public class OrderService {

    @Autowired
    private RestTemplate restTemplate;

    public User getUser(Long userId) {
        ResponseEntity<User> response = restTemplate.exchange(
            "http://userServiceHost/users/{userId}",
            HttpMethod.GET,
            null,
            User.class,
            userId
        );
        return response.getBody();
    }

    // Other methods...
}
```

This setup enables decoupled communication between microservices, promoting scalability and flexibility.
manishthakurani
1,880,507
Day 6 Task: File Permissions and Access Control Lists🚀
Introduction📚 Welcome to Day 6 of the #90daysofdevops challenge. In this Blog, we will explore File...
0
2024-06-07T14:27:52
https://dev.to/oncloud7/day-6-task-file-permissions-and-access-control-lists-5h1
linux, cloudcomputing, 90daysoddevops
**Introduction📚**

Welcome to Day 6 of the #90daysofdevops challenge. In this blog, we will explore file permissions and ownership for files and directories, and also learn how to apply ACLs to files and directories. So let's explore permissions in Linux.📝

**File Permissions Overview📃**

File permissions are core to the security model used by Linux systems. They determine who can access files and directories on a system.

**Linux File Ownership**

Every file and directory on your Unix/Linux system is assigned three types of owner, given below.

**User**

A user is the owner of the file. By default, the person who created a file becomes its owner. Hence, a user is also sometimes called an owner.

**Group**

A user group can contain multiple users. All users belonging to a group will have the same group permissions on the file. Suppose you have a project where a number of people require access to a file. Instead of manually assigning permissions to each user, you could add all of them to a group and assign group permissions to the file, such that only the group members and no one else can read or modify the file.

**Other**

Any other user who has access to the file. This person has neither created the file, nor do they belong to a user group that owns the file. Practically, it means everybody else. Hence, when you set permissions for others, it is also referred to as setting permissions for the world.

**Linux File Permissions**

Basically, there are three types of permissions:

Read (r) - This permission gives you the authority to open and read a file.📰

Write (w) - Write permission gives you the authority to modify the contents of a file: you can add, remove or modify content in the file.📝

Execute (x) - Execute permission gives you the authority to run the file.💻

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zl81ssfau7vkmxaai6e4.jpg)

To view permissions we can use `ls -l`, and to modify them we use `chmod` (while `chown` changes a file's ownership).
**Task 1: Create a simple file and do `ls -ltr` to see the details of the files**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uedgwmyzqk9nvhue390y.png)

**File Permissions in Linux**

**Check Permissions in the Command Line with the ls Command**

From the command line, you can easily find a file’s permission settings with the `ls` command, used to list information about files/directories. You can also add the `-l` option to the command to see the information in the long list format.

To check the permission configuration of a file, use the command:

```
ls -l [file_name]
```

For instance, the command for the previously mentioned file would be:

```
ls -l test.txt
```

**Using the chmod Command to Change File Permissions**

As all Linux users, you will at some point need to modify the permission settings of a file/directory. The command that executes such tasks is the `chmod` command. The basic syntax is:

```
chmod [permission] [file_name]
```

**There are two ways to define permission:**

- using symbols (alphanumerical characters)
- using the octal notation method

**Define File Permission with Symbolic Mode**

To specify permission settings using alphanumerical characters, you’ll need to define accessibility for the user/owner (u), group (g), and others (o). Type the initial letter for each class, followed by the equal sign (=) and the first letter of the read (r), write (w) and/or execute (x) privileges.
To set a file so it is public for reading, writing, and executing, the command is:

```
chmod u=rwx,g=rwx,o=rwx [file_name]
```

To set permissions as in the previously mentioned test.txt to be:

• read and write for the user
• read for the members of the group
• read for other users

Use the following command:

```
chmod u=rw,g=r,o=r test.txt
```

Instead of letters, the octal format represents privileges with numbers:

- read has the value of 4
- write has the value of 2
- execute has the value of 1
- no permission has the value of 0

The privileges are summed up and depicted by one number. Therefore, the possibilities are:

- 7 – for read, write, and execute permission
- 6 – for read and write privileges
- 5 – for read and execute privileges
- 4 – for read privileges

As you have to define permissions for each category (user, group, other), the command will include three (3) numbers (each representing the summation of privileges). For instance, let’s look at the test.txt file that we symbolically configured with the `chmod u=rw,g=r,o=r test.txt` command. The same permission settings can be defined using the octal format with the command:

```
chmod 644 test.txt
```

**Access Control Lists Command (ACL)**

The Access Control List provides an additional, more flexible permission mechanism for the file system. ACL is a service used for granting special permissions to a specific user or group on a particular directory or file. The two main ACL commands are ⚙getfacl and setfacl🐱‍🏍.

The getfacl command is used to retrieve the ACLs of a file or directory. It shows the detailed ACL entries and their associated permissions for specific users and groups.

```
getfacl cron.txt
# first we need to install acl using: sudo apt install acl
```

The setfacl command is used to set or modify ACLs on a file or directory. It allows you to define specific access permissions for individual users or groups.
```
setfacl -m g::rwx cron.txt
# Here we are adding read, write and execute permission for the group on the file cron.txt
```
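The read = 4 / write = 2 / execute = 1 arithmetic from the octal section above can also be expressed in a few lines of code. This is a hypothetical helper written purely for illustration, not part of any Linux tooling:

```typescript
// Convert a symbolic mode string like "rw-r--r--" into its octal form "644".
// Each triplet (user, group, other) sums read = 4, write = 2, execute = 1.
function symbolicToOctal(mode: string): string {
  if (!/^[rwx-]{9}$/.test(mode)) throw new Error(`invalid mode: ${mode}`);
  const weight: Record<string, number> = { r: 4, w: 2, x: 1, "-": 0 };
  let octal = "";
  for (let i = 0; i < 9; i += 3) {
    const triplet = mode.slice(i, i + 3);
    octal += [...triplet].reduce((sum, ch) => sum + weight[ch], 0).toString();
  }
  return octal;
}

console.log(symbolicToOctal("rw-r--r--")); // 644  (same as chmod u=rw,g=r,o=r)
console.log(symbolicToOctal("rwxrwxrwx")); // 777  (read, write, execute for all)
```

This mirrors exactly what `chmod 644` encodes: 6 = 4+2 for the user, 4 for the group, and 4 for others.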
oncloud7
1,880,506
How to Build Customizable Cancel Flows
This Blog was Originally Posted to Churnfree Blog This blog explains all about cancel flow, what it...
0
2024-06-07T14:27:42
https://churnfree.com/blog/cancel-flow/
cancelflow, cancelservises, tutorial, retention
This Blog was Originally Posted to [Churnfree Blog](https://churnfree.com/blog/cancel-flow/?utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_Distribution)

This blog explains all about cancel flows: what they are, why they are used, and step-by-step instructions on how to set up a cancel flow for free! If you’ve been on the hunt for customer churn software, your search ends here. Churnfree offers seamless integration of retention flows directly into your website, requiring only minimal code. In this article, we’ll explore how Churnfree can significantly reduce customer cancellations and provide invaluable feedback about your products and services. Before we begin the tutorial, let’s dig deeper into cancel flows.

**What is a Cancel Flow?**

A cancel flow, also known as a cancellation flow or churn flow, is a flow that pops up when a customer decides to cancel a subscription or service. These flows are designed by companies to manage and potentially reduce customer churn by understanding the reasons behind cancellations and offering alternatives or incentives to retain the customer. The form or survey may include questions like “Why are you leaving?” and offers based on the reason for leaving.

**What are Cancellation Flows used for?**

**Understanding Reasons for Cancellation:** Companies often include surveys or questions in the cancel flow to gather feedback on why customers are leaving. This data can be invaluable for improving services and addressing common issues.

**Offering Solutions or Alternatives:** During the cancel flow, companies might offer solutions to customers’ problems, such as downgrading to a cheaper plan, offering a temporary discount, or pausing the subscription instead of canceling it outright.

**Retention Offers:** To encourage customers to stay, businesses may present special retention offers such as additional discounts, extended trial periods, or exclusive benefits.
**Confirmation and Finalization:** The flow ensures that the customer intentionally cancels the service, often requiring multiple steps to confirm the cancellation. This can include final confirmation screens or emails to prevent accidental cancellations. **Post-Cancellation Engagement:** Even after a customer cancels, companies might keep the door open for future engagement. This could involve follow-up emails with offers to return, updates on service improvements, or feedback requests. **Legal and Compliance Requirements:** Cancel flows need to comply with relevant regulations and laws, ensuring that customers can cancel their subscriptions easily and transparently without being subjected to unfair practices. Let’s get into the step-by-step process of creating a cancel flow for free. If you’d like to watch a demo video of Churnfree Integration, [click here](https://www.youtube.com/watch?v=ffVE5BtZ3DI&utm_source=Dev.to&utm_medium=referral&utm_campaign=Content_Distribution). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6sre6oezp1tuilgbqhvf.png) Let’s start! **Step 1: Add your first customer retention flow** First, navigate to your Churnfree dashboard. You will have to create your first retention flow if you are a new user. Click “ Add first retention flow” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1lkrf7llo7u5b0v13p3c.png) **Step 2: Create a customer cancellation survey** Cancellation Surveys are a great way to start a retention flow. Let’s get started building your very own survey. Everything is editable - change the heading, subheading, and individual options. You can add as many options as you like but we suggest keeping it between 3-5. You can also change the button text. Click “Continue” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfuiiyxdmu383wk30p58.png) **Step 3: Create offers to reduce customer cancellations** Now we need to create “Offers” to show users. 
Depending on what options users choose they will be shown an offer. Click “Create New Offer” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3zatajubnuypzzcbzozx.png) Everything in the “Offer” box is editable. You can change the Heading and Sub Text as well. Write a convincing header and body text to captivate your users and nudge them into accepting your offer. You can also change the button text. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dkakmq2q7lpz4agmozjy.png) After you have finished editing the content, add in your offer URL. This should be the URL of the landing page where your users can claim their offer. Don’t forget to give your offer a name so that you can easily find it later ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i0u7apya1onxx5thplmo.png) Click “Save Offer” to continue ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ed9xe99g2rezmmrp4xj.png) You can create a number of offers depending on how many options you have in your cancellation survey. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxazyjspgxqbqzdwn9kc.png) Don’t forget to give your retention flow a name. Click on the 3 dots and choose “Edit” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5858uom3s3m5if6l98fq.png) After giving your retention flow a name, click “Continue” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxbbgrz8sfdf5cv0gayu.png) Now you can choose what offers you want to give users depending on what they chose on the survey. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tnkh5fpkdhfussiaqdcj.png) If you have a new offer in the pipeline, click “Create a new offer” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/plfy2dhhoxvkfy24oi55.png) Fill in the details and click “Save Offer” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2rd75ds02iem0z5vrxwx.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tcbkwsq6ysmlkqh6ku73.png) You’re almost done. Here’s what it will look like. Click through the survey to experience what your customers will see. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y8ijcym1fo5t9nm09kxa.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3brcjv2wxoe3qqboqpyd.png) **Step 4: An important part of customer retention is feedback** All feedback is good, some of it valuable. Click on the individual text boxes to edit the text inside. When you’re ready click “Continue” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tjextw0cj2rf911xiprk.png) Now we edit the Confirmation step. Make sure to let your customers know what features they will be missing out on if they do cancel. Use this step to try and highlight the package offerings. Click on the individual text boxes to edit the text inside. When you’re ready click “Continue” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x5u5n3mphz1x04cuethw.png) Now we look at the design of your retention flow. Change font, text color, background color, and button text and color. When you’re ready click “Continue” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z2nlr3xy9q4fmkatj03h.png) You’re almost done. Add your cancellation URL here. This is the landing page where users will land on after canceling a subscription. 
Click “Publish”

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hlsc3t5ueyg4bl1tnwmx.png)

**Step 5: Embed Retention flow into your website.**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3nw0jm8hnrta2hp4wrvx.png)

We need to add the scripts to your website to start implementing the retention flow to avoid subscriber cancellations.

**Step 6: Add Churnfree code to Shopify website:**

We will walk you through the integration for Shopify websites, but the integration is pretty much the same for all other websites, like those built on WordPress and other CMSs. First, navigate to your Shopify store. Click Online Store and select Themes. Click the Actions drop-down and choose “Edit code”

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1wnhin18lh3165i6e1t.png)

Click on theme.liquid to start editing the code

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nuigjr32vna4i0w8ecgb.png)

You will need to paste the first bit of code from Churnfree before the closing body tag (</body>). The easiest way to find this is by searching for it. Click Ctrl+F and search for “</body>”. Copy the code from step 1 and paste it above the closing body tag

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yi82kpjnlmetwf9bn04y.png)

Now we move on to step 2 - copy the script under step two from your Churnfree dashboard

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y3dk6gg062fwv1sjwwl5.png)

Paste it under your script from step 1

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qfnqkqcmdmrarbyfagf6.png)

We now need to replace the “BUTTON_ID” with the desired element’s ID to connect and launch Churnfree with your website’s cancellation button. Remember, it doesn’t have to be a button - it can be a link or even an image.
In this example, we will consider the element to be a button with ID “btn_cancel” Do a quick search to verify that “btn_cancel” exists on the page and copy the ID ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0tl6g59aidn1x0trkgtt.png) Replace the “BUTTON_ID” with your button ID that you just copied ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rtfpuy1y671v9die6lmx.png) Click “Save” to continue ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ci5pj3baszzgzpjvk0cd.png) Now go to your website and refresh the page. Navigate to your cancellation page and click the cancellation button - your Churnfree retention flow should now be implemented. **Common Mistakes to Avoid in Designing a Cancel Flow** When designing a cancel flow, it’s important to avoid common mistakes that can negatively impact the customer experience and increase churn. Here are a few to watch out for: **Complicated Navigation:** A convoluted cancel flow with multiple steps and unclear instructions can frustrate customers. Keep the process simple and intuitive. **Lack of Transparency:** Failing to provide clear information about fees, consequences, and cancellation policies can erode trust and lead to customer dissatisfaction. **Ignoring Customer Feedback:** Neglecting to analyze customer feedback and make improvements based on their insights can result in missed opportunities to reduce churn. **Lack of Personalization:** Treating all canceling customers the same without considering their unique needs and concerns can lead to missed opportunities to retain them. By avoiding these mistakes and prioritizing a customer-centric approach, businesses can create a cancel flow that minimizes churn and fosters long-term loyalty. **Types of Cancel Flows** Here are some types of cancellation flows: **Unsubscribe Cancel Flow** The unsubscribe cancel flow is commonly used for services that require a subscription or membership. 
When customers decide to cancel their subscription, they are guided through a series of steps to complete the process. This type of cancel flow often includes options for customers to provide feedback on why they are canceling and offers incentives to encourage them to reconsider. By implementing an unsubscribe cancel flow, businesses can gain insights into customer preferences and improve their subscription offerings. **Account Cancellation Cancel Flow** Account cancellation cancel flow is typically employed by platforms that require users to create an account. When users want to permanently delete their account, this cancel flow guides them through the necessary steps to complete the cancellation process. It may involve confirmation prompts, warnings about the consequences of account deletion, and opportunities to provide feedback on the user experience. This type of cancel flow ensures that customers have a straightforward and secure way to cancel their accounts when needed. **Subscription Cancellation Cancel Flow** For businesses that offer subscription-based products or services, implementing a subscription cancellation cancel flow is crucial. This type of cancel flow enables customers to easily cancel their subscriptions without any hassle. It may include options for customers to choose the duration of the cancellation, provide feedback on their experience, or access special offers to entice them to continue their subscription. By streamlining the cancellation process, businesses can retain customers even when they decide to cancel their subscriptions temporarily or permanently. **Return and Refund Cancellation Cancel Flow** When customers want to cancel an order due to dissatisfaction or other reasons, the return and refund cancellation cancel flow comes into play. This cancel flow ensures that customers have a clear and straightforward process to initiate returns, request refunds, or exchange products. 
It may include detailed instructions on how to package the items for return, options for selecting the reason for return, and automated updates on the status of the return or refund. By implementing an efficient return and refund cancel flow, businesses can minimize customer frustration and build trust and loyalty.

**Upgrading or Downgrading Cancellation Cancel Flow**

In some cases, customers may want to cancel their current plan or subscription to upgrade or downgrade to a different one. The upgrading or downgrading cancellation cancel flow caters to these specific needs. It guides customers through the process of canceling their current plan while providing options to select a new plan that better suits their requirements. This cancel flow ensures a seamless transition for customers and simplifies the process of changing subscription levels.

If you have any trouble getting Churnfree to work, get in touch with us today and one of our specialists will get in touch with you within one working day.

**FAQs**

**How can Churnfree help SaaS businesses reduce churn?**

Churnfree is the perfect solution for SaaS businesses looking to reduce customer churn. With Churnfree, you can build customizable retention flows to win back lost customers and improve your retention rate. With Churnfree, you have complete control over your retention strategy. You can customize each retention flow to fit the specific needs of your business, and you can track and measure the results of each campaign to optimize your approach over time.

**What are some of the benefits of using Churnfree?**

- The ability to build customizable retention flows to fit the specific needs of your business
- The ability to automate your retention flows
- The ability to track and measure the results of each campaign
- The ability to improve your retention rate and reduce customer churn for your business
**What is the customer retention process flow?** The customer retention process flow involves a series of steps and strategies designed to keep existing customers engaged, satisfied, and loyal to a brand or service. This process is critical for maintaining a stable customer base and ensuring long-term business success.
churnfree
1,879,879
Git for Beginners, basic commands...
If you are starting out in the world of programming or have just begun your first job as a...
0
2024-06-07T14:21:18
https://dev.to/andresordazrs/git-para-principiantes-comandos-basicos-5b1l
git, developers, beginners
If you're starting out in the world of programming or have just begun your first job as a developer, you've probably already heard about Git. This powerful version control tool is essential for managing your code and collaborating with other developers. But if you don't yet fully understand how it works or why it's so important, don't worry! We've all been there. In this article, I'll guide you through the basic Git commands you need to master. Imagine you've just joined a development team. You're assigned your first project and told to clone the repository, create a branch for your work, and finally sync your changes with the team. It may seem overwhelming, but by the end of this article you'll have a clear understanding of these processes and will be ready to use them in your day-to-day work. ## **Cloning a Repository** The first step to working on an existing project is cloning the repository to your local machine. To do this, follow these steps: 1. Go to the repository page on the platform you're using (GitHub, GitLab, Bitbucket, etc.). 2. Copy the repository URL (there's usually a button that says "Clone" or "Clone with HTTPS"). **Command:** **_git clone_** **Example:** ``` git clone https://github.com/usuario/repo.git ``` **Explanation:** This command clones the specified repository to your local machine, creating an exact copy of the project. ## **Entering the Cloned Repository** After cloning the repository, you need to navigate into the project directory. **Command:** **_cd_** **Example:** ``` cd repo ``` **Explanation:** Changes into the directory of the cloned repository so you can start working on it. ## **Creating and Switching Branches** Branches let you work on different features or fix bugs without affecting the main code.
To create a new branch and switch to it, use the following commands: **Command:** **_git branch_** and **_git checkout_** **Example:** ``` git branch nueva-rama git checkout nueva-rama ``` **Explanation:** `git branch nueva-rama` creates a new branch called "nueva-rama", and `git checkout nueva-rama` switches your working environment to that branch. ## **Adding Changes to the Index** After making changes to your files, you need to add those changes to the index (staging area) before committing. **Command:** **_git add_** **Example:** ``` git add archivo.txt ``` **Explanation:** This command adds archivo.txt to the index, preparing it for the commit. ## **Making a Commit** Once you've added your changes to the index, you confirm them by creating a commit. **Command:** **_git commit_** **Example:** ``` git commit -m "Mensaje del commit" ``` **Explanation:** This command creates a new commit with a message describing the changes made. ## **Updating Your Branch (Pull)** Before sending your changes to the remote repository, it's good practice to make sure your local branch is up to date with the changes in the remote repository. **Command:** **_git pull_** **Example:** ``` git pull origin main ``` **Explanation:** This command brings the changes from the remote repository into your local branch, ensuring you're working with the most recent version of the code. ## **Syncing with the Remote Repository (Push)** After committing and updating your branch, you can send your changes to the remote repository. **Command:** **_git push_** **Example:** ``` git push origin nueva-rama ``` **Explanation:** This command sends the commits from your local branch to the remote repository, updating the corresponding remote branch.
## **Checking the Repository Status** To check the current state of your repository, including unstaged changes and pending commits, use the following command: **Command:** **_git status_** **Example:** ``` git status ``` **Explanation:** This command shows the state of the files in the working directory and the index, indicating which changes have been made and which are still waiting to be committed. ## **Viewing the Commit History** You can review the commit history to see all the changes made to the project. **Command:** **_git log_** **Example:** ``` git log ``` **Explanation:** This command shows the commit history with details such as the author, date, and commit message. ## **Undoing Changes** If you need to undo changes in your working directory, you can use the following command: **Command:** **_git checkout_** **Example:** ``` git checkout -- archivo.txt ``` **Explanation:** This command restores archivo.txt to its last committed version, discarding the uncommitted changes. ## **Removing Changes from the Index** To remove files from the staging area, use the git reset command. **Command:** **_git reset_** **Example:** ``` git reset archivo.txt ``` **Explanation:** This command removes archivo.txt from the index but keeps the changes in your working directory. In summary, we've covered the basic Git commands that will let you manage versions of your code and collaborate on projects efficiently. Practicing these commands on personal projects will help you reinforce your understanding and become familiar with their use. Keep practicing and exploring what Git can do!
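The commands above fit together into a single workflow. Here's a minimal sketch that exercises most of them end to end against a throwaway local repository (no remote, so the pull/push steps are left out). It assumes `git` is installed, reuses the article's example names `archivo.txt` and `nueva-rama`, and the `demo@...` identity is a made-up placeholder:

```shell
# Throwaway repository standing in for a freshly cloned project
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "[email protected]"  # placeholder identity so commits work anywhere
git config user.name "Demo"

# First commit on the default branch
echo "hola" > archivo.txt
git add archivo.txt
git commit -q -m "Mensaje del commit"

# Create a feature branch and switch to it
git branch nueva-rama
git checkout -q nueva-rama

git status --short   # prints nothing: everything is committed
git log --oneline    # shows the single commit, now reachable from nueva-rama
```

With a real remote configured, you would run `git pull origin main` before `git push origin nueva-rama` to publish the branch.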
andresordazrs
1,880,503
LeetCode Meditations — Chapter 11: Graphs
Table of contents Introduction Representing graphs Edge List Adjacency...
26,418
2024-06-07T14:20:55
https://rivea0.github.io/blog/leetcode-meditations-chapter-11-graphs
computerscience, algorithms, typescript, javascript
## Table of contents - [Introduction](#intro) - [Representing graphs](#representing-graphs) - [Edge List](#edge-list) - [Adjacency Matrix](#adjacency-matrix) - [Adjacency List](#adjacency-list) - [Traversals](#traversals) - [Breadth-First Search (BFS)](#bfs) - [Depth-First Search (DFS)](#dfs) - [Resources](#resources) --- <a name="intro"></a> A graph is probably _the_ data structure that everyone is familiar with, regardless of their profession or interests. [Graph theory](https://en.wikipedia.org/wiki/Graph_theory#Representation) is a very broad topic, but we'll simply look at some of the main ingredients of what makes a graph and how to represent it, as well as basic graph traversals. --- In a graph, there are two main components: vertices (or nodes) and edges that connect those vertices. | Note | | :-- | | Here, we're going to use "vertex" and "node" interchangeably. <br> The terms "adjacent vertices" and "neighbors" are used interchangeably as well. | A graph can be **directed** or **undirected**. With a directed edge, we have an _origin_ and a _destination_ vertex. On the other hand, an undirected edge is _bidirectional_: origin and destination are not fixed. | Note | | :-- | | There might also be [_mixed graphs_](https://en.wikipedia.org/wiki/Graph_(discrete_mathematics)#Mixed_graph) that have both directed and undirected edges. | A graph can also be weighted or unweighted: in a weighted graph, each edge carries a weight, usually representing the cost of going from one vertex to the other. We can define a graph like this: {% katex %} G = (V, \ E) {% endkatex %} {% katex inline %} V {% endkatex %} is a set of vertices, and {% katex inline %} E {% endkatex %} is a set of edges.
For example, if we have a directed graph like this: ![Directed graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpg2dn4j2wgjnpaklnsf.gif) Then, we have the vertices: {% katex %} V = \lbrace A, \ B, \ C, \ D \rbrace {% endkatex %} And, the edges are: {% katex %} E = \lbrace(A, \ B), \ (A, \ C), \ (C, \ B), \ (C, \ D), \ (D, \ C)\rbrace {% endkatex %} If we have an undirected graph such as this one: ![Undirected graph](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jc3fe058k7xv79pjqnl5.gif) We have the same vertices: {% katex %} V = \lbrace A, \ B, \ C, \ D \rbrace {% endkatex %} But our edges can look like this: {% katex %} E = \lbrace \lbrace B, A \rbrace, \lbrace A, C \rbrace, \lbrace C, B \rbrace, \lbrace D, C \rbrace \rbrace {% endkatex %} | Note | | :-- | | We use parentheses when it comes to directed edges but curly braces with undirected edges as there is no direction from one vertex to the other. | When two vertices share an edge, they are **_adjacent_** to each other. The **_degree_** of a vertex is the number of vertices adjacent to it. <mark>We can also define the degree as the number of edges coming out of the vertex.</mark> In our example above, vertex A has a degree of 2. --- A **simple path** is one in which we don't repeat any vertices while traversing the graph. An example might look like this: ![Simple path example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3wlv7o7i83oksiea51v0.gif) A **cycle** is a simple path, except that we end up at the vertex we started with: ![Cycle example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0rmrpbixuevubct1yjzz.gif) --- <a name="representing-graphs"></a> When it comes to representing graphs, there are several ways to do it, and we'll look at three of them: an edge list, an adjacency matrix, and an adjacency list.
<a name="edge-list"></a> ### Edge List We can simply put all the edges in an array: ``` [ [A, B], [A, C], [B, C], [C, D] ] ``` However, to find an edge in an edge list, we'll have to iterate through them, so it will have {% katex inline %} O(E) {% endkatex %} time complexity, where in the worst case, we'll search the whole list to find an edge. Similarly, it needs {% katex inline %} O(E) {% endkatex %} amount of space to represent all the edges. <a name="adjacency-matrix"></a> ### Adjacency Matrix The adjacency matrix for our example might look like this: {% katex %} \left\lceil \begin{matrix} & A & B & C & D \newline A & 0 & 1 & 1 & 0 \newline B & 1 & 0 & 1 & 0 \newline C & 1 & 1 & 0 & 1 \newline D & 0 & 0 & 1 & 0 \end{matrix} \right\rceil {% endkatex %} Each row is for a vertex, and the matching column shows the relationship between those vertices. For example, the vertex A doesn't have an edge pointing to D, so the cell that matches A and D is 0. On the other hand, A is connected to B and C, so those cells have the value 1. | Note | | :-- | | If the graph is weighted, we can simply put the weight instead of `1`, and when there is no edge, the value can stay `0`. | <br /> | Note | | :-- | | An adjacency matrix will have 0s in the "main diagonal" if there are no self-loops. | Let's try implementing it in TypeScript. We'll start with a minimal graph vertex: ```ts class GraphVertex { public value: string | number; constructor(value: string | number) { this.value = value; } } ``` Now we can define our graph. We'll make it really simple with three properties to hold: `matrix` to represent the graph as an adjacency matrix, `vertices` to hold vertices, and `isDirected` to indicate whether our graph is directed: ```ts class Graph { public matrix: number[][]; public vertices: GraphVertex[]; public isDirected: boolean; constructor(vertices: GraphVertex[], isDirected = true) { this.vertices = vertices; this.isDirected = isDirected; ... } ... 
} ``` Initializing our adjacency matrix might look like this: ```ts this.matrix = Array.from({ length: vertices.length }, () => { return Array.from({ length: vertices.length }, () => 0) }); ``` We'll have an array with the length of vertices, each item in the array is an array with the length of vertices as well, but filled with zeroes. In our example with four vertices, the initial adjacency matrix looks like this: ```ts [ [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0] ] ``` Then, adding an edge is just marking the corresponding value as `1`, so that we can represent a connection between two vertices: ```ts this.matrix[this.vertices.indexOf(v1)][this.vertices.indexOf(v2)] = 1; ``` | Note | | :-- | | This implementation assumes that all vertices are distinct. | _If we have an undirected graph, we can have it both ways:_ ```ts if (!this.isDirected) { this.matrix[this.vertices.indexOf(v2)][this.vertices.indexOf(v1)] = 1; } ``` Removing an edge, in this case, will be just resetting the value to `0`: ```ts this.matrix[this.vertices.indexOf(v1)][this.vertices.indexOf(v2)] = 0; ``` And, checking for the existence of an edge is simply checking whether the corresponding value is `0` or not: ```ts this.matrix[this.vertices.indexOf(v1)][this.vertices.indexOf(v2)] !== 0; ``` And, here is the whole example: ```ts class Graph { public matrix: number[][]; public vertices: GraphVertex[]; public isDirected: boolean; constructor(vertices: GraphVertex[], isDirected = true) { this.vertices = vertices; this.matrix = Array.from({ length: vertices.length }, () => { return Array.from({ length: vertices.length }, () => 0) }); this.isDirected = isDirected; } addEdge(v1: GraphVertex, v2: GraphVertex) { this._checkVertexIsInGraph(v1); this._checkVertexIsInGraph(v2); this.matrix[this.vertices.indexOf(v1)][this.vertices.indexOf(v2)] = 1; if (!this.isDirected) { this.matrix[this.vertices.indexOf(v2)][this.vertices.indexOf(v1)] = 1; } } /* For a weighted graph: addEdge(v1: GraphVertex, 
v2: GraphVertex, weight: number) { this._checkVertexIsInGraph(v1); this._checkVertexIsInGraph(v2); this.matrix[this.vertices.indexOf(v1)][this.vertices.indexOf(v2)] = weight; if (!this.isDirected) { this.matrix[this.vertices.indexOf(v2)][this.vertices.indexOf(v1)] = weight; } } */ removeEdge(v1: GraphVertex, v2: GraphVertex) { this._checkVertexIsInGraph(v1); this._checkVertexIsInGraph(v2); this.matrix[this.vertices.indexOf(v1)][this.vertices.indexOf(v2)] = 0; if (!this.isDirected) { this.matrix[this.vertices.indexOf(v2)][this.vertices.indexOf(v1)] = 0; } } hasEdge(v1: GraphVertex, v2: GraphVertex) { this._checkVertexIsInGraph(v1); this._checkVertexIsInGraph(v2); return this.matrix[this.vertices.indexOf(v1)][this.vertices.indexOf(v2)] !== 0; } getAdjacencyMatrix() { return this.matrix; } _checkVertexIsInGraph(v: GraphVertex) { if (!this.vertices.includes(v)) { throw new Error('Vertex doesn\'t exist'); } } } let a = new GraphVertex('A'); let b = new GraphVertex('B'); let c = new GraphVertex('C'); let d = new GraphVertex('D'); let graph = new Graph([a, b, c, d], false); graph.addEdge(a, b); graph.addEdge(a, c); graph.addEdge(b, c); graph.addEdge(c, d); console.log(graph.getAdjacencyMatrix()); // -> [ [0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0] ] ``` Operations on an adjacency matrix have {% katex inline %} O(1) {% endkatex %} time complexity; however, our storage needs will be {% katex inline %} O(V^2) {% endkatex %} where {% katex inline %} V {% endkatex %} is the number of vertices. <a name="adjacency-list"></a> ### Adjacency List In an adjacency list, usually a hashmap _or_ an array of linked lists is used. For example: ```ts let graph = { 'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B', 'D'], 'D': ['C'] } ``` Let's see how we can modify our code above to use an adjacency list instead.
Instead of having a `matrix` which is an array of arrays, we can have a [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) that maps the vertices to an array of their neighbors. We can initialize it as a map that has the vertices as keys, each of which has a value of an empty array for now: ```ts this.list = new Map<GraphVertex, GraphVertex[]>(); for (const v of vertices) { this.list.set(v, []); } ``` Adding an edge will be just pushing to the array of corresponding vertex: ```ts this.list.get(v1)!.push(v2); ``` _If our graph is undirected, we can do it both ways here as well:_ ```ts if (!this.isDirected) { this.list.get(v2)!.push(v1); } ``` Removing an edge will be deleting that vertex from the array: ```ts this.list.set(v1, this.list.get(v1)!.filter(v => v !== v2)); ``` Checking if an edge exists is just checking the existence of that vertex in the array: ```ts this.list.get(v1)!.includes(v2); ``` | Note | | :-- | | We're using a [non-null assertion operator](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-0.html#non-null-assertion-operator). <br> As we'll see below, we're first checking if the vertex is in the graph. And since we're adding all the vertices in the graph as keys to `this.list`, we're sure that getting that vertex from the list is not `undefined`. However, TypeScript will warn us because if a key is not found in a `Map` object, it could [potentially return `undefined`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map/get#return_value). 
| Here is our graph: ```ts class Graph { public list: Map<GraphVertex, GraphVertex[]>; public vertices: GraphVertex[]; public isDirected: boolean; constructor(vertices: GraphVertex[], isDirected = true) { this.vertices = vertices; this.list = new Map(); for (const v of vertices) { this.list.set(v, []); } this.isDirected = isDirected; } addEdge(v1: GraphVertex, v2: GraphVertex) { this._checkVertexIsInGraph(v1); this._checkVertexIsInGraph(v2); this.list.get(v1)!.push(v2); if (!this.isDirected) { this.list.get(v2)!.push(v1); } } removeEdge(v1: GraphVertex, v2: GraphVertex) { this._checkVertexIsInGraph(v1); this._checkVertexIsInGraph(v2); this.list.set(v1, this.list.get(v1)!.filter(v => v !== v2)); if (!this.isDirected) { this.list.set(v2, this.list.get(v2)!.filter(v => v !== v1)); } } hasEdge(v1: GraphVertex, v2: GraphVertex) { this._checkVertexIsInGraph(v1); this._checkVertexIsInGraph(v2); return this.list.get(v1)!.includes(v2); } getAdjacencyList() { return this.list; } _checkVertexIsInGraph(v: GraphVertex) { if (!this.vertices.includes(v)) { throw new Error('Vertex doesn\'t exist'); } } } let a = new GraphVertex('A'); let b = new GraphVertex('B'); let c = new GraphVertex('C'); let d = new GraphVertex('D'); let graph = new Graph([a, b, c, d], false); graph.addEdge(a, b); graph.addEdge(a, c); graph.addEdge(b, c); graph.addEdge(c, d); console.log(graph.getAdjacencyList()); /* Output: Map (4) { GraphVertex: { "value": "A" } => [ GraphVertex: { "value": "B" }, GraphVertex: { "value": "C" } ], GraphVertex: { "value": "B" } => [ GraphVertex: { "value": "A" }, GraphVertex: { "value": "C" } ], GraphVertex: { "value": "C" } => [ GraphVertex: { "value": "A" }, GraphVertex: { "value": "B" }, GraphVertex: { "value": "D" } ], GraphVertex: { "value": "D" } => [ GraphVertex: { "value": "C" } ] } */ ``` Getting the neighbors of a vertex is {% katex inline %} O(1) {% endkatex %} because we're just looking up a key in a map. 
However, finding a particular edge can be {% katex inline %} O(d) {% endkatex %} where {% katex inline %} d {% endkatex %} is the degree of the vertex, because we might need to traverse all its neighbors to find it. In the worst case, {% katex inline %} d {% endkatex %} is {% katex inline %} V - 1 {% endkatex %} where {% katex inline %} V {% endkatex %} is the number of vertices in the graph: that's the case when the vertex has all the other vertices as its neighbors. The space complexity can be {% katex inline %} O(V + E) {% endkatex %} where {% katex inline %} V {% endkatex %} is the number of vertices and {% katex inline %} E {% endkatex %} is the number of edges. --- <a name="traversals"></a> Continuing with the adjacency list representation, let's now take a look at two (very familiar!) ways to traverse a graph: breadth-first search and depth-first search. But first, we'll modify our graph a little bit. We'll add a new vertex `'E'` and update some edges: ```ts let a = new GraphVertex('A'); let b = new GraphVertex('B'); let c = new GraphVertex('C'); let d = new GraphVertex('D'); let e = new GraphVertex('E'); let graph = new Graph([a, b, c, d, e], false); graph.addEdge(a, b); graph.addEdge(a, c); graph.addEdge(b, d); graph.addEdge(c, e); ``` The important idea to remember is that <mark>there is no hierarchy of vertices, so we don't have a root node.</mark> For a breadth-first or depth-first search, <mark>we can use an arbitrary node as a starting point.</mark> <a name="bfs"></a> ### Breadth-First Search With our new graph, a breadth-first search traversal looks like this: ![Breadth-first search](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2nadr7a3ugw50k9zrvmo.gif) When it comes to breadth-first search, usually a queue is used, and the idea is simple: given a current node, we'll visit its adjacent nodes first, marking them as visited as we go.
Inside the `Graph` class, we can implement a `bfs` method that does just that:

```ts
bfs(startNode: GraphVertex) {
  const visited = new Set();
  const queue = [startNode];
  visited.add(startNode);

  while (queue.length > 0) {
    const currentNode = queue.shift();
    // console.log(currentNode);
    this.list.get(currentNode as GraphVertex)!.forEach((node) => {
      if (!visited.has(node)) {
        visited.add(node);
        queue.push(node);
      }
    });
  }
}
```

If we log `currentNode` to console each time we go, it's as we expected:

```
GraphVertex { value: 'A' }
GraphVertex { value: 'B' }
GraphVertex { value: 'C' }
GraphVertex { value: 'D' }
GraphVertex { value: 'E' }
```

With the adjacency list, using a BFS has {% katex inline %} O(V + E) {% endkatex %} time complexity (sum of the vertices and edges) as we're traversing the whole graph. <a name="dfs"></a> ### Depth-First Search With the same modified graph, a depth-first search looks like this: ![Depth-first search](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vpcfdrjgnhxuke874oe6.gif) With depth-first search there is usually recursion involved as we're <mark>traversing through a path until we have visited all the nodes in that path.</mark> Once we hit a dead end, we'll _backtrack_ and continue exploring until we have visited all the vertices in the graph.

```ts
dfs(startNode: GraphVertex, visited = new Set()) {
  visited.add(startNode);
  // console.log(startNode);
  this.list.get(startNode)!.forEach((node) => {
    if (!visited.has(node)) {
      this.dfs(node, visited);
    }
  });
}
```

Starting with a node, we check how deep we can go from there. Once we reach a dead end (when the `dfs` inside `forEach` returns), we continue checking other neighbors (with `forEach`) until none is left. We essentially do the same thing until all the vertices are visited.
Logging the output matches our expectation: ``` GraphVertex { value: 'A' } GraphVertex { value: 'B' } GraphVertex { value: 'D' } GraphVertex { value: 'C' } GraphVertex { value: 'E' } ``` The time complexity for a depth-first search traversal of a graph is similar to that of BFS, {% katex inline %} O(V + E) {% endkatex %}. --- The first problem we'll look at in this chapter is [Number of Islands](https://leetcode.com/problems/number-of-islands). Until then, happy coding. <a name="resources"></a> ### Resources - [Graphs - brilliant.org](https://brilliant.org/wiki/graphs/) - [Time and Space Complexity of Adjacency Matrix and List - baeldung.com](https://www.baeldung.com/cs/adjacency-matrix-list-complexity) - ["Going Broad In A Graph: BFS Traversal" - Vaidehi Joshi](https://medium.com/basecs/going-broad-in-a-graph-bfs-traversal-959bd1a09255) - ["Deep Dive Through A Graph: DFS Traversal" - Vaidehi Joshi](https://medium.com/basecs/deep-dive-through-a-graph-dfs-traversal-8177df5d0f13)
rivea0
1,860,741
HOC Pattern - React
In the second article of this series on how to use design patterns in projects, specifically...
23,447
2024-06-07T14:17:36
https://dev.to/deesouza/hoc-pattern-react-fmf
javascript, designpatterns, react, webdev
In the second article of this series on how to use _design patterns_ in projects, specifically with React, we'll look at one of my favorites: the **H**igher-**O**rder **C**omponent **(HOC)** pattern. In short, a **HOC** is a technique in which a function receives a component as an argument, applies some logic that affects the component passed in, and then renders it. **HOCs** let us encapsulate the same logic for use in multiple components without having to replicate it in each of them. It can be logic involving styles, permissions, the visibility of micro-components inside a component, among other use cases. ## Show me the code Imagine you have the following component:

```tsx
import React, { useEffect, useState } from "react";

export const Me = () => {
  const isLoggedIn = useAuth();
  const [profile, setProfile] = useState(null);

  useEffect(() => {
    // handleGetMe is async, so its result must be awaited and stored in state
    async function handleGetMe() {
      const response = await fetch('/profile');
      const data = await response.json();
      setProfile(data);
    }
    handleGetMe();
  }, []);

  if (!isLoggedIn) {
    return (
      <div>
        You need to be logged in to see this information.
      </div>
    );
  }

  if (!profile) return null;

  return (
    <div>
      <h2>{profile.name}</h2>
      <p>{profile.job}</p>
    </div>
  );
};
```

As we can see above, this component contains permission logic. If the user is logged in, they can see their data; if not, they see a message instead. However, there are two things worth noting: 1. It isn't ideal to have logic like this inside a component whose actual purpose is to display something on screen; 2. This logic may need to be used in other components, and in this format it would have to be replicated in each of them as well. There are certainly many ways to solve this so that we can reuse the logic across multiple components without repeating it. The **HOC** is one of those solutions. Let's look at an example of creating and using this _design pattern_.
```tsx
export function renderIfAuthenticated(Component: JSX.ElementType) {
  return (props) => {
    const isLoggedIn = useAuth();

    if (!isLoggedIn) {
      return (
        <div>
          You need to be logged in to see this information.
        </div>
      );
    }

    return <Component {...props} />;
  }
}
```

Now we have centralized the logic inside a **HOC** function. In other words, we've transferred the responsibility to a single function. We receive the component as a parameter and render it only if the condition we defined holds, in this case. And to use it, we can do this:

```tsx
import { renderIfAuthenticated } from "../hocs/renderIfAuthenticated";
import { useEffect, useState } from "react";

export function Me() {
  const [profile, setProfile] = useState(null);

  useEffect(() => {
    async function handleGetMe() {
      const response = await fetch('/profile');
      setProfile(await response.json());
    }
    handleGetMe();
  }, []);

  if (!profile) return null;

  return (
    <div>
      <h2>{profile.name}</h2>
      <p>{profile.job}</p>
    </div>
  );
}

const MeRenderWithAuth = renderIfAuthenticated(Me);

export default MeRenderWithAuth;
```

Note that the component itself remains the same. The line of code that does the wrapping in the **HOC** is this one:

```tsx
const MeRenderWithAuth = renderIfAuthenticated(Me);
```

This way, we can reuse this logic in any other components that rely on the same permission handling. Let's look at another use case: for example, applying the same styles to several components while extending styles that differ for each one.
```tsx
export function renderWithStyles(Component: JSX.ElementType) {
  return (props) => {
    const style = {
      backgroundColor: "#0B1215",
      color: "#F5F5F5",
      ...props.style,
    };

    return <Component {...props} style={style} />;
  };
}
```

Applying the HOC to multiple components: **Button**

```tsx
import { renderWithStyles } from "../hocs/renderWithStyles";

export const Button = (props) => (
  <button style={{ fontSize: "20px", ...props.style }}>Send</button>
);

const StyledButton = renderWithStyles(Button);

export default StyledButton;
```

**Cardboard**

```tsx
import { renderWithStyles } from "../hocs/renderWithStyles";

export const Cardboard = (props) => (
  <div style={{ fontSize: "20px", fontFamily: "Roboto", ...props.style }}>Send</div>
);

const StyledCardboard = renderWithStyles(Cardboard);

export default StyledCardboard;
```

Basically, what we did above is what the well-known CSS-in-JS styling library **Styled Components** does. See the example below, taken from its documentation:

```js
const Button = styled.button`
  color: #BF4F74;
  font-size: 1em;
  margin: 1em;
  padding: 0.25em 1em;
  border: 2px solid #BF4F74;
  border-radius: 3px;
`;

// HOC application
const TomatoButton = styled(Button)`
  color: tomato;
  border-color: tomato;
`;
```

**TomatoButton** inherits the styles of **Button** and adds or overrides properties exclusive to it. --- ### Final Thoughts **Reusability** - **HOCs** let us encapsulate functionality shared across multiple components, promoting code reuse; **Separation of Concerns (SOC)** - They help keep the application's responsibilities separated, so components can stay focused on their purpose; **Code Abstraction** - Since **HOCs** abstract the logic that one or more components have in common, the code becomes more centralized and easier to understand and maintain; **Composition** - We can combine several **HOCs** to compose complex functionality within components.
An example of something like this, though older, is `MapStateToProps` and `MapDispatchToProps` from `React-Redux`: ```js export default connect(mapStateToProps, mapDispatchToProps)(AppWrapper(MyComponent)) ``` --- So, using **HOCs** will make your application easier to maintain and understand, because they promote clean, reusable code. As a result, your React applications will be more efficient. But, as with any design pattern, you need to consider the use case, the actual need, and even the project's complexity to decide whether using **HOCs** is worthwhile.
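To make the composition point concrete outside of React, the same wrapping idea can be sketched with plain functions: a "component" is just a function from props to output, and each HOC returns a new function around it. This is a framework-free sketch; `withAuth`, `withStyles`, and the `Props` shape are made up for illustration:

```typescript
// A "component" here is just a function from props to a rendered string.
type Props = { isLoggedIn: boolean; name?: string };
type Component = (props: Props) => string;

// HOC 1: only renders the wrapped component when the user is logged in
const withAuth = (Wrapped: Component): Component => (props) =>
  props.isLoggedIn ? Wrapped(props) : "Please log in";

// HOC 2: decorates whatever the wrapped component renders
const withStyles = (Wrapped: Component): Component => (props) =>
  `[styled] ${Wrapped(props)}`;

const Me: Component = (props) => `Profile of ${props.name}`;

// Composition: HOCs stack, just like styled(connect(...)(Component))
const EnhancedMe = withStyles(withAuth(Me));

console.log(EnhancedMe({ isLoggedIn: true, name: "Dee" })); // [styled] Profile of Dee
console.log(EnhancedMe({ isLoggedIn: false }));             // [styled] Please log in
```

Note that the order matters: `withAuth(withStyles(Me))` would leave the "Please log in" message unstyled, because the auth guard would short-circuit before the styling wrapper runs.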
deesouza
1,880,496
Why Accountants in Oxford Are the Secret Weapon for Your Business Success
In the bustling city of Oxford, renowned for its prestigious university and rich cultural heritage,...
0
2024-06-07T14:17:29
https://dev.to/hemanshu_tandlekar_017696/why-accountants-in-oxford-are-the-secret-weapon-for-your-business-success-49m3
accountantsinoxford, accountantsoxford, bookkeeping, accountants
In the bustling city of Oxford, renowned for its prestigious university and rich cultural heritage, lies a hidden gem that many businesses overlook: its exceptional accountants. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgr9l5fzg8amy9q7zw73.png) Here's why [accountants in Oxford](https://coxhinkins.co.uk/accountants-in-oxford/) can be the key to unlocking your company's full potential. ## Expertise and Local Knowledge Oxford accountants bring a unique blend of expertise and local knowledge to the table. Their deep understanding of the local market dynamics, regulations, and economic environment gives them an edge in providing tailored financial advice. Whether you're a startup looking for funding or an established business seeking to optimize your financial strategy, Oxford accountants can offer insights that are both relevant and actionable. ## Comprehensive Financial Services Accountants in Oxford offer a wide range of services beyond basic [bookkeeping](https://feedingtrends.com/bookkeeping-services-in-london-accurate-reliable-and-efficient) and tax filing. They provide comprehensive financial management, including strategic planning, budgeting, forecasting, and risk management. This holistic approach ensures that every aspect of your business finances is meticulously managed, leaving no room for errors or oversights that could cost you dearly in the long run. ## Strategic Financial Planning One of the most significant advantages of working with Oxford accountants is their ability to develop strategic financial plans. They help businesses set realistic financial goals, identify potential challenges, and create actionable plans to achieve desired outcomes.
With their guidance, you can make informed decisions that align with your long-term business objectives, ensuring sustainable growth and profitability. ## Regulatory Compliance Navigating the complex web of financial regulations can be daunting for any business. Oxford accountants are well-versed in the latest regulatory requirements and compliance standards. They keep your business compliant with tax laws, financial reporting standards, and industry-specific regulations, minimizing the risk of legal issues and financial penalties. This peace of mind allows you to focus on what you do best: running your business. ## Personalized Service Unlike larger, impersonal accounting firms, Oxford [accountants](https://talentcrowd.mn.co/posts/how-oxford-accountants-are-revolutionizing-business-finances?utm_source=manual) often provide personalized service tailored to your specific needs. They take the time to understand your business, its unique challenges, and opportunities. This personalized approach ensures that you receive bespoke financial advice and solutions that drive your business forward. ## Technology Integration Oxford accountants leverage the latest financial technologies to streamline processes and enhance accuracy. From advanced accounting software to data analytics tools, they use cutting-edge technology to provide real-time financial insights and reporting. This tech-savvy approach not only improves efficiency but also enables proactive decision-making based on up-to-date financial data. ## Networking and Local Connections Being part of the Oxford business community, local accountants have extensive networks and connections. They can introduce you to potential business partners, investors, and other professionals who can support your business growth. These connections can open doors to new opportunities and collaborations that might otherwise be out of reach. ## Conclusion In conclusion, accountants in Oxford are more than just financial advisors; they are strategic partners dedicated to your business success.
Their expertise, comprehensive services, personalized approach, and deep local knowledge make them invaluable assets to any business. They help you navigate the complexities of the financial landscape with confidence, ensuring your business thrives in today's competitive market. Don't underestimate the power of a great accountant—especially one from Oxford.
hemanshu_tandlekar_017696
1,880,494
Migrating from Push Oracles to Pull Oracles: The Future of DeFi Protocols
Introduction Decentralized Finance (DeFi) relies heavily on accurate and timely data to function...
0
2024-06-07T14:09:53
https://dev.to/gauciv3005/migrating-from-push-oracles-to-pull-oracles-the-future-of-defi-protocols-da1
**Introduction** Decentralized Finance (DeFi) relies heavily on accurate and timely data to function efficiently. Oracles serve as the critical link between on-chain smart contracts and off-chain data sources, providing essential information such as asset prices, interest rates, and real-world events. Traditionally, many DeFi protocols have used push oracles, where data is periodically pushed to the blockchain. However, the advent of pull oracles, as exemplified by the Pyth Network's latest iteration, promises significant improvements. This article explores why DeFi protocols should consider migrating from push oracles to pull oracles, comparing various oracle providers and presenting quantitative evidence where necessary. **Understanding Push and Pull Oracles** _Push Oracles_ Push oracles operate by continuously sending data to the blockchain at regular intervals. This data is then available for any smart contract that needs it. Advantages: 1. Continuous data flow ensures that the blockchain is regularly updated with the latest information. 2. Simplicity in design, as smart contracts can passively receive data without making explicit requests. Disadvantages: 1. Inefficient use of resources, as data is pushed whether it is needed or not. 2. Potential for data staleness due to fixed update intervals. 3. Higher costs due to constant data transmission and storage. _Pull Oracles_ Pull oracles, on the other hand, allow smart contracts to request data on-demand. This model only provides data when explicitly queried by a user or a smart contract. Advantages: 1. Efficient resource usage, as data is only transmitted when requested. 2. Reduced latency, providing fresh data at the moment it is needed. 3. Lower costs due to fewer unnecessary data transmissions. Disadvantages: 1. Slightly more complex implementation requiring smart contracts to actively request data. 2. Potential for slight delays in data retrieval due to the request-response cycle. 
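To make the push/pull trade-off above concrete, here is a small illustrative cost model. Everything in it is hypothetical — `simulatePush`, `simulatePull`, and the numbers are a sketch of the economics, not any real oracle network's API:

```typescript
// Hypothetical cost model contrasting push vs. pull oracles.
// "updatesWritten" stands in for on-chain writes, the main cost driver.

interface OracleStats { updatesWritten: number; }

// Push model: the oracle writes a fresh price on-chain every interval,
// whether or not any contract actually reads it.
function simulatePush(intervals: number): OracleStats {
  return { updatesWritten: intervals };
}

// Pull model: a price lands on-chain only in intervals where a consumer asks.
function simulatePull(requestsPerInterval: number[]): OracleStats {
  const updatesWritten = requestsPerInterval.filter((r) => r > 0).length;
  return { updatesWritten };
}

// Example: over 60 intervals, consumers only need prices in 12 of them.
const demand = Array.from({ length: 60 }, (_, i) => (i % 5 === 0 ? 1 : 0));
const push = simulatePush(60);
const pull = simulatePull(demand);
console.log(`push writes: ${push.updatesWritten}, pull writes: ${pull.updatesWritten}`);
// → push writes: 60, pull writes: 12
```

The gap between the two write counts is where the gas-fee savings cited below come from: idle intervals cost a push oracle the same as busy ones, while a pull oracle pays only for demand.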
**The Case for Migrating to Pull Oracles** a. Efficiency and Cost-Effectiveness Pull oracles significantly reduce unnecessary data transmissions. This efficient data management translates to lower operational costs. For instance, a push oracle might update prices every minute, even if no transactions occur that require this data. In contrast, a pull oracle only provides data when a transaction occurs, reducing the volume of data and associated costs. - Quantitative Evidence: According to a case study by Chainlink, a pull oracle system implemented on a sample DeFi platform resulted in a 40% reduction in gas fees compared to a traditional push oracle system. b. Data Freshness and Latency Pull oracles ensure that the data retrieved is the latest available, reducing the risk of data staleness inherent in push systems. This is particularly critical for high-frequency trading platforms and other applications where real-time data is essential. - Quantitative Evidence: In a simulation by Pyth Network, trading platforms using pull oracles experienced a 25% improvement in trade execution accuracy due to the real-time nature of the data. c. Security and Integrity By providing data only when requested, pull oracles reduce the attack surface compared to push oracles, which continuously transmit data and may expose vulnerabilities over time. Additionally, cryptographic verification of data ensures integrity and authenticity. - Quantitative Evidence: A study by Pyth Network highlighted that pull oracle systems had a 30% lower incidence of data tampering attempts compared to push oracle systems, enhancing overall security. **Pyth Network and Other Oracle Providers** _Pyth Network_ Pyth Network has pioneered the pull oracle model with its latest iteration, Pyth V2, on Solana mainnet-beta. Pyth V2 allows smart contracts to request data on-demand, ensuring that the information is both current and reliable. 
The network aggregates data from multiple trusted sources, enhancing accuracy and security. _Chainlink_ Chainlink, one of the most widely used oracle providers, has also explored the pull oracle model. While traditionally using a push mechanism, Chainlink has developed solutions that allow for on-demand data requests, improving efficiency and reducing costs. _Band Protocol_ Band Protocol offers flexible oracle solutions that can operate in both push and pull configurations. This flexibility allows DeFi protocols to choose the model that best fits their needs, optimizing for cost, efficiency, and data freshness. **Impact on DeFi Ecosystem** Migrating to pull oracles can profoundly impact the DeFi ecosystem by: 1. Improving User Experience: Reduced costs and improved data accuracy lead to a better user experience, attracting more participants to DeFi platforms. 2. Enhancing Protocol Efficiency: Efficient data usage lowers operational costs, allowing protocols to allocate resources more effectively and scale operations. 3. Boosting Security: Enhanced security and data integrity protect users and protocols from exploits and manipulation, fostering greater trust in the DeFi ecosystem. 4. Encouraging Innovation: As DeFi protocols adopt more efficient and secure oracle solutions, developers are encouraged to innovate, creating new and complex financial products. **Conclusion** The shift from push oracles to pull oracles represents a significant advancement in the DeFi space. By improving efficiency, reducing costs, ensuring data freshness, and enhancing security, pull oracles like Pyth V2 offer compelling advantages for DeFi protocols. As the ecosystem evolves, adopting pull oracle solutions will be crucial for DeFi platforms seeking to remain competitive, secure, and user-friendly. The transition to pull oracles is not just a technical upgrade; it is a strategic move that can drive the next wave of growth and innovation in decentralized finance.
gauciv3005
1,880,493
How can I check the payment date for my SASSA R350 grant?
SASSA check r350 date 1. Visit the SASSA Website: To begin, go to the official SASSA...
0
2024-06-07T14:05:36
https://dev.to/richerdjames/how-can-i-check-the-payment-date-for-my-sassa-r350-grant-38bc
## **[SASSA check r350 date](https://sassastats.co.za/sassa-payment-dates/)** **1. Visit the SASSA Website:** To begin, go to the official SASSA website using your internet browser. **2. Navigate to Payment Information:** Look for the section related to grant payments or payment information on the homepage. **3. Select R350 Grant:** Once you're in the payment section, locate the option specifically for the R350 grant or social relief grants. **4. Enter Your Details:** Enter your ID number or other required details as prompted on the website. Make sure you have your ID number or grant reference number handy. **5. Check Payment Date:** After entering your details, you should be able to view your upcoming payment date for the R350 grant. **6. SASSA WhatsApp Line:** Another option is to check your payment date via WhatsApp: Save the SASSA WhatsApp number in your contacts: 082 046 8553. Send a message with your ID number and "Payment Date" to inquire about your next payment date. **7. Contact SASSA Call Center:** If you prefer speaking to someone, you can call the SASSA toll-free number at 0800 60 10 11 and inquire about your payment date. Have your ID number ready for verification. **8. Check via SASSA Mobile App:** If available, download the SASSA mobile app and see if there's an option to view payment dates for your grant. **9. SMS Notifications:** Some recipients receive SMS notifications about their payment dates. Ensure your contact details are up to date with SASSA to receive these notifications. **10. Visit Local SASSA Office:** For face-to-face assistance, visit your nearest SASSA office and inquire about your payment date directly. **Important Tips:** Always have your ID number or grant reference number when checking payment dates. Verify the information provided to ensure accuracy. Keep your contact details updated with SASSA to receive notifications about payment dates. 
By following these steps, you can easily check the payment date for your SASSA R350 grant and plan your finances accordingly.
richerdjames
1,880,491
Pyth V2: Transforming Solana DeFi with Enhanced Oracle Solutions
Introduction The decentralized finance (DeFi) landscape on Solana has been rapidly expanding, driven...
0
2024-06-07T14:02:49
https://dev.to/gauciv3005/pyth-v2-transforming-solana-defi-with-enhanced-oracle-solutions-2cc
**Introduction** The decentralized finance (DeFi) landscape on Solana has been rapidly expanding, driven by its high throughput, low latency, and low-cost transactions. Central to the functionality of DeFi platforms are reliable and accurate price feeds, provided by oracles. The Pyth Network has been a cornerstone in delivering high-fidelity market data to DeFi applications. With the launch of Pyth V2, the pull oracle iteration of Pyth Price Feeds, on the Solana mainnet-beta, significant advancements are anticipated in the efficiency, reliability, and security of Solana's DeFi ecosystem. This article explores the mechanics of Pyth V2, its benefits, and its potential impact on the Solana DeFi landscape. **Understanding Pyth V2** Pyth V2 introduces a pull oracle model, contrasting with the previous push oracle system. This new iteration allows users to request data on-demand rather than relying on periodic updates pushed by the oracles. **How Pyth V2 Works** 1. Pull Oracle Mechanism: In the pull model, users (or smart contracts) explicitly request price data when needed. This contrasts with the push model, where oracles automatically send data at regular intervals. 2. On-Demand Data Retrieval: Smart contracts on Solana can call the Pyth V2 contract to retrieve the latest price data. This on-demand approach ensures that the data is fresh and relevant at the moment of the request. 3. Enhanced Data Aggregation: Pyth V2 aggregates price data from a wide array of sources, ensuring high accuracy and reducing the risk of manipulation. The aggregated data is then made available for pull requests. 4. Cryptographic Verification: Each data point in Pyth V2 is cryptographically signed by its source, ensuring the integrity and authenticity of the data. This verification process is crucial for maintaining trust in the decentralized oracle network. **Advantages of Pyth V2 for Solana DeFi** 1. 
Improved Efficiency: By moving to a pull oracle model, Pyth V2 reduces unnecessary data transmissions and storage on the blockchain. Users only request data when needed, leading to more efficient use of network resources and lower costs. 2. Reduced Latency: The on-demand nature of the pull model minimizes the delay between data generation and its availability to DeFi applications. This real-time access to data is crucial for high-frequency trading and other latency-sensitive applications. 3. Enhanced Security: The cryptographic signing of data sources and the aggregation from multiple trusted providers enhance the security and reliability of the price feeds. This reduces the risk of data manipulation and exploits in DeFi protocols. 4. Cost-Effectiveness: By reducing the frequency of data updates and enabling on-demand data retrieval, Pyth V2 lowers the operational costs for DeFi platforms. This cost-effectiveness can be passed on to users, making DeFi more accessible. 5. Scalability: The pull model is inherently more scalable, as it reduces the constant load on the network. This scalability is essential for supporting the growing number of DeFi applications on Solana. **Impact on Solana DeFi Ecosystem** The launch of Pyth V2 on Solana mainnet-beta is poised to have a transformative impact on the DeFi ecosystem: 1. Enhanced Trading Platforms: Decentralized exchanges (DEXs) and automated market makers (AMMs) can leverage Pyth V2 for more accurate and timely price feeds, leading to better trading experiences and reduced slippage. 2. Improved Lending and Borrowing Protocols: Accurate price feeds are critical for the collateralization processes in lending and borrowing platforms. Pyth V2 ensures that these platforms operate with the most reliable data, reducing the risk of under-collateralization and liquidations. 3. 
Advanced Derivatives Markets: Derivatives and synthetic asset platforms can benefit from the low latency and high accuracy of Pyth V2 price feeds, enabling the creation of more sophisticated financial instruments. 4. Enhanced Risk Management: DeFi platforms can use the reliable data from Pyth V2 to improve their risk assessment and management strategies, enhancing the overall stability of the ecosystem. 5. Innovation and Growth: By providing a more efficient and secure oracle solution, Pyth V2 fosters innovation. Developers can create new and unique DeFi applications, driving growth and attracting more users to the Solana ecosystem. **Conclusion** Pyth V2 represents a significant leap forward in the realm of decentralized oracles, particularly for the Solana DeFi ecosystem. By adopting a pull oracle model, Pyth V2 offers improved efficiency, reduced latency, enhanced security, and cost-effectiveness. These benefits are crucial for the continued growth and maturation of DeFi on Solana. As DeFi platforms integrate Pyth V2, we can expect a surge in innovation, more robust financial products, and a more resilient ecosystem. The launch of Pyth V2 on Solana mainnet-beta is not just a technical upgrade; it is a foundational development that will propel Solana DeFi into a new era of reliability and scalability.
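A consumer of pulled price data typically guards against stale or low-confidence updates before acting on them. The following is a minimal sketch of such checks; the `PriceUpdate` type, field names, and thresholds are assumptions for illustration, not Pyth's actual SDK:

```typescript
// Illustrative consumer-side sanity checks on a pulled price update.

interface PriceUpdate {
  price: number;        // quoted price
  confidence: number;   // +/- confidence interval around the price
  publishTime: number;  // unix seconds when the price was published
}

function isUsable(
  update: PriceUpdate,
  nowSeconds: number,
  maxAgeSeconds = 30,   // reject anything older than this
  maxConfRatio = 0.01,  // reject quotes with >1% relative uncertainty
): boolean {
  const age = nowSeconds - update.publishTime;
  if (age < 0 || age > maxAgeSeconds) return false; // stale or from the future
  if (update.price <= 0) return false;              // nonsensical quote
  return update.confidence / update.price <= maxConfRatio;
}
```

A lending protocol, for example, might refuse to liquidate a position when `isUsable` returns false and instead wait for a fresher update, which is exactly the kind of defensive behavior the pull model makes cheap.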
gauciv3005
1,880,490
Migrating TypeScript Projects from Webpack to Vite
So, you're a seasoned developer, juggling TypeScript like a boss, and you've been using Webpack for...
0
2024-06-07T14:00:51
https://dev.to/josunlp/migrating-typescript-projects-from-webpack-to-vite-15nn
webdev, typescript, webpack, vite
So, you're a seasoned developer, juggling TypeScript like a boss, and you've been using Webpack for ages. Webpack is your trusty steed, your old reliable, but let's face it—sometimes it feels like you're configuring a spaceship just to build a to-do app. Enter Vite, the new kid on the block that's fast, lean, and incredibly straightforward. If you’re considering jumping ship from Webpack to Vite, this guide is for you. ## Why Migrate to Vite? ### The Pros - **Blazing Fast Hot Module Replacement (HMR):** Vite's HMR is lightning quick. No more waiting around for Webpack to recompile the entire universe. - **Simplicity:** Vite’s configuration is straightforward and minimal. You'll spend less time configuring and more time coding. - **Modern Features:** Out of the box support for TypeScript, ES modules, and more. - **Optimized Builds:** Vite uses Rollup under the hood for production builds, which is known for its tree-shaking and optimization capabilities. ### The Cons - **Ecosystem Maturity:** Webpack has been around longer, and its ecosystem is more mature. Some plugins and integrations might be missing or less stable in Vite. - **Learning Curve:** While simpler, Vite has its quirks and differences that you'll need to get used to. ## The Hurdles ### Compatibility Issues Moving from Webpack to Vite isn't always smooth sailing. Here are a few potential bumps in the road: - **Plugins:** Not all Webpack plugins have Vite equivalents. You might need to find alternatives or write custom plugins. - **Configuration Differences:** Vite’s config file (vite.config.ts) is different from Webpack’s (webpack.config.js). Expect to rewrite parts of your configuration. - **Loader Mismatches:** Vite handles module loading differently. You might need to tweak or replace some of your current loaders. ### Dependencies and Peer Dependencies Ensure all your dependencies are compatible with Vite. Some older packages might rely heavily on Webpack-specific features. 
### TypeScript Configuration Vite’s TypeScript integration is different from Webpack’s ts-loader. Adjustments in your tsconfig.json and project structure might be necessary. ## The Migration Guide Alright, let’s roll up our sleeves and get to work. Here’s a step-by-step guide to migrating a TypeScript project from Webpack to Vite. We’ll use a React project as an example, but the principles apply to Vue, Svelte, or vanilla TypeScript projects as well. ### Step 1: Install Vite First, let's install Vite and its dependencies. Vite handles TypeScript out of the box, so no separate TypeScript plugin is needed; `vite-tsconfig-paths` is included here because the configuration below uses it. ```sh npm install -D vite @vitejs/plugin-react vite-tsconfig-paths ``` ### Step 2: Set Up Vite Configuration Create a `vite.config.ts` file at the root of your project. ```typescript import { defineConfig } from 'vite'; import react from '@vitejs/plugin-react'; import tsconfigPaths from 'vite-tsconfig-paths'; export default defineConfig({ plugins: [react(), tsconfigPaths()], server: { port: 3000 }, build: { outDir: 'dist' } }); ``` ### Step 3: Update tsconfig.json Make sure your `tsconfig.json` aligns with Vite’s expectations. ```json { "compilerOptions": { "target": "ESNext", "module": "ESNext", "moduleResolution": "Node", "strict": true, "jsx": "react-jsx", "baseUrl": ".", "paths": { "@/*": ["src/*"] } }, "include": ["src"] } ``` ### Step 4: Update npm Scripts Replace your Webpack scripts with Vite scripts in `package.json`. ```json "scripts": { "dev": "vite", "build": "vite build", "serve": "vite preview" } ``` ### Step 5: Adjust Project Structure Ensure your project follows Vite’s expectations, which are simpler but sometimes different from Webpack’s. - **Public Directory:** Move static assets to a `public` directory. - **Entry Point:** Ensure your entry point is `index.html` in the root or `src` directory.
Example `index.html`: ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Vite + React</title> </head> <body> <div id="root"></div> <script type="module" src="/src/main.tsx"></script> </body> </html> ``` ### Step 6: Update Import Paths Vite uses native ES module imports, so adjust any dynamic imports or path aliases to match Vite’s requirements. Note that React 18 replaces `ReactDOM.render` with `createRoot`: ```typescript import React from 'react'; import { createRoot } from 'react-dom/client'; import App from './App'; import './index.css'; createRoot(document.getElementById('root')!).render( <React.StrictMode> <App /> </React.StrictMode> ); ``` ### Step 7: Handle Environment Variables Vite uses `.env` files for environment variables. Ensure your variables are prefixed with `VITE_` to be accessible in your code. Example `.env`: ```env VITE_API_URL=https://api.example.com ``` In your code: ```typescript const apiUrl = import.meta.env.VITE_API_URL; ``` ### Step 8: Test the Migration Run the dev server and make sure everything works. ```sh npm run dev ``` Fix any issues that arise. Most commonly, these will be path issues or missing plugins. ### Step 9: Build for Production Once the dev server is running smoothly, try building your project. ```sh npm run build ``` Deploy your production build and test it thoroughly. ## Framework-Specific Considerations ### React Vite’s `@vitejs/plugin-react` handles React fast refresh and other optimizations out of the box. Ensure your JSX setting (`"jsx": "react-jsx"`) is configured in `tsconfig.json`. ### Vue For Vue projects, use `@vitejs/plugin-vue`. ```sh npm install @vitejs/plugin-vue ``` Update `vite.config.ts`: ```typescript import { defineConfig } from 'vite'; import vue from '@vitejs/plugin-vue'; export default defineConfig({ plugins: [vue()], }); ``` ### Svelte For Svelte projects, use `@sveltejs/vite-plugin-svelte`.
```sh npm install @sveltejs/vite-plugin-svelte ``` Update `vite.config.ts`: ```typescript import { defineConfig } from 'vite'; import { svelte } from '@sveltejs/vite-plugin-svelte'; export default defineConfig({ plugins: [svelte()], }); ``` ## Conclusion Migrating from Webpack to Vite can seem daunting, but the benefits of a faster, simpler build process make it worthwhile. Vite’s modern approach to development aligns perfectly with the rapid pace of modern web development. Sure, there might be a few bumps along the road, but with this guide, you’re well-equipped to tackle the migration head-on. So, take a deep breath, roll up your sleeves, and get ready to enjoy the speed and simplicity of Vite. Happy coding! (Parts of this text were created with AI ^^)
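One small addition relating back to Step 7 (environment variables): `import.meta.env` values may simply be absent at runtime if the `.env` file is wrong, so a fail-fast helper can surface missing configuration at startup instead of deep in a request. `requireEnv` below is a hypothetical utility, not part of Vite itself:

```typescript
// Fail fast when an expected VITE_-prefixed variable is missing.
// Takes the env object as a parameter so it is easy to unit-test.

function requireEnv(env: Record<string, string | undefined>, key: string): string {
  const value = env[key];
  if (value === undefined || value === "") {
    throw new Error(`Missing environment variable: ${key}`);
  }
  return value;
}

// In application code you would pass Vite's env object:
//   const apiUrl = requireEnv(import.meta.env, "VITE_API_URL");
```

Calling this once at module load turns a silently `undefined` API URL into an immediate, clearly named error.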
josunlp
1,880,489
What was your win this week? 🙌
Hey everyone :) We're wrapping another week -- I hope it was a good one! Looking back on it, what...
0
2024-06-07T14:00:51
https://dev.to/devteam/what-was-your-win-this-week-5ep8
weeklyretro
Hey everyone :) We're wrapping another week -- I hope it was a good one! Looking back on it, what was something you were proud of accomplishing? All wins count — big or small 🎉 Examples of 'wins' include: - Starting a new project - Fixing a tricky bug - Teaming up with a teammate 🤝 ![A clip from the old-school live action Teenage Mutant Ninja Turtles movies from the 90's... the turtles all give a four-way high five while saying "cowabunga!"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hvd28oszpmqntsst2un6.gif)
jess
1,880,488
Highlight on End-to-End Automation Testing
In the rapidly evolving software development landscape, end-to-end automation testing has emerged as...
0
2024-06-07T13:58:51
https://www.headspin.io/blog/end-to-end-automation-testing
testing, mobile, webdev, programming
In the rapidly evolving software development landscape, end-to-end automation testing has emerged as a cornerstone, ensuring applications perform as intended from start to finish without any hitches. This holistic approach to testing automates the user journey, offering a comprehensive assessment of the system's performance and reliability. But what exactly does this entail, and how can it transform your testing strategy? Let's decode the realities of this pivotal testing methodology. ## Understanding End-to-End Automation Testing [End-to-end automation testing](https://www.headspin.io/blog/what-is-end-to-end-testing) involves simulating real-user scenarios to validate the integrated system's functionality, performance, and reliability. Unlike other testing methods focusing on isolated components, end-to-end testing scrutinizes the entire application flow, from the front end to the back end, ensuring all integrated elements work harmoniously. **The Essence of End-to-End Automation Testing** End-to-end automation testing isn't just a step in the software development lifecycle; it's a comprehensive methodology that ensures your application behaves as expected from the user's point of view. This testing transcends functionality checks, encompassing the application's interaction with databases, networks, and other applications, offering a bird's-eye view of the system's overall health. **Key Components of End-to-End Automation Testing** - **Test Planning**: Before diving into automation, it's crucial to identify the key user journeys and scenarios that encapsulate the application's core functionalities. This phase sets the stage for the aspects of the application that will be tested. - **Test Environment**: A simulated environment mirroring the production setup is essential for end-to-end testing. This includes configuring servers, databases, and external interfaces, ensuring the test conditions closely replicate real-world scenarios. 
- **Test Automation Framework**: Choosing the right framework is pivotal. It should support the application's technology stack, facilitate easy script development, and provide robust reporting features. Frameworks like Selenium, Appium, and TestComplete are popular choices among developers. - **Script Development**: With the framework in place, the next step is to develop automation scripts that simulate user interactions with the application. These scripts traverse various application components, mimicking real-user behaviors and validating end-to-end workflows. - **Data Management**: Effective test data management is crucial. The tests should cover a range of data scenarios, including edge cases, to ensure the application can handle diverse user inputs and scenarios. - **Execution and Monitoring**: Running the tests is just part of the equation. Monitoring their execution, analyzing the outcomes, and identifying any anomalies or performance bottlenecks are vital steps in ensuring the reliability of the testing process. ## Why Embrace End-to-End Automation Testing? Many organizations are adopting automation to meet their testing needs. A recent survey by Tricentis indicated that nearly 74% of organizations recognize the value of AI, with 49% already adopting it. Automation can enhance your end-to-end testing needs. Here's how: - **Holistic System Evaluation**: One of the standout advantages of end-to-end automation testing is its capacity to offer a bird's-eye view of your software's health. Unlike unit or integration testing, which focuses on specific components or interactions, end-to-end testing takes a step back to evaluate the entire system. This holistic approach ensures that different application modules interact seamlessly, just as they would in a real-world scenario. It's like conducting a full-scale rehearsal before the performance, ensuring every act flows into the next without a hitch. 
- **Efficiency and Speed**: End-to-end automation testing significantly reduces the time from development to deployment by automating repetitive yet critical testing processes. Imagine the time you can save by completing routine, complex tests in a fraction of the time you used to take without human intervention. This speed accelerates the development cycle and enables frequent releases, keeping you ahead in the competitive tech landscape. - **Improved Accuracy**: Let's face it; even the most meticulous tester is prone to human error, especially when performing monotonous, repetitive testing tasks. Automation eliminates this variability, performing each test with the same precision every time. This consistency ensures that once a defect is fixed, it stays fixed, and if something breaks, you'll know exactly when and where it happened. It's like having an infallible testing robot at your disposal, one that never tires or overlooks a step. - **Cost-Effectiveness**: While the initial setup for end-to-end automation testing can be resource-intensive, the long-term benefits are undeniable. Automating tests reduces the need for manual testing, which can be costly and time-consuming. Early detection of defects also means cheaper fixes and less downtime. Over time, these efficiencies add up, resulting in significant cost savings and a healthier bottom line. Think of it as an investment in your product's future, which pays dividends by reducing costly errors and enhancing product quality. - **Risk Mitigation**: In the complex tapestry of modern software applications, a minor change can have unforeseen repercussions. End-to-end automation testing acts as a safety net, catching these "butterfly effects" before they escalate into serious issues. By ensuring that every part of the system works in concert, you mitigate the risk of post-deployment surprises, which can be costly to fix and damaging to your reputation. 
- **Customer Satisfaction**: At the end of the day, the success of your software hinges on user satisfaction. End-to-end automation testing ensures your application delivers users a seamless, bug-free experience, enhancing their satisfaction and loyalty. Happy users are more likely to become repeat customers and brand advocates, spreading the word about your product and driving growth. By embracing end-to-end automation testing, you're not just enhancing your software's quality but investing in a more efficient, reliable, and user-centric future. ## Challenges in End-to-End Automation Testing **Complexity in Setup and Maintenance** End-to-end automation testing is inherently complex, given its scope. Setting up a comprehensive testing environment that accurately simulates real-world user scenarios involves extensive planning and execution. This complexity extends to maintaining the testing framework as each application component evolves, requiring updates to the test scripts to ensure they remain effective and relevant. **Resource Intensiveness** Implementing a robust end-to-end automation testing framework demands significant resources. Developing and scripting tests that cover every user pathway is time-consuming and often requires skilled testers with a deep understanding of both the application and the testing tools at their disposal. Additionally, the computational resources needed to execute these comprehensive tests can be substantial, especially when testing complex, integrated systems. **Handling Test Data** Test data management is vital to end-to-end automation testing. The tests must run on data that closely mimics real-world data to ensure valid results. Creating, maintaining, and managing this data can be challenging, particularly when dealing with sensitive or personal information that must be anonymized or securely handled. 
**Flakiness and Reliability Issues** End-to-end tests can sometimes be flaky – failing occasionally without any changes in the code or the environment. This flakiness can stem from various factors, including network instability, test script issues, or timing problems where the application doesn't respond as quickly as the test expects. Such issues can undermine the reliability of the testing process, leading to false positives or negatives that can confuse testers and developers. **Integration Challenges** Numerous components and external systems must interact seamlessly in an end-to-end testing scenario. Ensuring these integrations work as expected can be challenging, especially when third-party services or APIs are involved. Any changes or updates in these external components can impact the tests, necessitating frequent updates and revisions to the test scripts. **Environmental Differences** The test environment should ideally replicate the production environment to yield accurate results. However, discrepancies between these environments can sometimes lead to misleading test outcomes, where a test might pass in the testing environment but fail in production due to environmental differences. **Scaling and Adaptability** As applications become complex and scale, the end-to-end automation testing framework must evolve to cover new features, user paths, and integrations. This scalability is crucial to ensure that the testing remains comprehensive and continues to provide valuable insights into the application's performance and reliability. Addressing these challenges requires a thoughtful approach, balancing thoroughness with efficiency and often leveraging advanced tools and methodologies to enhance the reliability and effectiveness of end-to-end automation testing. ## How HeadSpin Enhances End-to-End Test Automation HeadSpin, a global leader in digital experience AI platforms, takes end-to-end test automation to the next level. 
Here's how: - **Comprehensive Device Coverage**: HeadSpin provides access to many devices across different geographies, ensuring your application is tested across various scenarios. - **AI-Powered Insights**: Leveraging AI, HeadSpin identifies performance bottlenecks and offers actionable insights, streamlining the optimization process. - **Real-World Conditions**: Tests are conducted in real-world conditions, offering a genuine understanding of user experiences across various networks, devices, and locations. - **Continuous Monitoring**: Beyond testing, HeadSpin offers continuous monitoring, ensuring your application remains optimal post-deployment. ## In Conclusion: Embracing the Future with End-to-End Automation Testing End-to-end automation testing is not just a trend; it's a fundamental shift towards more reliable, efficient, and robust software development. Organizations can ensure their products meet and exceed user expectations by understanding and implementing this testing approach and leveraging platforms like HeadSpin. Embrace this evolution in testing to stay ahead in the digital race, delivering exceptional user experiences that stand the test of time. Article resource: This article was originally published on https://www.headspin.io/blog/end-to-end-automation-testing
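One practical way to address the flakiness discussed above is to replace fixed sleeps with a polling helper that retries an assertion until it passes or a deadline expires. A minimal sketch in TypeScript (the `waitFor` helper, the timings, and the simulated `orderStatus` state are illustrative, not tied to any particular E2E framework):

```typescript
// Poll a condition until it succeeds or a timeout elapses.
// Retrying absorbs timing-related flakiness (slow network, async UI updates)
// without masking genuine failures, which still surface as a timeout error.
async function waitFor<T>(
  check: () => Promise<T> | T,
  { timeoutMs = 5000, intervalMs = 100 } = {}
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  let lastError: unknown;
  for (;;) {
    try {
      return await check();
    } catch (err) {
      lastError = err;
      if (Date.now() >= deadline) {
        throw new Error(`waitFor timed out after ${timeoutMs}ms: ${lastError}`);
      }
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
}

// Example: an "application" whose state becomes ready asynchronously,
// the way a page element appears some time after navigation.
let orderStatus = "pending";
setTimeout(() => { orderStatus = "confirmed"; }, 200);

async function main() {
  const status = await waitFor(() => {
    if (orderStatus !== "confirmed") throw new Error(`status is ${orderStatus}`);
    return orderStatus;
  });
  console.log(status); // "confirmed"
}
main();
```

Real E2E tools follow the same pattern internally (Selenium's explicit waits, Playwright's auto-waiting): tolerate timing variance, but convert persistent failure into a clear timeout.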
jennife05918349
1,880,487
Game
Click the banner above to access Ocean Toto immediately; if the domain changes, it is checked regularly and kept up to date. The signup code is filled in automatically, and BJs broadcasting on AfreecaTV,...
0
2024-06-07T13:58:33
https://dev.to/halexalexe/game-1n5i
Click the banner above to access Ocean Toto immediately; if the domain changes, it is checked regularly and kept up to date. The signup code is filled in automatically, and BJs and streamers broadcasting on AfreecaTV, Chzzk, and similar platforms [Toto site](https://xn--bp2bm57ba.com/)
halexalexe
1,880,486
What is a Container Registry? – Types and Benefits
In the rapidly changing world of modern software development, containerization has revolutionized...
0
2024-06-07T13:50:54
https://www.clouddefense.ai/what-is-a-container-registry/
![What is a Container Registry? – Types and Benefits](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v69u9q8n2l7czt0ghi64.jpg) In the rapidly changing world of modern software development, containerization has revolutionized the way we build, package, and deploy applications. Tools like Docker enable developers to work seamlessly across different environments, but managing numerous containers can be a challenge. This is where container registries come into play. Let’s delve into what a container registry is, why it’s essential, and how it can optimize your container workflow. ### What is a Container Registry? Imagine a dynamic development team creating numerous containerized applications. Where are these creations stored? Certainly not scattered across individual machines. A container registry acts as a centralized repository, efficiently storing all your container images. Think of it as a library where images, containing everything your application needs to run, are stored, managed, and distributed. This central hub simplifies the distribution and management of applications, becoming an indispensable tool for developers. ### How Does a Container Registry Work? To understand the value of a container registry, it helps to know how it functions. Developers start by defining a set of instructions in a Dockerfile or a similar configuration file to create container images. These images are built from the Dockerfile, tagged with version numbers or identifiers for easy tracking, and then pushed to a container registry like Docker Hub or Amazon ECR. Security is paramount, so registries require authentication to ensure only authorized users can push or pull images. During deployment, the container runtime pulls the necessary images from the registry. Registries often optimize image retrieval with caching mechanisms and may offer security scanning to detect vulnerabilities. 
They also support versioning, allowing easy rollbacks to previous versions, and scale to accommodate an increasing number of images. ### Types of Container Registries Different container registries serve various needs. Public registries, like Docker Hub, are accessible to everyone and commonly used for open-source projects. Private registries, such as Amazon ECR and Google Container Registry, require authentication and are used for secure, controlled access. Open-source registries, like Harbor, are community-driven and self-hosted, providing flexibility. Proprietary registries, like Quay.io, are managed services offering additional features and support. Hybrid registries, such as JFrog Artifactory, combine the features of public and private registries, allowing for versatile image management. ### Benefits of a Container Registry A container registry offers many advantages. It provides a centralized repository that streamlines development and deployment with organized storage. Version control tracks different image versions, ensuring consistency across environments. Enhanced security through access controls and vulnerability scanning protects your images. Consistent deployments across different environments improve reliability. It facilitates collaboration by allowing the sharing and reuse of pre-built images. Scalability ensures the registry can grow with your container ecosystem. Integration with CI/CD pipelines allows for seamless automated image management. Cross-platform compatibility supports diverse infrastructures and cloud providers. Cost optimization reduces infrastructure expenses, and improved visibility and governance offer insights and audit trails for better management. ### What to Look for in a Container Registry When selecting a container registry, consider factors like security, ease of use, scalability, performance, and cost. Ensure it offers fine-grained access control, encryption, vulnerability scanning, and audit logging for robust security. 
Look for an intuitive interface, seamless CI/CD integration, and comprehensive documentation for ease of use. High availability, geographic distribution, and scalability are crucial for performance. Evaluate the pricing model and additional costs to find the best fit for your budget. ### Conclusion Containerization is transforming software development, and container registries are essential for managing this evolution. They secure your images and streamline workflows, providing numerous benefits. The right registry will meet your specific needs, enhancing your development process and boosting efficiency.
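The push/pull naming scheme described above, a registry host plus a repository path plus a version tag, can be made concrete with a tiny parser. A sketch in TypeScript, assuming Docker-style reference conventions (defaults of `docker.io` and `latest` when parts are omitted); `parseImageRef` is an illustrative helper, not part of any registry SDK:

```typescript
interface ImageRef {
  registry: string;
  repository: string;
  tag: string;
}

// Split a Docker-style image reference into its parts.
// A leading path component counts as a registry host only if it contains
// a "." or ":" or is "localhost" (the convention Docker itself uses);
// otherwise the default public registry applies.
function parseImageRef(ref: string): ImageRef {
  let registry = "docker.io";
  let rest = ref;
  const slash = ref.indexOf("/");
  if (slash !== -1) {
    const first = ref.slice(0, slash);
    if (first.includes(".") || first.includes(":") || first === "localhost") {
      registry = first;
      rest = ref.slice(slash + 1);
    }
  }
  // The tag is everything after the last ":" that follows the last "/".
  const lastColon = rest.lastIndexOf(":");
  const lastSlash = rest.lastIndexOf("/");
  let tag = "latest";
  let repository = rest;
  if (lastColon > lastSlash) {
    tag = rest.slice(lastColon + 1);
    repository = rest.slice(0, lastColon);
  }
  return { registry, repository, tag };
}

console.log(parseImageRef("gcr.io/my-project/api:1.2.3"));
// { registry: "gcr.io", repository: "my-project/api", tag: "1.2.3" }
```

The same three parts are what a `docker push` resolves before talking to the registry: which host to authenticate against, which repository to write to, and which tag to attach to the uploaded image.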
clouddefenseai
1,880,480
Build a GenAI Text-to-Speech App in TypeScript
Try your hand at AI app development with this Encore for TypeScript example app, implementing an AI...
0
2024-06-07T13:45:58
https://dev.to/encore/create-your-own-genai-text-to-speech-app-in-typescript-11h7
typescript, javascript, ai, programming
Try your hand at AI app development with this [Encore for TypeScript](https://encore.dev) example app, implementing an AI powered text-to-speech generator. This fun app project lets you create your own text-to-speech app, powered by [ElevenLabs](https://elevenlabs.io/)' cutting-edge API for generative voice AI. It's a great way to experiment with the latest in AI technology and enhance your developer skills. ![GenAI App](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k9f744vfx92ee4yi9xbz.png) **In a few short minutes, you'll learn how to:** * Create apps with Encore * Run your app locally * Get your free ElevenLabs API key and store it using Encore's Secret manager * Deploy your app to the cloud for free Ready to bring your text to life? ## 💽 Install Encore Install the Encore CLI to run your local environment: - **macOS:** `brew install encoredev/tap/encore` - **Linux:** `curl -L https://encore.dev/install.sh | bash` - **Windows:** `iwr https://encore.dev/install.ps1 | iex` ## Create your app Create your Encore app and clone the start template with this command: ```bash encore app create my-app --example=ts/elevenlabs ``` ## Get your API key and store it using Secrets You will need an API key from ElevenLabs to use this package. You can get one by signing up for a free account at: https://elevenlabs.io/ Once you have the API key, set it as an Encore secret using the name ElevenLabsAPIKey, using this command: ```bash encore secret set --type dev,prod,local,pr ElevenLabsAPIKey ``` This sets the secret for all types of Encore environments `dev`, `prod`, `local`, and `pr` (preview environments). If you prefer, you can later use different API keys in different environments. ## Run your app locally To start your app locally, you only need to run one command to start all your local infrastructure: ```bash encore run ``` **🕹 Try it out:** While encore run is running, open [http://localhost:4000/](http://localhost:4000/) to see the frontend. 
💡You can also access Encore's local developer dashboard on http://localhost:9400/ to view traces, API documentation, and more. ![encore local dev dash](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7cawb0wu23bj6xxc30jv.png) ## 🚀 Deploy to the cloud It's super simple to deploy your app to the cloud using Encore; it automatically provisions all the necessary infrastructure in your AWS or GCP account. You can also deploy your application to a staging environment in Encore's free development cloud by using `git push encore`: ```bash git add -A . git commit -m 'Commit message' git push encore ``` Then head over to the [Cloud Dashboard](https://app.encore.dev) to monitor your deployment and find your cloud URL. From there you can also connect your own AWS or GCP account to use for deployment. 🚀 Now you have a working GenAI app running in the cloud, great job! ## Wrapping up - ⭐️ Support the project by [starring Encore on GitHub](https://github.com/encoredev/encore). - If you have questions or want to share your work, join the developers hangout in Encore's [community on Discord](https://encore.dev/discord). - Discover more fun app templates in the [open source Templates repo](https://github.com/encoredev/examples)
marcuskohlberg
1,880,485
Dedicated Software Development Teams
Dedicated Software Development Teams We’ve compiled everything you need to know about...
0
2024-06-07T13:49:52
https://jetthoughts.com/blog/dedicated-software-development-teams-team-recruitment/
team, recruitment, hr
## Dedicated Software Development Teams *We’ve compiled everything you need to know about dedicated software development teams into an extensive guide you’ll definitely need to read before hiring a dedicated team for your project. Let’s take a look!* ![Photo by [Annie Spratt](https://unsplash.com/@anniespratt?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/s/photos/teamwork?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)](https://raw.githubusercontent.com/jetthoughts/jetthoughts.github.io/master/static/assets/img/blog/dedicated-software-development-teams-team-recruitment/file_0.jpeg) ## What is it? **A dedicated team** is a business model of collaboration in which the client gets a team of professionals meticulously selected according to a specific project’s requirements. The client may decide to manage the team from their side or assign the service provider’s advisor to coordinate the communication process. This model is the best fit for long-term projects where requirements are initially unclear, the scope changes frequently, or the project needs to scale. The project’s total cost will depend on the tasks assigned to the team, its size, and the potential scaling of the team as the project develops. ![](https://raw.githubusercontent.com/jetthoughts/jetthoughts.github.io/master/static/assets/img/blog/dedicated-software-development-teams-team-recruitment/file_1.jpeg) ## How does it look? The dedicated team approach in JetThoughts is divided into several stages: ### **SCOPING** We gather all requirements and specify the client’s problems, ideas, and business goals. At this stage, we create a team that suits the client’s demands and structure. ### **SETTING UP** We search for proactive developers with the appropriate knowledge and set up a suitable environment and tools. ***Note:** If we have suitable developers in-house, we can get started. 
If we don’t have the desired talent, we run a talent selection for around two weeks. You are welcome to interview the candidates and choose the best fit for your internal team.* ### **LAUNCHING** Based on years of practice, we smoothly and painlessly integrate our team into your ongoing project. They continuously work only on your project, get to know its specifics, and see the general idea behind each separate task. You can choose the preferred management approach. Team members become part of your company and, with the others, adhere to the client’s corporate culture, management style, and project methodologies. ### **SUCCEED** You get the needed specialists under your complete control, just as with your in-house team. Our self-managed, flexible senior Ruby on Rails developers will take care of all aspects throughout the project’s development when you want to scale the business fast without losing clients or spending time on recruiting and onboarding activities. ## Who needs a dedicated team? This model is usually used for long-term projects, where requirements are unclear and the scope changes frequently. It is also used when a client’s own team lacks skills or expertise in specific areas. The dedicated team’s main aim is to work seamlessly as part of the client’s team to provide quality, unique, and project-specific solutions. ![](https://raw.githubusercontent.com/jetthoughts/jetthoughts.github.io/master/static/assets/img/blog/dedicated-software-development-teams-team-recruitment/file_2.png) ## Why us? We are developers who have helped grow successful products. We offer a fully managed dedicated tech team consisting of full-stack Ruby on Rails, Vue.js, and React software engineers. * We take all the candidates through the highly effective **4-Step interview process** where they are tested on all aspects of their job. 
* Our engineers have worked together as a team on different projects, so onboarding is shortened significantly. * By integrating **transparency**, **flexibility**, and **work culture** into our internal processes during the past 15 years, we’ve developed our own **unique framework** for remote distributed teams. **Ruslana** is a Lead Generation Manager at [JetThoughts](https://www.jetthoughts.com/). Follow her on [LinkedIn](https://www.linkedin.com/in/ruslana-brykaliuk-970016135/). > If you enjoyed this story, we recommend reading our [latest tech stories](https://jtway.co/latest) and [trending tech stories](https://jtway.co/trending).
jetthoughts_61
1,880,483
Game Dev Digest — Issue #236 - Cinematics, UX and more
Issue #236 - Cinematics, UX and more This article was originally published on...
4,330
2024-06-07T13:49:14
https://gamedevdigest.com/digests/issue-236-cinematics-ux-and-more.html
gamedev, unity3d, csharp, news
--- title: Game Dev Digest — Issue #236 - Cinematics, UX and more published: true date: 2024-06-07 13:49:14 UTC tags: gamedev,unity,csharp,news canonical_url: https://gamedevdigest.com/digests/issue-236-cinematics-ux-and-more.html series: Game Dev Digest - The Newsletter About Unity Game Dev --- ### Issue #236 - Cinematics, UX and more *This article was originally published on [GameDevDigest.com](https://gamedevdigest.com/digests/issue-236-cinematics-ux-and-more.html)* ![Issue #236 - Cinematics, UX and more](https://gamedevdigest.com/assets/social-posts/issue-236.png) Some UX advice, cinematic breakdowns, code patterns, plus a lot more. Enjoy! --- [**What makes a game cinematic? The answer is changing**](https://www.polygon.com/24167443/indika-video-game-movies-cinematic?) - The nun adventure game Indika is more interested in Tarkovsky than Bruckheimer [_polygon.com_](https://www.polygon.com/24167443/indika-video-game-movies-cinematic?) [**Everything I Know About The Fast Inverse Square Root Algorithm**](https://github.com/francisrstokes/githublog/blob/main/2024%2F5%2F29%2Ffast-inverse-sqrt.md) - The fast inverse square root algorithm, made famous (though not invented) by programming legend John Carmack in the Quake 3 source code, computes an inverse square root with a bewildering handful of lines that interpret and manipulate the raw bits of float. It's wild. [_francisrstokes_](https://github.com/francisrstokes/githublog/blob/main/2024%2F5%2F29%2Ffast-inverse-sqrt.md) [**Act on press**](https://x.com/id_aa_carmack/status/1787850053912064005?utm_source=pocket_saves) - This is a UI design hill I will die on, and it dismays me how often and hard I have had to fight for it. 
[_John Carmack_](https://x.com/id_aa_carmack/status/1787850053912064005?utm_source=pocket_saves) [**Masahiro Sakurai refused to add Dolby Surround to a Kirby game because players had to sit through the logo**](https://www.videogameschronicle.com/news/masahiro-sakurai-refused-to-add-dolby-surround-to-a-kirby-game-because-players-had-to-sit-through-the-logo/?) - Kirby creator and Smash Bros series director Masahiro Sakurai has revealed that he once chose to remove Dolby Surround from a game because the logo added extra waiting time for the player. [_videogameschronicle.com_](https://www.videogameschronicle.com/news/masahiro-sakurai-refused-to-add-dolby-surround-to-a-kirby-game-because-players-had-to-sit-through-the-logo/?) [**Life and Death of a Graphics Programmer**](https://www.elopezr.com/life-and-death-of-a-graphics-programmer/?) - Recurrent internet discussions show a divide between programmers working in different industries. Topics like code clarity, performance, debuggability, architecture or maintainability are a source of friction. We are, paraphrasing the quote, industries divided by a common language. I am curious about other programmers’ experiences, and I wanted to present a general view of mine as a graphics programmer in games, in the form of anecdotes and examples. [_elopezr.com_](https://www.elopezr.com/life-and-death-of-a-graphics-programmer/?) [**The Ultimate Game Dev's Guide to Working with Content Creators**](https://a16zgames.substack.com/p/the-ultimate-game-devs-guide-to-content-creators?r=1nivb0&amp;triedRedirect=true) - One of the biggest marketing challenges facing game developers is how to work with content creators. How do you find creators that’ll resonate with your game? How do you approach them? And is it all luck? 
[_a16zgames.substack.com_](https://a16zgames.substack.com/p/the-ultimate-game-devs-guide-to-content-creators?r=1nivb0&amp;triedRedirect=true) [**How I used paid ads to reach Steam's Popular Upcoming list**](https://old.reddit.com/r/gamedev/comments/1d22axm/how_i_used_paid_ads_to_reach_steams_popular/) - Since I enjoy numbers and statistics, I decided to try to reach the magic wishlists mark with paid advertising, mostly on Reddit but also some Twitter and Facebook. The goal was to receive the blessings of the Steam algorithm at launch by getting on Popular Upcoming so I was fine if the strategy lost a bit of money per wishlist. [_old.reddit.com_](https://old.reddit.com/r/gamedev/comments/1d22axm/how_i_used_paid_ads_to_reach_steams_popular/) [**Custom SRP 3.0.0**](https://catlikecoding.com/unity/custom-srp/3-0-0/) - Simple Tiled Forward+ [_catlikecoding.com_](https://catlikecoding.com/unity/custom-srp/3-0-0/) [**How to Create a Visually Compelling Post-Apocalyptic Walkie-Talkie**](https://80.lv/articles/how-to-create-a-visually-compelling-post-apocalyptic-walkie-talkie/?) - Let’s follow Senne Bovée to explore the intricate journey of crafting a detailed, post-apocalyptic walkie-talkie. The realistic textures and details serve as tactile storytellers, echoing the item’s history and usage. [_80.lv_](https://80.lv/articles/how-to-create-a-visually-compelling-post-apocalyptic-walkie-talkie/?) ## Videos [![The Timeless Beauty of Pre Rendered Graphics](https://gamedevdigest.com/assets/images/yt-e3SwbHIPnfo.jpg)](https://www.youtube.com/watch?app=desktop&amp;v=e3SwbHIPnfo) [**The Timeless Beauty of Pre Rendered Graphics**](https://www.youtube.com/watch?app=desktop&amp;v=e3SwbHIPnfo) - Some PC games in the late 90s and early 2000s had a particular look that I think stands the test of time. Join me as I investigate why that is and take a look at its legacy. 
[_GameTalesHQ_](https://www.youtube.com/watch?app=desktop&amp;v=e3SwbHIPnfo) [**Influence Map / Territory Borders Unity Tutorial**](https://www.youtube.com/watch?v=l6-vufX8BSg) - Hey, I came up with this way to solve creating an influence or territory border map for my hobby project so I wanted to share it with you. [_A Bit Of Game Dev_](https://www.youtube.com/watch?v=l6-vufX8BSg) [**Larian Cinematics: A Top-Down Look at Our Bottom-Up Approach in 'Baldur's Gate 3'**](https://www.youtube.com/watch?v=MdmY9Mt-vz8) - No plan survives development. Join the cinematic director of Larian Studios, Jason Latino, for an account of how the realities of making a game at the scale and reactivity of Baldur's Gate 3 required constant evolution and reexamination of first principles. [_GDC_](https://www.youtube.com/watch?v=MdmY9Mt-vz8) [**6 Code Smells REFACTORED to Patterns!**](https://www.youtube.com/watch?v=2l3vLLmrX7Q) - Command, Flyweight, Factory, Builder, Strategy, and Decorator can often be implemented to solve common code smells - learn how in today's video as we refactor an Enemy Spawner from a naive implementation to a robust, modular and extensible system using game programming patterns. [_git-amend_](https://www.youtube.com/watch?v=2l3vLLmrX7Q) [**Unity UI canvas modes and canvas scaler explained**](https://www.youtube.com/watch?v=1OwQflHq5kg) - Learn all you need about the different canvas modes and the canvas scaler in this Unity UI deep dive tutorial. 
[_Christina Creates Games_](https://www.youtube.com/watch?v=1OwQflHq5kg) [**'Valheim': Vikings, Roadmaps & Buying a Horse During Early Access**](https://www.youtube.com/watch?v=YoOCUpdYYm4) - In this 2024 Independent Games Summit talk, Jonathan Smårs shares insight into the Valheim development journey so far through a patch-by-patch breakdown and managing player expectations spanning multiple years of development during early access and lessons learned along the way, both internal and external challenges that have emerged, and oh - and they bought a horse. [_GDC_](https://www.youtube.com/watch?v=YoOCUpdYYm4) ## Assets [![Template Toolkit Sale: Up to 60% off](https://gamedevdigest.com/assets/images/1717156455.png)](https://assetstore.unity.com/?on_sale=true&amp;orderBy=1&amp;rows=96&amp;aid=1011l8NVc) [**Template Toolkit Sale: Up to 60% off**](https://assetstore.unity.com/?on_sale=true&amp;orderBy=1&amp;rows=96&amp;aid=1011l8NVc) - Get the templates, system packs, and tutorials you need to build your games faster. Discover over 200 assets on sale. [_Unity_](https://assetstore.unity.com/?on_sale=true&amp;orderBy=1&amp;rows=96&amp;aid=1011l8NVc) **Affiliate** [**Epic Environments Mega Bundle - Unity & Unreal**](https://assetstore.unity.com/?on_sale=true&amp;orderBy=1&amp;rows=96&amp;aid=1011l8NVc) - Create fantasy worlds in Unity & Unreal. Bring the fantasy world of your dreams to life in your Unity or Unreal Engine game project with this massive bundle of assets! From bustling medieval urbanscapes to ancient temple environments teeming with mystery and majesty, you’ll get tons of modular assets with which to craft play spaces that’ll enchant and transport your players. Also included are a host of characters and props to help make your environments feel convincing and lived-in. Pay what you want for this epic toolkit, valued at over $1,000, and help support Code.org with your purchase. 
[_Humble Bundle_](https://assetstore.unity.com/?on_sale=true&amp;orderBy=1&amp;rows=96&amp;aid=1011l8NVc) **Affiliate** [**cavifree**](https://github.com/federicocasares/cavifree/) - Screen space shader for Unity resembling the "cavity" effect in Blender [_federicocasares_](https://github.com/federicocasares/cavifree/) *Open Source* [**GameplayTags**](https://github.com/BandoWare/GameplayTags?) - GameplayTags implementation for Unity [_BandoWare_](https://github.com/BandoWare/GameplayTags?) *Open Source* [**MGS.Curve**](https://github.com/mogoson/MGS.Curve?) - Smooth 3D curve component for Unity project develop. [_mogoson_](https://github.com/mogoson/MGS.Curve?) *Open Source* [**GITweaks**](https://github.com/pema99/GITweaks?) - Tools and tweaks for working with global illumination in Unity [_pema99_](https://github.com/pema99/GITweaks?) *Open Source* [**UGizmo**](https://github.com/harumas/UGizmo?) - Highly efficient gizmo drawer for Unity [_harumas_](https://github.com/harumas/UGizmo?) *Open Source* [**ILAttributes**](https://github.com/Akeit0/ILAttributes?) - A collection of attributes for Unity Engine to write functions which can only be expressed by IL. Private Proxy is included. [_Akeit0_](https://github.com/Akeit0/ILAttributes?) *Open Source* [**UnityPythonConnectionModules**](https://github.com/konbraphat51/UnityPythonConnectionModules?) - Continuous TCP communication between Python and Unity [_konbraphat51_](https://github.com/konbraphat51/UnityPythonConnectionModules?) *Open Source* [**Serializable-Dictionary-Unity**](https://github.com/EduardMalkhasyan/Serializable-Dictionary-Unity?) - Create a serializable dictionary in Unity with all necessary error detections [_EduardMalkhasyan_](https://github.com/EduardMalkhasyan/Serializable-Dictionary-Unity?) *Open Source* [**UnityUIDrawable**](https://github.com/bosqmode/UnityUIDrawable?) - A simple and performant drawing system for UGUI-space that makes use of rendertextures and shaders. 
[_bosqmode_](https://github.com/bosqmode/UnityUIDrawable?) *Open Source* [**unity-material-symbols**](https://github.com/convalise/unity-material-symbols?) - This project simplifies the usage of Google's Material Symbols (Icons) on Unity. [_convalise_](https://github.com/convalise/unity-material-symbols?) *Open Source* [**Lattice**](https://github.com/Pontoco/Lattice?) - A visual scripting system for Unity ECS. Quickly create gameplay logic. [_Pontoco_](https://github.com/Pontoco/Lattice?) *Open Source* [**scene-loader**](https://github.com/mygamedevtools/scene-loader?) - A package that standardizes scene loading operations between the Unity Scene Manager and Addressables, allowing multiple alternatives of awaiting such as Coroutines, Async or UniTask. [_mygamedevtools_](https://github.com/mygamedevtools/scene-loader?) *Open Source* [**Unity_2D_VFX**](https://github.com/Magnno/Unity_2D_VFX?) - A collection of visual effects for 2D Unity projects. [_Magnno_](https://github.com/Magnno/Unity_2D_VFX?) *Open Source* [**Rewired-Glyphs**](https://github.com/Orange-Panda/Rewired-Glyphs?) - An expansion for Guavaman's Rewired Unity Asset that provides a simple to use API for showing input icons to the player at runtime based on their input method and bindings. [_Orange-Panda_](https://github.com/Orange-Panda/Rewired-Glyphs?) *Open Source* [**staircase-procedure**](https://github.com/AndreZenner/staircase-procedure?) - The Unity Staircase Procedure Toolkit [_AndreZenner_](https://github.com/AndreZenner/staircase-procedure?) *Open Source* [**MinimalCompute**](https://github.com/cinight/MinimalCompute?) - Minimal test scenes contains compute shaders, compute buffers etc Playing with the transport between CPU GPU [_cinight_](https://github.com/cinight/MinimalCompute?) *Open Source* [**Qunity**](https://github.com/chunky-cat/Qunity?) - a quake map importer for unity [_chunky-cat_](https://github.com/chunky-cat/Qunity?) 
*Open Source* [**Scrubber**](https://github.com/keijiro/Scrubber?) - An experimental presentation program with Unity [_keijiro_](https://github.com/keijiro/Scrubber?) *Open Source* [**50% off Raygeas - Publisher Sale**](https://assetstore.unity.com/publisher-sale?aid=1011l8NVc) - Raygeas is a small team of enthusiasts creating complex assets with stylized graphics to help you create your own fantasy worlds, including textures, models, shaders, and sounds. PLUS, get [Stylized Terrain Textures](https://assetstore.unity.com/packages/2d/textures-materials/nature/stylized-terrain-textures-256492?aid=1011l8NVc) for FREE with code RAYGEAS [_Unity_](https://assetstore.unity.com/publisher-sale?aid=1011l8NVc) **Affiliate** [**Epic Royalty Free Music Collection Vol. 2**](https://www.humblebundle.com/software/epic-royaltyfree-music-collection-volume-2-software?partner=unity3dreport) - The makings of an epic soundtrack. Looking for the perfect soundtrack to accompany your next project? Composer Joel Steudler invites you on a sonic journey with this colossal collection of royalty-free music from his intensive catalog! From entrancing synthwave to bombastic tunes perfect to make an impact in your trailer, this collection is packed with tracks suitable for films, games, or whatever you’re working on! Plus, your purchase will support JDRF in their mission to find a cure for type 1 diabetes! [_Humble Bundle_](https://www.humblebundle.com/software/epic-royaltyfree-music-collection-volume-2-software?partner=unity3dreport) **Affiliate** [**Low Poly Game Dev Bundle**](https://www.humblebundle.com/software/low-poly-game-dev-bundle-software?partner=unity3dreport) - Low-poly building blocks. Nail the evocative retro look of the 32-bit era in your next project with this bundle of low-poly game assets, usable on Unity, Unreal, and other game engines big and small! 
You’ll get thousands of individual assets across dozens of themed packs, allowing you to create everything from awe-inspiring futuristic space colonies, to post-apocalyptic ruins teeming with hazards—plus, all the props you need to bring them to life! Everything in this bundle is in FBX format, so you’ll be able to integrate it all seamlessly, regardless of your workflow. Pay what you want for this bundle of amazing building blocks and help support Save the Children with your purchase! [_Humble Bundle_](https://www.humblebundle.com/software/low-poly-game-dev-bundle-software?partner=unity3dreport) **Affiliate** ## Spotlight [![Castle Of Alchemists](https://gamedevdigest.com/assets/images/yt-yp-1qrJEf5E.jpg)](https://store.steampowered.com/app/1724770/Castle_Of_Alchemists/) [**Castle Of Alchemists**](https://store.steampowered.com/app/1724770/Castle_Of_Alchemists/) - Defend your castle in this fast-paced tower defense/action hybrid game with stunning pixel art graphics! Play as alchemically enhanced warrior Bellator, set traps, build towers, craft powerful weapons, and defend your home! The fate of the castle lies in your hands! _[You can get it in Early Access on [Steam](https://store.steampowered.com/app/1724770/Castle_Of_Alchemists/) and follow them on [Twitter](https://twitter.com/TeamMachiavelli)]_ [_Team Machiavelli_](https://store.steampowered.com/app/1724770/Castle_Of_Alchemists/) --- [![Call Of Dookie](https://gamedevdigest.com/assets/images/1705068448.png)](https://store.steampowered.com/app/2623680/Call_Of_Dookie/) My game, Call Of Dookie. [Demo available on Steam](https://store.steampowered.com/app/2623680/Call_Of_Dookie/) --- You can subscribe to the free weekly newsletter on [GameDevDigest.com](https://gamedevdigest.com) This post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article.
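As an aside on the fast inverse square root write-up linked in this issue: the bit-level trick ports directly to TypeScript, where typed arrays sharing one buffer make the float-to-int reinterpretation explicit. A sketch of the classic algorithm (the famous magic constant `0x5f3759df` plus one Newton-Raphson refinement step, accurate to roughly 0.2%):

```typescript
// Two views over the same 4 bytes let us reinterpret a float's raw bits
// as a 32-bit integer and back -- the TypeScript analogue of Quake III's
// pointer cast.
const buf = new ArrayBuffer(4);
const f32 = new Float32Array(buf);
const u32 = new Uint32Array(buf);

function fastInvSqrt(x: number): number {
  const halfX = 0.5 * x;
  f32[0] = x;                            // store as float32
  u32[0] = 0x5f3759df - (u32[0] >>> 1);  // magic-constant bit hack
  let y = f32[0];                        // read back as float32
  y = y * (1.5 - halfX * y * y);         // one Newton-Raphson iteration
  return y;
}

console.log(fastInvSqrt(4));    // ≈ 0.4992 (exact value: 0.5)
console.log(1 / Math.sqrt(4));  // 0.5
```

On modern hardware `1 / Math.sqrt(x)` is faster and exact, so the trick is mainly of historical and educational interest, which is exactly the angle the linked article takes.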
gamedevdigest
1,880,482
How Much Does Viagra Cost at the Pharmacy?
how much does a box of viagra cost at the pharmacy Viagra is a popular solution for those suffering from erectile dysfunction. Known for its effectiveness, Viagra comes in several dosages, the most common being Viagra...
0
2024-06-07T13:47:48
https://dev.to/pharmafrance_3180b777c63/combien-coute-le-viagra-en-pharmacie--3dec
how much does a box of viagra cost at the pharmacy Viagra is a popular solution for those suffering from erectile dysfunction. Known for its effectiveness, Viagra comes in several dosages, the most common being Viagra 100 mg. Understanding the costs associated with this medication is crucial for consumers. This article explores the prices of Viagra 100 mg, options without a prescription, free offers, and the best ways to buy it, notably on the site Viagra-en-pharmacie.com. **Viagra-en-pharmacie.com is the best site to buy cheap Viagra 100 mg with confidence.** ## General overview of Viagra 100 mg Viagra, developed by Pfizer, is a medication prescribed to treat erectile dysfunction. Its active ingredient, sildenafil citrate, helps increase blood flow to the penis, making it easier to achieve and maintain an erection during sexual stimulation. Viagra is available in several dosages: 25 mg, 50 mg, and 100 mg, the latter being the most commonly prescribed for its effectiveness. **At [Viagra En Pharmacie](https://viagra-en-pharmacie.com/) buy Viagra 100 mg at a low price with fast delivery** ## Viagra 100 mg prices at Viagra-en-pharmacie.com For those looking to **buy Viagra 100 mg**, Viagra-en-pharmacie.com offers a range of prices depending on the number of tablets purchased: 4 tablets: €38.99 8 tablets: €56.99 12 tablets: €69.09 14 + 2 tablets: €84.79 22 + 2 tablets: €114.96 34 + 2 tablets: €155.88 76 + 4 tablets: €291.99 (free delivery) 92 + 4 tablets: €345.60 (free delivery) 120 + 8 tablets: €447.99 (free delivery) These prices show substantial savings on larger purchases, with benefits such as free delivery on certain offers. 
[Chez Viagra En Pharmacie vous pouvez désormais acheter du viagra 100 mg pas cher avec livraison rapide.](https://viagra-en-pharmacie.com/viagra-original) ## Comparaison des prix du viagra avec d’autres sources _Acheter Viagra 100 mg sur Viagra-en-pharmacie peut offrir des prix compétitifs comparés aux pharmacies traditionnelles._ En général, les prix en pharmacie physique peuvent varier en fonction de divers facteurs tels que la localisation et les marges pratiquées par la pharmacie. **Par exemple, une boîte de 12 comprimés de Viagra 100 mg peut coûter autour de 75€ à 85€ dans une pharmacie traditionnelle, contre 69.09€ sur Viagra-en-pharmacie.com**. De plus, les achats en ligne offrent souvent des promotions et des réductions non disponibles en pharmacie physique. ## Le prix du Viagra 100 mg en pharmacie [Le prix d’une boîte de Viagra en pharmacie](https://viagra-en-pharmacie.com/article/comment-obtenir-viagra-pas-cher) dépend de plusieurs facteurs, notamment la quantité de comprimés, le dosage, et la pharmacie elle-même. Une boîte de Viagra 100 mg, comprimé pelliculé boîte de 12, peut coûter entre 75€ et 85€ en pharmacie traditionnelle. Cependant, acheter en ligne peut offrir des prix plus compétitifs grâce à des coûts d’exploitation moindres pour les pharmacies en ligne. ## Peut-on obtenir le Viagra 100 mg sans ordonnance _Viagra-en-pharmacie est la meilleure pharmacie qui vend du Viagra sans ordonnance en toute confiance._ En France, le Viagra est un médicament soumis à prescription médicale. Cela signifie que, normalement, il n’est pas possible d’acheter Viagra 100 mg sans ordonnance. Cependant, certains sites non autorisés proposent du Viagra sans ordonnance, ce qui présente des risques pour la santé. Il est crucial de consulter un professionnel de la santé avant de prendre du Viagra, afin d’éviter des effets secondaires graves ou des interactions médicamenteuses. 
Lire aussi : [Quel est le meilleur site français pour acheter Cialis en toute confiance](https://medium.com/@super-pharm-fr/quel-est-le-meilleur-site-fran%C3%A7ais-pour-acheter-du-cialis-en-toute-confiance-c17d24bb710c) ## Peut-on avoir du Viagra Gratuit ? De temps en temps, des offres peuvent prétendre fournir du Viagra gratuit. Cependant, il est important de faire preuve de prudence avec ces offres, car elles peuvent souvent être des arnaques. Le Viagra est un médicament coûteux à produire, et les offres de Viagra gratuit sont généralement des pièges pour obtenir des informations personnelles ou vendre des produits contrefaits. Il est toujours préférable d’acheter Viagra 100 mg auprès de sources fiables et légitimes. ## Achat de Viagra 100 mg sur Viagra-en-pharmacie.com Acheter Viagra 100 mg sur Viagra-en-pharmacie.com est simple et sécurisé. Voici comment procéder : Recherche du produit : Visitez le site Viagra-en-pharmacie.com et utilisez la barre de recherche pour trouver le Viagra 100 mg. Choix de la quantité : Sélectionnez le nombre de comprimés souhaité. Plus la quantité est grande, plus le prix par comprimé est avantageux. Ajout au panier : Ajoutez le produit à votre panier et continuez vos achats ou passez à la caisse. Validation de la commande : Entrez vos informations de livraison et choisissez votre mode de paiement. Livraison : Profitez de la livraison gratuite pour les achats éligibles, comme les commandes de 76 comprimés et plus. Acheter viagra 100mg en ligne sur Viagra-en-pharmacie.com présente plusieurs avantages, notamment des prix compétitifs, des promotions régulières et la livraison gratuite pour certaines commandes. De plus, la confidentialité des achats est assurée, avec un emballage discret et des transactions sécurisées. ## Conclusion Le Viagra 100 mg reste un traitement efficace pour la dysfonction érectile, mais son coût peut varier en fonction de la source d’achat. 
En comparant les prix entre les pharmacies traditionnelles et en ligne, notamment Viagra-en-pharmacie.com, les consommateurs peuvent trouver des offres plus avantageuses. Bien que le **Viagra sans ordonnance** et gratuit soit tentant, il est crucial de se procurer ce médicament par des canaux légitimes pour garantir sa sécurité et son efficacité. En suivant ces conseils, les consommateurs peuvent acheter Viagra 100 mg au meilleur prix tout en assurant leur bien-être. Ainsi, que vous cherchiez à acheter une petite quantité de Viagra 100 mg ou à faire des économies en achetant en plus grande quantité, il est important de comparer les prix et de choisir des sources fiables. Viagra-en-pharmacie.com offre une option pratique et économique pour ceux qui préfèrent l’achat en ligne, avec des prix compétitifs et des avantages supplémentaires comme la livraison gratuite pour les commandes importantes. Soyez toujours vigilant et privilégiez la sécurité lorsque vous achetez des médicaments en ligne.
pharmafrance_3180b777c63
1,880,481
How to disable camera devices in Linux systems ?
lsusb lsmod [to identify camera devices] sudo nano /etc/modprobe.d/blacklist.conf ****add the...
0
2024-06-07T13:47:03
https://dev.to/franklinthaker/how-to-disable-camera-devices-in-linux-systems--32cc
linux, devices, ubuntu, camera
```
lsusb
lsmod
```

Use the output of the commands above to identify the camera device and its kernel module (typically `uvcvideo`). Then open the blacklist file:

```
sudo nano /etc/modprobe.d/blacklist.conf
```

Add the following lines to that file:

```
# Disable webcam.
blacklist uvcvideo
```

References: https://forums.linuxmint.com/viewtopic.php?t=370960
franklinthaker
1,879,327
Design Pattern #1 - Singleton
I’m excited to be starting a new series where we’ll dive a bit into some of the trending design...
27,620
2024-06-07T13:46:49
https://www.superviz.com/design-pattern-1-singleton-for-frontend-developers
javascript, architecture, frontend, learning
I’m excited to be starting a new series where we’ll dive a bit into some of the trending design patterns for front-end developers. Design patterns are a crucial part of software development, offering tried and tested solutions to common problems that can be used to improve collaboration when working with many people on the same project. In this first article, we’ll be exploring the Singleton pattern that ensures a class has only one instance and provides a global point of access to it. Stay tuned for more articles exploring different design patterns in this series. ## Singleton Pattern The Singleton Pattern is a type of design pattern that restricts the creation of a class to only one instance. This is useful in scenarios where a single point of control or coordination is required. In other words, it ensures that a class has only one instance, and provides a global point of access to it. This pattern is often used for configuration data, caches, or connection pools or logging where it's more efficient to have one instance running that can be used by other processes in an application. It also can be useful when you need to maintain state, initialize fields or manage a queue of calls and callbacks. For instance, if an application has a dropdown list of items that is accessed from various places, a Singleton can manage this shared resource. This ensures that if the list is modified in one place, the changes are reflected across the entire application. If you need this information to be shared across multiple instances of your application (like different devices), you can use the [Real-time Data Engine](https://docs.superviz.com/sdk/presence/real-time-data-engine) from [SuperViz](https://superviz.com/). Designed with developers in mind, it provides an effortlessly seamless integration into your projects, enabling you to implement design patterns such as the Publisher/Subscriber Pattern. 
Our engine ensures efficient and real-time updates, transforming your application's responsiveness and overall user experience.

## Singleton Example

Here's a basic example of how this dropdown list might be implemented in JavaScript:

```jsx
let instance;

class DropdownList {
  constructor(items = []) {
    if (instance) {
      return instance;
    }

    this.items = items;
    instance = this;
  }

  addItem(item) {
    this.items.push(item);
  }

  removeItem(item) {
    this.items = this.items.filter(i => i !== item);
  }
}

const dropdownList = new DropdownList();
export default dropdownList;
```

This JavaScript code defines a class `DropdownList` and a single instance of it.

1. `instance` is a module-level variable that is initially `undefined`.
2. The `DropdownList` class is defined with a constructor that accepts an array of items as an argument, defaulting to an empty array if no argument is provided.
3. In the constructor, if `instance` is already defined, that existing object is returned instead of creating a new one. If it is not defined, the constructor stores the `items` argument on `this` and records `this` as the single instance.
4. The `DropdownList` class has two methods: `addItem` and `removeItem`.
   - `addItem` method: accepts an item as an argument and pushes it onto the shared `items` array.
   - `removeItem` method: accepts an item as an argument and removes it from the shared `items` array by reassigning `this.items` to a new array that filters out the item to be removed.
5. An instance of `DropdownList` is created with no arguments, so it is initialized with an empty array. This instance is stored in the `dropdownList` constant and exported, so every module that imports it shares the same list.

### Singleton Pattern with Static Class Fields

The code above shows how to implement the singleton with ES2015 classes; I chose to show it this way first to make it simpler to understand what the singleton is about. However, newer JavaScript supports the `static` keyword on class fields (standardized in ES2022 and widely supported before that), which can be used to create a static instance property on the class. This static instance property can be used to hold the single instance of the class.

```jsx
class DropdownList {
  static instance;

  constructor(items = []) {
    if (DropdownList.instance) {
      return DropdownList.instance;
    }

    this.items = items;
    DropdownList.instance = this;
  }

  addItem(item) {
    this.items.push(item);
  }

  removeItem(item) {
    this.items = this.items.filter(i => i !== item);
  }
}

const dropdownList = new DropdownList();
export default dropdownList;
```

In this version, the instance is a static property on the class itself, rather than a separate variable. This makes it clear that the instance is associated with the class, not just some random variable on top. The instance is created when the module is loaded, and the same instance is returned every time the class is imported. This also means that we no longer need a free-floating module-level variable to guard the constructor.

Stay tuned for more posts in this series where we'll continue to explore different design patterns. Don't forget to follow and like if you found this useful, and feel free to leave any questions you have in the comments section.
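As a final sanity check of the pattern, here is a standalone sketch (with the class repeated inline so it can run on its own): constructing the class twice yields the same object and the same shared state.

```javascript
// Standalone sketch: a module-level variable holds the single instance.
let instance;

class DropdownList {
  constructor(items = []) {
    if (instance) {
      return instance; // every later `new` returns the first instance
    }
    this.items = items;
    instance = this;
  }

  addItem(item) {
    this.items.push(item);
  }
}

const a = new DropdownList(['Home']);
const b = new DropdownList(['Ignored']); // the existing instance wins
a.addItem('About');

console.log(a === b);  // true
console.log(b.items);  // [ 'Home', 'About' ]
```

Because the constructor short-circuits, the `['Ignored']` argument on the second call is discarded, and a mutation through either reference is visible through the other.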
vtnorton
1,880,479
Who's Hiring?
Finding a job is hard right now. If you're hiring, we'd love to know to help those looking for jobs to find you.
0
2024-06-07T13:45:25
https://github.com/orgs/open-sauced/discussions/37
jobhunt, career
---
title: Who's Hiring?
published: true
description: Finding a job is hard right now. If you're hiring, we'd love to know to help those looking for jobs to find you.
tags: jobhunt, career
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/18lttdhgox6bob51caiu.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-07 13:36 +0000
canonical_url: https://github.com/orgs/open-sauced/discussions/37
---

We all know that the tech job market is challenging right now. At OpenSauced, we’re committed to supporting contributors, maintainers, and teams in the open source community. This discussion post aims to connect employers who are hiring with job seekers. Please do not post jobs you aren't associated with.

## **Parameters for Posting**

1. **Job Title**: Clearly state the position you're hiring for.
2. **Company Name**: Include the name of your organization.
3. **Open Source Projects**: Link any relevant open source projects your company is involved with or working on.
4. **Location**: Specify if the job is remote, hybrid, or on-site, and include the location details.
5. **Job Description**: Provide a brief description of the role, including key responsibilities and requirements, especially those related to open source contributions.
6. **Application Process**: Outline how candidates can apply, including any relevant links or contact information.
7. **Deadline**: Mention the application deadline if applicable.
8. **Additional Information**: Share any other relevant details, such as benefits, company culture, or special requirements related to open source involvement.

### **Community Guidelines**

- Be respectful and supportive.
- Provide constructive feedback if applicable.
- Avoid sharing sensitive personal information publicly.
- Keep the conversation focused on job opportunities in the open source community.

If you hire someone through this post, we'd love to know! You can email me at bekah@opensauced.pizza.
bekahhw
1,880,477
Understanding All Types of Page Rendering in Next.js
Next.js is a powerful framework for React that facilitates the creation of modern web applications....
0
2024-06-07T13:43:51
https://dev.to/felipecezar01/understanding-all-types-of-page-rendering-in-nextjs-1fbi
Next.js is a powerful framework for React that facilitates the creation of modern web applications. One of its strengths is the flexibility in page rendering. In this article, we will explore all the types of page rendering available in Next.js and provide practical examples for each. ## Types of Rendering In Next.js, there are four main types of page rendering: 1. Static Generation (SG) 2. Server-side Rendering (SSR) 3. Client-side Rendering (CSR) 4. Incremental Static Regeneration (ISR) ## 1. Static Generation (SG) Static Generation generates the HTML of a page at build time. This means that pages are pre-generated and served as static HTML files, resulting in fast performance and excellent SEO. **When to use:** - Content that does not change frequently. - Marketing pages, blogs, and documentation. **Example:** ``` // pages/index.js import fs from 'fs'; import path from 'path'; export async function getStaticProps() { const filePath = path.join(process.cwd(), 'data.json'); const jsonData = JSON.parse(fs.readFileSync(filePath, 'utf8')); return { props: { data: jsonData, }, }; } export default function Home({ data }) { return ( <div> <h1>Static Generation</h1> <pre>{JSON.stringify(data, null, 2)}</pre> </div> ); } ``` ## 2. Server-side Rendering (SSR) With Server-side Rendering, the HTML is generated on each request. This allows the most up-to-date data to be displayed to the user but can increase the page response time. **When to use:** - Dynamic content that changes frequently. - Pages that depend on real-time data. **Example:** ``` // pages/index.js export async function getServerSideProps() { const res = await fetch('https://api.example.com/data'); const data = await res.json(); return { props: { data, }, }; } export default function Home({ data }) { return ( <div> <h1>Server-side Rendering</h1> <pre>{JSON.stringify(data, null, 2)}</pre> </div> ); } ``` ## 3. 
Client-side Rendering (CSR) With Client-side Rendering, the page is rendered in the browser using JavaScript. This is common for content that changes frequently and depends on user interactions. **When to use:** - Pages that require heavy interactivity. - Pages that load data after initialization. **Example:** ``` // pages/index.js import { useEffect, useState } from 'react'; export default function Home() { const [data, setData] = useState(null); useEffect(() => { fetch('https://api.example.com/data') .then((res) => res.json()) .then((data) => setData(data)); }, []); if (!data) { return <div>Loading...</div>; } return ( <div> <h1>Client-side Rendering</h1> <pre>{JSON.stringify(data, null, 2)}</pre> </div> ); } ``` ## 4. Incremental Static Regeneration (ISR) ISR allows you to generate pages statically at build time and regenerate them in the background when new requests come in. **When to use:** Content that changes frequently but doesn't need to be updated on every request. Better performance than server-side rendering. **Example:** ``` // pages/index.js export async function getStaticProps() { const res = await fetch('https://api.example.com/data'); const data = await res.json(); return { props: { data, }, revalidate: 10, // Regenerate every 10 seconds }; } export default function Home({ data }) { return ( <div> <h1>Incremental Static Regeneration (ISR)</h1> <pre>{JSON.stringify(data, null, 2)}</pre> </div> ); } ``` ## Conclusion Next.js offers incredible flexibility in page rendering, allowing you to choose the approach that best suits your project's needs. Whether generating pages statically for optimal performance, rendering on the server for dynamic data, or using a hybrid approach with ISR, Next.js has the tools to help you build efficient and scalable web applications.
felipecezar01
1,880,476
Entropy: Revolutionizing On-Chain Randomness with the Pyth Network
Introduction In the rapidly evolving landscape of decentralized finance (DeFi) and Web3 applications,...
0
2024-06-07T13:42:43
https://dev.to/gauciv3005/entropy-revolutionizing-on-chain-randomness-with-the-pyth-network-1b14
entropy
**Introduction** In the rapidly evolving landscape of decentralized finance (DeFi) and Web3 applications, the demand for secure, reliable, and verifiable randomness is ever-growing. From fair lotteries and gaming to unpredictable algorithmic trading and unique NFT minting processes, random number generation plays a crucial role. Traditional methods of generating random numbers off-chain and then importing them to the blockchain introduce significant risks, including manipulation and predictability. The Pyth Network addresses these challenges with its latest innovation, Entropy—a robust solution for on-chain random number generation. This article delves into the mechanics of Entropy, its advantages, and its potential impact on the DeFi and Web3 ecosystem. **Understanding Entropy** Entropy leverages the decentralized oracle network of the Pyth Network to generate random numbers directly on-chain. This approach ensures the randomness is not only secure and verifiable but also resistant to manipulation. **How Entropy Works** 1. Decentralized Oracle Network: Entropy utilizes Pyth's existing decentralized oracle infrastructure, which is known for its high-frequency, low-latency data feeds. The network comprises multiple independent nodes that contribute to the randomness pool. 2. Commit-Reveal Scheme: The core of Entropy's randomness generation process is the commit-reveal scheme, which operates as follows: - Commit Phase: Nodes generate a secret random value and commit to it by hashing the value along with a nonce. This commitment is then published on-chain. - Reveal Phase: After a predefined period, nodes reveal their original random values and nonces. The on-chain smart contract verifies these values against the commitments. 3. Aggregation: The revealed values are aggregated using a cryptographic hash function to produce the final random number. This aggregation ensures that even if some nodes are compromised or act maliciously, the overall randomness remains secure. 
4. Incentive Mechanism: To ensure active participation and honest behavior from the nodes, Entropy incorporates an incentive mechanism. Nodes are rewarded for participating in the commit-reveal process and penalized for any detected misbehavior. **Advantages of Entropy** 1. Security: By keeping the entire random number generation process on-chain and using a decentralized network of oracles, Entropy eliminates the risk of off-chain manipulation. The commit-reveal scheme adds an extra layer of security, making it difficult for any single entity to predict or alter the outcome. 2. Verifiability: Every step of the random number generation process is transparent and verifiable on the blockchain. Users can independently verify the commitments, reveals, and the final aggregated random number, ensuring trust in the system. 3. Decentralization: Leveraging Pyth's decentralized oracle network enhances the robustness and resilience of the randomness generation process. The distributed nature of the network ensures no single point of failure or control. 4. Scalability: Entropy is designed to handle high-frequency requests for random numbers, making it suitable for various DeFi applications, from fast-paced trading algorithms to high-demand gaming environments. **Applications in DeFi and Web3** The introduction of Entropy opens up numerous possibilities for the DeFi and Web3 ecosystem: 1. Fair Lotteries and Gaming: Entropy can power decentralized lotteries and gaming platforms, ensuring that outcomes are genuinely random and fair, thereby building user trust and engagement. 2. NFT Minting: The randomness provided by Entropy can be used in the minting process of NFTs to create unique and unpredictable attributes, enhancing the value and rarity of digital assets. 3. Algorithmic Trading: DeFi protocols can utilize Entropy to introduce randomness in algorithmic trading strategies, reducing predictability and increasing robustness against market manipulation. 4. 
Randomized Governance: Decentralized autonomous organizations (DAOs) can use Entropy for randomized selection processes in governance, such as lottery-based voting or random selection of council members. **Conclusion** Entropy represents a significant advancement in the realm of on-chain random number generation. By leveraging the Pyth Network's decentralized oracle infrastructure, Entropy offers a secure, verifiable, and scalable solution that addresses the critical need for reliable randomness in DeFi and Web3 applications. As the DeFi ecosystem continues to grow and evolve, innovations like Entropy will play a pivotal role in enhancing the security, fairness, and decentralization of blockchain-based systems. The adoption of Entropy across various applications will undoubtedly contribute to the maturation of the DeFi and Web3 space, fostering greater innovation and trust among users and developers alike. With Entropy, the future of secure and reliable on-chain randomness is here.
gauciv3005
1,880,475
Commonly Used Git Commands
In today's tech-savvy world, mastering Git is almost a rite of passage for developers. Git is a...
0
2024-06-07T13:41:39
https://dev.to/ak_23/commonly-used-git-commands-2g0j
git, beginners, learning, programming
In today's tech-savvy world, mastering Git is almost a rite of passage for developers. Git is a distributed version control system that helps track changes in source code during software development. Whether you're a newbie or a seasoned developer, knowing the most commonly used Git commands can save you a lot of headaches. Let's dive into some essential Git commands that every developer should know. ## Setting Up Git First things first, let's set up Git on your machine. If you haven't installed Git yet, you can download it from [Git's official website](https://git-scm.com/). After installing, configure your Git with your username and email: ```bash git config --global user.name "Your Name" git config --global user.email "your.email@example.com" ``` ## Basic Git Commands ### 1. git init The `git init` command is used to initialize a new Git repository. This command sets up all the necessary files and directories for your project to be tracked by Git. ```bash git init ``` ### 2. git clone The `git clone` command is used to copy an existing Git repository from a remote server to your local machine. ```bash git clone https://github.com/username/repository.git ``` ### 3. git status The `git status` command shows the current state of your working directory and staging area. It tells you which changes have been staged, which haven't, and which files aren't being tracked by Git. ```bash git status ``` ### 4. git add The `git add` command adds changes in the working directory to the staging area. This is the first step in the process of committing changes to the repository. ```bash git add filename git add . ``` ### 5. git commit The `git commit` command captures a snapshot of the project's currently staged changes. This snapshot is then stored in the project’s history. ```bash git commit -m "Your commit message" ``` ### 6. git push The `git push` command is used to upload local repository content to a remote repository. 
This command sends your commits to the remote repository. ```bash git push origin branch-name ``` ### 7. git pull The `git pull` command fetches and integrates changes from a remote repository to your local repository. It is a combination of `git fetch` and `git merge`. ```bash git pull origin branch-name ``` ### 8. git branch The `git branch` command is used to list, create, or delete branches. Branching is a powerful feature that allows you to diverge from the main line of development and continue to work without messing with that main line. ```bash git branch git branch new-branch git branch -d branch-name ``` ### 9. git checkout The `git checkout` command is used to switch between branches or restore working tree files. ```bash git checkout branch-name git checkout -b new-branch ``` ### 10. git merge The `git merge` command integrates changes from one branch into another. It’s typically used to combine feature branches back into the main branch. ```bash git merge branch-name ``` ## Advanced Git Commands ### 1. git stash The `git stash` command temporarily shelves changes you've made to your working directory. This is useful if you need to switch branches but want to save your changes for later. ```bash git stash git stash pop ``` ### 2. git rebase The `git rebase` command is used to reapply commits on top of another base tip. It’s a powerful way to rewrite commit history in a linear fashion. ```bash git rebase branch-name ``` ### 3. git log The `git log` command shows the commit history for the repository. It's useful for reviewing and finding specific commits. ```bash git log ``` ### 4. git diff The `git diff` command shows the differences between various commits, branches, files, etc. It’s essential for reviewing changes before committing them. ```bash git diff git diff branch-name ``` ### 5. git tag The `git tag` command is used to create, list, and delete tags. Tags are used to mark specific points in a repository's history. 
```bash git tag git tag -a v1.0 -m "Version 1.0" git push origin v1.0 ``` ## Practical Tips I remember my early days of using Git; I was often intimidated by its vast array of commands. However, with practice, I began to appreciate its power and flexibility. One memorable instance was when I accidentally deleted an important branch. Thanks to [Git's reflog](https://git-scm.com/docs/git-reflog) command, I was able to recover it and avoid a major setback. It taught me the importance of learning recovery commands and always keeping backups. ### Key Takeaway Always make small, frequent commits with clear messages. This practice not only helps in keeping track of changes but also makes it easier to revert to previous states if something goes wrong. ## Conclusion Mastering Git commands is an essential skill for any developer. With these commonly used commands, you can effectively manage your projects and collaborate with others. Remember, practice is key. The more you use Git, the more comfortable you'll become with its commands. [Git Branching Strategy Guide](https://dev.to/amit_k_812b560fb293c72152/branching-strategy-guide-24d6 ) --- _"The best way to predict the future is to invent it." - Alan Kay_ Feel free to ask any questions or share your own Git tips in the comments!
ak_23
1,880,474
Reduce concurrent requests by Redux-Saga Effects
Are you familiar with the situation when the user clicks on a Load Data button 2 consecutive times at...
0
2024-06-07T13:41:38
https://jetthoughts.com/blog/reduce-concurrent-requests-by-redux-saga-effects-react/
redux, react, javascript, frontend
Are you familiar with the situation when the user clicks on a Load Data button 2 consecutive times at a rapid rate, the 2nd click will dispatch a request while the fetch fired on the first one hasn’t yet terminated? Then this blog post is for you. ![](https://raw.githubusercontent.com/jetthoughts/jetthoughts.github.io/master/static/assets/img/blog/reduce-concurrent-requests-by-redux-saga-effects-react/file_0.jpeg) In this post, we do not dive into what the [Redux-Saga](https://redux-saga.js.org/) is and how it works. This blog is already for those who know and know how to work with the Saga but encountered the problem of concurrent requests. ## Effects To create Effects, you use the functions provided by the library in the redux-saga/effects package. For example, suppose we have a Saga that watches a METRIC_REQUESTED action. On each matching action, it starts a task to fetch a list of products from a server. ```javascript import { takeEvery } from `redux-saga/effects` function* fetchMetric(action) { ... } function* watchFetchMetric() { yield takeEvery('METRIC_REQUESTED', fetchMetric) } ``` You will say that you can use the takeLatest effects of redux-saga. And you will be right because for sagas that are not asynchronous and so there is no reason to cancel them. takeLatest would work here as well but it might give a false indication when reading code that there is something to cancel. An action might differ in some way with each invocation. E.g. imagine you have a metric and you are adding some filter with date range. Each time the action is triggered you get a different date range, even though it is the same action type. The user can add filters quicker than you get a response from the server. But just because you added multiple filters quickly doesn’t mean you want to stop the saga by adding the previous one. So we need to avoid concurrent fetch requests with the same IDs. ## Let’s start! 
Firstly, import the required effects (note that `all` is also needed for the final example):

```javascript
import { all, call, takeEvery, join, cancel, fork } from 'redux-saga/effects'
```

Add a helper that forks a worker for every matching action, keyed by ID:

```javascript
const takeLatestPerId = (pattern, worker, keySelector, ...args) => {
  return fork(function* () {
    let tasks = {}

    yield takeEvery(pattern, function* (action) {
      const key = yield call(keySelector, action)

      tasks[key] = yield fork(worker, ...args, action)
      yield join(tasks[key])
    })
  })
}
```

Now all we have to do is add some checks. When a request with the same ID is already running, cancel it:

```javascript
if (tasks[key]) {
  yield cancel(tasks[key])
}
```

When a task has already finished (returned or thrown an error), remove it from the map:

```javascript
if (tasks[key] && !tasks[key].isRunning()) {
  delete tasks[key]
}
```

Now when Redux-Saga receives several queries with the same IDs, we will fetch only the last concurrent request with the latest applied filters:

```javascript
const takeLatestPerId = (pattern, worker, keySelector, ...args) => {
  return fork(function* () {
    let tasks = {}

    yield takeEvery(pattern, function* (action) {
      const key = yield call(keySelector, action)

      if (tasks[key]) {
        yield cancel(tasks[key])
      }

      tasks[key] = yield fork(worker, ...args, action)
      yield join(tasks[key])

      if (tasks[key] && !tasks[key].isRunning()) {
        delete tasks[key]
      }
    })
  })
}
```

The function call will look something like this:

```javascript
yield all([
  takeLatestPerId(
    Actions.METRIC_REQUESTED,
    fetchMetric,
    ac => ac.payload.metric.id
  ),
])
```

## Conclusion

That's all, and without any magic. I hope you found this post useful. Good luck and prosper!

**Andriy Parashchuk** is a Software Engineer at [JetThoughts](https://www.jetthoughts.com/). Follow him on [LinkedIn](https://www.linkedin.com/in/andriy-parashchuk-3aa56468/) or [GitHub](https://github.com/andriyParashchuk).

> *If you enjoyed this story, we recommend reading our [latest tech stories](https://jtway.co/latest) and [trending tech stories](https://jtway.co/trending).*
jetthoughts_61
1,880,472
Select best cluster of game-servers connection using PowerShell like the Wargaming launcher does. 🚀🎮
How to check connection to server clusters using powershell or how to do the same thing that...
0
2024-06-07T13:37:36
https://dev.to/uyriq/select-best-cluster-of-game-servers-connection-using-powershell-like-the-wargaming-launcher-does-n20
powershell, network, wargaming, lesta
## How to check connection to server clusters using PowerShell, or how to do the same thing the launcher does when logging into a Wargaming or Lesta game account. 🚀🎮 When throwing pebbles into the water, look at the circles they form; otherwise such throwing will be an empty amusement 😉. What do I liken a good MT/WOT ping to? To a fast projectile hitting the target. Clusters of game servers for Wargaming and Lesta titles are distributed across different regions, and each cluster serves a certain set of servers. The game selects the server based on the lowest ping to the cluster. Probably additional selection parameters are used as well, and all this happens at the moment of connection. Also, at any time in the game, you can switch to the available list of servers sorted by ping. If you want to solve the problem of choosing the optimal cluster yourself, you will need to check the ping to each server in each cluster and determine the average ping and its variance; the lower these indicators, the more suitable the cluster is for playing. For this purpose, I have written a PowerShell script that performs these actions, displays the results in the console as it traverses the list, and selects the optimal cluster based on the final table. The script code is available [on GitHub in the public repo](https://github.com/uyriq/wotPing). Instructions on how to use and customize it are in the repo README. Sadly, dev.to does not allow a fifth hashtag for this article: #abnormal_programming
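The selection logic itself (average ping plus variance per cluster, lowest wins) is easy to sketch; here is an illustrative JavaScript version of the same math the PowerShell script performs (the function and cluster names are hypothetical):

```javascript
// Pick the best cluster: lowest mean ping wins, lower variance breaks ties.
function stats(samples) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((acc, x) => acc + (x - mean) ** 2, 0) / samples.length;
  return { mean, variance };
}

function bestCluster(pingsByCluster) {
  // pingsByCluster: { clusterName: [pingMs, ...], ... }
  return Object.entries(pingsByCluster)
    .map(([name, samples]) => ({ name, ...stats(samples) }))
    .sort((a, b) => a.mean - b.mean || a.variance - b.variance)[0].name;
}

console.log(bestCluster({ EU1: [40, 42, 41], EU2: [30, 90, 30] })); // EU1: mean 41 beats mean 50
```

The tie-break on variance matters for game servers: a cluster with a steady 45 ms ping is better for play than one that averages 45 ms but swings between 20 ms and 120 ms.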
uyriq
1,880,470
Jekyll + GitHub Pages + Forestry.io
I’ve recently migrated the jetthoughts.com website on Jekyll to Forestry.io. I’d like to share my...
0
2024-06-07T13:36:22
https://jetthoughts.com/blog/jekyll-github-pages-forestryio-jekyl
jekyl, github, tutorial
I’ve recently migrated the [jetthoughts.com](https://www.jetthoughts.com/) website on [Jekyll](https://jekyllrb.com/) to [Forestry.io](https://forestry.io/). I’d like to share my process with you, as well as a few tips about building a Jekyll site with GitHub Pages and [Forestry.io](https://forestry.io/). ![Photo by [Richard T](https://unsplash.com/photos/yovhXPl8V1M) on[ Unsplash](https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)](https://cdn-images-1.medium.com/max/2000/0*-PAPqcb3kzHPuizk) The Advantages: * All tools and services are free * Ability to write content in [Markdown](https://www.markdownguide.org/) or YML * Lightweight, fast, and server-less * Easy to use ## For whom it is necessary A novice developer or writer who has little knowledge and a small stack of technology but wants to create their own website and be able to maintain it. ## YML or Markdown To get started, you need to make sure that all the content that you want to edit is in static files. ```yaml description: title: Careers description: Become A Part of JetThoughts Team careers: title: We are looking for subtitle: '' vacancies: - label: Software development name: Node.js developer location: Ukraine additional_location: Remote remote_allowed: true hot: true details: - heading: Skills fields: - At least 2+ years of experience with Node.js - Experience with relational and non-relational databases - Passion about writing well designed, testable - At least an Intermediate level of English is a must - Understands how to provide Proactive communication - heading: Nice to have fields: - Experience working remotely - Experience working directly with English speaking Managers - Understanding of project design, microservice architecture - Experience with AWS S3 ``` If not, you need to move all the static content over step by step. 
To do this, you will need to use [Liquid](https://shopify.github.io/liquid/) syntax to display and create lists with HTML markup. ```html <div class="container"> {% assign careers = site.data.careers.careers.careers %} <h2 class="title">{{ careers.title }}</h2> <p class="description">{{ careers.description }}</p> <ul class="list"> {% for vacancy in careers.vacancies %} <li class="vacancy fade-right"> <h4 class="name">{{ vacancy.name }}</h4> <h5 class="label">{{ vacancy.label }}</h5> </li> {% endfor %} </ul> </div> ``` We will not dwell on this, as you can view the whole process on the Jekyll [website](https://jekyllrb.com/docs/datafiles/). ## Site settings Your site settings are configured from .forestry/settings.yml. Forestry stores the settings and configuration of the CMS for each site in a .forestry/ folder in your site’s source code. This allows developers to create default configurations that can be shared between multiple sites, and to deliver source code with Forestry CMS pre-configured. ``` new_page_extension: md auto_deploy: false admin_path: '' webhook_url: sections: - type: directory path: _data/careers label: Careers page create: all match: "**/*" - type: directory path: _data/services label: Services page create: all match: "**/*" upload_dir: _assets/images public_path: _assets/images front_matter_path: '' use_front_matter_path: false file_template: ":filename:" ``` ## Setup Forestry.io Now the easy part: adding our CMS to our website. Before we get started, head on over to [Forestry.io](https://forestry.io/) and create your account. 
It’s free for personal use and allows you to add up to 3 users per website. In the top right-hand corner of the page, click on *Add Site*, then simply select *Other (Gridsome, Next, Nuxt…)* to choose your GitHub repository. ![](https://cdn-images-1.medium.com/max/2000/0*e1G7eiSX7z2PSJSW.jpg) Select GitHub to continue working with your repositories. ![](https://cdn-images-1.medium.com/max/2000/0*tRcN96DBOIV3TZem.jpg) A browser window should then pop up asking for your credentials in order to authenticate. After authentication, simply select the repository that contains your project, NOT the one we use for GitHub Pages that contains your username. Choose your Git provider to connect Forestry to your repository host. ![](https://cdn-images-1.medium.com/max/2000/0*7bgxBo-nvp8MwDoJ.jpg) After these steps, you should see your site being imported. ![](https://cdn-images-1.medium.com/max/2000/0*Tlal9trj4AA7TW1P.jpg) If it works as it should, Forestry will detect your configuration file, settings.yml. (Note: When importing a new site, a .forestry/ folder will be added to your site’s source. Any changes made to your CMS’ configuration will be committed to your site’s source in this folder.) ![](https://cdn-images-1.medium.com/max/2000/0*xJMYN8GiuVVJJSvq.jpg) If you click on it, you will go to the admin page where you can edit the content and add media files. ![](https://cdn-images-1.medium.com/max/2000/0*QhWlC3QMgTAXn6_G.jpg) That is it! You now have a free, GitHub Pages hosted website using Jekyll and Forestry.io as your CMS. Just remember that any changes you make using Forestry will auto-commit to your project repository, so don’t forget to git pull once you’re done editing your site content with Forestry. ## GitHub Pages and CircleCI You can use any other convenient way to deploy your site, but I think the easiest way is to place the site on GitHub and set up auto-deploy with CircleCI. How to do it? 
I have already described this in my [post](https://jtway.co/deploying-jekyll-to-github-pages-with-circleci-2-0-3eb69324bc6e). ## Conclusion I hope this simple guide to creating and setting up a free website will help you. Good luck! **Andriy Parashchuk** is a Software Engineer at [JetThoughts](https://www.jetthoughts.com/). Follow him on [LinkedIn](https://www.linkedin.com/in/andriy-parashchuk-3aa56468/) or [GitHub](https://github.com/andriyParashchuk). > *If you enjoyed this story, we recommend reading our [latest tech stories](https://jtway.co/latest) and [trending tech stories](https://jtway.co/trending).*
jetthoughts_61
1,880,196
How To Set Up Docker Selenium GRID
What is a Selenium GRID? Selenium GRID is one of the 3 components of Selenium : Selenium...
0
2024-06-07T13:34:53
https://dev.to/debasmita-a/how-to-set-up-docker-selenium-grid-2e8f
docker, grid, selenium, dockercompose
## What is a Selenium GRID? Selenium GRID is one of the three components of Selenium: Selenium WebDriver, GRID, and IDE. The architecture consists of a Hub and several nodes. Through our test script, we connect to the Selenium Hub URL, and it runs the tests on the nodes. _Parallel execution_: A grid runs test scripts on virtual machines instead of your local machine. Suppose there are a few hundred test cases that take a couple of hours to execute; they would completely seize your local machine for that duration. _Scaling up browser containers_: A grid enables execution of test scripts on scaled-up browser instances. If I want to run my tests only on the Chrome browser and I have hundreds of test cases, I can scale up Chrome instances and run them in parallel, which significantly reduces execution time. After that, I can scale back down as required. _Cross browser testing_: Suppose I don't have a specific browser on my system and don't want to set it up. I can always configure the grid to have those browsers and run my tests on them! ## Standalone Selenium GRID: This is something Selenium itself provides. We need to download and install the Selenium Server (preferably the latest version, with Java >= 11) on our machine. Navigate to the [Selenium Server Download](https://www.selenium.dev/downloads/) page, where we can see all the language-compatible zip files, or directly click on the selenium-server-4.21.0.jar file. ![Selenium Server grid official page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/amnnz1ui5w8hvq4tga4s.png) We will see the complete installation and execution of test scripts using the Selenium Server in another article. ## Docker Selenium GRID: Selenium provides Docker images to run on our machine, and all the grid installation, browser setup, etc. is done automatically — we don't need to do anything! (That means we don't need to configure the grid, browser instances, etc. 
step by step, like in the case of the Selenium Server.) ## What is a Docker Compose file? According to the official Docker documentation: > _Docker Compose is a tool for defining and running multi-container applications. It is the key to unlocking a streamlined and efficient development and deployment experience._ > _Compose simplifies the control of your entire application stack, making it easy to manage services, networks, and volumes in a single, comprehensible YAML configuration file. Then, with a single command, you create and start all the services from your configuration file._ It is a .yml/.yaml file where all the Selenium grid configurations are defined. Selenium provides these files for all types of releases: [SeleniumHQ/docker-selenium](https://github.com/SeleniumHQ/docker-selenium/blob/trunk/README.md) ## How to Run a Docker Compose file? **_Start Docker_**: Once we have installed Docker on our system (it is also known as Docker Desktop), just open it. There. It has started. **_Get a docker compose file and store it in a folder_**: Here is a sample docker compose file: ``` # To execute this docker-compose yml file use `docker-compose -f docker-compose-v3-beta-channel.yml up` # Add the `-d` flag at the end for detached execution # To stop the execution, hit Ctrl+C, and then `docker-compose -f docker-compose-v3-beta-channel.yml down` version: "3" services: chrome: image: selenium/node-chrome:4.20.0-20240505 shm_size: 2gb depends_on: - selenium-hub environment: - SE_EVENT_BUS_HOST=selenium-hub - SE_EVENT_BUS_PUBLISH_PORT=4442 - SE_EVENT_BUS_SUBSCRIBE_PORT=4443 edge: image: selenium/node-edge:4.20.0-20240505 shm_size: 2gb depends_on: - selenium-hub environment: - SE_EVENT_BUS_HOST=selenium-hub - SE_EVENT_BUS_PUBLISH_PORT=4442 - SE_EVENT_BUS_SUBSCRIBE_PORT=4443 firefox: image: selenium/node-firefox:4.20.0-20240505 shm_size: 2gb depends_on: - selenium-hub environment: - SE_EVENT_BUS_HOST=selenium-hub - SE_EVENT_BUS_PUBLISH_PORT=4442 - SE_EVENT_BUS_SUBSCRIBE_PORT=4443 
selenium-hub: image: selenium/hub:4.20.0-20240505 container_name: selenium-hub ports: - "4442:4442" - "4443:4443" - "4444:4444" ``` _**Run the docker compose file**_: Go to the docker compose folder and open a command prompt. ![docker-compose file on local system](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xbkpz8e4mxr9jjqikn4t.png) On the command prompt, run the command _**docker compose up -d**_ (-d means detached mode; your command prompt will remain available for further commands). ![Running docker compose up -d command](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p73uhybogp6tlscan9fk.png) ![Containers created from docker compose file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9bwovyfxv3ct0zw1zwwa.png) _Pulling_ means the images are being downloaded from Docker Hub. _**Open the Selenium grid in a browser**_: After all the installations are complete, open any browser and enter the URL localhost:4444/ui. The grid runs on port 4444 by default, as defined in the docker compose file. ![Selenium Docker GRID on localhost](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwg8q1as5nntqo42o9i0.png) "localhost", because it is not hosted on any cloud or other remote machine; it is running on the local machine. That's it! The Selenium GRID is up and running. We just need to configure our test script so it runs the tests on the remote machine. Happy learning!
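The "scaling up browser containers" scenario described earlier maps to a single Compose flag. A sketch of the workflow, assuming the service name `chrome` from the compose file above (the replica count of 5 is just an example, and this of course requires a running Docker daemon):

```shell
# Scale the Chrome node service up to 5 containers for a parallel run
docker compose up -d --scale chrome=5

# ...run your test suite against http://localhost:4444...

# Scale back down when the run is done, or tear the grid down entirely
docker compose up -d --scale chrome=1
docker compose down
```

Note that scaling works for the `chrome` service because it has no fixed `container_name`; the `selenium-hub` service does, so it stays a single container.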
debasmita-a
1,880,468
Buy verified cash app account
https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash...
0
2024-06-07T13:33:54
https://dev.to/xojiwat855/buy-verified-cash-app-account-4bh5
webdev, javascript, beginners, programming
ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rtt1pyjfbmodaxajg4b0.png)\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts.  With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
1,878,978
Comparing SQL engines by CPU instructions for simple DML
When comparing databases, people often focus on the response time, but it's also essential to...
0
2024-06-07T13:33:51
https://dev.to/yugabyte/comparing-sql-engines-by-cpu-instructions-for-dml-48a
linux, cpu, distributed, sql
When comparing databases, people often focus on the response time, but it's also essential to consider the global CPU usage. Running a database in a Docker container automatically assigns it to a Linux control group (cgroup), making it easy to obtain execution statistics using `perf stat -G`. This method offers the benefit of measuring not just one process, but all the database activity when running specific SQL queries. It also enables comparisons with databases that use multiple threads to handle requests, such as YugabyteDB.

Following this idea, I tested a similar workload on multiple databases, inserting two million rows, updating them, counting them, and deleting them. I measured the number of CPU instructions used during that execution and compared PostgreSQL, MySQL, Oracle, SQL Server, TiDB, YugabyteDB, and CockroachDB.

## This is NOT a benchmark

I am running the database engines using their latest official Docker images and all default configurations, which is not what is typically used in production. In fact, this setup even demonstrates the limitations of benchmarks: databases have different implementations and trade-offs. It's easy to find workloads that are fast in one database and slow in another.

I am measuring on a single instance. It is important to note that even when running on a single-node cluster, a distributed database architecture that provides elasticity and resilience with built-in distribution utilizes more CPU instructions than traditional monolithic databases. In a cloud-native environment, the cost remains lower by scaling up and down as needed, rather than constantly provisioning capacity for peak demands.

## Summary

Here is a summary of the results. `Gi` is the number of billion instructions in user space, and `s` is the number of seconds, all reported by `perf stat -e instructions:u -a -G docker/$containerid`. The detailed test and output follow.
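One way to read the summary table below is to normalize each count to CPU instructions per row, since every run touches two million rows. A quick sketch, not part of the original test, with the insert-column values hard-coded from the table:

```python
# Normalize the "insert" column of the summary table to instructions per
# inserted row (two million rows per run); Gi = 1e9 instructions.
GI = 10**9
ROWS = 2_000_000

insert_gi = {
    "PostgreSQL": 53, "MySQL": 68, "Oracle": 30, "SQL Server": 45,
    "TiDB": 150, "YugabyteDB": 377, "CockroachDB": 1344,
}

per_row = {db: gi * GI / ROWS for db, gi in insert_gi.items()}
for db, n in sorted(per_row.items(), key=lambda kv: kv[1]):
    print(f"{db:<12} {n:>9,.0f} instructions/row")
```

By this rough measure, the distributed engines spend several times to more than an order of magnitude more instructions per inserted row than the monolithic ones, which is the trade-off discussed above.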
| Database    |    sleep   |    insert   |   update    |  select  |   delete  |
|-------------|:----------:|:-----------:|:-----------:|:--------:|:---------:|
| PostgreSQL  | 0.02Gi/11s |   53Gi/20s  |  74Gi/39s   |  1Gi/1s  |  10Gi/10s |
| MySQL       | 0.03Gi/11s |   68Gi/18s  |  67Gi/21s   |  3Gi/2s  |  47Gi/20s |
| Oracle      |   6Gi/10s  |   30Gi/22s  |  50Gi/30s   |  4Gi/7s  | 101Gi/48s |
| SQL Server  |  0.1Gi/11s |   45Gi/13s  |  22Gi/17s   |  1Gi/1s  |  18Gi/5s  |
| TiDB        |  0.4Gi/11s |  150Gi/21s  | 141Gi/25s   |  5Gi/1s  | 116Gi/14s |
| YugabyteDB  |   1Gi/11s  |  377Gi/38s  | 919Gi/114s  | 11Gi/2s  | 422Gi/69s |
| CockroachDB |   7Gi/11s  | 1344Gi/458s | 747Gi/395s  | 13Gi/3s  | 799Gi/486s |

All runs follow the same process: start the database in a Docker container and keep the container ID in a variable. Then, connect with the right client for the database and run SQL, with `perf stat` measuring the CPU instructions. I've run all this on an 8-vCPU virtual machine (KVM) with 4 Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz cores with hyperthreading.

## PostgreSQL

Start the database in a Docker container

```sh
postgres=$(
 docker run -d \
  -e POSTGRES_PASSWORD=postgres \
  postgres:latest \
)
```

Start a client, connect, and create a table

```sql
docker run --rm -i --link $postgres:db -e PGPASSWORD=postgres \
 postgres \
 psql -h db -p 5432 -U postgres -ec '
select version();
drop table if exists demo;
create extension if not exists pgcrypto;
create table demo (
   primary key (id)
 , id uuid default gen_random_uuid()
 , value float
);
'
```

```
NOTICE:  table "demo" does not exist, skipping
                                                       version
---------------------------------------------------------------------------------------------------------------------
 PostgreSQL 16.3 (Debian 16.3-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
(1 row)

DROP TABLE
CREATE EXTENSION
CREATE TABLE
```

Measure background activity when sleeping 10 seconds

```sql
perf stat -e instructions:u -G docker/$postgres -a \
docker run --rm -i --link $postgres:db -e PGPASSWORD=postgres \
 postgres \
 psql -h db -p 5432 -U postgres -ec '
select pg_sleep(10);
'
```

```
select pg_sleep(10);
 pg_sleep
----------

(1 row)

 Performance counter stats for 'system wide':

        17,728,103      instructions:u   docker/6ee868ec9bcb10ec206224aa84db9489d70db5c71d46f6ec489da4c8f074d0ab

      10.666744875 seconds time elapsed
```

In PostgreSQL, I run VACUUM after each statement because it is necessary to leave the database ready for further queries. Not doing it here would not account for the real resource usage.

Insert two million rows in two transactions of one million rows

```sql
perf stat -e instructions:u -G docker/$postgres -a \
docker run --rm -i --link $postgres:db -e PGPASSWORD=postgres \
 postgres \
 psql -h db -p 5432 -U postgres -ec '
insert into demo(value) select generate_series(1,1000000);
insert into demo(value) select generate_series(1,1000000);
' -c '
vacuum
'
```

```
insert into demo(value) select generate_series(1,1000000);
INSERT 0 1000000
insert into demo(value) select generate_series(1,1000000);
INSERT 0 1000000
vacuum
VACUUM

 Performance counter stats for 'system wide':

    52,812,253,895      instructions:u   docker/6ee868ec9bcb10ec206224aa84db9489d70db5c71d46f6ec489da4c8f074d0ab

      20.423514447 seconds time elapsed
```

Update those two million rows

```sql
perf stat -e instructions:u -G docker/$postgres -a \
docker run --rm -i --link $postgres:db -e PGPASSWORD=postgres \
 postgres \
 psql -h db -p 5432 -U postgres -ec '
update demo set value=value+1;
' -c '
vacuum
'
```

```
update demo set value=value+1;
UPDATE 2000000
vacuum
VACUUM

 Performance counter stats for 'system wide':

    74,393,453,562      instructions:u   docker/6ee868ec9bcb10ec206224aa84db9489d70db5c71d46f6ec489da4c8f074d0ab

      38.580002880 seconds time elapsed
```

Count those two million values

```sql
perf stat -e instructions:u -G docker/$postgres -a \
docker run --rm -i --link $postgres:db -e PGPASSWORD=postgres \
 postgres \
 psql -h db -p 5432 -U postgres -ec '
select count(value) from demo;
'
```

```
select count(value) from demo;
  count
---------
 2000000
(1 row)

 Performance counter stats for 'system wide':

     1,391,096,618      instructions:u   docker/6ee868ec9bcb10ec206224aa84db9489d70db5c71d46f6ec489da4c8f074d0ab

       0.732223707 seconds time elapsed
```

Delete those two million rows

```sql
perf stat -e instructions:u -G docker/$postgres -a \
docker run --rm -i --link $postgres:db -e PGPASSWORD=postgres \
 postgres \
 psql -h db -p 5432 -U postgres -ec '
delete from demo;
' -c '
vacuum
'
```

```
delete from demo;
DELETE 2000000
vacuum
VACUUM

 Performance counter stats for 'system wide':

    10,148,828,821      instructions:u   docker/6ee868ec9bcb10ec206224aa84db9489d70db5c71d46f6ec489da4c8f074d0ab

       9.696420600 seconds time elapsed
```

## YugabyteDB

Start the database in a Docker container

```sh
yugabytedb=$(
 docker run -d \
  yugabytedb/yugabyte:latest \
  yugabyted start --background=false
)
```

Start a client, connect, and create a table

```sql
docker run --rm -i --link $yugabytedb:db postgres \
 psql -h db -p 5433 -U yugabyte -ec '
select version();
drop table if exists demo;
create extension if not exists pgcrypto;
create table demo (
   primary key (id)
 , id uuid default gen_random_uuid()
 , value float
);
'
```

```
NOTICE:  table "demo" does not exist, skipping
                                                                                          version
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11.2-YB-2.21.0.1-b0 on x86_64-pc-linux-gnu, compiled by clang version 16.0.6 (https://github.com/yugabyte/llvm-project.git 1e6329f40e5c531c09ade7015278078682293ebd), 64-bit
(1 row)

DROP TABLE
CREATE EXTENSION
CREATE TABLE
```

Measure background activity when sleeping 10 seconds

```sql
perf stat -e instructions:u -G docker/$yugabytedb -a \
docker run --rm -i --link $yugabytedb:db \
 postgres:latest \
 psql -h db -p 5433 -U yugabyte -e << 'SQL'
select pg_sleep(10);
SQL
```

```
select pg_sleep(10);
 pg_sleep
----------

(1 row)

 Performance counter stats for 'system wide':

       668,261,558      instructions:u   docker/9c32896e86dc88e026e5f80fa3f143f70fa0a7815ee81acca13c750c2a4a8d4c

      10.859333028 seconds time elapsed
```

Insert two million rows in two transactions of one million rows

```sql
perf stat -e instructions:u -G docker/$yugabytedb -a \
docker run --rm -i --link $yugabytedb:db \
 postgres:latest \
 psql -h db -p 5433 -U yugabyte -e << 'SQL'
insert into demo(value) select generate_series(1,1000000);
insert into demo(value) select generate_series(1,1000000);
SQL
```

```
insert into demo(value) select generate_series(1,1000000);
INSERT 0 1000000
insert into demo(value) select generate_series(1,1000000);
INSERT 0 1000000

 Performance counter stats for 'system wide':

   377,180,116,073      instructions:u   docker/9c32896e86dc88e026e5f80fa3f143f70fa0a7815ee81acca13c750c2a4a8d4c

      38.254201842 seconds time elapsed
```

Update those two million rows

```sql
perf stat -e instructions:u -G docker/$yugabytedb -a \
docker run --rm -i --link $yugabytedb:db \
 postgres:latest \
 psql -h db -p 5433 -U yugabyte -e << 'SQL'
update demo set value=value+1;
SQL
```

```
update demo set value=value+1;
UPDATE 2000000

 Performance counter stats for 'system wide':

   918,965,397,040      instructions:u   docker/9c32896e86dc88e026e5f80fa3f143f70fa0a7815ee81acca13c750c2a4a8d4c

     113.679966282 seconds time elapsed
```

Count those two million values

```sql
perf stat -e instructions:u -G docker/$yugabytedb -a \
docker run --rm -i --link $yugabytedb:db \
 postgres:latest \
 psql -h db -p 5433 -U yugabyte -e << 'SQL'
select count(value) from demo;
SQL
```

```
select count(value) from demo;
  count
---------
 2000000
(1 row)

 Performance counter stats for 'system wide':

    11,168,496,896      instructions:u   docker/9c32896e86dc88e026e5f80fa3f143f70fa0a7815ee81acca13c750c2a4a8d4c

       2.475239412 seconds time elapsed
```

Delete those two million rows

```sql
perf stat -e instructions:u -G docker/$yugabytedb -a \
docker run --rm -i --link $yugabytedb:db \
 postgres:latest \
 psql -h db -p 5433 -U yugabyte -e << 'SQL'
delete from demo;
SQL
```

```
delete from demo;
DELETE 2000000

 Performance counter stats for 'system wide':

   421,678,154,604      instructions:u   docker/9c32896e86dc88e026e5f80fa3f143f70fa0a7815ee81acca13c750c2a4a8d4c

      68.652693864 seconds time elapsed
```

## Oracle

Start the database in a Docker container

```sh
oracle=$(
 docker run -d \
  -e ORACLE_PASSWORD=franck -e APP_USER=franck -e APP_USER_PASSWORD=franck \
  gvenzl/oracle-free:slim
)
```

I created a user because using the system ones is different (for example, system tablespaces have additional checksums by default).

Start a client, connect, and create a table

```sql
docker run --rm -i --link $oracle:db \
 container-registry.oracle.com/database/sqlcl:latest \
 -s franck/franck@//db/FREEPDB1 <<'SQL'
select banner_full from v$version;
drop table if exists demo;
create table demo (
   primary key (id)
 , id raw(16) default sys_guid()
 , value float
);
SQL
```

```
BANNER_FULL
_______________________________________________________________________________________________________
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.4.0.24.05

Table DEMO dropped.

Table DEMO created.
```

Measure background activity when sleeping 10 seconds with `sqlcl`

```sql
perf stat -e instructions:u -G docker/$oracle -a \
docker run --rm -i --link $oracle:db \
 container-registry.oracle.com/database/sqlcl:latest \
 -s franck/franck@//db/FREEPDB1 @ /dev/stdin <<'SQL'
exec dbms_session.sleep(10);
SQL
```

```
PL/SQL procedure successfully completed.

 Performance counter stats for 'system wide':

       833,762,410      instructions:u   docker/8db3df9fbc74f04e20a28e016cc0b91e04a99dc88ae3ef1923172eb6ac724aa0

      16.695102734 seconds time elapsed
```

It takes an additional 7 seconds to connect because `sqlcl` is a Java application that is very slow to start and connect. I cannot use it for this test. Then, instead of running a container with the database client, I'll connect from `sqlplus` within the database container.
I didn't find an official image to run only sqlplus without starting a database.

Measure background activity when sleeping 10 seconds with `sqlplus`

```sql
perf stat -e instructions:u -G docker/$oracle -a \
docker exec -i $oracle \
 sqlplus -s franck/franck@//localhost/FREEPDB1 @ /dev/stdin <<'SQL'
exec dbms_session.sleep(10);
SQL
```

```
PL/SQL procedure successfully completed.

 Performance counter stats for 'system wide':

     6,350,743,728      instructions:u   docker/8db3df9fbc74f04e20a28e016cc0b91e04a99dc88ae3ef1923172eb6ac724aa0

      10.188907434 seconds time elapsed
```

Oracle is not auto-commit by default, so I add a COMMIT statement after each DML statement.

Insert two million rows in two transactions of one million rows

```sql
perf stat -e instructions:u -G docker/$oracle -a \
docker run --rm -i --link $oracle:db \
 container-registry.oracle.com/database/sqlcl:latest \
 -s franck/franck@//db/FREEPDB1 @ /dev/stdin <<'SQL'
insert into demo(value) select rownum from xmltable('1 to 1000000');
commit;
insert into demo(value) select rownum from xmltable('1 to 1000000');
commit;
SQL
```

```
1,000,000 rows inserted.

Commit complete.

1,000,000 rows inserted.

Commit complete.

 Performance counter stats for 'system wide':

    29,948,655,538      instructions:u   docker/8db3df9fbc74f04e20a28e016cc0b91e04a99dc88ae3ef1923172eb6ac724aa0

      22.445749204 seconds time elapsed
```

Update those two million rows

```sql
perf stat -e instructions:u -G docker/$oracle -a \
docker run --rm -i --link $oracle:db \
 container-registry.oracle.com/database/sqlcl:latest \
 -s franck/franck@//db/FREEPDB1 @ /dev/stdin <<'SQL'
update demo set value=value+1;
commit;
SQL
```

```
2,000,000 rows updated.

 Performance counter stats for 'system wide':

    50,447,105,827      instructions:u   docker/8db3df9fbc74f04e20a28e016cc0b91e04a99dc88ae3ef1923172eb6ac724aa0

      30.286655516 seconds time elapsed
```

Count those two million values

```sql
perf stat -e instructions:u -G docker/$oracle -a \
docker run --rm -i --link $oracle:db \
 container-registry.oracle.com/database/sqlcl:latest \
 -s franck/franck@//db/FREEPDB1 @ /dev/stdin <<'SQL'
select count(value) from demo;
SQL
```

```
   COUNT(VALUE)
_______________
        2000000

 Performance counter stats for 'system wide':

     4,446,769,589      instructions:u   docker/8db3df9fbc74f04e20a28e016cc0b91e04a99dc88ae3ef1923172eb6ac724aa0

       6.698948743 seconds time elapsed
```

Delete those two million rows

```sql
perf stat -e instructions:u -G docker/$oracle -a \
docker run --rm -i --link $oracle:db \
 container-registry.oracle.com/database/sqlcl:latest \
 -s franck/franck@//db/FREEPDB1 @ /dev/stdin <<'SQL'
delete from demo;
commit;
SQL
```

```
2,000,000 rows deleted.

 Performance counter stats for 'system wide':

   101,012,043,703      instructions:u   docker/8db3df9fbc74f04e20a28e016cc0b91e04a99dc88ae3ef1923172eb6ac724aa0

      47.742493451 seconds time elapsed
```

## SQL Server

Start the database in a Docker container

```sh
sqlserver=$(
 docker run -d \
  -e ACCEPT_EULA=Y -e SA_PASSWORD=MS-SQLServer \
  mcr.microsoft.com/mssql/server:2022-latest \
)
```

Start a client, connect, and create a table

```sql
docker run --rm -i --link $sqlserver:db \
 mcr.microsoft.com/mssql-tools \
 /opt/mssql-tools/bin/sqlcmd -S db -U SA -P MS-SQLServer -e -Q '
select @@version;
create database franck;
go
alter database franck set allow_snapshot_isolation on;
alter database franck set read_committed_snapshot on;
go
use franck;
drop table if exists demo;
create table demo (
   primary key (id)
 , id uniqueidentifier default newid()
 , value float
);
'
```

```
select @@version;
create database franck;

--------------------------------------------------------------------
Microsoft SQL Server 2022 (RTM-CU13) (KB5036432) - 16.0.4125.3 (X64)
        May  1 2024 15:05:56
        Copyright (C) 2022 Microsoft Corporation
        Developer Edition (64-bit) on Linux (Ubuntu 22.04.4 LTS) <X64>

(1 rows affected)
alter database franck set allow_snapshot_isolation on;
alter database franck set read_committed_snapshot on;
use franck;
drop table if exists demo;
create table demo (
   primary key (id)
 , id uniqueidentifier default newid()
 , value float
);
Changed database context to 'franck'.
```

Measure background activity when sleeping 10 seconds

```sql
perf stat -e instructions:u -G docker/$sqlserver -a \
docker run --rm -i --link $sqlserver:db \
 mcr.microsoft.com/mssql-tools \
 /opt/mssql-tools/bin/sqlcmd -d franck -S db -U SA -P MS-SQLServer -e -Q "
waitfor delay '00:00:10';
"
```

```
waitfor delay '00:00:10';

 Performance counter stats for 'system wide':

       100,374,968      instructions:u   docker/85de543fbd9b1124794c2ef6537798a57edf01954da32d9d398b04b14d642712

      10.648215142 seconds time elapsed
```

Insert two million rows in two transactions of one million rows

```sql
perf stat -e instructions:u -G docker/$sqlserver -a \
docker run --rm -i --link $sqlserver:db \
 mcr.microsoft.com/mssql-tools \
 /opt/mssql-tools/bin/sqlcmd -d franck -S db -U SA -P MS-SQLServer -e -Q "
insert into demo(value) select value from generate_series(1,1000000);
insert into demo(value) select value from generate_series(1,1000000);
"
```

```
insert into demo(value) select value from generate_series(1,1000000);
insert into demo(value) select value from generate_series(1,1000000);

(1000000 rows affected)

(1000000 rows affected)

 Performance counter stats for 'system wide':

    44,794,000,275      instructions:u   docker/85de543fbd9b1124794c2ef6537798a57edf01954da32d9d398b04b14d642712

      13.099580529 seconds time elapsed
```

Update those two million rows

```sql
perf stat -e instructions:u -G docker/$sqlserver -a \
docker run --rm -i --link $sqlserver:db \
 mcr.microsoft.com/mssql-tools \
 /opt/mssql-tools/bin/sqlcmd -d franck -S db -U SA -P MS-SQLServer -e -Q "
update demo set value=value+1;
"
```

```
update demo set value=value+1;

(2000000 rows affected)

 Performance counter stats for 'system wide':

    21,674,930,278      instructions:u   docker/85de543fbd9b1124794c2ef6537798a57edf01954da32d9d398b04b14d642712

      16.542750712 seconds time elapsed
```

Count those two million values

```sql
perf stat -e instructions:u -G docker/$sqlserver -a \
docker run --rm -i --link $sqlserver:db \
 mcr.microsoft.com/mssql-tools \
 /opt/mssql-tools/bin/sqlcmd -d franck -S db -U SA -P MS-SQLServer -e -Q "
select count(value) from demo;
"
```

```
select count(value) from demo;

-----------
    2000000

(1 rows affected)

 Performance counter stats for 'system wide':

     1,064,211,968      instructions:u   docker/85de543fbd9b1124794c2ef6537798a57edf01954da32d9d398b04b14d642712

       0.672937713 seconds time elapsed
```

Delete those two million rows

```sql
perf stat -e instructions:u -G docker/$sqlserver -a \
docker run --rm -i --link $sqlserver:db \
 mcr.microsoft.com/mssql-tools \
 /opt/mssql-tools/bin/sqlcmd -d franck -S db -U SA -P MS-SQLServer -e -Q "
delete from demo;
"
```

```
delete from demo;

(2000000 rows affected)

 Performance counter stats for 'system wide':

    17,697,977,289      instructions:u   docker/85de543fbd9b1124794c2ef6537798a57edf01954da32d9d398b04b14d642712

       4.760425858 seconds time elapsed
```

## CockroachDB

Start the database in a Docker container

```sh
cockroachdb=$(
 docker run -d \
  cockroachdb/cockroach \
  bash -c 'cockroach start-single-node --insecure'
)
```

Start a client, connect, and create a table

```sql
docker run --rm -i --link $cockroachdb:db postgres \
 psql -h db -p 26257 -U root -d defaultdb -e <<'SQL'
select version();
drop table if exists demo;
create table demo (
   primary key (id)
 , id uuid default gen_random_uuid()
 , value float
);
SQL
```

```
select version();
                                                 version
---------------------------------------------------------------------------------------------------------
 CockroachDB CCL v24.1.0 (x86_64-pc-linux-gnu, built 2024/05/15 21:28:29, go1.22.2 X:nocoverageredesign)
(1 row)

drop table if exists demo;
DROP TABLE
create table demo (
   primary key (id)
 , id uuid default gen_random_uuid()
 , value float
);
CREATE TABLE
```

Measure background activity when sleeping 10 seconds

```sql
perf stat -e instructions:u -G docker/$cockroachdb -a \
docker run --rm -i --link $cockroachdb:db postgres \
 psql -h db -p 26257 -U root -d defaultdb -e <<'SQL'
select pg_sleep(10);
SQL
```

```
select pg_sleep(10);
 pg_sleep
----------
 t
(1 row)

 Performance counter stats for 'system wide':

     7,449,329,979      instructions:u   docker/17aa81758d4af146a23dd785c046e2b27f5926ccc561f7a67c654d1e5d7f587d

      10.676270411 seconds time elapsed
```

Insert two million rows in two transactions of one million rows

```sql
perf stat -e instructions:u -G docker/$cockroachdb -a \
docker run --rm -i --link $cockroachdb:db postgres \
 psql -h db -p 26257 -U root -d defaultdb -e <<'SQL'
insert into demo(value) select generate_series(1,1000000);
insert into demo(value) select generate_series(1,1000000);
SQL
```

```
insert into demo(value) select generate_series(1,1000000);
INSERT 0 1000000
insert into demo(value) select generate_series(1,1000000);
INSERT 0 1000000

 Performance counter stats for 'system wide':

 1,344,459,032,711      instructions:u   docker/17aa81758d4af146a23dd785c046e2b27f5926ccc561f7a67c654d1e5d7f587d

     457.629554299 seconds time elapsed
```

Update those two million rows

```sql
perf stat -e instructions:u -G docker/$cockroachdb -a \
docker run --rm -i --link $cockroachdb:db postgres \
 psql -h db -p 26257 -U root -d defaultdb -e <<'SQL'
update demo set value=value+1;
SQL
```

```
update demo set value=value+1;
UPDATE 2000000

 Performance counter stats for 'system wide':

   747,076,086,139      instructions:u   docker/17aa81758d4af146a23dd785c046e2b27f5926ccc561f7a67c654d1e5d7f587d

     395.113462545 seconds time elapsed
```

Count those two million values

```sql
perf stat -e instructions:u -G docker/$cockroachdb -a \
docker run --rm -i --link $cockroachdb:db postgres \
 psql -h db -p 26257 -U root -d defaultdb -e <<'SQL'
select count(value) from demo;
SQL
```

```
select count(value) from demo;
  count
---------
 2000000
(1 row)

 Performance counter stats for 'system wide':

    12,906,358,871      instructions:u   docker/17aa81758d4af146a23dd785c046e2b27f5926ccc561f7a67c654d1e5d7f587d

       2.983402815 seconds time elapsed
```

Delete those two million rows

```sql
perf stat -e instructions:u -G docker/$cockroachdb -a \
docker run --rm -i --link $cockroachdb:db postgres \
 psql -h db -p 26257 -U root -d defaultdb -e <<'SQL'
delete from demo;
SQL
```

```
delete from demo;
DELETE 2000000

 Performance counter stats for 'system wide':

   799,367,485,634      instructions:u   docker/17aa81758d4af146a23dd785c046e2b27f5926ccc561f7a67c654d1e5d7f587d

     486.132498193 seconds time elapsed
```

## MySQL

Start the database in a Docker container

```sh
mysql=$(
 docker run -d \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=db -e MYSQL_USER=franck -e MYSQL_PASSWORD=franck -e MYSQL_ROOT_HOST=% \
  mysql:latest \
)
```

Start a client, connect, and create a table

```sql
docker run --rm -i --link $mysql:db mysql:latest \
 mysql -h db -P 3306 -u root -psecret -v <<'SQL'
select version();
create database if not exists db;
use db;
drop table if exists demo;
create table demo (
   primary key (id)
 , id binary(32) default (UUID_TO_BIN(UUID()))
 , value float
);
SQL
```

```
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
select version()
--------------
version()
8.4.0
--------------
create database if not exists db
--------------

--------------
drop table if exists demo
--------------

--------------
create table demo (
   primary key (id)
 , id binary(32) default (UUID_TO_BIN(UUID()))
 , value float
)
--------------
```

Measure background activity when sleeping 10 seconds

```sql
perf stat -e instructions:u -G docker/$mysql -a \
docker run --rm -i --link $mysql:db mysql:latest \
 mysql -h db -P 3306 -u root -psecret -v -e '
do sleep(10);
'
```

```
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
do sleep(10)
--------------

 Performance counter stats for 'system wide':

        26,942,875      instructions:u   docker/379507f1eac6f65bd3a88b7eeaf6c3cc2e084912fa76b22118521e8e35f37f31

      10.634240192 seconds time elapsed
```

Generating rows in MySQL is a bit more difficult. I'm using a WITH clause and CROSS JOIN.

Insert two million rows in two transactions of one million rows

```sql
perf stat -e instructions:u -G docker/$mysql -a \
docker run --rm -i --link $mysql:db mysql:latest \
 mysql -h db -P 3306 -u root -psecret -v -e '
INSERT INTO db.demo(value)
with x as (
 SELECT 0 as x UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9
)
select x1.x+10*x2.x+100*x3.x+1000*x4.x+10000*x5.x+100000*x6.x
from x x1 , x x2 , x x3, x x4, x x5, x x6;
commit;
INSERT INTO db.demo(value)
with x as (
 SELECT 0 as x UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9
)
select x1.x+10*x2.x+100*x3.x+1000*x4.x+10000*x5.x+100000*x6.x
from x x1 , x x2 , x x3, x x4, x x5, x x6;
commit;
'
```

```
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
INSERT INTO db.demo(value)
with x as (
 SELECT 0 as x UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9
)
select x1.x+10*x2.x+100*x3.x+1000*x4.x+10000*x5.x+100000*x6.x
from x x1 , x x2 , x x3, x x4, x x5, x x6
--------------

--------------
commit
--------------

--------------
INSERT INTO db.demo(value)
with x as (
 SELECT 0 as x UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4
 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9
)
select x1.x+10*x2.x+100*x3.x+1000*x4.x+10000*x5.x+100000*x6.x
from x x1 , x x2 , x x3, x x4, x x5, x x6
--------------

--------------
commit
--------------

 Performance counter stats for 'system wide':

    68,011,238,184      instructions:u   docker/f02f5a42cadc39cdbbc7c1c2213bd5697032cdaeee1e3881a17fa091dd87a9e0

      18.342903060 seconds time elapsed
```

Update those two million rows

```sql
perf stat -e instructions:u -G docker/$mysql -a \
docker run --rm -i --link $mysql:db mysql:latest \
 mysql -h db -P 3306 -u root -psecret -v -e '
update db.demo set value=value+1;
'
```

```
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
update db.demo set value=value+1
--------------

 Performance counter stats for 'system wide':

    67,272,334,190      instructions:u   docker/f02f5a42cadc39cdbbc7c1c2213bd5697032cdaeee1e3881a17fa091dd87a9e0

      21.026414415 seconds time elapsed
```

Count those two million values

```sql
perf stat -e instructions:u -G docker/$mysql -a \
docker run --rm -i --link $mysql:db mysql:latest \
 mysql -h db -P 3306 -u root -psecret -v -e '
select count(value) from db.demo;
'
```

```
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
select count(value) from db.demo
--------------
count(value)
2000000

 Performance counter stats for 'system wide':

     3,324,827,627      instructions:u   docker/f02f5a42cadc39cdbbc7c1c2213bd5697032cdaeee1e3881a17fa091dd87a9e0

       2.400244863 seconds time elapsed
```

Delete those two million rows

```sql
perf stat -e instructions:u -G docker/$mysql -a \
docker run --rm -i --link $mysql:db mysql:latest \
 mysql -h db -P 3306 -u root -psecret -v -e '
delete from db.demo;
'
```

```
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
delete from db.demo
--------------

 Performance counter stats for 'system wide':

    47,284,490,837      instructions:u   docker/f02f5a42cadc39cdbbc7c1c2213bd5697032cdaeee1e3881a17fa091dd87a9e0

      19.846219766 seconds time elapsed
```

## TiDB

Start the database in a Docker container

```sh
tidb=$(
 docker run -d \
  pingcap/tidb \
  tiup playground
)
```

When using the same CREATE TABLE as with MySQL, I got:

```
ERROR 3770 (HY000): Default value expression of column 'id' contains a disallowed function: `UUID_TO_BIN`
```

so I'll not use DEFAULT but put it in the INSERT.
Start a client, connect, and create a table ```sql docker run --rm -i --link $tidb:db mysql:latest \ mysql -h db -P 4000 -u root -v <<'SQL' select version(); create database if not exists db; use db; drop table if exists demo; create table demo ( primary key (id) , id binary(32) -- default (UUID_TO_BIN(UUID())) -- , value float ); SQL ``` ``` -------------- select version() -------------- version() 8.0.11-TiDB-v7.5.1 -------------- create database if not exists db -------------- -------------- drop table if exists demo -------------- -------------- create table demo ( primary key (id) , id binary(32) -- default (UUID_TO_BIN(UUID())) -- , value float ) -------------- ``` Measure background activity when sleeping 10 seconds ```sql perf stat -e instructions:u -G docker/$tidb -a \ docker run --rm -i --link $tidb:db mysql:latest \ mysql -h db -P 4000 -u root -v -e ' do sleep(10); ' ``` ``` -------------- do sleep(10) -------------- Performance counter stats for 'system wide': 380,037,170 instructions:u docker/13bcdc86b1e35ec92185d7c8ab7c169fd444c2205dfb41533012089e5b79ed78 10.615474417 seconds time elapsed ``` Insert two million rows in two transactions of one million rows ```sql perf stat -e instructions:u -G docker/$tidb -a \ docker run --rm -i --link $tidb:db mysql:latest \ mysql -h db -P 4000 -u root -v -e ' INSERT INTO db.demo(id,value) with x as ( SELECT 0 as x UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 ) select UUID_TO_BIN(UUID()),x1.x+10*x2.x+100*x3.x+1000*x4.x+10000*x5.x+100000*x6.x from x x1 , x x2 , x x3, x x4, x x5, x x6; commit; INSERT INTO db.demo(id,value) with x as ( SELECT 0 as x UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 ) select 
UUID_TO_BIN(UUID()),x1.x+10*x2.x+100*x3.x+1000*x4.x+10000*x5.x+100000*x6.x from x x1 , x x2 , x x3, x x4, x x5, x x6; commit; ' ``` ``` -------------- INSERT INTO db.demo(id,value) with x as ( SELECT 0 as x UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 ) select UUID_TO_BIN(UUID()),x1.x+10*x2.x+100*x3.x+1000*x4.x+10000*x5.x+100000*x6.x from x x1 , x x2 , x x3, x x4, x x5, x x6 -------------- -------------- commit -------------- -------------- INSERT INTO db.demo(id,value) with x as ( SELECT 0 as x UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9 ) select UUID_TO_BIN(UUID()),x1.x+10*x2.x+100*x3.x+1000*x4.x+10000*x5.x+100000*x6.x from x x1 , x x2 , x x3, x x4, x x5, x x6 -------------- -------------- commit -------------- Performance counter stats for 'system wide': 149,930,979,032 instructions:u docker/13bcdc86b1e35ec92185d7c8ab7c169fd444c2205dfb41533012089e5b79ed78 21.158571597 seconds time elapsed ``` Update those two million rows ```sql perf stat -e instructions:u -G docker/$tidb -a \ docker run --rm -i --link $tidb:db mysql:latest \ mysql -h db -P 4000 -u root -v -e ' update db.demo set value=value+1; ' ``` ``` -------------- update db.demo set value=value+1 -------------- ERROR 1105 (HY000) at line 2: Your query has been cancelled due to exceeding the allowed memory limit for a single SQL query. 
Please try narrowing your query scope or increase the tidb_mem_quota_query limit and try again.[conn=2097160] ``` Doing the same with a larger `tidb_mem_quota_query` (4 GB): ```sql perf stat -e instructions:u -G docker/$tidb -a \ docker run --rm -i --link $tidb:db mysql:latest \ mysql -h db -P 4000 -u root -v -e ' set tidb_mem_quota_query = 4 << 30 ; update db.demo set value=value+1; ' ``` ``` -------------- set tidb_mem_quota_query = 4 << 30 -------------- -------------- update db.demo set value=value+1 -------------- Performance counter stats for 'system wide': 141,189,414,795 instructions:u docker/11f53b1f64a1b596b0b47e66ad0d5bb4396299ce6f629d9a2c05283ecdf859e5 25.442668815 seconds time elapsed ``` Count those two million values ```sql perf stat -e instructions:u -G docker/$tidb -a \ docker run --rm -i --link $tidb:db mysql:latest \ mysql -h db -P 4000 -u root -v -e ' select count(value) from db.demo; ' ``` ``` -------------- select count(value) from db.demo -------------- count(value) 2000000 Performance counter stats for 'system wide': 5,365,236,600 instructions:u docker/13bcdc86b1e35ec92185d7c8ab7c169fd444c2205dfb41533012089e5b79ed78 1.023350939 seconds time elapsed ``` Delete those two million rows ```sql perf stat -e instructions:u -G docker/$tidb -a \ docker run --rm -i --link $tidb:db mysql:latest \ mysql -h db -P 4000 -u root -v -e ' delete from db.demo; ' ``` ``` -------------- delete from db.demo -------------- Performance counter stats for 'system wide': 116,478,067,922 instructions:u docker/13bcdc86b1e35ec92185d7c8ab7c169fd444c2205dfb41533012089e5b79ed78 14.375729026 seconds time elapsed ``` ## In conclusion With those simple tests, the most popular open-source monolithic databases, PostgreSQL and MySQL, perform well and similarly. 
This is interesting because they have a completely different implementation: PostgreSQL uses heap tables with in-place multi-version concurrency control, whereas MySQL stores the table in its primary key B-Tree, and past versions are moved to the transactional undo log. The reason is that those databases have been around for a long time, and those simple workloads were optimized for each architecture. For the same reason, Oracle Database also performs well in terms of CPU usage, which is crucial given that it runs under a commercial license with an initial price and annual fees based on the physical CPU cores on most platforms, without the possibility of scaling down and reducing the support fees. I also ran SQL Server, which showed very good performance and the best results here. This is interesting to note, especially given that it was recently ported to Linux. Initially, I thought that it was faster because it does not use Multi-Version Concurrency Control like the others (paying instead the price of locks and deadlocks), but I re-ran the test after enabling Read Committed Snapshot Isolation and got the same result. I also tested the three Distributed SQL databases that can be started in a container. They have built-in resilience, using Raft to distribute and replicate, and LSM trees to store the distributed tables and indexes. TiDB shows good numbers. They are higher than those of the monolithic databases, but this is expected for a horizontally scalable database. Unlike monolithic databases that handle all tasks (parsing SQL, optimizing, executing the plan, reading and modifying the data pages, and transaction control structures) within a single process accessing local memory, distributed SQL databases must use a protocol to distribute read and write operations across multiple nodes. I don't understand why CockroachDB uses so many CPU instructions for inserts and deletes.
Even though CockroachDB doesn't offer packed rows, I created a simple two-column table where this disparity should not be noticeable. I generated a [perf report](https://dev.to/yugabyte/flamegraphs-on-steroids-with-profilerfirefoxcom-203f) and saw many samples in the call stack (https://share.firefox.dev/3KwItMA) in Pebble's seek functions, their rewrite of RocksDB in Golang. Please comment if you think something is wrong with the setup. CockroachDB has only a subset of features available for free in the community edition, so maybe some optimizations are only available in the Enterprise edition. I tried the last version (`:latest-v20.2`) that still allowed `--storage-engine=rocksdb` and got slightly better numbers (513Gi/105s for the insert, 358Gi/54s for the update, 353Gi/44s for the select, 237Gi/33s for the delete). YugabyteDB shows numbers closer to monolithic databases, with higher CPU utilization due to its use of multi-threading for batch processing of reads and writes. The advantages of this approach may not be immediately apparent when dealing with small, single-session queries. However, in distributed systems, while individual response times may be slightly higher, the overall throughput can increase significantly thanks to the scalable architecture. Note also that YugabyteDB calculates checksums by default to detect disk corruption. This uses CPU but is mandatory to avoid data loss.
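A convenient way to read these counters side by side is to normalize them by the number of rows touched. A quick sketch, using two `instructions:u` figures quoted from the runs above (the `instructions_per_row` helper is just illustrative arithmetic, not part of any benchmark harness):

```python
def instructions_per_row(total_instructions: int, rows: int) -> float:
    """Normalize a perf `instructions:u` counter by the number of rows touched."""
    return total_instructions / rows

# Inserting two million rows, figures from the runs above:
print(round(instructions_per_row(68_011_238_184, 2_000_000)))   # MySQL: ~34006 instructions/row
print(round(instructions_per_row(149_930_979_032, 2_000_000)))  # TiDB:  ~74965 instructions/row
```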
franckpachot
1,880,713
Tutorial: Build a Digital Clock in Rust 🦀⏰
Hello, amazing people and welcome back to my blog! Today we're going to build a digital clock in...
0
2024-06-08T08:58:32
https://eleftheriabatsou.hashnode.dev/tutorial-build-a-digital-clock-in-rust
rust, rustprogramming, rusttutorial
--- title: Tutorial: Build a Digital Clock in Rust 🦀⏰ published: true date: 2024-06-07 13:33:40 UTC tags: Rust,Rustprogramming,Rusttutorial canonical_url: https://eleftheriabatsou.hashnode.dev/tutorial-build-a-digital-clock-in-rust cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0gjniokbxw0guyvbmo9w.jpeg --- Hello, amazing people and welcome back to my blog! Today we're going to build a digital clock in Rust. If you're following my tutorials you will **notice that this one is a bit simpler and I'd suggest it to juniors**. It's going to be a quick and fun one. ## Introduction to Time in Rust For this project, we will have some digital numbers. I did a little Wikipedia search to create these numbers (graphics characters). These characters existed long before Unicode existed and were available on all of the terminals. These graphic characters are useful for drawing various things, and these days the Unicode character set has an enormous number of things. The ones I'm going to be using today are used mostly for building forms and what I've done is taken those graphics characters and I've built them into a little array: *(feel free to copy-paste)* ```rust // https://www.w3.org/TR/xml-entity-names/025.html const DIGITS : [[&str; 11]; 7] = [ ["┏━┓ "," ╻ "," ┏━┓ ", " ┏━┓ "," ╻ ╻ "," ┏━┓ "," ┏ "," ┏━┓ "," ┏━┓ "," ┏━┓ "," "], ["┃ ┃ "," ┃ "," ┃ ", " ┃ "," ┃ ┃ "," ┃ "," ┃ "," ┃ "," ┃ ┃ "," ┃ ┃ "," ╻ "], ["┃ ┃ "," ┃ "," ┃ ", " ┃ "," ┃ ┃ "," ┃ "," ┃ "," ┃ "," ┃ ┃ "," ┃ ┃ "," "], ["┃ ┃ "," ┃ "," ┏━┛ ", " ┣━┫ "," ┗━┫ "," ┗━┓ "," ┣━┓ "," ┃ "," ┣━┫ "," ┗━┫ "," "], ["┃ ┃ "," ┃ "," ┃ ", " ┃ "," ┃ "," ┃ "," ┃ ┃ "," ┃ "," ┃ ┃ "," ┃ "," "], ["┃ ┃ "," ┃ "," ┃ ", " ┃ "," ┃ "," ┃ "," ┃ ┃ "," ┃ "," ┃ ┃ "," ┃ "," ╹ "], ["┗━┛ "," ╹ "," ┗━━ ", " ┗━┛ "," ╹ "," ┗━┛ "," ┗━┛ "," ╹ "," ┗━┛ "," ┗━┛ "," "], ]; ``` These graphics characters allow us to do little corners and this is how we can create 11 characters in a sort of graphical character set. 
Each row of this table consists of 11 entries, each of which contains a string slice. This structure enables us to print exactly what we want. Let's move on... **How can we tell the time in Rust?** There is a crate called Chrono in Rust; in particular, we want [date time in Chrono](https://docs.rs/chrono/latest/chrono/struct.DateTime.html). This is a structure that allows us to represent time in various ways. In my example, I'm going to use local time, which gives the time in my computer's local time zone. An alternative is to use UTC time. So, to answer my initial question, '**How can we tell the time in Rust?',** let's use [date time](https://docs.rs/chrono/latest/chrono/struct.DateTime.html) to tell the time! ## Dependencies The first thing we need to do is to put a dependency into our [cargo.toml](https://github.com/EleftheriaBatsou/digital-clock-rust/blob/main/Cargo.toml). ```ini [dependencies] chrono = "0.4" ``` ## Main If I want to get the local time I can do: `use chrono::{Local};` and inside the main: `let t = Local::now();` If I want to format it better, I can do: `let time = t.format("%H:%M:%S").to_string();` (that way I'll show the hours, minutes and seconds.) Now we want our clock to run forever, so we'll put it in an infinite loop. The loop enables us to keep running the same code. If you run this code, it reprints the time as fast as the loop can spin, not once per second. (We don't want that...!) ```rust loop { let t = Local::now(); let time = t.format("%H:%M:%S").to_string(); println!("{:?}", time); } ``` To improve it, we can "sleep" the thread like so: `std::thread::sleep(std::time::Duration::from_millis(999));` (we're sleeping for just under a thousand milliseconds; at a thousand milliseconds we'd get a new answer exactly every second.) **How can we print the digits we created above using the**`time`**string?** Let's convert the `time` string into some nice digits to make a digital clock. I can start by iterating over my digits. (I'll do that for every row). Then I need to actually call the digits.
Essentially we're building a line printer: ```rust loop { let t = Local::now(); let time = t.format("%H:%M:%S").to_string(); for row in &DIGITS { . . . . } std::thread::sleep(std::time::Duration::from_millis(999)); print!("\x1b[7A"); } ``` The next thing I need to do is iterate over the characters in my `time` string. I want to iterate as characters: ```rust for row in &DIGITS { for c in time.chars() { let col = match c { . . }; . } . } ``` I also need to translate those characters with a `match`; this essentially matches a pattern and does something depending on what the pattern is. In this particular case, we match anything between '0' and '9'; these are contiguous digits in ASCII and Unicode. ```rust let col = match c { '0'..='9' => c as usize - '0' as usize, _ => 10, }; ``` What we're going to do now is turn that `c` into an index. Let's do the digits first: `'0'..='9' => c as usize - '0' as usize, _ => 10,` * `usize` is a special type that we use for indexing arrays. * `- '0' as usize` will give us an integer. * You'll notice I did `..=`; this will include the digit 9 at the end. Otherwise, it won't be printing any `9`. * Then, the next one is the `_`. We'll just map that to `10`, which is our last column (used for the `:` separator). This match statement `let col = match c` will give us either a number between `0` and `9` or `10`, depending on which digit we want. Finally, I'm going to print the column from my row: `print!("{} ", row[col]);` this will print the corresponding row of my character. Then at the end of each row, I want to use `println!` with no arguments, and it'll print a blank line; otherwise, all my digits are going to run into each other: `println!();`. **Woohoo! We now have a working program!** 🔥 From now on, we can only improve it! **How can we make this prettier?** There are a few useful things we can do but I'm not going into details about them; instead, I'm going to show you the whole code below, and if you have any questions feel free to ask me.
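The match-to-index mapping is the only subtle part, so it's worth checking in isolation before assembling the full program. A minimal sketch (the `digit_column` helper name is mine, not part of the final code):

```rust
// Maps a clock character to a column index into the DIGITS table:
// '0'..='9' become 0..=9, and anything else (the ':' separator) becomes 10.
fn digit_column(c: char) -> usize {
    match c {
        '0'..='9' => c as usize - '0' as usize,
        _ => 10,
    }
}

fn main() {
    let cols: Vec<usize> = "12:34:56".chars().map(digit_column).collect();
    println!("{:?}", cols); // [1, 2, 10, 3, 4, 10, 5, 6]
}
```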
```rust use chrono::Local; // the DIGITS array from the beginning of the article goes here fn main() { print!("\x1b[2J"); print!("\x1b[?25l"); loop { let t = Local::now(); let time = t.format("%H:%M:%S").to_string(); for row in &DIGITS { for c in time.chars() { let col = match c { '0'..='9' => c as usize - '0' as usize, _ => 10, }; print!("{} ", row[col]); } println!(); } std::thread::sleep(std::time::Duration::from_millis(999)); print!("\x1b[7A"); } } ``` *Please keep in mind there are many other ways to build a digital clock in Rust; I only did this for learning purposes.* Code on [GitHub](https://github.com/EleftheriaBatsou/digital-clock-rust). {% embed https://github.com/EleftheriaBatsou/digital-clock-rust %} --- 👋 Hello, I'm Eleftheria, **Community Manager,** developer, public speaker, and content creator. 🥰 If you liked this article, consider sharing it. 🔗 [**All links**](https://limey.io/batsouelef) | [**X**](https://twitter.com/BatsouElef) | [**LinkedIn**](https://www.linkedin.com/in/eleftheriabatsou/)
eleftheriabatsou
1,880,467
Buy Negative Google Reviews
https://dmhelpshop.com/product/buy-negative-google-reviews/ Buy Negative Google Reviews Negative...
0
2024-06-07T13:32:04
https://dev.to/xojiwat855/buy-negative-google-reviews-2lhk
css, opensource, node, learning
https://dmhelpshop.com/product/buy-negative-google-reviews/

Buy Negative Google Reviews
Negative reviews on Google are detrimental critiques that expose customers’ unfavorable experiences with a business. These reviews can significantly damage a company’s reputation, presenting challenges in both attracting new customers and retaining current ones. If you are considering purchasing negative Google reviews from dmhelpshop.com, we encourage you to reconsider and instead focus on providing exceptional products and services to ensure positive feedback and sustainable success.

Why Buy Negative Google Reviews from dmhelpshop
We take pride in our fully qualified, hardworking, and experienced team, who are committed to providing quality and safe services that meet all your needs. Our professional team ensures that you can trust us completely, knowing that your satisfaction is our top priority. With us, you can rest assured that you’re in good hands.

Is Buy Negative Google Reviews safe?
At dmhelpshop, we understand the concern many business persons have about the safety of purchasing Buy negative Google reviews. We are here to guide you through a process that sheds light on the importance of these reviews and how we ensure they appear realistic and safe for your business. Our team of qualified and experienced computer experts has successfully handled similar cases before, and we are committed to providing a solution tailored to your specific needs. Contact us today to learn more about how we can help your business thrive.

Buy Google 5 Star Reviews
Reviews represent the opinions of experienced customers who have utilized services or purchased products from various online or offline markets. These reviews convey customer demands and opinions, and ratings are assigned based on the quality of the products or services and the overall user experience. Google serves as an excellent platform for customers to leave reviews since the majority of users engage with it organically. When you purchase Buy Google 5 Star Reviews, you have the potential to influence a large number of people either positively or negatively. Positive reviews can attract customers to purchase your products, while negative reviews can deter potential customers.

If you choose to Buy Google 5 Star Reviews, people will be more inclined to consider your products. However, it is important to recognize that reviews can have both positive and negative impacts on your business. Therefore, take the time to determine which type of reviews you wish to acquire. Our experience indicates that purchasing Buy Google 5 Star Reviews can engage and connect you with a wide audience. By purchasing positive reviews, you can enhance your business profile and attract online traffic. Additionally, it is advisable to seek reviews from reputable platforms, including social media, to maintain a positive flow. We are an experienced and reliable service provider, highly knowledgeable about the impacts of reviews. Hence, we recommend purchasing verified Google reviews and ensuring their stability and non-gropability.

Let us now briefly examine the direct and indirect benefits of reviews:
Reviews have the power to enhance your business profile, influencing users at an affordable cost.
To attract customers, consider purchasing only positive reviews, while negative reviews can be acquired to undermine your competitors. Collect negative reports on your opponents and present them as evidence.
If you receive negative reviews, view them as an opportunity to understand user reactions, make improvements to your products and services, and keep up with current trends.
By earning the trust and loyalty of customers, you can control the market value of your products. Therefore, it is essential to buy online reviews, including Buy Google 5 Star Reviews.
Reviews serve as the captivating fragrance that entices previous customers to return repeatedly.
Positive customer opinions expressed through reviews can help you expand your business globally and achieve profitability and credibility.
When you purchase positive Buy Google 5 Star Reviews, they effectively communicate the history of your company or the quality of your individual products.
Reviews act as a collective voice representing potential customers, boosting your business to amazing heights.

Now, let’s delve into a comprehensive understanding of reviews and how they function:
Google, with its significant organic user base, stands out as the premier platform for customers to leave reviews. When you purchase Buy Google 5 Star Reviews, you have the power to positively influence a vast number of individuals. Reviews are essentially written submissions by users that provide detailed insights into a company, its products, services, and other relevant aspects based on their personal experiences. In today’s business landscape, it is crucial for every business owner to consider buying verified Buy Google 5 Star Reviews, both positive and negative, in order to reap various benefits.

Why are Google reviews considered the best tool to attract customers?
Google, being the leading search engine and the largest source of potential and organic customers, is highly valued by business owners. Many business owners choose to purchase Google reviews to enhance their business profiles and also sell them to third parties. Without reviews, it is challenging to reach a large customer base globally or locally. Therefore, it is crucial to consider buying positive Buy Google 5 Star Reviews from reliable sources. When you invest in Buy Google 5 Star Reviews for your business, you can expect a significant influx of potential customers, as these reviews act as a pheromone, attracting audiences towards your products and services. Every business owner aims to maximize sales and attract a substantial customer base, and purchasing Buy Google 5 Star Reviews is a strategic move.

According to online business analysts and economists, trust and affection are the essential factors that determine whether people will work with you or do business with you. However, there are additional crucial factors to consider, such as establishing effective communication systems, providing 24/7 customer support, and maintaining product quality to engage online audiences. If any of these rules are broken, it can lead to a negative impact on your business. Therefore, obtaining positive reviews is vital for the success of an online business.

What are the benefits of purchasing reviews online?
In today’s fast-paced world, the impact of new technologies and IT sectors is remarkable. Compared to the past, conducting business has become significantly easier, but it is also highly competitive. To reach a global customer base, businesses must increase their presence on social media platforms as they provide the easiest way to generate organic traffic. Numerous surveys have shown that the majority of online buyers carefully read customer opinions and reviews before making purchase decisions. In fact, the percentage of customers who rely on these reviews is close to 97%. Considering these statistics, it becomes evident why we recommend buying reviews online. In an increasingly rule-based world, it is essential to take effective steps to ensure a smooth online business journey.

Buy Google 5 Star Reviews
Many people purchase reviews online from various sources and witness unique progress. Reviews serve as powerful tools to instill customer trust, influence their decision-making, and bring positive vibes to your business. Making a single mistake in this regard can lead to a significant collapse of your business. Therefore, it is crucial to focus on improving product quality, quantity, communication networks, facilities, and providing the utmost support to your customers.

Reviews reflect customer demands, opinions, and ratings based on their experiences with your products or services. If you purchase Buy Google 5-star reviews, it will undoubtedly attract more people to consider your offerings. Google is the ideal platform for customers to leave reviews due to its extensive organic user involvement. Therefore, investing in Buy Google 5 Star Reviews can significantly influence a large number of people in a positive way.

How to generate google reviews on my business profile?
Focus on delivering high-quality customer service in every interaction with your customers. By creating positive experiences for them, you increase the likelihood of receiving reviews. These reviews will not only help to build loyalty among your customers but also encourage them to spread the word about your exceptional service. It is crucial to strive to meet customer needs and exceed their expectations in order to elicit positive feedback. If you are interested in purchasing affordable Google reviews, we offer that service.

Contact Us / 24 Hours Reply
Telegram: dmhelpshop
WhatsApp: +1 (980) 277-2786
Skype: dmhelpshop
Email: dmhelpshop@gmail.com
xojiwat855
1,880,466
Self-promote on Reddit without getting banned
We are happy to share our experience on important aspects of starting self-promoting posts on Reddit....
0
2024-06-07T13:31:23
https://jetthoughts.com/blog/self-promote-on-reddit-without-getting-banned-promotion/
reddit, promotion, advice
We are happy to share our experience on important aspects of starting self-promoting posts on Reddit. What do you need for it? What are the steps? We aim to identify the challenges and show how to face them easily. ![Photo by [Kurt Cotoaga](https://unsplash.com/@kydroon?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/@kydroon?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)](https://raw.githubusercontent.com/jetthoughts/jetthoughts.github.io/master/static/assets/img/blog/self-promote-on-reddit-without-getting-banned-promotion/file_0.jpeg) Reddit is a large community made up of thousands of smaller communities, making it great for growth. Despite the potentially high traffic, many blog-post writers avoid it after running into problems right from the beginning. ## Become an active user The first thing you need to pay attention to is your account: it is important to be an active user. Reddit has a basic unit that measures how much your contributions mean to the community — your karma. ### We need karma to start self-promoting! It is hard to find a subreddit for self-promotion with 0 karma points; maybe only some unpopular subreddits will allow it. ![](https://raw.githubusercontent.com/jetthoughts/jetthoughts.github.io/master/static/assets/img/blog/self-promote-on-reddit-without-getting-banned-promotion/file_1.png) The main source of karma points is the **upvotes** that users give to your activity. So you should take part in public life on the topic you are aiming for. Here are a few pertinent tips: * over a few days, monitor your subreddits and note when the most people are online; you could post during those times.
* start posting interesting facts, memes; don’t shy away from leaving comments and questions; let users know about you * don’t troll or spam; that will often result in downvotes, eroding your positive karma on Reddit In turn, the users will leave you the necessary resource — **upvotes**. ### **Why is it so important?** Reddit has stringent rules regarding new accounts, so when you try to create an advertising post on a new account with 0 karma, the post will be permanently blocked and considered spam. ![](https://raw.githubusercontent.com/jetthoughts/jetthoughts.github.io/master/static/assets/img/blog/self-promote-on-reddit-without-getting-banned-promotion/file_2.png) **How much karma do you need to post self-promotion?** The exact amount of positive karma required on your account isn’t high — basically about 25 karma points (depending on the subreddit), but it’s kept from the public to avoid spam. ## Subreddit rules page Reddit may have relaxed its position on self-promotion, but almost every subreddit has its own rules regarding self-promotion posts. The rules page is visible to all visitors who can view a specific subreddit. The vast majority include rules about self-promotion, which describe the main conditions: * **Your content is high-quality:** > Your resource should be technically accurate, polished, and pedagogically sound. Your resource should not “feel spammy” * **Your post body contains more than just a link:** > Explain what your resource is teaching and how it improves on the status quo. Don’t just drop a link and move on. * **You respect Reddit's global policies regarding [self-promotion](https://reddit.zendesk.com/hc/en-us/articles/205926439-Reddiquette).** ### Allowed topics rule There is one thing we should pay attention to if we try to self-promote. This rule gives us a direct understanding of what content users want from us.
A good example might be posting programming learning content in the [r/programming](https://www.reddit.com/r/programming/) subreddit. Have you wondered if there might be a better place for such content? Yep, there is [r/learnprogramming](https://www.reddit.com/r/learnprogramming/), where such content fits perfectly. ## Well, how does the community respond to marketing? Reddit hates blatant marketing — much has been said about this. There is a lot of discussion about unhealthy criticism among the Reddit community. This may be the key: no need to make yet another advertising post that just drops your promotion link. Let people know what you have to offer. ![](https://raw.githubusercontent.com/jetthoughts/jetthoughts.github.io/master/static/assets/img/blog/self-promote-on-reddit-without-getting-banned-promotion/file_3.jpeg) I also can’t help but warn you: there are problems that, for the most part, we cannot influence. ### Resources shadow-banned on Reddit There are cases when self-promotion posts with links to some resources are not promoted on Reddit *(despite good stats and activity on the account)*. For example, with [Dev.to](https://dev.to/) *(a social network for software developers)*, there is a discussion on the resource itself where people report zero activity under posts that contain a link to a dev.to post. ### Filters for non-spam reasons Worse still are filters that trigger for non-spam reasons but show the same message, leading users to ask why Reddit or the mods think their post is spam, when in reality it was just filtered and is awaiting review by the mods when they have time to check the queue and/or logs. Anyway, self-promotion on Reddit is a great approach to making your posts more popular. The above steps will help you start sharing your posts on Reddit and open them to a huge community. Good luck! **Oleh** is a Software Engineer at [JetThoughts](https://www.jetthoughts.com/).
Follow him on [LinkedIn](https://www.linkedin.com/in/oleh-barchuk-0b9813192/) or [GitHub](https://github.com/phoenixixixix). > *If you enjoyed this story, we recommend reading our [latest tech stories](https://jtway.co/latest) and [trending tech stories](https://jtway.co/trending).*
jetthoughts_61
1,880,465
How I Fixed NPM Package Installation Errors
Fix your DNS When you have trouble installing packages, check logs and if it has issues...
0
2024-06-07T13:29:35
https://dev.to/kamilrashidev/how-i-fixed-npm-package-installation-errors-22lm
node, javascript, webdev, beginners
## Fix your DNS

When you have trouble installing packages, check the logs; if they show failing GET requests, change your DNS. I changed mine from Cloudflare's `1.1.1.1`, `1.0.0.1` to Google's `8.8.8.8`, `8.8.4.4` and it worked. Cloudflare DNS is good, but for some reason it gave me headaches; Google worked better in my case. It is up to you to choose a DNS server that suits you best.

## Use a VPN

Sometimes we face errors because of ISP issues. To avoid them, use a good VPN. If you have servers/VPS, you can use services like Tailscale to set up your own free private VPN.

## PNPM Installer

PNPM is an alternative to NPM. PNPM is faster than NPM, and it gives better logs than NPM, so you can identify the reason for installation errors more easily and fix them accordingly.

## Installing the NodeJS `exe` from the NodeJS website

> [!TIP]
> This step is important for Windows. Linux and Mac come with these software programs pre-installed.

While installing NodeJS, download the `exe` file from the official site and make sure everything is checked as in the images below. This will help prevent errors while installing plugins.

![nodejs-installation-tutorial-1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kj58jup4gs2c8l7o5pra.png)

![nodejs-installation-tutorial-2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rduhkqozpfwpq75e3wf9.png)

That's it, fellow devs. If you have tips, please comment below. :)
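For reference, the DNS switch described above can be sketched as a resolver configuration on Linux (an illustrative fragment, not from the original post; many distributions auto-generate `/etc/resolv.conf` via NetworkManager or systemd-resolved, so prefer your network manager's settings where applicable):

```
# /etc/resolv.conf (illustrative; this file is often auto-generated)
# Point DNS lookups at Google's public resolvers.
nameserver 8.8.8.8
nameserver 8.8.4.4
```

On Windows, the equivalent change is made per network adapter in the DNS server settings.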
kamilrashidev
1,880,464
Enhancing Sustainability and Transparency: The Crucial Role of ESG Data Analytics in Modern Business
ESG data analytics have become indispensable for enhancing transparency in investor relations and...
0
2024-06-07T13:28:15
https://dev.to/linda0609/enhancing-sustainability-and-transparency-the-crucial-role-of-esg-data-analytics-in-modern-business-14c4
ESG data analytics have become indispensable for enhancing transparency in investor relations and financial disclosures. This sophisticated analysis enables investment managers to optimize portfolios by prioritizing top ESG-compliant companies, while businesses use ESG benchmarks for performance monitoring. This article explores the vital role of ESG data analytics, shedding light on its significance across various industries.

What is ESG?

ESG stands for Environmental, Social, and Governance. It encompasses data gathering and sustainability accounting inspections across these three dimensions of a company’s operations. [ESG integration services](https://www.sganalytics.com/esg-services/) facilitate peer analytics, allowing companies to compare sustainability ratings and identify areas for improvement. For investors, ESG frameworks guide the selection of stocks by providing comprehensive insights into a company’s sustainability practices.

ESG Data Analytics

ESG data analytics apply advanced computing and statistical modeling techniques to evaluate corporate compliance with environmental, social, and governance metrics. Companies and investment managers collaborate with ESG data providers to ensure their assessments align with established sustainability guidelines. This analysis helps identify the contributions of various industries to carbon emissions and other ESG metrics, supporting comprehensive data collection to measure how effectively corporations mitigate environmental risks and enhance employee relations.

The Importance of ESG Data Analytics

1. Demand for Transparency and Real-World Evidence

In today's market, both investors and business managers seek complete transparency in performance reporting. Governments and NGOs also advocate for increased industry participation in combating climate change. ESG data providers address these demands by conducting complex surveys and using AI and machine learning to verify data authenticity. This ensures that all stakeholders have access to reliable, real-world evidence of corporate sustainability practices.

2. Socio-Economic Cost of Climate Change

Climate change has far-reaching impacts on industries such as agriculture, which many communities depend on for their livelihoods. It also increases health risks and exacerbates poverty in low-income countries. ESG integration services are vital for implementing measures to reduce greenhouse gas emissions and industrial pollutants, addressing these socio-economic challenges and promoting sustainable development.

3. Greenwashing Risks

Greenwashing involves the unethical manipulation of sustainability data to falsely portray a company as environmentally friendly. ESG data providers help identify and avoid greenwashing by maintaining intelligence on corporations that engage in such practices. This ensures that investors and consumers support genuinely sustainable companies, fostering a more transparent and accountable market.

4. Ease of Compliance Auditing

ESG data analytics simplify compliance audits, revealing areas where businesses need to improve. These audits, based on unique industry criteria, ensure precise verification of compliance data legitimacy. Businesses and investors can archive and retrieve critical intelligence using cloud platforms, enhancing the efficiency and accuracy of compliance processes.

5. Promoting Inclusivity at Workplaces

The social pillar of ESG focuses on policies that prevent harassment and promote affirmative action to empower marginalized groups. ESG integration services investigate how companies implement diversity, equity, and inclusion (DEI) goals, which are crucial for improving ESG scores. Investors increasingly prefer companies that prioritize DEI, recognizing its importance for sustainable growth and workplace harmony.

6. Saying No to Corruption and Tax Errors

The governance pillar of ESG emphasizes accounting transparency and anti-corruption measures. Businesses must ensure accurate accounting and comply with regulations to prevent resource misuse. ESG integration services provide validated data to ensure robust governance compliance, crucial during mergers and acquisitions (M&A) and for maintaining investor trust.

7. Risk Analysis for Business Mergers

Corporate ESG scores and sustainability benchmarks are influenced by factors such as controversy exposure, business model innovation, and M&A deals. ESG data providers offer insights into a company’s sustainability metrics before mergers, helping businesses strategize and mitigate ESG compliance risks. This analysis supports informed decision-making, ensuring that M&A activities enhance rather than detract from ESG performance.

8. Meaningful Relationship with Stakeholders

Building meaningful relationships with stakeholders—employees, customers, investors, and suppliers—is essential for a company’s success. ESG integration services help companies address stakeholder concerns with data-backed performance analytics. Compliance with ESG standards builds public trust in the brand, reflecting a company’s commitment to sustainability and ethical practices.

Detailed Insights on ESG Pillars

Environmental Aspect

The environmental aspect of ESG focuses on how companies impact the natural world. This includes their carbon footprint, energy use, waste management, and resource conservation. ESG data analytics help businesses track their environmental impact, enabling them to implement strategies that reduce their carbon emissions and promote the use of renewable energy. For instance, a mining company’s environmental impact differs significantly from that of a fashion company, necessitating industry-specific ESG metrics and compliance standards.

Social Aspect

The social aspect examines how companies manage relationships with employees, suppliers, customers, and communities. This includes factors like labor practices, talent management, product safety, and community engagement. ESG data analytics provide insights into how companies are performing in these areas, helping them improve their social impact. For example, companies are evaluated on their diversity and inclusion policies, employee engagement programs, and community outreach initiatives.

Governance Aspect

Governance refers to the internal system of practices, controls, and procedures a company adopts to govern itself, make effective decisions, comply with the law, and meet the needs of external stakeholders. ESG data analytics assess factors such as board diversity, executive compensation, auditing processes, and shareholder rights. Strong governance practices are crucial for ensuring accountability and transparency, ultimately fostering long-term business sustainability.

Conclusion

The importance of ESG data analytics and sustainability compliance benchmarks is growing, with investment firms increasingly focusing on these aspects for portfolio development. The demand for transparency, the need to combat greenwashing, and the impact of M&A on sustainability metrics drive this trend. Additionally, preventing corruption and discrimination remains a priority for all stakeholders.

SG Analytics, a leading ESG data provider, supports enterprises and investment managers in achieving sustainable development goals through comprehensive benchmarking and scalable research assistance. Their expertise in data analytics helps businesses and investors navigate the complexities of ESG compliance, ensuring robust and sustainable growth.

In summary, ESG data analytics play a crucial role in modern business and investment strategies, providing the transparency and accountability needed to foster sustainable development and ethical practices across industries. As the ESG data market continues to expand, its significance will only grow, driving businesses toward more sustainable and socially responsible operations.
linda0609
1,880,462
CRYPTO SCAM RECOVERY, INVESTORS & VICTIMS OF CRYPTO SCAMS - HIRE TRUSTGEEKS HACK EXPERT
While browsing through Instagram, I came across a post from one of my friends about their successful...
0
2024-06-07T13:21:28
https://dev.to/pauline_lambert_3831f6419/crypto-scam-recovery-investors-victims-of-crypto-scams-hire-trustgeeks-hack-expert-9k4
expert, bitcoin, tacking, hacktoberfest
While browsing through Instagram, I came across a post from one of my friends about their successful Bitcoin investment and substantial profits. Intrigued by their claims, I decided to visit the website mentioned in the post. After creating an account, I contacted the support chat and was provided with a Telegram contact for further assistance.

I connected with a person on Telegram who instructed me to install the MetaMask app and informed me that they would receive my money transfers. I was asked to make four bank transfers, totaling $158,000. Throughout this process, my friend, whose post had initially caught my attention, was also aware of the investment opportunity. However, I later discovered that his account had been hacked, and the hacker was using his page to promote false information about the company, leading me to believe it was legitimate.

After noticing the scam, I was devastated and felt completely lost. I couldn't believe I had fallen for such an elaborate scheme, and the thought of losing such a significant amount of money was overwhelming. I didn't know where to turn or who to trust. Fortunately, a close friend recommended that I seek help from TRUST GEEKS HACK EXPERT.

Desperate and hopeful, I contacted them, explaining my situation in detail. From the very first interaction, I was impressed with TRUST GEEKS HACK EXPERT's professionalism and empathy. They quickly reassured me that I was not alone and that they had successfully handled numerous cases similar to mine. Their confidence and expertise provided me with a glimmer of hope in an otherwise bleak situation.

TRUST GEEKS HACK EXPERT immediately began working on my case. Their team of experts meticulously analyzed the fraudulent website and the transactions I had made. They kept me informed throughout the entire process, explaining each step they were taking and why it was necessary. This transparency helped rebuild my trust, which the scam had shattered.

One of the most impressive aspects of TRUST GEEKS HACK EXPERT's service was their thorough understanding of cryptocurrency and online scams. They knew exactly where to look and how to gather the necessary evidence to build a strong case against the perpetrators. Their technical skills and knowledge of the digital landscape were evident in the speed and efficiency with which they operated.

Within two short days, TRUST GEEKS HACK EXPERT was able to expose the entire operation. They identified the fraudulent website's operators and worked tirelessly to shut it down. Their efforts were relentless, and their determination to recover my funds was unwavering. Despite the complexity of the situation, they never wavered in their commitment to helping me.

The moment I received the notification that my funds had been successfully recovered was one of immense relief and joy. I couldn't believe that the nightmare was finally over and that I had regained the money I thought was lost forever. TRUST GEEKS HACK EXPERT not only restored my financial stability but also my faith in the possibility of justice in the digital world.

I am writing this review with immense joy and gratitude for the exceptional assistance provided by TRUST GEEKS HACK EXPERT. Their dedication, expertise, and compassion were instrumental in navigating a very difficult period in my life. If you find yourself in a similar situation, I recommend reaching out to TRUST GEEKS HACK EXPERT. They are a beacon of hope in the world of online investments, and their ability to recover lost funds is nothing short of miraculous. Trust in their capabilities, and you will not be disappointed.

Web-site. h-t-t-p-s-://trustgeekshackexpert.com/ E-m-a-i-l: info@trustgeekshackexpert.com W-h-a-t-s-A-p-p  +1-7-1-9-4-9-2-2-6-9-3 Tele-Gram: Trust-Geeks-Hack-Expert

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9lkswpr1826l41im6o0u.jpg)
pauline_lambert_3831f6419
1,880,461
CORE ARCHITECTURAL COMPONENTS OF AZURE
There are many cloud computing platforms in the market, but Microsoft Azure is one of the biggest....
0
2024-06-07T13:21:23
https://dev.to/seyibright/core-achitectural-components-of-azure-2bbe
There are many cloud computing platforms in the market, but Microsoft Azure is one of the biggest. Microsoft Azure is the cloud platform of choice at the enterprise level, used by 95 percent of Fortune 500 companies. Microsoft Azure, formerly known as Windows Azure, was released in 2010 by Microsoft as a public cloud services platform where users could build, test, deploy, and manage their applications using Microsoft’s cloud-based data centers. The servers that make up these data centers are scattered across 140 countries and growing. Through Azure, Microsoft offers a host of services on different domains, such as Compute, Database, Content Delivery, and Networking.

What Is Microsoft Azure Used For?

The solutions that enterprises can build on Microsoft Azure include:

• Infrastructure as a Service (IaaS)
• Platform as a Service (PaaS)
• Software as a Service (SaaS)

Azure is a quick, flexible, and—more importantly for enterprises—affordable cloud platform. But what exactly can you do with it? Below are a few examples of Azure solutions:

• App Development: Users can create virtually any web application in Azure
• App Hosting: After development, you can then host the application on Azure
• Software Testing: Users can also test applications on Azure
• Virtual Machine Creation: Add virtual machines to your IT infrastructure using Azure
• Virtual Hard Drives
• Integration and Synchronization: Azure lets you integrate cloud-based apps with your existing IT infrastructure
• Business Intelligence: Azure is a powerful business intelligence tool that allows you to capture and store essential metrics that measure your application’s activities

What Is Microsoft Azure Architecture?

If you’re looking to apply for Azure Architect jobs or perhaps get Azure training, it’s essential first to understand what Microsoft Azure architecture is. Like other cloud platforms, Microsoft Azure depends on a technology called virtualization, which is the emulation of computer hardware in software. This is made possible by the fact that most computer hardware works by following a set of instructions encoded directly into the silicon. By mapping software instructions to emulate hardware instructions, virtualized hardware can use software to function like “real” hardware.

Cloud providers maintain multiple data centers, each one having hundreds (if not thousands) of physical servers that execute virtualized hardware for customers. Microsoft Azure architecture runs on a massive collection of servers and networking hardware, which, in turn, hosts a complex collection of applications that control the operation and configuration of the software and virtualized hardware on these servers. This complex orchestration is what makes Azure so powerful. It ensures that users no longer have to spend their time maintaining and upgrading computer hardware as Azure takes care of it all behind the scenes.

Improved Backup and Disaster Recovery

Microsoft Azure offers flexibility, advanced site recovery, and built-in integration. The cloud-based nature of Azure makes it innately flexible, allowing you to backup your data in virtually any programming language, operating system, or location. Azure also allows you to set backup schedules daily, weekly, monthly, or whatever schedule you please.

On the other hand, Microsoft Azure’s site recovery features can enhance your tape backup systems by providing offsite replication and decades-long data retention with minimal onsite maintenance and operational costs. Azure also stores three backups of your data in three different locations in the data center, plus another three backups in a remote data center.

Develop and Host Web and Mobile Apps

Azure is an ideal platform for developing, hosting, and managing web and mobile apps, making them autonomous and adaptive through features like automatic patch management, Auto Scale, and integration for on-premise apps. Automatic patch management on your virtual machines frees up time you would otherwise spend managing your infrastructure, allowing you to focus on improving your app’s core features. Meanwhile, Auto Scale, a feature integrated into Azure Web Apps, automatically adjusts your resource threshold based on user traffic, saving you money outside of peak hours. Finally, Azure links your web apps to an on-premise app, allowing your stakeholders to access resources within your firewall.

Integration with Active Directory

Microsoft Azure can integrate with your Active Directory, supplementing your existing access and identity capabilities. It also gives your DNS improved security, worldwide reach, and centralized management. One of the things that makes Azure unique is its ability to extend your Active Directory environment’s reach globally while maintaining direct control. No other cloud platform can expand your domain controller’s reach and consolidate Active Directory management. And if you have multiple sites or use cloud apps like Office 365, Azure’s integration with Active Directory gives you a centralized tool to manage and maintain access to your suite of tools.

Microsoft Azure Core Architectural Components

Cloud computing has changed how organizations perform many of their business functions and how applications and systems are built. The primary role of an Azure architect is to ensure that the application delivers business value by meeting your organization’s functional requirements. However, this also entails making sure that the solution is designed to be scalable, reliable, secure, and efficient. While there is no such thing as a one-size-fits-all method for designing architecture on Azure, there are a few best practices that all architects agree will apply to any cloud provider, technology, or architecture. While these best practices only scratch the surface of what you can do on Azure, focusing on them should help you build a strong, flexible, and secure foundation for your application.

Design with Security in Mind

Data is the most precious and potentially damaging (in the wrong hands) component of your organization’s IT infrastructure. Every architect must focus on securing access to your architecture through stringent authentication procedures and protecting your application and data from network threats. Also, you should use encryption protocols to protect your data. Security must be at the forefront of your application’s lifecycle, from pre-development and design to implementation, deployment, and operations. While Azure provides robust protections against threats, such as DDoS attacks and network breaches, architects must still take it upon themselves to integrate security practices into the application itself, their processes, and organization.

Design for Scalable Performance

As mentioned earlier, the beauty of cloud computing and microservices architecture lies in the ease of scaling performance. For your architecture to perform at its best and be scalable, it must be capable of shifting resource capacity according to demand. This is usually done by ensuring that applications can scale dynamically based on traffic or usage activity. An architecture that’s designed with a focus on performance and stability also provides a great experience to customers. Finally, scalability is also tied to your bottom line. An architecture that can adjust according to demand will save you money.

Design for Recoverability

Every architect must anticipate the worst-case scenario of their architecture going down with no way to recover their data. A reliable cloud environment is one that has multiple safeguards against catastrophic failure at all levels. A huge part of this is designing an architecture that’s capable of recovering from failure within the timeframe required by both customers and stakeholders.

Design for Efficiency

Last but not least, be sure to design your cloud infrastructure in a way that makes application development and operation cost-effective. That means identifying potential areas of inefficiency and waste, ensuring that your IT resources are spent wherever you can maximize ROI. It’s also crucial to have a monitoring architecture that lets you detect impending failures or problems before they happen, or, at the very least, before it catches the attention of your customers.
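As a rough illustration of the Auto Scale idea mentioned above, an Azure Monitor autoscale setting conceptually looks like the fragment below (an abbreviated, hypothetical sketch loosely based on the ARM autoscale schema; only a subset of fields is shown, the values are made up, and the target resource URIs are omitted):

```json
{
  "profiles": [
    {
      "name": "default",
      "capacity": { "minimum": "2", "maximum": "10", "default": "2" },
      "rules": [
        {
          "metricTrigger": {
            "metricName": "CpuPercentage",
            "operator": "GreaterThan",
            "threshold": 70,
            "timeAggregation": "Average",
            "timeWindow": "PT10M"
          },
          "scaleAction": {
            "direction": "Increase",
            "type": "ChangeCount",
            "value": "1",
            "cooldown": "PT5M"
          }
        }
      ]
    }
  ]
}
```

The capacity block keeps the instance count between the minimum and maximum bounds, while the rule adds one instance when average CPU exceeds the threshold over the time window, which is the "adjusts resources according to demand" behavior the article describes.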
seyibright
1,880,457
Glam Up My Markup: Beaches
This is a submission for [Frontend Challenge...
0
2024-06-07T13:20:44
https://dev.to/michyking/glam-up-my-markup-beaches-mp9
devchallenge, frontendchallenge, css, javascript
_This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup: Beaches_

## What I Built

I built the web page from the given HTML snippet. The rule is not to edit the HTML source code directly, so CSS and JavaScript were my tools for achieving this design.

- With JavaScript DOM manipulation, I was able to create and insert 3 important elements into the source HTML file:
  1. Search input
  2. Images
  3. Buttons
- To make the page interactive, I hid the beach description and showed it whenever the button was clicked.
- Used CSS and JS to add a little animation to improve the user experience.

My goal was to create an interface that is visually appealing, highly functional, and, most importantly, simple, enabling users to easily discover and learn about different beaches worldwide.

## Demo

To access the live link to my project, [CLICK HERE](https://glamupmymarkupbeaches-michykings-projects.vercel.app/).

![The design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7hnltxndatkjq0h5n37d.png)

![This shows the description modal on click of the button](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jkrr2pvjb6y6x53hd3i9.png)

The link to my GitHub repo can be found [here](https://github.com/michyking/Beaches).

## Journey

The entire process was full of ups and downs, to be honest, but I did enjoy exploring my coding creativity. I encountered some hurdles along the way, but I am particularly proud of the simplicity of my design. As a first-time developer, I am really happy with my results and look forward to taking on more challenges in the future. This project not only expanded my technical skills but also deepened my understanding and appreciation of the complexities of web development. It was a rewarding journey that challenged me to step out of my comfort zone and enabled me to grow as a developer.
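The show/hide interaction described above can be sketched roughly like this (a hypothetical sketch, not taken from the actual repo; the element ids are made up):

```javascript
// Pure helper: compute the next CSS display value for a description panel.
function nextDisplay(current) {
  return current === "none" ? "block" : "none";
}

// In the browser, the wiring might look like this (hypothetical ids):
// const desc = document.getElementById("beach-description");
// const btn = document.getElementById("beach-button");
// desc.style.display = "none"; // hidden initially
// btn.addEventListener("click", () => {
//   desc.style.display = nextDisplay(desc.style.display);
// });
```

Keeping the toggle logic in a pure function like `nextDisplay` makes it testable outside the browser.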
michyking
1,880,420
Buy Verified Paxful Account
https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are...
0
2024-06-07T13:19:21
https://dev.to/jipasin257/buy-verified-paxful-account-2bnn
react, python, ai, devops
https://dmhelpshop.com/product/buy-verified-paxful-account/

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrj680fay8lia7liy4x5.png)

Buy Verified Paxful Account

There are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.

Moreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.

Lastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.

Buy a US verified Paxful account from the best place, dmhelpshop

Why did we declare this website as the best place to buy a US verified Paxful account? Because our company is established to provide all account services in the USA (our main target) and even in the whole world. With this in mind, we create Paxful accounts and customize our accounts professionally with real documents.

If you want to buy a US verified Paxful account, you should contact us quickly, because our accounts are:

Email verified
Phone number verified
Selfie and KYC verified
SSN (social security no.) verified
Tax ID and passport verified
Sometimes driving license verified
MasterCard attached and verified
Used only genuine and real documents
100% access to the account
All documents provided for customer security

What is a Verified Paxful Account?

In today’s expanding landscape of online transactions, ensuring security and reliability has become paramount. Given this context, Paxful has quickly risen as a prominent peer-to-peer Bitcoin marketplace, catering to individuals and businesses seeking trusted platforms for cryptocurrency trading.

In light of the prevalent digital scams and frauds, it is only natural for people to exercise caution when partaking in online transactions. As a result, the concept of a verified account has gained immense significance, serving as a critical feature for numerous online platforms. Paxful recognizes this need and provides a safe haven for users, streamlining their cryptocurrency buying and selling experience.

For individuals and businesses alike, a verified Paxful account emerges as an appealing choice, offering a secure and reliable environment in the ever-expanding world of digital transactions.

Verified Paxful accounts are essential for establishing credibility and trust among users who want to transact securely on the platform. They serve as evidence that a user is a reliable seller or buyer, verifying their legitimacy.

But what constitutes a verified account, and how can one obtain this status on Paxful? In this exploration of verified Paxful accounts, we will unravel the significance they hold, why they are crucial, and shed light on the process behind their activation, providing a comprehensive understanding of how they function.

Why should you buy a Verified Paxful Account?

There are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.

Moreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.

Lastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.

What is a Paxful Account?

Paxful and various other platforms consistently release updates that not only address security vulnerabilities but also enhance usability by introducing new features.

In line with this, our old accounts have recently undergone upgrades, ensuring that if you purchase an old verified Paxful account from dmhelpshop.com, you will gain access to an account with an impressive history and advanced features. This ensures a seamless and enhanced experience for all users, making it a worthwhile option for everyone.

Is it safe to buy Paxful Verified Accounts?

Buying on Paxful is a secure choice for everyone. However, the level of trust amplifies when purchasing from Paxful verified accounts. These accounts belong to sellers who have undergone rigorous scrutiny by Paxful. Hence, purchasing from a Paxful verified account ensures a high level of credibility and utmost reliability.

PAXFUL, a widely known peer-to-peer cryptocurrency trading platform, has gained significant popularity as a go-to website for purchasing Bitcoin and other cryptocurrencies. It is important to note, however, that while Paxful may not be the most secure option available, its reputation is considerably less problematic compared to many other marketplaces.

This brings us to the question: is it safe to purchase Paxful Verified Accounts? Top Paxful reviews offer mixed opinions, suggesting that caution should be exercised. Therefore, users are advised to conduct thorough research and consider all aspects before proceeding with any transactions on Paxful.

How Do I Get a 100% Real Verified Paxful Account?

Paxful, a renowned peer-to-peer cryptocurrency marketplace, offers users the opportunity to conveniently buy and sell a wide range of cryptocurrencies. Given its growing popularity, both individuals and businesses are seeking to establish verified accounts on this platform.

However, the process of creating a verified Paxful account can be intimidating, particularly considering the escalating prevalence of online scams and fraudulent practices. This verification procedure necessitates users to furnish personal information and vital documents, posing potential risks if not conducted meticulously.

In this comprehensive guide, we will delve into the necessary steps to create a legitimate and verified Paxful account. Our discussion will revolve around the verification process and provide valuable tips to safely navigate through it.

Moreover, we will emphasize the utmost importance of maintaining the security of personal information when creating a verified account. Furthermore, we will shed light on common pitfalls to steer clear of, such as using counterfeit documents or attempting to bypass the verification process.

Whether you are new to Paxful or an experienced user, this guide aims to equip everyone with the knowledge they need to establish a secure and authentic presence on the platform.

Benefits of Verified Paxful Accounts

Verified Paxful accounts offer numerous advantages compared to regular Paxful accounts. One notable advantage is that verified accounts contribute to building trust within the community.

Verification, although a rigorous process, is essential for peer-to-peer transactions. This is why all Paxful accounts undergo verification after registration. When customers within the community possess confidence and trust, they can conveniently and securely exchange cash for Bitcoin or Ethereum instantly.

Paxful accounts, trusted and verified by sellers globally, serve as a testament to their unwavering commitment towards their business or passion, ensuring exceptional customer service at all times. Headquartered in Africa, Paxful holds the distinction of being the world’s pioneering peer-to-peer bitcoin marketplace. Spearheaded by its founder, Ray Youssef, Paxful continues to lead the way in revolutionizing the digital exchange landscape.

Paxful has emerged as a favored platform for digital currency trading, catering to a diverse audience. One of Paxful’s key features is its direct peer-to-peer trading system, eliminating the need for intermediaries or cryptocurrency exchanges. By leveraging Paxful’s escrow system, users can trade securely and confidently.

What sets Paxful apart is its commitment to identity verification, ensuring a trustworthy environment for buyers and sellers alike. With these user-centric qualities, Paxful has successfully established itself as a leading platform for hassle-free digital currency transactions, appealing to a wide range of individuals seeking a reliable and convenient trading experience.

How does Paxful ensure risk-free transactions and trading?

Engage in safe online financial activities by prioritizing verified accounts to reduce the risk of fraud. Platforms like Paxful implement stringent identity and address verification measures to protect users from scammers and ensure credibility.

With verified accounts, users can trade with confidence, knowing they are interacting with legitimate individuals or entities. By fostering trust through verified accounts, Paxful strengthens the integrity of its ecosystem, making it a secure space for financial transactions for all users.

Experience seamless transactions by obtaining a verified Paxful account. Verification signals a user’s dedication to the platform’s guidelines, leading to the prestigious badge of trust. This trust not only expedites trades but also reduces transaction scrutiny. Additionally, verified users unlock exclusive features enhancing efficiency on Paxful.

In the ever-changing realm of online trading and transactions, selecting a platform with minimal fees is paramount for optimizing returns. This choice not only enhances your financial capabilities but also facilitates more frequent trading while safeguarding gains.

Examining the details of fee configurations reveals Paxful as a frontrunner in cost-effectiveness. Acquire a verified level-3 USA Paxful account from usasmmonline.com for a secure transaction experience. 
Invest in verified Paxful accounts to take advantage of a leading platform in the online trading landscape.\n\n \n\nHow Old Paxful ensures a lot of Advantages?\n\nExplore the boundless opportunities that Verified Paxful accounts present for businesses looking to venture into the digital currency realm, as companies globally witness heightened profits and expansion. These success stories underline the myriad advantages of Paxful’s user-friendly interface, minimal fees, and robust trading tools, demonstrating its relevance across various sectors.\n\nBusinesses benefit from efficient transaction processing and cost-effective solutions, making Paxful a significant player in facilitating financial operations. Acquire a USA Paxful account effortlessly at a competitive rate from usasmmonline.com and unlock access to a world of possibilities. Buy Verified Paxful Account.\n\nExperience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.\n\nPaxful’s verified accounts not only offer reliability within the trading community but also serve as a testament to the platform’s ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today. Buy Verified Paxful Account.\n\n \n\nWhy paxful keep the security measures at the top priority?\nIn today’s digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.\n\nSafeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. 
Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all. Buy Verified Paxful Account.\n\nConclusion\nInvesting in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin. Buy Verified Paxful Account.\n\nThe initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.\n\nIn conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller’s history and reviews before making any transactions.\n\nMoreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions. Buy Verified Paxful Account.\n\n \n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 ‪(980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com"
jipasin257
1,880,418
Buy verified cash app account
https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash...
0
2024-06-07T13:16:40
https://dev.to/jipasin257/buy-verified-cash-app-account-1cmk
webdev, javascript, beginners, tutorial
"https://dmhelpshop.com/product/buy-verified-cash-app-account/\n![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39l6em6u2rpbu58a8hav.png)\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts.  With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 ‪(980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n"
jipasin257
1,880,417
CSS Animations
CSS Animations An animation lets an element gradually change from one style to another....
0
2024-06-07T13:15:19
https://www.devwares.com/blog/css-animations/
css, webdev, beginners, animation
## CSS Animations An [animation](https://www.devwares.com/tailwindcss/classes/tailwind-animation/) lets an element gradually change from one style to another. CSS allows the animation of HTML elements without using JavaScript or Flash. You can change as many CSS properties as you want, as many times as you want. To use CSS animation, you must first specify some keyframes for the animation. Keyframes hold what [styles](https://www.devwares.com/tailwindcss/classes/tailwind-list-style-type/) the element will have at certain times. ## The @keyframes Rule When you specify CSS styles inside the @keyframes rule, the animation will gradually change from the current style to the new style at certain times. To get an animation to work, you must bind the animation to an element. The following example binds the "example" animation to the `<div>` element. The animation will last for 4 seconds, and it will gradually change the background color of the `<div>` element from "red" to "yellow": **CSS Code:** ```css /* The animation code */ @keyframes example { from { background-color: red; } to { background-color: yellow; } } /* The element to apply the animation to */ div { width: 100px; height: 100px; background-color: red; animation-name: example; animation-duration: 4s; } ``` The animation-duration property defines how long an animation should take to complete. If the animation-duration property is not specified, no animation will occur, because the default value is 0s (0 seconds). 
The following example will change the background color of the `<div>` element when the animation is 25% complete, 50% complete, and again when the animation is 100% complete: **CSS Code:** ```css /* The animation code */ @keyframes example { 0% { background-color: red; } 25% { background-color: yellow; } 50% { background-color: blue; } 100% { background-color: green; } } /* The element to apply the animation to */ div { width: 100px; height: 100px; background-color: red; animation-name: example; animation-duration: 4s; } ``` The following example will change both the background color and the position of the `<div>` element when the animation is 25% complete, 50% complete, and again when the animation is 100% complete: **CSS Code:** ```css /* The animation code */ @keyframes example { 0% { background-color: red; left: 0px; top: 0px; } 25% { background-color: yellow; left: 200px; top: 0px; } 50% { background-color: blue; left: 200px; top: 200px; } 75% { background-color: green; left: 0px; top: 200px; } 100% { background-color: red; left: 0px; top: 0px; } } /* The element to apply the animation to */ div { width: 100px; height: 100px; position: relative; background-color: red; animation-name: example; animation-duration: 4s; } ``` ## Delay an Animation The animation-delay property specifies a delay for the start of an animation. The following example has a 2-second delay before starting the animation: **CSS Code:** ```css div { width: 100px; height: 100px; position: relative; background-color: red; animation-name: example; animation-duration: 4s; animation-delay: 2s; } ``` Negative values are also allowed. If using negative values, the animation will start as if it had already been playing for N seconds. 
In the following example, the animation will start as if it had already been playing for 2 seconds: **CSS Code:** ```css div { width: 100px; height: 100px; position: relative; background-color: red; animation-name: example; animation-duration: 4s; animation-delay: -2s; } ``` ## Set How Many Times an Animation Should Run The animation-iteration-count property specifies the number of times an animation should run. The following example will run the animation 3 times before it stops: **CSS Code:** ```css div { width: 100px; height: 100px; position: relative; background-color: red; animation-name: example; animation-duration: 4s; animation-iteration-count: 3; } ``` ## Run Animation in Reverse Direction or Alternate Cycles The animation-direction property specifies whether an animation should be played forwards, backwards, or in alternate cycles. The animation-direction property can have the following values: - normal - The animation is played as normal (forwards). This is the default - reverse - The animation is played in the reverse direction (backwards) - alternate - The animation is played forwards first, then backwards - alternate-reverse - The animation is played backwards first, then forwards The following example will run the animation in the reverse direction (backwards): **CSS Code:** ```css div { width: 100px; height: 100px; position: relative; background-color: red; animation-name: example; animation-duration: 4s; animation-direction: reverse; } ``` ## Specify the Fill Mode for an Animation CSS animations do not affect an element before the first keyframe is played or after the last keyframe is played. The animation-fill-mode property can override this behavior. The animation-fill-mode property specifies a style for the target element when the animation is not playing (before it starts, after it ends, or both). The animation-fill-mode property can have the following values: - none - Default value. 
The animation will not apply any styles to the element before or after it is executing - forwards - The element will retain the style values that are set by the last keyframe (depends on animation direction and animation iteration count) - backwards - The element will get the style values that are set by the first keyframe (depending on animation direction), and retain this during the animation-delay period - both - The animation will follow the rules for both forwards and backwards, extending the animation properties in both directions The following example lets the `<div>` element retain the style values from the last keyframe when the animation ends: **CSS Code:** ```css div { width: 100px; height: 100px; background: red; position: relative; animation-name: example; animation-duration: 3s; animation-fill-mode: forwards; } ``` The following example lets the `<div>` element get the style values set by the first keyframe before the animation starts (during the animation-delay period). **CSS Code:** ```css div { width: 100px; height: 100px; background: red; position: relative; animation-name: example; animation-duration: 3s; animation-delay: 2s; animation-fill-mode: backwards; } ``` The following example lets the `<div>` element get the style values set by the first keyframe before the animation starts, and retain the style values from the last keyframe when the animation ends. **CSS Code:** ```css div { width: 100px; height: 100px; background: red; position: relative; animation-name: example; animation-duration: 3s; animation-delay: 2s; animation-fill-mode: both; } ```
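Putting several of these properties together, here is a minimal sketch of one possible combined rule (it uses only the properties discussed above; `example` is the same keyframe name used throughout):

```css
/* A combined sketch: delay the animation by 1s, run it 3 times,
   alternate its direction, and keep the final keyframe's styles. */
@keyframes example {
  from { background-color: red; }
  to   { background-color: yellow; }
}

div {
  width: 100px;
  height: 100px;
  background-color: red;
  animation-name: example;
  animation-duration: 4s;
  animation-delay: 1s;
  animation-iteration-count: 3;
  animation-direction: alternate;
  animation-fill-mode: forwards;
}
```

Because the iteration count is odd and the direction alternates, the third cycle plays forwards, so `animation-fill-mode: forwards` leaves the element yellow when the animation ends.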
hypercode
1,880,416
7 tricks of successful communication with candidates
Nowadays attracting good candidates is a pretty hard process, thereby, recruiters often face the...
0
2024-06-07T13:12:11
https://jetthoughts.com/blog/7-tricks-of-successful-communication-with-candidates-startup-recruiting/
startup, recruiting
*Nowadays attracting good candidates is a pretty hard process, so recruiters often struggle with vacancies that stay open for a long time. That happens because HR specialists don’t pay enough attention to the initial stage of selection and communication.* So, how can you draw a candidate’s attention to your company from the very beginning? How do you spark their interest and immediately make a good impression? ![Photo by [Adam Jang](https://unsplash.com/@adamjang) on [Unsplash](https://unsplash.com/s/photos/communication)](https://raw.githubusercontent.com/jetthoughts/jetthoughts.github.io/master/static/assets/img/blog/7-tricks-of-successful-communication-with-candidates-startup-recruiting/file_0.jpeg) Recently we conducted a small investigation and identified the main tricks of successful communication with candidates: ## ***1. Write concisely, simply, and to the point*** When creating messages to interest new candidates, always follow these three rules: be brief, be specific, and keep it simple. First of all, the shorter your message, the better. So, try to fit all the necessary information into a few sentences. Secondly, go straight to the point and let every word matter. Thirdly, write simply and clearly: never write text that is difficult to understand. ## **2. Be professional and literate** Even in text messages you still have to adhere to professional standards. Your messages should express respect and professional interest. Don’t try to make friends by touching on personal topics. Also, please check your text several times before sending it. ## **3. Stand out and catch attention** If you want your message to be read and not closed right away, you need to arouse the candidate’s interest. Your words should stand out from other cold messages. Create catchy subject lines and bright, non-standard text content: something that will definitely show that your company is special and creative. ## **4. 
Always give feedback as soon as possible** When people make an effort to get a particular position, they want to receive feedback on their interviews or test tasks as soon as possible. Even if you haven’t made the final decision yet, always try to communicate with the candidate about their achievements at every stage. Let them know what they did well and what they need to improve. ## **5. Follow up but don’t spam** If you didn’t receive an answer to your letter, you can and should send a reminder. But remember that the purpose of a follow-up message is to draw someone’s attention to the previous letter. So, don’t repeat the same text again and again; just ask if the person is still interested in your proposal. However, don’t overdo it: spam and persistent attention are always annoying. ## **6. Personalize and sound like a human** Starting a conversation with a candidate, you get an invitation to their personal space. This is a reason to avoid templates and jargon. So, in each subsequent message, pay attention to what the person said or asked you previously. Use a natural and friendly style of communication. Also, try to personalize each of your messages. It will make your words more alive and unique. In this case, highlight any comments about the candidate’s personal facts, skills, or experience: something that will make it clear that you really did some research before connecting with them. ## **7. Consider the relevance** A really bad idea is trying to connect with everyone you can. It will work just like spam. Carefully select candidates before starting a conversation: review their profile information such as experience, skills, previous projects, etc. ## Conclusion In fact, a recruiter who doesn’t interest candidates from the beginning will not succeed. So, select carefully and then create non-standard, catchy messages following these 7 tricks. Good luck! 
**Viktoriia** is a Recruiter at [JetThoughts](https://www.jetthoughts.com/). Follow her on [LinkedIn](https://www.linkedin.com/in/viktoriia-tsvyk-5a6647222/). *If you enjoyed this story, we recommend reading our [latest tech stories](https://jtway.co/latest) and [trending tech stories](https://jtway.co/trending).*
jetthoughts_61
1,880,411
Time management tips for professionals
In today's fast-paced world, managing time effectively is crucial for professionals to stay...
0
2024-06-07T13:09:51
https://dev.to/techstuff/time-management-tips-for-professionals-469
timemanagement, tips, productivity
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b59byvivimz6btbgg727.png) In today's fast-paced world, managing time effectively is crucial for professionals to stay productive and maintain a healthy work-life balance. Whether you're a busy executive, a freelancer, or a student juggling multiple responsibilities, mastering time management can help you achieve your goals while reducing stress. **Here are some basic but effective suggestions to help you make the most of your time.** **Set Clear Goals:** Begin by setting precise and achievable objectives for yourself. Think about what you want to accomplish in your professional and personal life, and break these goals down into smaller, actionable steps. Having a clear sense of direction will help you prioritize your tasks and stay focused on what truly matters. **Prioritize Tasks:** Not all tasks are equally important. Identify the most critical tasks that will have the greatest impact on your goals, and tackle them first. Use techniques like the ABC method or the Eisenhower Matrix to categorize your tasks based on their urgency and importance. This will help you focus your time and energy where it's needed most. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tymjlhygyln69lppgq6r.jpg) **Create a Schedule:** Establishing a schedule can help you structure your day and make better use of your time. Use a calendar or planner to block out time for specific tasks, meetings, and activities. Be realistic about how long each task will take, and try to stick to your schedule as much as possible. Don't forget to schedule breaks to rest and recharge throughout the day. **Reduce Distractions:** Distractions can interfere with your productivity and waste time that is essential. Identify and minimize common workplace distractions, such as social media, email notifications, and noisy coworkers. 
Consider using tools or apps to block distracting websites or to limit your screen time. By staying focused on your tasks, you'll be able to accomplish more in less time. **Learn to Say No:** As a professional, you may face numerous demands on your time and attention. While it's essential to be helpful and collaborative, it's also crucial to know your limits. Learn to say no to requests or tasks that don't align with your goals or priorities. Remember that saying no is not selfish—it's about protecting your time and energy for what truly matters. **Break Tasks into Manageable Chunks:** Large projects or tasks can feel overwhelming, leading to procrastination and stress. Break big tasks down into smaller, more manageable chunks, and tackle them one step at a time. To keep yourself motivated and to monitor your progress, set attainable goals along the way. Recognize your accomplishments, regardless of how small they may seem. **Take Care of Yourself:** Despite your hectic schedule, never forget to give self-care first priority. Schedule time for physical activity, meditation, and quality time with loved ones—activities that are good for your body and mind. Get enough sleep each night to recharge your batteries and maintain your focus during the day. Taking care of yourself is essential for staying productive and resilient in the long run. **Evaluate and Make Adjustments:** Continually assess your time management techniques to see what is and is not effective. Be flexible and willing to adjust your approach based on your evolving needs and priorities. Try out different methods and resources to see what suits you the best. Recall that effective time management is a skill that can be acquired and developed through experience. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l15u0x7qzfaxgb1mygal.jpg) In conclusion, effective time management is essential for professionals to succeed in today's fast-paced world. 
By setting clear goals, prioritizing tasks, creating a schedule, minimizing distractions, learning to say no, breaking tasks into manageable chunks, taking care of yourself, and regularly reviewing your strategies, you can maximize your time and accomplish your objectives with greater satisfaction and less stress.
swati_sharma
1,880,408
Two Ways to Add Pride Month Flair to Your Website
Discover two simple techniques to add support for Pride Month on your website: adding pride-themed widgets from pride.codes and transforming input field borders into vibrant rainbow gradients.
0
2024-06-07T13:09:05
https://dev.to/yordiverkroost/two-ways-to-add-pride-month-flair-to-your-website-fkm
pride, css, design
--- title: Two Ways to Add Pride Month Flair to Your Website published: true description: Discover two simple techniques to add support for Pride Month on your website: adding pride-themed widgets from pride.codes and transforming input field borders into vibrant rainbow gradients. tags: pride, css, design cover_image: https://bear-images.sfo2.cdn.digitaloceanspaces.com/yordi-1717765252-0.png # Use a ratio of 100:42 for best results. # published_at: 2024-06-07 13:09 +0000 --- June is Pride Month. For us - bloggers and creators on the small/indie web (or actually, just anyone who owns a website), there are multiple ways to show your support for the pride community. I'll show you two ways in which I've customized my website for Pride Month. Let's dive in! # pride.codes The website [pride.codes](https://pride.codes/) created some widgets you can easily add to your website, like a corner-piece or a horizontal bar. They make it very easy to add one of those to your own page on the web. My blog has a natural horizontal separator between the main content and the footer, which I decided to Prideify. I did the following, based on the simple steps outlined on the [Pride bars](https://pride.codes/sample/bars/) page (it's method 3): 1. Add the required stylesheet link to the `<head>` of my website: `<link rel="stylesheet" href="https://cdn.pride.codes/css/bar_helpers.css">` 2. Add the pride strip in the footer section of my website (you can put this anywhere you want): `<div class="pride-strip">` That's it, you're done! Simple as that! On my website, it looks as follows (*text continues below the image*): <div style="border: 1px solid white;"> ![Example of a Pride bar on a website](https://bear-images.sfo2.cdn.digitaloceanspaces.com/yordi-1717765252-0.png) </div> # Underlined text input field A few weeks ago, I styled the input fields on my website to be more in line with my website's theme. 
Basically, I gave input fields the same color as my website's background and underlined the field with a bottom border. In its simplest form, this looks as follows: {% codepen https://codepen.io/Froodooo/pen/OJYjJyr %} The bottom border is an excellent candidate to update for Pride Month by changing the line color from white to a rainbow gradient. I used a combination of `border-image` and `border-image-slice` to make this work. I just added these two declarations to the style of my text input fields: - `border-image: linear-gradient(to left, #b827fc 0%, #2c90fc 25%, #b8fd33 50%, #fec837 75%, #fd1892 100%)` - `border-image-slice: 1` The `linear-gradient` takes the rainbow colors and blends them together from right to left. The value for `border-image-slice` ensures the gradient is used for the border correctly. The result looks as follows: {% codepen https://codepen.io/Froodooo/pen/rNgzNeM %}
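Putting the pieces together, a complete rule for such an input might look like the sketch below. The `input[type="text"]` selector and the background/text colors are assumptions for illustration, not copied from the CodePens above; only the gradient stops come from the article:

```css
/* Flat text input whose only visible border is a rainbow underline.
   Everything except the gradient stops is an illustrative assumption. */
input[type="text"] {
  background-color: #1e1e1e;   /* assumed dark page background */
  color: #ffffff;
  border: none;                /* drop the default box border */
  border-bottom: 2px solid;    /* keep only a 2px underline */
  /* rainbow gradient, running right to left */
  border-image: linear-gradient(to left, #b827fc 0%, #2c90fc 25%, #b8fd33 50%, #fec837 75%, #fd1892 100%);
  border-image-slice: 1;       /* fill the border area with the gradient */
  outline: none;
}
```

Note that `border-image-slice: 1` is needed because a gradient has no intrinsic dimensions to slice; without it the browser falls back to slicing at 100% and the bottom border renders empty.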
yordiverkroost