id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,833,706 | Unveiling the Pinnacle Alternatives to BrowserStack for Efficient Software Testing | Ensuring software performs impeccably across various browsers and devices in the ever-evolving... | 0 | 2024-04-25T06:15:03 | https://www.technology.org/2024/03/19/unveiling-the-pinnacle-alternatives-to-browserstack-for-efficient-software-testing/ | testing, mobile, automation, development | Ensuring software performs impeccably across various browsers and devices in the ever-evolving digital era is paramount. This is where software testing tools come into the limelight, providing the essential service of automating and streamlining the testing process to validate that software operates without a hitch in every possible scenario. The benefits of utilizing software testing tools are multi-faceted, ranging from time-saving via automation to [ensuring a seamless user experience](https://www.headspin.io/blog/user-experience-testing-a-complete-guide) across all platforms.
## What are Software Testing Tools?
Software testing tools are specialized applications or frameworks used by testers and developers to ensure the quality of software and applications. These tools assist in performing various testing activities, such as identifying issues, evaluating system performance, and ensuring the software behaves as expected in all possible environments and scenarios. Some widely recognized software testing tools include Selenium, Apache JMeter, and, notably, BrowserStack.
## Enhanced Benefits of Leveraging Software Testing Tools
Ensuring software operates seamlessly and effectively is indispensable in safeguarding user satisfaction and loyalty. Software testing tools play a pivotal role in extending an array of advantages to developers, testers, and enterprises by bringing automation and accuracy to the testing process.
- **Accuracy and Reliability**: Software testing tools offer precision that is nearly impossible to achieve through manual testing. Automated tests execute the same steps with unerring accuracy, avoiding the pitfalls of human error and ensuring that the software is rigorously validated under consistent conditions every time.
- **Efficiency and Time Management**: Automation enables testing to be conducted swiftly and concurrently across various devices and environments, thus significantly reducing the time to market for software products. This allows for more frequent releases and updates, enabling organizations to stay competitive and responsive to market demands.
- **Cost-Effectiveness**: While there is an initial investment in software testing tools, the savings derived from reduced testing time, early detection of issues, and mitigation of failure risks often outweigh the initial outlay. Automating repetitive but necessary testing tasks allows human testers to focus on more high-value work, driving better utilization of resources and capital.
- **Scalability**: Automated testing tools can effortlessly adapt to varying scales of testing requirements, whether deploying code changes for small sections of an application or launching extensive updates for complex software. This ability to scale ensures that the software testing process remains robust and relevant even as the software and user base evolve.
- **Comprehensive Testing**: Software testing tools enable exhaustive testing, ensuring that every aspect of the software is validated under many scenarios, including edge cases that might be overlooked in manual testing. This ensures comprehensive coverage and robust testing, ascertaining that every potential issue is identified and addressed.
- **Regression Testing and Continuous Development**: In the current age of continuous development and delivery, automated testing tools facilitate easy regression testing. Whenever a change is made, the tool can quickly identify and address unintended consequences or disruptions, ensuring continuous development does not compromise software quality.
- **Quality Assurance**: The assurance of quality is intrinsic to software testing tools. They ensure that every release meets a defined quality standard, reinforcing user trust and satisfaction. The consistent performance, minimized bugs, and optimized functionality lead to a superior user experience, enhancing the reputation and reliability of the product and the brand.
- **Global and Local Testing**: With globalized testing, software testing tools facilitate testing in varied locations, under various network conditions, and across different devices, providing a truly global testing environment. This ensures that software delivers consistent and optimal performance for all users, regardless of geographical and technological variations.
- **Performance Testing**: Understanding how software behaves under different conditions and loads is vital. Software testing tools enable performance testing, ensuring that the software can handle expected user loads, peak times, and more without degradation in performance and user experience.
- **Enhanced Security**: In an age where data breaches and cyber-attacks are rampant, ensuring the security of applications and data is paramount. Automated testing tools can rigorously test security protocols and defenses, ensuring that software is functional and secure against potential threats.
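To make the idea of automated, repeatable checks concrete, here is a minimal sketch in Python. The function and values are hypothetical, and a real suite would use a framework such as unittest or pytest, but the core benefit is visible even at this scale: the same checks run identically on every execution.

```python
def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

# Each check runs identically every time, on every machine: the
# consistency that manual testing cannot guarantee.
checks = [
    ((100.0, 20), 80.0),   # typical case
    ((100.0, 0), 100.0),   # edge case: no discount
    ((50.0, 10), 45.0),    # another typical case
]
all_passed = all(apply_discount(*args) == expected for args, expected in checks)
print(all_passed)  # True when every check passes
```

Wired into a CI pipeline, checks like these run on every commit, which is what enables the regression testing and continuous delivery benefits described above.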
## Top Alternatives to BrowserStack
Given the significance of robust testing, opting for a tool that aligns with your organizational requirements is vital. While BrowserStack is a known player in the realm of software testing, exploring its alternatives to discern a more fitting option for your specific needs is prudent.
- **HeadSpin**
HeadSpin, with its comprehensive device cloud, offers a considerable range of testing solutions that are revered for enabling seamless user experiences across multiple platforms. The platform excels in providing real-world, actionable insights that aid in optimizing performance across every network and device. With a global device infrastructure and a feature-rich platform, HeadSpin stands out as a compelling alternative, ensuring accurate and efficient testing in real-world conditions.
- **Sauce Labs**
Sauce Labs offers a cloud-based testing platform, allowing users to run tests in the cloud on various browsers and operating systems, eradicating the need for internal labs and reducing infrastructure costs. Sauce Labs is recognized for its secure testing, wide device and browser coverage, and alignment with DevOps methodologies, facilitating continuous testing and delivery.
- **LambdaTest**
LambdaTest is renowned for its cloud-based cross-browser testing capabilities, enabling testing on 2,000+ browsers and operating system environments. It’s user-friendly and offers robust support and many integrations, making it a solid choice for those looking to ensure their web applications function flawlessly across all browsers and devices.
- **CrossBrowserTesting**
CrossBrowserTesting ensures that software delivers a consistent user experience for all users, regardless of the technology utilized, with a focus on testing websites and mobile applications across various browsers, operating systems, and devices. It supports automated, visual, and manual testing, catering to various testing requirements.
- **Kobiton**
Kobiton takes device testing to the next level by offering a platform that supports real and virtual testing environments. With a focus on mobile device testing, it allows users to test on real devices in the cloud, ensuring that applications run smoothly across all device types and networks.
## Conclusion
Opting for a software testing tool that synchronizes with your specific needs is crucial in maintaining your software or application’s quality, reliability, and user satisfaction. Each [alternative to BrowserStack](https://www.headspin.io/why-choose-headspin/headspin-vs-browserstack) has unique features and capabilities, accommodating diverse testing needs and organizational requirements.
Whether you are inclined towards the globally-equipped HeadSpin or the flexible LambdaTest, ensuring your choice aligns with your enterprise test automation needs and other testing prerequisites is paramount. Exploring and evaluating each alternative against your unique use case ensures that your software testing is efficient and effective, safeguarding impeccable software performance and a stellar user experience.
_Article resource: This article was originally published on https://www.technology.org/2024/03/19/unveiling-the-pinnacle-alternatives-to-browserstack-for-efficient-software-testing/_ | abhayit2000 |
1,833,843 | How Web Applications Can Help You Succeed Online | A post by digital | 0 | 2024-04-25T09:15:32 | https://dev.to/digital647/how-web-applications-can-help-you-succeed-online-212i | digital647 | ||
1,833,928 | Engaging Your Audience: Mastering Elearning Voice Over | Enhance your elearning experience with engaging voice overs. Discover how elearning voice over can... | 0 | 2024-04-25T10:11:54 | https://dev.to/novita_ai/engaging-your-audience-mastering-elearning-voice-over-h4p | ai, audio, api, voiceover | Enhance your elearning experience with engaging voice overs. Discover how elearning voice over can elevate your content on our blog!
## Key Highlights
1. **Understanding E-Learning**: E-learning, comprising lectures, videos, quizzes, and simulations, dominates modern education, with technology-based methods on the rise.
2. **Effective Voice-Overs**: Key strategies include using music, selecting appropriate tones, and summarizing or expanding on content.
3. **Advantages of Voiceover Software**: While human voiceovers offer a personal touch, AI platforms like novita.ai provide consistency, flexibility, and cost-effectiveness.
4. **Empowering Learners**: Quality voiceovers enhance understanding and accessibility for individuals with disabilities.
5. **Selecting the Perfect Voice**: Choosing the right voice involves considering the audience, pacing, and clarity, with tools like novita.ai simplifying the process.
## Introduction
In today's digital age, e-learning has become a cornerstone of education and training, offering a flexible and accessible way for learners to acquire knowledge and skills. However, creating effective e-learning content goes beyond just compiling information; it requires mastering the art of engaging your audience. One crucial aspect of this is voice-over narration, which can significantly impact the learner's experience. In this article, we'll explore the key strategies for using voice-overs effectively in e-learning content, the advantages of both human and AI-generated voiceovers, and how novita.ai's cutting-edge technology is revolutionizing the e-learning landscape.
## What is Elearning?
E-learning is a digital course that includes various elements like lectures, videos, quizzes, simulations, and interactive components. It can be broadly defined as electronically delivered learning content. E-learning courses are typically managed through a learning management system (LMS), which helps organizations handle training events efficiently.
Online learning is gaining more popularity, with technology-based methods like e-learning making up 80 percent of learning hours in 2020, as per ATD's 2021 State of the Industry report. E-learning is generally divided into asynchronous and synchronous categories.

## Types of Elearning voice overs
There are two main categories of elearning voice overs: educational voice over and corporate voice over. Educational voice over finds application in illustrative YouTube videos, podcasts, audiobooks, university audio online courses, and as an aid for the disabled, to name a few. A more formal, professional voice with an authoritative and welcoming tone, delivered by a skilled voice actor, is well suited to an academic elearning video.
Corporate voice over, on the other hand, is usually in the form of user manuals, video/audio modules, and presentations. For example, there is a lot of information a new hire or staff needs to absorb when they start with an organization, and as they continue on their professional development journey.
## How To Use Voice-Overs In ELearning The Right Way
As mentioned above, you could ruin the learning experience by investing in low-quality voice-overs. It's critical to understand how to use voice-overs correctly in eLearning content so your audience will return.
### 1. Use Music
To use voice-overs effectively in eLearning, complement spoken content with music to engage learners and enhance the learning experience. This is particularly beneficial for visually impaired students as it offers audio descriptions. Avoid including distracting lyrics to keep the focus on learning.
### 2. Tones
When hiring a voice actor, consider their tone. A conversational voice may engage students more than an authoritative one, depending on your eLearning content. Serious topics require a serious voice. Evaluate the pros and cons of each tone for your audience and goals to make the best choice for your eLearning voice-over.
### 3. Summarize Or Expand
ELearning content often uses voice-over to summarize detailed points on screen or to discuss concise points further. Expanding on information is preferred as it fosters conversation while keeping visual notes for students to reference.
## Advantages of Using Voiceover Software Over Live Voice Recordings
Human voiceovers provide a personal touch and emotions that AI audio lacks. However, recordings done by untrained individuals may contain pronunciation errors and awkward pauses. Creating multilingual content with live recordings can be challenging. Professional voice talent for eLearning voiceovers offers benefits such as conveying subtleties and boosting engagement. Using a human voice for eLearning voiceovers allows for greater flexibility and adaptability in delivering complex information, as voice actors can adjust their tone and pacing to suit the needs of the material and the audience.

AI voiceover software offers more consistency, reliability, and flexibility compared to traditional methods. It is more cost-effective, requires less time and expertise, and offers a wide selection of voices in different languages, accents, and styles. Users have the ability to customize voiceovers by adjusting pitch, speed, volume, emphasis, and pronunciation. Furthermore, some software enables the creation of custom voices, including replicating existing voices or collaborating with professional voice actors. With the improvement in synthetic speech quality, AI voiceover software is increasingly preferred over human voiceovers.
## Reasons for using novita.ai for voice over
In the realm of eLearning, captivating your audience is paramount. Mastering the art of eLearning voice over can make all the difference in creating an immersive learning experience.
When it comes to voice over, novita.ai stands out for its cutting-edge technology and the APIs it provides. With novita.ai, you can elevate your eLearning content with professional-grade voice overs that enhance engagement and comprehension.
### Create a high-quality course quickly!
With novita.ai [txt2speech](https://novita.ai/product/txt2speech), you can easily convert text-based educational content into a more convenient audio format, make necessary edits, all with just one click, in real time.

### Create engaging experiences by utilizing genuine voices
Use customization features [voice-cloning](https://novita.ai/product/voice-cloning-instant) to add more character to your elearning voice and deliver engaging experiences for online learners.

### Offer APIs to developers to further enhance capabilities
As eLearning continues to evolve, the demand for efficient and engaging voice over solutions is on the rise. novita.ai emerges as a game-changer in this realm, offering unparalleled [APIs](https://docs.novita.ai/novita-ai) that cater to the diverse needs of content creators and learners alike.

## Begin creating voice overs
Creating eLearning content can be a challenging and time-consuming task, especially for seasoned professionals. Thankfully, novita.ai simplifies this process by offering a wide selection of male and female AI voices in various languages and accents, including TTS options. Additionally, you can seamlessly incorporate videos or slides, adjusting the timing to synchronize with your content.
### Creating a Voiceover for Elearning Step by Step
Step 1. Head to novita.ai txt2speech.

Step 2. Type in your e-learning script, or if you have a pre-written version of any training material, copy-paste it into novita.ai's text editor.
Step 3. Choose a suitable voice from novita.ai's extensive library of AI voices across different tonalities.

Step 4. Click the 'play' button to render the voice over for your Elearning course.
## Quality Voice Overs for Elearning
Crafting engaging e-learning modules involves creating clear, memorable content with immersive voice narration. A natural-sounding voice can enhance understanding and retention for online learners, making the training experience more inclusive. Combining audio and visual elements can simplify complex topics, humanize content, and improve knowledge retention. Tools like novita.ai can streamline content integration and adapt output for various platforms. Quality voice overs in e-learning programs enhance retention and behavior change for both internal and external audiences. Consider technical specifications like high-quality audio and compatible file formats (MP3, WAV, AAC) when recording and using audio files for a professional learning experience.
### Empowering individuals with disabilities to improve their learning
Voiceovers are essential for helping individuals with disabilities interact with course content. They enable text-to-speech for the blind and provide audio descriptions for those with low vision, offering struggling readers access to written materials.

### Enhance the professionalism of your courses
When writing an e-learning script, keep it simple and engaging. Avoid complexity and long sentences. Listen to the audio recording for better adjustments. Simplify complex sentences and use punctuation effectively for pauses.
## Choosing the perfect voice for your Elearning course
Using the right voice over is vital for an e-learning course. Consider your target audience and avoid fast-paced narration, which can lead to disinterest. Opt for a leisurely pace and varied tone to engage learners effectively. Choose a voiceover talent that aligns with the pacing and tone needed for an engaging audio experience, such as an experienced professional in elearning narration.
Ensure your voice is clear, conversational, and easy to understand. Avoid distracting sound effects or dramatic voiceovers with heavy accents. For instance, in a scenario-based training module for nurse practitioners, opt for a voice that conveys urgency during emergencies. Choose a professional, calm voice similar to a physician's demeanor, free of background noise. Selecting the right voice is vital for engaging your audience and enhancing the learning experience.
novita.ai's text-to-speech APIs simplify the daunting tasks of engaging a voice artist, booking a recording studio, and editing recordings to create AI voices. In addition to a variety of warm and empathetic e-learning voices, novita.ai provides a solution that not only enhances clarity and engagement but also fosters a stronger connection between learners and the content.

### How to select an elearning narrator
When choosing a voice actor for your e-learning project, consider your audience's preferences. Opt for voices that are confident, authoritative, and relatable. Choose the right narrator based on factors like age, gender, accent, and dialects with Voices' experts. For longer content, prioritize a friendly and conversational tone over a formal announcer voice. Look for a narrator who can adjust their voice clearly and modulate it to enhance the learning experience and engage your audience.
## Conclusion
In the ever-evolving landscape of e-learning, engaging your audience is paramount to the success of educational initiatives. Mastering the art of e-learning voice-over narration is key to creating immersive and effective learning experiences. By leveraging the latest advancements in voiceover technology, such as those offered by novita.ai, educators and content creators can elevate their e-learning content to new heights, ensuring accessibility, engagement, and effectiveness for learners worldwide.
## Frequently Asked Questions
### How do I record audio for voiceover?
There are two ways to record voiceover audio: manual recording with a voice actor or independently, or using voice generation software like novita.ai. novita.ai can quickly convert your script into realistic audio by uploading or typing it into the Studio.
### How much does elearning voice over cost?
The cost of eLearning voice-over varies based on factors like script length, project complexity, voice actor experience, and additional services. Consider your budget and desired quality when selecting a voice actor for your eLearning project. Experience novita.ai for free today!
Originally published at [novita.ai](https://blogs.novita.ai/engaging-your-audience-mastering-elearning-voice-over/?utm_source=dev_audio&utm_medium=article&utm_campaign=elearning-voice-over-tips)
[novita.ai](https://novita.ai/?utm_source=dev_audio&utm_medium=article&utm_campaign=elearning-voice-over-tips), the one-stop platform for limitless creativity that gives you access to 100+ APIs. From image generation and language processing to audio enhancement and video manipulation, with cheap pay-as-you-go pricing, it frees you from GPU maintenance hassles while building your own products. Try it for free. | novita_ai |
1,833,954 | 5 Essential Tips for Maintaining Good Oral Hygiene by Experience Dentist |Arcus Dental | Maintaining good oral hygiene is vital not only for a bright smile but also for overall health.... | 0 | 2024-04-25T09:55:45 | https://dev.to/onlineconsultancy25/5-essential-tips-for-maintaining-good-oral-hygiene-by-experience-dentist-arcus-dental-2kk8 | bestdentalhospitalinkphb, bestdentalclinicnearme, bestdentistnearme, dentalservicesnearm | Maintaining good oral hygiene is vital not only for a bright smile but also for overall health. Neglecting oral health can lead to serious dental issues like cavities, gum disease, and, indeed, more serious health conditions. Fortunately, taking up simple habits can significantly improve oral hygiene. Here are five essential tips to help you keep your teeth and gums healthy
## Top 5 Dental Tips For Your Oral Health
1. Brush Twice a Day: Brushing your teeth is the foundation of good oral hygiene. It helps remove plaque, bacteria, and food particles that can lead to tooth decay and gum disease. Make it a habit to brush your teeth at least twice a day, preferably in the morning and before bedtime. Use fluoride toothpaste and a soft-bristled toothbrush to gently clean all surfaces of your teeth, front and back, which also helps refresh your breath.
2. Floss Daily: While brushing is essential, it doesn’t reach all areas between your teeth and along the gumline. That’s where flossing comes in. Flossing helps remove plaque and debris from these hard-to-reach areas, reducing the risk of cavities and gum disease. Make it a habit to floss at least one time a day, preferably before bedtime. Use a piece of dental floss or a flossing tool to gently clean between each tooth, being careful not to snap the floss against your gums.
3. Mouthwash: Mouthwash is another important part of an oral hygiene routine. It helps reduce plaque, freshen breath, and can reach areas that brushing and flossing may miss. Mouthwash contains antimicrobial ingredients that help fight bacteria. Incorporate mouthwash into your day-to-day routine by swishing it around your mouth for 30–60 seconds after brushing and flossing.
4. Eat a Balanced Diet: Your diet plays a significant part in your oral health. Foods rich in sugar and carbohydrates can feed bacteria in your mouth, leading to tooth decay and gum disease. Choose a balanced diet rich in fruits, vegetables, lean proteins, and whole grains. Limit your input of sticky snacks and drinks, and try to avoid snacking in between meals. Drinking an abundance of water throughout the day can also help flush down food particles and bacteria, promoting good oral health.
5. Visit Your Dentist Regularly: Indeed if you practice good oral hygiene at home, it’s essential to visit your dentist regularly for professional cleanings and check-ups. Your dentist can detect early signs of dental problems, like cavities and gum disease, and give treatment before they progress. Aim to schedule a dental examination and cleaning at least twice a year, or as recommended by your dentist. Additionally, don’t hesitate to see your dentist if you witness any dental pain, sensitivity, or other concerns between regular appointments.
In conclusion, maintaining good oral hygiene is essential for a healthy smile and overall well-being. By following these five tips—brushing twice a day, flossing daily, using mouthwash, eating a balanced diet, and visiting your dentist regularly—you can keep your teeth and gums in top condition. Remember, a little effort goes a long way toward achieving a lifetime of oral health. And when it comes to finding the best dentist near KPHB, trust in our experienced team to provide exceptional care and support for all your dental needs.
To Get More Details:
https://www.arcusdentalclinic.com/
Email: arcusclinichyd@gmail.com
Phone: +91 9032802805
https://maps.app.goo.gl/NvoiHQMjPm45VwH37
1st Floor, LIG 61-8, 4th Phase, KPHB,
JNTU-Hitech City Road,
Hyderabad, Telangana-500072
| onlineconsultancy25 |
1,834,000 | Digital Marketing Services in the USA - JM Digital Inc | Transform your digital presence with JM Digital Inc's comprehensive Digital Marketing Services. Our... | 0 | 2024-04-25T11:19:49 | https://dev.to/jmdigitalinc/digital-marketing-services-in-the-usa-jm-digital-inc-1foh | Transform your digital presence with JM Digital Inc's comprehensive [Digital Marketing Services](https://www.jaymehta.co/digital-marketing/). Our tailored solutions encompass everything from [Search Engine Optimization (SEO)](https://www.jaymehta.co/digital-marketing/seo/) for heightened visibility to engaging [Content Marketing strategies](https://www.jaymehta.co/digital-marketing/content-marketing/). With expertise in [Social Media Marketing](https://www.jaymehta.co/digital-marketing/social-media-marketing/), [Email Marketing](https://www.jaymehta.co/digital-marketing/email-marketing/), and precise [PPC services](https://www.jaymehta.co/digital-marketing/ppc-advertising/), we ensure targeted outreach and measurable results. Harness the power of AI Automation to streamline processes and enhance efficiency. Trust [JM Digital Inc](https://www.jaymehta.co/) to be your strategic partner in navigating the dynamic landscape of online marketing.
| jaymehtadigital | |
1,834,312 | What are LLMs? An intro into AI, models, tokens, parameters, weights, quantization and more | To keep up with everything happening in the world of artificial intelligence, it helps to understand... | 22,944 | 2024-04-28T22:00:00 | https://www.koyeb.com/blog/what-are-large-language-models | To keep up with everything happening in the world of artificial intelligence, it helps to understand and grasp key terms and concepts behind the technology.
In this introduction, we are going to dive into what generative AI is, looking at the technology and models it is built on. We'll discuss how these models are built, trained, and deployed into the world.
We'll also dive into questions like "How 'large' is a large language model?" and take a look at the relationship between model size and performance. Along the way, we will cover terms you might have heard, like parameters, weights, and tokens.
Lastly, we'll explore when you would want to reduce a model's size, and go over quantization and sparsity, two techniques for doing so effectively.
## What is AI, machine learning, and a model?
First things first, large language models are a subset of AI (artificial intelligence) and ML (machine learning.) [UC Berkeley](https://ischoolonline.berkeley.edu/blog/what-is-machine-learning/) defines AI and machine learning as follows:
- AI refers to any of the software and processes that are designed to mimic the way humans think and process information.
- On the other hand, machine learning specifically refers to teaching devices to learn information given to a dataset without manual human interference.
Then there are models. In artificial intelligence, a model is a representation of a system or process that is used to make predictions or decisions based on data. In other words, it is a mathematical algorithm that is trained on a dataset to learn patterns and relationships in the data. Once trained, the model can be used to make predictions or decisions on new data.
### Training and inference are different stages of a model's lifecycle
**Training** and **inference** are two distinct phases in the lifecycle of a machine learning model.
During **training**, the model learns from the input data and adjusts its parameters to minimize the difference between its predictions and the actual target values. This process involves backpropagation, optimization algorithms, and iterative updates to the model's parameters.
**Inference** is the phase where the trained model is used to make predictions on new, unseen data. During inference, the model takes input data and generates output predictions based on the learned patterns and relationships in the training data. Inference is typically faster and less computationally intensive than training, as the model's parameters are fixed and do not need to be updated.
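As a deliberately tiny illustration of the two phases, here is a one-parameter toy model fit with gradient descent. This is nothing like how real LLMs are trained, but the shape of the lifecycle is the same: training iteratively adjusts parameters to reduce error, then inference applies the frozen parameters to new inputs.

```python
# Training: adjust the parameter w to fit the data (here the pattern is y = 3x).
def train(data, lr=0.01, epochs=200):
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y
            w -= lr * 2 * error * x  # gradient step on the squared error
    return w

# Inference: the learned parameter is fixed; we only apply it to new input.
def predict(w, x):
    return w * x

w = train([(1.0, 3.0), (2.0, 6.0), (3.0, 9.0), (4.0, 12.0)])
print(round(w, 2))           # the model recovers w close to 3.0
print(round(predict(w, 5)))  # prediction on unseen input, close to 15
```

Note how `predict` is just a cheap function application while `train` loops over the data many times, which mirrors why inference is typically far less computationally intensive than training.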
## What are Large Language Models (LLMs)?
While there are many different kinds of models, today we are going to focus on large language models.
LLMs are a type of computer program that can recognize, understand, and generate human language. Built on machine learning, they are trained on huge sets of data, which explains where the name "large" comes from.
They are used to generate human-like text, answer questions, and perform other natural language processing tasks.
## Parameters versus Weights versus Tokens
When talking about large language models (LLMs), it's helpful to understand the distinction between parameters, weights, and tokens. These terms are often used interchangeably, but they refer to different aspects of the model's architecture, training, and input/output.
- **Parameters**: Parameters are variables that the model learns during the training process. These parameters are adjusted through backpropagation to minimize the difference between the model's predictions and the actual target values.
- **Weights**: Weights are a subset of the parameters in a model that represent the strength of connections between variables. During training, the model adjusts these weights to optimize its performance. Weights determine how input tokens are transformed as they pass through the layers of the model.
- **Tokens**: Tokens are the basic units of input and output in a language model. In natural language processing tasks, tokens typically represent words, subwords, or characters. During training and inference, the LLM processes input text as a sequence of tokens, each representing a specific word or symbol in the input text. The model generates output by predicting the most likely token to follow a given sequence of input tokens.
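A microscopic sketch to tie the three terms together. Real LLMs use subword tokenizers such as BPE or SentencePiece and have billions of weights; the vocabulary and numbers below are made up purely for illustration.

```python
# Tokens: text is mapped to a sequence of integer token ids.
vocab = {"<unk>": 0, "the": 1, "model": 2, "reads": 3, "tokens": 4}

def tokenize(text):
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

# Weights: an embedding table is one set of learned weights, one vector
# per token id. Here 5 tokens x 3 dimensions = 15 parameters in this layer.
embeddings = [
    [0.1, 0.0, 0.2],  # <unk>
    [0.3, 0.1, 0.0],  # the
    [0.0, 0.4, 0.1],  # model
    [0.2, 0.2, 0.3],  # reads
    [0.1, 0.3, 0.4],  # tokens
]

ids = tokenize("The model reads tokens")
vectors = [embeddings[i] for i in ids]  # weights transform tokens into vectors
print(ids)  # [1, 2, 3, 4]
```

In a real model, every layer adds more weight matrices like this one, and the total count of all such learned values is what the "parameter count" reports.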
## What makes a large language model "large"?
The size of a language model can be measured in several ways, depending on the context and the specific characteristics of the model. Some common metrics used to describe the size of a language model include:
1. **Parameter Count**: The number of parameters in an LLM typically represents the size or complexity of the model, with larger models having more parameters.
2. **Memory Footprint**: The size of a model in terms of memory footprint can also indicate its scale. Large models often require significant amounts of memory to store their parameters during training and inference.
3. **Compute Requirements**: The computational complexity of training and running a model can also indicate its size. Larger models typically require more computational resources (such as CPU or GPU cores) and longer training times.
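As a rough back-of-envelope illustration of the memory-footprint point (the parameter counts and byte widths below are illustrative assumptions, not figures from any specific model):

```JS
// Rough memory footprint: parameter count x bytes per parameter.
// The parameter counts below are illustrative round numbers.
function memoryGB(numParams, bytesPerParam) {
  return (numParams * bytesPerParam) / 1024 ** 3
}

const models = [
  ["small (~1B params)", 1e9],
  ["medium (~7B params)", 7e9],
  ["large (~70B params)", 70e9],
]

for (const [name, params] of models) {
  const fp32 = memoryGB(params, 4) // 32-bit floats
  const int8 = memoryGB(params, 1) // 8-bit integers
  console.log(`${name}: ${fp32.toFixed(1)} GB at fp32, ${int8.toFixed(1)} GB at int8`)
}
```

Just storing the weights of a 70B-parameter model at fp32 needs on the order of 260 GB, which is why reducing numerical precision matters so much in practice.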
## Smaller versus Larger Model Sizes
The size of a large model can vary widely depending on factors such as the complexity of the task it's designed for. For example, models used for tasks like natural language processing (NLP) or computer vision may tend to be larger due to the complexity of the underlying data and tasks.
There has been a trend towards increasingly larger models in the field of deep learning, driven by advances in hardware, algorithms, and access to large-scale datasets. For example, models like OpenAI's GPT-3 and Google's BERT have billions of parameters, pushing the boundaries of what was previously considered "large".
In general, we can categorize language models into three broad categories based on their size:
- **Small models**: Less than ~1B parameters. [TinyLlama](https://github.com/jzhang38/TinyLlama) and [tinydolphin](https://ollama.com/library/tinydolphin) are examples of small models.
- **Medium models**: Roughly between 1B to 10B parameters. This is where [Mistral 7B](https://mistral.ai/news/announcing-mistral-7b/), [Phi-3](https://azure.microsoft.com/en-us/blog/introducing-phi-3-redefining-whats-possible-with-slms/), [Gemma from Google DeepMind](https://github.com/google-deepmind/gemma), and [wizardlm2](https://ollama.com/library/wizardlm2) sit. Fun fact: [GPT 2](https://github.com/openai/gpt-2) was a medium sized model, much smaller than its latest versions.
- **Large models**: Everything above 10B of parameters. This is where [Llama 3](https://llama.meta.com/llama3/), [Llama 2](https://llama.meta.com/llama2/), [Mistral 8x22B](https://mistral.ai/news/mixtral-8x22b/), [GPT 3](https://github.com/openai/gpt-3), and most likely [GPT 4](https://openai.com/research/gpt-4) sit.
## The Relationship Between Model Size and Performance
The size of a language model can have a significant impact on its performance and accuracy.
In general, larger models tend to perform better on complex tasks and datasets, as they have more capacity to learn complex patterns and relationships in the data.
That being said, large models require more computational resources to train and run, making them more expensive and time-consuming to develop and deploy. Additionally, large models may be more prone to overfitting, where the model learns to memorize the training data rather than generalize to new, unseen data.
## Reducing Model Size with Quantization and Sparsity
Large language models can be computationally expensive to train and deploy, making it challenging to scale them to real-world applications. To address this challenge, techniques have been developed to reduce the size of language models while maintaining their performance and accuracy.
1. **Sparsity** introduces zeros into the parameters (weights) of the model to reduce its overall size and computational complexity. Sparse models have a significant portion of their parameters set to zero, resulting in fewer non-zero weights and connections. This reduces memory footprint and computational requirements during both training and inference.
2. **Quantization** involves reducing the precision of numerical values in the model, typically by representing them with fewer bits. For example, instead of using 32-bit floating-point numbers (float32), quantization may use 8-bit integers (int8) or even fewer bits to represent weights, activations, and other parameters of the model.
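A minimal sketch of what symmetric int8 quantization does to a weight vector (production schemes use per-channel scales, calibration data, and more sophisticated rounding, so treat this purely as illustration):

```JS
// Symmetric int8 quantization: map float weights into [-127, 127]
// integers plus one float scale factor.
function quantizeInt8(weights) {
  const scale = Math.max(...weights.map(Math.abs)) / 127
  const q = weights.map(w => Math.round(w / scale))
  return { q, scale }
}

// Recover approximate float weights from the int8 representation.
function dequantize(q, scale) {
  return q.map(v => v * scale)
}

const weights = [0.12, -0.5, 0.33, 0.01, -0.27]
const { q, scale } = quantizeInt8(weights)
const restored = dequantize(q, scale)

// Each value now fits in one byte instead of four (float32),
// at the cost of a small rounding error per weight.
const maxError = Math.max(...weights.map((w, i) => Math.abs(w - restored[i])))
console.log(q) // [ 30, -127, 84, 3, -69 ]
console.log(`max rounding error: ${maxError.toFixed(4)}`)
```

The 8-bit representation cuts the memory footprint to a quarter of float32 while keeping each weight within half a quantization step of its original value.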
_Check out Patrick from [Ollama's lightning demo from the AI Developer Meetup](https://youtu.be/9c4pMnPXjTQ?feature=shared&t=442) last month about how quantization works._
## Deploying LLMs and AI Workloads into the World
In this article, we've scratched the surface of AI, machine learning, and large language models. We've discussed the importance of model size, the relationship between model size and accuracy, and techniques to reduce model size such as quantization and sparsity.
If you are looking for the fastest way to [deploy your inference models](https://www.koyeb.com/ai) worldwide, give our platform a test drive. Get ready to deploy on high-end infrastructure, with a single click and no stress DevOps.
| alisdairbr | |
1,834,368 | Revisiting the "Revealing Module pattern" | (Cover Image source) Maybe you've heard of the "Revealing module pattern" [RMP], which is a way to... | 0 | 2024-04-27T13:33:27 | https://dev.to/efpage/revisiting-the-revealing-module-pattern-1fp1 | javascript, programming, oop, tutorial | ([Cover Image source](https://algodaily.com/lessons/understanding-encapsulation-in-programming))
Maybe you've heard of the "[Revealing module pattern](https://www.digitalocean.com/community/conceptual-articles/module-design-pattern-in-javascript)" [RMP], which is a way to create protected code modules in Javascript. Unlike JS class objects, code inside the modules cannot be altered from the outside, which can be a huge benefit to protect your code from its worst enemy: you yourself! There are quite some explanations of the pattern on the net (even on [dev.to](https://dev.to/imsabodetocode/javascript-revealing-module-pattern-3ji4)), but I would like to show some extensions to make the pattern more useful.
The RMP uses the fact that local functions and variables created inside a function cannot be reached from the outside. The function body forms a local scope that is insulated from the rest of the program. To make elements inside the function body available, the main function returns an object containing all the references that should be accessible. See an example here:
```JS
function Person(myName) {
let name = myName
function log(txt) { console.log(txt) }
function public_talk() { log(name + " is talking") }
function public_dance() { log(name + " is dancing") }
// interface
return {
talk: public_talk,
dance: public_dance
}
}
let father = Person("Peter")
father.talk()
father.dance()
```
Person returns an object containing just the two functions `talk` and `dance`, so all the "internals" are hidden from the outside world. In contrast to JS classes, the code inside an RMP looks pretty normal; in fact, the RMP body looks and works exactly like a small program inside the rest of the code. But there are some downsides too. Most of all, it lacks any form of inheritance, so RMP modules are not open for extensions. But let's see what we can do about this...
### JS Tricks and shortcuts
The examples below use some JS-"tricks" that are absolute standard, but which you may or may not be familiar with, so here is a short introduction:
```JS
// Destructuring
let {a, b} = myFunction()
-> if myFunction returns an object {a:1, b:2, c:3}, a and b are assigned to local variables.
// Arrow-functions
function value(){return x}
value = () => x
-> Shorter, if you just need to return a value
// ES6-shorthands
let a=1, b=2
let ob = {a:a, b:b}
-> instead, you can just write ob = {a,b}
// getter and setter
let a=1, b=2
let ob = {
get a(){ return a},
set a(x){ a = x },
b
}
-> a setter can be used like a normal property (ob.a = 5), but invokes a function call
```
This "tricks" are important to know to understand the following code examples.
### Improving the Revealing Module Pattern
First, let's see if we can make our example a bit smarter:
```JS
function Person(name) {
function log(txt) { console.log(txt) }
function talk() { log(name + " is talking") }
function dance() { log(name + " is dancing") }
// interface
return {
talk,
dance
}
}
let father = Person("Peter")
...
```
We do not need a variable to keep the name, as `name` is already a local variable in the function scope. Parameters do not need to be stored separately and can be altered too. And thanks to ES6 shorthands, we do not need separate names for our functions.
### Removing limitations
But what about changing values from the outside? Ok, you could build a function to do the job. But there are better ways. Let's try to expose a "variable":
```JS
function Person(name) {
function log(txt) { console.log(txt) }
function talk() { log(name + " is talking") }
function dance() { log(name + " is dancing") }
// interface
return {
name,
talk,
dance
}
}
let father = Person("Peter")
father.talk() // --> Peter is talking
father.name = "Paul"
father.talk() // --> Peter is talking
```
This does not work, as the object just returns a copy of our value. Even if we change the value of `name` internally, this would not be reflected in the result. But we can use getters and setters to get what we want:
```JS
...
// interface
return {
get name(){return name},
set name(x){name = x},
talk,
dance
  }
}
let father = Person("Peter")
father.talk() // --> Peter is talking
father.name = "Paul"
father.talk() // --> Paul is talking
```
> **Hint:** _Be careful with getters! They work only in the initial context. So, you can use `father.name = "Newname"`. But if you destructure father, you will receive a string, not a getter:_
```JS
let {name} = father
name = "Paul" // --> does not change the internal variable
father.name = "Paul" // --> does change the internal variable
```
### How to inherit?
This is my very special pattern to implement a simple form of inheritance. Often it comes in handy to be able to extend or change an existing "module" without changing the initial code. Here is my proposal:
```JS
function Person(name) {
function log(txt) { console.log(txt) }
function talk() { log(name + " is talking") }
function dance() { log(name + " is dancing") }
// interface
return {
private: {
log,
get name(){return name}
},
talk,
dance
}
}
function Woman(_name) {
// Inherit
let Super = Person(_name)
let { name, log } = Super.private
function talk() { log(name + " is a talking woman"); }
function jump() { log(name + " is jumping") }
return Object.assign(Super, { talk, jump }) // Override talk
}
let father = Person("Peter")
let mother = Woman("Claire")
father.talk()
father.dance()
mother.talk()
mother.dance()
mother.jump()
```
Some comments on the code:
**Super**: This object keeps all references from the "parent" module. You can add new functions to the module by extending the object. Initially I tried this:
```JS
return { ...Super, talk, jump }
```
This runs, but breaks the getters and setters, because the spread operator copies a getter's current value instead of the accessor itself. So it is advised to use `Object.assign()` to extend the parent class.
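The difference is easy to demonstrate in isolation (a contrived example, not part of the Person module above): spreading an object evaluates its getters once and stores the results as plain data properties, while `Object.assign(target, extras)` adds the extras onto the original object and leaves its accessors untouched:

```JS
let secret = "initial"
const parent = {
  get secret() { return secret }
}

const spreadCopy = { ...parent }                     // getter evaluated once, value copied
const assigned = Object.assign(parent, { extra: 1 }) // parent keeps its live getter

secret = "changed"
console.log(spreadCopy.secret) // "initial" - a frozen snapshot
console.log(assigned.secret)   // "changed" - the getter still runs
```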
**private**: This returns references that are not intended for external use. You do not need to use this, but putting such references inside a sub-object reminds me of their special role.
**destructuring**: You can simply use `Super.private.log()`, but destructuring makes the functions available in the local scope. It's simply easier to read. But be careful: destructuring `name` returns a string. If you want to invoke the getter, you need to use `Super.private.name` instead.
**polymorphism**: There is a pretty simple way to even change parent functions from inside a child module. Just use a setter to make parent functions mutable:
```JS
// ...in Parent:
// interface
return {
  private: {
    get log(){ return log },
    set log(x){ log = x }
  }
}

// ...in Child:
function mylog(txt) { /* new behavior */ }
Super.private.log = mylog
```
Ok, the RMP will not provide everything a full-featured class system may contain, but it has a lot of advantages:
- You can use the same code inside and outside the module
- Variables and functions are protected by design until you manually expose them to the public
- The RMP is plain Javascript, no hidden magic. So it will most likely work on a wide range of browsers without any polyfilling
The RMP can be a powerful part in your toolbox. With some extensions, it can be even more flexible.
Here is a working [example](https://flems.io/#0=N4IgZglgNgpgziAXAbVAOwIYFsZJAOgAsAXLKEAGhAGMB7NYmBvEAXwvW10QICsEqdBk2J4ABBLFgArmmrEI9MQAUYAJzj0AFJhwBKMcAA6aSZJlyFSqLQDmW4gA9iB4GKGbY+G-acuxrCZmEhbyiqbEGFAA1lquYj46XGIA1GJGIGIQcGKRMRBothkGgabBoVamACYYcjBxhgl2STip6ZnZYjVyBUUgJSZBwQD0w1nCamAY1DBDZmowxNJqpsZlwWIADmoQAG4YjIiGcxtiujAUJxu2i0328QtLK3cBl+uncLeJYPE+YgC8UgCV0k7BBEjy0Tepy6tRmINKZlKJwq4TEAHVaFhaloAPrnVwg0ZiACSaEI6ggxBBsGIYgAytJNuoASp1Jo0HiCTTbm5zgFWYzmWp8Ns9gdZu8JCDUdZmn54h5aF5EgADAB86rEABJgH5WGIADyG1UDKUMpnqUU7faMbx2Vk+Qbm2URKKxX7NflpDJZHIYXLu3piADuWNqxQA3MCXbIwkpeNIsJsGm5Et72n6xInk71ijGQY9lqYAPIAI14MHk+AwcDgEFsnKF6gojUhrZzmwCBmJJd26h2VRggZiJ2R7xOtKkBwpalZqg02gyqkYamKk9uWFoxFnrMx2M5GQAwlAMBAFuu0CiZ1bIXFrzurd0Zver+8t4+RXe9CcP7P8M+9Q-u+27-p2r6UCAnywPGaAIDwAAsACMiAAAxsBwIDnHg+DUHWkFCIwzA8GwAC6rBAA) to see all patterns in action
Happy coding!
| efpage |
1,834,425 | CS50P Problem Set 5 - Back to the Bank | Dear participants, I need help and decided to ask you what is the best place to get it. I am taking... | 0 | 2024-04-25T18:40:25 | https://dev.to/9auloandre/cs50p-problem-set-5-back-to-the-bank-di1 | question, pytest, check50, errors | Dear participants,
I need help and decided to ask you what is the best place to get it.
I am taking David Malan's CS50P course. I got up to Problem Set 5 and I am stuck at the "Back to the Bank" problem.
I created the required Python script and ran it flawlessly. However, when I checked the script with `check50`, I got the following error:
> :( test_bank catches bank.py without case-insensitivity
> expected exit code 1, not 0
Would someone kindly indicate where I could get help?
Thanks
| 9auloandre |
1,834,441 | Unlocking the Power of Databases in Java with JDBC | Java has long been a powerhouse in the world of software development, and its ability to interact... | 0 | 2024-04-25T19:16:29 | https://dev.to/dbillion/unlocking-the-power-of-databases-in-java-with-jdbc-25gj | java, database | Java has long been a powerhouse in the world of software development, and its ability to interact with databases through Java Database Connectivity (JDBC) is a testament to its versatility. Inspired by the practical insights from “The Java Workshop” by David Cuartielles, Andreas Goarnsson, and Eric Foster-Johnson, let’s delve into how JDBC serves as a pivotal link between Java applications and databases.
# JDBC: The Gateway to Databases
At its core, JDBC is an API that allows Java programs to connect to and manipulate databases. It’s the bridge that carries requests from Java applications to databases and brings back results to be processed within the application. Whether you’re dealing with Oracle, MySQL, SQL Server, or any other relational database, JDBC provides a uniform interface for communication.
**Code Snippet: Establishing a Connection**

# Leveraging JDBC's Capabilities
JDBC's design allows developers to write once and run anywhere. You can execute SQL queries, update records, and even handle transactions in a standardized way across all supported databases. "The Java Workshop" emphasizes the importance of writing clean, maintainable code, and JDBC aligns perfectly with this philosophy by providing a clear structure for database interactions.
**Code Snippet: Executing a Query**

# Conclusion
JDBC is more than just a tool; it’s a foundational skill for any Java developer. By understanding and utilizing JDBC, you can build robust, data-driven applications that stand the test of time. As “[The Java Workshop](https://www.amazon.com/Java-Workshop-Interactive-Approach-Learning/dp/1838986693)” guides you through the nuances of Java programming, remember that JDBC is your ally in the quest to harness the full potential of databases in your Java applications.
---
This article aims to provide a concise yet comprehensive overview of JDBC and its role in Java development, complete with code snippets to get you started. For a deeper dive into Java and JDBC, “The Java Workshop” is an excellent resource that covers these topics and more, setting you on the path to becoming a proficient Java developer | dbillion |
1,834,848 | Kamagra 50 mg: Revitalize Your Love Life | Introduction to Kamagra 50 mg In the realm of intimate relationships, maintaining a healthy and... | 0 | 2024-04-26T05:56:52 | https://dev.to/lyfechmiest/kamagra-50-mg-revitalize-your-love-life-55a4 | kamagra50mg, erectiledysfunction, intimacy, relationships | Introduction to Kamagra 50 mg
In the realm of intimate relationships, maintaining a healthy and satisfying sexual life is paramount. However, for many individuals, the struggle with erectile dysfunction (ED) can significantly hinder their ability to enjoy intimate moments with their partners. Fortunately, pharmaceutical advancements have led to the development of medications like Kamagra 50 mg, offering hope and rejuvenation for those experiencing ED.
Understanding Erectile Dysfunction (ED)
What is ED?
Erectile dysfunction, commonly known as ED, refers to the inability to achieve or maintain an erection firm enough for sexual intercourse. It can occur due to various physical and psychological factors, affecting men of all ages.
Causes of ED
The causes of ED can range from underlying health conditions such as diabetes and hypertension to psychological factors like stress and anxiety. Lifestyle choices such as smoking, excessive alcohol consumption, and a sedentary lifestyle can also contribute to the development of ED.
Importance of addressing ED
Addressing ED is crucial not only for the physical aspect of intimacy but also for the overall well-being of individuals and their relationships. Untreated ED can lead to feelings of inadequacy, frustration, and strain on relationships. Therefore, seeking appropriate treatment is essential for improving quality of life.
Overview of Kamagra 50 mg
What is Kamagra 50 mg?
Kamagra 50 mg is a medication primarily used to treat erectile dysfunction in men. It contains sildenafil citrate, a potent vasodilator that works by increasing blood flow to the penis, resulting in improved erectile function.
How does it work?
Sildenafil citrate, the active ingredient in Kamagra 50 mg, inhibits the enzyme phosphodiesterase type 5 (PDE5), allowing for increased blood flow to the penile region during sexual stimulation. This leads to firmer and more sustained erections, enhancing sexual performance.
Benefits of [Kamagra 50 mg](https://www.lyfechemist.com/product/kamagra-50-mg/)
The benefits of Kamagra 50 mg extend beyond its ability to treat ED. It offers a convenient and effective solution for individuals seeking to regain confidence and intimacy in their relationships. Additionally, its affordability and accessibility make it a preferred choice for many.
Safety and Precautions
Who should avoid Kamagra 50 mg?
While Kamagra 50 mg is generally safe for most men, there are certain individuals who should avoid its use. This includes those with a history of cardiovascular diseases, severe liver or kidney impairment, and certain eye conditions. It is also not recommended for use in women and children.
Potential side effects
Like any medication, Kamagra 50 mg may cause side effects in some individuals. Common side effects include headache, dizziness, flushing, and indigestion. In rare cases, more severe side effects such as priapism (prolonged erection) and sudden vision loss may occur.
Safety precautions
To ensure the safe and effective use of Kamagra 50 mg, it is important to follow the prescribed dosage and instructions provided by a healthcare professional. Avoid consuming alcohol or grapefruit juice while taking this medication, as it may potentiate the side effects. If any adverse reactions occur, seek medical attention promptly.
Dosage and Administration
Recommended dosage
The recommended dosage of Kamagra 50 mg is one tablet taken orally, approximately 30-60 minutes before anticipated sexual activity. It is important not to exceed the recommended dose, as it may increase the risk of adverse effects.
How to take Kamagra 50 mg
Kamagra 50 mg should be taken with a full glass of water, without chewing or crushing the tablet. It can be taken with or without food, although consuming a heavy meal beforehand may delay its onset of action. Sexual stimulation is necessary for the medication to be effective.
Customer Reviews and Testimonials
Positive experiences
Many users of Kamagra 50 mg have reported positive experiences, citing improved erectile function and enhanced sexual satisfaction. Some have even described it as a life-changing medication that has reignited intimacy in their relationships.
Common feedback
Common feedback from users includes praise for its fast-acting nature, long-lasting effects, and minimal side effects compared to other ED medications. Additionally, its affordability and discreet packaging have been appreciated by many.
Comparative Analysis with Similar Products
Comparison with other ED medications
When compared to other ED medications such as Viagra and Cialis, Kamagra 50 mg offers similar efficacy at a fraction of the cost. Its generic formulation makes it more accessible to a wider population, without compromising on quality or effectiveness.
Purchasing Kamagra 50 mg
Where to buy Kamagra 50 mg
Kamagra 50 mg is available for purchase online from reputable pharmacies and authorized distributors. It is important to ensure that you are buying from a legitimate source to avoid counterfeit products and potential health risks.
Pricing and availability
The pricing of Kamagra 50 mg may vary depending on the quantity purchased and the supplier. However, it is generally more affordable than brand-name alternatives, making it an attractive option for many individuals.
Tips for Enhancing Intimacy
Lifestyle changes
Making positive lifestyle changes such as regular exercise, healthy eating, and stress management can contribute to overall sexual health and well-being.
Communication with partner
Open and honest communication with your partner about your feelings and concerns regarding intimacy can strengthen your relationship and improve intimacy.
Seeking professional help
If ED persists despite lifestyle changes and medication, seeking professional help from a healthcare provider or therapist specializing in sexual health may be beneficial.
Conclusion
In conclusion, Kamagra 50 mg offers a promising solution for individuals struggling with erectile dysfunction, allowing them to elevate intimate moments and regain confidence in their sexual abilities. With its effectiveness, affordability, and minimal side effects, it has become a preferred choice for many seeking to enhance their sexual experiences and relationships.
FAQs
Is Kamagra 50 mg safe for long-term use?
Kamagra 50 mg is generally safe for long-term use when taken as directed by a healthcare professional. However, regular check-ups and | lyfechmiest |
1,834,857 | How to position the chart at the far left of the canvas? | Question Title How to position the chart at the far left of the canvas in the vchart chart... | 0 | 2024-04-26T06:14:02 | https://dev.to/da730/how-to-position-the-chart-at-the-far-left-of-the-canvas-2l0i | # Question Title
How to position the chart at the far left of the canvas in the vchart chart library?
# Question Description
I am using the vchart chart library for visualization operations, and I hope that the chart can be located at the far left of the canvas. However, I had a problem when trying to adjust the configuration. I don't know how to set it.
```js
{
type: 'line',
data: {
values: [
{
time: '2:00',
value: 8
},
{
time: '4:00',
value: 9
},
{
time: '6:00',
value: 11
},
{
time: '8:00',
value: 14
},
{
time: '10:00',
value: 16
},
{
time: '12:00',
value: 17
},
{
time: '14:00',
value: 17
},
{
time: '16:00',
value: 16
},
{
time: '18:00',
value: 15
}
]
},
xField: 'time',
yField: 'value',
axes:[
{
type:'band',
orient:'bottom',
visible:false,
},
{
orient:'left',
visible:false,
}
]
};
```


# Solution
In the configuration options of vchart, there is a `trimPadding` attribute. It configures whether to remove the margins at both ends of a band axis: if it is `true`, there will be no margins at either end, and the settings of `bandPadding`, `paddingInner`, and `paddingOuter` will be ignored.
Here, we need to place the chart at the far left of the canvas, that is, without the left margin, so we add the `trimPadding` configuration to the 'bottom' axis.
Below is an example of the configuration:
```js
{
//...other spec configurations omitted
axes:[
{
type:'band',
orient:'bottom',
visible:false,
trimPadding:true,
},
{
orient:'left',
visible:false,
}
]
};
```
# Result Display
After adding the trimPadding configuration, the chart can now be properly displayed at the far left of the canvas.
Online effect reference: https://codesandbox.io/p/sandbox/common-chart-interactive-forked-cn95kp
# Related Documents
Related API: https://visactor.bytedance.net/vchart/option/barChart-axes-band#trimPadding
github: https://github.com/VisActor/VChart | da730 | |
1,834,872 | Candid B Cream: Your Trusted Antifungal Companion | Introduction Candid B Cream: Your Trusted Antifungal Companion is a potent remedy designed to combat... | 0 | 2024-04-26T06:34:44 | https://dev.to/lyfechemist0956/candid-b-cream-your-trusted-antifungal-companion-3gja | healthydebate | Introduction
Candid B Cream: Your Trusted Antifungal Companion is a potent remedy designed to combat various fungal infections effectively. It stands out as a reliable solution in the realm of antifungal treatments, offering fast relief and soothing comfort to those grappling with fungal issues. Let's delve deeper into what makes [Candid B Cream](https://www.lyfechemist.com/product/candid-b-cream/) a go-to choice for many.
Understanding Candid B Cream
Candid B Cream's Mechanism of Action: Candid B Cream operates by targeting the root cause of fungal infections. Its active ingredients penetrate the affected area, inhibiting the growth of fungi and providing relief from itching and irritation.
Ingredients of Candid B Cream: The cream contains a blend of antifungal agents such as Clotrimazole and Beclometasone Dipropionate. Clotrimazole effectively eradicates fungal growth, while Beclometasone Dipropionate reduces inflammation and redness, offering soothing relief.
Benefits of Candid B Cream
Fast Relief from Fungal Infections: Candid B Cream offers rapid relief from common fungal infections, alleviating symptoms such as itching, redness, and discomfort within a short span.
Soothing and Moisturizing Properties: In addition to combating fungal infections, Candid B Cream also moisturizes the skin, soothing dryness and restoring its natural balance.
How to Use Candid B Cream
Application Instructions: Apply a thin layer of Candid B Cream to the affected area and gently massage it into the skin. Repeat this process twice daily for optimal results.
Precautions to Consider: While Candid B Cream is generally safe for use, it's essential to perform a patch test before extensive application to rule out any allergic reactions. Avoid contact with eyes and mucous membranes.
Who Should Use Candid B Cream
Suitable for All Ages and Skin Types: Candid B Cream is suitable for individuals of all ages and skin types, making it a versatile solution for combating fungal infections across diverse demographics.
Cases Where Candid B Cream is Recommended: Whether you're dealing with athlete's foot, jock itch, or ringworm, Candid B Cream is your trusted ally in overcoming these common fungal ailments.
Common Fungal Infections Treated
Candid B Cream addresses a variety of fungal infections, including:
Athlete's Foot: A common fungal infection affecting the skin between the toes and the soles of the feet.
Jock Itch: Characterized by a red, itchy rash in the groin area, jock itch is another condition effectively treated by Candid B Cream.
Ringworm: Contrary to its name, ringworm is not caused by a worm but by a fungus. Candid B Cream effectively clears ringworm infections, providing relief from itching and discomfort.
Prevention Tips
Maintain Good Hygiene: Practicing good hygiene, such as keeping your skin clean and dry, can help prevent fungal infections from taking hold.
Avoid Sharing Personal Items: Refrain from sharing personal items such as towels, socks, and shoes to minimize the risk of fungal transmission.
Side Effects and Safety Concerns
Possible Side Effects: While Candid B Cream is generally well-tolerated, some individuals may experience mild side effects such as skin irritation or allergic reactions. Discontinue use if any adverse reactions occur.
Safety Precautions: Pregnant or breastfeeding women should consult with a healthcare professional before using Candid B Cream. Additionally, avoid applying the cream to broken or sensitive skin areas.
Conclusion
Candid B Cream: Your Trusted Antifungal Companion emerges as a reliable solution for combating fungal infections effectively. With its potent blend of antifungal agents and soothing properties, Candid B Cream offers fast relief and promotes skin health. Whether you're dealing with athlete's foot, jock itch, or ringworm, Candid B Cream stands as a beacon of hope in the fight against fungal ailments. | lyfechemist0956 |
1,834,897 | Exploring Cross-Cultural Perspectives in Audio Visual Installation | Cultural contexts significantly impact how technology is applied and perceived. With the rise of... | 0 | 2024-04-26T07:06:30 | https://dev.to/jamesespinosa926/exploring-cross-cultural-perspectives-in-audio-visual-installation-b79 | Cultural contexts significantly impact how technology is applied and perceived. With the rise of [audio video proposals](https://xtenav.com/x-doc/) integrating diverse perspectives optimally designs experiences. This blog explores some cross-cultural considerations in audio visual installations to foster inclusion and resonate authentically with global audiences.

## Aesthetics and Symbolism
Distinct visual languages use form, color, texture differently across regions. Holistic understanding avoids literal translations appropriating cultural assets disrespectfully. Consultation ensures sensitivity representing diversity accurately.
## Contextual Storytelling
Local histories, ideals and worldviews shape how narratives are communicated. Collaborative curation authentically voices varied perspectives. Multi-vocality recognizes non-dominant narratives balanced with majority stories.
## Community Engagement
Participatory design empowers grassroots input from inception. Installations uplift traditional knowledge and celebrate living cultural practices respecting ownership. Benefit-sharing through entrepreneurship sustainably invests in origin cultures.
## Language and Accessibility
Multi-lingual, image-based orientations welcome diverse visitors. Closed captions, audio descriptions consider disabilities ensuring inclusion. Universal design evaluates cultural quirks optimizing intuitiveness across backgrounds.
## Experience Design
Paced, sensory-rich story arcs match attention spans and learning styles. Touch, light, music may carry different connotations needing re-interpretation. Playfulness balances solemn topics appropriately across maturity levels.
## Environmental Contexts
Climate, terrain, available materials inspired distinctive styles. Adaptations suit local infrastructures prioritizing functionality. Operations leverage sustainable energy sources minimally disturbing habitats.
## Regulations and Beliefs
Legislations and cultural protocols guide respectful designs, especially around sacred sites, artifacts and rituals. Consult traditional knowledge systems to avoid misunderstandings or offenses.
## Shared Humanity
While celebrating diversity, installations also highlight our common hopes, values and capacity for wonder appealing to our collective human experience above all.
Sensitive cross-pollination optimizes audio visual design resonating universally through localized relevance and cultural mindfulness progressively. | jamesespinosa926 | |
1,834,983 | How Do Free Apps Make Money? | I always wondered how free apps make money. I did some research on this and found a lot of different... | 0 | 2024-04-26T09:22:14 | https://dev.to/martinbaun/how-do-free-apps-make-money-4ehh | learning, security, cybersecurity, monetization | I always wondered how free apps make money. I did some research on this and found a lot of different avenues to follow.
Let's jump into the ins and outs of how to earn money with free apps.
**Background**
I always encountered those exciting, sometimes annoying ads and pop-ups while browsing the web. YouTube has certainly made its ads inescapable, robbing me of two minutes of desired view time.
Have you thought about why they appear anyway? I certainly have. Those are some of the different monetization strategies for free apps. There are multiple other ways to monetize free apps.
**Monetization defined**
Monetization is the ability to earn from your creations. App monetization is the strategy of making money off an application.
Some monetization styles perform better than others. Let's learn the monetization method that works best in your situation.
**How to make money from an app: Different monetization strategies**
There are countless ways to profit from free apps. You can apply one or several monetization methods depending on your industry or the type of app. Let's get into the details:
**In-app advertising**
Do you know how YouTube makes money? Discover what in-app advertising is and how it works below:
Apps like YouTube display banner ads alongside content uploaded to their platform. Despite being a popular monetization style, you may need high engagement rates to make good advertising income.
**Video ads**
You have most likely come across those ad videos that appear when viewing certain content. Video ads are more visually appealing and can attract an audience better than display ads. At the same time, you must ensure the ads do not ruin the user experience and hurt your monetization. There's no way to tell exactly where that line is, but if YouTube is anything to go by, I'd guess it's quite far.
Read: _[The best screen-recording software in 2024](https://martinbaun.com/blog/posts/the-best-screen-recording-software-in-2024/)_
**Native advertising**
In-app advertising can also work by promoting other businesses, products, or services. Native advertising may also include sponsored content mixed into the app. This method may be popular if you do not want to distract your audience through other ads.
**In-app purchases**
_How does Amazon make money?_ In-app purchases are one answer, via the products or services listed on the platform. The benefits of in-app purchases are low costs, and you do not need to own the products or services. In-app purchasing is popular with e-commerce sites, dating apps, gaming platforms, and fitness apps. Monetization through this method can occur in the following ways:
**Paid content**
Paid content is information on your app, whether video, text, or downloadable that others must pay to access. Examples like e-books and courses can be offered as paid-for content despite your app being free for other services.
**Subscriptions**
Free apps may not be free forever. Many free apps provide trial periods where users can use them at no cost. The aim is to lure them into paying a subscription fee once they are used to the product or service offered.
**Freemium model**
Freemium apps make money by charging users for premium features. Users have unrestricted access to the basic free version. They have to pay for advanced features. An example of this is ChatGPT, which you can access for free, but you will be charged $20 monthly for ChatGPT Plus.
**Data monetization**
Have you ever wondered why companies like Google and Facebook can provide relevant content tailored to your needs? It's because they have mastered the art of data monetization. Below are some valuable data monetization methods.
**User data**
Do you want to know how to monetize user data? If you hold a directory of people visiting your app, you can monetize it by licensing or selling it to third parties that need it. This monetization may require user approval, as unauthorized sharing can cause privacy violations. You may also use the user data to tailor your in-app products or services for maximum value creation.
**Location data**
Think of navigation apps like Waze. These apps collect crucial geo data. Businesses needing to place location-related ads rely on such apps for data collection. If your app is one of these categories, this could be a suitable monetization strategy, although you may need user consent.
**Targeted advertising**
Once you understand the type of data you hold, including age profiles, sex, preferences, and user traits, you can use it for targeted advertising. You can sell your own or a third-party product or service through targeted ads.
**How to choose the best free app monetization strategy**
There are numerous ways to monetize free apps. However, to succeed at app monetization, you must apply the right strategy. Here are some tips to help you choose the right app monetization strategy.
**What is your business goal?**
It must be clear what you want in your app monetization. A monetization method like a subscription could work best if you already have a client base. If you are relatively new, you may consider an advertising revenue model.
**What's your target audience?**
Know your customers and what they want. Are they able and willing to pay for a product or service, and what functionality do they want with the app? It would be prudent to run a rewarded video ad for gamers since it addresses their needs. Knowing your audience and offering tailored ads ensures better response rates and engagements.
Read: _[The Human Touch: Why I Hired a Writer Over ChatGPT in 2023](https://martinbaun.com/blog/posts/the-human-touch-why-i-hired-a-writer-over-chatgpt-in-2023/)_
**How about the competition?**
Chances are that a rival app is using your considered monetization method. You may use the same strategy if you can cut a niche that suits your business. However, you may evaluate the competitors' strengths and weaknesses and use a unique monetization method.
**Does it make economic sense?**
As much as you may prefer a particular monetization method, it must make sense regarding the potential number of users, signups, average spending, etc. The numbers must add up, either now or in a predictive model. The last thing you want is to select a monetization method that makes your business struggle to break even.
**_Summary_**
I don’t think there is a specific rule for monetization for your free app. Your choice method needs to make economic sense and not impact user experience. My rule of thumb is to choose a monetization method that aligns with the business goal and users and carries a unique value proposition.
_For these and more thoughts, guides and insights visit my blog at [martinbaun.com](https://martinbaun.com/blog/)_ | martinbaun |
1,835,001 | Empowering Learners: The Evolution of Education in Dubai's Schools | Education in Dubai is undergoing a profound transformation, reflecting the city's dynamic growth and... | 0 | 2024-04-26T09:39:20 | https://dev.to/faizalkhan1393/empowering-learners-the-evolution-of-education-in-dubais-schools-2cc4 | Education in Dubai is undergoing a profound transformation, reflecting the city's dynamic growth and commitment to innovation. Dubai's schools have emerged as incubators of change, embracing new paradigms and approaches to empower learners for the challenges of the 21st century. This article explores the evolution of education in Dubai's schools, highlighting innovative practices, challenges, and the vision for the future.
The Shift Towards Learner-Centered Education
In recent years, Dubai's schools have shifted towards a learner-centred approach, placing students at the heart of the educational experience. This shift recognizes the unique needs, interests, and aspirations of each learner, fostering a culture of empowerment, curiosity, and lifelong learning. From personalised learning pathways and project-based instruction to competency-based assessments and student-led inquiry, Dubai's schools are reimagining education to meet the diverse needs of today's learners.
Embracing Technological Advancements
Technology plays a central role in the evolution of education in Dubai's schools. With the rise of digital tools, online resources, and immersive technologies, educators are leveraging technology to enhance teaching and learning experiences. From interactive smartboards and educational apps to virtual reality simulations and online collaboration platforms, technology is transforming classrooms into dynamic hubs of exploration and discovery. Dubai's schools are at the forefront of integrating technology seamlessly into the curriculum, equipping students with essential digital literacy skills and preparing them for success in a digital world.
Cultivating 21st Century Skills:
In addition to prioritizing academic knowledge, Dubai's schools recognize the critical importance of equipping students with 21st-century skills to thrive in an increasingly complex and interconnected world. Beyond rote memorization, educators are fostering critical thinking by encouraging students to analyze information critically, evaluate evidence, and make informed decisions. Creativity is nurtured through opportunities for artistic expression, design thinking challenges, and open-ended problem-solving tasks that encourage innovative thinking. Collaboration skills are honed through group projects, teamwork activities, and peer-to-peer learning experiences, preparing students to collaborate effectively in diverse and interdisciplinary teams. Furthermore, communication skills are developed through oral presentations, written assignments, and digital media projects, empowering students to express themselves clearly, persuasively, and empathetically in various contexts. By embedding these 21st-century skills into the fabric of education, Dubai's schools are equipping students with the tools they need to succeed in a rapidly evolving global landscape.
Fostering Inclusion and Diversity:
In Dubai's schools, fostering inclusion and celebrating diversity are not just buzzwords but guiding principles that shape every aspect of the educational experience. Recognizing the richness that diversity brings to the learning environment, educators actively promote an inclusive culture where every student feels welcomed, valued, and supported. Specialized support services are provided to students with diverse learning needs, ensuring that every learner has equitable access to educational opportunities and resources. Moreover, cultural celebrations, heritage months, and multicultural events are integrated into the school calendar, allowing students to share and celebrate their unique backgrounds and traditions. Intercultural understanding is fostered through cross-cultural exchanges, global awareness initiatives, and community service projects that promote empathy, respect, and appreciation for different perspectives. By embracing inclusivity and diversity, Dubai's schools are cultivating environments where students learn not only from textbooks but from each other, fostering a sense of belonging and unity amidst diversity.
Challenges and Opportunities:
While Dubai's schools have made significant strides in advancing educational excellence, they continue to face challenges that require innovative solutions and collaborative efforts. The digital divide persists as a barrier to equitable access to technology and resources, highlighting the need for targeted interventions to bridge this gap and ensure that all students have the tools they need to succeed in the digital age. Furthermore, supporting teacher professional development remains essential to equip educators with the skills, knowledge, and pedagogical strategies needed to effectively integrate technology, promote inclusion, and cultivate 21st-century skills in their classrooms. Additionally, navigating cultural expectations and societal norms requires ongoing dialogue, cultural sensitivity, and community engagement to ensure that educational practices are responsive to the diverse needs and values of the community. However, amidst these challenges lie opportunities for growth, innovation, and collaboration. By leveraging the collective expertise, resources, and creativity of stakeholders, Dubai's schools can overcome obstacles, seize opportunities, and continue to evolve as hubs of educational excellence and innovation, shaping the future of education not only in the city but beyond its borders.
Conclusion
As Dubai's schools, including those in Nad Al Sheba, continue to evolve, they remain steadfast in their commitment to empowering learners, fostering innovation, and embracing diversity. By embracing learner-centred approaches, leveraging technology, cultivating 21st-century skills, and promoting inclusion, Dubai's schools, including those in Nad Al Sheba, are equipping students with the knowledge, skills, and mindset needed to excel in a rapidly changing world.
With a focus on personalized learning experiences tailored to individual student needs, schools in Nad Al Sheba are creating environments where students are encouraged to explore their interests, pursue their passions, and take ownership of their learning journey. Through hands-on activities, collaborative projects, and real-world experiences, students develop critical thinking, creativity, communication, and collaboration skills that are essential for success in the 21st century.
Moreover, schools in Nad Al Sheba embrace technology as a powerful tool for enhancing teaching and learning. From interactive smartboards and educational apps to online resources and virtual reality simulations, technology is integrated seamlessly into classroom instruction, providing students with engaging and interactive learning experiences. By harnessing the power of technology, these schools are preparing students to thrive in a digital world and adapt to the ever-evolving demands of the modern workforce.
| faizalkhan1393 | |
1,835,085 | How To Hire An Expert To Take My Online Class | Online education provides a great opportunity for everyone to build an enviable career. Anyone can... | 0 | 2024-04-26T10:20:51 | https://dev.to/onlineclasshelp/how-to-hire-an-expert-to-take-my-online-class-4d26 | education | Online education provides a great opportunity for everyone to build an enviable career. Anyone can sign up for an online course, upskill, and land a lucrative job. It may sound enticing and easily achievable, but in truth, online classes are as tough as regular classes. That’s why many busy online students turn to class help services and ask, “Can I [pay someone to take my online class](https://www.onlineclasshelp.com/pay-someone-to-take-my-online-class/)?” An expert help gives them the confidence to earn better grades and achieve their goals effortlessly. However, hiring genuine class help service requires thorough research, as you may end up losing your money to scammers. Here is an overview of how you can hire a professional to take your online classes.
**Check Online Class Takers’ Academic Background**
There are hundreds of online class takers helping students achieve their academic goals. But how do you hire an expert who can get your job done? You can start by verifying the class taker’s professional background and past experience. Try to stay away from tutors who don’t reside in the USA because they may not understand the curriculum and syllabus of US universities.
At Online Class Help, we have a team of skilled professionals who have graduated from top universities in the USA. Call us and ask, “Can you do my online class?” There is instant help!
**Go Through Reviews, Ratings, And Testimonials**
Social media platforms and review sites offer a great opportunity for students to express their views and vent their anger if they don’t obtain quality services. Check for reviews, ratings, and testimonials of class help services. If you come across predominantly negative reviews, the best thing is to skip their services. Choose the services that have positive reviews and great customer feedback.
**Inquire About the Charges & Fees**
You may come across tutoring companies that charge low prices. It may seem enticing to take their service. Beware! They may be a group of scammers who try to lure innocent students with inferior services. Compare the prices of multiple online academic services and choose the one that provides better quality with moderate packages.
At Online Class Help, we provide personalized packages that allow online students to hire an expert to take the entire course or just do a single assignment. The prices vary depending on the services provided. If you want to get a free quote, call us and ask, “Can you [do my online class](https://www.onlineclasshelp.com/)?” Our executive will contact you immediately.
**Scan Their Website For Errors**
The company’s official website reveals a lot of crucial facts to gauge the quality of their service. Go through their content and look for any inconsistencies with their information, contact details, and graphic designs. If the website lacks professionalism, it could be a red flag that you must stay alert and make an informed decision.
**Availability & Contact Details**
Scammers usually pressure students to make a quick payment, and they don’t bother to respond after receiving the money. Make sure the tutoring company has adequate contact details and is available 24/7 to answer all your queries. You can also check with previous clients to ascertain their credibility and quality.
If you’re looking for quality class help services in the USA, reach out to Online Class Help and ask, “Can I pay someone to take my online class?” We provide professional assistance for a fair price.
Author Bio
Online Class Help is one of the leading online class help service providers that assists busy online students with assignments, homework, essays, exams, quizzes, discussion forums, and research projects. Call them and ask, “Can you do my online class?” They will help you instantly. For more details, visit https://www.onlineclasshelp.com/. | onlineclasshelp |
1,835,155 | Array.reduce() to fill <select> | I have a list of colors sitting in my database and a <select> HTML element where I want to use... | 27,198 | 2024-04-26T18:32:48 | https://dev.to/andrewelans/use-arrayreduce-to-fill-26eo | javascript, array, reduce | I have a list of colors sitting in my database and a `<select>` HTML element where I want to use these colors as `<option>`s.
## Colors
I get the values from the database and store them in a variable.
```javascript
const colors = [
{val: "1", name: "Black"},
{val: "2", name: "Red"},
{val: "3", name: "Yellow"},
{val: "4", name: "Green"},
{val: "5", name: "Blue"},
{val: "6", name: "White"}
]
```
## Generate options with Array.reduce()
### With `return` in the reducer callback
```javascript
const colorOptions = colors.reduce(
(options, color) => {
options.push(`<option value="${color.val}">${color.name}</option>`)
return options
}, []
)
```
### Without the `return` word in the reducer callback
We use [Grouping ( )](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Grouping) and [Comma (,)](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Comma_operator) operators for one-liner implementation.
Indentation is added for better human readability.
```javascript
const colorOptions = colors.reduce(
(options, color) => (
options.push(`<option value="${color.val}">${color.name}</option>`),
options
), []
)
```
### Resulting `colorOptions`
```javascript
[
'<option value="1">Black</option>',
'<option value="2">Red</option>',
'<option value="3">Yellow</option>',
'<option value="4">Green</option>',
'<option value="5">Blue</option>',
'<option value="6">White</option>'
]
```
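Since each color maps to exactly one option string, the same result can also be produced with `Array.map()`, which needs no accumulator at all. A sketch using the same `colors` data:

```javascript
const colors = [
  {val: "1", name: "Black"},
  {val: "2", name: "Red"},
  {val: "3", name: "Yellow"},
  {val: "4", name: "Green"},
  {val: "5", name: "Blue"},
  {val: "6", name: "White"}
]

// One input element -> one output string, so map() expresses the
// intent directly, without pushing into an accumulator.
const colorOptions = colors.map(
  (color) => `<option value="${color.val}">${color.name}</option>`
)
```

`reduce()` remains the more general tool (handy if you also filter or group while building the list), but for a plain 1:1 transformation `map()` is usually the clearer choice.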
## Sort before reducing
You can also sort on `val` or `name` before `Array.reduce()`.
```javascript
const colors = [
{val: "1", name: "Black"},
{val: "2", name: "Red"},
{val: "3", name: "Yellow"},
{val: "4", name: "Green"},
{val: "5", name: "Blue"},
{val: "6", name: "White"}
].sort(
(a, b) => a.name.localeCompare(b.name)
)
// colors => [
//   {val: "1", name: "Black"},
//   {val: "5", name: "Blue"},
//   {val: "4", name: "Green"},
//   {val: "2", name: "Red"},
//   {val: "6", name: "White"},
//   {val: "3", name: "Yellow"}
// ]
```
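One caveat if you sort on `val`: it holds strings, so a lexicographic sort would place `"10"` before `"2"` once the list grows past nine entries. A sketch of a numeric sort (the `"10"`/`Purple` entry is invented for illustration):

```javascript
const colors = [
  {val: "10", name: "Purple"},
  {val: "2", name: "Red"},
  {val: "1", name: "Black"}
]

// Convert the string vals to numbers so "10" sorts after "2",
// not before it.
colors.sort((a, b) => Number(a.val) - Number(b.val))

// colors order is now "1", "2", "10"
```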
## Use DocumentFragment to fill in `<select>`
We have a `<select>` on a page which is currently empty.
```html
<select id="colors-select"></select>
```
We can use the [DocumentFragment](https://developer.mozilla.org/en-US/docs/Web/API/DocumentFragment) interface to load `<select>` with options as nodes.
### Create DocumentFragment
```javascript
const fragment = document.createRange().createContextualFragment(
colorOptions.join('') // convert colors array to string
)
```
### Fill in `<select>`
```javascript
document.getElementById('colors-select').appendChild(fragment)
```
### Result
```html
<select id="colors-select">
<option value="1">Black</option>
<option value="5">Blue</option>
<option value="4">Green</option>
<option value="2">Red</option>
<option value="6">White</option>
<option value="3">Yellow</option>
</select>
```
## Full code snippet
```javascript
const colors = [
{val: "1", name: "Black"},
{val: "2", name: "Red"},
{val: "3", name: "Yellow"},
{val: "4", name: "Green"},
{val: "5", name: "Blue"},
{val: "6", name: "White"}
].sort(
(a, b) => a.name.localeCompare(b.name)
)
const colorOptions = colors.reduce(
(options, color) => (
options.push(`<option value="${color.val}">${color.name}</option>`),
options
), []
).join('')
const fragment = document.createRange().createContextualFragment(colorOptions)
document.getElementById('colors-select').appendChild(fragment)
```
| andrewelans |
1,835,258 | Is it necessary for me to Code? | Good day, developers. I just want to ask question on behalf of our team. I have an interest in tech;... | 0 | 2024-04-26T13:14:30 | https://dev.to/filmovity/is-it-necessary-for-me-to-code-3n27 | webdev | Good day, developers.
I just want to ask a question on behalf of our team. I have an interest in tech; in fact, I studied software engineering.
But I have started to lose interest in coding, and now I only work with developers to build projects that I come up with the ideas for.
Is there any problem with me? | filmovity |
1,835,290 | What we learned building our SaaS with Rust 🦀 | In this post we will not answer the question everybody asks when starting a new project: Should I do... | 0 | 2024-04-29T10:45:02 | https://dev.to/meteroid/5-lessons-learned-building-our-saas-with-rust-1doj | webdev, rust, opensource, beginners | In this post we will **not** answer the question everybody asks when starting a new project: **Should I do it in Rust ?**
<img src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExNHQwOTl6Ym5odmVmNDZpdzVmZG9mMW9yd2tmN2lyZ2NzOWNxc2MxMCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/l83rkRUu4IqyUbt5k6/giphy.gif">
Instead, we'll explore the pitfalls and insights we encountered after confidently answering "**absolutely!**" and embarking on our journey to build a business using mostly Rust.
This post aims to provide a high-level overview of our experiences, we will delve deeper into the details in an incoming series.
<sub>(vote in the comments for our next post 🗳️)</sub>
---
## Why Rust
Choosing the right language for a project is never a one-size-fits-all decision.
A couple words about our team and use case :
- we're a team of 6, with almost no prior Rust experience but an extensive Scala/Java background building data-intensive applications
- our SaaS is a billing platform with a strong focus on analytics, real-time data and actionable insights (think Stripe Billing meets ProfitWell, with a dash of PostHog).
- our backend is fully in Rust (divided in 2 modules and a couple of workers), and talks to our React frontend using gRPC-web
> We're open source !
> You can find our repo here : https://github.com/meteroid-oss/meteroid
> We would love your support ⭐ and contribution
We therefore have some non-negotiable requirements that happen to fit Rust pretty well: **performance, safety, and concurrency**.
Rust virtually eliminates entire classes of bugs and CVEs related to memory management, while its concurrency primitives are pretty appealing (and didn't disappoint).
In a SaaS, all these features are particularly valuable for sensitive or critical tasks, which in our case means metering, invoice computation and delivery.
Its significant memory usage reduction is also a major bonus to build a scalable and **sustainable** platform, as many large players [including Microsoft](https://mspoweruser.com/microsoft-forms-new-team-to-help-rewrite-core-windows-components-into-rust-from-c-c/) have recently acknowledged.
Coming from the drama-heavy and sometimes toxic Scala community, the **welcoming and inclusive** Rust ecosystem was also a significant draw, providing motivation to explore this new territory.
With these high hopes, let's start our journey !
---
## Lesson 1: The Learning Curve is real
Learning Rust isn't like picking up just another language. Concepts like ownership, borrowing, and lifetimes can be daunting initially, making otherwise trivial code extremely time-consuming.
As pleasant as the ecosystem is (more on that later), **you WILL inevitably need to write lower-level code** at times.
For instance, consider a rather basic middleware for our API (Tonic/Tower) that simply reports the compute duration:
```rust
impl<S, ReqBody, ResBody> Service<Request<ReqBody>> for MetricService<S>
where
S: Service<Request<ReqBody>, Response = Response<ResBody>, Error = BoxError>
+ Clone + Send + 'static,
S::Future: Send + 'static,
ReqBody: Send,
{
type Response = S::Response;
type Error = BoxError;
type Future = ResponseFuture<S::Future>;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.inner.poll_ready(cx)
}
fn call(&mut self, request: Request<ReqBody>) -> Self::Future {
let clone = self.inner.clone();
let mut inner = std::mem::replace(&mut self.inner, clone);
let started_at = std::time::Instant::now();
let sm = GrpcServiceMethod::extract(request.uri());
let future = inner.call(request);
ResponseFuture {
future,
started_at,
sm,
}
}
}
#[pin_project]
pub struct ResponseFuture<F> {
#[pin]
future: F,
started_at: Instant,
sm: GrpcServiceMethod,
}
impl<F, ResBody> Future for ResponseFuture<F>
where
F: Future<Output = Result<Response<ResBody>, BoxError>>,
{
type Output = Result<Response<ResBody>, BoxError>;
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.project();
let res = ready!(this.future.poll(cx));
let finished_at = Instant::now();
let delta = finished_at.duration_since(*this.started_at).as_millis();
// this is the actual logic
let (res, grpc_status_code) = (...)
crate::metric::record_call(
GrpcKind::SERVER,
this.sm.clone(),
grpc_status_code,
delta as u64,
);
Poll::Ready(res)
}
}
```
Yes, in addition to generic types, generic lifetimes, and trait constraints, you end up writing a custom Future implementation for a simple service middleware.
Keep in mind that this is a somewhat extreme example, to showcase the rough edges existing in the ecosystem. _In many cases, Rust can end up being as compact as any other modern language._
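To illustrate that last point, here's a sketch of everyday Rust (the invoice-line helper and figures are invented for illustration, not taken from our codebase):

```rust
// Summing invoice line totals with an iterator chain reads about as
// tersely as the equivalent Scala or Kotlin.
fn total_cents(lines: &[(u32, u32)]) -> u64 {
    lines
        .iter()
        // Widen to u64 before multiplying to avoid overflow on large lines.
        .map(|&(qty, unit_price_cents)| qty as u64 * unit_price_cents as u64)
        .sum()
}

fn main() {
    let lines = [(2, 1500), (1, 499)];
    println!("{}", total_cents(&lines)); // 3499
}
```

Day-to-day application code mostly looks like this; the custom-`Future` boilerplate above only surfaces at the edges, when you plug into lower-level framework traits.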
**The learning curve can vary depending on your background.** If you're used to the JVM handling the heavy lifting and working with a more mature, extensive ecosystem like we were, it might take a bit more effort to understand Rust's unique concepts and paradigms.
However, once you grasp these concepts and primitives, they become incredibly powerful tools in your arsenal, boosting your productivity even if you occasionally need to write some boilerplate or macros.
It's worth mentioning that [Google has successfully transitioned teams from Go and C++ to Rust](https://www.theregister.com/2024/03/31/rust_google_c) in a rather short timeframe and with positive outcomes.
To smooth out the learning curve, consider the following:
- **Read the official [Rust Book](https://doc.rust-lang.org/stable/book/) cover to cover**. Don't skip chapters. Understanding these complex concepts will become much easier.
- **Practice, practice, practice!** Work through [Rustlings](https://rustlings.cool/) exercises to build muscle memory and adopt the Rust mindset.
- **Engage with the [Rust community](https://www.reddit.com/r/rust/).** They're an incredible bunch, always willing to lend a helping hand.
- **Leverage GitHub's search** capabilities to find and learn from other projects. The ecosystem is still evolving, and collaborating with others is essential (just be mindful of licenses and always contribute back).
We'll explore some of the projects we've been inspired by in the next post.
---
## Lesson 2: The ecosystem is still maturing
The low-level ecosystem in Rust is truly incredible, with exceptionally well-designed and maintained libraries that are widely adopted by the community. These libraries form a solid foundation for building high-performance and reliable systems.
However, as you move higher up the stack, things can get slightly more complex.
<img src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExeWNoejRsb2RhaGsybzQwdXJydjJzbHVpNjR6eW9udzdudjlvdWVjdiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/l2SpOlC7JLROBEkO4/giphy.gif" >
For example, in the database ecosystem, while excellent libraries like [`sqlx`](https://github.com/launchbadge/sqlx) and [`diesel`](https://github.com/diesel-rs/diesel) exist for relational databases, the story is more complicated with many asynchronous or non-relational database clients. High-quality libraries in these areas, even if used by large companies, often have **single maintainers**, leading to slower development and potential maintenance risks.
The challenge is more pronounced for distributed systems primitives, where you may need to implement your own solutions.
This is not unique to Rust, but we found ourselves in this situation quite often compared to older/more mature languages.
On the bright side, **Rust's ecosystem is impressively responsive to security issues**, with swift patches promptly propagated, ensuring the stability and security of your applications.
The tooling around Rust development has been pretty amazing so far as well.
We'll take a deep dive into the libraries we chose and the decisions we made in a future post.
The ecosystem is constantly evolving, with the community actively working to fill gaps and provide robust solutions. Be prepared to navigate uncharted waters, allocate resources accordingly to help with maintenance, and contribute back to the community.
---
### ...did I mention we are open source ?
> [Meteroid](https://meteroid.com/) is a modern, open-source billing platform that focuses on business intelligence and actionable insights.
**We need your help ! If you have a minute,**
<a href="https://git.new/meteroid">
<img width="20%" style="width:20%" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExZDFvd2M3bnZ4OTF1dzBkcHh1NnlwemY1cTU5NWVjOThoZjU4a2U5biZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/XATW2O9w0hrmuIpvtu/giphy.gif">
</a>
Your support means a lot to us ❤️
{% cta https://github.com/meteroid-oss/meteroid %} ⭐️ Star us on Github ⭐️ {% endcta %}
---
## Lesson 3: Documentation Lies in the Code
When diving into Rust's ecosystem, you'll quickly realize that documentation sites can be a bit... well, sparse, at times.
But fear not! The real treasure often lies within the source code.
Many libraries have **exceptionally well-documented methods** with comprehensive examples nestled **within the code comments**. When in doubt, dive into the source code and explore. You'll often discover the answers you seek and gain a deeper understanding of the library's inner workings.
While external documentation with usage guides is still important and can save developers time and frustration, in the Rust ecosystem, it's crucial to be prepared to dig into the code when necessary.
Sites like [docs.rs](https://docs.rs) provide easy access to code-based documentation for public Rust crates. Alternatively, you can generate documentation for all your dependencies locally using `cargo doc`. This approach might be confusing at first, but spending some time learning how to navigate this system can be quite powerful in the long run.
Needless to say, another helpful technique is to look for examples (**most libraries have an `/examples` folder in their repository**) and other projects that use the library you're interested in, and to engage with those communities. They always provide valuable guidance on how the library is meant to be used and can serve as a starting point for your own implementation.
---
## Lesson 4: Don't aim for perfection
When starting with Rust, it's tempting to strive for the most idiomatic and performant code possible.
However, most of the time, it's okay to make trade-offs in the name of simplicity and productivity.

For instance, using `clone()` or `Arc` to share data between threads might not be the most memory-efficient approach, but it can greatly simplify your code and improve readability. As long as you're conscious of the performance implications and make informed decisions, **prioritizing simplicity is perfectly acceptable.**
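As a minimal sketch of that trade-off (the `Config` type and worker setup here are made up purely for illustration, not code from our product), sharing read-only data across threads with `Arc` costs only a reference-count bump per clone while keeping the code simple:

```rust
use std::sync::Arc;
use std::thread;

// Illustrative only: shared, read-only configuration. Cloning the Arc
// bumps a reference count instead of deep-copying the underlying data.
pub struct Config {
    pub base_url: String,
    pub retries: u32,
}

// Spawns `n` workers that all read the same config through Arc clones.
pub fn worker_messages(n: usize, config: Arc<Config>) -> Vec<String> {
    let handles: Vec<_> = (0..n)
        .map(|i| {
            let cfg = Arc::clone(&config); // cheap pointer copy, not a data copy
            thread::spawn(move || {
                format!("worker {i} -> {} ({} retries)", cfg.base_url, cfg.retries)
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let config = Arc::new(Config {
        base_url: "https://api.example.com".to_string(),
        retries: 3,
    });
    for msg in worker_messages(4, config) {
        println!("{msg}");
    }
}
```

A `clone()` of the underlying `Config` per thread would also work; `Arc` just makes the sharing explicit and cheap.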
Remember, premature optimization is the root of all evil. Focus on writing clean, maintainable code first, and optimize later when necessary. **Don't try to micro-optimize** **¹** (until you really need to). Rust's strong type system and ownership model already provide a solid foundation for writing efficient and safe code.
When optimizing performance becomes necessary, focus on the critical path and use profiling tools like `perf` and `flamegraph` to identify the real performance hotspots in your code. For a comprehensive overview of the tools and techniques, I can recommend [The Rust Performance Book](https://nnethercote.github.io/perf-book/introduction.html).

**¹** <sub>this applies throughout your startup journey, including fundraising</sub>
---
## Lesson 5: Errors can be nice after all
Rust's error handling is quite elegant, with the `Result` type and the `?` operator encouraging explicit error handling and propagation. However, it's not just about handling errors; it's also about providing clean and informative error messages with traceable stack traces, without tons of boilerplate to convert between error types.
Libraries like `thiserror`, `anyhow` or `snafu` are invaluable for this purpose. We decided to go with `thiserror`, which simplifies the creation of custom error types with informative error messages.
In most Rust use cases, you don't care that much about the underlying error type stack trace, and prefer to map it directly to an informative typed error within your domain.
```rust
use std::num::ParseIntError;
use thiserror::Error;

#[derive(Debug, Error)]
pub enum WebhookError {
    #[error("error comparing signatures")]
    SignatureComparisonFailed,
    #[error("error parsing timestamp")]
    BadHeader(#[from] ParseIntError),
    #[error("error comparing timestamps - over tolerance.")]
    BadTimestamp(i64),
    #[error("error parsing event object")]
    ParseFailed(#[from] serde_json::Error),
    #[error("error communicating with client : {0}")]
    ClientError(String),
}
```
Investing time in crafting clean and informative error messages greatly enhances the developer experience and simplifies debugging. It's a small effort that yields significant long-term benefits.
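For readers curious about the mechanics, here is a hand-rolled, std-only sketch of roughly what `thiserror`'s derive expands to: a `Display` impl plus a `From` impl per source error, so the `?` operator can convert and propagate automatically. The parsing scenario below is illustrative, not code from our product:

```rust
use std::fmt;
use std::num::ParseIntError;

#[derive(Debug)]
enum WebhookError {
    BadHeader(ParseIntError),
    BadTimestamp(i64),
}

impl fmt::Display for WebhookError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            WebhookError::BadHeader(e) => write!(f, "error parsing timestamp: {e}"),
            WebhookError::BadTimestamp(t) => write!(f, "timestamp over tolerance: {t}"),
        }
    }
}

impl std::error::Error for WebhookError {}

// This From impl is what `#[from]` derives for you: it lets `?` convert
// a ParseIntError into your domain error automatically.
impl From<ParseIntError> for WebhookError {
    fn from(e: ParseIntError) -> Self {
        WebhookError::BadHeader(e)
    }
}

fn parse_timestamp(header: &str) -> Result<i64, WebhookError> {
    let ts: i64 = header.trim().parse()?; // ParseIntError -> WebhookError via From
    if ts < 0 {
        return Err(WebhookError::BadTimestamp(ts));
    }
    Ok(ts)
}

fn main() {
    assert_eq!(parse_timestamp("1714000000").unwrap(), 1714000000);
    assert!(parse_timestamp("not-a-number").is_err());
}
```

Writing all of this by hand for every error type is exactly the boilerplate `thiserror` removes.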
Sometimes, however, especially in SaaS use cases where logs stay outside of the user's scope, it makes a lot of sense to keep the full error chain, possibly with additional context added along the way.
We're currently experimenting with [`error-stack`](https://github.com/hashintel/hash/tree/main/libs/error-stack), a library maintained by hash.dev that allows exactly that: attaching additional context and keeping it throughout your error tree. It works great as a layer on top of `thiserror`.
It provides an idiomatic API, actually wrapping the error type in a `Report` data structure that keeps a stack of all the errors, causes and any additional context you may have added, providing a lot of information in case of failure.
We've encountered a couple of hiccups, but this post is far too long already; more on that in a subsequent post!
## Wrapping up
Building our SaaS with Rust has been (and still is) a journey. A long, challenging journey at start, but also a pretty fun and rewarding one.
- **Would we have built our product faster with Scala?**
Certainly.
- **Would it be as effective?**
Maybe.
- **Would we still be as passionate and excited as we are today?**
Probably not.
Rust has pushed us to think differently about our code, to embrace new paradigms, and to constantly strive for improvement.
**Sure, Rust has its rough edges**. The learning curve can be steep, and the ecosystem is still evolving. But that's part of the excitement.
Beyond the technical aspects, the **Rust community has been an absolute delight**. The welcoming atmosphere, the willingness to help, and the shared enthusiasm for the language have made this journey all the more enjoyable.
<img src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExazJlZGppYjY5M3RwOG5sdHdudW94dzk4eXczZm5iMmN0YWUzdG10NyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/sn39fEb1LcHPGQ4b6h/giphy.gif" />
So, if you have the time and the inclination to explore a new and thriving ecosystem, if you're willing to embrace the challenges and learn from them, and if you have a need for performance, safety, and concurrency, then **Rust might just be the language for you**.
As for us, we're excited to continue building our SaaS with Rust, to keep learning and growing, and to see where this journey takes us. Stay tuned for more in-depth posts, or vote for which one we should do next in the first comment.
And if you enjoyed this post and found it helpful, don't forget to give [our repo](https://github.com/meteroid-oss/meteroid) a star! Your support means the world to us.
{% cta https://github.com/meteroid-oss/meteroid %} ⭐️ Star Meteroid ⭐️ {% endcta %}
Until next time, happy coding! | gaspardb |
1,835,293 | Best International Courier Services in India | A post by Courier Dunia | 0 | 2024-04-26T14:13:36 | https://dev.to/courierdunia/best-international-courier-services-in-india-236n |
 | courierdunia | |
1,835,701 | Sending and receiving text messages inside images with Python | The process of sending and receiving text messages inside images is part of the field of... | 0 | 2024-04-29T22:51:07 | https://dev.to/msc2020/envio-e-recebimento-de-mensagens-de-texto-dentro-de-imagens-com-python-1lna-temp-slug-3017731?preview=b263ae880631f618231d5285f7f0b9ec536ba58e40aa9307b95058821c9bd50b8a47f1512274a91b480669387f40d332890900f6e29da844db2b7274 | python, tutorial, braziliandevs | The process of sending and receiving text messages inside images is part of the field of [Steganography](https://pt.wikipedia.org/wiki/Esteganografia). In today's post, we show a simple way to do this using the Python language. ☕
---
## Prerequisites
To follow this tutorial you need to install the Python 3 library <u>[`Pillow` (`PIL`)](https://pillow.readthedocs.io/en/stable/)</u>:
```shell
pip install pillow
```
---
## Representing an RGB pixel
A digital image is made up of pixels, its smallest units. Each pixel of common image formats such as `.JPG`, `.PNG`, `.JPEG` is associated with three integer (`int`) values representing the amounts of the colors R (red), G (green) and B (blue). The values of the R, G and B colors range from 0 to 255. Combining these values produces the wide range of colors of the RGB system.
Some images have an extra component, besides the R, G and B color channels, called A (_alpha_), which controls the image's _transparency_. This value ranges from 0 to 1 (or from 0 to 255, as in `Pillow`). The closer to 0, the more transparent the image becomes.
In this post we will use the image below to run our tests.

<center><small><small><u>Source:</u> <a href="https://it.wikifur.com/wiki/Camaleonte">https://it.wikifur.com/wiki/Camaleonte</a></small></small></center>
---
## Example
Let's look at an example with the `Pillow` library.
```Python
# example.py

from PIL import Image

# path to the input image
filename_path = './images/img_camaleao.jpeg'
# image download: https://it.wikifur.com/wiki/Camaleonte

# loads the image as an Image object
img = Image.open(filename_path)

# prints the format, color system and size
print(f'Format: {img.format}')
print(f'Colors: {img.getbands()}')
print(f'Size: {img.size}')

x, y = 400, 100
print(f'Color values of the pixel at position \
(x, y) = ({x}, {y}): {img.getpixel((x, y))}')

# displays the image
img.show()

'''
expected terminal output:

Format: JPEG
Colors: ('R', 'G', 'B')
Size: (800, 500)
Color values of the pixel at position (x, y) = (400, 100): (184, 215, 95)
'''
```
In this example, the selected pixel belongs to the green chameleon, since the G component has the largest value in the ordered triple (R, G, B) = (184, **215**, 95). Using a [site](https://www.rapidtables.com/web/color/RGB_Color.html) to help with the conversion, we can check the color of the chosen pixel:

---
## Inserting a text message into an RGB image
Just as `Pillow` has a method for reading the information of a given pixel of an image, `getpixel()`, it also has one for writing a pixel. To set a pixel with colors (R, G, B), where 0{% katex inline %}\le{% endkatex %} R, G, B {% katex inline %}\le{% endkatex %}255, we use `putpixel()`.
The following code inserts a message into an image with RGB colors:
```Python
def insert_msg(img_original, msg):
    '''
    Input:
    . img_original: image with RGB colors
    . msg: a message as a string
    Output:
    . img_with_msg: image with the message embedded in some pixels
    '''
    img_with_msg = img_original.copy()
    x_cte = img_with_msg.size[0] - 1
    # encodes the message from a string into integers
    msg_encoded_bytes = msg.encode(encoding='utf-8', errors='strict')
    msg_encoded = str(int.from_bytes(bytes=msg_encoded_bytes, byteorder='little'))
    # inserts the message into the image's R color channel
    for j, n in enumerate(msg_encoded):
        rgb_pixel = list(img_original.getpixel((x_cte, j)))
        red_color = rgb_pixel[0]  # (r, g, b)[0]
        try:
            if red_color < 255 - 9:
                r = red_color % 10
                rgb_pixel[0] = red_color - r + int(n)
            else:
                rgb_pixel[0] = int(n)
            img_with_msg.putpixel((x_cte, j), tuple(rgb_pixel))
        except Exception as e:
            print(e)
            img_with_msg.putpixel((x_cte, j), tuple(rgb_pixel))
    # adds pixels at the end as a flag
    flag_pixel = (233, 233, 233)
    for j in range(5):
        k = len(msg_encoded) + j
        rgb_pixel = img_original.getpixel((x_cte, k))
        try:  # tries to write a flag_pixel
            img_with_msg.putpixel((x_cte, k), flag_pixel)
        except:  # on exception, restores the original pixel
            img_with_msg.putpixel((x_cte, k), rgb_pixel)
    return img_with_msg
```
**Strategy used:** To insert the message into the image, we first encode it as a list of integers and then insert each integer value into the red color channel R, keeping the `x` coordinate fixed at one value (`x = x_cte`).
The insertions change the pixels of the original image. For example, suppose the value of the chosen original pixel is, say, `(23, 127, 53)` and we want to insert the number `7` from the list of integers encoding the message. Then the altered pixel becomes `(27, 127, 53)`.
This insertion strategy is defined in the step `r = red_color % 10`. Values greater than `255`, the maximum allowed, are handled with the `try/except`.
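The digit-insertion arithmetic can be checked in isolation, without Pillow. In the sketch below (the helper names and channel values are just for illustration), we replace the last decimal digit of a red-channel value with a message digit, exactly as `insert_msg()` does:

```python
def embed_digit(red: int, digit: int) -> int:
    """Replace the last decimal digit of a red-channel value (0-255)
    with a message digit, as done inside insert_msg()."""
    if red < 255 - 9:
        return red - (red % 10) + digit
    # near the upper bound we just store the digit itself
    return digit


def read_digit(red: int) -> int:
    """Recover the embedded digit, as done inside extract_msg()."""
    return red % 10


# the example from the text: pixel (23, 127, 53) receiving digit 7
assert embed_digit(23, 7) == 27
assert read_digit(27) == 7
# the round trip works for every valid (channel, digit) pair
assert all(read_digit(embed_digit(r, d)) == d
           for r in range(256) for d in range(10))
```

Since the red value changes by at most 9 units out of 255, the alteration is barely visible.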
**Note:** This way of inserting the message may not be the most efficient, nor does it intend to be. The goal of this post is to present _one_ way, among many, of accomplishing the task.
After inserting the encoded message into the image, we terminate the list of altered pixels with a _flag_. The _flag_ consists of setting the 5 pixels following the last one used to the value `(233, 233, 233)`. This will help us later when recovering the sent message.
**Fun fact:** The [number `233`](https://en.wikipedia.org/wiki/233_(number)) has a somewhat cabalistic side, since it is at the same time a [prime number](https://en.wikipedia.org/wiki/Prime_number), a [Sophie Germain prime](https://en.wikipedia.org/wiki/Safe_and_Sophie_Germain_primes), a [Srinivasa Ramanujan prime](https://en.wikipedia.org/wiki/Ramanujan_prime) and also a [Fibonacci prime](https://en.wikipedia.org/wiki/Fibonacci_prime). Numbers with these characteristics are extremely important to the field of cryptography. For example, the [PrimeGrid](https://www.primegrid.com/) project has been investigating primes like these since 2005.
---
## Extracting the text message from the image
To extract the message embedded in the image, we use this function:
```Python
def extract_msg(img_with_msg):
    '''
    Input:
    . img_with_msg: image with a message inserted by the `insert_msg()` function
    Output:
    . msg_decoded_str: the message as a string
    '''
    img_input = img_with_msg.copy()
    x_cte = img_input.size[0] - 1
    # finds the position of the flag
    is_flag = 0
    flag_pixel = (233, 233, 233)
    list_pixels = []
    for j in range(img_input.size[1] - 1):
        rgb_pixel = img_input.getpixel((x_cte, j))
        if rgb_pixel == flag_pixel:
            is_flag += 1
            if is_flag == 5:
                j_end = j
    # builds the list of integers corresponding to the message encoded as integers
    msg_encoded_int = []
    for j in range(j_end - 4):
        rgb_pixel = list(img_input.getpixel((x_cte, j)))
        red_color = rgb_pixel[0]  # (r, g, b)
        r = str(red_color % 10)
        msg_encoded_int.append(r)
        list_pixels.append(rgb_pixel)
    # decodes, turning the list of integers into a string with the chosen message
    msg_encoded_int = ''.join(msg_encoded_int)
    msg_encoded_int = int(msg_encoded_int)
    msg_decoded_bytes = msg_encoded_int.to_bytes((msg_encoded_int.bit_length() + 7) // 8, 'little')
    msg_decoded_str = msg_decoded_bytes.decode('utf-8')
    return msg_decoded_str
```
In short, to extract the text from the image, we just follow the reverse of the process done in `insert_msg()`. That is, we capture the list of integers inserted into the altered pixels and decode that list into a string, which is the original message.
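The string ↔ integer encode/decode pair used by the two functions can also be tested on its own, as in this small sketch (the helper names are ours, only for illustration):

```python
def encode_msg(msg: str) -> str:
    """String -> decimal digit string, as done in insert_msg()."""
    raw = msg.encode("utf-8")
    return str(int.from_bytes(raw, byteorder="little"))


def decode_msg(digits: str) -> str:
    """Decimal digit string -> original string, as done in extract_msg()."""
    n = int(digits)
    raw = n.to_bytes((n.bit_length() + 7) // 8, byteorder="little")
    return raw.decode("utf-8")


msg = "Hello, steganography!"
assert decode_msg(encode_msg(msg)) == msg  # lossless round trip
assert encode_msg(msg).isdigit()           # only decimal digits get embedded
```

This round trip is lossless, so the pixel layer is the only place where information could be corrupted.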
---
## Tests
To test the functions we created, we will use the chameleon image shown earlier.
The following code uses both functions, for insertion and subsequent extraction of the message. To avoid repeating their code, we put both `insert_msg()` and `extract_msg()` in `utils.py` and import its contents with `from utils import *`.
```Python
# insere_extrai_msg_em_imagem.py

from PIL import Image
from utils import *

# path to the JPEG image
img_path = './images/img_camaleao.jpeg'  # https://it.wikifur.com/wiki/Camaleonte

# chosen message
msg_input = 'Há mais força no perdão do que na ofensa, há mais força no reparo do que no erro. Raduan Nassar.'

# loads the image as an Image object
img = Image.open(img_path)
# uncomment the line below to display the original image
# img.show(title='Original image')

# inserts the message into the image
img_encoded = insert_msg(img, msg_input)
# uncomment to display the image with the embedded message
# img_encoded.show(title='Image with message')

# extracts the message from the image
msg_decoded = extract_msg(img_encoded)
print(msg_decoded)

'''
expected output:

'Há mais força no perdão do que na ofensa, há mais força no reparo do que no erro. Raduan Nassar.'
'''
```

<center><small><small><u>About the figure:</u> The image on the left is the original and the one on the right contains the embedded message. If we zoom in on the right-hand image, we will notice a sequence of light pixels along part of the border.</small></small></center>
---
## Other ways to insert and extract messages in images
There are more efficient ways to insert and extract messages in digital images. A widely used approach considers the binary encoding of the message and of the image's pixels and alters the [least significant bits (LSB)](https://en.wikipedia.org/wiki/Bit_numbering) of the image. That way, the transmitted messages can be fairly long, and the chance that the changes are noticeable to the naked eye is low. Good material on the subject can be found at this [link](https://www.vivaolinux.com.br/artigo/Esteganografia-e-Esteganalise-transmissao-e-deteccao-de-informacoes-ocultas-em-imagens-digitais/?pagina=1) or [here](https://dev.to/vapourisation/steganograhy-part-1-2j73).
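As a taste of the LSB approach, the sketch below (plain lists of channel values stand in for real pixel data) hides one bit per channel value by overwriting only its least significant bit, a change of at most 1 out of 255:

```python
def hide_bits(channels: list[int], bits: list[int]) -> list[int]:
    """Overwrite the least significant bit of each channel value."""
    assert len(bits) <= len(channels)
    out = list(channels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear LSB, then set it to the payload bit
    return out


def reveal_bits(channels: list[int], n: int) -> list[int]:
    """Read back the first n hidden bits."""
    return [c & 1 for c in channels[:n]]


pixels = [184, 215, 95, 23, 127, 53, 200, 17]   # fake channel values
secret = [1, 0, 1, 1, 0, 1, 0, 0]               # one byte of payload
stego = hide_bits(pixels, secret)
assert reveal_bits(stego, len(secret)) == secret
# every channel changed by at most 1 unit
assert all(abs(a - b) <= 1 for a, b in zip(pixels, stego))
```

Note that, as with the digit-based scheme above, LSB hiding only survives lossless formats such as `.PNG`; lossy JPEG compression would destroy the hidden bits.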
<center>👾 👻 🦎 🐉 🧌 🖼️ 🕴️</center>
<br/>
***We hope you enjoyed it, and we thank you for reading!***
| msc2020 |
1,836,067 | Tweet Media Extractor Plugin | This is a submission for the Coze AI Bot Challenge: Trailblazer. What I Built There are... | 0 | 2024-04-27T11:24:46 | https://dev.to/sojinsamuel/tweet-media-extractor-plugin-4a35 | cozechallenge, devechallenge, ai, machinelearning | *This is a submission for the [Coze AI Bot Challenge](https://dev.to/devteam/join-us-for-the-coze-ai-bot-challenge-3000-in-prizes-4dp): Trailblazer.*
## What I Built
<!-- Tell us what your plugin or workflow does and what problem it solves -->
There are already amazing Twitter-related plugins available on the Coze Plugin Store that can be used with Tweets.

What I needed was a plugin that could extract the media contained in a Tweet, whether an image or a video (which I needed in different resolutions).

So I built one to use on my [TweetMediaManager bot](https://dev.to/sojinsamuel/how-tweetmediamanager-transforms-tweet-urls-into-valuable-resources-99n), where it is an integral part of the bot's functionality.
When a user submits a tweet or post URL:
`https://twitter.com/<username>/status/<id>`
It has a unique id associated with it, which is the input parameter used in TweetMediaExtractor to perform a lookup via the Twitter API v2.
If the tweet contains media, the JSON response will contain an `includes` field.
```json
// Assuming the target Tweet contained a video
{
"data": {
...
},
"includes": {
"media": [
{
"media_key": "77777777",
"type": "video",
"variants": [
{
"bit_rate": 632000,
"content_type": "video/mp4",
"url": "https://video.twimg.com/ext_tw_video/12345678/pu/vid/avc1/320x568/4024rVUaMBVYHT_b.mp4?tag=12"
},
{
"bit_rate": 950000,
"content_type": "video/mp4",
"url": "https://video.twimg.com/ext_tw_video/12345678/pu/vid/avc1/480x852/5JGUFqyletKVFUuF.mp4?tag=12"
},
{
"bit_rate": 2176000,
"content_type": "video/mp4",
"url": "https://video.twimg.com/ext_tw_video/12345678/pu/vid/avc1/720x1280/E9zgV0hONDsfOwrq.mp4?tag=12"
},
{
"content_type": "application/x-mpegURL",
"url": "https://video.twimg.com/ext_tw_video/12345678/pu/pl/ORE8nOl29XDVW9kz.m3u8?tag=12&container=cmaf"
}
]
}
],
"users": [
...
]
}
}
```
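To give an idea of the parsing step the plugin performs, here is a hedged sketch (the function name and fixture are illustrative, not the plugin's actual code) of pulling the MP4 variant URLs out of a response shaped like the one above, highest bitrate first:

```javascript
// Extracts downloadable MP4 variant URLs from a Twitter API v2
// lookup response (shape as shown above), highest bitrate first.
function extractVideoVariants(response) {
  const media = (response.includes && response.includes.media) || [];
  return media
    .filter((m) => m.type === "video")
    .flatMap((m) => m.variants || [])
    .filter((v) => v.content_type === "video/mp4")
    .sort((a, b) => (b.bit_rate || 0) - (a.bit_rate || 0))
    .map((v) => v.url);
}

// Minimal fixture mirroring the sample response above
const sample = {
  data: {},
  includes: {
    media: [
      {
        media_key: "77777777",
        type: "video",
        variants: [
          { bit_rate: 632000, content_type: "video/mp4", url: "https://video.twimg.com/low.mp4" },
          { bit_rate: 2176000, content_type: "video/mp4", url: "https://video.twimg.com/high.mp4" },
          { content_type: "application/x-mpegURL", url: "https://video.twimg.com/playlist.m3u8" },
        ],
      },
    ],
  },
};

console.log(extractVideoVariants(sample)[0]); // highest-bitrate MP4 first
```

The `application/x-mpegURL` playlist variant is filtered out, since users want a directly downloadable file.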
Even though we can download an image by going to DevTools via Inspect in Chrome, using the same method for a video will only give you a blob URL, which isn't what we want. Many people share awesome infographic or GIF representations of AWS architectural patterns, which was my motivation for building this custom plugin.
At the time of building this, I wasn't familiar with Coze Studio and was honestly confused: what are tools? Where do they even come into play? Where would I safely store my Twitter API keys? What does "Tool name" even mean?
Even though Coze already offers options to bind existing external service APIs, I still needed the Studio option because I needed to implement the OAuth 1.0 authentication method for the Twitter API.
So my second option was to create a REST API via AWS API Gateway with a resource path `/getmedia`, which executes a Lambda function; the required id was passed via query param by registering it as an existing service (using the API Gateway REST API endpoint).
## Demo
<!-- If submitting a plugin, share a link to your plugin in the Coze Plugin Store -->
You can check out the [TweetMediaExtractor plugin from Coze plugin store](https://www.coze.com/store/plugin/7362301488493969413?from=explore_card)

This is the JSON response we saw earlier, the plugin usage in action can be tested via my [TweetMediaManager bot](https://www.coze.com/s/ZmFqHVFVQ/)
## Journey
Even though I moved forward with using API Gateway instead of Coze Studio, it was still a great experience to learn about this AWS service and made my development process much easier.
Still, I wasn't gonna let go of using Coze Studio that easily!
It is indeed a learning process, so I created another plugin called TweetGagger (yeah, another plugin powered by the Twitter API), which I made work from the Studio itself instead of using API Gateway.
Read: TweetGagger Plugin Submission
<!-- Tell us about your process, what you learned, anything you are particularly proud of, what you hope to do next, etc. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
<!-- Thanks for participating! --> | sojinsamuel |
1,836,079 | Dr Aditya Raj | Orthopaedic Spine Surgeon Mumbai | Hello, I am Dr Aditya Raj (Orthopaedic Spine Surgeon SportsDocs) An Orthopaedic spine surgeon with a... | 0 | 2024-04-27T11:44:19 | https://dev.to/dradityaraj/dr-aditya-raj-orthopaedic-spine-surgeon-mumbai-8ab | bestorthopaedicsurgeoninmumbai, orthopaedic, healthcare, bonespecialist | Hello, I am [Dr Aditya Raj](https://synapsespine.in/dr-aditya-raj-orthopaedic-spine-surgeon/) (Orthopaedic Spine Surgeon SportsDocs)
I am an orthopaedic spine surgeon with a passion for delivering exceptional care. With extensive training both in India and abroad, including specialized fellowships in complex spine surgery and deformity correction, I'm dedicated to providing my patients with the highest standard of treatment.
If you're seeking quality spine care that's tailored to international standards, don't hesitate to reach out. Schedule a [consultation](https://maps.app.goo.gl/4WyGZTEjwNWvUDPA8) today or call us on [9372671858](Tel:9372671858) and let's work together towards your optimal spine health and well-being. Your journey to a pain-free life starts here.
| dradityaraj |
1,836,095 | Functions in JavaScript: A Comprehensive Guide | Understanding Declaration, Parameters, Return Statements, Function Expressions, Arrow Functions,... | 26,790 | 2024-04-27T12:31:12 | https://dev.to/sadanandgadwal/functions-in-javascript-a-comprehensive-guide-40d6 | webdev, javascript, beginners, sadanandgadwal | > Understanding Declaration, Parameters, Return Statements, Function Expressions, Arrow Functions, and More
Functions are a fundamental concept in JavaScript, allowing developers to encapsulate code for reuse, organization, and abstraction. In this guide, we'll explore various aspects of functions in JavaScript, including their declaration, parameters, return statements, function expressions, and arrow functions.
**1. Declaration of Functions**
In JavaScript, functions can be declared using the function keyword followed by the function name and a pair of parentheses () containing optional parameters.
Here's a basic example:
```
function greet(name) {
  return `Hello, ${name}!`;
}

console.log(greet('sadanand gadwal')); // Output: Hello, sadanand gadwal!
```
- The greet function is declared using the function keyword. It takes a parameter name and returns a greeting message using string interpolation.
**2. Parameters**
Functions can accept parameters, which are variables that hold the values passed to the function when it is called. Parameters are declared within the parentheses following the function name.
Here's an example:
```
function add(a, b) {
  return a + b;
}

console.log(add(5, 3)); // Output: 8
```
- The add function takes two parameters a and b and returns their sum.
- The subtract function takes two parameters a and b and returns the result of a - b.
**3. Return Statements**
Functions can use the return statement to send a value back to the code that called the function. If a function doesn't explicitly return a value, it implicitly returns undefined.
Here's an example:
```
function subtract(a, b) {
  return a - b;
}

console.log(subtract(10, 4)); // Output: 6
```
- Both add and subtract functions use the return statement to return the result of the arithmetic operation.
**4. Function Expressions**
Function expressions define functions as part of an expression, rather than as a declaration. They can be named or anonymous and are often used to assign functions to variables.
Here's an example of a named function expression:
```
const multiply = function multiply(a, b) {
  return a * b;
};

console.log(multiply(7, 8)); // Output: 56
```
And here's an example of an anonymous function expression:
```
const divide = function(a, b) {
  return a / b;
};

console.log(divide(100, 5)); // Output: 20
```
- The multiply function is defined using a named function expression. The function is assigned to the variable multiply.
- The divide function is defined using an anonymous function expression. The function is assigned to the variable divide.
**5. Arrow Functions**
Arrow functions are a more concise way to write functions in JavaScript, introduced in ES6. They have a more compact syntax and automatically bind this to the surrounding code's context. Here's an example:
```
const square = (x) => {
  return x * x;
};

console.log(square(4)); // Output: 16
```
For simple functions that have only one expression in the body, the curly braces and return keyword can be omitted:
```
const cube = (x) => x * x * x;
console.log(cube(3)); // Output: 27
```
- The square function is defined using an arrow function. It takes a parameter x and returns the square of x.
- The cube function is also defined using an arrow function, but with a more concise syntax since it has only one expression in its body.
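The `this` binding mentioned above can be seen directly. In this small illustrative example (the `counter` object is made up for demonstration), the arrow function passed to `forEach` does not create its own `this`, so it still refers to the enclosing object:

```javascript
const counter = {
  count: 0,
  incrementAll(values) {
    // The arrow function does not create its own `this`,
    // so `this` here still refers to `counter`.
    values.forEach(() => {
      this.count += 1;
    });
    return this.count;
  },
};

console.log(counter.incrementAll([10, 20, 30])); // Output: 3
```

Had the callback been written with the `function` keyword, `this` inside it would not refer to `counter`, and the counter would not update as intended.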
**6. Example: Using Functions**
```
function calculate(operation, a, b) {
  switch (operation) {
    case 'add':
      return add(a, b);
    case 'subtract':
      return subtract(a, b);
    case 'multiply':
      return multiply(a, b);
    case 'divide':
      return divide(a, b);
    default:
      return 'Invalid operation';
  }
}

console.log(calculate('add', 5, 3)); // Output: 8
console.log(calculate('multiply', 4, 6)); // Output: 24
console.log(calculate('divide', 10, 2)); // Output: 5
console.log(calculate('power', 2, 3)); // Output: Invalid operation
```
The calculate function takes three parameters: operation, a, and b. It uses a switch statement to determine which operation to perform (add, subtract, multiply, divide) and calls the corresponding function with the given arguments.
The switch statement also handles the case when an invalid operation is provided, returning an error message.
**Conclusion**
Functions are a powerful feature in JavaScript, allowing developers to write modular and reusable code. Understanding the different ways to declare and use functions is essential for any JavaScript developer.
**Bonus: Complete code**
```
// Declaration of Functions
function greet(name) {
  return `Hello, ${name}!`;
}

// Parameters
function add(a, b) {
  return a + b;
}

// Return Statements
function subtract(a, b) {
  return a - b;
}

// Function Expressions
const multiply = function multiply(a, b) {
  return a * b;
};

const divide = function(a, b) {
  return a / b;
};

// Arrow Functions
const square = (x) => {
  return x * x;
};

const cube = (x) => x * x * x;

// Example: Using Functions
function calculate(operation, a, b) {
  switch (operation) {
    case 'add':
      return add(a, b);
    case 'subtract':
      return subtract(a, b);
    case 'multiply':
      return multiply(a, b);
    case 'divide':
      return divide(a, b);
    default:
      return 'Invalid operation';
  }
}

console.log(greet('sadanand gadwal')); // Output: Hello, sadanand gadwal!
console.log(add(5, 3)); // Output: 8
console.log(subtract(10, 4)); // Output: 6
console.log(multiply(7, 8)); // Output: 56
console.log(divide(100, 5)); // Output: 20
console.log(square(4)); // Output: 16
console.log(cube(3)); // Output: 27
console.log(calculate('add', 5, 3)); // Output: 8
console.log(calculate('multiply', 4, 6)); // Output: 24
console.log(calculate('divide', 10, 2)); // Output: 5
console.log(calculate('power', 2, 3)); // Output: Invalid operation
```
---
🌟 Stay Connected! 🌟
Hey there, awesome reader! 👋 Want to stay updated with my latest insights,Follow me on social media!
[🐦](https://twitter.com/sadanandgadwal) [📸](https://www.instagram.com/sadanand_gadwal/) [📘](https://www.facebook.com/sadanandgadwal7) [💻](https://github.com/Sadanandgadwal) [🌐](https://sadanandgadwal.me/) [💼
](https://www.linkedin.com/in/sadanandgadwal/)
[Sadanand Gadwal](https://dev.to/sadanandgadwal)
| sadanandgadwal |
1,836,139 | Casibom Login | The Casibom casino site is one of the betting and casino sites that has been talked about the most recently... | 0 | 2024-04-27T14:48:01 | https://dev.to/casibomgirisi/casibom-giris-3h11 |

The Casibom casino site is one of the betting and casino sites that has been talked about the most recently. If you play casino games, there is no way you haven't heard of Casibom.
Its [casibom giriş](https://casibom-girisi.com) address, which changes every week, is known as Turkey's best casino site. Despite being very new to the industry, Casibom is a trustworthy casino site that pays out.
Rather than falling behind its rivals, it is ahead of most long-established casino sites. Being long established matters a great deal in this business. At the same time, reaching players and communicating with them is very important. Thanks to the friendly staff on their live support, players can easily get in touch 24/7.
They managed to reach a large number of players very quickly. Of course, certain factors lie behind this. First of all, looking at the site's name you would expect only casino slot games and live casino games, but its services exceed those expectations. We are talking about a company that stands out with its wide range of sports betting options and high odds.
It managed to attract attention with the advantageous high-percentage bonuses it offers to sports bettors as well as casino and live casino players. For this reason, they achieved good advertising success in the industry in a short time, and thus managed to attract a large number of players.
Having covered the Casibom site's profitable bonuses and popularity, we would like to explain how to become a member. You can access the Casibom site through your browser from your mobile devices, computers and tablets. The site being mobile-friendly is also an important convenience for users. It is very important these days for such sites to be compatible with mobile devices. Thanks to our mobile devices, we can now do almost everything with a single tap. Although not every player has a computer or tablet, very few people these days lack a smartphone.
Even on the go, you can place bets with the live betting options and watch matches live. You can even easily access live casino games. Considering that most casino players travel abroad because casinos in our country have been legally shut down by the state, casino sites being mobile-compatible is another nice thing. You can have fun and earn without interruption and reliably, because the Casibom live casino site is licensed.
When you enter Casibom, you can easily sign up by choosing the sign-up option, without any document requirements and without paying a fee. Just as you pay no membership fee on the site, deposit and withdrawal options are also available. There are no deduction fees, and thanks to the many methods used for financial transactions, it managed to reach a wide audience. The site also lets you carry out financial transactions 24/7.
Casibom's current login address is constantly updated, just as the site constantly updates its own content. This is very important for the website's reliability. As you know, live betting and casino sites are very popular, and a large number of people play at these companies. This also means that scammers have moved into the internet world. Unfortunately, if such people exist in our lives, they exist on the internet too. For this reason, they may create fake login addresses. We should not be fooled by such things, because otherwise we may lose our money to bad people.
At a place we enter simply to have fun and earn money, let's neither miss out on our fun nor lose our money to these bad people. For this reason, we need to be careful. We need to follow the current login addresses of the sites we frequently visit. Follow the current address of the Casibom website on social media. Casibom constantly shares its new login address on Twitter.
Unfortunately, our country's casinos have been banned by the state. For this reason, there is a growing number of online casino sites in our country. However, according to a decision taken by the state, these cannot legally provide services either. You may have noticed that the login address of any casino site is constantly being changed. Sites must constantly change their login addresses to keep providing their services. In this subsection we will also look at Casibom's new address.
Like other sites, Casibom constantly changes its login addresses, so your access to the website is never blocked and you can place your bets without interruption. Where will we follow Casibom's new address? Current login addresses are sent to your mobile phone number via text message when you sign up on the site. In addition, the current login address is sent to the e-mail address you used when signing up.
The site's mobile app is my favorite. You can download the mobile app via the links on the site. Thanks to the Casibom mobile app, the problem of finding the new login address disappears. This mobile app always gives you access to the current address. People who download the mobile app also receive surprising bonuses and free spins. The mobile site and the mobile app are used in the same way. On the mobile site you can carry out operations such as depositing, withdrawing and playing games.
Casibom’un aktif olarak kullandığı Twitter adresinden bahsetmeden geçmek istemiyorum. Sosyal medyayı en çok kullanan online casino sitelerinden biri Casibom olduğunu söylemek abartmış olur. Casibom, Twitter gibi çok önemli bilgilerin paylaşıldığı ve insanlara ulaşmanın en kolay ve en hızlı olduğu bir sosyal medya platformunda çok iyi performans gösteriyor.
Casibom, sosyal medyadaki gelişmeleri takip ederek Twitter’dan paylaşımlar yapıyor. Twitter’da çekilişler düzenleyerek bonuslar ve ücretsiz spinler takipçilerine dağıtıyor. Free spin söz konusu olduğunda, Casibom’un tam bir free spin dağıtıcısı olduğunu belirtmek isterim. Canlı casino ve spor bahisleri sitelerinde hiçbir şekilde bulamayacağınız bir uygulamadan bahsediyorum. Her hafta üyelerine ücretsiz bir spin veriyor. Bunun için siteye üye olmak ve bir kez para yatırmanız gerekiyor. Başka bir koşul istenmiyor.
Sitede yatırım sıklığınıza ve yatırım miktarınıza göre kademeler vardır. Altın, gümüş ve bronz kademeler var. Her kademeye verilen ücretsiz spinlerin miktarı ve hangi oyunda geçerli olacakları değişmektedir. En alt kademede bile en az 25 ücretsiz spin alabileceğinizi belirtmek isterim. Sitenin giriş adresi ve Twitter hesabı, hangi oyunları oynadığınızı gösterecektir. Casibom, müşterilerinin gönlünde taht kurdu ve sosyal medya hesaplarını aktif bir şekilde kullanıp bonuslar sunarak eğlenceyi güvenli bir şekilde tadabilirsiniz.
Güncel giriş adresini takip edin. Maalesef, bazı kötü kişiler sahte adresler kullanarak paranıza göz dikiyorlar. Bu nedenle gelen mesajları ve e-postaları inceleyin. Casibom’un Instagram hesaplarını ve bizim sayfamızı da incelemenizi tavsiye ederim. Kötü insanlara para vermenizi istemem.
Casibom Güncel: The Latest Betting and Casino News
The keyword "Casibom güncel" ("Casibom current") is a term that betting and casino enthusiasts follow with interest. The Casibom platform stands out in the industry by continuously offering its users up-to-date and engaging content. Under the "Casibom güncel" heading, you can find the latest betting and casino news and updates.
For betting enthusiasts, Casibom's current news is quite valuable. The site offers up-to-date content covering the latest developments in sports betting, pre-match analyses, and betting tips. You can follow current betting opportunities in football, basketball, tennis, and many other sports, and you can increase your chances of winning with the high odds and special promotions Casibom offers.
For casino enthusiasts, Casibom's current news covers announcements of new games, special tournaments, and jackpot updates. You can access a wide range of games, from the newest slots to live dealer tables, and you can make your casino experience even more exciting with the bonuses and loyalty programs Casibom provides.
With Casibom's current content, betting and casino enthusiasts can always stay informed about the latest developments in the industry. Thanks to the site's user-friendly interface and mobile compatibility, you can access up-to-date content whenever and wherever you want. With the quality service and reliable gaming environment Casibom offers, you can take your betting and casino enjoyment to the highest level.
Sign up now and follow the latest news with Casibom's current content for an adrenaline-filled gaming experience and profitable bets!
Casibom: An Online Betting and Casino Platform
Casibom is one of the go-to destinations for online betting and casino enthusiasts. With its wide selection of games, high odds, and reliable service, it has become a user favorite.
Casibom Login: Fast and Reliable Access
The Casibom login process is quite simple. Users can reach Casibom's official website in their web browser and then sign in with their username and password. This gives them access to all the services Casibom offers.
Casibom Twitter: Stay Up to Date
By following Casibom's official Twitter account, you can get up-to-date information and announcements about the platform. Don't forget to follow Casibom's Twitter account to learn about new promotions, bonuses, and current login addresses.
Casibom Current: Always the Newest Address
Casibom's current login address is a piece of information users can always rely on. You can find the current address via the platform's official website or its social media accounts. To bet safely, always remember to check the current address.
casibom: Casibom is a broad platform offering a variety of online betting and casino games. It provides its users with a reliable gaming experience.
casibom giriş: Users who want to log in to Casibom should follow the platform's official communication channels to find the current login address.
casibom güncel giriş: To reach Casibom's current login address, users should regularly check the platform's official Twitter account.
casibom Twitter: Casibom's official Twitter account informs users about current news, campaigns, and events.
casibom giriş Twitter: To get the latest information about Casibom's current login address, users should follow Casibom's official Twitter account.
casibom güncel: Casibom's current game options and campaigns offer a variety that will appeal to users.
casibom güncel giriş: Casibom's current login address lets users access the platform smoothly.
casibom Twitter giriş: For instant information about Casibom's current login address, users should visit Casibom's official Twitter account.
casibom giriş adresi: To reach Casibom's current login address, users should check the platform's official communication channels.
casibom giriş güncel: Casibom's current login address helps users maintain uninterrupted access to the platform.
casibom güncel giriş adresi: To reach Casibom's current login address, users should follow Casibom's official communication channels.
casibom giriş Twitter: To get the latest information about Casibom's current login address, users should regularly check Casibom's official Twitter account.
casibom güncel giriş Twitter: For instant updates on Casibom's current login address, users should follow Casibom's official Twitter account.
casibom resmi giriş: Casibom's official login address lets users access the platform securely.
twitter Casibom: Casibom's official Twitter account sends users instant notifications about current news and campaigns.
casibom uygulama: Casibom's mobile application lets users access their favorite games anytime, anywhere.
casibom mobil: Casibom's mobile application lets users easily play the platform's games on their mobile devices.
casibom giris: Users who want to log in to Casibom should follow the platform's official communication channels to find the current login address.
casıbom: Casibom is a reliable platform offering online betting and casino games. It provides its users with a wide selection of games and a secure gaming environment.
casibom guncel giriş: To reach Casibom's current login address, users should regularly check the platform's official Twitter account.
casibom yeni giriş: To get the latest information about Casibom's new login address, users should follow the platform's official communication channels.
casibom güvenilir mi: Casibom takes a range of measures for user safety and satisfaction, providing a trustworthy platform. It offers an online gaming environment with high user satisfaction and security standards.
casibom resmi: To reach Casibom's official website, users should follow the platform's current login address. The official site is a trustworthy online gaming platform where users can play with confidence.
Visit here: https://casibom-girisi.com | casibomgirisi |
1,836,402 | And the nominees for “Best Cypress Helper” are: Utility Function, Custom Command, Custom Query, Task, and External Plugin | And the Oscar goes to… ACT 1: EXPOSITION On numerous occasions, colleagues have come... | 27,209 | 2024-04-27T23:59:26 | https://medium.com/@sebastian-cs/and-the-nominees-for-best-cypress-helper-are-utility-function-custom-command-custom-query-6af26e6d1597 | cypress, testing, automation, qa | **And the Oscar goes to…**
---
### ACT 1: EXPOSITION
On numerous occasions, colleagues have come to me with a question that seems to resonate among many Cypress users: Which approach is best for reusing actions or assertions when writing tests? Should they opt for a _JavaScript Utility Function_, a _Custom Command_, perhaps a _Custom Query_, or even one of the so-called _Tasks_? What about an _External Plugin_?
This query isn’t unique to my circle; it’s a topic that even once in a while surfaces in the Cypress.io Discord community. The myriad of methods available in Cypress can be overwhelming, but this versatility is also what makes this tool so powerful.
If you’re reading this blog post in the hope of finding a definitive answer as to which Cypress helper deserves the “Oscar,” then this might be the point where you choose to stop reading and check back in my next entry. After all, when it comes to movie tastes, nothing is written in stone.
However, if you’re open to exploring some recommendations and understanding when one method might be more advantageous than another, then I believe you are be in the right place.
I had the idea that presenting a sneak peek of each Cypress Helper on a ‘single screen’ might just help you decide which one to ‘watch’ as a full feature film, based on your mood or needs on any given day.
My hope is that by the end of this post, you will have a stronger set of strategies to enhance your Cypress toolkit for everyday use. So get the popcorn ready and enjoy the previews. 🍿
---
### ACT 2: CONFRONTATION
In Cypress.io, the decision to use a _JavaScript Utility Function_, _Custom Command_, _Custom Query_, _Task_ or even an _External Plugin_ should depend on the specific needs of your test suite and the scope of functionality you are aiming to achieve.
So… getting to the point, in what situations can each of them lend you a better hand and save you a lot of work?
Let’s look at them one by one.
#### JavaScript Utility Function
<p align="center">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ug24dfwotx1eare347m.png" />
</p>
- Use _JavaScript Utility Functions_ for simple, **synchronous** operations that don’t need to interact with the Cypress command chain.
- Ideal for data transformations, calculations, or any generic JavaScript functions.
- You could define them directly in your test spec for cases where the operation is very specific to the scope of those tests, or wrap several of them in a helper file in the **`cypress/support`** folder if you plan to use them across multiple test specs.
> _There is nothing preventing you from returning a Cypress chainable object from the call to JavaScript Utility Functions, and in certain cases, it might even be quite convenient._
>
> _However, given that **the majority of JavaScript code is synchronous**, and considering that the nature of Cypress commands is asynchronous and they get queued for execution at a later time, you might want to consider using a **Custom Command** instead of a JavaScript Utility Function when returning a Cypress chainable object._
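To make the distinction concrete, here is what a typical utility function looks like: plain, synchronous JavaScript with no `cy` in sight. The file name and logic below are only an illustration:

```javascript
// cypress/support/utils.js: a plain synchronous helper (hypothetical example).
// It touches no Cypress APIs, so it can also be unit-tested in isolation.
function slugify(title) {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse runs of non-alphanumerics into dashes
    .replace(/^-+|-+$/g, '');    // strip leading/trailing dashes
}

module.exports = { slugify };
```

In a spec you would simply `const { slugify } = require('../support/utils')` and call it like any other function, for example inside a `cy.then()` callback.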
#### Custom Command
<p align="center">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/flq43fc4ihn0jh6k8mj9.png" />
</p>
This is when things start to get interesting…
- Use _Custom Commands_ to create reusable sets of Cypress commands that can be called as a single command.
- They are **asynchronous**, and they can return a Cypress chainable object, over which you can run assertions.
- A _Custom Command_ can: start a new Cypress chain (called a parent command), receive a previous subject and continue an existing chain (called a child command), or either start a chain or use an existing chain (called a dual command).
- Added via `Cypress.Commands.add()` and becomes part of the Cypress chainable interface. You can also overwrite existing commands using `Cypress.Commands.overwrite()`.
- They are defined in **`cypress/support/commands.js`** and are extremely helpful for actions performed frequently in your tests, like custom login procedures or form submissions.
- They can be reused across multiple test specs in your Cypress project.
- BE AWARE! **Custom Commands are executed once and do not have built-in retry-ability**. If you want your method to have retry-ability, it is better to use a **_Custom Query_**.
> _If you would like to dig deeper in the intricacies of Custom Commands you can visit the Cypress documentation [Custom Commands](https://docs.cypress.io/api/cypress-api/custom-commands), [Building Cypress Commands](https://learn.cypress.io/advanced-cypress-concepts/building-the-right-cypress-commands), and [Custom Cypress Command Examples](https://learn.cypress.io/real-world-examples/custom-cypress-commands)._
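For illustration, a minimal parent-style Custom Command might look like this; the route and `data-cy` selectors are hypothetical and would need to match your application:

```javascript
// cypress/support/commands.js: a hypothetical login command (parent command)
Cypress.Commands.add('login', (username, password) => {
  cy.visit('/login');
  cy.get('[data-cy=username]').type(username);
  cy.get('[data-cy=password]').type(password, { log: false }); // keep secrets out of the Command Log
  cy.get('[data-cy=submit]').click();
});

// Usage in a spec (runs once, no built-in retry-ability):
// cy.login('ada', Cypress.env('USER_PASSWORD'));
```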
#### Custom Query
<p align="center">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jkiztiw4jdlltlcvk5x6.png" />
</p>
- _Custom Queries_ were a special ‘feature film’ introduced in the Cypress series at the end of 2022, debuting in version 12.
- Use _Custom Queries_ to query the state of your application, for instance, to find elements based on custom logic or conditions not covered by [Cypress built-in queries](https://docs.cypress.io/api/table-of-contents#Queries). Examples of built-in queries include `get()`, `find()`, `filter()`, `url()`, and `window()`.
- They can be particularly useful when integrating with UI libraries or frameworks that require specific selectors or patterns to interact with their components.
- Queries are **synchronous** and can return a Cypress chainable object, over which you can also run assertions.
- _Custom Queries_ are **retry-able**, meaning they will continuously attempt to retrieve whatever you have requested until they succeed or a timeout is reached. It is important that the Custom Query callback function does not change the state of your application.
- New queries are added via `Cypress.Commands.addQuery()`, but you can overwrite an existing query using `Cypress.Commands.overwriteQuery()`.
- They are defined in the **_cypress/support/commands.js_** file and are useful when you can encapsulate complex or repeated DOM queries that you will reuse across your test framework.
- However, for repeatable synchronous behavior, it is often more efficient to write a plain JavaScript Utility Function rather than a _Custom Query_; after all, both behave synchronously (by default, JavaScript is synchronous).
- TAKE CAUTION! If your method needs to be asynchronous or only to be called once, then you should write a **_Custom Command_** instead.
- STAY ALERT! When piecing together lengthy sequences of queries, ensure that you avoid incorporating standard Cypress commands, as their inclusion will disrupt the test’s ability to retry the full chain.
> _For more information about Custom Queries you can visit the Cypress documentation [Custom Queries](https://docs.cypress.io/api/cypress-api/custom-queries) and [Retry-ability](https://docs.cypress.io/guides/core-concepts/retry-ability#Only-queries-are-retried)._
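As a sketch, here is a small query following the pattern from the Cypress Custom Queries documentation; the `data-testid` attribute is an assumption about your markup:

```javascript
// cypress/support/commands.js: a retry-able query built on the built-in `get`
Cypress.Commands.addQuery('getByTestId', (id) => {
  const getFn = cy.now('get', `[data-testid="${id}"]`);
  // The returned function must be synchronous and must not change app state;
  // Cypress re-invokes it until it passes or the timeout is reached.
  return (subject) => getFn(subject);
});

// Usage: cy.getByTestId('save-button').should('be.visible');
```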
#### Task
<p align="center">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bmmrcyi510gxlwhoz3m7.png" />
</p>
- _Tasks_ are used for handling operations that need to be executed outside the browser context.
- They run in Node.js, executed by the Cypress process, and can be invoked from your tests using the `cy.task()` command. This bridges the gap between the Node.js server and browser-based tests, enabling a more comprehensive testing strategy.
- _Tasks_ are ideal for database operations such as seeding, querying, or cleanup. They are also useful for file system interactions, such as downloading files, or any server-side feature not accessible within the browser.
- Additionally, they are great for storing state in Node that needs to persist between spec files, running parallel tasks like making multiple HTTP requests, and for executing an external process or system command.
- The _Task_ event handler can return a value or a promise. Returning undefined, or a promise resolved with undefined, will cause the command to fail.
- Tasks are typically defined in the project’s **`cypress.config.js`** file within the `setupNodeEvents()` function. (In versions prior to Cypress 10, they were registered in the project's **`cypress/plugins/index.js`** file, which has since been deprecated.)
> _For more information about Tasks you can visit the Cypress documentation [Tasks](https://docs.cypress.io/api/commands/task) and [Real World Example tasks](https://docs.cypress.io/guides/references/best-practices#Real-World-Example-1)._
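A minimal sketch of registering a Task in Cypress 10+; the `seedUsers` task and its body are hypothetical placeholders:

```javascript
// cypress.config.js: tasks run in Node, executed by the Cypress process
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      on('task', {
        seedUsers(count) {
          // ...insert `count` rows into a test database here...
          return count; // must not resolve to undefined, or cy.task() fails
        },
      });
      return config;
    },
  },
});

// In a spec: cy.task('seedUsers', 5).then((created) => { /* assert on it */ });
```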
#### External Plugin
<p align="center">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kdi6knsa1tynhbxiopc9.png" />
</p>
- Cypress _External Plugins_ are used to extend the functionality of Cypress tests beyond the capabilities provided by the core framework.
- These plugins are typically installed via **Node Package Manager** and configured within the Cypress project to provide additional capabilities tailored to specific testing needs. They help make Cypress a more powerful and versatile tool for end-to-end testing.
- You can host your External Plugins on multiple repositories such as **GitHub** or **Bitbucket**, and distribute them publicly via the software registry **NPM** or internally within your organization via **Nexus**.
- They are extraordinarily useful when you have common tools, commands, and assertions that will be reused across multiple Cypress frameworks.
- There is a vast array of Cypress _External Plugins_ available; some are very well-maintained and supported by the Cypress community, while others… are not maintained at all.
- Therefore, be selective and critical when choosing a Cypress plugin for your application. I recommend opting for plugins that are clean, lightweight, frequently updated, and supported by credible creators. After all, each time you load a plugin in your test or framework, you are adding time to your test run.
Some common uses for External Plugins in Cypress include:
✔️ Visual Testing (such as Applitools’ `@applitools/eyes-cypress`)
✔️ Accessibility Testing (such as Andy’s `cypress-axe`, which utilizes Deque's `axe-core`).
✔️ Reporting (such as Yousaf’s `cypress-multi-reporters` for generating more informative and styled test reports)
✔️ API Testing (such as Filip’s `cypress-plugin-api`)
✔️ A Toolkit of useful extra Query Commands (such as Gleb’s `cypress-map`)
✔️ Firing native system events (such as Dmitriy’s `cypress-real-events`)
✔️ Filtering Tests (such as `@cypress/grep`)
> _For more information about available External Plugins, you can visit the Cypress documentation on [Plugins](https://docs.cypress.io/plugins) (however this list seems a little dated)._
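To show how little wiring a plugin usually needs, here is the two-step setup that `@cypress/grep` documents (double-check the plugin's README for your version, since registration details can change):

```javascript
// cypress/support/e2e.js: register the grep helpers on the browser side
const registerCypressGrep = require('@cypress/grep');
registerCypressGrep();

// cypress.config.js: register the Node side inside setupNodeEvents
// setupNodeEvents(on, config) {
//   require('@cypress/grep/src/plugin')(config);
//   return config;
// }

// Then filter tests from the CLI, e.g.:
//   npx cypress run --env grep="login"
```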
---
### ACT 3: RESOLUTION
Each of these Cypress Tools serves a different purpose and can be used in conjunction to create a robust and maintainable testing suite. So, in my opinion, ALL OF THEM truly deserve to share the “Oscar” for “Best Cypress Helper”. 🏆
<p align="center">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddfvpo1dzebufz2hvv7i.png" />
</p>
_JavaScript Utility Functions_, _Custom Commands_ and _Custom Queries_ are primarily about organizing code within your tests, while _Tasks_ and _External Plugins_ are for interacting with the system and environment outside the browser or for extending Cypress’s capabilities.
_Custom Commands_ can enhance the readability of your tests by tackling repetitive sets of commands at once. Similarly, _Custom Queries_ can abstract the querying logic, making the tests easier to understand at a glance.
Remember to use _Custom Commands_ and _Custom Queries_ judiciously, as each addition to your testing framework increases the maintenance overhead and can potentially introduce complexity. Keep them well-documented and ensure they provide clear value over the standard set of queries provided by Cypress.
However, I have to say that Cypress _External Plugins_ hold a special place in my 💖, and that’s why I will dedicate a full blog post to them in the future.
> _**Disclaimer**: Keanu Reeves has never won an Oscar, Golden Globe, or Emmy, nor has he even been nominated for any of these awards. Maybe for his next John Wick feature film!_ 🤞 😉
<p align="center">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zxzz6meip4m9p9084x6k.png" />
(Image from Marvelous Videos)
</p>
**Don't forget to leave a comment, give a thumbs up, or follow my Cypress blog if you found this post useful.**
Happy reading! | sebastianclavijo |
1,836,530 | Collate: Transform content overwhelm into a daily short email | Hey all, Built small app to upload your read-later list into a daily short email. Upload your PDF... | 0 | 2024-04-28T07:14:14 | https://dev.to/vel_is_lava/collate-transform-content-overwhelm-into-a-daily-short-email-3gel | ai, testing, showdev | Hey all,
I built a small app that turns your read-later list into a short daily email.
Upload your PDF and websites, the content gets analyzed and organized into topics.
You will receive daily articles based on your content.
Give it a go for free [here](https://collate.one/newsletter)!
Keen to know what you think! | vel_is_lava |
1,836,562 | 5 Best Inventory Management Softwares in Jira 2024 | Inventory management software is a tool to help businesses effectively manage their inventory. It... | 0 | 2024-04-28T07:44:50 | https://dev.to/assetitapp/5-best-inventory-management-softwares-in-jira-2024-28l1 | inventory, inventorymanagement, jira, atlassian | <p><span>Inventory management software is a tool to help businesses effectively manage their inventory. It could be about managing various tasks, providing real-time visibility into stock levels, or even the products' activity logs. By utilizing such software, you can eliminate the need for manual tracking, reduce errors, and gain valuable insights into your inventory performance. Whether you run a small retail store or a large-scale e-commerce business, finding the best inventory management software for your specific needs is crucial.</span></p>
<p><span>As a business owner, managing your inventory is one of the most crucial tasks you have to deal with. Keeping track of your stock levels, knowing when to reorder, and ensuring that you have the right products available at the right time can be pretty challenging. That's where inventory management software comes in.</span></p>
<p><span>Utilizing such tools can eliminate the need for manual tracking and reduce errors. Thereby, it will improve efficiency and maximize your profits. No matter what business you are running, choosing your own best inventory management software is crucial.</span></p>
<p><em>Image: 5 Best Inventory Management Software in 2024</em></p>
<h2><span>Benefits of Inventory Management Software</span></h2>
<p><span>Inventory management software offers many benefits for organizations of all sizes. Firstly, it helps optimize your inventory levels. By accurately tracking sales and monitoring stock movements, the software ensures you always have the right amount of stock. Moreover, it will help you avoid stockouts or overstocking situations, which can both negatively impact your bottom line.</span></p>
<p><span>Secondly, inventory management software enhances operational efficiency by automating various tasks. You can significantly reduce manual work with features like barcode scanning, minimum quantity notifications, etc. We can avoid time-consuming tasks and free up our staff to focus on more strategic activities. This way, your team can manage time to improve customer service or expand your product range.</span></p>
<p><span>What's more, some inventory management software can provide valuable insights and analytics. Thanks to robust reporting and analysis systems, store owners can analyze sales patterns and forecast customer demand. As a result, they can identify trends and adjust inventory levels accordingly, contributing to future growth.</span></p>
<h2><span>Key Features to Look for in Inventory Management Software</span></h2>
<p><span>The question is, what to consider when choosing the best inventory management software? The answer should lie in their key features, judging whether they meet your needs. While different companies may have different requirements, some crucial features to look for include:</span></p>
<ol>
<li aria-level="1"><strong>Real-time tracking:</strong><span> Ensure that the software provides accurate and up-to-date information on stock levels. </span></li>
<li aria-level="1"><strong>Stock quantity notifications:</strong><span> Look for software that sends you the alarm when stock levels reach a certain threshold so you never run out of essential items.</span></li>
<li aria-level="1"><strong>Code scanning:</strong><span> This feature simplifies the process of receiving and tracking inventory. QR codes, barcodes, etc. allow you to track with a mobile device or barcode scanner.</span></li>
</ol>
<p><a href="https://assetit.app/qr-code-and-asset-scan/"><img src="https://assetit.app/wp-content/uploads/2024/04/Example-of-the-QR-code-configuration-in-AssetIT.jpg" border="0" alt="Example of the QR code configuration in AssetIT, asset inventory management software" width="1920" height="1080"></a> <em>Example of the QR code configuration in AssetIT</em></p>
<ol>
<li aria-level="1"><strong>Integration with other systems:</strong><span> Most of the time, as a vendor, you will work with additional business tools. It could be accounting software, e-commerce platforms, CRM tools, and you name it. Make sure the inventory management software is integratable or migratable to avoid manual data entry.</span></li>
<li aria-level="1"><strong>Reporting system:</strong><span> The software should provide potent reporting builders. Reports on sales, stock levels, and other key metrics are essential when it comes to inventory management. Therefore, you can gain insights into the inventory performance and make the right decisions afterward.</span></li>
</ol>
<p><em><img src="https://assetit.app/wp-content/uploads/2024/04/Easily-get-your-data-with-a-customizable-report.jpg" border="0" alt="Easily get your data with a customizable report of inventory management software" width="1920" height="1080"> Easily get your data with a customizable report</em></p>
<p><span>By carefully considering these features, you can choose the inventory management software for yourself. As a result, you and your organization can achieve the ultimate goal.</span></p>
<h2><span>Top 5 Inventory Management Software Options on Atlassian Marketplace</span></h2>
<p><span>Now that we understand its benefits and must-have features, let's take a closer look at the top 5 options available on the Atlassian Marketplace. These Jira plugins have been carefully selected based on their popularity, user reviews, and overall functionality.</span></p>
<h3><a href="https://marketplace.atlassian.com/apps/1228867/?utm_source=assetit.app&utm_medium=article&utm_campaign=inventory-management&utm_content=inventory-management-software">AssetIT - Asset Management for Jira, JSM, ITSM</a></h3>
<p><strong>AssetIT</strong><span> is a robust inventory management software that helps you efficiently manage your inventory. You can track stock levels and the ordering process and separately manage items of many inventories at once. Stock movement logs and QR code scanning are also available. Most importantly, AssetIT integrates seamlessly with Jira, allowing for smooth data flow. Not to mention, it provides public API for those using additional platforms for inventory tracking.</span></p>
<p><em><img src="https://assetit.app/wp-content/uploads/2024/04/Integrate-with-Jira-for-a-smoother-project-management-process.jpg" border="0" alt="Integrate with Jira for a smoother project management process" width="1920" height="1080"> Integrate with Jira for a smoother project management process</em></p>
<p><span>Pricing for AssetIT starts at $2 per month per user, depending on team size. A free trial is available with 24/7 customer support. For teams of up to 10 users, AssetIT is </span><strong><i>free</i></strong><span> to use.</span></p>
<h3>Assets of JSM Premium Package</h3>
<p><strong>Assets</strong><span> is another top-rated inventory management tool that ships as part of Jira Service Management. It focuses on incident management and problem resolution. Combined with Jira Service Management's capabilities, Assets is a good fit for businesses looking to streamline their inventory management processes. </span></p>
<p><span>As a feature of the JSM Premium package, using Assets means purchasing the entire Premium plan of JSM. Jira pricing for Assets starts at $49.35 per agent per month.</span></p>
<h3>Asset Management for Jira</h3>
<p><strong>Asset Management for Jira</strong><span> is a tool that covers the essential functions of inventory management. With its user-friendly interface, you can easily manage your inventory, track stock, and generate reports on key metrics. The vendor also provides a detailed guide on how to work with the app, which makes it suitable for beginners. Asset Management for Jira costs $10.00 per month for 10 users; the price depends on the number of users on your team.</span></p>
<h3>EZOfficeInventory for Jira-Basic</h3>
<p><strong>EZOfficeInventory for Jira-Basic</strong><span> is a user-friendly inventory management app for Jira. It offers the ability to track items' ROI by recording all maintenance activities in your Jira account. This is a free app on the Atlassian Marketplace, developed by EZOfficeInventory.</span></p>
<h3>STAGIL Assets</h3>
<p><strong>STAGIL Assets </strong><span>offers advanced features for inventory tracking. With STAGIL Assets, you can manage assets and generate detailed reports on key metrics. It also offers barcode scanning for easy inventory tracking and supports multiple locations. Like AssetIT, STAGIL Assets is free for teams of fewer than 10, starting at $11.00 per month from 11 users. The price will also depend on the size of your business.</span></p>
<h2><span>Which is The Best Inventory Management Software?</span></h2>
<p><span>In my view, AssetIT is the ultimate choice for better inventory management, as it includes every factor you need in the best inventory management software: customizable features, a report builder, QR code generation, and minimum-quantity alerts.</span></p>
<p><span>However, it's essential to consider specific factors to determine the best inventory management software for your business. Here are some:</span></p>
<ol>
<li aria-level="1"><span>Make sure the software can adapt to your business's growth and handle increasing inventory volumes.</span></li>
<li aria-level="1"><span>Consider the ease of use and its interface. Ensure that your employees can quickly get the most out of the software without extensive training.</span></li>
<li aria-level="1"><span>Look for software that allows you to customize your workflows and other features to meet your specific business requirements.</span></li>
<li aria-level="1"><span>Evaluate the level of customer support provided by the software vendor. Prompt and knowledgeable support is crucial when resolving issues or answering your questions in a timely manner.</span></li>
<li aria-level="1"><span>Consider your budget. Compare its value to how much you will pay for it. </span></li>
</ol>
<h2><span>Bottom Line</span></h2>
<p><span>Choosing the best inventory management software can greatly impact your business outcome. Let's carefully evaluate your needs and key factors, make the right decision that best fits your requirements, and boost efficiency and profits!</span></p> | assetitapp |
1,836,649 | Derek Ferriera - Lincoln Financial Advisors Corporation | Derek Ferriera understands: Your exit plan objectives. Your desire to reach financial independence... | 0 | 2024-04-28T11:44:50 | https://dev.to/derekferriera/derek-ferriera-lincoln-financial-advisors-corporation-2lgo | Derek Ferriera understands:
- Your exit plan objectives.
- Your desire to reach financial independence with certainty and tax efficiency.
- Your desire to recruit, retain, and reward key employees for their hard work and dedication.
- Your need to be equitable in the business you built with your partner.
- Your hope to leave a legacy within your family or community after you are gone.
- That while you are busy taking care of everyone else in your life, he can help take care of you.
Derek Ferriera is committed to understanding your situation and customizing a comprehensive exit and financial plan to help achieve your goals. Whether you are a business owner, an executive, the breadwinner of your family, or a company ready to go public, Derek and his team can help you translate your goals into action steps. Derek is a motivating catalyst, turning the words of the plan into immediate action.
Derek brings over 30 years of experience to his clients. For many of those years, Derek also coached and developed financial planners to help them build successful practices. This provides Derek the depth and breadth of financial planning strategies that will optimize the various objectives his clients desire to reach. His experience on thousands of cases provides his clients the benefit of knowing that their plan will run through multiple design sessions before the appropriate options are presented to the client.
Derek Ferriera is an active member of the business planning community. As a partner in Equity Strategies Group and a Business Intelligence Specialist, he keeps abreast of changes and innovations in the industry. He holds the designations, certifications, and affiliations listed below:
- Certified Business Exit Consultant (CBEC®)
- CERTIFIED FINANCIAL PLANNER™ Practitioner certification, CFP®, 1990 College of Financial Planning, Denver
- Life Underwriting Training Council Fellow designation, LUTCF, 1991
- Agency Management Training Council AMTC, 1991
- Chartered Life Underwriter designation, CLU, 1993, American College, Bryn Mawr, PA
- Registered Employee Benefits Consultant designation, REBC, 1998
- Certified Fund Specialist designation, CFS, Institute of Business & Finance, 1998
- Board Member of Society of Financial Service Professionals (SFSP)
- Member of International Association for Financial Planning, IAFP
- Member of American Society of CLU/ChFC
- Board Member of The Resource Group of Lincoln Financial Advisors
As a graduate of Cal Poly State University, San Luis Obispo, Derek began his career in 1984 spending time with John Hancock and more recently with C Solutions. Prior to 1984, Derek worked for a nationally known real estate syndication firm running a 200+ unit garden apartment development. Derek has lived in the Bay Area for many years and enjoys sports, music, and travel. Derek and his wife Karen currently reside in Morgan Hill and have two children, Andrea and Alex, two dogs and two cats.
The Decisions are Yours! | derekferriera | |
1,836,692 | The Power of Automated Solutions to Solve hCAPTCHA challenges | Introduction In the ever-evolving landscape of cybersecurity, the battle between bots and humans... | 0 | 2024-04-28T12:34:27 | https://dev.to/media_tech/the-power-of-automated-solutions-to-solve-hcaptcha-challenges-2igc | **Introduction**
In the ever-evolving landscape of cybersecurity, the battle between bots and humans rages on. As online platforms strive to protect their integrity and users from malicious activities, they often employ CAPTCHA challenges. However, the traditional CAPTCHA model has its limitations, leading to the rise of more sophisticated alternatives like hCAPTCHA. In this article, we delve into the power of automated solutions in tackling hCAPTCHA challenges effectively.
**Understanding hCAPTCHA Challenges**
hCAPTCHA, a more advanced version of CAPTCHA, presents users with tasks that are easy for humans but challenging for bots to complete. These tasks often involve identifying objects in images or solving puzzles, ensuring that only genuine human users can pass through. While effective in combating automated attacks, hCAPTCHA can pose significant hurdles for legitimate users, leading to frustration and drop-offs.
**The Need for Automated Solutions**
As hCAPTCHA challenges become increasingly complex, manual intervention to solve them becomes impractical. Enter automated solutions powered by cutting-edge technologies such as artificial intelligence and machine learning. These solutions offer a seamless and efficient way to navigate hCAPTCHA challenges, ensuring a smooth user experience without compromising security.
**Advantages of Automated Solutions**
**1. Accuracy**
Automated solutions boast unparalleled accuracy in solving hCAPTCHA challenges, outperforming manual methods by a significant margin. Leveraging advanced algorithms, these solutions can swiftly analyze and respond to complex tasks with precision, minimizing false positives and negatives.
**2. Speed**
Speed is of the essence in the digital realm, and automated solutions excel in this aspect. By streamlining the process of solving hCAPTCHA challenges, they reduce waiting times for users, enhancing overall satisfaction and retention rates.
**3. Scalability**
With the proliferation of online platforms and the exponential growth of user interactions, scalability is paramount. Automated solutions can effortlessly scale to meet the demands of high-traffic websites, ensuring uninterrupted service without compromising performance.
**4. Cost-Effectiveness**
While manual intervention may seem cost-effective initially, the long-term benefits of automated solutions far outweigh the investment. By minimizing the need for human resources and optimizing operational efficiency, these solutions deliver substantial cost savings in the long run.
**Implementing Automated Solutions**
Integrating automated solutions into your platform is a straightforward process that yields immediate results. Whether you opt for proprietary solutions or third-party services, ensure compatibility with your existing infrastructure and adherence to industry standards.
**Conclusion**
In conclusion, the power of automated solutions in overcoming hCAPTCHA challenges cannot be overstated. By combining precision, speed, scalability, and cost-effectiveness, these solutions offer a robust defense against automated attacks while enhancing the user experience. Embrace the future of cybersecurity with automated solutions and stay ahead of the curve in safeguarding your online assets.
**CaptchaAI, the premier captcha solving service powered by cutting-edge AI technology. Their innovative solution effortlessly bypasses hCAPTCHA and other types of captchas in mere seconds, thanks to advanced OCR technology integration. As the go-to reCaptcha solving service, they ensure seamless verification processes for their users. Experience the efficiency firsthand with their free trial offer, enabling you to test their service with zero commitment.**
| media_tech | |
1,836,758 | Airport Helper - Plugin | This is a submission for the Coze AI Bot Challenge: Trailblazer. What I Built This is a... | 0 | 2024-04-28T16:31:10 | https://dev.to/sanjaysekaren/airport-helper-plugin-7h8 | cozechallenge, devechallenge, ai | *This is a submission for the [Coze AI Bot Challenge](https://dev.to/devteam/join-us-for-the-coze-ai-bot-challenge-3000-in-prizes-4dp): Trailblazer.*
## What I Built
This is a comprehensive plugin tailored for aviation professionals and enthusiasts, providing easy access to detailed airport information and preferred routes between specified airports. Whether you're a pilot planning a flight, a dispatcher coordinating routes and checking weather reports, or an aviation enthusiast exploring the skies, the plugin offers essential features to streamline your aviation experience.
## Demo
Plugin Link: https://www.coze.com/store/plugin/7362940742463127558?from=explore_card

## Journey
So, I've always been super into aviation, I love everything about it. So, me and the team decided to create this airport plugin that's like, super detailed. We wanted something that not only gives you all the deets about airports, but also throws in weather reports and even suggests preferred routes between airports.
One big issue we noticed was that a lot of the open source stuff out there gives you details based on these ICAO or FAA codes. But let's be real, most folks don't know what those codes even mean. So, we thought, why not make a tool that lets you search by city name? That way, you just punch in the city and boom, you get all the airports with their ICAO and FAA codes.
Once we had that sorted, it was all about putting those codes to work. We used them to get super intense details about the airports, and even hooked up other services like weather reports. Now, you can plan your flights like a pro, without having to dig through a bunch of confusing codes.
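The city-to-code lookup described above can be sketched in plain JavaScript. The sample data and function name below are hypothetical, for illustration only — the actual plugin queries a live airport dataset:

```javascript
// Hypothetical sample data — the real plugin looks these up from a live source.
const AIRPORTS = [
  { city: 'San Francisco', name: 'San Francisco International', icao: 'KSFO', faa: 'SFO' },
  { city: 'London', name: 'Heathrow', icao: 'EGLL', faa: null },
  { city: 'Chennai', name: 'Chennai International', icao: 'VOMM', faa: null },
];

// Case-insensitive search by city, so users never need to know the
// ICAO/FAA codes up front; the codes come back with each match.
function findAirportsByCity(city) {
  const needle = city.trim().toLowerCase();
  return AIRPORTS.filter((a) => a.city.toLowerCase() === needle).map((a) => ({
    name: a.name,
    icao: a.icao,
    faa: a.faa,
  }));
}
```

The returned ICAO/FAA codes can then feed the airport-detail and weather lookups the plugin chains together.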
This is a team submission.
Team Member: https://dev.to/senthilbalajiganesan
| sanjaysekaren |
1,836,764 | Can i move my website from WordPress (Elementor) to Shopify? | I'm thinking about moving my website, Web X Founders, from WordPress (Elementor) to Shopify so I can... | 0 | 2024-04-28T16:50:30 | https://dev.to/webxfounders/can-i-move-my-website-from-wordpress-elementor-to-shopify-3ifl | I'm thinking about moving my website, [Web X Founders](https://webxfounders.com), from WordPress (Elementor) to Shopify so I can start selling digital products. Can I do it? My tech team says Shopify has tools for managing digital products and secure payments. They suggest transferring our content to Shopify and setting up products there can make selling easier and give customers a better experience. With planning, we hope to move smoothly to Shopify and use its e-commerce features to grow our business.
Possible ? | webxfounders | |
1,858,179 | Top College's Abroad for MCA | Top Colleges Abroad for MCA or Similar Postgraduate Courses United... | 0 | 2024-05-19T08:55:04 | https://dev.to/suraj_0031/top-college-abroad-for-mca-27f1 | ## Top Colleges Abroad for MCA or Similar Postgraduate Courses
### United States
1. **Massachusetts Institute of Technology (MIT)**
- Offers programs like Master of Engineering in Computer Science.
2. **Stanford University**
- Offers a Master of Science in Computer Science.
3. **Carnegie Mellon University**
- Renowned for its School of Computer Science with various postgraduate programs.
4. **University of California, Berkeley**
- Offers a Master of Science in Computer Science.
### United Kingdom
1. **University of Cambridge**
- Offers a Master of Philosophy in Advanced Computer Science.
2. **University of Oxford**
- Offers an MSc in Computer Science.
3. **Imperial College London**
- Offers an MSc in Computing Science.
### Canada
1. **University of Toronto**
- Offers a Master of Science in Computer Science.
2. **University of British Columbia**
- Offers a Master of Science in Computer Science.
3. **McGill University**
- Offers an MSc in Computer Science.
### Australia
1. **University of Melbourne**
- Offers a Master of Computer Science.
2. **Australian National University**
- Offers a Master of Computing.
3. **University of Sydney**
- Offers a Master of Information Technology.
### Europe (Other than the UK)
1. **ETH Zurich, Switzerland**
- Offers a Master's in Computer Science.
2. **Technical University of Munich, Germany**
- Offers a Master's in Informatics.
3. **École Polytechnique Fédérale de Lausanne (EPFL), Switzerland**
- Offers a Master’s in Computer Science. | suraj_0031 | |
1,862,955 | How to Verify Smart Contracts on BlockScout with Hardhat | Verifying a smart contract on Blockscout makes the contract source code publicly available and... | 0 | 2024-05-23T14:22:13 | https://dev.to/modenetwork/how-to-verify-smart-contracts-on-blockscout-with-hardhat-5b9 | Verifying a smart contract on Blockscout makes the contract source code publicly available and verifiable, which creates transparency and trust in the community.
This guide will walk you through verifying a smart contract on [Blockscout](http://blockscout.com) with [Hardhat](https://hardhat.org).
## Prerequisites
Before you begin the steps in this guide, please ensure you have the following:[](https://docs.celo.org/developer/verify/hardhat#prerequisites)
* You should have a [Hardhat project](https://hardhat.org/hardhat-runner/docs/getting-started#quick-start) initialized on your machine.
* An Etherscan Account. Don't have one? go to Etherscan and [sign up](https://etherscan.io) for an account.
## Verifying the Smart Contract using Hardhat
### Step 1: Install the `Hardhat-Verify` Plugin
The `hardhat-verify` plugin helps you verify the source code for your Solidity contracts.
In your project directory, install `hardhat-verify`:
```shell
npm install --save-dev @nomicfoundation/hardhat-verify
```
And add the following statement to your `hardhat.config.js`:
```javascript
require("@nomicfoundation/hardhat-verify");
```
Or, if you are using TypeScript, add this to your `hardhat.config.ts`:
```javascript
import "@nomicfoundation/hardhat-verify";
```
### Step 2: Configure Hardhat
Add the following configuration to the `config` object in `hardhat.config.js`.
```javascript
etherscan: {
  // Your API key for Etherscan
  // Obtain one at https://etherscan.io/
  apiKey: {
    mode: '<ETHERSCAN_API_KEY>',
  },
  customChains: [
    {
      network: 'mode',
      chainId: 919,
      urls: {
        apiURL: 'https://sepolia.explorer.mode.network/api?',
        browserURL: 'https://sepolia.explorer.mode.network',
      },
    },
  ],
},
```
Replace `<ETHERSCAN_API_KEY>` with your API key for Etherscan. Under your Etherscan account settings, find the “API Keys” section. Generate one API key using the Free Plan.
This is what the `hardhat.config.js` file looks like:
```javascript
require('@nomicfoundation/hardhat-toolbox');
require('@nomicfoundation/hardhat-verify');

/** @type import('hardhat/config').HardhatUserConfig */
module.exports = {
  solidity: {
    version: '0.8.19',
    settings: {
      evmVersion: 'london',
    },
  },
  networks: {
    mode: {
      url: 'https://sepolia.mode.network',
      chainId: 919,
      accounts: ['<MY_PRIVATE_KEY>'],
      gasPrice: 10000,
    },
  },
  etherscan: {
    apiKey: {
      mode: '<ETHERSCAN_API_KEY>',
    },
    customChains: [
      {
        network: 'mode',
        chainId: 919,
        urls: {
          apiURL: 'https://sepolia.explorer.mode.network/api?',
          browserURL: 'https://sepolia.explorer.mode.network',
        },
      },
    ],
  },
};
```
### Step 3: Deploying and Verifying Contracts
**Deploy the Contract**
In your terminal, run the following command:
```shell
npx hardhat run scripts/deploy.js --network mode
```
> Ensure you have enough funds in your account to pay for Gas. You can grab some test tokens from the [Faucet](https://faucet.modedomains.xyz/) to test your contracts before deploying to the mainnet.
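The command above assumes a `scripts/deploy.js` script exists in your project. The guide doesn't show one, so here is a minimal sketch — the contract name `MyContract` and its constructor argument are placeholders for your own:

```javascript
// scripts/deploy.js — minimal sketch; replace "MyContract" and the
// constructor argument with your actual contract and values.
const hre = require('hardhat');

async function main() {
  const factory = await hre.ethers.getContractFactory('MyContract');
  const contract = await factory.deploy('Constructor argument 1');
  await contract.waitForDeployment(); // ethers v6; on v5 use contract.deployed()
  console.log('Deployed to:', await contract.getAddress()); // v6; on v5 use contract.address
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```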
Once your smart contract has been successfully deployed, you can verify it using the contract address.

**Verify the contract**
In your terminal, run the `verify` task, passing the address of the contract, the network where it's deployed, and the constructor arguments that were used to deploy it (if any):
```shell
npx hardhat verify --network mode DEPLOYED_CONTRACT_ADDRESS "Constructor argument 1"
```
Where `DEPLOYED_CONTRACT_ADDRESS` is the address of your deployed smart contract. Also, replace `"Constructor argument 1"` (keep the quotes) with the arguments you passed to your smart contract when deploying.
> If you are using complex constructor arguments, reference the following [Hardhat Documentation.](https://hardhat.org/hardhat-runner/plugins/nomicfoundation-hardhat-verify#complex-arguments)
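Besides the CLI, `hardhat-verify` also exposes a `verify:verify` subtask you can call from a script — handy when verification should run right after deployment. The address and argument below are placeholders:

```javascript
// scripts/verify.js — sketch only; fill in your real address and arguments.
const hre = require('hardhat');

async function main() {
  await hre.run('verify:verify', {
    address: 'DEPLOYED_CONTRACT_ADDRESS',
    constructorArguments: ['Constructor argument 1'],
  });
}

main().catch((error) => {
  console.error(error);
  process.exitCode = 1;
});
```

Run it with `npx hardhat run scripts/verify.js --network mode`.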
### Conclusion
Congratulations! In this guide, we’ve covered how to verify smart contracts on Blockscout with Hardhat using the [`hardhat-verify` plugin](https://hardhat.org/hardhat-runner/plugins/nomicfoundation-hardhat-verify) that simplifies the process. Verifying smart contracts is a crucial step in the deployment process because it allows the community to review the source code before usage, thereby promoting transparency and trust within the community.
### What's Next:
Check out these other resources from the Mode C00perators:
* [Mode - Comprehensive Starter Guide](https://mode.hashnode.dev/comprehensive-guide?source=koha)
* [How to Get the Developer Role on Mode Discord Server](https://mode.hashnode.dev/get-developer-role?source=koha)
* [How to Register a Smart Contract to Mode SFS with Hardhat](https://mode.hashnode.dev/how-to-register-a-smart-contract-to-mode-sfs-with-hardhat?source=koha) | modenetwork | |
1,870,083 | Thinking of migrating from Confluence but worried about losing data? | There's still time! XWiki's FREE webinar on migrating to an open-source alternative starts TODAY, May... | 0 | 2024-05-30T08:25:45 | https://dev.to/lorina_b/thinking-of-migrating-from-confluence-but-worried-about-losing-data-1gpc | opensource, resources, tutorial, productivity | There's still time! XWiki's FREE webinar on migrating to an open-source alternative starts TODAY, May 30th at 16:00 CET. ➡️ [Register here!](https://xwiki.com/en/webinars/easiest-migration-from-confluence-to-xwiki)
**Don't miss out on learning:**
1. Challenges and solutions for migrating Confluence data (especially to open source!)
2. Effective methods to transfer macros, users, and permissions seamlessly.
3. The latest tools and upgrades to make migration a breeze.
4. Live demos and expert Q&A to answer all your migration questions.
This webinar is your one-stop shop for a smooth Confluence migration!
➡️ [Register Now!](https://xwiki.com/en/webinars/easiest-migration-from-confluence-to-xwiki) Spots are filling up fast. | lorina_b |
1,913,664 | Explore, Integrate, Sleep, Repeat | Hi everyone, I hope you're enjoying your time surfing the web. Since I started focusing on this dev... | 0 | 2024-07-08T05:40:17 | https://blog.lamparelli.eu/explore-integrate-sleep-repeat | ---
title: Explore, Integrate, Sleep, Repeat
published: true
date: 2024-07-06 08:00:52 UTC
tags:
canonical_url: https://blog.lamparelli.eu/explore-integrate-sleep-repeat
---
Hi everyone,
I hope you're enjoying your time surfing the web. Since I started focusing on this dev journey, I've gone through several stages to ensure I can dedicate a bit of time every day.
---
It's been almost three months now that I've been progressing step by step in this adventure and challenge I set for myself. Coincidence or not, it's also been nearly two years since I decided to leave my full-time job to become an independent consultant. I think there's a pattern here. The progress made since day one is huge, and you only see how far you've come when you look back at the entire journey. It's true what they say: "Little by little, the bird builds its nest."
In any case, it's essential to live your experience fully. Apply a small improvement at each iteration, which will allow you to go a bit further. I learned this from the personal development books I've read over the past few years (I can share a list of books that had a significant impact on my journey if you're interested). While it's important to have a guiding line, a goal to reach (no matter what the goal is at first, just define something to get you started...), it's even more important to take the first step.
Now, back to the topic of this post. Over the past three months, I've worked on improving aspects that help me create an ideal environment to maximize my ability to absorb a concept or method each day.
The most important thing for learning is not just learning itself, but having good conditions such as:
* **Getting enough sleep**
We often try to maximize our learning time by cutting down on sleep, but that's the worst thing we can do! The brain and body need to recharge their batteries, and unlike solar-powered batteries, the human body recharges at night...
* **Staying properly hydrated**
Besides keeping your body well-hydrated, taking a hydration break also gives you a chance to move and get some fresh air (a non-scientific but effective fact).
* **Just Move**
It's enough to move and get your blood and oxygen circulating in your body. I chose an intensive cycling session several times a week.
* **Avoiding intoxicating or euphoric substances**
While it's always enjoyable to share a good time with friends, the body struggles to recover, and your learning will suffer... So, it's not forbidden but something to consider in your planning.
* **Eating well**
The hardest part is maintaining a healthy habit and not always eating something hot, greasy, soft, and/or sweet (okay, after a night out, you'll need it to recover, for sure). I'm not a nutritionist, so it's hard to give advice, but try to prioritize balanced meals.
In the end, I've managed to integrate these different points, even though these are messages we're bombarded with all day long... I had to go through my own experiences, and by reiterating what I've learned, adding a little point each time, I'm now here writing to you...
And you, how is your learning going? What techniques have you put in place to learn more effectively?
Happy coding! 🚀😊 | alamparelli | |
1,888,608 | Enhance Your Test Automation with pCloudy Device Farm: Seamless Integration with Leading Frameworks and Tools | In today’s fast-paced digital world, delivering high-quality applications across various devices and... | 0 | 2024-06-14T13:40:43 | https://dev.to/pcloudy_ssts/enhance-your-test-automation-with-pcloudy-device-farm-seamless-integration-with-leading-frameworks-and-tools-4n82 | testautomationtool, crossbrowser, testingwebapplications | In today’s fast-paced digital world, delivering high-quality applications across various devices and platforms is crucial for businesses. pCloudy, a robust cloud-based mobile app testing platform, understands this need and continuously strives to provide developers and testers with the most comprehensive set of integrations. With its recent announcement of seamless integration with a wide range of content-test automation frameworks, CI/CD tools, version control systems, and web development frameworks, pCloudy has solidified its position as a one-stop solution for efficient and effective testing.
1. Test Automation Frameworks:
1.1 Katalon:
Katalon, a popular [test automation tool](https://www.pcloudy.com/rapid-automation-testing/), is now seamlessly integrated with pCloudy. Testers can leverage Katalon’s powerful features and create automated tests that can be executed on pCloudy’s vast device farm, ensuring comprehensive coverage across multiple devices.
1.2 Test Complete:
Test Complete users can now benefit from the integration with pCloudy, enabling them to execute their tests on a wide array of real devices available on the pCloudy platform. This integration empowers testers to accelerate their testing cycles and enhance overall application quality.
1.3 Oxygen HQ:
Oxygen HQ, a cutting-edge test automation framework, seamlessly integrates with pCloudy. This integration allows testers to leverage Oxygen HQ’s capabilities for [cross-browser](https://www.pcloudy.com/cross-browser-testing/) and cross-platform testing on real devices hosted on pCloudy’s device farm.
1.4 Jest:
Jest is a popular JavaScript testing framework commonly used for testing React applications. With the integration of pCloudy, Jest users can execute their tests on real devices hosted on the pCloudy platform. This integration ensures that React applications are thoroughly tested across multiple devices, ensuring optimal performance and user experience.
1.5 Behave:
Behave is a Python-based test framework for behavior-driven development (BDD). The integration with pCloudy allows Behave users to seamlessly execute their BDD tests on real devices available on the pCloudy device farm. By testing on real devices, Behave users can validate the behavior of their applications accurately across different platforms and configurations.
1.6 Nemo:
Nemo is a Node.js-based test automation framework used for [testing web applications](https://www.pcloudy.com/blogs/web-application-testing/). With the integration of pCloudy, Nemo users can leverage the benefits of testing on real devices, ensuring comprehensive coverage and accurate results. This integration empowers Nemo users to enhance the quality and reliability of their web applications.
1.7 WDIO (WebdriverIO):
WDIO is a popular test automation framework for web applications using WebDriver. By integrating with pCloudy, WDIO users can execute their tests on real devices hosted on the pCloudy platform. This integration enables effective cross-browser and cross-platform testing, helping ensure consistent performance across different environments.
1.8 Capybara:
Capybara is a Ruby-based acceptance test framework for web applications. With the integration of pCloudy, Capybara users can effortlessly execute their tests on a wide range of real devices available on the pCloudy device farm. This integration facilitates accurate and comprehensive testing, allowing Capybara users to identify and resolve potential issues promptly.
1.9 Selenide:
Selenide is a concise and powerful Java-based test automation framework for web applications using Selenium WebDriver. By integrating with pCloudy, Selenide users can execute their tests on real devices, ensuring reliable and accurate test results. This integration empowers Selenide users to validate their web applications across various platforms and devices efficiently.
1.10 Mocha:
Mocha is a feature-rich JavaScript test framework commonly used for both front-end and back-end testing. With the integration of pCloudy, Mocha users can seamlessly execute their tests on real devices available on the pCloudy platform. This integration enables Mocha users to ensure the functionality and stability of their applications across different devices and browsers.
1.11 Puppeteer:
Puppeteer is a Node.js library that provides a high-level API for controlling headless Chrome or Chromium browsers. By integrating with pCloudy, Puppeteer users can run their tests on real devices, leveraging the power of the pCloudy device farm. This integration allows Puppeteer users to test their applications comprehensively and accurately on a wide range of devices.
1.12 Playwright:
Playwright is a JavaScript-based test automation framework for web applications. With the integration of pCloudy, Playwright users can execute their tests on real devices, ensuring thorough testing and reliable results. This integration empowers Playwright users to validate the performance and behavior of their web applications across multiple platforms.
1.13 Nightwatch:
Nightwatch is a popular JavaScript-based test automation framework for web applications. By integrating with pCloudy, Nightwatch users can seamlessly execute their tests on real devices available on the pCloudy platform. This integration enables Nightwatch users to perform end-to-end testing, ensuring optimal functionality and user experience
1.14 Serenity:
Serenity is a powerful open-source library for writing high-quality automated acceptance tests for web applications. The integration with pCloudy allows Serenity users to execute their tests on real devices, enabling comprehensive and reliable testing. This integration enhances the capabilities of Serenity, ensuring robust test coverage across different platforms and devices.
1.15 Testbio:
Testbio is a test automation framework that simplifies mobile testing using JavaScript. By integrating with pCloudy, Testbio users can execute their mobile tests on a vast collection of real devices hosted on the pCloudy platform. This integration facilitates efficient and accurate mobile testing, empowering Testbio users to ensure the quality and performance of their mobile applications.
pCloudy Integration Matrix
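All of the frameworks above ultimately point a test suite at remote devices through a set of desired capabilities. As a rough illustration only — the capability names and values below are generic Appium-style placeholders, not pCloudy's documented API — a suite can fan out across devices by merging shared settings with per-device overrides:

```javascript
// Illustrative sketch: generic Appium-style capability names, not any
// vendor-specific API. Real names and values come from your provider's docs.
const BASE_CAPS = {
  'appium:newCommandTimeout': 600,
  'appium:autoGrantPermissions': true,
};

// Merge shared settings with one device's details so the same suite
// can run unchanged on every device in the farm.
function buildCaps(device) {
  return {
    ...BASE_CAPS,
    platformName: device.platform,
    'appium:deviceName': device.name,
    'appium:platformVersion': device.version,
  };
}

const caps = buildCaps({ platform: 'Android', name: 'Pixel 7', version: '14' });
```

Each framework then passes an object like this to its remote WebDriver/Appium session setup.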
2. CI/CD Tools:
2.1 Circle CI
Circle CI is a leading CI/CD platform that empowers developers to run automated tests on their code before committing any changes. It supports a wide variety of testing tools and frameworks, including Mocha, Jest, pytest, XCTest, JUnit, Selenium, and many more. The integration of pCloudy and CircleCI brings the power of automation to the forefront, eliminating manual testing efforts and significantly reducing the chances of human error. This, in turn, allows your team to focus on what they do best: creating high-quality, innovative apps.
2.2 Travis CI
Travis CI is a reputable, cloud-based continuous integration solution. It is preferred by developers worldwide for building and testing projects, primarily those hosted on GitHub. What sets Travis CI apart is its ability to automatically trigger the build process with every code commit, which translates to a robust system that identifies potential build failures early and provides prompt reports. The combination of Travis CI and pCloudy forms a strong alliance that can greatly enhance your application development endeavors. This powerful integration enables development teams to improve their productivity, elevate the quality of their code, and accelerate the delivery process.
2.3 GitLab
The built-in CI/CD functionality of GitLab enables seamless and continuous building, testing, and deployment of applications, facilitating an efficient DevOps cycle. GitLab offers remarkable flexibility when it comes to project visibility, allowing you to create projects that can be public, internal, or private, based on your unique requirements. Furthermore, there is no restriction on the number of private projects you can create. The integration of pCloudy with GitLab CI enables you to streamline your testing processes, report issues, and manage your codebase seamlessly within a unified platform. This integration promotes enhanced collaboration among teams and offers a robust permission model that ensures smooth workflow without any hindrances.
2.4 Bamboo CI
Integrating Bamboo CI with pCloudy, a cloud-based app testing platform, enhances the app testing efforts by combining the versatile features of Bamboo CI with the capabilities of pCloudy. Bamboo CI, with its automated build and test capabilities, comprehensive reporting, and flexible deployment options, streamlines the app development process. When integrated with pCloudy, it brings additional benefits such as seamless test automation, wider device coverage, real-world testing scenarios, enhanced test coverage, centralized reporting, and improved collaboration. This integration creates a powerful ecosystem that empowers development teams to deliver high-quality and reliable mobile applications efficiently and effectively.
2.5 Azure Pipeline
Azure Pipeline is a highly reliable and scalable continuous integration and continuous delivery (CI/CD) platform that empowers developers to effortlessly build, test, and deploy applications across various platforms. Integrating Azure Pipeline with the pCloudy offers a robust solution to enhance the testing of mobile and web applications. This integration enables automated test execution on real devices and browsers, expands device coverage, and harnesses advanced reporting and analytics capabilities. Consequently, the efficiency, reliability, and overall quality of applications experience a significant improvement. This integration is surely going to change the way teams streamline their testing processes to deliver exceptional user experiences and quality apps faster.
2.6 Google Cloud CI
Google Cloud CI’s scalable infrastructure, seamless integration with Google Cloud services, parallel testing, and comprehensive reporting ensure code quality and stability. Integrating a powerful infrastructure like Google Cloud CI with pCloudy brings benefits such as access to a vast device inventory, parallel testing on real devices, automated test execution, and detailed reporting and analysis. Combining the robust capabilities of Google Cloud CI with the comprehensive features of pCloudy, developers can automate and streamline their app testing process for faster delivery of high-quality applications.
3. Version Control Systems:
3.1 Bitbucket:
pCloudy offers seamless integration with Bitbucket, a widely used version control system. This integration ensures that teams can easily manage their test assets and collaborate on test scripts and test data within their existing Bitbucket repositories.
4. Web Development Frameworks:
4.1 Laravel:
pCloudy provides a smooth integration with Laravel, one of the most popular PHP web development frameworks. This integration enables developers and testers to effortlessly test their Laravel applications on real devices using pCloudy’s device farm, ensuring optimal application performance across different devices and browsers.
Benefits of pCloudy Integrations:
Extensive Device Coverage: With pCloudy’s device farm, testers can access a vast collection of real devices, covering various platforms, operating systems, and device models, ensuring comprehensive test coverage.
Accelerated Testing Cycles: Integrating popular test automation frameworks and CI/CD tools with pCloudy streamlines the testing process, allowing teams to execute automated tests seamlessly and efficiently, ultimately reducing time-to-market.
Improved Collaboration and Version Control: Integration with version control systems like Bitbucket enables teams to collaborate effectively, manage test assets, and track changes, enhancing productivity and maintaining version control.
| Integration | Description |
| --- | --- |
| **Test Automation Frameworks** | |
| Katalon | Execute automated tests on real devices hosted on pCloudy. |
| Test Complete | Seamlessly execute tests on a wide array of real devices. |
| Oxygen HQ | Leverage Oxygen HQ for cross-browser and cross-platform testing. |
| Jest | Test React applications on real devices available on pCloudy. |
| Behave | Execute BDD tests on real devices for accurate behavior validation. |
| Nemo | Test web applications using Nemo on real devices. |
| WDIO (WebdriverIO) | Execute WebDriver-based tests on real devices. |
| Capybara | Test web applications using Capybara on real devices. |
| Selenide | Run tests on real devices using the Selenide framework. |
| Mocha | Execute JavaScript tests on real devices hosted on pCloudy. |
| Puppeteer | Control headless Chrome or Chromium browsers on real devices. |
| Playwright | Test web applications across platforms using Playwright. |
| Nightwatch | Perform end-to-end testing on real devices with Nightwatch. |
| Serenity | Execute high-quality acceptance tests on real devices. |
| Testbio | Simplify mobile testing using JavaScript on real devices. |
| **CI/CD Tools** | |
| Circle CI | Seamless integration for test automation in CI/CD pipelines. |
| Travis CI | Incorporate pCloudy into Travis CI workflows for efficient testing. |
| GitLab | Integrate pCloudy with GitLab CI/CD for continuous testing. |
| Bamboo | Streamline test automation in Bamboo CI/CD pipelines. |
| Google Cloud CI | Effortlessly integrate pCloudy with Google Cloud CI/CD. |
| Azure Pipeline | Seamlessly incorporate pCloudy into Azure Pipeline workflows. |
| **Version Control Systems** | |
| Bitbucket | Manage test assets and collaborate using Bitbucket repositories. |
| **Web Development Frameworks** | |
| Laravel | Test Laravel applications on real devices hosted on pCloudy. |
Conclusion
pCloudy’s seamless integration with leading frameworks, tools, CI/CD platforms, version control systems, and web development frameworks significantly enhances test automation capabilities. By leveraging the power of pCloudy’s device farm, testers can achieve extensive device coverage and execute automated tests across multiple devices and platforms. This integration enables accelerated testing cycles, reduces time-to-market, and improves overall application quality. The collaboration and version control features offered by pCloudy’s integrations ensure efficient teamwork and streamlined test management. By choosing pCloudy as a comprehensive testing solution, businesses can now optimize their test automation processes, deliver high-quality applications, and stay competitive in today’s fast-paced digital world. | pcloudy_ssts |
343,141 | Get the list of classes connected to the DB | It was quite troublesome before Rails4, but after Rails5 You can get a list of models connected to DB... | 0 | 2020-05-25T01:47:58 | https://dev.to/konyu/get-the-list-of-classes-connected-to-the-db-4592 | rails, rails5 | ---
title: Get the list of classes connected to the DB
published: true
description:
tags: rails, rails5
---
It was quite troublesome before Rails 4, but since Rails 5
you can get a list of the models connected to the DB with `ApplicationRecord.descendants`.
For example, something like this.
```ruby
model_list = ApplicationRecord.descendants
# If you want to output a list of class names
model_list.map { |v| v.to_s }
=> ["User", "Owner", "Blogs::Comment"]
```
## Usage scenarios
* Batch processing
* When looking for models that respond to a particular method `xxx`
| konyu |
384,378 | Creating a blog with NuxtJS and Netlify CMS - 1 | In this two-part series, I'm going to cover How I created my blog using NuxtJS and NetlifyCMS. ... | 7,644 | 2020-07-07T08:17:13 | https://dev.to/frikishaan/creating-a-blog-with-nuxtjs-and-netlify-cms-1-44on | vue, nuxt, netlify, tutorial | In this two-part series, I'm going to cover **How I created my [blog](https://frikishaan.com/blog) using NuxtJS and NetlifyCMS**.
<!--
## Why I choose this stack?
Creating a blog with CMS like WordPress is quite an **unwieldy** task. I am not saying WordPress is garbage, it's a great tool for creating websites as **37%** of websites in the world are using it. But I want something with fair performance, security, and price. So this stack is the best option for me. Read [this](https://www.netlify.com/blog/2016/05/18/9-reasons-your-site-should-be-static/) blog for detailed reasons.
-->
## Getting started
#### Creating NuxtJS app
To set up a blog with NetlifyCMS all you need is a **Netlify** and a **GitHub** (or GitLab or Bitbucket) account.
Create a NuxtJS app using `create-nuxt-app`
```bash
npx create-nuxt-app <app-name>
cd <app-name>
npm run dev
```
#### Setting up NetlifyCMS
In `static` directory add a new directory named `admin` and add an HTML file named `index.html` with the following content -
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Content Manager</title>
<!-- Include the script that enables Netlify Identity on this page. -->
<script src="https://identity.netlify.com/v1/netlify-identity-widget.js"></script>
</head>
<body>
<!-- Include the script that builds the page and powers Netlify CMS -->
<script src="https://unpkg.com/netlify-cms@^2.0.0/dist/netlify-cms.js"></script>
</body>
</html>
```
Add another file named `config.yml` which contains all the configuration about your model and collections.
```yaml
backend:
name: git-gateway
branch: master
media_folder: static/img
public_folder: /img
collections:
- name: "blog"
label: "Blog"
format: "json"
folder: "assets/content/blog"
create: true
slug: "{{slug}}"
editor:
preview: true
fields:
- { label: "Title", name: "title", widget: "string" }
- { label: "Publish Date", name: "date", widget: "datetime" }
- {
label: "Featured Image",
name: "thumbnail",
widget: "image",
required: true,
}
- { label: "Body", name: "body", widget: "markdown" }
```
Push the code to GitHub. Now create a new website on Netlify using your GitHub so that whenever you push to the repository Netlify will automatically fetch the new content from the repo and build the latest version of your website, this is called **Continuous Deployment**.
#### Enable Identity & Git Gateway in Netlify
To access the CMS you need to enable authentication in your netlify website. Go to your netlify dashboard and select the website you have created.
1. Go to **Settings > Identity** and Enable Identity service.

2. After enabling, go to **Identity>Registration**, set this option to **open** or **invite**. Usually, **invite** is the best option if you are the only person writing blogs on the website.
You can also enable **external providers** like Google, GitHub, etc for authentication if you don't want to create an account.

3. Go to **Identity>Services** and click **Enable Git gateway**

Now go to `https://<your-website>.netlify.app/admin` and you'll be prompted to log in. Create your account and set the registration option to **invite-only** (as in step 2). Log in with your credentials, then create a new blog post and publish it.
Now do a `git pull` to fetch the latest posts from the repository. You can find the blogs in the `assets/content/blog` directory of your project.
In the next part, we'll see how to integrate the content in NuxtJS to show on the website.
I have also created a repository to get you started with the NuxtJS blog.
{% github sheikh005/nuxt-netlify-cms-starter-template no-readme %}
| frikishaan |
402,498 | Testing Vue.js Application Files That Aren't Components | Ok, before I begin, a huge disclaimer. My confidence on this particular tip is hovering around 5% or... | 0 | 2020-07-20T12:32:15 | https://www.raymondcamden.com/2020/07/17/testing-vuejs-application-files-that-arent-components | vue, javascript, serverless | ---
title: Testing Vue.js Application Files That Aren't Components
published: true
date: 2020-07-17 00:00:00 UTC
tags: vuejs,javascript,serverless
canonical_url: https://www.raymondcamden.com/2020/07/17/testing-vuejs-application-files-that-arent-components
cover_image: https://static.raymondcamden.com/images/banners/catsleeping1.jpg
---
Ok, before I begin, a _huge_ disclaimer. My confidence on this particular tip is hovering around 5% or so. Alright, so some context. I'm working on a game in Vue.js. Surprise surprise. It probably won't ever finish, but I'm having some fun building small parts of it here and there. The game is an RPG and one of the first things I built was a basic dice rolling utility.
In my Vue application, I created a `utils` folder and made a file `dice.js`. I used this setup because I wasn't building a component, but rather a utility that my Vue components could load and use. My dice utility takes strings like this - `2d6` - which translates to "roll a six-sided die 2 times". It even supports `2d6+2` which means "roll a six-sided die 2 times and add 2 to the final result". It's rather simple string parsing, but here's the entirety of it:
```js
export const dice = {
roll(style) {
let bonus=0, total=0;
if(style.indexOf('+') > -1) {
[style, bonus] = style.split('+');
}
let [rolls, sided] = style.split('d');
//console.log(rolls, sided);
for(let i=0;i<rolls;i++) {
total += getRandomIntInclusive(1, sided);
}
total += parseInt(bonus);
return total;
}
}
function getRandomIntInclusive(min, max) {
min = Math.ceil(min);
max = Math.floor(max);
return Math.floor(Math.random() * (max - min + 1)) + min; //The maximum is inclusive and the minimum is inclusive
}
```
In one of my Vue components, I use it like so:
```js
import { dice } from '@/utils/dice';
export default {
data() {
return {
newName:'gorf',
str: '',
dex: '',
int: ''
}
},
created() {
this.reroll();
},
computed: {
cantContinue() {
return this.newName == ''
}
},
methods: {
reroll() {
this.str = dice.roll('3d6');
this.dex = dice.roll('3d6');
this.int = dice.roll('3d6');
},
start() {
this.$store.commit('player/setName', this.newName);
this.$store.commit('player/setStats', { str: this.str, dex: this.dex, int: this.int });
this.$router.replace('game');
}
}
}
```
I import the dice code and then can make calls to it for my UI. Nothing too crazy here, but I ran into an interesting issue today. My initial version of `dice.js` didn't support the "+X" syntax. I wanted to add it, but also wanted a quick way to test it.
So I could have simply gone into my Vue component and add some random tests to the `created` block, something like:
```js
console.log(dice.roll('2d6+2'));
```
And that would work, but as I developed, I'd have to wait for Vue to recompile and reload my page. In general that's pretty speedy, but what I really wanted to do was write a quick Node script and run some tests at the CLI. To be clear, not unit tests, just literally a bunch of console logs and such. That may be lame, but I thought it might be quick and simple.
However... it wasn't. If you look back at the source of dice.js, you'll see it's _not_ using `module.exports` but just a regular export. This was my test:
```js
import { dice } from '../src/utils/dice'
// just some random rolls
for(let i=1;i<4;i++) {
for(let k=3;k<10;k++) {
let arg = i+'d'+k;
console.log('input = '+arg, dice.roll(arg));
}
}
console.log(dice.roll('2d6+2'));
```
And this was the result:

Ok, so an admission. I'm still a bit hazy on the whole module thing in Node, and JavaScript in general. I've used require, imports, exports, but I wouldn't pass a technical interview question on them. I hope you don't think less of me. Honestly.
That being said, the error kinda made sense, but I didn't want to use the `.mjs` extension because I didn't know if that would break what the Vue CLI does.
I was about to give up and was actually considering adding a route to my Vue application just for debugging.
Thankfully, StackOverflow came to the rescue. I found [this solution](https://stackoverflow.com/a/54090097/52160) which simply required me to add `esm` and then run my code like so: `node -r esm testDice.js`. It worked perfectly! And because my memory is crap, I added this to the top of the file:
```js
/*
Ray, run with: node -r esm test.js
*/
```
Yes, I write notes to myself in comments. You do too, right?
Anyway, I hope this helps others, and I'm more than willing to be "schooled" about how this could be done better. Just leave me a comment below!
Photo by [Nancy Yang](https://unsplash.com/@seven_77?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/s/photos/cats-sleeping?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) | raymondcamden |
1,786,750 | Developing a Progressive Web App (PWA) with Vue | Progressive Web Apps (PWAs) have gained popularity in the world of web development due to their... | 0 | 2024-03-11T10:42:12 | https://dev.to/bubu13gu/developing-a-progressive-web-app-pwa-with-vue-3716 | webdev, vue, coding, softwaredevelopment | Progressive Web Apps (PWAs) have gained popularity in the world of web development due to their ability to provide app-like experiences on the web. When it comes to developing a PWA with Vue, a popular JavaScript framework, there are various intricacies to consider. This article will explore the process of developing a PWA with Vue and discuss the positives and negatives of building a PWA instead of just a traditional mobile app.
## The Intricacies of Developing a PWA with Vue
### Understanding Vue.js
Vue.js is a progressive JavaScript framework that is used to build user interfaces and single-page applications. It offers a gentle learning curve and provides a flexible and efficient way to create web interfaces. When developing a PWA with Vue, developers can take advantage of Vue's reactivity, component-based structure, and the Vue Router for managing navigation within the app.
### Implementing Progressive Enhancement
One of the key principles of PWAs is progressive enhancement, which ensures that the app works for all users regardless of the browser or device they are using. When developing a PWA with Vue, it is essential to focus on progressive enhancement by using modern web capabilities while maintaining compatibility with older browsers. This includes implementing responsive design, offline support, and fast load times.
### Service Worker Integration
Service workers play a crucial role in PWAs by enabling features such as offline functionality, push notifications, and caching. When using Vue to develop a PWA, integrating a service worker is necessary to handle the caching and network requests, allowing the app to work offline and load quickly on subsequent visits.
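The caching part of that work boils down to a cache-first decision. Below is a minimal sketch of the logic in plain JavaScript; in a real service worker this lives in the `fetch` event handler and uses the Cache API, but here the cache and network are injected so the idea stands on its own:

```javascript
// Minimal sketch of a cache-first strategy: serve from cache when possible,
// otherwise fetch from the network and cache the response for next time.
// `cache` and `network` are injected (a Map and an async function here); in a
// real service worker they'd be the Cache API and fetch().
async function cacheFirst(request, cache, network) {
  const cached = cache.get(request);
  if (cached !== undefined) return cached; // cache hit: no network round trip
  const response = await network(request); // cache miss: go to the network
  cache.set(request, response);            // remember it for offline/next visit
  return response;
}
```

Once a resource has been cached, subsequent requests never touch the network, which is what makes the app usable offline and fast on repeat visits.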
## Positives of Building a PWA with Vue
### Cross-Platform Compatibility
One of the primary advantages of building a PWA with Vue is cross-platform compatibility. PWAs can run on any device with a web browser, eliminating the need to develop separate native apps for different platforms. This can result in cost savings and a broader reach for the app.
### Improved User Experience
PWAs built with Vue can provide an enhanced user experience compared to traditional mobile apps. They can be installed on the user's device, work offline, and deliver fast and engaging experiences, leading to higher user engagement and satisfaction.
### Easier Maintenance and Updates
Maintaining and updating PWAs built with Vue is generally easier compared to native apps. Developers can push updates directly to the PWA, and users will receive the latest version when they next access the app. This streamlined update process can result in better user retention and improved security.
## Negatives of Building a PWA with Vue
### Limited Access to Device Features
While PWAs have made significant progress in accessing device features such as camera, geolocation, and push notifications, they still have limitations compared to native apps. Certain advanced device functionalities may not be fully accessible when building a PWA with Vue, which can hinder the app's capabilities in some scenarios.
### Browser Support and Performance Variability
PWAs rely on modern web technologies and APIs, which may not be uniformly supported across all browsers. This can lead to variability in performance and user experience, especially on older browser versions. Developers need to carefully consider browser compatibility and performance optimization when building a PWA with Vue.
### Discoverability and Installation Barriers
While PWAs can be installed on a user's device, the process is not as seamless as downloading an app from an app store. Discoverability and installation barriers may exist, impacting the adoption and usage of the PWA. Educating users about the benefits of installing a PWA and simplifying the installation process are ongoing challenges for PWA developers.
In conclusion, developing a Progressive Web App with Vue presents a unique set of intricacies and considerations. While there are clear positives such as cross-platform compatibility, improved user experience, and easier maintenance, it's important to weigh these against the potential negatives such as limited access to device features, browser support variability, and installation barriers. Ultimately, the decision to build a PWA with Vue should be based on the specific requirements and goals of the project, considering the trade-offs between web-based and native app development. | bubu13gu |
500,437 | Modern Software engineering is impossible to imagine without:- | 1) Linux 2) Terminal 3) Git 4) Github 5) StackOverflow 6) Google 7) Wikipedia 8) Virtualization 9) Je... | 0 | 2020-10-28T19:15:33 | https://dev.to/rajeevranjancom/modern-software-engineering-is-impossible-to-imagine-without-2ehg | productivity, linux, github, git | 1) Linux
2) Terminal
3) Git
4) Github
5) StackOverflow
6) Google
7) Wikipedia
8) Virtualization
9) Jenkins
10) MySQL
11) Redis
12) Diff
13) Tomcat
14) Slack
15) JIRA
16) Chrome | rajeevranjancom |
745,504 | How to build internal tools on Stripe with SQL | We're all building internal tools on Stripe. It would be great if we could build them faster so our... | 0 | 2021-07-01T20:29:34 | https://docs.sequin.io/stripe/playbooks/retool-subs | stripe, sql, tutorial, postgres | We're all building internal tools on Stripe. It would be great if we could build them faster so our customers and business are happier.
So let's build a tool to manage Stripe subscriptions using Retool. The app will allow you to search through all your subscriptions, see all the associated current and upcoming invoices, charges, and products for your customer all in one, clean view. Then you can begin to take actions like exporting invoices or canceling subscriptions:

While Retool does come with a [Stripe API integration](https://docs.retool.com/docs/stripe-integration), configuring a Retool app to search and retrieve data through the Stripe API is tedious. You'll still need to handle pagination, caching, and running multiple, sequential API calls.
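To make "tedious" concrete, here is the kind of cursor-pagination loop a Stripe-style list endpoint forces you to write. This is a hedged sketch: `fetchPage` stands in for the real API call, and only the `has_more`/`starting_after` cursoring pattern is borrowed from Stripe's list API conventions:

```javascript
// Sketch of the manual pagination the raw API route requires. `fetchPage` is a
// stand-in for the real request; the `has_more` / `starting_after` cursoring
// mirrors Stripe-style list endpoints, but none of this is the Stripe SDK.
async function listAll(fetchPage) {
  const items = [];
  let startingAfter; // cursor: id of the last item on the previous page
  let hasMore = true;
  while (hasMore) {
    const page = await fetchPage({ limit: 100, starting_after: startingAfter });
    items.push(...page.data);
    hasMore = page.has_more && page.data.length > 0; // guard against stalling
    if (page.data.length > 0) startingAfter = page.data[page.data.length - 1].id;
  }
  return items;
}
```

And that's before caching the results or chaining further calls for each record's invoices and charges.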
Luckily, Sequin replicates all your Stripe data to a Postgres database so you can use Retool's first class support for SQL to work with your Stripe data. In this tutorial you'll learn how this all works together to make building your Stripe subscription tools easy.
## Stripe Setup
You'll need a Stripe account that contains active subscriptions to build this Retool app. If you don't have any active subscriptions in your LIVE Stripe account, you can easily add some fake ones in your TEST account (in fact - building a `staging` version of this app using your Stripe account in TEST mode is highly recommended since you're working with sensitive data).
### Create test subscriptions
To get going, add a couple test subscriptions to your Stripe account:
**Step 1:** Login to your [Stripe dashboard](https://dashboard.stripe.com/dashboard) and put your account into _TEST MODE_ by flipping the **View test data** switch.

**Step 2:** Create a recurring product by going to the product page, clicking **+ Add Product**, and filling out the form to create a new product. Double check that the product is configured to be **Recurring**:

**Step 3:** Finally, create a new customer with a subscription to the product you just created. To do so, click the **Actions** button on the customer page and select **Create Subscription**:

Repeat the process by creating a couple more customers with recurring subscriptions.
### Generate a restricted Stripe API key
Sequin requires an API key to sync all your Stripe data in real-time. As a best practice, we recommend providing Sequin with a [restricted API key](https://stripe.com/docs/keys). To generate a restricted API key:
**Step 1:** Navigate to the [Stripe API keys page](https://dashboard.stripe.com/test/apikeys) by selecting **Developers** in the left navigation and clicking **API keys**. Then click the **+ Create restricted key** button.

**Step 2:** Give your key a name (something like "Sequin" will do just fine). Then provide this new key with the following permissions:
- **READ** access to everything
- **READ & WRITE** access to **Webhooks**
- **No** access to the **CLI**
You can get more details by reading [Sequin's reference for Stripe](https://docs.sequin.io/stripe/reference#create-a-stripe-api-key).
**Step 3:** Finally, click the **Create Key** button and keep this new restricted API key handy as you move on to set up Sequin.
## Sequin Setup
With your Stripe API key created, you can now setup Sequin to replicate Stripe to a Postgres database:
**Step 1:** Create or log in to your [Sequin account](https://app.sequin.io/signup).
**Step 2:** Connect your Stripe account to Sequin by going through the tutorial or clicking the **Add database** button and selecting **Stripe**.
**Step 3:** You'll be prompted to enter your Stripe API key. Then, in the destination section, select to have a **New Sequin database** generated. Finally, click **Create**.
**Step 4:** Sequin will immediately provision you a Postgres database and begin syncing all your Stripe data to it (if you're using a TEST API key, then Sequin will only sync your TEST data for free, forever). You'll be provided with credentials for your new database:

## Retool Resource Setup
Now, add your Sequin database to Retool [like any other Postgres database](https://docs.retool.com/docs/connecting-your-database):
**Step 1:** In a new tab, log into your [Retool dashboard](https://retool.com/). In the top menu bar click **Resources** and then the blue **Create New** button.
**Step 2:** Select **Postgres** from the list of resource types.
**Step 3:** Enter the name for your resource (i.e. "Stripe") and then enter the **Host**, **Port**, **Database name**, **Database username**, and **Password** for your Sequin database. You can copy and paste these from Sequin. Then click the blue **Create resource** button.

**Step 4:** Retool will confirm that your resource was created. Click **Back to resources** for now.
## Retool App Setup
With Stripe successfully connected to Retool using Sequin, we are ready to build an app that shows all your subscriptions, invoices, and charges in one clean view.
First, get the app set up in Retool.
**Step 1:** On the Retool app page, click the blue **Create new** button and select **Create a blank app**:

**Step 2:** Give your app a name. Something like _Super Subscription Center_ will work just fine and then click **Create app**:

**Step 3:** You'll now see a blank Retool app in edit mode. To start building the app, drag and drop a text field into the header. Then, in the inspector drawer on the right, enter `# Super Subscription Center` as the value to give your app a name:

This is the basic flow for adding new components to your app:
1. **Drag and drop** the visual components into your app.
2. **Configure** the data and interactions for the component.
3. **Adjust** layout and polish the UI of the component.
You'll follow this construction pattern as you build the rest of the app from here on out.
## Searchable subscriptions
With all the foundations in place, you are ready to start building the core functionality of your app - starting with a searchable table that shows all the current subscriptions.
### Scaffold the UI
Drag and drop the components that will make up this section of the app onto the canvas:
First, drag a **Container** component onto the canvas. Resize it to cover about half the width of the app.
Then drag and drop a text input field and place it at the top of the container. This will be your search bar. In the inspector on the right, edit the component's **Label** to be `Email` and then to make it look nice select `search` as the **Left icon**:

Drag and drop a **table** component under your newly created search bar and position it to fill up the container. At the end, your app will look something like this:

### Query for subscriptions
To add the underlying Stripe data to your app, you'll simply query your Sequin database using SQL. To step into this paradigm, let's add a simple set of data with search. Then, we'll refine the query to pull in the exact data you need.
**Step 1:** Open up the bottom panel and create a new query by clicking **+ New** and selecting **Resource query**:

**Step 2:** Select the **Stripe Postgres** database you created earlier as the resource, then enter the SQL statement below:
```sql
select
customer.id as "cus_id",
customer.name,
customer.email,
subscription.id as "sub_id",
subscription.status,
subscription.current_period_end,
subscription.collection_method
from customer
left join subscription
on customer.id = subscription.customer_id;
```
When you click the **Preview** button you'll see that this query pulls in key details about your customers as well as the customer's associated subscriptions via a `JOIN` with the `subscription` table.

**Step 3:** This query looks good for now, so click the **Save & Run** button and then name the query `get_subscriptions` by clicking on the query in the list on the left.
**Step 4:** To pull the data from `get_subscriptions` into the table in your app, open the right inspector, select your table in the canvas, and then in the **data** field enter `{{get_subscriptions.data}}`. The double brackets (i.e. `{{}}`) indicate that you are using JavaScript in Retool. Then, the `get_subscriptions.data` is retrieving the data from your query.
You'll immediately see the data from your query populate your table:

**Step 5:** You're now querying data from Stripe (using SQL!) and populating that data to your table in the UI of your app. Now, add search. To do so, add the following `WHERE` clause to your `get_subscriptions` query:
```sql
select
customer.id as "cus_id",
customer.name,
customer.email,
subscription.id as "sub_id",
subscription.status,
subscription.current_period_end,
subscription.collection_method
from customer
left join subscription
on customer.id = subscription.customer_id
where subscription.status is not null and ({{ !textinput1.value }} or customer.email::text ilike {{ "%" + textinput1.value + "%" }});
```
This `WHERE` clause does two things:
- First, `{{ !textinput1.value }}` checks whether the text input is empty. If it is, the condition is true for every row, so no filtering occurs.
- If there is text in the input, Postgres `ilike` performs a case-insensitive match against the customer's email. (The `subscription.status is not null` check also drops customers that have no subscription at all.)
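The same short-circuit filter, expressed in plain JavaScript purely to illustrate what the SQL is doing (this is not Retool code, and `matchesSearch` is a made-up helper name):

```javascript
// What the WHERE clause computes, as a plain function (illustration only).
function matchesSearch(email, search) {
  if (!search) return true; // empty input: match every row ({{ !textinput1.value }})
  // Case-insensitive substring match, like `ilike '%term%'` in Postgres.
  return email.toLowerCase().includes(search.toLowerCase());
}
```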
When you click **Save & Run** you'll now see that when you enter text into the text input you search your table:

**Step 6:** In addition to customer and subscription data, you also want to see some invoicing data. Specifically, you need to see the value of the subscription and what products are included. To do so, update your `get_subscriptions` query to add the following fields:
```sql
select
  customer.id as "cus_id",
  customer.name,
  customer.email,
  subscription.id as "sub_id",
  subscription.status,
  subscription.current_period_end,
  subscription.collection_method,
  invoice.amount_due::numeric,
  line_item.description
from customer
left join subscription
  on customer.id = subscription.customer_id
left join invoice
  on subscription.latest_invoice_id = invoice.id
left join invoice_line_item_map
  on invoice.id = invoice_line_item_map.invoice_id
left join line_item
  on invoice_line_item_map.line_item_id = line_item.id
where subscription.status is not null
  and ({{ !textinput1.value }} or customer.email::text ilike {{ "%" + textinput1.value + "%" }})
```
This will pull in the `invoice.amount_due` and `line_item.description` details you need by joining to the `invoice` and `line_item` tables.
You've now queried for all your data and set up search using SQL.
### Clean up the table
As a last step, adjust the UI so it shows the data effectively.
**Step 1:** Select the table and open up the right inspector.
**Step 2:** Simplify the table by removing data that is only helpful to the app - but not your user. In this case, you can drop the `cus_id` and `sub_id` columns as well as the `subscription end` column by clicking the eye icon.
**Step 3:** Rename and format each of the columns in your table by selecting each column in the inspector, formatting the name of the column, and aligning the data type. For instance, for the `amount` column you can give the column a friendly name like `Amount` and then for the data type select `USD (cents)`:

You now have a searchable table that returns data to help you evaluate subscriptions. Now, you'll bring in the details.
## Subscription details
After you select a subscription in the table you just created, you'll want to see the details of the subscription on the right side of your app. Let's start with the customer and subscription details card in the top right:

For this component, you'll repeat the same construction pattern by first scaffolding the UI, connecting the data, and then cleaning up the interface.
### Scaffold the UI
This component is simply a container of text fields that present data about the customer and their subscription in more detail. To lay out the UI, drag and drop a container in the top right portion of the app and add the following placeholder text fields:

To format the text appropriately, use [Markdown](https://www.markdownguide.org/cheat-sheet/). For instance, for **Customer Name** and **Status** you can format the text as `H3` by entering the value as `### Customer Name`.
### Query for customer and subscription data
Now, replace the placeholder text with real data from Stripe.
You can populate the first several fields in the subscription card with data already available in the table to the left. All you need to do is pull those values into the text components.
Starting with the `Customer Name` text component, select the component and in the inspector on the right enter the value as `### {{table1.selectedRow.data.name}}`:

This tiny JavaScript statement pulls the `name` value from whatever row is selected in `table1`. For your end user, this means the text box will immediately show the name of any customer they select in the table.
You can repeat this same data access pattern for the next several fields:
#### Status
```js
### {{table1.selectedRow.data.status === "active" ? "Active" : table1.selectedRow.data.status === "canceled" ? "Canceled" : "Issue"}}
```
For status, you'll again pull the value from the selected row in `table1` and utilize a [ternary operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Conditional_Operator) to show a user-friendly value for the status of the subscription.
#### Customer Email | Customer ID
```js
{{table1.selectedRow.data.email}} | {{table1.selectedRow.data.cus_id}}
```
Here is a nice example of how you can easily concatenate string values in a text component.
#### Subscription Plan
```
{{table1.selectedRow.data.description}}
```
For the subscription plan you can simply return the description for the selected row in `table1` as normal.
---
You've now populated as much of the data in the subscription card as you can from the existing data in `Table1`. For the rest of the data in this card, you'll need to write new queries.
Turn your attention to the **Since:**, **Spent:**, **MRR:**, and **Δ:** fields:

For these fields, you want to pull in specific details about the customer so you can easily see how valuable the customer is. To get this data, you'll need to write a new query (`get_customer`) as well as a helper function (`calc_customer_stats`).
Starting with the `get_customer` query, open the bottom drawer and click to create a new query against your Sequin database. Then enter the following SQL statement:
```sql
select
  customer.id,
  customer.created as "cus_created",
  invoice.number,
  invoice.amount_paid::int as "amount_paid",
  invoice.period_start,
  invoice.period_end,
  invoice.created as "inv_created"
from customer
left join invoice
  on customer.id = invoice.customer_id
where customer.id = {{table1.selectedRow.data.cus_id}}
order by invoice.number asc;
```
This query pulls in some more information about the customer and then performs a `JOIN` with the `invoice` table to pull in all their billing history. The `WHERE` clause at the end filters the data for just the one customer selected in `Table1`. Last, the `ORDER BY` clause allows us to sort the results to make working with the data easier in the helper functions.
Click the **Save & Run** button and then name the query `get_customer`:

You now have the raw data required to calculate the rest of the fields. While you could use some additional SQL to calculate the specific values for each field, you'll use a JavaScript helper function here.
To build your helper function, click to create a new query and select **JavaScript Query**:

For this helper function, you want to iterate through the array of data returned from your `get_customer` query to calculate some metrics:
```js
let index = get_customer.data.amount_paid.length;

let toDollars = (num) => {
  return (num / 100).toLocaleString("en-US", {
    style: "currency",
    currency: "USD",
  });
};

return {
  mrr: toDollars(get_customer.data.amount_paid[index - 1]),
  spend: toDollars(get_customer.data.amount_paid.reduce((i, o) => i + o)),
  growth: toDollars(
    get_customer.data.amount_paid[index - 1] - get_customer.data.amount_paid[0]
  ),
};
```
This helper function does two things. First, it formats numbers into currency strings using the `toDollars()` function.
Next, it calculates the metrics you need as follows:
- **MRR:** is calculated as the amount paid on the most recent invoice (assuming all your customers have only subscription products).
- **Spend:** is a summation of all the revenue from the customer.
- **Growth (i.e. Δ):** is simply the difference in value of the most recent invoice compared to the first invoice.
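A quick way to sanity-check the math is to run the same calculation on a hypothetical invoice history (amounts in cents, as Stripe stores them; the sample values below are made up):

```javascript
// Hypothetical amount_paid history for one customer, in cents, oldest first.
const amountPaid = [1000, 1500, 2000];

const toDollars = (num) =>
  (num / 100).toLocaleString("en-US", { style: "currency", currency: "USD" });

const stats = {
  // MRR: the amount paid on the most recent invoice.
  mrr: toDollars(amountPaid[amountPaid.length - 1]),
  // Spend: the sum of every invoice.
  spend: toDollars(amountPaid.reduce((sum, amount) => sum + amount)),
  // Growth: the most recent invoice minus the first invoice.
  growth: toDollars(amountPaid[amountPaid.length - 1] - amountPaid[0]),
};
```

With this history, `stats` works out to an MRR of $20.00, total spend of $45.00, and growth of $10.00, which matches what the helper function returns for real query data.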
Click **Save** and then name the query `calc_customer_stats`.

You want this helper function to run anytime the `get_customer` query is run. So as a last step, open the `get_customer` query and have the `calc_customer_stats` query trigger on success:

With your metrics calculated you can now add these values to the subscription card:
#### Since
```
{{moment(get_customer.data.cus_created[0]).format("MMMM DD, YYYY")}}
```
Here you use `moment.js` to format the `datetime` value returned from your `get_customer` query.
#### MRR
```
{{calc_customer_stats.data.mrr}}
```
#### Spent
```
{{calc_customer_stats.data.spend}}
```
#### Growth (a.k.a Δ)
```
{{calc_customer_stats.data.growth}}
```
---
There is just one more field to add to your subscription card: details around when the customer will receive their next invoice.
This last field requires one additional query that pulls in the `upcoming_invoice` details for the customer. Luckily, this data lives in your Sequin database.
Click to create another SQL query against your Sequin database and enter the following SQL statement:
```sql
select
  upcoming_subscription_invoice.next_payment_attempt as "next_invoice_date",
  (upcoming_subscription_invoice.amount_due / 100.00)::money as "next_invoice_amount"
from upcoming_subscription_invoice
where upcoming_subscription_invoice.subscription_id = {{table1.selectedRow.data.sub_id}};
```
Sequin maintains tables that show the _temporary_ state of upcoming objects you would otherwise need to use the Stripe API for. In this case, you're pulling in the `time period` and `amount` of the next invoice associated to the subscription.
Click the **Save & Run** button and name this query `get_next_invoice`:

Now, pull this data into your app's interface by updating the last remaining value in your subscription card with the data retrieved in `get_next_invoice`:
```
Next Invoice on **{{moment(get_next_invoice.data.next_invoice_date[0]).format("MMMM Do YYYY")}}** for **{{get_next_invoice.data.next_invoice_amount[0]}}**
```
To make this value stand out, format the text to be green by using the style options in the inspector:

---
The subscription card is now complete. When a user selects a subscription in the table, the details of the customer including key metrics and upcoming invoice details are immediately shown. All with SQL.

## Invoices, charges, and products
Now, you'll round out the app by showing the customer's prior invoices, charges, and products.

You'll be able to pull in all this data in one additional SQL query and then display it in your app using Retool's `List View` component.
Keeping with the process, first you'll scaffold the UI components.
### Scaffold the UI
The `List View` component allows you to show a list of items. It can dynamically show more or fewer items depending on how the underlying data changes.
Drag and drop the `List View` component onto your app and then add a `Container` component to the top of the list. As soon as you drop the `Container` into the `List View` component you'll see it's duplicated three times. This quickly gives you a sense of how the `List View` component works by showing a new UI component for each item in an array of data.
You'll make the `List View` component dynamic later, but for now you're just scaffolding the front-end. So to make things easier select the `List View` component and in the inspector adjust the **Number of rows** to one for the time being.

Now, add a couple more UI components to the `Container` you created:

The only flourish here (in addition to the emojis 👏) is the styling on the container with the payment information. You can do the same by selecting the container and editing the style as you did previously with the green text.
### Query for invoices, payments, and products
Open the bottom drawer and create a new query for your Sequin database. Enter the following SQL statement:
```sql
select
  invoice.id as "inv_id",
  invoice.subscription_id as "sub_id",
  invoice.number,
  invoice.created,
  invoice.status,
  (invoice.amount_paid / 100.00)::money as "amount",
  invoice.hosted_invoice_url,
  line_item.description as "line_item_description",
  (price.unit_amount / 100.00)::money as "unit amount",
  price.recurring_interval,
  product.name as "product",
  charge.id as "charge_id",
  charge.description as "charge_description",
  (charge.amount / 100.00)::money as "charge_amount",
  charge.status as "charge_status",
  charge.created as "charge_created"
from invoice
left join charge
  on invoice.charge_id = charge.id
left join invoice_line_item_map
  on invoice.id = invoice_line_item_map.invoice_id
left join line_item
  on invoice_line_item_map.line_item_id = line_item.id
left join price
  on line_item.price_id = price.id
left join product
  on price.product_id = product.id
where invoice.subscription_id = {{table1.selectedRow.data.sub_id}}
group by invoice.id, charge.id, line_item.description, price.unit_amount, product.name, price.recurring_interval
order by invoice.number desc;
```
This SQL query performs a `SELECT` across several tables that you `JOIN` together in order to pull in invoices, line_items, prices, products, and charges. Then, you use the `WHERE` statement to filter the data down to just the one subscription you have selected in `Table1`.
Click to **Save & Run** the query and name it `get_current_invoices`:

### Clean up the list
You'll now link the data from your `get_current_invoices` query to your UI components.
Select the **Invoice #** placeholder and replace the value with:
```
### 🧾 Invoice #: {{get_current_invoices.data.number[i]}}
```
This statement should look familiar to you with the exception of the `[i]` at the end. So let's step through this:
- The `###` is markdown notation for an `H3` text format.
- The double brackets then tell Retool we'll be using JavaScript. The `get_current_invoices.data.number` pulls in the invoice number from the `get_current_invoices` query. Because we pull in all the invoices in the `get_current_invoices` query, this value is actually an array.
- So finally, the `[i]` is extracting just one value from that array. The variable `i` is supplied by the `List View` component so that each `container` in the list reads a different index from the array: the first `container` in the list uses index 0, the second uses index 1, and so on.
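To make the indexing concrete, here is a small JavaScript sketch of how a column-oriented query result maps onto list rows. The sample data is made up; in Retool, the `i` variable is supplied by the `List View` component itself:

```javascript
// Query results come back column-oriented: one array per column.
const data = {
  number: [1003, 1002, 1001],
  status: ["paid", "paid", "open"],
};

// Each row of the list reads index i from every column array.
const rowCount = data.number.length;
const labels = [];
for (let i = 0; i < rowCount; i++) {
  labels.push(`Invoice #: ${data.number[i]} (${data.status[i]})`);
}
// labels[0] → "Invoice #: 1003 (paid)", labels[2] → "Invoice #: 1001 (open)"
```

The first container renders the first element of every column, the second container the second element, and so on, which is exactly how `{{get_current_invoices.data.number[i]}}` resolves per row.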
After you enter the value, you should see your text component update correctly:

To finish the job, you now need to match the value from the `get_current_invoices` query to the remaining values in the UI:
| UI Text Component | Value |
| --------------------- | -------------------------------------------------------------------------------------------------- |
| Invoice Date and Time | `{{moment(get_current_invoices.data.created[i]).format("MMM DD, YYYY - hh:mm A")}}` |
| Invoice Status | `#### {{get_current_invoices.data.status[i] === "paid" ? "🟢 Paid" : "⚠️ Issue"}}` |
| Invoice Amount | `{{get_current_invoices.data.amount[i]}}` |
| Invoice Plan | `{{get_current_invoices.data.line_item_description[i]}}` |
| Payment | `##### Payment → {{get_current_invoices.data.charge_amount[i]}}` |
| Payment Status | `##### {{get_current_invoices.data.charge_status[i] === "succeeded" ? "✅ Success" : "⚠️ Issue"}}` |
| Payment Date and Time | `{{moment(get_current_invoices.data.charge_created[i]).format("MMM DD, YYYY - hh:mm A")}}` |
| Payment Description | `{{get_current_invoices.data.charge_description[i]}}` |
You'll now have a clean representation of your customer's invoices:

Finally, you want the number of list items displayed in your list to change depending on the number of invoices associated to a subscription. To do so, select the `List View` component and change the **Number of rows** in the inspector to `{{get_current_invoices.data.inv_id.length}}`:

## Add interactions
Your Super Subscription Center is now pulling in all the data you need to find a customer's subscription and evaluate it. Now, add two interactions to your app to start exploring how to mutate your Stripe data using Sequin and Retool.
### View invoice
To get the hang of interactions, let's start with a simple button that allows a user to see an invoice.
First, drag and drop a button into the container of one of your invoice items.
Then, in the inspector change the text of the button to read `View Invoice`.
Finally, to make the button trigger an event click the **+ New** link in the **Event Handlers** section of the inspector. Configure the event as follows:

- **Event:** Click
- **Action:** Go to URL
- **URL:** `{{get_current_invoices.data.hosted_invoice_url[i]}}`
Since you are already pulling in the URL for the invoice in the `get_current_invoices` query, you just need to associate this URL to the button.
With the event configured, click the **View Invoice** button you just created and you'll see the invoice load in a new tab.
### Cancel subscription
So far, you've read all your Stripe data using Sequin. Sequin is a read-only database, so to mutate your Stripe data, you'll use the Stripe API.
For instance, to cancel a subscription, you'll simply make a `DELETE` request against the Stripe API.
Any mutation you make will then propagate to your Sequin database in about 1 second.
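Outside of Retool, that cancellation boils down to a single HTTP call. Here is a hedged sketch of what the request looks like; the subscription ID and API key below are placeholders, and inside Retool the Stripe resource you configure handles authentication for you:

```javascript
// Build the DELETE request Stripe expects when canceling a subscription.
// Nothing here is Retool-specific; it is the raw API call underneath.
const buildCancelRequest = (subscriptionId, apiKey) => ({
  url: `https://api.stripe.com/v1/subscriptions/${subscriptionId}`,
  options: {
    method: "DELETE",
    headers: { Authorization: `Bearer ${apiKey}` },
  },
});

const request = buildCancelRequest("sub_123", "sk_test_placeholder");
// fetch(request.url, request.options) would perform the cancellation.
```

Roughly a second after the request succeeds, the canceled status shows up in your Sequin database, so a delayed re-run of `get_subscriptions` picks it up.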
Let's step through it. To get started, add a new Stripe API resource to Retool:
**Step 1:** Open up the bottom drawer and create a new resource query. In the **Resource** dropdown, select **Create new resource**:

**Step 2:** You'll be taken to Retool's resource page. Select to create a new **Stripe** resource.

**Step 3:** Give the new resource a name (something like _Stripe API_) and enter your API key. Here, you'll want to generate a new restricted API key for Retool that includes WRITE permissions as well.
**Step 4:** Click **Create resource** and then navigate back to your app.

Back in your Super Subscription Center app, open the bottom drawer, and click to create a new resource.
Select the **Stripe API** resource you just created and then select the **DELETE** **`/v1/subscriptions/{subscription_exposed_id}`** as the operation.
You want to delete the subscription that the user has selected, so in the **PATH** section, set the **subscription_exposed_id** to `{{table1.selectedRow.data.sub_id}}`.
After the user deletes a subscription, you also want to update the subscription's status in the app to close the feedback loop and let the user know the subscription has indeed been canceled. To do so, set the `get_subscriptions` query to trigger when your Stripe API call succeeds.
All together, your Stripe API query to delete subscriptions will look like this:

Deleting a subscription is a big action. So click the **Advanced** tab and make the following changes:

- Select to **Show a confirmation modal before running**. This will ensure the user needs to confirm the action so they wield this power with caution.
- Then, set the **Run triggered queries after** to `1000` (milliseconds). This will ensure that your Sequin database is fully up-to-date before you refresh the data on the page to confirm the subscription has been deleted.
With your advanced settings in place, click to **Save** the query and name it `cancel_subscription`.
Now, drag a button into the subscription card and configure it to trigger the `cancel_subscription` query:

- Edit the button's text to read **Cancel Subscription**
- Create an event handler that triggers the `cancel_subscription` query
- To improve the UX, disable the button if the subscription is already canceled by setting **Disable when** to `{{table1.selectedRow.data.status === "canceled"}}`
- Finally, make the button red to let the user know this is dangerous.
Now, see your full Super Subscription Center working by searching for a subscription, evaluating it, and then deleting it:

## Next Steps
You now have an internal tool purpose-built for your team to manage subscriptions.
Note all the things you didn't need to build to get to this point.
With Retool, you didn't need to create a React application, worry about deployments, authentication, or even fuss with HTML, CSS, and boilerplate JavaScript.
And with Sequin, you were able to pull in all your Stripe data in just a couple of SQL queries. No need to create nested API calls, deal with pagination, or fuss with client-side search logic.
From here, you can continue to customize your app. Bring in data from your production database and join it to Stripe seamlessly (Sequin can put your Stripe data in _your_ database). And of course, when you are ready, add your production API key to Sequin, change your resource in Retool, and start working with real customer subscriptions. | thisisgoldman |
793,016 | Vscode Extensions You Should Try Out | It’s no news that vscode has been and still is one of the best code editors in the market. Vscode... | 0 | 2021-08-20T01:47:20 | https://dev.to/oyedeletemitope/vscode-extensions-you-should-try-out-4f58 | vscode, 100daysofcode, devops, javascript | It’s no news that vscode has been and still is one of the best code editors in the market.
Vscode comes with tons of extensions and features that’ll make development processes more efficient, get things done faster, and many more.
In this article, I’ll be writing about some of these extensions. These are the ones that you'll definitely need. Most of them I’ve used, and others were recommended by a few of my colleagues. To make it easier for us, I’ll be grouping them into:
* General-purpose extension (necessary extensions that will help improve our use of vs code editor)
* Themes (giving our code editor a customized look)
So let's jump right in!!!
## General-purpose extensions
### Blockman

Blockman is a vscode extension for highlighting a nested block of codes. It gives you information about where the code belongs. It's an extension that’s handy. Get it [here](https://marketplace.visualstudio.com/items?itemName=leodevbro.blockman)
### Auto close tag

We’ve all had moments where we forgot to close a tag, which has led to an error or a bug. Auto Close Tag eases the burden of ensuring we close our tags by automatically adding the HTML/XML closing tag, so we can write our code without even worrying. Get it [here](https://marketplace.visualstudio.com/items?itemName=formulahendry.auto-close-tag)
### Auto rename tag

Auto Rename Tag, as the name implies, helps in automatically renaming paired HTML/XML tags. This is another extension I’ll recommend. Get it [here](https://marketplace.visualstudio.com/items?itemName=formulahendry.auto-rename-tag)
### Code spell checker
This extension checks whether your words and variable names are spelled correctly. The goal of this spell checker is to help catch common spelling errors while keeping the number of false positives low. Get it [here](https://marketplace.visualstudio.com/items?itemName=streetsidesoftware.code-spell-checker)
### Eslint

ESLint is a vscode extension that can both format your code and analyze it to make suggestions for improvement. It is also configurable, which means you can customize how your code is evaluated. Get it [here](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint)
### Prettier

Prettier is a formatting extension like ESLint that automatically formats your code whenever you save it. Also, if you’re still new to coding, Prettier can save you by allowing you to focus on your project instead of how to make your code readable. Get it [here](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode)
### Live server

Live Server is another “must-have” vscode extension. Normally, when you make a change in your code or write something new, you need to refresh the page manually to see the changes. In other words, if you make 100 changes in your code each day, you need to refresh the browser 100 times. The Live Server extension, however, automates this for you. After installing it, an automated localhost will run in your browser, which you can start with a single button. Get it [here](https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer)
### Quokka.js

Quokka.js is a developer productivity tool for rapid JavaScript/TypeScript prototyping. Runtime values are updated and displayed in your IDE next to your code, as you type. It’s a simple, lightweight extension, and one that’s perfect for both seasoned developers and newbies alike. It’s also free for community use, but if you’re a JavaScript/TypeScript professional, you can also buy a Pro license that lets you modify your runtime values without having to change your code. Get it [here](https://marketplace.visualstudio.com/items?itemName=WallabyJs.quokka-vscode)
### VSCode icons

vscode-icons is an extension for icon customization, project auto-detection and it adds nice icons too. It helps me identify what I’m looking for much faster. Get it [here](https://marketplace.visualstudio.com/items?itemName=vscode-icons-team.vscode-icons)
### Gitlens

The GitLens extension supercharges the Git capabilities built into Visual Studio Code. It helps you visualize code authorship at a glance via Git blame annotations and code lens, seamlessly navigate and explore Git repositories, gain valuable insights via powerful comparison commands, and so much more. GitLens simply helps you better understand code. Get it [here](https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens)
### Bracket Pair Colorizer

It’s such a simple quality-of-life improvement. This extension essentially allows the brackets in your code to have a different color depending on how deeply they are nested, so matching brackets can be identified by color. The user can define which characters to match, and which colors to use. Get it [here](https://marketplace.visualstudio.com/items?itemName=CoenraadS.bracket-pair-colorizer)
### Live share

Visual Studio Live Share enables you to edit and debug collaboratively with others in real-time, no matter what programming languages you're using or app types you're building. It allows you to instantly (and securely) share your current project, and then as needed, share debugging sessions, terminal instances, localhost web apps, voice calls, and more! Developers that join your sessions receive all of their editor context from your environment (e.g. language services, debugging), which ensures they can start productively collaborating immediately, without needing to clone any repos or install any SDKs. Get it [here](https://marketplace.visualstudio.com/items?itemName=MS-vsliveshare.vsliveshare)
## Themes
### One dark pro

One Dark Pro is based on Atom's default One Dark theme and is one of the most downloaded themes for VS Code. It's one of my favorite themes so far. Get it [here](https://marketplace.visualstudio.com/items?itemName=zhuangtongfa.Material-theme)
### Dracula theme

Dracula is a color scheme for code editors and terminal emulators, with features similar to One Dark Pro. Get it [here](https://marketplace.visualstudio.com/items?itemName=dracula-theme.theme-dracula)
### Night owl

For those who fancy coding at night, the Night Owl extension is the one for you. It has been fine-tuned for those who like to code late into the night. Color choices have taken into consideration what is accessible to people with colorblindness and in low-light circumstances. Get it [here](https://marketplace.visualstudio.com/items?itemName=sdras.night-owl)
### Shades of Purple (SOP)

A professional theme with hand-picked & bold shades of purple for your VS Code editor and terminal. It comes with features and attributes like color highlighting and more. Get it [here](https://marketplace.visualstudio.com/items?itemName=ahmadawais.shades-of-purple)
### Conclusion
There are tons of extensions out there; these are the few I've worked with. I'll be in the comments section, waiting to hear which VSCode extension you'd recommend and how helpful you think it'll be. Please share if you found this helpful. | oyedeletemitope |
1,211,652 | Struct in C++ | include include using namespace std; namespace game { struct game { ... | 0 | 2022-10-05T10:21:08 | https://dev.to/cpp/struct-in-c-b0 | cpp, cpptutorial, learncpp | ```cpp
#include <iostream>
#include <string>
using namespace std;

namespace game
{
    struct game
    {
        unsigned short int score = 0;
        unsigned short int correctAnswers = 0;
        unsigned short int playedQuestions = 0;
        string playerName = "unknown";
        void yourInformation();
        void gameStart();
        void gameInfo();
    };
} // end of namespace

// Greet the player and ask whether they want to play.
void game::game::yourInformation()
{
    cout << "Welcome to play a quiz game on the console" << endl;
    string userData = "unknown";
    cout << "Enter your name: ";
    getline(cin, userData);
    this->playerName = userData;
    cout << "Ok Mr./Ms. " << this->playerName << ", do you want to play a game? ";
    getline(cin, userData);
    if (userData == "yes" || userData == "Yes" || userData == "YES")
    {
        this->gameStart();
    }
}

// Ask each question and award one point per correct answer.
void game::game::gameStart()
{
    string userData = "null";
    cout << "Enter the name of the capital city of India: ";
    getline(cin, userData);
    this->score += (userData == "new delhi" || userData == "New Delhi" || userData == "delhi" || userData == "Delhi") ? 1 : 0;
    this->gameInfo();
    cout << "Capital city of China: ";
    getline(cin, userData);
    this->score += (userData == "Beijing" || userData == "beijing" || userData == "BEIJING") ? 1 : 0;
    this->gameInfo();
    cout << "Who is the president of Russia? ";
    getline(cin, userData);
    this->score += (userData == "Vladimir Putin" || userData == "Putin" || userData == "putin" || userData == "vladimir putin") ? 1 : 0;
    this->gameInfo();
    cout << "Which is the world's fastest object-oriented programming language? ";
    getline(cin, userData);
    this->score += (userData == "cpp" || userData == "Cpp" || userData == "c++" || userData == "C++") ? 1 : 0;
    this->gameInfo();
    cout << "Which operating system is mostly used on servers and satellites? ";
    cout << "(a) Windows, (b) Linux, (c) Mac, (d) Android" << endl;
    getline(cin, userData);
    this->score += (userData == "b" || userData == "B" || userData == "Linux" || userData == "linux" || userData == "LINUX") ? 1 : 0;
    this->gameInfo();
    cout << "What is your name? ";
    getline(cin, userData);
    this->score += (userData == this->playerName) ? 1 : 0;
    this->gameInfo();
    cout << "Thanks for playing the game!" << endl;
}

// Print the running score after every question.
void game::game::gameInfo()
{
    this->correctAnswers = this->score;
    this->playedQuestions++;
    cout << "=============================" << endl;
    cout << "Score : " << this->score << endl;
    cout << "Correct answers : " << this->correctAnswers << endl;
    cout << "Played questions : " << this->playedQuestions << endl;
    cout << "=============================" << endl;
}

void information()
{
    game::game myGame;
    myGame.yourInformation();
}

int main()
{
    information();
    return 0;
}
``` | cpp |
1,322,566 | Five tools and resources for Web that survived information overload | How often do you see titles of articles that read something like this: 20 productivity tools to help... | 0 | 2023-01-09T14:19:46 | https://garage.sekrab.com/posts/five-tools-and-resources-for-web-that-survived-information-overload | webdev, design, html, productivity | How often do you see titles of articles that read something like this: 20 productivity tools to help you as a developer, 10 best chrome extensions, 5 hidden resources to help you do this, or that? Well, like all of you, I have a soft spot for these titles and keep them in my favorites with a promise to come back. Though I am old enough to know not to make that promise. Nonetheless, despite the information overload, I've been using **some tools and websites for the past ten years or so**. Here are five of them.
## 5\. WhatFont
<https://chrome.google.com/webstore/detail/whatfont/jabopobgcpjmedljpbcaablpmlmfcogm>
A Chrome extension that lets you inspect fonts on any page without the need to open the inspector. No bells and whistles.

## 4\. Save to Instapaper
<https://www.instapaper.com/save>
A simple bookmarklet that gathers the current URL and adds it to your Instapaper account. I must admit that this is my way of tucking long articles under the rug. It's much more effective than bookmarks because, when you finally have the time, it lists all the articles and their contents in a readable format.
## 3\. Learn RxJS
<https://www.learnrxjs.io/>
If you are an Angular developer, you most probably use RxJS heavily. This **GitBook** is a lifesaver. Very clean and quick to the point. It is more of a reference than a book.

## 2\. Entity conversion calculator
<https://www.evotech.net/articles/testjsentities.html>
This should surprise you. It was created in 2007, and last updated in 2012 (as shown in the source code). The code is so simple I feel guilty I have not done that locally. All it does is give you the HTML code of any character you type, and vice versa. What I like most about this tool is how empty the page is. I hope the author ([Estelle Weyl](https://estelle.github.io/)) never updates it.

## 1\. Paste to Markdown
<https://euangoddard.github.io/clipboard2markdown/>
The simplest tool ever to turn pasted HTML into Markdown. This one is a lifesaver because I write with a Wysiwyg editor (Notion) and then have to create a dev.to version which only accepts markdown. All I have to do is open the site, and paste, nothing else.

## 0\. Copy Paste Characters
<https://copypastecharacter.com/>
A very simple tool that displays all special characters, and the HTML code for them. I started using this a long time ago, and it has survived all of my chrome updates. It never fails. It used to even use a Flash embed to allow copying when clipboard access in the browser was---let's just say---not a thing.

I know there are many web tools for all kinds of activities, whether it is CSS, animation, SVG, video conversion, icon and favicon creation, or proofreading and spell-checking. I almost always google for the tool when I need one, even though I have them in my favorites list. Context is a **B***.
And of course, we should not leave out:
- Google define (just type define something in chrome address bar).
- Dare I say YouTube?
Disclaimer: my counter is zero-based. | ayyash |
1,371,859 | Anagram solution | LeetCode is a popular platform that offers various coding challenges and problems. One of the most... | 0 | 2023-02-19T20:02:30 | https://dev.to/isaacttonyloi/anagram-solution-l7j | leetcode, anagram, beginners, interview |
LeetCode is a popular platform that offers various coding challenges and problems. One of the most interesting categories of problems involves strings and hash tables.
The problem we will be solving is "Group Anagrams," which can be found on LeetCode under the ID "49." This problem requires us to group an array of strings into groups of anagrams. An anagram is a word or phrase formed by rearranging the letters of another word or phrase. For example, "listen" and "silent" are anagrams.
The problem statement provides us with an array of strings, and our task is to group them into anagrams. We can approach this problem by using a hash table. We will iterate through each string in the array and sort the characters in the string. We will then use the sorted string as a key to a dictionary and add the original string to the value of that key. Finally, we will return the values of the dictionary as a list.
```python
def groupAnagrams(strs):
    groups = {}  # renamed from `dict` to avoid shadowing the built-in
    for s in strs:
        # Anagrams share the same letters, so the sorted string works as a key
        key = ''.join(sorted(s))
        if key in groups:
            groups[key].append(s)
        else:
            groups[key] = [s]
    return list(groups.values())
```
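To see it in action, here is the same function with a quick check (the function is repeated so the snippet runs on its own; the input words are illustrative):

```python
def groupAnagrams(strs):
    groups = {}
    for s in strs:
        key = ''.join(sorted(s))  # anagrams share the same sorted spelling
        groups.setdefault(key, []).append(s)
    return list(groups.values())

result = groupAnagrams(["listen", "silent", "enlist", "google"])
print(result)  # [['listen', 'silent', 'enlist'], ['google']]
```

Sorting each word costs O(k log k) for a word of length k, so the whole pass is O(n · k log k) for n words.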
| isaacttonyloi |
1,409,812 | Learn Open Closed Principle in C# (+ Examples) | The Open/Closed Principle (OCP) is a core tenet of the SOLID principles in object-oriented... | 22,559 | 2023-04-10T08:28:00 | https://www.bytehide.com/blog/open-closed-principle-in-csharp-solid-principles | csharp, dotnet, tutorial, programming | The **Open/Closed Principle (OCP)** is a core tenet of the **SOLID principles** in object-oriented programming. By understanding and applying the OCP in **C#**, developers can create maintainable, scalable, and flexible software systems.
This article will discuss the Open/Closed Principle in C#, provide examples, and share best practices to help developers craft clean and robust code.
## Defining the Open/Closed Principle (OCP)
The Open/Closed Principle, introduced by **Bertrand Meyer**, states that software entities (such as classes, modules, and functions) should be open for extension but closed for modification.
In other words, developers should be able to add new functionality to a class without changing its existing implementation. This can be achieved through **abstraction**, **inheritance**, and **polymorphism**.

## Why is the Open/Closed Principle Important?
Adhering to the OCP promotes a more maintainable, flexible, and scalable codebase.
By ensuring that classes are open for extension and closed for modification, developers can add new functionality without altering existing code, minimizing the risk of introducing bugs or breaking existing features.
This principle encourages the use of abstractions and promotes a modular architecture that is easier to understand, test, and refactor.
## Open/Closed Principle in C#: Key Concepts
To understand the OCP in C#, let’s explore some key concepts:
### Abstraction
Abstraction is a technique that allows developers to hide the internal implementation details of a class and expose only its essential features. By using abstraction, developers can create flexible and extensible designs that are less susceptible to change.
### Inheritance and Polymorphism
Inheritance is a mechanism in C# that allows one class to inherit the properties and methods of another class, while polymorphism enables a class to have multiple implementations based on the context.
These concepts play a crucial role in achieving the Open/Closed Principle, as they allow developers to extend classes without modifying their existing implementation.
## Open/Closed Principle C# Example
Let’s consider an example to demonstrate the Open/Closed Principle in C#. Suppose we have a `Shape` class and an `AreaCalculator` class that calculates the area of different shapes:
```csharp
public class Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
}

public class AreaCalculator
{
    public double CalculateArea(Shape shape)
    {
        return shape.Width * shape.Height;
    }
}
```
In this example, the `AreaCalculator` class can only calculate the area of rectangles. If we want to add support for other shapes, such as circles and triangles, we would need to modify the existing implementation of the `AreaCalculator` class, which violates the Open/Closed Principle.
To adhere to the OCP, we can use abstraction and inheritance to create separate classes for each shape type and provide a consistent method for calculating the area:
```csharp
public abstract class Shape
{
    public abstract double CalculateArea();
}

public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }

    public override double CalculateArea()
    {
        return Width * Height;
    }
}

public class Circle : Shape
{
    public double Radius { get; set; }

    public override double CalculateArea()
    {
        return Math.PI * Math.Pow(Radius, 2);
    }
}

public class AreaCalculator
{
    public double CalculateArea(Shape shape)
    {
        return shape.CalculateArea();
    }
}
```
Now, the `AreaCalculator` class adheres to the Open/Closed Principle, as it can support new shapes without modifying its existing implementation.
## Strategies for Implementing the Open/Closed Principle in C#
Here are some strategies to help implement the Open/Closed Principle in C#:
### Using Abstract Classes
Abstract classes can be used to define a base class with common functionality and provide a consistent interface for derived classes. By creating abstract methods, developers can enforce that each derived class implements its own version of the method, allowing for extensibility without modifying the base class.
### Leveraging Interfaces
Interfaces in C# can be used to define a contract that classes must adhere to. By implementing interfaces, developers can create flexible designs that can be easily extended and modified without affecting existing implementations.
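As a short sketch (the `Triangle` class here is illustrative, not from the article), the shapes example could equally be expressed with an `IShape` contract:

```csharp
public interface IShape
{
    double CalculateArea();
}

// A new shape only has to implement the contract; existing code stays untouched
public class Triangle : IShape
{
    public double Base { get; set; }
    public double Height { get; set; }

    public double CalculateArea() => 0.5 * Base * Height;
}
```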
### Applying the Strategy Pattern
The Strategy Pattern is a behavioral design pattern that enables selecting an algorithm at runtime. It can be used to implement the Open/Closed Principle by encapsulating different algorithms within separate classes and providing a common interface for them.
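As a sketch of the idea (the discount example and all names here are illustrative, not from the article), each algorithm lives in its own class behind a common interface, and the caller picks one at runtime:

```csharp
public interface IDiscountStrategy
{
    decimal Apply(decimal price);
}

public class NoDiscount : IDiscountStrategy
{
    public decimal Apply(decimal price) => price;
}

public class PercentageDiscount : IDiscountStrategy
{
    private readonly decimal _percent;
    public PercentageDiscount(decimal percent) => _percent = percent;
    public decimal Apply(decimal price) => price * (1 - _percent / 100);
}

public class PriceCalculator
{
    private readonly IDiscountStrategy _strategy;
    public PriceCalculator(IDiscountStrategy strategy) => _strategy = strategy;

    // New discount rules can be added as new strategy classes
    // without modifying this calculator.
    public decimal FinalPrice(decimal price) => _strategy.Apply(price);
}
```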
## OCP: Best Practices
To ensure adherence to the Open/Closed Principle, follow these best practices:
- Use **abstraction** and **inheritance** to create extensible designs.
- Leverage **interfaces** to define flexible contracts.
- Encapsulate varying behavior using design patterns, such as the **Strategy Pattern**.
## Open/Closed Principle and Other SOLID Principles
The Open/Closed Principle is an integral part of the SOLID principles, ensuring that software entities are open for extension and closed for modification. Adhering to the OCP often goes hand-in-hand with following the other SOLID principles, resulting in a maintainable and flexible codebase.
## Real-World Applications of the Open/Closed Principle
Applying the Open/Closed Principle in real-world scenarios can lead to cleaner, more maintainable software systems. For example, when designing a payment processing system, adhering to the OCP helps developers easily add support for new payment methods without altering existing code.
By defining an abstract base class or an interface for payment methods (analogous to the abstract `Shape` class or an `IShape` interface for shapes), new payment methods can be added as separate classes, ensuring that the core payment processing logic remains unchanged and adheres to the OCP.
## Conclusion
The Open/Closed Principle is a fundamental concept in object-oriented programming and a core tenet of the SOLID principles. By adhering to the OCP in C#, developers can create maintainable, scalable, and flexible software systems that are easier to extend and modify.
By understanding key concepts such as abstraction, inheritance, and polymorphism, and following best practices for implementing the Open/Closed Principle, developers can craft clean and robust code that stands the test of time. | bytehide |
1,411,911 | Salam hemma | A post by Atahan_Biuse | 0 | 2023-03-23T05:26:30 | https://dev.to/atahanbius/salam-hemma-1b06 | atahanbius | ||
1,414,815 | Animated Login Page Using Html & CSS + Source Code | Support us and GET 10% OFF of your next order in my shop using the code: EARLYBIRD... | 0 | 2023-03-25T15:43:49 | https://dev.to/hojjatbandani/animated-login-page-using-html-css-32ij | html, css, beginners, tutorial | {% youtube ypWpu1QVw_M %}
Support us and GET 10% OFF of your next order in my shop using the code: EARLYBIRD 🙏❤️
https://tinyurl.com/2uf6zk72
DOWNLOAD Source : https://github.com/soudemy/simpleLogin
Hi guys, today we want to show how to create an animated login form using HTML & CSS.
Please, if you love it, support us with Like & Subscribe.🙏🙏❤️❤️
Let's be friends and connect
📩 Subscribe:https://shorturl.at/BCTV8
📷 Instagram: https://www.instagram.com/soudemy/
📱 Behance: https://www.behance.net/soudemy
🐦 Twitter: https://twitter.com/SoudemyAcademy
In the comments, you can say what design you want so that we can prepare the video for you.
#html #html5 #htmlcss #csc #htmltutorial #htmlcsstutorial #htmlcssjs #tutorial | hojjatbandani |
1,443,274 | A review of this week's APIs: Amazon Android Apps Lookup, tencent myapp top charts and Ad Fraud | In keeping with our weekly routine, we will introduce three new APIs to you. We have chosen a diverse... | 0 | 2023-05-15T06:36:00 | https://dev.to/worldindata/a-review-of-this-weeks-apis-amazon-android-apps-lookup-tencent-myapp-top-charts-and-ad-fraud-2knl | api, android, tencent, adfraud | In keeping with our weekly routine, we will introduce three new APIs to you. We have chosen a diverse range of data topics for this round-up of APIs. The purpose, industry, and client types of these APIs will be analyzed. [Worldindata's Marketplace](https://www.worldindata.com/) for Data and APIs has more information on the APIs if you would like to learn more. Let's start now!
## Amazon Android Apps Lookup API made by 42 Matters
[The Amazon Android Apps Lookup API](https://www.worldindata.com/api/42-Matters-amazon-android-apps-lookup-api) provided by 42 Matters is a powerful tool for finding Android apps that match a specified Amazon Standard Identification Number (ASIN) on the Amazon Appstore. The main purpose of this API is to retrieve full details about the app, including its title, developer name, version number, category, price, user ratings, reviews, and more. With this information, e-commerce platforms, mobile app marketplaces, app marketers, mobile app developers, consumer analysis platforms, app testers, advertisers, and other clients can improve their services and better understand their customers' needs.
One of the primary client types that use the data from the Amazon Android Apps Lookup API is e-commerce platforms. These platforms often rely on app recommendations and reviews to help customers find products and services they are interested in. With this API, they can access detailed information about Android apps available on the Amazon Appstore, which can help them make informed decisions about which apps to recommend to their users. Similarly, mobile app marketplaces can use this API to offer better app discovery experiences for their users.
In addition to e-commerce platforms and mobile app marketplaces, a range of other sectors also use the Amazon Android Apps Lookup API. These include mobile app development, app and marketing analysis, advertisement, and more. For example, mobile app developers can use the API to gather intelligence about competitor apps and identify potential partners. App testers can use the API to ensure that apps are compatible with the latest versions of Android and other devices. Advertisers can use the API to target specific audiences and optimize their campaigns. Overall, the Amazon Android Apps Lookup API provides valuable data to a wide range of clients across multiple industries.
> **Specs:**
Format: JSON
Method: GET
Endpoint: /api/v2.0/amazon/android/apps/lookup.json
Filters: asin, fields and callback
www.42matters.com
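
As a sketch, a lookup request could be assembled from the endpoint and filters listed above. The host, ASIN, and field names below are placeholders, and any authentication the service requires is not shown:

```python
from urllib.parse import urlencode

ENDPOINT = "/api/v2.0/amazon/android/apps/lookup.json"  # from the specs above
BASE = "https://api.42matters.com"  # assumed host; check the provider's docs

params = {"asin": "B00EXAMPLE123", "fields": "title,price"}  # hypothetical values
url = f"{BASE}{ENDPOINT}?{urlencode(params)}"
print(url)
```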
## 42 Matters Tencent MyApp Top Charts API
[The Tencent MyApp Top Charts API](https://www.worldindata.com/api/42-Matters-tencent-myapp-top-charts-api) provided by 42 Matters is a powerful tool for retrieving the top charts from Tencent MyApp for a specific date. The main purpose of this API is to help clients gain insights into the most popular mobile apps in China's app market. With this information, e-commerce and gaming platforms, mobile app development companies, app and marketing analysis firms, and advertisers can better understand market trends, identify opportunities for growth, and make informed decisions about their business strategies.
One of the primary industries that use the Tencent MyApp Top Charts API is e-commerce and gaming platforms. These platforms often rely on app recommendations and reviews to help users discover new products and services. With this API, they can access detailed information about the most popular mobile apps in China's app market and make informed decisions about which apps to feature on their platforms. Similarly, mobile game development companies can use this API to identify trends in mobile gaming and create games that are tailored to the needs and preferences of Chinese consumers.
In addition to e-commerce and gaming platforms, a range of other industries also use the Tencent MyApp Top Charts API. These include mobile app development, app and marketing analysis, advertisement, and more. For example, app marketers can use the API to identify new opportunities for user acquisition and engagement. App testers can use the API to ensure that their apps are compatible with the most popular apps in China's app market. Advertisers can use the API to target specific audiences and optimize their campaigns. Overall, the Tencent MyApp Top Charts API provides valuable data to a wide range of clients across multiple industries.
> **Specs:**
Format: JSON
Method: GET
Endpoint: /api/v2.0/tencent/android/apps/top_myapp_charts.json
Filters: cat_key, country, lang, app_country, limit, page, date, fields and callback
www.42matters.com
## Ad Fraud API by Pixalate
[The ad fraud API](https://www.worldindata.com/api/Pixalate-ad-fraud-api) provided by Pixalate is a powerful tool used by e-commerce platforms, digital advertisers, marketers, adtech platforms, and other clients to detect and prevent ad fraud. The main purpose of the data is to request Pixalate's servers to retrieve the probability (risk score) and determine if a user's IP, DeviceID, or User-Agent is compromised or performing malicious activity. With this information, clients can prevent ad fraud, protect their advertising budgets, and ensure that their ads are being served to legitimate users.
One of the primary client types that use the ad fraud API data is e-commerce platforms. These platforms often rely on digital advertising to drive traffic to their websites and increase sales. With the ad fraud API, they can detect and prevent fraudulent activity, such as click fraud, impression fraud, and bot traffic, which can help them protect their advertising budgets and improve their ROI. Similarly, digital advertisers and marketers can use this data to ensure that their ads are being served to legitimate users and improve their targeting and ad performance.
In addition to e-commerce platforms and digital advertisers, a range of other industries also use the ad fraud API data. These include adtech, marketing, and fraud detection. For example, adtech platforms can use this data to prevent fraudulent activity and improve their ad verification processes. Marketing agencies can use this data to improve their targeting and optimize their campaigns. Fraud detection companies can use this data to identify and prevent ad fraud across multiple channels. Overall, the ad fraud API data provided by Pixalate is an essential tool for anyone involved in digital advertising and fraud prevention.
> **Specs:**
Format: JSON
Method: GET
Endpoint: /api/v2/fraud
Filters: pretty, ip and deviceId
www.pixalate.com | worldindata |
1,494,545 | My first open source contribution. | this is an attempt by me Phil, to socially pressure myself into doing my first open source... | 0 | 2023-06-07T08:47:28 | https://dev.to/vtguy65/my-first-open-source-contribution-17g7 | | This is an attempt by me, Phil, to socially pressure myself into making my first open source contribution. I will post the details as soon as I find an issue to fix. If anyone has any questions or requests, please hit me up. | vtguy65 |
1,508,281 | How I Secured My Apache Age Internship: A Journey of Challenges and Triumphs | Introduction: Securing an internship opportunity with Apache Age, a renowned software... | 0 | 2023-06-18T06:41:00 | https://dev.to/munmud/how-i-secured-my-apache-age-internship-a-journey-of-challenges-and-triumphs-2ahk | ### Introduction:
Securing an internship opportunity with Apache Age, a renowned software company, was a dream come true for me. This blog post recounts my exhilarating journey from the initial application on LinkedIn to receiving an offer to join Bitnine, a subsidiary of Apache Age. Join me as I share the step-by-step account of my experience, including the assessments, challenges, and ultimate success.
### Step 1: Applying on LinkedIn
Like many aspiring interns, I took to LinkedIn to explore internship opportunities. While scrolling through job postings, I stumbled upon a captivating advertisement by Apache Age. With great excitement, I submitted my application, along with a personalized cover letter highlighting my passion for software development and previous experience in the field.
### Step 2: The Assessment Stage
After a seemingly long waiting period of 30 days, I received an email from Apache Age. To my delight, it was an invitation to participate in their assessment process. The email contained a series of technical challenges and coding problems designed to test the applicants' skills and problem-solving abilities. It was a daunting task, but I was determined to rise to the occasion.
### Step 3: The One-Week Challenge
With just one week to complete the challenges, I dedicated myself to the task at hand. I meticulously planned my schedule, allocating time for research, coding practice, and debugging. The challenges ranged from designing efficient algorithms to troubleshooting complex code snippets. Each problem demanded a unique approach, but I tackled them with determination and perseverance.
Throughout the week, I immersed myself in the world of software development, delving into documentation, consulting online forums, and seeking guidance from mentors. The challenges pushed me to my limits, but they also served as an incredible learning opportunity, helping me hone my skills and broaden my knowledge base.
### Step 4: The Interview Call
Two weeks after successfully completing the challenges, I received a call from Apache Age. It was an interview invitation! My excitement knew no bounds as I prepared for the interview meticulously. I reviewed the company's mission, vision, and ongoing projects, ensuring that I was well-equipped to discuss my passion for their work.
The interview process was intense but rewarding. The interviewers were keen on evaluating my technical expertise, problem-solving capabilities, and ability to work in a team. I answered their questions to the best of my abilities, shared my experiences, and demonstrated my enthusiasm for joining Apache Age.
### Step 5: The Offer to Join Bitnine
After a few nail-biting days of waiting, the moment I had been eagerly anticipating arrived—an offer letter from Bitnine, a subsidiary of Apache Age. The letter congratulated me on successfully securing the internship position. My heart soared with joy and gratitude for the incredible opportunity that lay ahead.
### Conclusion:
My journey from applying on LinkedIn to receiving an offer from Bitnine has been a remarkable experience. It is a testament to the power of perseverance, dedication, and a passion for one's craft. The challenges I faced throughout the process only served to strengthen my skills and deepen my understanding of software development.
Securing an internship with Bitnine has not only provided me with an invaluable learning experience but also opened doors to a bright future in the software industry. I am grateful for the opportunity to work with such a prestigious company and excited to embark on this new chapter of my career journey. | munmud |
1,660,178 | A Simple To Do List with Next.js | Simple To Do List ✅ O Simple To Do List é uma aplicação simples e intuitiva para... | 0 | 2023-11-10T08:59:17 | https://reactjsexample.com/a-simple-to-do-list-with-next-js/ | todo, nextjs | ---
title: A Simple To Do List with Next.js
published: true
date: 2023-11-08 00:58:00 UTC
tags: Todo,Nextjs
canonical_url: https://reactjsexample.com/a-simple-to-do-list-with-next-js/
---


# Simple To Do List ✅
Simple To Do List is a simple, intuitive application that helps you organize your daily tasks, boost your productivity, and stay focused on what really matters. With a friendly interface, the application was designed with you in mind, to simplify the management of your daily activities.
## Technologies 🚀
- React: A popular JavaScript library for building interactive user interfaces.
- Next.js 13: A React framework that offers server-side rendering (SSR), static site generation (SSG), and many other features.
- Next Auth: A library for user authentication with OAuth.
- Postgres: A relational database management system.
- Prisma: An ORM (Object-Relational Mapping) for Node.js and TypeScript.
- Tailwind CSS: A CSS framework that offers a wide range of pre-styled utility classes.
## Features 📦
- **Sign in with Google:** We offer users the convenience of accessing our platform with a single click using their Google accounts. This streamlined authentication method provides easy, secure entry into the world of organized tasks.
- **View tasks:** With just a few clicks, users can view all of their tasks quickly and intuitively. The interface provides a smooth browsing experience, letting you keep full control of your activities with ease.
- **Create tasks (coming soon):** Soon, users will be able to create an unlimited number of tasks to make their day-to-day more productive and organized. This intuitive, easy-to-use task-creation feature will be an ally in managing your time effectively and reaching your goals.
- **Complete tasks (coming soon):** In the near future, users will be able to mark their tasks as completed. This feature lets you track your progress clearly and visually, helping you stay focused on the most important activities on your list.
- **Delete tasks (coming soon):** We are implementing task deletion to provide full flexibility in managing your activities. If a task is no longer relevant or necessary, you can remove it easily, keeping your task list organized and free of distractions.
## Environment Variables 🕵🏻♂️
To run this project, you will need to add the following environment variables to your .env file.
`DATABASE_URL`
`GOOGLE_CLIENT_SECRET`
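For reference, a `.env` sketch might look like this (both values are placeholders; use your own database URL and Google OAuth client secret):

```
DATABASE_URL="postgresql://user:password@localhost:5432/todolist"
GOOGLE_CLIENT_SECRET="your-google-client-secret"
```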
## Support ❓
For support, email [vitorfragaps@gmail.com](mailto:vitorfragaps@gmail.com) or reach out on Discord at _vitoorfrag_.
## Contributions and Collaborations 🤝
This project is fully open to contributions. If you would like to collaborate, feel free to open pull requests, fix bugs, add new features, or improve the documentation. Your contribution is valuable and helps make this project even better!
How to contribute: fork this repository.
Create a branch for your contribution:
```
git checkout -b minha-contribuicao
```
Make your changes and add descriptive commits (preferably following Conventional Commits).
Open a pull request against the main branch of this repository.
## GitHub
[View Github](https://github.com/vitoorfraga/simple-to-do-list?ref=reactjsexample.com) | mohammadtaseenkhan |
1,713,605 | Test your Code Efficiently using pytest Module | You may have done unit testing or heard the term unit test, which involves breaking down your code... | 0 | 2024-01-01T12:16:07 | https://geekpython.in/understanding-pytest-to-test-python-code | testing, python, programming | You may have done unit testing or heard the term unit test, which involves breaking down your code into smaller units and testing them to see if they are producing the correct output.
Python has a robust unit testing library called [unittest](https://geekpython.in/unit-tests-in-python) that provides a comprehensive set of testing features. However, some developers believe that unittest is more verbose than other testing frameworks.
In this article, we'll look at how to use the `pytest` library to create small, concise test cases for your code. Throughout the process, you'll learn about the pytest library's key features.
## Installation
Pytest is a third-party library that must be installed in your project environment. In your terminal window, type the following command.
```bash
pip install pytest
```
Pytest has been installed in your project environment, and all of its functions and classes are now available for use.
## Getting Started With Pytest
Before getting into what `pytest` can do, let's take a look at how to use it to test the code.
Here's a Python file `test_square.py` that contains a `square` function and a test called `test_answer`.
```python
# test_square.py

def square(num):
    return num**2


def test_answer():
    assert square(3) == 10
```
To run the above test, simply enter the `pytest` command into your terminal, and the rest will be handled by the `pytest` library.
```bash
D:\SACHIN\Pycharm\pytestt_lib>pytest
========================================= test session starts ==========================================
platform win32 -- Python 3.10.5, pytest-7.3.2, pluggy-1.0.0
rootdir: D:\SACHIN\Pycharm\pytestt_lib
plugins: anyio-3.6.2
collected 1 item
test_square.py F [100%]
=============================================== FAILURES ===============================================
_____________________________________________ test_answer ______________________________________________
def test_answer():
> assert square(3) == 10
E assert 9 == 10
E + where 9 = square(3)
test_square.py:7: AssertionError
======================================= short test summary info ========================================
FAILED test_square.py::test_answer - assert 9 == 10
========================================== 1 failed in 0.27s ===========================================
```
The above test failed, as evidenced by the output generated by the `pytest` library. You might be wondering how `pytest` discovered and ran the test when no arguments were passed.
This occurred because `pytest` uses standard test discovery. This includes the conventions that must be followed in order for testing to be successful.
* When no argument is specified, pytest searches for files named `test_*.py` or `*_test.py`.
* From these files, pytest collects `test`-prefixed functions and methods, including `test`-prefixed methods inside `Test`-prefixed classes that do not have an `__init__` method.
* Pytest also finds tests in subdirectories, making it simple to organize your tests within the context of your project structure.
## Why do Most Prefer pytest?
If you've used the `unittest` library before, you'll know that even writing a small test requires more code than `pytest`. Here's an example to demonstrate.
Assume you want to write a `unittest` test suite to test your code.
```python
# test_unittest.py
import unittest

class TestWithUnittest(unittest.TestCase):
    def test_query(self):
        sentence = "Welcome to GeekPython"
        self.assertTrue("P" in sentence)
        self.assertFalse("e" in sentence)

    def test_capitalize(self):
        self.assertEqual("geek".capitalize(), "Geek")
```
Now, from the command line, run these tests with `unittest`.
```bash
D:\SACHIN\Pycharm\pytestt_lib>python -m unittest test_unittest.py
.F
======================================================================
FAIL: test_query (test_unittest.TestWithUnittest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "D:\SACHIN\Pycharm\pytestt_lib\test_unittest.py", line 9, in test_query
self.assertFalse("e" in sentence)
AssertionError: True is not false
----------------------------------------------------------------------
Ran 2 tests in 0.001s
FAILED (failures=1)
```
As you can see, the `test_query` test failed while the `test_capitalize` test passed, as expected by the code.
However, writing those tests requires more lines of code, which include:
* Importing the `unittest` module.
* A test class (`TestWithUnittest`) is created by subclassing `TestCase`.
* Making assertions with unittest's [assert](https://geekpython.in/python-assert) methods (`assertTrue`, `assertFalse`, and `assertEqual`).
However, this is not the case with `pytest`, if you wrote those tests with `pytest`, they must look like this:
```python
# test_pytest.py

def test_query():
    sentence = "Welcome to GeekPython"
    assert "P" in sentence
    assert "e" not in sentence


def test_capitalize():
    assert "geek".capitalize() == "Geek"
```
It's as simple as that: there's no need to import the package or use predefined assertion methods, and you will get a nicer output with a detailed description.
```bash
D:\SACHIN\Pycharm\pytestt_lib>pytest
====================================================================== test session starts =======================================================================
platform win32 -- Python 3.10.5, pytest-7.3.2, pluggy-1.0.0
rootdir: D:\SACHIN\Pycharm\pytestt_lib
plugins: anyio-3.6.2
collected 2 items
test_pytest.py F. [100%]
============================================================================ FAILURES ============================================================================
___________________________________________________________________________ test_query ___________________________________________________________________________
def test_query():
sentence = "Welcome to GeekPython"
assert "P" in sentence
> assert "e" not in sentence
E AssertionError: assert 'e' not in 'Welcome to GeekPython'
E 'e' is contained here:
E Welcome to GeekPython
E ? +
test_pytest.py:6: AssertionError
==================================================================== short test summary info =====================================================================
FAILED test_pytest.py::test_query - AssertionError: assert 'e' not in 'Welcome to GeekPython'
================================================================== 1 failed, 1 passed in 0.31s ===================================================================
```
The following information can be found in the output:
* The platform on which the test is run, the library versions used, the root directory where the test files are stored, and the plugins used.
* The Python test file that was collected, in this case `test_pytest.py`.
* The test result, which is an `"F"` and a dot (`.`). An `"F"` indicates a **failed test**, a dot (`.`) indicates a **passed test**, and an `"E"` indicates an **unexpected condition that occurred during testing**.
* Finally, a test summary, which prints the results of the tests.
## Parametrize Tests
What exactly is parametrization? **Parametrization** is the process of running the same test function or class multiple times, each run with a different set of parameters or arguments. This lets you check the expected results for different input values.
If you want to write multiple tests to evaluate various arguments for the `square` function, your first thought might be to write them as follows:
```python
# Function to return the square of specified number
def square(num):
    return num ** 2


# Evaluating square of different numbers
def test_square_of_int():
    assert square(5) == 25


def test_square_of_float():
    assert square(5.2) == 27.04


def test_square_of_complex_num():
    assert square(5j + 5) == 50j


def test_square_of_string():
    assert square("5") == "25"
```
But there's a twist: `pytest` saves you from writing all that boilerplate code. To allow the parametrization of arguments for a test function, `pytest` provides the `@pytest.mark.parametrize` decorator.
Using parametrization, you can eliminate code duplication and significantly reduce your test code.
```python
import pytest


def square(num):
    return num ** 2


@pytest.mark.parametrize("num, expected", [
    (5, 25),
    (5.2, 27.04),
    (5j + 5, 50j),
    ("5", "25")
])
def test_square(num, expected):
    assert square(num) == expected
```
In the preceding code, the `@pytest.mark.parametrize` decorator defines four `(num, expected)` tuples. The `test_square` function runs once for each tuple, and the test report shows, for each case, whether `square(num)` equals the expected value.
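Two of the example cases above deserve a closer look before you trust the green checkmarks. Floating-point squares are inexact, so in a real test you would compare with a tolerance (pytest ships `pytest.approx` for exactly this), and strings don't support the `**` operator at all, so the `("5", "25")` case raises a `TypeError` rather than passing. A quick sketch:

```python
import math

# Floating-point arithmetic is inexact, so don't rely on exact equality
# with a decimal literal like 27.04. In pytest you would write:
#     assert square(5.2) == pytest.approx(27.04)
assert math.isclose(5.2 ** 2, 27.04, rel_tol=1e-9)

# Strings don't implement **, so square("5") raises TypeError
# instead of returning "25".
try:
    "5" ** 2
    raised = False
except TypeError:
    raised = True
assert raised
```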
## Pytest Fixtures
Using `pytest` fixtures, you can avoid duplicating setup code across multiple tests. By defining a function with the `@pytest.fixture` decorator, you create a reusable setup that can be shared across multiple test functions or classes.
> In testing, a [fixture](https://en.wikipedia.org/wiki/Test_fixture#Software) provides a defined, reliable, and consistent context for the tests. This could include environment (for example a database configured with known parameters) or content (such as a dataset). [Source](https://docs.pytest.org/en/7.4.x/explanation/fixtures.html#about-fixtures)
Here's an example of when you should use fixtures. Assume you have a continuous stream of dynamic vehicle data and want to write a function `collect_vehicle_number_from_delhi()` to extract vehicle numbers belonging to Delhi.
```python
# fixtures_pytest.py

def collect_vehicle_number_from_delhi(vehicle_detail):
    data_collected = []
    for item in vehicle_detail:
        vehicle_number = item.get("vehicle_number", "")
        if "DL" in vehicle_number:
            data_collected.append(f"{vehicle_number}")
    return data_collected
```
To check whether the function works properly, you would write a test that looks like the following:
```python
# test_pytest_fixture.py
from fixtures_pytest import collect_vehicle_number_from_delhi


def test_collect_vehicle_number_from_delhi():
    vehicle_detail = [
        {
            "category": "Car",
            "vehicle_number": "DL04R1441"
        },
        {
            "category": "Bike",
            "vehicle_number": "HR04R1441"
        },
        {
            "category": "Car",
            "vehicle_number": "DL04R1541"
        }
    ]
    expected_result = [
        "DL04R1441",
        "DL04R1541"
    ]
    assert collect_vehicle_number_from_delhi(vehicle_detail) == expected_result
```
The test function `test_collect_vehicle_number_from_delhi()` above determines whether or not the `collect_vehicle_number_from_delhi()` function extracts the data as expected. Now suppose you want to extract vehicle numbers from another state; you would write another function, `collect_vehicle_number_from_haryana()`.
```python
# fixtures_pytest.py

def collect_vehicle_number_from_delhi(vehicle_detail):
    ...  # remaining code unchanged


def collect_vehicle_number_from_haryana(vehicle_detail):
    data_collected = []
    for item in vehicle_detail:
        vehicle_number = item.get("vehicle_number", "")
        if "HR" in vehicle_number:
            data_collected.append(f"{vehicle_number}")
    return data_collected
```
Following the creation of this function, you will create another test function and repeat the process.
```python
# test_pytest_fixture.py
from fixtures_pytest import collect_vehicle_number_from_haryana


def test_collect_vehicle_number_from_haryana():
    vehicle_detail = [
        {
            "category": "Car",
            "vehicle_number": "DL04R1441"
        },
        {
            "category": "Bike",
            "vehicle_number": "HR04R1441"
        },
        {
            "category": "Car",
            "vehicle_number": "DL04R1541"
        }
    ]
    expected_result = [
        "HR04R1441"
    ]
    assert collect_vehicle_number_from_haryana(vehicle_detail) == expected_result
```
This amounts to writing the same setup code over and over. To avoid that duplication, create a function decorated with `@pytest.fixture`.
```python
# test_pytest_fixture.py
import pytest

from fixtures_pytest import collect_vehicle_number_from_haryana
from fixtures_pytest import collect_vehicle_number_from_delhi


@pytest.fixture
def vehicle_data():
    return [
        {
            "category": "Car",
            "vehicle_number": "DL04R1441"
        },
        {
            "category": "Bike",
            "vehicle_number": "HR04R1441"
        },
        {
            "category": "Car",
            "vehicle_number": "DL04R1541"
        }
    ]


# test 1
def test_collect_vehicle_number_from_delhi(vehicle_data):
    expected_result = [
        "DL04R1441",
        "DL04R1541"
    ]
    assert collect_vehicle_number_from_delhi(vehicle_data) == expected_result


# test 2
def test_collect_vehicle_number_from_haryana(vehicle_data):
    expected_result = [
        "HR04R1441"
    ]
    assert collect_vehicle_number_from_haryana(vehicle_data) == expected_result
```
As you can see from the code above, the number of lines has been reduced to some extent, and you can now write a few more tests by reusing the `@pytest.fixture` decorated function `vehicle_data`.
### Fixture for Database Connection
Consider the example of creating a database connection, in which a fixture is used to set up the resources and then tear them down.
```python
# fixture_for_db_connection.py
import pytest
import sqlite3


@pytest.fixture
def database_connection():
    # Setup Phase
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE users (name TEXT)"
    )
    # Freeze the state and pass the object to test function
    yield conn
    # Teardown Phase
    conn.close()
```
The `database_connection()` fixture creates an in-memory SQLite database and establishes a connection, creates a table, yields the connection to the test, and finally closes the connection once the test completes.
This fixture can be passed as an argument to the test function. Assume you want to write a function to insert a value into a database, simply do the following:
```python
# fixture_for_db_connection.py

def test_insert_data(database_connection):
    database_connection.execute(
        "INSERT INTO users (name) VALUES ('Virat Kohli')"
    )
    res = database_connection.execute(
        "SELECT * FROM users"
    )
    result = res.fetchall()
    assert result is not None
    assert ("Virat Kohli",) in result
```
The `test_insert_data()` test function takes the `database_connection` fixture as an argument, which eliminates the need to rewrite the database connection code.
You can now write as many test functions as you want without having to rewrite the database setup code.
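Under the hood, a yield fixture is just a generator: pytest advances it once to obtain the yielded value, hands that value to the test, and then resumes the generator afterwards so the teardown code runs. Driving the generator by hand makes the lifecycle visible (a simplified sketch of what pytest does, not its actual implementation):

```python
import sqlite3

def database_connection():
    conn = sqlite3.connect(":memory:")              # setup phase
    conn.execute("CREATE TABLE users (name TEXT)")
    yield conn                                      # hand the object to the test
    conn.close()                                    # teardown phase

gen = database_connection()
conn = next(gen)                                    # run setup, get the connection
conn.execute("INSERT INTO users (name) VALUES ('Virat Kohli')")
rows = conn.execute("SELECT name FROM users").fetchall()
assert rows == [("Virat Kohli",)]

try:
    next(gen)                                       # resume: teardown runs here
except StopIteration:
    pass                                            # generator is exhausted
```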
## Markers in Pytest
Pytest provides a few built-in markers for your test functions that can come in handy while testing.
In the earlier section, you saw the parametrization of arguments using the `@pytest.mark.parametrize` decorator. Well, `@pytest.mark.parametrize` is a decorator that marks a test function for parametrization.
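Besides the built-in markers, you can define your own, for example a hypothetical `slow` marker, and register it in `pytest.ini` so that typos are caught when you run with `--strict-markers`:

```ini
[pytest]
markers =
    slow: marks tests as slow (deselect with -m "not slow")
```

You would then decorate a test with `@pytest.mark.slow` and deselect those tests with `pytest -m "not slow"`.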
### Skipping Tests
If you have a test function that you want to skip during testing for some reason, you can decorate it with `@pytest.mark.skip`.
In the `test_pytest_fixture.py` script, for example, you added two new test functions but want to skip testing them because you haven't yet created the `collect_vehicle_number_from_punjab()` and `collect_vehicle_number_from_maharashtra()` functions to pass these tests.
```python
# test_pytest_fixture.py

# Previous code here

@pytest.mark.skip(reason="Not implemented yet")
def test_collect_vehicle_number_from_punjab(vehicle_data):
    expected_result = [
        "PB3SQ4141"
    ]
    assert collect_vehicle_number_from_punjab(vehicle_data) == expected_result


@pytest.mark.skip(reason="Not implemented yet")
def test_collect_vehicle_number_from_maharashtra(vehicle_data):
    expected_result = [
        "MH05X1251"
    ]
    assert collect_vehicle_number_from_maharashtra(vehicle_data) == expected_result
```
Both test functions in this script are marked with `@pytest.mark.skip` and provide a reason for skipping. When you run this script, pytest will bypass these tests.
```bash
D:\SACHIN\Pycharm\pytestt_lib>pytest test_pytest_fixture.py
====================================================================== test session starts =======================================================================
platform win32 -- Python 3.10.5, pytest-7.3.2, pluggy-1.0.0
rootdir: D:\SACHIN\Pycharm\pytestt_lib
plugins: anyio-3.6.2
collected 4 items

test_pytest_fixture.py ..ss                                                                                                                               [100%]

================================================================== 2 passed, 2 skipped in 0.05s ==================================================================
```
The report shows that two tests were passed and two were skipped.
If you want to skip a test function conditionally, mark it with the `@pytest.mark.skipif` decorator instead. Here's an illustration.
```python
# test_pytest_fixture.py

# Previous code here

@pytest.mark.skipif(
    pytest.version_tuple < (7, 2),
    reason="pytest version is less than 7.2"
)
def test_collect_vehicle_number_from_punjab(vehicle_data):
    expected_result = [
        "PB3SQ4141"
    ]
    assert collect_vehicle_number_from_punjab(vehicle_data) == expected_result


@pytest.mark.skipif(
    pytest.version_tuple < (7, 2),
    reason="pytest version is less than 7.2"
)
def test_collect_vehicle_number_from_karnataka(vehicle_data):
    expected_result = [
        "KR3SQ4141"
    ]
    assert collect_vehicle_number_from_karnataka(vehicle_data) == expected_result
```
In this example, two test functions (`test_collect_vehicle_number_from_punjab` and `test_collect_vehicle_number_from_karnataka`) are decorated with `@pytest.mark.skipif`. The condition specified in each case is `pytest.version_tuple < (7, 2)`, which means that these tests will be skipped if the installed `pytest` version is less than 7.2. The `reason` parameter provides a message explaining why the tests are being skipped.
### Filter Warnings
You can add warning filters to specific test functions or classes using the `@pytest.mark.filterwarnings` marker, allowing you to control which warnings are captured during tests.
Here's an example of the code from above.
```python
# test_pytest_fixture.py
import warnings

# Previous code here

# Helper warning function
def warning_function():
    warnings.warn("Not implemented yet", UserWarning)


@pytest.mark.filterwarnings("error:Not implemented yet")
def test_collect_vehicle_number_from_punjab(vehicle_data):
    warning_function()
    expected_result = ["PB3SQ4141"]
    assert collect_vehicle_number_from_punjab(vehicle_data) == expected_result


@pytest.mark.filterwarnings("error:Not implemented yet")
def test_collect_vehicle_number_from_karnataka(vehicle_data):
    warning_function()
    expected_result = ["KR3SQ4141"]
    assert collect_vehicle_number_from_karnataka(vehicle_data) == expected_result
```
In this example, a warning message is emitted by a helper warning function (`warning_function()`).
Both test functions (`test_collect_vehicle_number_from_punjab` and `test_collect_vehicle_number_from_karnataka`) are decorated with `@pytest.mark.filterwarnings`, which specifies that any `UserWarning` with the message **"Not implemented yet"** should be treated as an error during the execution of these tests.
These test functions call `warning_function` which, in turn, emits a `UserWarning` with the specified message.
You can see in the summary of the report generated by `pytest`, the warning is displayed.
```bash
==================================================================== short test summary info =====================================================================
FAILED test_pytest_fixture.py::test_collect_vehicle_number_from_punjab - UserWarning: Not implemented yet
FAILED test_pytest_fixture.py::test_collect_vehicle_number_from_karnataka - UserWarning: Not implemented yet
======================================================================= 2 failed in 0.31s ========================================================================
```
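The filter string passed to `@pytest.mark.filterwarnings` uses the standard library's warning-filter syntax (`action:message`), so you can reproduce the same behavior with the `warnings` module directly. Roughly, this is the filter pytest installs for the duration of the test:

```python
import warnings

def warning_function():
    warnings.warn("Not implemented yet", UserWarning)

# "error:Not implemented yet" escalates matching warnings to exceptions.
with warnings.catch_warnings():
    warnings.filterwarnings("error", message="Not implemented yet")
    try:
        warning_function()
        raised = False
    except UserWarning:
        raised = True

assert raised  # the warning was raised as an exception
```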
## Pytest Command-line Options
Pytest provides numerous command-line options that allow you to customize or extend the behavior of test execution. You can list all the available `pytest` options using the following command in your terminal.
```bash
pytest --help
```
Here are some pytest command-line options that you can try when you execute tests.
### Running Tests Using Keyword
You can specify which tests to run by following the `-k` option with a keyword or expression. Assume you have the Python file `test_sample.py`, which contains the tests listed below.
```python
def square(num):
    return num ** 2


# Test 1
def test_special_one():
    a = 2
    assert square(a) == 4


# Test 2
def test_special_two():
    x = 3
    assert square(x) == 9


# Test 3
def test_normal_three():
    x = 3
    assert square(x) == 9
```
If you want to run only the tests that contain `"test_special"` in their names, use the following command.
```bash
D:\SACHIN\Pycharm\pytestt_lib>pytest -k test_special
====================================================================== test session starts =======================================================================
platform win32 -- Python 3.10.5, pytest-7.3.2, pluggy-1.0.0
rootdir: D:\SACHIN\Pycharm\pytestt_lib
plugins: anyio-3.6.2
collected 3 items / 1 deselected / 2 selected
test_sample.py .. [100%]
================================================================ 2 passed, 1 deselected in 0.07s =================================================================
```
The tests that have `"test_special"` in their name were selected, and the others were deselected.
If you want to run all other tests but not the ones with "test\_special" in their names, use the following command.
```bash
D:\SACHIN\Pycharm\pytestt_lib>pytest -k "not test_special"
====================================================================== test session starts =======================================================================
platform win32 -- Python 3.10.5, pytest-7.3.2, pluggy-1.0.0
rootdir: D:\SACHIN\Pycharm\pytestt_lib
plugins: anyio-3.6.2
collected 3 items / 2 deselected / 1 selected
test_sample.py . [100%]
================================================================ 1 passed, 2 deselected in 0.06s =================================================================
```
The expression `"not test_special"` in the above command tells pytest to run only those tests that don't have "test\_special" in their name.
### Customizing Output
You can use the following options to customize the output and the report:
* `-v`, `--verbose` - Increases verbosity
* `--no-header` - Disables header
* `--no-summary` - Disables summary
* `-q`, `--quiet` - Decreases verbosity
**Output of the tests with increased verbosity.**
```bash
D:\SACHIN\Pycharm\pytestt_lib>pytest -v test_sample.py
====================================================================== test session starts =======================================================================
platform win32 -- Python 3.10.5, pytest-7.3.2, pluggy-1.0.0 -- D:\SACHIN\Python310\python.exe
cachedir: .pytest_cache
rootdir: D:\SACHIN\Pycharm\pytestt_lib
plugins: anyio-3.6.2
collected 3 items
test_sample.py::test_special_one PASSED [ 33%]
test_sample.py::test_special_two PASSED [ 66%]
test_sample.py::test_normal_three PASSED [100%]
======================================================================= 3 passed in 0.04s ========================================================================
```
**Output of the tests with decreased verbosity.**
```bash
D:\SACHIN\Pycharm\pytestt_lib>pytest -q test_sample.py
... [100%]
3 passed in 0.02s
```
When you use `--no-header` and `--no-summary` together, it is equivalent to using `-q` (decreased verbosity).
### Test Collection
With the `--collect-only` (or `--co`) option, `pytest` collects all the tests but doesn't execute them.
```bash
D:\SACHIN\Pycharm\pytestt_lib>pytest --collect-only test_sample.py
====================================================================== test session starts =======================================================================
platform win32 -- Python 3.10.5, pytest-7.3.2, pluggy-1.0.0
rootdir: D:\SACHIN\Pycharm\pytestt_lib
plugins: anyio-3.6.2
collected 3 items

<Module test_sample.py>
  <Function test_special_one>
  <Function test_special_two>
  <Function test_normal_three>

=================================================================== 3 tests collected in 0.02s ===================================================================
```
### Ignore Path or File during Test Collection
If you don't want to collect tests from a specific path or file, use the `--ignore=path` option.
```bash
D:\SACHIN\Pycharm\pytestt_lib>pytest --ignore=test_sample.py
====================================================================== test session starts =======================================================================
platform win32 -- Python 3.10.5, pytest-7.3.2, pluggy-1.0.0
rootdir: D:\SACHIN\Pycharm\pytestt_lib
plugins: anyio-3.6.2
collected 1 item
test_square.py . [100%]
======================================================================= 1 passed in 0.06s ========================================================================
```
The `test_sample.py` file is ignored by pytest during test collection in the above example.
### Exit on First Failed Test or Error
When you use the `-x` (`--exitfirst`) option, pytest stops the test run at the first failed test or error it encounters.
```bash
D:\SACHIN\Pycharm\pytestt_lib>pytest -x test_sample.py
====================================================================== test session starts =======================================================================
platform win32 -- Python 3.10.5, pytest-7.3.2, pluggy-1.0.0
rootdir: D:\SACHIN\Pycharm\pytestt_lib
plugins: anyio-3.6.2
collected 3 items

test_sample.py F

============================================================================ FAILURES ============================================================================
________________________________________________________________________ test_special_one ________________________________________________________________________

    def test_special_one():
        a = 2
>       assert square(a) == 5
E       assert 4 == 5
E        +  where 4 = square(2)

test_sample.py:6: AssertionError
==================================================================== short test summary info =====================================================================
FAILED test_sample.py::test_special_one - assert 4 == 5
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
======================================================================= 1 failed in 0.35s ========================================================================
```
Pytest exits the test execution immediately when it finds a failed test, and a stopping message appears in the report summary.
## Conclusion
Pytest is a testing framework that allows you to write small and readable tests to test or debug your code.
In this article, you've learned:
* How to use `pytest` for testing your code
* How to parametrize arguments to avoid code duplication
* How to use fixtures in `pytest`
* Pytest command-line options
---
🏆**Other articles you might be interested in if you liked this one**
✅[Debug/Test your code using the unittest module in Python](https://geekpython.in/unit-tests-in-python).
✅[What is assert in Python and how to use it for debugging](https://geekpython.in/python-assert)?
✅[Create a WebSocket server and client in Python](https://geekpython.in/build-websocket-server-and-client-using-python).
✅[Create multi-threaded Python programs using a threading module](https://geekpython.in/threading-module-to-create-threads-in-python).
✅[Create and integrate MySQL database with Flask app using Python](https://geekpython.in/create-and-integrate-mysql-database-with-flask-app).
✅[Upload and display images on the frontend using Flask](https://geekpython.in/render-images-from-flask).
---
**That's all for now**
**Keep Coding✌✌** | sachingeek |
1,714,328 | MMOexp WoW Classic SoD Gold: How many take pictures | You'd like to park and go for a walk now? I'm likely to WoW Classic SoD Gold wake up feeling worse... | 0 | 2024-01-02T05:53:24 | https://dev.to/nevillberger/mmoexp-wow-classic-sod-gold-how-many-take-pictures-1mbj |
You'd like to park and go for a walk now? I'm likely to [WoW Classic SoD Gold](https://www.mmoexp.com/Wow-classic-sod/Gold.html) wake up feeling worse than I did yesterday. Get a good night's sleep so I'll be back later in the day. I was seeking out teammates. I had to pay for phenomics Indyk pitchers to let me as I walked a dog.
Are you waiting for your heart to be? Wait How many take pictures? Simply
who's? I received one from you. I had to receive it. It was consensual, I guess. Ergonomics bro. It's a cost to pay. off the employees. Wow. Ah! Get him out of here. Take him out of here.
Get him out as fast. Take him out of this place. Get him out of here. Get him out of here. You must get him out. He's drunk. He's drunk. He's drunk.
Emma Lm GMMMN Oh my god It's dropping. Holy the fuck. My heart is fuckin drafted. I got the voice of the glyph.
That was really the worst. It was the definition of suck. What? Yikes. It's a pity that you're doing the same thing. Uh oh
LOL Just walked into my room and told me I'm fresh from the shower I'll be in the room waiting for you. I was surprised I still got in even at 43. I need to make her smile, after all she's my mother at CES Oh man, wait a minute Oh, one shot Bloodlust.
Oh my God oh my God please, I'm going to be dead I'm gonna die! Oh my god the orangey gods support me. Chad bugging dodge and Miss Holly Molly did a very excellent job pushing me away from contact.
Oh it's just known as [WoW SoD Gold](https://www.mmoexp.com/Wow-classic-sod/Gold.html) Danine or Guney pants Oh man , and get your man on the move now, my goodness!
| nevillberger | |
1,915,120 | To view the history of errors using PDO in SQL | To view the history of errors using PDO in... | 0 | 2024-07-08T03:05:42 | https://dev.to/anissepti/to-view-the-history-of-errors-using-pdo-in-sql-ppj | {% stackoverflow 78718965 %} | anissepti | |
1,740,741 | Crafting a Web Design Singapore that seamlessly integrates functionality — Subraa | A recruitment agency's online presence is pivotal, serving as the gateway for connecting top talent... | 0 | 2024-01-25T04:19:59 | https://dev.to/subraaoct2023/crafting-a-web-design-singapore-that-seamlessly-integrates-functionality-subraa-5cjk | web, design, singapore |

A recruitment agency's online presence is pivotal, serving as the gateway for connecting top talent with prospective employers. Crafting a [Web Design Singapore](https://www.subraa.com/) that seamlessly integrates functionality and aesthetics is crucial for success in the competitive landscape of talent acquisition. Here are the must-have features for a recruitment agency website:
**1. Intuitive Job Search:**
Streamlining the job search process is fundamental. An intuitive search bar and advanced filtering options allow candidates to find relevant opportunities efficiently.
**2. Compelling Visuals and Branding:**
A visually appealing design that aligns with the agency's brand is essential. Engaging visuals, consistent branding, and a professional look create a positive first impression.
**3. Mobile Responsiveness:**
Given the prevalence of mobile users, ensuring the website is fully responsive on various devices guarantees accessibility for a diverse audience.
**4. Seamless Application Process:**
Simplifying the application process enhances user experience. A user-friendly application form with clear instructions facilitates candidate submissions.
**5. Comprehensive Job Listings:**
Displaying comprehensive job listings with detailed descriptions, requirements, and application deadlines provides transparency and attracts qualified candidates.
**6. Client and Candidate Portals:**
Implementing secure portals for both clients and candidates fosters a personalized experience. Clients can manage job postings, while candidates can track applications and updates.
**7. Social Media Integration:**
Seamless integration with social media platforms amplifies the agency's reach. Sharing job listings and engaging content enhances brand visibility.
**8. Robust Content Management System (CMS):**
A robust CMS empowers the agency to update content, add new job listings, and make timely announcements without technical challenges.
**9. Testimonials and Success Stories:**
Showcasing client testimonials and success stories builds credibility. Real-life experiences provide valuable insights for both clients and candidates.
**10. Blog or Resource Section:**
A blog or resource section demonstrates industry expertise. Providing informative content on resume writing, interview tips, and industry trends positions the agency as a thought leader.
**11. Advanced Search and Match Algorithms:**
Implementing advanced algorithms that match candidate profiles with job requirements enhances the agency's efficiency in talent placement.
**12. Security Measures:**
Ensuring the security of sensitive candidate and client data is paramount. Implementing robust security measures builds trust and compliance.
**13. Analytics and Reporting:**
Integrating analytics tools allows the agency to track website performance, user engagement, and the success of job placements.
In conclusion, a recruitment agency website is more than a digital platform—it's a dynamic tool for talent acquisition and client engagement. By incorporating these must-have features, agencies can elevate their online presence, streamline operations, and create a compelling environment for both candidates and clients in the competitive landscape of recruitment.
**Website : [https://www.subraa.com/](https://www.subraa.com/)** | subraaoct2023 |
1,741,966 | How to deal with API rate limits | When I first had the idea for this post, I wanted to provide a collection of actionable ways to... | 0 | 2024-01-26T10:06:33 | https://blog.sentry.io/how-to-deal-with-api-rate-limits/ | api, javascript, webdev, tutorial | When I first had the idea for this post, I wanted to provide a collection of actionable ways to handle errors caused by API rate limits in your applications. But as it turns out, it’s not that straightforward (is it ever?). API rate limiting is a minefield, and at the time of writing, there are **no published standards** in terms of how to build and consume APIs that implement rate limiting. And while I will provide some code solutions in the second half of this post, I want to start by discussing why we need rate limits and highlight some of the inconsistencies you might find when dealing with rate-limited APIs.
## Why do rate limits exist?
From the late 1990s to the early 2000s, the use of Software-as-a-Service (SaaS) tools was not mainstream. Authentication, content management, image storage, and optimization were painstakingly hand-crafted in-house. Frontend and backend weren’t separate entities or disciplines. The use of APIs as a middle layer between frontend and backend wasn’t common; database calls were made directly from page templates.
When the need (and desire) for separation between the front and backend emerged, so did APIs as a middle layer. But these APIs were also built, scaled, and managed in-house while being hosted on physical servers on business premises. Development teams decided how to rate limit their own APIs if there was a need. Fast forward to the mid-2010s and the SaaS ecosystem is packed full of headless, serverless, cloud-based tools for anything and everything.
And with SaaS APIs being publicly available to everyone, rate limiting was introduced as a traffic management strategy, on top of other low-level DDoS mitigation measures. It exists to maintain the stability of APIs and to prevent users (good or bad actors) from exhausting available system resources. Rate limiting is also part of a SaaS pricing model; pay a higher subscription fee and receive more generous limits.
So, how do we deal with being rate-limited by the APIs we consume?
## HTTP response status codes are varied and inconsistent
The standard HTTP response code to send with a rate-limited response is [HTTP 429 Too Many Requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429). However, given that you might be rate-limited according to whether or not you are *authorized* to make a particular number of requests (perhaps your API pricing model has rate-limited you), you may receive a [HTTP 403 Forbidden](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403). I experienced this when I was doing some testing [using the GitHub API as an unauthenticated user](https://docs.github.com/en/rest/using-the-rest-api/rate-limits-for-the-rest-api?apiVersion=2022-11-28#primary-rate-limit-for-unauthenticated-users).

Whilst this is a valid use of a 403 HTTP, it suggests that, technically, there *could* be other valid HTTP status codes to return in a rate-limited API response other than 429. In this case, even [418 I’m a teapot](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/418) could be a valid HTTP status code, given that “some websites use this response for requests they do not wish to handle.” In conclusion, when you’re working with rate-limited APIs and you want to evaluate the HTTP response status in code, you may need to do some testing to work out the full scope of HTTP responses you may receive in different cases. The same goes for HTTP response headers.
## HTTP response headers could be any combination of key-value pairs
When consuming an API that implements rate limiting, you should receive a set of response headers with more information about the rate limit on each request, (whether or not your request has been allowed or denied) such as how many requests you have remaining in any given time period and when the number of requests is reset to the maximum.
As stated in a draft proposal from the Internet Engineering Task Force, across the APIs I have consumed and tested for this post, I found that [“there is no standard way for servers to communicate quotas so that clients can throttle its requests to prevent errors”](https://www.ietf.org/archive/id/draft-ietf-httpapi-ratelimit-headers-07.html#name-introduction-2:~:text=Currently%2C%20there%20is%20no%20standard%20way%20for%20servers%20to%20communicate%20quotas%20so%20that%20clients%20can%20throttle%20its%20requests%20to%20prevent%20errors.). For example:
| API | Header key | Header value |
|------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|-------------------------------------------------------------------------------|
| [GitHub](https://docs.github.com/en/rest/using-the-rest-api/rate-limits-for-the-rest-api?apiVersion=2022-11-28#exceeding-the-rate-limit) (unauthenticated) | X-RateLimit-Reset | epoch timestamp of expiration |
| [Sentry](https://docs.sentry.io/api/ratelimits/#headers) | X-Sentry-Rate-Limit-Reset | epoch timestamp of expiration |
| [OpenAI](https://platform.openai.com/docs/guides/rate-limits/usage-tiers?context=tier-free) | X-RateLimit-Reset-Requests | time period in minutes and seconds after which the limit is reset (e.g. 1m6s) |
| [Discord](https://discord.com/developers/docs/topics/rate-limits#header-format) | X-RateLimit-Reset | epoch timestamp of expiration |
| Discord | X-RateLimit-Reset-After | time duration in seconds |
| Discord | Retry-After | time duration in seconds if the limit has been exceeded |
With this many differences across just four APIs, and in the absence of standards around this, HTTP response headers relating to rate limiting could, theoretically, be *any combination of key-value pairs*.
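Given this lack of standardization, one practical client-side approach is a small normalization layer. Here's a sketch in JavaScript that maps the provider-specific headers from the table above onto a single "seconds until reset" value. The header names come from the docs linked above, but the function names and parsing logic are illustrative, not from any of these providers:

```javascript
// A sketch of one way to normalize the provider-specific reset headers
// from the table above into a single "seconds until reset" number.
// The header names come from the APIs listed; everything else here
// is illustrative.

// Parse OpenAI-style durations like "1m6s" or "20s" into seconds.
function parseDuration(value) {
  const match = /^(?:(\d+)m)?(?:(\d+(?:\.\d+)?)s)?$/.exec(value);
  if (!match) return null;
  return Number(match[1] ?? 0) * 60 + Number(match[2] ?? 0);
}

// Given a Headers-like object with a get() method, return the number of
// seconds to wait, or null if no recognizable rate limit header is present.
function secondsUntilReset(headers, nowInSeconds = Date.now() / 1000) {
  // Headers that carry a duration in seconds (Discord).
  for (const name of ["retry-after", "x-ratelimit-reset-after"]) {
    const value = headers.get(name);
    if (value !== null) return Number(value);
  }

  // OpenAI-style "1m6s" duration.
  const openAiValue = headers.get("x-ratelimit-reset-requests");
  if (openAiValue !== null) return parseDuration(openAiValue);

  // Headers that carry an epoch timestamp (GitHub, Sentry, Discord).
  for (const name of ["x-ratelimit-reset", "x-sentry-rate-limit-reset"]) {
    const value = headers.get(name);
    if (value !== null) return Number(value) - nowInSeconds;
  }

  return null;
}
```

Note that `Retry-After` can also carry an HTTP date rather than a duration, which this sketch ignores, and that the Fetch API's `Headers.get()` returns `null` for missing headers, which is what the checks above assume.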
## What to do when you’re being rate-limited
When using APIs that implement rate limits, you can use a combination of the HTTP status code and HTTP response headers to determine how to handle responses to prevent unexpected errors. But should you always retry an API call? As usual, it depends, but it can be useful to consider:
* Rate limit thresholds: Can you retry after one second, or do you need to wait for 60 minutes?
* Request urgency: Does one request need to be completed successfully before another can be made? If so, see point 1.
* Application type: Can you provide feedback to a user to ask them to try again in N minutes rather than providing a seemingly infinite loading spinner whilst the request is retried in the background?
### Use HTTP 429 with available response headers
Say (very hypothetically) you want to request information for 100 GitHub users on the client, and you don’t have access to a server to hide your authentication credentials. When using the GitHub API unauthenticated, you are limited to making 60 API calls per hour. In this scenario, you’d hit the rate limit immediately and not be able to recover for 60 minutes. Realistically, you wouldn’t do something like this in a large-scale production app, but here’s how I decided to deal with this case of rate limiting in JavaScript.
If we receive an HTTP status code of 429 or 403, we grab the epoch time value from the `x-ratelimit-reset` header and work out how long we need to wait. If that wait is fewer than five seconds, we retry after that many seconds (and to be honest, five seconds is probably still too long). Otherwise, we provide feedback about when to manually try again.
```javascript
async function getArbitraryUser() {
  const response = await fetch("https://api.github.com/users/octocat");
  return response;
}

// Thanks https://flaviocopes.com/await-loop-javascript/
const wait = (ms) => {
  return new Promise((resolve) => {
    setTimeout(() => resolve(), ms);
  });
};

const makeLotsOfRequests = async (action, n) => {
  for (let i = 1; i <= n; i++) {
    const result = await action();

    // for the unauthenticated API, we may receive 429 or 403
    if (result.status === 429 || result.status === 403) {
      // epoch time in seconds
      const resetInSeconds = result.headers.get("x-ratelimit-reset");

      if (resetInSeconds !== null) {
        const nowInSeconds = Math.round(new Date().valueOf() / 1000);
        const secondsToWait = resetInSeconds - nowInSeconds;

        // Retry only if we need to wait fewer than 5 seconds;
        // we *could* be waiting for up to 60 minutes for the limit to reset
        if (secondsToWait < 5) {
          await wait(secondsToWait * 1000);
        } else {
          // provide useful feedback to user
          console.error(
            `HTTP ${result.status}: Sorry, try again later in ${Math.round(
              secondsToWait / 60,
            )} mins.`,
          );
          break;
        }
      }
    }
  }
};

await makeLotsOfRequests(getArbitraryUser, 100);
```
### Retry the request with exponential backoff using HTTP response statuses
In the example above, it’s unrealistic to check the response headers and wait for rate limits to reset. If N clients (from the same IP address) simultaneously wait for the limit to expire and subsequently retry at the same time, the limits would be hit again immediately. This is also known as the [thundering herd problem](https://en.wikipedia.org/wiki/Thundering_herd_problem).
Instead of using rate limit headers, you can retry the request arbitrarily, increasing the wait time exponentially for each subsequent request. Here’s a code example in JavaScript. In the `doRetriesWithBackOff()` function, while the current retries are less than the maximum number of retries we have defined (in this case three) and we haven’t set the `retry` variable to false after getting a successful response, we continue to retry the request. For each subsequent request, we use the retries variable to generate a higher value for the `waitInMilliseconds` parameter. Given that we define the `retries` variable as 0 when the loop begins, the first `waitInMilliseconds` will also evaluate to 0.
This example only handles receiving an HTTP 200 or 429. You could specify different behaviors depending on other HTTP status codes you expect, or you could send an error to [Sentry](https://sentry.io/welcome/) to monitor the different types of responses your application is receiving to decide whether or not a case is worth implementing in the code. As with the example above, this is a somewhat arbitrary example that probably doesn’t cover all bases, but is intended to give you a starting point if you’re looking for this type of rate limit handling.
```javascript
const MAX_RETRIES = 3;

function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function doRetriesWithBackOff() {
  let retries = 0;
  let retry = true;

  do {
    const waitInMilliseconds = (Math.pow(2, retries) - 1) * 100;
    await sleep(waitInMilliseconds);

    const response = await fetch("https://api-url.io/get-me-something");

    switch (response.status) {
      case 200: // Ok, successful
        retry = false;
        console.log("successful");
        break;
      case 429: // Too Many Requests
        console.log("retrying");
        retries++;
        retry = true;
        break;
      default: // Something unexpected happened, stop retrying
        console.log("stopping");
        retry = false;
        // You could send an error message to Sentry here
        Sentry.captureException(
          `${response.status} received for API call to https://api-url.io/get-me-something`,
        );
        break;
    }
  } while (retry && retries < MAX_RETRIES);
}

// Arbitrary loop to call API 500 times for testing purposes
for (let i = 0; i < 500; i++) {
  await doRetriesWithBackOff();
}
```
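The backoff above is also deterministic, so clients that fail at the same moment will retry at the same moments too. A common mitigation, not covered in this article's examples, is to add random jitter. A minimal sketch of "full jitter", assuming the same `(2^retries - 1) * 100` millisecond ceiling as above:

```javascript
// A sketch of "full jitter": instead of waiting exactly
// (2^retries - 1) * 100 ms, wait a random amount between 0 and that
// ceiling, so clients that failed together do not retry together.
function backoffWithJitter(retries, baseMs = 100) {
  const ceilingMs = (Math.pow(2, retries) - 1) * baseMs;
  return Math.random() * ceilingMs;
}
```

In `doRetriesWithBackOff()`, this would replace the `waitInMilliseconds` calculation; the first attempt still waits 0 ms, matching the behavior described above.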
### Don’t retry the rate-limited request
It might not always be appropriate to retry a rate-limited request (especially in the case of very hard rate limits, such as when using the unauthenticated GitHub API). In this case, you could reduce the overheads of function execution time and return a friendly message to rate-limited users — including the time they need to wait before manually retrying.
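As a sketch of what that friendly message might look like, here's an illustrative helper (the name and wording are mine, not from any library) that converts the epoch value of a header like GitHub's `x-ratelimit-reset` into user-facing feedback:

```javascript
// Illustrative helper: turn the epoch value from a header like GitHub's
// x-ratelimit-reset into a friendly message with the remaining wait.
function rateLimitMessage(resetEpochSeconds, nowInSeconds = Date.now() / 1000) {
  const secondsToWait = Math.max(0, resetEpochSeconds - nowInSeconds);
  const minutes = Math.ceil(secondsToWait / 60);
  return `Rate limit reached. Please try again in about ${minutes} minute${
    minutes === 1 ? "" : "s"
  }.`;
}
```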
## Being rate-limited is one of those “nice to have” problems
If your third-party API tools are rate-limiting your application, it means you have users. And depending on your pricing tier, it could mean you have lots of users. Congratulations! Now go and upgrade your SaaS tool plans.
| whitep4nth3r |
1,742,538 | Running a Vertex AI custom container | After creating our container, it's time to run it. Our first container didn't work (of course).... | 27,298 | 2024-01-27T00:06:00 | https://dev.to/dchaley/running-a-vertex-ai-custom-container-4jb6 | cloud, serverless, ai, containers | After [creating](https://dev.to/dchaley/building-a-vertex-ai-custom-job-container-5f66) our container, it's time to run it.
Our first container didn't work (of course). After a few iterations we got it running. 🎉

After a [container path whoopsie](https://github.com/dchaley/deepcell-imaging/pull/141/commits/fb2c89704bc6dc22993cf130cb9bd23f2ca20e1e), the main challenge was fetching the active machine config from within the container.
Previously, we'd copy the notebook id into the benchmark then use the notebook API to fetch the machine config. We found the API to describe the [custom job by ID](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform_v1.services.job_service.JobServiceClient#google_cloud_aiplatform_v1_services_job_service_JobServiceClient_get_custom_job) which has the machine info … but you don't have an ID until the job is created. 🤨
This [StackOverflow answer](https://stackoverflow.com/questions/75578886/how-to-create-job-id-for-vertex-ai-manually-or-how-to-access-job-id-in-custom-co) had the key. You do get to set a display name, then you can fetch all jobs filtered on that name.
So we'll need to make sure those job names (nominally for display) are actually unique identifiers… | dchaley |
1,742,846 | 🌟 iPhone 15 Pro Max Giveaway 🌟 | Hey [Your Community/Followers], 🎉 Exciting News! 🎉 We're thrilled to announce our exclusive iPhone 15... | 0 | 2024-01-27T11:22:06 | https://dev.to/jpfans24/iphone-15-pro-max-giveaway-4pko | Hey [Your Community/Followers],
🎉 Exciting News! 🎉 We're thrilled to announce our exclusive iPhone 15 Pro Max Giveaway! 📱✨
To show our appreciation for your amazing support, we're giving you the chance to win the latest and greatest [iPhone 15 Pro Max](https://docs.google.com/presentation/d/1vUkR43We5mQXRLw4-uesyIqcQPjKwkGSdcliZdMoJ80/edit#slide=id.g2b3c9bc954c_0_52)! 🚀🌈
Here's how to enter:
Follow Us: Make sure you're following us on all our social media platforms. 📲
Like & Share: Like this post and share it with your friends and family. The more, the merrier! 🤗🔄
Tag Friends: Tag friends who would love to win this incredible prize. Each tag gets you an extra entry! 🏷️🎟️
Comment: Drop a comment below telling us why you deserve to win the iPhone 15 Pro Max. Be creative! 🗨️🌟
📅 [Giveaway Ends: [25.03.2024]](https://bracecherry.com/nrzrt25b?key=ea3d1b88b0226c07a2bf34a395b71ec0)
🎁 Winner Announcement: We'll announce the lucky winner on [26.03.2024]!
🤞 Good luck to everyone! 🤞 May the odds be ever in your favor! 🍀💖
🤞 Open to [ Specify Eligibility].
#iPhone15ProMaxGiveaway #GiveawayTime #WinBig #TechLove
(Note: Ensure to follow the guidelines of the platform where you are hosting the giveaway, and adjust the details accordingly. Always be transparent and honest about the giveaway terms and conditions.)

[Win iphone 15 pro max](https://docs.google.com/presentation/d/1vUkR43We5mQXRLw4-uesyIqcQPjKwkGSdcliZdMoJ80/edit#slide=id.g2b3c9bc954c_0_52) | jpfans24 | |
1,772,745 | Bash Better: How I Built a Tool to Organize Scripts Effectively | Introduction Before I delve into discussing my script "Link-It," allow me to introduce... | 0 | 2024-02-26T19:50:35 | https://dev.to/ppfeiler/bash-better-how-i-built-a-tool-to-organize-scripts-effectively-55b3 | bash, github, opensource, linux | ## Introduction
Before I delve into discussing my script "Link-It," allow me to introduce myself briefly: My name is Patrick, and I've been working as a software developer, software architect, and project manager for over a decade.
I've been involved in various client projects (custom developments) and have always strived to automate repetitive tasks or write small scripts to accomplish specific tasks quickly and efficiently.
## The Inspiration Behind My Project
As I mentioned earlier, I have various scripts for different tasks. Some of these scripts I use daily, some weekly, and others only 1-2 times a month.
Each of these scripts resides in its own folder and is versioned in Git.
Now, it's not very convenient to have to remember where the script I currently need is located.
It would be much simpler if all my scripts were on the PATH, allowing me to execute them anywhere.
That's how I came up with the idea to create "Link-It." A small script that links the scripts to a folder that is on the PATH.
## How Link-It Works
Link-It is a simple script designed to streamline the process of managing and accessing your scripts. It automates the following tasks:
- It creates the folder `.local/bin` in your HOME directory if it doesn't already exist.
- It adds the newly created folder to your PATH if it's not already included.
- It creates a symlink to the specified file in the `.local/bin` folder.
- For user convenience, when creating the symlink, Link-It removes the file extension.
- For example, `my-super-script.sh` would be linked as `my-super-script`.
[Link-It is an open-source project hosted on GitHub](https://github.com/ppfeiler/link-it)
Feel free to explore the repository, contribute to the project, or provide feedback.
## How Link-It Can Simplify Your Workflow
If I'm now writing a new script that simplifies a process for a client project, I no longer have to remember the path to that script. Instead, I simply execute `link-it /path/to/my/new/script.sh`, and from then on, I can use my script anywhere.
## Conclusion
That concludes the introduction to my little script, "Link-It." I would greatly appreciate your feedback on this post, my README on GitHub, and the project in general.
Perhaps this project is also of interest to others. I'm open to expanding it to cover your specific use cases as well. Your input and contributions would be invaluable in shaping the future development of Link-It.
## My Call to the Community
Do you have your own little scripts that help you in your daily life?
Do you also enjoy writing small automations?
I would be thrilled to exchange scripts and ideas with you. I'm eager to learn from other scripts and experiences. Feel free to leave a comment and let's start the conversation!
 | ppfeiler |
1,915,226 | Test Post | This is my first post created for testing | 0 | 2024-07-08T05:05:04 | https://dev.to/navaneethank/test-post-5fif | This is my first post created for testing | navaneethank | |
1,826,373 | Don't Just Hire a JavaScript Developer, Build Your Dream Team | In the web development sphere, JavaScript has held its positioning as the cornerstone technology... | 0 | 2024-04-18T05:55:48 | https://dev.to/poojarysathvik/dont-just-hire-a-javascript-developer-build-your-dream-team-5co5 | javascript, hire, developers | In the web development sphere, JavaScript has held its positioning as the cornerstone technology powering interactive and dynamic user experiences for over a decade. When you decide to [hire JavaScript developers](https://www.uplers.com/hire-javascript-developers/?utm_source=Link+Building+Promotions&utm_medium=UTM_Javascript+Global&utm_campaign=Javascript+Global&utm_id=Javascript+Global) it’s only the first step in the process of bringing your digital vision to reality.
To truly innovate and excel in a competitive landscape, you need more than individual talent: you need a high-performance team with a shared vision and diverse capabilities. In this article, let's look at how you can strategically recruit JavaScript professionals and integrate them into a dream team that propels your project forward.
## Crafting the Ultimate Development Team with Top JavaScript Talent
### Define your needs
Begin the hiring journey with a clear outline of your project requirements and objectives. Pen down the specific goals, experience, and skills needed, and your expectations. Use this as your guide to find the right type of JavaScript experts that align with your needs.
### Assess a cultural fit
While technical proficiency is a must-check, you should also give equal emphasis to assessing the cultural compatibility of the applicant. Their culture should align with your company culture, values, mission, and work ethics to foster an inclusive work culture.
### Prioritize soft skills assessment
When you hire JavaScript developers, you must evaluate their soft skills, as these complement the candidates' technical expertise and qualifications. This includes effective communication and team coordination (so they can share ideas precisely and concisely), a sound problem-solving approach, and openness to constructive feedback.
### Diversify your team
Innovative solutions, the lifeblood of staying competitive, stem from diverse perspectives and experiences. This means inviting individuals from distinct backgrounds to get unique, fresh perspectives that can result in more robust and creative output.
### Compensation and benefits
Finally, to attract and retain top-tier talent, you need a deliberate, well-planned compensation offering. Your compensation and benefits package should be competitive enough to make you an employer of choice in the recruitment market. For this, you can conduct a [Java developer salary](https://www.uplers.com/salary/?job_type=java+developers) survey using tools like the Uplers salary analysis tool. It gives you location-specific salary comparisons so that you can make an informed decision.
## Conclusion
In a nutshell, building your dream team involves hiring skilled professionals from all walks of life. You can smartly hire JavaScript developers by focusing on the above-stated strategic elements and foster a dynamic group of individuals to contribute to your project's success.
| poojarysathvik |
668,355 | You have unread messages | Imagine creating a profile on this job searching site that promises to connect you to industry inside... | 12,255 | 2021-04-16T16:30:34 | https://blog.ninjobu.com/you-have-unread-messages | webdev, firebase | Imagine creating a profile on this job searching site that promises to connect you to industry insiders. A recruiter sees it and messages you with a dream opportunity. But, you never see the message because you signed up to this site for fun and haven't logged back in to check your messages for weeks. That was [Ninjobu](https://ninjobu.com), at least up until a few days ago.
In the [last post](http://blog.ninjobu.com/building-a-chat-with-firebase), I wrote a bit about how I set up the chat system so recruiters could communicate with candidates. One significant omission was new message notifications. Since we have a web app here, it's unlikely people stay active on the platform for long, the way you might do on social media or IM. And, arguably, user communication is a crucial part of asynchronous job searching. It made sense for the next feature to be a way to let users know when someone contacts them.
My goal was to write a simple solution that would notify users when they have new unread messages while not being too naggy. The resulting structure ended up straightforward, implemented with just two Firebase functions.

As mentioned in the previous post, our database consists of chat documents that store the last message timestamp and when each chat participant last viewed that message. With this information, we can easily decide which users we should notify.
First, I added a new Firebase function that triggers on writes for each chat document. The *onWrite* trigger executes for the creation, updates, and deletions of documents. I ignore the deletion case, but I know the last message timestamp will have updated during the creation and update events.

When a chat is updated, it will have a `lastMessageTime` timestamp and an array `lastSeenTime` with two entries: one timestamp for each of the two chat participants, representing when they last saw the chat. If any of the `lastSeenTime` timestamps are older than the message, we record the chat id in a document `misc/chats_to_notify` for later. Firebase's *FieldValue.arrayUnion* utility lets us atomically add unique entries to an array.
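Since the article's code is shown as screenshots, here is an illustrative pure-JavaScript sketch of the check described above. It assumes `members[i]` corresponds to `lastSeenTime[i]` and that timestamps are plain numbers; the real implementation works with Firestore documents and timestamps:

```javascript
// Illustrative sketch (not the article's actual code) of the decision
// the onWrite trigger makes: a chat needs a notification if any
// participant's lastSeenTime is older than lastMessageTime.
function chatNeedsNotification(chat) {
  return chat.lastSeenTime.some((seen) => seen < chat.lastMessageTime);
}

// The scheduled function later narrows this down to the specific
// members to email.
function membersToNotify(chat) {
  return chat.members.filter((uid, i) => chat.lastSeenTime[i] < chat.lastMessageTime);
}
```

In the real trigger, a chat id passing this check is appended to `misc/chats_to_notify` with `FieldValue.arrayUnion`, as described above.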
With the above function running every time a chat document is updated, we will end up with a list of chat ids in our `misc/chats_to_notify` document. The next step is to go through these chats and send emails to the participants, as required. We do this with a second function that we schedule at a set interval.
Being tightly integrated with GCP, Firebase offers many features from Google Cloud wrapped in simple-to-use interfaces. One example is scheduling functions that run at predefined intervals, using Google Cloud Pub/Sub and Scheduler behind the scenes. The function that handles our email notifications is a bit chunkier, so let's split it into smaller parts.

I set up the function to run once every two hours, quickly done with the [App Engine cron.yaml](https://cloud.google.com/appengine/docs/standard/python/config/cronref) syntax.

Start by retrieving the `misc/chats_to_notify` document where we have the ids for our updated chats, and early out if the document is invalid or the list of chats is empty.

Once we have our chats, I chose to process only up to 100 of them on each invocation. Firebase functions have a running time limit of 60 seconds, and I don't want the function to time out and not send any emails if the list of updated chats is too long. However, my choice is a bit premature and speculative. The 100 chats limit was chosen arbitrarily, and Firebase allows the function timeout to be configured to up to 9 minutes. I'm also unsure how the function itself will scale with a significantly larger array of chat ids. I will likely have to tweak this once I have more data. But for now, processing 100 chats every 2 hours seems reasonable based on the website's current activity.
While looping over each chat, we add its id to the `processed` array and later use it to remove the entry from the `misc/chats_to_notify` document. Next, we retrieve the chat document data and validate the required fields. We need `lastMessageTime` to exist and the `members` array to contain the two chat participants' UIDs. We then save each member's UID if their `lastSeenTime` is older than the `lastMessageTime`. This check is important because, between the time the chat id was recorded in the `misc/chats_to_notify` document and the time this function runs, each member of the chat may have already seen the last message. We don't want to send an email notification for a read message. Our *onWrite* function for the chat document only adds chat ids to the notification list and doesn't remove them.
A thing to keep in mind here is, if we have 100 chats where the participants have both seen the messages, this function will process those chat entries and not send any emails until the following invocation 2 hours later. I'm not entirely happy with this, but I consider this part of the code temporary until I have more data on how many emails one run of the function can process.

We now have a list of UIDs for users that need to receive a notification email. For each UID, we get the user's email from the Firebase Auth module and create a personalization entry. [Personalizations](https://sendgrid.com/docs/for-developers/sending-email/personalizations/) are a Sendgrid feature that lets us send the same email to multiple recipients with a single API call. It also ensures that each recipient will only see their email address in the *to* field and avoids catastrophic invasion of privacy.

Finally, we remove all the chat ids we have processed from the `misc/chats_to_notify` document to avoid sending multiple emails to the same folks. The *FieldValue.arrayRemove* feature allows us to do this quickly with a single call.
Speaking of the array with chat ids, it may be worth noting that at some point, this may become a bottleneck. Firestore document sizes have a limit of 1 MiB. If we assume the auto-generated chat ids continue to be 20 bytes each like they are at the moment, we have room for a bit over 50,000 entries in the array before we blow the size limit. Additionally, Firebase has a soft limit of one write per second to the same document that they don't recommend you go over to avoid contention errors. It's a soft limit in the sense that it shouldn't cause issues in short bursts, but something to keep in mind. If there's loads of chat activity on the site, the `misc/chats_to_notify` document will be hammered and potentially go over the one write per second limit. Once we hit these problems, however, congratulations are in order, most likely.
Thank you for reading this far. If you are a software engineer open to new opportunities but not actively looking for work, try out [Ninjobu](https://ninjobu.com)! Create a profile, let recruiters know what job and salary you'd like, and who knows what might happen.
Until next time. | ninjobu |
1,459,892 | Failing Fast | Raise your hand if you have seen this before in your development or production error... | 0 | 2023-05-07T00:42:44 | https://qualitysoftwarematters.com/failing-fast | ---
title: Failing Fast
published: true
date: 2015-06-19 05:00:00 UTC
tags:
canonical_url: https://qualitysoftwarematters.com/failing-fast
---
Raise your hand if you have seen this before in your development or production error logs.
```
System.NullReferenceException: Object reference not set to an instance of an object.
```
As you might have guessed, this is informing us that we are calling a method or property on an object that is currently null. The stack trace might be able to help us to locate where the error occurred. However, an ambiguous error like this can be hard to troubleshoot for multiple reasons.
- If the stack trace does not have line numbers and multiple local variables exist within the method, it can be hard to determine which variable was null.
```
public void Method()
{
    var person1 = new Person { FirstName = "Todd", LastName = "Meinershagen" };
    Person person2 = null;

    Console.WriteLine(person1.LastName);
    Console.WriteLine(person2.FirstName); //kapow!
}

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```
- If the variable was an input parameter, it can be hard to determine which method within the call stack the null reference was introduced.
```
public void Method1()
{
    Person person = null;
    Method2(person);
}

public void Method2(Person person)
{
    Method3(person);
}

public void Method3(Person person)
{
    Console.WriteLine(person.FirstName); //kapow!
}
```
### Failing Fast Enhances Maintainability
In my [last post](http://www.qualitysoftwarematters.com/2015/06/what-is-software-quality.html), I talked about the various factors that define software quality. One of those, maintainability, is very important, because it reduces **_the effort required to locate and fix an error in an operational program_**. Using a technique like **_failing fast_** can enable you and your team to locate errors more quickly in both development and production.
So, what does it mean to fail fast?
It means that rather than write code that ignores or band-aids an issue (like setting default values) and allows the code to limp along throughout the execution of your program, you fail "immediately and visibly" the minute that you are aware that there is an issue.
### Assertions are the Key

In order to fail fast, the key is to use assertions in your code. An assertion is code that checks for a condition, and if the condition is not met, fails. In the case of the null object reference, you can do the following:
```
public void Method(int id)
{
    var person = _gateway.GetPerson(id);

    if (person == null)
    {
        throw new ArgumentNullException("person");
    }

    Console.WriteLine(person.LastName);
    Console.WriteLine(person.FirstName);
}

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```
For null references, I like to check in places where two methods or classes interact:
- **Constructor** : verify any dependencies passed in by another class
- **Method** : check any input parameters passed in by another method or class
- **Method** : check the response from another method or class, as in the case with the Gateway call above
### Drying Up Your Assertions
After using these kind of assertions throughout your code, you would want to DRY (don't repeat yourself) your code and create reusable assertions. Below is one example.
```
public class Assert
{
    public static void IsNotNull(object value, string paramName)
    {
        if (value == null)
        {
            throw new ArgumentNullException(paramName);
        }
    }
}
```
You might also look into third party libraries such as [Magnum by Chris Patterson](http://www.nuget.org/packages/Magnum/) that are available as NuGet packages. Below is an example using the Guard class that Magnum provides for these kind of common assertions.
```
public void Method(int id)
{
    var person = _gateway.GetPerson(id);
    Guard.AgainstNull(person, "person");

    Console.WriteLine(person.LastName);
    Console.WriteLine(person.FirstName);
}

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```
Another option, if you are using the .NET framework, is [Microsoft's Code Contracts](http://research.microsoft.com/en-us/projects/contracts/). These allow the developer to specify pre- and post-conditions that can also be seen in the documentation of a given method. A discussion of their use is outside the scope of this article.
### Is Failing Fast Robust?
At this point, some may object and say this technique of failing fast will create fragile software. However, by failing fast, errors will more likely be found during development and testing of your software, rather than in production where it really counts. And if a bug does escape in production, your team will more likely be able to fix the issue quickly by failing closer to where the issue originally occurred.
Another objection to this style of programming is that it will more likely cause the system to crash in front of the user. One way to mitigate this is to use a global error handler that gracefully displays a user-friendly, generic error message to the user while providing the error details to developers through logs or email. In the case of non-interactive applications such as batch processes or windows services, you don't have to display a message, but after handling the error globally, operations continue by moving on to the next action/transaction.
### Other Resources
If you are interested in reading more about the concept of failing fast, check out James Shore's [article for IEEE magazine](http://www.martinfowler.com/ieeeSoftware/failFast.pdf) discussing the same topic. | toddmeinershagen | |
1,663,524 | Docker | Introduction Before VMs, every server only had one operating system. This means if you... | 0 | 2023-11-11T04:50:58 | https://dev.to/aldoportillo/docker-18n7 | docker, cloud, node, systemdesign | ## Introduction
Before VMs, every server ran only one operating system. This meant that if you wanted a Windows server and a Linux server, you needed two physical servers. Enter virtualization: instead of installing an OS directly, you install a hypervisor (such as VMware ESXi). This allows you to divide one server's resources into multiple virtual servers running different operating systems.
### Hypervisor Flow
```mermaid
flowchart TD
A[Hardware] --> B(Hypervisor)
B --> D{Windows Server}
B --> E{Ubuntu Server}
B --> F{Debian Server}
```
Well, why use Docker to run different OSes on your server if hypervisors already do that? Because virtualization and virtual machines virtualize hardware, whereas Docker virtualizes the operating system.
### Docker Flow
```mermaid
flowchart TD
A[Hardware] --> C{Ubuntu Server}
C --> D(Docker Engine)
D --> E{Ubuntu Server}
D --> F{Debian Server}
D --> G{Centos Server}
```
The Docker containers contain the OS that we need to run. These are micro-containers, each with its own OS, CPU, memory, and network. They are lightweight, fast, and **isolated**.
### Why is it so fast?
Since we build the Docker engine on top of our Linux server, we only need one kernel: all Docker containers share it. Hypervisor-based VMs, by contrast, each contain their own kernel.
### How is the industry using Docker?
Developers can write their code. Deploy it in a docker container, and it works anywhere.
### Microservices
Take portions of your stack and segment them into smaller pieces. For example, split your app server, app client, and database into separate containers.
## Dockers in Node
Dockerfile is a blueprint for building a Docker Image
A Docker Image is a template for running Docker Containers.
A Container is just a running process.
1. Developer creating the software defines the environment with the dockerfile.
2. Another developer can use the Dockerfile to rebuild the environment, which is saved as an image. Images can be shared, and anyone can pull that image.
3. The image can be run in a docker container
### Creating a Dockerfile that Creates Docker Image
```js
// In src/index.js
const app = require('express')();

app.get('/', (req, res) =>
  res.json({ message: 'Docker is easy!' })
);

const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`app listening on http://localhost:${port}`));
```
Create a `.dockerignore` file and add `node_modules` to it, so dependencies are not copied into the image.
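For this project, the file can contain a single line:

```
node_modules
```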
```Docker
# In Dockerfile
# From the official Node image
FROM node:12
# Set working directory
WORKDIR /app
# Install dependencies first so they can be cached
COPY package*.json ./
RUN npm install
# Copy source code
COPY . .
# Set port
ENV PORT=8080
EXPOSE 8080
# Run
CMD ["npm", "start"]
```
### Create Docker Image
```bash
docker build -t username/app:1.0 .
```
The period at the end is the build context path. When the process is done, it returns `Successfully built image_id`.
Next, you can push this image to the cloud for others to use. For now, we are going to run the container locally.
### Create container
```bash
# Run a container from the image
docker run image_id
# Map host port 5000 to the container's port 8080
docker run -p 5000:8080 image_id
```
### Volume: share data between containers
```bash
# Create a volume
docker volume create shared-data-folder
# Mount the volume into a container
docker run \
  --mount source=shared-data-folder,target=/data-folder \
  image_id
```
### Docker Compose: In Progress
## Command Cheat Sheet
```bash
# create a new container (detached, with a TTY) from an image such as centos, ubuntu, or node
docker run -d -t --name container_name centos/ubuntu/node
#show all containers in server
docker ps
#show resources for all containers
docker stats
```
| aldoportillo |
1,763,787 | ChatCraft Adventures #6 | This week in ChatCraft This week in ChatCraft, Release 1.3 has been completed, and is... | 26,549 | 2024-02-17T04:59:17 | https://dev.to/rjwignar/chatcraft-adventures-6-407d | beginners, opensource, openai | ## This week in ChatCraft
This week in ChatCraft, Release 1.3 has been completed, and is available [here](https://github.com/tarasglek/chatcraft.org/releases/tag/v1.3.0).
This has been a busy week in terms of classes. Due to having to meet other deadlines, my PRs involved small code changes. However, I've also been able to discover issues and contribute reviews.
## Issues
### Searchbar Improvements
After provisioning my own [OpenAI API](https://openai.com/blog/openai-api) key, I've been using ChatCraft a lot, perhaps too much:

With over 70 chats I was wondering how I'd quickly find previous chats. After asking in the Discord server, I was informed there was a search bar, but I couldn't find it.

The search bar might be easy for readers to find, but it took me a while to notice it. It was from this experience I looked for ways to make the search bar more visible/noticeable.
### [ChatCraft page moves up](https://github.com/tarasglek/chatcraft.org/issues/442)
After a recent Pull Request, I noticed that accessing saved chats pushes up the page:

Luckily, my classmate [Amnish](https://github.com/Amnish04) recognized the cause, as he researched and fixed a very similar issue for [ChatCraft during Hacktoberfest 2023](https://dev.to/amnish04/no-backing-away-when-hacking-away-2h27).
https://github.com/tarasglek/chatcraft.org/pull/445
In short, the cause was a new [ChakraUI Menu](https://chakra-ui.com/docs/components/menu) that was missing the `fixed` CSS positioning strategy (from https://v1.chakra-ui.com/docs/components/overlay/menu).
### [Cannot access shared chats](https://github.com/tarasglek/chatcraft.org/issues/446)
After the same Pull Request, I noticed that ChatCraft crashes when accessing shared chats and using the search bar. Due to other commitments, I didn't have the bandwidth to investigate and solve the issue. Luckily, [Dave](https://github.com/humphd), my class instructor, figured out the cause and made a [PR](https://github.com/tarasglek/chatcraft.org/pull/469) that fixes the issue.
## Reviews
### [Page Moves Up Solution](https://github.com/tarasglek/chatcraft.org/pull/445#pullrequestreview-1876787728)
I had the pleasure of reviewing Amnish's fix for [Issue 442](https://github.com/tarasglek/chatcraft.org/issues/442), and reading the explanation for the fix on his blog.
### [Enable remote JS/TS execution using val.town endpoint](https://github.com/tarasglek/chatcraft.org/pull/403#pullrequestreview-1881446322)
This Pull Request by Dave involved using the [val.town](https://www.val.town/) [/eval endpoint ](https://docs.val.town/api/eval/) to run JavaScript/TypeScript remotely. I'm still learning about val.town so I don't know much at the moment, but it seems like a cool integration.
## Pull Requests
This week, I've made two pull requests to ChatCraft
### [Search Bar Placeholder](https://github.com/tarasglek/chatcraft.org/pull/448)
Following up on the search bar issue I brought up earlier, I added placeholder text to the ChatCraft search bar. I'm not a UI/UX expert, but I figured ChatCraft would have a better idea. In a new ChatCraft conversation, I modified the system prompt to make ChatCraft an expert in UI/UX:

And then [I asked ChatCraft](https://chatcraft.org/c/rjwignar/oOes55CNAXukfvwrofjxL) (using `gpt-4-vision-preview` via the new Image Input feature) to make recommendations for improvement, one of which was adding placeholder text.
ChatCraft made other suggestions that would improve the overall search functionality and overall experience, although most of them (besides placeholder text) wouldn't necessarily improve the search bar's visibility. Perhaps I could specifically ask for suggestions for improving visibility.
### [Adding Twitter Card Metadata to ChatCraft.org](https://github.com/tarasglek/chatcraft.org/pull/459)
Previously, ChatCraft links posted on Twitter wouldn't be rendered into a [Twitter Card](https://developer.twitter.com/en/docs/twitter-for-websites/cards/overview/abouts-cards). The ChatCraft page already contained [OpenGraph](https://ogp.me/) metadata but it wasn't enough for Twitter to render a Summary Card (it was missing an `og:type` property):

I initially added an `og:type` meta tag, but ultimately also added the required Twitter Tags for robustness:
```html
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="chatcraft.org" />
<meta name="twitter:description" content="Web-based AI Assistant for Software Developers" />
<meta name="twitter:image" content="https://chatcraft.org/favicon-32x32.png" />
```
I also added these new meta tags to the share logic, so shared chats can also be rendered into Summary Cards:
```js
setMetaContent(clonedDocument, "name", "twitter:card", "summary_large_image");
setMetaContent(clonedDocument, "name", "twitter:title", "chatcraft.org");
setMetaContent(clonedDocument, "name", "twitter:description", chat.summary);
setMetaContent(
clonedDocument,
"name",
"twitter:image",
"https://chatcraft.org/favicon-32x32.png"
);
```
Now ChatCraft can be rendered in a Twitter Card:

### Missing Image
You'll notice the Twitter Card is missing an image.
This is **not** a complete fix, as there's no image in the Twitter Card. I originally wanted to use a Summary Card with the ChatCraft logo as the image, but it doesn't meet the minimum image size requirements.
In future work, we're looking into using a screenshot of the final message as the card image.
## Next week in ChatCraft
Next week, I hope to have more time to make larger contributions to ChatCraft. | rjwignar |
1,829,706 | How to Build an AI FAQ System with Strapi, LangChain & OpenAI | Introduction Frequently Asked Questions (FAQs) offer users immediate access to answers for... | 0 | 2024-04-21T15:24:39 | https://dev.to/strapi/build-an-ai-faq-system-with-strapi-langchain-openai-3l5b | react, openapi, strapi, faqapp | ## Introduction
Frequently Asked Questions (FAQs) offer users immediate access to answers for common queries. However, as the volume and complexity of inquiries grow, manual management of FAQs becomes unsustainable. This is where an AI-powered FAQ system comes in.
In this tutorial, you'll learn how to create an AI-driven FAQ system using Strapi, LangChain.js, and OpenAI. This system will allow users to pose queries related to Strapi CMS and receive accurate responses generated by a GPT model.
## Prerequisites
To comfortably follow along with this tutorial, you need to have:
- [NodeJs](https://nodejs.org/en/download/) installed in your system
- Basic knowledge of [ReactJs](https://react.dev/)
- Basic Knowledge of [Express](https://expressjs.com/)
- Basic knowledge of [LangChain.js](https://js.langchain.com/docs/get_started/introduction)
## Setting Up the Project
You need to configure the data source, which, in this case, is Strapi. Then, obtain an OpenAI API key, initialize a React project, and finally install the required dependencies.
## Configuring Strapi as the Source for Managing FAQ Data
[Strapi](https://strapi.io/) provides a centralized data management platform. This makes it easier to organize, update, and maintain the FAQ data. It also automatically generates a RESTful API for accessing the content stored in its database.
### Install Strapi
If you don't have Strapi installed in your system, proceed to your terminal and run the following command:
```bash
npx create-strapi-app@latest my-project
```
The above command will install Strapi into your system and launch the admin registration page on your browser.

Fill in your credentials in order to access the Strapi dashboard.
### Create a Collection Type
On the dashboard, under **Content-Type Builder** create a new collection type and name it `FAQ`.

Then, add a `question` and an `answer` field to the **FAQ collection**. The `question` field should be of type **text**, as it will be a plain text input. As for the `answer` field, use the **Rich Text (Blocks)** type, as it allows formatted text.

Proceed to the Content Manager and add entries to the `FAQ` collection type. Each entry should have a FAQ question and its corresponding answer. Make sure you publish the entry. Create as many entries as you wish.

### Expose Collection API
Now that you have the `FAQ` data in Strapi, you need to expose it via an API. This will allow the application you will create to consume it.
To achieve this, proceed to ***Settings > Users & Permissions Plugin > Roles > Public***.

Click on `Faq` under **Permissions**, check the `find` and `findOne` actions, and save.

This will allow us to retrieve our FAQ data via the http://localhost:1337/api/faqs endpoint. Here is how the data looks via a GET request.

Strapi is now configured and the FAQ data is ready for use.
### Obtaining the OpenAI API Key
- Proceed to the [OpenAI API website](https://platform.openai.com/docs/overview) and create an account if you don't have one.
- Then click on API keys.

- Create a new secret key. Once generated, copy and save the API key somewhere safe as you will not be able to view it again.
### Initializing a React Project and Installing the Required Dependencies
This is the final step needed to complete setting up our project. Create a new directory in your preferred location and open it with an **IDE** like **VS Code**. Then run the following command on the terminal:
```bash
npx create-react-app faq-bot
```
The command will create a new React.js application named `faq-bot` set up and ready to be developed further.
Then navigate to the `faq-bot` directory and run the following command to install all the dependencies you need to develop the **FAQ AI** application:
```bash
yarn add axios langchain @langchain/openai express cors dotenv
```
If you don't have yarn installed, install it using this command:
```bash
npm install -g yarn
```
You can use **npm** to install the dependencies, but during development, I found **yarn** to be better at handling any dependency conflict issues that occurred.
The dependencies will help you achieve the following:
- [`axios`](https://www.npmjs.com/package/axios): To fetch data from the Strapi CMS API and also to fetch responses from our Express server.
- [`langchain`](https://www.npmjs.com/package/langchain): To implement the [Retrieval Augmented Generation(RAG)](https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/) part of the application.
- [`@langchain/openai`](https://www.npmjs.com/package/@langchain/openai): To handle communication with the OpenAI API.
- [`express`](https://www.npmjs.com/package/express): To create a simple server to serve the frontend.
- [`cors`](https://www.npmjs.com/package/cors): To ensure the server responds correctly to requests from different origins.
- [`dotenv`](https://www.npmjs.com/package/dotenv): To load the OpenAI API key from a `.env` file.
## Creating the FAQ AI App Backend
The core of your FAQ system will reside in an Express.js server. It will leverage the RAG (Retrieval Augmented Generation) approach.

The RAG approach enhances the accuracy and richness of responses. It achieves this by combining information retrieval with large language models (LLMs) to provide more factually grounded answers. A retriever locates relevant passages from external knowledge sources, such as FAQs stored in Strapi CMS. These passages, along with the user's query, are then fed into the LLM. By leveraging both internal knowledge and retrieved context, the LLM generates responses that are more informative and accurate.
The server will be responsible for managing incoming requests, retrieving FAQ data from Strapi, processing user queries, and utilizing RAG for generating AI-driven responses.
### Importing the Necessary Modules and Setting Up the Server
At the root of your `faq-bot` project, create a file and name it `server.mjs`. The extension indicates that the JavaScript code is written in the **[ECMAScript module format](https://nodejs.org/api/esm.html)**. ECMAScript modules are a standard mechanism for modularizing JavaScript code.
Then open the `server.mjs` file and import the libraries we installed earlier, along with some specific modules from `LangChain`. Next, define the port on which the server will listen for incoming requests. Finally, configure the middleware functions to handle JSON parsing and CORS.
```javascript
import express from "express";
import axios from "axios";
import dotenv from "dotenv";
import cors from "cors";
import { ChatOpenAI } from "@langchain/openai";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever";
import { MessagesPlaceholder } from "@langchain/core/prompts";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import { Document } from "langchain/document";
dotenv.config();
const app = express();
const PORT = process.env.PORT || 30080;
// Middleware to handle JSON requests
app.use(express.json());
app.use(cors()); // Add this line to enable CORS for all routes
```
You will understand what each library does as we move on with the code.
The rest of the code in the **"Creating the FAQ AI App Backend"** section will reside in the same `server.mjs` file as the code above. The code in each subsection is a continuation of the code explained in the previous subsection.
### Initializing the OpenAI Model
To interact with the OpenAI language model, you'll need to initialize it with your API key and desired settings.
```javascript
// Instantiate Model
const model = new ChatOpenAI({
modelName: "gpt-3.5-turbo",
temperature: 0.7,
openAIApiKey: process.env.OPENAI_API_KEY,
});
```
The API Key is stored as an environmental variable. Proceed to the root folder of your project and create a file named **.env**. Store your OpenAI API key there as follows:
```bash
OPENAI_API_KEY=your_api_key_here
```
**Temperature** is a hyperparameter that controls the randomness of the model's output.
### Fetching FAQ Data From Strapi
The system relies on pre-defined FAQ data stored in Strapi. Define a function to fetch this data using Axios and make a `GET` request to the Strapi API endpoint you configured earlier.
```javascript
// Fetch FAQ data
const fetchData = async () => {
try {
const response = await axios.get("http://localhost:1337/api/faqs");
return response.data;
} catch (error) {
console.error("Error fetching data:", error.message);
return [];
}
};
```
After fetching the data, extract the questions and their corresponding answers.
```javascript
const extractQuestionsAndAnswers = (data) => {
return data.data.map((item) => {
return {
question: item.attributes.Question,
answer: item.attributes.Answer[0].children[0].text,
};
});
};
```
The above function maps through the data array and extracts the question and answer attributes from each item.
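To make the expected data shape concrete, here is a small standalone sketch. The sample payload mirrors the `data`/`attributes` shape Strapi returns for this collection; the field values themselves are made up:

```javascript
// Sample payload in the shape returned by GET /api/faqs
const sample = {
  data: [
    {
      id: 1,
      attributes: {
        Question: "What is Strapi?",
        Answer: [
          {
            type: "paragraph",
            children: [{ type: "text", text: "An open-source headless CMS." }],
          },
        ],
      },
    },
  ],
};

// Same extraction logic as above
const extractQuestionsAndAnswers = (data) =>
  data.data.map((item) => ({
    question: item.attributes.Question,
    answer: item.attributes.Answer[0].children[0].text,
  }));

console.log(extractQuestionsAndAnswers(sample));
// [ { question: 'What is Strapi?', answer: 'An open-source headless CMS.' } ]
```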
### Populating the Vector Store
To efficiently retrieve relevant answers, create a vector store containing embeddings of the FAQ documents.
```javascript
// Populate Vector Store
const populateVectorStore = async () => {
const data = await fetchData();
const questionsAndAnswers = extractQuestionsAndAnswers(data);
// Create documents from the FAQ data
const docs = questionsAndAnswers.map(({ question, answer }) => {
return new Document({ pageContent: `${question}\n${answer}`, metadata: { question } });
});
// Text Splitter
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 100, chunkOverlap: 20 });
const splitDocs = await splitter.splitDocuments(docs);
// Instantiate Embeddings function
const embeddings = new OpenAIEmbeddings();
// Create the Vector Store
const vectorstore = await MemoryVectorStore.fromDocuments(splitDocs, embeddings);
return vectorstore;
};
```
The above code uses the questions and answers data to create document objects. It then splits them into smaller chunks, computes embeddings, and constructs a vector store.
The vector store holds representations of the FAQ data, facilitating efficient retrieval and processing within the system.
### Answering Questions From the Vector Store
With the vector store populated, you need a way to retrieve only the information relevant to a user's query, and then use an LLM to produce a good response based on the retrieved information and the chat history.
To achieve this, you will implement a function to create a retriever, define prompts for AI interaction, and invoke a retrieval chain.
```javascript
// Logic to answer from Vector Store
const answerFromVectorStore = async (chatHistory, input) => {
const vectorstore = await populateVectorStore();
// Create a retriever from vector store
const retriever = vectorstore.asRetriever({ k: 4 });
// Create a HistoryAwareRetriever which will be responsible for
// generating a search query based on both the user input and
// the chat history
const retrieverPrompt = ChatPromptTemplate.fromMessages([
new MessagesPlaceholder("chat_history"),
["user", "{input}"],
[
"user",
"Given the above conversation, generate a search query to look up in order to get information relevant to the conversation",
],
]);
// This chain will return a list of documents from the vector store
const retrieverChain = await createHistoryAwareRetriever({
llm: model,
retriever,
rephrasePrompt: retrieverPrompt,
});
// Define the prompt for the final chain
const prompt = ChatPromptTemplate.fromMessages([
[
"system",
      `You are a Strapi CMS FAQs assistant. Your knowledge is limited to the information I provide in the context.
      You will answer this question based solely on this information: {context}. Do not make up your own answer.
      If the answer is not present in the information, you will respond 'I don't have that information.'
      If a question is outside the context of Strapi, you will respond 'I can only help with Strapi related questions.'`,
],
new MessagesPlaceholder("chat_history"),
["user", "{input}"],
]);
// the createStuffDocumentsChain
const chain = await createStuffDocumentsChain({
llm: model,
prompt: prompt,
});
// Create the conversation chain, which will combine the retrieverChain
// and combineStuffChain to get an answer
const conversationChain = await createRetrievalChain({
combineDocsChain: chain,
retriever: retrieverChain,
});
// Get the response
const response = await conversationChain.invoke({
chat_history: chatHistory,
input: input,
});
// Log the response to the server console
console.log("Server response:", response);
return response;
};
```
The above code creates a retriever for search queries and configures a history-aware retriever. It then defines prompts for AI interaction, constructs a conversation chain, and invokes it with chat history and input. Finally, it logs and returns the generated response.
### Handling Incoming Requests and Starting the Server
Now that you have everything for handling a user request ready, expose a `POST` endpoint `/chat` to handle incoming requests from clients. The route handler will parse input data, format the chat history, and pass it to the `answerFromVectorStore` function responsible for answering questions.
```javascript
// Route to handle incoming requests
app.post("/chat", async (req, res) => {
const { chatHistory, input } = req.body;
// Convert the chatHistory to an array of HumanMessage and AIMessage objects
const formattedChatHistory = chatHistory.map((message) => {
if (message.role === "user") {
return new HumanMessage(message.content);
} else {
return new AIMessage(message.content);
}
});
const response = await answerFromVectorStore(formattedChatHistory, input);
res.json(response);
});
// Start the server
app.listen(PORT, () => {
console.log(`Server is running on http://localhost:${PORT}`);
});
```
Run the following command on your terminal to start the server:
```bash
node server.mjs
```
The server will run on the specified port.
Use **[Postman](https://www.postman.com/downloads/)** or any other software to test the server. Make sure the payload you send is in this format:
```json
{
"chatHistory": [
{
"role": "user",
"content": "What is Strapi?"
},
{
"role": "assistant",
"content": "Strapi is an open-source headless CMS (Content Management System) "
}
],
"input": "Does Strapi have a default limit"
}
```
You can change the content and input data to your liking. Below is a sample result after you make the POST request:
```json
"answer": "The default limit for records in the Strapi API is 100."
```
That is the `answer` part of the response. But the response contains a lot more data, including the documents used to answer the question.
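For reference, the returned object is roughly shaped like this. This is a sketch with illustrative values; the exact keys come from LangChain's `createRetrievalChain` and may vary by version:

```javascript
// Approximate shape of the createRetrievalChain result (illustrative values)
const response = {
  input: "Does Strapi have a default limit",
  chat_history: [], // the HumanMessage / AIMessage objects passed in
  context: [],      // the Document chunks retrieved from the vector store
  answer: "The default limit for records in the Strapi API is 100.",
};
```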
## Creating the Frontend of Your System
Having completed the core part of your system, you need a user interface through which users will interact with it. Under **src** in your React app, create a **ChatbotUI.js** file and paste the following code:
```javascript
import React, { useState, useEffect, useRef } from 'react';
import axios from 'axios';
import './ChatbotUI.css'; // Assuming the CSS file exists
const ChatbotUI = () => {
const [chatHistory, setChatHistory] = useState([]);
const [userInput, setUserInput] = useState('');
const [isLoading, setIsLoading] = useState(false);
const [error, setError] = useState(null);
const [isExpanded, setIsExpanded] = useState(true); // State for chat window expansion
const chatContainerRef = useRef(null);
useEffect(() => {
// Scroll to the bottom of the chat container when new messages are added
if (isExpanded) {
chatContainerRef.current.scrollTop = chatContainerRef.current.scrollHeight;
}
}, [chatHistory, isExpanded]);
const handleUserInput = (e) => {
setUserInput(e.target.value);
};
const handleSendMessage = async () => {
if (userInput.trim() !== '') {
const newMessage = { role: 'user', content: userInput };
const updatedChatHistory = [...chatHistory, newMessage];
setChatHistory(updatedChatHistory);
setUserInput('');
setIsLoading(true);
try {
const response = await axios.post('http://localhost:30080/chat', {
chatHistory: updatedChatHistory,
input: userInput,
});
const botMessage = {
role: 'assistant',
content: response.data.answer,
};
setChatHistory([...updatedChatHistory, botMessage]);
} catch (error) {
console.error('Error sending message:', error);
setError('Error sending message. Please try again later.');
} finally {
setIsLoading(false);
}
}
};
const toggleChatWindow = () => {
setIsExpanded(!isExpanded);
};
return (
<div className="chatbot-container">
<button className="toggle-button" onClick={toggleChatWindow}>
{isExpanded ? 'Collapse Chat' : 'Expand Chat'}
</button>
{isExpanded && (
<div className="chat-container" ref={chatContainerRef}>
{chatHistory.map((message, index) => (
<div
key={index}
className={`message-container ${
message.role === 'user' ? 'user-message' : 'bot-message'
}`}
>
<div
className={`message-bubble ${
message.role === 'user' ? 'user-bubble' : 'bot-bubble'
}`}
>
<div className="message-content">{message.content}</div>
</div>
</div>
))}
{error && <div className="error-message">{error}</div>}
</div>
)}
<div className="input-container">
<input
type="text"
placeholder="Type your message..."
value={userInput}
onChange={handleUserInput}
onKeyPress={(e) => {
if (e.key === 'Enter') {
handleSendMessage();
}
}}
disabled={isLoading}
/>
<button onClick={handleSendMessage} disabled={isLoading}>
{isLoading ? 'Loading...' : 'Send'}
</button>
</div>
</div>
);
};
export default ChatbotUI;
```
The above code creates a user interface for interacting with the **AI-powered FAQ system** hosted on the server. It allows users to send messages, view chat history, and receive responses from the server. It also maintains state for chat history, user input, loading status, and error handling. When a user sends a message, the component sends an HTTP `POST` request to the server's `/chat` endpoint, passing along the updated chat history and user input. Upon receiving a response from the server, it updates the chat history with the bot's message.
Create another file under the `src` directory, name it `ChatbotUI.css`, and paste the following code. This code is responsible for styling the user interface.
```css
.chatbot-container {
display: flex;
flex-direction: column;
background-color: #f5f5f5;
padding: 5px;
position: fixed;
bottom: 10px;
right: 10px;
width: 300px;
z-index: 10;
}
.toggle-button {
padding: 5px 10px;
background-color: #ddd;
border: 1px solid #ccc;
border-radius: 5px;
cursor: pointer;
margin-bottom: 5px;
}
.chat-container {
height: 300px;
overflow-y: auto;
}
.message-container {
display: flex;
justify-content: flex-start;
margin-bottom: 5px; /* Reduced margin for tighter spacing */
}
.message-bubble {
max-width: 70%;
padding: 5px; /* Reduced padding for smaller bubbles */
border-radius: 10px;
}
.user-bubble {
background-color: #007bff;
color: white;
}
.bot-bubble {
background-color: #f0f0f0;
color: black;
}
.input-container {
align-self: flex-end;
display: flex;
align-items: center;
padding: 5px;
}
.input-container input {
flex: 1;
padding: 5px;
border: 1px solid #ccc;
border-radius: 5px;
margin-right: 10px;
}
.input-container button {
padding: 10px 20px;
background-color: #007bff;
color: white;
border: none;
border-radius: 5px;
cursor: pointer;
}
```
The above code defines the layout and styling for the user interface. It positions the chat interface fixed at the bottom right corner of the screen, styles message bubbles, and formats the input field and send button for user interaction.
In the `App.js` file render the user interface.
```javascript
import React from 'react';
import ChatbotUI from './ChatbotUI';
const App = () => {
return (
<div>
<ChatbotUI />
</div>
);
};
export default App;
```
You are now done creating the FAQ AI-powered system.
Open a new terminal in the same path you run your server and start your react app using the following command:
```bash
yarn start
```
You can now start asking the system FAQs about Strapi CMS. The system's knowledge depends on the FAQ data you have stored in Strapi.
## Testing the System
The following GIF shows how the system responds:

When asked about a topic outside Strapi, it reminds the user that it only deals with Strapi CMS. Also, if an answer is not present in the FAQ data stored in Strapi CMS, it responds that it does not have that information.
## Conclusion
Congratulations on creating an AI & Strapi-powered FAQ system. In this tutorial, you've learned how to leverage the strengths of Strapi, LangChain.js, and OpenAI.
The system integrates seamlessly with Strapi, allowing you to effortlessly manage your FAQ data through a centralized platform. LangChain.js facilitates Retrieval Augmented Generation (RAG), enhancing the accuracy and comprehensiveness of the system's responses. OpenAI provides the large language model that the system uses to generate informative and relevant answers to user queries.
## Resources
* Have a look at the full code [here](https://github.com/FINCH285/AI-Powered-FAQ-System-with-Strapi-LangChain-OpenAI) and the Strapi backend [here](https://github.com/FINCH285/faq-bot-strapi-backend).
* https://js.langchain.com/docs/get_started/introduction
* https://docs.strapi.io/dev-docs/backend-customization
* https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/
| denis_kuria |
1,372,289 | Journal Entry 1 | Activity: Installed Unity and Platformer microgame Modified the game according to... | 0 | 2023-02-20T06:40:06 | https://dev.to/pavolrajczy/journal-entry-1-345k | Activity:
- Installed Unity and Platformer microgame
- Modified the game according to tutorials
- Downloaded and imported 2D Game Kit
Notes:
I worked in Unity before, so I was overconfident and paid for it in wasted time. I started doing things without realizing there was a tutorial, and subsequently I had to rework a few things. I spent the most time on the confetti because I forgot to assign the confetti to the box. I also need to remember to carefully read the instructions, because I forgot to install a few things at the beginning. I also procrastinated a lot with uploading it.
Invested hours:
Installations: 1 hour 30 min
Tutorials: 1.5 hours
Outcome:
Build (uploaded to the itch.io page as a .exe file)
| pavolrajczy | |
1,898,215 | Project Homelab: Kubernetes the Complex Way | There’s a joke that Kelsey Hightower wrote Kubernetes The Hard Way because there isn’t an easy way.... | 0 | 2024-07-09T14:18:29 | https://burnskp.dev/2024/06/23/project-homelab-kubernetes-the-complex-way/ | projecthomelab, devops, homelab, kubernetes | ---
title: Project Homelab: Kubernetes the Complex Way
published: true
date: 2024-06-23 19:46:02 UTC
tags: projecthomelab,devops,homelab,kubernetes
canonical_url: https://burnskp.dev/2024/06/23/project-homelab-kubernetes-the-complex-way/
---
There’s a joke that Kelsey Hightower wrote Kubernetes The Hard Way because there isn’t an easy way. While this may be less true today, Kubernetes can still be incredibly complex. Kubernetes is essentially a platform for creating platforms. The core functionality doesn’t require a large amount of understanding. However, Kubernetes brings a lot of additional components. Its pods and services are compartmentalized in a microservices format. It doesn’t hide the operational aspects around deploying and maintaining network-accessible applications. These pieces, while not considered ‘core Kubernetes,’ are essential to understand in order to use it effectively.
## Project Homelab
There are many good resources available for learning the basics of Kubernetes, such as the syntax, setting up pods and networks, and tying everything together. However, there’s much more to learn. I am redoing my homelab and will be starting a series of blog posts on the setup process. This includes:
- Creating a reusable and shareable dev environment
- Automating every aspect of the cluster using tools like Ansible, ArgoCD, and Terraform
- Implementing Build pipelines using GitHub actions
- Baseline services, such as service mesh, secret store, certificate management, logging, observability
- Securing Kubernetes
I’ll be using a “bare metal” setup with VMs on top of Proxmox. There are different concerns when using Kubernetes on bare metal compared to the cloud. In the cloud, you can benefit from a managed Kubernetes cluster and offload state to cloud-based databases, allowing you to focus on stateless applications. On bare metal, you may need to run databases within the cluster.
## Hardware
I’m trying to keep my setup small. For my initial setup, I’ll be using three machines:
- A Raspberry Pi 4 8GB to run DHCP, DNS, installation images, and authentication services
- An Intel box with 64GB of RAM that will be used as a development/workstation machine
- An AMD box with 128GB of RAM that will run Proxmox for the Kubernetes VMs
## What’s next
The initial setup requires quite a bit of preparatory work, sometimes called “yak shaving.” I’m not looking to create a bespoke Kubernetes cluster by hand. Instead, I will be automating as much as possible and providing a good developer experience for working with the automation.
In my next post, I’ll discuss setting up a Git repository for Ansible, including devcontainers, GitHub Actions, and pre-commit hooks. This will be used to manage the configuration of the Raspberry Pi and workstation nodes.
title: Professional Growth and a Culture of Learning
date: 2024-07-12 17:09:06 UTC
tags: culture,braziliandevs,agile,startup
canonical_url: https://dev.to/umovme/crescimento-profissional-e-a-cultura-de-aprendizagem-k4j
---
_Context: text written in 2022_
In 2009 I began a new journey: helping transform a company based on projects across several niches into a product company, building a product (which became a platform) to serve those niches.
The journey is never simple when we shift focus from services to products. Accountability is no longer based simply on billed hours and support for what is delivered. It is no longer simply about growing the client base with new projects. Working with products involves finding the famous product-market fit. We gain the opportunity to think harder about certain things, for example, learning how to [position the product in the market](https://youtu.be/VgwWtoqgvXI), and to stop simply "taking orders" and bending each client's need into the contracted project.
I understood at the time that I needed to build a product profile, since my focus over the previous 13 years had been services, projects, and consulting. I also understood that I would have no chance of doing this without a [team in constant learning](https://blog.danielwildt.com/pt-criando-um-ambiente-de-aprendizado/).
Building projects driven purely by a client's interest still demands prioritization and an understanding of how to slice deliveries, but whoever defines the target is inside the project. That is not the case with product thinking. Over time we discover important customers we can interview to learn more and more about the market they live in, customers who inspire us in the game of delivering value. At the end of the day, though, learning about what we do and what we want to improve is our own responsibility.
With product thinking, building the product's future requires learning. We constantly live among certainties, assumptions, and doubts that are continually challenged and shape our [coexistence with uncertainty](https://blog.danielwildt.com/incerteza-e-melhoria/). It requires a team constantly questioning how it can do what it does better. That involves understanding this process both technically and in the processes for evolving the product, which connects to market focus and business understanding.
I decided to list here the activities I considered important from 2009 through 2022 in building learning cultures, which I consider one of my skills, also pointing out practices tied to technical evolution and learning that matter in this product game.
## In April 2009, DevOps and Lean Startup were being born
I knew that failing as fast as possible and learning even faster was one of the only opportunities I had in this process of evolving teams and products. So was creating a support structure for improvement, with retrospectives and technical spaces for the team to challenge itself.
From my study of [eXtreme Programming](https://www.slideshare.net/dwildt/conhecendo-o-extreme-programming/dwildt/conhecendo-o-extreme-programming), I had already learned that test automation would help the team evolve continuously, providing the courage needed to change code knowing it will break if something is not ok.
Through Lean thinking, I knew I needed to improve, to find easier ways to produce software, and to ensure that a team could maintain it and teach new people to do the same.
I also knew that [attacking the waste of waiting](https://www.slideshare.net/dwildt/no-espere-192007540) was my big trump card. If I improve hand-off times within the teams, I can multiply their performance many times over.
By April 2009, Eric Ries had already coined the term Lean Startup, combining Lean Thinking + eXtreme Programming + Customer Discovery/Customer Development.
In 2009 one talk drew a lot of attention, from a company I had been following since the previous year. Flickr had a structure on its code blog announcing what was happening with code updates and who was making them.

I had been a fan for a long time and kept some images from that era (I confess they were in a forgotten folder, but I had not forgotten them).

That talk, presented in June at Velocity 2009, brought to the software development mainstream a vision of our ability to deliver software to production frequently.
{% embed https://youtu.be/LdOe18KhtT4 %}
I find it interesting that many revolutions in software engineering happened at companies that went through upheavals after success. Flickr went through several journeys, from the Yahoo acquisition, through hard times and layoffs, to being, these days, a product in search of meaning. I personally like the service and have been a subscriber for some time.
These movements inspired many companies, and here in Brazil I can say I was strongly influenced by them. Continuous deployment was considered impossible to achieve.
## Before the first line of code, have principles!
From early on I was aware of some principles I needed to put into practice, some related to moving source code to production and to organizing how we generate value for customers.
I did not have the money to hire a highly experienced team. I needed to build on the existing team, learning new programming languages or expanding the use of the existing ones. The same applied to software engineering practices. I knew what I did not want when building a platform-style product, and I added to that all the lessons from my past maintaining codebases that are 10+ years old today.
- Automated tests and, for those who could manage it, test-driven development. Very early on I started running Coding Dojo sessions, bringing to the teams the rhythm of coding, evolving, and learning how to model test scenarios.
- Trunk Based Development, which very quickly means allowing a codebase to be taken to production whenever the team wants. For that to happen, we need to guarantee that the "awake" code has passing tests and that the "sleeping" code is configured appropriately so it does not affect production behavior. Keep in mind that at the time we did not yet use systems like Git, but even with Git I still operate in the TBD model.
- The system will go down. So we organize so that we know what needs to work and how we can bring it back up.
- The system can evolve without depending on the database. Very early we started applying database refactoring to guarantee I could evolve the data model without depending on a synchronized installation of new code. The same would apply to a rollback of a module.
- Rollback had to be fast, much faster than the installation time. A rollback is just another installation (deploy).
- A code installation is not the same as new features for customers. It was important to have a toggle structure so that one customer could get access to a given feature before other customers, for testing purposes.
- Customers do not get access to a test environment. Testing happens in production. Just as the test suite reached the production environment, customers also validated a feature in production, in their own environment.
Some of these principles took a good few years to enter the team's flow. Even today, in 2022, I still need to explain the benefits of working with TBD to a team that wants to advance and evolve code. Likewise, I still need to explain the values of the Agile Manifesto: the importance of communication, of delivering software, of adapting and staying in close contact with whoever demands new evolutions of a software product. Always remembering that the most important thing is maximizing the work that does not need to be done, that famous simplicity.
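A minimal sketch of the feature-toggle idea described above; all names here are illustrative, not from any specific toggle library:

```javascript
// Minimal feature-toggle sketch: code ships to production "asleep",
// and a per-customer toggle wakes a feature up for early validators.
// Feature and customer names below are made up for illustration.
const toggles = {
  "new-report": new Set(["customer-42"]), // enabled for one customer first
};

function isEnabled(feature, customerId) {
  const enabledFor = toggles[feature];
  return enabledFor !== undefined && enabledFor.has(customerId);
}

console.log(isEnabled("new-report", "customer-42")); // true
console.log(isEnabled("new-report", "customer-7")); // false
```

Real toggle systems add per-environment configuration and gradual rollout on top of this idea, but the core check stays this small.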
## The path to a learning culture?
Learning moments, everywhere. Learning on the job, in practice, as well. That goes from the hiring process to the day-to-day of evolving some product feature.
If I were to summarize the foundation of this process, it is operating on two very important ideas from Nonaka and Takeuchi:
1. Turning tacit knowledge into explicit knowledge.
2. Understanding the new game of product development.
Here are some of the moments I consider important and relevant to this knowledge-building process.
- Coding Dojo.
- Internal training, talks, and conversation circles.
- An infrastructure environment that was not adequate.
- Infrastructure automation and scaling.
- Learning spaces, creating events.
- A hiring process involving the whole team.
- Operating in multimedia, with text, audio (a podcast back in the trevisan days), and video.
## What changed in all this by 2022?
A lot, when it comes to the problem of developing people. We live in an increasingly unequal world. If we want a different world, we need to be intentional.
That includes removing barriers to meeting people. Many companies still talk about strategies for hiring people only from the best universities in Brazil, people enrolled in universities, and forget the multitude of excellent people in technical courses and people working hands-on who are not part of formal education.
Companies need to connect with technology communities and support them wherever they can, sponsoring events to bring people together, creating spaces, and offering their people's time for talks and mentoring. The opportunities are many.
And do not try to be the best or the only initiative.
No matter how many initiatives are created around learning and developing people, none of them will be enough.
In 2010 the problem was already big. In 2022 the problem has grown in size. And now it affects companies that did not even think this would be their problem back in 2010.
What I still believe can help us, as the Brazilian technology industry, is operating in a learning culture, inside and outside companies, reaching educational institutions, communities, and other companies.
-- Daniel Wildt
title: Callbacks in Javascript
date: 2024-07-10 18:58:30 UTC
canonical_url: https://dev.to/jpbp/callbacks-in-javascript-10i2
---
Callbacks are a fundamental concept in JavaScript and are commonly used in asynchronous programming. In this article, we'll explore what callbacks are, why they were the main approach to async operations before async/await existed in JavaScript, and why they are not much used in modern projects, even though the concept remains important for working on legacy projects. We'll also give an example of a use case in modern JavaScript with the React Query mutation.
## What are callbacks?
A callback is simply a function that is passed as an argument to another function, thats it. The function that receives the callback is responsible for calling it when the appropriate time comes.
For example, consider the following code snippet:
```javascript
function greet(name, callback) {
console.log(`Hello, ${name}!`);
callback();
}
function sayGoodbye() {
console.log("Goodbye!");
}
greet("Alice", sayGoodbye);
```
In this example, the greet function takes a name argument and a callback argument. It logs a greeting to the console with the name, and then calls the callback function. The sayGoodbye function is passed as the callback argument and logs "Goodbye!" to the console.
When we run this code, it outputs:
```
Hello, Alice!
Goodbye!
```
Here, the sayGoodbye function is executed as a callback to the greet function.
## Why callbacks were the main approach to async operations when async/await didn't exist in JavaScript
Before the introduction of Promises and async/await in modern JavaScript, callbacks were the main approach to handling asynchronous operations. This is because JavaScript is a single-threaded language, and blocking operations could cause the entire program to freeze, making it unresponsive.
Callbacks allowed developers to execute asynchronous operations without blocking the main thread. For example, if you needed to make an AJAX request to a server and perform some operation on the result, you could pass a callback function to the XMLHttpRequest object's onreadystatechange event, which would be executed when the response was received.
```javascript
// Don't be scared, this is legacy
// code using the XMLHttpRequest API to make an HTTP request
const xhr = new XMLHttpRequest();
xhr.open('GET', 'https://example.com/data.json');
// Set up a callback function to be
// called when the request state changes
xhr.onreadystatechange = function() {
  // readyState 4 (DONE) means the response has been received
  if (xhr.readyState === 4 && xhr.status === 200) {
    const data = JSON.parse(xhr.responseText);
    // Do something with the
    // JSON data (e.g. log it to the console)
    console.log(data);
  }
};
// Effectively call the HTTP request
xhr.send();
```
## Understanding Callback Hell in JavaScript
One of the main problems with callbacks in JavaScript is that they can lead to code that is difficult to read and maintain, especially when dealing with multiple nested callbacks. This phenomenon is commonly referred to as "callback hell".
Callback hell occurs when you have several asynchronous operations that depend on each other and need to be executed in a specific order. As a result, you end up with deeply nested functions that are difficult to read and debug.
Here's a simple example of callback hell:
```javascript
setTimeout(function() {
console.log('First operation completed');
setTimeout(function() {
console.log('Second operation completed');
setTimeout(function() {
console.log('Third operation completed');
}, 1000);
}, 1000);
}, 1000);
```
In this example, we have three asynchronous operations that need to be executed in a specific order. To achieve this, we've nested three setTimeout functions inside each other. As you can see, the code quickly becomes hard to read and understand.
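Error handling makes this worse: a try/catch cannot reach across an async boundary, so Node.js popularized the error-first convention, where the callback's first argument is an error (or null on success). A small sketch, with the callback invoked synchronously here for brevity:

```javascript
// Error-first callbacks: Node.js-style APIs call the callback with
// (err, result), where err is null on success. Shown synchronously
// here for brevity; real APIs invoke the callback asynchronously.
function divide(a, b, callback) {
  if (b === 0) {
    callback(new Error("Division by zero"));
    return;
  }
  callback(null, a / b);
}

divide(10, 2, function (err, result) {
  if (err) {
    console.error("Failed:", err.message);
    return;
  }
  console.log("Result:", result); // Result: 5
});
```

Checking `err` first in every callback is what keeps failures from being silently swallowed, but it also adds boilerplate at every level of nesting.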
### The solution
One way to solve callback hell is to use Promises, which provide a more structured way to handle asynchronous code. With Promises, you can chain together multiple asynchronous operations and handle errors in a more elegant way. Here's the same example using `async/await`:
```javascript
function wait(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
async function runOperations() {
await wait(1000);
console.log('First operation completed');
await wait(1000);
console.log('Second operation completed');
await wait(1000);
console.log('Third operation completed');
}
runOperations();
```
In this example, the wait function returns a Promise that resolves after a certain number of milliseconds. We use this function with the async/await syntax to execute the asynchronous operations in sequence. As you can see, the code is much cleaner and easier to understand compared to the nested callback approach.
## Why it is not much used in modern projects, but it is important to understand the concept to work on legacy projects
In modern JavaScript, Promises and async/await have largely replaced callbacks as the preferred way of handling asynchronous operations. Promises provide a more structured way to handle asynchronous code and make it easier to avoid callback hell, a situation where callback functions are nested inside each other, making the code difficult to read and maintain.
Async/await syntax provides a more concise and readable way to write asynchronous code, and it can be easier to handle errors and control the flow of execution. However, understanding callbacks is still important because you may encounter them in some situations, especially when working on legacy codebases or integrating with third-party libraries that use callback-style APIs. Additionally, callbacks are still commonly used in non-async code such as some array methods, where they provide a simple and flexible way to customize the behavior of the method.
Here is an example:
```javascript
const numbers = [1, 2, 3];
const evenNumbers = numbers.filter(function(num) {
return num % 2 === 0;
});
console.log(evenNumbers); // [2]
```
## Modern Usage of Callbacks
Callbacks are still a relevant concept, even in modern programming practices. They can be particularly useful when working with libraries or frameworks that use callback-based APIs.
---
title: @Environment variables
published: true
date: 2024-07-08 19:25:27 UTC
tags: Environment,SwiftUI
canonical_url: https://wesleydegroot.nl/blog/@Environment
---
SwiftUI provides a way to pass data down the view hierarchy using `@Environment` variables. These variables are environment-dependent and can be accessed from any child view. They are useful for sharing common data or settings across the app, such as color schemes, locale, or accessibility settings.
When you create an `@Environment` variable, SwiftUI automatically manages its value for you. This means that when the environment changes, the view will automatically update to reflect the new value.
## Creating an @Environment Variable
To create an `@Environment` variable, you can use one of the built-in environment keys provided by SwiftUI, such as `.colorScheme`, `.locale`, or `.accessibilityEnabled`. For example, to access the color scheme environment variable, you can use the following code:
```swift
struct ContentView: View {
@Environment(\.colorScheme) var colorScheme
var body: some View {
Text("Hello, World!")
.foregroundColor(colorScheme == .dark ? .white : .black)
}
}
```
In this example, the `colorScheme` variable is an `@Environment` variable that holds the current color scheme of the app. The text color is set based on the color scheme (dark or light) using a ternary operator.
## Custom @Environment Variables
You can also create custom `@Environment` variables to pass custom data or settings down the view hierarchy. To define a custom `@Environment` variable, you need to create a new type that conforms to the `EnvironmentKey` protocol and extend `EnvironmentValues` to introduce the value.
Here's an example of how you can define a custom `@Environment` variable:
```swift
private struct MyCustomKey: EnvironmentKey {
// Create a default value for the custom environment key
static let defaultValue: String = "Default Value"
}
extension EnvironmentValues {
/// Define a custom environment variable
var myCustomValue: String {
get { self[MyCustomKey.self] }
set { self[MyCustomKey.self] = newValue }
}
}
```
In this example, we define a custom key `MyCustomKey` that holds a default string value. We then extend `EnvironmentValues` to introduce the custom value `myCustomValue` using the key.
You can now use the custom `@Environment` variable in your views like this:
```swift
/// Pass custom environment value
struct ContentView: View {
var body: some View {
VStack {
SecondView()
.environment(\.myCustomValue, "hello there")
}
}
}
/// Access custom environment value
struct SecondView: View {
@Environment(\.myCustomValue) var customValue
var body: some View {
Text(customValue)
}
}
```
In this example, the `customValue` variable is an `@Environment` variable that holds the custom value defined earlier. The text view displays the value of the custom environment variable.
## iOS 18 and later (easier)
```swift
/// Setup environment key
extension EnvironmentValues {
@Entry var myCustomValue: String = "Default Value"
}
/// Pass custom environment value
struct ContentView: View {
var body: some View {
VStack {
SecondView()
.environment(\.myCustomValue, "hello there")
}
}
}
/// Access custom environment value
struct SecondView: View {
@Environment(\.myCustomValue) var customValue
var body: some View {
Text(customValue)
}
}
```
## Caveats
When using `@Environment` variables, keep in mind that they are not meant for sharing complex data structures or models across the view hierarchy. For more complex data sharing, consider using `@EnvironmentObject` or other data sharing techniques.
## Wrap up
`@Environment` variables are a powerful tool in SwiftUI for sharing common data or settings across the app. By creating custom `@Environment` variables, you can extend this functionality to suit your app's specific needs. Just remember to use them judiciously and follow best practices to avoid potential pitfalls.
Resources:
- [SwiftUI Environment](https://developer.apple.com/documentation/swiftui/environment)
title: Our weekly API rundown: Bin Lookup, Currency And Cryptocurrency Conversion and Bad Word Filter
date: 2024-07-08 08:26:00 UTC
tags: api,cryptocurrency,forex,json
canonical_url: https://dev.to/worldindata/our-weekly-api-rundown-bin-lookup-currency-and-cryptocurrency-conversion-and-bad-word-filter-2i4e
---
This week we will introduce three new APIs to you. We have chosen a diverse range of data topics for this round-up of APIs. We will closely explore the purpose, industry, and client types of these APIs. If you want to know more, the Marketplace for Data and APIs of [Worldindata](https://www.worldindata.com/) provides additional details on the APIs. Let's start now!
## Bin Lookup API developed by Neutrino
[The BIN Lookup API](https://www.worldindata.com/api/Neutrino-BIN-lookup-api) offered by Neutrino is an essential tool for various sectors that rely on the authentication and verification of credit card information. It is widely used in industries like fraud protection, e-commerce, online shopping, and payment verification. The API provides detailed information about the credit card issuer, including the card type, bank name, country of origin, and the card brand. The BIN lookup API is a powerful tool that is used by many companies to safeguard against fraudulent transactions and protect their customers' financial data.
The primary purpose of the BIN Lookup API is to provide detailed information about the credit card issuer that can be used to build fraud protection systems and analyze payment data. By using the BIN lookup API, companies can identify potential fraudulent transactions and take steps to prevent them. Additionally, the API can be used to analyze payment data and identify trends in customer behavior, which can help companies optimize their sales and marketing strategies. The data provided by the BIN Lookup API can also be used to validate credit card information and ensure that transactions are processed accurately.
The BIN Lookup API is used by a diverse range of clients, including e-commerce and online shopping platforms, fraud analysts, sales analysts, and more. E-commerce platforms and online shopping websites use the API to verify credit card information and prevent fraudulent transactions. Fraud analysts and security professionals use the API to identify potential security threats and build fraud protection systems. Sales analysts and marketing professionals use the API to analyze customer behavior and optimize sales strategies. The versatility of the BIN Lookup API makes it an essential tool for many different industries that rely on accurate credit card authentication and verification.
> **Specs:**
Format: JSON
Method: GET
Endpoint: /bin-lookup/
Filters: bin-number and customer-ip
www.neutrinoapi.com
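As a rough illustration of how the documented endpoint and filters could be assembled into a request URL; the base URL below is a placeholder for illustration, not the provider's real host:

```javascript
// Hypothetical helper that builds a BIN lookup request URL from the
// documented filters (bin-number, customer-ip). The base URL is an
// assumption; consult the provider's documentation for the real host.
function buildBinLookupUrl(baseUrl, filters) {
  const url = new URL("/bin-lookup/", baseUrl);
  for (const [key, value] of Object.entries(filters)) {
    url.searchParams.set(key, value);
  }
  return url.toString();
}

const requestUrl = buildBinLookupUrl("https://api.example.com", {
  "bin-number": "48992794",
  "customer-ip": "203.0.113.10",
});
console.log(requestUrl);
// https://api.example.com/bin-lookup/?bin-number=48992794&customer-ip=203.0.113.10
```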
## Currency And Cryptocurrency Conversion API created by Neutrino
[The Currency and Cryptocurrency Conversion API](https://www.worldindata.com/api/Neutrino-currency-and-cryptocurrency-conversion-api) provided by Neutrino is an essential tool for clients from a wide range of industries, including investment and forex platforms, crypto platforms, blockchain services, news websites, business platforms, exchange rate analysis platforms, business analysts, investment analysis and forecasting, and more. These clients use the data to make informed decisions about currency and cryptocurrency transactions, as well as to analyze exchange rates and trends in the financial markets.
The sectors that use the Currency and Cryptocurrency Conversion API are mainly those involved in investment, forex, crypto, and business. Investment platforms use the API to convert currencies and cryptocurrencies to make transactions in the financial markets. Forex platforms use the API to analyze exchange rates and make informed decisions about currency trades. Crypto platforms use the API to convert between cryptocurrencies and to track the value of different cryptocurrencies over time. Business platforms use the API to convert between currencies for international transactions, and to track the value of their assets and liabilities in different currencies.
The main purpose of the Currency and Cryptocurrency Conversion API is to provide accurate and up-to-date conversion rates between different currencies, cryptocurrencies, and various other units. The API offers real-time data on exchange rates and conversions, making it a valuable tool for clients who need to make quick and informed decisions about financial transactions. The API can also be used to track historical exchange rates and to analyze trends in the financial markets. Overall, the Currency and Cryptocurrency Conversion API is a versatile tool that provides valuable data for clients across a wide range of industries.
> **Specs:**
Format: JSON
Method: GET
Endpoint: /convert/
Data: Live Data
Filters: from-value, from-type and to-type
www.neutrinoapi.com
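To make the conversion idea concrete, here is a toy sketch of what such an API computes: a rate lookup followed by a multiplication. The rate figures are invented for illustration; the real API serves live data:

```javascript
// Toy conversion: look up a from->to rate and multiply.
// Rate values below are made-up examples, not live data.
const rates = { "BTC-USD": 64000, "EUR-USD": 1.08 };

function convert(fromValue, fromType, toType, rateTable) {
  const rate = rateTable[`${fromType}-${toType}`];
  if (rate === undefined) {
    throw new Error(`No rate for ${fromType}-${toType}`);
  }
  return fromValue * rate;
}

console.log(convert(0.5, "BTC", "USD", rates)); // 32000
```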
## Bad Word Filter API by Neutrino
[The Bad Word Filter API](https://www.worldindata.com/api/Neutrino-bad-word-filter-api) offered by Neutrino is an essential tool for sectors involved in writing, editing, proofreading, SEO, and word processing. The API provides a comprehensive list of bad words, swear words, and profanity that can be used to detect inappropriate language in a given text. This is particularly important for companies and individuals who need to maintain a professional image and avoid offending their audiences. The Bad Word Filter API is a powerful tool that is used by many businesses to improve the quality of their content and protect their reputation.
The main purpose of the Bad Word Filter API is to detect bad words, swear words, and profanity in a given text. The API provides a comprehensive list of inappropriate words that can be used to scan text and identify any language that may be deemed inappropriate or offensive. This is particularly important for companies and individuals who need to maintain a professional image and avoid offending their audiences. The Bad Word Filter API is a valuable tool that helps businesses to improve the quality of their content and protect their reputation.
The Bad Word Filter API is used by a diverse range of clients, including writing and editing platforms, SEO analysts, proofreaders, and more. Writing and editing platforms use the API to scan their content for inappropriate language and improve the quality of their writing. SEO analysts use the API to ensure that their content is search engine friendly and free from inappropriate language. Proofreaders use the API to check for spelling and grammatical errors, as well as inappropriate language. Overall, the Bad Word Filter API is a valuable tool for any business or individual who wants to improve the quality of their content and maintain a professional image.
> **Specs:**
Format: JSON
Method: GET
Endpoint: /bad-word-filter/
Filters: content, censor-character and catalog
www.neutrinoapi.com
title: GIT Cheatsheet
date: 2024-07-11 04:30:25 UTC
tags: git,beginners,cheatsheet,learning
canonical_url: https://dev.to/thrtn85dev/git-cheatsheet-429o
---
Here's a git cheat-sheet that covers some of the most commonly used git commands:
## Configuration
```
git config --global user.name "Your Name"
```
Sets your name for all git repositories on your computer
```
git config --global user.email "youremail@example.com"
```
Sets your email address for all git repositories on your computer
```
git config --global color.ui auto
```
Enables colored output in the terminal
## Creating a repository
```
git init
```
Initializes a new git repository in the current directory
```
git clone <repository_url>
```
Clones an existing repository from a remote server to your local machine
## Staging changes
```
git add <file>
```
: Adds a file to the staging area
```
git add .
```
Adds all changed files in the current directory to the staging area
## Committing changes
```
git commit -m "Commit message"
```
Commits the staged changes with a message describing the changes made
## Branching
```
git branch
```
Lists all local branches
```
git branch <branch_name>
```
Creates a new branch
```
git checkout <branch_name>
```
Switches to a different branch
```
git merge <branch_name>
```
Merges changes from the specified branch into the current branch
## Remote repositories
```
git remote add <name> <repository_url>
```
Adds a new remote repository
```
git push <remote> <branch>
```
Pushes changes to the specified branch in the remote repository
```
git pull <remote> <branch>
```
Pulls changes from the specified branch in the remote repository
## Viewing information
```
git status
```
Shows the status of the current branch and any changed files
```
git log
```
Displays a log of all commits on the current branch
```
git diff
```
Shows the differences between the working directory and the staging area
```
git diff --staged
```
Shows the differences between the staging area and the last commit
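Putting several of these commands together, here is a minimal end-to-end workflow in a throwaway directory (assumes git is installed; the file name and commit message are just examples):

```shell
set -e
repo=$(mktemp -d)             # throwaway directory for the demo repository
cd "$repo"
git init -q                   # create an empty repository
git config user.name "Demo User"
git config user.email "demo@example.com"
echo "hello" > notes.txt
git add notes.txt             # stage the new file
git status --short            # shows the staged addition
git commit -q -m "Add notes"  # commit the staged change
git log --oneline             # one line per commit
```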
title: Sending Emails in Node.js Using Nodemailer
date: 2024-07-12 12:01:43 UTC
tags: node,webdev,javascript,nodemailer
canonical_url: https://dev.to/faizan711/sending-emails-in-nodejs-using-nodemailer-474
---
In today's article, I will explain how to send emails from your Node.js server using a library named **"Nodemailer"**.
> But before we begin, it’s important to note that a foundational understanding of creating APIs in a Node.js server, whether with or without Express.js, is assumed. If you’re unfamiliar with these concepts, I recommend exploring resources on creating an Express.js server and APIs first. You’ll find ample tutorials, YouTube videos, and articles online to build a strong foundation. Once you’re comfortable with the basics you can follow along with this article. So for the people who already know these things, let’s dive right into the heart of the matter.
## Set up your Express.js Server
Open your project directory and run the commands below to set up your Node + Express server. You can use either the npm or yarn package manager; I am using npm here.
```
npm init -y
```
This will initialize your Node.js project.
```
npm i express cors
```
This command will install Express and Cors for your server.
Now create an _index.js_ file which will be your main file and paste the below code:
```
const express = require('express');
const cors = require('cors');
const app = express();
const port = 3000;
// Use the cors middleware to enable CORS for all routes
app.use(cors());
// Use the express.json() middleware to parse JSON data from requests
app.use(express.json());
// Define a sample route
app.get('/', (req, res) => {
res.send('Hello, Express server is up and running!');
});
// Start the server
app.listen(port, () => {
console.log(`Server is running on port ${port}`);
});
```
Then run:
```
node index.js
```
This will start your server on port 3000. You can check if your server is running by going to **http://localhost:3000**
## Installing and Using Nodemailer
Now that you have your Express server up and running, it's time to install Nodemailer, which we will use to send emails. Run the command below:
```
npm install nodemailer
```
This will install nodemailer as a dependency in your Node.js project. Now require it in your index.js in a const variable, below the `const express` line, as shown below:
```
const nodemailer = require('nodemailer')
```
### Testing with a Test Account
Now that you have installed nodemailer, we will try sending emails with a test account first before using our Gmail account. For that, you have to create an API endpoint in your server and define a few things:
```
app.post('/testingroute', async (req, res) => {
});
```
Inside the route, you have to define 2 things: a **transporter** and a **test account**, for which nodemailer provides functions:
```
// This will create a test account to send email
let testAccount = await nodemailer.createTestAccount();
// This is a transporter, it is required to send emails in nodemailer
let transporter = nodemailer.createTransport({
host: "smtp.ethereal.email",
port: 587,
secure: false,
auth: {
user: testAccount.user,
pass: testAccount.pass,
}
});
```
Now you have to define the message that will go to your email. I have given a sample of how to do it; feel free to edit it as you like.
```
let message = {
from: '"Fred Foo 👻" <foo@example.com>', // sender address
to: "bar@example.com", // list of receivers
subject: "Hello ✔", // Subject line
text: "Hello world?", // plain text body
html: "<b>Hello world?</b>",
};
```
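The message object above is plain data, so it is easy to build programmatically. A minimal sketch (the `buildMessage` helper and all addresses are hypothetical, not part of Nodemailer's API):

```javascript
// Hypothetical helper: assemble a Nodemailer-style message object from parameters.
function buildMessage(senderName, senderEmail, recipients, subject, bodyText) {
  return {
    from: `"${senderName}" <${senderEmail}>`, // sender address
    to: recipients.join(", "),                // list of receivers, comma-separated
    subject,                                  // subject line
    text: bodyText,                           // plain text body
    html: `<b>${bodyText}</b>`,               // simple HTML body
  };
}

const msg = buildMessage("Fred Foo", "foo@example.com", ["bar@example.com"], "Hello", "Hello world?");
console.log(msg.from); // "Fred Foo" <foo@example.com>
```

Any object with these fields can then be passed to `transporter.sendMail(...)`.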
Now the only thing left is to send the email. The **transporter** is used here, and the full API endpoint will look like below:
```
app.post('/testingroute', async (req,res) => {
let testAccount = await nodemailer.createTestAccount();
const transporter = nodemailer.createTransport({
host: "smtp.ethereal.email",
port: 587,
secure: false,
auth: {
user: testAccount.user,
pass: testAccount.pass,
}
});
let message = {
from: '"Fred Foo 👻" <foo@example.com>', // sender address
to: "bar@example.com", // list of receivers
subject: "Hello ✔", // Subject line
text: "Hello world?", // plain text body
html: "<b>Hello world?</b>",
};
transporter.sendMail(message).then((info)=> {
return res.status(201)
.json({
message: "you should receive an email!",
info: info.messageId,
preview: nodemailer.getTestMessageUrl(info)
});
}).catch( error => {
return res.status(500).json({error});
})
});
```
To test this API endpoint, I am using Postman; you can use whatever you want. Below is an image for your reference showing how to test it using Postman.

Now go to the URL you get in the preview field of your API response (as in the image above), and you will be able to check the mail received in the test account, as below:

Now that you have seen how nodemailer works and how to test it using a test account, it's time to put our Gmail account and send real emails to users from our Node.js server.
### Sending Emails using Gmail
For sending emails using Gmail we have to set up a new route; let's call it signup. Here I will use a library called **Mailgen** to create professional-looking emails, which you can install using the command below:
```
npm install mailgen
```
and require it as we did for Nodemailer,
```
const Mailgen = require('mailgen');
```
Now set up your new route signup,
```
app.post('/signup', async(req,res) => {
})
```
Here you have to define a few things before you can send emails, so read this carefully:
```
let config = {
service: 'gmail',
auth : {
user: '', //please put a gmail id
pass: '' //please create an app password for gmail id and put here
}
}
let transporter = nodemailer.createTransport(config);
let MailGenerator = new Mailgen({
theme: "default",
product: {
name: 'Mailgen',
link: 'https://mailgen.js/'
}
});
```
config is where you have to put your Gmail account and password. If you don’t want to put your password, you can go to your Gmail account settings, generate an app password, and put that here instead. Check out this article to do so: https://support.google.com/mail/answer/185833?hl=en
transporter is the same as in testing, except that here it uses Gmail's SMTP server.
and the new thing is MailGenerator, it is a great library to create and send professional-looking emails. You can learn more about it [here](https://www.npmjs.com/package/mailgen).
Now you have to create a template response for your email to be sent to the user and give it to MailGenerator and the rest of the part is the same as in the testing route.
```
let response = {
body: {
name : "Daily Tuition",
intro: "Your bill has arrived!",
table : {
data : [
{
item : "Nodemailer Stack Book",
description: "A Backend application",
price : "$10.99",
}
]
},
outro: "Looking forward to do more business"
}
}
let mail = MailGenerator.generate(response)
let message = {
    from : '"Your Name" <youremail@gmail.com>', // give your email address
    to : "recipient@gmail.com", // give a recipient email id
    subject: "Place Order",
    html: mail
}
transporter.sendMail(message).then(() => {
return res.status(201).json({
msg: "you should receive an email"
})
}).catch(error => {
return res.status(500).json({ error })
})
```
So everything is done. Now is the time to test your API endpoint as we did with the test account. If you have followed the article carefully, your users will receive an email as below:

## Conclusion
In this article, we delved into the world of email communication within the Node.js environment using the powerful Nodemailer library. We embarked on a journey that covered the essentials of setting up Nodemailer, crafting and sending emails, test accounts, and even exploring advanced configuration options with our own Gmail account.
As developers, we understand the significance of effective communication in today’s digital landscape. Leveraging Nodemailer empowers us to seamlessly integrate email capabilities into our applications, enabling us to reach users with essential information, notifications, and personalized content.
---
> Thank you for reading! If you have any feedback or notice any mistakes, please feel free to leave a comment below. I’m always looking to improve my writing and value any suggestions you may have. If you’re interested in working together or have any further questions, please don’t hesitate to reach out to me at fa1319673@gmail.com.
*Author: faizan711*

---

**Simple steps on how to create a Windows 11 Virtual machine that is highly available, with a free tier azure account** (sethgiddy, 2024-07-09)
https://dev.to/sethgiddy/simple-steps-on-how-to-create-a-windows-11-virtual-machine-that-is-highly-available-with-a-free-tier-azure-account-2f33

In this short post I will explore the creation of a highly available Windows 11 virtual machine. Below are the steps for creating a Windows 11 virtual machine. High availability refers to the ability of a system or application to remain operational and accessible even in the face of disruptions or failures. This is achieved through a combination of redundancy, failover mechanisms, and automatic recovery processes. Any one, or a combination, of the following components will ensure high availability of Azure cloud resources, to mention a few:
- Availability Sets
- Load Balancing
- Virtual Machine Scale Sets
- Backup and Restore
I will discuss these concepts in a different post.
PART A
- STEP 1. This involves creating a free-tier account on the Azure portal (https://azure.microsoft.com/en-gb/free). The free-tier Azure account comes with a $200 subscription credit from Azure.
- STEP 2. Create the virtual machine. Search for the resource to be created (Virtual machine) from the search box on the Azure portal. Another way to get to the resources is by clicking All services.
- STEP 3. Select the correct subscription (free tier in this article). An Azure subscription is a logical container used to provision related business or technical resources in Azure. Choose an existing resource group or create a new one; a resource group is a container that holds related resources for an Azure solution. Create a name for the virtual machine. Select the number of availability zones desired; you can select up to 3 availability zones, and this process will create multiple virtual machines.

- STEP 4. Input all other necessary details required on the Basics tab. The image for this write-up is Windows 11, and the memory size is the free-tier option. Complete the Administrator account section. Select SSH public key for the authentication type. Create a username for the virtual machine. For the public inbound ports, select Allow selected ports, and select RDP for the inbound ports.

- STEP 5. Click the Next button through the Disks, Networking, Management, Monitoring, Advanced, and Tags tabs; review as desired and add resources if needed. Click Review + create, and once validation passes, click Create.

- STEP 6. The progress of the creation of the virtual machine in 2 availability zones is shown below.

- STEP 7. The newly created virtual machine will appear just like so. Click Go to resource to access the virtual machine.

- STEP 8. Click the Connect button and click Connect from the drop-down. I will be using RDP to connect to the virtual machine from my local machine. Click Download RDP file after the native RDP is configured. Enter the admin name and password to connect.

- STEP 9. Below is the virtual machine.

- STEP 10. After connecting to the virtual machine from the local machine, we have to stop/delete the virtual machine depending on the purpose of usage and other factors. Stopping the virtual machine will deallocate the resource.

In this write-up I used just one component to make my resource highly available, which is creating my virtual machine in 2 separate availability zones. I will be discussing the components of high availability in another post.
Lastly, always remember to delete resources you are not currently using.
This is not an exhaustive guide to creating a highly available resource, i.e., a virtual machine that you connect to using RDP.
Thank you.
*Author: sethgiddy*

---

**Developer Activity and Collaboration Analysis with Airbyte Quickstarts ft. Dagster, BigQuery, Google Colab, dbt and Terraform** (btkcodedev, 2024-07-11; tags: programming, airbyte, tutorial, terraform)
https://dev.to/btkcodedev/developer-activity-and-collaboration-analysis-with-airbyte-quickstarts-ft-dagster-bigquery-google-colab-dbt-and-terraform-4184

**_Airbyte can be used as a wonderful tool in order to leverage the power of data with useful transformations._**
**_This transformed data can then be used for training AI models (examples at the end)._**
In this tutorial, the GitHub source API is used as the source, and the data is transformed to surface trends in developer activity, which can then be fed to an AI model for training purposes to enhance its prediction capabilities.
**I've made a full code walk-through at [colab reference](https://colab.research.google.com/drive/14U7NYK4dy5fBN3891Tbkl3SJYEqxkMYr?usp=sharing)**
You can either download it as an .ipynb file and run it with local Jupyter, or run it step by step from your local command line.
(If hyperlink is broken, try: https://colab.research.google.com/drive/14U7NYK4dy5fBN3891Tbkl3SJYEqxkMYr?usp=sharing)
The initial part of setting up Airbyte for pulling data from the GitHub source to BigQuery, along with the SQL transformations, is already described precisely in the [quickstarts directory](https://github.com/airbytehq/quickstarts/blob/main/developer_productivity_analytics_github/README.md)
<u>**Architecture**</u>

<u>Explanation:</u>
Tech stacks: Dagster, dbt, Airbyte, GitHub API, BigQuery, Terraform
_We are intended to pull data from GitHub source via Airbyte User Interface towards BigQuery dataset, The Airbyte User Interface is automated via Terraform Provider._
_After the dataset creation, data build tool (dbt) is used for transforming data with SQL queries for various metric findings viz. average time per PR, mean of total commits etc..._
## Part 1.1: Setting Up the Data Pipeline
Screenshots are attached in the Colab notebook.
_Airbyte and GitHub API:_
The GitHub source connector requires three credentials:
1. Repository name
2. GitHub personal access token
3. Workspace ID
_Airbyte and BigQuery:_
The BigQuery destination connector requires three credentials:
1. IAM & Admin service account JSON key
2. Google cloud project ID
3. BigQuery Dataset ID
Ref: [Configuration steps](https://github.com/airbytehq/quickstarts/blob/8268d1b01ad2f8cfcff0a75c0bd4c0c9a45d197d/developer_productivity_analytics_github/README.md?plain=1#L120)
In case you are wondering about behind the scenes, refer to [GitHub](https://github.com/airbytehq/quickstarts/blob/main/developer_productivity_analytics_github/infra/airbyte/main.tf), where you could see the sync between GitHub and BigQuery

Once the Terraform jobs finish, the Airbyte UI is set up with all the provided configuration, and the streams are ready to be pulled.
You can find the reference for the number of streams on GitHub.

After running `terraform apply`, the Airbyte UI is configured with all the setup and the streams are ready to be pulled
## Part 1.2: Transformations with dbt
Set up the environment variables used for the dbt setup; currently there are three:
1. [dbt_service_account_JSON_key](https://github.com/airbytehq/quickstarts/blob/8268d1b01ad2f8cfcff0a75c0bd4c0c9a45d197d/developer_productivity_analytics_github/dbt_project/profiles.yml#L8)
2. [bigquery_project_id](https://github.com/airbytehq/quickstarts/blob/8268d1b01ad2f8cfcff0a75c0bd4c0c9a45d197d/developer_productivity_analytics_github/dbt_project/profiles.yml#L13)
3. [github_source.yml](https://github.com/airbytehq/quickstarts/blob/8268d1b01ad2f8cfcff0a75c0bd4c0c9a45d197d/developer_productivity_analytics_github/dbt_project/models/sources/github_source.yml#L6)
Either set those env variables or hard-code the values in the files for the next steps.
Run `dbt debug` to confirm the setup.
The schema for table population for each stream could be seen at [GitHub](https://github.com/airbytehq/quickstarts/tree/8268d1b01ad2f8cfcff0a75c0bd4c0c9a45d197d/developer_productivity_analytics_github/dbt_project/models/staging)

After running `dbt run --full-refresh`, you will find the transformed tables populated in the BigQuery dataset.

_The [dbt marts](https://github.com/airbytehq/quickstarts/blob/8268d1b01ad2f8cfcff0a75c0bd4c0c9a45d197d/developer_productivity_analytics_github/dbt_project/models/marts/dev_activity_by_day_of_week_analysis.sql) are very useful where insights are extracted from the pulled data and could be further utilized for AI training purposes (*Provided large dataset)_

## Part 1.3: Orchestration using Dagster
_Dagster and BigQuery:_
Dagster is a modern data orchestrator designed to help you build, test, and monitor your data workflows.
After running `dagster dev`, a local port will be opened for Dagster, where the workflow can be seen and the syncs can be monitored.

## Part 1.4: Future Reference AI Model Creation in Colab ft. Tensorflow
_Export BigQuery data to Colab:_
1. Use the BigQuery connector in Colab to load the desired data from your analysis tables.
2. Preprocess the data by cleaning, filtering, and transforming it for your specific model inputs.
3. Build a TensorFlow model for team dynamics and productivity:
   - Choose a suitable architecture like LSTM or RNN for time series analysis of developer activity, or use scikit-learn for quantitative analysis.
   - Train the model on historical data, using features like time to merge PRs, commits per day, code review frequency, etc.
   - Evaluate the model performance on validation data.
Specific code example is provided at [Colab](https://colab.research.google.com/drive/14U7NYK4dy5fBN3891Tbkl3SJYEqxkMYr#scrollTo=1l9f-FNihyMr)

- Another example of Tensorflow Model:
```
import tensorflow as tf
import numpy as np
import pandas as pd
from google.cloud import bigquery
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
# Load data from BigQuery
client = bigquery.Client()
query = """
SELECT
author.email,
author.time_sec AS time_sec,
committer.email,
committer.time_sec AS time_sec_1,
committer.time_sec - author.time_sec AS time_difference
FROM
`micro-company-task-367016.transformed_data.stg_commits`,
UNNEST(difference) AS difference
WHERE
TIMESTAMP_SECONDS(author.time_sec) BETWEEN TIMESTAMP("2023-01-01") AND TIMESTAMP("2023-12-31")
LIMIT 1000
"""
data = client.query(query).to_dataframe()
# Preprocess data
features = ['time_sec', 'time_sec_1']
target = 'time_difference'
# Drop rows with missing values
data = data.dropna(subset=features + [target])
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(data[features], data[target], test_size=0.2, random_state=42)
# Standardize features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Define TensorFlow model architecture
model = Sequential([
Dense(32, activation='relu', input_shape=(len(features),)),
Dense(16, activation='relu'),
Dense(1) # Output layer, no activation for regression
])
# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')
# Convert y_train to a NumPy array with a compatible dtype
y_train_np = y_train.values.astype('float32')
scaler_y = StandardScaler()
y_train_scaled = scaler_y.fit_transform(y_train_np.reshape(-1, 1))
y_test_scaled = scaler_y.transform(y_test.values.reshape(-1, 1))
# Train the model
model.fit(X_train_scaled, y_train_scaled, epochs=50, batch_size=32, validation_data=(X_test_scaled, y_test_scaled))
# Convert y_test to a NumPy array with a compatible dtype
y_test_np = y_test.values.astype('float32')
# Evaluate the model
mse = model.evaluate(X_test_scaled, y_test_np)
print(f'Mean Squared Error on Test Data: {mse}')
# Generate synthetic new data
new_data = pd.DataFrame({
'time_sec': np.random.rand(10) * 1000, # Adjust the range as needed
'time_sec_1': np.random.rand(10) * 1000 # Adjust the range as needed
})
# Preprocess new data
new_data_scaled = scaler.transform(new_data[features])
# Make predictions on new data
predictions = model.predict(new_data_scaled)
predictions_inverse = scaler_y.inverse_transform(predictions)
# Display predictions
print(predictions_inverse)
# Plot model predictions
plt.plot(predictions, label="Predicted Time Difference")
plt.plot(y_test.values, label="Actual Time Difference")
plt.xlabel("Data Point")
plt.ylabel("Time Difference")
plt.title("Predicted vs. Actual Time Difference")
plt.legend()
plt.show()
```
Results of model:

- Other use cases
> - To predict future trends in team dynamics and productivity.
> - Identify factors that influence collaboration and individual developer performance.
> - Visualise the results using charts, graphs, and network visualisations.
## Final thoughts
350+ source connectors and huge data warehouse destinations are definitely a plus for Airbyte. We can use this power of transformed data as training data for many AI models specifically tailored for prediction, especially in developer behavioural analysis, stock prediction, etc.
Links:
1. Colab Notebook: https://colab.research.google.com/drive/14U7NYK4dy5fBN3891Tbkl3SJYEqxkMYr?usp=sharing
2. Airbyte Quickstarts Repository: https://github.com/airbytehq/quickstarts
3. Airbyte Main Repository:
https://github.com/airbytehq/airbyte | btkcodedev |
1,741,672 | Building Hello World Smart Contracts: Solidity vs. Soroban Rust SDK - A Step-by-Step Guide | How to migrate smart contracts from Ethereum’s Solidity to Soroban Rust In this tutorial, we'll... | 0 | 2024-07-09T13:17:25 | https://dev.to/stellar/building-hello-world-smart-contracts-solidity-vs-soroban-rust-sdk-a-step-by-step-guide-3909 | rust, solidity, ethereum, smartcontract | How to migrate smart contracts from Ethereum’s Solidity to Soroban Rust
In this tutorial, we'll explore the intricacies of two major smart contract programming environments, Ethereum's Solidity and Soroban’s Rust SDK, and why you should consider migrating your smart contracts to Rust.
## Why would a blockchain developer choose Rust over Solidity?
In the blockchain and smart contract realm, Rust is a standout choice for developers, and here's why:
- **Speed & Efficiency**: Rust whizzes through tasks like a sports car in style. It is super fast, rivaling C++ in speed and efficiency, so that your blockchain operations are not just quick but also smart, saving on resources.
- **Type Safety**: Picture Rust's type system as a meticulous inspector, who is watching each bit of your code at compile time. This means fewer errors and a safer environment for your smart contracts.
- **Memory Safety Without the Overhead**: Rust boasts top-shelf memory safety, acting as an invisible shield against vulnerabilities critical in the blockchain world. And it does this leanly without needing a garbage collector, keeping your projects lean and fortified.
- **Conquering Concurrency with Ease**: In blockchain, handling simultaneous transactions is like juggling fireballs. Rust excels in managing multiple operations seamlessly, preventing the common complications seen in other languages. This leads to faster, safer processing of transactions, enhancing the overall performance of your smart contracts.
Rust combines speed, safety, and execution efficiency making it an ideal language for blockchain development where such qualities are demanded. So how does Rust stack up against Solidity?
## What’s the difference between EVM and Soroban?
### What is the EVM?
The Ethereum Virtual Machine (EVM) is a core component of the Ethereum blockchain network. It is a virtual environment that allows for the execution of smart contracts and decentralized applications (dApps). While Ethereum is the primary network utilizing the EVM, other blockchain platforms have adopted or created compatible versions of the EVM. For instance:
- Avalanche has its own virtual machine, the Avalanche Virtual Machine (AVM), but it also supports the EVM through its C-Chain, enabling compatibility with Ethereum-based applications.
- Optimism and Polygon are Layer 2 solutions built on top of the Ethereum blockchain. They use Optimistic Rollups and Polygon's own technology, respectively, but are compatible with the EVM. This means they can run Ethereum smart contracts and dApps.
Each blockchain network can have its own consensus mechanisms, underlying architecture, and protocol implementations. Geth (Go Ethereum, an implementation of an Ethereum node in the Go programming language) is specifically an Ethereum client, and while other networks might draw inspiration or use aspects of Ethereum's technology, they often have distinct core protocols and implementations.
In EVM Land, Solidity is the go-to language for developing smart contracts. Here's a quick rundown for my fellow builders:
- **Object-Oriented Approach**: Just like other OOP languages, Solidity organizes code around data and objects, not just functions and logic.
- **High-Level Language**: It abstracts away from the nitty-gritty of computer hardware, making development smoother and more intuitive.
- **Statically-Typed**: Solidity checks your code for errors and type mismatches at compile time, saving you from a lot of headaches later.
What makes Solidity stand out is its role in powering decentralized transactions and managing blockchain accounts. Plus, if you're comfortable with JavaScript, C++, and Python you'll find Solidity's syntax familiar.
## What is Soroban?
Soroban is a smart contracts platform designed to be sensible, built to scale, batteries-included, and developer-friendly.
While it works great with Stellar, given that it shares the blockchain's values of scale and sensibility, it neither depends on nor requires Stellar at all and can be used by any transaction processor, including other blockchains, L2s, and permissioned ledgers.
Currently, Soroban is available as part of the v20 stable release of the Stellar protocol. The package consists of a smart contracts environment, a Rust SDK, a CLI, and an RPC server. Contracts can be written and tested on developers' local machines or deployed to Testnet.
## Which programming language is used for Stellar smart contracts?
Introduced in 2022, the Soroban Rust SDK is a suite of tools specifically for writing smart contracts on the Soroban platform. Built on Rust, it enables developers to create decentralized finance applications, automated market makers, and tokenized assets, while also leveraging some of Stellar's core functionalities.
## How to Build a Hello World Smart Contract
We will create two "Hello World" contracts: first in Solidity, then using the Soroban Rust SDK.
Here is the video if you want to follow along:
{% embed https://youtu.be/s8Ron2--78Y?feature=shared %}
## Solidity Version
Open the remix-ide in your browser by navigating to: https://remix.ethereum.org/

Click on the “create new file” icon in the "File Explorer" tab:

Type the file name “HelloWorld.sol” and enter the following code into the IDE:
```
// SPDX-License-Identifier: MIT
// compiler version must be greater than or equal to 0.7.0 and less than 0.9.0
pragma solidity >=0.7.0 <0.9.0;
contract HelloWorld {
function hello(string memory to) public pure returns(string memory){
string memory greeting = string(abi.encodePacked("hello ", to));
return greeting;
}
}
```
Let's take a quick intermission to break down the code:
```
// SPDX-License-Identifier: MIT
```
This comment indicates the license under which the code is released (MIT License).
```
pragma solidity >=0.7.0 <0.9.0;
```
This line specifies that this code is compatible with Solidity compiler versions greater than or equal to 0.7.0 and less than 0.9.0. It sets compiler version boundaries to ensure code compatibility and expected behavior.
```
contract HelloWorld {
```
Here, we declare a Solidity contract named "HelloWorld."
```
function hello(string memory to) public pure returns(string memory){
```
This line defines a function named "hello."
- It takes one argument, a string named "to," which represents the name of the person you want to greet.
- The function is marked as "public," which means it can be called externally.
- The "pure" keyword indicates that this function does not modify the contract's state.
```
string memory greeting = string(abi.encodePacked("hello ", to));
```
Inside the "hello" function, a new string variable "greeting" is declared.
- It is constructed by concatenating "hello" with the provided name using the **abi.encodePacked** function.
- The result is stored in the "greeting" variable.
```
return greeting;
```
Finally, the function returns the "greeting" string as the result of the function call.
Now back to our regularly scheduled programming(🥁)
Once the code is in Remix, click the "Solidity Compiler" icon below the “File Explorer” icon.
Then, click “Compile HelloWorld.sol” or simply press `cmd+s`

Once compiled successfully click the icon below “Solidity Compiler” that is “Deploy & Run Transactions”.
Without changing any of the values shown above, just click the “Deploy” button to deploy your smart contract. Once deployed, you will find your smart contract just below, under the “Deployed Contracts” heading.
Click “>” before your contract and you will see a “hello” button below, since our contract has a hello function that returns a string composed of “hello ” plus the value you passed in for the `to` argument.

Define a value for `to` and then click the “hello” button to return the greeting:

Nicely done! Now for the real McCoy!
## Soroban Rust SDK Version
Open the smart contract playground built for Soroban, okashi, in your browser by navigating to: [https://okashi.dev/](https://okashi.dev/)
Start a new project and name it HelloWorld.

Enter the following code into the IDE:
```
#![no_std]
use soroban_sdk::{contract, contractimpl, symbol_short, vec, Env, Symbol, Vec};
#[contract]
pub struct Contract;
#[contractimpl]
impl Contract {
/// Say Hello to someone or something.
/// Returns a length-2 vector/array containing 'Hello' and then the value passed as `to`.
pub fn hello(env: Env, to: Symbol) -> Vec<Symbol> {
vec![&env, symbol_short!("Hello"), to]
}
}
```
Time for another commercial break already!?!
Don't worry, this one is going to help you polish up on your Rust(🥁)
```
#![no_std]
```
- This directive is used at the beginning of the Rust code to specify that the standard library (std) should not be included in the build. In Soroban contracts, the standard library is excluded because it's large and not suitable for deployment on blockchains.
```
use soroban_sdk::{contract, contractimpl, symbol_short, vec, Env, Symbol, Vec};
```
- The **use** keyword is used to import external dependencies or modules into the current Rust code.
- **soroban_sdk**: This is the crate/module that provides the necessary functionalities and types for Soroban contracts.
- **{contract, contractimpl, symbol_short, vec, Env, Symbol, Vec}**: These are the specific items being imported from the **soroban_sdk** module, including attributes, macros **(contract, contractimpl, symbol_short!)**, and data types **(Env, Symbol, Vec)**.
```
#[contract]
pub struct Contract;
```
- **#[contract]** is an attribute applied to the Contract struct, designating it as the type to which contract functions are associated. It implies that this struct will have contract functions implemented for it.
- **pub struct Contract;** defines a public struct named Contract. In Soroban contracts, contract functions are associated with this struct.
```
#[contractimpl]
impl Contract {
pub fn hello(env: Env, to: Symbol) -> Vec<Symbol> {
vec![&env, symbol_short!("Hello"), to]
}
}
```
- **#[contractimpl]** is an attribute that is applied to the **impl** block for the **Contract** struct, indicating that this block contains the implementation of contract functions.
- **impl Contract { ... }**: This is the implementation block for the Contract struct, where contract functions are defined.
- **pub fn hello(env: Env, to: Symbol) -> Vec<Symbol> { ... }**: This line defines a public function named **hello**. It takes two arguments, **env** of type **Env** and "to" of type **Symbol**(in this case, a string of up to 8 characters). It also specifies the return type as **Vec<Symbol>**
- **{ vec![&env, symbol_short!("Hello"), to] }**: This block of code is where a length-2 vector/array containing "Hello" and then the value passed as "to" is constructed and returned.
That's all the breaks we have for today. Don't get crabby on me!(🥁)
Now that the code is in the editor, compile it by clicking the compile button or pushing “cmd+k”

Open the contract tab and push the `hello()` button
Pass in an value for `to` and click the “call” button

The "Console" tab should open and you should see your message!

## Comparison and Conclusion
Both Solidity and Soroban provide the functionality to declare public functions. However, their approaches to data handling and state management differ, shaped by their linguistic roots: JavaScript-like syntax for Solidity and Rust for Soroban. Solidity is approachable for those familiar with JavaScript, while Soroban's Rust foundation offers advantages in concurrency and safety.
## Additional Resources
For developers interested in transitioning from EVM to Soroban, we have comprehensive documentation that covers everything from the basics of the Soroban Rust SDK compared to Solidity, up to deploying your own smart contracts with Rust. Learn more about migrating from EVM here.
If you’re looking for more tools and want to learn more about the sdk, you can check out the official Soroban docs [here](https://soroban.stellar.org/docs/migrate/evm/introduction-to-solidity-and-rust).
- [Soroban CLI](https://soroban.stellar.org/docs/getting-started/setup)
- [Soroban Rust SDK](https://soroban.stellar.org/docs/reference/sdks/write-contracts)
Stay tuned for more insights and tutorials in this series, and happy coding in the world of smart contracts!
| j_dev28 |
1,756,209 | DTOs and PHP: simplifying data transfer between application layers | The DTO pattern The DTO (Data Transfer Object) is a design pattern that aims to have objects... | 0 | 2024-07-09T12:44:00 | https://dev.to/marcelochia/dtos-e-php-simplificando-a-transferencia-de-dados-entre-as-camadas-da-aplicacao-41h5 | php, dto | ## The DTO pattern
The DTO (Data Transfer Object) is a design pattern that aims to provide objects used exclusively for transferring data between the layers of an application. It is an anemic object, that is, the class has only attributes and no methods that manipulate data, only object construction.
## Why not use an array to receive data in a method?
Look at the `CreateProduct` class:
```php
<?php
namespace App\Actions\Product;
use App\Product\Contracts\ProductRepository;
class CreateProduct
{
    public function __construct(private ProductRepository $repository) {}

    public function execute(array $data): Product
    {
        //
    }
}
```
Since the `execute()` method receives an associative array, whoever instantiates this class will have to open the method to see which array keys need to be provided.
### Replacing the array with a DTO
We can build a DTO class in PHP using the `readonly` modifier so that the objects' attributes cannot be changed.
PHP 8.2 (`readonly` classes require PHP 8.2; individual `readonly` properties have been available since PHP 8.1):
```php
<?php
namespace App\Dto;
readonly class ProductDto
{
    public function __construct(
        public string $name,
        public string $description,
        public float $price,
        public int $quantity,
        public int $categoryId,
        public int $brandId,
        public string $sku,
        public string $ean
    ) {}
}
```
And in the `CreateProduct` class we can change the `execute()` method:
```php
public function execute(ProductDto $data): Product
{
    //
}
```
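For illustration, a filled-in `execute()` could look roughly like this (a sketch only: the `Product` named-argument constructor and the repository's `save()` method are hypothetical and should be adapted to your own classes):

```php
public function execute(ProductDto $data): Product
{
    // The typed, readonly DTO guarantees every field is present and immutable
    $product = new Product(
        name: $data->name,
        description: $data->description,
        price: $data->price,
        quantity: $data->quantity,
    );

    // Hypothetical persistence call; adapt to your ProductRepository contract
    return $this->repository->save($product);
}
```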
### Benefits for the IDE
By using the DTO class instead of the associative array, the IDE will help identify the required attributes:

And also when consuming that data:

### Going further with static methods
To make creating these objects easier, we can add static factory methods:
```php
public static function fromRequest(Request $request): self
{
    return new self(
        name: $request->name,
        description: $request->description,
        price: $request->price,
        quantity: $request->quantity,
        categoryId: $request->category_id,
        brandId: $request->brand_id,
        sku: $request->sku,
        ean: $request->ean
    );
}

public static function fromArray(array $data): self
{
    return new self(
        name: $data['name'],
        description: $data['description'],
        price: $data['price'],
        quantity: $data['quantity'],
        categoryId: $data['categoryId'],
        brandId: $data['brandId'],
        sku: $data['sku'],
        ean: $data['ean']
    );
}
```
And use it like this:
```php
$data = ProductDto::fromRequest($request);
```
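Putting the pieces together (a sketch; `$request` and `$repository` here stand for whatever request object and repository implementation your framework provides):

```php
// e.g. inside a controller action
$dto = ProductDto::fromRequest($request);
$product = (new CreateProduct($repository))->execute($dto);
```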
### When not to use it
I see no need to use a DTO as a parameter in public methods that already make clear what data they expect, for example a method that fetches a list of products by the given IDs:
```php
class FindProducts
{
    public function execute(array $productsIds): array
    {
        //
    }
}
``` | marcelochia |
1,812,684 | Welcome Thread - v284 | Leave a comment below to introduce yourself! You can talk about what brought you here, what... | 0 | 2024-07-10T00:00:00 | https://dev.to/devteam/welcome-thread-v284-46df | welcome | ---
published_at : 2024-07-10 00:00 +0000
---

---
1. Leave a comment below to introduce yourself! You can talk about what brought you here, what you're learning, or just a fun fact about yourself.
2. Reply to someone's comment, either with a question or just a hello. 👋
3. Elevate your writing skills on DEV using our [Best Practices](https://dev.to/devteam/best-practices-for-writing-on-dev-creating-a-series-2bgj) series! | sloan |
1,815,217 | Understanding LEDs on Network Switches | Switches have LEDs for indicating power status, port status, link status, error indication,... | 0 | 2024-07-08T10:50:24 | https://dev.to/gateru/understanding-leds-on-network-switches-1kh1 | Switches have LEDs for indicating power status, port status, link status, error indication, troubleshooting, and performance monitoring.
The LED colors for the switch and their corresponding status indications are as follows:

To select or change a mode, press the Mode button until the desired mode is <u>highlighted</u>. When you change port modes, the meanings of the port LED colors also change.
## RPS MODE
The RPS LED is only available on switch models that have an RPS port.
In RPS mode, the switch will show the following lights.

## Port LEDs and Modes
The port and module slots each have a port LED. As a group or individually, the LEDs show information about the switch and about the ports.

## Factors Leading to Power over Ethernet (PoE) Denial in Switches
1. Preventing Overload - Each port that provides PoE has a maximum power it can deliver. If the total power required by connected PoE devices exceeds the switch's power budget, providing power to additional devices could overload the switch.
2. Protecting Devices - Connecting devices to a switch that cannot provide sufficient power could damage the devices.
3. Maintaining Switch Stability - Exceeding the switch's power capacity could lead to overheating, performance degradation, or other operational issues.
4. Compliance and safety- Adhering to the switch's power limitations ensures compliance with safety regulations and standards. Exceeding these limits could result in non-compliance with safety regulations and standards.
## STACK LED.
The STACK LED shows the sequence of member switches in a stack. Up to eight switches can be members of a stack. The first eight port LEDs show the switch member number. For example, if you press the Mode button and select Stack, the port LED 1 blinks green. The LEDs for port 2 and 3 are solid green, as these represent the member numbers of other stack members. The other port LEDs are off because there are no more members in the stack.
If the port LEDs are green on all the switches in the stack, the stack is operating at full bandwidth. If any port LED is not green, the stack is not operating at full bandwidth.
Stacking- Stacking allows users to expand their network capacity without the hassle of managing multiple devices.
**TIP** Some network switches have the ability to be connected to other switches and operate together as a single unit. These configurations are called "stacks", and are useful for quickly increasing the capacity of a network.
**A Stack** is a network solution composed of two or more stackable switches. Switches that are part of a stack behave as one single device. As a result, a stacking solution shows the characteristics and functionality of a single switch, while having an increased number of ports.
Stackable switches can be added or removed from a stack as needed without affecting the overall performance of the stack. Depending on its topology, a stack can continue to transfer data even if a link or unit within the stack fails.
This makes stacking an effective, flexible, and scalable solution to expand network capacity.
## Stacking Terminology.
The **Active switch** is a switch in the stack that handles the configuration for the entire stack. When you want to manage your stack, the Active switch is the device that you connect to in order to make changes. It also detects when switches enter or leave the stack, and upgrades outdated switches.
A **Standby switch** is a switch that will become the new Active switch if the original Active switch goes offline. In this way, a backup helps maintain the resiliency of the stack.
A **Member** is a stackable switch that operates as an additional unit within the stack.
A **stack port** is a port on the switch that is used to communicate with other switches in the stack. Depending on the model, a switch can have either preconfigured or user-defined stack ports.

## Console LEDs
The console LEDs show which console port is in use. If you connect cable to a console port, the switch automatically uses that port for console communication.

## Ethernet Management Port LED.
To understand the management LED port follow the table below,

## Link Status.
Verify that both sides have link. A broken wire or a shutdown port can cause one side to show link even though the other side does not have link.
A port LED that is on does not guarantee that the cable is fully functional. The cable might have encountered physical stress that causes it to function at a marginal level.
Examples of **physical stress** include;
1. Bending or flexing - Internal wires can be damaged when they're frequently moved or disconnected
2. Twisting or Torquing - Straining the cables when twisting or torquing can damage the connectors affecting the transmission of signals.
3. Rodent Damage - In some environments, rodents or pests may chew on cables, causing physical damage that affects their functionality.
4. Crushing or impact - Cables might get crushed under heavy equipment or suffer impacts from falling objects, leading to physical damage that affects their functionality.
If the port LED does<u> not </u>turn on:
- Connect the cable from the switch to a known good device.
- Ensure that both ends of the cable are connected to the correct ports.
- Verify that both devices have power.
- Verify that you are using the correct cable type.
- Check for loose connections. Sometimes a cable appears to be seated, but is not. Disconnect the cable and then reconnect it.
## Spanning Tree loops.
STP loops can cause serious performance issues that look like port or interface problems.
A unidirectional link can cause loops. It occurs when the traffic sent by the switch is received by its neighbour, but the traffic from the neighbour is not received by the switch; i.e., data can be sent from one device to another, but the reverse path for data transmission is either unavailable or unreliable.
A broken fiber-optic cable, other cabling problems, or a port issue could cause this one-way communication.
You can enable UniDirectional Link Detection (UDLD) on the switch to help identify unidirectional link problems.
## Autonegotiation.
When I get called in to diagnose a network slowdown or a slow device, the first things I check are the error statistics and the autonegotiation settings on the switches, as well as on the devices connected to them.
Autonegotiation is the feature that allows a port on a switch, router, server, or other device to communicate with the device on the other end of the link to determine the optimal duplex mode and speed for the connection.
| gateru | |
1,819,899 | Official deprecation announcement Storyblok Vue 2 & Nuxt 2 SDKs | Dear community, We want to announce some changes to the Storyblok Vue 2 SDK & Storyblok Nuxt 2... | 0 | 2024-07-08T10:00:00 | https://dev.to/storyblok/official-deprecation-announcement-storyblok-vue-2-nuxt-2-sdks-n1g | vue, nuxt, storyblok | Dear community,
We want to announce some changes to the [Storyblok Vue 2 SDK](https://github.com/storyblok/storyblok-vue-2) & [Storyblok Nuxt 2 SDK](https://github.com/storyblok/storyblok-nuxt-2).
## The future of the Vue Ecosystem: Vue 2 & Nuxt 2 Deprecation
Following the official end-of-life (EOL) for Vue 2 on December 31st, 2022, we will **end [Storyblok Vue 2 SDK](https://github.com/storyblok/storyblok-vue-2) support by August 31st, 2024**. During this extended support period, we will be providing bug fixes and supporting customers and community members. After August 2024, we will discontinue all support for this package, and all the efforts will be centered on the repository for the latest version of Vue (3.x): [Storyblok Vue SDK](https://github.com/storyblok/storyblok-vue).
For Nuxt 2 users, the official EOL is June 30th, 2024. **From the [Storyblok Nuxt 2 SDK](https://github.com/storyblok/storyblok-nuxt-2), the support will continue until December 31st, 2024**. After 2024, we will stop maintaining it, and the official Nuxt (3.x) SDK will be [Storyblok Nuxt SDK](https://github.com/storyblok/storyblok-nuxt).
With these changes, we want to ensure that our open-source SDKs for the Vue ecosystem are in sync with the latest trends.
For detailed insights into the Vue 2 & Nuxt 2 deprecation, the latest changes, and technical recommendations, we recommend exploring the following resources:
- Deprecation pages for both frameworks: [official Vue 2 deprecation](https://v2.vuejs.org/lts/) and [official Nuxt 2 deprecation](https://v2.nuxt.com/lts/).
- Migration guides: [official Vue 3 migration guide](https://v3-migration.vuejs.org/) and [official Nuxt 3 upgrade guide](https://nuxt.com/docs/getting-started/upgrade).
## Migrating to Vue 3 & Nuxt 3 Storyblok SDKs
**1 - New packages names**
- For Vue projects, instead of installing the package by `npm i -D @storyblok/vue-2`, you should now run `npm install @storyblok/vue`
- For Nuxt projects, the old way was `npm install @storyblok/nuxt-2`. Now you should run `npx nuxi@latest module add storyblok`
**2 - How to register them in your project**
- For Vue, you should still register the plugin at `main.js`:
The old way in Vue 2 SDK:
```js
import Vue from "vue";
import { StoryblokVue, apiPlugin } from "@storyblok/vue-2";
import App from "./App.vue";
Vue.use(StoryblokVue, {
  accessToken: "<your-token>",
  use: [apiPlugin],
});
```
New way at Vue SDK (v3.x):
```js
import { createApp } from "vue";
import { StoryblokVue, apiPlugin } from "@storyblok/vue";
import App from "./App.vue";
const app = createApp(App);
app.use(StoryblokVue, {
  accessToken: "YOUR_ACCESS_TOKEN",
  use: [apiPlugin],
});
```
- For Nuxt, you should still register the module inside `nuxt.config.js`:
The old way in Nuxt 2 SDK: (check config [options available](https://github.com/storyblok/storyblok-nuxt-2?tab=readme-ov-file#options))
```js
{
  buildModules: [
    // ...
    ["@storyblok/nuxt-2/module", { accessToken: "<your-access-token>" }],
  ],
  // or set the accessToken as publicRuntimeConfig (takes priority if both are set)
  publicRuntimeConfig: {
    storyblok: {
      accessToken: process.env.STORYBLOK_ACCESS_TOKEN
    }
  }
}
```
New way at Nuxt SDK (v3.x): (check config [options available](https://github.com/storyblok/storyblok-nuxt?tab=readme-ov-file#options))
```js
import { defineNuxtConfig } from "nuxt";
export default defineNuxtConfig({
  modules: ["@storyblok/nuxt"],
  storyblok: {
    accessToken: process.env.STORYBLOK_ACCESS_TOKEN
  }
});
```
**3 - Linking components between Storyblok Block Library and your Nuxt project**
In Nuxt 2, the folder used was `~/components/storyblok`; for Nuxt 3, by default is `~/storyblok`, but you can change it as stated in the [Storyblok Nuxt SDK README](https://github.com/storyblok/storyblok-nuxt?tab=readme-ov-file#1-creating-and-linking-your-components-to-storyblok-visual-editor).
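A component placed in that folder is picked up automatically and receives the block's content through a `blok` prop. As a rough sketch (the block name `Teaser` and its `headline` field are hypothetical examples, not part of the announcement above):

```vue
<!-- ~/storyblok/Teaser.vue -->
<script setup>
// The SDK passes the matching Storyblok block content as `blok`
defineProps({ blok: Object })
</script>

<template>
  <!-- v-editable wires the element up to the Visual Editor -->
  <div v-editable="blok">{{ blok.headline }}</div>
</template>
```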
The rest can be used and implemented the same way as in Vue 2 & Nuxt 2 SDKs.
> Always consider that the preferred way of using the composables available inside the SDKs is using [<script setup>](https://vuejs.org/api/sfc-script-setup.html).
---
If you have any questions or concerns not addressed in this communication and provided links, please [submit a support ticket](https://support.storyblok.com/hc/en-us/requests/new), and we will be happy to provide additional details.
Looking forward to a future of innovation and collaboration!
Warm regards,
The Storyblok DevRel Team
| dawntraoz |
1,825,379 | Stand out from an Average Developer | The insights mentioned are drawn from my experience working in a corporate environment and can help... | 0 | 2024-07-10T16:06:10 | https://dev.to/saloniagrawal/stand-out-from-an-average-developer-nng | webdev, programming, beginners, career | The insights mentioned are drawn from my experience working in a corporate environment and can help you excel and improve as a developer:
1. **Be Curious**: Curiosity fuels learning and innovation. Always seek to understand why things work the way they do and explore new ideas and technologies.
2. **Understand Before Fixing**: Take the time to fully understand the root cause of an issue before jumping into fixing it. Sometimes, the problem may not require a fix at all, or there might be a more efficient solution than initially apparent.
3. **Encourage Reusability**: Identify opportunities to refactor code into reusable methods, functions, or components. This reduces duplication, promotes consistency, and simplifies maintenance across the project.
4. **Optimize Code**: Minimize the number of loops and iterations in your code to improve performance and readability. Look for opportunities to use efficient data structures and algorithms to achieve the desired outcome with fewer iterations.
5. **Optimize Memory Usage**: Be mindful of memory usage in your code. Reduce the number of variables and allocate memory efficiently to optimize performance and resource utilization.
6. **Keep Code Simple**: Complex code is difficult to understand, maintain, and debug. Strive to keep your codebase simple, clear, and easy to follow, even if it means sacrificing some cleverness for readability.
7. **Continuous Learning**: Invest time in learning advanced skills and techniques relevant to the project you're working on. This enables you to deliver higher-quality solutions and stay competitive in your field.
8. **Focus on Best Practices**: Instead of settling for code that merely works, aim to write code that follows best practices and coding standards. This ensures maintainability, scalability, and reliability of your codebase over time.
9. **Consider Impact on Common Areas**: When making changes, consider how they might affect other parts of the codebase, especially common or shared components. This helps prevent unintended consequences and promotes code consistency.
10. **Review with Requirements in Mind**: When reviewing pull requests, ensure that the changes align with the project requirements and goals. This helps maintain consistency and clarity in the codebase.
By adhering to these principles, you not only become a more effective developer but also contribute to the overall quality and sustainability of the codebase. | saloniagrawal |
1,838,306 | Beyond the Hype: A Critical Look at Design Systems | Design systems have become a hot topic in the design world, lauded as a silver bullet for efficiency,... | 27,353 | 2024-07-10T05:00:00 | https://dev.to/shieldstring/beyond-the-hype-a-critical-look-at-design-systems-2eip | design, ui, ux, career | Design systems have become a hot topic in the design world, lauded as a silver bullet for efficiency, consistency, and a seamless user experience (UX). While they offer undeniable benefits, it's crucial to take a critical look at design systems and understand their limitations.
**The Allure of Design Systems:**
* **Efficiency:** Design systems streamline the design and development process by providing pre-built components and code snippets. This can save time and resources.
* **Consistency:** A well-defined design system ensures a consistent look and feel across all products, fostering brand recognition and a familiar UX.
* **Scalability:** Design systems act as a foundation for building new products and features faster, as core components and design principles are already established.
**The Potential Pitfalls:**
* **Over-reliance on Templates:** Design systems can stifle creativity if designers become overly reliant on pre-built components. There's a risk of homogenization and a lack of unique design solutions for specific needs.
* **Maintenance Burden:** Design systems require ongoing maintenance to keep pace with evolving user needs and technological advancements. A neglected design system can become outdated and hinder innovation.
* **Usability for Smaller Teams:** The benefits of design systems might not scale down well for smaller teams with limited resources. Implementing and maintaining a robust system can be a significant undertaking.
* **Focus on Consistency Over Context:** A rigid adherence to design system guidelines can lead to a one-size-fits-all approach. It's crucial to remember that design should always be informed by user needs and the specific context of a product.
**Building a Sustainable Design System:**
* **Start Small and Focus on Core Components:** Don't try to build a comprehensive system from day one. Begin with essential UI components and gradually expand as needed.
* **Prioritize Flexibility:** Design systems should be flexible enough to accommodate unique design needs for different products.
* **Foster Collaboration:** Involve designers, developers, and product managers throughout the design system's creation and evolution. A collaborative approach ensures the system meets everyone's needs.
* **Measure and Adapt:** Track usage data and user feedback to identify areas for improvement. Be prepared to adapt the design system based on real-world usage.
**The Future of Design Systems:**
Design systems are valuable tools, but they are not a magic solution. Here's what the future holds:
* **AI-powered Assistance:** AI can automate tasks like code generation and compliance checks, further streamlining the design system workflow.
* **Focus on User Research:** Data-driven insights from user research will be crucial for informing design system updates and ensuring they continue to meet user needs.
* **Metrics and Analytics:** Developing metrics to measure the effectiveness of design systems will become increasingly important to demonstrate their value and ROI.
**Conclusion:**
Design systems are powerful tools that can significantly enhance design and development efficiency. However, a critical understanding of their limitations is essential. By prioritizing flexibility, collaboration, and ongoing adaptation, organizations can leverage design systems to create a foundation for successful and user-centered products, without sacrificing creativity or responsiveness to evolving needs. The future of design systems lies in finding the right balance between efficiency and the ability to adapt to the ever-changing design landscape. | shieldstring |
1,844,568 | Flutter Package Power: Share Your Creations | Important things about Flutter package development. Simply create a flutter project. But set the... | 0 | 2024-07-08T10:48:41 | https://dev.to/ratul/flutter-package-power-share-your-creations-iph | flutter, mobile, dart | **Important things about Flutter package development.**
Simply create a flutter project. But set the Project Type to package.
Android Studio => New Flutter Project
***
And the project will be created. It won't have any Android, iOS, or other platform folders. Make a `src` directory for all files and a `[package_name].dart` inside the `lib` folder. Also, make an `example` folder. Inside the example folder, make a new Flutter application (not a package) for the example app.
Inside the `[package_name].dart`, add your file exports and, most importantly, add a library declaration, like this:
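The screenshot that originally illustrated this step appears to be missing here, so as a rough sketch (the package and file names below are hypothetical), the top-level `[package_name].dart` usually declares the library and re-exports everything from `src`:

```dart
// lib/my_package.dart (hypothetical names; match your own package and files)
library my_package;

export 'src/my_widget.dart';
export 'src/my_helper.dart';
```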
Write the README.md file; instructions are given inside the generated file. Record a short video of your project, convert it to a .gif, and add it to the README. It will look more professional.
Write the CHANGELOG in Markdown. You can follow this format:
## 1.0.0
* Initial release
## 1.0.1
* add example
## 1.0.2
* some minor changes
Edit the license file. Flutter recommends the BSD 3-Clause license: https://opensource.org/license/BSD-3-Clause
BSD 3-Clause License
```
Copyright (c) 2024, RATUL HASAN RUHAN
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from
   this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```
Add your project to GitHub.
Then the most important thing: add the data for pub.dev.
Optional things, but if you add them, the package will look more professional.
Now, a most important thing: you have to add a `platforms:` section to `pubspec.yaml`, otherwise pub.dev shows the supported platforms as undefined. This is what makes the supported platforms show up:
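A sketch of such a `platforms:` section (keep only the platforms your package actually supports):

```yaml
# pubspec.yaml
platforms:
  android:
  ios:
  linux:
  macos:
  web:
  windows:
```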
For a better view on pub.dev, like this:
Just add this to `pubspec.yaml`, and put your screenshots at the referenced paths.
```
screenshots:
  - description: "Example of first page"
    path: example/screenshot/screenshot1.png
  - description: "Example of middle page"
    path: example/screenshot/screenshot2.png
  - description: "Example of final page"
    path: example/screenshot/screenshot3.png
```
Now the publishing part:
You can create a publisher account (optional). Log in with Google on pub.dev.
Now, in the project terminal, run this command:
`dart pub publish`
And done!!
Thanks for reading this whole bunch of things.
| ratul |
1,847,338 | iphone safari issue | ** Safari iphone input zoom , bottom scroll, scroll to bottom , chatgpt... | 0 | 2024-07-09T05:41:03 | https://dev.to/parth24072001/iphone-safari-issue-2n0n | ios, iphone, safari, webdev |
## _Safari iPhone input zoom, bottom scroll, scroll to bottom, ChatGPT textarea_
1.
**When we click on a text box, iPhone Safari zooms in on that text box. Here is the solution: use the first snippet for React, and the second one for Next.js.**

```
<meta
  name="viewport"
  content="width=device-width,initial-scale=1.0, maximum-scale=1, user-scalable=no"
/>
```
```
import { Viewport } from "next/dist/lib/metadata/types/extra-types";

export const viewport: Viewport = {
  width: "device-width",
  initialScale: 1,
  maximumScale: 1,
  userScalable: false,
  interactiveWidget: "overlays-content",
};
```
2.
**Also, the iPhone Safari bottom scroll is default behaviour.**

| parth24072001 |
1,847,936 | Elegant Integration of TailwindCSS with React | Introduction TailwindCSS has stood out as an innovative tool for creating... | 0 | 2024-07-10T14:50:00 | https://dev.to/vitorrios1001/integracao-elegante-de-tailwindcss-com-react-1je1 | react, tailwindcss, javascript, html | ## Introduction
TailwindCSS has stood out as an innovative tool for building responsive, customizable user interfaces (UIs). With its utility-first approach, it lets developers style their applications without leaving the HTML (or JSX, in the case of React). This article covers how to integrate TailwindCSS into React projects, exploring the benefits of this combination, comparing it with other CSS approaches, and providing practical examples.
## Why use TailwindCSS with React?
TailwindCSS offers several advantages when used with React:
- **Development efficiency:** By using utility classes applied directly to React components, developers can build UIs without writing custom CSS, significantly speeding up the development process.
- **Easy responsiveness:** With built-in responsive classes, it is easy to create designs that adjust to different screen sizes without complex media queries.
- **Customization and configuration:** Tailwind is highly customizable through its configuration file. Developers can adjust the settings to match a project's visual identity, ensuring consistency across the entire design.
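As a small illustration of those responsive classes, a layout can go from stacked to side-by-side at the `md` breakpoint using nothing but utility prefixes (a sketch):

```jsx
// Stacked on small screens, two columns from the `md` breakpoint (768px) up
<div className="flex flex-col md:flex-row gap-4">
  <div className="md:w-1/2">First column</div>
  <div className="md:w-1/2">Second column</div>
</div>
```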
## Comparison with Other CSS Approaches
Before TailwindCSS, approaches such as BEM (Block Element Modifier) and CSS-in-JS systems like Styled Components were common in React projects. While BEM requires a detailed, manual class-naming structure, CSS-in-JS encapsulates styles inside components, increasing bundle size and potentially render time. Tailwind, by contrast, offers an efficient middle ground: low styling overhead with fast execution and easy maintenance.
## Setting Up TailwindCSS in a React Project
To integrate TailwindCSS into a React project, follow these steps:
### 1. Installation and Configuration
First, create a new React project if you don't have one yet:
```bash
npx create-react-app my-tailwind-project
cd my-tailwind-project
```
Install TailwindCSS via npm:
```bash
npm install -D tailwindcss postcss autoprefixer
npx tailwindcss init -p
```
This command creates the `tailwind.config.js` and `postcss.config.js` configuration files, which you can customize as needed.
### 2. Configuring the CSS
In `src/index.css`, add the Tailwind import directives:
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```
### 3. Using TailwindCSS in React Components
Now you can use Tailwind classes directly in your React components:
```jsx
function App() {
  return (
    <div className="p-6 max-w-sm mx-auto bg-white rounded-xl shadow-lg flex items-center space-x-4">
      <div>
        <h1 className="text-xl font-semibold text-black">Hello Tailwind!</h1>
        <p className="text-gray-500">You are using TailwindCSS with React!</p>
      </div>
    </div>
  );
}

export default App;
```
## Practical Example: A Profile Card
Let's build a simple profile card using TailwindCSS and React:
```jsx
function ProfileCard() {
  return (
    <div className="bg-white p-6 rounded-lg shadow-lg">
      <img className="h-24 w-24 rounded-full mx-auto" src="/profile-pic.jpg" alt="Profile picture" />
      <div className="text-center">
        <h2 className="text-lg text-gray-800 font-semibold">João Silva</h2>
        <p className="text-gray-600">Front-end Developer</p>
        <button className="mt-4 bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded">
          Connect
        </button>
      </div>
    </div>
  );
}
```
{% codesandbox 55ld8x %}
## Conclusion
Integrating TailwindCSS into React projects offers a modern, efficient approach to UI development. With the ability to fully customize the design to your liking, along with easily applied responsive and performant styles, TailwindCSS with React is a powerful combination that can speed up development without compromising quality or maintainability. Try it in your next project and notice the difference! | vitorrios1001 |
---

# php: write php 8.4’s array_find from scratch

*gbhorwood · 2024-07-09 · https://dev.to/gbhorwood/php-write-php-84s-arrayfind-from-scratch-5c9m · tags: php*

there’s an rfc vote currently underway for a number of [new array functions in php8.4](https://laravel-news.com/php-8-4-array-find-functions) (thanks to [symfony station](https://symfonystation.mobileatom.net/) for pointing this out!). the proposal is for four new functions for finding and evaluating arrays using `callables`. the functions are:
* array_find()
* array_find_key()
* array_any()
* array_all()
the full details can be read [here](https://wiki.php.net/rfc/array_find).
if we’re impatient, though, we can skip waiting for php8.4 and homeroll these ourselves.
## array_find()
at its heart, the functionality of `array_find` is [`array_filter`](https://www.php.net/manual/en/function.array-filter.php). the major difference is that `array_find` returns only the *first* matching element of the array, not all of them.
with that in mind, we can build our own version like so:
```php
/**
* Find the first value in an array that evaluates to true in $func
*
* @param array $array
* @param callable $func
 * @return mixed
*/
$array_find = function(array $array, callable $func) {
return array_values(array_filter($array, $func))[0] ?? null;
};
```
when we run this against the test data given in the rfc, we get the expected results.
```php
$array = [
'a' => 'dog',
'b' => 'cat',
'c' => 'cow',
'd' => 'duck',
'e' => 'goose',
'f' => 'elephant'
];
$array_find($array, function (string $value) {
return strlen($value) > 4;
}); // string(5) "goose"
```
## array_find_key()
the only difference between `array_find_key` and `array_find` is that, as the name suggests, it returns the matching key rather than the value. no surprise.
we can implement this by taking our `array_find` code and swapping [`array_key_first`](https://www.php.net/manual/en/function.array-key-first.php) in for the `array_values(...)[0]` lookup:
```php
/**
* Find the first key in an array where the value evaluates to true in $func
*
* @param array $array
* @param callable $func
 * @return mixed
*/
$array_find_key = function(array $array, callable $func) {
return array_key_first(array_filter($array, $func)) ?? null;
};
```
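running it against the same sample array returns the key of the first match:

```php
<?php
// same helper as above, repeated so this snippet runs standalone
$array_find_key = function(array $array, callable $func) {
    return array_key_first(array_filter($array, $func)) ?? null;
};

$array = [
    'a' => 'dog',
    'b' => 'cat',
    'c' => 'cow',
    'd' => 'duck',
    'e' => 'goose',
    'f' => 'elephant'
];

// 'goose' is the first value longer than 4 characters, so we get its key
var_dump($array_find_key($array, fn (string $v) => strlen($v) > 4)); // string(1) "e"

// nothing matches: null
var_dump($array_find_key($array, fn (string $v) => strlen($v) > 99)); // NULL
```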
## array_any() and array_all()
both `array_any` and `array_all` return booleans.
if one or more of the elements in the array passes the test in the callable, `array_any` will return true. we can achieve this by filtering the array and testing if the `count` of the result is greater than zero:
```php
/**
* Evaluate if any element in array $array evaluates to true in $func
*
* @param array $array
* @param callable $func
* @return bool
*/
$array_any = function(array $array, callable $func): bool {
return (bool)count(array_filter($array, $func));
};
```
for `array_all`, all the elements in the array must pass the callable test. this requires us to test if the number of elements in the filtered array is the same as in the original array before filtering, ie. if all the elements passed the filter.
```php
/**
* Evaluate if all elements in array $array evaluates to true in $func
*
* @param array $array
* @param callable $func
* @return bool
*/
$array_all = function(array $array, callable $func): bool {
return count(array_filter($array, $func)) == count($array);
};
```
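a quick sanity check of both helpers against the same animal array:

```php
<?php
// both helpers repeated so this snippet runs standalone
$array_any = function(array $array, callable $func): bool {
    return (bool)count(array_filter($array, $func));
};
$array_all = function(array $array, callable $func): bool {
    return count(array_filter($array, $func)) == count($array);
};

$array = ['a' => 'dog', 'b' => 'cat', 'c' => 'cow', 'd' => 'duck', 'e' => 'goose', 'f' => 'elephant'];

var_dump($array_any($array, fn (string $v) => strlen($v) > 4));  // bool(true): 'goose' and 'elephant' pass
var_dump($array_all($array, fn (string $v) => strlen($v) > 4));  // bool(false): 'dog' and friends fail
var_dump($array_all($array, fn (string $v) => strlen($v) >= 3)); // bool(true): every name has at least 3 letters
```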
and now we have a small slice of the power of php8.4 without the wait.
> 🔎 this post was originally written in the [grant horwood technical blog](https://gbh.fruitbat.io/2024/05/21/php-write-php-8-4s-array_find-from-scratch/)
---

# The Chain of Responsibility

*ben-witt · 2024-07-09 · https://dev.to/ben-witt/the-chain-of-responsibility-1pl4*

**The essence of the Chain of Responsibility pattern:**
The Chain of Responsibility pattern is a design pattern that allows code to be structured to route requests through a series of handlers. Each handler has the ability to process a request or pass it on to the next handler in the chain. This flexibility facilitates the handling of complex requests in a modular and scalable way.
**The basics in detail:**
To fully grasp the pattern, an understanding of how it works is necessary. Here we will examine the mechanism of the Chain of Responsibility pattern and compare it to other common approaches, such as if-else statements, to highlight its strengths and potential uses.
**Implementation in C#:**
Now let’s get down to business and implement the pattern in C#. Step by step, we will be guided through the creation of handler classes, their linking to a chain and the handling of requests through this chain.
**Practical application**:
To reinforce what we have learned, we present a practical example. Here we illustrate the pattern using a concrete scenario, show its implementation and carry out tests to demonstrate its effectiveness.
**Advanced concepts and subtleties:**
Looking outside the box reveals advanced concepts and optimizations of the Responsibility of Chain pattern. We discuss ways to improve the implementation, the dynamic adaptation of the chain at runtime as well as effective error handling and traceability.
**Summary:**
Finally, we summarize the key points and give advice for further deepening and applying the Chain of Responsibility pattern in your own projects.
Ready to unleash the power of the Chain of Responsibility pattern? Then let’s dive into the world of responsibility chains together and take your C# development to a new level.
## What is the Chain of Responsibility pattern?
The Chain of Responsibility pattern, an established design pattern in software development, acts as a proven tool for the efficient processing of requests or events. Its strength lies in the elegant decoupling of the requester from the recipients by providing a dynamic chain of potential recipients. Each recipient in this chain is able to decide independently whether it can handle the request adequately or forward it appropriately to the next recipient. This flexible structure ensures effective and modular processing of requests, which significantly improves both the maintainability and extensibility of software solutions.
**Purpose of the pattern:**
**Decoupling of sender and recipient:** The Chain of Responsibility pattern enables a clean separation between the sender of a request and the potential recipients. This avoids the need for the sender to have specific knowledge about the recipients, as the responsibility for processing the request lies within the chain.
**Flexibility and expandability:**
By structuring it as a chain of recipients, the template offers a high degree of flexibility when adding new recipients or changing existing ones. These changes can be made without modifying the sender, which significantly improves the maintainability and expandability of the system.
**Avoidance of hard-coded dependencies:**
The Chain of Responsibility pattern helps to avoid rigid dependencies by allowing the handling of requests to be configured dynamically. This means that the system remains open to change and can adapt to changing requirements without being tied to specific implementations.
**Advantages of the pattern:**
**Improved maintainability:** The clear separation of sender and recipient leads to code that is easier to understand and maintain. This level of abstraction makes the code base cleaner and less prone to unexpected side effects during maintenance.
**Extensibility**: Adding new recipients or changing the order of existing recipients is straightforward, as this can be done without making any adjustments to the sender. This keeps the code base flexible and open for future adaptations or extensions.
**Decoupling**: The individual components of the system are loosely coupled thanks to the Chain of Responsibility pattern. This enables improved reusability and testability of the components, as they can be developed, tested and maintained independently of each other.
**Use cases**:
**Processing of requests**: The pattern is ideal for processing requests in different scenarios. For example, it could be used in an e-commerce application to validate payment methods depending on various criteria, such as the amount of the purchase or the selected currency.
**Event handling**: In a GUI framework, the Chain of Responsibility pattern can be used to process user interactions efficiently. Different components within the chain can respond to events such as mouse clicks or keystrokes, ensuring flexible and scalable handling of user actions.
The pattern thus proves to be an extremely versatile tool for structuring complex processing logic in a wide variety of application areas, which can significantly increase the flexibility and maintainability of software systems.
**Understand the basics:**
In order to understand the Chain of Responsibility pattern in depth, it is crucial to grasp the fundamental workings of the pattern and relate them to other approaches. Such an understanding makes it possible to fully appreciate the strengths and potential of this design pattern.
The Chain of Responsibility pattern operates on the basis of a hierarchical sequence of handlers that receive a request and either handle it themselves or pass it on to the next handler in the chain. This processing continues in turn until the request is successfully processed or the chain is exhausted.
A key point of comparison is the contrast with other common approaches such as the use of if-else statements. While the latter represent a sequential and static decision structure in which each condition is explicitly defined in the code, the responsibility-of-chain pattern offers a dynamic and flexible alternative.
By using a chain of handlers, the pattern enables an elegant decoupling of the sender from the receivers, creating a loosely coupled architecture. This level of abstraction not only facilitates the maintenance and extension of the code, but also promotes the reusability of the individual components.
Overall, the Chain of Responsibility pattern enables efficient processing of requests or events in complex systems by providing a flexible and extensible structure. An in-depth understanding of how it works and a comparison with other approaches are therefore essential in order to recognize its full potential and make optimal use of it.
## How does the Chain of Responsibility pattern work?
The Chain of Responsibility pattern is divided into three main components that form the framework for its functionality:
**1. Handler (recipient):** Each handler embodies a potential recipient of a request or event. This component implements a method or interface for processing the request. If a handler is unable to handle the request, it forwards it to the next handler in the chain. This flexible structure makes it possible to dynamically distribute responsibility for the request between the handlers, depending on the specific requirements of the system.
**2. Chain**: The chain forms the structural basis of the pattern and consists of a series of handlers that are linked together. Each handler in the chain references the next handler, creating a sequential order. In this way, the request can be passed through the chain until a suitable handler is found that can process it successfully. This flexible forwarding functionality enables efficient and modular handling of requests in complex systems.
**3. Client (sender):** The client initiates the request process by creating a request and passing it to the first handler in the chain. An important feature of the pattern is that the client does not need to be aware of the specific handling of the request. This abstraction layer enables a clean separation between client and receivers, which increases the flexibility and maintainability of the system.
Through the interaction of these three main components, the Chain of Responsibility pattern enables elegant and flexible handling of requests or events in complex software systems. It promotes the reusability, extensibility and maintainability of the code by providing a clear and modular structure that allows the processing logic to be easily adapted and extended.
**Comparison with other patterns:**
A frequent comparison between the Chain of Responsibility pattern and the use of if-else statements shows clear differences:
**If-else statements:** The request processing logic is embedded directly in the sender. This can lead to confusing and difficult to maintain code, especially with many conditions. Adding new conditions often requires changes to the sender, which makes maintenance more difficult.
**Chain of Responsibility pattern:** Here the processing logic is encapsulated in separate handler classes that are connected in a chain. The sender only has to send the request to the beginning of the chain without having to worry about the details of request processing. This promotes modularity and flexibility in the code.
**Example:**
To illustrate the concept of the Chain of Responsibility pattern, let’s look at a simple example from the field of authentication:
Suppose we have a chain of authentication checks that must be run in sequence:
1. Checking user authorizations.
2. Checking the validity of the password.
3. Verifying two-factor authentication.
In this scenario, each check is represented as a handler in the chain. Each handler either rejects the request or passes it on to the next handler; the request is authenticated only once every check has passed. This structure allows for flexible and extensible authentication logic, as new checks can easily be added or reordered without changing the main authentication mechanism.
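A compact sketch of that authentication chain (the `AuthRequest` shape and class names here are illustrative, not taken from the article's later support example):

```
using System;

// Illustrative request type; real checks would inspect credentials, sessions, etc.
public record AuthRequest(bool HasPermission, bool PasswordValid, bool TwoFactorOk);

public abstract class AuthCheck
{
    private AuthCheck _next;

    // Returns the handler passed in, so calls can be chained fluently
    public AuthCheck SetNext(AuthCheck next) { _next = next; return next; }

    public bool Handle(AuthRequest request)
    {
        if (!Check(request)) return false;              // this check failed: reject
        return _next == null || _next.Handle(request);  // otherwise forward down the chain
    }

    protected abstract bool Check(AuthRequest request);
}

public class PermissionCheck : AuthCheck
{
    protected override bool Check(AuthRequest r) => r.HasPermission;
}

public class PasswordCheck : AuthCheck
{
    protected override bool Check(AuthRequest r) => r.PasswordValid;
}

public class TwoFactorCheck : AuthCheck
{
    protected override bool Check(AuthRequest r) => r.TwoFactorOk;
}

public static class AuthDemo
{
    public static void Main()
    {
        var chain = new PermissionCheck();
        chain.SetNext(new PasswordCheck()).SetNext(new TwoFactorCheck());

        Console.WriteLine(chain.Handle(new AuthRequest(true, true, true)));  // True
        Console.WriteLine(chain.Handle(new AuthRequest(true, false, true))); // False
    }
}
```

Adding a new check is then just another subclass and one extra `SetNext` call, with no change to the existing handlers.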
**Summary**:
The Chain of Responsibility pattern presents an elegant solution for forwarding requests or events through a chain of handlers. This makes the code more flexible, maintainable and extensible. In the next section, we will take a closer look at the implementation of this pattern in C#.
**Step 1:** Creating the handler classes for support staff
We create handler classes for processing support requests.
```
public abstract class SupportHandler
{
protected SupportHandler NextHandler;
public void SetNextHandler(SupportHandler handler)
{
NextHandler = handler;
}
public abstract void HandleRequest(SupportRequest request);
}
public class Level1SupportHandler : SupportHandler
{
public override void HandleRequest(SupportRequest request)
{
        // Verifying if Level-1 support can handle the request
        // (example condition — here, low-priority tickets; replace with your own logic)
        if (request.Priority <= 3)
{
Console.WriteLine("Support request successfully processed by Level-1 support.");
}
else if (NextHandler != null)
{
// Forwarding request to the next handler
NextHandler.HandleRequest(request);
}
else
{
Console.WriteLine("No support personnel could handle the request.");
}
}
}
```
**Step 2:** Linking the handlers to a chain
```
// Similarly, implement the other handler classes for level 2 support, level 3 support, etc.
//We create a chain of support staff.
public class SupportChain
{
private SupportHandler _firstHandler;
public SupportChain()
{
        // The order of the handlers is defined here; each handler must point
        // to the *next* one — calling SetNextHandler twice on the same handler
        // would overwrite the first link and break the chain
        _firstHandler = new Level1SupportHandler();
        var level2 = new Level2SupportHandler();
        var level3 = new Level3SupportHandler();
        _firstHandler.SetNextHandler(level2);
        level2.SetNextHandler(level3);
        // Further handlers can be added here
}
public void ProcessSupportRequest(SupportRequest request)
{
// Send request to the beginning of the chain
_firstHandler.HandleRequest(request);
}
}
```
**Step 3:** Processing support requests through the chain
```
//We use the created chain to process support requests.
public class SupportClient
{
public void ProcessSupport()
{
SupportChain chain = new SupportChain();
SupportRequest request = new SupportRequest(/* Details of the support request */);
chain.ProcessSupportRequest(request);
}
}
```
In this example, support requests are routed through a chain of support employees. Each employee checks whether they can process the request and forwards it to the next employee if necessary. This structure enables efficient processing of support requests and facilitates the expansion of the system with additional support levels or functions.
**Practical example:**
To further illustrate the Chain of Responsibility pattern, we will now implement a simple practical example. In this scenario, we will create an application to handle customer support requests.
In this application, support requests are routed through a chain of support employees. Each employee checks whether they can process the request and forwards it to the next employee if necessary. This enables efficient processing of support requests and easy expansion of the system with additional support levels or functions.
**Scenario:**
For our application, we plan to receive support requests from customers and forward them to different support levels according to their urgency. The support employees are to be organized in a chain, with each employee having the option of processing the request or forwarding it to the next employee in the chain.
To implement this, we need to:
1. Create a chain of support agents.
2. Give each agent in the chain a method to handle a support request or pass it on to the next agent.
3. Provide a way for the application to receive support requests and forward them to the first agent in the chain.
By implementing this scenario, we will see the Chain of Responsibility pattern in action and how it provides an efficient and flexible solution for handling support requests.
**Implementation:**
We will carry out the implementation in C#.
```
// Definition of the support request
public class SupportRequest
{
public string CustomerName { get; set; }
public string RequestDetails { get; set; }
public int Priority { get; set; }
// Further relevant properties can be added here
}
```
```
// Abstract handler class for support agents
public abstract class SupportHandler
{
protected SupportHandler NextHandler;
public void SetNextHandler(SupportHandler handler)
{
NextHandler = handler;
}
public abstract void HandleRequest(SupportRequest request);
}
```
```
// Handler class for level 1 support
public class Level1SupportHandler : SupportHandler
{
public override void HandleRequest(SupportRequest request)
{
if (request.Priority <= 3) // Example priority check
{
Console.WriteLine($"Support request from {request.CustomerName} processed by level 1 support.");
}
else if (NextHandler != null)
{
NextHandler.HandleRequest(request);
}
else
{
Console.WriteLine("No support agent was able to process the request.");
}
}
}
```
```
// Handler class for level 2 support
public class Level2SupportHandler : SupportHandler
{
public override void HandleRequest(SupportRequest request)
{
if (request.Priority <= 6) // Example priority check
{
Console.WriteLine($"Support request from {request.CustomerName} processed by level 2 support.");
}
else if (NextHandler != null)
{
NextHandler.HandleRequest(request);
}
else
{
Console.WriteLine("No support agent was able to process the request.");
}
}
}
```
```
// Handler class for level 3 support
public class Level3SupportHandler : SupportHandler
{
public override void HandleRequest(SupportRequest request)
{
        // Level 3 support handles all requests, as it is the last level in the chain
        Console.WriteLine($"Support request from {request.CustomerName} processed by level 3 support.");
}
}
```
```
// Implementation of the clients
public class SupportClient
{
    public void ProcessSupportRequest(SupportRequest request)
    {
        // Build the chain: level 1 -> level 2 -> level 3 (each handler must
        // point to the next one, not all to the first)
        SupportHandler chain = new Level1SupportHandler();
        var level2 = new Level2SupportHandler();
        chain.SetNextHandler(level2);
        level2.SetNextHandler(new Level3SupportHandler());
        chain.HandleRequest(request);
}
}
```
**Application of the example:**
Now we can use our example to process requests:
```
class Program
{
static void Main(string[] args)
{
SupportClient client = new SupportClient();
// Create example request
SupportRequest request = new SupportRequest
{
CustomerName = "John Doe",
RequestDetails = "I am facing issues with my account login.",
Priority = 5 // Example priority
};
// Process request
client.ProcessSupportRequest(request);
}
}
```
**Result:**
Depending on the priority of the request, it is forwarded to the appropriate support level and processed accordingly. The Chain of Responsibility pattern enables flexible and scalable handling of support requests in a customer support system.
**Advanced concepts and optimizations:**
After looking at the basic understanding of the Chain of Responsibility pattern and its implementation in C#, we can turn to advanced concepts and optimizations to improve our solution. These include efficient chain traversal strategies, dynamic configuration of the support chain, improvements in error handling and optimization of resource usage. These measures help to increase the performance, efficiency and robustness of our application to meet the needs of our users.
**Dynamic Chain Adjustment at Runtime:**
One way to enhance the flexibility of our system is by dynamically adjusting the chain of support agents at runtime. This entails the ability to modify the order or composition of support agents as needed without altering the code.
Through this dynamic adjustment, we can, for instance, add new support tiers, replace existing agents, or alter the prioritization of agents in the chain, all without the necessity of modifying the source code. This endows our system with high flexibility and adaptability to changing requirements or business scenarios.
By implementing this dynamic adjustment, we can ensure that our support system consistently responds optimally to our customers’ needs, ensuring efficient handling of support inquiries even in rapidly changing environments.
```
public class SupportClient
{
    private SupportHandler _chain;

    public SupportClient(SupportHandler initialHandler)
    {
        _chain = initialHandler;
    }

    // The chain can be swapped out at runtime
    public void SetSupportChain(SupportHandler handler)
    {
        _chain = handler;
    }

    public void ProcessSupportRequest(SupportRequest request)
    {
        _chain.HandleRequest(request);
    }
}
```
**Implementation of a Dynamic Support Chain Adjustment Mechanism:**
By implementing a mechanism for dynamically adjusting the support chain, the application can flexibly respond to changes in the support process without requiring modifications to the source code. This facilitates the adaptation of the support structure as needed, allowing for the addition of new support tiers, replacement of existing staff, or reordering of staff members. This approach ensures that the application remains agile and adaptable to meet evolving requirements and business scenarios.
**Error Handling and Traceability:**
Another crucial consideration is error handling and traceability within the chain. It is essential for the application to handle errors appropriately and provide the capability to trace the processing status of requests.
```
public abstract class SupportHandler
{
// …
public virtual void HandleRequest(SupportRequest request)
{
try
{
// Processing of the Request
}
catch (Exception ex)
{
            Console.WriteLine($"Error occurred during processing: {ex.Message}");
}
}
}
```
By implementing error handling in each handler, we can ensure that the application is resilient against unexpected errors. Furthermore, we can add traceability mechanisms, such as logging or appending additional information to the request.
**Further Optimizations:**
- Implementation of mechanisms for parallel processing of requests in the chain to enhance performance.
- Use of Dependency Injection for easier configuration of the support chain and improved testability.
- Implementation of mechanisms for automatic adjustment of support priorities based on specific criteria.
Considering these advanced concepts and optimizations allows us to further enhance the flexibility, performance, and robustness of our system.
In this tutorial, we extensively covered the Chain of Responsibility pattern in C#. We explained its purpose, how it works, and its implementation through a practical example. Additionally, we examined advanced concepts and optimizations to improve the performance and flexibility of our solution. With this understanding, we are now able to effectively utilize the Chain of Responsibility pattern in our own projects and develop robust, flexible applications.
**Summary**:
- The Chain of Responsibility pattern allows requests or events to be passed through a chain of handlers, with each handler having the ability to process the request or pass it to the next handler.
- Using this pattern enables the separation of sender and receiver, leading to more flexible, maintainable, and extensible code.
- Implementing the pattern in C# involves creating handler classes and linking them to form a chain that processes requests.
- Advanced concepts such as dynamic adjustment of the chain at runtime and error handling enhance the flexibility and robustness of our solution.
**Conclusions**:
The Chain of Responsibility pattern is undeniably a powerful tool in software development that can be employed in numerous application domains to structure complex processing logic. Through careful implementation and consideration of advanced concepts, we can develop flexible, robust, and high-performing systems. Applying this pattern enables clean separation of concerns, increased maintainability and extensibility of the code, and improved responsiveness to changing requirements. By mastering the principles of the Chain of Responsibility pattern and fully harnessing its potential, we can develop software solutions that meet the highest standards in terms of flexibility, robustness, and performance.
---

# Django application with allauth configuration

*saiprasath · 2024-07-08 · https://dev.to/saiprasath/django-application-with-allauth-configuration-3oeo · tags: django, allauth, sso, tutorial*

Hi, dev ninjas🥷!
Welcome to my first post! Today, we're diving into Django's built-in authentication system, specifically using the django-allauth package. This powerful tool provides a range of authentication providers, simplifying the workflow for managing logins.
In this tutorial, we'll explore how to add Single Sign-On (SSO) login functionality to an existing Django application. Note that in this application, user sign-up and creation are disabled. Instead, users will be pre-added to the system, and they will link their social accounts through the Django application.
Let's walk through this process step-by-step and see how it can be done.
---
## Setting Up the Django Project
Before we dive into the SSO integration, let's review the setup process for a Django project. If you already have an existing Django application, you can skip ahead to the configuration steps. For those new to Django, here’s a quick guide on creating a project and app, installing necessary packages, and configuring settings.
### Step 1: Create a New Django Project
For those new to Django, we'll quickly cover how to create a new Django project and app. If you already have an existing project, you can skip to the next step.
Creating a Django Project and App:
Open your terminal and run the following commands:
```
$ django-admin startproject myproject
$ cd myproject
$ python manage.py startapp myapp
```
This will create a new Django project named `myproject` and an app named `myapp` (you can choose your own names for both).
### Step 2: Install and Configure django-allauth
Next, we need to install django-allauth. This package provides a comprehensive authentication system with support for multiple authentication providers.
#### Managing Dependencies:
- **Using Docker:**
If you are using Docker for your development environment, you can
define your dependencies in the `Dockerfile` and `docker-compose.yml`
files. This approach ensures that all dependencies are consistently managed across different environments.
- **Using Virtual Environment:**
If you are not using Docker, it's recommended to create a virtual environment to manage your project’s dependencies:
*Create and Activate a Virtual Environment:*
First, make sure to create a virtual environment:
```
$ python -m venv venv
$ source venv/bin/activate # On Windows use `venv\Scripts\activate`
```
Install django-allauth:
Add django-allauth to your requirements.txt file:
```
Django>=3.0,<4.0
django-allauth
authlib==1.0.0
```
Then, install the dependencies: `pip install -r requirements.txt`
If you are not using a requirements.txt file, you can install django-allauth directly: `pip install django-allauth` (inside virtual env.)
Update `settings.py`:
Open your `settings.py` file and add allauth and its dependencies to the INSTALLED_APPS list. Also, include the required authentication backends and set up the site ID:
```
# config/settings.py
INSTALLED_APPS = [
...
'django.contrib.sites',
'allauth',
'allauth.account',
'allauth.socialaccount',
'allauth.socialaccount.providers.okta', # for sso config
    'allauth.socialaccount.providers.google', # include other providers as needed (github, etc.)
'allauth.socialaccount.providers.github', # example
...
]
SITE_ID = 1 #default one, if you're adding in the admin site, change it to the respective one
AUTHENTICATION_BACKENDS = (
'django.contrib.auth.backends.ModelBackend', # default backend
'allauth.account.auth_backends.AuthenticationBackend', #backend for OAuth
)
LOGIN_REDIRECT_URL = '/' # where to redirect after a successful login
# For OAuth_AUTHENTICATION
ACCOUNT_EMAIL_VERIFICATION = "none"
ACCOUNT_EMAIL_REQUIRED = True
SOCIALACCOUNT_QUERY_EMAIL = True
SOCIALACCOUNT_AUTO_SIGNUP = False # no auto signup via providers; a social login only succeeds for an existing user with the same email address
SOCIALACCOUNT_LOGIN_ON_GET=True
SOCIALACCOUNT_ADAPTER = 'config.adapters.MySocialAccountAdapter'
SOCIAL_AUTH_REDIRECT_IS_HTTPS = True
SOCIALACCOUNT_PROVIDERS = {
'google': {
'APP': {
'client_id': 'YOUR_CLIENT_ID',
'secret': 'YOUR_SECRET',
'key': ''
}
},
..... # same way for other providers, for okta it is separate
}
# For Auth0 - Okta(SSO)
AUTH0_CLIENT_ID = 'YOUR_CLIENT_ID'
AUTH0_CLIENT_SECRET = 'YOUR_SECRET'
AUTH0_DOMAIN = 'YOUR_DOMAIN'
AUTH0_CALLBACK_URL = 'http://localhost:8000/callback'
# these configurations can either be given here, or in the admin page.
```
To obtain the client id and other credentials for a provider, create an app in that provider's developer console.
For example, for Google:
- Go to Google Cloud
- Create a project
- Go to API & Services
- Go to Oauth consent screen and complete the consent form
- Create an app
- Now come to credentials tab
- There go to the respective app and find out the client id and secret key
- If secret key is not there, create one!

Create apps for the other providers in the same way; for Auth0, refer to the references section below.
### Step 3: Create adapters and manage authentications
Now we create a custom `adapters.py` file to hook into the pre-social-login step and connect incoming social accounts to existing users.
```
# config/adapters.py
from allauth.socialaccount.adapter import DefaultSocialAccountAdapter
from django.shortcuts import redirect

try:  # allauth >= 0.55
    from allauth.core.exceptions import ImmediateHttpResponse
except ImportError:  # older allauth versions
    from allauth.exceptions import ImmediateHttpResponse

from app.users.models import User


class MySocialAccountAdapter(DefaultSocialAccountAdapter):
    def is_open_for_signup(self, request, sociallogin):
        return False  # block auto login/signup for unknown users

    def pre_social_login(self, request, sociallogin):
        if sociallogin.is_existing:
            return
        # depending on the provider, the address arrives as 'email' or 'mail'
        user_email = (sociallogin.account.extra_data.get('email')
                      or sociallogin.account.extra_data.get('mail'))
        if user_email:
            try:
                # attempt to retrieve the user by email
                user = User.objects.get(email=user_email)
                # connect the social account to the existing user
                sociallogin.connect(request, user)
            except User.DoesNotExist:
                # user does not exist: abort the login flow; a plain
                # `return redirect(...)` is ignored in this hook, so we
                # raise ImmediateHttpResponse with the response instead
                raise ImmediateHttpResponse(redirect('/login'))
# this makes sure the social account gets connected to the existing user
```
As mentioned earlier, during this process an existing user who tries to log in through one of the providers with the same email address will be logged into their existing account.
### Step 4: Customizing URLs and views
```
path('login/', users.user_login, name='custom_login'), # existing login page
path('logout/', users.user_logout, name='logout'),
path('accounts/social/connections/', users.social_connections, name='socialaccount_connections'),  # custom view for managing connected accounts
path('accounts/social/signup/', base.signup_view, name='socialaccount_signup'),
path('accounts/', include('allauth.urls'), name='account'),  # includes all of allauth's account/social URLs
path('sso_login/', base.auth0_login, name='okta_login'),
path("callback/", base.auth0_callback, name="sso_callback"),
```
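The `auth0_login` and `auth0_callback` views referenced above are not shown in this post. As a sketch only (not the author's actual code), they could be wired up with Authlib, the library used in Auth0's Django quickstart; the `oauth` object, the view bodies, and the session key here are assumptions you would adapt to your project:
```
# config/views.py (illustrative sketch)
from authlib.integrations.django_client import OAuth
from django.conf import settings
from django.shortcuts import redirect
from django.urls import reverse

oauth = OAuth()
oauth.register(
    "auth0",
    client_id=settings.AUTH0_CLIENT_ID,
    client_secret=settings.AUTH0_CLIENT_SECRET,
    client_kwargs={"scope": "openid profile email"},
    server_metadata_url=f"https://{settings.AUTH0_DOMAIN}/.well-known/openid-configuration",
)

def auth0_login(request):
    # Send the user to Auth0's hosted login page
    return oauth.auth0.authorize_redirect(
        request, request.build_absolute_uri(reverse("sso_callback"))
    )

def auth0_callback(request):
    # Exchange the authorization code for tokens and keep the result in the session
    token = oauth.auth0.authorize_access_token(request)
    request.session["user"] = token
    return redirect("/")
```
See the Auth0 documentation linked in the resources below for the full flow.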
If you don't need all of the default allauth account URLs, you can register only the ones you want (such as the `accounts/social/connections/` path above) and remove the blanket `path('accounts/', include('allauth.urls'))` entry.
`social_connections view`
```
from allauth.socialaccount.models import SocialAccount
from django.contrib import messages
from django.shortcuts import redirect, render


def social_connections(request):
    if request.method == 'POST':
        account_id = request.POST.get('account')
        if account_id:
            try:
                account = SocialAccount.objects.get(pk=account_id, user=request.user)
                account.delete()
                messages.success(request, 'Social account removed successfully.')
            except SocialAccount.DoesNotExist:
                messages.error(request, 'Social account not found.')
        else:
            messages.error(request, 'No social account selected.')
        return redirect('socialaccount_connections')  # Adjust to the appropriate URL name for your profile page
    social_accounts = SocialAccount.objects.filter(user=request.user)
    return render(request, 'admin/project/social_accounts.html', {'social_accounts': social_accounts})
```
This view mainly focuses on showing the accounts connected through the different social providers. For example, if the user has only logged in using Google, this page shows just the Google provider; the other providers get added once the user logs in through them.
### Step 5: Login for social accounts
```
<a href="{% url 'okta_login' %}" class="okta btn" style="color: black;">Okta</a><br>
<a href="{% provider_login_url 'google' process='login' method='js_sdk' %}" class="google btn" style="color: black;">Google</a><br>
```
Add links for the other providers in the same way!
---
## Running and testing
After setting up django-allauth and configuring your project, it’s important to test the integration to ensure everything works as expected.
### Testing Locally
- Run the Development Server:
Start your Django development server to test the application.
```
python manage.py runserver
```
- Access the Login Page:
Navigate to the login page, which you have customized to include SSO options using the above-mentioned URLs.


- Test Different Providers:
Try logging in using different social providers you configured (e.g., Google, GitHub). Ensure that users with existing emails in your database can log in and their social accounts are linked correctly.
### Testing with Docker
1. Start your application with `docker compose up`
2. Go to the login page and test your application in the same way!
---
## Conclusion
In this tutorial, we have covered how to integrate django-allauth into a Django application to provide SSO login functionality. We discussed:
- Setting up a Django project and installing dependencies.
- Configuring django-allauth in settings.py.
- Creating custom adapters to manage the social login process.
- Defining URLs and views for login, logout, and social account connections.
- Testing the integration locally and with Docker.
By following these steps, you can streamline the authentication process for your users, providing a seamless login experience with various social providers.
---
## Additional Resources
- [django-allauth Documentation](https://docs.allauth.org/en/latest/)
- [Django Documentation](https://docs.djangoproject.com/en/5.0/)
- [Auth0 Documentation](https://auth0.com/docs/quickstart/webapp/django/interactive)
> _Any issues can be addressed in the comments section below! Feel free to ask if you have any questions or need further clarification on any part of this tutorial. Happy coding!_
# The Ongoing War Between CJS & ESM: A Tale of Two Module Systems

(This was originally an in-person talk I gave, so the following graphics are from said presentation slides, the bulk of which are attributed to [SlidesGO](https://slidesgo.com/); check them out for awesome & free slides! Just be sure to credit them during your presentation <3)
---
## _**A**_ battle has raged on for _millennia_, claiming numerous casualties in its wake, where no single individual, corporation nor entity was safe from the wreckage that followed in its trail; hidden deep within the binary bunkers, there festered something so awful, so _terrible_ that mankind is still reeling from it today.
---
## Of course, I'm speaking of the dreaded 20-year war that is commonly known as...
---
## COMMONJS (CJS)
## VS
## ECMASCRIPT MODULES (ESM)
---
So with **_that_**, my friends, let's dive into the ongoing war between the two different kinds of module implementations that exist within our favorite language, known as JavaScript; CJS & ESM.
---

Yeah, what _**is**_ ESM and CJS, anyways? Well, by definition, ESM stands for ECMAScript Modules and CJS stands for CommonJS. Ecma is the governing body for JavaScript standardization and CJS, well…that's kind of just the term that was given before any governing body had any say in the matter. Let's first dive into some of the history surrounding these two methodologies.
---

By now, I’m sure we’ve all used both variations of these syntaxes to bring in some sort of code that we’ve needed within our existing codebase. CJS syntax uses the `require()` and `module.exports` commands to either bring in and/or send out some bit of code we’ve either written ourselves or have used by someone else to make our code do what it’s supposed to do.
Same exact thing for ESM syntax; it uses the `import` and `export` commands to either bring in or ship out some code we’ve either written ourselves or have used by someone else to make our code work. Seems like both are basically doing the same thing, right? So...
---

Well, that’s my current job to explain to you, but just let me say that difference is literally EVERYTHING. Let's explore how these two seemingly comparative modules differ so greatly.
---

At its core, CJS is **_synchronous_** and **_dynamic_** in its methods of importing and exporting modules. This means that a single module is only able to be loaded at a time and the next module is unable to be processed until the previous one has finished. The dynamic aspect comes into play when only certain modules are required in, as they’re needed at run-time.
ESM on the other hand, is **_asynchronous_** and **_static_**; this means that modules are able to be loaded in, regardless of any current processes occurring. The static aspect of ESM means that a module's imports and exports can be _analyzed_ at compile time, before any of the code actually runs.
Overall, both of these module implementations have their own strengths and weaknesses; the most common use-case for either of these, is for front end processes to use ESM and back end processes to use CJS.
But how did we arrive at this point? And why are computer nerds everywhere choosing factions to represent their chosen side to fight the good fight? To answer that, we need to consult the dusty tome of JavaScript’s history.
---

CJS was first introduced in the year 2009 by Mozilla engineer [Kevin Dangoor](https://www.kevindangoor.com/about/), after the need for the modularizing of code became evident for larger code bases. **_Imagine_** having to contain an entire full stack project within a single file of code; _NO THANK YOU!_
So with the mother of innovation fully at work, CJS was born to address the lack of a module system in JavaScript, primarily for server-side applications. And a fun fact about CJS is that it was initially called _ServerJS_.
So that’s great and all, but if CJS solved so many problems, why was there a need for ESM to exist in the first place? The answer to that lies within the need for a standardized approach to the existing module system, as well as the need for asynchronous functionality that front end based applications necessitate.
---
## ENTER: ESM
ESM was introduced in 2015, alongside the advent of ES6, which brought about significant changes that a lot of us take for granted as the baseline approach for operating within JavaScript.
Ever since, the question on everyone’s mind has been, “why don’t we get rid of CJS and only implement ESM as the main source of truth for the modules we use?”
…do you _want_ the truth? Do you _really want_ the truth?
## Well I don’t think you can _handle_ the truth!
But alright, the truth is…
---

What you see before your very eyes. That’s right my fellow friends, CJS solely exists because old engineers **_refuse_** to adapt to any kind of new technology and are firmly set in their tired and decrepit ways.
Alright, so not really, but based off of the digging around I’ve done, that seems to make up at _least_ 20% of the reason why.
The bulk of the reason why these two approaches exist lies within the fact that Node.js has signed a blood-pact with CJS and defaults to it; server-side code only uses ESM syntax when you opt in, for example with `"type": "module"` in `package.json`, `.mjs` files, or a transpiler such as TypeScript.
It’s also worth noting that the vast majority of browsers simply refuse to acknowledge CJS’s existence, unless you bring in some kind of transpiler to force that acknowledgement. Although, before I get too ahead of myself, let’s gaze into the opening statement I had first written out at the beginning of this post; insecurity vs. security:
---

Back in 2016, a single individual disrupted the entire ecosystem of the web by unpublishing a package of a mere 11 lines of code. Builds that depended on it, including those of React-based sites such as Facebook and Instagram…basically any website you visit these days, were broken for about 2 hours thanks to this deletion of only _11 lines_ of code.
Seems like nothing super impressive, right?
**_Wrong._**
The single fact that builds behind many of the Internet’s most-visited sites broke for that long, and the engineering time and money lost in the process, speaks volumes to just how fragile of an ecosystem we’re operating within. This piece of code, called "**left-pad**", was a package found within npm, consumed as a CJS module.
The beauty and downfall of npm-based packages, and in turn, CJS modules, is that anyone is able to alter (or delete) their packages operating within CJS boundaries, and if the vast majority of websites are using that very code you’ve written, either knowingly or blissfully ignorant, significant changes can happen in the blink of an eye.
I encourage you to [research](https://www.theregister.com/2016/03/23/npm_left_pad_chaos/) more into this event, to see the full gravity of this small change that disrupted countless operations.
This isn’t to say that if this code had been using ESM modules that this could’ve been avoided, I only point this out to show that regardless of how much security we try to incorporate into the tools we use, we very much rely on external factors, even if we know it or not.
So with that, let’s dive into how ESM and CJS incorporate their different versions of security:
---

**ESM** introduced a number of security enhancements, which I’ll touch on a few of them briefly.
The first big implementation was an improved _**module-specific scoping**_, which means whoever is importing a package has to define their own variables, as opposed to having variables brought into the global scope.
Another big security feature upgrade was the use of **_import maps_**. Import maps are aiming to replace the use of bundlers, such as webpack, by calling in the packages you need in the actual `<script>` tag in an HTML file. You simply link the URL within an `imports` property and this helps with the resolution of the module, as well as controlling the origin of where that module is coming from. Currently, only a single import map is allowed per document, but changes are in the pipeline to address this limitation.
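For illustration, an import map is just a JSON object inside a `<script type="importmap">` tag that maps bare specifiers to URLs; the CDN URL below is only an example:

```html
<script type="importmap">
  {
    "imports": {
      "lodash": "https://cdn.jsdelivr.net/npm/lodash-es@4.17.21/lodash.js"
    }
  }
</script>

<script type="module">
  // The bare specifier "lodash" now resolves through the map above
  import { shuffle } from "lodash";
  console.log(shuffle([1, 2, 3]));
</script>
```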
Now let’s get into the _extremely_ limited security features of the CJS module.
**CJS** offers a _little_ bit of _**encapsulation**_, but this is basically just from the encapsulation that JavaScript already offers natively from its scope within functions. So, not so much of a CJS-specific security implementation, it’s moreover a nice thing that happened to already be there.
The next security feature that was, in fact, implemented specifically for CJS is our trusty friend `package-lock.json`. This concept was introduced by npm to ensure **_reproducibility_** within any given module, so that what you got was going to function as the creator intended.
Although the beauty and terror within that statement is the latter; you’re at the behest of the creator. So by and large, most npm packages aren’t going to be malicious. But it only takes one bad actor to alter a single file and half of the Internet is down for an indeterminate amount of time.
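For a sense of what that reproducibility looks like, here is an illustrative `package-lock.json` excerpt (the version and the truncated hash are made up for the example); each dependency is pinned to an exact version, a resolved URL, and an integrity checksum:

```json
{
  "packages": {
    "node_modules/left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "integrity": "sha512-..."
    }
  }
}
```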
So with all of these competing factors...
---

Should we choose **_team ESM_** for their improved security features and the slowly encroaching takeover that all modules, both back and front end, are eventually going to succumb to?
Should we choose **_Team CJS_** for their tried and true methods standing the test of time and the ease of use that a vast majority of npm packages are based on?
Or should we all just go the **_nihilism route_** and finally realize either of these viewpoints are essentially meaningless?
Well, allow me to give you a _100%_ objective and absolutely **_NOT_** biased take:
---

Though it may seem like I’ve been framing either module systems as at odds against each other, the truth is that each system has their own use-case.
The main take-away here is that, if you’re writing a new app from scratch, try to implement ESM if you can. ESM has significant security improvements, and just the fact that you only need to use a single type of syntax to import and export for both the front end and the back end, is a pretty cool thing.
CJS has been the de-facto standard for a very long time and is still widely implemented within the entire ecosystem that we all call home. The fact that it’s existed, and still is thriving for this long, stands as a testament to just how well it’s doing its job. So if you’re working within an existing full stack application and the back end is already using CJS, it’s probably fine to stick with it. Plus, you can also [convert it to ESM](https://pawelgrzybek.com/all-you-need-to-know-to-move-from-commonjs-to-ecmascript-modules-esm-in-node-js/) pretty effortlessly.
Overall, both methods have their own use-cases and a single one isn’t necessarily better than the other and both will more than likely be around for a very long time.
---
And with that my friends, we’ve reached the end of this controversial talk. If there’s anything you take away from this, it’s that **_absolute power_** corrupts **_absolutely_** and ESM and CJS are friends, rather than foes.
Thank you for reading! ~~ **<3**
# Using an Existing Windows VM to Create and Attach a Data Disk and Initialise It for Use

## OS DISK
- The OS disk on a Windows VM refers to the virtual hard disk that contains the operating system. It’s the primary disk that a virtual machine (VM) boots from and is pre-installed with the selected OS when the VM is created.
- This disk includes the boot volume and it's recommended to store only the OS information on the disk.
## TEMPORARY DISK
- The Temporary disk is a storage space that is provided automatically with each VM
- This disk is located on the physical server where the VM is hosted and is intended for temporary data storage.
- Data on the temporary disk may be lost during a maintenance event, when you redeploy a VM, or when you stop the VM.
- On Azure Linux VMs, the temporary disk is typically /dev/sdb and on Windows VMs the temporary disk is D: by default.
## DATA DISK
- Data disk is a managed disk that's attached to a virtual machine to store application data instead of storing them on the OS also for storing other data you need to keep.
- The size of your virtual machine will determine how many data disks you can attach to it and the type of storage you can use to host the disks.
- Data disks offer the following benefits: improved backup and disaster recovery, more flexibility and scalability, performance isolation, easier maintenance, and improved security and access control.
Let's delve into the practical hands-on steps of how to attach a data disk and initialise it for use.
- Log in to the Azure portal, then open and connect to your Windows 11 VM
- Once connected, click on File Explorer as highlighted (red) in the image

- The File Explorer opens up this page; observe the disks available. You will notice there are 3 disks:
Disk (C:) - the Windows OS disk, Disk (D:) - the temporary disk, and Disk (E:) - the DVD drive.
There isn't a data disk yet. We can attach one either by creating an independent data disk and attaching it to an existing VM, or by creating and attaching the disk from the VM itself.

- From the portal, open the existing Windows VM; from the menu click Settings, then Disks, then "Create and attach a new disk"


- Give the disk a name and pick a storage type (e.g. Standard SSD or Premium SSD) according to your needs. You can also increase or decrease the size of your disk from 4 to 8 GiB or more, then click Apply
- Once you apply, Azure creates the disk and updates the VM
- Now connect back to the VM, right-click the Windows icon, then click Disk Management

- Click OK in the Initialize Disk prompt that appears, to initialise the disk
- Scroll down the disks, right-click on "Disk 2", and click "New Simple Volume" from the context menu that pops up

- When the New Simple Volume Wizard appears, click "Next" through each step, then "Finish"

- Once the initialisation is complete, it opens the page below.

- You can close this page and click on File Explorer, then click on This PC; it will open the page that displays the disks, and you will now see the New Volume (F:)

And voilà!! Your data disk is ready for use
- You can click on the disk to rename it to suit your needs, and also upload files from your local machine
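If you prefer scripting over clicking through the portal, the same create-and-attach step can be done with the Azure CLI; the resource group, VM, and disk names below are placeholders, and the disk still has to be initialised inside Windows afterwards as described above:

```shell
# Create a new 8 GiB Standard SSD data disk and attach it to the VM in one step
az vm disk attach \
  --resource-group MyResourceGroup \
  --vm-name MyWindows11VM \
  --name myDataDisk \
  --new \
  --size-gb 8 \
  --sku StandardSSD_LRS
```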
# Getting Started with TypeScript: Type Annotations & Type Inference (Part I)

Type annotations and type inference are two different systems in TypeScript, but in this post we will talk about them in parallel.
## Type Annotations
**Code we add to tell TypeScript what type of value a variable will refer to**
_We tell typescript the type_
## Type Inference
**Typescript tries to figure out what type of value a variable refers to**
_Typescript guesses the type_
---
The type annotation system is kind of at odds with the type inference, so we are going to first understand type annotations and then come along and understand how type inference comes into play.
To make things a little bit more complicated, know that these two different features apply slightly differently to
1. Variables
2. Functions
3. Objects
Let's first talk about how they apply to variables, with examples.
```typescript
let apples: number = 5;
```
In this simple example **:number** is type annotation.
If we were to set the value of apples to a boolean we would get an error in this case.
```typescript
const apples: number = true;
```
`Error: Type 'boolean' is not assignable to type 'number'.ts(2322)`

Similarly,
```typescript
let apples: number = 5;
apples = 'asdas';
```
`Type 'string' is not assignable to type 'number'.ts(2322)`

```typescript
let banana = 5; // type inference
let banana: number; // type annotation
let banana: number = 5; // type annotation
```
If the declaration and initialization are on the same line without type annotation, typescript will figure out the type of variable for us
```
[Variable declaration] [Variable initialization]
const color = 'red'
```
Objects:
```typescript
// Type inference
let point = {
  x: 10,
  y: 20
}

// Type annotation
let point: { x: number, y: number } = {
  x: 10,
  y: 20
}
```
Functions:
```typescript
// No annotation (the parameter 'i' is implicitly 'any')
const logNumber = (i) => {
  console.log(i)
}

// Type annotation
const logNumber: (i: number) => void = (i: number) => { // [(i: number) => void] is the annotation here
  console.log(i)
}
```
When to use annotations with functions
- Function that returns 'any' type
```typescript
const json = '{"x": 10, "y": 20}'
const coordinates = JSON.parse(json)
console.log(coordinates) // {x:10, y:20}
```
We need to understand how JSON.parse works for different values
| Passed Value       | JSON.parse() | Return Value    |
|--------------------|--------------|-----------------|
| 'false'            | JSON.parse() | boolean         |
| '4'                | JSON.parse() | number          |
| '{"value": 5}'     | JSON.parse() | {value: number} |
| '{"name": "alex"}' | JSON.parse() | {name: string}  |
_TypeScript decides to set the return type to `any`, meaning it has no idea what type will be returned._
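The fix follows directly: annotate the variable that receives the result of `JSON.parse`, so the `any` value is pinned to the shape we expect (the `{ x: number; y: number }` type matches the JSON string above).

```typescript
const json = '{"x": 10, "y": 20}';

// Without the annotation, 'coordinates' would be 'any' and
// TypeScript could not catch mistakes like coordinates.z
const coordinates: { x: number; y: number } = JSON.parse(json);

console.log(coordinates.x + coordinates.y); // 30
```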
## Why and when to use type annotation or type inference
#### Type annotations
- When we declare a variable on one line, then initialize it later
- When we want a variable to have a type that can't be inferred
- When a function returns the 'any' type and we need to clarify the value

#### Type inference
- Always, whenever we declare and initialize a variable on the same line
Let's wrap up the first part of type annotations and type inference in TypeScript here. We will discuss them further in the next post. 💯
# The HTML tags I use the most in my projects

If you are interested in learning about web development, the first topic you will probably come across will be HTML. When it is first presented to you, it might look like a lot to learn, and that's normal; there is always a lot to learn in the tech industry. The hard truth is that you cannot be in tech without growing or learning, and what matters most is what you learn and how you learn it.
There is a trinity reigning over the Web kingdom: HTML, CSS and JavaScript. And you will almost always learn them in that order. In today's topic, let's learn about HTML.
### What is HTML?
It stands for HyperText Markup Language. It is the language used to create documents on the Web.
### What does it provide?
We cannot talk about building for the Web without talking about HTML, for the following reasons:
- Web pages: HTML is the foundation you need in order to create web pages. And if you can create one page then why not a website which is a collection of web pages.
- Structure: HTML will give a structure to your content by the use of semantic tags such as links, paragraphs, navigation etc.
- Easy to understand: HTML's tags are basic English. It's almost easy to guess the tags and the syntax too is pretty easy to understand.
- Navigation: with HTML, we can create hyperlinks which will help us navigate amongst our web pages.
## The tags I use the most.
There are so many tags available to learn, but as humans, we cannot learn all. Here are the little tags I am using almost on daily basis:
- Headings: They vary from `<h1>` to `<h6>` and they are used as section's titles and subtitles. `<h1>` being the biggest, it's advisable to only use one `<h1>` per page for the main title.
- Paragraphs (`<p></p>`): It's one of the most used ones. I mean, who builds a site or a web app without text content? The `<p>` tag will hold the text I need to display on the page.
- Buttons (`<button></button>`): whether I want to navigate to another section of the page or a new page altogether or even trigger actions (like submitting a form), clicking a button is probably what I will do. They are very important tags.
- Ul (`<ul></ul>`) and li(`<li></li>`): Navigation bars are mostly done with these elements. All I need is to style them to fit my needs in terms of design.
- Anchor `<a></a>`: typically used to navigate from one page or resource to another. They are usually designed like buttons but they are not.
- Division `<div></div>`: a very popular tag of mine too is the division tag. It is a box that gives so much flexibility in terms of what it can hold. We can group so many other tags in a div and style or manipulate that group the way we need.
- Image `<img/>`: this tag holds images in many formats. It is used a lot for logos or pictures.
- Input (`<input/>`): this tag is used to help users interact with the page. This tag has different types which allow people to enter data such as numbers, texts, emails etc.
- Form `<form></form>`: Last but not least on our post is the form tag. This tag is used to collect and submit the data collected to a server. Whether you are building a website or a web app, you will probably need it to collect data.
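To see these tags working together, here is a minimal page sketch (the file names, links and text are placeholders):

```html
<!DOCTYPE html>
<html lang="en">
  <body>
    <h1>My Portfolio</h1>

    <ul>
      <li><a href="#about">About</a></li>
      <li><a href="#contact">Contact</a></li>
    </ul>

    <div id="about">
      <img src="me.jpg" alt="A photo of me" />
      <p>Hi, I build things for the Web.</p>
    </div>

    <form id="contact" action="/subscribe" method="post">
      <input type="email" name="email" placeholder="you@example.com" />
      <button type="submit">Subscribe</button>
    </form>
  </body>
</html>
```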
We are through with this post, but it's important to understand that there are so many other essential tags to know out there. You can share the tags you use the most in your projects in the comments section, and I will be happy to learn new things.
# 17 Libraries to Become a React Wizard 🧙♂️🔮✨

React is getting better, especially with the latest release of React 19.
Today, we're going to dive into 17 React libraries that will help you become a more productive developer and help you achieve React wizardry! Don't forget to bookmark this article and star these awesome open-source projects.
This list might surprise you, so let's jump in and become React Wizards.

---
## 1. [CopilotKit](https://github.com/CopilotKit/CopilotKit) - 10x easier to build AI Copilots.
You will agree that it's tough to add AI features in React; that's where CopilotKit helps you, as a framework for building custom AI copilots.
You can build in-app AI chatbots and in-app AI agents with the simple components provided by CopilotKit, which is at least 10x easier compared to building them from scratch.
You shouldn't reinvent the wheel if there is already a very simple and fast solution!
They also provide built-in (fully-customizable) Copilot-native UX components like `<CopilotKit />`, `<CopilotPopup />`, `<CopilotSidebar />`, `<CopilotTextarea />`.

Get started with the following npm command.
```npm
npm i @copilotkit/react-core @copilotkit/react-ui
```
This is how you can integrate a Chatbot.
A `CopilotKit` must wrap all components which interact with CopilotKit. It’s recommended you also get started with `CopilotSidebar` (you can swap to a different UI provider later).
```javascript
"use client";
import { CopilotKit } from "@copilotkit/react-core";
import { CopilotSidebar } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";
export default function RootLayout({ children }) {
  return (
    <CopilotKit url="/path_to_copilotkit_endpoint/see_below">
      <CopilotSidebar>
        {children}
      </CopilotSidebar>
    </CopilotKit>
  );
}
```
You can read the [docs](https://docs.copilotkit.ai/getting-started/quickstart-textarea) and check the [demo video](https://github.com/CopilotKit/CopilotKit?tab=readme-ov-file#demo).
You can integrate Vercel AI SDK, OpenAI APIs, Langchain, and other LLM providers with ease. You can follow this [guide](https://docs.copilotkit.ai/getting-started/quickstart-chatbot) to integrate a chatbot into your application.
The basic idea is to build AI Chatbots very fast without a lot of struggle, especially with LLM-based apps.
You can watch the complete walkthrough!
{% embed https://youtu.be/VFXdSQxTTww %}
CopilotKit has recently crossed 7k+ stars on GitHub with 300+ releases which is a significant milestone.

{% cta https://github.com/CopilotKit/CopilotKit %} Star CopilotKit ⭐️ {% endcta %}
---
## 2. [Mantine Hooks](https://www.npmjs.com/package/@mantine/hooks) - react hooks for state and UI management.

How many times have you been stuck writing hooks from scratch?
Well, I'm never going to do it again, thanks to Mantine Hooks!
It's not efficient to write more code, since you would end up maintaining it later; it's better to use these production-level hooks to make your work a lot easier, plus each of them has a good number of options.
We shouldn't compare projects, but this may be the most broadly useful one on the list, since anyone can use it rather than writing hooks from scratch.
Trust me, getting 60+ hooks is a big deal considering they have a simple way for you to see the demo of each of the hooks with easy docs to follow.
Get started with the following npm command.
```npm
npm install @mantine/hooks
```
This is how you can use `useScrollIntoView` as part of mantine hooks.
```javascript
import { useScrollIntoView } from '@mantine/hooks';
import { Button, Text, Group, Box } from '@mantine/core';
function Demo() {
  const { scrollIntoView, targetRef } = useScrollIntoView<HTMLDivElement>({
    offset: 60,
  });

  return (
    <Group justify="center">
      <Button
        onClick={() =>
          scrollIntoView({
            alignment: 'center',
          })
        }
      >
        Scroll to target
      </Button>
      <Box
        style={{
          width: '100%',
          height: '50vh',
          backgroundColor: 'var(--mantine-color-blue-light)',
        }}
      />
      <Text ref={targetRef}>Hello there</Text>
    </Group>
  );
}
```
They almost have everything from local storage to pagination, to scroll view, intersection, and even some very cool utilities like eye dropper and text selection. This is damn too helpful!

You can read the [docs](https://mantine.dev/hooks/use-click-outside/).
They have more than 24.5k stars on GitHub, but that's not only for the hooks, because Mantine is also a full component library for React.
To get a better idea of its reach, the hooks package has 340k+ weekly downloads, along with a `v7` release, which proves its credibility.
{% cta https://github.com/mantinedev/mantine %} Star Mantine Hooks ⭐️ {% endcta %}
---
## 3. [React Email](https://github.com/resend/react-email) - Build and send emails using React.

Email might be the most important medium for people to communicate. However, we need to stop developing emails like it's 2010 and rethink how email can be done in 2022 and beyond. It should be modernized for the way we build web apps today.
Are you building software and wish to send emails using code? You must have heard about Resend (one of the top products of 2023). They offer a simple solution to build and send emails using React.
A collection of high-quality, unstyled components for creating beautiful emails using React and TypeScript.
It reduces the pain of coding responsive emails with dark mode support. It also takes care of <q>inconsistencies between Gmail, Outlook, and other email clients</q> for you.

Get started with the following npm command, which sets everything up automatically for you.
It will create a new folder called `react-email-starter` with a few email templates that you can use.
```npm
npx create-email@latest
```
They have provided a set of standard components to help you build amazing emails without having to deal with the mess of creating layouts from scratch. Find the complete [list of components](https://react.email/docs/components/html) available.
You can install all the components using this command, or install individual components if you prefer.
```npm
npm install @react-email/components -E
```
Let's see a couple of examples.
> Button component.
```javascript
import { Button } from "@react-email/components";
const Email = () => {
  return (
    <Button href="https://example.com" style={{ color: "#61dafb" }}>
      Click me
    </Button>
  );
};
```
> Image component.
```javascript
import { Img } from "@react-email/components";
const Email = () => {
  return <Img src="cat.jpg" alt="Cat" width="300" height="300" />;
};
```
Email clients have this concept of `preview text` which gives insight into what’s inside the email before you open it. You can do it using the preview component they have provided in the docs.
You can read the [docs](https://react.email/docs/introduction) and see the list of [open source templates (examples)](https://react.email/examples) built with React Email.

They have also documented how you can [switch from another email framework to react email](https://react.email/docs/getting-started/migrating-to-react-email).
In order to use React Email with any email service provider, you'll need to convert the components made with React into an HTML string. A decent number of integrations are available, along with a clear example of how to integrate React Email into your Node.js app by installing `@babel/preset-typescript` and adding a `.babelrc` config file.
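To make the idea of that conversion concrete, here's a tiny plain-JavaScript sketch of serializing a component-like tree into an HTML string. This is a simplified illustration only; React Email uses React's real renderer, and the `renderToHtml` helper here is hypothetical.

```javascript
// Minimal sketch: serialize a component-like tree into an HTML string.
// Illustration only; React Email relies on React's actual renderer.
function renderToHtml(node) {
  if (typeof node === "string") return node; // text node
  const { type, props = {}, children = [] } = node;
  const attrs = Object.entries(props)
    .map(([key, value]) => ` ${key}="${value}"`)
    .join("");
  const inner = children.map(renderToHtml).join("");
  return `<${type}${attrs}>${inner}</${type}>`;
}

const email = {
  type: "a",
  props: { href: "https://example.com" },
  children: ["Click me"],
};

console.log(renderToHtml(email));
// -> <a href="https://example.com">Click me</a>
```

Email providers only accept raw HTML, which is why this serialization step sits between your React components and the provider's API.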

It has 12k stars on GitHub and is used by 7k+ developers on GitHub. I'm a big fan of how they have made it so simple.
{% cta https://github.com/resend/react-email %} Star React Email ⭐️ {% endcta %}
---
## 4. [React Player](https://github.com/cookpete/react-player) - A React component for playing a variety of URLs.

If you're looking to include a video in your website, especially embedding from other websites like Vimeo then this component makes your job much easier.
A simple React component for playing a variety of URLs, including file paths, YouTube, Facebook, Twitch, SoundCloud, Streamable, Vimeo, Wistia, Mixcloud, DailyMotion, and Kaltura. You can see the list of [supported media](https://github.com/cookpete/react-player?tab=readme-ov-file#supported-media).
The maintenance of ReactPlayer is being taken over by Mux (a popular org), which keeps it in good hands.
Get started with the following npm command.
```npm
npm install react-player
```
This is how you can use this.
```javascript
import React from 'react'
import ReactPlayer from 'react-player'
// Render a YouTube video player
<ReactPlayer url='https://www.youtube.com/watch?v=LXb3EKWsInQ' />
// If you only ever use one type, use imports such as react-player/youtube to reduce your bundle size.
// like this: import ReactPlayer from 'react-player/youtube'
```
You can also use `react-player/lazy` to lazy load the appropriate player for the URL you pass in. This adds several reactPlayer chunks to your output but reduces your main bundle size.
```javascript
import React from 'react'
import ReactPlayer from 'react-player/lazy'
// Lazy load the YouTube player
<ReactPlayer url='https://www.youtube.com/watch?v=ysz5S6PUM-U' />
```
You can read the [docs](https://github.com/cookpete/react-player?tab=readme-ov-file#props) and see the [demo](https://cookpete.github.io/react-player/). They provide plenty of options, including adding subtitles and making the player responsive. You can also test a custom URL in the demo, with a bunch of options to play with!
They have 9k+ stars on GitHub, are used by 149k+ developers, and have a massive [680k+ weekly downloads](https://www.npmjs.com/package/react-player) on the npm package.
{% cta https://github.com/cookpete/react-player %} Star React Player ⭐️ {% endcta %}
---
## 5. [Replexica](https://github.com/replexica/replexica) - AI-powered i18n toolkit for React.

The struggle with localization is definitely real, so having a little AI help with that is worth looking at.
Replexica is an i18n toolkit for React that helps you ship multi-language apps fast. It doesn't require extracting text into JSON files, and it uses an AI-powered API for content processing.
Replexica is a platform, not a library. It's like having a team of translators and localization engineers working for you but without the overhead. All you need is an API key and voila!
A couple of exciting features that make it all worth it.
✅ Replexica automatically translates your app into multiple languages.
✅ Replexica ensures that translations are accurate and contextually correct, that they fit in the UI, and it aims to translate better than a human would.
✅ Replexica keeps your app localized as you add new features (more like a continuous localization).
It has these two parts:
1. Replexica Compiler - an open-source compiler plugin for React.
2. Replexica API - an i18n API in the cloud that performs translations using LLMs. (Usage-based, it has a free tier)
Some of the i18n formats supported are:
1. JSON-free Replexica compiler format.
2. .md files for Markdown content.
3. Legacy JSON and YAML-based formats.
They also made an official announcement on DEV when they reached 500 stars. I was one of the first readers (it had fewer than 3 reactions at the time).
They cover a lot of things so you should read [We Got 500 Stars What Next](https://dev.to/maxprilutskiy/we-got-500-github-stars-whats-next-2njc) by Max.
To give you a general idea of Replexica, here's the only change needed in a basic Next.js app to make it multi-language.
Get started with the following npm command.
```npm
# install
pnpm add replexica @replexica/react @replexica/compiler

# login to Replexica API
pnpm replexica auth --login
```
This is how you can use this.
```javascript
// next.config.mjs

// Import Replexica Compiler
import replexica from '@replexica/compiler';

/** @type {import('next').NextConfig} */
const nextConfig = {};

// Define Replexica configuration
/** @type {import('@replexica/compiler').ReplexicaConfig} */
const replexicaConfig = {
  locale: {
    source: 'en',
    targets: ['es'],
  },
};

// Wrap Next.js config with Replexica Compiler
export default replexica.next(
  replexicaConfig,
  nextConfig,
);
```
You can read the [quickstart guide](https://docs.replexica.com/quickstart) and also clearly documented stuff on [what is used under the hood](https://github.com/replexica/replexica?tab=readme-ov-file#whats-under-the-hood).
Replexica Compiler supports Next.js App Router, and Replexica API supports English 🇺🇸 and Spanish 🇪🇸. They are planning to release Next.js Pages Router + French 🇫🇷 language support next!
They have 950+ stars on GitHub and are built on TypeScript. A project that will save you a lot of time!
{% cta https://github.com/replexica/replexica %} Star Replexica ⭐️ {% endcta %}
---
## 6. [Victory](https://github.com/FormidableLabs/victory) - React components for building interactive data visualizations.

A lot of developers work on a lot of data these days (mostly using APIs). So, a method to easily visualize that data is a cool concept that can take the app to the next level.
Victory is an ecosystem of composable React components for building interactive data visualizations.

Get started with the following npm command.
```npm
npm i --save victory
```
This is how you can use this.
```javascript
<VictoryChart
  domainPadding={{ x: 20 }}
>
  <VictoryHistogram
    style={{
      data: { fill: "#c43a31" }
    }}
    data={sampleHistogramDateData}
    bins={[
      new Date(2020, 1, 1),
      new Date(2020, 4, 1),
      new Date(2020, 8, 1),
      new Date(2020, 11, 1)
    ]}
  />
</VictoryChart>
```
This is how it's rendered. They also offer animations and theme options, which are generally useful.

You can read the [docs](https://commerce.nearform.com/open-source/victory/docs) and follow the [tutorial](https://commerce.nearform.com/open-source/victory/docs/native) to get started. They provide around 15 different chart options with insane customization under each of them; it's almost unbelievable!
It's also available for [React Native (docs)](https://commerce.nearform.com/open-source/victory/docs/native), so that's a plus point. I would also recommend checking out their [FAQs](https://commerce.nearform.com/open-source/victory/docs/faq#frequently-asked-questions-faq), where they describe solutions to common problems, with code and explanations, such as styling, annotations (labels), and handling axes.
It's on the `v37` release, which is huge in general (I didn't expect that), haha!
The project has 10.5k+ Stars on GitHub and is used by 24k+ developers on GitHub.
{% cta https://github.com/FormidableLabs/victory %} Star Victory ⭐️ {% endcta %}
---
## 7. [Tremor](https://github.com/tremorlabs/tremor) - React components to build charts and dashboards.

Tremor provides 20+ open-source React components for building charts and dashboards, built on top of Tailwind CSS, to make visualizing data simple again.

Get started with the following npm command.
```npm
npm i @tremor/react
```
This is how you can use Tremor to build things quickly.
```javascript
import { Card, ProgressBar } from '@tremor/react';

export default function Example() {
  return (
    <Card className="mx-auto max-w-md">
      <h4 className="text-tremor-default text-tremor-content dark:text-dark-tremor-content">
        Sales
      </h4>
      <p className="text-tremor-metric font-semibold text-tremor-content-strong dark:text-dark-tremor-content-strong">
        $71,465
      </p>
      <p className="mt-4 flex items-center justify-between text-tremor-default text-tremor-content dark:text-dark-tremor-content">
        <span>32% of annual target</span>
        <span>$225,000</span>
      </p>
      <ProgressBar value={32} className="mt-2" />
    </Card>
  );
}
```
This is what will be generated as an output.

You can read the [docs](https://www.tremor.so/docs/getting-started/installation) and see the [list of components](https://www.tremor.so/components). By the way, they use Remix Icons under the hood.
From the variety of components that I've seen, it's a good starting point. Trust me!

They have a concept of [tremor blocks](https://blocks.tremor.so/) which gives you access to 250+ carefully crafted blocks and every template to build dashboards, apps, and websites even faster.

Tremor also provides a [clean UI kit](https://www.figma.com/community/file/1233953507961010067). How cool is that!

Tremor has 15.5k+ Stars on GitHub, is used by 8k+ developers on GitHub, and has more than 330 releases which means it's continuously improving.
{% cta https://github.com/tremorlabs/tremor %} Star Tremor ⭐️ {% endcta %}
---
## 8. [React Slick](https://github.com/akiran/react-slick) - React carousel component.

React Slick is a carousel component built with React. It is a React port of the slick carousel.
Around 2 years back, I was building a carousel using a JS library, and it was one hell of a task; plus, it wasn't production-level code. I would prefer this over simpler libraries, or building from scratch, any day!
Get started with the following npm command.
```npm
npm install react-slick --save
```
This is how you can use custom pagination.
```javascript
import React, { Component } from "react";
import Slider from "react-slick";
import { baseUrl } from "./config";

function CustomPaging() {
  const settings = {
    customPaging: function (i) {
      return (
        <a>
          <img src={`${baseUrl}/abstract0${i + 1}.jpg`} />
        </a>
      );
    },
    dots: true,
    dotsClass: "slick-dots slick-thumb",
    infinite: true,
    speed: 500,
    slidesToShow: 1,
    slidesToScroll: 1
  };

  return (
    <div className="slider-container">
      <Slider {...settings}>
        <div>
          <img src={baseUrl + "/abstract01.jpg"} />
        </div>
        <div>
          <img src={baseUrl + "/abstract02.jpg"} />
        </div>
        <div>
          <img src={baseUrl + "/abstract03.jpg"} />
        </div>
        <div>
          <img src={baseUrl + "/abstract04.jpg"} />
        </div>
      </Slider>
    </div>
  );
}

export default CustomPaging;
```

You can read about the [prop options](https://react-slick.neostack.com/docs/api) and [methods](https://react-slick.neostack.com/docs/api#methods) that are available.
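One prop worth calling out is `responsive`, which swaps in different settings per viewport breakpoint. Here's a sketch of how it's shaped (the breakpoint values below are just examples):

```javascript
// Sketch of react-slick's `responsive` prop: each entry overrides the
// base settings once the viewport is at or below its `breakpoint`.
// The breakpoint numbers here are illustrative, not recommendations.
const settings = {
  slidesToShow: 3,
  slidesToScroll: 1,
  responsive: [
    { breakpoint: 1024, settings: { slidesToShow: 2 } },
    { breakpoint: 600, settings: { slidesToShow: 1 } },
  ],
};
```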
You can read the [docs](https://react-slick.neostack.com/docs/get-started) and all [sets of examples](https://react-slick.neostack.com/docs/example/) with complete code & output. Makes the work a lot easier, right?

They have 11.5k+ Stars on GitHub and are used by 400k+ developers on GitHub.
{% cta https://github.com/akiran/react-slick %} Star React Slick ⭐️ {% endcta %}
---
## 9. [React Content Loader](https://github.com/danilowoz/react-content-loader) - SVG-powered component to easily create skeleton loadings.

I often visit websites where nothing shows up while the page loads. I know it's loading because I'm a developer, but not everyone is, so many users don't even know what's happening.
That is why skeleton loaders are important, especially for the loading state.
This project provides you with an SVG-powered component to easily create placeholder loadings (like Facebook's cards loading).
Skeletons are often used during the loading state to indicate to users that content is still loading. Every developer should use this handy project to improve the overall UX of the app.
A few things that make this one better!
✅ Lightweight - less than 2kB and 0 dependencies for the web version
✅ Feel free to change the colors, speed, sizes, and even RTL.
✅ It supports React Native with the common API and powerful features.
Get started with the following npm command.
```npm
npm i react-content-loader --save
```
This is how you can use it.
```javascript
import React from "react"
import ContentLoader from "react-content-loader"

const MyLoader = (props) => (
  <ContentLoader
    speed={2}
    width={400}
    height={160}
    viewBox="0 0 400 160"
    backgroundColor="#f3f3f3"
    foregroundColor="#ecebeb"
    {...props}
  >
    <rect x="48" y="8" rx="3" ry="3" width="88" height="6" />
    <rect x="48" y="26" rx="3" ry="3" width="52" height="6" />
    <rect x="0" y="56" rx="3" ry="3" width="410" height="6" />
    <rect x="0" y="72" rx="3" ry="3" width="380" height="6" />
    <rect x="0" y="88" rx="3" ry="3" width="178" height="6" />
    <circle cx="20" cy="20" r="20" />
  </ContentLoader>
)

export default MyLoader
```
The output of this is shown below.

You can even drag the individual skeleton shapes around, or use pre-defined loaders made for different socials like Facebook and Instagram.
You can read the [docs](https://github.com/danilowoz/react-content-loader?tab=readme-ov-file#gettingstarted) and see the [demo](https://skeletonreact.com/).

The project has 13.4k+ Stars on GitHub and is used by 47k+ developers on GitHub.
{% cta https://github.com/danilowoz/react-content-loader %} Star React Content Loader ⭐️ {% endcta %}
---
## 10. [React Hot Toast](https://github.com/timolins/react-hot-toast) - Smoking Hot React Notifications.

Almost every React developer would have heard about this one, that's how famous it really is!
The reason why I'm still including this is because it offers a blazing default experience with easy customization options.
✅ It leverages a Promise API for automatic loaders, ensuring smooth transitions.
✅ Plus, it's lightweight at under 5kB and remains accessible, while giving developers options like the headless `useToaster()` hook.
Add the `<Toaster />` to your app first; it will take care of rendering all emitted notifications. Then you can trigger `toast()` from anywhere!
Get started with the following npm command.
```npm
npm install react-hot-toast
```
This is how easy it is to use.
```javascript
import toast, { Toaster } from 'react-hot-toast';

const notify = () => toast('Here is your toast.');

const App = () => {
  return (
    <div>
      <button onClick={notify}>Make me a toast</button>
      <Toaster />
    </div>
  );
};
```


They have lots of customization options, but the `useToaster()` hook gives you a headless system that manages the notification state for you. This makes building your own notification system much easier.
You can read the [docs](https://react-hot-toast.com/docs), the [styling guide](https://react-hot-toast.com/docs/styling) and see the [demo](https://react-hot-toast.com/).
The project has 9.3k+ stars on GitHub and is used by 290k+ developers on GitHub. It's simple and works really well!
{% cta https://github.com/timolins/react-hot-toast %} Star React Hot Toast ⭐️ {% endcta %}
---
## 11. [aHooks](https://github.com/alibaba/hooks) - high quality & reliable React Hooks library.

This is another handy hooks library, similar to the one we discussed earlier. I've included it because it's another popular option among developers.
ahooks is an easy-to-use and reliable React Hooks library. It's written in TypeScript with predictable static types, and it has around 50+ hooks from what I can see!
✅ Contains a large number of advanced Hooks that are refined from business scenarios.
✅ It supports SSR and special treatment for functions, avoiding closure problems.
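The "closure problem" mentioned above happens when a callback keeps seeing a stale value it captured earlier. Here's a plain-JavaScript illustration of stale vs. live reads (a conceptual demo, not ahooks code):

```javascript
// A stale closure captures a value at creation time; a "live" reader
// re-reads the variable on every call. Hooks like useMemoizedFn exist
// to give you a stable function identity without this staleness.
let count = 0;

const snapshot = count;            // captured once, never updates
const readStale = () => snapshot;  // always returns the old value
const readLive = () => count;      // reads the current value

count = 5;

console.log(readStale()); // -> 0 (stale)
console.log(readLive());  // -> 5 (current)
```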
Get started with the following npm command.
```npm
npm install --save ahooks
```
For instance, this is how you can use `useLocalStorageState` which is used to store data. You don't even need to know about local storage in order to use this hook. Read more about the [options](https://ahooks.js.org/hooks/use-local-storage-state#options) provided with this one.
```javascript
import React from 'react';
import { useLocalStorageState } from 'ahooks';

export default function () {
  const [message, setMessage] = useLocalStorageState<string | undefined>(
    'use-local-storage-state-demo1',
    {
      defaultValue: 'Hello~',
    },
  );

  return (
    <>
      <input
        value={message || ''}
        placeholder="Please enter some words..."
        onChange={(e) => setMessage(e.target.value)}
      />
      <button style={{ margin: '0 8px' }} type="button" onClick={() => setMessage('Hello~')}>
        Reset
      </button>
      <button type="button" onClick={() => setMessage(undefined)}>
        Clear
      </button>
    </>
  );
}
```
Check the [complete list of hooks](https://ahooks.js.org/hooks/use-request/index). In each of those, you get code sandbox links, examples, code, props, and parameters you can use to customize it.
They have a lot of options covering web sockets, the DOM, events, and effects, plus some advanced ones like `useIsomorphicLayoutEffect` and `useMemoizedFn`. Overall, it's very handy, especially because you end up maintaining less code.
You can read the [quickstart guide](https://ahooks.js.org/guide/) and check it live on the [codesandbox](https://codesandbox.io/p/sandbox/demo-for-ahooks-forked-fg79k?file=%2Fsrc%2FApp.js).
If you're looking for alternatives, try these:
- ✅ [Use Hooks](https://github.com/uidotdev/usehooks) by ui.dev team.
- ✅ [Beautiful React Hooks](https://github.com/antonioru/beautiful-react-hooks).
I know, I've listed a lot of these, but try to use only one for now to avoid confusion and to get the hang of it!
It has 13.5k stars on GitHub and is used by 20k+ developers on GitHub.
{% cta https://github.com/alibaba/hooks %} Star aHooks ⭐️ {% endcta %}
---
## 12. [cmdk](https://github.com/pacocoursey/cmdk) - Fast, unstyled command menu React component.

This is a command menu React component that can also be used as an accessible combobox. You render items, and it filters and sorts them automatically. ⌘K (project name) supports a fully composable API so you can wrap items in other components or even as static JSX.
Get started with the following npm command.
```npm
pnpm install cmdk
```
This is how you can use this in general.
```javascript
import { Command } from 'cmdk'

const CommandMenu = () => {
  return (
    <Command label="Command Menu">
      <Command.Input />
      <Command.List>
        <Command.Empty>No results found.</Command.Empty>

        <Command.Group heading="Letters">
          <Command.Item>a</Command.Item>
          <Command.Item>b</Command.Item>
          <Command.Separator />
          <Command.Item>c</Command.Item>
        </Command.Group>

        <Command.Item>Apple</Command.Item>
      </Command.List>
    </Command>
  )
}
```
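Because ⌘K filters and sorts items for you, you rarely write this logic yourself. Conceptually, though, the ranking resembles this plain-JavaScript sketch (a naive stand-in, not cmdk's actual scoring algorithm; `filterItems` is a made-up helper):

```javascript
// Naive command-menu filtering: keep items containing the query and
// rank exact prefix matches first. cmdk's real scorer is more nuanced.
function filterItems(items, query) {
  const q = query.toLowerCase();
  return items
    .filter((item) => item.toLowerCase().includes(q))
    .sort((a, b) => {
      const aPrefix = a.toLowerCase().startsWith(q) ? 0 : 1;
      const bPrefix = b.toLowerCase().startsWith(q) ? 0 : 1;
      return aPrefix - bPrefix;
    });
}

console.log(filterItems(["Grape", "Pineapple", "Apple"], "ap"));
// -> ["Apple", "Grape", "Pineapple"]
```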
You can read the [docs](https://github.com/pacocoursey/cmdk?tab=readme-ov-file#parts-and-styling) about parts and styling.
You can see how you can change it based on the styling you want.

<figcaption>raycast</figcaption>

<figcaption>linear</figcaption>

<figcaption>vercel</figcaption>

<figcaption>framer</figcaption>
You should see the list of [all the examples](https://github.com/pacocoursey/cmdk?tab=readme-ov-file#examples) and the [FAQs](https://github.com/pacocoursey/cmdk?tab=readme-ov-file#faq) which answers a lot of important questions.
It has almost 9k stars on GitHub and is a fairly young project (fewer than 130 commits).
{% cta https://github.com/pacocoursey/cmdk %} Star cmdk ⭐️ {% endcta %}
---
## 13. [React JSONSchema Form](https://github.com/rjsf-team/react-jsonschema-form) - for building Web forms from JSON Schema.

`react-jsonschema-form` automatically generates React forms from JSON Schema, making it ideal for generating forms for any data with just a JSON schema. It offers customization options like uiSchema to tailor the form's appearance beyond default themes.
Get started with the following npm command.
```npm
npm install @rjsf/core @rjsf/utils @rjsf/validator-ajv8 --save
```
This is how you can use this.
```javascript
import { render } from 'react-dom';
import Form from '@rjsf/core';
import { RJSFSchema } from '@rjsf/utils';
import validator from '@rjsf/validator-ajv8';

const schema: RJSFSchema = {
  title: 'Todo',
  type: 'object',
  required: ['title'],
  properties: {
    title: { type: 'string', title: 'Title', default: 'A new task' },
    done: { type: 'boolean', title: 'Done?', default: false },
  },
};

const log = (type) => console.log.bind(console, type);

render(
  <Form
    schema={schema}
    validator={validator}
    onChange={log('changed')}
    onSubmit={log('submitted')}
    onError={log('errors')}
  />,
  document.getElementById('app')
);
```
They provide [advanced customization](https://rjsf-team.github.io/react-jsonschema-form/docs/advanced-customization/) options including custom widgets.
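For example, here's what a `uiSchema` for the Todo form above could look like. This is a sketch; the widget and option names follow rjsf's docs, but treat the specifics as illustrative.

```javascript
// uiSchema tweaks presentation without touching the data schema.
// Keys mirror the schema's property names.
const uiSchema = {
  title: {
    'ui:autofocus': true,
    'ui:placeholder': 'What needs doing?',
  },
  done: {
    'ui:widget': 'radio', // render the boolean as radio buttons
  },
};

// Passed alongside the schema:
// <Form schema={schema} uiSchema={uiSchema} validator={validator} />
```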
You can read the [docs](https://rjsf-team.github.io/react-jsonschema-form/docs/) and check the [live playground](https://rjsf-team.github.io/react-jsonschema-form/).
It has 13k+ Stars on GitHub and is used by 5k+ developers. They are on the `v5` with 190+ releases so they're constantly improving.
{% cta https://github.com/rjsf-team/react-jsonschema-form %} Star React JSONSchema Form ⭐️ {% endcta %}
---
## 14. [React DND](https://github.com/react-dnd/react-dnd) - Drag & Drop for React.

I haven't fully implemented the drag-and-drop feature yet, and I often find myself confused about which option to choose.

I'm only covering React DND for now. Another option I've come across is [interactjs.io](https://interactjs.io/), which seems very useful based on the documentation I've read. React DND is pretty easy to pick up thanks to the detailed examples they have provided.
Get started with the following npm command.
```npm
npm install react-dnd react-dnd-html5-backend
```
Unless you're writing a custom backend, you probably want to use the HTML5 backend that ships with React DnD, which is the `react-dnd-html5-backend` package installed above. Read the [docs](https://react-dnd.github.io/react-dnd/docs/backends/html5).
This is the starting point.
```javascript
import React from 'react'
import { HTML5Backend } from 'react-dnd-html5-backend'
import { DndProvider } from 'react-dnd'

export default class YourApp extends React.Component {
  render() {
    return (
      <DndProvider backend={HTML5Backend}>
        {/* Your Drag-and-Drop Application */}
      </DndProvider>
    )
  }
}
```
This is how you can implement a drag-and-drop for a card very easily.
```javascript
// Let's make <Card text='Write the docs' /> draggable!
import React from 'react'
import { useDrag } from 'react-dnd'
import { ItemTypes } from './Constants'

export default function Card({ isDragging, text }) {
  const [{ opacity }, dragRef] = useDrag(
    () => ({
      type: ItemTypes.CARD,
      item: { text },
      collect: (monitor) => ({
        opacity: monitor.isDragging() ? 0.5 : 1
      })
    }),
    []
  )

  return (
    <div ref={dragRef} style={{ opacity }}>
      {text}
    </div>
  )
}
```
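The `ItemTypes` module imported above is just a shared constants file, so drag sources and drop targets agree on what's being dragged; something like:

```javascript
// Constants.js - shared drag type identifiers. In the real module this
// would be `export const ItemTypes = ...`; shown inline here for clarity.
const ItemTypes = {
  CARD: 'card',
};

console.log(ItemTypes.CARD); // -> card
```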
Please note that the HTML5 backend does not support touch events, so it will not work on tablets and mobile devices. You can use `react-dnd-touch-backend` for touch devices instead. Read the [docs](https://react-dnd.github.io/react-dnd/docs/backends/touch).
```javascript
import { TouchBackend } from 'react-dnd-touch-backend'
import { DndProvider } from 'react-dnd'

// `opts` holds your TouchBackend options
const YourApp = () => (
  <DndProvider backend={TouchBackend} options={opts}>
    {/* Your application */}
  </DndProvider>
)
```
This code sandbox demonstrates how to use React DND properly.
{% embed https://codesandbox.io/embed/3y5nkyw381?view=Editor+%2B+Preview&module=%2Fsrc%2Findex.tsx&hidenavigation=1 %}
You can see the [examples](https://react-dnd.github.io/react-dnd/examples) of React DND.
They even have a clean feature where you can inspect what's happening internally with Redux.
You can enable [Redux DevTools](https://github.com/reduxjs/redux-devtools) by adding a `debugMode` prop to your provider, with the value `true`.
```javascript
<DndProvider debugMode={true} backend={HTML5Backend}>
```
It offers a good variety of component options, which I'll need to test out for myself. Overall, it seems pretty good, especially if you're just starting.
React DND is licensed under `MIT`, has over 20k+ stars on GitHub, and is used by 232k+ developers on GitHub which makes it a perfect choice.
{% cta https://github.com/react-dnd/react-dnd %} Star React DND ⭐️ {% endcta %}
---
## 15. [Google Map React](https://github.com/google-map-react/google-map-react) - Google map library for react.

This provides `google-map-react` which is a component written over a small set of the [Google Maps API](https://developers.google.com/maps/).
✅ It allows you to render any React component on Google Maps.
✅ It is fully isomorphic and can render on a server.
✅ It can render map components in the browser even if the Google Maps API is not loaded.
✅ It uses an internal, tweakable hover algorithm: basically, every object on the map can be hovered.
It allows you to create interfaces where you can scroll a table, zoom/move the map, hover/click on markers, and even click on table rows.
I've seen a lot of developers just add it without much customization, which leads to a bad UX.
Get started with the following npm command.
```npm
npm install --save google-map-react
```
This is how you can use this.
```javascript
import React from "react";
import GoogleMapReact from 'google-map-react';

const AnyReactComponent = ({ text }) => <div>{text}</div>;

export default function SimpleMap() {
  const defaultProps = {
    center: {
      lat: 10.99835602,
      lng: 77.01502627
    },
    zoom: 11
  };

  return (
    // Important! Always set the container height explicitly
    <div style={{ height: '100vh', width: '100%' }}>
      <GoogleMapReact
        bootstrapURLKeys={{ key: "" }}
        defaultCenter={defaultProps.center}
        defaultZoom={defaultProps.zoom}
      >
        <AnyReactComponent
          lat={59.955413}
          lng={30.337844}
          text="My Marker"
        />
      </GoogleMapReact>
    </div>
  );
}
```
Make sure the container element has width and height. The map will try to fill the parent container, but if the container has no size, the map will collapse to 0 width/height (rule for adding Google Maps in general).
You can read the detailed [docs](https://github.com/google-map-react/google-map-react/blob/master/DOC.md) and [10+ examples](https://github.com/google-map-react/google-map-react?tab=readme-ov-file#examples).
It has 6k+ stars on GitHub and is used by 71k+ developers on GitHub.
{% cta https://github.com/google-map-react/google-map-react %} Star Google Map React ⭐️ {% endcta %}
---
## 16. [React Diagrams](https://github.com/projectstorm/react-diagrams) - a super simple, no-nonsense diagramming library written in React.

React Diagrams is a flexible, minimal diagramming library for React, aimed at building flow-style editors with rich, fully customizable HTML nodes.
Get started with the following command. The built package ships a `dist` folder that contains all the minified, production-ready code.
```npm
yarn add @projectstorm/react-diagrams
```
Let's see how we can use it.
```javascript
import createEngine, {
  DefaultLinkModel,
  DefaultNodeModel,
  DiagramModel
} from '@projectstorm/react-diagrams';

import {
  CanvasWidget
} from '@projectstorm/react-canvas-core';
```
We need to call `createEngine` which will bootstrap a DiagramEngine for us that contains all the default setups.
```javascript
// create an instance of the engine with all the defaults
const engine = createEngine();

// node 1
const node1 = new DefaultNodeModel({
  name: 'Node 1',
  color: 'rgb(0,192,255)',
});
node1.setPosition(100, 100);
let port1 = node1.addOutPort('Out');

// node 2
const node2 = new DefaultNodeModel({
  name: 'Node 2',
  color: 'rgb(0,192,255)',
});
node2.setPosition(400, 100);
let port2 = node2.addInPort('In');

// link the two ports of the nodes
// and add a label to the link
const link = port1.link<DefaultLinkModel>(port2);
link.addLabel('Hello World!');
```
Now we have set up a simple diagram.
All that's left to do is create a DiagramModel to contain everything, add all the elements to it, and then add it to the engine.
```javascript
const model = new DiagramModel();
model.addAll(node1, node2, link);
engine.setModel(model);
// we just need to render in react
<CanvasWidget engine={engine} />
```
You can read the [docs](https://projectstorm.gitbook.io/react-diagrams) and check the [live demo](https://projectstorm.cloud/react-diagrams/?path=/story/simple-usage--events-and-listeners).

The inspiration for this library is Joint JS (a fantastic library) + the need for rich HTML nodes + LabView + Blender Composite sub-system.
You can read about how [end to end testing is performed](https://projectstorm.gitbook.io/react-diagrams/about-the-project/testing) if you're curious but most of the open source projects are well-tested so you don't have to worry much. You can also read on how to make [custom nodes](https://projectstorm.gitbook.io/react-diagrams/customizing/nodes) and [custom ports](https://projectstorm.gitbook.io/react-diagrams/customizing/ports).
It has 8k+ stars on GitHub and was built using TypeScript.
{% cta https://github.com/projectstorm/react-diagrams %} Star React Diagrams ⭐️ {% endcta %}
---
## 17. [Refine](https://github.com/refinedev/refine) - open source Retool for Enterprise.

Refine is a meta React framework that enables the rapid development of a wide range of web applications.

From internal tools to admin panels, B2B apps, and dashboards, it serves as a comprehensive solution for building any type of CRUD application such as DevOps dashboards, e-commerce platforms, or CRM solutions.

You can set it up with a single CLI command in under a minute.
It has connectors for 15+ backend services including Hasura, Appwrite, and more.
You can see the full [list of integrations](https://refine.dev/integrations/) that are available.

But the best part is that Refine is `headless by design`, thereby offering unlimited styling and customization options.
Due to the architecture, you can use popular CSS frameworks like Tailwind CSS or create your own styles from scratch.
This is the best part, because we don't want to be constrained by styling or tied to particular UI libraries; everyone has their own style and uses different UIs.
Get started with the following npm command.
```npm
npm create refine-app@latest
```
This is how easy it is to add a login using Refine.
```javascript
import { useLogin } from "@refinedev/core";
const { login } = useLogin();
```
An overview of the structure of your codebase with Refine.
```javascript
const App = () => (
  <Refine
    dataProvider={dataProvider}
    resources={[
      {
        name: "blog_posts",
        list: "/blog-posts",
        show: "/blog-posts/show/:id",
        create: "/blog-posts/create",
        edit: "/blog-posts/edit/:id",
      },
    ]}
  >
    {/* ... */}
  </Refine>
);
```
You can read the [docs](https://refine.dev/docs/).
Some example applications built with Refine:
- [Fully-functional Admin Panel](https://example.admin.refine.dev/)
- [Refine powered different use-cases scenarios](https://github.com/refinedev/refine/tree/master/examples).
They even provide [templates](https://refine.dev/templates/), which is one reason so many users love Refine.

I was trying to find out more about them, and that is when I came across [their blog](https://refine.dev/blog/), which contains some decent articles. The one I liked the most was the React-admin vs Refine comparison!

They have around 25k+ stars on GitHub, and 1.5k+ developers use it in their projects.
{% cta https://github.com/refinedev/refine %} Star Refine ⭐️ {% endcta %}
---
I think being productive is a choice and it's on you (as a developer) to find better solutions for your use case.
Let me know which of the above libraries you loved the most.
Have a great day! Till next time.
| If you like this kind of stuff, <br /> please follow me for more :) | <a href="https://twitter.com/Anmol_Codes"><img src="https://img.shields.io/badge/Twitter-d5d5d5?style=for-the-badge&logo=x&logoColor=0A0209" alt="profile of Twitter with username Anmol_Codes" ></a> <a href="https://github.com/Anmol-Baranwal"><img src="https://img.shields.io/badge/github-181717?style=for-the-badge&logo=github&logoColor=white" alt="profile of GitHub with username Anmol-Baranwal" ></a> <a href="https://www.linkedin.com/in/Anmol-Baranwal/"><img src="https://img.shields.io/badge/LinkedIn-0A66C2?style=for-the-badge&logo=linkedin&logoColor=white" alt="profile of LinkedIn with username Anmol-Baranwal" /></a> |
|------------|----------|
Follow Copilotkit for more content like this.
{% embed https://dev.to/copilotkit %} | anmolbaranwal |
1,878,682 | Props drilling 📸 useContext() | Props Drilling Passing data from a parent component down through multiple levels of nested... | 0 | 2024-07-08T22:31:11 | https://dev.to/jorjishasan/props-drilling-usecontext-146 | ## Props Drilling
Passing data from a parent component down through multiple levels of nested child components via props is called props drilling. This can make the code hard to manage and understand as the application grows. It's not just a topic; it's a problem. How is it a problem?
In React, data flows from top to bottom, like this:
`Parent -> Children -> Grandchildren`
Now I will show you two cases. Each case represents a different data-flow pattern.
---
## Case 1 👇
**Description:** A hand-drawn illustration to help visualize props drilling:

**Code**
{% details 👉🏽 TopLevelComponent.jsx %}
```jsx
// TopLevelComponent.jsx
import React from 'react';
import IntermediateComponent1 from './IntermediateComponent1';
const TopLevelComponent = () => {
const user = { name: 'Jorjis Hasan', age: 22 };
return (
<div>
<h1>Top-Level Component</h1>
<IntermediateComponent1 user={user} />
</div>
);
};
export default TopLevelComponent;
// IntermediateComponent1.jsx
import React from 'react';
import IntermediateComponent2 from './IntermediateComponent2';
const IntermediateComponent1 = ({ user }) => {
return (
<div>
<h2>Intermediate Component 1</h2>
<IntermediateComponent2 user={user} />
</div>
);
};
export default IntermediateComponent1;
// IntermediateComponent2.jsx
import IntermediateComponent3 from './IntermediateComponent3';
const IntermediateComponent2 = ({ user }) => {
return (
<div>
<h3>Intermediate Component 2</h3>
<IntermediateComponent3 user={user} />
</div>
);
};
export default IntermediateComponent2;
// IntermediateComponent3.jsx
import EndComponent from './EndComponent';
const IntermediateComponent3 = ({ user }) => {
return (
<div>
<h4>Intermediate Component 3</h4>
<EndComponent user={user} />
</div>
);
};
export default IntermediateComponent3;
// EndComponent.jsx
const EndComponent = ({ user }) => {
return (
<div>
<h5>End Component</h5>
<p>Name: {user.name}</p>
<p>Age: {user.age}</p>
</div>
);
};
export default EndComponent;
```
{% enddetails %}
See how we had to pass data down through layers. For the sake of `EndComponent`'s needs, we had to pass `user` data through 3 extra components (IntermediateComponent1, IntermediateComponent2, IntermediateComponent3). This is absolutely not clean code.
---
## Case 2 🔽
**Description:** A hand-drawn illustration to help visualize Case 2 data flow.

**Code:**
Sorry! Sorry! Sorry!
I can't implement this by just passing props; even if I forced it, the code would not make sense.
Well, let's get to the best practices. We have two consistent solutions that can handle any complex data flow:
1. React's built-in `useContext` API
2. A state management library
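To make option 1 concrete, here is a minimal sketch of Case 1 rewritten with `useContext` (it reuses the Case 1 component names; this is one illustrative way to structure it, not the only one). The `user` object is provided once at the top, and `EndComponent` reads it directly, so no intermediate component has to touch it.

```jsx
import React, { createContext, useContext } from "react";

// Create the context once, at module level
const UserContext = createContext(null);

const TopLevelComponent = () => {
  const user = { name: "Jorjis Hasan", age: 22 };
  return (
    // Provide the value once at the top of the tree
    <UserContext.Provider value={user}>
      <IntermediateComponent1 />
    </UserContext.Provider>
  );
};

// Intermediate components no longer receive or forward `user`
const IntermediateComponent1 = () => <IntermediateComponent2 />;
const IntermediateComponent2 = () => <IntermediateComponent3 />;
const IntermediateComponent3 = () => <EndComponent />;

const EndComponent = () => {
  // Read the value directly, skipping every layer in between
  const user = useContext(UserContext);
  return (
    <div>
      <h5>End Component</h5>
      <p>Name: {user.name}</p>
      <p>Age: {user.age}</p>
    </div>
  );
};

export default TopLevelComponent;
```

Notice that the intermediate components shrink to one line each: they no longer care about data they never used.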
| jorjishasan | |
1,878,791 | Adopting Database Guardrails - Cultural Shift | To build continuous database reliability, you need to have database guardrails - the right tools,... | 0 | 2024-07-11T08:00:00 | https://www.metisdata.io/blog/adopting-database-guardrails-cultural-shift | sql, database, monitoring | To build continuous database reliability, you need to have [database guardrails](https://www.metisdata.io/blog/metis-your-ultimate-database-guardrail) - the right tools, processes, and mindset. However, it’s not only about these technical components. It’s a cultural shift that we all need to make. Let’s see why and how.
## Database Reliability Is a Must
Imagine your favorite streaming service suddenly losing your watchlist or your bank account mysteriously losing track of your savings - that's the nightmare we avoid with a reliable database. Stripe (the payment company) [faced an outage](https://news.ycombinator.com/item?id=10365798) due to changes in their database, Heroku (the hosting platform) faced [a similar outage](https://status.heroku.com/incidents/2558), and Atlassian (the management suite platform) [lost customers’ data](https://www.atlassian.com/engineering/post-incident-review-april-2022-outage). You can’t run your business successfully when your database is unreliable.
Achieving database reliability is not easy. We need to take care of many components, like data integrity, recoverability, performance, and security. However, we need to do it no matter who we are and what our business is. Our customers demand that. The reliability of a database directly impacts the efficiency, reputation, and success of an organization, making it an indispensable asset in today's data-driven world.
## You Can’t Do Database Reliability Without Database Guardrails
When building database reliability, we need to focus on all the aspects of our software development life cycle. Starting with the development, we need to analyze the code that talks to the database. We need to make sure that queries use the right [execution plans](https://www.metisdata.io/blog/reading-postgres-execution-plans-doesnt-have-to-be-so-complicated), [schema migrations](https://www.metisdata.io/blog/common-challenges-in-schema-migration-how-to-overcome-them) will not take the database down, and we do not send too many queries when one is enough. We need to include these tests in our CI/CD pipelines and run all of them before the deployment.
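As a minimal sketch of what such a pre-deployment check could look like (using SQLite's `EXPLAIN QUERY PLAN` as a lightweight stand-in for a production database; the table, index, and function names here are illustrative, and real guardrails do far more), a CI step can fail the build whenever a query's plan falls back to a full table scan:

```python
import sqlite3

def uses_index(conn, sql):
    """CI-style check: True if no step of the query plan is a full table scan."""
    plan = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    # Each plan row's detail string starts with either
    # 'SEARCH ...' (index lookup) or 'SCAN ...' (full scan).
    return not any(row[3].startswith("SCAN") for row in plan)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

# Indexed equality lookup: the planner reports SEARCH ... USING INDEX
assert uses_index(conn, "SELECT * FROM users WHERE email = 'a@b.c'")
# A leading-wildcard LIKE cannot use the index: the planner reports SCAN
assert not uses_index(conn, "SELECT * FROM users WHERE email LIKE '%b%'")
```

The same idea scales up to running `EXPLAIN` against a staging copy of your production schema as part of the pipeline.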
After deployment, we need to build [observability around our databases](https://www.metisdata.io/). We can’t just use metrics. We need to have database-oriented signals instead. We shouldn’t track CPU load per se. We should understand if our database consumes too much of the CPU to meet the business requirements. We need to constantly analyze our schemas, index usage, configurations, extensions, live queries, and everything else that affects the performance.
**Recommended reading:** [**How To Master PostgreSQL Performance Like Never Before**](https://www.metisdata.io/blog/how-to-master-postgresql-performance-like-never-before#optimizing-query-plans-in-postgresql)
That said, we need to focus on many aspects. We need the right solutions that can prevent bad code from reaching production, constantly [monitor the database](https://www.metisdata.io/product/monitoring) in a database-oriented way, [prevent](https://www.metisdata.io/product/prevention) issues from happening, or [troubleshoot](https://www.metisdata.io/product/troubleshooting) them automatically. This needs to be done in a human-oriented manner and be easy to use for humans, not machines. But the biggest aspect of database reliability is the cultural shift we need to make.
## You Must Change Your Culture
Performance is crucial to providing the best user experience, decreasing churn, and improving all SLAs. We need to make sure our queries are fast. We need to be fast to recover from bugs and have short MTTR. We need to be able to deploy often and have short development cycles. Everything needs to be fast.
The biggest killer in our organizations is not the tooling. It's the way we run our teams and share ownership. When developers don't own their databases, they need to ask other teams for help. Many teams need to communicate and run synchronous meetings or war rooms. This must change! Only one team should be responsible for the database: your development team.
Developers need to [own their databases](https://www.metisdata.io/blog/shift-left-for-devops-enhancing-service-ownership-in-databases). They need to be responsible and accountable for the database performance, reliability, changes, improvements, and security. Unfortunately, developers often lack knowledge and understanding. To aid that, we need to give them the tools described earlier.
## How to Drive This Change
It may sound surprising that we should give more ownership to our developers. They are already swamped with many tasks. However, more ownership leads to less work!
When developers do not own the databases, they need to communicate with other teams. They need to explain their requests, maintain Jira requests, and connect with other teams. This communication is tedious and time-consuming. By moving the ownership to developers, we can get rid of this communication entirely. Developers own more but work less.
**Recommended reading:** [**Transforming SDLC - A Must-Read For Platform Engineers & DBAs**](https://www.metisdata.io/blog/platform-engineers-must-change-developers-and-databases-and-here-is-how)
Similarly, when developers do not work on their databases, they lack working knowledge of the internals. The only way to let them learn and get better is to expose them to everything that happens in the database. However, we can’t just swamp them with raw signals and deep internals. We need to give them tools that can hide unimportant details and only present what’s crucial to make the right calls. Effectively, they can now do the work they would ask others to do, and they can do it faster.
## Metis to The Rescue
Metis gives you all the database guardrails you need! Whether it’s reviewing your code changes before the deployment, letting your teams own the databases, or reducing MTTR by efficient troubleshooting - Metis covers all these aspects. You need to have good tools to help you shift left the ownership and unblock your teams, and database guardrails are exactly what you need to achieve that.
It all starts with good tools. Later, you can build good processes that focus on [key performance indicators](https://www.metisdata.io/blog/database-monitoring-metrics-key-indicators-for-performance-analysis) in your organization. Ultimately, database guardrails are not just tooling. It's the cultural shift that matters and unblocks your full potential. We do all of that at Metis and can help you drive the change. Talk to us to learn more.
## Summary
Achieving database reliability is hard and we need to take care of many aspects. The biggest one is moving the ownership to the right teams. We can’t just swamp them with data and signals. We need to reduce their work around communication and bookkeeping and give them the right tools to see only the important database-oriented information. This is not only a technical change, but a much bigger cultural shift. | adammetis |
1,880,599 | Ibuprofeno.py💊| #135: Explica este código Python | Explica este código Python Dificultad: Intermedio x = {1,2,3} y =... | 25,824 | 2024-07-08T11:00:00 | https://dev.to/duxtech/ibuprofenopy-135-explica-este-codigo-python-5em2 | python, spanish, learning, beginners | ## **<center>Explain this Python code</center>**
#### <center>**Difficulty:** <mark>Intermediate</mark></center>
```py
x = {1,2,3}
y = {3,4,5}
print(x.symmetric_difference(y))
```
* **A.** `{}`
* **B.** `{3}`
* **C.** `{1, 2, 4, 5}`
* **D.** `{1, 2, 3, 4, 5}`
---
{% details **Answer:** %}
👉 **C.** `{1, 2, 4, 5}`
In this example, the symmetric difference extracts every element that is in `x` or `y` but not in both.
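You can verify this yourself: `symmetric_difference` also has an operator form, `^`, and is equivalent to the union minus the intersection.

```python
x = {1, 2, 3}
y = {3, 4, 5}

# Three equivalent ways to compute the symmetric difference
assert x.symmetric_difference(y) == {1, 2, 4, 5}
assert x ^ y == {1, 2, 4, 5}              # operator form
assert (x | y) - (x & y) == {1, 2, 4, 5}  # union minus intersection
```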
{% enddetails %} | duxtech |