Dataset columns (name: type, value or length range):
id: int64 (5 to 1.93M)
title: string (length 0 to 128)
description: string (length 0 to 25.5k)
collection_id: int64 (0 to 28.1k)
published_timestamp: timestamp[s]
canonical_url: string (length 14 to 581)
tag_list: string (length 0 to 120)
body_markdown: string (length 0 to 716k)
user_username: string (length 2 to 30)
1,712,274
Complete Object Oriented PHP & MVC 2024
Welcome to our advanced PHP course on building a custom MVC framework called PHPAdvance!...
0
2023-12-30T06:40:12
https://dev.to/adeleyeayodeji/complete-object-oriented-php-mvc-2024-34bp
webdev, programming, tutorial, beginners
{% embed https://www.youtube.com/watch?v=Ym4500z9mow %} Welcome to our advanced PHP course on building a custom MVC framework called PHPAdvance! Throughout this course, we will guide you through the step-by-step process of creating a lightweight MVC framework similar to Laravel but with a smaller footprint. This open-source framework, PHPAdvance, allows you the freedom to modify the name, add new features, and utilize it as your own.
adeleyeayodeji
1,712,298
Guide to Unlimited Laser Cladding
What Is Laser Cladding? Laser metal deposition, commonly referred to as laser cladding, involves...
0
2023-12-30T07:35:05
https://dev.to/arhamgbob/guide-to-unlimited-laser-cladding-5c6e
machinelearning
**What Is Laser Cladding?** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mlxo6cwhm7m62cm3t9z5.PNG) Laser metal deposition, commonly referred to as laser cladding, involves the deposition of one material onto another surface. This is achieved by introducing a metallic powder or wire into a melt pool generated by a scanning laser beam. The result is a coating of the chosen material on the target surface. There are several [laser cladding machines](https://haitianlasertech.com/laser-cladding-machine/) to perform these functions. This method enhances the surface characteristics of a component, including increased wear resistance, and facilitates the repair of damaged or worn surfaces. The process employs a highly precise welding technique to establish a robust mechanical connection between the base material and the newly deposited layer. **What makes laser cladding a crucial manufacturing technology in the contemporary landscape?** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/adclhn4guhc6qu5wls9c.PNG) [Laser cladding](https://haitianlasertech.com/) plays a pivotal role in enhancing the performance of industrial assets by creating protective layers that guard against wear and corrosion. Engineers have the flexibility to design components using generic base metal alloys, promoting the conservation of natural resources. Subsequently, specific areas of the component can be selectively laser clad with high-alloyed materials to impart the desired performance characteristics. Moreover, laser cladding serves as a valuable method for restoring and remanufacturing high-value components to their original specifications. Beyond simply restoring the shape of a part, the use of additive materials with superior wear resistance compared to the original component contributes to prolonged service life and enhanced overall performance. **Advantages Of Laser Cladding** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5baxe1puzvxhxgjihuja.PNG) In comparison to traditional coating techniques, laser cladding offers numerous advantages. These include the use of higher quality coating materials with superior bond strength and integrity, resulting in less distortion and dilution, as well as enhanced surface quality. The specific benefits encompass: Reduced laser exposure duration and depth. Establishment of a favorable metallurgical relationship between the layer and the base material. Enhanced durability compared to thermal spray coatings. Achieves good surface quality with minimal warpage, often requiring no additional post-processing. High energy efficiency and a rapid laser cladding process. Compatibility with a diverse range of materials for both substrate and layer, including custom alloys and metal matrix composite (MMC) designs. Minimal porosity within the deposits, exceeding 99.9% density. Formation of a narrow heat-affected zone (EHLA as low as 10m) due to relatively low heat input. Reduced need for corrective machining when the substrate experiences minimal deformation. **Classification of Laser Cladding** There are numerous iterations of laser cladding and its associated technology. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1u2hbr80jewuikhunpq.PNG) In the EHLA (Extreme High-Speed Laser Cladding) process, the powder is introduced into the focused laser beam's path above the substrate. 
This ensures that the deposited material is molten before it comes into contact with the substrate. However, a very shallow melt pool is retained on the substrate, enabling the deposited material to cool and solidify while in contact with the underlying material. This minimizes the amount of heat transferred to the component beneath and reduces the depth of dilution and heat effects. The limited dilution permits the creation of notably thinner coatings (20-300 µm) that achieve the required chemistry within 5-10 µm. This characteristic forms the basis for EHLA's ability to achieve high traversal speeds, often exceeding 100 m/min. **Application Of Laser Cladding** Laser cladding is useful for a wide range of industrial applications. These uses range from agriculture and aerospace to drilling, mining, and power generation. · Flanges · Seats · Wear Sleeves · Pumps · Glass Molds · Seal/Bearing Journals · Impellers · Rotor Shafts · Pump Shafts · Compressor Wheels · Gearbox Housing · Propeller Shafts · Exhaust Valves · Rolls · Crank Shafts · Engine Components · Mandrels **Drilling Tools** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zk6k76xmt42mm135qxi9.PNG) Efficient drilling instruments play a crucial role in unlocking oil and gas reserves. Given the harsh conditions they face, these tools would have limited lifespans without proper protection against wear. Consequently, the adoption of special coatings, increasingly facilitated by laser coating technology, has become a standard practice in the industry. **Protective Coatings on Hydraulic Cylinders for the Mining Sector** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ndta8sephnq1kcobb4a.PNG) The use of laser coating technology for hydraulic cylinders in mining operations, such as coal extraction, is an emerging industry. The cylinders' coatings deteriorate swiftly in the regional climate, leading to leaks and necessitating either replacement or recoating. Traditionally, chromium plating dominated this domain, but laser coatings are steadily supplanting it due to their superior durability. While the enhancement in longevity is not yet precisely quantified, current data indicate a lifespan increase exceeding 100%. **Cutting Tools** Utilizing layers of laser-clad materials proves effective in safeguarding saw blades, counter blades, disc harrows, and various cutting tools against wear and corrosion. This approach not only provides superior cutting properties but also ensures that the tools remain straight due to minimal distortion during the process. Moreover, the technique allows for variable coating thicknesses to cater to specific requirements. These coated tools find applications across diverse industries, including construction and agriculture. **Proven Solution for Hot Mill Rolls in the Steel Making Industry** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jnmmhwj1obu91bugc5qh.PNG) The process of rolling steel places significant demands on the components of rolling equipment, pushing their performance limits. Failures in rollers and other components can occur due to abrasion wear, heat stress, and galling, leading to downtime and quality issues. Given the wide temperature range involved, a solution with high hardness and low friction is essential. Haitian Laser Machinery offers a reliable solution to enhance the lifespan of hot mill rollers.
Customers adopting our laser-cladded carbide-based overlays have witnessed a remarkable 6x increase in service life compared to traditional methods like thermal spray or arc welding. This results in reduced operating costs and minimized downtime. **Bottom Punch/Magnet Dies** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jgbogymo06v873q96pl.PNG) In the production of magnets for motors, the magnet material is mixed into a slurry and introduced into a die. Subsequently, the slurry is compelled into the magnet die to expel any water or contaminants. It is crucial for these punches to feature non-magnetic tips to prevent interference with the magnet's polarity. Additionally, the dies are susceptible to substantial metal-to-metal wear. Traditionally, Stellite 6 was applied to the die's top surface using GTAW (Gas Tungsten Arc Welding). However, the dilution line between the base material (W2 steel) and the substrate (Stellite 6) exhibited irregularities, and the manual process was time-consuming, taking approximately one hour for each punch. To enhance efficiency, HaiTian devised an automated solution. This involved utilizing a coaxial powder feed laser head and custom programming to map the clad/layer routes. With this automated process, HaiTian achieved a Stellite 6 buildup with a well-defined dilution line between the parent and clad material, completing the cladding in 15-20 minutes instead of one hour. Consequently, this automated approach significantly improved both quality and efficiency. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ggkvawqexbsrqormtg1q.PNG) **Wind Turbine Centrifuge Hubs/Shafts** The wear and tear experienced by wind turbine hubs and shafts are attributed to the varying loads caused by fluctuations in wind speed and intensity. This wear can lead to premature gearbox failure, incurring substantial replacement costs. HaiTian addresses this issue by employing laser cladding on worn hubs, journals, and shafts, followed by machining to restore them to their original design specifications. The base materials utilized include stainless steels (410 SS, 420 SS, 440 SS), with 420 stainless steel serving as the primary cladding material. Careful control of preheating and process conditions ensures that the clad material is free from defects and possesses characteristics similar to or improved from the base material. **Steel Mill Rods** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rp5hzsc3cpxo7860q851.PNG) Rods in steel mills often undergo wear, with wear levels that, while generally minimal, can render the rod unusable. IBC Coatings Technologies addresses this by employing laser coating on worn rods. The base material utilized is high-strength steel (4140), and an overlay of 431SS martensitic stainless steel is applied to enhance the wear resistance of the substrate. **Types of Lasers Utilized in Laser Cladding** **1. Fiber Lasers** Fiber lasers utilize optical fibers to generate laser beams, renowned for their efficiency and reliability in laser cladding applications. **2. Diode Lasers** Diode lasers, employing semiconductor technology, produce laser radiation. These lasers, with variations in wavelength, power, and fiber type, find application in diverse laser cladding procedures, including certain types of prostatectomy. **3. CO2 Lasers** Carbon dioxide lasers, often referred to as CO2 lasers, generate laser beams through a gas mixture. 
They are particularly suited for cladding larger surface areas due to their characteristics. **4. Nd:YAG Lasers** Neodymium-doped yttrium aluminum garnet lasers use solid-state crystals to produce laser beams. Their adaptability makes them suitable for various laser cladding applications. **Laser Hardening** [laser hardening](https://haitianlasertech.com/laser-hardening-machine/) is a precision surface treatment method that boosts the hardness and wear resistance of metals. By focusing a high-intensity laser beam on specific areas, rapid heating and cooling transform the material's microstructure, increasing hardness. This targeted approach minimizes distortion and thermal damage, making it ideal for applications like gears and cutting tools in industries such as automotive and aerospace. The process ensures durable and wear-resistant components, enhancing overall product performance and longevity. **Laser Hardening Machine** A [laser hardening machine](https://haitianlasertech.com/laser-hardening-machine/) is a specialized tool designed for the precise application of laser hardening techniques to enhance the surface properties of materials, particularly metals. This advanced equipment utilizes a high-intensity laser beam to selectively heat specific areas of a material, followed by rapid cooling through quenching. The machine's precision allows for controlled and localized treatment, minimizing distortion and thermal damage to the surrounding material. Commonly used in industries such as automotive, aerospace, and manufacturing, laser hardening machines play a crucial role in improving the hardness and wear resistance of components like gears and cutting tools, contributing to increased durability and overall product performance.
arhamgbob
1,712,357
Lights, Camera, Code: A Blockbuster Streaming Adventure with Golang and Kafka!
Welcome, movie buffs and code enthusiasts! Get ready for a cinematic experience as we embark on a...
26,068
2023-12-30T10:11:57
https://dev.to/akshitzatakia/lights-camera-code-a-blockbuster-streaming-adventure-with-golang-and-kafka-4562
streaming, go, kafka, flink
Welcome, movie buffs and code enthusiasts! Get ready for a cinematic experience as we embark on a thrilling adventure into the world of Golang and Kafka. Picture this: a high-tech movie studio where messages flow like scenes in a blockbuster film. Buckle up, grab your popcorn, and let the streaming show begin! ![Watching movie](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zu2vsglvkuyug7mk6wy8.png) <br> ## Setting the Scene: Golang, the Movie Maverick, and Kafka, the Blockbuster Maestro <br> **1. Golang, the Movie Maverick:** Meet Golang, our movie maverick. Fast, efficient, and ready to script the perfect streaming tale. Golang is like the director ensuring our code scenes unfold seamlessly, just like the frames of your favorite movie. **2. Kafka, the Blockbuster Maestro:** Imagine Kafka as the mastermind behind the scenes. He's the blockbuster maestro orchestrating every twist and turn of the storyline. With Kafka, our movie script becomes a dynamic blockbuster, with messages flowing like a perfectly choreographed action sequence. <br> ## The Script: A Streaming Blockbuster with Golang and Kafka <br> ```go // Roll camera! Our Golang adventure begins here. package main import ( "fmt" "log" "os" "os/signal" "time" "github.com/confluentinc/confluent-kafka-go/kafka" ) func main() { // Cue the lights! Action with Golang and Kafka! // Configuration for Kafka producer and consumer producer, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"}) if err != nil { log.Fatal("Error creating Kafka producer:", err) } consumer, err := kafka.NewConsumer(&kafka.ConfigMap{ "bootstrap.servers": "localhost:9092", "group.id": "movie-group", "auto.offset.reset": "earliest", }) if err != nil { log.Fatal("Error creating Kafka consumer:", err) } // Subscribe to our movie topic err = consumer.SubscribeTopics([]string{"blockbuster-messages"}, nil) if err != nil { log.Fatal("Error subscribing to topic:", err) } // Handling signals for a blockbuster experience sigchan := make(chan os.Signal, 1) signal.Notify(sigchan, os.Interrupt) // Producing a movie quote every second go func() { for { quote := fmt.Sprintf("Here's looking at you, kid! - %v", time.Now().Format("2006-01-02 15:04:05")) err := produceQuote(producer, "blockbuster-messages", quote) if err != nil { log.Println("Error producing movie quote:", err) } time.Sleep(time.Second) } }() // Consuming movie quotes run := true for run == true { select { case sig := <-sigchan: fmt.Printf("Cut! Caught signal %v: terminating\n", sig) run = false default: msg, err := consumer.ReadMessage(-1) if err == nil { fmt.Printf("Action! Received movie quote: %s\n", msg.Value) } else { fmt.Printf("Error reading movie quote: %v\n", err) } } } // It's a wrap! Closing the Golang movie and rolling credits producer.Close() consumer.Close() } // Function to produce a movie quote to Kafka func produceQuote(producer *kafka.Producer, topic, quote string) error { deliveryChan := make(chan kafka.Event) err := producer.Produce(&kafka.Message{ TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny}, Value: []byte(quote), }, deliveryChan) if err != nil { return err } e := <-deliveryChan m := e.(*kafka.Message) if m.TopicPartition.Error != nil { return m.TopicPartition.Error } fmt.Printf("Cinematic success! 
Produced movie quote to topic %s: %s\n", *m.TopicPartition.Topic, string(m.Value)) return nil } ``` In this blockbuster Golang and Kafka example, our movie quotes are produced and consumed in real-time, creating a dynamic and entertaining streaming experience. It's like having a front-row seat to a never-ending stream of classic movie moments. ![Output of the code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n7knl0uqx85kqqxf7b3p.png) **Conclusion:** And that's a wrap, movie maestros and coding cinephiles! We've just witnessed the magic of Golang and Kafka turning our code into a blockbuster streaming adventure. As we continue our journey through the cinematic world of technology, stay tuned for more thrilling coding tales and exciting examples that make learning and exploring the vast sea of technology an absolute blockbuster hit! Happy coding and may your code always have a Hollywood ending! 🎬🚀
akshitzatakia
1,712,406
How to become a Software Engineer in 2024
There isn't a one-size-fits-all approach to becoming a Software Engineer. You might not know what to...
0
2024-01-03T11:57:47
https://medium.com/stackademic/how-to-become-a-software-engineer-in-2024-4ae5d3dc8d9a
webdev, javascript, programming
_There isn't a one-size-fits-all approach to becoming a Software Engineer._ You might not know what to learn when you initially enter the field. To some degree, roadmaps enable you to start your journey and steer you in the right direction.  For the same reason, this article consists of six different roadmaps within Software Engineering for you to get started. However, besides those roadmaps, one must focus on developing a list of required skills. I'll deliver that list in this article. Those crucial skills will enable you to work with teams, communicate your ideas, ace your interviews, and more.  Let's take the **Why-How-When** framework I picked for this article. Furthermore, I have a dedicated section called "**What to look out for**" that expresses the lessons I learned throughout 2023 at the end. You must continue reading to learn from my mistakes and avoid making them. ## Why More often than not, engineers face Impostor Syndrome. We experience challenges and setbacks that make us feel underskilled and overwhelmed. We feel like we don’t know enough. In the SWE world, there are moments where we feel utterly useless if we can’t build an application or implement a feature. These moments are easier to deal with when you know why you started. I remember I loved computers when I was a kid. I didn’t know enough about programming. But, I felt peace when I created something using them. Even if my projects meant crap compared to industry standards, I still made slight progress. Later on, I wanted to become an entrepreneur. I desired to build solutions. I saw that programming was booming. It was a high-paying industry. I wanted to create solutions and chose this field because I had experience writing HTML & CSS code. _Yes, that’s funny._ I recognised financial potential with a passion for doing something. I purchased courses with my startup money and upskilled daily. I reminded myself why I started. It was necessary to help me continue because humans often forget why they started doing something. We get lost in fancy roadmaps, videos, courses, etc. But it’s necessary to look back once in a while. “**_How cool would it be if I could create a website and launch it on the World Wide Web for everyone to access and use?_**” And that’s where it started. Forget other success stories. Focus on building yours. Your reason to enter this field can be for the money. That’s fine. We all have different goals. _The idea is to focus on yourself and not others._ This field has many prodigies with early success stories. I don’t listen to their stories but spend time building mine. Learn from their mistakes instead of being jealous. Once you get your purpose straight, the next step is to understand how to conquer that purpose. ## How Now, explore the ways to achieve your purpose. I deliberately chose the word “explore” because the choices committed during this step change once beginners turn intermediate. The method of doing something should change as you grow. You explore more options and understand the intricacies of the field. I can build a product or solution to solve a specific problem in a hundred different fields. I have the options. As a beginner, I must narrow them down based on my interests, experience, and knowledge. It can be UIUX design, Desktop Development, Data Science, or Front-End Web Development. I ask myself — “**_Which of these fields most resonate with me? If I had to sit for 10 hours, which field would I choose and not get exhausted?_**” For me, the answer was Web Development. 
I began and eventually explored all the other options with experience. I knew the answer because I had explored other options. Before choosing Web Development or SWE, I built various products using different technologies in diverse fields and industries. Try everything to figure out what gives you joy. Follow the principles of Ali Abdaal when he says that productivity requires us to do something we find joyful and fulfilling. As a beginner, you don’t know about the fields in an industry. The best way to learn is to search for it. Do a precise Google search of “_Which fields exist in Software Engineering_” and examine the results. Make categories based on future potential in growth and finance because you don’t want to pick a dead field with no future scope. Learn the fundamentals of each option. You will get a gist of how they work based on videos and articles concerning the topic available on the Internet. With a strong foundation, you can learn the complex concepts effortlessly. Strike a balance between theoretical and practical. Create more problems and build solutions using them. Look at the financial future of your desired choice. Luckily, Web Development had a foreseeable future. It scaled as required. At that time, it was one of the highest-paying jobs. I selected it for the same reason. _Passion and Interest are pointless if you cannot make money. It’s like building a loss-making business without the potential to grow._ Here are a few roadmaps or guides with the names of central technologies that I believe have potential in the future. Furthermore, do your research. I don’t know you, your skillsets, or your past. Choose the right fit for yourself. ### Web Development ![Web Developemnt Roadmap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rqp9mhcou1dw6ii60d8t.png) ### Blockchain Development ![Blockchain Development Roadmap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qbepcfw21g32qrhxynkm.png) ### Data Science ![Data Science Roadmap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/onr0nljm26uzagx2zgwq.png) ### Mobile Development ![Mobile Development Roadmap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7srk2htyrdvrqtf905n0.png) ### Desktop Development ![Desktop Developement Roadmap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0mvh2hds47o8zi87bpmi.png) ### Game Development ![Game Development Roadmap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jhpemeaiqoqmlb2jyiq4.png) ## When Most people know what they want to do. They have the perfect plan and the best roadmaps on the Internet from the most experienced engineers. Yet, they fail to take action because of an excuse. Here are a few steps to take consistent actions — 1. Set a time to learn and practice every day for at least half an hour. 2. Interact with other engineers on platforms like Twitter (X) and share your knowledge. 3. If you want to build something, put it in a to-list and assign a deadline. **The best time to learn was yesterday, but you can still conquer today.** Take the necessary steps after reading this article. Go to the documentation of the language you wish to learn or buy the Udemy course you always wanted to learn. Watch the course daily for a blocked duration and practice while watching those videos. Focus on being practical because there’s nothing more vital than knowing how to fix a bug that most engineers cannot. Experience and a spark to explore for whatever purpose bring you the leverage to build whatever you want. 
Make things easier for you. Don’t make big goals that overwhelm you. Instead, _improve yourself and achieve your goals with slight (0.1%) progress daily._ Practising daily helps more than you can imagine. With fundamentals, you can also take advantage of Artificial Intelligence tools. ## What to look out for 1. The first lines of code you write by yourself will seem difficult. For the same reason, experienced engineers suggest picking a field with a minimum entry barrier, like Web Development. Don’t get scared or frustrated. Remember, we all have been there. Instead, focus on reading and learning more. In the SWE world, the resources are abundant. 2. Don’t jump to complex concepts or techniques. If you know how to write a for-loop, it doesn’t mean you can deal with data structures and algorithms already. It takes time to make progress. Avoid overwhelming yourself. Focus on the basics. 3. Avoid finding an answer directly. Spend a few hours, if not days, to understand why a specific bug or error keeps appearing in your code. Find the root cause. When you know why a problem occurs, it’s easier to solve it. 4. Avoid directly writing code. When you are assigned a problem, start by building a strategy. In Interviews, the employers test your thinking process. Before writing code, I design a flowchart or mindmap of how to solve a specific problem and what to use. 5. Take a break. Whenever I encounter a bug that overloads my cognitive ability to think and solve, I go on a small ride to the nearest cafe and spend time with my friends. I kept myself distracted from work and came back with a fresh mind. 6. Avoid copy-pasting code. If you ask an LLM for a code snippet, you must understand the logic before you use it. And while using it, try to write the entire code yourself and explain each keyword with their usage. It’s tempting to ask GPT models to write code. However, you are at the stage of learning, not implementing. 7. Use GPT models to explore and expand your avenues. I once saw a code snippet on Twitter about a JavaScript-related trick. I couldn’t understand specific pieces of the logic. I asked ChatGPT to explain the purpose of a combination of operators together and fill the necessary gaps to understand the code snippet. 8. Avoid excessively learning and start building. The main joy of being an engineer is to help others create their projects and allow them to help you create yours. Contribute to open-source and build open-source projects. Your theoretical knowledge will solidify and stay with you for the longest time when you practice and implement it daily. 9. Start by building smaller projects. Again, don’t overwhelm yourself. When I decided to learn the Swift language, I built a Pomodoro Timer for Apple Watch, Apple TV, and MacBooks. I started small and took baby steps. 10. Reflect on your mistakes and fundamental gaps in your learning. There are situations where you cannot implement a specific feature using a technology. At that point, write down the technology and features you failed to implement. After a bit of time, revert and start working on that gap. I couldn’t work with component libraries, so I took a project that allowed me to use those component libraries and build a project. 11. Build valuable connections. Life goes beyond work. I focused on only making friends who could help me earn more money. It made me shallow because I didn’t interact with them. I wasn’t friendly. 
I began treating everyone as a friend rather than just a career contact, and became a more positive person. I shared my problems; they shared theirs. We fixed bugs together, taught each other concepts, and more. To succeed, you must try, fail, and then retry. We're all human. That's why we should give ourselves time to adapt to today's problems with yesterday's knowledge. --- If you want to contribute, comment with your opinion and let me know if I should change anything. I am also available via e-mail at hello@afankhan.com. I'd love to get your feedback through the comments section.
whyafan
1,712,413
HOW TO RETRIEVE STOLEN CURRENCY WITH WIZARD WEB RECOVERY
My experience of recovering my stolen Bitcoin has taught me valuable lessons about resilience,...
0
2023-12-30T12:04:29
https://dev.to/gudmundmolly/how-to-retrieve-stolen-stolen-currency-with-wizard-web-recovery-1edb
My experience of recovering my stolen Bitcoin has taught me valuable lessons about resilience, determination, and the importance of seeking professional help. Wizard Web Recovery was the Bitcoin expert I sought when I needed my Bitcoin back. With the expert now armed with all the information, it was time to start the pursuit. It felt like we were on a thrilling virtual treasure hunt, chasing after my stolen Bitcoin through the darkest corners of the internet. Using their expertise and sophisticated tools, the expert-guided me through the process of tracking the stolen funds. It was a delicate dance between technology and intuition, as we followed the digital footprints left by the thieves. Each step brought us closer to the truth, and my anticipation grew with every discovery. Throughout this journey, I couldn't help but marvel at the expertise of the person guiding me. It was like they had a sixth sense of navigating the online underworld. Their knowledge and skill were the secret weapons in our relentless pursuit, and I was grateful to have them by my side. While the recovery of my stolen Bitcoin was a cause for celebration, it also served as a wake-up call. I realized the importance of enhancing my security measures to prevent future attacks. I couldn't rely solely on the expertise of others to protect my assets; I had to take proactive steps myself. I started by educating myself about online scams and staying updated on the latest cybersecurity practices. I implemented two-factor authentication, strengthened my passwords, and became more cautious about the information I shared online. It was like putting on digital armor, ready to defend myself against any potential threats.This experience also made me realize the need to share my story and empower others. I wanted to raise awareness about online scams and the importance of vigilance in the digital world. By spreading knowledge and encouraging others to take their online security seriously, I hoped to prevent others from falling victim to the same fate. By sharing my story, I hope to inspire others to take action and protect themselves in the vast labyrinth of the internet. Together, we can build a community of informed and empowered individuals who refuse to be victims of cybercrime. Do reach Wizard web recovery via email: wizard web recovery @ programmer (dot) net, additionally, you can also communicate with them directly on the webpage at wizard web recovery (dot) net
gudmundmolly
1,712,539
Exploring the Potential of Retrieval-Augmented Generation using Azure Open AI Solutions and build your ownGPT
Photo by saeed mhmdi on Unsplash In the realm of artificial intelligence (AI), Retrieval-Augmented...
0
2024-01-14T14:18:28
https://the.cognitiveservices.ninja/exploring-the-potential-of-retrieval-augmented-generation-using-azure-open-ai-solutions-and-build-your-owngpt
--- title: Exploring the Potential of Retrieval-Augmented Generation using Azure Open AI Solutions and build your ownGPT published: true date: 2023-12-30 10:30:19 UTC tags: canonical_url: https://the.cognitiveservices.ninja/exploring-the-potential-of-retrieval-augmented-generation-using-azure-open-ai-solutions-and-build-your-owngpt cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qj6d6m1p6gd0x3ektkqj.png --- Photo by [saeed mhmdi](https://unsplash.com/@saeedanathema?utm_source=Hashnode&utm_medium=referral) on [Unsplash](https://unsplash.com/?utm_source=Hashnode&utm_medium=referral) In the realm of artificial intelligence (AI), Retrieval-Augmented Generation (RAG) is a groundbreaking approach that enhances the output of large language models (LLMs). It achieves this by referencing a reliable external knowledge base, separate from its original training data sources, before generating a response. LLMs, the technological foundation of AI, are trained on vast volumes of data and use billions of parameters to generate unique output for tasks such as answering questions, translating languages, and completing sentences. RAG takes the already remarkable capabilities of LLMs a step further, extending their reach to specific domains or an organization's internal knowledge base without retraining the model. This makes RAG a cost-effective strategy for improving LLM output, ensuring it remains relevant, accurate, and valuable across various contexts. ## The Significance of Retrieval-Augmented Generation LLMs are a vital AI technology that powers intelligent chatbots and other natural language processing (NLP) applications. The ultimate goal is to create bots capable of answering user questions in various contexts by cross-referencing authoritative knowledge sources. However, the nature of LLM technology introduces a degree of unpredictability in LLM responses. Additionally, the static nature of LLM training data imposes a cut-off date on the knowledge it possesses. Several challenges are associated with LLMs, including presenting false information when the answer is unknown, providing out-of-date or generic information when the user expects a specific, current response, generating a reaction from non-authoritative sources, and creating inaccurate responses due to terminology confusion. RAG is a solution to these challenges. It guides the LLM to retrieve relevant information from authoritative, pre-determined knowledge sources. This gives organizations more control over the generated text output and offers users insights into how the LLM generates the response. ## The Advantages of Retrieval-Augmented Generation RAG technology offers several advantages for an organization's generative AI efforts. It provides a cost-effective implementation, ensuring that productive AI technology is more accessible and usable. RAG allows developers to supply the latest research, statistics, or news to the generative models, maintaining relevance. RAG enhances user trust by allowing the LLM to present accurate information with source attribution. Developers gain more control with RAG, as they can test and improve their chat applications more efficiently, control and modify the LLM's information sources to adapt to changing requirements and ensure the LLM generates appropriate responses. ## The Mechanics of Retrieval-Augmented Generation RAG introduces an information retrieval component that initially utilizes user input to extract information from a new data source. 
Both the user query and the relevant information are provided to the LLM, which employs the latest knowledge and its training data to generate improved responses. The process involves creating external data, retrieving pertinent information, augmenting the LLM prompt, and updating external data. This ensures that the generative AI models can comprehend the new data, retrieve highly relevant documents, produce accurate answers to user queries, and maintain up-to-date information for retrieval. ## Retrieval-Augmented Generation vs. Semantic Search Semantic search improves RAG results for organizations looking to incorporate extensive external knowledge sources into their LLM applications. It can efficiently scan large databases of diverse information and retrieve data more accurately. Traditional or keyword search solutions in RAG yield limited results for knowledge-intensive tasks. In contrast, semantic search technologies handle the entire process of knowledge base preparation, saving developers the effort. They also generate semantically relevant passages and token words, ordered by relevance, to optimize the quality of the RAG payload. ## Get Some Practical Knowledge You might have read [my series](https://the.cognitiveservices.ninja/create-your-owngpt-in-a-protected-way-and-advance-its-potential-part-1-a-simple-web-chat-experience-targeting-chatgpt-through-aoai) on creating your GPT with Azure OpenAI Services. Why not build something similar with RAG? You can explore a Microsoft repository on GitHub, which offers several approaches for creating ChatGPT-like experiences using your data and the Retrieval-Augmented Generation pattern. This repository utilizes Azure OpenAI Service to access the ChatGPT model and Azure AI Search for data indexing and retrieval. Link: [https://github.com/Azure-Samples/azure-search-openai-demo](https://github.com/Azure-Samples/azure-search-openai-demo) ### Features - Chat and Q&A interfaces - Explores various options to help users evaluate the trustworthiness of responses with citations, tracking of source content, etc. - Demonstrates possible approaches for data preparation, prompt construction, and orchestration of interaction between the model (ChatGPT) and retriever (AI Search) - Settings directly in the UX to tweak behavior and experiment with options - Performance tracing and monitoring with Application Insights ### What is the difference compared to the repository I used in the series? The repository used in the series is designed for customers using Azure OpenAI Studio and the Azure Portal for setup. It also includes `azd` support for those who want to deploy it entirely from scratch. The repository linked here features multiple RAG (retrieval-augmented generation) approaches that combine the results of multiple API calls (to Azure OpenAI and ACS) in various ways. The other repository relies solely on the built-in data sources option for the ChatCompletions API, which employs a RAG approach on the specified ACS index. While this should suffice for most use cases, this sample might be a better choice if you require more flexibility. This repository is also somewhat more experimental, as it is not tied to the Azure OpenAI Studio like the other repository. This repository utilizes PDF documents, whereas the other repository is less restricted and offers persistent chat, which is unavailable here.
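To make the retrieve, augment, and generate loop described above more concrete, here is a minimal illustrative sketch, not taken from the linked repository. The endpoint URLs, environment variables (`SEARCH_URL`, `CHAT_URL`, `API_KEY`), and response shapes are assumptions standing in for a retriever such as Azure AI Search and an OpenAI-style chat-completions API; treat it as a sketch of the pattern, not a working Azure client.

```typescript
// Illustrative only: SEARCH_URL and CHAT_URL are hypothetical endpoints standing in
// for a retriever (e.g. Azure AI Search) and a chat-completions API (e.g. Azure OpenAI).
const SEARCH_URL = process.env.SEARCH_URL!;
const CHAT_URL = process.env.CHAT_URL!;
const API_KEY = process.env.API_KEY!;

// Step 1: retrieve passages relevant to the user's question
async function retrievePassages(query: string): Promise<string[]> {
  const res = await fetch(SEARCH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json", "api-key": API_KEY },
    body: JSON.stringify({ search: query, top: 3 }),
  });
  const data = await res.json();
  // Assumes the retriever returns { documents: [{ content: string }] }
  return data.documents.map((d: { content: string }) => d.content);
}

// Steps 2 and 3: augment the prompt with the retrieved passages, then generate an answer
async function answerWithRag(question: string): Promise<string> {
  const passages = await retrievePassages(question);
  const systemPrompt =
    "Answer using ONLY the sources below and cite the source number you used.\n\n" +
    passages.map((p, i) => `[${i + 1}] ${p}`).join("\n\n");

  const res = await fetch(CHAT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json", "api-key": API_KEY },
    body: JSON.stringify({
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await res.json();
  // Assumes an OpenAI-style response shape
  return data.choices[0].message.content;
}
```

In a real deployment, the retriever index would be populated ahead of time from your documents (the "creating external data" step), and the retrieved chunks would typically be trimmed to fit the model's context window.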
## Conclusion Retrieval-augmented generation (RAG) is a powerful tool in the field of artificial intelligence that enhances the output of large language models by leveraging an external knowledge base. This revolutionary approach addresses several challenges associated with LLMs, providing accurate, relevant, and up-to-date information. RAG not only improves user trust and engagement but also offers developers more control over the generated responses. The incorporation of semantic search further optimizes the results, making the process more efficient.
thecognitiveservicesninja
1,712,657
Cloud Forensics Services: Helping You Stay Safe in the Cloud
The cloud has revolutionized the way we store and access data. However, this shift has also...
0
2023-12-30T19:39:28
https://dev.to/secure_it_all/cloud-forensics-services-helping-you-stay-safe-in-the-cloud-7ia
The cloud has revolutionized the way we store and access data. However, this shift has also introduced new security challenges. With data scattered across multiple servers and applications, investigating and responding to security incidents in the cloud can be complex and time-consuming. This is where cloud forensics services come in. What is Cloud Forensics? Cloud forensics is the application of digital forensics techniques to investigate security incidents in the cloud. It involves collecting, preserving, and analyzing data from cloud storage, applications, and logs to identify the root cause of an incident and determine the scope of the damage. Challenges of Cloud Forensics Unlike traditional on-premises environments, cloud forensics presents several unique challenges: Data Volatility: Cloud data is often stored in temporary locations or automatically deleted, making it difficult to collect and preserve evidence. Jurisdictional Issues: Data may be stored across multiple jurisdictions, making it subject to different laws and regulations. Limited Access: Cloud providers may restrict access to certain data or logs, making it difficult for investigators to get the information they need. Lack of Expertise: The field of cloud forensics is still relatively new, and there is a shortage of qualified personnel. How Cloud Forensics Services Can Help Cloud forensics services can help organizations overcome these challenges by providing: Experienced investigators: Cloud forensics specialists have the knowledge and skills to collect, preserve, and analyze evidence from the cloud. Advanced tools and technologies: Cloud forensics services use specialized tools to automate data collection and analysis, saving time and effort. Incident response expertise: Cloud forensics services can help organizations develop and implement an incident response plan to quickly and effectively respond to security incidents. Benefits of Using Cloud Forensics Services There are many benefits to using cloud forensics services, including: Faster incident response: Cloud forensics services can help organizations identify and contain security incidents quickly, minimizing the damage. Reduced costs: Cloud forensics services can help organizations avoid the costs associated with data loss, downtime, and reputational damage. Improved security posture: Cloud forensics services can help organizations improve their overall security posture by identifying vulnerabilities and recommending corrective actions. Choosing a Cloud Forensics Provider When choosing a cloud forensics provider, it is important to consider the following factors: Experience: The provider should have experience investigating security incidents in the cloud. Expertise: The provider should have a team of qualified cloud forensics specialists. Tools and technologies: The provider should use advanced tools and technologies to automate data collection and analysis. Cost: The provider should offer competitive rates for their services. Conclusion Cloud forensics services are essential for any organization that uses the cloud. By partnering with a qualified cloud forensics provider, organizations can gain the expertise and resources they need to investigate and respond to security incidents quickly and effectively.
secure_it_all
1,712,746
How to: Next.js API Global Errors & Auth Middleware
This one's going to be quick. no bs. I was working on a Next.js project and needed a way to handle...
0
2023-12-31T00:16:10
https://dev.to/100lvlmaster/nextjs-api-global-errors-auth-middleware-c9i
nextjs, typescript, errors, authentication
This one's going to be quick. no bs. I was working on a Next.js project and needed a way to handle API Route Errors Globally. Similar to Express in Node.js. This is what I've come up with and it works wonders for my setup. - We create a handler function that will take multiple handlers and run them one by one. ```typescript import { ApiError } from "next/dist/server/api-utils"; import { NextResponse, NextRequest } from "next/server"; export const custom_middleware = (...handlers: Function[]) => async (req: NextRequest, res: NextResponse) => { try { for (const handler of handlers) { await handler(req, res); } } catch (error) { if (error instanceof ApiError) { return NextResponse.json( { message: error.message }, { status: error.statusCode } ); } else { /// Log server errors using winston or your preferred logger console.log(error); return NextResponse.json( { message: "Server died for some reason" }, { status: 500 } ); } } }; ``` - Now we can add it to a route ```typescript /// app/api/ping/route.ts import { custom_middleware } from "@/app/lib/server/middleware"; import { ApiError } from "next/dist/server/api-utils"; import { NextRequest, NextResponse } from "next/server"; const main_handler = (req: NextRequest, res: NextResponse) => { const isAuthenticated = false; if (isAuthenticated) { return NextResponse.json({ success: true }); } throw new ApiError(400, "Some error"); }; export const GET = custom_middleware(main_handler); ``` - We can see the fruits of our labour in the browser itself. ![image-description.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qc1mhdd4ilhijp5sh0qq.png) - We can add our own custom authentication logic here too. ```typescript export const auth_middleware = async (req: NextRequest, res: NextResponse) => { /// Your auth logic const isAuthenticated = false; if (!isAuthenticated) { throw new ApiError(401, "Unauthorized"); } }; ``` - Call it in our middleware_handler ```typescript export const custom_middleware = (...handlers: Function[]) => async (req: NextRequest, res: NextResponse) => { try { /// /// Auth middleware await auth_middleware(req, res); for (const handler of handlers) { await handler(req, res); } } catch (error) { if (error instanceof ApiError) { return NextResponse.json( { message: error.message }, { status: error.statusCode } ); } else { /// Log server errors using winston or your preferred logger console.log(error); return NextResponse.json( { message: "Server died for some reason" }, { status: 500 } ); } } }; ``` - Test the Auth middleware once ![image2.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e4e285ebbmrzurcrtyll.png) - Profit 💰 ![profiting.gif](https://tenor.com/en-GB/view/throwing-money-eric-cartman-tweek-tweak-stan-marsh-south-park-gif-21961230.gif) literally me profiting --- You can look at the code here: [Github](https://github.com/100lvlmaster/next-custom-middleware-example) Follow me everywhere. [OG Blog](https://www.100lvlmaster.com)
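A possible refinement, not from the original post: as written, `custom_middleware` awaits each handler but never returns its value, so a successful `NextResponse` returned by `main_handler` would be dropped. A hypothetical variant that keeps the same error handling while returning the last handler's response could look like this:

```typescript
import { ApiError } from "next/dist/server/api-utils";
import { NextRequest, NextResponse } from "next/server";

export const custom_middleware =
  (...handlers: Function[]) =>
  async (req: NextRequest, res: NextResponse) => {
    try {
      let result: unknown;
      for (const handler of handlers) {
        // Keep the last handler's return value so a successful NextResponse
        // actually reaches the client instead of being discarded
        result = await handler(req, res);
      }
      return result;
    } catch (error) {
      if (error instanceof ApiError) {
        return NextResponse.json(
          { message: error.message },
          { status: error.statusCode }
        );
      }
      /// Log server errors using winston or your preferred logger
      console.log(error);
      return NextResponse.json(
        { message: "Server died for some reason" },
        { status: 500 }
      );
    }
  };
```

With this shape, `auth_middleware` could also be passed as just another handler, for example `export const GET = custom_middleware(auth_middleware, main_handler);`, rather than being hard-coded inside the wrapper.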
100lvlmaster
1,712,788
Reflections and Goal Setting as a Software Engineer
As software engineers, we're often busy year-round developing new features, fixing production bugs,...
0
2023-12-31T02:05:49
https://dev.to/kshyun28/reflections-and-goal-setting-as-a-software-engineer-4ddg
As software engineers, we're often busy year-round developing new features, fixing production bugs, learning new tech stacks, interacting with different teams, and many more. As a result, we tend to feel tired towards the end of the year. We would use our holiday breaks and vacation leaves to shut off completely, spending time with family and friends, traveling to new places, and doing hobbies like gaming, reading, and whatever makes us feel good. Unwinding during our break is completely normal, and is even needed to recharge for the next year. But the break also allows us to take a pause, reflecting on what happened in the past months. Then it gives us the time and space to plan and set big goals, making sure we have something meaningful to work towards and not just run endlessly in the hamster wheel of software engineering. In this article, I'll guide you on your journey towards reflecting on past experiences, along with planning for the future with big, ambitious goals. ## Reflecting on what happened this year > “We do not learn from experience...we learn from reflecting on experience.” ― John Dewey Often, we let the year pass by without really taking a pause. After a tiring year, we just want to binge our favorite shows, play our favorite video games, and so on. By reflecting on our past experiences, we can appreciate our achievements, while also remembering the lessons brought by failures and mistakes. This holiday break provides an opportunity to do exactly this, reflecting on what happened this year. ### A framework for reflecting For your reflections, I suggest starting simple, especially if you haven't done this before. Allot a few minutes of your time to answer these questions: 1. What have I done this year? 2. What have I learned this year? For question one, think of **all the things you've done, no matter how small**. Then remember what you felt after finishing the things you've done. By remembering all the good that you've done, you appreciate yourself more. **You did a good job this year!** Once you've listed all the things you've done and experienced, it's time for question two. From those experiences, what are the **lessons or improvements** that you can make? Think about what you could've done better or wish you'd done if you were able to do those things again. Take your time in answering these two questions. ### My reflections To give you a reference that might spark ideas of your own, here are my reflections. During 2023, I've managed to achieve the following: - Written **12 articles** (including this one). - Contributed to **two open-source projects**. - Met up with **two professionals** (a representative at AWS and a founder) over coffee. - Experienced being at an **early-stage startup**. - Watched 13 startups (including ours) pitch live at a **startup incubator's demo day**. - Finished the **AWS Startups Build Accelerator** program. - Passed the **AWS Certified Cloud Practitioner** exam. - Learned **Svelte and SvelteKit** by going through the interactive tutorials. Then here's what I learned from these experiences: - Writing is not just about writing itself, but all the pre-writing that comes with it. (I'm practicing [Mise en Place Writing](https://www.swyx.io/writing-mise-en-place).) - You can contribute to open-source projects, even if it's just [one line of code](https://dev.to/kshyun28/how-i-contributed-one-line-of-code-to-ethereum-3poh). - I should prepare more when meeting up with professionals to lessen the awkward moments. 
- If I can buy something instead of building it, then I should buy it. (Using existing solutions for your app's login and authentication.) - When learning new technologies, make it a habit to take notes as you learn. (Writing helps you learn faster, while also having valuable notes that you can share with others.) ## Planning for the new year > “You do not rise to the level of your goals. You fall to the level of your systems.” - James Clear, Atomic Habits ### A framework for planning and setting goals For 2024, I've adapted Sahil Bloom's [annual planning template](https://www.sahilbloom.com/annual-planning), which I recommend checking out for yourself. For setting your goals, there are two main categories, namely **professional** and **personal**. We'll be focusing on setting your professional goals, but the framework is also applicable to personal goals. There are three steps in setting your professional goals: 1. Choose **1-3 big goals**, they should be [SMART](https://en.wikipedia.org/wiki/SMART_criteria) (specific, measurable, attainable, relevant, time-bound). 2. Set up **1-3 checkpoint goals** for each **big goal** you have. 3. Plan **2-3 daily actions** that will give you progress towards your **big goals**. The idea here is that you choose your **big, ambitious goals** that feel impossible to achieve. Then work backwards from your big goals. After choosing your **big goals**, it's time to set **checkpoint goals** for each of your big goals. These checkpoint goals are meant to be shorter-term goals, making your big goals easier to work towards. For example, I'd like to **read 12 software engineering books** for next year. In order to achieve this goal, I'll set up **2-3 checkpoint goals**. So by April 1, I should have already read **three books**, then by July 1, **six books**, and by October 1, **nine books**. Checkpoint goals help keep track of your progress, making sure you don't stray away from your big goals. Lastly, the most important part of this framework is having **daily actions and systems** that will slowly get you towards your big goals. To continue with the example, if I want to **read three books every three months**, I'd have to start with my reading habits. I'll start by **reading for 30 minutes daily**. This is not a strict rule, and some days I'd find myself feeling lazy. On those days, I'd try to read one page at most. The important thing is to do your daily habits, even if you don't feel like doing it. Remember, **anything above zero compounds!** ### My goals To give you an example, here's one of my big goals for 2024: #### Big goal - Continue writing **at least 1 article per week**. #### Checkpoint goals - Written **at least 13 articles** by April 1. - Written **at least 26 articles** by July 1. - Written **at least 39 articles** by October 1. #### Daily systems - Write around 1000 words per day. - Take notes when learning new stuff. - Capture notes when consuming articles. - Document experiences when trying out new stuff. ## Conclusion To recap, we've walked through the process of reflecting on your past experiences and achievements, along with what you've learned from them. We also covered frameworks for setting your big, ambitious goals, and how to make sure that you're making daily progress towards them. I hope this inspires you to take a pause and go on your journey of reflection and planning. Thank you for reading and if you have any questions or feedback, I'd love to hear from you. 
Feel free to comment or connect with me [here](https://linktr.ee/kshyun28).
kshyun28
1,712,813
How to Create Your Own Image Optimization / Resizing Service for Practically Free
In the digital age, the power of visuals cannot be overstated. Images form the cornerstone of digital...
0
2023-12-31T03:48:03
https://blog.designly.biz/how-to-create-your-own-image-optimization-resizing-service-for-free
imageoptimization, nextjs, php, cloudflareworker
In the digital age, the power of visuals cannot be overstated. Images form the cornerstone of digital content, from websites to mobile apps, making their optimization an essential aspect of the digital experience. However, many businesses and content creators find themselves grappling with the escalating costs of image optimization services, especially as their scale of operations expands. This is where the concept of creating your own image optimization and resizing service comes into play. Imagine being able to offer an additional, highly valuable service to your clients without incurring significant expenses. Image optimization is not just a necessity; it's a potential value-add service that can distinguish your offerings from competitors. The best part? Implementing your own image optimization service is far simpler and more cost-effective than you might think. This guide is designed to demystify the process of setting up your own image optimization and resizing service. Contrary to popular belief, you don't need a hefty budget or extensive technical expertise. With some basic knowledge and the right tools, you can establish a service that rivals those of expensive third-party providers, ensuring your images are always web-ready, visually appealing, and optimized for performance. Whether you're a web developer, a digital agency, or a content creator looking to enhance your digital assets, this article will walk you through the steps of creating an efficient, practically free image optimization service. Let's dive in and unlock the potential of optimized visuals for your projects! --- ## Overview **1. Using PHP for Image Resizing:** The foundation of your image resizing service will be a PHP script. PHP, known for its simplicity and efficiency, is perfect for handling image manipulation tasks. To get started, you need either a Virtual Private Server (VPS) or a shared hosting plan that supports the PHP-Imagick extension, which is essential for image processing. A great option to consider is Hostinger’s shared hosting, which comes equipped with the PHP-Imagick extension. This setup ensures you have the necessary environment to run your PHP scripts efficiently, handling image resizing tasks seamlessly. This guide will assume you're using Hostinger but the process should be the same on a VPS. *If you want a great deal on a Hostinger plan, please use the link at the bottom to help support me!* **2. Employing a CloudFlare Worker:** The final piece of the puzzle involves creating a CloudFlare worker. This worker is responsible for making image resize requests to the PHP server and then caching the resized images in CloudFlare’s global network. This step is crucial because it significantly reduces the load on your server, as CloudFlare's network handles the distribution of the optimized images. By caching the images, you ensure that subsequent requests are served faster, enhancing the overall user experience and reducing bandwidth costs. --- ## 1. Setting Up the PHP Endpoint This guide will take you through the steps of setting up your PHP endpoint on Hostinger. We're going to use a fictional domain `images-api.example.com`. I decided to do a walkthrough on this because you'll need to navigate through the weeds of Hostinger's DIY templates and paid add-on services to just get a plain and simple shared hosting space attached to a subdomain. From your Hostinger dashboard, click the `Add or migrate a website` button. 
![Create a new website](https://cdn.designly.biz/blog_files/how-to-create-your-own-image-optimization-resizing-service-for-free/image1.jpg) On the next screen, click `Skip, I don't want personalized experience`. ![Click skip, I don't want a personalized experience](https://cdn.designly.biz/blog_files/how-to-create-your-own-image-optimization-resizing-service-for-free/image2.jpg) Next, click `Skip, create an empty website`. 🥵 We're almost there! ![Click skip, create an empty website](https://cdn.designly.biz/blog_files/how-to-create-your-own-image-optimization-resizing-service-for-free/image3.jpg) Now click `Use an existing domain` and enter your chosen subdomain. I've used `images-api.example.com`. ![Enter your subdomain](https://cdn.designly.biz/blog_files/how-to-create-your-own-image-optimization-resizing-service-for-free/image4.jpg) On the next screen, just click `Continue`. On the last screen, you can just ignore the other options here. Instead, just click the Hostinger logo to return to the `hPanel`. Then `Pro Panel` at the top right of the screen. ![Click on Pro Panel](https://cdn.designly.biz/blog_files/how-to-create-your-own-image-optimization-resizing-service-for-free/image5.jpg) From the `Pro Panel`, click the `Hosting` tab and then expand the `Advanced` category from the left sidebar, then click `SSH Access`. You'll need to setup your DNS for your subdomain to point to the IP address listed here. I recommend you go ahead and setup SSH access so you can use `rsync` or `sftp` to upload your files. You can always use the file manager instead, but it's not the best option. If you have a public SSH key, you can upload it here or you can use password authentication (which is less secure). ![Copy your IP and setup SSH](https://cdn.designly.biz/blog_files/how-to-create-your-own-image-optimization-resizing-service-for-free/image6.jpg) Ok, that should complete your setup for your shared hosting space. Go ahead and type your domain URL into a web browser. You should hopefully get a Hostinger placeholder page. --- ## Building the Images API Now it's time to write some PHP code! Here's the directory structure we'll use: ```markdown ├── lib │ ├── ApiHelper.class.php ├── public_html │ ├── index.php │ ├── 404.html ├── routes │ ├── default.php │ ├── error404.php │ ├── resize.php ├── config.php ├── main.php └── .env ``` The best way to develop in PHP is to setup a local development environment. There are a number of ways you accomplish this. You can use Docker or you could setup a Virtual Box Ubuntu server. This tutorial will not go into setting that up, so for sake of simplicity, we're going to work directly out of the Hostinger shared space. Ok, now we're going to setup a shared access key that only our PHP server and the CloudFlare worker will know. Our clients will not be accessing the PHP server directly. Here's a quick way to generate a 32-byte secure ASCII string using OpenSSL: ``` openssl rand -base64 32 ``` Optionally, you can use Node.js to do the same (especially if you're on Windows): ``` node -e "console.log(require('crypto').randomBytes(32).toString('base64'))" ``` Copy this and create your `.env` file and create an env var called `ACCESS_KEY`: ``` ACCESS_KEY="xt1LhW0OO0pmeVwW/grkV/zGaUsarWvA1OI7oCRcjTE=" ``` Now let's create our world-facing PHP file. This will be the only file in our `public_html` directory. This will prevent anyone from ever being able to see our code. So if PHP ever stops working or something happens on Hostinger, your code will never be exposed. 
```php <?php require('../main.php'); ``` Now let's create `main.php`, which will be our API router: ```php <?php // Get our environment define('DIR', realpath(getcwd() . '/../')); // Load deps require_once(DIR . '/vendor/autoload.php'); // Load config file require_once(DIR . '/config.php'); // Split URI path and get our route function getPath() { $path = $_SERVER['REQUEST_URI'] ?? ''; if (strpos($path, '?') !== false) { list($path, $_) = explode('?', $path); } $path = ltrim($path, '/'); $path = preg_replace("/[^a-z\d\/_.-]+/", '', $path); $path = preg_replace("/\.+/", ".", $path); return $path; } $url = explode('/', getPath()); // Serve OPTIONS preflight requests if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') { if (isset($_SERVER['HTTP_ORIGIN']) && in_array($_SERVER['HTTP_ORIGIN'], ALLOWED_CORS_ORIGINS)) { header('Access-Control-Allow-Origin: ' . $_SERVER['HTTP_ORIGIN']); header('Access-Control-Allow-Methods: GET, POST, OPTIONS'); header('Access-Control-Allow-Headers: Content-Type'); } exit; } // Determine which route to use if (!count($url) || empty($url[0])) { include DEFAULT_ROUTE_FILE; } else { // Validate the route if (!in_array($url[0], ALLOWED_ROUTES)) { // Display a 404 page include ERROR_404_ROUTE_FILE; exit; } $filePath = ROUTE_DIR . '/' . $url[0] . '.php'; // Validate the file path if (file_exists($filePath) && is_file($filePath) && is_readable($filePath)) { // Include the file include $filePath; } else { // Display a 404 page include ERROR_404_ROUTE_FILE; } } ``` Next, let's create our `config.php` file: ```php <?php // DIRS define('ROUTE_DIR', DIR . '/routes'); define('PUB_DIR', DIR . '/public_html'); define('LIB_DIR', DIR . '/lib'); // Max image size of 12MB define('MAX_IMAGE_SIZE', 12000000); // Load env file $dotenv = Dotenv\Dotenv::createImmutable(__DIR__); $dotenv->safeLoad(); $dotenv->required([ 'ACCESS_KEY', ]); // Allowed CORS origins define('ALLOWED_CORS_ORIGINS', []); // Define a whitelist of allowed page names define('ALLOWED_ROUTES', [ 'default', 'resize', ]); // route files define('DEFAULT_ROUTE_FILE', ROUTE_DIR . '/default.php'); define('ERROR_404_ROUTE_FILE', ROUTE_DIR . '/error404.php'); define('ERROR_404_HTML_FILE', PUB_DIR . '/404.html'); // Autoload Classes function autoload($className) { include LIB_DIR . '/' . $className . '.class.php'; } spl_autoload_register('autoload'); ``` For the other files, you can check out [this gist](https://gist.github.com/designly1/c60569e91e52abc2ceb275e73a920bd8). *NOTE: We're not using any CORS origins here because we don't need it for server-to-server communications. With this setup, though you can build your own PHP API for whatever purposes you need!* Next, let's create our `resize.php` route that will be the main brains of our operation: ```php <?php $AUTH_TOKEN = $_ENV['ACCESS_KEY']; $api = new ApiHelper(); // Assert allowed methods $api->assertAllowedMethods(['GET']); // Get X-Auth-Token header $token = $_SERVER['HTTP_X_AUTH_TOKEN'] ?? ''; if (!$token) { $api->unauthorized('Missing X-Auth-Token header'); exit; } if ($token !== $AUTH_TOKEN) { $api->unauthorized('Invalid X-Auth-Token header'); exit; } $w = $_GET['w'] ?? ''; // width $url = $_GET['url'] ?? ''; // image origin url $f = $_GET['f'] ?? ''; // format $q = $_GET['q'] ?? 
''; // quality if (!$w) { $api->badRequest('Missing width (w)'); exit; } // Width must be integer if (!is_numeric($w)) { $api->badRequest('Invalid width (w)'); exit; } if (!$url) { $api->badRequest('Missing url'); exit; } if (!$f) { $api->badRequest('Missing format (f)'); exit; } // Format must be webp or avif if (!in_array($f, ['webp', 'avif'])) { $api->badRequest('Invalid format (f)'); exit; } // Create Imagick object and check if it is valid $image = new Imagick($url); if (!$image) { $api->badRequest('Invalid image url'); exit; } // Preserve transparency if ($image->getImageAlphaChannel()) { $image->setImageAlphaChannel(Imagick::ALPHACHANNEL_ACTIVATE); } // Get the current image dimensions $imageWidth = $image->getImageWidth(); $imageHeight = $image->getImageHeight(); // Calculate the new height $height = $w * ($imageHeight / $imageWidth); // Set the image format $image->setImageFormat($f); // Set image quality $image->setImageCompressionQuality($q); // Do not resize if the image is already smaller than the requested width if ($imageWidth < $w) { header('Content-Type: image/' . $f); echo $image->getImageBlob(); exit; } // Resize the image $image->resizeImage($w, $height, Imagick::FILTER_LANCZOS, 1); // Output the image header('Content-Type: image/' . $f); echo $image->getImageBlob(); exit; ``` Ok, that should do it. You can use `rsync` to upload your files to your shared hosting directory, or you can use SFTP or the hPanel File Manager. *NOTE: There's a file outside your public_html directory that says "do not upload here". You can just delete that file. You definitely want your code outside the public directory.* Here's a quick script you can create to rsync your files if you've setup a local dev server (which I recommend): ```bash #!/bin/bash rsync -avr --delete -e 'ssh -p 65002' /path/to/your/local/dir/ hostinger_username@images-api.example.com:/home/hostinger_username/domains/images-api.example.com ``` One last thing we need to do here. We're using one `composer` package called `Dotenv`. If you're using a local dev server, then you can install it locally and then sync it up with `rsync` or your can run composer directly on your Hostinger space by SSHing in. The command is: ``` composer require vlucas/phpdotenv ``` Now let's test our API with Postman: ![Testing the API with Postman](https://cdn.designly.biz/blog_files/how-to-create-your-own-image-optimization-resizing-service-for-free/image8.jpg) This example takes one of the images from this tutorial (1920x1080 jpeg) and reformats it to a width of 600 as `webp` with a quality of 75. If you get an image back, you're good to go! --- ## Building The CloudFlare Worker Hopefully, you already have a free CloudFlare account. If not, it's super easy to setup. I also recommend using CloudFlare to host your DNS for your domain because you can automatically create a record for your worker and you also can take advantage of the many other free services CloudFlare provides for your domain. The quickest way to get started building a worker is to use the `wrangler` tool from the command line. To install wrangler, run `npm i -g wrangler`. Next, navigate to the directory you want to create your project in and run `npm create cloudflare@2`. You may need to install some additional files, so select yes, then Type in your project name, I called it `images-worker`. For simplicity's sake, I've chosen not to use Typescript because this worker is very tiny and doesn't warrant the use of it. 
Next, choose yes for git as version control and no to deploy to CloudFlare: ![Testing the API with Postman](https://cdn.designly.biz/blog_files/how-to-create-your-own-image-optimization-resizing-service-for-free/image9.jpg) Now we can open up our project in VS Code: ``` cd images-worker code . ``` Ok, the first thing we need to do is add some vars to `wrangler.toml`. You can delete all the commented out stuff in there and add a `[vars]` section: ``` name = "images-worker" main = "src/index.js" compatibility_date = "2023-12-18" [vars] API_URL = "https://images-api.example.com" IMAGES_DOMAIN = "example.com" ACCESS_KEY="xt1LhW0OO0pmeVwW/grkV/zGaUsarWvA1OI7oCRcjTE=" CACHE_TTL=604800 ``` The `API_URL` is the Hostinger subdomain we created, `IMAGES_DOMAIN` is the allowed origin for our images. If you have more than one, you could use multiple vars or use a comma-delimited list and then split it in code. The `ACCESS_KEY` is our API key we generated. Make sure it matches the one in your PHP .env file. Lastly, `CACHE_TTL` is how long we want images to be valid for in the CloudFlare cache. I've set mine to `604800`, which is seven days. The only other file we need to edit is `index.js`, which will be our main route handler for the worker: ```js addEventListener('fetch', (event) => { event.respondWith(handleRequest(event.request, event)); }); /** * Fetch and log a request * @param {Request} request */ async function handleRequest(request, event) { // Construct the cache URL and key let cacheUrl = new URL(request.url); let cacheKey = new Request(cacheUrl.toString(), request); let cache = caches.default; // Check for the cached response let cachedResponse = await cache.match(cacheKey); if (cachedResponse) { console.log(`Cache hit for: ${request.url}`); return cachedResponse; } // Parse request URL to get access to query string let url = new URL(request.url); // Cloudflare-specific options are in the cf object. let options = { cf: { image: {} } }; // Copy parameters from query string to request options. if (url.searchParams.has('fit')) options.cf.image.fit = url.searchParams.get('fit'); if (url.searchParams.has('width')) options.cf.image.width = url.searchParams.get('width'); if (url.searchParams.has('height')) options.cf.image.height = url.searchParams.get('height'); if (url.searchParams.has('quality')) options.cf.image.quality = url.searchParams.get('quality'); // Automatic format negotiation. Check the Accept header. const accept = request.headers.get('Accept'); if (/image\/avif/.test(accept)) { options.cf.image.format = 'webp'; } else if (/image\/webp/.test(accept)) { options.cf.image.format = 'webp'; } // Get URL of the original (full size) image to resize. 
const imageURL = url.searchParams.get('image'); if (!imageURL) return new Response('Missing "image" value', { status: 400 }); try { // Validate the image URL const { hostname, pathname } = new URL(imageURL); if (!/\.(jpe?g|png|gif|webp)$/i.test(pathname)) { return new Response('Disallowed file extension', { status: 400 }); } // Validate the image domain origin if (!hostname.endsWith(IMAGES_DOMAIN)) { return new Response('Unauthorized origin', { status: 403 }); } } catch (err) { return new Response('Invalid "image" value', { status: 400 }); } // Fetch the image from the PHP API const response = await fetch(`${API_URL}/resize?url=${imageURL}&w=${options.cf.image.width}&f=${options.cf.image.format}&q=${options.cf.image.quality}`, { headers: { 'X-Auth-Token': ACCESS_KEY }, }); // Use Response constructor to create a new response let newResponse = new Response(response.body, response); // Set Cache-Control header for the new response newResponse.headers.append('Cache-Control', `s-maxage=${CACHE_TTL}`); // Store the new response in cache event.waitUntil(cache.put(cacheKey, newResponse.clone())); return newResponse; } ``` That's it for the worker. You can login to CloudFlare by running `wrangler login` and then deploy your worker by running `wrangler deploy`. The last thing you'll want to do is map a custom subdomain to your worker. You can do that from the CloudFlare workers panel: ![Testing the API with Postman](https://cdn.designly.biz/blog_files/how-to-create-your-own-image-optimization-resizing-service-for-free/image10.jpg) For sake of this tutorial, let's call our domain `imager.example.com`. --- ## Setting Up the Next.js Image Loader Now with everything in place, we can setup our Next.js project to use our little CDN. To do that we'll simply create a wrapper for the `next/image` component: ```jsx import React from 'react'; import Image from 'next/image'; import { StaticImageData } from 'next/image'; interface Props extends React.ComponentProps<typeof Image> {} interface LoaderProps { src: string; width: number; quality?: number; className?: string; } const imageLoader = (props: LoaderProps) => { const isDev = process.env.NODE_ENV === 'development'; const apiUrl = isDev ? 'https://localhost:9999' : 'https://imager.example.com'; const { src, width, quality } = props; if (!src.startsWith('http')) return src; const resizerSrc = `${apiUrl}/?image=${src}&width=${width}&quality=${quality || 75}`; return resizerSrc; }; const noLoader = (props: LoaderProps) => { const { src } = props; return src; }; export default function Imager(props: Props) { const { width, quality, className = '' } = props; // We want to ingore the loader if the image is an SVG let isSvg = false; if (typeof props.src === 'string') { isSvg = props.src.endsWith('.svg'); } else { const imageData = props.src as StaticImageData; isSvg = imageData.src.endsWith('.svg'); } const isDev = process.env.NODE_ENV === 'development'; // eslint-disable-next-line return <Image {...props} loader={!isDev && !isSvg ? imageLoader : noLoader} className={className} />; } ``` Now we have a custom `<Imager>` component we can use in place of `<Image>`. If you're integrating this into an existing project and you already have lots of instances of `<Image>`, you can use VS Code's find/replace in files, which is a powerful tool. Optionally, you can setup the image loader from `next.config.js` and put this into a separate script. Note that this will only work with external image files. This does *not* work with imported images. 
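If you prefer the `next.config.js` route mentioned above, a minimal sketch could look like the following. This assumes Next.js 13 or later (where the `images.loaderFile` option is available) and reuses the example `imager.example.com` worker domain and query parameters from this tutorial:

```js
// next.config.js
module.exports = {
  images: {
    loader: 'custom',
    loaderFile: './image-loader.js',
  },
};
```

```js
// image-loader.js -- applied globally to every <Image> in the app
export default function workerImageLoader({ src, width, quality }) {
  // Leave local/imported assets untouched; only proxy absolute URLs
  if (!src.startsWith('http')) return src;
  return `https://imager.example.com/?image=${src}&width=${width}&quality=${quality || 75}`;
}
```

With a global loader like this you no longer need the `<Imager>` wrapper for external images, though you lose the per-component control (such as the SVG bypass) shown above.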
I would recommend you store all your images on a CDN. I have a couple of articles that delve deeply into setting up a CDN using AWS S3 and CloudFront. Links below. --- By harnessing the power of PHP and CloudFlare workers, you've learned how to create a cost-effective, efficient image optimization and resizing service. This DIY approach not only saves money but also grants you greater control over your digital content. It's a testament to how, with the right tools and knowledge, you can effectively manage web resources and enhance user experience. Keep exploring and adapting new techniques to stay ahead in the dynamic world of web development. Happy optimizing! --- ## Resources 1. [My Hostinger affiliate link](https://hostinger.com?REFERRALCODE=1J11864) 2. [GitHub Gist for additional PHP files](https://gist.github.com/designly1/c60569e91e52abc2ceb275e73a920bd8) 3. [How to Use AWS CloudFront to Create Your Own Free CDN](https://blog.designly.biz/how-to-use-aws-cloudfront-to-create-your-own-free-cdn) 4. [How to Get a Custom Domain For Your Free CloudFront CDN](https://blog.designly.biz/how-to-get-a-custom-domain-for-your-free-cloudfront-cdn) 5. [# How to Get a Free NGINX/PHP-FPM Web Server](https://blog.designly.biz/how-to-get-a-free-nginx-php-fpm-web-server) --- Thank you for taking the time to read my article and I hope you found it useful (or at the very least, mildly entertaining). For more great information about web dev, systems administration and cloud computing, please read the [Designly Blog](https://blog.designly.biz). Also, please leave your comments! I love to hear thoughts from my readers. If you want to support me, please follow me on [Spotify](https://open.spotify.com/album/2fq9S51ULwPmRM6EdCJAaJ?si=USeZDsmYSKSaGpcrSJJsGg)! Also, be sure to check out my new app called [Snoozle](https://snoozle.io)! It's an app that generates bedtime stories for kids using AI and it's completely free to use! Looking for a web developer? I'm available for hire! To inquire, please fill out a [contact form](https://designly.biz/contact).
designly
1,712,885
The best way to do CRUD in large-scale Next.js Projects with Redux Toolkit & Axios
When dealing with large-scale Next.js projects, using Redux Toolkit for state management and Axios...
0
2023-12-31T08:04:11
https://dev.to/nadim_ch0wdhury/the-best-way-to-do-crud-in-large-scale-nextjs-projects-with-redux-toolkit-axios-p09
When dealing with large-scale Next.js projects, using Redux Toolkit for state management and Axios for handling API requests is a common and effective combination. Below is a guide on how to implement CRUD operations in a large-scale Next.js project using Redux Toolkit and Axios. ### Step 1: Set Up Your Project Install the required packages: ```bash npm install react-redux @reduxjs/toolkit axios ``` ### Step 2: Configure Redux Toolkit Create your Redux slices for managing state: #### `redux/slices/entitiesSlice.js` ```jsx import { createSlice } from '@reduxjs/toolkit'; const entitiesSlice = createSlice({ name: 'entities', initialState: [], reducers: { setEntities: (state, action) => { return action.payload; }, addEntity: (state, action) => { state.push(action.payload); }, updateEntity: (state, action) => { const { id, updatedEntity } = action.payload; const index = state.findIndex((entity) => entity.id === id); if (index !== -1) { state[index] = updatedEntity; } }, removeEntity: (state, action) => { const idToRemove = action.payload; return state.filter((entity) => entity.id !== idToRemove); }, }, }); export const { setEntities, addEntity, updateEntity, removeEntity } = entitiesSlice.actions; export default entitiesSlice.reducer; ``` #### `redux/store.js` ```jsx import { configureStore } from '@reduxjs/toolkit'; import entitiesReducer from './slices/entitiesSlice'; const store = configureStore({ reducer: { entities: entitiesReducer, // Add other reducers as needed }, }); export default store; ``` ### Step 3: Set Up Axios for API Requests #### `api.js` ```jsx import axios from 'axios'; const instance = axios.create({ baseURL: 'https://your-api-base-url.com/api', // Adjust the base URL as needed }); export const getEntities = async () => { const response = await instance.get('/entities'); return response.data; }; export const createEntity = async (newEntity) => { const response = await instance.post('/entities', newEntity); return response.data; }; export const updateEntityById = async (id, updatedEntity) => { const response = await instance.put(`/entities/${id}`, updatedEntity); return response.data; }; export const deleteEntityById = async (id) => { const response = await instance.delete(`/entities/${id}`); return response.data; }; ``` ### Step 4: Integrate with Next.js #### `pages/index.js` ```jsx import { useEffect } from 'react'; import { useDispatch, useSelector } from 'react-redux'; import { setEntities, addEntity, updateEntity, removeEntity } from '../redux/slices/entitiesSlice'; import { getEntities, createEntity, updateEntityById, deleteEntityById } from '../api'; const Home = () => { const dispatch = useDispatch(); const entities = useSelector((state) => state.entities); useEffect(() => { // Fetch entities on component mount getEntities().then((data) => dispatch(setEntities(data))); }, [dispatch]); const handleCreateEntity = async () => { const newEntity = { name: 'New Entity' }; const createdEntity = await createEntity(newEntity); dispatch(addEntity(createdEntity)); }; const handleUpdateEntity = async (id, updatedName) => { const updatedEntity = { name: updatedName }; const updatedData = await updateEntityById(id, updatedEntity); dispatch(updateEntity({ id, updatedEntity: updatedData })); }; const handleDeleteEntity = async (id) => { await deleteEntityById(id); dispatch(removeEntity(id)); }; return ( <div> <h1>Entity List</h1> {entities.map((entity) => ( <div key={entity.id}> <span>{entity.name}</span> <button onClick={() => handleUpdateEntity(entity.id, prompt('Enter new name:', 
entity.name))}>Update</button> <button onClick={() => handleDeleteEntity(entity.id)}>Delete</button> </div> ))} <button onClick={handleCreateEntity}>Create Entity</button> </div> ); }; export default Home; ``` ### Step 5: Run Your Next.js App Ensure your development server is running: ```bash npm run dev ``` Visit `http://localhost:3000` in your browser to see the CRUD functionality in action. ### Conclusion This example illustrates a basic implementation of CRUD operations in a large-scale Next.js project using Redux Toolkit for state management and Axios for API requests. Adapt the code to meet your specific project requirements, such as adding error handling, authentication, or pagination, based on your application needs.
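As a starting point for the error handling mentioned above, here is one possible sketch using Redux Toolkit's `createAsyncThunk` together with the `getEntities` helper from `api.js`. The `status` and `error` fields (and the `items` wrapper) are illustrative additions, so selectors and components would need to be adjusted accordingly:

```jsx
// redux/slices/entitiesSlice.js (sketch with async states and error handling)
import { createSlice, createAsyncThunk } from '@reduxjs/toolkit';
import { getEntities } from '../../api';

export const fetchEntities = createAsyncThunk(
  'entities/fetchAll',
  async (_, { rejectWithValue }) => {
    try {
      return await getEntities();
    } catch (err) {
      return rejectWithValue(err.message);
    }
  }
);

const entitiesSlice = createSlice({
  name: 'entities',
  initialState: { items: [], status: 'idle', error: null },
  reducers: {},
  extraReducers: (builder) => {
    builder
      .addCase(fetchEntities.pending, (state) => {
        state.status = 'loading';
      })
      .addCase(fetchEntities.fulfilled, (state, action) => {
        state.status = 'succeeded';
        state.items = action.payload;
      })
      .addCase(fetchEntities.rejected, (state, action) => {
        state.status = 'failed';
        state.error = action.payload;
      });
  },
});

export default entitiesSlice.reducer;
```

In the page component you would then call `dispatch(fetchEntities())` inside `useEffect` instead of chaining `.then` on `getEntities`, and render a spinner or error message based on `state.entities.status`.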
nadim_ch0wdhury
1,713,007
Managing iOS Augmented Reality Views with Objective-C in React Native Apps
Introduction Integrating 3D models into AR applications can significantly enhance the user...
0
2023-12-31T13:02:53
https://dev.to/9bytes/managing-ios-augmented-reality-views-with-objective-c-in-react-native-apps-7ko
reactnative, ios, objectivec, arkit
## Introduction Integrating 3D models into AR applications can significantly enhance the user experience by providing interactive and immersive elements. For iOS applications, using _Apple's ARKit_ with the .usdz file format is often the preferred approach due to its native support and optimization for AR experiences. This introduction will guide you through the steps of integrating a 3D model into an iOS application using _ARKit_. This is part four of a multi part article series about the integration of AR services via native modules into a _React Native_ app. While React Native simplifies cross-platform app development, there are scenarios where leveraging native iOS capabilities becomes essential, especially when dealing with advanced features like augmented reality. You can find the other parts here: - Part one: [Setting up a native module via Kotlin](https://dev.to/9bytes/building-ar-vr-experiences-with-react-native-part-1-bridging-the-gap-with-kotlin-1dii). - Part Two: [Integrating ARCore](https://dev.to/9bytes/navigating-the-ar-landscape-in-react-native-with-kotlin-2caj) - Part Three: [Setting up a native module via Objective-C](https://dev.to/9bytes/seamlessly-integrating-native-modules-in-ios-for-react-native-ar-apps-5alp) The full code for this part is [here](https://github.com/Gh05d/RN3DWorldExplorer/tree/part-4). ## 3D-Model integration Unfortunately, using _GLB_ files in _iOS_ AR applications is a bit of a hustle, so it is easier for us the use another format for the _iOS_ version, namely _USDZ_, instead of using the already obtained _GLB_ file from the _Android_ version in [part 2](https://dev.to/9bytes/navigating-the-ar-landscape-in-react-native-with-kotlin-2caj) of this series. ### Download a model Basically, download any model you like from a site like [this](https://developer.apple.com/augmented-reality/quick-look/). I downloaded the pancakes model. ### XCode The next step is to add the downloaded model to XCode. Naturally, with all things _Apple_, this is far more annoying than it has any right to be: 1. Right-click on the project's root, select _New Group_ and name it _Resources_. 2. Drag the .usdz file from Finder and drop it into the _Resources_ group in Xcode's project navigator. 3. When you add the file, make sure the options are set correctly in the dialog that appears. Ensure the _Copy items if needed_ is checked. Also, ensure that the file is added to the app's target by checking _RN3DWorldExplorer_, so it's included in the build. ### Update info.plist The last step is to add the following key to your info.plist file (located at _ios/RN3dWorldExplorer_): ```xml <key>NSCameraUsageDescription</key> <string>This app needs access to the camera to create 3D AR experiences.</string> ``` This is needed as we access the users camera for the 3D-Model, which _Apple_ regards as privacy-sensitive data. ## Coding We will be doing several updates to our _RCTARodule.m_ file in order to use _ARKit_. At first, add the following imports to the top of the file to leverage the AR capabilities on _iOS_: ```objective-c #import <ModelIO/ModelIO.h> #import <SceneKit/ModelIO.h> #import <ARKit/ARKit.h> ``` Now we will be implementing a new interface. Copy this code between the imports and the `@implementation RCTARModule`: ```objective-c @interface RCTARModule () <ARSCNViewDelegate> @property (nonatomic, strong) ARSCNView *sceneView; @property (nonatomic, strong) SCNNode *modelNode; @end ``` In essence, this code is preparing the `RCTARModule` class to handle AR functionalities. 
It's setting up an AR SceneView (`ARSCNView`) to display AR content and a node (`SCNNode`) to load our 3D model in the AR scene. The module is also prepared to handle `ARSCNView` events by conforming to the `ARSCNViewDelegate` protocol. We will update our `showAR` function in the now to be able to display the model. This will take several steps. Exchange the current code for this: ```objective-c RCT_EXPORT_METHOD(showAR:(NSString *)filename) { dispatch_async(dispatch_get_main_queue(), ^{ RCTLogInfo(@"Loading model: %@", filename); [self initializeARView]; [self loadAndDisplayModel:filename]; [self presentARView]; }); } ``` Now we are logging the passed in parameter `filename` before we are calling three functions to set up the AR view, load and display the model and present the view to the user in the end. Don't get shocked by the amount of errors popping up, they will go away as we are implementing the functions. ### initializeARView Use this code to implement the function: ```objective-c - (void)initializeARView { self.sceneView = [[ARSCNView alloc] initWithFrame:UIScreen.mainScreen.bounds]; self.sceneView.delegate = self; // Configure AR session ARWorldTrackingConfiguration *configuration = [ARWorldTrackingConfiguration new]; configuration.planeDetection = ARPlaneDetectionHorizontal; [self.sceneView.session runWithConfiguration:configuration]; } ``` In general, the `initializeARView` method sets up the AR environment for the application. It initializes an `ARSCNView` to render the AR content, sets up the class as its delegate to handle AR-related events, configures the AR session to include horizontal plane detection, and finally starts the AR session with these configurations. ### loadAndDisplayModel Now implement the function with this code: ```objective-c - (void)loadAndDisplayModel:(NSString *)filename { NSString *filePath = [[NSBundle mainBundle] pathForResource:filename ofType:@"usdz"]; NSURL *fileURL = [NSURL fileURLWithPath:filePath]; NSError *error = nil; SCNScene *scene = [SCNScene sceneWithURL:fileURL options:nil error:&error]; // Correctly set the modelNode property self.modelNode = [scene.rootNode.childNodes firstObject]; if (self.modelNode) { self.modelNode.scale = SCNVector3Make(0.1, 0.1, 0.1); self.modelNode.position = SCNVector3Make(0, -1, -3); // Adjust as needed [self.sceneView.scene.rootNode addChildNode:self.modelNode]; } } ``` This method is designed to load a 3D model from the application's main bundle and display it in the AR scene. First it constructs a file path for a 3D model file (usdz format) based on the provided filename. Then, it creates a URL (fileURL) pointing to this file. After that, it creates a `SCNScene` and adds the model node to the AR scene. Be aware that when you use your own model, you probably want to adjust the `scale` and `position`, as it could be too near / far from the camera. 
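One thing `loadAndDisplayModel` glosses over is failure handling: `pathForResource:ofType:` returns `nil` when the model was not actually added to the app target, and the `NSError` from `SCNScene` is never checked. A slightly more defensive variant (just a sketch, not part of the original code) could start like this:

```objective-c
- (void)loadAndDisplayModel:(NSString *)filename {
  NSString *filePath = [[NSBundle mainBundle] pathForResource:filename ofType:@"usdz"];
  if (!filePath) {
    RCTLogError(@"Model %@.usdz not found in the bundle - check its target membership", filename);
    return;
  }

  NSError *error = nil;
  SCNScene *scene = [SCNScene sceneWithURL:[NSURL fileURLWithPath:filePath] options:nil error:&error];
  if (!scene || error) {
    RCTLogError(@"Failed to load model: %@", error.localizedDescription);
    return;
  }

  // ...continue exactly as in the original implementation
  self.modelNode = [scene.rootNode.childNodes firstObject];
}
```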
### presentARView Finally, add these two functions to the file: ```objective-c - (void)presentARView { UIViewController *viewController = [UIViewController new]; [viewController.view addSubview:self.sceneView]; // Create a close button UIButton *closeButton = [UIButton buttonWithType:UIButtonTypeSystem]; [closeButton setTitle:@"X" forState:UIControlStateNormal]; [closeButton addTarget:self action:@selector(closeARView) forControlEvents:UIControlEventTouchUpInside]; CGFloat buttonSize = 44.0; CGFloat padding = 16.0; closeButton.frame = CGRectMake(viewController.view.bounds.size.width - buttonSize - padding, padding, buttonSize, buttonSize); closeButton.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleBottomMargin; closeButton.backgroundColor = [UIColor blueColor]; closeButton.layer.cornerRadius = buttonSize / 2; closeButton.clipsToBounds = YES; [viewController.view addSubview:closeButton]; // Assuming you have access to the root view controller or the current view controller UIViewController *rootViewController = RCTPresentedViewController(); [rootViewController presentViewController:viewController animated:YES completion:nil]; } - (void)closeARView { UIViewController *presentingController = self.sceneView.window.rootViewController; [presentingController dismissViewControllerAnimated:YES completion:nil]; } ``` The first half of the `presentARView` function just adds a close button to the view and assigns the helper function `closeARView` to it. The latter part of the method concludes by presenting the new view controller modally on top of the current view hierarchy. This is done by retrieving the current application's presented view controller (`RCTPresentedViewController()`) and calling `presentViewController:animated:completion:` on it. ### Update App.tsx Now we finally move to our _React Native_ code. Update the `Button` s `onPress` method: ```jsx await ARModule.showAR(Platform.OS == "ios" ? "pancakes" : "AR-model.glb"); ``` If you use a different model, exchange pancakes for the file name. Don't forget to `import { Platform } from "react-native"` at the top of the file. Start up the application and test the code. ## Conclusion Successfully integrating a 3D model into an iOS AR application involves several critical steps, from preparing the appropriate file format to coding the interaction with _ARKit_. While the process might seem daunting initially, especially due to platform-specific requirements like using .usdz files for _iOS_, the result is a compelling and immersive AR experience. This concludes this series for now. Eventually there will be some articles about the integration of _VR_ functionality like 360 videos for both platforms, but the setup process is way more complicated than anything explored in this series till now. For example on Android, it involves downloading the _Cardboard SDK_ and surgically integrating it into the Android portion of the app via generating .proto files and .java files. By now, you should have a basic understanding of the integration of _AR_ services into _React Native_ apps. Your next steps would the integration of advanced functionality like moving the models around or scaling them by using pinch gestures. The sky is the limit from here now on.
9bytes
1,713,165
🚀 The Ultimate Guide to ReactJS: Dive Deep and Boost Your Skills 🌟
Welcome to this comprehensive guide on ReactJS! If you're a developer who wants to level up your...
0
2023-12-31T18:04:51
https://dev.to/sarathadhithya/the-ultimate-guide-to-reactjs-dive-deep-and-boost-your-skills-n92
Welcome to this comprehensive guide on ReactJS! If you're a developer who wants to level up your skills, you've come to the right place. ReactJS is a popular JavaScript library for building user interfaces. Developed by Facebook, it's used by millions of developers worldwide. With its virtual DOM, component-based architecture, and reactive data flow, ReactJS offers a robust solution for managing complex and large-scale applications. In this guide, we'll cover the following topics: 1. ReactJS Fundamentals: Components, JSX, Props, and State 2. ReactJS Advanced Concepts: Hooks, Context API, and Error Boundaries 3. State Management: Redux, MobX, and Recoil 4. Testing ReactJS Applications: Jest, React Testing Library, and Enzyme 5. Optimizing Performance: Lazy Loading, Code Splitting, and Pure Components 6. Best Practices: Clean Code, Component Design, and Performance Optimization Ready to dive in? Let's go! **1. ReactJS Fundamentals: Components, JSX, Props, and State** - **Components**: The building blocks of ReactJS applications. They are reusable, composable, and manage their own state. - **JSX**: JavaScript XML. A syntax extension for JavaScript that allows you to write HTML-like code within your JavaScript code. - **Props**: Short for properties. Props are used to pass data from a parent component to a child component. - **State**: The component's internal data. State can be managed using class components or the useState hook. **2. ReactJS Advanced Concepts: Hooks, Context API, and Error Boundaries** - **Hooks**: Functions that allow you to use state and other React features in functional components. - **Context API**: A way to pass data through the component tree without having to pass it down manually at every level. - **Error Boundaries**: React components that catch JavaScript errors anywhere in their child component tree, log those errors, and display a fallback UI. **3. State Management: Redux, MobX, and Recoil** - **Redux**: A predictable state container for JavaScript apps. - **MobX**: A state management library that makes it easy to manage and update state. - **Recoil**: A modern alternative to Redux and MobX. **4. Testing ReactJS Applications: Jest, React Testing Library, and Enzyme** - **Jest**: A JavaScript testing framework developed by Facebook. - **React Testing Library**: A library for testing React components based on the DOM. - **Enzyme**: A JavaScript testing utility for React that makes it easy to test React components' output and state. **5. Optimizing Performance: Lazy Loading, Code Splitting, and Pure Components** - **Lazy Loading**: A technique for splitting the codebase into smaller chunks, which are then loaded on demand. - **Code Splitting**: A technique for splitting the codebase into smaller chunks, which can improve the performance of the application. - **Pure Components**: React components that implement the `shouldComponentUpdate` lifecycle method and only re-render when their state or props have changed. **6. Best Practices: Clean Code, Component Design, and Performance Optimization** - **Clean Code**: Writing clean, maintainable, and scalable code is crucial for building long-lasting and maintainable applications. - **Component Design**: Designing reusable, composable, and scalable components is essential for building a robust and maintainable application. - **Performance Optimization**: Regularly analyzing and optimizing your application's performance is essential for ensuring a smooth and responsive user experience. 
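To make the lazy loading and code splitting points from section 5 concrete, here is a minimal sketch using `React.lazy` and `Suspense` (the `Dashboard` component and its path are placeholders):

```jsx
import React, { Suspense, lazy } from 'react';

// The Dashboard bundle is split out and only downloaded when it is first rendered
const Dashboard = lazy(() => import('./Dashboard'));

export default function App() {
  return (
    <Suspense fallback={<p>Loading...</p>}>
      <Dashboard />
    </Suspense>
  );
}
```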
TL;DR: If you're looking to become a ReactJS expert, this guide will take you through all the essential concepts, best practices, and techniques to help you master this powerful and popular JavaScript library. CTC: This comprehensive guide should help you build a strong foundation in ReactJS. Keep in mind that becoming proficient in ReactJS requires
sarathadhithya
1,713,279
Tailwind, My Altered Opinion
My two cents on tailwind and how that has changed over the years!
0
2023-12-31T22:10:26
https://dev.to/kieranmv95/tailwind-my-altered-opinion-1of3
tailwindcss, webdev, programming, opinion
--- title: Tailwind, My Altered Opinion published: true description: My two cents on tailwind and how that has changed over the years! tags: - tailwindcss - webdev - programming - opinion # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. # published_at: 2023-12-31 22:07 +0000 --- When Tailwind first emerged on the web development scene, I found myself firmly rooted in my comfort zone, relying on familiar practices and tools. As someone who wasn't an avid user of many frameworks, I typically stuck to a minimalistic approach, often implementing a reset like Meyerweb or resorting to the well-established Bootstrap when the need arose. Tailwind, with its extensive list of classes, initially struck me as overwhelming and cluttered—an apparent departure from the simplicity I had grown accustomed to. Perhaps it was a reluctance to embrace change or an attachment to my existing workflow that fueled my initial skepticism. However, as time passed, so did my perspective. Join me as I unravel the factors that prompted a shift in my opinions and ultimately led me to reconsider this dynamic framework. ## First Opinion When Tailwind first entered the web development scene, I found myself deeply entrenched in the practice of crafting everything from scratch. Bootstrap, at most, was a concession for minor components, particularly its grid system. Despite the availability of newer layout tools like Grid and Flex, I clung to Bootstrap's grid system, feeling a sense of control over my code. I was a practitioner of the BEM CSS methodology, believing in its ability to maintain cleanliness and organisation in my class names. However, the arrival of Tailwind felt like a bombardment of class names, an assault on the meticulous structure I had painstakingly built. The HTML files seemed to burst at the seams with a myriad of class names, a sight that was particularly disconcerting if you adhered to ESLint rules restricting line width to 120 characters or something similar. While I couldn't deny the visually stunning creations people achieved with Tailwind, I couldn't reconcile the idea of sacrificing the clarity of my HTML for such results. It felt as though the framework demanded a compromise that other tools didn't. In my initial encounter, I conducted only a cursory trial, swiftly dismissing Tailwind for both personal and project use. My dismissal extended to fervent discussions with friends and colleagues, showcasing a degree of inflexibility that, in hindsight, reflected my limited development experiences at the time. Back then, my approach was rooted in the belief that if I didn't fully understand something or the reasons behind its design, it couldn't possibly be beneficial. Little did I realise that this stance marked a significant rookie mistake—one that, over time, I would come to reassess and rectify. Ah, the sting of hindsight. ## When did I start realising it might be the right technology? Admittedly, my initial resistance to Tailwind persisted for quite some time. I held onto my skepticism, unconvinced of its merits, and continued to voice my reservations against it. It wasn't until I noticed a growing trend—Tailwind becoming an integrated option in popular CLIs like Create React App (CRA) and NextJS, that I started to entertain the idea of giving it a second chance. Around this time, a significant shift had occurred in my approach. I had broken free from Bootstrap's influence, no longer relying on it for my grid systems. 
Instead, I invested time in comprehending the nuances of Flex and Grid, allowing me to craft elegant and minimal code for my layouts. It felt liberating, aligning with my vision of authoring all my code independently, free from frameworks or constraint, or so I thought. However, as I immersed myself in crafting everything from scratch, I couldn't ignore the shortcomings of my previous CSS approach. Each project I worked on exhibited common issues: - **Boilerplate Overload**: A surplus of repetitive, boilerplate code that felt like unnecessary baggage. - **Isolated Understanding**: Code that could only be truly deciphered by myself, creating hurdles for collaboration within a team. - **Inconsistent Standards**: Projects that failed to conform to what I considered my established (and undocumented) standards when another developer touched them. - **Limited Contribution**: Maintenance and updates were solely my responsibility, lacking the collective vigilance of a team to ensure accessibility and identify potential bugs. These realisations prompted a reconsideration of my coding philosophy. Was complete autonomy truly the epitome of efficiency, or had I been overlooking the benefits of a more collaborative and standardised approach? The answers lay in exploring Tailwind with a renewed perspective. ## How Do I Use It Now, and How Has My Opinion Changed Over time, my perspective on Tailwind has undergone a significant transformation. While remnants of my initial reservations persist, I've embraced a hybrid approach, blending elements of the old and new. As evident on my personal website built with Next.js and Tailwind, I've managed to reduce my custom CSS to a mere 50 lines. Comparing it to the V1 of my personal site, which boasted over a thousand lines of CSS, the efficiency of Tailwind is evident. The framework selectively pulls only the required CSS, eliminating the need for unnecessary helper classes during build. The result is a visually enhanced site with a substantially smaller footprint, fostering a developer-friendly environment. Collaboration becomes seamless, as other developers can glean insights into my styling setup by referencing the Tailwind config and documentation. However, some aspects of my initial stance remain unaltered, notably my aversion to cluttered HTML with an abundance of class names. Despite the apparent visual chaos, I've come to appreciate the verbosity of Tailwind's approach, it may look overwhelming, but it provides immediate clarity on functionality. A compromise I've made is abstracting complex class combinations into CSS classes, employing the @apply keyword to maintain readability. This not only adheres to methodologies like BEM but also results in cleaner HTML structures. In essence, Tailwind has become an indispensable tool in my web development arsenal, not just for its efficiency but also for the adaptability it offers. The evolution of my opinion mirrors the dynamic nature of the web development landscape, emphasising the importance of embracing change and reevaluating assumptions for continual growth. 
Below is an example of employing the @apply keyword to abstract some of Tailwind's larger class combinations into a CSS file, removing them from your HTML/JSX. **Before** ```js return ( <Link href={`blog/${post.slug}`} className="hover:underline grid border border-primary border-t-0 border-l-2 border-r-0 border-b-0 pl-4 md:border-none md:pl-0 md:block" > <span className="text-primary md:mr-2"> {prettyDate(new Date(post.date))} </span> {post.title} </Link> ); ``` **After** ```js return ( <Link href={`blog/${post.slug}`} className={styles.link}> <span className={styles.link__date}>{prettyDate(new Date(post.date))}</span> {post.title} </Link> ); ``` ```css .link { @apply hover:underline grid border border-primary border-t-0 border-l-2 border-r-0 border-b-0 pl-4 md:border-none md:pl-0 md:block; } .link__date { @apply text-primary md:mr-2; } ``` ## Conclusion In conclusion, my journey with Tailwind has seen a notable shift in perspective. Initially resistant to its extensive class naming, I reconsidered its value as major tools like CRA and Next.js integrated it. Breaking away from Bootstrap and crafting custom layouts allowed me to appreciate Tailwind's efficiency in minimising CSS. While still disliking cluttered HTML, I've embraced Tailwind's verbosity for its immediate clarity. Adopting a hybrid approach, I now use Tailwind for its adaptability, reducing custom CSS and fostering a more developer-friendly environment. The evolution of my opinion highlights the importance of embracing change in the ever-dynamic realm of web development.
kieranmv95
1,713,283
Tagging OCaml packages
How to tag your opam packages for better searchability
0
2023-12-31T22:38:45
https://dev.to/yawaramin/tagging-ocaml-packages-11dg
ocaml, opam
--- title: Tagging OCaml packages published: true description: How to tag your opam packages for better searchability tags: ocaml,opam # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. # published_at: 2023-10-15 20:21 +0000 --- ## TL;DR Add `(tags (org:your-github-username))` to your `dune-project` file's `package` stanza. ## About OCaml's opam package manager has an unfortunately little-known feature called 'tagging'. This allows you to give 'tags' or 'labels' to your packages and search using those tags. This works a lot like popular blogging platforms, like dev.to in fact! And even better, the OCaml.org website package search can already search for tags: https://ocaml.org/packages/search?q=tag:%22org:erratique%22 That's an example of searching for the `org:erratique` tag, which will find all packages by [Daniel Bünzli](https://erratique.ch/contact.en), who meticulously tags his OCaml packages. In fact the `org:` prefix for tags is specifically reserved for the 'organization' (or person) who publishes the package: https://opam.ocaml.org/doc/Manual.html#opamfield-tags ## How to tag If you are using the [dune](https://dune.build/) build system, add the tag(s) to your `dune-project` file's `package` stanza. E.g.: ```dune-project (package (name dream-html) (synopsis "HTML generator eDSL for Dream") (description "Write HTML directly in your OCaml source files with editor support.") (documentation "https://yawaramin.github.io/dream-html/") (tags (org:yawaramin)) (depends (dream (>= 1.0.0~alpha3)))) ``` Of course, you can add multiple tags, e.g. `(tags (tag1 tag2 tag3))`. Refer to https://dune.readthedocs.io/en/stable/dune-files.html#field-package-tags for the documentation. Make sure you run `dune build` so that dune regenerates the package's `opam` file. Now, commit these changes and the next time you [publish your package](https://ocaml.org/docs/publishing-packages-w-dune) on opam, these tags will appear and be searchable, e.g. https://ocaml.org/packages/search?q=tag:%22org:your-github-username%22 ## Namespacing You might have realized that I am specifically recommending adding the `org:` tag with high priority, because it enables an ad-hoc form of namespacing. The opam registry doesn't auto-enforce namespacing, of course, but you can always appeal to the registry maintainers if you think someone is squatting on your namespace. ## Searchability Of course, namespacing is not the only benefit–you also improve the searchability of the opam registry by adding this metadata to your projects. For example, if people are looking for web-related projects, they might search for `tag:"web"` etc. This will benefit the entire OCaml ecosystem. And even better, it's really easy to do.
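For reference, with the `dream-html` stanza above, the `opam` file that dune regenerates should end up containing a tags field roughly like this (a sketch; the exact layout dune emits may differ):

```
tags: ["org:yawaramin"]
```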
yawaramin
1,713,435
MQL5 HTTP Web Request Using WinINet DLL
An easy-to-use, general-purpose library for sending HTTP web requests in MQL5 using the WinINet...
0
2024-01-01T07:07:36
https://dev.to/rabist/mql5-http-web-request-using-wininet-dll-fhk
mql5, mt5, metatrader5, metatrader
An easy-to-use, general-purpose library for sending HTTP web requests in MQL5 using the WinINet DLL. - :file_folder: \MQL5\Include\ - :page_facing_up: [WinINet.mqh](https://github.com/geraked/metatrader5/blob/master/Include/WinINet.mqh) - :file_folder: \MQL5\Scripts\ - :page_facing_up: [WinINet_Test.mq5](https://github.com/geraked/metatrader5/blob/master/Scripts/WinINet_Test.mq5) Introducing an easy-to-use, general-purpose library designed for seamless handling of HTTP web requests in MQL5 through the utilization of the [WinINet](https://learn.microsoft.com/en-us/windows/win32/wininet/portal) DLL. This library simplifies the process of incorporating web request functionalities into your MQL5 projects, providing a user-friendly interface for developers. With its intuitive design, the library empowers users to effortlessly send and manage HTTP requests, offering a versatile solution for a wide range of applications within the MQL5 environment. Streamlining the integration of web communication capabilities, this library serves as a valuable resource for developers seeking efficient and reliable methods to interact with online resources in their MQL5 projects. ### Example 1: Send a GET request and retrieve current GMT time. ```c++ WininetRequest req; WininetResponse res; req.host = "worldtimeapi.org"; req.path = "/api/timezone/Europe/London.txt"; WebReq(req, res); Print("status: ", res.status); Print(res.GetDataStr()); ``` ### Example 2: Send a POST request and echo it back. ```c++ WininetRequest req; WininetResponse res; req.method = "POST"; req.host = "httpbin.org"; req.path = "/post"; req.port = 80; req.headers = "Accept: application/json\r\n" "Content-Type: application/json; charset=UTF-8\r\n"; req.data_str = "{'id': 10, 'title': 'foo', 'message': 'bar'}"; StringReplace(req.data_str, "'", "\""); WebReq(req, res); Print("status: ", res.status); Print(res.GetDataStr()); ```
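As a further illustration (a sketch built only from the fields already shown above, not taken from the library's documentation), you can also guard on the HTTP status code before using the response body:

```c++
WininetRequest req;
WininetResponse res;

req.host = "worldtimeapi.org";
req.path = "/api/timezone/Europe/London.txt";
req.headers = "Accept: text/plain\r\n";

WebReq(req, res);

// Only trust the body when the server answered with 200 OK
if (res.status == 200) {
    Print(res.GetDataStr());
} else {
    Print("Request failed, status: ", res.status);
}
```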
rabist
1,713,579
Viral Paper Tested MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
🌟 Welcome to this comprehensive tutorial video where I guide you through the process of installing...
0
2024-01-01T11:17:36
https://dev.to/furkangozukara/viral-paper-tested-magicanimate-temporally-consistent-human-image-animation-using-diffusion-model-n9c
animation, stablediffusion, ai, animate
🌟 Welcome to this comprehensive tutorial video where I guide you through the process of installing and using Magic Animate for Temporarily Consistent Human Image Animation using a Diffusion Model, along with other exciting tools like DensePose generator and CodeFormer face restore! 🌟 [MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model — Full Tutorial](https://youtu.be/HeXknItbMM8?si=fs8YpwRy_H6JdlO7) {% embed https://youtu.be/HeXknItbMM8?si=fs8YpwRy_H6JdlO7 %} Tutorial Source Patreon Post — Installers ⤵️ https://www.patreon.com/posts/94098751 C++ Tools, Python & FFmpeg Tutorial ⤵️ https://youtu.be/-NjNy7afOQ0 Stable Diffusion GitHub repository ⤵️ https://github.com/FurkanGozukara/Stable-Diffusion SECourses Discord To Get Full Support ⤵️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388 🔹 In this detailed walkthrough, I cover: The one-click installation process for Magic Animate, ensuring you can easily run the demo and generate incredible animations. Steps to generate DensePose videos from any footage, turning standard videos into DensePose format with ease. Utilization of DaVinci Resolve for video cropping, zooming, and exporting, preparing your footage for transformation into DensePose format. Introduction to the Gradio application for CodeFormer face restoration, capable of video upscaling and face improvement. 🔸 Key Highlights: Detailed instructions for installing necessary tools like Python, Visual C++ tools, and FFmpeg, complete with a link to a helpful installation tutorial video. Advice on handling Magic Animate’s folder structure and file extraction for optimal performance. Step-by-step guide on installing and using various features within Magic Animate, including the creation of DensePose videos and the use of CodeFormer for face restoration and video upscaling. Demonstrations of the application’s capabilities, including the process of generating DensePose videos and the notable improvements in video quality and face restoration. 🔹 Special Features: Tips for Linux users and those without strong GPUs on using RunPod for the installation and use of Magic Animate, CodeFormer, and DensePose Video Maker. Insights into the best Python and FFmpeg versions for a seamless experience. Troubleshooting tips, including how to check for errors during installation and ensuring compatibility with other AI applications. 🔸 Additional Resources: Access to all instructions, scripts, and necessary files through a dedicated Patreon post. Direct download links for the latest Magic Animate version and other essential software. Invitation to join our Discord channel for support and community interaction. 💡 Pro Tips: Pay attention to the detailed instructions for installing each tool and application to ensure a smooth experience. Explore the video chapters to jump directly to specific sections of interest, especially if you’re focused on particular aspects like RunPod installation or CodeFormer use. 🙏 Support & Community: If you find this tutorial helpful, consider supporting my work on Patreon and joining our growing community. Follow me on LinkedIn for updates on new tutorials and projects. 🎥 Don’t forget to like, subscribe, and share this video if it helped you! Stay tuned for more amazing tutorials and tips. 
#MagicAnimate #DensePose #CodeFormer #AIAnimation #VideoEditing #Tutorial #TechGuide #RunPod #Python #DaVinciResolve #FFmpeg #GradioApp #TechTutorial #Linux 0:00 Introduction to Magic Animate, DensePose Maker and CodeFormer full tutorial 3:18 How to 1-click install Magic Animate on Windows 4:31 How to check and verify your installed Python and FFmpeg 5:43 How to run Magic Animate web app and start using it 6:21 How to see progress of making animation 6:46 How much VRAM Magic Animate uses 7:12 First output of Magic Animate with paper authors shared demo 7:22 How to use our pre-shared DensePose videos to animate images 7:33 Magic Animate supported resolution 7:44 How to install DensePose video generator 8:50 How to generate DensePose video from any video example 9:04 How to properly crop and extract 512x512 video from any video via Davinci Resolve free edition 11:10 How to run DensePose video maker once your input video is ready 12:07 How to fix noise having frames of DensePose video generator output 13:20 Testing new DensePose video we made ourselves 15:18 Testing a custom image with paper authors pre-shared DensePose video 15:26 How to install and use CodeFormer Gradio web APP to improve your videos face quality and upscale them 16:08 How to start and use CodeFormer app 16:41 Where the Magic Animate generated videos are saved by default 17:01 Options of CodeFormer face enhance and video upscale 17:53 CodeFormer results comparison 18:46 How to install and use Magic Animate on RunPod or Linux 22:17 How to install and use DensePose maker on RunPod or Linux 26:06 How to install and use CodeFormer Gradio App on RunPod or Linux
furkangozukara
1,713,682
Closure, Scope and lexical scope simplified
Introduction Kunle is a thief, he stole from Iya Basira jewellery store in Lagos and...
0
2024-01-01T13:30:08
https://dev.to/ozor/closure-scope-and-lexical-scope-simplified-ado
## Introduction Kunle is a thief; he stole from Iya Basira's jewellery store in Lagos and managed to escape through the Seme border to Togo. The Nigerian police have been on his trail for some days now, and they have received an intelligence report of his whereabouts: they traced him to Togo. The Nigerian police cannot enter Togo to make the arrest because that is outside their area of operation, so they contacted Interpol to help them make the arrest. With the help of Interpol, the Nigerian police were able to arrest Kunle and bring him back to Nigeria to face the law. Now that is closure: the ability of a function to access variables from the scope in which it was defined, even when the function is executed in a different scope. In this article, you will be learning • what closure is • what a scope is • local and global variables • precedence between global and local variables • the scope chain • lexical scoping ## Scope Scope is simply a set of rules that controls how a reference to a variable is resolved. Think of a scope as an environment, a space within which a function or variable can be accessed. ## Local and Global Variables Let's begin by looking at what a variable is. A variable is like a cup that you can put some water in and cover, or a bowl, bucket, or box that you can store your money in and keep. A variable can be local or global. When a variable is local, it can only be accessed by the function in which it is declared; when a variable is global, it can be accessed by all the functions within the page because it belongs to the scripting page. Let us take a look at these two examples: ``` function myFunction(){ let a = 4 return a*a } ``` You can see that the variable a was declared inside myFunction and as such can only be accessed by myFunction. This makes it local, as it can only be used inside the myFunction function and is hidden from other scripting code outside. Now look at this example: ``` let a = 4 function myFunction() { return a*a } ``` Here you can see that the variable was declared outside the myFunction function and as such belongs to the scripting page. This makes it accessible to other functions within the page because the variable is a property of the page. It is important to note that local and global variables are not the same even if they have the same name. In a case where a local and a global variable have the same name, the local variable takes precedence while the function is running; after the function finishes executing, the local variable goes out of scope and the global variable is visible again. This is called variable shadowing. ``` function myFunc() { let myVar = 'test'; if (true) { let myVar = 'old test'; console.log(myVar); // old test } console.log(myVar); // test } myFunc(); ``` In this code block, we can see that there are two variables with the same name, myVar. One is scoped to the whole function, the other only to the if block. When the variable is first logged inside the if statement, it returns "old test" instead of "test"; the reason is that the inner, block-scoped variable shadows the outer one within the if block. When the same variable is logged outside the if statement, the outer variable is used again. We can create a variable without a declaration keyword (var, let or const). This kind of variable is always global, even if it is created inside a function. While a global variable lives until the window is closed or you move to another page, a local variable is created when the function is invoked and dies when the function call finishes.
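To tie the Interpol analogy back to actual code before we move on, here is a small closure example: the inner function keeps access to count from the scope in which it was defined, even after makeCounter has finished running.

```
function makeCounter() {
  let count = 0; // local to makeCounter

  return function () {
    count = count + 1; // count is still reachable here: this is a closure
    return count;
  };
}

const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2
```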
## Scope Chain From the examples above, we saw what global and local scope mean. Let's look at the scope chain and what it means. Consider this code block: ``` let firstName = "kings" function myName(){ let lastName = "Badmus" function myLastName(){ let address = 3 function myProfile(){ return console.log(`my name is ${firstName} ${lastName} and I live at ${address} blv road`) } return myProfile() } return myLastName() } myName() ``` When this function is invoked, for the browser to give an output it needs to look for the variables where they were defined (their lexical scope). It first looks at the myProfile function, which is its local scope; if it does not find a variable there, it looks at the parent scope, the myLastName function, where it finds the address variable. It then looks at the next higher scope, the myName function, where it finds lastName. Finally, it looks at the global scope for the remaining variable, firstName. The browser can only travel from the child to the parent and not the other way around. This is called the scope chain: all the scopes the browser needs to travel through, from the innermost scope to the global scope. In this case, from the myProfile() scope to the myLastName() scope to the myName() scope to the global scope. ## Lexical scope This is simply the scope in which a variable is created or defined. The scope where a variable is defined can be different from where it is used or called. An item's definition space is its lexical scope. Let us look at these two examples: ``` let car = "Innoson" function myCar(){ return car } ``` In this example, the car variable is defined at the global scope, so its lexical scope is the global scope, and as such the myCar function can access it since it is at the global scope. ``` function myCar(){ let car = "innoson" return car } ``` In this case, the scope of the car variable is local, so that is its lexical scope, and it can still be accessed by the myCar function since it is within its reach. When writing code, the lexical scope needs to be taken into consideration, as it will determine how our program runs. This is because only code within a variable's lexical scope can access it. Let's look at this: ``` function showName(){ const lastName = "Offor" return lastName } function displayName(){ const fullName = "Emeka" + lastName return fullName } console.log(displayName()) ``` We will get an error: Uncaught ReferenceError: lastName is not defined This is because displayName() is trying to access a variable outside its lexical scope. lastName has a local scope and is defined inside the showName function, so its lexical scope is within the showName function. It cannot be accessed outside the showName function; that is the reason for the error. Let us look at it differently: ``` function showName(){ const lastName = "Offor" return lastName } function displayName(){ const fullName = "Emeka " + showName() return fullName } console.log(displayName()) // Emeka Offor ``` This code works because both showName() and displayName() are in the same lexical scope; showName() returns the variable, which can then be accessed by displayName(). ## Pros and Cons of Local and Global Variables It is considered best practice to declare a variable with a limited scope. This reduces the impact of other functions on it and its visibility within the program. 
However, there are cases in which a global variable has more advantages than a local variable, for example when multiple functions need to manipulate a particular variable. Another case in which a global variable has an advantage is when we need to preserve the value of a variable; for example, when we need to keep track of user login details, we store them in a global variable so that they can be made available to multiple functions. It is important to declare such a global variable using const, as that will keep it from being reassigned. Global variables have their drawbacks in that they make our code complex and difficult to understand and maintain, and they can be a source of bugs because several functions can affect them. Despite the advantages highlighted above, it is advisable to store our data in local variables, as that makes our code easier to understand, reduces bugs, and limits scope. ## Conclusion In this article, we have learned about variable scope, the scope chain, lexical scope, and closure. I hope you enjoyed the article; please leave a comment for me.
ozor
1,713,692
Requested URL returned error: 403
The image below shows a 403 error, which means a credentials error or forbidden (you don’t have...
0
2024-01-01T13:46:58
https://dev.to/shafia/requested-url-returned-error-403-1pfm
github, git, errors, 403
The image below shows a 403 error, which means a credentials error or forbidden (you don’t have permission to push). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sxxyohqctd20g459q7nr.png) ## **Solution-1** **Step-1:** Check whether you are pushing the code to your own repository ``` git remote -v ``` **Step-2:** If it is not your repository, create a new repository on your GitHub account ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/47lw0jjkqhgana3r1e8a.png) **Step-3:** Copy your repository link ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wsq498308yzjglmek1aq.png) **Step-4:** Go to the terminal and run the following command: ``` git remote set-url origin your-repo-link ``` After entering the command, check whether the repository URL changed by running **git remote -v** in your terminal. **Step-5:** Then, run these commands one by one ``` git branch -M main git push -u origin main ``` ## **Solution-2** **Step-1:** Go to the Start menu on Windows and search for Credential Manager. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/krlurifxtp3isn3yxfb6.png) **Step-2:** Click on the Credential Manager and then click on Windows Credentials ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ahnthbn95xcepv2o52k.png) **Step-3:** Under Generic Credentials, you will see git:https://github.com. Remove it. **Step-4:** Follow the steps in this blog https://dev.to/shafia/support-for-password-authentication-was-removed-please-use-a-personal-access-token-instead-4nbk
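A quick sanity check for Solution-1: `git remote -v` prints the fetch and push URLs of each remote, so you can see whether origin points at someone else's repository. The repository names below are only placeholders:

```
$ git remote -v
origin  https://github.com/someone-else/their-repo.git (fetch)
origin  https://github.com/someone-else/their-repo.git (push)
$ git remote set-url origin https://github.com/your-username/your-repo.git
$ git remote -v
origin  https://github.com/your-username/your-repo.git (fetch)
origin  https://github.com/your-username/your-repo.git (push)
```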
shafia
1,713,742
Error: (e.g git pull..) before pushing again
The error below takes place when you edit your file directly on GitHub, so the code in your local...
0
2024-01-01T14:28:34
https://dev.to/shafia/error-eg-git-pull-before-pushing-again-kl3
git, github, errors, gitpull
The error below takes place when you edit your file directly on GitHub; as a result, the code in your local file and the file on GitHub do not match. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zgg19scrmwwkyn57vw7.png) Go to your terminal again and run these commands **<u>Solution-1</u>** ``` git pull git push origin main ``` **<u>Solution-2 (not recommended)</u>** ``` git push -f ``` By following these steps, you should be able to synchronize your local changes with the changes on GitHub and avoid conflicts.
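If the pull itself reports merge conflicts (which can happen when the same lines were changed both locally and on GitHub), a typical resolution sequence looks like this (the file name is only an example):

```
git pull origin main          # merge the remote changes into your local branch
# fix the conflict markers in the affected file, e.g. README.md, then:
git add README.md
git commit -m "Resolve merge conflict"
git push origin main
```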
shafia
1,713,809
Agility Accelerator: Scaling Your SaaS with Multi-Tenant Magic
In the realm of SaaS, agility is your golden chariot, propelling you towards growth and market...
0
2024-01-01T16:01:27
https://dev.to/marufhossain/agility-accelerator-scaling-your-saas-with-multi-tenant-magic-fbg
In the realm of SaaS, agility is your golden chariot, propelling you towards growth and market dominance. But scaling your platform with traditional methods can feel like navigating a labyrinth on a sluggish donkey – slow, cumbersome, and frustrating. Enter the multi-tenant cloud, a potent elixir that transforms your SaaS into a supersonic spaceship, ready to soar through the competitive stratosphere. **1. Charting Your Course to Agility:** Before unleashing the multi-tenant magic, define your desired agility outcomes. Do you dream of lightning-fast deployments? Seamlessly accommodating surging user bases? Identifying your goals ensures you choose the right [multi-tenant architecture](https://www.clickittech.com/saas/multi-tenant-architecture/) and tailor your cloud setup for optimal acceleration. **2. Selecting the Right Fuel:** Multi-tenant cloud solutions come in various flavors, each offering a unique blend of agility and control. Opt for a multi-tenant architecture that seamlessly accommodates your specific application needs, integrates with your existing systems, and scales effortlessly along with your user base. **3. Building Your Launchpad:** Your multi-tenant architecture is your launchpad to agility heaven. Design it with resource segregation and data isolation at its core, ensuring each tenant operates in a secure and performant sandbox. Leverage containerization technologies like Docker to further isolate applications and facilitate rapid scaling. **4. Optimize for Liftoff:** Agility isn't just about speed; it's about efficiency too. Analyze resource utilization within your multi-tenant environment, implementing resource quotas and auto-scaling groups to optimize resource allocation and minimize costs. Remember, a well-oiled multi-tenant architecture is a cost-effective one. **5. Continuous Flight Control:** The cloud is a dynamic ecosystem, demanding constant vigilance. Monitor your multi-tenant platform with hawk-like precision, utilizing tools like AWS CloudWatch and Kubernetes dashboards to identify performance bottlenecks and resource inefficiencies. Adapt your architecture and optimize configurations on the fly, ensuring your SaaS platform remains a nimble spaceship in the ever-changing cloud landscape. **Multi-Tenant Consulting: Your Expert Copilot:** Navigating the intricacies of multi-tenant architecture can be a complex endeavor. Consider partnering with experienced multi-tenant consulting services to gain invaluable expertise in areas such as: * **Designing and implementing secure and scalable multi-tenant architectures.** * **Optimizing resource utilization and minimizing costs.** * **Ensuring data isolation and compliance in a shared environment.** * **Monitoring and troubleshooting performance issues.** * **Providing ongoing support and expertise throughout your SaaS journey.** Multi-tenant consulting services can be your trusted copilot, guiding you through the challenges and maximizing the agility-boosting potential of your multi-tenant cloud setup. **Reach the Agility Apex:** Harnessing the multi-tenant magic takes strategic planning, tactical execution, and a commitment to continuous optimization. By defining your agility goals, choosing the right architecture, optimizing resource utilization, and embracing the expertise of multi-tenant consulting, you can transform your SaaS platform into an agile champion, soaring effortlessly towards market dominance. 
Remember, the skies are no longer the limit – with multi-tenant agility, the clouds are your launchpad to endless possibilities. So, buckle up, adventurer, and prepare for an exhilarating journey to the apex of agility in the SaaS stratosphere! This guide has equipped you with the knowledge and inspiration to ignite your multi-tenant agility adventure. But remember, the ultimate flight path is yours to forge. Embrace agility, optimize your cloud, and conquer the competitive landscape with the multi-tenant magic at your fingertips. The SaaS frontier awaits your arrival!
marufhossain
1,713,925
Send fluent bit collected logs to Seq
Here I will set up fluent-bit and Seq, both with Docker, and push logs through Fluent Bit to Seq. I...
0
2024-01-02T20:26:23
https://github.com/minhaz1217/devops-notes/tree/master/66.%20send%20fluent%20bit%20logs%20into%20seq
devops, fluentbit, seq, docker
Here I will set up Fluent Bit and Seq, both with Docker, and push logs through Fluent Bit to Seq. ## Setup a docker network for the containers ``` docker network create fluent-bit_seq ``` ## Setting up Seq First, set up a hashed password to be used ``` PH=$(echo 'seqPass%%' | docker run --rm -i datalust/seq config hash) ``` Make sure that the password variable is set ``` echo $PH ``` Run the container ``` docker run --name seq -d --network fluent-bit_seq \ -p8080:80 --restart unless-stopped \ -e ACCEPT_EULA=Y -e SEQ_FIRSTRUN_ADMINPASSWORDHASH="$PH" \ datalust/seq ``` Now browse to `localhost:8080` and log into Seq using username=admin password=seqPass%% ![Logging into seq](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/byo0hbf5vzvrnxpk0cjc.png) ![After logging into seq](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hahj9sjoqgk6srmewehl.png) ## Setting up Fluent Bit Designate a folder where you'll store the Fluent Bit configs. In my case the directory is `D:/fluent-bit` Set the directory where Fluent Bit's config will come from ``` export sharedFolder=/var/fluent-bit_seq ``` Start a temporary container to copy the default configs from ``` docker run -d --rm --name temp cr.fluentbit.io/fluent/fluent-bit ``` Copy the configs to your designated folder ``` docker cp temp:/fluent-bit/etc/ $sharedFolder ``` Stop the temporary container ``` docker stop temp ``` Now start Fluent Bit with the config folder mounted as a volume ``` docker run -dti --name fluent-bit --network fluent-bit_seq \ -v $sharedFolder:/fluent-bit/etc \ cr.fluentbit.io/fluent/fluent-bit ``` ![See what is in the shared folder directory](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61ts1ee3aiksqodxiqp3.png) By default Fluent Bit is configured to output to stdout, so check the Docker logs to see what Fluent Bit is logging. ``` docker logs fluent-bit ``` ![Fluent bit logs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1j3x2uvgbaz0oeo2lpri.png) ## Configuring Fluent Bit to send logs to Seq Go to the Fluent Bit configuration directory and search for this section ``` # fluent-bit.conf [OUTPUT] name stdout match * ``` Replace this section with the following and save ``` # fluent-bit.conf [OUTPUT] Name http Match * Host seq Port 5341 URI /api/events/raw?clef Format json_lines Json_date_key @t Json_date_format iso8601 Log_response_payload False ``` Now restart the Fluent Bit container for the changes to take effect. ``` docker restart fluent-bit ``` ![after pasting the config](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8g027cley8tphlcwxaab.png) Now browse to `localhost:8080`, log into Seq using username=admin password=seqPass%%, and see that the logs are being written into Seq ![Fluent bit is sending message to seq](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5x1njei7qqe1q2su7l2j.png) ### Thanks for reading, Happy Tinkering :) ## References 1. [Github repo](https://github.com/minhaz1217/devops-notes/tree/master/66.%20send%20fluent%20bit%20logs%20into%20seq)
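If you'd rather manage both containers together, here is a rough docker-compose sketch of the same setup. This is an untested assumption on my part: it reuses the same images, expects the hashed password from the step above in a `PH` environment variable, and expects a local `./etc` folder holding the edited Fluent Bit config.

```
version: "3.8"
services:
  seq:
    image: datalust/seq
    restart: unless-stopped
    environment:
      - ACCEPT_EULA=Y
      - SEQ_FIRSTRUN_ADMINPASSWORDHASH=${PH}   # hashed password generated earlier
    ports:
      - "8080:80"
  fluent-bit:
    image: cr.fluentbit.io/fluent/fluent-bit
    depends_on:
      - seq
    volumes:
      - ./etc:/fluent-bit/etc                  # folder containing the edited fluent-bit.conf
```

Compose puts both services on a shared default network, so the `Host seq` setting in the `[OUTPUT]` section still resolves to the Seq container.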
minhaz1217
1,713,928
Ellipsis in Python
As a Python developer, you may have come across the ellipsis (...) symbol and wondered what it means....
25,915
2024-01-05T14:00:00
https://dev.to/julienawonga/ellipsis-in-python-fbd
As a Python developer, you may have come across the ellipsis (...) symbol and wondered what it means. In Python, an ellipsis is a built-in constant that represents a placeholder for code that you will add later. This constant is used in a variety of ways in Python, and in this article, we'll explore some of the common use cases of ellipsis. ## Using Ellipsis as a Placeholder In Python, you can use the ellipsis constant as a placeholder for code that you haven't written yet. This is useful when you are working on a complex program and need to come back to a particular section later. You can use the ellipsis to mark the section that you need to revisit, like this: ```python def complex_function(): # TODO: implement this function ... ``` ## Using Ellipsis in Slicing In Python, you can use ellipsis in slicing to represent an unspecified number of dimensions. When used in a slice, the ellipsis symbol means "all remaining dimensions." This is particularly useful when working with multidimensional arrays. For example, consider a 3D numpy array arr with shape (3, 4, 5). To select every element, you can write a : for each of the three dimensions: ```python arr[:, :, :] ``` Writing a : for every dimension quickly becomes tedious, especially for arrays with many dimensions or when you only care about the last axis. Here's where ellipsis comes in: ``` arr[..., :] ``` Here, the ellipsis symbol stands in for all of the unspecified leading dimensions, so this slice is equivalent to arr[:, :, :]. Likewise, arr[..., 0] selects index 0 along the last dimension while keeping every element of the leading dimensions. ## Using Ellipsis in Function Arguments In Python, you can use the ellipsis symbol in function arguments to indicate that the argument is optional. This is often used in function definitions where the function can accept an arbitrary number of arguments. For example, consider the following function: ```python def my_function(arg1, arg2, *args, **kwargs): ... ``` Here, the *args and **kwargs arguments indicate that the function can accept an arbitrary number of positional and keyword arguments. If you want to indicate that a particular argument is optional, you can use the ellipsis symbol as its default value, like this: ```python def my_function(arg1, arg2, arg3=...): ... ``` Here, the arg3 argument is optional, and its default value is `...`. ## Using Ellipsis in Type Annotations In Python, you can also use ellipsis in type annotations to represent a type that you haven't defined yet. This is useful when you're working on a large codebase and need to define types incrementally. For example, consider the following type annotation: ```python def my_function(arg1: int, arg2: str, arg3: ...) -> float: ... ``` Here, the arg3 parameter has an ellipsis type annotation, indicating that its type is unspecified. When you come back to this function later, you can fill in the type annotation for arg3. While the ellipsis constant may seem like a small and obscure feature in Python, it can actually be quite useful in certain situations. By using ellipsis as a placeholder for code that you haven't written yet, you can write more flexible and efficient Python code that is easier to maintain over time. I hope this article has helped you understand what ellipsis is and how it can be used in Python. If you have any questions or comments, feel free to leave them below!
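A quick, runnable check of the slicing behaviour described above (it assumes NumPy is installed):

```python
import numpy as np

arr = np.arange(3 * 4 * 5).reshape(3, 4, 5)  # the (3, 4, 5) array from the example

# The ellipsis expands to as many ':' as needed, so these are equivalent:
print(np.array_equal(arr[..., :], arr[:, :, :]))  # True

# Select index 0 along the last dimension, keeping all leading dimensions:
print(arr[..., 0].shape)  # (3, 4)
```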
julienawonga
1,714,092
Scraping the full snippet from Google search result
Sometimes, you see truncated text on a Google search result like this (...) . Google doesn't always...
0
2024-01-02T00:28:59
https://serpapi.com/blog/scraping-the-full-snippet-from-google-search-result/
webscraping, go
Sometimes, you see truncated text on a Google search result like this (...). Google doesn't always display the meta description of a website. Sometimes, it takes a snippet of text relevant to the search query, which can truncate the text. ![Example of truncated text result](https://serpapi.com/blog/content/images/2023/12/CleanShot-2023-12-29-at-09.24.45@2x.png) Wonder how you can get the entire snippet of this search result? Let's dive in! The Idea -------- The idea is to visit the page URL and scrape part of the relevant text until the next period sign or the whole paragraph. But before that, we need to find the Google search results list. Therefore, we will use [Google search API by SerpApi](https://serpapi.com/search-api) to scrape the Google SERP. _You can use any programming language you want, but I'll use Go Lang for this sample._ Scraping Google SERP list with Go lang -------------------------------------- First, let's get the Google organic results. ### Step 1: Get your SerpApi key * Sign up for free at [SerpApi](https://serpapi.com/). * Get your SerpApi Api Key from [this page](https://serpapi.com/manage-api-key). ### Step 2: Create a new Go Lang project ``` mkdir fullsnippet && cd fullsnippet # Create a new folder and move into it touch main.go # Create a new go file ``` ### Step 3: Install the [Golang SerpApi package](https://github.com/serpapi/serpapi-golang) ``` go mod init project-snippet # Initialize Go Module go get -u github.com/serpapi/serpapi-golang # Install the Go package by SerpApi ``` ### Step 4: This is how to get the organic\_results from Google SERP ``` package main import ( "fmt" "github.com/serpapi/serpapi-golang" ) const API_KEY = "YOUR_API_KEY" func main() { client_parameter := map[string]string{ "engine": "google", "api_key": API_KEY, } client := serpapi.NewClient(client_parameter) parameter := map[string]string{ "q": "why the sky is blue", // Feel free to change this to any keyword } data, err := client.Search(parameter) fmt.Println(data["organic_results"]) if err != nil { fmt.Println(err) } } ``` We've received each result's title, description, link, and other information. **Collect only specific data** We can collect and display only specific data in a variable like this ``` data, err := client.Search(parameter) type OrganicResult struct { Title string Snippet string Link string } var organic_results []OrganicResult for _, result := range data["organic_results"].([]interface{}) { result := result.(map[string]interface{}) organic_result := OrganicResult{ Title: result["title"].(string), Snippet: result["snippet"].(string), Link: result["link"].(string), } organic_results = append(organic_results, organic_result) } ``` Scraping the individual page ---------------------------- SerpApi focuses on scraping search results. That's why we need extra help to scrape individual sites. We'll use the [GoColly package](https://github.com/gocolly/colly). 
**Install package** ``` go get -u github.com/gocolly/colly/v2 ``` **Code for scraping an individual site** Add this code inside the loop ``` // Scrape each of the links c := colly.NewCollector() c.OnHTML("body", func(e *colly.HTMLElement) { rawText := e.Text fmt.Println("Raw text in entire body tag:", rawText) }) // Handle any errors c.OnError(func(r *colly.Response, err error) { fmt.Println("Request URL:", r.Request.URL, "failed with response:", r, "\nError:", err) }) // Start scraping c.Visit(organic_result.Link) ``` **If you need the whole text of each site, you can return the `rawText` from above. Then you're done.** _But if you need only the snippet part until the next period (the entire sentence), we will continue to the following function._ Scraping only the relevant text ------------------------------- Here's the pseudocode for returning only the relevant full snippet. ``` Find $partialSnippet in the rawText Find the position of $partialSnippet Find the next (closest) period after that partial snippet Return the whole snippet ``` Here is the Go Lang code ``` fullSnippet := findSentence(rawText, snippet) return fullSnippet ``` The `findSentence` method ``` func findSentence(rawText string, searchText string) string { // 1. Replace all whitespace runs with a single space re := regexp.MustCompile(`\s+`) fullText := re.ReplaceAllString(rawText, " ") // 2. Replace all curly apostrophes ’ with ' in rawText re1 := regexp.MustCompile(`’`) fullText = re1.ReplaceAllString(fullText, "'") // 3. Find the start index of searchText startIndex := strings.Index(fullText, searchText) if startIndex == -1 { return "Text not found" } // 4. Calculate the end index of the snippet snippetEndIndex := startIndex + len(searchText) // 5. Find the end of the sentence after the snippet endOfSentenceIndex := strings.Index(fullText[snippetEndIndex:], ".") if endOfSentenceIndex == -1 { // Return the rest of the text from the snippet if no period is found return fullText[startIndex:] } // Adjust to get the correct index in the full text endOfSentenceIndex += snippetEndIndex + 1 return fullText[startIndex:endOfSentenceIndex] } ``` Here is the result ![Result of a full snippet from Google](https://serpapi.com/blog/content/images/2024/01/CleanShot-2024-01-01-at-08.43.47@2x.png) _You can add conditional logic (an if statement) to only perform this when the snippet ends with "..." (three dots); a minimal sketch of this check appears at the end of this post._ Full code sample ---------------- Here is the full code sample in GitHub: [https://github.com/hilmanski/serpapi-fullsnippet-golang](https://github.com/hilmanski/serpapi-fullsnippet-golang) Warning ------- Here are a few potential issues and solutions with our method. ### Different snippet format This might not work when Google displays a snippet list, where the snippet comes from some headings or key points. We'll need to write different logic for this. ### Adding proxy To prevent getting blocked when scraping the individual site, you can add proxies to GoColly. > As a reminder, you don't need to worry about getting blocked for scraping the Google search itself when using SerpApi. 
Reference: [https://go-colly.org/docs/examples/proxy\_switcher/](https://go-colly.org/docs/examples/proxy_switcher/) Sample proxy switcher ``` package main import ( "bytes" "log" "github.com/gocolly/colly" "github.com/gocolly/colly/proxy" ) func main() { // Instantiate default collector c := colly.NewCollector(colly.AllowURLRevisit()) // Rotate two socks5 proxies rp, err := proxy.RoundRobinProxySwitcher("socks5://127.0.0.1:1337", "socks5://127.0.0.1:1338") if err != nil { log.Fatal(err) } c.SetProxyFunc(rp) // Print the response c.OnResponse(func(r *colly.Response) { log.Printf("%s\n", bytes.Replace(r.Body, []byte("\n"), nil, -1)) }) // Fetch httpbin.org/ip five times for i := 0; i < 5; i++ { c.Visit("https://httpbin.org/ip") } } ``` I hope it helps you to collect more data for your Google SERP!
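As mentioned earlier, you would typically only run the full-snippet lookup when the snippet is actually truncated. A minimal sketch of that check (my own addition, not part of the original code; it also treats the single-character Unicode ellipsis as truncation):

```
package main

import (
	"fmt"
	"strings"
)

// isTruncated reports whether a Google snippet was cut off, so we only pay the
// cost of visiting the page when the full sentence is actually missing.
func isTruncated(snippet string) bool {
	s := strings.TrimSpace(snippet)
	return strings.HasSuffix(s, "...") || strings.HasSuffix(s, "…")
}

func main() {
	fmt.Println(isTruncated("Some partial snippet ...")) // true
	fmt.Println(isTruncated("A complete snippet."))      // false
}
```

You would call it on `organic_result.Snippet` before `c.Visit(organic_result.Link)`.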
hilmanski
1,714,334
Elevate Your App with Creative Avatars and make additional revenue - Introducing Avataryug!
Hey DEV Community, I hope this message finds you well! 👋 I'm excited to introduce you to...
0
2024-01-02T06:00:03
https://dev.to/avataryug/elevate-your-app-with-creative-avatars-and-make-additional-revenue-introducing-avataryug-4cen
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/icxo7j3tatzn4uakmefj.png) Hey DEV Community, I hope this message finds you well! 👋 I'm excited to introduce you to Avataryug, a creative avatar design company dedicated to helping app developers like you enhance user experiences. Why Avataryug? 💸 Drive additional revenue: Implement our services to earn extra revenue from your apps. 🚀 Boost downloads: Get more engagement on your apps and watch your downloads increase. 🌐 Reduce churn: Build a loyal user base for your apps and keep churn low. How We Can Collaborate: 🤝 Partnership Opportunities: Explore potential partnerships with Avataryug to elevate your app's visual appeal and engage users from the start. 💼 Integrate our services: Use our avatars to monetize your apps and generate additional revenue. 📈 Easy integration: A simple, lightweight SDK to integrate our services, with proper documentation and support plans. Feel free to drop by our website at http://avataryug.com to learn more about our services and how we can collaborate.
avataryug
1,714,402
The Magic of Home Salon Services with Our Exclusive Salon at Home App
Welcome to a world where beauty meets convenience – a realm of glamour and relaxation brought...
0
2024-01-02T07:37:06
https://dev.to/swagmeesalon/the-magic-of-home-salon-services-with-our-exclusive-salon-at-home-app-39pk
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88rhgu0uwvwi3iip1uhs.jpg) Welcome to a world where beauty meets convenience – a realm of glamour and relaxation brought directly to your doorstep. Our [salon at home app](https://play.google.com/store/apps/details?id=com.swagmee&pli=1) is not just a service; it's an experience, a journey into elegance that transforms your space into a sanctuary of beauty and tranquility. Imagine a personalized beauty haven where skilled professionals come equipped with the finest tools and premium products to pamper you in the comfort of your own home. No more rushing to appointments or waiting in queues – we bring the salon experience to you. Key Features: Personalized Beauty Concierge: Our app introduces you to a world of beauty experts ready to cater to your every need. Choose from a curated list of skilled professionals specializing in hair, nails, skincare, and more. Exclusive Services: Indulge in a range of exclusive beauty services designed to enhance your natural beauty. From rejuvenating facials to precision haircuts, our experts are dedicated to delivering top-notch services tailored to your preferences. Premium Products: We believe in using only the best for our clients. Our professionals bring high-quality, branded products to ensure a luxurious experience that leaves you looking and feeling your best. Flexible Scheduling: Book your beauty escape at a time that suits you. Our flexible scheduling options let you choose appointments that fit seamlessly into your busy lifestyle. Safety First: Your safety is our priority. Our professionals follow strict hygiene and safety protocols, ensuring a clean and secure environment for every session. Transparent Pricing: No hidden costs or surprises. Our app provides transparent pricing, allowing you to plan and budget for your beauty sessions with ease. Rewards and Loyalty: Enjoy the perks of being a valued customer with our loyalty program. Earn rewards with every booking and unlock special discounts for your loyalty. Embark on a Journey of Beauty, Relaxation, and Elegance: Download our [Salon at Home](https://www.swagmee.com/) app now and embark on a journey of beauty, relaxation, and elegance. Elevate your beauty routine to new heights, all within the comfort of your own space. Escape to elegance with every session – because you deserve nothing less. In conclusion, our Salon at Home app is not just an app; it's a gateway to a world where beauty meets convenience, where every session is a personalized experience, and where you can indulge in the luxury of top-notch Beauty services at home within the comfort of your own home. Join us in redefining the way you experience beauty – one session at a time.
swagmeesalon
1,714,510
10 Tips for Designing and Developing Your Career Roadmap
Embarking on a successful career journey doesn't happen overnight; it requires careful planning,...
0
2024-01-08T01:00:00
https://quire.io/blog/p/design-career-roadmap.html
Embarking on a successful career journey doesn't happen overnight; it requires careful planning, strategic thinking, and a clear roadmap. Crafting your career path is a dynamic process that involves setting goals, navigating challenges, and staying adaptable to change. In this blog post, we'll explore 10 essential tips for designing and developing your career roadmap. So, buckle up – your career adventure awaits! ## 1. Start with Self-Reflection: Uncover Your Passion and Strengths Before you sketch out your career roadmap, take a moment for self-reflection. What are your passions? What activities make you lose track of time? Identify your strengths and weaknesses. Use this self-awareness to align your career goals with what truly motivates and excites you. ## 2. Set SMART Career Goals: Specific, Measurable, Achievable, Relevant, Time-Bound When establishing your career goals, follow the SMART criteria. Make sure they are Specific (clear and precise), Measurable (quantifiable), Achievable (realistic), Relevant (aligned with your values), and Time-Bound (have a deadline). This approach provides a solid foundation for your roadmap. ## 3. Create a Career Planning Template: Your Blueprint to Success Developing a career planning template can be a game-changer. Outline your [short-term and long-term goals](https://quire.io/blog/p/short-term-goal.html), skills you need to acquire, and potential obstacles. This template will serve as your guide, helping you stay organized and focused on your career development journey. ## 4. Embrace the Career Planning Process: A Step-by-Step Approach The career planning process involves several steps, from self-assessment to goal setting and implementation. Familiarize yourself with these steps and tailor them to your unique journey. Don't rush; take the time to explore each phase thoroughly. ## 5. Craft a Dynamic Career Roadmap: Stay Flexible and Adaptable A career roadmap is not a rigid structure; it's a dynamic guide that evolves with your experiences. Embrace [flexibility and adaptability](https://quire.io/blog/p/adaptability-skills.html). Be open to detours, as unexpected opportunities may lead to exciting destinations. ## 6. Develop a Career Development Plan: Actionable Steps Towards Success Your career development plan is the bridge between your goals and reality. Break down your objectives into actionable steps. Whether it's gaining a new skill, networking, or pursuing further education, having a plan makes your journey more achievable. ## 7. Map Out Your Career Journey: Visualize Your Progress Consider creating a visual representation of your career journey. This could be a flowchart, a mind map, or even a vision board. Visualization can reinforce your commitment, making it easier to track your progress and stay motivated. ## 8. Identify Potential Career Paths: Explore Your Options Your career path might not be a straight line. Identify alternative routes and explore different career paths. Networking, informational interviews, and internships can provide valuable insights, helping you make informed decisions about your professional journey. ## 9. Seek Mentorship: Learn from Those Who Have Walked the Path Mentorship is a priceless resource in career development. Seek guidance from experienced professionals who have navigated similar paths. Their insights can offer invaluable perspectives, helping you make informed decisions and avoid common pitfalls. ## 10. Regularly Evaluate and Adjust: Stay on Course Career development is an ongoing process. 
Regularly evaluate your progress against your goals. Adjust your roadmap as needed, considering changes in the industry, personal growth, and shifting priorities. Flexibility is key to staying on course. In summary, designing and developing your career roadmap is a journey in itself. From self-reflection and goal setting to creating templates and seeking mentorship, each step is a vital component of your success. Keep in mind that your roadmap is a living document, meant to evolve as you grow both personally and professionally. Embrace the adventure, stay focused, and enjoy the fulfillment that comes with crafting a career that aligns with your passions and aspirations. Remember, your career journey is uniquely yours, and with these 10 tips in hand, you're well on your way to navigating the exciting twists and turns that lie ahead. Happy career planning! 🎉
brighty_miriam
1,743,432
LangGraph Agents
LangGraph Agents pip install langchain langgraph langchain_openai langchainhub langsmith...
0
2024-01-28T05:48:30
https://dev.to/jhparmar/langgraph-agents-1jog
**LangGraph Agents** pip install langchain langgraph langchain_openai langchainhub langsmith duckduckgo-search beautifulsoup4 gradio export OPENAI_API_KEY=xxxxxxxxxx export LANGCHAIN_API_KEY=xxxxxxxxxx import functools, operator, requests, os, json from bs4 import BeautifulSoup from duckduckgo_search import DDGS from langchain.agents import AgentExecutor, create_openai_tools_agent from langchain_core.messages import BaseMessage, HumanMessage from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder from langgraph.graph import StateGraph, END from langchain.tools import tool from langchain_openai import ChatOpenAI from typing import Annotated, Any, Dict, List, Optional, Sequence, TypedDict import gradio as gr # Set environment variables os.environ["LANGCHAIN_TRACING_V2"] = "true" os.environ["LANGCHAIN_PROJECT"] = "LangGraph Research Agents" # Initialize model llm = ChatOpenAI(model="gpt-4-turbo-preview") # 1. Define custom tools @tool("internet_search", return_direct=False) def internet_search(query: str) -> str: """Searches the internet using DuckDuckGo.""" with DDGS() as ddgs: results = [r for r in ddgs.text(query, max_results=5)] return results if results else "No results found." @tool("process_content", return_direct=False) def process_content(url: str) -> str: """Processes content from a webpage.""" response = requests.get(url) soup = BeautifulSoup(response.content, 'html.parser') return soup.get_text() tools = [internet_search, process_content] # 2. Agents # Helper function for creating agents def create_agent(llm: ChatOpenAI, tools: list, system_prompt: str): prompt = ChatPromptTemplate.from_messages([ ("system", system_prompt), MessagesPlaceholder(variable_name="messages"), MessagesPlaceholder(variable_name="agent_scratchpad"), ]) agent = create_openai_tools_agent(llm, tools, prompt) executor = AgentExecutor(agent=agent, tools=tools) return executor # Define agent nodes def agent_node(state, agent, name): result = agent.invoke(state) return {"messages": [HumanMessage(content=result["output"], name=name)]} # Create Agent Supervisor members = ["Web_Searcher", "Insight_Researcher"] system_prompt = ( "As a supervisor, your role is to oversee a dialogue between these" " workers: {members}. Based on the user's request," " determine which worker should take the next action. Each worker is responsible for" " executing a specific task and reporting back their findings and progress. Once all tasks are complete," " indicate with 'FINISH'." ) options = ["FINISH"] + members function_def = { "name": "route", "description": "Select the next role.", "parameters": { "title": "routeSchema", "type": "object", "properties": {"next": {"title": "Next", "anyOf": [{"enum": options}] }}, "required": ["next"], }, } prompt = ChatPromptTemplate.from_messages([ ("system", system_prompt), MessagesPlaceholder(variable_name="messages"), ("system", "Given the conversation above, who should act next? Or should we FINISH? Select one of: {options}"), ]).partial(options=str(options), members=", ".join(members)) supervisor_chain = (prompt | llm.bind_functions(functions=[function_def], function_call="route") | JsonOutputFunctionsParser()) search_agent = create_agent(llm, tools, "You are a web searcher. Search the internet for information.") search_node = functools.partial(agent_node, agent=search_agent, name="Web_Searcher") insights_research_agent = create_agent(llm, tools, """You are a Insight Researcher. Do step by step. 
Based on the provided content first identify the list of topics, then search internet for each topic one by one and finally find insights for each topic one by one. Include the insights and sources in the final response """) insights_research_node = functools.partial(agent_node, agent=insights_research_agent, name="Insight_Researcher") # Define the Agent State, Edges and Graph class AgentState(TypedDict): messages: Annotated[Sequence[BaseMessage], operator.add] next: str workflow = StateGraph(AgentState) workflow.add_node("Web_Searcher", search_node) workflow.add_node("Insight_Researcher", insights_research_node) workflow.add_node("supervisor", supervisor_chain) # Define edges for member in members: workflow.add_edge(member, "supervisor") conditional_map = {k: k for k in members} conditional_map["FINISH"] = END workflow.add_conditional_edges("supervisor", lambda x: x["next"], conditional_map) workflow.set_entry_point("supervisor") graph = workflow.compile() # Run the graph for s in graph.stream({ "messages": [HumanMessage(content="""Search for the latest AI technology trends in 2024, summarize the content. After summarise pass it on to insight researcher to provide insights for each topic""")] }): if "__end__" not in s: print(s) print("----") # final_response = graph.invoke({ # "messages": [HumanMessage( # content="""Search for the latest AI technology trends in 2024, # summarize the content # and provide insights for each topic.""")] # }) # print(final_response['messages'][1].content) User Interface import functools, operator, requests, os, json from bs4 import BeautifulSoup from duckduckgo_search import DDGS from langchain.agents import AgentExecutor, create_openai_tools_agent from langchain_core.messages import BaseMessage, HumanMessage from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder from langgraph.graph import StateGraph, END from langchain.tools import tool from langchain_openai import ChatOpenAI from typing import Annotated, Any, Dict, List, Optional, Sequence, TypedDict import gradio as gr # Set environment variables os.environ["LANGCHAIN_TRACING_V2"] = "true" os.environ["LANGCHAIN_PROJECT"] = "LangGraph Research Agents" # Initialize model llm = ChatOpenAI(model="gpt-4-turbo-preview") # Define custom tools @tool("internet_search", return_direct=False) def internet_search(query: str) -> str: """Searches the internet using DuckDuckGo.""" with DDGS() as ddgs: results = [r for r in ddgs.text(query, max_results=5)] return results if results else "No results found." 
@tool("process_content", return_direct=False) def process_content(url: str) -> str: """Processes content from a webpage.""" response = requests.get(url) soup = BeautifulSoup(response.content, 'html.parser') return soup.get_text() tools = [internet_search, process_content] # Helper function for creating agents def create_agent(llm: ChatOpenAI, tools: list, system_prompt: str): prompt = ChatPromptTemplate.from_messages([ ("system", system_prompt), MessagesPlaceholder(variable_name="messages"), MessagesPlaceholder(variable_name="agent_scratchpad"), ]) agent = create_openai_tools_agent(llm, tools, prompt) executor = AgentExecutor(agent=agent, tools=tools) return executor # Define agent nodes def agent_node(state, agent, name): result = agent.invoke(state) return {"messages": [HumanMessage(content=result["output"], name=name)]} # Create Agent Supervisor members = ["Web_Searcher", "Insight_Researcher"] system_prompt = ( "As a supervisor, your role is to oversee a dialogue between these" " workers: {members}. Based on the user's request," " determine which worker should take the next action. Each worker is responsible for" " executing a specific task and reporting back their findings and progress. Once all tasks are complete," " indicate with 'FINISH'." ) options = ["FINISH"] + members function_def = { "name": "route", "description": "Select the next role.", "parameters": { "title": "routeSchema", "type": "object", "properties": {"next": {"title": "Next", "anyOf": [{"enum": options}] }}, "required": ["next"], }, } prompt = ChatPromptTemplate.from_messages([ ("system", system_prompt), MessagesPlaceholder(variable_name="messages"), ("system", "Given the conversation above, who should act next? Or should we FINISH? Select one of: {options}"), ]).partial(options=str(options), members=", ".join(members)) supervisor_chain = (prompt | llm.bind_functions(functions=[function_def], function_call="route") | JsonOutputFunctionsParser()) # Define the Agent State and Graph class AgentState(TypedDict): messages: Annotated[Sequence[BaseMessage], operator.add] next: str search_agent = create_agent(llm, tools, "You are a web searcher. Search the internet for information.") search_node = functools.partial(agent_node, agent=search_agent, name="Web_Searcher") insights_research_agent = create_agent(llm, tools, """You are a Insight Researcher. Do step by step. Based on the provided content first identify the list of topics, then search internet for each topic one by one and finally find insights for each topic one by one. 
Include the insights and sources in the final response """) insights_research_node = functools.partial(agent_node, agent=insights_research_agent, name="Insight_Researcher") workflow = StateGraph(AgentState) workflow.add_node("Web_Searcher", search_node) workflow.add_node("Insight_Researcher", insights_research_node) workflow.add_node("supervisor", supervisor_chain) # Define edges for member in members: workflow.add_edge(member, "supervisor") conditional_map = {k: k for k in members} conditional_map["FINISH"] = END workflow.add_conditional_edges("supervisor", lambda x: x["next"], conditional_map) workflow.set_entry_point("supervisor") graph = workflow.compile() # Run the graph def run_graph(input_message): response = graph.invoke({ "messages": [HumanMessage(content=input_message)] }) return json.dumps(response['messages'][1].content, indent=2) inputs = gr.inputs.Textbox(lines=2, placeholder="Enter your query here...") outputs = gr.outputs.Textbox() demo = gr.Interface(fn=run_graph, inputs=inputs, outputs=outputs) demo.launch() Prompt Search for the latest AI technology trends in 2024, summarise the content. After summarising pass it on to insight researcher to provide insights for each topic Output Let’s delve into insights derived from the latest findings on AI technology trends for 2024: 1. Multimodal AI Insight: Multimodal AI’s growth to a $4.5 billion market by 2028 underscores a significant shift towards AI systems that can handle a variety of inputs like text, images, and audio, akin to human sensory processing. This indicates a move towards more natural and intuitive AI-human interactions. Sources: B12, Unite.AI 2. Agentic AI Insight: The emergence of AI systems capable of autonomous action without direct supervision highlights a trend towards AI that can independently pursue complex goals. This could dramatically enhance efficiency in various sectors by enabling AI to take on more proactive roles. Sources: GREY Journal, OpenAI 3. Open Source AI Insight: Google’s partnership with Hugging Face to provide open-source AI tools on its cloud platform exemplifies the growing trend of democratizing AI technology. This movement aims to make powerful AI tools more accessible, fostering innovation across the board. Sources: The Verge, TechRepublic 4. Retrieval-Augmented Generation (RAG) Insight: RAG’s integration with fine-tuning methods tailored to specific industries, like agriculture, suggests a push towards making AI-generated content more accurate and relevant. This could significantly benefit sectors where factual precision is paramount. Sources: Marktechpost, NVIDIA Blog 5. Customized Enterprise Generative AI Models Insight: Enterprises are increasingly favoring customized generative AI models tailored with proprietary data, indicating a shift from one-size-fits-all solutions to more specific, efficient, and cost-effective AI applications. Sources: Deloitte US, TechTarget 6. Quantum AI Insight: The integration of AI with quantum computing, as seen in Microsoft’s Azure Quantum Elements, signifies a pursuit of breakthrough capabilities in complex problem-solving, potentially revolutionizing fields like drug discovery and climate modeling. Sources: Microsoft, AIMultiple 7. AI Legislation Insight: The momentum towards comprehensive AI policies, highlighted by actions such as President Biden’s executive order, reflects a growing recognition of the need to balance AI innovation with ethical considerations and societal protections. Sources: MIT Technology Review, Scale 8. 
Ethical AI Insight: The focus on developing AI systems that are transparent, fair, and accountable is becoming increasingly prominent. This trend towards ethical AI is driven by a global effort to ensure AI’s growth benefits society as a whole. Sources: UNESCO, Forbes 9. Augmented Working Insight: The concept of Augmented Working emphasizes a synergistic relationship between human capabilities and AI’s potential. This trend suggests a future where AI not only assists but enhances human work across various sectors. Sources: Medium, Forbes 10. Next Generation of Generative AI Insight: The evolution towards multi-modal generative AI systems capable of creating complex, multi-faceted outputs signals a future where AI can produce content that rivals human creativity, potentially transforming the landscape of content creation. Sources: Forbes, MIT Technology Review These insights provide a deeper understanding of the potential impacts and opportunities associated with each of the AI technology trends for 2024. Categories
jhparmar
1,714,537
How to Create a 3D Game in Python and Swap Models (Pizza Toppings Tutorial)
This Python demo uses echo3D's 3D model streaming in combination with Panda3D, a framework for 3D...
0
2024-01-02T10:10:40
https://dev.to/echo3d/how-to-create-a-3d-game-in-python-and-swap-models-pizza-toppings-tutorial-576b
ai, python, programming, beginners
This Python demo uses [**echo3**D](http://www.echo3d.com)'s 3D model streaming in combination with [**Panda3D**](https://www.panda3d.org/), a framework for 3D rendering and game development in Python. Currently, any .obj or .glb model can be uploaded to the echo3D console and streamed into this app. The app allows you to design your own pizza given the available pepperoni, mushroom, broccoli, or pepper toppings. You can click on any pizza topping, whether on the pizza or on the extra topping plates, to pick up and move the topping to your desired location. However, feel free to use [echo3D's expansive functionality](https://docs.echo3d.com/) to modify the toppings to your pizza preferences! The full demo can also be found on [GitHub](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo). ![echo3D](https://cdn-images-1.medium.com/max/3192/0*kYSyO08bSBBrMqUw.png) ## [Register](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo#register) * Don't have an API key? Make sure to register for FREE at [echo3D](https://console.echo3d.co/#/auth/register). ## [Setup](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo#setup) * Download and install Python 3.8.16 from [https://www.python.org/downloads/release/python-3816/](https://www.python.org/downloads/release/python-3816/). * Clone this project onto your local machine. * Set up [Panda3D](https://www.panda3d.com/) according to the instructions matching your operating system. * Upload the contents of the [models folder](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo/blob/master/models) through the [echo3D console](https://console.echo3d.co/). * Update the values of each key-value pair in the [model_constants.py file](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo/blob/master/echo3D-pizza-maker/model_constants.py) to match the entity IDs of each model you uploaded to the echo3D console in the step before (a hypothetical sketch of this file is shown below). You can also add your own models to the [echo3D console](https://console.echo3d.co/) by searching or adding your own, as each person may have their own pizza topping preferences. Just make sure to update the values in the [model_constants.py file](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo/blob/master/echo3D-pizza-maker/model_constants.py) to match the entity IDs of the models you uploaded. ## [Run](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo#run) Navigate to the [echo3D-pizza-maker folder](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo/blob/master/echo3D-pizza-maker) in your terminal, and run the following command: `python main.py [YOUR_ECHO3D_API_KEY] [YOUR_ECHO3D_SECURITY_KEY]` ## [Learn more](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo#learn-more) Refer to our [documentation](https://docs.echo3d.com/python/using-the-sdk) to learn more about how to use Python and echo3D. If you want more demos of Panda3D's technology, there are additional demos created by Panda3D in their [GitHub samples folder](https://github.com/panda3d/panda3d/tree/master/samples). Although these do not incorporate echo3D, they show off a lot more functionality. Please see [Panda3D's website](https://docs.panda3d.org/1.10/python/index) for documentation. 
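For orientation, here is a hypothetical sketch of what those key-value pairs could look like. The key names below are made up for illustration only; keep the key names already defined in the repo's model_constants.py and paste in the entity IDs shown in your own echo3D console:

```
# model_constants.py (illustrative only -- keep the repo's actual key names)
PIZZA_ENTITY_ID = "paste-your-echo3D-entity-id-here"
PEPPERONI_ENTITY_ID = "paste-your-echo3D-entity-id-here"
MUSHROOM_ENTITY_ID = "paste-your-echo3D-entity-id-here"
BROCCOLI_ENTITY_ID = "paste-your-echo3D-entity-id-here"
PEPPER_ENTITY_ID = "paste-your-echo3D-entity-id-here"
```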
## [Support](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo#support) Feel free to reach out at [support@echo3D.co](mailto:support@echo3D.co) or join our [support channel on Slack](https://go.echo3d.co/join). ## [Screenshots](https://github.com/echo3Dco/Python-PizzaMaker-echo3D-demo#screenshots) Demo created by [LeaBroudo](https://github.com/LeaBroudo/). > **echo3D** ([www.echo3D.com](http://www.echo3D.com)) is a 3D asset management platform that enables developers to manage, update, and stream 3D, AR/VR, Metaverse, & Spatial Computing content to real-time apps and games. ![echo3D](https://cdn-images-1.medium.com/max/2000/0*M8TubzI_4preReGm.png)
_echo3d_
1,714,562
Real-time Chat application under 5 minutes using NEXTJS and Socket.io
Real-time Chat application under 5 minutes using NEXTJS and Socket.io Let’s start...
0
2024-01-05T16:16:10
https://dev.to/sarathadhithya/real-time-chat-application-under-2-minutes-using-nextjs-and-socketio-2pj1
socketio, chatapplication, nextjs, realtimechatapp
--- title: Real-time Chat application under 5 minutes using NEXTJS and Socket.io published: true date: 2023-01-06 13:05:28 UTC tags: socketio,chatapplication,nextjs,realtimechatapp canonical_url: --- ## Real-time Chat application under 5 minutes using NEXTJS and Socket.io ### Let’s start the timer, #### 1. First, create a Next app ``` npx create-next-app@latest ``` After creating the Next app, our project structure looks something like this. ![](https://cdn-images-1.medium.com/max/362/1*oXeNwN8LokTFe19nrn19sw.png) _Project structure_ #### 2. Install the following dependencies ``` npm i socket.io socket.io-client ``` ![](https://cdn-images-1.medium.com/max/355/1*boWds86lG6eNAa77ZbNmVQ.png) _Dependencies_ #### 3. Now let's start to code: go to the **_pages/index.js_** file and copy the basic template. ``` import React, { useEffect, useState } from "react"; import io from "socket.io-client"; let socket; const Home = () => { useEffect(() => { socketInitializer(); return () => { socket.disconnect(); }; }, []); async function socketInitializer() { } function handleSubmit(e) { } return ( <div> <h1>Chat app</h1> </div> ); }; export default Home; ``` Whenever the component is mounted, the **socketInitializer()** function is called, and whenever the component is unmounted, we disconnect the socket connection. But where does the socket connection happen? Before that, let's set up the backend. Here we use Nextjs, which **allows you to create API endpoints in a pages directory as though you’re writing backend code**. In **pages/api** we will create a new file called **socket.js.** ![](https://cdn-images-1.medium.com/max/356/1*LFiIDnYSqWsM340yujFxSg.png) _pages/api/socket.js_ ``` import { Server } from "socket.io"; export default function SocketHandler(req, res) { const io = new Server(res.socket.server); res.socket.server.io = io; io.on("connection", (socket) => { // after the connection..... }); console.log("Setting up socket"); } ``` Now, when the client connects from the front end, the backend captures the connection event, and with that we create a new socket.io server and assign it to **_res.socket.server.io._** ``` // Waits for a socket connection io.on("connection", socket => { // Each socket has a unique ID // console.log(socket.id) -> ojIckSD2jqNzOqIrAGzL }) ``` Improving the code. ``` import { Server } from "socket.io"; export default function SocketHandler(req, res) { if (res.socket.server.io) { console.log("Already set up"); res.end(); return; } const io = new Server(res.socket.server); res.socket.server.io = io; io.on("connection", (socket) => { socket.on("send-message", (obj) => { io.emit("receive-message", obj); }); }); console.log("Setting up socket"); res.end(); } ``` Now, if **res.socket.server.io** already exists, there is no need to create the socket.io server again, so we simply return. Inside the socket connection, we have created a **listening event.** What it does is add the _listener_ function to the end of the listeners array for the event named _send-message_ (triggered by an emit). So whenever an emit happens from the frontend with the event named _send-message_, this event gets executed. Inside it, we have an emitter named _receive-message_, which will emit the received object (obj) to the client side. #### Now back to the client side, let’s connect to the backend. 
``` import React, { useEffect, useState } from "react"; import io from "socket.io-client"; let socket; const Home = () => { useEffect(() => { socketInitializer(); return () => { socket.disconnect(); }; }, []); async function socketInitializer() { await fetch("/api/socket"); socket = io(); socket.on("receive-message", (data) => { // we get the data here }); } function handleSubmit(e) { } return ( <div> <h1>Chat app</h1> </div> ); }; export default Home; ``` We have initialized the socket by first making a **fetch()** call to /api/socket, which creates the socket server on the backend. Next, we have an event listener that listens for any emit with the event name **_receive-message_**. #### Let's create a simple UI ``` import React, { useEffect, useState } from "react"; import io from "socket.io-client"; let socket; const Home = () => { const [message, setMessage] = useState(""); const [username, setUsername] = useState(""); useEffect(() => { socketInitializer(); return () => { socket.disconnect(); }; }, []); async function socketInitializer() { await fetch("/api/socket"); socket = io(); socket.on("receive-message", (data) => { // }); } function handleSubmit(e) { } return ( <div> <h1>Chat app</h1> <h1>Enter a username</h1> <input value={username} onChange={(e) => setUsername(e.target.value)} /> <br /> <br /> <div> <form onSubmit={handleSubmit}> <input name="message" placeholder="enter your message" value={message} onChange={(e) => setMessage(e.target.value)} autoComplete={"off"} /> </form> </div> </div> ); }; export default Home; ``` ![](https://cdn-images-1.medium.com/max/513/1*pclowDAQMctvsPQcoBMs5w.png) _Simple UI_ There is a username field and a message box; the message input field is wrapped inside a form. Whenever the form is submitted, we call the **_handleSubmit_** function. #### Now let’s emit the message ``` import React, { useEffect, useState } from "react"; import io from "socket.io-client"; let socket; const Home = () => { const [message, setMessage] = useState(""); const [username, setUsername] = useState(""); const [allMessages, setAllMessages] = useState([]); useEffect(() => { socketInitializer(); return () => { socket.disconnect(); }; }, []); async function socketInitializer() { await fetch("/api/socket"); socket = io(); socket.on("receive-message", (data) => { setAllMessages((pre) => [...pre, data]); }); } function handleSubmit(e) { e.preventDefault(); console.log("emitted"); socket.emit("send-message", { username, message }); setMessage(""); } return ( <div> <h1>Chat app</h1> <h1>Enter a username</h1> <input value={username} onChange={(e) => setUsername(e.target.value)} /> <br /> <br /> <div> {allMessages.map(({ username, message }, index) => ( <div key={index}> {username}: {message} </div> ))} <br /> <form onSubmit={handleSubmit}> <input name="message" placeholder="enter your message" value={message} onChange={(e) => setMessage(e.target.value)} autoComplete={"off"} /> </form> </div> </div> ); }; export default Home; ``` Inside the **_handleSubmit_** function, we have an emitter that emits the username and the message as an object to the _send-message_ event listener in the backend. Once it reaches the backend, the emitter inside that listener (_receive-message_) emits it back to the frontend. Now the receive-message event listener gets the data. ``` socket.on("receive-message", (data) => { setAllMessages((pre) => [...pre, data]); }); ``` Everything above happens in real time for all the users currently using the application. Go and play around with the code. 
All the code is in my GitHub. Check it out and STAR the repository if you have successfully completed the Chat Application. [GitHub - SarathAdhi/socket.io-starter-template: Created with CodeSandbox](https://github.com/SarathAdhi/socket.io-starter-template) Like and wait for an awesome project with socket.io and Nextjs. Spoiler alert: the next app will be a game. As a follow-up exercise, try creating a room for the user; refer to this [doc](https://socket.io/docs/v4/rooms/). A minimal sketch of the idea follows below.
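For that rooms exercise, here is a minimal, hedged sketch of the general idea (the `join-room` event name and the `room` field in the payload are only illustrative; they are not part of the starter template above):

```
import { Server } from "socket.io";

export default function SocketHandler(req, res) {
  if (res.socket.server.io) {
    res.end();
    return;
  }

  const io = new Server(res.socket.server);
  res.socket.server.io = io;

  io.on("connection", (socket) => {
    // A client asks to join a named room
    socket.on("join-room", (room) => {
      socket.join(room);
    });

    // Deliver the message only to sockets inside that room
    socket.on("send-message", ({ room, username, message }) => {
      io.to(room).emit("receive-message", { username, message });
    });
  });

  res.end();
}
```

On the client, you would emit `join-room` once (for example, after the user picks a room name) and include the room in every `send-message` payload.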
sarathadhithya
1,714,688
Using BioBERT and Qdrant to Power Semantic Search on Medical Q&A data
Navigating Complex Medical Datasets: Integrating BioBERT's NLP with Qdrant's Vector Database for...
0
2024-01-02T12:57:00
https://dev.to/shumashah/using-biobert-and-qdrant-to-power-semantic-search-on-medical-qa-data-3f0m
ai, machinelearning, vectordatabase, naturallanguageprocessing
Navigating Complex Medical Datasets: Integrating BioBERT's NLP with Qdrant's Vector Database for Enhanced Semantic Accuracy ![Photo by [National Cancer Institute](https://unsplash.com/@nci?utm_source=medium&utm_medium=referral) on Unsplash](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/povjx7y40qkuiyak7nxq.png) *Photo by [National Cancer Institute](https://unsplash.com/@nci?utm_source=medium&utm_medium=referral) on Unsplash* In this tutorial, we're diving into the fascinating world of powering semantic search using BioBERT and Qdrant with a Medical Question Answering Dataset from HuggingFace. We'll unravel the complexities and intricacies of semantic search, a process that goes beyond mere keyword matching to understand the deeper meaning and context of queries.  Our journey will also explore the functionalities of Qdrant, a vector similarity search engine, in handling and extracting nuanced information from a rich medical dataset. BioBERT is a BERT-based language model specially designed for text-mining tasks in the healthcare(medicine) domain. The healthcare sector is in growing need of accurate and contextually relevant information retrieval as healthcare-related data is being amassed digitally.  But before we begin this tutorial, let's first refresh our understanding of the key concepts it encompasses. ## Introduction to Semantic Search Semantic search is a search process employed by a search engine to yield relevant results through content and contextual mapping, instead of exact literal matches. The Wikipedia definition of semantic search is as follows: _Semantic search denotes search with meaning, as distinguished from lexical search where the search engine looks for literal matches of the query words or variants of them, without understanding the overall meaning of the query._ Now, let me explain it in simpler terms. Semantic search is a bit like being a really good detective. It's not just about looking for the exact words you typed into a search bar. Instead, it's about understanding the meaning behind your words, almost like it's trying to get into your head. Imagine you're asking a friend about a book you can't quite remember the name of. You describe it as "that book about the boy wizard with a lightning scar." Your friend knows you're talking about Harry Potter, even though you didn't say it directly. That's what semantic search does but on the internet. This type of search looks at the context of your words, the relationships between them, and even the intent behind your query. It's like it's trying to understand the language the way humans do; not just as a list of keywords that needs to be matched. Semantic search is important because it makes finding information easier and more natural. It's like having a search engine that thinks more like a human and less like a robot, which is pretty cool. It helps you find what you're looking for, even if you're not sure how to ask for it perfectly. This is especially useful as the world's information keeps growing. It's like having a guide who not only knows the way but also understands why you're asking the question and where you want to go. ## About Qdrant Semantic search is employed by search engines and databases to scour through the data efficiently. Search engines, especially those that handle loads of information, use semantic search to sort through mountains of data and find those needles in the haystack that are most relevant to your query. One such search engine is Qdrant. 
It is a sophisticated open-source vector database engineered for scalability and efficiency in handling complex data searches. At its core, Qdrant utilizes vector embedding to represent data. ## How does Qdrant power semantic searches In simpler terms, Qdrant converts complex information like text, images, or even sound into numerical vectors - think of these as unique digital fingerprints. These vectors are not random; they are carefully calculated to represent the intrinsic properties of the data. This method allows Qdrant to compare and search through these vectors quickly and accurately. The engine is optimized for high-performance similarity search, meaning it can swiftly sift through these digital fingerprints to find the closest matches to a query. This is crucial for applications requiring nuanced data understanding, like content recommendation systems or sophisticated search functionalities across large databases. Once the vector embeddings are created from the raw data, it is stored in a vector database. The database then uses mathematical operations - such as Euclidean Distance, Cosine Similarity, Dot Product, etc. - to determine how similar the query is to the stored data. This results in highly accurate and context-aware search results. ## Problem statement The project aims to develop a semantic search engine to accurately retrieve medical information from a question-answering dataset. This search engine is intended to assist users in finding the most relevant medical answers to their queries, enhancing information accessibility and accuracy in the medical domain. The approach involves vectorizing medical questions, answers, and contextual information using BioBERT, a language model pre-trained on biomedical texts. These vectors represent the semantic content of the text, enabling a similarity-based search. The vectorized data is indexed using Qdrant, a vector search engine, which allows for efficient similarity searches. The search functionality utilizes the vector representations to find the closest matches to a given query in the dataset. This usecase is part of the healthcare domain and therefore needs more precision. Qdrant helps ensure that when you're looking for specific medical information, you're getting the most accurate and relevant answers.  This method is chosen for its ability to capture the complex semantics of medical language and provide contextually relevant search results, going beyond keyword matching to understand the deeper meaning of medical queries. ## Implementation of the solution This approach is structured to harness the advanced NLP capabilities of BioBERT for semantic understanding and leverage Qdrant's efficient search mechanism for retrieving contextually relevant medical information. **The following are the steps to the solution:** 1. Loading the Dataset: Acquire and load the medical question-answering dataset from huggingface. 2. Preprocessing Textual Data: Normalize the data and apply lemmatization while preserving named entities, crucial for maintaining medical terminologies. This process ensures the integrity and specificity of medical information. 3. Vectorization Using BioBERT: Utilize the BioBERT model, specifically trained on biomedical literature, to convert text into semantic vectors. This model is chosen for its proficiency in understanding complex medical contexts. 4. Setting Up Qdrant Cloud: Create an account and a cluster on Qdrant Cloud, and set up QdrantClient for interaction. 5. 
Uploading Data: Create a collection on Qdrant Cloud. Index and upload vectors formed from a combination of context, question, and answer. This comprehensive vectorization captures the full scope of the information. 6. Implementing Search Functionality: Vectorize input queries using the same model and search in Qdrant, followed by result handling. 7. Testing the Search Functionality: Conduct tests with known and novel queries to evaluate the system's effectiveness and ability to generalize. Now, let's jump to the code. ## Code I have provided the code below for each section of the implementation with an explanation. If you just want to check the entire code, [here is the link to github](https://github.com/shumashah/Natural-Language-Processing/blob/main/medical_QA_semantic_search_using_qdrant.ipynb).  ### Pre-requisites #### Install the requirements Run the following code to install all the required libraries and the dataset ``` pip install qdrant-client pip install transformers pip install spacy pip install https://huggingface.co/spacy/en_core_web_sm/resolve/main/en_core_web_sm-any-py3-none-any.whl pip install torch pip install datasets pip install torch torchvision torchaudio ``` #### Import all the required libraries ``` from datasets import load_dataset import spacy import re import pandas as pd from transformers import AutoTokenizer, AutoModel import torch import numpy as np from qdrant_client import QdrantClient, models import numpy as np from qdrant_client import models ``` ### Load dataset The dataset used is loaded from huggingface. This dataset has questions and answers related to 8 topics in the medical field. Each question and answer also has a context column that provides a detailed description of the medical issue pertaining the Q&A pair. Check this [huggingface link for a more thorough dataset description](https://huggingface.co/datasets/GonzaloValdenebro/MedicalQuestionAnsweringDataset). ``` # The dataset URL dataset_url = "GonzaloValdenebro/MedicalQuestionAnsweringDataset" # Load the dataset dataset = load_dataset(dataset_url) #convert the dataset to pandas dataframe df = dataset["train"].to_pandas() ``` ### Preprocessing the textual data The dataset is processed in this step. Firstly, we will normalize the textual data which includes removing any lowercase and unnecessary punctuation. Then, lemmatization is applied while preserving named entities, crucial for maintaining medical terminologies. This process ensures the integrity of medical information. ``` nlp = spacy.load("en_core_web_sm") def preprocess_text(text): # Normalization: Lowercase and remove unnecessary punctuation text = text.lower() text = re.sub(r'[^\w\s]', '', text) # Tokenization, Lemmatization and Named Entity Recognition with spaCy doc = nlp(text) processed_tokens = [] for token in doc: if token.is_stop or token.is_punct: continue elif token.ent_type_: # Check if the token is a named entity processed_tokens.append(token.text) # Preserve named entities as they are else: processed_tokens.append(token.lemma_) # Lemmatize non-entity tokens # Re-join tokens return ' '.join(processed_tokens) # Assuming df is your DataFrame df['Processed_Question'] = df['Question'].apply(preprocess_text) df['Processed_Context'] = df['Context'].apply(preprocess_text) df['Processed_Answer'] = df['Answer'].apply(preprocess_text) ``` ### Vectorization Using BioBERT ####Loading BioBERT tokenizer and model In this step, we will load the BioBERT model and the tokenizer which will help us to vectorize the pre-processed textual data. 
We are using the BioBERT model to vectorize the data because it is specifically trained on biomedical literature, to convert text into semantic vectors. This model is chosen for its proficiency in understanding complex medical contexts. ``` tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1") model = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1") ``` #### Vectorize data In this step, we will vectorize the dataset using the bioBERT tokenizer. Vectorization is a critical step in natural language processing, especially for tasks like semantic search. It is essential for semantic search because it transforms text into a mathematical representation that algorithms can process. By representing text as vectors, mathematical methods like cosine similarity can evaluate how semantically similar they are. This method allows us to effectively capture and compare the nuances of meaning within the text. ``` def vectorize_text(text, tokenizer, model): # Tokenize and encode the text inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=512) # Move tensors to the same device as the model inputs = {k: v.to(model.device) for k, v in inputs.items()} # Get the output from the model with torch.no_grad(): outputs = model(**inputs) # Use the mean of the last hidden states as the vector representation return outputs.last_hidden_state.mean(dim=1).squeeze().cpu().numpy() df['Vectorized_Question'] = df['Processed_Question'].apply(lambda x: vectorize_text(x, tokenizer, model)) df['Vectorized_Context'] = df['Processed_Context'].apply(lambda x: vectorize_text(x, tokenizer, model)) df['Vectorized_Answer'] = df['Processed_Answer'].apply(lambda x: vectorize_text(x, tokenizer, model)) ``` ### Setting Up Qdrant Cloud #### Create an account on Qdrant Cloud Sign up for an account on [Qdrant using this link](https://cloud.qdrant.io/login). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/amli7wgypg23vi8ayf19.png) *Photo by Author* 1. Create a new cluster. Under Set a Cluster Up enter a Cluster name. I have named my cluster "medical_QA". 2. Click Create Free Tier and then Continue. 3. Under Get an API Key, select the cluster and click Get API Key. 4. Save the API key, as you won't be able to request it again. Click Continue. 5. Save the code snippet provided to access your cluster. Click Complete to finish setup. 6. The created cluster will look like this ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f65y1be22igz9rbixt8o.png) *Photo by Author* #### Set Up QdrantClient Use the code snippet you received in the previous step to access your cluster. ``` qdrant_client = QdrantClient( url="https://1259bacd-03fe-4c21-992e-383f0e51fd47.us-east4-0.gcp.cloud.qdrant.io:6333", api_key="iKfB4nUdjCSrvuQZIrfQcMBdacJHVWybK1NGaJzCiKCx42jNbnCc7w" ) ``` ### Uploading Data #### Create a collection on Qdrant cloud All data in Qdrant is organized in collections. So we will create a collection using recreate_collection. It is used when experimenting as it first deletes a collection with the same name. Vector_size parameter defines the size of the vectors for a collection. The distance parameter allows us to calculate the distance between two points using different methods like Euclidean Distance, Cosine Similarity, etc. Here we used Cosine similarity to measure the distance. This metric measures the cosine of the angle between two non-zero vectors in a multi-dimensional space. 
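As a quick aside before the collection-creation code below (a worked illustration of the metric only, not part of the tutorial code): cosine similarity between two vectors A and B is computed as cos_sim(A, B) = (A · B) / (‖A‖ × ‖B‖), i.e. the dot product divided by the product of the vector lengths. For example, with A = (1, 0) and B = (0.6, 0.8), the dot product is 0.6 and both lengths are 1, so the similarity score is 0.6; vectors pointing in the same direction score 1, while orthogonal (unrelated) vectors score 0. This is the score Qdrant will later return for each search hit.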
``` # Define the collection name and vector size (depends on BioBERT model, e.g., 768 for base models) collection_name = "semantic_search_medical_qa" vector_size = 768 # Adjust based on your BioBERT model # Create a collection qdrant_client.recreate_collection( collection_name=collection_name, vectors_config = models.VectorParams( size=vector_size, # Vector size is defined by used model distance=models.Distance.COSINE, ) ) ``` ### Create a vectorized data list Before uploading the data to the cloud, we need to create the vectorized data list. The "vector" field in this list is formed using a vectorized combination of context, question, and answer. The other fields are the raw textual data of the question, answer, and context. Instead of just vectorizing the answer, I vectorized all the fields in order to capture the full scope of the information related to the question. This will help to establish a more comprehensive link between the query and the answer. The 'vector' field within the vectorized_data list serves as the numerical representation of the text data, combining the semantic essence of the question, context, and answer. In Qdrant, this enables the database to perform similarity searches. When a query is vectorized and compared against these stored vectors, Qdrant can then efficiently retrieve the most semantically relevant entries by calculating the proximity between the query vector and document vectors, typically using cosine similarity as the metric. Since I have vectorized all the necessary information, the retrieved data after proximity calculation will be more accurate.  ``` vectorized_data = [ { 'id': row['id'], 'vector': np.concatenate([ np.fromstring(row['Vectorized_Question'].replace('\n', '').replace('[', '').replace(']', ''), sep=' '), np.fromstring(row['Vectorized_Context'].replace('\n', '').replace('[', '').replace(']', ''), sep=' '), np.fromstring(row['Vectorized_Answer'].replace('\n', '').replace('[', '').replace(']', ''), sep=' ') ]), 'question': row['Question'], 'context': row['Context'], 'answer': row['Answer'] } for _, row in df.iterrows() ] ``` ### Upload vectors to the cloud We are uploading the vectors to the cloud in batches. Given the large size of the dataset, batch processing reduces the risk of overloading the network and server capacity, and ensures no error is received. I have selected 50 to be the batch size, you can reduce it further incase you receive any network error. For this demonstration, I have uploaded only 60% of the dataset to showcase the functionality while conserving time. You can use the same method to upload the entire dataset, ensuring completeness of the data in Qdrant without compromising the upload process. Make sure to replace the subset_size with len(vectorized_data) in the range function and print statement while uploading the dataset to ensure the upload of 100% of the dataset. 
``` collection_name = "semantic_search_medical_qa" # mention the collection name def upload_batch(batch): # Upload the batch to Qdrant qdrant_client.upload_records( collection_name=collection_name, records=[ models.Record( id=int(data_point["id"]), vector=data_point["vector"].tolist(), # Convert numpy array to list payload={ "question": data_point["question"], "context": data_point["context"], "answer": data_point["answer"] } ) for data_point in batch ], ) # Batch size batch_size = 50 # Calculate 60% of the dataset size subset_size = int(len(vectorized_data) * 0.6) # Upload the dataset for i in range(0, subset_size, batch_size): batch = vectorized_data[i:i + batch_size] upload_batch(batch) print(f"Total no of batches {subset_size/batch_size}") print(f"Uploaded batch {i // batch_size}") ``` ### Implement Search Functionality #### Search the database We will first create a function that vectorizes all the input queries just like we did for the raw data. Now, we will set up the function that searches the qdrant database. ``` def vectorize_query(query, tokenizer, model): inputs = tokenizer(query, return_tensors="pt", padding=True, truncation=True, max_length=512) with torch.no_grad(): outputs = model(**inputs) return outputs.last_hidden_state.mean(dim=1).squeeze().cpu().numpy() def search_in_qdrant(query, tokenizer, model, top_k=10): query_vector = vectorize_query(query, tokenizer, model) # Search in Qdrant hits = qdrant_client.search( collection_name="semantic_search_medical_qa", query_vector=query_vector.tolist(), limit=top_k, ) return hits ``` #### Result Handling Now, we will create a function that displays the search results based on the cutoff score. The score in Qdrant's search results generally signifies the similarity between the query vector and the vectors in the database. Since we used cosine similarity, the score represents how close the query is to each result in terms of the cosine of the angle between their vectors. A higher score indicates greater similarity. ``` def display_search_results(test_query, tokenizer, model, cutoff_score): results = search_in_qdrant(test_query, tokenizer, model) for result in results: if result.score >= cutoff_score: print("Answer:", result.payload["answer"]) print("Context:", result.payload["context"]) print("Score:", result.score) print("-----------") ``` ### Results #### Testing the Search Functionality Now we will test this using three different queries and check how it performs. **Example Query 1** ``` test_query = "I have a family history of gallbladder stones. How do I prevent gallstones" display_search_results(test_query, tokenizer, model, 0.90) ``` The query I have asked is not part of the original database but still, we got the relevant answer. I am filtering out the responses based on a 90% similarity score. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s64aw7t00nrfk6g0pzaq.png) *Photo by Author* **Example Query 2** ``` novel_query = "I found out I have whipple disease, what to do?" display_search_results(novel_query, tokenizer, model, 0.90) ``` In this scenario, I asked a question as a patient who wants to know the treatment of Whipple disease. In order to test the semantic search critically, I avoided the use of keywords like "treatment" and instead used informal spoken language like "what to do". Despite this, the response is the most relevant and correct amongst the database. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ud2z3usl9t8gpdggpx20.png) *Photo by Author* **Example Query 3** ``` novel_query = "can cirrhosis be genetic?" display_search_results(novel_query, tokenizer, model, 0.92) ``` In this query, I used the word genetic but still the search correctly provided a response where "family history" is mentioned. It correctly identified the contextual link between the words genetic and family history. I filtered the response based on 92% similarity and got the most relevant answer. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h7y57ayjcgnyxsq4otmt.png) *Photo by Author* ### Conclusion The semantic search engine, utilizing BioBERT for vectorization, has demonstrated an impressive ability to interpret and retrieve medically relevant answers. The process involved meticulous steps, from data preprocessing and vectorization to setting up Qdrant Cloud and implementing robust search functionality. The outcome is a search engine that not only understands the nuances of medical queries but also provides relevant and accurate answers. The testing phase with varied queries shows that the system effectively understands and responds to the context of inquiries. For instance, the query about gallbladder stones prevention returned a highly relevant answer without direct reference in the database. Similarly, a query about Whipple disease was met with an accurate response, despite the use of informal language. Furthermore, the system's adeptness at contextually linking "genetic" to "family history" in the cirrhosis query underscores its capability to discern semantic connections. These results highlight the engine's potential as a robust tool for providing accurate medical information which further goes on to showcase its understanding of complex medical terminologies and patient inquiries. As we conclude, it's clear that the marriage of advanced NLP models and semantic search engines like Qdrant opens new horizons for information retrieval, promising more accurate, efficient, and context-aware solutions for complex search scenarios that have high stakes like healthcare. --- _Thank you for reading this article! If you enjoyed the content and would like to stay in the loop on future explorations into technology, AI, and beyond, [please follow me on LinkedIn](https://www.linkedin.com/in/syed-huma-shah-132185201/). On [my LinkedIn profile](https://www.linkedin.com/in/syed-huma-shah-132185201/), I regularly delve into topics lying at the intersection of AI, technology, data science, personal development, and philosophy. I'd love to connect and continue the conversation with you there._
shumashah
1,714,813
Unit Testing in NodeJS with Express, TypeScript, Jest and Supertest || Part Three: Writing Unit Tests || A Comprehensive Guide
Welcome back to the third installment of our series on building a powerful API with Node.js, Express,...
0
2024-01-02T14:17:55
https://dev.to/abeinevincent/unit-testing-in-nodejs-with-express-typescript-jest-part-three-writing-unit-tests-a-comprehensive-guide-3ipj
jest, node, express, unittest
Welcome back to the third installment of our series on building a powerful API with Node.js, Express, and MongoDB and writing unit tests for each line of code! In the previous two parts, [part one](https://dev.to/abeinevincent/unit-testing-in-nodejs-with-express-typescript-part-one-environment-setup-24a7) and [part two](https://dev.to/abeinevincent/unit-testing-in-nodejs-with-express-typescript-part-two-building-the-api-2g88), we set up the environment and meticulously crafted our application, ensuring robust user authentication and seamless database interactions. Now, it's time to fortify our codebase and elevate our development practices through the implementation of thorough unit tests. In this part, we'll embark on a step-by-step journey through the process of writing unit tests for each line of code in our API. From utility functions, routes, and controllers to services and models, we'll ensure that every aspect of our application is rigorously tested. ## Why Unit Testing Matters Unit testing plays a pivotal role in the development lifecycle, offering numerous benefits, including: #### Early Issue Detection: Identify and address bugs and issues early in the development process, reducing the likelihood of encountering them in later stages. #### Code Maintainability: Establish a safety net for code changes by ensuring that existing functionalities remain intact during modifications. #### Documentation: Unit tests serve as living documentation, showcasing the expected behavior of each component, function, or module. #### Enhanced Collaboration: Facilitate collaboration among team members by providing a clear understanding of the intended functionality and expected outcomes. ## Folder Structure for Unit Tests Organizing your unit tests is crucial for clarity and ease of maintenance. Consider adopting a structure similar to your application's folder layout. For our API, I recommend the following structure: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zxpzfr8i5qf5aj2z32g7.png) Just like I promised, we will be writing unit tests for each line of code. In the first part of this guide, after setting up the environment, we wrote some tests in app.test.ts, but at that point we didn't have all the code in the index file that we do now. Let's first update that file and make sure everything in src/index.ts is working as expected, including the database connection to MongoDB Atlas, before starting with models. Replace the code inside app.test.ts with the following code: ```javascript import request from "supertest"; import app from "../index"; import mongoose from "mongoose"; beforeAll(async () => { // Set up: Establish the MongoDB connection before running tests if (!process.env.MONGODB_URL) { throw new Error("MONGODB_URL environment variable is not defined/set"); } await mongoose.connect(process.env.MONGODB_URL); }); afterAll(async () => { // Teardown: Close the MongoDB connection after all tests have completed await mongoose.connection.close(); }); // Unit test for testing initial route ("/") describe("GET /", () => { it('responds with "Welcome to unit testing guide for nodejs, typescript and express!"', async () => { const response = await request(app).get("/"); expect(response.status).toBe(200); expect(response.text).toBe( "Welcome to unit testing guide for nodejs, typescript and express!"
); }); }); ``` Our `app.test.ts` file orchestrates the setup and teardown of the testing environment, establishing a MongoDB connection before tests begin and closing it afterward. The primary test case verifies the behavior of the root route ("/"), ensuring it responds with a welcome message and a 200 status. The code includes error handling to prompt the developer if the MongoDB connection string (MONGODB_URL) is not defined. This file serves as a foundational template for testing the core functionalities of the application, paving the way for more extensive unit testing across various directories and components. Open your terminal and run: ``` npm run test ``` The output should be as shown below: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hceoeuw5uxbaa1haqvzi.png) All set, we are now ready to write unit tests for our code in the other directories. ## Unit Tests for Database Models Let's start with the data layer by writing unit tests for our database models. Testing models allows us to validate the structure, functionality, and interactions with the database. By establishing robust tests for our models, we create a solid base upon which the rest of our unit tests can thrive. The beforeAll hook orchestrates the setup process before running any tests, ensuring a valid MongoDB connection is established based on the MONGODB_URL environment variable. It throws an error if the variable is not defined, alerting the developer to set the MongoDB connection string. On the other hand, the afterAll hook handles cleanup after all tests are completed, closing the MongoDB connection to prevent resource leaks and maintain a clean testing environment. These hooks collectively manage the initialization and teardown phases, ensuring a consistent and isolated testing environment for the unit tests.
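A side note before we write the model tests (an optional alternative, not part of the original series setup): hitting a live Atlas cluster from Jest works, but it makes the tests slower and dependent on network access. A commonly used alternative is to spin up an in-memory MongoDB for the test run. Here is a minimal sketch, assuming the `mongodb-memory-server` package is installed as a dev dependency:

```javascript
// Optional: an in-memory MongoDB for tests (npm i -D mongodb-memory-server)
import mongoose from "mongoose";
import { MongoMemoryServer } from "mongodb-memory-server";

let mongod: MongoMemoryServer;

beforeAll(async () => {
  // Start a throwaway MongoDB instance and point mongoose at it
  mongod = await MongoMemoryServer.create();
  await mongoose.connect(mongod.getUri());
});

afterAll(async () => {
  // Disconnect and stop the in-memory server so Jest can exit cleanly
  await mongoose.connection.close();
  await mongod.stop();
});
```

The rest of this guide keeps the MONGODB_URL approach so the tests mirror the real database used in parts one and two.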
Create a new file: src/__tests__/models/userModel.test.ts and place the following code: ```javascript import mongoose from "mongoose"; import User, { IUser } from "../../models/User"; import dotenv from "dotenv"; dotenv.config(); describe("User Model Tests", () => { let createdUser: IUser; beforeAll(async () => { // Set up: Establish the MongoDB connection before running tests if (!process.env.MONGODB_URL) { throw new Error("MONGODB_URL environment variable is not defined/set"); } await mongoose.connect(process.env.MONGODB_URL); }); afterAll(async () => { // Remove the created user // await User.deleteMany(); // Teardown: Close the MongoDB connection after all tests have completed await mongoose.connection.close(); }); // Test Case: Create a new user it("should create a new user", async () => { const userData: Partial<Omit<IUser, "_id">> = { email: "test@example.com", username: "testuser", password: "testpassword", isAdmin: false, savedProducts: [], }; createdUser = await User.create(userData); expect(createdUser.email).toBe(userData.email); expect(createdUser.username).toBe(userData.username); expect(createdUser.isAdmin).toBe(userData.isAdmin); }, 10000); // Increase timeout to 10 seconds // Test Case: Ensure email and username are unique it("should fail to create a user with duplicate email or username", async () => { const userData: Partial<Omit<IUser, "_id">> = { email: "test@example.com", username: "testuser", password: "testpassword", isAdmin: false, savedProducts: [], }; try { // Attempt to create a user with the same email and username await User.create(userData); // If the above line doesn't throw an error, the test should fail expect(true).toBe(false); } catch (error) { // Expect a MongoDB duplicate key error (code 11000) expect(error.code).toBe(11000); } }, 10000); // Increase timeout to 10 seconds // Test Case: Get all users it("should get all users", async () => { // Fetch all users from the database const allUsers = await User.find(); // Expectations const userWithoutTimestamps = { _id: createdUser._id, email: createdUser.email, username: createdUser.username, isAdmin: createdUser.isAdmin, savedProducts: createdUser.savedProducts, }; expect(allUsers).toContainEqual( expect.objectContaining(userWithoutTimestamps) ); }); const removeMongoProps = (user: any) => { const { __v, _id, createdAt, updatedAt, ...cleanedUser } = user.toObject(); return cleanedUser; }; // Test Case: Get all users it("should get all users", async () => { const allUsers = await User.find(); // If there is a created user, expect the array to contain an object // that partially matches the properties of the createdUser if (createdUser) { const cleanedCreatedUser = removeMongoProps(createdUser); expect(allUsers).toEqual( expect.arrayContaining([expect.objectContaining(cleanedCreatedUser)]) ); } }); // Test Case: Update an existing user it("should update an existing user", async () => { // Check if there is a created user to update if (createdUser) { // Define updated data const updatedUserData: Partial<IUser> = { username: "testuser", isAdmin: true, }; // Update the user and get the updated user const updatedUser = await User.findByIdAndUpdate( createdUser._id, updatedUserData, { new: true } ); // Expectations expect(updatedUser?.username).toBe(updatedUserData.username); expect(updatedUser?.isAdmin).toBe(updatedUserData.isAdmin); } }); // Test Case: Get user by ID it("should get user by ID", async () => { // Get the created user by ID const retrievedUser = await User.findById(createdUser._id); // Expectations 
expect(retrievedUser?.email).toBe(createdUser.email); expect(retrievedUser?.username).toBe(createdUser.username); // Add other properties that you want to compare // For example, if updatedAt is expected to be different, you can ignore it: // expect(retrievedUser?.updatedAt).toBeDefined(); }); // Test Case: Delete an existing user it("should delete an existing user", async () => { // Delete the created user await User.findByIdAndDelete(createdUser._id); // Attempt to find the deleted user const deletedUser = await User.findById(createdUser._id); // Expectations expect(deletedUser).toBeNull(); }); }); ``` Our `userModel.test.ts` script contains a series of Jest test cases for the user model. Before running the tests, it establishes a connection to a MongoDB database specified in the environment variable `MONGODB_URL`. The test suite covers creating a new user, ensuring the uniqueness of email and username, retrieving all users, updating an existing user, fetching a user by ID, and finally, deleting an existing user. The tests use the User model from the src/models/User file and leverage various assertions to verify the expected behavior of these operations. Additionally, a utility function `removeMongoProps` is used to clean MongoDB-specific properties from user objects for precise comparisons. The script performs setup and teardown procedures to connect to and disconnect from the database, making it a comprehensive set of tests for the user model. I always prefer to test all CRUD operations on the model at this stage to be sure, but the main effort should be on the data since we are on the data layer. Feel free to add more test cases on the model, for example one that checks the data type received on each field against what the schema defines, among others. All good for our user model: all our tests passed under different test data. It's time to move to the product model.
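Before moving on, here is a small sketch of the kind of extra test case suggested above. It is hedged: it assumes the email and password fields are declared as required in the User schema from part two, so adjust the fields to whatever your schema actually marks as required.

```javascript
// Test Case (optional): schema validation for missing/invalid fields
it("should fail validation when required fields are missing", async () => {
  // No email or password supplied on purpose
  const invalidUser = new User({ username: "validationUser" });

  // validate() resolves when the document is valid and rejects with a
  // ValidationError otherwise, so we assert that it rejects
  await expect(invalidUser.validate()).rejects.toThrow();
});
```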
Create a new file: `src/__tests__/models/productModel.test.ts` and place the following code: ```javascript import mongoose from "mongoose"; import Product, { IProduct } from "../../models/Product"; import dotenv from "dotenv"; dotenv.config(); describe("Product Model Tests", () => { let createdProduct: IProduct; beforeAll(async () => { // Set up: Establish the MongoDB connection before running tests if (!process.env.MONGODB_URL) { throw new Error("MONGODB_URL environment variable is not defined/set"); } await mongoose.connect(process.env.MONGODB_URL); }); afterAll(async () => { // Remove the created product // await Product.deleteMany(); // Teardown: Close the MongoDB connection after all tests have completed await mongoose.connection.close(); }); // Test Case: Create a new product it("should create a new product", async () => { const productData: Partial<IProduct> = { title: "Test Product", description: "Product description", image: "https://testimage.png", category: "test category", quantity: "20 kgs", // You can add other fields }; createdProduct = await Product.create(productData); expect(createdProduct.title).toBe(productData.title); expect(createdProduct.description).toBe(productData.description); // Add other expectations for additional fields }, 10000); // Increase timeout to 10 seconds // Test Case: Fail to create a product with missing required fields it("should fail to create a product with missing required fields", async () => { const productData: Partial<IProduct> = { // Omitting required fields }; try { // Attempt to create a product with missing required fields await Product.create(productData); // If the above line doesn't throw an error, the test should fail expect(true).toBe(false); } catch (error) { // Expect a MongoDB validation error expect(error.name).toBe("ValidationError"); } }, 10000); // Increase timeout to 10 seconds // Test Case: Get all products it("should get all products", async () => { // Fetch all products from the database const allProducts = await Product.find(); // Expectations const productWithoutTimestamps = { // _id: createdProduct._id, title: createdProduct.title, description: createdProduct.description, // Add other necessary fields }; expect(allProducts).toContainEqual( expect.objectContaining(productWithoutTimestamps) ); }); const removeMongoProps = (product: any) => { const { __v, _id, createdAt, updatedAt, ...cleanedProduct } = product.toObject(); return cleanedProduct; }; // Test Case: Get all products it("should get all products", async () => { const allProducts = await Product.find(); // If there is a created product, expect the array to contain an object // that partially matches the properties of the createdProduct if (createdProduct) { const cleanedCreatedProduct = removeMongoProps(createdProduct); expect(allProducts).toEqual( expect.arrayContaining([expect.objectContaining(cleanedCreatedProduct)]) ); } }); // Test Case: Update an existing product it("should update an existing product", async () => { // Check if there is a created product to update if (createdProduct) { // Define updated data const updatedProductData: Partial<IProduct> = { title: "Test Product", // replace hre with your updated title // Update other necessary fields }; // Update the product and get the updated product const updatedProduct = await Product.findByIdAndUpdate( createdProduct._id, updatedProductData, { new: true } ); // Expectations expect(updatedProduct?.title).toBe(updatedProductData.title); // Add expectations for other updated fields } }); // Test Case: Get 
product by ID it("should get product by ID", async () => { // Get the created product by ID const retrievedProduct = await Product.findById(createdProduct._id); // Expectations expect(retrievedProduct?.title).toBe(createdProduct.title); // Add other expectations for properties you want to compare }); // Test Case: Delete an existing product it("should delete an existing product", async () => { // Delete the created product await Product.findByIdAndDelete(createdProduct._id); // Attempt to find the deleted product const deletedProduct = await Product.findById(createdProduct._id); // Expectations expect(deletedProduct).toBeNull(); }); }); ``` Our Product Model Tests are well-structured and cover essential CRUD operations on our data layer. Once again, the `beforeAll` and `afterAll` hooks ensure a connection to the MongoDB database is established before the tests and closed afterward. The "Create a new product" test validates the creation of a product with specified data, while the "Fail to create a product with missing required fields" test checks that the model correctly enforces required fields. The "Get all products" tests ensure that products retrieved from the database match the expected properties, and the utility function `removeMongoProps` helps clean unnecessary MongoDB-specific properties. The "Update an existing product" and "Get product by ID" tests validate the update and retrieval of products by ID. Lastly, the "Delete an existing product" test confirms the successful deletion of a product. Overall, our tests provide comprehensive coverage for your Product model, ensuring its correctness and reliability in a MongoDB environment. All set, lets proceed to services: ## Writing UnitTests for Services Create a new file: `src/__tests__/services/userService.test.ts` and place the following code: ```javascript import * as userService from "../../services/userService"; import * as passwordUtils from "../../utils/passwordUtils"; import * as jwtUtils from "../../utils/jwtUtils"; import User, { IUser } from "../../models/User"; import mongoose from "mongoose"; import dotenv from "dotenv"; import * as productModel from "../../models/Product"; // Import the Product model dotenv.config(); // Mock the Product model jest.mock("../../models/Product", () => ({ __esModule: true, default: { findById: jest.fn(), }, })); // Mock the populateUser function jest .spyOn(userService, "populateUser") .mockImplementation(async (user: IUser) => { // Mock the behavior of populateUser function return user; // You can replace this with your desired mock value }); describe("User Service Tests", () => { let createdUser: IUser; // Clean up after tests beforeAll(async () => { // Set up: Establish the MongoDB connection before running tests if (!process.env.MONGODB_URL) { throw new Error("MONGODB_URL environment variable is not defined/set"); } await mongoose.connect(process.env.MONGODB_URL); }); afterAll(async () => { // Remove the created user await User.deleteMany(); // Teardown: Close the MongoDB connection after all tests have completed await mongoose.connection.close(); // Clear all jest mocks jest.clearAllMocks(); }); // Mock the hashPassword function jest .spyOn(passwordUtils, "hashPassword") .mockImplementation(async (password) => { // Mocked hash implementation return password + "_hashed"; }); // Mock the jwtUtils' generateToken function jest .spyOn(jwtUtils, "generateToken") .mockImplementation(() => "mocked_token"); // Test Case: Create a new user it("should create a new user", async () => { const 
userData: Partial<IUser> = { email: "test@example.com", username: "testuser", password: "testpassword", isAdmin: false, savedProducts: [], }; createdUser = await userService.createUser(userData as IUser); // Expectations expect(createdUser.email).toBe(userData.email); expect(createdUser.username).toBe(userData.username); expect(createdUser.isAdmin).toBe(userData.isAdmin); expect(passwordUtils.hashPassword).toHaveBeenCalledWith(userData.password); }, 30000); // Test Case: Login user it("should login a user and generate a token", async () => { // Mock user data for login const loginEmail = "test@example.com"; const loginPassword = "testpassword"; // Mock the comparePassword function jest .spyOn(passwordUtils, "comparePassword") .mockImplementation(async (inputPassword, hashedPassword) => { return inputPassword === hashedPassword.replace("_hashed", ""); }); const { user, token } = await userService.loginUser( loginEmail, loginPassword ); // Expectations expect(user.email).toBe(createdUser.email); expect(user.username).toBe(createdUser.username); expect(user.isAdmin).toBe(createdUser.isAdmin); expect(jwtUtils.generateToken).toHaveBeenCalledWith({ id: createdUser._id, username: createdUser.username, email: createdUser.email, isAdmin: createdUser.isAdmin, }); expect(token).toBe("mocked_token"); }, 20000); const removeMongoProps = (user: any) => { const { __v, _id, createdAt, updatedAt, ...cleanedUser } = user.toObject(); return cleanedUser; }; // Test Case: Get all users it("should get all users", async () => { // Fetch all users from the database const allUsers = await userService.getAllUsers(); // If there is a created user, expect the array to contain an object // that partially matches the properties of the createdUser if (createdUser) { const cleanedCreatedUser = removeMongoProps(createdUser); expect(allUsers).toEqual( expect.arrayContaining([expect.objectContaining(cleanedCreatedUser)]) ); } }, 20000); // Test Case: Delete an existing user it("should delete an existing user", async () => { // Delete the created user await User.findByIdAndDelete(createdUser._id); // Attempt to find the deleted user const deletedUser = await User.findById(createdUser._id); // Expectations expect(deletedUser).toBeNull(); }); }); ``` and `src/__tests__/services/producService.test.ts` and place the following code: ```javascript import * as productService from "../../services/productService"; import Product, { IProduct } from "../../models/Product"; import mongoose from "mongoose"; import dotenv from "dotenv"; dotenv.config(); // Mock the Product model jest.mock("../../models/Product", () => ({ __esModule: true, default: { create: jest.fn(), find: jest.fn(), findById: jest.fn(), findByIdAndUpdate: jest.fn(), findByIdAndDelete: jest.fn(), }, })); // Mock the product data const productId = "mockedProductId"; const mockProduct: IProduct = { _id: productId, title: "Mocked Product", description: "A description for the mocked product", image: "mocked_image.jpg", category: "Mocked Category", quantity: "10", inStock: true, } as IProduct; // Use the toObject method to include additional properties const mockProductWithMethods = { ...mockProduct, toObject: jest.fn(() => mockProduct), }; // Mock the product retrieval by ID (Product.findById as jest.Mock).mockResolvedValueOnce(mockProductWithMethods); describe("Product Service Tests", () => { // Clean up after tests beforeAll(async () => { // Set up: Establish the MongoDB connection before running tests if (!process.env.MONGODB_URL) { throw new Error("MONGODB_URL 
environment variable is not defined/set"); } await mongoose.connect(process.env.MONGODB_URL); }); afterAll(async () => { // Teardown: Close the MongoDB connection after all tests have completed await mongoose.connection.close(); // Clear all jest mocks jest.clearAllMocks(); }); // Test Case: Create a new product it("should create a new product", async () => { // Mock the product data const productData: Partial<IProduct> = { title: "Test Product", // Add other product properties based on your schema }; // Mock the product creation (Product.create as jest.Mock).mockResolvedValueOnce({ ...productData, _id: "mockedProductId", // Mocked product ID }); // Create the product const createdProduct = await productService.createProduct( productData as IProduct ); // Expectations expect(createdProduct.title).toBe(productData.title); // You can add more expectations based on your schema and business logic }, 20000); // Test Case: Get product by ID it("should get product by ID", async () => { // Mock product data const productId = "mockedProductId"; const mockProduct: IProduct = { _id: productId, title: "Mocked Product", description: "A description for the mocked product", image: "mocked_image.jpg", category: "Mocked Category", quantity: "10", inStock: true, } as IProduct; // Mock the findById method of the Product model (Product.findById as jest.Mock).mockResolvedValueOnce(mockProduct); // Call the service const retrievedProduct = await productService.getProductById(productId); // Expectations expect(retrievedProduct).toEqual( expect.objectContaining({ _id: mockProduct._id, title: mockProduct.title, description: mockProduct.description, image: mockProduct.image, category: mockProduct.category, quantity: mockProduct.quantity, inStock: mockProduct.inStock, }) ); expect(Product.findById).toHaveBeenCalledWith(productId); }, 20000); // Test Case: Update product by ID it("should update product by ID", async () => { // Mock product data const productId = "mockedProductId"; const mockProduct: IProduct = { _id: productId, title: "Mocked Product", description: "A description for the mocked product", image: "mocked_image.jpg", category: "Mocked Category", quantity: "10", inStock: true, } as IProduct; // Mock the findByIdAndUpdate method of the Product model (Product.findByIdAndUpdate as jest.Mock).mockResolvedValueOnce(mockProduct); // Mock updated product data const updatedProductData: Partial<IProduct> = { title: "Mocked Product", // update some fields quantity: "10", }; // Call the service const updatedProduct = await productService.updateProduct( productId, updatedProductData ); // Expectations expect(updatedProduct?._id).toBe(mockProduct._id); expect(updatedProduct?.title).toBe(updatedProductData.title); expect(updatedProduct?.quantity).toBe(updatedProductData.quantity); // Add similar expectations for other properties you want to compare expect(Product.findByIdAndUpdate).toHaveBeenCalledWith( productId, updatedProductData, { new: true } ); }, 20000); // Test Case: Delete product by ID it("should delete product by ID", async () => { // Mock product data const productId = "mockedProductId"; const mockProduct: IProduct = { _id: productId, title: "Mocked Product", description: "A description for the mocked product", image: "mocked_image.jpg", category: "Mocked Category", quantity: "10", inStock: true, } as IProduct; // Mock the findByIdAndDelete method of the Product model (Product.findByIdAndDelete as jest.Mock).mockResolvedValueOnce(mockProduct); // Call the service await 
productService.deleteProduct(productId); // Expectations expect(Product.findByIdAndDelete).toHaveBeenCalledWith(productId); }, 20000); // Test Case: Get all products it("should get all products", async () => { // Mock product data const mockProducts: IProduct[] = [ { _id: "product1", title: "Product 1", description: "Description for Product 1", image: "product1_image.jpg", category: "Category 1", quantity: "5", inStock: true, }, { _id: "product2", title: "Product 2", description: "Description for Product 2", image: "product2_image.jpg", category: "Category 2", quantity: "10", inStock: false, }, ] as IProduct[]; // Mock the find method of the Product model (Product.find as jest.Mock).mockResolvedValueOnce(mockProducts); // Call the service const retrievedProducts = await productService.getAllProducts(); // Expectations expect(Product.find).toHaveBeenCalled(); expect(retrievedProducts).toEqual(mockProducts); }, 20000); }); ``` Our `createUser` service uses two external functions; `hashPassword` utility and `generateToken` middleware, all in src/utils. Lets first diagonise these utility functions and discuss their unit tests that this service is mocking. Create a new file: `src/__tests__/utils/passwordUtils.test.ts` and place the following code: ```javascript import * as passwordUtils from "../../utils/passwordUtils"; describe("Password Utilities Tests", () => { // Test Case: Hash Password it("should hash a password", async () => { const password = "testpassword"; const hashedPassword = await passwordUtils.hashPassword(password); expect(hashedPassword).toBeDefined(); expect(typeof hashedPassword).toBe("string"); }); // Test Case: Compare Password it("should compare a valid password", async () => { const password = "testpassword"; const hashedPassword = await passwordUtils.hashPassword(password); const isPasswordValid = await passwordUtils.comparePassword( password, hashedPassword ); expect(isPasswordValid).toBe(true); }); // Test Case: Compare Invalid Password it("should compare an invalid password", async () => { const password = "testpassword"; const hashedPassword = await passwordUtils.hashPassword(password); const isPasswordValid = await passwordUtils.comparePassword( "wrongpassword", hashedPassword ); expect(isPasswordValid).toBe(false); }); }); ``` and `src/__tests__/utils/jwtUtils.test.ts` and place the following code: ```javascript import { JWTPayload, generateToken, verifyToken, verifyTokenAndAuthorization, } from "../../utils/jwtUtils"; import dotenv from "dotenv"; dotenv.config(); describe("JWT Utils Tests", () => { const mockPayload: JWTPayload = { id: "mockUserId", username: "mockUsername", email: "mock@example.com", isAdmin: false, }; const mockToken = "mockToken"; process.env.JWT_SEC = "mockSecret"; process.env.JWT_EXPIRY_PERIOD = "1h"; // Test Case: Generate Token it("should generate a JWT token", () => { const token = generateToken(mockPayload); expect(token).toBeDefined(); }); it("should verify a valid JWT token", (done) => { const req: any = { headers: { token: `Bearer ${mockToken}`, }, }; const res: any = { status: (status: number) => { expect(status).toBe(403); return { json: (message: string) => { expect(message).toBe("Token is not valid!"); done(); }, }; }, }; const next = () => { // Should not reach here done.fail("Should not reach next middleware on invalid token"); }; verifyToken(req, res, next); }); // Test Case: Verify Token (invalid token) it("should handle an invalid JWT token", (done) => { const req: any = { headers: { token: "InvalidToken", }, }; const res: any 
= { status: (status: number) => { expect(status).toBe(403); return { json: (message: string) => { expect(message).toBe("Token is not valid!"); done(); }, }; }, }; const next = () => { // Should not reach here done.fail("Should not reach next middleware on invalid token"); }; verifyToken(req, res, next); }); // Cleanup: Reset environment variables after tests afterAll(() => { delete process.env.JWT_SEC; delete process.env.JWT_EXPIRY_PERIOD; }); }); ``` In our suite for hashing the password, we examine the core functionalities integral to safeguarding sensitive credentials. The initial litmus test involves the seamless hashing of passwords, ensuring a robust defense against unauthorized access. A keen eye is cast upon the compare password mechanism, where the suite deftly validates the correctness of a given password against its hashed counterpart. In cases of both valid and invalid password comparisons, the suite orchestrates a meticulous evaluation, fortifying our application's resilience against potential security threats. This comprehensive scrutiny instills confidence in the reliability and effectiveness of our Password Utilities, as they stand guard to fortify the fortress of our user authentication system. Also, in the suite for token generation and verification, we scrutinize the fundamental functionalities that underpin the secure handling of JSON Web Tokens (JWTs). The first checkpoint ensures the seamless generation of JWT tokens, a cornerstone in our approach to secure authentication. Delving into token verification, the suite rigorously examines scenarios of both valid and invalid tokens, intricately assessing the resilience of our verification mechanism. In cases where a valid token is presented, the verification process is expected to seamlessly proceed, ensuring that the user details are defined. Conversely, when confronted with an invalid token, our suite orchestrates a precise response, safeguarding our application against unauthorized access. As the final act of diligence, the suite gracefully resets environment variables, leaving the testing landscape pristine. In the `User Service Tests` suite, we meticulously verify the functionality of our user service within a Node.js backend. Leveraging the power of mocking, we intentionally isolate the service from external dependencies, ensuring a controlled testing environment. Our suite encompasses crucial aspects, such as creating a new user, logging in a user, retrieving all users, and deleting an existing user. Each test case meticulously orchestrates mock data and expectations, guaranteeing the service operates seamlessly. Our commitment to best practices shines through as we meticulously clean up by removing the created user post-testing. To facilitate more accurate comparisons of user objects, we employ the `removeMongoProps` function, eliminating MongoDB-specific properties. This suite is a testament to our dedication to thorough and isolated testing, laying a robust foundation for a resilient backend service. On the otherhand, in the `Product Service Tests` suite, we scrutinize the functionality of our backend API's product-related operations. Employing the power of mocking, we deliberately isolate the service from external dependencies, ensuring a controlled testing environment. The suite comprehensively tests key functionalities, including creating a new product, fetching a product by ID, updating a product, deleting a product, and fetching all products. 
Each test case is meticulously crafted, incorporating mock data, precise expectations, and detailed property comparisons, thereby validating the robustness and reliability of our ProductService. The suite adheres to best practices, ensuring that the testing environment is thoroughly cleaned up post-execution by closing the MongoDB connection and clearing all Jest mocks. This comprehensive and systematic testing approach reinforces our commitment to delivering a resilient and dependable backend service. ## Writing Unit Tests for Controllers Create a new file: `src/__tests__/controllers/userController.test.ts` and place the following code: ```javascript import { Request, Response } from "express"; import * as UserController from "../../controllers/userController"; import * as UserService from "../../services/userService"; import { IProduct } from "models/Product"; import { IUser } from "models/User"; // Mock the UserService jest.mock("../../services/userService"); describe("User Controller Tests", () => { // Test Case: Create a new user it("should create a new user", async () => { // Mock user data from the request body const mockUserData: IUser = { email: "test@example.com", username: "testuser", password: "testpassword", isAdmin: false, savedProducts: [], } as IUser; // Mock the create user response from the UserService const mockCreatedUser = { _id: "mockUserId", ...mockUserData, }; // Mock the request and response objects const mockRequest = { body: mockUserData, } as Request; const mockResponse = { status: jest.fn().mockReturnThis(), json: jest.fn(), } as unknown as Response; // Mock the createUser function of the UserService (UserService.createUser as jest.Mock).mockResolvedValueOnce( mockCreatedUser ); // Call the createUser controller await UserController.createUser(mockRequest, mockResponse); // Expectations expect(UserService.createUser).toHaveBeenCalledWith(mockUserData); expect(mockResponse.status).toHaveBeenCalledWith(201); expect(mockResponse.json).toHaveBeenCalledWith(mockCreatedUser); }, 20000); // Test Case: Login user it("should login a user and return a token", async () => { // Mock user credentials from the request body const mockUserCredentials = { email: "test@example.com", password: "testpassword", }; type IMockCreatedUser = { user: Partial<IUser>; token: string; }; // Mock the login response from the UserService const mockLoginResponse: IMockCreatedUser = { user: { _id: "mockUserId", email: mockUserCredentials.email, username: "testuser", isAdmin: false, savedProducts: [], }, token: "mocked_token", }; // Mock the request and response objects const mockRequest = { body: mockUserCredentials, } as Request; const mockResponse = { status: jest.fn().mockReturnThis(), json: jest.fn(), } as unknown as Response; // Mock the loginUser function of the UserService (UserService.loginUser as jest.Mock).mockResolvedValueOnce( mockLoginResponse ); // Call the loginUser controller await UserController.loginUser(mockRequest, mockResponse); // Expectations expect(UserService.loginUser).toHaveBeenCalledWith( mockUserCredentials.email, mockUserCredentials.password ); expect(mockResponse.status).toHaveBeenCalledWith(200); expect(mockResponse.json).toHaveBeenCalledWith(mockLoginResponse); }, 20000); // Test Case: Get all users it("should get all users", async () => { // Mock the array of users from the UserService const mockUsers: IUser[] = [ { _id: "mockUserId1", email: "user1@example.com", username: "user1", isAdmin: false, savedProducts: [], }, { _id: "mockUserId2", email: 
"user2@example.com", username: "user2", isAdmin: true, savedProducts: [], }, ] as IUser[]; // Mock the request and response objects const mockRequest = {} as Request; const mockResponse = { status: jest.fn().mockReturnThis(), json: jest.fn(), } as unknown as Response; // Mock the getAllUsers function of the UserService (UserService.getAllUsers as jest.Mock).mockResolvedValueOnce(mockUsers); // Call the getAllUsers controller await UserController.getAllUsers(mockRequest, mockResponse); // Expectations expect(UserService.getAllUsers).toHaveBeenCalled(); expect(mockResponse.status).toHaveBeenCalledWith(200); expect(mockResponse.json).toHaveBeenCalledWith(mockUsers); }, 20000); // Test Case: Error fetching users it("should handle error when fetching users", async () => { // Mock the error response from the UserService const mockErrorResponse = { message: "Error fetching users", }; // Mock the request and response objects const mockRequest = {} as Request; const mockResponse = { status: jest.fn().mockReturnThis(), json: jest.fn(), } as unknown as Response; // Mock the getAllUsers function of the UserService to throw an error (UserService.getAllUsers as jest.Mock).mockRejectedValueOnce( mockErrorResponse ); // Call the getAllUsers controller await UserController.getAllUsers(mockRequest, mockResponse); // Expectations expect(UserService.getAllUsers).toHaveBeenCalled(); expect(mockResponse.status).toHaveBeenCalledWith(500); expect(mockResponse.json).toHaveBeenCalledWith({ error: mockErrorResponse.message, }); }, 20000); // Test Case: Get user by ID it("should get user by ID", async () => { // Mock the user from the UserService const mockUser: IUser = { _id: "mockUserId", email: "mock@example.com", username: "mockUser", isAdmin: false, savedProducts: [], } as IUser; // Mock the request and response objects const mockRequest: any = { params: { userId: "mockUserId", }, }; const mockResponse = { status: jest.fn().mockReturnThis(), json: jest.fn(), } as unknown as Response; // Mock the getUserById function of the UserService (UserService.getUserById as jest.Mock).mockResolvedValueOnce(mockUser); // Call the getUserById controller await UserController.getUserById(mockRequest, mockResponse); // Expectations expect(UserService.getUserById).toHaveBeenCalledWith( mockRequest.params.userId ); expect(mockResponse.status).toHaveBeenCalledWith(200); expect(mockResponse.json).toHaveBeenCalledWith(mockUser); }, 20000); // Test Case: User not found it("should handle case where user is not found", async () => { // Mock the request and response objects const mockRequest: any = { params: { userId: "nonexistentUserId", }, }; const mockResponse = { status: jest.fn().mockReturnThis(), json: jest.fn(), } as unknown as Response; // Mock the getUserById function of the UserService to return null (UserService.getUserById as jest.Mock).mockResolvedValueOnce(null); // Call the getUserById controller await UserController.getUserById(mockRequest, mockResponse); // Expectations expect(UserService.getUserById).toHaveBeenCalledWith( mockRequest.params.userId ); expect(mockResponse.status).toHaveBeenCalledWith(404); expect(mockResponse.json).toHaveBeenCalledWith({ error: "User not found" }); }, 20000); // Test Case: Error fetching user it("should handle error when fetching user by ID", async () => { // Mock the error response from the UserService const mockErrorResponse = { message: "Error fetching user", }; // Mock the request and response objects const mockRequest: any = { params: { userId: "errorUserId", }, }; const mockResponse 
= { status: jest.fn().mockReturnThis(), json: jest.fn(), } as unknown as Response; // Mock the getUserById function of the UserService to throw an error (UserService.getUserById as jest.Mock).mockRejectedValueOnce( mockErrorResponse ); // Call the getUserById controller await UserController.getUserById(mockRequest, mockResponse); // Expectations expect(UserService.getUserById).toHaveBeenCalledWith( mockRequest.params.userId ); expect(mockResponse.status).toHaveBeenCalledWith(500); expect(mockResponse.json).toHaveBeenCalledWith({ error: mockErrorResponse.message, }); }, 20000); // Test Case: Delete user by ID it("should delete user by ID", async () => { // Mock the request and response objects const mockRequest: Request< { userId: string }, any, any, any, Record<string, any> > = { params: { userId: "mockUserId", }, } as Request<{ userId: string }, any, any, any, Record<string, any>>; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the deleteUser function of the UserService (UserService.deleteUser as jest.Mock).mockResolvedValueOnce(null); // Call the deleteUser controller await UserController.deleteUser(mockRequest, mockResponse); // Expectations expect(UserService.deleteUser).toHaveBeenCalledWith( mockRequest.params.userId ); expect(mockResponse.status).toHaveBeenCalledWith(204); expect(mockResponse.send).toHaveBeenCalled(); expect(mockResponse.json).not.toHaveBeenCalled(); // Ensure json is not called for a 204 status }, 20000); }); ``` and `src/__tests__/controllers/productController.test.ts` and place the following code: ```javascript import { Request, Response } from "express"; import * as ProductController from "../../controllers/productController"; import * as ProductService from "../../services/productService"; // Mock the ProductService jest.mock("../../services/productService"); describe("Product Controller Tests", () => { // Test Case: Create a new product it("should create a new product", async () => { // Mock the request and response objects const mockRequest: Request<{}, any, any, any, Record<string, any>> = { body: { title: "Mocked Product", description: "A description for the mocked product", image: "mocked_image.jpg", category: "Mocked Category", quantity: "10", inStock: true, }, } as Request<{}, any, any, any, Record<string, any>>; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the createProduct function of the ProductService (ProductService.createProduct as jest.Mock).mockResolvedValueOnce({ _id: "mockedProductId", title: "Mocked Product", description: "A description for the mocked product", image: "mocked_image.jpg", category: "Mocked Category", quantity: "10", inStock: true, }); // Call the createProduct controller await ProductController.createProduct(mockRequest, mockResponse); // Expectations expect(ProductService.createProduct).toHaveBeenCalledWith(mockRequest.body); expect(mockResponse.status).toHaveBeenCalledWith(201); expect(mockResponse.json).toHaveBeenCalledWith({ _id: "mockedProductId", title: "Mocked Product", description: "A description for the mocked product", image: "mocked_image.jpg", category: "Mocked Category", quantity: "10", inStock: true, }); expect(mockResponse.send).not.toHaveBeenCalled(); // Ensure send is not called for a 201 status }); // Test Case: Get all products - Success it("should get all products successfully", async () => { // Mock the request and response objects const 
mockRequest: any = {}; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the getAllProducts function of the ProductService (ProductService.getAllProducts as jest.Mock).mockResolvedValueOnce([ { _id: "mockedProductId1", title: "Mocked Product 1", description: "Description for mocked product 1", image: "image1.jpg", category: "Category 1", quantity: "5", inStock: true, }, { _id: "mockedProductId2", title: "Mocked Product 2", description: "Description for mocked product 2", image: "image2.jpg", category: "Category 2", quantity: "10", inStock: false, }, ]); // Call the getAllProducts controller await ProductController.getAllProducts(mockRequest, mockResponse); // Expectations expect(ProductService.getAllProducts).toHaveBeenCalled(); expect(mockResponse.status).toHaveBeenCalledWith(200); expect(mockResponse.json).toHaveBeenCalledWith([ { _id: "mockedProductId1", title: "Mocked Product 1", description: "Description for mocked product 1", image: "image1.jpg", category: "Category 1", quantity: "5", inStock: true, }, { _id: "mockedProductId2", title: "Mocked Product 2", description: "Description for mocked product 2", image: "image2.jpg", category: "Category 2", quantity: "10", inStock: false, }, ]); expect(mockResponse.send).not.toHaveBeenCalled(); // Ensure send is not called for a 200 status }); // Test Case: Get all products - Error it("should handle errors when getting all products", async () => { // Mock the request and response objects const mockRequest: any = {}; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the getAllProducts function of the ProductService to throw an error (ProductService.getAllProducts as jest.Mock).mockRejectedValueOnce( new Error("Error getting products") ); // Call the getAllProducts controller await ProductController.getAllProducts(mockRequest, mockResponse); // Expectations expect(ProductService.getAllProducts).toHaveBeenCalled(); expect(mockResponse.status).toHaveBeenCalledWith(500); expect(mockResponse.json).toHaveBeenCalledWith({ error: "Error getting products", }); expect(mockResponse.send).not.toHaveBeenCalled(); // Ensure send is not called for a 500 status }); // Test Case: Get product by ID - Success it("should get product by ID successfully", async () => { // Mock the request and response objects const mockRequest: Request = { params: { productId: "mockedProductId" }, } as unknown as Request; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the getProductById function of the ProductService (ProductService.getProductById as jest.Mock).mockResolvedValueOnce({ _id: "mockedProductId", title: "Mocked Product", description: "Description for mocked product", image: "mocked_image.jpg", category: "Mocked Category", quantity: "10", inStock: true, }); // Call the getProductById controller await ProductController.getProductById(mockRequest, mockResponse); // Expectations expect(ProductService.getProductById).toHaveBeenCalledWith( "mockedProductId" ); expect(mockResponse.status).toHaveBeenCalledWith(200); expect(mockResponse.json).toHaveBeenCalledWith({ _id: "mockedProductId", title: "Mocked Product", description: "Description for mocked product", image: "mocked_image.jpg", category: "Mocked Category", quantity: "10", inStock: true, }); expect(mockResponse.send).not.toHaveBeenCalled(); // Ensure send is not called for a 200 status }); // 
Test Case: Get product by ID - Not Found it("should handle product not found when getting by ID", async () => { // Mock the request and response objects const mockRequest: Request = { params: { productId: "nonExistentProductId" }, } as unknown as Request; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the getProductById function of the ProductService to return null (ProductService.getProductById as jest.Mock).mockResolvedValueOnce(null); // Call the getProductById controller await ProductController.getProductById(mockRequest, mockResponse); // Expectations expect(ProductService.getProductById).toHaveBeenCalledWith( "nonExistentProductId" ); expect(mockResponse.status).toHaveBeenCalledWith(404); expect(mockResponse.json).toHaveBeenCalledWith({ error: "Product not found", }); expect(mockResponse.send).not.toHaveBeenCalled(); // Ensure send is not called for a 404 status }); // Test Case: Get product by ID - Error it("should handle errors when getting product by ID", async () => { // Mock the request and response objects const mockRequest: Request = { params: { productId: "mockedProductId" }, } as unknown as Request; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the getProductById function of the ProductService to throw an error (ProductService.getProductById as jest.Mock).mockRejectedValueOnce( new Error("Error getting product by ID") ); // Call the getProductById controller await ProductController.getProductById(mockRequest, mockResponse); // Expectations expect(ProductService.getProductById).toHaveBeenCalledWith( "mockedProductId" ); expect(mockResponse.status).toHaveBeenCalledWith(500); expect(mockResponse.json).toHaveBeenCalledWith({ error: "Error getting product by ID", }); expect(mockResponse.send).not.toHaveBeenCalled(); // Ensure send is not called for a 500 status }); // Test Case: Update product by ID - Success it("should update product by ID successfully", async () => { // Mock the request and response objects const mockRequest: Request = { params: { productId: "mockedProductId" }, body: { title: "Updated Product", description: "Updated description", image: "updated_image.jpg", category: "Updated Category", quantity: "15", inStock: false, }, } as unknown as Request; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the updateProduct function of the ProductService (ProductService.updateProduct as jest.Mock).mockResolvedValueOnce({ _id: "mockedProductId", title: "Updated Product", description: "Updated description", image: "updated_image.jpg", category: "Updated Category", quantity: "15", inStock: false, }); // Call the updateProduct controller await ProductController.updateProduct(mockRequest, mockResponse); // Expectations expect(ProductService.updateProduct).toHaveBeenCalledWith( "mockedProductId", mockRequest.body ); expect(mockResponse.status).toHaveBeenCalledWith(200); expect(mockResponse.json).toHaveBeenCalledWith({ _id: "mockedProductId", title: "Updated Product", description: "Updated description", image: "updated_image.jpg", category: "Updated Category", quantity: "15", inStock: false, }); expect(mockResponse.send).not.toHaveBeenCalled(); // Ensure send is not called for a 200 status }); // Test Case: Update product by ID - Not Found it("should handle product not found when updating by ID", async () => { // Mock the request and response 
objects const mockRequest: Request = { params: { productId: "nonExistentProductId" }, body: { title: "Updated Product", description: "Updated description", image: "updated_image.jpg", category: "Updated Category", quantity: "15", inStock: false, }, } as unknown as Request; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the updateProduct function of the ProductService to return null (ProductService.updateProduct as jest.Mock).mockResolvedValueOnce(null); // Call the updateProduct controller await ProductController.updateProduct(mockRequest, mockResponse); // Expectations expect(ProductService.updateProduct).toHaveBeenCalledWith( "nonExistentProductId", mockRequest.body ); expect(mockResponse.status).toHaveBeenCalledWith(404); expect(mockResponse.json).toHaveBeenCalledWith({ error: "Product not found", }); expect(mockResponse.send).not.toHaveBeenCalled(); // Ensure send is not called for a 404 status }); // Test Case: Update product by ID - Error it("should handle errors when updating product by ID", async () => { // Mock the request and response objects const mockRequest: Request = { params: { productId: "mockedProductId" }, body: { title: "Updated Product", description: "Updated description", image: "updated_image.jpg", category: "Updated Category", quantity: "15", inStock: false, }, } as unknown as Request; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the updateProduct function of the ProductService to throw an error (ProductService.updateProduct as jest.Mock).mockRejectedValueOnce( new Error("Error updating product by ID") ); // Call the updateProduct controller await ProductController.updateProduct(mockRequest, mockResponse); // Expectations expect(ProductService.updateProduct).toHaveBeenCalledWith( "mockedProductId", mockRequest.body ); expect(mockResponse.status).toHaveBeenCalledWith(500); expect(mockResponse.json).toHaveBeenCalledWith({ error: "Error updating product by ID", }); expect(mockResponse.send).not.toHaveBeenCalled(); // Ensure send is not called for a 500 status }); // Test Case: Delete product by ID - Success it("should delete product by ID successfully", async () => { // Mock the request and response objects const mockRequest: Request = { params: { productId: "mockedProductId" }, } as unknown as Request; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), } as unknown as Response; // Call the deleteProduct function of the ProductService await ProductController.deleteProduct(mockRequest, mockResponse); // Expectations expect(ProductService.deleteProduct).toHaveBeenCalledWith( "mockedProductId" ); expect(mockResponse.status).toHaveBeenCalledWith(204); expect(mockResponse.send).toHaveBeenCalled(); }); // Test Case: Delete product by ID - Not Found it("should handle product not found when deleting by ID", async () => { // Mock the request and response objects const mockRequest: Request = { params: { productId: "nonExistentProductId" }, } as unknown as Request; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the deleteProduct function of the ProductService to return null (ProductService.deleteProduct as jest.Mock).mockResolvedValueOnce(null); // Call the deleteProduct controller await ProductController.deleteProduct(mockRequest, mockResponse); // Expectations 
expect(ProductService.deleteProduct).toHaveBeenCalledWith( "nonExistentProductId" ); expect(mockResponse.status).toHaveBeenCalledWith(204); expect(mockResponse.send).toHaveBeenCalled(); expect(mockResponse.json).not.toHaveBeenCalled(); // Ensure json is not called for a 204 status }); // Test Case: Delete product by ID - Error it("should handle errors when deleting product by ID", async () => { // Mock the request and response objects const mockRequest: Request = { params: { productId: "mockedProductId" }, } as unknown as Request; const mockResponse = { status: jest.fn().mockReturnThis(), send: jest.fn(), json: jest.fn(), } as unknown as Response; // Mock the deleteProduct function of the ProductService to throw an error (ProductService.deleteProduct as jest.Mock).mockRejectedValueOnce( new Error("Error deleting product by ID") ); // Call the deleteProduct controller await ProductController.deleteProduct(mockRequest, mockResponse); // Expectations expect(ProductService.deleteProduct).toHaveBeenCalledWith( "mockedProductId" ); expect(mockResponse.status).toHaveBeenCalledWith(500); expect(mockResponse.json).toHaveBeenCalledWith({ error: "Error deleting product by ID", }); expect(mockResponse.send).not.toHaveBeenCalled(); // Ensure send is not called for a 500 status }); }); ``` In the test suite for the User Controller, we employ thorough mocking using jest.mock to isolate the UserService, ensuring controlled testing of the controller's functionalities. Each test case is meticulously crafted to cover key aspects of user management, such as creating a new user, user login, fetching all users, retrieving a user by ID, handling scenarios where the user is not found, and deleting a user by ID. The suite demonstrates the usage of precise mock data and expectations, validating the User Controller's reliability. Notably, the jest.mock line is crucial, replacing the actual UserService with a mock implementation to create a controlled environment for testing. This approach ensures that the tests focus solely on the User Controller's behavior without interference from external dependencies. This testing strategy aims to deliver a secure, efficient, and error-resilient user management system in our backend API. In the `Product Controller Tests` test suite for the Product Controller, we orchestrate a comprehensive set of scenarios to validate the functionality of the controller methods. The tests cover creating a new product, fetching all products (both successful and error cases), retrieving a product by ID (with successful, not found, and error scenarios), updating a product by ID (again, covering success, not found, and error cases), and finally, deleting a product by ID (encompassing successful, not found, and error scenarios). Each test is structured with meticulous mock data, expectations, and error handling, ensuring a robust examination of the Product Controller's behavior. The `jest.mock("../../services/productService")` helps us to isolate the ProductService for controlled testing. All set for our controllers, lets also discuss how we write tests for our routes. ## Writing Unit Tests for Routes Before we start to write our unit tests for all our routes, we need to first examine our setup and make some adjustments. First we need to isolate a testing server to avoid conflicts testing various suites for different routes. 
Create a new file in the root directory, call it: `test_setup.ts` and place the following code: ```javascript import app from "./src/index"; // Import your main app from the index file const port = 9000; // Choose a suitable port const test_server = app.listen(port, () => { console.log(`Server is running on port ${port}`); }); export default test_server; ``` This helps us interact with the testing server instead of using the actual server of ours running at port 8800. Lets also add the following line to our .env file: ``` TEST_ENV=true ``` and start our main server conditionally in our `src/index.ts` according to `TEST_ENV` environment variable. Replace: ```javascript app.listen(PORT, () => { console.log(`Backend server is running at port ${PORT}`); }); ``` with ```javascript // Check if it's not a test environment before starting the server if (!process.env.TEST_ENV) { app.listen(PORT, () => { console.log(`Backend server is running at port ${PORT}`); }); } ``` to start our main server only if we are not testing mode. Don't forget to alter the `TEST_ENV` by setting it to `false` when youre done running unit tests or when you get in production. Create a new file: `src/__tests__/routes/userRoutes.test.ts` and place the following code: ```javascript // userRoutes.test.ts import request from "supertest"; import test_server from "../../../test_setup"; import User from "../../models/User"; import supertest from "supertest"; // You can either test with a real token like this: const adminToken = "YourAdminToken"; // Alternatively: const nonAdminToken = "YourNonAdminToken"; // OR // Write a function that simulates an actual login process and extract the token from there // And then store it in a variable for use in various test cases e.g: // Assuming you have admin credentials for testing // const adminCredentials = { // email: "admin@example.com", // password: "adminpassword", // }; // Assuming you have a function to generate an authentication token // const getAuthToken = async () => { // const response = await supertest(test_server) // .post("/api/v1/users/login") // .send(adminCredentials); // return response.body.token; // }; // Get the admin authentication token (inside an async function or code block) // const authToken = await getAuthToken(); // ------------------> do the same for non admin user and store the token in // its own variable describe("User Routes", () => { beforeAll(async () => {}); // Clean up after tests afterAll(async () => { // Remove the created user await User.deleteOne({ username: "testuser", }); // Clear all jest mocks jest.clearAllMocks(); test_server.close(); }); // Test case for creating a new user it("should create a new user", async () => { const response = await request(test_server) .post("/api/v1/users/create") // Update the route path accordingly .send({ // Your user data for testing email: "admin@example.com", password: "adminpassword", username: "testuser", isAdmin: false, savedProducts: [], }); // Expectations expect(response.status).toBe(201); expect(response.body).toHaveProperty("email", "admin@example.com"); expect(response.body).toHaveProperty("username", "testuser"); // Add more expectations based on your user data // Optionally, you can store the created user for future tests const createdUser = response.body; }, 20000); // Test case for logging in a user it("should login a user and return a token", async () => { // Assuming you have a test user created previously const testUser = { email: "admin@example.com", password: "adminpassword", }; const 
response = await request(test_server) .post("/api/v1/users/login") // Update the route path accordingly .send({ email: testUser.email, password: testUser.password, }); // Expectations expect(response.status).toBe(200); expect(response.body).toHaveProperty("user"); expect(response.body).toHaveProperty("token"); // Add more expectations based on your login data // Optionally, you can store the token for future authenticated requests const authToken = response.body.token; }, 20000); // Assuming you have admin credentials for testing const adminCredentials = { email: "admin@example.com", password: "adminpassword", }; // Assuming you have a function to generate an authentication token const getAuthToken = async (credentials: object) => { const response = await supertest(test_server) .post("/api/v1/users/login") .send(credentials); return response.body.token; }; // Test case for getting all users (admin access) it("should get all users with admin access", async () => { // Assuming you have admin credentials for testing const adminCredentials = { email: "admin@example.com", password: "adminpassword", }; // Log in as admin to get the admin token // const adminToken = await getAuthToken(adminCredentials); // Send a request to the route with the admin token const response = await request(test_server) .get("/api/v1/users/all") .set("token", `Bearer ${adminToken}`); // Expectations expect(response.status).toBe(200); // Add more expectations based on your user data // Optionally, you can store the users for further assertions const allUsers = response.body; }, 20000); // Test case for getting all users without admin access it("should return 403 Forbidden when accessing all users without admin access", async () => { // Assuming you have non-admin credentials for testing const nonAdminCredentials = { email: "nonadmin@example.com", password: "nonadminpassword", }; // Log in as non-admin to get the non-admin token // const nonAdminToken = await getAuthToken(nonAdminCredentials); // Send a request to the route with the non-admin token const response = await request(test_server) .get("/api/v1/users/all") .set("token", `Bearer ${nonAdminToken}`); // Expectations expect(response.status).toBe(403); // Add more expectations based on how you handle non-admin access }, 20000); // Test case: Should successfully update a user with valid credentials (admin or account owner) it("should successfully update a user with valid credentials (admin or account owner)", async () => { // Assuming you have a test user created previously // If you dont have you can create one programattically like we did writing // Tests for create new user const testUser = { email: "testuser@example.com", password: "testuserpassword", username: "testuser", }; // Update the user using the hardcoded token const updateResponse = await request(test_server) .put(`/api/v1/users/update/6593ca275db905747ea085aa`) .set("token", `Bearer ${adminToken}`) .send({ // Your updated user data username: "updateduser", }); // Expectations expect(updateResponse.status).toBe(200); expect(updateResponse.body).toHaveProperty("username", "updateduser"); // Add more expectations based on your updated user data }, 20000); // Test case: Should return 403 Forbidden when updating a user without valid credentials it("should return 403 Forbidden when updating a user without valid credentials", async () => { // Assuming you have a different user for testing const otherUserCredentials = { email: "otheruser@example.com", password: "otheruserpassword", username: "otheruser", }; 
// Create a different user const createResponse = await request(test_server) .post("/api/v1/users/create") .send(otherUserCredentials); // Get the ID of the created user const otherUserId = createResponse.body._id; // Attempt to update the user without valid credentials (not admin or account owner) const updateResponse = await request(test_server) .put(`/api/v1/users/update/${otherUserId}`) .set("token", `Bearer ${nonAdminToken}`) .send({ // Your updated user data username: "updateduser", }); // Expectations expect(updateResponse.status).toBe(403); // Add more expectations based on how you handle non-authorized updates }, 20000); // Test case: Should successfully delete a user with valid credentials (admin or account owner) it("should successfully delete a user with valid credentials (admin or account owner)", async () => { // Assuming you have a test user created previously const testUser = { email: "testuser@example.com", password: "testuserpassword", username: "testuser", }; // Create a test user const createResponse = await request(test_server) .post("/api/v1/users/create") .send(testUser); // Get the ID of the created user const userId = createResponse.body._id; // Log the ID to check if it's correct console.log("User ID:", userId); // Delete the user using the hardcoded token const deleteResponse = await request(test_server) .delete(`/api/v1/users/delete/${userId}`) .set("token", `Bearer ${adminToken}`); // Expectations expect(deleteResponse.status).toBe(204); }, 20000); }); ``` and another file: `src/__tests__/routes/productRoutes.test.ts` and place the following code: ```javascript import request from "supertest"; import test_server from "../../../test_setup"; import supertest from "supertest"; // You can either test with a real token like this: const adminToken = "YourAdminToken"; // Alternatively: const nonAdminToken = "YourNonAdminToken"; // OR // Write a function that simulates an actual login process and extract the token from there // And then store it in a variable for use in various test cases e.g: // Assuming you have admin credentials for testing // const adminCredentials = { // email: "admin@example.com", // password: "adminpassword", // }; // Assuming you have a function to generate an authentication token // const getAuthToken = async () => { // const response = await supertest(test_server) // .post("/api/v1/users/login") // .send(adminCredentials); // return response.body.token; // }; // Get the admin authentication token (inside an async function or code block) // const authToken = await getAuthToken(); // ------------------> do the same for non admin user and store the token in // its own variable // Assuming you have a test product data const testProduct = { title: "Test Product", description: "This is a test product", image: "test-image.jpg", category: "Test Category", quantity: "10", inStock: true, }; describe("User Routes", () => { // Test case: Should successfully create a new product with valid admin credentials it("should successfully create a new product with valid admin credentials", async () => { // Create a new product using the admin token const createResponse = await request(test_server) .post("/api/v1/products/create") .set("token", `Bearer ${adminToken}`) .send(testProduct); // Expectations expect(createResponse.status).toBe(201); expect(createResponse.body).toHaveProperty("title", "Test Product"); // Add more expectations based on your product data }, 20000); // Test case: Should successfully get all products without authentication it("should 
successfully get all products without authentication", async () => { // Make a request to the endpoint without providing an authentication token const response = await request(test_server).get("/api/v1/products/all"); // Expectations expect(response.status).toBe(200); // Add more expectations based on your implementation }, 20000); // Test case: Should successfully get a product by ID with a valid token it("should successfully get a product by ID with a valid token", async () => { // Assuming you have a product ID for testing const productId = "6593e048731070abb0939faf"; // Make a request to the endpoint with a valid token and product ID const response = await request(test_server) .get(`/api/v1/products/${productId}`) .set("token", `Bearer ${adminToken}`); // Expectations expect(response.status).toBe(200); // Add more expectations based on your implementation }, 20000); // Test case: Should successfully update a product by ID with admin access it("should successfully update a product by ID with admin access", async () => { // Assuming you have a product ID for testing const productId = "6593e048731070abb0939faf"; // Replace 'yourUpdatedProductData' with the data you want to update the product with const updatedProductData = { title: "Updated Product", description: "Updated Product Description", // Add more fields as needed }; // Make a request to the endpoint with an admin token and product ID const response = await request(test_server) .put(`/api/v1/products/update/${productId}`) .set("token", `Bearer ${adminToken}`) .send(updatedProductData); // Expectations expect(response.status).toBe(200); // Add more expectations based on your implementation }, 20000); // Test case: Should successfully delete a product by ID with admin access it("should successfully delete a product by ID with admin access", async () => { // Assuming you have a product ID for testing const productId = "6593e0a5451f5e47ce363e00"; // Make a request to the endpoint with an admin token and product ID const response = await request(test_server) .delete(`/api/v1/products/delete/${productId}`) .set("token", `Bearer ${adminToken}`); // Expectations expect(response.status).toBe(204); // Add more expectations based on your implementation }, 20000); }); afterAll(() => { test_server.close(); }); ``` In our routes files like `src/routes/userRoutes.ts`, our routes donot include the prefix `/api/v1` as its only included in the `src/index.ts` file where the `src/routes/index.ts` is imported and included. However,dont forget to add it while writing unit tests for all the routes as they will all return a 404 status code in its absence. 
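To make the prefix concrete, here is a minimal sketch of how the routers are typically mounted under `/api/v1` in `src/index.ts`. The import names below are assumptions based on the file layout described in this article, not the repository's exact code:

```typescript
// Hypothetical illustration of mounting the aggregated routers under /api/v1.
// The router module name is an assumption based on the structure described above.
import express from "express";
import routes from "./routes"; // src/routes/index.ts aggregates userRoutes, productRoutes, ...

const app = express();
app.use(express.json());

// Because every router is mounted under /api/v1 here, tests must request
// e.g. /api/v1/users/create rather than /users/create, otherwise they get a 404.
app.use("/api/v1", routes);

export default app;
```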
Our tests involve usage of our utility functions, JWT middlewares in `src/utils/jwtUtils.ts` so lets first complete their unit tests in a separate file: `src/__tests__/utils/jwtAuth.test.ts` with the following code: ```javascript import { verifyTokenAndAdmin, verifyTokenAndAuthorization, } from "../../utils/jwtUtils"; import dotenv from "dotenv"; dotenv.config(); // Mock token for testing purposes const mockTokenForNonAdmin = "YourMockTokenForNonAdmin"; const mockTokenForAdmin = "YourMockTokenForAdmin"; // You can also simulate an actual login process, // extract the token and store it in a variable like this: // const adminCredentials = { // email: "admin@example.com", // password: "adminpassword", // }; // Assuming you have a function to generate an authentication token // const getAuthToken = async () => { // const response = await supertest(test_server) // .post("/api/v1/users/login") // .send(adminCredentials); // return response.body.token; // }; // Get the admin authentication token (inside an async function or code block) // const authToken = await getAuthToken(); // Authorization Test Cases describe("Authorization Tests", () => { // Test Case: Verify Token and Authorization (valid token and authorization) it("should verify a valid JWT token and authorization", (done) => { const req: any = { headers: { token: `Bearer ${mockTokenForNonAdmin}`, }, user: { id: "mockUserId", isAdmin: false, }, params: { id: "mockUserId", }, }; const res: any = { status: () => res, // Mock status function json: (message: string) => { // Assert the message or other expectations expect(req.user).toBeDefined(); done(); }, }; const next = () => { // Should not reach here done.fail("Should not reach next middleware on valid token"); }; verifyTokenAndAuthorization(req, res, next); }); // Test Case: Verify Token and Authorization (unauthorized) it("should handle unauthorized access for authorization", (done) => { const req: any = { headers: { token: `Bearer ${mockTokenForNonAdmin}`, }, user: { id: "otherUserId", isAdmin: false, }, params: { id: "mockUserId", }, }; const res: any = { status: () => res, // Mock status function json: (message: string) => { // Assert the message or other expectations expect(message).toBe("You are not allowed to do that!"); done(); }, }; const next = () => { // Should not reach here done.fail("Should not reach next middleware on unauthorized access"); }; verifyTokenAndAuthorization(req, res, next); }); }); // Admin Access Test Cases describe("Admin Access Tests", () => { // Test Case: Verify Token and Admin Access (valid token and admin) it("should verify a valid JWT token and admin access", (done) => { const req: any = { headers: { token: `Bearer ${mockTokenForNonAdmin}`, }, user: { id: "mockUserId", isAdmin: true, }, params: { id: "mockUserId", }, }; const res: any = { status: () => res, // Mock status function json: (message: string) => { // Assert the message or other expectations expect(req.user).toBeDefined(); done(); }, }; const next = () => { // Should not reach here done.fail( "Should not reach next middleware on valid token and admin access" ); }; verifyTokenAndAdmin(req, res, next); }); // Test Case: Verify Token and Admin Access (non-admin) it("should handle non-admin access for admin authorization", (done) => { const req: any = { headers: { token: `Bearer ${mockTokenForNonAdmin}`, }, user: { id: "mockUserId", isAdmin: false, }, params: { id: "mockUserId", }, }; const res: any = { status: () => res, // Mock status function json: (message: string) => { // Assert the 
message or other expectations expect(message).toBe("You are not allowed to do that!"); done(); }, }; const next = () => { // Should not reach here done.fail("Should not reach next middleware on non-admin access"); }; verifyTokenAndAdmin(req, res, next); }); }); ``` This suite has robust test cases to verify the functionality of two crucial middleware functions, verifyTokenAndAuthorization and verifyTokenAndAdmin, and is an intergral part of our previous tests written in `src/__tests__/utils/jwtUtils.test.ts`. In the "Authorization Tests" section, we meticulously validate the proper functioning of token validation and authorization. We take pride in ensuring that our application accurately handles scenarios where a valid JWT token is presented along with appropriate authorization, as well as scenarios where unauthorized access is attempted, with clear error messages to guide the user. Moving on to the "Admin Access Tests" section, we continue to demonstrate our dedication to a secure user access control system. Our tests encompass scenarios where valid tokens with admin privileges are successfully processed, and where non-admin users encounter appropriate error messages when attempting admin-protected actions. By employing mocked request (req), response (res), and the next function, we simulate the middleware execution flow during testing. The use of the done callback underscores our commitment to thorough and reliable asynchronous testing methodologies. In essence, our test suite for these middleware functions serves as a testament to our unwavering commitment to delivering a secure and robust Express.js application, ensuring that token verification, authorization, and admin access control are implemented with the utmost precision and reliability. In our `User Routes` test suite, we ensure the robust functionality of various endpoints within our Express.js application. Leveraging the Supertest library, we validate the creation, login, retrieval, update, and deletion of user data. By employing both admin and non-admin tokens, we meticulously simulate scenarios that cover a spectrum of user access levels. Our tests underscore the meticulousness with which we handle user authentication, authorization, and access control, providing a solid foundation for the secure operation of our application. Furthermore, we prioritize cleanliness by incorporating cleanup procedures, inside `afterAll` method, to remove test users after execution, demonstrating our commitment to maintaining a pristine testing environment. With these tests, we confidently assure the reliability and security of our User Routes, fostering a robust user management system within our application. On the otherhand, our `Product Routes` suite orchestrates a meticulous examination of various product endpoints to guarantee the seamless operation of our Express.js application. The tests encompass the creation, retrieval, update, and deletion of products, ensuring their secure management. The initiation of a new product with valid admin credentials is rigorously validated, emphasizing our commitment to robust functionality. Furthermore, we ensure that product retrieval, whether by ID or not, transpires smoothly, even in the absence of authentication. For administrative tasks like updating and deleting products, our suite validates the integrity of these operations with precision. 
Each test is meticulously crafted to assess the application's response under different scenarios, solidifying our confidence in the resilience and reliability of our Product Routes. Following the suite's execution, we gracefully close the server, maintaining an orderly testing environment.

By now you should have an output in the terminal with this ending:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jgjwjsp9jwoziqe1ytc.png)

This indicates that all the tests succeeded. The `Snapshots: 0 total` message indicates that no Jest snapshots were created or updated during the test run. Snapshots in Jest are a way to automatically capture the output of a component or data structure and compare it against future changes to detect unintended changes. Jest snapshots are mostly used in front-end testing, especially for UI components, to capture the rendered output of components and help identify unintended changes. Since our backend API mostly deals with server-side logic, database interactions, and APIs, snapshots were less critical, which is why we don't have any. `Test Suites: 12 passed, 12 total` means we have 12 test suites and they all passed, and `Tests: 66 passed, 66 total` means we have 66 tests and they all passed.

As you can see, contrary to the common claim among developers that unit testing is hard, it is not so much hard as time-consuming, and, as previously discussed, that time is well worth spending.

## Overall Summary

Embarking on a three-phase voyage, our article delved into the realm of Node.js and TypeScript development, sculpting a masterpiece of efficiency and reliability. In the inaugural phase, we meticulously established a robust environment, configuring the stage with precision using tools like Jest and Supertest, among others. With the foundation laid, our journey traversed the expansive landscape of building a dynamic API with Express, navigating the intricacies of routing, controllers, and middleware. Finally, as our opus neared completion, we explored the symphony of unit testing, employing Jest and Supertest to ensure our code's fortitude. This trilogy encapsulates not just the process but the artistry behind crafting resilient, scalable, and well-tested Node.js applications in TypeScript, leaving developers equipped and inspired for their own epic odyssey in the world of software development.

## Conclusion

In the symphony of Node.js and TypeScript development, orchestrating harmony through unit testing with Jest and Supertest, coupled with the formidable MongoDB and Mongoose ORM in the grandeur of Express, transforms the software development journey into an enlightened expedition. Through meticulous testing, we've navigated the intricacies of our codebase, ensuring resilience and reliability. With Jest as our virtuoso conductor, and Supertest as the keen-eared observer, the seamless integration of MongoDB and Mongoose has rendered our Express applications not just functional, but robust and secure. As we conclude this odyssey through the landscape of unit testing, we embrace a future where our code stands as a testament to its own quality, a melody composed with precision and played with confidence in the ever-evolving symphony of software development. May your code remain harmonious, your tests unwavering, and your development journey a continual crescendo of success.

### Useful Links

[Github Repo](https://github.com/Abeinevincent/nodejs_unit_testing_guide)

[Twitter](https://twitter.com/abeinevincent)
abeinevincent
1,714,947
Rust Learning Note: async/await and Stream
This article is a summary of Chapter 4.11.4 of Rust Course (course.rs/) The Lifecycle of async If...
25,784
2024-01-02T16:37:24
https://raineyang.hashnode.dev/rust-learning-note-asyncawait-and-stream
rust, learning, asyncawait, stream
This article is a summary of Chapter 4.11.4 of Rust Course (course.rs/) **The Lifecycle of async** If an async function has reference type parameters, the parameter it refers to must life longer than the Future the function returns. For instance, the code below would throw an error since the variable x only lives inside function bad(), but the async function borrow_x, after being returned, lives in a larger scope than x. ```rust use std::future::Future; fn bad() -> impl Future<Output = u8> { let x = 5; borrow_x(&x) } async fn borrow_x(x: &u8) -> u8 {*x} ``` One solution to this problem is to put the variables the function refer to along with the function inside a single async block. In this way, the function and the variables it refers to always exist in the same scope, and thus have the same lifecycle. ```rust use std::future::Future; async fn borrow_x(x: &u8) -> u8 { *x } fn good() -> impl Future<Output = u8> { async { let x = 5; borrow_x(&x).await } } ``` Another solution is to use **move** keyword to transfer the ownership of the variable into the async function, which is similar to **move** in closures. However, this will no longer allow us to use the moved variable anywhere else. ```rust fn move_block() -> impl Future<Output = ()> { let my_string = "foo".to_string(); async move { println!("{my_string}"); } } ``` **.await in Multithreading Executor** Variable in async blocks need to be passed among multiple threads (since async/await is a M:N threading model), so data types that do not implement Send and Sync traits, like Rc, RefCell, cannot be used in async blocks. Also, the Mutex in std::sync::Mutex is not safe to use in async/await, since it is possible that one .await function that acquires a lock is suspended before releasing the lock, and another function currently processed by the thread tries the acquire the lock, leading to a deadlock. We should use the Mutex in **futures::lock::Mutex** instead. **Stream Processing** **Stream** trait is similar to Future trait, except that it can return multiple Future objects. Stream is similar to Iterator. ```rust trait Stream { type Item; fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>; } ``` One use of Stream is the Receiver of message channel. The Stream receives a Some(val) when the sender sends a message, and None when the channel is closed. In the code below, Stream is implicitly used in the receiver (rx). The method call **rx.next()** is invoking the Stream trait implementation for the receiver. ```rust async fn send_recv() { const BUFFER_SIZE: usize = 10; let (mut tx, mut rx) = mpsc::channel::<i32>(BUFFER_SIZE); tx.send(1).await.unwrap(); tx.send(2).await.unwrap(); drop(tx); assert_eq!(Some(1), rx.next().await); assert_eq!(Some(2), rx.next().await); assert_eq!(None, rx.next().await); } ``` Similar to an Iterator, we can also iterate a Stream and use methods like map, filter, and fold. However, we cannot use a for loop to loop through a Stream. Instead, we can use **while let** for looping. ```rust async fn sum_with_next(mut stream: Pin<&mut dyn Stream<Item = i32>>) -> i32 { use futures::stream::streamExt; let mut sum = 0; while let Some(item) = stream.next().await { sum += item; } sum } async fn sum_with_try_next( mut stream: Pin<&mut dyn Stream<Item = Result<i32, io::Error>>> ) -> Result<i32, io::Error> { use futures::stream::TryStreamExt; let mut sum = 0; while let Some(item) = stream.try_next().await? 
{ sum += item; } Ok(sum) } ``` However, the approach above processes only one value at a time and blocks with await while waiting for the next value, which defeats the purpose of concurrent programming. We can use **for_each_concurrent** or **try_for_each_concurrent** to process multiple values from a Stream concurrently. ```rust async fn jump_around( mut stream: Pin<&mut dyn Stream<Item = Result<u8, io::Error>>> ) -> Result<(), io::Error> { use futures::stream::TryStreamExt; const MAX_CONCURRENT_JUMPERS: usize = 100; stream.try_for_each_concurrent(MAX_CONCURRENT_JUMPERS, |num| async move { jump_n_times(num).await?; report_n_jumps(num).await?; Ok(()) }).await?; Ok(()) } ``` In this example, the try_for_each_concurrent method is used to apply an asynchronous closure to each item in the stream concurrently. Inside the closure, we call two custom functions (jump_n_times and report_n_jumps) and use `?` to propagate errors.
raineyanguoft
1,715,241
Avoid Getting Stuck Inside of Your Garage - Garage Door Company in Pittsburgh, PA and Surrounding Areas
People ignore their garage door until it malfunctions. The largest moving object in your house, the...
0
2024-01-02T21:06:52
https://dev.to/liveyourblog1/avoid-getting-stuck-inside-of-your-garage-garage-door-company-in-pittsburgh-pa-and-surrounding-areas-k2e
news
People ignore their garage door until it malfunctions. The largest moving object in your house, the garage door, may be frightful if it's broken and won't open. Usually, when you need your garage door to function the most is when you discover that it has turned on you. The worst-case scenario is that your garage door won't fully open as you're heading to work. It might be a bad connection to your garage door opener, perhaps. Although your automatic garage door opener is not the key actor in this scenario, you may visually check for any damaged connections. When a garage door spring breaks, your car may become trapped in the garage, forcing you to miss work or a doctor's appointment or even locking you out of your home. Due to the weight of garage doors and the inability of garage door openers to lift them without the assistance of springs, balancing systems are used. Most contemporary overhead doors employ torsion springs. The door normally lifts with the opener only a few inches before halting when the spring snaps. The highest stress and strain are also held by torsion springs. This is a problem if you've parked your car in the garage because you're probably finding out as you're attempting to depart. Don’t wait for your garage door spring to break, replace it before it breaks so you don’t get stuck. Consider asking your service professional for “high cycle springs”. Springs are made up of different wire sizes. High-cycle springs will last longer, hence the name. To get a high-cycle spring, you need to go with a larger wire and a longer spring. If you complete over three cycles in a day, switching over to these long-life garage door springs is ideal. Contact your [garage door service companies in pittsburgh PA](https://gfixes.com/garage-door-company/) your area. Before the repair or installation is finished, a garage door service companies near you performs a thorough examination and a number of tests. Even if you take proper care of your garage door, there may still be external forces at play. What happens if the electricity goes out? Not to worry. The procedures below will help you manually open and close your garage door: Spring problems can easily result in the garage door breaking open unexpectedly, which might cause serious injury. Spring replacement and repairs provide the longest possible life for your family's safety. To detach the door from the garage door opener, pull down on the emergency wire. The majority of garage doors include a red rope with a handle. The manual release will free the trolley from the connection on the rail after you pull on the red rope. After that, the garage door will be operated manually. To completely open the garage door, carefully lift it straight up. Before leaving the garage door alone, be sure it will stay open. Please exercise extreme caution, and if there are any young children around, make sure they keep out of the path. If your garage door malfunctions, we are always available to help you with garage door installation and repair services around you.
liveyourblog1
1,715,455
Mastering Advanced Spring Data Specifications within Spring Data Repositories for Complex Queries in Spring Boot
Greetings, fellow developers! Today, we're delving into the intricate world of Spring Data...
0
2024-01-05T19:00:00
https://dev.to/dinaessam/mastering-advanced-spring-data-specifications-within-spring-data-repositories-for-complex-queries-in-spring-boot-28m7
java, springboot
Greetings, fellow developers! Today, we're delving into the intricate world of Spring Data Specifications within the Spring Boot framework. Join us on this exploration as we uncover the prowess of Specifications in crafting advanced and complex queries within Spring Data Repositories. Let's dive deeper into the realm of advanced querying! 🌟🔍 ## Understanding Advanced Spring Data Specifications in Spring Boot ### Unveiling the Power of Specifications Spring Data Specifications offer a robust toolset allowing developers to construct complex and parameterized query predicates, facilitating sophisticated data retrieval and manipulation. ### Key Features of Advanced Spring Data Specifications Integration: #### 1. Complex Predicate Composition: Specifications empower developers to compose intricate query predicates combining multiple criteria and logical operations. #### 2. Parameterized Specifications: Parameterized Specifications provide flexibility by enabling the creation of reusable and adaptable query predicates for various scenarios. #### 3. Query Optimization: Specifications facilitate optimized query execution, ensuring efficient data retrieval by utilizing database indexes and other optimization techniques. ## Implementing Complex Queries using Advanced Spring Data Specifications in Spring Boot ### Example: Crafting Specifications for Advanced Queries Let's create advanced Specifications for a product-related query scenario: ```java public class ProductSpecifications { public static Specification<Product> hasPriceGreaterThan(double price) { return (root, query, criteriaBuilder) -> criteriaBuilder.greaterThan(root.get("price"), price); } public static Specification<Product> hasCategory(String category) { return (root, query, criteriaBuilder) -> criteriaBuilder.equal(root.get("category"), category); } public static Specification<Product> hasPriceGreaterThanAndCategory(double price, String category) { return Specification.where(hasPriceGreaterThan(price)).and(hasCategory(category)); } } ``` ## Integrating Advanced Specifications in Spring Data Repository Integrate the complex Specifications into the Spring Data Repository for product data retrieval: ```java public interface ProductRepository extends JpaRepository<Product, Long>, JpaSpecificationExecutor<Product> { // Spring Data methods will be inherited along with custom specifications } ``` ## Leveraging Advanced Specifications in a Service Utilize the advanced Specifications within a Spring Boot service: ```java @Service public class ProductService { private final ProductRepository productRepository; public ProductService(ProductRepository productRepository) { this.productRepository = productRepository; } public List<Product> getProductsByPriceAndCategory(double price, String category) { Specification<Product> spec = ProductSpecifications.hasPriceGreaterThanAndCategory(price, category); return productRepository.findAll(spec); } // Additional methods utilizing complex Specifications within the service } ``` ## Conclusion: Harnessing Advanced Spring Data Specifications for Complex Queries in Spring Boot Advanced Spring Data Specifications provide a powerful mechanism for constructing intricate and optimized query predicates within Spring Data Repositories. By leveraging these specifications, developers can effortlessly craft complex queries tailored to specific criteria, enhancing data retrieval and manipulation in Spring Boot applications. 
Happy coding, and may your advanced Specifications simplify and optimize your complex querying endeavors! 🚀🌐💻
dinaessam
1,715,658
Hello dev.to!
Hi! After signing up, I am excited to write my first blog post with the goal of enhancing my skills...
0
2024-01-03T09:03:02
https://dev.to/ruccess/hello-devto-428l
beginners, webdev
Hi! After signing up, I am excited to write my first blog post with the goal of enhancing my skills as a developer. This is my first time writing on dev.to... As someone who has never settled on a blog because I couldn't find one that fit me, I'm not sure whether this place will suit me either, but if I keep writing diligently, it should work out, right?
ruccess
1,715,699
AI powered video summarizer with Amazon Bedrock and Anthropic’s Claude
At times, I find myself wanting to quickly get a summary of a video or capture the key points of a...
0
2024-01-03T10:55:25
https://levelup.gitconnected.com/ai-powered-video-summarizer-with-amazon-bedrock-and-anthropics-claude-9f1832f397dc
bedrock, claude, generativeai, serverless
![Photo by [Andy Benham](https://unsplash.com/@benham3160?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)](https://cdn-images-1.medium.com/max/6942/0*u1aoh6IkniSqBqg8)

At times, I find myself wanting to quickly get a summary of a video or capture the key points of a tech talk. Thanks to the capabilities of generative AI, achieving this is entirely possible with minimal effort. In this article, I'll walk you through the process of creating a service that summarizes YouTube videos based on their transcripts and generates audio from these summaries.

![AI powered youtube video summarizer](https://cdn-images-1.medium.com/max/5244/1*o2c9LbDWn59VZaz-GfscIA.png)

We'll leverage Anthropic's Claude 2.1 foundation model through Amazon Bedrock for summary generation, and Amazon Polly to synthesize speech from these summaries.

## Solution overview

I will use Step Functions to orchestrate the different steps involved in the summary and audio generation:

![AI powered youtube video summarizer architecture](https://cdn-images-1.medium.com/max/4494/1*vW39v-yN0WNZJoHiOVfHwA.png)

🔍 Let's break this down:

* The **Get Video Transcript** function retrieves the transcript from a specified YouTube video URL. Upon successful retrieval, the transcript is stored in an S3 bucket, ready for processing in the next step.
* The **Generate Model Parameters** function retrieves the transcript from the bucket and generates the prompt and inference parameters specific to Anthropic's Claude v2 model. These parameters are then stored in the bucket for use by the Bedrock API in the subsequent step.
* Invoking the Bedrock API is achieved through Step Functions' AWS SDK integration, enabling the execution of the model inference with inputs stored in the bucket. This step generates a structured JSON containing the summary.
* **Generate audio from summary** relies on Amazon Polly to perform speech synthesis from the summary produced in the previous step. This step returns the final output containing the video summary in text format, as well as a presigned URL for the generated audio file.
* The bucket serves as state storage used across all the steps of the state machine. In fact, we don't know the size of the generated video transcript upfront; it might hit the Step Functions payload size limit of 256 KB for some lengthy videos.

### On using Anthropic's Claude 2.1

At the time of writing, the Claude 2.1 model supports 200K tokens, an estimated word count of 150K. It also provides good accuracy over long documents, making it well-suited for summarizing lengthy video transcripts.

## TL;DR

You will find the complete source code here 👇

[**GitHub - ziedbentahar/yt-video-summarizer-with-bedrock**](https://github.com/ziedbentahar/yt-video-summarizer-with-bedrock)

I will use Node.js, TypeScript, and CDK for IaC.

## Solution details

### 1- Enabling Anthropic's Claude v2 in your account

Amazon Bedrock offers a range of foundational models, including Amazon Titan, Anthropic's Claude, Meta Llama 2, etc., which are accessible through Bedrock APIs. By default, these foundational models are not enabled; they must be enabled through the console before use. We'll request access to Anthropic's Claude models. 
But first, we'll need to submit use case details:

![Request Anthropic's Claude access](https://cdn-images-1.medium.com/max/3200/1*WwRSrEGHnqbZ2CCOh-kLeA.png)

### 2- Getting transcripts from YouTube videos

I will rely on [this lib](https://github.com/Kakulukian/youtube-transcript) for the video transcript extraction (it feels like a cheat code 😉); in fact, this library makes use of an unofficial YouTube API without relying on a headless Chrome solution. For now, it yields good results on several YouTube videos, but I might explore a more robust solution in the future:

{% gist https://gist.github.com/ziedbentahar/9ecc91f8f6c6dbea0cb443b93d54e8e8.js %}

The extracted transcript is then stored in the S3 bucket using `${requestId}/transcript` as a key. You can find the code for this lambda function [here](https://github.com/ziedbentahar/yt-video-summarizer-with-bedrock/blob/main/src/lambda-handlers/get-youtube-video-transcript.ts)

### 3- Finding a suitable prompt and generating model inference parameters

At the time of writing, Bedrock only supports Claude's Text Completions API. Prompts must be wrapped in `\n\nHuman:` and `\n\nAssistant:` markers to let Claude understand the conversation context.

Here is the prompt; I find that it produces good results for our use case:

```
You are a video transcript summarizer.
Summarize this transcript in a third person point of view in 10 sentences.
Identify the speakers and the main topics of the transcript and add them in the output as well.
Do not add or invent speaker names if you are not able to identify them.
Please output the summary in JSON format conforming to this JSON schema:

{
  "type": "object",
  "properties": {
    "speakers": {
      "type": "array",
      "items": { "type": "string" }
    },
    "topics": {
      "type": "string"
    },
    "summary": {
      "type": "array",
      "items": { "type": "string" }
    }
  }
}

<transcript>{{transcript}}</transcript>
```

🤖 **Helping Claude produce good results**:

* To clearly mark the transcript to summarize, we use **<transcript/>** XML tags. [Claude will specifically focus](https://docs.anthropic.com/claude/docs/constructing-a-prompt#mark-different-parts-of-the-prompt) on the structure encapsulated by these XML tags. I will be substituting the **`{{transcript}}`** placeholder with the actual video transcript.

* To assist Claude in generating a reliable JSON output format, I include in the prompt the JSON schema that needs to be adhered to.

* Finally, I also need to inform Claude that I want only a concise JSON response without unnecessary chattiness, meaning without a preamble and postscript around the JSON payload:

```
\n\nHuman:{{prompt}}\n\nAssistant:{
```

Note that the full prompt ends with a trailing **`{`**

As mentioned in the section above, we will store this generated prompt as well as the model parameters in the bucket so that they can be used as an input to the Bedrock API:

```
const modelParameters = {
    prompt,
    max_tokens_to_sample: MAX_TOKENS_TO_SAMPLE,
    top_k: 250,
    top_p: 1,
    temperature: 0.2,
    stop_sequences: ["Human:"],
    anthropic_version: "bedrock-2023-05-31",
};
```

You can follow [this link](https://github.com/ziedbentahar/yt-video-summarizer-with-bedrock/blob/main/src/lambda-handlers/generate-model-parameters.ts) for the full code of the generate-model-parameters lambda function.

### 4- Invoking the Claude model

In this step, we'll avoid writing a custom lambda function to invoke the Bedrock API. Instead, we'll use Step Functions' direct SDK integration. 
This state loads the model inference parameters that were generated in the previous step from the bucket:

{% gist https://gist.github.com/ziedbentahar/9bec10d5b0f77005e9785749e6723a30.js %}

**☝️ Note:** As we instructed Claude to generate the response in JSON format, the completion API response is missing a leading **{**, as Claude outputs the rest of the requested JSON schema. We use [intrinsic functions](https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-intrinsic-functions.html) on the state's ResultSelector to add the missing opening curly brace and to format the state output into a well-formed JSON payload:

```
ResultSelector: {
    "id.$": "$$.Execution.Name",
    "summaryTaskResult.$": "States.StringToJson(States.Format('\\{{}', $.Body.completion))",
}
```

I have to admit it is not ideal, but it helps us get by without writing a custom Lambda function.

### 5- Generating audio from the video summary

This step is heavily inspired by this [previous blog post](https://levelup.gitconnected.com/building-a-serverless-text-to-speech-application-with-amazon-polly-step-functions-and-websocket-56e9871730b7). Amazon Polly generates the audio from the video summary:

{% gist https://gist.github.com/ziedbentahar/3139fe3d7b4f42955fa8ec1183daed44.js %}

Here are the details of the synthesize function:

{% gist https://gist.github.com/ziedbentahar/1502dd16a662c71e7496e371b24d3303.js %}

Once the audio is generated, we store it in the S3 bucket and generate a presigned URL so it can be downloaded afterwards.

☝️ **On language detection:** In this example, I am not performing language detection; by default, I am assuming that the video is in English. You can find out how to perform such a process in speech synthesis in [my previous article](https://medium.com/gitconnected/building-a-serverless-text-to-speech-application-with-amazon-polly-step-functions-and-websocket-56e9871730b7). Alternatively, we can also leverage the Claude model's capabilities to detect the language of the transcript.

### 6- Defining the state machine

Alright, let's put it all together and take a look at the CDK definition of the state machine:

{% gist https://gist.github.com/ziedbentahar/5129aee60bd39a79253c669da1b9222d.js %}

In order to be able to invoke the Bedrock API, we'll need to add this policy to the workflow's role (and it's important to remember to grant the state machine read and write permissions on the S3 bucket):

{% gist https://gist.github.com/ziedbentahar/d113dd27b43f6d4e694e2ed240b6dc0b.js %}

## Wrapping up

I find creating generative-AI-based applications to be a fun exercise; I am always impressed by how quickly we can develop such applications by combining serverless and gen AI.

Certainly, there is room for improvement to make this solution production-grade. This workflow can be integrated into a larger process, allowing the video summary to be sent asynchronously to a client, and let's not forget robust error handling.

Follow [this link](https://github.com/ziedbentahar/yt-video-summarizer-with-bedrock/tree/main) to get the source code for this article.

Thanks for reading, and I hope you enjoyed it!

## Further readings

[**Put words in Claude's mouth**](https://docs.anthropic.com/claude/docs/put-words-in-claudes-mouth)

[**Anthropic Claude models**](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html)

[**What is Amazon Bedrock?**](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html)
zied
1,715,742
GIT Workflow
Terdapat empat metode yang saya ketahui dalam penggunaan Git, beberapa diantaranya adalah sebagai...
0
2024-01-03T10:23:19
https://dev.to/lanamaulanna/git-workflow-kda
There are four Git workflow methods that I know of, including the following:

1. Centralized Workflow
2. Feature Branch Workflow
3. Gitflow Workflow
4. Forking Workflow

When I first joined the company, the Git workflow in use was the Centralized Workflow. A few months later, as the team grew, code conflicts started happening more and more often, so we tried the Gitflow Workflow. Long story short, the Gitflow Workflow helped a lot in optimizing how we used Git: code conflicts decreased, unfinished features no longer slipped into production, and so on.

Then one day it started happening again: frequent code conflicts, and unfinished work ending up in production. Clearly there is nothing wrong with the Gitflow Workflow itself, even if some people comment that it is not good, that it is complicated, and so on. The Gitflow tooling wraps Git commands with the goal of making it easier to practice the Gitflow Workflow, yet some people still get confused about how to use it. There are already plenty of articles covering the Gitflow Workflow; unfortunately, I still saw several team members struggling to follow the concept. 😓

Long story short, rather than becoming even less productive, I finally tried to redesign our Git workflow. The workflow I came up with is similar to the Feature Branch Workflow. I simply tried to explain it in my own words and visualize it so that it is easier to digest, print, and stick on the wall so nobody forgets. 😎

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rgto9jat4ks85uhm2tnp.png)What You May Do 👍

The motivation behind this workflow is:

1. The master branch must always be ready and safe to deploy
2. Reduce the source code gap between branches

The explanation of the diagram above is as follows:

1. When starting a piece of work, create a new branch from the master branch. There is, of course, a branch naming convention that I designed as well; that may be covered in another article
2. It is called the working branch because the active work happens on this branch, and the branch is the responsibility of its creator; it must not be shared with others unless there is joint code collaboration
3. When the work is done, merge the working branch into the development branch. From there, CI/CD usually takes over to pull the code onto the development server
4. If testing or UAT in the development environment is OK, the next step is to merge the working branch into the staging branch. The staging environment is usually used for UAT with stakeholders or key users before releasing to the production environment
5. Once UAT with the stakeholders is considered PASSED, or accepted, merge the working branch into the master branch. Through the CI/CD process, the code we worked on becomes available to everyone

No pull requests 😲? Then when and how is code review done? That will also be covered in another article 😁, including the CI/CD setup that creates Git tags. Long story short, that is what is allowed.

It does not stop there: to make the Git workflow even clearer, I also defined "NOT ALLOWED" rules. The diagram below explains what must not be done. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/32lgvppg1gzzp50ogo8p.png)What You Must Not Do, and the Exceptions 👎

The essence of the diagram above is to keep each branch safe from the changes happening in the working branch and the development branch. Imagine if the development branch were merged into the staging branch: the development branch contains code that is untested or even half-finished, while the staging branch is used for UAT or third-party integration. The result would obviously be a mess, and even more so if it were merged straight into the master branch 😲.

You may also notice the "Friends Working Branch". It means that when we are assigned a feature that is worked on by more than one person, collaboration can happen on the working branch. This must be supervised so that the code gap between branches does not grow too wide.

It has been almost 1.5 years, and this Git workflow has become one of the factors behind the team's stable productivity. 😎
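As a concrete, minimal command sketch of the allowed flow described above (the branch name `feature/checkout-page` and the remote setup are purely illustrative; adapt them to your own naming convention):

```
# Start new work from an up-to-date master (the working branch, owned by its creator)
git checkout master
git pull origin master
git checkout -b feature/checkout-page

# ... commit work on the working branch ...

# 1) Merge the working branch into development for dev testing
git checkout development
git pull origin development
git merge --no-ff feature/checkout-page
git push origin development

# 2) After dev testing passes, merge the same working branch into staging for UAT
git checkout staging
git pull origin staging
git merge --no-ff feature/checkout-page
git push origin staging

# 3) After UAT is accepted, merge the working branch into master for release
git checkout master
git pull origin master
git merge --no-ff feature/checkout-page
git push origin master
```

The forbidden moves from the second diagram are simply the merges that never appear here: development and staging are never merged into each other or into master; only the working branch moves forward.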
lanamaulanna
1,715,837
How to Pay Gas Fees for Users of Your dApp: Meta Transactions on Tezos
This tutorial introduces Meta transactions on Tezos and covers how to use the Gas Station API to...
0
2024-01-03T12:06:20
https://dev.to/debosthefirst/how-to-pay-gas-fees-for-users-of-your-dapp-meta-transactions-on-tezos-po5
tezos, web3, gas, metatransactions
This tutorial introduces meta transactions on Tezos and covers how to use the Gas Station API to cover gas fees for dApp users.

To onboard the next billion users on Web3, simplifying user onboarding and improving the user experience of dApps (decentralized applications) is extremely important.

![Paying gas fees for your users](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2s1ojtvjkxn3rcg6hac.jpg)

For many users, the mindset shift from using Web2 applications to using Web3 dApps can be a steep climb. As such, dApp developers are always on the lookout for new ways to improve the UX for users in a way that abstracts away some of the complexities of using decentralized applications. 

One example of a major barrier to onboarding users to a dApp is the need to pay gas fees to interact with the dApp. 

![gas fees flow on tezos](https://cms.tzstaging.com/assets/decfe196-ee20-4784-889f-8bbc955ff0cd)

Users of dApps built on Tezos will need to open a wallet and acquire XTZ (tez) to start using your dApp, which can be a barrier to user onboarding. To get over this barrier, the idea of meta transactions was developed. 

## What are Meta Transactions?

**Meta transactions** allow dApp developers to improve the user experience of their applications by removing the need for users to hold tokens for paying gas fees. This is done by separating the transaction's sender (the user) from the gas payer. The gas payer (usually another smart contract wallet) acts as the relayer for the transaction. 

![meta transactions on tezos](https://cms.tzstaging.com/assets/c33baa6b-9909-4cc5-abda-e813c414720f)

*Let's look at why this is important:*

In Web2, for example, an app like Instagram allows users to post pictures to their feed without the need to pay a fee each time they post. Users can also like posts, leave comments, send DMs, and more without the need to pay fees for each operation. Meta transactions allow Web3 dApps to provide a similar experience to users, making dApps more user-friendly and improving the overall experience. 

**There are 2 types of meta transactions:**

**Gasless transactions**: Gasless or zero-gas transactions allow users to interact with your dApp without the need to pay gas fees.

**Gas abstraction**: Gas abstraction allows users of your dApp to pay gas fees using a token other than the chain's native token. Any token supported by the relayer can be used to cover the gas fees for the transaction. 

## TZIP-17 and Meta Transactions

[TZIP-17](https://medium.com/tqtezos/tzip-17-permit-497afd9b0e9e) (the permit proposal) introduced the standard for account abstraction. This standard allows pre-signing: a way to have transactions be signed and submitted separately. Read more [here](https://medium.com/tqtezos/tzip-17-permit-497afd9b0e9e). 

**TZIP-17** is the standard that allows meta transactions to work on Tezos. With this standard, we can have a relayer submit a user's pre-signed (meta) transaction and pay the tez fees on their behalf. 

**Gas Stations**

Gas stations are a good example of systems that can act as a relayer for transactions on a dApp. A gas station, as the name implies, is simply a system that allows you to store tez (or any token supported by the station) to be used to pay for gas on behalf of your dApp users. 

In the next part of this tutorial, we'll build a simple dApp that interacts with a gas station. 
## Using the Gas Station API With everything we've covered in the previous sections, you should now be familiar with these concepts; - Relayer - Gas Abstraction - Gasless transactions - Meta Transactions - TZIP-17 proposal - Gas stations Now, we're ready to take all of this knowledge and build something in the wild. We'll be using the Gas Station API from Marigold to build a simple voting dApp that implements **gasless transactions**. If you have never built a dApp on Tezos before, follow [this](https://spotlight.tezos.com/how-to-build-a-voting-dapp-on-tezos-part-1/) tutorial to learn more. We'll be building on the [voting dApp](https://spotlight.tezos.com/how-to-build-a-voting-dapp-on-tezos-part-1/) example in that tutorial here.  The full code for the Voting dApp's front end can be found [here](https://github.com/onedebos/ballon-tz-or). **Let's get started!** Head over to the Marigold [Gas Station API](https://gas-station.gcp-npr.marigold.dev) environment and connect your wallet to the application using the Connect Wallet button in the top right. Connect your wallet on the Ghostnet network to the application.   Next, we'll need to **add credits** to the gas station to be used to pay for transactions for users of our dApp. Click on **My credits** in the menu. You should now be able to see your wallet balance. Enter the amount of tez you'd like to use within the gas station and click Add ꜩ to add the tez to your account. You can get test credits by using a faucet.  Now, we need to make some changes to our [Voting Smart Contract](https://spotlight.tezos.com/how-to-build-a-voting-dapp-on-tezos-part-1/) for it to still work correctly with our dApp and Gas station. We'll need to   1. Do away with `sp.sender` because the transaction's `sender` will always be the relayer (Gas Station API). As such, we'd need to get the actual `sender/user` from the transaction parameters - more on this later.  2. Get the transaction's sender from `params` instead. Make the following changes to the `increase_votes` entrypoint.  ```python @sp.entrypoint def increase_votes(self, params): assert not self.data.votersWalletAddresses.contains(params.sender), "YouAlreadyVoted" assert self.data.players.contains(params.playerId), "PlayerIDNotFound" self.data.players[params.playerId].votes += 1 self.data.votersWalletAddresses.add(params.sender) ``` We'll also need to change up our test scenarios to accommodate these changes. Your test scenarios should now look like the below;  ```python # Scenario 1: Increase votes when playerId Exists contract.increase_votes(playerId=2, sender=bob.address).run() scenario.verify(contract.data.players[2].votes == 1) # Scenario 2: Increase votes when playerId Exists contract.increase_votes(playerId=2, sender=charlie.address).run() scenario.verify(contract.data.players[2].votes == 2) # Scenario 3: Fail if User already voted contract.increase_votes(playerId=2, sender=charlie.address).run(valid=False, exception="YouAlreadyVoted" ) # Scenario 4: Fail if playerID does not exist contract.increase_votes(playerId=6, sender=adebola.address).run(valid=False, exception="PlayerIDNotFound") ``` Now, go ahead to deploy your smart contract.  
See the full code for the updated smart contract [here](https://medium.com/r/?url=https%3A%2F%2Fsmartpy.io%2Fide%3Fcode%3DeJy1Vdtq3DAQffdXCPchNl1M4qR9CBgacqGhTSkNbSkhmFlLmwhkyUjaNKb03zuSZWe93m22kPpB2NLM0ZnbMa8bpS0xNWjbtAQMMU0UvTNNViu6FCyibEFq4DJJj6OI4NMIaJk2pW0bdkzcSgri7KG58QbuwW8u7Wz1W7NKaZoMW%406RUAdvYzWXd7PRactAbz99UJaZ7liCHY5S_3brV79UAowh35RFhASNT5W0GiqL8fQ%40Lsay5JLbskwME4tZH%40XMX6PNdxCC2RNKNTOGmRVfHx26ZBQsZMGto1WBsckAtJq3dIv7xssQbOP%40Fgy6rOsWfeI4igYTV1CGcbeNwrqMAuey0gwMK31C%40_BBQ70eJuaRYa9IZZ%40jnFWYZGwak3RAmWGSMp3OSPxDLU8EXkhbLAmj8aYbJvmc4HX7l9Qhfu7ezz4pe6GWcg1ygnWzBnGbdZ30uiAH_1KWDChdC2%40nfPv6%40DRv7SJvUvgCOiC8qESGNnHjUsRdK8epH02_H4BA8CqMk9suoaowHzbZ8wd7XdPN1XyTCW4Hg%40oetOAbccJRMATK5krAxgu7o2DoF1MxCZqrFfN%40K3H6ko7UBa1%40Dck5OF7Rjy4HH7mSTJArrASPZ14pijjfzw_jbmJNsZ8%40yUU%40BTjHQOQdeQ8gAFvmWYjDKcSHVnCQ5GoOTcOeRziaIlwDxYRcgdzB_c3U_VRzY5GCIl%40UBEHV31F%40r2hi0EBMs8t9FtRx0ConucwmN75xXP%405lr9N03ElcWJ6oK54r8h1f4Y1uwzCEpT65z2TpB86cv6I3M2ITLamRL1tkSMhP2AFdmnPJs30En9KI0bZA9N80SYD4mju837SC5z0dMI4_y%40Mw8i8DOt8yhr78gK4IHxBvqJ2ogZ4bfUR0Bch%404BNQIsLEIbNCHusWGNx%40IrNQj4lePREMNx4RqjCeNxvhLmc7sby7cAyiMtuLCc_hyBJ0R8zXc7I). ## Connecting your Smart Contract to the Gas Station Back in the Gas Station UI, navigate to contracts in the menu. Copy and paste your smart contract address in, then click fetch entrypoints. Give your contract a name you can easily remember.  Now, you can toggle on the entrypoints you'll like to use with the Gas Station API. For this example, we'll only use the `increase_votes` entrypoint.  ![increase_votes entrypoint](https://cms.tzstaging.com/assets/9e1904ef-3d43-4923-b5f8-cc99d500dbfa) Click **Add**. ### Connecting the Frontend to the Gas Station With our Smart contract deployed, let's head to our application's Frontend to make some changes. The original code for this application is here. We'll be modifying the following files; src/pages/index.js and src/helpers/constants.js . We'll also be installing Axios which will allow us to connect to APIs/endpoints like the Gas station API that exists outside our application. Let's install axios.  `npm i axios` Next, update the `src/helpers/constants.js` file. We'll need to update the `CONTRACT_ADDRESS` and `GAS_STATION_API_URL`  ```js const CONTRACT_ADDRESS = "KT1Aa2YS4RgQjDPDw7YQHpkXgtHvYwC7SYqo"; const GAS_STATION_API_URL = "https://gas-station-api.gcp-npr.marigold.dev"; ``` Don't forget to also export GAS_STATION_API_URL  Next, head into `src/pages/index.js`. Import the GAS_STATION_URL with the rest of your constants like below; ```js import { CONTRACT_ADDRESS, RPC_URL, GAS_STATION_API_URL, } from "@/helpers/constants"; ``` We'll also need to keep tabs on the User's Wallet Address userAddress so we can send this alongside the rest of the transaction parameters to the API. To do that, we'll use React's useState hook. Your hooks section should now look like below with `const [userAddress, setUserAddress] = useState(null)` added. 
```js const [players, setPlayers] = useState([]); const [reload, setReload] = useState(false); const [message, setMessage] = useState(""); const [userAddress, setUserAddress] = useState(null); ``` Now, we'll update our `connectWallet` function to store the userAddress state ```js const connectWallet = async () => { setMessage(""); try { const options = { name: "Ballon tz'or", network: { type: "ghostnet" }, }; const wallet = new BeaconWallet(options); walletRef.current = wallet; await wallet.requestPermissions(); Tezos.setProvider({ wallet: walletRef.current }); const userWalletAddress = await wallet.getPKH(); setUserAddress(userWalletAddress); } catch (error) { console.error(error); setMessage(error.message); } }; ``` We'll also create a button for the user to connect their wallet when the application loads. This button will run the connectWallet function when clicked. This allows us to make sure we have stored the userAddress before we attempt to send a transaction.  ```js <button className="mt-4 rounded-full bg-green-500 p-3 hover:bg-green-700 transition-all ease-in-out" onClick={() => connectWallet()} > Connect Wallet </button> ``` Finally, we'll add a `callGasStation` function to replace our `votePlayer` function. i.e when a user clicks on Vote, our dApp will call `callGasStation` instead of `votePlayer`  You'll notice that the `callGasStation` function bypasses the need to have the user's wallet load up, show them the gas fees and have them approve and sign the transaction. Instead, all of that is handled by the gas station (relayer). ```js const callGasStation = async (playerId) => { try { console.log("in call Gas station"); const contract = await Tezos.wallet.at(CONTRACT_ADDRESS); const op = await contract.methods .increase_votes(playerId, userAddress) .toTransferParams(); console.log("address", userAddress); const response = await axios.post(GAS_STATION_API_URL + "/operation", { sender: userAddress, operations: [{ destination: op.to, parameters: op.parameter }], }); console.log("Gas Station response:", response.data); } catch (error) { console.log(error.message); setMessage(error.message) } }; ``` Let's walk through the code snippet above; We connect to the contract using Taquito as usual.  ```js const contract = await Tezos.wallet.at(CONTRACT_ADDRESS); const op = await contract.methods .increase_votes(playerId, userAddress) .toTransferParams(); ``` We tap into the `increase_votes` entrypoint that has been updated to take 2 parameters of `playerId` and `sender `- which is the userAddress . Then, we call the `toTransferParams()` method on it in order for us to access the operations object returned by the method. The operations object contains a number of objects including the `to` and `parameter` which we make use of when calling the Gas Station API. You can find out what the parameters expected by the Gas Station API are by reading the Swagger Docs/API documentation here. From the docs, we can see that the `/operation` API expects a JSON object of sender and operation in the form below; ```js { "sender": "string", "operations": [ {} ] } ``` So, we pass `userAddress` and the values obtained from op to the API. ```js const response = await axios.post(GAS_STATION_API_URL + "/operation", { sender: userAddress, operations: [{ destination: op.to, parameters: op.parameter }], }); ``` See the full code [here](https://github.com/onedebos/ballon-tz-or/blob/main/src/pages/gasless.js). Now, we can run our application to check that everything works correctly.  
We can also head to a smart contract explorer like BetterCallDev to check that our smart contract works correctly. For the smart contract used in this example, you can see use the [BCD explorer](https://better-call.dev/ghostnet/KT1Aa2YS4RgQjDPDw7YQHpkXgtHvYwC7SYqo/storage) to see how it works.
debosthefirst
1,721,751
Rollups-as-a-service or building yourself with Rollups SDKs– which is viable?
Claimed as the end-game for blockchain scalability, rollups are being adopted by a lot of popular...
0
2024-01-09T07:12:37
https://www.zeeve.io/blog/rollups-as-a-service-or-building-yourself-with-rollups-sdks-which-is-viable/
rollups
<p>Claimed as the end-game for blockchain scalability, <a href="https://www.zeeve.io/rollups/">rollups</a> are being adopted by a lot of popular web3 projects, such as Coinbase’s BASE, Immutable, dYdX, and zkBNB. Speaking of rollup development, <a href="https://www.zeeve.io/rollups/">rollups-as-a-service solutions</a> and rollup SDKs are currently the two viable ways of building use-case-centric, customizable rollup chains. Both RaaS and SDKs have the same end goal, which is enabling rollup development, but they work differently. Hence, there is some confusion around their usefulness.</p> <p>This article addresses all the concerns and confusion related to using <a href="https://www.zeeve.io/rollups/">rollups-as-a-service</a> or building with rollup SDKs. If you are planning to build your <a href="https://www.zeeve.io/rollups/">custom rollup chain</a>, go through all the details given here to choose the setup that suits you best.</p> <figure class="wp-block-image aligncenter size-large"><a href="https://www.zeeve.io/rollups/"><img src="https://www.zeeve.io/wp-content/uploads/2023/12/Launch-your-own-L2L3s-quickly-with-Zeeves-modular-RaaS.-1024x130.jpg" alt="" class="wp-image-55066"/></a></figure> <h2 id="h-breaking-the-confusions-around-rollups-as-a-service-and-rollup-sdks">Breaking the confusion around rollups-as-a-service and rollup SDKs</h2> <p>Because the concept of rollups-as-a-service is relatively new, it can be confusing for many people. Also, <a href="https://www.zeeve.io/rollups/">RaaS</a> is often treated as a counterpart to rollup SDKs, which it is definitely not. So what exactly are rollups-as-a-service and rollup SDKs? Let’s find out.</p> <p><strong>What exactly is rollups-as-a-service?</strong></p> <p><a href="https://www.zeeve.io/rollups/">Rollups-as-a-service</a> is a decentralized RaaS stack that allows anyone, be it dApp projects, web3 enterprises, or independent developers, to deploy their own modular rollup chain via fully managed services and a no-code deployment panel. RaaS is best described as a one-stop rollup deployment platform that gives you the flexibility and modularity to deploy both <a href="https://www.zeeve.io/appchains/optimistic-rollups/">Optimistic</a> and <a href="https://www.zeeve.io/appchains/polygon-zkrollups/">Zk rollups</a> using frameworks like <a href="https://www.zeeve.io/appchains/zksync-hyperchains-zkrollups/">ZkStack</a>, <a href="https://www.zeeve.io/appchains/polygon-zkrollups/">Polygon CDK</a>, <a href="https://www.zeeve.io/appchains/optimistic-rollups/">OP Stack</a>, and Arbitrum Orbit. It is designed to abstract away all the major challenges of building <a href="https://www.zeeve.io/rollups/">application-specific rollups</a>, such as smart contract development, infrastructure management, configuration, integration of rollup components, hosting, and infrastructure optimization.</p> <p>To tackle these complexities, <a href="https://www.zeeve.io/rollups/">rollups-as-a-service solutions</a> provide a no-code deployment panel where you can log in, choose your preferred stack, select configuration options (such as the sequencer, aggregator, prover, and DA layer), test your rollup chain, and deploy it seamlessly within a few minutes. However, even to use RaaS, you must be familiar with the basic concepts of rollups, their functions, components, and ideal features.</p> <p><strong>Now, let’s learn about rollup SDKs</strong></p> <p>Rollup SDKs refer to the modular frameworks that web3 developers can use to build custom-fit L2/L3 rollup chains suiting the specific needs of their dApps. Basically, rollup SDKs are a set of tools and components such as starter templates, customizable virtual machines, and an open-source codebase. Anyone with expertise in blockchain development can modify some aspects of these existing components to design a <a href="https://www.zeeve.io/rollups/">rollup</a> from scratch that is tailored to their application.</p> <p>Rollup SDKs offer you the flexibility to customize your <a href="https://www.zeeve.io/rollups/">rollup network</a> and do additional configuration for the DA (data availability) layer, sequencer, prover, or storage. Simply put, it’s a fully customizable system that allows you to add anything to the rollup chain that you think is required from your application’s perspective.</p> <p><a href="https://www.zeeve.io/appchains/polygon-zkrollups/">Polygon CDK</a>, <a href="https://www.zeeve.io/appchains/optimistic-rollups/">OP Stack</a>, <a href="https://www.zeeve.io/appchains/zksync-hyperchains-zkrollups/">zkStack</a>, Arbitrum Orbit, and Rollkit are some of the popular <a href="https://www.zeeve.io/rollups/">rollup frameworks</a> that are being actively used in the web3 developer community to build solutions for various industry sectors, be it <a href="https://www.zeeve.io/blog/rise-of-gaming-economy-powered-by-blockchain/">gaming</a>, <a href="https://www.zeeve.io/blog/banking-the-unbanked-defi-promoting-financial-inclusion/">DeFi</a>, <a href="https://www.zeeve.io/blog/a-simple-and-comprehensive-guide-on-non-fungible-tokens/">NFTs</a>, <a href="https://www.zeeve.io/blog/blockchain-for-government-public-services/">government</a>, <a href="https://www.zeeve.io/blog/comprehensive-guide-on-tokenized-real-world-assets/">real-world asset tokenization</a>, or <a href="https://www.zeeve.io/blog/blockchain-in-entertainment-industry-to-change-the-dynamic-industry/">media &amp; entertainment</a>.</p> <p>As discussed, it’s not fair to compare RaaS and SDKs, because these are just two distinct setups that serve the same purpose. <a href="https://www.zeeve.io/rollups/">Rollups-as-a-service</a> lets you tick a few checkboxes and your <a href="https://www.zeeve.io/rollups/">rollup chain</a> is ready. On the other hand, with rollup frameworks you have to go ahead and write the code yourself and wire it into your dApp. So it’s basically code vs. no-code, not RaaS vs. rollup SDKs.</p> ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebmmv4zbzbqf62zyocst.png) <h2 id="h-rollups-as-a-service-and-building-with-rollup-sdks-diving-into-development-process-stacks-resources-etc">Rollups-as-a-service and building with rollup SDKs: diving into the development process, stacks, resources, etc.</h2> <p>As rollups-as-a-service works as a launchpad for rollup SDKs, the approach to building <a href="https://www.zeeve.io/rollups/">rollups</a> with these two setups is different. 
Let’s discuss those steps with respect to a standard RaaS solution and a rollup SDK:</p> <h3>Using rollups-as-a-service solutions</h3> <p><strong>Step 1: Developer onboarding.</strong> First, you need to sign up on your desired <a href="https://www.zeeve.io/rollups/">rollups-as-a-service platform</a>. This generally requires an email address, a passcode, and, in some cases, a contact number. After that, log in to the RaaS platform and continue with your preferred stack.</p> <p><strong>Step 2: Rollup configuration.</strong> General configuration, blockchain-level, EVM, RPC, and infrastructure settings are some of the configuration options required in a <a href="https://www.zeeve.io/rollups/">rollup chain</a>. A RaaS solution (as you can see in the image below) offers an automated, default configuration for each of these components.</p> ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22o4jxf74nbr2g4xdfbp.png) <p>Further, to enhance the modularity of your <a href="https://www.zeeve.io/rollups/">rollup</a>, you can configure RPCs and choose the DA layer, sequencer type, nodes, storage type, interoperability layer, etc., from third-party integration partners.</p> <p><strong>Step 3: Testing and deployment.</strong> Once you have tested your setup thoroughly in DevNet, push your rollup chain to production with the one-click deployment option of RaaS.</p> <h3 id="h-using-rollup-sdks">Using rollup SDKs</h3> <p><strong>Step 1: Setting up the environment.</strong> Building with frameworks requires a specific hardware and software setup. For example, you will need an operating system, cloud setup, hosting infrastructure, a virtual machine, Docker containers, <a href="https://www.zeeve.io/blog/how-to-connect-to-my-ec2-geth-node-with-json-rpc/">RPCs</a>, etc.</p> <p><strong>Step 2: Writing the smart contract.</strong> You need to create a smart contract for your rollup chain in a language that is supported on your preferred stack. For example, if you choose SDKs like <a href="https://www.zeeve.io/appchains/polygon-zkrollups/">Polygon CDK</a> and <a href="https://www.zeeve.io/appchains/zksync-hyperchains-zkrollups/">zkStack</a>, then a Solidity-, Rust-, or Vyper-based contract will work because these are EVM-compatible frameworks.</p> <p><strong>Step 3: Wallet creation.</strong> A web3 wallet is needed to perform transactions, pay the gas fees, and access other apps. You can use existing web3 wallets or build one from scratch.</p> <p><strong>Step 4: Hosting nodes.</strong> Since you are deploying your rollup chain by yourself, you also have to host one or more nodes as required. For this, you can opt for <a href="https://www.zeeve.io/">node hosting service providers like Zeeve</a>.</p> <p><strong>Step 5: Configuring rollup components.</strong> To continue with deployment, you need to configure <a href="https://www.zeeve.io/rollups/">rollup</a> components such as the prover, synchronizer, transaction manager, RPCs, sequencer, aggregator, explorer, bridges, etc. For each of these components, you will need their specific command lines.</p> <p><strong>Step 6: Testing and upgrades.</strong> Test your rollup chain on your preferred testnet, ensuring that its components and features work as expected. Make the required upgrades to improve its functionality.</p> <p><strong>Step 7: Deploying the rollup chain.</strong> Push your rollup chain’s entire setup to the production environment to make it available to the intended users.</p> <p><strong>Step 8: Maintaining your rollup.</strong> To ensure your <a href="https://www.zeeve.io/rollups/">rollup chain</a> remains up and running all the time, you need to monitor it 24x7 and apply all the required upgrades.</p> <h2>What are the possible challenges of building with rollup SDKs?</h2> <p>Since rollup SDKs require developers to build their L2 or L3 chain from scratch, they may come across a number of challenges, such as:</p> <li><strong>Pre-development challenges:</strong> Building a rollup chain requires careful planning and consultation to move things forward in the right direction. For example, you may be unsure about choosing between <a href="https://www.zeeve.io/rollups/">rollup frameworks</a>, whether <a href="https://www.zeeve.io/appchains/optimistic-rollups/">Optimistic</a> or <a href="https://www.zeeve.io/appchains/polygon-zkrollups/">Zk rollups</a>, or you may need help choosing services from external service providers, like a decentralized sequencer, DA layer, account abstraction, etc.</li> <li><strong>Challenges of setting up a reliable infrastructure:</strong> Even when someone is building a rollup with SDKs, they need infrastructure to host its virtual machine, RPC and non-RPC nodes, etc. They also need additional infrastructure partners for DA, the sequencer, <a href="https://www.zeeve.io/blog/blockchain-interoperability-key-for-global-blockchain-adoption/">interoperability</a>, and more.</li> <li><strong>Rollup engineering-related complexity:</strong> There are many things required to <a href="https://www.zeeve.io/rollups/">build rollups</a> with SDKs, like programming smart contracts, deploying nodes, integrating rollup components, testing, and deployment. Any developer doing this will require extensive blockchain development skills and complete knowledge of the latest technologies.</li> <li><strong>Selection of additional rollup services:</strong> Adding modularity to a <a href="https://www.zeeve.io/rollups/">rollup chain</a>, or in other words building a sovereign rollup, requires you to integrate additional rollup services. For example, you can choose a decentralized sequencer from Celestia, an account abstraction layer from Biconomy or Halliday, interoperability from XYZ, etc. This also requires intense research and analysis to choose the right services and achieve the best output in a cost-optimized way.</li> <li><strong>Performance &amp; monitoring:</strong> Even after you successfully build your rollup chain with SDKs, it needs proper maintenance for uptime and performance. For example, your <a href="https://www.zeeve.io/rollups/">rollup</a> setup should follow security best practices, its infrastructure should be scalable on demand, gas costs should be optimized, response times should be fast, and so on.</li> <p>Speaking of monitoring, you will need to perform multiple layers of integration in your rollup, which is a challenging task. 
For example, you need to set up a component for real-time alerts, customize it to monitor all the important parameters, add graphical analytics, and a lot more.</p> <h2 id="h-how-can-rollups-as-a-service-solve-the-above-challenges">How can rollups-as-a-service solve the above challenges?</h2> <li><strong>Project consultation &amp; decision-making support:</strong> A reliable <a href="https://www.zeeve.io/rollups/">RaaS</a> provider does not only offer a node-deployment platform. Rather, they offer expert support and end-to-end consultation to analyze your requirements well, for example by addressing your doubts about choosing between various rollup frameworks, or by providing support for tooling &amp; infrastructure.</li> <li><strong>Ready-to-deploy, production-grade infrastructure:</strong> <a href="https://www.zeeve.io/rollups/">RaaS platforms</a> are optimized to offer a production-grade, low-code rollup deployment infrastructure. Whether it’s about hosting nodes or managing virtual machines, you do not need to manage infrastructure on your end.</li> <li><strong>Standardization:</strong> Rollups-as-a-service is designed for both experts and people with less familiarity with rollups or blockchain. Hence, <a href="https://www.zeeve.io/rollups/">RaaS solutions</a> offer a default configuration of a standard rollup chain for every framework. You can simply go ahead with that setup and deploy your rollup. However, you have the option to customize every aspect of your chain, from EVM configuration to wallets, infrastructure, and more.</li> <li><strong>Seamless integration of third-party rollup services:</strong> With a reliable RaaS solution, you get the flexibility to <a href="https://www.zeeve.io/integrations/">integrate a range of third-party rollup services</a>, be it a decentralized sequencer, aggregator, prover, storage solution, or interoperability layer.</li> <li><strong>Proactive rollup monitoring &amp; maintenance:</strong> Once deployed on a RaaS infrastructure, your <a href="https://www.zeeve.io/rollups/">rollup network</a> will be monitored on critical parameters such as instance metrics, validium metrics, RPC metrics, logs, and alerts, maintaining the uptime and superior performance of your chain.</li> ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/osgf7ysfles48ye9ce1r.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s1fh0asz5c5iyewa2ix.png) <h2 id="h-rollups-as-a-service-solution-or-building-with-rollup-sdks-which-one-you-should-choose-and-why">Rollups-as-a-service or building with rollup SDKs: which one should you choose, and why?</h2> <p><a href="https://www.zeeve.io/rollups/">Rollups-as-a-service</a> is becoming a preferred choice for a lot of industry leaders, forward-looking projects, and dApps. Below is a tweet from Sandeep Nailwal, co-founder of Polygon, highlighting the need for RaaS when building modular dApps.</p> ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y4dmd2tzw2ccn9rfq4ts.png) <p>Now, talking about choosing between rollups-as-a-service and rollup SDKs, it all depends on the stage of your application and its overall complexity.</p> <p>If you already manage a use case in the form of a protocol, a fully matured dApp, or a <a href="https://www.zeeve.io/blog/smart-contract-standardization-a-necessity-for-the-large-scale-adoption-of-blockchain/">smart contract</a>, and you just want to spin up an additional L2/L3 chain, then using a RaaS solution is the viable option for this scenario. That’s because you don’t need to do anything from scratch; you just want to add an additional chain to your existing infrastructure.</p> <p>For other cases, like upgrading a web2 application to web3 or building a new web3 application, SDKs may be more suitable. Note that rollups-as-a-service can also work here, but with SDKs you can choose everything from the base level up to design a highly personalized ecosystem: for example, what kind of state machine your application needs, what state transition function is required, and so on. Doing the same thing with RaaS can involve some heavy lifting because configurations are standardized there. Above all, the end decision is up to each project: if they want an automated setup, then RaaS is a viable option, and if they want to do things themselves, then SDKs will work.</p> <h2 id="h-try-zeeve-s-modular-raas-stack-the-most-easy-way-to-deploy-rollups">Try Zeeve’s modular RaaS stack: the easiest way to deploy rollups</h2> <p><a href="https://www.zeeve.io/">Zeeve</a> offers a modular <a href="https://www.zeeve.io/rollups/">rollups-as-a-service</a> stack to deploy production-grade rollups natively integrated with all the essential rollup tools &amp; components, such as white-labeled blockchain explorers, <a href="https://www.zeeve.io/blog/what-are-cross-chain-bridges-a-detailed-guide/">cross-chain bridges</a>, advanced wallets, data indexers, and scalable nodes. <a href="https://www.zeeve.io/">Zeeve</a> calls its RaaS stack modular because it supports integration with a range of rollup services, such as Avail, Near DA, and Celestia for data availability, Biconomy and Halliday for account abstraction (AA), Chainlink for decentralized oracles, Espresso and Radius for the decentralized sequencer, <a href="https://www.zeeve.io/subgraphs/">Subgraph</a> for data indexing, and LayerZero and Router Protocol for the interoperability layer.</p> <p><a href="https://www.zeeve.io/">Zeeve</a> currently offers a Polygon CDK sandbox, a one-click deployment tool for launching custom CDK chains. The sandbox tool is absolutely free to use, which allows you to try and test your <a href="https://www.zeeve.io/rollups/">rollup chain</a> multiple times until it is ready for production. Soon, Zeeve will offer a similar RaaS stack for launching <a href="https://www.zeeve.io/blockchain-protocols/deploy-zksync-era-node/">zkSync Era</a>, <a href="https://www.zeeve.io/appchains/zksync-hyperchains-zkrollups/">Hyperchains</a>, and Arbitrum Orbit. This is just a snapshot of what <a href="https://www.zeeve.io/">Zeeve</a> can offer; the list is vast, and it’s expanding with each passing day.</p> <p>For more information about <a href="https://www.zeeve.io/">Zeeve</a> and its comprehensive stack, feel free to <a href="https://www.zeeve.io/talk-to-an-expert/">connect with our experts</a> or schedule a one-to-one call for a detailed discussion.</p>
zeeve
1,724,146
Benefits And Challenges in Import Business: Import Business Insights
Introduction Every country engages in global trade, with India alone exporting goods worth $447...
0
2024-01-11T09:02:11
https://dev.to/10xexport/benefits-and-challenges-in-import-business-import-business-insights-515i
import, export, importexport
## Introduction

Every country engages in global trade, with India alone exporting goods worth $447 billion and importing goods worth $714 billion. This signifies the significant scale of the import business in India, making it a substantial player in the global marketplace. Starting an [import business](https://10xexport.com/our-coach/) is an exciting venture for entrepreneurs seeking to expand globally. This article delves into the benefits and challenges of import businesses, offering insights for those looking to navigate this dynamic field successfully.

## Advantages of Import Business

**1. Access to Diverse Products**
Importing allows businesses to offer a variety of products, tapping into the unique resources, expertise, and cultural influences of different countries. This diversity enriches product portfolios and attracts a broader customer base.

**2. Cost-Effective Sourcing**
Importing from countries with lower production costs can be more economical than domestic production. This includes factors like cheaper labor, raw materials, or favorable government policies, helping businesses reduce manufacturing expenses and increase profit margins.

**3. Global Market Reach**
Engaging in import business expands market reach globally. By connecting with international suppliers, businesses can increase sales volumes and revenue, and establish global brand recognition.

**4. Competitive Advantage**
Importing exclusive or high-quality products provides a competitive edge. Offering items not readily available locally can attract customers seeking novelty, fostering loyalty and repeat business.

**5. Building Strong Business Relationships**
Successful import businesses thrive on solid partnerships with foreign suppliers. Establishing trust through effective communication leads to favorable terms, priority access to new products, and support during market fluctuations.

**6. Assured Profits**
With demand in the local market, importing the right products after careful calculation of all expenses ensures assured profits. Many successful importers capitalize on existing local demand for a steady income.

## Challenges in Import Business

**1. Legal and Regulatory Compliance**
Navigating import and trade regulations, customs laws, tariffs, and licensing requirements requires expertise. Utilizing the services of a customs broker or consultant is crucial for compliance.

**2. Quality Control and Inspection**
Maintaining product quality is essential for building trust. Regular inspections and quality control checks ensure imported products meet international standards and are free from defects.

**3. Currency Exchange and Payment Risks**
Dealing with international suppliers involves currency exchange risks. Hedging strategies and fixed currency exchange rate contracts can mitigate these risks.

**4. Supplier Risk**
Dealing with suppliers in other countries involves risks in cargo delivery and payments. Thorough supplier verification and proper documentation are essential to avoid disputes.

**5. Logistics and Transportation**
Efficient logistics and transportation are vital for timely and cost-effective operations. Working with reliable shipping partners and customs clearance agencies ensures seamless importation and distribution processes.

**6. Calculating the Right Price**
Understanding all expenses, including customs duty, GST, and hidden costs, is crucial for calculating the imported product's landing price. This knowledge prevents unexpected expenses that can impact profits.

## Tips for a Successful Import Business

**1. Thorough Market Research**
Identify in-demand products and potential competitors through comprehensive market research. Understanding market trends and consumer preferences optimizes product selection for better sales performance.

**2. Supplier Evaluation and Verification**
Vet potential suppliers carefully to ensure reliability and credibility. Checking references, visiting manufacturing facilities, and reviewing certifications verify the legitimacy of suppliers.

**3. Negotiation Skills**
Practical negotiation skills lead to favorable terms and competitive prices. Aim for mutually beneficial deals with suppliers to optimize profit margins.

**4. Right Payment Terms**
Manage cargo nondelivery and nonpayment risks through trade conducted via banks and financial instruments like LCs (letters of credit). Understanding payment risks and working on favorable terms is essential.

**5. Proper Documentation and Contracts**
Maintain accurate documentation for smooth customs clearance and legal compliance. Well-drafted contracts with suppliers clarify terms and minimize misunderstandings.

**6. Risk Mitigation Strategies**
Develop strategies to address challenges like supply chain disruptions, currency fluctuations, and political instability. Proactive risk mitigation enhances resilience in the import business.

**7. Competitor Analysis**
Analyze competitors in the import business to make informed decisions. Accessing information about importers and their products through tools like India Import Export Federation's Buyer Data aids in strategic decision-making.

**8. Marketing and Sales**
Effective marketing and sales activities are crucial for business growth. For import businesses, a well-designed website and brochures generate leads and create a strong online presence.

## Conclusion

Embarking on an import business journey offers access to diverse products, cost-effective sourcing, and global market reach. Success in this venture requires overcoming challenges through diligent research, strategic planning, and strong supplier relationships. The Import Export Federation provides comprehensive support, guiding entrepreneurs through the complete import-export cycle, including banking, risk management, documents, compliances, and logistics.

## About Import Export Federation

The Import Export Federation offers online import business training, classes, and live webinars, providing essential knowledge for starting a self-import-export journey. With a focus on Pune and Mumbai, the federation aims to equip entrepreneurs with the skills needed for a successful import business.

## FAQs

**Q1: What legal considerations should be addressed in an import business?**
Legal considerations include customs laws, tariffs, licensing requirements, and compliance. Utilizing services like those offered by the Import-Export Federation ensures proper licensing and compliance.

**Q2: How can importers mitigate currency exchange risks?**
Importers can use hedging strategies or negotiate contracts with fixed currency exchange rates to mitigate currency exchange risks. Consulting banking experts can provide guidance on managing fluctuation risks.

**Q3: Why is thorough market research essential for import businesses?**
Thorough market research identifies in-demand products, potential competitors, and consumer preferences. This information guides product selection and enhances sales performance.

**Q4: How can businesses ensure the quality of imported products?**
Regular inspections and quality control checks are essential to ensure that imported products meet international standards and are free from defects. Third-party agencies can conduct audits and inspections globally.

**Q5: What role does the Import Export Federation play in supporting businesses?**
The Import Export Federation provides comprehensive support, guiding entrepreneurs through the complete import-export cycle. Services include assistance with banking, risk management, documents, compliances, and logistics.
10xexport
1,732,157
Unleashing TypeScript's Power: Exploring Key Concepts with Real-World Examples
In the dynamic world of web development, TypeScript emerges as a powerful companion to JavaScript....
0
2024-01-17T06:37:59
https://dev.to/hossain45/unleashing-typescripts-power-exploring-key-concepts-with-real-world-examples-55p9
typescript, javascript, beginners, programming
In the dynamic world of web development, TypeScript emerges as a powerful companion to JavaScript. Let's take a minute to explore some key concepts introduced by TypeScript, illustrating its features through practical examples. This blog will not dive into the details! ## But Why TypeScript? As they say, > "TypeScript is JavaScript with syntax for types". TypeScript emerged to make developers' lives easier by letting them declare types explicitly, which makes debugging easier than before. For example, ``` let age: number = 25; ``` In this example, the variable age is explicitly declared as a number, providing clarity and aiding in error detection. ## Type Annotations TypeScript introduces type annotations, allowing developers to explicitly specify the type of variables, parameters, and return values. This enhances code documentation and facilitates better understanding. For example, ``` function greet(name: string): string { return `Hello, ${name}!`; } ``` Here, the function `greet` takes a `string` parameter and returns a `string`, clearly defining the expected types. ## Interfaces for Clear Object Shapes Interfaces in TypeScript define contracts for object shapes, ensuring that objects adhere to a specific structure. This enhances code readability and maintainability. For example, ``` interface Person { name: string; age: number; } const hossain45: Person = { name: "Hossain45", age: 23, }; ``` The `Person` interface specifies the structure of a person object, making it easy to create instances with consistent properties. ## Type Aliases for Simplifying Types Type aliases create descriptive names for types, making complex types more readable. They are particularly useful when dealing with unions, intersections, or custom types. For example, ``` type Age = number; type RankAndName = `${string} ${string}`; let userAge: Age = 23; let rankAndName: RankAndName = "Developer hossain45"; ``` Here, `Age` and `RankAndName` provide meaningful names for the `number` and template string types, respectively. ## Union and Intersection Types TypeScript supports union types, allowing variables to hold values of different types. Intersection types enable the combination of multiple types into one. For example, ``` type Admin = { role: 'admin'; }; type Employee = { role: 'employee'; }; type AdminEmployee = Admin | Employee; ``` The `AdminEmployee` type can represent either an admin or an employee, providing flexibility in type definitions. ## Generics for Reusable Code Generics in TypeScript enable the creation of functions and data structures that work with various types, promoting code reuse and flexibility. For example, ``` function identity<T>(arg: T): T { return arg; } let result: number = identity(42); ``` The `identity` function can accept and return values of any type, providing a versatile and reusable solution. ## The Power (and Caution) of "Any" While TypeScript promotes static typing, the `any` type provides a way to opt out of type checking. It offers flexibility but comes with a loss of the benefits of static typing. For example, ``` let dynamicValue: any = 42; ``` In a broader perspective, consider a scenario where data arrives from an external API, and its structure is not fully known. The `any` type allows flexibility in handling such dynamic data, but developers need to be cautious due to the lack of type checking. ## Optional Properties for Enhanced Flexibility TypeScript's optional properties provide an additional layer of flexibility by allowing properties to be present or absent in an object. 
For example, ``` interface User { name: string; age?: number; } const john: User = { name: "hossain45", }; ``` In a broader scenario, consider a user profile where an `age` property is optional. This accommodates scenarios where not all users may have an age specified, offering enhanced adaptability in data structures. That's all for today! WE WILL DIVE DEEPER INTO EACH TOPIC LATER ON! HAPPY CODING!
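P.S. Since the union example above only shows `|`, here is a matching intersection (`&`) sketch for completeness; the type names are just for illustration.

```
// Intersection: a value must satisfy *both* shapes at once.
type HasName = { name: string };
type HasAge = { age: number };

type NamedPerson = HasName & HasAge;

// Both properties are required here, unlike the union example above.
const developer: NamedPerson = { name: "Hossain45", age: 23 };
```

Unions say "one of these shapes"; intersections say "all of these shapes at once".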
hossain45
1,739,852
Mastering Data Analytics: Navigating from Foundations to Advanced Techniques
Introduction In the rapidly evolving landscape of business and technology, data analytics has emerged...
0
2024-01-24T09:20:03
https://dev.to/shakyapreeti650/mastering-data-analytics-navigating-from-foundations-to-advanced-techniques-iof
Introduction In the rapidly evolving landscape of business and technology, data analytics has emerged as a critical driver of success. From uncovering valuable insights to making informed decisions, mastering data analytics is essential for professionals in various industries. This comprehensive guide will take you on a journey from the foundational principles of data analytics to advanced techniques, providing you with the knowledge and skills necessary to navigate the intricacies of this dynamic field. I. Understanding the Foundations of Data Analytics A. Definition and Scope: Data analytics involves the process of examining, cleaning, transforming, and modeling data to derive meaningful insights, draw conclusions, and support decision-making. The scope of data analytics encompasses a wide range of techniques and methods that can be applied across diverse domains. B. Importance of Data Analytics: Enhanced Decision-Making: Data analytics empowers organizations to make data-driven decisions, reducing uncertainty and increasing the likelihood of success. Improved Efficiency: Analyzing large datasets enables businesses to identify inefficiencies, streamline processes, and optimize resource utilization. Competitive Advantage: Organizations that harness the power of data analytics gain a competitive edge by staying ahead of market trends and customer preferences. II. Foundational Concepts in Data Analytics: A. Data Collection and Preparation: Types of Data: Understanding structured and unstructured data and the significance of both in analytics. Data Cleaning: The crucial step of identifying and rectifying errors, inconsistencies, and inaccuracies in datasets. B. Exploratory Data Analysis (EDA): Data Visualization: Using graphs, charts, and other visual aids to explore and communicate patterns, trends, and outliers in data. Descriptive Statistics: Analyzing and summarizing key features of a dataset, including measures of central tendency and dispersion. C. Basic Statistical Concepts: Probability Distributions: Understanding the fundamentals of probability and common distributions in data analytics. Hypothesis Testing: Introduction to hypothesis testing and its application in drawing inferences from data. III. Progressing to Intermediate Data Analytics: A. Regression Analysis: Linear Regression: Examining relationships between variables and making predictions using linear regression models. Logistic Regression: Understanding the application of logistic regression in binary classification problems. B. Time Series Analysis: Time Series Components: Decomposing time series data into trend, seasonality, and residual components. Forecasting Techniques: Exploring methods for predicting future values based on historical data. C. Machine Learning Fundamentals: Supervised Learning: Overview of supervised learning algorithms and their applications in classification and regression. Unsupervised Learning: Introduction to clustering and dimensionality reduction techniques. IV. Advanced Data Analytics Techniques: A. Deep Learning: Neural Networks: Understanding the architecture and functioning of artificial neural networks. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs): Exploring specialized neural network architectures for image recognition and sequential data analysis. B. Big Data Analytics Hadoop and MapReduce: Overview of distributed computing frameworks for processing large datasets. 
Apache Spark: Understanding the role of Apache Spark in big data analytics and machine learning. C. Predictive Modeling and Optimization Ensemble Methods: Introduction to ensemble learning techniques, including bagging and boosting. Optimization Algorithms: Overview of optimization techniques used in fine-tuning machine learning models. V. Real-world Applications and Case Studies A. Industry-specific Use Cases: Healthcare: Leveraging data analytics for patient outcomes prediction and resource optimization. Finance: Risk assessment, fraud detection, and portfolio optimization through advanced analytics. B. Ethical Considerations in Data Analytics: Privacy and Security: Addressing concerns related to the ethical use of data and protecting sensitive information. Bias and Fairness: Recognizing and mitigating biases in data analytics models. Conclusion Mastering data analytics is not just about acquiring theoretical knowledge but also about applying that knowledge in real-world scenarios. By enrolling in the best [Online Data Analytics Training Course in Dehradun](https://uncodemy.com/course/data-analytics-training-course-in-dehradun/), Lucknow, Delhi, Noida, or other cities in India, you can tailor your learning experience to the specific needs and opportunities present in the local context. Remember, the journey to mastery is ongoing, and staying connected with the latest developments in data analytics will keep you at the forefront of this ever-evolving field. Best of luck on your exciting and rewarding venture into the world of data analytics!
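Appendix: A Minimal Regression Sketch. To make the regression analysis section above a little more concrete, here is a minimal, self-contained sketch of simple linear regression (ordinary least squares with one predictor). The function name and the sample numbers are invented purely for illustration, so treat this as a sketch rather than production analytics code.

```
// Simple linear regression: fit y = intercept + slope * x by ordinary least squares.
function fitLine(x: number[], y: number[]): { slope: number; intercept: number } {
  const n = x.length;
  const meanX = x.reduce((sum, v) => sum + v, 0) / n;
  const meanY = y.reduce((sum, v) => sum + v, 0) / n;

  let covXY = 0; // running sum for the covariance of x and y
  let varX = 0;  // running sum for the variance of x
  for (let i = 0; i < n; i++) {
    covXY += (x[i] - meanX) * (y[i] - meanY);
    varX += (x[i] - meanX) ** 2;
  }

  const slope = covXY / varX;              // slope = cov(x, y) / var(x)
  const intercept = meanY - slope * meanX; // the fitted line passes through the mean point
  return { slope, intercept };
}

// Hypothetical data: advertising spend (x) versus sales (y).
const model = fitLine([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 8.1, 9.8]);
console.log(model); // roughly { slope: 1.96, intercept: 0.14 }
```

Libraries do this for you in practice, but seeing the closed-form version once makes the later topics (multiple regression, logistic regression) easier to reason about.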
shakyapreeti650
1,740,007
Mastering Asynchronous Programming in Node.js: A Comprehensive Guide to Callbacks, Promises, and Await
In the realm of Node.js development, understanding asynchronous programming is crucial for building...
0
2024-01-24T11:08:06
https://dev.to/theoddcoders/mastering-asynchronous-programming-in-nodejs-a-comprehensive-guide-to-callbacks-promises-and-await-2h81
node, development, developers
In the realm of Node.js development, understanding asynchronous programming is crucial for building scalable and efficient applications. As developers strive to create responsive and high-performance systems, mastering asynchronous programming becomes a cornerstone skill. In this comprehensive guide, we'll delve into the key concepts of asynchronous programming in Node.js, exploring callbacks, promises, and the powerful async/await syntax. If you're seeking a top-notch Node.js development company or aiming to hire a skilled Node.js developer, this knowledge will empower you to make informed decisions. The Fundamentals of Asynchronous Programming: Node.js, known for its non-blocking I/O model, enables developers to handle numerous tasks concurrently. At its core, asynchronous programming allows the execution of multiple operations without waiting for each to complete before moving on to the next. This is achieved through callbacks, promises, and the more recent async/await syntax. **1.1 Callbacks in Node.js:** Callbacks are the foundation of asynchronous programming in Node.js. A callback function is passed as an argument to another function and is executed once the operation is completed. While callbacks are effective, they can lead to callback hell or the pyramid of doom, making the code difficult to read and maintain. // Example of a callback function fetchData(callback) { // Simulating an asynchronous operation setTimeout(() => { callback("Data fetched successfully"); }, 1000); } // Using the callback fetchData((result) => { console.log(result); }); .2 Promises: A Better Approach: To address the issues associated with callbacks, promises were introduced in ECMAScript 6. Promises provide a cleaner and more readable way to handle asynchronous operations. A promise represents the eventual completion or failure of an asynchronous operation, simplifying error handling and allowing for more structured code. // Example using promises function fetchData() { return new Promise((resolve, reject) => { // Simulating an asynchronous operation setTimeout(() => { resolve("Data fetched successfully"); }, 1000); }); } // Using the promise fetchData() .then((result) => { console.log(result); }) .catch((error) => { console.error(error); }); The Evolution: Async/Await Syntax: While promises offer significant improvements, the async/await syntax takes asynchronous programming in Node.js to a new level. Introduced in ECMAScript 2017, async/await provides a more concise and readable way to work with promises, making the code resemble synchronous programming. // Example using async/await async function fetchData() { return new Promise((resolve) => { // Simulating an asynchronous operation setTimeout(() => { resolve("Data fetched successfully"); }, 1000); }); } // Using async/await async function getData() { try { const result = await fetchData(); console.log(result); } catch (error) { console.error(error); } } getData(); Choosing the Right Approach: When developing in Node.js, the choice between callbacks, promises, or async/await depends on the specific requirements of your project and personal coding preferences. While callbacks remain prevalent, promises and async/await offer cleaner and more maintainable alternatives. Node.js Development Company: Why It Matters: For businesses aiming to harness the full potential of Node.js, partnering with a proficient [Node.js development company](https://www.theoddcoders.com/nodejs-development/) is essential. 
A specialized company brings expertise, experience, and a streamlined development process, ensuring the successful implementation of your projects. **Hire Node.js Developer**: Finding the Right Talent: If you prefer a more hands-on approach and want to build an in-house development team, hiring a skilled Node.js developer is a critical step. Look for developers with a solid understanding of asynchronous programming, as well as proficiency in other relevant technologies and frameworks. Conclusion: In conclusion, mastering asynchronous programming in Node.js is pivotal for building robust and efficient applications. Whether you opt for callbacks, promises, or the modern async/await syntax, understanding the strengths of each approach is crucial. If you're looking to leverage Node.js for your projects, consider partnering with a reputable **Node.js development company** or hiring a skilled Node.js developer to ensure success in your endeavors. Read More :- [Native App Development vs. Cross-Platform App Development: Choosing the Right Approach for Your Mobile App](https://www.theoddcoders.com/native-app-development-vs-cross-platform-app-development-choosing-the-right-approach-for-your-mobile-app/)
theoddcoders
1,740,016
👨‍💻 Daily Code 48 | Random Number 1-100, 🐍 Python and 🟨 JavaScript (3)
alright as promised yesterday I’m going to try making the Random Number Generator HTML again today...
0
2024-01-24T11:17:53
https://dev.to/gregor_schafroth/daily-code-48-random-number-1-100-python-and-javascript-3-2oik
beginners, daily, javascript
alright as promised yesterday I’m going to try making the Random Number Generator HTML again today without looking up anything or using any ChatGPT. I already solved that for Python yesterday, so now it’s just JS left # My Code Tadaa! Turns out this time I was able to solve it fully on my own! ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>1-100 Random Generator Button</title> </head> <body> <button id="js-random-button">Generate Random Number</button> <div id="js-random-number"></div> <script> document.getElementById('js-random-button').onclick = function () { let randomNumber = Math.floor(Math.random() * 100) + 1; document.getElementById('js-random-number').textContent = randomNumber; } </script> </body> </html> ``` Next I want to get more reps with page manipulation. I want to become so familiar with this that I can control website content with JavaScript without even thinking much, so I’ll add more exercises to achieve this in the coming days.
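One exercise I could do next, sketched out below so future-me remembers the idea: keep the same button, but also count how many numbers have been generated. The extra `js-roll-count` element is just a placeholder id I made up, so adjust it to whatever the page actually uses.

```js
// Same button as before, but now also count how many numbers have been generated.
// Assumes the page has an extra <div id="js-roll-count"></div>.
let rolls = 0;
document.getElementById('js-random-button').onclick = function () {
  const randomNumber = Math.floor(Math.random() * 100) + 1;
  rolls += 1;
  document.getElementById('js-random-number').textContent = String(randomNumber);
  document.getElementById('js-roll-count').textContent = 'Rolls so far: ' + rolls;
};
```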
gregor_schafroth
1,740,053
What Mr. Robot Teaches Us About Bugs
Sometimes, we act as if bugs just appear in our code by themselves, but we all know that's not true....
0
2024-01-24T12:04:41
https://dev.to/rodri-oliveira-dev/what-mr-robot-teaches-us-about-bugs-3hnm
programming
Sometimes, we act as if bugs just appear in our code by themselves, but we all know that's not true. The errors in my code were written by me. I didn't mean to write them. I didn't want to write them. But I did anyway. I remember when I first read Al Shalloway referring to a bug as 'code that he had written' and it struck me, I am the author of the bugs in my code. There was no one else to blame but myself. Many teams accumulate bugs. Instead of listening to the message the bugs are telling us and fixing them as soon as they are found, they identify, track them, and tell themselves that one day they will get rid of them. Good luck. That number just grows and grows until there are so many bugs that it becomes unfeasible to fix, so we learn to live with most of them and the bugs drag the project down. Bugs are often just the tip of the iceberg, they are the harbinger of something much worse to come, a warning sign that things are going wrong. We should pay attention to these warnings and not ignore them. Some bugs hide very well, and finding them can often be the hardest part of getting rid of them. An old friend tells me that finding a bug in a large system is like being told there is a wrong word in a dictionary. Fixing a wrong word is easy, finding the wrong word can be time-consuming and tedious. In the series Mr. Robot, in the episode 1.2_d3bug.mkv, Elliot says: 'Debugging isn't just about finding the bug. It's about understanding why the bug was there in the first place. Knowing that its existence was not accidental.' I have been dealing with bugs for many years as a developer, but only recently started to see them for what they really are: flaws in my software development process. And like insects, software bugs need the right conditions to breed. Bugs don't just happen, we let them happen. But it doesn't have to be like this. Instead of accumulating bugs, we can recognize them for the messengers they are and heed their message to get back on track. Every bug is a missing test, a critical distinction we failed to realize. If we can see that, we can not only fix that bug but also prevent a whole series of bugs. If we can see a bug for what it really is, then it becomes my teacher and we become its grateful students.
rodri-oliveira-dev
1,740,060
Why Flutter BLoC is Loved and Popular by the Developers?
𝗦𝗲𝗽𝗮𝗿𝗮𝘁𝗶𝗼𝗻 𝗼𝗳 𝗖𝗼𝗻𝗰𝗲𝗿𝗻𝘀: Flutter BLoC allows for the separation of business logic and UI. This means...
0
2024-01-24T12:09:27
https://dev.to/amrazzam31/why-flutter-bloc-is-loved-and-popular-by-the-developers-1632
flutter, dart, bloc, statemanagement
- 𝗦𝗲𝗽𝗮𝗿𝗮𝘁𝗶𝗼𝗻 𝗼𝗳 𝗖𝗼𝗻𝗰𝗲𝗿𝗻𝘀: Flutter BLoC allows for the separation of business logic and UI. This means that developers can focus on implementing the business logic in the BLoC without worrying about the UI, and vice versa. - 𝗖𝗼𝗱𝗲 𝗥𝗲𝘂𝘀𝗮𝗯𝗶𝗹𝗶𝘁𝘆: With Flutter BLoC, developers can create reusable business logic components that can be used across different parts of the application. This reduces code duplication and makes it easier to maintain the application’s codebase. - 𝗣𝗿𝗲𝗱𝗶𝗰𝘁𝗮𝗯𝗹𝗲 𝗦𝘁𝗮𝘁𝗲 𝗠𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁: Flutter BLoC provides a predictable way of managing the state of the application. The BLoC emits new states in response to events from the UI, making it easier to reason about the application’s behavior. - 𝗧𝗲𝘀𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Flutter BLoC makes it easier to write unit tests for the business logic of the application. Since the BLoC is separate from the UI, it can be tested independently of the UI. - 𝗟𝗮𝗿𝗴𝗲 𝗮𝗻𝗱 𝗖𝗼𝗺𝗽𝗹𝗲𝘅 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀: Flutter BLoC is particularly well-suited for large and complex applications, as it provides a scalable architecture that can be extended as the application grows.
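The points above describe the pattern rather than any specific API, so here is a rough, framework-agnostic sketch of the core idea (events in, states out), written in TypeScript for brevity rather than Dart; the class and type names are illustrative and this is not flutter_bloc code.

```
// Minimal BLoC-style sketch: the UI only sends events and listens for states;
// all business logic lives inside the bloc, so it can be unit tested without any UI.
type CounterEvent = 'increment' | 'decrement';
type CounterState = { count: number };

class CounterBloc {
  private state: CounterState = { count: 0 };
  private listeners: Array<(state: CounterState) => void> = [];

  add(event: CounterEvent): void {
    const delta = event === 'increment' ? 1 : -1;
    this.state = { count: this.state.count + delta };
    this.listeners.forEach((listener) => listener(this.state));
  }

  subscribe(listener: (state: CounterState) => void): void {
    this.listeners.push(listener);
  }
}

// "UI" side: subscribe to states, dispatch events.
const bloc = new CounterBloc();
bloc.subscribe((state) => console.log('render count:', state.count));
bloc.add('increment'); // render count: 1
bloc.add('increment'); // render count: 2
```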
amrazzam31
1,744,216
Does DSA matter in Fullstack Development? If yes, how much?
A post by Rsgcv
0
2024-01-29T04:35:56
https://dev.to/bhingarde8/is-the-dsa-matter-in-fullstack-development-if-yes-how-much--31d2
bhingarde8
1,740,069
Decoding Cryptocurrency and Navigating the Developer's Realm
In today's fast-paced financial landscape, the emergence of digital currency is beginning a new era,...
0
2024-01-24T12:17:06
https://dev.to/shridhargv/decoding-cryptocurrency-and-navigating-the-developers-realm-949
cpp, cryptoappdevelopmentwithcpp, cryptocurrency, apirequest
In today's fast-paced financial landscape, the emergence of digital currency is beginning a new era, reshaping our understanding and utilization of money. Beyond the familiar names of Bitcoin and Ethereum, there are over 9,000 cryptocurrencies, each contributing to the evolving narrative of decentralized finance. But what sets digital currency apart, and why is it becoming a focal point of conversations? ## Cracking the Cryptocurrency Code: A User's Primer Cryptocurrency, often dubbed the currency of the future, operates on blockchain technology. It's not just a digital version of traditional money; it represents a paradigm shift. Unlike conventional currencies controlled by central banks, cryptocurrency is decentralized and operates on a blockchain – a secure digital ledger. Users can employ cryptocurrency for transactions, similar to regular money. However, it's more commonly viewed as an investment akin to stocks or precious metals. But, as with any investment, venturing into cryptocurrency requires diligent research to grasp its intricate workings. Bitcoin, the inaugural cryptocurrency, was introduced by the mysterious Satoshi Nakamoto in a 2008 paper. Nakamoto aimed to create a system for electronic payments without the need for trust, giving birth to the groundbreaking concept of Bitcoin. ## Demystifying Blockchain: The User's Digital Checkbook At the heart of cryptocurrency's magic lies blockchain – a colossal digital checkbook distributed across countless computers globally. Each transaction becomes a page, and these pages intertwine to form an unbroken chain. Imagine a book where you write down everything you spend daily. Each page is similar to a block, whereas the entire book, a group of pages, is a blockchain. Every cryptocurrency user possesses a copy of this digital book. With each transaction, the records synchronize across all copies simultaneously, ensuring accuracy and uniformity. To thwart fraudulent activities, every transaction undergoes validation, employing techniques like proof of work or stake. ## Proof of Work & Proof of Stake: The Digital Power Play Two popular mechanisms, proof of work and proof of stake, verify the legitimacy of transactions before incorporation into the blockchain. Participants in this validation process receive cryptocurrency rewards. It's like a race where computers solve a puzzle to verify a group of transactions, and the winner gets a small amount of cryptocurrency. Bitcoin, for instance, awards approximately $200,000 worth of Bitcoin to the first computer validating a new block. Alternatively, some cryptocurrencies employ proof of stake, tying the verification capacity to the amount of cryptocurrency temporarily locked up by participants. Proof of stake is more efficient than proof of work and facilitates faster transaction verification times. For instance, Bitcoin takes at least 10 minutes for a transaction, while Solana averages around 3,000 transactions per second using proof of stake – a notable speed increase. ## Consensus in the Crypto World: A Universal Agreement Both proofs of stake and proof of work rely on consensus mechanisms to validate transactions. While individual users check transactions, most ledger holders must approve each verified transaction. ### Mining: Unearthing Digital Gold Mining is creating new cryptocurrency units, typically as a reward for validating transactions. Once accessible to anyone, mining has become challenging, especially in proof-of-work systems like Bitcoin. 
As the Bitcoin network grows, it gets more complicated and needs more power. The average person can't do it anymore because there are too many pros with better equipment. Proof-of-work mining also consumes substantial electricity – Bitcoin alone surpasses Norway's annual electricity consumption. In contrast, proof-of-stake is less energy-intensive but requires participants to own cryptocurrency for involvement. ### Using Cryptocurrency: Beyond Digital Dollars Cryptocurrencies like Bitcoin, Litecoin, or Ethereum offer more than a digital version of traditional money; they're perceived as investments akin to digital gold. Bitcoin, the most famous crypto, is like secure, decentralized gold. Utilizing cryptocurrency for purchases often requires a crypto wallet, functioning as a digital purse interacting with the blockchain for sending and receiving crypto. However, it's crucial to note that crypto transactions aren't instantaneous; they require validation before completion. Thus, those diving into the cryptocurrency wave should brace for a dynamic ride. ## Empowering Developers through Digital Currency: A Symbiotic Relationship Developers find themselves in a unique position, equipped in order to harness the potential and address the challenges presented by digital currencies: **Blockchain Development:** Developers use blockchain to create decentralized applications (DApps) and smart contracts, reducing reliance on central authorities. **Cryptocurrency Integration:** Seamlessly integrating cryptocurrencies into applications allows users to transact and invest within the ecosystem. **Financial Innovation:** Digital currencies have become a testing ground for financial innovation, enabling developers to explore novel transaction methods and experiment with decentralized finance (DeFi) platforms. **Enhanced Security:** Leveraging blockchain's cryptographic techniques, developers can fortify applications, ensuring robust security measures to protect user data and financial assets. **Global Accessibility:** Digital currencies break geographical barriers and enable developers to create applications with global reach, offering financial services to unbanked populations. ## Crypto Data The importance of [cryptocurrency data](https://tradermade.com/crypto) extends across various aspects of the financial, technological, and economic landscape. Here are key points highlighting the significance of cryptocurrency data: ### Market Analysis and Trading Cryptocurrency data provides [real-time information on crypto prices](https://tradermade.com/blog/how-to-get-live-crypto-prices), trading volumes, and market trends. Traders and investors use this data to make informed decisions, execute trades, and manage their portfolios. ### Investment Decisions Investors rely on cryptocurrency data to analyze historical performance, assess volatility, and identify potential investment opportunities. Access to accurate data is crucial for making strategic investment decisions. ### Risk Management Cryptocurrency data aids in risk assessment and management. Traders and investors use volatility metrics, historical price patterns, and other data points to gauge and mitigate risks associated with the crypto market. ### Blockchain Development Developers leverage blockchain-related data to understand network activities, validate transactions, and enhance the security of decentralized applications (DApps). Data from blockchain networks is fundamental for creating innovative and secure solutions. 
### Market Research and Trends Cryptocurrency data is essential for market research and trend analysis. Analysts and researchers use data to track user behavior, adoption rates, and emerging patterns, providing valuable insights into the evolving cryptocurrency landscape. ### Decentralized Finance (DeFi) In the context of DeFi, cryptocurrency data is crucial for assessing liquidity, interest rates, and other parameters. [DeFi platforms](https://tradermade.com/blog/defi-blockchain-applications-using-tradermade-data) rely on accurate and up-to-date data to operate decentralized lending, borrowing, and trading protocols seamlessly. ### Technology Innovation Cryptocurrency data fuels technological innovation. Developers and entrepreneurs use data to create new blockchain-based applications and explore smart contract functionalities. ### Global Economic Impact Cryptocurrency data is increasingly recognized for impacting the global economy. Governments, financial institutions, and international organizations monitor and study crypto data to understand its influence on traditional financial systems and global economic dynamics. ### Educational Purposes Cryptocurrency data serves as a valuable educational resource. It helps individuals, institutions, and researchers understand the mechanics of blockchain technology, the economics of cryptocurrencies, and the implications for various industries. ## Fetching Live Cryptocurrency Market Data in C++. The tutorial is designed to retrieve live cryptocurrency market data for BTC/USD. It utilizes the CURL library for making HTTP requests and the JSONCPP library for parsing JSON responses. Also Read: Tutorial on [C++ WebSocket Client](https://tradermade.com/tutorials/how-to-build-your-first-cpp-websocket-client) ### Setting Up C++ Development Environment on Ubuntu Install C++ Compiler: Open a terminal and run the following command to install the GNU Compiler Collection (g++): ``` sudo apt-get update sudo apt-get install g++ ``` Install CURL Library: Install the CURL library for making HTTP requests: ``` sudo apt-get install libcurl4-openssl-dev ``` Install JSONCPP Library: Install the JSONCPP library for parsing JSON: ``` sudo apt-get install libjsoncpp-dev ``` ## Getting API Key from TraderMade **Sign Up on TraderMade:** Visit the [TraderMade website](https://tradermade.com/) and [sign up](https://tradermade.com/signup) for an account. **Obtain API Key:** After signing in, navigate to the API section or contact TraderMade support to obtain your API key. **Documentation:** Please visit the [documentation](https://tradermade.com/docs/restful-api) page for more details. ## Writing the C++ Program ### Include Necessary Libraries The code begins by including essential libraries: ``` #include <iostream> #include <string> #include <curl/curl.h> #include <json/json.h> ``` **Namespace Declaration:** The program uses the std namespace for standard C++ functionality: ``` using namespace std; ``` **Callback Function for CURL:** Defines a callback function (WriteCallback) to handle CURL's response: ``` size_t WriteCallback(void *contents, size_t size, size_t nmemb, void *userp) { ((string*)userp)->append((char*)contents, size * nmemb); return size * nmemb; } ``` **Functions for CURL Request and JSON Parsing:** performCurlRequest: Initiates a CURL request and retrieves the response. ``` bool performCurlRequest(const string& url, string& response) { // (CURL setup and execution) } parseJsonResponse: Parses the JSON response using the JSONCPP library. 
bool parseJsonResponse(const string& jsonResponse, Json::Value& parsedRoot) { // (JSON parsing setup) } ``` **Main Function:** The main function initiates the program: ``` int main() { // (API URL and response string initialization) } ``` **API URL and CURL Initialization:** Sets up the API URL for fetching cryptocurrency data and initializes CURL: ``` string api_url = "https://marketdata.tradermade.com/api/v1/live?currency=BTCUSD&api_key=API_KEY"; string response; curl_global_init(CURL_GLOBAL_DEFAULT); ``` **Performing CURL Request:** Calls performCurlRequest to execute the CURL request: ``` if (performCurlRequest(api_url, response)) { // (JSON parsing and data extraction) } ``` **JSON Parsing and Data Extraction:** Utilizes parseJsonResponse to parse the JSON response and extract bid and ask prices: ``` Json::Value root; if (parseJsonResponse(response, root)) { // (Extracting bid and ask prices from JSON) } ``` **Displaying Results:** Displays bid and ask prices to the console: ``` const Json::Value quotes = root["quotes"]; for (const Json::Value &quote : quotes) { if (quote.isMember("bid") && quote.isMember("ask")) { double bid = quote["bid"].asDouble(); double ask = quote["ask"].asDouble(); cout << "Bid: " << bid << ", Ask: " << ask << endl; } } ``` **CURL Cleanup:** Cleans up CURL resources before program exit: ``` curl_global_cleanup(); return 0; ``` The program fetches live cryptocurrency market data, demonstrating the use of CURL for HTTP requests and JSONCPP for parsing JSON responses. Hurray! Our output is ready ``` { "endpoint": "live", "quotes": [ { "ask": 39943.14, "base_currency": "BTC", "bid": 39888.32, "mid": 39915.73, "quote_currency": "USD" } ], "requested_time": "Wed, 24 Jan 2024 07:14:42 GMT", "timestamp": 1706080483 } ``` **We have a full program here:** ``` #include <iostream> #include <string> #include <curl/curl.h> #include <json/json.h> using namespace std; // Callback function to handle curl's response size_t WriteCallback(void *contents, size_t size, size_t nmemb, void *userp) { ((string*)userp)->append((char*)contents, size * nmemb); return size * nmemb; } // Function to perform CURL request bool performCurlRequest(const string& url, string& response) { CURL *curl = curl_easy_init(); if (!curl) { cerr << "Failed to initialize CURL" << endl; return false; } curl_easy_setopt(curl, CURLOPT_URL, url.c_str()); curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, WriteCallback); curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response); CURLcode res = curl_easy_perform(curl); curl_easy_cleanup(curl); if (res != CURLE_OK) { cerr << "curl_easy_perform() failed: " << curl_easy_strerror(res) << endl; return false; } return true; } // Function to parse JSON response bool parseJsonResponse(const string& jsonResponse, Json::Value& parsedRoot) { Json::CharReaderBuilder builder; Json::CharReader *reader = builder.newCharReader(); string errs; bool parsingSuccessful = reader->parse(jsonResponse.c_str(), jsonResponse.c_str() + jsonResponse.size(), &parsedRoot, &errs); delete reader; if (!parsingSuccessful) { cerr << "Failed to parse JSON: " << errs << endl; return false; } return true; } int main() { string api_url = "https://marketdata.tradermade.com/api/v1/live?currency=BTCUSD&api_key=API_KEY"; string response; curl_global_init(CURL_GLOBAL_DEFAULT); if (performCurlRequest(api_url, response)) { Json::Value root; if (parseJsonResponse(response, root)) { const Json::Value quotes = root["quotes"]; for (const Json::Value &quote : quotes) { if (quote.isMember("bid") && quote.isMember("ask")) { 
double bid = quote["bid"].asDouble(); double ask = quote["ask"].asDouble(); cout << "Bid: " << bid << ", Ask: " << ask << endl; } } } } curl_global_cleanup(); return 0; } ``` ### Final Remarks This tutorial is a foundation for building more complex cryptocurrency-related applications using C++. Feel free to run this code in your development environment, and don't forget to replace "API_KEY" with a valid API key for the specified URL. This tutorial provides a practical example of making API requests, handling responses, and parsing JSON data in a C++ program.
shridhargv
1,740,137
Best Practices for Effective Software Testing in Agile Development
Introduction: In the fast-paced world of Agile development, where rapid iterations and continuous...
0
2024-01-24T13:11:31
https://dev.to/talenttinaapi/best-practices-for-effective-software-testing-in-agile-development-i7n
softwaretesting, testing, qaengineering, softwareqaengineering
**Introduction:** In the fast-paced world of Agile development, where rapid iterations and continuous delivery are the norm, the role of Software Quality Assurance (QA) has become more critical than ever. Effective software testing is essential to ensure that the product meets both functional and non-functional requirements and that it does so with the utmost quality. This article explores best practices for Software QA Engineers to navigate the challenges of Agile development and deliver high-quality software efficiently. **Collaboration and Communication:** Successful Agile testing begins with strong collaboration and communication among cross-functional teams. QA Engineers should actively participate in all phases of the development process, fostering open communication with developers, product owners, and other stakeholders. Regular stand-up meetings and sprint planning sessions help align testing efforts with development goals, ensuring that everyone is on the same page. **Early and Continuous Testing:** In Agile, testing is not a phase that occurs at the end of development; it's a continuous process integrated throughout the entire lifecycle. QA Engineers should aim to start testing as early as possible, validating user stories and acceptance criteria during sprint planning. This approach reduces the likelihood of defects slipping through and accelerates the feedback loop, allowing for quicker issue resolution. **Test Automation:** Test automation is a cornerstone of Agile testing. It enables quick and repetitive execution of test cases, providing rapid feedback on the application's stability. QA Engineers should prioritize the automation of repetitive and high-impact test scenarios, such as regression tests and critical path workflows. This not only saves time but also allows teams to focus on more complex and exploratory testing. Continuous Integration and Continuous Deployment (CI/CD): Implementing CI/CD practices enhances the efficiency of Agile testing. Automated builds and deployments ensure that the software is consistently tested in an environment that mirrors production. QA Engineers should work closely with DevOps teams to integrate testing into the CI/CD pipeline, enabling quick identification and resolution of issues. **Exploratory Testing:** While test automation is crucial, exploratory testing adds a human touch by simulating real-world user interactions. QA Engineers should leverage exploratory testing to uncover unforeseen issues, validate user experience, and ensure the application's usability. This dynamic testing approach complements automated testing and helps discover defects that might be overlooked by scripted tests. **Regression Testing Strategy:** With the frequent changes in Agile development, a robust regression testing strategy is imperative. QA Engineers should maintain a comprehensive suite of automated regression tests that cover critical functionalities. Prioritize test cases based on business impact and risk, ensuring that updates do not introduce unintended side effects. **Cross-Browser and Cross-Platform Testing:** Agile development often involves delivering software that runs on various browsers and platforms. QA Engineers should conduct thorough cross-browser and cross-platform testing to guarantee a consistent user experience across different environments. Cloud-based testing tools can aid in efficiently managing this aspect of testing. 
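**A Minimal Regression Test Sketch:** To tie the test automation and regression testing points above to something concrete, here is a minimal sketch of an automated regression check written as a Jest-style unit test. The `calculateDiscount` function and its business rule are invented for illustration; the point is only that such checks are cheap enough to run on every commit in the CI/CD pipeline.

```
// Hypothetical pure function under test; the discount rule is invented for this example.
function calculateDiscount(total: number): number {
  return total >= 100 ? total * 0.1 : 0;
}

// Jest-style regression tests: if a future change breaks the rule, the pipeline fails fast.
describe('calculateDiscount', () => {
  test('applies a 10% discount at or above the threshold', () => {
    expect(calculateDiscount(100)).toBe(10);
    expect(calculateDiscount(250)).toBe(25);
  });

  test('applies no discount below the threshold', () => {
    expect(calculateDiscount(99.99)).toBe(0);
  });
});
```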
**Performance Testing:** Agile applications must meet functional requirements and also perform well under different loads. QA Engineers should incorporate performance testing early in the development cycle to identify and address scalability and performance issues. Load testing, stress testing, and scalability testing are essential components of a comprehensive performance testing strategy. **Continuous Learning and Adaptation:** The Agile landscape is dynamic, with technologies and methodologies evolving rapidly. QA Engineers should actively engage in continuous learning, staying updated on industry best practices, emerging tools, and testing techniques. Regular retrospectives and feedback loops within the team provide opportunities for improvement and adaptation. **Metrics and Reporting:** Establishing key performance indicators (KPIs) and generating meaningful metrics are crucial for assessing the effectiveness of the testing process. QA Engineers should collaborate with stakeholders to define and track metrics related to test coverage, defect density, and testing cycle time. Transparent reporting facilitates data-driven decision-making and helps in continuously improving the testing process. **Conclusion:** Effective software testing in Agile development demands a strategic and collaborative approach. By embracing these best practices, Software QA Engineers can contribute significantly to the delivery of high-quality software within the constraints of Agile methodologies. As Agile continues to shape the software development landscape, the role of QA becomes increasingly pivotal in ensuring that the end product not only meets customer expectations but exceeds them in terms of reliability, performance, and user satisfaction.
talenttinaapi
1,740,183
Unlocking the Potential of Containerization Tools in DevOps: A Comprehensive Guide
Introduction: In the ever-evolving realm of software sorcery, where the wizards of development and...
0
2024-01-24T14:18:57
https://dev.to/kingkunte_/exploring-containerization-tools-in-devops-3edk
devops, kubernetes, docker
**Introduction:** In the ever-evolving realm of software sorcery, where the wizards of development and the alchemists of operations converge, there's a mystical art known as containerization. It's the secret potion, the magical key that unlocks the door to a world where applications dance seamlessly across different landscapes. So, grab your cloak and staff, for this article is our journey into the enchanting realm of containerization tools, the unsung heroes in our DevOps saga. Picture this: a world where applications are encased in mystical containers instead of bound by chains of dependencies, ensuring they're as comfortable in the wild, untamed lands of development as they are in the sanctified realms of production. This is the essence of containerization—a practice that promises consistency, portability, and a touch of magic to the deployment process. At the forefront of this magical revolution stands Docker, a name whispered in awe among the DevOps wizards. Docker, the master sorcerer, gifted us the ability to wrap our applications in containers, making them impervious to the unpredictable winds of varying environments. Docker's legacy is etched in the scrolls of DevOps, setting the standard for all who dare tread the path of containerization. But, dear reader, the world of containers is vast, and its secrets are not confined to a single spell. Enter Podman, a mystical apprentice challenging the traditional norms. Daemonless, rootless—Podman is a new chant, providing a different perspective on the magic of container runtimes. As our journey unfolds, we'll encounter these diverse tools, each with its charm and purpose. Each tale of containerization is complete with the mention of Kubernetes, the grand orchestrator orchestrating the grandest of orchestras. It's the puppet master, automating the dance of containers, orchestrating their movements across the stage of distributed systems: Kubernetes, the unsung hero in our DevOps epic. As we traverse the magical landscape, we'll stumble upon Terraform, the architect's blueprint for crafting containerized infrastructure. Declarative and assertive, Terraform allows us to mold the foundations upon which our containers shall tread. Yet, dear reader, even in this magical realm, dangers lurk. Security is our shield against the dark arts, and tools like Clair and Docker Content Trust stand guard, ensuring our containers are fortified against vulnerabilities and evil forces. In our quest, we shall also unveil the mystic arts of monitoring and logging. Prometheus and Grafana weave spells of observability, while the ELK Stack scribes tales of events in the chronicles of logs. So, fasten your seatbelts, fellow adventurer, for this is not just a technical journey. It's a tale of triumphs and challenges, real-world exploits, and a glimpse into the future—the future where containers reign supreme, and DevOps becomes a tapestry woven with threads of containerization. Welcome to the magical symphony of containerization tools, where spells are cast in code, and the only limit is the imagination of the conjurer. **2. Docker: The Pioneer of Containerization:** In the enchanted world of containerization, Docker reigns as the undisputed pioneer—a name whispered in awe among the wizards of DevOps. Let's embark on a journey through Docker's realms, unraveling its essence and exploring the magical features that have made it the leading platform in the realm of containers. 
**_Container Creation Magic:_** At the core of Docker's sorcery lies the ability to encapsulate applications and their dependencies within ephemeral containers. These containers, akin to mystical vessels, encapsulate the very essence of an application, ensuring it runs consistently across diverse environments. Docker simplifies the creation of these containers through Dockerfiles—textual scripts that articulate the magical recipe for each application. Developers can weave their spells through the simple magic of a Dockerfile, creating containers that carry the magic of their applications. The beauty of Docker's container creation lies in its consistency; developers can cast their spells in one environment, and Docker ensures they unfold seamlessly in another. **Docker Hub: The Grand Repository of Spells:** Docker Hub stands as the bustling marketplace where containers convene. It is a grand repository, a library where wizards share their magical creations with the world. Docker Hub hosts base images that serve as the foundation for containers and fully-fledged applications ready for invocation. Docker Hub fosters a community where wizards exchange magical artifacts, quickly pulling and pushing container images. This collaborative dimension enhances the magic, enabling the swift sharing and improvement of spells across the vast DevOps landscape. **_Docker Compose: The Conductor's Baton:_** Docker Compose emerges as the conductor's baton in the orchestration of containers. It allows wizards to orchestrate complex compositions of containers, defining how they interact and perform their magical symphony. Developers declare the services, networks, and volumes that comprise their applications through a simple YAML file, empowering Docker Compose to conduct the ensemble. Docker Compose simplifies the management of multi-container applications, enabling developers to focus on the art of creation rather than the intricacies of orchestration. It ensures the containers dance harmoniously, creating a seamless and magical performance. **3. Alternative Containerization Tools:** As the magical world of containerization evolves, alternative tools arise, challenging Docker's dominance. One alternative is Podman, a daemon-less, rootless container experience that introduces a fresh perspective to container runtimes. **_Podman: The Daemonless Enigma:_** Podman is a unique alternative to Docker, providing a daemonless and rootless container experience. Unlike Docker, Podman operates without a central daemon process, offering greater flexibility and reduced attack surfaces. This daemonless nature allows each container to be managed as an individual process, aligning with principles of security and isolation. Podman's compatibility with Docker syntax eases the transition for those well-versed in Docker sorcery. It seamlessly integrates with Docker workflows, making it an enticing alternative without a paradigm shift. **_Comparing Container Runtimes:_** Docker, Podman, containerd, and CRI-O stand as notable contenders in container runtimes. With its daemon-centric approach, Docker remains a stalwart choice for many, providing a robust and battle-tested solution suitable for various scenarios. Podman, with its daemon-less design, excels in resource-constrained environments where heightened security is paramount. containerd, focusing on simplicity and modularity, serves as the core container runtime for Kubernetes. 
It provides the basic building blocks for containerization, allowing higher-level tools to orchestrate and manage containers. CRI-O, born from the Kubernetes community, is designed specifically for Kubernetes clusters, adhering to the specifications of Container Runtime Interface (CRI). When choosing a container runtime, considerations include environmental constraints, security requirements, and compatibility with existing tools. Each runtime has its strengths and weaknesses, offering diverse tools for the discerning DevOps sorcerer. **4. Container Orchestration with Kubernetes:** In the grand theater of containerization, where spells are cast within containers and orchestrated to create a harmonious symphony, Kubernetes takes center stage as a powerful container orchestration tool. Kubernetes: The Grand Maestro of Orchestration: Kubernetes, often affectionately known as "K8s," is an open-source container orchestration platform that orchestrates the deployment, scaling, and management of containerized applications. Born from Google's internal container orchestration systems, Kubernetes provides a universal, extensible platform for automating the lifecycle of containers. **_Automating Deployment with Kubernetes:_** Kubernetes brings forth the power to automate the deployment of containerized applications. Through the mystical concept of "Pods," Kubernetes ensures that applications are seamlessly deployed and run in their designated environments. Declarations of deployment configurations, expressed as YAML spells, define the desired state of applications, and Kubernetes tirelessly works to bring the actual state in line with these aspirations. Declarations become spells, where wizards define how many replicas of an application should exist, what containers they should house, and other crucial aspects. Kubernetes transforms these desires into reality, significantly reducing the manual burden on sorcerers and providing a consistent and reliable deployment mechanism. **_Scaling Magic with Kubernetes:_** As the need for magical scalability arises, Kubernetes extends its powers by orchestrating the scaling of applications. Through the mystical powers of "ReplicaSets," Kubernetes dynamically adjusts the number of replicas based on the demands of the mystical load placed upon the application. Kubernetes ensures the correct number of replicas dance in unison to meet the audience's demands. This auto-scaling feature guarantees optimal resource utilization and peak performance under varying workloads. The orchestration of scaling becomes a seamless part of Kubernetes' magical arsenal. **_Management Mastery by Kubernetes:_** Kubernetes excels not only in deployment and scaling but also in ongoing management tasks. Through the " Rolling Updates " feature, spells or applications undergo seamless updates with minimal downtime. Self-healing mechanisms reincarnate failed containers, ensuring that the magical dance of containers remains uninterrupted. In the grand tapestry of container orchestration, Kubernetes emerges as the maestro, conducting the symphony of containerized applications with precision and elegance. Its ability to automate deployment, scale applications dynamically, and manage the ongoing lifecycle of containers makes it an indispensable tool in the repertoire of any DevOps sorcerer. As we traverse the magical landscape, we'll continue to uncover more facets of Kubernetes and its spellbinding capabilities. 
The orchestration journey is an intricate dance, and Kubernetes stands ready as the grand maestro, orchestrating containerized applications with finesse and power. **5. Managing Containerized Infrastructure with Terraform:** In the mystical landscape of containerization, where spells are woven to bring applications to life, the architecture beneath the enchanted containers holds equal importance. Enter Terraform, a potent sorcery known as Infrastructure as Code (IaC), seamlessly bridges the realm of containers with the infrastructure fabric. Let us delve into the art of managing containerized infrastructure with Terraform, exploring how it harmonizes with the magic of containers. Terraform, a versatile IaC tool, allows sorcerers to define and provision infrastructure using declarative configuration files. In the realm of containerization, Terraform becomes the architect's wand, enabling the creation and management of the underlying infrastructure upon which containers perform their dance. Examples of Terraform's prowess can be witnessed across various cloud platforms. Imagine crafting a spell, a Terraform configuration file, that summons not just containers but an entire cloud-native infrastructure. With a few lines of Terraform incantations, one could conjure an Elastic Kubernetes Service (EKS) cluster on AWS. Similarly, a magical Azure Kubernetes Service (AKS) deployment becomes a reality on Azure. Terraform's cross-cloud compatibility ensures that the spells remain consistent across diverse environments. **6. Security Considerations in Containerization:** In the mystical realm of containerization, where spells create virtual realms for applications, the specter of security must be faced head-on. Security, a paramount concern, demands vigilant guardianship against evil forces. Let us uncover the security best practices that fortify containerized environments and explore tools like Clair and Docker Content Trust that stand as sentinels at the gate. Security best practices in containerization involve encapsulating spells within secure containers, regularly updating spells to patch vulnerabilities, and enforcing the principle of least privilege. Tools like Clair emerge as guardians, conducting vulnerability scans on container images, ensuring that even the slightest crack in the magical armor is detected and remedied. Docker Content Trust, a shield against malicious spells, leverages cryptographic signatures to ensure the integrity and authenticity of container images. By signing images and verifying signatures during deployment, Docker Content Trust guards against unauthorized tampering, enhancing the overall security posture. **7. Monitoring and Logging for Containers:** Observability becomes a crucial part of the magical kingdom of containerization, where spells are cast and containers dance. Tools like Prometheus and Grafana emerge as mystical seers, offering insights into the performance and health of containerized applications. Meanwhile, the ELK Stack (Elasticsearch, Logstash, Kibana) keeps chronicles, capturing the tales of events in the logs. Prometheus, a titan in the monitoring pantheon, collects and stores metrics from containerized applications. Paired with Grafana, an artist in visualization, Prometheus paints a vivid picture of the application's heartbeat, helping sorcerers identify performance bottlenecks and anomalies. The ELK Stack, on the other hand, offers a centralized logging solution. 
Elasticsearch stores logs, Logstash processes and transforms them, and Kibana provides a magical window into the logs. This triumvirate ensures that every spell cast and every container dance move is recorded and accessible for analysis. **8. Case Studies: Real-world Implementations:** In the scrolls of reality, organizations wield containerization tools as potent weapons in their DevOps arsenal. Witness real-world implementations where containerization has brought tangible improvements in deployment speed, resource utilization, and scalability. Organizations, both ancient and modern, showcase their prowess. Faster deployments reduce time-to-market, efficient resource utilization optimizes costs, and scalability ensures the infrastructure grows harmoniously with demand. These case studies illuminate the transformative power of containerization in the hands of adept sorcerers. **9. Integrating Containerization into CI/CD Pipelines:** As the saga of DevOps unfolds, Continuous Integration (CI) and Continuous Deployment (CD) pipelines emerge as the enchanted bridges connecting development, testing, and deployment. Containerization, with its consistency and portability, integrates seamlessly into these pipelines, becoming the catalyst for efficient testing and deployment practices. Learn the secrets of incorporating containers into CI/CD pipelines, where spells are tested within containerized environments, ensuring that the same chants perform consistently across different stages. Discover the advantages of containerized environments for testing, where dependencies are encapsulated, and environments are reproducible, providing a stable ground for spell-testing rituals. **10. Challenges and Future Trends:** No magical journey is without its challenges. Unravel the common challenges in adopting containerization tools, such as managing complex orchestration, ensuring security, and navigating the complexities of multi-cloud environments. Let the knowledge within these scrolls guide you in overcoming obstacles that may arise on your DevOps quest. Peering into the crystal ball, we glimpse the future trends in containerization. Explore the emergence of serverless containers, where spells are cast without the need to manage the underlying infrastructure. Delve into the orchestral harmonies of multi-cloud environments, where containers traverse different clouds, creating a symphony of resource utilization and flexibility. **Conclusion:** In the grand conclusion of our mystical journey through containerization, we find that these tools, like magical artifacts, play a pivotal role in the DevOps odyssey. The agility, scalability, and consistency they bestow upon application deployment propel organizations into a new era of efficiency. Organizations can elevate their sorcery by understanding and adeptly implementing these tools, delivering robust, scalable applications with unparalleled efficiency. As we close the scrolls, let the knowledge within guide you on your DevOps adventure as you continue to weave spells, orchestrate containerized dances, and shape the destiny of applications in the ever-evolving technology landscape. Cheers! Join me on [Freelancer.com](https://www.freelancer.com/get/norbert15?f=give )
kingkunte_
1,740,215
NestJs environment setup, 2024-01-24
Enter this in cmd or the VS Code terminal (it does not work in PowerShell): npm i -g @nestjs/cli ...
0
2024-01-24T14:52:43
https://dev.to/sunj/nestjs-hwangyeong-seoljeong-2024-01-24-50p0
nestjs
Enter this in cmd or the VS Code terminal (it does not work in PowerShell): ``` npm i -g @nestjs/cli ```
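A possible next step once the CLI is installed, assuming you want a fresh project, is to scaffold and run one; the project name `my-app` below is just a placeholder:

```bash
# Scaffold a new NestJS project (pick a package manager when prompted)
nest new my-app

# Move into the project and start the development server with file watching
cd my-app
npm run start:dev
```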
sunj
1,755,609
The continued rise of Handheld Gaming
My very first handheld game was probably the ring toss game… But in all seriousness it was my...
0
2024-02-08T14:19:39
https://dev.to/deusinmachina/the-continued-rise-of-handheld-gaming-5hkf
gaming
![Ring toss Water Game](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mgjwbjuyef27ehhsy2tl.jpg) My very first handheld game was probably the ring toss game… But in all seriousness, it was my father's Game Boy in the late 90s. I distinctly remember playing Star Wars and I don't think I ever beat it. ![Original Game Boy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4hn6k9xsw49bpl8ekhj.jpeg) ![Star Wars for the Game Boy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2jlq7gceiww4x56f5g9.png) I remember handheld gaming being all the rage growing up, and I have fond memories of playing on Game Boys, Game Boy Colors, Game Boy Advances, PSPs, and DSes. Like many, after about the PSP era, I moved away from consoles and handhelds in favor of PCs. But since getting back into handheld gaming with my Steam Deck, I was shocked to see there had been a Cambrian explosion of handhelds. Many are from major players in the tech space, though some are from companies I've never heard of. It feels like we are in a renaissance, or at least that is what I thought the angle of this article was going to be when I started writing it. But in actually researching handheld history, I've come to the conclusion that while I had abandoned it, it hadn't gone anywhere. ## A Lightning Tour Let's take a look back at one of the earliest handhelds. For the sake of brevity I'll only consider handhelds that could play more than one game, so no Game and Watches or single-game electronics. In this vein, the honor of being the first handheld gaming console that fits our criteria goes to the Microvision, created in 1979. ![Microvision handheld device](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kratvbvy7ygshhmsj0co.jpg) It looked more like the iClicker I had in college than a gaming device, but it had 12 different games that it could play. Clocking in with a CPU that ran at 1 kHz, a number I have never seen next to a CPU, it's honestly hugely impressive it could play anything. But just 10 years later we would be introduced to one of the most popular handhelds of all time, by one of handheld gaming's most prolific supporters, Nintendo. The Game Boy would be a smash hit, bringing handheld gaming to the masses. Nintendo has certainly not slowed down since then, and if we look at the timeline from the Game Boy to the Switch, we get a history that spans from 1989 to 2021 with no signs of stopping. * **Game Boy**: Released on April 21, 1989, in Japan, July 31, 1989, in North America, and September 28, 1990, in Europe. The OG. * **Game Boy Pocket**: Released in 1996, it was a smaller, lighter Game Boy that required fewer batteries. * **Game Boy Light**: Released only in Japan on April 14, 1998, this model featured a backlit screen. * **Game Boy Color**: Released on October 21, 1998, in Japan, November 18, 1998, in North America, and November 23, 1998, in Europe. Backwards compatible with the Game Boy! Contained a 15-bit color palette instead of the 4 shades in the original Game Boy. * **Game Boy Advance**: Released on March 21, 2001, in Japan, June 11, 2001, in North America, and June 22, 2001, in Europe. Backwards compatible again! Hardware that was more powerful than the SNES from a decade prior. * **Game Boy Advance SP**: Released in February 2003, it featured a front-lit screen and a clamshell design. * **Game Boy Micro**: Released on September 13, 2005, in Japan, and subsequently in other regions. A slimmer version of the original GBA. 
* **Nintendo DS**: Released on November 21, 2004, in North America, December 2, 2004, in Japan, February 24, 2005, in Australia, and March 11, 2005, in Europe. Successor to and backwards compatible with the Game Boy Advance. * **Nintendo DS Lite**: Released in early 2006, it was a slimmer and lighter version of the original DS. * **Nintendo DSi**: Released on November 1, 2008, in Japan, and in 2009 in other regions. No slot for GBA games. Two cameras, SD card slot, and other enhancements. * **Nintendo DSi XL/LL**: Released in 2009, it was a DSi with larger screens. * **Nintendo 3DS**: Released on February 26, 2011, in Japan, March 25, 2011, in Europe, March 27, 2011, in North America, and March 31, 2011, in Australia. Successor to the DS with the most notable features being the 3D graphics, 3 cameras, and AR capabilities. * **Nintendo 3DS XL**: Released in 2012, this version featured larger screens. * **Nintendo 2DS**: Released on October 12, 2013, as a budget version of the 3DS without 3D capability. * **New Nintendo 3DS**: Released on October 11, 2014, in Japan, and 2015 in other regions. Enhanced performance, better 3D, a C-Stick, and built-in NFC. * **New Nintendo 3DS XL**: Released alongside the New Nintendo 3DS, with larger screens. * **New Nintendo 2DS XL**: Released on July 13, 2017, as an upgraded version of the 2DS. * **Nintendo Switch**: Released on March 3, 2017, a hybrid console, usable docked as a home console and undocked as a handheld device. * **Nintendo Switch Lite**: Released on September 20, 2019, features non-removable Joy-Cons. * **Nintendo Switch OLED**: Released on October 8, 2021, it's a Switch, but with a nicer screen. Phew! That was a lot. While compiling this list, a couple of things became apparent to me. 1. Nintendo has been keeping the handheld dream alive, even after other hardware manufacturers had left the space. 2. Nintendo _loves_ to sell you the same handheld over and over again. Multiple Game Boys, DSes, and Switches. I wouldn't be surprised if we see a Switch XL at some point. And you already know there will be a Switch 2 Lite. But while Nintendo never stopped making handhelds, the list of handhelds we got from 1989 to ~2013 ranged from weird and wacky to cool, and everything in between. Here are some of them: * **Atari Lynx (1989)** - First handheld console with a color LCD screen. * **Sega Game Gear (1990)** - Competitor to the Nintendo Game Boy, known for its backlit color screen. * **Sega Nomad (1995)** - Handheld version of the Sega Genesis, allowing players to use Genesis cartridges. * **Tapwave Zodiac (2004)** - Short-lived device that functioned as both a PDA and a gaming console. * **Sony PlayStation Portable (PSP) (2005)** - Known for its powerful hardware, multimedia capabilities, and wide game selection. I had one with a skin covered in green skulls, and it was incredible. * **GP2X Wiz (2009)** - An open-source, Linux-based handheld console produced by GamePark Holdings. * **Pandora (2010)** - A combination of a handheld game console and a mini PC, running Linux. * **Sony PlayStation Vita (2012)** - Successor to the PSP. Featured a touch screen, rear touchpad, and high-quality graphics. Please bring it back, Sony. * **Nvidia Shield Portable (2013)** - An Android-based device that combined a game controller with a screen, capable of streaming games from a PC. ![Zodiac Tapwave raunchy advertisement](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jv9ru6izkn6fb22atjal.jpg) One interesting thing to note about this time: 
With devices like the Nvidia Shield, we begin to see the rise of handheld streaming devices. This is a lot more common now with the likes of the PlayStation Portal and Logitech G Cloud (with an honorable mention to the Steam Link). I would be remiss if I didn't at least talk about the most common form of handheld gaming of our generation: mobile gaming. While I do remember playing games like Dragon's Lair on my father's Sprint flip phone... ![Dragon's Lair video game](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zn5pe0tpjoc13nd0m7ia.png) It was when we got smartphones that I really saw the quality of mobile gaming blow up. I remember games like Zenonia back in 2009 being awesome. I couldn't believe I was playing such a fully featured RPG right in the palm of my hand at that time. I remember sitting back and playing it for hours on my landscape-oriented iPod in a wooden chair with no cushion; my back and eyes would never permit that now. ![Zenonia rpg mobile game](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ifrasnvmtcyqnttzcqop.jpg) This was pre-Candy Crush too, so the dark patterns were either nonexistent, or much simpler at the time. You could even buy a game once, and never pay for it again (shocking, I know). And remember Infinity Blade? Mobile gaming may have truly peaked in 2010. ![Infinity Blade mobile game](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eb6awo8yr5e102yqo3c1.png) But if we include phones on this tour, we'd be here all day, so let's get back to dedicated handhelds. Not only have we had great handhelds for the last 35 years; if you look at the top 5 best-selling consoles of all time... 1. PlayStation 2 2. Nintendo DS 3. Nintendo Switch 4. Game Boy/Game Boy Color 5. PlayStation 4 We see that three out of the top 5 are actually handhelds. So with that fact, and with this article having mentioned over two dozen handheld devices at this point, it's safe to say handhelds, and handheld gaming, never went away. In fact I'd say it's more popular now than it has ever been! The experience is also so much better than it used to be. For one, I don't have to go through dozens of AA batteries a month anymore. And two, they don't feel cut down. Using my previously mentioned Star Wars Game Boy game as an example, I recently found out that this is what the NES version looked like. ![NES color version of Star Wars](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vusdwxguqgns36dtc53e.png) Shocking! I would have loved to have my Game Boy version look like that back in the 90s. By the Game Boy Advance era in 2001, we were able to create incredible 2D handheld experiences, but we were only just _getting_ to the point where we could render anything impressive in 3D. Sure, you could program a 3D GBA game if you were a literal programming god like Randal Linden, but unless you are willing to write 200,000 lines of highly optimized assembly, we still needed a few generations before this was feasible for your average programmer. ![Quake running on the Game Boy Advance](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zy3lukupny7mpsjrmcte.png) But fast forward to the PSP era, and we were beginning to see what no-compromise 3D handheld experiences looked like. 
![Tekken Dark Resurrection on the PSP](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vhoaekbg66px6vw5bsm2.png) And by the time we got to the Switch, it was clear that while certain games might have struggled to hit their 30fps target sometimes, we weren't getting stripped-down versions of games originally made for PCs and consoles anymore. ![Breath of the wild](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jehx8epx0pjbn9amac6l.jpg) And now, in more modern times (a sentence that will age like milk), we see many other players in the space besides Sony and Nintendo. There are Android, Windows, and Linux-based handhelds, some from well-known brands, and others from rising stars... * **GPD XD**: 2015 * **GPD Win**: 2016 * **GPD XD Plus**: 2018 * **Anbernic RG351P**: 2020 * **Retroid Pocket 2S**: 2020 * **Miyoo Mini**: 2021 * **Ayn Odin**: 2022 * **Steam Deck**: 2022 * **Miyoo Mini +**: 2023 * **Logitech G Cloud**: 2023 * **Lenovo Legion Go**: 2023 * **Ayn Odin 2**: 2023 * **Asus ROG Ally**: 2023 * **Steam Deck OLED**: 2023 * **PlayStation Portal**: 2023 * **MSI Claw**: Sometime in 2024 And it's exciting. It feels like we've hit an inflection point with handhelds. With devices like my Steam Deck, I don't feel like I'm giving anything up when I'm using it. But at the same time, I'm excited for every new iteration. It feels just like how it was in the late 90s and 2000s where every console generation, or graphics card generation, was a noticeable improvement over the last. And with the Switch 2 rumored to be announced later this year, it seems like that excitement isn't going to be slowing down anytime soon. ## Call To Action 📣 Hi 👋 my name is Diego Crespo and I like to talk about technology, niche programming languages, and AI. I have a [Twitter](https://twitter.com/deusinmach) and a [Mastodon](https://mastodon.social/deck/@DiegoCrespo) if you’d like to follow me on other social media platforms. If you liked the article, consider checking out my [Substack](https://www.deusinmachina.net/). And if you haven’t, why not check out another article of mine listed below! Thank you for reading and giving me a little of your valuable time. A.M.D.G https://www.deusinmachina.net/p/the-evolution-of-valves-source-engine
deusinmachina
1,740,302
Getting Started with TypeScript: A Comprehensive Guide
Introduction TypeScript is a superset of JavaScript that brings static typing to the...
0
2024-01-29T08:21:35
https://dev.to/jps27cse/getting-started-with-typescript-a-comprehensive-guide-1djm
typescript, react, webdev, javascript
## Introduction TypeScript is a superset of JavaScript that brings static typing to the language. Developed by Microsoft, it aims to enhance the development experience by catching errors during development rather than at runtime. TypeScript code is transpiled to JavaScript, making it compatible with all JavaScript environments. In this guide, we will delve into various aspects of TypeScript, starting from its installation to more advanced concepts like inheritance and generics. ## Installation To get started with TypeScript, you first need to install it. You can use npm (Node Package Manager) to install TypeScript globally using the following command: ```bash npm install -g typescript ``` This installs the TypeScript compiler (`tsc`), which you can use to compile your TypeScript code into JavaScript. ## How TypeScript Works TypeScript adds a static type-checking layer on top of JavaScript. This means that during development, TypeScript checks for type errors and provides meaningful feedback, enabling developers to catch potential issues early in the development process. After development, the TypeScript code is transpiled into JavaScript, which can then be run on any JavaScript runtime. ## TypeScript Configuration TypeScript projects often include a `tsconfig.json` file, which contains configuration options for the TypeScript compiler. This file allows you to specify compiler options, include or exclude files, and define the project's structure. Example `tsconfig.json`: ```json { "compilerOptions": { "target": "es5", "module": "commonjs", "strict": true }, "include": ["src/**/*.ts"], "exclude": ["node_modules"] } ``` ## Data Types ### Built-in Data Types #### Number The `number` type represents both integer and floating-point numbers. ```typescript let age: number = 25; let price: number = 29.99; ``` #### String The `string` type is used to represent textual data. ```typescript let name: string = "John"; ``` #### Boolean The `boolean` type represents true/false values. ```typescript let isStudent: boolean = true; ``` #### Undefined The `undefined` type is used for variables that have been declared but not assigned a value. ```typescript let undefinedVar: undefined; ``` #### Null The `null` type represents an intentional absence of any object value. ```typescript let nullVar: null = null; ``` #### Void The `void` type is often used as the return type of functions that do not return a value. ```typescript function logMessage(): void { console.log("Hello, TypeScript!"); } ``` ### User-Defined Data Types #### Arrays Arrays allow you to store multiple values of the same type. ```typescript let numbers: number[] = [1, 2, 3, 4, 5]; ``` #### Enums Enums are a way to organize and represent a set of named values. ```typescript enum Color { Red, Green, Blue } let myColor: Color = Color.Green; ``` #### Classes Classes provide a blueprint for creating objects with methods and properties. ```typescript class Person { constructor(public name: string) {} } let person = new Person("Alice"); ``` #### Interfaces Interfaces define the structure of objects and can be used to enforce a specific shape on objects. ```typescript interface Shape { area(): number; } class Circle implements Shape { constructor(private radius: number) {} area(): number { return Math.PI * this.radius ** 2; } } ``` #### Tuple Data Type Tuples allow you to express an array where the type of a fixed number of elements is known. 
```typescript let person: [string, number] = ["John", 30]; ``` #### Enum Data Type Enums allow you to create a set of named constant values. ```typescript enum Direction { Up, Down, Left, Right } let playerDirection: Direction = Direction.Up; ``` #### Object Data Type Objects in TypeScript can have a specific shape defined by interfaces or types. ```typescript interface Person { name: string; age: number; } let person: Person = { name: "Alice", age: 25 }; ``` #### Custom Data Type You can create custom types using the `type` keyword. ```typescript type Point = { x: number; y: number }; let point: Point = { x: 10, y: 20 }; ``` #### Class Typescript Classes are a fundamental part of object-oriented programming in TypeScript. ```typescript class Animal { constructor(public name: string) {} makeSound(): void { console.log("Some generic sound"); } } let cat = new Animal("Fluffy"); cat.makeSound(); // Output: Some generic sound ``` #### Inheritance Inheritance allows a class to inherit properties and methods from another class. ```typescript class Dog extends Animal { makeSound(): void { console.log("Woof! Woof!"); } } let dog = new Dog("Buddy"); dog.makeSound(); // Output: Woof! Woof! ``` #### Abstract Class Abstract classes cannot be instantiated and are often used as base classes. ```typescript abstract class Shape { abstract area(): number; } class Circle extends Shape { constructor(private radius: number) { super(); } area(): number { return Math.PI * this.radius ** 2; } } ``` #### Encapsulation Encapsulation involves bundling the data (attributes) and methods (functions) that operate on the data into a single unit, i.e., a class. ```typescript class BankAccount { private balance: number = 0; deposit(amount: number): void { this.balance += amount; } withdraw(amount: number): void { if (amount <= this.balance) { this.balance -= amount; } else { console.log("Insufficient funds"); } } getBalance(): number { return this.balance; } } let account = new BankAccount(); account.deposit(1000); account.withdraw(500); console.log(account.getBalance()); // Output: 500 ``` #### Function Signature Function signatures define the structure of a function, including its parameters and return type. ```typescript type AddFunction = (a: number, b: number) => number; let add: AddFunction = (a, b) => a + b; console.log(add(3, 5)); // Output: 8 ``` #### Interface Interfaces define contracts for objects, specifying the properties and methods they must have. ```typescript interface Printable { print(): void; } class Document implements Printable { print(): void { console.log("Printing document..."); } } let document: Printable = new Document(); document.print(); // Output: Printing document... ``` #### Generic Type Generics allow you to create reusable components that can work with a variety of data types. ```typescript function identity<T>(arg: T): T { return arg; } let result: number = identity(42); ``` This comprehensive guide covers the fundamental concepts of TypeScript, from installation to advanced features like generics and encapsulation. By leveraging TypeScript's static typing and object-oriented capabilities, developers can build more robust and maintainable JavaScript applications. Whether you're a beginner or an experienced developer, TypeScript opens up new possibilities for writing scalable and error-resistant code. Follow me on : [Github](https://github.com/jps27cse) [Linkedin](https://www.linkedin.com/in/jps27cse/)
jps27cse
1,740,320
Hashes and Symbols
Codecademy Cheatsheet Ruby Symbols In Ruby, symbols are immutable names primarily used as hash keys...
0
2024-01-24T17:28:34
https://dev.to/aizatibraimova/hashes-and-symbols-p4m
_Codecademy Cheatsheet_ **Ruby Symbols** In Ruby, _symbols_ are immutable names primarily used as hash keys or for referencing method names. ``` my_bologna = { :first_name => "Oscar", :second_name => "Meyer", :slices => 12 } puts my_bologna[:second_name] # => Meyer ``` <u>_Symbol Syntax_</u> Symbols always start with a colon (`:`). They must be valid Ruby variable names, so the first character after the colon has to be a letter or underscore (`_`); after that, any combination of letters, numbers, and underscores is allowed. Make sure you don’t put any spaces in your symbol name—if you do, Ruby will get confused. ``` :my symbol # Don't do this! :my_symbol # Do this instead. ``` <u>_What are Symbols Used For?_</u> Symbols pop up in a lot of places in Ruby, but they’re primarily used either as hash keys or for referencing method names. ``` sounds = { :cat => "meow", :dog => "woof", :computer => 10010110, } ``` Symbols make good hash keys for a few reasons: 1. They’re immutable, meaning they can’t be changed once they’re created; 2. Only one copy of any symbol exists at a given time, so they save memory; 3. Symbol-as-keys are faster than strings-as-keys because of the above two reasons. --- **Ruby Hashes, Symbols, & Values** In Ruby hashes, key symbols and their values can be defined in either of two ways, using a `=>` or `:` to separate symbol keys from values. ``` my_progress = { :program => "Codecademy", :language => "Ruby", :enthusiastic? => true } #Key symbols and their values can be defined with a =>, also known as a hash rocket. my_progress = { program: "Codecademy", language: "Ruby", enthusiastic?: true } #Key symbols and their values can also be defined with the colon (:) at the end of the symbol followed by its value. ``` --- **Converting Between Symbols and Strings** Converting between strings and symbols is a snap. ``` :sasquatch.to_s # ==> "sasquatch" "sasquatch".to_sym # ==> :sasquatch ``` The `.to_s` and `.to_sym` methods are what you’re looking for! ``` strings = ["HTML", "CSS", "JavaScript", "Python", "Ruby"] symbols = [] strings.each do |s| symbols.push(s.to_sym) end print symbols ``` Besides using `.to_sym`, you can also use `.intern`. This will internalize the string into a symbol and works just like `.to_sym`: ``` "hello".intern # ==> :hello ``` Ruby’s `.to_sym` method can convert a string to a symbol, and `.to_i` will convert a string to an integer. --- **Ruby .select Method** In Ruby, the `.select` method can be used to grab specific values from a hash that meet a certain criteria. ``` olympic_trials = { Sally: 9.58, John: 9.69, Bob: 14.91 } olympic_trials.select { |name, time| time < 10.05 } #The example above returns {:Sally=>9.58, :John=>9.69} since Sally and John are the only keys whose values meet the time < 10.05 criteria. ``` --- **Ruby .each_key & .each_value** In Ruby, the `.each_key` and `.each_value` methods are used to iterate over only the keys or only the values in a hash ``` my_hash = { one: 1, two: 2, three: 3 } my_hash.each_key { |k| print k, " " } # ==> one two three my_hash.each_value { |v| print v, " " } # ==> 1 2 3 ``` --- **The Case Statement** `if` and `else` are powerful, but we can get bogged down in i`f`s and `elsif`s if we have a lot of conditions to check. Thankfully, Ruby provides us with a concise alternative: the `case` statement. The syntax looks like this: ``` case language when "JS" puts "Websites!" when "Python" puts "Science!" when "Ruby" puts "Web apps!" else puts "I don't know!" 
end ``` --- **A NIGHT AT THE MOVIES** ``` movies = { harry_potter: 4 } puts "What would you like to do?" choice = gets.chomp case choice when "add" puts "What movie would you like to add?" title = gets.chomp if movies[title.to_sym].nil? puts "What rating does the movie have? (Type a number from 0 to 4)" rating = gets.chomp movies[title.to_sym] = rating.to_i puts "#{title} was added with a rating of #{rating}!" else puts "That movie already exists! Its rating is #{movies[title.to_sym]}" end when "update" puts 'What movie would you like to update?' title = gets.chomp if movies[title.to_sym].nil? puts "There has been an error! Please check the spelling again." else puts "What is the new rating?" rating = gets.chomp movies[title.intern] = rating.to_i puts "#{title} has been updated with a new rating of #{rating}." end when "display" movies.each { |movie, rating| puts "#{movie}: #{rating}"} when "delete" puts "What is the title of the movie you would like to delete?" title = gets.chomp if movies[title.intern].nil? puts "The movie was not found" else movies.delete(title.to_sym) puts "#{title} has been removed." end else puts "Error!" end ```
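As a small follow-up sketch (the extra titles and ratings below are invented for illustration), the same symbol-keyed hash responds nicely to the methods covered above:

```ruby
movies = { harry_potter: 4, memento: 3, the_room: 1 }

# .select keeps only the pairs whose block returns true
good_movies = movies.select { |title, rating| rating >= 3 }
puts good_movies            # => {:harry_potter=>4, :memento=>3}

# .each_key and .each_value walk one side of the hash at a time
movies.each_key { |title| print title, " " }      # harry_potter memento the_room
movies.each_value { |rating| print rating, " " }  # 4 3 1

# User input arrives as a string; .to_sym (or .intern) turns it into a hash key
puts movies["memento".to_sym]  # => 3
```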
aizatibraimova
1,740,578
Hosting your static website in S3 by leveraging cloudfront
S3 as a service can be linked from on-premise and other cloud services like Amazon Ec2 a computing...
0
2024-01-27T17:48:13
https://dev.to/gathungu/s3-breakdown-3koc
aws
S3 as a service can be accessed from on-premises environments and from other AWS services like Amazon EC2, a compute service that allows you to 'rent' servers in AWS. We will be hosting a static website in S3, which does not require any compute. Before we set up the environment, we need to understand that when adopting the cloud we should take a security-first approach. We will leverage another AWS service known as CloudFront. It is a web service that speeds up the distribution of your static and dynamic web content. It does this through a worldwide network of data centers. It delivers the content over HTTP and HTTPS, adding a secure, encrypted layer that helps protect against common cyber attacks. From the S3 bucket we had created, upload your website by selecting all files and folders. After a successful upload, your console should show a green success message at the top ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1tbxgterx3vair6yyhnb.png) In the bucket's Permissions tab, click on Edit and disable Block public access ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xc48oed1pn99gpmvawqe.png) Deactivate the feature; this is temporary ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n5gtlw4et3i15qdzpqi5.png) Type confirm ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5zbmow6t2rmaj48n2d2c.png) Navigate down to the bucket policy, click on Edit, and paste this policy `{ "Version": "2012-10-17", "Statement": [ { "Sid": "PublicReadGetObject", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::bucket-name/*" } ] }` Replace bucket-name with your bucket's name; it should look as follows ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rgbznrqiq5bo3nlb6lj6.png) Navigate down and save changes. Click on Properties in the bucket settings and scroll down to Static website hosting ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xh61r3jpqmtgoq9pgix9.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/py1k0fs7ithsj3svcoo8.png) Click on the Edit button and enable static website hosting. In the index document field, add your index.html as the index document; for the error handling file, type error/index.html. To retrieve the endpoint for your bucket, click the Properties tab, scroll to the bottom, and click the copy icon next to the Bucket website endpoint; open it in a new tab and it will load your index.html page. Scroll down and save changes. Navigate to the search bar and search for CloudFront ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ge82o5jfj8xifhxub4w.png) Create a CloudFront distribution ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwdn0lfl2l2n1vgfdgoa.png) Do not select the website endpoint suggestion for the origin; instead, scroll down to Origin access and use the Origin access control settings, which allow S3 to grant CloudFront access. Click on Create control settings ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ncx5vowef8cqg28pznb.png) Name it to mirror your S3 bucket name (you can add a CF abbreviation to show it is for CloudFront), then click Create. 
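For reference, when you later attach this origin access control to the distribution, CloudFront will prompt you to update the bucket policy (that step appears below); the generated statement generally follows the sketch here, where the account ID, distribution ID, and bucket name are placeholders and the exact policy shown in your console may differ:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipalReadOnly",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```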
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7oy1mu6g2do11k6gl1nr.png) Scroll down to the Web Application Firewall section and do not enable protection ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/53rfp5v95649ml89izo9.png) For the Default root object, type index.html, leave the other default settings as they are, and create the distribution. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3i4csgx7qktjc4td1bh.png) There will be a splash screen suggesting you update the policy on your S3 bucket: copy the policy, navigate to the S3 Permissions tab, click on Edit, clear the previous policy you had pasted before, and paste the new one; navigate below and save changes. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xpnwnno6vwwibsb61whj.png) Navigate to the Block public access settings, click on Edit, re-activate the settings, and click on Save. Type confirm and click on Confirm ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4mnxy8cctajd9ebp5rv3.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1pc5ocks43wmg6rkmc2f.png) Navigate back to the CloudFront page, copy the distribution domain name, and paste it into a new tab. Your website will be all set up ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8pxjlhvojlco4ll8y3lz.png) Have a try, and give me feedback on how the exercise went.
gathungu
1,740,652
Enhancing CX: Building a DataMotion-Powered Message Center Widget for React (Part 1)
Introduction For any business, efficient and secure communication isn't just beneficial —...
0
2024-03-01T15:18:31
https://dev.to/janellephalon/enhancing-cx-building-a-datamotion-powered-message-center-widget-for-react-part-1-3nak
## Introduction For any business, efficient and secure communication isn't just beneficial — it's essential. In this guide, we're focusing on creating a widget that centralizes your message management, offering a clear view of your inbox within your dashboard. While today we're honing in on efficiently accessing messages, this foundation sets the stage for a more comprehensive message center. Future expansions could leverage [DataMotion's APIs](https://datamotion.com/portal/project/DataMotion/dashboard) to include sending messages, managing forms, initiating chats, and beyond — paving the way for a fully integrated communication hub. This tutorial stands out for its versatility and adaptability across various frontend frameworks. Whether you're a fan of Bootstrap, Tailwind, Materialize, or any other, you'll find the concepts and code examples here not just useful but practical and easy to implement. Our goal? To enhance your user interface by centralizing message management, cutting down on the need to juggle between portals for updates, and making the most of what DataMotion's API offers. Let's dive in to make your portal — or any application — more efficient and communicatively robust, one widget at a time. ![SMC Widget Complete](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ijcrwls3jnp8wy9l59d.png) --- ## Prerequisites To get the most out of this guide, have the following ready: - **React and Node.js Knowledge:** Ensure you have a basic understanding of React and Node.js — having a handle on the fundamentals will help you follow along more easily. - **A React Project:** You'll need an existing React project or dashboard to work with. I'll be using the [CoreUI Free React Admin Template](https://coreui.io/product/free-react-admin-template/) for this tutorial. - **DataMotion Account:** Make sure you have a [DataMotion](https://datamotion.com/portal/project/DataMotion/dashboard) account to access the APIs. We'll be focusing on the [Secure Message Center API](https://datamotion.com/secure-message-center/) to enhance our project with secure messaging capabilities. ## Setting up the Project Setting up your project environment is the first step. Skip this part if you're integrating into an existing dashboard. #### Cloning the Dashboard Template: If starting new, clone the [CoreUI template](https://github.com/coreui/coreui-free-react-admin-template): ``` # Navigate to your desired project directory cd [Your_Project_Directory] # Clone the CoreUI Free React Admin Template from GitHub git clone https://github.com/coreui/coreui-free-react-admin-template.git # Change your current directory to the newly cloned project cd coreui-free-react-admin-template # Install project dependencies using npm npm install # Start the project npm start ``` #### Environment Setup: First up, let's prepare our environment. Open the project's terminal to create a dedicated directory for our server, install the necessary packages, and set up our `.env` file for secure API communication. 
``` # Create a 'server' directory for our project mkdir server # Navigate to the 'server' directory cd server # Initialize a new npm project without the interactive prompt npm init -y # Install necessary dependencies: # - Axios for making HTTP requests # - Dotenv for managing environment variables # - Express for setting up our server # - Cors for enabling Cross-Origin Resource Sharing (CORS) npm install axios dotenv express cors # Create a '.env' file to securely store your DataMotion API credentials touch .env ``` Now, add your DataMotion credentials to the newly created `.env` file. This is crucial for authenticating our application with DataMotion's Secure Message Center API: ``` CLIENT_ID=your_datamotion_client_id CLIENT_SECRET=your_datamotion_client_secret ``` ## Building the Message Center Widget Let's dive into the heart of our project: building the message center widget. We'll start by setting up our server and establishing a connection with DataMotion. ``` # Create a 'server.js' file where our server setup will live touch server.js ``` #### Server Setup: Below, we're setting up a basic server using Node.js and Express. This server will manage our requests to DataMotion's API. ``` // Import necessary modules: Express for the server, Axios for HTTP requests, CORS, and Dotenv to use environment variables const express = require('express') const axios = require('axios') const cors = require('cors') require('dotenv').config() // Load our environment variables from the '.env' file const app = express() // Create an instance of the express application const PORT = process.env.PORT || 5000 // Define the port our server will run on app.use(cors()) // Add your API routes and handlers here // Start the server and listen on the defined PORT app.listen(PORT, () => { console.log(`Server running on port ${PORT}`) }) ``` #### Testing the Server: To ensure that your server is up and running, simply execute the following command: ``` node server.js ``` If your server is set up correctly, you should see a message in your terminal indicating that the server is running on the specified port: `Server running on port 5000`. > **Note:** Throughout the tutorial, whenever you make changes to the server code, remember to restart it using the `node server.js` command to apply those changes. #### Authentication with DataMotion: To securely access DataMotion's API, we need to authenticate our application. Here's how we obtain an access token using client credentials: ``` // Define a route to get an access token from DataMotion app.get('/token', async (req, res) => { try { // Make a POST request to DataMotion's token endpoint with our client credentials const response = await axios.post('https://api.datamotion.com/SMC/Messaging/v3/token', { grant_type: 'client_credentials', client_id: process.env.CLIENT_ID, // Use the client ID from our .env file client_secret: process.env.CLIENT_SECRET, // Use the client secret from our .env file }) res.json(response.data) } catch (error) { res.status(500).json({ message: 'Error fetching token', error: error.response.data }) } }) ``` #### Fetching Messages from Different Folders To tailor our secure message widget to show messages from various folders, we use the `folderId` parameter in the API request. This parameter helps us specify which folder we want to access. For example, setting `folderId=1` lets us retrieve messages from the inbox. 
Here's how different folders are identified within DataMotion's system, using their respective folderId values: - Inbox: `folderId = 1` - Trash: `folderId = 2` - Track Sent: `folderId = 3` - Drafts: `folderId = 4` - Outbox Trash: `folderId = 5` - Archive: `folderId = 6` - Deleted Inbox Trash: `folderId = 7` - Deleted Outbox Trash: `folderId = 8` Understanding these folder IDs allows you to customize which messages to display, enhancing the widget's functionality. Here's how we fetch messages from the inbox as an example: ``` // Define a route to get messages from the inbox app.get('/messages', async (req, res) => { try { // First, obtain an access token from DataMotion const tokenResponse = await axios.post('https://api.datamotion.com/SMC/Messaging/v3/token', { grant_type: 'client_credentials', client_id: process.env.CLIENT_ID, // Your DataMotion client ID client_secret: process.env.CLIENT_SECRET, // Your DataMotion client secret }); // Then, use the token to fetch messages from the inbox const messagesResponse = await axios.get('https://api.datamotion.com/SMC/Messaging/v3/content/messages/?folderId=1&pageSize=10&pageNumber=1&sortDirection=DESC&metadata=true', { headers: { Authorization: `Bearer ${tokenResponse.data.access_token}` // Authorization header with the access token } }); res.json(messagesResponse.data) } catch (error) { res.status(500).json({ message: 'Error fetching messages', error: error.response.data }) } }) ``` This approach ensures you can dynamically display messages from any folder, making your email widget both versatile and user-friendly. ## Building and Integrating the Message Center Widget into the Dashboard Our next step is to bring Secure Message Center API's power directly to our dashboard. We'll achieve this by creating a dedicated React component, `MessageCenterWidget`, which will serve as the core interface for displaying messages. #### Creating the Component File Before we start coding, we need to create the file where our `MessageCenterWidget` component will reside: ``` # Navigate to the src/components directory of your project cd src/components # Create the MessageCenterWidget.js file touch MessageCenterWidget.js ``` With the file created, open `MessageCenterWidget.js` in your code editor to begin implementing the component. #### Setting Up and Importing Dependencies At the top of the `MessageCenterWidget.js` file, import the necessary dependencies for our component. This includes React for building the component, Axios for HTTP requests, and specific CoreUI components for styling our message display. ``` // Importing React, Axios, and selected CoreUI components import React, { useEffect, useState } from 'react'; import axios from 'axios'; import { CCard, CCardBody, CCardHeader, CTable, CTableBody, CTableDataCell, CTableHead, CTableHeaderCell, CTableRow, } from '@coreui/react'; ``` #### Initializing State Variables Define the initial state within the `MessageCenterWidget` component to manage the fetched messages and the loading status. ``` // MessageCenterWidget component definition const MessageCenterWidget = () => { const [messages, setMessages] = useState([]) // Array to hold our fetched messages const [isLoading, setIsLoading] = useState(true) // Boolean to track loading status // Additional logic will follow } ``` #### Fetching and Displaying Emails Utilize the `useEffect` hook for fetching messages as soon as the component mounts, ensuring our dashboard is immediately populated with data. 
``` // Additional logic will follow useEffect(() => { const fetchMessages = async () => { try { // Step 1: Securely obtain an authentication token const tokenResponse = await axios.get('http://localhost:5000/token') // Step 2: Fetch messages using the obtained token const messagesResponse = await axios.get('http://localhost:5000/messages', { headers: { Authorization: `Bearer ${tokenResponse.data.access_token}`, // Use the token in the request header }, }) // Sort the fetched messages by creation time before updating state const sortedMessages = messagesResponse.data.items.sort( (a, b) => new Date(b.createTime) - new Date(a.createTime), ) setMessages(sortedMessages) } catch (error) { console.error('Error fetching messages:', error) } finally { setIsLoading(false) // Ensure to update the isLoading state } } fetchMessages() }, []) ``` #### Sorting Emails and Formatting Dates Implement a helper function to format message dates for a better user experience, and prepare the component for rendering. ``` // Function to format the date strings of messages const formatDate = (dateString) => { const date = new Date(dateString) const today = new Date() // Display format differs if the message is from today or an earlier date return date.toDateString() === today.toDateString() ? date.toLocaleTimeString() : date.toLocaleDateString() } // Render loading indicator or the message table based on the isLoading state if (isLoading) { return <div>Loading messages...</div> } ``` Be sure to export your `MessageCenterWidget` at the end of the file for it to be used elsewhere in your project. ``` // Make the MessageCenterWidget available for import export default MessageCenterWidget ``` #### Rendering the Secure Message Center Now that we've established the logic for fetching and displaying messages, let's focus on presenting this information through a user-friendly and visually appealing table. We'll enhance our table's appearance and functionality with some custom CSS styling. #### Adding CSS for Table Styling First, we add CSS to ensure our table looks organized and maintains a consistent layout. Place this CSS in an appropriate stylesheet or within your component using styled-components or a similar approach, depending on your project setup. ``` /* CSS styles for the message center table */ .message-center-table-container { max-height: 600px; table { width: 100%; table-layout: fixed; } thead th, tbody td { text-align: left; width: 33.33%; } } .title-style { font-size: 1rem; font-weight: 400; margin-top: auto; margin-bottom: auto; } ``` #### JSX Structure for the Secure Message Center Utilizing CoreUI's table components, we can neatly organize and display the sender, subject, and date of each message. Here's the JSX that incorporates our CSS styling: ``` return ( <CCard className="mb-4"> <CCardHeader>Secure Message Center</CCardHeader> <CCardBody> <div className="message-table"> <CTable align="middle" className="mb-4 border" hover responsive> <CTableHead> <CTableRow> <CTableHeaderCell scope="col" className=""> Sender </CTableHeaderCell> <CTableHeaderCell scope="col" className=""> Subject </CTableHeaderCell> <CTableHeaderCell scope="col" className=""> Date </CTableHeaderCell> </CTableRow> </CTableHead> <CTableBody className="table-body"> {messages.length > 0 ? 
( messages.map((message, index) => ( <CTableRow key={index}> <CTableDataCell>{message.senderAddress}</CTableDataCell> <CTableDataCell>{message.subject}</CTableDataCell> <CTableDataCell>{formatDate(message.createTime)}</CTableDataCell> </CTableRow> )) ) : ( <CTableRow> <CTableDataCell colSpan={3}>No messages found</CTableDataCell> </CTableRow> )} </CTableBody> </CTable> </div> </CCardBody> </CCard> ); }; ``` This structure leverages our CSS to create a clean, organized display for the messages, significantly enhancing the user experience of the dashboard. ## Integrating the Widget into the Dashboard Now that our `MessageCenterWidget` is ready and prepped with its functionality and styling, the final step is to weave it into the fabric of our dashboard. This phase is crucial for bringing the secure messaging capabilities directly to the users’ fingertips, enhancing the dashboard’s overall utility and user experience. #### Placing the MessageCenterWidget in the Dashboard Integration is straightforward. Assuming you have a `Dashboard.js` file or a similar setup for your main dashboard component, you’ll incorporate the `MessageCenterWidget` like so: ``` // Ensure the MessageCenterWidget is imported at the top of your Dashboard component file import MessageCenterWidget from '../../components/MessageCenterWidget' const Dashboard = () => { // Include other dashboard components and logic as needed return ( <> <WidgetsDropdown className="mb-4" /> {/* Position the MessageCenterWidget where it fits best in your dashboard layout */} <MessageCenterWidget className="mb-4" /> {/* Continue with the rest of your dashboard components */} <CCard className="mb-4"> {/* Remaining dashboard components... */} </CCard> {/* Other components... */} </> ); }; export default Dashboard ``` This step integrates the MessageCenterWidget into your dashboard. Depending on your design, you might adjust its placement, ensuring it complements the overall layout and flow of the dashboard. The widget’s self-contained nature makes it adaptable to various positions and configurations, affirming the modular and flexible design of React components. ![SMC Widget - Inbox Messages](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bsacefnl4zu88risoqgt.png) --- ## Expanding the Widget's Functionality After initially focusing on showcasing the inbox, it's time to broaden the capabilities of our widget. We'll introduce functionality that allows users to switch between viewing inbox and sent messages, similar to standard email clients. #### Updating the Server to Fetch Sent Messages To facilitate the display of both received and sent messages, let's extend our server's capabilities with an additional endpoint. This step involves navigating back to the `server` directory to modify the `server.js` file. 
``` // New endpoint to fetch sent messages app.get('/sent-messages', async (req, res) => { try { const tokenResponse = await axios.post('https://api.datamotion.com/SMC/Messaging/v3/token', { grant_type: 'client_credentials', client_id: process.env.CLIENT_ID, client_secret: process.env.CLIENT_SECRET, }) const sentMessagesResponse = await axios.get( 'https://api.datamotion.com/SMC/Messaging/v3/content/messages/?folderId=3&pageSize=10&pageNumber=1&sortDirection=DESC&metadata=true', { headers: { Authorization: `Bearer ${tokenResponse.data.access_token}`, }, }, ) res.json(sentMessagesResponse.data) } catch (error) { res.status(500).json({ message: 'Error fetching sent messages', error: error.response.data }) } }) ``` > **Note:** Changing `folderId` to 3 targets sent messages, as mentioned earlier. #### Integrating Sidebar and Updating MessageCenterWidget.js With our backend ready to serve both inbox and sent messages, it's time to update the frontend in `MessageCenterWidget.js`. This involves implementing a sidebar for mailbox selection and adjusting our component to utilize the new `/sent-messages` endpoint. Update `MessageCenterWidget.js` to manage the view selection state: ``` const MessageCenterWidget = () => { const [messages, setMessages] = useState([]) // Holds our fetched messages (unchanged) const [isLoading, setIsLoading] = useState(true) // Tracks loading status (unchanged) // New state to manage the current mailbox view ('inbox' or 'sent') const [currentView, setCurrentView] = useState('inbox') } ``` > **Note:** The addition of currentView and setCurrentView allows us to dynamically fetch and display messages based on the user's selection between different mailbox views, such as the inbox or sent messages. Next, we adapt the `useEffect` hook to react to mailbox changes, fetching messages accordingly: ``` useEffect(() => { const fetchEmails = async () => { setIsLoading(true) let endpoint = '' if (currentView === 'inbox') { endpoint = '/messages' } else if (currentView === 'sent') { endpoint = '/sent-messages' } else if (currentView === 'drafts') { endpoint = '/drafts' } else if (currentView === 'trash') { endpoint = '/trash' } try { const tokenResponse = await axios.get('http://localhost:5000/token') const messagesResponse = await axios.get(`http://localhost:5000${endpoint}`, { headers: { Authorization: `Bearer ${tokenResponse.data.access_token}`, }, }) const sortedEmails = messagesResponse.data.items.sort( (a, b) => new Date(b.createTime) - new Date(a.createTime), ) setMessages(sortedEmails) } catch (error) { console.error(`Error fetching ${currentView} emails:`, error) } finally { setIsLoading(false) } } fetchEmails() }, [currentView]) ``` Within the JSX, we introduce the sidebar for mailbox switching, enhancing the component's interactivity: ``` return ( <CCard className="mb-4 email-table"> <CCardHeader>Secure Message Center</CCardHeader> <div className="d-flex"> <div className="email-sidebar"> <ul> <li className={currentView === 'inbox' ? 'active' : ''} onClick={() => setCurrentView('inbox')} > Inbox </li> <li className={currentView === 'sent' ? 'active' : ''} onClick={() => setCurrentView('sent')} > Sent </li> {/* Add more folders if needed */} </ul> </div> <CCardBody> {/* Table rendering logic */} </CCardBody> </div> </CCard> ); ``` This update significantly enriches the widget by offering users a more dynamic tool that mirrors the functionality of full-featured email clients. 
By facilitating the navigation between inbox and sent messages, the widget evolves into a more versatile component of the dashboard. #### Styling the Sidebar To ensure our sidebar is not only functional but also aesthetically pleasing, let's add some CSS: ``` .email-sidebar { width: 150px; background-color: #f8f9fa; padding: .5rem; } .email-sidebar ul { list-style-type: none; padding: 0; } .email-sidebar ul li { padding: .25rem; cursor: pointer; font-weight: 500; color: #495057; } .email-sidebar ul li.active { background-color: #e9ecef; } .email-sidebar ul li:hover { background-color: #dee2e6; } .placeholder { background-color: #f0f0f0; height: 1em; width: 100%; border-radius: 4px; } ``` Apply these styles to make the sidebar both functional and stylish. ![Expanded Widget with Sidebar](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqs075cms6dpucepe6ht.png) ## Conclusion of the Expanded Widget Integrating DataMotion's Secure Message Center API into our dashboard has improved how we manage messages, showcasing the power and flexibility of DataMotion's tools. This tutorial highlighted the straightforward process of incorporating these APIs with the CoreUI framework, proving how they can enhance any user interface with secure and efficient communication. This project is a clear example of DataMotion's ability to provide secure communication solutions that are both powerful and user-friendly. We've started to explore the potential of DataMotion's Secure Message Center APIs, demonstrating that secure communication can be seamlessly integrated into any digital environment, offering both customization and compliance. DataMotion's APIs are designed to support secure, engaging, and efficient digital experiences, whether you're improving an existing system or creating something new. As we wrap up, consider this guide a starting point for discovering all that DataMotion has to offer. Check back for a part two, where we will build upon this groundwork to further expand your expertise and success. --- **Quick Links:** - [Github Repository](https://github.com/janellephalon/smc-api-dashboard-widget/tree/main) - [DataMotion](www.datamotion.com) & [DataMotion APIs](https://datamotion.com/portal/project/DataMotion/dashboard) - [Core UI](https://coreui.io/) - [React](https://react.dev/)
janellephalon
1,740,762
Boost Team Productivity with AI Presentation Maker for Meetings
In the fast-paced world of business, effective communication is pivotal to success. One of the key...
0
2024-01-25T04:38:17
https://dev.to/team-meeting-presentation/boost-team-productivity-with-ai-presentation-maker-for-meetings-4g83
teammeetingpresentati
In the fast-paced world of business, effective communication is pivotal to success. One of the key components of communication within a professional setting is the ability to deliver compelling presentations. Whether it's a sales pitch, a project update, or a strategic plan, presentations play a crucial role in conveying information and influencing decisions. However, creating engaging and impactful presentations can be a time-consuming and daunting task for many professionals. This is where AI presentation makers come into play, revolutionizing the way teams prepare for meetings and enabling them to deliver polished, professional presentations with ease. In recent years, artificial intelligence has made significant advancements, offering a wide range of applications across various industries. The integration of AI in presentation making has streamlined the process, empowering teams to harness the power of technology to craft dynamic and visually appealing presentations. With the ability to analyze data, generate insights, and create visually stunning slides, AI presentation makers have become indispensable tools for modern businesses. [AI presentation makers](https://simplified.com/ai-presentation-maker/team-meeting) leverage machine learning algorithms to understand user inputs, analyze content, and suggest relevant visuals, thereby reducing the time and effort required to produce high-quality presentations. By automating mundane tasks such as formatting, layout design, and content suggestions, these tools free up valuable time for professionals to focus on refining their message and delivering impactful presentations. Moreover, [AI presentation makers](https://simplified.com/ai-presentation-maker/team-meeting) are equipped with features that enable seamless collaboration among team members. With real-time editing, sharing, and feedback capabilities, these tools facilitate a cohesive and efficient workflow, allowing teams to work together regardless of geographical locations. This level of collaboration not only enhances productivity but also fosters a sense of unity and shared purpose within the team. As businesses continue to embrace digital transformation, the adoption of AI presentation makers is poised to become a standard practice in modern workplaces. The ability to harness the capabilities of AI to streamline presentation creation, enhance visual storytelling, and facilitate collaborative teamwork positions these tools as indispensable assets for professionals seeking to elevate their communication strategies. In conclusion, AI presentation makers represent a paradigm shift in the way teams prepare for meetings and deliver presentations. By harnessing the power of artificial intelligence, professionals can create compelling, visually captivating presentations with efficiency and precision. The seamless integration of AI in presentation making not only saves time and resources but also empowers teams to convey their messages with impact and clarity. As technology continues to evolve, AI presentation makers stand as a testament to the transformative potential of artificial intelligence in revolutionizing business communication. [AI presentation makers](https://simplified.com/ai-presentation-maker/team-meeting) have the potential to redefine the way teams approach presentations, paving the way for a future where compelling storytelling and effective communication converge seamlessly, driving business success in the digital age.
team-meeting-presentation
1,740,850
Revolutionizing Sports with AI: A Deep Dive into Sports App Development
In the ever-evolving landscape of sports, technology has become a game-changer, and the integration...
0
2024-01-25T06:43:40
https://dev.to/sofiamurphy/revolutionizing-sports-with-ai-a-deep-dive-into-sports-app-development-4p29
ai, app, sportappdevelopment
In the ever-evolving landscape of sports, technology has become a game-changer, and the integration of Artificial Intelligence (AI) into sports app development has marked a new era. As athletes strive for peak performance and fans demand immersive experiences, AI emerges as the catalyst for transforming the way we engage with sports. This article explores the various facets of AI in sports app development, from enhancing player performance to captivating fan experiences. ##Key Applications of AI in Sports Apps ### Performance Analysis AI is reshaping how athletes train and perform on the field. Through player tracking and analytics, coaches can gain invaluable insights into individual and team performances. Injury prevention and rehabilitation programs are also benefitting from AI, using data-driven approaches to ensure athletes stay at their peak physical condition. Additionally, AI aids in optimizing game strategies by processing vast amounts of historical data to identify patterns and trends. ### Fan Engagement For fans, AI offers a personalized experience like never before. Sports apps leverage AI to deliver content tailored to individual preferences, creating a more engaging and enjoyable viewing experience. Virtual and augmented reality technologies enhance the immersive nature of sports, while social media integration allows fans to connect and share their passion in real-time. ### Data-driven Coaching Coaches now have powerful AI tools at their disposal for informed decision-making. Predictive analytics help in game planning, offering insights into opponents' strategies and potential outcomes. Real-time feedback allows coaches to make quick adjustments during games, while athlete performance monitoring ensures that training programs are optimized for individual needs. ## Technologies Enabling AI in Sports Apps ### Machine Learning At the core of AI in [sports app development solutions](https://www.excellentwebworld.com/sports-app-development-company/) is machine learning, which enables predictive modeling, pattern recognition, and in-depth analysis of player behavior. This technology is instrumental in providing accurate performance predictions, allowing teams to make data-driven decisions. ### Computer Vision Computer vision is revolutionizing the way sports events are analyzed. By tracking players through video analysis, sports apps can offer detailed statistics and insights. It's also being used to assist umpires and referees, reducing the margin of error in critical decisions. Furthermore, computer vision plays a role in crowd monitoring, enhancing security measures in sports venues. ### Natural Language Processing Enhancing the interaction between sports apps and users, natural language processing facilitates voice-enabled interfaces for fans. Automated data analysis from commentary helps in generating real-time insights, and sentiment analysis provides valuable feedback on player and team dynamics. ## Success Stories Several sports organizations have successfully integrated AI into their operations. The NBA's use of AI for player tracking has significantly influenced game strategies, providing teams with a competitive edge. Wimbledon's implementation of AI for video analysis has transformed how tennis is perceived, offering fans a deeper understanding of players' techniques. Fantasy sports apps have also leveraged AI to enhance user experience, creating a dynamic and engaging platform for sports enthusiasts. 
## Challenges and Considerations While AI brings immense potential, some challenges and considerations need to be addressed. Ethical concerns surrounding data usage, integration challenges with existing sports infrastructure, and issues related to privacy and security require careful consideration as the sports industry embraces AI. ## Future Trends The future of AI in sports app development holds exciting possibilities. Continued evolution in technology, potential breakthroughs, and ongoing innovations will shape the sports industry in ways we can only imagine. As AI continues to mature, its impact on sports is expected to be profound, influencing how games are played, analyzed, and experienced. ## Conclusion In conclusion, the integration of AI into sports app development is a transformative journey that benefits athletes, coaches, and fans alike. From revolutionizing player performance analysis to offering personalized fan experiences, AI is redefining the sports landscape. As we look ahead, the promising future of technology in the sports domain encourages further exploration and development, setting the stage for unprecedented advancements in the way we engage with and enjoy sports.
sofiamurphy
1,740,945
Keep AWS Costs down: 5 steps to start with on your infrastructure
Leveraging built-in tools like Cost Optimization Hub, Cost Explorer, Compute Optimizer and others.
0
2024-01-25T08:53:29
https://dev.to/mkdev/keep-aws-costs-down-5-steps-to-start-with-on-your-infrastructure-3ng6
aws, costs, infrastructure
---
title: "Keep AWS Costs down: 5 steps to start with on your infrastructure"
published: true
description: Leveraging built-in tools like Cost Optimization Hub, Cost Explorer, Compute Optimizer and others.
tags: aws, costs, infrastructure
# cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results.
# published_at: 2024-01-25 08:52 +0000
---

Let's see which first steps your company needs to take to start reducing AWS costs, by leveraging built-in tools like Cost Optimization Hub, Cost Explorer, Compute Optimizer and others.

{% embed https://www.youtube.com/watch?v=H9TRGblJPr8 %}
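If you want to poke at your cost data programmatically while following along, here is a minimal sketch (my own illustration, not taken from the video) that pulls one month of unblended cost per service with the AWS SDK for JavaScript v3; the dates and region are example values:

```javascript
import { CostExplorerClient, GetCostAndUsageCommand } from "@aws-sdk/client-cost-explorer";

// Example: one month of unblended cost, grouped by service.
const client = new CostExplorerClient({ region: "us-east-1" });

const response = await client.send(new GetCostAndUsageCommand({
  TimePeriod: { Start: "2024-01-01", End: "2024-02-01" }, // example dates
  Granularity: "MONTHLY",
  Metrics: ["UnblendedCost"],
  GroupBy: [{ Type: "DIMENSION", Key: "SERVICE" }],
}));

for (const group of response.ResultsByTime[0].Groups) {
  console.log(group.Keys[0], group.Metrics.UnblendedCost.Amount);
}
```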
mkdev_me
1,741,051
AI on the Hunt: Revolutionizing the Recruiting Landscape
The world of recruitment is undergoing a seismic shift, with artificial intelligence (AI) emerging as...
0
2024-01-25T11:13:46
https://dev.to/airecruiter/ai-on-the-hunt-revolutionizing-the-recruiting-landscape-5cpj
The world of recruitment is undergoing a seismic shift, with artificial intelligence (AI) emerging as a game-changer. No longer relegated to science fiction, AI is rapidly transforming how we identify, attract, and hire top talent. But is this a robotic takeover of the HR department, or a powerful tool waiting to be harnessed? ##From Paper Piles to Algorithms Gone are the days of wading through endless paper resumes. AI-powered applicant tracking systems (ATS) and software like [Spire.ai](https://spire.ai/) can scan mountains of applications in seconds, filtering them based on pre-defined criteria like skills, experience, and keywords. This not only saves recruiters precious time, but also ensures that qualified candidates, regardless of their background or resume format, don't get lost in the shuffle. ##Beyond Buzzwords: Unearthing Hidden Gems: AI goes beyond mere keyword matching. Sophisticated algorithms can analyze a candidate's entire online footprint, including social media profiles and professional networks, to glean insights into their work ethic, cultural fit, and even cognitive abilities. This allows recruiters to build a more holistic picture of a candidate, going beyond traditional metrics and uncovering hidden gems who might excel in the role. ##Breaking the Bias Barrier: Human bias can creep into even the most well-intentioned recruitment processes. AI, however, can help level the playing field by introducing objectivity into the equation. By relying on data-driven algorithms, AI can remove subconscious biases based on factors like gender, age, or name. This opens doors for a more diverse and inclusive talent pool, fostering a fairer and more equitable hiring landscape. ##Beyond Automation: The Human Touch Remains King: While AI automates much of the heavy lifting, it's crucial to remember that it's not a magic bullet. AI is a powerful tool, but it cannot replace the human touch in building rapport, evaluating soft skills, and making final hiring decisions. Recruiters need to leverage AI's strengths, focusing on the human aspects of the process, like conducting meaningful interviews and building candidate relationships. ##The Future of AI-powered Recruitment: The evolution of AI in recruitment is just beginning. As technology advances, we can expect even more sophisticated algorithms that can predict candidate performance, assess cultural fit with greater accuracy, and even personalize the entire recruiting experience. However, it's important to remember that AI is just one piece of the puzzle. To build a truly successful recruitment strategy, businesses need to strike a balance between the power of technology and the irreplaceable human element. So, is AI the future of recruitment? The answer is a resounding yes, but with a caveat. AI is not here to replace human recruiters; it's here to empower them. By embracing AI as a partner, recruiters can free themselves from tedious tasks, focus on building meaningful connections, and ultimately make smarter hiring decisions, leading to a more diverse, engaged, and high-performing workforce. As the world of work continues to evolve, AI-powered recruitment promises to be a transformative force, shaping the future of talent acquisition and ensuring that the right people are matched with the right opportunities. It's time to ditch the paper resumes and embrace the age of the AI-powered recruiter, where data meets intuition and the perfect match is just a click away.
airecruiter
1,741,133
Development of an anti-poisoning algorithm for AI systems
I have developed CIBRAtool, a #cybersecurity tool applied to #AI to prevent the...
0
2024-01-25T11:58:54
https://dev.to/gcjordi/desarrollo-de-un-algoritmo-anti-envenenamiento-de-sistemas-de-ia-4hl8
cybersecurity, ai, robotics, computervision
I have developed **CIBRAtool**, a #cybersecurity tool applied to #AI that prevents the poisoning of image datasets during training or inference. Here is the explanatory technical article:

---

## **Development of an advanced anti-poisoning algorithm for AI systems focused on computer vision and image recognition, protecting AI environments through cybersecurity solutions adapted and applied to their specific needs.**

---

**Introduction**

In the field of artificial intelligence, data integrity and security have become a primary concern. <u>This is particularly critical in computer vision and image recognition systems, where the quality and authenticity of the images that are ingested, processed and inferred upon are essential to the accuracy and reliability of the AI</u>. This article describes the development of an innovative **anti-poisoning** system designed to protect these AI systems against “**data poisoning**”, for example in the form of occlusions in the images.

**Developing the solution**

The core of the solution is an autoencoder-style model combined with an image-comparison capability, trained on a set of images considered standard and safe. The model learns to reconstruct these images with high accuracy, which later allows it to be used to evaluate new images. <u>The premise is simple: if the autoencoder cannot accurately reconstruct an incoming image, that image is likely an anomaly or a **poisoning attempt** against the dataset, and the same idea extends to the paired comparison</u>.

To guarantee applicability in real-time scenarios, such as robotics, online monitoring systems and other dynamic environments, the model is optimized for fast and efficient performance. This is achieved by simplifying the autoencoder architecture and implementing image preprocessing techniques that allow incoming images to be adapted quickly, so that <u>systems can react instantly to possible threats or unexpected changes in their visual environment, making **informed and timely decisions** based on an **improved visual perception**</u>.

**Integration**

A crucial part of the development is the creation of an accessible interface that allows the system to be integrated easily into different environments (batch or real time), letting users submit images for analysis and receive quick answers about the presence of poisoning. This applies both to preparing a dataset for training and to direct inference for the AI system's output decisions, as well as to screening potential retraining data that should be admitted or discarded within the iterative flow of the system's environment.

During development testing, the system proved highly effective at identifying images that differed significantly from those considered standard. <u>The detection threshold can be adjusted to the specific needs of the application environment, **as a percentage** or even by focusing on **pixel-level divergences**</u>, providing a flexible and robust tool for protecting data in AI systems.
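As a rough illustration of the reconstruction-error idea described above (a minimal sketch of my own, not the actual CIBRAtool code), the decision can boil down to comparing an error metric against a tunable threshold; here `original` and `reconstructed` are assumed to be flat arrays of pixel values:

```javascript
// Hypothetical sketch: flag an image as a possible poisoning attempt when the
// autoencoder's reconstruction error is unusually high.
function reconstructionError(original, reconstructed) {
  let sum = 0;
  for (let i = 0; i < original.length; i++) {
    const diff = original[i] - reconstructed[i];
    sum += diff * diff;
  }
  return sum / original.length; // mean squared error per pixel
}

function looksPoisoned(original, reconstructed, threshold = 500) {
  // The threshold is illustrative only; in practice it would be tuned per
  // deployment, e.g. relative to the error observed on known-good images.
  return reconstructionError(original, reconstructed) > threshold;
}
```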
**Practical applications**

In practice, this capability translates into a wide range of applications. For example, in automated manufacturing, robots equipped with the system can instantly identify potential threats. In surveillance, drones or security cameras can use the system to detect poisoning attempts, sending alerts (or taking the appropriate actions) in real time for a quick response.

**Benefits and security**

Beyond improving the efficiency and effectiveness of robotic and automated systems, the solution plays a crucial role in security. By quickly identifying potential threats, systems can prevent damage or malicious interference, ensuring continuous and safe operation. This capability is essential in environments where speed and accuracy of response can be critical, such as automated logistics, autonomous driving or even healthcare settings.

**Conclusions**

The anti-poisoning system that has been developed represents a significant advance in protecting computer vision and image recognition AI systems against data poisoning. Its ability to operate in real time, together with easy integration through an API, makes it an ideal solution for a wide range of industrial and commercial applications. The model continues to be refined (and adapted ad hoc where appropriate) and new applications continue to be explored, with a commitment to the continuous improvement of security and reliability in the field of artificial intelligence.

The algorithm not only provides a powerful tool for protecting data integrity in computer vision and image recognition AI applications, but also offers an agile and adaptable solution for real-time threat detection in robotic systems. This dual applicability makes the technology a valuable investment for a variety of industries, marking a step forward at the intersection of artificial intelligence and cybersecurity, together with robotics and other related disciplines.

[Jordi Garcia Castillón](https://jordigarcia.eu/)
gcjordi
1,741,409
From Novice to Pro in 14 Days: My Journey to AWS MLOps Certification
Introduction In the dynamic world of cloud computing and machine learning, integrating ML...
0
2024-01-25T16:39:08
https://dev.to/devopsadventurer/from-novice-to-pro-in-14-days-my-journey-to-aws-mlops-certification-3ca
aws, beginners, devops, learning
###**Introduction** In the dynamic world of cloud computing and machine learning, integrating ML into operational processes is increasingly crucial. The AWS MLOps certification is a benchmark of excellence for professionals at all levels in DevOps and ML. My journey to earning this certification in a mere 14 days was a blend of personal challenge and professional development, demonstrating the accessibility and transformative potential of AWS MLOps. ### **Preparing for AWS MLOps Certification** 1. **Resource Selection:** - Frank Kane's and Stephane Maarek's courses on Udemy ([AWS Certifies Machine Learning Specialty 2024 - Hands On](https://www.udemy.com/share/1029ES3@Y7R3r4vwHbA5d_xr5o3UVjSCxunn1kEU33Qihe32rrFnC7rTQbT5MCm9ZLRv42bNxQ==/)), along with Tutorial Dojo's [practice exams](https://portal.tutorialsdojo.com/courses/aws-certified-machine-learning-specialty-practice-exams/?_gl=1*9yr0i6*_ga*MTk0NTE5MDE5Ny4xNzA2MTg1MjAx*_ga_L96TFJ1R9K*MTcwNjE4NTIwMC4xLjAuMTcwNjE4NTIwMC4wLjAuMA..), offer a comprehensive and in-depth approach to learning various technical and professional skills. - Together, these resources offer a well-rounded educational experience, combining comprehensive theoretical knowledge with practical, hands-on exercises and assessments. 2. **Study Plan:** - The Udemy course, spanning approximately 14 hours, was allocated the initial seven days of my study schedule, with a commitment to four hours of study each day. Subsequently, the following week was devoted entirely to working through practice exams to solidify my understanding and application of the course material. 3. **Community and Support:** - Given the extensive and challenging nature of the material, I experienced stress daily over the 14-day period. However, I discovered a valuable resource on Reddit, specifically the [r/AWSCertifications](https://www.reddit.com/r/AWSCertifications/) page. Here, individuals shared their personal experiences with the exams, their preparation strategies, and insights into what to expect. Reading through these posts provided me with a clearer perspective on the examination process and significantly alleviated my stress by setting clearer expectations. ###**Challenges and Overcoming Them** - **Identifying Challenges:** Balancing a full-time job with an intensive study schedule presented significant challenges. Working 8 hours a day while dedicating another 4 hours daily to studying not only strained my schedule but also nearly eliminated my social interactions. This rigorous routine led to a marked decrease in leisure time and opportunities for relaxation, which are essential for maintaining a healthy work-life balance. The demanding nature of this dual commitment often left me with little time for personal activities or socializing with friends and family, creating a sense of isolation - **Overcoming Strategies:** To manage stress and maintain a balanced lifestyle amidst my busy schedule, I prioritized healthy eating for sustained energy and focus. Additionally, late-night gym sessions became an essential part of my routine, serving as a stress reliever and a way to ensure better sleep, crucial for my daily productivity. This combination of a nutritious diet and regular exercise effectively supported my dual commitments to work and study. ###**The Certification Process** - **Exam Experience:** The exam predominantly centered around Amazon SageMaker, encompassing a detailed exploration of its features, use cases, and integration within the AWS framework. 
Remarkably, there was a notable omission of any questions about the confusion matrix, an element often considered crucial in similar contexts. T - **Managing Time:** My strategy for managing time during the exam was methodical and precise. With the exam duration set at 180 minutes and a total of 65 questions to answer, I allocated approximately 3 minutes to each question. This approach ensured that I spent a focused and consistent amount of time on each question. Adhering to this time frame was crucial; once I spent the allotted 3 minutes on a question, I moved on and did not return to it. This disciplined approach helped me to cover all questions within the given time without getting bogged down by any particularly challenging ones. - **Question Style:** The notorious complexity of AWS exam questions was something I was well-prepared for, thanks to extensive practice exams. Through these practice sessions, I noticed a recurring pattern in the multiple-choice and multiple-response questions. Typically, out of the available options, two were illogical or irrelevant to the question's context. Identifying and eliminating these implausible options first became a key part of my strategy. This approach effectively narrowed down my choices, significantly enhancing the likelihood of selecting the correct answer. It was a systematic way to tackle the trickiness of the questions and increase my overall accuracy on the exam. ###**Post-Certification Reflections** - **Career Impact:** Achieving this certification proved a pivotal milestone in my professional journey. It not only led to a well-deserved raise at my current job but also opened new avenues in my career. Armed with the knowledge and credentials, I am now entrusted with managing machine learning projects on the cloud. This new responsibility signifies not just an expansion of my skill set but also a broadening of my role within the organization, allowing me to delve into more complex, innovative projects that leverage cloud technologies. The certification has undeniably been a catalyst for growth, both in terms of professional recognition and the scope of opportunities available to me. ###**Conclusion** My 14-day journey to AWS MLOps certification was a testament to the power of dedication and focused learning. It's a path that offers immense growth and opportunities, regardless of where you stand on your professional journey in DevOps and ML. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/47ng9z8fvajec5vvh3zq.jpg)
devopsadventurer
1,741,448
10 essential tools if you are a web developer
1.CSS Gradient A simple tool to visualize gradients and generate CSS for them. 2.JSON Placeholder A...
0
2024-01-25T17:33:29
https://dev.to/dev_abdulhaseeb/10-essential-tools-if-you-are-a-web-developer-9pm
webdev, javascript, beginners, programming
1. **CSS Gradient** - A simple tool to visualize gradients and generate the CSS for them.
2. **JSON Placeholder** - A JSON API that returns dummy data.
3. **Color Contrast Checker** - Check the contrast ratio between foreground and background colors.
4. **Better Placeholder** - An API that returns placeholder images.
5. **Can I Use** - Check how well a particular web feature or API is supported across browsers.
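As a tiny example of the kind of dummy data JSON Placeholder hands back, here is a minimal sketch using its public `https://jsonplaceholder.typicode.com` endpoint (the `/posts/1` resource is just one of its sample routes):

```javascript
// Fetch one fake post from JSON Placeholder and log its title.
fetch('https://jsonplaceholder.typicode.com/posts/1')
  .then((response) => response.json())
  .then((post) => console.log(post.title))
  .catch((error) => console.error('Request failed:', error));
```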
dev_abdulhaseeb
1,741,456
Aggregation Pipeline in MongoDB and the use of $match and $group operator (Part 2)
Hello and welcome back readers, sorry for the delay in this second part of the MongoDB Aggregation...
0
2024-01-25T17:54:54
https://dev.to/ganeshyadav3142/aggregation-pipeline-in-mongodb-and-the-use-of-match-and-group-operator-part-2-18gg
webdev, mongodb, beginners, javascript
Hello and welcome back, readers. Sorry for the delay in this second part of the MongoDB Aggregation Pipeline series, where we are going to explore the power of the Aggregation Pipeline provided by MongoDB to make a developer's life easy. If you are new to this article, I would like you to check the first part of the series by clicking [here](https://dev.to/ganeshyadav3142/introduction-to-aggregation-pipeline-in-mongodb-part-1-2cfo), and for the readers who are following along, let's first revise what we have learned so far.

So far we have a clear understanding of what aggregation is in MongoDB and what the different types of aggregation provided by MongoDB are, namely:

1. Map Reduce Function
2. Single Purpose Aggregation
3. Aggregation Pipeline

In this article, we will deeply explore the power of the aggregation pipeline with the use of the **$match** and **$group** operators.

## Aggregation Pipeline using the $match operator:

The **$match** operator filters the documents so that only the documents matching the specified conditions are passed to the next pipeline stage. In general, if you apply a **$match** operator with a given expression/field to a set of documents, it returns only the documents whose field matches that expression.

```javascript
//Syntax
{ $match: { <query> } }
```

Let us understand the **$match** operator with a basic example where we have a collection of student data:

```javascript
//Collection Students
{ "_id": ObjectId("512bc95fe835e68f199c8686"), "student": "Dave Smith", "score": 80}
{ "_id": ObjectId("512bc962e835e68f199c8687"), "student": "Dave Smith", "score": 85}
{ "_id": ObjectId("55f5a192d4bede9ac365b257"), "student": "Ahn ben", "score": 60}
{ "_id": ObjectId("55f5a192d4bede9ac365b258"), "student": "li xin", "score": 55}
{ "_id": ObjectId("55f5a1d3d4bede9ac365b259"), "student": "Ben hue", "score": 60}
{ "_id": ObjectId("55f5a1d3d4bede9ac365b25a"), "student": "li xin", "score": 94}
{ "_id": ObjectId("55f5a1d3d4bede9ac365b25b"), "student": "Peter Parker", "score": 95}

//Now if we apply the $match operator to the above collection with the student's name
//"Dave Smith", we get only the documents in which the student is Dave Smith
//(note that the match is case-sensitive)
db.students.aggregate([{$match:{student:"Dave Smith"}}])

//we will get the results as shown below
{ "_id": ObjectId("512bc95fe835e68f199c8686"), "student": "Dave Smith", "score": 80}
{ "_id": ObjectId("512bc962e835e68f199c8687"), "student": "Dave Smith", "score": 85}
```

## Aggregation Pipeline using the $group operator:

Now we can move to the next operator, the **$group** operator. As we know, an aggregation pipeline is a series of stages that we can introduce to extract a certain kind of data as per our requirements. The **$group** stage separates documents into groups according to a "group key". The output is one document for each unique group key.

```javascript
//Syntax
{
  $group: {
    _id: <expression>, // Group key
    <field1>: { <accumulator1>: <expression1> },
    ...
  }
}
```

In other words, when **$group** is applied to a set of documents, it returns a set of documents in which each document contains the field **_id** (the group key) as the first field, followed by any additional computed fields. For example, as shown below, the group operator is applied to the student collection.
```javascript
//Our students collection, with the student's name in the field "student"
{ "_id": ObjectId("512bc95fe835e68f199c8686"), "student": "Dave Smith", "score": 80}
{ "_id": ObjectId("512bc962e835e68f199c8687"), "student": "Dave Smith", "score": 85}
{ "_id": ObjectId("55f5a192d4bede9ac365b257"), "student": "Ahn ben", "score": 60}
{ "_id": ObjectId("55f5a192d4bede9ac365b258"), "student": "li xin", "score": 55}
{ "_id": ObjectId("55f5a1d3d4bede9ac365b259"), "student": "Ben hue", "score": 60}
{ "_id": ObjectId("55f5a1d3d4bede9ac365b25a"), "student": "li xin", "score": 94}
{ "_id": ObjectId("55f5a1d3d4bede9ac365b25b"), "student": "Peter Parker", "score": 95}

//Now if we group the collection based on the student as _id
db.students.aggregate([{$group:{_id:"$student"}}])
//_id is a mandatory field for $group and takes the field by
//which you want your documents to be grouped.
//It returns the distinct names present inside the collection:
{"_id": "Dave Smith"}
{"_id": "Ahn ben"}
{"_id": "li xin"}
{"_id": "Ben hue"}
{"_id": "Peter Parker"}
```

From the above two operators, it is clear that **$match** can be used when you want to filter documents based on a certain field, and **$group** can be used to group a collection based on a "group key". As we know, an aggregation pipeline can contain multiple stages, and we can introduce as many stages as we want. Next, we will combine these two stages in an example that uses both the **$match** and **$group** operators.

## Problem Statement for a Two-stage Pipeline:

We want to find the names of the students who have scored greater than or equal to 80.

From the problem statement, it is clear that we can use the $match operator to find the documents with a score greater than or equal to 80. Because our collection contains duplicate student names (a student can appear more than once), we also have to use the $group operator to return only distinct names.
```javascript
//Our students collection, with the student's name in the field "student"
{ "_id": ObjectId("512bc95fe835e68f199c8686"), "student": "Dave Smith", "score": 80}
{ "_id": ObjectId("512bc962e835e68f199c8687"), "student": "Dave Smith", "score": 85}
{ "_id": ObjectId("55f5a192d4bede9ac365b257"), "student": "Ahn ben", "score": 60}
{ "_id": ObjectId("55f5a192d4bede9ac365b258"), "student": "li xin", "score": 55}
{ "_id": ObjectId("55f5a1d3d4bede9ac365b259"), "student": "Ben hue", "score": 60}
{ "_id": ObjectId("55f5a1d3d4bede9ac365b25a"), "student": "li xin", "score": 94}
{ "_id": ObjectId("55f5a1d3d4bede9ac365b25b"), "student": "Peter Parker", "score": 95}

//we will be using two stages here to extract the data
db.students.aggregate([
  {
    //the first stage finds the documents with a score
    //greater than or equal to 80
    $match: { "score": { $gte: 80 } }
  },
  {
    //the second stage groups the documents from the first stage by distinct name
    $group: { _id: "$student" }
  }
])

//The result we get from this two-stage pipeline:
//the first stage returns four documents (Dave Smith appears twice),
//and the second stage groups them by distinct student name
{"_id": "Dave Smith"}
{"_id": "li xin"}
{"_id": "Peter Parker"}
```

### Conclusion:

In this article, we gained a basic understanding of the **$match** and **$group** operators, why they are used, and how they replace filtering techniques that would otherwise involve a tedious process at the front end if the data is not filtered in the backend. MongoDB's aggregation pipeline can do this in merely a couple of lines of command to extract exactly the data we need, and that is the essence of MongoDB's aggregation power.

I hope you liked this article and appreciate the work. I learned these things from the internet as well, so do check the link below, and stay tuned for further parts of this series.

Do Like, Comment, Share and Subscribe to my Newsletter to get my work directly in your inbox.

{% embed https://www.mongodb.com/docs/manual/reference/operator/aggregation/group/ %}
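One quick bonus sketch before you go (my own example, not part of the original walkthrough): the `<accumulator>` slot in the `$group` syntax shown earlier is where operators like `$avg`, `$sum` or `$max` go. For instance, computing each student's average score from the same collection could look like this:

```javascript
//Group by student name and compute the average of their scores
db.students.aggregate([
  {
    $group: {
      _id: "$student",                  // group key: the student's name
      averageScore: { $avg: "$score" }  // accumulator: average score per student
    }
  }
])

//Expected output for the collection above (document order may vary):
//{ "_id": "Dave Smith", "averageScore": 82.5 }
//{ "_id": "Ahn ben", "averageScore": 60 }
//{ "_id": "li xin", "averageScore": 74.5 }
//{ "_id": "Ben hue", "averageScore": 60 }
//{ "_id": "Peter Parker", "averageScore": 95 }
```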
ganeshyadav3142
1,741,556
(Part 2/2) FullStory Digital Analytics: Convert Results of Analysis to Product Requirements
In this podcast, Krish explores the process of digital analysis and how to take the analysis forward....
0
2024-01-25T21:10:33
https://dev.to/vpalania/part-22-fullstory-digital-analytics-convert-results-of-analysis-to-product-requirements-3oa
In this podcast, Krish explores the process of digital analysis and how to take the analysis forward. He discusses different approaches to analyzing product usage, including general product usage analysis, client-specific product usage analysis, feature-specific product usage analysis, ad hoc analysis, and usage pattern analysis. Krish emphasizes the importance of communicating the analysis to the product team and translating it into meaningful requirements for the engineering team. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fcr59sf73z6oipg2buw.png) ## Takeaways - Digital analysis involves analyzing product usage and user behavior. - Different approaches to analysis include general product usage analysis, client-specific product usage analysis, feature-specific product usage analysis, ad hoc analysis, and usage pattern analysis. - Communicating the analysis to the product team is crucial for making informed decisions. - Translating the analysis into meaningful requirements helps guide the engineering team. ## Chapters 00:00 Introduction and Recap 03:58 Communicating Analysis to Product Team 04:59 General Product Usage Analysis 09:34 Client Specific Product Usage Analysis 14:51 Feature Specific Product Usage Analysis 20:39 Ad Hoc Analysis 25:25 Usage Pattern Analysis 31:03 Translating Analysis into Requirements ## Video {% embed https://youtu.be/fTTHpXgyXIc %} ## Transcript https://products.snowpal.com/api/v1/file/0e3cd0b5-1a5d-424b-91c7-5c4292991b4b.pdf
vpalania
1,741,788
I built a task management and to-do list app for your goals
I built a task management and to-do list app that helps you prioritize your tasks toward your goals....
0
2024-01-26T04:00:25
https://dev.to/tanhenggek/i-built-a-task-management-and-to-do-list-app-that-helps-you-prioritize-your-tasks-toward-your-goals-33co
productivity
I built a task management and to-do list app that helps you prioritize your tasks toward your goals. It helps me focus on what matters and make conscious decisions in my daily tasks. I hope it’s useful to you too.

It’s currently in V1. I will add more features and integrations in the coming weeks. You might want to give it a try and share your feedback.

Link to my site: https://goalfocus.io/

It currently has the following features:

- create SMART goals and break them down into smaller goals and tasks. Then, work on each task progressively toward your main goals
- increase your focus with a timer. You can customize the duration for each task. Focus on one task at a time
- prioritize tasks with the Eisenhower Matrix based on importance and urgency. Focus on what truly matters while eliminating less important tasks
- stay motivated with a daily motivational quote that aligns with your current goal
- overcome procrastination with timeboxing. Tobi (CEO of Shopify) [uses timeboxing to overcome Parkinson's Law](https://twitter.com/tobi/status/1615553994759278592). I personally find timeboxing an effective approach too. I added a few features that work well together to help you complete your tasks in a timely manner, such as timeframe, due date, due date reminder, and task duration
- weekly, monthly, and yearly reviews help you review progress, course-correct plans, stay motivated, and focus on your next steps for success
tanhenggek
1,741,838
winvnbz
WinVN – Dia diem nha cai hap dan voi da dang cac the loai ca cuoc game bai slot game cuoc the thao ma...
0
2024-01-26T06:05:47
https://dev.to/winvnbz/winvnbz-243p
WinVN – Dia diem nha cai hap dan voi da dang cac the loai ca cuoc game bai slot game cuoc the thao ma dam bao se dap ung nhung nhu cau giai tri cua ho Dia Chi: 83A Nguyen Thai Son Phuong 4 Go Vap Thanh pho Ho Chi Minh Viet Nam Email: winvnbz@gmail.com Website: https://winvn.bz Dien Thoai: (+63 ) 9628363765 #winvn #nhacai_winvn #nhacaiuytin #wincn_casino Social Media: https://winvn.bz/ https://winvn.bz/gioi-thieu-winvn/ https://winvn.bz/bao-mat-winvn/ https://winvn.bz/huong-dan/ https://winvn.bz/the-thao/ https://winvn.bz/casino/ https://winvn.bz/ban-ca/ https://winvn.bz/lien-he-winvn/ https://winvnbz.blogspot.com/ https://winvnbz.weebly.com/ https://www.facebook.com/winvnbz/ https://twitter.com/winvnbz https://www.youtube.com/channel/UCwih8bMqO7zP5HzzYA2ovfw https://scholar.google.com/citations?user=bAbUfuQAAAAJ&hl=vi https://www.pinterest.ph/winvnbz/ https://500px.com/p/winvnbz https://social.msdn.microsoft.com/Profile/winvnbz https://social.technet.microsoft.com/Profile/winvnbz https://www.skillshare.com/en/user/winvnbz https://soundcloud.com/winvnbz https://bbs.now.qq.com/home.php?mod=space&uid=6273783 https://draft.blogger.com/profile/15321860668776804664 https://twitback.com/winvnbz https://favinks.com/profile/winvnbz/ https://www.blogger.com/profile/15321860668776804664 https://winvnbz.blogspot.com/ https://www.reddit.com/user/winvnbz https://gravatar.com/winvnbz https://medium.com/@winvnbz/about https://www.flickr.com/people/winvnbz/ https://www.tumblr.com/winvnbz https://winvnbz.wixsite.com/winvnbz https://winvnbz.weebly.com/ https://angel.co/u/winvnbz https://www.goodreads.com/user/show/172070519-winvnbz https://vimeo.com/winvnbz https://www.liveinternet.ru/users/winvnbz/post502289207/ https://sites.google.com/view/winvnbz/trang-ch%E1%BB%A7 https://linktr.ee/winvnbz https://www.twitch.tv/winvnbz/about https://tinyurl.com/winvnbz https://ok.ru/winvnbz/statuses/156951722937641 https://profile.hatena.ne.jp/winvnbz/profile https://issuu.com/winvnbz https://dribbble.com/winvnbz/about https://www.behance.net/bzwinvn https://flipboard.com/@winvnbz https://www.kickstarter.com/profile/winvnbz/about https://leetcode.com/winvnbz/ https://winvnbz.thinkific.com/courses/your-first-course https://about.me/winvnbz https://tawk.to/winvnbz https://ko-fi.com/winvnbz https://hub.docker.com/r/winvnbz/winvnbz https://independent.academia.edu/winvnbz https://groups.google.com/g/winvnbz/c/d6_6y5vTrZ8 https://github.com/winvnbz https://biztime.com.vn/winvnbz https://vozforum.org/members/winvnbz.292550/ https://winvnbz.webflow.io/ https://glints.com/vn/profile/public/53df7330-0858-48fb-a0d9-c31d66c55067 https://www.awwwards.com/winvnbz/ https://castbox.fm/channel/id5694572?country=us https://www.datacamp.com/portfolio/winvnbz https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/654762 https://www.speedrun.com/users/winvnbz https://winvnbz.peatix.com/ https://sketchfab.com/winvnbz https://gitee.com/winvnbz https://public.tableau.com/app/profile/winvnbz/ https://connect.garmin.com/modern/profile/96e3b478-20e4-447d-b94a-eb6b2b175f2d https://fliphtml5.com/homepage/wsqiv https://www.reverbnation.com/artist/winvnbz https://anchor.fm/winvnbz https://guides.co/a/winvn-bz https://myanimelist.net/profile/winvnbz https://glose.com/u/winvnbz https://artmight.com/user/profile/3311776
winvnbz
1,741,930
Tips To Optimize Your Website For Better Performance
🚀 Unlock Peak Performance for Your Website! 🌐💻 In the digital landscape, a high-performing website...
0
2024-01-26T08:45:19
https://dev.to/eddieadams/tips-to-optimize-your-website-for-better-performance-a81
seo, design, javascript, webdev
🚀 Unlock Peak Performance for Your Website! 🌐💻 In the digital landscape, a high-performing website is key to success. Here are some tips to optimize your site for better performance and user satisfaction. 🌟🔧 **🚀 Key Optimization Tips:** **💨 Page Loading Speed:** Optimize images, leverage browser caching, and minimize HTTP requests to ensure swift page loading. Speed is crucial for user experience and SEO. **🔐 Security Measures:** Implement SSL certificates for secure data transfer. A secure website not only builds trust but also positively influences search engine rankings. **🔄 Browser Caching:** Utilize browser caching to store frequently accessed resources, reducing load times for returning visitors. **🧹 Regular Content Cleanup:** Remove unnecessary plugins, clean up outdated content, and optimize databases for a streamlined website structure. **📱 Mobile Optimization:** Prioritize mobile responsiveness for an optimal user experience across devices. Google's mobile-first indexing considers mobile performance for search rankings. **📈 The Impact on User Experience:** **📉 Reduced Bounce Rates:** A faster, secure, and well-optimized website keeps visitors engaged, reducing bounce rates. **🌐 Global Accessibility:** Optimize for global audiences by leveraging Content Delivery Networks (CDNs) to distribute content across servers worldwide. **🔄 Improved Conversion Rates:** A seamless, fast-loading website enhances the user journey, contributing to improved conversion rates. 🌟 Conclusion: Optimizing your website isn't just about technical enhancements; it's about creating an exceptional user experience. By implementing these tips, you're not just boosting performance; you're paving the way for digital success. 👉 What optimization tips have worked wonders for your website? Share your insights below! 👇🔧 🚀🌐 Read more article [Big Data Analytics News](https://bigdataanalyticsnews.com/tips-to-optimize-website-for-better-performance/) Subscribe to our **[Big Data News Weekly Newsletter](https://bigdatanewsweekly.com)**. Join us for insights on Big Data 📊, Data science 📈, AI 🧠, Machine learning 🤖 , AI News, AI Tools, AI updates, Tech, No Code, and Side Hustles.
eddieadams
1,742,008
Celebrating Data Privacy International Day!
The National Cybersecurity Alliance (NCA) has themed Data Privacy Day 2024 as "Take Control of Your...
0
2024-01-28T14:26:51
https://dev.to/cnatsopoulou/celebrating-data-privacy-international-day-11o5
cybersecurity, security, data, privacy
_The [National Cybersecurity Alliance](https://staysafeonline.org/) (NCA) has themed Data Privacy Day 2024 as "Take Control of Your Data"._ As we commemorate Data Privacy Day, a global initiative dedicated to raising awareness about the importance of safeguarding private information, we recognize the increasing need for individuals, not just professionals, to prioritize their online privacy. This private information, called sensitive data, includes things like your name, address, birthdate, race, gender, contact details, credit card number, ID card number, medical history, IP address, or location. In a world where technology intertwines with our daily lives, understanding and implementing data privacy measures is crucial for everyone. This blog post aims to empower users of all backgrounds with practical tips, ensuring that the protection of personal data becomes an accessible practice for everyone, regardless of their level of technical expertise. Let's explore simple yet effective strategies to fortify our online privacy in this digital age! ## A brief reflection on this day 🗓️ Data Privacy Day, known in Europe as Data Protection Day, is an international event that occurs every year on 28 January. The purpose of Data Privacy Day is to raise awareness and promote privacy and data protection best practices. So on 28th of January, governments, parliaments, and organizations work together to spread awareness about the importance of protecting personal data and privacy rights. They might run campaigns for the public, organize educational projects for teachers and students, open their doors for visits to data protection agencies, and host conferences. The day was initiated by the Council of Europe to be first held in 2007 as the European Data Protection Day. Two years later, on 26 January 2009, the [United States House of Representatives](https://en.wikipedia.org/wiki/United_States_House_of_Representatives) passed House Resolution HR 31 by a vote of 402-0, declaring 28 January National Data Privacy Day. On 28 January 2009, the [Senate](https://en.wikipedia.org/wiki/United_States_Senate) passed Senate Resolution 25 also recognizing 28 January 2009 as National Data Privacy Day. The United States Senate also recognized Data Privacy Day in 2010 and 2011. ## Tips for Fortifying Your Privacy 🫵 Personal data is processed continuously—whether at work, in interactions with public authorities, in healthcare, during purchases of goods or services, while traveling, or while browsing the internet. Despite this, individuals are often unaware of the risks associated with protecting their personal data and their corresponding rights. On this day, let's commemorate by reflecting on some privacy reminders: ✔**Do not get phished!** Phishing emails are one of the most common, and effective methods cyber attackers will use to gain access to secure information. To defend against these manipulative emails, you must: - Stay cautious about emails from unfamiliar senders. - Refrain from clicking on links in unexpected emails, as they might lead to fake websites or download harmful software to your device. - Never reply to emails requesting confidential or personal information. Legitimate organizations won't ask for such details via email. - If something sounds too good to be true, it probably is, so ignore any emails proclaiming that you have won a prize or a special discount. 
✔**Guarding Against Deepfakes**
Deepfake technology can create highly realistic videos that manipulate or replace the likeness and voice of individuals. In a malicious context, someone could create a deepfake video impersonating a trusted person, such as a friend, family member, or colleague, and use it to request sensitive information. Although identifying a fake video may be challenging, you can always:
- Double-check the identity of the person making the request.
- Approach content with a healthy dose of skepticism, especially if it's unexpected or seems unusual.
- Stay informed about the latest developments in deepfake technology and understand the potential risks.

✔**Watch out for vishing and smishing attempts**
Emails are not the only medium cyber criminals use to try to obtain personal information. Fraudsters will also use SMS and voice messages to trick users into giving up personal information. So you must never hand out personal data over the phone and never click on links included in unsolicited SMS messages.

✔**Avoid using public Wi-Fi**
Browsing online using public Wi-Fi can be convenient. But it can put your information at risk, as hackers can snoop on data transmitted throughout the network. So:
- You should refrain from transmitting your address and credit card information on public Wi-Fi.
- Public Wi-Fi is unsafe when there is no password for access, and even then, Wi-Fi hotspots can be used by nearby hackers to steal your data. Use a Virtual Private Network (VPN) if you have to connect to public Wi-Fi to add a layer of protection to your data.

✔**Confirm website security**
If you are about to log in to a web platform, ensure that the site is legitimate. The first thing you should do is check the URL. Make sure it begins with "HTTPS", which shows that communication between your browser and the site is encrypted.

✔**Strengthening Your Passwords**
You access most applications, which may contain personal data, and various online platforms by providing a password and a username. Therefore:
- Create strong passwords on smartphones, laptops, tablets, email accounts, and any other device or account where personal information is stored. Weak passwords, like "12345", are the easiest way for hackers to access your data.
- Use Multi-Factor Authentication (MFA) wherever you can. MFA is an authentication method that requires the user to provide 2 or more verification factors to gain access to a resource such as an application, online account, or a VPN. One of the most common MFA factors that users encounter is the one-time password (OTP). OTPs are those 4-8 digit codes that you often receive via email or SMS. Other examples are fingerprints, facial recognition, answers to personal security questions, etc.

As we conclude this post on Data Privacy Day, let's use what we've learned to make online safety a habit. Following these easy tips for data privacy helps us feel more confident online. Every little effort to protect your info makes the internet safer for everyone. Let's keep caring about data privacy and work together for a future where our online experiences are safe and enjoyable!

> “If you put a key under the mat for the cops, a burglar can find it, too. Criminals are using every technology tool at their disposal to hack into people’s accounts. If they know there’s a key hidden somewhere, they won’t stop until they find it.” – **Tim Cook, Apple’s CEO**
cnatsopoulou
1,742,137
ABEND dump #8
Welcome to the ABEND dump, the issue where I share the most interesting content I’ve been reading,...
19,725
2024-01-26T13:57:43
https://bitmaybewise.substack.com/p/abend-dump-8
news, programming
Welcome to the ABEND dump, the issue where I share the most interesting content I’ve been reading, listening to, and watching lately. Want to check the previous issue? Read it here: {% embed https://dev.to/bitmaybewise/abend-dump-7-4ine %} --- ## [Write Like You Talk](https://www.paulgraham.com/talk.html) > Here's a simple trick for getting more people to read what you write: write in spoken language. > Informal language is the athletic clothing of ideas. > If you simply manage to write in spoken language, you'll be ahead of 95% of writers. And it's so easy to do: just don't let a sentence through unless it's the way you'd say it to a friend. I love reading [Paul Graham’s essays](https://www.paulgraham.com/articles.html) and this one was the essay that inspired me to write more casually, not stress too much with perfectness, and get something done. ## [8 months of OCaml after 8 years of Haskell in production](https://dev.to/chshersh/8-months-of-ocaml-after-8-years-of-haskell-in-production-h96) Lately, I’ve been doing some Haskell for fun in my spare time and wrote about it. Ten years ago when I tried Haskell first time I also played a bit with OCaml. I found both languages amazing, but the terse syntax of Haskell and its way of dealing with side effects captivated me and I ignored OCaml since then. However, this blog post made me interested in trying it once more, as it looks like the tooling evolved a lot in the meantime. @chshersh also shares his opinion on why he prefers OCaml nowadays. ## Podcast recommendations Speaking about Haskell and OCaml, I’ve listened to two episodes of two different podcasts recently that I think it’s worth recommending: - [Signals & Threads: The Future of Programming with Richard Eisenberg](https://signalsandthreads.com/future-of-programming/) - [Haskell Foundation Podcast with Mike Sperber](https://haskell.foundation/podcast/40/) Both episodes talk about functional programming and the advantages and disadvantages of languages such as Haskell and OCaml, as well as some interesting use cases of these languages and their “unique” features. ## [How we execute PG major upgrades at GitLab, with zero downtime](https://youtu.be/o08kJggkovg) Alexander Sosna gave this great presentation at PGConf about how major PostgreSQL upgrades are made at GitLab. Highly recommended if you’re dealing with databases at scale: {% embed https://youtu.be/o08kJggkovg %} The slides can be seen [here](https://www.postgresql.eu/events/pgconfeu2023/sessions/session/4791/slides/439/2023.pgconf.eu%20Zero%20Downtime%20PostgreSQL%20Upgrades.pdf). ## [Pluto](https://www.netflix.com/browse?jbv=81281344) ![Pluto](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/risc7ctvjdbziabfjwf4.jpg) I normally don’t share non-related programming stuff here, but I felt like recommending this anime called [Pluto](https://www.netflix.com/browse?jbv=81281344)—there’s a [mangá](https://www.viz.com/pluto-urasawa-x-tezuka) also if you’re more into reading. I enjoyed reading the mangá and am now watching the anime on Netflix. It depicts a future where humans and highly advanced robots coexist, touching on many ethical and moral questions that permeate the AI world. What a coincidence for me to pick it up right on the AI boom moment! --- If you liked this post, consider subscribing to my newsletter [Bit Maybe Wise](http://bitmaybewise.com/). You can also follow me on [X](https://twitter.com/bitmaybewise) and [Mastodon](https://mastodon.online/@bitmaybewise).
bitmaybewise
1,742,177
Creating a Countdown Timer with Vue.js
Service Level Agreements (SLAs) often come with strict timelines, and having a visual representation...
0
2024-01-27T10:46:56
https://dev.to/ayowandeapp/creating-a-countdown-timer-with-vuejs-21nm
javascript, tutorial, vue, beginners
Service Level Agreements (SLAs) often come with strict timelines, and having a visual representation of the time remaining can be crucial. In this post, we'll explore how to implement a countdown timer in Vue.js to display the remaining time for SLAs.

**Step 1: Set Up Your Vue Component**

```
<template>
  <div>
    <span v-if="sla.expired" style="color: red;">{{ `SLA Expired` }}</span>
    <span v-else style="color: rgb(80, 180, 80)">{{ `${displayTime}` }}</span>
  </div>
</template>

<script>
import moment from 'moment';

export default {
  props: {
    sla: Object,
    created_at: String,
  },
  data() {
    return {
      intervalId: null,
      displayTime: '',
    };
  },
  mounted() {
    this.startCountdown();
  },
  beforeDestroy() {
    clearInterval(this.intervalId);
  },
  methods: {
    startCountdown() {
      const initialDate = moment(this.created_at).add(this.sla.time, 'hours');
      this.intervalId = setInterval(() => {
        const countdownDuration = initialDate.diff(moment());
        let secondsRemaining = moment.duration(countdownDuration).asSeconds();

        const hours = Math.floor(secondsRemaining / 3600);
        const minutes = Math.floor((secondsRemaining % 3600) / 60);
        const seconds = Math.floor(secondsRemaining % 60);

        this.displayTime = `${hours > 0 ? hours + 'h ' : ''}${minutes}m ${seconds}s`;

        if (secondsRemaining <= 0) {
          clearInterval(this.intervalId);
          this.$set(this.sla, 'expired', true);
        }
        secondsRemaining--;
      }, 1000);
    },
  },
};
</script>

<style scoped>
</style>
```

We use the mounted lifecycle hook to initiate the countdown when the component is mounted. The beforeDestroy hook ensures that the interval is cleared to prevent memory leaks when the component is destroyed.

The startCountdown method calculates the remaining time and updates the displayTime variable accordingly. The countdown is displayed dynamically, and when it reaches zero, the SLA is marked as expired.

**Step 2: Use the Countdown Timer Component**

```
<template>
  <ul>
    <li v-for="(sla, j) in liquidasset.slas" :key="sla.id">
      <CountdownTimer :sla="sla" :created_at="liquidasset.created_at" />
    </li>
  </ul>
</template>

<script>
import CountdownTimer from '@/components/CountdownTimer.vue'; // Update the path based on your project structure

export default {
  components: {
    CountdownTimer,
  },
  data() {
    return {
      liquidasset: {
        created_at: '2024-01-27T12:00:00', // Example date
        slas: [...], // Your SLAs array
      },
    };
  },
};
</script>
```

**Conclusion**

Implementing a countdown timer in Vue.js can enhance the user experience, especially in scenarios where time is of the essence. By breaking down the logic into a reusable component, you can easily integrate countdown timers into various parts of your application.

Feel free to customize the code further based on your specific requirements and styling preferences. Happy coding!
ayowandeapp
1,742,180
Simplifying API Calls with Refit in ASP.NET Core
Refit is a library in ASP.NET Core that simplifies the process of making HTTP requests to RESTful...
0
2024-01-26T15:13:31
https://dev.to/mohammadkarimi/simplifying-api-calls-with-refit-in-aspnet-core-f6
dotnet, refit, aspdotnet, dotnetcore
Refit is a library in ASP.NET Core that simplifies the process of making HTTP requests to RESTful APIs. It allows you to define your API interfaces as C# interfaces and then use them as if they were local methods. This makes it easier to work with APIs in a type-safe manner. Here's a guide to help you get started on writing an article about using Refit in ASP.NET Core:

Start by introducing the challenges of making HTTP requests to APIs in a traditional way in ASP.NET Core. Mention the verbosity and boilerplate code involved in handling HTTP requests and responses.

**What is Refit?**

Briefly explain what Refit is and its main purpose. Emphasize that it simplifies the process of working with APIs by allowing developers to define API interfaces as C# interfaces.

**Setting Up Refit**

- Install Refit Package

Guide the readers through the process of installing the Refit NuGet package in an ASP.NET Core project.

```
dotnet add package Refit
```

**Creating an API Interface**

Show how to create an API interface using Refit. This involves defining methods that represent the different API endpoints.

```
public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

public interface IApiClient
{
    [Get("/api/posts")]
    Task<List<Post>> GetPosts();

    [Post("/api/posts")]
    Task<Post> CreatePost([Body] Post newPost);
}
```

- Integrating Refit into ASP.NET Core

**Registering Refit in Dependency Injection:**

Explain how to register Refit in the ASP.NET Core dependency injection container. This is usually done in the `Startup.cs` file.

```
services.AddRefitClient<IApiClient>()
    .ConfigureHttpClient(c => c.BaseAddress = new Uri("https://api.example.com"));
```

**Using Refit in Controllers or Services**

Show examples of how to use the Refit interface in controllers or services. Emphasize the simplicity and type-safety that Refit brings.

```
public class PostsController : ControllerBase
{
    private readonly IApiClient _apiClient;

    public PostsController(IApiClient apiClient)
    {
        _apiClient = apiClient;
    }

    public async Task<IActionResult> GetPosts()
    {
        var posts = await _apiClient.GetPosts();
        return Ok(posts);
    }

    // ...
}
```

Handling Authentication

Explain how Refit handles authentication, whether it's through headers, tokens, or other authentication mechanisms.

- Advanced Features

**Request and Response Logging**

Show how to enable request and response logging for debugging purposes.

```
services.AddRefitClient<IApiClient>()
    .AddHttpMessageHandler<YourLoggingHandler>();
```

**Customizing Requests**

Discuss how to customize requests using Refit attributes and options.

**Conclusion**

Summarize the benefits of using Refit in ASP.NET Core. Highlight the reduction in boilerplate code, improved type safety, and increased developer productivity when working with APIs.
mohammadkarimi
1,742,261
Kotlin Design Patterns: Simplifying the Traditional Solutions (plus: Simplifying the Singleton Pattern)
Kotlin Design Patterns: Simplifying the Singleton Pattern The Singleton pattern is a...
26,674
2024-01-26T17:06:07
https://fugisawa.com/kotlin-design-patterns-simplifying-the-traditional-solutions-plus-simplifying-the-singleton-pattern/
kotlin, backend, designpatterns, java
## Kotlin Design Patterns: Simplifying the Singleton Pattern

The Singleton pattern is a design pattern that ensures a class has only one instance, while providing a global point of access to it. This pattern is used when exactly one object is needed to coordinate actions across the system. This is useful in scenarios like configuration managers, where a single shared instance is preferable.

Singleton ensures that a class has only one instance and provides a global access point to this instance. Traditionally, it does this by hiding the constructor and providing a static method to get the instance.

## Traditional Approach in Java:

In Java, the ConfigurationManager is made a Singleton by using a private constructor and a static method `getInstance()` to ensure only one instance:

```java
public class ConfigurationManager {
    private static ConfigurationManager instance;

    // Configuration data fields omitted.

    private ConfigurationManager() {
        // Load configuration data...
    }

    public static ConfigurationManager getInstance() {
        if (instance == null) {
            instance = new ConfigurationManager();
        }
        return instance;
    }
}
```

## Kotlin's Approach:

In Kotlin, we use the `object` keyword to create a Singleton effortlessly:

```kotlin
object ConfigurationManager {
    // Configuration data properties omitted.

    init {
        // Load configuration data...
    }
}
```

Kotlin simplifies Singleton with the `object` declaration, automatically ensuring a single instance.

### Kotlin Features Simplifying Singleton:

1. **`object` keyword**: Guarantees a single instance without additional code.
2. **Thread safety**: `object`s have lazy initialization. Kotlin automatically handles the synchronization to ensure that the instance is initialized only once, even when accessed by multiple threads.
3. **Ease of use**: Removes the need for a private constructor or static access method.

## Final Thoughts

As you can see, Kotlin's approach to Singleton makes the code more straightforward and less error-prone compared to the traditional approach. This simplicity and efficiency are key advantages of using Kotlin.

To explore more about design patterns and other Kotlin-related topics, subscribe to my newsletter on https://fugisawa.com/ and stay tuned for more insights and updates.

---

## This is a series kickoff, actually! 🚀

Welcome to **_Kotlin Design Patterns: Simplifying the Traditional Solutions_**

Have you ever found yourself tangled in the complexity of traditional design patterns while coding? You're not alone. In this series, we will explore how Kotlin simplifies software development and makes coding more enjoyable and less burdened by complex patterns.

Each post will focus on a design pattern. You will see how Kotlin's features make these patterns simpler or even unnecessary. This series is for everyone, whether you know Kotlin well or are just starting. We just started this series with this article on the Singleton Pattern.

### A Brief Overview on Design Patterns

Design patterns are like blueprints for solving common software design problems. They provide tested and reusable solutions for common problems in software development. They help make code easier to manage, understand and communicate.

The authors of the book "Design Patterns: Elements of Reusable Object-Oriented Software" grouped these patterns into three types:

- **Creational Patterns**, like *Singleton* and *Factory*, for creating objects.
- **Structural Patterns**, like *Adapter* and *Composite*, for organizing class structures and relationships.
- **Behavioral Patterns**, like *Strategy* and *Observer*, for improving how objects behave and communicate.

### A Simplified Approach with Kotlin

As a developer, you've likely relied on some design patterns to solve common problems. You've probably also noticed that they can bring along a complexity that may be overwhelming. With Kotlin, you can solve many of those common problems in an efficient and straightforward way. In some cases, the need for a design pattern is eliminated entirely. In others, the pattern's implementation can be far simpler than the traditional solution - just as you could already glimpse in this first article on the Singleton Pattern.

Let's go together on this enlightening journey and uncover how Kotlin can simplify your coding life.

--

This article was originally posted on my Lucas Fugisawa on Kotlin blog, at: https://fugisawa.com/kotlin-design-patterns-simplifying-the-traditional-solutions-plus-simplifying-the-singleton-pattern/

To explore more about design patterns and other Kotlin-related topics, subscribe to my newsletter on https://fugisawa.com/ and stay tuned for more insights and updates.
lucasfugisawa
1,742,571
ChatCraft Adventures #3
This week I've been getting more involved in ChatCraft's development. This is what I've done so...
26,549
2024-01-27T03:57:00
https://dev.to/rjwignar/chatcraft-adventures-3-4lng
This week I've been getting more involved in ChatCraft's development. This is what I've done so far.

## How I've been using ChatCraft

To identify improvements or potential new features, I've been playing around with ChatCraft. Sometimes I ask ChatCraft to generate random code, but I've also found it helpful for my classwork.

For example, in my capstone project I'm making a web app with my group. Part of our first sprint was initializing our app and setting up Customer Identity and Access Management (CIAM) using [Amazon Cognito](https://aws.amazon.com/pm/cognito/). In a previous course I integrated AWS Cognito with a barebones web app (simple HTML pages with no framework). However, my group is using [NextJS](https://nextjs.org/) to write our project, which requires different steps for integration. Integration involved using [NextAuth.js](https://next-auth.js.org/), which I had never used before. I was able to do most of the integration, but in the end I encountered an issue in which logging out of the web app doesn't actually sign the user out of the Cognito user pool. ChatCraft was able to help me figure out the issue, and even suggested the fix for my code. My conversation with ChatCraft is available [here](https://chatcraft.org/c/rjwignar/V1Y6R3BuEDYVlvTiGbY-N).

## Pull Request Reviews

My classmates have been working on various feature additions and chores on ChatCraft. I've had the chance to review a couple of them.

### Text to Speech (TTS) Support

[Speech synthesis/text-to-speech](https://en.wikipedia.org/wiki/Speech_synthesis) produces speech from text, and my classmate Amnish is working on a version of [text-to-speech support in ChatCraft](https://github.com/tarasglek/chatcraft.org/pull/357). I wasn't comfortable speaking on the new code, as I wasn't familiar with [OpenAI's audio API](https://platform.openai.com/docs/guides/text-to-speech?lang=node). I instead played around with the build and tested the UI and text-to-speech features.

### Open Code in New Window

Previously, when ChatCraft generated wide code, it could be hard to read in the chat view:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d0i8hed7a6b5yk51qu19.png)

My classmate, Yumei, made a recently-merged PR that adds a button that [opens code in a new tab](https://github.com/tarasglek/chatcraft.org/pull/361):

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8oyee5gbnqujvtpzobf.png)

When reviewing this PR, I was able to find a small UI bug. As a dark-mode user, I noticed the new button was missing its dark mode coloring:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ifghf9e9p6dgj2l5s3t4.png)

By looking at how the other buttons are coloured in dark mode, I was able to suggest a solution to this bug and become more familiar with ChatCraft's code.

### Image as Input

This is a really neat feature my classmate Mingming has been working on. It allows you to [send images as input to ChatCraft](https://github.com/tarasglek/chatcraft.org/pull/286). Here's what it looks like in action:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gk7r8du6n9givvny0f4z.png)

It's a really cool feature, although I wasn't comfortable enough to review the code. For now, I've only reviewed the functionality and UI.

## Issues

Playing around with ChatCraft has helped me brainstorm potential features or UI changes. So far I've identified one possible UI change.
When ChatCraft generates code like HTML, it also renders a preview that you can open in a new tab. However, I noticed that the [button that opens the preview protrudes into it](https://github.com/tarasglek/chatcraft.org/issues/367): ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddobadyhdlk9noph4vxi.png) Another issue I'm working on is [ensuring keyboard hints don't appear on mobile](https://github.com/tarasglek/chatcraft.org/issues/289). I hope to have PRs for these issues submitted this weekend.
rjwignar
1,742,657
Choosing the Right Gazebo Design:
Choosing the right gazeboconstruction design involves considering various factors, including your...
0
2024-01-27T04:53:23
https://dev.to/kaali33/choosing-the-right-gazebo-design-4j6k
Choosing the right [gazebo construction](http://gazeboconstructions.com/) design involves considering various factors, including your personal preferences, the surrounding landscape, intended use, and the architectural style of your home. Here are some key considerations to help you select the ideal gazebo design:

**Purpose and Function:** Determine the primary purpose of your gazebo. Are you looking for a space to relax, entertain guests, host events, or simply enhance the aesthetics of your outdoor area? Different designs cater to various functions, so it's essential to align the gazebo's design with its intended use.

**Architectural Style:** Consider the architectural style of your home and outdoor space. Choose a gazebo design that complements or enhances the existing aesthetics. For example, a traditional Victorian-style gazebo might be a good fit for a historic home, while a sleek and modern design could complement a contemporary residence.

**Size and Space:** Evaluate the available space in your yard and choose a gazebo size that fits comfortably within it. A small garden might benefit from a compact gazebo, while a larger yard could accommodate a more expansive design. Ensure that the gazebo doesn't overwhelm the surrounding space.

**Material Preferences:** Select the material for your gazebo based on your preferences, budget, and maintenance considerations. Common materials include wood (traditional and versatile), metal (durable and modern), vinyl (low-maintenance), and composites (for a mix of durability and low maintenance). Each material has its unique characteristics and visual appeal.

**Roof Style:** Gazebos come with various roof styles, such as hexagonal, octagonal, square, or rectangular. The roof design not only affects the gazebo's appearance but also influences its ability to provide shade and protection from the elements. Consider factors like ventilation and whether you want an open or closed roof.

**Customization Options:** Some gazebo designs offer customization options, allowing you to add features like built-in benches, planters, or decorative elements. Explore designs that can be personalized to suit your preferences and enhance the functionality of the gazebo.

**Climate Considerations:** Take your local climate into account when choosing a gazebo design. If you experience heavy winds, a gazebo with sturdy construction and a low profile may be more suitable. In areas with intense sunlight, consider a design that provides adequate shade.

**Budget Constraints:** Determine your budget for the gazebo project, including both the structure itself and any additional features or landscaping. Different materials and designs come with varying costs, so be mindful of your budget while exploring options.

**Local Regulations:** Check local building codes and regulations to ensure compliance with any restrictions on gazebo construction. Some areas may have specific requirements regarding setbacks, maximum height, or other factors that could impact your design choices.

**Personal Style:** Ultimately, choose a gazebo design that resonates with your personal style and preferences. Whether you prefer a classic look, a rustic feel, or a modern aesthetic, the gazebo should reflect your taste and enhance your outdoor living space.

By carefully considering these factors, you can narrow down the options and find a gazebo design that not only meets your practical needs but also enhances the beauty and functionality of your outdoor environment.
kaali33
1,742,677
How Facebook Keeps Millions of Servers Synced 🕰️⏰
If youre running a distributed system, its incredibly important to keep the system clocks of the...
0
2024-01-27T05:34:28
https://devangtomar.hashnode.dev/how-facebook-keeps-millions-of-servers-synced-efb88f-6d6ebd519963
facebook, softwaredesign, architecture, systemdesign
---
title: How Facebook Keeps Millions of Servers Synced 🕰️⏰
published: true
date: 2024-01-26 05:31:44 UTC
tags: facebook, softwaredesign, architecture, systemdesign
canonical_url: https://devangtomar.hashnode.dev/how-facebook-keeps-millions-of-servers-synced-efb88f-6d6ebd519963
---

If you're running a distributed system, it's _incredibly_ important to keep the system clocks of the machines synchronized. If the machines are off by a few seconds, this will cause a huge variety of different issues.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u435r8f5o1obafkzaquo.png)

You can probably imagine why unsynchronized clocks would be a big issue, but just to beat a dead horse, we will spell the consequences out below. Today, let's dive into the intricate world of time synchronization at Meta. Buckle up, it's not just about ticking clocks; it's about the heartbeat of millions of servers! 💻🌐

#### Why Sync Matters

Running a distributed system? Syncing system clocks is crucial! 🔄 Unsynchronized clocks? Brace yourself for data inconsistency, log chaos, security issues, and more! Facebook knows this too well.

#### 🌐 Meet NTP and PTP

NTP: Old but gold. Syncs computers within milliseconds. How? Clients talk to NTP servers, adjusting internal time based on roundtrip delays (a minimal sketch of this offset calculation follows the network components section below).

PTP: The shiny new kid. Precision to nanoseconds or even picoseconds! It tackles network delays with hardware timestamping. But watch out for the load it places on network hardware! 💽

#### 🌟 Facebook's Sync Symphony

NTP Days: Facebook danced with Ntpd and Chrony. Chrony brought nanoseconds to the party, but the story doesn't end there.

PTP Era: In 2022, Facebook waved goodbye to NTP, embracing Precision Time Protocol for superior precision and scalability. Say hello to nanosecond accuracy! 🚀

Let's break down the details of Meta's pursuit of nanosecond accuracy using Precision Time Protocol (PTP) and the key components involved:

#### 🏹 Aim: Nanoseconds! 🎯

Meta's shift to PTP comes with the ambitious goal of achieving nanosecond-level accuracy in time synchronization. This precision is crucial in the fast-paced world of distributed systems where every second matters.

### 🌐 Components Leading the Charge

#### PTP Rack

The PTP Rack is the nerve center, housing the hardware and software responsible for serving time to clients. Here's what makes it tick:

- **GNSS Antenna**: This critical component communicates with the Global Navigation Satellite System (GNSS). It ensures that the PTP system is in sync with the global positioning signals, enhancing accuracy.
- **Time Appliance:** A dedicated piece of hardware residing in the PTP Rack. It combines a GNSS receiver and a miniaturized atomic clock. This combination ensures that even in scenarios where GNSS connectivity is lost, the Time Appliance can maintain accurate timekeeping independently.

The PTP Rack, with its GNSS synchronization and atomic clock precision, sets the stage for achieving the nanosecond goal.

#### The Network

The network plays a pivotal role in transmitting PTP messages from the PTP Rack to clients. Meta employs unicast transmission, a communication method where information flows from one sender (PTP Rack) to a single recipient (PTP Client). This choice simplifies network design and enhances scalability.

Unicast transmission ensures that each client receives the necessary time synchronization information directly from the PTP Rack, reducing the complexity that could arise in a multicast setup.
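Before moving on to the client side, here is the offset calculation referred to in the NTP description above. It is a minimal, self-contained sketch of the textbook NTP formula, not Meta's code; the class name and the sample timestamps are made up for illustration:

```java
// Classic NTP timestamp exchange:
// t1 = client send time, t2 = server receive time,
// t3 = server send time,  t4 = client receive time (all in the same unit).
public final class NtpMath {

    // Estimated difference between the server clock and the client clock.
    static double offset(double t1, double t2, double t3, double t4) {
        return ((t2 - t1) + (t3 - t4)) / 2.0;
    }

    // Estimated round-trip network delay, excluding server processing time.
    static double roundTripDelay(double t1, double t2, double t3, double t4) {
        return (t4 - t1) - (t3 - t2);
    }

    public static void main(String[] args) {
        // Hypothetical timestamps in milliseconds.
        double t1 = 1000.0, t2 = 1007.0, t3 = 1008.0, t4 = 1012.0;
        System.out.println("offset = " + offset(t1, t2, t3, t4) + " ms");         // 1.5 ms
        System.out.println("delay  = " + roundTripDelay(t1, t2, t3, t4) + " ms"); // 11.0 ms
    }
}
```

Clients such as ntpd and Chrony typically apply a filtered version of this offset gradually (slewing the clock) rather than stepping the system time, which is what "adjusting internal time based on roundtrip delays" comes down to in practice.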
#### PTP Client

Running on individual machines, the PTP Client is the endpoint that actively communicates with the PTP network. Meta uses an open-source PTP client called ptp4l for this purpose. However, the implementation wasn't without its challenges, especially when dealing with edge cases.

- **Open Source Client:** Ptp4l is an open-source implementation, reflecting Meta's commitment to transparency and community collaboration. It's a tool that facilitates communication between the PTP Rack and individual servers, allowing them to synchronize their clocks with nanosecond precision.
- **Challenges with Edge Cases:** While ptp4l is a robust tool, Meta acknowledges challenges faced in specific scenarios or with certain types of network cards. These edge cases required additional attention to ensure the seamless functioning of the PTP system.

#### 🌟 Benefits of the PTP System

- **Higher Precision and Accuracy:** The PTP system allows Meta to achieve precision within nanoseconds, a significant leap from the millisecond-level synchronization provided by traditional methods like NTP.
- **Better Scalability:** Unicast transmission reduces the frequency of check-ins required for synchronization, enabling smoother network operation as the system scales. A single source of truth for timing enhances overall scalability.
- **Mitigation of Network Delays and Errors:** PTP's focus on hardware timestamping and transparent clocks helps reduce the impact of network delays and errors, contributing to a more robust and reliable synchronization process.

In essence, the PTP trio (PTP Rack, The Network, and PTP Client) embodies Meta's commitment to precision, scalability, and overcoming challenges in the pursuit of nanosecond-level accuracy in time synchronization.

### 🌐 Network Time Strata Explained:

NTP organizes computers into hierarchical layers called strata based on their proximity to highly accurate time sources, typically atomic clocks or GPS receivers. Each stratum represents a level in the hierarchy, and the lower the stratum number, the closer a device is to the primary time source. (A small sketch of how a client might pick a source from these strata follows the list below.)

**1. Stratum 0: Atomic Clock or GPS Receiver:**

- At the top of the hierarchy are devices with direct access to incredibly precise timekeeping mechanisms, such as atomic clocks or GPS receivers.
- Stratum 0 devices serve as the primary time source for the entire synchronization network.

**2. Stratum 1: Directly Synced with Stratum 0:**

- Stratum 1 devices are directly synchronized with Stratum 0 devices. These could be servers or systems that have a direct connection to the primary time source.
- Stratum 1 devices act as secondary time sources for the next level of devices in the hierarchy.

**3. Stratum 2: Syncs with Stratum 1:**

- Stratum 2 devices synchronize their clocks with Stratum 1 devices. These devices are one step further away from the primary time source.
- Stratum 2 devices act as time sources for devices in the subsequent stratum.

**4. And so on, down to Stratum 15:**

- The strata continue in a cascading manner, with each stratum representing a level of hierarchy further away from the primary time source.
- As the stratum number increases, the accuracy of time synchronization decreases.

**5. Stratum 16: Unsynced Devices:**

- Stratum 16 is a special designation used to indicate devices that are not synchronized with any source. It represents the highest stratum number and signifies unsynchronization.
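To make the strata a little more concrete, here is the small source-selection sketch mentioned above. It only illustrates the stratum-preference idea, with hypothetical hosts; real NTP clients also weigh measured delay, dispersion, and jitter when choosing a source:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical model of candidate time sources and a simple selection rule.
public final class StratumSelection {

    record TimeSource(String host, int stratum, boolean reachable) {}

    // Prefer the reachable source closest to the reference clock (lowest stratum),
    // ignoring stratum 16, which marks an unsynchronized source.
    static Optional<TimeSource> pickBestSource(List<TimeSource> candidates) {
        return candidates.stream()
                .filter(TimeSource::reachable)
                .filter(source -> source.stratum() < 16)
                .min(Comparator.comparingInt(TimeSource::stratum));
    }

    public static void main(String[] args) {
        List<TimeSource> candidates = List.of(
                new TimeSource("gps-appliance.example", 1, true),
                new TimeSource("core-ntp.example", 2, true),
                new TimeSource("lab-server.example", 16, true)); // unsynchronized

        pickBestSource(candidates).ifPresent(best ->
                System.out.println("Syncing with " + best.host() + " (stratum " + best.stratum() + ")"));
    }
}
```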
#### 🔄 How Synchronization Works

- Devices lower in the hierarchy (higher stratum numbers) synchronize their clocks with devices closer to the primary source (lower stratum numbers). For example, Stratum 2 devices synchronize with Stratum 1 devices, and so on.
- Multiple devices in the same stratum can be synchronized with different devices in a higher stratum, introducing redundancy and fault tolerance.

#### 🔍 Why the Hierarchy Matters

- The hierarchical structure ensures that not every device needs to directly synchronize with the most accurate time source. This arrangement prevents overloading a single time server with numerous synchronization requests.
- The hierarchy allows for redundancy and fault tolerance. If a device loses synchronization, it can quickly switch to another source in the same stratum or one closer to the primary source.

#### 🌟 Benefits

- Efficient Synchronization: Devices further down the hierarchy benefit from the accuracy of those above them, optimizing the synchronization process.
- Scalability: The hierarchical structure scales well, allowing for the expansion of the synchronization network without overwhelming primary time sources.

Understanding Network Time Strata provides insights into how NTP organizes and maintains time synchronization across a distributed system, ensuring accuracy and efficiency in timekeeping.

![](https://cdn.hashnode.com/res/hashnode/image/upload/v1706333482023/877df832-6bf7-4648-b972-7dfc0ce5befc.jpeg)

#### 🚀 Conclusion

Syncing clocks isn't just about ticking seconds; it's the heartbeat of a well-functioning system. Facebook's journey from NTP to PTP showcases the relentless pursuit of precision and scalability. Time, after all, waits for no server!

💡 What are your thoughts on Meta's clockwork precision? Share your views! 👇

Happy Syncing! 🌐🚀

#### Connect with Me on social media 📲

🐦 Follow me on Twitter: [devangtomar7](https://twitter.com/devangtomar7)

🔗 Connect with me on LinkedIn: [devangtomar](https://www.linkedin.com/in/devangtomar)

📷 Check out my Instagram: [be\_ayushmann](https://instagram.com/be_ayushmann)

Check out my blogs on Medium: [Devang Tomar](https://medium.com/u/8f5e1c86129d)

Check out my blogs on Hashnode: [devangtomar](https://devangtomar.hashnode.dev/) 🧑💻

Check out my blogs on Dev.to: [devangtomar](https://dev.to/devangtomar)
devangtomar
1,742,788
Sheikh Zayed Road Dubai
The neighborhood along Sheikh Zayed Road Dubai has been renamed after the Burj Khalifa, Dubai Road...
0
2024-01-27T09:10:30
https://dev.to/dubaigpt/sheikh-zayed-road-dubai-267l
The neighborhood along [Sheikh Zayed Road Dubai](https://blog.dubaigpt.com/sheikh-zayed-road-dubai-28-neighbourhoods-renamed/) has been renamed after the Burj Khalifa. Under the Dubai Road Naming Committee, a novel approach to road naming is unfolding.
dubaigpt
1,742,797
Buy Commercial Kitchen Equipment Online | Bakery & Catering equipment
Buy online commercial kitchen equipment for bakery, catering, cloud kitchen and restaurants. Ovens,...
0
2024-01-27T09:38:59
https://dev.to/akashroy/buy-commercial-kitchen-equipment-online-bakery-catering-equipment-4ija
kitchen, bakerymachiene, commercialkitchenequipment
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/op7aqwe88eyn3973s8th.png) Buy online **[commercial kitchen equipment](https://restolane.com/)** for bakery, catering, cloud kitchen and restaurants. Ovens, mixers, slicers, fryers, grillers and more. As a company, Restolane is dedicated to providing top-quality commercial kitchen equipment to businesses across the country. We have a wide range of products in our inventory, including ovens, cooktops, refrigerators, and more, to meet the needs of any restaurant, hotel, or other food service establishment. Our team is comprised of industry professionals who are knowledgeable and experienced in all aspects of commercial kitchen equipment. We are committed to providing excellent customer service and support, and we are always happy to answer any questions or concerns our clients may have. In addition to our wide selection of products, we also offer installation, maintenance, and repair services to ensure that our clients’ equipment is always in top working condition. We believe that a well-equipped and well-maintained kitchen is essential for the success of any food business, and we are proud to be a trusted partner for so many companies across the country. If you’re in need of commercial kitchen equipment, we invite you to take a look at our inventory and see how we can help your business succeed.
akashroy
1,742,819
Develop Your Next Mobile App with React Native
Are you looking to develop your next mobile app? Look no further than React. With React, you can...
0
2024-01-27T10:32:37
https://dev.to/dotpotit/develop-your-next-mobile-app-with-react-native-4i1p
appdevelopment, webdev, react, reactnative
Are you looking to develop your next mobile app? Look no further than React. With React, you can create dynamic and interactive applications that run seamlessly across devices. Its component-based architecture and virtual DOM make it a powerful and efficient tool for building user interfaces. Advantages of Using React for Mobile App Development When it comes to mobile app development, React offers several advantages that make it a popular choice among developers. One of the key advantages of using React is its ability to reuse components. This means that you can build reusable UI components and use them across different screens and even different apps. Reusing components not only saves time and effort but also ensures consistency and reduces the chances of errors. Another advantage of using React is its virtual DOM. The virtual DOM is a lightweight representation of the actual DOM, which allows React to efficiently update only the necessary parts of your app. This minimizes rendering time and enhances the overall user experience. With React, you don't have to worry about updating the entire app every time there is a change. React takes care of updating only the components that need to be updated, resulting in faster and smoother app performance. Scalability is another key advantage of using React for mobile app development. React provides a scalable solution that allows you to build complex applications without compromising on performance. Whether you are building a small app or a large-scale enterprise-level application, React can handle it all. Its component-based architecture and modular approach make it easy to manage and scale your app as your business grows. ## Understanding the Basics of React Native Before we dive into the development process, let's take a quick look at the basics of React Native. React Native is a framework that allows you to build native mobile apps using React. It uses the same design principles as React, but instead of rendering to the browser's DOM, it renders to native components. One of the key advantages of React Native is that it allows you to write code once and deploy it on both iOS and Android platforms. This means that you don't have to write separate code for each platform, saving you time and effort. React Native also provides a rich set of pre-built components that are designed to look and feel like native components. This ensures that your app has a native look and feel, providing a seamless user experience. To get started with React Native, you will need to set up your development environment. Let's walk through the steps involved in setting up your environment. ### Setting Up Your Development Environment To develop mobile apps with React Native, you will need to set up your development environment. Here are the steps involved in setting up your environment: Install Node.js: React Native requires Node.js, so the first step is to install Node.js on your machine. You can download the latest version of Node.js from the official website and follow the installation instructions. Install React Native CLI: Once you have Node.js installed, you will need to install the React Native command-line interface (CLI) globally on your machine. Open your terminal or command prompt and run the following command: npm install -g react-native-cli This will install the React Native CLI, which you will use to create and run your React Native apps. 
Set up an emulator or connect a physical device: To test your React Native app, you can either set up an emulator or connect a physical device. React Native provides built-in support for Android and iOS emulators, which you can use for testing your app. Alternatively, you can connect a physical device to your machine and use it for testing. To set up an Android emulator, you will need to install Android Studio and create a virtual device. For iOS, you will need to install Xcode, which comes with the iOS simulator. Once you have set up your emulator or connected your device, you are ready to create your first React Native app. ## Creating Your First [React Native App](https://dotpotit.com/key-features/mobile-app-developmen) Now that you have set up your development environment, it's time to create your first React Native app. Follow these steps to create a new React Native project: Open your terminal or command prompt and navigate to the directory where you want to create your project. Run the following command to create a new React Native project: react-native init MyApp Replace "[MyApp](https://dotpotit.com/key-features/mobile-app-developmen)" with the name of your app. This will create a new directory with the specified name and initialize a new React Native project inside it. Once the project is created, navigate to the project directory: cd MyApp To run the app on the emulator or device, run the following command: react-native run-android This will build the app and run it on the Android emulator or connected device. If you are using iOS, replace "android" with "ios" in the above command. Congratulations! You have successfully created your first React Native app. Now let's explore React Native components and APIs. #### Exploring React Native Components and APIs React Native provides a rich set of components and APIs that you can use to build your app. These components and APIs are designed to be platform-agnostic, meaning that they work the same way on both iOS and Android platforms. This allows you to write code once and deploy it on multiple platforms, saving you time and effort. Some of the commonly used React Native components include: View: The View component is similar to the div element in HTML. It is used to create container components that can hold other components. Text: The Text component is used to display text in your app. It supports basic styling and layout options. Image: The Image component is used to display images in your app. It supports various image formats and provides options for resizing and styling. Button: The Button component is used to create interactive buttons in your app. It provides a callback function that is triggered when the button is pressed. In addition to these components, React Native also provides a wide range of APIs for handling gestures, navigation, networking, and more. These APIs allow you to access device features and interact with the underlying platform. ### Styling and Layout in React Native Styling and layout are essential aspects of mobile app development. With React Native, you can style your components using CSS-like stylesheets. React Native uses a subset of CSS properties and values, which allows you to apply styles to your components in a familiar way. To style a component in React Native, you can use the style prop. The style prop accepts an object containing CSS-like properties and values. 
Here's an example of how you can style a Text component:

```javascript
import { StyleSheet, Text } from 'react-native';

const styles = StyleSheet.create({
  title: {
    fontSize: 24,
    fontWeight: 'bold',
    color: 'blue',
  },
});

<Text style={styles.title}>Hello, World!</Text>
```

In the above example, we have defined a title style using the StyleSheet.create method. The title style sets the font size to 24, the font weight to bold, and the color to blue. We then apply this style to the Text component using the style prop.

React Native also provides flexbox layout, which is a powerful layout system for building responsive designs. With flexbox, you can create flexible and dynamic layouts that adapt to different screen sizes and orientations. Flexbox allows you to distribute and align components within a container, making it easy to create complex layouts.

### Handling User Input and Events in React Native

User input and events are an integral part of any app. With React Native, you can handle user input and events using event handlers. Event handlers are functions that are triggered when a specific event occurs, such as a button press or a text input change.

To handle user input and events in React Native, you can use the onPress prop for buttons and the onChangeText prop for text inputs. Here's an example of how you can handle a button press:

```javascript
const handlePress = () => {
  console.log('Button pressed!');
};

<Button title="Press Me" onPress={handlePress} />
```

In the above example, we have defined a button with the title "Press Me" and an onPress prop that calls the handlePress function when the button is pressed. The handlePress function logs a message to the console.

Similarly, you can handle text input changes using the onChangeText prop. Here's an example:

```javascript
const handleChangeText = (text) => {
  console.log('Text changed:', text);
};

<TextInput onChangeText={handleChangeText} />
```

In the above example, we have a TextInput component with an onChangeText prop that calls the handleChangeText function whenever the text input changes. The handleChangeText function logs the changed text to the console.

### Testing and Debugging Your [React Native App](https://dotpotit.com/key-features/mobile-app-developmen)

Testing and debugging are crucial steps in the app development process. React Native provides several tools and techniques to help you test and debug your app.

To test your React Native app, you can use the built-in testing framework called Jest. Jest allows you to write unit tests for your components and functions, ensuring that they work as expected. You can run the tests using the npm test command.

In addition to unit testing, React Native also provides tools for end-to-end testing and integration testing. These tools allow you to simulate user interactions and test the overall functionality of your app.

When it comes to debugging, [React Native](https://dotpotit.com/key-features/mobile-app-developmen) provides a powerful debugging tool called React Native Debugger. React Native Debugger allows you to inspect and debug your app's JavaScript code, view network requests, and analyze performance. It provides a comprehensive set of tools for debugging, making it easier to identify and fix issues in your app.

### Deploying Your React Native App

Once you have developed and tested your React Native app, it's time to deploy it to the app stores. React Native allows you to build standalone apps for both iOS and Android platforms.
To deploy your React Native app to the app stores, you will need to follow the respective platform-specific guidelines. For iOS, you will need to create an Apple Developer account and generate an app ID, signing certificate, and provisioning profile. For Android, you will need to create a Google Play Developer account and generate a keystore file.

Once you have the necessary credentials, you can build your app using the respective build commands:

- For iOS: `react-native run-ios --configuration Release`
- For Android: `react-native run-android --variant=release`

These commands will build your app in release mode, which is optimized for performance. Once the build process is complete, you can submit your app to the respective app stores for review and distribution.

#### In conclusion

React is a powerful and efficient tool for developing mobile apps. Its component-based architecture, virtual DOM, and reusable components make it a popular choice among developers. Whether you are a seasoned developer or just getting started, React provides a user-friendly framework that simplifies the app development process. By following the steps outlined in this article, you can develop your next [mobile app](https://dotpotit.com/key-features/mobile-app-developmen) with React and take your app development to the next level. So, what are you waiting for? Get started with React and unleash the full potential of your mobile app ideas.
dotpotit
1,742,915
Best Body Massager Machine India
In the hustle and bustle of modern life, stress and tension often find a home in our bodies. A good...
0
2024-01-27T12:55:56
https://dev.to/annat/best-body-massager-machine-india-9da
In the hustle and bustle of modern life, stress and tension often find a home in our bodies. A good body massager can be the key to unlocking relaxation and relieving muscle fatigue. Whether you're seeking relief from a long day at work or looking to enhance your overall well-being, we present a guide to the best body massager machines in India, bringing you a curated list to help you make an informed decision. ****1. Dr. Physio Electric Full Body Massager ****Known for its versatility, the Dr. Physio Electric Full Body Massager offers various attachments for a complete massage experience. With adjustable speed settings, it caters to different preferences and targets various muscle groups. 2. JSB HF05 Leg and Foot Massager Specifically designed for leg and foot massage, the JSB HF05 is a popular choice for those seeking relief from tired and achy lower limbs. It features customizable massage modes and intensity levels for a personalized experience. 3. Agaro Relaxing Foot & Calf Massager Perfect for rejuvenating tired feet and calves, the Agaro Relaxing Foot & Calf Massager comes with kneading and vibrating functions. Its user-friendly design and adjustable intensity levels make it a great addition to your relaxation routine. 4. HealthSense HM 210 Toner-Pro Handheld Massager If you prefer a handheld massager, the HealthSense HM 210 Toner-Pro is a compact and powerful option. With multiple massage heads and variable speed settings, it offers a customizable massage experience for various muscle groups. 5. Robotouch Mini Full Body Massager The Robotouch Mini Full Body Massager is designed for portability without compromising on performance. Its compact size makes it easy to use on different body parts, and it features various massage modes for a holistic experience. 6. Lifelong LLM99 Foot, Calf, and Leg Massager Ideal for those seeking a comprehensive leg massage, the Lifelong LLM99 targets the feet, calves, and thighs. With customizable massage modes and adjustable intensity levels, it provides a relaxing and invigorating experience. 7. Ozomax BL 131 Relax & Spin Fat Burning Massager Combining relaxation with fitness, the Ozomax BL 131 is designed for a spin massage that aids in fat burning. Its unique design and adjustable speed settings make it a versatile choice for those looking for a multi-functional massager. 8. Dr. Trust Physio Electric Full Body Massager With a powerful motor and interchangeable heads, the Dr. Trust Physio Electric Full Body Massager offers a range of massage options. It is designed to target specific muscle groups and alleviate tension and soreness. 9. Lifelong LLM81 Foot Massager For dedicated foot massage, the Lifelong LLM81 Foot Massager provides kneading and vibrating functions. It features customizable modes and intensity levels, making it an excellent choice for relaxing after a long day. 10. Dealsure Hammer Pro Leg Massager Engineered for effective leg massage, the Dealsure Hammer Pro Leg Massager provides a combination of air compression and kneading. It comes with adjustable intensity levels and pre-programmed modes for a tailored experience. Considerations Before Buying a Body Massager: Massage Type: Different massagers offer various massage techniques such as kneading, vibrating, and air compression. Choose one that aligns with your preferences. Targeted Areas: Consider if you want a massager for specific body parts like the feet, legs, or if you prefer a full-body experience. 
Intensity Levels: Adjustable intensity levels allow you to customize the massage according to your comfort and needs. Portability: If you plan to use the massager on the go or in different rooms, opt for a portable and easy-to-handle model. Attachments: Some massagers come with interchangeable heads or attachments for a versatile massage experience.
annat
1,742,916
Query Performance - Data Masking
Note: This is Problem Statement Solution Introduction This case study revolves around...
0
2024-02-06T05:38:34
https://dev.to/lokesh-g/query-performance-data-masking-3aj
anonymize, datamasking, database, dataswapping
> **Note: This is Problem Statement Solution** ## Introduction This case study revolves around the optimization of an anonymization process for a large database within our system. The original script was observed to be significantly time-consuming, taking approximately **22+** **hours** to execute on larger databases. This not only posed efficiency concerns but also introduced disruptions to other system processes. The anonymization process is crucial for maintaining the privacy of sensitive data in our database tables and several associated tables in the database. The script is designed to replace real values with generated fake data, preserving the privacy of sensitive data while keeping the structure and functionality of our database intact for testing and development purposes. ## Background Analysis The central focus of this case study is a script tasked with anonymizing personal data of people within a large database, and more specifically, improving the efficiency of this process. The motivation for this optimization effort was the significant amount of time the initial script was taking to execute - over **22 hours** to anonymize approximately **10 million** records. This high volume of data is a result of the complex structure of the database, which comprises records not only in the `Employees` table, but also in several associated tables such as users, `Bookings`, `booking_actions`, `safeguards`, and `sensitive_values`. These tables hold different types of sensitive information related to employees. For instance, the users table stores personal data like email, first name, last name, etc., while the Bookings table logs data about employees seating arrangements. The anonymization process is not independent for each table - they are interconnected. When an employee's data is anonymized in the Employee table, all corresponding records in other tables also need to be updated to maintain consistency across the database. This implies that every time an employee record gets anonymized, its associations with other records in different tables also have to be modified. This intricate interrelation between tables amplifies the computation required for the anonymization process, which was a primary factor contributing to the initial script's long execution time. The challenge, therefore, was not only to anonymize the data but to do so in a manner that respects the relationships between tables, ensuring the integrity of the data as a whole, while significantly reducing the time required for the process to complete. ### **Schema**: The schema used for this case study involves several interconnected tables in a database system. The main tables involved include Employee, users, Bookings, booking_actions, safeguards, and sensitive_values. * **`sensitive_values`**: This table is used to store the old and new values of sensitive data for each employee during the anonymization process. Key columns include **old_email, new_email, new_first_name, new_last_name, old_first_name, old_last_name, employee_id, old_full_name, and new_full_name**. The relationships among these tables form the basis for the anonymization process, where sensitive data from the Employee and associated tables are replaced with anonymized data, and the mappings between old and new data are stored in the `sensitive_values` table. 
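Based on the columns listed above, a minimal sketch of how this mapping table might be created is shown below. This is not the project's actual DDL: the column types, lengths, and the two indexes on the `old_*` lookup columns are assumptions (the indexes simply reflect how the table is later joined on `old_email` and `old_full_name`), and MySQL syntax is assumed because the script elsewhere uses `SET foreign_key_checks` and `INFORMATION_SCHEMA.STATISTICS`.

```java
import org.springframework.jdbc.core.JdbcTemplate;

// Hedged sketch: creates the temporary mapping table used during anonymization.
// Column types, sizes, and index names are assumptions, not the original schema.
public class SensitiveValuesTableSetup {

    private final JdbcTemplate jdbcTemplate;

    public SensitiveValuesTableSetup(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void createMappingTable() {
        jdbcTemplate.execute(
            "CREATE TABLE IF NOT EXISTS sensitive_values (" +
            "  employee_id    BIGINT NOT NULL," +
            "  old_email      VARCHAR(255)," +
            "  new_email      VARCHAR(255)," +
            "  old_first_name VARCHAR(255)," +
            "  new_first_name VARCHAR(255)," +
            "  old_last_name  VARCHAR(255)," +
            "  new_last_name  VARCHAR(255)," +
            "  old_full_name  VARCHAR(255)," +
            "  new_full_name  VARCHAR(255)," +
            "  INDEX idx_old_email (old_email)," +       // assumed: speeds up the JOIN-based swaps
            "  INDEX idx_old_full_name (old_full_name)" +
            ")");
    }
}
```

Keeping the old and new values side by side in one row is what later allows the dependent tables to be rewritten with single `UPDATE ... JOIN` statements instead of one query per record.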
![Schema Diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g3h9hue68bin436nwi5q.png) ## Problem Analysis In the context of this case study, we encountered two significant issues that heavily impacted the efficiency and performance of our anonymization process. * **Impediment Due to Foreign Key Constraints**: The primary issue that was identified lies within the foreign key constraints in our database schema. These constraints ensure the consistency and integrity of the data across different tables. However, they also have a substantial impact on the performance of the queries. Whenever a record in a parent table (in this case, the Employee table) is updated, the DBMS has to check all related records in the child tables to maintain the foreign key constraint. This can be a very time-consuming process, especially when dealing with large datasets, as it requires a significant amount of processing power to perform these checks for every operation. * **Inefficient Handling of Dependent Tables**: The second problem was related to the handling of dependent tables (tables with associations to the Employee table). The existing process executed individual queries for each dependent table when an employee record was anonymized. This resulted in a **1:N** ratio of queries, where N is the number of tables having a relationship with the Employee table and further for associated table M (matching records count with employee) queries instead of single query to update same info. This approach is inefficient, as it requires a separate database operation for each table, which can significantly slow down the overall process, particularly when there are many related tables or the tables have a large number of records. These issues combined, resulted in a process that was not only time-consuming but also computationally intensive, posing a significant bottleneck for the anonymization of data. Consequently, these problems necessitated a reevaluation of the existing process and the exploration of more efficient ways to anonymize data while maintaining the integrity of relationships in the database. ## Solution Implementation The execution of the anonymization process was optimized by implementing the following steps: 1. **Disabling Foreign Key Checks**: Initially, to overcome the query performance issues during the anonymization process, foreign key checks were disabled. This allowed for faster execution of queries, as the system didn't have to check for relational integrity during this process. 2. **Handling Dependent Tables**: A new temporary table was created to store both the sensitive data and its anonymized counterparts. This table served as a reference during the modification of associated dependent tables. 3. **Multithreading for Anonymization**: To accelerate the anonymization process, multi-threading was employed. The sensitive data in the main table was anonymized in parallel, and the necessary fields along with their masked values were stored in the previously created temporary table. 4. **Anonymizing Dependent Tables**: The sensitive data in the dependent tables was then anonymized by referring to the temporary table. This ensured consistent anonymization across all related records. 5. **Anonymizing Missed Records**: Any records that were not anonymized in the previous steps were dealt with at this stage. Synthetic data was used to anonymize these missed records, all in a single update query, to maintain efficiency. 6. 
**Cleanup**: After the anonymization process was complete, the temporary table containing sensitive and anonymized data was discarded, ensuring no sensitive data was preserved in the anonymized database. Foreign key checks were then re-enabled to restore relational integrity for future operations. 7. **Truncating Irrelevant Tables**: Finally, any tables that were irrelevant or not required were truncated. This helped in maintaining the cleanliness and efficiency of the database, ensuring only necessary data was retained. ### **Sequence Diagram of the Proposed Solution**: ![Sequence Diagram](https://github.com/HarpyTech/others/assets/77878864/9cf4a523-504c-4177-b884-86c07356475d) ### **Code Implementation**: #### SensitiveDataAnonymizer.java ```JAVA import org.springframework.beans.factory.annotation.Autowired; import org.springframework.jdbc.core.JdbcTemplate; import org.springframework.stereotype.Component; import com.github.javafaker.Faker; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.List; import java.util.Map; import java.util.HashMap; import javax.transaction.Transactional; import java.time.LocalDateTime; import java.time.format.DateTimeFormatter; @Component public class SensitiveDataAnonymizer { private final Faker faker = new Faker(); @Autowired private JdbcTemplate jdbcTemplate; private String scriptUpdateTime; private ExecutorService executorService; private AnonymizeEmployeeAssociations employeeAssociation; public SensitiveDataAnonymizer() { this.scriptUpdateTime = LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")); this.employeeAssociation = new AnonymizeEmployeeAssociations(scriptUpdateTime); this.executorService = Executors.newFixedThreadPool(8); } @Transactional public void anonymize() { employeeAssociation.executeSql("SET foreign_key_checks = 0;"); truncateTables(); employeeAssociation.addIndexes(); System.out.println("ADDED ADDITIONAL INDEXES TO Bookings"); anonymizeTablesByEmployee(); System.out.println("ANONYMIZE SENSITIVE INFORMATION ASSOCIATED WITH EMPLOYEES"); employeeAssociation.anonymize(); employeeAssociation.dropIndexes(); truncateTable("sensitive_values"); System.out.println("TRUNCATE sensitive_values TABLE"); System.out.println("DROPPED ADDITIONAL INDEXES ON Bookings"); employeeAssociation.executeSql("SET foreign_key_checks = 1;"); } public void truncateTables() { // consider these tables data is no longer required jdbcTemplate.execute("TRUNCATE TABLE sensitive_values"); jdbcTemplate.execute("TRUNCATE TABLE table_X"); jdbcTemplate.execute("TRUNCATE TABLE table_Y"); } public void dataInsert(String firstName, String lastName, String email, String fullName, String newFirstName, String newLastName, String newEmail, String newFullName, String employeeId) { String sql = "INSERT INTO sensitive_values (old_first_name, old_last_name, old_email, " + "old_full_name, new_first_name, new_last_name, new_email, new_full_name, employee_id) " + "VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)"; jdbcTemplate.update(sql, firstName, lastName, email, fullName, newFirstName, newLastName, newEmail, newFullName, employeeId); } public void anonymizeTablesByEmployee() { System.out.println("\nANONYMIZE EMPLOYEES SENSITIVE DATA AND STORE INTO TEMPORARY TABLE"); System.out.println("\nEmployee Records Anonymization started at " + java.time.LocalTime.now()); List<Map<String, Object>> employees = jdbcTemplate.queryForList("SELECT * FROM Employee"); for (Map<String, Object> employee : employees) { executorService.submit(() -> 
{
                Map<String, String> anonymizedEmployee = employeeAttributes(employee.get("id").toString());
                String fullName = anonymizedEmployee.get("FirstName") + " " + anonymizedEmployee.get("LastName");
                dataInsert(employee.get("FirstName").toString(), employee.get("LastName").toString(),
                        employee.get("Email").toString(), employee.get("full_name").toString(),
                        anonymizedEmployee.get("FirstName"), anonymizedEmployee.get("LastName"),
                        anonymizedEmployee.get("Email"), fullName, employee.get("id").toString());
                anonymizeTable("Employee", anonymizedEmployee, "id", employee.get("id").toString());
            });
        }
        executorService.shutdown();
        while (!executorService.isTerminated()) {
        }
        anonymizeMissingRows("Employee", "updated_at", scriptUpdateTime);
        System.out.println("Completed Employee Records at " + java.time.LocalTime.now());
        anonymizeEmployeeDefaultValues();
        System.out.println("\nANONYMIZED EMPLOYEES SENSITIVE DATA AND STORED INTO TEMPORARY TABLE");
    }

    // This method will help to anonymize any table record
    public void anonymizeTable(String tableName, Map<String, String> data, String whereColumn, String whereValue) {
        StringBuilder sql = new StringBuilder("UPDATE " + tableName + " SET ");
        // Prepare the SET part of the SQL
        data.forEach((key, value) -> sql.append(key + " = '" + value + "', "));
        // Remove the last comma
        sql.deleteCharAt(sql.length() - 2);
        // Append the WHERE clause
        sql.append(" WHERE " + whereColumn + " = '" + whereValue + "'");
        // Execute the SQL
        jdbcTemplate.update(sql.toString());
    }

    // Truncates a single table; backs the truncateTable("sensitive_values") call in anonymize()
    public void truncateTable(String tableName) {
        jdbcTemplate.execute("TRUNCATE TABLE " + tableName);
    }

    public void anonymizeMissingRows(String tableName, String columnName, String time) {
        List<Map<String, Object>> rows = jdbcTemplate.queryForList("SELECT * FROM " + tableName + " WHERE ? > " + columnName, time);
        if (rows.size() > 0) {
            System.out.println(tableName + " of " + rows.size() + " Records Started At " + java.time.LocalTime.now());
            // Use a fresh pool here: the shared executorService has already been shut down by this point
            ExecutorService missingRowsPool = Executors.newFixedThreadPool(8);
            for (Map<String, Object> row : rows) {
                missingRowsPool.submit(() -> {
                    Map<String, String> anonymizedAttributes = employeeAttributes(row.get("id").toString()); // consider relevant data of the table
                    anonymizeTable(tableName, anonymizedAttributes, "id", row.get("id").toString());
                });
            }
            missingRowsPool.shutdown();
            while (!missingRowsPool.isTerminated()) {
            }
            System.out.println(tableName + " Completed At " + java.time.LocalTime.now());
        }
    }

    public void anonymizeEmployeeDefaultValues() {
        // This method is used to nullify or set some default value on any sensitive information on a table
        System.out.println("Employee Images Started At " + java.time.LocalTime.now());
        jdbcTemplate.update("UPDATE Employee SET profile_Image = NULL");
        jdbcTemplate.update("UPDATE users SET encrypted_password = 'dummy-password-hash', reset_password_token = NULL, reset_password_sent_at = NULL");
        System.out.println("Default Values Modification Completed At " + java.time.LocalTime.now());
    }

    public Map<String, String> employeeAttributes(String id) {
        Map<String, String> attributes = new HashMap<>();
        attributes.put("FirstName", faker.name().firstName());
        attributes.put("LastName", faker.name().lastName());
        attributes.put("WorkPhone", faker.phoneNumber().phoneNumber());
        attributes.put("Extension", faker.phoneNumber().extension());
        attributes.put("Photo", null);
        attributes.put("Bio", faker.lorem().sentence());
        attributes.put("Email", id + faker.internet().emailAddress());
        attributes.put("updated_at", String.valueOf(System.currentTimeMillis()));
        return attributes;
    }
}
```

The provided code is written in Java and is part of a larger system that anonymizes sensitive data in a database.
The code uses Spring's `JdbcTemplate` and `NamedParameterJdbcTemplate` (part of Spring Boot's JDBC support) to interact with the database through raw SQL queries. Spring also offers JPA (the Java Persistence API), which lets you create, retrieve, update, and delete records without writing raw SQL, but this script works directly with SQL so that the bulk updates and joins can be expressed explicitly.

Let's break down each function:

1. `anonymize`: This is the main function that orchestrates the anonymization process. It first disables foreign key checks, truncates certain tables, and adds indexes to improve performance. It then anonymizes the tables associated with employees and truncates the `sensitive_values` table. It finally drops the additional indexes and re-enables foreign key checks.
2. `anonymizeTable`: This function anonymizes records in a specific table. It takes a table name, a map of updated attributes, a column, and a value, builds an UPDATE statement from the attribute map, and applies it to the records where the column matches the value.
3. `dataInsert`: This function inserts a new record into the `sensitive_values` table holding the employee's existing values together with the newly generated fake values, for future reference.
4. `anonymizeMissingRows`: This function anonymizes rows that haven't been anonymized yet. It selects the rows whose timestamp column is still older than the script start time and updates them with freshly generated attributes.
5. `truncateTables` / `truncateTable`: These functions truncate the tables whose data is no longer required, as well as the temporary `sensitive_values` table once it has served its purpose.
6. `employeeAttributes`: This function returns a map of anonymized employee attributes. It uses the Java Faker library to generate fake data.

#### AnonymizeEmployeeAssociations.java

```JAVA
import com.github.javafaker.Faker;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.beans.factory.annotation.Autowired;

import java.util.HashMap;
import java.util.Map;

public class AnonymizeEmployeeAssociations {

    @Autowired
    private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    private final Faker faker = new Faker();
    private final Map<String, String> employee = new HashMap<>();
    private final Map<String, String> additionalIndexes = new HashMap<>();
    private final long scriptUpdateTime;
    private final String WHITE_LIST_EXPRESSION = "%@domain.com"; // to white list the users of product or support team credentials.
public AnonymizeEmployeeAssociations(long time) { this.scriptUpdateTime = time; this.additionalIndexes.put("checkInByEmail", "Bookings"); this.additionalIndexes.put("empSittingEmail", "Bookings"); this.employee.put("first_name", faker.name().firstName()); this.employee.put("last_name", faker.name().lastName()); this.employee.put("email", faker.internet().emailAddress()); this.employee.put("name", faker.name().fullName()); this.employee.put("subject", faker.lorem().sentence()); this.employee.put("content", faker.lorem().paragraph()); this.employee.put("updated_at", String.valueOf(time)); } public void anonymize() { updateUsers(); System.out.println("Completed Users Records at " + new Date()); updateBookings(); System.out.println("Completed Booking Records at " + new Date()); updateBookingActions(); System.out.println("Completed BookingAction Records at " + new Date()); updateSafeguards(); System.out.println("Completed Safeguards Records at " + new Date()); } public void addIndexes() { // These indexes are added to speed up the SQL query on joining tables additionalIndexes.forEach((column, table) -> { String indexExistsQuery = "SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS WHERE table_name = ? AND index_name = ?"; int indexExists = jdbcTemplate.queryForObject(indexExistsQuery, new Object[]{table, column}, Integer.class); if (indexExists == 0) { String addIndexQuery = "CREATE INDEX " + column + " ON " + table + " (" + column + ")"; jdbcTemplate.execute(addIndexQuery); } }); } public void dropIndexes() { additionalIndexes.forEach((column, table) -> { String indexExistsQuery = "SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS WHERE table_name = ? AND index_name = ?"; int indexExists = jdbcTemplate.queryForObject(indexExistsQuery, new Object[]{table, column}, Integer.class); if (indexExists > 0) { String removeIndexQuery = "DROP INDEX " + column + " ON " + table; jdbcTemplate.execute(removeIndexQuery); } }); } public void updateUsers() { String queryWithEmployee = "UPDATE users " + "INNER JOIN sensitive_values ON email = sensitive_values.old_email " + "SET email = sensitive_values.new_email, " + "first_name = sensitive_values.new_first_name, " + "last_name = sensitive_values.new_last_name, " + "updated_at = :updated_at " + "WHERE (email NOT LIKE :whiteList)"; // its optional if don't want to exclude any users data. 
Map<String, Object> params = new HashMap<>(); params.put("updated_at", scriptUpdateTime); params.put("whiteList", "%" + WHITE_LIST_EXPRESSION); namedParameterJdbcTemplate.update(queryWithEmployee, params); String queryWithoutEmployee = "UPDATE users " + "SET email = CONCAT(id, :email), " + "first_name = CONCAT(id, :first_name), " + "last_name = :last_name, " + "updated_at = :updated_at " + "WHERE (email NOT LIKE :whiteList AND updated_at < :updated_at)"; params.put("email", "@anonymous.com"); params.put("first_name", "Anonymous"); params.put("last_name", "User"); namedParameterJdbcTemplate.update(queryWithoutEmployee, params); } public void updateBookings() { String queryWithEmployee = "UPDATE Bookings " + "LEFT JOIN sensitive_values requestor ON checkInByEmail = requestor.old_email " + "LEFT JOIN sensitive_values occupant ON empSittingEmail = occupant.old_email " + "SET checkOutBy = occupant.new_full_name, " + "empSittingFirstName = occupant.new_first_name, " + "empSittingLastName = occupant.new_last_name, " + "empSittingEmail = occupant.new_email, " + "checkInBy = requestor.new_full_name, " + "checkInByEmail = requestor.new_email, " + "canceledBy = requestor.new_full_name, " + "updated_at = :updated_at " + "WHERE (checkInByEmail = requestor.old_email" + "OR empSittingEmail = occupant.old_email" + "OR checkOutBy = occupant.old_full_name)"; String queryWithoutEmployee = "UPDATE Bookings " + "SET checkOutBy = CONCAT(id, :name), " + "empSittingFirstName = CONCAT(id, :first_name), " + "empSittingLastName = :last_name, " + "empSittingEmail = CONCAT(id, :email), " + "checkInBy = CONCAT(id, :name), " + "checkInByEmail = CONCAT(id, :email), " + "updatedBy = :name, " + "updated_at = :updated_at " + "WHERE (updated_at < :updated_at)"; Map<String, Object> params = new HashMap<>(); params.put("name", "Anonymous User"); params.put("first_name", "Anonymous"); params.put("last_name", "User"); params.put("email", "@anonymous.com"); params.put("updated_at", scriptUpdateTime); namedParameterJdbcTemplate.update(queryWithEmployee, params); namedParameterJdbcTemplate.update(queryWithoutEmployee, params); } public void updateBookingActions() { String queryWithEmployee = "UPDATE booking_actions " + "INNER JOIN sensitive_values ON performer_email = sensitive_values.old_email " + "SET performed_by = sensitive_values.new_full_name," + "performer_email = sensitive_values.new_email, " + "updated_at = :updated_at " + "WHERE (performer_email = sensitive_values.old_email)"; String queryWithoutEmployee = "UPDATE booking_actions " + "SET performed_by = CONCAT(id, :name), " + "performer_email = CONCAT(id, :email), " + "updated_at = :updated_at " + "WHERE (updated_at < :updated_at)"; Map<String, Object> params = new HashMap<>(); params.put("name", "Anonymous User"); params.put("email", "@anonymous.com"); params.put("updated_at", scriptUpdateTime); namedParameterJdbcTemplate.update(queryWithEmployee, params); namedParameterJdbcTemplate.update(queryWithoutEmployee, params); } public void updateSafeguards() { String queryWithEmployee = "UPDATE safeguards " + "INNER JOIN sensitive_values ON email = sensitive_values.old_email " + "SET email = sensitive_values.new_email, " + "updated_at = :updated_at " + "WHERE (email = sensitive_values.old_email)"; String queryWithoutEmployee = "UPDATE safeguards " + "SET email = CONCAT(id, :email), " + "updated_at = :updated_at " + "WHERE (updated_at < :updated_at)"; Map<String, Object> params = new HashMap<>(); params.put("email", "@anonymous.com"); params.put("updated_at", scriptUpdateTime); 
namedParameterJdbcTemplate.update(queryWithEmployee, params); namedParameterJdbcTemplate.update(queryWithoutEmployee, params); } ``` The class named AnonymizeEmployeeAssociations is designed to replace real values with generated fake data in the other related tables associated with the main (Employee) table in the database. It uses the javafaker library to generate fake data. Key elements in this class include: * **Instance variables**: The class has instance variables such as namedParameterJdbcTemplate, jdbcTemplate, faker, employee, additionalIndexes and scriptUpdateTime. The namedParameterJdbcTemplate and jdbcTemplate are Spring JDBC templates used for interacting with the database. The faker instance is used to generate fake data. The employee and additionalIndexes are HashMaps used to store employee information and index information respectively. The scriptUpdateTime is used for tracking the time of script execution. * **anonymize() method**: This method calls a series of update methods that anonymize different parts of the database. These include updateUsers(), updateBookings(), updateBookingActions(), and updateSafeguards(). * **addIndexes() and dropIndexes() methods**: These methods are used to add and drop indexes on the database tables to speed up the SQL queries. * **update methods**: These methods are used to update the various tables in the database. They use SQL queries to update the records in the tables. Some of the update methods use JOIN operations against the `sensitive_values` table, which was populated during the `Employee` table anonymization process, so related records receive the same anonymized values. In the provided code, queryWithEmployee and queryWithoutEmployee are SQL queries used to update different sets of records in the database. > 1. **`queryWithEmployee`**: This SQL query is used for records that have a matching entry in the **sensitive_values** table. If an employee's old email is found in the sensitive_values table, the employee's email, first name, and last name are updated to the new anonymized values from the **sensitive_values** table. > 2. **`queryWithoutEmployee`**: Conversely, this SQL query is used for records that do not have a matching entry in the **sensitive_values** table. If an employee's email is not found in the **sensitive_values** table, or if the record has not been updated since the script started running, the employee's email, first name, and last name are set to a default anonymized value. Using these two separate queries allows the anonymization process to handle different cases based on whether a matching anonymized value exists for an employee, providing a more fine-tuned approach to data anonymization. ### Results & Outcomes: 1. **Significant Performance Improvement**: The performance of the anonymization process improved drastically, decreasing from over **22 hours** to approximately **40-50 minutes**. This represents a significant efficiency gain, making the process of anonymizing **10 million records** far more viable and less time-consuming. 2. **Optimized Anonymization Process**: With the proposed solution, dependent tables are not updated each time an Employee record is anonymized. Instead, the dependent tables are anonymized after all the Employee records have been anonymized. This strategy leads to a reduction in the number of queries executed, further contributing to improved performance. 3. **Data Swapping with UPDATE and JOINs**: The proposed solution leverages the power of SQL UPDATE and JOIN statements to swap data with a temporary table. 
This approach allows for efficient, bulk updating of records, which is a more performant operation compared to row-by-row updates. ### Lessons Learned: 1. **Understanding Database Operations**: The case study provided an opportunity to understand the complexity and time consumption of the database operations in anonymizing large data sets. It helped to understand how data is structured, queried, and manipulated in a database, specifically for large tables. 2. **Performance Optimization**: One of the key learnings was how to optimize performance when handling large databases. The proposed solution significantly reduced the time taken to anonymize data from over 22 hours to around 40-50 minutes, which indicates the importance of efficient data management and processing techniques. 3. **Exploration of Data Anonymization Techniques**: This case study provided a practical platform to delve into and apply sophisticated data anonymization techniques, specifically **Synthetic Data Generation** and **Data Swapping**. In the current digital climate, maintaining data privacy and ensuring security are paramount. Through this case study, an efficient methodology to anonymize sensitive database information was demonstrated, thereby emphasizing the importance of data privacy. The use of Synthetic Data Generation helps to create artificial data that can be used in place of real data, thereby protecting sensitive information. Furthermore, Data Swapping was employed to interchange the data among records, effectively anonymizing the data while preserving the overall data distribution and relationships. These techniques played a vital role in accomplishing the successful anonymization of the database in this study. 4. **Working with Joins and Temp Tables**: The case study provided hands-on experience with using SQL joins and temporary tables to perform complex data manipulation tasks. These are vital skills in data management and analytics. 5. **Understanding Dependencies**: The case study emphasized the importance of understanding dependencies in a database schema. By ensuring that dependent tables are updated only after all records in the Employee table are anonymized, the solution avoided unnecessary updates and improved performance. 6. **Software Development Best Practices**: The case study highlighted the importance of careful planning, testing, and implementation in the software development process. The successful solution was the result of a well-thought-out plan, rigorous testing, and careful implementation. ### **Use Cases**: 1. **Data Anonymization for Compliance**: Companies often need to conform to various data privacy regulations such as GDPR, CCPA, HIPAA, etc. In such cases, this solution can be used to anonymize sensitive data in the database, ensuring regulatory compliance. 2. **Data Sharing and Collaboration**: When sharing data between departments or with external partners for business purposes, it's crucial to ensure sensitive details are not exposed. This solution can be used to anonymize the data before sharing. 3. **Data Analysis and Research**: For data analysis and research purposes, analysts often need access to real-world data. However, they usually don't need to know the real identities behind the data. This solution can anonymize the database, allowing safe analysis. 4. **Software Testing**: In the software development lifecycle, testing software with realistic data is crucial. However, using real data brings about privacy issues. 
The proposed solution can be used to anonymize data, creating a realistic but privacy-safe testing environment. 5. **Data Backup and Archiving**: Companies archive old data for record-keeping, but it's not always necessary to keep sensitive details in these records. This solution can anonymize the data before archiving, reducing potential risks. In each of these use cases, the key benefit is the ability to use and manage data without exposing sensitive information, thereby improving privacy and security. ### **Conclusion**: This case study demonstrates a significant improvement in anonymizing sensitive data in a database. The initial process took more than **22 hours** to anonymize **10 million** records. However, the proposed solution drastically reduced this time to around **40-50 minutes** - an incredible performance enhancement. The solution involved swapping data using UPDATE and JOIN operations on a temporary table, which avoids updating dependent tables every time an employee record is updated. Instead, the dependent tables are updated only after all employee records have been anonymized. This approach ensures efficiency by reducing the number of update operations on the database. Overall, the case study highlights a practical method to handle sensitive data anonymization in a large database. It underscores the importance of well-planned strategies when dealing with large volumes of data, especially when performance and data privacy matter. The proposed solution in this case study could serve as a valuable reference for similar tasks in the future.
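For readers who want to experiment with the core idea on a small scale, the following is a toy sketch of the swap-with-mapping-table approach in Python with SQLite. It is only an illustration with simplified, made-up table and column names, not the original MySQL/Spring implementation described above.

```python
import sqlite3

# Toy schema: a users table plus a mapping table that pairs each real email
# with a pre-generated anonymized replacement (the "sensitive_values" idea).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, first_name TEXT);
CREATE TABLE sensitive_values (old_email TEXT PRIMARY KEY, new_email TEXT, new_first_name TEXT);
INSERT INTO users VALUES (1, 'alice@corp.com', 'Alice'), (2, 'bob@corp.com', 'Bob');
INSERT INTO sensitive_values VALUES ('alice@corp.com', '1@anonymous.com', 'Anonymous');
""")

# One bulk UPDATE that pulls replacement values from the mapping table
# (correlated subqueries instead of MySQL's UPDATE ... JOIN syntax).
conn.execute("""
UPDATE users
SET email      = (SELECT new_email      FROM sensitive_values WHERE old_email = users.email),
    first_name = (SELECT new_first_name FROM sensitive_values WHERE old_email = users.email)
WHERE email IN (SELECT old_email FROM sensitive_values)
""")

# Rows without a mapping entry fall back to a generic anonymized value.
conn.execute("UPDATE users SET email = id || '@anonymous.com', first_name = 'Anonymous' "
             "WHERE email NOT LIKE '%@anonymous.com'")

print(conn.execute("SELECT * FROM users").fetchall())
# [(1, '1@anonymous.com', 'Anonymous'), (2, '2@anonymous.com', 'Anonymous')]
```

The point of the sketch is the shape of the work: one set-based pass driven by the mapping table, then one generic fallback pass, instead of a round trip per record.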
lokesh-g
1,742,931
Why Javascript has to be slower than C++
The primary reason is that javascript is an interpreted language Interpreted vs Compiled...
0
2024-01-27T13:17:03
https://dev.to/codeparrot/why-javascript-has-to-be-slower-than-c-4inj
javascript, programming, performance, webdev
The primary reason is that JavaScript is an interpreted language ## Interpreted vs Compiled languages JavaScript is an interpreted language, meaning the code is executed line-by-line by an interpreter at runtime. In contrast, C++ is a compiled language, where the source code is translated into machine code before execution. This means: 1. Additional overhead while running 2. Compiled languages can perform a lot more optimisation before runtime, like deciding when to do memory cleanup ## But why JavaScript has to be interpreted ### Security As a language primarily used in browsers, executing in a sandboxed environment, being interpreted adds a layer of security. It's easier to impose restrictions and monitor executed code in an interpreted language. ### Rapid Development For web development, the ability to write code and immediately see the results in a browser is important (what would you do without hot reload :( ). Adding an additional compilation step would mean rebuilding the entire app. An interpreted language fits well into this rapid development cycle, allowing developers to quickly test and modify their code. ### Platform Independence Being interpreted allows JavaScript to run in any environment with a compatible interpreter, such as different web browsers. This is crucial for a web scripting language, as it needs to operate consistently across various platforms and devices. C++ needs to be compiled separately for each target platform. ### Dynamic Features JavaScript was designed to be a flexible and dynamic language, with features like dynamic typing. If you've used the `any` type in TypeScript, you know exactly why we like dynamic typing. A compiled language needs to know the exact type of each object before runtime. An interpreted environment is more conducive to these dynamic features, as it allows for on-the-fly execution and changes. So, while JavaScript is a bit slower, it works well for the use cases it's designed for. And that's what programming is about - the right tool for the job
royaljain
1,742,954
Brushing Up on Linear Regression in Python: Theory to Practice
Having completed an extensive machine learning course, I've noticed that my memory of the material is...
0
2024-02-01T11:23:59
https://dev.to/esakik/re-learn-linear-regression-in-python-from-theory-to-practice-277m
machinelearning, python, ai, datascience
Having completed an extensive machine learning course, I've noticed that my memory of the material is starting to diminish. To address this, I've made the decision to write a series of articles. ## Introduction Assuming the x-axis represents age and the y-axis indicates income, it appears possible to somehow express the plotted data with a linear function. ![Linear Regression with One Variable](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/urarlso3h1ta3vwqy95f.png) The blue line is merely a visual guide and is not based on mathematical accuracy; therefore, we need a more principled procedure to determine the actual equation of this blue line. ## Hypothesis Function Adjust the free parameters {% katex inline %} \theta_0, \theta_1 {% endkatex %} of the function to formulate an expression that most accurately fits the data with minimal error. {% katex %} h_\theta(x) = \theta_0 + \theta_1x_1 {% endkatex %} For scenarios involving multiple variables, the formula would be structured as follows. This no longer draws a single straight line in two dimensions, yet the foundational principle stays the same. {% katex %} h_\theta(x) = \theta_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_3 + ... + \theta_nx_n {% endkatex %} ## Cost Function The cost function is a tool utilized to fit the hypothesis function. Simply put, it calculates the average discrepancy between the predicted results and the actual outputs. By determining the parameter {% katex inline %} θ {% endkatex %} that minimizes the error, the true parameters of the hypothesis function can be ascertained. This method is commonly known as the "mean squared error (MSE)". {% katex %} J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2 {% endkatex %} The division by 2 in the function {% katex inline %} J(\theta) {% endkatex %} is implemented to simplify the process of differentiation when calculating the function later. ## Optimization Using Gradient Descent A strategy must be formulated to optimize (in this instance, minimize) the cost function. The minimization of the mean squared error occurs when the derivative of this function equals zero. This procedure is depicted by the following update formula, known as the gradient descent method. ![Gradient Descent 2D](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1a6f10zeqlar18ylvnyx.png) This method persistently applies this update until the parameter values reach a point of convergence: {% katex %} \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta) = \theta_j - \alpha \frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)}) \cdot x_j^{(i)} {% endkatex %} Partial differentiation will be applied, where one variable is differentiated while treating the other variables as constants. The resulting gradient is then multiplied by α, which is called the learning rate, and subtracted from the original {% katex inline %} θ_j {% endkatex %} to derive the updated {% katex inline %} θ_j {% endkatex %}. As the gradient approaches 0, whatever the value of α, the range of variation of {% katex inline %} θ_j {% endkatex %} becomes smaller and closer to 0. When the range of variation becomes small enough, it is called convergence. ![Gradient Descent 3D](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/by7431df2t7ym1b8cdm0.png) Note: if the value of α is excessively high, the variation in {% katex inline %} θ_j {% endkatex %} becomes too large, potentially leading to a failure in convergence. 
On the other hand, a smaller α results in a slower yet more reliable convergence. Additionally, the update of {% katex inline %} \theta_0, \theta_1, ..., \theta_j {% endkatex %} should be done at the same time, as this is a fundamental requirement for the process. ## Implementation of Linear Regression In this section, we will develop a linear regression model utilizing the gradient descent technique. We will use the [California Housing dataset](https://scikit-learn.org/stable/datasets/real_world.html#california-housing-dataset) from the scikit-learn library for this example. To begin, we will import the necessary libraries and load the dataset. ```shell pip install numpy==1.23.5 pandas==1.5.3 scikit-learn==1.2.2 matplotlib==3.7.4 seaborn==0.13.2 ``` ```python import numpy as np import pandas as pd from sklearn import datasets from sklearn import model_selection dataset = datasets.fetch_california_housing() X = pd.DataFrame(data=dataset.data, columns=dataset.feature_names) y = pd.Series(data=dataset.target, name="target") X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, random_state=0) ``` The dataset comprises 8 features and 20,640 samples. The target variable is the median house value within each block, expressed in units of 100,000 USD. The code provided next will generate a plot of the correlation matrix for this dataset. ```python import seaborn as sns import matplotlib.pyplot as plt corr_matrix = pd.concat([X, y], axis=1).corr() plt.figure(figsize=(9, 6)) plt.title("Correlation Matrix") sns.heatmap(corr_matrix, annot=True, square=True, cmap="Blues", fmt=".2f", linewidths=.5) plt.savefig("california_housing_corr_matrix.png") ``` ![california_housing_corr_matrix.png](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l47i46ti5d1c4smzaj1f.png) The correlation matrix reveals that the median income is the most strongly correlated with the target variable. The correlation between the target variable and the other features is comparatively lower. However, for simplicity in this example, all features will be used. We are now set to build the linear regression model. To gain a deeper understanding of its mechanics, we'll create it from the ground up, without relying on pre-existing machine learning libraries. ```python class LinearRegression: def __init__(self, alpha: float = 1e-7, eps: float = 1e-4) -> None: self.alpha = alpha # Learning rate for gradient descent self.eps = eps # Threshold of convergence def fit(self, X: pd.DataFrame, y: pd.Series) -> "LinearRegression": """Train the model. Optimization method is gradient descent. :param X: The feature values of the training data. :param y: The target values of the training data. :return: The trained model. 
""" self._m = X.shape[0] # The number of samples num_features = X.shape[1] # The number of features self._theta = np.zeros(num_features) # Parameters (weight) of the model (without bias) self._theta0 = np.zeros(1) # Bias of the model self._error_values = [] # The output values of the cost function in each iteration self._grad_values = [] # Gradient values in each iteration self._iter_counter = 0 # The counter of iterations error = self.J(X, y) # The initial output value of the cost function with random parameters diff = 1.0 # The difference between the previous and the current output values of the cost function # Repeat until convergence while diff > self.eps: # Update the parameters by gradient descent y_pred = self.predict(X) # Predict the target values with the current parameters grad = (1 / self._m) * np.dot(y_pred - y, X) # Calculate the gradient using the formula self._theta -= self.alpha * grad # Update the parameters self._theta0 -= (1 / self._m) * np.sum(y_pred - y) # Update the bias # Print the current status _error = self.J(X, y) # Compute the error with the updated parameters diff = abs(error - _error) # Compute the difference between the previous and the current error error = _error # Update the error self._error_values.append(error) self._grad_values.append(grad.sum()) self._iter_counter += 1 print(f"[{self._iter_counter}] error: {error}, diff: {diff}, grad: {grad.sum()}") print(f"Convergence in {self._iter_counter} iterations.") return self def predict(self, X: pd.DataFrame) -> np.ndarray: """Predict the target values using the hypothesis function. :param X: The feature values of the data. :return: The predicted target values. """ # Pass the bias and the parameters to the hypothesis function theta = np.concatenate([self._theta0, self._theta]) return self.h(X, theta) def h(self, X: pd.DataFrame, theta: np.ndarray) -> np.ndarray: """Hypothesis function. :param X: The feature values of the data. :param theta: The parameters (weight) of the model. :return: The predicted target values. """ # theta[0] is bias and theta[1:] is parameters return np.dot(X, theta[1:].T) + theta[0] def J(self, X: pd.DataFrame, y: pd.Series) -> float: """Cost function. Mean squared error (MSE). :param X: The feature values of the data. :param y: The target values of the data. :return: The error value. """ y_pred = self.predict(X) # Predict the target values with the current parameters return (1 / (2 * self._m)) * np.sum((y_pred - y) ** 2) # Compute the error using the formula ``` The code includes comprehensive explanations in the form of comments. For further understanding, please refer to these comments. Next, we will proceed to train the model and assess its performance on both the training and test data sets. ```python model = LinearRegression() model.fit(X_train, y_train) ``` The results of the training process can be visualized using the following plotting method. ```python import matplotlib.pyplot as plt fig = plt.figure(figsize=(15, 5)) ax = fig.add_subplot(1, 2, 1) ax.set_title("MSE") ax.set_ylabel("Error") ax.set_xlabel("Iteration") ax.plot(np.arange(model._iter_counter), model._error_values, color="b") ax = fig.add_subplot(1, 2, 2) ax.set_title("Gradient") ax.set_ylabel("Gradient") ax.set_xlabel("Iteration") ax.plot(np.arange(model._iter_counter), model._grad_values, color="r") plt.show() ``` ![Training Process](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gukodfkklu36r9twzoas.png) Now, let's evaluate the model on both the training and test data sets. 
```python from sklearn.metrics import mean_squared_error y_train_pred = model.predict(X_train) print(f"MSE for train data: {mean_squared_error(y_train, y_train_pred)}") y_test_pred = model.predict(X_test) print(f"MSE for test data: {mean_squared_error(y_test, y_test_pred)}") ``` The Mean Squared Error (MSE) for the training data stands at 1.33, while for the test data, it is 1.32. The marginally lower MSE for the test data suggests that the model is not overfitting, which is a positive indication of its generalization capability. ```text MSE for train data: 1.3350646294600155 MSE for test data: 1.322791709774531 ``` By using the scikit-learn library, the same model can be implemented with a more streamlined code approach. This allows for an efficient and more straightforward implementation, and in this case it also reaches a noticeably lower error. ```python from sklearn.linear_model import LinearRegression as SklearnLinearRegression sklearn_model = SklearnLinearRegression() sklearn_model.fit(X_train, y_train) sklearn_y_train_pred = sklearn_model.predict(X_train) print(f"MSE for train data: {mean_squared_error(y_train, sklearn_y_train_pred)}") sklearn_y_test_pred = sklearn_model.predict(X_test) print(f"MSE for test data: {mean_squared_error(y_test, sklearn_y_test_pred)}") ``` ```text MSE for train data: 0.5192270684511334 MSE for test data: 0.5404128061709085 ``` ## Regularization Regularization shrinks the weights to prevent overfitting by making it difficult for any single weight to become large. It seeks the set of weights that minimizes the cost function while keeping the weights within a given constraint. ### Ridge Regression Ridge Regression is one of the linear regression methods. The equations used for prediction are the same as those in linear regression, but L2 regularization is used to avoid over-fitting. It achieves high generalization performance by keeping each weight as close to zero as possible. Cost function: {% katex %} J(\theta) = \frac{1}{2m}(\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2 + \lambda\sum_{j=1}^{n}\theta_j^2) {% endkatex %} Gradient descent with Regularization: {% katex %} \theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta) = \theta_j - \alpha (\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)}) \cdot x_j^{(i)} + \frac{\lambda}{m}\theta_j) {% endkatex %} Thus, ridge regression uses the L2 norm for the regularization, which is calculated with the Euclidean distance: {% katex %} d = \sqrt{(b_1 - a_1)^2 + (b_2 - a_2)^2} {% endkatex %} ### Lasso Regression Lasso regression applies L1 regularization, leading to some weights becoming zero. This results in certain features being entirely excluded from the model. With some weights set to zero, the model simplifies and clarifies which features are significant. Cost function: {% katex %} J(\theta) = \frac{1}{2m}(\sum_{i=1}^{m}(h_\theta(x^{(i)}) - y^{(i)})^2 + \lambda\sum_{j=1}^{n}|\theta_j|) {% endkatex %} Thus, lasso regression uses the L1 norm for the regularization, which is calculated with the Manhattan distance: {% katex %} d = |(b_1 - a_1)| + |(b_2 - a_2)| {% endkatex %} ## References - https://www.coursera.org/specializations/machine-learning-introduction
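As a quick practical follow-up to the regularization section, scikit-learn's built-in Ridge and Lasso estimators can be tried on the same train/test split used earlier. This is only a brief sketch; the alpha values are illustrative and not tuned.

```python
from sklearn.linear_model import Ridge, Lasso
from sklearn.metrics import mean_squared_error

# L2-regularized (ridge) and L1-regularized (lasso) linear regression;
# alpha controls the strength of the penalty term (lambda in the formulas above).
for name, reg in [("Ridge", Ridge(alpha=1.0)), ("Lasso", Lasso(alpha=0.01))]:
    reg.fit(X_train, y_train)
    mse = mean_squared_error(y_test, reg.predict(X_test))
    print(f"{name} MSE for test data: {mse:.4f}")
    print(f"{name} coefficients: {reg.coef_}")
```

Inspecting `coef_` is a simple way to see the effect described above: ridge keeps all weights small, while lasso tends to drive some of them exactly to zero.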
esakik
1,742,999
AI-Powered Snapchat Stories: Convert Instagram Videos with Ease
In the realm of social media, the seamless transition of content across platforms is crucial for...
0
2024-01-27T15:29:14
https://dev.to/instagram-to-snapchat/ai-powered-snapchat-stories-convert-instagram-videos-with-ease-4i74
webdev, javascript, beginners, programming
In the realm of social media, the seamless transition of content across platforms is crucial for maintaining a strong online presence. Simplified Clips introduces an innovative solution for content creators: the AI-Powered Snapchat Stories feature. This tool facilitates the effortless conversion of Instagram videos to captivating Snapchat stories. Let's explore how Simplified Clips' AI-powered capabilities make this transformation a breeze. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a66ieuwmr6okobho1ozi.png) 1. **AI Magic Cut:** Simplified Clips employs the AI Magic Cut feature, a smart tool that analyzes and optimizes Instagram videos for Snapchat. This ensures that your content retains its essence while seamlessly fitting the dynamic format of Snapchat Stories. 2. **Creative Enhancements:** Elevate your Snapchat Stories with creative elements powered by AI. From fancy subtitle generation to B-Roll integration, Simplified Clips' AI ensures that your repurposed content is visually appealing and engaging. 3. **Efficiency and Precision:** The efficiency of the AI-powered tools ensures a swift conversion process, saving creators valuable time. The precision with which the tools operate guarantees that your Instagram videos are transformed into Snapchat Stories with accuracy and finesse. 4. **User-Friendly Interface:** Navigating the conversion process is made user-friendly with Simplified Clips. Whether you're an experienced creator or a novice, the intuitive design of the tool ensures that the transition from **Instagram to Snapchat** is a straightforward and enjoyable experience. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7wrpxlanqezfc3epu3ta.png) 5. **Comprehensive Toolkit:** Beyond conversion, Simplified Clips offers a comprehensive toolkit for content creators. From advanced video editing to customizable templates, the platform empowers users to enhance their Snapchat Stories creatively. **Embrace the Evolution of Storytelling** In conclusion, the AI-Powered Snapchat Stories feature by Simplified Clips opens new possibilities for content creators, allowing them to effortlessly convert their Instagram videos into engaging Snapchat Stories. With a focus on efficiency, creativity, and a user-friendly interface, Simplified Clips paves the way for an evolution in storytelling across social media platforms. Embrace this innovative tool and transform your Instagram videos into captivating Snapchat Stories with ease.
instagram-to-snapchat
1,743,110
Conversational Excellence: Unleashing the Power of AI Chatbots with Consultopia.ai
In the dynamic landscape of digital communication, the emergence of AI chatbots has revolutionized...
0
2024-01-27T18:02:33
https://dev.to/janzaib121/conversational-excellence-unleashing-the-power-of-ai-chatbots-with-consultopiaai-1d6f
ai
In the dynamic landscape of digital communication, the emergence of AI chatbots has revolutionized the way businesses interact with their audience. At the forefront of this transformative wave is Consultopia.ai, a company dedicated to elevating conversational experiences through cutting-edge AI technology. This article delves into the realm of conversational excellence and how Consultopia.ai's AI chatbots are paving the way for unparalleled communication. ## The Rise of AI Chatbots In recent years, AI chatbots have become indispensable tools for businesses seeking efficient and effective communication channels. These virtual assistants leverage artificial intelligence to understand and respond to user queries, offering real-time engagement and personalized interactions. Consultopia.ai has harnessed the power of AI to develop chatbots that go beyond scripted responses, creating dynamic conversations that adapt to user needs. ## Enhancing Customer Experiences One of the key strengths of Consultopia.ai's AI chatbots lies in their ability to enhance customer experiences. These chatbots serve as round-the-clock virtual assistants, providing instant support, answering inquiries, and guiding users through processes. By automating routine tasks, businesses can streamline operations and allocate human resources to more complex and strategic endeavors, ultimately improving overall customer satisfaction. ## Tailored Solutions for Every Industry Consultopia.ai's AI chatbots are not one-size-fits-all; instead, they are tailored to meet the unique demands of various industries. Whether in healthcare, finance, e-commerce, or beyond, these chatbots adapt to industry-specific nuances, ensuring a seamless and contextually relevant conversation. This versatility positions Consultopia.ai as a leader in providing customized AI chatbot solutions that align with diverse business needs. ## Empowering Businesses with Intelligence The intelligence embedded in Consultopia.ai's AI chatbots is a game-changer for businesses aiming to stay ahead in the competitive market. These chatbots continuously learn from interactions, evolving to better understand user preferences and adapting to changing trends. The result is not just automated responses but intelligent conversations that foster meaningful connections and drive business success. ## The Future of Conversational Technology As we look to the future, Consultopia.ai remains committed to pushing the boundaries of conversational technology. The ongoing advancements in AI, coupled with Consultopia.ai's dedication to innovation, promise a future where AI chatbots become even more integral to businesses across industries. The journey towards conversational excellence continues, with Consultopia.ai leading the way. ## Real Estate AI Automation Services In the realm of real estate, Consultopia.ai's AI chatbots are revolutionizing interactions, providing intelligent solutions that streamline processes and enhance customer experiences. Through [real estate AI automation services](https://techxpert.io/), Consultopia.ai is helping businesses in this sector achieve operational efficiency, improved client engagement, and a competitive edge in the market. Embrace the future of real estate with Consultopia.ai's AI-powered conversational excellence.
janzaib121
1,743,118
Recent Fossil Findings Reveal a Pivotal Chapter in Evolution
In the epic tale of Earth's evolution, researchers from Curtin University have unearthed a...
0
2024-01-27T18:16:53
https://dev.to/lilyanakarim/recent-fossil-findings-reveal-a-pivotal-chapter-in-evolution-20mb
In the epic tale of Earth's evolution, researchers from Curtin University have unearthed a captivating chapter, delving into the mysteries of [ancient multicellular fossils](https://hubtales.com/2024/01/16/journey-through-time-recent-fossil-findings-reveal-a-pivotal-chapter-in-evolution/ ) dating back an astonishing 565 million years. Led by PhD student Anthony Clarke, this groundbreaking study not only refines the timeline of Earth's history but also offers a tantalizing glimpse into a transformative era when the planet's seas teemed with complex lifeforms following a global ice age. The fossils, meticulously uncovered in the Coed Cochion Quarry in Wales, have been dated with unprecedented precision, thanks to volcanic ash layers acting as crucial time indicators. This meticulous dating firmly places these relics of the past at 565 million years, providing a rare insight into a time when Earth's oceans underwent a remarkable transition from hosting solitary microbes to nurturing a diverse community of intricate multicellular organisms. What elevates this discovery to profound significance is its global reach. Similar Ediacaran fossils have been unearthed in locations spanning the globe, including Australia. This widespread distribution underscores the interconnectedness of Earth's ecosystems during this pivotal epoch. The fossils, aptly named after the Ediacara Hills in South Australia, showcase a rich array of life forms, ranging from disc-shaped Aspidella terranovica to enigmatic creatures bearing resemblance to modern-day marine species like jellyfish or sea pens. Professor Chris Kirkland, a co-author of the study, emphasizes the fossils' significance by drawing parallels with the renowned Ediacara fossils in South Australia. These findings shed light on Earth's biological response to the thawing aftermath of a global glaciation, highlighting the profound interplay between geological processes and the evolution of life. In essence, this study serves as a testament to the importance of unraveling the enigmas of ancient ecosystems to deepen our understanding of Earth's past. By deciphering the geological and biological intricacies of this transformative period, we gain invaluable insights into life's evolutionary journey and its remarkable resilience in the face of planetary upheavals. As we contemplate these revelations, we are struck by the intricate interplay between geological forces and the emergence of life on Earth. The fossils from Wales, echoing their counterparts in Australia, narrate a timeless saga that transcends the ages, reminding us of the profound connection between our past and our present. This research not only enriches our comprehension of Earth's evolutionary tapestry but also serves as a poignant reminder of the enduring symphony of life that has resonated across our planet for countless millennia.
lilyanakarim
1,743,176
7
Sure, I can help you understand these codes. They are written in JavaScript, a programming language...
0
2024-01-27T19:23:35
https://dev.to/pooyaalamdari/7-22fp
Sure, I can help you understand this code. It is written in **JavaScript**, a programming language that can run in both the browser and on the server. This code is part of a web application that uses **Express.js**, a framework that simplifies Node.js web server APIs and adds new features. Let me explain each line of code with an easy example in the real world: - `const express = require('express');` - This line imports the **express** module, which is a package of code that provides various functions and features for web development. You can think of it as a toolbox that contains many tools for building websites. To use the express module, you need to install it using a command like `npm install express` in your terminal. Then, you can use the `require` function to load the module and assign it to a constant variable called `express`. This is like opening the toolbox and getting ready to use the tools inside. - `const router = express.Router();` - This line creates a new **router** object, which is a tool that helps you organize your web application into smaller and modular chunks. You can think of it as a map that shows you how to get from one page to another in your website. The router object is created by calling the `Router` method of the express module. This is like drawing a map and labeling the routes with names and directions.
pooyaalamdari
1,743,195
Can spring boot be used for front end rendering as well ?
Everyone knows that Spring boot might be the most popular framework to build powerful backend...
0
2024-01-27T20:29:47
https://dev.to/engineeringexpert/can-spring-boot-be-used-for-front-end-rendering-as-well--47lk
ai, llm
Everyone knows that Spring Boot might be the most popular framework for building powerful backend services. However, can it also be used for front ends? The short answer appears to be yes. [Spring Boot can be used for front ends, but its key strength is backend development](https://www.frontendeng.dev/blog/35-spring-boot-for-front-end-development)
engineeringexpert
1,743,201
How I created a PalWorld server on AWS started by a Discord bot - Part 2
In the first part, I showed how to create a virtual machine on AWS and install a Palworld server....
0
2024-01-27T20:50:05
https://dev.to/andarilhoz/como-eu-criei-um-servidor-de-palworld-na-aws-com-start-por-bot-no-discord-parte-2-7ce
programming, devops, aws, gamedev
In the first part, I showed how to create a virtual machine on AWS and install a Palworld server. However, we still need to solve the problem of the machine staying on all the time, find a way to let anyone start the server from Discord, and also get access to the server's IP. ## Shutting down the machine automatically: To automate shutting the server down, I decided to create a script that: 1. Checks whether the server has started. 2. Checks whether there are players online. 3. Shuts the machine down if there are no players left. To do this, let's create a script called shutdownServer.sh with the content below: ```bash #!/bin/bash cd /home/ubuntu/palworld # Run the command to get the player list and capture possible errors OUTPUT=$(docker-compose run --rm rcon ShowPlayers 2>&1) # Filter out unwanted lines CLEANED_OUTPUT=$(echo "$OUTPUT" | grep -v "Creating palworld_rcon_run") # Check whether the command failed if echo "$CLEANED_OUTPUT" | grep -q "ERROR"; then echo "Error running the command. The machine is probably still starting up." STOP_INSTANCE=false else # Count the number of players NUM_PLAYERS=$(echo "$CLEANED_OUTPUT" | wc -l) NUM_PLAYERS=$((NUM_PLAYERS-1)) # Adjust to subtract the header, if present if [ "$NUM_PLAYERS" -le 0 ]; then echo "No players online." STOP_INSTANCE=true else echo "There are $NUM_PLAYERS players online." echo "Player list:" echo "$CLEANED_OUTPUT" | cut -d ',' -f 1 STOP_INSTANCE=false fi fi # If there are no players online, stop the Docker container and shut the machine down if [ "$STOP_INSTANCE" = true ]; then # Find the ID of the container running the specified image CONTAINER_ID=$(docker ps -q --filter ancestor=jammsen/palworld-dedicated-server:latest) # Check whether a container was found if [ -n "$CONTAINER_ID" ]; then echo "Stopping the Docker container: $CONTAINER_ID" docker stop $CONTAINER_ID else echo "No container found for the specified image." fi echo "Shutting down the machine..." sudo shutdown -h now fi ``` Here we are using the `rcon` program to get a list of online players and storing it in `OUTPUT`. While the machine is starting up, this command usually returns an ERROR; in that case, we do not shut the machine down. After that, if there are players online, we print a list with their names, and if there is nobody, the server stops the Docker container and then shuts the machine itself down. - After creating the script, give it execute permissions with the command: ```bash $ chmod +x shutdownServer.sh ``` Now let's create a scheduled job to run the script every 30 minutes: - Run the following command as sudo: ```bash $ sudo crontab -e ``` Go to the end of the file and add the line: ```bash */30 * * * * /home/ubuntu/palworld/shutdownServer.sh >> /home/ubuntu/logs/log_$(date +\%Y_\%m_\%d_\%H_\%M).log 2>&1 ``` With this, the server will run the script every 30 minutes and store the logs in the "logs" folder in the home directory. The script needs to run as sudo so it has permission to stop the Docker container and also shut the machine down. Create the folder to store the logs: ```bash $ mkdir /home/ubuntu/logs ``` ## Creating your Lambda function Before creating the bot itself, we need to create the functions it will execute. * In your AWS console, search for the **"Lambda"** service * Create a new function using the button in the top right corner * Give it a name such as "discord-lambda" * Choose **"Node.js 20.x"** as the runtime * Click **"Create function"** in the bottom right corner. 
We need to add a "trigger" that will execute this Lambda when an address is called. To do that: * In the **"Function overview"** section * Click **"+ Add trigger"** * Select **API Gateway** * Select **"Create a new API"** * Under **"API type"** select **"HTTP API"** * Under Security select **"Open"** * Open the **"Additional Settings"** tab * Select the **"Cross-origin resource sharing (CORS)"** option *(this will allow Discord to call your function even from another domain.)* * Click **"Add"** * In the **"Configuration > Triggers"** tab, open the **"API endpoint"** address * If a new tab opens with the text **"Hello from Lambda!"**, everything has worked so far. * Copy this address; we will use it later when creating the bot. * With your function created, download the code from this repository: [https://github.com/andarilhoz/discord-lambda](https://github.com/andarilhoz/discord-lambda) * Inside it, go to the "aws" folder and run `npm install` to install the `tweetnacl` dependency, which will read the Discord request. * Add all the content inside the "aws" folder (including node_modules) to a .zip file * In your browser, in the newly created Lambda, under the "Code" tab click "Upload from" and choose ".zip file" * Upload the zip created earlier * In the "Configuration" tab, select "Environment variables" and fill in the following variables: ```bash AWS_ECS_REGION -> the region of your EC2 instance; if you created it in São Paulo, it will be: "sa-east-1" (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html) INSTANCE_ID -> The id of your EC2 instance created in the previous article CHANNEL_ID -> The id of the channel in your Discord server where the bot will work ROLE_ID -> The id of the role that will have permission to run the commands PUBLIC_KEY -> Will be generated later when we create the bot on Discord ``` This Lambda also requires permission to call **EC2** and **AWS Cost Explorer Service** actions * In the Configuration menu, select the **"Permissions"** sub-menu * Under the **"Role name"** label, click the link named [your-lambda]-role-[id]; this will open the AWS permissions manager * Select the automatically created permissions policy (it will have the suffix **AWSLambdaBasicExecutionRole**) * Click **"Edit"** Add the following permission to the JSON, inside the **"Statement"** property: ```json { "Sid": "AccessCostExplorer", "Effect": "Allow", "Action": [ "ce:GetCostAndUsage", "ec2:DescribeInstances", "ec2:StartInstances", "ec2:StopInstances" ], "Resource": "*" } ``` This will give the Lambda permissions to: * See the cost and usage of your AWS services * Describe an EC2 instance * Start an EC2 instance * Stop an EC2 instance Click **"Next"** and then **"Save changes"**. After saving the permissions, the only thing left to finish the Lambda is adding the `PUBLIC_KEY`, which we will now generate on Discord. ## Creating a bot on Discord Open the Discord [Developer Portal](https://discord.com/developers/applications), * Create a new application and give it a name. * In the **"General Information"** tab, copy the **"PUBLIC KEY"** and add this value to the Lambda variable created in the previous step. * Still in **"General Information"**, fill in the **"INTERACTIONS ENDPOINT URL"** field with the URL of your Lambda created in the previous step. 
* Click **"Save Changes"** * You should see the message: **"All your edits have been carefully recorded."** * Go to the **"OAuth2 > URL Generator"** tab * Under **"SCOPES"** select **"bot"** * Under **"BOT PERMISSIONS"** select **"Use Slash Commands"** * Copy the URL generated at the bottom and use it to add your new bot to the desired server. * In the **"Bot"** tab, click the **"Reset Token"** button * Copy the generated token; we will use it in the next step. * In the **"General Information"** tab, copy the **"APPLICATION ID"** value, which will also be needed. ### Adding the bot commands: * With the [repository cloned earlier](https://github.com/andarilhoz/discord-lambda), go to the "discord" folder * Create a .env file and fill in the environment variables: ```javascript BOT_TOKEN=ODUwMTk2NDDyMDc2MjgyOTAw.GpYcOB.q2XSa_IUw5A3sHId67s6kQzSoiZP_zfZFCyDvE //The token generated earlier in the Discord "Bot" tab APP_ID=850196442076282900 //The App Id generated earlier in the "General Information" tab GUILD_ID=337788789500449762 //The ID of your Discord server. ``` * Run `npm install` in the "discord" folder * Run `node addcommand.js` If everything went well, you will see these messages: ```bash success 200 success 200 success 200 success 200 All commands sent ``` Done! Now you can run your bot's commands in your server to start the EC2 instance, stop it, see its status (and IP), and also see your application's total EC2 cost. ## Adding an Elastic IP One of the problems of starting and stopping an EC2 server is that its IP keeps changing; let's add an Elastic IP that will not change. * In your AWS console, search for EC2 * In the left menu, select **"Elastic IPs"** * Click **"Allocate Elastic IP address"** * Click **"Allocate"** in the bottom right corner * Now select the IP in the address list * Click **"Actions"** in the top right * Click **"Associate Elastic IP address"** * Under the **"Instance"** label, select your EC2 server * Click **"Associate"** in the bottom right corner. Done, this IP is fixed and will be your machine's permanent address, and it will not change even after you shut the machine down. ## Conclusion As mentioned before, the t3a.xlarge machine would cost **$170 USD** per month; with this change, assuming you and your friends play around 4 hours per day, every day, that value drops to **$29 USD**. There are other ways to reduce this cost, since this machine can handle 32 simultaneous players. You can try machines with less memory for 4 players, for example. That way, with the t3a.xlarge machine, we save **$141 USD** a month, almost **R$700**! ### From here on: One of the problems with this setup is the AWS Lambda **"cold start"**; you can turn the Lambda into an EC2 nano server, which, besides keeping the cost low, will always be available!
andarilhoz
1,743,264
Getting Started with Natural Language Toolkit (NLTK)
Introduction NLTK (Natural Language Toolkit), one of the most popular libraries in Python...
0
2024-01-27T22:22:19
https://dev.to/mahmoudrasmyfathy1/getting-started-with-natural-language-toolkit-nltk-3eok
nlp, llm
# Introduction NLTK (Natural Language Toolkit) is one of the most popular libraries in Python for working with human language data (i.e., text). This tutorial will guide you through the installation process, basic concepts, and some key functionalities of NLTK. > [Link for the Notebook](https://github.com/mahmoudrasmyfathy1/NLP-Tutorial/blob/main/getting-started-nltk.ipynb) # 1. Installation First, you need to install NLTK. You can do this easily using pip. In your command line (Terminal, Command Prompt, etc.), enter the following command (the leading ! is only needed when running inside a notebook): ``` !pip install nltk ``` # 2. Understanding the Role of nltk.download() in NLTK Setup Use nltk.download() to fetch datasets and models for text processing with NLTK, ensuring updated resources and easing setup. ``` import nltk nltk.download() ``` # 3. Tokenization Tokenization is the process of splitting a text into meaningful units, such as words or sentences. ``` from nltk.tokenize import word_tokenize, sent_tokenize text = "Hello there! How are you? I hope you're learning a lot from this tutorial." # Sentence Tokenization sentences = sent_tokenize(text) print(sentences) # Word Tokenization words = word_tokenize(text) print(words) ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3n2fcdultieimked6ls3.png) # 4. Part-of-Speech (POS) Tagging POS tagging means labeling words with their part of speech (noun, verb, adjective, etc.). ``` from nltk import pos_tag words = word_tokenize("I am learning NLP with NLTK") pos_tags = pos_tag(words) print(pos_tags) ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skpvotlg539wkwp2jw5h.png) # 5. Stopwords Stopwords are common words that are usually removed from text because they carry little meaningful information. ``` from nltk.corpus import stopwords from nltk.tokenize import word_tokenize, sent_tokenize words = word_tokenize("Hello there! How are you? I hope you're learning a lot from this tutorial.") stop_words = set(stopwords.words('english')) filtered_words = [word for word in words if word not in stop_words] print(filtered_words) ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n5mlwzq8au79misi94qm.png) # 6. Stemming Stemming is a process of stripping suffixes from words to extract the base or root form, known as the 'stem'. For example, the stem of the words 'waiting', 'waited', and 'waits' is 'wait'. ``` from nltk.stem import PorterStemmer from nltk.tokenize import word_tokenize ps = PorterStemmer() sentence = "It's important to be waiting patiently when you're learning to code." words = word_tokenize(sentence) stemmed_words = [ps.stem(word) for word in words] print(stemmed_words) ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/osu9aeem6e52yxtr823g.png) # 7. Lemmatization Lemmatization is the process of reducing a word to its base or dictionary form, known as the 'lemma'. Unlike stemming, lemmatization considers the context and converts the word to its meaningful base form. For instance, 'is', 'are', and 'am' would all be lemmatized to 'be'. ``` import nltk from nltk.stem import WordNetLemmatizer from nltk.tokenize import word_tokenize nltk.download('punkt') nltk.download('wordnet', download_dir='/usr/share/nltk_data/corpora/wordnet') # specify your NLTK data directory if it's not in the default location lemmatizer = WordNetLemmatizer() sentence = "The leaves on the ground were raked by the gardener, who was also planting bulbs for the coming spring." 
words = word_tokenize(sentence) lemmatized_words = [lemmatizer.lemmatize(word) for word in words] print(lemmatized_words) ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bty4dryzocb7me3nq3eu.png) # 8. Frequency Distribution This is used to find the frequency of each vocabulary item in the text. ``` from nltk.probability import FreqDist words = word_tokenize("I need to write a very, very simple sentence") fdist = FreqDist(words) print(fdist.most_common(1)) ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r0peeuozxgjtahmy84km.png) # 9. Named Entity Recognition (NER) NER is used to identify entities like names, locations, dates, etc., in the text. ``` import nltk from nltk.tokenize import word_tokenize from nltk.tag import pos_tag from nltk.chunk import ne_chunk nltk.download('punkt') nltk.download('averaged_perceptron_tagger') nltk.download('maxent_ne_chunker') nltk.download('words') sentence = "I will travel to Spain" # Tokenize the sentence words = word_tokenize(sentence) # Part-of-speech tagging pos_tags = pos_tag(words) # Named entity recognition named_entities = ne_chunk(pos_tags) # Print named entities print(named_entities) ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vrflv3veizzjbiqv6sjj.png)
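To tie the sections above together, the individual steps can be chained into one small pipeline: tokenize, drop stopwords, lemmatize, and count. The snippet below is a recap sketch using the same NLTK tools introduced in this tutorial; the sample sentence is made up.

```python
import nltk
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

text = "The cats were chasing the mice while the dogs were sleeping near the cats."

# Tokenize, keep only alphabetic tokens, remove stopwords, then lemmatize
lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words('english'))
tokens = [w.lower() for w in word_tokenize(text) if w.isalpha()]
cleaned = [lemmatizer.lemmatize(w) for w in tokens if w not in stop_words]

# Count the most frequent lemmas
fdist = FreqDist(cleaned)
print(fdist.most_common(3))  # e.g. [('cat', 2), ('chasing', 1), ('mouse', 1)]
```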
mahmoudrasmyfathy1
1,743,271
Speech-to-Text Discord bot written in Go
As part of my personal journey to learn Go, I've decided to rewrite one of my open source projects in...
0
2024-01-27T22:42:16
https://dev.to/codr/speech-to-text-discord-bot-written-in-go-eih
go, discord, ai, productivity
As part of my personal journey to learn Go, I've decided to rewrite one of my open source projects in GoLang. The project is a standalone (offline) speech-to-text bot for Discord. Basically it transcribes everything you say in a voice channel. This is useful if you want to have custom voice commands (e.g. while gaming), or enhance the communication experience for hearing impaired/deaf people. Repository: https://github.com/inevolin/DiscordEarsGo The project makes use of the Vosk library, which does not work well on Mac OS (M1), so by default it is designed to only work on Linux x86 systems (since you would likely be hosting it on Linux). But the great thing is that Vosk works offline, is open source, and comes with a ton of models and languages (English, German, French, Chinese, ...) for download https://alphacephei.com/vosk/models One of the annoying things was also decoding Opus packets to PCM, since it requires Opus libraries to be installed. The same is true for the NodeJS version (which now requires ffmpeg to be installed). It would be nice if there was a tiny library/snippet that does this (not an external library). Enjoy!
codr
1,743,364
How To Customize WordPress Website?
I started a new blog site created on WordPress. Now website customization is difficult to me like...
0
2024-01-28T03:20:46
https://dev.to/jenni007/how-to-customize-wordpress-website-1ng5
webdev, themes, wordpress, tutorial
I started a new blog site created on WordPress. Now website customization is difficult for me (themes, customizing themes, and more), and I do not know how to do it. Can anyone here give me suggestions on "How can I customize my theme?". My website is [apkworldx](www.apkworldx.com); you can check it and give me suggestions.
jenni007
1,743,454
Crew AI Travel Agents
Crew AI Travel Agents git clone https://github.com/joaomdmoura/crewAI-examples cd crewAI-examples cd...
0
2024-01-28T06:23:01
https://dev.to/jhparmar/crew-ai-travel-agents-8dl
**Crew AI Travel Agents** ```bash git clone https://github.com/joaomdmoura/crewAI-examples cd crewAI-examples cd trip_planner conda create -n crewai python=3.11 -y conda activate crewai pip install poetry platformdirs poetry install --no-root poetry run python main.py ```
jhparmar
1,743,479
RAG with ChromaDB + Llama Index + Ollama + CSV
*RAG with ChromaDB + Llama Index + Ollama + CSV * curl https://ollama.ai/install.sh | sh ollama...
0
2024-01-28T07:23:20
https://dev.to/jhparmar/rag-with-chromadb-llama-index-ollama-csv-23f7
**RAG with ChromaDB + Llama Index + Ollama + CSV** ```bash curl https://ollama.ai/install.sh | sh ollama serve ollama run mixtral pip install llama-index torch transformers chromadb ``` Section 1: ```python # Import modules from llama_index.llms import Ollama from pathlib import Path import chromadb from llama_index import VectorStoreIndex, ServiceContext, download_loader from llama_index.storage.storage_context import StorageContext from llama_index.vector_stores.chroma import ChromaVectorStore # Load CSV data SimpleCSVReader = download_loader("SimpleCSVReader") loader = SimpleCSVReader(encoding="utf-8") documents = loader.load_data(file=Path('./fine_food_reviews_1k.csv')) # Create Chroma DB client and store client = chromadb.PersistentClient(path="./chroma_db_data") chroma_collection = client.create_collection(name="reviews") vector_store = ChromaVectorStore(chroma_collection=chroma_collection) storage_context = StorageContext.from_defaults(vector_store=vector_store) # Initialize Ollama and ServiceContext llm = Ollama(model="mixtral") service_context = ServiceContext.from_defaults(llm=llm, embed_model="local") # Create VectorStoreIndex and query engine index = VectorStoreIndex.from_documents(documents, service_context=service_context, storage_context=storage_context) query_engine = index.as_query_engine() # Perform a query and print the response response = query_engine.query("What are the thoughts on food quality?") print(response) ``` Section 2: ```python # Import modules import chromadb from llama_index import VectorStoreIndex, ServiceContext from llama_index.llms import Ollama from llama_index.vector_stores.chroma import ChromaVectorStore # Create Chroma DB client and access the existing vector store client = chromadb.PersistentClient(path="./chroma_db_data") chroma_collection = client.get_collection(name="reviews") vector_store = ChromaVectorStore(chroma_collection=chroma_collection) # Initialize Ollama and ServiceContext llm = Ollama(model="mixtral") service_context = ServiceContext.from_defaults(llm=llm, embed_model="local") # Create VectorStoreIndex and a query engine that retrieves the top 20 similar chunks index = VectorStoreIndex.from_vector_store(vector_store=vector_store, service_context=service_context) query_engine = index.as_query_engine(similarity_top_k=20) # Perform a query and print the response response = query_engine.query("What are the thoughts on food quality?") print(response) ``` 6bca48b1-fine_food_reviews
jhparmar
1,743,495
Matching Any character (Using the dot (.))| Regular Expressions for the Absolute Beginner
In this video, we look at how we can match every single character within text. we achieve this by...
0
2024-01-28T07:48:17
https://dev.to/jod35/matching-any-character-using-the-dot-regular-expressions-for-the-absolute-beginner-3pae
regex, programming, tutorial, basics
In this video, we look at how we can match any single character within text. We achieve this by using the dot character. In regular expressions, the dot (.) matches any single character except a newline. It acts like a wildcard, letting you build patterns where any one character can appear at a specific position; a short sketch follows after the embed.

{%youtube H81OV1WVXgY%}
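To make the wildcard behaviour concrete, here is a small sketch in TypeScript (the strings and pattern are made up for illustration; the dot behaves the same way in most regex flavours):

```typescript
// The dot matches exactly one character, and that character can be anything except a newline.
const pattern = /c.t/g; // "c", then any single character, then "t"
const text = "cat cot c t cnt c\nt";

// "c\nt" is not matched because "." does not cross line breaks by default.
console.log(text.match(pattern)); // [ 'cat', 'cot', 'c t', 'cnt' ]
```

To match a literal dot rather than "any character", escape it as `\.`.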
jod35
1,743,507
Unlocking Data Analysis Efficiency: A Guide to PivotTables and Data Cleaning Functions
In the dynamic world of data analysis, mastering tools like PivotTables and data cleaning functions...
0
2024-01-28T08:12:17
https://dev.to/tinapyp/unlocking-data-analysis-efficiency-a-guide-to-pivottables-and-data-cleaning-functions-40f8
tutorial, data, analytics
In the dynamic world of data analysis, mastering tools like PivotTables and data cleaning functions is crucial for deriving meaningful insights from your datasets. Whether you're a seasoned analyst or just stepping into the world of data, understanding how to leverage these tools can significantly enhance your efficiency and accuracy in handling large volumes of information. In this article, we'll delve into PivotTables and various data cleaning functions, shedding light on their applications and how to use them effectively.

## PivotTables: Unraveling Data Complexity

### What are PivotTables?

PivotTables are powerful tools in spreadsheet applications, such as Microsoft Excel or Google Sheets, that allow you to summarize and analyze large datasets. They enable users to reorganize and manipulate data easily, providing a dynamic and interactive way to explore information.

#### How to Create a PivotTable:

1. **Select Data:** Highlight the dataset you want to analyze.
2. **Insert PivotTable:** Navigate to the "Insert" tab and select "PivotTable."
3. **Define Rows and Columns:** Drag and drop fields into the Rows and Columns areas.
4. **Add Values:** Place variables in the Values area to calculate summaries like sums, averages, or counts.

### Calculated Field in PivotTables:

A Calculated Field in a PivotTable allows you to create new fields by performing calculations on existing ones. This can be immensely useful when you need to derive additional insights from your data.

#### How to Create a Calculated Field:

1. **Select PivotTable:** Click on any cell within your PivotTable.
2. **PivotTable Tools:** Go to the "PivotTable Analyze" or "Options" tab.
3. **Calculated Field:** Find the "Fields, Items, & Sets" dropdown and select "Calculated Field."
4. **Define Calculation:** Input a name for your field and create the formula using existing fields.

## Data Cleaning Functions: Transforming Raw Data into Insights

### String Manipulation Functions:

#### 1. TRIM:

The TRIM function removes leading and trailing spaces from text. This is especially handy when dealing with data that might have inconsistent spacing.

**Example:**
```excel
=TRIM(A2)
```

#### 2. LOWER and UPPER:

LOWER converts text to lowercase, while UPPER converts it to uppercase. These functions are valuable for standardizing text data.

**Example:**
```excel
=LOWER(B2)
=UPPER(C2)
```

#### 3. CONCAT:

The CONCAT function combines multiple text strings into one. It's useful for creating composite fields or joining information.

**Example:**
```excel
=CONCAT(D2, " ", E2)
```

#### 4. SPLIT / TEXTSPLIT:

Splitting text into multiple columns based on a specified delimiter is handled by SPLIT in Google Sheets and TEXTSPLIT in recent versions of Excel. It's beneficial when dealing with datasets containing combined information.

**Example:**
```excel
=SPLIT(F2, ",")
```

#### 5. JOIN / TEXTJOIN:

Conversely, JOIN (Google Sheets) and TEXTJOIN (Excel) merge text from multiple columns into a single text string with a chosen delimiter. They're helpful for creating summaries or combining information.

**Example:**
```excel
=JOIN(",", G2, H2)
```

### How These Functions Help in Data Analysis:

1. **Data Consistency:** TRIM, LOWER, and UPPER ensure consistency by eliminating extra spaces and standardizing text case.
2. **Combining Information:** CONCAT, SPLIT/TEXTSPLIT, and JOIN/TEXTJOIN facilitate the creation of meaningful variables by combining or separating text fields.
3. **Enhanced PivotTable Analysis:** Cleaned data is crucial for accurate analysis with PivotTables. The functions mentioned help in preparing the data for efficient use in PivotTable calculations.
In conclusion, mastering PivotTables and data cleaning functions equips you with the tools needed to navigate and extract insights from complex datasets. Whether you're cleaning messy data or summarizing information with PivotTables, these skills are essential for any data analyst. By incorporating these techniques into your workflow, you'll streamline your data analysis processes and make informed decisions based on accurate, well-organized information.
tinapyp
1,761,789
6 Best Tubi Tv Alternatives 2024
In the realm of free streaming services, Tubi stands as a popular choice for entertainment...
0
2024-02-15T04:29:09
https://dev.to/siddiquelab/6-best-tubi-tv-alternatives-2024-3kf
tubitv, softcodeon, alternratives, webdev
In the realm of free streaming services, Tubi stands as a popular choice for entertainment enthusiasts. However, if you're looking to expand your horizons and explore other options beyond Tubi, there are several noteworthy alternatives available. Let's delve into six of the best alternatives that offer compelling features and diverse content libraries.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/phdwp0wjwqd01wok1az5.png)

1. **Pluto TV**
2. **Crackle**
3. **IMDb TV**
4. **Peacock**
5. **Vudu**
6. **Hulu**

Each of these alternatives provides unique offerings, ranging from a wide selection of channels to on-demand movies and TV shows. By exploring these alternatives, you can discover new content and tailor your streaming experience to suit your preferences.

For more detailed information about each alternative and insights into their features, see the full article: [6 Best Tubi Tv Alternatives 2024](https://softcodeon.com/alternatives/6-best-tubi-tv-alternatives-2024.htm)

You can also explore plugins, website building, design languages, and much more on our website: [Soft CodeOn | Where Technology Meets Creativity](https://softcodeon.com/)
siddiquelab
1,743,594
Memory leak detection in modern frontend app
One of the challenges when building a single-page application (SPA) like dev.to or an app where users...
0
2024-01-28T11:32:58
https://dev.to/shcheglov/graphql-non-standard-way-of-selecting-a-client-library-5bid
javascript, react, performance
One of the challenges when building a single-page application (SPA) like dev.to, or any app where a typical user session lasts more than **10 minutes**, is testing for memory leaks at scale. Manually finding and analyzing memory leaks is time-consuming and inefficient, especially when a large team of developers is continuously shipping changes. This article demonstrates how to detect memory leaks automatically in large applications, and how to avoid memory issues over the long term.

## Catching users experiencing problems

> Hi, dear developers. I've encountered a problem with freezing. Could you help me?

How does it happen that some users complain about everything freezing and slowing down, while everything works fine on our expensive and beautiful Macs? The issue is that it can be challenging to replicate the problem without access to the specific devices that your users log in from, and due to privacy settings, we may not be able to recreate the exact environment of the user experiencing the issue.

Fortunately, browsers provide technical data about devices and how much memory your site consumes or allocates for certain actions. There is an API for this information: the ["Performance: memory"](https://developer.mozilla.org/en-US/docs/Web/API/Performance/memory) property.

**Solution:** write a small script that checks memory usage from time to time. If it detects that usage has exceeded a certain threshold, that can indicate the user is running into RAM pressure or a memory leak. In such a case, it is best to send all relevant technical metrics to assist in diagnosing and fixing the problem. Here is a small example:

```javascript
// Memory usage threshold (e.g. 80% of jsHeapSizeLimit)
const threshold = performance.memory?.jsHeapSizeLimit * 0.8;

// Function for checking and sending memory data
function checkMemoryUsage() {
  if (!performance.memory) {
    console.warn('Performance.memory API is not available.');
    return;
  }

  const { usedJSHeapSize, jsHeapSizeLimit } = performance.memory;
  console.log(`Current memory usage: ${usedJSHeapSize} of ${jsHeapSizeLimit}`);

  if (usedJSHeapSize > threshold) {
    sendEvent(`Memory usage exceeded: ${usedJSHeapSize} exceeds threshold of ${threshold}`);
    sendSnapshot({ device, userAgent, os, ... });
  }
}

setInterval(checkMemoryUsage, 10000);
```

> NOTE: This API is not supported in all browsers and has recently been marked as deprecated. It looks like it is being replaced by [measureUserAgentSpecificMemory](https://developer.mozilla.org/en-US/docs/Web/API/Performance/measureUserAgentSpecificMemory).

Here we send the **userAgent and device as data** (the problem may lie with the device itself), and I would also recommend sending metrics specific to your app, for example: the size of the store, the number of DOM nodes, etc.

## Catching leaks at the dev stage with MemLab

We have figured out how to catch problems from real users, but relying on that alone seems rather risky for a large-scale enterprise project. It would be nice if our development team could surface possible problems at the build or testing stages. Fortunately, Facebook has a dedicated team that deals with performance and quality issues, and they developed a tool for detecting problems at the development stage and made it available as open source: [MemLab](https://github.com/facebook/memlab). Now, I'll talk more about it.
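As a quick taste before digging into the internals, here is a minimal sketch of the kind of scenario file MemLab runs against your app. The URL and selectors are hypothetical; the `url`/`action`/`back` shape follows the format described in the MemLab documentation, and `page` is the Puppeteer page MemLab drives.

```typescript
// scenario.ts: a minimal MemLab scenario sketch (URL and selectors are placeholders).
import type { Page } from 'puppeteer';

// Page to load before measuring.
export function url(): string {
  return 'https://example.com/support-chat';
}

// The interaction suspected of leaking; MemLab snapshots the heap around it.
export async function action(page: Page): Promise<void> {
  await page.click('#open-conversation');
}

// Undo the interaction so that objects allocated by `action` should become collectable again.
export async function back(page: Page): Promise<void> {
  await page.click('#close-conversation');
}
```

Compiled to JavaScript (or written as plain JS exporting `url`, `action`, and `back`), a file like this is passed to `memlab run --scenario <file>`, which drives the browser, diffs heap snapshots around the interaction, and produces the kind of report described in the next section.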
### How it works

In short, MemLab finds memory leaks by running a headless browser through predefined test scenarios and by comparing and analyzing snapshots of the JavaScript heap. The process takes place in five stages:

1. Browser interaction
2. Diffing the heap
3. Refining the list of memory leaks
4. Clustering retainer traces
5. Reporting the leaks

> You can read about all these steps in detail in the [official documentation.](https://facebook.github.io/memlab/)

Simply put, MemLab takes a snapshot of all the metrics on every run and compares them to the baseline, as shown in the image below:

![MemLab snapshot comparison](https://engineering.fb.com/wp-content/uploads/2022/08/Figure-1-TEST.gif?w=1024)

After that, it generates a detailed report on all potential new problems, with details about what changed and how:

![MemLab report](https://engineering.fb.com/wp-content/uploads/2022/08/MemLab-image-2.png)

The main disadvantage is that the tool requires very careful configuration and a description of all relevant scenarios and screens. On the other hand, if you believe Facebook's presentation, they were able to halve memory consumption and largely get rid of performance problems (**crashes**), and the tool even surfaced a leak in react.js itself.

## Conclusion

Even today, we should not forget about the **cost of JavaScript** and identifying performance issues; they can significantly affect a company's profits or costs. For example, I once worked on speeding up the customer support chat for a popular European bank. The average agent session lasted **5-6 hours** without restarts, but due to lag in the interface, the average response time to users was around **140 seconds**. After introducing the two tools above and fixing the problems they identified, both support agents and users were much happier, as response time dropped by almost half, to **80 seconds**. The agents themselves told me it was nice when everything worked smoothly and quickly (I think this was also because their salary depended on it, haha).

**Good luck and speed to your projects, and no MEMORY LEAKS anymore!**
shcheglov
1,743,595
Tracking redirects on Cloudflare with Google Analytics
I recently came across this interesting problem of wanting to track URL redirects with Google...
0
2024-01-28T11:59:32
https://dev.to/allentv/tracking-redirects-on-cloudflare-with-google-analytics-3bfi
webdev, cloudcomputing, analytics, edgecomputing
I recently came across this interesting problem of wanting to track URL redirects with Google Analytics on Cloudflare, and it got me spending some time investigating how things work and a potential solution. Before I explain my solution, let me provide some context on the problem.

I am working on a side project that requires directly invoking a third-party native mobile application when the user visits a certain URL on my domain. As the final touch point for the user is a native app, I don't have much control over tracking the user action of opening the app. A typical solution would be to introduce an interstitial page where you load all of your tracking and then redirect the user. Since the end result is a mobile app, though, avoiding an extra page in between and keeping the number of steps and intermediaries to a minimum keeps the latency as low as possible.

As my domain is currently set up with Cloudflare, I started exploring options for computing at the edge, as that provides the lowest latency physically possible. Cloudflare has an offering called Workers, which is essentially compute at the edge, and it can hook into handling incoming requests for one or more routes through wildcards. So if I can capture the requests, extract the necessary metadata, and send a request to Google Analytics, then I should be able to have analytics on my URLs with low latency.

The first step was to create a Cloudflare Worker. That was pretty straightforward based on the documentation, which gives you good scaffolding for either plain JavaScript or TypeScript; I went with the latter to have type safety and to detect potential issues early on. Using the wrangler CLI, I was able to build and deploy the project directly from my machine with very little hassle. I added environment variables in the dashboard and updated the deploy command to skip deleting these vars. The dashboard also gives you an option to encrypt secrets such as passwords and API secrets.

The next step was to integrate with the Google Analytics API. After setting up a new property on Google Analytics 4, I obtained an API secret for the Measurement Protocol. Per the docs, this API supports server-to-server invocation and is normally used for enriching existing events sent directly from the UI. In my case, there is no web UI, so I will be using it solely for sending event data.

Creating the event payload was tougher than I thought. It wasn't easy to find the right combination of attributes to get the events to show up on the realtime report view in Google Analytics. After going over multiple posts on StackOverflow and some trial and error, I was able to get the events to show up with the attribute data that I sent.

User location information is important for understanding where your users are from and how your marketing campaigns are performing, so that you can decide how to structure your content marketing strategy. Sending location information (city and country) directly to Google Analytics did not make it show up on the default user demographics dashboard, which was frustrating, but after some investigation into the documentation, I found that the Measurement Protocol does not currently accept geo location information for users. The workaround for viewing user location data is straightforward: attach the location information to each event and then create a custom dashboard where this information shows up with aggregates over timeframes such as 6 hours, 1 day, 3 days, etc. A sketch of the overall approach is shown below.
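The following is a minimal sketch of the idea, not the code from the repository linked below. The binding names, event name, and redirect target are placeholders, and the types assume `@cloudflare/workers-types`:

```typescript
export interface Env {
  GA4_MEASUREMENT_ID: string; // placeholder binding names
  GA4_API_SECRET: string;
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);

    // Build a GA4 Measurement Protocol event from request metadata.
    const payload = {
      client_id: crypto.randomUUID(), // no web UI, so generate an id per hit
      events: [
        {
          name: "redirect_hit", // hypothetical event name
          params: {
            page_path: url.pathname,
            // Cloudflare exposes coarse geo data on request.cf; it is sent here as plain
            // event params, since the Measurement Protocol ignores user geo fields.
            country: request.cf?.country ?? "unknown",
            city: request.cf?.city ?? "unknown",
            engagement_time_msec: 100,
          },
        },
      ],
    };

    const endpoint =
      "https://www.google-analytics.com/mp/collect" +
      `?measurement_id=${env.GA4_MEASUREMENT_ID}&api_secret=${env.GA4_API_SECRET}`;

    // Don't block the redirect on the analytics call.
    ctx.waitUntil(fetch(endpoint, { method: "POST", body: JSON.stringify(payload) }));

    // Send the user straight on to the target (placeholder URL).
    return Response.redirect("https://example.com/app-link", 302);
  },
};
```

Keeping the analytics call inside `ctx.waitUntil` is what keeps the redirect fast: the Worker responds immediately and the event is delivered in the background.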
If you have come across this problem before and would like to implement the same solution as described above, you can use this [Github repository ](https://github.com/allentv/cloudflare-worker-ga4) to deploy the changes. Just update the environment variables and deploy using wrangler. Good luck!
allentv